GraphSeam: Supervised Graph Learning Framework for Semantic UV Mapping

FATEMEH TEIMURY, McGill University and Autodesk, Canada
BRUNO ROY, Autodesk, Canada
JUAN SEBASTIÁN CASALLAS, Autodesk, USA
DAVID MACDONALD, Autodesk, Canada
MARK COATES, McGill University, Canada
Fig. 1. The proposed approach on a sample mesh. In the top row, we show the predicted outputs of our graph network in UV space (a). Additional post-processing steps (bottom row) improve our results by removing small shells (b) and by adding missing seams (c), reducing shell count and distortion.
Recently there has been a significant effort to automate UV mapping, the process of mapping 3D surfaces to the UV space while minimizing distortion and seam length. Although state-of-the-art methods, Autocuts and OptCuts, addressed this task via energy-minimization approaches, they fail to produce semantic seam styles, an essential factor for professional artists. The recent emergence of Graph Neural Networks (GNNs), and the fact that a mesh can be represented as a particular form of a graph, has opened a new bridge to novel graph learning-based solutions in the computer graphics domain. In this work, we use the power of supervised GNNs for the first time to propose a fully automated UV mapping framework that enables users to replicate their desired seam styles while reducing distortion and seam length. To this end,
we provide augmentation and decimation tools to enable artists to create their dataset and train the network to produce their desired seam style. We provide a complementary post-processing approach for reducing the distortion based on graph algorithms to refine low-confidence seam predictions and reduce seam length (or the number of shells in our supervised case) using a skeletonization method.

CCS Concepts: • Computing methodologies → Shape modeling; Machine learning approaches.

Additional Key Words and Phrases: graph neural networks, UV mapping, semantic, post-processing

Authors' addresses: Fatemeh Teimury, McGill University and Autodesk, Canada, [email protected]; Bruno Roy, Autodesk, Canada, [email protected]; Juan Sebastián Casallas, Autodesk, USA, [email protected]; David MacDonald, Autodesk, Canada, [email protected]; Mark Coates, McGill University, Canada, [email protected].

© 2020 Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Preprint, https://doi.org/xxxx.
ACM Reference Format:
Fatemeh Teimury, Bruno Roy, Juan Sebastián Casallas, David MacDonald, and Mark Coates. 2020. GraphSeam: Supervised Graph Learning Framework for Semantic UV Mapping. Preprint 0, 0, Article 0 (2020), 14 pages. https://doi.org/xxxx
1 INTRODUCTION
UV mapping is a fundamental task in computer graphics that involves the projection of 3D surfaces to 2D representations. This process
requires unwrapping the 3D mesh from the surface for texture and colour assignment. Although different evaluation metrics exist for UV mapping, prior methods [Li et al. 2018; Poranne et al. 2017; Sheffer and Hart 2002] solely focus on optimizing two critical metrics: distortion and seam length. The main difference between earlier and more recent methods for UV mapping is that the earlier methods, such as [Sheffer and Hart 2002], use separate steps for optimizing seam length and distortion, since these parameters have different natures (distortion is continuous, and seam length is discrete). In contrast, recent methods [Li et al. 2018; Poranne et al. 2017] employ iterative frameworks to optimize seam length and distortion jointly.

Although these methods have demonstrated promising performance that minimizes distortion and seam length, they usually produce a single shell. Producing UV maps with semantic boundaries is a critical goal, and to our knowledge, this goal has not been addressed so far. Moreover, artists usually ask for a framework that can mimic similar seam styles on objects within the same category. For example, the seam style for a collection of humanoid models should be consistent. Ideally, the framework should also spend the same amount of time on each model.

Polygonal meshes have many intrinsic similarities with graphs, and this motivated us to build our proposed method using state-of-the-art graph learning approaches. Graph-based neural network architectures (e.g., graph convolution networks and graph attention networks) have been widely used recently in the machine learning community [Kipf and Welling 2016; Veličković et al. 2018] and have proved promising for the analysis of data on graphs and point clouds. Our proposed method represents the first application of such learning techniques for seam detection in the UV mapping context.
Moreover, the supervised nature of these algorithms enables the proposed method to reproduce similar styles on test objects, a feature that artists are always looking for. In addition, the translation of the distortion minimization task to the graph learning context allows us to propose a post-processing algorithm based on the Steiner tree problem [Sheffer and Hart 2002] and skeletonization [Abu-Ain et al. 2013; Yang et al. 2019; Youssef et al. 2015]. The post-processing step helps to produce seams that align with semantic boundaries and reduces the number of shells.

Although we demonstrate our framework's efficiency using a specific dataset (Autodesk® Character Generator), we appreciate that there is great variation in 3D models and seam styles across dataset types. On the other hand, the artist's task is usually to define a particular seam style over a large set of objects, which can take months to complete even in the best case. So, we provide augmentation and decimation tools to enable artists to train the proposed network with any type of object or seam style they require. We suggest that artists apply their desired seam style on a few models manually and then use our augmentation tools to create their dataset. The main motivation for producing seam cuts manually is that defining optimal cuts and seam style depends on multiple factors such as curvature, distortion, and the UV space.
Providing manually labeled examples means that artists will generate a superior training dataset for graph networks.

In summary, our main contributions are:
(1) we leverage the supervised nature of graph learning methods to mimic semantic seam style in UV mapping;
(2) we incorporate dual graphs and state-of-the-art graph-based learning methods to address seam detection as an edge-classification task;
(3) we suggest key informative edge features for our framework which provide a considerable improvement in all 3D mesh edge-based networks;
(4) we restate the distortion minimization problem in UV mapping in terms of graph connectivity; and
(5) we propose a post-processing procedure based on the Steiner tree problem [Sheffer and Hart 2002] and skeletonization to improve the seam detection network results.
2 RELATED WORK
Surface parametrization is the projection of a surface to 2D space and is central to a broad spectrum of problems in the computer graphics and animation communities. Surface parametrization can be addressed via two separate pipelines: (i) specifying seams and (ii) minimizing distortion [Desbrun et al. 2002; Julius et al. 2005; Khodakovsky et al. 2003; Sander et al. 2003; Sheffer and Hart 2002]. Initial UV mapping methods either considered only one of the problems or considered both but addressed them via two sequential pipelines. Recent methods have started to focus on minimizing distortion and detecting seams simultaneously [Li et al. 2018; Poranne et al. 2017]. In this section, we briefly review initial methods that address one of the tasks of specifying seams [Julius et al. 2005; Khodakovsky et al. 2003; Sander et al. 2003; Sheffer and Hart 2002] or minimizing distortion [Desbrun et al. 2002; Khodakovsky et al. 2003], and then focus on the state-of-the-art methods that simultaneously address both [Li et al. 2018; Poranne et al. 2017].

Specifying cuts on the surface is the initial step toward surface parametrization. The papers [Julius et al. 2005; Khodakovsky et al. 2003; Sander et al. 2003; Sheffer and Hart 2002] propose different architectures that specify seams as the initial step and then minimize distortion by adding more cuts. None of these frameworks is capable of preserving semantic boundaries automatically; they rely on guidance from a user. The aim of parametrization approaches that strive to minimize distortion is to preserve angles and areas in order to produce isometric mappings [Desbrun et al. 2002; Khodakovsky et al. 2003; Liu et al. 2008; Sander et al. 2001].
The main limitation of all of these pipelines is that the nonlinearity of the optimization function is not satisfactorily addressed.

The difference between the nature of distortion and cuts (i.e., distortion is continuous and cuts are discrete) has hampered the design of an architecture that optimizes both parameters at the same time. So, in all the above-mentioned methods, the parametrization is mainly dependent on specifying cuts, and if the cuts do not produce the desired distortion value, the architecture needs to suggest a new set of cuts. Recently, [Poranne et al. 2017] proposed Autocuts, a fully automated architecture that minimizes distortion and optimizes cuts using an energy-based solver capable of iteratively and jointly converging on soft constraints such as distortion metrics. Later, [Li et al. 2018] proposed the OptCuts algorithm, which starts from an arbitrary initial embedding and satisfies the distortion bounds requested by the user.
The main deficiency of these methods is that the final cuts do not preserve semantic boundaries. Although [Li et al. 2018; Poranne et al. 2017] simultaneously optimize distortion and cuts, they generate results with very few shells (e.g., a single shell for our Character Generator dataset) while losing semantic boundaries. Another drawback of the state-of-the-art methods is mesh resolution scalability. Based on our experiments, meshes with more than 2000k polygons require extensive resources. State-of-the-art methods construct a matrix of high-resolution meshes that requires a huge amount of memory and leads to expensive computations throughout the operation of their iterative solvers. Eventually, it is desirable to incorporate these algorithms in interactive tools used by artists and designers; due to the very high computational and memory requirements of existing techniques, the proposed methods struggle to be adequately responsive. In addition to these two limitations, neither Autocuts nor OptCuts guarantees convergence, and whether meaningful results are obtained depends heavily upon the initialization. In other words, without very carefully chosen manual inputs from the user, even if the algorithms do reach convergence, the end results might be unusable in practice.
3 PROBLEM STATEMENT
Our main goal is to address UV mapping for 3D meshes. Different evaluation metrics exist for UV mapping, such as distortion (the ratio of 2D area to 3D area, or of 2D perimeter to 3D perimeter, for each face), the number of UV shells, semantic boundaries, and 2D layout efficiency (the percentage of the UV space that is occupied). We address our problem via two different pipelines. In the first pipeline, we focus on finding a good initial seam placement, and in the second, we refine the seams to minimize the distortion.
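For concreteness, the area-ratio distortion metric mentioned above can be computed per face as in the following sketch. This is an illustrative implementation, not the paper's code; the helper names are ours.

```python
import numpy as np

def tri_area_3d(p0, p1, p2):
    # Area of a 3D triangle via the cross product.
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def tri_area_2d(q0, q1, q2):
    # Unsigned area of a 2D (UV) triangle via the shoelace formula.
    return 0.5 * abs((q1[0] - q0[0]) * (q2[1] - q0[1])
                     - (q2[0] - q0[0]) * (q1[1] - q0[1]))

def face_distortions(verts3d, verts_uv, faces):
    """Per-face area-ratio distortion: UV area / 3D area, after scaling
    UV space so the total UV area matches the total 3D area.
    A value of 1.0 means no (area) distortion for that face."""
    a3 = np.array([tri_area_3d(*verts3d[list(f)]) for f in faces])
    a2 = np.array([tri_area_2d(*verts_uv[list(f)]) for f in faces])
    scale = a3.sum() / a2.sum()   # normalize total areas
    return a2 * scale / a3
```

For an isometric mapping (e.g., a planar patch mapped to its own coordinates), every face distortion is exactly 1, regardless of a global UV scale.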
We have access to a set of 3D meshes M_i = (V_i, E_i, F_i), i ∈ {1, . . . , K}, with |V_i| = N_i nodes. E_i ⊆ V_i × V_i denotes the set of edges, and F_i ⊆ V_i × V_i × V_i denotes the set of triangle faces for the i-th mesh. Let X_i ∈ R^{N_i × f} be the node feature matrix, where f is the number of features of every node. A_i ∈ R^{N_i × N_i} is the adjacency matrix corresponding to the original mesh, and L_i ∈ {0, 1}^{1×|E_i|} is the edge label vector for all edges in mesh i. A value of 0 indicates that the edge is not a seam and 1 indicates that it is. In our setting, we have a labelled training set and a test set where both the meshes and labels of the test set are unavailable during training. The task is to predict all edge labels for meshes in the test set.

Each mesh is defined as M = (V, E, F), where V is a set of nodes (in our context these are vertices), E is a set of edges, and F is a set of faces in the mesh. Also, we have access to a vector L ∈ {0, 1}^{1×|E|} which corresponds to proposed edge labels (i.e., seam or non-seam), and a vector of face distortions D_f ∈ R^{1×|F|} in which each element is a distortion value for a face of the mesh M in UV space. The task is to produce new edge labels L′ ∈ R^{1×|E|} that represent a modification, including both the addition of missing seams and the removal of spurious seams, to the initial edge labels L, with the goal of reducing distortion in the shells.

4 BACKGROUND
In the past years, there has been intensive research into the development of neural networks that can be applied to graph-structured data [Defferrard et al. 2016; Estrach et al. 2019; Henaff et al. 2015; Levie et al. 2018]. At a high level, the core difference between conventional neural networks and GNNs is the need for more flexibility in the convolution (or aggregation) operations that take place at each layer.
Since graphs are irregular and there is no ordering or position associated with each node, the aggregation operator should be invariant to the ordering of the nodes in the neighbourhood. Also, since nodes have different degrees, the sizes of the neighbourhoods vary. We now review the GNNs that we have experimented with in our proposed framework.
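All of the architectures reviewed below share this neighbourhood-aggregation pattern. As a minimal illustration (ours, not the paper's implementation), a dense mean-aggregation layer can be written as:

```python
import numpy as np

def mean_aggregate_layer(X, adj, W, activation=np.tanh):
    """One generic message-passing layer: each node averages its own and
    its neighbours' features (invariant to neighbour ordering), then
    applies a learned linear map W and a nonlinearity.
    adj is a symmetric {0,1} adjacency matrix, X the node feature matrix."""
    A_hat = adj + np.eye(adj.shape[0])       # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # per-node neighbourhood size
    H = (A_hat @ X) / deg                    # mean over the neighbourhood
    return activation(H @ W)
```

Permutation invariance comes from the sum in `A_hat @ X`; varying node degrees are handled by the division by `deg`. The GNN variants below differ mainly in how this aggregation is weighted or parameterized.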
Graph Convolutional Network (GCN).
The GCN [Kipf and Welling 2016] is one of the simpler GNN approaches. Let Â_G = D^{−1/2}(A + I)D^{−1/2} be a normalized adjacency matrix, where I is the identity matrix, A is the original adjacency matrix of the graph G, and D is the degree matrix. Let σ be a non-linear activation function, and denote by W^(k) the neural network weights at layer k. Denoting the output of layer k as H^(k+1) and letting X be the input feature matrix, the graph convolution operation for a GCN can be written as follows:

H^(1) = σ(Â_G X W^(0)),    H^(k+1) = σ(Â_G H^(k) W^(k)).    (1)

Graph Attention Networks (GAT).
While the GCN [Kipf and Welling 2016] uses a simple mean aggregation function, a GAT [Veličković et al. 2018] adopts an attention mechanism to learn how much weight a node should place on another node in its neighbourhood when performing the aggregation. The graph convolution operation in a GAT can be expressed for node v as:

h_v^(k) = σ( Σ_{u ∈ N(v) ∪ v} α_uv^(k) W^(k) h_u^(k−1) ),    (2)

where N(v) is the (possibly multi-hop) neighborhood of node v, h_v^(0) = x_v is the input feature vector, and α_uv^(k) is the attention weight that node v associates with node u. The attention weight can be calculated via the equation:

α_uv^(k) = softmax( g( a^T [ W^(k) h_v^(k−1) || W^(k) h_u^(k−1) ] ) ),    (3)

where [·||·] denotes the concatenation of two vectors, g is the LeakyReLU activation function, and a is a vector of learnable parameters. The softmax function is used for attention weight normalization. There are other possible choices for constructing the attention function. Multi-head attention can be used to improve prediction performance.

GraphSAGE.
Many deeper GNNs suffer from over-smoothing (aggregation occurs over too large a neighborhood). The size of a multi-hop neighborhood can expand exponentially, and this can make training for very large graphs extremely slow. GraphSAGE [Hamilton et al. 2017] addresses these two issues by using residual (skip) connections to prevent over-smoothing and employing neighbor sampling
to place limits on the size of the neighborhood used for computations. The method allows for a variety of different aggregation mechanisms, including concatenation, max-pooling, and LSTMs. We can summarize the GraphSAGE structure by:

h_v^(k) = σ( W_k · f^(k)( h_v^(k−1), { h_u^(k−1), ∀u ∈ S_N(v) } ) ).    (4)

Here h_v^(0) = x_v is the input feature vector for node v, f^(k)(·) is the aggregation function, and S_N(v) is the set of sampled neighbors of node v.

Graph Isomorphism Network (GIN).
In [Xu et al. 2019], Xu et al. highlight the inability of common GNNs such as the GCN and GAT to learn node embeddings that distinguish between different graph structures. Xu et al. demonstrate that a maximally powerful graph neural network can be constructed by using a multi-layer perceptron for the aggregation combined with an irrational scalar. This GNN, called a Graph Isomorphism Network (GIN), is maximally powerful in the sense that if any GNN can distinguish between two different graphs, then GIN is also capable of distinguishing between them. Note that this power is not equivalent to performance on learning tasks such as node or edge classification. Denote by ε^(l) the irrational scalar associated with layer l and let MLP(·) represent a multi-layer perceptron. The aggregation step in GIN can be written as:

h_v^(k) = MLP( (1 + ε^(k)) h_v^(k−1) + Σ_{u ∈ N(v)} h_u^(k−1) ).    (5)

In this section, we first introduce dual graphs, and then we provide a brief explanation of methods that generalize graph neural networks via dual graphs.
Dual Graphs.
Let G = (V, E) be the original undirected graph. The dual graph (also known as the line graph) of G, denoted by G̃ = (Ṽ = E, Ẽ), is constructed such that every vertex ṽ of the dual graph corresponds to an edge (i, j) ∈ E in the original graph G. Consider any two dual vertices ṽ = (i, j) and ṽ′ = (i′, j′) ∈ Ṽ in the dual graph G̃. If these are connected by an edge in the dual graph, then the corresponding edges (i, j) and (i′, j′) must share an endpoint in the original graph G. The definition can be extended to directed graphs, but we focus on the undirected case because there is no direction associated with 3D mesh edges.

The GNN architectures can be directly applied to the dual graph or can be extended to learn jointly over both the original and dual graphs [Jepsen et al. 2019; Monti et al. 2018; Zhang et al. 2019; Zhuang and Ma 2018]. Application on the dual graph is beneficial when the goal is classification or regression of edges. Joint learning can improve performance because the architecture can more readily learn structural relationships between both edges and nodes. Since one of our tasks is classifying edges as seams, it is natural to perform learning on the dual graph.

A Steiner tree for a set of vertices (terminals) in a graph is a connected sub-graph containing all the terminal vertices [Skiena 1998]. A minimal Steiner tree is a Steiner tree with minimal sum of edge weights. The problem of finding the minimal Steiner tree is NP-complete [Skiena 1998]. The following is a standard algorithm to approximate the minimal Steiner tree; it has been proven to be within a constant factor of the optimum [Skiena 1998].
(1) For each pair of terminals n, m ∈ T, where T is the set of terminals, compute the shortest path P(n, m) between vertices n and m.
(2) Define a new graph where the terminals T are the vertices and there is an edge between each pair of vertices.
The new weight for each edge is set to the weight of the shortest path between the two terminals.
(3) Compute the minimal spanning tree of the new graph.
(4) Expand each edge of the spanning tree back into the corresponding shortest path in the original graph.

5 PROPOSED METHOD
Our proposed method consists of two separate blocks: (i) seam detection and (ii) distortion minimization and seam-line thinning. A more detailed pipeline of our approach is shown in Figure 2.
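As a concrete illustration of the Steiner-tree approximation of Section 4.3, here is a minimal standard-library sketch. The function names and adjacency-list format are ours, for illustration only; the graph is assumed connected, and edge weights are supplied by the caller.

```python
import heapq
from itertools import combinations

def dijkstra(adj, src):
    # adj: {node: [(neighbor, weight), ...]}; returns distance and
    # predecessor maps for shortest paths from src.
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def approx_steiner_tree(adj, terminals):
    """(1) shortest paths between all terminal pairs; (2) complete graph on
    terminals weighted by those path lengths; (3) minimum spanning tree of
    that graph (Kruskal); (4) expand each MST edge into its shortest path."""
    paths, weights = {}, {}
    for t in terminals:
        dist, prev = dijkstra(adj, t)
        for s in terminals:
            if s == t:
                continue
            node, path = s, [s]
            while node != t:             # walk predecessors back to t
                node = prev[node]
                path.append(node)
            paths[(t, s)], weights[(t, s)] = path, dist[s]
    parent = {t: t for t in terminals}   # union-find for Kruskal
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree_edges = set()
    for a, b in sorted(combinations(terminals, 2), key=lambda e: weights[e]):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            p = paths[(a, b)]            # expand into original graph edges
            tree_edges.update(zip(p, p[1:]))
    return tree_edges
```

In our setting the terminals would be the endpoints of predicted seam edges within one connected component, and the edge weights would come from the distortion-based weighting described in Section 5.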
Polygonal meshes have many intrinsic similarities with graphs, and indeed, can be fully specified by an annotated graph. With such a representation, the seam detection task can be expressed as edge classification.

State-of-the-art graph learning methods such as GAT and GCN focus primarily on addressing node or graph classification. On the other hand, UV mapping aligns more naturally with edge classification. Instead of using the feature matrix and adjacency of the original graph, in which every element corresponds to a node, we construct an edge feature matrix and consider the dual graph.

Our suggested framework is flexible and can be applied to any edge feature matrix, but we propose a specific edge feature matrix in Section 5.2. We choose edge features that we have found experimentally to be particularly useful when trying to discriminate between seams and non-seams.
Fig. 2. Pipeline overview of the proposed approach.

When performing edge classification, we first create a node feature matrix using normalized vertex coordinates (x, y, z), vertex normals (v_nx, v_ny, v_nz) obtained from the object content, and the discrete vertex Gaussian curvature v_K, a vital distortion-related feature that has been used in [Sheffer and Hart 2002]. To prepare the edge feature matrix X̃, we concatenate the node features of the two endpoints of each edge. For example, x̃_{ṽ=(i,j)} = [x_i || x_j], where x_i is the node feature corresponding to node i.

A concern with the strategy outlined above is that during the concatenation procedure, we must choose the ordering of the two nodes that form an edge. Since there is no ordering in the graph and edges are undirected, this can lead to ambiguity. To resolve this, we follow the procedure of [Monti et al. 2018] and construct an augmented dual graph. Each original edge is mapped to two nodes in the augmented dual graph. For an edge (i, j) in the original graph, the feature vectors associated with these two dual nodes are [x_i || x_j] and [x_j || x_i], respectively. Both nodes are connected to all dual nodes whose edges share a common endpoint (i or j) in the original graph. During experiments, we observe that this process, although desirable in terms of ensuring a unique representation, has minimal effect on performance. We therefore report the results achieved by assigning a random ordering to the nodes and using the standard dual. Because the constructed graph is smaller and sparser, the computational demands are considerably lower, which is important for large meshes.

After constructing the dual graph, we apply a graph neural network, choosing the aggregation strategy that best matches the characteristics of the seam detection task. In particular, we find that it is critical to avoid over-smoothing and the loss of local information. For this reason, we incorporate residual connections after every layer in all of the GNNs we employ [He et al. 2016; Huang and Carley 2019; Li et al. 2019a,b; Zhou et al. 2020].
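The standard-dual construction with concatenated endpoint features can be sketched as follows (an illustrative implementation with hypothetical helper names; endpoint order within each edge is fixed arbitrarily, matching the random-ordering variant we report):

```python
import numpy as np
from collections import defaultdict

def build_dual_with_features(edges, node_feats):
    """Build the standard dual (line) graph of a mesh graph: one dual node
    per undirected edge, with dual nodes adjacent iff their edges share an
    endpoint. Each dual node's feature is the concatenation of its two
    endpoints' node features."""
    edges = [tuple(e) for e in edges]
    dual_feats = np.array([np.concatenate([node_feats[i], node_feats[j]])
                           for i, j in edges])
    incident = defaultdict(list)          # original node -> dual node ids
    for k, (i, j) in enumerate(edges):
        incident[i].append(k)
        incident[j].append(k)
    dual_edges = set()
    for dual_ids in incident.values():    # dual nodes sharing this endpoint
        for a in dual_ids:
            for b in dual_ids:
                if a < b:
                    dual_edges.add((a, b))
    return dual_feats, sorted(dual_edges)
```

The augmented dual described above would instead create two dual nodes per edge, one for each endpoint ordering; in our experiments the standard dual performs comparably while being smaller and sparser.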
After applying the seam detection procedure, we have a vector of edge probabilities L_p ∈ R^{1×|E|}, where |E| is the number of edges. These probabilities are the softmax outputs of the GNN and indicate how likely an edge is to belong to a seam. We can apply a threshold to these probabilities to derive binary classifications. The threshold determines the number of connected components that will be derived, and should be adjusted according to the size of the mesh and the desired number of shells.

We do not provide explicit encouragement to the GNN to produce contiguous seams, although there is implicit encouragement through the graph aggregation process. As a result, seams can be incomplete, missing a handful of critical edges. We address this problem by considering the graph connectivity. First, we separate the original graph into connected components by performing cuts along the edges labelled as seams. In each connected component, there is a path between any two nodes.

After this step, we reason that within each of these connected components, all remaining seam edges should be connected; a seam is supposed to define a boundary, and there should not be isolated, incomplete seams. We are thus motivated to construct a tree that connects all of the nodes (in the original graph) that belong to one or more seam edges. We formulate this task as a Steiner tree problem, assigning edge weights in order to reduce the distortion. We then use the approximate algorithm presented in Section 4.3 to derive the tree. The edges identified in this tree are considered to be the new seams.

In order to reduce the distortion, we define the edge weights based on the face distortion in the UV space.
We cut the mesh using the edge labels produced by the seam detection network, then construct the UV map and calculate the distortion value for every face in the mesh. The normalized distortion value for a face is the ratio of the face area in UV space to the face area in 3D space, assuming that UV space has been scaled so that the total area of all faces matches that of 3D space. As a result, a face distortion value of 1 corresponds to no distortion, values higher than 1 correspond to areas of magnification in UV space, and values less than 1 correspond to minification. We define the face distortion vector as D_F ∈ R^{1×|F|}, where |F| is the number of faces in the mesh. Every edge e in a manifold triangulated mesh is incident on two faces. We define the edge weight as follows:

l_e = −|D_{f1,e} − D_{f2,e}|,    (6)

where D_{f1,e} and D_{f2,e} correspond to the normalized face distortion values of the first and second face neighbors of the edge e (the ordering is irrelevant since we are taking the absolute value of the difference). If there is a significant difference between the distortion values for the two faces, then the edge is a good candidate for a seam. The assigned weight is low, so the approximate Steiner tree algorithm is encouraged to include such edges when constructing the seam tree.

Skeletonization.
The seam prediction network often produces thick regions of several candidate edges connected to each other, whereas we require seams to be topologically 1-dimensional curves. To address this, we incorporate an additional step that refines and thins the seam detection predictions. We use a relatively straightforward adaptation of the idea of skeletonization of graphs and images [Abu-Ain et al. 2013; Yang et al. 2019; Youssef et al. 2015]. This first requires the estimated edge probabilities to be converted to a probability value per vertex, which is simply the maximum probability of any edge incident on the vertex. The user supplies a threshold to define the set of candidate vertices to be thinned. A value between 10% and 30% is typically a good selection; the particular choice affects the topology of the seams somewhat, with higher
values producing more loops in the result. The skeletonization then proceeds by repeatedly removing the lowest-probability vertex of this set whose removal does not change the connectivity of the set of vertices, which corresponds to the essence of image-based skeletonization algorithms. In order to avoid reducing long thin regions to a single vertex at the centre, a vertex is not allowed to be removed if doing so would make the distance from one of the deleted vertices to the remaining vertices higher than a user-defined threshold. This threshold is typically set to a distance of about two to four edges. In this way, the essential loops and branches of the candidate seams are preserved, but guaranteed to be exactly one edge thick. The skeletonization step also explicitly eliminates tiny shells that contain only one or two triangles, using another user-defined threshold.
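A simplified sketch of this thinning loop follows. It is illustrative only (names are ours); the tiny-shell elimination step is omitted, and graph-hop distance stands in for the distance measure, under the same two rules: removal must preserve connectivity of the kept set, and no deleted vertex may end up farther than a threshold from the kept set.

```python
from collections import deque

def skeletonize(candidates, vertex_prob, neighbors, dist_thresh=2):
    """Repeatedly delete the lowest-probability candidate vertex whose
    removal (a) keeps the remaining candidates connected and (b) leaves
    every candidate within dist_thresh hops of a kept vertex, so thick
    regions thin to curves instead of collapsing to a point."""
    region = set(candidates)          # all candidate vertices
    keep = set(candidates)            # currently kept vertices

    def connected(vs):
        seen, stack = set(), [next(iter(vs))]
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            stack.extend(w for w in neighbors[u] if w in vs and w not in seen)
        return seen == vs

    def covers_region(kept):
        # BFS from the kept set, capped at dist_thresh hops, through the
        # candidate region; every candidate must be reached.
        dist = {v: 0 for v in kept}
        q = deque(kept)
        while q:
            u = q.popleft()
            if dist[u] == dist_thresh:
                continue
            for w in neighbors[u]:
                if w in region and w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return all(v in dist for v in region)

    changed = True
    while changed:
        changed = False
        for v in sorted(keep, key=lambda x: vertex_prob[x]):
            trial = keep - {v}
            if trial and connected(trial) and covers_region(trial):
                keep = trial
                changed = True
                break
    return keep
```

On a short chain of candidates this thins the region down to its highest-probability vertices while the distance guard keeps the result within reach of every deleted vertex.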
Fig. 3. We present the effect of the proposed post-processing steps on the UV shells of three (3) test 3D models, respectively from top to bottom: Ken, Sibilla, and Xena. The UV shells are color-coded to describe the resulting distortion at each step, where blue shows compression and red shows stretching.
6 EXPERIMENTS
We conducted experiments on the Autodesk® Character Generator (CG) dataset [Autodesk 2014]. The CG dataset consists of procedurally generated 3D humanoid models, and includes seam labels generated using a specific seam style. Technical artists within Autodesk® have provided ground truth seams for the 3D models. Also available are geometric features such as vertex normals and vertex coordinates. The dataset is provided in commonly used file formats (e.g., OBJ and PLY). The CG dataset is split into training, validation, and test objects. We decimated all the objects to a resolution of 10000 faces using the Autodesk® decimation tool, and in the remainder of the paper, we refer to this decimated dataset as CG10000. We provide a visualization of the resulting UV shells of the test objects in Figure 3. In addition, we employ an augmentation tool to produce many augmented meshes. This tool creates new meshes by adding Gaussian random noise to the coordinates of random vertices of the original mesh. In this setting, users can choose the number of augmented objects and the mean and variance of the Gaussian random noise. Since we are conducting supervised inductive learning for UV mapping for the first time, there is no benchmark dataset currently available. Autodesk® Character Generator is available for users, allowing for experimentation with the dataset.

Fig. 4. Comparing our GAT-based method with DST refinement (c) with Autocuts (a), OptCuts (b), and ground truth (d).
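The augmentation scheme described above can be sketched as follows. This is an illustrative stand-in for the actual tool; the parameters `n_copies`, `frac`, and `seed` are ours, while the tool itself exposes the number of augmented objects and the noise mean and variance. Mesh connectivity and seam labels are reused unchanged for each copy.

```python
import numpy as np

def augment_mesh(vertices, n_copies=5, frac=0.3, mean=0.0, std=0.01, seed=0):
    """Create n_copies perturbed vertex arrays by adding Gaussian noise
    (given mean/std) to the coordinates of a random subset of vertices."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_copies):
        v = np.array(vertices, dtype=float)          # fresh copy
        n_perturb = max(1, int(frac * len(v)))
        idx = rng.choice(len(v), size=n_perturb, replace=False)
        v[idx] += rng.normal(mean, std, size=(n_perturb, 3))
        out.append(v)
    return out
```

Because only vertex positions change, each augmented mesh keeps the original ground-truth seam labels, multiplying the effective training set size.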
We compare the performance of our approach with the state-of-the-art energy-based UV mapping methods, Autocuts [Poranne et al. 2017] and OptCuts [Li et al. 2018]. We compare to these algorithms in two ways. First, we apply them directly to the 3D models (i.e., without any seam initialization). This allows us to evaluate how well our proposed method compares to the state-of-the-art as an automated tool for UV mapping. We compare quantitatively using the distortion metric and the number of generated shells. We also conduct a qualitative assessment of whether the methods are able to identify semantic boundaries. Second, we evaluate how well Autocuts and OptCuts perform as post-processing methods to improve the quality of seams. We apply our proposed seam detection method to provide an initial seam specification. This allows us to compare to our proposed Distortion Steiner Tree algorithm.
We experimented with a selection of graph neural networks, employing modified versions of GCN [Kipf and Welling 2016], GAT [Veličković et al. 2018], GraphSAGE (GS) [Hamilton et al. 2017], and GIN [Xu et al. 2019]. We used the inductive learning implementation of GAT provided by the Deep Graph Library (DGL) [Wang et al. 2019], and we implemented inductive versions of the rest of the baselines using DGL, similar to the inductive GAT implementation, to provide a fair comparison. Moreover, we add skip connections to every layer of each architecture. The task of seam detection is binary classification, but the classes are imbalanced (fewer than 10% of the edges are seam edges). We therefore employ a weighted cross-entropy loss, with a larger weight for seam edges than for non-seam edges.

Table 1. Performance of seam detection and UV mapping. Seam detection is evaluated using false positive rate (FPR), true positive rate (TPR), and accuracy (Acc.). UV maps are evaluated based on average distortion and number of shells, both before post-processing (BPP) and after (APP).

Method | FPR (%) | TPR (%) | Acc. (%) | Avg. Dist. (BPP) | Avg. Dist. (APP) | Shells (BPP) | Shells (APP)
Ground Truth | - | - | - | 0.294 | - | 10 | -
OptCuts | - | - | - | 0.107 | - | 1 | -
Autocuts | - | - | - | 0.282 | - | 1 | -
Prop-GCN | 3.62 | 87.36 | 96.01 | 1.493 | 0.310 | 280 | 385
Prop-GAT | 0.34 | … | … | … | … | … | …

Hyperparameters.
We used grid search for hyper-parameter tun-ing, using the validation set of 3 objects. The search grids are speci-fied in the supplementary material. For each GNN architecture weuse 3 layers with 64 hidden units per layer. In GAT [Veliˇckovi´c et al.2018] structure, the number of hidden attention units is 3, the numberof output attention units is 5, and the attention drop is 0.2. For Graph-SAGE [Hamilton et al. 2017], we use an LSTM aggregator. Resultsfor other GraphSAGE aggregators are reported in the supplementarymaterial. GIN [Xu et al. 2019] has two extra MLP layers with 64hidden units. For all the models, the early stop threshold is 50 andthe learning rate is . . Learning is conducted using the Adamoptimizer. GNN-based Seam Detection.
Table 1 compares the performance of the proposed method, using different GNNs, to Autocuts and OptCuts. These baselines do not perform seam detection in a supervised manner as our proposed method does; instead, they perform an optimized mapping after seams have been specified by a user. In this table, we compare to the single-shell solution derived by these algorithms.

The seam detection results suggest that the simple averaging aggregation of the GCN is inadequate for the task. The attention mechanism of GAT, the LSTM aggregator of GraphSAGE, and the MLP of GIN provide the learning power to derive much better performance. The methods employing GraphSAGE and GAT perform best. In terms of UV mapping, OptCuts produces the lowest-distortion map: it provides lower distortion than any of the proposed GNN-based algorithms before application of the post-processing Distortion Steiner Tree algorithm. After applying this post-processing, the GAT and GraphSAGE maps provide distortion values similar to the OptCuts map.

More importantly, unless seams are provided by a user, the energy-based baselines produce solutions with a single shell. The specification of seams is one of the most burdensome components of UV mapping for an artist, so both Autocuts and OptCuts are missing critical functionality. By contrast, the proposed GNN-based methods identify multiple shells and do a much better job of identifying semantic boundaries (consistent with those recognized by the artists providing the ground truth). Figure 4 provides an example of the quality of the maps produced by the approaches.

Prior to the post-processing step, the proposed algorithms fail to separate some of the larger shells identified in the ground truth. The Distortion Steiner Tree algorithm accomplishes the separation, both reducing the distortion and producing smaller shells that more closely match those identified by the artists in the ground truth. The disadvantage is that the method leads to many more small shells.
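The class-weighted objective used to train these seam detectors can be sketched in a few lines of pure Python. The weight values below are illustrative assumptions (the paper's exact weights are not reproduced in this excerpt), and in practice a framework such as PyTorch would supply an equivalent weighted loss:

```python
import math

# Weighted binary cross-entropy for imbalanced edge classification:
# seam edges (fewer than 10% of all edges) receive a larger weight
# so the minority class is not ignored. Weight values are illustrative.

def weighted_bce(probs, labels, w_seam=10.0, w_non_seam=1.0):
    """probs: predicted seam probabilities per edge;
    labels: 1 for seam, 0 for non-seam."""
    eps = 1e-12  # numerical guard against log(0)
    total = 0.0
    for p, y in zip(probs, labels):
        if y == 1:
            total += -w_seam * math.log(p + eps)
        else:
            total += -w_non_seam * math.log(1.0 - p + eps)
    return total / len(labels)

loss = weighted_bce([0.9, 0.2, 0.1], [1, 0, 0])
```

Raising `w_seam` penalizes missed seams more heavily, trading false negatives for false positives on the minority class.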
Distortion Steiner Tree Algorithm.
Table 2 examines post-processing performance, comparing our proposed Distortion Steiner Tree algorithm to Autocuts and OptCuts (when these are used to refine the initial seams produced by our GAT-based seam detection procedure). The proposed algorithm leads to UV maps with considerably lower distortion.

Fig. 5 depicts the evolution of the UV maps when using Autocuts for post-processing. Autocuts performs iterative unwrapping operations (unfolding and optimizing the UV layout). We provide initial boundaries using the most probable seams from the graph-learning seam detection (i.e., using a threshold of α_p ≥ …). By iteration 25, Autocuts has separated the upper body from the thighs, and after iterations 50 and 100, the energy-based solver has managed to separate the arms from the upper body. After iteration 100 there are few modifications, although eventually the head is separated from the neck.

Fig. 4(c) shows the UV map derived by using the GAT-based seam detection followed by the Distortion Steiner Tree algorithm. Comparing this to the map derived by post-processing with Autocuts in Fig. 5, we see that the proposed GAT-DST method manages to produce a single shell for the torso, following the ground truth provided by the artists. In contrast, Autocuts divides the torso into several shells. We observe similar behaviour for multiple examples (please see the supplementary section), indicating that our proposed DST (using face distortion vector information) can better reproduce and preserve artist-specified semantics.

Table 2. Comparison of our post-processing method with baselines Autocuts and OptCuts. We report average distortion and number of shells, both before post-processing (BPP) and after (APP). Autocuts and OptCuts are initialized with the best GNN seam detection result, based on GAT. Our proposed method is initialized with GNN seam detection using GAT, GCN, and GraphSAGE (GS) and applies the Distortion Steiner Tree algorithm (DST).
The last row shows the effect of Skeletonization (SK).
Method   | Step(s)  | Avg. dist. BPP | APP   | Shells BPP | APP
Autocuts | it. 25   | 0.524          | 0.443 | 30.6       | 33
Autocuts | it. 50   | 0.524          | 0.422 | 30.6       | 35.3
Autocuts | it. 100  | 0.524          | 0.382 | 30.6       | 35
Autocuts | it. 200  | 0.524          | 0.404 | 30.6       | 36.3
OptCuts  | it. 25   | 0.524          | 0.453 | 30.6       | 31.3
OptCuts  | it. 50   | 0.524          | 0.435 | 30.6       | 34.3
OptCuts  | it. 100  | 0.524          | 0.377 | 30.6       | 35
OptCuts  | it. 200  | 0.524          | 0.322 | 30.6       | 37
Prop-GAT | DST      | 0.524          | 0.135 | 30.6       | 68
Prop-GS  | DST      | 0.424          | …     | …          | …
Prop-GCN | SK + DST | …              | …     | …          | …
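The paper's exact Distortion Steiner Tree formulation is not spelled out in this excerpt. As a rough illustration of the idea, the sketch below runs a generic greedy shortest-path approximation of a Steiner tree on an edge-weighted graph: detected seam endpoints act as terminals, and the weighting scheme (assumed here to be lower where distortion is higher, so cuts prefer distorted regions) as well as the graph and node names are illustrative assumptions:

```python
import heapq

# Greedy shortest-path Steiner-tree approximation: connect seam
# "terminals" one at a time along the cheapest path to the partial
# tree. Edge weights are assumed to be inversely related to local
# distortion, so high-distortion regions are cheap to cut through.

def dijkstra(graph, src):
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

def approx_steiner(graph, terminals):
    tree_nodes = {terminals[0]}
    tree_edges = set()
    for t in terminals[1:]:
        dist, prev = dijkstra(graph, t)
        # attach t to the closest node already in the tree
        join = min(tree_nodes, key=lambda n: dist.get(n, float("inf")))
        node = join
        while node != t:
            tree_edges.add(frozenset((node, prev[node])))
            tree_nodes.add(node)
            node = prev[node]
        tree_nodes.add(t)
    return tree_edges

# Toy graph: the a-b-d route is cheap (high distortion, low weight),
# so connecting terminals a and d cuts along it rather than via c.
g = {
    "a": [("b", 1.0), ("c", 5.0)],
    "b": [("a", 1.0), ("d", 1.0)],
    "c": [("a", 5.0), ("d", 5.0)],
    "d": [("b", 1.0), ("c", 5.0)],
}
tree = approx_steiner(g, ["a", "d"])
```

This greedy construction is a classical Steiner approximation, not the paper's algorithm; it only conveys why a tree over distortion-weighted edges tends to place new seams through distorted regions.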
Fig. 5. Evolution of UV maps using Autocuts for post-processing, with initialization by the GAT-based seam detection method.
The final row of Table 2 shows the importance of skeletonization for the GCN-based method, which is more prone to producing thick seams than the other GNNs. The average number of shells is reduced from 280.3 to 19.3. The effect is depicted in the second column of Fig. 3.
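The mesh skeletonization procedure itself is not detailed in this excerpt; for intuition, the binary-image analogue cited in the references [Abu-Ain et al. 2013] can be illustrated with the classical Zhang-Suen thinning algorithm, which reduces thick binary regions (here, an analogue of a thick seam band) to one-pixel-wide curves. The grid and shape below are illustrative:

```python
# Iterative thinning (Zhang-Suen) on a binary grid: repeatedly
# delete boundary pixels that do not break connectivity, in two
# alternating subiterations, until nothing changes.

def thin(img):
    rows, cols = len(img), len(img[0])
    img = [row[:] for row in img]  # work on a copy

    def neighbours(r, c):
        # P2..P9, clockwise from the pixel directly above
        return [img[r-1][c], img[r-1][c+1], img[r][c+1], img[r+1][c+1],
                img[r+1][c], img[r+1][c-1], img[r][c-1], img[r-1][c-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_clear = []
            for r in range(1, rows - 1):
                for c in range(1, cols - 1):
                    if img[r][c] != 1:
                        continue
                    p = neighbours(r, c)
                    b = sum(p)  # number of set neighbours
                    # a: 0->1 transitions around the ring
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1
                            for i in range(8))
                    p2, p4, p6, p8 = p[0], p[2], p[4], p[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_clear.append((r, c))
            for r, c in to_clear:  # delete after the full pass
                img[r][c] = 0
                changed = True
    return img

# A 3-pixel-thick bar thins down to a 1-pixel-wide curve.
bar = [[0] * 7 for _ in range(5)]
for r in (1, 2, 3):
    for c in range(1, 6):
        bar[r][c] = 1
skel = thin(bar)
```

Running `thin` again on its own output changes nothing, mirroring how the seam-thinning step converges once seams are one edge wide.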
CONCLUSION

We have proposed a novel methodology for the task of UV mapping in computer graphics by leveraging graph learning approaches. The proposed technique uses the dual graph and state-of-the-art graph learning frameworks to address seam detection as an edge classification task. In contrast to existing baselines [Li et al. 2018; Poranne et al. 2017], the proposed algorithm produces a solution with multiple shells. Visualization of the results suggests that the algorithm manages to mimic the seam style of the training data.

In order to further reduce the distortion, and to catch any seams that the GNN-based method failed to detect, we proposed a graph algorithm based on the Steiner Tree [Sheffer and Hart 2002] to minimize the distortion. Our results demonstrate that applying this algorithm during post-processing considerably reduces the distortion of the UV maps. Furthermore, it performs better than applying the energy-minimization methods, Autocuts and OptCuts, during post-processing.

We developed a skeletonization procedure that can reduce the number of small shells identified by our GNN-based seam detection procedure. We believe that a promising future direction involves incorporating the three methods (seam detection, post-processing distortion reduction, and skeletonization) into a single learning framework. This would likely involve the introduction of a differentiable distortion proxy in the objective, together with regularizers that strongly encourage thin, contiguous seams.
ACKNOWLEDGMENTS
This work was supported and funded by Autodesk, Mitacs, and McGill University. We would like to thank Autodesk for providing resources and insightful comments during our research. We would also like to thank Hervé Lange, Group Architect for Entertainment Creation Products at Autodesk, who provided insightful directions to the project, and our internal artists, Sabrina Parent and Pierre Picard, for evaluating our results based on their artistic expertise.
REFERENCES
Waleed Abu-Ain, S. N. H. S. Abdullah, Bilal Bataineh, Tarik Abu-Ain, Khairuddin Omar, et al. 2013. Skeletonization algorithm for binary images. Procedia Technology 11 (2013), 704–709.
Autodesk. 2014. Character Generator. https://charactergenerator.autodesk.com/.
Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. 2016. Convolutional neural networks on graphs with fast localized spectral filtering. In Adv. Neural Inf. Proc. Systems. 3844–3852.
Mathieu Desbrun, Mark Meyer, and Pierre Alliez. 2002. Intrinsic parameterizations of surface meshes. Computer Graphics Forum (Proc. Eurographics) 21, 3 (2002), 210–218.
Joan Bruna Estrach, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. 2019. Spectral networks and deep locally connected networks on graphs. Proc. Int. Conf. Learning Representations (2019), 1–14.
Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Adv. Neural Inf. Proc. Systems. 1024–1034.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proc. IEEE Conf. Computer Vision and Pattern Recognition. 770–778.
Mikael Henaff, Joan Bruna, and Yann LeCun. 2015. Deep convolutional networks on graph-structured data. arXiv:1506.05163 (2015).
Binxuan Huang and Kathleen M. Carley. 2019. Residual or gate? Towards deeper graph neural networks for inductive graph representation learning. arXiv:1904.08035 (2019).
Tobias Skovgaard Jepsen, Christian S. Jensen, and Thomas Dyhre Nielsen. 2019. Graph convolutional networks for road networks. In Proc. ACM SIGSPATIAL Int. Conf. Adv. Geographic Inf. Systems. 460–463.
Dan Julius, Vladislav Kraevoy, and Alla Sheffer. 2005. D-charts: Quasi-developable mesh segmentation. Computer Graphics Forum (Proc. Eurographics) 24, 3 (2005), 581–590.
Andrei Khodakovsky, Nathan Litke, and Peter Schröder. 2003. Globally smooth parameterizations with low distortion. ACM Transactions on Graphics (TOG) 22, 3 (2003), 350–357.
Thomas N. Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. Proc. Int. Conf. Learning Representations (2016).
Ron Levie, Federico Monti, Xavier Bresson, and Michael M. Bronstein. 2018. CayleyNets: Graph convolutional neural networks with complex rational spectral filters. IEEE Trans. Signal Process. 67, 1 (2018), 97–109.
Guohao Li, Matthias Müller, Guocheng Qian, Itzel C. Delgadillo, Abdulellah Abualshour, Ali Thabet, and Bernard Ghanem. 2019b. DeepGCNs: Making GCNs go as deep as CNNs. arXiv:1910.06849 (2019).
Minchen Li, Danny M. Kaufman, Vladimir G. Kim, Justin Solomon, and Alla Sheffer. 2018. OptCuts: Joint optimization of surface cuts and parameterization. ACM Transactions on Graphics (TOG) 37, 6 (2018), 1–13.
Zekun Li, Zeyu Cui, Shu Wu, Xiaoyu Zhang, and Liang Wang. 2019a. Fi-GNN: Modeling feature interactions via graph neural networks for CTR prediction. In Proc. ACM Int. Conf. Inf. Knowledge Management. 539–548.
Dao-Jun Liu, Xin-Zhou Li, Jiangang Hao, and Xing-Hua Jin. 2008. Revisiting the parametrization of equation of state of dark energy via SNIa data. Mon. Not. Royal Astron. Soc. arXiv:1806.00770 (2018).
Roi Poranne, Marco Tarini, Sandro Huber, Daniele Panozzo, and Olga Sorkine-Hornung. 2017. Autocuts: Simultaneous distortion and cut optimization for UV mapping. ACM Transactions on Graphics (TOG) 36, 6 (2017), 1–11.
Pedro V. Sander, John Snyder, Steven J. Gortler, and Hugues Hoppe. 2001. Texture mapping progressive meshes. In Proc. SIGGRAPH. 409–416.
Pedro V. Sander, Zoe J. Wood, Steven J. Gortler, John M. Snyder, and Hugues H. Hoppe. 2003. Multi-chart geometry images. Proc. Symp. on Geometry Process. (2003), 146–155.
Alla Sheffer and John C. Hart. 2002. Seamster: Inconspicuous low-distortion texture seam layout. In Proc. IEEE Vis.
Steven S. Skiena. The Algorithm Design Manual. Springer Science & Business Media.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. Proc. Int. Conf. Learning Representations (2018).
Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, et al. 2019. Deep Graph Library: Towards efficient and scalable deep learning on graphs. Proc. Int. Conf. Learning Representations Workshop Represent. Learn. Graphs Manifolds (2019), 1–7.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2019. How powerful are graph neural networks? Proc. Int. Conf. Learning Representations (2019), 1–17.
Liping Yang, Diane Oyen, and Brendt Wohlberg. 2019. A novel algorithm for skeleton extraction from images using topological graph analysis. In Proc. IEEE Conf. Computer Vision and Pattern Recognition Workshops.
Rabaa Youssef, Anis Kacem, Sylvie Sevestre-Ghalila, and Christine Chappard. 2015. Graph structuring of skeleton object for its high-level exploitation. In Int. Conf. Image Analysis and Recognition. 419–426.
Li Zhang, Xiangtai Li, Anurag Arnab, Kuiyuan Yang, Yunhai Tong, and Philip H. S. Torr. 2019. Dual graph convolutional network for semantic segmentation. British Machine Vision Conference (2019).
Kuangqi Zhou, Yanfei Dong, Wee Sun Lee, Bryan Hooi, Huan Xu, and Jiashi Feng. 2020. Effective training strategies for deep graph neural networks. arXiv:2006.07107 (2020).
Chenyi Zhuang and Qiang Ma. 2018. Dual graph convolutional networks for graph-based semi-supervised classification. In Proc. World Wide Web Conf.
A GROUND TRUTH
We include some additional results and figures that provide further illustration of the behaviour of our proposed approach, GraphSeam. In this section we provide ground-truth visualizations (Fig. 6) for the test set of the Character Generator 10000 dataset (CG10000).

Fig. 6. Visualization of objects and their corresponding UV maps in the test set.
B AUGMENTATION TOOL
To provide better insight into our proposed augmentation tool, we illustrate five different augmentations of Ken (one of our test models) in Fig. 7. All the augmentations are produced by adding Gaussian noise that varies vertex positions by up to 20%.

Fig. 7. Visualization of the 3D model Ken and its augmentations.
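The augmentation step described above can be sketched as per-vertex Gaussian jitter with a displacement cap. The 20% cap matches the text; using the bounding-box diagonal as the reference scale, and the noise standard deviation, are assumptions of this sketch:

```python
import random

# Mesh augmentation: jitter vertex positions with Gaussian noise,
# clamping each displacement to at most 20% of the bounding-box
# diagonal (the reference scale is an assumption of this sketch).

def augment(vertices, rel_cap=0.2, sigma_frac=0.05, seed=0):
    rng = random.Random(seed)
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    diag = sum((a - b) ** 2 for a, b in zip(maxs, mins)) ** 0.5
    cap = rel_cap * diag
    out = []
    for v in vertices:
        d = [rng.gauss(0.0, sigma_frac * diag) for _ in range(3)]
        norm = sum(x * x for x in d) ** 0.5
        if norm > cap:  # clamp so no vertex moves more than the cap
            d = [x * cap / norm for x in d]
        out.append(tuple(p + x for p, x in zip(v, d)))
    return out

# Jitter the corners of a unit cube.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
noisy = augment(cube)
```

Because the mesh connectivity is untouched, the artist's seam labels transfer directly to each augmented copy.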
C DECIMATION TOOL
To provide better insight into our proposed decimation tool, we illustrate five different resolutions of Ken (one of our test models) in Fig. 8. This tool takes a predefined face resolution and, by removing or adding vertices and edges to the initial object, produces a new object with that resolution.

Fig. 8. Visualization of different resolutions of the 3D model Ken using the decimation tool. Reported resolutions are based on the number of triangulated faces (f).
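The paper's decimation tool targets an exact face count; a much simpler illustration of mesh decimation is vertex clustering (snap vertices to a grid, merge duplicates, drop degenerate faces), which reduces resolution but does not guarantee an exact target. The cell size and toy mesh below are illustrative:

```python
# Vertex-clustering decimation sketch: quantize vertices to a grid,
# merge vertices that fall in the same cell, and drop faces that
# become degenerate. Unlike the paper's tool, this does not hit an
# exact face count.

def decimate(vertices, faces, cell=0.5):
    remap, new_vertices, keys = {}, [], {}
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in keys:
            keys[key] = len(new_vertices)
            # representative position: the cell centre
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap[i] = keys[key]
    new_faces = []
    for a, b, c in faces:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:  # keep non-degenerate faces
            new_faces.append((a, b, c))
    return new_vertices, new_faces

# Two triangles; the first collapses once its three close vertices
# merge into a single grid cell.
vs = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0), (1, 0, 0), (1, 1, 0)]
fs = [(0, 1, 2), (1, 3, 4)]
nv, nf = decimate(vs, fs, cell=0.5)
```

Production decimators (e.g. quadric-error edge collapse) instead remove one edge at a time, which is what allows an exact target face count.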
D POST-PROCESSING WITH AUTOCUTS AND OPTCUTS WITHOUT INITIALIZATION
This section provides the UV mapping visualizations (Fig. 9) produced by Autocuts [Poranne et al. 2017] and OptCuts [Li et al. 2018] on our test set objects. As illustrated, without manual assistance from a skilled artist, Autocuts and OptCuts only produce a single shell and cannot preserve semantic boundaries.

Fig. 9. Visualization of the UV maps of the test set of CG10000 using the Autocuts (top row) and OptCuts (bottom row) approaches without initialization.
E AUTOCUTS AND OPTCUTS POST-PROCESSING INITIALIZED WITH GAT OUTPUT SEAM LABELS
When used during post-processing, Autocuts [Poranne et al. 2017] and OptCuts [Li et al. 2018] can produce UV maps with fewer shells than those derived using our proposed minimum-distortion Steiner Tree (DST) algorithm. However, our proposed approach derives results that are closer to the artist-specified ground-truth semantics. For example, our approach derives a single shell for the entire upper body, while Autocuts and OptCuts break this into multiple shells. Figure 15 shows the evolution of the maps produced by post-processing with Autocuts and OptCuts, respectively, when the GAT-based [Veličković et al. 2018] seam detection result is used for initialization.
F VISUALIZATION OF GRAPHSEAM OUTPUTS FOR THE CG10000 TEST SET
In this section, we provide results for all the objects in the test set for our proposed method, GraphSeam, using GNNs including GCN, GAT, GraphSAGE and GIN for the seam detection block. The visualizations are derived after applying our suggested post-processing approach, the Distortion Steiner Tree (DST). Figures 10 to 14 display visualizations of the derived UV maps.

Based on the results, it is clear that the attention mechanism in GAT and the more powerful aggregation method in GraphSAGE (we use the LSTM aggregator) play a critical role in producing good results for seam classification. GCN, which uses only a simple averaging aggregator, produces much poorer initial UV maps. GIN is known to perform excellently for graph classification and captures the graph structure as well as most other GNNs. Its performance for node or edge classification tasks can be poorer, because it can be important to place more emphasis on local features. This probably explains its considerably weaker performance for the seam detection task, which we formulate as edge classification.

All of the results clearly show how applying DST to the outputs of the seam detection block dramatically improves the final UV maps and leads to better preservation of semantic boundaries.

Fig. 10. Visualization of UV maps for GCN [Kipf and Welling 2016] (top row) and GCN-DST (bottom row) on CG10000 test objects.

Figure 11 provides UV map visualizations that show the result of applying the proposed skeletonization (SK) procedure on the GCN output. Comparing to Fig. 10, we see that by thinning the edges there is a dramatic reduction in the number of small shells. The single large shell produced by GCN is decomposed into more meaningful shells, providing a better initialization for the DST algorithm.

Fig. 11. Visualization of UV maps for GCN-SK (top row) and GCN-SK-DST (bottom row) on CG10000 test objects.
Fig. 12. Visualization of UV maps for GAT [Veličković et al. 2018] (top row) and GAT-DST (bottom row) on CG10000 test objects.

Fig. 13. Visualization of UV maps for GraphSAGE [Hamilton et al. 2017] (top row) and GraphSAGE-DST (bottom row) on CG10000 test objects.
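The aggregation argument above (simple averaging in GCN versus the more expressive aggregators in GraphSAGE and GIN) can be illustrated with a toy example, following the analysis of Xu et al. [2019]: two neighborhoods that differ only in multiplicity are indistinguishable under mean aggregation but separated by sum aggregation. Features here are toy scalars:

```python
# Why averaging can be inadequate: a mean aggregator collapses
# neighbor multisets that share the same mean, while a sum (as in
# GIN-style aggregation) keeps them apart.

def mean_agg(neighbors):
    return sum(neighbors) / len(neighbors)

def sum_agg(neighbors):
    return sum(neighbors)

n1 = [1.0, 3.0]            # two neighbors
n2 = [1.0, 1.0, 3.0, 3.0]  # same values, doubled multiplicity

print(mean_agg(n1) == mean_agg(n2))  # → True (mean collapses them)
print(sum_agg(n1) == sum_agg(n2))    # → False (sum separates them)
```

LSTM and attention aggregators go further by weighting or ordering neighbors, which is one plausible reason they perform best on seam classification here.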
G ROBUSTNESS OF OUR METHOD

G.1 Using random splits
In the main paper we provide our experimental results for a singlevalidation/test split. To explore robustness to different data splits,we conducted experiments using five (5) random validation and testsplits. The results are reported in Table 5. We observe that althoughthere is some variability in performance, our proposed seam detectionmethod achieves high accuracy and low FPR for all splits when usingGAT and GraphSAGE.
G.2 Using augmentation on datasets
We have developed two tools for artists to create datasets that can be used to train the GraphSeam method. We employed the decimation tool to produce meshes with a resolution of 10000 faces for the CG10000 dataset.

In this section, we explore the use of the augmentation tool to produce a dataset that is twice the size of the original CG dataset. The augmentation tool is based on adding random Gaussian noise to the original vertex positions. For this set of experiments, the noise varies vertex positions by up to 20%, leading to a much more varied dataset and making the seam detection and mapping tasks harder.

Fig. 14. Visualization of UV maps for GIN [Xu et al. 2019] (top row) and GIN-DST (bottom row) on CG10000 test objects.
Table 3. Performance of seam detection using false positive rate (FPR), true positive rate (TPR) and accuracy (Acc.) on CG10000 with augmentation.

Method         | FPR  | TPR   | Acc.
Prop-GCN       | 3.30 | 94.83 | 96.62
Prop-GAT       | 0.17 | 99.08 | 99.80
Prop-GS (pool) | 0.12 | 97.01 | 99.76
Prop-GS (mean) | 1.13 | 97.57 | 98.81
Prop-GS (GCN)  | 4.64 | 95.80 | 95.37
Prop-GS (LSTM) | 0.03 | 98.20 | 99.89
Prop-GIN       | 1.00 | 82.06 | 98.36

To provide a fair comparison, we use the same splits for the training, validation and test sets as reported in the paper, and we apply the augmentation method to each set separately to ensure there is no overlap between splits.

Table 3 reports the results of the GraphSeam seam detection algorithm for the augmented dataset. Despite the increased variability in the dataset, the results are very similar to those achieved for the original CG10000 dataset. This carries through to the derived UV maps after post-processing, so we do not show those results here.
Fig. 15. Evolving state of a UV mapping generated using GAT [Veličković et al. 2018] edge probabilities as an initialization step for the Autocuts [Poranne et al. 2017] (top row) and OptCuts [Li et al. 2018] (bottom row) approaches on the 3D model Ken. Panels show the initial state (α_p = …) and the maps after 25, 50, 100 and 200 iterations.

G.3 Creating stylized datasets
To show the power of the augmentation tool, we build a new dataset using 19 randomly chosen original 3D models from the initial training set of CG10000, and via augmentation we increase the new training set to 93 models (the same size as the original CG10000 training set). To provide a fair comparison, we use the same validation and test sets as CG10000. We note that we did not make use of the augmentation tool in the main paper, because professional artists had already provided a sufficient number of labelled 3D models.

Our goal in providing this set of results is to show that our proposed augmentation method enables artists to create a dataset from a few original models containing manual seams. As shown in Table 4, the seam detection results illustrate performance similar to that obtained with the original CG10000 dataset.

Table 4. Performance of seam detection using false positive rate (FPR), true positive rate (TPR) and accuracy (Acc.) on CG10000 with fewer training models.

Method         | FPR   | TPR   | Acc.
Prop-GCN       | 7.89  | 92.54 | 92.13
Prop-GAT       | 0.57  | 97.57 | 99.35
Prop-GS (pool) | 0.75  | 92.43 | 98.97
Prop-GS (mean) | 1.21  | 92.88 | 98.55
Prop-GS (GCN)  | 10.25 | 92.99 | 89.88
Prop-GS (LSTM) | 0.10  | 93.76 | 99.64
Prop-GIN       | 2.41  | 82.33 | 97.51
Table 5. Performance of seam detection, evaluated using false positive rate (FPR), true positive rate (TPR) and accuracy (Acc.), on different random validation and test splits.