DeformSyncNet: Deformation Transfer via Synchronized Shape Deformation Spaces
Minhyuk Sung∗, Adobe Research
Zhenyu Jiang†, The University of Texas at Austin
Panos Achlioptas, Stanford University
Niloy J. Mitra, University College London and Adobe Research
Leonidas J. Guibas, Stanford University
Fig. 1. We propose DeformSyncNet to jointly learn an idealized canonical latent space, encoding all possible deformations from one shape to any other, as well as an individualized linear deformation space, realized in a particular way for each specific shape. These individual deformation spaces are synchronized via the connection to the canonical space, resulting in consistent deformations across an entire shape category. Here, a user's manipulations on a shape are projected to a learned plausible shape space (left) while maintaining the shape structure such as part connectivity (red circle) and symmetry (green circle), and are also instantly transferred to multiple other shapes from the same category (right), without leveraging any correspondence information during network training. Note how the edit only affects the other chairs with arms (blue circle).
∗ M. Sung and Z. Jiang contributed equally as co-first authors.
† This work was done while Z. Jiang attended Tsinghua University.
Authors' addresses: Minhyuk Sung, Adobe Research; Zhenyu Jiang, The University of Texas at Austin; Panos Achlioptas, Stanford University; Niloy J. Mitra, University College London and Adobe Research; Leonidas J. Guibas, Stanford University.
© 2020 Association for Computing Machinery. https://doi.org/10.1145/3414685.3417783

Shape deformation is an important component in any geometry processing toolbox. The goal is to enable intuitive deformations of single or multiple shapes, or to transfer example deformations to new shapes while preserving the plausibility of the deformed shape(s). Existing approaches assume access to point-level or part-level correspondence, or establish them in a preprocessing phase, thus limiting the scope and generality of such approaches. We propose DeformSyncNet, a new approach that allows consistent and synchronized shape deformations without requiring explicit correspondence information. Technically, we achieve this by encoding deformations into a class-specific idealized latent space while decoding them into an individual, model-specific linear deformation action space, operating directly in 3D. The underlying encoding and decoding are performed by specialized (jointly trained) neural networks.
By design, the inductive bias of our networks results in a deformation space with several desirable properties, such as path invariance across different deformation pathways, which are then also approximately preserved in real space. We qualitatively and quantitatively evaluate our framework against multiple alternative approaches and demonstrate improved performance.

CCS Concepts: • Computing methodologies → Shape modeling; Shape analysis; Machine learning approaches.

Additional Key Words and Phrases: Deformation, Deformation Transfer, Shape Editing, Shape Embedding, 3D Deep Learning
ACM Reference Format:
Minhyuk Sung, Zhenyu Jiang, Panos Achlioptas, Niloy J. Mitra, and Leonidas J. Guibas. 2020. DeformSyncNet: Deformation Transfer via Synchronized Shape Deformation Spaces. ACM Trans. Graph. 39, 6, Article 262 (December 2020), 16 pages. https://doi.org/10.1145/3414685.3417783
ACM Trans. Graph., Vol. 39, No. 6, Article 262. Publication date: December 2020.

Shape deformation is an essential task for both computer graphics and computer vision. For example, in shape retrieval applications, one can deform a retrieved shape to better meet the user's desiderata. No matter how large the available 3D shape repositories have become [Stratasys; Trimble; TurboSquid], shape deformation still plays a critical role in shape modeling and shape retrieval because there are infinite imaginable shape variations, and creating novel shapes from scratch is still a taxing task. Due to its importance and broad impact, shape deformation has been extensively studied by the geometry processing community in the last decades. Such work can be broadly grouped into two categories. In the first category, one wants to plausibly deform a single shape based on user input/interactions [Igarashi et al. 2005; Lipman et al. 2005; Sorkine et al. 2004; Yumer and Kara 2014]. Specifically, one can allow users to edit shapes using a set of deformation handles, which are intuitive and procedural parameters either directly coming from the 3D CAD models or generated by parametrizing or approximating each part of a 3D model [Mo et al. 2019b; Xu et al. 2009; Zheng et al. 2011]. Such deformation handles, however, mostly overparametrize the deformation space, making it hard to maintain the plausibility or semantic constraints of a shape under arbitrary parameter changes. Consequently, user edits have to be constantly projected to the plausible deformation space [Gal et al. 2009; Liu et al. 2017], which is an error-prone process without strict quality guarantees. In the second category, one may want to apply deformation transfer, where algorithmic techniques propagate the deformation prescribed on one shape to another shape, or even to a collection of shapes [Baran et al. 2009; Ben-Chen et al. 2009; Chen et al. 2010; Fish et al. 2014; Ovsjanikov et al. 2011; Sumner and Popovic 2004; Zhou et al. 2010]. While deformation transfer alleviates the burden of individually deforming each shape, such algorithms typically expect explicit correspondences between shapes or deformation handles, which is an unrealistic demand in practice.

Nevertheless, and leaving aside for a moment the practical considerations, if one assumes the existence of correspondences between deformation handles, some natural ideas emerge. In the projection's case, the correspondences can help us discover and exploit statistical correlations among the deformation parameters, which can be further incorporated in, and improve the quality of, the projection [Fish et al. 2014]. In the case of deformation transfer, one can simply transfer the desired changes at the granularity of handles (i.e., from the source's handle(s) to the target's ones), tapping on the correspondence-induced regularization. These are some of the important reasons why many previous works assume that correspondences are available as ground-truth supervision [Baran et al. 2009; Ben-Chen et al. 2009; Chen et al. 2010; Sumner and Popovic 2004; Yang et al. 2018; Zhou et al. 2010], or estimate correspondences in a preprocessing phase [Yumer and Kara 2014]. In practice, however, computing dense correspondences is expensive or even ill-defined for heterogeneous collections of shapes, e.g., a collection of chairs that includes swivel and four-legged models. Some 3D datasets provide part-level semantic annotations [Mo et al. 2019c; Yi et al. 2016], but these are insufficient to indicate correspondences for fine-grained deformations.
For all these reasons, in this work, we propose novel neural networks that allow us to do deformation projection and transfer without relying on any precomputed notion of shape correspondences (i.e., not even between handles).

Concretely, we introduce DeformSyncNet, a neural architecture that jointly learns (i) an idealized canonical latent space of shape encodings, where all possible deformations from any shape to any other are possible, as well as (ii) individual linear deformation spaces for each real shape that reflect the particular ways that the shape may (or may not be able to) deform in 3D. These shape-specific deformation spaces are synchronized by being connected through the canonical space. We design the canonical latent space as an affine space where shape deformations, arising as vectors connecting latent shape encodings, are the main objects of interest, and have a meaning irrespective of the source and target shapes involved. A deformation is transferred by decoding the latent deformation vector in the context of the new source shape and applying the resulting 3D deformation to that shape in real space. The individual deformation spaces, each of which is represented as a dictionary of linear deformations, are linked together so as to encourage the network to share the same parameters for the individual linear functions. Additionally, the properties of the affine space naturally structure the latent space to impose cycle consistency between various deformations, without additional loss functions or regularization during the neural network training.

Our approach can benefit from (but does not require) deformation handles. If we are given deformation handles for a particular shape, these can be easily connected to the individual deformation space we learn. Since both spaces are linear in our setting, the projection from editing via deformation handles to a plausible shape can be simply computed as an orthogonal projection to a learned linear space.
During network training, we also enforce that the learned deformation space for each shape becomes a subspace of the given deformation handle space, so that it follows the given deformation operations while capturing common constraints across the shapes. In this way, correlations among the deformation handles can emerge in the learned space.

In the experiments, we successfully apply our framework for the purpose of co-editing human-made shapes and show qualitatively intuitive results. Quantitatively, we show that our trained networks outperform other modern (neural-net-based) and classical (ICP-based) shape deformation approaches, both in terms of fitting accuracy and quality of deformation transfer.

In summary, our contributions are:
(i) a novel approach for learning synchronized linear deformation spaces for each shape in a category, without explicit correspondences;
(ii) a unified framework enabling both projection of user-edited shapes to plausible counterparts in the shape space as well as transfer of the deformation to other shapes in the same category; and
(iii) a neural network design that is simple yet outperforms existing deep deformation methods in terms of both quality of fitting and deformation transfer.
3D shape deformation has been a long-standing problem in computer graphics. Earlier work introduced methods enabling interactive free-form deformations, while preserving local characteristics of the shape. Some well-known examples are Laplacian editing [Sorkine et al. 2004] and as-rigid-as-possible manipulation [Igarashi et al. 2005], which regularize deformations to maintain local curvature based on mesh Laplacians and local rigidity, respectively. Recent work focuses more on target-driven deformation, deforming and fitting a source shape to a target. For the fitting stage, researchers have used iterated closest point (ICP) iterations [Huang et al. 2017; Li et al. 2008] to establish point-wise correspondences and subsequently minimized an energy function based on them. However, ICP is prone to fail when the source shape is significantly different from the target, violating the ICP locality assumption. Recent neural-net-based learning approaches directly predict the deformation offset either in a voxel grid [Hanocka et al. 2018; Jack et al. 2018; Kurenkov et al. 2018; Yumer and Mitra 2016] or for point samples on the source shape [Groueix et al. 2019; Mehr et al. 2019; Wang et al. 2019a], resulting in better fitting accuracy.

Both traditional and recent learning-based approaches do not, however, learn the shape variability from the given dataset, and thus cannot verify the semantic plausibility of the deformed shapes. Specifically, they cannot examine whether the deformed shape looks like one of the exemplars in the database. Instead, we train a network by performing target-driven deformation, while simultaneously requiring it to learn the shape variation space. Hence, in interactive editing, we can project the user's input through the deformation handles to the learned shape space and preserve semantic plausibility.
We also demonstrate in our experiments that our method performs target-driven deformation comparably to, or even better than, previous methods.

Note that our method also differs from other deep 3D generative models that learn a shape variation space [Achlioptas et al. 2018; Wu et al. 2016] in that it does not generate shapes from a latent representation. Instead, when decoding, it takes an existing shape as input and generates a variant by deforming it directly in 3D. Hence, our method enables reusing existing 3D shape data with their associated meta-information, e.g., mesh structure, color, texture, and part hierarchy, which can be carried along in the deformation.
Deformation transfer has remained an important problem in 3D shape editing since Sumner and Popovic [2004] introduced the concept in their pioneering work. The goal is to automatically propagate the result of a shape deformation performed on one shape to others, thus saving users' time and effort. This problem has been comprehensively investigated in many previous works. While Sumner and Popovic [2004] require dense shape correspondences, Zhou et al. [2010] and Yang et al. [2018] extended their work to only use key-point correspondences. Ben-Chen et al. [2009] and Chen et al. [2010] overcame the limitation of requiring single-component manifold meshes, and proposed cage-based methods to deal with any representation of shapes. Baran et al. [2009] improved the method to cope with very different shapes for the transfer (e.g., humans to animals), requiring only region-level semantic correspondences. The main remaining limitation is the necessity of shape correspondences as input (at the level of either points, parts, or cages). Recently, Gao et al. [2018] proposed a neural-net-based method that enables inter-class deformation transfer without correspondences across the classes. Their network, however, still needs intra-class correspondences of shapes during training. In our work, we aim to deal with a diverse collection of man-made 3D shapes, such as ShapeNet models [Chang et al. 2015], where the correspondences between shapes are not identified or even clearly defined. Also, their work, and a recent work of Yin et al. [2019], aim to learn transformations of shapes across different domains, inspired by analogous work in the image space [Isola et al. 2017; Zhu et al. 2017], whereas our goal is to learn deformation transfer for shapes in the same domain. Another notable exception is the StructEdit work [Mo et al. 2019a], which learns to transfer shape differences without any correspondence supervision.
Compared with our method, StructEdit transfers structural or topological differences of shapes, which cannot be immediately applied to modify a mesh or a point cloud. It also strongly depends on training with shape data annotated with consistent hierarchies [Mo et al. 2019c]. In contrast, our method focuses on continuous deformation and direct editing of existing 3D models.

Deformation transfer is closely related to shape analogies (originating from image analogies [Hertzmann et al. 2001]): finding a shape x such that a : b = c : x, given shapes a, b, and c. Rustamov et al. [2013] first introduced a method performing shape analogies based on functional correspondences. More sophisticated recent variations on shape analogies and shape generation based on functional correspondences include [Huang et al. 2019a] and [Huang et al. 2019b]. Wu et al. [2016] showed how the latent space of a 3D GAN can be used for this purpose (similar to word2vec [Mikolov et al. 2013] in machine learning). Compared with these, we neither require correspondences nor decode directly from a latent space without deforming an input shape.

Lastly, we note that ideas similar to deformation transfer have also been studied for other research topics in graphics, such as style transfer [Ma et al. 2009; Xu et al. 2010] and motion retargeting [Villegas et al. 2018; Xia et al. 2015], but under the same assumption that shape correspondences are available.

While many 3D human-made objects are highly structured, deformation handles accompanying the 3D models often do not fully acknowledge the underlying structure and thus allow breaking it with arbitrary modifications. As alternatives, researchers have introduced 3D shape analyses extracting the global structure of 3D shapes for editing, such as orthogonality and parallelism of wireframes [Gal et al. 2009] and bounding boxes [Zheng et al. 2011], symmetry [Wang et al. 2011], and articulation of parts [Xu et al. 2009].
Such analyses of individual shapes are, however, not able to capture semantic constraints or correlations of the handles that can only be observed from families of shapes. Hence, another line of work extended these ideas to shape co-analysis and aimed to discover the semantic relationships among the handles from a collection of shapes. For example, Kim et al. [2013] fit a template part bounding box structure to shapes in order to be able to cluster shapes and find part-level correspondences. Based on such fitted template structures, Ovsjanikov et al. [2011] demonstrate ways of exploring the shape space, Zheng et al. [2014] investigate co-occurrence of parts, and Fish et al. [2014] calculate joint distributions of part bounding box parameters. Yumer et al. [2014] also discover co-constrained abstractions of shapes in the same family. These works, however, analyze shapes based on a template structure and thus cannot be scaled to handle 3D models in online repositories that have immense diversity.
As mentioned in the introduction, key to our approach is an idealized latent space where points represent shapes, and vectors connecting points represent shape differences or deformations. A traditional latent space approach would implement deformation transfer as simply adding the deformation vector to the point representing a new latent source shape (in order to obtain its deformed target version), followed by a decoding of the latter from latent to 3D space (Figure 2). Instead, we propose to decode the deformation vector itself into a deformation action, which can be applied directly in real 3D space to the new shape to be deformed.

In Section 3.1, before we discuss the realization of these operations on real shapes, we introduce an abstract mathematical framework of how things might work when point-wise correspondences across the shapes are known. We then relax these idealized assumptions to generate a specialized set of deformations a real shape can undergo, by developing shape-specific deformation action dictionaries. In Section 3.2, we introduce how the learned shape deformation space can be leveraged in a practical shape editing scenario to produce a plausible output shape by projecting the user's input to the deformation space. In Section 3.3, we describe how we implement the shared functional form of deformations via neural networks. Finally, in Section 3.4, we compare our neural networks with other recent neural networks learning deformations and discuss the advantages of our approach.
We consider a collection of shapes X, where each shape x ∈ X is represented as a point cloud with n points (i.e., ∀x ∈ X, x ∈ R^{3n}). We aim to encode deformations between all ordered pairs in X within a low-dimensional continuous canonical latent space. In particular, we create an affine action space ⟨X̄, V, ⊕⟩, where X ⊆ X̄ ⊆ R^{3n}, V is a k-dimensional vector space, and ⊕ is an action of the additive group of V on the set X̄: X̄ × V → X̄. An affine action space is defined by the following three properties [Gallier 2011; Tarrida 2011]:
(i) (Free action) x ⊕ 0 = x, for every x ∈ X̄.
(ii) (Additive action) (x ⊕ u) ⊕ v = x ⊕ (u + v), for every x ∈ X̄ and every u, v ∈ V.
(iii) (Transitive action) For every x, y ∈ X̄, there exists a unique v ∈ V such that x ⊕ v = y.
Note that the vector space V parametrizes not the shape (point) space but the deformation (vector) space of shape differences. Our goal is to make a vector v ∈ V an anchor-free representation of deformation, forgetting the original shape to which the deformation is applied. Thus, the same type of deformation action can be applied to any arbitrary shape x ∈ X̄ by applying the same deformation parameter v ∈ V; i.e., the deformation −→xy ∈ V such that x ⊕ −→xy = y can be transferred to another shape z ∈ X̄ by computing z ⊕ −→xy.
We choose an affine space as our latent space of deformations because of its desirable properties, which facilitate exploration of the space in downstream applications. From the basic properties above, the following properties can also be derived (refer to Gallier [2011] and Tarrida [2011] for proofs; −→xy ∈ V denotes a deformation vector from shape x to y, i.e., x ⊕ −→xy = y):
(i) (Identity) −→xx = 0, for every x ∈ X̄.
(ii) (Anticommutativity) −→xy = −(−→yx), for every x, y ∈ X̄.
(iii) (Transitivity) −→xy + −→yz + −→zx = 0, for every x, y, z ∈ X̄.
(iv) (Parallelogram law) −→xy = −→zw ⇔ −→xz = −→yw, for every x, y, z, w ∈ X̄.
The challenge here is how to make the vectors in V encode the same type of deformation(s) across all the shapes in X. Consider a shape autoencoder, where E: R^{3n} → R^k and D: R^k → R^{3n} are encoding and decoding neural networks, respectively. Given these mappings, the simplest way to create an affine space for the shape differences is to take the Euclidean embedding space R^k as the vector space V of the affine space: x ⊕ v := D(E(x) + v). A vanilla autoencoder, however, fails to structure the embedding space in a way that a vector indicates the same type of deformation everywhere over the space. Figure 2 shows an example of taking a vector from the source to the target shape in the embedding space and adding it to a new source shape, at different scales along the vector. This fails to make the new shape adopt the deformation from the source to the target, transforming it into implausible shapes.

(Figure 2: panels comparing latent-vector transfer in an autoencoder (AE) latent space and in our latent space, each showing a source, a target, and a new source shape.)
Fig. 2. Comparison of latent vector transfer between an autoencoder (AE) latent space [Achlioptas et al. 2018] and our latent space. The vector from the source shape to the target is transferred to another source shape, and several points are sampled along the direction of the deformation vector at increasing scales. In the autoencoder latent space, the vector does not convey the difference between the two shapes and sometimes even pushes the new shape outside the valid region, making it less plausible. In our latent space, the difference between the source and target shapes is properly transferred, regardless of the location of the new source shape.
We aim to design the latent space so that such vector addition can properly transfer the shape difference.
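When deformation vectors are realized as differences of latent codes, −→xy = E(y) − E(x), the derived affine-space properties above hold by construction. The following sketch illustrates this with a stand-in encoder, a fixed random linear map that is purely hypothetical (the paper's E is a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder: a fixed linear map from 3n-dim point clouds
# to a k-dim latent space (stand-in for a trained network E).
n_pts, k = 100, 8
E_mat = rng.standard_normal((k, 3 * n_pts))

def E(x):
    return E_mat @ x

# Four arbitrary "shapes" as flattened point clouds.
x, y, z, w = (rng.standard_normal(3 * n_pts) for _ in range(4))

def vec(a, b):
    """Deformation vector from shape a to shape b in latent space."""
    return E(b) - E(a)

# (i) Identity: vec(x, x) = 0.
assert np.allclose(vec(x, x), 0)
# (ii) Anticommutativity: vec(x, y) = -vec(y, x).
assert np.allclose(vec(x, y), -vec(y, x))
# (iii) Transitivity: vec(x, y) + vec(y, z) + vec(z, x) = 0.
assert np.allclose(vec(x, y) + vec(y, z) + vec(z, x), 0)
# (iv) Parallelogram law: vec(x, y) - vec(z, w) equals vec(x, z) - vec(y, w),
# so one vanishes exactly when the other does.
assert np.allclose(vec(x, y) - vec(z, w), vec(x, z) - vec(y, w))
```

This is why no extra cycle-consistency loss is needed: any encoder that represents deformations as code differences satisfies these identities automatically.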
Linear Deformation Representations.
Let us first assume, to understand a simpler setting, that point-wise correspondences are known for all pairs of shapes. In this case, we can specify a deformation as an offset vector for corresponding points. When the points are ordered in a consistent way, so that corresponding points across the shapes have the same index in the order, we can consider a linear deformation function as an action ⊕ of the affine space:

x ⊕ v := A v + x,    (1)

where A ∈ R^{3n×k}, with k being the dimension of V. The deformation for a v ∈ V is now explicitly defined as adding per-point offsets (A v) to the source point cloud, and the same v produces the same offsets for the corresponding points across all the shapes. This action on shapes is free (x ⊕ 0 = x), transitive if k is smaller than or equal to 3n yet large enough to capture all possible differences of shapes in X, and is also an action of the additive group of V over X̄. In the context of an autoencoder, this can be understood as constructing a latent shape space that is decoded into a linear subspace A of the shape point clouds, so that a free vector v over that latent space describes the same point displacements in 3D space, regardless of the original shape. Alternatively, one can simply interpret A as a set of principal axes of the point clouds x ∈ X, in the context of PCA.

Now we consider the case when the point-wise correspondences are unknown, and possibly even ill-defined. (For a heterogeneous collection of shapes, particularly for human-made objects, correspondences may not be clearly defined for some pairs, either geometrically or semantically.) In this case, it is not possible to define a canonical linear deformation function for all shapes with a single matrix A. Instead, we propose to predict a matrix A for each individual shape x ∈ X using a neural network F: X → R^{3n×k}. Denoting the output for the input shape x as A_x, the action ⊕ is now written as follows:

x ⊕ v := A_x v + x.    (2)

The action is still free and transitive (with a large enough k ≤ 3n), but is no longer an action of the additive group, since A_x is now source-dependent; A_x and A_y for different shapes x and y may be inconsistent. To impose the additive action property, we jointly train a shape encoder E: X → R^k along with the above network F, so that the deformation vector −→xy from shape x to y is given by E(y) − E(x), while the matrix A_x is predicted as F(x). The deformation from the point cloud x to y is then computed as follows:

d(x → y) := F(x) (E(y) − E(x)) + x,    (3)

where the deformation vector is computed in latent space through E, decoded into real space by A_x as a set of point offsets in an x-specific way, and added to the original point cloud x.

This utilization of the two networks E and F realizes our overall plan: learning (i) a canonical affine deformation space from E, and (ii) an individual linear (affine) deformation space for each shape from F, and connecting these individual shape action spaces by sharing the same source latent space. The individual deformation spaces, defined by the matrices A_x, are thus synchronized across the shapes via the connection through the canonical latent deformation space.

Empirically, even without the joint network training, the matrix A_x can be predicted consistently for similar input shapes due to the characteristics of neural networks when learning a smooth function; several recent works have proposed unsupervised methods finding shape correspondences based on this network property [Genova et al. 2019; Groueix et al. 2018; Li et al. 2019; Sung et al. 2018; Tulsiani et al. 2017; Zhao et al. 2019].
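The data flow of Equation 3, and the transfer of a latent deformation to a third shape, can be sketched as follows. Here random linear maps stand in for the trained networks E and F; this illustrates only the shapes and compositions of the operations, not the actual PointNet architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pts, k = 50, 4

# Stand-ins for the trained networks: E maps a flattened point cloud to a
# k-dim latent code; F maps it to a (3n x k) deformation dictionary A_x.
# Both are fixed random linear maps here, purely for illustration.
E_mat = rng.standard_normal((k, 3 * n_pts))
F_mat = rng.standard_normal((3 * n_pts * k, 3 * n_pts)) * 0.1

def E(x):
    return E_mat @ x

def F(x):
    return (F_mat @ x).reshape(3 * n_pts, k)  # shape-specific dictionary A_x

def deform(x, y):
    """Eq. (3): deform source x toward target y via the latent vector."""
    return F(x) @ (E(y) - E(x)) + x

x, y, z = (rng.standard_normal(3 * n_pts) for _ in range(3))

# Target-driven deformation of x toward y ...
d_xy = deform(x, y)
# ... and transfer of the same latent deformation E(y) - E(x) to shape z,
# decoded through z's own dictionary F(z).
d_z = F(z) @ (E(y) - E(x)) + z

# Deforming a shape toward itself is the identity (the latent vector is 0).
assert np.allclose(deform(x, x), x)
assert d_xy.shape == d_z.shape == (3 * n_pts,)
```

Note that the transfer reuses only the latent vector; each shape's own dictionary decides how that vector is realized as 3D point offsets.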
However, we found that enforcing the additive action property regularizes the A_x matrices to become consistent even when the shapes are dissimilar (see Section 4.1 and Figure 5).

Relation to Deep Functional Dictionaries.
The matrix A_x can be interpreted as a dictionary of deformations (point-wise displacements), and in this context, we call the network F a deformation dictionary predictor in the rest of the paper. The idea of learning dictionaries of functions on shapes is analogous to the Deep Functional Dictionaries work by Sung et al. [2018], but there are two main differences. First, our dictionaries are learned from pairwise deformations, not from input functions over the shapes (thus, our work is purely self-supervised). Second, we synchronize our dictionaries by explicitly learning a mapping of the dictionaries to a canonical space, instead of solely relying on the smoothness of the learned functions.

Many 3D models in online repositories are either described with geometric primitives or spline surfaces, or are decomposed into smaller parts that can be easily approximated with bounding primitives [Mo et al. 2019c; Sung et al. 2017; Yi et al. 2017]. This information provides the user with intuitive deformation handles, although mostly they overparametrize the deformation space, meaning that arbitrary changes of the parameters do not always give a valid shape. Assuming that the deformation function of the given handles is linear (we observed that most deformation parameters coming from human-made 3D models are translations and scalings of parts, which are linear deformations), in our framework we can easily project the user input on the given handles to the learned deformation space using a simple computation. Additionally, we can guide our networks during training to learn the deformations that are present in the given parameter space.
Projecting Shape Editing to Deformation Space.
Let the input linear deformation function (defined by the deformation handles) be denoted as B_x(z̄ + z), where B_x ∈ R^{3n×m_x}, m_x is the number of deformation handles, and z̄ ∈ R^{m_x} are the default parameters. We also have the learned linear deformation function for the shape, A_x v + x (Equation 2), where x = B_x z̄ ∈ R^{3n} is the initial shape and A_x ∈ R^{3n×k} gives a valid variation space of x. The goal of the projection is the following: given the user edit via the deformation handles z ∈ R^{m_x}, find v ∈ R^k (a valid variation) that minimizes the difference of point-wise offsets between the edited shape and the valid shape: ∥B_x z − A_x v∥. In particular, we consider a shape editing scenario where the user edits the shape through some of the deformation handles, prescribing values for them at each time, and the system automatically projects the edit to the valid space. Thus, given the equality constraints on the deformation handles C z = z_in, where C ∈ R^{m_x×m_x} is a matrix indicating the selected handles and z_in ∈ R^{m_x} is the sparse user input, we find ẑ ∈ R^{m_x} defined as follows:

ẑ := argmin_z min_v ∥B_x z − A_x v∥  s.t.  C z = z_in.    (4)

When B′_x denotes the matrix B_x without the columns of the selected handles, Equation 4 can be written in the following form:

ẑ′ := argmin_{z′} min_v ∥(B′_x z′ + c) − A_x v∥,    (5)

where ẑ′ gives the best values for the remaining parameters, and c = B_x C z_in. For any point cloud y ∈ R^{3n}, the projection distance to the linear space of A_x is computed as:

min_v ∥A_x v − y∥ = ∥(I − A_x A†_x) y∥,

where A†_x is the pseudoinverse of A_x (note that A_x A†_x ≠ I since A_x is a thin matrix, i.e., 3n ≫ k).
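The projection-distance identity just stated can be verified numerically; the snippet below assumes a random thin matrix A in place of a learned dictionary A_x:

```python
import numpy as np

rng = np.random.default_rng(3)
n3, k = 300, 5                       # 3n = 300 rows, k-dim dictionary
A = rng.standard_normal((n3, k))     # thin matrix, 3n >> k
y = rng.standard_normal(n3)

# Left side: residual of the least-squares fit min_v ||A v - y||.
v_star, *_ = np.linalg.lstsq(A, y, rcond=None)
lhs = np.linalg.norm(A @ v_star - y)

# Right side: norm of y after removing its component in col(A),
# using the orthogonal projector I - A A^+.
A_pinv = np.linalg.pinv(A)
rhs = np.linalg.norm((np.eye(n3) - A @ A_pinv) @ y)

assert np.isclose(lhs, rhs)
```

Because the projector depends only on A_x, it can be computed once per shape and reused for every edit, which is what makes the interactive projection cheap.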
Hence, Equation 5 can be rewritten as follows:

ẑ′ := argmin_{z′} ∥(I − A_x A†_x)(B′_x z′ + c)∥.    (6)

The solution ẑ′ of this linear regression is given as:

ẑ′ = P†_x Q_x c,    (7)

where P_x = (I − A_x A†_x) B′_x, Q_x = −(I − A_x A†_x), and P†_x is the pseudoinverse of P_x. Note that, when the user edits through one deformation handle at a time, P†_x and Q_x can be precomputed for each handle.

Some deformation handles may have inequality constraints; e.g., scale parameters must be positive numbers. The least squares problem in Equation 6 can also be quickly solved with inequality constraints using standard techniques such as interior-point methods or active-set methods.

Projecting Network Output to Deformation Handle Space.
At training time, we can also project the output deformed shape of the networks to the given deformation handle space, to make the deformation describable with the given handles (the effect of this option is investigated in Section 4.5):
$$d_{proj}(x \to y) := B_x B_x^{\dagger} \left( \mathcal{F}(x) \left( \mathcal{E}(y) - \mathcal{E}(x) \right) + x \right). \qquad (8)$$
We remark that our framework can easily exploit any type of linear deformation handles, even when they are inconsistent and their numbers differ across the shapes.

For the networks $\mathcal{E}$ and $\mathcal{F}$ in Section 3.1, we can use any neural network architecture that processes point clouds and produces a global shape feature and per-point features, respectively. Note that every three rows of $A_x \in \mathbb{R}^{n \times k}$ determine the offset of one point independently and thus can be predicted as a per-point feature.
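The handle-space projection of Equation 8 is a standard linear projection via $B_x B_x^{\dagger}$; a small NumPy sketch (random toy stand-ins for $B_x$ and the network's output shape; illustrative only) shows that it is idempotent, as a projection should be:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 30, 5
B = rng.standard_normal((n, m))     # stand-in for the handle matrix B_x
d = rng.standard_normal(n)          # stand-in for the network output F(x)(E(y) - E(x)) + x

proj = B @ np.linalg.pinv(B)        # projector onto the handle-expressible subspace
d_proj = proj @ d                   # Equation 8: nearest shape expressible by the handles

# Projecting twice changes nothing (B B^+ is idempotent).
assert np.allclose(proj @ d_proj, d_proj)
```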
Fig. 3. Neural network pipeline. The encoder $\mathcal{E}$ takes both the source and target point clouds and computes the latent deformation vector as $(\mathcal{E}(y) - \mathcal{E}(x))$. The deformation dictionary predictor $\mathcal{F}$ takes only the source shape and predicts the dictionary $\mathcal{F}(x)$. The outputs of these two modules are multiplied and added to the source point cloud to produce the deformed shape as $x \to x + \mathcal{F}(x)(\mathcal{E}(y) - \mathcal{E}(x))$.

In our experiments (Section 4), we use PointNet [Qi et al. 2017]: its classification architecture for the encoder $\mathcal{E}$ and its segmentation architecture for the deformation dictionary predictor $\mathcal{F}$. Figure 3 shows the entire pipeline. A Siamese structure of the encoder $\mathcal{E}$ takes a pair of source $x \in \mathcal{X}$ and target $y \in \mathcal{X}$ shapes as input and predicts the deformation vector $(\mathcal{E}(y) - \mathcal{E}(x)) \in \mathbb{R}^k$. The dictionary predictor $\mathcal{F}$ takes only the source shape as input and predicts the linear deformation dictionary $\mathcal{F}(x) \in \mathbb{R}^{n \times k}$. For faster convergence in training, we normalize each column in the dictionary $\mathcal{F}(x)$ to have unit norm. The deformation of the source shape fitting the target is predicted by computing point offsets $(\mathcal{F}(x)(\mathcal{E}(y) - \mathcal{E}(x))) \in \mathbb{R}^n$ and adding them to the source shape, as described in Equation 3. Additionally, if deformation handles are provided for each shape, the deformed source shape can also be projected to their space as described in Equation 8. The fitting error from the output deformed source shape $d(x \to y)$ to the target shape $y$ is measured using the Chamfer distance (cf. [Achlioptas et al. 2018; Fan et al. 2017; Groueix et al. 2019; Wang et al. 2019a; Yifan et al. 2020]):
$$L_F := \mathrm{Ch}(d(x \to y), y). \qquad (9)$$
We also follow the idea of Wang et al. [2019a] and Yifan et al. [2020] to preserve symmetries of man-made objects in the deformation.
When a global reflection symmetry axis is given for the source shape, we flip the output deformed shape along the axis and minimize the Chamfer distance to it:
$$L_R := \mathrm{Ch}(d(x \to y), R(d(x \to y))), \qquad (10)$$
where $R$ is the mirroring operation along the given reflection axis.
Additionally, we also support sparsity regularization losses to enforce structure in the output dictionaries. For example, in shape editing scenarios, the user may want to identify the essential correlations among the given deformation handles. To discover them, we can apply an $l_1$ sparsity loss to the dictionary columns after projecting them to the given deformation handle space (as in Equation 8):
$$L_S := \sum_k \left\| \left( B_x^{\dagger} \mathcal{F}(x) \right)_{\cdot,k} \right\|_1. \qquad (11)$$
This loss encourages the columns of the projected dictionary matrices to be sparse, so that they capture strongly correlated deformation handles. Also, while we preset the number of columns in the dictionary, $k$, during training (i.e., the dimension of the latent space), we can find the minimal set of columns necessary to handle all possible deformations by imposing a column-wise $l_{2,1}$-norm,
$$L_{S_{2,1}} := \sum_k \left\| \left( B_x^{\dagger} \mathcal{F}(x) \right)_{\cdot,k} \right\|_2, \qquad (12)$$
which drives the norm of each column close to zero. Empirically, we found that this loss also plays the role of the $l_1$ sparsity loss above. Our final loss is
$$L = L_F + L_R + w_{S_{2,1}} L_{S_{2,1}}, \qquad (13)$$
where $w_{S_{2,1}}$ is a weight for the sparsity loss. In Section 4.1, we analyze the effect of the sparsity loss.
Please note that we do not use any cycle-consistency loss, since the consistency of the dictionaries $\mathcal{F}(x)$ across the shapes automatically emerges during network training, as discussed in Section 3.1. In Section 4.3, we empirically evaluate cycle-consistency with a dataset where, as ground truth, the point correspondences are known across the shapes. The experiment demonstrates that our method, without any loss function for consistency, performs even better than the network of Groueix et al. [2019], which explicitly optimizes cycle-consistency losses.

Recent neural networks learning target-driven deformations, such as 3DN [Wang et al. 2019a], Cycle Consistency [Groueix et al. 2019], and Neural Cages [Yifan et al. 2020], have architectures analogous to ours in that they take a source-target pair of shapes as input, compute a latent vector for the shape difference, and apply it to the source shape. We discuss the main differences between these methods and ours, and describe how the differences affect performance and usage in downstream applications.
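The column-wise $l_{2,1}$ penalty of Equation 12 reduces to summing the per-column $l_2$ norms; a minimal NumPy sketch (with a toy matrix standing in for the projected dictionary $B_x^{\dagger}\mathcal{F}(x)$):

```python
import numpy as np

def l21_norm(D):
    """Column-wise l2,1 norm: the sum over columns of each column's l2 norm.
    As a training penalty, it encourages entire columns to vanish (group sparsity)."""
    return np.linalg.norm(D, axis=0).sum()

rng = np.random.default_rng(3)
D = rng.standard_normal((12, 5))   # toy stand-in for a projected dictionary
D[:, 2] = 0.0                      # a column that the regularizer has driven to zero

# Equals the sum of per-column l2 norms; the zeroed column contributes nothing.
assert np.isclose(l21_norm(D), sum(np.linalg.norm(D[:, j]) for j in range(5)))
assert l21_norm(np.eye(3)) == 3.0  # three unit columns
```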
Learning Shape-Dependent Variation Space.
While both our network and the others are trained to learn a deformation from one shape to another, we not only learn how to fit the input to the target but also discover the plausible variation space of the input shape, which is represented as a linear space with the deformation dictionary $A_x$. Hence, in shape editing, when the deformation handles are given as linear deformation functions, the user's input can easily be projected back to the learned variation space, as described in Section 3.2 and demonstrated in Section 4.1. This is an additional capability of our method compared to the other methods. Moreover, our deformation dictionary $A_x$ captures strong correlations among the given deformation handles with an additional regularization loss (see Equation 12), and thus provides a more intuitive interpretation of the learned deformation space, as discussed in Section 4.1. Note that we normalize the dictionary columns before projection, and thus the norm of the dictionary columns after the projection can still be minimized (but cannot be zero).
[Figure 4 columns: Airplane (2390), Car (1200), Chair (1927), Sofa (947), Table (1857)]
Fig. 4. ShapeNet part bounding box dataset. Each column shows the number of shapes, a histogram of the number of components, and a sample object with part bounding boxes for each category. We preprocess raw CAD models as described in Section 4.1 and collect models whose number of components is in the range of [ , ].

Factorizing Deformation Representation.
We also found that factorizing the shape offset representation into a source-dependent dictionary $\mathcal{F}(x)$ and a latent vector for a pair of shapes, $(\mathcal{E}(y) - \mathcal{E}(x))$, gives better performance in the fitting. See Section 4.2 for quantitative evaluations. While Neural Cages [Yifan et al. 2020] has a factorization similar to ours, the others (3DN [Wang et al. 2019a] and Cycle Consistency [Groueix et al. 2019]) immediately combine the pair information with the source shape in the network, without separately manipulating the source shape beforehand.

Enforcing Affine Space Properties.
To attain affine space properties in the latent space, in our framework the deformation from one shape to another is represented as the subtraction of two latent codes, $(\mathcal{E}(y) - \mathcal{E}(x))$, as described in Section 3.1. In all the other methods, however, the deformation is learned by first concatenating two latent codes and processing them in the subsequent layers. In Section 4.2, we conduct an ablation study, changing the computation of the latent deformation vector in our framework to be similar to the other methods. The results demonstrate that our architecture enforcing the affine space properties produces more plausible shapes in deformation transfer compared with both the other methods and the ablated architecture.

We utilize our framework in the context of a shape co-editing application. Also, we quantitatively evaluate our method and compare against state-of-the-art shape deformation techniques.
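The path invariance granted by representing deformations as latent-code differences can be illustrated with a toy linear stand-in for $\mathcal{E}$ and a fixed random dictionary standing in for $\mathcal{F}$ (purely illustrative; not the trained networks). Composing two edits is vector addition in the latent space, which commutes:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 60, 8                           # toy flattened point dim and latent dim
W = rng.standard_normal((k, n))        # toy linear stand-in for the encoder E
E = lambda s: W @ s
F = rng.standard_normal((n, k))        # toy stand-in for the dictionary F(src) of one source

x, y1, y2, src = (rng.standard_normal(n) for _ in range(4))

# Two edits expressed as latent-code differences, applied to a new source shape.
t1 = E(y1) - E(x)
t2 = E(y2) - E(x)
out_a = src + F @ (t1 + t2)            # apply edit 1 then edit 2
out_b = src + F @ (t2 + t1)            # apply edit 2 then edit 1
assert np.allclose(out_a, out_b)       # same result: order of edits does not matter
```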
Dataset and Network Training.
We experiment with five categories from the ShapeNet repository [Chang et al. 2015], namely Airplane, Car, Chair, Sofa, and Table. Most of the 3D models in ShapeNet are composed of smaller parts that have simpler geometry. Thus, we compute oriented bounding boxes for parts using the Trimesh library [Dawson-Haggerty et al.] and take translations and anisotropic scales along each local coordinate as our deformation handles. Figure 4 shows the number of shapes in each category (in parentheses), the distribution of the number of components in each shape (first row), and a sample object with part bounding boxes (second row). Before computing the bounding boxes, we run a preprocessing step to merge small components into their closest neighbors
and also combine overlapping components, as illustrated in the work of Sung et al. [2017]. However, we do not group symmetric parts, in order to have more freedom in editing each part. (Symmetries will be learned by our networks, as illustrated in Sections 4.1 and 4.4.) After preprocessing, we take models with a number of component parts in the range of [ , ]. We do not normalize the scale of each shape individually, so as to see natural shape variations (e.g., turning a chair into a bench without decreasing the height), but all shapes are in the range of [− . , . ] in each axis.
We train the networks for each category. We sample 2K points randomly over each shape and feed the networks with random pairs of source and target shapes. Training, validation, and test sets are randomly split with an 85-5-10 ratio. We set the dimension of the latent space (i.e., the number of dictionary elements), $k$, to 64 and the weight of the sparsity regularization loss, $w_{S_{2,1}}$, to 10.
Since we aim to deal with uncurated datasets where no manual annotations are present, we do not use any part labels in ShapeNet (unlike those used by Mo et al. [2019a]) or any correspondence supervision in network training. Also, such part labels are often insufficient to infer correspondences across the deformation handles.

Fig. 5. Elements in the learned deformation dictionary. The columns are deformations along some elements in the dictionary (columns of $A_x$). Each element shows a natural mode of shape variation changing local parts, and the variations of each element are also consistent across the shapes despite the differences in their styles. The second column of chairs shows scaling of the swivel leg (red circle), and the element does not affect shapes without a swivel leg. Also, the second column of tables lifts up the shelf in the middle (blue circle) and makes no change if the part does not exist. Refer to the supplemental video for animated visualizations.

Deformation Dictionaries.
We illustrate the learned deformation dictionary $A_x$ in Figure 5. Each column visualizes shapes deformed along some elements in the dictionaries. Thanks to the sparsity regularization loss (Equation 12), the elements show local and natural deformation modes and correlations among the given deformation handles. For example, the first and third columns of chairs translate and scale the seat along the up and front directions, respectively, while preserving connectivity with the other parts. The last column also elongates the back part upward. Similar local deformations are also observed in sofas. Interestingly, the second column scales only the swivel leg (red circle) and does not affect shapes with a different type of leg. Similarly, in the second column of tables, the element translates the shelf in the middle along the up direction and makes no change if the shape does not include the shelf. The supplementary video shows animations of how the shapes vary along the dictionary elements.

Shape Co-Editing.
We also describe how our framework can be employed for interactive shape co-editing and demonstrate qualitative results. We consider a scenario in which the user modifies a source shape through one of the given deformation handles at each step. Given the user input, the system (i) first automatically snaps the edited shape to the plausible shape space while fixing the modified parameter, as described in Section 3.2. When solving the least-squares problem in Equation 6, we use a constraint that all scale parameters must be greater than zero. Then, (ii) the projected deformation is transferred to the desired new source shape in the same category.
Figures 1 and 6 show some examples of deformation projection and transfer. In Figure 6, given a source shape (first column), we randomly select one of the deformation handles and perturb its parameter (second column). If the selected handle is a translation along one of the local coordinates of a box (blue arrow in the figure), we vary the parameter in the range of [− . + t, 0 . + t], where t
[Figure 6 columns: Source, Edited (w/ boxes), Projected (w/ boxes), Projected, New Source, Transferred (w/ boxes), Transferred]
is the given number. If the selected handle is a scaling (red arrow in the figure), we pick a random number in the range of [0 . s, 1 . s], where s is the default scale. Then, the modified shape is projected to the learned plausible shape space (third and fourth columns). Given another source shape (fifth column), the projected deformation is transferred (sixth and seventh columns) by taking the latent deformation vector and applying it to the new source shape with the learned shape-dependent action. The results show that the projection adjusts the rest of the parameters so as to maintain the shape structure, such as symmetry and part connectivity. For instance, the right wing of the airplane (first row) and a leg of the chair (third row) and of the table (last row) are properly attached to the other parts after the projection while preserving symmetry. Also, the deformation of the source shape is naturally transferred to the new source shape even when the geometry or the deformation handles are different. For example, the tables in the last row have different part structures, but the translation of the legs is naturally adapted to the new shape by decreasing the width.

Fig. 6. Shape editing projection and transfer results. The first column is the source shape; the second column is the user edit (blue and red arrows denote translation and scaling along the direction, respectively); the third and fourth columns are the results of projection (with and without the part bounding boxes); the fifth column is the new source shape; and the sixth and seventh columns are the results of transferring the projected deformation to the new source shape (with and without the part bounding boxes). The projection adjusts the rest of the deformation handles while preserving part connectivity (red circles) and symmetry (green circles). Also, the projected deformations are naturally transferred to the new shape despite the different part structures (blue circles). See the supplemental video for more examples.
Refer to the supplemental video for more examples.

Effect of Sparsity Regularization Losses.
We analyze the effect of the sparsity regularization loss in Equation 12 by varying its weight. Figure 7 (a) shows the mean fitting error (Chamfer distance) of 1k random chair pairs when varying the weight. Figures 7 (b) and (c) show the percentage of non-zero elements and non-zero columns of $B_x^{\dagger} \mathcal{F}(x)$ as the weight increases, for varying zero thresholds. The numbers of non-zero elements and columns decrease dramatically, indicating that the network discovers more strongly correlated deformation handles while still learning all possible deformations.

Next, we quantitatively evaluate our method using the ShapeNet [Chang et al. 2015] dataset and compare with the other baseline methods.
Dataset and Network Training.
For the quantitative evaluations, we use the same preprocessed ShapeNet dataset as Groueix et al. [2019]. This dataset contains five categories: Airplane, Car, Chair,
Fig. 7. Effect of Sparsity Regularization Losses. (a) Fitting error comparison when varying the weight of the sparsity regularization loss in Equation 12. The x-axis is the weight (in log-10 scale), and the y-axis is the fitting error measured as Chamfer distance. The dotted lines indicate the fitting errors of the baseline methods: 3DN [Wang et al. 2019a], Cycle Consistency (CC) [Groueix et al. 2019], and Neural Cages (NC) [Yifan et al. 2020]. (b) The percentage of non-zero elements with varying weights. The x-axis is the zero threshold (in log-10 scale). (c) The number of non-zero columns with varying weights.
Lamp, and Table. The difference of this dataset from the one used in Section 4.1 is that each point cloud is normalized to fit within a uniform cube whose size is 2 along each axis. All networks, including ours and the baselines, are trained as described in Section 4.1, taking 2k random sample points on each shape and feeding random source-target shape pairs in each category. To maximize the fitting capability, in this experiment we set the dimension of the latent space in our framework, $k$, to 512 and disable the sparsity regularization loss ($w_{S_{2,1}} = 0$).

Baselines.
We compare our method with non-rigid ICP by Huang et al. [2017] and three recent neural-network-based deformation methods mentioned in Section 3.4: 3DN [Wang et al. 2019a], Cycle Consistency (CC) [Groueix et al. 2019], and Neural Cages (NC) [Yifan et al. 2020]. We take point clouds as input to all networks and do not use the differentiable mesh sampling operator introduced in 3DN [Wang et al. 2019a], which can easily be plugged into any of the networks. All the baseline neural-network-based methods have a part of their architecture that computes a deformation (i.e., shape difference) from the given source and target shapes and applies it to the source shape. Thus, we implement deformation transfer for the other methods by applying the given source-target pair information to the other source shape, as in our method.
Ablation Study.
We test the impact of enforcing affine space properties by modifying the computation of the latent deformation vector in our networks. As mentioned in Section 3.4, instead of taking the difference of two latent codes, $(\mathcal{E}(y) - \mathcal{E}(x))$, we concatenate the latent codes of the source and target shapes and process them to produce the deformation vector. (Within the PointNet [Qi et al. 2017] classification architecture, we concatenate the global features of the shapes produced by the max-pooling layer and then pass them through the same subsequent MLP layers. The final output replaces $(\mathcal{E}(y) - \mathcal{E}(x))$ in our original architecture.)

Table 1. Comparisons of the fitting accuracy of deformation and the plausibility of deformation transfer results. We compare our DeformSyncNet (DSN) with Non-rigid ICP (NR-ICP) [Huang et al. 2017], 3DN [Wang et al. 2019a], Cycle Consistency (CC) [Groueix et al. 2019], and Neural Cages (NC) [Yifan et al. 2020]. Concat* is an ablation computing the latent deformation vector by concatenating the two latent codes of the shapes instead of taking their difference. As evaluation metrics, we use mean Intersection over Union (mIoU) of semantic parts and Fitting Chamfer Distance (CD) for the fitting accuracy, and Minimum Matching Distance (MMD) and Coverage (Cov) for the plausibility of deformation transfer results. MMD and Cov are measured based on Chamfer distance. Bold is the best result, and underscore is the second-best result. See Section 4.2 for details.
Category            Airplane   Car     Chair    Lamp     Table
mIoU (%) (↑ is better)
  NR-ICP            66.7       61.2    77.5     65.5     66.0
  3DN               58.4       48.3    58.1     45.9     46.6
  CC                67.8       61.1    77.6     —        —
  NC                —          —       —        —        —
  DSN               —          —       —        —        —
  Concat*           —          —       —        —        —
Fitting CD (× 10⁻ ) (↓ is better)
  NR-ICP            8.99       8.35    26.86    61.58    44.51
  3DN               2.54       5.04    6.32     14.15    8.51
  CC                3.26       4.18    9.81     30.65    14.61
  NC                6.73       7.49    18.82    41.40    25.80
  DSN               1.95       4.21    5.90     13.28    8.05
  Concat*           —          —       —        —        —
MMD-CD (× 10⁻ ) (↓ is better)
  3DN               13.31      8.15    56.18    116.95   107.14
  CC                7.44       6.45    26.83    56.89    53.39
  NC                —          —       —        —        —
  DSN               —          —       —        —        —
  Concat*           —          —       —        —        —
Cov-CD (%) (↑ is better)
  3DN               16.0       6.9     6.6      8.1      10.0
  CC                30.3       14.1    30.9     27.6     25.7
  NC                30.6       —       —        —        —
  DSN               —          —       —        —        —
  Concat*           —          —       —        —        —

Target-Driven Deformation and Transfer.
We compare the methods in two respects: fitting accuracy of deformations and plausibility of deformation transfer results.
[Figure 8 columns: Source, Target, Deformed (NR-ICP, 3DN, CC, NC, DSN), New Source, Transferred (3DN, CC, NC, DSN)]
Fig. 8. Examples of qualitative evaluation results. The first and second columns are the source and target shapes; the next five columns are the deformed source shapes fitted to the target, produced by the different methods; the eighth column is the new source shape; and the last four columns are the results of transferring the deformation from the source-target pair to the new source. The baseline methods compared with our DeformSyncNet (DSN) are Non-rigid ICP (NR-ICP) [Huang et al. 2017], 3DN [Wang et al. 2019a], Cycle Consistency (CC) [Groueix et al. 2019], and Neural Cages (NC) [Yifan et al. 2020].
For the fitting accuracy, we perform target-driven deformation, deforming a source shape to fit a target shape, with random 1,000 pairs of source and target test shapes in each category. Then, we measure the average Chamfer distance between the target and the deformed source shape (Fitting CD). Also, inspired by Groueix et al. [2019], we evaluate the fittings by finding the closest points from the deformed source shape to the target and measuring the mean Intersection over Union (mIoU) of the semantic parts provided by Yi et al. [2016]. This metric demonstrates how beneficial each method is for unsupervised co-segmentation or few-shot segmentation.
For the plausibility of deformation transfer results, quantitative evaluation is challenging since plausibility cannot be precisely determined without involving human perception. In our evaluation, we follow the ideas of Achlioptas et al. [2018] for evaluating generative models. Achlioptas et al. introduce two data-driven evaluation metrics: Minimum Matching Distance (MMD-CD) and Coverage (Cov-CD). Minimum Matching Distance is the average of the Chamfer distance from each generated shape to its closest shape in the reference dataset, indicating how much the output shapes look like real shapes. Coverage is the proportion of shapes in the reference dataset that are the closest shape of some generated shape, showing how many variations are covered by the generated shapes. For these two metrics, we randomly choose 200 pairs of source and new source shapes, and for each of the 10 target shapes, we transfer the source-to-target deformation to the new source shape. Then, we measure the metrics by taking the training set as the reference dataset; the averages over all target shapes are reported.
We report all results in Table 1. Bold is the best result, and underscore is the second-best result. Note that our DeformSyncNet is the only method that shows outstanding performance both in fitting accuracy and in the plausibility of deformation transfer results. For Fitting CD and mIoU, DeformSyncNet gives better accuracy in most of the categories compared with the other methods.
The network in the ablation study, concatenating two latent codes, is the only one that shows better fitting accuracy than DeformSyncNet in terms of Fitting CD, but its performance on the other metrics is poor, particularly on MMD-CD and Cov-CD. For MMD-CD and Cov-CD, our DeformSyncNet also outperforms the other methods. The only competitor is Neural Cages [Yifan et al. 2020], but it gives inferior fitting accuracy, as shown by Fitting CD.
Figure 8 illustrates some results of the quantitative evaluation. Our DeformSyncNet shows the best fitting results in most cases and also captures well the difference between the source and target shapes in the transfer. For instance, the source and target chairs in the fifth row differ in the width and height of the back part, and these differences are correctly transferred to the new shape, which has a different global structure. Also, in the sixth row, ours merges the chair arms of the source shape into the seat in the deformation, and this change is properly transferred to the new shape. The other methods tend not to transfer the difference but to perform target-oriented deformation, making the new source shape close to the target shape.
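The MMD-CD and Cov-CD metrics used in Table 1 can be sketched directly from the definitions above. This is a toy NumPy version following the textual definitions; the names, the Chamfer convention, and other details are assumptions and may differ from the authors' evaluation code:

```python
import numpy as np

def chamfer(p, q):
    """Symmetric Chamfer distance between point sets p (n, 3) and q (m, 3),
    using mean squared nearest-neighbor distances (one common convention)."""
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def mmd_and_coverage(generated, reference):
    """MMD: average distance from each generated shape to its closest reference.
    Coverage: fraction of reference shapes that are the nearest neighbor of
    some generated shape."""
    d = np.array([[chamfer(g, r) for r in reference] for g in generated])
    mmd = d.min(axis=1).mean()
    cov = len(set(d.argmin(axis=1))) / len(reference)
    return mmd, cov

rng = np.random.default_rng(5)
gen = [rng.standard_normal((16, 3)) for _ in range(3)]
# Comparing a set against itself: every shape matches itself exactly.
mmd, cov = mmd_and_coverage(gen, gen)
assert mmd == 0.0 and cov == 1.0
```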
Fig. 9. Qualitative results of the parallelogram consistency test. Our DeformSyncNet provides the most consistent deformation results given the two different pathways $z \oplus \overrightarrow{xy}$ and $y \oplus \overrightarrow{xz}$. See Section 4.2 for details.

Parallelogram Consistency Test.
One of the beneficial properties of an affine space is path invariance: producing the same deformation regardless of the pathway from the starting point to the destination. This property is desirable in downstream applications since it allows the user to explore the latent shape variation space freely, without specifying the order of steps. We test this property by taking a random parallelogram in the latent space. Given a triplet of shapes $x$, $y$, and $z$, deformations are transferred in two ways: $\overrightarrow{xy}$ to $z$ and $\overrightarrow{xz}$ to $y$ (see inset). Then, we verify whether these two results are the same by measuring the Chamfer distance.
The quantitative results compared with the other methods are reported in Table 2. Our DeformSyncNet gives much smaller differences between the two deformation pathways than the other methods. Cycle Consistency [Groueix et al. 2019], which uses regularization losses for consistency, provides the second-best results in all categories, but its fitting distance is much larger than ours. Figure 9 illustrates some of the qualitative results.

For further quantitative analysis and a user study, we experiment with the parametric 3D models provided by Schulz et al. [2017] and compare our method with the other methods.
Dataset and Network Training.
The dataset of Schulz et al. [2017] contains 74 parametric models, each of which has its own deformation parametrization. For each model, we uniformly sample 2k points and generate 1k variants with random parameters; 128 of these shapes are used as the test set. The main difference of this dataset from the ShapeNet data in Section 4.2 is that point correspondences are known across the shapes, since they are generated from the same parametric models. The point correspondences are not used as supervision in any experiment but are used as ground truth in the evaluations. During training, we do not project the deformed shape to the given deformation parameter space (see Equation 8).

Table 2. Quantitative results of the parallelogram consistency test. We measure the Chamfer distance between two deformation results (going to the same destination but along different pathways): $z \oplus \overrightarrow{xy}$ and $y \oplus \overrightarrow{xz}$. Our DeformSyncNet (DSN) gives the minimal difference between the two results compared with the other methods.

Category   Airplane   Car    Chair   Lamp     Table
CD (× 10⁻ ) (↓ is better)
  3DN      13.30      7.73   77.58   194.44   133.77
  CC       6.98       5.54   26.23   46.20    57.77
  NC       8.56       7.21   31.48   138.73   89.58
  DSN      —          —      —       —        —
  Concat*  —          —      —       —        —
Two-Way Consistency Evaluation.
As discussed in Section 3.3, we empirically verify the effect of our network design enforcing consistency in the deformation, compared with Cycle Consistency [Groueix et al. 2019], which leverages dedicated loss functions for consistency, and with the other baseline methods. We train the networks on the shapes of each parametric model and measure the two-way consistency error for a point cloud pair $(x, y)$ as
$$\frac{1}{n} \sum_i \left\| \left( d_i(x \to y) - x_i \right) + \left( d_i(y \to x) - y_i \right) \right\|,$$
where $x_i \in \mathbb{R}^3$ and $d_i(x \to y) \in \mathbb{R}^3$ are the $i$-th point of the point cloud $x$ and its new position after deformation toward $y$, respectively, $n$ is the number of points, and here we assume that all point clouds are ordered consistently based on the given point correspondences. We computed the mean of this two-way consistency error for 50 pairs, namely those with the 50 largest Chamfer distances among 1k randomly generated pairs. Figure 10 (a) shows a comparison between the results of our method and the baseline methods. Each dot indicates the consistency error for a single parametric model, and the x-axis and y-axis are for our DeformSyncNet and

Fig. 10. Two-way consistency and deformation transfer results using parametric models from Schulz et al. [2017]. (a) Comparison of two-way cycle consistency errors between DeformSyncNet and the other methods. Each dot is the result for one of the 74 parametric models, and the x- and y-axes are DeformSyncNet and the other methods, respectively. Note that in this visualization, a point above the diagonal line indicates that the corresponding method performs worse than ours. Compared with 3DN, Cycle Consistency (CC), and Neural Cages (NC), ours gives smaller errors in 70, 58, and 71 cases, respectively, out of the total 74 cases.
(b) Comparison of deformation transfer errors (Chamfer distance between the prediction and the ground truth). Again, ours gives smaller errors in 74, 69, and 73 cases compared with 3DN, Cycle Consistency (CC), and Neural Cages (NC), respectively.

the other methods, respectively. Our DeformSyncNet gives smaller two-way consistency errors than the other methods, including Cycle Consistency, in most cases: 70, 58, and 71 cases out of the 74 models compared with 3DN, Cycle Consistency, and Neural Cages, respectively.

Quantitative Evaluation of Deformation Transfer.
Using the given deformation parametrization, we also quantitatively evaluate the performance of deformation transfer. If we choose all of the source, target, and destination (the new shape to which the deformation is transferred) shapes from the same parametric model, the ground truth of the deformation transfer can be computed by applying the source-to-target parameter difference to the destination shape. Given the 50 test pairs per model above, we randomly pick another shape as the destination and compute the Chamfer distance between the predicted and the ground-truth shape. Figure 10 (b) illustrates the mean Chamfer distance; the x-axis is DeformSyncNet, and the y-axis is the other methods. In most cases, DeformSyncNet gives distances an order of magnitude smaller than the other methods. The numbers of cases out of the 74 models where DeformSyncNet outperforms 3DN, Cycle Consistency, and Neural Cages are 74, 69, and 73, respectively.
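The two-way consistency error used in the evaluation above can be sketched directly from its formula (a NumPy illustration assuming consistently ordered (n, 3) point arrays, as in this section):

```python
import numpy as np

def two_way_consistency(x, y, d_xy, d_yx):
    """Mean norm of the summed forward and backward per-point offsets.
    x, y are (n, 3) point clouds; d_xy, d_yx are the deformed clouds
    d(x->y) and d(y->x), in the same point order."""
    return np.linalg.norm((d_xy - x) + (d_yx - y), axis=1).mean()

rng = np.random.default_rng(6)
x, y = rng.standard_normal((2, 100, 3))
# Perfectly consistent deformations, d(x->y) = y and d(y->x) = x:
# the forward and backward offsets cancel exactly.
assert two_way_consistency(x, y, y, x) == 0.0
```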
User Study for Deformation Transfer.
We further assess the quality of deformation transfer results by taking the destination shape from a different parametric model (but in the same category) than the source and target shapes. Since we cannot compute the ground truth in this case, we conducted a user study on Amazon Mechanical Turk. Among the 74 parametric models, we grouped 8 Airplane, Chair, and Table models per category and trained all the networks for each group, while still taking the source and target shapes from the same parametric model. At test time, however, the destination shape is randomly selected from another model in the same group; note that we take the source and target shapes from the same parametric model so that the participants can clearly identify the difference. From the same 50 test pairs per model above, in total 1,200 (50 ×
Fig. 11. User study example. The source (A) and target (B) shapes in thefirst row are sampled from the same parametric model, while the destina-tion (C) shape is sampled from the other parametric model in the samecategory. The results of 3DN, Cycle Consistency, Neural Cages, and ourDeformSyncNet are shown in the second row with a randomly permutedorder. Multiple choices were allowed. models × at least one among four shown objects: the outputs of3DN, Cycle Consistency, Neural Cages, and our DeformSyncNet.The associations between the methods and the objects are hiddento the Turkers, and the order of the objects is randomized. TheTurkers are also encouraged to choose multiple objects if they thinkmore than one options are equally good. In total, we collected 1,200responses from a pool of 100 different participants (each questionwas answered by 10 distinct Turkers).In the results, our DeformSyncNet was selected in 54 .
7% of all1,200 responses, whereas 3DN, Cycle Consistency, Neural Cageswere selected in 30 . . . multiple choices, and thus the sum of percentages is greaterthan 100.) The performance gap between DeformSyncNet andNeural Cages (the second best) is statistically significant as theMcNemar’s contingency test [McNemar 1947] between the twoempirical (binomial) distributions has a p-value of 5 e − We examine the capability of our method of discovering shapestructure, such as symmetry and part connectivity. To simplify thetest, we employ a procedural model of tables from Tian et al . [Tianet al. 2019], which has a rectangular top and four legs at the corner.Once we train our networks with 5000 random samples of tables,in test time, we randomly perturb the default shape in 2000 timesand project them back to the learned shape space as describedin Section 4.1. Figure 12 shows examples of the perturbation andprojections. When measuring the difference of symmetric scales, x,y, and z scales of legs, and the gap between the top and legs, theaverage ratios compared to the scale of the default shape along eachaxis are 9 . . . . ACM Trans. Graph., Vol. 39, No. 6, Article 262. Publication date: December 2020.
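For reference, the McNemar contingency test used in the user study above operates on paired binary outcomes (for each question, whether each of two methods was selected); only the discordant counts matter. A minimal exact version, a sketch using just the standard library (the function name and inputs are illustrative):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test [McNemar 1947].
    b: questions where only method A was selected,
    c: questions where only method B was selected.
    Under the null hypothesis, the b + c discordant pairs
    split as Binomial(b + c, 0.5)."""
    n, k = b + c, min(b, c)
    # Two-sided p-value: twice the tail probability of the smaller count, capped at 1.
    p = 2.0 * sum(comb(n, i) for i in range(k + 1)) / 2.0 ** n
    return min(p, 1.0)

print(mcnemar_exact(40, 15))  # strongly asymmetric counts give a small p-value
```

With the observed selection counts of two methods tallied into `b` and `c`, a p-value below the chosen significance level rejects the hypothesis that the two methods are preferred equally often.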
Fig. 12. Examples of perturbed (left) and projected (right) synthetic tables.
Fig. 13. Qualitative analysis of the effect of projecting deformations to the deformation handle space during network training. The third and fourth columns demonstrate that training with the projection gives more plausible deformation results. However, if the results are projected at test time, the difference becomes negligible.
When deformation handles are given for the shapes in the training dataset, we can project the deformed source shape to the given deformation handle space during network training, as explained in Section 3.2. In addition to the sparsity loss regularizing the dictionaries (Equation 12), we can also project the deformed shape to the given space (Equation 8) and measure the fitting loss against it (changing Equation 9 to Ch(d_proj(x → y), y)). Interestingly, a qualitative analysis demonstrates that the projection during training guides the network to learn more plausible variations close to the input shape. Figure 13 shows results of training our network with and without the projection, using the ShapeNet dataset and part bounding boxes from Section 4.1. At test time, the network trained with the projection provides more reasonable variations of the source shapes (third and fourth columns), although this visual improvement does not significantly change the fitting distance to the target (Table 3). If we additionally perform the same projection at test time (fifth and sixth columns), the difference between the two cases, training with and without the projection, becomes negligible despite the discrepancy in the results before the projection.

Table 3. Quantitative results of training DeformSyncNet with and without the projection to the deformation handle space. The projection during training does not make notable changes in any of the deformation transfer evaluation metrics. See Section 4.1 for details of the evaluation metrics.
Category                          Airplane  Car  Chair  Sofa  Table
Fitting CD (×10⁻…) ↓   w/o Proj.  3.53  …        w/ Proj.  …
MMD-CD (×10⁻…) ↓       w/o Proj.  …              w/ Proj.  1.13  2.11  7.25  …
Cov-CD (%) ↑           w/o Proj.  …              w/ Proj.  …

While our framework is designed to decode the latent space to a linear subspace of point cloud offsets, we also demonstrate that our framework can be extended to decode each axis of the latent space to a non-linear trajectory, although without guaranteeing the affine properties. The matrix multiplication in Equation 3 can be rewritten as the following per-point function:
$$\mathbf{d}_i(x \to y) \coloneqq \sum_j \mathbf{F}_{ij}(x) \left( E_j(y) - E_j(x) \right) + \mathbf{x}_i, \qquad (14)$$
where $\mathbf{F}_{ij}(x) \in \mathbb{R}^3$ is the $j$-th offset at the $i$-th point, $E_j(x) \in \mathbb{R}$ is the $j$-th element of $\mathrm{E}(x)$, and $\mathbf{x}_i$ is the position of the $i$-th point of $x$. We generalize this formulation by redefining $\mathbf{F}_{ij}(x)$ as a function describing an arc-length trajectory with the parameter $\left( E_j(y) - E_j(x) \right)$:
$$\mathbf{d}_i(x \to y) \coloneqq \sum_j \mathbf{F}_{ij}\left( x, \, E_j(y) - E_j(x) \right) + \mathbf{x}_i. \qquad (15)$$
For example, a uniform circular trajectory with a parameter $t$ is formulated as follows:
$$\mathbf{F}_{ij}(x, t) \coloneqq \left( \exp\left( [\, t \, \mathbf{R}_{ij}(x) \,]_\times \right) - \mathbf{I}_{3 \times 3} \right) \left( \mathbf{x}_i - \mathbf{C}_{ij}(x) \right), \qquad (16)$$
where $\mathbf{R}_{ij}(x) \in \mathbb{R}^3$ is the rotation vector describing the axis and angle, $\mathbf{C}_{ij}(x) \in \mathbb{R}^3$ is the rotation center, and $[\cdot]_\times$ is the cross-product matrix of the input vector. This formulation means that each element $\left( E_j(y) - E_j(x) \right)$ in the latent vector, shared across all the points, indicates a scale of the rotation angle, or exactly the rotation angle if $\mathbf{R}_{ij}(x)$ is normalized to a unit vector.

We test this extension using the Shape2Motion dataset [Wang et al. 2019b], where the 3D models are annotated with movable parts and their motion parameters, e.g., parameters of rotation and/or translation.
We picked three models in different categories (carton, eyeglasses, and swing) that include rotations, and generated 1k shape variations (128 of which form the test set) by uniformly sampling rotation angles. Table 4 compares the fitting errors between learning linear and circular trajectories (the first two rows). The dimension of the latent space $k$ is set to the degrees of freedom of each model. Learning circular trajectories gives a smaller fitting error. If we instead set a large dimension for the latent space $k$ (the last two rows), however, the network learning linear trajectories can also achieve a very small fitting error by encoding all the rotational motions in the
Table 4. Results of learning rotational motions on the Shape2Motion dataset [Wang et al. 2019b] and fitting error comparison. DSN (L) and DSN (C) indicate DeformSyncNet learning linear and circular trajectories, respectively. Learning circular trajectories performs better when the dimension of the latent space k is set to the number of rotating parts (DoF, the first two rows). Learning linear trajectories also performs well if the dimension of the latent space becomes high, e.g., k = … (the last two rows). Neural Cages (NC) fails to learn the rotational motions.

Model              Carton  Eyeglasses  Swing
CD (×10⁻…) ↓, lower is better
DSN (L, k = DoF)   0.14    0.33        0.41
DSN (C, k = DoF)   …       …           …
CC                 0.09    0.26        0.78
NC                 1.94    0.71        7.04
DSN (L, k = …)     …       …           …
DSN (C, k = …)     …       …           …

Fig. 14. The learned dictionaries of rotational motions on the Shape2Motion dataset [Wang et al. 2019b]. Each element in the dictionaries describes the motion of an independent folding part, such as the flaps of the carton. See the supplementary video for animation.

high-dimensional latent space. Note that Neural Cages [Yifan et al. 2020], which also learns only linear deformation offsets, fails on this dataset (the fifth row). We also observe that our network can discover independent rotation motions in the learned dictionaries, particularly when the rotation vector $\mathbf{R}_{ij}(x)$ is normalized to a unit vector; this normalization regularizes each rotation element to represent a same-angle rotation. Figure 14 visualizes the learned rotation dictionaries, where each folding part is identified by a separate element. An animation can also be seen in the supplementary video.

Note that, among the affine properties in Section 3.1, the transitivity property is not guaranteed when learning non-linear deformations; i.e., the latent vector may not be uniquely determined given a specific deformation. However, the property can be practically enforced by leveraging regularizations that make each element unique.

CONCLUSION

We have proposed DeformSyncNet, a neural-network-based framework that learns a synchronized linear deformation space for each shape. The synchronization is achieved without supervised correspondences, but by connecting each shape-specific deformation space to an idealized canonical latent space where all possible deformations are encoded. From this latent space, an encoded deformation is realized directly on each shape through per-point offsets via a shape-specific action decoding.
As applications, our framework demonstrates (i) deformation projection, snapping a user-edited shape to the plausible shape space, and (ii) deformation transfer, adapting a modification performed on one shape to other shapes in the same category.

Our framework has several limitations. While we leverage the deformation handle information during network training via projection, it is only exploited in the loss function and not fed as input to the network. Since most deformation handles are associated not with the whole shape but with a part of it, part-level features related to them could provide additional information for the deformation space of the shape. Also, we take point clouds as input and use Chamfer distance as the only supervision. A more advanced backbone architecture and regularization losses for handling meshes could help learn more plausible deformations [Gao et al. 2018; Wang et al. 2019a]. We also introduced the extension of our framework to non-linear deformation in Section 4.6, but enforcing the affine properties there remains to be explored. Furthermore, the variations of articulated shapes may include hierarchical structure, such that the variation of a smaller part is factorized from the variation of a larger part. Such structure might be better understood by finding elements in the deformation dictionary not in parallel, but sequentially. Finally, analogies, i.e., the transfer of the difference between two shapes to a third, can be extended to cross-domain cases, such as 3D shapes from/to images [Mo et al. 2019b] or natural language [Achlioptas et al. 2019].
ACKNOWLEDGMENTS
We thank the anonymous reviewers for their comments and suggestions. N. J. Mitra acknowledges the support of an ERC PoC Grant, a Google Faculty Award, a Royal Society Advanced Newton Fellowship, and gifts from Adobe. L. J. Guibas acknowledges the support of a Vannevar Bush Faculty Fellowship, a Samsung GRO grant, a Google Daydream Research Award, and gifts from the Adobe, Autodesk, and Snap corporations.
REFERENCES
Panos Achlioptas, Olga Diamanti, Ioannis Mitliagkas, and Leonidas Guibas. 2018. Learning Representations and Generative Models for 3D Point Clouds. In ICML.
Panos Achlioptas, Judy Fan, Robert X. D. Hawkins, Noah D. Goodman, and Leonidas J. Guibas. 2019. ShapeGlot: Learning Language for Shape Differentiation. In ICCV.
Ilya Baran, Daniel Vlasic, Eitan Grinspun, and Jovan Popović. 2009. Semantic Deformation Transfer. In ACM SIGGRAPH.
Mirela Ben-Chen, Ofir Weber, and Craig Gotsman. 2009. Spatial Deformation Transfer. In ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qi-Xing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. 2015. ShapeNet: An Information-Rich 3D Model Repository. arXiv:1512.03012.
Lu Chen, Jin Huang, Hanqiu Sun, and Hujun Bao. 2010. Cage-based deformation transfer. Computers & Graphics (2010).
Dawson-Haggerty et al. [n.d.]. trimesh. https://trimsh.org/
Chris Ding, Ding Zhou, Xiaofeng He, and Hongyuan Zha. 2006. R1-PCA: Rotational Invariant L1-Norm Principal Component Analysis for Robust Subspace Factorization. In ICML.
Haoqiang Fan, Hao Su, and Leonidas Guibas. 2017. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. In CVPR.
Noa Fish, Melinos Averkiou, Oliver van Kaick, Olga Sorkine-Hornung, Daniel Cohen-Or, and Niloy J. Mitra. 2014. Meta-representation of Shape Families. In ACM SIGGRAPH.
Ran Gal, Olga Sorkine, Niloy J. Mitra, and Daniel Cohen-Or. 2009. iWIRES: An Analyze-and-Edit Approach to Shape Manipulation. In ACM SIGGRAPH.
Jean Gallier. 2011. Geometric Methods and Applications. Springer.
Lin Gao, Jie Yang, Yi-Ling Qiao, Yu-Kun Lai, Paul L. Rosin, Weiwei Xu, and Shihong Xia. 2018. Automatic Unpaired Shape Deformation Transfer. In ACM SIGGRAPH Asia.
Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T. Freeman, and Thomas Funkhouser. 2019. Learning Shape Templates with Structured Implicit Functions. In ICCV.
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan Russell, and Mathieu Aubry. 2018. AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In CVPR.
Thibault Groueix, Matthew Fisher, Vladimir G. Kim, Bryan C. Russell, and Mathieu Aubry. 2019. Deep Self-Supervised Cycle-Consistent Deformation for Few-Shot Shape Segmentation. In Eurographics Symposium on Geometry Processing.
Rana Hanocka, Noa Fish, Zhenhua Wang, Raja Giryes, Shachar Fleishman, and Daniel Cohen-Or. 2018. ALIGNet: Partial-Shape Agnostic Alignment via Unsupervised Learning. ACM Transactions on Graphics (2018).
Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. 2001. Image Analogies. In ACM SIGGRAPH.
Haibin Huang, Evangelos Kalogerakis, Siddhartha Chaudhuri, Duygu Ceylan, Vladimir G. Kim, and Ersin Yumer. 2017. Learning Local Shape Descriptors from Part Correspondences with Multiview Convolutional Networks. ACM Transactions on Graphics (2017).
Ruqi Huang, Panos Achlioptas, Leonidas Guibas, and Maks Ovsjanikov. 2019a. Limit Shapes – A Tool for Understanding Shape Differences and Variability in 3D Model Collections. In Eurographics Symposium on Geometry Processing.
Ruqi Huang, Marie-Julie Rakotosaona, Panos Achlioptas, Leonidas J. Guibas, and Maks Ovsjanikov. 2019b. OperatorNet: Recovering 3D Shapes From Difference Operators. In ICCV.
Takeo Igarashi, Tomer Moscovich, and John F. Hughes. 2005. As-Rigid-as-Possible Shape Manipulation. In ACM SIGGRAPH.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. 2017. Image-to-Image Translation with Conditional Adversarial Networks. In CVPR.
Dominic Jack, Jhony K. Pontes, Sridha Sridharan, Clinton Fookes, Sareh Shirazi, Frederic Maire, and Anders Eriksson. 2018. Learning Free-Form Deformations for 3D Object Reconstruction. In ICCV.
Vladimir G. Kim, Wilmot Li, Niloy J. Mitra, Siddhartha Chaudhuri, Stephen DiVerdi, and Thomas Funkhouser. 2013. Learning Part-based Templates from Large Collections of 3D Shapes. In ACM SIGGRAPH.
Andrey Kurenkov, Jingwei Ji, Animesh Garg, Viraj Mehta, JunYoung Gwak, Christopher Bongsoo Choy, and Silvio Savarese. 2018. DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image. In WACV.
Hao Li, Robert W. Sumner, and Mark Pauly. 2008. Global Correspondence Optimization for Non-Rigid Registration of Depth Scans. In Eurographics Symposium on Geometry Processing.
Lingxiao Li, Minhyuk Sung, Anastasia Dubrovina, Li Yi, and Leonidas Guibas. 2019. Supervised Fitting of Geometric Primitives to 3D Point Clouds. In CVPR.
Yaron Lipman, Olga Sorkine, Daniel Cohen-Or, and David Levin. 2005. Linear Rotation-Invariant Coordinates for Meshes. In ACM SIGGRAPH.
Jerry Liu, Fisher Yu, and Thomas Funkhouser. 2017. Interactive 3D Modeling with a Generative Adversarial Network. arXiv:1706.05170.
Chongyang Ma, Haibin Huang, Alla Sheffer, Evangelos Kalogerakis, and Rui Wang. 2009. Analogy-driven 3D style transfer. In Eurographics.
Q. McNemar. 1947. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika (1947).
Eloi Mehr, Ariane Jourdan, Nicolas Thome, Matthieu Cord, and Vincent Guitteny. 2019. DiscoNet: Shapes Learning on Disconnected Manifolds for 3D Editing. In ICCV.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed Representations of Words and Phrases and Their Compositionality. In NeurIPS.
Kaichun Mo, Paul Guerrero, Li Yi, Hao Su, Peter Wonka, Niloy Mitra, and Leonidas Guibas. 2019a. StructEdit: Learning Structural Shape Variations. arXiv:1911.11098.
Kaichun Mo, Paul Guerrero, Li Yi, Hao Su, Peter Wonka, Niloy J. Mitra, and Leonidas Guibas. 2019b. StructureNet: Hierarchical Graph Networks for 3D Shape Generation. In ACM SIGGRAPH Asia.
Kaichun Mo, Shilin Zhu, Angel X. Chang, Li Yi, Subarna Tripathi, Leonidas J. Guibas, and Hao Su. 2019c. PartNet: A Large-scale Benchmark for Fine-grained and Hierarchical Part-level 3D Object Understanding. In CVPR.
Feiping Nie, Heng Huang, Xiao Cai, and Chris Ding. 2010. Efficient and Robust Feature Selection via Joint ℓ2,1-Norms Minimization. In NeurIPS.
Maks Ovsjanikov, Wilmot Li, Leonidas Guibas, and Niloy Mitra. 2011. Exploration of Continuous Variability in Collections of 3D Shapes. In ACM SIGGRAPH.
Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. 2017. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In CVPR.
Raif M. Rustamov, Maks Ovsjanikov, Omri Azencot, Mirela Ben-Chen, Frédéric Chazal, and Leonidas Guibas. 2013. Map-Based Exploration of Intrinsic Shape Differences and Variability. In ACM SIGGRAPH.
Adriana Schulz, Ariel Shamir, Ilya Baran, David I. W. Levin, Pitchaya Sitthi-amorn, and Wojciech Matusik. 2017. Retrieval on Parametric Shape Collections. In ACM SIGGRAPH.
Olga Sorkine, Daniel Cohen-Or, Yaron Lipman, Marc Alexa, Christian Rössl, and Hans-Peter Seidel. 2004. Laplacian Surface Editing. In Eurographics Symposium on Geometry Processing.
Stratasys. [n.d.]. GrabCAD Community. https://grabcad.com/library
Robert W. Sumner and Jovan Popović. 2004. Deformation Transfer for Triangle Meshes. In ACM SIGGRAPH.
Minhyuk Sung, Hao Su, Vladimir G. Kim, Siddhartha Chaudhuri, and Leonidas Guibas. 2017. ComplementMe: Weakly-supervised Component Suggestions for 3D Modeling. In ACM SIGGRAPH Asia.
Minhyuk Sung, Hao Su, Ronald Yu, and Leonidas Guibas. 2018. Deep Functional Dictionaries: Learning Consistent Semantic Structures on 3D Models from Functions. In NeurIPS.
A. R. Tarrida. 2011. Affine Maps, Euclidean Motions and Quadrics. Springer.
Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, and Jiajun Wu. 2019. Learning to Infer and Execute 3D Shape Programs. In ICLR.
Trimble. [n.d.]. 3D Warehouse. https://3dwarehouse.sketchup.com/
Shubham Tulsiani, Hao Su, Leonidas J. Guibas, Alexei A. Efros, and Jitendra Malik. 2017. Learning Shape Abstractions by Assembling Volumetric Primitives. In CVPR.
Weiyue Wang, Duygu Ceylan, Radomir Mech, and Ulrich Neumann. 2019a. 3DN: 3D Deformation Network. In CVPR.
Xiaogang Wang, Bin Zhou, Yahao Shi, Xiaowu Chen, Qinping Zhao, and Kai Xu. 2019b. Shape2Motion: Joint Analysis of Motion Parts and Attributes from 3D Shapes. In CVPR.
Yanzhen Wang, Kai Xu, Jun Li, Hao Zhang, Ariel Shamir, Ligang Liu, Zhi-Quan Cheng, and Y. Xiong. 2011. Symmetry Hierarchy of Man-Made Objects. In Eurographics.
Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T. Freeman, and Joshua B. Tenenbaum. 2016. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling. In NeurIPS.
Shihong Xia, Congyi Wang, Jinxiang Chai, and Jessica Hodgins. 2015. Realtime style transfer for unlabeled heterogeneous human motion. ACM Transactions on Graphics (2015).
Kai Xu, Honghua Li, Hao Zhang, Daniel Cohen-Or, Yueshan Xiong, and Zhi-Quan Cheng. 2010. Style-Content Separation by Anisotropic Part Scales. In ACM SIGGRAPH Asia.
Weiwei Xu, Jun Wang, KangKang Yin, Kun Zhou, Michiel van de Panne, Falai Chen, and Baining Guo. 2009. Joint-aware Manipulation of Deformable Models. In ACM SIGGRAPH.
Jie Yang, Lin Gao, Yu-Kun Lai, Paul L. Rosin, and Shihong Xia. 2018. Biharmonic deformation transfer with automatic key point selection. Graphical Models (2018).
Li Yi, Leonidas Guibas, Aaron Hertzmann, Vladimir G. Kim, Hao Su, and Ersin Yumer. 2017. Learning Hierarchical Shape Segmentation and Labeling from Online Repositories. In ACM SIGGRAPH.
Li Yi, Vladimir G. Kim, Duygu Ceylan, I-Chao Shen, Mengyan Yan, Hao Su, Cewu Lu, Qixing Huang, Alla Sheffer, and Leonidas Guibas. 2016. A Scalable Active Framework for Region Annotation in 3D Shape Collections. In ACM SIGGRAPH Asia.
Wang Yifan, Noam Aigerman, Vladimir Kim, Siddhartha Chaudhuri, and Olga Sorkine-Hornung. 2020. Neural Cages for Detail-Preserving 3D Deformations. arXiv:1912.06395.
Kangxue Yin, Zhiqin Chen, Hui Huang, Daniel Cohen-Or, and Hao Zhang. 2019. LOGAN: Unpaired Shape Transform in Latent Overcomplete Space. In ACM SIGGRAPH Asia.
Ersin Yumer and Levent Burak Kara. 2014. Co-Constrained Handles for Deformation in Shape Collections. In ACM SIGGRAPH Asia.
Ersin Yumer and Niloy J. Mitra. 2016. Learning Semantic Deformation Flows with 3D Convolutional Networks. In ECCV.
Yongheng Zhao, Tolga Birdal, Haowen Deng, and Federico Tombari. 2019. 3D Point Capsule Networks. In CVPR.
Youyi Zheng, Daniel Cohen-Or, Melinos Averkiou, and Niloy J. Mitra. 2014. Recurring Part Arrangements in Shape Collections. In Eurographics.
Youyi Zheng, Hongbo Fu, Daniel Cohen-Or, Oscar Kin-Chung Au, and Chiew-Lan Tai. 2011. Component-wise Controllers for Structure-Preserving Shape Manipulation. In Eurographics.
Kun Zhou, Weiwei Xu, Yiying Tong, and Mathieu Desbrun. 2010. Deformation Transfer to Multi-Component Objects. In Eurographics.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. In ICCV.