A curvature and density-based generative representation of shapes

Computer Graphics Forum, Volume 39 (2020), Number 2, pp. 1–15

Z. Ye, N. Umetani, T. Igarashi and T. Hoffmann
TU Munich, Germany; The University of Tokyo, Japan
Figure 1: Brain autoencoder. We build a curvature-to-curvature autoencoder and compare it to two models based on point clouds, AtlasNet [GFK∗18] (point clouds to surfaces) and the point-cloud AE [ADMG18] (point clouds to point clouds), to the voxel-based model OGN [TDB17] (IDs to voxels), and to a mesh-based baseline model which replaces the curvature in our model with vertex coordinates. (Panels: ground truth (FreeSurfer), point-cloud AE, AtlasNet, OGN, baseline, ours.) All the neural networks, except for OGN, are trained on 1400 cortical surfaces and validated on 200 surfaces which do not appear in the training set. Three of the predicted surfaces from the validation data are shown above. Although all the models can restore the brain structure at a large scale, only our model preserves the local fine structure. For more details see Section 4.3.
Abstract
This paper introduces a generative model for 3D surfaces based on a representation of shapes with mean curvature and metric, which are invariant under rigid transformations. Hence, compared with existing 3D machine learning frameworks, our model substantially reduces the influence of translation and rotation. In addition, the local structure of shapes is captured more precisely, since the curvature is explicitly encoded in our model. Specifically, every surface is first conformally mapped to a canonical domain, such as a unit disk or a unit sphere. Then, it is represented by two functions over this canonical domain: the mean curvature half-density and the vertex density. Assuming that the input shapes follow a certain distribution in a latent space, we use a variational autoencoder to learn the latent space representation. After the learning, we can generate variations of shapes by randomly sampling the distribution in the latent space. Surfaces with triangular meshes can be reconstructed from the generated data by applying isotropic remeshing and a spin transformation, which is given by the Dirac equation. We demonstrate the effectiveness of our model on datasets of man-made and biological shapes and compare the results with other methods.
CCS Concepts • Computing methodologies → Learning latent representations; Mesh geometry models;
1. Introduction
While the convolutional neural network has achieved significant success in 2D image processing, more and more attention has recently been drawn to applying the technique to the domain of 3D shapes. Unlike 2D images, which are typically represented by a multidimensional tensor, the representation of 3D shapes is usually unstructured, hence the convolutional neural network is not directly applicable. Thus the main challenge is to create a suitable representation for 3D shapes which can take advantage of the state-of-the-art machine learning frameworks. Several such representations have been proposed, e.g., based on point clouds [FSG17, ADMG18, GFK∗18].

© 2020 The Author(s). Computer Graphics Forum © 2020 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.

Figure 2: The pipeline of our model for generating variant shapes: (a) the conformal parameterization (Section 3.1), (b) the density function extraction (Section 3.2), (c) the mean curvature half-density extraction (Section 3.2), (d) learning and generating (Section 3.3), (e) the isotropic remeshing (Section 3.4), (f) solving the Dirac equation and applying the spin transformation (Section 3.5).
2. Related Work

2.1. Which invariant quantities determine an immersed surface in R³?

It is well known that an immersed surface in R³ is determined up to a Euclidean motion by its first and second fundamental forms. However, their representation depends on the choice of coordinates. Hence, in order to consistently represent 3D shapes based on the two fundamental forms, an identical triangulation for all shapes, which is not always possible, is required.

Figure 3: The spherical conformal parameterizations of two animals are aligned by a Möbius transformation with three landmark points. Then, they are packed into tensors with dimension 320 × 32 × 32 × 2. This figure shows a linear interpolation between the curvature representations of two shapes and the resulting shape reconstruction from the curvature representation.

Other options are point-wise shape descriptors such as the heat kernel signature [SOG09] and the wave kernel signature [ASC11]. Indeed, they have been employed in discriminative models for 3D shape classification and segmentation [BMM∗15].

Now, we sketch the idea of how to construct a surface from the mean curvature half-density. Roughly speaking, for every point on the surface we rotate its infinitesimal neighbourhood with a quaternion. Recall that a quaternion is a 4-dimensional vector q = a + bi + cj + dk with the multiplicative structure i² = j² = k² = −1, ij = −ji = k, jk = −kj = i, ki = −ik = j. We always identify vectors in R³ with pure imaginary quaternions, (x, y, z) ↦ xi + yj + zk. Any quaternion can be written as q = |q| (cos θ + sin θ · u), where θ ∈ [0, π) and u ∈ R³ ⊂ H is a unit vector. It is well known that q gives a scale-rotation in R³ with scaling factor |q|², rotation angle 2θ and rotation axis u. The rotation is given by R_q(v) = q̄ · v · q.

The explicit construction of shapes from the mean curvature half-density and the conformal structure is called a spin transformation. Suppose we are given an immersion of a surface f : M → R³ and a quaternion-valued function on the surface φ : M → H, which is understood as a continuously varying scale-rotation at each point. We scale and rotate every tangent plane by

    d f̃ = φ̄ · df · φ.   (1)

However, there is no guarantee that these rotated tangent planes will again form a surface. For a simply connected surface, d f̃ is again the differential of an immersed surface if and only if it is closed, d(d f̃) = 0, which is equivalent to

    D_f φ = ρφ,   (2)

where the Dirac operator is defined by

    D_f φ = − (df ∧ dφ) / |df|²,   (3)

and ρ : M → R is a real-valued function.
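The quaternionic scale-rotation used above can be sketched numerically. The snippet below is a minimal numpy sketch (not the authors' implementation): `qmul` is the Hamilton product, and `rotate` applies v ↦ q̄ · v · q to a vector identified with a pure imaginary quaternion.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1*a2 - b1*b2 - c1*c2 - d1*d2,
        a1*b2 + b1*a2 + c1*d2 - d1*c2,
        a1*c2 - b1*d2 + c1*a2 + d1*b2,
        a1*d2 + b1*c2 - c1*b2 + d1*a2,
    ])

def conj(q):
    """Quaternion conjugate q̄."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def rotate(q, v):
    """Scale-rotation R_q(v) = q̄ · v · q of a vector v in R^3,
    identified with the pure imaginary quaternion (0, x, y, z)."""
    return qmul(qmul(conj(q), np.array([0.0, *v])), q)[1:]

s = np.sqrt(0.5)
q = np.array([s, 0.0, 0.0, s])        # unit quaternion with axis k (the z-axis)
x_rotated = rotate(q, np.array([1.0, 0.0, 0.0]))  # x-axis is carried to the -y-axis
```

For a non-unit q the result is additionally scaled by |q|², matching the scale-rotation described in the text.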
Therefore, any solution of equation (2) induces a new immersion f̃ : M → R³ by f̃ = ∫_M d f̃. Moreover, the mean curvature H̃ of f̃ is given by

    H̃ |d f̃| = H |df| + ρ |df|,   (4)

where H is the mean curvature of the original surface f. Observe that, due to the scaling factor |d f̃| in (4), one cannot fully control the mean curvature H̃. However, by introducing a variant notion, namely the mean curvature half-density

    h := H |df|,   (5)

equation (4) turns into

    h̃ = h + ρ |df|.   (6)

This means that the mean curvature half-density h̃ can be precisely realized as long as the solution φ of equation (2) exists.

Crane et al. [CPS11] first discretized equation (3) and showed applications in computer graphics, such as curvature painting. Among the follow-up works, Crane et al. [CPS13] use the spin transformation for surface fairing, and Liu et al. [LJC17] construct a continuous spectrum of operators between the square of the Dirac operator and the Laplace–Beltrami operator, which are utilized to enhance surface matching and segmentation. Ye et al. [YDT∗18] create a framework which consistently discretizes the extrinsic Dirac operator and an intrinsic Dirac operator. In this paper, we improve the reconstruction based on [CPS11, YDT∗18].

Various representations of surfaces have been proposed for 3D shape generation, e.g., models based on a volumetric representation [WZX∗16]. Ben-Hamu et al. [BHMK∗18] propose a representation based on multiple charts, which conformally map different parts of shapes to a domain. Since the features over each chart are normalized separately, the fine structure is better preserved than with a single chart. However, while the creation of such charts requires a sparse correspondence, the reconstruction of shapes from the charts needs a template shape, which amounts to a dense correspondence. In order to find such a correspondence, one has to introduce a time-consuming workflow beforehand. Groueix et al. [GFK∗18] reconstruct surfaces from point clouds by deforming a set of parameterized patches (AtlasNet). Other work uses the same Dirac operator as ours, but merely replaces the Laplace–Beltrami operator in the neural network with the Dirac operator; thus the real power of the Dirac operator, namely its connection to conformal transformations, is not exploited.
3. Method
The main pipeline of our model is depicted in Figure 2. In the sequel, we explain the detailed methods for encoding shapes with curvature and vertex density in Sections 3.1 and 3.2, building a neural network based on our representation in Section 3.3, and reconstructing shapes in Sections 3.4 and 3.5.
Encoding the Conformal Structure
In the discrete case, how can we encode a shape in the scheme of the Bonnet problem (Section 2.1)? While the mean curvature half-density can be represented by a vertex-based or face-based function, it is not straightforward to pack the conformal structure into a form that is suitable for a machine learning pipeline. For example, we can recover the shape of a cow from its spherical conformal parameterization ((b) in Figure 4) by prescribing the function of mean curvature half-density ((c) in Figure 4). But it is not clear how to represent a spherical mesh that is conformally equivalent to a given shape purely by scalar functions. One might consider the notion of discrete conformal equivalence for triangular meshes given by the length cross-ratios on edges [SSP08], but it is unclear how to transfer the length cross-ratios across different meshes.

Figure 4: [YDT∗18] shows that a simply connected surface in R³ can be faithfully reconstructed from its conformal parameterization by prescribing the mean curvature half-density: (a) the isotropically remeshed input, (b) the conformal parameterization, (c) adding the curvature, (d) the reconstruction.

Recall that the conformal structure is the set of metrics modulo the equivalence relation g ∼ e^u g, i.e., two metrics are identified if they only differ by a scaling at each point. Therefore, instead of encoding the conformal structure, we encode the metric of shapes. In general, the space of all metrics still does not have an efficient form of representation, thus we focus on a smaller subset, i.e., isotropic meshings. The conformal map is locally isotropic, i.e., it takes an isotropic mesh to a close-to-isotropic mesh (see the zoom-in in Figure 4), and isotropic meshings are usually generated by the centroidal Voronoi tessellation (CVT) with respect to a density function [ADVDI03]; hence this density function can be utilized as an approximation of the metric. Therefore, at the beginning of our pipeline all input shapes are isotropically remeshed (like (a) in Figure 4). Then, we successively take the following procedures.

We map all the shapes to a canonical domain, e.g., the unit disk for disk-like surfaces and the unit sphere for spherical surfaces. The resulting disk-like or spherical meshes are called the conformal parameterization. However, these maps are not unique but differ by a conformal automorphism of the domain. To deal with this ambiguity, one may choose from the following approaches depending on the application:
Landmark alignment
We know that the conformal automorphisms of S², i.e., the Möbius transformations, are fully determined by three distinguished points, and a conformal automorphism of the disk is determined by one point and one rotation. Hence we choose two landmark points for disk-like surfaces and three landmark points for closed surfaces and align these landmarks via a conformal mapping. One example is shown in Figure 12. Landmark-free alignment
For example, [BCK18] propose a canonical Möbius transformation such that the mass center is aligned with the sphere center. Then, we register two spherical meshes with centered Möbius transformations by searching for an optimal rotation.
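As a concrete instance of landmark alignment on the disk: its conformal automorphisms have the form f(z) = e^{iθ}(z − a)/(1 − āz), and choosing a and θ sends one landmark to the origin and places a second one on the positive real axis. A small sketch with hypothetical landmarks u, v (not code from the paper):

```python
import numpy as np

def disk_align(u, v):
    """Return the disk automorphism f(z) = e^{i*theta} (z - a)/(1 - conj(a) z)
    that sends the landmark u to 0 and rotates so that f(v) lands on the
    positive real axis."""
    def f0(z):
        return (z - u) / (1 - np.conj(u) * z)
    phase = f0(v) / abs(f0(v))        # e^{i*arg f0(v)}
    return lambda z: f0(z) / phase    # divide out the phase: f(v) becomes real > 0

u, v = 0.3 + 0.2j, -0.5 + 0.1j        # hypothetical landmark positions
f = disk_align(u, v)
```

Composing one aligned map with the inverse of another registers two parameterizations, which is how the landmark alignment is used in Section 4.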
Without any alignment at all
This will result in a larger shape latent space and consequently poses higher demands on the capacity of the neural network because, for example, a rotation of a shape might also cause a rotation of the curvature function. However, our model is particularly good at capturing this uncertainty (see the discussion in Section 4.2).

There are many algorithms available for the conformal parameterization of disk-like and spherical surfaces, e.g., [GWC∗04, CPS13, CL15, CLL15, SC17, YDT∗18].

In order to build the neural network, we need fixed meshes for the canonical domains. In particular, we use the standard 256 × 256 grid for the disk and a subdivided icosahedron for the sphere.

Mean curvature half-density
The mean curvature half-density h is a face-based function given by [YDT∗18]

    h_i = Σ_j |e_ij| tan(θ_ij/2) / √(A_i),   (7)

where the sum runs over the edges e_ij of the face T_i, θ_ij are the bending angles at the edges e_ij and A_i is the face area. Vertex density function
We estimate the density function d by the reciprocal of the vertex area, d_i := 1/Ã_i, where Ã_i is the vertex area in the conformal parameterization. We do not normalize the density d, since the integral of the piecewise constant function over a region U, ∫_U d dA = Σ_i d_i Ã_i, equals the number of vertices located in U. At the reconstruction step, this tells us how many points should be sampled. In our experiments, we observe that the logarithmic density d̃ := log d is more evenly distributed. Therefore, the logarithmic density d̃ is recorded on the domain instead.

Since a disk-like surface is represented like a 2D image with two channels, any classical CNN can be directly applied. Hence, we will focus on the case of spherical surfaces.
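Both channels of the representation can be sketched on a toy mesh. The numpy sketch below computes the face-based half-density of equation (7), with the bending angle entering through tan(θ_ij/2), and the logarithmic vertex density for a regular tetrahedron; the unsigned bending angle used here suffices for a convex mesh, and the mesh itself is only a stand-in for the remeshed surfaces of the pipeline.

```python
import numpy as np

# Toy mesh: a regular tetrahedron with outward-oriented faces.
V = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
F = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])

def face_normal(f):
    n = np.cross(V[f[1]] - V[f[0]], V[f[2]] - V[f[0]])
    return n / np.linalg.norm(n)

def face_area(f):
    return 0.5 * np.linalg.norm(np.cross(V[f[1]] - V[f[0]], V[f[2]] - V[f[0]]))

def half_density(i):
    """Equation (7): h_i = sum_j |e_ij| tan(theta_ij / 2) / sqrt(A_i)."""
    h = 0.0
    for j in range(len(F)):
        shared = set(F[i]) & set(F[j])
        if j == i or len(shared) != 2:
            continue                          # faces i, j share no edge
        a, b = (V[k] for k in shared)         # endpoints of the shared edge
        cos_t = np.clip(np.dot(face_normal(F[i]), face_normal(F[j])), -1, 1)
        h += np.linalg.norm(a - b) * np.tan(np.arccos(cos_t) / 2)
    return h / np.sqrt(face_area(F[i]))

h = np.array([half_density(i) for i in range(len(F))])

# Vertex density channel: reciprocal barycentric vertex areas, stored as log d.
A_vert = np.zeros(len(V))
for f in F:
    A_vert[f] += face_area(f) / 3.0
d = 1.0 / A_vert
d_log = np.log(d)
```

By symmetry all four faces of the tetrahedron carry the same half-density, and the sum Σ_i d_i Ã_i recovers the vertex count, which is exactly the quantity the sampling step of the reconstruction relies on.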
Convolution layers
Many CNNs on arbitrary graphs or surfaces have been proposed in recent works, see [BZSL14, KW17, DBV16, MBM∗17, BMM∗15, MBBV15, MGA∗17] and the survey [BBL∗17]. In our setting, we attach to every triangular face of the spherical domain the tangent plane of S² ⊂ R³ at its barycenter. Let l be a positive number such that the projection of the triangular face lies entirely in the patch [−l, l] × [−l, l] on the tangent plane. This projection π gives a local coordinate system for the points in the pre-image π⁻¹([−l, l] × [−l, l]) ⊂ S². Hence, the functions restricted to this region can be interpolated to a grid on the patch. The distortion caused by the projection is negligible when the size of the patches is small. We choose a fixed length l such that all the triangular faces on the domain are projected inside the corresponding patches. The convolution is the ordinary 2D convolution within each patch, with the filter weights shared across different patches.

Figure 5: Downsampling layers based on the subdivision structure of the spherical meshes. The tensors in the previous layer which correspond to a common triangle in the next layer are merged into the tensor associated with the parent triangle. These downsampling layers respect the spatial relations among the triangles.

Downsampling and Upsampling layers
Like the MaxPooling and UpPooling layers of classical CNNs, we need the same sort of operations on the mesh domain to decrease and increase the spatial dimension of the neural network. One can first apply the ordinary 2D pooling layers within each patch. Furthermore, since our spherical domain is constructed by subdividing an icosahedron, it is naturally endowed with a hierarchical structure (Figure 5), which gives rise to downsampling and upsampling layers between spherical meshes with different refinements. The detailed architectures of our convolutional neural networks are depicted in the appendix.

In order to construct a conformal parameterization from a given vertex density function d, we first randomly sample n_i points in each face of the domain, where n_i = d_i Ã_i and Ã_i is the face area. Next, an isotropic meshing is constructed as follows.
The isotropic meshing is usually generated by the centroidal Voronoi tessellation [DFG99]. Consider a set of points {v_i} in a metric space, in our case R² or S². The Voronoi region V_i corresponding to v_i is defined by

    V_i = { x : |x − v_i| ≤ |x − v_j| for all j ≠ i },   (8)

and the regions are polygons (see Appendix 7.1 for the formula for computing the weighted centroid of a polygon). Given a density function d, the centroid v*_i of the polygon V_i is given by

    v*_i = ∫_{V_i} y d(y) dy / ∫_{V_i} d(y) dy.   (9)

We call a point set {v_i} a weighted centroidal Voronoi tessellation if v_i = v*_i holds for all i.

Figure 6: Centroidal Voronoi tessellation. In order to obtain an isotropic meshing with respect to a given density, we first sample a point set according to the density and repeatedly apply Lloyd's relaxation (shown: sampling, Voronoi diagram, 1st iteration, 5th iteration, Delaunay triangulation). Observe that the point set becomes more and more isotropic as the iteration proceeds.

In this paper we use Lloyd's relaxation to compute the CVT. Given a point set {v_i}, we iteratively update each point v_i with the corresponding centroid v*_i until convergence (see Figure 6):

1. Randomly sample the points with respect to the density d (defined in Section 3.2).
2. Create the Voronoi diagram. For the disk case, we have to be a bit careful, since the Voronoi cells close to the boundary are mostly unbounded. Hence we reflect the points close to the boundary, so that all the Voronoi cells inside or close to the unit disk are bounded.
3. Compute the weighted centroids of the (bounded) Voronoi cells and, for the disk case, remove the points lying outside the disk (see Figure 7).

Figure 7: Constrained CVT. To avoid dealing with unbounded Voronoi cells, we reflect the points which are close to the boundary, such that the cells close to the boundary are all bounded.

Then, a Delaunay triangulation is constructed by taking the dual of the Voronoi diagram.
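The relaxation loop above can be sketched in a few lines. The following Monte-Carlo variant on the unit square estimates each cell's weighted centroid from density-weighted samples instead of integrating over exact Voronoi polygons; this is a simplification for illustration, while the paper uses the true Voronoi diagram together with the boundary reflection described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_lloyd(seeds, density, iters=5, n_samples=20000):
    """One CVT pipeline sketch: assign dense samples to their nearest seed
    and move every seed to the weighted centroid of its cell (eq. (9))."""
    for _ in range(iters):
        x = rng.random((n_samples, 2))                 # samples in the unit square
        w = density(x)                                 # density weights d(y)
        d2 = ((x[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                        # nearest-seed (Voronoi) labels
        for i in range(len(seeds)):
            m = idx == i
            if m.any():                                # weighted centroid update
                seeds[i] = (x[m] * w[m, None]).sum(0) / w[m].sum()
    return seeds

density = lambda x: 1.0 + 4.0 * x[:, 0]                # denser towards the right
seeds = weighted_lloyd(rng.random((50, 2)), density)
```

As the iterations proceed, the seeds spread out isotropically while concentrating where the density is high, mirroring the behaviour shown in Figure 6.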
Generally, this triangulation does not perfectly fit the disk at the boundary, but this does not significantly affect the global appearance of the shapes.

Now we are ready to reconstruct the surface from a conformal parameterization with prescribed mean curvature half-density. In the following we first present an improved reconstruction method, which is a slight modification of [YDT∗18], and then introduce a new procedure of area calibration, which is particularly effective when the area scaling is not accurately restored by the previous method.
Dirac Energy
In practice, the exact solution of the Dirac equation (2) can hardly be obtained, so we instead solve the eigenvalue problem

    (D_f − ρ) φ = λφ,

where λ is the eigenvalue with the smallest magnitude [CPS11]. In the discrete case, D_f − ρ is a |F| × |F| quaternion-valued matrix [CPS11], or in practice, a 4|F| × 4|F| real-valued matrix, where a quaternion q = a + bi + cj + dk is represented by the 4 × 4 block

    [ a  −b  −c  −d
      b   a  −d   c
      c   d   a  −b
      d  −c   b   a ].

We briefly introduce the discretization of the matrix D_f − ρ and refer the reader to [CPS11, YDT∗18] for more details. Let e_ij ∈ Im(H) be the oriented edge embedded in the quaternion space and H_ij := |e_ij| tan(θ_ij/2) be the integrated mean curvature at the edge e_ij, where θ_ij is the bending angle between the faces i and j. The matrix of the Dirac operator is a 4|F| × 4|F| matrix D_f given by ([YDT∗18])

    (D_f φ)_i = Σ_j E_ij · φ_j − H_i φ_i,

where E_ij := H_ij + e_ij and H_i = Σ_j H_ij. The discrete form of ρ is a 4|F| × 4|F| diagonal matrix P with the discrete mean curvature half-density (7) on the diagonal. Instead of building the target shape in one step, we slowly flow the initial shape to the target for the sake of stability. Hence, we build the matrix D̂(t) = D_f − tP, where t ∈ [0, 1] is a step length parameter.

We observe that, even though this face-based Dirac operator gives the exact solution, it is not numerically stable, because its solution space is often too large (technically, some solutions whose edge-constraint normals are far from the actual face normals will result in unwanted transformations). On the other hand, while the vertex-based operators in [CPS11, YDT∗18] work well in many cases, they are not able to faithfully recover the high-curvature regions of the surface, because their solution spaces are too limited. To strike a balance between these two approaches, we propose the following regularized energy based on the face-based operator:

    E_D(t) = D̂ᵀ(t) · D̂(t) + cR,

where c is a positive coefficient and R is the 4|F| × 4|F| regularization matrix such that

    φᵀ · R · φ = Σ_ij |e*_ij| |φ_i − φ_j|²,

where the sum runs over all adjacent faces i and j and |e*_ij| is the dual edge length. Note that the same dual-edge-length weights are used in [CKPS18]. To have finer control of the regularizer, one can decompose R into four components and set different weights as in [CKPS18], but we did not see that this makes any obvious difference in our setting. Empirically, the coefficient c is set to 0.001 max_ij |e_ij|.

By the min-max principle, solving the generalized eigenvalue problem

    E_D(t) φ = λ M φ,

where M is the mass matrix, is equivalent to minimizing the energy

    min E_D(φ),  s.t. |φ| = 1,

with the metric defined by |φ|² := φᵀ · M · φ. Finally, the edges are reconstructed by the spin transformation

    e_ij ↦ Im(φ̄_i · E_ij · φ_j),

and the positions of the vertices v_i are recovered by solving a Poisson equation (see Section 3 of [SA07] or Section 5.6 of [CPS11]). In the attached videos, we prescribe the mean curvature half-density of two shapes (red) on their conformal parameterizations (blue), showing the deformation from the sphere to the original shapes.

Area calibration
Even though the Dirac operator with the regularization term improves the accuracy of the reconstruction, we observe that some area distortion is still visible, especially in regions with very high curvature. To overcome this problem, we make the reconstruction algorithm aware of the area scaling factor. Chern et al. [CPS15] prescribe a volumetric scaling factor e^u and obtain a close-to-conformal volumetric deformation by minimizing an energy E_u depending on u. While the energy E_u in [CPS15] is specifically designed for 3D volumetric meshes, an analogue for 2D surfaces still holds in the smooth case: Theorem 3.1
Let f : M → R³ ⊂ H be an isometric immersion and h : M → R be any function. The quaternion gradient is defined by grad_f h = df(grad h). The spin transformation d f̃ := φ̄ · df · φ satisfies

    dφ · φ⁻¹ = −G df,   (10)

where G := grad_f u is the gradient of the logarithmic factor e^u := |φ|². Proof
See Appendix 7.2.

Therefore, given a spin transformation induced by φ with the area factor u = log |φ|², the quaternion-valued 1-form ω := dφ + G df φ vanishes. In practice, we minimize the energy E_u := ∫ |ω|², where the metric for quaternion-valued 1-forms is defined by

    ⟨ω, η⟩ := ∫_M ω ∧ (∗η̄).   (11)

In the discrete case, minimizing the energy E_u again amounts to solving a generalized eigenvalue problem for a 4|F| × 4|F| matrix (see Section 7.4). To avoid introducing the scaling factor as one more function in our representation and thereby increasing the data size, we first apply isotropic remeshing with approximately equalized face areas [FAKG10] to all shapes. In this case the logarithmic factor u should be set to u_i = log(1/√Ã_i), where Ã_i is the face area of the conformal parameterization.

Figure 8: Reconstruction of shapes from their conformal parameterization, without and with area calibration. While the Willmore energy is defined by W = Σ_i h_i², we define the relative Willmore energy between two meshes with identical connectivity by r.W := Σ_i ((h₁)_i − (h₂)_i)², which measures how close the mean curvature half-densities of the two meshes are. This experiment shows that our method substantially improves the accuracy of the curvature reconstruction. Furthermore, the area distortion, which usually appears in the regions with high curvature, is much reduced by the area calibration. Note that, in contrast to [CPS15], we only encode the expected scaling factor in the energy |ω|², while the factual scaling factor |φ|² is determined by the optimizer.

In summary, we first minimize the energy E_D with a small step length several times until the mean curvature half-density converges to the prescribed one. Then, we minimize the energy E_u once to obtain the correct area scaling factor.
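Numerically, both E_D and E_u lead to generalized eigenvalue problems for real matrices assembled from 4 × 4 quaternion blocks. The sketch below builds one block of the real-valued representation and solves a toy problem E φ = λ M φ via a Cholesky reduction; E and M here are random stand-ins, not the actual Dirac energy and mass matrix.

```python
import numpy as np

def qblock(q):
    """4x4 real matrix of left multiplication by q = a + bi + cj + dk,
    acting on quaternions stored as (a, b, c, d)."""
    a, b, c, d = q
    return np.array([[a, -b, -c, -d],
                     [b,  a, -d,  c],
                     [c,  d,  a, -b],
                     [d, -c,  b,  a]])

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))
E = A @ A.T                          # symmetric PSD stand-in for the energy matrix
M = np.diag(rng.random(8) + 1.0)     # stand-in mass matrix

# Reduce E phi = lambda M phi to a standard symmetric problem via M = L L^T.
L = np.linalg.cholesky(M)
Linv = np.linalg.inv(L)
lam, Y = np.linalg.eigh(Linv @ E @ Linv.T)   # eigenvalues in ascending order
phi = Linv.T @ Y[:, 0]                       # minimizer of the Rayleigh quotient
phi /= np.sqrt(phi @ M @ phi)                # normalization phi^T M phi = 1
```

The eigenvector of the smallest eigenvalue minimizes φᵀEφ subject to φᵀMφ = 1, matching the min-max characterization used for both energies; in practice the matrices are sparse and one would use a sparse shifted eigensolver instead.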
4. Results
We use the Matlab package gptoolbox [J∗18] for data pre-processing and TensorFlow [AAB∗15] for building the neural networks on meshes. All the neural networks are trained and evaluated on a GeForce GTX 1080 GPU with 8 GB of memory.

Figure 9: Remeshings reconstructed after scaling the vertex density by m = 0.25, 0.75, 1 and 2, with the vertex counts |V| changing accordingly. The mean curvature half-density changes accordingly such that the mean curvature is preserved.
We first present some simple applications that are unrelated to machine learning. In the smooth case, the mean curvature half-density changes covariantly, h ↦ m · h, under the parameterization scaling x ↦ m · x, m ∈ R. Analogously, in the discrete case, one can adjust the parameterization by scaling the vertex density, i.e., multiplying the density d by a constant, d ↦ m d. In order to preserve the shape, one then has to adjust the mean curvature half-density by h ↦ h/√m. The shapes reconstructed from the modified representation are remeshings with approximately m|V| vertices, where |V| is the number of vertices of the original mesh. Figure 9 shows that our method preserves the smooth features of the shape. However, the regions of high curvature tend to be smoothed as the vertex number declines. Shape interpolation
We visualize the interpolation of our curvature-based representation. Figure 3 shows the shapes reconstructed from a linear interpolation between two animals, whose conformal parameterizations are matched by a Möbius transformation that aligns 3 chosen landmark points. In addition, one can interpolate the latent space representation of a trained autoencoder (see Section 4.3). Figure 11 shows two latent space bi-linear interpolations of cars.
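The density-scaling remeshing above can be checked algebraically: per face, the relation H = h/√A is the discrete reading of definition (5), scaling the density by m scales the face area by 1/m, so keeping H fixed forces h ↦ h/√m. A one-line consistency sketch with placeholder numbers:

```python
import numpy as np

m = 4.0                 # density scaling factor: d -> m * d
A = 0.1                 # face area of the original remeshing (placeholder)
h = 2.5                 # mean curvature half-density on that face (placeholder)
H = h / np.sqrt(A)      # mean curvature encoded by the pair (h, A)

A_new = A / m           # m times more faces => 1/m times the face area
h_new = h / np.sqrt(m)  # adjusted half-density
assert np.isclose(h_new / np.sqrt(A_new), H)   # mean curvature preserved
```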
Random generation of disk-like and spherical shapes
We test our model for disk-like surfaces on a dataset of anatomical shapes provided by [BLC∗11]. We choose two landmark points u_i, v_i on every shape M_i. We know that the conformal automorphisms of the unit disk have the form

    f(z) = e^{iθ} (z − a)/(1 − ā z),

where θ ∈ R and a ∈ C. Set a = u_i and θ such that f(v_i) ∈ R. This uniquely determined map f_i := f_{a,θ} satisfies f_i(u_i) = 0 and f_i(v_i) ∈ R. Fixing a reference shape M₁, we apply the alignment map f₁⁻¹ ∘ f_i to every shape M_i. All the aligned disk meshes are then mapped to the square via the Schwarz–Christoffel mapping. The functions are interpolated on the 256 × 256 grid using the scatteredInterpolant function in Matlab.

For spherical surfaces we take the dataset of 1240 cars from ShapeNet [CFG∗15]. The spherical domain is a subdivided icosahedron with 20 × 16 = 320 faces. Each face is assigned a 32 × 32 grid. Hence, each shape is represented by a 320 × 32 × 32 × 2 tensor.

Discussion of local invariance
We call two functions f₁ and f₂ locally invariant if they have the same function values but only differ by a transformation g of the domain, i.e., f₁ = f₂ ∘ g. Traditional CNNs are able to capture translational features such as (a) in the inset. Hence one would expect CNNs for 3D shapes to have similar properties, i.e., local invariance under translation, rotation or even scaling. However, 3D generative models based on positions, such as point clouds and meshes, do not have such properties, due to the varying values of the coordinate functions (see (b)). This makes it more difficult for CNNs to extract meaningful information. The voxel-based models are locally invariant, but they are not applicable to data with high resolution due to the high cost of memory and computation. Some multi-resolution representations, e.g., octrees [TDB17, WSLT18], are designed to overcome this problem, but then the local invariance property no longer holds. In contrast, our model (sketched in (c) of the inset), together with the CNN on the sphere, provides an efficient way to learn from 3D data without a certain alignment. We verify our argument with the following two examples. Learning unaligned anatomical data
We merge three different anatomical models from [BLC∗11] and create the representations without any alignment method. The inset shows randomly generated bones of the different types. Compared with Figure 16, the bones get smoothed due to the expanded shape space. However, we show that our model is still capable of extracting meaningful information despite the ambiguity by visualizing the latent space distribution (Figure 13). We compare the result with a baseline model that has the same network architecture but operates on the coordinate functions.
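The latent-space visualization of Figure 13 can be reproduced with a plain SVD-based PCA. In the sketch below, the latent codes Z are random placeholders standing in for the autoencoder outputs:

```python
import numpy as np

def pca_2d(Z):
    """Project latent codes Z (n_samples, n_dims) onto their first two
    principal components via the SVD of the centered data."""
    Zc = Z - Z.mean(axis=0)
    U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:2].T

Z = np.random.default_rng(2).standard_normal((100, 64))   # placeholder codes
Y = pca_2d(Z)                                             # 2D coordinates to plot
```

Scatter-plotting Y colored by bone type then reveals whether the classes separate in the latent space, as in Figure 13.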
Generation of transformed cars
In this experiment we would like to see whether 3D generative models are able to correctly predict shapes under various transformations. The dataset is created by randomly translating, rotating and scaling a single car shape within the cube [−1, 1] × [−1, 1] × [−1, 1]. We train autoencoders based on the different models on 900 training examples and test them on 100 validation examples. The comparison shows that our method produces more accurate predictions than the others (see Figure 15). Since only our model considers the mesh structure of the shapes, to make a fair comparison we evaluate the results with the Chamfer distance, which only depends on the underlying point clouds. Note that, as a trade-off, our representation loses the information of translation and scaling. Thus we first normalize the shapes reconstructed from our model and then calculate the Chamfer distance to the ground truth.

Figure 10: Randomly generated cortical surfaces by the Multi-chart GAN [BHMK∗18] and the VAE based on our representation, with the Willmore energy W of each sample shown. Our representation has dimension 320 × 32 × 32 × 2.

Figure 12: For disk-like surfaces, given two landmark points there is a unique conformal map which maps the first point (red) to zero and maps the second one (blue) to the x-axis.

Figure 13: Latent space visualization. The dataset is composed of three different types of anatomical surfaces (teeth, mt1, radius). We project the latent space representations to a 2-dimensional space by PCA. Though all the shapes are packed without alignment, the three types of bones are clearly separated in the latent space. In contrast, the model based on the coordinates failed to learn the structure of the bones, so their distribution in the latent space is not well separated.
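The evaluation metric can be sketched as follows; this is the common symmetric point-to-nearest-point form of the Chamfer distance (the paper does not spell out which exact variant it uses):

```python
import numpy as np

def chamfer(P, Q):
    """Symmetric Chamfer distance between point clouds P (n, 3) and Q (m, 3):
    the mean squared nearest-neighbour distance in both directions."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(3)
P = rng.standard_normal((128, 3))   # placeholder point cloud
```

Because the metric depends only on the underlying point sets, it treats mesh-based, point-based and voxel-based predictions uniformly, which is why it is used for the comparison above.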
To show that our model is particularly good at preserving fine structure, we perform an experiment on human cortical surfaces, which are highly folded, with a lot of "hills" and "valleys". A dataset of cortical surfaces is available from the Open Access Series of Imaging Studies (OASIS) [MFC∗10]. While all the models succeed in characterizing the shapes at a large scale, our model preserves many more small features, e.g., the curvature, than the others.
Training details. Our model and the baseline model are trained for 200 epochs in around 5 hours. The point-cloud AE [ADMG18] and AtlasNet [GFK∗18] are trained with 2048 points for each shape. We also compare with the Multi-chart GAN [BHMK∗18] (Figure 10). While both mesh-based models generate significantly more faithful results than the other types of representation in Figure 1, the "hills" and "valleys" are much more visible with our model. Moreover, we choose only 3 landmark points on each shape to align the conformal parameterization, whereas 21 landmark points are required to create the 16 charts of [BHMK∗18].
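For disk-like surfaces, the landmark alignment (Figure 12) amounts to a Möbius normalization of the unit disk. The sketch below is our own illustration, not the paper's implementation; it fixes the rotation by sending the second landmark to the positive x-axis, one way to realize the uniqueness stated in the caption. Landmarks are given as complex coordinates in the parameter domain:

```python
import cmath

def align_disk(z, p, q):
    """Moebius transformation of the unit disk sending landmark p to 0
    and rotating so that landmark q lands on the positive x-axis.
    z, p, q are complex numbers inside the unit disk."""
    mobius = lambda w: (w - p) / (1 - p.conjugate() * w)  # disk automorphism, p -> 0
    phase = cmath.exp(-1j * cmath.phase(mobius(q)))       # rotate q onto the x-axis
    return phase * mobius(z)
```

Applying `align_disk` to every vertex of the flattened mesh aligns all parameterizations consistently, so the density and curvature images of different shapes become comparable.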
5. Limitations and future work
First, it is currently difficult to model shapes like long tubes, such as the arms and legs of a human body, because the conformal parameterization of such shapes always has extremely large area distortion. The information easily gets lost while being transferred from such regions to the canonical domain (see the failure example in the inset), unless one uses a domain with extremely high resolution. A solution might be a multi-resolution data structure, such as [GKS02, WSLT18]. It would then be desirable to design a neural-network architecture that is specifically adapted to such multi-resolution data structures. Second, to make our model fully rotation invariant rather than just locally invariant, one might combine our representation with the equivariant neural networks of Cohen et al. [CGKW18], so that the alignment procedure can be removed entirely. It would then be interesting to develop a corresponding decoder network.
6. Conclusion
We propose a novel intrinsic representation of 3D surfaces based on mean curvature and metric. A 3D generative model is built on this representation, and it shows better performance than other models in capturing the fine structure and the symmetry of the ambient space.
References

[AAB∗15] ABADI M., AGARWAL A., BARHAM P., BREVDO E., CHEN Z., CITRO C., CORRADO G. S., DAVIS A., DEAN J., DEVIN M., GHEMAWAT S., GOODFELLOW I., HARP A., IRVING G., ISARD M., JIA Y., JOZEFOWICZ R., KAISER L., KUDLUR M., LEVENBERG J., MANÉ D., MONGA R., MOORE S., MURRAY D., OLAH C., SCHUSTER M., SHLENS J., STEINER B., SUTSKEVER I., TALWAR K., TUCKER P., VANHOUCKE V., VASUDEVAN V., VIÉGAS F., VINYALS O., WARDEN P., WATTENBERG M., WICKE M., YU Y., ZHENG X.: TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. 7

[ADMG18] ACHLIOPTAS P., DIAMANTI O., MITLIAGKAS I., GUIBAS L. J.: Learning representations and generative models for 3D point clouds. In ICML (2018). 1, 2, 4, 10, 14

[ADVDI03] ALLIEZ P., DE VERDIERE E. C., DEVILLERS O., ISENBURG M.: Isotropic surface remeshing. In Shape Modeling International (2003), IEEE, pp. 49–58. 4

[ASC11] AUBRY M., SCHLICKEWEI U., CREMERS D.: The wave kernel signature: A quantum mechanical approach to shape analysis. In Computer Vision Workshops (ICCV Workshops), 2011 IEEE International Conference on (2011), IEEE, pp. 1626–1633. 3

[BBL∗17] BRONSTEIN M. M., BRUNA J., LECUN Y., SZLAM A., VANDERGHEYNST P.: Geometric deep learning: Going beyond Euclidean data. IEEE Signal Processing Magazine 34, 4 (July 2017), 18–42. 5

[BCK18] BADEN A., CRANE K., KAZHDAN M.: Möbius registration. Computer Graphics Forum 37, 5 (2018), 211–220. 5, 8

[BHMK∗18] BEN-HAMU H., MARON H., KEZURER I., AVINERI G., LIPMAN Y.: Multi-chart generative surface modeling. In SIGGRAPH Asia 2018 Technical Papers (2018), ACM Press. 2, 4, 9, 10

[BLC∗11] BOYER D. M., LIPMAN Y., CLAIR E. S., PUENTE J., PATEL B. A., FUNKHOUSER T., JERNVALL J., DAUBECHIES I.: Algorithms to automatically quantify the geometric similarity of anatomical surfaces. Proceedings of the National Academy of Sciences 108, 45 (Oct 2011), 18221–18226. 8

[BMM∗15] BOSCAINI D., MASCI J., MELZI S., BRONSTEIN M. M., CASTELLANI U., VANDERGHEYNST P.: Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. Computer Graphics Forum 34, 5 (2015), 13–23. 3, 5

[Bon67] BONNET O.: Mémoire sur la théorie des surfaces applicables. J. Éc. Polyt. 42 (1867), 27–29. 3

[BZSL14] BRUNA J., ZAREMBA W., SZLAM A., LECUN Y.: Spectral networks and deep locally connected networks on graphs. In Proc. ICLR 2014 (2014). 5

[CFG∗15] CHANG A. X., FUNKHOUSER T., GUIBAS L., HANRAHAN P., HUANG Q., LI Z., SAVARESE S., SAVVA M., SONG S., SU H., XIAO J., YI L., YU F.: ShapeNet: An Information-Rich 3D Model Repository. Tech. Rep. arXiv:1512.03012 [cs.GR], Stanford University, Princeton University, Toyota Technological Institute at Chicago, 2015. 8

[CGKW18] COHEN T. S., GEIGER M., KÖHLER J., WELLING M.: Spherical CNNs. In International Conference on Learning Representations (2018). 10

(Figure 14: Volume-curvature autoencoder. The input is the MRI volumetric data from [MFC∗10]; the predictions are compared with the FreeSurfer ground truth.)

[CKPS18] CHERN A., KNÖPPEL F., PINKALL U., SCHRÖDER P.: Shape from metric. ACM Trans. Graph. 37, 4 (August 2018), 63:1–63:17. doi:10.1145/3197517.3201276. 7

[CL15] CHOI P. T., LUI L. M.: Fast disk conformal parameterization of simply-connected open surfaces. Journal of Scientific Computing 65, 3 (Feb 2015), 1065–1090. 5, 8

[CLL15] CHOI P. T., LAM K. C., LUI L. M.: FLASH: Fast landmark aligned spherical harmonic parameterization for genus-0 closed brain surfaces. SIAM Journal on Imaging Sciences 8, 1 (Jan 2015), 67–94. 5

[CPS11] CRANE K., PINKALL U., SCHRÖDER P.: Spin transformations of discrete surfaces. In ACM Transactions on Graphics (TOG) (2011), vol. 30, ACM, p. 104. 4, 6, 7

[CPS13] CRANE K., PINKALL U., SCHRÖDER P.: Robust fairing via conformal curvature flow. In ACM Transactions on Graphics (TOG) (2013), vol. 32, p. 61. 2, 4, 5

[CPS15] CHERN A., PINKALL U., SCHRÖDER P.: Close-to-conformal deformations of volumes. ACM Trans. Graph. 34, 4 (July 2015), 56:1–56:13. 2, 7

[DBV16] DEFFERRARD M., BRESSON X., VANDERGHEYNST P.: Convolutional neural networks on graphs with fast localized spectral filtering. In Proceedings of the 30th International Conference on Neural Information Processing Systems (Red Hook, NY, USA, 2016), NIPS'16, Curran Associates Inc., pp. 3844–3852. 5

[DFG99] DU Q., FABER V., GUNZBURGER M.: Centroidal Voronoi tessellations: Applications and algorithms. SIAM Review 41, 4 (1999), 637–676. 6

[FAKG10] FUHRMANN S., ACKERMANN J., KALBE T., GOESELE M.: Direct resampling for isotropic surface remeshing. In VMV (2010), Citeseer, pp. 9–16. 7

[FSG17] FAN H., SU H., GUIBAS L. J.: A point set generation network for 3D object reconstruction from a single image. In CVPR (2017), pp. 2463–2471. 2, 4

[GFK∗18] GROUEIX T., FISHER M., KIM V. G., RUSSELL B., AUBRY M.: AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation. In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR) (2018). 1, 2, 4, 10, 14

[GKS02] GRINSPUN E., KRYSL P., SCHRÖDER P.: CHARMS: a simple framework for adaptive simulation. ACM Transactions on Graphics 21, 3 (Jul 2002). 10

[GWC∗04] GU X., WANG Y., CHAN T., THOMPSON P., YAU S.-T.: Genus zero surface conformal mapping and its application to brain surface mapping. IEEE Transactions on Medical Imaging 23, 8 (Aug 2004), 949–958. 5

[J∗18] JACOBSON A., ET AL.: gptoolbox: Geometry processing toolbox, 2018. http://github.com/alecjacobson/gptoolbox. 7

[Kam98] KAMBEROV G.: Prescribing mean curvature: existence and uniqueness problems. Electronic Research Announcements of the American Mathematical Society 4, 2 (1998), 4–11. 3

[KJP∗18] KOSTRIKOV I., JIANG Z., PANOZZO D., ZORIN D., BRUNA J.: Surface networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (June 2018). 4

[KPP98] KAMBEROV G., PEDIT F., PINKALL U.: Bonnet pairs and isothermic surfaces. Duke Math. J. 92, 3 (04 1998), 637–644. 3

[KW17] KIPF T. N., WELLING M.: Semi-supervised classification with graph convolutional networks. In Proc. ICLR 2017 (2017). 5

[LJC17] LIU D., JACOBSON A., CRANE K.: A Dirac operator for extrinsic shape analysis. Computer Graphics Forum (SGP) 36, 5 (2017). 4

[MBBV15] MASCI J., BOSCAINI D., BRONSTEIN M. M., VANDERGHEYNST P.: Geodesic convolutional neural networks on Riemannian manifolds. In Proc. of the IEEE International Conference on Computer Vision (ICCV) Workshops (2015), pp. 37–45. 5

[MBM∗17] MONTI F., BOSCAINI D., MASCI J., RODOLA E., SVOBODA J., BRONSTEIN M. M.: Geometric deep learning on graphs and manifolds using mixture model CNNs. In CVPR (Jul 2017), IEEE. 5

[MFC∗10] MARCUS D. S., FOTENOS A. F., CSERNANSKY J. G., MORRIS J. C., BUCKNER R. L.: Open Access Series of Imaging Studies: longitudinal MRI data in nondemented and demented older adults. Journal of Cognitive Neuroscience 22, 12 (2010), 2677–2684. 9, 11

[MGA∗17] MARON H., GALUN M., AIGERMAN N., TROPE M., DYM N., YUMER E., KIM V. G., LIPMAN Y.: Convolutional neural networks on surfaces via seamless toric covers. SIGGRAPH (2017). 5

[NW17] NASH C., WILLIAMS C. K. I.: The shape variational autoencoder: A deep generative model of part-segmented 3D objects. Computer Graphics Forum 36, 5 (Aug 2017), 1–12. 4

[Pin85] PINKALL U.: Regular homotopy classes of immersed surfaces. Topology 24, 4 (1985), 421–434. 3

[RRF10] REUTER M., ROSAS H. D., FISCHL B.: Highly accurate inverse consistent registration: a robust approach. NeuroImage 53, 4 (2010), 1181–1196. 11

[SA07] SORKINE O., ALEXA M.: As-rigid-as-possible surface modeling. In Symposium on Geometry Processing (2007), vol. 4, pp. 109–116. 7

[SC17] SAWHNEY R., CRANE K.: Boundary first flattening. ACM Trans. Graph. 37, 1 (Dec. 2017), 5:1–5:14. 5

[SM17] SMITH E. J., MEGER D.: Improved adversarial systems for 3D object generation and reconstruction. In CoRL (2017). 4

[SOG09] SUN J., OVSJANIKOV M., GUIBAS L.: A concise and provably informative multi-scale signature based on heat diffusion. In Computer Graphics Forum (2009), vol. 28, Wiley Online Library, pp. 1383–1392. 3

[SSP08] SPRINGBORN B., SCHRÖDER P., PINKALL U.: Conformal equivalence of triangle meshes. In ACM Transactions on Graphics (TOG) (2008), vol. 27, ACM, p. 77. 4

[TDB17] TATARCHENKO M., DOSOVITSKIY A., BROX T.: Octree generating networks: Efficient convolutional architectures for high-resolution 3D outputs. CoRR (2017). 1, 2, 4, 8

[TPKZ18] TATARCHENKO M., PARK J., KOLTUN V., ZHOU Q.-Y.: Tangent convolutions for dense prediction in 3D. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018). 5

[Ume17] UMETANI N.: Exploring generative 3D shapes using autoencoder networks. In SIGGRAPH Asia 2017 Technical Briefs (2017), ACM Press. 4, 8

[WLG∗17] WANG P.-S., LIU Y., GUO Y.-X., SUN C.-Y., TONG X.: O-CNN: octree-based convolutional neural networks for 3D shape analysis. ACM Trans. Graph. 36 (2017), 72:1–72:11. 2, 4

[WSLT18] WANG P.-S., SUN C.-Y., LIU Y., TONG X.: Adaptive O-CNN. In SIGGRAPH Asia 2018 Technical Papers (2018), ACM Press. 2, 4, 8, 10, 14

[WZX∗16] WU J., ZHANG C., XUE T., FREEMAN W. T., TENENBAUM J. B.: Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Advances in Neural Information Processing Systems (2016), pp. 82–90. 4

[YDT∗18] YE Z., DIAMANTI O., TANG C., GUIBAS L., HOFFMANN T.: A unified discrete framework for intrinsic and extrinsic Dirac operators for geometry processing. Computer Graphics Forum 37, 5 (2018), 93–106. 2, 4, 5, 6
7. Appendix

7.1. Computing the weighted centroid of polygons
The weighted centroid of a polygon $V$ is given by
$$v^* = \frac{\int_V y\, d(y)\, dy}{\int_V d(y)\, dy}.$$
A Voronoi cell naturally decomposes into several triangles, for each of which we first compute the weighted centroid. Denote the density at vertex $v_i$ by $d_i$ and assume that the density is linearly interpolated on every triangle. Restricted to a triangle $i$ with vertices $v_1, v_2, v_3$, the denominator of $v^*$ is the weighted area
$$\bar A_i = \frac{d_1 + d_2 + d_3}{3}\, A_i,$$
where $A_i$ is the triangle area. Integrating the linear function over triangle $i$, we obtain
$$v^*_i = \frac{(2d_1 + d_2 + d_3)\,v_1 + (d_1 + 2d_2 + d_3)\,v_2 + (d_1 + d_2 + 2d_3)\,v_3}{4\,(d_1 + d_2 + d_3)}.$$
Then the centroid of the polygon is the weighted sum
$$v^* = \frac{\sum_i v^*_i\, \bar A_i}{\sum_i \bar A_i}.$$
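The formulas above can be checked with a small script. This is an illustrative sketch (hypothetical helper names, not the paper's code); note that for constant density the triangle formula reduces to the ordinary centroid $(v_1 + v_2 + v_3)/3$:

```python
import numpy as np

def triangle_weighted_centroid(v, d):
    """Weighted centroid and weighted area of one triangle with vertices
    v[0..2] (3-vectors) and linearly interpolated densities d[0..2]."""
    area = 0.5 * np.linalg.norm(np.cross(v[1] - v[0], v[2] - v[0]))
    w_area = area * (d[0] + d[1] + d[2]) / 3.0  # weighted area
    D = d[0] + d[1] + d[2]
    centroid = ((2*d[0] + d[1] + d[2]) * v[0] +
                (d[0] + 2*d[1] + d[2]) * v[1] +
                (d[0] + d[1] + 2*d[2]) * v[2]) / (4.0 * D)
    return centroid, w_area

def polygon_weighted_centroid(triangles, densities):
    """Combine per-triangle centroids of a (Voronoi) cell by weighted areas."""
    data = [triangle_weighted_centroid(v, d) for v, d in zip(triangles, densities)]
    areas = np.array([a for _, a in data])
    cents = np.array([c for c, _ in data])
    return (cents * areas[:, None]).sum(axis=0) / areas.sum()
```

In the remeshing step, moving each site to the weighted centroid of its Voronoi cell biases the sample distribution toward the generated vertex density.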
Proof. Let $(x, y)$ be a conformal coordinate of the immersion $f : M \to \mathbb{R}^3$. The left-hand side of (10) is
$$\varphi_x \varphi^{-1}\, dx + \varphi_y \varphi^{-1}\, dy,$$
while the right-hand side reads
$$-\tfrac{1}{2}\left(-u_x f_x^{-1} - u_y f_y^{-1}\right)\left(f_x\, dx + f_y\, dy\right)
= \tfrac{1}{2}\left((u_x + u_y f_y^{-1} f_x)\, dx + (u_y + u_x f_x^{-1} f_y)\, dy\right)
= \tfrac{1}{2}\left((u_x + u_y n)\, dx + (u_y - u_x n)\, dy\right). \quad (12)$$
Equation (10) therefore implies
$$\varphi_x \varphi^{-1} = \tfrac{1}{2}(u_x + u_y n), \qquad \varphi_y \varphi^{-1} = \tfrac{1}{2}(u_y - u_x n).$$
Substituting the equations above into the Dirac operator (2) in local form, we obtain
$$D_f \varphi = f_x \varphi_y - f_y \varphi_x = \tfrac{1}{2}\left(f_x (u_y - u_x n) - f_y (u_x + u_y n)\right)\varphi = 0,$$
by $f_x n = -f_y$ and $f_y n = f_x$.

To obtain the discrete formula of the energy $\int |\omega|^2$, we first derive the formula of the quaternionic gradient in the discrete case. Let $h : M \to \mathbb{R}$ be any function. The gradient is defined by $\operatorname{grad} h := (dh)^\sharp$, where $\sharp : T^*M \to TM$ is the raising of indices, defined by $\langle \omega^\sharp, v\rangle = \omega(v)$ for any $v \in TM$.

In a triangle in quaternion space with vertices $v_1, v_2, v_3$ and oriented edges $a, b, c \in \mathbb{H}$ (so that $a + b + c = 0$), we choose a coordinate system $(x, y)$ (inset). Assuming that $h$ is a linear function with values $h_1, h_2, h_3$ at the vertices, $dh$ reads in local form
$$dh = (h_2 - h_1)\, dx + (h_3 - h_1)\, dy.$$
Since $\langle (dx)^\sharp, \partial_x\rangle = 1$ and $\langle (dx)^\sharp, \partial_y\rangle = 0$, the vector $df((dx)^\sharp)$ is perpendicular to $b$ and has length $1/(|c|\sin\theta) = |b|/(2A)$, where $A$ is the area of the triangle. Thus $df((dx)^\sharp) = n\, b/(2A)$ and, by the same argument, $df((dy)^\sharp) = n\, c/(2A)$. Therefore
$$\operatorname{grad} h = \frac{n}{2A}\left(a\, h_1 + b\, h_2 + c\, h_3\right).$$

Discretizing the 1-form energy. We discretize the energy
$$E_u = \int_M |\omega|^2, \qquad \omega = d\varphi + \tfrac{1}{2}\, G\, df\, \varphi,$$
in the scheme of the finite element method. In the local coordinate system above, the metric and its inverse read
$$g = \begin{pmatrix} |c|^2 & -\langle c, b\rangle \\ -\langle c, b\rangle & |b|^2 \end{pmatrix}, \qquad
g^{-1} = \frac{1}{4A^2}\begin{pmatrix} |b|^2 & \langle c, b\rangle \\ \langle c, b\rangle & |c|^2 \end{pmatrix}.$$
With $\omega = \omega_x\, dx + \omega_y\, dy$, (11) becomes
$$\int \frac{1}{4A^2}\left(|\omega_x|^2 |b|^2 + \langle c, b\rangle\,(\omega_x \bar\omega_y + \omega_y \bar\omega_x) + |\omega_y|^2 |c|^2\right) dx \wedge dy.$$
Now we work out the formula $\omega = d\varphi + \tfrac{1}{2} G\, df\, \varphi$ in one triangle:
$$\omega = \Big((\varphi_2 - \varphi_1) + \tfrac{1}{2}\, G c\, \big((1 - x - y)\varphi_1 + x\varphi_2 + y\varphi_3\big)\Big)\, dx
+ \Big((\varphi_3 - \varphi_1) - \tfrac{1}{2}\, G b\, \big((1 - x - y)\varphi_1 + x\varphi_2 + y\varphi_3\big)\Big)\, dy,$$
where
$$G c = u_1 - u_2 + \frac{n}{2A}\left(-\langle a, c\rangle u_1 - \langle b, c\rangle u_2 - |c|^2 u_3\right),$$
$$G b = -u_1 + u_3 + \frac{n}{2A}\left(-\langle a, b\rangle u_1 - |b|^2 u_2 - \langle c, b\rangle u_3\right).$$
The energy $E_u$ is a $|V| \times |V|$ quaternion-valued matrix. With a tedious calculation, the entries contributed by the triangle are given by
$$(E_u)_{11} = |a|^2 - \left(|a|^2 u_1 + \langle b, a\rangle u_2 + \langle c, a\rangle u_3\right) + |G|^2 A^2,$$
$$(E_u)_{12} = \langle b, c\rangle + |G|^2 A^2 + \left((A n + |a|^2)\, u_1 - (a \cdot b)\, u_2 - (c \cdot a)\, u_3\right),$$
where
$$|G|^2 = \frac{1}{4A^2}\left(|a|^2 u_1^2 + |b|^2 u_2^2 + |c|^2 u_3^2 + 2\langle a, b\rangle u_1 u_2 + 2\langle b, c\rangle u_2 u_3 + 2\langle c, a\rangle u_3 u_1\right).$$

(Figure 15: Comparison on the randomly transformed cars: ground truth, AtlasNet, O-CNN, point-cloud AE, and ours, with Chamfer distances (CD). Since our model loses the information of translation and scaling, we first normalize the volume of the results with a centered position (unnormalized shapes are shown above); the Chamfer distance is then computed on the normalized outputs.)

Figure 16: Randomly generated teeth and cars via the variational autoencoder. The first and third rows show the isotropic meshings, which are induced from the generated density function, together with the generated mean curvature half-density. The second and fourth rows show the resulting reconstructions. The network architectures are modified from the traditional autoencoders in Tables 1 and 2 into variational autoencoders.
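The per-triangle gradient formula $\operatorname{grad} h = n\,(a h_1 + b h_2 + c h_3)/(2A)$ from the appendix can be sanity-checked numerically. In the sketch below (our own illustration), the quaternion product of the unit normal $n$ with an in-plane edge vector is replaced by the equivalent cross product:

```python
import numpy as np

def triangle_grad(v1, v2, v3, h1, h2, h3):
    """Gradient of the linear function with values h1, h2, h3 at the vertices.
    Implements grad h = n (a h1 + b h2 + c h3) / (2A); for the unit normal n,
    multiplying an in-plane vector reduces to the cross product."""
    a, b, c = v3 - v2, v1 - v3, v2 - v1   # oriented edges, a + b + c = 0
    n = np.cross(b, c)                    # length 2A, direction of the normal
    two_A = np.linalg.norm(n)
    n = n / two_A
    return np.cross(n, h1*a + h2*b + h3*c) / two_A
```

For a linear function such as $h(x, y) = 2x - 3y + 5$ on a planar triangle, the formula reproduces the exact gradient $(2, -3, 0)$, and a constant function gives the zero vector since $a + b + c = 0$.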
8. Architectures
Table 1: The architecture for spherical surfaces. The encoder consists of six Conv2D (4 × 4) blocks, each followed by BatchNormalization and LeakyReLU, with 16, 32, and 64 channels in the last three blocks, followed by Reshape and fully connected blocks (FC, BatchNormalization, LeakyReLU) that reduce the features to a 200-dimensional latent code. The decoder mirrors the encoder: an FC layer expands the 200-dimensional code to 20480 features, followed by BatchNormalization, LeakyReLU, Reshape, further FC blocks, and five Deconv2D (4 × 4) blocks, with 64, 32, and 16 channels in the first three.

Table 2: The architecture for disk-like surfaces. The encoder consists of four Conv2D (4 × 4) blocks (each with BatchNormalization and LeakyReLU) and an FC layer producing a 100-dimensional latent code. The decoder expands the code with an FC layer (100 → 8192), followed by BatchNormalization, LeakyReLU, Reshape, and four Deconv2D (4 × 4) blocks, with 32 and 16 channels in the first two.

The volume encoder consists of five Conv3D (4 × 4 × 4) blocks (each with BatchNormalization and LeakyReLU), with 16 channels at resolution 13 and 32 channels at resolution 7 in the last two blocks, followed by an FC layer producing a 200-dimensional latent code.
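The variational versions of the autoencoders in Tables 1 and 2 (used for the random generation in Figure 16) sample the latent code with the standard reparameterization trick. The following is a minimal numpy sketch (our own illustration; `decode` in the usage comment stands for the trained decoder network and is hypothetical):

```python
import numpy as np

def sample_latent(mu, log_var, rng=None):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    where sigma = exp(log_var / 2)."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * np.asarray(log_var)) * eps

# To generate new shapes, sample z ~ N(0, I) in the latent space (e.g. the
# 200-dimensional code of Table 1) and feed it to the trained decoder:
# curvature_half_density, density = decode(sample_latent(np.zeros(200), np.zeros(200)))
```

Sampling through the reparameterization keeps the network differentiable with respect to `mu` and `log_var`, which is what allows the variational objective to be trained with ordinary backpropagation.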