Chordal Decomposition for Spectral Coarsening
Honglin Chen, Hsueh-Ti Derek Liu, Alec Jacobson, and David I.W. Levin
University of Toronto, Canada
Fig. 1. We approximate the vibration modes of the cotangent Laplacian derived from the ground truth high-resolution mesh (top) using a coarse mesh with 250 vertices (the transparent cages on the left). A classical decimation method [Garland and Heckbert 1997] (bottom) preserves the appearance but fails in preserving the ground truth vibration modes. Our chordal spectral coarsening detaches the differential operator from the mesh, enabling one to optimize the operator independently to preserve the vibration modes (middle), without altering the coarse vertices. By visualizing the inner product matrices between vibration modes on the left, we show our approach leads to a result closer to the ground truth. Here we visualize the 9th vibration mode with its frequency.
We introduce a novel solver to significantly reduce the size of a geometric operator while preserving its spectral properties at the lowest frequencies. We use chordal decomposition to formulate a convex optimization problem which allows the user to control the operator sparsity pattern. This allows for a trade-off between the spectral accuracy of the operator and the cost of its application. We efficiently minimize the energy with a change of variables and achieve state-of-the-art results on spectral coarsening. Our solver further enables novel applications including volume-to-surface approximation and detaching the operator from the mesh, i.e., one can produce a mesh tailor-made for visualization and optimize an operator separately for computation.

CCS Concepts: • Computing methodologies → Shape analysis; • Mathematics of computing → Semidefinite programming.

Additional Key Words and Phrases: geometry processing, numerical coarsening, spectral geometry, chordal decomposition
ACM Reference Format:
Honglin Chen, Hsueh-Ti Derek Liu, Alec Jacobson, and David I.W. Levin. 2020. Chordal Decomposition for Spectral Coarsening. ACM Trans. Graph. 39, 6, Article 265 (December 2020), 16 pages. https://doi.org/10.1145/3414685.3417789
1 INTRODUCTION
Discrete operators, such as the cotangent Laplacian, the Hessian of mesh energies, and the stiffness matrix in physics-based simulations, are ubiquitous in geometry processing. Many of these operators are represented by sparse positive semi-definite (PSD) matrices. These matrices are often constructed by looping over the elements of a discretized domain. When defined on a high-resolution domain, these matrices are computationally expensive to use, even if the final result only requires low-frequency information.

Recent methods show that it is possible to simplify a discrete operator while preserving its spectral properties and matrix characteristics, such as positive semi-definiteness, avoiding the pitfalls of the naïve "decimate and reconstruct" approach. However, previous methods required the solution of a non-convex optimization problem, whose solution sacrificed matrix sparsity.

In this paper, we overcome these challenges by applying the chordal decomposition. In contrast to the previous non-convex formulation, our method is convex and can freely control the output sparsity, outperforming existing approaches for spectral coarsening and simplification. Our approach further enables novel applications that optimize the operator independently to preserve desired properties for computation, without changing the mesh vertices.
Fig. 3. We use chordal decomposition to split a large sparse PSD constraint on X (left) into multiple small dense PSD constraints on Z_i (right), where we use · ⪰ 0 to denote the PSD constraint. This enables us to be more efficient in handling optimization problems that involve sparse PSD matrix constraints.

In Fig. 1, we first decimate the model and optimize the operator independently to preserve the spectral properties of the cotangent Laplacian. Our approach achieves a higher quality approximation of the vibration modes of the high-resolution mesh compared to previous approaches.
Fig. 2. Given a sparse matrix (left), where blue denotes non-zeros and gray denotes zeros, we can view the sparsity pattern as a graph (right) and then apply theorems of chordal graphs.
By viewing the sparsity pattern of a matrix as a graph (see Fig. 2), one can utilize theories of chordal graphs to decompose a sparse matrix into a set of small dense matrices. This decomposition enables one to satisfy the sparse PSD constraint by projecting each small dense matrix to PSD in parallel (see Fig. 3). Such techniques have long been applied in the creation of efficient solvers for Semidefinite Programming (SDP). Here we generalize these notions to the spectral coarsening problem, which leads to an accelerated solver that is faster and more accurate, with better sparsity control than the previous state of the art. Our main contribution is an algorithm for projecting general sparse matrices to PSD ones using chordal decomposition in the context of spectral coarsening.
2 RELATED WORK
Spectral preservation is a widely studied topic in optimization and numerical methods. Below we outline the most salient related work from these areas, as well as recent developments in computer graphics and geometry processing.
Chordal decomposition in optimization. Chordal graphs have played an important role in sparse matrix computation for decades [Blair and Peyton 1993; Vandenberghe and Andersen 2015]. Fukuda et al. [2001] and Nakata et al. [2003] introduce a generic framework to accelerate interior-point methods for solving large sparse SDPs. Their key idea is to exploit the sparsity of the matrix and the properties of chordal graphs [Grone et al. 1984] to decompose a large sparse matrix variable into multiple small dense ones. In the later literature, this is often called the chordal decomposition. Since then, this framework has been greatly improved [Andersen et al. 2010; Burer 2003; Fujisawa et al. 2009; Srijuntongsiri and Vavasis 2004; Sun et al. 2014]. The idea of chordal decomposition has also been incorporated into other optimization methods. For instance, Sun and Vandenberghe [2015] combine chordal decomposition with projected gradient and Douglas-Rachford algorithms for sparse matrix nearness and completion problems. Zheng et al. [2017b, 2020] incorporate this idea into the alternating direction method of multipliers (ADMM) for solving SDPs. These chordal-based solvers have also been deployed for nonlinear matrix inequalities [Kim et al. 2011], optimal power flow [Madani et al. 2015], controller synthesis [Zheng et al. 2018], and sum-of-squares problems [Zheng et al. 2017a, 2019].

Recently, Maron et al. [2016] formulate the point cloud registration problem as an SDP and use chordal decomposition to accelerate the computation. However, their method only supports matrices that already have a chordal sparsity pattern, which is not applicable to our problem because most discrete operators are not chordal. In contrast, we utilize the ideas from [Sun and Vandenberghe 2015] to handle any sparsity pattern of choice, and the strategies in [Zheng et al. 2017b, 2020] to develop a chordal ADMM solver for the spectral coarsening energy [Liu et al. 2019]. We exploit the fact that many discrete operators are sparse and symmetric to perform a change of variables that significantly reduces the computational cost. We demonstrate that chordal decomposition is suitable not only for large-scale SDPs, but also for problems in graphics that involve sparse PSD matrix variables.
Geometric coarsening has been extensively studied in computer graphics with the aims of preserving different geometric and physical properties. One class of methods focuses on preserving the appearance of a mesh for rendering purposes. Some prominent early examples include mesh optimization [Cohen-Steiner et al. 2004; Hoppe et al. 1993], mesh decimation [Garland and Heckbert 1997], progressive refinement [Hoppe 1996, 1997], and approaches based on parameterization [Cohen et al. 2003]. We refer readers to [Cignoni et al. 1998] for an overview and comparison of appearance-preserving simplification. Beyond preserving the appearance, these techniques have also been extended to preserve the texture information of a shape [Garland and Heckbert 1998; Lu et al. 2020]. Li et al. [2015] add modal displacement as part of the decimation metric to better preserve the acoustic transfer of a shape.
Numerical coarsening in simulation. Coarsening the geometry may alter the material properties and lead to numerical stiffening in simulations. Kharevych et al. [2009] propose a method to adjust the elasticity tensor of each element of a coarse mesh to approximate the dynamics of the original high-resolution mesh. In a similar spirit, Chen et al. [2015] use a data-driven lookup approach to reduce the error incurred by coarsening. To better capture vibration, Chen et al. [2017] address numerical stiffening by simply rescaling the Young's modulus of the coarse model to match its lowest frequencies to those of its high-resolution counterpart. Chen et al. [2019b] extend this idea to re-fit the eigenvalues iteratively at each time step. Chen et al. [2018] propose to construct matrix-valued and discontinuous basis functions by solving a large number of local quadratically constrained optimizations. Other recent approaches build on wavelets: Owhadi [2017] introduces a hierarchical construction of operator-adapted basis functions and their associated wavelets for scalar-valued PDEs. The operator-adapted wavelets
have been extended to differential forms [Budninskiy et al. 2019] and to vector-valued equations [Chen et al. 2019a], which are then applied to the fast simulation of heterogeneous materials with locally supported basis functions. Different from [Chen et al. 2018] and [Chen et al. 2019a], which increase the degrees of freedom (DOF) by using matrix-valued shape functions, our method can support more DOF by directly controlling the sparsity pattern of the scalar-valued matrix. Moreover, our method can also preserve the spectral properties using the same DOF and sparsity pattern.
Spectral graph coarsening in machine learning. Spectral-preserving graph reduction has been an active field in machine learning. Zhao et al. [2018] introduce a spectral graph reduction method for scalable graph partitioning and data visualization based on node aggregation and graph sparsification. Jin et al. [2020] propose two methods for spectral graph coarsening based on iterative merging and k-means clustering, respectively. Various other approaches have also been recently adopted to coarsen a graph in a spectral-preserving way, including randomized edge contraction [Loukas and Vandergheynst 2018], a local variation algorithm [Loukas 2019], and a probabilistic algorithm [Bravo-Hermsdorff and Gunderson 2019]. In contrast to these combinatorial methods, which focus more on optimizing the sparsity pattern, our algebraic approach enables one to further optimize over a specific sparsity pattern based on a convex formulation.
Spectral coarsening in geometry processing. Recently, several approaches consider coarsening a geometry while preserving its spectral properties, namely the eigenvalues and eigenvectors of its operators. Öztireli et al. [2010] compute samples on a manifold surface in order to preserve the spectrum of the Laplace operator. Nasikun et al. [2018] use a combination of Poisson disk sampling and local polynomial bases to efficiently solve an approximate Laplacian eigenvalue problem on a mesh. Beyond the Laplace operator, Liu et al. [2019] propose an algebraic approach to coarsen common geometric operators while preserving spectral properties. Lescoat et al. [2020] extend the formulation to achieve spectral-preserving mesh simplification. Our approach is purely algebraic. Our convex formulation leads to better spectral preservation compared to the similar algebraic approach of [Liu et al. 2019] for spectral coarsening. Our flexibility in controlling the sparsity allows us to post-process the results of spectral simplification [Lescoat et al. 2020] and further improve its quality. In addition, we enable a novel application which independently optimizes the operator for computation purposes and the mesh vertices for preserving the appearance (see Fig. 1).
3 BACKGROUND
The description of our method depends on manipulating variables that represent sparse matrices. Throughout the paper, we use P to denote selection matrices, and use subscripts to differentiate between them. In practice, given a subset s, P_s is a sparse matrix defined as

  (P_s)_{jk} = \begin{cases} 1, & k = s(j), \\ 0, & \text{otherwise}. \end{cases}   (1)

Let x be a vector and z = x(s) be a sub-vector of x. Selecting a subset from x can be achieved by a sparse matrix multiplication, z = P_s x. Mapping the elements from z back to a bigger vector y can be achieved with y = P_s^\top z.

Fig. 4. Chordal decomposition decomposes the matrix X into a set of maximal clique matrices Z_i. We can extract each clique matrix via Z_i = P_i X P_i^\top.

Let X be an n-by-n matrix, and let s denote a subset of row and column indices into X. Z = X(s, s) creates a matrix Z that is of size |s|-by-|s| and contains all values in X(s, s). We can compactly describe this operation using the selection matrix P_s as Z = P_s X P_s^\top. Similarly, we can map the elements in Z back to Y via Y = P_s^\top Z P_s.
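To make this notation concrete, the following is a minimal numpy/scipy sketch (with toy data of our own, not the paper's code) of Eq. 1 and of the gather/scatter operations z = P_s x, y = P_s^\top z, and Z = P_s X P_s^\top:

```python
import numpy as np
import scipy.sparse as sp

def selection_matrix(s, n):
    """Sparse P_s with (P_s)_{jk} = 1 iff k = s(j) (Eq. 1)."""
    return sp.csr_matrix((np.ones(len(s)), (np.arange(len(s)), s)), shape=(len(s), n))

n = 5
s = [0, 2, 4]
P = selection_matrix(s, n)

x = np.arange(n, dtype=float)
z = P @ x                                    # gather:  z = P_s x = x(s)
y = P.T @ z                                  # scatter: y = P_s^T z

X = sp.random(n, n, density=0.6, random_state=0)
X = X + X.T                                  # a symmetric sparse X
Z = (P @ X @ P.T).toarray()                  # Z = P_s X P_s^T = X(s, s)
assert np.allclose(Z, X.toarray()[np.ix_(s, s)])
```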
3.1 Chordal Decomposition
A chordal graph is an undirected graph in which, for every cycle of length greater than three, there is an edge between nonconsecutive vertices in the cycle. Chordal graphs have drawn attention since the 1950s because a handful of NP-complete graph problems can be solved in polynomial time if the graph is chordal. Chordal graphs have also received interest from the optimization community for solving sparse SDPs, combinatorial optimization, and Cholesky factorization. We refer readers to [Vandenberghe and Andersen 2015] for a survey of chordal graphs in optimization. We focus on their application to problems that involve sparse PSD matrix constraints, specifically those arising from geometry processing.

An n-by-n symmetric matrix X has a chordal sparsity pattern C \in \{0, 1\}^{n \times n} if the graph induced by C is a chordal graph. The key theorem that supports our method is

Theorem 1. ([Agler et al. 1988; Kakimura 2010]) Let X be an n-by-n symmetric matrix with chordal sparsity, and let \{Z_1, Z_2, \dots, Z_p\} be a set of its p clique matrices. Then X is PSD if and only if it can be expressed as

  X = \sum_{i=1}^{p} P_i^\top Z_i P_i   (2)

with all Z_i being PSD.

Here a clique is a subset of vertices such that every two distinct vertices in the clique are adjacent to each other; thus a clique matrix is a dense matrix of the size of a clique. We use Z_i to represent the i-th clique matrix and P_i as the selection matrix of the i-th clique set. This decomposition of X into a set of clique matrices is called the chordal decomposition (see Fig. 4), and it has been applied in many recent SDP solvers.
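The "if" direction of Theorem 1 is easy to exercise numerically: summing PSD clique matrices scattered onto overlapping cliques always yields a PSD matrix with the chordal pattern. A small sketch under that setup (toy cliques of our own choosing; `np.ix_` plays the role of P_i^\top Z_i P_i):

```python
# Assemble X = sum_i P_i^T Z_i P_i (Eq. 2) from random dense PSD clique
# matrices; by Theorem 1 the assembled X is guaranteed to be PSD.
import numpy as np

n = 6
cliques = [[0, 1, 2], [1, 2, 3], [3, 4, 5]]   # maximal cliques of a chordal pattern

rng = np.random.default_rng(0)
X = np.zeros((n, n))
for c in cliques:
    B = rng.standard_normal((len(c), len(c)))
    Z = B @ B.T                               # a dense PSD clique matrix Z_i
    X[np.ix_(c, c)] += Z                      # equivalent to X += P_i^T Z_i P_i

assert np.min(np.linalg.eigvalsh(X)) >= -1e-12   # X is PSD
```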
Vectorization. In practice, the "sandwich" format P_i^\top Z_i P_i is not always easy to work with. It is often more desirable to vectorize a matrix by concatenating the columns of the matrix into a vector (see the inset). We can rewrite the vectorized chordal decomposition as

  vec(X) = \sum_{i=1}^{p} vec(P_i^\top Z_i P_i) = \sum_{i=1}^{p} \underbrace{(P_i^\top \otimes P_i^\top)}_{K_i} vec(Z_i),   (3)

where we use vec(\cdot) to denote the vectorization, with its inverse vec^{-1} (see the inset), and \otimes to denote the Kronecker product. Intuitively, K_i acts like the transpose of a selection matrix, putting the elements of vec(Z_i) back into vec(X).

3.2 Chordal Extension
In practice, a majority of the matrices we encounter in geometry processing do not naturally have chordal sparsity patterns, which makes Theorem 1 inapplicable. In response, we follow the idea in [Sun and Vandenberghe 2015] and first perform a chordal extension to transform the original non-chordal sparsity E into a chordal sparsity pattern C (see the inset). We maintain E by adding equality constraints that force the new fill-in elements arising from the extension to be zero:

  X \in S^n_E \;\Rightarrow\; X \in S^n_C, \quad X_{jk} = 0, \; \forall (j, k) \in C \setminus E,   (4)

where S^n_E and S^n_C denote n-by-n symmetric matrices with sparsity patterns E and C, respectively. We use C \setminus E to denote the entries that exist in C but not in E. Chordal extension adds degrees of freedom to our optimization problem. Our zero constraints enforce that, at a particular new fill-in entry, the sum of the projected dense matrices must equal zero, not that each dense matrix must contribute a zero value to that entry. Computing the minimum chordal extension, where the number of fill-in edges is minimized, is NP-complete [Yannakakis 1981]. However, finding a minimal chordal extension can be solved in polynomial time [Heggernes 2006].

Notice that Theorem 1 also has a dual form, Y_i = P_i X P_i^\top. If X has chordal sparsity, this dual formulation can guarantee that X is PSD by ensuring that all Y_i are PSD, as proved by Theorem 7 in [Grone et al. 1984]. However, the dual formulation cannot guarantee X to be PSD if X does not have chordal sparsity (see Sec. 3.2.2 in [Sun 2015]). Thus we build our algorithm around Theorem 1.
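A chordal extension can be computed with the classic "elimination game": visit the vertices in some elimination order and pairwise-connect each vertex's neighbors that come later in the order; the filled graph is chordal, and the added edges are exactly the fill-in entries C \setminus E. Below is a minimal sketch of that procedure (our own illustration; practical implementations choose a fill-reducing order such as minimum degree, which we do not do here):

```python
# Chordal extension via the elimination game: given the edge set E and an
# elimination order, add fill-in edges so that the resulting pattern C is
# chordal.
def chordal_extension(n, edges, order=None):
    order = list(range(n)) if order is None else order
    adj = {v: set() for v in range(n)}
    for j, k in edges:
        adj[j].add(k); adj[k].add(j)
    position = {v: i for i, v in enumerate(order)}
    fill = set()
    for v in order:
        later = [u for u in adj[v] if position[u] > position[v]]
        for a in later:                      # pairwise connect v's later neighbors
            for b in later:
                if a < b and b not in adj[a]:
                    adj[a].add(b); adj[b].add(a)
                    fill.add((a, b))         # a fill-in entry in C \ E
    C = {tuple(sorted(e)) for e in edges} | fill
    return C, fill

# a 4-cycle (non-chordal): eliminating vertex 0 first adds the chord (1, 3)
C, fill = chordal_extension(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(fill)   # {(1, 3)}
```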
Fig. 5. We visualize the spectral preservation using the inner product matrix (middle) between the restricted eigenvectors R\Phi of the original operator L on the coarsened domain and the eigenvectors \tilde{\Phi} of the coarsened operator X. Due to orthonormality, the ground truth should be a diagonal matrix of 1 and -1 (denoted by red and blue, respectively). The closer the matrix is to a diagonal matrix, the better the preservation of the eigenvectors. We use M and \tilde{M} to denote the mass matrices of the original and the coarsened meshes, respectively.

4 SPECTRAL COARSENING
The goal of spectral coarsening is to reduce the size of a discrete operator, derived from a 3D shape, while preserving its spectral properties. Liu et al. [2019] show that it is possible to achieve a significant reduction without affecting the low-frequency eigenvectors and eigenvalues. They visualize the preservation of spectral properties with the inner product matrix between eigenvectors (see Fig. 5). This inner product matrix can be perceived as a functional map [Ovsjanikov et al. 2012], expressing how eigenfunctions on the original domain are mapped to the simplified domain (see Sec. 5.1).

Preserving the spectral properties of an operator can be cast as an optimization problem, minimizing the commutative energy [Liu et al. 2019]

  f(X) = \| R M^{-1} L \Phi - \tilde{M}^{-1} X R \Phi \|^2_{\tilde{M}},   (5)

where L and X denote the original and the coarsened operators, M and \tilde{M} are the original and the coarsened mass matrices, R is the restriction operator restricting functions from the original domain to the coarsened domain, and \Phi are the functions (e.g., eigenfunctions) used to measure the commutativity.

Intuitively, if the coarsened operator X preserves the spectral properties of the original operator L, then given some functions \Phi on the original domain, first applying the original operator M^{-1} L and then restricting the functions to the coarsened domain via R should be the same as first restricting the functions via R and then applying the coarsened operator \tilde{M}^{-1} X. In Appendix C of [Liu et al. 2019], they show that, when \Phi are eigenfunctions, minimizing the commutative energy also preserves eigenvalues.
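For concreteness, here is a direct numpy transcription of Eq. 5 (a sketch of our own, assuming the \tilde{M}-norm is \|A\|^2_{\tilde{M}} = trace(A^\top \tilde{M} A) and that both mass matrices are diagonal/lumped, so applying their inverses is a row scaling; all inputs below are hypothetical placeholders):

```python
import numpy as np

def commutative_energy(X, L, M_diag, Mt_diag, R, Phi):
    """f(X) = || R M^{-1} L Phi - Mt^{-1} X R Phi ||^2_{Mt}  (Eq. 5)."""
    lhs = R @ (L @ Phi / M_diag[:, None])     # R M^{-1} L Phi
    rhs = X @ (R @ Phi) / Mt_diag[:, None]    # Mt^{-1} X R Phi
    D = lhs - rhs
    return float(np.sum(Mt_diag[:, None] * D * D))   # trace(D^T Mt D)

# toy sanity check: n = 5 fine vertices, m = 2 coarse, k = 3 test functions
rng = np.random.default_rng(0)
L_ = rng.standard_normal((5, 5)); L_ = L_ + L_.T
R_ = np.eye(5)[[0, 3]]                        # a subset-selection restriction
print(commutative_energy(rng.standard_normal((2, 2)), L_,
                         np.ones(5), np.ones(2), R_, rng.standard_normal((5, 3))))
```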
Relationship to [Liu et al. 2019]. Many differential operators in geometry processing are sparse, symmetric, and positive semi-definite. Thus, the method of [Liu et al. 2019] adds constraints to Eq. 5 in order to preserve these three operator properties. They satisfy the constraints via a change of variables from X to G:

  minimize_X f(X) \;\Rightarrow\; minimize_G f(G^\top L G),   (6)

where G has a predetermined sparsity pattern. However, this transforms the original convex formulation into a non-convex quartic one (see Eq. 7 in [Liu et al. 2019]) and increases the sparsity pattern of the output operator to 3-rings. It also artificially limits the feasible region to a subset of PSD matrices determined by G. In contrast, we will show how to directly optimize the commutative energy with respect to X while maintaining convexity and enabling control over the output sparsity.

Spectral coarsening can be written as the following optimization:

  minimize_X   f(X)   (7)
  subject to   X v = e   (8)
               X \succeq 0   (9)
               X \in S^n_E,   (10)

where f is the spectral coarsening energy in Eq. 5, X \succeq 0 denotes the PSD constraint, and S^n_E denotes the set of n-by-n sparse symmetric matrices with a user-defined (non-chordal) sparsity pattern E. The equality X v = e represents the null-space constraint of a differential operator; in the case of the Laplacian, v = 1 is a constant function and e = 0 is a zero vector, because every row and column of a Laplacian sums to zero. For the sake of simplicity, we describe the entire process without expanding the spectral coarsening energy f; the complete formulation is detailed in App. E.

Applying the chordal extension (Sec. 3.2) and the chordal decomposition (Sec. 3.1) to Eq. 7 leads to

  minimize_{X, \{Z_i\}}   f(X)   (11)
  subject to   X v = e   (12)
               X \in S^n_C   (13)
               X_{jk} = 0, \; \forall (j, k) \in C \setminus E   (14)
               X = \sum_{i=1}^{p} P_i^\top Z_i P_i   (15)
               Z_i \succeq 0, \; i = 1, \dots, p,   (16)

where p is the number of maximal cliques. We convert the single sparse PSD constraint X \succeq 0 into a set of small dense PSD constraints Z_i \succeq 0 by extending the sparsity E to a chordal pattern C with additional equality constraints X_{jk} = 0, which allows us to project each Z_i to PSD in parallel.
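Before committing to a specialized solver, the decomposed problem of Eqs. 11-16 can be prototyped in a general-purpose modeling tool. The toy CVXPY sketch below is entirely our own: a simple matrix-nearness objective stands in for the coarsening energy f, E is a path-graph pattern, and the fill-in entries of the extension C are (0,2) and (1,3); it encodes Eqs. 12 and 14-16 directly:

```python
import cvxpy as cp
import numpy as np

n = 4
cliques = [[0, 1, 2], [1, 2, 3]]              # maximal cliques of the extension C
A = np.array([[ 1., -1.,  0.,  0.],           # stand-in target for f: a path-graph
              [-1.,  2., -1.,  0.],           # Laplacian whose pattern is E
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

Zs = [cp.Variable((len(c), len(c)), PSD=True) for c in cliques]   # Eq. 16
X = 0
for c, Z in zip(cliques, Zs):
    P = np.zeros((len(c), n)); P[np.arange(len(c)), c] = 1.0      # selection P_i
    X = X + P.T @ Z @ P                        # Eq. 15: X = sum_i P_i^T Z_i P_i

constraints = [X @ np.ones(n) == 0,            # Eq. 12: X v = e (Laplacian null space)
               X[0, 2] == 0, X[1, 3] == 0]     # Eq. 14: fill-in entries of C \ E
prob = cp.Problem(cp.Minimize(cp.sum_squares(X - A)), constraints)
prob.solve(solver=cp.SCS)                      # SCS handles the small PSD cones
print(np.round(X.value, 3))                    # recovers A, which is already PSD
```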
Utilizing the fact that X is symmetric with a sparsity pattern E, we propose to accelerate the solver via a change of variables from X to a compressed vector x_E, which consists of the non-zero elements of the lower triangular part defined by E. This change of variables restricts the optimization to search only within the feasible sparsity E. This is crucial to the performance of the solver: because X is sparse, |x_E| \ll |vec(X)|, which significantly reduces the degrees of freedom. The relationship between X and x_E is described by

  vec(X) = P_E^{-1} x_E, \quad x_E = P_E vec(X),   (17)

where P_E is a selection matrix to the sub-vector x_E, and P_E^{-1} is the inverse of P_E, another matrix that re-indexes the elements of x_E back into vec(X). Note that P_E^{-1} is different from P_E^\top, as each non-diagonal element in x_E gets mapped to two entries in vec(X) instead of one, and P_E^{-1} can be assembled easily without the need to explicitly invert any matrix. This change of variables incorporates both the chordal symmetric constraint X \in S^n_C and the equality constraints X_{jk} = 0 of Eq. 14, yielding

  minimize_{x_E, \{z_i\}}   f(x_E)   (18)
  subject to   G x_E = e   (19)
               P_E^{-1} x_E = \sum_{i=1}^{p} K_i z_i   (20)
               vec^{-1}(z_i) \succeq 0, \; i = 1, \dots, p.   (21)

We define z_i := vec(Z_i) to be the vectorized clique matrix. G x_E = e is the vectorized version of X v = e in Eq. 12. P_E^{-1} x_E = \sum_i K_i z_i is the vectorized chordal decomposition of Eq. 15, where K_i denotes the index selection matrix for the vectorized clique matrix z_i.

We use another change of variables to further accelerate the algorithm by restricting the vectorized chordal decomposition in Eq. 20 to only the non-zeros in the chordal sparsity pattern C. This is possible because the summation of \{z_i\} in Eq. 20 only has non-zeros in the chordal sparsity pattern C. We introduce another index selection matrix P_C to change Eq. 20 into

  P_E^{-1} x_E = \sum_{i=1}^{p} K_i z_i \;\Rightarrow\; P_C P_E^{-1} x_E = P_C \sum_{i=1}^{p} K_i z_i,   (22)

where P_C selects the lower triangular non-zeros of C from the original vec(X). Here P_C is defined in the same way as P_E in Eq. 17, but with the different sparsity pattern C.

As z_i is the vectorization of a symmetric matrix Z_i, another reduction is achieved by applying the same trick as in Eq. 17 to restrict the degrees of freedom of Z_i to its lower triangular part \tilde{Z}_i via an expansion matrix Q_i (see the inset):

  z_i = vec(Z_i) = Q_i vec(\tilde{Z}_i) = Q_i \tilde{z}_i,   (23)

where z_i and \tilde{z}_i denote the vectorized Z_i and the vectorized lower triangular part \tilde{Z}_i, respectively. We define Q_i as an inverse index selection matrix that expands the vector of lower triangular elements \tilde{z}_i to z_i.
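The index maps P_E and P_E^{-1} of Eq. 17 are just gather/scatter tables; note how the "inverse" duplicates each off-diagonal value into both symmetric entries of vec(X). A small sketch (our own construction; column-major vectorization assumed):

```python
import numpy as np
import scipy.sparse as sp

def selection_and_expansion(E_lower, n):
    """E_lower: list of (j, k) with j >= k, the lower-triangular pattern E.
    Returns P_E (gather) and P_E^{-1} (scatter with symmetric duplication)."""
    rows_sel, cols_sel = [], []          # builds P_E
    rows_exp, cols_exp = [], []          # builds P_E^{-1}
    for idx, (j, k) in enumerate(E_lower):
        rows_sel.append(idx); cols_sel.append(j + n * k)   # column-major vec index
        rows_exp.append(j + n * k); cols_exp.append(idx)
        if j != k:                                          # mirror entry (k, j)
            rows_exp.append(k + n * j); cols_exp.append(idx)
    m = len(E_lower)
    P_E = sp.csr_matrix((np.ones(len(rows_sel)), (rows_sel, cols_sel)), shape=(m, n * n))
    P_E_inv = sp.csr_matrix((np.ones(len(rows_exp)), (rows_exp, cols_exp)), shape=(n * n, m))
    return P_E, P_E_inv

# sanity check on a small symmetric tridiagonal matrix
n = 3
X = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
E_lower = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
P_E, P_E_inv = selection_and_expansion(E_lower, n)
x_E = P_E @ X.flatten(order="F")                           # x_E = P_E vec(X)
assert np.allclose(P_E_inv @ x_E, X.flatten(order="F"))    # vec(X) = P_E^{-1} x_E
```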
Combining the above results leads us to the reduced optimization problem

  minimize_{x_E, \tilde{z}}   f(x_E)   (24)
  subject to   G x_E = e   (25)
               P_C P_E^{-1} x_E = P_C \tilde{K} \tilde{z}   (26)
               vec^{-1}(Q_i \tilde{z}_i) \succeq 0, \; i = 1, \dots, p,   (27)

where

  \tilde{K} = [K_1 Q_1, \dots, K_p Q_p], \quad \tilde{z} = [\tilde{z}_1; \dots; \tilde{z}_p].   (28)

This final reduced formulation is an optimization problem that involves only linear equalities and small dense PSD constraints. We solve this optimization using ADMM (see App. A), alternating between solving for x_E and \tilde{z}. Solving for x_E when f is the spectral coarsening energy boils down to a single linear solve; solving for \tilde{z} leads to a subroutine of projecting each clique matrix to PSD by removing the negative eigenvalues. The update on \tilde{z} is efficient, as each \tilde{z}_i is small and can be trivially parallelized. We provide details of the ADMM derivation in App. C.
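The core of the \tilde{z} update is the PSD projection of each small dense clique matrix, obtained by clipping negative eigenvalues. A minimal sketch (plain numpy; the parallelization and the surrounding ADMM bookkeeping are omitted):

```python
import numpy as np

def project_psd(Z):
    """Project a small dense symmetric matrix onto the PSD cone."""
    Z = 0.5 * (Z + Z.T)                      # guard against round-off asymmetry
    w, V = np.linalg.eigh(Z)
    return (V * np.maximum(w, 0.0)) @ V.T    # clip negative eigenvalues

# each clique matrix is tiny, so this loop is cheap and embarrassingly parallel
clique_blocks = [np.random.randn(5, 5) for _ in range(3)]
clique_blocks = [project_psd(Z) for Z in clique_blocks]
```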
Solving Eq. 24 results in a coarsened operator that preserves the spectral properties of the original one. One can freely control the sparsity pattern of the output by changing E. In our experiments, we choose either 1-, 2-, or 3-ring sparsities. The more rings in use, the better the results, because we have more degrees of freedom for minimizing the spectral coarsening energy Eq. 5.

When the degrees of freedom are limited, such as when using only 1-ring, we notice that the solver emphasizes preserving relatively higher frequencies, leading to worse performance in preserving the lowest frequencies. In response, we weight the spectral coarsening energy Eq. 5 by the inverse of the eigenvalues, which leads to the weighted version

  f_w(X) = \| R M^{-1} L \Phi \Lambda^{-1} - \tilde{M}^{-1} X R \Phi \Lambda^{-1} \|^2_{\tilde{M}},   (29)

where \Lambda is a diagonal matrix of the eigenvalues of the original operator L. In Sec. 5, we show that the weighted version leads to better spectral preservation in the low frequencies when using our solver. This weighted formulation also naturally captures the notion of "null-space reproduction" in Eq. 25, as we explicitly enforce the null space corresponding to the eigenvalue 0 as a hard constraint, i.e., with infinite weight.

5 EVALUATION
We evaluate our solver by comparing against the existing state-of-the-art spectral coarsening [Liu et al. 2019] and simplification [Lescoat et al. 2020], using functional maps and the quantitative metrics \|\cdot\|_L and \|\cdot\|_D proposed in [Lescoat et al. 2020]. We further demonstrate the power of our solver in controlling sparsity patterns, approximating volumetric behavior using only boundary surface vertices, and detaching the differential operator from the mesh. We provide implementation details in App. F.

Fig. 6. Using the same 3-ring sparsity pattern, our convex formulation enables the ADMM solver to converge to a better result on this shape (from 80,000 vertices to 600), where the gradient descent in [Liu et al. 2019] may struggle to converge.

5.1 Functional Maps
Functional maps [Ovsjanikov et al. 2012] describe how to transport functions from one shape M to another shape N. The idea of functional maps has led to breakthroughs in computing shape correspondences [Ovsjanikov et al. 2017]. In the context of spectral coarsening, functional maps become a tool for evaluating how well the eigenvectors of a discrete operator L \in R^{n \times n}, derived on a high-resolution mesh, are maintained by a coarsened operator X \in R^{m \times m}. Following the notation in Fig. 5, let \Phi \in R^{n \times k} and \tilde{\Phi} \in R^{m \times k} be two sets of eigenvectors of L and X, respectively. The functional map C can be computed as

  C = \tilde{\Phi}^\top \tilde{M} R \Phi.   (30)

Here \tilde{M} is the mass matrix on the coarse domain and R is a restriction operator, encoding the correspondence information from the original mesh to its coarsened counterpart. The restriction operator is computed either during the decimation [Lescoat et al. 2020] or simply as a subset selection matrix as in [Liu et al. 2019]. One can also perceive the matrix C as an inner product matrix between the eigenvectors \tilde{\Phi} on the coarsened domain and the restricted eigenvectors R\Phi on the coarsened domain. Due to the orthonormality between eigenvectors, the optimal functional map (or inner product matrix) C should be a diagonal matrix of values 1 and -1 (see inset).
Laplacian commutativity and orthonormality norms. The functional map should be orthonormal and commute with the original Laplace operator in the reduced basis if and only if it preserves the corresponding eigenfunctions and eigenvalues exactly, as shown in [Lescoat et al. 2020]. Thus the spectral preservation before and after coarsening and simplification can be quantified using two norms:

  Laplacian commutativity:   \|\cdot\|_L = \| C \Lambda - \tilde{\Lambda} C \| / \| C \|   (31)
  Orthonormality:            \|\cdot\|_D = \| C^\top C - I \|.   (32)

In our experiments, we visualize the functional map C and report both norms to convey a complete picture of spectral preservation.
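For reference, Eq. 30 and the two metrics of Eqs. 31-32 translate directly into a few lines of numpy (a sketch of our own; we assume the Frobenius norm throughout and store the diagonal eigenvalue matrices as vectors):

```python
import numpy as np

def functional_map(Phi_tilde, M_tilde, R, Phi):
    """C = Phi_tilde^T M_tilde R Phi (Eq. 30)."""
    return Phi_tilde.T @ (M_tilde @ (R @ Phi))

def laplacian_commutativity(C, lam, lam_tilde):
    """Eq. 31; lam / lam_tilde are the eigenvalue vectors of L / X."""
    return np.linalg.norm(C * lam[None, :] - lam_tilde[:, None] * C) / np.linalg.norm(C)

def orthonormality(C):
    """Eq. 32."""
    return np.linalg.norm(C.T @ C - np.eye(C.shape[1]))
```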
Compared to the original non-convex formulation Eq. 6 [Liu et al. 2019], in Fig. 10 we show that our convex formulation consistently achieves lower objective values across different numbers of coarsened vertices (from 200 to 1200) and leads to better qualitative results (see Fig. 7, Fig. 6, and Fig. 9).
Fig. 7. Using the same 3-ring sparsity as [Liu et al. 2019], our method achieves better quality in the resulting functional maps for both the weighted and unweighted versions, measured by the metrics proposed in [Lescoat et al. 2020].
For a fair comparison, we set the sparsity pattern of our approach to 3 rings, the same as the method of [Liu et al. 2019]. We further show that by weighting the energy with the inverse of the eigenvalues (see Eq. 29), we obtain an even better preservation of the low frequencies; see Fig. 7 (right two) and Fig. 8. In general, the weighted version performs better at maintaining the lowest frequencies, while the unweighted version tends to preserve all the eigenmodes in a least-squares sense.

With its reusable numerical factorization and separable PSD projections, our ADMM solver is able to solve the problem efficiently, while the method of [Liu et al. 2019] takes longer to converge. In Fig. 11, we compare the runtime with [Liu et al. 2019], both using their optimal setups (our weighted version and the unweighted version of [Liu et al. 2019]). The argmin_X step requires a linear solve of a KKT system, and the argmin_Z step is a set of PSD projections of the small clique matrices; for details about both steps, see App. A. Leveraging the fact that the KKT system matrix in the argmin_X step remains the same until ρ is updated, we only perform numerical factorization when ρ is updated and reuse it until ρ changes again (usually after tens of iterations). As shown in Fig. 12, most of our runtime is spent on numerical factorization, while the time spent on each argmin_X and argmin_Z step is relatively small. We report our detailed runtime in Fig. 23. For a detailed runtime comparison between the weighted and unweighted versions, see Fig. 28.

Fig. 8. For applications that desire to preserve low frequencies, our weighted formulation can focus on preserving the first few eigenvectors and eigenvalues (shown in increasing order). Our weighted formulation achieves better results compared to [Liu et al. 2019] under the same sparsity pattern when coarsening the shapes from 8,000 (left) and 28,000 (right) vertices to 400, respectively. Here we show the Laplacian commutativity norm and orthonormality norm based on the functional map of the first 50 eigenvectors (inside the dashed lines).

Fig. 9. As the degrees of freedom increase for the volumetric Laplacian, our method is still able to maintain the spectral properties of the tetrahedral meshes (from 32,000 and 27,000 vertices to 400, respectively) using the same sparsity as [Liu et al. 2019]. Here the eigenvalues are shown in increasing order.
Fig. 10. Under the same 3-ring sparsity, our method consistently achieves better objective values compared to the spectral coarsening method proposed by [Liu et al. 2019]. We evaluate both the unweighted (top) and the weighted (bottom) versions across different numbers of coarse vertices, ranging from 200 (light) to 1200 (dark). Note that the dashed lines denote that the optimization has already converged.

Fig. 11. We compare the runtime of our algorithm (weighted) with [Liu et al. 2019] (unweighted) using the same 3-ring sparsity pattern with respect to the number of coarsened vertices |V_c|, as our method performs better with the weighted version and [Liu et al. 2019] shows the opposite. Here we only consider the solve time, factoring out the precomputation for both our method and the method of [Liu et al. 2019]. As the solve involved in our ADMM formulation is independent of the resolution of the original mesh, we are able to coarsen a high-resolution mesh without a significantly increased solve time compared to [Liu et al. 2019].

Fig. 12. We show the decomposition of the total runtime of our algorithm using the same 3-ring sparsity pattern as in Fig. 11. From bottom to top are the precomputation time, argmin_X time, argmin_Z time, other ADMM time (including numerical factorization), chordal decomposition time, and eigendecomposition time, respectively. As shown in the figure, most of the runtime of our algorithm is spent on numerical factorization. By reusing the numerical factorization until ρ changes, the time spent on each argmin_X and argmin_Z step is relatively small.

Fig. 13. We compare the total runtime of our solver and the MOSEK solver in CVX [Grant and Boyd 2008, 2014], which only supports dense SDP constraints and uses an interior-point method, on the 1-, 2-, and 3-ring sparsity patterns of [Garland and Heckbert 1997]. As a dense SDP solver that is not designed for large sparse SDP problems, MOSEK takes a relatively long runtime when the matrix size is large or the number of rings of the neighborhood increases. Here |V_c| is the number of vertices in the coarse mesh.

Fig. 14. We visualize the biharmonic distance of our method and [Garland and Heckbert 1997] using the same 1-ring sparsity pattern. Our method can further postprocess and improve the spectral preservation of the result from [Garland and Heckbert 1997] (from 110,000 vertices to 500).

Fig. 15. By visualizing the biharmonic distance, we show that our approach can also postprocess the result from [Lescoat et al. 2020] (from 10,000 vertices to 800) and achieve better spectral preservation while still maintaining the same 1-ring sparsity pattern.

We also compare the total runtime of our sparse ADMM solver with the MOSEK solver in CVX [Grant and Boyd 2008, 2014] in Fig. 13, which uses an interior-point method to solve the problem with dense PSD constraints. Our solver handles large problems more efficiently than MOSEK, which only supports dense semi-definiteness constraints, is not designed for large sparse SDP problems, and takes a relatively long time to converge when the matrix size is large or the number of rings of the neighborhood increases. Here we use a fixed fraction of |V_c| eigenvectors to ensure both methods converge.

Our approach can also improve the results of spectral simplification via post-processing. The method of [Lescoat et al. 2020] performs spectral simplification by greedily collapsing the edge with the minimum cost, and thus may produce suboptimal results.
Fig. 16. Using the same 1-ring sparsity pattern, our method can serve as a post-processing tool to further improve the resulting operator from the method of [Lescoat et al. 2020]. Given the original mesh with 26,000 vertices, our post-processed operators result in better functional maps (middle) compared to the output operators from [Lescoat et al. 2020] (left), as well as eigenvalues that are closer (right) to the reference.

Fig. 17. When the coarsening is aggressive, our method can still postprocess the results of [Lescoat et al. 2020] to improve the quality of spectral preservation.

In Fig. 16 and Fig. 27, we post-process the cotangent Laplacian from the results of [Lescoat et al. 2020] in a global manner to further improve the spectral preservation while keeping the sparsity pattern and the mesh vertices fixed. We further demonstrate the improvement of the spectral preservation by visualizing the biharmonic distance of our method and [Garland and Heckbert 1997] (Fig. 14) or [Lescoat et al. 2020] (Fig. 15) using the same 1-ring sparsity pattern. Our method can also recover the spectral properties when the coarsening is extreme for complicated shapes (see Fig. 17 and Fig. 26).
Fig. 18. We simplify the anisotropic Laplacian (with parameter 20) from 50,000 vertices to 1,000 vertices using the same sparsity pattern as [Garland and Heckbert 1997] or [Liu et al. 2019]. Our method can handle anisotropic operators where [Garland and Heckbert 1997] may fail entirely due to the anisotropy. Our optimization scheme enables users to freely choose between 1-ring or 3-ring sparsity. In contrast, [Liu et al. 2019] has much less control over the sparsity pattern and only allows for a 3-ring sparsity pattern, which introduces a significant amount of fill-in.

Fig. 19. Our optimization achieves better spectral preservation of the anisotropic Laplace operator (with parameter 60, from 5,000 vertices to 400 vertices) when the number of rings of the neighborhood increases. Increasing the non-zeros in the sparsity pattern allows for more degrees of freedom, which enables our solver to converge to a better result.
Fig. 20. Our method can encode the spectral behavior of a volumetric mesh using only its surface mesh with some added links. We approximate the volumetric behavior using a sparse matrix with a controllable sparsity pattern, while the corresponding matrix has to be dense in the traditional Boundary Element Method [James and Pai 1999]. Here the source vertex of the biharmonic distance is visualized as a green dot, and the added links are visualized as gray lines (bottom two).

In addition to the isotropic cotangent Laplacian, in Fig. 18 we demonstrate our capability in handling anisotropic operators without introducing any new fill-in.

For downstream applications that accept changes in the sparsity pattern, our method enables one to freely control the sparsity pattern to achieve better results. As shown in Fig. 19, we can freely increase the sparsity pattern from 1 ring to 3 rings in order to allow more degrees of freedom and obtain better results. But one should also consider the trade-off between the number of non-zero fill-ins and the quality of the results, because more degrees of freedom imply a denser output operator with a longer runtime (see Fig. 13).
Volume-to-surface approximation. A surface-only representation is a more efficient alternative to its volumetric counterpart because a three-dimensional (volumetric) problem is reduced to two dimensions (the surface). However, in computer animation and simulation, it is often more desirable to use a volumetric representation to simulate volumetric behavior.
Fig. 21. Starting from the constrained Delaunay tetrahedralization, we can increase the number of rings of the neighborhood to better approximate the volumetric Laplacian using a surface mesh with random links. Similar to the partial functional correspondence in [Rodolà et al. 2017], the diagonal of our functional map may be skewed because we may lose some internal eigenvectors during this partial matching.

We show that our approach can optimize the Laplacian of a surface-only mesh with random distant connections generated via TetGen [Si 2015] to approximate the spectral behavior of a volumetric mesh. Taking the boundary surface mesh of a volumetric tetrahedral mesh as the input, we first add distant edges to the surface Laplacian to determine the sparsity pattern. We use the constrained Delaunay tetrahedralization in TetGen [Si 2015] to add edges between "visible" but distant vertices (see inset), and use its pattern as the sparsity pattern of our modified surface Laplacian. Then we optimize the modified operator to preserve the spectral behavior of the volumetric Laplacian. Compared to the traditional discretization in the Boundary Element Method [James and Pai 1999], where the boundary matrices are usually dense, our surface-only Laplacian can remain sparse and maintain a sparsity pattern similar to that of the surface cotangent Laplacian. In Fig. 20 and Fig. 21, we visualize the functional map and the biharmonic distance of our optimized surface Laplacian, and show that we can further capture the volumetric behavior by increasing the number of rings of the neighborhood.
Fig. 22. When one ties the differential operator to the mesh, one can either preserve the appearance of the mesh [Garland and Heckbert 1997] or the spectral properties of the operator [Lescoat et al. 2020], but not both. Our approach enables one to detach the operator from the mesh (right) to achieve both simultaneously: using an appearance-preserving mesh for visualization and a spectral-preserving operator with user-desired sparsity patterns (e.g., 1-ring, 2-ring, or 3-ring) for computation.

Similar to [Rodolà et al. 2017], our volume-to-surface mapping is also a partial functional mapping, which may lose some (internal) eigenvectors and result in a skewed functional map when the internal volume is large (see Fig. 21). Our method can also serve as a possible way to generate training data for finding the best sparsity pattern without the presence of a volumetric mesh.
Detaching the operator from the mesh. Sharp et al. [2019] propose to represent the same geometry using two discrete representations: one for visualization and one for computation. In a similar spirit to [Sharp et al. 2019], our approach enables one to have one mesh for visualization and one detached operator for computation.

Previous decimation methods either preserve the appearance but fail to preserve the spectral properties, or preserve the spectral properties but fail to preserve the appearance. This is partly due to the perspective of defining the operator directly on the discrete mesh, and partly due to the lack of tools to optimize the operator independently.

In order to simultaneously preserve the appearance and the spectral properties, in Fig. 22 we first obtain a coarsened mesh from an appearance-preserving decimation; then we optimize the operator separately using the sparsity pattern defined by the connectivity of the mesh. Intuitively, this optimization tries to retrieve the desired properties of the original mesh by manipulating the metric "seen" by the coarsened operator. At the end of this process, even though the "distorted" metric may not be embeddable, one can always use the embeddable appearance-preserving mesh to visualize the results of the computation. In Fig. 22, this detachment allows us to preserve both the appearance and the spectral properties, while the method of [Garland and Heckbert 1997] fails to preserve the spectral properties and the method of [Lescoat et al. 2020] fails to preserve the appearance. In Fig. 1, we demonstrate the strength of this approach in approximating the vibration modes of a high-resolution mesh using a coarse mesh with a detached coarsened operator.
Fig. 23. Our runtime shows that our method is more suitable for aggressive coarsening (middle). When many eigenvectors are in use (top) or input meshes are large (middle), computing the eigendecomposition can be the bottleneck.

Compared to [Liu et al. 2019], which does not allow inputting an arbitrary sparsity pattern (instead it builds the output sparsity pattern by "squaring" an incidence matrix; see their Eq. 7), our method can take any sparsity pattern as input. This means one can geometrically simplify a mesh, then use that new mesh's sparsity pattern as input to our algorithm to optimize a compatible operator (see Fig. 1), which enables its use in applications that require an embedded mesh and an accurate coarse operator (e.g., simulation with contact handling).
Limitations and future work. Further exploiting the limited degrees of freedom would enable even better spectral preservation for 1-ring isotropic operators. Jointly optimizing the sparsity pattern and the operator entries may lead to even finer solutions, especially for volume-to-surface approximation. Exploring different regularizers and energy formulations would be desirable for solving the underdetermined system when the degrees of freedom are too large compared to the number of eigenvectors in use. Avoiding the introduction of additional low-frequency eigenvectors during the optimization would benefit downstream applications (see the inset: 21K → 0.8K, 1 ring). Reducing the memory consumption of the Kronecker product would further increase the scalability of our method (see App. ??). Incorporating a fast eigen-approximation or removing the use of eigendecomposition entirely would further accelerate the spectral coarsening (see Fig. 23). Further analysis of the trade-off between convergence and the number of cliques could offer insight toward future applications of chordal decomposition. Extending our spectral coarsening of surface-based geometric operators to volumetric stiffness matrices could also provide an alternative way to deal with numerical stiffening in simulation. As a first-order method, ADMM is slow to obtain highly accurate solutions, but fast at reaching moderately accurate ones. Similar to other splitting methods, ADMM is sensitive to the conditioning of the problem data; adding a preconditioner could make our solver more robust to scaling problems and increase its performance. Finally, it would also be interesting to extend our method to many other applications beyond geometry processing and shape matching, such as physics-based simulation, topology optimization, algebraic multigrid, and spectral graph reduction.
ACKNOWLEDGMENTS
This work is funded in part by NSERC Discovery (RGPIN-2017-05524, RGPIN-2017-05235, RGPAS-2017-507938), Connaught Fund (503114), CFI-JELF Fund, Accelerator (RGPAS-2017-507909), New Frontiers of Research Fund (NFRFE-201), the Ontario Early Research Award program, the Canada Research Chairs Program, the Fields Centre for Quantitative Analysis and Modelling, and gifts by Adobe Systems, Autodesk and MESH Inc. We especially thank Yifan Sun, Giovanni Fantuzzi and Yang Zheng for their enlightening discussions and advice about chordal decomposition, and Thibault Lescoat for sharing the spectral simplification implementation and discussions about running experiments. We thank Abhishek Madan, Silvia Sellán, Michael Xu, Sarah Kushner, Rinat Abdrashitov, Hengguang Zhou and Kaihua Tang for proofreading; Mirela Ben-Chen for insightful discussions about the weighted functional map; Josh Holinaty for testing the code; John Hancock for the IT support; and the anonymous reviewers for their helpful comments and suggestions.
REFERENCES
Jim Agler, William Helton, Scott McCullough, and Leiba Rodman. 1988. Positive semidefinite matrices with a given sparsity pattern. Linear Algebra and its Applications 107 (1988), 101–149.
Martin S. Andersen, Joachim Dahl, and Lieven Vandenberghe. 2010. Implementation of nonsymmetric interior-point methods for linear optimization over sparse matrix cones. Math. Program. Comput. 2, 3-4 (2010), 167–201.
Jean R. S. Blair and Barry Peyton. 1993. An introduction to chordal graphs and clique trees. In Graph Theory and Sparse Matrix Computation. 1–29.
Stephen P. Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. 2011. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning 3, 1 (2011), 1–122.
Gecia Bravo-Hermsdorff and Lee Gunderson. 2019. A Unifying Framework for Spectrum-Preserving Graph Sparsification and Coarsening. In Advances in Neural Information Processing Systems 32. 7736–7747.
Max Budninskiy, Houman Owhadi, and Mathieu Desbrun. 2019. Operator-adapted wavelets for finite-element differential forms. J. Comput. Phys. 388 (2019), 144–177.
Samuel Burer. 2003. Semidefinite Programming in the Space of Partial Positive Semidefinite Matrices. SIAM Journal on Optimization 14, 1 (2003), 139–172.
Desai Chen, David I. W. Levin, Wojciech Matusik, and Danny M. Kaufman. 2017. Dynamics-aware numerical coarsening for fabrication design. ACM Transactions on Graphics (TOG) 36, 4 (2017), 84:1–84:15.
Desai Chen, David I. W. Levin, Shinjiro Sueda, and Wojciech Matusik. 2015. Data-driven finite elements for geometry and material design. ACM Transactions on Graphics (TOG) 34, 4 (2015), 74:1–74:10.
Jiong Chen, Hujun Bao, Tianyu Wang, Mathieu Desbrun, and Jin Huang. 2018. Numerical Coarsening Using Discontinuous Shape Functions. ACM Transactions on Graphics (TOG) 37, 4, Article 120 (July 2018), 12 pages.
Jiong Chen, Max Budninskiy, Houman Owhadi, Hujun Bao, Jin Huang, and Mathieu Desbrun. 2019a. Material-Adapted Refinable Basis Functions for Elasticity Simulation. ACM Transactions on Graphics (TOG) 38, 6, Article 161 (Nov. 2019), 15 pages.
Yu Ju Edwin Chen, David I. W. Levin, Danny Kaufmann, Uri M. Ascher, and Dinesh K. Pai. 2019b. EigenFit for consistent elastodynamic simulation across mesh resolution. In Proceedings of the 18th Annual ACM SIGGRAPH/Eurographics Symposium on Computer Animation, SCA 2019. 5:1–5:13.
Paolo Cignoni, Claudio Montani, and Roberto Scopigno. 1998. A comparison of mesh simplification algorithms. Comput. Graph. 22, 1 (1998), 37–54.
Jonathan D. Cohen, Dinesh Manocha, and Marc Olano. 2003. Successive Mappings: An Approach to Polygonal Mesh Simplification with Guaranteed Error Bounds. Int. J. Comput. Geometry Appl. 13, 1 (2003), 61.
David Cohen-Steiner, Pierre Alliez, and Mathieu Desbrun. 2004. Variational shape approximation. ACM Transactions on Graphics (TOG) 23, 3 (2004), 905–914.
Katsuki Fujisawa, Sunyoung Kim, Masakazu Kojima, Yoshio Okamoto, and Makoto Yamashita. 2009. B-453 User's Manual for SparseCoLO: Conversion Methods for SPARSE COnic-form Linear Optimization Problems. (2009).
Mituhiro Fukuda, Masakazu Kojima, Kazuo Murota, and Kazuhide Nakata. 2001. Exploiting Sparsity in Semidefinite Programming via Matrix Completion I: General Framework. SIAM Journal on Optimization 11, 3 (2001), 647–674.
Michael Garland and Paul S. Heckbert. 1997. Surface simplification using quadric error metrics. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1997. 209–216.
Michael Garland and Paul S. Heckbert. 1998. Simplifying surfaces with color and texture using quadric error metrics. In Visualization '98, Proceedings. 263–269.
Michael Grant and Stephen Boyd. 2008. Graph implementations for nonsmooth convex programs. In Recent Advances in Learning and Control. 95–110.
Michael Grant and Stephen Boyd. 2014. CVX: Matlab Software for Disciplined Convex Programming, version 2.1.
Robert Grone, Charles R. Johnson, Eduardo M. Sá, and Henry Wolkowicz. 1984. Positive definite completions of partial Hermitian matrices. Linear Algebra and its Applications 58 (1984), 109–124.
Pinar Heggernes. 2006. Minimal triangulations of graphs: A survey. Discret. Math. (2006).
Hugues Hoppe. 1996. Progressive meshes. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1996. 99–108.
Hugues Hoppe. 1997. View-dependent refinement of progressive meshes. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1997. 189–198.
Hugues Hoppe, Tony DeRose, Tom Duchamp, John Alan McDonald, and Werner Stuetzle. 1993. Mesh optimization. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH 1993. 19–26.
Alec Jacobson et al. 2018. gptoolbox: Geometry Processing Toolbox. http://github.com/alecjacobson/gptoolbox.
Doug L. James and Dinesh K. Pai. 1999. ArtDefo: Accurate Real Time Deformable Objects. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99). 65–72.
Yu Jin, Andreas Loukas, and Joseph JáJá. 2020. Graph Coarsening with Preserved Spectral Properties. In The 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020, Vol. 108. 4452–4462.
Naonori Kakimura. 2010. A direct proof for the matrix decomposition of chordal-structured positive semidefinite matrices. Linear Algebra Appl. (2010).
Lily Kharevych, Patrick Mullen, Houman Owhadi, and Mathieu Desbrun. 2009. Numerical coarsening of inhomogeneous elastic materials. ACM Transactions on Graphics (TOG) 28, 3 (2009), 51.
Sunyoung Kim, Masakazu Kojima, Martin Mevissen, and Makoto Yamashita. 2011. Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion. Math. Program. (2011).
Thibault Lescoat, Hsueh-Ti Derek Liu, Jean-Marc Thiery, Alec Jacobson, Tamy Boubekeur, and Maks Ovsjanikov. 2020. Spectral Mesh Simplification. Computer Graphics Forum 39, 2 (2020), 49–58.
Dingzeyu Li, Yun (Raymond) Fei, and Changxi Zheng. 2015. Interactive Acoustic Transfer Approximation for Modal Sound. ACM Transactions on Graphics (TOG) (2015).
Hsueh-Ti Derek Liu, Alec Jacobson, and Maks Ovsjanikov. 2019. Spectral Coarsening of Geometric Operators. ACM Transactions on Graphics (TOG) 38, 4, Article 105 (July 2019), 13 pages.
Andreas Loukas. 2019. Graph Reduction with Spectral and Cut Guarantees. J. Mach. Learn. Res. 20 (2019), 116:1–116:42.
Andreas Loukas and Pierre Vandergheynst. 2018. Spectrally Approximating Large Graphs with Smaller Graphs. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Vol. 80. 3243–3252.
Yunlong Lu, Qing Liu, Yi Wang, Peter Gardner, Wang He, Yi Chen, Jifu Huang, and Taijun Liu. 2020. Seamless Integration of Active Antenna With Improved Power Efficiency. IEEE Access (2020), 5932–5939.
Haggai Maron, Nadav Dym, Itay Kezurer, Shahar Kovalsky, and Yaron Lipman. 2016. Point Registration via Efficient Convex Relaxation. ACM Transactions on Graphics (TOG) 35, 4, Article 73 (July 2016), 12 pages.
Kazuhide Nakata, Katsuki Fujisawa, and Mituhiro Fukuda. 2003. Exploiting sparsity in semidefinite programming via matrix completion II: implementation and numerical results. Math. Program. 95, 2 (2003), 303–327.
Ahmad Nasikun, Christopher Brandt, and Klaus Hildebrandt. 2018. Fast Approximation of Laplace-Beltrami Eigenproblems. Comput. Graph. Forum 37, 5 (2018), 121–134.
Maks Ovsjanikov, Mirela Ben-Chen, Justin Solomon, Adrian Butscher, and Leonidas J. Guibas. 2012. Functional maps: a flexible representation of maps between shapes. ACM Transactions on Graphics (TOG) 31, 4 (2012), 30:1–30:11.
Maks Ovsjanikov, Etienne Corman, Michael M. Bronstein, Emanuele Rodolà, Mirela Ben-Chen, Leonidas J. Guibas, Frédéric Chazal, and Alexander M. Bronstein. 2017. Computing and processing correspondences with functional maps. In Special Interest Group on Computer Graphics and Interactive Techniques Conference, SIGGRAPH '17 Courses. 5:1–5:62.
Houman Owhadi. 2017. Multigrid with Rough Coefficients and Multiresolution Operator Decomposition from Hierarchical Information Games. SIAM Rev. 59, 1 (2017), 99–149.
A. Cengiz Öztireli, Marc Alexa, and Markus H. Gross. 2010. Spectral sampling of manifolds. ACM Transactions on Graphics (TOG) 29, 6 (2010), 168.
E. Rodolà, L. Cosmo, M. M. Bronstein, A. Torsello, and D. Cremers. 2017. Partial Functional Correspondence. Comput. Graph. Forum 36, 1 (Jan. 2017), 222–236.
Nicholas Sharp, Yousuf Soliman, and Keenan Crane. 2019. Navigating intrinsic triangulations. ACM Transactions on Graphics (TOG) 38, 4 (2019), 55.
Hang Si. 2015. TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator. ACM Trans. Math. Softw. 41, 2, Article 11 (Feb. 2015), 36 pages.
Gun Srijuntongsiri and Stephen A. Vavasis. 2004. A Fully Sparse Implementation of a Primal-Dual Interior-Point Potential Reduction Method for Semidefinite Programming. CoRR (2004).
Yifan Sun. 2015. Decomposition Methods for Semidefinite Optimization. Ph.D. Dissertation. UCLA.
Yifan Sun, Martin S. Andersen, and Lieven Vandenberghe. 2014. Decomposition in Conic Optimization with Partially Separable Structure. SIAM Journal on Optimization (2014).
Yifan Sun and Lieven Vandenberghe. 2015. Decomposition methods for sparse matrix nearness problems. SIAM J. Matrix Analysis Applications 36, 4 (2015), 1691–1717.
Lieven Vandenberghe and Martin S. Andersen. 2015. Chordal Graphs and Semidefinite Optimization. Foundations and Trends in Optimization 1, 4 (2015), 241–433.
Foundations and Trends in Optimization
1, 4 (2015), 241–433.M. Yannakakis. 1981. Computing the Minimum Fill-In is NP-Complete.
SIAM Journalon Algebraic Discrete Methods
2, 1 (1981), 77–79.Zhiqiang Zhao, Yongyu Wang, and Zhuo Feng. 2018. Nearly-Linear Time SpectralGraph Reduction for Scalable Graph Partitioning and Data Visualization.
CoRR (2018).Yang Zheng, Giovanni Fantuzzi, and Antonis Papachristodoulou. 2017a. Exploitingsparsity in the coefficient matching conditions in sum-of-squares programmingusing ADMM.
IEEE control systems letters
1, 1 (2017), 80–85.Yang Zheng, Giovanni Fantuzzi, and Antonis Papachristodoulou. 2019. Fast ADMM forSum-of-Squares Programs Using Partial Orthogonality.
IEEE Trans. Automat. Contr.
64, 9 (2019), 3869–3876.Yang Zheng, Giovanni Fantuzzi, Antonis Papachristodoulou, Paul Goulart, and AndrewWynn. 2017b. Fast ADMM for semidefinite programs with chordal sparsity. In . 3335–3340.Yang Zheng, Giovanni Fantuzzi, Antonis Papachristodoulou, Paul Goulart, and An-drew Wynn. 2020. Chordal decomposition in operator-splitting methods for sparsesemidefinite programs.
Math. Program.
IEEE Trans. Automat. Contr.
63, 3 (2018), 752–767.
A ALTERNATING DIRECTION METHOD OF MULTIPLIERS
The alternating direction method of multipliers (ADMM) solves optimization problems of the form

$$\min_{x,\, z}\; f(x) + g(z) \qquad (33)$$
$$\text{s.t.}\;\; Ax + Bz = c. \qquad (34)$$

The (scaled) ADMM solves the problem by iteratively applying the following steps:

$$\begin{aligned} x^{t+1} &:= \arg\min_{x}\Big( f(x) + \tfrac{\rho}{2}\,\|Ax + Bz^{t} - c + u^{t}\|_2^2 \Big) \\ z^{t+1} &:= \arg\min_{z}\Big( g(z) + \tfrac{\rho}{2}\,\|Ax^{t+1} + Bz - c + u^{t}\|_2^2 \Big) \\ \tilde{u}^{t+1} &:= u^{t} + Ax^{t+1} + Bz^{t+1} - c \\ \rho^{t+1}, u^{t+1} &:= \text{update}(\rho^{t}), \end{aligned} \qquad (35)$$

where ρ is the penalty parameter and u is the scaled dual variable. In the last step, a common strategy is to update the penalty ρ as

$$\rho^{t+1} = \begin{cases} \tau^{\text{incr}}\,\rho^{t} & \text{if } \|r^{t}\|_2 > \mu\,\|s^{t}\|_2 \\ \rho^{t} / \tau^{\text{decr}} & \text{if } \|s^{t}\|_2 > \mu\,\|r^{t}\|_2 \\ \rho^{t} & \text{otherwise}, \end{cases} \qquad (36)$$

where $\tau^{\text{incr}}, \tau^{\text{decr}}, \mu > 1$, and r and s are the primal residual and the dual residual, respectively. We can compute them as

$$r^{t+1} = Ax^{t+1} + Bz^{t+1} - c, \qquad s^{t+1} = \rho\, A^{\top} B\, (z^{t+1} - z^{t}). \qquad (37)$$

After updating ρ we must also rescale the dual variable u as

$$u^{t+1} = \tilde{u}^{t+1} \times \frac{\rho^{t}}{\rho^{t+1}}. \qquad (38)$$

A common stopping criterion is that both residuals fall below their thresholds, i.e., $\|r^{t}\|_2 < \epsilon^{\text{pri}}$ and $\|s^{t}\|_2 < \epsilon^{\text{dual}}$. We only review the basic concepts of ADMM here for self-containedness; we wholeheartedly refer the reader to the excellent survey of Boyd et al. [2011] for more information on ADMM.
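To make the update rules above concrete, here is a minimal Python sketch of scaled ADMM with the residual-balancing penalty update. It is illustrative only; `prox_f` and `prox_g` are hypothetical user-supplied solvers for the two subproblems of Eq. 35, not part of our method.

```python
import numpy as np

def admm(prox_f, prox_g, A, B, c, rho=1.0, mu=10.0, tau=2.0,
         eps_pri=1e-6, eps_dual=1e-6, max_iter=1000):
    """Scaled ADMM for min f(x) + g(z) s.t. Ax + Bz = c (Eqs. 33-38).

    prox_f(v, rho) returns argmin_x f(x) + (rho/2)||Ax - v||^2;
    prox_g(v, rho) returns argmin_z g(z) + (rho/2)||Bz - v||^2.
    """
    x, z, u = np.zeros(A.shape[1]), np.zeros(B.shape[1]), np.zeros(c.shape[0])
    for _ in range(max_iter):
        x = prox_f(c - B @ z - u, rho)             # x-update (Eq. 35)
        z_old, z = z, prox_g(c - A @ x - u, rho)   # z-update (Eq. 35)
        r = A @ x + B @ z - c                      # primal residual (Eq. 37)
        u = u + r                                  # scaled dual update
        s = rho * (A.T @ (B @ (z - z_old)))        # dual residual (Eq. 37)
        if np.linalg.norm(r) < eps_pri and np.linalg.norm(s) < eps_dual:
            break
        if np.linalg.norm(r) > mu * np.linalg.norm(s):   # penalty update (Eq. 36)
            rho, u = rho * tau, u / tau                  # rescale dual (Eq. 38)
        elif np.linalg.norm(s) > mu * np.linalg.norm(r):
            rho, u = rho / tau, u * tau
    return x, z
```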
B CHANGE OF VARIABLES

We describe the details of how to apply the change of variables and vectorization to the constraints presented in Eq. 11. Given the matrices P_E and P_E^{-1} in Eq. 17, which allow us to go back and forth between vec(X) and x_E, we can vectorize the equality constraint in Eq. 11 as

$$\operatorname{vec}(Xv) = \operatorname{vec}(e) \;\Rightarrow\; (v^{\top} \otimes I)\operatorname{vec}(X) = e \qquad (39)$$
$$\Rightarrow\; \underbrace{(v^{\top} \otimes I)\, P_E^{-1}}_{G}\; x_E = e \qquad (40)$$
$$\Rightarrow\; G\, x_E = e, \qquad (41)$$

where I is the identity matrix. For the chordal decomposition Eq. 15, we can directly apply the vectorization strategy discussed in Sec. 3.1:

$$\operatorname{vec}(X) = \sum_{i=1}^{p} \operatorname{vec}(P_i^{\top} Z_i P_i) \;\Rightarrow\; \operatorname{vec}(X) = \sum_{i=1}^{p} K_i \underbrace{\operatorname{vec}(Z_i)}_{z_i} \qquad (42)$$
$$\Rightarrow\; P_E^{-1} x_E = \sum_{i=1}^{p} K_i z_i, \qquad (43)$$

where we define z_i := vec(Z_i). Therefore we can easily rewrite the PSD constraint on each Z_i as

$$Z_i = \operatorname{vec}^{-1}(z_i) \in \mathbb{S}^{n_i}_{+}. \qquad (44)$$
Combining all these results gives us the formulae in Eq. 18.
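As a sanity check, the Kronecker identities used above can be verified numerically. The sketch below assumes column-major (Fortran-order) vectorization, under which K_i takes the form P_i^⊤ ⊗ P_i^⊤; this form is inferred from Eq. 42, since the precise definition of K_i is given in Sec. 3.1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, ni = 6, 3
vec = lambda M: M.reshape(-1, order='F')   # column-major vectorization

# Eq. 39: vec(Xv) = (v^T kron I) vec(X)
X, v = rng.standard_normal((n, n)), rng.standard_normal(n)
assert np.allclose(X @ v, np.kron(v, np.eye(n)) @ vec(X))

# Eq. 42: vec(P_i^T Z_i P_i) = (P_i^T kron P_i^T) vec(Z_i)
P = rng.standard_normal((ni, n))   # stand-in for the clique selection matrix P_i
Z = rng.standard_normal((ni, ni))
assert np.allclose(vec(P.T @ Z @ P), np.kron(P.T, P.T) @ vec(Z))
```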
C DERIVATION OF ADMM STEPS
Here we describe how to derive the ADMM steps (see Eq. 35) for solving the optimization in Eq. 24. Our derivation follows a strategy similar to Sec. 4.2 of [Zheng et al. 2020]. We start by introducing an auxiliary variable y:

$$\min_{x_E,\, y,\, \tilde{z}}\; f(x_E) \qquad (45)$$
$$\text{s.t.}\;\; G x_E = e \qquad (46)$$
$$P_C P_E^{-1} x_E = P_C \widetilde{K} y \qquad (47)$$
$$\operatorname{vec}^{-1}(Q_i \tilde{z}_i) \succeq 0, \quad i = 1, \dots, p \qquad (48)$$
$$y = \tilde{z}. \qquad (49)$$

Then we introduce the indicator function δ_W as

$$\delta_{\mathcal{W}}(x) = \begin{cases} 0, & x \in \mathcal{W} \\ \infty, & \text{otherwise}. \end{cases} \qquad (50)$$

This allows us to rewrite Eq. 45 as

$$\min_{x_E,\, y,\, \tilde{z}}\; \underbrace{f(x_E) + \delta_{e}(G x_E) + \delta_{0}\big(P_C P_E^{-1} x_E - P_C \widetilde{K} y\big)}_{\text{function of } \mathcal{X} = \{x_E,\, y\}} + \underbrace{\sum_{i=1}^{p} \delta_{+}\big(\operatorname{vec}^{-1}(Q_i \tilde{z}_i)\big)}_{\text{function of } \mathcal{Z} = \{\tilde{z}\}} \quad \text{s.t.}\;\; y - \tilde{z} = 0, \qquad (51)$$

where δ_+ denotes the indicator function of the PSD constraint. This form of the optimization enables us to directly apply the ADMM steps in Eq. 35. In particular, the update of X = {x_E, y} is

$$\arg\min_{x_E,\, y}\; f(x_E) + \tfrac{\rho}{2}\, \|y - \tilde{z}^{t} + u^{t}\|_2^2$$
$$\text{s.t.}\;\; G x_E = e, \qquad (52)$$
$$P_C P_E^{-1} x_E - P_C \widetilde{K} y = 0,$$

where the solution depends on the energy function f in use. In the case of the spectral coarsening energy, this boils down to a single linear solve of a KKT system (see App. E). The update of Z = {z̃} is

$$\arg\min_{\tilde{z}}\; \sum_{i=1}^{p} \|y_i^{t+1} - \tilde{z}_i + u_i^{t}\|_2^2 \qquad (53)$$
$$\text{s.t.}\;\; \operatorname{vec}^{-1}(Q_i \tilde{z}_i) \succeq 0, \quad i = 1, \dots, p. \qquad (54)$$

This can be solved by projecting a set of small dense matrices vec^{-1}(Q_i (y_i^{t+1} + u_i^{t})) onto the PSD cone, which requires computing an eigendecomposition of each matrix and removing its negative eigenvalues. This step is efficient because each matrix to be projected is small, and the projections can be trivially parallelized.
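The per-clique projection is a standard PSD-cone projection. Below is a generic sketch in Python (our actual implementation is in MATLAB, see App. F):

```python
import numpy as np

def project_psd(M):
    """Project a small dense symmetric matrix onto the PSD cone:
    eigendecompose and clamp the negative eigenvalues to zero (Eqs. 53-54)."""
    M = 0.5 * (M + M.T)                    # symmetrize to guard against round-off
    w, Q = np.linalg.eigh(M)
    return (Q * np.maximum(w, 0.0)) @ Q.T  # Q diag(max(w, 0)) Q^T
```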
D arg min X FOR SPECTRAL COARSENING

Applying ADMM to solve the spectral coarsening problem requires us to derive the update of X (see Eq. 52). We start by vectorizing the spectral coarsening energy Eq. 5 as

$$f(X) = \| R M^{-1} L \Phi - \widetilde{M}^{-1} X R \Phi \|^2_{\widetilde{M}} \qquad (55)$$
$$= \| \underbrace{\widetilde{M}^{1/2} R M^{-1} L \Phi}_{W} - \underbrace{\widetilde{M}^{-1/2}}_{V}\, X\, \underbrace{R \Phi}_{U} \|^2_F \qquad (56)$$
$$= \| W - V X U \|^2_F \qquad (57)$$
$$= \| \operatorname{vec}(W) - \operatorname{vec}(V X U) \|_2^2 \qquad (58)$$
$$= \| \operatorname{vec}(W) - (U^{\top} \otimes V) \operatorname{vec}(X) \|_2^2. \qquad (59)$$

We then apply the change of variables in Eq. 17 to rewrite the energy as

$$f(x_E) = \| \underbrace{\operatorname{vec}(W)}_{w} - \underbrace{(U^{\top} \otimes V) P_E^{-1}}_{E}\, x_E \|_2^2 = \| w - E x_E \|_2^2. \qquad (60)$$

Updating X = {x_E, y} in the ADMM (Eq. 52) amounts to obtaining the minimizer of

$$\min_{x_E,\, y}\; \| w - E x_E \|_2^2 + \tfrac{\rho}{2}\, \| y - \tilde{z}^{t} + u^{t} \|_2^2 \qquad (61)$$
$$\text{s.t.}\;\; G x_E = e, \qquad (62)$$
$$\underbrace{P_C P_E^{-1}}_{C}\, x_E - \underbrace{P_C \widetilde{K}}_{D}\, y = 0. \qquad (63)$$

We first form the Lagrangian with multipliers µ_1, µ_2:

$$\mathcal{L}(x_E, y, \mu_1, \mu_2) = \| w - E x_E \|_2^2 + \tfrac{\rho}{2}\, \| y - \tilde{z}^{t} + u^{t} \|_2^2 + \mu_1^{\top} (C x_E - D y) + \mu_2^{\top} (G x_E - e). \qquad (64, 65)$$

Setting the derivatives to zero gives

$$\partial \mathcal{L} / \partial x_E = 0 \;\Rightarrow\; E^{\top} E\, x_E + C^{\top} \mu_1 + G^{\top} \mu_2 = E^{\top} w \qquad (66)$$
$$\partial \mathcal{L} / \partial y = 0 \;\Rightarrow\; y = \tilde{z} - u + \tfrac{1}{\rho} D^{\top} \mu_1 \qquad (67)$$
$$\partial \mathcal{L} / \partial \mu_1 = 0 \;\Rightarrow\; C x_E - D y = 0 \qquad (68)$$
$$\partial \mathcal{L} / \partial \mu_2 = 0 \;\Rightarrow\; G x_E = e. \qquad (69)$$

Substituting the expression for y from ∂L/∂y = 0 into ∂L/∂µ_1 = 0 yields the system

$$E^{\top} E\, x_E + C^{\top} \mu_1 + G^{\top} \mu_2 = E^{\top} w \qquad (70)$$
$$C x_E - \tfrac{1}{\rho} D D^{\top} \mu_1 = D (\tilde{z} - u) \qquad (71)$$
$$G x_E = e. \qquad (72)$$
This enables us to obtain the optimal x_E^⋆, µ_1^⋆, µ_2^⋆ by solving a single linear system

$$\begin{bmatrix} E^{\top} E & C^{\top} & G^{\top} \\ C & -\tfrac{1}{\rho} D D^{\top} & 0 \\ G & 0 & 0 \end{bmatrix} \begin{bmatrix} x_E \\ \mu_1 \\ \mu_2 \end{bmatrix} = \begin{bmatrix} E^{\top} w \\ D (\tilde{z} - u) \\ e \end{bmatrix}. \qquad (73)$$

Then we can recover the optimal y^⋆ as

$$y^{\star} = \tilde{z} - u + \tfrac{1}{\rho} D^{\top} \mu_1^{\star}. \qquad (74)$$
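A Python/SciPy sketch of this X-update, assuming the operators E^⊤E, C, D, G and the vectors E^⊤w, e have already been assembled as SciPy sparse matrices and NumPy vectors following Eqs. 60-63:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def x_update(EtE, Etw, C, D, G, e, rho, z_tilde, u):
    """Solve the KKT system (Eq. 73) for x_E, mu_1, mu_2, then recover y (Eq. 74)."""
    n_x, n_c = EtE.shape[0], C.shape[0]
    KKT = sp.bmat([[EtE, C.T,                      G.T ],
                   [C,  -(1.0 / rho) * (D @ D.T),  None],
                   [G,   None,                     None]], format='csc')
    rhs = np.concatenate([Etw, D @ (z_tilde - u), e])
    sol = spla.spsolve(KKT, rhs)
    x_E, mu_1 = sol[:n_x], sol[n_x:n_x + n_c]
    y = z_tilde - u + (1.0 / rho) * (D.T @ mu_1)   # Eq. 74
    return x_E, y
```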
E arg min X FOR SPECTRAL COARSENING

When the number of eigenvectors chosen to be preserved is large, the matrix U^⊤ ⊗ V in Eq. 60 can be large. However, we can avoid explicitly constructing U^⊤ ⊗ V by leveraging the fact that only E^⊤E and E^⊤w appear in the linear solve Eq. 73. Using the properties (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD) and (B^⊤ ⊗ A) vec(X) = vec(AXB), we can instead compute E^⊤E and E^⊤w as

$$E^{\top} E = \big( (U^{\top} \otimes V) P_E^{-1} \big)^{\top} (U^{\top} \otimes V) P_E^{-1} \qquad (75)$$
$$= (P_E^{-1})^{\top} (U^{\top} \otimes V)^{\top} (U^{\top} \otimes V) P_E^{-1} \qquad (76)$$
$$= (P_E^{-1})^{\top} (U \otimes V^{\top}) (U^{\top} \otimes V) P_E^{-1} \qquad (77)$$
$$= (P_E^{-1})^{\top} \big( (U U^{\top}) \otimes (V^{\top} V) \big) P_E^{-1}, \qquad (78)$$

$$E^{\top} w = \big( (U^{\top} \otimes V) P_E^{-1} \big)^{\top} \operatorname{vec}(W) \qquad (79)$$
$$= (P_E^{-1})^{\top} (U \otimes V^{\top}) \operatorname{vec}(W) \qquad (80)$$
$$= (P_E^{-1})^{\top} \operatorname{vec}(V^{\top} W U^{\top}), \qquad (81)$$

where the sizes of (UU^⊤) ⊗ (V^⊤V) and V^⊤WU^⊤ are independent of the number of eigenvectors we choose to preserve.
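The following Python sketch mirrors Eqs. 78 and 81. It is illustrative for moderate problem sizes; in practice one would exploit the sparsity of P_E^{-1} rather than forming the Kronecker factor densely. `PE_inv` denotes P_E^{-1} represented as a matrix.

```python
import numpy as np

def normal_equations(PE_inv, U, V, W):
    """Form E^T E (Eq. 78) and E^T w (Eq. 81) without building U^T kron V,
    whose size grows with the number of preserved eigenvectors."""
    M = np.kron(U @ U.T, V.T @ V)    # (U U^T) kron (V^T V): size independent of k
    EtE = PE_inv.T @ M @ PE_inv
    Etw = PE_inv.T @ (V.T @ W @ U.T).reshape(-1, order='F')  # vec(V^T W U^T)
    return EtE, Etw
```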
F IMPLEMENTATION

Our solver is implemented in MATLAB using gptoolbox [Jacobson et al. 2018]. We adapt the MATLAB code of [Sun and Vandenberghe 2015] to compute the chordal decomposition. Runtimes for all examples were measured on a MacBook Pro with an Intel i5 2.3GHz processor, 16GB of RAM, and an Intel Iris Plus Graphics 655 GPU. The volume-to-surface experiments were run on a Linux workstation with a dual 14-core 2.2GHz processor, 383GB of RAM, and two Titan RTX 24GB GPUs. We did not use multi-threading, though the projections onto PSD cones can easily be parallelized using a MATLAB MEX file. Since the KKT matrix in the arg min X step remains the same until ρ is updated, we perform a numerical factorization only when ρ is updated and reuse it until ρ changes again (usually after tens of iterations); a sketch of this caching strategy appears at the end of this appendix.

For consistency, we evaluate all results on the first 100 eigenvectors across our experiments unless specified otherwise. We preserve the first 100 eigenvectors of the surface Laplacian in spectral coarsening and simplification, and use more eigenvectors for the volumetric Laplacian or when the system becomes underdetermined. We also normalize all eigenvectors to unit length and scale the mesh so that each vertex has unit area. For a fair comparison, we compare the runtime of our MATLAB implementation with the MATLAB implementation of [Liu et al. 2019]. When comparing against [Lescoat et al. 2020], we use their decimation algorithm without edge flips and enable approximation of the minimizer on collapsed edges.

In our implementation, projecting each clique matrix onto the PSD cone is relatively cheap because the clique sizes typically range from tens to a few hundred and can be controlled by the parameters of the clique-merging stage of the chordal decomposition; after decomposition, the clique sizes are approximately bounded by the merging parameters. In our experiments, we set the clique-merging parameter to 200 so that the clique matrices stay in the few-hundreds range, trading off eigendecomposition speed against convergence rate. As shown in Fig. 24 and Fig. 25, the relationship between the clique parameter and the ADMM runtime is non-monotonic; we experimentally determined that a value of 200 works well for all our examples and leave optimal parameter selection for future work. We recommend setting the clique parameter to be larger than 100 when preserving only the first 100 eigenvectors, to ensure that our method converges. We also observe that increasing the number of preserved eigenvectors leads to better convergence and avoids an underdetermined system.

Fig. 24. We plot the change of the average number of cliques and of the average maximal and minimal clique size with respect to the clique parameters that we control in the chordal decomposition algorithm, when coarsening various meshes to 800 vertices.

Fig. 25. We show the change of the total ADMM runtime, the ADMM runtime per iteration, and the number of iterations with respect to the clique parameters, when coarsening a number of meshes to 800 vertices. The lines denote the average and the colored regions denote the standard deviation.
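The factorization reuse mentioned above can be sketched as follows, here in Python/SciPy rather than MATLAB; `assemble_kkt` is a hypothetical helper that builds the sparse KKT matrix of Eq. 73 for a given ρ.

```python
import scipy.sparse.linalg as spla

class KKTCache:
    """Reuse the numeric factorization of the KKT matrix (Eq. 73) across
    ADMM iterations; refactor only when the penalty rho changes."""
    def __init__(self, assemble_kkt):
        self.assemble_kkt = assemble_kkt   # rho -> sparse KKT matrix (CSC)
        self.rho, self.lu = None, None

    def solve(self, rho, rhs):
        if self.lu is None or rho != self.rho:
            self.lu = spla.splu(self.assemble_kkt(rho))  # refactor on rho change
            self.rho = rho
        return self.lu.solve(rhs)          # cheap back-substitution otherwise
```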
Let k be the number of eigenvectors we choose to preserve and m the number of vertices in the coarsened domain. When the number of degrees of freedom defined by the sparsity pattern is large (e.g., for the volumetric Laplacian or a 3-ring surface Laplacian), we recommend setting the number of preserved eigenvectors k to be at least a fixed fraction of m, and using the weighted energy to preserve the low-frequency modes. Experimentally, we observe that when the DOF count is too large, the system may become underdetermined for volumetric meshes and 2- or 3-ring patterns once m exceeds k by a sufficiently large factor.

Fig. 26. Our algorithm can further improve the spectral properties of [Lescoat et al. 2020] as a post-processing step.

Fig. 27. Due to the freedom of choosing the output sparsity pattern, our method can serve as a post-processing tool to further improve the resulting operator from the method of [Lescoat et al. 2020]. The results indicate that our post-processed operators yield better functional maps (middle) than the output operators of [Lescoat et al. 2020] (left), as well as better eigenvalues (right).

Fig. 28. We compare the runtime of our optimization algorithm, in both its weighted and unweighted versions, with [Liu et al. 2019] using the same 3-ring sparsity pattern. Here we only consider the solve time, factoring out the precomputation for both our method and the method of [Liu et al. 2019].
G ADDITIONAL RESULTS