Mesh Total Generalized Variation for Denoising
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021
Zheng Liu, YanLei Li, Weina Wang, Ligang Liu, and Renjie Chen†

Abstract—Total Generalized Variation (TGV) has recently proven successful in image processing for preserving sharp features as well as smooth transition variations. However, none of the existing works aims at numerically calculating TGV over triangular meshes. In this paper, we develop a novel numerical framework to discretize the second-order TGV over triangular meshes. Further, we propose a TGV-based variational model to restore the face normal field for mesh denoising. The TGV regularization in the proposed model is represented by a combination of a first- and a second-order term, which can be automatically balanced. This TGV regularization is able to locate sharp features and preserve them via the first-order term, while recognizing smoothly curved regions and recovering them via the second-order term. To solve the optimization problem, we introduce an efficient iterative algorithm based on variable splitting and the augmented Lagrangian method. Extensive results and comparisons on synthetic and real scanning data validate that the proposed method outperforms the state-of-the-art methods visually and numerically.
Index Terms—Mesh denoising, total generalized variation, augmented Lagrangian method, total variation, normal filtering
• Z. Liu and Y. Li are with the National Engineering Research Center of Geographic Information System, School of Geography and Information Engineering, China University of Geosciences (Wuhan).
• W. Wang is with the Department of Mathematics, Hangzhou Dianzi University.
• R. Chen and L. Liu are with the School of Mathematical Sciences, University of Science and Technology of China.
† Corresponding author. E-mail: [email protected].

1 INTRODUCTION

Mesh denoising is one of the most fundamental research topics in geometry processing. With the rapid development of 3D scanning devices and depth cameras, automatically acquiring and reconstructing meshes from the real world has become increasingly popular and common [1]. However, the acquired meshes are inevitably contaminated by noise because of local measurement errors in the scanning process and computational errors in the reconstruction algorithm. The noise not only degrades the quality of meshes, but also causes errors in downstream geometry processing applications [2]. Thus, mesh denoising has been a widely studied topic in recent years. The main purpose of mesh denoising is to remove noise while recovering geometric features as accurately as possible [3]. In fact, noise and geometric features are both high-frequency information, hence it is challenging to distinguish them in the noisy input, especially in the presence of large noise.

To suppress noise while preserving geometric features, various mesh denoising methods have been investigated, including filtering-based methods [4], [5], [6], variational methods [7], [8], [9], [10], [11], [12], nonlocal-based methods [13], [14], [15], data-driven methods [3], [16], [17], [18], etc. Among them, variational methods have attracted much attention, because they can preserve sharp features well and suppress noise significantly.

The variational model usually consists of a regularization term and a fidelity term. The total variation (TV) regularization is known for its excellent edge-preserving capability in image processing, and it has been extended by Zhang et al. [7] to restore the face normal field of triangular meshes. However, as the TV regularization uses a first-order operator, it tends to transform smooth transition variations into piecewise constant ones. Hence, results of the TV regularization often suffer from staircase artifacts in smooth regions. These artifacts degrade the visual quality of the denoised result and sometimes induce false features that do not exist in smooth regions. To reduce the undesired artifacts of the TV regularization, several higher-order regularization models [19], [20] have been proposed, which can preserve geometric features and simultaneously prevent introducing staircase artifacts. Unfortunately, when the noise level is high, these higher-order models may blur geometric features to varying degrees and result in unsatisfactory feature-preserving results. To address these issues, it is natural to combine first- and higher-order terms. For example, Zhong et al. [21] proposed a new variational model, which combines a first- and a higher-order term directly. Although their model performs better than TV, the straightforward combined regularization still produces some artifacts near sharp features, and it is likely to flatten fine details. Thus, although many variational methods have been introduced, it is still quite challenging to find one regularization technique which can effectively preserve sharp features in some parts of the surface while simultaneously recovering smooth regions in other parts.

Recently, the total generalized variation (TGV), proposed by Bredies et al. [22], has become one of the most popular regularization techniques in image processing.
TGV is composed of polynomials of arbitrary order, which can reconstruct piecewise polynomial functions with automatically balanced first- and higher-order variations rather than a fixed combination [23], [24]. TGV can be interpreted as combining smoothness from first-order up to arbitrary-order variations. It preserves sharp features via first-order variations while effectively approximating smooth transition regions via higher-order variations. As a result, it does not produce staircase artifacts. For most signal processing tasks, the second-order variant of TGV seems sufficient, for two reasons: most signals can be well approximated by piecewise linear surfaces, and higher-order TGV is hard to discretize. Therefore, in this paper, we focus on the second-order TGV, and in the rest of the paper TGV refers in particular to its second-order version.

It is non-trivial to extend the typical methods from 2D images to 3D meshes because of the inherent data irregularities of meshes. To the best of our knowledge, although TGV has achieved great success in image processing (e.g., image restoration [22], depth upsampling [23], speckle reduction [25], texture decomposition [24], image reconstruction [26], [27]), none of the existing works aims at calculating TGV over triangular meshes. In this paper, we develop a numerical framework to discretize TGV over triangular meshes. Based on this discretization, a vectorial TGV regularization model is proposed to restore the face normal field. Then, we introduce an efficient and effective algorithm to solve the optimization problem. Compared with existing work, the main contributions of our work include:

• We define discrete operators and further use these operators to discretize TGV over triangular meshes.
To the best of our knowledge, this is the first work to establish a numerical framework for discretizing TGV over triangular meshes.
• We present a normal filtering model using TGV-based regularization, which is able to preserve sharp features, recover smooth transition regions, and prevent unnatural artifacts in the denoised results. The optimization problem is solved by variable splitting and the augmented Lagrangian method.
• Qualitative and quantitative experiments on synthetic and real scanning data demonstrate that our mesh denoising method performs favorably against the state-of-the-art methods.

The rest of this paper is organized as follows. A brief survey of mesh denoising methods is presented in Section 2. In Section 3, we recall TGV in image processing. Section 4 presents the computational framework for discretizing TGV and its vectorial version over meshes. In Section 5, we propose a vectorial TGV regularization model to restore the face normal field, and introduce an augmented Lagrangian method to solve the resulting problem. Our TGV-based mesh denoising method is discussed and compared to the state-of-the-art methods visually and quantitatively in Section 6. Finally, we conclude with some remarks and discuss directions for future work in Section 7.
2 RELATED WORK
Because of the abundance of mesh denoising methods in the literature, it is beyond the scope of this paper to review all previous work. Instead, among the various methodologies, we only review the four categories of research that are most relevant to this work.
Filter-based methods. Spatial filtering methods first compute filtering weights based on signal similarities, and then average the neighboring signals in each local region with the computed weights. Early spatial filtering methods directly applied isotropic smoothing [28], [29] or anisotropic smoothing [30], [31], [32], [33], [34], [35] to mesh vertices to remove noise. Compared to the isotropic smoothing methods, the anisotropic methods are more robust against noise and can preserve geometric features better. Unfortunately, in the case of heavy noise, it is hard for these anisotropic methods to preserve geometric features, especially sharp features.

Recently, normal filtering followed by vertex updating has become so widespread that it has arguably superseded the direct smoothing of vertex positions [4], [36]. Zheng et al. [5] applied bilateral filtering to the face normal field. Although their method preserves geometric features well, it may blur sharp features when the noise level increases. To address this problem, Zhang et al. [37] introduced a bilateral normal filter based on a well-designed guidance normal field. Later on, Zhang et al. [38] proposed a scale-aware normal filter using both static and dynamic guidance. Yadav et al. [39] proposed a normal filter based on tensor voting and binary optimization. Furthermore, Yadav et al. [40] developed a normal filter in the robust statistics framework, which can preserve sharp features although it may smooth some weak features and fine details. Arvanitis et al. [41] introduced a coarse-to-fine framework to restore the face normal field based on graph spectral processing. Zhao et al. [42] presented a feature-preserving normal filter. They first compute the guidance normal field using a graph-cut scheme, and then perform normal filtering with the guidance normal field.
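To make the bilateral strategy concrete, here is a minimal, self-contained sketch of one bilateral face-normal filtering step. It is a generic illustration, not the exact algorithm of [5] or [37]; the helper `bilateral_normal_step` and its parameters are our own assumptions. Each face normal is replaced by a weighted average of its neighbors' normals, where the weights combine spatial closeness of face centroids with normal similarity, so that faces across a sharp crease contribute little.

```python
import numpy as np

def bilateral_normal_step(normals, centroids, neighbors, sigma_s, sigma_r):
    """One sketch iteration of a generic bilateral face-normal filter."""
    out = np.empty_like(normals)
    for i, nbrs in enumerate(neighbors):
        w_sum, acc = 0.0, np.zeros(3)
        for j in nbrs:
            # spatial weight: nearby faces count more
            w_spatial = np.exp(-np.sum((centroids[i] - centroids[j]) ** 2)
                               / (2 * sigma_s ** 2))
            # range weight: similar normals count more (edge-preserving)
            w_range = np.exp(-np.sum((normals[i] - normals[j]) ** 2)
                             / (2 * sigma_r ** 2))
            w = w_spatial * w_range
            acc += w * normals[j]
            w_sum += w
        n = acc / w_sum
        out[i] = n / np.linalg.norm(n)   # project back to unit length
    return out

# Two nearly parallel faces smooth toward each other, while the face
# across a sharp crease (dissimilar normal) contributes little weight.
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.1, 1.0],
                    [1.0, 0.0, 0.0]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
centroids = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
neighbors = [[0, 1, 2], [0, 1, 2], [0, 1, 2]]
print(bilateral_normal_step(normals, centroids, neighbors, 2.0, 0.3))
```

Running several such steps, followed by a vertex-update pass, is the common two-stage pattern the surveyed normal-filtering methods share.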
Variational methods. Variational methods (e.g., total variation based methods) have been developed for various applications in geometry processing. For mesh denoising, variational methods aim to find appropriate priors to formulate the denoising process as an optimization problem. Based on the prior that geometric features are sparse over the underlying surface, variational methods typically apply sparse regularization to recover geometric features. He and Schaefer [43] and Zhao et al. [44] extended the ℓ0 minimization approach to triangular meshes using the piecewise constant prior. These ℓ0 minimization methods achieve impressive results in preserving sharp features, but inevitably flatten some weak features because of their high sparsity requirement. Moreover, solving the ℓ0 minimization problem is NP-hard, hence the computation is highly time-consuming. Another popular sparse regularization is total variation (TV) minimization, which essentially imposes first-order ℓ1 minimization. Zhang et al. [7] extended the TV regularization to restore the face normal field. They use the sparsity of first-order information to preserve sharp features. A commonly known drawback of the TV regularization is that it tends to produce staircase artifacts in smoothly curved regions. To address this problem, higher-order methods [19], [20], [45] have been introduced, which can preserve geometric features while simultaneously preventing staircase artifacts. Unfortunately, when the noise level increases, these higher-order methods tend to blur fine details and round sharp features. Another technique [21] for reducing staircase artifacts is adding a higher-order term to the TV term. While this straightforward combined technique reduces staircase artifacts to some extent, unnatural artifacts may still appear around sharp features. Thus, it is still challenging to preserve sharp features while simultaneously recovering smooth transition variations.
Nonlocal-based methods. Most of the above-mentioned variational methods are local (using local operators to formulate the problem). Based on the observation that pattern similarity may exist on the underlying surface, several researchers have introduced nonlocal-based methods [13], [14], [15], [46]. These methods first group similar patches together, and then perform a low-rank minimization on each patch group to recover the pattern similarity of the underlying surface. Such patch-group-based methods can effectively recover surfaces using the pattern similarity prior. However, due to the multi-patch collaborative mechanism, these low-rank recovery methods are computationally intensive, and they tend to blur sharp features.
Data-driven methods. More recently, data-driven methods have received increasing attention. Wang et al. [16] presented the pioneering work using cascaded normal regression (CNR) to smooth face normals. Their method first learns non-linear regression functions that map filtered normal descriptors to their ground-truth counterparts, and then applies the learned functions to compute filtered face normals. In order to better recover lost details, Wang et al. [3] and Wei et al. [17] proposed a two-step denoising framework (denoising followed by refinement). They first learn the mapping from noisy meshes to their ground-truth counterparts to smooth face normals, and then recover lost details by learning the mapping from the filtered normals to the ground-truth counterparts. Later on, inspired by the two-step denoising framework, Li et al. [47] proposed a normal filtering neural network, called NormalF-Net, which consists of a denoising and a refinement sub-network. Li et al. [18] presented an end-to-end convolutional neural network, named DNF-Net, to directly predict filtered face normals from noisy meshes. The above-mentioned data-driven methods can yield satisfactory denoising results using convolutional networks. However, the performance of these methods depends on the completeness of the training data set, and the computational cost of the training process is usually expensive.
3 BACKGROUND OF TOTAL GENERALIZED VARIATION
This section gives a brief review of the total generalized variation (TGV). The total variation (TV) proposed by Rudin, Osher, and Fatemi (ROF) is a seminal work, which started the trend of variational methods in image processing. TV has been widely used as a regularization to recover edges, which are key features of images. For an image u : Ω → ℝ, the TV of u can be defined as follows:

TV(u) = ∫_Ω |∇u|.   (1)

However, a commonly known drawback of TV is that it tends to produce staircase artifacts in smooth transitions of the restored images, because TV favors solutions that are piecewise constant. To address this problem, a more general variational method, called total generalized variation (TGV), was introduced by Bredies et al. [22]. In theory, TGV can be used to measure image characteristics up to a certain order of differentiation [22]. As proved in [22], the first-order TGV is equivalent to TV. Thanks to the higher-order property, higher-order TGV can eliminate staircase artifacts effectively. Nevertheless, TGV of too high an order is difficult to discretize and very time-consuming. Considering the tradeoff between computational complexity and numerical accuracy, we focus on the second-order TGV in this work.

Given an image u, the second-order TGV of u is formulated as follows:

TGV(u) = min_v { α_1 ∫_Ω |∇u − v| + α_0 ∫_Ω |ξ(v)| },   (2)

where α_1, α_0 ∈ ℝ⁺ are weights, and ξ(v) = ½(∇v + ∇vᵀ) denotes the distributional symmetrized derivative. We should point out that the 2-tensor is converted into a column vector, column by column, for computational convenience. In the following, a more intuitive explanation of TGV is given. On one hand, in smooth transitions of u, the second-order derivative ∇²u is locally small, and the minimizer of (2) is obtained by locally choosing v ≈ ∇u therein.
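The balancing mechanism of (2) can be made concrete with a minimal 1-D sketch (our own illustration, not from the paper): forward differences stand in for ∇, and we merely evaluate the objective at two particular choices of the auxiliary field v, each of which upper-bounds the true minimum.

```python
import numpy as np

def tv(u):
    # first-order total variation, cf. (1)
    return np.abs(np.diff(u)).sum()

def tgv2_objective(u, v, a1=1.0, a0=1.0):
    # objective of (2) for a fixed auxiliary field v; the true TGV
    # value is the minimum of this over all v
    return a1 * np.abs(np.diff(u) - v).sum() + a0 * np.abs(np.diff(v)).sum()

ramp = np.arange(8.0)                    # smooth linear transition
step = np.r_[np.zeros(4), np.ones(4)]    # sharp edge

# On the ramp, v = ∇u makes both terms vanish: TGV is 0 while TV is not.
print(tv(ramp), tgv2_objective(ramp, np.diff(ramp)))   # 7.0 0.0
# At the edge, v = 0 reduces the objective to TV, so the edge stays sharp.
print(tv(step), tgv2_objective(step, np.zeros(7)))     # 1.0 1.0
```

This is exactly the intuition of the surrounding text: smooth slopes are absorbed by v (second-order behavior), while jumps keep v near zero (first-order, TV-like behavior).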
On the other hand, in regions near edges, ∇u is evidently larger than ∇²u, so the minimizer of (2) tends to choose v ≈ 0 in these regions locally. This is only an intuitive picture, and the actual values of the minimizer v may lie anywhere in the range [0, ∇u]. With the help of the edge-aware variable v, TGV automatically balances between first- and second-order variations, rather than using a fixed combination. We refer the interested reader to [22], [26], [27] for more details about TGV.

For an N-channel image u : Ω → ℝᴺ, where u = (u_1, u_2, …, u_N), the vectorial TGV of u is formulated as

TGV(u) = min_v { α_1 ∫_Ω ‖∇u − v‖ + α_0 ∫_Ω ‖ξ(v)‖ },   (3)

where ‖∇u − v‖ = (Σ_{i=1}^N |∇u_i − v_i|²)^{1/2} and ‖ξ(v)‖ = Σ_j ‖ξ_j(v)‖ = Σ_j (Σ_{i=1}^N |ξ_j(v_i)|²)^{1/2}. As we can see, (3) can be applied to process multi-spectral images, with the special case N = 3 for RGB images.

4 DISCRETIZATION OF TOTAL GENERALIZED VARIATION OVER TRIANGULAR MESHES
In this section, we first introduce some basic notation. Then, we elaborate on how to discretize TGV and its vectorial version over triangular meshes. Finally, we discuss how our discretized TGV relates to the previous work [7], [19].
Let M be a compact triangulated surface of arbitrary topology with no degenerate triangles in ℝ³. The sets of vertices, edges, and triangles of M are denoted as {p_i : i = 1, ···, P}, {e_i : i = 1, ···, E}, and {τ_i : i = 1, ···, T}, respectively. Here P, E, and T are the numbers of vertices, edges, and triangles of M, respectively. If p is an endpoint of an edge e, then we write p ≺ e. Similarly, e ≺ τ denotes that e is an edge of τ, and p ≺ τ denotes that p is a vertex of τ.

We further define the relative orientation of an edge e w.r.t. a triangle τ, denoted by sgn(e, τ), as follows. Assume all triangles carry the counterclockwise orientation, while all edges carry randomly chosen but fixed orientations. For an edge e ≺ τ, if its orientation is consistent with the orientation of τ, then sgn(e, τ) = 1; otherwise sgn(e, τ) = −1.
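The relative orientation sgn(e, τ) can be computed directly from vertex indices. The following sketch (a hypothetical helper, assuming triangles are stored as counterclockwise vertex triples and each edge as a fixed ordered pair) mirrors the definition:

```python
def edge_sign(edge, tri):
    """Relative orientation sgn(e, τ): +1 if the fixed orientation of
    `edge` agrees with the counterclockwise boundary of `tri`, else -1."""
    i, j = edge
    a, b, c = tri
    directed = {(a, b), (b, c), (c, a)}   # ccw boundary edges of τ
    if (i, j) in directed:
        return 1
    if (j, i) in directed:
        return -1
    raise ValueError("edge is not incident to triangle")

print(edge_sign((0, 1), (0, 1, 2)))   # 1
print(edge_sign((1, 0), (0, 1, 2)))   # -1
```

For a consistently oriented mesh, each interior edge therefore receives opposite signs from its two incident triangles, which is what makes the difference operators below telescope correctly.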
To describe piecewise constant data (e.g., the face normal field) over the triangular mesh M, we introduce the piecewise constant function space, which is related to the piecewise constant finite element method in numerical PDE. Given a mesh M, we define the space U = ℝᵀ, which is isomorphic to the piecewise constant function space over M. For u = (u_1, ···, u_T) ∈ U, the value of u restricted to the triangle τ is u_τ, sometimes written as u|_τ for convenience. In order to further describe the variable v (see the definition of TGV (2)), we propose the edge function space V = ℝᴱ, whose elements are values at the mesh edges of M. We also sometimes call V the edge function space E. Likewise, the component of v ∈ V restricted to the edge e is v_e, sometimes written as v|_e.

We equip the spaces U and V with the standard Euclidean inner products and norms as follows. For all u, u_1, u_2 ∈ U, we have

(u_1, u_2)_U = Σ_τ u_1|_τ u_2|_τ area(τ),  ‖u‖_U = √((u, u)_U),   (4)

where area(τ) is the area of τ. For all v, v_1, v_2 ∈ V, we have

(v_1, v_2)_V = Σ_e v_1|_e v_2|_e len(e),  ‖v‖_V = √((v, v)_V),   (5)

where len(e) is the length of e.

As described in [12], it is natural to define the first-order difference operator D_M : U → V on M as

(D_M u)|_e = Σ_{τ : e ≺ τ} u_τ sgn(e, τ) if e ⊄ ∂M, and 0 if e ⊂ ∂M,  ∀e.   (6)

The adjoint operator of D_M, i.e., D*_M : V → U, is given by

(D*_M v)|_τ = −(1/area(τ)) Σ_{e ≺ τ, e ⊄ ∂M} v_e sgn(e, τ) len(e),  ∀τ.   (7)

In the discrete case, for one triangle τ, there are three first-order differences over the edges along three different directions. Thus, using the first-order differences, we can approximate the gradient operator in one triangle τ as ∇u|_τ = (D_M u|_{e_{1,τ}}, D_M u|_{e_{2,τ}}, D_M u|_{e_{3,τ}}), where e_{i,τ} ≺ τ, i = 1, 2, 3.
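As a sanity check of definition (6), the sketch below assembles D_M for a consistently oriented tetrahedron (a closed mesh, so the boundary branch of (6) never fires; the setup and helper names are our own, not the paper's code). Constant functions lie in the kernel of D_M because each edge receives opposite signs from its two incident triangles:

```python
import numpy as np

# A consistently oriented tetrahedron: 4 faces, 6 edges, no boundary.
tris = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]

# Fix each edge's orientation as the sorted vertex pair.
edges = sorted({tuple(sorted(e)) for t in tris
                for e in [(t[0], t[1]), (t[1], t[2]), (t[2], t[0])]})
eidx = {e: k for k, e in enumerate(edges)}

def sgn(edge, tri):
    """sgn(e, τ): +1 if the fixed edge orientation agrees with the
    counterclockwise boundary of the triangle, else -1."""
    a, b, c = tri
    return 1 if edge in {(a, b), (b, c), (c, a)} else -1

def D_M(u):
    """First-order difference operator of (6) on a closed mesh:
    (D_M u)|_e = sum over triangles τ with e ≺ τ of u_τ sgn(e, τ)."""
    out = np.zeros(len(edges))
    for t, ut in zip(tris, u):
        for e in [(t[0], t[1]), (t[1], t[2]), (t[2], t[0])]:
            ef = tuple(sorted(e))        # the edge's fixed orientation
            out[eidx[ef]] += ut * sgn(ef, t)
    return out

print(D_M(np.ones(4)))                      # all zeros: constants in the kernel
print(D_M(np.array([1.0, 0.0, 0.0, 0.0])))  # nonzero only on triangle 0's edges
```

The same assembly, stored as a sparse E × T matrix, is how such face-based difference operators are typically used inside the solvers discussed later.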
For convenience, we write the discrete gradient as ∇u = (∂_1 u, ∂_2 u, ∂_3 u). It is natural to denote the second-order gradient in τ as

∇²u|_τ =
[ ∂_1∂_1 u  ∂_1∂_2 u  ∂_1∂_3 u
  ∂_2∂_1 u  ∂_2∂_2 u  ∂_2∂_3 u
  ∂_3∂_1 u  ∂_3∂_2 u  ∂_3∂_3 u ],   (8)

where the diagonal entries ∂_i∂_i u are the second-order directional derivatives in the same direction (i = 1, 2, 3), while the off-diagonal entries ∂_i∂_j u are the second-order directional derivatives in two different directions (i, j = 1, 2, 3, i ≠ j).

To further discretize TGV, we define operators on the edge function space E. For one triangle τ, v|_τ = (v_{e_{1,τ}}, v_{e_{2,τ}}, v_{e_{3,τ}}) expresses the values restricted to the three edges of τ. For convenience, the values on E in one triangle are written as v|_τ = (v_1, v_2, v_3). Then, the gradient on E in one triangle (along the three different directions) is given by

∇v|_τ =
[ ∂_1 v_1  ∂_1 v_2  ∂_1 v_3
  ∂_2 v_1  ∂_2 v_2  ∂_2 v_3
  ∂_3 v_1  ∂_3 v_2  ∂_3 v_3 ],   (9)

where ∂_i v_j are the first-order derivatives w.r.t. v (i, j = 1, 2, 3). According to ξ(v) = ½(∇v + ∇vᵀ), the symmetrized tensor gradient ξ(v) in τ can be directly derived as

ξ(v)|_τ =
[ ∂_1 v_1             ½(∂_1 v_2 + ∂_2 v_1)  ½(∂_1 v_3 + ∂_3 v_1)
  ½(∂_2 v_1 + ∂_1 v_2)  ∂_2 v_2             ½(∂_2 v_3 + ∂_3 v_2)
  ½(∂_3 v_1 + ∂_1 v_3)  ½(∂_3 v_2 + ∂_2 v_3)  ∂_3 v_3 ].   (10)

In the following, we will discretize the symmetrized tensor gradient operator ξ(·), which is the core contribution of this paper. As mentioned in Section 3, in smooth transition regions, the values of the minimizer v are close to ∇u, which intuitively tells us

v ≈ ∇u  →  ∇v ≈ ∇²u  →  ∂_i v_i ≈ ∂_i∂_i u,  ∂_i v_j ≈ ∂_i∂_j u,   (11)

where i, j = 1, 2, 3, ignoring the order of i and j. As we can see from (9) and (10), we need two discretization forms of first-order derivatives w.r.t. v (one for computing ∂_i v_i and the other for calculating ∂_i v_j). From (11), we can easily see that these two discretizations also intuitively determine the second-order derivatives w.r.t. u.

Fig. 1: (a) Illustration of the definition of the 1-form operator D_E v. [v] is the jump of v over the line l (plotted in cyan) in the triangle τ, whose barycenter is plotted in red. (b) Illustration of the definition of the adjoint operator D*_E w. B(e) is the set of lines associated with edge e, which refers to four lines.

Let l be a line connecting the barycenter and one vertex of the triangle τ. For v ∈ V, we define the 1-form jump of v over the line l as

[v]_l = v_{e⁺} sgn(e⁺, τ) + v_{e⁻} sgn(e⁻, τ),   (12)

where e⁺ and e⁻ are the two edges sharing the common vertex of l; e⁺ enters the common vertex in the counterclockwise direction, whereas e⁻ leaves the vertex in the counterclockwise direction. The two triangles sharing the edges e⁺ and e⁻ are denoted as τ⁺ and τ⁻, respectively. All the aforementioned notions are illustrated in Fig. 1a. Then, the discrete 1-form operator D_E is defined as

D_E : V → W,  (D_E v)|_l = [v]_l,  ∀l, for v ∈ V,   (13)

Fig. 2: Illustration of the computation of the second-order directional derivatives ∂_i∂_i u and ∂_i∂_j u + ∂_j∂_i u for one triangle τ.

where W = ℝ³ᵀ. The space W is equipped with the following inner product and norm:

(w_1, w_2)_W = Σ_l w_1|_l w_2|_l len(l),  ‖w‖_W = √((w, w)_W),   (14)

for all w, w_1, w_2 ∈ W, where len(l) is the length of the line l. The adjoint operator of D_E, that is, D*_E : W → V, can be derived using the inner products in V and W. For w ∈ W, D*_E has the following form:

(D*_E w)|_e = −(1/len(e)) Σ_{l ∈ B(e)} w_l sgn(e, τ_l) len(l),  ∀e,   (15)

where B(e) is the set of lines associated with the edge e (see Fig. 1b) and τ_l is the triangle containing the line l. Details of the mathematical derivation of D*_E can be found in Lemma 1 of the Appendix.

Remark 1.
We will give an intuitive interpretation of the discrete 1-form operator D_E. From (12) and (13), we see that the 1-form operator w.r.t. v depicts the variation of v over two adjacent edges. Moreover, this operator can also be seen as an analogue of the second-order operator w.r.t. u (in the same direction), since

(D_E v)|_l = v_{e⁺} sgn(e⁺, τ) + v_{e⁻} sgn(e⁻, τ)
 ≈ (u_τ sgn(e⁺, τ) + u_{τ⁺} sgn(e⁺, τ⁺)) sgn(e⁺, τ) + (u_τ sgn(e⁻, τ) + u_{τ⁻} sgn(e⁻, τ⁻)) sgn(e⁻, τ)
 = (u_τ − u_{τ⁺}) + (u_τ − u_{τ⁻})
 = 2u_τ − u_{τ⁺} − u_{τ⁻}.   (16)

In summary, the 1-form operator D_E v depicts first-order variations of v, which can be seen as approximations of the first-order derivatives ∂_i v_i. Besides, this operator can also describe second-order variations of u, which can be seen as approximations of the second-order directional derivatives ∂_i∂_i u. The computation of ∂_i∂_i u over meshes is demonstrated in Fig. 2.

Let c be a curve passing through the four edges (e⁻⁻, e⁻, e⁺, e⁺⁺) and attaching itself to the line l of τ. For v ∈ V, we define the 2-form jump of v over the curve c as

[[v]]_c = [[v]]_{c⁻} + [[v]]_{c⁺} = (v_{e⁻⁻} sgn(e⁻⁻, τ⁻) + v_{e⁺} sgn(e⁺, τ⁺)) + (v_{e⁻} sgn(e⁻, τ⁻) + v_{e⁺⁺} sgn(e⁺⁺, τ⁺)).   (17)

The two triangles sharing the edges e⁻⁻ and e⁺⁺ are denoted as τ⁻⁻ and τ⁺⁺, and the two auxiliary curves passing through (e⁻⁻, e⁺) and (e⁻, e⁺⁺) are denoted as c⁻ and c⁺, respectively.

Fig. 3: Illustration of the definition of the 2-form operator D̃_E v. [[v]] is the 2-form jump over the curve c (plotted in blue), which passes through the four edges (e⁻⁻, e⁻, e⁺, e⁺⁺) and attaches itself to the line l. The auxiliary curve c⁻ passing through (e⁻⁻, e⁺) is plotted in green, and the curve c⁺ passing through (e⁻, e⁺⁺) is plotted in purple.

We demonstrate all the aforementioned notions in Fig. 3. Then, we define the 2-form operator D̃_E as

D̃_E : V → W̃,  (D̃_E v)|_c = [[v]]_c,  ∀c, for v ∈ V,   (18)

where W̃ = ℝ³ᵀ. The space W̃ is equipped with the following inner product and norm:

(w̃_1, w̃_2)_W̃ = Σ_c w̃_1|_c w̃_2|_c len(c),  ‖w̃‖_W̃ = √((w̃, w̃)_W̃),   (19)

for all w̃, w̃_1, w̃_2 ∈ W̃, where len(c) = len(l⁻) + 2 len(l) + len(l⁺) approximates the length of c. Here l⁺ is the line contained in the triangle τ⁺ and sharing a vertex with l, while l⁻ is the corresponding line contained in τ⁻.

Fig. 4: Illustration of the definition of the adjoint operator D̃*_E w̃. B(e) is the set of curves associated with edge e, which refers to eight curves. The attached lines of the curves in B(e) are also shown.

Similarly, the adjoint operator of D̃_E, that is, D̃*_E : W̃ →
u (in different directions), which can be expressed as

(D̃_E v)|_c = (v_{e⁻⁻} sgn(e⁻⁻, τ⁻) + v_{e⁺} sgn(e⁺, τ⁺)) + (v_{e⁻} sgn(e⁻, τ⁻) + v_{e⁺⁺} sgn(e⁺⁺, τ⁺))
 ≈ (u_{τ⁺} + u_{τ⁻} − u_τ − u_{τ⁻⁻}) + (u_{τ⁺} + u_{τ⁻} − u_τ − u_{τ⁺⁺}).   (21)

More intuitively, (21) can be used not only to depict the first-order derivatives ∂_i v_j + ∂_j v_i, but also to describe the second-order directional derivatives ∂_i∂_j u + ∂_j∂_i u. We illustrate the computation of ∂_i∂_j u + ∂_j∂_i u in Fig. 2.

Note that with the 1- and 2-form operators (13) and (18), the discrete symmetrized gradient operator ξ : V → 𝐖 can be directly approximated. The space 𝐖 = W × W̃ is composed of the spaces W and W̃, and it is equipped with the following inner product and norm:

(𝐰_1, 𝐰_2)_𝐖 = (w_1, w_2)_W + (w̃_1, w̃_2)_W̃,  ‖𝐰‖_𝐖 = ‖w‖_W + ‖w̃‖_W̃,

for all 𝐰 = (w, w̃), 𝐰_1 = (w_1, w̃_1), 𝐰_2 = (w_2, w̃_2) ∈ 𝐖, with w, w_1, w_2 ∈ W and w̃, w̃_1, w̃_2 ∈ W̃. Then, the second-order term of TGV can be expressed as

‖ξ(v)‖_𝐖 = Σ_τ ( Σ_i ‖∂_i v_i‖ + Σ_{i,j} ‖∂_i v_j + ∂_j v_i‖ ) = ‖D_E v‖_W + ‖D̃_E v‖_W̃,   (22)

where i, j = 1, 2, 3 and i ≠ j. Given u ∈ U, with the above definition of the second-order term, we now formulate the discretized TGV as

TGV(u) = min_{v ∈ V} { α_1 ‖D_M u − v‖_V + α_0 ‖ξ(v)‖_𝐖 }.   (23)

The definition (23) is the TGV semi-norm over meshes. We can extend the proposed TGV semi-norm (23) to its vectorial case.
To consider vectorial data, three vectorial spaces 𝐔, 𝐕, and 𝐖 are defined for N-channel data as

𝐔 = U × ··· × U (N copies),  𝐕 = V × ··· × V (N copies),  𝐖 = 𝐖 × ··· × 𝐖 (N copies).

The inner products and norms in 𝐔, 𝐕, and 𝐖 are as follows:

(u_1, u_2)_𝐔 = Σ_{1≤i≤N} (u_1^i, u_2^i)_U,  ‖u‖_𝐔 = √((u, u)_𝐔),
(v_1, v_2)_𝐕 = Σ_{1≤i≤N} (v_1^i, v_2^i)_V,  ‖v‖_𝐕 = √((v, v)_𝐕),
(w_1, w_2)_𝐖 = Σ_{1≤i≤N} (w_1^i, w_2^i)_W,  ‖w‖_𝐖 = √((w, w)_𝐖),

for u, u_1, u_2 ∈ 𝐔, v, v_1, v_2 ∈ 𝐕, and w, w_1, w_2 ∈ 𝐖. Thus, all the aforementioned discrete operators can be computed channel by channel, and the vectorial TGV semi-norm is further defined as

TGV(u) = min_{v ∈ 𝐕} { α_1 ‖D_M u − v‖_𝐕 + α_0 ‖ξ(v)‖_𝐖 }.   (24)

There are some existing works that are highly related to the discretized TGV proposed in this paper, and it is necessary to discuss the differences between our discretized TGV, the total variation (TV) in [7], and the higher-order variation (HO) in [19]. Because TV, HO, and TGV are all discretized using piecewise constant finite elements, these discretizations are suitable for face-based variational problems. Given a signal u ∈ U, Zhang et al. [7] defined the discretized total variation (TV) over meshes as follows:

TV(u) = ‖D_M u‖_V.   (25)

The definition (25) describes first-order variations over mesh edges. The TV regularization using (25) works exceptionally well in terms of preserving sharp features, but tends to induce staircase artifacts in smooth regions.

To overcome the staircase artifacts of the TV regularization, Liu et al. [19] presented a higher-order regularization over meshes. Based on the first-order difference operator used in TV, Liu et al.
[19] introduced a second-order difference operator to describe second-order derivatives (in the same direction). The discretized second-order variation in [19] is defined as

$$\mathrm{HO}(u) = \sum_{l} \|2u_{\tau} - u_{\tau^+} - u_{\tau^-}\| \,\mathrm{len}(l). \quad (26)$$

The HO regularization using (26) recovers smooth regions well, but blurs sharp features in the presence of large noise.

Next, we discuss the differences between the second-order term (22) of TGV and HO (26). Using (16), we have $\mathrm{HO}(u) \approx \|\mathcal{D}_E v\|_W$. Therefore, HO (26) can be seen as an analogue of the second-order derivative in the same direction ($\partial_i \partial_i u$). In other words, the HO regularization only minimizes the second-order variations in the same direction. In contrast, minimizing the second-order term of TGV simultaneously minimizes the second-order variations in the same direction ($\partial_i \partial_i u$) and those in different directions ($\partial_i \partial_j u + \partial_j \partial_i u$).

As mentioned before, the TV regularization is better than HO at preserving sharp features, while the HO regularization handles smooth regions better than TV. It is challenging for a single regularization technique to simultaneously preserve sharp features in some parts of the mesh and recover smooth regions well in other parts. To address this problem, we propose the discretized TGV. The TGV regularization automatically balances the first- and second-order terms via the auxiliary variable $v$. In consequence, the TGV regularization inherits the best properties of TV and HO and overcomes the weaknesses of both; see Section 6 for more experiments and comparisons.
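The contrast between TV (25), HO (26), and TGV (23) can be reproduced on a 1D piecewise-constant toy signal. The sketch below is an illustration under 1D assumptions, with unit edge lengths and hypothetical weights $\alpha_1 = 1$, $\alpha_0 = 2$; it evaluates TV, HO, and a cheap upper bound on the TGV semi-norm obtained from the two candidates $v = \mathcal{D}u$ and $v = 0$:

```python
import numpy as np

def tv(u):
    # First-order variation across edges, cf. (25), with unit edge lengths.
    return np.abs(np.diff(u)).sum()

def ho(u):
    # Same-direction second-order variation |2*u_t - u_{t-1} - u_{t+1}|, cf. (26).
    return np.abs(2 * u[1:-1] - u[:-2] - u[2:]).sum()

def tgv_upper_bound(u, a1=1.0, a0=2.0):
    # TGV(u) = min_v a1*||Du - v||_1 + a0*||Dv||_1 (1D analogue of (23)).
    # Any candidate v yields an upper bound; try v = Du and v = 0.
    d = np.diff(u)
    return min(a0 * np.abs(np.diff(d)).sum(),  # v = Du: only the second-order term
               a1 * np.abs(d).sum())           # v = 0:  reduces to a1 * TV(u)

ramp = np.linspace(0.0, 1.0, 11)       # smooth transition region
step = np.r_[np.zeros(5), np.ones(5)]  # sharp feature
```

TV charges the ramp its full height (hence staircasing when minimized) while HO charges it nothing; the situation reverses on the step, whereas the TGV bound stays small for both, matching the discussion above.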
5 MESH DENOISING USING VECTORIAL TGV
In this section, we propose a vectorial TGV normal filtering model to smooth face normals. The proposed model preserves sharp features and simultaneously recovers smooth regions well. We then discuss the algorithm for solving the proposed model. Finally, we reconstruct vertex positions according to the optimized face normals.
Given a noisy mesh, we denote its face normal field by $\mathbf{N}^{in}$. To remove the noise in $\mathbf{N}^{in}$ based on the vectorial TGV (24), our normal filtering model is formulated as

$$\min_{\mathbf{N}, \mathbf{v}} \Big\{ \beta \|\mathbf{N} - \mathbf{N}^{in}\|^2_{\mathbf{U}} + \alpha_1 \sum_{e} w_e \|(\mathcal{D}_M \mathbf{N} - \mathbf{v})|_e\| \,\mathrm{len}(e) + \alpha_0 \|\xi(\mathbf{v})\|_{\mathbf{W}} \Big\}, \quad \text{s.t. } \|\mathbf{N}_\tau\| = 1, \ \forall \tau, \quad (27)$$

where $\|\xi(\mathbf{v})\|_{\mathbf{W}} = \|\mathcal{D}_E \mathbf{v}\|_{\mathbf{W}} + \|\widetilde{\mathcal{D}}_E \mathbf{v}\|_{\widetilde{\mathbf{W}}}$. Note that $\mathbf{N} \in \mathbf{U}$, where $\mathbf{U}$ denotes the 3-channel $U$. The weight function $w_e$ in (27) is defined as

$$w_e = \exp\big( -\|\mathbf{N}_{e,1} - \mathbf{N}_{e,2}\|^2 / \sigma_e^2 \big),$$

where $\mathbf{N}_{e,1}$ and $\mathbf{N}_{e,2}$ are the normals of the two triangles sharing the common edge $e$, and $\sigma_e$ is a user-specified parameter that can be fixed empirically. $w_e$ is expected to be large when the norm of the first-order difference defined on $e$ is small, and vice versa. Thus, it assigns large weights to smooth regions (smoothly curved regions and flat regions) and small weights to sharp features, so the proposed model (27) smooths non-feature regions and preserves sharp features.

In most cases, the vectorial TGV model (24) (applied to the face normal field) produces satisfactory denoising results. However, for some meshes with large noise, the model (24) may slightly smooth some sharp features. Thus, we add dynamic weights to the model (24) to form the proposed vectorial TGV normal filtering model (27). These weights are updated with respect to the face normals in each iteration, and they enhance the sparsity of the original vectorial TGV model (24) to improve the reconstruction of sharp features. Essentially, these dynamic weights penalize smooth regions more than sharp features, which achieves a lower-than-$\ell_1$ sparsity effect [48].

Because of the vectorial $\ell_1$ semi-norm and the nonlinear constraints, problem (27) is nondifferentiable and thus hard to solve.
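The edge weight above is cheap to compute from the two face normals adjacent to $e$; a minimal sketch follows, in which the helper names and the choice $\sigma_e = 0.3$ are illustrative assumptions:

```python
import numpy as np

def face_normal(v0, v1, v2):
    # Unit normal of a triangle from its three vertex positions.
    n = np.cross(v1 - v0, v2 - v0)
    return n / np.linalg.norm(n)

def edge_weight(n1, n2, sigma_e=0.3):
    # w_e = exp(-||N_{e,1} - N_{e,2}||^2 / sigma_e^2): near 1 when the adjacent
    # face normals agree (smooth region), near 0 across a sharp feature.
    d2 = float(np.dot(n1 - n2, n1 - n2))
    return float(np.exp(-d2 / sigma_e**2))
```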
Here, we use variable splitting and the augmented Lagrangian method (ALM), which has had great success in handling $\ell_1$-related problems, to solve (27). By introducing the new variables $\mathbf{P}$, $\mathbf{Q}$, and $\widetilde{\mathbf{Q}}$, we reformulate (27) as the constrained optimization problem

$$\min_{\mathbf{N}, \mathbf{v}, \mathbf{P}, \mathbf{Q}, \widetilde{\mathbf{Q}}} \Big\{ \beta \|\mathbf{N} - \mathbf{N}^{in}\|^2_{\mathbf{U}} + \alpha_1 \sum_e w_e \|\mathbf{P}_e\| \,\mathrm{len}(e) + \alpha_0 \|\mathbf{Q}\|_{\mathbf{W}} + \alpha_0 \|\widetilde{\mathbf{Q}}\|_{\widetilde{\mathbf{W}}} + \Psi(\mathbf{N}) \Big\}, \quad \text{s.t. } \mathbf{P} = \mathcal{D}_M \mathbf{N} - \mathbf{v}, \ \mathbf{Q} = \mathcal{D}_E \mathbf{v}, \ \widetilde{\mathbf{Q}} = \widetilde{\mathcal{D}}_E \mathbf{v},$$

where

$$\Psi(\mathbf{N}) = \begin{cases} 0, & \text{if } \|\mathbf{N}_\tau\| = 1, \ \forall \tau, \\ +\infty, & \text{otherwise}. \end{cases}$$

To solve this constrained optimization problem, we introduce the augmented Lagrangian function

$$\begin{aligned} \mathcal{L}(\mathbf{N}, \mathbf{v}, \mathbf{P}, \mathbf{Q}, \widetilde{\mathbf{Q}}; \lambda_P, \lambda_Q, \lambda_{\widetilde{Q}}) = {} & \beta \|\mathbf{N} - \mathbf{N}^{in}\|^2_{\mathbf{U}} + \alpha_1 \sum_e w_e \|\mathbf{P}_e\| \,\mathrm{len}(e) + \alpha_0 \|\mathbf{Q}\|_{\mathbf{W}} + \alpha_0 \|\widetilde{\mathbf{Q}}\|_{\widetilde{\mathbf{W}}} + \Psi(\mathbf{N}) \\ & + (\lambda_P, \mathbf{P} - (\mathcal{D}_M \mathbf{N} - \mathbf{v}))_{\mathbf{V}} + \frac{r_1}{2} \|\mathbf{P} - (\mathcal{D}_M \mathbf{N} - \mathbf{v})\|^2_{\mathbf{V}} \\ & + (\lambda_Q, \mathbf{Q} - \mathcal{D}_E \mathbf{v})_{\mathbf{W}} + \frac{r_2}{2} \|\mathbf{Q} - \mathcal{D}_E \mathbf{v}\|^2_{\mathbf{W}} + (\lambda_{\widetilde{Q}}, \widetilde{\mathbf{Q}} - \widetilde{\mathcal{D}}_E \mathbf{v})_{\widetilde{\mathbf{W}}} + \frac{r_2}{2} \|\widetilde{\mathbf{Q}} - \widetilde{\mathcal{D}}_E \mathbf{v}\|^2_{\widetilde{\mathbf{W}}}, \end{aligned}$$

where $\lambda_P$, $\lambda_Q$, and $\lambda_{\widetilde{Q}}$ are the Lagrange multipliers, and $r_1$, $r_2$ are positive penalty coefficients.
The variable update procedure can be separated into five subproblems:

• The $\mathbf{N}$-subproblem:
$$\min_{\mathbf{N}} \ \beta \|\mathbf{N} - \mathbf{N}^{in}\|^2_{\mathbf{U}} + \Psi(\mathbf{N}) + \frac{r_1}{2} \Big\| \mathcal{D}_M \mathbf{N} - \mathbf{v} - \Big(\mathbf{P} + \frac{\lambda_P}{r_1}\Big) \Big\|^2_{\mathbf{V}}; \quad (28)$$

• The $\mathbf{v}$-subproblem:
$$\min_{\mathbf{v}} \ \frac{r_2}{2} \Big\| \mathcal{D}_E \mathbf{v} - \Big(\mathbf{Q} + \frac{\lambda_Q}{r_2}\Big) \Big\|^2_{\mathbf{W}} + \frac{r_2}{2} \Big\| \widetilde{\mathcal{D}}_E \mathbf{v} - \Big(\widetilde{\mathbf{Q}} + \frac{\lambda_{\widetilde{Q}}}{r_2}\Big) \Big\|^2_{\widetilde{\mathbf{W}}} + \frac{r_1}{2} \Big\| \mathcal{D}_M \mathbf{N} - \mathbf{v} - \Big(\mathbf{P} + \frac{\lambda_P}{r_1}\Big) \Big\|^2_{\mathbf{V}}; \quad (29)$$

• The $\mathbf{P}$-subproblem:
$$\min_{\mathbf{P}} \ \alpha_1 \sum_e w_e \|\mathbf{P}_e\| \,\mathrm{len}(e) + \frac{r_1}{2} \Big\| \mathbf{P} - \Big(\mathcal{D}_M \mathbf{N} - \mathbf{v} - \frac{\lambda_P}{r_1}\Big) \Big\|^2_{\mathbf{V}}; \quad (30)$$

• The $\mathbf{Q}$-subproblem:
$$\min_{\mathbf{Q}} \ \alpha_0 \|\mathbf{Q}\|_{\mathbf{W}} + \frac{r_2}{2} \Big\| \mathbf{Q} - \Big(\mathcal{D}_E \mathbf{v} - \frac{\lambda_Q}{r_2}\Big) \Big\|^2_{\mathbf{W}}; \quad (31)$$

• The $\widetilde{\mathbf{Q}}$-subproblem:
$$\min_{\widetilde{\mathbf{Q}}} \ \alpha_0 \|\widetilde{\mathbf{Q}}\|_{\widetilde{\mathbf{W}}} + \frac{r_2}{2} \Big\| \widetilde{\mathbf{Q}} - \Big(\widetilde{\mathcal{D}}_E \mathbf{v} - \frac{\lambda_{\widetilde{Q}}}{r_2}\Big) \Big\|^2_{\widetilde{\mathbf{W}}}. \quad (32)$$

The $\mathbf{N}$-subproblem (28) is a quadratic optimization with unit normal constraints. We adopt an approximate strategy to solve it: we first ignore the unit normal constraints and solve a quadratic program, and then project the minimizer onto the unit sphere. Specifically, the first-order optimality condition of (28) yields the Euler-Lagrange equation

$$\beta \mathbf{N} - r_1 \mathcal{D}^\star_M \mathcal{D}_M \mathbf{N} = \beta \mathbf{N}^{in} - \mathcal{D}^\star_M \big( \lambda_P + r_1 (\mathbf{P} + \mathbf{v}) \big). \quad (33)$$

Using the first-order operator (6) and its adjoint operator (7), this equation can be reformulated as a sparse, positive semidefinite linear system, which can be solved by various sparse linear solvers, such as Eigen, Taucs, and the Math Kernel Library (MKL). The $\mathbf{v}$-subproblem (29) is also a quadratic optimization.
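The solve-then-project strategy for the $\mathbf{N}$-subproblem can be sketched densely as follows, assuming `D` is a matrix realization of the first-order operator acting on face signals. This is a small-scale illustration only: the paper's implementation uses sparse solvers, and the sign convention of the adjoint terms here follows the plain matrix transpose:

```python
import numpy as np

def solve_n_subproblem(D, N_in, v, P, lam_P, beta, r1):
    # Step 1: ignore the unit-norm constraint and solve the quadratic system
    #   (beta*I + r1*D^T D) N = beta*N_in + D^T (lam_P + r1*(P + v)),
    # with one right-hand-side column per normal channel.
    A = beta * np.eye(D.shape[1]) + r1 * D.T @ D
    N = np.linalg.solve(A, beta * N_in + D.T @ (lam_P + r1 * (P + v)))
    # Step 2: project each face normal back onto the unit sphere.
    return N / np.linalg.norm(N, axis=1, keepdims=True)
```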
The Euler-Lagrange equation of (29) is

$$r_1 \mathbf{v} - r_2 \mathcal{D}^\star_E \mathcal{D}_E \mathbf{v} - r_2 \widetilde{\mathcal{D}}^\star_E \widetilde{\mathcal{D}}_E \mathbf{v} = -\lambda_P - r_1 (\mathbf{P} - \mathcal{D}_M \mathbf{N}) - \mathcal{D}^\star_E (\lambda_Q + r_2 \mathbf{Q}) - \widetilde{\mathcal{D}}^\star_E (\lambda_{\widetilde{Q}} + r_2 \widetilde{\mathbf{Q}}). \quad (34)$$

Using the 1- and 2-form operators (13) and (18) and their adjoint operators (15) and (20), we reformulate this equation as a sparse linear system, which can be solved by various linear solvers.

The $\mathbf{P}$-subproblem (30) can be solved directly because it decomposes spatially: the minimization w.r.t. each edge is performed individually. For each $\mathbf{P}_e$, we have the simplified problem

$$\min_{\mathbf{P}_e} \ \alpha_1 w_e \|\mathbf{P}_e\| + \frac{r_1}{2} \Big\| \mathbf{P}_e - \Big( (\mathcal{D}_M \mathbf{N})|_e - \mathbf{v}_e - \frac{\lambda_{P_e}}{r_1} \Big) \Big\|^2,$$

which has the closed-form solution

$$\mathbf{P}_e = \mathrm{Shrink}\Big( \alpha_1 w_e, \ r_1, \ (\mathcal{D}_M \mathbf{N})|_e - \mathbf{v}_e - \frac{\lambda_{P_e}}{r_1} \Big), \quad (35)$$

where the soft shrinkage operator is defined as

$$\mathrm{Shrink}(x, y, z) = \max\Big( 0, \ 1 - \frac{x}{y \|z\|} \Big)\, z.$$

The $\mathbf{Q}$-subproblem (31) is also solved independently, as the subproblem w.r.t. each line can be handled individually. For each $\mathbf{Q}_l$, we solve

$$\min_{\mathbf{Q}_l} \ \alpha_0 \|\mathbf{Q}_l\| + \frac{r_2}{2} \Big\| \mathbf{Q}_l - \Big( (\mathcal{D}_E \mathbf{v})|_l - \frac{\lambda_{Q_l}}{r_2} \Big) \Big\|^2,$$

which has the closed-form solution

$$\mathbf{Q}_l = \mathrm{Shrink}\Big( \alpha_0, \ r_2, \ (\mathcal{D}_E \mathbf{v})|_l - \frac{\lambda_{Q_l}}{r_2} \Big). \quad (36)$$

Similarly, the $\widetilde{\mathbf{Q}}$-subproblem (32) is separable and can be reformulated as curve-by-curve problems. For each $\widetilde{\mathbf{Q}}_c$, we solve

$$\min_{\widetilde{\mathbf{Q}}_c} \ \alpha_0 \|\widetilde{\mathbf{Q}}_c\| + \frac{r_2}{2} \Big\| \widetilde{\mathbf{Q}}_c - \Big( (\widetilde{\mathcal{D}}_E \mathbf{v})|_c - \frac{\lambda_{\widetilde{Q}_c}}{r_2} \Big) \Big\|^2,$$

which has the closed-form solution

$$\widetilde{\mathbf{Q}}_c = \mathrm{Shrink}\Big( \alpha_0, \ r_2, \ (\widetilde{\mathcal{D}}_E \mathbf{v})|_c - \frac{\lambda_{\widetilde{Q}_c}}{r_2} \Big).$$
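The shrinkage operator has a direct translation to code; a small sketch (with the usual convention that $z = 0$ maps to $0$, and assuming the quadratic term carries a factor $y/2$):

```python
import numpy as np

def shrink(x, y, z):
    # Shrink(x, y, z) = max(0, 1 - x/(y*||z||)) * z: scales z toward the origin
    # and snaps it to exactly 0 once ||z|| <= x/y. It is the closed-form
    # minimizer of x*||p|| + (y/2)*||p - z||^2 over p, as used in (35)-(37).
    nz = np.linalg.norm(z)
    if nz * y <= x:
        return np.zeros_like(z)
    return (1.0 - x / (y * nz)) * z
```

For example, `shrink(1.0, 1.0, np.array([3.0, 0.0]))` shortens the vector by one unit, while any input of norm at most 1 is mapped to zero.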
(37)

In summary, the entire procedure for solving the TGV normal filtering model (27) is sketched in Algorithm 1. Based on variable splitting and ALM, this algorithm iteratively solves the five subproblems above and updates the Lagrange multipliers; it terminates when one of the stopping criteria is satisfied. As mentioned in Section 5.1, the dynamic weights $w_e$ play a role in recovering sharp features. As shown in Fig. 5, without these weights, some sharp features are blurred in the denoised result. Thus, we dynamically update the weights $w_e$ in each iteration. This strategy improves the quality of the result in terms of preserving sharp features; see Fig. 5 for an example.

After smoothing the normal field via the proposed TGV-based normal filtering, vertex positions should be updated to match the optimized face normals.

Fig. 5: Denoising results of Dodecahedron, corrupted with zero-mean Gaussian noise whose standard deviation $\sigma$ is a fraction of the mean edge length $\bar{l}_e$. From left to right: noisy mesh, and denoising results produced by the vectorial TGV normal filtering model (27) without and with dynamic weights, respectively.

Algorithm 1:
ALM for solving the TGV normal filtering model (27)
Initialization: $\mathbf{N}^{-1} = \mathbf{v}^{-1} = \mathbf{P}^{-1} = \mathbf{Q}^{-1} = \widetilde{\mathbf{Q}}^{-1} = 0$, $\lambda^0_P = \lambda^0_Q = \lambda^0_{\widetilde{Q}} = 0$, $k = 0$;
repeat
1. Fix $(\mathbf{v}^{k-1}, \mathbf{P}^{k-1}, \lambda^k_P)$; solve $\mathbf{N}^k$ by (33); normalize $\mathbf{N}^k$.
2. Fix $(\mathbf{N}^k, \mathbf{P}^{k-1}, \mathbf{Q}^{k-1}, \widetilde{\mathbf{Q}}^{k-1}, \lambda^k_P, \lambda^k_Q, \lambda^k_{\widetilde{Q}})$; solve $\mathbf{v}^k$ by (34).
3. Fix $(\mathbf{N}^k, \mathbf{v}^k, \lambda^k_P)$; solve $\mathbf{P}^k$ by (35).
4. Fix $(\mathbf{v}^k, \lambda^k_Q)$; solve $\mathbf{Q}^k$ by (36).
5. Fix $(\mathbf{v}^k, \lambda^k_{\widetilde{Q}})$; solve $\widetilde{\mathbf{Q}}^k$ by (37).
6. Update the Lagrange multipliers:
   $\lambda^{k+1}_P = \lambda^k_P + r_1 \big( \mathbf{P}^k - (\mathcal{D}_M \mathbf{N}^k - \mathbf{v}^k) \big)$;
   $\lambda^{k+1}_Q = \lambda^k_Q + r_2 ( \mathbf{Q}^k - \mathcal{D}_E \mathbf{v}^k )$;
   $\lambda^{k+1}_{\widetilde{Q}} = \lambda^k_{\widetilde{Q}} + r_2 ( \widetilde{\mathbf{Q}}^k - \widetilde{\mathcal{D}}_E \mathbf{v}^k )$.
7. Update the weights $w_e$ according to $\mathbf{N}^k$.
until $\|\mathbf{N}^k - \mathbf{N}^{k-1}\|_{\mathbf{U}} <$ 1e−
10 or $k$ reaches the maximum iteration count;
return $\mathbf{N}^k$.

To overcome the triangle orientation ambiguity problem of the traditional vertex updating scheme [4], we reconstruct the mesh using the vertex updating scheme proposed by Zhang et al. [38], which prevents orientation ambiguity. Its iteration number is empirically fixed to 30 in our experiments, which produces satisfactory results. We refer the interested reader to [38] for more details.

6 EXPERIMENTS AND DISCUSSION
In this section, we test the proposed denoising method on a variety of meshes, including CAD models, non-CAD models, and real scanned data. The tested meshes are corrupted by either synthetic or raw noise. The synthetic noise is generated by a zero-mean Gaussian function whose standard deviation ($\sigma$) is proportional to the mean edge length ($\bar{l}_e$) of the clean mesh. We also present visual and numerical comparisons between the proposed method (TGV) and the state-of-the-art methods, including total variation normal filtering (TV) [7], high-order normal filtering (HO) [19], $\ell_0$ minimization (L0) [43], bilateral normal filtering (BF) [5], non-local low-rank normal filtering (NLLR) [13], and cascaded normal filtering (CNR) [16]. We implemented TV, HO, L0, and BF in C++ according to the references. For NLLR, we ran the code kindly provided by the authors of Li et al. [13]; for CNR, we directly use the trained neural networks kindly provided by Wang et al. [16]. We carefully tuned the parameters of the tested methods to produce satisfactory results. All methods were run on a laptop with an Intel i7 dual-core 2.6 GHz processor and 16 GB RAM, and all meshes are rendered with flat shading to emphasize faceting effects. We release our executable program and data on the GitHub page.

Our TGV normal filtering model (27) has three parameters, $\alpha_1$, $\alpha_0$, and $\beta$, which balance the first-order, second-order, and fidelity terms of the model (27). When the parameters are chosen properly, on one hand, in smooth regions we have $\mathbf{v} \approx \mathcal{D}_M \mathbf{N}$, which drives the first-order term close to 0; the minimization of (27) is then mainly controlled by the second-order term in these smooth regions; see the corresponding regions in Figs. 6b and 6c. On the other hand, in regions near sharp features we have $\mathbf{v} \approx 0$, which drives the second-order term close to 0.
Thus, the minimization mainly depends on the first-order term in these regions near sharp features; see the corresponding regions in Figs. 6b and 6c.

(a) Input (Result) (b) $\mathcal{D}_M \mathbf{N}$ (c) $\mathbf{v}$
Fig. 6: (a) Noisy input (denoising result). (b) Visualization of $\mathcal{D}_M \mathbf{N}$ on the result using color coding. (c) Visualization of $\mathbf{v}$ on the result.

Parameter $\alpha_1$ controls the impact of the first-order term in (27). For each noisy mesh, there exists a range of values of $\alpha_1$ that leads to promising results, which indicates that our method is insensitive to perturbations of $\alpha_1$; see Figs. 7c and 7d. If $\alpha_1$ is too small, the first-order term becomes ineffective, causing $\mathbf{v} \approx 0$ over the whole mesh, which in turn drives the second-order term close to 0. The entire TGV regularization then fails, leaving residual noise in the result; see Fig. 7b. If $\alpha_1$ is too large, the first-order term tends to choose $\mathbf{v} \approx \mathcal{D}_M \mathbf{N}$ in regions near sharp features, and the minimization of (27) is then controlled by the second-order term in these regions, which may smooth sharp features; see Fig. 7e.

Parameter $\alpha_0$ influences the effect of the second-order term in (27). Similar to $\alpha_1$, for each noisy mesh, there exists
a range of $\alpha_0$ that produces satisfactory results; see Figs. 8c and 8d. Underweighting the second-order term renders the first-order term ineffective, with $\mathbf{v} \approx \mathcal{D}_M \mathbf{N}$, which leaves residual noise in the result; see Fig. 8b. Overweighting it penalizes smooth regions, so fine features and details may be over-smoothed; see Fig. 8e. Parameter $\beta$ controls the degree of denoising, which helps the denoised result harmonize with the input.

1. https://github.com/LabZhengLiu/MeshTGV

Denoising CAD surfaces. In Fig. 9 we present results for denoising a CAD surface containing sharp features (sharp edges and corners) and smooth regions. First, except for BF and NLLR, all the tested methods preserve sharp features to some extent. Because geometric features and noise both belong to high-frequency information, BF and NLLR cannot easily distinguish them, especially for sharp features; these two methods may therefore treat some features as noise and blur them; see Figs. 9e and 9f. Although the learning-based method CNR performs well on sharp features and smooth regions, it induces slight artifacts in the regions near sharp features; see Fig. 9g. Importantly, the sparse optimization methods (TV, HO, L0, and TGV) preserve sharp features more accurately. However, due to its higher sparsity requirement, L0 flattens some smooth regions and sometimes induces false features there; see Fig. 9d. In contrast, TGV avoids these unnatural artifacts in smooth regions, an important improvement over TV; see Figs. 9b and 9h. Compared with HO, TGV recovers sharp features and flat regions more accurately; see Fig. 9c. For each tested method, we also visualize the normal error map, defined as the angular difference between the filtered normals and the ground truth. The normals produced by our method are noticeably closer to those of the ground truth; see the second row of Fig. 9.

In Fig.
10 we compare results on a CAD surface containing sharp features and a shallow edge. Again, BF and NLLR blur sharp features to varying degrees, and L0 flattens smooth regions and produces false features; see Figs. 10e, 10f, and 10d. On one hand, as TV uses only first-order information, it suffers from undesired staircase artifacts in smoothly curved regions; see Fig. 10b. On the other hand, HO recovers smooth regions more accurately than TV, but since it uses only high-order information, it bends straight-line edges and blurs the shallow edge; see the zoomed-in view of Fig. 10c. The competing methods TV and HO can each perform well on either sharp features or smooth regions, but not both. In contrast, TGV combines the advantages of both to recover sharp features and smooth regions accurately; see Fig. 10h. The visual comparisons in this example show the superior performance of TGV in simultaneously preserving features and recovering smooth regions.
Denoising non-CAD surfaces. In Fig. 11 we present comparisons on a non-CAD surface with rich geometric features. As expected, TV tends to over-smooth some small-scale features and exhibits slight staircase artifacts. L0 makes this situation even worse: it may transform smooth regions into piecewise constant ones and over-sharpen medium-scale features; see Fig. 11d. HO and CNR
effectively preserve medium-scale features; see the torch in Figs. 11c and 11g. However, they may smooth small-scale features and fine details; see the hand regions in Figs. 11c and 11g. Compared to the other tested methods, both NLLR and our method TGV produce visually satisfactory results; however, from Table 1 we can see that the numerical errors of our method are consistently lower than those of NLLR. As a result, our method produces appealing results, with geometric features restored better than by the other competing methods.

Fig. 12 shows results on a non-CAD surface containing multi-scale features. As expected, compared to the other methods, NLLR, CNR, and our method TGV better recover the different levels of features. NLLR may retain some extra noise in the result, and CNR slightly blurs small-scale features. In contrast, TGV produces visually the best result, with most geometric features preserved. Overall, for non-CAD meshes, our method TGV generates satisfactory results with better restored features, and it avoids introducing additional artifacts (e.g., staircase artifacts, over-smoothing, over-sharpening, extra noise).

Fig. 7: Denoising results for varying $\alpha_1$ with fixed $\alpha_0$ and $\beta$. From left to right: noisy mesh, and results with increasing $\alpha_1$.
Fig. 8: Denoising results for varying $\alpha_0$ with fixed $\alpha_1$ and $\beta$. From left to right: noisy mesh, and results with increasing $\alpha_0$.

Denoising scanned data. We also compare the methods on real scanned data, where the noise pattern is unknown. In Fig. 13 we demonstrate results for data acquired by a laser scanner. First, TV, L0, and CNR over-smooth fine details while sharpening some features, which makes their results look less natural (see the zoomed-in views of Figs. 13b, 13d, and 13g).
Moreover, HO and BF blur fine details to varying degrees. In this example, the results produced by NLLR and our method TGV look more satisfactory and natural than those produced by the other methods; see Figs. 13f and 13h. We show further results on real scanned data in Fig. 15: the results produced by our method show no visible artifacts and lose almost no features of the underlying surfaces.

In Fig. 14 we verify the performance of our method on scanned meshes acquired with Kinect sensors, provided by Wang et al. [16]. Except for BF, which cannot clearly distinguish features from noise, all the tested methods remove noise effectively. TV and L0 produce staircase artifacts in smooth regions and sharpen curved features; see Figs. 14b and 14d. This phenomenon is more severe for L0. Although HO does a good job in smooth regions, it slightly blurs small-scale features. NLLR and CNR yield visually good results, but they induce some bumps in smooth regions. In contrast, our method outperforms the other methods in preserving geometric features and recovering smooth regions while avoiding unnatural artifacts.

Clearly, on all the tested meshes, our results present visually cleaner geometric features without notable artifacts. In particular, our method TGV succeeds in recovering smoothly curved regions while simultaneously preserving sharp features, even under large noise.
To further quantify the quality of the denoised results, we use the mean angular difference, abbreviated $\theta$ (in degrees), to evaluate the performance of the tested methods. This error metric is widely used in recent work [16], [13], [18]. It measures the mean angular difference ($\theta$) between the face normals of the clean mesh and those of the denoised result. For fair comparisons, we calculate $\theta$ after the normal filtering step for each tested method (except L0).

Fig. 9: Denoising results of Block. From left to right: input noisy mesh, and denoising results produced by TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV, respectively. The second row visualizes the corresponding error maps, using the angular difference between the face normals of the denoised meshes and the ground-truth meshes.

Fig. 10: Denoising results of Fandisk. From left to right: input noisy mesh, and denoising results produced by TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV, respectively.

TABLE 1: Quantitative evaluation of the results in Figs. 9, 10, 11, 12, and 14 for TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV. Each cell lists the mean angular difference $\theta$ (in degrees), the standard deviation of the angular difference, and the time (in seconds).
| Mesh | TV | HO | L0 | BF | NLLR | CNR | TGV |
|---|---|---|---|---|---|---|---|
| Block | 3.12, 6.12; 1.44 | 2.90, 5.72; 2.03 | 4.35, 9.53; 13.3 | 5.30, 7.37; 1.07 | 11.7, 13.3; 10.4 | 2.63, 5.82; 0.71 | –, –; 6.28 |
| Fandisk | 2.62, 4.90; 0.88 | 3.67, 4.72; 1.88 | 3.92, 8.15; 6.73 | 5.51, 6.86; 0.61 | 8.21, 10.1; 3.64 | 2.31, –; 0.55 | –, 3.99; 4.68 |
| Lucy | 9.88, 10.7; 30.9 | 9.63, 10.2; 60.1 | 13.0, 12.0; 87.6 | 8.86, 8.61; 8.80 | 8.64, 7.90; 67.5 | 7.91, 8.92; 10.5 | –, –; 66.6 |
| Gargoyle | 10.8, 9.27; 13.7 | 9.72, 6.90; 20.3 | 12.0, 8.77; 55.5 | 9.94, 6.16; 4.70 | 8.96, 6.07; 30.5 | 8.36, 6.22; 6.51 | –, –; 49.8 |
| Pyramid | 6.79, 12.1; 1.28 | 7.18, 10.7; 1.93 | 6.50, 12.9; 8.03 | 8.45, 9.94; 0.77 | 9.27, 9.16; 104.2 | 6.40, –; 0.76 | –, 9.11; 4.14 |
| Cone | 7.45, 12.1; 4.48 | 7.41, 11.3; 7.91 | 7.80, 12.5; 47.7 | 8.16, 11.3; 3.59 | 7.63, 11.4; 312.8 | 7.11, –; 1.82 | –, 11.3; 30.1 |
| Boy | 9.16, 13.0; 10.2 | 8.98, 13.4; 17.3 | 9.42, 13.6; 88.5 | 10.0, 12.7; 12.3 | –, 12.8; 409.1 | 8.97, 13.1; 4.93 | 8.91, –; 55.8 |
TABLE 2: Mesh sizes ($|V|$, $|F|$) of the surfaces in Table 1 (Block, Fandisk, Lucy, Gargoyle, Pyramid, Cone, Boy).

The $\theta$ values are recorded in Table 1 and are also illustrated by the curves in Fig. 16; the mesh sizes of the tested surfaces are listed in Table 2. From Table 1, we observe that our method TGV produces competitive results. Specifically, for the CAD surfaces, our method outperforms the other compared methods, in the sense that its $\theta$ values are significantly smaller than the others; see the first column of Fig. 16. This is consistent with the visual comparisons in Figs. 9 and 10. For the non-CAD surfaces, our method gives lower $\theta$ values than the compared methods, even though the visual results of ours and NLLR look almost the same in Fig. 11; the results of our method are thus more faithful to the corresponding ground-truth surfaces. For the scanned data, NLLR performs slightly better than our method on the Boy example, even though the two results look almost identical; in the other two examples (Cone and Pyramid), however, our method achieves better $\theta$ values; see the third column of Fig. 16. To further evaluate
the dispersion of the angular difference, we calculate the standard deviation of the angular difference from $\theta$, which is also listed in Table 1. The standard deviations for our method and CNR are evidently lower than those of the other methods. In addition, our method yields the best results on some of the tested meshes and the second-best results on the remaining ones. Overall, the quantitative results show that our method recovers the underlying shapes of noisy surfaces, including sharp features, multi-scale features, and smooth transition regions, better than the other methods, with the lowest errors in most cases. Note that our method performs well on all types of tested meshes (CAD, non-CAD, and real scanned meshes), rather than peaking on one specific type.

Fig. 11: Denoising results of Lucy. From left to right: input noisy mesh, and denoising results produced by TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV, respectively.
Fig. 12: Denoising results of Gargoyle. From left to right: input noisy mesh, and denoising results produced by TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV, respectively.
Fig. 13: Denoising results of real scanned data acquired by the laser scanner. From left to right: input noisy mesh, and denoising results produced by TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV, respectively.

Computational time. We record the computational time of each method in Table 1. Owing to its pre-trained neural networks, CNR is the fastest method. BF is slower than CNR, but significantly faster than the other methods.
L0 is the slowest method on the synthetic meshes, while NLLR is the slowest on the real scanned data. Our method requires more computing time than TV and HO. Furthermore, we adopt the conjugate gradient (CG) method to iteratively solve our two linear systems (one for
the $\mathbf{N}$-subproblem and the other for the $\mathbf{v}$-subproblem), and we found that the computation time can be reduced by decreasing the number of CG iterations at the cost of accuracy. Overall, although our method is computationally more intensive, it produces much better results in terms of visual quality and the error metric $\theta$ in most cases. Furthermore, using GPUs and multi-core CPUs to speed up our method is part of our future work.

Fig. 14: Denoising results of real scanned data acquired by Kinect scanners. From left to right: input noisy meshes, and denoising results produced by TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV, respectively.
Fig. 15: Our denoising results for three real scanned data sets.

Li et al. [18] have recently proposed an end-to-end deep normal filtering network, named DNF-Net, which has received wide attention. In Fig. 17, we compare our method with DNF-Net on three meshes (Sharpsphere, Carter, and Cone04), visually and numerically. To further visualize the distributions of the angular difference across the tested meshes, we show histograms of $\theta$ in Fig. 18. For the mesh containing sharp features and smooth regions (Sharpsphere), our method clearly outperforms DNF-Net in terms of visual quality and the error metric $\theta$; see the first row of Fig. 17. For the CAD mesh (Carter), both methods produce excellent feature-preserving results; nevertheless, the $\theta$ value of ours is lower than that of DNF-Net; see the second row of Fig. 17. For the scanned mesh (Cone04), although the $\theta$ value of DNF-Net is lower than ours, our method produces the visually better result: the result of DNF-Net has some bumps in the smooth regions, where our method performs better; see the last row of Fig. 17. Moreover, as seen in Fig. 18, our method consistently concentrates more of the $\theta$ distribution in the lowest angular ranges. Thus, our method performs favorably against DNF-Net.
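Given two aligned arrays of unit face normals, the $\theta$ metric is straightforward to compute; a hedged sketch follows (the array layout is an assumption, not the authors' evaluation code):

```python
import numpy as np

def mean_angular_difference(n_denoised, n_clean):
    # Mean angle, in degrees, between corresponding unit face normals of the
    # denoised mesh and the ground-truth mesh (the theta error metric).
    dots = np.clip(np.sum(n_denoised * n_clean, axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(dots)).mean())
```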
We discuss our method from various aspects, including robustness to irregular sampling, robustness to different levels of noise, and robustness to mesh resolution.
Sampling. Non-uniform surface sampling is commonplace in triangular meshes. Because the proposed operators used in our TGV normal filtering model (27)
are rigorously defined for a piecewise constant finite element representation in numerical PDE, our method is robust against non-uniform surface sampling. We demonstrate this robustness in Fig. 19: although the noisy meshes have varying density distributions, the obtained results are still of satisfactory quality.

Fig. 16: Error curves of the mean angular difference ($\theta$) for the results in Figs. 9, 10, 11, 12, and 14, for TV [7], HO [19], L0 [43], BF [5], NLLR [13], CNR [16], and our method TGV.

Fig. 17: Comparison between DNF-Net [18] and our method TGV. From left to right: input noisy meshes ($\theta$ = 24.81°, 12.88°, 33.43°), and denoising results produced by DNF-Net ($\theta$ = 6.59°, 5.67°, 7.40°) and by ours ($\theta$ = 6.32°, 5.23°, 7.–°). The corresponding error maps, using the angular difference between the face normals of the denoised meshes and the ground-truth meshes, are also shown.
Stress test. A stress test of our method with increasing levels of noise is presented in Fig. 20. When the noise level is moderate, our method removes noise effectively, preserving sharp features while simultaneously recovering smooth regions well. Moreover, our method preserves sharp features even for a highly noisy mesh; see Fig. 20c. However, when the noise level is larger than the feature size, our method may fail to produce satisfactory results; see Fig. 20d.
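The synthetic corruption used throughout the experiments (zero-mean Gaussian displacement with standard deviation given as a fraction of the mean edge length) can be sketched as below. Displacing vertices along their normals and the fraction 0.3 are illustrative assumptions, since the exact corruption procedure is not spelled out here:

```python
import numpy as np

def add_gaussian_noise(verts, edges, vert_normals, level=0.3, seed=0):
    # sigma = level * (mean edge length of the clean mesh); each vertex is
    # displaced along its unit normal by a zero-mean Gaussian offset.
    edge_vecs = verts[edges[:, 0]] - verts[edges[:, 1]]
    sigma = level * np.linalg.norm(edge_vecs, axis=1).mean()
    rng = np.random.default_rng(seed)
    offsets = rng.normal(0.0, sigma, size=len(verts))
    return verts + offsets[:, None] * vert_normals
```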
Resolution. A robustness test of our method under varying mesh resolution is shown in Fig. 21. When the mesh resolution decreases from high to low, the $\theta$ values of our method do not change significantly for the CAD mesh (Part) and the real scanned data (Cone). For the non-CAD mesh (Bunny), although there is a slight jump in the $\theta$ values at low resolution, the values remain reasonable. Thus, our method is robust against mesh resolution.

7 CONCLUSION
In this work, we present a numerical framework to discretize the total generalized variation (TGV) over triangular meshes. A normal filtering model based on the vectorial TGV is proposed to smooth the normal field. The proposed model can be efficiently solved by variable splitting and the augmented Lagrangian method. Vertex positions are then updated to match the filtered normal field. We carefully evaluate our method in various aspects and compare it to the state-of-the-art mesh denoising methods. The experimental results show that our method has significant advantages in preserving sharp features, recovering smooth transition regions, and preventing unnatural artifacts (e.g., staircase artifacts, over-smoothing or over-sharpening effects, and extra noise). In summary, our method is highly effective for denoising CAD and man-made surfaces consisting of sharp features and smooth transition regions.

Based on the current work, there are many interesting directions to pursue in the future. The proposed TGV model and its vectorial version can be applied to other geometry processing problems, such as mesh segmentation, reconstruction, simplification, and feature detection. Furthermore, we will investigate extending our method to point clouds.

REFERENCES

[1] Z. Liu, X. Xiao, S. Zhong, W. Wang, Y. Li, L. Zhang, and Z. Xie, "A feature-preserving framework for point cloud denoising,"
Comput-Aided Des. (Solid and Physical Modeling) , vol. 127, p. 102857, 2020.[2] J. Wang, X. Zhang, and Z. Yu, “A cascaded approach for feature-preserving surface mesh denoising,”
Comput-Aided Des. , vol. 44,no. 7, pp. 597–610, 2012.[3] J. Wang, J. Huang, F. L. Wang, M. Wei, H. Xie, and J. Qin, “Data-driven geometry-recovering mesh denoising,”
Comput-Aided Des.(Solid and Physical Modeling) , vol. 114, pp. 133–142, 2019.[4] X. Sun, P. Rosin, R. Martin, and F. Langbein, “Fast and effec-tive feature-preserving mesh denoising,”
IEEE Trans. Vis. Comput.Graph. , vol. 13, no. 5, pp. 925–938, 2007.[5] Y. Zheng, H. Fu, K. C. Au, and C. L. Tai, “Bilateral normal filteringfor mesh denoising,”
IEEE Trans. Vis. Comput. Graph. , vol. 17,no. 10, pp. 1521–1530, 2011.
Fig. 18: Histograms of mean angular difference (θ) of the results in Fig. 17, comparing DNF-Net and ours on (a) Sharpsphere, (b) Carter, and (c) Cone04. The horizontal axis shows the distribution of θ in degrees, and the vertical axis denotes the corresponding percentage of faces falling in the fixed ranges of θ.

Fig. 19: Denoising results for noisy input with non-uniform sampling. From left to right: input noisy meshes, denoising results, and the corresponding clean meshes.

Fig. 20: Denoising results of Part corrupted by different levels of noise. The first row shows noisy meshes corrupted with increasing levels of noise, while the second row shows the corresponding denoising results.

Fig. 21: Error curves of mean angular difference (θ) on three meshes (Part, Bunny, Cone) at different resolutions, all corrupted with the same level of noise. The horizontal axis denotes the number of faces of the meshes, and the vertical axis denotes the corresponding θ values.

[6] M. Wei, J. Yu, W.-M. Pang, J. Wang, J. Qin, L. Liu, and P.-A. Heng, "Bi-normal filtering for mesh denoising," IEEE Trans. Vis. Comput. Graph., vol. 21, no. 1, pp. 43–55, 2014.
[7] H. Zhang, C. Wu, J. Zhang, and J. Deng, "Variational mesh denoising using total variation and piecewise constant function space," IEEE Trans. Vis. Comput. Graph., vol. 21, no. 7, pp. 873–886, 2015.
[8] X. Wu, J. Zheng, Y. Cai, and C.-W. Fu, "Mesh denoising using extended ROF model with L1 fidelity," Comput. Graph. Forum (Pacific Graphics), vol. 34, no. 7, pp. 35–45, 2015.
[9] X. Lu, Z. Deng, and W. Chen, "A robust scheme for feature-preserving mesh denoising," IEEE Trans. Vis. Comput. Graph., vol. 22, no. 3, pp. 1181–1194, 2015.
[10] R. Lai, X.-C. Tai, and T. F. Chan, "A ridge and corner preserving model for surface restoration," SIAM J. Sci. Comput., vol. 35, no. 2, pp. 675–695, 2013.
[11] X. Lu, W. Chen, and S. Schaefer, "Robust mesh denoising via vertex pre-filtering and L1-median normal filtering," Comput. Aided Geom. Des., vol. 114, pp. 133–142, 2019.
[12] Z. Liu, W. Wang, S. Zhong, B. Zeng, J. Liu, and W. Wang, "Mesh denoising via a novel Mumford-Shah framework," Comput.-Aided Des. (Solid and Physical Modeling), vol. 126, p. 102858, 2020.
[13] X. Li, L. Zhu, C.-W. Fu, and P.-A. Heng, "Non-local low-rank normal filtering for mesh denoising," Comput. Graph. Forum (Pacific Graphics), vol. 37, no. 7, pp. 155–166, 2018.
[14] M. Wei, H. Jin, X. Xie, L. Liu, and Q. Jing, "Mesh denoising guided by patch normal co-filtering via kernel low-rank recovery," IEEE Trans. Vis. Comput. Graph., vol. 25, no. 10, pp. 2910–2926, 2019.
[15] H. Chen, J. Huang, O. Remil, H. Xie, J. Qin, Y. Guo, M. Wei, and J. Wang, "Structure-guided shape-preserving mesh texture smoothing via joint low-rank matrix recovery," Comput.-Aided Des. (Solid and Physical Modeling), vol. 115, pp. 122–134, 2019.
[16] P.-S. Wang, Y. Liu, and X. Tong, "Mesh denoising via cascaded normal regression," ACM Trans. Graph., vol. 35, no. 6, pp. 232:1–232:12, 2016.
[17] M. Wei, X. Guo, J. Huang, F. Wang, H. Xie, R. Kwan, and J. Qin, "Mesh defiltering via cascaded geometry recovery," Comput. Graph. Forum, vol. 38, no. 7, pp. 591–605, 2019.
[18] X. Li, R. Li, L. Zhu, C. Fu, and P. Heng, "DNF-Net: a deep normal filtering network for mesh denoising," IEEE Trans. Vis. Comput. Graph., 2020.
[19] Z. Liu, S. Zhong, Z. Xie, and W. Wang, "A novel anisotropic second order regularization for mesh denoising," Comput. Aided Geom. Des. (Geometric Modeling and Processing), vol. 71, pp. 190–201, 2019.
[20] Z. Liu, R. Lai, H. Zhang, and C. Wu, "Triangulated surface denoising using high order regularization with dynamic weights," SIAM J. Sci. Comput., vol. 41, no. 1, pp. 1–26, 2019.
[21] S. Zhong, Z. Xie, J. Liu, and Z. Liu, "Robust mesh denoising via triple sparsity," Sensors, vol. 19, p. 1001, 2019.
[22] K. Bredies, K. Kunisch, and T. Pock, "Total generalized variation," SIAM J. Imaging Sci., vol. 3, no. 3, pp. 492–526, 2010.
[23] D. Ferstl, C. Reinbacher, R. Ranftl, M. Ruether, and H. Bischof, "Image guided depth upsampling using anisotropic total generalized variation," in IEEE International Conference on Computer Vision, 2013, pp. 993–1000.
[24] M. Jung and M. Kang, "Simultaneous cartoon and texture image restoration with higher-order regularization," SIAM J. Imaging Sci., vol. 8, no. 1, pp. 721–756, 2015.
[25] W. Feng, H. Lei, and Y. Gao, "Speckle reduction via higher order total variation approach," IEEE Trans. Image Process., vol. 23, no. 4, pp. 1831–1843, 2014.
[26] F. Knoll, K. Bredies, T. Pock, and R. Stollberger, "Second order total generalized variation (TGV) for MRI," Magn. Reson. Med., vol. 65, no. 2, pp. 480–491, 2011.
[27] S. Niu, Y. Gao, Z. Bian, J. Huang, W. Chen, G. Yu, Z. Liang, and J. Ma, "Sparse-view x-ray CT reconstruction via total generalized variation regularization," Phys. Med. Biol., vol. 59, no. 12, pp. 2997–3017, 2014.
[28] G. Taubin, "A signal processing approach to fair surface design," in Proc. 22nd Annu. Conf. Comput. Graph. Interactive Tech., 1995, pp. 351–358.
[29] M. Desbrun, M. Meyer, P. Schröder, and A.-H. Barr, "Implicit fairing of irregular meshes using diffusion and curvature flow," in Proc. 26th Annu. Conf. Comput. Graph. Interactive Tech., 1999, pp. 317–324.
[30] C. Bajaj and G. Xu, "Anisotropic diffusion of surfaces and functions on surfaces," ACM Trans. Graph., vol. 22, no. 1, pp. 4–32, 2003.
[31] S. Fleishman, I. Drori, and D. Cohen-Or, "Bilateral mesh denoising," ACM Trans. Graph. (SIGGRAPH), pp. 950–953, 2003.
[32] T. R. Jones, F. Durand, and M. Desbrun, "Non-iterative, feature-preserving mesh smoothing," ACM Trans. Graph. (SIGGRAPH), pp. 943–949, 2003.
[33] C. Wang, "Bilateral recovering of sharp edges on feature-insensitive sampled meshes," IEEE Trans. Vis. Comput. Graph., vol. 12, no. 4, pp. 629–639, 2006.
[34] H. Huang and U. Ascher, "Surface mesh smoothing, regularization, and feature detection," SIAM J. Sci. Comput., vol. 31, no. 1, pp. 74–93, 2008.
[35] W. Pan, X. Lu, Y. Gong, W. Tang, J. Liu, Y. He, and G. Qiu, "HLO: Half-kernel Laplacian operator for surface smoothing," Comput.-Aided Des., vol. 121, p. 102807, 2020.
[36] M. Wei, L. Liang, W. M. Pang, J. Wang, W. Li, and H. Wu, "Tensor voting guided mesh denoising," IEEE Trans. Autom. Sci. Eng., vol. 14, no. 2, pp. 931–945, 2017.
[37] W. Zhang, B. Deng, J. Zhang, S. Bouaziz, and L. Liu, "Guided mesh normal filtering," Comput. Graph. Forum (Pacific Graphics), vol. 34, no. 7, pp. 23–34, 2015.
[38] J. Zhang, B. Deng, Y. Hong, Y. Peng, W. Qin, and L. Liu, "Static/dynamic filtering for mesh geometry," IEEE Trans. Vis. Comput. Graph., vol. 25, no. 4, pp. 1774–1787, 2019.
[39] S. Yadav, U. Reitebuch, and K. Polthier, "Mesh denoising based on normal voting tensor and binary optimization," IEEE Trans. Vis. Comput. Graph., vol. 24, no. 8, pp. 2366–2379, 2018.
[40] ——, "Robust and high fidelity mesh denoising," IEEE Trans. Vis. Comput. Graph., vol. 25, no. 6, pp. 2304–2310, 2019.
[41] G. Arvanitis, A. S. Lalos, K. Moustakas, and N. Fakotakis, "Feature preserving mesh denoising based on graph spectral processing," IEEE Trans. Vis. Comput. Graph., vol. 25, no. 3, pp. 1513–1527, 2018.
[42] W. Zhao, X. Liu, S. Wang, X. Fan, and D. Zhao, "Graph-based feature-preserving mesh normal filtering," IEEE Trans. Vis. Comput. Graph., 2019.
[43] L. He and S. Schaefer, "Mesh denoising via L0 minimization," ACM Trans. Graph., vol. 32, no. 4, pp. 1–8, 2013.
[44] Y. Zhao, H. Qin, X. Zeng, J. Xu, and J. Dong, "Robust and effective mesh denoising using L0 sparse regularization," Comput.-Aided Des., vol. 101, pp. 82–97, 2018.
[45] M. Centin and A. Signoroni, "Mesh denoising with (geo)metric fidelity," IEEE Trans. Vis. Comput. Graph., vol. 24, no. 8, pp. 2380–2396, 2017.
[46] X. Lu, S. Schaefer, J. Luo, L. Ma, and Y. He, "Low rank matrix approximation for 3D geometry filtering," IEEE Trans. Vis. Comput. Graph., 2020.
[47] Z. Li, Y. Zhang, Y. Feng, X. Xie, Q. Wang, M. Wei, and P.-A. Heng, "NormalF-Net: Normal filtering neural network for feature-preserving mesh denoising," Comput.-Aided Des. (Solid and Physical Modeling), vol. 127, p. 102861, 2020.
[48] H. Avron, A. Sharf, C. Greif, and D. Cohen-Or, "ℓ1-sparse reconstruction of sharp point set surfaces," ACM Trans. Graph., vol. 29, no. 5, pp. 135:1–135:12, 2010.

APPENDIX

Lemma 1.
The adjoint operator of $\mathcal{D}_E$, that is $\mathcal{D}_E^\star : W \to V$, has the following form:
$$ (\mathcal{D}_E^\star w)\big|_e = -\frac{1}{\mathrm{len}(e)} \sum_{l \in B(e)} w_l \,\mathrm{sgn}(e, \tau_l)\,\mathrm{len}(l), \quad \forall e. $$

Proof. By the definition of the adjoint operator, we have
$$ \langle \mathcal{D}_E v, w \rangle_W = \langle v, -\mathcal{D}_E^\star w \rangle_V. \tag{38} $$
Using the inner products (14) and (5) in $W$ and $V$, (38) can be rewritten as
$$ \sum_l (\mathcal{D}_E v)\big|_l \, w_l \,\mathrm{len}(l) = \sum_e v_e \, (-\mathcal{D}_E^\star w)\big|_e \,\mathrm{len}(e). \tag{39} $$
Using (13), the left-hand side of (39) is actually
$$ \sum_l (\mathcal{D}_E v)\big|_l \, w_l \,\mathrm{len}(l) = \sum_l [v]_l \, w_l \,\mathrm{len}(l) = \sum_l \big( v_{e^+}\,\mathrm{sgn}(e^+, \tau_l) + v_{e^-}\,\mathrm{sgn}(e^-, \tau_l) \big)\, w_l \,\mathrm{len}(l) = \sum_e v_e \sum_{l \in B(e)} \mathrm{sgn}(e, \tau_l)\, w_l \,\mathrm{len}(l). $$
Therefore, we have
$$ \sum_e v_e \sum_{l \in B(e)} \mathrm{sgn}(e, \tau_l)\, w_l \,\mathrm{len}(l) = \sum_e v_e \, (-\mathcal{D}_E^\star w)\big|_e \,\mathrm{len}(e). $$
The assertion follows immediately.

Lemma 2.
The adjoint operator of $\widetilde{\mathcal{D}}_E$, that is $\widetilde{\mathcal{D}}_E^\star : \widetilde{W} \to V$, has the following form:
$$ (\widetilde{\mathcal{D}}_E^\star \widetilde{w})\big|_e = -\frac{1}{\mathrm{len}(e)} \sum_{c \in B(e)} \widetilde{w}_c \,\mathrm{sgn}(e, \tau_c)\,\mathrm{len}(c), \quad \forall e. $$

Proof. By the definition, we have
$$ \langle \widetilde{\mathcal{D}}_E v, \widetilde{w} \rangle_{\widetilde{W}} = \langle v, -\widetilde{\mathcal{D}}_E^\star \widetilde{w} \rangle_V. \tag{40} $$
Using the inner products (19) and (5) in $\widetilde{W}$ and $V$, (40) can be rewritten as
$$ \sum_c (\widetilde{\mathcal{D}}_E v)\big|_c \, \widetilde{w}_c \,\mathrm{len}(c) = \sum_e v_e \, (-\widetilde{\mathcal{D}}_E^\star \widetilde{w})\big|_e \,\mathrm{len}(e). \tag{41} $$
Using (18), the left-hand side of (41) is actually
$$ \sum_c (\widetilde{\mathcal{D}}_E v)\big|_c \, \widetilde{w}_c \,\mathrm{len}(c) = \sum_c [[v]]_c \, \widetilde{w}_c \,\mathrm{len}(c) = \sum_c \big( v_{e^{--}}\,\mathrm{sgn}(e^{--}, \tau^-) + v_{e^{+}}\,\mathrm{sgn}(e^{+}, \tau^+) + v_{e^{-}}\,\mathrm{sgn}(e^{-}, \tau^-) + v_{e^{++}}\,\mathrm{sgn}(e^{++}, \tau^+) \big)\, \widetilde{w}_c \,\mathrm{len}(c) = \sum_e v_e \sum_{c \in B(e)} \mathrm{sgn}(e, \tau_c)\, \widetilde{w}_c \,\mathrm{len}(c). $$
Therefore, we have
$$ \sum_e v_e \sum_{c \in B(e)} \mathrm{sgn}(e, \tau_c)\, \widetilde{w}_c \,\mathrm{len}(c) = \sum_e v_e \, (-\widetilde{\mathcal{D}}_E^\star \widetilde{w})\big|_e \,\mathrm{len}(e). $$
The assertion follows immediately.
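As a sanity check, the adjoint formula of Lemma 1 can be verified numerically on a small synthetic instance. The sketch below is illustrative only: the random incidence structure, lengths, and all names are stand-ins for the paper's actual mesh data structures. It implements $(\mathcal{D}_E v)|_l = v_{e^+}\mathrm{sgn}(e^+,\tau_l) + v_{e^-}\mathrm{sgn}(e^-,\tau_l)$ from (13), the claimed $\mathcal{D}_E^\star$, and the length-weighted inner products, then checks $\langle \mathcal{D}_E v, w\rangle_W = \langle v, -\mathcal{D}_E^\star w\rangle_V$ for random $v$, $w$; the same pattern applies to Lemma 2 with four incident edges per cell.

```python
import random

random.seed(0)

# Toy instance: 5 "edges" e with positive lengths, and 7 "dual edges" l,
# each incident to two edges (e_plus, e_minus) with signs sgn(e, tau_l) = +/-1.
num_e, num_l = 5, 7
len_e = [random.uniform(0.5, 2.0) for _ in range(num_e)]
len_l = [random.uniform(0.5, 2.0) for _ in range(num_l)]
incidence = [(random.randrange(num_e), random.randrange(num_e),
              random.choice([-1, 1]), random.choice([-1, 1]))
             for _ in range(num_l)]  # (e_plus, e_minus, sgn_plus, sgn_minus)

def D_E(v):
    # (D_E v)|_l = v_{e+} sgn(e+, tau_l) + v_{e-} sgn(e-, tau_l), as in (13).
    return [v[ep] * sp + v[em] * sm for (ep, em, sp, sm) in incidence]

def D_E_star(w):
    # (D_E^* w)|_e = -(1/len(e)) * sum_{l in B(e)} w_l sgn(e, tau_l) len(l),
    # the formula asserted in Lemma 1.
    out = [0.0] * num_e
    for l, (ep, em, sp, sm) in enumerate(incidence):
        out[ep] -= w[l] * sp * len_l[l]
        out[em] -= w[l] * sm * len_l[l]
    return [out[e] / len_e[e] for e in range(num_e)]

# Length-weighted inner products on V (edges) and W (dual edges).
inner_V = lambda u, v: sum(a * b * L for a, b, L in zip(u, v, len_e))
inner_W = lambda u, v: sum(a * b * L for a, b, L in zip(u, v, len_l))

v = [random.gauss(0, 1) for _ in range(num_e)]
w = [random.gauss(0, 1) for _ in range(num_l)]

lhs = inner_W(D_E(v), w)                     # <D_E v, w>_W
rhs = inner_V(v, [-x for x in D_E_star(w)])  # <v, -D_E^* w>_V
print(abs(lhs - rhs) < 1e-9)
```

The agreement of the two inner products up to floating-point error mirrors the term regrouping in the proof: each dual-edge contribution $w_l\,\mathrm{sgn}(e,\tau_l)\,\mathrm{len}(l)$ is simply reattributed from $l$ to its incident edges $e$.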