Skeletonization via Local Separators
Andreas Bærentzen and Eva Rotenberg
Dept. of Applied Mathematics and Computer Science, Technical University of Denmark
September 11, 2020
Abstract
We propose a new algorithm for curve skeleton computation which differs from previous algorithms by being based on the notion of local separators. The main benefits of this approach are that it is able to capture relatively fine details and that it works robustly on a range of shape representations. Specifically, our method works on shape representations that can be construed as spatially embedded graphs. Such representations include meshes, volumetric shapes, and graphs computed from point clouds. We describe a simple pipeline where geometric data is initially converted to a graph, optionally simplified, local separators are computed and selected, and finally a skeleton is constructed. We test our pipeline on polygonal meshes, volumetric shapes, and point clouds. Finally, we compare our results to other methods for skeletonization according to performance and quality.
Many of the common shape representations that we use in computer graphics are, in essence, spatially embedded graphs. The polygonal mesh representation is an obvious example since a polygonal mesh is usually understood to be a graph embeddable in a surface of appropriate genus. Moreover, a voxel grid can also be seen as a graph, and it is easy to create a graph from a point cloud by connecting each point to nearby points. Thus, it is not surprising that graph algorithms are often useful in geometry processing, but we also rely on, and enforce, more specific constraints on the representations. For instance, since we tend to require manifoldness of surface meshes, we effectively impose strict limitations on the connectivity: if we were to connect two arbitrary, hitherto unconnected, vertices in a triangle mesh, the result would still be a spatially embedded graph, but it would not be manifold – actually, it would not even be a triangle mesh due to the added edge. For many mesh algorithms, that would invalidate the object as input. The moral seems to be that we usually have to abide by the constraints of the geometry representation. However, there are cases where we can relax these constraints and create algorithms which operate on the broader class of spatially embedded graphs. The benefit of this would be that we might obtain algorithms applicable to a wider range of inputs.

Figure 1: For a simple 2D graph (left), we find the local separators shown as colored vertices and edges (middle). The separators are then replaced with vertices, thereby forming the skeleton (right).

In this paper, we propose such an algorithm which computes curve skeletons from spatially embedded graphs. This algorithm applies to a range of representations from triangle meshes over voxel grids to graphs constructed from scattered points.
Moreover, for a given shape, we may have a different situation in different places: it is quite possible that thin structures in one place are represented by a sequence of connected nodes that already resemble a skeleton, while thicker structures are represented by a different part of the same graph where the points lie in a 2-manifold. A very simple example of our algorithm applied to a 2D toy example is shown in Figure 1, and Figure 2 shows the result of our algorithm applied to the problem of reconstructing a tree from a point cloud. Initially, we construct a graph from the point cloud, then we compute a skeleton using the proposed method, and, finally, the skeleton is garbed using convolution surfaces [BS91], producing the model shown on the right.

Unlike the notion of a medial surface, a precise definition of a curve skeleton is elusive, but a number of properties are generally agreed upon. In particular, a curve skeleton is understood to be a locally centered shape abstraction such as could be obtained from a given shape by a process of continuous contraction until we arrive at a 1D structure. Given a point on a curve skeleton, we can identify a set of points on the original shape which contracts to precisely this point on the skeleton. We can think of this set as a skeletal atom. Now, if we consider a part of the shape whose corresponding sub-skeleton does not contain any loops, a skeletal atom contained within this part can be construed as a separator: its removal will disconnect the part (cf. Figure 3). We will use the term local separator to clarify that a skeletal atom is not necessarily a separator for the entire shape. It should also be emphasized that since our method operates on discrete shapes, we are looking for discrete separators.
Fortunately, the notion of a separator is well known in graph theory, and casting the search for skeletal atoms as a search for vertex separators in a graph is what allows us to generalize the skeletonization process to any object that can be represented as a spatially embedded graph. As mentioned above, this allows us to skeletonize meshes, voxel grids, and graphs constructed by connecting points in space to other points in their vicinity such as the tree shown in Figure 2. It bears mentioning that this type of data does not define a precise surface, but our method can still compute a skeleton because it is based only on the position and connectivity of the vertices in the input graph.

Normally, the search for skeletal atoms is coupled with the process of finding the skeletal structure itself. For instance, if we find the skeleton by a process of contraction, thinning, or segmentation, the skeletal atoms are generally found on the same cadence as their connectivity. Using the local separator property, we can find skeletal atoms independently of how they are connected. Of course, we also need geometric criteria for whether a separator is a useful skeletal atom, but we can cast a wider net in our search.

Our first contribution is a method which finds local separators in a spatially embedded graph. We propose an algorithm which, starting from a single vertex, grows a connected set of vertices until this set becomes a separator. Subsequently, we shrink the separator again until it becomes a minimal separator. This method is discussed in Section 2.1.

Our second contribution is a procedure for selecting a non-overlapping subset of the found separators. This is achieved by casting the selection as a weighted set packing problem. The fact that the separators are minimal allows us to pack them far more densely than otherwise, obtaining a skeleton that often appears to resolve comparatively fine details.
The approach is discussed in Section 2.2.

While our method for finding local separators leads to a good result, it would not be hard to come up with other methods than the one proposed. For instance, methods based on Reeb graphs could be used to generate multiple skeletons which we can see as selections of local separators, and then these skeletons can be blended using the packing method. An example of this is shown in Figure 11.

From the packed set of separators, we can easily extract a skeleton as discussed in Section 2.3. The result of local separator finding and packing as well as skeleton extraction is shown in Figure 1 for a simple 2D input graph.

Figure 2: Skeletonization and reconstruction of a botanical tree. The image on the left is the input graph, and the inset shows a close-up where the individual edges are easier to see. The middle image shows the skeleton produced by our algorithm, and the image on the right is a reconstruction of the tree using convolution surfaces.

Figure 3: This figure shows a shape (black outline) and its skeleton (blue). The skeletal atom (red curve) contracts to the point on the skeleton shown as a red disk. Note that the skeletal atom is a separator in the part of the shape indicated by the green region.
Blum defined the medial axis of a given shape as the locus of points where a wave propagating uniformly in all directions from the boundary collides with itself [Blu67]. In 2D this locus is a collection of curves, and in 3D it is a collection of surface components denoted the medial surface [SP08]. The medial surface (or axis) can also be defined in a number of other ways, and these several definitions have given rise to a range of methods for computing the medial surfaces of a shape [TDS+16, SBdB16]. Almost all of the methods are somewhat sensitive to noise, but in recent work Rebain et al. proposed an algorithm that produces approximately inscribed maximal balls from an unorganized surface point cloud [RAV+16]; another recent method is based on the notion of erosion thickness. While these two works define the skeleton in terms of the medial surface, our method does not rely on the medial surface or require that the input is a manifold surface.

Au et al. [ATC+08] and later Tagliasacchi et al. [TAOZ12] provide algorithms based on mean curvature flow. Tagliasacchi et al. also let the skeleton be attracted by the medial surface. Again, these approaches require a manifold surface mesh. A related approach is that of Zhou et al. [ZYH+13], who follow the strategy of combined clustering (of mesh vertices) and contraction (of skeletal edges) while maintaining a 1-1 relation between clusters and skeletal vertices. Like Jiang et al., we also find a partitioning of the input vertices, but their clustering approach seems to lead to larger clusters and hence a coarser skeleton. Using machine learning, Xu et al. recently employed volumetric deep learning to infer skeletons for animation from 3D shapes [XZKS19].

For volumetric images, there are established methods based on mathematical morphology [Ser83] for both medial surfaces and skeletal curves [SBdB17]. An example is the thinning approach by Lee et al. [LKC94] which converts a binary image to a corresponding binary image of the skeleton.

For point clouds, the literature is more sparse. Tagliasacchi et al. find points of approximate rotational symmetry from incomplete point clouds and reconstruct a skeleton from a collection of these [TZCO09]. Huang et al. also compute skeletons from point clouds without connectivity [HWCO+10]. Other works compute skeletons of botanical trees from point clouds [Pre12, GZLC19] and then reconstruct the tree surface from the skeleton.
Since we operate on discrete input, we will assume that our shape has been sampled, producing a set of vertices, V, and that a geometric position, p_v, is associated with each vertex v ∈ V. We will not make any assumptions about whether the vertices are sampled from the surface (i.e. boundary) of the shape or from the interior, but we do require that the vertices are connected by edges, E, thus forming a graph G = (V, E). To exemplify, the graph could be an embedded graph as in the case of a manifold triangle mesh, but it could also be a k-nearest neighbor graph defined on a set of scattered points or something else.

Given a graph G, a separator Σ is a subset of vertices with the property that its removal will disconnect the graph into at least two non-empty sets. The size of a separator is simply the number of vertices it contains, and we say that a separator is minimal if removing any of its vertices would result in it no longer being a separator.

Say our shape is a cylinder, and let the graph vertices form a regular quad mesh where edges are either parallel to the axis of the cylinder or perpendicular to the axis. Observe that the perpendicular edges form rings around the cylinder, and the vertices connected by these edges separate the vertices on either side of the rings. For a cylinder, it seems natural to define the skeleton by connecting the centers of these rings. Thus, some separators relate to the notion of a skeleton in a useful way, but not all separators are useful. For instance, the neighbors of any vertex form a separator that separates said vertex from the other vertices in the graph, but it is rarely useful. It is also clear that separators do not easily describe skeletons of higher genus surfaces; to overcome this challenge, we turn to local separators.

Given a graph, G, an induced subgraph G′ consists of a subset V′ of the vertices V of G, and all of the edges in G that connect a pair of vertices in V′.
Its boundary ∂G′ consists of those vertices of G′ that are linked to the rest of G, that is, those that have edges to V \ V′. Given a graph, G, a local separator is a separator, Σ, of some connected induced subgraph G′ ⊆ G with the further property that no boundary vertices belong to the separator, ∂G′ ∩ Σ = ∅, and that ∂G′ is disconnected by the separator. The main advantage of local separators is that they give information about the structure even on higher genus surfaces. As a welcome secondary advantage, they are more computationally efficient to find. As in the case of normal separators, we can very easily find local separators that bear no meaningful relation to the skeleton of a shape. However, given an algorithm that finds appropriate local separators, we can hope to obtain a meaningful skeleton. Such an algorithm will be described in Section 2. Note that since we only refer to local separators in the following, the word “separator” should be taken to mean “local separator”.

Given a discrete shape represented through a graph, G, a discrete curve skeleton, skel(G), is a graph where each vertex either corresponds to a local separator of G or is an auxiliary branch vertex which joins several vertices. (In a sense, these auxiliary vertices are formed to replace a hyper-edge by a star graph.)

Desirable properties of a skeleton include: (i) it captures the homology of the shape (i.e. it is a deformation retract), (ii) it captures the geometry of the shape, (ii’) including all features, while still (ii”) handling noise consistently, and (iii) it is centered in the shape, in particular, (iii’) the skeleton bones are contained within the “meat” of the shape. We say a skeleton is good if it has these properties.
If the skeleton successfully lives up to the third criterion, it will be the case that (iii”) the skeleton is as smooth as the original shape. Note that there is a fine balance between being robust against noise and being able to capture all features.

Several methods for skeletonization operate by contracting the shape towards the skeleton. This entails the risk that some features may be missed or that noise might be represented as features. It depends on the speed of contraction, and usually there are operating parameters which the user will have to adjust to stay clear of these issues. To a large extent, we sidestep the problem by not contracting the shape. Our method operates by finding sets of vertices, i.e. separators, such that any path from the tip to the base of a feature must pass through this cluster. This allows us to facilitate a relatively broad range of inputs as long as they are in the form of spatially embedded graphs. Specifically, our algorithm works well both if vertices are sampled only from the surface and if they are also sampled in the interior.

However, there are some implicit conditions necessary to ensure that we obtain a good skeleton as defined above in Section 1.3.2. Note that the number of local separators we can find depends on how connected the graph is. For instance, a complete graph cannot have a separator (local or global) since all vertices are connected. Thus, we need the right amount of edges in order to find a collection of local separators which produces the desired skeleton.

This reasoning leads to the following reconstruction condition (illustrated in Figure 4) which assumes that we locally know the (continuous) skeleton and wish to discover whether the connectivity is sufficient to reconstruct (the topology of) this skeleton from the graph.

Figure 4: On the left, the skeleton for a subgraph (represented by all the filled circles) is known a priori to contain no cycles (pink curve). Yet, the smaller subgraph (yellow and red vertices) does admit a separator (red) which is not a separator in the containing subgraph. Hence the sufficient connectivity condition is violated unless the edges indicated by dashed lines are included. On the right, the skeleton is known to contain a cycle, but now the red dot doesn’t separate any nodes, and the connectivity is excessive.

Say we are given a subgraph, G′, whose edges and vertices are known to be sampled from a coherent region for which the corresponding part of the (continuous) curve skeleton is known to contain no cycle. If there is a local separator, Σ, belonging to a smaller subgraph G″ ⊂ G′, then Σ must also be a local separator of G′. If this is true, the connectivity is sufficient, otherwise it is insufficient.

To understand why this condition must be fulfilled, consider the opposite: if the larger region G′ does not have Σ as a separator, we can traverse vertices of G′ to go from one vertex of G″ to another vertex of G″ from which the first one would have been separated by Σ if the path had been restricted to G″; but since the part of the skeleton corresponding to G′ is not supposed to contain a cycle, this should not have been possible.

We can use the same condition with different assumptions in order to test for excessive connectivity. Assume the part of the shape covered by G′ contains precisely one loop, but there is no induced subgraph G″ ⊂ G′ which contains a separator that is not also a separator of G′; then the connectivity is excessive because the supposed cycle in the skeleton cannot be recovered by a collection of separators.

The algorithm accepts a graph as input and produces a discrete curve skeleton in the form of another graph as output. Each vertex of the output graph corresponds to a local separator of the input graph.
Hence, we can think of the local separators as skeletal atoms, and skeletonization largely becomes the process of searching for a collection of local separators.

Informally, our method works as follows. Initially, we sample vertices on the input graph and grow a region around each sampled vertex. At the point where the vertices just outside a given region form several disjoint components, that region is a separator, which we then shrink back until it becomes minimal. This procedure is described in detail in Section 2.1. Our method for finding separators produces a large set of overlapping separators, and we use set packing to obtain a smaller set of disjoint separators. This is described in Section 2.2. Finally, the skeleton is produced as the quotient graph with respect to a partitioning induced by the separators, as we discuss in Section 2.3. The three steps are also illustrated in Figure 5.
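To make the separator notions concrete before the detailed steps, the following Python sketch (our own illustration, not the paper's implementation) checks whether a vertex set is a separator and whether it is minimal; `adj` is assumed to be a dict mapping each vertex to its set of neighbors:

```python
def components(adj, verts):
    """Connected components of the subgraph induced by `verts`."""
    verts, seen, comps = set(verts), set(), []
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v]:
                if w in verts and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def is_separator(adj, sigma):
    """True if removing `sigma` leaves at least two non-empty components."""
    rest = set(adj) - set(sigma)
    return bool(rest) and len(components(adj, rest)) >= 2

def is_minimal_separator(adj, sigma):
    """Minimal: no single vertex of `sigma` can be dropped."""
    return is_separator(adj, sigma) and all(
        not is_separator(adj, set(sigma) - {v}) for v in sigma)
```

On a path graph 0–1–2–3–4, for instance, {2} is a minimal separator, while {1, 2} is a separator but not minimal.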
Figure 5: The three steps of the skeletonization process are shown here for a detail (a hand) from the Armadillo model. From left to right, the images show the original graph, all the found separators, the packed separators, and the output skeleton. Colors indicate quality (red means high quality, blue means low quality).
Figure 6: The algorithm for computing a single local separator. A–G illustrate the operation of Algorithm 1. The color red indicates a separator vertex, yellow a front vertex, and green is used for the front vertex closest to the sphere center. Image A shows the situation after we have found the neighbors (yellow and green) of the start vertex, n (red), and the closest of the neighbors (green). We now use n and the closest vertex to redefine the sphere so it precisely contains both vertices. In B we see how all neighbors of the previously green vertex are now added to the front and the sphere redefined again. Images C through G illustrate how more and more vertices are added to both the front and separator until the front is split into two components. At this point the separator is in fact a local separator. Image H shows the separator after it has been thinned using Algorithm 2.

The algorithm for computing local separators is based on region growing. We assign an initial vertex to the separator Σ and add all its neighbors to the front which we denote F. Once a vertex has been added to Σ, any neighbors of the added vertex which are not in Σ are added to F unless they already belong to F. Thus, at any time, F is simply the set of vertices not in Σ but neighbors to vertices in Σ.

Instead of growing evenly in all directions, we maintain a bounding sphere around Σ, and, in each step, we find the vertex in F that is closest to the sphere and add it to Σ while expanding the sphere to exactly contain the vertex. It is worth pointing out that the vertex in F closest to the sphere center may be inside the sphere already. In this case the sphere is unchanged.

The algorithm stops when the front F consists of two (or more) connected components as illustrated in Figure 6 (G). At this point, Σ is a separator of the subgraph G′ = Σ ∪ F, and thus a local separator according to our definition from Section 1.3.1, with ∂G′ = F.
The benefit of growing the separator using this approach is that it favors going around as opposed to going along features. The center of the expanding sphere that we use to guide the region growing will also be used in the next step when we shrink the separator to a minimal separator, and its radius provides a hint as to local feature size. However, the sphere is not a minimal bounding sphere of Σ or possessed of other specific properties of which we are aware. The region growing is illustrated in Figure 6 (A–G).

If the algorithm is invoked for a valence 1 vertex (i.e. a leaf), it immediately returns a separator consisting only of this vertex. Thus, leaf vertices are defined to be local separators. This is because the skeletonization is designed to be idempotent: applying skeletonization to a skeleton should not change it. Since the interior vertices of a skeleton are separators unless the skeleton contains cliques, this property can be attained simply by defining leaves to be separators. The algorithm also stops and returns an empty separator if the front is empty at any point since, in this case, we have flooded an entire connected component.

If neither condition is met, the algorithm continues while F consists of a single significant connected component. In this context, “significant” means that we also continue if the size of the smallest component is very small compared to the largest. Having found two significant components, the algorithm stops, and Σ is first shrunk to a minimal separator (using an approach described below) and then returned, unless the shrunk Σ consists of more than a single connected component. If this happens, we return an empty separator. Before returning a non-empty separator, we compute the quality which is simply the ratio of the smallest front component of F to the largest. Pseudo-code for Local-Separator is shown in Algorithm 1.

Algorithm 1 Local-Separator.
The arguments are the graph G, the initial vertex, v, and a threshold, τ. Front-Size-Ratio(C_F) computes the ratio of the cardinality of the smallest set to the largest set in a collection of sets passed to the function. Connected-Components_G(F) computes the connected components of the vertex set F given the connectivity of the graph G. ε is a small constant preventing division by zero.

Local-Separator(G, v, τ)
    Σ = {v}
    F = Neighbors_G(Σ)                       // F is ∂(G \ Σ)
    if |F| = 0: return (0, ∅)
    if |F| = 1: return (1, Σ)
    C_F = Connected-Components_G(F)
    c = p_v                                  // center, initially at v
    r = 0                                    // radius
    while |C_F| = 1 ∨ Front-Size-Ratio(C_F) < τ:
        v = argmin_{f ∈ F} ‖c − p_f‖
        if ‖c − p_v‖ > r:
            r = (r + ‖c − p_v‖) / 2
            c = p_v + (r / (ε + ‖c − p_v‖)) (c − p_v)
        Σ = Σ ∪ {v}
        F = (F ∪ Neighbors_G(v)) \ Σ
        if |F| = 0: return (0, ∅)
        C_F = Connected-Components_G(F)
    q = Front-Size-Ratio(C_F)
    Σ, C_F = Shrink-Separator(G, Σ, C_F, c)
    if |Connected-Components_G(Σ)| > 1: return (0, ∅)
    return (q, Σ)
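The growth phase of Algorithm 1 can be sketched compactly in Python (a simplified reading of the pseudo-code that omits the final Shrink-Separator call; `adj` maps vertices to neighbor sets, `pos` maps vertices to coordinate tuples, and all names are ours):

```python
import math

def _components(adj, verts):
    # Connected components of the subgraph induced by `verts`.
    verts, seen, comps = set(verts), set(), []
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v]:
                if w in verts and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def local_separator(adj, pos, v0, tau=0.4, eps=1e-12):
    # Grow a separator Sigma from v0; returns (quality, Sigma).
    sigma = {v0}
    front = set(adj[v0])
    if not front:                     # isolated vertex
        return 0.0, set()
    if len(front) == 1:               # leaf vertices are separators
        return 1.0, sigma
    c, r = pos[v0], 0.0               # guiding sphere center and radius
    while True:
        comps = _components(adj, front)
        sizes = [len(comp) for comp in comps]
        q = min(sizes) / max(sizes)
        if len(comps) > 1 and q >= tau:
            return q, sigma           # front split: Sigma is a local separator
        v = min(front, key=lambda f: math.dist(c, pos[f]))
        d = math.dist(c, pos[v])
        if d > r:                     # grow the sphere to contain p_v
            r = (r + d) / 2.0
            c = tuple(p + (r / (eps + d)) * (ci - p)
                      for p, ci in zip(pos[v], c))
        sigma.add(v)
        front = (front | set(adj[v])) - sigma
        if not front:                 # flooded a whole component: give up
            return 0.0, set()
```

On a triangulated tube, for example, the front eventually splits into the two rings on either side of the grown band, at which point the band is returned as a local separator.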
Algorithm 2 Shrink-Separator.
The arguments are the graph G, the separator, Σ, and the front components C_F = {C_F^1, C_F^2, ...}. Laplacian-Smooth(G, Σ) computes the Laplacian smoothing of the vertices that belong to Σ, keeping the vertices in the rest of G fixed, and returns the smoothed positions.

Shrink-Separator(G, Σ, C_F, c)
    P = Laplacian-Smooth(G, Σ)
    for v ∈ Σ:
        d_v = ‖P_v − c‖
    repeat:
        v = argmax_{ν ∈ Σ} d_ν
        γ = ∅
        for w ∈ Neighbors_G(v):
            γ = γ ∪ { i | w ∈ C_F^i }
        if |γ| = 1:
            Σ = Σ \ {v}
            C_F^i = C_F^i ∪ {v} where {i} = γ
    until Is-Minimal-Separator(G, Σ)
    return Σ, C_F

We want to find a separator that is minimal, but also smooth and balanced. The separator minimization is performed by iteratively removing vertices from the separator until no vertex can be removed. The result clearly depends on the order in which we remove vertices from the separator. A simple heuristic would be to remove vertices in order of decreasing distance to the center of the sphere used to find the separator in the first place (as discussed above). While this is effective, it has the deficiency that for irregular structures, this can lead to somewhat meandering separators. We therefore initially perform Laplacian smoothing of the initial separator, Σ, using inverse edge length as the weighting scheme and with fixed positions for the vertices of F. Having smoothed the separator, we remove vertices in order of decreasing distance to the sphere center until the separator is minimal. There are a few cases where the shrunk separator breaks into disjoint components. We test for and discard these results. Pseudo-code for the function Shrink-Separator is shown in Algorithm 2.
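The shrinking step can likewise be sketched in Python (our own simplified variant: the Laplacian smoothing of Σ is omitted, so vertices are ranked by raw distance to the sphere center c, and minimality is tested on the full graph, whereas the paper works within the local subgraph Σ ∪ F):

```python
import math

def _components(adj, verts):
    # Connected components of the subgraph induced by `verts`.
    verts, seen, comps = set(verts), set(), []
    for s in verts:
        if s in seen:
            continue
        comp, stack = set(), [s]
        seen.add(s)
        while stack:
            v = stack.pop()
            comp.add(v)
            for w in adj[v]:
                if w in verts and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

def _is_separator(adj, sigma):
    rest = set(adj) - set(sigma)
    return bool(rest) and len(_components(adj, rest)) > 1

def _is_minimal(adj, sigma):
    return _is_separator(adj, sigma) and all(
        not _is_separator(adj, sigma - {v}) for v in sigma)

def shrink_separator(adj, pos, sigma, fronts, c):
    # Remove separator vertices in order of decreasing distance to the
    # sphere center c, but only vertices whose neighbors touch exactly
    # one front component (so no tunnel is opened through Sigma).
    sigma = set(sigma)
    fronts = [set(f) for f in fronts]
    while not _is_minimal(adj, sigma):
        for v in sorted(sigma, key=lambda u: -math.dist(pos[u], c)):
            touched = {i for i, f in enumerate(fronts)
                       if any(w in f for w in adj[v])}
            if len(touched) == 1:
                sigma.remove(v)
                fronts[touched.pop()].add(v)  # removed vertex joins that front
                break
        else:
            break  # no removable vertex; give up
    return sigma, fronts
```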
A minimal separator is optimal in the sense that no vertex can be removed from it without it ceasing to be a separator. However, we can also measure other qualities of a separator such as the sum of the lengths of graph edges connecting two vertices that both belong to the separator. Below, we describe an optional step that can be used to optimize minimal separators in this sense.

It is possible to create a variation of a minimal separator, Σ, by exchanging a vertex, v ∈ Σ, with a neighbor vertex, u ∉ Σ, as long as

• v does not have other neighbors that belong to the same front component as u,
• N(u) ∩ Σ = N(v) ∩ Σ, and
• u belongs to the original (unminimized) separator.

The first of these three conditions ensures that we do not introduce tunnels through the separator. The second condition ensures that u has the same neighbors in Σ as v. This translates into a more gradual change to the separator which we have observed leads to better results. The final condition ensures that we do not include vertices from outside the subgraph created by the search in Algorithm 1 (before Algorithm 2) as this could invalidate the separator.

Abiding by the rules above, we can now perform substitutions in order to minimize some particular measure such as the summed length of edges belonging to the separator, i.e.

E(Σ) = (1/2) ∑_{u ∈ Σ} ∑_{v ∈ N(u)} ‖p_v − p_u‖ · { 1 if v ∈ Σ; 0 otherwise }   (1)

We implemented an algorithm that optimizes a separator by performing any substitution that reduces (1) until no further substitutions will reduce the energy. This algorithm is applied once to all minimal separators. If it does improve the separator, we perturb it slightly by making random substitutions and then run the optimization again. This procedure is iterated a fixed, small number of times. Each time, we backtrack if the procedure does not improve the separator.

The time consumption for finding a separator can be analyzed in two terms.
Let s be the total number of vertices in Σ returned by Algorithm 1. First, we spend s steps adding one vertex at a time to the separator. In each step, we spend time proportional to the current size of the front F performing a linear scan for the closest vertex to the current center, and furthermore, time proportional to the number of edges between vertices of the front finding the connected components. Then, we spend O(s) time per round of diffusion, for a total of roughly O(√s) rounds. Thus, the time consumption becomes O(s · (V[F_max] + E[F_max]) + s√s), where V[G′] and E[G′] are the number of vertices and, respectively, edges of the induced subgraph G′, and F_max is the largest front through the course of the algorithm. Thus, in general and in the worst case, we can make no better analysis than O(s²), since the front may be proportional to the separator, and may have many edges. However, we do assume that the points are sampled somewhat evenly from a surface or a shape in 3D space. If the points are sampled somewhat evenly from a surface, the front size is expected to be √s, while if points stem from a (volumetric) shape, the front size is expected to be s^{2/3}. Furthermore, if points stem from a mesh or a voxel grid, the number of edges in the subgraph induced by the front is indeed proportional to the number of vertices; indeed, not only will the subgraph be planar, but voxel grids are constant degree by construction. Thus, for meshes, we expect a running time of O(s√s) to find one separator, and for volumetric graphs, we expect a running time of O(s^{5/3}). For point clouds where vertices have bounded degree, the running time will correspond to that of meshes or voxels, depending on whether the points are sampled from the surface or the interior of the shape.

Since the algorithm for finding a local separator starts from a single vertex of G, we need to first choose a set of initial vertices.
Clearly, one option is to start the search from all vertices of G, but we can improve performance dramatically by sampling. However, simply choosing a random subset of the vertices would lead to small features being missed. Broadly speaking, we want many separators to choose from in order to find a dense packing later.

In our experience, a good compromise is to visit all vertices in random order and then use vertex v as starting point for Algorithm 1 with probability p(v) = 2^(−val(v)), where val(v) is the number of times v has been included in a separator. This heuristic prioritizes the more sparsely covered regions, but, of course, it does not prevent some vertices from being sampled repeatedly: that the separator found starting from x contains y does not imply the converse; for some inputs there may be “popular” vertices that belong to many separators even if they are not used as starting points many times.

The total running time of the separator finding step thus becomes the product of the time to find a separator with the number of vertices sampled: ∑_v p(v) · f(s_v), where p(v) is the sampling probability for v, s_v is the separator ball found from v, and f(s) is the time consumption to find the separator, which is a function of the data set type (be it voxel, mesh, or point cloud graph). Intuitively, especially for locally cylindrical shapes, the probability of sampling n should decrease as the size of the local separator through n increases, leading to a better total running time.

The local separators found using the above method correspond, roughly, to cross sections of the shape. It is obvious to consider each separator to be a skeletal atom and map it to a vertex of a discrete curve skeleton. However, the separators overlap, which makes it difficult to infer a reasonable connectivity for such a skeleton. Our solution is to pack the separators on the graph. In a packing, each vertex can be covered by only a single separator.
Thus, we obtain a partitioning of the vertices in the graph into non-overlapping sets, plus an additional set of unassigned vertices, but these can be trivially assigned to the closest separator. From this partitioning, we can easily extract a skeleton using a method described in Section 2.3.

Set packing is itself an NP-hard problem: deciding whether some given number of sets from a family exist that cover the entire universe, or optimizing for how few are needed. Even for subsets of size k, no better guarantee than a k/2-approximation is known [HS89], and this is tight up to log k factors [HSS06]. However, for our purposes, a greedy heuristic often yields good results in practice. We use a greedy, weighted packing scheme [Kor13] since this allows us to prioritize the best separators. While several weighting schemes were considered [Chv79, Kor13], we have chosen a weight based on how balanced the local separator is. Specifically, once the separators have been minimized using Algorithm 2, we compute the ratio of the smallest to the largest front component, meaning that balanced separators are prioritized. The weight is returned from Algorithm 1 together with the separator itself.

The heuristic takes l subsets (in our case, separators) of size at most S and does the following three steps: First, it computes a weighted redundancy count for each separator: how many other separators does it intersect, and by how much. Then, it sorts them according to redundancy. Finally, it greedily adds sets of increasing redundancy. The running time is thus O(l²S + l log l + l) = O(l²S), that is, dominated by the first term corresponding to the first step. As future work, we could parallelize the computation behind the first term, leading to an O(l²S/p) time algorithm assuming p processors.
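The three greedy steps can be sketched as follows (our own simplified rendering; the exact redundancy weighting in the paper may differ — here each overlap count is divided by the separator's quality weight so that high-quality separators look less redundant and are kept first):

```python
def pack_separators(separators, weights):
    # Greedy weighted set packing.
    # Step 1: weighted redundancy count per separator.
    n = len(separators)
    redundancy = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i != j:
                redundancy[i] += len(separators[i] & separators[j]) / weights[i]
    # Steps 2 and 3: sort by redundancy, then greedily keep separators
    # of increasing redundancy that are disjoint from those kept so far.
    used, packed = set(), []
    for i in sorted(range(n), key=lambda k: redundancy[k]):
        if not separators[i] & used:
            packed.append(i)
            used |= separators[i]
    return packed
```

For example, of the three separators {1, 2}, {2, 3}, and {4, 5} with equal weights, the first two overlap, so only one of them survives alongside {4, 5}.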
Also, one could use a MinHash algorithm to estimate set resemblance [Bro97] in order to speed up the set_intersection subroutine.

In our case, we will have at most n different separators, one for each vertex of the graph, of sizes that are worst-case n, leading to an O(n³) running time. However, in practice, for evenly spaced sample points along a surface or within a shape, the smallest separators through a point would be O(√n) (assuming bounded genus), and even less for lanky shapes such as a botanical tree (Figure 16) or foam (Figure 15).

Figure 7: Steps of the skeleton extraction. The original (minimal) local separators (a), the maximized separators (b), the quotient graph (c), and the output (d) where each clique complex (bold colored dots) is replaced by a star graph with the cluster vertices as leaves, and a new vertex as center.

From a packing of local separators it is now straightforward to compute the actual skeleton. Initially, we maximize the separators by assigning all vertices that do not already belong to a separator to their closest separator in terms of distance along the graph edges. Since all vertices are now assigned to a separator, the separators form a partitioning of the graph, and the skeleton is computed as the quotient graph by the equivalence relation that two vertices belong to the same maximized separator [SS12]. The position of each vertex of the quotient graph is simply the average of the positions of all of the vertices in the associated separator (i.e., equivalence class).

The quotient graph is close to the final skeleton, but it contains cliques of size ≥ 3 and complexes of such cliques. The cliques arise at junctions where several branches meet, leading to several vertices in the quotient graph that are all connected, as exemplified in Figure 5 (b,c). Often these cliques are of size three, meaning that the separators on three branches are all connected to each other.
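The maximization and quotient steps described above can be sketched as follows. This is a sketch, not the paper's implementation: the graph is an adjacency dictionary, and a multi-source breadth-first expansion stands in for "closest separator in terms of distance along the graph edges".

```python
from collections import deque

def quotient_skeleton(adj, packing):
    """adj: vertex -> iterable of neighbors; packing: disjoint vertex sets.
    Assign every uncovered vertex to the nearest packed separator
    (multi-source BFS), then contract each maximized separator to a
    single skeleton vertex and connect touching separators."""
    label = {}
    q = deque()
    for i, sep in enumerate(packing):
        for v in sep:
            label[v] = i
            q.append(v)
    while q:                          # grow the separators outward
        v = q.popleft()
        for u in adj[v]:
            if u not in label:
                label[u] = label[v]
                q.append(u)
    # Quotient graph: one node per maximized separator,
    # one edge wherever two separators are adjacent in the input graph.
    edges = set()
    for v, nbrs in adj.items():
        for u in nbrs:
            if label[v] != label[u]:
                edges.add((min(label[v], label[u]), max(label[v], label[u])))
    return edges
```

On a path graph with a packed separator at each end, both halves are absorbed by their nearest separator and the quotient graph is a single edge between the two skeletal vertices.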
Moreover, in a complex junction where more than three branches meet, the cliques will share sub-cliques, thus forming complexes. For instance, in Figure 5 (b) we observe that there are separators on each finger and on the hand. In the quotient graph (c), the separators are now vertices, and the connectivity of the vertices reflects the connectivity of the separators, forming a complex of three cliques.

The final step of skeletonization consists of removing clique complexes from the quotient graph. To do so, we find all cliques of size three. Next, we compute the pairwise intersections between all cliques, merging those which share a clique of size two. All edges in the resulting complexes are removed, and the complex is replaced with a single vertex which is then connected to all vertices of the complex. An example is shown in Figure 7 (d).

Clearly, the search for 3-cliques dominates the running time for skeleton extraction, and thus, it is quadratic in the number of packed separators, or, equivalently, quadratic in the number of vertices in the final skeleton.

In some cases, it is important to get a smooth skeleton, and our scheme has no inherent smoothness since the position of each skeletal vertex is computed independently of other skeletal vertices as the average position of the associated separator vertices. Figure 10 (a & b) show examples where a very smooth skeleton results, but the leftmost images of Figure 10 (c & d) show two cases where the resulting skeleton is less smooth.

To (optionally) smooth the output skeleton, we use a simple weighted Laplacian smoothing scheme

p_v^new = p_v / |N(v)| + (1 − 1/|N(v)|) · ( Σ_{u ∈ N(v)} (√W_u / |N(u)|) p_u ) / ( Σ_{u ∈ N(v)} √W_u / |N(u)| ),   (2)

where p_v is the position of vertex v, N(v) is the set of neighbors, and W_v is the weight. The weight is given by the number of vertices in the separator associated with v in the input graph.
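One pass of Equation (2) can be written out roughly as follows. This is a sketch with hypothetical names; positions are 3-tuples, and the square-root weighting is reconstructed from the surrounding text.

```python
import math

def smooth_step(pos, nbrs, W):
    """One pass of weighted Laplacian smoothing, Equation (2).
    pos: vertex -> (x, y, z); nbrs: vertex -> list of neighbors;
    W: vertex -> separator size used as smoothing weight."""
    new_pos = {}
    for v, p in pos.items():
        n = len(nbrs[v])
        if n == 0:
            new_pos[v] = p
            continue
        # neighbor weights sqrt(W_u) / |N(u)|: heavy, low-valence
        # neighbors attract more strongly
        ws = [math.sqrt(W[u]) / len(nbrs[u]) for u in nbrs[v]]
        total = sum(ws)
        avg = tuple(sum(w * pos[u][k] for w, u in zip(ws, nbrs[v])) / total
                    for k in range(3))
        a = 1.0 / n
        new_pos[v] = tuple(a * p[k] + (1 - a) * avg[k] for k in range(3))
    return new_pos
```

Note that a vertex of valence 1 keeps its position unchanged, since 1 − 1/|N(v)| vanishes; this is consistent with the intent that leaf vertices should not be smoothed away.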
If v is a branch vertex that replaces a clique, W_v is simply the average of the weights of the vertices in the clique.

The logic behind this scheme is that we want high-weight vertices to attract more, but also low-valency vertices, since leaf vertices in particular should not be smoothed too much, as that could lead to a loss of fine detail. Clearly, this smoothing scheme can be applied iteratively, depending on the desired level of smoothness, as illustrated in Figure 10 (c & d).

We tested our approach on three different types of input: 2-manifold polygonal meshes, volumetric grids, and point data. While the algorithm runs unchanged on all three, some preprocessing is required for inputs other than triangle meshes. In the following, we first describe some implementation details and then relate the results of our experiments.
The graph skeletonization algorithm is implemented in C++. Graph and mesh data structures are based on our open source XX library (omitted for anonymity). The graph data structure is implemented using adjacency lists.

To speed up computations, we parallelize the sampling process using the threading facilities of C++. Since the decision of whether to compute a separator from a given vertex, v, depends on how many separators already include v, that decision consequently depends on thread scheduling. This entails that the program is not fully deterministic; there is a slight variation between runs. However, in all cases, the results shown are those produced by a single run of the algorithm, not the best result from several runs.

Our method relies on only one parameter: the quality threshold, τ, in Algorithm 1, which was set to the empirically chosen 0.0875 in all experiments. Sampling was used in separator finding in all cases except Figure 1.

In our comparisons with other methods, we used the CGAL implementation of Mean Curvature Skeletons [TAOZ12] and the binary kindly provided by the authors to compare with L1-Medial Skeletons [HWCO+13].

We compared our method (LS) to mean curvature flow (MCF) skeletonization [TAOZ12] using the CGAL implementation. The method exposes the w_H parameter which “controls the velocity of movement and approximation quality” according to the CGAL documentation. We ran our experiments with w_H = 0.1 (default) and with w_H = 1.0, which is slower but, in some cases, improves quality. The comparison was performed on a selection of meshes (many commonly used) of varying complexity, genus, and level of geometric detail. The results are summarized in Table 1.

General Comparison
A number of the output skeletons are shown in Figure 8, and we have divided the results into four groups which are discussed below. We have assessed them against the criteria sketched in Section 1.3.2, and against the additional criterion that some skeleton should be computed.
Group A: both win contains the human hand and a synthetic mesh. For these two meshes the LS and MCF results are comparable. It may be observed that the MCF result is smoother in the case of the human hand. However, while smoothing is an intrinsic (and not always desired) part of the MCF method, we can easily obtain a smoother output from the LS method using the approach described in Section 2.4, and the result of smoothing the human hand is shown in Figure 10(d).

Figure 8: Comparison of triangle mesh skeletonization using our method and the mean curvature [TAOZ12] method (MCF) as implemented in CGAL. Each mesh is shown shaded on the left, skeletonized with our method (center left) and using MCF on the right (with w_H = 0.1 center right and w_H = 1.0 far right). The models (Wood Sculpture, Asian Dragon (Stanford), Fertility, Neptune (hand), Neptune (trident), Spider, Human Hand, M18020, Noisy Dinosaur, M22081, M21362, Rocker Arm, Frog) are divided into Groups A through D depending on the outcome for MCF and LS. Group A: MCF and LS produce satisfactory results. Group B: LS finds more features, MCF produces excessive smoothing. Group C: MCF fail cases. Group D: LS and MCF fail case.

Model            Input     Skeleton vert.    Leaf vert.    Branch vert.   Run time (secs)
                 vert.     LS/0.1/1.0        LS/0.1/1.0    LS/0.1/1.0     LS/0.1/1.0
Human hand       12438     120/349/271       6/6/6         3/4/4          87.82/14.97/84.00
M18020           75120     2719/2973/3510    0/2/2         40/80/67       24.20/51.49/60.43
Fertility        24994     181/523/483       3/0/0         7/6/5          534.18/38.26/229.56
Wood Sculpture   19803     241/501/426       7/5/3         11/10/9        141.15/22.90/140.86
Asian Dragon     47701     1276/838/720      83/35/29      29/33/27       512.39/44.04/274.99
Neptune          28052     491/606/569       27/9/12       22/11/16       247.42/27.82/141.26
Rocker Arm       10803     67/230/183        4/2/2         4/2/2          107.33/19.01/86.76
M22081           3970      130/301/290       4/4/3         10/11/11       5.92/4.18/4.50
Frog             37225     483/820/712       23/23/22      14/20/18       1394.70/57.64/388.14
Spider           4675      212/-/-           29/-/-        1/-/-          1.71/-/-
Noisy Dinosaur   23302     240/487/-         13/34/-       7/4/-          299.63/26.78/-
M21362           1828      342/4/36          23/1/8        1/1/3          8.76/2.65/29.77
Armadillo        6488      270/402/372       20/10/14      9/7/12         11.45/9.28/67.07
idem (noisy)     6488      315/347/346       21/8/13       8/5/8          13.61/9.25/74.22
idem (noisier)   6488      494/348/279       36/8/10       11/6/6         17.61/9.04/73.02

Table 1: Summary of the results of triangle mesh skeletonization using our method (LS) and the mean curvature [TAOZ12] method (MCF) as implemented in CGAL. We compared MCF using w_H = 0.1 (default) and w_H = 1.0; these two settings give the second and third entry in each column triple. A "-" marks a failure case.

Group B: excessive smoothing contains several cases where the LS method yields a skeleton that captures more details (criterion ii) and/or with higher fidelity than MCF (criterion iii).
In the case of Fertility, Rocker Arm, and Wood Sculpture, the difference is mainly that the LS approach captures small protrusions not captured by the MCF skeleton. In the case of the Asian Dragon (only head shown), significant head appendages are missing from the MCF skeleton. When it comes to the frog, the LS skeleton captures eyes and nostrils, but both are missing from the MCF skeleton for w_H = 1.0, and the eyes are missing for w_H = 0.1.

The results for M22081 show clearly that too much smoothing can lead to loss of precision. The highly regular structure of M22081 is captured quite well by the LS method, whereas the horizontal parts are deformed by the MCF method: excessively so for w_H = 0.1 and somewhat less for w_H = 1.0. In a similar vein, the overall shape of the trident from the Neptune model is captured better by the LS method, and MCF misses one or more barbs. In the case of the hand of the Neptune statue, we also observe that the skeleton seems to be over-smoothed.

Group C: MCF fails contains two models where MCF fails to produce an output. The spider model consists of multiple intersecting parts, and it contains non-manifold edges, which is likely the reason for this failure. The noisy Dinosaur contains a few tiny connected components, which may both be the cause of the strange isolated components in the skeleton for w_H = 0.1 and the reason for the failure in the case of w_H = 1.0. We note that the LS method produces valid and reasonable output in both cases.

Group D: both fail consists of a single model, M21362, which consists of large, planar regions defined by very few vertices and exceedingly ill-shaped triangles. Both LS and MCF produce a valid but not a reasonable output according to our criterion (i) that the skeleton should capture the homology of the shape, and probably models like this one can only be handled by resampling the mesh.
Comparison of Surface Reconstructions
In order to investigate the difference between LS (our method) and MCF, we applied convolution surface based reconstruction [BS91] to the output from both methods. Since curve skeletonization is not invertible, this will not result in an accurate reconstruction, but it does facilitate a visual comparison of how well each method qualitatively captures the input surface. We chose to do this on the Asian Dragon since this model has numerous fangs, digits, and assorted appendages which provide an interesting challenge.

To reconstruct a shape from the skeleton, we associate a radius with each node of the skeleton. This radius is computed as the average of the distances from the skeletal node to the associated vertices. In the case of LS, the associated vertices are of course the separator, and MCF reports the vertices that are contracted to each node. The result is shown in Figure 9. It is very clear from the images that LS captures far more features than MCF: in fact, all digits are captured by the LS skeleton and just a few by the MCF skeleton, and a number of other appendages are also missing from the MCF skeleton. This observation is supported by the data in Table 1, where we observe that the LS skeleton contains between two and three times more leaf vertices than the MCF skeleton. Because the leaf nodes of the skeleton represent a larger part of the input mesh in the case of the MCF reconstruction, the tips of features appear comparatively bulbous in the reconstruction, as seen in Figure 9.
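The per-node radius used for reconstruction can be sketched as follows; the function and argument names are hypothetical.

```python
import math

def node_radii(skeleton_pos, assigned):
    """skeleton_pos: node -> (x, y, z); assigned: node -> list of input
    vertex positions associated with that node (the separator for LS,
    the contracted vertices for MCF).
    Radius = average distance from the node to its assigned vertices."""
    radii = {}
    for node, c in skeleton_pos.items():
        pts = assigned[node]
        radii[node] = sum(math.dist(c, p) for p in pts) / len(pts)
    return radii
```

With these radii, each skeletal edge can drive a convolution surface primitive in the manner of [BS91]; a node whose assigned vertices cover a large surface patch gets a correspondingly large radius, which is what makes feature tips look bulbous when few leaf nodes are available.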
Smoothness of Skeletons
As previously noted, the result of the LS method can trivially be smoothed in a post process. However, this is not always necessary. Figure 10 (a) shows the result of skeletonizing a Dupin cyclide. The figure shows both the separators (color coded) and the resulting skeleton, which is quite smooth due to the regularity of the cyclide and its sampling, which together ensure that the obtained separators are perfectly circular. It should be noted that to obtain this result, we apply the optimization (Section 2.1.2), which we do not enable for the other examples since the effect is only noticeable on a regular structure such as the cyclide.

Figure 10 (b) demonstrates that we can also obtain a very regular skeleton for slightly less perfect input. In this case, edges of the thin part of the cyclide have been contracted (here the separators are individual vertices), and a section of the thick part has been cut out (here the separators are open circular arcs), but the result is still a very regular skeleton.

Figure 10 (c & d) show the Armadillo and the human hand models skeletonized with 0, 1, 2, or 3 steps of smoothing. In both cases, smooth skeletons result. We note that the effect seems to be purely beneficial for the hand, but in the case of the Armadillo, some distortion of fine features takes place, comparable to what we might see using the MCF method: cf. Neptune's hand and trident in Figure 8. We can also compare to the Armadillo skeleton produced by MCF, which is shown in Figure 12, but since MCF fails to capture several digits, it is not possible to make a direct comparison.
Blending Reeb Graphs
Given a shape, S, and a height function, h : S → R, the Reeb graph of S with respect to h is the quotient space of S under the equivalence relation that two points x and y are equivalent if they belong to the same connected component of the same level set, i.e., h(x) = h(y).

Given a function h and a spatially embedded graph, we can easily extract a set of separators using an approach that is similar to the discrete contour computation algorithm of Tierny et al. [TVD08]. If a value of h is associated with each vertex, we proceed by marking the vertices which are discrete minima as sources, and then we add all their neighbors to a queue, Q, of vertices being visited. At each step, we “freeze” the vertex in Q with the smallest value and add all its unvisited neighboring vertices to Q. This is similar to the common implementation of Dijkstra's algorithm, except that we are not computing distances but merely tracking an existing function. Importantly, at any time step, the vertices in Q form a separator (between frozen vertices and unvisited vertices) whose connected components can be used as local separators. By keeping track of the time step at which vertices are added and removed, we can easily output non-overlapping local separators, and the result is tantamount to a discrete Reeb graph.

Figure 9: The original triangle mesh of the Asian Dragon is shown in the center. On the left is a surface reconstructed from our LS skeleton using convolution surfaces, and on the right a similar reconstruction from the MCF skeleton. In both cases the skeleton used for reconstruction is shown above the rendered model.

Figure 10: a and b show a Dupin cyclide skeletonized using the LS method. In b, a segment of the surface has been removed, introducing a boundary (indicated by a red box), and edge contraction has resulted in the model being collapsed to a curve (indicated by the other red box). In both a and b, the graph of the cyclide is shown left with separators color coded, and the resulting skeleton is shown on the right. c and d show the result of a skeletonization of the armadillo (c) and the human hand (d) with 0, 1, 2, and 3 steps of smoothing going from left to right.

Four such separator collections (and associated skeletons) are shown in Figure 11(a-d). The height functions employed are simply the vertex positions projected onto an axis and subsequently smoothed. Applying our packing algorithm, we can now compute a set of combined separators and the resulting skeleton shown in Figure 11e. In effect, our separator packing algorithm blends the four input Reeb graphs, and the result is a skeleton that appears to better capture the overall shape than the individual Reeb graphs.
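The Dijkstra-like sweep can be sketched as follows. This is a sketch under simplifying assumptions: a single source, a fixed stopping point instead of the full time-step bookkeeping, and the front's connected components returned as local separators.

```python
import heapq

def sweep_front(adj, h, n_frozen):
    """Sweep vertices in increasing order of h: repeatedly freeze the
    queued vertex with smallest h and enqueue its unvisited neighbors.
    Return the front (queued but not frozen) after n_frozen freezes;
    its connected components can serve as local separators."""
    start = min(adj, key=lambda v: h[v])      # a discrete minimum as source
    heap = [(h[start], start)]
    queued, frozen = {start}, set()
    while heap and len(frozen) < n_frozen:
        _, v = heapq.heappop(heap)
        frozen.add(v)
        for u in adj[v]:
            if u not in queued:
                queued.add(u)
                heapq.heappush(heap, (h[u], u))
    front = queued - frozen
    # Split the front into connected components
    comps, seen = [], set()
    for v in front:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            comp.add(x)
            stack.extend(u for u in adj[x] if u in front and u not in seen)
        comps.append(comp)
    return comps
```

The only difference from textbook Dijkstra is the priority: the vertex's own h value is used rather than an accumulated distance, so the front tracks a level set of h.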
Figure 11: a, b, c, and d show separators (top) computed from height functions associated with four different directions. The resulting skeletons (below) are tantamount to Reeb graphs for these height functions. e shows the result of packing the “Reeb separators” using our separator packing approach. The result is a skeleton that is less anisotropic and better captures the overall shape.

In particular, centeredness (iii) is improved. Clearly, we could also have employed a number of other functions, such as eigenfunctions of the graph Laplacian or geodesic distance functions, as choices of h. We did experiment with all of these options, individually and in combination. However, in our experience, local separators are better at capturing fine details of the geometry.

Noisy Meshes
We compared LS to MCF in terms of response to noise. The vertices of the Armadillo were corrupted by adding a random vector sampled from a ball of radius equal to half the average edge length. This procedure was also applied a second time to measure the response to more extreme noise. The results are shown in Figure 12. We note that the effect of noise on the LS method appears to mostly be spurious leaf nodes, which could be removed fairly easily, whereas MCF seems to lose rather than gain features. This is unsurprising since MCF is based on smoothing, and more smoothing is probably needed in order to obtain a skeleton from a noisy model.
Quadrilateral Meshes
If a pure quadrilateral mesh is provided as input to our method, the output is an identical quad mesh since each vertex will be a separator. Simply triangulating the quad mesh makes the algorithm behave as expected. However, we can also increase the connectivity in other ways than by triangulating. A simple procedure is to connect all vertices to the neighbors of their neighbors. More generally, we can define N_k(v) as the set of vertices at a distance of at most k edges. Figure 13 shows the skeletons produced from a triangulation (a) and from a graph where each vertex, v, has been connected to N_2(v) (b) and N_3(v) (c). Unsurprisingly, the separators become fewer and thicker as the connectivity increases.

Figure 12: From left to right these images show: the input mesh, the LS skeleton, the MCF skeleton with w_H = 0.1, and with w_H = 1.0. The rows contain the Armadillo without noise (top row), the same model after one application of vertex displacement noise in the middle row, and two applications of random displacement (bottom row).

Figure 13: The effect of increasing connectivity. The original quad mesh is shown left, (a) shows the separators for a graph constructed from a triangulation of the quad mesh, (b) shows separators for a graph constructed by connecting each vertex, v, to N_2(v), and (c) shows the corresponding result for N_3(v).

Like a triangle mesh, a voxel grid can be construed as a graph. In this case, the positions of the graph vertices are simply the positions of the voxels in the 3D grid, but we remove vertices that correspond to background voxels and keep only those which correspond to foreground (or object). Each graph vertex should be connected to all the vertices which correspond to foreground voxels in its 26-connected neighborhood.
If we think of a voxel as a small cube in a grid of cubes, 26-connectivity means that the voxel (and hence the graph vertex) is connected to all the voxels with which it shares a face, an edge, or a vertex.

We conducted two experiments using voxel grids. In the first experiment, we sampled the Wood Sculpture model on a voxel grid in order to produce a graph. We did this for seven voxel grids with longest side lengths ranging from 30 through 90. The results are shown in Figure 14. There is a clear variation in the skeletons, and comparing with Figure 8 it seems that slightly more details are captured in the mesh skeleton. This is to be expected since the voxel grids are derived from the mesh. Note how the local separators shown in Figure 14(b) are laminar rather than cycles.

In another experiment, we ran the code on X-ray CT data of liquid foam from the TomoBank repository [DCGC+18]. We create graph nodes for voxels above 0.37, corresponding to 11.3% of the voxels becoming graph nodes. The output is shown in Figure 15 (top two rows). We can easily compare our approach to image based approaches to skeletonization. The scikit-image (https://scikit-image.org) package for Python contains a function which, given a 3D binary image, returns a new 3D binary image containing a 1 voxel thick skeleton. This function works by iterative thinning according to a method by [LKC94], and it is extremely fast, producing a skeleton in roughly 55 milliseconds. However, the output is simply a binary image, whereas our method estimates more precise vertex positions and their connectivity. A comparison is shown in Figure 15 (bottom row).

In some respects, point clouds are a more challenging type of data than meshes or voxel grids. Since our algorithm operates on graphs, we need to convert an input point cloud to a graph before proceeding.
Figure 1 shows a simple example of a 2D point cloud where a graph has been constructed by connecting each point to all neighbors in a given radius, and then our method has been applied to the graph. Of course, Figure 1 is a toy example. For real point clouds, we use the following pipeline:
Figure 14: Volumetric skeletons generated from the Wood Sculpture. The original mesh (a), a graph containing 760 vertices created from a voxel grid with local separators shown in color (b). The skeleton produced from this graph (c), the skeleton produced from a volumetric graph consisting of 1800 vertices (d), 3540 vertices (e), 6166 vertices (f), 9830 vertices (g), 14723 vertices (h), and 20966 vertices (i).

1. Construct a graph, G, by connecting each point to its (at most) k nearest neighbors within a given radius.
2. Connect each vertex, v ∈ G, to the vertices in N_l(v) for some small l (again within a certain radius).
3. Simplify the graph by edge contraction.

We will briefly justify this pipeline. In order to separate features of the represented object (say, two branches of a tree), we often need to use a radius in Step 1 that is too small to ensure that we obtain all edges needed to fulfill the reconstruction condition. However, as long as these edges can be reached by going from neighbor to neighbor, we can obtain them via the second step. Thus, it is possible to maintain a gap between features smaller than the greatest distance between vertices that should be connected. Often the resulting graph contains more vertices than needed to capture the skeletal features we seek, hence the third step. Note that while this pipeline is not trivial and depends on several parameters that must be determined, it is simpler than any method that we are aware of for reconstructing a manifold triangle mesh for an object of unknown genus.

A LiDAR scan of a botanical tree consisting of 832943 points was provided by collaborators. Applying the pipeline above, we obtained a graph consisting of 48661 vertices.
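The first two steps of the pipeline can be sketched as follows. This is a sketch only: neighbors are found by brute force for clarity (a spatial index would be used in practice), Step 2 is shown for l = 2, and Step 3 (edge contraction) is omitted.

```python
import math

def knn_graph(points, k, radius):
    """Step 1: connect each point to its (at most) k nearest neighbors
    within the given radius. Brute force, O(n^2), for illustration."""
    adj = {i: set() for i in range(len(points))}
    for i, p in enumerate(points):
        near = sorted((j for j in range(len(points)) if j != i),
                      key=lambda j: math.dist(p, points[j]))
        for j in near[:k]:
            if math.dist(p, points[j]) <= radius:
                adj[i].add(j)
                adj[j].add(i)          # keep the graph undirected
    return adj

def expand_neighbors(adj, radius, points):
    """Step 2 (l = 2): also connect each vertex to its neighbors'
    neighbors, again only within the given radius."""
    new_adj = {v: set(nb) for v, nb in adj.items()}
    for v in adj:
        for u in adj[v]:
            for w in adj[u]:
                if w != v and math.dist(points[v], points[w]) <= radius:
                    new_adj[v].add(w)
                    new_adj[w].add(v)
    return new_adj
```

The second step is what lets the graph bridge gaps between samples that are larger than the Step 1 radius, as long as a neighbor-to-neighbor path exists.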
Once the LS method had been applied to this graph, the skeleton was garbed using convolution surfaces [BS91], producing the result shown in Figure 2.

Using the botanical tree and three of their provided example point clouds as test data, we compared our LS method to the L1-medial skeleton method (L1M) by Huang et al. [HWCO+13].

To experimentally evaluate the performance of our algorithm, we ran it on subsamples of
Wood Sculpture (see Figure 8). We ran two experiments: for one, we thinned out the mesh grid to obtain shapes of fewer vertices (all, half, quarter, and so on). Secondly, we transformed the mesh into a voxel grid with a variable number of voxels.

In all experiments, we found that the time spent computing the separators by far dominates the running time (more than 95% of the time was spent on finding separators as opposed to packing and extraction), and that the proportion of total time spent on computing separators tends to 1 as the number of vertices grows.

Model      Points   LS vertices   L1M samples   L1M iterations
Tree       48661    48661         48661         500
Dinosaur   25713    2329          25713         255
GLady      44230    859           5559          165
Yoga3      21903    2307          21903         199

Table 2: Comparison of the LS and L1M methods for skeletonization. This table contains input sizes and the number of iterations used by L1M.

Although the running time is indeed a function of two parameters, the number of vertices n and the average separator size Σ_max found in Algorithm 1, we run the experiment only on this one figure (of fixed lankiness) and analyse the running time simply as a function of n. We expected a running time of at most O(n^1.5) for the mesh and O(n^2) for the voxels, though hopefully less because of the sampling. As shown in Figure 17, we did indeed observe an asymptotically improved running time compared to the worst case, and we did observe that voxels were asymptotically slower than meshes.

A natural question, since the running time depends on the number of vertices sampled, is: how many vertices are sampled? Here, we were expecting asymptotically fewer points to be sampled from the voxel grid in comparison to the mesh. Indeed, this appears to be the case. Using power regression, we estimate that the total number of samples grows as a sublinear power of n for an n-vertex mesh, and with a smaller exponent still for an n-vertex voxel grid. Note, however, that these numbers are highly dependent on the lankiness of the specific shape in question.
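The power-regression estimate mentioned above amounts to an ordinary least-squares fit in log-log space; a minimal sketch (the function name and any data fed to it are hypothetical):

```python
import math

def power_fit(ns, ts):
    """Fit t = c * n^a by linear regression on (log n, log t);
    return the exponent a."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(t) for t in ts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
            sum((x - mx) ** 2 for x in xs))
```

Applied to measured (vertex count, sample count) or (vertex count, running time) pairs, the returned exponent is the growth rate estimated by the regression.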
Figure 16: (a) shows a comparison using the botanical tree point cloud. On the left, the L1M algorithm by Huang et al. was employed [HWCO+13].

We have presented our results for a skeletonization algorithm based on the notion of local separators. Our method accepts as input a graph whose vertices map to spatial positions. While certain assumptions
[ATC+08], where these two steps are coupled, meaning that we only know the atoms when we have found the skeleton (at least locally). In our approach, we require the atoms to be separators and find a superset of the needed atoms, i.e., minimal local separators. We speculate that this is what allows the LS approach to resolve finer details in the shapes we tested than MCF [TAOZ12]: if the skeletal atoms are a product of a process which terminates only when the skeletal structure has been computed, then atoms corresponding to small structures might have been incidentally removed when that point is reached. Of course, the advantage is only reaped because the LS algorithm is able to resolve small details in the first place. This is thanks in large part to the notion of separator minimality.

Unlike a number of approaches to curve skeletonization, notably [DS06], our method does not take the medial surface into account. However, recent methods compute curve skeletons which are attracted to the medial surface, and this would certainly be possible also with our method and might be worthwhile investigating. Likewise, smoothing is not inherent in our method. Arguably, this is an advantage since smoothness can also make the skeleton less precise, but both attraction to the medial surface and optional smoothing could be combined, leading to a method that would flexibly allow the user to obtain skeletons with desired degrees of smoothness and proximity to the medial surface in the vein of [TAOZ12], at least for shapes where the medial surface is well defined.

In this work, we dealt with performance through parallelization and sampling, but the process of finding and packing local separators becomes very slow for large separators. The use of a dynamic spatial data structure [AMN+98] when searching the front, and a structure for dynamic connectivity of the front [HdLT01], would reduce the search time in Algorithm 1 from linear to polylogarithmic in the number of vertices.

The performance of the sampling scheme could also be improved. In particular, we might not have to start from a vertex when computing every single separator. In fact, given an existing separator, the connected components of its neighbors are also separators in most cases. Thus, it is quite possible to find separators by a search that starts from a single separator and propagates out in this way. However, our preliminary experiments indicate that sampling is still required, and it is not easier to find small features using this approach. Thus, more work is required to find a good sampling scheme that balances coverage with the use of the information in the separators already found.

An important feature of our algorithm is that it works for any input that is a graph or that can easily be transformed into a graph. As such, it may be used for skeletonization of shapes not only in R² and R³, but also shapes in higher dimensions, or non-Euclidean graphs. An open question for future work is to find such applications and test our algorithm on those.

Acknowledgements

We are indebted to Rasmus Reinhold Paulsen, Patrick Møller Jensen, Tim Felle Olsen, and Niels Jeppesen for providing several types of data and testing the method on that data. We are grateful to Ebba Dellwik for providing the LiDAR scan of a botanical tree and to Ida Bukh Villesen for her work on a method for constructing a graph from this point set. We thank Aasa Feragen and Rasmus Reinhold Paulsen for their many useful comments and ideas on the exposition of the paper.
References

[AMN+98] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An optimal algorithm for approximate nearest neighbor searching fixed dimensions. J. ACM, 45(6):891–923, November 1998.

[ATC+08] Oscar Kin-Chung Au, Chiew-Lan Tai, Hung-Kuo Chu, Daniel Cohen-Or, and Tong-Yee Lee. Skeleton extraction by mesh contraction. ACM Transactions on Graphics (TOG), 27(3):1–10, 2008.

[BGSF08] Silvia Biasotti, Daniela Giorgi, Michela Spagnuolo, and Bianca Falcidieno. Reeb graphs for shape analysis and applications. Theoretical Computer Science, 392(1-3):5–22, 2008.

[Blu67] Harry Blum. A transformation for extracting new descriptors of shape. Models for the Perception of Speech and Visual Form, 19(5):362–380, 1967.

[Bro97] A. Broder. On the resemblance and containment of documents. In Proceedings of the Compression and Complexity of Sequences 1997, SEQUENCES '97, page 21, USA, 1997. IEEE Computer Society.

[BS91] Jules Bloomenthal and Ken Shoemake. Convolution surfaces. ACM SIGGRAPH Computer Graphics, 25(4):251–256, 1991.

[Chv79] Vasek Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979.

[CSM07] Nicu D Cornea, Deborah Silver, and Patrick Min. Curve-skeleton properties, applications, and algorithms. IEEE Transactions on Visualization & Computer Graphics, (3):530–548, 2007.

[DCGC+18] Francesco De Carlo, Doğa Gürsoy, Daniel J Ching, K Joost Batenburg, Wolfgang Ludwig, Lucia Mancini, Federica Marone, Rajmund Mokso, Daniël M Pelt, Jan Sijbers, et al. TomoBank: a tomographic data repository for computational x-ray science. Measurement Science and Technology, 29(3):034004, 2018.

[DFW13] Tamal K. Dey, Fengtao Fan, and Yusu Wang. An efficient computation of handle and tunnel loops via Reeb graphs. ACM Trans. Graph., 32(4):32:1–32:10, 2013.

[DS06] Tamal K Dey and Jian Sun. Defining and computing curve-skeletons with medial geodesic function. In Symposium on Geometry Processing, volume 6, pages 143–152, 2006.

[GZLC19] Linming Gao, Dong Zhang, Nan Li, and Lei Chen. Force field driven skeleton extraction method for point cloud trees. Earth Science Informatics, 12(2):161–171, 2019.

[HdLT01] Jacob Holm, Kristian de Lichtenberg, and Mikkel Thorup. Poly-logarithmic deterministic fully-dynamic algorithms for connectivity, minimum spanning tree, 2-edge, and biconnectivity. J. ACM, 48(4):723–760, 2001.

[HS89] C.A.J. Hurkens and A. Schrijver. On the size of systems of sets every t of which have an SDR, with an application to the worst-case ratio of heuristics for packing problems. SIAM Journal on Discrete Mathematics, 2(1):68–72, 1989.

[HSS06] Elad Hazan, Shmuel Safra, and Oded Schwartz. On the complexity of approximating k-set packing. Comput. Complex., 15(1):20–39, May 2006.

[HWCO+13] Hui Huang, Shihao Wu, Daniel Cohen-Or, Minglun Gong, Hao Zhang, Guiqing Li, and Baoquan Chen. L1-medial skeleton of point cloud. ACM Trans. Graph., 32(4):65:1, 2013.

[JK34] V. Jarník and M. Kossler. O minimálních grafech obsahujících n daných bodu. Časopis Pěst. Mat., 63:223–235, 1934.

[JXC+13] Wei Jiang, Kai Xu, Zhi-Quan Cheng, Ralph R Martin, and Gang Dang. Curve skeleton extraction by coupled graph contraction and surface clustering. Graphical Models, 75(3):137–148, 2013.

[KN01] Bernhard Korte and Jaroslav Nešetřil. Vojtěch Jarník's work in combinatorial optimization. Discrete Mathematics, 235(1):1–17, 2001.

[Kor13] David Kordalewski. New greedy heuristics for set cover and set packing. arXiv preprint arXiv:1305.3584, 2013.

[LKC94] Ta-Chih Lee, Rangasami L Kashyap, and Chong-Nam Chu. Building skeleton models via 3-D medial surface axis thinning algorithms. CVGIP: Graphical Models and Image Processing, 56(6):462–478, 1994.

[LYO+10] Yotam Livny, Feilong Yan, Matt Olson, Baoquan Chen, Hao Zhang, and Jihad El-Sana. Automatic reconstruction of tree skeletal structures from point clouds.
ACM Transactions on Graphics(TOG) , 29(6):151, 2010.[Pre12] Chakkrit Preuksakarn.
Reconstruct-ing plant architecture from 3D laserscanner data. Modeling and Simula-tion . PhD thesis, Université Mont-pellier II - Sciences et Techniques duLanguedoc, 2012.[QLZF18] Ed Quigley, Winnie Lin, Yilin Zhu,and Ronald Fedkiw. Three dimen-sional reconstruction of botanical treeswith simulatable geometry.
CoRR ,abs/1812.08849, 2018.[RAV +
19] Daniel Rebain, Baptiste Angles,Julien Valentin, Nicholas Vining, JijuPeethambaran, Shahram Izadi, andAndrea Tagliasacchi. Lsmat leastsquares medial axis transform. In
Computer Graphics Forum , volume 38,pages 5–18. Wiley Online Library,2019.[SBdB16] Punam K Saha, Gunilla Borgefors, andGabriella Sanniti di Baja. A surveyon skeletonization algorithms and theirapplications.
Pattern recognition let-ters , 76:3–12, 2016.[SBdB17] Punam K Saha, Gunilla Borgefors, andGabriella Sanniti di Baja.
Skeletoniza-tion: Theory, methods and applica-tions . Academic Press, 2017.[Ser83] Jean Serra.
Image analysis and math-ematical morphology . Academic Press,Inc., 1983.24SP08] Kaleem Siddiqi and Stephen Pizer.
Me-dial representations: mathematics, al-gorithms and applications , volume 37.Springer Science & Business Media,2008.[SS12] Peter Sanders and Christian Schulz.High quality graph partitioning.
GraphPartitioning and Graph Clustering ,588(1):1–17, 2012.[TAOZ12] Andrea Tagliasacchi, Ibraheem Al-hashim, Matt Olson, and Hao Zhang.Mean curvature skeletons. In
ComputerGraphics Forum , volume 31, pages1735–1744. Wiley Online Library, 2012.[TDS +
16] Andrea Tagliasacchi, Thomas Delame,Michela Spagnuolo, Nina Amenta, andAlexandru Telea. 3d skeletons: A state-of-the-art report.
Computer GraphicsForum , 35(2):573–597, 2016.[TVD08] Julien Tierny, Jean-Philippe Vande-borre, and Mohamed Daoudi. Enhanc-ing 3d mesh topological skeletons withdiscrete contour constrictions.
The Vi-sual Computer , 24(3):155–172, 2008.[TZCO09] Andrea Tagliasacchi, Hao Zhang, andDaniel Cohen-Or. Curve skeleton ex-traction from incomplete point cloud.
ACM Trans. Graph. , 28(3):71:1–71:9,July 2009.[XZKS19] Zhan Xu, Yang Zhou, EvangelosKalogerakis, and Karan Singh. Predict-ing animation skeletons for 3d articu-lated models via volumetric nets. In , pages 298–307. IEEE,2019.[YSC +
16] Yajie Yan, Kyle Sykes, Erin W. Cham-bers, David Letscher, and Tao Ju.Erosion thickness on medial axes of3d shapes.
ACM Trans. Graph. ,35(4):38:1–38:12, 2016. [ZYH +
15] Yang Zhou, Kangxue Yin, Hui Huang,Hao Zhang, Minglun Gong, and DanielCohen-Or. Generalized cylinder de-composition.