Learning to Accelerate Decomposition for Multi-Directional 3D Printing
Chenming Wu, Yong-Jin Liu∗ and Charlie C.L. Wang∗

Abstract — Multi-directional 3D printing has the capability of decreasing or eliminating the need for support structures. Recent work proposed a beam-guided search algorithm to find an optimized sequence of plane-clipping, which gives a volume decomposition of a given 3D model. Different printing directions are employed in different regions to fabricate a model with tremendously less support (or even no support in many cases). To obtain an optimized decomposition, a large beam width needs to be used in the search algorithm, leading to a very time-consuming computation. In this paper, we propose a learning framework that can accelerate the beam-guided search by using a smaller number of the original beam width to obtain results with similar quality. Specifically, we use the results of beam-guided search with large beam width to train a scoring function for candidate clipping planes based on six newly proposed feature metrics. With the help of these feature metrics, both the current and the sequence-dependent information are captured by the neural network to score candidates of clipping. As a result, we can achieve around 3× computational speed-up. We test and demonstrate our accelerated decomposition on a large dataset of models for 3D printing.

I. INTRODUCTION
3D printing is a very popular technology that processes materials in an additive manner. Its capability to build complex objects rapidly has been widely used in many scenarios – from the micro-scale fabrication of bio-structures to the in-situ construction of architecture. However, Fused Deposition Modeling (FDM) using planar layers with a fixed 3D printing direction suffers from the need of support structures (shortly called supports in the following context), which are used to prevent the collapse of material in overhang regions due to gravity. Supports bring in many problems, including being hard to remove, surface damage and material waste, as summarized in [1].

To avoid using a large number of supports, our previous work [2] proposes an algorithm to decompose 3D models into a sequence of sub-components, the volumes of which can be printed one by one along different directions for different components. Candidate clipping planes are used as a set of samples to define the search space for determining an optimized sequence of decomposition. Different criteria are defined to ensure the feasibility and the manufacturability (e.g., collision-free, no floating region, etc.). The most important part of the work presented in [2] is a beam-guided search algorithm with progressive relaxation. The benefit of the beam search algorithm is that it can avoid being stuck in a local minimum – a common problem of greedy search. Beam width b = 10 is empirically used to balance the trade-off between computational efficiency and searching effectiveness.

C. Wu and Y.-J. Liu are with the Department of Computer Science and Technology, Tsinghua University, Beijing, China. {wcm15@mails, liuyongjin@mail}.tsinghua.edu.cn. C.C.L. Wang is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong. [email protected]. *Corresponding authors.

Fig. 1. A 5-DOF multi-directional 3D printing system that can deposit material along different directions: (left) the printer head can move along the x-, y- and z-axes and (right) the working table can rotate around two axes (see the arrows for the illustration of the A-axis and C-axis).
Though conducting a parallel implementation running on a computer with an Intel(R) Core(TM) i7 CPU (4 cores), the method still results in an average computing time of 6 minutes. On the other hand, using b = 10 is a compromise between performance and efficiency. Using a larger b would give us better results since the search space is expanded linearly when b increases – see Fig. 2 for an example.

One question is, can we obtain results of similar quality with a much smaller beam width by learning from searches conducted with a large beam width? Our answer is yes. To achieve this goal, we propose to learn a scoring function for candidate clipping planes by using six feature metrics. With the help of these feature metrics, both the current and the sequence-dependent information are captured by the neural network to score candidates for clipping. The learning is conducted on the results of beam-guided search with large beam width (i.e., b = 50) running on a large dataset of models for 3D printing, Thingi10k, recently published by [3]. As a result, we can achieve around 3 times acceleration while still keeping a similar quality on the results of volume decomposition. In summary, we make the following contributions:

• A learning-to-accelerate framework that can rank a set of candidate planes that best fit the optimal results sampled on the large dataset, which significantly accelerates the beam search algorithm without sacrificing the performance.

• A method to convert the trajectories generated during the beam-guided search to listwise ranking orders at distinct stages for training.

The computational efficiency of the proposed work is much better than our previous work [2] while keeping the quality of searching results at a similar level. The implementation of the learning-based acceleration presented in this paper, together with the solid decomposition approach presented in [2], is available on GitHub.

Fig. 2. An example of using different widths in beam search is given on the frog model (ID: 81368 from the Thingi10k dataset [3]). A large number of supports are needed by using conventional 3D printing (left). Multi-directional 3D printing can significantly reduce the need of supports, and the regions needing additional supports can be further reduced (middle to right) when the beam width increases. Regions to be printed along different directions are displayed in different colors to represent the results of volume decomposition, and supporting structures represented by red struts are added.

II. RELATED WORK
The problems caused by supports have motivated a lot of research efforts to reduce the need for them. There are three significant threads of research towards this goal: 1) proposing better patterns of supports so that the number of supports is smaller than the one generated by support generators (ref. [4], [5]); 2) segmenting digital models into several pieces, each of which can be built in a support-free or support-effective manner; 3) using high degree-of-freedom (DOF) robotic systems to automatically change the build direction so that the overhanging regions become regions that can be safely fabricated without the need of supports. Here we mainly review the prior work in the last two threads, which are most relevant.
A. Segmentation-based Methods
A digital model can be first segmented into different components for fabrication and then assembled back to form the original model. Several methods have explored using segmentation to reduce the need of supports. Hu et al. [6] invented an algorithm to automatically decompose a 3D model into parts in approximately pyramidal shapes to be printed without support. Herholz et al. [7] proposed another algorithm to solve a similar problem by enabling slight deformation during decomposition, where each component is in the shape of a height-field. RevoMaker [8] fabricated digital models by 3D printing on top of an existing cubic component, which can rotate itself to fabricate height-field shapes. Wei et al. [9] partitioned a shell model into a small number of support-free parts using a skeleton-based algorithm. Muntoni et al. [10] also tackled the problem of decomposing a 3D model into a small set of non-overlapping height-field blocks, which can be fabricated by either molding or AM. These methods are mostly algorithmic systems that can be easily incorporated into off-the-shelf manufacturing devices. However, the capability of the manufacturing hardware has not been considered in the design of these algorithms.

https://github.com/chenming-wu/pymdp/
The recent development in robotic systems enables researchers to think about a more flexible AM routine [11]. Adding more DOFs into the process of 3D printing seems promising and has gained much attention. Keating and Oxman [12] proposed to use a 6-DOF manufacturing platform driven by a robotic arm to fabricate a model in either an additive or subtractive manner. Pan et al. [13] rethought the process of Computer Numerical Control (CNC) machining and proposed a 5-axis motion system to accumulate materials. The On-the-Fly Print system proposed by Peng et al. [14] is a fast, interactive printing system modified from an off-the-shelf delta printing device but with two additional DOFs. Based on the same system, Wu et al. [15] proposed an algorithm that can plan collision-free printing orders of edges for wireframe models.

Industrial robotic arms have been widely used in AM. For example, Huang et al. [16] built up a robotic system for 3D printing wireframe models on a 6-DOF KUKA robotic arm. Dai et al. [17] developed a voxel-growing algorithm for support-free printing of digital models using a 6-DOF UR robotic arm. Shembekar et al. [18] proposed a method to fabricate conformal surfaces by collision-free 3D printing trajectories on a 6-DOF robotic arm. To reduce the expense of hardware, a multi-axis additive manufacturing system with fewer DOFs was also proposed recently [19]. They adopted a flooding algorithm to plan collision-free and support-free paths. However, this approach can only be applied to tree-like 3D models with simple topology. Volume decomposition-based algorithms have been proposed in our prior work (ref. [2], [20]).
C. Learning to Accelerate Search
Efficiently searching for a feasible solution is a common problem in computer science, where most problems have an ample search space and are thus challenging to tackle. Recent research advances the state-of-the-art by incorporating machine learning techniques. For example, optimizing a program using different predefined operators is a combinatorial problem that is difficult to solve. The work of Chen et al. [21] learned domain-specific models with statistical costs to guide the search of tensor implementations over many possible choices for efficient deep-learning deployments. Recently, Adams et al. [22] improved a beam search algorithm for Halide program optimization. They proposed to learn a cost model to predict running time by using derived features as input. We aim at searching optimal sequences of operations as applying different cuts in different stages. Learning a scoring function is similar to the problem solved in [22].

Directly establishing a mapping from features to a score by supervised learning is difficult. Instead, we adopt the learning-to-rank (LTR) [23] technique to solve our problem. LTR is a traditional topic in information retrieval, which aims at learning a scoring model from query-document features; the predicted scores can thereafter be used to order (rank) the documents. There are three types of LTR approaches: pointwise, pairwise, and listwise. The pointwise LTR approach learns a direct mapping from a single feature to an exact score [23]. The pairwise LTR approach learns pairwise information between two features and converts the pairwise relationships to a ranking [24], [25]. The listwise LTR approach treats a permutation of features as a basic unit and learns the best permutation [26]–[28]. Our work is motivated by the idea of ranking the query-document features using listwise LTR. A scoring function with our model-plane features as input is learned to accelerate the beam search procedure.

Fig. 3. A sequence of multi-directional 3D printing can be determined by computing a sequence of planar clipping (left), where the inverse order of clipping gives the sequence of multi-directional 3D printing (right). Details can be found in [2].

III. PRELIMINARIES AND NOTATION
This section briefly introduces the idea of the beam-guided algorithm previously proposed in [2].
A. Problem Formulation
Whether fabricating a model M layer-by-layer needs additional supports can be determined by whether risky faces exist on the surface of M. A commonly used definition identifying a risky face f is

e(f, π) = { 1, if n_f · d_π + sin(α_max) < 0
          { 0, otherwise                                  (1)

where d_π (as the normal of π) gives the printing direction defined by a base plane π, n_f is the normal of f, and α_max is the maximal self-supporting angle (ref. [1]). Face f is risky if e(f, π) = 1 and otherwise it is called safe.

In [2], a multi-directional 3D printer is supervised by fabricating a sequence of parts decomposed from M where:

• N components decomposed from M satisfy

M = M_1 ∪ M_2 ∪ ⋯ ∪ M_N = ∪_{i=1}^{N} M_i              (2)

with ∪ denoting the union operator;

• {M_i}_{i=1,...,N} is an ordered sequence that can be collision-freely fabricated with

π_{i+1} = M_{i+1} ∩ (∪_{j=1}^{i} M_j)                   (3)

being the base plane of M_{i+1}, where ∩ denotes the intersection operator;

• π_1 is the working platform of the 3D printer;

• All faces on a sub-region M_i are safe according to d_{π_i} determined by π_i.

To tackle this problem, we use planes π to cut 3D models. If every clipped sub-region satisfies the manufacturability criteria, we can use the inverse order of clipping as the sequence of printing for the multi-directional 3D printer (see Fig. 3 for an illustration). The printing direction of a sub-part M_i is determined by the normal of the clipping plane. We formulate the problem of reducing the area of risky faces on M_i as minimizing

J = Σ_i Σ_{f ∈ M_i} e(f, π_i) A(f)                      (4)

where A(f) is the area of a face f. As we are handling models represented by triangle meshes, the computation of A(f) is straightforward. The metric J is employed to measure the quality of different sequences of volume decomposition. While minimizing the objective function defined in Eq. (4), we need to ensure the manufacturability of each component.

Fig. 4. An example of beam-guided searching trajectories generated in a case with b = 6. The trajectory in dark color is the best trajectory τ* giving the lowest value of J. The trajectories shown in light colors are the other trajectories, which have larger values of J than that of τ*.

B. Beam-guided Search
The beam-guided search is to optimize Eq. (4). Considering the manufacturing constraints as well as search efficiency, we define four criteria in the beam-guided search.
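As a concrete reference, the risky-face test of Eq. (1) and the risky-area summation that underlies Eq. (4) can be sketched as follows. This is a minimal illustration with a tuple-based face representation of our own, not the authors' implementation:

```python
import math

# A minimal sketch (not the authors' code): faces are given as
# (normal, area) pairs with unit normals; `d` is the unit printing direction.
ALPHA_MAX = math.radians(45.0)  # maximal self-supporting angle used in the paper

def is_risky(n_f, d, alpha_max=ALPHA_MAX):
    """Eq.(1): a face is risky when n_f . d + sin(alpha_max) < 0."""
    dot = sum(a * b for a, b in zip(n_f, d))
    return dot + math.sin(alpha_max) < 0.0

def risky_area(faces, d, alpha_max=ALPHA_MAX):
    """Sum of A(f) over risky faces -- one inner term of the objective J in Eq.(4)."""
    return sum(area for n_f, area in faces if is_risky(n_f, d, alpha_max))

# A downward-facing face (normal (0,0,-1)) is risky when printing along +z,
# while an upward-facing one is safe.
faces = [((0.0, 0.0, -1.0), 2.0), ((0.0, 0.0, 1.0), 3.0)]
print(risky_area(faces, (0.0, 0.0, 1.0)))  # -> 2.0
```

The same summation, restricted to the part above a clipping plane, yields the residual risky area R(M_k, π) of Eq. (5) used below.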
Criterion I: All faces on M_i should be self-supported.

Criterion II: The remaining model obtained by every clipping should be connected to the printing platform P.

Criterion III: The physical platform of the printer P is always below the clipping plane.

Criterion IV: It is always preferred to have a large solid obtained above a clipping plane so that a large volume of solid can be fabricated along one fixed direction.

A beam-guided search algorithm is proposed to guide the search. Beam search [29] is an efficient search technique that has been widely used to improve the results of best-first greedy search. It builds a search tree that explores the search space by expanding promising nodes (b nodes as beam width) instead of greedily expanding only the best one (see Fig. 4). It integrates the restrictive Criterion I (and its weak form) as an objective function to ensure that the beam search is broad enough to include both the local optimum and configurations that may lead to a global optimum. We define the residual risky area of a model M_k according to a clipping plane π as

R(M_k, π) = Σ_{f ∈ M_k^+} e(f, π) A(f),                 (5)

where π separates M_k into the part M_k^+ above π and the part M_k^- below π. The proposed beam-guided search algorithm starts from an empty beam with the most restrictive requirement of R(M_k, π) < δ, where δ is a threshold progressively increasing from a tiny number (e.g., 0.0001). Candidate clipping planes that satisfy this requirement and remove larger areas of risky faces have a higher priority to fill the b beams. If there are still empty beams after the first 'round' of filling, we relax δ by letting δ = 5δ until all b beams are filled. The detailed algorithm can be found in [2].

Fig. 5. An example in the Thingi10k dataset (ID: 109926). Our learning-based method outperforms the beam-guided search algorithms with small beam width b. Here, M indicates the percentage of the reduced risky area when using multi-directional 3D printing – the higher the better. From left to right, the results of the conventional beam search [2] when using different widths: b = 2 → M = 76.50% (117 seconds), b = 5 → M = 76.50% (684 seconds), b = 10 → M = 76.55%, and b = 20 → M = 90.07%. It can be observed that better results with less risky area can be obtained when using a large beam width. With the help of the scoring function G(·) learned in this paper, we can use a very small beam width (b = 2, 120 seconds) to obtain the same result obtained by a large beam width (b = 20) in conventional beam search – see the result shown in the right-most column. Supporting structures are generated for multi-directional 3D printing by the method presented in [2] and given in the bottom-right corner of each column.

IV. LEARNING TO ACCELERATE DECOMPOSITION
A. Methodology
The beam-guided algorithm [2] constrains the search space by imposing the manufacturing constraints (Criteria II & III) and the volume heuristic (Criterion IV) while progressively relaxing the selection of 'best' candidates (Criterion I). A larger beam width b keeps more less-optimal candidates, which have a better chance of leading to a globally optimal solution. We conducted an experiment on the Thingi10k dataset to compare different choices of b, and it turns out that the average performance of b = 50 is noticeably better than the average performance of b = 1, while it takes many times more computing time to obtain those results. One example is given in the right of Fig. 5. The experimental results encourage us to explore the feasibility of learning from the underlying experience produced by a large beam width B, and utilizing the learned policy to guide a more effective search, which only keeps a much smaller beam width b (b ≪ B) during the search procedure.

Specifically, given b nodes for configurations kept in the beam, we will be able to obtain thousands of candidates for the next cut. The original method presented in [2] is employed to select the 'best' and the relaxed 'best' B candidates (b ≪ B). Here we will not keep all these B candidates in the beam. Instead, only b candidates are selected from these B candidates, where the selection is conducted with the help of a scoring function G(·) using six enriched feature metrics as input for each candidate clipping. An illustration of this selection procedure can be found in Fig. 6. The scoring function is constructed by a neural network, which is trained by using samples from conducting beam-guided searches [2] on Thingi10k – a large dataset of 3D printing models – with a large beam width B = 50.

In the rest of this section, we will first present the enriched feature metrics. Then, we present the details of the accelerated search algorithm and the method to generate training samples. Lastly, the learning model of the scoring function is introduced.

B. Featurization of Candidate Clipping
We featurize each candidate cut as a vector consisting of six metrics. The metrics are carefully designed according to the criteria given in Sec. III, and consider both the current and the sequence-dependent information for the configuration of a planar clipping. Note that it is crucial to have metrics covering the sequence-dependent information (i.e., M_2 and M_6 below). Otherwise, there is only a trivial chance to learn the strategy of beam-guided search that avoids being stuck at a local optimum when using a large beam width.

Ratio of reduced risky area M_1: The reduced risky area is essentially the decreased value of Eq. (4). M_1 is defined as the ratio of the decreased risky area caused by a candidate clipping plane (the candidate) over the value of J.

Accumulated ratio of reduced risky area M_2: Different stages have different values of M_1, which only reflect a local configuration. We define the sum of M_1 as the accumulated ratio of reduced risky areas to describe the situation of a sequence of planning. In short, M_2 = Σ M_1.

Processed volume M_3: The volume of the region removed by a clipping plane π directly determines the efficiency of a cutting plan – a larger volume removed per cut leads to fewer clippings. We normalize it to [0, 1] by using the volume of the given model V(M).

Distance to platform M_4: To reflect the requirement of letting the working platform P always stay below a clipping plane π, we define a metric as the minimal distance between π and P. M_4 is normalized by the radius of M's bounding sphere.

Distance to fragile regions M_5: To prevent cutting through fragile regions during volume decomposition, we define the minimal distance between a clipping plane π and all fragile regions, which are thin 'fins' or 'bridges'. These regions can be detected by geometric analysis of local curvature and feature size [30]. Again, this distance is normalized by the radius of M's bounding sphere.

Accumulated residual risky area M_6: None of the above metrics has considered the area that cannot be made fully support-free even after decomposition – i.e., the residual risky areas. Here we add a metric to consider the accumulated residual risky area, which is normalized by the total risky area as M_6 = Σ R(M_k, π_k)/J.

Without loss of generality, a candidate clipping in any stage of the planning process can use the vector formed by the above six metrics to describe its configuration. As illustrated in Fig. 4, each candidate clipping is represented as a node during the beam-guided search. A node n is denoted by n = [M_1, M_2, ..., M_6], associated with the six metrics. In the following sub-section, we will introduce the method to select nodes kept in the beam-guided search by using the values of these metrics.

C. Accelerated Search Algorithm
Using the beam-guided search algorithm, we can obtain a list of candidate cuts with feature vectors evaluated by the six metrics. The beam-guided search algorithm always keeps up to B promising nodes N^k = {n_1^k, ..., n_B^k} at stage k. We observe that each node n_i^k may come from different parent nodes at its previous stage k−1, and n_i^k may result in different offspring nodes at the next stage k+1. This essentially constructs a set of trajectories starting from the input model to the globally optimal solution of decomposition (see Fig. 4 for an example).

Fig. 6. An example that shows the pipeline of our learning to accelerate decomposition. At each stage, we use a relatively vast B = 50 to generate candidate cuts and their metrics. Then we use the trained scoring function G(·) to predict scores of cuts. After that, we convert the predicted scores to a ranked order by arg-sorting. Lastly, we select the first b cuts (with b ≪ B) from the selection vector for the next round of searching for decomposition. Note that the input of G(·) is the six metrics for all the B candidate cuts as a B × 6 matrix M, and the output of G(·) is a column of B scores for these candidates.

When working on an input mesh M, we can search many possible trajectories by running the beam-guided search algorithm. Each trajectory τ has a corresponding cost J(τ). Comparing two nodes n_a^k ∈ τ_A and n_b^k ∈ τ_B at the same stage k that belong to different trajectories τ_A and τ_B, we prefer to keep the node n_a^k in the beam rather than n_b^k when J(τ_A) < J(τ_B), as the trajectory τ_A is more optimal. This is denoted as n_a^k ≻ n_b^k. Therefore, at any stage k, we can always obtain a ranked order R^k = {n_i^k} according to these relative relationships between the nodes. Selecting the top-b nodes from the ranked order R^k has a high chance of keeping nodes that belong to trajectories with smaller values of J.
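The selection step described above (and illustrated in Fig. 6) amounts to arg-sorting predicted scores and keeping the first b candidates. A minimal sketch of our own, with a stub standing in for the learned G(·):

```python
import numpy as np

# A minimal sketch of the selection step in Fig.6 (our own illustration,
# not the authors' code): `score_fn` stands for the learned G(.), mapping
# a (B, 6) matrix of metrics to B scores; a stub is used here.
def select_top_b(metrics, score_fn, b):
    """Rank B candidate cuts by predicted score and keep the best b."""
    scores = score_fn(metrics)            # shape (B,)
    order = np.argsort(-scores)           # arg-sort, highest score first
    return order[:b]                      # indices of the b kept candidates

# Stub scoring function: weighted sum of the six metrics (hypothetical weights).
stub_G = lambda M: M @ np.array([0.4, 0.2, 0.15, 0.1, 0.05, 0.1])

B_width, b_width = 50, 2
metrics = np.random.default_rng(0).random((B_width, 6))   # fake candidates
kept = select_top_b(metrics, stub_G, b_width)
print(kept.shape)  # -> (2,)
```

In the actual pipeline the stub is replaced by the trained neural network, and the kept indices feed the next stage of the beam-guided search.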
In our algorithm, we learn a scoring function G(·) from the ranked orders at different stages on different models. With the help of the scoring function G(·) learned from searches with a large beam width, the search with a smaller beam width b is expected to generate results of similar quality. See also the illustration of our scoring-function-based ranking step for selecting b nodes out of B candidates given in Fig. 6.

D. Listwise Learning
For an input mesh M, we can obtain a collection of resultant trajectories by running the beam-guided search algorithm, each trajectory having a corresponding cost J(τ). Here we propose a method to convert the trajectories into listwise samples used for learning the scoring function G(·). Specifically, trajectories are sampled from beam-guided searches with a large beam width B = 50 on a large dataset of 3D printing models. Our learning method consists of four major steps.

• First, we featurize each candidate of clipping to distinguish it from the other candidates. Here the six metrics introduced in Sec. IV-B (i.e., M_1, ..., M_6) are used.

• Second, we build a dataset made up of these features by running the beam-guided algorithm with a vast beam width B = 50. This step is very time-consuming because the large B costs more computational resources.

• Third, we convert the trajectories into listwise samples at every stage of the beam-guided search, which describe the ranking of clipping candidates. Specifically, we traverse the collection of trajectories in descending order with respect to J(τ), and use the selected nodes n^k ∈ τ to construct a set of ranked lists {R^k}. If a node is not contained in any trajectory, it is regarded as worse than all nodes that are contained in some trajectory. If a node n^k was already used to construct R^k from a trajectory τ_A, it will not be used again, to prevent introducing ambiguity. We set the scores [r_1, ..., r_b] of the top-b nodes in R^k as [b, ..., 1] and the scores of the other nodes to zero. The training samples are collected from all stages of the beam-guided search.

• Finally, we use the listwise data {R^k} to train the scoring function G(·) by learning-to-rank.

The resultant scoring function G(·) will be used to evaluate every candidate of clipping in our algorithm. Now we have the dataset consisting of listwise rankings for training.
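The label assignment in the third step can be sketched as follows. This is our own illustration of the scheme, assuming the top-b scores are the descending integers [b, ..., 1]:

```python
# A minimal sketch (our own illustration) of turning a ranked list into
# listwise relevance labels: the top-b nodes of R^k get scores [b, ..., 1];
# every other node gets 0.
def listwise_labels(ranked_node_ids, all_node_ids, b):
    """Map node id -> relevance score for one stage k."""
    labels = {n: 0 for n in all_node_ids}
    for rank, n in enumerate(ranked_node_ids[:b]):
        labels[n] = b - rank          # r_1 = b, r_2 = b-1, ..., r_b = 1
    return labels

# Toy stage: 6 candidates; the ranked order prefers nodes 3, 0, 5, then 1.
labels = listwise_labels([3, 0, 5, 1], all_node_ids=range(6), b=3)
print(labels)  # -> {0: 2, 1: 0, 2: 0, 3: 3, 4: 0, 5: 1}
```

Each such label dictionary, together with the six-metric feature vectors of its nodes, forms one listwise training sample.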
Our goal is to train a scoring model on the listwise dataset to score and rank candidate cuts at each stage of the beam-guided search. Once the scoring model is trained, it can be utilized to replace the original selection strategy used in the beam-guided algorithm.

We use uRank [31] to train G(·), which formulates the ordering of the nodes as selecting the most relevant ones in |R^k| steps. It selects all nodes that have the highest score from a candidate set at each step. To avoid the classic cross-entropy issue raised by the softmax function of ratings in ListNet [26], it adopts multiple softmax operations, each of which targets a single node from the set of nodes matching the ground truth (denoted as c_t, where t corresponds to the step). This method restricts the positive label to appear once in each candidate set, so it only needs to select one node at each step.

The architecture of uRank consists of a neural network with two hidden layers having k_1 and k_2 hidden units respectively. Specifically, we have three trainable matrices W_1 ∈ R^{6×k_1}, W_2 ∈ R^{k_1×k_2}, and W_3 ∈ R^{k_2×1}. Let σ be the activation function; the closed form of G(M) is σ(σ(σ(M W_1) W_2) W_3), with M ∈ R^{B×6} being the input as the six metrics of B nodes. The loss function is defined as follows:

L(G; R^k) = −(1/|R^k|) Σ_{t=1}^{|R^k|−1} (2^{r_t} − 1) Σ_{n ∈ c_t} ln P_t(n)   (6)

where P_t(n) is the likelihood of selecting a node n ∈ c_t at step t. The network architecture of uRank and the selection procedure are shown in Fig. 6.

Fig. 7. We use the results generated by the original beam-guided algorithm with b = 1 as a baseline for comparison, where the vertical axis indicates the reduced percentage of average J (i.e., Eq. (4)) on all test examples. The blue bars indicate the results of using conventional beam search, which are compared with the results of our learning-based method displayed in yellow.

V. TRAINING AND EVALUATION
A. Dataset Preparation and Training
We implemented the proposed pipeline using C++ and Python, and trained the uRank network [31] using TensorFlow [32]. The trained model and source code are publicly accessible. The dataset collection phase is conducted on a high-performance server equipped with two Intel E5-2698 v3 CPUs and 128 GB RAM. All other tests are performed on a PC equipped with an Intel Core i7-4790 CPU, an NVIDIA GeForce RTX 2060 GPU, and 24 GB RAM. We use directions sampled on the Gaussian sphere to evaluate the metrics. The maximal self-supporting angle is set as α_max = 45°.

We trained our model on the Thingi10k dataset [3] repaired by Hu et al. [33]. Instead of training and evaluating on the whole dataset, we extract a subset (with 2,099 models) to ensure every model in the selected dataset has risky faces that can be processed by our plane-based cutting algorithm. The training dataset for our scoring function is built by running the beam-guided search algorithm with B = 50. By the aforementioned sampling methods, we obtain a dataset with 7,961 listwise samples. We split the dataset into 60% samples for training, 15% samples for validation, and 25% for testing. The numbers of hidden units we used are k_1 = 100 and k_2 = 100. In our experiments, early stopping is invoked when no improvement is found after a fixed number of epochs.

B. Evaluation on Accelerated Search
The computing time of the beam-guided search algorithm is significantly influenced by the chosen beam width b. According to our experiments, the average computing time on test models with the conventional beam search grows by multiples (from roughly 2× upward) as the beam width increases beyond b = 2.

TABLE I
STATISTICS ON RANKING PERFORMANCE

Method      NDCG@1  NDCG@2  NDCG@3  NDCG@4  NDCG@5
Ours        0.423   0.455   0.483   0.510   0.532
RankNet     -       -       -       -       -
λ-Rank      -       -       -       -       -

After the training phase is finalized, we use the trained scoring function G(·) to rank the sets of features evaluated for candidate planes in our beam-guided search, and use b = 2 and 5 for evaluation. To make the search procedure insensitive to minor overfitting bias, we always check whether the best result ranked by the simple sort-and-rank module is in the selected beam. We run both the algorithm with the trained model and the original algorithm with different choices of b on the testing dataset (524 models). The statistical results in terms of improvement on the average of J are given in Fig. 7, which shows that we can use a relatively small b with the trained model to achieve performance similar to that generated by a larger b. In other words, the search can be accelerated while the searched results remain comparable to the ones generated using longer computing time. Meanwhile, we can improve the quality of the results produced by the original algorithm if using the trained model to select cuts.

C. Evaluation on Ranking Performance
We compare our method with other classic ranking algorithms used in information retrieval, including another listwise approach, LambdaRank [34], and the pairwise approach RankNet [24]. We use the implementations provided in XGBoost with the parameters {max_depth = 8, number of boosting rounds = 500}. We use NDCG (Normalized Discounted Cumulative Gain) [35] to evaluate the different methods. All experimental results are reported using the NDCG metric at positions 1, 2, 3, 4, and 5 respectively in Table I. The results show that our method achieves the best performance among all approaches.

D. Feature Analysis

1) Feature importance:
Our learning-based decomposition method extracts six features to train a neural network that scores a list of nodes. In this section, we further investigate the learned model by analyzing the features proposed in Sec. IV-B. We use permutation importance [36] to analyze feature importance after our model is trained. It is a widely used estimator of feature relevance in machine learning, which randomly permutes each feature column in the testing data to measure the importance of that feature. Randomizing different features has different effects on the performance of the trained model. For any feature M_j, we compute its importance I_j as follows:

$I_j = s - \frac{1}{K} \sum_{k=1}^{K} s_{k,j}$  (7)

where K is the number of repetitions and s is the evaluation metric, which could be NDCG or any other metric. Here we use NDCG@5 to analyze the feature importance. The experimental results are shown in Fig. 8(a), in which M exhibits the highest importance and M is the least important feature among all the others.

(XGBoost ranking implementation: https://github.com/dmlc/xgboost/tree/master/demo/rank)

Fig. 8. Feature analysis: (a) feature importance generated by the permutation importance method [36], and (b) correlation analysis of the six features M_1, ..., M_6 proposed in Sec. IV-B.

Fig. 9. Two models fabricated by our multi-directional 3D printer (the hardware shown in Fig. 1) without any support structure; they would need tremendous support structures if fabricated on a traditional FDM 3D printer.
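Eq. (7) is straightforward to implement from a trained model's predictions. The sketch below is a pure-Python illustration; the data and the scoring function are invented toy stand-ins for the trained network evaluated with NDCG@5:

```python
import random

def permutation_importance(model_score, X, y, feature_idx, K=10, seed=0):
    """Eq. (7): I_j = s - (1/K) * sum_k s_{k,j}, where s is the score on the
    intact data and s_{k,j} is the score with feature column j shuffled."""
    rng = random.Random(seed)
    baseline = model_score(X, y)
    permuted_scores = []
    for _ in range(K):
        column = [row[feature_idx] for row in X]
        rng.shuffle(column)  # break the link between feature j and the labels
        X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
        permuted_scores.append(model_score(X_perm, y))
    return baseline - sum(permuted_scores) / K

# Toy data: feature 0 predicts the label exactly, feature 1 is constant noise.
X = [[i, 7] for i in range(20)]
y = list(range(20))
# Toy "model score": fraction of rows where feature 0 equals the label.
score = lambda X_, y_: sum(r[0] == t for r, t in zip(X_, y_)) / len(y_)
imp_informative = permutation_importance(score, X, y, feature_idx=0)
imp_noise = permutation_importance(score, X, y, feature_idx=1)
```

Shuffling the informative column destroys most of the score, so `imp_informative` comes out large, while `imp_noise` is exactly zero because the scorer never reads feature 1; this is precisely the contrast the method exploits to rank feature relevance.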
2) Feature correlation:
Correlation is a statistical measure that indicates the relationship between two or more variables. In machine learning, the Pearson correlation [37] is widely used to measure the degree of the linear relationship between variables. Given two variables x and y, their Pearson correlation r_{x,y} is defined as

$r_{x,y} = \frac{\mathrm{cov}(x, y)}{\sqrt{\mathrm{var}(x)} \cdot \sqrt{\mathrm{var}(y)}}$  (8)

where var denotes the variance and cov denotes the covariance. We use the Pearson correlation to build a correlation matrix on the sampled dataset. The heatmap visualization is shown in Fig. 8(b). It shows that the different features used in our approach are not strongly correlated; the most correlated feature pair is M and M.

VI. CONCLUSION
This paper presents an accelerated decomposition algorithm for multi-directional 3D printing that can reduce the need for support structures. The proposed method utilizes learning-to-rank techniques to train a neural network that scores the candidates of clipping. We use the trained scoring function to replace the simple sort-and-rank module in the beam-guided search algorithm. The computing time is reduced to around one third while keeping results of similar quality. The experimental results demonstrate the effectiveness of the proposed method. We provide an easy-to-use Python package and make the source code publicly accessible.
REFERENCES

[1] K. Hu, S. Jin, and C. C. L. Wang, "Support slimming for single material based additive manufacturing," Computer-Aided Design, vol. 65, pp. 1–10, 2015.
[2] C. Wu, C. Dai, G. Fang, Y.-J. Liu, and C. C. L. Wang, "General support-effective decomposition for multi-directional 3-D printing," IEEE Transactions on Automation Science and Engineering, 2019.
[3] Q. Zhou and A. Jacobson, "Thingi10K: A dataset of 10,000 3D-printing models," arXiv preprint arXiv:1605.04797, 2016.
[4] J. Vanek, J. A. Galicia, and B. Benes, "Clever support: Efficient support structure generation for digital fabrication," Computer Graphics Forum, vol. 33, no. 5, pp. 117–125, 2014.
[5] J. Dumas, J. Hergel, and S. Lefebvre, "Bridging the gap: Automated steady scaffoldings for 3D printing," ACM Trans. Graph., vol. 33, no. 4, pp. 98:1–98:10, 2014.
[6] R. Hu, H. Li, H. Zhang, and D. Cohen-Or, "Approximate pyramidal shape decomposition," ACM Trans. Graph., vol. 33, no. 6, pp. 213:1–213:12, 2014.
[7] P. Herholz, W. Matusik, and M. Alexa, "Approximating free-form geometry with height fields for manufacturing," Computer Graphics Forum, vol. 34, no. 2, pp. 239–251, 2015.
[8] W. Gao, Y. Zhang, D. C. Nazzetta, K. Ramani, and R. J. Cipra, "RevoMaker: Enabling multi-directional and functionally-embedded 3D printing using a rotational cuboidal platform," in Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology, 2015, pp. 437–446.
[9] X. Wei, S. Qiu, L. Zhu, R. Feng, Y. Tian, J. Xi, and Y. Zheng, "Toward support-free 3D printing: A skeletal approach for partitioning models," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 10, pp. 2799–2812, 2018.
[10] A. Muntoni, M. Livesu, R. Scateni, A. Sheffer, and D. Panozzo, "Axis-aligned height-field block decomposition of 3D shapes," ACM Trans. Graph., 2018.
[11] P. Urhal, A. Weightman, C. Diver, and P. Bartolo, "Robot assisted additive manufacturing: A review," Robotics and Computer-Integrated Manufacturing, vol. 59, pp. 335–345, 2019.
[12] S. Keating and N. Oxman, "Compound fabrication: A multi-functional robotic platform for digital design and fabrication," Robotics and Computer-Integrated Manufacturing, vol. 29, no. 6, pp. 439–448, 2013.
[13] Y. Pan, C. Zhou, Y. Chen, and J. Partanen, "Multitool and multi-axis computer numerically controlled accumulation for fabricating conformal features on curved surfaces," ASME Journal of Manufacturing Science and Engineering, vol. 136, no. 3, 2014.
[14] H. Peng, R. Wu, S. Marschner, and F. Guimbretière, "On-the-fly print: Incremental printing while modelling," in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2016, pp. 887–896.
[15] R. Wu, H. Peng, F. Guimbretière, and S. Marschner, "Printing arbitrary meshes with a 5DOF wireframe printer," ACM Trans. Graph., vol. 35, no. 4, p. 101, 2016.
[16] Y. Huang, J. Zhang, X. Hu, G. Song, Z. Liu, L. Yu, and L. Liu, "FrameFab: Robotic fabrication of frame shapes," ACM Trans. Graph., vol. 35, no. 6, p. 224, 2016.
[17] C. Dai, C. C. L. Wang, C. Wu, S. Lefebvre, G. Fang, and Y.-J. Liu, "Support-free volume printing by multi-axis motion," ACM Trans. Graph., vol. 37, no. 4, pp. 134:1–134:14, 2018.
[18] A. V. Shembekar, Y. J. Yoon, A. Kanyuck, and S. K. Gupta, "Generating robot trajectories for conformal three-dimensional printing using nonplanar layers," Journal of Computing and Information Science in Engineering, vol. 19, no. 3, p. 031011, 2019.
[19] K. Xu, L. Chen, and K. Tang, "Support-free layered process planning toward 3+2-axis additive manufacturing," IEEE Transactions on Automation Science and Engineering, vol. 16, no. 2, pp. 838–850, 2019.
[20] C. Wu, C. Dai, G. Fang, Y.-J. Liu, and C. C. L. Wang, "RoboFDM: A robotic system for support-free fabrication using FDM," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 1175–1180.
[21] T. Chen, L. Zheng, E. Yan, Z. Jiang, T. Moreau, L. Ceze, C. Guestrin, and A. Krishnamurthy, "Learning to optimize tensor programs," in Advances in Neural Information Processing Systems, 2018, pp. 3389–3400.
[22] A. Adams, K. Ma, L. Anderson, R. Baghdadi, T.-M. Li, M. Gharbi, B. Steiner, S. Johnson, K. Fatahalian, F. Durand, et al., "Learning to optimize Halide with tree search and random programs," ACM Transactions on Graphics, vol. 38, no. 4, pp. 1–12, 2019.
[23] T.-Y. Liu, "Learning to rank for information retrieval," Foundations and Trends in Information Retrieval, vol. 3, no. 3, pp. 225–331, 2009.
[24] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender, "Learning to rank using gradient descent," in Proceedings of the 22nd International Conference on Machine Learning, 2005, pp. 89–96.
[25] C. J. Burges, "From RankNet to LambdaRank to LambdaMART: An overview," Learning, vol. 11, no. 23-581, p. 81, 2010.
[26] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li, "Learning to rank: From pairwise approach to listwise approach," in Proceedings of the 24th International Conference on Machine Learning, 2007, pp. 129–136.
[27] J. Guiver and E. Snelson, "Bayesian inference for Plackett-Luce ranking models," in Proceedings of the 26th Annual International Conference on Machine Learning, 2009, pp. 377–384.
[28] F. Xia, T.-Y. Liu, J. Wang, W. Zhang, and H. Li, "Listwise approach to learning to rank: Theory and algorithm," in Proceedings of the 25th International Conference on Machine Learning, 2008, pp. 1192–1199.
[29] B. T. Lowerre, "The Harpy speech recognition system," Ph.D. dissertation, Carnegie Mellon University, Pittsburgh, PA, USA, 1976.
[30] L. Luo, I. Baran, S. Rusinkiewicz, and W. Matusik, "Chopper: Partitioning models into 3D-printable parts," ACM Trans. Graph., vol. 31, no. 6, pp. 129:1–129:9, 2012.
[31] X. Zhu and D. Klabjan, "Listwise learning to rank by exploring unique ratings," in Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM '20), 2020, pp. 798–806.
[32] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, et al., "TensorFlow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), 2016, pp. 265–283.
[33] Y. Hu, Q. Zhou, X. Gao, A. Jacobson, D. Zorin, and D. Panozzo, "Tetrahedral meshing in the wild," ACM Trans. Graph., vol. 37, no. 4, pp. 60:1–60:14, 2018.
[34] C. J. Burges, R. Ragno, and Q. V. Le, "Learning to rank with nonsmooth cost functions," in Advances in Neural Information Processing Systems, 2007, pp. 193–200.
[35] K. Järvelin and J. Kekäläinen, "Cumulated gain-based evaluation of IR techniques," ACM Transactions on Information Systems, vol. 20, no. 4, pp. 422–446, 2002.
[36] A. Altmann, L. Toloşi, O. Sander, and T. Lengauer, "Permutation importance: A corrected feature importance measure," Bioinformatics, vol. 26, no. 10, pp. 1340–1347, 2010.
[37] D. Freedman, R. Pisani, and R. Purves, Statistics (International Student Edition), W. W. Norton & Company, 2007.