Computational Parquetry: Fabricated Style Transfer with Wood Pixels
Julian Iseringhausen, Michael Weinmann, Weizhen Huang, Matthias B. Hullin
University of Bonn, Germany
Parquetry is the art and craft of decorating a surface with a pattern of differently colored veneers of wood, stone or other materials. Traditionally, the process of designing and making parquetry has been driven by color, using the texture found in real wood only for stylization or as a decorative effect. Here, we introduce a computational pipeline that draws from the rich natural structure of strongly textured real-world veneers as a source of detail in order to approximate a target image as faithfully as possible using a manageable number of parts. This challenge is closely related to the established problems of patch-based image synthesis and stylization in some ways, but fundamentally different in others. Most importantly, the limited availability of resources (any piece of wood can only be used once) turns the relatively simple problem of finding the right piece for the target location into the combinatorial problem of finding optimal parts while avoiding resource collisions. We introduce an algorithm that allows us to efficiently solve an approximation to the problem. It further addresses challenges like gamut mapping, feature characterization and the search for fabricable cuts. We demonstrate the effectiveness of the system by fabricating a selection of pieces of parquetry from different kinds of unstained wood veneer.
The use of differently colored and structured woods and other materials to form inlay and intarsia has been known at least since ancient Roman and Greek times. In the modern interpretation of this principle, pieces of veneer form a continuous thin layer that covers the surface of an object (marquetry or parquetry) [Jackson et al. 1996]. The techniques denoted by these two terms share many similarities but are not identical. Marquetry usually refers to a process similar to "painting by numbers", where a target image is segmented into mostly homogeneous pieces which are then cut from more or less uniformly colored veneer and assembled to form the final ornament or picture. Parquetry, on the other hand, denotes the (ornamental) covering of a surface using a regular geometric arrangement of differently colored pieces. While most artists embrace the grain and texture found in their source materials, they mostly use it as a decorative effect. Nevertheless, the resulting artworks can attain high levels of detail, depending on the amount of labor and care devoted to the task (Figure 2).

To overcome the "posterized" look of existing woodworking techniques, make use of fine-grained wood structures, and obtain results that are properly shaded, we introduce computational parquetry. Our technique can be considered a novel hybrid of both methods and is vitally driven by a computational design process. The goal of computational parquetry is to make deliberate use of the rich structure present in real woods, using heterogeneities such as knots, grain or other texture as a source of detail for recreating more faithful renditions of target images in wood, using a moderate number of pieces, see e.g. Figure 1. Since this goal can only be achieved by exhaustively searching suitable pieces of source material to represent small regions of the target image, the task is intractable to solve by hand. In the computer graphics world, our technique is closely related to patch-based image synthesis [Barnes and Zhang 2017], texture synthesis [Wei et al. 2009] and photo mosaics [Battiato et al. 2006], well-explored families of problems for which a multitude of elaborate and advanced solutions exist today. To our knowledge, none of these solutions are prepared to deal with the fabrication-specific challenges that are inherent to our problem.

Our end-to-end system for fabricated style transfer only uses commonly available real-world materials and can be implemented on hobby-grade hardware (laser cutter and flatbed scanner). To make this new type of computational art accessible to a wide user base, the source code will be made available at https://github.com/isering/WoodPixel .

This work was supported by the German Research Foundation (DFG) under grant (HU-2273/2-1), the X-Rite Chair for Digital Material Appearance, and the ERC starting grant "ECHO". We also gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan Xp GPU that was used in this research.
Authors' address: Julian Iseringhausen, [email protected]; Michael Weinmann, [email protected]; Weizhen Huang, [email protected]; Matthias B. Hullin, [email protected], University of Bonn, Institute of Computer Science II, Bonn, 53115, Germany.
© 2019 Copyright held by the author(s).

Fig. 1. A fabricated piece of wood parquetry, produced using our pipeline. The inputs are a set of six different wood veneers (bottom left corner: poplar burl, walnut burl, santos rosewood, quartersawn zebrawood, olive, fir), and a target image (bottom right corner). The total size of the parquetry is approx. 27 cm × 34 cm. By combining the different appearance profiles (including color and grain structures) of multiple wood types, we are able to produce results with high contrast and fine structural details.

Fig. 2. Two modern examples of marquetry portraits of different complexity. Left: Self-portrait by Laszlo Sandor (using two maple specimens, brown and black walnut, beech, Indian rosewood, okoume and sapele; original size approx. 10 cm × 10 cm). Right: Portrait of a girl by Rob Milam (using wenge, Carpathian elm burl, Honduran rosewood, lauan, pear, plane tree, maple and ash; original size approx. 53 cm × 53 cm).

History of the craft.
History knows a rich tradition of techniques that use patches of material for the purpose of composing images. Ancient Roman and Greek mosaics are probably the best-known early instances of this idea. An exemplary mosaic from the second century AD is shown in Figure 3. Often, such mosaics consist of largely uniformly shaped primitives (e.g., square tiles) that are aligned with important structures, such as object boundaries, found in the target image. A modern counterpart of mosaics is pixel artwork, which has played a similarly ubiquitous role, predominantly through video games in the 80s and 90s. Here, the design pattern is generally aligned with a Cartesian grid.

Marquetry can be considered a generalization of mosaic. This art of forming decorative images by covering object surfaces with fragments of materials such as wood, bone, ivory, mother of pearl or metal has also been known at least since Roman times [Ulrich 2007], see Figure 3. The appearance of the resulting image, however, is mostly dominated by the choice of materials and the shape of fragments. The closely related term parquetry refers to the assembly of wooden pieces to obtain decorative floor coverings. Either technique can be implemented by carving and filling a wood surface (inlay) or by covering the entire surface with a continuous layer of thin veneer pieces. The materials can be altered in appearance, for instance by staining, painting or carving.

In this paper, we use the term parquetry more restrictively to refer to two-dimensional arrangements of wood veneer that are unaltered in color (except for a final layer of clear varnish that is applied to the entire design). While some artists use computational tools, such as posterization, to find image segmentations (Figure 2), we believe that our method marks the first time that a measured texture of the source material has been used to drive the design process, explicitly making use of features present in the wood.
Fig. 3. Examples of intarsia and ancient mosaics: The intarsia from the year 1776 depicts the adoration of St. Theodulf of Trier and a landscape with plowing farmers and St. Theodulf (left). The mosaic from the 2nd century AD depicts a scene from the Odyssey (right).
Stylization. With the goal of non-photorealistic rendering, numerous techniques have been proposed to transform 2D inputs into artistically stylized renderings [Kyprianidis et al. 2013]. This includes approaches for the simulation of different painting media such as paints, charcoal and watercolor [Chen et al. 2015; Lu et al. 2013; Panotopoulou et al. 2018]. In recent years, the potential of deep learning has been revealed for rendering a given content image in different artistic styles [Jing et al. 2017]. Inspired by ancient mosaics and their application in the arts (see e.g. Salvador Dalí's lithograph Lincoln in Dalivision [1991] or Self Portrait I by Chuck Close [1995]), a lot of effort has been spent on non-photorealistic rendering in mosaic style. The original photo mosaic approach [Silvers 1997] creates a mosaic by matching and stitching images from a database. Further work focused on the application to non-rectangular grids and color correction [Finkelstein and Range 1998] and tiles of arbitrary shape (jigsaw image mosaics or puzzle image mosaics) [Di Blasi et al. 2005; Kim and Pellacini 2002; Pavić et al. 2009], the adjustment of the tiles in order to emphasize image features within the resulting mosaic [Battiato et al. 2012; Elber and Wolberg 2003; Hausner 2001; Liu et al. 2010], as well as speed-ups of the involved search process [Blasi and Petralia 2005; Di Blasi et al. 2005; Kang et al. 2011]. More recently, texture mosaics have also been generated with the aid of deep learning techniques (e.g. [Jetchev et al. 2017]). We refer to the respective surveys [Battiato et al. 2006, 2007] for a more detailed discussion of the underlying principles. Furthermore, panoramic image mosaics [Szeliski and Shum 1997] have been introduced, where photos taken from different views are stitched based on correspondences within the individual images and a final image blending.
Fig. 4. The proposed end-to-end pipeline for creating faithful renditions of target images based on exploiting the rich structure present in input wood samples as a source of detail. The involved major steps are data acquisition, cut pattern optimization and the final fabrication of the real-world rendition of the target image.

Example-based synthesis. Pixel-based synthesis techniques [Efros and Leung 1999; Hertzmann et al. 2001; Paget and Longstaff 1998; Wei and Levoy 2000] rely on copying single pixels from an exemplar to the desired output image while matching neighborhood constraints. In contrast, patch-based or stitching-based texture synthesis approaches [Efros and Freeman 2001; Kwatra et al. 2003; Praun et al. 2000] involve copying entire patches from given exemplars. One major challenge of these approaches is the generation of correspondences between locations in the exemplar image and locations in the generated output image in order to copy the locally most suitable patches from the exemplar to the output image. For this purpose, common strategies include arranging patches in raster scan order and subsequently selecting several patch candidates that best fit the already copied patches. As this matching process becomes computationally challenging for larger images, several investigations focused on improving matching efficiency [Ashikhmin 2001; Barnes et al. 2011, 2009, 2010; Datar et al. 2004; He and Sun 2012; Liang et al. 2001; Olonetsky and Avidan 2012; Simakov et al. 2008; Tong et al. 2002; Wei and Levoy 2000]. In addition, finding an adequate composition and blending of the copied patches has been addressed based on simple compositions of irregularly shaped patches [Praun et al. 2000], the blending of overlapping patches within the overlap region [Liang et al. 2001], the specification of seams within the overlap region using dynamic programming or graph cuts [Efros and Freeman 2001; Kwatra et al. 2003], or the application of a weighted averaging for several overlapping regions [Barnes et al. 2009; Simakov et al. 2008; Wexler et al. 2007]. Furthermore, optimization-based techniques [Darabi et al. 2012; Han et al. 2006; Kaspar et al. 2015; Kopf et al. 2007; Kwatra et al. 2005; Portilla and Simoncelli 2000] formulate texture synthesis in terms of an optimization problem which is solved by minimizing an energy function, combining pixel-based and patch-based techniques. The global statistics of source patch usage and arrangement can be guided based on histogram matching [Kopf et al.
2007] or, in the context of general image editing (including tasks such as re-organization of objects in the image, image retargeting, inpainting, and image fusion/composition), based on image statistics, saliency information, semantics, and user constraints [Pritch et al. 2009; Taeg Sang Cho et al. 2008]. The latter applications are related to our work in the sense that they rely on re-arranging patches/regions of the input images, where Cho et al. [2008] even use each patch only once, as is also done in our work. In contrast to these methods, our approach involves a cross-domain analysis between an input target image and images of wooden veneers, where we use the patches of the wooden veneers to compose a stylized version of the target image. Recently, the potential of deep learning has also been demonstrated in the context of optimization-based texture synthesis (see e.g. [Gatys et al. 2015, 2016; Li and Wand 2016a,b]). For a more detailed review, we refer to the surveys provided by Wei et al. [2009] and Barnes and Zhang [2017].

Patch-based synthesis in the real world, as described and performed in this work, is characterized by fundamental constraints that are inherent to the task of parquetry and other forms of real-world collage. Any piece of input material can only be used once, without being stained, scaled, stretched, copied, blended or filtered. Our synthesis algorithm therefore restricts itself to cutting operations and rigid transformations. More importantly, it must keep track of resource use in order to prevent source patches from colliding with each other. On the output side, the cuts must be fabricable, i.e., the individual fragments must be connected (no isolated pixels) and they must not contain overly thin protruding structures. We are not aware of prior work that addressed these specific challenges.
Computational fabrication. Developments in the context of stylized fabrication [Bickel et al. 2018] benefited from the rapid progress in fabrication technology. In the context of 2D arts, the computational fabrication of paintings has been approached with robotic arms that paint strokes for a given input image (e.g. [Deussen et al. 2012; Lindemeier et al. 2013; Tresset and Leymarie 2012]). The fabrication of artistic renditions of images has been approached based on a computational non-photorealistic rendering pipeline, the generation of respective woodblocks and a final woodblock printing process [Panotopoulou et al. 2018]. Further work addressed mosaic rendering using colored paper [Gi et al. 2006], where computational approaches have been used for tile generation and tile arrangement, followed by the generation of colored paper tiles and their placement according to the energy optimization.

Most works in computational fabrication aim at obtaining constant results despite possible variations in the material used. In contrast, we embrace the "personality" of the input and use it to create artworks that are inherently unique.
The main objective of this work is the development of a computational pipeline for creating faithful renditions of a target image I_T from wood samples by exploiting the rich structures in wood as a source of detail. The pipeline devised in this work takes n_samples physical wood samples and a target image as inputs and consists of three major steps: data acquisition, data analysis and cut pattern generation (i.e., tile generation, arrangement, and boundary shape optimization), and the final fabrication of the real-world counterpart (Figure 4).

In the first step, the wood samples are prepared before they can be scanned with a flatbed scanner. This is followed by extracting local features in the input images and by detecting corresponding patches between the source textures and the target image, yielding a stylized, digital wood parquetry of the target image. Finally, the patches are converted to cut instructions (taking into account that the cuts have to be fabricable by a laser cutter), the specified pieces are cut with a laser cutter and assembled to a physical sample of parquetry. We discuss details in the following sections.

Before the scanning can be conducted, we first prepare the wood samples. Whereas thicker veneers can be utilized directly, standard veneers (0.… mm) require additional preparation. The output of this step is a set of source textures I_S = { I_S,i : i ∈ {1, . . . , n_samples} }, one for each physical wood sample.

In order to find patch correspondences between the target image and the source images, we define a suitable representation for textural structures within the individual patches. We densely evaluate texture features using a filter bank consisting of two image filters, an intensity filter and a Sobel edge filter. We have experimented with higher-dimensional filter banks similar to the Leung–Malik filter bank [Leung and Malik 2001] and found that the potential increase in reconstruction quality does not offset the additional computational cost induced by the higher-dimensional feature space.
Applying the filter bank to an image I results in the 2-dimensional feature response maps

F(I) = ( w_intens · F_intens(I), w_edge · F_edge(I) )^⊤,   (1)

where F_x are the particular image filters and w_x ∈ [0, 1] are the feature weights, x ∈ {intens, edge}. The weights allow artistic control over the emphasis on overall intensity matching (w_intens) and fine-scale gradient features (w_edge). Please note that this approach can easily be expanded to different feature vectors, allowing additional artistic control. We increase the probability of finding good matches by taking n_rot rotated versions of the wood source textures into account.

Fig. 5. Top row: For a target image exhibiting low contrast and a bad foreground separation (left), the generated wood puzzle shows the same undesirable effects when histogram matching is discarded (middle left). In contrast, applying histogram matching (right) allows exploiting the whole wood texture gamut, which yields a high contrast at the cost of a strong change in appearance compared to the target. By interpolating between the intensity filter responses obtained with and without histogram matching, we generate a parquetry with medium contrast (middle right). Bottom row: The images depict the intensity filter response of a wooden veneer panel (left) and the respective responses obtained for the target image without histogram matching (middle left) and with histogram matching (right), as well as their interpolation (middle right).
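To make the feature extraction concrete, the two-filter bank of Equation 1 can be sketched in a few lines. This is a minimal NumPy sketch (our reference implementation is in C++/OpenCV); the naive convolution, zero padding and default weights are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def conv2_same(img, k):
    """Naive 'same'-size 2D convolution with zero padding (no SciPy dependency)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    kf = k[::-1, ::-1]  # flip kernel for true convolution
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kf)
    return out

def feature_maps(img, w_intens=0.5, w_edge=0.5):
    """Two-channel feature response F(I) of Eq. 1: weighted intensity
    and Sobel gradient magnitude (the weight defaults are placeholders)."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gx = conv2_same(img, sx)    # horizontal gradient
    gy = conv2_same(img, sx.T)  # vertical gradient
    edge = np.hypot(gx, gy)     # Sobel edge magnitude
    return np.stack([w_intens * img, w_edge * edge])
```

The resulting 2 × H × W array is the per-pixel feature vector that the subsequent template matching compares between target and source patches.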
Source and target textures may exhibit highly different gamuts and filter response distributions, so we apply a histogram matching step in order to achieve a meaningful matching between target and source patches. We use a CDF-based histogram equalization [Russ 2002, ch. 4] to transform the intensity distribution of the target image to that of the available source textures. As the wood samples generally span a smaller gamut than the target image, gamut mapping is of great importance: it allows the target image to be represented by sampling the whole range of available wood patch intensities, so that characteristic image structures can be emphasized. For challenging target images with low foreground contrast or bad foreground separation (Figure 5), we found that the histogram equalization tends to overshoot. We alleviate this by interpolating between equalized and original target intensity,

F′_intens(I_T) = (1 − w_hist) F_intens(I_T) + w_hist F_equalize(I_T, I_S),   (2)

where w_hist denotes the interpolation weight, F_equalize the histogram equalization operator, and I_S the set of all wood textures.

The output of this step is a set of n_samples · n_rot filter responses

F(I_S) = { F(I_S,i, ϕ_j) : i ∈ {1, . . . , n_samples}, j ∈ {1, . . . , n_rot} }   (3)

for the source textures, and one filter response F(I_T) for the target image. We typically used n_rot = 15 source texture rotations for our experiments.
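A compact NumPy sketch of CDF-based matching and the interpolation of Equation 2 (the function names and the quantile-mapping details are our assumptions; the paper only specifies a CDF-based equalization [Russ 2002]):

```python
import numpy as np

def match_histogram(target, source_pixels):
    """Map target intensities so that their CDF matches that of the source textures."""
    t_vals, t_inv = np.unique(target.ravel(), return_inverse=True)
    t_cdf = np.cumsum(np.bincount(t_inv)) / target.size   # CDF at each unique value
    s_sorted = np.sort(source_pixels.ravel())
    s_cdf = np.arange(1, s_sorted.size + 1) / s_sorted.size
    mapped = np.interp(t_cdf, s_cdf, s_sorted)            # inverse-CDF lookup
    return mapped[t_inv].reshape(target.shape)

def blended_intensity(target, source_pixels, w_hist=0.5):
    """Eq. 2: interpolate between original and histogram-matched intensity."""
    return (1.0 - w_hist) * target + w_hist * match_histogram(target, source_pixels)
```

With w_hist = 0 the target is left untouched; with w_hist = 1 its intensities are fully compressed into the gamut spanned by the wood textures.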
After evaluating the filter responses, the next step is to find corresponding patches between target image and source textures. Given a target patch P_T ⊂ I_T consisting of n_P simply connected pixels p_k, we determine a corresponding source patch P_S by a dense template matching using the sum of squared differences between the respective feature maps F:

D_i,j(x) = Σ_{k=1}^{n_P} ( F(P_T(p_k)) − F(I_S,i, ϕ_j(x + p_k)) )²,    P_S = argmin_{i,j,x} D_i,j(x).   (4)

We avoid the multiple usage of already matched veneer sample regions by carrying along a binary mask for each source texture. As more and more wood area is consumed, the probability of finding good patch correspondences decreases as the algorithm advances. The most effective way to remedy this effect is to ensure that a sufficient amount of panel space for each utilized wood type is available at all times during reconstruction. Since this might not always be feasible, we store the target patches P_T in a priority queue, such that salient patches are reconstructed first. The patches are sorted either based on their accumulated image saliency score [Montabone and Soto 2010] or by their distance to the image center, see Figure 6. While the former method assures that salient image regions containing high-frequency features will be reconstructed first (and thus be of potentially higher quality), the latter approach creates a bokeh-like effect, which produces more visually pleasing results under a mild shortage of patches with correct low-frequency (intensity) features.

Fig. 6. Before cut pattern optimization, target patches are sorted either by their accumulated saliency score (left) or by their distance to the image center (right). Patches overlaid with red color are reconstructed first, yellow patches are reconstructed last.
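The collision-avoiding search of Equation 4, combined with a saliency-ordered queue, can be prototyped as follows. This toy sketch covers a single source texture without rotations and uses a brute-force scan; the real system searches all n_samples · n_rot feature maps, and all names here are illustrative:

```python
import heapq
import numpy as np

def best_free_patch(target_patch, source, free_mask):
    """Exhaustive SSD search (cf. Eq. 4) for the best still-unused source window."""
    ph, pw = target_patch.shape
    H, W = source.shape
    best, best_xy = np.inf, None
    for y in range(H - ph + 1):
        for x in range(W - pw + 1):
            if not free_mask[y:y + ph, x:x + pw].all():
                continue  # window collides with an already-used veneer region
            d = np.sum((source[y:y + ph, x:x + pw] - target_patch) ** 2)
            if d < best:
                best, best_xy = d, (y, x)
    return best_xy

def assign_patches(target_patches, source):
    """Match target patches in saliency order, marking consumed source regions.
    target_patches: list of (saliency, patch) pairs."""
    free = np.ones(source.shape, dtype=bool)
    queue = [(-sal, i) for i, (sal, _) in enumerate(target_patches)]
    heapq.heapify(queue)  # most salient patches are matched first
    placements = {}
    while queue:
        _, i = heapq.heappop(queue)
        patch = target_patches[i][1]
        xy = best_free_patch(patch, source, free)
        if xy is not None:
            y, x = xy
            ph, pw = patch.shape
            free[y:y + ph, x:x + pw] = False  # consume this veneer region
            placements[i] = xy
    return placements
```

Because the mask is updated after every assignment, two identical target patches are necessarily placed on disjoint veneer regions, which is the resource-collision constraint discussed above.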
The usage of a priority queue also allows us to reconstruct multiple target images using the same wood panel, either sequentially or interleaved. The latter allows us to reconstruct multiple target images simultaneously by inserting their respective patches into the queue and sorting them by their image saliency score.

Target image regions with less salient features can be represented by larger patches. To exploit this, we implemented an adaptive patch matching step, where a patch is broken into four smaller patches if their combined matching cost is lower than the cost of the larger patch multiplied by a factor w_adaptive. The factor w_adaptive can be used to control the artistic balance between larger and smaller patches. We apply this step n_adaptive (typically 0 to 2) times. At the end of this step, we have covered the target image plane with patches drawn from the source textures I_S.

In the previous section we have defined how, given a set of target image patches, correspondences between the target image and the source textures are determined. In this section, we discuss the segmentation of the target image into patches, which has an enormous influence on the appearance and fabricability of the final piece of computational wood parquetry art.

Ideally, the shapes and placement of the wood patches would be determined using a global scheme that jointly performs template matching and shape optimization. The computational costs for solving such a multi-constrained problem, however, would be prohibitively high. Instead, we explore approaches for decoupling the segmentation step from the matching problem, in order to make both computationally tractable and to enable detailed artistic control.

Our approach is to start with a Cartesian grid that is not aligned to image features. Reconstructions obtained from such grids have a strongly stylized look that resembles pixel art and can be attractive for certain resolutions.
However, it may fail to convey sufficient detail for very coarse grids, in particular when high-contrast areas have to be reconstructed using a single patch and thus a single type of wood. Consequently, we have implemented two different refinement strategies.
A-priori, image-space grid morphing (optional). In order to increase the contrast of the fabricated output, it is beneficial to perform a feature-aligned segmentation of the target image, which leads to better matching performance than regular grids. We have implemented a semi-automatic, real-time grid morphing application that allows the user to generate a curvilinear, morphed grid that follows edge features, preserves fabricability, and reflects the user's artistic intent, see Figure 7.

The first step is to generate the edge features that the morphed grid should follow. To this end, we filter the input image using two edge-preserving filters: we apply the rolling guidance filter [Zhang et al. 2014] to filter out small structures (which should be reconstructed by the template matching, not the overall patch shape) and apply Canny edge detection [Canny 1986] to extract the skeletonized edge image E_RG. To extract higher-frequency edges, we smooth the input image using the bilateral filter [Tomasi and Manduchi 1998] and again use the Canny edge detector to generate an edge image E_bilateral. We combine the edge images using user-editable masks M_RG, M_bilateral to generate the final edge image E that our cuts should follow,

E = max( M_RG ∘ E_RG, M_bilateral ∘ E_bilateral ),   (5)

where ∘ denotes the Hadamard product. Given E, we can compute a scalar potential field that allows us to snap grid vertices to the filtered edges,

P = max( 0, min( D(E), r − D(E) ) ),   (6)

where D(E) is the distance transform of the binary edge image [Felzenszwalb and Huttenlocher 2012] and r is the attraction radius of the edges, which is set to half the grid spacing.

Next, we snap mesh vertices to the edge image by iteratively solving an energy-minimization problem. We model the mesh as a damped spring-mass system, where each of the mesh vertices x_i is connected to its eight neighbors by linear springs, exerting the force

F(x_i) = Σ_{x_j ∈ N(x_i)} m (x_j − x_i) − γ (d/dt) x_i − w ∇P(x_i) = F_spring(x_i) + F_damp(x_i) + F_edge(x_i),   (7)

where m denotes the vertices' mass, γ the damping coefficient, and w the weight of the edge-snapping force term. Here, F_edge forces grid vertices to snap to the edges, while the term F_spring ensures that vertices adhere to a minimum distance from each other, which in turn enforces fabricability of the final grid. We find an equilibrium of Equation 7 using a simple Euler solver.

After the positions of the mesh vertices have been determined, we continue to generate the mesh edges, i.e., the patch boundaries, by fitting cubic Bézier curves to the edge image E. Due to the edge snapping process, some edges do not follow patch outlines but run along patch diagonals, in which case we fit an additional edge and split the respective patches into two triangular ones. In order to generate a smooth appearance, we enforce G2 continuity while filling in the remaining (not fitted) edges. Together, this preserves the pixelized look of the regular grid, while adding an almost 3D-like appearance due to the curved patch boundaries.

Fig. 7. Selected stages of the grid morphing workflow: (a) input image, (b) rolling guidance filter, (c) bilateral filter, (d) potential field, (e) resulting morphed grid. We filter the input image (a) using the rolling guidance filter (b) [Zhang et al. 2014] to extract large-scale image features, as well as the bilateral filter (c) [Tomasi and Manduchi 1998] to extract higher-frequency features. Canny edge detection [Canny 1986] is applied to extract edges, which can be selected using a binary mask. Finally, grid points are snapped to the edges under a fabricability constraint by iteratively solving a mass-spring system with an additional edge-snapping term using a scalar potential field (d). The output is a smooth, curvilinear grid that follows the user-selected image edges (e).
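The damped spring-mass relaxation of Equation 7 can be prototyped as below, integrated with symplectic Euler. The constants, the neighbor lists and the ∇P callback are placeholders, and the springs here have zero rest length for brevity (the actual system keeps vertices at a minimum spacing for fabricability):

```python
import numpy as np

def relax_grid(verts, neighbors, grad_P, m=1.0, gamma=0.5, w=1.0, dt=0.1, steps=400):
    """Damped spring-mass relaxation of Eq. 7 via symplectic Euler.
    verts: (N, 2) vertex positions; neighbors[i]: indices of vertices linked to i;
    grad_P: callable returning the potential gradient ∇P at a position."""
    x = verts.astype(float).copy()
    v = np.zeros_like(x)
    for _ in range(steps):
        f = np.zeros_like(x)
        for i, nbrs in enumerate(neighbors):
            for j in nbrs:
                f[i] += m * (x[j] - x[i])  # F_spring: linear springs to neighbors
            f[i] -= gamma * v[i]           # F_damp: velocity damping
            f[i] -= w * grad_P(x[i])       # F_edge: pull toward image edges
        v += dt * f
        x += dt * v
    return x
```

With the damping term, the system settles into the equilibrium of Equation 7 instead of oscillating indefinitely.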
A-posteriori, cost-based patch refinement (optional). As an alternative approach to increase the visual fidelity of the final wood puzzle, we introduce a refinement step after patch matching. To this end, the initial patch size needs to be increased in order to produce an overlap between neighboring patches. This area of overlap can be used to find optimal cuts according to the target image reproduction cost

Σ_{x,y} ( F(R(x, y)) − F(I_T(x, y)) )²,   (8)

where R denotes the reconstructed wood parquetry image and F denotes the filter response from Section 3.2. Overlapping areas can be shared by either two or four individual source patches. For image regions with only two overlapping patches, we obtain an optimum solution using dynamic programming; for details regarding the implementation of axis-aligned patch merging using dynamic programming, see e.g. [Efros and Freeman 2001]. As we enforce our cuts to be guided by features in the target image, the corresponding local cost c(x, y) for merging two horizontally neighboring patches P_S,{1,2} along pixel x is given by

c(x, y) = Σ_{x′=0}^{x−1} ( F(P_S,1(x′, y)) − F(P_T(x′, y)) )² + Σ_{x′=x}^{n−1} ( F(P_S,2(x′, y)) − F(P_T(x′, y)) )²,   (9)

where P_T ⊂ I_T and n is the size of the overlap. We assign patch P_S,1 to the region left of the cut and P_S,2 to the remaining region. By approaching this problem using dynamic programming, we enforce 6-connectivity of the cut and in turn physical fabricability. Vertically neighboring patches can be aligned in an analogous manner.

In regions where four patches overlap, we have to find two intersecting cuts, one for the horizontal and one for the vertical direction. This prevents cut optimization via dynamic programming; instead, we find an approximate solution by alternating optimizations for one cut direction while keeping the other direction fixed.
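For the two-patch case, the optimal connected cut can be found with a textbook dynamic program over the overlap, in the spirit of [Efros and Freeman 2001]. In this sketch, cost[y, x] is assumed to hold the precomputed per-row cut cost c(x, y) of Equation 9, and the connectivity constraint (at most one column of lateral movement per row) is a simplified stand-in for the paper's fabricability constraint:

```python
import numpy as np

def optimal_vertical_cut(cost):
    """Minimum-cost connected cut through a (rows x columns) cost map via
    dynamic programming: one cut column per row, with consecutive rows
    moving laterally by at most one column."""
    H, W = cost.shape
    acc = cost.astype(float).copy()
    for y in range(1, H):            # accumulate minimal cost row by row
        for x in range(W):
            lo, hi = max(0, x - 1), min(W, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    cut = np.zeros(H, dtype=int)
    cut[-1] = int(np.argmin(acc[-1]))
    for y in range(H - 2, -1, -1):   # backtrack the cheapest connected path
        x = cut[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        cut[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return cut
```

In four-patch regions, this per-direction optimum is applied alternately for the horizontal and the vertical cut.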
We experimentally observed two repetitions of this process to be sufficient.

In order to generate a representation that is laser-cuttable, we fit cubic Bézier curve segments to the cuts, where the user can choose between G0-continuous and G1-continuous curve segments. Finally, the output of this step is a vector graphics file containing cut instructions which can be directly executed by the laser cutter.

In the next step, the optimized, still digital piece of parquetry is physically fabricated. To this end, we use a laser cutter for cutting the veneer boards from the back side and for engraving IDs which facilitate the identification of individual patches during their assembly. For other materials, this step could also be conducted using a CNC mill or a water jet cutter. The patches are separated from the rest of the veneer and laid out in a frame. To fix the patches, we attach a back plate using wood putty. After the putty has dried, we sand the veneers and finish them with clear coat or hard wax oil.
The method was implemented in C++ using the OpenCV library [Bradski 2000] and parallelized with OpenMP. Fitting a single patch typically takes 0.… s. For fabrication, we used a … laser and an Epilog Fusion 40 M2 engraver with a 75 W CO₂ laser.

We begin our evaluation with the analysis of user-controllable design choices in the optimization, such as the effect of different energy terms and of different sizes and shapes of the individual patches. This is followed by an ablation study, where we investigate the gradual decline in quality that occurs when repeatedly producing the same target image from the same wood veneer panel. We further demonstrate a few examples of fabricated parquetry obtained from different woods and under different conditions. Finally, we show the robustness of our method with respect to different target images by presenting synthetic results for different targets, each optimized using the default parameter set.
Symbol      Parameter                                Default
w_intens    Intensity priority weight                0.…
w_edge      Edge priority weight                     0.…
w_hist      Histogram matching weight                0.…
s_image     Reconstructed image size (shorter axis)  360 mm
s_patch     Patch size                               14 mm
n_adaptive  Adaptive patch levels                    0
w_adaptive  Adaptive patch quality factor            1.…

Table 1. User-controllable stylization parameters and their default values.
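To illustrate how the weights w_intens and w_edge enter the patch-matching cost (Equation 4), here is a hedged sketch; the finite-difference gradient magnitude is a simple stand-in for the paper's actual edge filter from Section 3.2:

```python
import numpy as np

def feature_cost(patch, target, w_intens=0.5, w_edge=0.5):
    """Weighted matching cost between a source patch and a target region:
    intensity SSD plus edge-response SSD.  The gradient-magnitude edge
    response below is an illustrative substitute for the paper's filter bank."""
    def edges(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    d_int = np.sum((patch.astype(float) - target.astype(float)) ** 2)
    d_edge = np.sum((edges(patch) - edges(target)) ** 2)
    return w_intens * d_int + w_edge * d_edge
```

Raising w_edge relative to w_intens biases the search toward patches whose grain lines up with target structures such as eyelashes, which matches the behavior reported in Figure 12.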
Our method allows the stylization of the generated renditions of target images based on user guidance. Before discussing the effect of individual user-controllable parameter choices on the style of the generated renditions, we first provide insights regarding the involved physical materials. We found an image of a human eye (Figure 12) to be a good target for quality assessment, because it contains features of different frequencies as well as rounded structures. An overview of the user-controllable parameters related to stylization can be found in Table 1, and a more detailed description in Section 3.
Materials.
For the purpose of better comparability, we generated synthetic renderings using the same scan of a wooden veneer panel as input for all results in this section (unless otherwise noted). The panel has a size of 1500 mm × … .

Histogram matching.
The target image gamut is generally larger than the gamut of the wood textures. Without taking this into account, the template matching step will generally draw patches from the gamut boundaries, which results in reproductions with high contrast but flat shading. By matching the target image histogram to the wood texture histogram, we compress the target image gamut to match the wood textures. This reduces the overall contrast but puts more emphasis on shading nuances; see Figures 5 and 10. We found a simple interpolation between the matched and the unmatched input image to effectively improve contrast while preserving the original style of the image (Figure 5).
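The gamut compression described above might be sketched as CDF-based histogram matching followed by a blend; `alpha` is a hypothetical blending parameter for the interpolation between the matched and unmatched image (a sketch, not the paper's implementation):

```python
import numpy as np

def match_histogram(source, reference):
    """Map the value distribution of `source` (target image) onto that of
    `reference` (wood texture scan) by matching cumulative histograms."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(), return_inverse=True,
                                     return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # source CDF -> reference values
    return mapped[s_idx].reshape(source.shape)

def compress_gamut(target, wood, alpha=0.5):
    """Blend the fully matched image with the original target:
    alpha = 1 gives full histogram matching, alpha = 0 leaves the target as-is."""
    return (1 - alpha) * target + alpha * match_histogram(target, wood)
```

With alpha between 0 and 1, the result keeps some of the original contrast while still fitting inside the wood gamut.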
Patch size.
We evaluated the influence of the patch size on the style of the resulting target image renditions. Figure 8 shows rendered results for different patch sizes ranging from 7.… mm to … mm.

Feature vector weights.
To analyze the effect of differently weighted feature vectors in the template matching step (Equation 4) on the wood puzzle appearance, we show results obtained for various parameter choices in Figure 12. The obtained renditions for the highlighted regions of the eyelid (top row) and the iris (bottom row) show that high weights for the intensity penalty w_intens enforce the matching with respect to the intensity features. Finer structures, such as eyelashes, become better preserved by increasing the penalty w_edge on the edge filter responses.

Boundary shape optimization.
We also show the respective results before and after cut optimization. As demonstrated in Figure 13, the use of square patches on a regular grid results in a pixel-like rendition of the target image. Merging neighboring patches according to the data term reduces the pixelation effect, thereby putting more emphasis onto the underlying image structures. We found that the representation of rounded, high-contrast image features specifically benefits from the dynamic programming step.

Fig. 8. Synthetic. Effect of different resolutions and refinement strategies on the reconstruction quality. We show reconstructions obtained using regular grids (top row), the a-priori grid morphing refinement (second row), the a-posteriori cost-based patch refinement (third row), and a "baseline" where high-frequency features are removed and each patch is replaced by its mean color (bottom row). With decreasing resolution (from left to right), we observe that the structurally aware filters, as well as the refinement schemes, have an increasing influence on reconstruction quality. The reconstruction quality obtained with our proposed technique declines gracefully with patch resolution and still produces visually pleasing results for very coarse patches.

Fig. 9. Scan of the wooden veneer panel used for the results in Section 4. The panel has physical dimensions of … × … and contains veneer samples from various wood types. The fiducial markers facilitate optical calibration on suitably equipped cutting systems.

Fig. 10.
Synthetic
The effect of histogram matching. Without histogram matching (w_hist = 0, left), we obtain a higher contrast. With histogram matching (right), the contrast is reduced, but the shading appears less flat.

Our approach is inherently resource-constrained. Thus, we expect the reconstruction quality to scale with the area of available wood samples. To evaluate this effect, we applied our pipeline several times to generate renditions of the same target image under a decreasing availability (and quality) of source patches. We have evaluated the effect of a sequential reconstruction using a circularly sorted priority queue and an interleaved, simultaneous reconstruction using a queue sorted by image saliency. The respective results are shown in Figure 14. We observe that the reconstruction quality decreases gracefully and the target image stays recognizable until the very last reconstruction. After the last reconstruction (partially) finished, there was no space left on the veneer panel that was large enough for another patch.

We noticed two types of degradation: intensity degradation and high-frequency detail degradation. Most noticeable is the degradation in overall intensity matching after the panel runs out of dark patches (iteration 5). Less noticeable is the degradation of high-frequency content, e.g. around the eyes after iteration 3. We note that the interleaved, simultaneous reconstruction using image saliency produces generally favorable results over the circularly sorted reconstructions. The degradations could be alleviated by reconstructing target images with different intensity distributions or, preferably, by enforcing sufficient resources during reconstruction.

Fig. 11. Synthetic. Effect of different adaptive reconstruction parameters. From left to right: (n_adaptive = …, w_adaptive = …), (n_adaptive = …, w_adaptive = …), (n_adaptive = …, w_adaptive = …). As expected, high-frequency image structures are only touched for large values of w_adaptive (i.e. we accept a large decline in reconstruction quality). Nonetheless, we find the effect to be visually pleasing in all images and subject to personal preferences.

Fig. 12. Synthetic. Effect of intensity vs. edge filter. The highlighted zoom-ins depict the respective reconstructed regions for weights (w_intens, w_edge) = (…, …), (…, …), and (…, …) from left to right. Using only the intensity penalty enforces the stylization to match intensity. Structural details become increasingly well preserved with an increasing weight of the edge term.

Fig. 13. Synthetic. The effect of the boundary shape optimization using dynamic programming. Without dynamic programming (left), the generated rendition of the target image has a pixelized style. With dynamic programming (right), the cuts are optimized according to the underlying data term and the rendition exhibits a smoother, more organic style.
We present exemplary results of physically produced veneer puzzles in Figures 1, 15 and 16. The veneer puzzles in Figures 1 and 15 have been fabricated using multiple wood types. Since different wood types can differ vastly in color and grain structure, these results show a high contrast and perceived resolution. Fine details, such as hair, eyebrows, or eyelashes, are faithfully reproduced.

The results in Figure 16 have each been produced using a different, single wood type. The amount and quality of detail within a pixel is inherently limited to the features present in the original material. Woods with a limited feature gamut thus lead to a strongly stylized outcome, which we imagine could also be utilized as an artistic tool.

We decided to finish most of the pieces using hard wax oil in order to accomplish a natural look. A clear coat finish (Figure 16, right) results in a highly specular appearance.

With row/column labels engraved on the back side, it takes about 1 h to 2 h for a single person to assemble a 500-piece parquetry inside a suitably dimensioned frame. Although somewhat repetitive, the authors found this activity to be satisfying and relaxing. For thin veneers that are laminated onto a plywood substrate, the final image remains hidden until the finished composition is turned around.
In addition to the evaluation of different parameter choices, we show renditions for several target images depicting portraits and animals in Figure 17. To demonstrate the robustness of our approach with respect to different target images, each of these results has been produced using the default parameters shown in Table 1. The depicted results demonstrate the potential of computational parquetry for fine arts. Portraits and animal pictures can be easily recognized, as their characteristic appearance is preserved in the stylized result. Please see the supplemental material for additional results.

Fig. 14. Synthetic. Renditions of a target image generated under a decreasing amount, and quality, of available patches from a single wood sample. Top row: patches are sorted by their distance to the image center and reconstructions are conducted sequentially from left to right, i.e. matching of the second column starts after all patches of the first column have been matched. The last reconstruction did not complete because there were no valid patches left on the wood sample. Bottom row: patches from multiple copies of the same image are sorted by their image saliency score and reconstructions are carried out in an interleaved manner. The first column shows the result for a single reconstructed target image, the second column for two simultaneously reconstructed target images, and so on. For the first four columns, the differences are negligible. With fewer suitable source patches available, the interleaved reconstruction yields higher-quality results compared to the sequential reconstruction. Please zoom in to see image details.

Fig. 15. Fabricated. A fabricated piece of wood parquetry made from four different quarter-cut thick veneers (bottom left corner, from top to bottom: oak, zebrawood, fir, American walnut). The target image is a human eye (bottom right corner). The veneer puzzle consists of … × … wooden pixels and has a total size of approx. 28 cm × 17 cm.

In this work, we have focused on the generation of cut patterns that are fabricable and can easily be assembled even by untrained users, which has led us to a solution based on regularly or semi-regularly placed patches. We would like to note, though, that our pipeline is not inherently restricted to these kinds of segmentations; instead, custom, user-defined segmentations can be provided as
well, e.g. to produce marquetry art comparable to the one presented in Figure 2. Due to the structure-aware patch matching step, our approach could produce art pieces of even higher fidelity than manual matching. However, in this case the complexity of the wood puzzle art strongly depends on the provided segmentation, and the assembly might require a skillful artist to execute. See Figure 18 for two synthetic results using user-defined segmentations with highly varying style and complexity.

Fig. 16. Fabricated. Exemplary results of fabricated parquetries using the same target image (bottom center), but different wood types and finish. The left image was fabricated using zebrawood with an oil finish. The right image was produced using poplar burl veneer with clear coating, resulting in a highly specular appearance with limited contrast. The samples consist of … × … and … × … wooden pixels, respectively, and their physical dimensions are about 15 cm × 15 cm. The left puzzle has optimized patch boundaries; the right puzzle consists of square patches.

Fig. 17. Synthetic. Exemplary synthetic renditions of portraits and animals. Each of these results has been composed using the veneer sample panel shown in Figure 9 and the default parameters listed in Table 1. Our algorithm is able to handle a wide range of input, including color photographs, black-and-white photographs, drawings, and paintings. The images show, from left to right, top to bottom: Grace Hopper, Eileen Collins, Felix Hausdorff, Katherine Johnson, Ludwig van Beethoven, Whoopi Goldberg, Hedy Lamarr, Alan Turing, a piglet, a penguin, a Corgi, and a flamingo.

Fig. 18. Synthetic. Two results using custom segmentations of highly varying complexity. The left reconstruction is based on a hand-drawn segmentation, while the right reconstruction is based on a posterized version of the input image. Depending on the segmentation, cutting and assembly can be complex and might require a skillful artist to execute.

Fig. 19. We envision using deep learning to predict the change in surface finish induced by a layer of oil or clear coat. Being able to do so would alleviate the need for a pre-finishing step prior to texture acquisition. From left to right: input image, surface finish appearance predicted by our preliminary model, ground truth image.

A practical drawback of our method is that it requires a surface finish to be applied to the wood twice: once before scanning and again after the final assembly of the finished puzzle. The first application is important, since this step changes the appearance of the wood samples significantly. For the algorithm, it is crucial to choose suitable patches based on their final appearance. We apply the sanding/finishing procedure a second time in order to flatten out small height variations, which are inevitable after puzzling.

For a large-scale, automatic production of custom wooden parquetry puzzles, we would like to minimize the amount of manual interaction. Thus, we conducted initial experiments on training a model to predict the change of appearance from unfinished to finished veneers. Using these predictions, it might become possible to defer the application of surface finish until after the final puzzle has been assembled. To this end, we trained a U-Net [Ronneberger et al. 2015] on image pairs before and after applying the finish. Based on the preliminary results in Figure 19, we believe that this would be a good direction for future work.

Our approach allowed us to produce visually pleasing pieces of wood parquetry, even without a professional wood-working background. However, we expect that certain technical imprecisions (such as imperfectly applied clear coating) would be mitigated with more experience. We also expect that cut clearances and discolorations will improve with further fine-tuning of the cutting equipment.

Here, we treat wood as a diffuse reflector and ignore any directional effects. Real wood exhibits anisotropic BRDF characteristics, which means that rotating a part could be used to modulate its intensity. This might also enable the generation of new types of puzzles, where a hidden pattern is revealed by the right permutation and rotation of some parts, comparable to the work of Sakurai et al.
[2018]. In our experiments, we restricted ourselves to fabricating parquetry based on wood veneers, since they are commonly available and can be cut using a laser cutter. Generally, our pipeline is not restricted to this type of material. Using a water jet cutter, other materials like marble or brushed metal could be processed as well. The process could also be extended to multi-material parquetry.

Parquetry generation is inherently resource-constrained, and in the scope of our work the amount of available source samples was limited. Having access to a larger database of veneers (either by increasing the number of samples per wood type, or by introducing new wood types) would certainly improve the reconstruction quality. However, since this is an artistic process, reaching the highest reconstruction quality might not always be the goal. Using only a single type of wood, or a selection of wood samples with a particular structure, can lead to equally interesting and fascinating results; see e.g. Figure 16.

When preparing our puzzle for assembly as a game, various degrees of difficulty could be imagined. As all pieces are made from wood, semantic labels are not immediately accessible as they sometimes are in regular puzzles (water, buildings, skin, foliage, sky/clouds, etc.). Given a bag of identically shaped (square) pieces, it would seem extremely challenging to arrive at the one "correct" solution; at the same time, there would be numerous mechanically valid "approximate" solutions, or permutations between sets of similar-looking parts. Here, the irregularly shaped pieces generated by our refinement steps offer welcome visual and tactile cues for assembly.
We approached the fabrication of structure-aware parquetry based on a novel end-to-end pipeline that takes wood samples and a target image as inputs and generates a cut pattern for parquetry puzzles. To the best of our knowledge, there is no prior work that addresses the challenges inherent to producing a physical sample of wood parquetry from minimal input (a target image) using commodity hardware. These challenges include the single use of individual pieces of input material, which cannot be deformed, scaled, blended, or filtered, as well as keeping track of resource use in order to prevent source patches from colliding with each other, while still faithfully reproducing the target image. Practical aspects regarding fabricability have also been taken into account. The varying structural details within the wood samples lead to unique and fascinating artworks, and the design of the overall process allows even users without a particular woodworking background to experience producing pieces of this new type of art.
REFERENCES
Ashikhmin, M. 2001. Synthesizing Natural Textures. In Proc. of the Symposium on Interactive 3D Graphics. ACM, New York, NY, USA, 217–226.
Barnes, C., D. B. Goldman, E. Shechtman, and A. Finkelstein. 2011. The PatchMatch randomized matching algorithm for image manipulation. Commun. ACM 54, 11 (2011), 103–110.
Barnes, C., E. Shechtman, A. Finkelstein, and D. B. Goldman. 2009. PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing. ACM Trans. Graph. 28, 3 (2009), 24:1–24:11.
Barnes, C., E. Shechtman, D. B. Goldman, and A. Finkelstein. 2010. The Generalized Patchmatch Correspondence Algorithm. In Proc. of the European Conference on Computer Vision: Part III. Springer-Verlag, Berlin, Heidelberg, 29–43.
Barnes, C. and F.-L. Zhang. 2017. A survey of the state-of-the-art in patch-based synthesis. Computational Visual Media 3, 1 (2017), 3–20.
Battiato, S., G. D. Blasi, G. Farinella, and G. Gallo. 2006. A Survey of Digital Mosaic Techniques. In Eurographics Italian Chapter Conference. The Eurographics Association.
Battiato, S., G. D. Blasi, G. M. Farinella, and G. Gallo. 2007. Digital Mosaic Frameworks: An Overview. Computer Graphics Forum 26, 4 (2007), 794–812.
Battiato, S., A. Milone, and G. Puglisi. 2012. Artificial Mosaics with Irregular Tiles Based on Gradient Vector Flow. In Computer Vision – ECCV 2012, Workshops and Demonstrations. Springer, Berlin, Heidelberg, 581–588.
Bickel, B., P. Cignoni, L. Malomo, and N. Pietroni. 2018. State of the Art on Stylized Fabrication. Comput. Graph. Forum 37, 6 (2018), 325–342.
Blasi, G. D. and M. Petralia. 2005. Fast Photomosaic. In Poster Proc. of WSCG.
Bradski, G. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools (2000).
C. Close, J. Y. 1995. Recent Paintings. Pace Wildenstein.
Canny, J. 1986. A Computational Approach to Edge Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-8, 6 (1986), 679–698. https://doi.org/10.1109/TPAMI.1986.4767851
Chen, Z., B. Kim, D. Ito, and H. Wang. 2015. Wetbrush: GPU-based 3D Painting Simulation at the Bristle Level. ACM Trans. Graph. 34, 6 (2015), 200:1–200:11.
Dali, S. 1991. The Salvador Dali Museum Collection. Bulfinch Press, Boston.
Darabi, S., E. Shechtman, C. Barnes, D. B. Goldman, and P. Sen. 2012. Image Melding: Combining Inconsistent Images Using Patch-based Synthesis. ACM Trans. Graph. 31, 4 (2012), 82:1–82:10.
Datar, M., N. Immorlica, P. Indyk, and V. S. Mirrokni. 2004. Locality-sensitive Hashing Scheme Based on P-stable Distributions. In Proceedings of the Twentieth Annual Symposium on Computational Geometry. ACM, New York, NY, USA, 253–262.
Deussen, O., T. Lindemeier, S. Pirk, and M. Tautzenberger. 2012. Feedback-guided Stroke Placement for a Painting Machine. In Proceedings of the Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging. Eurographics Association, Goslar, Germany, 25–33.
Di Blasi, G., G. Gallo, and M. Petralia. 2005. Puzzle Image Mosaic. Proc. IASTED/VIIP 2005 (2005).
Efros, A. A. and W. T. Freeman. 2001. Image Quilting for Texture Synthesis and Transfer. Proceedings of SIGGRAPH (2001), 341–346.
Efros, A. A. and T. K. Leung. 1999. Texture synthesis by non-parametric sampling. In Proc. of the IEEE International Conference on Computer Vision, Vol. 2. 1033–1038.
Elber, G. and G. Wolberg. 2003. Rendering traditional mosaics. The Visual Computer.
Theory of Computing 8, 1 (2012), 415–428.
Finkelstein, A. and M. Range. 1998. Image Mosaics. In Proceedings of the International Conference on Electronic Publishing. Springer-Verlag, London, UK, 11–22.
Gatys, L. A., A. S. Ecker, and M. Bethge. 2015. Texture Synthesis Using Convolutional Neural Networks. In Proc. of the International Conference on Neural Information Processing Systems. 262–270.
Gatys, L. A., A. S. Ecker, and M. Bethge. 2016. Image Style Transfer Using Convolutional Neural Networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2414–2423.
Gi, Y. J., Y. S. Park, S. H. Seo, and K. H. Yoon. 2006. Mosaic Rendering Using Colored Paper. In Proceedings of the International Conference on Virtual Reality, Archaeology and Intelligent Cultural Heritage (VAST). Eurographics Association, Aire-la-Ville, Switzerland, 25–30.
Han, J., K. Zhou, L.-Y. Wei, M. Gong, H. Bao, X. Zhang, and B. Guo. 2006. Fast Example-based Surface Texture Synthesis via Discrete Optimization. Vis. Comput. 22, 9 (2006), 918–925.
Hausner, A. 2001. Simulating Decorative Mosaics. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques. ACM, New York, NY, USA, 573–580.
He, K. and J. Sun. 2012. Computing nearest-neighbor fields via Propagation-Assisted KD-Trees. In IEEE Conference on Computer Vision and Pattern Recognition. 111–118.
Hertzmann, A., C. E. Jacobs, N. Oliver, B. Curless, and D. H. Salesin. 2001. Image Analogies. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques. ACM, New York, NY, USA, 327–340.
Jackson, A., D. Day, and S. Jennings. 1996. The Complete Manual of Woodworking. Knopf.
Jetchev, N., U. Bergmann, and C. Seward. 2017. GANosaic: Mosaic Creation with Generative Texture Manifolds. CoRR abs/1712.00269 (2017).
Jing, Y., Y. Yang, Z. Feng, J. Ye, and M. Song. 2017. Neural Style Transfer: A Review. CoRR abs/1705.04058 (2017).
Kang, D., S. Seo, S. Ryoo, and K. Yoon. 2011. A Parallel Framework for Fast Photomosaics. IEICE Transactions on Information and Systems.
Comput. Graph. Forum 34, 2 (2015), 349–359.
Kim, J. and F. Pellacini. 2002. Jigsaw Image Mosaics. ACM Trans. Graph. 21, 3 (2002), 657–664.
Kopf, J., C.-W. Fu, D. Cohen-Or, O. Deussen, D. Lischinski, and T.-T. Wong. 2007. Solid Texture Synthesis from 2D Exemplars. ACM Trans. Graph. 26, 3 (2007).
Kwatra, V., I. Essa, A. Bobick, and N. Kwatra. 2005. Texture Optimization for Example-based Synthesis. ACM Trans. Graph. 24, 3 (2005), 795–802.
Kwatra, V., A. Schödl, I. Essa, G. Turk, and A. Bobick. 2003. Graphcut Textures: Image and Video Synthesis Using Graph Cuts. ACM Trans. Graph. 22, 3 (2003), 277–286.
Kyprianidis, J. E., J. Collomosse, T. Wang, and T. Isenberg. 2013. State of the "Art": A Taxonomy of Artistic Stylization Techniques for Images and Video. IEEE Transactions on Visualization and Computer Graphics 19, 5 (2013), 866–885.
Leung, T. and J. Malik. 2001. Representing and Recognizing the Visual Appearance of Materials using Three-dimensional Textons. International Journal of Computer Vision 43, 1 (2001), 29–44. https://doi.org/10.1023/A:1011126920638
Li, C. and M. Wand. 2016a. Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis. CoRR abs/1601.04589 (2016).
Li, C. and M. Wand. 2016b. Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks. In Proc. of the European Conference on Computer Vision – ECCV 2016, Part III. 702–716.
Liang, L., C. Liu, Y.-Q. Xu, B. Guo, and H.-Y. Shum. 2001. Real-time Texture Synthesis by Patch-based Sampling. ACM Trans. Graph. 20, 3 (2001), 127–150.
Lindemeier, T., S. Pirk, and O. Deussen. 2013. Image Stylization with a Painting Machine Using Semantic Hints. Comput. Graph. 37, 5 (2013), 293–301.
Liu, Y., O. Veksler, and O. Juan. 2010. Generating Classic Mosaics with Graph Cuts. Computer Graphics Forum 29, 8 (2010), 2387–2399.
Lu, J., C. Barnes, S. DiVerdi, and A. Finkelstein. 2013. RealBrush: Painting with Examples of Physical Media. ACM Trans. Graph. 32, 4 (July 2013), 117:1–117:12.
Montabone, S. and A. Soto. 2010. Human Detection Using a Mobile Platform and Novel Features Derived from a Visual Saliency Mechanism. Image and Vision Computing 28, 3 (March 2010), 391–402. https://doi.org/10.1016/j.imavis.2009.06.006
Olonetsky, I. and S. Avidan. 2012. TreeCANN: K-d Tree Coherence Approximate Nearest Neighbor Algorithm. In Proceedings of the European Conference on Computer Vision, Part IV. Springer-Verlag, Berlin, Heidelberg, 602–615.
Paget, R. and I. D. Longstaff. 1998. Texture synthesis via a noncausal nonparametric multiscale Markov random field. IEEE Transactions on Image Processing 7, 6 (1998), 925–931.
Panotopoulou, A., S. Paris, and E. Whiting. 2018. Watercolor Woodblock Printing with Image Analysis. Comput. Graph. Forum 37, 2 (2018), 275–286.
Pavić, D., U. Ceumern, and L. Kobbelt. 2009. GIzMOs: Genuine Image Mosaics with Adaptive Tiling. Computer Graphics Forum 28, 8 (2009), 2244–2254.
Portilla, J. and E. P. Simoncelli. 2000. A Parametric Texture Model Based on Joint Statistics of Complex Wavelet Coefficients. International Journal of Computer Vision 40, 1 (2000), 49–70.
Praun, E., A. Finkelstein, and H. Hoppe. 2000. Lapped Textures. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 465–470.
Pritch, Y., E. Kav-Venaki, and S. Peleg. 2009. Shift-map image editing. In IEEE International Conference on Computer Vision (ICCV). 151–158. https://doi.org/10.1109/ICCV.2009.5459159
Ronneberger, O., P. Fischer, and T. Brox. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. CoRR abs/1505.04597 (2015). arXiv:1505.04597 http://arxiv.org/abs/1505.04597
Russ, J. C. 2002. Image Processing Handbook (4th ed.). CRC Press, Inc., Boca Raton, FL, USA.
Sakurai, K., Y. Dobashi, K. Iwasaki, and T. Nishita. 2018. Fabricating Reflectors for Displaying Multiple Images. ACM Transactions on Graphics (TOG) 37, 4, Article 158 (July 2018), 10 pages. https://doi.org/10.1145/3197517.3201400
Silvers, R. 1997. Photomosaics. Henry Holt and Co., Inc., New York, NY, USA.
Simakov, D., Y. Caspi, E. Shechtman, and M. Irani. 2008. Summarizing visual data using bidirectional similarity. In IEEE Conference on Computer Vision and Pattern Recognition. 1–8.
Szeliski, R. and H.-Y. Shum. 1997. Creating Full View Panoramic Image Mosaics and Environment Maps. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 251–258.
Cho, T. S., M. Butman, S. Avidan, and W. T. Freeman. 2008. The patch transform and its applications to image editing. In IEEE Conference on Computer Vision and Pattern Recognition. 1–8.
Tomasi, C. and R. Manduchi. 1998. Bilateral Filtering for Gray and Color Images. In Sixth International Conference on Computer Vision. 839–846. https://doi.org/10.1109/ICCV.1998.710815
Tong, X., J. Zhang, L. Liu, X. Wang, B. Guo, and H.-Y. Shum. 2002. Synthesis of Bidirectional Texture Functions on Arbitrary Surfaces. ACM Trans. Graph. 21, 3 (July 2002), 665–672.
Tresset, P. A. and F. F. Leymarie. 2012. Sketches by Paul the Robot. In Proceedings of the Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging. Eurographics Association, Goslar, Germany, 17–24.
Ulrich, R. 2007. Roman Woodworking. Yale University Press.
Wei, L.-Y., S. Lefebvre, V. Kwatra, and G. Turk. 2009. State of the art in example-based texture synthesis. In Eurographics 2009, State of the Art Reports (STAR). Eurographics Association, 93–117.
Wei, L.-Y. and M. Levoy. 2000. Fast Texture Synthesis Using Tree-structured Vector Quantization. In Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques. ACM Press/Addison-Wesley Publishing Co., New York, NY, USA, 479–488.
Wexler, Y., E. Shechtman, and M. Irani. 2007. Space-Time Completion of Video. IEEE Trans. Pattern Anal. Mach. Intell. 29, 3 (2007), 463–476.
Zhang, Q., X. Shen, L. Xu, and J. Jia. 2014. Rolling Guidance Filter. In Computer Vision – ECCV 2014.

Attribution of source materials.
• Left Blue Eye (Figure 15): public domain.
• Marquetry Self Portrait (Figure 2): © 2008 Laszlo Sandor, CC BY 4.0.
• Marquetry portrait "Girl 1" (Figure 2): © 2015 Rob Milam, included with permission of the artist.
• Intarsia image, Workshop David Roentgen (Figure 3): 2011, public domain.
• Mosaïque d'Ulysse et les sirènes, Bardo Museum in Tunis (Figure 3): public domain.
• Adult brown tabby cat (Figures 4 and 16): © Tomas Andreopoulos, Pexels license.
• Close-up Photo of Dog Wearing Golden Crown: © rawpixel.com, Pexels license.
• Closeup Photo of Human Eye (Figure 12): © Skitterphoto, CC0 1.0.
• Official presidential transitional photo of then-President-elect Barack Obama (Figure 14): © 2008 The Obama-Biden Transition Project, CC BY 3.0.
• Ludwig van Beethoven, oil on canvas (Figure 1): 1820 Joseph Karl Stieler, public domain.
• Commodore Grace M. Hopper, USNR, official portrait photograph (Figure 17): 1984 Naval History and Heritage Command, public domain.
• STS-93 Commander Eileen M. Collins (Figure 17): 1998 NASA, Robert Markowitz, public domain.
• Felix Hausdorff (Figure 17): 1913–1921 Universitätsbibliothek Bonn, public domain.
• Katherine G. Johnson (Figure 17): 2018 NASA, public domain.
• Ludwig van Beethoven (Figure 17): 1854 Emil Eugen Sachse, public domain.
• Whoopi Goldberg in New York City, protesting California Proposition 8 (Figure 17): © 2008 David Shankbone, CC BY 3.0.
• Hedy Lamarr in "The Heavenly Body" (Figure 17): 1944 employees of MGM, public domain.
• Passport photo of Alan Turing at age 16 (Figure 17): 1928–1929, unknown author, public domain.
• Tiny cute piglet looking at the photographer (Figure 17): 2012 Petr Kratochvil, public domain.
• Penguin (Figure 17): © 2016 Pexels, Pixabay license.
• Adult Brown and White Pembroke Welsh Corgi Near the Body of Water (Figure 17): © Muhannad Alatawi, Pexels license.