Solid Texture Synthesis using Generative Adversarial Networks
Xin Zhao, Jifeng Guo, Lin Wang, Fanqi Li, Junteng Zheng, Bo Yang
Abstract
Solid texture synthesis, as an effective way to extend a 2D texture to a 3D solid texture, exhibits advantages in numerous application domains. However, existing methods generally suffer from synthesis distortion due to the underutilization of texture information. In this paper, we propose a novel neural network-based approach for solid texture synthesis based on generative adversarial networks, namely STS-GAN, in which a generator composed of multi-scale modules learns the internal distribution of the 2D exemplar and extends it to a 3D solid texture. In addition, a discriminator evaluates the similarity between the 2D exemplar and slices of the solid, promoting the generator to synthesize realistic solid textures. Experimental results demonstrate that the proposed method can synthesize high-quality 3D solid textures with visual characteristics similar to the exemplar.
1. Introduction
Texture mapping, as a technique to enhance the visual effect of an object surface, is widely used in graphics applications (Haeberli & Segal, 1993; Earl et al., 2010). It usually requires a planar parameterization for mapping texture attributes to a 3D object. However, for complex textures, finding a good planar parameterization remains a challenge (Kopf et al., 2007). The above is the domain of surface texture. In contrast, solid texture describes textures directly in 3D space without planar parameterization.
Solid texture synthesis attempts to learn a 3D solid texture generation process from a given 2D exemplar, and generates a solid texture that shares textural characteristics with the exemplar. It enables scientists to study 3D internal structures when only 2D examples are available (Turner & Kalidindi, 2016), and aids medical assessment when imaging studies are expensive and may expose the patient to unreasonably high radiation (Li et al., 2016).

Shandong Provincial Key Laboratory of Network Based Intelligent Computing, University of Jinan, Jinan 250022, China. Correspondence to: Lin Wang <[email protected]>, Bo Yang <[email protected]>.

Figure 1. Exemplars of solid textures synthesized with STS-GAN. The left part illustrates a synthesized solid with an anisotropic texture and the right shows the isotropic case.

Figure 2. Solid texture synthesis based on isotropic exemplars. The first column illustrates isotropic exemplars with a resolution of 512 × 512; the synthesized solid generated by STS-GAN is shown in the third column. In addition, four slices of the generated solid across the three orthogonal directions and an oblique direction at a 45° angle are listed in the last part.

As shown in Figure 1, which demonstrates the proposed method, the solid approach raises the dimension of a 2D exemplar and projects its properties onto a 3D object, not only the surface but also the whole volume. In terms of visual effects, a slice taken at random from the generated 3D solid texture shares similar, or ideally the same, properties with the 2D exemplar.

Solid texture synthesis started at an early age with procedural methods (Perlin, 1985; Peachey, 1985), which require only a simple algorithm. Despite their advantages in efficiency and memory, manually tuning parameters for a specific texture is burdensome. In addition, this type of model relies on human experience, while humans focus only on prominent features of a texture, which may result in a lack of detail and a failure to generate solid textures.

Compared with procedural methods, the more recent example-based models (Heeger & Bergen, 1995; Jagnow et al., 2004; Kopf et al., 2007; Chen & Wang, 2010) are able to capture more details of the texture and generate realistic solid textures. By matching strategies such as statistical information or similar neighborhoods, these methods produce solid textures approximating the texture characteristics of the 2D exemplars in the orthogonal directions. However, these methods only take into account some manually extracted features, such as color and morphology, and do not take full advantage of the overall information of the texture, which often results in distortion of the generated solid texture.

Taking advantage of their universal approximation ability, Gutierrez et al. (2018) introduced neural networks to synthesize solid textures; their approach can conceptualize many textures and synthesize objects with good visual results that are at least equivalent to the state-of-the-art approaches. Nevertheless, the feature extraction approach cannot be generalized to the universal set of textural distributions, which remains a hurdle for current research. A question that arises here is: Can we design a model that learns an arbitrary distribution of the 3D solid texture from a 2D exemplar?
Generative Adversarial Nets (GANs) (Goodfellow et al., 2014) have been proven able to capture arbitrary data distributions by adversarial learning and have many success stories in 2D texture synthesis (Bergmann et al., 2017; Shaham et al., 2019). In order to answer the aforementioned question, inspired by GANs, this study proposes the STS-GAN framework to synthesize 3D solid textures from 2D exemplars. In the framework, the generator learns the 2D textural distribution and extends it to 3D objects. The discriminator attempts to distinguish slices of the generated solid from the given exemplar. After adversarial learning, the generative model is able to synthesize 3D solids with textures similar to the respective exemplars.

This work makes the following major contributions.

• For the first time, generative adversarial nets are introduced into solid texture synthesis.
• A multi-scale generative model is proposed to synthesize solid textural details at multiple scales.
• A discriminative model with a pre-training module is adopted to accelerate convergence.

The rest of this paper presents an overview of the work related to solid texture synthesis in Section 2, describes our approach in detail in Section 3, and provides a demonstration of experimental results for various textures in Section 4.
Figure 3.
The whole framework of STS-GAN. In particular, the generated solid texture v has a specified size (N × N × N), where N can be set by the user. The 2D patches are randomly cropped from the 2D exemplar, and the 2D slices are randomly selected from the synthesized 3D solid along multiple orthogonal directions D.
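Extracting random axis-aligned slices from a generated solid, as described above, can be sketched in PyTorch (the function name and interface are assumptions, not the paper's code):

```python
import torch

def random_orthogonal_slices(solid, num_slices, directions=(0, 1, 2)):
    """Extract random axis-aligned 2D slices from a 3D solid texture.

    solid: tensor of shape (C, N, N, N); returns (num_slices, C, N, N).
    `directions` selects which spatial axes may be sliced, which allows
    restricting supervision to particular orthogonal directions.
    """
    n = solid.shape[1]
    slices = []
    for _ in range(num_slices):
        d = directions[torch.randint(len(directions), (1,)).item()]
        idx = torch.randint(n, (1,)).item()
        # index along spatial axis d (axis d+1 of the (C, D, H, W) tensor)
        slices.append(solid.select(dim=d + 1, index=idx))
    return torch.stack(slices)
```

These slices play the role of the fake samples v_{d,n} fed to the discriminator, while randomly cropped exemplar patches serve as the real samples.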
2. Related works
In computer graphics, texture is an important element for enriching the details of a 3D object surface. Catmull (1974) first introduced the concept of texture into computer graphics. He defined texture mapping as a method that applies texture attributes to a 3D object by using a planar parameterization. Compared with solid texture synthesis, these methods express only the surface details of 3D objects, acting like paintings, and cannot reconstruct the internal part. In addition, texture mapping cannot deal with the anisotropic textures that are widespread in reality.

Plenty of methods have been proposed to synthesize solid textures. Among them, procedural methods (Perlin, 1985; Peachey, 1985) were the earliest family, with the advantage of low computational cost. They synthesize textures by using a function of pixel coordinates and a set of manually tuned parameters. Perlin noise (Perlin, 1985), perhaps the most widely used one, is a smooth gradient-noise function used to create pseudo-random patterns by perturbing mathematical functions. The synthesized solid texture exhibits random spatial consistency. Nevertheless, finding a suitable set of parameters for a given image requires tedious trial and error. Furthermore, the semantic gap prevents people from associating concepts, like marble or gravel, with accurate parameters.

Compared with procedural methods, example-based models are able to generate realistic solid textures because they extract more details of textures from exemplars instead of relying on an accurate description. The pyramid histogram matching method (Heeger & Bergen, 1995) pioneered the work on solid texture synthesis from 2D exemplars. It matches the texture appearance of a given digitized sample by reproducing global statistics and further produces a solid texture. Based on a spectral analysis of a 2D texture (digitized image) of various types, Ghazanfarpour and Dischler (1995) presented a solid texture generation method.
Jagnow et al. (2004) proposed a solid texture synthesis method based on stereological techniques, which effectively preserves the structure of the texture. However, the color of the generated solids differs from the given exemplar due to segmentation. Wei (2002) first applied a 2D neighborhood-matching synthesis method to solid texture synthesis. Kopf et al. (2007) extended 2D texture optimization techniques (Kwatra et al., 2005; Wexler et al., 2007) to synthesize 3D solid textures. In this method, histogram matching forces the global statistics of the synthesized solid to match those of the exemplars. Chen and Wang (2010) integrated position and index histogram matching into the optimization framework with the k-coherence search, which effectively improves the quality of the synthesized solids. Although these methods can produce impressive results, they only take into account some statistical information, such as color and morphology, and do not take full advantage of the overall information of the exemplars.

Gutierrez et al. (2018) introduced convolutional neural networks to synthesize solid textures of arbitrary size that reconstruct the visual features of the exemplars along certain directions. In their work, an image descriptor based on a pre-trained network can conceptualize various textures to synthesize solids with good visual results. However, this method cannot be generalized to the universal set of textural distributions.
Figure 4.
The structure of the solid texture generator. In particular, the upsampling in this model consists of nearest-neighbor interpolation followed by convolution.
3. STS-GAN
Generating a solid texture from a single exemplar may suffer from a shortage of information, resulting in a failure of learning. Thus, inspired by (Bergmann et al., 2017), STS-GAN randomly crops patches from the exemplar as real samples for the discriminator. Based on the assumption that if the 2D exemplar from a real 3D solid is similar to the slices from the synthesized 3D solid, then the synthesized 3D solid is equivalent to the real one, orthogonal slices of the generated solid are fed into the discriminator as fake samples (Gutierrez et al., 2018) to correlate the 2D exemplar with the 3D solid texture. Importantly, this paper designs a multi-scale generator to synthesize solid textural details at different scales. All generated solids are then concatenated up to the desired size.

As shown in Figure 3, the proposed STS-GAN consists of two parts: the solid texture generator (STG) and the slice texture discriminator (STD). First, a group of multi-scale noises Z_s is fed into the solid texture generator, which is responsible for reconstructing a 3D solid texture v. Then a set of slices v_{d,n} from the generated solid texture and a batch of image patches u_i from the 2D exemplar are distinguished by the slice texture discriminator. Specifically, the slicing direction d can be controlled, which enables the generator to learn different textures in the orthogonal directions. As a result, the model is able to deal with anisotropic solid textures. During adversarial learning, the STG devotes itself to generating 3D solid textures whose slices can confuse the STD, while the STD learns to discriminate slices of the generated solid texture from the given exemplar as accurately as possible. After learning, the optimal generator is able to reconstruct realistic 3D solid textures.

The generative model in STS-GAN adopts the idea of multi-scale synthesis. Notably, the generated multi-scale solids that have not yet reached the specified size, named temporary solids, are denoted TS. As shown in Figure 4, the model contains K different scales. The multi-scale noises are first processed by a fixed convolution block to form temporary solids at different lower scales. In order to concatenate them, the lower-scale solid needs to be expanded by upsampling. For example, after obtaining a temporary solid from Z_k, it must be expanded to the same size as the temporary solid collected from Z_{k-1}. Next, two temporary solids of the same scale are concatenated along the channel dimension C. Finally, the fused temporary solid is processed by a final convolutional block to obtain a 3D solid of the specified size (N × N × N). In the rest of this subsection, we introduce these operations in detail.

Figure 5.
The structure of the slice texture discriminator. Notably, the size N of its input images is the same as that of the 3D solid synthesized by the STG.

Convolutional block
To refine textures at different scales, multiple layers or kernels are designed in the convolutional blocks. In addition, batch normalization (BN) (Ioffe & Szegedy, 2015) and leaky ReLU are added to accelerate STG training.
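Such a block can be sketched in PyTorch as follows (layer count, kernel size, and leaky-ReLU slope are illustrative assumptions; the paper only specifies the Conv3d + BN + leaky-ReLU pattern):

```python
import torch
from torch import nn

def conv_block_3d(in_ch, out_ch, num_layers=2, kernel_size=3):
    """A stack of 3D convolutions, each followed by batch
    normalization and a leaky ReLU, as used in the STG blocks."""
    layers = []
    ch = in_ch
    for _ in range(num_layers):
        layers += [
            nn.Conv3d(ch, out_ch, kernel_size, padding=kernel_size // 2),
            nn.BatchNorm3d(out_ch),
            nn.LeakyReLU(0.2, inplace=True),
        ]
        ch = out_ch
    return nn.Sequential(*layers)
```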
Upsampling
Before concatenating solids at the same scale, the one at the lower scale must be extended to the larger scale. Inspired by Odena et al. (2016), we adopt nearest-neighbor interpolation followed by convolution for upscaling. In the interpolation, a 3D nearest-neighbor scheme is used, and the lower-scale solid grows by a factor of 8 in voxels.

Figure 6. The performance comparison of the model with single-scale and multi-scale noise.
Final convolutional block
Lastly, the final convolutional block is introduced to map the temporary solid channels to the standard number, i.e., 3 channels.
Figure 5 details the slice texture discriminator. Inspired by (Bergmann et al., 2017), the STD takes an image as input and outputs a two-dimensional field. Each position of the two-dimensional field responds only to a local effective receptive field. The STD score is the mean of the two-dimensional field.
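This patch-based design can be sketched as follows (channel widths and layer counts are illustrative assumptions; only the fully-convolutional structure, the 2D score field, and the mean reduction follow the text):

```python
import torch
from torch import nn

class SliceDiscriminator(nn.Module):
    """Fully-convolutional discriminator whose output is a 2D field
    of local scores; each position sees only a local receptive field,
    and the final score is the mean over the field."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(
            # strided convolutions replace pooling for downsampling
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),  # one score per position
        )

    def forward(self, x):
        field = self.features(x)           # (B, 1, H', W')
        return field.mean(dim=(1, 2, 3))   # one scalar score per image
```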
Pre-training module
Since training a model from scratch takes a huge number of iterations, a pre-training module is used in this paper to accelerate convergence. Previous studies have shown that using the VGG model (Simonyan & Zisserman, 2014) as a pre-training module achieves success in texture synthesis. In particular, after being trained with massive image data from ImageNet (Deng et al., 2009), VGG-19 possesses strong generalization capability. Therefore, this study adopts it as the pre-training module.
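Fixing the pre-training module during STD training can be sketched as follows; the commented VGG-19 line assumes torchvision is available and downloads ImageNet weights, and the `[:7]` cut point is an assumption about where the first few convolutional layers end:

```python
from torch import nn

def freeze(module: nn.Module) -> nn.Module:
    """Fix all parameters of a pretrained module so that it acts as a
    constant feature extractor during discriminator training."""
    for p in module.parameters():
        p.requires_grad = False
    module.eval()  # also freeze batch-norm statistics, if any
    return module

# Usage sketch (downloads ImageNet weights; assumes torchvision):
#   from torchvision.models import vgg19
#   pretrain = freeze(vgg19(pretrained=True).features[:7])
```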
Downsampling
Inspired by DCGAN (Radford et al., 2015), the STD uses convolutions with a stride of 2 to replace the pooling layers, which allows it to extract features efficiently.
In adversarial learning, the STG and STD are trained alternately. The goal of the STG is to generate realistic-looking textures that cannot be identified as fake by the STD, while the STD attempts to improve its discriminative ability to prevent the STG from passing off reconstructed textures as real. In this paper, the WGAN-GP loss (Gulrajani et al., 2017) is adopted because it increases training stability. When the discriminator is optimized, the loss computed by Eq. (1) is minimized:

L_D = E[D(v)] − E[D(u)] + λ E[(‖∇_r D(r)‖₂ − 1)²]   (1)

where v is a random orthogonal 2D slice from the 3D solid produced by the STG, u is a random patch from the exemplar, and r is a data point uniformly sampled along the straight line connecting v and u. After the STD possesses a good discriminative ability, the STG can be trained. Similarly, it is optimized by minimizing the generator loss function expressed by Eq. (2):

L_G = −E[D(v)]   (2)

Figure 7. The performance of STS-GAN for texture mapping on 3D mesh models. Source: the 3D mesh models come from the Stanford 3D Scanning Repository.
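The gradient-penalty term of the WGAN-GP critic loss can be sketched as follows (a minimal PyTorch sketch; the function name is an assumption, and λ = 10 is the default suggested by Gulrajani et al. (2017)):

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """WGAN-GP penalty: sample r uniformly on segments between real
    and fake samples and push the gradient norm of D at r towards 1."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    r = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = D(r)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=r, create_graph=True)
    norms = grads.flatten(1).norm(2, dim=1)
    return lam * ((norms - 1) ** 2).mean()
```

`create_graph=True` keeps the penalty differentiable so it can be backpropagated when optimizing the discriminator.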
4. Experiment
In this paper, the solid texture generator contains 3 different scales, i.e., K = 3, while the multi-scale noises obey a normal distribution. Each convolutional block consists of three 3D convolution layers with 3 × 3 × 3 kernels. In the slice texture discriminator, the first three convolutional blocks of VGG-19 are used as the pre-training module and are fixed during training.

Figure 8. The performance comparison of the two models, with and without the pre-training module in the STD, at different iterations.

The framework is implemented in PyTorch. Training is performed with the Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.0005. In addition, this paper adopts a learning rate decay strategy in which the learning rate is halved every 8000 iterations. The batch sizes of the STG and STD are 1 and 32, respectively. All these parameters were tuned by trial and error. With the above settings, training a texture model (i.e., on patches of 128 × 128 resolution) takes about 20 hours on one Nvidia GeForce TITAN RTX GPU.
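The optimizer settings reported here can be sketched with PyTorch's built-in StepLR scheduler (the Linear module is a stand-in for the actual networks):

```python
import torch
from torch import nn, optim

# Settings from the text: Adam with lr = 5e-4, halved every 8000 iterations.
model = nn.Linear(8, 8)
optimizer = optim.Adam(model.parameters(), lr=5e-4)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=8000, gamma=0.5)

# In the training loop, call optimizer.step() then scheduler.step()
# once per iteration.
```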
To demonstrate the advantage of the multi-scale generator in learning the textural distribution, the STG with multi-scale noise is compared with one fed single-scale noise. Both are used to generate a 3D solid from the same exemplar. Figure 6 illustrates the results synthesized by the two models. Compared with the single-scale model, the multi-scale model generates a higher-quality solid texture with sufficient information: it learns not only the global properties of the texture distribution but also the local details. Moreover, the 3D solid synthesized by the multi-scale model exhibits uniformity and consistency across iterations, revealing its stability in learning the textural distribution.
To show that the pre-training module improves learning capability, models with and without the pre-training module are tested on the same 2D exemplar. As shown in Figure 8, the 3D solid generated by the model with the pre-training module shows high consistency at each iteration, while the one without it exhibits volatility to some extent. In particular, the first solid generated by the latter is inconsistent with the 2D exemplar because its discriminator starts from random parameters, which is not conducive to the generator's learning. The results reveal that the STD with the pre-training module accelerates convergence and aids the STG in producing realistic solid textures.
Figure 9.
The interior texture, in different views, of the 3D solids synthesized by STS-GAN using the isotropic and anisotropic exemplars. To improve the display effect, this figure only shows the structure of one phase.
Figure 10.
The synthesis results of STS-GAN with two constrained directions. The last part illustrates slices of the synthetic solid in different orthogonal directions.
Figure 11.
Results of STS-GAN on different anisotropic exemplars. In this setting, two 2D images in different directions are required.
Compared with texture mapping, solid texture synthesis also reconstructs texture for the entire volume. In this experiment, STS-GAN is applied to texture mapping using the synthesized solid. The pixels on the surface of the 3D mesh model are assigned from the synthesized solid based on their spatial coordinates.

Figure 7 presents the results of STS-GAN for texture mapping. In terms of appearance, the texture on the 3D mesh model is in accordance with the 2D exemplar, which shows that the texture generated by STS-GAN remains regular even in irregular space. In addition, the generated solid texture can be repeatedly applied to different 3D mesh models.
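Assigning surface pixels from the solid by spatial coordinates can be sketched with a nearest-voxel lookup (an illustrative simplification; the paper does not specify the interpolation scheme, and the function name is an assumption):

```python
import torch

def sample_solid(solid, coords):
    """Assign colors to surface points by indexing the solid texture
    at their normalized 3D coordinates (nearest-voxel lookup).

    solid: (C, N, N, N); coords: (M, 3) in [0, 1]; returns (M, C).
    """
    n = solid.shape[1]
    # map normalized coordinates to the nearest voxel index
    idx = (coords.clamp(0, 1) * (n - 1)).round().long()
    return solid[:, idx[:, 0], idx[:, 1], idx[:, 2]].t()
```

In practice, the `coords` would be the mesh-surface positions normalized into the solid's bounding box; trilinear interpolation could replace the nearest-voxel lookup for smoother results.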
In this experiment, the proposed STS-GAN is evaluated on different isotropic exemplars. As shown in Figure 2, the generated 3D solids are similar to the given 2D exemplars and patches. Besides, slices of the 3D solid at different angles still keep the same textural characteristics as the exemplar. To further demonstrate the spatial consistency of the generated solids, the interior texture in different views is also illustrated in Figure 9 (top case). The interior texture exhibits high spatial consistency, with no texture discontinuity even in particular views. This phenomenon reveals that STS-GAN has learned an accurate texture distribution and effectively extended it to 3D solids.
Since STS-GAN learns the textural distribution from 2D exemplars, it can generate 3D solid textures with anisotropic characteristics when textures in different directions are given. In this experiment, based on the training configuration shown in Figure 11, STS-GAN is trained using two anisotropic exemplars across three orthogonal directions. As shown in Figure 11, despite subtle color deviations, the synthesized texture solids are highly consistent with the two anisotropic exemplars. In addition, the interior texture in different views is also illustrated in Figure 9. The properties of the anisotropic texture persist and show continuity at different angles. These results indicate that STS-GAN can learn an anisotropic texture and extend it into a 3D solid.
Since STS-GAN synthesizes the solid texture by learning the textural distribution from 2D exemplars, this experiment verifies its reasoning ability along directions with no constraint. As shown in Figure 10, the constraint acts in two directions. From the results, the generated solid exhibits consistency in both the constrained and unconstrained directions; interestingly, there is no texture discontinuity between them. These results indicate that the proposed method does possess reasoning ability in the unconstrained direction. It learns the distribution, not just the features, which allows it to extrapolate more reasonably and regularly.
Figure 12.
Comparison between STS-GAN and the existing methods. The first row shows a simple texture and the last two rows show complex textures.
In this experiment, STS-GAN is compared with the three existing methods that appear to produce the best results: Kopf et al. (2007), Chen and Wang (2010), and Gutierrez et al. (2018). Figure 12 shows results taken from those papers side by side with results from STS-GAN. For a texture with a single structure (first row), STS-GAN captures more texture details, i.e., high-frequency information, than Chen and Wang. For textures with complex structures (second row), the proposed method, compared with the other three methods, shows a strong ability to preserve texture structure and richness. Nevertheless, for the last texture, the solid texture generated by STS-GAN contains some duplicated texture structures. This phenomenon implies that STS-GAN has difficulty learning complex textural information when the exemplar is small. The reason may be that the patches cropped from the 2D exemplar contain only local information.
5. Conclusion
In this paper, a novel approach, STS-GAN, is proposed to synthesize 3D solid textures from 2D exemplars. It is the first time that generative adversarial nets have been introduced into the field of solid texture synthesis. The method uses a multi-scale generative model to generate details at multiple scales. Moreover, a pre-training module is adopted in the discriminator to accelerate convergence. Significantly, our method can reconstruct realistic solid textures from a given exemplar.

This work is a first attempt to open a brand-new gate, associating solid texture synthesis with the powerful generative adversarial net. However, there is still room for improvement. One limitation is that the model is difficult to train when the exemplar is small, because of the lack of information. The time-consuming nature of neural networks also affects real applications.

In future work, in order to further improve the discriminator, multi-scale patches from the exemplar and random-angle slices from the synthesized textures are also required to augment the data. Furthermore, the generation process also needs to be accelerated to meet real-time requirements in practical applications.
References
Bergmann, U., Jetchev, N., and Vollgraf, R. Learning texture manifolds with the periodic spatial GAN. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 469–477, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr.press/v70/bergmann17a.html.

Catmull, E. A subdivision algorithm for computer display of curved surfaces. Technical report, University of Utah, School of Computing, 1974.

Chen, J. and Wang, B. High quality solid texture synthesis using position and index histogram matching. The Visual Computer, 26(4):253–262, 2010.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Earl, G., Martinez, K., and Malzbender, T. Archaeological applications of polynomial texture mapping: analysis, conservation and representation. Journal of Archaeological Science, 37(8):2040–2050, 2010.

Ghazanfarpour, D. and Dischler, J.-M. Spectral analysis for automatic 3-d texture generation. Computers & Graphics, 19(3):413–422, 1995.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems, volume 27, pp. 2672–2680. Curran Associates, Inc., 2014. URL https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf.

Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., and Courville, A. Improved training of Wasserstein GANs. arXiv preprint arXiv:1704.00028, 2017.

Gutierrez, J., Rabin, J., Galerne, B., and Hurtut, T. On demand solid texture synthesis using deep 3D networks. Working paper or preprint, December 2018. URL https://hal.archives-ouvertes.fr/hal-01678122.

Haeberli, P. and Segal, M. Texture mapping as a fundamental drawing primitive. In Fourth Eurographics Workshop on Rendering, volume 259, pp. 266. Citeseer, 1993.

Heeger, D. J. and Bergen, J. R. Pyramid-based texture analysis/synthesis. In Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 229–238, 1995.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, pp. 448–456. PMLR, 2015.

Jagnow, R., Dorsey, J., and Rushmeier, H. Stereological techniques for solid textures. ACM Transactions on Graphics (TOG), 23(3):329–335, 2004.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Kopf, J., Fu, C.-W., Cohen-Or, D., Deussen, O., Lischinski, D., and Wong, T.-T. Solid texture synthesis from 2d exemplars. In ACM SIGGRAPH 2007 Papers, pp. 2–es, 2007.

Kwatra, V., Essa, I., Bobick, A., and Kwatra, N. Texture optimization for example-based synthesis. In ACM SIGGRAPH 2005 Papers, pp. 795–802, 2005.

Li, Z., Desolneux, A., Muller, S., and Carton, A.-K. A novel 3d stochastic solid breast texture model for x-ray breast imaging. In International Workshop on Breast Imaging, pp. 660–667. Springer, 2016.

Odena, A., Dumoulin, V., and Olah, C. Deconvolution and checkerboard artifacts. Distill, 1(10):e3, 2016.

Peachey, D. R. Solid texturing of complex surfaces. In Proceedings of the 12th Annual Conference on Computer Graphics and Interactive Techniques, pp. 279–286, 1985.

Perlin, K. An image synthesizer. ACM Siggraph Computer Graphics, 19(3):287–296, 1985.

Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Shaham, T. R., Dekel, T., and Michaeli, T. SinGAN: Learning a generative model from a single natural image. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.

Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Turner, D. M. and Kalidindi, S. R. Statistical construction of 3-d microstructures from 2-d exemplars collected on oblique sections. Acta Materialia, 102:136–148, 2016. ISSN 1359-6454. doi: 10.1016/j.actamat.2015.09.011.

Wei, L.-Y. Texture Synthesis by Fixed Neighborhood Searching. PhD thesis, Stanford University, Stanford, CA, USA, 2002. AAI3038169.

Wexler, Y., Shechtman, E., and Irani, M. Space-time completion of video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3), 2007.