A Flexible Neural Renderer for Material Visualization

AAKASH KT, IIIT Hyderabad
PARIKSHIT SAKURIKAR, IIIT Hyderabad / DreamVu Inc.
SAURABH SAINI, IIIT Hyderabad
P. J. NARAYANAN, IIIT Hyderabad
Fig. 1. Shaderball visualizations of four selected materials produced by our network are shown at the bottom, for two lighting conditions: Left - Daylight and Right -
Photo realism in computer generated imagery is crucially dependent on how well an artist is able to recreate real-world materials in the scene. The workflow for material modeling and editing typically involves manual tweaking of material parameters and uses a standard path tracing engine for visual feedback. A lot of time may be spent in iterative selection and rendering of materials at an appropriate quality. In this work, we propose a convolutional neural network based workflow which quickly generates high-quality ray-traced material visualizations on a shaderball. Our novel architecture renders spatially-varying materials and provides control over environment lighting, which gives an artist more freedom, provides better visualization of the rendered material, and assists material selection. Comparison with state-of-the-art denoising and neural rendering techniques suggests that our neural renderer performs faster and better. We provide an interactive visualization tool and release our training dataset to foster further research in this area.

CCS Concepts: • Computing methodologies → Rendering; Ray tracing.

Additional Key Words and Phrases: Ray Tracing, Global Illumination, Deep Learning, Neural Rendering
Authors' addresses: Aakash KT, CVIT, KCIS, IIIT Hyderabad, [email protected]; Parikshit Sakurikar, CVIT, KCIS, IIIT Hyderabad / DreamVu Inc., [email protected]; Saurabh Saini, CVIT, KCIS, IIIT Hyderabad, [email protected]; P. J. Narayanan, CVIT, KCIS, IIIT Hyderabad, [email protected].
1 INTRODUCTION

Ray tracing has emerged as the industry standard for creating photorealistic images and visual effects [Keller et al. 2015]. Accurate modeling of the behavior and traversal of light, along with physically accurate material modeling, creates the beautiful computer generated imagery we see today. Achieving photo realism in such renderings is, however, a tedious process. Ray tracing is a computationally expensive operation, while physically accurate material modeling requires expertise in fine-tuning of parameters to achieve the desired look. Visualization of edits during fine-tuning is very time consuming if the target image is ray-traced. An artist might thereby end up spending a lot of time in a slow and iterative visualization loop.

In this paper, we present a neural network architecture that can quickly output a high-quality ray-traced visualization of a material. Our work extends the state-of-the-art in material rendering by providing the ability to deal with a large range of uniform as well as spatially-varying materials, along with control over the environment lighting. We render on a fixed shaderball geometry which is complex enough to encode fine interactions between light and the underlying material.

We evaluate our method quantitatively and also compare qualitative render quality with existing neural rendering frameworks. We also conduct a user study to show the benefit of providing control over lighting for material selection. We show that our proposed system is fast and therefore helps in real-time visualization of materials. Our method also compares favourably with denoising frameworks in producing faster and better rendered images. In summary, the following are the contributions of our work:

• A neural renderer to aid in visualization of uniform and spatially-varying materials.
• An architectural enhancement to provide control over the environment lighting, thereby increasing visualization capability and freedom.
• An interactive tool for material visualization and editing, and a large-scale dataset of uniform and spatially-varying material parameters with corresponding ground truth ray-traced images (Project Page: https://aakashkt.github.io/neural-renderer-material-visualization.html).

Figure 1 is an example of the utility of our proposed system. Each material visualization is created within milliseconds and accurately mimics the behaviour of being rendered in a scene environment.

2 RELATED WORK

The use of neural networks for rendering has gained popularity in the recent past. Existing approaches to neural rendering deal with denoising low sample-count Monte-Carlo renders or neural rendering of materials on a fixed geometry. We briefly discuss recent works dealing with denoising, material modelling and acquisition, and image-based relighting, in the context of our contributions.
Material modelling: Several material models have been proposed in the past [Guarnera et al. 2016]. While some models focus on physical accuracy, others focus on intuitiveness and simplicity. An intuitive but not strictly physically-accurate material model was proposed by Burley [2012]. The model, known as the principled shader model, is parameterized by 13 different values, each of which controls a specific physical aspect of the material. The Cook-Torrance model [Cook and Torrance 1982] is another popular material model, parameterized by four values: Diffuse Color, Specular Color, Roughness and Normal. Material model parameters can also be spatially-varying, i.e. they can consist of different values at different locations of a 3D surface. In this case, each surface location is assigned a value via a UV-mapping from the 3D object to a 2D per-pixel parameter map. In our work, we use the Cook-Torrance material model (Section 3).
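Since the Cook-Torrance model is used throughout the paper, a minimal sketch of evaluating such a microfacet BRDF may help. The paper does not specify its exact distribution, Fresnel, and geometry terms; the GGX normal distribution, Schlick Fresnel, and Smith-Schlick shadowing used below are common choices and are assumptions here.

```python
import numpy as np

def cook_torrance_brdf(n, l, v, diffuse, specular, roughness):
    """Evaluate a Cook-Torrance style microfacet BRDF.

    n, l, v: unit vectors (surface normal, to-light, to-viewer).
    diffuse, specular: RGB triples; roughness: scalar in (0, 1].
    GGX NDF, Schlick Fresnel, and Smith-Schlick shadowing are assumed;
    the paper does not state which terms it uses.
    """
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    ndl = max(float(np.dot(n, l)), 1e-6)
    ndv = max(float(np.dot(n, v)), 1e-6)
    ndh = max(float(np.dot(n, h)), 0.0)
    vdh = max(float(np.dot(v, h)), 0.0)

    a2 = max(roughness, 1e-3) ** 4               # alpha^2 with alpha = roughness^2
    d = a2 / (np.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)   # GGX distribution
    f = specular + (1.0 - specular) * (1.0 - vdh) ** 5       # Schlick Fresnel
    k = (roughness + 1.0) ** 2 / 8.0
    g = (ndl / (ndl * (1.0 - k) + k)) * (ndv / (ndv * (1.0 - k) + k))  # Smith-Schlick

    return diffuse / np.pi + d * f * g / (4.0 * ndl * ndv)   # diffuse + specular lobes
```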
Material acquisition: Material recovery from a set of images or a single image is an interesting problem and has recently been studied using neural networks. A method for accurate capture of the BRDF (Bi-directional Reflectance Distribution Function) using a light-stage setup and a deep neural network was proposed by Kang et al. [2018]. Their approach captures multiple images for accurate reconstruction, where the weights of their network control the illumination within the light stage. A neural network architecture for recovering the SVBRDF (Spatially-Varying BRDF) from a single flash photograph of a planar material was proposed by Deschaintre et al. [2018]. An even more challenging task of recovering the SVBRDF and shape of a free-form object from a single flash photograph is demonstrated by Li et al. [2018]. While these approaches have significantly improved the speed and accuracy of acquiring materials, the visualization of acquired materials still requires ray-tracing. With advancements in the acquisition process, similar advancements in visualization are imperative, so as to make the whole pipeline function faster. This is the main focus of our work.
Rendering as Denoising: Much attention has been paid recently to denoising Monte-Carlo (MC) renders in order to enable real-time ray-tracing. A recurrent autoencoder for the task of denoising MC renders was proposed by Chaitanya et al. [2017]. A more recent method [Lehtinen et al. 2018] showcases a general denoising framework, trained on pairs of noisy inputs and noisy ground truths. Both of these approaches make use of auxiliary buffers as additional input to their networks. Other approaches like [Kuznetsov et al. 2018] aim to efficiently distribute MC samples to bring down the overall render time. All these approaches require a low sample-count (spp) MC render as input. One could argue that such methods can be used for quick material visualization, by rendering a given material with a low sample count and then denoising the image. We show that a neural renderer specifically designed for material visualization produces faster and better quality results than the corresponding denoising approach.
Image-based relighting: Neural networks have also been used for relighting an image [Ren et al. 2015]. A network that can relight a scene from five differently lit images of the scene was proposed by Xu et al. [2018]. Yet another neural network, for relighting human faces from a single input photograph, was proposed by Sun et al. [2019]. While not directly related to our work, we take inspiration from such architectures to provide control over the light direction in the rendered material output.
Neural rendering: Closely related to our work, neural rendering for material visualization has been proposed by Zsolnai-Fehér et al. [2018]. Their approach transforms a low-dimensional parameter space to a high-dimensional image output: the rendered material on a fixed shaderball, under fixed lighting conditions. However, their neural renderer has a large number of parameters, which affects run-time and portability. Also, the target material is rendered under a fixed light position, thereby reducing the flexibility of visualization. Additionally, their work only deals with constant material parameters, while it is quite common and in fact necessary to have spatially-varying parameters to increase photo realism in materials. Our neural renderer addresses all of these issues: it is much smaller and hence faster, allows for control over the light direction, and enables the use of spatially-varying parameters. We also demonstrate quantitative improvements in rendering quality on comparing our performance with [Zsolnai-Fehér et al. 2018]. Ours is a flexible neural renderer for material visualization and can be used in conjunction with material suggestion systems similar to those shown in [Zsolnai-Fehér et al. 2018], for building high-quality assistive tools for artists.
3 METHOD

We seek to quickly and accurately render constant or spatially-varying material parameters on fixed geometry under controllable environment lighting. The incoming radiance at each pixel x of such a rendered 2D image can be modeled as:

    I(x) = ∫_Ω f_r(p_x, ω_i, ω_x) L(p_x, ω_i) (ω_i · n) dω_i,    (1)

where p_x is the 3D point corresponding to the 2D pixel x, ω_i is the incoming light direction, ω_x is the direction towards pixel x from point p_x, L(p_x, ω_i) is the radiance of incoming light at point p_x from direction ω_i, Ω is the set of directions on the upper hemisphere, and f_r is the Bi-directional Reflectance Distribution Function (BRDF). The choice of f_r determines the material model in use. The hyper-parameters of f_r describe the surface and material properties of the shaderball, which we refer to as the material parameters. We refer the reader to [McAuley et al. 2012] for a complete overview of such physically based shading models and [Hoffman 2012] for the mathematical details.

We use the Cook-Torrance [Cook and Torrance 1982] material model (f_r) for rendering. Our choice of this material model was based on two aspects: (1) the Cook-Torrance model is based on microfacet theory, which accurately models surface properties; (2) a large SVBRDF dataset for the Cook-Torrance model is publicly available [Deschaintre et al. 2018].

We parameterize the environment lighting in the scene using the sky model proposed by Hosek and Wilkie [2012]. Such a sky model simulates realistic and plausible environment lighting given only the sun direction ω_s and turbidity (cloudiness) c as input. Hence, we can simulate large variations in the outdoor lighting with only four parameters.

We formally define the task of neural rendering as follows. Given the Cook-Torrance material parameters m_f along with the incoming sun direction ω_s and turbidity c, the solution of the rendering equation is estimated by a convolutional neural network ϕ as:

    I(x) = ϕ(x, m_f, ω_s, c).    (2)

We do not explicitly parameterize the geometry of our target scene since it remains constant across all renders.

Fig. 2. An overview of our proposed workflow. From input SVBRDF maps (a), we create screen-space maps (b) by UV-mapping each map on the shaderball and rasterizing the scene with that map as base texture. We provide the sun direction and turbidity [ω_s, c] as an input along with the concatenation of screen-space maps. (c) shows the architecture of our proposed neural renderer. (d) shows the results of our network under different environment lighting.

3.1 Network Architecture

Figure 2 shows the overview of our proposed workflow. From input SVBRDF maps, we first construct their corresponding screen-space maps, by UV-mapping each input map to the shaderball and rasterizing the scene with that map as the base texture (Figure 2(b)). The concatenation of these screen-space maps, along with the sun direction ω_s and turbidity c, forms the input to our network. Figure 2(d) shows the rendered material output under different environment lighting conditions.
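As a concrete illustration of this input assembly, the sketch below builds the network input tensors. The 10-channel split (3 + 3 + 1 + 3 for Diffuse, Specular, Roughness, Normal) is our assumption about an otherwise unspecified layout, and the rasterization of the screen-space maps is done in Blender in the paper, not in code like this.

```python
import torch

# Screen-space SVBRDF maps at the 400x400 render resolution, obtained by
# UV-mapping each parameter map onto the shaderball and rasterizing it
# (done in Blender in the paper; random tensors stand in here).
diffuse  = torch.rand(1, 3, 400, 400)
specular = torch.rand(1, 3, 400, 400)
rough    = torch.rand(1, 1, 400, 400)
normal   = torch.rand(1, 3, 400, 400)
maps = torch.cat([diffuse, specular, rough, normal], dim=1)  # (1, 10, 400, 400)

# The four lighting parameters of the sky model: a unit sun-direction
# vector omega_s and a scalar turbidity c.
sun_dir   = torch.tensor([[0.3, 0.8, 0.52]])
turbidity = torch.tensor([[3.0]])
```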
Our network architecture is inspired by the U-Net-style autoencoder architecture [Ronneberger et al. 2015]. The encoder takes the concatenation of the 400×400 screen-space maps of the material parameters (Diffuse, Specular, Roughness and Normal) and passes it through a series of convolutional layers, with stride 2 for downsampling. Each layer is followed by Batch Normalization and ReLU activation. We encode the 3D vector for the directional light using a separate, fully-connected encoder. Each fully-connected layer of this encoder is followed by Tanh activation. The encoder expands the 3-dimensional vector to a 625-dimensional vector. We then reshape this 625-dimensional vector to a 25×25 feature map, and replicate it 128 times along the channel dimension, to get a 128×25×25 feature map. We append this feature map to the bottleneck layer of the Ray-Trace network. The decoder then deconvolves the concatenated feature map and encoder output. Each decoder layer is followed by Batch Normalization and ReLU activation. We use skip connections to recover the high-frequency details in the rendering and improve convergence. The last layer of the decoder uses Sigmoid activation and converts the 64-channel feature map to an output 3-channel target RGB image.
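A minimal PyTorch sketch of the light-direction encoder and its injection into the bottleneck, following the description above. The intermediate widths (128, 256) follow the fully-connected layer labels legible in Figure 6 and are otherwise an assumption.

```python
import torch
import torch.nn as nn

class LightEncoder(nn.Module):
    """Fully-connected encoder for the 3D sun direction (Tanh after each
    layer, expanding 3 -> 625 features), whose output is reshaped and
    replicated to match the 25x25 bottleneck of the image encoder."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(3, 128), nn.Tanh(),     # widths 128/256 assumed from Fig. 6
            nn.Linear(128, 256), nn.Tanh(),
            nn.Linear(256, 625), nn.Tanh(),
        )

    def forward(self, sun_dir, bottleneck):
        # sun_dir: (B, 3); bottleneck: (B, C, 25, 25) from the image encoder
        f = self.fc(sun_dir)                   # (B, 625)
        f = f.view(-1, 1, 25, 25)              # reshape to a 25x25 feature map
        f = f.expand(-1, 128, -1, -1)          # replicate 128x along channels
        return torch.cat([bottleneck, f], 1)   # append to the bottleneck
```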
3.2 Dataset

We generate a synthetic dataset of 50,000 material parameter map and ground truth render pairs, containing an equal number of spatially-varying and uniform parameter maps. We randomly choose 1000 images from the above dataset for the test set, and train our network on the remaining 49,000 images. For uniform maps, we randomly choose one value for each of the four parameters (Diffuse, Specular, Roughness, Normal), and replicate it along the width and height (400×400) to get a uniform parameter map. We use SVBRDF textures from the dataset of Deschaintre et al. [2018] for spatially-varying maps. For each material parameter map, we sample 5 random sun directions on the upper hemisphere with random turbidity values, and render the scene at 150 samples-per-pixel (spp). We use the Cycles ray-tracing engine in Blender 3D [Blender Online Community 2018], in which we create the Cook-Torrance material for use on the shaderball. This dataset is available on our project page (https://aakashkt.github.io/neural-renderer-material-visualization.html).

Fig. 3. Average case and ablation study comparisons with [Zsolnai-Fehér et al. 2018] and [Chaitanya et al. 2017]. [Zsolnai-Fehér et al. 2018] has no results for spatially-varying materials, since they only handle uniform materials. Results are shown on a fixed sun direction.

3.3 Training Loss

The task of material visualization requires that the perceptual visual quality of the rendered images is impeccable. Cost functions based on Euclidean (L2) distance are known to be prone to blurring and pixel degradation. We therefore use a loss term which evaluates the perceptual quality of the rendered image, along with the L2 loss, for training, inspired by [Johnson et al. 2016]. Specifically, we use the feature reconstruction loss from a pre-trained VGG16 [Simonyan and Zisserman 2015] network, which is given by:

    L_feat^j(y′, y) = (1 / (C_j H_j W_j)) ‖ϕ_j(y′) − ϕ_j(y)‖₂²,    (3)

where ϕ_j is the activation of the j-th convolutional layer, with dimensions C_j × H_j × W_j representing the number of channels, width and height of the feature map, respectively. Here, y denotes the predicted output and y′ is the ground truth. We use the relu_3_3 (j = relu_3_3) feature representation in our experiments. Thus, our final composite loss is given by:

    L_train = L2 + L_feat^relu_3_3.    (4)
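A sketch of the composite loss of Eq. 4 in PyTorch. In torchvision's VGG16, the relu_3_3 activation is features[15], so slicing features[:16] yields the required feature extractor; the ImageNet input normalization usually applied before VGG is omitted for brevity.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class CompositeLoss(nn.Module):
    """L2 plus VGG16 relu_3_3 feature-reconstruction loss (Eqs. 3 and 4)."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.relu3_3 = vgg.features[:16].eval()   # up to and including relu_3_3
        for p in self.relu3_3.parameters():
            p.requires_grad = False               # fixed feature extractor

    def forward(self, pred, target):
        l2 = F.mse_loss(pred, target)
        # mse_loss averages over C_j * H_j * W_j, matching the
        # 1/(C_j H_j W_j) normalization of Eq. 3
        feat = F.mse_loss(self.relu3_3(pred), self.relu3_3(target))
        return l2 + feat
```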
Table 1. Quantitative (PSNR in dB, SSIM) and run-time comparisons (Pre-proc., Network, Total, in milliseconds). The network of [Chaitanya et al. 2017] has a pre-process time for rendering the 2spp image; our network has a pre-process time for UV-mapping. Run-time values are evaluated on a workstation with 40 CPU cores and one NVIDIA GTX 1080Ti GPU.

Algorithm              Params         PSNR (dB)   SSIM   Pre-proc.   Network   Total
Zsolnai-Fehér et al.   5,374,75,643
Chaitanya et al.       15,05,453
Ours                   117,52,404
3.4 Training Details

We train our network on an NVIDIA GTX 1080Ti with a batch size of six, using the Adam optimizer (lr = 10−, β₁ = 0.9, β₂ = 0.999). We initialize all weights using Glorot initialization. The network is trained for 30 epochs and takes around 90 hours to train on our setup.

4 RESULTS

We compare our performance with two contemporary paradigms for neural rendering: (1) neural rendering based on denoising [Chaitanya et al. 2017]; (2) direct neural rendering [Zsolnai-Fehér et al. 2018]. We also justify our network design choices with an ablation study and conduct a user study for perceptual evaluation. Figures 4, 8, 9 and 10 show extensive results and comparisons.

Fig. 4. Comparisons with [Zsolnai-Fehér et al. 2018], [Chaitanya et al. 2017] and ablations of our network. Results are shown for both uniform material parameter maps and spatially-varying material parameter maps. [Zsolnai-Fehér et al. 2018] has blank spots for spatially-varying materials, since they only handle uniform materials. Results are shown on a fixed sun direction.
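The stated training setup maps to PyTorch roughly as below; the small stand-in model and the 1e-4 learning rate are placeholders (the learning-rate exponent is not legible in the source).

```python
import torch
import torch.nn as nn

def init_glorot(m):
    # Glorot (Xavier) initialization for all weights, as stated in the paper
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# Stand-in for the renderer network described in Sect. 3.1.
model = nn.Sequential(nn.Conv2d(10, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid())
model.apply(init_glorot)

# Adam with beta_1 = 0.9, beta_2 = 0.999; lr = 1e-4 is a placeholder.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
# Training uses a batch size of six for 30 epochs on a GTX 1080Ti.
```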
4.1 Comparison with Denoising

We show qualitative and run-time comparisons with the denoiser proposed by Chaitanya et al. [2017]. We implement their network in PyTorch and train it on our dataset, with 2spp render inputs and 150spp ground truths, consisting of both uniform and spatially-varying materials. The denoiser fails to recover accurate details, which are essential for material visualization (Fig. 3). In terms of run-time, it is evident that first rendering a 2spp image and then denoising it requires a lot more time than what is required by our network, even with the additional overhead of UV-mapping (Table 1).
4.2 Comparison with Direct Neural Rendering

We also compare our results with the neural renderer proposed by Zsolnai-Fehér et al. [2018]. We implement their network in PyTorch and train it on our dataset of uniform material parameter maps. For a fair comparison, we compare results on a fixed sun direction. Our network produces better results than their network, both in terms of render quality and PSNR values (Fig. 3, 4 and Table 1). Since the number of parameters of our network differs from theirs by a factor of 10, the run-time of our network is also better.
4.3 Ablation Study

We justify the impact of our composite loss function by training a standalone network using only the L2 loss. Figures 3 and 4 demonstrate that the L2 loss alone is unable to capture fine details, especially of the normal map. We also justify the benefit of using skip connections in our network. Figures 3 and 4 show this comparison: without skip connections, several high-frequency details are lost (Fig. 3, last two rows).

4.4 Quantitative Metrics

Table 1 and Fig. 3 show that quantitative metrics like PSNR and SSIM do not faithfully reflect the visual quality of results. Although the average PSNR values of Zsolnai-Fehér et al. [2018] are comparable to ours, their results contain artifacts in near perfectly dark or white regions. Another point to note is the resultant PSNR values produced by the denoiser and by our network: the quantitative values are very close, even though the latter's visual quality is superior (Fig. 3, last row).
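For reference, PSNR and SSIM of the kind reported in Table 1 can be computed as below; the scikit-image API shown (channel_axis requires scikit-image >= 0.19) is our tooling choice, not necessarily the authors'.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# pred: network output; gt: 150spp ground-truth render.
# Both HxWx3 float images in [0, 1]; random stand-ins here.
pred = np.random.rand(400, 400, 3)
gt = np.random.rand(400, 400, 3)

psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
ssim = structural_similarity(gt, pred, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.3f} dB  SSIM: {ssim:.3f}")
```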
4.5 User Study

We conduct an extensive user study consisting of 70 users to qualitatively validate the impact of flexible lighting for the task of material selection. From a rendered scene, we pick one dominant material and render four materials on the shaderball, one of them being the correct material and the others encoding slight variations. We ask the users to pick the correct material in two independent experiments, conducted with and without the freedom to view the shaderball under flexible lighting. Two example instances of the user study are shown in Figure 5. We find that only 17.9% of users are able to identify the correct material under fixed lighting conditions. This number increases to 49.3% once flexible lighting is made available. This clearly demonstrates the benefit of using flexible lighting in our renderings for material visualization.
5 CONCLUSION

In summary, we present a convolutional neural network for accurate rendering and visualization of both uniform and spatially-varying materials. We enable control over the environment lighting through our architecture, and verify its benefit through a qualitative user study. Comparisons with denoising and neural rendering methods show improved quantitative and qualitative results. We also release a large-scale dataset of uniform and spatially-varying material parameter-render pairs. In the future, we are interested in generalizing the network to arbitrary geometry.
ACKNOWLEDGMENTS
We thank the reviewers of our SIGGRAPH Asia 2019 submission fortheir valuable comments and suggestions.
Fig. 5. Two example questions and options presented to the users in our user study. The user was asked to identify the material in the scene. The second row of options is where lighting control was allowed. On viewing under certain lighting, the distinction between materials becomes clear. The correct option is highlighted in green.
Fig. 6. (a) Network to render uniform materials on the shaderball; inputs are the uniform material parameters and the light position ω_p. (b) Network to render spatially-varying materials on the shaderball; inputs are the UV-mapped spatially-varying maps and ω_p. (Legend: fX = fully connected + Tanh; cX = convolutional layer + BN + ReLU; blue = convolutional + CELU.) Each network was trained separately, on the uniform material dataset and the spatially-varying material dataset, respectively.

Fig. 7. (a) Results of the network described in Figure 6(a) (uniform materials) and (b) results of the network described in Figure 6(b) (spatially-varying materials), for different locations of the planar light source.
A PRELIMINARY EXPERIMENTS
In this section, we describe various other experiments we conducted for material visualization. To improve the neural renderer proposed by Zsolnai-Fehér et al. [2018] while also enabling flexible lighting, we used the network architectures shown in Figure 6.

Figure 6(a) shows a network architecture which extends Zsolnai-Fehér et al. [2018] by providing control over an area light source in the scene. We trained this network on a dataset of uniform materials, using the perceptual loss along with the L2 loss as described in this paper. Figure 7(a) shows some results from this network. We achieved better quantitative and qualitative results in comparison with [Zsolnai-Fehér et al. 2018], which further motivated the extension to handle spatially-varying materials.

Consequently, we used the network shown in Figure 6(b) to handle spatially-varying materials. We provided UV-mapped material maps as input to the network (Sect. 3.1), since spatially-varying materials cannot be defined using singular values. We therefore used only convolutional layers (highlighted in blue in Figure 6), in place of the fully connected + convolutional layers of Figure 6(a). We trained this network on a dataset of spatially-varying materials, using the training loss described in this paper. Results of this network are shown in Figure 7(b).

Both of the networks described above provided control over lighting through an area light source whose location was restricted to a single ring on the upper hemisphere. Moreover, uniform materials are a special case of spatially-varying materials, which makes the two networks redundant. We therefore propose a single network for handling both uniform and spatially-varying materials in this paper (Sect. 3.1). We also extend our network to handle a full sky model [Hosek and Wilkie 2012], with sun locations defined at any point on the upper hemisphere. This was motivated by the ability of both preceding networks to handle arbitrary light locations on the ring of the upper hemisphere, despite being trained on randomly sampled light locations.

Fig. 8. Shaderball visualizations under different environment lighting produced by our network, corresponding to the input SVBRDF maps.

Fig. 9. Visualization results for different sun directions and turbidity values of a specular uniform material.

Fig. 10. Visualization results for different sun directions and turbidity values of a specular uniform material.

REFERENCES
Blender Online Community. 2018. Blender - a 3D modelling and rendering package. Blender Foundation. http://www.blender.org
Brent Burley. 2012. Physically Based Shading at Disney. In ACM SIGGRAPH 2012 Courses.
Chakravarty R. Alla Chaitanya, Anton S. Kaplanyan, Christoph Schied, Marco Salvi, Aaron Lefohn, Derek Nowrouzezahrai, and Timo Aila. 2017. Interactive Reconstruction of Monte Carlo Image Sequences Using a Recurrent Denoising Autoencoder. ACM Trans. Graph. 36, 4, Article 98 (July 2017), 12 pages. https://doi.org/10.1145/3072959.3073601
R. L. Cook and K. E. Torrance. 1982. A Reflectance Model for Computer Graphics. ACM Trans. Graph. 1, 1 (Jan. 1982), 7–24. https://doi.org/10.1145/357290.357293
Valentin Deschaintre, Miika Aittala, Frédo Durand, George Drettakis, and Adrien Bousseau. 2018. Single-Image SVBRDF Capture with a Rendering-Aware Deep Network. ACM Transactions on Graphics (SIGGRAPH Conference Proceedings).
Darya Guarnera, Giuseppe Claudio Guarnera, Abhijeet Ghosh, Cornelia Denk, and Mashhuda Glencross. 2016. BRDF Representation and Acquisition. In Proceedings of the 37th Annual Conference of the European Association for Computer Graphics: State of the Art Reports (EG '16). Eurographics Association, Goslar, Germany, 625–650. https://doi.org/10.1111/cgf.12867
Naty Hoffman. 2012. Background: Physics and Math of Shading.
Lukas Hosek and Alexander Wilkie. 2012. An Analytic Model for Full Spectral Sky-Dome Radiance. ACM Transactions on Graphics 31, 4 (July 2012).
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. 2016. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. CoRR abs/1603.08155. http://arxiv.org/abs/1603.08155
Kaizhang Kang, Zimin Chen, Jiaping Wang, Kun Zhou, and Hongzhi Wu. 2018. Efficient Reflectance Capture Using an Autoencoder. ACM Trans. Graph. 37, 4, Article 127 (July 2018), 10 pages. https://doi.org/10.1145/3197517.3201279
A. Keller, L. Fascione, M. Fajardo, I. Georgiev, P. Christensen, J. Hanika, C. Eisenacher, and G. Nichols. 2015. The Path Tracing Revolution in the Movie Industry. In ACM SIGGRAPH 2015 Courses (SIGGRAPH '15). ACM, New York, NY, USA, Article 24, 7 pages. https://doi.org/10.1145/2776880.2792699
Alexandr Kuznetsov, Nima Khademi Kalantari, and Ravi Ramamoorthi. 2018. Deep Adaptive Sampling for Low Sample Count Rendering. Comput. Graph. Forum (2018).
Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila. 2018. Noise2Noise: Learning Image Restoration without Clean Data. CoRR abs/1803.04189. http://arxiv.org/abs/1803.04189
Zhengqin Li, Zexiang Xu, Ravi Ramamoorthi, Kalyan Sunkavalli, and Manmohan Chandraker. 2018. Learning to Reconstruct Shape and Spatially-varying Reflectance from a Single Image. ACM Trans. Graph. 37, 6, Article 269 (Dec. 2018), 11 pages. https://doi.org/10.1145/3272127.3275055
Stephen McAuley, Stephen Hill, Naty Hoffman, Yoshiharu Gotanda, Brian Smits, Brent Burley, and Adam Martinez. 2012. Practical Physically-based Shading in Film and Game Production. In ACM SIGGRAPH 2012 Courses (SIGGRAPH '12). ACM, New York, NY, USA, Article 10, 7 pages. https://doi.org/10.1145/2343483.2343493
Peiran Ren, Yue Dong, Stephen Lin, Xin Tong, and Baining Guo. 2015. Image Based Relighting Using Neural Networks. ACM Trans. Graph. 34, 4, Article 111 (July 2015), 12 pages. https://doi.org/10.1145/2766899
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional Networks for Biomedical Image Segmentation. CoRR abs/1505.04597. http://arxiv.org/abs/1505.04597
Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR abs/1409.1556.
Tiancheng Sun, Jonathan T. Barron, Yun-Ta Tsai, Zexiang Xu, Xueming Yu, Graham Fyffe, Christoph Rhemann, Jay Busch, Paul Debevec, and Ravi Ramamoorthi. 2019. Single Image Portrait Relighting. ACM Trans. Graph. 38, 4 (2019).
Zexiang Xu, Kalyan Sunkavalli, Sunil Hadap, and Ravi Ramamoorthi. 2018. Deep Image-Based Relighting from Optimal Sparse Samples. ACM Transactions on Graphics (TOG) 37, 4 (2018), 126.
Károly Zsolnai-Fehér, Peter Wonka, and Michael Wimmer. 2018. Gaussian Material Synthesis. ACM Trans. Graph. 37, 4 (2018).