Neural BRDF Representation and Importance Sampling
Alejandro Sztrajman, Gilles Rainer, Tobias Ritschel, Tim Weyrich
University College London, UK; Université Côte d'Azur, France
Abstract
Controlled capture of real-world material appearance yields tabulated sets of highly realistic reflectance data. In practice, however, its high memory footprint requires compressing it into a representation that can be used efficiently in rendering while remaining faithful to the original. Previous works in appearance encoding often prioritised one of these requirements at the expense of the other, by either applying high-fidelity array compression strategies not suited for efficient queries during rendering, or by fitting a compact analytic model that lacks expressiveness. We present a compact neural network-based representation of BRDF data that combines high-accuracy reconstruction with efficient practical rendering via built-in interpolation of reflectance. We encode BRDFs as lightweight networks, and propose a training scheme with adaptive angular sampling, critical for the accurate reconstruction of specular highlights. Additionally, we propose a novel approach to make our representation amenable to importance sampling: rather than inverting the trained networks, we learn an embedding that can be mapped to parameters of an analytic BRDF for which importance sampling is known. We evaluate encoding results on isotropic and anisotropic BRDFs from multiple real-world datasets, and importance sampling performance for isotropic BRDFs mapped to two different analytic models.
1. Introduction
Accurate reproduction of material appearance is a major challenge in computer graphics. Currently, there are no standardised representations for reflectance acquisition data, and there is no universal analytic model capable of representing the full range of real-world materials [GGG*16]. The development of new methods for appearance capture has led to an increasing amount of densely sampled data from real-world appearance [MPBM03; VF18; DJ18]. Although tabulated representations of reflectance data are usually very accurate, they suffer from a high memory footprint and computational cost at evaluation time [HGC*20]. Reflectance data, however, exhibits strong coherence [Don19], which can be leveraged for efficient representation and evaluation of real-world materials. Existing approaches perform dimensionality reduction using matrix factorisation [LRR04; NDM06; NJR15], which requires a large number of components for high-quality reproduction, or fit analytic models [NDM05], usually relying on time-consuming and numerically unstable nonlinear optimisation, with a limited capacity to accurately reproduce real-world materials.

Recent works have successfully applied deep learning methods to reflectance estimation [DAD*18], material synthesis [ZWW18], and BTF compression and interpolation [RJGW19; RGJW20]. Close to our work, Hu et al.'s DeepBRDF [HGC*20] uses a deep convolutional autoencoder to generate compressed encodings of measured BRDFs, which can be used for material estimation and editing; however, their encoding depends on a rigid sampling of the tabulated data, independent of the shape of the encoded BRDF, and DeepBRDFs require back-transformation into tabulated form for evaluation, making them less suitable for rendering than for editing of appearance. In contrast, we aim for a representation that allows for efficient rendering while retaining sufficient expressiveness for a wide range of materials.
The contributions of our work are as follows:

• A neural architecture for high-fidelity compression of measured BRDF data that
  – can be trained with an arbitrary sampling of the original BRDF, allowing for BRDF-aware sampling of the specular highlights during training, which is critical for their accurate reconstruction; additionally, our network
  – can be used directly as a replacement of a BRDF in a rendering pipeline, providing built-in evaluation and interpolation of reflectance values, with speeds comparable to fast analytic models. In Sections 4.1, 4.2 and 4.5 we compare our encoding with other representations in terms of quality of reconstruction, speed and memory usage.

• Deployment of a learning-to-learn autoencoder architecture to explore the subspace of real-world materials by learning a latent representation of our Neural-BRDFs (NBRDFs). This enables further compression of BRDF data to a 32-value encoding, which can be smoothly interpolated to create new realistic materials, as shown in Section 4.3.

• A learned mapping between our neural representation and an invertible parametric approximation of the BRDF, enabling importance sampling of NBRDFs in a rendering pipeline; in Section 4.4 we compare our method with other sampling strategies.
2. Related Work

2.1. BRDF Compression and Interpolation
Real-world captured material appearance is commonly represented by densely sampled, high-dimensional tabulated BRDF measurements. Usage and editing of these representations usually requires strategies for dimensionality reduction, most commonly through different variants of matrix factorisation [LRR04; NDM06; NJR15], which require large storage in order to provide accurate reconstructions, or by fitting to an analytic model. BRDF models are lightweight approximations specifically designed for compact representation and efficient evaluation of reflectance data. However, fitting these models usually relies on unstable optimisations, and they are capable of representing only a limited gamut of real-world appearances [SKWW17].

Ngan et al. [NDM05] were the first to systematically study the fitting of analytic BRDF models to real-world materials. Since then, more complex models have been developed, many of them based on the microfacet model originally proposed by Cook and Torrance [CT82]. In particular, two parameterisations of the microfacet 𝐷 distribution are considered the state of the art in parametric reconstruction: the shifted gamma distribution (SGD) by Bagher et al. [BSH12] and the ABC model by Low et al. [LKYU12]. More recent models have been developed with non-parametric definitions of some or all component functions of the microfacet model. Dupuy et al. [DHI*15] fit the 𝐷 distribution from the retro-reflective lobe using power iterations. Their fitting method avoids the instabilities of nonlinear optimisation and allows the subsequent translation to other microfacet-based models such as GGX [WMLT07] and Cook-Torrance [CT82]. Bagher et al. [BSN16] define a non-parametric factor microfacet model (NPF), the state of the art in non-parametric reconstruction of isotropic BRDFs, using tabulated definitions for the three functional components (𝐷, 𝐹 and 𝐺) of the microfacet model, with a total memory footprint of a few kilobytes per material.
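For reference, the 𝐷, 𝐹 and 𝐺 terms mentioned above combine in the standard Cook-Torrance microfacet form (a textbook formulation, not specific to any of the cited fits):

```latex
f_r(\omega_i,\omega_o) \;=\; f_d \;+\;
\frac{D(\mathbf{h})\,F(\omega_i,\mathbf{h})\,G(\omega_i,\omega_o)}
     {4\,(\mathbf{n}\cdot\omega_i)(\mathbf{n}\cdot\omega_o)},
\qquad
\mathbf{h} \;=\; \frac{\omega_i+\omega_o}{\lVert\omega_i+\omega_o\rVert}
```

Here 𝐷 is the normal distribution of microfacets, 𝐹 the Fresnel term, 𝐺 the shadowing-masking term, and f_d a diffuse component; the cited models differ in which of these factors are parametric and which are tabulated.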
Dupuy and Jakob [DJ18] define a new adaptive parameterisation that warps the 4D angle domain to match the shape of the material. This allows them to create a compact data-driven representation of isotropic and anisotropic reflectance. Their reconstructions compare favourably against NPF, although at the price of an increased storage requirement, for both isotropic 3-channel and anisotropic materials.

Close to our work, Hu et al. [HGC*20] use a convolutional autoencoder to generate compressed embeddings of real-world BRDFs, showcasing applications on material capture and editing. In Section 3.1 we describe a method for BRDF compression based on a neural representation of material appearance. In contrast with Hu et al.'s, our neural BRDF network can be directly used as a replacement of a BRDF in a rendering system, without the need to expand its encoding into a tabular representation. Moreover, NBRDFs provide built-in fast interpolated evaluation, matching the speed of analytic models of much lower reconstruction quality. We compare our method with other parametric and non-parametric representations in terms of reconstruction accuracy, compression and evaluation speed.

In Section 3.2 we describe a learning-to-learn autoencoder architecture that is able to further compress our NBRDF networks into a low-dimensional embedding. A similar architecture was previously used by Maximov et al. [MLFR19] to encode deep appearance maps, a representation of material appearance with baked scene illumination. Soler et al. [SSN18] explored a low-dimensional nonlinear BRDF representation via a Gaussian process model, supporting smooth transitions across BRDFs. Similarly, in Section 4.3 we show that the low-dimensional embeddings generated by our autoencoder can be interpolated to create new realistic materials.
BRDF-based importance sampling is a common strategy to reduce the variance of rendering algorithms relying on Monte Carlo integration [CPF10]. For some analytic BRDF models, such as Blinn-Phong [Bli77], Ward [War92], Lafortune [LFTG97] and Ashikhmin-Shirley [AS00], it is possible to compute the inverse cumulative distribution function analytically, thus providing a fast method for importance sampling. For the general case, however, closed-form inverse CDFs do not exist, requiring costly numerical calculation. A practical alternative is to approximate the original BRDF by a PDF with a closed-form inverse CDF, and to use that for importance sampling instead [LRR04]. While generally sacrificing speed of convergence, this approach still leads to accurate, unbiased results in the limit; however, it often introduces the requirement of a potentially unreliable non-linear model fit. Accordingly, in the context of measured data, many works forgo non-linear models in favour of numerically more robust approximations, including matrix factorisation [LRR04], as well as wavelet [CJAJ05] and spherical harmonics approximations [JCJ09]. Our work, too, operates with an approximating PDF, but retains a physically-based, non-linear invertible model and eliminates the non-linear fit by training a fast neural network to map measured BRDF data to the model parameters (see Section 3.3).
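The key property exploited here, that a merely approximate sampling PDF still yields an unbiased estimator, can be demonstrated on a 1D toy integrand (an illustrative NumPy sketch; the function and PDF are stand-ins, not BRDFs):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: 3.0 * x**2      # toy integrand; its true integral over [0,1] is 1

n = 200_000

# Uniform sampling: p(x) = 1 on [0,1].
xu = rng.random(n)
est_uniform = np.mean(f(xu))

# Importance sampling with an *approximate* pdf p(x) = 2x,
# drawn via its inverse CDF x = sqrt(u). p is only roughly
# proportional to f, yet the estimator f(x)/p(x) stays unbiased.
xi = np.sqrt(rng.random(n))
w = f(xi) / (2.0 * xi)
est_is = np.mean(w)

print(est_uniform, est_is)          # both converge to 1.0
print(np.var(f(xu)), np.var(w))     # variance drops with the better-matched pdf
```

Both estimators converge to the true integral; the mismatched-but-closer PDF only reduces variance, which is exactly why sampling an analytic surrogate of a measured BRDF remains correct.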
Other recent works have also leveraged neural networks for importance sampling. Müller et al. [MMR*19] and Zheng et al. [ZZ19] trained invertible RealNVP networks [DSB17] to generate importance samplers; however, in contrast to our method, they exclusively operate in primary sample space. While importance sampling's main objective is faster convergence, it has the secondary effect of reducing noise. Convolutional networks have successfully been applied to the denoising of Monte Carlo renderings [CKS*17; BVM*17] and to radiance interpolation from sparse samples [RWG*13; KMM*17]. However, these methods do not converge to ground truth, since they act directly on rendered images, lacking information from the underlying scene.
3. Method and Implementation
Drawing upon the observations of Section 2, we propose a new representation for measured BRDFs that maximises fidelity to the data while retaining practicality. The remainder of this section describes our basic reflectance encoding (Section 3.1), an autoencoder framework for efficient representation (Section 3.2), as well as an importance sampling scheme to further speed up rendering (Section 3.3).
Our representation for BRDF data uses a shallow fully-connected network with ReLU activations and a final exponential layer, as shown in Figure 1, which we will refer to as NBRDF (Neural-BRDF). An NBRDF works as a standard BRDF representation for a single material: the network takes incoming and outgoing light directions as input, and outputs the associated RGB reflectance value. Interpolation is handled implicitly by the network, via the continuous input space.

Figure 1: Diagram of a Neural-BRDF (NBRDF).

The parametrisation of the network input strongly affects the reconstruction quality, as it favours the learning of different aspects of the reflectance function. Rainer et al. [RJGW19] use a stereographic projection of the light and view directions in Euclidean coordinates as network parameters. While this parametrisation lends itself well to the modelling of effects like anisotropy, inter-shadowing and masking, which dominate the appearance of sparsely sampled spatially-varying materials, it is not well suited to reconstruct specular highlights (as can be seen in Figure 2), which are much more noticeable in densely sampled uniform materials. In contrast, we use the Cartesian vectors h and d of the Rusinkiewicz parameterisation [Rus98] for directions, which are a much better suited set of variables to encode specular lobes.

During training we compute the difference between predicted and ground-truth BRDF data using a logarithmic loss applied to cosine-weighted reflectance values:

Loss = | log(1 + f_r^true cos θ_i) − log(1 + f_r^pred cos θ_i) |,    (1)

Our architecture allows for an arbitrary sampling of the angular domain during training, which we leverage by implementing a BRDF-aware random sampling of the upper hemisphere. We draw random uniform samples of the Rusinkiewicz parameterisation angles, which emphasises directions close to the specular highlight.
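The forward pass and the loss of Eq. (1) can be sketched in NumPy as follows (a hedged illustration: layer sizes follow the 6-input, two-hidden-layer architecture described here, but the weight values are random placeholders, not a trained material):

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder weights for a 6 -> 21 -> 21 -> 3 fully-connected NBRDF.
W1, b1 = rng.normal(0, 0.1, (21, 6)), np.zeros(21)
W2, b2 = rng.normal(0, 0.1, (21, 21)), np.zeros(21)
W3, b3 = rng.normal(0, 0.1, (3, 21)), np.zeros(3)

def nbrdf(h, d):
    """Reflectance for Cartesian Rusinkiewicz vectors h and d (3 values each)."""
    x = np.concatenate([h, d])                  # 6D network input
    x = np.maximum(W1 @ x + b1, 0.0)            # ReLU
    x = np.maximum(W2 @ x + b2, 0.0)            # ReLU
    return np.exp(W3 @ x + b3)                  # exponential output layer -> RGB

def log_cosine_loss(f_true, f_pred, cos_theta_i):
    """Eq. (1): absolute log difference of cosine-weighted reflectance."""
    return np.abs(np.log1p(f_true * cos_theta_i)
                  - np.log1p(f_pred * cos_theta_i)).mean()
```

The exponential output keeps predicted reflectance strictly positive, and the log-space loss prevents the bright, narrow specular peak from dominating the gradient.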
In Section 4.1 we show that this is critical for accurate encoding of the specular highlights. The loss stabilises quickly for the more diffuse materials in Matusik et al.'s MERL database [MPBM03] (detailed in Section 3.4), while the most mirror-like ones can take considerably more epochs (between seconds and minutes of training on a GPU).

Figure 2: Top row: ground truth. Bottom row: reconstruction using Rainer et al.'s architecture [RJGW19], treating each BRDF as a spatially uniform BTF.

NBRDF networks can be used to encode both isotropic and anisotropic materials. The latter introduce a further dependence on the Rusinkiewicz angle φ_h, which must be learnt by the network. Following our sampling strategy, during training we draw random uniform samples from all four Rusinkiewicz angles, increasing the total number of samples 5-fold to compensate for the increased complexity of the BRDF functional shape. In Section 4.2 we analyse the reconstruction of anisotropic materials from the RGL database [DJ18], which contains both isotropic and anisotropic measured materials.

Figure 3 shows our architecture for an autoencoder that learns a latent representation for NBRDFs. Input and output are the flattened weights of an NBRDF, which are further compressed by the network into short embeddings. In effect, the autoencoder learns to predict the weights of an NBRDF neural network. We typically use NBRDF encodings with two hidden layers (6×21×21×3, for a total of 675 parameters) and encode them into embeddings of 32 values.
Figure 3: NBRDF autoencoder. Input and output are NBRDF networks of shape 6×21×21×3, flattened to 1D vectors of 675 values.

In addition to further compressing the NBRDF representations, the autoencoder provides consistent encodings of the MERL materials that can be interpolated to generate new materials, as demonstrated in Section 4.3. Training of the autoencoder is performed using NBRDFs pre-trained with materials from MERL, employing an 80%-20% split between training and testing materials. To compensate for the limited availability of measured materials, we augment our data by applying all permutations of RGB channels for each material in the training set. The training loss is image-based: our custom loss layer uses the predicted 675-value vector to construct an NBRDF network of the original shape (6×21×21×3), and evaluates it to produce small renderings of a sphere illuminated by a directional light. A fixed tone mapping (a simple gamma curve, with low values bottom-clamped) is then applied to the sphere renderings, and the loss is computed as point-by-point MSE. The loss computation involves a differentiable implementation of the rendering pipeline for direct illumination and subsequent tone mapping, in order to keep the computation back-propagatable.

Importance sampling of BRDFs requires producing angular samples with a probability density function (PDF) approximately proportional to the BRDF. This can be accomplished by computing the inverse cumulative distribution function (inverse CDF) of the PDF, which constitutes a mapping between a uniform distribution and the target distribution.
The computation of the inverse CDF of a PDF usually requires costly numerical integration; however, for a set of parametric BRDF models, such as Blinn-Phong or GGX, this can be done analytically. Our proposed method for quick inverse-CDF computation is based on a shallow neural network, shown in Figure 4, that learns the mapping between the embeddings generated by the NBRDF autoencoder and a set of model parameters of an invertible analytic BRDF. In essence, the network learns to fit NBRDFs to an analytic model, an operation that is commonly performed through nonlinear optimisation, which is comparatively slow and prone to getting stuck in local minima.
Figure 4: Scheme for quick computation of the inverse CDF from an NBRDF: we train a network to map from latent NBRDF embeddings to importance sampling parameters of a chosen analytic BRDF model.

We use Blinn-Phong as the target model for our prediction. Although it contains several model parameters, its associated PDF is monochrome and can be defined by only 2 parameters, associated with the roughness of the material and the relative weight between the specular and diffuse components. Hence, we train our network to learn the mapping between the NBRDF's 32-value embeddings and the 2 Blinn-Phong importance sampling parameters. Although the predicted PDF is an approximation of the original NBRDF, the resulting sampling is unbiased due to the exact correspondence between the sampling PDF and its inverse CDF, as shown in Section 4.4.

The MERL BRDF database [MPBM03] contains reflectance measurements of 100 real-world materials, with a dense sampling of directions given directly in terms of the spherical angles (θ, φ) of the h and d vectors of the Rusinkiewicz parameterisation [Rus98]:
θ_h: 90 samples from 0° to 90°, with inverse square-root sampling that emphasises low angles.
θ_d: 90 uniform samples from 0° to 90°.
φ_d: 180 uniform samples from 0° to 180°; values from 180° to 360° are computed by applying Helmholtz reciprocity.
Isotropic BRDFs are invariant in φ_h, so the MERL database, which was created using a measurement setup relying on isotropic reflectance [MWL*99], omits φ_h. Counting all samples for the three colour channels, each material in MERL is encoded in tabular format with approximately 4.4 × 10⁶ reflectance values (approx. 34 MB).
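Given the two predicted sampling parameters, drawing a direction reduces to the closed-form inverse CDF of the Blinn-Phong lobe. A hedged sketch (the function name and the lobe-selection scheme are illustrative; production code must additionally reject specular samples that fall below the surface and account for the selection probability in the PDF):

```python
import numpy as np

def sample_blinn_phong(n_exp, k_spec, wo, u1, u2, u3):
    """Draw an incident direction wi for a Blinn-Phong-shaped pdf.
    n_exp  - specular exponent (lobe sharpness)
    k_spec - probability of sampling the specular rather than the diffuse lobe
    wo     - outgoing direction (unit vector, z-up shading frame)
    u1..u3 - uniform random numbers in [0, 1)
    """
    if u3 < k_spec:
        # Specular lobe: half-vector with cos(theta_h) = u1^(1/(n+1)),
        # then reflect wo about it (inverse CDF of the Blinn-Phong pdf).
        cos_th = u1 ** (1.0 / (n_exp + 1.0))
        sin_th = np.sqrt(max(0.0, 1.0 - cos_th * cos_th))
        phi = 2.0 * np.pi * u2
        h = np.array([sin_th * np.cos(phi), sin_th * np.sin(phi), cos_th])
        return 2.0 * np.dot(wo, h) * h - wo   # may need rejection if wi.z < 0
    # Diffuse lobe: cosine-weighted hemisphere sample.
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi),
                     np.sqrt(max(0.0, 1.0 - u1))])
```

Because sampling and PDF evaluation use the exact same two parameters, the resulting Monte Carlo estimator stays unbiased even though the lobe only approximates the NBRDF.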
4. Results
In this section, we analyse our results on the reconstruction and importance sampling of measured materials. Although we centre most of our analysis on materials from the MERL database, we show that our approach can be applied to any source of measured BRDFs, as displayed in Figure 5. Reconstruction results for the complete sets from the MERL [MPBM03] and RGL [DJ18] databases can be found in the supplemental material.
Figure 5: NBRDF reconstruction of measured materials from the RGL database [DJ18] (top 4) and from Nielsen et al. [NJR15] (bottom 2), using environment map and directional illuminations.
Figure 6: Reconstruction of MERL materials using different BRDF representations, including the average SSIM value for each image. Compared methods: NBRDF-Adaptive (ours), NBRDF-Uniform (ours), NPF [BSN16], Low et al. [LKYU12], Bagher et al. [BSH12], Dupuy et al. [DHI*15], and GGX. For both NBRDF columns we utilised a fixed network size of 675 weights (6×21×21×3).
Figure 7: Average SSIM over all MERL materials for different BRDF representations.
Figure 6 shows reconstruction performance on a visually diverse set of materials of the MERL database, for different approaches. We qualitatively compare the methods through renderings of a scene with environment map illumination. Ground truth is produced by interpolating the tabulated MERL data. The comparison reveals that most methods struggle with one particular type of material: a GGX fit tends to blur the highlights; Bagher et al. [BSH12], on the other hand, achieve accurate specular highlights, but the diffuse albedo seems too low overall. Out of all the compared representations, our method produces the closest visual fits, followed by NPF [BSN16], a non-parametric BRDF fitting algorithm recently cited as state-of-the-art [DJ18]. A quantitative analysis of the results, seen in Figure 7 and Table 1, shows that our representation outperforms the other methods in multiple image-based error metrics. In particular, NPF [BSN16]
seems to lose fitting accuracy at very grazing angles, which is where the error is the highest on average (see Figure 7). A more detailed analysis of the functional shape of the NPF lobes confirms this observation. In Figure 8 we display polar plots (in log scale) of the specular lobes of two materials from MERL, comparing NBRDF and NPF fittings with ground truth for fixed incident angles. For low values of the incident inclination θ_i there is generally good agreement between all representations, while at grazing angles only NBRDFs are able to match the original shape. Furthermore, in the bottom plot we observe that NPF tends to produce unusually long tails. In the supplemental material we provide polar plot comparisons for the full set of MERL materials.

Table 1: Average image-based losses (MAE, RMSE and SSIM, with standard deviations) of the representation methods from Figure 6 over all MERL materials: NBRDF with adaptive sampling, NBRDF with uniform sampling, NPF [BSN16], Low et al. [LKYU12] (ABC), Bagher et al. [BSH12] (SGD), Dupuy et al. [DHI*15], and GGX.

Figure 8: Polar plots (log scale) comparing NPF [BSN16] and our NBRDF fittings with ground-truth lobes for fixed incident inclination angles θ_i. Top: grease-covered-steel. Bottom: black-oxidized-steel with a single fixed θ_i.

One of the key components in successfully training the NBRDF
networks is the angular sampling of the training loss. If training samples are concentrated near the specular lobe, the NBRDF will accurately reproduce the highlights. On the other hand, if the samples are regularly distributed, the Lambertian reflectance component will be captured more efficiently. We hence employ a BRDF-aware sampling of angles during training that emphasises samples close to the reflectance lobes. In practice, we uniformly (randomly) sample the spherical angles of the Rusinkiewicz parameterisation (θ_h, θ_d and φ_d), which results in a sample concentration around the specular direction, while retaining sufficient coverage of the full hemisphere.

Figure 9: Neural BRDF reconstruction of anisotropic materials from the RGL database [DJ18].

Figure 10: SSIM error for all materials from the MERL database using the BRDF reconstruction methods from Figure 6.

Table 1 shows that this adaptive strategy for training-sample generation produces much better results over the whole database and allows us to outperform analytic model fits in various error metrics. Finally, in Figure 10 we display the SSIM error for all materials from the MERL database, for all discussed reconstruction methods. Our NBRDF adaptive sampling outperforms the other methods for almost all materials, with the exception of a small number of highly specular materials. Please refer to the supplemental material for full details of the reconstructions, including all materials from the MERL and RGL [DJ18] databases.
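The mapping from uniformly sampled Rusinkiewicz angles back to a direction pair, which is what concentrates training samples around the specular configuration, can be sketched as follows (illustrative NumPy, assuming a z-up shading frame):

```python
import numpy as np

rng = np.random.default_rng(2)

def rusinkiewicz_to_dirs(th_h, th_d, ph_d, ph_h=0.0):
    """Map Rusinkiewicz angles (theta_h, theta_d, phi_d[, phi_h]) to (wi, wo)."""
    # Half-vector from its spherical angles.
    h = np.array([np.sin(th_h) * np.cos(ph_h),
                  np.sin(th_h) * np.sin(ph_h),
                  np.cos(th_h)])
    # Difference vector expressed in the frame whose pole is h.
    d = np.array([np.sin(th_d) * np.cos(ph_d),
                  np.sin(th_d) * np.sin(ph_d),
                  np.cos(th_d)])
    # Rotate d back into the shading frame: about y by th_h, then z by ph_h.
    cy, sy = np.cos(th_h), np.sin(th_h)
    cz, sz = np.cos(ph_h), np.sin(ph_h)
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    wi = Rz @ Ry @ d
    wo = 2.0 * np.dot(wi, h) * h - wi   # wo is wi reflected about h
    return wi, wo

# Uniform random Rusinkiewicz angles, as used during NBRDF training:
th = rng.random(3) * np.array([np.pi / 2, np.pi / 2, np.pi])
wi, wo = rusinkiewicz_to_dirs(*th)
```

Because θ_d is sampled uniformly, a large fraction of the resulting (wi, wo) pairs lies close to the mirror configuration wi ≈ wo reflected about h, which is exactly where specular lobes live.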
In Figure 9 we display the NBRDF reconstructions of multiple anisotropic materials from the RGL database [DJ18]. The networks used are the same as in the isotropic results of Figure 6 (i.e. 6×21×21×3, for a total of 675 weights). The reconstruction of the anisotropy is surprisingly robust, especially taking into account the compactness of the network. There are, however, more perceivable differences in the visual fits than in the isotropic NBRDF encodings, which is reflected in the average SSIM error. Lower reconstruction errors can be achieved by increasing the network size of the encoding NBRDF, providing fine control over the level of detail of the representation. In Section 4.5 we analyse the dependence of the reconstruction error on the network size, comparing with other representations in terms of memory footprint.

Although our NBRDFs provide a very accurate fit of individual materials, unifying the encoding space opens many new possibilities. We use the NBRDF encodings of MERL materials to train our autoencoder, which compresses NBRDFs to a 32-dimensional latent space.

Figure 11: t-SNE clustering of the latent embedding of MERL materials produced by the NBRDF autoencoder. Test-set materials are indicated in red.

In Table 2 we summarise various reconstruction error metrics, comparing our autoencoding with a PCA factorisation across MERL. Our implementation of PCA follows Nielsen et al.'s [NJR15], who proposed various improvements over traditional PCA, most importantly a log-mapping of reflectance values relative to a median BRDF measured over the training set. The training of both methods was performed with the same 80%-20% split of materials from MERL. The full set of renderings and errors can be found in the supplemental material.

Figure 12: New materials generated by latent interpolation of MERL BRDFs [MPBM03]. Materials on the sides correspond to reconstructed original materials, while the materials in-between were created by uniform interpolation of the original embedding positions in the autoencoder's latent space.
Table 2: Average image-based reconstruction losses (MAE, RMSE and SSIM) over all MERL materials for our NBRDF autoencoder and a PCA factorisation with 32 components.

The further compression of NBRDFs from 675 parameters to 32 inevitably leads to a degradation of the appearance after decoding; however, this is not an issue, as the main application of the autoencoder lies in the material embedding. Figure 11 shows a t-SNE clustering of the latent embedding learned by the autoencoder. The projection to the latent space behaves sensibly, as materials with similar albedo or shininess cluster together. This 32-dimensional encoding is the basis for our subsequent importance-sampling parameter prediction. The stability of the latent space is further demonstrated in Figure 12, where we linearly interpolate, in latent space, between encodings of MERL materials, and visualise the resulting decoded materials.

We leverage the stable embedding of materials provided by the autoencoder to predict importance sampling parameters. In practice, we train a network to predict the 2 Blinn-Phong distribution parameters that are used in the importance sampling routine. We train on a subset of materials from the MERL database, using fitted Blinn-Phong parameters [NDM05] as ground-truth labels. In Figure 13 we compare and analyse the effect of different importance sampling methods, applied to multiple materials from MERL unseen by our importance-sampling prediction network. Renderings are produced with a fixed low sample count per pixel, with the exception of the ground truth, which is rendered at a much higher sample count. Each column is associated with a different importance sampling method, with all reflectance values evaluated from the original tabulated MERL data. We compare uniform sampling, Blinn-Phong importance sampling (with fitted parameters, and with parameters predicted by our network), and Dupuy et al.'s [DHI*15] routine.
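The latent interpolation of Figure 12 amounts to a linear blend in the 32-dimensional embedding space followed by decoding into NBRDF weights. A minimal sketch, with a hypothetical linear stand-in for the trained decoder (the real decoder is the autoencoder's learned network):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-in decoder: 32-value embedding -> 675 NBRDF weights.
D = rng.normal(0, 0.1, (675, 32))
decode = lambda z: D @ z

# Embeddings of two (placeholder) materials.
z_a, z_b = rng.normal(size=32), rng.normal(size=32)

# New materials along the interpolation path, as in Figure 12:
blends = [decode((1.0 - t) * z_a + t * z_b) for t in np.linspace(0.0, 1.0, 5)]
```

Each blended weight vector would then be reshaped into a 6×21×21×3 NBRDF and rendered; the smoothness of the path is a property of the learned latent space, not of this sketch.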
Even though a Blinn-Phong lobe is not expressive enough to accurately describe and fit the captured data, its parameters are sufficient to drive an efficient importance sampling of the reflectance distribution. Depending on the material, the predicted Blinn-Phong parameters can even reveal themselves better suited for importance sampling than the ground-truth, optimised Blinn-Phong parameters. In addition to this image-based comparison, we also plot multiple error metrics as a function of samples per pixel, to compare the respective sampling methods. Both Phong- and GGX-driven importance sampling converge quickly and keep a significant lead over uniform sampling. As shown in the plots, our importance sampling prediction can be tuned to GGX parameters (ground-truth labels from Bieron and Peers [BP20]) as well as to Blinn-Phong parameters, or any arbitrary distribution. For simplicity, we choose the Blinn-Phong distribution.

Figure 13: Importance sampling of test-set materials from the MERL database. Left to right: ground truth (high sample count), uniform sampling, Dupuy et al. [DHI*15], Phong fit, and our method (NBRDF AE).
Figure 14: Average errors (log scale) vs. SPP for all 20 MERL test-set materials using the Veach scene from Figure 13. Left to right: MAE, RMSE, MAPE, PSNR. Compared methods: uniform sampling, Phong fit, Phong-based NBRDF prediction (ours), GGX fit, GGX-based NBRDF prediction (ours), and Dupuy et al. [DHI*15].
Figure 15: Importance sampling of the kitchen scene, rendered at a high sample count for the ground truth and a low sample count for the compared methods (uniform sampling, Dupuy et al. [DHI*15], and ours), with per-crop RMSE values shown. Most materials in the scene, shown in flat colour in the central image, have been replaced by MERL test-set materials.

Figure 16: Average errors (log scale) vs. SPP for MERL test-set materials in the kitchen scene from Figure 15. Left to right: MAE, RMSE, MAPE, PSNR. Compared methods: uniform sampling, Phong fit, NBRDF AE (ours), and Dupuy et al. [DHI*15].
Table 3: Rays traced per second in Mitsuba [Jak10] and memory footprint, for different material representations. The NBRDF numbers correspond to the -weights network ( × × × ).

    Method                     Rays/sec (× )    Memory (KB)
    Bagher et al. [BSH12]      .64              0.
    RGL [DJ18]                 .66              48.
    NBRDF + Phong IS (ours)    .50              2.
    Cook-Torrance              .59              0.
    Dupuy et al. [DHI*15]      .05              2.
    Löw et al. [LKYU12]        .13              0.
    GGX                        .82              0.
    NPF [BSN16]                –                .

Blinn-Phong distribution: more advanced models will provide a better reconstruction, but not necessarily a better sampling routine. More complex models might fit the specular lobe more precisely, but neglect other reflectance components of the data, such as sheen in fabric datasets.

In Figure 15 we show importance-sampling results for a complex scene. The majority of the original BRDFs in the scene have been replaced by materials from the MERL database, drawn from the test set of our importance-sampling parameter-prediction network. We show crops from the renderings and compare our Phong-based importance sampling against uniform sampling and the method of Dupuy et al. [DHI*15]. Our method consistently shows lower noise across the scene, as also reflected in the numerical errors of Figure 16, which show faster convergence for our method.

We compare the performance of our combined pipeline (NBRDF reconstruction with Phong-based importance sampling) to other compact representations that combine fast BRDF evaluation with built-in importance-sampling strategies. Table 3 shows that an unoptimised implementation of NBRDFs combined with Phong importance sampling, although slower than other representations, offers comparable rendering performance, even relative to simple analytic models such as Cook-Torrance.

Finally, in Figure 17 we compare multiple BRDF representation methods in terms of average reconstruction SSIM on the MERL database and the memory footprint of the encoding. We show that the NBRDF network size can be adjusted to select the reconstruction accuracy. For very small networks ( weights) the NBRDF reconstruction is inaccurate, and parametric representations are thus to be preferred.
However, for NBRDF networks of weights, the reconstruction accuracy already exceeds that of the best parametric encoding (Löw et al. [LKYU12]) and is equivalent to a state-of-the-art non-parametric method (NPF [BSN16]).
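To make explicit how a sampling routine of this kind plugs into a renderer, the following sketch shows a single-bounce Monte Carlo estimator in tangent space: each sample draws a direction from the fitted lobe and weights the BRDF value (e.g. an NBRDF network query) by cos(theta)/pdf. All names are illustrative; this is not the Mitsuba integrator used for the measurements in Table 3.

```python
import math
import random

def estimate_radiance(brdf_eval, sample_dir, incident_light, wo, spp):
    # Single-bounce estimator: mean of f(wi, wo) * L(wi) * cos(theta_i) / pdf(wi).
    total = [0.0, 0.0, 0.0]
    for _ in range(spp):
        wi, pdf = sample_dir(wo, random.random(), random.random())
        if pdf <= 0.0 or wi[2] <= 0.0:
            continue  # degenerate sample or direction below the surface
        f = brdf_eval(wi, wo)       # RGB reflectance, e.g. an NBRDF network query
        L = incident_light(wi)      # RGB incident radiance
        w = wi[2] / pdf             # cos(theta_i) / pdf, with the normal at (0, 0, 1)
        for c in range(3):
            total[c] += f[c] * L[c] * w
    return [t / spp for t in total]

def uniform_hemisphere(wo, u1, u2):
    # Baseline sampler with constant pdf 1 / (2*pi), the "Uniform" curve in the plots.
    z = u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z), 1.0 / (2.0 * math.pi)
```

Swapping `uniform_hemisphere` for a lobe-based sampler leaves the estimator unbiased while concentrating samples where the integrand is large, which is exactly the convergence gap the SPP plots measure.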
5. Conclusions
We propose a compact, accurate neural model to encode real-world isotropic and anisotropic measured BRDFs. Combining the learning power of neural networks with a continuous parametrisation allows us to train a representation that implicitly interpolates and preserves fidelity to the original data at high compression rates. A new network instance is trained for every new material, but training is fast and efficient as the networks are very lightweight.

Figure 17: Average SSIM vs. memory footprint (log scale) for multiple representations of BRDFs, including standard deviations. In our method (NBRDF) the network size can be adjusted to select the reconstruction accuracy, so we include data points for multiple sizes.

We also show that the models are sufficiently well behaved to be further compressed by an autoencoder. The learned embedding space of materials opens doors to new applications such as interpolating between materials and learning to predict material-related properties. Specifically, we show that the latent positions can be mapped to importance-sampling parameters of a given distribution. The computational cost of network evaluation is not significantly higher than that of equivalent analytic BRDFs, and the added importance-sampling routine yields comparable rendering convergence speed. Overall, our model provides a high-accuracy real-world BRDF representation at a rendering performance comparable to analytic models.

In future work, our architecture could be applied to spatially-varying materials, for instance to derive spatially-varying importance-sampling parameters on the fly for procedurally created objects and materials. Similarly to the importance-sampling parameter prediction, our meta-learning architecture can be used to learn further mappings, enabling applications such as perceptual material editing and fast analytic-model fitting.
References

[AS00] Ashikhmin, Michael and Shirley, Peter. "An Anisotropic Phong BRDF Model". J. Graph. Tools (2000). ISSN: 1086-7651.
[Bli77] Blinn, James F. "Models of Light Reflection for Computer Synthesized Pictures". SIGGRAPH Comput. Graph. (1977). ISSN: 0097-8930.
[BP20] Bieron, J. and Peers, P. "An Adaptive BRDF Fitting Metric". Computer Graphics Forum (2020). eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14054.
[BSH12] Bagher, M. M., Soler, C., and Holzschuch, N. "Accurate Fitting of Measured Reflectances Using a Shifted Gamma Micro-facet Distribution". Computer Graphics Forum (2012). eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1467-8659.2012.03147.x.
[BSN16] Bagher, Mahdi M., Snyder, John, and Nowrouzezahrai, Derek. "A Non-Parametric Factor Microfacet Model for Isotropic BRDFs". ACM Trans. Graph. (2016). ISSN: 0730-0301.
[BVM*17] Bako, Steve, Vogels, Thijs, McWilliams, Brian, et al. "Kernel-Predicting Convolutional Networks for Denoising Monte Carlo Renderings". ACM Trans. Graph. (2017). ISSN: 0730-0301.
[CJAMJ05] Clarberg, Petrik, Jarosz, Wojciech, Akenine-Möller, Tomas, and Jensen, Henrik Wann. "Wavelet Importance Sampling: Efficiently Evaluating Products of Complex Functions". ACM Trans. Graph. (2005). ISSN: 0730-0301.
[CKS*17] Chaitanya, Chakravarty R. Alla, Kaplanyan, Anton S., Schied, Christoph, et al. "Interactive Reconstruction of Monte Carlo Image Sequences Using a Recurrent Denoising Autoencoder". ACM Trans. Graph. (2017). ISSN: 0730-0301.
[CPF10] Colbert, Mark, Premoze, Simon, and Francois, Guillaume. "Importance Sampling for Production Rendering". ACM SIGGRAPH 2010 Courses. SIGGRAPH '10. 2010.
[CT82] Cook, R. L. and Torrance, K. E. "A Reflectance Model for Computer Graphics". ACM Trans. Graph. (1982). ISSN: 0730-0301.
[DAD*18] Deschaintre, Valentin, Aittala, Miika, Durand, Fredo, et al. "Single-Image SVBRDF Capture with a Rendering-Aware Deep Network". ACM Trans. Graph. (2018). ISSN: 0730-0301.
[DHI*15] Dupuy, Jonathan, Heitz, Eric, Iehl, Jean-Claude, et al. "Extracting Microfacet-Based BRDF Parameters from Arbitrary Materials with Power Iterations". Comput. Graph. Forum (2015). ISSN: 0167-7055.
[DJ18] Dupuy, Jonathan and Jakob, Wenzel. "An Adaptive Parameterization for Efficient Material Acquisition and Rendering". Transactions on Graphics (Proceedings of SIGGRAPH Asia) (2018).
[Don19] Dong, Yue. "Deep Appearance Modeling: A Survey". Visual Informatics (2019). DOI: 10.1016/j.visinf.2019.07.003.
[DSB17] Dinh, Laurent, Sohl-Dickstein, Jascha, and Bengio, Samy. "Density Estimation Using Real NVP". OpenReview.net, 2017.
[GGG*16] Guarnera, D., Guarnera, G. C., Ghosh, A., et al. "BRDF Representation and Acquisition". Proceedings of the 37th Annual Conference of the European Association for Computer Graphics: State of the Art Reports. EG '16. Lisbon, Portugal: Eurographics Association, 2016, 625–650.
[HGC*20] Hu, Bingyang, Guo, Jie, Chen, Yanjun, et al. "DeepBRDF: A Deep Representation for Manipulating Measured BRDF". Computer Graphics Forum 39 (May 2020), 157–166.
[Jak10] Jakob, Wenzel. Mitsuba renderer. 2010.
[JCJ09] Jarosz, Wojciech, Carr, Nathan, and Jensen, Henrik. "Importance Sampling Spherical Harmonics". Comput. Graph. Forum 28 (Apr. 2009), 577–586.
[KMM*17] Kallweit, Simon, Müller, Thomas, McWilliams, Brian, et al. "Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks". ACM Trans. Graph. (2017). ISSN: 0730-0301.
[LFTG97] Lafortune, Eric P. F., Foo, Sing-Choong, Torrance, Kenneth E., and Greenberg, Donald P. "Non-linear Approximation of Reflectance Functions". Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques. SIGGRAPH '97. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 1997, 117–126. ISBN: 0-89791-896-7. DOI: 10.1145/258734.258801.
[LKYU12] Löw, Joakim, Kronander, Joel, Ynnerman, Anders, and Unger, Jonas. "BRDF Models for Accurate and Efficient Rendering of Glossy Surfaces". ACM Trans. Graph. (2012). ISSN: 0730-0301.
[LRR04] Lawrence, Jason, Rusinkiewicz, Szymon, and Ramamoorthi, Ravi. "Efficient BRDF Importance Sampling Using a Factored Representation". ACM SIGGRAPH 2004 Papers. SIGGRAPH '04. Los Angeles, California: Association for Computing Machinery, 2004, 496–505. ISBN: 9781450378239. DOI: 10.1145/1186562.1015751.
[MLFR19] Maximov, Maxim, Leal-Taixé, Laura, Fritz, Mario, and Ritschel, Tobias. "Deep Appearance Maps". Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Oct. 2019.
[MMR*19] Müller, Thomas, McWilliams, Brian, Rousselle, Fabrice, et al. "Neural Importance Sampling". ACM Trans. Graph. (2019). ISSN: 0730-0301. DOI: 10.1145/3341156.
[MPBM03] Matusik, Wojciech, Pfister, Hanspeter, Brand, Matthew, and McMillan, Leonard. "A Data-Driven Reflectance Model". ACM Trans. Graph. 22 (July 2003), 759–769.
[MWL*99] Marschner, Stephen R., Westin, Stephen H., Lafortune, Eric P. F., et al. "Image-Based BRDF Measurement Including Human Skin". Proceedings of the 10th Eurographics Conference on Rendering. EGWR '99. Granada, Spain: Eurographics Association, 1999, 131–144. ISBN: 321183382X.
[NDM05] Ngan, Addy, Durand, Frédo, and Matusik, Wojciech. "Experimental Analysis of BRDF Models". Proceedings of the Sixteenth Eurographics Conference on Rendering Techniques. EGSR '05. Konstanz, Germany: Eurographics Association, 2005, 117–126. ISBN: 3-905673-23-1.
[NDM06] Ngan, Addy, Durand, Frédo, and Matusik, Wojciech. "Image-driven Navigation of Analytical BRDF Models". Proceedings of the 17th Eurographics Conference on Rendering Techniques. EGSR '06. Nicosia, Cyprus: Eurographics Association, 2006, 399–407. ISBN: 3-905673-35-5.
[NJR15] Nielsen, Jannik Boll, Jensen, Henrik Wann, and Ramamoorthi, Ravi. "On Optimal, Minimal BRDF Sampling for Reflectance Acquisition". ACM Trans. Graph. (2015). ISSN: 0730-0301.
[RGJW20] Rainer, Gilles, Ghosh, A., Jakob, Wenzel, and Weyrich, T. "Unified Neural Encoding of BTFs". Comput. Graph. Forum 39 (2020), 167–178.
[RJGW19] Rainer, Gilles, Jakob, Wenzel, Ghosh, Abhijeet, and Weyrich, Tim. "Neural BTF Compression and Interpolation". Computer Graphics Forum (Proc. Eurographics) (2019).
[Rus98] Rusinkiewicz, Szymon. "A New Change of Variables for Efficient BRDF Representation". Rendering Techniques (Proc. Eurographics Workshop on Rendering). June 1998.
[RWG*13] Ren, Peiran, Wang, Jiaping, Gong, Minmin, et al. "Global Illumination with Radiance Regression Functions". ACM Trans. Graph. (2013). ISSN: 0730-0301.
[SKWW17] Sztrajman, Alejandro, Křivánek, Jaroslav, Wilkie, Alexander, and Weyrich, Tim. "Image-based Remapping of Material Appearance". Proc. 5th Workshop on Material Appearance Modeling. Ed. by Klein, Reinhard and Rushmeier, Holly. MAM '17. Helsinki, Finland: The Eurographics Association, June 2017, 5–8. ISBN: 978-3-03868-035-2.
[SSN18] Soler, Cyril, Subr, Kartic, and Nowrouzezahrai, Derek. "A Versatile Parameterization for Measured Material Manifolds". Computer Graphics Forum 37 (May 2018), 135–144.
[VF18] Vávra, R. and Filip, J. "Adaptive Slices for Acquisition of Anisotropic BRDF". Computational Visual Media (2018). DOI: 10.1007/s41095-017-0099-z.
[War92] Ward, Gregory J. "Measuring and Modeling Anisotropic Reflection". SIGGRAPH Comput. Graph. (1992). ISSN: 0097-8930.
[WMLT07] Walter, Bruce, Marschner, Stephen R., Li, Hongsong, and Torrance, Kenneth E. "Microfacet Models for Refraction Through Rough Surfaces". Proceedings of the 18th Eurographics Conference on Rendering Techniques. EGSR '07. Grenoble, France: Eurographics Association, 2007, 195–206. ISBN: 978-3-905673-52-4.
[ZWW18] Zsolnai-Fehér, Károly, Wonka, Peter, and Wimmer, Michael. "Gaussian Material Synthesis". ACM Trans. Graph. (2018).
[ZZ19] Zheng, Quan and Zwicker, Matthias. "Learning to Importance Sample in Primary Sample Space". Computer Graphics Forum (2019). DOI: 10.1111/cgf.13628.