EasyPBR: A Lightweight Physically-Based Renderer
Radu Alexandru Rosu (https://orcid.org/0000-0001-7349-4126) and Sven Behnke (https://orcid.org/0000-0002-5040-7525)
Autonomous Intelligent Systems, University of Bonn
[email protected], [email protected]

This work has been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2070 - 390732324 and by the German Federal Ministry of Education and Research (BMBF) in the project "Kompetenzzentrum: Aufbau des Deutschen Rettungsrobotik-Zentrums" (A-DRZ).
Keywords: Physically-Based Rendering, Synthetic Data Generation, Visualization Toolkit

Abstract: Modern rendering libraries provide unprecedented realism, producing real-time photorealistic 3D graphics on commodity hardware. Visual fidelity, however, comes at the cost of increased complexity and difficulty of usage, with many rendering parameters requiring a deep understanding of the pipeline. We propose EasyPBR as an alternative rendering library that strikes a balance between ease-of-use and visual quality. EasyPBR consists of a deferred renderer that implements recent state-of-the-art approaches in physically based rendering. It offers an easy-to-use Python and C++ interface that allows high-quality images to be created in only a few lines of code or directly through a graphical user interface. The user can choose between fully controlling the rendering pipeline or letting EasyPBR automatically infer the best parameters based on the current scene composition. The EasyPBR library can help the community to more easily leverage the power of current GPUs to create realistic images. These can then be used as synthetic data for deep learning or for creating animations for academic purposes.
Introduction

Modern rendering techniques have become advanced enough for photorealistic images to be produced even on commodity hardware. Advances such as real-time ray tracing and physically-based materials have allowed current rendering pipelines to closely follow the theoretical understanding of light propagation and how it interacts with the real world. However, such advancements in rendering come with an increase in complexity for the end user, often requiring a deep understanding of the rendering pipeline to achieve good results.

Our proposed EasyPBR addresses this issue by offering a 3D viewer for visualizing various types of data (meshes, point clouds, surfels, etc.) with high-quality renderings while maintaining a low barrier of entry. Scene setup and object manipulation can be done either through Python or C++. Furthermore, meshes can be manipulated through the powerful libigl (Jacobson et al., 2018) library for geometry processing, since our mesh representation shares a common interface. The user can choose to configure rendering parameters before the scene setup or at runtime through the GUI. If the parameters are left untouched, EasyPBR will try to infer them in order to best render the given scene. EasyPBR uses state-of-the-art rendering techniques and offers easy extensions for implementing novel methods through a thin abstraction layer on top of OpenGL.

EasyPBR and all the code needed to reproduce the figures in this paper are made available at https://github.com/AIS-Bonn/easy_pbr. A video with additional footage is also available online.

Our main contributions are:
• a lightweight framework for real-time physically-based rendering,
• an easy-to-use Python front-end for scene setup and manipulation, and
• powerful mesh manipulation tools through the libigl (Jacobson et al., 2018) library.

Related Work
Various 3D libraries currently offer rendering of high-fidelity visuals. Here we compare against the most widely used ones.

Meshlab (Cignoni et al., 2008) is a popular open-source tool for processing, editing, and visualizing triangular meshes. Its functionality can be accessed either through the graphical user interface (GUI) or the provided scripting interface. This makes Meshlab difficult to integrate into current Python or C++ projects. In contrast, EasyPBR offers both a Python package that can be easily imported and a shared library that can be linked into an existing C++ project. EasyPBR also integrates with libigl (Jacobson et al., 2018), allowing the user to access powerful tools for geometry processing. Additionally, EasyPBR offers more realistic renderings of meshes together with functionality for creating high-resolution screenshots or videos.

Blender (Blender Online Community, 2018) is an open-source 3D creation suite. It includes all aspects of 3D creation, from modeling to rendering and video editing, and it offers a Python API, which can be used for scripting. However, the main usage of Blender is to create high-quality visuals through ray-traced rendering, which is far from real-time capable. The Python API is also not the main intended use case of Blender, and while rendering commands can be issued through scripts, there is no visual feedback during the process. In contrast, we offer real-time rendering and control over the scene from small Python or C++ scripts.

VTK (Schroeder et al., 2000) is an open-source scientific analysis and visualization tool. While initially its main rendering method was based on Phong shading, recently a physically-based renderer together with image-based lighting (IBL) has also been included. Extending the main rendering model with new techniques is cumbersome, as it requires extensive knowledge of the VTK framework. In contrast, our rendering methods are easy to use, and we keep a thin layer of abstraction on top of OpenGL for simple extensibility using custom callbacks.

Marmoset Toolbag (Marmoset, 2020) is a visual tool designed to showcase 3D art. It features a real-time PBR renderer, which allows easy setup of a scene to create high-quality 3D presentations. However, it is not available on Linux and is also distributed under a paid license.

Unreal Engine (Epic Games, 2007) is a state-of-the-art engine created with the goal of providing real-time high-fidelity visuals. It has been used in professional game-making, architecture visualization, and VR experiences.
Figure 1: Comparison of various PBR rendering tools: a) Marmoset (Marmoset, 2020), b) EasyPBR (ours), c) VTK (Schroeder et al., 2000).
While it provides a plethora of tools for content creation, the entry barrier can also be quite high. Additionally, the provided Python API can only be used as an internal tool for scripting and results in cumbersome setup code for even simple importing of assets and rendering. In contrast, EasyPBR acts as a Python library that can be readily imported in any existing project and used to draw to screen in only a couple of lines of code.

We showcase results from EasyPBR compared with VTK and Marmoset in Fig. 1. We use the high-quality 3D scan from (3D Scan Store, 2020) and render it under similar setups. While our renderer does not feature sub-surface scattering shaders like Marmoset, it can still achieve high-quality results in less than 10 lines of code. In contrast, for similar results, VTK requires more than 150 lines in which the user needs to manually define rendering passes for effects such as shadow baking and shadow mapping.
Physically-Based Rendering

Physically-based rendering (PBR) is a set of shading models designed to achieve high realism through accurately modeling light and material interaction. Previous shading models like Phong shading are not based on a mathematically rigorous analysis of light and can lead to unrealistic results. PBR attempts to address this issue by basing the shading equations on the laws of light interaction.

PBR follows the mathematical modeling of light based on the reflectance equation:

    L_o(p, ω_o) = ∫_Ω f_r(p, ω_i, ω_o) L_i(p, ω_i) (n · ω_i) dω_i ,    (1)

where L_o(p, ω_o) is the outgoing radiance from point p in direction ω_o, which gathers over the hemisphere Ω the incoming radiance L_i weighted by the BRDF f_r(p, ω_i, ω_o) and the angle of incidence between the incoming ray ω_i and the surface normal n.

To model materials in a PBR framework, we use the Cook-Torrance (Cook and Torrance, 1982) BRDF. Material properties are specified by two main parameters: metalness and roughness. These two parameters cover the vast majority of real-world materials, and by using a physically-based renderer, materials look realistic under different illumination conditions.

Since Cook-Torrance is just an approximation of the underlying physics and there are many variants used in the literature, some more realistic, others more efficient, we choose the same approximation used in Unreal Engine 4 (Karis, 2013), which strikes a good balance between realism and efficiency.

To fully solve the reflectance equation, light incoming onto the surface would have to be integrated over the whole hemisphere. However, this integral is not tractable in practice, and therefore one approximation is to gather only the direct contributions of the light sources in the scene. This has the undesirable effect of neglecting secondary bounces of light and causing shadows to be overly dark, yielding a non-realistic appearance. To address this, we use image-based lighting, which consists of embedding our 3D scene inside a high dynamic range (HDR) environment cubemap in which every pixel acts as a source of light. This greatly enhances the realism of the scene and gives a sense that our 3D models "belong" in a certain environment, as changes in the HDR cubemap have a visible effect on the model's lighting.

Efficient sampling of the radiance from the environment map is done by precomputing increasingly blurrier versions of the cubemap, allowing efficient sampling at runtime of only one texel that corresponds to the radiance over a large region of the environment map. Specular reflections are also precomputed using the split-sum approximation. For more detail, we refer to the excellent article from Epic Games (Karis, 2013).

We further extend the IBL by implementing the approach of (Fdez-Agüera, 2019), which further improves the visual quality of materials by taking into account multiple scatterings of light with only a slight overhead in performance.
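To make the shading model concrete, the following is a minimal NumPy sketch of the Cook-Torrance BRDF under the approximations popularized by Unreal Engine 4 (Karis, 2013): a GGX normal distribution, the Schlick-GGX geometry term with the k remapping for analytic lights, and the Schlick Fresnel approximation. The function name and the scalar metalness/roughness parameterization are our own illustration, not EasyPBR's actual shader code.

    import numpy as np

    def cook_torrance_brdf(n, v, l, albedo, metalness, roughness):
        # n, v, l: unit vectors (normal, view, light); albedo: RGB array;
        # metalness, roughness: scalars in [0, 1]
        h = (v + l) / np.linalg.norm(v + l)                # half vector
        n_dot_l = max(np.dot(n, l), 1e-4)
        n_dot_v = max(np.dot(n, v), 1e-4)
        n_dot_h = max(np.dot(n, h), 0.0)
        h_dot_v = max(np.dot(h, v), 0.0)

        # GGX / Trowbridge-Reitz distribution with alpha = roughness^2
        a2 = roughness ** 4
        d = a2 / (np.pi * ((n_dot_h ** 2) * (a2 - 1.0) + 1.0) ** 2)

        # Schlick-GGX geometry term, UE4's k remapping for analytic lights
        k = (roughness + 1.0) ** 2 / 8.0
        g = (n_dot_l / (n_dot_l * (1.0 - k) + k)) * \
            (n_dot_v / (n_dot_v * (1.0 - k) + k))

        # Schlick Fresnel; dielectrics get F0 = 0.04, metals tint it by albedo
        f0 = 0.04 * (1.0 - metalness) + albedo * metalness
        f = f0 + (1.0 - f0) * (1.0 - h_dot_v) ** 5

        specular = d * g * f / (4.0 * n_dot_l * n_dot_v)
        diffuse = (1.0 - f) * (1.0 - metalness) * albedo / np.pi
        return diffuse + specular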
Deferred Rendering

Rendering methods are often divided into two groups, forward rendering and deferred rendering, both with different pros and cons.

Forward rendering works by rendering the whole scene in one pass, projecting every triangle to the screen and shading it in one render call. This has the advantage of being simple to implement but may suffer from overdraw, as having a lot of overlapping geometry causes much wasted effort in shading and lighting.

Deferred rendering attempts to solve this issue by delaying the shading of the scene to a second step. The first step of a deferred renderer writes the material properties of the scene into a screen-size buffer called the G-Buffer. The G-Buffer typically records the position of the fragments, color, and normal. A second rendering pass reads the information from the G-Buffer and performs the light calculations. This has the advantage of performing costly shading operations only for the pixels that will actually be visible in the final image.

EasyPBR uses deferred rendering, as its performance scales well with an increasing number of lights. Additionally, various post-processing effects like screen-space ambient occlusion (SSAO) are easier to implement in a deferred renderer than in a forward one, since all the screen-space information is already available in the G-Buffer.

Figure 2: The various G-Buffer channels together with ambient occlusion are composed into one final texture to be displayed on the screen. Here we display slices of each channel that is used for compositing (final, metalness and roughness, albedo, SSAO, normals).
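As a toy illustration of the two-pass structure (not EasyPBR's GPU implementation, which works on OpenGL render targets), the following NumPy sketch fills a small G-Buffer in a geometry pass and then shades only the covered pixels in a separate lighting pass with a simple Lambertian term:

    import numpy as np

    H, W = 4, 4
    gbuffer = {
        "albedo": np.zeros((H, W, 3)),
        "normal": np.zeros((H, W, 3)),
        "depth":  np.full((H, W), np.inf),
    }

    def geometry_pass(fragments):
        # each fragment: (x, y, depth, albedo, normal); keep only the closest
        for x, y, depth, albedo, normal in fragments:
            if depth < gbuffer["depth"][y, x]:
                gbuffer["depth"][y, x] = depth
                gbuffer["albedo"][y, x] = albedo
                gbuffer["normal"][y, x] = normal

    def lighting_pass(light_dir):
        # light_dir: unit np.array pointing from the light into the scene;
        # shading is evaluated once per covered pixel, regardless of overdraw
        n_dot_l = np.clip(np.einsum("hwc,c->hw", gbuffer["normal"], -light_dir), 0.0, 1.0)
        covered = np.isfinite(gbuffer["depth"])
        return gbuffer["albedo"] * n_dot_l[..., None] * covered[..., None]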
Table 1: We structure the G-Buffer into four render targets.

    Target  Format  R / G / B               A
    RT0     RGBA8   Albedo                  Weight
    RT1     RGB8    Normals                 Unused
    RT2     RG8     Metal (R), Rough (G)    Unused
    RT3     R32     Depth (R)               Unused
The layout of our G-Buffer is described in Tab. 1. Please note that in our implementation we do not store the position of each fragment but rather store only the depth map as a floating-point texture and reconstruct the position from the depth value. This saves us from storing three float values for the position, heavily reducing the memory bandwidth requirements for writing and reading into the G-Buffer. We additionally store a weight value in the alpha channel of the first texture. This will be useful later when we render surfels, which splat and accumulate onto the screen with varying weights. Several channels in the G-Buffer are purposely left empty so that they can be used for further rendering passes.

A visualization of the various rendering passes and the final composed image is shown in Fig. 2.
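A sketch of how the position can be recovered from the stored depth is given below; the exact conventions (depth range, handedness, matrix layout) depend on the implementation, so treat this as an assumption-laden illustration rather than EasyPBR's exact code:

    import numpy as np

    def position_from_depth(uv, depth, proj):
        # uv: pixel coordinates in [0, 1]; depth: stored depth in [0, 1];
        # proj: 4x4 projection matrix (OpenGL-style conventions assumed)
        ndc = np.array([uv[0] * 2.0 - 1.0, uv[1] * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0])
        view = np.linalg.inv(proj) @ ndc   # un-project through the camera
        return view[:3] / view[3]          # perspective divide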
Mesh Representation

We represent objects in our 3D scene as a series of matrices containing per-vertex information and possible connectivity to create lines and triangles. The following matrices can be populated:
• V ∈ R^(n×3) vertex positions,
• N ∈ R^(n×3) per-vertex normals,
• C ∈ R^(n×3) per-vertex colors,
• T ∈ R^(n×3) per-vertex tangent vectors,
• B ∈ R^(n×1) per-vertex bi-tangent vector lengths,
• F ∈ Z^(m×3) triangle indices for mesh rendering,
• E ∈ Z^(k×2) edge indices for line rendering.

Note that for the bi-tangent vector, we store only the length, as the direction can be recovered through a cross product between the normal and the tangent (see the sketch below). This saves significant memory bandwidth and is faster than storing the full vector.
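The sketch below illustrates this layout and the on-demand recovery of the bi-tangent; the random data is only a stand-in:

    import numpy as np

    n_verts = 3
    V = np.random.rand(n_verts, 3)               # vertex positions
    N = np.tile([0.0, 0.0, 1.0], (n_verts, 1))   # per-vertex normals
    T = np.tile([1.0, 0.0, 0.0], (n_verts, 1))   # per-vertex tangents
    B = np.ones((n_verts, 1))                    # bi-tangent lengths only
    F = np.array([[0, 1, 2]])                    # one triangle

    # full bi-tangent vectors recovered on demand: length * (N x T)
    bitangents = B * np.cross(N, T)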
Mesh Rendering

Mesh rendering follows the general deferred rendering pipeline. The viewer iterates through the meshes in the scene and writes their attributes into the G-Buffer. The attributes used depend on the selected visualization mode for the mesh (either solid color, per-vertex color, or texture).

When the G-Buffer pass is finished, we run a second pass, which reads from the created buffer and creates any effect textures that might be needed (SSAO, bloom, shadows, etc.).

A third and final pass is afterwards run, which composes all the effect textures and the G-Buffer into the final image using PBR and IBL.

Point Cloud Rendering

Point cloud rendering is similar to mesh rendering, i.e., the attributes of the point cloud are written into the G-Buffer. The difference lies in the compositing phase, where PBR and IBL cannot be applied due to the lack of normal information. Instead, we rely on eye-dome lighting (EDL) (Boucheny and Ribes, 2011), which is a non-realistic rendering technique used to improve depth perception. The only information needed for EDL is a depth map. EDL works by looking at the depth of adjacent pixels in screen space and darkening the pixels which exhibit a sudden change of depth in their neighborhood. The bigger the local difference in depth values is, the darker the color is. The effect of EDL can be seen in Fig. 3.

Figure 3: a) Plain rendered point clouds result in flat shading and convey little information. b) Enabling eye-dome lighting gives a slight perception of depth, allowing the user to distinguish between various shapes. c) Adding also ambient occlusion enhances the effect even further.

Additionally, by sacrificing a bit more performance, the user can also enable SSAO, which further enhances the depth perception by darkening crevices in the model.
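The following NumPy sketch captures the core EDL idea on a depth image; the strength constant and the log-depth formulation are our assumptions, loosely following (Boucheny and Ribes, 2011):

    import numpy as np

    def edl_shade(depth, strength=100.0):
        # depth: positive per-pixel depth map; returns a darkening factor
        d = np.log(depth + 1e-6)
        response = np.zeros_like(d)
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            neighbor = np.roll(np.roll(d, dy, axis=0), dx, axis=1)
            response += np.maximum(0.0, d - neighbor)  # deeper than neighbor
        return np.exp(-strength * response)            # 1 = flat, -> 0 at edges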
Surfel Rendering

In various applications like simultaneous localization and mapping (SLAM) or 3D reconstruction, a common representation of the world is through surfels (Droeschel et al., 2017; Stückler and Behnke, 2014). Surfels are modeled as oriented disks with an ellipsoidal shape, and they can be used to model shapes that lack connectivity information. Rendering surfaces through surfels is done with splatting, which accumulates in screen space the contributions of various overlapping surfels. The three-step process of creating the surfels is illustrated in Fig. 4.
Figure 4: Surfel rendering is done in three steps. a) The vertex shader creates a basis from the normal, tangent, and bitangent vectors. b) The geometry shader creates from each vertex a rectangle oriented according to the basis. c) The fragment shader creates the elliptical shape by discarding the fragments in the corners of the rectangle.

Figure 5: Comparison between mesh and surfel rendering. For clarity, we reduce the radius of the surfels in the zoomed-in view.
Once the surfels are created, they are rendered into the G-Buffer. Surfels that overlap within a small distance of each other accumulate their attributes and increment a weight for the current pixels, which will be used later for normalization.

During surfel rendering, the G-Buffer is changed from being stored as unsigned bytes to half floats in order to support the accumulation of attributes for overlapping surfels. The composing pass then normalizes the G-Buffer by dividing the accumulated albedo, normals, metalness, and roughness by the weight stored in the alpha channel of the albedo.

Finally, composing proceeds as before with the PBR and IBL pipeline. This yields results similar to mesh rendering, as can be seen in Fig. 5.
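The accumulation and normalization can be sketched as follows; the buffer names and the splat interface are illustrative only:

    import numpy as np

    H, W = 2, 2
    albedo_acc = np.zeros((H, W, 3))
    normal_acc = np.zeros((H, W, 3))
    weight_acc = np.zeros((H, W, 1))   # stored in the albedo alpha channel

    def splat(y, x, albedo, normal, w):
        # overlapping surfels add their weighted attributes per pixel
        albedo_acc[y, x] += w * np.asarray(albedo)
        normal_acc[y, x] += w * np.asarray(normal)
        weight_acc[y, x] += w

    def normalize():
        # the composing pass divides every channel by the accumulated weight
        w = np.maximum(weight_acc, 1e-6)
        return albedo_acc / w, normal_acc / w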
Line Rendering

Line rendering is useful for showing the wireframe of meshes or for displaying edges between arbitrary vertices indicated by the E matrix. We perform line rendering by forward rendering directly into the final image, as we do not want lines to be affected by lighting and shadowing effects.

Post-Processing Effects

Multiple post-processing effects are supported in EasyPBR: shadows, SSAO, and bloom.
Shadows

EasyPBR supports point lights, which can cast realistic soft shadows onto the scene. Shadow computation is performed through shadow mapping (Williams, 1978). The process works by first rendering the scene only as a depth map into each point light as if it were a normal camera. Afterwards, during compositing, we check if a fragment's depth is greater than the depth recorded by a certain light. If it is greater, then the fragment lies behind the surface lit by the light and is therefore in shadow.

Figure 6: Various post-processing effects can be enabled in the renderer. Soft shadows and ambient occlusion convey a sense of depth, and bloom simulates the light bleed from bright parts of the scene like the sun or reflective surfaces. a) Shadows and SSAO, b) bloom.

In order to render soft shadows, we perform
Percentage-Closer Filtering (Reeves et al., 1987) by checking the depth not only at the current fragment but also at the neighboring ones in a 3×3 window and averaging the results.

Screen-Space Ambient Occlusion

Ambient occlusion is used to simulate the shadowing effect caused by objects blocking the ambient light. Simulating occlusion requires global information about the scene geometry and is usually performed through ray-tracing, which is costly to compute. Screen-space ambient occlusion addresses this issue by using only the current depth buffer as an approximation of the scene geometry, therefore avoiding the use of costly global information and making the ambient occlusion real-time capable. The effect of SSAO can be viewed in Fig. 6a.

Our SSAO implementation is based on the normal-oriented hemisphere method (Bavoil and Sainz, 2008). After creating the G-Buffer, we run the SSAO pass, in which we randomly take samples along the hemisphere placed at each pixel location and oriented according to the normal stored in the buffer. The samples are compared with the depth buffer in order to get a proxy of how much the surface is occluded by neighboring geometry. The SSAO effect is computed at half the resolution of the G-Buffer and bilaterally blurred in order to remove high-frequency noise caused by the low sample count.
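A heavily simplified, single-pixel sketch of the normal-oriented hemisphere idea follows; the mixing of pixel offsets with depth units and all constants are toy assumptions, not EasyPBR's shader code:

    import numpy as np

    rng = np.random.default_rng(0)

    def ssao_at_pixel(depth, px, py, normal, radius=4.0, n_samples=16):
        # depth: 2D depth buffer; normal: unit np.array at the pixel;
        # toy convention: x/y offsets in pixels, z offsets in depth units
        h, w = depth.shape
        occluded = 0
        for _ in range(n_samples):
            s = rng.normal(size=3)
            s /= np.linalg.norm(s)
            if np.dot(s, normal) < 0.0:
                s = -s                        # flip into the normal's hemisphere
            sx = int(np.clip(px + radius * s[0], 0, w - 1))
            sy = int(np.clip(py + radius * s[1], 0, h - 1))
            if depth[sy, sx] < depth[py, px] - radius * s[2]:
                occluded += 1                 # nearby geometry covers the sample
        return 1.0 - occluded / n_samples     # 1 = fully open, 0 = occluded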
Bloom

Bloom is the process by which bright areas of the image bleed their color onto adjacent pixels. This can be observed, for example, with very bright sunlight, which causes the nearby parts of the image to increase in brightness. Bloom is implemented by rendering into a bright-map only the parts of the scene that are above a certain level of brightness.

This bright-map would now need to be blurred with a Gaussian kernel and then added on top of the original image. However, performing blurring at the resolution of the full screen is too expensive for real-time purposes, and we therefore rely on approximations. We create an image pyramid with up to six levels from the bright-map. We blur each pyramid level starting from the second one upwards. Blurring using an image pyramid allows us to use very large effective kernels.

Finally, the bright-map pyramid is added on top of the original image in order to create a halo-like effect. The result can be seen in Fig. 6b.
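The pyramid-based bloom can be sketched as below; the 3-tap box blur, the nearest-neighbor upsampling, and the assumption that the image sides are divisible by 2^levels are simplifications of a real separable Gaussian blur:

    import numpy as np

    def blur3(img):
        # cheap 3-tap box blur along both image axes
        out = img.copy()
        for axis in (0, 1):
            out = (np.roll(out, 1, axis) + out + np.roll(out, -1, axis)) / 3.0
        return out

    def bloom(image, threshold=1.0, levels=6):
        # image: HDR (H, W, 3) array with H, W divisible by 2**levels
        bright = np.where(image > threshold, image, 0.0)   # bright-map
        result = image.copy()
        level, scale = bright, 1
        for _ in range(levels):
            level = blur3(level[::2, ::2, :])   # downsample, then blur
            scale *= 2
            # upsample back to full resolution and add onto the image
            result += np.kron(level, np.ones((scale, scale, 1)))
        return result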
Compositing

Compositing is the final rendering pass before showing the image on the screen. It takes all the previous rendering passes (G-Buffer, SSAO, etc.) and combines them to create the final image. Finally, after creating the composed image, it needs to be tone-mapped and gamma-corrected in order to bring the HDR values into a low dynamic range (LDR) displayable on the screen. For this, we use the Academy Color Encoding System (ACES) tone mapper due to its high-quality filmic look. We further offer support for the Reinhard (Reinhard et al., 2002) tone mapper.
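As an illustration, the widely used fitted ACES curve by Narkowicz (an approximation of the full ACES pipeline; the exact fit used in EasyPBR may differ), followed by gamma correction, looks as follows:

    import numpy as np

    def aces_tonemap(hdr):
        # Narkowicz's fitted ACES curve, applied per channel
        a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
        mapped = (hdr * (a * hdr + b)) / (hdr * (c * hdr + d) + e)
        return np.clip(mapped, 0.0, 1.0)

    def to_display(hdr, gamma=2.2):
        # tone-map to LDR, then gamma-correct for the screen
        return aces_tonemap(hdr) ** (1.0 / gamma)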
Automatic Scene Parameters

Various parameters govern the rendering process. The user can leave them untouched, and our rendering tool will try to make an educated guess for them at runtime.

By default, EasyPBR creates a 3-point light setup consisting of a key light that provides most of the light for the scene, a fill light softening the shadows, and a rim light placed behind the object to separate it from the background. The distances from the object center to the lights are determined such that the scene radiance at the object has a predefined value. This makes the lighting setup agnostic to the scale of the mesh, so EasyPBR can render any kind of mesh out of the box, regardless of the unit system it uses.
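Under an inverse-square falloff, placing a light so that it produces a predefined radiance at the object reduces to solving intensity / d^2 = target for the distance d. The sketch below illustrates this; the directions, intensities, and the target value are hypothetical:

    import numpy as np

    def place_light(center, direction, intensity, target_radiance):
        # inverse-square falloff: radiance ~ intensity / distance^2
        distance = np.sqrt(intensity / target_radiance)
        return np.asarray(center) + distance * np.asarray(direction)

    # hypothetical 3-point setup around an object at the origin
    key  = place_light([0, 0, 0], np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0), 100.0, 2.0)
    fill = place_light([0, 0, 0], np.array([-1.0, 1.0, 1.0]) / np.sqrt(3.0), 30.0, 2.0)
    rim  = place_light([0, 0, 0], np.array([0.0, 1.0, -1.0]) / np.sqrt(2.0), 60.0, 2.0)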
Table 2: Timings in milliseconds to render a frame.

             EasyPBR  VTK  Meshlab v2020.09  Meshlab v1.3.2
    Goliath  6.2      6.1  6.0               558
    Head     1.6      1.6  1.1               1.1
At any point at runtime, the user can tweak the position, intensity, and color of the lights.

The camera is placed in the world so that the entire object is in view. Also, the near and far planes of the camera are set according to the scale of the scene.

The SSAO radius is a function of the scene scale. By default, we choose the radius to be 5% of the scale of the scene to be rendered.

The rendering mode depends on the object in the scene. If the object has no connectivity provided as triangles in the F matrix, then we render it as a point cloud using EDL. Otherwise, we render it as a mesh. If the user provides normals and tangent vectors, then we render it as a series of surfels. This ensures that whatever data we put in, our objects will be visualized in an appropriate manner.

Evaluation

We evaluate the performance of EasyPBR and compare it to Meshlab and VTK, as they are common tools used for visualization. We run all three tools on an Nvidia RTX 2060. As a metric, we use the milliseconds per frame and test with two meshes: one high-resolution mesh with 23 million faces (the Goliath statue from Fig. 10) and the 3D-scanned head (Fig. 1) with half a million faces and high-resolution 8K textures. The results are shown in Tab. 2.

First, we remark that Meshlab v1.3.2, the version that is available in the Ubuntu 18.04 repositories, struggles to render the Goliath mesh, requiring over 500 ms per frame. This is due to an internal limitation on the amount of memory that is allowed for the geometry. Once the mesh uses more memory than this internal threshold, Meshlab silently switches to immediate-mode rendering, which causes a significant performance drop. Newer versions of Meshlab (version 2020.09) have to be compiled from source, but they allow increasing this memory threshold above the default 350 MB and render the mesh at 6 ms per frame.

We point out that Meshlab is faster than both other approaches because it uses only simple Phong shading.

Figure 7: Synthetic data can be easily rendered and used for deep learning applications. Images of drones together with ground-truth bounding box annotations were rendered and used for training a drone detector. a) Synthetic DJI M100 drone, b) detection in a real image.
Applications

The flexibility offered by EasyPBR allows it to be used for a multitude of applications. We gather here a set of real cases in which it was used.
Synthetic Data Generation

Deep learning approaches require large datasets in order to perform supervised learning, and the effort of annotating and labeling such datasets is significant. Consequently, interest has recently increased in using synthetic data to train the models and thus avoid or reduce the need for real labeled data.

EasyPBR has been used in the context of deep learning to create realistic 2D images for object detection tasks. Specifically, it has been used for training a drone detector capable of recognizing a drone in mid-flight. The model requires large amounts of data in order to cope with the variations in lighting, environment conditions, and drone shape. EasyPBR was used to create realistic environments in which we placed various drone types that were rendered together with ground-truth bounding boxes as annotations.

An example of a synthetic image and the output of the drone detector model can be seen in Fig. 7. The core of the Python code used to render the synthetic images can be compactly expressed as:

    view = Viewer()
    view.load_environment_map("./map.hdr")   # HDR cubemap for IBL
    drone = Mesh("./drone.ply")
    Scene.show(drone, "drone")
    view.recorder.record("img.png")          # save the rendered frame

3D Deep Learning

Many recent 3D deep learning applications take as input either raw point clouds or voxelized clouds. Visually inspecting the inputs and outputs of the network is critical for training such models.

EasyPBR interfaces with PyTorch (Paszke et al., 2017) and allows for conversion between the CPU data of the point cloud and GPU tensors for model input and output.
Figure 8: Point cloud segmented by LatticeNet (Rosu et al., 2020) and visualized with the colormap of SemanticKITTI (Behley et al., 2019).

Figure 9: Instance segmentation of plant leaves using LatticeNet (Rosu et al., 2020).

EasyPBR is used for data loading by defining a parallel thread that reads point cloud data onto the CPU and then uploads it to GPU tensors. After the model processes the tensors, the prediction is directly read by EasyPBR and used for visualization. Examples of 3D semantic segmentation and instance segmentation of point clouds, where our tool was used for visualization and data loading, are shown in Fig. 8 and Fig. 9.

Inside the training loop of a 3D deep learning approach, Python code similar to the following can be used for visualization and as input to the network:

    cloud = Mesh("./lantern.obj")
    points = eigen2tensor(cloud.V)    # vertices to a GPU tensor
    pred = net(points)                # run the network
    cloud.L = tensor2eigen(pred)      # predictions back for visualization
    Scene.show(cloud, "cloud")

Animations

EasyPBR can also be used to create simple 2D and 3D animations. The 3D viewer keeps a timer, which starts along with the creation of the application. At any point, the user can query the delta time since the last frame and perform incremental transformations on the objects in the scene, as sketched below.

Additionally, the user can create small rigid kinematic chains by specifying a parent-child hierarchy between the objects. Transformations of the parent object will therefore also cause a transformation of the child. This is useful when an object is part of another one.
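A minimal sketch of such a delta-time update follows; the timer, the rotation helper, and the stand-in vertex data are our own, since in practice the loop would live inside the viewer's update callback:

    import time
    import numpy as np

    def rotate_z(vertices, angle):
        c, s = np.cos(angle), np.sin(angle)
        rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return vertices @ rot.T

    vertices = np.random.rand(100, 3)   # stand-in for mesh.V
    speed = 0.5                         # radians per second
    prev = time.perf_counter()
    for _ in range(3):                  # three example "frames"
        now = time.perf_counter()
        vertices = rotate_z(vertices, speed * (now - prev))
        prev = now                      # next frame measures a fresh delta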
Recording

EasyPBR can be used both for taking screenshots of the scene and for recording movies while the virtual camera moves through the environment. Through the GUI, the user can place a series of key-poses through which the camera should move. The user then specifies the time to transition from one pose to another and lets the animation run. The camera linearly interpolates between the specified SE(3) poses while continuously recording. The saved images can then be converted into a movie. An example of a camera trajectory surrounding an object to be captured can be seen in Fig. 10.

Figure 10: Viewer GUI and camera trajectory for recording a video of the 3D object.

Conclusion

We presented EasyPBR, a physically-based renderer with a focus on usability without compromising visual quality. Various state-of-the-art rendering methods were implemented and integrated into a framework that allows easy configuration. EasyPBR simplifies the rendering process by automatically choosing parameters to render a specific scene, alleviating the burden on the user side.

In future work, we intend to make EasyPBR easier to integrate for remote visualizations and also to add further effects like depth of field and transparency. We make the code fully available together with the scripts to create all the figures shown in this paper. We hope that this tool will empower users to create visually appealing and realistic images without sacrificing performance or imposing the burden of a steep learning curve.
REFERENCES
3D Scan Store (2020). 3D Scan Store. https://www.3dscanstore.com/.

Bavoil, L. and Sainz, M. (2008). Screen space ambient occlusion. NVIDIA developer information.

Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., and Gall, J. (2019). SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In IEEE International Conference on Computer Vision (ICCV).

Blender Online Community (2018). Blender - a 3D modelling and rendering package. Blender Foundation. http://www.blender.org.

Boucheny, C. and Ribes, A. (2011). Eye-dome lighting: a non-photorealistic shading technique. Kitware Source Quarterly Magazine, 17.

Cignoni, P., Callieri, M., Corsini, M., Dellepiane, M., Ganovelli, F., and Ranzuglia, G. (2008). Meshlab: an open-source mesh processing tool. In Eurographics Italian Chapter Conference, volume 2008, pages 129–136. Salerno.

Cook, R. L. and Torrance, K. E. (1982). A reflectance model for computer graphics. ACM Transactions on Graphics (ToG), 1(1):7–24.

Droeschel, D., Schwarz, M., and Behnke, S. (2017). Continuous mapping and localization for autonomous navigation in rough terrain using a 3D laser scanner. Robotics and Autonomous Systems.

Epic Games (2007). Unreal Engine. https://www.unrealengine.com/.

Fdez-Agüera, C. (2019). A multiple-scattering microfacet model for real-time image-based lighting. Journal of Computer Graphics Techniques (JCGT), 8(1):45–55.

Jacobson, A., Panozzo, D., et al. (2018). libigl: A simple C++ geometry processing library. https://libigl.github.io/.

Karis, B. (2013). Real shading in Unreal Engine 4. Proc. Physically Based Shading Theory Practice.

Marmoset (2020). Marmoset Toolbag. https://marmoset.co/toolbag/.

Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2017). Automatic differentiation in PyTorch.

Reeves, W. T., Salesin, D. H., and Cook, R. L. (1987). Rendering antialiased shadows with depth maps. In SIGGRAPH, pages 283–291.

Reinhard, E., Stark, M., Shirley, P., and Ferwerda, J. (2002). Photographic tone reproduction for digital images. In SIGGRAPH, pages 267–276.

Rosu, R. A., Schütt, P., Quenzel, J., and Behnke, S. (2020). LatticeNet: Fast point cloud segmentation using permutohedral lattices. In Proceedings of Robotics: Science and Systems (RSS).

Schroeder, W. J., Avila, L. S., and Hoffman, W. (2000). Visualizing with VTK: a tutorial. IEEE Computer Graphics and Applications, 20(5):20–27.

Stückler, J. and Behnke, S. (2014). Multi-resolution surfel maps for efficient dense 3D modeling and tracking. Journal of Visual Communication and Image Representation, 25(1):137–147.

Williams, L. (1978). Casting curved shadows on curved surfaces. In 5th Annual Conference on Computer Graphics and Interactive Techniques.