Multi-Resolution Rendering for Computationally Expensive Lighting Effects
Simon Besenthal, Ulm University, [email protected]
Sebastian Maisch, Ulm University, [email protected]
Timo Ropinski, Ulm University, [email protected]
Abstract
Many lighting methods used in computer graphics, such as indirect illumination, can have very high computational costs and need to be approximated for real-time applications. These costs can be reduced by means of upsampling techniques, which tend to introduce artifacts and affect the visual quality of the rendered image. This paper suggests a versatile approach for accelerating the rendering of screen space methods while maintaining the visual quality. This is achieved by exploiting the low frequency nature of many of these illumination methods and the geometrical continuity of the scene. First the screen space is dynamically divided into separate sub-images, then the illumination is rendered for each sub-image in an adequate resolution, and finally the sub-images are put together to compose the final image. To this end we identify edges in the scene and generate masks precisely specifying which part of the image is included in which sub-image. The masks therefore determine which part of the image is rendered in which resolution. A stepwise upsampling and merging process then allows optically soft transitions between the different resolution levels. For this paper, the introduced multi-resolution rendering method was implemented and tested on three commonly used lighting methods: screen space ambient occlusion, soft shadow mapping, and screen space global illumination.
Keywords
Real-Time Rendering, Multi-resolution
1 Introduction

As a subarea of computer science, real-time computer graphics has developed continuously since the middle of the last century and is of great importance today. With a variety of applications, including medicine or computer-aided design (CAD), real-time computer graphics is nowadays indispensable in many areas of life and is thus a relevant factor in research as well as in business. To render a realistic image, many optical and physical phenomena such as camera lenses, light transport, or micro-surface structure must be taken into account. All of these phenomena need to be calculated at pixel level but might rely on information of the surrounding scene to create the effect. Therefore, the number of pixels to be rendered, especially with more complex illumination, is crucial to the necessary computing power and thus to the performance of an application. While the increase in computing power of modern graphics hardware allows for more complicated algorithms, the demand for photo-realistic global illumination effects and high output resolutions in real-time graphics cannot be met sufficiently by current hardware.

In order to reduce the computational effort, upsampling is often used. This technique renders individual effects, or sometimes the full image, in a lower resolution. Subsequently, the generated images are scaled back up to the full resolution by interpolation. Ultimately, fewer pixels must be calculated and stored, which reduces the computational effort and also the required storage space. Upsampling is particularly common in soft, continuous post-processing effects such as bloom filters or blur, in which quality losses are virtually invisible, depending on the scaling factor. If, on the other hand, effects with more concrete structures such as shadows or reflections are rendered in a lower resolution and then scaled up, hard edges are displayed washed out and aliasing becomes visible. In addition, there is a risk of under-sampling, which can cause visual artifacts affecting the image quality, especially in animated scenes or during camera movements.
Rendering such effects or the entire image by upsampling is therefore usually not useful. However, two interesting observations can be made: although such effects may generally have more concrete structures such as hard edges, these high-frequency details are firstly not necessarily evenly distributed in the image space, and secondly, they are often only marginally present in relation to the total area. For example, considering naive shadow mapping with a single light source, depending on the complexity of the scene, a rendered image may contain large areas that are either completely shaded or fully illuminated. Nevertheless, the necessary operations to determine the brightness of these areas are performed for each individual pixel. For naive shadow mapping, this is certainly not important, but if one considers computationally more complex effects such as ambient occlusion or indirect illumination, the performance could be drastically increased by an intelligent subsampling of certain image areas.

The technique developed in this work exploits the often existing optical continuity of a scene in order to realize computationally intensive lighting effects more efficiently. For this purpose, the image space is first divided into multiple disjoint partial images, so that areas which contain edges or are in their immediate vicinity are separated from areas without edges or with a greater distance to them. Each partial image can be rendered individually with the illumination effects to be realized in suitable resolutions. In principle, a higher resolution is required to correctly create the effect in areas with a higher detail density. However, areas that do not include edges and thus have a lower density of detail can be rendered in lower resolution. The partial images are then reassembled to the original image. In the best case, this image should not differ visually from a full-resolution rendered image.
Of particular importance for visual quality and performance is the way in which the individual steps of the technique work. Approaches for each of these steps are presented and explained in this paper.

2 Related Work

In this section we present and explain the techniques and approaches relevant to this work. They follow similar conceptual principles and can be considered as a starting point for the technique developed here. We also highlight the differences to these approaches.
Upsampling is a technique commonly used for low-frequency visual effects in real-time computer graphics. Examples of effects that are often realized this way are bloom or glare filters [1] and depth of field [2]. The blur for the respective effect is not rendered in the full resolution of the application, but in an often much lower resolution. Subsequently, the result is scaled back to the full screen size by means of bilinear interpolation. This can greatly increase the performance at the same optical quality.
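As a minimal illustration of this upsampling step (a CPU sketch, not the shader code of any of the cited implementations; the buffer contents are made up), a low-resolution effect buffer can be scaled back to screen size by bilinear interpolation:

```python
def bilinear_upsample(img, out_w, out_h):
    """Scale a low-resolution grid of values up by bilinear interpolation."""
    in_h, in_w = len(img), len(img[0])
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map the output pixel back into the low-resolution grid.
            gx = x * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            gy = y * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
            x0, y0 = int(gx), int(gy)
            x1, y1 = min(x0 + 1, in_w - 1), min(y0 + 1, in_h - 1)
            fx, fy = gx - x0, gy - y0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

# A hypothetical 2x2 low-resolution effect buffer scaled to 3x3: the corner
# values are preserved, the pixels in between are interpolated.
low = [[0.0, 1.0], [1.0, 0.0]]
high = bilinear_upsample(low, 3, 3)
```

This is exactly where the quality loss discussed above comes from: any hard edge in `low` is smeared across the interpolated pixels of `high`.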
There are several approaches that split the computation of illumination effects into multiple resolutions to separate the rendering of low frequency and higher frequency components of these effects. Examples are implementations for indirect light transport [3] and screen space ambient occlusion [4], which achieve better performance with optically good results. In both approaches, multiple mipmap stages of the G-buffer are used to render the lighting effect to be realized in various resolutions. Subsequently, an upsampling is performed by means of bilateral filters and the different levels are combined. The multi-resolution rendering technique developed in this work makes use of the fundamental principle of separating high and low-frequency components of the illumination, but divides the image into several partial images on the basis of these different proportions. An area of the image is not rendered in all resolutions, but in the best case only in one. This makes it possible to drastically reduce the calculations for higher-frequency components in exactly those image areas in which ultimately no high-frequency components occur.

Nichols and Wyman [5] describe a real-time technique for rendering indirect illumination using multi-resolution splatting. They use min-max mipmaps to find the discontinuities in the geometry. Using these discontinuities, the image space is hierarchically divided into smaller squares, so that areas with higher-frequency components obtain a finer resolution. After the image is completely split into such 'splats' of an appropriate size, the indirect illumination is rendered in all resolutions and the layers are then combined by upsampling to produce the final image. Our technique differs from the algorithm presented by Nichols and Wyman among other things in the method used to decide which resolution to render in.
We can apply more flexible filters depending on the situation, while their approach using min-max mipmaps can only find geometric discontinuities. We also use a different approach to combine the final images that prevents visible artifacts. Finally, our technique is not only specialized for indirect illumination using reflective shadow maps, but can also be applied and optimized for various lighting effects due to its high flexibility.

Iain Cantlay [6] describes a technique for rendering lower resolution particles offscreen and combining the result with high resolution renderings of other geometry. In contrast to our approach, this technique can only be applied if distinct parts of the geometry (in this case particles) are to be rendered in a fixed lower resolution, while our technique is more flexible, working on pixels. Guennebaud et al. [7] use variable resolutions for soft shadow mapping in screen space. Again, our approach is more flexible and can be applied to a multitude of screen space effects.
He et al. [8] propose an extension of the graphics pipeline to natively support adaptive sampling techniques. Nvidia's Maxwell and Pascal architectures have already implemented graphics hardware technologies that can speed up the rendering of an image through the use of different resolutions. Multi-Resolution Shading [9] and Lens Matched Shading [10] can be applied in virtual reality applications to adapt the resolution of individual image areas to the optical properties of the physical lens that is part of the display. For more general uses, Variable Rate Shading (VRS) [11] was introduced as part of the Nvidia Turing architecture. With this technique, the image can be divided into much finer regions, which can be rendered independently in appropriate resolutions. The regions are made up of squares with an edge length of sixteen pixels. Possible applications include 'Content Adaptive Shading' (as for example presented by Vaidyanathan et al. [12]), 'Motion Adaptive Shading' (as for example presented by Vaidyanathan et al. [13]), and 'Foveated Rendering' (as presented by Guenter et al. [14]). In this case, the sampling rate of the image areas is selected adequately depending on the detail density, movement, or focus of the viewer.

The multi-resolution rendering technique developed in this work allows for an even finer and more flexible division of the image, since image areas do not necessarily have to consist of square tiles, but can have any desired shape. This means that a possibly even smaller part of the image must be rendered in full resolution, and the performance can be further increased. Apart from that, in contrast to VRS, our technique allows for any number of levels and even lower sampling rates. Our technique is also not dependent on current graphics hardware and can be implemented for widely available systems.
In our implementation we focus on the density of details in a scene (content adaptive shading) to decide on the resolution to render in, but we can extend our technique by using different edge detection filters or even masks that describe the geometry of lenses in virtual reality.
For the exemplary implementation of our technique we use three illumination effects commonly used in modern computer graphics.

Screen space ambient occlusion (SSAO) is a real-time approximation of the occlusion of ambient light by local geometry. The technique was first presented by Mittring [15] and further developed and improved (e.g. by Bavoil et al. [16]).

Shadow mapping is an algorithm presented by Williams [17] that allows for a fast calculation of shadow rays using a depth buffer. Artifacts introduced by the resolution of the depth buffer can be reduced by percentage closer filtering, introduced by Reeves et al. [18], which also softens the shadow edges. A plausible penumbra can also be realized as described by Fernando [19]. The shadow map is then not only sampled at a single position but at multiple neighboring locations.

Screen space global illumination (SSGI) as, for example, described by Ritschel et al. [20] generalizes SSAO to not only dim ambient illumination but also add indirect illumination from other surfaces visible on the screen. The light transport between chosen samples close to a pixel is calculated using information from the G-buffer.

Figure 1: A possible edge image for the multi-resolution rendering technique, the edges are colored for better visualization: the red edges were determined by the differentiation of the normals, the green ones by the depth values and the blue ones by the shadows; normal edges and depth edges are often determined at the same point in the image space (yellow edges).
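The percentage closer filtering idea can be sketched as follows (a CPU illustration, not the authors' shader; the shadow-map contents, bias, and kernel radius are hypothetical). Instead of one binary depth comparison per pixel, several neighboring shadow-map texels are compared and the results averaged, which yields a fractional visibility at shadow edges:

```python
def pcf_shadow(shadow_map, u, v, receiver_depth, radius=1, bias=1e-3):
    """Percentage closer filtering: average the binary depth comparisons of
    a (2*radius+1)^2 neighborhood of shadow-map texels (Reeves et al.)."""
    h, w = len(shadow_map), len(shadow_map[0])
    lit, taps = 0, 0
    for dv in range(-radius, radius + 1):
        for du in range(-radius, radius + 1):
            # Clamp taps to the shadow-map border.
            tu = min(max(u + du, 0), w - 1)
            tv = min(max(v + dv, 0), h - 1)
            lit += 1 if receiver_depth <= shadow_map[tv][tu] + bias else 0
            taps += 1
    return lit / taps  # 0 = fully shadowed, 1 = fully lit

# Hypothetical 4x4 shadow map: the left half stores a near occluder
# (depth 0.2), the right half the far background (depth 0.9).
sm = [[0.2, 0.2, 0.9, 0.9] for _ in range(4)]
# A receiver at depth 0.5 sampled at the occluder boundary gets a
# fractional visibility instead of a hard 0/1 result.
penumbra = pcf_shadow(sm, 1, 1, 0.5)
```

Inside a fully lit or fully shadowed region all taps agree, so the averaging only changes the result in the narrow penumbra band.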
3 Multi-Resolution Rendering

Our presented multi-resolution rendering technique can be subdivided into three basic steps. In the first step, we create a mask in screen space, based on which the image to be rendered is divided into disjoint or complementary sub-images. In the second step, the lighting method to be implemented is rendered for each sub-image in its adequate resolution. Finally, the sub-images are combined to create the result image. The conceptual approaches of these steps will be described in more detail below. A visual overview of the algorithm's workflow is given in the supplementary material.
3.1 Mask Creation

The masks are used to divide an image into individual sub-images. While masks can be acquired in multiple ways and even combined using the minimum or maximum (depending on the application), an obvious choice is to use them to separate the higher-frequency image parts from the low-frequency ones. It is often sufficient to use the geometry edges of the scene in screen space to achieve this. These can be found through the information available in the G-buffer by numerically differentiating depth values and normals for each pixel. For the normals, the first derivative in each of the two dimensions is sufficient, whereas for the depth values, the second derivative gives more reliable results. The discontinuities found reproduce the geometric edges of the scene and can be used to split the image. For screen space ambient occlusion and screen space global illumination, the geometric edges are already sufficient, but depending on the illumination effect to be realized, additional information may be required. In case of soft
shadow mapping, for example, the shadow edges of the scene are needed above all. To this purpose, when creating the mask using the previously created shadow map, a fast shadow calculation (one sample per pixel) can be implemented. We differentiate these values to find discontinuities in the shading. To avoid artifacts at the geometry edges, we also take them into account for the mask when rendering the soft shadows. Fig. 1 shows an edge image of a scene in which normals, depths, and shadows are differentiated. As an alternative to the edge images we use, min-max mipmaps can also be used to decompose the image as explained by Nichols and Wyman [5].

After we created the final high-resolution mask, we downsample it to the resolutions we want our final sub-images to be. We use blur filters with different variances (σ_i) on the downsampled images to determine the areas near the edges. The blur's variance gives the developer control over the size of the area around the edges and determines which areas around the edges are rendered in which resolution. The variances we use can be found in Tab. 1.

Table 1: Variances (σ_i) and weights (w_i) for each sub-image (i) of all techniques we used. The variances are used to blur the mask, while the weights are used to combine the final image. For SSGI we did not use the second sub-image at all.

       SSAO           SSM            SSGI
  i    σ_i    w_i     σ_i    w_i     σ_i    w_i
  1    0.924  100     0.924  1000    0.924  1000
  2    1.848  50      1.848  1000    –      –
  3    3.696  20      3.696  1000    0.924  100
  4    0      1       0      1       0      1

Figure 2: Without accounting for overlap (left), "dead pixels" (black) occur at the edges of the sub-images (red and blue), which are not contained in any of the sub-images and thus are not rendered. When ensuring an overlap (right), the intersection of the sub-images (green) prevents this circumstance.

A simple way to separate the image into sub-images is to divide them into complementary tiles. An advantage of this method is the disjoint decomposition, whereby no area of the image has to be rendered multiple times. A drawback, however, is that the granularity of the decomposition of the image is limited by the lowest resolution of a sub-image.
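The G-buffer differentiation used for the masks can be sketched as follows (a CPU illustration, not the authors' shader; the buffer contents and both thresholds are made up). It combines the first derivative of the normals with the second derivative of the depth values, shown here along the x dimension only:

```python
def edge_mask(depth, normals, depth_thresh=0.05, normal_thresh=0.2):
    """Binary edge mask from a G-buffer row/column neighborhood:
    first derivative of the normals, second derivative of the depths."""
    h, w = len(depth), len(depth[0])
    mask = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            # First derivative of the normal (difference of the neighbors).
            n_dx = sum(abs(a - b) for a, b in zip(normals[y][x + 1], normals[y][x - 1]))
            # Second derivative of the depth (discrete Laplacian in x).
            d_dxx = abs(depth[y][x + 1] - 2 * depth[y][x] + depth[y][x - 1])
            if n_dx > normal_thresh or d_dxx > depth_thresh:
                mask[y][x] = 1.0
    return mask

# Hypothetical 1-row G-buffer: a planar ramp (constant depth gradient, so no
# edge) followed by a depth discontinuity at x = 4.
depth = [[0.1, 0.2, 0.3, 0.4, 0.9, 0.9, 0.9, 0.9]]
normals = [[(0.0, 0.0, 1.0)] * 8]
m = edge_mask(depth, normals)
```

The example also shows why the second derivative is the more reliable choice for depth: a tilted but flat surface has a large first derivative everywhere, while its second derivative is zero, so only the actual discontinuity is marked. The resulting mask would then be downsampled and blurred with the variances from Tab. 1.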
When naively using the granularity that is determined directly by the resolution of each sub-image, we obtained undefined spaces in the final image between two masked areas. To avoid these, we make sure areas of different resolutions have an overlap as shown in Fig. 2. Therefore, we do not separate the image into almost disjoint areas, but always completely include the higher resolution levels in the underlying ones. This means, in particular, that the lowest resolution sub-image always renders the effect to be realized for the entire image. Losses in performance due to the multiple rendering of some image areas are extremely small, because the additional computational effort arises mainly in the lower resolutions. If the blur is optimally selected for the creation of the masks, this approach lets us keep the areas of the higher resolution levels extremely small, resulting in an overall good performance. In addition, this decomposition approach later allows for a very simple re-composition of the final image, because the masks together with fixed weights can serve as an alpha channel for blending the sub-images (see Section 3.3). Fig. 3 shows a possible decomposition of an example scene in screen space.

Figure 3: Visualization of the decomposition of an image into four sub-images by means of inclusive areas: The sub-image of the full resolution contains all the red areas, the sub-image of the half resolution all red and green areas, the sub-image of the quarter resolution all red, green and blue areas. The fourth sub-image renders the entire image space at an eighth of the resolution.

3.2 Rendering the Sub-Images

Throughout the rendering process we generate all sub-images independently of each other in the chosen resolution. Shape and resolution of each sub-image are defined by the masks determined in step one. Accordingly, an image area of a sub-image is rendered if and only if the corresponding mask in this image area permits it. Fig. 4 shows an example of rendering four sub-images.

Figure 4: Screen space ambient occlusion rendered in four sub-images, no lighting is calculated for the black areas. The individual sub-images render SSAO in full (top left), half (top right), quarter (bottom left) and eighth resolution (bottom right).
3.3 Composition

As the final step of the technique we blend the individually rendered sub-images in order to generate the final image. All sub-images are upsampled to the full resolution and combined. Using a simple bilinear interpolation would lead to artifacts, as pixels containing visual information can be interpolated with those that contain no information.

A simple solution for this problem would be bilateral interpolation as described by Tomasi and Manduchi [21]. When using this, the sub-images are gradually scaled and merged without scattering missing information of a resolution level into the relevant pixels of the image. To this purpose, a sub-image is always combined in one step with the sub-images already blended. This upsampling technique is also used by Nichols and Wyman [5].

In our case we can use the decomposition masks to calculate the final blending weights. Each sub-image, starting at the lowest resolution, is blended with the next higher resolution sub-image based on the alpha value of each mask. The softness of the transitions between the resolution levels can be determined flexibly using weights. These weights are multiplied with the alpha mask and define the final alpha value for blending.
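Per pixel, this successive blending from the lowest to the highest resolution level can be sketched as follows (a CPU illustration; the mask values, weights, and color values are hypothetical):

```python
def compose_pixel(colors, alphas, weights):
    """Blend per-pixel sub-image values from the lowest resolution (last
    entry) up to the full resolution (first entry). Each level is mixed in
    with alpha = min(a_i * w_i, 1), where a_i comes from its decomposition
    mask and w_i is a developer-chosen weight (cf. Tab. 1)."""
    result = colors[-1]  # the lowest level covers the whole image
    for c, a, w in zip(reversed(colors[:-1]), reversed(alphas[:-1]),
                       reversed(weights[:-1])):
        t = min(a * w, 1.0)
        result = c * t + result * (1.0 - t)
    return result

# Hypothetical pixel with four sub-images: the masks of the two finest
# levels are zero here, so the pixel keeps the value blended in from the
# coarser levels.
out = compose_pixel(colors=[0.8, 0.6, 0.4, 0.2],
                    alphas=[0.0, 0.0, 0.5, 1.0],
                    weights=[100, 50, 20, 1])
```

Because a large weight saturates `min(a * w, 1)` to one well inside a masked area, the weights only shape how quickly the transition band at the mask border fades between two resolution levels.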
4 Implementation

In our implementation we applied our multi-resolution rendering technique to three illumination effects commonly found in modern real-time computer graphics. These effects are SSAO, soft shadow mapping (SSM) and screen space global illumination (SSGI). In this section, we describe the implementation of our technique and specific adjustments for the illumination effects used. Our implementation relies solely on the OpenGL 3.3 core profile and can as such run on widely available hardware. According to our experiences during the development stage, a decomposition into four sub-images appears to be the best compromise between image quality and speed. The width of the sub-images is successively halved, starting at the full resolution width, and is thus set to full, half, quarter, and eighth resolution. For SSGI we found that not using the half-resolution sub-image did not result in worse image quality. This contributed to a further performance enhancement.
To render the sub-images, we use the previously generated masks to create a stencil buffer for each resolution, determining the areas to render. We check if the mask is greater than zero and set the stencil value to one or zero accordingly. We considered using different thresholds for creating the stencil masks, but for our purposes a threshold of zero provided the best results. For each resolution level used, we subsequently render each sub-image using the stencil buffer to eliminate regions that we do not want to render.

For SSAO, depending on the number of samples used, we blur the resulting sub-images in order to reduce the occurring variance of the effect, especially in the lower resolutions. However, we needed to ensure not to transport missing pixel information into the defined areas of the respective sub-image. We achieved this with a bilateral blur filter.
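Deriving the per-resolution stencil from a blurred mask can be sketched like this (illustrative values; an actual implementation would write these bits into an OpenGL stencil buffer rather than a Python list):

```python
def stencil_from_mask(mask, threshold=0.0):
    """Binary stencil from a (blurred) decomposition mask: a pixel of the
    sub-image is rendered iff its mask value exceeds the threshold."""
    return [[1 if v > threshold else 0 for v in row] for row in mask]

# Hypothetical 2x3 mask after blurring: only pixels near an edge are nonzero,
# so only those pass the stencil test for this resolution level.
mask = [[0.0, 0.3, 1.0],
        [0.0, 0.0, 0.7]]
stencil = stencil_from_mask(mask)
```

With a threshold of zero, even the faint tails of the Gaussian-blurred mask are kept, which matches the observation above that stricter thresholds gave no benefit.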
Subsequently, the rendered sub-images are blended to compose the final image. We use bilinear interpolation to scale the sub-images to full size and then combine them sequentially, starting at the lowest resolution level. We carry out the final blending between two sub-images by using the values of our masks (a_i) multiplied by a weight (w_i) as a linear interpolation parameter. The weights of our example cases can be found in Tab. 1. We calculate the following for each pixel of the final image. We define c_i as that pixel's color value in the i-th sub-image, where c_1 is the full resolution image. The composed image including the i-th sub-image as its highest resolution is called c'_i. The fourth sub-image has the lowest resolution, covers the entire image space and is defined for each pixel. We use its value as the initial value c'_4 = c_4. All other c'_i are calculated successively using the alpha values a_i from the corresponding masks and the weights w_i by:

    c'_i = c_i · min(a_i · w_i, 1) + c'_{i+1} · (1 − min(a_i · w_i, 1))    (1)

The last computed value c'_1 describes the pixel value of the final composite image.

5 Results

For a basic evaluation we applied our multi-resolution rendering technique to the three illumination effects mentioned (SSAO, SSM and SSGI). We used three test scenes, "Office" (20,189 triangles), "Hall" (183,333 triangles), and "Breakfast Room" (a slightly modified version of the one provided by Morgan McGuire [22] with 269,565 triangles), with eight camera configurations for speed and visual comparison.

Figure 5: Average speedup in percent by using our multi-resolution technique in 4K (3840×2160 pixels). We show the speedup for our three tested techniques using different numbers of samples for each of them.
For soft shadow mapping and screen space global illumination, a modified version of the second scene with 255,432 triangles was used, because it works better with the given directional light sources. For each perspective, the rendering speed was measured using our technique and compared to the speed measured for naive rendering in full resolution. In addition, comparison images of the test scenes are shown and their differences measured and visualized. All tests were performed on a Nvidia GeForce GTX 1080.
For testing the speedup of our technique we used resolutions ranging from 1280×720 up to 3840×2160. For lower resolutions the resulting speedup is relatively low, as the overhead of generating the mask to divide the image and the cost of the additional rendering passes for multiple resolutions dominate over the positive effect of our technique. Fig. 6 shows these results.

Figure 6: Average speedup in percent of our multi-resolution technique at different resolutions. We used fixed numbers of samples for all techniques: 64 samples for SSAO, 196 samples for SSM, and 228 samples for SSGI.
While our technique tries to prevent producing images that differ from renderings created with naive full resolution rendering, we could not prevent all visual artifacts. As can be seen in Fig. 7 to 9, these errors occur at the borders of our masks and are mostly due to the Gaussian blur we need to apply to the images to reduce discontinuities at these edges. The blur kernel is very narrow, so it is hard to detect the errors when just comparing the images directly, but they are visible in the difference images provided.

Figure 7: The "Hall" dataset using SSAO and 64 samples. The top image shows our multi-resolution technique while in the lower left corner the reference image is shown. In the lower right corner is an enhanced difference image between those two.

Figure 8: The "Breakfast Room" dataset using SSM and 196 samples. The top image shows our multi-resolution technique while in the lower left corner the reference image is shown. In the lower right corner is an enhanced difference image between those two.

Fig. 7 shows the results for SSAO using 64 samples. We chose this number of samples as we think it is a reasonable choice for real applications and a good compromise between speed and image quality. As this image is very bright, the differences in the difference image are also more prominent than with the other techniques.

Fig. 8 shows the results for SSM using 196 samples. For this lighting effect we can use masks for our technique that do not depend directly on the screen space geometry. The occurring errors are relatively low compared to the other techniques due to the parts of the scene in shadow that are lit with a constant ambient illumination.

Results of the SSGI technique we implemented are shown in Fig. 9. For a visually plausible global illumination effect in screen space we needed a lot of samples, so we chose to present the results for 4224 samples. While our results are still convincing, some small artifacts can be seen in the corners of the right rack.
While these present visible differences to the original image, the effects are very minor.

Besides the visual results we provide an overview over all errors in the graphs in Fig. 10. These numbers do not only include the presented images but images from all three scenes with eight camera configurations each. They support our claim that the errors introduced by our technique are very low.
We presented the performance and visual quality of our method and have two general findings. As a general rule, it was observed that illumination techniques that are more computationally demanding can benefit more from our technique than less demanding ones. This is because of a constant overhead due to mask generation and multiple rendering passes. This overhead becomes dominant for techniques that are less computationally demanding. The second finding is the fact that our technique excels especially in higher resolutions, for the same reason.

Figure 9: The "Office" scene using SSGI and 4224 samples. The top image shows our multi-resolution technique while in the lower left corner the reference image is shown. In the lower right corner is an enhanced difference image between those two. The image only shows the SSGI effect without direct illumination to better show the differences caused by our technique.
Figure 10: The absolute root mean squared (RMS) errors between result images of our multi-resolution technique and images naively rendered in full resolution, at different resolutions. We used 64 samples for the SSAO images, 196 samples for SSM and 288 samples for the SSGI images. Values in the compared images ranged from 0 to 1, so the resulting errors can be considered low.

A minor finding is that masks which are more complicated to generate than by simply using the G-buffer also cause a greater overhead. This makes the use of these masks only feasible for the highest resolutions or for techniques that are more computationally demanding than the soft shadow mapping presented here.
6 Conclusion

We presented a technique for multi-resolution rendering that can be implemented on widely available graphics hardware. Our technique can drastically improve the rendering speed of screen space algorithms (especially for high resolutions), as we have shown for three cases. While the technique presented here is only used for 'Content Adaptive Shading', we can trivially extend it to 'Foveated Rendering' by modulating the mask we use with an importance mask provided by eye trackers. Including 'Motion Adaptive Shading' is also possible by using information about pixel motion in the mask generation process.

To further improve our technique we think that the mask generation process should be modified. For determining the geometry edges, we use normals and depth values from the G-buffer in screen space. In practice, however, non-smooth, modified normals are often used to calculate the illumination. For smooth shading, pixel normals are calculated by the linear interpolation of vertex normals, but in real applications bump maps or normal maps are used to modify the normals. In this case, the edge filter could potentially find many more edges, which can result in dramatically increased computational effort and significantly lower efficiency. Possible solutions to these problems would be the exclusive use of unmodified normals or an alternative determination of the edges using the pixel locations in world space. Another problem may arise with certain effects, including, for example, reflections or caustics, since their edges cannot be calculated with the information contained in the G-buffer. Also in this case, image areas with higher-frequency components could be rendered in too low a resolution. For such lighting effects, further development of the progressive decomposition of the image would certainly be beneficial.
To prevent sub-sampling for some effects, sub-images could also be realized by just using a lower number of samples in full resolution instead of rendering the effect in a lower resolution.

Another interesting application for the multi-resolution rendering technique would be using ray tracing for physically correct illumination. In particular, diffuse indirect illumination can only be achieved with relatively high computational effort and can barely be realized in real-time on current graphics hardware. Using the multi-resolution approach, the performance could be increased drastically.
REFERENCES

[1] G. James, GPU Gems. Pearson Higher Education, 2004, ch. Real-Time Glow.
[2] T. Scheuermann et al., "Advanced Depth of Field," GDC 2004, vol. 8, 2004.
[3] C. Soler, O. Hoel, and F. Rochet, "A Deferred Shading Pipeline for Real-Time Indirect Illumination," in ACM SIGGRAPH 2010 Talks. ACM, 2010, p. 18.
[4] T.-D. Hoang and K.-L. Low, "Multi-Resolution Screen-Space Ambient Occlusion," in Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology. ACM, 2010, pp. 101–102.
[5] G. Nichols and C. Wyman, "Interactive Indirect Illumination Using Adaptive Multiresolution Splatting," IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 5, pp. 729–741, 2010.
[6] I. Cantlay, GPU Gems 3. Addison-Wesley Professional, 2007, ch. High-Speed, Off-Screen Particles.
[7] G. Guennebaud, L. Barthe, and M. Paulin, "High-Quality Adaptive Soft Shadow Mapping," in Computer Graphics Forum, vol. 26, no. 3. Wiley Online Library, 2007, pp. 525–533.
[8] Y. He, Y. Gu, and K. Fatahalian, "Extending the Graphics Pipeline with Adaptive, Multi-Rate Shading," ACM Trans. Graph., vol. 33, no. 4, pp. 142:1–142:12, Jul. 2014.
[9] "VRWorks – Multi-Res Shading," https://developer.nvidia.com/vrworks/graphics/multiresshading, accessed: 2019-02-05.
[10] "VRWorks – Lens Matched Shading," https://developer.nvidia.com/vrworks/graphics/lensmatchedshading, accessed: 2019-02-05.
[11] "VRWorks – Variable Rate Shading (VRS)," https://developer.nvidia.com/vrworks/graphics/variablerateshading, accessed: 2019-02-05.
[12] K. Vaidyanathan, M. Salvi, R. Toth, T. Foley, T. Akenine-Möller, J. Nilsson, J. Munkberg, J. Hasselgren, M. Sugihara, P. Clarberg, T. Janczak, and A. Lefohn, "Coarse Pixel Shading," in Proceedings of High Performance Graphics, ser. HPG '14. Goslar, Germany: Eurographics Association, 2014, pp. 9–18.
[13] K. Vaidyanathan, R. Toth, M. Salvi, S. Boulos, and A. Lefohn, "Adaptive Image Space Shading for Motion and Defocus Blur," in Proceedings of the Fourth ACM SIGGRAPH / Eurographics Conference on High-Performance Graphics, ser. EGGH-HPG '12. Goslar, Germany: Eurographics Association, 2012, pp. 13–21.
[14] B. Guenter, M. Finch, S. Drucker, D. Tan, and J. Snyder, "Foveated 3D Graphics," ACM TOG, vol. 31, no. 6, pp. 164:1–164:10, Nov. 2012.
[15] M. Mittring, "Finding Next Gen: CryEngine 2," in ACM SIGGRAPH 2007 Courses. ACM, 2007, pp. 97–121.
[16] L. Bavoil, M. Sainz, and R. Dimitrov, "Image-Space Horizon-Based Ambient Occlusion," in ACM SIGGRAPH 2008 Talks. ACM, 2008, p. 22.
[17] L. Williams, "Casting Curved Shadows on Curved Surfaces," in ACM SIGGRAPH Computer Graphics, vol. 12, no. 3. ACM, 1978, pp. 270–274.
[18] W. T. Reeves, D. H. Salesin, and R. L. Cook, "Rendering Antialiased Shadows with Depth Maps," in ACM SIGGRAPH Computer Graphics, vol. 21, no. 4. ACM, 1987, pp. 283–291.
[19] R. Fernando, "Percentage-Closer Soft Shadows," in ACM SIGGRAPH 2005 Sketches. ACM, 2005, p. 35.
[20] T. Ritschel, T. Grosch, and H.-P. Seidel, "Approximating Dynamic Global Illumination in Image Space," in Proceedings of the 2009 Symposium on Interactive 3D Graphics and Games. ACM, 2009, pp. 75–82.
[21] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images," in Proceedings of the Sixth International Conference on Computer Vision (ICCV). IEEE, 1998, pp. 839–846.