Next Event Backtracking
Jendersie, Johannes
TU Clausthal, Germany
September 4, 2019
Figure 1: Bidirectional Path Tracing (left, 212 spp) compared to Path Tracing with Next Event Backtracking (341 spp, equal time 15 min). Bidirectional Path Tracing and many more methods fail to produce the caustic in this scenario, because the light paths rarely hit the small teapot. Our method implicitly guides the light paths to the important regions and scales well for large scenes and many lights.
Abstract
In light transport simulation, challenging situations are caused by the variety of materials and the relative length of path segments. Path Tracing can handle many situations and scales well to parallel hardware. However, it is not able to produce paths which have a smooth surface in connection with a small light source. Here, photon transports perform superior, but these can be ineffective if the smooth object is small compared to the scene size. We propose to use the last segment of a Path Tracer path as the first segment of a photon path. As a result, the strengths of next event estimation are inherited by the photon transport, and photons are guided toward the regions where they are most useful. To that end, we developed a lock-free sparse octree, which we use for fast and robust density estimates. Summarizing, the new method can outperform state-of-the-art algorithms like Vertex Connection and Merging in certain scenarios.

Introduction
In stochastic light transport simulation there are three basic operators to create light paths: random walks, connections and merges. Random walks start at the observer or a light source and use ray tracing to find the next intersection point (vertex) in a randomly sampled direction. If they hit a surface with the adjoint quantity, a full light transport path is found. Random walks have a high chance of finding contributions if there are large light sources and/or the path to the light source is close to deterministic (for example, if the emitter is directly visible or reflected by mirrors).

Connections can be created from each vertex of a random walk path. By choosing a random point on an adjoint surface, a contribution can be computed over arbitrarily large distances. This is called Next Event Estimation (NEE), because the next event in the random walk could be the very same location. The strength of NEE is that it can find light sources with a very small solid angle, which have a much smaller probability to be found via random walk alone. Additionally, it is possible to use binary trees to select connection points proportionally to their expected contribution to reduce the variance.

Finally, merges combine the vertices of two random walks by assuming they hit the same point, although they might be separated by a small distance. That means, if the end points of two paths have a distance smaller than some threshold, they are combined to a full contribution path. In general, merges have one more random sampling event than random walks and connections, leading to a higher variance in the first place. However, their strength comes from the fact that neighbor points can be searched in the large set of vertices over all paths (often millions). Merges are the only really successful operator to find the difficult specular-diffuse-specular (SDS) paths.

We combine connections and merges by using the connection segment as the first path segment of a new light walk.
Thus, merges inherit the strengths of next event estimation. To that end, the vertices from which the connections were initiated are transformed into photon-emitting virtual light sources. This requires an estimate of the density of such NEE vertices. For that purpose, we introduce a new octree-based data structure which is faster than kd-tree-based neighborhood searches. Independent of the used data structure, every density estimate over a finite area will be biased and noisy, leading to a small bias in our new method.
Previous Work

The foundation of our method is the Path Tracing (PT) algorithm, which goes back to Kajiya [Kaj86]. Today, it is widely used in production renderers [FHF∗17]. Another foundation is Multiple Importance Sampling (MIS) in Monte Carlo-based simulations. In graphics, the balance or power heuristic are used to compute the weights. They were introduced by Veach and Guibas [VG95b] and extended to a large set of operators since then. PT can handle many situations, but fails for caustics and SDS paths.
Bidirectional Path Tracing (BPT) [VG95a, LW93] is a stronger method. It uses a random walk from the observer and another one from the light source, and computes all connections between the vertices of the two paths. It is able to find caustics, but still fails for SDS paths. The only successful methods to efficiently find SDS paths are based on photon transports.
Photon mapping [Jen96] is able to produce caustic and SDS paths, but has problems with the scene scale. Large scenes or small specular objects as well as many light sources can lead to highly noisy results. Jensen already used a dedicated Caustic Map where photons are explicitly sent into directions of specular objects. This will still fail if there are many large specular objects, and it requires a generalization for glossy objects. Contrarily, NEB shoots photons into important regions independent of the material.

Similarly, the first importance-based method to distribute photons was introduced by Peter and Pietrek [PP98]. It uses piecewise constant functions, created from the importance information of a view path tracing pass, to distribute photons. Due to this piecewise approximation the method has problems with highly glossy surfaces. The major difference to NEB is that the guidance in NEB happens implicitly.

A different approach to improve the photon map quality is to steer the deposition of photons. Suykens and Willems [SW00] fixed the photon mapping density to a constant, which reduces the number of photons in bright regions. Likewise, Keller and Wald [KW00] used an importance map to control the density of stored photons. However, both methods still sample a large number of photon paths and only reduce the number of stored photons, wasting the others.

Georgiev et al. [GKDS12] and Hachisuka et al. [HPJ12] simultaneously introduced the MIS-weight computation to successfully combine BPT with merges. The
Vertex Connection and Merging (VCM) algorithm is one of the most robust methods so far. Still, it may fail for selected situations where the MIS weight underestimates the variance of certain merge events. This issue was overcome by Jendersie et al. [JG18, Jen19] with a small change to the heuristic. While we only demonstrate our novel operator – the next event backtracking (NEB) – in Path Tracing, it is possible to integrate it into BPT or VCM, too. However, NEB is already capable of handling many complex light situations without doing so.

Another important variant of the random walk operator is Markov Chain Monte Carlo (MCMC) sampling. Instead of generating independent random sampling events, MCMC samplers apply small mutations to the paths. Then, the new mutations are randomly discarded or accepted with respect to a target function. This allows MCMC samplers to generate distributions of an unknown function to reduce noise compared to the naïve walk. For an overview of MCMC methods we refer to the survey of Šik et al. [ŠK18]. It is conceivable to use NEB in connection with MCMC random walks, which we leave as future work.
Manifold Next Event Estimation (MNEE) from Hanika et al. [HDF15] is the method most similar to ours. There, random connections which are blocked by refractive surfaces are iteratively moved on the surface until they form a valid path. This iteration requires multiple expensive evaluations and shadow tests and is biased in case there is an ambiguity of multiple possible light paths. Finally, MNEE is not able to find caustics from mirrors and prisms, where the reflecting surface is not blocking the direct connection between the caustic and the light emitter. NEB shares the weakness in that scenario, but at least handles it to a better degree than MNEE. We believe that NEB is more robust than MNEE in general.

Other methods which target the problem of small caustic-throwing objects are based on guidance. A general guidance method like the one from Müller et al. [MGN17] reduces the overall variance in all sampling events by steering the samplers into the direction of the adjoint quantity. A method which explicitly targets the exploration of visible caustics was invented by Hachisuka and Jensen [HJ11]. It uses MCMC sampling for the light paths with the visibility of caustics as target function. Recently, Grittmann et al. [GPGSK18] improved the caustic handling in large scenes by restricting the use of photons adaptively and learning guidance information at the light emitter. Other than these methods, our NEB does not learn those connections over time. Instead, it samples these cases explicitly with a much higher density.
Next Event Backtracking

The basic idea is simple: whenever a path vertex is connected to a light source, we use this very same connection as the first segment of a photon path by creating a virtual light source at the path vertex. This, however, has several complex implications and a lot of potential for possible modifications. The outline of the algorithm (Figure 2) is as follows:

1. Trace paths as in Path Tracing
   (a) Store the hit-points (called NEE vertices)
2. NEE and photon tracing
   (a) Estimate virtual light source density
   (b) Compute NEEs
   (c) Trace photons and apply contribution directly
   (d) [Optional] Trace photons from the light source as usual
3. Compute self emittance contributions

Figure 2: Algorithm outline: In the first pass a PT is executed and the intermediate results are stored. In the second pass photons are traced from the non-zero-estimate vertices from pass one. Finally, the contributions from random hits, photons and NEE are weighted to compute the final contribution.

Trace View Paths
For tracing paths we use a conventional Path Tracer. However, it is not possible to compute the results for random hits and NEE immediately. To calculate the MIS weights, it is necessary to know the photon events, which themselves depend on the density of NEE vertices (which are being created in this pass). For this reason the intermediate results for the events must be stored.
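The stored intermediate results could, for instance, be kept in a record like the following sketch. The field names and layout are illustrative only, not the paper's actual implementation:

```cpp
// Hypothetical record for one stored NEE vertex from pass 1.
// The paper only states *what* must be stored; this layout is a guess.
struct NEEVertex {
    float position[3];       // hit point: used for density estimate and merges
    float normal[3];         // surface normal: needed for the octree area estimate
    float throughput[3];     // tau(nu_k): product of f_i/p_i along the view path
    float neeDir[3];         // direction d of the next event connection
    float diffIrradiance[3]; // differential irradiance dE of the NEE (RGB)
    int   pixel;             // pixel in which the view path originated
    int   pathLength;        // k, index of this vertex on the view path
};
```

One such record per non-zero NEE estimate would then live in the spatial data structure that pass 2 uses for its neighborhood searches.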
Our overall goal is to transform the stored vertices into virtual light sources which emit photons. To transform the incident differential irradiance dE (unit W m⁻²), which is available through NEE, into a flux Φ (unit W) we need the density of source vertices. Effectively, we need to know the area A of the virtual light source to integrate the incoming irradiance:

Φ = dE · dA = dE/ρ,    (1)

where ρ is the density of NEE vertices at the current virtual light source. We can insert dA = 1/ρ because the area which is associated with each vertex depends on this density: if there are more vertices, each one represents a smaller part of the surface area.

Note that it is not important how the vertices were generated. It only matters how many vertices are stored in a local area to turn each of them into an unbiased emitter. Especially, the past (sampling events, Russian Roulette, ...) of the view path vertices is not important.

So, in step (a) the density of vertices must be computed at each vertex. This can be achieved with k-nearest neighbor searches or an additional data structure. We use a sparse octree to integrate density over regions and time, as will be detailed in Section 5. The advantage of the octree is a higher performance and less noise in the estimates.

Now (step (b)), we can compute a next event estimation for each of the stored vertices and compute its contribution

L_E = w_{E,k} · τ(ν_k) · f(ν_k, d) · dE    (2)

with

d = direction of the NEE
dE = differential irradiance of the NEE
ν_k = vertex with index k (camera has index 0)
τ(ν_k) = path throughput from MC sampling, ∏_{i=0}^{k-1} f_i/p_i
f(ν, d) = Bidirectional Scattering Distribution Function (BSDF)
w_{E,k} = MIS weight (see Section 4); unbiased if we know the true density, which is not the case.

Figure 3: MIS-weight computation between several events along the same path. The shown example path has length ℓ = 4.

It must be added to the pixel in which the path originated.
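Numerically, Equations (1) and (2) amount to only a few multiplications per stored vertex. A minimal scalar sketch (RGB channels and the actual BSDF evaluation are omitted; all inputs are assumed to be given by the renderer):

```cpp
// Sketch of Equations (1) and (2), scalar-valued for brevity.

// Eq. (1): turn the incident differential irradiance dE [W/m^2] into the
// flux [W] of the virtual light source, using the NEE vertex density rho.
double virtualLightFlux(double dE, double rho) {
    return dE / rho; // dA = 1/rho is the area represented by one vertex
}

// Eq. (2): MIS-weighted NEE contribution at vertex nu_k.
// tau is the path throughput, bsdf the evaluated f(nu_k, d).
double neeContribution(double w_E, double tau, double bsdf, double dE) {
    return w_E * tau * bsdf * dE;
}
```

In the renderer, dE and the direction d come from the NEE, τ from the path state, and ρ from the density data structure of Section 5.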
The details to compute the MIS weight w_{E,k} are given in the next section.

Photons can be traced (step (c)) by starting a new random walk at the current vertex, after applying Equation (1). The first sampling event is based on the NEE connection direction d as incident direction and the vertex's BSDF. It is not necessary to store photons, because we can invert the search for merge events: instead of searching for photons around the view path vertices, we can as well search for view path vertices around photons, which produces identical results. Therefore, the NEE vertex storage must be some kind of spatial data structure to support neighborhood searches.

Then the contribution of a photon Φ_i at ν_i from direction d_i for a found vertex ν_k is

L_P = w_{P,k,i} · τ(ν_k) · f(ν_k, d_i) · Φ_i · K(‖ν_k − ν_i‖),    (3)

where details on the weight will, again, follow in the next section. The kernel K can be the uniform kernel 1/(πr²) for the search radius r or any other kernel used in photon mapping.

Optionally, it is possible to trace photons from the light source (step (d)) and add their contributions the same way as the photons from NEE vertices. Only the MIS weights must be adapted accordingly. We will show in Section 6 that adding those photons complements NEB where it is weakest.

Since it was not possible before step (2.a) to compute the MIS weights properly, we needed to store random hits of light sources in pass 1. Now, we can iterate over the stored events and compute the contributions

L_e = w_e · τ(ν_k) · L_e(ν_k).    (4)

MIS Weights

In our basic algorithm there are three types of events which must be weighted against each other. When conventional photons are traced too, there is one additional event type. Each path of length ℓ has a single random hit event, one NEE event if ℓ ≥ 2, and max(0, ℓ − 2) photon merge events, as depicted in Figure 3. Optionally, there are max(0, ℓ − 1) merges with usual photons.

In general, MIS weights can be computed with the balance heuristic [VG95b], which reads

w_j = n_j · p*_j / Σ_i n_i · p*_i,    (5)

where the p* are the path sampling densities of the different events, with respect to the area measure, and the n are the numbers of samples drawn with the respective samplers.

For random hits we have n_e = 1 and

p*_e = ∏_{i=0}^{ℓ-1} p_{i→i+1}    (6)

for the density of the full path. The Probability Density Functions (PDFs) p_{i→i+1} = p_i · cos θ / d² (unit m⁻²) describe the probability to reach vertex ν_{i+1} via random sampling of the BSDF at vertex ν_i with the PDF p_i.

The path density for NEEs is defined as

p*_E = p_ℓ · ∏_{i=0}^{ℓ-2} p_{i→i+1},    (7)

where p_ℓ is the sampling density per area to sample the specific point on the light source when attempting a random connection. The number of NEE samples is usually, but not necessarily, n_E = 1.

Next, we need the PDF for the new photon samplers. It is closely related to the NEE PDF in Equation (7), because each photon path starts with a next event estimation and therefore has to use p_ℓ. From there, a random walk in backward direction is performed up to the merge vertex ν_k. Further, it consists of another random walk beginning at the observer. Finally, we need the chance for a successful merge, ρ_{ℓ-1} · πr² / n_T, with n_T being the total number of photon path starting points, i.e. the number of stored NEE vertices. Together this gives

p*_{P,k} = p_ℓ · (ρ_{ℓ-1} · πr² / n_T) · ∏_{i=k}^{ℓ-2} p_{i+1→i} · ∏_{i=0}^{k-1} p_{i→i+1}    (8)

and n_P = n_T for the sampler count, because we reuse photons from all paths. The above merge chance is derived as follows: First, it must be proportional to the size of our search region πr². The larger the search region, the larger the chance to find something. Second, we need the probability density per area to start the respective photon random walk.
The density ρ_{ℓ-1} is the one we computed in step (2.a). It is the total number of events per area at this start vertex. Thus, ρ_{ℓ-1}/n_T is the PDF per area of a single sample. Both together give the chance to find the chosen photon sub-path.

Finally, we have

p*_{LP,k} = p_ℓ · πr² · ∏_{i=k}^{ℓ-1} p_{i+1→i} · ∏_{i=0}^{k-1} p_{i→i+1}    (9)

for the conventional photons [GKDS12, HPJ12]. The number of samples is the number of additional photon paths n_LP due to the global reuse of photons, or n_LP = 0 if disabled.

Plugging Equations (6) to (9) and the sampler counts into the balance heuristic (5) gives us the searched weights:

w_e = p*_e / p_sum
w_E = n_E · p*_E / p_sum
w_{P,k} = n_P · p*_{P,k} / p_sum
w_{LP,k} = n_LP · p*_{LP,k} / p_sum

with

p_sum = p*_e + n_E · p*_E + n_P · Σ_{k=1}^{ℓ-2} p*_{P,k} + n_LP · Σ_{k=1}^{ℓ-1} p*_{LP,k}.    (10)
The Density Octree

An important point for the correctness and performance of the NEB operator is the estimate of the density ρ (required in Equation (1)). Density estimation is a well explored research area, for which we refer to the book of Silverman [Sil86] and the survey of Sheather [She04]. Unfortunately, the density estimation quickly becomes the bottleneck of the algorithm and the choice of the data structure is very important.

Our first approach was to use a hash-grid to query the number of photons within a predefined search radius. This is referred to as the naïve estimator in Silverman's book and has a fundamental drawback: its bias gets very large if the distribution of NEE vertices is irregular. In low density regions (less than one vertex per search region) the estimate will always find the query point, but likely no others. Therefore, it may overestimate the density by an unbounded factor. On the other hand, in high density regions it will blur the density function more than necessary.

Our second approach was to use a kd-tree [Ben75] to estimate the density with the k-nearest neighbor approach. The nearest neighbor approach scales much better with irregular densities, and we found the bias acceptable for sufficiently large k.

To improve performance, we implemented a dedicated data structure for the density estimate. Other than the hash-grid or the kd-tree, this new data structure is not able to find the actual vertices, because we store particle counts explicitly. Therefore, it achieves a speedup of 3000× compared to the kd-tree while having a comparable bias.

Figure 4: Concepts of the density octree. Left: a sparse quadtree with a uniform count of particles per cell. Right: one possible case for the intersection area between plane and cube.

The idea is to use a sparse octree which stores an atomic counter per cell. Whenever a vertex is created, it increases the counter of the cell in which it lies by one.
If the number is greater than some predefined threshold, the cell is split into eight new cells. The resulting tree has large cells in low density regions and small ones in high density regions, like the kd-tree. An example quadtree (2D octree) is shown in Figure 4.

When splitting a cell, we need to initialize the eight new cells. Unfortunately, the distribution of points which filled the cell is not known anymore and we need to make a guess. Without further knowledge we can only distribute the counter uniformly to all child cells. However, dividing the counter by eight, the number of children, systematically produces an underestimated initialization. The reason is that a 2D surface intersects only an expected four of the eight children. Therefore, using the parent counter divided by four as initialization value turned out to be less biased in practice. The too large values in cells without a surface intersection do not matter, since they are never queried.
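A simplified, single-threaded sketch of the counting cells follows; the actual lock-free atomic version is deferred to Appendix B, and the traversal from the root to the leaf containing the vertex is omitted:

```cpp
#include <vector>

// Simplified, single-threaded sketch of the counting octree.
// Names and layout are illustrative only.
struct CountOctree {
    static constexpr int SPLIT_THRESHOLD = 4; // value used in the paper

    // value >= 0: particle counter; value < 0: -value is index of the
    // first of eight consecutive children (root is cell 0, never a child)
    std::vector<int> cells{0}; // start with a single root cell

    // Increment the counter of leaf `cell`; split it when the threshold
    // is exceeded.
    void increment(int cell) {
        if (cells[cell] >= 0 && ++cells[cell] > SPLIT_THRESHOLD) {
            int first = (int)cells.size();
            // Initialize children with parent/4: a surface is expected
            // to intersect only ~4 of the 8 children.
            int init = cells[cell] / 4;
            for (int i = 0; i < 8; ++i) cells.push_back(init);
            cells[cell] = -first; // turn the leaf into an inner node
        }
    }
};
```

With threshold 4, the fifth increment of a leaf triggers the split and distributes the count of 5 as ⌊5/4⌋ = 1 to each child.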
When a query is made, the count c in the respective cell must be converted to a density. Assuming locally flat surfaces and that the current surface is the only one inside the cell, the intersection area between the cell's bounding box B and the plane P can be calculated. Thereby, the plane is defined by the surface position p and normal n. Thus, our density estimate is

ρ = c / |B ∩ P| = c/A,    (11)

where the intersection area is computed with

A = s_x · s_y,    if n_x = n_y = 0
A = (s_x / (n_y · n_z)) · |Σ_{i=0}^{3} ε_i · max(0, ⟨n, p⟩ − ⟨n, b_i⟩)|,    if n_x = 0    (12)
A = (1 / (2 · n_x · n_y · n_z)) · |Σ_{i=0}^{7} ε_i · max(0, ⟨n, p⟩ − ⟨n, b_i⟩)²|,    otherwise

with s = b_7 − b_0 and ε_i = +1 for corner indices i with an even number of set bits and ε_i = −1 otherwise. Here, x, y and z are the indices of the dimensions, the b_i are the corners of the box, s is the size of the box and ε_i is the parity of the eight corners. In the first case, two of the dimensions of n are zero; w.l.o.g. only n_z ≠ 0 is shown, and the terms for the other two dimensions are defined analogously. Similarly, case two shows the situation where one dimension of n is zero, with the sum running over the four corners of one face of constant x; again, there are analogous terms for the other dimensions. The third case applies if no dimension is zero. A derivation can be found in Appendix A.

An advantage of the dedicated structure is that the density can be integrated over time. Therefore, the split threshold must be increased proportionally to the iteration count and Equation (11) must be divided by the number of iterations. This reduces noise and increases the independence between the current sample set and the density estimate.

Figure 5: Queried density from a 4-nearest neighbor search (left) compared to the octree results after the first and the fourth iteration.

Figure 5 visualizes the query results and shows a fast convergence of the estimate.
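The general case of Equation (12) can be checked numerically. The sketch below implements the corner-parity sum (assuming a unit-length normal with no zero component); it reproduces, for example, the hexagonal cross-section of the unit cube for n = (1,1,1)/√3 through the center, |B ∩ P| = 3√3/4:

```cpp
#include <algorithm>
#include <cmath>

// Intersection area of a plane (unit normal n through point p) with an
// axis-aligned box [lo, hi]: the general case of Eq. (12), where no
// component of n is zero. Corner parity eps_i is +1 for an even number
// of set bits in the corner index, -1 otherwise.
double planeBoxArea(const double n[3], const double p[3],
                    const double lo[3], const double hi[3]) {
    double dp = n[0] * p[0] + n[1] * p[1] + n[2] * p[2];
    double sum = 0.0;
    for (int i = 0; i < 8; ++i) {
        double c[3] = { (i & 1) ? hi[0] : lo[0],
                        (i & 2) ? hi[1] : lo[1],
                        (i & 4) ? hi[2] : lo[2] };
        double h = dp - (n[0] * c[0] + n[1] * c[1] + n[2] * c[2]);
        double eps = ((i & 1) ^ ((i >> 1) & 1) ^ ((i >> 2) & 1)) ? -1.0 : 1.0;
        sum += eps * std::max(0.0, h) * std::max(0.0, h);
    }
    return std::fabs(sum / (2.0 * n[0] * n[1] * n[2]));
}
```

The degenerate cases (one or two zero components of n) need the other two branches of Equation (12) and are not handled by this sketch.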
After four iterations the octree already has significantly less noise than the kd-tree-based search. Unfortunately, there are small dark points (see image on the right) which are caused by floating-point precision issues. These can cause too bright photons in the image, which are often hidden by the MIS.

Figure 6: Equal time comparison (5 min) without and with additional photons (option 2.c., Section 6.1) in a light bulb scenario. NEB: 68 spp, NEB + LP: 14 spp.

We store the eight counters of the sibling cells in a consecutive sequence. This allows us to use a single pointer in a parent to address all its children. Moreover, it is possible to encode the counter and the child index in the same integer to save memory. If the stored value is negative, its absolute value is the index of the first child node. Otherwise, it is a counter.

The aforementioned split threshold is set to four in our implementation. Using larger numbers introduces too much bias, because the blurring area increases and the planar surface assumption is less valid. Using smaller numbers increases the memory consumption and leads to more noise.

The necessary memory depends on two factors only: the expected number of NEE vertices divided by the split threshold. Particularly, it is independent of the scene. For typical setups the density octree requires less than 50 MB (often less than 10 MB suffice).

For more details on the lock-free implementation, we provide the code in Appendix B.
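The sign encoding described above can be captured in a few tiny helpers. This is an illustrative sketch; the real implementation additionally performs the updates atomically:

```cpp
// Sign encoding of a cell value: a non-negative value is a particle
// counter, a negative value stores the index of the first of the eight
// consecutive children. Valid because the root has index 0 and can
// never be a child, so child indices are always positive.
inline int  makeParent(int firstChild) { return -firstChild; }
inline bool isParent(int value)        { return value < 0; }
inline int  firstChild(int value)      { return -value; }
```

Packing both meanings into one integer keeps a cell at four bytes, which is what makes the memory footprint quoted above possible.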
Modifications

Having a robust density estimate, all tools for NEB are given. In this section, we evaluate simple modifications to improve the performance or robustness of the basic algorithm.
Additional Light Photons

While the basic algorithm is very strong in scenarios like the teapot in a stadium (Figure 1), it fails when the caustic-throwing object is much closer to the light source than to the receiver. Consider the example of a light bulb: an emitter directly enclosed by its caustic-throwing glass (Figure 6).

Reusing NEE Results

Since we already store the NEE vertices in a search data structure, it seems reasonable to share the results of NEE events. Therefore, it is necessary to store the NEE information along with the vertices and to query those with a neighborhood search. Then, all available NEEs at one vertex must be averaged and the effective count of NEEs n_E in Equation (10) increases to the number of found events.

Figure 7: Single NEE compared to NEE reuse for path lengths ℓ = 2, 3, 4, 5 and ℓ ∈ [2, ∞).

In Figure 7 we show an experiment with and without NEE reuse to judge the effectiveness of the proposed modification. Enabling the merges is clearly slower due to the range query and the additional evaluations of BSDFs. For the range queries we used a hash grid with a fixed query radius. Despite the lower number of samples, the noise level (Root Mean Squared Error) is slightly better when reusing the NEEs. However, the effectiveness decreases with path length and gets worse than usual PT for practical path lengths. Repeated experiments with different merge radii had the same outcome. The reason for the low effectiveness is that the noise in indirect lighting is dominated by the random walk and not the NEE.

Concluding, the idea of reusing the NEE events sounds promising, but does not pay off in this form. Therefore, we used only the one primary NEE without this modification in all other experiments.
Results

The next event backtracking operator has clear strengths and weaknesses. It is strong whenever NEE is more likely than other events on the path. This is the case for small or distant light sources like in Figure 1. Additionally, it scales well with many lights if contribution-sensitive NEEs are used. It is weak whenever the vertex density is much smaller than the light contribution. In this situation, combining NEB with light photons (+LP) can alleviate the problem.

In Figure 8 we show selected light situations with an equal time comparison of the basic NEB method to other rendering algorithms. For the first two scenarios our method is superior due to the uniform sampling density of NEE vertices on the glass surfaces. The Mirrors scenario shows a small degeneration in quality where the distance to the caustic receiving surface increases (mirrors at the top). The Reflector scenario represents the worst case. Here, the tiny, bright surfaces close to the point light source are seldom found by a Path Tracer.

Figure 8: Equal time comparison (1 min) of different rendering algorithms (Reference, NEB, PT, BPT, BPM, VCM) in difficult light scenarios (Caustics, SDS, Mirrors, Reflector).

In Figure 9 more realistic scenes are compared. For some situations, like in the Watch scene, NEB is superior to the state-of-the-art VCM. In other situations (Bathroom or Christmas scene) it shows more noise than VCM without modifications. Enabling the additional tracing of light photons makes NEB equally effective as VCM. In all our experiments we found that NEB+LP is very strong for caustics and SDS paths regardless of the scene scale. For diffuse indirect lighting it performs on an average level which is comparable to most other methods.

Figure 9: Comparison of NEB to VCM in more realistic scenes. NEB / NEB + LP / VCM: Watch 38 / 35 / 49 spp, Bathroom 24 / 15 / 13 spp, Christmas 42 / 22 / 13 spp.

Comparing NEB as a method to produce high quality photon maps against the older methods [PP98, SW00, KW00], it produces a medium quality map. In most scenes, it performs similarly to the other methods without wasting samples for the estimation of importance distributions. However, if there are small reflectors close to the light source (light bulb or Figure 8) it will be less effective, because it does not respect the radiance distribution. Adding standard photons compensates this weakness, but their distribution does not follow importance again. However, since none of the photons is ever stored, the memory consumption of additional photons is of no interest.
Conclusion

We proposed a new light transport operator – the next event backtracking – which is strong in situations where the usual photon transport fails. It can successfully render caustics of small objects in large scenes due to an implicit guidance of light paths toward the important regions and is often very strong for SDS paths.

The most difficult problem is a robust and fast density estimate to convert irradiance into flux at the selected positions. We introduced an octree-based data structure which is fast, but may cause small artifacts from its grid structure, leading to overly bright photons.

Our NEB algorithm used the operator in the setup of a conventional Path Tracer. In this configuration, certain light situations can still cause high variance. The modification to add conventional photon tracing, beginning at the light source, solves this problem. Effectively, NEB is weak where standard photon tracing is strong and vice versa. Therefore, both together result in a very robust algorithm. Only scenes with bad visibility between observer and light source are still a problem.

One of the practical weaknesses is the memory consumption, which is slightly higher than in other photon mapping-based approaches. The view path vertex storage replaces the one for photons and has a comparable size. Additionally, there is the octree to store the density values with up to 50 MB and the storage for emissive events with another 50 MB per one million paths.

Also, it would be interesting to implement the algorithm on a GPU. In contrast to BPT or VCM, the number of connections is only linear in the path length. This reduces divergence and per-path time, such that our method should scale well on parallel hardware.

Another future avenue would be to use MCMC samplers for the view paths or to use the NEB paths as seed paths for MCMC samplers. This would increase the method's robustness with respect to bad visibility of the light source due to high occlusion.

Finally, our approach allows the use of arbitrarily sampled positions as photon emitters. It would be possible (but likely ineffective) to distribute emitters equally on the surfaces – independent of camera and light sources. More interesting would be to use Markov chains to sample the emitters on the surfaces themselves. Similar to Lightweight Photon Mapping [GPGSK18], it would then be possible to remove emitters on surfaces where other samplers are more successful and to increase the density otherwise.

Acknowledgments
I want to thank those people who make 3D scenes publicly available. The villa and the bathroom scene are taken from the PBRT repository. The stadium was modeled by Ericchip1983 and is provided on blendswap.com/blends/74526. The wrist watch was created by HeraSK (blendswap.com/blends/70232).
References

[Ben75] Bentley J. L.: Multidimensional Binary Search Trees Used for Associative Searching. Communications of the ACM 18, 9 (Sept. 1975), 509–517. doi:10.1145/361002.361007.

[CKL∗10] Choi B., Komuravelli R., Lu V., Sung H., Bocchino R. L., Adve S. V., Hart J. C.: Parallel SAH k-D Tree Construction. In Proc. of High Performance Graphics (HPG) (2010), Eurographics Association, pp. 77–86. https://github.com/bchoi/ParKD. URL: http://dl.acm.org/citation.cfm?id=1921479.1921492.

[FHF∗17] Fascione L., Hanika J., Fajardo M., Christensen P., Burley B., Green B.: Path Tracing in Production - Part 1: Production Renderers. In ACM SIGGRAPH Courses (2017), pp. 13:1–13:39. doi:10.1145/3084873.3084904.

[GKDS12] Georgiev I., Křivánek J., Davidovič T., Slusallek P.: Light Transport Simulation with Vertex Connection and Merging. ACM Transactions on Graphics (Proc. of SIGGRAPH Asia) 31, 6 (2012), 192:1–192:10. URL: https://cgg.mff.cuni.cz/~jaroslav/papers/2012-vcm/, doi:10.1145/2366145.2366211.

[GPGSK18] Grittmann P., Pérard-Gayot A., Slusallek P., Křivánek J.: Efficient Caustic Rendering with Lightweight Photon Mapping. Computer Graphics Forum 37, 4 (2018), 133–142. doi:10.1111/cgf.13481.

[HDF15] Hanika J., Droske M., Fascione L.: Manifold Next Event Estimation. Computer Graphics Forum (Proc. EGSR) 34, 4 (July 2015), 87–97. doi:10.1111/cgf.12681.

[HJ11] Hachisuka T., Jensen H. W.: Robust Adaptive Photon Tracing using Photon Path Visibility. ACM Transactions on Graphics (TOG) 30, 5 (2011), 114.

[HPJ12] Hachisuka T., Pantaleoni J., Jensen H. W.: A Path Space Extension for Robust Light Transport Simulation. ACM Transactions on Graphics (Proc. of SIGGRAPH Asia) 31, 6 (Nov. 2012), 191:1–191:10. doi:10.1145/2366145.2366210.

[Jen96] Jensen H. W.: Global Illumination using Photon Maps. In Proc. of Eurographics Workshop on Rendering (EGWR) (1996), Springer, pp. 21–30. URL: http://dl.acm.org/citation.cfm?id=275458.275461.

[Jen19] Jendersie J.: Variance Reduction via Footprint Estimation in the Presence of Path Reuse. In Ray Tracing Gems, Haines E., Akenine-Möller T. (Eds.), 1st ed. Apress, Feb. 2019, pp. 557–569. URL: http://raytracinggems.com.

[JG18] Jendersie J., Grosch T.: An Improved Multiple Importance Sampling Heuristic for Density Estimates in Light Transport Simulations. In Proc. of Eurographics Symposium on Rendering EI&I Track (EGSR) (July 2018), Eurographics Association, pp. 65–72. doi:10.2312/sre.20181173.

[Kaj86] Kajiya J. T.: The Rendering Equation. Computer Graphics (Proc. ACM SIGGRAPH) 20, 4 (Aug. 1986), 143–150. doi:10.1145/15886.15902.

[KW00] Keller A., Wald I.: Efficient Importance Sampling Techniques for the Photon Map. In Proc. of Vision, Modeling, and Visualization (VMV) (2000), pp. 271–278.

[LW93] Lafortune E. P., Willems Y. D.: Bi-Directional Path Tracing. In Proc. of Conference on Computational Graphics and Visualization Techniques (1993), pp. 145–153. URL: https://lirias.kuleuven.be/handle/123456789/132773.

[MGN17] Müller T., Gross M., Novák J.: Practical Path Guiding for Efficient Light-Transport Simulation. In Proc. of Eurographics Symposium on Rendering (EGSR) (June 2017). doi:10.1111/cgf.13227.

[PP98] Peter I., Pietrek G.: Importance Driven Construction of Photon Maps. In Proc. of Eurographics Workshop on Rendering (EGWR) (1998), Springer-Verlag, pp. 269–280. doi:10.1007/978-3-7091-6453-2_25.

[She04] Sheather S. J.: Density Estimation. Statistical Science 19, 4 (2004), 588–597.

[Sil86] Silverman B. W.: Density Estimation for Statistics and Data Analysis. Chapman and Hall/CRC, Apr. 1986.

[ŠK18] Šik M., Křivánek J.: Survey of Markov Chain Monte Carlo Methods in Light Transport Simulation. IEEE Transactions on Visualization and Computer Graphics (TVCG) (Nov. 2018). doi:10.1109/TVCG.2018.2880455.

[SW00] Suykens F., Willems Y. D.: Density Control for Photon Maps. In Proc. of Eurographics Workshop on Rendering (EGWR) (2000), Springer, pp. 23–34. URL: http://graphics.cs.kuleuven.be/publications/PHOTONDC/index.html, doi:10.1007/978-3-7091-6303-0_3.

[VG95a] Veach E., Guibas L. J.: Bidirectional Estimators for Light Transport. In Photorealistic Rendering Techniques. Springer Berlin Heidelberg, 1995, pp. 145–167. URL: http://dx.doi.org/10.1007/978-3-642-87825-1_11.

[VG95b] Veach E., Guibas L. J.: Optimally Combining Sampling Techniques for Monte Carlo Rendering. In Proceedings of SIGGRAPH (1995), ACM, pp. 419–428. doi:10.1145/218380.218498.

Figure 10: Intersection area from simplex volume. In 2D the simplex volume is the area of a triangle. Left: a single vertex of the box lies below the plane (p, n). Right: sequence of events when moving the plane along n.

A Intersection Area between Plane and Box
We are sure that this formula is not new, but we were not able to find a citeable reference for Equation (12). We found the derivation on Math Stack Exchange (https://math.stackexchange.com/a/885662/661978) and repeat it here for completeness.

The basic idea is to compute the volume $V$ inside the box and below the plane with respect to the plane offset $h = \langle \mathbf{n}, \mathbf{p} \rangle$. Then the derivative of $V$ yields the searched area. As primary conditions we have $\|\mathbf{n}\| = 1$ and that the box is axis aligned. Let

$$\Delta(h, \mathbf{x}) = \{ \mathbf{y} \in \mathbb{R}^d : \forall i\; y_i \geq x_i \wedge \langle \mathbf{n}, \mathbf{y} \rangle \leq h \} \qquad (13)$$

be the $d$-dimensional simplex which starts at $\mathbf{x}$, as depicted in Figure 10. The volume of this simplex can be computed from a product of its (axis-aligned) edge lengths with

$$V(\Delta(h, \mathbf{x})) = V(\Delta(h - \langle \mathbf{n}, \mathbf{x} \rangle, \mathbf{0})) = \frac{\max(0, h - \langle \mathbf{n}, \mathbf{x} \rangle)^d}{d! \prod_i n_i}. \qquad (14)$$

The clamping to zero is necessary if the entire box is on the upper side of the plane.

Now, moving the plane along $\mathbf{n}$, it will pass all vertices of the box in a sorted order. This sequence is shown in Figure 10, too. After passing the second vertex, the simplex from the first one will overestimate the volume. Here, the second simplex, starting at the second vertex, must be subtracted. The same applies to the third vertex. After the fourth vertex, the subtracted areas from the second and third vertex will overlap and the volume is underestimated. Thus, the volume of the fourth simplex must be added again. This alternating sign is described by the parity $\epsilon_i$ as given in Equation (12).

Hence, the volume of the box $\mathcal{B}$ with respect to the plane is

$$V(\mathcal{B}, h) = \frac{1}{d! \prod_i n_i} \sum_{i=0}^{2^d - 1} \epsilon_i \max(0, h - \langle \mathbf{n}, \mathbf{b}_i \rangle)^d. \qquad (15)$$

Finally, the change of volume over $h$ is dominated by the area of the infinitesimal slab, which gives the area

$$A(\mathcal{B}, h) = \frac{\partial V(\mathcal{B}, h)}{\partial h} = \frac{1}{(d-1)! \prod_i n_i} \sum_{i=0}^{2^d - 1} \epsilon_i \max(0, h - \langle \mathbf{n}, \mathbf{b}_i \rangle)^{d-1}. \qquad (16)$$

Equation (12) is then obtained by applying the above equation for one, two, and three dimensions. Each component of $\mathbf{n}$ which is zero means that the normal is perpendicular to the respective edge of the box. Then, the size $s$ of the box must be multiplied with the area of the simplex from the reduced dimension.

B Octree Implementation
To use the following C++ source code, you must provide some implementation of a 3D vector with the following operations:

• arithmetic (+, -, *, /)
• array access [0] to [2] for the three dimensions
• functions like dot(), abs(), len()

Helper function which implements Equation (12):

inline float sq(float x) { return x * x; }

// Compute the area of the plane-box intersection
// https://math.stackexchange.com/questions/885546
// https://math.stackexchange.com/a/885662
// or Appendix A
inline float intersection_area(const vec3& bmin, const vec3& bmax,
                               const vec3& pos, const vec3& normal) {
    vec3 cellSize = bmax - bmin;
    vec3 absN = abs(normal);
    // 1D cases: the normal is axis aligned -> the intersection is a full box face
    if(abs(absN[0] - 1.0f) < 1e-3f) return cellSize[1] * cellSize[2];
    if(abs(absN[1] - 1.0f) < 1e-3f) return cellSize[0] * cellSize[2];
    if(abs(absN[2] - 1.0f) < 1e-3f) return cellSize[0] * cellSize[1];
    // 2D cases: one component is zero -> reduced-dimension area times box size
    for(int d = 0; d < 3; ++d) if(absN[d] < 1e-4f) {
        int x = (d + 1) % 3;
        int y = (d + 2) % 3;
        // Use the formula from stackexchange: phi(t) = max(0,t)^2 / (2 m1 m2)
        // -> l(t) = sum over the 4 corners of +-max(0, t - dot(m,v)) / (m1 m2)
        // -> A(t) = l(t) * h3
        float t = normal[x] * pos[x] + normal[y] * pos[y];
        float sum = 0.0f;
        sum += max(0.0f, t - (normal[x] * bmin[x] + normal[y] * bmin[y]));
        sum -= max(0.0f, t - (normal[x] * bmin[x] + normal[y] * bmax[y]));
        sum -= max(0.0f, t - (normal[x] * bmax[x] + normal[y] * bmin[y]));
        sum += max(0.0f, t - (normal[x] * bmax[x] + normal[y] * bmax[y]));
        return cellSize[d] * abs(sum / (normal[x] * normal[y]));
    }
    // 3D case: Equation (12) over all eight box vertices
    float t = dot(normal, pos);
    float sum = 0.0f;
    sum += sq(max(0.0f, t - dot(normal, bmin)));
    sum -= sq(max(0.0f, t - dot(normal, vec3{bmin[0], bmin[1], bmax[2]})));
    sum += sq(max(0.0f, t - dot(normal, vec3{bmin[0], bmax[1], bmax[2]})));
    sum -= sq(max(0.0f, t - dot(normal, vec3{bmin[0], bmax[1], bmin[2]})));
    sum += sq(max(0.0f, t - dot(normal, vec3{bmax[0], bmax[1], bmin[2]})));
    sum -= sq(max(0.0f, t - dot(normal, vec3{bmax[0], bmin[1], bmin[2]})));
    sum += sq(max(0.0f, t - dot(normal, vec3{bmax[0], bmin[1], bmax[2]})));
    sum -= sq(max(0.0f, t - dot(normal, bmax)));
    return abs(sum / (2.0f * normal[0] * normal[1] * normal[2]));
}

Helper to track the depth of the tree:

template
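The 3D vector type itself is left to the reader. The following is a minimal sketch satisfying the listed requirements, together with a numeric sanity check; both `vec3` and `area_eq16` are our own stand-ins for illustration, not part of the paper's implementation. `area_eq16` is a direct transcription of Equation (16) for the 3D general-position case, which can be used to cross-check an optimized routine such as the one above.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the vec3 interface assumed above (a sketch only):
// arithmetic, operator[], dot(), abs(), len().
struct vec3 {
    float v[3];
    float& operator[](int i)       { return v[i]; }
    float  operator[](int i) const { return v[i]; }
};
inline vec3  operator+(const vec3& a, const vec3& b) { return {a[0]+b[0], a[1]+b[1], a[2]+b[2]}; }
inline vec3  operator-(const vec3& a, const vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
inline vec3  operator*(const vec3& a, float s)       { return {a[0]*s, a[1]*s, a[2]*s}; }
inline vec3  operator/(const vec3& a, float s)       { return {a[0]/s, a[1]/s, a[2]/s}; }
inline float dot(const vec3& a, const vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
inline vec3  abs(const vec3& a) { return {std::fabs(a[0]), std::fabs(a[1]), std::fabs(a[2])}; }
inline float len(const vec3& a) { return std::sqrt(dot(a, a)); }

// Direct transcription of Equation (16) for d = 3 (all normal components
// non-zero), looping over the 8 box vertices b_i with parity eps_i.
inline float area_eq16(const vec3& bmin, const vec3& bmax,
                       const vec3& pos, const vec3& n) {
    float h = dot(n, pos);
    float sum = 0.0f;
    for(int i = 0; i < 8; ++i) {
        vec3 b;
        float eps = 1.0f;  // parity: (-1)^(number of max-coordinates of b_i)
        for(int d = 0; d < 3; ++d) {
            bool useMax = (i >> d) & 1;
            b[d] = useMax ? bmax[d] : bmin[d];
            if(useMax) eps = -eps;
        }
        float t = std::fmax(0.0f, h - dot(n, b));
        sum += eps * t * t;
    }
    // 1 / ((d-1)! * prod n_i) with (d-1)! = 2 for d = 3
    return std::fabs(sum / (2.0f * n[0] * n[1] * n[2]));
}
```

For the unit cube and the diagonal normal $(1,1,1)/\sqrt{3}$ through the center, the intersection is a regular hexagon of area $3\sqrt{3}/4 \approx 1.299$, which this transcription reproduces.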