
Publication


Featured research published by Aaron E. Lefohn.


International Conference on Computer Graphics and Interactive Techniques | 2016

Towards foveated rendering for gaze-tracked virtual reality

Anjul Patney; Marco Salvi; Joohwan Kim; Anton S. Kaplanyan; Chris Wyman; Nir Benty; David Luebke; Aaron E. Lefohn

Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head-mounted displays, where target framerate and resolution are increasing faster than the performance of traditional real-time renderers. To study and improve potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays. We determined that filtering peripheral regions reduces contrast, inducing a sense of tunnel vision. When a postprocess contrast enhancement was applied, subjects tolerated up to 2× larger blur radii before detecting differences from a non-foveated ground truth. After verifying these insights on both desktop and head-mounted displays augmented with high-speed gaze tracking, we designed a perceptual target image to strive for when engineering a production foveated renderer. Given our perceptual target, we designed a practical foveated rendering system that reduces the number of shades by up to 70% and allows coarsened shading up to 30° closer to the fovea than Guenter et al. [2012] without introducing perceivable aliasing or blur. We filter both pre- and post-shading to address aliasing from undersampling in the periphery, introduce a novel multiresolution- and saccade-aware temporal antialiasing algorithm, and use contrast enhancement to help recover peripheral details that are resolvable by our eye but degraded by filtering. We validate our system by performing another user study. Frequency analysis shows our system closely matches our perceptual target. Measurements of temporal stability show we obtain quality similar to temporally filtered non-foveated renderings.
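
The core recipe the abstract describes, eccentricity-dependent peripheral filtering followed by a postprocess contrast boost, can be sketched compactly. This is a minimal illustration, not the paper's renderer; the function names, the blend between sharp and blurred layers, and all radii and gains are assumptions:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def foveate(img, gaze, fovea_radius=80.0, max_sigma=6.0, boost=0.7):
        """img: float image (H, W, 3); gaze: (x, y) fixation in pixels."""
        h, w = img.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        ecc = np.hypot(ys - gaze[1], xs - gaze[0])          # eccentricity in pixels
        # Blur strength ramps up with eccentricity outside the fovea.
        t = np.clip((ecc - fovea_radius) / fovea_radius, 0.0, 1.0)[..., None]
        blurred = gaussian_filter(img, sigma=(max_sigma, max_sigma, 0))
        # Postprocess contrast enhancement: push back local contrast lost to
        # the blur (unsharp-mask style), countering the tunnel-vision effect.
        lowfreq = gaussian_filter(blurred, sigma=(2 * max_sigma, 2 * max_sigma, 0))
        enhanced = blurred + boost * (blurred - lowfreq)
        return (1.0 - t) * img + t * enhanced               # sharp fovea, filtered periphery

    # e.g. out = foveate(frame.astype(np.float32) / 255.0, gaze=(960, 540))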


ACM Transactions on Graphics | 2017

Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder

Chakravarty Reddy Alla Chaitanya; Anton S. Kaplanyan; Christoph Schied; Marco Salvi; Aaron E. Lefohn; Derek Nowrouzezahrai; Timo Aila

We describe a machine learning technique for reconstructing image sequences rendered using Monte Carlo methods. Our primary focus is on reconstruction of global illumination with extremely low sampling budgets at interactive rates. Motivated by recent advances in image restoration with deep convolutional networks, we propose a variant of these networks better suited to the class of noise present in Monte Carlo rendering. We allow for much larger pixel neighborhoods to be taken into account, while also improving execution speed by an order of magnitude. Our primary contribution is the addition of recurrent connections to the network in order to drastically improve temporal stability for sequences of sparsely sampled input images. Our method also has the desirable property of automatically modeling relationships based on auxiliary per-pixel input channels, such as depth and normals. We show significantly higher quality results compared to existing methods that run at comparable speeds, and furthermore outline a clear path for making our method run at real-time rates in the near future.
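
For intuition, here is a toy sketch of the recurrent connection the abstract highlights: each stage convolves the concatenation of its input with its own previous-frame activations, so information persists across a sequence. This numpy-only fragment is an assumption-laden illustration, not the paper's trained network or architecture:

    import numpy as np
    from scipy.ndimage import convolve

    class RecurrentStage:
        def __init__(self, in_ch, out_ch, rng):
            # One 3x3 kernel per (output, input + hidden) channel pair.
            self.w = rng.normal(0.0, 0.1, (out_ch, in_ch + out_ch, 3, 3))
            self.h = None                                   # previous-frame activations

        def __call__(self, x):                              # x: (in_ch, H, W)
            if self.h is None:
                self.h = np.zeros((self.w.shape[0],) + x.shape[1:])
            xh = np.concatenate([x, self.h], axis=0)        # recurrent connection
            out = np.stack([
                sum(convolve(xh[c], self.w[o, c]) for c in range(xh.shape[0]))
                for o in range(self.w.shape[0])
            ])
            self.h = out = np.maximum(out, 0.0)             # ReLU; carried to next frame
            return out

    # Per frame, feed noisy radiance plus auxiliary channels (depth, normals):
    # stage = RecurrentStage(in_ch=7, out_ch=16, rng=np.random.default_rng(0))
    # features = stage(np.concatenate([color, depth[None], normals], axis=0))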


International Conference on Computer Graphics and Interactive Techniques | 2016

Perceptually-based foveated virtual reality

Anjul Patney; Joohwan Kim; Marco Salvi; Anton S. Kaplanyan; Chris Wyman; Nir Benty; Aaron E. Lefohn; David Luebke

Humans have two distinct vision systems: foveal and peripheral vision. Foveal vision is sharp and detailed, while peripheral vision lacks fidelity. The difference in characteristics of the two systems enables recently popular foveated rendering systems, which seek to increase rendering performance by lowering image quality in the periphery. We present a set of perceptually-based methods for improving foveated rendering running on a prototype virtual reality headset with an integrated eye tracker. Foveated rendering has previously been demonstrated on conventional displays, but has recently become an especially attractive prospect in virtual reality (VR) and augmented reality (AR) display settings with large field-of-view (FOV) and high frame rate requirements. Investigating prior work on foveated rendering, we find that some previous quality-reduction techniques can create objectionable artifacts such as temporal instability and contrast loss. Our emerging technologies installation demonstrates these techniques running live in a head-mounted display, and we compare them against our new perceptually-based foveated techniques. Our new foveation techniques fulfill these requirements, enabling potentially large reductions in rendering cost with no discernible difference in visual quality.


Interactive 3D Graphics and Games | 2015

Frustum-traced raster shadows: revisiting irregular z-buffers

Chris Wyman; Rama Hoetzlein; Aaron E. Lefohn

We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p, and imposes no constraints on light, camera, or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps, we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires. Prior irregular z-buffer work relies heavily on GPU compute. Instead, we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.
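
The data structure at the heart of this system can be illustrated on the CPU. In an irregular z-buffer, each light-space texel stores a variable-length list of the eye-space samples that project into it, and rasterized occluders are tested against those lists. A minimal sketch, with hypothetical names and none of the paper's pipeline-level optimizations:

    import numpy as np
    from collections import defaultdict

    def build_izb(sample_xy, res):
        """sample_xy: (N, 2) light-space sample positions in [0, 1)^2."""
        izb = defaultdict(list)
        texels = np.clip((sample_xy * res).astype(int), 0, res - 1)
        for i, (tx, ty) in enumerate(texels):
            izb[(tx, ty)].append(i)            # variable-length list per texel
        return izb

    def shadow_test(izb, sample_depth, occluder_texel, occluder_z, lit):
        # For each rasterized occluder fragment, test every sample in its texel.
        for i in izb.get(occluder_texel, ()):
            if occluder_z < sample_depth[i]:   # occluder sits between sample and light
                lit[i] = False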


Interactive 3D Graphics and Games | 2015

Aggregate G-buffer anti-aliasing

Cyril Crassin; Morgan McGuire; Kayvon Fatahalian; Aaron E. Lefohn

We present Aggregate G-Buffer Anti-Aliasing (AGAA), a new technique for efficient anti-aliased deferred rendering of complex geometry using modern graphics hardware. In geometrically complex situations, where many surfaces intersect a pixel, current rendering systems shade each contributing surface at least once per pixel. As the sample density and geometric complexity increase, the shading cost becomes prohibitive for real-time rendering. Under deferred shading, so does the required framebuffer memory. AGAA uses the rasterization pipeline to generate a compact, pre-filtered geometric representation inside each pixel. We then shade this representation at a fixed rate, independent of geometric complexity. By decoupling shading rate from geometric sampling rate, the algorithm reduces the storage and bandwidth costs of a geometry buffer, and allows scaling to high visibility sampling rates for anti-aliasing. AGAA with two aggregate surfaces per pixel generates results comparable to 8x MSAA, but requires 30% less memory (45% savings for 16x MSAA), and is up to 1.3x faster.
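
The aggregation step can be sketched as follows: the visibility samples inside a pixel are merged into a fixed number of pre-filtered aggregates, each of which is shaded once. The clustering key below is a deliberately crude stand-in; the paper's actual aggregation criterion and G-buffer encoding are not reproduced here:

    import numpy as np

    def aggregate_pixel(normals, albedos, coverage, n_aggregates=2):
        """normals: (S, 3), albedos: (S, 3), coverage: (S,) for S samples."""
        # Crude partition key (sign of the normal's z component); the real
        # technique clusters inside the rasterizer with a smarter metric.
        key = (normals[:, 2] > 0).astype(int) % n_aggregates
        aggregates = []
        for k in range(n_aggregates):
            m = key == k
            if not m.any():
                continue
            w = coverage[m] / coverage[m].sum()
            n = (normals[m] * w[:, None]).sum(0)
            aggregates.append({
                "normal": n / (np.linalg.norm(n) + 1e-8),   # pre-filtered normal
                "albedo": (albedos[m] * w[:, None]).sum(0),
                "coverage": coverage[m].sum(),              # pixel area covered
            })
        return aggregates                                   # shade each once, not per sample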


High Performance Graphics | 2017

Spatiotemporal variance-guided filtering: real-time reconstruction for path-traced global illumination

Christoph Schied; Anton S. Kaplanyan; Chris Wyman; Anjul Patney; Chakravarty Reddy Alla Chaitanya; John Matthew Burgess; Shiqiu Liu; Carsten Dachsbacher; Aaron E. Lefohn; Marco Salvi

We introduce a reconstruction algorithm that generates a temporally stable sequence of images from global illumination rendered with one path per pixel. To handle such noisy input, we use temporal accumulation to increase the effective sample count, and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter [Dammertz et al. 2010]. This hierarchy allows us to distinguish between noise and detail at multiple scales using local luminance variance. Physically based light transport is a long-standing goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically based Monte Carlo global illumination does not meet their 30 Hz minimal performance requirement. Looking ahead to fully dynamic real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real time. When compared to prior interactive reconstruction filters, our work gives approximately 10× more temporally stable results, matches reference images 5--47% better (according to SSIM), and runs in just 10 ms (±15%) on modern graphics hardware at 1920×1080 resolution.
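
A single simplified filter pass conveys the idea: an à-trous wavelet kernel whose edge-stopping weight compares neighboring luminance differences against the local noise level estimated from variance. This sketch omits SVGF's depth and normal edge-stopping terms and the temporal accumulation stage, and all parameter values are assumptions:

    import numpy as np

    KERNEL = np.array([1, 4, 6, 4, 1]) / 16.0               # 1D B-spline weights

    def atrous_pass(color, variance, step, sigma_l=4.0):
        """One wavelet pass; repeat with step = 1, 2, 4, 8 for the hierarchy."""
        lum = color @ np.array([0.2126, 0.7152, 0.0722])
        out = np.zeros_like(color)
        wsum = np.zeros(lum.shape)
        for dy in range(-2, 3):
            for dx in range(-2, 3):
                k = KERNEL[dy + 2] * KERNEL[dx + 2]
                cq = np.roll(np.roll(color, dy * step, 0), dx * step, 1)
                lq = np.roll(np.roll(lum, dy * step, 0), dx * step, 1)
                # Edge-stopping: down-weight neighbors whose luminance differs
                # by more than the local noise level (sqrt of the variance).
                wl = np.exp(-np.abs(lq - lum) / (sigma_l * np.sqrt(variance) + 1e-6))
                out += (k * wl)[..., None] * cq
                wsum += k * wl
        return out / wsum[..., None]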


IEEE Transactions on Visualization and Computer Graphics | 2016

Frustum-Traced Irregular Z-Buffers: Fast, Sub-Pixel Accurate Hard Shadows

Chris Wyman; Rama Hoetzlein; Aaron E. Lefohn

We further describe and analyze a real-time system for rendering antialiased hard shadows using irregular z-buffers (IZBs) that we first presented in Wyman et al. [1]. We focus on identifying bottlenecks, exploring these from an algorithmic complexity standpoint, and presenting techniques to improve performance. Our system remains interactive on a variety of game assets and CAD models while running at resolutions of 1920×1080 and above, and imposes no constraints on light, camera, or geometry, allowing fully dynamic scenes without precomputation. We render sub-pixel accurate, 32-sample-per-pixel hard shadows at roughly twice the cost of a single sample per pixel. This allows us to smoothly animate even subpixel shadows from grass or wires without introducing spatial or temporal aliasing. Prior algorithms for irregular z-buffer shadows rely heavily on the GPU's compute pipeline. Instead, we leverage the standard rasterization-based graphics pipeline, including hardware conservative raster and early-z culling. Our key observation is a duality between irregular z-buffer performance and shadow map quality; irregular z-buffering is most costly exactly where shadow maps exhibit the worst aliasing. This allows us to use common shadow map algorithms, which typically improve aliasing, to instead reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 1 ms per frame.


High Performance Graphics | 2017

Interactive stable ray tracing

Alessandro Dal Corso; Marco Salvi; Craig Kolb; Jeppe Revall Frisvad; Aaron E. Lefohn; David Luebke

Interactive ray tracing applications running on commodity hardware can suffer from objectionable temporal artifacts due to a low sample count. We introduce stable ray tracing, a technique that improves temporal stability without the over-blurring and ghosting artifacts typical of temporal post-processing filters. Our technique is based on sample reprojection and explicit hole filling, rather than relying on hole-filling heuristics that can compromise image quality. We make reprojection practical in an interactive ray tracing context through the use of a super-resolution bitmask to estimate screen-space sample density. We show significantly improved temporal stability compared with supersampling and existing reprojection techniques. We also investigate the performance and image quality differences between our technique and temporal antialiasing, which typically incurs a significant amount of blur. Finally, we demonstrate the benefits of stable ray tracing by combining it with progressive path tracing of indirect illumination.
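
A schematic of the reprojection core might look like the following: last frame's hit points are reprojected with the new camera transform, a super-resolution bitmask records where samples actually land, and pixels whose bitmask cells stay mostly empty are flagged as holes to be re-traced explicitly. Every name and threshold here is an assumption, not the paper's implementation:

    import numpy as np

    def reproject(prev_pts, prev_rad, view_proj, res, ss=2):
        """prev_pts: (N, 3) world hit points; prev_rad: (N, 3) radiance."""
        sr = res * ss
        mask = np.zeros((sr, sr), dtype=bool)               # super-res occupancy bitmask
        accum = np.zeros((res, res, 3))
        count = np.zeros((res, res))
        p = np.concatenate([prev_pts, np.ones((len(prev_pts), 1))], 1) @ view_proj.T
        uv = (p[:, :2] / p[:, 3:4]) * 0.5 + 0.5             # NDC -> [0, 1]
        ok = (p[:, 3] > 0) & (uv >= 0).all(1) & (uv < 1).all(1)
        cell = (uv[ok] * sr).astype(int)
        mask[cell[:, 1], cell[:, 0]] = True
        pix = cell // ss
        np.add.at(accum, (pix[:, 1], pix[:, 0]), prev_rad[ok])
        np.add.at(count, (pix[:, 1], pix[:, 0]), 1.0)
        # Pixels whose super-res cells stay mostly empty are holes: trace new rays.
        density = mask.reshape(res, ss, res, ss).sum(axis=(1, 3))
        holes = density < (ss * ss) // 2
        return accum / np.maximum(count, 1.0)[..., None], holes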


High Performance Graphics | 2016

Filtering distributions of normals for shading antialiasing

Anton S. Kaplanyan; Stephen Hill; Anjul Patney; Aaron E. Lefohn

High-frequency illumination effects, such as highly glossy highlights on curved surfaces, are challenging to render in a stable manner. Such features can be much smaller than the area of a pixel and carry a high amount of energy due to high reflectance. These highlights are challenging to render both in offline rendering, where they require many samples and an outlier filter, and in real-time graphics, where they cause a significant amount of aliasing given the small budget of shading samples per pixel. In this paper, we propose a method for filtering the main source of highly glossy highlights in microfacet materials: the normal distribution function (NDF). We provide a practical solution applicable to real-time rendering by employing recent advances in light transport to estimate the filtering region from various effects (such as the pixel footprint) directly in the parallel-plane half-vector domain (also known as the slope domain), followed by filtering the NDF over this region. Our real-time method is GPU-friendly, temporally stable, and compatible with deferred shading, normal maps, and filtering methods for normal maps.
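
One way to read the slope-domain filtering idea: if the NDF is treated as roughly Gaussian in slope space, convolving it with the pixel footprint there amounts to widening the material roughness by the footprint's variance. The sketch below is a simplification under that assumption; the estimator for the footprint variance is hypothetical:

    import numpy as np

    def filtered_roughness(alpha, footprint_var):
        """alpha: base GGX/Beckmann roughness; footprint_var: slope-space
        variance induced by the pixel footprint."""
        # Convolving two slope-space Gaussians adds their variances.
        return np.sqrt(alpha * alpha + 2.0 * footprint_var)

    def slope_variance(dndx, dndy):
        # Hypothetical estimator: screen-space normal derivatives as a proxy
        # for how far the half-vector wanders across one pixel.
        return 0.25 * (float(np.dot(dndx, dndx)) + float(np.dot(dndy, dndy)))

    # e.g. alpha_p = filtered_roughness(0.05, slope_variance(dx, dy))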


International Conference on Computer Graphics and Interactive Techniques | 2015

Frustum-traced irregular z-buffers: fast, sub-pixel accurate hard shadows

Chris Wyman; Rama Hoetzlein; Aaron E. Lefohn

We further describe and analyze a real-time system for rendering antialiased hard shadows using irregular z-buffers (IZBs) that we first presented in Wyman et al. [1]. We focus on identifying bottlenecks, exploring these from an algorithmic complexity standpoint, and presenting techniques to improve performance. Our system remains interactive on a variety of game assets and CAD models while running at resolutions of 1920×1080 and above […]

