
Publications


Featured research published by Chris Wyman.


International Conference on Computer Graphics and Interactive Techniques | 2016

Towards foveated rendering for gaze-tracked virtual reality

Anjul Patney; Marco Salvi; Joohwan Kim; Anton S. Kaplanyan; Chris Wyman; Nir Benty; David Luebke; Aaron E. Lefohn

Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head-mounted displays, where target framerate and resolution are increasing faster than the performance of traditional real-time renderers. To study and improve potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays. We determined that filtering peripheral regions reduces contrast, inducing a sense of tunnel vision. When applying a postprocess contrast enhancement, subjects tolerated up to a 2× larger blur radius before detecting differences from a non-foveated ground truth. After verifying these insights on both desktop and head-mounted displays augmented with high-speed gaze-tracking, we designed a perceptual target image to strive for when engineering a production foveated renderer. Given our perceptual target, we designed a practical foveated rendering system that reduces the number of shades by up to 70% and allows coarsened shading up to 30° closer to the fovea than Guenter et al. [2012] without introducing perceivable aliasing or blur. We filter both pre- and post-shading to address aliasing from undersampling in the periphery, introduce a novel multiresolution- and saccade-aware temporal antialiasing algorithm, and use contrast enhancement to help recover peripheral details that are resolvable by our eye but degraded by filtering. We validate our system by performing another user study. Frequency analysis shows our system closely matches our perceptual target. Measurements of temporal stability show we obtain quality similar to temporally filtered non-foveated renderings.
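The core control in such a system, mapping eccentricity (angular distance from the fixation point) to a coarser shading rate, can be sketched in a few lines of Python; the falloff constants below are illustrative placeholders, not the paper's measured values:

```python
import math

def shading_rate(pixel, gaze, pixels_per_degree, inner_deg=5.0, slope=0.05):
    """Map a pixel's eccentricity (angular distance from the gaze point)
    to a shading rate: 1.0 = full rate inside the fovea, larger values =
    coarser shading in the periphery. Constants are illustrative only."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    ecc_deg = math.hypot(dx, dy) / pixels_per_degree  # eccentricity in degrees
    return 1.0 + max(0.0, ecc_deg - inner_deg) * slope

print(shading_rate((960, 540), (960, 540), 40))   # at fixation -> 1.0
print(shading_rate((1880, 540), (960, 540), 40))  # ~23 degrees out -> coarser
```

In a real renderer this rate would feed multiresolution shading or variable-rate shading hardware rather than being computed per pixel on the CPU.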


International Conference on Computer Graphics and Interactive Techniques | 2016

Perceptually-based foveated virtual reality

Anjul Patney; Joohwan Kim; Marco Salvi; Anton S. Kaplanyan; Chris Wyman; Nir Benty; Aaron E. Lefohn; David Luebke

Humans have two distinct vision systems: foveal and peripheral vision. Foveal vision is sharp and detailed, while peripheral vision lacks fidelity. The difference in characteristics of the two systems enables recently popular foveated rendering systems, which seek to increase rendering performance by lowering image quality in the periphery. We present a set of perceptually-based methods for improving foveated rendering running on a prototype virtual reality headset with an integrated eye tracker. Foveated rendering has previously been demonstrated on conventional displays, but it has recently become an especially attractive prospect in virtual reality (VR) and augmented reality (AR) display settings, with their large field of view (FOV) and high frame rate requirements. Investigating prior work on foveated rendering, we find that some previous quality-reduction techniques can create objectionable artifacts such as temporal instability and contrast loss. Our emerging-technologies installation demonstrates these techniques running live in a head-mounted display, and we compare them against our new perceptually-based foveated techniques. Our new foveation techniques enable a significant reduction in rendering cost with no discernible difference in visual quality. We show how such techniques can fulfill these requirements with potentially large reductions in rendering cost.


Interactive 3D Graphics and Games | 2015

Frustum-traced raster shadows: revisiting irregular z-buffers

Chris Wyman; Rama Hoetzlein; Aaron E. Lefohn

We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p, and it imposes no constraints on light, camera, or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps, we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires. Prior irregular z-buffer work relies heavily on GPU compute. Instead, we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.
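The data structure at the heart of this approach can be illustrated with a small Python sketch: receiver samples are binned into a light-space grid whose texels hold variable-length lists (the "irregular" part), and blockers then test only the samples inside their light-space footprint. A real frustum tracer tests exact per-triangle frusta; an axis-aligned box stands in for the footprint here:

```python
from collections import defaultdict

def build_izb(samples, cell):
    """Bin eye-space receiver samples (projected into light space) into a
    grid whose texels hold variable-length lists -- the irregular z-buffer."""
    grid = defaultdict(list)
    for lx, ly, depth in samples:        # depth is distance from the light
        grid[(int(lx // cell), int(ly // cell))].append((lx, ly, depth))
    return grid

def shadow_test(grid, blocker_box, blocker_depth, cell):
    """Mark receiver samples behind a blocker's light-space footprint as
    shadowed. An AABB stands in for the blocker's exact frustum."""
    x0, y0, x1, y1 = blocker_box
    shadowed = set()
    for cx in range(int(x0 // cell), int(x1 // cell) + 1):
        for cy in range(int(y0 // cell), int(y1 // cell) + 1):
            for lx, ly, depth in grid[(cx, cy)]:
                if x0 <= lx <= x1 and y0 <= ly <= y1 and depth > blocker_depth:
                    shadowed.add((lx, ly, depth))
    return shadowed
```

Because the lists are rebuilt from scratch each frame, nothing needs to persist across camera or geometry changes, which is what makes fully dynamic scenes possible.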


High Performance Graphics | 2017

Spatiotemporal variance-guided filtering: real-time reconstruction for path-traced global illumination

Christoph Schied; Anton S. Kaplanyan; Chris Wyman; Anjul Patney; Chakravarty Reddy Alla Chaitanya; John Matthew Burgess; Shiqiu Liu; Carsten Dachsbacher; Aaron E. Lefohn; Marco Salvi

We introduce a reconstruction algorithm that generates a temporally stable sequence of images from one-path-per-pixel global illumination. To handle such noisy input, we use temporal accumulation to increase the effective sample count and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter [Dammertz et al. 2010]. This hierarchy allows us to distinguish between noise and detail at multiple scales using local luminance variance. Physically based light transport is a long-standing goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically based Monte Carlo global illumination does not meet their 30 Hz minimum performance requirement. Looking ahead to fully dynamic real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real time. When compared to prior interactive reconstruction filters, our work gives approximately 10× more temporally stable results, matches reference images 5--47% better (according to SSIM), and runs in just 10 ms (±15%) on modern graphics hardware at 1920×1080 resolution.
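The temporal side of the algorithm, accumulating luminance moments so that a per-pixel variance estimate can drive the wavelet filter, can be sketched as below; scalar luminance and a fixed blend weight stand in for the full color pipeline and SVGF's actual weighting:

```python
def temporal_accumulate(history, lum, alpha=0.2):
    """Exponentially weighted accumulation of luminance moments; the
    resulting variance estimate is what guides the wavelet filter.
    alpha is an illustrative blend weight, not SVGF's actual schedule."""
    if history is None:                  # disocclusion: restart the history
        return (lum, lum * lum)
    m1, m2 = history
    m1 = (1 - alpha) * m1 + alpha * lum
    m2 = (1 - alpha) * m2 + alpha * lum * lum
    return (m1, m2)

def variance(history):
    m1, m2 = history
    return max(0.0, m2 - m1 * m1)        # Var[L] = E[L^2] - E[L]^2
```

A stable signal drives the variance (and hence the filter's spatial footprint) toward zero, while noisy one-path-per-pixel input keeps it high, so the filter blurs only where the input is actually noisy.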


International Conference on Computer Graphics and Interactive Techniques | 2016

Stochastic layered alpha blending

Chris Wyman

Researchers have long sought efficient techniques for order-independent transparency (OIT) in a rasterization pipeline, to avoid sorting geometry prior to rendering. Techniques like A-buffers, k-buffers, stochastic transparency, hybrid transparency, adaptive transparency, and multi-layer alpha blending all approach the problem slightly differently, with different tradeoffs. These OIT algorithms have many similarities, and our investigations allowed us to construct a continuum on which they lie. During this categorization, we identified various new algorithms, including stochastic layered alpha blending (SLAB), which combines stochastic transparency's consistent and (optionally) unbiased convergence with the smaller memory footprint of k-buffers. Our approach can be seen as a stratified sampling technique for stochastic transparency, generating quality better than 32× samples per pixel for roughly the cost and memory of 8× stochastic samples. As with stochastic transparency, we can exchange noise for added bias; our algorithm provides an explicit parameter to trade noise for bias. At one end, this parameter gives results identical to stochastic transparency. At the other end, the results are identical to k-buffering.
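The stochastic-transparency baseline that SLAB builds on is easy to demonstrate: each sample keeps the front-most fragment whose alpha beats a random threshold, and averaging many samples converges, without bias, to the alpha-blended result. A minimal Python sketch:

```python
import random

def alpha_blend(layers):
    """Reference front-to-back over blend of (color, alpha) layers."""
    color, trans = 0.0, 1.0
    for c, a in layers:
        color += trans * a * c
        trans *= 1 - a
    return color

def stochastic_transparency(layers, samples, rng):
    """Each sample keeps the nearest fragment whose alpha beats a random
    threshold; averaging converges to the blended result (unbiased).
    Layers are assumed sorted front to back."""
    total = 0.0
    for _ in range(samples):
        for c, a in layers:
            if rng.random() < a:
                total += c
                break
    return total / samples

rng = random.Random(1)
layers = [(1.0, 0.5), (0.0, 1.0)]  # half-transparent white over opaque black
print(alpha_blend(layers))                          # 0.5 exactly
print(stochastic_transparency(layers, 10000, rng))  # ~0.5, with noise
```

SLAB's contribution, per the abstract, is stratifying these samples in a bounded k-layer buffer so the same quality needs far fewer samples; this sketch shows only the unstratified baseline.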


IEEE Transactions on Visualization and Computer Graphics | 2016

Frustum-Traced Irregular Z-Buffers: Fast, Sub-Pixel Accurate Hard Shadows

Chris Wyman; Rama Hoetzlein; Aaron E. Lefohn

We further describe and analyze a real-time system for rendering antialiased hard shadows using irregular z-buffers (IZBs) that we first presented in Wyman et al. [1]. We focus on identifying bottlenecks, exploring these from an algorithmic complexity standpoint, and presenting techniques to improve performance. Our system remains interactive on a variety of game assets and CAD models while running at resolutions of 1920×1080 and above, and it imposes no constraints on light, camera, or geometry, allowing fully dynamic scenes without precomputation. We render sub-pixel accurate, 32-sample-per-pixel hard shadows at roughly twice the cost of a single sample per pixel. This allows us to smoothly animate even subpixel shadows from grass or wires without introducing spatial or temporal aliasing. Prior algorithms for irregular z-buffer shadows rely heavily on the GPU's compute pipeline. Instead, we leverage the standard rasterization-based graphics pipeline, including hardware conservative raster and early-z culling. Our key observation is a duality between irregular z-buffer performance and shadow map quality; irregular z-buffering is most costly exactly where shadow maps exhibit the worst aliasing. This allows us to use common shadow map algorithms, which typically improve aliasing, to instead reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 1 ms per frame.


High Performance Graphics | 2015

Decoupled coverage anti-aliasing

Yuxiang Wang; Chris Wyman; Yong He; Pradeep Sen

State-of-the-art methods for geometric anti-aliasing in real-time rendering are based on Multi-Sample Anti-Aliasing (MSAA), which samples visibility more than shading to reduce the number of expensive shading calculations. However, for high-quality results the number of visibility samples needs to be large (e.g., 64 samples/pixel), which requires significant memory because visibility samples are usually 24-bit depth values. In this paper, we present Decoupled Coverage Anti-Aliasing (DCAA), which improves upon MSAA by further decoupling coverage from visibility for high-quality geometric anti-aliasing. Our work is based on the previously-explored idea that all fragments at a pixel can be consolidated into a small set of visible surfaces. Although in the past this was only used to reduce the memory footprint of the G-Buffer for deferred shading with MSAA, we leverage this idea to represent each consolidated surface with a 64-bit binary mask for coverage and a single decoupled depth value, thus significantly reducing the overhead for high-quality anti-aliasing. To do this, we introduce new surface merging heuristics and resolve mechanisms to manage the decoupled depth and coverage samples. Our prototype implementation runs in real-time on current graphics hardware, and results in a significant reduction in geometric aliasing with less memory overhead than 8×MSAA for several complex scenes.
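The consolidation idea can be sketched as follows: fragments are collapsed into a few surfaces, each storing one decoupled depth and a 64-bit coverage mask. The depth-epsilon merge and the nearest-surfaces eviction below are simplified stand-ins for the paper's merging heuristics:

```python
def merge_fragments(fragments, max_surfaces=4, depth_eps=1e-3):
    """Consolidate per-sample fragments (depth, 64-bit coverage mask) into a
    small set of surfaces by OR-ing the coverage of nearby depths. The
    epsilon merge and nearest-first eviction are simplified heuristics."""
    surfaces = []                        # list of [depth, mask], nearest first
    for depth, mask in sorted(fragments):
        for s in surfaces:
            if abs(s[0] - depth) < depth_eps:
                s[1] |= mask             # same surface: merge coverage bits
                break
        else:
            surfaces.append([depth, mask])
    return surfaces[:max_surfaces]       # stay within the fixed memory budget

def coverage(mask):
    return bin(mask).count("1") / 64.0   # fraction of 64 coverage samples hit
```

The memory win comes from storing one depth plus one 64-bit mask per surface instead of a 24-bit depth per visibility sample.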


Interactive 3D Graphics and Games | 2014

Adaptive depth bias for shadow maps

Hang Dou; Yajie Yan; Ethan Kerzner; Zeng Dai; Chris Wyman

Shadow aliasing due to limited storage precision has plagued discrete shadowing algorithms for decades. We present a simple method to eliminate false self-shadowing through an adaptive depth bias. Unlike existing methods, which simply set the weight of the bias based on surface slope or utilize the second-nearest surface, we evaluate a bound on the bias for each fragment and compute the optimal bias within that bound. Our method introduces little overhead, preserves more shadow detail than the widely used constant bias and slope-scale bias, and works for common 2D shadow maps as well as 3D binary shadow volumes.
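The contrast with classic biasing can be sketched as below; the per-fragment bound is reduced here to a single per-texel depth span, which simplifies the paper's derivation considerably:

```python
def constant_bias_test(frag_depth, stored_depth, bias=1e-3):
    """Classic fix: one global constant; too small leaves shadow acne,
    too large detaches shadows from their casters (peter-panning)."""
    return frag_depth > stored_depth + bias

def adaptive_bias_test(frag_depth, stored_depth, texel_depth_span):
    """Adaptive idea, heavily simplified: bound the self-shadowing error by
    how much the receiver plane's depth varies across the covering
    shadow-map texel, and bias by exactly that bound -- no more."""
    return frag_depth > stored_depth + texel_depth_span

# A steep surface spanning 0.01 depth units within one texel: a fragment
# 0.005 behind the stored depth is sampling error, not a real occluder.
print(constant_bias_test(0.505, 0.5))        # True -> false self-shadowing
print(adaptive_bias_test(0.505, 0.5, 0.01))  # False -> correctly lit
```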


Interactive 3D Graphics and Games | 2017

Hashed alpha testing

Chris Wyman; Morgan McGuire

Renderers apply alpha testing to mask out complex silhouettes using alpha textures on simple proxy geometry. While widely used, alpha testing has a long-standing problem that is underreported in the literature, but observable in commercial games: geometry can entirely disappear as alpha-mapped polygons recede with distance. As foveated rendering for virtual reality spreads, this problem worsens, since peripheral minification and prefiltering also cause it for nearby objects. We introduce two algorithms, stochastic alpha testing and hashed alpha testing, that avoid this issue but add some noise. Instead of using a fixed alpha threshold, ατ, stochastic alpha testing discards fragments with alpha below a randomly chosen ατ ∈ (0..1]. Hashed alpha testing uses a hash function to choose ατ procedurally, producing stable noise that reduces temporal flicker. With a good hash function and inputs, hashed alpha testing maintains distant geometry without introducing more temporal flicker than traditional alpha testing. We describe how hashed and stochastic alpha testing apply to alpha-to-coverage and screen-door transparency, and how they simplify stochastic transparency.
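The hashed test itself is small. The sketch below uses a common toy GPU hash (the sine-fract trick), not the paper's hash, and hashes a 2D object-space position so the threshold sticks to the surface instead of swimming with the camera:

```python
import math

def hash2(x, y):
    """Toy sine-fract hash into [0, 1); illustrative, not the paper's hash."""
    return (math.sin(x * 12.9898 + y * 78.233) * 43758.5453) % 1.0

def hashed_alpha_test(alpha, obj_pos, scale=1.0):
    """Pass the fragment when alpha meets a procedurally hashed threshold
    in (0, 1]. Hashing a stable object-space position makes the resulting
    noise temporally stable, unlike a fresh random threshold each frame."""
    a_t = max(hash2(obj_pos[0] * scale, obj_pos[1] * scale), 1e-6)
    return alpha >= a_t
```

Fully opaque fragments (alpha = 1.0) always pass and fully transparent ones (alpha = 0.0) never do, matching traditional alpha testing at the extremes; the noise appears only for intermediate alphas, where a fixed threshold would have erased the geometry.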


International Conference on Computer Graphics and Interactive Techniques | 2016

HFTS: hybrid frustum-traced shadows in "The Division"

Jon Story; Chris Wyman

We present a hybrid irregular z-buffer shadow algorithm, building on work by Story [2015] and Wyman et al. [2015], that supports soft shadows and is fast enough for use in shipping games like The Division. Key novelties include an improved light-space partitioning scheme that improves best- and average-case running times compared to using multiple cascades. We also extract a per-pixel distance to the nearest occluder to enable transitioning between irregular z-buffers and filtered shadow maps.
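The transition logic can be sketched as a penumbra-width blend: where the estimated penumbra is narrow, use the exact frustum-traced result; where it is wide, fall back to the cheap filtered shadow map. The estimator and threshold below are illustrative, not the shipped values:

```python
def penumbra_width(d_occluder, d_receiver, light_size):
    """PCSS-style estimate: the penumbra widens with occluder-receiver
    separation and with the light's size."""
    return light_size * (d_receiver - d_occluder) / max(d_occluder, 1e-6)

def hybrid_shadow(hard, filtered, d_occluder, d_receiver, light_size,
                  hard_max=0.5):
    """Exact frustum-traced visibility where the penumbra is narrow, a
    filtered shadow map where it is wide, a linear blend in between.
    hard_max is an illustrative threshold, not the shipped value."""
    t = min(1.0, penumbra_width(d_occluder, d_receiver, light_size) / hard_max)
    return (1 - t) * hard + t * filtered

# Receiver touching its occluder -> zero penumbra -> pure hard shadow.
print(hybrid_shadow(1.0, 0.3, 1.0, 1.0, 1.0))  # 1.0
# Large separation -> wide penumbra -> pure filtered result.
print(hybrid_shadow(1.0, 0.3, 0.5, 1.0, 1.0))  # 0.3
```

This is what the extracted per-pixel distance to the nearest occluder enables: the blend weight is computable before deciding which shadow technique to evaluate.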
