
Publication


Featured research published by Anton S. Kaplanyan.


International Conference on Computer Graphics and Interactive Techniques | 2016

Towards foveated rendering for gaze-tracked virtual reality

Anjul Patney; Marco Salvi; Joohwan Kim; Anton S. Kaplanyan; Chris Wyman; Nir Benty; David Luebke; Aaron E. Lefohn

Foveated rendering synthesizes images with progressively less detail outside the eye fixation region, potentially unlocking significant speedups for wide field-of-view displays, such as head mounted displays, where target framerate and resolution are increasing faster than the performance of traditional real-time renderers. To study and improve potential gains, we designed a foveated rendering user study to evaluate the perceptual abilities of human peripheral vision when viewing today's displays. We determined that filtering peripheral regions reduces contrast, inducing a sense of tunnel vision. When applying a postprocess contrast enhancement, subjects tolerated up to 2× larger blur radius before detecting differences from a non-foveated ground truth. After verifying these insights on both desktop and head mounted displays augmented with high-speed gaze-tracking, we designed a perceptual target image to strive for when engineering a production foveated renderer. Given our perceptual target, we designed a practical foveated rendering system that reduces the number of shades by up to 70% and allows coarsened shading up to 30° closer to the fovea than Guenter et al. [2012] without introducing perceivable aliasing or blur. We filter both pre- and post-shading to address aliasing from undersampling in the periphery, introduce a novel multiresolution- and saccade-aware temporal antialiasing algorithm, and use contrast enhancement to help recover peripheral details that are resolvable by our eye but degraded by filtering. We validate our system by performing another user study. Frequency analysis shows our system closely matches our perceptual target. Measurements of temporal stability show we obtain quality similar to temporally filtered non-foveated renderings.
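To make the two key ideas above concrete, here is a minimal Python sketch of eccentricity-driven peripheral blur followed by a post-process contrast enhancement. It illustrates the concept only, not the paper's renderer: the falloff constants, the two-level blur blend, and the function names are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate_with_contrast_boost(image, gaze_xy, fovea_radius=0.15,
                                max_sigma=4.0, boost=1.5):
    """Blur the periphery as a function of eccentricity from the gaze
    point, then apply an unsharp-mask-style contrast enhancement to
    counter the contrast loss that peripheral filtering induces.

    image: (H, W) float array in [0, 1]; gaze_xy: normalized (x, y).
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Eccentricity: normalized distance of each pixel from the gaze point.
    ecc = np.hypot(xs / w - gaze_xy[0], ys / h - gaze_xy[1])

    # Blend factor: 0 inside the fovea, rising to 1 at max eccentricity.
    t = np.clip((ecc - fovea_radius) / (1.0 - fovea_radius), 0.0, 1.0)

    # Approximate a spatially varying blur by blending two fixed blurs.
    mild = gaussian_filter(image, max_sigma * 0.5)
    strong = gaussian_filter(image, max_sigma)
    low = image * (1 - t) + mild * t
    blurred = low * (1 - t) + strong * t

    # Post-process contrast enhancement: push filtered pixels away from
    # their local mean to restore contrast removed by the blur.
    local_mean = gaussian_filter(blurred, max_sigma)
    enhanced = local_mean + (blurred - local_mean) * (1.0 + boost * t)
    return np.clip(enhanced, 0.0, 1.0)
```

The unsharp-mask-style last step is the part the study above found to matter: pushing filtered pixels away from their local mean restores some of the contrast that peripheral filtering removes, letting a larger blur radius pass unnoticed.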


ACM Transactions on Graphics | 2017

Interactive reconstruction of Monte Carlo image sequences using a recurrent denoising autoencoder

Chakravarty Reddy Alla Chaitanya; Anton S. Kaplanyan; Christoph Schied; Marco Salvi; Aaron E. Lefohn; Derek Nowrouzezahrai; Timo Aila

We describe a machine learning technique for reconstructing image sequences rendered using Monte Carlo methods. Our primary focus is on reconstruction of global illumination with extremely low sampling budgets at interactive rates. Motivated by recent advances in image restoration with deep convolutional networks, we propose a variant of these networks better suited to the class of noise present in Monte Carlo rendering. We allow for much larger pixel neighborhoods to be taken into account, while also improving execution speed by an order of magnitude. Our primary contribution is the addition of recurrent connections to the network in order to drastically improve temporal stability for sequences of sparsely sampled input images. Our method also has the desirable property of automatically modeling relationships based on auxiliary per-pixel input channels, such as depth and normals. We show significantly higher quality results compared to existing methods that run at comparable speeds, and furthermore argue that there is a clear path to making our method run at real-time rates in the near future.
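A minimal PyTorch sketch of the core idea, a denoising network with convolutional recurrent connections that carry state across frames, is shown below. This is not the published architecture; the layer sizes, the single-stage encoder, and the way the hidden state is fused are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    """Conv stage whose output is fused with its own activations from
    the previous frame, giving the network temporal memory."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.fuse = nn.Conv2d(out_ch * 2, out_ch, 3, padding=1)

    def forward(self, x, hidden):
        x = torch.relu(self.conv(x))
        if hidden is None:
            hidden = torch.zeros_like(x)
        h = torch.relu(self.fuse(torch.cat([x, hidden], dim=1)))
        return h, h  # (features for this frame, hidden state for the next)

class RecurrentDenoiser(nn.Module):
    def __init__(self, aux_ch=4):  # e.g. depth (1) + shading normal (3)
        super().__init__()
        self.enc = RecurrentBlock(3 + aux_ch, 32)
        self.dec = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, frames):
        """frames: sequence of (N, 3 + aux_ch, H, W) tensors holding
        noisy radiance concatenated with auxiliary per-pixel channels."""
        hidden, outputs = None, []
        for x in frames:  # process the sequence frame by frame
            feat, hidden = self.enc(x, hidden)
            outputs.append(self.dec(feat))
        return outputs
```

Feeding the auxiliary channels alongside the noisy radiance is what lets the network condition its filtering on geometry, as the abstract describes, without hand-written edge-stopping heuristics.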


International Conference on Computer Graphics and Interactive Techniques | 2016

Perceptually-based foveated virtual reality

Anjul Patney; Joohwan Kim; Marco Salvi; Anton S. Kaplanyan; Chris Wyman; Nir Benty; Aaron E. Lefohn; David Luebke

Humans have two distinct vision systems: foveal and peripheral vision. Foveal vision is sharp and detailed, while peripheral vision lacks fidelity. The difference in characteristics of the two systems enables recently popular foveated rendering systems, which seek to increase rendering performance by lowering image quality in the periphery. We present a set of perceptually-based methods for improving foveated rendering running on a prototype virtual reality headset with an integrated eye tracker. Foveated rendering has previously been demonstrated on conventional displays, but has recently become an especially attractive prospect in virtual reality (VR) and augmented reality (AR) display settings, which combine a large field-of-view (FOV) with high frame rate requirements. Investigating prior work on foveated rendering, we find that some previous quality-reduction techniques can create objectionable artifacts such as temporal instability and contrast loss. Our emerging technologies installation demonstrates these techniques running live in a head-mounted display and compares them against our new perceptually-based foveated techniques, which enable a significant reduction in rendering cost with no discernible difference in visual quality.


High Performance Graphics | 2017

Spatiotemporal variance-guided filtering: real-time reconstruction for path-traced global illumination

Christoph Schied; Anton S. Kaplanyan; Chris Wyman; Anjul Patney; Chakravarty Reddy Alla Chaitanya; John Matthew Burgess; Shiqiu Liu; Carsten Dachsbacher; Aaron E. Lefohn; Marco Salvi

We introduce a reconstruction algorithm that generates a temporally stable sequence of images from one-path-per-pixel global illumination. To handle such noisy input, we use temporal accumulation to increase the effective sample count, and spatiotemporal luminance variance estimates to drive a hierarchical, image-space wavelet filter [Dammertz et al. 2010]. This hierarchy allows us to distinguish between noise and detail at multiple scales using local luminance variance. Physically based light transport is a long-standing goal for real-time computer graphics. While modern games use limited forms of ray tracing, physically based Monte Carlo global illumination does not meet their 30 Hz minimum performance requirement. Looking ahead to fully dynamic real-time path tracing, we expect this to only be feasible using a small number of paths per pixel. As such, image reconstruction using low sample counts is key to bringing path tracing to real time. When compared to prior interactive reconstruction filters, our work gives approximately 10× more temporally stable results, matches reference images 5-47% better (according to SSIM), and runs in just 10 ms (±15%) on modern graphics hardware at 1920×1080 resolution.
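Below is a toy single-channel NumPy sketch of one edge-aware à-trous wavelet pass guided by a luminance variance estimate, in the spirit of the filter described above. The edge-stopping weight shapes follow the usual SVGF form, but all constants are illustrative, and the temporal accumulation that produces `variance` is assumed to happen elsewhere.

```python
import numpy as np

def atrous_svgf_pass(color, variance, depth, normal, step,
                     sigma_z=1.0, sigma_n=128.0, sigma_l=4.0):
    """One edge-aware a-trous wavelet pass over a single luminance
    channel. color/variance/depth: (H, W); normal: (H, W, 3).
    `step` is the tap spacing and doubles every iteration."""
    h, w = color.shape
    taps = (-2, -1, 0, 1, 2)
    kern = {-2: 1/16, -1: 1/4, 0: 3/8, 1: 1/4, 2: 1/16}  # B3-spline weights
    out_c = np.zeros_like(color)
    out_v = np.zeros_like(variance)
    wsum = np.zeros_like(color)
    # Luminance edge-stopping scale grows with the local noise estimate.
    sigma_l_eff = sigma_l * np.sqrt(np.maximum(variance, 0.0)) + 1e-8
    for dy in taps:
        for dx in taps:
            sy = np.clip(np.arange(h) + dy * step, 0, h - 1)
            sx = np.clip(np.arange(w) + dx * step, 0, w - 1)
            c = color[sy][:, sx]
            v = variance[sy][:, sx]
            # Edge-stopping weights: depth, normal, and luminance terms.
            w_z = np.exp(-np.abs(depth[sy][:, sx] - depth) / sigma_z)
            w_n = np.maximum(np.sum(normal[sy][:, sx, :] * normal,
                                    axis=-1), 0.0) ** sigma_n
            w_l = np.exp(-np.abs(c - color) / sigma_l_eff)
            wgt = kern[dy] * kern[dx] * w_z * w_n * w_l
            out_c += wgt * c
            out_v += wgt ** 2 * v  # variance of a weighted mean
            wsum += wgt
    return out_c / wsum, out_v / wsum ** 2
```

In the full algorithm this pass is repeated with step = 1, 2, 4, 8, 16, feeding each pass the previous pass's filtered color and variance, which is what realizes the hierarchy of scales the abstract mentions.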


Interactive 3D Graphics and Games | 2016

Real-time rendering of procedural multiscale materials

Tobias Zirr; Anton S. Kaplanyan

We present a stable shading method and a procedural shading model that enable real-time rendering of sub-pixel glints and anisotropic microdetails resulting from irregular microscopic surface structure, simulating a rich spectrum of appearances ranging from sparkling to brushed materials. We introduce a biscale Normal Distribution Function (NDF) for microdetails that provides convenient artistic control over both the global appearance and the appearance of individual microdetail shapes, while efficiently generating procedural details. Our stable rendering approach simulates a hierarchy of scales and accurately estimates the pixel footprint at multiple levels of detail to achieve good temporal stability and antialiasing, making it feasible for real-time rendering applications.
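As a toy illustration of a procedural sub-pixel glint model, the sketch below hashes a footprint cell to seed a deterministic set of random micronormals and counts those aligned with the current half vector; the two widths stand in for the two scales of a biscale NDF. Everything here (the density, the widths, the cell hashing) is an assumption for illustration, not the paper's method.

```python
import numpy as np

def glint_count(footprint_uv, footprint_size, half_angle,
                density=1e5, global_width=0.1, glint_width=0.02, seed=17):
    """Count procedural sub-pixel glints inside a pixel footprint whose
    micronormal is aligned with the current half vector.

    global_width controls the overall spread of micronormals (the coarse
    scale of a biscale NDF); glint_width controls how sharply each
    individual glint reflects (the fine scale)."""
    # Hash the footprint's cell so the same surface region always gets
    # the same glints; determinism is what gives temporal stability.
    cell = (int(footprint_uv[0] // footprint_size),
            int(footprint_uv[1] // footprint_size))
    rng = np.random.default_rng(abs(hash((seed, cell))))
    # Expected number of microdetails inside this footprint.
    n = rng.poisson(density * footprint_size ** 2)
    # Angular deviation of each micronormal from the mean surface normal.
    deviations = rng.normal(0.0, global_width, size=n)
    # A glint lights up when its micronormal matches the half vector.
    return int(np.count_nonzero(np.abs(deviations - half_angle) < glint_width))
```

A real implementation additionally blends between levels of a scale hierarchy chosen from the footprint size, so the discrete glint count degrades gracefully into a smooth NDF as the footprint grows.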


ACM Transactions on Graphics | 2017

Fusing state spaces for Markov chain Monte Carlo rendering

Hisanari Otsu; Anton S. Kaplanyan; Johannes Hanika; Carsten Dachsbacher; Toshiya Hachisuka

Rendering algorithms using Markov chain Monte Carlo (MCMC) currently build upon two different state spaces. One of them is the path space, where the algorithms operate on the vertices of actual transport paths. The other is the primary sample space, where the algorithms operate on the sequences of random numbers used for generating transport paths. While the two state spaces are related by the sampling procedure of transport paths, all existing MCMC rendering algorithms are designed to work within only one of them. We propose the first framework that provides a comprehensive connection between the path space and the primary sample space. Using this framework, we can combine mutation strategies designed for one space with mutation strategies from the other. As a practical example, we use our framework to combine manifold exploration and multiplexed Metropolis light transport. Our results show that the simultaneous use of the two state spaces improves the robustness of MCMC rendering. By combining efficient local exploration in the path space with global jumps in the primary sample space, our method achieves more uniform convergence compared to using only one space.
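The following toy 1D Metropolis sampler illustrates the fused-state-space idea: the chain lives on a single state, but alternates mutations in primary sample space (perturbing u) with mutations in "path space" (perturbing x and mapping back through the sampler's CDF). It is a sketch under strong simplifications; in particular, the Jacobian terms of the mapping are omitted from the acceptance ratio, which a correct renderer must include.

```python
import numpy as np
from scipy.stats import norm

def f(x):
    """Toy stand-in for a path contribution function; strictly positive."""
    return np.exp(-0.5 * x ** 2) * (1.5 + np.sin(3.0 * x))

to_path = norm.ppf     # primary sample u in (0,1)  ->  'path' state x
to_primary = norm.cdf  # 'path' state x             ->  primary sample u

def fused_mcmc(steps=10000, seed=1):
    rng = np.random.default_rng(seed)
    u, samples = 0.5, []
    for i in range(steps):
        x = to_path(u)
        if i % 2 == 0:
            # Primary-sample-space mutation: small step on u, wrapped.
            u_new = (u + rng.normal(0.0, 0.05)) % 1.0
        else:
            # 'Path space' mutation: perturb x, map back to primary space.
            u_new = to_primary(x + rng.normal(0.0, 0.2))
        u_new = min(max(u_new, 1e-9), 1.0 - 1e-9)  # keep ppf finite
        x_new = to_path(u_new)
        # Metropolis acceptance; mapping Jacobians omitted in this toy.
        if rng.uniform() < min(1.0, f(x_new) / f(x)):
            u = u_new
        samples.append(to_path(u))
    return np.array(samples)
```

The inverse-CDF pair stands in for the sampling procedure that relates the two spaces; because both mutation types act on the same underlying chain, local exploration and global jumps can be mixed freely, which is the point of the framework.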


High Performance Graphics | 2016

Filtering distributions of normals for shading antialiasing

Anton S. Kaplanyan; Stephen Hill; Anjul Patney; Aaron E. Lefohn

High-frequency illumination effects, such as highly glossy highlights on curved surfaces, are challenging to render in a stable manner. Such features can be much smaller than the area of a pixel and carry a high amount of energy due to high reflectance. These highlights are challenging to render both in offline rendering, where they require many samples and an outlier filter, and in real-time graphics, where they cause a significant amount of aliasing given the small budget of shading samples per pixel. In this paper, we propose a method for filtering the main source of highly glossy highlights in microfacet materials: the Normal Distribution Function (NDF). We provide a practical solution applicable to real-time rendering by employing recent advances in light transport for estimating the filtering region from various effects (such as pixel footprint) directly in the parallel-plane half-vector domain (also known as the slope domain), followed by filtering the NDF over this region. Our real-time method is GPU-friendly, temporally stable, and compatible with deferred shading, normal maps, and filtering methods for normal maps.
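For a Beckmann NDF the slope-domain picture above has a particularly compact consequence: the NDF is Gaussian in slope space, so filtering it with a Gaussian pixel-footprint kernel is a convolution of Gaussians and the slope variances simply add. The sketch below shows that roughness update together with a crude screen-space-derivative footprint estimate; the helper names and the 0.5 factor are assumptions for illustration, not the paper's footprint analysis.

```python
import numpy as np

def slope_footprint_sigma(dslope_dx, dslope_dy):
    """Very rough footprint width in slope space from screen-space
    derivatives of the half-vector slopes (ddx/ddy-style 2-vectors)."""
    return 0.5 * max(np.hypot(*dslope_dx), np.hypot(*dslope_dy))

def filtered_beckmann_roughness(alpha, footprint_sigma):
    """Beckmann NDFs are Gaussian in slope space (per-axis variance
    alpha^2 / 2), so convolving with a Gaussian footprint of standard
    deviation footprint_sigma just adds slope variances."""
    return np.sqrt(alpha ** 2 + 2.0 * footprint_sigma ** 2)
```

Shading then evaluates the ordinary NDF with the widened roughness, which is why this style of filtering is cheap enough for deferred real-time pipelines.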


International Conference on Computer Graphics and Interactive Techniques | 2016

Estimating local Beckmann roughness for complex BSDFs

Nicolas Holzschuch; Anton S. Kaplanyan; Johannes Hanika; Carsten Dachsbacher

Many light transport techniques require an analysis of the blur width of light scattering at a path vertex, for instance a Beckmann roughness. Example use cases include the analysis of expected variance (and potential bias countermeasures in production rendering), radiance caching, directionally dependent virtual point light sources, and the determination of step sizes in the path space Metropolis light transport framework. Recent advanced mutation strategies for Metropolis light transport [Veach 1997], such as Manifold Exploration [Jakob 2013] and Half Vector Space Light Transport [Kaplanyan et al. 2014], employ the local curvature of the BSDFs (such as an average Beckmann roughness) at all interactions along the path in order to determine an optimal mutation step size. A single average Beckmann roughness, however, can be a bad fit for complex measured materials (such as [Matusik et al. 2003]); moreover, such curvature is completely undefined for layered materials, as it depends on the active scattering layer. We propose a robust estimation of the local curvature of BSDFs of any complexity using local Beckmann approximations, taking into account additional factors such as both the incident and outgoing directions.
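A crude sketch of fitting a local Beckmann roughness to an arbitrary BSDF is shown below: probe the lobe's falloff around the current half vector and invert the Gaussian slope-space profile. The probing scheme, the eps/2 slope-step approximation, and the numeric guards are all assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def local_beckmann_roughness(bsdf, wi, wo, eps=0.02):
    """Estimate an effective Beckmann roughness for an arbitrary
    bsdf(wi, wo) callable by probing how fast its lobe falls off
    around the current half vector.

    Assumes the lobe is locally Gaussian in slope space:
    f(d) / f(0) = exp(-d^2 / alpha^2), with slope step d ~ eps / 2
    when the outgoing direction is nudged by eps."""
    wi = np.asarray(wi, dtype=float)
    wo = np.asarray(wo, dtype=float)
    h = wi + wo
    h /= np.linalg.norm(h)
    f0 = max(bsdf(wi, wo), 1e-8)
    # Pick a tangent direction that is well conditioned.
    axis = (np.array([0.0, 0.0, 1.0]) if abs(h[2]) < 0.9
            else np.array([1.0, 0.0, 0.0]))
    t = np.cross(h, axis)
    t /= np.linalg.norm(t)
    # Nudge the outgoing direction and measure the lobe falloff.
    wo_eps = wo + eps * t
    wo_eps /= np.linalg.norm(wo_eps)
    ratio = np.clip(bsdf(wi, wo_eps) / f0, 1e-6, 0.999)
    return (0.5 * eps) / np.sqrt(-np.log(ratio))
```

Because the probe only evaluates the BSDF as a black box, the same estimate works for measured and layered materials where an analytic curvature is unavailable, which is the situation the abstract targets.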


Archive | 2017

Perceptually-based foveated rendering using a contrast-enhancing filter

Anjul Patney; Marco Salvi; Joohwan Kim; Anton S. Kaplanyan; Christopher Ryan Wyman; Nir Benty; David Luebke; Aaron E. Lefohn


Archive | 2013

Supplemental Material: Path Space Regularization for Holistic and Robust Light Transport

Anton S. Kaplanyan; Carsten Dachsbacher

Collaboration


Dive into Anton S. Kaplanyan's collaborations.

Top Co-Authors

Carsten Dachsbacher
Karlsruhe Institute of Technology

Joohwan Kim
University of California

Christoph Schied
Karlsruhe Institute of Technology