Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Scherzer is active.

Publication


Featured research published by Daniel Scherzer.


Eurographics Symposium on Rendering Techniques | 2004

Light space perspective shadow maps

Michael Wimmer; Daniel Scherzer; Werner Purgathofer

In this paper, we present a new shadow mapping technique that improves upon the quality of perspective and uniform shadow maps. Our technique uses a perspective transform specified in light space which allows treating all lights as directional lights and does not change the direction of the light sources. This gives all the benefits of perspective mapping but avoids the problems inherent in perspective shadow mapping, such as singularities in post-perspective space and missed shadow casters. Furthermore, we show that both uniform and perspective shadow maps distribute the perspective aliasing error that occurs in shadow mapping unequally over the available depth range. We therefore propose a transform that equalizes this error and gives equally pleasing results for near and far viewing distances. Our method is simple to implement, requires no scene analysis and is therefore as fast as uniform shadow mapping.
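The warp at the heart of the method is an ordinary perspective projection applied along the view direction inside light space; its key free parameter is the warp's near-plane distance n, for which the paper derives an optimum of roughly z_n + sqrt(z_n * z_f) when the light is perpendicular to the view direction. The minimal sketch below is an illustration only (the 1D reduction and variable names are mine, not the authors' code) and shows how such a warp redistributes shadow-map resolution toward the viewer.

```cpp
// Illustrative 1D sketch of the LiSPSM-style warp (not the authors' code).
#include <cmath>
#include <cstdio>

// Perspective-warp mapping along the light-space axis that points away from
// the viewer. Returns the warped coordinate in [-1, 1].
double lispsmWarp(double y, double n, double f) {
    // 1D analogue of a perspective projection: (f+n)/(f-n) - 2fn/((f-n)*y).
    return (f + n) / (f - n) - 2.0 * f * n / ((f - n) * y);
}

int main() {
    const double zNear = 1.0, zFar = 1000.0;               // view frustum range
    const double nOpt  = zNear + std::sqrt(zNear * zFar);  // warp parameter
    const double f     = nOpt + (zFar - zNear);            // far plane of the warp

    // Show how the warp spends more shadow-map resolution near the viewer.
    for (double d : {1.0, 10.0, 100.0, 1000.0}) {
        double y = nOpt + (d - zNear);                     // distance from warp origin
        std::printf("view depth %7.1f -> warped %.4f\n", d, lispsmWarp(y, nOpt, f));
    }
}
```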


Eurographics | 2011

A Survey of Real-Time Hard Shadow Mapping Methods

Daniel Scherzer; Michael Wimmer; Werner Purgathofer

Because of its versatility, speed and robustness, shadow mapping has always been a popular algorithm for fast hard shadow generation since its introduction in 1978, first for offline film productions and later increasingly so in real‐time graphics. So it is not surprising that recent years have seen an explosion in the number of shadow map related publications. Because of the abundance of articles on the topic, it has become very hard for practitioners and researchers to select a suitable shadow algorithm, and therefore many applications miss out on the latest high‐quality shadow generation approaches. The goal of this survey is to rectify this situation by providing a detailed overview of this field. We show a detailed analysis of shadow mapping errors and derive a comprehensive classification of the existing methods. We discuss the most influential algorithms, consider their benefits and shortcomings and thereby provide the readers with the means to choose the shadow algorithm best suited to their needs.
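As background, the common core of all surveyed methods is the basic shadow-map test: render depth from the light's point of view, then compare each fragment's light-space depth against the stored value. The CPU-side sketch below is purely illustrative; the 4x4 map, point sampling and bias value are assumptions, and real implementations run on the GPU.

```cpp
// Minimal CPU sketch of the basic hard shadow-map test (illustrative only).
#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Hypothetical 4x4 shadow map storing light-space depth per texel.
constexpr int N = 4;
using ShadowMap = std::array<std::array<float, N>, N>;

// Classic test: a fragment is lit if its light-space depth is not farther
// than the depth stored in the shadow map (plus a small bias against acne).
bool isLit(const ShadowMap& sm, const Vec3& lightSpacePos, float bias = 0.005f) {
    int u = static_cast<int>(lightSpacePos.x * N);   // assume coords in [0,1)
    int v = static_cast<int>(lightSpacePos.y * N);
    return lightSpacePos.z <= sm[v][u] + bias;
}

int main() {
    ShadowMap sm{};
    for (auto& row : sm) row.fill(1.0f);    // empty scene: depth = far plane
    sm[1][1] = 0.3f;                        // an occluder covering one texel

    Vec3 shadowed{0.3f, 0.3f, 0.8f};        // behind the occluder
    Vec3 open    {0.8f, 0.8f, 0.8f};        // unoccluded
    std::printf("shadowed fragment lit? %d\n", isLit(sm, shadowed));
    std::printf("open fragment lit?     %d\n", isLit(sm, open));
}
```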


Eurographics Symposium on Rendering Techniques | 2007

Pixel-correct shadow maps with temporal reprojection and shadow test confidence

Daniel Scherzer; Stefan Jeschke; Michael Wimmer

Shadow mapping suffers from spatial aliasing (visible as blocky shadows) as well as temporal aliasing (visible as flickering). Several methods have already been proposed for reducing such artifacts, but so far none is able to provide satisfying results in real time. This paper extends shadow mapping by reusing information of previously rasterized images, stored efficiently in a so-called history buffer. This buffer is updated in every frame and then used for the shadow calculation. In combination with a special confidence-based method for the history buffer update (based on the current shadow map), temporal and spatial aliasing can be completely removed. The algorithm converges in about 10 to 60 frames and during convergence, shadow borders are sharpened over time. Consequently, in case of real-time frame rates, the temporal shadow adaption is practically imperceptible. The method is simple to implement and is as fast as uniform shadow mapping, incurring only the minor speed hit of the history buffer update. It works together with advanced filtering methods like percentage closer filtering and more advanced shadow mapping techniques like perspective or light space perspective shadow maps.
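The exact confidence function and buffer layout are described in the paper; the sketch below only illustrates the general accumulation scheme under assumed inputs: each frame contributes a binary shadow-test result with a confidence weight, and a per-pixel history value converges toward the correct fractional shadow over a few dozen frames.

```cpp
// Illustrative sketch of confidence-weighted history accumulation
// (assumed inputs, not the paper's exact update rule).
#include <cstdio>
#include <random>

struct HistoryPixel {
    float shadow = 0.0f;   // accumulated shadow factor in [0,1]
    float weight = 0.0f;   // how much history has been accumulated so far
};

// Confidence-based update: results with high confidence influence the
// history strongly; uncertain results barely change it.
void updateHistory(HistoryPixel& h, float currentTest, float confidence) {
    float w = h.weight + confidence;
    if (w > 0.0f)
        h.shadow = (h.shadow * h.weight + currentTest * confidence) / w;
    h.weight = w;
}

int main() {
    // Hypothetical pixel whose true (fractional) shadow coverage is 0.25:
    // per-frame hard tests only ever return 0 or 1, but with jittered shadow
    // maps they report "shadowed" about 25% of the time.
    std::mt19937 rng(42);
    std::bernoulli_distribution inShadow(0.25);
    std::uniform_real_distribution<float> conf(0.2f, 1.0f);

    HistoryPixel pixel;
    for (int frame = 0; frame < 60; ++frame) {
        updateHistory(pixel, inShadow(rng) ? 1.0f : 0.0f, conf(rng));
        if (frame % 15 == 14)
            std::printf("frame %2d: shadow estimate %.3f\n", frame + 1, pixel.shadow);
    }
}
```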


Computer Graphics Forum | 2011

Interactive Modeling of City Layouts using Layers of Procedural Content

Markus Lipp; Daniel Scherzer; Peter Wonka; Michael Wimmer

In this paper, we present new solutions for the interactive modeling of city layouts that combine the power of procedural modeling with the flexibility of manual modeling. Procedural modeling enables us to quickly generate large city layouts, while manual modeling allows us to hand‐craft every aspect of a city. We introduce transformation and merging operators for both topology preserving and topology changing transformations based on graph cuts. In combination with a layering system, this allows intuitive manipulation of urban layouts using operations such as drag and drop, translation, rotation etc. In contrast to previous work, these operations always generate valid, i.e., intersection‐free layouts. Furthermore, we introduce anchored assignments to make sure that modifications are persistent even if the whole urban layout is regenerated.
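A loose sketch of the anchored-assignment idea follows (the data layout, names and the height attribute are hypothetical; the paper works on full street graphs and parcels): manual edits are keyed by stable anchors rather than by the generated objects, so re-running the procedural generator does not discard them.

```cpp
// Hypothetical illustration of anchored assignments: user edits survive a
// full procedural regeneration because they are stored against stable keys.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct Lot { std::string anchor; int height; };

// Procedural step: generates lots deterministically from a parameter.
std::vector<Lot> generateLots(int blocks) {
    std::vector<Lot> lots;
    for (int i = 0; i < blocks; ++i)
        lots.push_back({"block_" + std::to_string(i), 10 + i});
    return lots;
}

// Manual step: re-apply user overrides keyed by anchor after regeneration.
void applyAnchoredEdits(std::vector<Lot>& lots,
                        const std::map<std::string, int>& edits) {
    for (Lot& lot : lots) {
        auto it = edits.find(lot.anchor);
        if (it != edits.end()) lot.height = it->second;
    }
}

int main() {
    std::map<std::string, int> userEdits = {{"block_1", 99}};  // hand-crafted change

    auto lots = generateLots(3);
    applyAnchoredEdits(lots, userEdits);
    for (const Lot& l : lots) std::printf("%s height %d\n", l.anchor.c_str(), l.height);

    // Regenerate with a different parameter: the edit on block_1 persists.
    lots = generateLots(5);
    applyAnchoredEdits(lots, userEdits);
    for (const Lot& l : lots) std::printf("%s height %d\n", l.anchor.c_str(), l.height);
}
```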


Computer Graphics Forum | 2012

Temporal Coherence Methods in Real-Time Rendering

Daniel Scherzer; Lei Yang; Oliver Mattausch; Diego Nehab; Pedro V. Sander; Michael Wimmer; Elmar Eisemann

Nowadays, there is a strong trend towards rendering to higher‐resolution displays and at high frame rates. This development aims at delivering more detail and better accuracy, but it also comes at a significant cost. Although graphics cards continue to evolve with an ever‐increasing amount of computational power, the speed gain is easily counteracted by increasingly complex and sophisticated shading computations. For real‐time applications, the direct consequence is that image resolution and temporal resolution are often the first candidates to bow to the performance constraints (e.g. although full HD is possible, PS3 and XBox often render at lower resolutions).


Eurographics | 2010

A layered particle-based fluid model for real-time rendering of water

Florian Bagar; Daniel Scherzer; Michael Wimmer

We present a physically based real‐time water simulation and rendering method that brings volumetric foam to the real‐time domain, significantly increasing the realism of dynamic fluids. We do this by combining a particle‐based fluid model that is capable of accounting for the formation of foam with a layered rendering approach that is able to account for the volumetric properties of water and foam. Foam formation is simulated through Weber number thresholding. For rendering, we approximate the resulting water and foam volumes by storing their respective boundary surfaces in depth maps. This allows us to calculate the attenuation of light rays that pass through these volumes very efficiently. We also introduce an adaptive curvature flow filter that produces consistent fluid surfaces from particles independent of the viewing distance.
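Weber-number thresholding can be illustrated in a few lines: the Weber number We = rho * v^2 * L / sigma relates inertial to surface-tension forces, and particles whose relative velocity pushes We above a threshold are tagged as foam. The constants and the threshold below are placeholder values, not the ones used in the paper.

```cpp
// Illustrative sketch of Weber-number thresholding for foam formation
// (placeholder constants, not values from the paper).
#include <cstdio>
#include <vector>

struct Particle {
    float vx, vy, vz;   // velocity relative to the surrounding fluid (m/s)
    bool  isFoam = false;
};

void classifyFoam(std::vector<Particle>& particles,
                  float density,        // rho, kg/m^3
                  float lengthScale,    // L, characteristic length in m
                  float surfaceTension, // sigma, N/m
                  float weberThreshold) {
    for (Particle& p : particles) {
        float v2 = p.vx * p.vx + p.vy * p.vy + p.vz * p.vz;
        float weber = density * v2 * lengthScale / surfaceTension;
        p.isFoam = weber > weberThreshold;
    }
}

int main() {
    std::vector<Particle> particles = {
        {0.05f, 0.0f, 0.0f},   // slow particle, stays water
        {3.0f,  1.0f, 0.0f},   // fast particle, becomes foam
    };
    // Water-like constants; the threshold is a tunable, scene-dependent value.
    classifyFoam(particles, 1000.0f, 0.01f, 0.072f, 500.0f);
    for (const Particle& p : particles)
        std::printf("foam: %d\n", p.isFoam);
}
```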


Computer Graphics Forum | 2012

Pre-convolved Radiance Caching

Daniel Scherzer; Chuong H. Nguyen; Tobias Ritschel; Hans-Peter Seidel

The incident indirect light over a range of image pixels is often coherent. Two common approaches to exploit this inter-pixel coherence to improve rendering performance are Irradiance Caching and Radiance Caching. Both compute incident indirect light only for a small subset of pixels (the cache), and later interpolate between pixels. Irradiance Caching uses scalar values that can be interpolated efficiently, but cannot account for shading variations caused by normal and reflectance variation between cache items. Radiance Caching maintains directional information, e.g., to allow highlights between cache items, but at the cost of storing and evaluating a Spherical Harmonics (SH) function per pixel. The arithmetic and bandwidth cost for this evaluation is linear in the number of coefficients and can be substantial. In this paper, we propose a method to replace it by an efficient per-cache-item pre-filtering based on MIP maps (as previously done for environment maps), leading to a single constant-time lookup per pixel. Additionally, per-cache-item geometry statistics stored in distance-MIP maps are used to improve the quality of each pixel's lookup. Our approximate interactive global illumination approach is an order of magnitude faster than Radiance Caching with Phong BRDFs and can be combined with Monte Carlo ray tracing, Point-based Global Illumination or Instant Radiosity.
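The replacement of the per-pixel SH evaluation can be pictured as follows: each cache item's directional radiance is pre-filtered once into a MIP chain, and a glossy lookup then reduces to a single fetch at a level selected from the surface roughness. The 1D box-filter sketch below only illustrates that constant-time lookup; it is not the paper's filtering or parameterization.

```cpp
// Sketch of pre-filtered (MIP-based) radiance lookup in 1D (illustrative only).
#include <cstdio>
#include <vector>

// Build a MIP chain by repeated 2-tap box filtering.
std::vector<std::vector<float>> buildMipChain(std::vector<float> level0) {
    std::vector<std::vector<float>> mips{level0};
    while (mips.back().size() > 1) {
        const auto& prev = mips.back();
        std::vector<float> next(prev.size() / 2);
        for (size_t i = 0; i < next.size(); ++i)
            next[i] = 0.5f * (prev[2 * i] + prev[2 * i + 1]);
        mips.push_back(next);
    }
    return mips;
}

// Constant-time lookup: roughness in [0,1] selects the pre-filtered level.
float lookup(const std::vector<std::vector<float>>& mips, float u, float roughness) {
    size_t level = static_cast<size_t>(roughness * (mips.size() - 1) + 0.5f);
    const auto& data = mips[level];
    size_t texel = static_cast<size_t>(u * (data.size() - 1) + 0.5f);
    return data[texel];
}

int main() {
    // Hypothetical directional radiance for one cache item: a bright "highlight".
    auto mips = buildMipChain({0, 0, 0, 8, 8, 0, 0, 0});
    std::printf("mirror-like (roughness 0.0): %.2f\n", lookup(mips, 0.45f, 0.0f));
    std::printf("glossy      (roughness 0.5): %.2f\n", lookup(mips, 0.45f, 0.5f));
    std::printf("diffuse     (roughness 1.0): %.2f\n", lookup(mips, 0.45f, 1.0f));
}
```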


International Symposium on Visual Computing | 2009

Real-Time Soft Shadows Using Temporal Coherence

Daniel Scherzer; Michael Schwärzler; Oliver Mattausch; Michael Wimmer

A vast number of soft shadow map algorithms have been presented in recent years. Most use a single-sample hard shadow map together with some clever filtering technique to calculate perceptually or even physically plausible soft shadows. On the other hand, there is the class of much slower algorithms that calculate physically correct soft shadows by taking and combining many samples of the light. In this paper we present a new soft shadow method that combines the benefits of these approaches. It samples the light source over multiple frames instead of a single frame, creating only a single shadow map each frame. Where temporal coherence is low, we use spatial filtering to estimate additional samples to create correct and very fast soft shadows.
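The accumulation scheme can be sketched in a few lines: one point on the area light is sampled per frame, the resulting hard shadow test is averaged into a per-pixel history, and the soft shadow emerges over time. The code below is an illustration under simplified assumptions (a fixed 40% visible light fraction, and no spatial-filtering fallback for low-coherence regions).

```cpp
// Illustrative sketch (not the authors' code) of per-frame light sampling:
// each frame contributes ONE hard shadow result from a different point on
// the area light, and the running average converges to the soft shadow.
#include <cstdio>
#include <random>

struct SoftShadowPixel {
    float sum = 0.0f;   // accumulated hard-shadow results
    int   count = 0;    // number of light samples accumulated so far

    void accumulate(float hardShadowResult) { sum += hardShadowResult; ++count; }
    float softShadow() const { return count ? sum / count : 1.0f; }
};

// Hypothetical stand-in for one frame's hard shadow test: the pixel sees
// only the part of the area light that is not blocked (here: 40% of it).
bool hardShadowTest(float lightSampleU) { return lightSampleU < 0.4f; }

int main() {
    std::mt19937 rng(7);
    std::uniform_real_distribution<float> lightSample(0.0f, 1.0f);

    SoftShadowPixel pixel;
    for (int frame = 0; frame < 64; ++frame) {
        // One new light sample position per frame, one hard test per pixel.
        pixel.accumulate(hardShadowTest(lightSample(rng)) ? 1.0f : 0.0f);
        if ((frame + 1) % 16 == 0)
            std::printf("after %2d frames: visibility %.3f (exact 0.400)\n",
                        frame + 1, pixel.softShadow());
    }
}
```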


Computer Graphics Forum | 2010

High-Quality Screen-Space Ambient Occlusion using Temporal Coherence

Oliver Mattausch; Daniel Scherzer; Michael Wimmer

Ambient occlusion is a cheap but effective approximation of global illumination. Recently, screen-space ambient occlusion (SSAO) methods, which sample the frame buffer as a discretization of the scene geometry, have become very popular for real-time rendering. We present temporal SSAO (TSSAO), a new algorithm which exploits temporal coherence to produce high-quality ambient occlusion in real time. Compared to conventional SSAO, our method reduces both noise and the blurring artefacts caused by strong spatial filtering, faithfully representing fine-grained geometric structures. Our algorithm caches and reuses previously computed SSAO samples, and adaptively applies more samples and spatial filtering only in regions that do not yet have enough information available from previous frames. The method works well for both static and dynamic scenes.
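The adaptive part of the algorithm can be sketched as follows (the budgets, thresholds and noise model below are assumptions, not values from the paper): pixels with a long, valid sample history are refined with only a few new SSAO samples per frame, while freshly disoccluded pixels receive a larger budget until they converge.

```cpp
// Rough sketch of adaptive, temporally accumulated SSAO (assumed parameters).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

struct AOPixel {
    float ao = 1.0f;     // accumulated ambient occlusion estimate
    int   samples = 0;   // how many samples back the estimate

    void addSamples(float newEstimate, int newSamples) {
        int total = samples + newSamples;
        ao = (ao * samples + newEstimate * newSamples) / total;
        samples = total;
    }
};

// Per-frame sample budget: few samples once converged, many when starting over.
int sampleBudget(const AOPixel& p) { return p.samples >= 32 ? 2 : 16; }

// Hypothetical noisy one-frame SSAO estimate around a true occlusion of 0.6.
float noisySSAO(std::mt19937& rng, int numSamples) {
    std::normal_distribution<float> noise(0.0f, 0.3f / std::sqrt((float)numSamples));
    return std::clamp(0.6f + noise(rng), 0.0f, 1.0f);
}

int main() {
    std::mt19937 rng(3);
    AOPixel cached;                 // pixel visible for many frames
    AOPixel disoccluded;            // pixel that just became visible

    cached.addSamples(0.61f, 64);   // pretend it converged earlier

    for (int frame = 0; frame < 4; ++frame) {
        for (AOPixel* p : {&cached, &disoccluded}) {
            int n = sampleBudget(*p);
            p->addSamples(noisySSAO(rng, n), n);
        }
        std::printf("frame %d: cached ao=%.3f (%3d spp), new ao=%.3f (%3d spp)\n",
                    frame, cached.ao, cached.samples,
                    disoccluded.ao, disoccluded.samples);
    }
}
```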


Interactive 3D Graphics and Games | 2013

Fast percentage closer soft shadows using temporal coherence

Michael Schwärzler; Christian Luksch; Daniel Scherzer; Michael Wimmer

We propose a novel way to efficiently calculate soft shadows in real-time applications by overcoming the high computational effort of the complex visibility estimation required each frame: we exploit the temporal coherence prevalent in typical scene movement, making the estimation of a new shadow value necessary only where regions are newly disoccluded due to camera adjustment or the shadow situation changes due to object movement. By extending the typical shadow mapping algorithm with an additional lightweight buffer for tracking dynamic scene objects, we can robustly and efficiently detect all screen-space fragments that need to be updated, including not only the moving objects themselves, but also the soft shadows they cast. By applying this strategy to the popular Percentage Closer Soft Shadows (PCSS) algorithm, we double rendering performance in scenes with both static and dynamic objects, as prevalent in various 3D game levels, while maintaining the visual quality of the original approach.
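The reuse strategy can be sketched independently of the PCSS details: per-fragment soft-shadow values are cached, and the expensive blocker search plus variable-size PCF is re-run only for fragments flagged as disoccluded or touched by a dynamic object. The structure below is my assumption of such a cache, not the paper's implementation; expensivePCSS is a stand-in for the full evaluation.

```cpp
// Sketch of caching soft-shadow results and updating only flagged fragments.
#include <cstdio>
#include <vector>

struct Fragment {
    float cachedShadow = 1.0f;  // soft shadow value from previous frames
    bool  needsUpdate  = true;  // disoccluded or affected by a dynamic object
};

// Stand-in for the full PCSS evaluation (blocker search + variable PCF).
// In the real algorithm this is the per-pixel cost the method tries to avoid.
float expensivePCSS(int index) { return index % 2 ? 0.35f : 0.9f; }

void shadeFrame(std::vector<Fragment>& fragments) {
    int evaluated = 0;
    for (size_t i = 0; i < fragments.size(); ++i) {
        if (fragments[i].needsUpdate) {
            fragments[i].cachedShadow = expensivePCSS(static_cast<int>(i));
            fragments[i].needsUpdate = false;
            ++evaluated;
        }
        // else: reuse fragments[i].cachedShadow unchanged
    }
    std::printf("evaluated PCSS for %d of %zu fragments\n", evaluated, fragments.size());
}

int main() {
    std::vector<Fragment> screen(8);        // everything new in the first frame
    shadeFrame(screen);                     // 8 evaluations

    screen[2].needsUpdate = true;           // a dynamic object moved over two
    screen[3].needsUpdate = true;           // fragments; only these update
    shadeFrame(screen);                     // 2 evaluations
}
```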

Collaboration


Dive into Daniel Scherzer's collaborations.

Top Co-Authors

Michael Wimmer (Vienna University of Technology)
Oliver Mattausch (Vienna University of Technology)
Lei Yang (Hong Kong University of Science and Technology)
Florian Bagar (Vienna University of Technology)
Werner Purgathofer (Vienna University of Technology)
Tobias Ritschel (University College London)