Publications


Featured research published by Gernot Ziegler.


Computer Graphics Forum | 2008

High-speed Marching Cubes using HistoPyramids

Christopher Dyken; Gernot Ziegler; Christian Theobalt; Hans-Peter Seidel

We present an implementation approach for Marching Cubes (MC) on graphics hardware for OpenGL 2.0 or comparable graphics APIs. It currently outperforms all other known graphics processing unit (GPU)-based iso-surface extraction algorithms in direct rendering for sparse or large volumes, even those using the recently introduced geometry shader (GS) capabilities. To achieve this, we outfit the Histogram Pyramid (HP) algorithm, previously only used in GPU data compaction, with the capability for arbitrary data expansion. After reformulating MC as a data compaction and expansion process, the HP algorithm becomes the core of a highly efficient and interactive MC implementation. For graphics hardware lacking GSs, such as mobile GPUs, the concept of HP data expansion is easily generalized, opening new application domains in mobile visual computing. Further, to serve recent developments, we show how the HP can be implemented in the parallel programming language CUDA (Compute Unified Device Architecture), using a novel 1D chunk/layer construction.
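
To make the data-expansion pattern concrete, here is a minimal CPU sketch of the HistoPyramid idea in NumPy: per-cell output counts are reduced level by level into a pyramid, and a top-down traversal maps each flat output index back to its source cell and local copy. This only illustrates the indexing scheme under made-up example counts; it is not the paper's OpenGL or CUDA implementation.

```python
# Minimal CPU sketch of HistoPyramid-style data expansion (NumPy).
# Each input cell i declares how many output elements it produces
# (counts[i]); the pyramid lets us map a flat output index back to its
# source cell without gathering results serially on the host.
# Illustration of the idea only, not the paper's GPU code.
import numpy as np

def build_histopyramid(counts):
    """Reduce per-cell output counts level by level (pairwise sums)."""
    levels = [np.asarray(counts, dtype=np.int64)]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:                      # pad odd levels with a zero
            cur = np.append(cur, 0)
        levels.append(cur[0::2] + cur[1::2])
    return levels                             # levels[-1][0] == total output size

def traverse(levels, out_index):
    """Map a flat output index to (input cell, local copy index)."""
    cell = 0
    for lvl in reversed(range(len(levels) - 1)):
        cell *= 2
        left = levels[lvl][cell]
        if out_index >= left:                 # descend into the right child
            out_index -= left
            cell += 1
    return cell, out_index

counts = [0, 3, 0, 1, 2, 0, 0, 4]             # e.g. triangles emitted per MC cell
levels = build_histopyramid(counts)
total = int(levels[-1][0])
expanded = [traverse(levels, i) for i in range(total)]
print(total, expanded)                        # 10 outputs, each tagged with its source cell
```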


International Conference on Computer Graphics and Interactive Techniques | 2007

Eikonal rendering: efficient light transport in refractive objects

Ivo Ihrke; Gernot Ziegler; Art Tevs; Christian Theobalt; Marcus Magnor; Hans-Peter Seidel

We present a new method for real-time rendering of sophisticated lighting effects in and around refractive objects. It enables us to realistically display refractive objects with complex material properties, such as arbitrarily varying refractive index, inhomogeneous attenuation, as well as spatially-varying anisotropic scattering and reflectance properties. User-controlled changes of lighting positions only require a few seconds of update time. Our method is based on a set of ordinary differential equations derived from the eikonal equation, the main postulate of geometric optics. This set of equations allows for fast casting of bent light rays with the complexity of a particle tracer. Based on this concept, we also propose an efficient light propagation technique using adaptive wavefront tracing. Efficient GPU implementations for our algorithmic concepts enable us to render a combination of visual effects that were previously not reproducible in real-time.
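
For reference, the eikonal equation and the first-order ray ODE system it gives rise to are standard relations of geometric optics; the formulation below is a generic statement of them, not necessarily the exact parameterization used in the paper.

```latex
% Eikonal equation for the optical path length S in a medium with
% spatially varying refractive index n(x).
\[ \lVert \nabla S(\mathbf{x}) \rVert = n(\mathbf{x}) \]
% Rays are characteristics of this PDE. With arc length s and the
% substitution v = n dx/ds, the ray equation d/ds(n dx/ds) = grad n
% becomes a first-order system suitable for a particle tracer:
\[ \frac{d\mathbf{x}}{ds} = \frac{\mathbf{v}}{n}, \qquad
   \frac{d\mathbf{v}}{ds} = \nabla n . \]
```

Advancing (x, v) along the arc length with an ordinary ODE integrator is what allows bent light rays to be cast at roughly the cost of a particle tracer, as stated above.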


International Conference on Image Processing | 2004

Multivideo compression in texture space

Gernot Ziegler; Hendrik P. A. Lensch; Naveed Ahmed; Marcus A. Magnor; Hans-Peter Seidel

We present a model-based approach to encode multiple synchronized video streams depicting a dynamic scene from different viewpoints. With approximate 3D scene geometry available, we compensate for motion as well as disparity by transforming all video images to object textures prior to compression. A two-level hierarchical coding strategy is employed to efficiently exploit inter-texture coherence as well as to ensure quick random access during decoding. Experimental validation shows that attainable compression ratios range up to 50:1 without subsampling. The proposed coding scheme is intended for use in conjunction with free-viewpoint video and 3D-TV applications.
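
As an illustration of the texture-space idea, the following hypothetical NumPy sketch warps one camera frame into an object texture using a precomputed per-texel lookup table, assumed to be derived offline from the approximate geometry and the camera calibration. The function name and array layout are invented for this sketch; it is not the paper's pipeline.

```python
# Warp a camera frame into a shared object texture using a lookup table
# that stores, for every texel, the image position where that surface
# point projects (hypothetical precomputed inputs).
import numpy as np

def warp_to_texture(frame, texel_to_pixel, visible):
    """frame: (H, W, 3) camera image.
    texel_to_pixel: (TH, TW, 2) integer (x, y) pixel coordinates per texel.
    visible: (TH, TW) bool mask of texels seen by this camera."""
    th, tw, _ = texel_to_pixel.shape
    tex = np.zeros((th, tw, 3), dtype=frame.dtype)
    xs = texel_to_pixel[..., 0]
    ys = texel_to_pixel[..., 1]
    tex[visible] = frame[ys[visible], xs[visible]]
    return tex

# Toy usage with random data, just to show the shapes involved.
H, W, TH, TW = 480, 640, 256, 256
frame = np.random.randint(0, 256, (H, W, 3), dtype=np.uint8)
lut = np.dstack([np.random.randint(0, W, (TH, TW)),
                 np.random.randint(0, H, (TH, TW))])
vis = np.random.rand(TH, TW) > 0.5
print(warp_to_texture(frame, lut, vis).shape)   # (256, 256, 3)
```

After this warp, corresponding surface points occupy the same texel in every view and every frame, which is what lets the coder exploit inter-texture coherence.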


IEEE Signal Processing Magazine | 2007

High-Quality Reconstruction from Multiview Video Streams

Christian Theobalt; Naveed Ahmed; Gernot Ziegler; Hans-Peter Seidel

Three-dimensional (3-D) video processing is currently an active area of research that attracts scientists from many disciplines, including computer graphics, computer vision, electrical engineering, and video processing. They join their expertise to attack the very hard problem of reconstructing dynamic representations of real-world scenes from a sparse set of synchronized video streams. To put this idea into practice, a variety of challenging engineering and algorithmic problems have to be solved efficiently, ranging from acquisition, through reconstruction itself, to realistic rendering. This article is a tutorial-style review of methods from the literature that aim at reconstructing 3-D humans, as well as of a variety of model-based approaches that we developed to reconstruct, render, and encode free-viewpoint videos of human actors. We will show that the commitment to an a priori shape representation of a person in the real world allows us to solve many of the previously described reconstruction problems efficiently.


Multimedia Signal Processing | 2004

Multi-video compression in texture space using 4D SPIHT

Gernot Ziegler; Hendrik P. A. Lensch; Marcus A. Magnor; Hans-Peter Seidel

We present a model-based approach to encode multiple synchronized video streams which show a dynamic scene from different viewpoints. By utilizing 3D scene geometry, we compensate for motion and disparity by transforming all video images to object textures prior to compression. A 4D SPIHT wavelet compression algorithm exploits inter-frame coherence in both the temporal and spatial dimensions. Unused texels increase compression, and the shape mask can be omitted at the cost of higher decoder complexity. The proposed coding scheme is intended for use in conjunction with free-viewpoint video and 3D-TV applications.
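
SPIHT itself is an involved zerotree bit-plane coder; the sketch below only illustrates the separable multi-dimensional Haar transform that exposes coherence along all four axes (x, y, time, view) of a texture-space volume, which is the property such a coder exploits. It is a simplified illustration, not the paper's codec.

```python
# Single-level separable Haar wavelet transform applied along all four
# axes of a texture-space video volume (x, y, time, view).  Smooth data
# concentrates its energy in the low-pass corner, which a zerotree coder
# such as SPIHT then encodes compactly.  Illustration only.
import numpy as np

def haar_1d(a, axis):
    """One Haar analysis step along the given axis (even length assumed)."""
    a = np.moveaxis(a, axis, 0)
    low = (a[0::2] + a[1::2]) / np.sqrt(2.0)   # averages
    high = (a[0::2] - a[1::2]) / np.sqrt(2.0)  # details
    return np.moveaxis(np.concatenate([low, high], axis=0), 0, axis)

def haar_4d(volume):
    """Apply the 1D step along every axis of a 4D volume."""
    for axis in range(4):
        volume = haar_1d(volume, axis)
    return volume

# Toy volume: 8x8 texels, 4 frames, 2 views of a smoothly varying signal.
vol = np.linspace(0.0, 1.0, 8 * 8 * 4 * 2).reshape(8, 8, 4, 2)
coeffs = haar_4d(vol)
print(coeffs.shape, float(np.abs(coeffs).max()))
```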


Electronic Imaging | 2007

Real-time quadtree analysis using HistoPyramids

Gernot Ziegler; Rouslan Dimitrov; Christian Theobalt; Hans-Peter Seidel

Region quadtrees are convenient tools for hierarchical image analysis. Like the related Haar wavelets, they are simple to generate within a fixed calculation time. The clustering at each resolution level requires only local data, yet they deliver intuitive classification results. Although the region quadtree partitioning is very rigid, it can be rapidly computed from arbitrary imagery. This article demonstrates how graphics hardware can be utilized to build region quadtrees at unprecedented speeds. To achieve this, a data structure called a HistoPyramid registers the number of desired image features in a pyramidal 2D array. This HistoPyramid is then used as an implicit indexing data structure through quadtree traversal, creating lists of the registered image features directly in GPU memory and virtually eliminating bus transfers between CPU and GPU. With this novel concept, quadtrees can be applied in real-time video processing on standard PC hardware. A multitude of applications in image and video processing arises, since region quadtree analysis becomes a light-weight preprocessing step for feature clustering in vision tasks, motion vector analysis, PDE calculations, or data compression. As a side note, we outline how this algorithm can be applied to 3D volume data, effectively generating region octrees purely on graphics hardware.
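
The quadtree traversal can be illustrated with a minimal CPU sketch: a count pyramid is built over a binary feature mask, and a four-way descent converts a flat output index into the pixel coordinates of the corresponding feature, i.e. the point-list extraction step described above. This NumPy version only illustrates the indexing; it is not the GPU implementation.

```python
# CPU sketch of 2D HistoPyramid point-list extraction: count marked
# pixels in a mip-map-like pyramid, then descend the quadtree to turn a
# flat output index into the (y, x) position of that pixel.
import numpy as np

def build_pyramid(mask):
    """mask: square boolean array whose side is a power of two."""
    levels = [mask.astype(np.int64)]
    while levels[-1].shape[0] > 1:
        m = levels[-1]
        levels.append(m[0::2, 0::2] + m[0::2, 1::2] +
                      m[1::2, 0::2] + m[1::2, 1::2])
    return levels                      # levels[-1][0, 0] == number of points

def extract(levels, index):
    """Descend the quadtree to locate the index-th marked pixel."""
    y = x = 0
    for lvl in reversed(range(len(levels) - 1)):
        y, x = 2 * y, 2 * x
        for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1)):   # fixed child order
            c = levels[lvl][y + dy, x + dx]
            if index < c:
                y, x = y + dy, x + dx
                break
            index -= c
    return y, x

mask = np.zeros((8, 8), dtype=bool)
mask[1, 2] = mask[3, 3] = mask[6, 5] = True
levels = build_pyramid(mask)
points = [extract(levels, i) for i in range(int(levels[-1][0, 0]))]
print(points)                          # [(1, 2), (3, 3), (6, 5)]
```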


International Conference on Computer Graphics and Interactive Techniques | 2007

GPU-based light wavefront simulation for real-time refractive object rendering

Gernot Ziegler; Christian Theobalt; Ivo Ihrke; Marcus A. Magnor; Art Tevs; Hans-Peter Seidel

In our paper on Eikonal Rendering [1], presented in the SIGGRAPH 2007 main paper program, we propose a novel algorithm to render a variety of sophisticated lighting effects in and around refractive objects in real-time on a single PC. Our method enables us to realistically display objects with spatially-varying refractive indices, inhomogeneous attenuation characteristics, as well as spatially-varying reflectance and anisotropic scattering properties. We can reproduce arbitrarily curved light paths, volume and surface caustics, anisotropic scattering as well as total reflection by means of the same efficient theoretical framework. In our approach, scenes are represented volumetrically. One core component of our method is a fast GPU particle tracer to compute viewing ray trajectories. It uses ordinary differential equations derived from the eikonal equation. The second important component is a light simulator, which utilizes a similar ODE-based framework to adaptively trace light wavefronts through the scene in a few seconds. While the conference paper [1] focuses on the development of the theory and the validation of our results, this sketch describes in detail the new concepts and data structures that we developed to implement view rendering and light wavefront tracing on the GPU (see figure 1 for a stage overview).
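
A toy CPU version of one such ray-ODE integration step is shown below, consistent with the eikonal-derived system quoted earlier: the refractive index lives on a voxel grid, its gradient is taken with central differences, and plain Euler integration advances each particle. Grid, step size, and integrator are simplifying assumptions, not the adaptive GPU tracer described in this sketch.

```python
# One Euler step of the ray ODE   dx/ds = v / n(x),  dv/ds = grad n(x),
# with the refractive index n stored on a voxel grid.  Toy integrator
# for illustration; the paper's GPU tracer is adaptive and far faster.
import numpy as np

def grad_n(n, p):
    """Central-difference gradient of the index grid n at point p."""
    i = np.clip(np.round(p).astype(int), 1, np.array(n.shape) - 2)
    return np.array([(n[i[0] + 1, i[1], i[2]] - n[i[0] - 1, i[1], i[2]]) * 0.5,
                     (n[i[0], i[1] + 1, i[2]] - n[i[0], i[1] - 1, i[2]]) * 0.5,
                     (n[i[0], i[1], i[2] + 1] - n[i[0], i[1], i[2] - 1]) * 0.5])

def euler_step(n, x, v, ds):
    """Advance one ray particle (position x, momentum v = n dx/ds) by ds."""
    i = np.clip(np.round(x).astype(int), 0, np.array(n.shape) - 1)
    x_new = x + ds * v / n[tuple(i)]
    v_new = v + ds * grad_n(n, x)
    return x_new, v_new

# Toy volume: refractive index increasing along z bends the ray toward +z.
n = np.ones((32, 32, 32)) + np.linspace(0.0, 0.5, 32)[None, None, :]
x = np.array([16.0, 16.0, 4.0])
v = np.array([0.0, 1.0, 0.0]) * n[16, 16, 4]     # start travelling along +y
for _ in range(20):
    x, v = euler_step(n, x, v, ds=0.5)
print(x)    # the z component has grown: the ray curved toward higher n
```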


International Conference on Computer Graphics and Interactive Techniques | 2005

Joint motion and reflectance capture for relightable 3D video

Christian Theobalt; Naveed Ahmed; Edilson de Aguiar; Gernot Ziegler; Hendrik P. A. Lensch; Marcus A. Magnor; Hans-Peter Seidel

3D videos of human actors can be faithfully reconstructed from multiple synchronized video streams by means of a model-based analysis-by-synthesis approach [Carranza et al. 2003]. The reconstructed videos play back in real-time, and the virtual viewpoint onto the scene can be changed arbitrarily. In this way, authentically animated, photo-realistically and view-dependently textured models of real people can be created that look real under fixed illumination conditions. To import real-world characters into virtual environments, however, surface reflectance properties must also be known. We have thus developed a video-based modeling approach that captures human motion as well as reflectance characteristics from a handful of synchronized video recordings. The presented method is able to recover spatially varying reflectance properties of clothes by exploiting the time-varying orientation of each surface point with respect to camera and light direction. The resulting model description enables us to match animated subject appearance to different lighting conditions, as well as to interchange surface attributes among different people, e.g. for virtual dressing. Our relightable 3D videos allow populating virtual worlds with convincingly relit real-world people.
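
As a heavily simplified illustration of the per-texel estimation idea, the sketch below fits only a Lambertian albedo to many observations of one texel under known normals and light directions; the paper recovers a much richer spatially varying reflectance model, and every array and name here is a hypothetical input.

```python
# Per-texel least-squares fit of a Lambertian albedo from many frames in
# which the surface normal and light direction vary over time (the same
# time-varying-orientation idea as above, reduced to the simplest model).
import numpy as np

def fit_lambertian_albedo(intensities, normals, light_dirs):
    """intensities: (F,) observed texel intensity per frame.
    normals: (F, 3) unit surface normal of that texel per frame.
    light_dirs: (F, 3) unit direction from the surface point to the light."""
    cos_theta = np.sum(normals * light_dirs, axis=1).clip(min=0.0)
    keep = cos_theta > 1e-3                      # drop back-facing frames
    x, y = cos_theta[keep], intensities[keep]
    return float(np.dot(x, y) / np.dot(x, x))    # least-squares slope = albedo

# Synthetic check: generate observations from a known albedo, then recover it.
rng = np.random.default_rng(0)
F, albedo = 200, 0.63
normals = rng.normal(size=(F, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
light = np.tile(np.array([0.0, 0.0, 1.0]), (F, 1))
obs = albedo * np.sum(normals * light, axis=1).clip(min=0.0)
obs += rng.normal(scale=0.01, size=F)            # simulated camera noise
print(round(fit_lambertian_albedo(obs, normals, light), 3))   # close to 0.63
```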


Archive | 2006

GPU point list generation through histogram pyramids

Gernot Ziegler; Art Tevs; Christian Theobalt; Hans-Peter Seidel
