Arno Zinke
University of Bonn
Publication
Featured research published by Arno Zinke.
ACM Transactions on Graphics | 2011
Jochen Tautges; Arno Zinke; Björn Krüger; Jan Baumann; Andreas Weber; Thomas Helten; Meinard Müller; Hans-Peter Seidel; Bernd Eberhardt
The development of methods and tools for the generation of visually appealing motion sequences using prerecorded motion capture data has become an important research area in computer animation. In particular, data-driven approaches have been used for reconstructing high-dimensional motion sequences from low-dimensional control signals. In this article, we contribute to this strand of research by introducing a novel framework for generating full-body animations controlled by only four 3D accelerometers that are attached to the extremities of a human actor. Our approach relies on a knowledge base that consists of a large number of motion clips obtained from marker-based motion capturing. Based on the sparse accelerometer input, a cross-domain retrieval procedure is applied to build up a lazy neighborhood graph in an online fashion. This graph structure points to suitable motion fragments in the knowledge base, which are then used in the reconstruction step. Supported by a kd-tree index structure, our procedure scales even to large datasets consisting of millions of frames. Our combined approach allows for reconstructing visually plausible continuous motion streams, even in the presence of moderate tempo variations that may not be directly reflected by the given knowledge base.
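As an illustration of the retrieval step described above, the following is a minimal sketch of querying a kd-tree index with a window of accelerometer readings. The window length, feature layout, and the synthetic knowledge base are assumptions made for the example, not details taken from the paper.

```python
# Hypothetical sketch: retrieving motion-capture candidates for a window of
# accelerometer readings via a kd-tree, in the spirit of the cross-domain
# retrieval step. Feature layout and window size are assumptions.
import numpy as np
from scipy.spatial import cKDTree

WINDOW = 8          # number of consecutive frames per query window (assumed)
SENSORS = 4         # four 3D accelerometers on the extremities
DIM = WINDOW * SENSORS * 3

# Knowledge base: accelerations that would be derived offline from mocap clips.
rng = np.random.default_rng(0)
database = rng.standard_normal((100_000, DIM))   # placeholder features
tree = cKDTree(database)

def retrieve_candidates(accel_window: np.ndarray, k: int = 16):
    """Return indices of the k most similar database windows."""
    query = accel_window.reshape(-1)             # flatten (WINDOW, SENSORS, 3)
    dists, idx = tree.query(query, k=k)
    return idx, dists

idx, dists = retrieve_candidates(rng.standard_normal((WINDOW, SENSORS, 3)))
print(idx[:5], dists[:5])
```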
International Conference on Computer Graphics and Interactive Techniques | 2008
Arno Zinke; Cem Yuksel; Andreas Weber; John Keyser
When rendering light-colored hair, multiple fiber scattering is essential for the right perception of the overall hair color. In this context, we present a novel technique to efficiently approximate multiple fiber scattering for a full head of human hair or a similar fiber-based geometry. In contrast to previous ad-hoc approaches, our method relies on the physically accurate concept of Bidirectional Scattering Distribution Functions and gives physically plausible results with no need for parameter tweaking. We show that complex scattering effects can be approximated very well by using aggressive simplifications based on this theoretical model. When compared to unbiased Monte Carlo path tracing, our approximations preserve photo-realism in most settings but with rendering times at least two orders of magnitude lower. Time and space complexity are much lower compared to photon-mapping-based techniques, and we can even achieve realistic results in real time on a standard PC with consumer graphics hardware.
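The core intuition behind efficient multiple fiber scattering can be sketched as follows: light reaching a point deep inside the hair volume has been forward-scattered by the fibers in front of it, so its intensity falls off roughly as a product of per-fiber attenuations while its angular spread grows with the number of fibers. The snippet below illustrates that intuition only; the attenuation and spread constants are placeholders, and the formulation is not the authors' exact model.

```python
# Illustrative sketch only (not the paper's formulation): light transmitted
# through n fibers is attenuated by a product of average forward-scattering
# attenuations, while the angular spread grows as per-fiber variances add up.
import math

def forward_transmittance(n_fibers: int, a_f: float = 0.7) -> float:
    """Average attenuation after forward scattering through n fibers (a_f assumed)."""
    return a_f ** n_fibers

def forward_spread(n_fibers: int, beta_f_deg: float = 10.0) -> float:
    """Std. deviation (degrees) of the accumulated longitudinal spread."""
    return math.sqrt(n_fibers) * beta_f_deg    # variances add, so sigma grows with sqrt(n)

for n in (1, 4, 16):
    print(n, forward_transmittance(n), forward_spread(n))
```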
IEEE Transactions on Visualization and Computer Graphics | 2007
Arno Zinke; Andreas Weber
Photorealistic visualization of a huge number of individual filaments, as in the case of hair, fur, or knitwear, is a challenging task: explicit rendering approaches for simulating radiance transfer at a filament become totally impractical with respect to rendering performance, and it is also not obvious how to derive efficient scattering functions for different levels of (geometric) abstraction or how to deal with very complex scattering mechanisms. We present a novel uniform formalism for light scattering from filaments in terms of radiance, which we call the bidirectional fiber scattering distribution function (BFSDF). We show that previous specialized approaches, which have been developed in the context of hair rendering, can be seen as instances of the BFSDF. Similar to the role of the BSSRDF for surface scattering functions, the BFSDF can be seen as a general approach for light scattering from filaments, which is suitable for deriving approximations in a canonical and systematic way. For the frequent cases of distant light sources and observers, we deduce an efficient far-field approximation (bidirectional curve scattering distribution function, BCSDF). We show that on the basis of the BFSDF, parameters for common rendering techniques can be estimated in a non-ad-hoc, physically based way.
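A far-field curve scattering function of the kind the BCSDF describes is commonly factored into a longitudinal and an azimuthal term. The following toy evaluation of a single lobe illustrates that factorization; the Gaussian longitudinal term, the simple cosine azimuthal falloff, and all constants are illustrative assumptions rather than the paper's derivation.

```python
# Minimal sketch of a far-field, single-lobe curve scattering function in the
# factored form longitudinal * azimuthal. Lobe shape and constants are assumed.
import math

def bcsdf_lobe(theta_i, theta_o, phi,
               shift=math.radians(-3.0), width=math.radians(7.0)):
    theta_h = 0.5 * (theta_i + theta_o)                   # longitudinal half angle
    m = math.exp(-0.5 * ((theta_h - shift) / width) ** 2) \
        / (width * math.sqrt(2.0 * math.pi))              # Gaussian longitudinal term
    n = max(math.cos(0.5 * phi), 0.0) / math.pi           # simple azimuthal falloff
    return m * n

print(bcsdf_lobe(0.1, -0.05, 0.3))
```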
Eurographics | 2011
Kai Schröder; Reinhard Klein; Arno Zinke
Efficient, physically accurate modeling and rendering of woven cloth at the yarn level is an inherently complicated task due to the underlying geometrical and optical complexity. In this paper, a novel and general approach to physically accurate cloth rendering is presented. By using a statistical volumetric model approximating the distribution of yarn fibers, a prohibitively costly explicit geometrical representation is avoided. As a result, accurate rendering of even large pieces of fabric containing orders of magnitude more fibers becomes practical without sacrificing much generality compared to fiber-based techniques. By employing the concept of local visibility and introducing the effective fiber density, limitations of existing volumetric approaches regarding self-shadowing and fiber density estimation are greatly reduced.
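The volumetric view can be illustrated with a short ray-marching sketch: instead of intersecting individual fibers, the renderer accumulates optical depth from a voxelized fiber density. The grid contents, step size, and extinction scale below are placeholders, not values from the paper.

```python
# Sketch under assumptions: ray-marched transmittance through a voxel grid of
# fiber density, as a stand-in for a statistical volumetric cloth model.
import numpy as np

def transmittance(density: np.ndarray, origin, direction, step=0.5, sigma_t=0.8):
    """Estimate exp(-integral of sigma_t * density) along a ray through the grid."""
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    optical_depth = 0.0
    while np.all(pos >= 0) and np.all(pos < np.array(density.shape)):
        i, j, k = pos.astype(int)
        optical_depth += sigma_t * density[i, j, k] * step
        pos = pos + step * d
    return np.exp(-optical_depth)

grid = np.random.default_rng(1).random((32, 32, 32)) * 0.2   # toy fiber density
print(transmittance(grid, origin=(0.5, 16.0, 16.0), direction=(1.0, 0.0, 0.0)))
```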
Ninth International Conference on Information Visualisation (IV'05) | 2005
Alfred O. Effenberg; Joachim Melzer; Andreas Weber; Arno Zinke
Sonification of human movement offers a wide range of new kinds of information for supporting motor learning in sports and rehabilitation. Even though motor learning is dominated by vision, auditory perception offers uniquely fine temporal resolution as well as an enormous integrative capacity; both are important for the perception of human movement patterns. But how can the auditory system be addressed adequately? A sonification based on kinematic movement data can convey structural features of movement, such as polyrhythms, via the auditory system, and sonification of dynamic movement data makes muscle forces approximately audible. Here, a flexible framework for the sonification of human movement data is presented, capable of processing standard kinematic motion capture data as well as derived quantities such as force data. Force data are computed by inverse dynamics algorithms and can be used as input parameters for real-time sonification. Simultaneous visualization is provided using OpenGL.
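A minimal example of parameter-mapping sonification is shown below: a one-dimensional force signal modulates the pitch of a sine tone. The sample rate, frequency range, and the synthetic force curve are assumptions for illustration and do not reflect the framework's actual mappings.

```python
# Toy sketch of parameter-mapping sonification: a 1-D force signal modulates
# the pitch of a sine tone. All constants and the force curve are assumptions.
import numpy as np

SR = 44_100                                        # audio sample rate (Hz)
t = np.linspace(0.0, 2.0, 2 * SR)                  # two seconds of audio
force = 0.5 * (1.0 + np.sin(2 * np.pi * 1.5 * t))  # placeholder force in [0, 1]

freq = 220.0 + 660.0 * force                       # map force to 220..880 Hz
phase = 2 * np.pi * np.cumsum(freq) / SR           # integrate frequency to phase
audio = 0.3 * np.sin(phase)                        # mono buffer in [-0.3, 0.3]
print(audio.shape, audio.min(), audio.max())
```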
VRIPHYS | 2011
Jan Baumann; Björn Krüger; Arno Zinke; Andreas Weber
We present a data-driven method for the completion of corrupted marker-based motion capture data. Our novel approach is especially suitable for challenging cases, e.g. if complete marker sets of multiple body parts are missing over a long period of time. Without the need for extensive preprocessing, we are able to fix missing markers across different actors and motion styles. Our approach can be used with incrementally growing prior databases, as the underlying search technique for similar motions scales well to huge databases.
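The following sketch illustrates the general idea of data-driven completion: for a frame with missing markers, poses that agree on the observed markers are retrieved from a prior database, and their values for the missing markers are blended. The pose layout, distance measure, and weighting are assumptions, not the paper's method.

```python
# Hedged sketch of data-driven gap filling for marker-based mocap frames.
import numpy as np

def fill_missing(frame: np.ndarray, observed: np.ndarray, database: np.ndarray, k: int = 8):
    """frame: (M, 3) marker positions, observed: (M,) True where the marker is present."""
    obs = frame[observed].ravel()
    candidates = database[:, observed, :].reshape(len(database), -1)
    dists = np.linalg.norm(candidates - obs, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)        # closer poses get more influence
    weights /= weights.sum()
    completed = frame.copy()
    completed[~observed] = np.einsum('k,kmd->md', weights,
                                     database[nearest][:, ~observed, :])
    return completed

rng = np.random.default_rng(2)
db = rng.standard_normal((5_000, 41, 3))           # toy prior database of poses
pose = db[0].copy()
observed = np.ones(41, dtype=bool)
observed[:5] = False                               # pretend five markers dropped out
print(fill_missing(pose, observed, db[1:], k=8)[:5])
```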
IEEE Transactions on Visualization and Computer Graphics | 2015
Kai Schröder; Arno Zinke; Reinhard Klein
Realistic visualization of cloth has many applications in computer graphics. An ongoing research problem is how to best represent and capture cloth models, specifically when considering computer-aided design of cloth. Previous methods produce highly realistic images; however, they are either difficult to edit or require the measurement of large databases to capture all variations of a cloth sample. We propose a pipeline to reverse engineer cloth and estimate a parametrized cloth model from a single image. We introduce a geometric yarn model, integrating state-of-the-art textile research. We present an automatic analysis approach to estimate yarn paths, yarn widths, their variation, and the weave pattern. Several examples demonstrate that we are able to model the appearance of the original cloth sample. Properties derived from the input image give a physically plausible basis that is fully editable using a few intuitive parameters.
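A parametrized yarn-level description of the kind estimated here can be as compact as a binary weave matrix plus a crimped yarn centerline. The sketch below shows such a representation with placeholder numbers; in the paper these parameters would be derived from the input image.

```python
# Illustrative sketch of a parametrized yarn-level cloth description: a binary
# weave repeat (1 = warp over weft) plus a sinusoidally crimped yarn path.
# All numbers are placeholders, not values produced by the paper's analysis.
import numpy as np

plain_weave = np.array([[1, 0],
                        [0, 1]])                   # 2x2 repeat of a plain weave
twill_weave = np.array([[1, 1, 0],
                        [0, 1, 1],
                        [1, 0, 1]])                # 3x3 twill repeat

def yarn_centerline(n_crossings=8, samples_per_crossing=16, spacing=1.0, crimp=0.15):
    """Yarn path crossing n_crossings weft yarns with sinusoidal out-of-plane crimp."""
    x = np.linspace(0.0, n_crossings * spacing, n_crossings * samples_per_crossing)
    z = crimp * np.sin(np.pi * x / spacing)        # alternates over/under each crossing
    return np.stack([x, np.zeros_like(x), z], axis=1)

print(plain_weave)
print(yarn_centerline()[:4])
```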
International Conference on Computer Graphics and Interactive Techniques | 2011
Martin Rump; Arno Zinke; Reinhard Klein
Simple and effective geometric and radiometric calibration of camera devices has enabled the use of consumer digital cameras for HDR photography, for image-based measurement, and for similar applications requiring a deeper understanding of camera characteristics. However, to date no such practical methods for estimating the spectral response of cameras are available. Existing approaches require costly hardware and controlled acquisition conditions, limiting their applicability. Consequently, even though it is highly desirable for color correction and color processing purposes as well as for designing image-based measurement or photographic setups, the spectral response of a camera is rarely considered. Our objective is to close this gap. In this work, a practical approach for multi-spectral characterization of trichromatic cameras is presented. By taking photographs of a color chart and measuring the average lighting with a spectrophotometer, the effective spectral response of a camera can be estimated for a wide range of out-of-lab environments. Through comprehensive cross-validation experiments we show that the new method performs well compared to costly reference measurements. Moreover, we show that our technique can also be used to generate ICC profiles with higher accuracy and less constrained capturing conditions compared to state-of-the-art ICC profilers.
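The core estimation step can be illustrated as a smoothness-regularized least-squares problem: given the chart reflectances, the measured illuminant, and the camera's RGB responses, solve for the per-channel sensitivity curves. The reflectances, illuminant, ground-truth sensitivities, and regularization weight below are synthetic placeholders, not measured data or the paper's exact formulation.

```python
# Hedged sketch: recover per-channel spectral sensitivities from color-chart
# responses and a known illuminant via smoothness-regularized least squares.
import numpy as np

rng = np.random.default_rng(3)
W, P = 33, 24                                   # wavelength bins, chart patches
wavelengths = np.linspace(400, 720, W)
reflectance = rng.random((P, W))                # placeholder patch reflectances
illuminant = np.ones(W)                         # placeholder flat illuminant

A = reflectance * illuminant                    # (P, W) forward matrix
true_S = np.exp(-0.5 * ((wavelengths[:, None] - [450, 540, 610]) / 30.0) ** 2)
rgb = A @ true_S                                # simulated camera responses (P, 3)

D = np.diff(np.eye(W), n=2, axis=0)             # second-difference smoothness operator
lam = 1e-2
S_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ rgb)
print("max recovery error:", np.max(np.abs(S_hat - true_S)))
```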
International Conference on Computer Graphics and Interactive Techniques | 2012
Kai Schröder; Shuang Zhao; Arno Zinke
This course is about recent advances in the challenging field of physically-based appearance modeling of cloth. Apart from geometrical complexity, optical complexity presents complications, as highly anisotropic single and multiple scattering effects often dominate the appearance. Many types of fibers are highly translucent, and multiple scattering significantly influences the observed color. Since a cloth model may potentially consist of billions of fibers, finding a viable level of geometrical abstraction is difficult. After explaining the general structure of several types of textiles, we give an overview of different approaches that have been proposed to render cloth. As the micro-geometry of cloth can be represented using an explicit representation of a fiber assembly, we continue by explaining optical properties of fibers; these can be derived from first principles of physics such as absorption or index of refraction. Understanding light scattering from fibers is essential when a physically-based cloth renderer is designed. However, as storing these fibers explicitly is often too costly, more efficient statistical descriptions of cloth have also been proposed that can be used together with volumetric rendering techniques to allow for physically-based image synthesis while retaining most of the flexibility of explicit methods. A major part of this course will focus on these approaches. We discuss the theory and practice of physically-based rendering of anisotropic media. The discussion begins with a review of linear transport theory, upon which current methods for rendering volumetric cloth are based. Relevant implementation details are discussed at each stage, and the final result will be the pseudocode of a Monte Carlo path tracer for volumetric cloth representations. Although rendering of cloth is a very specialized task, many of the concepts developed in this field can be used for rendering other materials with complex micro-geometry as well.
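In the spirit of the course's final pseudocode, the following is a minimal Monte Carlo random walk through a homogeneous scattering slab with an isotropic phase function, estimating how much light leaves the far side. Real volumetric cloth rendering uses spatially varying, anisotropic media; all constants here are toy assumptions.

```python
# Minimal Monte Carlo random walk in a homogeneous slab (toy stand-in for a
# volumetric cloth medium). Isotropic phase function; all constants assumed.
import numpy as np

rng = np.random.default_rng(4)
SIGMA_T, ALBEDO, THICKNESS = 2.0, 0.9, 1.0      # extinction, scattering albedo, slab depth

def trace_one_path(max_bounces=64):
    pos = np.array([0.0, 0.0, 0.0])
    d = np.array([0.0, 0.0, 1.0])               # enter the slab along +z
    weight = 1.0
    for _ in range(max_bounces):
        t = -np.log(rng.random()) / SIGMA_T     # sample free-flight distance
        pos = pos + t * d
        if pos[2] >= THICKNESS:                 # escaped through the back face
            return weight
        if pos[2] < 0.0:                        # escaped back out of the front face
            return 0.0
        weight *= ALBEDO                        # absorb (1 - albedo) at the event
        z = 2.0 * rng.random() - 1.0            # isotropic phase function sample
        phi = 2.0 * np.pi * rng.random()
        r = np.sqrt(1.0 - z * z)
        d = np.array([r * np.cos(phi), r * np.sin(phi), z])
    return 0.0

transmission = np.mean([trace_one_path() for _ in range(20_000)])
print(transmission)
```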
Spring Conference on Computer Graphics | 2010
Tomas Lay Herrera; Arno Zinke; Andreas Weber; Thomas Vetter
In this paper we present a novel, efficient, and fully automated technique to synthesize realistic facial hair, such as beards and eyebrows, on 3D head models. The method requires registered texture images of a target model on which hair needs to be generated. In the first stage of our two-step approach, a statistical measure of hair density is computed for each pixel of the texture. In addition, other geometric features such as 2D pixel orientations are extracted, which are subsequently used to generate a 3D model of the individual hair strands. Missing or incomplete information is estimated based on statistical models derived from a database of texture images of over 70 individuals. Using the new approach, characteristics of the hair extracted from a given head may also be transferred to another target.
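The per-pixel cues used in the first stage can be illustrated with a short sketch: a crude hair-likelihood map from local image gradients and a dominant 2D orientation from the structure tensor. The filter sizes and threshold are arbitrary illustration values, not those of the paper.

```python
# Sketch under assumptions: per-pixel hair likelihood from local contrast and a
# dominant 2D orientation from the structure tensor, as a rough illustration of
# per-texel hair cues. Thresholds and filter sizes are arbitrary.
import numpy as np
from scipy import ndimage

def hair_cues(gray: np.ndarray, sigma=2.0, density_thresh=0.05):
    gx = ndimage.gaussian_filter(gray, sigma, order=(0, 1))   # derivative along x
    gy = ndimage.gaussian_filter(gray, sigma, order=(1, 0))   # derivative along y
    # Smoothed structure tensor entries.
    jxx = ndimage.gaussian_filter(gx * gx, sigma)
    jxy = ndimage.gaussian_filter(gx * gy, sigma)
    jyy = ndimage.gaussian_filter(gy * gy, sigma)
    orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)        # dominant direction per pixel
    density = np.sqrt(gx * gx + gy * gy)                      # crude "hairiness" proxy
    return density > density_thresh, orientation

mask, theta = hair_cues(np.random.default_rng(5).random((128, 128)))
print(mask.mean(), theta.shape)
```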