David C. Schneider
Heinrich Hertz Institute
Publications
Featured research published by David C. Schneider.
Computers & Graphics | 2010
Anna Hilsmann; David C. Schneider; Peter Eisert
Augmenting cloth in real video is a challenging task because cloth undergoes complex motions and deformations and produces complex shading on its surface. A realistic augmentation of cloth therefore requires parameters describing both deformation and shading properties. Furthermore, objects occluding the real surface have to be taken into account: they affect the parameter estimation, and they should also occlude the virtually textured surface. This is especially challenging in monocular image sequences, where a 3-dimensional reconstruction of complex surfaces is difficult to achieve. In this paper, we present a method for cloth retexturing in monocular image sequences under external occlusions that does not require reconstructing the 3-dimensional geometry. We exploit direct image information and simultaneously estimate deformation and photometric parameters using a robust estimator which detects occluded pixels as outliers. Additionally, we exploit the estimated parameters to build an occlusion map from local statistical color models of texture surface patches established during tracking. With this information we can produce convincing augmented results.
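The core idea of detecting occluded pixels as outliers during parameter estimation can be sketched with iteratively reweighted least squares. The following is a minimal illustration, not the authors' implementation: it fits only a photometric gain and offset between a template and an observed intensity vector, and uses a Tukey biweight (an assumed choice of robust weight function) so that grossly inconsistent, e.g. occluded, pixels end up with near-zero weight.

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey biweight: zero weight for |r| > c, smoothly down-weighted inside."""
    r = np.abs(residuals)
    w = np.zeros_like(r)
    inside = r < c
    w[inside] = (1.0 - (r[inside] / c) ** 2) ** 2
    return w

def robust_gain_offset(template, observed, iters=10):
    """Estimate gain/offset so that observed ~ g*template + o, treating
    grossly inconsistent (e.g. occluded) pixels as outliers via IRLS."""
    A = np.stack([template, np.ones_like(template)], axis=1)
    w = np.ones(len(template))
    for _ in range(iters):
        # multiply rows by the current weights, then solve the least-squares fit
        params, *_ = np.linalg.lstsq(A * w[:, None], w * observed, rcond=None)
        res = observed - A @ params
        # rescale residuals by a robust scale estimate (MAD) before weighting
        scale = 1.4826 * np.median(np.abs(res - np.median(res))) + 1e-12
        w = tukey_weights(res / scale)
    return params, w  # near-zero entries of w mark detected outliers
```

The returned weight vector plays the role of a per-pixel occlusion indicator that a subsequent occlusion map could be seeded from.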
international conference on computer vision | 2009
David C. Schneider; Peter Eisert
We propose an algorithm for non-rigidly registering a 3D template mesh with a dense point cloud, using a morphable shape model to control the deformation of the template mesh. A cost function involving non-rigid shape as well as rigid pose is proposed. Registration is performed by minimizing a first-order approximation of the cost function in the Iterative Closest Point framework. We show how a complex shape model, consisting of multiple PCA models for individual regions of the template, can be seamlessly integrated into the parameter estimation scheme. An appropriate Tikhonov regularization is introduced to guarantee the smoothness of the full mesh despite the splitting into local models. The proposed algorithm is compared to a recent generic non-rigid registration scheme. We show that the data-driven approach is faster, as the linear systems to be solved in the iterations are significantly smaller when a model is available. We also show that simultaneous optimization of pose and shape yields better registration results than shape alone.
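The model-driven linear step inside such a registration can be sketched as follows. This is a simplified reading, not the paper's algorithm: it handles only shape coefficients (no rigid pose, no closest-point search, single PCA model), solving a regularized normal-equation system of exactly the small size the abstract alludes to.

```python
import numpy as np

def fit_shape_coefficients(mean_pts, basis, target_pts, lam=1e-2):
    """Express the template as mean + basis @ a (a PCA-style shape model)
    and solve for coefficients a moving template vertices toward their
    closest-point targets, with Tikhonov term lam*||a||^2 for plausibility.
    mean_pts: (N,3), basis: (N,3,k), target_pts: (N,3)."""
    B = basis.reshape(-1, basis.shape[-1])   # (3N, k) flattened basis
    r = (target_pts - mean_pts).ravel()      # closest-point residuals
    k = B.shape[1]
    # regularized normal equations: only a k x k system, hence fast
    return np.linalg.solve(B.T @ B + lam * np.eye(k), B.T @ r)
```

The k x k system illustrates why a data-driven model keeps the per-iteration linear systems small compared to generic non-rigid registration.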
vision modeling and visualization | 2011
Anna Hilsmann; David C. Schneider; Peter Eisert
Image-based texture overlay, or retexturing, is the process of augmenting a surface in an image or a video sequence with a new, synthetic texture. Properties of the original texture, such as texture distortion and lighting conditions, should be preserved for a realistic appearance of the augmented result. One approach would be to estimate the 3-dimensional geometry of the surface; however, this is an ill-posed problem for complex deformed surfaces like cloth, especially if only one image is given. In an image-based approach, these properties are estimated directly from the image. The key challenge is to separate the shading information from the actual local texture and to retrieve the texture distortion from an image without any knowledge of the underlying scene. In this paper, we model an image of a deformed regular texture as a combination of its deformed surface albedo, a shading map and additional high-frequency details. We present a method for determining these intrinsic parts of a given texture image by first estimating the appearance of a small texture element and then synthesizing a reference image of the undeformed regular texture. In a subsequent image-based optimization, this reference image is iteratively warped, spatially and photometrically, onto the original image while deformation and illumination parameters are estimated. The decomposition is used to create images of new textures with the same deformation and illumination properties as in the original image.
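The decomposition model can be written down directly. The sketch below is one plausible reading of "albedo, shading map and high-frequency details" as a multiplicative shading model with an additive detail layer; the inversion shown is a toy step that assumes the albedo is known and the details are negligible, which the actual estimation does not.

```python
import numpy as np

def compose(albedo, shading, details):
    """Forward model: deformed albedo modulated by a shading map,
    plus additive high-frequency details (assumed formulation)."""
    return albedo * shading + details

def recover_shading(image, albedo, eps=1e-6):
    """Toy inversion for the shading map when albedo is known and
    details are neglected; eps guards against division by zero."""
    return image / np.maximum(albedo, eps)
```

Once such a decomposition is available, a new albedo can be composed with the recovered shading to retexture the image with consistent illumination.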
eurographics | 2011
David C. Schneider; Anna Hilsmann; Peter Eisert
Endoscopic videokymography is a method for visualizing the motion of the plica vocalis (vocal folds) for medical diagnosis. The diagnostic interpretability of a kymogram deteriorates if camera motion interferes with vocal fold motion, which is hard to avoid in practice. We propose an algorithm for compensating strong camera motion for videokymography. The approach is based on an image-based inverse warping scheme that can be stated as an optimization problem. The algorithm is parallelizable and real-time capable on the CPU. We discuss advantages of the image-based approach and address its use for approximate structure visualization of the endoscopic scene.
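As a rough illustration of image-based camera-motion compensation (the paper optimizes a parametric inverse warp; this stand-in only estimates a global integer translation via phase correlation), consider:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer translation between two frames from the
    peak of the normalized cross-power spectrum."""
    Fr = np.fft.fft2(ref)
    Fm = np.fft.fft2(moved)
    cross = Fm * np.conj(Fr)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the circular shifts into a signed range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

def compensate(frame, dy, dx):
    """Undo the estimated camera shift by rolling the frame back."""
    return np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
```

Applying such a per-frame compensation before extracting the kymogram line keeps vocal-fold motion, not camera motion, in the resulting kymogram.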
computer vision and pattern recognition | 2011
David C. Schneider; Markus Kettern; Anna Hilsmann; Peter Eisert
Mesh-based deformable image alignment (MDIA) is an algorithm that warps a template image onto a target by deforming a 2D control mesh in the image plane, using an image-based nonlinear optimization strategy. MDIA has been successfully applied to various nonrigid registration problems, deformable surface tracking and stabilization of scene-to-camera motion in video. In this paper we investigate the use of image-based MDIA for computing dense correspondences for 3D reconstruction of human heads from high resolution portrait images. Human heads are topologically simple in 3D while providing textures which are challenging to match, such as hair and skin. We find that even with a simple piecewise affine deformation model MDIA delivers excellent correspondence results. We propose a robust, piecewise optimization scheme to compute MDIA on very high resolution images. We address issues of regularization and luminance correction and discuss the role of epipolar constraints. The correspondences retrieved with our approach facilitate the estimation of camera extrinsics and yield highly detailed meshes of the head.
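The piecewise affine deformation model at the heart of such a control-mesh warp can be sketched in a few lines. This is a simplified reading of the idea, not the MDIA implementation: a point is expressed in barycentric coordinates of its source triangle and carried along as the triangle's vertices move.

```python
import numpy as np

def barycentric(p, tri):
    """Barycentric coordinates of 2D point p in triangle tri (3x2 array)."""
    a, b, c = tri
    M = np.array([b - a, c - a]).T        # columns span the triangle's edges
    u, v = np.linalg.solve(M, p - a)
    return np.array([1.0 - u - v, u, v])

def piecewise_affine_warp(p, tri_src, tri_dst):
    """Move p with the deforming triangle: the affine map is implicit in
    keeping the barycentric coordinates fixed while vertices move."""
    w = barycentric(p, tri_src)
    return w @ tri_dst
```

Deforming the mesh vertices while keeping each pixel's barycentric coordinates fixed is exactly what makes the model piecewise affine: one affine map per triangle, continuous across shared edges.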
conference on visual media production | 2010
Markus Kettern; David C. Schneider; Benjamin Prestele; Frederik Zilly; Peter Eisert
Acquisition of consistent multi-camera image data such as for time-slice sequences (widely known by their use as cinematic effects, e.g. in “The Matrix”) is a challenging task, especially when using low-cost image sensors. Many different steps such as camera calibration and color conformation are involved, each of which poses individual problems. We have developed a complete and extendable setup for recording a time-slice image sequence displaying a rotation around the subject utilizing a circular camera array. Integrating all the aforementioned steps into a single environment, this setup includes geometrical and color calibration of the camera hardware utilizing a novel, multi-functional calibration target as well as software color adaptation to refine the calibration results. To obtain a steadily rotating animation, we have implemented an image rectification which compensates for inevitable mounting inaccuracies and creates a smooth viewpoint trajectory based on the geometrical calibration of the cameras.
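A minimal sketch of the software color adaptation step, under the assumption (ours, not the paper's) that a per-channel affine correction suffices: sample corresponding colors, e.g. from the calibration target, in one camera and in a reference camera, then fit a gain and offset per channel by least squares.

```python
import numpy as np

def color_adaptation(src_samples, ref_samples):
    """Fit per-channel gain/offset mapping one camera's colors onto a
    reference camera's. src_samples, ref_samples: (N, channels) arrays
    of corresponding color samples."""
    gains, offsets = [], []
    for ch in range(src_samples.shape[1]):
        A = np.stack([src_samples[:, ch], np.ones(len(src_samples))], axis=1)
        g, o = np.linalg.lstsq(A, ref_samples[:, ch], rcond=None)[0]
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)
```

Applying the fitted gains and offsets to every frame of a camera brings its colors into conformance with the reference, which matters when adjacent cameras are shown in rapid succession in a time-slice.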
vision modeling and visualization | 2011
David C. Schneider; Markus Kettern; Anna Hilsmann; Peter Eisert
The paper presents an approach for reconstructing head-and-shoulder portraits of people from calibrated stereo images with a high level of geometric detail. In contrast to many existing systems, our reconstructions cover the full head, including hair. This is achieved using a global intensity-based optimization approach which is stated as a parametric warp estimation problem and solved in a robust Gauss-Newton framework. We formulate a computationally efficient warp function for mesh-based estimation of depth which is based on a well-known image-registration approach and adapted to the problem of 3D reconstruction. We address the use of sparse correspondence estimates for initializing the optimization as well as a coarse-to-fine scheme for reconstructing without specific initialization. We discuss issues of regularization and brightness constancy violations and show various results to demonstrate the effectiveness of the approach.
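The Gauss-Newton machinery such an approach rests on can be illustrated generically. The sketch below is only the bare loop, with a finite-difference Jacobian for brevity; the paper couples it with an image-based warp function, robust weighting and regularization, none of which are shown here.

```python
import numpy as np

def gauss_newton(residual_fn, x0, iters=20):
    """Minimize sum of squared residuals: linearize residual_fn around the
    current estimate and solve the resulting linear least-squares step."""
    x = np.asarray(x0, dtype=float)
    h = 1e-6
    for _ in range(iters):
        r = residual_fn(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):       # forward-difference Jacobian column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residual_fn(xp) - r) / h
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

In the reconstruction setting, `residual_fn` would evaluate photometric differences between the warped and the reference image, and the parameters would be the per-vertex depths of the mesh.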
international conference on image processing | 2011
Benjamin Prestele; David C. Schneider; Peter Eisert
We propose a system for the fully automated segmentation of frontal human head portraits from arbitrary unknown background. No user interaction is required at all, as the system is initialized using a standard eye detector. Using this semantic information, the head region is projected into a normalized polar reference frame. Regional and boundary models are learned from the image data to set up an energy function for segmentation. A robust non-local boundary detection scheme is proposed, which minimizes the similarity of fore- and background regions. Additionally, a shape model learned from a large set of manually segmented images is employed as prior information to encourage the segmentation of plausible head shapes. Segmentation is performed as an iterative optimization process, using two different graph-based algorithms.
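The projection into a normalized polar reference frame can be sketched with a simple resampler. This is a nearest-neighbor toy version built around an assumed center point, such as one derived from the eye detections; the actual system's normalization is not specified here.

```python
import numpy as np

def to_polar(image, center, n_r=64, n_theta=128):
    """Resample an image into (radius, angle) coordinates around a center.
    Rows index radius, columns index angle; nearest-neighbor lookup."""
    cy, cx = center
    r_max = min(image.shape) / 2.0
    r = np.linspace(0.0, r_max - 1.0, n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]
```

In such a frame, the head boundary becomes a roughly horizontal curve, one radius per angle, which is what makes learning boundary models and running graph-based optimization convenient.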
eurographics | 2011
Anna Hilsmann; David C. Schneider; Peter Eisert
Retexturing is the process of realistically replacing the texture of an object or surface in a given image by a new, synthetic one, such that texture distortion as well as lighting conditions of the original image are preserved. The key challenge is to separate the shading information from the actual local texture and to retrieve the texture distortion from an image without any knowledge of the underlying scene. In this paper, we introduce an approach for automatic retexturing that models an image of a deformed regular texture as a combination of its deformed surface albedo, a shading map and additional high frequency details.
british machine vision conference | 2011
Anna Hilsmann; David C. Schneider; Peter Eisert
This paper formulates the Shape-from-Texture (SFT) problem of deriving the shape of an imaged surface from the distortion of its texture as a single-plane/multiple-view Structure-from-Motion (SFM) problem under full perspective projection. As in classical SFT formulations, we approximate the surface as piecewise planar. In contrast to many methods, our approach does not need a frontal view of the texture or of the texture elements as reference, as it optimizes 3D patch positions and orientations from transformations between texture elements in the image. Reconstruction amounts to minimizing a large, sparse linear least-squares cost function based on the reprojection error, a planarity constraint and the estimated rigid motion between patches. Texture element positions in the image are estimated, under the assumption of a regular texture, from clustered feature points representing repeating appearances in the image. We present results on synthetic as well as real data to evaluate our method.
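The planarity constraint on a patch can be illustrated in isolation. The sketch below is not the paper's joint cost function; it only shows the standard least-squares plane fit (via SVD of centered coordinates) and the projection that enforces exact planarity on a patch's points.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: centroid plus the direction
    of least variance as the normal."""
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c)
    n = vt[-1]
    return c, n / np.linalg.norm(n)

def project_to_plane(points, c, n):
    """Enforce planarity by removing each point's signed distance along n."""
    d = (points - c) @ n
    return points - np.outer(d, n)
```

In the full problem, residuals of this kind would enter the sparse least-squares system alongside reprojection and inter-patch rigid-motion terms.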