
Publication


Featured research published by Dominik Sibbing.


International Conference on 3D Vision | 2013

SIFT-Realistic Rendering

Dominik Sibbing; Torsten Sattler; Bastian Leibe; Leif Kobbelt

3D localization approaches establish correspondences between points in a query image and a 3D point cloud reconstruction of the environment. Traditionally, the database models are created from photographs using Structure-from-Motion (SfM) techniques, which requires large collections of densely sampled images. In this paper, we address the question of how point cloud data from terrestrial laser scanners can be used instead to significantly reduce the data collection effort and enable more scalable localization. The key difference here is that, in contrast to SfM points, laser-scanned 3D points are not automatically associated with local image features that could be matched to query image features. In order to make this data usable for image-based localization, we explore how point cloud rendering techniques can be leveraged to create virtual views from which database features can be extracted that match real image-based features as closely as possible. We propose different rendering techniques for this task, experimentally quantify how they affect feature repeatability, and demonstrate their benefit for image-based localization.
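As a concrete illustration of the matching step, here is a minimal sketch (not the paper's implementation) of how descriptors extracted from rendered virtual views might be matched against query image descriptors using Lowe's ratio test, the standard acceptance criterion for SIFT features; the function name and the synthetic data are hypothetical:

```python
import numpy as np

def ratio_test_match(query_desc, db_desc, ratio=0.8):
    """Match query descriptors to database descriptors using
    Lowe's ratio test. Returns a list of (query_index, db_index)
    pairs for unambiguous correspondences only."""
    matches = []
    for i, q in enumerate(query_desc):
        # Euclidean distance from this query descriptor to every
        # descriptor extracted from the rendered database views.
        d = np.linalg.norm(db_desc - q, axis=1)
        order = np.argsort(d)
        best, second = d[order[0]], d[order[1]]
        # Accept only if the best match is clearly better than the
        # second best -- this rejects ambiguous correspondences.
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches
```

In a localization pipeline, each accepted 2D-3D match would then feed a pose solver such as RANSAC-based PnP.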


Computer Vision and Image Understanding | 2011

Markerless reconstruction and synthesis of dynamic facial expressions

Dominik Sibbing; Martin Habbecke; Leif Kobbelt

In this paper we combine methods from the field of computer vision with surface editing techniques to generate animated faces, which are all in full correspondence to each other. The inputs for our system are synchronized video streams from multiple cameras. The system produces a sequence of triangle meshes with fixed connectivity, representing the dynamics of the captured face. By carefully taking all requirements and characteristics into account, we arrived at the following system design: we deform an initial face template using movements estimated from the video streams. To increase the robustness of the reconstruction, we use a morphable model as a shape prior to initialize a surfel fitting technique which is able to precisely capture face shapes not included in the morphable model. In the deformation stage, we use a 2D mesh-based tracking approach to establish correspondences over time. We then reconstruct positions in 3D using the same surfel fitting technique, and finally use the reconstructed points to robustly deform the initially reconstructed face. We demonstrate the applicability of the tracked face template for automatic modeling and show how to use deformation transfer to attenuate expressions, blend expressions, or build a statistical model of the dynamic movements, similar to a morphable model.
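The morphable-model shape prior boils down to a regularized least-squares fit of PCA coefficients. A minimal sketch under simplifying assumptions (known correspondences, a linear PCA basis; `fit_morphable_model` and its regularization weight are hypothetical, not the paper's code):

```python
import numpy as np

def fit_morphable_model(mean, basis, observed, reg=1e-2):
    """Least-squares fit of PCA coefficients c so that
    mean + basis @ c approximates the observed vertex positions.
    Tikhonov regularization (reg) keeps the shape plausible by
    shrinking coefficients toward the mean face."""
    A = basis
    b = observed - mean
    # Solve the normal equations (A^T A + reg*I) c = A^T b.
    c = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
    return mean + A @ c, c
```

Such a fit only initializes the shape; details outside the PCA span would then be recovered by the surfel fitting stage.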


Computer Graphics Forum | 2010

Image Synthesis for Branching Structures

Dominik Sibbing; Darko Pavic; Leif Kobbelt

We present a set of techniques for the synthesis of artificial images that depict branching structures like rivers, cracks, lightning, mountain ranges, or blood vessels. The central idea is to build a statistical model that captures the characteristic bending and branching structure from example images. Then a new skeleton structure is synthesized and the final output image is composed from image fragments of the original input images. The synthesis part of our algorithm runs mostly automatically but optionally allows the user to control the process in order to achieve a specific result. The combination of the statistical bending and branching model with sophisticated fragment-based image synthesis corresponds to a multi-resolution decomposition of the underlying branching structure into the low frequency behavior (captured by the statistical model) and the high frequency detail (captured by the image detail in the fragments). This approach allows for the synthesis of realistic branching structures, while at the same time preserving important textural details from the original image.
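The skeleton-synthesis idea can be sketched with toy stand-ins for the learned statistics: a Gaussian over bending angles and a Bernoulli branching probability, both hypothetical placeholders for the distributions the paper estimates from example images:

```python
import math
import random

def synthesize_skeleton(n_steps=40, step=1.0,
                        bend_sigma=0.2, branch_prob=0.08,
                        max_depth=3, rng=None, _depth=0,
                        start=(0.0, 0.0), heading=math.pi / 2):
    """Grow a 2D branching skeleton by sampling bend angles from a
    Gaussian and branch events from a Bernoulli distribution.
    Returns a list of ((x0, y0), (x1, y1)) line segments."""
    rng = rng or random.Random(0)
    segments = []
    x, y = start
    for _ in range(n_steps):
        heading += rng.gauss(0.0, bend_sigma)   # low-frequency bending
        nx = x + step * math.cos(heading)
        ny = y + step * math.sin(heading)
        segments.append(((x, y), (nx, ny)))
        if _depth < max_depth and rng.random() < branch_prob:
            # Spawn a side branch at a sampled branching angle.
            side = heading + rng.choice([-1, 1]) * rng.gauss(0.7, 0.1)
            segments += synthesize_skeleton(n_steps // 2, step, bend_sigma,
                                            branch_prob, max_depth, rng,
                                            _depth + 1, (nx, ny), side)
        x, y = nx, ny
    return segments
```

In the full method, each synthesized skeleton segment would subsequently be covered with matching image fragments from the exemplars to produce the output image.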


International Conference on Computer Vision | 2009

Markerless reconstruction of dynamic facial expressions

Dominik Sibbing; Martin Habbecke; Leif Kobbelt

In this paper we combine methods from the field of computer vision with surface editing techniques to generate animated faces, which are all in full correspondence to each other. The inputs for our system are synchronized video streams from multiple cameras. The system produces a sequence of triangle meshes with fixed connectivity, representing the dynamics of the captured face. By carefully taking all requirements and characteristics into account, we arrived at the following system design: we deform an initial face template using movements estimated from the video streams. To increase the robustness of the initial reconstruction, we use a morphable model as a shape prior. However, using an efficient surfel fitting technique, we are still able to precisely capture face shapes not part of the PCA model. In the deformation stage, we use a 2D mesh-based tracking approach to establish correspondences over time. We then reconstruct image samples in 3D using the same surfel fitting technique, and finally use the reconstructed points to robustly deform the initially reconstructed face.


Computer Graphics Forum | 2014

Efficient enforcement of hard articulation constraints in the presence of closed loops and contacts

Robin Tomcin; Dominik Sibbing; Leif Kobbelt

In rigid body simulation, one must distinguish between contacts (so-called unilateral constraints) and articulations (bilateral constraints). For contacts and friction, iterative solution methods have proven most useful for interactive applications, often in combination with shock propagation in cases with strong interactions between contacts (such as stacks), prioritizing performance and plausibility over accuracy. For articulation constraints, direct solution methods are preferred, because one can rely on a factorization with linear time complexity for tree-like systems, even in ill-conditioned cases caused by large mass ratios or high complexity. Despite recent advances, combining the performance advantages of direct and iterative solution methods has proven difficult, and the complexity of articulations in interactive applications is often limited by the convergence speed of the iterative solution method in the presence of closed kinematic loops (i.e. auxiliary constraints) and contacts. We identify common performance bottlenecks in the dynamic simulation of unilateral and bilateral constraints and present a simulation method that scales well in the number of constraints even in ill-conditioned cases with frictional contacts, collisions, and closed loops in the kinematic graph. For cases where many joints are connected to a single body, we propose a technique to increase the sparsity of the positive definite linear system. Together, these solutions make the real-time simulation of a wider range of mechanisms possible without extensive parameter tuning.
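The direct solution of articulation constraints ultimately reduces to factorizing a positive definite linear system. A minimal dense sketch of that core step (the paper exploits sparsity and linear-time factorization for tree-like systems, which this toy version does not; the function name and the `cfm` softening term are illustrative assumptions):

```python
import numpy as np

def solve_constraint_impulses(J, M_inv, rhs, cfm=1e-9):
    """Solve the bilateral-constraint system
        (J M^{-1} J^T + cfm*I) lambda = rhs
    directly via Cholesky factorization -- the kind of positive
    definite system a direct articulation solver factorizes.
    J: (m, n) constraint Jacobian; M_inv: (n, n) inverse mass matrix;
    cfm: tiny diagonal softening for numerical robustness."""
    A = J @ M_inv @ J.T + cfm * np.eye(J.shape[0])
    L = np.linalg.cholesky(A)           # A = L L^T
    y = np.linalg.solve(L, rhs)         # forward substitution
    lam = np.linalg.solve(L.T, y)       # back substitution
    return lam
```

For tree-structured articulations the matrix A has a sparsity pattern that admits a fill-in-free factorization, which is what makes the direct approach linear-time there.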


Computational Intelligence and Neuroscience | 2016

Nonparametric facial feature localization using segment-based eigenfeatures

Hyun Chul Choi; Dominik Sibbing; Leif Kobbelt

We present a nonparametric facial feature localization method using relative directional information between regularly sampled image segments and facial feature points. Instead of using an iterative parameter optimization technique or a search algorithm, our method finds the location of facial feature points through a weighted concentration of the directional vectors originating from the image segments and pointing to the expected facial feature positions. Each directional vector is calculated as a linear combination of eigendirectional vectors, which are obtained by a principal component analysis of training facial segments in histogram of oriented gradients (HOG) feature space. Our method finds facial feature points quickly and accurately, since it utilizes statistical reasoning from all the training data without the need to extract local patterns at the estimated positions of facial features, run an iterative parameter optimization algorithm, or perform a search. In addition, we can reduce the storage size of the trained model by controlling the energy preserving level of the HOG pattern space.
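The voting scheme can be sketched as follows: each segment casts a weighted vote at its center plus a predicted offset, and the feature is located at the cell with the highest vote concentration. This is a simplified stand-in; in the paper the offsets come from linear combinations of eigendirectional vectors in HOG space, whereas here they are given directly:

```python
import numpy as np

def locate_feature(segment_centers, offset_vectors, weights, grid_shape):
    """Accumulate weighted votes cast by image segments into a grid
    and return the cell with the highest vote concentration.
    Each segment votes for (center + offset); votes falling outside
    the grid are discarded."""
    votes = np.zeros(grid_shape)
    for (cx, cy), (ox, oy), w in zip(segment_centers, offset_vectors, weights):
        px, py = int(round(cx + ox)), int(round(cy + oy))
        if 0 <= px < grid_shape[0] and 0 <= py < grid_shape[1]:
            votes[px, py] += w
    return np.unravel_index(np.argmax(votes), grid_shape)
```

Because the estimate is a single argmax over accumulated votes, no iteration or search is needed, which is the source of the method's speed.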


Bildverarbeitung für die Medizin | 2007

Fast Interactive Region of Interest Selection for Volume Visualization

Dominik Sibbing; Leif Kobbelt

We describe a new method to support the segmentation of a volumetric MRI or CT dataset such that only the components selected by the user are displayed by a volume renderer for visual inspection. The goal is to combine the advantages of direct volume rendering (high efficiency and semi-transparent display of internal structures) and indirect volume rendering (well-defined surface geometry and topology). Our approach is based on a re-labeling of the input volume's set of isosurfaces, which allows the user to peel off the outer layers and to distinguish unconnected voxel components which happen to have the same voxel values. For memory and time efficiency, isosurfaces are never generated explicitly. Instead, a second voxel grid is computed which stores a discretization of the new isosurface labels. Hence, the masking of unwanted regions as well as the direct volume rendering of the desired regions of interest (ROI) can be implemented on the GPU, which enables interactive frame rates even while the user changes the selection of the ROI.
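The relabeling that separates unconnected components of equal voxel value is essentially a connected-component labeling on the thresholded volume. A minimal CPU sketch (the paper's version runs on the GPU and never extracts surfaces explicitly; this flood-fill version is only illustrative):

```python
from collections import deque

import numpy as np

def label_components(volume, iso):
    """Label 6-connected components of voxels with value >= iso.
    Unconnected components sharing the same voxel values receive
    distinct labels, so each can be masked or shown individually."""
    mask = volume >= iso
    labels = np.zeros(volume.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already part of a component
        current += 1
        queue = deque([seed])
        labels[seed] = current
        while queue:                      # breadth-first flood fill
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and mask[n] and not labels[n]:
                    labels[n] = current
                    queue.append(n)
    return labels, current
```

The resulting label grid plays the role of the second voxel grid in the paper: the renderer consults it per voxel to show or mask each region of interest.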


Archive | 2017

Reconstructing dynamic morphable models of the human face

Dominik Sibbing; Leif Kobbelt; Norman I. Badler

In this thesis we developed new techniques to detect, reconstruct and track human faces from pure image data. It is divided into two parts. While the first part considers static faces only, the second part deals with dynamic facial movements. For static faces we introduce a new facial feature localization method that determines the position of facial features relative to segments that were uniformly distributed in an input image. In this work we introduce and train a compact codebook that is the foundation of a voting scheme: based on the appearance of an image segment, this codebook provides offset vectors originating from the segment's center and pointing towards possible feature locations. Compared to state-of-the-art methods, we show that this compact codebook has advantages regarding computation time and memory consumption without losing accuracy. Leaving the two-dimensional image space, in the following chapter we introduce and compare two new 3D reconstruction approaches that extract the 3D shape of a human face from multiple images. Those images were synchronously taken by a calibrated camera rig. With the aim of generating a large database of 3D facial movements, in the second part of this thesis we extend both systems to reconstruct and track human faces in 3D from videos taken by our camera rig. Both systems are completely image based and do not require any kind of facial markers. By carefully taking all requirements and characteristics into account and discussing the single steps of the pipeline, we propose a facial reconstruction system that efficiently and robustly deforms a generic 3D mesh template to track a human face over time. Our tracking system preserves temporal and spatial correspondences between reconstructed faces.
This allows us to use the resulting database of facial movements, showing different facial expressions of a fairly large number of subjects, for further statistical analysis and to compute a generic movement model for facial actions. This movement model is independent of individual facial physiognomies. In the last chapter we introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer-grade camera.


Computer Graphics Forum | 2017

Building a Large Database of Facial Movements for Deformation Model‐Based 3D Face Tracking

Dominik Sibbing; Leif Kobbelt

We introduce a new markerless 3D face tracking approach for 2D videos captured by a single consumer-grade camera. Our approach takes detected 2D facial features as input and matches them with projections of 3D features of a deformable model to determine its pose and shape. To make the tracking and reconstruction more robust we add a smoothness prior for pose and deformation changes of the faces. Our major contribution lies in the formulation of the deformation prior, which we derive from a large database of facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length which we use to predict the facial motion based on previous frames. In order to keep the deformation model compact and independent from the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply the recovered deformations to other physiognomies and thereby re-target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples.
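The PCA in deformation gradient space can be sketched as an SVD of stacked, flattened per-triangle gradients. A toy version under simplifying assumptions (gradients are supplied as flat vectors, and the snippet-based temporal prediction of the paper is omitted; both function names are illustrative):

```python
import numpy as np

def deformation_pca(grad_samples, n_modes):
    """PCA over flattened per-triangle deformation gradients.
    grad_samples: (n_frames, n_triangles * 9) matrix where each row
    stacks the 3x3 deformation gradients of one frame.
    Returns the mean deformation and the leading deformation modes."""
    mean = grad_samples.mean(axis=0)
    centered = grad_samples - mean
    # SVD of the centered data yields the principal modes directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_modes]

def project(sample, mean, modes):
    """Express one frame's deformation gradients in the mode basis
    and reconstruct it from the low-dimensional coefficients."""
    coeffs = modes @ (sample - mean)
    return mean + modes.T @ coeffs, coeffs
```

Because the modes live in deformation gradient space rather than vertex space, the same low-dimensional coefficients can be applied to a different face mesh, which is what makes expression re-targeting straightforward.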


Vision, Modeling and Visualization | 2015

Data Driven 3D Face Tracking Based on a Facial Deformation Model

Dominik Sibbing; Leif Kobbelt

We introduce a new markerless 3D face tracking approach for 2D video streams captured by a single consumer-grade camera. Our approach is based on tracking 2D features in the video and matching them with the projection of the corresponding feature points of a deformable 3D model. From this we estimate the initial shape and pose of the face. To make the tracking and reconstruction more robust we add a smoothness prior for pose changes as well as for deformations of the faces. Our major contribution lies in the formulation of the smooth deformation prior, which we derive from a large database of previously captured facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length which we use to predict the facial motion based on previous frames. In order to keep the deformation model compact and independent from the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply the recovered deformations to other physiognomies and thereby re-target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples.

Collaboration


Dive into Dominik Sibbing's collaboration.

Top Co-Authors

Hyun Chul Choi

Pohang University of Science and Technology

Darko Pavic

RWTH Aachen University
