
Publications


Featured research published by Marc Stamminger.


International Conference on Computer Graphics and Interactive Techniques | 2013

Real-time 3D reconstruction at scale using voxel hashing

Matthias Nießner; Michael Zollhöfer; Shahram Izadi; Marc Stamminger

Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.
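
The sparse voxel-block hashing idea can be sketched compactly. The Python fragment below is a minimal CPU illustration under assumed parameters (8x8x8 voxel blocks, 4 mm voxels, a Python dict in place of the GPU hash table); it is not the authors' implementation, only the allocate-on-demand and weighted-fusion pattern the abstract describes.

    import numpy as np

    BLOCK_SIZE = 8          # 8x8x8 voxels per block (illustrative choice)
    VOXEL_SIZE = 0.004      # 4 mm voxels (illustrative choice)

    def block_coord(p):
        """Integer coordinate of the voxel block containing world-space point p."""
        return tuple(np.floor(np.asarray(p) / (VOXEL_SIZE * BLOCK_SIZE)).astype(int))

    class VoxelHash:
        """Sparse TSDF storage: a hash map from block coordinate to a dense voxel block."""
        def __init__(self):
            self.blocks = {}  # a Python dict stands in for the GPU hash table

        def get_block(self, p):
            key = block_coord(p)
            if key not in self.blocks:
                # allocate a dense block lazily, only where measurements are observed
                self.blocks[key] = {
                    "tsdf":   np.ones((BLOCK_SIZE,) * 3, dtype=np.float32),
                    "weight": np.zeros((BLOCK_SIZE,) * 3, dtype=np.float32),
                }
            return self.blocks[key]

        def integrate(self, p, sdf, w=1.0):
            """Fuse one truncated signed-distance sample with a running weighted average."""
            blk = self.get_block(p)
            i, j, k = np.floor(np.asarray(p) / VOXEL_SIZE).astype(int) % BLOCK_SIZE
            W = blk["weight"][i, j, k]
            blk["tsdf"][i, j, k] = (blk["tsdf"][i, j, k] * W + sdf * w) / (W + w)
            blk["weight"][i, j, k] = W + w

    grid = VoxelHash()
    grid.integrate([0.1, 0.2, 0.5], sdf=0.01)   # only the block containing this point is allocated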


International Conference on Computer Graphics and Interactive Techniques | 2014

Real-time non-rigid reconstruction using an RGB-D camera

Michael Zollhöfer; Matthias Nießner; Shahram Izadi; Christoph Rehmann; Christopher Zach; Matthew Fisher; Chenglei Wu; Andrew W. Fitzgibbon; Charles T. Loop; Christian Theobalt; Marc Stamminger

We present a combined hardware and software solution for markerless reconstruction of non-rigidly deforming physical objects with arbitrary shape in real-time. Our system uses a single self-contained stereo camera unit built from off-the-shelf components and consumer graphics hardware to generate spatio-temporally coherent 3D models at 30 Hz. A new stereo matching algorithm estimates real-time RGB-D data. We start by scanning a smooth template model of the subject as they move rigidly. This geometric surface prior avoids strong scene assumptions, such as a kinematic human skeleton or a parametric shape model. Next, a novel GPU pipeline performs non-rigid registration of live RGB-D data to the smooth template using an extended non-linear as-rigid-as-possible (ARAP) framework. High-frequency details are fused onto the final mesh using a linear deformation model. The system is an order of magnitude faster than state-of-the-art methods, while matching the quality and robustness of many offline algorithms. We show precise real-time reconstructions of diverse scenes, including: large deformations of users' heads, hands, and upper bodies; fine-scale wrinkles and folds of skin and clothing; and non-rigid interactions performed by users on flexible objects such as toys. We demonstrate how acquired models can be used for many interactive scenarios, including re-texturing, online performance capture and preview, and real-time shape and motion re-targeting.
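
A central ingredient named above is the as-rigid-as-possible (ARAP) framework. The sketch below shows, in plain NumPy, the standard ARAP rigidity energy over one-ring edges with per-vertex rotations estimated by SVD; it is a simplified stand-in for the paper's extended, GPU-based formulation, and the vertex arrays and neighbour lists are assumed inputs.

    import numpy as np

    def best_rotation(P, Q):
        """Rotation R minimizing sum ||Q_j - R P_j||^2 (Kabsch, via SVD)."""
        U, _, Vt = np.linalg.svd(Q.T @ P)
        if np.linalg.det(U @ Vt) < 0:      # avoid reflections
            U[:, -1] *= -1
        return U @ Vt

    def arap_energy(verts, deformed, neighbours):
        """Sum of per-edge rigidity residuals over every vertex's one-ring."""
        E = 0.0
        for i, nbrs in enumerate(neighbours):
            P = verts[nbrs] - verts[i]          # rest-pose edge vectors
            Q = deformed[nbrs] - deformed[i]    # deformed edge vectors
            R = best_rotation(P, Q)             # best-fitting local rotation
            E += np.sum((Q - P @ R.T) ** 2)
        return E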


Computer Vision and Pattern Recognition | 2016

Face2Face: Real-Time Face Capture and Reenactment of RGB Videos

Justus Thies; Michael Zollhöfer; Marc Stamminger; Christian Theobalt; Matthias Nießner

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.
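
The tracking step rests on a linear parametric face model fitted with a dense photometric consistency measure. The following sketch only illustrates that structure; the basis matrices and the render callback are hypothetical placeholders, not the Face2Face pipeline.

    import numpy as np

    def face_geometry(mean, id_basis, expr_basis, alpha, delta):
        """Linear parametric face model: mean shape plus identity and expression offsets."""
        return mean + id_basis @ alpha + expr_basis @ delta

    def photometric_energy(frame, render, params):
        """Sum of squared per-pixel colour differences between the synthesized face and the frame."""
        synth, mask = render(params)   # rasterized model image and coverage mask (assumed callback)
        diff = (synth - frame)[mask]
        return float(np.sum(diff ** 2))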


International Conference on Computer Graphics and Interactive Techniques | 2015

Real-time expression transfer for facial reenactment

Justus Thies; Michael Zollhöfer; Matthias Nießner; Levi Valgaerts; Marc Stamminger; Christian Theobalt

We present a method for the real-time transfer of facial expressions from an actor in a source video to an actor in a target video, thus enabling the ad-hoc control of the facial expressions of the target actor. The novelty of our approach lies in the transfer and photorealistic re-rendering of facial deformations and detail into the target video in a way that the newly-synthesized expressions are virtually indistinguishable from a real video. To achieve this, we accurately capture the facial performances of the source and target subjects in real-time using a commodity RGB-D sensor. For each frame, we jointly fit a parametric model for identity, expression, and skin reflectance to the input color and depth data, and also reconstruct the scene lighting. For expression transfer, we compute the difference between the source and target expressions in parameter space, and modify the target parameters to match the source expressions. A major challenge is the convincing re-rendering of the synthesized target face into the corresponding video stream. This requires a careful consideration of the lighting and shading design, which both must correspond to the real-world environment. We demonstrate our method in a live setup, where we modify a video conference feed such that the facial expressions of a different person (e.g., translator) are matched in real-time.
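
The parameter-space transfer described above amounts to adding the source's expression offset to the target's parameters, roughly as in this sketch (the coefficient vectors and their dimension are illustrative assumptions):

    import numpy as np

    def transfer_expression(src_expr, src_neutral, tgt_neutral):
        """Apply the source's expression offset to the target's neutral parameters."""
        delta = src_expr - src_neutral        # how far the source deviates from rest
        return tgt_neutral + delta            # same offset, applied to the target

    src_neutral = np.zeros(76)                           # e.g. 76 expression coefficients
    src_expr = src_neutral.copy(); src_expr[3] = 0.8     # source activates one blendshape
    tgt_params = transfer_expression(src_expr, src_neutral, np.zeros(76))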


International Conference on Computer Graphics and Interactive Techniques | 2014

Real-time shading-based refinement for consumer depth cameras

Chenglei Wu; Michael Zollhöfer; Matthias Nießner; Marc Stamminger; Shahram Izadi; Christian Theobalt

We present the first real-time method for refinement of depth data using shape-from-shading in general uncontrolled scenes. Per frame, our real-time algorithm takes raw noisy depth data and an aligned RGB image as input, and approximates the time-varying incident lighting, which is then used for geometry refinement. This leads to dramatically enhanced depth maps at 30Hz. Our algorithm makes few scene assumptions, handling arbitrary scene objects even under motion. To enable this type of real-time depth map enhancement, we contribute a new highly parallel algorithm that reformulates the inverse rendering optimization problem in prior work, allowing us to estimate lighting and shape in a temporally coherent way at video frame-rates. Our optimization problem is minimized using a new regular grid Gauss-Newton solver implemented fully on the GPU. We demonstrate results showing enhanced depth maps, which are comparable to offline methods but are computed orders of magnitude faster, as well as baseline comparisons with online filtering-based methods. We conclude with applications of our higher quality depth maps for improved real-time surface reconstruction and performance capture.
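
The lighting-estimation step can be sketched as a linear least-squares fit of low-order spherical-harmonic (SH) lighting coefficients to per-pixel normals and intensities under a Lambertian assumption; this is a simplified stand-in for the paper's temporally coherent GPU solver, with illustrative inputs.

    import numpy as np

    def sh_basis(normals):
        """First 9 real spherical-harmonic basis terms at unit normals (constant factors omitted)."""
        x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
        return np.stack([np.ones_like(x), y, z, x,
                         x * y, y * z, 3 * z ** 2 - 1, x * z, x ** 2 - y ** 2], axis=1)

    def estimate_lighting(normals, intensities):
        """Least-squares fit of 9 SH lighting coefficients l in B(n) l ~ I."""
        B = sh_basis(normals)
        l, *_ = np.linalg.lstsq(B, intensities, rcond=None)
        return l

    # geometry refinement then perturbs depth so that the shading B(n(depth)) @ l
    # better matches the observed image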


International Conference on Computer Graphics and Interactive Techniques | 2016

Demo of Face2Face: real-time face capture and reenactment of RGB videos

Justus Thies; Michael Zollhöfer; Marc Stamminger; Christian Theobalt; Matthias Nießner

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.


Computer Animation and Virtual Worlds | 2014

Interactive model-based reconstruction of the human head using an RGB-D sensor

Michael Zollhöfer; Justus Thies; Matteo Colaianni; Marc Stamminger; Günther Greiner

We present a novel method for the interactive markerless reconstruction of human heads using a single commodity RGB-D sensor. Our entire reconstruction pipeline is implemented on the graphics processing unit and allows us to obtain high-quality reconstructions of the human head using an interactive and intuitive reconstruction paradigm. The core of our method is a fast graphics processing unit-based nonlinear quasi-Newton solver that allows us to leverage all information of the RGB-D stream and fit a statistical head model to the observations at interactive frame rates. By jointly solving for shape, albedo and illumination parameters, we are able to reconstruct high-quality models including illumination-corrected textures. All obtained reconstructions have a common topology and can be directly used as assets for games, films and various virtual reality applications. We show motion retargeting, retexturing and relighting examples. The accuracy of the presented algorithm is evaluated by a comparison against ground truth data.
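
With only shape coefficients, fitting a statistical (PCA) head model to observed 3D points reduces to regularized linear least squares, sketched below; the full joint shape, albedo and illumination fit described above is nonlinear, which is what the authors' GPU quasi-Newton solver addresses. The model matrices and observations here are placeholders.

    import numpy as np

    def fit_shape(mean, basis, sigma, observed, lam=1e-3):
        """Solve min_a ||(mean + basis @ a) - observed||^2 + lam * ||a / sigma||^2 in closed form."""
        A = basis.T @ basis + lam * np.diag(1.0 / sigma ** 2)   # regularized normal equations
        b = basis.T @ (observed - mean)
        return np.linalg.solve(A, b)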


ACM Journal on Computing and Cultural Heritage | 2016

Low-Cost Real-Time 3D Reconstruction of Large-Scale Excavation Sites

Michael Zollhöfer; Christian Siegl; Mark Vetter; Boris Dreyer; Marc Stamminger; Serdar Aybek; Frank Bauer

The 3D reconstruction of archeological sites is still an expensive and time-consuming task. In this article, we present a novel interactive, low-cost approach to 3D reconstruction and compare it to a standard photogrammetry pipeline based on high-resolution photographs. Our novel real-time reconstruction pipeline is based on a low-cost, consumer-level hand-held RGB-D sensor. While scanning, the user sees a live view of the current reconstruction, allowing the user to intervene immediately and adapt the sensor path to the current scanning result. After a raw reconstruction has been acquired, the digital model is interactively warped to fit a geo-referenced map using a handle-based deformation paradigm. Even large sites can be scanned within a few minutes, and no costly postprocessing is required. The quality of the acquired digitized raw 3D models is evaluated by comparing them to actual imagery, a geo-referenced map of the excavation site, and a photogrammetry-based reconstruction. We made extensive tests under real-world conditions on an archeological excavation in Metropolis, Ionia, Turkey. We found that the reconstruction quality of our approach is comparable to that of photogrammetry. Yet, both approaches have advantages and shortcomings in specific setups, which we analyze and discuss.
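
A handle-based warp of the kind described above can be sketched as follows: the user drags a few handle vertices onto their geo-referenced map positions, and the remaining vertices follow a smooth blend of the handle displacements. The Gaussian RBF weighting in this sketch is an illustrative stand-in for the deformation model actually used.

    import numpy as np

    def warp(vertices, handles, targets, radius=5.0):
        """Displace every vertex by an RBF-weighted blend of the handle displacements."""
        disp = targets - handles                                   # per-handle displacement (H, 3)
        d2 = np.sum((vertices[:, None, :] - handles[None, :, :]) ** 2, axis=-1)
        w = np.exp(-d2 / (2.0 * radius ** 2))                      # Gaussian weights (V, H)
        w /= np.maximum(w.sum(axis=1, keepdims=True), 1e-8)        # normalize per vertex
        return vertices + w @ disp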


Eurographics | 2014

Low-cost real-time 3D reconstruction of large-scale excavation sites using an RGB-D camera

Michael Zollhöfer; Christian Siegl; Bert Riffelmacher; Mark Vetter; Boris Dreyer; Marc Stamminger; Frank Bauer

In this paper, we present an end-to-end pipeline for the online reconstruction of large-scale outdoor environments and tightly confined indoor spaces using a low-cost consumer-level hand-held RGB-D sensor. While scanning, the user sees a live view of the current reconstruction, allowing them to intervene immediately and adapt the sensor path to the current scanning result. After a raw reconstruction has been acquired, we interactively warp the digital model to fit a geo-referenced map using a handle-based deformation paradigm. Even large sites can be scanned within a few minutes, and no costly postprocessing is required.

We developed our prototype in cooperation with researchers from the field of ancient history and geography and extensively tested the system under real-world conditions on an archeological excavation in Metropolis, Ionia, Turkey. The quality of the acquired digitized raw 3D models is evaluated by comparing them to actual imagery and a geo-referenced map of the excavation site. Our reconstructions can be used to take virtual measurements that are often required in research and are the basis for a digital preservation of our cultural heritage. In addition, digital models are a helpful tool for teaching as well as for edutainment purposes, making such information accessible to the general public.


International Symposium on Parallel and Distributed Processing and Applications | 2013

Visualization and deformation techniques for entertainment and training in cultural heritage

Michael Zollhöfer; J. Süßmuth; Frank Bauer; Marc Stamminger; M. Boss

We think that state-of-the-art techniques in computer graphics and geometry processing can be leveraged in training and entertainment to make the topic of cultural heritage more accessible to a wider audience. In cooperation with the “Antikensammlung” in Erlangen, we produced five unique applications, all centered on Emperor Augustus, to visualize different scientific aspects for a major event aimed at the general public. The applied methods include blending, geometric fitting, animation transfer and visualization techniques. Besides being entertaining, some of the presented applications are the foundation for more substantial research. (For our results video, please visit http://lgdv.cs.fau.de/uploads/video/ISPA2013.mov).

Collaboration


Dive into Marc Stamminger's collaborations.

Top Co-Authors

Frank Bauer
University of Erlangen-Nuremberg

Boris Dreyer
University of Erlangen-Nuremberg

Christian Siegl
University of Erlangen-Nuremberg

Mark Vetter
Karlsruhe University of Applied Sciences

Michael Bauer
University of Erlangen-Nuremberg