Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Matthias Nießner is active.

Publication


Featured research published by Matthias Nießner.


computer vision and pattern recognition | 2016

Face2Face: Real-Time Face Capture and Reenactment of RGB Videos

Justus Thies; Michael Zollhöfer; Marc Stamminger; Christian Theobalt; Matthias Nießner

We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination. We demonstrate our method in a live setup, where YouTube videos are reenacted in real time.
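
As a rough illustration of the dense photometric consistency measure mentioned in the abstract, the sketch below scores a candidate set of face-model parameters against a video frame and refines them by gradient descent. The renderer `render_face` and the finite-difference optimizer are assumptions made for illustration only; the paper solves this energy with a fast analytic, GPU-parallel least-squares solver.

```python
# Minimal sketch of a dense photometric consistency energy for expression
# tracking. NOT the authors' implementation: `render_face` is a hypothetical
# renderer mapping model parameters (identity, expression, pose, lighting)
# to an RGB image plus a face-region mask.
import numpy as np

def photometric_energy(params, frame, render_face):
    """Sum of squared per-pixel color residuals between the rendered
    face model and the observed frame (both HxWx3 float arrays)."""
    synth, mask = render_face(params)            # rendered image + HxW mask
    residual = (synth - frame) * mask[..., None]
    return float(np.sum(residual ** 2))

def track_expression(params, frame, render_face, lr=1e-3, iters=50, eps=1e-4):
    """Crude finite-difference gradient descent on the energy; the paper
    uses an analytic, data-parallel least-squares solver instead."""
    p = params.astype(np.float64).copy()
    for _ in range(iters):
        e0 = photometric_energy(p, frame, render_face)
        grad = np.zeros_like(p)
        for i in range(p.size):
            q = p.copy()
            q[i] += eps
            grad[i] = (photometric_energy(q, frame, render_face) - e0) / eps
        p -= lr * grad
    return p
```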


computer vision and pattern recognition | 2016

Volumetric and Multi-view CNNs for Object Classification on 3D Data

Charles Ruizhongtai Qi; Hao Su; Matthias Nießner; Angela Dai; Mengyuan Yan; Leonidas J. Guibas

3D shape models are becoming widely available and easier to capture, making available 3D information crucial for progress in object classification. Current state-of-the-art methods rely on CNNs to address this problem. Recently, we have witnessed two types of CNNs being developed: CNNs based upon volumetric representations versus CNNs based upon multi-view representations. Empirical results from these two types of CNNs exhibit a large gap, indicating that existing volumetric CNN architectures and approaches are unable to fully exploit the power of 3D representations. In this paper, we aim to improve both volumetric CNNs and multi-view CNNs according to extensive analysis of existing approaches. To this end, we introduce two distinct network architectures of volumetric CNNs. In addition, we examine multi-view CNNs, where we introduce multi-resolution filtering in 3D. Overall, we are able to outperform current state-of-the-art methods for both volumetric CNNs and multi-view CNNs. We provide extensive experiments designed to evaluate underlying design choices, thus providing a better understanding of the space of methods available for object classification on 3D data.
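
To make the volumetric-CNN idea concrete, here is a minimal PyTorch sketch of a 3D convolutional classifier over occupancy grids. The layer sizes and the 30³ grid resolution are assumptions for illustration; the paper's two actual architectures (with auxiliary subvolume supervision and anisotropic probing kernels) are considerably more involved.

```python
# Minimal volumetric CNN for 3D object classification over occupancy grids.
import torch
import torch.nn as nn

class TinyVoxNet(nn.Module):
    def __init__(self, num_classes=40, grid=30):   # e.g. 30^3 voxel grids
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        # Infer the flattened feature size with a dry run.
        with torch.no_grad():
            n = self.features(torch.zeros(1, 1, grid, grid, grid)).numel()
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(n, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, vox):                         # vox: (B, 1, D, H, W)
        return self.classifier(self.features(vox))

logits = TinyVoxNet()(torch.rand(2, 1, 30, 30, 30))  # -> (2, 40) class scores
```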


computer vision and pattern recognition | 2017

ScanNet: Richly-Annotated 3D Reconstructions of Indoor Scenes

Angela Dai; Angel X. Chang; Manolis Savva; Maciej Halber; Thomas A. Funkhouser; Matthias Nießner

A key requirement for leveraging supervised deep learning methods is the availability of large, labeled datasets. Unfortunately, in the context of RGB-D scene understanding, very little data is available – current datasets cover a small range of scene views and have limited semantic annotations. To address this issue, we introduce ScanNet, an RGB-D video dataset containing 2.5M views in 1513 scenes annotated with 3D camera poses, surface reconstructions, and semantic segmentations. To collect this data, we designed an easy-to-use and scalable RGB-D capture system that includes automated surface reconstruction and crowdsourced semantic annotation. We show that using this data helps achieve state-of-the-art performance on several 3D scene understanding tasks, including 3D object classification, semantic voxel labeling, and CAD model retrieval.
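
As a hypothetical sketch of how such annotated reconstructions feed the semantic voxel labeling task, the snippet below rasterizes labeled surface points onto a voxel grid with a majority vote per cell. The data layout and 5 cm voxel size are assumptions; ScanNet's real formats and tooling live in its own repository.

```python
# Turn labeled scan geometry into per-voxel semantic labels (illustrative only).
import numpy as np

def voxelize_labeled_points(points, labels, voxel_size=0.05):
    """Map labeled surface points (points: (N,3) float, labels: (N,) int)
    onto a voxel grid, keeping the majority label per occupied cell."""
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(np.int64)
    grid = {}
    for cell, lab in zip(map(tuple, idx), labels):
        grid.setdefault(cell, []).append(int(lab))
    return {cell: max(set(ls), key=ls.count) for cell, ls in grid.items()}

pts = np.random.rand(1000, 3)                  # stand-in for scan geometry
labs = np.random.randint(0, 20, size=1000)     # stand-in semantic labels
voxels = voxelize_labeled_points(pts, labs)    # {(i, j, k): class_id}
```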


computer vision and pattern recognition | 2015

Exploiting uncertainty in regression forests for accurate camera relocalization

Julien P. C. Valentin; Matthias Nießner; Jamie Shotton; Andrew W. Fitzgibbon; Shahram Izadi; Philip H. S. Torr

Recent advances in camera relocalization use predictions from a regression forest to guide the camera pose optimization procedure. In these methods, each tree associates one pixel with a point in the scene's 3D world coordinate frame. In previous work, these predictions were point estimates and the subsequent camera pose optimization implicitly assumed an isotropic distribution of these estimates. In this paper, we train a regression forest to predict mixtures of anisotropic 3D Gaussians and show how the predicted uncertainties can be taken into account for continuous pose optimization. Experiments show that our proposed method is able to relocalize up to 40% more frames than the state of the art.
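
The covariance-weighted objective the abstract describes can be written as a sum of Mahalanobis distances: each pixel's predicted world-space Gaussian (mean, covariance) scores the candidate camera pose. The sketch below illustrates that energy only, with fabricated data; the paper's full pipeline adds Gaussian mixtures per pixel and a robust RANSAC-style optimizer.

```python
# Anisotropic (covariance-weighted) pose energy for camera relocalization.
import numpy as np

def pose_energy(R, t, cam_points, mus, sigmas):
    """R: 3x3 rotation, t: (3,) translation, cam_points: (N,3) back-projected
    depth points in camera space, mus/sigmas: per-point Gaussian predictions
    (means (N,3), covariances (N,3,3)) in world coordinates."""
    world = cam_points @ R.T + t               # transform into world frame
    total = 0.0
    for x, mu, sig in zip(world, mus, sigmas):
        d = x - mu
        total += d @ np.linalg.solve(sig, d)   # anisotropic Mahalanobis term
    return total

# Tiny synthetic example: score the identity pose against fabricated data.
N = 5
pts = np.random.rand(N, 3)
covs = np.stack([np.eye(3) * 0.01] * N)
print(pose_energy(np.eye(3), np.zeros(3), pts, pts + 0.01, covs))
```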


computer vision and pattern recognition | 2017

Shape Completion Using 3D-Encoder-Predictor CNNs and Shape Synthesis

Angela Dai; Charles Ruizhongtai Qi; Matthias Nießner

We introduce a data-driven approach to complete partial 3D shapes through a combination of volumetric deep neural networks and 3D shape synthesis. From a partially-scanned input shape, our method first infers a low-resolution – but complete – output. To this end, we introduce a 3D-Encoder-Predictor Network (3D-EPN) which is composed of 3D convolutional layers. The network is trained to predict and fill in missing data, and operates on an implicit surface representation that encodes both known and unknown space. This allows us to predict global structure in unknown areas at high accuracy. We then correlate these intermediary results with 3D geometry from a shape database at test time. In a final pass, we propose a patch-based 3D shape synthesis method that imposes the 3D geometry from these retrieved shapes as constraints on the coarsely-completed mesh. This synthesis process enables us to reconstruct fine-scale detail and generate high-resolution output while respecting the global mesh structure obtained by the 3D-EPN. Although our 3D-EPN outperforms state-of-the-art completion methods, the main contribution of our work lies in the combination of a data-driven shape predictor and analytic 3D shape synthesis. In our results, we show extensive evaluations on a newly-introduced shape completion benchmark for both real-world and synthetic data.
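
The encoder-predictor structure can be illustrated with a small PyTorch sketch: a 3D convolutional encoder compresses a partial distance-field grid, and a transposed-convolution predictor emits a complete low-resolution volume. The layer sizes, the 32³ resolution, and the two-channel (distance, known-mask) input are assumptions loosely following the abstract; the paper's 3D-EPN additionally conditions on a shape-class prior.

```python
# Illustrative 3D encoder-predictor network for volumetric shape completion.
import torch
import torch.nn as nn

class Tiny3DEPN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        self.predictor = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),     # 8 -> 32
        )

    def forward(self, grid):  # grid: (B, 2, 32, 32, 32) = (distance, known mask)
        return self.predictor(self.encoder(grid))

partial = torch.rand(1, 2, 32, 32, 32)
complete = Tiny3DEPN()(partial)   # (1, 1, 32, 32, 32) predicted distance field
```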


computer vision and pattern recognition | 2017

3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions

Andy Zeng; Shuran Song; Matthias Nießner; Matthew Fisher; Jianxiong Xiao; Thomas A. Funkhouser

Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-the-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also generalize to different tasks and spatial scales (e.g. instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu.
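
In the spirit of learning a local volumetric patch descriptor, here is a hedged metric-learning sketch: a siamese 3D conv net maps TSDF patches to unit-norm descriptors, trained with a contrastive loss so that corresponding patches (which 3DMatch mines from RGB-D reconstructions) land close and non-matches land far apart. The architecture, patch size, and margin are assumptions; the project page above hosts the real code.

```python
# Contrastive metric learning for a local volumetric patch descriptor.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(dim),
        )

    def forward(self, patch):            # patch: (B, 1, 16, 16, 16) TSDF
        return F.normalize(self.net(patch), dim=1)

def contrastive_loss(d1, d2, match, margin=1.0):
    """match = 1 for corresponding patch pairs, 0 for non-matches."""
    dist = (d1 - d2).norm(dim=1)
    return (match * dist**2 +
            (1 - match) * F.relu(margin - dist)**2).mean()

net = PatchDescriptor()
a, b = torch.rand(4, 1, 16, 16, 16), torch.rand(4, 1, 16, 16, 16)
loss = contrastive_loss(net(a), net(b), match=torch.tensor([1., 0., 1., 0.]))
```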


international conference on robotics and automation | 2015

Incremental dense semantic stereo fusion for large-scale semantic scene reconstruction

Vibhav Vineet; Ondrej Miksik; Morten Lidegaard; Matthias Nießner; Stuart Golodetz; Victor Adrian Prisacariu; Olaf Kähler; David W. Murray; Shahram Izadi; Patrick Pérez; Philip H. S. Torr


international conference on 3d vision | 2017

Matterport3D: Learning from RGB-D Data in Indoor Environments

Angel Chang; Angela Dai; Thomas A. Funkhouser; Maciej Halber; Matthias Nießner; Manolis Savva; Shuran Song; Andy Zeng; Yinda Zhang


international conference on computer vision | 2017

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Robert Maier; Kihwan Kim; Daniel Cremers; Jan Kautz; Matthias Nießner


international conference on computer vision | 2017

A Lightweight Approach for On-the-Fly Reflectance Estimation

Kihwan Kim; Jinwei Gu; Stephen Tyree; Pavlo Molchanov; Matthias Nießner; Jan Kautz

Collaboration


Dive into Matthias Nießner's collaborations.

Top Co-Authors

Jan Kautz (University College London)