Publications


Featured research published by Christian Richardt.


International Conference on Computer Graphics and Interactive Techniques | 2018

Deep Video Portraits

Hyeongwoo Kim; Pablo Garrido; Ayush Tewari; Weipeng Xu; Justus Thies; Matthias Niessner; Patrick Pérez; Christian Richardt; Michael Zollhöfer; Christian Theobalt

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network - thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.
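The rendering-to-video translation described in this abstract follows a conditional adversarial training pattern: a generator turns synthetic face-model renderings into photo-realistic frames while a discriminator judges (rendering, frame) pairs. The sketch below illustrates only that general pattern in PyTorch; the layer sizes, loss weights and single-frame conditioning are illustrative assumptions, whereas the paper's space-time architecture presumably conditions on a temporal window of renderings rather than a single frame.

```python
# Minimal PyTorch sketch of conditional adversarial rendering-to-video
# translation. Network sizes, loss weights and tensor shapes are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Translates a synthetic face-model rendering into a video frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, rendering):
        return self.net(rendering)

class Discriminator(nn.Module):
    """Scores how plausible a frame is, conditioned on its rendering."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, rendering, frame):
        return self.net(torch.cat([rendering, frame], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
adv, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

rendering = torch.randn(1, 3, 64, 64)   # synthetic face-model rendering (dummy)
real_frame = torch.randn(1, 3, 64, 64)  # corresponding real video frame (dummy)

# Discriminator step: distinguish real (rendering, frame) pairs from generated ones.
fake_frame = G(rendering).detach()
d_real, d_fake = D(rendering, real_frame), D(rendering, fake_frame)
loss_d = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the real frame.
fake_frame = G(rendering)
d_fake = D(rendering, fake_frame)
loss_g = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake_frame, real_frame)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```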


IEEE Transactions on Visualization and Computer Graphics | 2018

Parallax360: Stereoscopic 360° Scene Representation for Head-Motion Parallax

Bicheng Luo; Feng Xu; Christian Richardt; Jun-Hai Yong

We propose a novel 360° scene representation for converting real scenes into stereoscopic 3D virtual reality content with head-motion parallax. Our image-based scene representation enables efficient synthesis of novel views with six degrees-of-freedom (6-DoF) by fusing motion fields at two scales: (1) disparity motion fields carry implicit depth information and are robustly estimated from multiple laterally displaced auxiliary viewpoints, and (2) pairwise motion fields enable real-time flow-based blending, which improves the visual fidelity of results by minimizing ghosting and view transition artifacts. Based on our scene representation, we present an end-to-end system that captures real scenes with a robotic camera arm, processes the recorded data, and finally renders the scene in a head-mounted display in real time (more than 40 Hz). Our approach is the first to support head-motion parallax when viewing real 360° scenes. We demonstrate compelling results that illustrate the enhanced visual experience, and hence sense of immersion, achieved with our approach compared to widely used stereoscopic panoramas.
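The flow-based blending mentioned in (2) can be illustrated compactly: warp two neighbouring captured views toward the desired in-between viewpoint using a scaled pairwise motion field, then blend them according to proximity. The snippet below is an interpretation under simplifying assumptions (a single flow field, linear motion, backward warping), not the paper's implementation.

```python
# Hedged sketch of flow-based blending between two neighbouring captured views.
import numpy as np
import cv2

def warp(image, flow):
    """Backward-warp `image` by a dense flow field of shape (H, W, 2)."""
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

def blend_views(left, right, flow_left_to_right, alpha):
    """Synthesise an in-between view at fraction `alpha` (0 = left, 1 = right)."""
    # Pixels move by alpha * flow from the left view to the in-between view,
    # so sample the left image a fraction alpha of the way back along the flow,
    # and the right image the remaining fraction further along it.
    warped_left = warp(left, -alpha * flow_left_to_right)
    warped_right = warp(right, (1.0 - alpha) * flow_left_to_right)
    # Blend weighted by proximity to each input view to reduce ghosting.
    return (1.0 - alpha) * warped_left + alpha * warped_right

# Dummy usage: two tiny random "views" and a zero flow field.
left = np.random.rand(8, 8, 3).astype(np.float32)
right = np.random.rand(8, 8, 3).astype(np.float32)
flow = np.zeros((8, 8, 2), dtype=np.float32)
middle = blend_views(left, right, flow, alpha=0.5)
```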


International Conference on Computer Graphics and Interactive Techniques | 2017

Video for virtual reality

Christian Richardt; James Tompkin; Jordan Halsey; Aaron Hertzmann; Jonathan Starck; Oliver Wang

Video can capture the dynamic appearance of the real world in a way no other technology does; virtual reality technology, on the other hand, enables the display of dynamic visual content with unparalleled realism and immersion. The fusion of these two technologies, video for virtual reality (VR), promises to enable many exciting photo-realistic experiences. Over half a day, this course will provide an overview of three aspects of this exciting medium: the technical foundations, current systems in practice, and the potential for future systems of VR video. In the first section, we will explore the geometric and optical problems underpinning VR video. Then, we will introduce both 360° video and stereoscopic video, covering how 360° video is captured, analyzed, and stitched, as well as the mathematics behind how stereo 360° video can be captured. This background material provides the prerequisites for understanding current systems in use. In the middle hour of the course, we explain how state-of-the-art stereo 360° video is produced from camera systems and computational processing. Then, we will consider the art of storytelling in VR, and how new tools for editing VR video can aid in the craft of this art production. Finally, this section provides an industry perspective covering current production and post-production choices and practice, including CG integration. The final part of our course focuses on the next generation of video for VR, where we move to 6 degrees-of-freedom (6DoF) experiences. We introduce the basics and challenges behind light field cameras, processing and displays, and see how they can enable 6DoF experiences. This will be followed by another industry perspective on how light field camera arrays have been used to create cutting-edge experiences integrating volumetric live-action elements. To conclude the course, we will see how far we still must go toward the ideal system, in hopes of inspiring attendees to push the boundary farther to reach it. We hope this course is useful to a broad audience, at SIGGRAPH and beyond, as we cover the academic, artistic, and production sides of VR video.
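As a taste of the "mathematics behind how stereo 360° video can be captured", the sketch below writes down the widely used omnidirectional-stereo (ODS) ray model, in which each panorama column is rendered along a ray tangent to a small viewing circle. The coordinate frame and sign conventions are assumptions, not course material.

```python
# Hedged sketch of the standard omnidirectional-stereo (ODS) ray model that
# underlies stereo 360° capture: every panorama column is rendered from a ray
# tangent to a viewing circle whose radius is half the interpupillary distance.
import numpy as np

def ods_ray(azimuth, elevation, eye, ipd=0.064):
    """Return (origin, direction) of the ODS ray for one panorama pixel.
    `eye` is +1 for one eye and -1 for the other (sign convention assumed)."""
    r = 0.5 * ipd
    # Viewing direction on the unit sphere (y is up).
    d = np.array([np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation),
                  np.cos(elevation) * np.cos(azimuth)])
    # The ray origin lies on the viewing circle, perpendicular to the viewing
    # direction in the horizontal plane, so the ray grazes the circle tangentially.
    o = eye * r * np.array([np.cos(azimuth), 0.0, -np.sin(azimuth)])
    return o, d / np.linalg.norm(d)

# Example: ray for one eye at azimuth 90 degrees, elevation 0.
print(ods_ray(np.pi / 2, 0.0, eye=+1))
```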


SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia | 2018

Cutting-edge VR/AR display technologies (gaze-, accommodation-, motion-aware and HDR-enabled)

George Alex Koulieris; Kaan Akşit; Christian Richardt; Rafal Mantiuk; Katerina Mania

Near-eye (VR/AR) displays suffer from technical, interaction, and visual-quality issues that hinder their commercial potential. This tutorial will deliver an overview of cutting-edge VR/AR display technologies, focusing on the technical, interaction, and perceptual issues that, if solved, will drive the next generation of display technologies. The most recent advancements in near-eye displays will be presented, covering (i) correct accommodation cues, (ii) near-eye varifocal AR, (iii) high-dynamic-range rendition, (iv) gaze-aware capabilities, either predictive or based on eye tracking, and (v) motion awareness. Future avenues for academic and industrial research related to the next generation of AR/VR display technologies will be analyzed.


International Conference on Computer Graphics and Interactive Techniques | 2018

MegaParallax: 360° panoramas with motion parallax

Tobias Bertel; Christian Richardt

Capturing 360° panoramas has become straightforward now that this functionality is implemented on every phone. However, it remains difficult to capture immersive 360° panoramas with motion parallax, which provide different views for different viewpoints. Alternatives such as omnidirectional stereo panoramas provide different views for each eye (binocular disparity), but do not support motion parallax, while Casual 3D Photography [Hedman et al. 2017] reconstructs textured 3D geometry that provides motion parallax but suffers from reconstruction artefacts. We propose a new image-based approach for capturing and rendering high-quality 360° panoramas with motion parallax. We use novel-view synthesis with flow-based blending to turn a standard monoscopic video into an enriched 360° panoramic experience that can be explored in real time. Our approach makes it possible for casual consumers to capture and view high-quality 360° panoramas with motion parallax.
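A renderer built on such a representation must decide, for every desired viewpoint, which two captured frames along the circular camera path to blend and with what weight before flow-based blending is applied. The snippet below sketches a simplified, per-view version of that selection; the actual method works per pixel, and the frame spacing, names and conventions here are assumptions.

```python
# Simplified, per-view sketch of choosing two neighbouring captured frames on
# a circular camera path and the blend weight for a desired viewing angle.
import numpy as np

def select_views(frame_angles, query_angle):
    """Return the indices of the two neighbouring frames and the blend weight
    (0 = first frame, 1 = second frame) for a query angle, all in radians."""
    diff = (np.asarray(frame_angles) - query_angle + np.pi) % (2.0 * np.pi) - np.pi
    left = int(np.argmax(np.where(diff <= 0, diff, -np.inf)))   # nearest below
    right = int(np.argmin(np.where(diff >= 0, diff, np.inf)))   # nearest above
    span = diff[right] - diff[left]
    alpha = 0.0 if span == 0 else -diff[left] / span
    return left, right, float(alpha)

# Example: 72 frames evenly spaced on the circle; a query at 52 degrees falls
# 40% of the way between frame 10 (50 degrees) and frame 11 (55 degrees).
angles = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
print(select_views(angles, np.deg2rad(52.0)))   # -> (10, 11, 0.4)
```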


International Symposium on Mixed and Augmented Reality | 2017

Live User-Guided Intrinsic Video for Static Scenes

Abhimitra Meka; Gereon Fox; Michael Zollhöfer; Christian Richardt; Christian Theobalt

We present a novel real-time approach for user-guided intrinsic decomposition of static scenes captured by an RGB-D sensor. In the first step, we acquire a three-dimensional representation of the scene using a dense volumetric reconstruction framework. The obtained reconstruction serves as a proxy to densely fuse reflectance estimates and to store user-provided constraints in three-dimensional space. User constraints, in the form of constant shading and reflectance strokes, can be placed directly on the real-world geometry using an intuitive touch-based interaction metaphor, or using interactive mouse strokes. Fusing the decomposition results and constraints in three-dimensional space allows for robust propagation of this information to novel views by re-projection. We leverage this information to improve on the decomposition quality of existing intrinsic video decomposition techniques by further constraining the ill-posed decomposition problem. In addition to improved decomposition quality, we show a variety of live augmented reality applications such as recoloring of objects, relighting of scenes and editing of material appearance.
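The quantity being decomposed here is the classic intrinsic image model, image = reflectance × shading. Purely as an illustration, and not the paper's real-time volumetric solver, the toy 1-D example below shows how "constant shading" and "constant reflectance" strokes can be expressed as linear constraints in the log domain; the signal, stroke placements and weights are all made up.

```python
# Tiny 1-D illustration of user-constrained intrinsic decomposition:
# I = R * S, i.e. log I = r + s in the log domain. Solve for r by least squares.
import numpy as np

log_I = np.log(np.array([0.90, 0.72, 0.70, 0.24, 0.23, 0.69, 0.90]))
n = len(log_I)
A_rows, b = [], []

def add(coeffs, target, weight=1.0):
    """Append one weighted linear equation over the unknown log reflectance r."""
    row = np.zeros(n)
    for idx, c in coeffs:
        row[idx] = c
    A_rows.append(weight * row)
    b.append(weight * target)

# Weak default prior: neighbouring pixels share reflectance, so intensity
# changes are attributed to shading unless a constraint says otherwise.
for i in range(n - 1):
    add([(i, 1.0), (i + 1, -1.0)], 0.0, weight=1.0)

# User "constant reflectance" stroke over pixels 1..5 (same material),
# so the intensity drop inside the stroke must be explained by shading.
for i in range(1, 5):
    add([(i, 1.0), (i + 1, -1.0)], 0.0, weight=10.0)

# User "constant shading" stroke over pixels 0..2 (evenly lit region):
# s_i = s_j  implies  r_i - r_j = log I_i - log I_j.
for i in range(0, 2):
    add([(i, 1.0), (i + 1, -1.0)], log_I[i] - log_I[i + 1], weight=10.0)

# Fix the global scale; the reflectance/shading split is only defined up to it.
add([(0, 1.0)], log_I[0], weight=1.0)

r, *_ = np.linalg.lstsq(np.array(A_rows), np.array(b), rcond=None)
reflectance, shading = np.exp(r), np.exp(log_I - r)
print(np.round(reflectance, 2), np.round(shading, 2))
```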


Computer Vision and Pattern Recognition | 2018

LIME: Live Intrinsic Material Estimation

Abhimitra Meka; Maxim Maximov; Michael Zollhöfer; Avishek Chatterjee; Hans-Peter Seidel; Christian Richardt; Christian Theobalt


Neural Information Processing Systems | 2018

Unsupervised Attention-guided Image-to-Image Translation

Youssef Alami Mejjati; Christian Richardt; James Tompkin; Darren Cosker; Kwang In Kim


Computer Vision and Pattern Recognition | 2018

InverseFaceNet: Deep Monocular Inverse Face Rendering

Hyeongwoo Kim; Michael Zollhöfer; Ayush Tewari; Justus Thies; Christian Richardt; Christian Theobalt
