Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Weipeng Xu is active.

Publication


Featured research published by Weipeng Xu.


International Conference on Computer Graphics and Interactive Techniques | 2017

VNect: real-time 3D human pose estimation with a single RGB camera

Dushyant Mehta; Srinath Sridhar; Oleksandr Sotnychenko; Helge Rhodin; Mohammad Shafiei; Hans-Peter Seidel; Weipeng Xu; Dan Casas; Christian Theobalt

We present the first real-time method to capture the full global 3D skeletal pose of a human in a stable, temporally consistent manner using a single RGB camera. Our method combines a new convolutional neural network (CNN) based pose regressor with kinematic skeleton fitting. Our novel fully-convolutional pose formulation regresses 2D and 3D joint positions jointly in real time and does not require tightly cropped input frames. A real-time kinematic skeleton fitting method uses the CNN output to yield temporally stable 3D global pose reconstructions on the basis of a coherent kinematic skeleton. This makes our approach the first monocular RGB method usable in real-time applications such as 3D character control; thus far, the only monocular methods for such applications employed specialized RGB-D cameras. Our method's accuracy is quantitatively on par with the best offline 3D monocular RGB pose estimation methods. Our results are qualitatively comparable to, and sometimes better than, results from monocular RGB-D approaches such as the Kinect. However, we show that our approach is more broadly applicable than RGB-D solutions, e.g., it works for outdoor scenes, community videos, and low-quality commodity RGB cameras.
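
As a rough illustration of feeding CNN joint predictions into kinematic skeleton fitting, the sketch below fits the joint angles of a toy two-bone chain to simulated CNN 3D joint predictions with a temporal-smoothness term. The forward-kinematics model, bone lengths, and weights are hypothetical placeholders, not the VNect implementation.

```python
# Hedged sketch: least-squares skeleton fitting against CNN joint predictions.
import numpy as np
from scipy.optimize import least_squares

BONE_LENGTHS = np.array([0.30, 0.25])  # toy two-segment chain (assumed values)

def forward_kinematics(angles, root=np.zeros(3)):
    """3D positions of the two end joints of the toy planar chain."""
    a1, a2 = angles
    elbow = root + BONE_LENGTHS[0] * np.array([np.cos(a1), np.sin(a1), 0.0])
    wrist = elbow + BONE_LENGTHS[1] * np.array([np.cos(a1 + a2), np.sin(a1 + a2), 0.0])
    return np.stack([elbow, wrist])

def residuals(angles, cnn_joints_3d, prev_angles, w_smooth=0.1):
    """Data term pulls the skeleton toward the CNN's 3D joints; the smoothness
    term penalizes deviation from the previous frame, for temporal stability."""
    data = (forward_kinematics(angles) - cnn_joints_3d).ravel()
    smooth = w_smooth * (angles - prev_angles)
    return np.concatenate([data, smooth])

# Per frame, cnn_joints_3d would come from the pose-regression CNN.
prev_angles = np.array([0.50, 0.30])
cnn_joints_3d = forward_kinematics(np.array([0.55, 0.35])) + 0.01  # fake noisy prediction
fit = least_squares(residuals, x0=prev_angles, args=(cnn_joints_3d, prev_angles))
print("fitted joint angles:", fit.x)
```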


International Conference on Computer Vision | 2015

Deformable 3D Fusion: From Partial Dynamic 3D Observations to Complete 4D Models

Weipeng Xu; Mathieu Salzmann; Yongtian Wang; Yue Liu

Capturing the 3D motion of dynamic, non-rigid objects has attracted significant attention in computer vision. Existing methods typically require either complete 3D volumetric observations, or a shape template. In this paper, we introduce a template-less 4D reconstruction method that incrementally fuses highly incomplete 3D observations of a deforming object, and generates a complete, temporally-coherent shape representation of the object. To this end, we design an online algorithm that alternates between registering new observations to the current model estimate and updating the model. We demonstrate the effectiveness of our approach at reconstructing non-rigidly moving objects from highly incomplete measurements on both sequences of partial 3D point clouds and Kinect videos.
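
The alternating register-then-update loop can be pictured with the toy sketch below; it uses a rigid ICP step and a voxel-grid merge as stand-ins for the paper's non-rigid registration and model update, so it conveys only the control flow, not the actual method.

```python
# Illustrative loop only: alternate registration of new partial observations
# with a model update (rigid ICP + voxel merge as simplified stand-ins).
import numpy as np
from scipy.spatial import cKDTree

def rigid_icp_step(src, dst_tree, dst_pts):
    """One ICP iteration: nearest neighbors, then the best-fit rigid transform."""
    _, idx = dst_tree.query(src)
    corr = dst_pts[idx]
    src_c, corr_c = src - src.mean(0), corr - corr.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ corr_c)
    R = (U @ Vt).T                      # Kabsch alignment (reflection check omitted)
    t = corr.mean(0) - R @ src.mean(0)
    return src @ R.T + t

def fuse(model, registered_obs, voxel=0.01):
    """Model update: merge registered points and de-duplicate on a coarse grid."""
    merged = np.vstack([model, registered_obs])
    _, keep = np.unique(np.round(merged / voxel).astype(int), axis=0, return_index=True)
    return merged[keep]

model = np.random.rand(500, 3)                              # current model estimate
for obs in (np.random.rand(200, 3) for _ in range(5)):      # stream of partial scans
    tree = cKDTree(model)
    registered = obs
    for _ in range(10):                                     # registration step
        registered = rigid_icp_step(registered, tree, model)
    model = fuse(model, registered)                         # model-update step
print("fused model size:", len(model))
```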


European Conference on Computer Vision | 2014

Nonrigid Surface Registration and Completion from RGBD Images

Weipeng Xu; Mathieu Salzmann; Yongtian Wang; Yue Liu

Nonrigid surface registration is a challenging problem that suffers from many ambiguities. Existing methods typically assume the availability of full volumetric data, or require a global model of the surface of interest. In this paper, we introduce an approach to nonrigid registration that operates on relatively low-quality RGBD images and does not assume prior knowledge of the global surface shape. To this end, we model the surface as a collection of patches, and infer the patch deformations by performing inference in a graphical model. Our representation lets us fill in the holes in the input depth maps, thus essentially achieving surface completion. Our experimental evaluation demonstrates the effectiveness of our approach on several sequences, as well as its robustness to missing data and occlusions.
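
To make the patch-based formulation concrete, the snippet below evaluates an energy of that general shape: per-patch data terms plus pairwise consistency terms between neighboring patches. The translation-only deformation model and the weighting are illustrative assumptions, not the paper's graphical model.

```python
# Assumed-form sketch of a patch-based energy with unary and pairwise terms.
import numpy as np

def patch_energy(patch_translations, patch_points, observed_points, neighbors, lam=1.0):
    """patch_translations: (P, 3) per-patch offsets (simplified deformation model).
    neighbors: list of (i, j) index pairs of adjacent patches."""
    unary = sum(np.sum((pts + t - obs) ** 2)                 # fit each patch to the data
                for pts, t, obs in zip(patch_points, patch_translations, observed_points))
    pairwise = sum(np.sum((patch_translations[i] - patch_translations[j]) ** 2)
                   for i, j in neighbors)                    # keep neighboring patches consistent
    return unary + lam * pairwise

P = 4
patch_points = [np.random.rand(10, 3) for _ in range(P)]
observed = [p + 0.05 for p in patch_points]                  # slightly shifted observations
print(patch_energy(np.zeros((P, 3)), patch_points, observed, neighbors=[(0, 1), (1, 2), (2, 3)]))
```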


International Conference on Computer Graphics and Interactive Techniques | 2018

MonoPerfCap: Human Performance Capture from Monocular Video

Weipeng Xu; Avishek Chatterjee; Michael Zollhoefer; Helge Rhodin; Dushyant Mehta; Hans-Peter Seidel; Christian Theobalt

We present the first marker-less approach for temporally coherent 3D performance capture of a human with general clothing from monocular video. Our approach reconstructs articulated human skeleton motion as well as medium-scale non-rigid surface deformations in general scenes. Human performance capture is a challenging problem due to the large range of articulation, potentially fast motion, and considerable non-rigid deformations, even from multi-view data. Reconstruction from monocular video alone is drastically more challenging, since strong occlusions and the inherent depth ambiguity lead to a highly ill-posed reconstruction problem. We tackle these challenges with a novel approach that employs sparse 2D and 3D human pose detections from a convolutional neural network using a batch-based pose estimation strategy. Joint recovery of per-batch motion allows us to resolve the ambiguities of the monocular reconstruction problem on the basis of a low-dimensional trajectory subspace. In addition, we propose refinement of the surface geometry based on fully automatically extracted silhouettes to enable medium-scale non-rigid alignment. We demonstrate state-of-the-art performance capture results that enable exciting applications such as video editing and free-viewpoint video, previously infeasible from monocular video. Our qualitative and quantitative evaluation demonstrates that our approach significantly outperforms previous monocular methods in terms of accuracy, robustness, and the complexity of scenes that can be handled.
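
The trajectory-subspace idea, constraining a whole batch of frames to a low-dimensional temporal basis, can be illustrated with a DCT basis as below; the basis choice and dimensions are assumptions for illustration, not the paper's exact parameterization.

```python
# Hedged sketch: project per-frame joint trajectories onto a low-dimensional
# temporal (DCT) basis, a loose analogue of batch-wise trajectory-subspace recovery.
import numpy as np
from scipy.fft import dct, idct

def project_to_trajectory_subspace(traj, k=8):
    """traj: (T, J, 3) noisy per-frame joint positions.
    Keep only the k lowest-frequency DCT coefficients along time."""
    coeffs = dct(traj, axis=0, norm='ortho')
    coeffs[k:] = 0.0                       # discard high-frequency components
    return idct(coeffs, axis=0, norm='ortho')

T, J = 100, 17
t = np.linspace(0, 2 * np.pi, T)[:, None, None]
clean = np.sin(t) * np.ones((T, J, 3))     # synthetic smooth motion
noisy = clean + 0.05 * np.random.randn(T, J, 3)
smoothed = project_to_trajectory_subspace(noisy, k=8)
print("mean error before:", np.abs(noisy - clean).mean(),
      "after:", np.abs(smoothed - clean).mean())
```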


International Conference on Computer Graphics and Interactive Techniques | 2018

Deep Video Portraits

Hyeongwoo Kim; Pablo Garrido; Ayush Tewari; Weipeng Xu; Justus Thies; Matthias Niessner; Patrick Pérez; Christian Richardt; Michael Zollhöfer; Christian Theobalt

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulating facial expressions only, we are the first to transfer the full 3D head position, head rotation, facial expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism of this rendering-to-video transfer is achieved by careful adversarial training, and as a result we can create modified target videos that mimic the behavior of the synthetically created input. To enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video and feed it into the trained network, thus taking full control of the target. With the ability to freely recombine source and target parameters, we demonstrate a large variety of video rewrite applications without explicitly modeling hair, body, or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, in which, for instance, a user study shows that our video edits are hard to detect.
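
The parameter recombination driving source-to-target reenactment can be pictured as below; the FaceParams fields and the reenact helper are hypothetical names, and in the actual pipeline the recombined parameters would be rendered and fed to the trained generative network.

```python
# Conceptual sketch (hypothetical data structures): recombine reconstructed
# face-model parameters so the source drives expression, pose, and gaze of the target.
from dataclasses import dataclass, replace

@dataclass
class FaceParams:
    identity: tuple      # per-actor shape/reflectance coefficients
    expression: tuple    # per-frame expression coefficients
    pose: tuple          # head rotation and translation
    gaze: tuple          # eye gaze direction

def reenact(source_frame: FaceParams, target: FaceParams) -> FaceParams:
    """Keep the target's identity, drive everything else from the source frame."""
    return replace(target,
                   expression=source_frame.expression,
                   pose=source_frame.pose,
                   gaze=source_frame.gaze)

src = FaceParams(identity=(0.1,), expression=(0.7,), pose=(0.0, 0.2, 0.0), gaze=(0.0, -0.1))
tgt = FaceParams(identity=(0.9,), expression=(0.0,), pose=(0.1, 0.0, 0.0), gaze=(0.0, 0.0))
conditioning = reenact(src, tgt)   # would be rendered and fed to the trained network
print(conditioning)
```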


International Conference on Image Processing | 2013

Real-time keystone correction for hand-held projectors with an RGBD camera

Weipeng Xu; Yongtian Wang; Yue Liu; Dongdong Weng; Mengwen Tan; Mathieu Salzmann

This paper introduces a novel and simple approach to real-time continuous keystone correction for hand-held projectors. An RGBD camera is attached to the projector to form a projector-RGBD-camera system. The system is first calibrated in an offline stage. At run-time, we then estimate the relative pose between the projector and the screen using the RGBD camera, which lets us correct the keystone distortion by warping the projected image accordingly. Experimental results show that our method outperforms existing techniques in terms of both accuracy and efficiency.
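
As a minimal sketch of the warping step, assuming the projector-screen geometry has already been estimated from the RGBD camera, the snippet below pre-warps the projected image with the homography that maps the observed keystoned quadrilateral back to an upright rectangle. The corner values and function name are made up for illustration and are not the paper's code.

```python
# Minimal pre-warp sketch using an OpenCV homography (illustrative only).
import cv2
import numpy as np

def keystone_prewarp(image, projected_corners):
    """projected_corners: where the image corners currently land, expressed in
    projector image coordinates (4x2, float32, clockwise from top-left)."""
    h, w = image.shape[:2]
    upright = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Homography taking the keystoned quadrilateral to the desired rectangle.
    H = cv2.getPerspectiveTransform(projected_corners, upright)
    return cv2.warpPerspective(image, H, (w, h))

img = np.full((480, 640, 3), 255, np.uint8)
corners = np.float32([[30, 20], [600, 0], [640, 480], [0, 460]])  # example keystoned corners
corrected = keystone_prewarp(img, corners)                        # image to send to the projector
```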


European Conference on Computer Vision | 2018

A Hybrid Model for Identity Obfuscation by Face Replacement

Qianru Sun; Ayush Tewari; Weipeng Xu; Mario Fritz; Christian Theobalt; Bernt Schiele

As more and more personal photos are shared and tagged in social media, avoiding privacy risks, such as unintended recognition, becomes increasingly challenging. We propose a new hybrid approach to obfuscate identities in photos by head replacement. Our approach combines state-of-the-art parametric face synthesis with the latest advances in generative adversarial networks (GANs) for data-driven image synthesis. On the one hand, the parametric part of our method gives us control over the facial parameters and allows for explicit manipulation of the identity. On the other hand, the data-driven aspects allow for adding fine details and overall realism, as well as seamless blending into the scene context. In our experiments we show highly realistic output of our system that improves over the previous state of the art in obfuscation rate while preserving a higher similarity to the original image content.


Computer-Aided Design and Computer Graphics | 2013

iSarProjection: A KinectFusion Based Handheld Dynamic Spatial Augmented Reality System

Mengwen Tan; Weipeng Xu; Dongdong Weng

We introduce a general technique for dynamically augmenting physical models. Using a handheld projector with an attached RGBD camera, we map corresponding textures onto the physical model and develop a real-time dynamic spatial augmented reality (SAR) system named iSarProjection. In contrast to traditional bulky and complex tracking systems, our KinectFusion-based technique supports precise registration between the physical model's point cloud and the reconstructed 3D scene, and provides real-time pose estimation to track the physical display surface effectively. By projecting medical anatomy visualizations onto a white body model, we show that our approach eases the development of dynamic SAR systems and makes them applicable to a variety of research fields.


International Symposium on Mixed and Augmented Reality | 2011

“Soul Hunter”: A novel augmented reality application in theme parks

Dongdong Weng; Weipeng Xu; Dong Li; Yongtian Wang; Yue Liu

This paper introduces a novel augmented reality shooting game named “Soul Hunter”, which has been operating successfully in a theme park in China. Soul Hunter adopts an innovative infrared marker scheme to build a mobile augmented reality application over a wide area. It is an extension of the traditional first-person game, in which a player fights virtual ghosts with a gun-like device in the real environment. This paper describes the challenges of applying augmented reality in theme parks and shares some experiences in solving the problems encountered in practical applications.


Workshop on Applications of Computer Vision | 2018

Illumination-Invariant Robust Multiview 3D Human Motion Capture

Nadia Robertini; Florian Bernard; Weipeng Xu; Christian Theobalt

In this work we address the problem of capturing human body motion under changing lighting conditions in a multiview setup. To account for changing lighting conditions, we propose to use an intermediate image representation that is invariant to the scene lighting. In our approach this is achieved by solving time-varying segmentation problems that use frame- and view-dependent appearance costs that adjust to the present conditions. Moreover, we use an adaptive combination of our lighting-invariant segmentation with CNN-based joint detectors in order to increase the robustness to segmentation errors. In our experimental validation we demonstrate that our method handles difficult conditions better than existing methods.
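
A toy version of the adaptive combination of the two cues might look like the following; the confidence-based weighting is an assumed scheme for illustration, not the weighting used in the paper.

```python
# Toy sketch: adaptively blend a lighting-invariant segmentation cost with a
# CNN joint-detector cost, leaning on the detector only where it is confident.
import numpy as np

def combined_cost(silhouette_cost, joint_cost, joint_confidence):
    """Per-term weights sum to one; low detector confidence shifts weight
    toward the segmentation term."""
    w_joint = np.clip(joint_confidence, 0.0, 1.0)
    return (1.0 - w_joint) * silhouette_cost + w_joint * joint_cost

print(combined_cost(np.array([0.2, 0.8]), np.array([0.5, 0.1]), np.array([0.9, 0.3])))
```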

Collaboration


Dive into Weipeng Xu's collaborations.

Top Co-Authors

Yongtian Wang

Beijing Institute of Technology

Yue Liu

Beijing Institute of Technology
