Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hao Li is active.

Publication


Featured research published by Hao Li.


International Conference on Computer Graphics and Interactive Techniques | 2011

Realtime performance-based facial animation

Thibaut Weise; Sofien Bouaziz; Hao Li; Mark Pauly

This paper presents a system for performance-based character animation that enables any user to control the facial expressions of a digital avatar in realtime. The user is recorded in a natural environment using a non-intrusive, commercially available 3D sensor. The simplicity of this acquisition device comes at the cost of high noise levels in the acquired data. To effectively map low-quality 2D images and 3D depth maps to realistic facial expressions, we introduce a novel face tracking algorithm that combines geometry and texture registration with pre-recorded animation priors in a single optimization. Formulated as a maximum a posteriori estimation in a reduced parameter space, our method implicitly exploits temporal coherence to stabilize the tracking. We demonstrate that compelling 3D facial dynamics can be reconstructed in realtime without the use of face markers, intrusive lighting, or complex scanning hardware. This makes our system easy to deploy and facilitates a range of new applications, e.g. in digital gameplay or social interactions.
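
At its core, each frame reduces to a regularized least-squares solve over blendshape weights. The sketch below is illustrative only: the linear blendshape parameterization, the variable names, and the Gaussian prior centred on the previous frame are assumptions standing in for the paper's full animation prior.

import numpy as np

def map_track_frame(B, b0, d, w_prev, Sigma_inv, lam=0.1):
    """One MAP tracking step in a reduced blendshape space.

    Solves argmin_w ||B w + b0 - d||^2 + lam (w - w_prev)^T Sigma_inv (w - w_prev),
    i.e. a data-fit term plus a Gaussian prior that implicitly enforces
    temporal coherence by pulling towards the previous frame's weights."""
    A = B.T @ B + lam * Sigma_inv
    rhs = B.T @ (d - b0) + lam * Sigma_inv @ w_prev
    return np.linalg.solve(A, rhs)

# Toy example: 3 vertices (9 coordinates), 2 blendshapes.
rng = np.random.default_rng(0)
B = rng.standard_normal((9, 2))      # blendshape displacement basis
b0 = rng.standard_normal(9)          # neutral face
d = B @ np.array([0.7, 0.2]) + b0    # "observed" depth samples
w = map_track_frame(B, b0, d, w_prev=np.zeros(2), Sigma_inv=np.eye(2))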


Symposium on Geometry Processing | 2008

Global correspondence optimization for non-rigid registration of depth scans

Hao Li; Robert W. Sumner; Mark Pauly

We present a registration algorithm for pairs of deforming and partial range scans that addresses the challenges of non-rigid registration within a single non-linear optimization. Our algorithm simultaneously solves for correspondences between points on source and target scans, confidence weights that measure the reliability of each correspondence and identify non-overlapping areas, and a warping field that brings the source scan into alignment with the target geometry. The optimization maximizes the region of overlap and the spatial coherence of the deformation while minimizing registration error. All optimization parameters are chosen automatically; hand-tuning is not necessary. Our method is not restricted to part-in-whole matching, but addresses the general problem of partial matching, and requires no explicit prior correspondences or feature points. We evaluate the performance and robustness of our method using scan data acquired by a structured light scanner and compare our method with existing non-rigid registration algorithms.
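
A toy version of this joint optimization, with a 1D warp field, per-correspondence confidence weights, and a smoothness term, can be handed to a standard non-linear least-squares solver. The energy weights and the 1D setting below are illustrative assumptions, not the paper's formulation on range scans.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, src, tgt, alpha, beta):
    n = len(src)
    u = params[:n]                          # warp field (1D displacements)
    w = params[n:]                          # per-correspondence confidence
    r_fit = w * (src + u - tgt)             # confidence-weighted alignment error
    r_conf = np.sqrt(beta) * (1.0 - w**2)   # pulls weights to 1 unless the fit is bad
    r_smooth = np.sqrt(alpha) * np.diff(u)  # spatial coherence of the deformation
    return np.concatenate([r_fit, r_conf, r_smooth])

src = np.linspace(0.0, 1.0, 20)
tgt = src + 0.3          # true warp: constant shift
tgt[-3:] += 2.0          # simulated non-overlapping region
x0 = np.concatenate([np.zeros(20), np.ones(20)])
sol = least_squares(residuals, x0, args=(src, tgt, 10.0, 0.1))
u, w = sol.x[:20], sol.x[20:]
# w drops towards 0 on the last three points, flagging them as non-overlap.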


International Conference on Computer Graphics and Interactive Techniques | 2009

Robust single-view geometry and motion reconstruction

Hao Li; Bart Adams; Leonidas J. Guibas; Mark Pauly

We present a framework and algorithms for robust geometry and motion reconstruction of complex deforming shapes. Our method makes use of a smooth template that provides a crude approximation of the scanned object and serves as a geometric and topological prior for reconstruction. Large-scale motion of the acquired object is recovered using a novel space-time adaptive, non-rigid registration method. Fine-scale details such as wrinkles and folds are synthesized with an efficient linear mesh deformation algorithm. Subsequent spatial and temporal filtering of detail coefficients allows transfer of persistent geometric detail to regions not observed by the scanner. We show how this two-scale process allows faithful recovery of small-scale shape and motion features leading to a high-quality reconstruction. We illustrate the robustness and generality of our algorithm on a variety of examples composed of different materials and exhibiting a large range of dynamic deformations.
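
The detail-transfer step can be sketched as a confidence-weighted temporal filter on per-vertex detail coefficients: frames where the scanner observed a region are smoothed and propagated to frames where it did not. The array layout and the Gaussian kernel below are illustrative assumptions, not the authors' exact filtering scheme.

import numpy as np

def filter_details(D, M, sigma=2.0):
    """Temporally filter detail coefficients D (T frames x V vertices,
    e.g. displacements along vertex normals) using observation mask M
    (1 where the scanner observed the vertex, 0 otherwise). Unobserved
    entries are filled from temporally nearby observed frames."""
    T = D.shape[0]
    t = np.arange(T)
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / sigma) ** 2)  # temporal kernel
    num = W @ (D * M)
    den = W @ M
    return np.where(den > 1e-9, num / np.maximum(den, 1e-9), 0.0)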


Symposium on Computer Animation | 2009

Face/Off: live facial puppetry

Thibaut Weise; Hao Li; Luc Van Gool; Mark Pauly

We present a complete integrated system for live facial puppetry that enables high-resolution real-time facial expression tracking with transfer to another person's face. The system utilizes a real-time structured light scanner that provides dense 3D data and texture. A generic template mesh, fitted to a rigid reconstruction of the actor's face, is tracked offline in a training stage through a set of expression sequences. These sequences are used to build a person-specific linear face model that is subsequently used for online face tracking and expression transfer. Even with just a single rigid pose of the target face, convincing real-time facial animations are achievable. The actor becomes a puppeteer with complete and accurate control over a digital face.
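
The person-specific linear model amounts to PCA over the tracked training meshes, after which online tracking works in the reduced coefficient space. The function names and array shapes below are assumptions for illustration.

import numpy as np

def build_face_model(X, k):
    """Build a person-specific linear face model.
    X: (F, 3V) flattened vertex positions of F tracked training expressions."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                  # mean face and k expression modes

def project(frame, mean, basis):
    """Online step: express a new frame in the reduced space and reconstruct."""
    coeffs = basis @ (frame - mean)
    return coeffs, mean + basis.T @ coeffs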


International Conference on Computer Graphics and Interactive Techniques | 2010

Example-based facial rigging

Hao Li; Thibaut Weise; Mark Pauly

We introduce a method for generating facial blendshape rigs from a set of example poses of a CG character. Our system transfers controller semantics and expression dynamics from a generic template to the target blendshape model, while solving for an optimal reproduction of the training poses. This enables a scalable design process, where the user can iteratively add more training poses to refine the blendshape expression space. At the same time, plausible animations can be obtained even with a single training pose. We show how formulating the optimization in gradient space yields superior results as compared to a direct optimization on blendshape vertices. We provide examples for both hand-crafted characters and 3D scans of a real actor and demonstrate the performance of our system in the context of markerless art-directable facial tracking.
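
A 1D stand-in for the gradient-space idea: rather than fitting positions directly, fit their discrete gradients and recover positions with a least-squares (Poisson-style) solve. Everything below is an illustrative simplification, not the paper's per-triangle deformation-gradient system.

import numpy as np

def solve_in_gradient_space(grad_target, x_anchor=0.0):
    """Recover n vertex positions whose edge differences best match
    grad_target (length n-1), softly anchoring the first vertex."""
    n = len(grad_target) + 1
    G = np.zeros((n - 1, n))                 # discrete gradient operator
    G[np.arange(n - 1), np.arange(n - 1)] = -1.0
    G[np.arange(n - 1), np.arange(1, n)] = 1.0
    A = G.T @ G
    A[0, 0] += 1.0                           # soft positional anchor
    b = G.T @ grad_target
    b[0] += x_anchor
    return np.linalg.solve(A, b)

print(solve_in_gradient_space(np.full(4, 0.5)))  # ~[0, 0.5, 1.0, 1.5, 2.0]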


ACM Transactions on Graphics | 2012

Temporally coherent completion of dynamic shapes

Hao Li; Linjie Luo; Daniel Vlasic; Pieter Peers; Jovan Popović; Mark Pauly; Szymon Rusinkiewicz

We present a novel shape completion technique for creating temporally coherent watertight surfaces from real-time captured dynamic performances. Because of occlusions and low surface albedo, scanned mesh sequences typically exhibit large holes that persist over extended periods of time. Most conventional dynamic shape reconstruction techniques rely on template models or assume slow deformations in the input data. Our framework sidesteps these requirements and directly initializes shape completion with topology derived from the visual hull. To seal the holes with patches that are consistent with the subject's motion, we first minimize surface bending energies in each frame to ensure smooth transitions across hole boundaries. Temporally coherent dynamics of surface patches are obtained by unwarping all frames within a time window using accurate interframe correspondences. Aggregated surface samples are then filtered with a temporal visibility kernel that maximizes the use of nonoccluded surfaces. A key benefit of our shape completion strategy is that it does not rely on long-range correspondences or a template model. Consequently, our method does not suffer from the error accumulation typically introduced by noise, large deformations, and drastic topological changes. We illustrate the effectiveness of our method on several high-resolution scans of human performances captured with a state-of-the-art multiview 3D acquisition system.
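
The per-frame hole-filling step, minimizing a bending energy so patches blend smoothly across hole boundaries, discretizes to a bi-Laplacian linear solve with observed vertices held fixed. The graph-Laplacian setup below is a generic sketch, not the authors' exact discretization.

import numpy as np

def fill_hole(L, positions, hole_idx):
    """Fill hole vertices by solving the bi-Laplacian system L^2 x = 0
    on the unknowns, a discrete minimizer of bending energy.
    L: (V, V) dense mesh Laplacian; positions: (V, 3), values at
    hole_idx are ignored and replaced."""
    V = L.shape[0]
    known = np.setdiff1d(np.arange(V), hole_idx)
    B = L @ L                                   # discrete bi-Laplacian
    A = B[np.ix_(hole_idx, hole_idx)]
    rhs = -B[np.ix_(hole_idx, known)] @ positions[known]
    out = positions.copy()
    out[hole_idx] = np.linalg.solve(A, rhs)
    return out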


International Conference on Computer Graphics and Interactive Techniques | 2012

Tracking surfaces with evolving topology

Morten Bojsen-Hansen; Hao Li; Chris Wojtan

We present a method for recovering a temporally coherent, deforming triangle mesh with arbitrarily changing topology from an incoherent sequence of static closed surfaces. We solve this problem using the surface geometry alone, without any prior information like surface templates or velocity fields. Our system combines a proven strategy for triangle mesh improvement, a robust multi-resolution non-rigid registration routine, and a reliable technique for changing surface mesh topology. We also introduce a novel topological constraint enforcement algorithm to ensure that the output and input always have similar topology. We apply our technique to a series of diverse input data from video reconstructions, physics simulations, and artistic morphs. The structured output of our algorithm allows us to efficiently track information like colors and displacement maps, recover velocity information, and solve PDEs on the mesh as a post process.
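
In the simplest closed-manifold case, the topological constraint can at least be verified through the Euler characteristic. The helper below is an illustrative consistency test, not the paper's constraint-enforcement algorithm.

def genus(num_vertices, faces):
    """Genus of a closed manifold triangle mesh via the Euler characteristic
    chi = V - E + F; genus = (2 - chi) / 2. A matching genus between input
    and output is a cheap necessary condition for 'similar topology'."""
    edges = set()
    for a, b, c in faces:
        for e in ((a, b), (b, c), (c, a)):
            edges.add(tuple(sorted(e)))
    chi = num_vertices - len(edges) + len(faces)
    return (2 - chi) // 2

# A tetrahedron is a topological sphere: genus 0.
assert genus(4, [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]) == 0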


Computer Graphics Forum | 2012

Factored Facade Acquisition using Symmetric Line Arrangements

Duygu Ceylan; Niloy J. Mitra; Hao Li; Thibaut Weise; Mark Pauly

We introduce a novel framework for image-based 3D reconstruction of urban buildings based on symmetry priors. Starting from image-level edges, we generate a sparse and approximate set of consistent 3D lines. These lines are then used to simultaneously detect symmetric line arrangements while refining the estimated 3D model. Operating both on 2D image data and intermediate 3D feature representations, we perform iterative feature consolidation and effective outlier pruning, thus eliminating reconstruction artifacts arising from ambiguous or wrong stereo matches. We exploit non-local coherence of symmetric elements to generate precise model reconstructions, even in the presence of a significant amount of outlier image-edges arising from reflections, shadows, outlier objects, etc. We evaluate our algorithm on several challenging test scenarios, both synthetic and real. Beyond reconstruction, the extracted symmetry patterns are useful for interactive and intuitive model manipulation.
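
As a crude 1D stand-in for detecting symmetric line arrangements, one can vote over pairwise gaps between detected (say, vertical) facade lines to recover the dominant repetition interval. The function below is an illustrative simplification of the paper's 3D symmetry detection.

import numpy as np

def repeated_spacing(xs, tol=0.05):
    """Estimate the dominant repetition interval among 1D line positions
    by histogram voting over all pairwise gaps."""
    xs = np.sort(np.asarray(xs, dtype=float))
    gaps = (xs[None, :] - xs[:, None])[np.triu_indices(len(xs), 1)]
    hist, edges = np.histogram(gaps, bins=np.arange(0.0, gaps.max() + tol, tol))
    i = hist.argmax()
    return 0.5 * (edges[i] + edges[i + 1])

print(repeated_spacing([0.02, 1.0, 1.98, 3.01, 4.0]))  # ~1.0 (window spacing)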


Computer Vision and Pattern Recognition | 2012

Multi-view hair capture using orientation fields

Linjie Luo; Hao Li; Sylvain Paris; Thibaut Weise; Mark Pauly; Szymon Rusinkiewicz

Reconstructing realistic 3D hair geometry is challenging due to omnipresent occlusions, complex discontinuities and specular appearance. To address these challenges, we propose a multi-view hair reconstruction algorithm based on orientation fields with structure-aware aggregation. Our key insight is that while hair's color appearance is view-dependent, the response to oriented filters that captures the local hair orientation is more stable. We apply the structure-aware aggregation to the MRF matching energy to enforce the structural continuities implied by the local hair orientations. Multiple depth maps from the MRF optimization are then fused into a globally consistent hair geometry with a template refinement procedure. Compared to the state-of-the-art color-based methods, our method faithfully reconstructs detailed hair structures. We demonstrate the results for a number of hair styles, ranging from straight to curly, and show that our framework is suitable for capturing hair in motion.
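
The per-pixel orientation estimate can be sketched as the strongest response over a bank of oriented ridge filters. The kernel shape and parameters below are illustrative assumptions, not the paper's exact filters.

import numpy as np
from scipy.ndimage import convolve

def orientation_field(img, n_angles=16, ksize=9, sigma=1.5):
    """Per-pixel orientation as the argmax over oriented ridge filters
    (second-derivative-of-Gaussian profile across each candidate line)."""
    half = ksize // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    responses = []
    for th in angles:
        d = -np.sin(th) * xs + np.cos(th) * ys   # distance to a line at angle th
        k = (d**2 / sigma**2 - 1.0) * np.exp(-0.5 * d**2 / sigma**2)
        k -= k.mean()                            # flat regions respond weakly
        responses.append(convolve(img, k))
    return angles[np.abs(np.stack(responses)).argmax(axis=0)]

img = np.zeros((32, 32))
img[16, :] = 1.0                                 # one horizontal "strand"
theta = orientation_field(img)                   # theta[16, 16] is ~0 (horizontal)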


Eurographics | 2010

Geometric Registration for Deformable Shapes

Will Chang; Qixing Huang; Hao Li; Niloy J. Mitra; Mark Pauly; Michael Wand


Collaboration


Dive into Hao Li's collaborations.

Top Co-Authors

Mark Pauly (École Polytechnique Fédérale de Lausanne)

Thibaut Weise (École Polytechnique Fédérale de Lausanne)

Sofien Bouaziz (École Polytechnique Fédérale de Lausanne)

Niloy J. Mitra (University College London)

Daniel Vlasic (Massachusetts Institute of Technology)