Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yanlin Weng is active.

Publication


Featured research published by Yanlin Weng.


IEEE Transactions on Visualization and Computer Graphics | 2014

FaceWarehouse: A 3D Facial Expression Database for Visual Computing

Chen Cao; Yanlin Weng; Shun Zhou; Yiying Tong; Kun Zhou

We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we capture the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, and kiss. For every raw RGBD record, a set of facial feature points on the color image, such as eye corners, mouth contour, and the nose tip, is automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
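
As a rough illustration of the bilinear model described above, the sketch below contracts a rank-3 core tensor with identity and expression weight vectors. The tensor sizes, the random core, and the function names are illustrative assumptions, not FaceWarehouse data.

```python
import numpy as np

# Toy sizes (illustrative): 90 vertex coordinates, 10 identities, 5 expressions.
NUM_COORDS, NUM_ID, NUM_EXP = 90, 10, 5

# Stand-in core tensor; in FaceWarehouse it comes from assembling the fitted
# expression meshes into a rank-3 tensor.
core = np.random.default_rng(0).normal(size=(NUM_COORDS, NUM_ID, NUM_EXP))

def synthesize_face(w_id, w_exp):
    """V = C x2 w_id x3 w_exp: contract the core tensor with identity and
    expression weights to get a flat vector of vertex coordinates."""
    v = np.tensordot(core, w_id, axes=([1], [0]))   # -> (NUM_COORDS, NUM_EXP)
    return np.tensordot(v, w_exp, axes=([1], [0]))  # -> (NUM_COORDS,)

# A blend of two identities performing a mix of two expressions.
w_id = np.zeros(NUM_ID); w_id[[0, 1]] = 0.5
w_exp = np.zeros(NUM_EXP); w_exp[0], w_exp[1] = 0.7, 0.3
print(synthesize_face(w_id, w_exp).reshape(-1, 3).shape)  # (30, 3)
```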


International Conference on Computer Graphics and Interactive Techniques | 2013

3D shape regression for real-time facial animation

Chen Cao; Yanlin Weng; Stephen Lin; Kun Zhou

We present a real-time performance-driven facial animation system based on 3D shape regression. In this system, the 3D positions of facial landmark points are inferred by a regressor from 2D video frames of an ordinary web camera. From these 3D points, the pose and expressions of the face are recovered by fitting a user-specific blendshape model to them. The main technical contribution of this work is the 3D regression algorithm that learns an accurate, user-specific face alignment model from an easily acquired set of training data, generated from images of the user performing a sequence of predefined facial poses and expressions. Experiments show that our system can accurately recover 3D face shapes even for fast motions, non-frontal faces, and exaggerated expressions. In addition, some capacity to handle partial occlusions and changing lighting conditions is demonstrated.
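
The fitting step described above, recovering rigid pose and expression coefficients from 3D landmark positions, can be posed as a small nonlinear least-squares problem. A minimal sketch on synthetic data follows; the landmark and blendshape counts and the parameterization are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Toy setup (illustrative sizes): L landmarks, K expression blendshapes.
L, K = 15, 4
rng = np.random.default_rng(0)
b0 = rng.normal(size=(L, 3))                # neutral landmark positions
B = rng.normal(scale=0.1, size=(K, L, 3))   # expression blendshape deltas

def pose(x):
    """x = [rotation vector (3), translation (3), expression coeffs (K)]."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    shape = b0 + np.tensordot(x[6:], B, axes=1)  # blendshape combination
    return shape @ R.T + x[3:6]

# Synthesize target landmarks from known parameters, then recover them.
x_true = np.concatenate([[0.1, -0.2, 0.05], [0.3, 0.1, -0.4], [0.5, 0.0, 0.8, 0.2]])
target = pose(x_true)
fit = least_squares(lambda x: (pose(x) - target).ravel(), x0=np.zeros(6 + K))
print(np.round(fit.x, 3))  # should be close to x_true
```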


The Visual Computer | 2006

2D shape deformation using nonlinear least squares optimization

Yanlin Weng; Weiwei Xu; Yanchen Wu; Kun Zhou; Baining Guo

This paper presents a novel 2D shape deformation algorithm based on nonlinear least squares optimization. The algorithm aims to preserve two local shape properties: the Laplacian coordinates of the boundary curve and the local area of the shape interior, which are together represented in a non-quadratic energy function. An iterative Gauss–Newton method is used to minimize this nonlinear energy function. The result is an interactive shape deformation system that can achieve physically plausible results that are difficult to achieve with previous linear least squares methods. In addition to this algorithm that preserves local shape properties, we also introduce a scheme to preserve the global area of the shape, which is useful for deforming incompressible objects.
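
A generic Gauss-Newton loop of the kind the paper relies on looks as follows; the toy exponential-fit energy stands in for the Laplacian and area terms of the actual deformation energy.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-10):
    """Minimize 0.5*||r(x)||^2 by repeatedly solving the normal equations
    (J^T J) dx = -J^T r, the same scheme used for the deformation energy."""
    x = x0.copy()
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy stand-in energy: fit (a, b) so that exp(a*t_i) + b matches data y_i.
t = np.linspace(0, 1, 30)
y = np.exp(0.7 * t) + 0.3
res = lambda x: np.exp(x[0] * t) + x[1] - y
jac = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
print(gauss_newton(res, jac, np.array([0.0, 0.0])))  # approx. [0.7, 0.3]
```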


International Conference on Computer Graphics and Interactive Techniques | 2012

Single-view hair modeling for portrait manipulation

Menglei Chai; Lvdi Wang; Yanlin Weng; Yizhou Yu; Baining Guo; Kun Zhou

Human hair is known to be very difficult to model or reconstruct. In this paper, we focus on applications related to portrait manipulation and take an application-driven approach to hair modeling. To enable an average user to achieve interesting portrait manipulation results, we develop a single-view hair modeling technique with modest user interaction to meet the unique requirements set by portrait manipulation. Our method relies on heuristics to generate a plausible high-resolution strand-based 3D hair model. This is made possible by an effective high-precision 2D strand tracing algorithm, which explicitly models uncertainty and local layering during tracing. The depth of the traced strands is solved through an optimization, which simultaneously considers depth constraints, layering constraints as well as regularization terms. Our single-view hair modeling enables a number of interesting applications that were previously challenging, including transferring the hairstyle of one subject to another in a potentially different pose, rendering the original portrait in a novel view and image-space hair editing.
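
The depth optimization can be illustrated with a stripped-down version: smooth depths along one strand subject to sparse depth constraints, solved as linear least squares. This omits the paper's layering constraints; all names and weights below are illustrative.

```python
import numpy as np

def solve_strand_depth(n, constraints, smooth_weight=10.0):
    """Assign a depth to each of n strand samples by least squares:
    sparse depth constraints (index -> depth) plus a second-difference
    smoothness regularizer, standing in for the paper's full energy."""
    rows, rhs = [], []
    for i, z in constraints.items():            # data term
        e = np.zeros(n); e[i] = 1.0
        rows.append(e); rhs.append(z)
    for i in range(1, n - 1):                   # smoothness term
        e = np.zeros(n)
        e[i - 1], e[i], e[i + 1] = smooth_weight, -2 * smooth_weight, smooth_weight
        rows.append(e); rhs.append(0.0)
    A, b = np.array(rows), np.array(rhs)
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Example: a 20-sample strand pinned to depth 0 at the root, 1 near the tip.
print(np.round(solve_strand_depth(20, {0: 0.0, 18: 1.0}), 2))
```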


International Conference on Computer Graphics and Interactive Techniques | 2016

Real-time facial animation with image-based dynamic avatars

Chen Cao; Hongzhi Wu; Yanlin Weng; Tianjia Shao; Kun Zhou

We present a novel image-based representation for dynamic 3D avatars, which allows effective handling of various hairstyles and headwear, and can generate expressive facial animations with fine-scale details in real-time. We develop algorithms for creating an image-based avatar from a set of sparsely captured images of a user, using an off-the-shelf web camera at home. An optimization method is proposed to construct a topologically consistent morphable model that approximates the dynamic hair geometry in the captured images. We also design a real-time algorithm for synthesizing novel views of an image-based avatar, so that the avatar follows the facial motions of an arbitrary actor. Compelling results from our pipeline are demonstrated on a variety of cases.
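
One generic ingredient of image-based avatars, blending captured views by angular proximity to a requested novel view, can be sketched as below. This is a standard image-based-rendering scheme offered for intuition, not the paper's actual synthesis algorithm.

```python
import numpy as np

def view_blend_weights(captured_dirs, novel_dir, power=8.0):
    """Blend weights for image-based rendering: each captured view is
    weighted by how closely its camera direction matches the requested
    novel view (a generic IBR heuristic, not the paper's exact scheme)."""
    d = np.asarray(captured_dirs, float)
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    v = np.asarray(novel_dir, float)
    v /= np.linalg.norm(v)
    w = np.clip(d @ v, 0.0, 1.0) ** power
    s = w.sum()
    return w / s if s > 0 else np.full(len(d), 1.0 / len(d))

# Example: three captured views; request a view between the first two.
dirs = [[0, 0, 1], [1, 0, 1], [-1, 0, 1]]
print(np.round(view_blend_weights(dirs, [0.5, 0, 1]), 3))
```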


International Conference on Computer Graphics and Interactive Techniques | 2013

Dynamic hair manipulation in images and videos

Menglei Chai; Lvdi Wang; Yanlin Weng; Xiaogang Jin; Kun Zhou

This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
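
Strand generation over a 3D direction field can be caricatured as fixed-step field integration from a scalp root, which keeps the root fixed and the arc length controlled. The field and step sizes below are illustrative stand-ins for the paper's iterative algorithm.

```python
import numpy as np

def grow_strand(root, direction_field, step=0.01, n_steps=100):
    """Grow a strand from a scalp root by integrating a 3D direction field
    with fixed-length steps -- a simplified stand-in for the paper's
    iterative hair generation."""
    pts = [np.asarray(root, float)]
    for _ in range(n_steps):
        d = direction_field(pts[-1])
        n = np.linalg.norm(d)
        if n < 1e-8:                      # stop where the field vanishes
            break
        pts.append(pts[-1] + step * d / n)
    return np.array(pts)

# Example field: hair falling downward while swirling around the y-axis.
field = lambda p: np.array([-p[2], -1.0, p[0]])
strand = grow_strand([0.1, 0.0, 0.05], field)
print(strand.shape, strand[-1].round(3))
```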


International Conference on Computer Graphics and Interactive Techniques | 2016

AutoHair: fully automatic hair modeling from a single image

Menglei Chai; Tianjia Shao; Hongzhi Wu; Yanlin Weng; Kun Zhou

We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, comparable to those generated by state-of-the-art methods that require user interaction. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
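
The data-driven matching stage can be hinted at with a toy nearest-neighbor search over coarse mask descriptors; the descriptor and the random masks below are illustrative assumptions, not the paper's features or exemplar database.

```python
import numpy as np

def mask_descriptor(mask, grid=8):
    """Coarse occupancy descriptor of a binary hair mask: average pooling
    onto a grid x grid lattice (an illustrative stand-in for the paper's
    segmentation-based matching features)."""
    h, w = mask.shape
    d = mask[: h - h % grid, : w - w % grid]
    d = d.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))
    return d.ravel()

def match_exemplar(query_mask, exemplar_masks):
    """Return the index of the hair exemplar whose mask descriptor is
    nearest to the query's (plain L2 nearest neighbor)."""
    q = mask_descriptor(query_mask)
    dists = [np.linalg.norm(q - mask_descriptor(m)) for m in exemplar_masks]
    return int(np.argmin(dists))

# Example with random toy masks standing in for hair segmentations.
rng = np.random.default_rng(1)
db = [rng.random((64, 64)) > 0.5 for _ in range(5)]
print(match_exemplar(db[2], db))  # -> 2
```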


Graphical Models | 2014

Real-time Facial Animation on Mobile Devices

Yanlin Weng; Chen Cao; Qiming Hou; Kun Zhou

We present a performance-based facial animation system capable of running on mobile devices at real-time frame rates. A key component of our system is a novel regression algorithm that accurately infers the facial motion parameters from 2D video frames of an ordinary web camera. Compared with the state-of-the-art facial shape regression algorithm [1], which takes a two-step procedure to track facial animations (i.e., first regressing the 3D positions of facial landmarks, and then computing the head poses and expression coefficients), we directly regress the head poses and expression coefficients. This one-step approach greatly reduces the dimension of the regression target and significantly improves the tracking performance while preserving the tracking accuracy. We further propose to collect the training images of the user under different lighting environments, and make use of the data to learn a user-specific regressor, which can robustly handle lighting changes that frequently occur when using mobile devices.
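
The dimensionality argument is easy to make concrete, and the residual-cascade idea behind such regressors can be shown on a linear toy problem. The landmark and expression counts below are plausible but assumed, and the cascade is a simplification, not the paper's regressor.

```python
import numpy as np

# Dimensionality of the regression target (illustrative numbers):
num_landmarks, num_expressions = 75, 46
two_step_dim = 3 * num_landmarks       # 3D landmark positions: 225
one_step_dim = 6 + num_expressions     # rigid pose + coefficients: 52
print(two_step_dim, one_step_dim)

# Minimal residual cascade toward such a low-dimensional target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))        # stand-in image features
Y = X @ rng.normal(size=(128, one_step_dim))
est = np.zeros_like(Y)
for _ in range(4):                     # each stage regresses the residual
    W, *_ = np.linalg.lstsq(X, Y - est, rcond=None)
    est += X @ W
print(np.abs(Y - est).max())           # ~0 on this exactly-linear toy problem
```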


Computer Graphics Forum | 2013

As-Rigid-As-Possible Distance Field Metamorphosis

Yanlin Weng; Menglei Chai; Weiwei Xu; Yiying Tong; Kun Zhou

Widely used for morphing between objects with arbitrary topology, distance field interpolation (DFI) handles topological transition naturally without the need for correspondence or remeshing, unlike surface-based interpolation approaches. However, lack of correspondence in DFI also leads to ineffective control over the morphing process. In particular, unless the user specifies a dense set of landmarks, it is not even possible to measure the distortion of intermediate shapes during interpolation, let alone control it. To remedy such issues, we introduce an approach for establishing correspondence between the interior of two arbitrary objects, formulated as an optimal mass transport problem with a sparse set of landmarks. This correspondence enables us to compute non-rigid warping functions that better align the source and target objects as well as to incorporate local rigidity constraints to perform as-rigid-as-possible DFI. We demonstrate how our approach helps achieve flexible morphing results with a small number of landmarks.
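
Plain DFI, which the paper improves on, is a one-liner: blend the two signed distance fields and read off the zero level set. The sketch below shows it on two disks; the transport-based warping and local rigidity that constitute the paper's contribution are deliberately omitted.

```python
import numpy as np

def disk_sdf(shape, center, radius):
    """Signed distance field of a disk on a grid (negative inside)."""
    ys, xs = np.indices(shape)
    return np.hypot(ys - center[0], xs - center[1]) - radius

def dfi(phi0, phi1, t):
    """Plain distance field interpolation: blend the two fields; the
    intermediate shape is the zero level set of the blend."""
    return (1 - t) * phi0 + t * phi1

phi0 = disk_sdf((64, 64), (32, 16), 10)   # small disk on the left
phi1 = disk_sdf((64, 64), (32, 48), 14)   # larger disk on the right
mid = dfi(phi0, phi1, 0.5)
print((mid < 0).sum(), "interior pixels at t=0.5")
```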


Computer Graphics Forum | 2013

Hair Interpolation for Portrait Morphing

Yanlin Weng; Lvdi Wang; Xiao Li; Menglei Chai; Kun Zhou

In this paper we study the problem of hair interpolation: given two 3D hair models, we want to generate a sequence of intermediate hair models that transforms one input into the other smoothly and in an aesthetically pleasing way. We propose an automatic method that efficiently calculates a many-to-many strand correspondence between two or more given hair models, taking into account the multi-scale clustering structure of hair. Experiments demonstrate that hair interpolation can be used for producing more vivid portrait morphing effects and for enabling a novel example-based hair styling methodology, where a user can interactively create new hairstyles by continuously exploring a “style space” spanning multiple input hair models.
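
Once a strand correspondence is available, the interpolation itself reduces to resampling paired strands to a common sample count and blending pointwise. The sketch below assumes the pairing is given, which sidesteps the paper's actual contribution (the many-to-many, cluster-aware correspondence).

```python
import numpy as np

def resample_strand(points, n=32):
    """Resample a polyline strand to n points, uniform in arc length."""
    points = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    u = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(u, s, points[:, k]) for k in range(3)])

def interpolate_strands(a, b, t):
    """Blend two corresponding strands pointwise after resampling.
    The strand pairing is assumed given here."""
    return (1 - t) * resample_strand(a) + t * resample_strand(b)

# Example: a straight strand morphing into a curved one.
a = [[0, 0, 0], [0, -1, 0], [0, -2, 0]]
b = [[0, 0, 0], [0.5, -1, 0.2], [1.2, -1.8, 0.4], [1.5, -2.5, 0.4]]
print(interpolate_strands(a, b, 0.5).shape)  # (32, 3)
```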

Collaboration


Dive into Yanlin Weng's collaboration.

Top Co-Authors

Weiwei Xu

Hangzhou Normal University
