
Publication


Featured research published by Menglei Chai.


International Conference on Computer Graphics and Interactive Techniques | 2012

Single-view hair modeling for portrait manipulation

Menglei Chai; Lvdi Wang; Yanlin Weng; Yizhou Yu; Baining Guo; Kun Zhou

Human hair is known to be very difficult to model or reconstruct. In this paper, we focus on applications related to portrait manipulation and take an application-driven approach to hair modeling. To enable an average user to achieve interesting portrait manipulation results, we develop a single-view hair modeling technique with modest user interaction to meet the unique requirements set by portrait manipulation. Our method relies on heuristics to generate a plausible high-resolution strand-based 3D hair model. This is made possible by an effective high-precision 2D strand tracing algorithm, which explicitly models uncertainty and local layering during tracing. The depth of the traced strands is solved through an optimization, which simultaneously considers depth constraints, layering constraints, and regularization terms. Our single-view hair modeling enables a number of interesting applications that were previously challenging, including transferring the hairstyle of one subject to another in a potentially different pose, rendering the original portrait in a novel view, and image-space hair editing.
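
The depth optimization lends itself to a small linear least-squares illustration. The sketch below is not the authors' implementation; it assumes a single traced strand with a handful of known depths and balances those constraints against a simple smoothness regularizer.

```python
# Minimal sketch: per-vertex depth along one traced 2D strand solved by least
# squares, trading sparse depth constraints against neighbor smoothness.
import numpy as np

def solve_strand_depth(n, depth_constraints, smooth_weight=10.0):
    """depth_constraints: dict {vertex index: target depth} for n strand vertices."""
    rows, rhs = [], []
    # Data term: pull constrained vertices toward their target depths.
    for i, z in depth_constraints.items():
        r = np.zeros(n); r[i] = 1.0
        rows.append(r); rhs.append(z)
    # Regularization term: neighboring vertices should have similar depth.
    for i in range(n - 1):
        r = np.zeros(n); r[i] = smooth_weight; r[i + 1] = -smooth_weight
        rows.append(r); rhs.append(0.0)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return z

depths = solve_strand_depth(50, {0: 0.0, 25: 0.4, 49: 0.1})
```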


International Conference on Computer Graphics and Interactive Techniques | 2013

Dynamic hair manipulation in images and videos

Menglei Chai; Lvdi Wang; Yanlin Weng; Xiaogang Jin; Kun Zhou

This paper presents a single-view hair modeling technique for generating visually and physically plausible 3D hair models with modest user interaction. By solving an unambiguous 3D vector field explicitly from the image and adopting an iterative hair generation algorithm, we can create hair models that not only visually match the original input very well but also possess physical plausibility (e.g., having strand roots fixed on the scalp and preserving the length and continuity of real strands in the image as much as possible). The latter property enables us to manipulate hair in many new ways that were previously very difficult with a single image, such as dynamic simulation or interactive hair shape editing. We further extend the modeling approach to handle simple video input, and generate dynamic 3D hair models. This allows users to manipulate hair in a video or transfer styles from images to videos.
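
As a rough illustration of growing strands that stay rooted on the scalp, the sketch below integrates a grid-sampled 3D direction field outward from a root point. The voxel-grid field, nearest-neighbor lookup, and fixed step size are assumptions for brevity, not the paper's solver.

```python
# Minimal sketch: trace a strand from a fixed scalp root by following a 3D
# vector field sampled on a voxel grid (unit spacing), stopping where the
# field vanishes (i.e. outside the hair volume) or at a maximum length.
import numpy as np

def grow_strand(root, field, step=0.25, max_len=8.0):
    """field: (X, Y, Z, 3) array of directions on a voxel grid."""
    pts = [np.asarray(root, float)]
    for _ in range(int(max_len / step)):
        p = pts[-1]
        i, j, k = np.clip(p.astype(int), 0, np.array(field.shape[:3]) - 1)
        d = field[i, j, k]                       # nearest-neighbor lookup for brevity
        if np.linalg.norm(d) < 1e-6:
            break                                # left the hair volume
        pts.append(p + step * d / np.linalg.norm(d))
    return np.stack(pts)

field = np.zeros((32, 32, 32, 3)); field[..., 1] = 1.0   # toy field pointing along +y
strand = grow_strand((16.0, 2.0, 16.0), field)
```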


International Conference on Computer Graphics and Interactive Techniques | 2015

High-quality hair modeling from a single portrait photo

Menglei Chai; Linjie Luo; Kalyan Sunkavalli; Nathan A. Carr; Sunil Hadap; Kun Zhou

We propose a novel system to reconstruct a high-quality hair depth map from a single portrait photo with minimal user input. We achieve this by combining depth cues such as occlusions, silhouettes, and shading, with a novel 3D helical structural prior for hair reconstruction. We fit a parametric morphable face model to the input photo and construct a base shape in the face, hair and body regions using occlusion and silhouette constraints. We then estimate the normals in the hair region via a Shape-from-Shading-based optimization that uses the lighting inferred from the face model and enforces an adaptive albedo prior that models the typical color and occlusion variations of hair. We introduce a 3D helical hair prior that captures the geometric structure of hair, and show that it can be robustly recovered from the input photo in an automatic manner. Our system combines the base shape, the normals estimated by Shape from Shading, and the 3D helical hair prior to reconstruct high-quality 3D hair models. Our single-image reconstruction closely matches the results of a state-of-the-art multi-view stereo method applied to a multi-view dataset. Our technique can reconstruct a wide variety of hairstyles ranging from short to long and from straight to messy, and we demonstrate the use of our 3D hair models for high-quality portrait relighting, novel view synthesis and 3D-printed portrait reliefs.
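
For intuition about the helical structural prior, the sketch below only generates points on a parametric circular helix (radius, pitch, phase, local frame); the fitting stage that would match such a helix to traced strand pixels is merely noted in a comment and is not the paper's algorithm.

```python
# Minimal sketch: a circular helix as a stand-in for a helical strand prior.
import numpy as np

def helix_points(r, pitch, phase, turns, n, frame=np.eye(3), origin=np.zeros(3)):
    """Return n points on a helix with radius r and pitch (rise per turn),
    expressed in the given orthonormal frame and translated to origin."""
    t = np.linspace(0.0, turns * 2.0 * np.pi, n)
    local = np.stack([r * np.cos(t + phase),
                      r * np.sin(t + phase),
                      pitch * t / (2.0 * np.pi)], axis=1)
    return local @ frame.T + origin

# A fitting procedure would search (r, pitch, phase, frame) so that the projected
# helix matches traced 2D strand pixels; here we only generate candidate shapes.
curl = helix_points(r=0.5, pitch=0.8, phase=0.0, turns=3, n=200)
```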


International Conference on Computer Graphics and Interactive Techniques | 2014

A reduced model for interactive hairs

Menglei Chai; Changxi Zheng; Kun Zhou

Realistic hair animation is a crucial component in depicting virtual characters in interactive applications. While much progress has been made in high-quality hair simulation, the overwhelming computation cost hinders similar fidelity in real-time simulations. To bridge this gap, we propose a data-driven solution. Building upon precomputed simulation data, our approach constructs a reduced model to optimally represent hair motion characteristics with a small number of guide hairs and the corresponding interpolation relationships. At runtime, utilizing such a reduced model, we only simulate guide hairs that capture the general hair motion and interpolate all remaining strands. We further propose a hair correction method that corrects the resulting hair motion with a position-based model to resolve hair collisions and thus captures motion details. Our hair simulation method enables simulation of a full head of hair with over 150K strands in real time. We demonstrate the efficacy and robustness of our method with various hairstyles and driven motions (e.g., head movement and wind force), and compare against full simulation results that do not appear in the training data.
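
The guide-hair idea can be illustrated with a simple linear blend. The sketch below assumes fixed per-strand weights over a small set of guide strands and is not the paper's exact reduced model: an ordinary strand is reconstructed as its rest shape plus a weighted combination of the guides' per-vertex displacements.

```python
# Minimal sketch: interpolate a non-simulated strand from simulated guide strands.
import numpy as np

def interpolate_strand(rest, guide_rest, guide_now, weights):
    """rest: (V,3); guide_rest/guide_now: (G,V,3); weights: (G,) nonnegative, sum to 1."""
    displacement = np.tensordot(weights, guide_now - guide_rest, axes=1)  # (V,3)
    return rest + displacement

rest = np.zeros((10, 3))
guide_rest = np.zeros((2, 10, 3))
guide_now = guide_rest + np.array([0.1, 0.0, 0.0])    # guides swayed in +x
strand = interpolate_strand(rest, guide_rest, guide_now, np.array([0.7, 0.3]))
```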


International Conference on Computer Graphics and Interactive Techniques | 2016

AutoHair: fully automatic hair modeling from a single image

Menglei Chai; Tianjia Shao; Hongzhi Wu; Yanlin Weng; Kun Zhou

We introduce AutoHair, the first fully automatic method for 3D hair modeling from a single portrait image, with no user interaction or parameter tuning. Our method efficiently generates complete and high-quality hair geometries, comparable to those generated by state-of-the-art methods that require user interaction. The core components of our method are: a novel hierarchical deep neural network for automatic hair segmentation and hair growth direction estimation, trained over an annotated hair image database; and an efficient and automatic data-driven hair matching and modeling algorithm, based on a large set of 3D hair exemplars. We demonstrate the efficacy and robustness of our method on Internet photos, resulting in a database of around 50K 3D hair models and a corresponding hairstyle space that covers a wide variety of real-world hairstyles. We also show novel applications enabled by our method, including 3D hairstyle space navigation and hair-aware image retrieval.
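
To give a flavor of data-driven exemplar matching, the sketch below ranks 3D hair exemplars by a hypothetical score that compares segmentation masks and sign-free 2D direction maps; the score and its weighting are assumptions, not AutoHair's matching criterion.

```python
# Minimal sketch: rank pre-rendered exemplars against the mask and direction map
# predicted from the input photo.
import numpy as np

def match_exemplars(mask, dirs, exemplar_masks, exemplar_dirs, alpha=0.5):
    """mask: (H,W) in {0,1}; dirs: (H,W,2) unit 2D orientations; exemplars are batched (N,...)."""
    mask_err = np.abs(exemplar_masks - mask).mean(axis=(1, 2))           # (N,)
    # Compare orientations only where both the input and the exemplar have hair;
    # use 1 - |cos| because strand orientation has no sign.
    overlap = exemplar_masks * mask                                      # (N,H,W)
    cos = np.abs((exemplar_dirs * dirs).sum(axis=-1))                    # (N,H,W)
    dir_err = (overlap * (1.0 - cos)).sum(axis=(1, 2)) / np.maximum(overlap.sum(axis=(1, 2)), 1.0)
    score = alpha * mask_err + (1.0 - alpha) * dir_err
    return np.argsort(score)                                             # best exemplars first
```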


IEEE Transactions on Visualization and Computer Graphics | 2014

Cone Tracing for Furry Object Rendering

Hao Qin; Menglei Chai; Qiming Hou; Zhong Ren; Kun Zhou

We present a cone-based ray tracing algorithm for high-quality rendering of furry objects with reflection, refraction and defocus effects. By aggregating many sampling rays in a pixel as a single cone, we significantly reduce the high supersampling rate required by the thin geometry of fur fibers. To reduce the cost of intersecting fur fibers with cones, we construct a bounding volume hierarchy for the fiber geometry to find the fibers potentially intersecting with cones, and use a set of connected ribbons to approximate the projections of these fibers on the image plane. The computational cost of compositing and filtering transparent samples within each cone is effectively reduced by approximating away in-cone variations of shading, opacity and occlusion. The result is a highly efficient ray tracing algorithm for furry objects which is able to render images of quality comparable to those generated by alternative methods, while significantly reducing the rendering time. We demonstrate the rendering quality and performance of our algorithm using several examples and a user study.
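
A conservative cone-versus-fiber test conveys the basic geometry: compare the distance from points on a fiber segment to the cone axis against the cone radius at that depth plus the fiber radius. This is a simplified sketch with uniform sampling along the segment, not the paper's ribbon-based approximation or BVH traversal.

```python
# Minimal sketch: conservative test of whether a thin fiber segment may
# intersect a viewing cone defined by an apex, unit axis, and spread
# (radius growth per unit depth).
import numpy as np

def cone_hits_fiber(apex, axis, spread, p0, p1, fiber_radius, samples=8):
    apex, axis = np.asarray(apex, float), np.asarray(axis, float)
    for t in np.linspace(0.0, 1.0, samples):
        p = (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
        depth = np.dot(p - apex, axis)
        if depth <= 0.0:
            continue                                   # behind the apex
        radial = np.linalg.norm((p - apex) - depth * axis)
        if radial <= spread * depth + fiber_radius:
            return True
    return False

hit = cone_hits_fiber(apex=(0, 0, 0), axis=(0, 0, 1), spread=0.01,
                      p0=(0.02, 0, 5), p1=(-0.02, 0, 5), fiber_radius=0.005)
```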


Computer Graphics Forum | 2013

As-Rigid-As-Possible Distance Field Metamorphosis

Yanlin Weng; Menglei Chai; Weiwei Xu; Yiying Tong; Kun Zhou

Widely used for morphing between objects with arbitrary topology, distance field interpolation (DFI) handles topological transition naturally without the need for correspondence or remeshing, unlike surface-based interpolation approaches. However, lack of correspondence in DFI also leads to ineffective control over the morphing process. In particular, unless the user specifies a dense set of landmarks, it is not even possible to measure the distortion of intermediate shapes during interpolation, let alone control it. To remedy such issues, we introduce an approach for establishing correspondence between the interior of two arbitrary objects, formulated as an optimal mass transport problem with a sparse set of landmarks. This correspondence enables us to compute non-rigid warping functions that better align the source and target objects as well as to incorporate local rigidity constraints to perform as-rigid-as-possible DFI. We demonstrate how our approach helps achieve flexible morphing results with a small number of landmarks.
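
For reference, the baseline that this work refines is plain distance field interpolation: blend two signed distance fields and take the zero level set of the blend as the intermediate shape. A minimal sketch with toy circle SDFs, without the warping or rigidity terms described above:

```python
# Minimal sketch: linear blending of two signed distance fields (SDFs).
import numpy as np

def dfi(phi_src, phi_dst, t):
    """phi_*: SDFs sampled on the same grid; t in [0, 1]."""
    return (1.0 - t) * phi_src + t * phi_dst

def circle_sdf(n, center, radius):
    y, x = np.mgrid[0:n, 0:n]
    return np.hypot(x - center[0], y - center[1]) - radius

phi_mid = dfi(circle_sdf(64, (20, 32), 10), circle_sdf(64, (44, 32), 14), t=0.5)
intermediate_shape = phi_mid < 0.0    # boolean mask of the blended shape
```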


Computer Graphics Forum | 2013

Hair Interpolation for Portrait Morphing

Yanlin Weng; Lvdi Wang; Xiao Li; Menglei Chai; Kun Zhou

In this paper we study the problem of hair interpolation: given two 3D hair models, we want to generate a sequence of intermediate hair models that transform from one input to the other in a way that is both smooth and aesthetically pleasing. We propose an automatic method that efficiently calculates a many-to-many strand correspondence between two or more given hair models, taking into account the multi-scale clustering structure of hair. Experiments demonstrate that hair interpolation can be used to produce more vivid portrait morphing effects and to enable a novel example-based hair styling methodology, where a user can interactively create new hairstyles by continuously exploring a “style space” spanning multiple input hair models.
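
A drastically simplified illustration of strand-level interpolation, using one-to-one nearest-root matching instead of the paper's many-to-many, cluster-aware correspondence:

```python
# Minimal sketch: pair strands across two hair models by nearest root and
# linearly blend their resampled vertices to get an in-between style.
import numpy as np

def interpolate_hair(model_a, model_b, t):
    """model_*: (N, V, 3) arrays of strands resampled to V vertices each."""
    roots_a, roots_b = model_a[:, 0, :], model_b[:, 0, :]
    # For every strand in A, find the strand in B with the closest root.
    d = np.linalg.norm(roots_a[:, None, :] - roots_b[None, :, :], axis=-1)
    partner = d.argmin(axis=1)
    return (1.0 - t) * model_a + t * model_b[partner]

blended = interpolate_hair(np.random.rand(100, 20, 3), np.random.rand(120, 20, 3), 0.5)
```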


ACM Transactions on Graphics | 2017

A data-driven approach to four-view image-based hair modeling

Meng Zhang; Menglei Chai; Hongzhi Wu; Hao Yang; Kun Zhou

We introduce a novel four-view image-based hair modeling method. Given four hair images taken from the front, back, left and right views as input, we first estimate the rough 3D shape of the hair observed in the input using a predefined database of 3D hair models, then synthesize a hair texture on the surface of the shape, from which the hair growing direction information is calculated and used to construct a 3D direction field in the hair volume. Finally, we grow hair strands from the scalp, following the direction field, to produce the 3D hair model, which closely resembles the hair in all input images. Our method does not require that all input images are from the same hair, enabling an effective way to create compelling hair models from images of considerably different hairstyles at different views. We demonstrate the efficacy of our method using a wide range of examples.
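
One simple way to realize a 3D direction field inside a hair volume is to diffuse sparse direction constraints (for example, directions derived from the synthesized surface texture) with Laplacian-style averaging. The sketch below uses that assumption and a fixed iteration count; it is not the paper's construction.

```python
# Minimal sketch: fill a voxelized hair volume with directions by diffusing
# fixed boundary constraints, then normalize the result.
import numpy as np

def diffuse_direction_field(volume_mask, seed_dirs, iters=200):
    """volume_mask: (X,Y,Z) bool hair volume; seed_dirs: (X,Y,Z,3), nonzero where constrained."""
    field = seed_dirs.copy()
    fixed = np.linalg.norm(seed_dirs, axis=-1) > 0
    for _ in range(iters):
        avg = np.zeros_like(field)
        for axis in range(3):
            avg += np.roll(field, 1, axis) + np.roll(field, -1, axis)
        avg /= 6.0
        field = np.where(fixed[..., None], seed_dirs, avg)   # keep constraints fixed
        field[~volume_mask] = 0.0                            # zero outside the hair volume
    norms = np.linalg.norm(field, axis=-1, keepdims=True)
    return np.where(norms > 1e-6, field / np.maximum(norms, 1e-6), 0.0)
```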


IEEE Transactions on Visualization and Computer Graphics | 2017

Adaptive Skinning for Interactive Hair-Solid Simulation

Menglei Chai; Changxi Zheng; Kun Zhou

Reduced hair models have proven successful for interactively simulating a full head of hair strands, building upon a fundamental assumption that only a small set of guide hairs are needed for explicit simulation, and the rest of the hairs move coherently and thus can be interpolated using guide hairs. Unfortunately, hair-solid interaction is a pathological case for traditional reduced hair models, as the motion coherence between hair strands can be arbitrarily broken by interacting with solids. In this paper, we propose an adaptive hair skinning method for interactive hair simulation with hair-solid collisions. We precompute many eligible sets of guide hairs and the corresponding interpolation relationships that are represented using a compact strand-based hair skinning model. At runtime, we simulate only guide hairs; for interpolating every other hair, we adaptively choose its guide hairs, taking into account motion coherence and potential hair-solid collisions. Further, we introduce a two-way collision correction algorithm to allow sparsely sampled guide hairs to resolve collisions with solids that can have small geometric features. Our method enables interactive simulation of more than 150K hair strands interacting with complex solid objects, using 400 guide hairs. We demonstrate the efficiency and robustness of the method with various hairstyles and user-controlled arbitrary hair-solid interactions.
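
The adaptive guide choice can be caricatured as follows: for each ordinary strand, try its precomputed candidate guide sets in order and keep the first whose interpolated result stays outside the solid. This selection rule, and the signed-distance penetration test, are hypothetical simplifications, not the paper's algorithm.

```python
# Minimal sketch: choose among precomputed guide sets for one strand by
# rejecting interpolations that penetrate a solid described by a signed
# distance function.
import numpy as np

def penetrates(strand, solid_sdf):
    """strand: (V,3); solid_sdf: callable mapping (V,3) points to signed distances."""
    return np.any(solid_sdf(strand) < 0.0)

def pick_guide_set(rest, candidate_sets, guide_rest, guide_now, solid_sdf):
    """candidate_sets: nonempty list of (guide indices, weights) precomputed for this strand."""
    for indices, weights in candidate_sets:
        disp = np.tensordot(weights, guide_now[indices] - guide_rest[indices], axes=1)
        strand = rest + disp
        if not penetrates(strand, solid_sdf):
            return strand, (indices, weights)
    # If every candidate penetrates, fall back to the last one and rely on a
    # separate collision-correction pass.
    return strand, candidate_sets[-1]
```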
