Publications


Featured research published by Wan-Chun Ma.


Eurographics Symposium on Rendering Techniques | 2007

Rapid acquisition of specular and diffuse normal maps from polarized spherical gradient illumination

Wan-Chun Ma; Tim Hawkins; Pieter Peers; Charles-Félix Chabert; Malte Weiss; Paul E. Debevec

We estimate surface normal maps of an object from either its diffuse or specular reflectance using four spherical gradient illumination patterns. In contrast to traditional photometric stereo, the spherical patterns allow normals to be estimated simultaneously from any number of viewpoints. We present two polarized lighting techniques that allow the diffuse and specular normal maps of an object to be measured independently. For scattering materials, we show that the specular normal maps yield the best record of detailed surface shape while the diffuse normals deviate from the true surface normal due to subsurface scattering, and that this effect is dependent on wavelength. We show several applications of this acquisition technique. First, we capture normal maps of a facial performance simultaneously from several viewing positions using time-multiplexed illumination. Second, we show that high-resolution normal maps based on the specular component can be used with structured light 3D scanning to quickly acquire high-resolution facial surface geometry using off-the-shelf digital still cameras. Finally, we present a real-time shading model that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.
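
As a rough illustration of the estimation step, here is a minimal sketch (not the authors' code) of recovering per-pixel normals from the four spherical gradient images: each gradient-to-constant ratio lies in [0, 1]; remapping it to [-1, 1] gives a vector aligned with the surface normal (or, for the specular component, the reflection vector), which is then normalized. The array-based interface is hypothetical.

```python
import numpy as np

def gradient_illumination_normals(I_x, I_y, I_z, I_c, eps=1e-6):
    """Per-pixel normals from four spherical gradient images.

    I_x, I_y, I_z: images lit by linear gradient patterns along each axis.
    I_c:           image lit by the constant (full-on) pattern.
    Each ratio I_i / I_c is remapped from [0, 1] to [-1, 1]; normalizing
    the stacked result yields the normal (or reflection) direction.
    """
    c = np.maximum(I_c, eps)                      # avoid division by zero
    v = np.stack([2.0 * I_x / c - 1.0,
                  2.0 * I_y / c - 1.0,
                  2.0 * I_z / c - 1.0], axis=-1)  # (H, W, 3)
    return v / np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), eps)
```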


International Conference on Computer Graphics and Interactive Techniques | 2008

Facial performance synthesis using deformation-driven polynomial displacement maps

Wan-Chun Ma; Andrew Jones; Jen-Yuan Chiang; Tim Hawkins; Sune Frederiksen; Pieter Peers; Marko Vukovic; Ming Ouhyoung; Paul E. Debevec

We present a novel method for acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps. Our method consists of an analysis phase where the relationship between motion capture markers and detailed facial geometry is inferred, and a synthesis phase where novel detailed animated facial geometry is driven solely by a sparse set of motion capture markers. For analysis, we record the actor wearing facial markers while performing a set of training expression clips. We capture real-time high-resolution facial deformations, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, we compute displacements between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in a polynomial displacement map which is parameterized according to the local deformations of the motion capture dots. For synthesis, we drive the polynomial displacement map with new motion capture data. This allows the recreation of large-scale muscle deformation, medium and fine wrinkles, and dynamic skin pore detail. Applications include the compression of existing performance data and the synthesis of new performances. Our technique is independent of the underlying geometry capture system and can be used to automatically generate high-frequency wrinkle and pore details on top of many existing facial animation systems.
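
A minimal sketch of the polynomial-displacement-map idea described above, under illustrative assumptions: each texel stores the coefficients of a low-order polynomial in two local deformation parameters, fit by least squares over the training frames and then evaluated for new motion capture input. The biquadratic basis and array shapes are assumptions, not the paper's exact formulation.

```python
import numpy as np

def pdm_basis(p):
    """Biquadratic basis in two local deformation parameters
    p = (p0, p1); an illustrative choice, not necessarily the
    paper's exact basis."""
    p0, p1 = p
    return np.array([1.0, p0, p1, p0 * p0, p0 * p1, p1 * p1])

def fit_pdm(params, displacements):
    """Fit per-texel polynomial coefficients by least squares.

    params:        (F, 2) deformation parameters for F training frames
    displacements: (F, H, W) scalar displacement per texel per frame
    returns:       (6, H, W) coefficient maps
    """
    B = np.stack([pdm_basis(p) for p in params])        # (F, 6)
    F, H, W = displacements.shape
    D = displacements.reshape(F, -1)                    # (F, H*W)
    coeffs, *_ = np.linalg.lstsq(B, D, rcond=None)      # (6, H*W)
    return coeffs.reshape(6, H, W)

def evaluate_pdm(coeffs, p):
    """Synthesize a displacement map for new parameters p."""
    return np.tensordot(pdm_basis(p), coeffs, axes=1)   # (H, W)
```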


Shape Modeling International | 2003

Skeleton extraction of 3D objects with radial basis functions

Wan-Chun Ma; Fu-Che Wu; Ming Ouhyoung

A skeleton is a lower-dimensional shape description of an object. The requirements for a skeleton differ with the application. For example, object recognition requires skeletons with primitive shape features for similarity comparison. On the other hand, surface reconstruction needs skeletons that contain detailed geometry information to reduce the approximation error in the reconstruction process. While many previous works address skeleton extraction, most of these methods are sensitive to noise, time-consuming, or restricted to specific 3D models. A practical approach for extracting skeletons from general 3D models using radial basis functions (RBFs) is proposed. A skeleton generated with this approach conforms more closely to human perception. Given a 3D polygonal model, the vertices are regarded as centers for RBF level-set construction. Next, a gradient descent algorithm is applied to each vertex to locate the local maxima of the RBF; the gradient is calculated directly from the partial derivatives of the RBF. Finally, with the connectivity inherited from the original model, local-maximum pairs are connected with links driven by the active contour model. The skeletonization process is complete when the potential energy of these links is minimized.
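
The per-vertex search for local maxima can be sketched as plain gradient ascent on an analytic RBF field (ascent, since maxima are sought; the abstract phrases it as gradient descent on the vertex position). A Gaussian kernel is assumed here for concreteness; the paper's exact kernel may differ.

```python
import numpy as np

def rbf_gradient(x, centers, weights, beta=1.0):
    """Analytic gradient of a Gaussian RBF field
    f(x) = sum_i w_i * exp(-beta * |c_i - x|^2) at point x."""
    diff = centers - x                                  # (N, 3)
    d2 = np.sum(diff ** 2, axis=1)                      # (N,)
    return 2.0 * beta * ((weights * np.exp(-beta * d2)) @ diff)

def ascend_to_local_max(x, centers, weights, step=0.01, iters=500, tol=1e-8):
    """Move a mesh vertex uphill until it reaches a local maximum
    of the RBF field; each converged position is a candidate
    skeleton point."""
    for _ in range(iters):
        g = rbf_gradient(x, centers, weights)
        if np.linalg.norm(g) < tol:
            break
        x = x + step * g
    return x
```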


The Visual Computer | 2006

Domain connected graph: the skeleton of a closed 3D shape for animation

Fu-Che Wu; Wan-Chun Ma; Rung-Huei Liang; Bing-Yu Chen; Ming Ouhyoung

In previous research, three main approaches have been employed to solve the skeleton extraction problem: medial axis transform (MAT), generalized potential field, and decomposition-based methods. These three approaches are formulated around three different concepts, namely surface variation, interior energy distribution, and the connectivity of parts. By combining these concepts, this paper creates a concise structure to represent the control skeleton of an arbitrary object. First, an algorithm is proposed to detect the end, connection, and joint points of an arbitrary 3D object. These three types of points comprise the skeleton and are the most important to consider when describing it. To keep the point extraction algorithm stable, a prong-feature detection technique and a level iso-surface function based on the repulsive force field are employed. A neighborhood relationship, inherited from the surface and able to describe how these positions connect, is then defined. Based on this relationship, the skeleton is finally constructed and named the domain connected graph (DCG). The DCG not only preserves the topology of a 3D object, but is also less sensitive than the MAT to perturbations of the shape. Moreover, results on complicated 3D models consisting of thousands of polygons show that the DCG conforms to human perception.
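
The three point types have a direct graph reading: once a skeleton graph exists, end points have one neighbor, connection points two, and joint points three or more. A toy sketch with a hypothetical edge-list representation:

```python
from collections import defaultdict

def classify_skeleton_points(edges):
    """Classify skeleton nodes by degree: end points (1 neighbor),
    connection points (2), joint points (3 or more). `edges` is a
    list of (i, j) node-index pairs; a hypothetical representation,
    not the paper's data structure."""
    degree = defaultdict(int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return {n: "end" if d == 1 else "connection" if d == 2 else "joint"
            for n, d in degree.items()}
```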


Eurographics Symposium on Rendering Techniques | 2006

Relighting human locomotion with flowed reflectance fields

Per Einarsson; Charles-Félix Chabert; Andrew Jones; Wan-Chun Ma; Bruce Lamond; Tim Hawkins; Mark T. Bolas; Sebastian Sylwan; Paul E. Debevec

We present an image-based approach for capturing the appearance of a walking or running person so they can be rendered realistically under variable viewpoint and illumination. In our approach, a person walks on a treadmill at a regular rate as a turntable slowly rotates the person's direction. As this happens, the person is filmed with a vertical array of high-speed cameras under a time-multiplexed lighting basis, acquiring a seven-dimensional dataset of the person under variable time, illumination, and viewing direction in approximately forty seconds. We process this data into a flowed reflectance field using an optical flow algorithm to correspond pixels in neighboring camera views and time samples to each other, and we use image compression to reduce the size of this data. We then use image-based relighting and a hardware-accelerated combination of view morphing and light field rendering to render the subject under user-specified viewpoint and lighting conditions. To composite the person into a scene, we use an alpha channel derived from backlighting and a retroreflective treadmill surface, and a visual hull process to render the shadows the person would cast onto the ground. We demonstrate realistic composites of several subjects into real and virtual environments using our technique.
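
The relighting step relies on the linearity of light transport: an image under any novel illumination is a weighted sum of the images captured under the lighting basis. A minimal sketch, assuming the basis images are already flowed into correspondence:

```python
import numpy as np

def relight(basis_images, light_coeffs):
    """Render under a novel light as a weighted sum of basis images.

    basis_images: (N, H, W, 3) images, one per basis lighting condition
    light_coeffs: (N,) projection of the novel environment onto the basis
    """
    return np.tensordot(light_coeffs, basis_images, axes=1)  # (H, W, 3)
```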


Eurographics | 2011

Comprehensive Facial Performance Capture

Graham Fyffe; Tim Hawkins; Chris Watts; Wan-Chun Ma; Paul E. Debevec

We present a system for recording a live dynamic facial performance, capturing highly detailed geometry and spatially varying diffuse and specular reflectance information for each frame of the performance. The result is a reproduction of the performance that can be rendered from novel viewpoints and novel lighting conditions, achieving photorealistic integration into any virtual environment. Dynamic performances are captured directly, without the need for any template geometry or static geometry scans, and processing is completely automatic, requiring no human input or guidance. Our key contributions are a heuristic for estimating facial reflectance information from gradient illumination photographs, and a geometry optimization framework that maximizes a principled likelihood function combining multi-view stereo correspondence and photometric stereo, using multi-resolution belief propagation. The output of our system is a sequence of geometries and reflectance maps, suitable for rendering in off-the-shelf software. We show results from our system rendered under novel viewpoints and lighting conditions, and validate our results by demonstrating a close match to ground truth photographs.


Pacific Conference on Computer Graphics and Applications | 2003

Automatic animation skeleton using repulsive force field

Pin-Chou Liu; Fu-Che Wu; Wan-Chun Ma; Rung-Huei Liang; Ming Ouhyoung

A method is proposed in this paper to automatically generate the animation skeleton of a model so that the model can be manipulated according to the skeleton. With our method, users can construct the skeleton in a short time and make a static model both dynamic and alive. The primary steps of our method are finding skeleton joints, connecting the joints to form an animation skeleton, and binding skin vertices to the skeleton. Initially, a repulsive force field is constructed inside a given model, and a set of points with locally minimal force magnitude is found based on the force field. Then, a modified thinning algorithm is applied to generate an initial skeleton, which is further refined to become the final result. Once the skeleton construction is complete, skin vertices are anchored to the skeleton joints according to the distances between the vertices and the joints. To build the repulsive force field, hundreds of rays are shot radially from positions inside the model, so the force field computation takes most of the execution time; an octree structure is therefore used to accelerate this process. Currently, generating the skeleton for a typical 3D model with 1,000 to 10,000 polygons takes less than 2 minutes on an Intel Pentium 4 2.4 GHz PC.
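
A brute-force sketch of the repulsive force field at one interior point, summing inverse-square contributions from sampled surface points; the paper shoots hundreds of radial rays and accelerates the computation with an octree, which this toy version omits.

```python
import numpy as np

def repulsive_force(p, surface_points, eps=1e-9):
    """Repulsive force at interior point p: sum of inverse-square
    contributions pushing p away from sampled surface points.
    Points where the force magnitude is locally minimal are
    candidate skeleton joints."""
    diff = p - surface_points                      # vectors from surface to p
    d = np.linalg.norm(diff, axis=1) + eps
    return np.sum(diff / d[:, None] ** 3, axis=0)  # unit direction / d^2
```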


The Visual Computer | 2006

Real-time triple product relighting using spherical local-frame parameterization

Wan-Chun Ma; Chun-Tse Hsiao; Ken-Yi Lee; Yung-Yu Chuang; Bing-Yu Chen

This paper addresses the problem of real-time rendering for objects with complex materials under varying all-frequency illumination and changing view. Our approach extends the triple product algorithm by using local-frame parameterization, spherical wavelets, per-pixel shading and visibility textures. Storing BRDFs with local-frame parameterization allows us to handle complex BRDFs and incorporate bump mapping more easily. In addition, it greatly reduces the data size compared to storing BRDFs with respect to the global frame. The use of spherical wavelets avoids uneven sampling and energy normalization of cubical parameterization. Finally, we use per-pixel shading and visibility textures to remove the need for fine tessellations of meshes and shift most computation from vertex shaders to more powerful pixel shaders. The resulting system can render scenes with realistic shadow effects, complex BRDFs, bump mapping and spatially-varying BRDFs under varying complex illumination and changing view at real-time frame rates on modern graphics hardware.
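
What the triple product algorithm evaluates, written densely: the shading integral is a sum over directional samples of illumination times visibility times BRDF. The paper's contribution is evaluating this sum in a sparse spherical-wavelet basis with local-frame parameterization; the dense version below only states the quantity being computed.

```python
import numpy as np

def shade_pixel(L, V, rho, d_omega):
    """Dense evaluation of the triple-product shading integral
    B = sum_s L(s) * V(s) * rho(s) * dOmega(s) over S directional
    samples: illumination L, binary visibility V, and the BRDF
    slice rho for the current view direction (all shape (S,))."""
    return np.sum(L * V * rho * d_omega)
```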


Conference on Visual Media Production | 2011

Head-Mounted Photometric Stereo for Performance Capture

Andrew Jones; Graham Fyffe; Xueming Yu; Wan-Chun Ma; Jay Busch; Ryosuke Ichikari; Mark T. Bolas; Paul E. Debevec

Head-mounted cameras are an increasingly important tool for capturing facial performances to drive virtual characters. They provide a fixed, unoccluded view of the face, useful for observing motion capture dots or as input to video analysis. However, the 2D imagery captured with these systems is typically affected by ambient light and generally fails to record subtle 3D shape changes as the face performs. We have developed a system that augments a head-mounted camera with LED-based photometric stereo. The system allows observation of the face independent of the ambient light and generates per-pixel surface normals so that the performance is recorded dynamically in 3D. The resulting data can be used for facial relighting or as better input to machine learning algorithms for driving an animated face.
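
The core of LED-based photometric stereo is the classic Lambertian solve: each pixel's intensities under K known light directions form a linear system whose least-squares solution is the albedo-scaled normal. A minimal sketch, assuming grayscale images and calibrated unit light directions:

```python
import numpy as np

def photometric_stereo(images, light_dirs, eps=1e-8):
    """Lambertian photometric stereo: per pixel, intensities satisfy
    I_k = albedo * dot(l_k, n), so the least-squares solution of the
    stacked system is the albedo-scaled normal.

    images:     (K, H, W) grayscale images under K known LEDs
    light_dirs: (K, 3) unit light directions
    """
    K, H, W = images.shape
    I = images.reshape(K, -1)                             # (K, H*W)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, eps)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```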


International Conference on Computer Graphics and Interactive Techniques | 2011

Optimized local blendshape mapping for facial motion retargeting

Wan-Chun Ma; Graham Fyffe; Paul E. Debevec

One of the popular methods for facial motion retargeting is local blendshape mapping [Pighin and Lewis 2006], where each local facial region is controlled by a tracked feature (for example, a vertex in motion capture data). To map a target motion input onto blendshapes, a pose set is chosen for each facial region with minimal retargeting error. However, since the best pose set for each region is chosen independently, the solution is likely to have unorganized pose sets across the face regions, as shown in Figure 1(b). Therefore, even though every pose set matches the local features, the retargeting result is not guaranteed to be spatially smooth. In addition, previous methods ignore temporal coherence, which is key to jitter-free results.
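
For context, the local mapping step each region performs can be sketched as a small constrained least-squares solve: given a region's blendshape displacement matrix, find non-negative weights that best reproduce the tracked feature motion. The matrix layout is hypothetical, and the paper's actual contribution (choosing pose sets coherently across regions and over time) is not shown.

```python
import numpy as np
from scipy.optimize import nnls

def solve_region_weights(B, target):
    """Non-negative least-squares blendshape weights for one region:
    minimize ||B @ w - target|| with w >= 0, where each column of B
    is one blendshape's displacement of the region's tracked feature
    and `target` is the observed feature motion (hypothetical layout).
    """
    w, residual = nnls(B, target)
    return w, residual
```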

Collaboration


Dive into Wan-Chun Ma's collaborations.

Top Co-Authors

Paul E. Debevec, University of Southern California
Tim Hawkins, University of California
Ming Ouhyoung, National Taiwan University
Andrew Jones, University of Colorado Boulder
Charles-Félix Chabert, University of Southern California
Graham Fyffe, University of Southern California
Fu-Che Wu, National Taiwan University
Mark T. Bolas, University of Southern California
Bing-Yu Chen, National Taiwan University