Publication


Featured research published by Chongyang Ma.


International Conference on Computer Graphics and Interactive Techniques | 2015

Facial performance sensing head-mounted display

Hao Li; Laura C. Trutoiu; Kyle Olszewski; Lingyu Wei; Tristan Trutna; Pei-Lun Hsieh; Aaron Nicholls; Chongyang Ma

There are currently no solutions for enabling direct face-to-face interaction between virtual reality (VR) users wearing head-mounted displays (HMDs). The main challenge is that the headset obstructs a significant portion of a user's face, preventing effective facial capture with traditional techniques. To advance virtual reality as a next-generation communication platform, we develop a novel HMD that enables 3D facial performance-driven animation in real-time. Our wearable system uses ultra-thin flexible electronic materials that are mounted on the foam liner of the headset to measure surface strain signals corresponding to upper face expressions. These strain signals are combined with a head-mounted RGB-D camera to enhance the tracking in the mouth region and to account for inaccurate HMD placement. To map the input signals to a 3D face model, we perform a single-instance offline training session for each person. For reusable and accurate online operation, we propose a short calibration step to readjust the Gaussian mixture distribution of the mapping before each use. The resulting animations are visually on par with cutting-edge depth sensor-driven facial performance capture systems and hence are suitable for social interactions in virtual worlds.
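A minimal sketch of the kind of offline-trained, per-session-recalibrated mapping the abstract describes, assuming a blendshape face model and a simple ridge regression in place of the paper's Gaussian-mixture-based mapping; all function and variable names here are illustrative, not the authors' implementation.

```python
# Sketch only: strain + mouth-camera features -> blendshape weights via ridge
# regression, with a short neutral-pose recalibration per session. The linear
# mapping and the blendshape output are simplifying assumptions.
import numpy as np

def train_mapping(X, W, reg=1e-3):
    """X: (n_frames, n_signals) sensor features; W: (n_frames, n_blendshapes) weights."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])          # bias column
    A = np.linalg.solve(X1.T @ X1 + reg * np.eye(X1.shape[1]), X1.T @ W)
    return A                                                # (n_signals + 1, n_blendshapes)

def recalibrate(neutral_frames):
    """Per-session calibration: estimate the neutral-pose signal offset."""
    return neutral_frames.mean(axis=0)

def predict_weights(A, x, neutral_offset):
    x = np.append(x - neutral_offset, 1.0)                  # re-center, add bias
    return np.clip(x @ A, 0.0, 1.0)                         # blendshape weights in [0, 1]

# toy usage with random data standing in for captured training frames
rng = np.random.default_rng(0)
X, W = rng.normal(size=(500, 16)), rng.uniform(size=(500, 30))
A = train_mapping(X, W)
w = predict_weights(A, X[0], recalibrate(X[:50]))
```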


International Conference on Computer Graphics and Interactive Techniques | 2011

Discrete element textures

Chongyang Ma; Li-Yi Wei; Xin Tong

A variety of phenomena can be characterized by repetitive small scale elements within a large scale domain. Examples include a stack of fresh produce, a plate of spaghetti, or a mosaic pattern. Although certain results can be produced via manual placement or procedural/physical simulation, these methods can be labor intensive, difficult to control, or limited to specific phenomena. We present discrete element textures, a data-driven method for synthesizing repetitive elements according to a small input exemplar and a large output domain. Our method preserves both individual element properties and their aggregate distributions. It is also general and applicable to a variety of phenomena, including different dimensionalities, different element properties and distributions, and different effects including both artistic and physically realistic ones. We represent each element by one or multiple samples whose positions encode relevant element attributes including position, size, shape, and orientation. We propose a sample-based neighborhood similarity metric and an energy optimization solver to synthesize desired outputs that observe not only input exemplars and output domains but also optional constraints such as physics, orientation fields, and boundary conditions. As a further benefit, our method can also be applied for editing existing element distributions.
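A minimal sketch, assuming a greedy offset matching, of the sample-based neighborhood similarity term the abstract mentions; it is an illustration of the general idea, not the authors' metric or optimization solver.

```python
# Sketch only: each element is reduced to a point sample; a neighborhood is the
# set of offset vectors to nearby samples, and two neighborhoods are compared
# by greedily matching offsets. The greedy matching and the energy are assumptions.
import numpy as np

def neighborhood(samples, i, radius):
    """Offset vectors from sample i to all other samples within `radius`."""
    d = samples - samples[i]
    dist = np.linalg.norm(d, axis=1)
    return d[(dist > 0) & (dist < radius)]

def neighborhood_distance(n_out, n_in):
    """Sum of squared differences between output offsets and their closest input offsets."""
    if len(n_out) == 0 or len(n_in) == 0:
        return float(len(n_out) + len(n_in))  # penalize empty/mismatched neighborhoods
    return sum(np.min(np.sum((n_in - o) ** 2, axis=1)) for o in n_out)

def synthesis_energy(output_pts, exemplar_pts, radius=0.2):
    """Energy of an output distribution w.r.t. an input exemplar (lower is better)."""
    return sum(
        min(neighborhood_distance(neighborhood(output_pts, i, radius),
                                  neighborhood(exemplar_pts, j, radius))
            for j in range(len(exemplar_pts)))
        for i in range(len(output_pts)))

rng = np.random.default_rng(1)
print(synthesis_energy(rng.uniform(size=(30, 2)), rng.uniform(size=(20, 2))))
```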


Computer Vision and Pattern Recognition | 2015

Unconstrained realtime facial performance capture

Pei-Lun Hsieh; Chongyang Ma; Jihun Yu; Hao Li

We introduce a realtime facial tracking system specifically designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor. Our framework provides uninterrupted 3D facial tracking, even in the presence of extreme occlusions such as those caused by hair, hand-to-face gestures, and wearable accessories. Anyone's face can be instantly tracked, and users can be switched without an extra calibration step. During tracking, we explicitly segment face regions from any occluding parts by detecting outliers in the shape and appearance input using an exponentially smoothed and user-adaptive tracking model as prior. Our face segmentation combines depth and RGB input data and is also robust against illumination changes. To enable continuous and reliable facial feature tracking in the color channels, we synthesize plausible face textures in the occluded regions. Our tracking model is personalized on-the-fly by progressively refining the users identity, expressions, and texture with reliable samples and temporal filtering. We demonstrate robust and high-fidelity facial tracking on a wide range of subjects with highly incomplete and largely occluded data. Our system works in everyday environments and is fully unobtrusive to the user, impacting consumer AR applications and surveillance.
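A minimal sketch of the occlusion-segmentation idea described above: flag pixels whose depth disagrees with an exponentially smoothed, user-adaptive prior, and update that prior only from inliers. The per-pixel depth template, threshold, and smoothing factor are assumptions for illustration.

```python
# Sketch only: outlier-based occlusion segmentation against a smoothed prior.
import numpy as np

def update_prior_and_segment(depth, prior, alpha=0.1, tau=0.02):
    """depth, prior: (H, W) face-region depth maps in meters."""
    residual = np.abs(depth - prior)
    occluded = residual > tau                          # hair, hands, accessories, ...
    inlier = ~occluded
    # exponential smoothing of the tracking prior, restricted to inlier pixels
    prior = np.where(inlier, (1 - alpha) * prior + alpha * depth, prior)
    return prior, occluded

# toy usage: a flat face prior, with a synthetic occluder in one corner
prior = np.full((64, 64), 0.5)
frame = prior + np.random.default_rng(2).normal(0, 0.005, (64, 64))
frame[:16, :16] += 0.1                                 # simulated occlusion
prior, mask = update_prior_and_segment(frame, prior)
print(mask[:16, :16].mean(), mask[16:, 16:].mean())    # mostly occluded vs. mostly inlier
```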


International Conference on Computer Graphics and Interactive Techniques | 2009

Motion field texture synthesis

Chongyang Ma; Li-Yi Wei; Baining Guo; Kun Zhou

A variety of animation effects such as herds and fluids contain detailed motion fields characterized by repetitive structures. Such detailed motion fields are often visually important, but tedious to specify manually or expensive to simulate computationally. Due to the repetitive nature, some of these motion fields (e.g. turbulence in fluids) could be synthesized by procedural texturing, but procedural texturing is known for its limited generality. We apply example-based texture synthesis for motion fields. Our technique is general and can take on a variety of user inputs, including captured data, manual art, and physical/procedural simulation. This data-driven approach enables artistic effects that are difficult to achieve via previous methods, such as heart shaped swirls in fluid animation. Due to the use of texture synthesis, our method is able to populate a large output field from a small input exemplar, imposing minimum user workload. Our algorithm also allows the synthesis of output motion fields not only with the same dimension as the input (e.g. 2D to 2D) but also of higher dimension, such as 3D volumetric outputs from 2D planar inputs. This cross-dimension capability supports a convenient usage scenario, i.e. the user could simply supply 2D images and our method produces a 3D motion field with similar characteristics. The motion fields produced by our method are generic, and could be combined with a variety of large-scale low-resolution motions that are easy to specify either manually or computationally but lack the repetitive structures to be characterized as textures. We apply our technique to a variety of animation phenomena, including smoke, liquid, and group motion.
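A minimal sketch of one neighborhood-matching pass of example-based synthesis applied to a 2D vector field, to illustrate the flavor of synthesizing a motion field from an exemplar; the brute-force search and single pass are simplifications, not the paper's algorithm.

```python
# Sketch only: assign each output cell the exemplar vector whose local
# neighborhood of vectors best matches the output's current neighborhood.
import numpy as np

def patch(field, y, x, r):
    return field[y - r:y + r + 1, x - r:x + r + 1].ravel()

def synthesis_pass(output, exemplar, r=1):
    H, W, _ = output.shape
    h, w, _ = exemplar.shape
    result = output.copy()
    for y in range(r, H - r):
        for x in range(r, W - r):
            target = patch(output, y, x, r)
            best, best_cost = None, np.inf
            for ey in range(r, h - r):
                for ex in range(r, w - r):
                    cost = np.sum((patch(exemplar, ey, ex, r) - target) ** 2)
                    if cost < best_cost:
                        best, best_cost = exemplar[ey, ex], cost
            result[y, x] = best
    return result

# toy usage: synthesize a 16x16 field from an 8x8 exemplar of unit vectors
rng = np.random.default_rng(3)
angles = rng.uniform(0, 2 * np.pi, (8, 8))
exemplar = np.stack([np.cos(angles), np.sin(angles)], axis=-1)
output = synthesis_pass(rng.normal(size=(16, 16, 2)), exemplar)
```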


International Conference on Computer Graphics and Interactive Techniques | 2015

Single-view hair modeling using a hairstyle database

Liwen Hu; Chongyang Ma; Linjie Luo; Hao Li

Human hair exhibits highly convoluted structures and spans an extraordinarily wide range of hairstyles, making it essential for the digitization of compelling virtual avatars yet also one of the most challenging elements to create. Cutting-edge hair modeling techniques typically rely on expensive capture devices and significant manual labor. We introduce a novel data-driven framework that can digitize complete and highly complex 3D hairstyles from a single-view photograph. We first construct a large database of manually crafted hair models from several online repositories. Given a reference photo of the target hairstyle and a few user strokes as guidance, we automatically search for multiple best matching examples from the database and combine them consistently into a single hairstyle to form the large-scale structure of the hair model. We then synthesize the final hair strands by jointly optimizing for the projected 2D similarity to the reference photo, the physical plausibility of each strand, as well as the local orientation coherency between neighboring strands. We demonstrate the effectiveness and robustness of our method on a variety of hairstyles and challenging images, and compare our system with state-of-the-art hair modeling algorithms.
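A minimal sketch of the database-search step, assuming candidate hairstyles have already been projected into 2D orientation maps: rank them by orientation agreement with the reference photo inside the hair mask. The scoring function and helper names are hypothetical, not the authors' pipeline.

```python
# Sketch only: ranking database hairstyles by projected 2D orientation similarity.
import numpy as np

def orientation_similarity(ref_orient, cand_orient, mask):
    """Orientations in radians, compared modulo pi (strand directions are unsigned)."""
    diff = np.abs(ref_orient - cand_orient) % np.pi
    diff = np.minimum(diff, np.pi - diff)
    return 1.0 - diff[mask].mean() / (np.pi / 2)       # 1 = perfect agreement

def rank_candidates(ref_orient, mask, database_orients, k=4):
    scores = [orientation_similarity(ref_orient, cand, mask) for cand in database_orients]
    order = np.argsort(scores)[::-1][:k]
    return order, [scores[i] for i in order]

# toy usage with random orientation maps standing in for projected database models
rng = np.random.default_rng(4)
ref = rng.uniform(0, np.pi, (32, 32))
mask = np.ones((32, 32), dtype=bool)
db = [rng.uniform(0, np.pi, (32, 32)) for _ in range(10)] + [ref + 0.05]
print(rank_candidates(ref, mask, db))                  # the near-copy should rank first
```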


International Conference on Computer Graphics and Interactive Techniques | 2014

Robust hair capture using simulated examples

Liwen Hu; Chongyang Ma; Linjie Luo; Hao Li

We introduce a data-driven hair capture framework based on example strands generated through hair simulation. Our method can robustly reconstruct faithful 3D hair models from unprocessed input point clouds with large amounts of outliers. Current state-of-the-art techniques use geometrically-inspired heuristics to derive global hair strand structures, which can yield implausible hair strands for hairstyles involving large occlusions, multiple layers, or wisps of varying lengths. We address this problem using a voting-based fitting algorithm to discover structurally plausible configurations among the locally grown hair segments from a database of simulated examples. To generate these examples, we exhaustively sample the simulation configurations within the feasible parameter space constrained by the current input hairstyle. The number of necessary simulations can be further reduced by leveraging symmetry and constrained initial conditions. The final hairstyle can then be structurally represented by a limited number of examples. To handle constrained hairstyles such as a ponytail, for which realistic simulation is more difficult, we allow the user to sketch a few strokes to generate strand examples through an intuitive interface. Our approach focuses on robustness and generality. Since our method is structurally plausible by construction, we ensure an improved control during hair digitization and avoid implausible hair synthesis for a wide range of hairstyles.
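A minimal sketch of the voting idea: each locally grown hair segment votes for the simulated example strand that best explains it, and only examples with broad support are kept. The strand alignment and distance measure are deliberately simplified assumptions, not the paper's fitting algorithm.

```python
# Sketch only: voting-based selection of simulated example strands.
import numpy as np

def segment_to_strand_distance(segment, strand):
    """Mean distance from segment points to their closest points on the example strand."""
    d = np.linalg.norm(segment[:, None, :] - strand[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def vote_for_examples(segments, examples, min_votes=2):
    votes = np.zeros(len(examples), dtype=int)
    for seg in segments:
        dists = [segment_to_strand_distance(seg, ex) for ex in examples]
        votes[int(np.argmin(dists))] += 1
    return np.flatnonzero(votes >= min_votes), votes

# toy usage: two example strands, five noisy segments sampled near the first one
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 50)[:, None]
examples = [np.hstack([t, np.sin(3 * t), np.zeros_like(t)]),
            np.hstack([t, np.cos(3 * t), np.ones_like(t)])]
segments = [examples[0][10:30] + rng.normal(0, 0.01, (20, 3)) for _ in range(5)]
print(vote_for_examples(segments, examples))
```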


International Conference on Computer Graphics and Interactive Techniques | 2013

Dynamic element textures

Chongyang Ma; Li-Yi Wei; Sylvain Lefebvre; Xin Tong

Many natural phenomena consist of geometric elements with dynamic motions characterized by small scale repetitions over large scale structures, such as particles, herds, threads, and sheets. Due to their ubiquity, controlling the appearance and behavior of such phenomena is important for a variety of graphics applications. However, such control is often challenging; the repetitive elements are often too numerous for manual edit, while their overall structures are often too versatile for fully automatic computation. We propose a method that facilitates easy and intuitive controls at both scales: high-level structures through spatial-temporal output constraints (e.g. overall shape and motion of the output domain), and low-level details through small input exemplars (e.g. element arrangements and movements). These controls are suitable for manual specification, while the corresponding geometric and dynamic repetitions are suitable for automatic computation. Our system takes such user controls as inputs, and generates as outputs the corresponding repetitions satisfying the controls. Our method, which we call dynamic element textures, aims to produce such controllable repetitions through a combination of constrained optimization (satisfying controls) and data driven computation (synthesizing details). We use spatial-temporal samples as the core representation for dynamic geometric elements. We propose analysis algorithms for decomposing small scale repetitions from large scale themes, as well as synthesis algorithms for generating outputs satisfying user controls. Our method is general, producing a range of artistic effects that previously required disparate and specialized techniques.
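A minimal sketch of the spatial-temporal sample representation the method builds on: each element carries a position per frame, and a spatio-temporal neighborhood gathers offsets across both space and a small time window. The window size and the flat offset list are assumptions for illustration.

```python
# Sketch only: gathering a spatio-temporal neighborhood of sample offsets.
import numpy as np

def spatiotemporal_neighborhood(trajectories, elem, frame, radius=0.3, window=2):
    """trajectories: (n_elements, n_frames, dim). Returns (dx, dt) offset pairs."""
    offsets = []
    p0 = trajectories[elem, frame]
    n_elems, n_frames, _ = trajectories.shape
    for f in range(max(0, frame - window), min(n_frames, frame + window + 1)):
        for e in range(n_elems):
            if e == elem and f == frame:
                continue
            dx = trajectories[e, f] - p0
            if np.linalg.norm(dx) < radius:
                offsets.append((dx, f - frame))
    return offsets

# toy usage: 10 elements drifting over 5 frames
rng = np.random.default_rng(6)
traj = rng.uniform(size=(10, 1, 2)) + 0.02 * np.arange(5)[None, :, None]
print(len(spatiotemporal_neighborhood(traj, elem=0, frame=2)))
```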


Computer Graphics Forum | 2014

Analogy-driven 3D style transfer

Chongyang Ma; Haibin Huang; Alla Sheffer; Evangelos Kalogerakis; Rui Wang

Style transfer aims to apply the style of an exemplar model to a target one, while retaining the target's structure. The main challenge in this process is to algorithmically distinguish style from structure, a high-level, potentially ill-posed cognitive task. Inspired by cognitive science research we recast style transfer in terms of shape analogies. In IQ testing, shape analogy queries present the subject with three shapes: source, target and exemplar, and ask them to select an output such that the transformation, or analogy, from the exemplar to the output is similar to that from the source to the target. The logical process involved in identifying the source-to-target analogies implicitly detects the structural differences between the source and target and can be used effectively to facilitate style transfer. Since the exemplar has a similar structure to the source, applying the analogy to the exemplar will provide the output we seek. The main technical challenge we address is to compute the source to target analogies, consistent with human logic. We observe that the typical analogies we look for consist of a small set of simple transformations, which when applied to the exemplar generate a continuous, seamless output model. To assemble a shape analogy, we compute an optimal set of source-to-target transformations, such that the assembled analogy best fits these criteria. The assembled analogy is then applied to the exemplar shape to produce the desired output model. We use the proposed framework to seamlessly transfer a variety of style properties between 2D and 3D objects and demonstrate significant improvements over the state of the art in style transfer. We further show that our framework can be used to successfully complete partial scans with the help of a user-provided structural template, coherently propagating scan style across the completed surfaces.
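A minimal sketch of one of the "simple transformations" that could make up an analogy, assuming part correspondences are already given: fit a least-squares affine map from a source part to its corresponding target part, then replay it on the matching exemplar part. This is an illustration, not the paper's solver.

```python
# Sketch only: fit a source-to-target affine transformation and apply it to the exemplar.
import numpy as np

def fit_affine(src_pts, tgt_pts):
    """Least-squares affine map A, t with tgt ~= src @ A.T + t (points are (n, 3))."""
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    M, *_ = np.linalg.lstsq(src_h, tgt_pts, rcond=None)   # (4, 3)
    return M[:3].T, M[3]

def apply_analogy(exemplar_pts, A, t):
    return exemplar_pts @ A.T + t

# toy usage: the target is the source stretched 2x along y; the same analogy
# is then applied to an exemplar part with a different shape
rng = np.random.default_rng(7)
src = rng.normal(size=(100, 3))
tgt = src * np.array([1.0, 2.0, 1.0])
A, t = fit_affine(src, tgt)
stylized = apply_analogy(rng.normal(size=(50, 3)) + 5.0, A, t)
```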


International Conference on Computer Graphics and Interactive Techniques | 2014

Capturing braided hairstyles

Liwen Hu; Chongyang Ma; Linjie Luo; Li-Yi Wei; Hao Li

From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.
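A minimal sketch of one simple procedural approximation of a braid, of the kind a compact parametric braid model could be sampled from to build a patch database: strand centerlines generated from phase-shifted sinusoids. The amplitudes and frequency ratio are assumptions, not the paper's family of braid models.

```python
# Sketch only: phase-shifted sinusoidal centerlines for an n-strand braid.
import numpy as np

def procedural_braid(n_strands=3, n_periods=4, samples=400, width=1.0, thickness=0.4):
    t = np.linspace(0, 2 * np.pi * n_periods, samples)
    strands = []
    for k in range(n_strands):
        phase = 2 * np.pi * k / n_strands
        x = width * np.sin(t + phase)                # side-to-side weaving
        z = thickness * np.sin(2 * (t + phase))      # over/under crossings
        y = t / (2 * np.pi)                          # length along the braid axis
        strands.append(np.stack([x, y, z], axis=1))  # (samples, 3) centerline
    return strands

centerlines = procedural_braid()
print(len(centerlines), centerlines[0].shape)         # 3 strands, (400, 3) each
```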


Computer Graphics Forum | 2014

Game level layout from design specification

Chongyang Ma; Nicholas Vining; Sylvain Lefebvre; Alla Sheffer

The design of video game environments, or levels, aims to control gameplay by steering the player through a sequence of designer‐controlled steps, while simultaneously providing a visually engaging experience. Traditionally these levels are painstakingly designed by hand, often from pre‐existing building blocks, or space templates. In this paper, we propose an algorithmic approach for automatically laying out game levels from user‐specified blocks. Our method allows designers to retain control of the gameplay flow via user‐specified level connectivity graphs, while relieving them from the tedious task of manually assembling the building blocks into a valid, plausible layout. Our method produces sequences of diverse layouts for the same input connectivity, allowing for repeated replay of a given level within a visually different, new environment. We support complex graph connectivities and various building block shapes, and are able to compute complex layouts in seconds. The two key components of our algorithm are the use of configuration spaces defining feasible relative positions of building blocks within a layout and a graph‐decomposition based layout strategy that leverages graph connectivity to speed up convergence and avoid local minima. Together these two tools quickly steer the solution toward feasible layouts. We demonstrate our method on a variety of real‐life inputs, and generate appealing layouts conforming to user specifications.
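A minimal sketch of the flavor of the configuration-space test, restricted to axis-aligned rectangular blocks: a candidate placement of a connected block is feasible if it abuts its neighbor with enough shared wall for a door and does not overlap any block already placed. The door width, rectangle representation, and tolerance are assumptions for illustration.

```python
# Sketch only: feasibility test for placing one rectangular block next to another.
def overlaps(a, b):
    """a, b: (x, y, w, h) rectangles; True if their interiors intersect."""
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def shares_wall(a, b, door=1.0, eps=1e-9):
    ax, ay, aw, ah = a; bx, by, bw, bh = b
    touch_x = abs(ax + aw - bx) < eps or abs(bx + bw - ax) < eps
    touch_y = abs(ay + ah - by) < eps or abs(by + bh - ay) < eps
    v_contact = touch_x and min(ay + ah, by + bh) - max(ay, by) >= door
    h_contact = touch_y and min(ax + aw, bx + bw) - max(ax, bx) >= door
    return v_contact or h_contact

def feasible(candidate, neighbor, placed, door=1.0):
    return shares_wall(candidate, neighbor, door) and not any(overlaps(candidate, p) for p in placed)

# toy usage: attach a 4x3 room to the right wall of a 5x5 room
placed = [(0.0, 0.0, 5.0, 5.0)]
candidate = (5.0, 1.0, 4.0, 3.0)
print(feasible(candidate, placed[0], placed))          # True: abuts without overlap
```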

Collaboration


Dive into Chongyang Ma's collaborations.

Top Co-Authors

Hao Li, University of Southern California
Weiming Dong, Chinese Academy of Sciences
Liwen Hu, University of Southern California
Xing Mei, Chinese Academy of Sciences
Pei-Lun Hsieh, University of Southern California
Alla Sheffer, University of British Columbia