Publication


Featured research published by Andrew W. Feng.


symposium on computer animation | 2013

Virtual character performance from speech

Stacy Marsella; Yuyu Xu; Margaux Lhommet; Andrew W. Feng; Stefan Scherer; Ari Shapiro

We demonstrate a method for generating a 3D virtual character performance from the audio signal by inferring the acoustic and semantic properties of the utterance. Through a prosodic analysis of the acoustic signal, we detect stress and pitch, relate them to the spoken words, and identify the agitation state. Our rule-based system performs a shallow analysis of the utterance text to determine its semantic, pragmatic and rhetorical content. Based on these analyses, the system generates facial expressions and behaviors including head movements, eye saccades, gestures, blinks and gazes. Our technique is able to synthesize the performance and generate novel gesture animations based on coarticulation with other closely scheduled animations. Because our method utilizes semantics in addition to prosody, we are able to generate virtual character performances that are more appropriate than those from methods that use only prosody. We perform a study showing that our technique outperforms methods that use prosody alone.
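As a rough illustration of the rule-based behavior generation described above, the Python sketch below maps a handful of prosodic and shallow-semantic cues to scheduled nonverbal behaviors. The feature names, rules and thresholds are invented for illustration and are not the authors' actual rule set.

```python
# Hypothetical sketch of a rule-based behavior generator: prosodic features
# (stress, agitation) and shallow semantic cues are mapped to nonverbal behaviors.
# All names and thresholds are illustrative, not the paper's actual rules.

from dataclasses import dataclass
from typing import List

@dataclass
class UtteranceAnalysis:
    stressed_words: List[str]   # words flagged by the prosodic stress analysis
    agitated: bool              # agitation state inferred from prosody
    negations: List[str]        # e.g. "not", "never" found by shallow text analysis
    deictic_words: List[str]    # e.g. "this", "there"

@dataclass
class Behavior:
    kind: str        # "head_nod", "beat_gesture", "head_shake", "point", ...
    word: str        # word the behavior is scheduled on
    intensity: float # 0..1

def generate_behaviors(a: UtteranceAnalysis) -> List[Behavior]:
    """Apply simple if-then rules to the analysis and emit scheduled behaviors."""
    out: List[Behavior] = []
    gain = 1.5 if a.agitated else 1.0              # agitation amplifies movement
    for w in a.stressed_words:                     # stressed words get beats + nods
        out.append(Behavior("beat_gesture", w, min(1.0, 0.6 * gain)))
        out.append(Behavior("head_nod", w, min(1.0, 0.4 * gain)))
    for w in a.negations:                          # negations trigger head shakes
        out.append(Behavior("head_shake", w, 0.8))
    for w in a.deictic_words:                      # deictic words trigger pointing
        out.append(Behavior("point", w, 0.7))
    return out

if __name__ == "__main__":
    analysis = UtteranceAnalysis(stressed_words=["really"], agitated=True,
                                 negations=["not"], deictic_words=["there"])
    for b in generate_behaviors(analysis):
        print(b)
```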


interactive 3d graphics and games | 2012

An example-based motion synthesis technique for locomotion and object manipulation

Andrew W. Feng; Yuyu Xu; Ari Shapiro

We synthesize natural-looking locomotion, reaching and grasping for a virtual character in order to accomplish a wide range of movement and manipulation tasks in real time. Our virtual characters can move while avoiding obstacles, as well as manipulate arbitrarily shaped objects, regardless of height, location or placement in a virtual environment. Our characters can touch, reach and grasp objects while maintaining a high quality appearance. We demonstrate a system that combines these skills in an interactive setting suitable for interactive games and simulations.


international conference on computer graphics and interactive techniques | 2015

Avatar reshaping and automatic rigging using a deformable model

Andrew W. Feng; Dan Casas; Ari Shapiro

3D scans of human figures have become widely available through online marketplaces and are relatively easy to acquire using commodity scanning hardware. In addition to static uses of such 3D models, such as 3D printed figurines or rendered 3D still imagery, there are numerous uses for animated 3D characters built from such scan data. In order to effectively use such models as dynamic 3D characters, the models must be properly rigged before they are animated. In this work, we demonstrate a method to automatically rig a 3D mesh by matching a set of morphable models against the 3D scan. Once the morphable model has been matched against the 3D scan, the skeleton position and skinning attributes are copied, resulting in a skinning and rigging that is similar in quality to the original hand-rigged model. In addition, the use of a morphable model allows us to reshape and resize the 3D scan according to approximate human proportions. Thus, a human 3D scan can be modified to be taller, shorter, fatter or skinnier. Such manipulations of the 3D scan are useful both for social science research and for visualization in applications such as fitness, body image, plastic surgery and the like.
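The skinning-transfer step can be illustrated with a simple nearest-vertex sketch: once a template mesh has been fitted to the scan, each scan vertex borrows the skinning weights of its closest template vertex and the template's skeleton is reused. This is an assumption-laden Python illustration, not the paper's exact transfer method.

```python
# Minimal nearest-neighbour sketch of copying skinning weights from a fitted
# template (morphable) model onto a 3D scan. Illustrative only.

import numpy as np

def transfer_skinning(scan_verts, template_verts, template_weights):
    """
    scan_verts       : (N, 3) vertices of the 3D scan
    template_verts   : (M, 3) vertices of the fitted template mesh
    template_weights : (M, J) per-vertex skinning weights for J joints
    returns          : (N, J) skinning weights for the scan
    """
    scan_weights = np.empty((scan_verts.shape[0], template_weights.shape[1]))
    for i, v in enumerate(scan_verts):
        d = np.linalg.norm(template_verts - v, axis=1)    # distance to every template vertex
        scan_weights[i] = template_weights[np.argmin(d)]  # copy weights of the closest one
    # renormalise so weights per vertex still sum to 1
    scan_weights /= scan_weights.sum(axis=1, keepdims=True)
    return scan_weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.random((100, 3))
    weights = rng.random((100, 4))
    weights /= weights.sum(axis=1, keepdims=True)
    scan = template + 0.01 * rng.standard_normal((100, 3))   # scan roughly aligned to template
    print(transfer_skinning(scan, template, weights).shape)  # (100, 4)
```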


motion in games | 2013

A Practical and Configurable Lip Sync Method for Games

Yuyu Xu; Andrew W. Feng; Stacy Marsella; Ari Shapiro

We demonstrate a lip animation (lip sync) algorithm for real-time applications that can be used to generate synchronized facial movements with audio generated from natural speech or a text-to-speech engine. Our method requires an animator to construct animations using a canonical set of visemes for all pairwise combinations of a reduced phoneme set (phone bigrams). These animations are then stitched together to construct the final animation, adding velocity and lip-pose constraints. This method can be applied to any character that uses the same, small set of visemes. Our method can operate efficiently in multiple languages by reusing phone bigram animations that are shared among languages, and specific word sounds can be identified and changed on a per-character basis. Our method uses no machine learning, which offers two advantages over techniques that do: 1) data can be generated for non-human characters whose faces cannot be easily retargeted from a human speaker's face, and 2) the specific facial poses or shapes used for animation can be specified during the setup and rigging stage, before the lip animation stage, thus making it suitable for game pipelines or circumstances where the speech target poses are predetermined, such as after acquisition from an online 3D marketplace.
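A hedged sketch of the phone-bigram lookup described above: the phoneme string is reduced to a small set, consecutive pairs index pre-built viseme animation clips, and the clips are stitched in order. The reduction table and clip names below are illustrative only, not the paper's actual data.

```python
# Illustrative phone-bigram viseme lookup. The phoneme reduction table and the
# clip library are invented examples, not the paper's actual assets.

from typing import Dict, List, Tuple

# illustrative reduction from full phonemes to a small viseme-relevant set
REDUCE: Dict[str, str] = {"AA": "A", "AE": "A", "B": "BMP", "M": "BMP", "P": "BMP",
                          "F": "FV", "V": "FV", "IY": "E", "EH": "E"}

def to_bigrams(phonemes: List[str]) -> List[Tuple[str, str]]:
    reduced = [REDUCE.get(p, p) for p in phonemes]
    return list(zip(reduced, reduced[1:]))

def stitch(bigrams, clip_library, default_clip="rest"):
    """Look up a pre-animated clip per bigram; fall back to a rest pose if missing."""
    return [clip_library.get(bg, default_clip) for bg in bigrams]

if __name__ == "__main__":
    clips = {("BMP", "A"): "clip_BMP_A", ("A", "BMP"): "clip_A_BMP"}  # hypothetical library
    phonemes = ["B", "AA", "M", "AA"]
    print(stitch(to_bigrams(phonemes), clips))  # ['clip_BMP_A', 'clip_A_BMP', 'clip_BMP_A']
```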


Computer Animation and Virtual Worlds | 2014

Fast, automatic character animation pipelines

Andrew W. Feng; Yazhou Huang; Yuyu Xu; Ari Shapiro

Humanoid three-dimensional (3D) models can be easily acquired through various sources, including through online marketplaces. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be incorporated within seconds into an animation system and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis and lip syncing. We offer a set of heuristics that can associate arbitrary joint names with canonical ones and describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton on-the-fly. We believe that such a system will vastly increase the use of 3D interactive characters due to the ease with which new models can be animated.
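The joint-name heuristic can be illustrated with a small keyword-matching sketch: raw joint names are normalized and matched against keyword lists for a canonical skeleton. The keyword table below is invented for illustration and is not the paper's actual mapping.

```python
# Rough sketch of mapping arbitrary skeleton joint names to canonical ones via
# keyword heuristics. The keyword table is illustrative only.

from typing import Dict, List, Optional

CANONICAL_KEYWORDS: Dict[str, List[str]] = {
    "hips":      ["hips", "pelvis", "root"],
    "spine":     ["spine", "chest", "torso"],
    "head":      ["head"],
    "left_arm":  ["leftarm", "l_upperarm", "arm_l", "lshoulder"],
    "right_arm": ["rightarm", "r_upperarm", "arm_r", "rshoulder"],
    "left_leg":  ["leftupleg", "l_thigh", "leg_l"],
    "right_leg": ["rightupleg", "r_thigh", "leg_r"],
}

def normalize(name: str) -> str:
    return name.lower().replace(" ", "").replace("-", "").replace(":", "")

def map_joint(name: str) -> Optional[str]:
    """Return the canonical joint a raw name most likely corresponds to, or None."""
    n = normalize(name)
    for canonical, keywords in CANONICAL_KEYWORDS.items():
        if any(k in n for k in keywords):
            return canonical
    return None

if __name__ == "__main__":
    for raw in ["mixamorig:LeftArm", "Bip01 Pelvis", "R_Thigh", "weird_bone_42"]:
        print(raw, "->", map_joint(raw))
```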


motion in games | 2012

An Analysis of Motion Blending Techniques

Andrew W. Feng; Yazhou Huang; Marcelo Kallmann; Ari Shapiro

Motion blending is a widely used technique for character animation. The main idea is to blend similar motion examples according to blending weights, in order to synthesize new motions parameterizing high level characteristics of interest. We present in this paper an in-depth analysis and comparison of four motion blending techniques: Barycentric interpolation, Radial Basis Function, K-Nearest Neighbors and Inverse Blending optimization. Comparison metrics were designed to measure the performance across different motion categories on criteria including smoothness, parametric error and computation time. We have implemented each method in our character animation platform SmartBody and we present several visualization renderings that provide a window for gleaning insights into the underlying pros and cons of each method in an intuitive way.
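As one concrete example of the compared techniques, the sketch below shows generic Gaussian-kernel Radial Basis Function interpolation of blend weights over example motions: weights are computed so that each example reproduces itself exactly at its own parameter point. This is a textbook RBF illustration, not the SmartBody implementation.

```python
# Generic RBF interpolation of blend weights over K example motions, each tagged
# with a parameter point (e.g. a reach target). Illustrative sketch only.

import numpy as np

def rbf_blend_weights(example_params: np.ndarray, query: np.ndarray, sigma: float = 1.0):
    """
    example_params : (K, D) parameter point of each example motion
    query          : (D,)   desired parameter value
    returns        : (K,)   blend weights over the K example motions
    """
    def kernel(a, b):
        return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2.0 * sigma ** 2))

    K = example_params.shape[0]
    # Gram matrix of kernel values between all example parameter points
    G = kernel(example_params[:, None, :], example_params[None, :, :])
    # RBF coefficients chosen so that weight i equals 1 at example i (G @ C = I)
    C = np.linalg.solve(G, np.eye(K))
    w = C.T @ kernel(example_params, query)  # evaluate the K basis functions at the query
    return w / w.sum()                       # normalise so the blend is a weighted average

if __name__ == "__main__":
    examples = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # e.g. 2-D reach targets
    print(rbf_blend_weights(examples, np.array([0.2, 0.1])))
```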


international conference on computer graphics and interactive techniques | 2015

A fast and robust pipeline for populating mobile AR scenes with gamified virtual characters

Margarita Papaefthymiou; Andrew W. Feng; Ari Shapiro; George Papagiannakis

In this work we present a complete methodology for robust authoring of AR virtual characters powered by a versatile character animation framework (SmartBody), using only mobile devices. We can author and fully augment any open space with life-size, animated, geometrically accurately registered virtual characters in less than 1 minute using only modern smartphones or tablets, and then automatically revive this augmentation for subsequent activations from the same spot in a few seconds. We also handle rotations of the AR objects during scene authoring efficiently using Geometric Algebra rotors, yielding higher-quality visual results. Moreover, we have implemented a mobile version of the Precomputed Radiance Transfer global illumination algorithm for real-time diffuse, shadowed characters, using High Dynamic Range (HDR) environment maps, integrated in our open-source OpenGL Geometric Application (glGA) framework. Effective character interaction, based on the SmartBody framework, plays a fundamental role in attaining a high level of believability and makes the AR application more attractive and immersive.
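The diffuse Precomputed Radiance Transfer step that the paper builds on reduces, at runtime, to one dot product per vertex: precomputed per-vertex transfer vectors are dotted with the spherical-harmonics coefficients of the HDR environment map. The sketch below assumes a 3-band SH layout and synthetic data; it is an illustration of standard diffuse PRT, not the glGA implementation.

```python
# Minimal diffuse PRT shading sketch: per-vertex transfer coefficients (cosine
# term and visibility baked in offline) dotted with environment-map SH coefficients.

import numpy as np

N_COEFFS = 9  # 3 SH bands (l = 0..2), a common choice for diffuse PRT

def shade_vertices(transfer: np.ndarray, light_sh: np.ndarray) -> np.ndarray:
    """
    transfer : (V, N_COEFFS)  precomputed per-vertex transfer coefficients
    light_sh : (N_COEFFS, 3)  RGB SH coefficients of the HDR environment map
    returns  : (V, 3)         outgoing diffuse radiance per vertex
    """
    return np.clip(transfer @ light_sh, 0.0, None)  # one dot product per vertex at runtime

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    transfer = rng.random((4, N_COEFFS))        # 4 vertices, fake transfer vectors
    light_sh = rng.random((N_COEFFS, 3)) * 0.1  # fake environment SH projection
    print(shade_vertices(transfer, light_sh))
```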


motion in games | 2012

Automating the Transfer of a Generic Set of Behaviors Onto a Virtual Character

Andrew W. Feng; Yazhou Huang; Yuyu Xu; Ari Shapiro

Humanoid 3D models can be easily acquired through various sources, including online. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be incorporated within seconds into an animation system, and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis and lip syncing. We offer a set of heuristics that can associate arbitrary joint names with canonical ones, and describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton. We believe that such a system will vastly increase the use of 3D interactive characters due to the ease with which new models can be animated.
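Complementing the joint-name sketch above, the retargeting step can be illustrated per frame: once joints are mapped to canonical names, local rotations are copied across and the root translation is scaled by the height ratio of the two skeletons. The data layout and scaling rule below are assumptions for illustration, not the paper's algorithm.

```python
# Minimal per-frame retargeting sketch: copy local rotations through a joint-name
# map and scale root translation by skeleton height ratio. Illustrative only.

from typing import Dict, Tuple

Quat = Tuple[float, float, float, float]
Vec3 = Tuple[float, float, float]

def retarget_frame(src_rotations: Dict[str, Quat],
                   src_root_pos: Vec3,
                   joint_map: Dict[str, str],
                   src_height: float,
                   dst_height: float) -> Tuple[Dict[str, Quat], Vec3]:
    """Map one animation frame from a source skeleton onto a target skeleton."""
    scale = dst_height / src_height
    dst_rotations = {joint_map[j]: q for j, q in src_rotations.items() if j in joint_map}
    dst_root_pos = tuple(c * scale for c in src_root_pos)
    return dst_rotations, dst_root_pos

if __name__ == "__main__":
    joint_map = {"Bip01 Pelvis": "hips", "Bip01 Head": "head"}
    rotations = {"Bip01 Pelvis": (0.0, 0.0, 0.0, 1.0), "Bip01 Head": (0.0, 0.1, 0.0, 0.995)}
    print(retarget_frame(rotations, (0.0, 92.0, 5.0), joint_map, 180.0, 160.0))
```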


Computer Animation and Virtual Worlds | 2017

Just‐in‐time, viable, 3‐D avatars from scans

Andrew W. Feng; Evan Suma Rosenberg; Ari Shapiro

We demonstrate a system that can generate, in less than 20 min and through a nearly automatic process, a photorealistic, interactive 3-D character from a human subject that is capable of movement, emotion, speech, and gesture, without the need for 3-D artist intervention or specialized technical knowledge. Our method uses mostly commodity or off-the-shelf hardware. We demonstrate the just-in-time use of generating such 3-D models for virtual and augmented reality, games, simulation, and communication. We anticipate that the inexpensive generation of such photorealistic models will be useful in many venues where just-in-time 3-D reconstructions of digital avatars that resemble particular human subjects are necessary.


motion in games | 2016

The effect of operating a virtual doppelganger in a 3D simulation

Gale M. Lucas; Evan Szablowski; Jonathan Gratch; Andrew W. Feng; Tiffany Huang; Jill Boberg; Ari Shapiro

Recent advances in scanning technology have enabled the widespread capture of 3D character models based on human subjects. Intuition suggests that, with these new capabilities to create avatars that look like their users, every player should have his or her own avatar to play video games or simulations. We explicitly test the impact of having one's own avatar (vs. a yoked control avatar) in a simulation (i.e., a maze-running task with mines). We test the impact of avatar identity on both subjective variables (e.g., feeling connected and engaged, liking the avatar's appearance, feeling upset when the avatar is injured, enjoying the game) and behavioral variables (e.g., time to complete the task, speed, number of mines triggered, riskiness of the maze path chosen). Results indicate that having an avatar that looks like the user improves the subjective experience, but there is no significant effect on how users perform in the simulation.

Collaboration


Dive into Andrew W. Feng's collaborations.

Top Co-Authors

Ari Shapiro, University of Southern California
Yuyu Xu, University of Southern California
Evan A. Suma, University of Southern California
Sin-Hwa Kang, University of Southern California
Anton Leuski, University of Southern California
Graham Fyffe, University of Southern California
Jill Boberg, University of Southern California