Publication


Featured research published by Yazhou Huang.


Motion in Games | 2010

Motion parameterization with inverse blending

Yazhou Huang; Marcelo Kallmann

Motion blending is a popular motion synthesis technique which interpolates similar motion examples according to blending weights parameterizing high-level characteristics of interest. We present in this paper an optimization framework for determining blending weights that produce motions precisely satisfying multiple given spatial constraints. Our proposed method is simpler than previous approaches, yet it quickly achieves locally optimal solutions without pre-processing of basis functions. The effectiveness of our method is demonstrated on two classes of problems: 1) we show precise control of end-effectors during the execution of diverse upper-body actions, and 2) we address the problem of synthesizing walking animations with precise feet placements, demonstrating the ability to simultaneously meet multiple constraints at different frames. Several experimental results demonstrate that the proposed optimization approach is simple to implement and effectively achieves realistic results with precise motion control.
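
The core idea can be pictured with a minimal sketch: blending weights are optimized so that the weight-blended result satisfies a spatial constraint. The per-example end-effector positions, the single target and the use of SciPy's SLSQP solver below are illustrative simplifications, not the paper's actual formulation, which optimizes weights over full example motions.

```python
# Minimal sketch of inverse blending: find blending weights whose interpolated
# end-effector position meets a spatial constraint. Example data is made up.
import numpy as np
from scipy.optimize import minimize

# End-effector positions reached by three example motions (illustrative values).
examples = np.array([[0.40, 0.20, 1.00],
                     [0.60, 0.10, 0.90],
                     [0.50, 0.30, 1.10]])
target = np.array([0.55, 0.22, 1.02])       # desired end-effector position

def blend_error(w):
    """Distance between the weight-blended end-effector and the target."""
    return np.linalg.norm(w @ examples - target)

k = len(examples)
w0 = np.full(k, 1.0 / k)                    # start from uniform blending weights
result = minimize(blend_error, w0,
                  bounds=[(0.0, 1.0)] * k,                    # weights stay in [0, 1]
                  constraints={"type": "eq",                  # weights sum to one
                               "fun": lambda w: w.sum() - 1.0})
print("optimized blending weights:", result.x)
```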


Computer Animation and Virtual Worlds | 2014

Fast, automatic character animation pipelines

Andrew W. Feng; Yazhou Huang; Yuyu Xu; Ari Shapiro

Humanoid three-dimensional (3D) models can be easily acquired through various sources, including online marketplaces. The use of such a model within a game or simulation environment requires human input and intervention in order to associate the model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be incorporated within seconds into an animation system and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis and lip syncing. We offer a set of heuristics that can associate arbitrary joint names with canonical ones and describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton on-the-fly. We believe that such a system will vastly increase the use of 3D interactive characters due to the ease with which new models can be animated.
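
The joint-name heuristic can be illustrated with a small sketch like the one below; the canonical names, alias lists and Mixamo-style input names are invented for illustration and are not the paper's actual tables.

```python
# Illustrative heuristic that maps arbitrary skeleton joint names onto a small
# canonical set by normalized substring matching. Aliases are invented examples.
CANONICAL_ALIASES = {
    "hips":          ["hips", "pelvis"],
    "spine":         ["spine", "chest", "torso"],
    "head":          ["head"],
    "left_shoulder": ["leftshoulder", "lshoulder", "shoulderl"],
    "left_elbow":    ["leftelbow", "lelbow", "leftforearm"],
    "left_wrist":    ["leftwrist", "lwrist", "lefthand"],
}

def normalize(name):
    """Lower-case a joint name and strip separators so naming variants compare equal."""
    return name.lower().replace("_", "").replace("-", "").replace(" ", "")

def map_to_canonical(skeleton_joints):
    """Return a dict mapping each canonical joint to the first matching input joint."""
    mapping = {}
    for canonical, aliases in CANONICAL_ALIASES.items():
        for joint in skeleton_joints:
            if any(alias in normalize(joint) for alias in aliases):
                mapping[canonical] = joint
                break
    return mapping

print(map_to_canonical(["mixamorig:Hips", "mixamorig:LeftShoulder", "mixamorig:LeftHand"]))
```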


Intelligent Virtual Agents | 2010

Interactive motion modeling and parameterization by direct demonstration

Carlo Camporesi; Yazhou Huang; Marcelo Kallmann

While interactive virtual humans are becoming widely used in education, training and therapeutic applications, building animations which are both realistic and parameterized with respect to a given scenario remains a complex and time-consuming task. To improve this situation, we propose a framework based on the direct demonstration and parameterization of motions. The presented approach addresses three important aspects of the problem in an integrated fashion: (1) our framework relies on an interactive real-time motion capture interface that empowers non-skilled animators to model realistic upper-body actions and gestures by direct demonstration; (2) our interface also accounts for the interactive definition of clustered example motions, in order to well represent the variations of interest for a given motion being modeled; and (3) we present an inverse blending optimization technique which solves the problem of precisely parameterizing a cluster of example motions with respect to arbitrary spatial constraints. The optimization is efficiently solved online, allowing autonomous virtual humans to precisely perform learned actions and gestures with respect to arbitrarily given targets. Our proposed framework has been implemented in an immersive multi-tile stereo visualization system, achieving a powerful and intuitive interface for programming generic parameterized motions by demonstration.


Motion in Games | 2012

An Analysis of Motion Blending Techniques

Andrew W. Feng; Yazhou Huang; Marcelo Kallmann; Ari Shapiro

Motion blending is a widely used technique for character animation. The main idea is to blend similar motion examples according to blending weights, in order to synthesize new motions parameterizing high-level characteristics of interest. We present in this paper an in-depth analysis and comparison of four motion blending techniques: Barycentric interpolation, Radial Basis Function, K-Nearest Neighbors and Inverse Blending optimization. Comparison metrics were designed to measure performance across different motion categories on criteria including smoothness, parametric error and computation time. We have implemented each method in our character animation platform SmartBody, and we present several visualization renderings that give intuitive insight into the underlying pros and cons of each method.
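
Two of the compared schemes can be sketched as weight functions over a parameter space. The versions below are simplified normalized-kernel variants with made-up example parameters, not the exact formulations analyzed in the paper.

```python
# Simplified sketches of two weighting schemes compared in the paper:
# k-nearest-neighbor weighting and Gaussian radial-basis weighting.
# Example parameter points (e.g. reach targets) and the query are made up.
import numpy as np

params = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # example motions' parameters
query = np.array([0.3, 0.6])                                          # desired parameter value

def knn_weights(query, params, k=3):
    """Inverse-distance weights over the k nearest example points (zero elsewhere)."""
    d = np.linalg.norm(params - query, axis=1)
    idx = np.argsort(d)[:k]
    w = np.zeros(len(params))
    w[idx] = 1.0 / (d[idx] + 1e-9)
    return w / w.sum()

def rbf_weights(query, params, sigma=0.5):
    """Normalized Gaussian radial-basis weights over all example points."""
    d = np.linalg.norm(params - query, axis=1)
    w = np.exp(-(d ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

print("kNN weights:", knn_weights(query, params))
print("RBF weights:", rbf_weights(query, params))
```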


Intelligent Robots and Systems | 2011

Planning humanlike actions in blending spaces

Yazhou Huang; Mentar Mahmudi; Marcelo Kallmann

We introduce an approach for enabling sampling-based planners to compute motions with humanlike appearance. The proposed method is based on a space of blendable example motions collected by motion capture. This space is explored by a sampling-based planner that is able to produce motions around obstacles while keeping solutions similar to the original examples. The results therefore largely maintain the humanlike characteristics observed in the example motions. The method is applied to generic upper-body actions and is complemented by a locomotion planner that searches for suitable body placements for executing upper-body actions successfully. As a result, our overall multi-modal planning method is able to automatically coordinate whole-body motions for action execution among obstacles, and the produced motions remain similar to example motions given as input to the system.
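
One way to picture the planner is as a sampling loop over blending-weight vectors that keeps only collision-free configurations, roughly as sketched below; the blend and collision functions are placeholders, and the sketch omits the locomotion planner and the mechanisms that keep solutions close to the example motions.

```python
# Rough sketch of sampling-based planning in a blending space: grow a tree over
# blending-weight vectors, keeping only collision-free blended poses.
# blend_pose and in_collision are placeholders for the real system.
import numpy as np

def blend_pose(weights):
    """Placeholder: interpolate the example motions with the given weights."""
    return weights                        # stand-in for a full-body pose

def in_collision(pose, obstacles):
    """Placeholder collision test against the environment."""
    return False

def plan_in_blend_space(start_w, goal_w, obstacles, step=0.1, iters=500, goal_tol=0.05):
    k = len(start_w)
    tree = {tuple(start_w): None}         # node -> parent
    for _ in range(iters):
        sample = np.random.dirichlet(np.ones(k))               # random point on the weight simplex
        nearest = min(tree, key=lambda n: np.linalg.norm(np.array(n) - sample))
        direction = sample - np.array(nearest)
        new_w = np.array(nearest) + step * direction / (np.linalg.norm(direction) + 1e-9)
        new_w = np.clip(new_w, 0.0, None)
        new_w /= new_w.sum()                                    # project back onto the simplex
        if not in_collision(blend_pose(new_w), obstacles):
            tree[tuple(new_w)] = nearest
            if np.linalg.norm(new_w - np.asarray(goal_w)) < goal_tol:
                break                                           # goal region reached
    return tree

tree = plan_in_blend_space([1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 1.0], obstacles=None)
```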


Motion in Games | 2012

Automating the Transfer of a Generic Set of Behaviors Onto a Virtual Character

Andrew W. Feng; Yazhou Huang; Yuyu Xu; Ari Shapiro

Humanoid 3D models can be easily acquired through various sources, including online. The use of such models within a game or simulation environment requires human input and intervention in order to associate such a model with a relevant set of motions and control mechanisms. In this paper, we demonstrate a pipeline where humanoid 3D models can be incorporated within seconds into an animation system, and infused with a wide range of capabilities, such as locomotion, object manipulation, gazing, speech synthesis and lip syncing. We offer a set of heuristics that can associate arbitrary joint names with canonical ones, and describe a fast retargeting algorithm that enables us to instill a set of behaviors onto an arbitrary humanoid skeleton. We believe that such a system will vastly increase the use of 3D interactive characters due to the ease with which new models can be animated.


International Conference on Robotics and Automation | 2010

A skill-based motion planning framework for humanoids

Marcelo Kallmann; Yazhou Huang; Robert Backman

This paper presents a multi-skill motion planner which is able to sequentially synchronize parameterized motion skills in order to achieve humanoid motions exhibiting complex whole-body coordination. The proposed approach integrates sampling-based motion planning in continuous parametric spaces with discrete search over skill choices, selecting the search strategy according to the functional type of each skill being coordinated. As a result, the planner is able to sequence arbitrary motion skills (such as reaching, balance adjustment and stepping) in order to achieve the complex motions needed for solving humanoid reaching tasks in realistic environments. The proposed framework is applied to the HOAP-3 humanoid robot and several results are presented.
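
The discrete search over skill choices can be pictured as a breadth-first search over short skill sequences, as in the sketch below; the skill set, the per-skill simulation and the goal test are placeholders, and the real planner interleaves this search with sampling in each skill's continuous parameter space.

```python
# Hedged sketch of discrete search over skill choices: breadth-first search over
# short sequences of parameterized skills. apply_skill and reaches_goal are stubs.
from collections import deque

SKILLS = ["step", "balance_adjust", "reach"]   # illustrative skill set

def apply_skill(state, skill):
    """Placeholder: run the skill's own parametric planner from this state and
    return the resulting state, or None if no feasible parameters are found."""
    return state + (skill,)

def reaches_goal(state):
    """Placeholder goal test, e.g. hand at the target while the robot is balanced."""
    return state[-2:] == ("step", "reach")

def plan_skill_sequence(start_state, max_depth=4):
    frontier = deque([(start_state, [])])
    while frontier:
        state, sequence = frontier.popleft()
        if reaches_goal(state):
            return sequence
        if len(sequence) < max_depth:
            for skill in SKILLS:
                nxt = apply_skill(state, skill)
                if nxt is not None:
                    frontier.append((nxt, sequence + [skill]))
    return None

print(plan_skill_sequence(("start",)))         # e.g. ['step', 'reach']
```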


Intelligent Virtual Agents | 2014

Planning Motions for Virtual Demonstrators

Yazhou Huang; Marcelo Kallmann

In order to deliver information effectively, virtual human demonstrators must be able to address complex spatial constraints and at the same time replicate motion coordination patterns observed in human-human interactions. We introduce in this paper a whole-body motion planning and synthesis framework that coordinates locomotion, body positioning, action execution and gaze behavior for generic demonstration tasks among obstacles.


Intelligent Virtual Agents | 2011

Modeling gaze behavior for virtual demonstrators

Yazhou Huang; Justin L. Matthews; Teenie Matlock; Marcelo Kallmann

Achieving autonomous virtual humans with coherent and natural motions is key to their effectiveness in many educational, training and therapeutic applications. Among the several aspects to be considered, gaze behavior is an important non-verbal communication channel that plays a vital role in the effectiveness of the obtained animations. This paper focuses on analyzing gaze behavior in demonstrative tasks involving arbitrary locations for target objects and listeners. Our analysis is based on full-body motions captured from human participants performing real demonstrative tasks in varied situations. We address temporal information and coordination with targets and observers at varied positions.


International Conference on Human Computer Interaction | 2009

Interactive Demonstration of Pointing Gestures for Virtual Trainers

Yazhou Huang; Marcelo Kallmann

While interactive virtual humans are becoming widely used in education, training and delivery of instructions, building the animations required for such interactive characters in a given scenario remains complex and time-consuming work. One of the key problems is that most systems controlling virtual humans are mainly based on pre-defined animations which have to be re-built by skilled animators specifically for each scenario. To improve this situation, this paper proposes a framework based on the direct demonstration of motions via a simplified and easy-to-wear set of motion capture sensors. The proposed system integrates motion segmentation, clustering and interactive motion blending in order to enable a seamless interface for programming motions by demonstration.

Collaboration


Dive into Yazhou Huang's collaborations.

Top Co-Authors

Andrew W. Feng (University of Southern California)
Ari Shapiro (University of Southern California)
Teenie Matlock (University of California)
Yuyu Xu (University of Southern California)
Mentar Mahmudi (University of California)
Robert Backman (University of California)