Publication


Featured research published by Shihong Xia.


Computers & Graphics | 2010

Continuum crowd simulation in complex environments

Hao Jiang; Wenbin Xu; Tianlu Mao; Chunpeng Li; Shihong Xia; Zhaoqi Wang

This paper presents a novel approach for crowd simulation in complex environments. Our method is based on the continuum model proposed by Treuille et al. [13]. Compared to the original method, our solution is well-suited for complex environments. First, we present an environmental structure and a corresponding discretization scheme that help us to organize and simulate crowds in large-scale scenarios. Second, additional discomfort zones around obstacles are auto-generated to keep a certain, psychologically plausible distance between pedestrians and obstacles, making it easier to obtain smoother trajectories when people move around these obstacles. Third, we propose a technique for density conversion; the density field is dynamically affected by each individual so that it can be adapted to different grid resolutions. The experimental results demonstrate that our hybrid solution can perform plausible crowd flow simulations in complex dynamic environments.
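
Two of these ideas lend themselves to a compact illustration: discomfort zones around obstacles and resolution-independent density splatting. The sketch below is a minimal reconstruction under assumed parameters (a Gaussian falloff with an arbitrary radius, a toy 20x20 grid); it is not the authors' implementation, and the function names are hypothetical.

import numpy as np

def discomfort_field(obstacles, shape, radius=3.0):
    """Discomfort falls off with distance from obstacle cells (assumed Gaussian falloff)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    field = np.zeros(shape)
    for (oy, ox) in obstacles:                       # brute force; fine for a sketch
        d2 = (ys - oy) ** 2 + (xs - ox) ** 2
        field = np.maximum(field, np.exp(-d2 / (2 * radius ** 2)))
    field[[o[0] for o in obstacles], [o[1] for o in obstacles]] = np.inf  # obstacle cells are impassable
    return field

def splat_density(agents, shape):
    """Bilinear splatting: each agent contributes to the 4 surrounding cells,
    so the density field stays consistent when the grid resolution changes."""
    rho = np.zeros(shape)
    for x, y in agents:                              # agent positions in cell units
        i, j = int(np.floor(y)), int(np.floor(x))
        fy, fx = y - i, x - j
        for di, wy in ((0, 1 - fy), (1, fy)):
            for dj, wx in ((0, 1 - fx), (1, fx)):
                if 0 <= i + di < shape[0] and 0 <= j + dj < shape[1]:
                    rho[i + di, j + dj] += wy * wx
    return rho

grid = (20, 20)
print(discomfort_field([(10, 10), (10, 11)], grid).round(2)[8:13, 8:14])
print(splat_density([(5.2, 5.7), (5.9, 5.1)], grid)[4:8, 4:8])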


Virtual Reality Software and Technology | 2009

Indexing and retrieval of human motion data by a hierarchical tree

Shuangyuan Wu; Zhaoqi Wang; Shihong Xia

For the convenient reuse of large-scale 3D motion capture data, browsing and searching methods for the data should be explored. In this paper, an efficient indexing and retrieval approach for human motion data is presented based on a novel similarity metric. We divide the human character model into three partitions to reduce the spatial complexity and measure the temporal similarity of each partition with a self-organizing map and the Smith–Waterman algorithm. The overall similarity between two motion clips is then obtained by integrating the similarities of the separate body partitions. A hierarchical clustering method is then applied, which not only clusters the motion data accurately, but also reveals the relationships between different motion types through a binary tree structure. With our typical-cluster locating algorithm and motion motif mining method, fast and accurate retrieval can be performed. The experimental results show the effectiveness of our approach.
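
The local similarity measure can be illustrated with a plain Smith–Waterman scorer over symbol strings, where each symbol stands for a SOM node index. This is a generic sketch with assumed match/mismatch/gap scores, not the paper's exact scoring scheme.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score between two 'motion strings' (sequences of SOM node labels).
    The scoring constants here are illustrative assumptions."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Each letter stands for one SOM node index; similar motions share long local runs.
walk_a = "ABBCCDDE"
walk_b = "XABBCDDEY"
kick   = "MNNOPPQ"
print(smith_waterman(walk_a, walk_b))   # high score: long common local segment
print(smith_waterman(walk_a, kick))     # low score: little local similarity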


International Conference on Computer Graphics and Interactive Techniques | 2015

Realtime style transfer for unlabeled heterogeneous human motion

Shihong Xia; Congyi Wang; Jinxiang Chai; Jessica K. Hodgins

This paper presents a novel solution for realtime generation of stylistic human motion that automatically transforms unlabeled, heterogeneous motion data into new styles. The key idea of our approach is an online learning algorithm that automatically constructs a series of local mixtures of autoregressive models (MAR) to capture the complex relationships between styles of motion. We construct local MAR models on the fly by searching for the closest examples of each input pose in the database. Once the model parameters are estimated from the training data, the model adapts the current pose with simple linear transformations. In addition, we introduce an efficient local regression model to predict the timings of synthesized poses in the output style. We demonstrate the power of our approach by transferring stylistic human motion for a wide variety of actions, including walking, running, punching, kicking, jumping and transitions between those behaviors. Our method achieves superior performance in a comparison against alternative methods. We have also performed experiments to evaluate the generalization ability of our data-driven model as well as the key components of our system.
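
As a rough sketch of the "local model fitted on the fly from the closest examples" idea, the code below substitutes a single local ridge regression for the paper's mixtures of autoregressive models and ignores the temporal terms; the database layout, k, and the regularization weight are assumptions.

import numpy as np

def local_style_transfer(x, X_neutral, Y_styled, k=20, lam=1e-3):
    """For the query pose x, find the k nearest neutral poses in the database,
    fit a local ridge regression neutral -> styled, and apply it to x.
    This is a simplified stand-in for the paper's local MAR models."""
    d = np.linalg.norm(X_neutral - x, axis=1)
    idx = np.argsort(d)[:k]
    Xk = np.hstack([X_neutral[idx], np.ones((k, 1))])   # affine local model
    Yk = Y_styled[idx]
    W = np.linalg.solve(Xk.T @ Xk + lam * np.eye(Xk.shape[1]), Xk.T @ Yk)
    return np.append(x, 1.0) @ W

# Toy data: 2-D "poses"; the styled pose is a rotated, offset copy of the neutral one.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
R = np.array([[0.8, -0.6], [0.6, 0.8]])
Y = X @ R.T + np.array([0.5, -0.2])
print(local_style_transfer(np.array([0.3, -0.1]), X, Y))   # close to R @ x + offset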


The Visual Computer | 2009

Efficient motion data indexing and retrieval with local similarity measure of motion strings

Shuangyuan Wu; Shihong Xia; Zhaoqi Wang; Chunpeng Li

Widely used in data-driven computer animation, motion capture data exhibits complexity both spatially and temporally. The indexing and retrieval of motion data is a hard task that is not yet fully solved. In this paper, we present an efficient motion data indexing and retrieval method based on a self-organizing map and the Smith–Waterman string similarity metric. Existing motion clips are first used to train a self-organizing map and are then indexed by the nodes of the map to obtain motion strings. The Smith–Waterman algorithm, a local similarity measure for string comparison, is used to cluster the motion strings. The motion motif of each cluster is then extracted for example-based query retrieval. As an unsupervised learning approach, our method can cluster motion clips automatically without needing to know their motion types. Experimental results on a dataset of various kinds of motion show that the proposed method not only clusters the motion data accurately but also retrieves appropriate motion data efficiently.
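
The indexing step (frames -> SOM node labels -> "motion strings") can be sketched with a tiny 1-D self-organizing map; the map size, topology, learning rate and the toy pose features below are assumptions, not the paper's settings.

import numpy as np

def train_som(frames, n_nodes=16, epochs=30, lr=0.5, sigma=2.0, seed=0):
    """Minimal 1-D self-organizing map over pose feature vectors (a sketch only)."""
    rng = np.random.default_rng(seed)
    nodes = frames[rng.choice(len(frames), n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for e in range(epochs):
        a = lr * (1 - e / epochs)                        # decaying learning rate
        for x in frames[rng.permutation(len(frames))]:
            bmu = np.argmin(np.linalg.norm(nodes - x, axis=1))
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))[:, None]
            nodes += a * h * (x - nodes)                 # pull the neighborhood toward the frame
    return nodes

def motion_string(frames, nodes):
    """Index each frame by its best-matching SOM node to obtain a 'motion string'."""
    return [int(np.argmin(np.linalg.norm(nodes - x, axis=1))) for x in frames]

rng = np.random.default_rng(1)
clip = np.cumsum(rng.normal(scale=0.1, size=(200, 6)), axis=0)   # toy 6-D pose features
som = train_som(clip)
print(motion_string(clip, som)[:20])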


Virtual Reality Software and Technology | 2009

A semantic environment model for crowd simulation in multilayered complex environment

Hao Jiang; Wenbin Xu; Tianlu Mao; Chunpeng Li; Shihong Xia; Zhaoqi Wang

Simulating crowds in complex environments is fascinating and challenging; however, modeling of the environment has largely been neglected in the past, even though it is one of the essential problems in crowd simulation, especially for multilayered complex environments. This paper presents a semantic model for representing the complex environment, in which the semantic information is described with a three-tier framework: a geometric level, a semantic level and an application level. Each level contains different maps for different purposes, and our approach greatly facilitates the interactions between individuals and the virtual environment. A modified continuum crowd method is then designed to fit the proposed virtual environment model so that realistic behaviors of large, dense crowds can be simulated in multilayered complex environments such as buildings and subway stations. Finally, we implement this method and test it in two complex synthetic urban spaces. The experimental results demonstrate that the semantic environment model can provide sufficient and accurate information for crowd simulation in multilayered complex environments.
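
One possible reading of the three-tier framework as a data structure is sketched below; the class names, the specific maps stored at each level, and the connector representation are illustrative assumptions rather than the paper's actual model.

from dataclasses import dataclass, field

@dataclass
class LayerMaps:
    """Per-floor 2-D grids; the specific maps are illustrative, not the paper's exact set."""
    geometric: list                                   # e.g. walkable (1) / blocked (0) cells
    semantic: dict = field(default_factory=dict)      # e.g. {"region": labels such as 'hall', 'stairs'}
    application: dict = field(default_factory=dict)   # e.g. {"discomfort": grid, "guidance": grid}

class SemanticEnvironment:
    """Multilayered environment: one LayerMaps per floor, plus connectors between floors."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.layers = {}            # layer id -> LayerMaps
        self.connectors = []        # (layer_a, cell_a, layer_b, cell_b), e.g. stairs or escalators

    def cell(self, x, y):
        return int(x // self.cell_size), int(y // self.cell_size)

    def walkable(self, layer, x, y):
        i, j = self.cell(x, y)
        return bool(self.layers[layer].geometric[j][i])

env = SemanticEnvironment(cell_size=1.0)
env.layers["ground"] = LayerMaps(geometric=[[1, 1, 0], [1, 1, 1]])
env.layers["ground"].semantic["region"] = [["hall", "hall", "wall"], ["hall", "exit", "exit"]]
print(env.walkable("ground", 2.5, 0.5))                 # False: that cell is blocked
print(env.layers["ground"].semantic["region"][1][1])    # 'exit'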


IEEE Pacific Visualization Symposium | 2010

Motion track: Visualizing variations of human motion data

Yueqi Hu; Shuangyuan Wu; Shihong Xia; Jinghua Fu; Wei Chen

This paper proposes a novel visualization approach that can depict the variations between different human motion data sets. This is achieved by representing the time dimension of each animation sequence as a sequential curve in a locality-preserving 2D reference space, called the motion track representation. The principal advantage of this representation over standard representations of motion capture data - generally either a keyframed timeline or a 2D motion map in its entirety - is that it maps the motion differences along the time dimension into parallel, perceptible spatial dimensions while at the same time capturing the primary content of the source data. Latent semantic differences that are difficult to distinguish visually can be clearly displayed, favoring effective summary, clustering, comparison and analysis of a motion database.
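
A much-simplified stand-in for the motion track construction: project all frames of all clips into one shared 2-D space (plain PCA here, instead of whatever locality-preserving embedding the paper uses) and keep each clip as an ordered curve over time. The toy data and dimensions are assumptions.

import numpy as np

def motion_tracks(clips, dims=2):
    """Embed every frame of every clip into one shared low-dimensional reference space,
    then return each clip as an ordered 2-D curve (its 'motion track')."""
    all_frames = np.vstack(clips)
    mean = all_frames.mean(axis=0)
    _, _, Vt = np.linalg.svd(all_frames - mean, full_matrices=False)
    basis = Vt[:dims]
    return [(c - mean) @ basis.T for c in clips]   # one (n_frames, 2) curve per clip

rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(scale=0.05, size=(120, 30)), axis=0)     # toy 30-D poses
limp = walk + 0.3 * np.sin(np.linspace(0, 12, 120))[:, None]         # a "style" variation
tracks = motion_tracks([walk, limp])
print(tracks[0].shape, tracks[1].shape)          # each clip becomes a 2-D track
print(np.abs(tracks[0] - tracks[1]).mean())      # where the tracks diverge marks the variation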


The Visual Computer | 2006

Least-squares fitting of multiple m-dimensional point sets

Gaojin Wen; Zhaoqi Wang; Shihong Xia; Dengming Zhu

Based on the classic absolute orientation technique, a new method for least-squares fitting of multiple point sets in m-dimensional space is proposed, analyzed and extended to a weighted form in this paper. This method generates a fixed point set from k corresponding original m-dimensional point sets and minimizes the mean squared error between the fixed point set and these k point sets under the similarity transformation. Experiments and interesting applications are presented to show its efficiency and accuracy.
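
The flavour of the method can be sketched as a generalized-Procrustes-style alternation: fit a least-squares similarity transform per point set (Umeyama's closed form below), align every set to the current fixed set, and re-estimate the fixed set as their mean. This is a sketch of the general technique, not the paper's exact weighted formulation; the toy data and iteration count are assumptions.

import numpy as np

def similarity_fit(X, Y):
    """Least-squares similarity transform (scale c, rotation R, translation t) mapping X onto Y."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    U, D, Vt = np.linalg.svd(Yc.T @ Xc / len(X))       # cross-covariance
    S = np.eye(X.shape[1])
    S[-1, -1] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))   # keep a proper rotation
    R = U @ S @ Vt
    c = np.trace(np.diag(D) @ S) / (Xc ** 2).sum() * len(X)
    return c, R, my - c * R @ mx

def fit_multiple(sets, iters=10):
    """Alternate between aligning every set to the fixed set and re-estimating the fixed set."""
    fixed = sets[0].copy()
    for _ in range(iters):
        aligned = []
        for P in sets:
            c, R, t = similarity_fit(P, fixed)
            aligned.append(c * P @ R.T + t)
        fixed = np.mean(aligned, axis=0)
    return fixed

rng = np.random.default_rng(3)
base = rng.normal(size=(8, 3))                         # 8 corresponding 3-D points
sets = []
for _ in range(4):
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.linalg.det(Q))                     # force a proper rotation
    sets.append(1.5 * base @ Q.T + rng.uniform(-1, 1, size=3))
print(np.round(fit_multiple(sets)[:2], 3))             # consensus (fixed) point set, first 2 points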


International Conference on Computer Graphics and Interactive Techniques | 2016

Realtime 3D eye gaze animation using a single RGB camera

Congyi Wang; Fuhao Shi; Shihong Xia; Jinxiang Chai

This paper presents the first realtime 3D eye gaze capture method that simultaneously captures the coordinated movement of 3D eye gaze, head poses and facial expression deformation using a single RGB camera. Our key idea is to complement a realtime 3D facial performance capture system with an efficient 3D eye gaze tracker. We start the process by automatically detecting important 2D facial features for each frame. The detected facial features are then used to reconstruct 3D head poses and large-scale facial deformation using multi-linear expression deformation models. Next, we introduce a novel user-independent classification method for extracting iris and pupil pixels in each frame. We formulate the 3D eye gaze tracker in a Maximum A Posteriori (MAP) framework, which sequentially infers the most probable state of 3D eye gaze at each frame. The eye gaze tracker could fail when eye blinking occurs, so we further introduce an efficient eye-close detector to improve the robustness and accuracy of the eye gaze tracker. We have tested our system on both live video streams and Internet videos, demonstrating its accuracy and robustness under a variety of uncontrolled lighting conditions and across significant differences in race, gender, shape, pose and expression among individuals.
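
The sequential MAP idea, plus the eye-close gating, can be boiled down to a one-variable toy: a Gaussian motion prior around the previous estimate combined with a Gaussian likelihood around the detected gaze, so the MAP estimate is their precision-weighted average. The variances, the 1-D state and the blink handling below are simplifying assumptions, not the paper's full 3-D gaze model.

def map_gaze_update(prev_gaze, measured_gaze, eye_closed=False,
                    sigma_prior=2.0, sigma_meas=1.0):
    """One sequential MAP step for a single gaze angle (degrees)."""
    if eye_closed:                        # eye-close detector fired: trust the prediction only
        return prev_gaze
    w_prior = 1.0 / sigma_prior ** 2      # precision of the smooth-motion prior
    w_meas = 1.0 / sigma_meas ** 2        # precision of the iris/pupil measurement
    return (w_prior * prev_gaze + w_meas * measured_gaze) / (w_prior + w_meas)

gaze = 0.0
measurements = [1.0, 2.5, 2.0, 25.0, 3.0]    # the 25-degree jump mimics a blink-corrupted detection
blinks       = [False, False, False, True, False]
for z, closed in zip(measurements, blinks):
    gaze = map_gaze_update(gaze, z, eye_closed=closed)
    print(round(gaze, 2))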


Virtual Reality Software and Technology | 2006

From motion capture data to character animation

Gaojin Wen; Zhaoqi Wang; Shihong Xia; Dengming Zhu

In this paper, we propose a practical and systematic solution to the mapping problem from 3D marker position data recorded by optical motion capture systems to joint trajectories, together with a matching skeleton, based on least-squares fitting techniques. First, we preprocess the raw data and estimate the joint centers using related efficient techniques. Second, a skeleton of fixed length that precisely matches the joint centers is generated by an articulated skeleton fitting method. Finally, we calculate and rectify joint angles with a minimum-angle modification technique. We present results for our approach as applied to several motion-capture behaviors, which demonstrate the positional accuracy and usefulness of our method.
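
Two pieces of this pipeline are easy to sketch: turning noisy per-frame joint-center estimates into fixed bone lengths, and reading a joint angle off three joint centers. The toy skeleton, noise level and averaging rule below are assumptions; the paper's articulated fitting and minimum-angle rectification are more involved.

import numpy as np

def fixed_bone_lengths(joint_traj, bones):
    """Estimate one fixed length per bone from per-frame joint-center estimates
    (mean of the per-frame distances; a stand-in for the articulated skeleton fit)."""
    return {(a, b): float(np.mean(np.linalg.norm(joint_traj[a] - joint_traj[b], axis=1)))
            for a, b in bones}

def joint_angle(p_parent, p_joint, p_child):
    """Interior angle at a joint from three joint centers, in degrees."""
    u = p_parent - p_joint
    v = p_child - p_joint
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

rng = np.random.default_rng(4)
frames = 100
hip = rng.normal(scale=0.01, size=(frames, 3))                               # toy noisy joint centers
knee = hip + np.array([0, -0.45, 0]) + rng.normal(scale=0.01, size=(frames, 3))
ankle = knee + np.array([0, -0.43, 0.05]) + rng.normal(scale=0.01, size=(frames, 3))
traj = {"hip": hip, "knee": knee, "ankle": ankle}
print(fixed_bone_lengths(traj, [("hip", "knee"), ("knee", "ankle")]))
print(round(joint_angle(hip[0], knee[0], ankle[0]), 1))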


Computer Aided Design and Computer Graphics | 2007

A Volumetric Bounding Volume Hierarchy for Collision Detection

Li Liu; Zhaoqi Wang; Shihong Xia

Existing BVHs are restricted to surface-based hierarchical decomposition, which loses the inner-space properties of the object and provides little guidance for collision avoidance. We present a novel, volume-based BVH - the inner space bounding volume hierarchy (ISBVH). Compared with surface-based BVHs, the ISBVH can enclose the boundary surface and the inner space simultaneously in any order. Because the object is approximated volumetrically, more significant penetration features such as potential penetration regions, penetration points and penetration depth can be identified during the collision detection phase without any additional computation cost. These features are essential for guiding quick and reasonable collision avoidance.
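
A minimal sketch of the "hierarchy over the interior volume rather than the surface" idea: voxelize the inside of a closed object, then build an AABB tree over the occupied cells by recursive median splits. The sphere test object, cell size, leaf size and node layout are assumptions, not the ISBVH construction itself.

import numpy as np

def interior_voxels(center, radius, cell=0.1):
    """Voxelize the *inside* of a sphere (stand-in for any closed object):
    the volume-based hierarchy is built over these cells, not over surface triangles."""
    r = int(np.ceil(radius / cell))
    g = np.arange(-r, r + 1) * cell
    pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3) + center
    return pts[np.linalg.norm(pts - center, axis=1) <= radius]

def build_bvh(cells, leaf=32):
    """Recursive median split over occupied cells; each node stores the AABB of its cells,
    so deep nodes localise where inside the volume two objects overlap."""
    box = (cells.min(0), cells.max(0))
    if len(cells) <= leaf:
        return {"box": box, "cells": cells}
    axis = int(np.argmax(box[1] - box[0]))
    order = np.argsort(cells[:, axis])
    mid = len(cells) // 2
    return {"box": box,
            "left": build_bvh(cells[order[:mid]], leaf),
            "right": build_bvh(cells[order[mid:]], leaf)}

def boxes_overlap(a, b):
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

A = build_bvh(interior_voxels(np.zeros(3), 0.5))
B = build_bvh(interior_voxels(np.array([0.8, 0.0, 0.0]), 0.5))
print(boxes_overlap(A["box"], B["box"]))   # True: root volumes already overlap, so recurse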

Collaboration


Dive into Shihong Xia's collaborations.

Top Co-Authors

Zhaoqi Wang (Chinese Academy of Sciences)
Tianlu Mao (Chinese Academy of Sciences)
Chunpeng Li (Chinese Academy of Sciences)
Lin Gao (Chinese Academy of Sciences)
Xianjie Qiu (Chinese Academy of Sciences)
Dengming Zhu (Chinese Academy of Sciences)
Wenbin Xu (Chinese Academy of Sciences)
Yi Wei (Chinese Academy of Sciences)
Hao Jiang (Chinese Academy of Sciences)
Dan Zong (Chinese Academy of Sciences)