Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jixu Chen is active.

Publication


Featured research published by Jixu Chen.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2010

A Unified Probabilistic Framework for Spontaneous Facial Action Modeling and Understanding

Yan Tong; Jixu Chen; Qiang Ji

Facial expression is a natural and powerful means of human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. Because of these challenges, current research in facial expression recognition is limited to posed expressions and often in frontal view. A spontaneous facial expression is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the coherent and consistent spatiotemporal interactions among rigid and nonrigid facial motions that produce a meaningful facial expression. Recognizing this fact, we introduce a unified probabilistic facial action model based on the dynamic Bayesian network (DBN) to simultaneously and coherently represent rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, facial action recognition is accomplished through probabilistic inference by systematically integrating visual measurements with the facial action model. Experiments show that compared to the state-of-the-art techniques, the proposed system yields significant improvements in recognizing both rigid and nonrigid facial motions, especially for spontaneous facial expressions.
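The measurement-fusion step this abstract describes can be pictured with a toy two-node example. The sketch below fuses a noisy action unit (AU) detector output with a learned AU co-occurrence prior via Bayes' rule; the AU variables, probabilities, and detector model are hypothetical stand-ins, not the paper's DBN, which couples many AUs together with rigid head motion over time.

```python
# Minimal sketch: fuse a noisy per-AU measurement with a learned
# dependency between two action units (AUs) via Bayes' rule.
# All numbers below are hypothetical.
import numpy as np

# Hypothetical prior P(AU1) and conditional P(AU2 | AU1), e.g. the
# learned co-occurrence of two action units from training data.
p_au1 = np.array([0.7, 0.3])                  # P(AU1 = off, on)
p_au2_given_au1 = np.array([[0.9, 0.1],       # P(AU2 | AU1 = off)
                            [0.2, 0.8]])      # P(AU2 | AU1 = on)

# Hypothetical measurement model: P(detector fires | AU2 = off, on).
p_fire_given_au2 = np.array([0.15, 0.85])

# Joint P(AU1, AU2), then condition on an observed detector firing.
joint = p_au1[:, None] * p_au2_given_au1       # shape (2, 2)
posterior = joint * p_fire_given_au2[None, :]  # multiply in the evidence
posterior /= posterior.sum()

print("P(AU2 = on | detector fired) =", posterior[:, 1].sum())
print("P(AU1 = on | detector fired) =", posterior[1, :].sum())  # coupling
```

Note how the evidence on AU2 also shifts the belief over AU1 through the dependency, which is the kind of coherence the DBN formulation is after.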


International Conference on Pattern Recognition | 2008

3D gaze estimation with a single camera without IR illumination

Jixu Chen; Qiang Ji

This paper proposes a 3D eye gaze estimation and tracking algorithm based on facial feature tracking using a single camera. Instead of using infrared (IR) lights and corneal reflections (glints), the algorithm estimates the 3D visual axis from the tracked facial feature points. To this end, we first introduce an extended 3D eye model that includes both the eyeball and the eye corners. Based on this eye model, we derive the equations to solve for the 3D eyeball center, the 3D pupil center, and the 3D visual axis, from which the point of gaze can be computed after a one-time personal calibration. Experimental results show that the gaze estimation error of this algorithm is below 3 degrees. Compared with existing IR-based eye tracking methods, the proposed method is simple to set up and works both indoors and outdoors.
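As a rough illustration of the final step, once an eyeball center and visual-axis direction are recovered, the point of gaze follows from intersecting that ray with the screen plane. The eye position, axis direction, and screen plane below are hypothetical values, not the paper's calibrated geometry.

```python
# Minimal sketch: point of gaze as a ray-plane intersection.
import numpy as np

def gaze_point(axis_origin, axis_dir, plane_point, plane_normal):
    """Intersect the visual-axis ray with the screen plane."""
    denom = np.dot(axis_dir, plane_normal)
    if abs(denom) < 1e-9:
        raise ValueError("visual axis is parallel to the screen")
    t = np.dot(plane_point - axis_origin, plane_normal) / denom
    return axis_origin + t * axis_dir

eye_center = np.array([0.03, 0.0, 0.55])    # meters, camera coordinates
visual_axis = np.array([-0.05, 0.02, -1.0])
visual_axis /= np.linalg.norm(visual_axis)

# Screen assumed to lie in the z = 0 plane of the camera frame.
p = gaze_point(eye_center, visual_axis,
               np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print("point of gaze on screen:", p)
```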


Computer Vision and Pattern Recognition | 2009

Switching Gaussian Process Dynamic Models for simultaneous composite motion tracking and recognition

Jixu Chen; Minyoung Kim; Yu Wang; Qiang Ji

Traditional dynamical systems used for motion tracking cannot effectively handle the high dimensionality of motion states and composite dynamics. In this paper, to address both issues simultaneously, we propose the marriage of switching dynamical systems and recent Gaussian Process Dynamic Models (GPDM), yielding a new model called the switching GPDM (SGPDM). The switching variables enable the SGPDM to capture diverse motion dynamics effectively and also allow us to identify the motion class (e.g., walking or running in human motion tracking, smiling or angry in facial motion tracking), which naturally leads to simultaneous motion tracking and classification. Moreover, each GPDM in the SGPDM can faithfully model its corresponding primitive motion while tracking is performed in a low-dimensional latent space, significantly improving tracking efficiency. The proposed SGPDM is applied to human body motion tracking and classification, and to facial motion tracking and recognition. We demonstrate the performance of our model on several composite body motion videos from the CMU database, including exercises and salsa dance. We also demonstrate the robustness of our model in terms of both facial feature tracking and facial expression/pose recognition on real videos under diverse scenarios, including pose change, low frame rate, and low-quality video.
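To give a feel for the switching mechanism, the toy sketch below replaces the GPDMs with one-dimensional linear dynamics per motion class and infers the switching variable from each model's prediction error. The dynamics, noise levels, and observations are all invented; the paper operates in a learned latent space with full GP dynamics.

```python
# Toy stand-in for switching dynamics: a bank of AR(1) models, with the
# motion class inferred from how well each model predicts the data.
import numpy as np

A = {"walk": 0.95, "run": 0.80}          # hypothetical per-class dynamics
r = 0.1                                   # measurement noise std

def log_lik(pred, z):
    return -0.5 * ((z - pred) / r) ** 2   # Gaussian log-likelihood (unnorm.)

x = {k: 1.0 for k in A}                   # per-model state estimate
p_switch = np.array([0.5, 0.5])           # P(class = walk, run)
for z in [0.9, 0.85, 0.6, 0.45, 0.35]:    # fake 1-D observations
    preds = np.array([A[k] * x[k] for k in A])
    p_switch *= np.exp(log_lik(preds, z)) # reweight classes by fit
    p_switch /= p_switch.sum()
    for i, k in enumerate(A):             # crude per-model state update
        x[k] = preds[i] + 0.5 * (z - preds[i])
    print(f"z={z:.2f}  P(walk)={p_switch[0]:.2f}  P(run)={p_switch[1]:.2f}")
```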


Computer Vision and Pattern Recognition | 2007

Online Spatial-temporal Data Fusion for Robust Adaptive Tracking

Jixu Chen; Qiang Ji

One problem with adaptive tracking is that the data used to train the new target model often contain errors, and these errors degrade the quality of the new target model. Over time the errors accumulate and eventually cause the tracker to drift away. In this paper, we propose a new method based on online data fusion to alleviate this tracking drift problem. By combining spatial and temporal data through a dynamic Bayesian network, the proposed method improves the quality of online data labeling, thereby minimizing the error associated with model updating and alleviating the tracking drift problem. Experiments show the proposed method significantly improves the performance of an existing adaptive tracking method.
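The gist of the labeling step can be sketched as follows: before the target model is updated, each candidate sample's spatial cue (appearance similarity) is fused with a temporal cue (motion consistency), and only confidently labeled samples are kept. The cue scores and the acceptance threshold are hypothetical, and the paper fuses the cues through a dynamic Bayesian network rather than the naive product used here.

```python
# Minimal sketch: gate model updates on fused spatial + temporal confidence.

def fused_confidence(appearance_score, motion_score):
    """Naive independent-cue fusion: product of per-cue probabilities."""
    return appearance_score * motion_score

candidates = [
    {"patch": "A", "appearance": 0.95, "motion": 0.90},
    {"patch": "B", "appearance": 0.90, "motion": 0.20},  # likely drifted
    {"patch": "C", "appearance": 0.40, "motion": 0.95},  # appearance outlier
]

THRESHOLD = 0.7  # hypothetical acceptance threshold
accepted = [c for c in candidates
            if fused_confidence(c["appearance"], c["motion"]) > THRESHOLD]
print("samples used to update the target model:",
      [c["patch"] for c in accepted])
```

Gating the update this way keeps mislabeled samples (high on one cue but not both) out of the model, which is exactly the error-accumulation path that causes drift.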


Computer Vision and Pattern Recognition | 2009

Modeling and exploiting the spatio-temporal facial action dependencies for robust spontaneous facial expression recognition

Yan Tong; Jixu Chen; Qiang Ji

Facial actions convey various types of messages in human communication. Recognizing spontaneous facial actions, however, is very challenging due to subtle facial deformation, frequent head movements, and ambiguous and uncertain facial motion measurements. As a result, current research in facial action recognition is limited to posed facial actions and often to the frontal view. Spontaneous facial action is characterized by rigid head movements and nonrigid facial muscular movements. More importantly, it is the spatiotemporal interactions among the rigid and nonrigid facial motions that produce a meaningful and natural facial display. Recognizing this fact, we introduce a probabilistic facial action model based on a dynamic Bayesian network (DBN) to simultaneously and coherently capture rigid and nonrigid facial motions, their spatiotemporal dependencies, and their image measurements. Advanced machine learning methods are introduced to learn the probabilistic facial action model based on both training data and prior knowledge. Facial action recognition is accomplished through probabilistic inference by systematically integrating measurements of facial motions with the facial action model. Experiments show that the proposed system yields significant improvements in recognizing spontaneous facial actions.
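The temporal side of such a model behaves like the toy forward filter below: an AU's belief is propagated through a transition model and fused with each frame's noisy detection, smoothing spurious per-frame flips. All transition and measurement probabilities are invented for illustration.

```python
# Minimal sketch: HMM-style forward filtering of one AU over frames.
import numpy as np

trans = np.array([[0.9, 0.1],      # P(AU_t | AU_{t-1} = off)
                  [0.3, 0.7]])     # P(AU_t | AU_{t-1} = on)
meas = np.array([[0.8, 0.2],       # P(detector = off, on | AU = off)
                 [0.25, 0.75]])    # P(detector = off, on | AU = on)

belief = np.array([0.9, 0.1])      # initial P(AU = off, on)
for obs in [1, 1, 0, 1, 1]:        # noisy per-frame detector outputs
    belief = belief @ trans        # temporal prediction
    belief = belief * meas[:, obs] # fuse in the frame's measurement
    belief /= belief.sum()
    print(f"detector={obs}  P(AU on)={belief[1]:.2f}")
```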


International Conference on Pattern Recognition | 2008

2D and 3D upper body tracking with one framework

Lei Zhang; Jixu Chen; Zhi Zeng; Qiang Ji

We propose a Dynamic Bayesian Network (DBN) model for upper body tracking. We first construct a Bayesian Network (BN) to represent the human upper body structure and then incorporate into the BN various generic physical and anatomical constraints on the parts of the upper body. Unlike existing upper body models, ours aims at handling physically feasible body motion rather than only some typical motion patterns. We also explicitly model part self-occlusion in the DBN, which makes it possible to automatically detect the occurrence of self-occlusion and to minimize the effect of occlusion-induced measurement errors on tracking accuracy. Moreover, our method can handle both 2D and 3D upper body tracking within the same framework. Using the DBN model, upper body tracking is achieved through probabilistic inference over time.
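A generic anatomical constraint of the kind mentioned above can be as simple as a joint-angle limit. The sketch below rejects candidate poses whose elbow flexion leaves a feasible range; the limits and the three-point arm parameterization are invented for illustration, not the paper's constraint set.

```python
# Minimal sketch: a hard joint-angle feasibility check for candidate poses.
import numpy as np

ELBOW_FLEXION_LIMITS = (0.0, 150.0)   # hypothetical range, degrees

def elbow_angle(shoulder, elbow, wrist):
    """Interior angle at the elbow between upper arm and forearm, degrees."""
    u = shoulder - elbow
    v = wrist - elbow
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def feasible(shoulder, elbow, wrist):
    flexion = 180.0 - elbow_angle(shoulder, elbow, wrist)
    lo, hi = ELBOW_FLEXION_LIMITS
    return lo <= flexion <= hi

s = np.array([0.0, 0.0, 0.0])
e = np.array([0.3, 0.0, 0.0])
w = np.array([0.6, 0.05, 0.0])        # nearly straight arm
print("pose feasible:", feasible(s, e, w))
```

In a probabilistic model, such a check would typically appear as a (soft or hard) prior term rather than a standalone filter, so infeasible poses simply receive negligible probability during inference.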


Face and Gesture | 2011

Constraint-based gaze estimation without active calibration

William Maio; Jixu Chen; Qiang Ji

Existing eye gaze tracking systems typically require an explicit personal calibration process to estimate certain person-specific eye parameters. For natural human-computer interaction, such a personal calibration is often cumbersome and unnatural. In this paper, we introduce a new method that estimates a person's gaze without active personal calibration. By exploiting the binocular constraint that the gaze point lies at the intersection of the visual axes of the two eyes, together with generic person-independent constraints on the eye parameters, our method estimates the required person-specific parameters implicitly and naturally, without active participation from the user and without any special calibration object. Experiments with different subjects show that the proposed method achieves gaze estimation accuracy comparable to the conventional nine-point calibration method.
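The binocular constraint can be illustrated with the least-squares sketch below: under noise the two visual axes rarely intersect exactly, so a natural estimate of the gaze point is the point closest to both rays. All eye positions and axis directions here are hypothetical.

```python
# Minimal sketch: gaze point as the least-squares "intersection" of two rays.
import numpy as np

def closest_point_between_rays(o1, d1, o2, d2):
    """Point minimizing summed squared distance to two 3D lines."""
    def projector(o, d):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projects onto the line's normal space
        return P, P @ o
    P1, b1 = projector(o1, d1)
    P2, b2 = projector(o2, d2)
    # Normal equations: (P1 + P2) x = P1 o1 + P2 o2
    return np.linalg.lstsq(P1 + P2, b1 + b2, rcond=None)[0]

left_eye  = np.array([-0.03, 0.0, 0.55])   # meters, camera coordinates
right_eye = np.array([ 0.03, 0.0, 0.55])
left_axis  = np.array([ 0.02, 0.01, -1.0])
right_axis = np.array([-0.04, 0.01, -1.0])

print("estimated gaze point:",
      closest_point_between_rays(left_eye, left_axis, right_eye, right_axis))
```

In a calibration-free setting, requiring the two axes to (nearly) meet is what supplies the extra equations for the person-specific parameters that a nine-point procedure would otherwise provide.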


International Conference on Pattern Recognition | 2010

Efficient 3D Upper Body Tracking with Self-Occlusions

Jixu Chen; Qiang Ji

We propose an efficient 3D upper body tracking method that recovers the positions and orientations of six upper-body parts from a video sequence. Our method is based on a probabilistic graphical model (PGM) that incorporates the spatial relationships among the body parts, together with a robust multi-view image likelihood based on probabilistic PCA (PPCA). For efficiency, we use a tree-structured graphical model and perform inference with particle-based belief propagation. Since our image likelihood is based on multiple views, we address self-occlusion by modeling each body part's likelihood in every view and automatically decreasing the influence of occluded views during inference.
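The PPCA likelihood mentioned above scores an image descriptor x under the Gaussian marginal N(mu, W W^T + sigma^2 I) that probabilistic PCA defines. The toy sketch below uses random stand-in parameters rather than a learned model, just to show how on-model and off-model descriptors separate.

```python
# Minimal sketch: scoring descriptors under a PPCA marginal Gaussian.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
d, q = 6, 2                          # toy descriptor dim / latent dim
W = rng.normal(size=(d, q))          # stand-in for a learned loading matrix
mu = rng.normal(size=d)
sigma2 = 0.1

cov = W @ W.T + sigma2 * np.eye(d)   # PPCA marginal covariance
x_on = mu + W @ rng.normal(size=q)   # lies in the principal subspace
x_off = mu + 3.0 * rng.normal(size=d)  # mostly off-subspace deviation

print("log-likelihood, on-model :", multivariate_normal.logpdf(x_on, mu, cov))
print("log-likelihood, off-model:", multivariate_normal.logpdf(x_off, mu, cov))
```

In a per-view likelihood, an occluded part produces an off-model descriptor and therefore a low score, which is one simple way a view's influence can be down-weighted automatically.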


Computer Vision and Pattern Recognition | 2011

Probabilistic gaze estimation without active personal calibration

Jixu Chen; Qiang Ji


Eye Tracking Research & Applications | 2008

A robust 3D eye gaze tracking system using noise reduction

Jixu Chen; Yan Tong; Wayne D. Gray; Qiang Ji

Collaboration


Dive into Jixu Chen's collaborations.

Top Co-Authors

Qiang Ji (Rensselaer Polytechnic Institute)
Yan Tong (University of South Carolina)
Lei Zhang (Rensselaer Polytechnic Institute)
Zhi Zeng (Rensselaer Polytechnic Institute)
Minyoung Kim (Rensselaer Polytechnic Institute)
Wayne D. Gray (Rensselaer Polytechnic Institute)
William Maio (Rensselaer Polytechnic Institute)
Yongmian Zhang (Rensselaer Polytechnic Institute)
Yu Wang (Rensselaer Polytechnic Institute)