
Publication


Featured research published by Changbo Hu.


Image and Vision Computing | 2006

Manifold based analysis of facial expression

Ya Chang; Changbo Hu; Rogério Schmidt Feris; Matthew Turk

We propose a novel approach for modeling, tracking and recognizing facial expressions. Our method works on a low dimensional expression manifold, which is obtained by Isomap embedding. In this space, facial contour features are first clustered using a mixture model. Then, expression dynamics are learned for tracking and classification. We use ICondensation to track facial features in the embedded space, while recognizing facial expressions in a cooperative manner, within a common probabilistic framework. The image observation likelihood is derived from a variation of the Active Shape Model (ASM) algorithm. For each cluster in the low-dimensional space, a specific ASM model is learned, thus avoiding incorrect matching due to non-linear image variations. Preliminary experimental results show that our probabilistic facial expression model on the manifold significantly improves facial deformation tracking and expression recognition.
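A minimal sketch of the embedding-and-clustering stage this abstract describes, assuming scikit-learn's Isomap and Gaussian mixture stand in for the authors' implementation; the contour features, dimensions, and cluster count below are illustrative placeholders.

```python
# Hedged sketch: Isomap embedding of facial contour features, then mixture-model
# clustering in the low-dimensional expression manifold. The feature matrix is
# a random placeholder, not real tracked contours.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
contour_feats = rng.normal(size=(500, 40))   # placeholder: 500 frames x 40-D contours

# Embed the high-dimensional features into a low-dimensional expression manifold.
low_dim = Isomap(n_neighbors=10, n_components=3).fit_transform(contour_feats)

# Cluster the embedded points with a mixture model; in the paper's pipeline each
# cluster then gets its own Active Shape Model.
gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0)
labels = gmm.fit_predict(low_dim)
print(low_dim.shape, np.bincount(labels))
```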


Computer Vision and Pattern Recognition | 2004

Probabilistic expression analysis on manifolds

Ya Chang; Changbo Hu; Matthew Turk

In this paper, we propose a probabilistic video-based facial expression recognition method on manifolds. The concept of the manifold of facial expression is based on the observation that the images of all possible facial deformations of an individual form a smooth manifold embedded in a high dimensional image space. An enhanced Lipschitz embedding is developed to embed the aligned face appearance in a low dimensional space while keeping the main structure of the manifold. In the embedded space, a complete expression sequence becomes a path on the expression manifold, emanating from a center that corresponds to the neutral expression. Each path consists of several clusters. A probabilistic model of transition between the clusters and paths is learned through training videos in the embedded space. The likelihood of one kind of facial expression is modeled as a mixture density with the clusters as mixture centers. The transition between different expressions is represented as the evolution of the posterior probability of the six basic paths. The experimental results demonstrate that the probabilistic approach can recognize expression transitions effectively. We also synthesize image sequences of changing expressions through the manifold model.
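The posterior evolution over the six basic paths can be sketched as a simple forward filter: each path's likelihood is a mixture density over its clusters, and a transition matrix propagates the posterior between observations. All parameters below are illustrative stand-ins, not the trained model.

```python
# Hedged sketch of the recursive posterior update over expression paths.
import numpy as np
from scipy.stats import multivariate_normal

n_paths, n_clusters, dim = 6, 3, 3            # six basic expressions
rng = np.random.default_rng(1)
means = rng.normal(size=(n_paths, n_clusters, dim))     # placeholder cluster centers
weights = np.full((n_paths, n_clusters), 1.0 / n_clusters)
# Sticky transition matrix: rows sum to one, self-transition dominates.
T = np.full((n_paths, n_paths), 0.05) + np.eye(n_paths) * (1 - 0.05 * n_paths)

def path_likelihood(x, p):
    """Mixture density of path p (clusters as mixture centers) at point x."""
    return sum(w * multivariate_normal.pdf(x, mean=m, cov=np.eye(dim))
               for w, m in zip(weights[p], means[p]))

posterior = np.full(n_paths, 1.0 / n_paths)   # start uninformed
for x in rng.normal(size=(10, dim)):          # placeholder embedded observations
    predicted = T.T @ posterior               # transition step
    likes = np.array([path_likelihood(x, p) for p in range(n_paths)])
    posterior = predicted * likes
    posterior /= posterior.sum()              # normalize to a distribution
print(posterior.round(3))
```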


IEEE International Workshop on Analysis and Modeling of Faces and Gestures | 2003

Manifold of facial expression

Ya Chang; Changbo Hu; Matthew Turk

We propose the concept of the manifold of facial expression, based on the observation that images of a subject's facial expressions define a smooth manifold in the high dimensional image space. Such a manifold representation can provide a unified framework for facial expression analysis. We first apply active wavelet networks (AWN) to the image sequences for facial feature localization. To learn the structure of the manifold in the feature space derived by AWN, we investigate two types of embeddings from a high dimensional space to a low dimensional space: locally linear embedding (LLE) and Lipschitz embedding. Our experiments show that LLE is suitable for visualizing expression manifolds. After applying Lipschitz embedding, the expression manifold can be approximately considered a super-spherical surface in the embedding space. For manifolds derived from different subjects, we propose a nonlinear alignment algorithm that preserves the semantic similarity of facial expressions from different subjects on one generalized manifold. We also show that nonlinear alignment outperforms linear alignment in expression classification.
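To make the two embeddings concrete, here is a hedged sketch: LLE via scikit-learn, and a bare-bones Lipschitz embedding that maps each point to its distances from a few random reference subsets. The inputs are synthetic placeholders for the AWN-derived feature vectors.

```python
# Hedged sketch contrasting LLE with a simple Lipschitz embedding.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 50))                 # placeholder feature vectors

# Locally linear embedding, the variant the paper finds useful for visualization.
X_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)

def lipschitz_embed(X, n_refs=10, ref_size=5, rng=rng):
    """Coordinate k of a point is its distance to random reference set A_k."""
    coords = np.empty((len(X), n_refs))
    for k in range(n_refs):
        ref = X[rng.choice(len(X), size=ref_size, replace=False)]
        dists = np.linalg.norm(X[:, None, :] - ref[None, :, :], axis=2)
        coords[:, k] = dists.min(axis=1)       # distance to nearest reference point
    return coords

X_lip = lipschitz_embed(X)
print(X_lle.shape, X_lip.shape)
```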


Computer Vision and Pattern Recognition | 2004

Manifold Based Analysis of Facial Expression

Changbo Hu; Ya Chang; Rogério Schmidt Feris; Matthew Turk

We propose a novel approach for modeling, tracking and recognizing facial expressions. Our method works on a low dimensional expression manifold, which is obtained by Isomap embedding. In this space, facial contour features are first clustered using a mixture model. Then, expression dynamics are learned for tracking and classification. We use ICondensation to track facial features in the embedded space, while recognizing facial expressions in a cooperative manner, within a common probabilistic framework. The image observation likelihood is derived from a variation of the Active Shape Model (ASM) algorithm. For each cluster in the low-dimensional space, a specific ASM model is learned, thus avoiding incorrect matching due to non-linear image variations. Preliminary experimental results show that our probabilistic facial expression model on the manifold significantly improves facial deformation tracking and expression recognition.
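The Isomap-and-clustering stage is sketched under the journal version above; as a complement, here is a plain Condensation-style particle filter operating in the embedded space. The full ICondensation algorithm adds importance sampling, and the true observation likelihood comes from the per-cluster ASMs; both are replaced by placeholders here.

```python
# Hedged sketch: particle filtering in the low-dimensional expression space.
import numpy as np

rng = np.random.default_rng(3)
n_particles, dim = 200, 3
particles = rng.normal(size=(n_particles, dim))   # states in the embedded space

def obs_likelihood(particles, z, sigma=0.5):
    """Placeholder for the ASM-derived image observation likelihood."""
    d2 = ((particles - z) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

for z in rng.normal(size=(20, dim)):              # placeholder observations
    particles += rng.normal(scale=0.1, size=particles.shape)   # predict (diffuse)
    w = obs_likelihood(particles, z)
    w /= w.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]  # resample

print(particles.mean(axis=0))                     # tracked state estimate
```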


International Journal of Pattern Recognition and Artificial Intelligence | 2005

Non-negative matrix factorization framework for face recognition

Yuan Wang; Yunde Jia; Changbo Hu; Matthew Turk

Non-negative Matrix Factorization (NMF) is a part-based image representation method that adds a non-negativity constraint to matrix factorization, compatible with the intuitive notion of combining parts to form a whole face. In this paper, we propose a face recognition framework that adds both the non-negativity constraint and classifier constraints to matrix factorization, yielding features that are intuitive and effective for recognition. Based on this framework, we present two novel subspace methods: Fisher Non-negative Matrix Factorization (FNMF) and PCA Non-negative Matrix Factorization (PNMF). FNMF combines the non-negativity constraint with the Fisher constraint, which maximizes the between-class scatter and minimizes the within-class scatter of face samples, thereby improving recognition capability. PNMF combines the non-negativity constraint with PCA characteristics, such as maximized variance of the output coordinates and orthogonal bases, so we obtain intuitive features with desirable PCA properties. Our experiments show that FNMF and PNMF achieve better face recognition performance than NMF and Local NMF.
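As a point of reference, the plain NMF baseline that FNMF and PNMF extend can be sketched with scikit-learn; the Fisher and PCA constraints modify the factorization objective itself and are not implemented here. The face data and identity labels below are synthetic placeholders.

```python
# Hedged sketch: NMF coefficients as face-recognition features.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
V = rng.random((100, 256))              # placeholder: 100 faces as 16x16 pixel rows
labels = rng.integers(0, 10, size=100)  # placeholder identity labels

# Factorize V ~ W @ H: rows of H are non-negative, part-like bases; rows of W
# are the encoding coefficients used as features.
nmf = NMF(n_components=20, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(V)

clf = KNeighborsClassifier(n_neighbors=1).fit(W, labels)
print(clf.score(W, labels))             # trivially optimistic sanity check
```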


International Conference on Computer Vision | 2005

Multi-view AAM fitting and camera calibration

Seth Koterba; Simon Baker; Iain A. Matthews; Changbo Hu; Jing Xiao; Jeffrey F. Cohn; Takeo Kanade

In this paper, we study the relationship between multi-view active appearance model (AAM) fitting and camera calibration. In the first part of the paper we propose an algorithm to calibrate the relative orientation of a set of N > 1 cameras by fitting an AAM to sets of N images. In essence, we use the human face as a (non-rigid) calibration grid. Our algorithm calibrates a set of 2 × 3 weak-perspective camera projection matrices, projections of the world coordinate system origin into the images, depths of the world coordinate system origin, and focal lengths. We demonstrate that the performance of this algorithm is comparable to a standard algorithm using a calibration grid. In the second part of the paper, we show how calibrating the cameras improves the performance of multi-view AAM fitting.
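The 2 × 3 weak-perspective camera model at the heart of the calibration can be sketched in a few lines: a 3D point X projects as x = s * R X + t, where R holds the first two rows of a rotation matrix, s absorbs depth and focal length, and t is the image of the world origin. The values below are illustrative, not a calibration result.

```python
# Hedged sketch of weak-perspective projection.
import numpy as np

def weak_perspective(X, R, s, t):
    """Project Nx3 points with a 2x3 weak-perspective camera."""
    return s * (X @ R.T) + t                       # result is Nx2

theta = np.deg2rad(20)                             # example camera: yaw rotation
R_full = np.array([[np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
R = R_full[:2]                                     # first two rotation rows
X = np.random.default_rng(5).normal(size=(10, 3))  # placeholder 3D face points
x = weak_perspective(X, R, s=1.2, t=np.array([160.0, 120.0]))
print(x.shape)
```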


British Machine Vision Conference | 2004

Fitting a Single Active Appearance Model Simultaneously to Multiple Images

Changbo Hu; Jing Xiao; Iain A. Matthews; Simon Baker; Jeffrey F. Cohn; Takeo Kanade

Active Appearance Models (AAMs) are a well-studied 2D deformable model. One recently proposed extension of AAMs to multiple images is the Coupled-View AAM. Coupled-View AAMs model the 2D shape and appearance of a face in two or more views simultaneously. The major limitation of Coupled-View AAMs, however, is that they are specific to a particular set of cameras, both in geometry and in photometric responses. In this paper, we describe how a single AAM can be fit to multiple images, captured simultaneously by cameras with arbitrary geometry and response functions. Our algorithm retains the major benefits of Coupled-View AAMs: the integration of information from multiple images into a single model, and improved fitting robustness.
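The core idea, a single parameter vector scored against all views at once, can be sketched as a joint objective that sums per-view errors; the error function below is a synthetic placeholder for the AAM appearance residual, and the optimizer is generic rather than the paper's fitting algorithm.

```python
# Hedged sketch: joint fitting of shared model parameters across N views.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_views, n_params = 3, 8
targets = rng.normal(size=(n_views, n_params))   # stand-ins for per-view evidence

def per_view_error(p, v):
    """Placeholder for the AAM appearance error in view v."""
    return ((p - targets[v]) ** 2).sum()

def joint_cost(p):
    # One shared parameter vector p; the cost sums errors over all views.
    return sum(per_view_error(p, v) for v in range(n_views))

res = minimize(joint_cost, x0=np.zeros(n_params))
print(res.x.round(2))                            # consensus fit across views
```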


IEEE International Workshop on Analysis and Modeling of Faces and Gestures | 2003

Real-time view-based face alignment using active wavelet networks

Changbo Hu; Rogério Schmidt Feris; Matthew Turk

The active wavelet network (AWN) [C. Hu et al., (2003)] approach was recently proposed for automatic face alignment, showing advantages over active appearance models (AAM), such as more robustness against partial occlusions and illumination changes. We (1) extend the AWN method to a view-based approach, (2) verify the robustness of our algorithm with respect to unseen views in a large dataset and (3) show that using only nine wavelets, our method yields similar performance to state-of-the-art face alignment systems, with a significant enhancement in terms of speed. After optimization, our system requires only 3 ms per iteration on a 1.6 GHz Pentium IV. We show applications in face alignment for recognition and real-time facial feature tracking under large pose variations.
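The view-based aspect can be sketched as simple model selection: keep one AWN-style model per pose bin and pick the one nearest the estimated yaw before fitting. The bins and model names below are hypothetical, not the paper's configuration.

```python
# Hedged sketch of per-view model selection.
view_models = {-40: "left-profile AWN", 0: "frontal AWN", 40: "right-profile AWN"}

def select_view_model(yaw_deg):
    """Return the per-view model whose pose bin is nearest the yaw estimate."""
    nearest_bin = min(view_models, key=lambda b: abs(b - yaw_deg))
    return view_models[nearest_bin]

print(select_view_model(27.5))   # -> 'right-profile AWN'
```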


British Machine Vision Conference | 2003

Active Wavelet Networks for Face Alignment

Changbo Hu; Rogério Schmidt Feris; Matthew Turk

The active appearance model (AAM) algorithm has proved to be a successful method for face alignment and synthesis. By elegantly combining both shape and texture models, AAM allows fast and robust deformable image matching. However, the method is sensitive to partial occlusions and illumination changes. In such cases, the PCA-based texture model causes the reconstruction error to be globally spread over the image. In this paper, we propose a new method for face alignment called active wavelet networks (AWN), which replaces the AAM texture model by a wavelet network representation. Since we consider spatially localized wavelets for modeling texture, our method shows more robustness against partial occlusions and some illumination changes.
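A minimal sketch of the wavelet-network idea: represent a texture patch as a weighted sum of spatially localized Gabor wavelets. Here the wavelet positions, scales, and orientations are fixed and only the weights are fitted by least squares; a real wavelet network also optimizes those parameters. All values are illustrative.

```python
# Hedged sketch: least-squares texture coding with a small Gabor wavelet bank.
import numpy as np

def gabor(x, y, cx, cy, sigma, freq, theta):
    """A real-valued Gabor wavelet centered at (cx, cy)."""
    xr = (x - cx) * np.cos(theta) + (y - cy) * np.sin(theta)
    yr = -(x - cx) * np.sin(theta) + (y - cy) * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

h, w = 32, 32
y, x = np.mgrid[0:h, 0:w]
image = np.random.default_rng(7).random((h, w))   # placeholder face patch

# Nine wavelets (the companion paper reports nine suffice), chosen arbitrarily.
params = [(cx, cy, 5.0, 0.1, th) for cx in (8, 16, 24)
          for cy, th in ((10, 0.0), (22, np.pi / 4), (16, np.pi / 2))]
basis = np.stack([gabor(x, y, *p).ravel() for p in params], axis=1)

# Fit weights so that image ~ basis @ weights; localized wavelets keep occlusion
# errors local instead of spreading them globally as PCA does.
weights, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
recon = (basis @ weights).reshape(h, w)
print(weights.shape, float(np.abs(image - recon).mean()))
```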


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Extraction of parametric human model for posture recognition using genetic algorithm

Changbo Hu; Qingfeng Yu; Yi Li; Songde Ma

We present in this paper an approach to extracting a parametric 2D human model for the purpose of estimating human posture and recognizing human activity. This task is done in two steps. In the first step, a human silhouette is extracted from a complex background under a fixed camera, using a statistical method that dynamically reconstructs the background and isolates the moving silhouette. In the second step, a genetic algorithm is used to match the silhouette of the human body to a model in a parametric shape space. To reduce the dimensionality of the search, a layered method is proposed that takes advantage of the structure of the human model. Additionally, we apply a structure-oriented Kalman filter to estimate the motion of body parts, so the initial population and parameter values in the GA can be well constrained. Experiments on real video sequences show that our method can extract the human model robustly and accurately.
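A bare-bones genetic-algorithm loop for the model-matching step might look as follows; the fitness function is a synthetic stand-in for the silhouette-overlap score, and the layered search and structure-oriented Kalman prediction from the paper are omitted.

```python
# Hedged sketch: GA search over parametric pose space.
import numpy as np

rng = np.random.default_rng(8)
n_params = 6                                  # e.g. joint angles of a 2D body model
target = rng.uniform(-1, 1, size=n_params)    # hidden "true" pose for the demo

def fitness(pose):
    """Placeholder for the silhouette overlap between model and image."""
    return -((pose - target) ** 2).sum()

pop = rng.uniform(-1, 1, size=(50, n_params))
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]              # select the fittest
    kids_a = parents[rng.integers(0, 20, size=50)]
    kids_b = parents[rng.integers(0, 20, size=50)]
    mask = rng.random((50, n_params)) < 0.5
    pop = np.where(mask, kids_a, kids_b)                 # uniform crossover
    pop += rng.normal(scale=0.05, size=pop.shape)        # mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print(np.abs(best - target).max())                       # fitting error
```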

Collaboration


Dive into Changbo Hu's collaborations.

Top Co-Authors

Matthew Turk (Carnegie Mellon University)
Songde Ma (Chinese Academy of Sciences)
Ya Chang (University of California)
Jing Xiao (Carnegie Mellon University)
Takeo Kanade (Carnegie Mellon University)
Hanqing Lu (Chinese Academy of Sciences)