Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Xiujuan Chai is active.

Publication


Featured research published by Xiujuan Chai.


IEEE Transactions on Image Processing | 2007

Locally Linear Regression for Pose-Invariant Face Recognition

Xiujuan Chai; Shiguang Shan; Xilin Chen; Wen Gao

The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, and is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a novel yet simple and efficient locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch to predict its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the eigen light-field method.
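The per-patch step above is ordinary least-squares regression from nonfrontal to frontal patch vectors. A minimal sketch of that core idea, with made-up dimensions and a ridge term added for numerical stability (function names and the regularisation are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def fit_patch_regressor(X_nonfrontal, X_frontal, reg=1e-3):
    """Least-squares linear mapping W from non-frontal to frontal patch
    vectors, so that X_frontal is approximately X_nonfrontal @ W.
    X_nonfrontal, X_frontal: (n_samples, patch_dim) training pairs."""
    d = X_nonfrontal.shape[1]
    A = X_nonfrontal.T @ X_nonfrontal + reg * np.eye(d)  # ridge-regularised normal equations
    B = X_nonfrontal.T @ X_frontal
    return np.linalg.solve(A, B)

def predict_frontal_patches(patches, W):
    """Apply the learned linear mapping to each non-frontal patch."""
    return patches @ W

# Toy demonstration: recover a known linear mapping from synthetic pairs.
rng = np.random.default_rng(0)
W_true = rng.normal(size=(16, 16))
X = rng.normal(size=(200, 16))          # non-frontal patch vectors
Y = X @ W_true                          # their frontal counterparts
W = fit_patch_regressor(X, Y, reg=1e-6)
Y_hat = predict_frontal_patches(X, W)
```

In the full method this regression runs over densely sampled overlapping patches, whose predictions are then blended into the virtual frontal view.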


European Conference on Computer Vision | 2012

Morphable displacement field based image matching for face recognition across pose

Shaoxin Li; Xin Liu; Xiujuan Chai; Haihong Zhang; Shihong Lao; Shiguang Shan

Fully automatic Face Recognition Across Pose (FRAP) is one of the most desirable yet most challenging tasks in the face recognition field. Matching a pair of face images in different poses can be converted into matching their pixels that correspond to the same semantic facial point. Following this idea, given two images G and P in different poses, we propose a novel method, named Morphable Displacement Field (MDF), to match G with P's virtual view under G's pose. By formulating MDF as a convex combination of a number of template displacement fields generated from a 3D face database, our model satisfies both global conformity and local consistency. We further present an approximate but effective solution of the proposed MDF model, named implicit Morphable Displacement Field (iMDF), which synthesizes the virtual view implicitly via an MDF by minimizing the matching residual. This formulation not only avoids the intractable optimization of the high-dimensional displacement field but also facilitates a constrained quadratic optimization. The proposed method works well even when only 2 facial landmarks are labeled, which makes it especially suitable for fully automatic FRAP systems. Extensive evaluations on the FERET, PIE and Multi-PIE databases show considerable improvement over state-of-the-art FRAP algorithms under both semi-automatic and fully automatic evaluation protocols.
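The two MDF ingredients, a convex combination of template displacement fields and a matching residual to minimize, can be sketched as follows. This is a toy illustration with nearest-neighbour warping and hand-picked weights, not the paper's constrained quadratic solver:

```python
import numpy as np

def combine_fields(templates, alpha):
    """Convex combination of template displacement fields.
    templates: (K, H, W, 2) array; alpha: (K,) weights with
    alpha >= 0 and sum(alpha) == 1 (the MDF constraint)."""
    return np.tensordot(alpha, templates, axes=1)          # (H, W, 2)

def matching_residual(G, P, field):
    """Sum of squared differences between G and P warped by the
    displacement field (nearest-neighbour sampling for simplicity)."""
    H, W = G.shape
    ys, xs = np.mgrid[0:H, 0:W]
    yy = np.clip(np.rint(ys + field[..., 0]).astype(int), 0, H - 1)
    xx = np.clip(np.rint(xs + field[..., 1]).astype(int), 0, W - 1)
    return float(((G - P[yy, xx]) ** 2).sum())

# Toy check: P is G shifted right by one pixel, and the template set
# contains the zero field and the field undoing that shift.
rng = np.random.default_rng(0)
G = rng.normal(size=(8, 8))
P = np.roll(G, 1, axis=1)
templates = np.zeros((2, 8, 8, 2))
templates[1, ..., 1] = 1.0                                  # dx = +1 everywhere
r_zero = matching_residual(G, P, combine_fields(templates, np.array([1.0, 0.0])))
r_shift = matching_residual(G, P, combine_fields(templates, np.array([0.0, 1.0])))
```

The actual method searches over the weights alpha under the simplex constraint rather than evaluating fixed candidates.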


International Conference on Acoustics, Speech, and Signal Processing | 2003

Virtual face image generation for illumination and pose insensitive face recognition

Wen Gao; Shiguang Shan; Xiujuan Chai; Xiaowei Fu

Face recognition has attracted much attention in the past decades for its wide potential applications, and much progress has been made in recent years. However, specialized evaluations of the state of the art, covering both academic algorithms and commercial systems, show that the performance of most current recognition technologies degrades significantly under variations of illumination and/or pose. To address these problems, providing multiple training samples to the recognition system is a rational choice. However, enough samples are not always available in many practical applications. An alternative is to augment the training set by generating virtual views from a single face image, that is, by relighting the given face image or synthesizing novel views of it. Based on this strategy, this paper presents a ratio-image based face relighting method and a face re-rotating approach based on linear shape prediction and image warping. To evaluate the effect of the additional virtual face images, preliminary experiments are conducted using our subspace-based method as the face recognition approach, which shows impressive improvement compared with standard benchmark face recognition methods.
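The ratio-image idea can be sketched in a few lines: the lighting change observed on an aligned reference pair is transferred to a new face by a pixelwise ratio. A minimal sketch under a Lambertian-style multiplicative-lighting assumption (the function name and the epsilon guard are illustrative, not the paper's exact formulation):

```python
import numpy as np

def ratio_relight(face, ref_canonical, ref_target, eps=1e-6):
    """Ratio-image relighting sketch: multiply the input face by the
    pixelwise ratio of a reference face under the target lighting to
    the same reference under canonical lighting. All images are
    aligned float arrays of identical shape; eps avoids division by 0."""
    return face * (ref_target / (ref_canonical + eps))

# Toy check: a smooth multiplicative lighting field applied to the
# reference should transfer to the new face almost exactly.
rng = np.random.default_rng(0)
face = rng.uniform(0.5, 1.0, size=(16, 16))
ref = rng.uniform(0.5, 1.0, size=(16, 16))
light = 0.5 + 0.5 * np.linspace(0, 1, 16)[None, :]   # left-to-right gradient
relit = ratio_relight(face, ref, ref * light)
```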


International Conference on Multimedia and Expo | 2009

Robust hand gesture analysis and application in gallery browsing

Xiujuan Chai; Yikai Fang; Kongqiao Wang

This paper presents a robust hand gesture analysis method using 3D depth data. Our scheme focuses on accurate hand segmentation by eliminating the negative effect of the forearm. In general Human Computer Interaction (HCI) tasks, the assumption usually holds that the hand is closer to the camera than the forearm. Therefore, the precise hand region can be obtained in real time by fusing hand geometric features with the 3D depth information. Moreover, a robust hand gesture recognition method, which combines global structure information with local texture variation, is included in our gesture analysis framework. The elaborate hand segmentation makes the subsequent recognition problem much easier and yields more accurate recognition results. Experimental results convincingly show the effectiveness of the proposed gesture analysis strategy. Furthermore, a concrete application scenario, gesture-controlled picture gallery browsing, is implemented successfully.
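The depth assumption above lends itself to a very small segmentation sketch: within the detected hand+forearm region, keep only the pixels close to the nearest depth reading. The margin value and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def segment_hand(depth, region_mask, margin=40.0):
    """Depth-based hand segmentation sketch: within the hand+forearm
    region, keep only pixels within `margin` (mm) of the nearest point,
    assuming the hand is closer to the sensor than the forearm."""
    valid = region_mask & (depth > 0)          # ignore missing readings
    if not valid.any():
        return np.zeros_like(region_mask)
    nearest = depth[valid].min()
    return valid & (depth <= nearest + margin)

# Toy check: hand at ~800 mm, forearm at ~900 mm.
depth = np.zeros((6, 10))
depth[:, :5] = 800.0          # hand pixels
depth[:, 5:] = 900.0          # forearm pixels
mask = np.ones((6, 10), dtype=bool)
hand = segment_hand(depth, mask)
```

The paper additionally fuses geometric features of the hand with this depth cue; the sketch shows the depth half only.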


International Conference on Automatic Face and Gesture Recognition | 2006

Local Linear Regression (LLR) for Pose Invariant Face Recognition

Xiujuan Chai; Shiguang Shan; Xilin Chen; Wen Gao

The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, which is well known as one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given non-frontal view to obtain a virtual gallery/probe face. By formulating this kind of solution as a prediction problem, this paper proposes a simple but efficient novel local linear regression (LLR) method, which can generate the virtual frontal view from a given non-frontal face image. The proposed LLR is inspired by the observation that corresponding local facial regions of a frontal and non-frontal view pair satisfy the linear assumption much better than the whole face region. This is easily explained by the fact that a 3D face shape is composed of many local planar surfaces, which naturally satisfy a linear model under imaging projection. In LLR, we simply partition the whole non-frontal face image into multiple local patches and apply linear regression to each patch to predict its virtual frontal patch. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over competing methods.


International Conference on Computer Vision | 2013

Cascaded Shape Space Pruning for Robust Facial Landmark Detection

Xiaowei Zhao; Shiguang Shan; Xiujuan Chai; Xilin Chen

In this paper, we propose a novel cascaded face shape space pruning algorithm for robust facial landmark detection. By progressively excluding incorrect candidate shapes, our algorithm can accurately and efficiently achieve the globally optimal shape configuration. Specifically, individual landmark detectors are first applied to eliminate wrong candidates for each landmark. Then, the candidate shape space is further pruned by jointly removing incorrect shape configurations. To this end, a discriminative structure classifier is designed to assess the candidate shape configurations. Based on the learned discriminative structure classifier, an efficient shape space pruning strategy is proposed to quickly reject most incorrect candidate shapes while preserving the true shape. The proposed algorithm is carefully evaluated on a large set of real-world face images. In addition, comparison results on the publicly available BioID and LFW face databases demonstrate that our algorithm outperforms several state-of-the-art algorithms.
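The joint pruning step can be illustrated with a brute-force stand-in: enumerate shape configurations (one candidate per landmark), score each with a structure classifier, and keep the best. The paper's cascade avoids this exhaustive enumeration; the sketch below, with a closeness-to-template score standing in for the learned classifier, only shows the scoring-and-pruning idea:

```python
import numpy as np
from itertools import product

def prune_shapes(candidates, score_fn, keep=5):
    """Shape space pruning sketch: enumerate candidate shape
    configurations (one candidate per landmark), score each with a
    structure classifier, and keep only the best-scoring shapes.
    candidates: list of (n_i, 2) arrays of candidate points per landmark."""
    shapes = [np.stack(pts) for pts in product(*candidates)]
    scores = np.array([score_fn(s) for s in shapes])
    best = np.argsort(scores)[::-1][:keep]      # highest scores first
    return [shapes[i] for i in best]

# Toy check with a stand-in classifier: score = closeness to a template.
template = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 5.0]])
candidates = [np.stack([template[i], template[i] + 7.0]) for i in range(3)]
top = prune_shapes(candidates, lambda s: -np.linalg.norm(s - template), keep=1)
```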


Neurocomputing | 2013

A novel feature descriptor based on biologically inspired feature for head pose estimation

Bingpeng Ma; Xiujuan Chai; Tianjiang Wang

This paper proposes a novel method to improve the accuracy of head pose estimation. Since biologically inspired features (BIF) have been demonstrated to be both effective and efficient for many visual tasks, we argue that BIF can be applied to the problem of head pose estimation. By combining BIF with the well-known local binary pattern (LBP) feature, we propose a novel feature descriptor named "local biologically inspired features" (LBIF). Considering that LBIF is extremely high-dimensional, ensemble-based supervised methods are applied to reduce its dimensionality while improving its discriminative ability. Results obtained from the evaluation on two different databases show that the proposed LBIF feature achieves significant improvements over state-of-the-art methods of head pose estimation.
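The LBP half of LBIF is a standard operator and compact enough to sketch; the BIF half would supply the Gabor-like feature maps this operator is applied to. A minimal 8-neighbour LBP with the usual histogram descriptor (bit ordering is a common convention, not necessarily the paper's):

```python
import numpy as np

def lbp_8(image):
    """Basic 8-neighbour local binary pattern over the interior pixels:
    each neighbour >= the centre contributes one bit of an 8-bit code."""
    H, W = image.shape
    c = image[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise neighbours
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        n = image[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (n >= c).astype(int) << bit
    return code

def lbp_histogram(image, bins=256):
    """Normalised histogram of LBP codes, the usual region descriptor."""
    h, _ = np.histogram(lbp_8(image), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)
```

On a flat region every neighbour ties with the centre, so all eight bits fire and every code is 255.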


Pacific Rim Conference on Multimedia | 2003

Pose normalization for robust face recognition based on statistical affine transformation

Xiujuan Chai; Shiguang Shan; Wen Gao

This paper describes a framework for pose-invariant face recognition using a pose alignment method. The main idea is to normalize a face view in depth to the frontal view as the input to the face recognition framework. Concretely, an input face image is first normalized using the iris information, and then a pose subspace algorithm is employed to perform pose estimation. In this pose alignment algorithm, the face region is divided into three rectangles with different mapping parameters, so the affine transformation parameters associated with the different poses can be used to align the input pose image to the frontal view. To evaluate this algorithm objectively, the views after pose alignment are fed into a frontal face recognition system. Experimental results show better performance, with the recognition rate increased by 17.75% on average for poses rotated within 30 degrees.
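The per-rectangle alignment step is an affine warp of image coordinates. A minimal inverse-mapping sketch, with a generic 2x2 matrix A and offset t standing in for the pose-specific parameters the paper estimates (names and nearest-neighbour sampling are illustrative assumptions):

```python
import numpy as np

def affine_warp(image, A, t):
    """Inverse-mapping affine warp (nearest neighbour): each output
    pixel p samples the input at A @ p + t; out-of-range samples are 0.
    A: 2x2 matrix, t: length-2 offset, in (row, col) coordinates."""
    H, W = image.shape
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([ys.ravel(), xs.ravel()]).astype(float)   # 2 x (H*W)
    src = A @ coords + np.asarray(t, dtype=float)[:, None]
    sy = np.rint(src[0]).astype(int)
    sx = np.rint(src[1]).astype(int)
    ok = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
    flat = out.ravel()                       # view into out
    flat[ok] = image[sy[ok], sx[ok]]
    return out
```

With A = I and t = (0, 1) this reduces to a one-pixel horizontal shift, which is a convenient sanity check.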


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Fast sign language recognition benefited from low rank approximation

Hanjie Wang; Xiujuan Chai; Yu Zhou; Xilin Chen

This paper proposes a framework based on Hidden Markov Models (HMMs) that benefits from a low rank approximation of the original sign videos in two respects. First, based on the observation that most visual information of a sign sequence typically concentrates on a limited number of key frames, we apply an online low rank approximation of sign videos, for the first time, to select the key frames. Second, rather than fixing the number of hidden states for a large vocabulary of varied signs, we further take advantage of the low rank approximation to determine it independently for each sign to optimise predictions. With the key frame selection and the sign-dependent number of hidden states, an advanced HMM-based framework for Sign Language Recognition (SLR) is proposed, denoted Light-HMMs (because of the fewer frames and properly estimated hidden states). With the Kinect sensor, RGB-D data is fully exploited for the feature representation. In each frame, we adopt the Skeleton Pair feature to characterize the motion and extract Histograms of Oriented Gradients as the feature of the hand posture appearance. The proposed framework achieves efficient computation and an even better classification accuracy. Extensive experiments are conducted on large-vocabulary sign datasets with up to 1000 classes of signs, and encouraging results are obtained.
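One simple batch stand-in for low-rank-based key-frame selection (the paper's scheme is online and more involved) is to rank frames by their leverage scores in a truncated SVD of the frame-feature matrix: frames that carry the most weight in the top singular subspace are the least redundant. All names below are illustrative:

```python
import numpy as np

def key_frames_by_leverage(frames, rank, k):
    """Key-frame selection sketch via a low rank approximation: compute
    the SVD of the frame-feature matrix and take the k frames with the
    largest leverage scores in the top-`rank` subspace.
    frames: (n_frames, feat_dim) matrix of per-frame feature vectors."""
    U, _, _ = np.linalg.svd(frames, full_matrices=False)
    leverage = (U[:, :rank] ** 2).sum(axis=1)   # row weights in top subspace
    return np.sort(np.argsort(leverage)[::-1][:k])

# Toy check: 10 frames that are near-duplicates of one pose, except
# frames 3 and 7, which are distinct; those should be the key frames.
rng = np.random.default_rng(0)
base = rng.normal(size=20)
frames = np.tile(base, (10, 1)) + 1e-3 * rng.normal(size=(10, 20))
frames[3] = rng.normal(size=20)
frames[7] = rng.normal(size=20)
keys = key_frames_by_leverage(frames, rank=3, k=2)
```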


Neurocomputing | 2014

CovGa: A novel descriptor based on symmetry of regions for head pose estimation

Bingpeng Ma; Annan Li; Xiujuan Chai; Shiguang Shan

This paper proposes a novel method to estimate head yaw rotation using the symmetry of regions. We argue that the symmetry of 2D regions located in the same horizontal row is more intrinsically relevant to the yaw rotation of the head than the symmetry of 1D signals, while at the same time being insensitive to the identity of the face. Specifically, the proposed method relies on the effective combination of Gabor filters and covariance descriptors. We first extract the multi-scale and multi-orientation Gabor representations of the input face image, and then use covariance descriptors to compute the symmetry between two regions in terms of the Gabor representations under the same scale and orientation. Since the covariance matrix can alleviate the influence of rotation and illumination, the proposed method is robust to such variations. In addition, the proposed method is further improved by combining it with a metric learning method named KISS metric learning (KISSME). Experiments on four challenging databases demonstrate that the proposed method outperforms the state of the art.
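The covariance-descriptor symmetry measure can be sketched compactly: build a covariance descriptor for each region from its per-pixel feature vectors, mirror one region, and compare the two matrices. The plain Frobenius norm below is a simplification; a Riemannian metric on covariance matrices is the natural drop-in for the paper's setting:

```python
import numpy as np

def region_covariance(feature_maps):
    """Covariance descriptor of a region: the covariance of the
    per-pixel feature vectors (e.g. Gabor magnitudes at several scales
    and orientations). feature_maps: (d, h, w) stack over the region."""
    d = feature_maps.shape[0]
    return np.cov(feature_maps.reshape(d, -1))

def symmetry_distance(left, right):
    """Symmetry between a region and its horizontally opposite region,
    measured as the Frobenius distance between covariance descriptors
    after mirroring the right region."""
    C1 = region_covariance(left)
    C2 = region_covariance(right[..., ::-1])   # mirror the right region
    return float(np.linalg.norm(C1 - C2, ord='fro'))

# Toy check: mirrored halves of a symmetric feature stack score 0,
# unrelated halves score higher (as a frontal vs. rotated head would).
rng = np.random.default_rng(0)
half = rng.normal(size=(4, 8, 5))
d_sym = symmetry_distance(half, half[..., ::-1])
d_asym = symmetry_distance(half, rng.normal(size=(4, 8, 5)))
```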

Collaboration


Dive into Xiujuan Chai's collaborations.

Top Co-Authors

Shiguang Shan, Chinese Academy of Sciences
Xilin Chen, Chinese Academy of Sciences
Bingpeng Ma, Chinese Academy of Sciences
Hanjie Wang, Chinese Academy of Sciences
Xiaowei Zhao, Chinese Academy of Sciences
Fang Yin, Chinese Academy of Sciences
Xin Liu, Harbin Institute of Technology