
Publication


Featured research published by Xiaozhao Fang.


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Data Uncertainty in Face Recognition

Yong Xu; Xiaozhao Fang; Xuelong Li; Jiang Yang; Jane You; Hong Liu; Shaohua Teng

The image of a face varies with illumination, pose, and facial expression, so a single face image carries high uncertainty as a representation of the face. In this sense, a face image is just one observation and should not be regarded as an absolutely accurate representation of the face. Because more face images from the same person provide more observations of the face, they can be useful for reducing the uncertainty of the representation and improving the accuracy of face recognition. However, in a real-world face recognition system, a subject usually has only a limited number of available face images, so the uncertainty remains high. In this paper, we attempt to improve face recognition accuracy by reducing this uncertainty. First, we reduce the uncertainty of the face representation by synthesizing virtual training samples. Then, we select useful training samples that are similar to the test sample from the set of all original and synthesized virtual training samples. Moreover, we state a theorem that determines the upper bound of the number of useful training samples. Finally, we devise a representation approach based on the selected useful training samples to perform face recognition. Experimental results on five widely used face databases demonstrate that our approach not only obtains high face recognition accuracy but also has lower computational complexity than other state-of-the-art approaches.
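The selection-plus-representation pipeline described above can be illustrated with a short numpy sketch. Mirroring as the virtual-sample generator, Euclidean nearest-neighbour selection, the fixed number of useful samples, and the helper name `classify` are all illustrative assumptions; the paper's own synthesis scheme and its theoretical upper bound are not reproduced here.

```python
import numpy as np

def classify(train, labels, test, n_useful=40, img_shape=(32, 32)):
    """Hedged sketch: augment with mirrored virtual samples, keep the
    training samples closest to the test sample, then classify by
    class-wise reconstruction residual of a least-squares representation."""
    # Virtual samples: horizontally mirrored faces (an illustrative choice).
    mirrored = np.array([x.reshape(img_shape)[:, ::-1].ravel() for x in train])
    X = np.vstack([train, mirrored])          # (2n, d)
    y = np.concatenate([labels, labels])

    # Select the training samples most similar to the test sample.
    idx = np.argsort(np.linalg.norm(X - test, axis=1))[:n_useful]
    D, yD = X[idx].T, y[idx]                  # dictionary of useful samples

    # Represent the test sample as a linear combination of selected samples.
    coef, *_ = np.linalg.lstsq(D, test, rcond=None)

    # Assign the class whose samples contribute the smallest residual.
    residuals = {c: np.linalg.norm(test - D[:, yD == c] @ coef[yD == c])
                 for c in np.unique(yD)}
    return min(residuals, key=residuals.get)
```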


IEEE Transactions on Image Processing | 2016

Discriminative Transfer Subspace Learning via Low-Rank and Sparse Representation

Yong Xu; Xiaozhao Fang; Jian Wu; Xuelong Li; David Zhang

In this paper, we address the problem of unsupervised domain transfer learning, in which no labels are available in the target domain. We use a transformation matrix to transfer both the source and target data to a common subspace, where each target sample can be represented by a combination of source samples such that the samples from different domains can be well interlaced. In this way, the discrepancy between the source and target domains is reduced. By imposing joint low-rank and sparse constraints on the reconstruction coefficient matrix, the global and local structures of the data can be preserved. To enlarge the margins between different classes as much as possible and provide more freedom to diminish the discrepancy, a flexible linear classifier (projection) is obtained by learning a non-negative label relaxation matrix that allows the strict binary label matrix to relax into a slack variable matrix. Our method can avoid potentially negative transfer by using a sparse matrix to model the noise and, thus, is more robust to different types of noise. We formulate our problem as a constrained low-rankness and sparsity minimization problem and solve it by the inexact augmented Lagrange multiplier method. Extensive experiments on various visual domain adaptation tasks show the superiority of the proposed method over the state-of-the-art methods. The MATLAB code of our method will be publicly available at http://www.yongxu.org/lunwen.html.
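The heavy lifting inside an inexact augmented Lagrange multiplier solver of this kind is done by two standard proximal operators: singular value thresholding for the nuclear-norm (low-rank) term and entrywise soft thresholding for the sparse term. The sketch below shows only these two generic building blocks, not the paper's full solver or its label relaxation step.

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: proximal operator of the nuclear norm,
    used for the low-rank coefficient update inside inexact ALM solvers."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_threshold(A, tau):
    """Entrywise soft thresholding: proximal operator of the l1 norm,
    used for the sparse (noise) term."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)
```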


IEEE Transactions on Image Processing | 2015

Learning a Nonnegative Sparse Graph for Linear Regression

Xiaozhao Fang; Yong Xu; Xuelong Li; Zhihui Lai; Wai Keung Wong

Previous graph-based semisupervised learning (G-SSL) methods have the following drawbacks: 1) they usually predefine the graph structure and then use it to perform label prediction, which cannot guarantee an overall optimum, and 2) they focus only on label prediction or on graph structure construction but are not competent in handling new samples. To this end, we first propose a novel nonnegative sparse graph (NNSG) learning method. Then, both label prediction and projection learning are integrated into linear regression. Finally, linear regression and graph structure learning are unified within the same framework to overcome these two drawbacks. The resulting method, named learning a NNSG for linear regression, performs linear regression and graph learning simultaneously to guarantee an overall optimum. In the learning process, the label information can be accurately propagated via the graph structure, so that the linear regression can learn a discriminative projection that better fits the sample labels and accurately classifies new samples. An effective algorithm with fast convergence is designed to solve the corresponding optimization problem. Furthermore, NNSG provides a unified perspective on a number of graph-based learning methods and linear regression methods. The experimental results show that NNSG can obtain very high classification accuracy and greatly outperforms conventional G-SSL methods, especially some conventional graph construction methods.
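For context, the two ingredients that NNSG couples, label propagation over a nonnegative graph and a linear projection fitted to the propagated labels, can each be written in closed form when the graph is fixed. The sketch below uses a fixed affinity matrix W (e.g. a k-NN affinity) purely for illustration; NNSG itself learns the graph jointly with the regression, which this sketch does not do, and the function name and parameters are our own.

```python
import numpy as np

def propagate_and_project(X, Y, W, alpha=0.99, lam=1.0):
    """Hedged sketch: closed-form label propagation on a fixed nonnegative
    graph W, followed by a ridge-style projection fitted to the
    propagated soft labels."""
    # Symmetrically normalized graph: S = D^{-1/2} W D^{-1/2}.
    deg = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    S = Dinv @ W @ Dinv

    # Label propagation in closed form: F = (I - alpha * S)^{-1} Y,
    # where rows of Y are one-hot for labeled samples and zero otherwise.
    n = X.shape[0]
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)

    # Projection mapping samples to the propagated labels, so a new sample
    # x can be classified by arg max of x @ P.
    P = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ F)
    return F, P
```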


Neurocomputing | 2014

Modified minimum squared error algorithm for robust classification and face recognition experiments

Yong Xu; Xiaozhao Fang; Qi Zhu; Yan Chen; Jane You; Hong Liu

In this paper, we improve the minimum squared error (MSE) algorithm for classification by modifying its classification rule. Unlike the conventional MSE algorithm, which first obtains the mapping that best transforms the training samples into their class labels and then exploits this mapping to predict the class label of the test sample, the modified minimum squared error classification (MMSEC) algorithm simultaneously predicts the class labels of the test sample and of the training samples nearest to it, and combines the predicted results to ultimately classify the test sample. Besides proposing, for the first time, the idea of taking advantage of the predicted class labels of the training samples for classifying the test sample, this paper devises a weighted fusion scheme to fuse the predicted class labels of the training samples and the test sample. The paper also interprets the rationale of MMSEC. As MMSEC generalizes better than the conventional MSE algorithm, it leads to more robust classification decisions. The face recognition experiments show that MMSEC does obtain very promising performance.
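A rough numpy rendering of the MMSEC decision rule is given below. The ridge-regularized MSE mapping, the number of neighbours, the fusion weights, and the helper name `mmsec_predict` are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def mmsec_predict(X, Y, x_test, k=3, lam=1e-2, w_test=0.7):
    """Hedged sketch of the MMSEC idea: predict labels for the test sample
    and for its k nearest training samples with the same MSE mapping, then
    fuse the two predictions before taking the arg max."""
    # MSE / ridge mapping from samples to the one-hot label matrix Y.
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

    # Training samples nearest to the test sample.
    idx = np.argsort(np.linalg.norm(X - x_test, axis=1))[:k]

    # Predicted soft labels for the test sample and its neighbours.
    p_test = x_test @ W
    p_neigh = (X[idx] @ W).mean(axis=0)

    # Weighted fusion of the two predictions (weights are illustrative).
    fused = w_test * p_test + (1.0 - w_test) * p_neigh
    return int(np.argmax(fused))
```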


Neurocomputing | 2015

Noise-free representation based classification and face recognition experiments

Yong Xu; Xiaozhao Fang; Jane You; Yan Chen; Hong Liu

Representation based classification has achieved promising performance in high-dimensional pattern classification problems. As we know, in real-world applications the samples are usually corrupted by noise. However, representation based classification takes only the noise in the test sample into account and is unable to deal with noise in the training samples, which adversely affects the classification result. In order to make representation based classification more suitable for real-world applications such as face recognition, we propose a new representation based classification method in this paper. This method can effectively and simultaneously reduce noise in the test and training samples. Moreover, the proposed method can reduce noise in both the original and virtual training samples and then exploit them to determine the label of the test sample. A virtual training sample is generated from an original face image and shows possible variation of the face in scale, facial pose, and expression. The experimental results show that the proposed method performs very well in face recognition.
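The paper's joint noise-reduction model is not spelled out in this abstract, so the sketch below uses robust PCA solved by inexact ALM as a generic stand-in for splitting a corrupted training matrix into a clean low-rank part and a sparse error part; the clean part could then serve as the dictionary for representation based classification. This is explicitly not the paper's own method, and the function name and defaults are ours.

```python
import numpy as np

def rpca(D, lam=None, mu=1.0, iters=100):
    """Robust PCA via inexact ALM: split D into A (low rank) + E (sparse).
    Used here only as a stand-in for removing noise from the training
    matrix before representation based classification."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    Y = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank update: singular value thresholding.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding.
        R = D - A + Y / mu
        E = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual variable update.
        Y += mu * (D - A - E)
    return A, E
```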


Neurocomputing | 2016

Low-rank representation integrated with principal line distance for contactless palmprint recognition

Lunke Fei; Yong Xu; Bob Zhang; Xiaozhao Fang; Jie Wen

Contactless palmprint recognition has recently begun to draw the attention of researchers. Different from conventional palmprint images, contactless palmprint images are captured under free conditions and usually have significant variations in translation, rotation, illumination, and even background. Conventional powerful palmprint recognition methods are not very effective for the recognition of contactless palmprints. It is known that low-rank representation (LRR) is a promising scheme for subspace clustering, owing to its success in exploring the multiple subspace structures of data. In this paper, we integrate LRR with an adaptive principal line distance for contactless palmprint recognition. The principal lines are the most distinctive features of the palmprint and can be correctly extracted in most cases; thereby, the principal line distances can be used to determine the neighbors of a palmprint image. With the principal line distance penalty, the proposed method effectively improves the clustering results of LRR by increasing the weights of the affinities among nearby samples with small principal line distances. Therefore, the weighted affinity graph identified by the proposed method is more discriminative. Extensive experiments show that the proposed method can achieve higher accuracy than both the conventional powerful palmprint recognition methods and the subspace clustering-based methods in contactless palmprint recognition. The proposed method also shows promising robustness to noisy palmprint images, and its effectiveness indicates that using LRR for contactless palmprint recognition is feasible; to the best of our knowledge, this is the first use of LRR for contactless palmprint recognition. The proposed LRRIPLD captures both the global and local structure of the whole data, shows good robustness to noise, and performs better than state-of-the-art methods.
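A compact way to see the core construction is to compute the closed-form solution of noiseless LRR (the shape-interaction matrix) and then re-weight the resulting affinities by a pairwise distance. In the sketch below, plain Euclidean distance stands in for the principal line distance, and the noiseless closed form replaces the paper's full LRR model with an error term, so this is only an approximation of the idea; the function name and `sigma` are ours.

```python
import numpy as np

def lrr_affinity_with_distance(X, sigma=1.0, rank_tol=1e-10):
    """Hedged sketch: noiseless LRR closed form (min ||Z||_* s.t. X = XZ
    gives Z = V_r V_r^T for the skinny SVD X = U S V^T), followed by an
    affinity re-weighted by pairwise distances. Columns of X are samples."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = int((s > rank_tol).sum())
    Z = Vt[:r].T @ Vt[:r]                       # shape-interaction matrix

    # Symmetric LRR affinity.
    A = (np.abs(Z) + np.abs(Z).T) / 2.0

    # Distance penalty: nearby samples (small distance) get larger weights.
    diff = X.T[:, None, :] - X.T[None, :, :]
    dist = np.linalg.norm(diff, axis=2)
    W = A * np.exp(-dist**2 / (2.0 * sigma**2))
    return W
```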


Pattern Recognition | 2017

Low rank representation with adaptive distance penalty for semi-supervised subspace classification

Lunke Fei; Yong Xu; Xiaozhao Fang; Jian Yang

Graph-based Semi-supervised Subspace Learning (SSL) methods treat both labeled and unlabeled data as nodes in a graph, and then instantiate edges among these nodes by weighting the affinity between the corresponding pairs of samples. Constructing a good graph to discover the intrinsic structures of the data is critical for SSL tasks such as subspace clustering and classification. Low Rank Representation (LRR) is one of the most powerful subspace clustering methods, based on which a weighted affinity graph can be constructed. Generally, adjacent samples usually lie in the same subspace, and thereby nearby points in the graph should have large edge weights. Motivated by this, in this paper we propose a novel LRR with Adaptive Distance Penalty (LRRADP) to construct a good affinity graph. The graph identified by LRRADP can not only capture the global subspace structure of the whole data but also effectively preserve the neighbor relationships among samples. Furthermore, by projecting the data set into an appropriate subspace, LRRADP can be further improved to construct a more discriminative affinity graph. Extensive experiments on different types of baseline datasets are carried out to demonstrate the effectiveness of the proposed methods. The improved method, named LRRADP2, shows impressive performance on real-world handwritten and noisy data. The MATLAB codes of the proposed methods will be available at http://www.yongxu.org/lunwen.html.
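Once an affinity graph has been learned (by LRRADP or any other method), subspace clustering typically finishes with standard spectral clustering on that graph. The sketch below shows this common final step; it is not the paper's specific pipeline, and the normalization and k-means settings are ordinary defaults rather than the authors' choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_clustering_from_affinity(W, n_clusters):
    """Hedged sketch: standard spectral clustering on a learned symmetric
    affinity graph W, the usual last step of subspace clustering."""
    # Symmetrically normalized graph Laplacian.
    deg = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(W.shape[0]) - Dinv @ W @ Dinv

    # Embed samples with the eigenvectors of the smallest eigenvalues.
    vals, vecs = np.linalg.eigh(L)
    emb = vecs[:, :n_clusters]
    emb /= np.maximum(np.linalg.norm(emb, axis=1, keepdims=True), 1e-12)

    # Cluster the embedded points.
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
```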


IEEE Transactions on Image Processing | 2017

Low-Rank Embedding for Robust Image Feature Extraction

Wai Keung Wong; Zhihui Lai; Jiajun Wen; Xiaozhao Fang; Yuwu Lu

Robustness to noise, outliers, and corruptions is an important issue in linear dimensionality reduction. When sample-specific corruptions and outliers exist, the class-specific structure or the local geometric structure is destroyed, and thus many existing methods, including the popular manifold learning-based linear dimensionality reduction methods, fail to achieve good performance in recognition tasks. In this paper, we focus on unsupervised robust linear dimensionality reduction on corrupted data by introducing the robust low-rank representation (LRR). A robust linear dimensionality reduction technique termed low-rank embedding (LRE) is thus proposed, which provides a robust image representation that uncovers the potential relationships among the images and reduces the negative influence of occlusion and corruption, so as to enhance the algorithm's robustness in image feature extraction. LRE searches for the optimal LRR and the optimal subspace simultaneously. The LRE model can be solved by alternately iterating the augmented Lagrange multiplier method and the eigendecomposition. The theoretical analysis of the algorithms, including convergence analysis and computational complexity, is presented. Experiments on some well-known databases with different corruptions show that LRE is superior to previous feature extraction methods, which indicates the robustness of the proposed method. The code of this paper can be downloaded from http://www.scholat.com/laizhihui.
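LRE alternates between an ALM update of the low-rank representation and an eigendecomposition that yields the projection. The sketch below shows only a plausible version of the eigendecomposition half, assuming a low-rank representation Z is already available; the ALM updates and the paper's exact objective are not reproduced, and the function name is ours.

```python
import numpy as np

def projection_from_reconstruction(X, Z, dim):
    """Hedged sketch of the eigendecomposition half of an LRE-style
    alternation: given data X (columns are samples) and a low-rank
    representation Z, take the leading eigenvectors of the reconstructed
    covariance as the projection."""
    A = X @ Z                                   # low-rank reconstruction
    C = A @ A.T                                 # covariance-like matrix
    vals, vecs = np.linalg.eigh(C)
    P = vecs[:, np.argsort(vals)[::-1][:dim]]   # top-`dim` eigenvectors
    return P                                    # extract features as P.T @ X
```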


IEEE Transactions on Neural Networks | 2018

Regularized Label Relaxation Linear Regression

Xiaozhao Fang; Yong Xu; Xuelong Li; Zhihui Lai; Wai Keung Wong; Bingwu Fang

Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, which are, respectively, based on
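The label relaxation mechanism itself has a simple closed-form update that is worth seeing in code. The sketch below implements that mechanism with a plain ridge penalty and omits the class compactness graph regularizer, so it illustrates the relaxation idea rather than the paper's full method; the function name and defaults are ours.

```python
import numpy as np

def label_relaxation_lr(X, Y, lam=1e-2, iters=30):
    """Hedged sketch of label relaxation: the binary label matrix Y is
    relaxed to Y + B * M with M >= 0, where B is +1 for the true class and
    -1 otherwise, so correct-class outputs may grow above 1 and wrong-class
    outputs may drop below 0, enlarging the margins between classes."""
    B = np.where(Y > 0, 1.0, -1.0)
    M = np.zeros_like(Y)
    I = np.eye(X.shape[1])
    for _ in range(iters):
        T = Y + B * M                                   # relaxed targets
        # Ridge-regression update of the projection W.
        W = np.linalg.solve(X.T @ X + lam * I, X.T @ T)
        # Closed-form nonnegative update of the relaxation matrix M.
        M = np.maximum(B * (X @ W - Y), 0.0)
    return W
```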


IEEE Transactions on Neural Networks | 2018

Robust Latent Subspace Learning for Image Classification

Xiaozhao Fang; Shaohua Teng; Zhihui Lai; Zhaoshui He; Shengli Xie; Wai Keung Wong


Collaboration


Dive into Xiaozhao Fang's collaboration.

Top Co-Authors

Yong Xu
Harbin Institute of Technology

Lunke Fei
Harbin Institute of Technology

Xuelong Li
Chinese Academy of Sciences

Shaohua Teng
Guangdong University of Technology

Wai Keung Wong
Hong Kong Polytechnic University

Na Han
Guangdong University of Technology

Jie Wen
Harbin Institute of Technology

Wei Zhang
Guangdong University of Technology