

Publication


Featured research published by Zhifeng Li.


European Conference on Computer Vision | 2016

A Discriminative Feature Learning Approach for Deep Face Recognition

Yandong Wen; Kaipeng Zhang; Zhifeng Li; Yu Qiao

Convolutional neural networks (CNNs) have been widely used in the computer vision community, significantly improving the state of the art. In most of the available CNNs, the softmax loss function is used as the supervision signal to train the deep model. To enhance the discriminative power of the deeply learned features, this paper proposes a new supervision signal, called center loss, for the face recognition task. Specifically, the center loss simultaneously learns a center for the deep features of each class and penalizes the distances between the deep features and their corresponding class centers. More importantly, we prove that the proposed center loss function is trainable and easy to optimize in CNNs. With the joint supervision of softmax loss and center loss, we can train robust CNNs to obtain deep features with the two key learning objectives, inter-class dispersion and intra-class compactness, which are essential to face recognition. It is encouraging to see that our CNNs (with such joint supervision) achieve state-of-the-art accuracy on several important face recognition benchmarks: Labeled Faces in the Wild (LFW), YouTube Faces (YTF), and the MegaFace Challenge. In particular, our new approach achieves the best results on MegaFace (the largest public-domain face benchmark) under the small-training-set protocol (fewer than 500,000 images and fewer than 20,000 persons), significantly improving on previous results and setting a new state of the art for both face recognition and face verification tasks.
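The center-loss idea described above can be sketched in a few lines of numpy: compute the penalty 0.5 * sum ||x_i - c_{y_i}||^2 for a mini-batch and nudge each class center toward its batch mean. This is a simplified illustration, not the paper's exact update rule (the paper's center update also normalizes by sample count differently); the learning rate `alpha` is an assumed hyperparameter.

```python
import numpy as np

def center_loss(features, labels, centers, alpha=0.5):
    """Simplified sketch of center loss.

    features: (N, d) deep features for a mini-batch
    labels:   (N,) integer class labels
    centers:  (C, d) current per-class centers, updated in place
    alpha:    assumed center learning rate
    Returns 0.5 * sum ||x_i - c_{y_i}||^2.
    """
    diffs = features - centers[labels]          # x_i - c_{y_i}
    loss = 0.5 * np.sum(diffs ** 2)
    # Move each class center toward the mean of its batch features.
    for c in np.unique(labels):
        mask = labels == c
        delta = np.mean(centers[c] - features[mask], axis=0)
        centers[c] -= alpha * delta
    return loss
```

In training, this penalty would be added to the softmax loss with a balancing weight, giving the joint supervision the abstract describes.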


IEEE Signal Processing Letters | 2016

Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks

Kaipeng Zhang; Zhanpeng Zhang; Zhifeng Li; Yu Qiao

Face detection and alignment in unconstrained environments are challenging due to various poses, illuminations, and occlusions. Recent studies show that deep learning approaches can achieve impressive performance on these two tasks. In this letter, we propose a deep cascaded multitask framework that exploits the inherent correlation between detection and alignment to boost their performance. In particular, our framework leverages a cascaded architecture with three stages of carefully designed deep convolutional networks to predict face and landmark locations in a coarse-to-fine manner. In addition, we propose a new online hard sample mining strategy that further improves performance in practice. Our method achieves superior accuracy over state-of-the-art techniques on the challenging Face Detection Data Set and Benchmark (FDDB) and WIDER FACE benchmarks for face detection, and the Annotated Facial Landmarks in the Wild (AFLW) benchmark for face alignment, while keeping real-time performance.
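The online hard sample mining strategy mentioned above amounts to back-propagating only the highest-loss samples in each mini-batch. A minimal sketch, assuming the keep ratio is a tunable hyperparameter:

```python
import numpy as np

def hard_sample_indices(losses, keep_ratio=0.7):
    """Online hard sample mining: keep only the hardest (highest-loss)
    fraction of a mini-batch for the backward pass.

    losses: (N,) per-sample losses from the forward pass
    keep_ratio: fraction of samples to retain (assumed hyperparameter)
    Returns the sorted indices of the retained samples.
    """
    n_keep = max(1, int(len(losses) * keep_ratio))
    order = np.argsort(losses)[::-1]   # descending by loss
    return np.sort(order[:n_keep])
```

Only the gradients of the selected samples would then contribute to the update, focusing training on the examples the current model handles worst.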


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2009

Nonparametric Discriminant Analysis for Face Recognition

Zhifeng Li; Dahua Lin; Xiaoou Tang

In this paper, we develop a new framework for face recognition based on nonparametric discriminant analysis (NDA) and multi-classifier integration. Traditional LDA-based methods suffer from a fundamental limitation originating from the parametric nature of scatter matrices, which are based on the Gaussian distribution assumption. The performance of these methods notably degrades when the actual distribution is non-Gaussian. To address this problem, we propose a new formulation of scatter matrices to extend two-class nonparametric discriminant analysis to multi-class cases. We then develop two improved multi-class NDA-based algorithms, NSA and NFA, each with two complementary methods based on the principal space and the null space of the intra-class scatter matrix, respectively. Compared to NSA, NFA is more effective in utilizing classification boundary information. To exploit the complementary nature of the two kinds of NFA (PNFA and NNFA), we finally develop a dual NFA-based multi-classifier fusion framework employing the overcomplete Gabor representation to boost recognition performance. We show the improvements of the new algorithms over traditional subspace methods through comparative experiments on two challenging face databases, the Purdue AR database and the XM2VTS database.


IEEE Transactions on Information Forensics and Security | 2011

A Discriminative Model for Age Invariant Face Recognition

Zhifeng Li; Unsang Park; Anil K. Jain

Aging variation poses a serious problem for automatic face recognition systems. Most face recognition studies that have addressed the aging problem focus on age estimation or aging simulation. Designing an appropriate feature representation and an effective matching framework for age-invariant face recognition remains an open problem. In this paper, we propose a discriminative model to address face matching in the presence of age variation. In this framework, we first represent each face with a densely sampled local feature description scheme, in which scale-invariant feature transform (SIFT) and multi-scale local binary patterns (MLBP) serve as the local descriptors. By densely sampling these two kinds of local descriptors from the entire facial image, sufficient discriminatory information, including the distribution of edge directions in the face image (which is expected to be age invariant), can be extracted for further analysis. Since both SIFT-based and MLBP-based local features span high-dimensional feature spaces, to avoid overfitting we develop an algorithm called multi-feature discriminant analysis (MFDA) to process the two local feature spaces in a unified framework. MFDA is an extension and improvement of LDA that uses multiple features combined with two different random sampling methods in the feature and sample spaces. By randomly sampling the training set as well as the feature space, multiple LDA-based classifiers are constructed and then combined to generate a robust decision via a fusion rule. Experimental results show that our approach outperforms a state-of-the-art commercial face recognition engine on two public-domain face aging data sets: MORPH and FG-NET. We also compare the performance of the proposed discriminative model with a generative aging model. A fusion of the discriminative and generative models further improves face matching accuracy in the presence of aging.
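The random-sampling ensemble at the heart of MFDA, building base classifiers on random feature subsets and resampled training data, then fusing their votes, can be sketched as follows. A nearest-class-mean rule stands in for the LDA base classifier here (an assumption of this sketch, chosen to keep it self-contained); the classifier count and feature fraction are illustrative.

```python
import numpy as np

def random_sampling_ensemble(X, y, X_test, n_classifiers=5, feat_frac=0.5, seed=0):
    """Ensemble in the spirit of MFDA: each base classifier sees a random
    subset of features and a per-class bootstrap of the training samples;
    decisions are fused by majority vote."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    classes = np.unique(y)
    votes = np.zeros((len(X_test), len(classes)), dtype=int)
    for _ in range(n_classifiers):
        # Random sampling in feature space.
        feats = np.sort(rng.choice(d, size=max(1, int(d * feat_frac)), replace=False))
        means = []
        for c in classes:
            idx = np.flatnonzero(y == c)
            boot = rng.choice(idx, size=len(idx), replace=True)  # sample space
            means.append(X[boot][:, feats].mean(axis=0))
        means = np.stack(means)
        # Nearest-class-mean decision on the sampled features.
        dists = np.linalg.norm(X_test[:, feats][:, None, :] - means[None], axis=2)
        preds = np.argmin(dists, axis=1)
        votes[np.arange(len(X_test)), preds] += 1
    return classes[np.argmax(votes, axis=1)]
```

The double randomization decorrelates the base classifiers, which is what makes the fused decision more robust than any single high-dimensional classifier.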


International Conference on Computer Vision | 2013

Hidden Factor Analysis for Age Invariant Face Recognition

Dihong Gong; Zhifeng Li; Dahua Lin; Jianzhuang Liu; Xiaoou Tang

Age-invariant face recognition has received increasing attention due to its great potential in real-world applications. In spite of the great progress in face recognition techniques, reliably recognizing faces across ages remains a difficult task. The facial appearance of a person changes substantially over time, resulting in significant intra-class variations. Hence, the key to tackling this problem is to separate the variation caused by aging from the person-specific features that remain stable. Specifically, we propose a new method, called Hidden Factor Analysis (HFA). This method captures the intuition above through a probabilistic model with two latent factors: an identity factor that is age-invariant and an age factor affected by the aging process. The observed appearance can then be modeled as a combination of components generated from these factors. We also develop a learning algorithm that jointly estimates the latent factors and the model parameters using an EM procedure. Extensive experiments on two well-known public-domain face aging datasets, MORPH (the largest public face aging database) and FG-NET, clearly show that the proposed method achieves notable improvement over state-of-the-art algorithms.
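The two-factor decomposition described above can be written as a linear-Gaussian observation model: a face is a mean appearance plus an identity component, an age component, and noise. The sketch below only shows the generative direction (the EM estimation of the factor loadings is omitted); all symbol names are illustrative.

```python
import numpy as np

def hfa_observe(mean, U, x_identity, V, y_age, eps=0.0):
    """HFA-style observation model: an observed face t combines an
    age-invariant identity component and an age component plus noise,
        t = mean + U @ x + V @ y + eps.
    U and V are the (learned) factor-loading matrices; x and y are the
    latent identity and age factors."""
    return mean + U @ x_identity + V @ y_age + eps
```

Recognition across ages then compares the inferred identity factors x, discarding the age component V @ y that causes the intra-class variation.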


Computer Vision and Pattern Recognition | 2004

Bayesian face recognition using support vector machine and face clustering

Zhifeng Li; Xiaoou Tang

In this paper, we first develop a direct Bayesian-based support vector machine (SVM) by combining Bayesian analysis with the SVM. Unlike traditional SVM-based face recognition methods that need to train a large number of SVMs, the direct Bayesian SVM needs only one SVM, trained to classify the face difference between intra-personal variation and extra-personal variation. However, the added simplicity means that the method has to separate two complex subspaces with one hyperplane, which affects recognition accuracy. To improve recognition performance, we develop three more Bayesian-based SVMs: the one-versus-all method, the hierarchical agglomerative clustering-based method, and the adaptive clustering method. We show the improvement of the new algorithms over traditional subspace methods through experiments on two face databases, the FERET database and the XM2VTS database.
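The single-SVM formulation works because the multi-class identity problem is recast as a binary one over face differences: a difference of two images of the same person is intra-personal, and of different people extra-personal. A minimal sketch of that data construction (the SVM training itself is omitted):

```python
import numpy as np
from itertools import combinations

def difference_samples(faces, ids):
    """Build the training set for a direct Bayesian SVM: face-difference
    vectors labeled intra-personal (same identity, label 1) or
    extra-personal (different identities, label 0). A single binary
    classifier is then trained on these differences."""
    diffs, labels = [], []
    for i, j in combinations(range(len(faces)), 2):
        diffs.append(faces[i] - faces[j])
        labels.append(1 if ids[i] == ids[j] else 0)
    return np.array(diffs), np.array(labels)
```

At test time, a probe is matched against a gallery face by classifying their difference vector, so one trained classifier covers any number of identities.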


Computer Vision and Pattern Recognition | 2005

Nonparametric subspace analysis for face recognition

Zhifeng Li; Wei Liu; Dahua Lin; Xiaoou Tang

Linear discriminant analysis (LDA) is a popular face recognition technique. However, an inherent problem with this technique stems from the parametric nature of the scatter matrix, in which the sample distribution of each class is assumed to be normal, so it tends to suffer when the actual distribution is non-normal. In this paper, a nonparametric scatter matrix is defined to replace the traditional parametric scatter matrix in order to overcome this problem. Two kinds of nonparametric subspace analysis (NSA), PNSA and NNSA, are proposed for face recognition. The former is based on the principal space of the intra-personal scatter matrix, while the latter is based on its null space. In addition, based on the complementary nature of PNSA and NNSA, we further develop a dual NSA-based classifier framework using Gabor images to further improve recognition performance. Experiments achieve near-perfect recognition accuracy (99.7%) on the XM2VTS database.
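The nonparametric replacement for the parametric scatter matrix can be illustrated concretely: instead of measuring between-class spread via class means (which assumes a normal distribution), each sample is compared to its nearest neighbor from a different class, so the matrix is built from points near the decision boundary. This simplified sketch omits the per-sample weighting that the full formulation uses:

```python
import numpy as np

def nonparametric_between_scatter(X, y):
    """Nonparametric between-class scatter: accumulate outer products of
    the difference between each sample and its nearest neighbour from a
    *different* class, instead of differences of class means.
    (Boundary-sample weights are omitted in this sketch.)"""
    n, d = X.shape
    S = np.zeros((d, d))
    for i in range(n):
        others = X[y != y[i]]
        nn = others[np.argmin(np.linalg.norm(others - X[i], axis=1))]
        diff = (X[i] - nn)[:, None]
        S += diff @ diff.T
    return S
```

Because no class mean appears, the construction makes no assumption about the shape of each class distribution, which is exactly the limitation of parametric LDA the paper targets.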


IEEE Transactions on Information Forensics and Security | 2007

Using Support Vector Machines to Enhance the Performance of Bayesian Face Recognition

Zhifeng Li; Xiaoou Tang

In this paper, we first develop a direct Bayesian-based support vector machine (SVM) by combining the Bayesian analysis with the SVM. Unlike traditional SVM-based face recognition methods that require one to train a large number of SVMs, the direct Bayesian SVM needs only one SVM, trained to classify the face difference between intrapersonal variation and extrapersonal variation. However, the additional simplicity means that the method has to separate two complex subspaces with one hyperplane, thus affecting the recognition accuracy. In order to improve the recognition performance, we develop three more Bayesian-based SVMs: the one-versus-all method, the hierarchical agglomerative clustering-based method, and the adaptive clustering method. Finally, we combine the adaptive clustering method with multilevel subspace analysis to further improve the recognition performance. We show the improvement of the new algorithms over traditional subspace methods through experiments on two face databases: the FERET database and the XM2VTS database.


Computer Vision and Pattern Recognition | 2016

Latent Factor Guided Convolutional Neural Networks for Age-Invariant Face Recognition

Yandong Wen; Zhifeng Li; Yu Qiao

While considerable progress has been made on face recognition, age-invariant face recognition (AIFR) still remains a major challenge in real-world applications of face recognition systems. The major difficulty of AIFR arises from the fact that the facial appearance is subject to significant intra-personal changes caused by the aging process over time. To address this problem, we propose a novel deep face recognition framework to learn age-invariant deep face features through a carefully designed CNN model. To the best of our knowledge, this is the first attempt to show the effectiveness of deep CNNs in advancing the state of the art in AIFR. Extensive experiments are conducted on several public-domain face aging datasets (MORPH Album 2, FG-NET, and CACD-VS) to demonstrate the effectiveness of the proposed model over the state of the art. We also verify the excellent generalization of our new model on the well-known LFW dataset.


Computer Vision and Pattern Recognition | 2015

A maximum entropy feature descriptor for age invariant face recognition

Dihong Gong; Zhifeng Li; Dacheng Tao; Jianzhuang Liu; Xuelong Li

In this paper, we propose a new approach to overcome the representation and matching problems in age-invariant face recognition. First, a new maximum entropy feature descriptor (MEFD) is developed that encodes the microstructure of facial images into a set of discrete codes in terms of maximum entropy. By densely sampling the encoded face image, sufficient discriminatory and expressive information can be extracted for further analysis. A new matching method is also developed, called identity factor analysis (IFA), to estimate the probability that two faces share the same underlying identity. The effectiveness of the framework is confirmed by extensive experimentation on two face aging datasets, MORPH (the largest public-domain face aging dataset) and FG-NET. We also conduct experiments on the well-known LFW dataset to demonstrate the excellent generalizability of our new approach.
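The "maximum entropy" aspect of an encoding like MEFD can be illustrated in one dimension: if each discrete code is used equally often, the entropy of the code distribution is maximized, which equal-frequency (quantile) quantization achieves. This is a simplified 1-D stand-in for the descriptor's learned encoder, not the paper's actual algorithm:

```python
import numpy as np

def max_entropy_codes(values, n_codes=4):
    """Equal-frequency quantization: bin boundaries are placed at quantiles
    so each discrete code is used roughly equally often, maximizing the
    entropy of the resulting code distribution."""
    # Interior quantile boundaries (n_codes - 1 of them).
    qs = np.quantile(values, np.linspace(0, 1, n_codes + 1)[1:-1])
    return np.searchsorted(qs, values, side='right')
```

A high-entropy code book spreads samples evenly across codes, so densely sampled codes carry more discriminative information than an arbitrary fixed quantization would.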

Collaboration


Dive into Zhifeng Li's collaborations.

Top Co-Authors

Yu Qiao (Chinese Academy of Sciences)
Xiaoou Tang (The Chinese University of Hong Kong)
Dihong Gong (Chinese Academy of Sciences)
Xuelong Li (Chinese Academy of Sciences)
Weiwu Jiang (The Chinese University of Hong Kong)
Dahua Lin (The Chinese University of Hong Kong)
Helen M. Meng (The Chinese University of Hong Kong)
Yandong Wen (South China University of Technology)