
Publication


Featured research published by Pengfei Zhu.


European Conference on Computer Vision | 2012

Multi-scale patch based collaborative representation for face recognition with margin distribution optimization

Pengfei Zhu; Lei Zhang; Qinghua Hu; Simon C. K. Shiu

Small sample size is one of the most challenging problems in face recognition due to the difficulty of sample collection in many real-world applications. By representing the query sample as a linear combination of training samples from all classes, the so-called collaborative representation based classification (CRC) shows very effective face recognition performance with low computational cost. However, the recognition rate of CRC will drop dramatically when the available training samples per subject are very limited. One intuitive solution to this problem is operating CRC on patches and combining the recognition outputs of all patches. Nonetheless, the setting of patch size is a non-trivial task. Considering the fact that patches on different scales can have complementary information for classification, we propose a multi-scale patch based CRC method, where the ensemble of multi-scale outputs is achieved by regularized margin distribution optimization. Our extensive experiments validate that the proposed method outperforms many state-of-the-art patch based face recognition algorithms.
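The CRC step at the core of this method has a closed-form solution. Below is a minimal numpy sketch of single-scale CRC classification; the patch decomposition and margin distribution ensemble are omitted, and the function name and toy layout are illustrative, not from the paper:

```python
import numpy as np

def crc_classify(X, labels, y, lam=0.01):
    """Collaborative representation classification (CRC): code the query y
    over ALL training samples at once with l2-regularized least squares,
    then assign the class whose own samples best reconstruct y."""
    d, n = X.shape
    # closed-form code: alpha = (X^T X + lam*I)^{-1} X^T y
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    residuals = {}
    for c in set(labels):
        idx = [i for i in range(n) if labels[i] == c]
        # class-specific reconstruction residual
        residuals[c] = np.linalg.norm(y - X[:, idx] @ alpha[idx])
    return min(residuals, key=residuals.get)
```

Each query is coded once over the whole training matrix; only the class-wise residual is computed per class, which is what keeps CRC cheap.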


Pattern Recognition | 2015

Unsupervised feature selection by regularized self-representation

Pengfei Zhu; Wangmeng Zuo; Lei Zhang; Qinghua Hu; Simon C. K. Shiu

By removing irrelevant and redundant features, feature selection aims to find a compact representation of the original features with good generalization ability. With the prevalence of unlabeled data, unsupervised feature selection has been shown to be effective in alleviating the curse of dimensionality, and is essential for comprehensive analysis and understanding of myriads of unlabeled high dimensional data. Motivated by the success of low-rank representation in subspace clustering, we propose a regularized self-representation (RSR) model for unsupervised feature selection, where each feature can be represented as a linear combination of its relevant features. By using the L2,1-norm to characterize the representation coefficient matrix and the representation residual matrix, RSR effectively selects representative features and is robust to outliers. If a feature is important, then it will participate in the representation of most other features, leading to a significant row of representation coefficients, and vice versa. Experimental analysis on synthetic and real-world data demonstrates that the proposed method can effectively identify the representative features, outperforming many state-of-the-art unsupervised feature selection methods in terms of clustering accuracy, redundancy reduction and classification accuracy.

Highlights:
- A regularized self-representation (RSR) model is proposed for unsupervised feature selection.
- An iterative reweighted least-squares algorithm is proposed to solve the RSR model.
- The proposed method shows performance superior to the state-of-the-art.
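The highlights mention an iteratively reweighted least-squares (IRLS) solver; a simplified sketch of that scheme follows, warm-started from the ridge solution. The paper's exact update and stopping rule may differ:

```python
import numpy as np

def rsr_feature_scores(X, lam=1.0, n_iter=20, eps=1e-8):
    """Score features by regularized self-representation (RSR):
    min ||X - X W||_{2,1} + lam * ||W||_{2,1}, solved by a simple
    IRLS loop. A feature that participates in representing many other
    features ends up with a large row in W, hence a high score."""
    n, d = X.shape
    # warm start from the Frobenius-regularized (ridge) solution
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ X)
    for _ in range(n_iter):
        E = X - X @ W                                         # residual, one row per sample
        ge = 1.0 / (2.0 * np.linalg.norm(E, axis=1) + eps)    # sample reweights
        gw = 1.0 / (2.0 * np.linalg.norm(W, axis=1) + eps)    # feature-row reweights
        # normal equations of the reweighted quadratic subproblem
        A = X.T @ (ge[:, None] * X) + lam * np.diag(gw)
        W = np.linalg.solve(A, X.T @ (ge[:, None] * X))
    return np.linalg.norm(W, axis=1)                          # feature importance scores
```

Features are then ranked by score and the top ones kept; the L2,1 penalty drives whole rows of W toward zero, which is what performs the selection.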


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Face recognition based on regularized nearest points between image sets

Meng Yang; Pengfei Zhu; Luc Van Gool; Lei Zhang

In this paper, a novel regularized nearest points (RNP) method is proposed for image set based face recognition. By modeling an image set as a regularized affine hull (RAH), two regularized nearest points, one on each image set's RAH, are automatically determined by an efficient iterative solver. The between-set distance of RNP is then defined by considering both the distance between the RNPs and the structure of the image sets. Compared with the recently developed sparse approximated nearest points (SANP) method, RNP has a more concise formulation, fewer variables and lower time complexity. Extensive experiments on benchmark databases (e.g., the Honda/UCSD, CMU MoBo and YouTube databases) clearly show that our proposed RNP consistently outperforms state-of-the-art methods in both accuracy and efficiency.
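A minimal sketch of the regularized-nearest-points idea under plain Euclidean distance: the two hull points follow from a single KKT linear system. The paper's set-structure normalization of the distance and its iterative solver are omitted, so this is illustrative only:

```python
import numpy as np

def rnp_distance(X, Y, lam=0.01):
    """Distance between two image sets modeled as regularized affine hulls:
    min ||X a - Y b||^2 + lam*(||a||^2 + ||b||^2)  s.t.  sum(a) = sum(b) = 1.
    Columns of X and Y are the set samples; the stationarity conditions
    plus the two affine constraints form one KKT linear system."""
    p, q = X.shape[1], Y.shape[1]
    op, oq = np.ones(p), np.ones(q)
    K = np.block([
        [2*(X.T @ X + lam*np.eye(p)), -2*(X.T @ Y), op[:, None], np.zeros((p, 1))],
        [-2*(Y.T @ X), 2*(Y.T @ Y + lam*np.eye(q)), np.zeros((q, 1)), oq[:, None]],
        [op[None, :], np.zeros((1, q)), np.zeros((1, 2))],
        [np.zeros((1, p)), oq[None, :], np.zeros((1, 2))],
    ])
    rhs = np.concatenate([np.zeros(p + q), [1.0, 1.0]])
    sol = np.linalg.solve(K, rhs)
    a, b = sol[:p], sol[p:p + q]
    return np.linalg.norm(X @ a - Y @ b)   # distance between the two nearest points
```

Classification then assigns the query set to the gallery set with the smallest such distance.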


Information Sciences | 2011

Rule learning for classification based on neighborhood covering reduction

Yong Du; Qinghua Hu; Pengfei Zhu; Peijun Ma

Rough set theory has been extensively discussed in the domains of machine learning and data mining. Pawlak's rough set theory offers a formal theoretical framework for attribute reduction and rule learning from nominal data. However, this model is not applicable to numerical data, which widely exist in real-world applications. In this work, we extend this framework to numerical feature spaces by replacing the partition of the universe with a neighborhood covering, and derive a neighborhood covering reduction based approach to extracting rules from numerical data. We first analyze the definition of covering reduction and point out its advantages and disadvantages. Then we introduce the definition of relative covering reduction and develop an algorithm to compute it. Given a feature space, we compute the neighborhood of each sample and form a neighborhood covering of the universe, then apply the relative covering reduction algorithm to the neighborhood covering, thus deriving a minimal covering rule set. Numerical experiments are presented to show the effectiveness of the proposed technique.
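An illustrative sketch of the pipeline: consistent neighborhoods are built per sample, and a greedy set cover (standing in here for the paper's relative covering reduction) shrinks the covering to a small rule set. Names and details are assumptions, not the paper's algorithm:

```python
import numpy as np

def neighborhood_covering_rules(X, y):
    """Rule learning by neighborhood covering (sketch): each sample gets
    the largest label-pure neighborhood (radius just under the distance to
    its nearest differently-labelled sample); a greedy set cover then
    reduces the covering to a few (center, radius, label) rules."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # distance to nearest "enemy" (sample with a different label)
    enemy = np.where(y[None, :] != y[:, None], D, np.inf)
    radius = 0.99 * enemy.min(axis=1)          # keep each ball label-pure
    covers = [set(np.where(D[i] <= radius[i])[0]) for i in range(n)]
    uncovered, rules = set(range(n)), []
    while uncovered:
        # greedily pick the neighborhood covering the most uncovered samples
        i = max(range(n), key=lambda j: len(covers[j] & uncovered))
        rules.append((X[i], radius[i], y[i]))
        uncovered -= covers[i]
    return rules
```

A query is then classified by the label of the rule whose ball it falls into (or the nearest rule center otherwise).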


International Conference on Computer Vision | 2013

From Point to Set: Extend the Learning of Distance Metrics

Pengfei Zhu; Lei Zhang; Wangmeng Zuo; David Zhang

Most of the current metric learning methods are proposed for point-to-point distance (PPD) based classification. In many computer vision tasks, however, we need to measure the point-to-set distance (PSD) and even set-to-set distance (SSD) for classification. In this paper, we extend the PPD based Mahalanobis distance metric learning to PSD and SSD based ones, namely point-to-set distance metric learning (PSDML) and set-to-set distance metric learning (SSDML), and solve them under a unified optimization framework. First, we generate positive and negative sample pairs by computing the PSD and SSD between training samples. Then, we characterize each sample pair by its covariance matrix, and propose a covariance kernel based discriminative function. Finally, we tackle the PSDML and SSDML problems by using standard support vector machine solvers, making the metric learning very efficient for multiclass visual classification tasks. Experiments on gender classification, digit recognition, object categorization and face recognition show that the proposed metric learning methods can effectively enhance the performance of PSD and SSD based classification.
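For a fixed metric M, a regularized point-to-set distance has a closed form; a minimal sketch follows (learning M itself, which the paper does with standard SVM solvers over covariance-kernel sample pairs, is not shown, and the regularization form is an assumption):

```python
import numpy as np

def point_to_set_dist(x, S, M, lam=1e-3):
    """Regularized point-to-set distance (PSD) under a Mahalanobis metric M:
    d(x, S)^2 = min_z (x - S z)^T M (x - S z) + lam * ||z||^2,
    where columns of S are the set's samples. The minimizing combination
    z has the closed form below (normal equations of the quadratic)."""
    z = np.linalg.solve(S.T @ M @ S + lam * np.eye(S.shape[1]), S.T @ M @ x)
    r = x - S @ z                      # residual of the best in-hull approximation
    return float(r @ M @ r + lam * z @ z)
```

A PSD-based classifier simply assigns x to the class of the nearest set; set-to-set distance generalizes this by letting both sides vary over their hulls.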


IEEE Transactions on Fuzzy Systems | 2012

A Novel Algorithm for Finding Reducts With Fuzzy Rough Sets

Degang Chen; Lei Zhang; Suyun Zhao; Qinghua Hu; Pengfei Zhu

Attribute reduction is one of the most meaningful research topics in fuzzy rough set theory, and the discernibility matrix approach is the mathematical foundation of computing reducts. When computing reducts with a discernibility matrix, we find that only the minimal elements in the discernibility matrix are sufficient and necessary. This fact motivates our idea in this paper to develop a novel algorithm to find reducts based on the minimal elements in the discernibility matrix. Relative discernibility relations of conditional attributes are defined, and minimal elements in the fuzzy discernibility matrix are characterized by the relative discernibility relations. Then, algorithms to compute minimal elements and reducts are developed in the framework of fuzzy rough sets. Experimental comparison shows that the proposed algorithms are effective.
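The minimal-element idea is easiest to see in the crisp (non-fuzzy) nominal-data case; the following is a hedged sketch of that analogue, not the paper's fuzzy algorithm:

```python
from itertools import combinations

def reduct_from_minimal_elements(X, y):
    """Crisp analogue of the minimal-element idea: build the discernibility
    matrix (for each pair of differently labelled objects, the set of
    attributes that tells them apart), keep only its MINIMAL elements
    (entries containing no other entry), then greedily pick attributes
    that hit every minimal element. The fuzzy version replaces crisp sets
    with fuzzy relations, but only the minimal elements matter there too."""
    m = len(X[0])
    entries = {frozenset(k for k in range(m) if X[i][k] != X[j][k])
               for i, j in combinations(range(len(X)), 2) if y[i] != y[j]}
    entries.discard(frozenset())
    # an entry is minimal iff no other entry is a proper subset of it
    minimal = {e for e in entries if not any(f < e for f in entries)}
    reduct, remaining = set(), set(minimal)
    while remaining:
        # greedy hitting set over the minimal elements only
        k = max(range(m), key=lambda a: sum(a in e for e in remaining))
        reduct.add(k)
        remaining = {e for e in remaining if k not in e}
    return reduct
```

Discarding non-minimal entries before the hitting-set step is exactly the saving the paper exploits: a superset entry is hit automatically once its subset is hit.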


IEEE Transactions on Information Forensics and Security | 2014

Image Set-Based Collaborative Representation for Face Recognition

Pengfei Zhu; Wangmeng Zuo; Lei Zhang; Simon C. K. Shiu; David Zhang

With the rapid development of digital imaging and communication technologies, image set-based face recognition (ISFR) is becoming increasingly important. One key issue of ISFR is how to effectively and efficiently represent the query face image set using the gallery face image sets. The set-to-set distance-based methods ignore the relationship between gallery sets, whereas representing the query set images individually over the gallery sets ignores the correlation between query set images. In this paper, we propose a novel image set-based collaborative representation and classification method for ISFR. By modeling the query set as a convex or regularized hull, we represent this hull collaboratively over all the gallery sets. With the resulting representation coefficients, the distance between the query set and each gallery set can then be calculated for classification. The proposed model naturally and effectively extends the image-based collaborative representation to an image set based one, and our extensive experiments on benchmark ISFR databases show the superiority of the proposed method to state-of-the-art ISFR methods under different set sizes in terms of both recognition rate and efficiency.


Neurocomputing | 2015

Joint representation and pattern learning for robust face recognition

Meng Yang; Pengfei Zhu; Feng Liu; Linlin Shen

Image features are a significant factor in the success of robust face recognition. Recently the sparse representation based classifier (SRC) has been widely applied to robust face recognition by using sparse representation residuals to tolerate disturbed image features (e.g., occluded pixels). In order to deal with more complicated image variations, robust representation based classifiers, which estimate feature weights (e.g., low weight values are given to pixels with big representation residuals), have attracted much attention in recent work. Although these methods have achieved improved performance by estimating feature weights independently, structured information and prior knowledge of image features are ignored, resulting in unsatisfactory performance in some challenging cases. Thus how to better learn image feature weights to fully exploit structure information and prior knowledge is still an open question in robust face recognition. In this paper, we propose a novel joint representation and pattern learning (JRPL) model, in which the feature pattern weight is learned simultaneously with the representation of the query image. In particular, a feature pattern dictionary, which captures the structured information and prior knowledge of image features, is constructed to represent the unknown feature pattern weight of a query image. An efficient algorithm to solve JRPL is also presented. Experiments on face recognition with various variations and occlusions on several benchmark datasets clearly show the advantage of the proposed JRPL in accuracy and efficiency.


Asian Conference on Computer Vision | 2014

Local Generic Representation for Face Recognition with Single Sample per Person

Pengfei Zhu; Meng Yang; Lei Zhang; Ilyong Lee

Face recognition with single sample per person (SSPP) is a very challenging task because in such a scenario it is difficult to predict the facial variations of a query sample from the gallery samples. Considering the fact that different parts of human faces have different importance to face recognition, and the fact that intra-class facial variations can be shared across different subjects, we propose a local generic representation (LGR) based framework for face recognition with SSPP. A local gallery dictionary is built by extracting the neighboring patches from the gallery dataset, while an intra-class variation dictionary is built by using an external generic dataset to predict the possible facial variations (e.g., illumination, pose, expression and disguise). LGR minimizes the total representation residual of the query sample over the local gallery dictionary and the generic variation dictionary, and it uses correntropy to measure the representation residual of each patch. Half-quadratic analysis is adopted to solve the optimization problem. LGR takes advantage of both patch based local representation and generic variation representation, showing leading performance in face recognition with SSPP.
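The correntropy residual measure with half-quadratic optimization boils down to iteratively reweighted least squares; a minimal sketch for coding a single vector y over a combined dictionary D (the local gallery and generic variation dictionary construction is omitted, and the parameter choices are illustrative):

```python
import numpy as np

def correntropy_coding(D, y, sigma=0.5, n_iter=10):
    """Half-quadratic minimization of the correntropy-induced loss
    sum_i (1 - exp(-r_i^2 / sigma^2)),  with residual r = y - D a:
    alternate (a) per-element weights w_i = exp(-r_i^2 / sigma^2) and
    (b) a weighted least-squares solve. Outlier elements (e.g. occluded
    pixels) get weight ~ 0 and stop influencing the code a."""
    a = np.linalg.lstsq(D, y, rcond=None)[0]      # plain LS initialization
    for _ in range(n_iter):
        r = y - D @ a
        w = np.exp(-(r ** 2) / sigma ** 2)        # down-weight large residuals
        Dw = w[:, None] * D                       # row-weighted dictionary
        # weighted normal equations (tiny ridge for numerical safety)
        a = np.linalg.solve(D.T @ Dw + 1e-8 * np.eye(D.shape[1]), Dw.T @ y)
    return a
```

This is the standard half-quadratic reweighting trick; LGR applies it per patch and sums the patch residuals.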


International Conference on Computer Vision | 2011

A linear subspace learning approach via sparse coding

Lei Zhang; Pengfei Zhu; Qinghua Hu; David Zhang

Linear subspace learning (LSL) is a popular approach to image recognition; it aims to reveal the essential features of high dimensional data, e.g., facial images, in a lower dimensional space by linear projection. Most LSL methods compute the statistics of the original training samples directly to learn the subspace. However, these methods do not effectively exploit the different contributions of different image components to image recognition. We propose a novel LSL approach based on sparse coding and feature grouping. A dictionary is learned from the training dataset and used to sparsely decompose the training samples. The decomposed image components are grouped into a more discriminative part (MDP) and a less discriminative part (LDP). An unsupervised criterion and a supervised criterion are then proposed to learn the desired subspace, where the MDP is preserved and the LDP is suppressed simultaneously. The experimental results on benchmark face image databases validate that the proposed methods outperform many state-of-the-art LSL schemes.

Collaboration


Dive into Pengfei Zhu's collaborations.

Top Co-Authors


Wangmeng Zuo

Harbin Institute of Technology


Lei Zhang

Hong Kong Polytechnic University


David Zhang

Hong Kong Polytechnic University


Daren Yu

Harbin Institute of Technology
