Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Minghua Wan is active.

Publication


Featured research published by Minghua Wan.


Neurocomputing | 2009

Two-dimensional local graph embedding discriminant analysis (2DLGEDA) with its application to face and palm biometrics

Minghua Wan; Zhihui Lai; Jie Shao; Zhong Jin

This paper proposes a novel method, called two-dimensional local graph embedding discriminant analysis (2DLGEDA), for image feature extraction, which directly extracts the optimal projective vectors from two-dimensional image matrices rather than image vectors, based on the scatter difference criterion. In graph embedding, the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points within the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. The proposed method effectively avoids the singularity problem frequently encountered in the traditional linear discriminant analysis (LDA) algorithm due to the small sample size (SSS) problem, and overcomes the limitations of LDA arising from its data distribution assumptions and its limited number of available projection directions. Experimental results on the ORL, Yale and FERET face databases and the PolyU palmprint database show the effectiveness of the proposed method.
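
The abstract is high-level only; as a rough illustration of the scatter-difference idea on 2D image matrices, here is a minimal NumPy sketch. The function name, the assumption that the intrinsic and penalty graph weights are already given, and the plain eigen-solver are ours, not the paper's exact construction.

```python
import numpy as np

def two_d_lgeda(images, W_intrinsic, W_penalty, d):
    """Rough sketch of a 2D graph-embedding scatter-difference criterion.

    images      : list of (m, n) image matrices
    W_intrinsic : (N, N) within-class neighborhood weights (assumed given)
    W_penalty   : (N, N) between-class marginal-pair weights (assumed given)
    d           : number of projection directions to keep
    """
    n = images[0].shape[1]
    S_c = np.zeros((n, n))                      # intraclass compactness scatter
    S_p = np.zeros((n, n))                      # interclass separability scatter
    N = len(images)
    for i in range(N):
        for j in range(N):
            diff = images[i] - images[j]
            S_c += W_intrinsic[i, j] * diff.T @ diff
            S_p += W_penalty[i, j] * diff.T @ diff
    # Scatter-difference criterion: no matrix inversion, so no SSS singularity.
    evals, evecs = np.linalg.eigh(S_p - S_c)
    V = evecs[:, np.argsort(evals)[::-1][:d]]   # (n, d) projection matrix
    return [X @ V for X in images]              # projected features, (m, d) each
```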


Neural Computing and Applications | 2011

Locality preserving embedding for face and handwriting digital recognition

Zhihui Lai; Minghua Wan; Zhong Jin

Most supervised manifold learning-based methods preserve the original neighbor relationships to pursue discriminating power. Thus, structure information of the data distribution might be neglected and, in a certain sense, destroyed in the low-dimensional space. In this paper, a novel supervised method, called locality preserving embedding (LPE), is proposed for feature extraction and dimensionality reduction. LPE gives a low-dimensional embedding for discriminative multi-class sub-manifolds and preserves the principal structure information of the local sub-manifolds. In the LPE framework, supervised and unsupervised ideas are combined to learn the optimal discriminant projections. On the one hand, the class information is taken into account to characterize the compactness of local sub-manifolds and the separability of different sub-manifolds. On the other hand, all the samples in the local neighborhood are used to characterize the original data distribution and preserve its structure in the low-dimensional subspace. The most significant difference from existing methods is that LPE takes the distribution directions of local neighbor data into account and preserves them in the low-dimensional subspace, instead of only preserving each local sub-manifold's original neighbor relationships. Therefore, LPE optimally preserves both each local sub-manifold's original neighborhood relationships and the distribution direction of local neighbor data, so as to separate different sub-manifolds as far as possible. The criterion, similar to the classical Fisher criterion, is a Rayleigh quotient in form, and the optimal linear projections are obtained by solving a generalized eigenvalue equation. Furthermore, the framework can be directly used in semi-supervised learning, and semi-supervised LPE and semi-supervised kernel LPE are given. The proposed LPE is applied to face recognition (on the ORL and Yale face databases) and handwritten digit recognition (on the USPS database). The experimental results show that LPE consistently outperforms classical linear methods, e.g., principal component analysis and linear discriminant analysis, as well as recent manifold learning-based methods, e.g., marginal Fisher analysis and constrained maximum variance mapping.
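
The stated Rayleigh-quotient criterion reduces to a generalized eigenvalue problem; the sketch below shows only that final solving step, assuming the two scatter-like matrices A (separability) and B (locality/compactness) have already been built as the paper describes. The ridge term reg is an assumption added for numerical stability, not part of the paper.

```python
import numpy as np
from scipy.linalg import eigh

def solve_projection(A, B, d, reg=1e-6):
    """Solve max_w (w^T A w) / (w^T B w) as a generalized eigenvalue problem.

    A, B : (D, D) symmetric matrices built from the data (assumed precomputed)
    d    : number of projection directions to keep
    reg  : small ridge term so B is positive definite
    """
    B = B + reg * np.eye(B.shape[0])
    evals, evecs = eigh(A, B)              # generalized eigendecomposition A v = lambda B v
    order = np.argsort(evals)[::-1]        # largest Rayleigh quotients first
    return evecs[:, order[:d]]             # (D, d) projection matrix
```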


Applied Mathematics and Computation | 2011

Feature extraction using two-dimensional local graph embedding based on maximum margin criterion

Minghua Wan; Zhihui Lai; Zhong Jin

In this paper, we propose a novel method for image feature extraction, namely two-dimensional local graph embedding based on the maximum margin criterion, which neither requires converting the image matrix into a high-dimensional image vector nor needs to compute an inverse matrix in the discriminant criterion. The method directly learns the optimal projective vectors from 2D image matrices by simultaneously considering local graph embedding and the maximum margin criterion. The proposed method avoids the huge feature matrices of Eigenfaces, Fisherfaces, Laplacianfaces and maximum margin criterion (MMC), as well as the inverse matrices in 2D Fisherfaces, 2D Laplacianfaces and two-dimensional local graph embedding discriminant analysis (2DLGEDA), so that computational time is saved in feature extraction. Experimental results on the Yale and USPS databases show the effectiveness of the proposed method under various experimental conditions.
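
The abstract states the criterion only in words. Written out (with notation that is ours, not necessarily the paper's), a 2D maximum-margin-criterion objective of this kind takes roughly the following form, which also shows why neither vectorization nor a matrix inverse is needed:

```latex
% X_i are m x n image matrices; W^b_{ij}, W^w_{ij} are penalty / intrinsic graph weights (assumed notation).
S_b = \sum_{i,j} W^{b}_{ij}\,(X_i - X_j)^{\top}(X_i - X_j), \qquad
S_w = \sum_{i,j} W^{w}_{ij}\,(X_i - X_j)^{\top}(X_i - X_j), \qquad
S_b,\, S_w \in \mathbb{R}^{n \times n},
\]
\[
V^{*} \;=\; \arg\max_{V^{\top}V = I}\ \operatorname{tr}\!\bigl(V^{\top}(S_b - S_w)V\bigr).
```

Because the scatters are built column-wise they are only n x n (for example 92 x 92 for a 112 x 92 face image, rather than 10304 x 10304 after vectorization), and the difference form needs no inverse of S_w.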


Neural Computing and Applications | 2013

Local sparse representation projections for face recognition

Zhihui Lai; Yajing Li; Minghua Wan; Zhong Jin

How to define the sparse affinity weight matrices is still an open problem in existing manifold learning algorithms. In this paper, we propose a novel supervised learning method called local sparse representation projections (LSRP) for linear dimensionality reduction. Differing from sparsity preserving projections (SPP) and recent manifold learning methods such as locality preserving projections (LPP), LSRP introduces local sparse representation information into the objective function. Although no labels are used in the local sparse representation, it can still provide better measure coefficients and significant discriminant ability. By combining the local interclass neighborhood relationships and sparse representation information, LSRP aims to preserve the local sparse reconstructive relationships of the data while simultaneously maximizing the interclass separability. Comprehensive comparisons and extensive experiments show that LSRP achieves higher recognition rates than principal component analysis, linear discriminant analysis and state-of-the-art techniques such as LPP, SPP and maximum variance projections.
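
One plausible reading of "local sparse representation" is an l1-regularized reconstruction of each sample from its local neighborhood only; the sketch below uses scikit-learn's Lasso for that step. The neighborhood lists, the penalty alpha and the function name are assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def local_sparse_weights(X, neighbors, alpha=0.1):
    """Sparse reconstruction coefficients over each sample's local neighborhood.

    X         : (N, D) data matrix, rows are samples
    neighbors : list of index arrays, neighbors[i] = indices of x_i's local neighborhood
    alpha     : l1 penalty strength (a free parameter here, not from the paper)
    """
    N = X.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        idx = neighbors[i]
        # Reconstruct x_i sparsely from its neighbors only; no labels are used.
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[idx].T, X[i])          # dictionary columns are the neighbors
        W[i, idx] = lasso.coef_
    return W                               # sparse affinity matrix fed into the projection step
```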


Chinese Conference on Pattern Recognition | 2009

Fuzzy Two-Dimensional Local Graph Embedding Discriminant Analysis (F2DLGEDA) with Its Application to Face and Palm Biometrics

Minghua Wan; Zhen Lou; Zhonghua Liu; Zhong Jin

In two-dimensional local graph embedding discriminant analysis, the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points within the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. In the real world, however, face images are always affected by variations in illumination conditions and different facial expressions. The fuzzy two-dimensional local graph embedding discriminant analysis algorithm is therefore proposed, in which the fuzzy k-nearest neighbor rule is used to obtain the local distribution information of the original samples. Experimental results on the ORL and Yale face databases and the PolyU palmprint database show the effectiveness of the proposed method.
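
The fuzzy k-nearest neighbor memberships are presumably computed in the usual Keller-style way; below is a sketch under that assumption (the 0.51/0.49 weighting and the parameter k are the standard choices, not necessarily the paper's).

```python
import numpy as np

def fuzzy_knn_memberships(X, labels, k=5):
    """Keller-style fuzzy k-NN class memberships (one common formulation;
    the paper's exact variant may differ).

    X      : (N, D) training samples (vectorized images)
    labels : (N,) integer class labels
    k      : number of neighbors used for the memberships
    """
    N = X.shape[0]
    classes = np.unique(labels)
    U = np.zeros((len(classes), N))
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)                    # exclude each sample from its own neighbors
    for i in range(N):
        nn = np.argsort(dists[i])[:k]                  # k nearest neighbors of x_i
        for c_idx, c in enumerate(classes):
            n_c = np.sum(labels[nn] == c)              # neighbors of x_i belonging to class c
            if labels[i] == c:
                U[c_idx, i] = 0.51 + 0.49 * n_c / k    # own class gets the bias term
            else:
                U[c_idx, i] = 0.49 * n_c / k
    return U                                           # (C, N) fuzzy membership matrix
```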


Chinese Conference on Pattern Recognition | 2008

Face Recognition Based on Wavelet Transform, Singular Value Decomposition and Kernel Principal Component Analysis

Zhonghua Liu; Zhong Jin; Zhihui Lai; Chuanbo Huang; Minghua Wan

A method for face recognition combining the wavelet transform (WT), singular value decomposition (SVD) and kernel principal component analysis (KPCA) is presented. Firstly, the wavelet transform is used to reduce the dimension of the face image. Then, SVD is used to extract the features of the lowest-resolution subimage, and the singular value feature vector is mapped into the feature space with KPCA to obtain nonlinear features. Finally, face recognition is performed with a BP neural network. Experimental results on the ORL and Yale face databases show that the recognition rate of the proposed method is higher than that of KPCA, SVD, WT-KPCA and WT-SVD.
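
A rough end-to-end sketch of the described WT -> SVD -> KPCA -> neural network pipeline, using PyWavelets and scikit-learn as stand-ins. The wavelet choice, the number of KPCA components, the MLP standing in for the BP network, and all function names are assumptions, not the paper's exact setup.

```python
import numpy as np
import pywt                                    # PyWavelets
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier

def extract_features(images, wavelet="haar"):
    """WT -> lowest-resolution subimage -> singular-value feature vector, per image."""
    feats = []
    for img in images:
        cA, _ = pywt.dwt2(img, wavelet)        # keep only the low-frequency subimage
        feats.append(np.linalg.svd(cA, compute_uv=False))
    return np.array(feats)

def recognize(train_imgs, y_train, test_imgs, y_test, n_components=30):
    """Pipeline sketch: WT+SVD features, KPCA mapping, MLP classifier as a BP stand-in."""
    F_train = extract_features(train_imgs)
    F_test = extract_features(test_imgs)
    kpca = KernelPCA(n_components=n_components, kernel="rbf")
    Z_train = kpca.fit_transform(F_train)      # nonlinear features via KPCA
    Z_test = kpca.transform(F_test)
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000)
    clf.fit(Z_train, y_train)
    return clf.score(Z_test, y_test)           # recognition accuracy
```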


Neural Processing Letters | 2011

Locally Minimizing Embedding and Globally Maximizing Variance: Unsupervised Linear Difference Projection for Dimensionality Reduction

Minghua Wan; Zhihui Lai; Zhong Jin

Recently, many dimensionality reduction algorithms, including both local and global methods, have been presented. Representative local linear methods are locally linear embedding (LLE) and locality preserving projections (LPP), which seek an embedding space that preserves local information so as to explore the intrinsic characteristics of high-dimensional data. However, both of them still fail to deal well with sparsely sampled or noise-contaminated datasets, where the local neighborhood structure is critically distorted. On the contrary, principal component analysis (PCA), the most frequently used global method, preserves the total variance by maximizing the trace of the feature variance matrix, but it cannot preserve local information because it pursues maximal variance. In order to integrate locality and globality and avoid the drawbacks of LLE and PCA, in this paper, inspired by these two dimensionality reduction methods, we propose a new dimensionality reduction method for face recognition, namely unsupervised linear difference projection (ULDP). This approach can be regarded as an integration of a local approach (LLE) and a global approach (PCA), so that it has better performance and robustness in applications. Experimental results on the ORL, Yale and AR face databases show the effectiveness of the proposed method for face recognition.
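
One way to read a "difference projection" combining PCA and LLE is an eigen-decomposition of the total scatter minus a weighted LLE reconstruction penalty; below is a sketch under that assumption. The trade-off parameter alpha, the notation and the function name are ours, not necessarily ULDP's exact formulation.

```python
import numpy as np

def uldp_projection(X, W_lle, alpha, d):
    """Sketch of a difference-style objective: PCA's total scatter minus an
    LLE-style reconstruction penalty (formulation assumed, not the paper's).

    X     : (N, D) data matrix, rows are samples
    W_lle : (N, N) LLE reconstruction-weight matrix (assumed precomputed)
    alpha : trade-off between globality (PCA) and locality (LLE)
    d     : embedding dimension
    """
    Xc = X - X.mean(axis=0)
    S_t = Xc.T @ Xc                                   # total scatter, maximized by PCA
    M = (np.eye(len(X)) - W_lle).T @ (np.eye(len(X)) - W_lle)
    L = Xc.T @ M @ Xc                                 # LLE reconstruction error in the projected space
    evals, evecs = np.linalg.eigh(S_t - alpha * L)
    return evecs[:, np.argsort(evals)[::-1][:d]]      # (D, d) unsupervised projection
```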


International Conference on Information Science and Engineering | 2009

Locality Preserving Embedding

Zhihui Lai; Minghua Wan; Zhong Jin

Most manifold learning-based methods preserve the original neighbor relationships to pursue discriminating power. Thus, structure information of the data distribution might be neglected and, in a sense, destroyed in the low-dimensional space. In this paper, a novel supervised method, called Locality Preserving Embedding (LPE), is proposed for feature extraction and dimensionality reduction. LPE gives a low-dimensional embedding and preserves the principal structure information of the local sub-manifolds. The most significant difference from existing methods is that LPE takes the distribution directions of local neighbor data into account and preserves them in the low-dimensional subspace, instead of only preserving each local sub-manifold's original neighbor relationships. Therefore, LPE optimally preserves both each local sub-manifold's original neighbor relations and the distribution direction of local neighbors, so as to separate different sub-manifolds as far as possible. The proposed LPE is applied to face recognition on the ORL and Yale face databases. The experimental results show that LPE consistently outperforms state-of-the-art linear methods such as Marginal Fisher Analysis (MFA) and Constrained Maximum Variance Mapping (CMVM).
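
The reported experiments presumably follow the usual protocol of projecting with the learned matrix and classifying by nearest neighbor; here is a minimal sketch of that evaluation step only. The projection V is assumed to come from an LPE-style solver such as the generalized-eigenvalue sketch given earlier; the function name is ours.

```python
import numpy as np

def nearest_neighbor_accuracy(V, X_train, y_train, X_test, y_test):
    """Project with a learned matrix V, then classify by 1-nearest neighbor,
    the usual evaluation protocol for this kind of subspace method."""
    Z_train, Z_test = X_train @ V, X_test @ V
    correct = 0
    for z, y in zip(Z_test, y_test):
        nearest = np.argmin(np.linalg.norm(Z_train - z, axis=1))
        correct += int(y_train[nearest] == y)
    return correct / len(y_test)
```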


International Conference on Information Science and Engineering | 2009

Fuzzy Local Discriminant Embedding (FLDE) for Face Recognition

Minghua Wan; Zhihui Lai; Jie Shao; Chuanbo Huang; Zhong Jin

Face images in the real world are always affected by variations in illumination conditions and different facial expressions. Recently, local discriminant embedding (LDE) was proposed for manifold learning and pattern classification. LDE achieves good discriminating performance by integrating the information of neighbor and class relations between data points, but it cannot solve the illumination problem in face recognition. The fuzzy local discriminant embedding (FLDE) algorithm is therefore proposed, in which the fuzzy k-nearest neighbor (FKNN) rule is used to obtain the local distribution information of the original samples. Experimental results on the ORL, Yale and AR face databases show the effectiveness of the proposed method.
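
One common way such fuzzy memberships enter a discriminant-embedding objective is by reweighting the class means and scatter matrices; below is a sketch under that assumption, with memberships U as produced by a fuzzy k-NN step like the one sketched above. The exact weighting used in FLDE may differ.

```python
import numpy as np

def fuzzy_scatter_matrices(X, U):
    """Membership-weighted within- and between-class scatter (one plausible
    construction; not necessarily FLDE's exact definition).

    X : (N, D) vectorized samples
    U : (C, N) fuzzy class memberships, e.g. from a fuzzy k-NN step
    """
    C, N = U.shape
    D = X.shape[1]
    mean_all = X.mean(axis=0)
    S_w = np.zeros((D, D))
    S_b = np.zeros((D, D))
    for c in range(C):
        w = U[c]                                        # membership of every sample in class c
        m_c = (w[:, None] * X).sum(axis=0) / w.sum()    # fuzzy class mean
        diff = X - m_c
        S_w += (w[:, None] * diff).T @ diff             # membership-weighted within-class scatter
        S_b += w.sum() * np.outer(m_c - mean_all, m_c - mean_all)
    return S_w, S_b
```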


Chinese Conference on Pattern Recognition | 2009

Margin Maximum Embedding Discriminant (MMED) for Feature Extraction and Classification

Minghua Wan; Zhen Lou; Zhong Jin

This paper develops a supervised discriminant technique, called Margin Maximum Embedding Discriminant (MMED), for dimensionality reduction of high-dimensional data. In graph embedding, our objective is to find a linear transform matrix that makes samples in the same class as compact as possible and samples belonging to different classes as dispersed as possible. The proposed method effectively avoids the singularity problem frequently encountered in classical linear discriminant analysis (LDA) due to the small sample size (SSS) problem, and overcomes the limitations of LDA arising from its data distribution assumptions and its limited number of available projection directions. Experimental results on the ORL and AR face databases show the effectiveness of the proposed method.
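
As a concrete illustration of "same class compact, different classes dispersed" without inverting any scatter matrix, here is a vector-space sketch that builds binary k-NN within-class and between-class weights and takes the leading eigenvectors of the scatter difference. The graph construction, the parameters k and d, and the function name are assumptions, not necessarily MMED's exact definition.

```python
import numpy as np

def mmed_like_projection(X, labels, k=5, d=30):
    """Margin-style embedding sketch: pull same-class neighbors together and push
    different-class neighbors apart via the eigenvectors of a scatter difference.

    X : (N, D) vectorized samples, labels : (N,) integer class labels
    """
    N, D = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(dists, np.inf)                          # ignore self-distances
    S_w = np.zeros((D, D))
    S_b = np.zeros((D, D))
    for i in range(N):
        same = np.where(labels == labels[i])[0]
        other = np.where(labels != labels[i])[0]
        for j in same[np.argsort(dists[i, same])[:k]]:       # k nearest same-class points
            S_w += np.outer(X[i] - X[j], X[i] - X[j])
        for j in other[np.argsort(dists[i, other])[:k]]:     # k nearest other-class points
            S_b += np.outer(X[i] - X[j], X[i] - X[j])
    # Margin criterion tr(V^T (S_b - S_w) V): no inverse of S_w, so no SSS singularity.
    evals, evecs = np.linalg.eigh(S_b - S_w)
    return evecs[:, np.argsort(evals)[::-1][:d]]             # (D, d) projection matrix
```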

Collaboration


Dive into Minghua Wan's collaborations.

Top Co-Authors

Zhong Jin, Nanjing University of Science and Technology
Zhihui Lai, Nanjing University of Science and Technology
Chuanbo Huang, Nanjing University of Science and Technology
Jie Shao, Nanjing University of Science and Technology
Zhen Lou, Nanjing University of Science and Technology
Zhonghua Liu, Nanjing University of Science and Technology
Yajing Li, East China University of Science and Technology
Jian Yang, University of Queensland