Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Wen-Sheng Chen is active.

Publication


Featured research published by Wen-Sheng Chen.


Systems, Man and Cybernetics | 2005

Kernel machine-based one-parameter regularized Fisher discriminant method for face recognition

Wen-Sheng Chen; Pong Chi Yuen; Jian Huang; Dao-Qing Dai

This paper addresses two problems in linear discriminant analysis (LDA) for face recognition. The first is the recognition of human faces under pose and illumination variations: it is well known that the distribution of face images with different pose, illumination, and facial expression is complex and nonlinear, so traditional linear methods such as LDA do not give satisfactory performance. The second is the small sample size (S3) problem, which occurs when the number of training samples is smaller than the dimensionality of the feature vector; in that case the within-class scatter matrix becomes singular. To overcome these limitations, this paper proposes a new kernel machine-based one-parameter regularized Fisher discriminant (K1PRFD) technique. K1PRFD is developed from our previously developed one-parameter regularized discriminant analysis method and the well-known kernel approach, and therefore has two parameters, namely the regularization parameter and the kernel parameter. This paper further proposes a new method to determine the optimal RBF kernel parameter and the regularization parameter of the within-class scatter matrix simultaneously, based on the conjugate gradient method. Three databases, namely FERET, Yale Group B, and CMU PIE, are selected for evaluation, and the results are encouraging. Compared with existing LDA-based methods, the proposed method gives superior results.
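As an illustration of the general idea (not the authors' exact K1PRFD formulation), the sketch below builds an RBF kernel matrix and solves a regularized kernel Fisher discriminant; the kernel width `sigma` and the regularization weight `t` stand in for the two parameters that the paper tunes jointly with conjugate gradients, a step that is not reproduced here.

```python
# Hypothetical sketch of a kernel Fisher discriminant with a single
# regularization parameter t added to the within-class scatter in the
# kernel-induced feature space. Parameter names are illustrative.
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Z, sigma):
    """K[i, j] = exp(-||x_i - z_j||^2 / (2 * sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kernel_fisher(X, y, sigma=1.0, t=1e-3, n_components=1):
    """Return dual coefficients alpha (n_samples x n_components)."""
    K = rbf_kernel(X, X, sigma)                        # n x n Gram matrix
    n = K.shape[0]
    m_all = K.mean(axis=1, keepdims=True)              # overall kernel mean
    M = np.zeros((n, n))                               # between-class scatter (dual)
    N = np.zeros((n, n))                               # within-class scatter (dual)
    for c in np.unique(y):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        mc = Kc.mean(axis=1, keepdims=True)
        M += nc * (mc - m_all) @ (mc - m_all).T
        H = np.eye(nc) - np.full((nc, nc), 1.0 / nc)   # centering matrix
        N += Kc @ H @ Kc.T
    # Regularize the (singular) within-class term and solve M a = lambda (N + t I) a.
    evals, evecs = eigh(M, N + t * np.eye(n))
    return evecs[:, ::-1][:, :n_components]            # leading eigenvectors

# Usage: project training data onto the learned discriminant directions.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5)); y = np.repeat([0, 1], 20)
alpha = kernel_fisher(X, y, sigma=2.0, t=1e-2)
Z = rbf_kernel(X, X, 2.0) @ alpha                      # projected features
```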


Systems, Man and Cybernetics | 2007

Choosing Parameters of Kernel Subspace LDA for Recognition of Face Images Under Pose and Illumination Variations

Jian Huang; Pong Chi Yuen; Wen-Sheng Chen; Jian-Huang Lai

This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems with complex data distributions by mapping the input space to a high-dimensional feature space. Several recognition algorithms, such as kernel principal component analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA, have been developed in the last five years, and experimental results show that kernel-based methods are a good and feasible approach to tackling pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which strongly affects the generalization capability and stability of kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed from our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation of generalization performance along the pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
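For reference, a Gaussian RBF kernel with one width parameter per input dimension looks like the sketch below; per-dimension widths of this kind are the "multiple kernel parameters" that an automatic tuning scheme such as ESBMM would optimize. The ESBMM criterion itself is not reproduced, and the parameter values are illustrative.

```python
# Illustrative anisotropic RBF kernel: one width parameter per feature
# dimension, i.e. a multi-parameter Gaussian kernel of the kind that an
# automatic tuning algorithm would adjust.
import numpy as np

def anisotropic_rbf(X, Z, sigmas):
    """K[i, j] = exp(-sum_d (x_id - z_jd)^2 / (2 * sigmas[d]^2))."""
    sigmas = np.asarray(sigmas, dtype=float)
    diff = (X[:, None, :] - Z[None, :, :]) / sigmas    # scale each dimension
    return np.exp(-0.5 * (diff ** 2).sum(axis=-1))

X = np.random.default_rng(1).normal(size=(10, 4))
K = anisotropic_rbf(X, X, sigmas=[1.0, 0.5, 2.0, 1.5]) # 10 x 10 Gram matrix
```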


International SOI Conference | 2003

Component-based LDA method for face recognition with one training sample

Jian Huang; Pong Chi Yuen; Wen-Sheng Chen; Jian-Huang Lai

Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications, however, only one training sample is available. In this situation, the performance of existing algorithms degrades dramatically, or the formulation becomes invalid so that the algorithm cannot be applied at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples but also account for face detection localization error during training. After that, we employ a subspace LDA method, which is tailor-made for a small number of training samples, for the local feature projection to maximize the discriminative power. Finally, the recognition decision is made by combining the contributions of each local feature. The FERET database is used to evaluate the proposed method, and the results are encouraging.
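A rough sketch of the sample-generation idea: each local feature region is re-cropped after shifting its window in the four directions, so one training face yields a small bunch of patches per component. The region coordinates and shift step below are made-up illustrative values, not the paper's.

```python
# Minimal sketch of building a "component bunch" from a single training
# image: crop a local feature region, then re-crop it shifted up, down,
# left and right to simulate face-detection localization error.
import numpy as np

def component_bunch(image, box, shift=2):
    """image: 2-D array; box: (top, left, height, width)."""
    top, left, h, w = box
    offsets = [(0, 0), (-shift, 0), (shift, 0), (0, -shift), (0, shift)]
    patches = []
    for dy, dx in offsets:
        r, c = top + dy, left + dx
        patches.append(image[r:r + h, c:c + w].copy())
    return np.stack(patches)              # 5 x h x w virtual samples

face = np.random.default_rng(2).random((64, 64))
eye_patches = component_bunch(face, box=(18, 12, 12, 16), shift=2)
```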


IEEE International Conference on Automatic Face and Gesture Recognition | 2004

Kernel subspace LDA with optimized kernel parameters on face recognition

Jian Huang; Pong Chi Yuen; Wen-Sheng Chen; Jian-Huang Lai

This work addresses the problem of selecting kernel parameters in the kernel Fisher discriminant for face recognition. We propose a new criterion and derive a new formulation for optimizing the parameters of the RBF kernel based on the gradient descent algorithm. The proposed formulation is further integrated into a subspace LDA algorithm, and a new face recognition algorithm is developed. The FERET database is used for evaluation. Compared with existing kernel LDA-based methods whose kernel parameters are selected manually by experiment, the results are encouraging.
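A generic sketch of the tuning loop is shown below: the RBF width is adjusted by gradient steps on a class-separability score computed in kernel space. The simple criterion and the finite-difference gradient are stand-ins chosen for illustration; the paper derives its own criterion and an analytic gradient.

```python
# Generic sketch of tuning an RBF kernel width by gradient ascent on a
# simple class-separability criterion; not the paper's formulation.
import numpy as np

def rbf(X, sigma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def separability(X, y, sigma):
    """Mean same-class kernel similarity minus cross-class similarity."""
    K = rbf(X, sigma)
    same = (y[:, None] == y[None, :])
    return K[same].mean() - K[~same].mean()

def tune_sigma(X, y, sigma=1.0, lr=0.1, steps=50, eps=1e-4):
    for _ in range(steps):
        grad = (separability(X, y, sigma + eps)
                - separability(X, y, sigma - eps)) / (2 * eps)
        sigma = max(sigma + lr * grad, 1e-3)   # ascent step, kept positive
    return sigma

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.repeat([0, 1], 20)
best_sigma = tune_sigma(X, y)
```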


International Journal of Pattern Recognition and Artificial Intelligence | 2005

A New Regularized Linear Discriminant Analysis Method to Solve Small Sample Size Problems

Wen-Sheng Chen; Pong Chi Yuen; Jian Huang

This paper presents a new regularization technique to deal with the small sample size (S3) problem in linear discriminant analysis (LDA) based face recognition. Regularization of the within-class scatter matrix Sw has been shown to be a good direction for solving the S3 problem because the solution is found in the full space instead of a subspace. The main limitation of regularization is that very heavy computation is required to determine the optimal parameters. In view of this limitation, this paper re-defines the three-parameter regularization of the within-class scatter matrix Sw in a form that is suitable for parameter reduction. Based on this new definition, we derive a single-parameter (t) explicit expression formula for determining the three parameters and develop a one-parameter regularization of the within-class scatter matrix. A simple and efficient method is developed to determine the value of t. It is also proven that the new regularized within-class scatter matrix approaches the original within-class scatter matrix Sw as the single parameter tends to zero. A novel one-parameter regularized linear discriminant analysis (1PRLDA) algorithm is then developed. The proposed 1PRLDA method for face recognition has been evaluated on two publicly available databases, namely the ORL and FERET databases. The average recognition accuracies over 50 runs for the ORL and FERET databases are 96.65% and 94.00%, respectively. Compared with existing LDA-based methods for solving the S3 problem, the proposed 1PRLDA method gives the best performance.
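To make the S3 issue concrete, the sketch below regularizes a singular within-class scatter matrix with a single parameter and then solves LDA in the full space. The simple Sw + t*I form is only a stand-in for the paper's one-parameter family (which is derived from a three-parameter regularization via an explicit formula not reproduced here); it shows why the singularity disappears and that the regularized matrix tends to Sw as t goes to zero.

```python
# Illustration of single-parameter regularized LDA in the S3 setting,
# using the common Sw + t*I stand-in rather than the paper's formula.
import numpy as np
from scipy.linalg import eigh

def scatter_matrices(X, y):
    d = X.shape[1]; m = X.mean(axis=0)
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]; mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)              # within-class scatter
        Sb += len(Xc) * np.outer(mc - m, mc - m)   # between-class scatter
    return Sw, Sb

def regularized_lda(X, y, t=1e-2, n_components=1):
    Sw, Sb = scatter_matrices(X, y)
    # Sw + t*I is full rank for any t > 0, so no subspace projection is needed.
    evals, evecs = eigh(Sb, Sw + t * np.eye(Sw.shape[0]))
    return evecs[:, ::-1][:, :n_components]

# S3 setting: 6 samples in 20 dimensions, so Sw itself is singular.
rng = np.random.default_rng(4)
X = rng.normal(size=(6, 20)); y = np.array([0, 0, 0, 1, 1, 1])
W = regularized_lda(X, y, t=1e-2)
```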


International Conference on Computer Vision | 2001

Robust facial feature point detection under nonlinear illuminations

Jian-Huang Lai; Pong Chi Yuen; Wen-Sheng Chen; Shihong Lao; Masato Kawade

This paper addresses the problem of facial feature point detection under different lighting conditions. Our goal is to develop an efficient detection algorithm that is suitable for practical applications. The requirements we need to meet are (1) high detection accuracy, (2) low computational time, and (3) robustness to nonlinear illumination. An algorithm is developed and reported in the paper. One of the key factors affecting the performance of feature point detection is the accuracy in locating the face boundary. To solve this problem, we propose to make use of skin color, lip color, and face boundary information. The basic idea for overcoming nonlinear illumination is that each person shares the same or similar facial primitives, such as two eyes, one nose, and one mouth, so the binary images of each person should be similar. Moreover, once a binary image (with appropriate thresholding) is obtained from the gray-scale image, the facial feature points can be detected easily. To achieve this, we propose to use the integral optical density (IOD) on the face region, and in particular the average IOD, to detect feature windows. As all of the above techniques are simple and efficient, the proposed method is computationally efficient and suitable for practical applications. 743 images from the Omron database, with different facial expressions, glasses, and hairstyles, captured both indoors and outdoors, have been used to evaluate the proposed method, and the detection accuracy is around 86%. The computational time on a Pentium III 750 MHz machine, with a MATLAB implementation, is less than 7 seconds.
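The sketch below illustrates the average-IOD idea in a generic way: the sum of binarized pixel values inside every candidate window is computed cheaply with an integral image and normalized by window area. The threshold, window size, and binarization scheme are illustrative assumptions, not the paper's settings.

```python
# Sketch of an average integral-optical-density (IOD) map: darkness mass
# inside each sliding window, computed via an integral image.
import numpy as np

def average_iod(gray, win_h, win_w, threshold=0.5):
    binary = (gray < threshold).astype(float)      # dark facial primitives -> 1
    ii = binary.cumsum(axis=0).cumsum(axis=1)      # integral image
    ii = np.pad(ii, ((1, 0), (1, 0)))              # zero row/column for indexing
    iod = (ii[win_h:, win_w:] - ii[:-win_h, win_w:]
           - ii[win_h:, :-win_w] + ii[:-win_h, :-win_w])
    return iod / (win_h * win_w)                   # (H-win_h+1, W-win_w+1) map

face = np.random.default_rng(5).random((64, 64))
scores = average_iod(face, win_h=8, win_w=16)
row, col = np.unravel_index(scores.argmax(), scores.shape)  # darkest window
```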


Neurocomputing | 2016

Supervised kernel nonnegative matrix factorization for face recognition

Wen-Sheng Chen; Yang Zhao; Binbin Pan; Bo Chen

Nonnegative matrix factorization (NMF) is a promising algorithm for dimensionality reduction and local feature extraction. However, NMF is a linear and unsupervised method. Its performance degrades when dealing with complicated nonlinearly distributed data, such as face images with variations in pose, illumination, and facial expression. Also, the available labels could potentially improve the discriminant power of NMF. To overcome these limitations, this paper proposes a novel supervised and nonlinear approach to enhance the classification power of NMF. By mapping the input data into a reproducing kernel Hilbert space (RKHS), we can discover the nonlinear relations between the data; this is the well-known kernel method. At the same time, we make use of discriminant analysis to force the within-class scatter to be small and the between-class scatter to be large in the RKHS. We show theoretically that the proposed approach guarantees the non-negativity of the decomposed components and that the objective function is non-increasing under the update rules. The proposed method is applied to face recognition. Compared with some state-of-the-art algorithms, experimental results demonstrate the superior performance of our method.
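For orientation, the sketch below shows the standard unsupervised, linear NMF with multiplicative updates that the paper extends; the RKHS mapping and the within-/between-class scatter terms of the supervised kernel variant are not reproduced.

```python
# Baseline NMF: factor V into nonnegative W (basis) and H (coefficients)
# with the classic multiplicative update rules.
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)); H = rng.random((r, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)       # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)       # update basis images
    return W, H

# Usage: factor 100 face-like vectors of dimension 1024 into 25 parts.
V = np.random.default_rng(6).random((1024, 100))
W, H = nmf(V, r=25)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error
```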


Mathematical Problems in Engineering | 2008

Incremental Nonnegative Matrix Factorization for Face Recognition

Wen-Sheng Chen; Binbin Pan; Bin Fang; Ming Li; Jianliang Tang

Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that decomposing large matrices is computationally expensive. The other is that learning must be repeated from scratch whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) method for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors of different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
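A simplified illustration of the incremental idea is sketched below: when new samples arrive, the previously learned basis is kept fixed and only the coefficients of the new columns are fitted, instead of refactorizing the whole data matrix. This is a generic stand-in; the paper's INMF uses a block strategy and an additional constraint criterion that are not reproduced here.

```python
# Encode newly arrived samples against an existing NMF basis without
# re-running the full factorization.
import numpy as np

def encode_new_samples(W, V_new, n_iter=200, eps=1e-9, seed=0):
    """Multiplicative updates for H_new with the basis W held fixed."""
    rng = np.random.default_rng(seed)
    H_new = rng.random((W.shape[1], V_new.shape[1]))
    WtW, WtV = W.T @ W, W.T @ V_new
    for _ in range(n_iter):
        H_new *= WtV / (WtW @ H_new + eps)
    return H_new

# Usage, assuming a basis W learned earlier (e.g. with the nmf() sketch above):
W = np.abs(np.random.default_rng(7).normal(size=(1024, 25)))
V_new = np.random.default_rng(8).random((1024, 10))    # 10 new face vectors
H_new = encode_new_samples(W, V_new)
```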


Neurocomputing | 2015

A novel discriminant criterion based on feature fusion strategy for face recognition

Wen-Sheng Chen; Xiuli Dai; Binbin Pan; Taiquan Huang

Feature extraction is an important problem in face recognition. There are two kinds of structural features, namely the Euclidean structure and the manifold structure. However, single-structure feature extraction methods cannot fully exploit the advantages of global and local features simultaneously, and their performance is therefore degraded. To overcome the limitations of single-structure face recognition schemes, this paper proposes a novel discriminant criterion using a Feature Fusion Strategy (FFS), which nonlinearly combines both the Euclidean and manifold structures of the face pattern space. The proposed discriminant criterion lends itself to an iterative algorithm that is able to automatically determine the optimal parameters and balance the tradeoff between the Euclidean structure and the manifold structure. The proposed FFS algorithm is successfully applied to face recognition. Three publicly available face databases, ORL, FERET, and CMU PIE, are selected for evaluation. Compared with Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Unsupervised Discriminant Projection (UDP), and Semi-Supervised LDA (SSLDA), the experimental results show that the proposed method gives superior performance.
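As a rough picture of what "fusing" the two structures can look like, the sketch below combines a global LDA-style scatter term with a local graph-Laplacian term through a fixed trade-off weight `mu`. The paper's FFS combines the structures nonlinearly and determines the balance automatically, which this sketch does not do; `mu`, `t`, and the k-NN graph are illustrative choices.

```python
# Generic fusion of a Euclidean (scatter-based) term and a manifold
# (graph-Laplacian) term in one generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def scatters(X, y):
    d = X.shape[1]; m = X.mean(0)
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]; mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - m, mc - m)
    return Sw, Sb

def knn_laplacian(X, k=5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.zeros_like(d2)
    neighbours = np.argsort(d2, axis=1)[:, 1:k + 1]    # k nearest neighbours
    for i, nb in enumerate(neighbours):
        A[i, nb] = 1.0
    A = np.maximum(A, A.T)                             # symmetrize the graph
    return np.diag(A.sum(1)) - A                       # graph Laplacian

def fused_projection(X, y, mu=0.5, t=1e-3, n_components=2):
    Sw, Sb = scatters(X, y)
    L = knn_laplacian(X)
    denom = (1 - mu) * Sw + mu * (X.T @ L @ X) + t * np.eye(X.shape[1])
    evals, evecs = eigh(Sb, denom)
    return evecs[:, ::-1][:, :n_components]

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 1, (15, 8)), rng.normal(3, 1, (15, 8))])
y = np.repeat([0, 1], 15)
W = fused_projection(X, y, mu=0.3)
```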


International Journal of Pattern Recognition and Artificial Intelligence | 2006

Two-Step Single Parameter Regularization Fisher Discriminant Method for Face Recognition

Wen-Sheng Chen; Pong Chi Yuen; Jian Huang; Bin Fang

In face recognition tasks, Fisher discriminant analysis (FDA) is one of the promising methods for dimensionality reduction and discriminant feature extraction. The objective of FDA is to find an optimal projection matrix that maximizes the between-class distance while simultaneously minimizing the within-class distance. The main limitation of traditional FDA is the so-called small sample size (S3) problem: it causes the within-class scatter matrix to be singular, so traditional FDA cannot be applied directly to pattern classification. To overcome the S3 problem, this paper proposes a novel two-step single parameter regularization Fisher discriminant (2SRFD) algorithm for face recognition. The first, semi-regularized step is based on a rank lifting theorem; it adjusts both the projection directions and their corresponding weights. Our previous three-to-one parameter regularization technique is exploited in the second stage, which changes only the weights of the projection directions. It is shown that the final regularized within-class scatter matrix approaches the original within-class scatter matrix as the single parameter tends to zero. Our method also has good computational complexity. The proposed method has been tested and evaluated on three publicly available databases, namely the ORL, CMU PIE, and FERET face databases. Compared with existing state-of-the-art FDA-based methods for the S3 problem, the proposed 2SRFD approach gives the best performance.
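The sketch below gives a rough, generic picture of a two-step spectrum regularization: step one lifts the rank of the within-class scatter matrix by giving its zero eigenvalues a small positive value tied to the parameter, and step two rescales the whole spectrum with the same single parameter. This is only a stand-in for the paper's rank-lifting theorem and three-to-one parameter scheme, not the actual 2SRFD formulas; it simply shows a regularized matrix that is full-rank for t > 0 and returns to the original as t tends to zero.

```python
# Generic two-step eigenvalue regularization of a singular within-class
# scatter matrix (illustrative, not the 2SRFD formulas).
import numpy as np

def two_step_regularize(Sw, t=1e-2, tol=1e-10):
    evals, V = np.linalg.eigh(Sw)
    smallest_pos = evals[evals > tol].min()
    lifted = np.where(evals > tol, evals, t * smallest_pos)  # step 1: rank lifting
    weighted = lifted + t * lifted.mean()                    # step 2: single-parameter reweighting
    return (V * weighted) @ V.T                              # full-rank regularized matrix

# As t -> 0 the regularized matrix returns to the original Sw.
rng = np.random.default_rng(10)
A = rng.normal(size=(4, 10))            # 4 samples in 10-D -> singular Sw
Sw = A.T @ A
print(np.linalg.norm(two_step_regularize(Sw, t=1e-8) - Sw))
```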

Collaboration


Dive into Wen-Sheng Chen's collaboration.

Top Co-Authors

Pong Chi Yuen

Hong Kong Baptist University

Jian Huang

Hong Kong Baptist University

Bin Fang

Chongqing University
