Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Sheng Li is active.

Publication


Featured research published by Sheng Li.


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Supervised and Unsupervised Parallel Subspace Learning for Large-Scale Image Recognition

Xiao-Yuan Jing; Sheng Li; David Zhang; Jian Yang; Jingyu Yang

Subspace learning is an effective and widely used technique for image feature extraction and classification. However, for large-scale image recognition problems in real-world applications, many subspace learning methods suffer from a heavy computational burden. To reduce the computational time and improve the recognition performance of subspace learning in this situation, we introduce the idea of parallel computing, which reduces time complexity by splitting the original task into several subtasks, and we develop a parallel subspace learning framework. In this framework, we first divide the sample set into several subsets using two random data division strategies, namely equal data division and unequal data division; these strategies correspond to equal and unequal computational abilities of nodes in a parallel computing environment. Next, we calculate projection vectors from each subset in parallel. The graph embedding technique is employed to provide a general formulation for parallel feature extraction. After combining the extracted features from all nodes, we present a unified criterion to select the most distinctive features for classification. Under the developed framework, we propose supervised and unsupervised parallel subspace learning approaches, called parallel linear discriminant analysis (PLDA) and parallel locality preserving projection (PLPP), respectively. PLDA selects features with the largest Fisher scores by estimating the weighted and unweighted sample scatter, while PLPP selects features with the smallest Laplacian scores by constructing a whole affinity matrix. Theoretically, we analyze the time complexities of the proposed approaches and provide the fundamental support for applying the random division strategies. In the experiments, we establish two real parallel computing environments and employ four public image and video databases as the test data. Experimental results demonstrate that the proposed approaches outperform several related supervised and unsupervised subspace learning methods and significantly reduce the computational time.
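
As a rough illustration of the pipeline described above (divide the data, learn projections per node, pool the features, select by Fisher score), the following Python sketch shows one simplified way to realize the supervised PLDA variant. It is not the authors' implementation: per-node LDA is solved as a regularized generalized eigenproblem, node parallelism is only simulated with a plain loop, and all function and parameter names are illustrative.

import numpy as np
from scipy.linalg import eigh

def lda_projections(X, y, n_dims, eps=1e-6):
    """Top LDA projection vectors for one data subset (one 'node')."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
    vals, vecs = eigh(Sb, Sw + eps * np.eye(d))       # regularized generalized eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]

def fisher_scores(F, y):
    """Fisher score of each pooled feature (between-class over within-class variance)."""
    classes = np.unique(y)
    mean_all = F.mean(axis=0)
    num = np.zeros(F.shape[1]); den = np.zeros(F.shape[1])
    for c in classes:
        Fc = F[y == c]
        num += len(Fc) * (Fc.mean(axis=0) - mean_all) ** 2
        den += ((Fc - Fc.mean(axis=0)) ** 2).sum(axis=0)
    return num / (den + 1e-12)

def parallel_lda(X, y, n_subsets=4, dims_per_node=10, k_final=20):
    # equal data division: each node receives a random slice of the samples
    idx = np.random.permutation(len(X))
    W = [lda_projections(X[s], y[s], dims_per_node) for s in np.array_split(idx, n_subsets)]
    F = np.hstack([X @ Wi for Wi in W])               # pool the features from all nodes
    keep = np.argsort(fisher_scores(F, y))[::-1][:k_final]
    return F[:, keep]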


International Conference on Image Processing | 2010

Holistic orthogonal analysis of discriminant transforms for color face recognition

Xiao-Yuan Jing; Qian Liu; Chao Lan; Jiangyue Man; Sheng Li; David Zhang

The key to color face recognition is how to effectively utilize the complementary information between color components and remove their redundancy. Existing color face recognition methods generally reduce the correlations between color components at the image pixel level and then extract discriminant features from the uncorrelated color face images. In this paper, we propose a novel color face recognition approach based on the holistic orthogonal analysis (HOA) of discriminant transforms of color images. HOA reduces the correlation of color information at the feature level: it obtains the discriminant transforms of the red, green, and blue color images in turn by using the Fisher criterion, while simultaneously making the obtained transforms mutually orthogonal. Experimental results on the AR and FRGC-2 public color face image databases demonstrate that the proposed approach achieves better recognition performance than several representative color face recognition methods.
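
A loose sketch of the feature-level decorrelation idea is given below: Fisher discriminant vectors are computed channel by channel and forced to be mutually orthogonal across the red, green, and blue components by deflation and QR re-orthonormalization. This only approximates the HOA formulation and is not the authors' algorithm; lda_fn stands for any routine that returns Fisher discriminant vectors (for example, lda_projections from the previous sketch).

import numpy as np

def orthogonalized_channel_transforms(Xr, Xg, Xb, y, n_dims, lda_fn):
    """Per-channel discriminant transforms that are mutually orthogonal across channels."""
    basis = []                       # orthonormal directions accumulated over earlier channels
    transforms = []
    for Xc in (Xr, Xg, Xb):
        W = lda_fn(Xc, y, n_dims)
        for b in basis:              # deflate against directions used by earlier channels
            W = W - np.outer(b, b @ W)
        Q, _ = np.linalg.qr(W)       # re-orthonormalize what is left
        transforms.append(Q)
        basis.extend(Q.T)
    # feature-level fusion: concatenate the decorrelated channel features
    return np.hstack([Xc @ T for Xc, T in zip((Xr, Xg, Xb), transforms)])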


International Conference on Image Processing | 2011

Face recognition based on local uncorrelated and weighted global uncorrelated discriminant transforms

Xiao-Yuan Jing; Sheng Li; David Zhang; Jingyu Yang

Feature extraction is one of the most important problems in image recognition tasks. In many applications such as face recognition, it is desirable to eliminate the redundancy among the extracted discriminant features. In this paper, we propose two novel feature extraction approaches for face recognition, named the local uncorrelated discriminant transform (LUDT) and the weighted global uncorrelated discriminant transform (WGUDT). LUDT and WGUDT construct local uncorrelated constraints and weighted global uncorrelated constraints, respectively. They then iteratively calculate the optimal discriminant vectors that maximize the Fisher criterion under the corresponding statistically uncorrelated constraints. The proposed LUDT and WGUDT approaches are evaluated on the public AR and FERET face databases. Experimental results demonstrate that the proposed approaches outperform several representative feature extraction methods.
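
The common ingredient of both transforms, maximizing the Fisher criterion while keeping each new feature statistically uncorrelated with the previous ones, can be sketched as the classic deflation below. The local and weighted-global constraints that distinguish LUDT and WGUDT are not reproduced; Sb, Sw, and St denote the between-class, within-class, and total scatter matrices, and the regularization term is an added assumption.

import numpy as np
from scipy.linalg import eigh, null_space

def uncorrelated_discriminant_vectors(Sb, Sw, St, n_vecs, eps=1e-6):
    d = Sb.shape[0]
    U = np.zeros((d, 0))
    for _ in range(n_vecs):
        # any direction w with U.T @ St @ w = 0 yields a feature that is statistically
        # uncorrelated with the features already extracted
        N = null_space((St @ U).T) if U.shape[1] else np.eye(d)
        vals, vecs = eigh(N.T @ Sb @ N, N.T @ Sw @ N + eps * np.eye(N.shape[1]))
        w = N @ vecs[:, np.argmax(vals)]              # Fisher-optimal direction in the null space
        U = np.column_stack([U, w / np.linalg.norm(w)])
    return U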


International Congress on Image and Signal Processing | 2010

Exploring the natural discriminative information of sparse representation for feature extraction

Chao Lan; Xiao-Yuan Jing; Sheng Li; Lusha Bian; Yongfang Yao

Sparse representation has been extensively studied in the signal processing community, where it has been shown that a target sample can be accurately recovered by a sparse linear combination of the overall data. This discovery was soon applied to pattern recognition and, more recently, gave rise to two new feature extraction methods, namely sparsity preserving projections (SPP) and global sparse representation projections (GSRP). However, both methods utilize the sparse representation simply by preserving it in the embedded space; neither investigates its natural discriminative information, and both may therefore have limited classifying power for recognition tasks. In this paper, we propose a novel feature extraction method that explores the discriminative information naturally embodied in the sparse representation. Based on the idea that a target sample should ideally be reproduced more accurately by the intra-class data associated with the sparse coefficients than by the inter-class data, we seek a linearly transformed space where the reconstruction errors of samples caused by intra-class data are minimized and the reconstruction errors caused by inter-class data are simultaneously maximized. We name the proposed method the sparse representation-based discriminative information exploring transform (DIET). Experiments on two face databases, i.e., Yale and ORL, validate the effectiveness of DIET compared with several representative linear feature extraction methods.
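
The reconstruction-error idea can be sketched as follows: each sample is sparsely coded over the remaining training data, the code is split into its same-class and other-class parts, and a transform is then sought that keeps the same-class residuals small while enlarging the other-class residuals. The Lasso solver and the trailing generalized eigenproblem are stand-ins chosen for this sketch, not the paper's exact optimization.

import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def diet_transform(X, y, n_dims, alpha=0.01, eps=1e-6):
    n, d = X.shape
    Ew = np.zeros((d, d)); Eb = np.zeros((d, d))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        D = X[others].T                                   # dictionary: all other training samples
        a = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, X[i]).coef_
        same = (y[others] == y[i])
        r_intra = X[i] - D[:, same] @ a[same]             # residual using same-class atoms only
        r_inter = X[i] - D[:, ~same] @ a[~same]           # residual using other-class atoms only
        Ew += np.outer(r_intra, r_intra)
        Eb += np.outer(r_inter, r_inter)
    # maximize inter-class reconstruction error relative to intra-class reconstruction error
    vals, vecs = eigh(Eb, Ew + eps * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]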


International Conference on Image Processing | 2011

A novel kernel discriminant feature extraction framework based on mapped virtual samples for face recognition

Sheng Li; Xiao-Yuan Jing; David Zhang; Yongfang Yao; Lusha Bian

In this paper, we propose a novel kernel discriminant feature extraction framework based on mapped virtual samples (MVS) for face recognition. We calculate a non-symmetric kernel matrix by constructing a few virtual samples (including eigen-samples and common vector samples) in the input space, and then express the kernel projection vectors in terms of the mapped virtual samples. Under this framework, we realize two representative MVS-based kernel methods, namely kernel principal component analysis (KPCA) and generalized discriminant analysis (GDA). Experimental results on the AR and CAS-PEAL face databases demonstrate that the proposed framework can effectively improve the classification performance of kernel discriminant methods. In addition, the MVS-based kernel approaches have a lower computational cost than the related kernel methods.
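
The reduced-set flavor of the framework can be sketched as below: a handful of virtual samples are built in the input space (here simply class means plus a few rescaled PCA directions, standing in for the paper's eigen-samples and common vector samples), and the rectangular, non-symmetric kernel matrix between the training data and these virtual samples serves as a compact nonlinear feature map on which a KPCA or GDA stage can then be run.

import numpy as np

def rbf_kernel(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mapped_virtual_features(X, y, n_eigen=5, gamma=1e-3):
    # virtual samples: class means plus a few principal directions rescaled to the data norm
    means = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)])
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    eigen_samples = X.mean(axis=0) + Vt[:n_eigen] * np.linalg.norm(X, axis=1).mean()
    V = np.vstack([means, eigen_samples])          # m virtual samples, with m much smaller than n
    K = rbf_kernel(X, V, gamma)                    # n x m and non-symmetric by construction
    return K - K.mean(axis=0)                      # centered kernel features for PCA / LDA stages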


International Conference on Pattern Recognition | 2010

Kernel Uncorrelated Adjacent-class Discriminant Analysis

Xiao-Yuan Jing; Sheng Li; Yongfang Yao; Lusha Bian; Jingyu Yang

In this paper, a kernel uncorrelated adjacent-class discriminant analysis (KUADA) approach is proposed for image recognition. The optimal nonlinear discriminant vectors obtained by this approach differentiate one class from its adjacent classes, i.e., its nearest neighbor classes, by constructing specific between-class and within-class scatter matrices in kernel space using the Fisher criterion. In this manner, KUADA acquires all discriminant vectors class by class. Furthermore, KUADA makes every discriminant vector satisfy local statistically uncorrelated constraints by using the corresponding class and part of its most adjacent classes. Experimental results on the public AR and CAS-PEAL face databases demonstrate that the proposed approach outperforms several representative nonlinear discriminant methods.
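
A linear sketch of the adjacent-class construction is shown below for clarity: for a chosen class, the between-class scatter is built only from its nearest neighbor classes, and one Fisher-optimal vector is extracted. The kernelization and the locally uncorrelated constraints of KUADA are not reproduced here.

import numpy as np
from scipy.linalg import eigh

def adjacent_class_vector(X, y, target_class, n_adjacent=3, eps=1e-6):
    """One discriminant vector separating target_class from its nearest neighbor classes."""
    classes = np.unique(y)
    means = {c: X[y == c].mean(axis=0) for c in classes}
    others = [c for c in classes if c != target_class]
    # adjacent classes: the classes whose means lie closest to the target class mean
    adj = sorted(others, key=lambda c: np.linalg.norm(means[c] - means[target_class]))[:n_adjacent]
    d = X.shape[1]
    Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
    for c in adj:
        diff = means[c] - means[target_class]
        Sb += np.outer(diff, diff)
    for c in [target_class] + adj:
        Xc = X[y == c] - means[c]
        Sw += Xc.T @ Xc
    vals, vecs = eigh(Sb, Sw + eps * np.eye(d))
    return vecs[:, np.argmax(vals)]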


International Conference on Genetic and Evolutionary Computing | 2009

Face Recognition Based on a Gabor-2DFisherface Approach with Selecting 2D Gabor Principal Components and Discriminant Vectors

Xiao-Yuan Jing; Heng Chang; Sheng Li; Yongfang Yao; Qian Liu; Lusha Bian; Jiang-Yue Man; Chao Wang

In this paper, a novel Gabor-2DFisherface approach that selects 2D Gabor principal components and discriminant vectors is proposed for face recognition. The Gabor transform is an important frequency-domain analysis tool, and the proposed approach combines it with the discriminant analysis technique. The approach first preprocesses all image samples using the Gabor transform, and then calculates 2D Gabor principal components and discriminant vectors using the 2DFisherface method. To enhance the discriminant capability, an automatic strategy is employed to select these components and vectors. After extracting the discriminant features, the approach adopts the nearest neighbor classifier with cosine distance for classification. Experimental results on the public AR face database demonstrate that the proposed approach outperforms several related discrimination methods.
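
Two of the building blocks named above, Gabor preprocessing and cosine-distance nearest neighbor classification, can be sketched as follows. The filter bank uses simplified real-valued Gabor kernels, and the 2DFisherface projection and the automatic component-selection strategy are not reproduced; all names and parameter values are illustrative.

import numpy as np
from scipy.signal import fftconvolve

def gabor_bank(size=15, scales=(2, 4, 8), orientations=4):
    xs, ys = np.meshgrid(np.arange(size) - size // 2, np.arange(size) - size // 2)
    bank = []
    for s in scales:
        for k in range(orientations):
            theta = np.pi * k / orientations
            xr = xs * np.cos(theta) + ys * np.sin(theta)
            yr = -xs * np.sin(theta) + ys * np.cos(theta)
            # Gaussian envelope modulated by a cosine wave along the rotated axis
            bank.append(np.exp(-(xr ** 2 + yr ** 2) / (2 * s ** 2)) * np.cos(2 * np.pi * xr / s))
    return bank

def gabor_features(img, bank):
    return np.concatenate([fftconvolve(img, g, mode='same').ravel() for g in bank])

def cosine_nn(train_feats, train_labels, test_feat):
    sims = train_feats @ test_feat / (
        np.linalg.norm(train_feats, axis=1) * np.linalg.norm(test_feat) + 1e-12)
    return train_labels[np.argmax(sims)]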


Asian Conference on Pattern Recognition | 2011

Supervised local sparsity preserving projection for face feature extraction

Xiao-Yuan Jing; Sheng Li; Songhao Zhu; Qian Liu; Jingyu Yang; Jiasen Lu

In the sparse representation of a target sample, most nonzero coefficients belong to the neighbors of the target sample. Combining this observation with the theory of manifold learning, we propose a novel unsupervised feature extraction approach named local sparsity preserving projection (LSPP). LSPP sparsely reconstructs a target training sample from merely its neighbors, and seeks a subspace in which the local sparse reconstructive relations among all training samples are preserved. To improve the discriminating power of LSPP, we further propose a supervised LSPP (SLSPP), which incorporates the class information of neighbor samples into the local sparse representation. Experimental results on the AR and CAS-PEAL face databases demonstrate the effectiveness of LSPP and SLSPP compared with related feature extraction methods.
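
The local sparse coding and the projection step can be sketched as below: each training sample is sparsely reconstructed from its k nearest neighbors only, and a subspace is then found that preserves these local reconstructive relations through an NPE-style generalized eigenproblem. The supervised SLSPP variant would additionally restrict the neighbor set to same-class samples; the Lasso solver is an assumed stand-in for the paper's sparse coder.

import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import Lasso

def lspp(X, n_dims, k=10, alpha=0.01, eps=1e-6):
    n, d = X.shape
    S = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]                 # k nearest neighbors, excluding i itself
        coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(X[nbrs].T, X[i]).coef_
        S[i, nbrs] = coef                                 # local sparse reconstructive weights
    M = (np.eye(n) - S).T @ (np.eye(n) - S)               # penalty for breaking the relations
    A = X.T @ M @ X
    B = X.T @ X + eps * np.eye(d)
    vals, vecs = eigh(A, B)                               # smallest eigenvalues preserve relations
    return vecs[:, :n_dims]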


2011 International Conference on Hand-Based Biometrics | 2011

Multi-Modal Biometric Feature Extraction and Recognition Based on Subclass Discriminant Analysis (SDA) and Generalized Singular Value Decomposition (GSVD)

Xiao-Yuan Jing; Sheng Li; Yongfang Yao; Wenqian Li; Fei Wu; Chao Lan

When extracting discriminative features from multi-modal data, current methods rarely take the data distribution into account. In this paper, we present an assumption that is consistent with the viewpoint of discrimination: a person's overall biometric data should be regarded as one class in the input space, and his or her different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multi-modal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person's different biometric data are treated as different subclasses of one class, and a transformed space is calculated in which the difference among subclasses belonging to different persons is maximized and the difference within each subclass is minimized. The obtained multi-modal features are then used for classification. Two solutions are presented to overcome the singularity problem encountered in the calculation: using PCA preprocessing and employing the generalized singular value decomposition (GSVD) technique. For simplicity, two typical kinds of biometric data are considered in this paper, i.e., face data and palmprint data. Compared with several representative multi-modal biometric recognition methods, the experimental results show that the proposed SDA-GSVD-based multi-modal biometric feature extraction approach achieves the best recognition performance.
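
The subclass construction with the PCA-preprocessing workaround for the singular within-subclass scatter can be sketched as below (the GSVD-based solution is not reproduced). Each person is one class and each modality of that person is one subclass; labels are assumed to be NumPy arrays, and all names are illustrative.

import numpy as np
from scipy.linalg import eigh

def sda_multimodal(X, person, modality, n_dims, pca_dims=100, eps=1e-6):
    """X: stacked samples; person: class labels; modality: subclass labels (e.g. face / palm)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:min(pca_dims, Vt.shape[0])].T                 # PCA preprocessing against singularity
    Z = (X - mean) @ P
    d = Z.shape[1]
    subclasses = sorted({(p, m) for p, m in zip(person, modality)})
    means = {s: Z[(person == s[0]) & (modality == s[1])].mean(axis=0) for s in subclasses}
    Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
    for i, si in enumerate(subclasses):
        for sj in subclasses[i + 1:]:
            if si[0] != sj[0]:                            # subclasses of *different* persons
                diff = means[si] - means[sj]
                Sb += np.outer(diff, diff)
    for s in subclasses:
        Zs = Z[(person == s[0]) & (modality == s[1])] - means[s]
        Sw += Zs.T @ Zs
    vals, vecs = eigh(Sb, Sw + eps * np.eye(d))
    W = vecs[:, np.argsort(vals)[::-1][:n_dims]]
    return P @ W                                          # overall transform: PCA followed by SDA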


International Conference on Genetic and Evolutionary Computing | 2009

A Regularized Nonlinear Discrimination Approach

Xiao-Yuan Jing; Shou Gao; Yongfang Yao; Sheng Li; Shi-Qiang Gao; Shu Wu; Fengnan Yu; Yongchuan Zhang

For the nonlinear discriminant analysis technique, some key points are worthy of further research. One is finding an effective rule to select an appropriate kernel function parameter for different sample sets. Another is providing a simple and efficient solution to the singularity problem of the within-class scatter matrix. In this paper, we focus on these two points and present a regularized nonlinear discriminant analysis approach. We first present a definition of the regularized within-class scatter and provide a very simple solution for the regularization parameter. Then, a nonlinear discriminant judgment is proposed to select the parameter of the radial basis function. A large public face database is used as the test data. The experimental results demonstrate that the proposed approach outperforms several representative nonlinear discrimination methods.
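
The two ingredients discussed above can be sketched as follows: a kernel Fisher discriminant with a regularized within-class scatter, and a simple grid search over the RBF width driven by the resulting Fisher ratio, which here stands in for the paper's own selection rule and regularization-parameter formula.

import numpy as np
from scipy.linalg import eigh

def kfd_fisher_ratio(X, y, gamma, reg=1e-3):
    n = len(X)
    sq = np.square(X).sum(axis=1)
    K = np.exp(-gamma * np.maximum(sq[:, None] + sq[None, :] - 2 * X @ X.T, 0))
    m_all = K.mean(axis=1)
    M = np.zeros((n, n)); N = np.zeros((n, n))
    for c in np.unique(y):
        Kc = K[:, y == c]
        l = Kc.shape[1]
        mc = Kc.mean(axis=1)
        M += l * np.outer(mc - m_all, mc - m_all)                  # between-class term
        N += Kc @ (np.eye(l) - np.ones((l, l)) / l) @ Kc.T         # within-class term
    N_reg = N + (reg * np.trace(N) / n + 1e-8) * np.eye(n)         # regularized within-class scatter
    vals, _ = eigh(M, N_reg)
    return vals[-1]                                                # largest generalized eigenvalue

def select_gamma(X, y, candidates=(1e-4, 1e-3, 1e-2, 1e-1)):
    # pick the RBF width that maximizes the regularized Fisher ratio on the training set
    return max(candidates, key=lambda g: kfd_fisher_ratio(X, y, g))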

Collaboration


Sheng Li's main collaborators and their affiliations.

Top Co-Authors

Yongfang Yao, Nanjing University of Posts and Telecommunications
Jingyu Yang, Nanjing University of Science and Technology
David Zhang, Hong Kong Polytechnic University
Qian Liu, Nanjing University of Posts and Telecommunications
Lusha Bian, Nanjing University of Posts and Telecommunications
Min Li, Nanjing University of Posts and Telecommunications