Miao Cheng
Chongqing University
Publication
Featured research published by Miao Cheng.
Systems, Man and Cybernetics | 2010
Miao Cheng; Bin Fang; Yuan Yan Tang; Taiping Zhang
Dimensionality reduction and incremental learning have recently received broad attention in many applications of data mining, pattern recognition, and information retrieval. Inspired by the concept of manifold learning, many discriminant embedding techniques have been introduced to seek a low-dimensional discriminative manifold structure in the high-dimensional space for feature reduction and classification. However, such graph-embedding-framework-based subspace methods usually confront two limitations: (1) since there is no available updating rule for local discriminant analysis with additive data, it is difficult to design an incremental learning algorithm, and (2) the small sample size (SSS) problem usually occurs if the original data exist in a very high-dimensional space. To overcome these problems, this paper devises a supervised learning method, called local discriminant subspace embedding (LDSE), to extract discriminative features. An incremental-mode algorithm, incremental LDSE (ILDSE), is then proposed to learn the local discriminant subspace from newly inserted data; it extends the batch LDSE algorithm to incremental learning by employing the idea of the singular value decomposition (SVD) updating algorithm. Furthermore, the SSS problem is avoided in our method for high-dimensional data, and benchmark incremental learning experiments on face recognition show that ILDSE incurs much less computational cost than the batch algorithm.
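The SVD-updating idea that ILDSE borrows can be sketched in a few lines. This is a minimal illustration of appending new columns to a matrix whose thin SVD is already known, not the authors' ILDSE implementation; the function name `svd_append_columns` is ours.

```python
import numpy as np

def svd_append_columns(U, s, Vt, C):
    """Given the thin SVD X = U @ diag(s) @ Vt, return the thin SVD of
    [X, C] without re-decomposing X from scratch -- the incremental
    update that SVD-updating-based methods such as ILDSE build on."""
    L = U.T @ C                          # new columns in the current subspace
    H = C - U @ L                        # residual orthogonal to span(U)
    J, K = np.linalg.qr(H)               # orthonormal basis for the residual
    r, c, n = s.shape[0], C.shape[1], Vt.shape[1]
    # Small core matrix: [X, C] = [U, J] @ Q @ blkdiag(Vt, I).
    Q = np.block([[np.diag(s), L],
                  [np.zeros((c, r)), K]])
    Uq, sq, Vtq = np.linalg.svd(Q, full_matrices=False)
    Vt_block = np.block([[Vt, np.zeros((r, c))],
                         [np.zeros((c, n)), np.eye(c)]])
    return np.hstack([U, J]) @ Uq, sq, Vtq @ Vt_block
```

Because only the small (r + c)-sized core matrix is decomposed, each update costs far less than recomputing the full SVD when new samples arrive.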
Neurocomputing | 2009
Bin Fang; Miao Cheng; Yuan Yan Tang; Guanghui He
Many applications in machine learning and computer vision come down to feature representation and reduction. Manifold learning seeks the intrinsic low-dimensional manifold structure hidden in high-dimensional data. In the past few years, many local discriminant analysis methods have been proposed to exploit the discriminative submanifold structure by extending the manifold learning idea to supervised settings. In particular, marginal Fisher analysis (MFA) finds the local interclass margin for feature extraction and classification. However, since only a limited number of data pairs are employed to determine the discriminative margin, such methods usually suffer from the maladjusted learning problem introduced in this paper. To improve the discriminant ability of MFA, we combine the marginal Fisher idea with the global between-class separability criterion (BCSC) and propose a novel supervised learning method, called local and global margin projections (LGMP), in which the maladjusted learning problem is alleviated. Experimental evaluation shows that the proposed LGMP outperforms the original MFA.
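Methods of the MFA family fit a common graph-embedding template: compress the edges of an intrinsic (within-class) graph while stretching the edges of a penalty (margin) graph. A minimal numpy sketch of that template, under our own naming and not any specific published algorithm, might look like:

```python
import numpy as np

def graph_embedding(X, W_intrinsic, W_penalty, dim, eps=1e-6):
    """Generic graph-embedding projection: columns of X are samples.
    Finds directions v that compress edges of the intrinsic graph while
    stretching edges of the penalty graph, i.e. minimise
    v' X L X' v relative to v' X Lp X' v (L, Lp = graph Laplacians)."""
    lap = lambda W: np.diag(W.sum(axis=1)) - W
    A = X @ lap(W_intrinsic) @ X.T       # intrinsic (to be compressed)
    B = X @ lap(W_penalty) @ X.T         # penalty (to be stretched)
    # Regularise the penalty term, then take the smallest generalized
    # eigenpairs of the ratio problem.
    vals, vecs = np.linalg.eig(np.linalg.solve(B + eps * np.eye(len(B)), A))
    order = np.argsort(vals.real)
    return vecs[:, order[:dim]].real
```

Different choices of the two graphs yield different members of the family; per the abstract, LGMP's contribution is adding the global between-class criterion alongside the local margin graph.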
Pattern Recognition Letters | 2010
Miao Cheng; Bin Fang; Yuan Yan Tang
For pattern analysis and recognition, it is generally necessary to find a meaningful low-dimensional representation of the data. Over the past decades, subspace learning methods have been regarded as useful tools for feature extraction and dimensionality reduction. Without loss of generality, linear subspace learning algorithms can be explained as enhancing the affinity and repulsion of certain data pairs. From this point of view, a novel linear discriminant method, termed Marginal Discriminant Projections (MDP), is proposed to learn the marginal subspace. Unlike existing marginal learning methods, the maladjusted learning problem is alleviated by adopting a hierarchical fuzzy clustering approach, in which the discriminative margin is found adaptively and iterative objective optimization is avoided. In addition, the proposed method is immune to the well-known curse-of-dimensionality problem within the presented subspace learning framework. Experiments on extensive datasets demonstrate the effectiveness of the proposed MDP for discriminative learning and recognition tasks.
Neurocomputing | 2011
Miao Cheng; Bin Fang; Chi-Man Pun; Yuan Yan Tang
Derived from traditional manifold learning algorithms, local discriminant analysis methods identify the underlying submanifold structures while employing discriminative information for dimensionality reduction. Mathematically, they can all be unified into a graph-embedding framework with different construction criteria. However, such learning algorithms are limited by the curse of dimensionality if the original data lie on a high-dimensional manifold. Unlike existing algorithms, we consider discriminant embedding as a kernel analysis approach in the sample space, and a kernel-view-based discriminant method is proposed for embedded feature extraction, in which both PCA pre-processing and the pruning of data can be avoided. Extensive experiments on high-dimensional data sets show the robustness and outstanding performance of the proposed method.
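The kernel-view trick of working in the sample space can be illustrated with plain PCA: when the dimensionality d far exceeds the sample count n, one can eigendecompose the n x n Gram matrix instead of the d x d covariance and recover identical axes. A sketch with our own function name, illustrating the general trick rather than the paper's method:

```python
import numpy as np

def sample_space_pca(X, dim):
    """PCA computed entirely in the n-dim sample space via the Gram
    matrix, avoiding the d x d covariance when d >> n (the kernel-view
    idea). Columns of X are samples; returns d x dim principal axes."""
    Xc = X - X.mean(axis=1, keepdims=True)
    G = Xc.T @ Xc                        # n x n Gram matrix
    vals, vecs = np.linalg.eigh(G)
    idx = np.argsort(vals)[::-1][:dim]   # largest eigenvalues first
    # Lift sample-space eigenvectors back to unit-norm d-dim axes.
    return Xc @ vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
```

The same sample-space reformulation applies to discriminant criteria, which is what lets the high-dimensional embedding proceed without a PCA pre-step.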
Pattern Analysis and Applications | 2014
Miao Cheng; Chi-Man Pun; Yuan Yan Tang
Nonnegative learning aims to learn the part-based representation of nonnegative data and has received much attention in recent years. Nonnegative matrix factorization has been a popular way to make nonnegative learning applicable, and it can also be explained as an optimization problem with bound constraints. In order to exploit the informative components hidden in nonnegative patterns, a novel nonnegative learning method, termed nonnegative class-specific entropy component analysis, is developed in this work. Distinct from existing methods, the proposed method handles general objective functions, and the conjugate gradient technique is applied to enhance the iterative optimization. In view of this development, a general nonnegative learning framework is presented to deal with nonnegative optimization problems with general objective costs. Owing to the general objective costs and the nonnegative bound constraints, an ill-conditioned nonnegative learning problem usually occurs. To address this limitation, a modified line search criterion is proposed, which prevents the null trap under ensured conditions while keeping each feasible step descendent. In addition, a numerical stopping rule is employed in place of the popular gradient-based one to achieve optimized efficiency. Experiments on face recognition under a variety of conditions reveal that the proposed method outperforms other methods.
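The flavour of line-search-based nonnegative optimization can be conveyed with a generic projected-gradient NMF using Armijo backtracking. This illustrates the general technique only, not the paper's entropy-based objective or its modified line search criterion; all names are ours.

```python
import numpy as np

def pg_update(V, W, H, beta=0.5, sigma=1e-4, tries=30):
    """One projected-gradient step on H (W held fixed) for the cost
    f(H) = 0.5 * ||V - W H||_F^2, with Armijo backtracking so every
    accepted step stays feasible (nonnegative) and descendent."""
    f = lambda A: 0.5 * np.linalg.norm(V - W @ A) ** 2
    grad = W.T @ (W @ H - V)
    t = 1.0
    for _ in range(tries):
        H_new = np.maximum(H - t * grad, 0.0)    # projection onto H >= 0
        if f(H_new) - f(H) <= sigma * np.sum(grad * (H_new - H)):
            return H_new
        t *= beta                                # backtrack
    return H

def nmf(V, rank, iters=100, seed=0):
    """Alternate projected-gradient updates on W and H."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(iters):
        H = pg_update(V, W, H)
        W = pg_update(V.T, H.T, W.T).T           # same update, transposed
    return W, H
```

The Armijo test accepts a step only when the realised decrease matches the step actually taken after projection, which is the generic version of keeping "the feasible step descendent" that the abstract refines.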
International Journal of Pattern Recognition and Artificial Intelligence | 2011
Miao Cheng; Bin Fang; Yuan Yan Tang; Hengxin Chen
Many problems in pattern classification and feature extraction involve dimensionality reduction as a necessary processing step. Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmaps, seek the low-dimensional manifold in an unsupervised way, while local discriminant analysis methods identify the underlying supervised submanifold structures. In addition, it is well known that the intraclass null subspace contains the most discriminative information if the original data exist in a high-dimensional space. In this paper, we seek the local null space in accordance with the null space LDA (NLDA) approach and reveal that its computational expense mainly depends on the number of connected edges in the graphs, which may still be unacceptable when a large number of samples are involved. To address this limitation, an improved local null space algorithm is proposed that employs the penalty subspace to approximate the local discriminant subspace. Compared with the traditional approach, the proposed method is more efficient, so that the overload problem is avoided, at the theoretical cost of a slight loss of discriminant power. A comparative study on classification shows that the performance of the approximative algorithm is quite close to that of the genuine one.
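The null-subspace idea underlying NLDA can be sketched directly: restrict attention to the null space of the within-class scatter, where within-class variation vanishes, and maximise between-class scatter there. This is the baseline approach the paper approximates, not the proposed penalty-subspace algorithm; the function name is ours.

```python
import numpy as np

def null_space_lda(X, y, dim, tol=1e-10):
    """Baseline null space LDA: columns of X are samples. Project onto
    the null space of the within-class scatter Sw, then keep the
    directions of largest between-class scatter Sb inside it."""
    mean = X.mean(axis=1, keepdims=True)
    Sw = np.zeros((X.shape[0], X.shape[0]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[:, y == c]
        mc = Xc.mean(axis=1, keepdims=True)
        Sw += (Xc - mc) @ (Xc - mc).T
        Sb += Xc.shape[1] * (mc - mean) @ (mc - mean).T
    vals, vecs = np.linalg.eigh(Sw)
    N = vecs[:, vals < tol * vals.max()]         # null space of Sw
    bvals, bvecs = np.linalg.eigh(N.T @ Sb @ N)  # Sb restricted to it
    return N @ bvecs[:, np.argsort(bvals)[::-1][:dim]]
```

Its cost is dominated by the d x d eigendecomposition of Sw, which motivates the cheaper approximations the abstract discusses.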
International Journal of Pattern Recognition and Artificial Intelligence | 2009
Bin Fang; Yuan Yan Tang; Patrick S. P. Wang; Miao Cheng; Taiping Zhang
The main problem to identify skilled forgeries for offline signature verification lies in the fact that it is difficult to formalize distinguished feature representation of the signature patterns a...
International Conference on Machine Learning and Applications | 2011
Miao Cheng; Yuan Yan Tang; Chi-Man Pun
As an important step in machine learning and information processing, feature extraction has received wide attention in the past decades. For high-dimensional data, the feature extraction problem usually comes down to exploiting the intrinsic pattern information through dimensionality reduction. Since nonparametric approaches are applicable without parameter tuning, they are often preferred in real-world applications. In this work, a novel approach, direct maximum margin alignment (DMMA), is proposed for nonparametric feature reduction and extraction. Although a straightforward solution exists for discriminative-ratio-based subspace selection, no such solution has been available for maximum margin alignment. In terms of the kernel-view idea, DMMA can be performed by introducing a sample kernel, while computational efficiency remains achievable. Experiments on pattern recognition show that the proposed method obtains performance comparable to several state-of-the-art algorithms.
International Conference on Wavelet Analysis and Pattern Recognition | 2008
Miao Cheng; Bin Fang; Yuan Yan Tang
Inspired by the concept of manifold learning, discriminant embedding technologies aim to exploit low-dimensional discriminant manifold structure in the high-dimensional space for dimensionality reduction and classification. However, such graph-embedding-framework-based techniques usually suffer from high complexity and the small sample size (SSS) problem. To address these problems, we reformulate the Laplacian matrix and propose a regularized neighborhood discriminant analysis method, namely RNDA, to discover the local discriminant information; it follows an approach similar to regularized LDA. Compared with other discriminant embedding techniques, RNDA achieves efficiency by employing QR decomposition as a pre-step. Experiments on face databases are presented to show the outstanding performance of the proposed method.
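The two ingredients the abstract names, a QR pre-step and regularized-LDA-style analysis, can be combined in a short sketch. This is a generic illustration under our own naming, not the RNDA algorithm or its reformulated Laplacian:

```python
import numpy as np

def regularized_lda_qr(X, y, dim, alpha=1e-3):
    """Reduce d-dim samples (columns of X) to the span of the data via an
    economy QR pre-step -- cheap when d >> n -- then run ridge-regularized
    LDA there, sidestepping the singular within-class scatter of the SSS
    case. Returns d x dim projection directions."""
    Q, _ = np.linalg.qr(X)               # X = Q R, Q: d x n orthonormal
    Z = Q.T @ X                          # samples in the reduced space
    mean = Z.mean(axis=1, keepdims=True)
    Sw = np.zeros((Z.shape[0], Z.shape[0]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Zc = Z[:, y == c]
        mc = Zc.mean(axis=1, keepdims=True)
        Sw += (Zc - mc) @ (Zc - mc).T
        Sb += Zc.shape[1] * (mc - mean) @ (mc - mean).T
    # Regularised discriminant directions in the reduced space.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + alpha * np.eye(len(Sw)), Sb))
    W = vecs[:, np.argsort(vals.real)[::-1][:dim]].real
    return Q @ W                         # map back to the original space
```

The ridge term alpha keeps the within-class scatter invertible even when samples are scarce, which is the role regularization plays in regularized-LDA-style methods.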
Mathematical Problems in Engineering | 2008
Miao Cheng; Bin Fang; Yuan Yan Tang
Face recognition is a challenging problem in computer vision and pattern recognition. Recently, many local geometrical-structure-based techniques have been presented to obtain low-dimensional representations of face images with enhanced discriminatory power. However, these methods suffer from the small sample size (SSS) problem or the high computational complexity of high-dimensional data. To overcome these problems, we propose a novel local manifold structure learning method for face recognition, named direct neighborhood discriminant analysis (DNDA), which separates nearby interclass samples and preserves the local within-class geometry in two separate steps. In addition, DNDA does not need a PCA preprocessing step to drastically reduce the dimensionality, thereby avoiding the loss of discriminative information. Experiments conducted on the ORL, Yale, and UMIST face databases show the effectiveness of the proposed method.