
Publication


Featured research published by Chenping Hou.


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Joint Embedding Learning and Sparse Regression: A Framework for Unsupervised Feature Selection

Chenping Hou; Feiping Nie; Xuelong Li; Dongyun Yi; Yi Wu

Feature selection has aroused considerable research interest during the last few decades. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we propose a novel unsupervised feature selection framework, termed joint embedding learning and sparse regression (JELSR), in which embedding learning and sparse regression are performed jointly rather than in sequence. To show the effectiveness of the framework, we also provide an instantiation that constructs weights via local linear approximation and adds ℓ2,1-norm regularization, and design an effective algorithm to solve the corresponding optimization problem. Furthermore, we discuss the proposed approach in depth, covering convergence analysis, computational complexity, and parameter determination. In all, the proposed framework not only provides a new perspective on traditional methods but also motivates further research on feature selection. Compared with traditional unsupervised feature selection methods, our approach integrates the merits of embedding learning and sparse regression. Promising experimental results on different kinds of data sets, including image, voice, and biological data, validate the effectiveness of the proposed algorithm.
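The ℓ2,1-norm regularizer mentioned above is what links the learned transformation to feature ranking: penalizing the sum of row norms zeroes out entire rows, and the surviving row norms score the features. A minimal sketch of that mechanism (the helper names are illustrative, not from the paper):

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: the sum of the l2 norms of the rows of W.
    Penalizing it drives whole rows toward zero, which is why it suits
    feature selection: each row corresponds to one feature."""
    return float(np.sum(np.linalg.norm(W, axis=1)))

def rank_features(W):
    """Rank features by the l2 norm of their coefficient rows, descending."""
    return np.argsort(-np.linalg.norm(W, axis=1))

# Toy transformation matrix: rows are features, columns are embedding dimensions.
W = np.array([[0.0, 0.0],    # dead feature, row norm 0
              [3.0, 4.0],    # strong feature, row norm 5
              [0.1, 0.0]])   # weak feature, row norm 0.1
print(l21_norm(W))           # 5.1
print(rank_features(W))      # [1 2 0]
```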


International Joint Conference on Artificial Intelligence | 2011

Feature selection via joint embedding learning and sparse regression

Chenping Hou; Feiping Nie; Dongyun Yi; Yi Wu

The problem of feature selection has aroused considerable research interest in the past few years. Traditional learning-based feature selection methods separate embedding learning and feature ranking. In this paper, we introduce a novel unsupervised feature selection approach via Joint Embedding Learning and Sparse Regression (JELSR). Instead of simply employing the graph Laplacian for embedding learning followed by regression, we use weights obtained via locally linear approximation to construct the graph, and unify embedding learning and sparse regression to perform feature selection. By adding ℓ2,1-norm regularization, we learn a sparse matrix for feature ranking. We also provide an effective method to solve the resulting problem. Compared with traditional unsupervised feature selection methods, our approach integrates the merits of embedding learning and sparse regression simultaneously. Extensive experimental results demonstrate its validity.
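The locally linear approximation step can be sketched concretely: each point is reconstructed from its k nearest neighbors with weights that sum to one, and those weights define the graph. This is only the graph-construction step, not the joint JELSR solver, and the helper name is mine:

```python
import numpy as np

def lle_weights(X, k=2, reg=1e-3):
    """Locally linear approximation weights: reconstruct each row of X
    from its k nearest neighbors, with reconstruction weights summing
    to one. Only the graph-construction step of the approach."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # skip the point itself
        Z = X[nbrs] - X[i]                     # neighbors centered at X[i]
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)     # regularize the local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, nbrs] = w / w.sum()               # normalize so weights sum to one
    return W
```

Each row of the returned matrix holds one point's reconstruction weights over its neighbors; the regularization term keeps the local Gram matrix well-conditioned when neighbors are nearly collinear.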


Pattern Recognition | 2010

Multiple view semi-supervised dimensionality reduction

Chenping Hou; Changshui Zhang; Yi Wu; Feiping Nie

Multiple view data, together with some domain knowledge in the form of pairwise constraints, arise in various data mining applications. How to learn a hidden consensus pattern in a low-dimensional space is a challenging problem. In this paper, we propose a new method for multiple view semi-supervised dimensionality reduction. The pairwise constraints are used to derive an embedding in each view, and simultaneously a linear transformation is introduced to make the embeddings from different pattern spaces comparable. Hence, the consensus pattern can be learned from multiple embeddings of multiple representations. We derive an iterative algorithm to solve this problem. Theoretical analyses and out-of-sample extensions are also provided. Promising experiments on various data sets, together with some important discussions, demonstrate the effectiveness of the proposed algorithm.
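The role of the linear transformation can be illustrated in isolation: given embeddings of the same points from two views, a least-squares map makes one comparable to the other. This is a simplified sketch of the alignment idea only; the paper iterates it jointly with the constraint-driven embedding step, and the function name is mine:

```python
import numpy as np

def align_views(Y1, Y2):
    """Align two per-view embeddings with a linear transformation:
    find T minimizing ||Y1 @ T - Y2||_F via least squares, so that
    embeddings from different pattern spaces become comparable."""
    T, *_ = np.linalg.lstsq(Y1, Y2, rcond=None)
    return T

# Toy example: the second view's embedding is a rotated copy of the first.
rng = np.random.default_rng(0)
Y1 = rng.normal(size=(10, 2))
R = np.array([[0.0, -1.0], [1.0, 0.0]])   # 90-degree rotation
Y2 = Y1 @ R
T = align_views(Y1, Y2)
print(np.allclose(Y1 @ T, Y2))            # True: the rotation is recovered
```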


Pattern Recognition | 2009

Stable local dimensionality reduction approaches

Chenping Hou; Changshui Zhang; Yi Wu; Yuanyuan Jiao

Dimensionality reduction is a major challenge in many areas. A large number of local approaches, stemming from statistics or geometry, have been developed. In practice, however, these local approaches often lack robustness: in contrast to maximum variance unfolding (MVU), which explicitly unfolds the manifold, they merely characterize local geometric structure. Moreover, the eigenproblems they encounter are hard to solve. We propose a unified framework that explicitly unfolds the manifold and reformulates local approaches as semi-definite programs instead of the above-mentioned eigenproblems. Three well-known algorithms, locally linear embedding (LLE), Laplacian eigenmaps (LE), and local tangent space alignment (LTSA), are reinterpreted and improved within this framework. Several experiments demonstrate the potential of our framework and the improvements to these local algorithms.
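For context, here is the eigenproblem-based baseline that the framework reformulates: a bare-bones Laplacian eigenmaps, one of the three local methods named above. The SDP reformulation itself is beyond a short sketch; this shows only the method being improved (helper name is mine):

```python
import numpy as np

def laplacian_eigenmaps(X, k=2, dim=1):
    """Classic Laplacian eigenmaps: build a symmetric 0/1 kNN graph,
    form the graph Laplacian L = D - W, and embed the points with the
    eigenvectors of the smallest nonzero eigenvalues of L."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:
            W[i, j] = W[j, i] = 1.0            # symmetrize the kNN edges
    L = np.diag(W.sum(axis=1)) - W             # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]                  # skip the constant eigenvector
```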


Pattern Recognition | 2014

Multiple rank multi-linear SVM for matrix data classification

Chenping Hou; Feiping Nie; Changshui Zhang; Dongyun Yi; Yi Wu

Matrices, or more generally multi-way arrays (tensors), are common forms of data encountered in a wide range of real applications. How to classify this kind of data is an important research topic for both pattern recognition and machine learning. In this paper, by analyzing the relationship between two famous traditional classification approaches, i.e., SVM and STM, a novel tensor-based method, multiple rank multi-linear SVM (MRMLSVM), is proposed. Different from traditional vector-based and tensor-based methods, multiple-rank left and right projecting vectors are employed to construct the decision boundary and establish the margin function. We reveal that the rank of the transformation can be regarded as a tradeoff parameter that balances the capacity of learning and generalization. We also propose an effective approach to solve the resulting non-convex optimization problem, and analyze its convergence behavior, initialization, computational complexity, and parameter determination. Compared with vector-based classification methods, MRMLSVM achieves higher accuracy and has lower computational complexity. Compared with traditional supervised tensor-based methods, MRMLSVM performs better for matrix data classification. Promising experimental results on various kinds of data sets show the effectiveness of our method.

Highlights:
- We propose a novel method, MRMLSVM, for matrix data classification.
- We reveal the essence of MRMLSVM from the viewpoint of learning theory.
- We provide an effective way to solve the proposed non-convex problem.
- Our proposed approach is a general model, and MRMLSVM is one instance of it.
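The "multiple-rank left and right projecting vectors" amount to a weight matrix of bounded rank: the decision value sums r rank-one bilinear projections, which is exactly the Frobenius inner product with W = U V^T, a matrix of rank at most r. A sketch of just the decision function (names are illustrative, not from the paper):

```python
import numpy as np

def mrml_score(X, U, V, b=0.0):
    """Multiple-rank multi-linear decision value: the sum over r
    rank-one projections u_k^T X v_k, plus a bias. Equivalent to
    <U V^T, X> with a weight matrix of rank at most r, so r trades
    model capacity against generalization, as the abstract notes."""
    return sum(U[:, k] @ X @ V[:, k] for k in range(U.shape[1])) + b

# Equivalence check: the bilinear form equals the inner product with U V^T.
rng = np.random.default_rng(1)
X = rng.normal(size=(4, 5))
U = rng.normal(size=(4, 2))   # left projecting vectors (rank r = 2)
V = rng.normal(size=(5, 2))   # right projecting vectors
s = mrml_score(X, U, V)
print(np.isclose(s, np.sum((U @ V.T) * X)))   # True
```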


IEEE Transactions on Neural Networks | 2015

Discriminative Embedded Clustering: A Framework for Grouping High-Dimensional Data

Chenping Hou; Feiping Nie; Dongyun Yi; Dacheng Tao

In many real applications of machine learning and data mining, we are often confronted with high-dimensional data. How to cluster high-dimensional data is still a challenging problem due to the curse of dimensionality. In this paper, we address this problem using joint dimensionality reduction and clustering. Different from traditional approaches that conduct dimensionality reduction and clustering in sequence, we propose a novel framework, referred to as discriminative embedded clustering, which alternates between them iteratively. Within this framework, we can not only interpret several traditional approaches and reveal their intrinsic relationships, but also derive a new method. We also propose an effective approach for solving the formulated nonconvex optimization problem. Comprehensive analyses, including convergence behavior, parameter determination, and computational complexity, together with the relationship to other related approaches, are also presented. Extensive experimental results on benchmark data sets illustrate that the proposed method outperforms related state-of-the-art clustering approaches and existing joint dimensionality reduction and clustering methods.
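The alternation can be sketched in miniature: embed, then cluster in the embedding. The sketch below uses plain PCA as a stand-in for the discriminative projection the paper actually learns from the cluster labels, so it is illustrative only; all names are mine:

```python
import numpy as np

def embedded_clustering(X, n_clusters=2, dim=1, iters=10):
    """A minimal alternation in the spirit of the framework:
    (a) project the data to a low-dimensional space (plain PCA here,
    standing in for the learned discriminative projection), then
    (b) run k-means-style assignment/update steps in that space."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Y = Xc @ Vt[:dim].T                        # (a) embed
    order = np.argsort(Y[:, 0])                # deterministic, spread-out init
    centers = Y[order[np.linspace(0, len(Y) - 1, n_clusters).astype(int)]]
    for _ in range(iters):                     # (b) cluster in the embedding
        labels = np.argmin(np.linalg.norm(Y[:, None] - centers, axis=2), axis=1)
        centers = np.array([Y[labels == c].mean(axis=0)
                            for c in range(n_clusters)])
    return labels

# Two well-separated blobs in 2-D; clustering happens in the 1-D embedding.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = embedded_clustering(X)
print(labels)   # the two blobs receive different labels
```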


IEEE Transactions on Neural Networks | 2016

Effective Discriminative Feature Selection With Nontrivial Solution

Hong Tao; Chenping Hou; Feiping Nie; Yuanyuan Jiao; Dongyun Yi

Feature selection and feature transformation, the two main ways to reduce dimensionality, are often presented separately. In this paper, a feature selection method is proposed by combining the popular transformation-based dimensionality reduction method linear discriminant analysis (LDA) with sparsity regularization. We impose row sparsity on the transformation matrix of LDA through ℓ2,1-norm regularization to achieve feature selection, and the resulting formulation optimizes for selecting the most discriminative features and removing redundant ones simultaneously. The formulation is extended to the ℓ2,p-norm regularized case, which is more likely to offer better sparsity when 0 < p < 1 and is thus a better approximation to the feature selection problem. An efficient algorithm is developed to solve the ℓ2,p-norm-based optimization problem, and it is proved that the algorithm converges when 0 < p < 2. Systematic experiments are conducted to understand how the proposed method works. Promising experimental results on various types of real-world data sets demonstrate the effectiveness of our algorithm.
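The ℓ2,p quantity generalizes the ℓ2,1 penalty by raising each row norm to the p-th power; smaller p penalizes small-but-nonzero rows relatively harder, which is why p in (0, 1) promotes stronger row sparsity. A toy illustration (the helper name is mine):

```python
import numpy as np

def l2p_norm_p(W, p):
    """The l2,p quantity raised to the p-th power: sum of ||w_i||_2^p
    over the rows of W. For p < 1 it is nonconvex but promotes stronger
    row sparsity than the l2,1 case (p = 1)."""
    row_norms = np.linalg.norm(W, axis=1)
    return float(np.sum(row_norms ** p))

W = np.array([[3.0, 4.0],    # row norm 5
              [0.0, 0.0],    # row norm 0
              [1.0, 0.0]])   # row norm 1
print(l2p_norm_p(W, 1.0))    # 5 + 0 + 1 = 6
print(l2p_norm_p(W, 0.5))    # sqrt(5) + 0 + 1 ~ 3.236
```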


IEEE Transactions on Image Processing | 2013

Efficient Image Classification via Multiple Rank Regression

Chenping Hou; Feiping Nie; Dongyun Yi; Yi Wu

The problem of image classification has aroused considerable research interest in the field of image processing. Traditional methods often convert an image to a vector and then use a vector-based classifier. In this paper, a novel multiple rank regression model (MRR) for matrix data classification is proposed. Unlike traditional vector-based methods, we employ multiple-rank left and right projecting vectors to regress each data matrix to its label for each category. The convergence behavior, initialization, computational complexity, and parameter determination are also analyzed. Compared with vector-based regression methods, MRR achieves higher accuracy and has lower computational complexity. Compared with traditional supervised tensor-based methods, MRR performs better for matrix data classification. Promising experimental results on face, object, and handwritten digit image classification tasks show the effectiveness of our method.
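The alternation at the heart of such bilinear regression is easy to sketch for the rank-one case: with u fixed, the model u^T X v is linear in v, and vice versa, so each half-step is an ordinary least-squares problem. An illustrative, unregularized sketch under those assumptions, not the paper's exact algorithm (names are mine):

```python
import numpy as np

def rank1_regress(Xs, y, iters=50, seed=0):
    """Alternating least squares for a rank-one bilinear model
    u^T X v ~ y: fixing u makes the problem linear in v, and fixing v
    makes it linear in u. Rank-one and unregularized, for illustration."""
    rng = np.random.default_rng(seed)
    m, n = Xs[0].shape
    u, v = rng.normal(size=m), rng.normal(size=n)
    for _ in range(iters):
        A = np.stack([u @ X for X in Xs])      # each row is u^T X_i
        v, *_ = np.linalg.lstsq(A, y, rcond=None)
        B = np.stack([X @ v for X in Xs])      # each row is (X_i v)^T
        u, *_ = np.linalg.lstsq(B, y, rcond=None)
    return u, v

# Noiseless toy problem: recover a planted rank-one regressor.
rng = np.random.default_rng(1)
u0, v0 = rng.normal(size=3), rng.normal(size=4)
Xs = [rng.normal(size=(3, 4)) for _ in range(30)]
y = np.array([u0 @ X @ v0 for X in Xs])
u, v = rank1_regress(Xs, y)
pred = np.array([u @ X @ v for X in Xs])
print(np.mean((pred - y) ** 2) / np.mean(y ** 2))   # relative error, typically tiny
```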


IEEE Transactions on Systems, Man, and Cybernetics | 2013

An Adaptive Approach to Learning Optimal Neighborhood Kernels

Xinwang Liu; Jianping Yin; Lei Wang; Lingqiao Liu; Jun Liu; Chenping Hou; Jian Zhang

Learning an optimal kernel plays a pivotal role in kernel-based methods. Recently, an approach called optimal neighborhood kernel learning (ONKL) has been proposed, showing promising classification performance. It assumes that the optimal kernel will reside in the neighborhood of a "pre-specified" kernel. Nevertheless, how to specify such a kernel in a principled way remains unclear. To solve this issue, this paper treats the pre-specified kernel as an extra variable and jointly learns it with the optimal neighborhood kernel and the structure parameters of support vector machines. To avoid trivial solutions, we constrain the pre-specified kernel with a parameterized model. We first discuss the characteristics of our approach and in particular highlight its adaptivity. After that, two instantiations are demonstrated by modeling the pre-specified kernel as a common Gaussian radial basis function kernel and as a linear combination of a set of base kernels in the manner of multiple kernel learning (MKL), respectively. We show that the optimization in our approach is a min-max problem and can be efficiently solved by employing the extended level method and Nesterov's method. We also give a probabilistic interpretation for our approach and apply it to explain existing kernel learning methods, providing another perspective on their commonalities and differences. Comprehensive experimental results on 13 UCI data sets and two additional real-world data sets show that, via the joint learning process, our approach not only adaptively identifies the pre-specified kernel but also achieves superior classification performance to the original ONKL and the related MKL algorithms.
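The two instantiations of the pre-specified kernel model can be made concrete: a Gaussian RBF kernel, and an MKL-style convex combination of base kernels. A minimal sketch of those two building blocks only, not of the joint min-max optimization (helper names are mine):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian RBF kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2),
    one choice of parameterized model for the pre-specified kernel."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.clip(d2, 0, None))   # clip tiny negatives

def combine_kernels(Ks, mu):
    """MKL-style combination sum_t mu_t K_t with mu normalized onto the
    simplex, the second instantiation mentioned in the abstract."""
    mu = np.asarray(mu, dtype=float) / np.sum(mu)
    return sum(m * K for m, K in zip(mu, Ks))

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K1 = rbf_kernel(X, gamma=0.5)
K2 = rbf_kernel(X, gamma=2.0)
K = combine_kernels([K1, K2], [1.0, 3.0])
print(np.allclose(np.diag(K), 1.0))   # True: convex combination of unit diagonals
```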


Neural Computing and Applications | 2013

Robust non-negative matrix factorization via joint sparse and graph regularization for transfer learning

Shizhun Yang; Chenping Hou; Changshui Zhang; Yi Wu

In real-world applications, we often have to deal with high-dimensional, sparse, noisy data that are not independent and identically distributed. In this paper, we aim to handle this kind of complex data in a transfer learning framework, and propose a robust non-negative matrix factorization model with joint sparse and graph regularization for transfer learning. First, we employ a robust non-negative matrix factorization model with sparse regularization (RSNMF) to handle the source domain data and learn a meaningful matrix that contains much of the information common to the source and target domains. Second, we treat this learned matrix as a bridge and transfer it to the target domain; the target domain data are reconstructed by our robust non-negative matrix factorization model with joint sparse and graph regularization (RSGNMF). Third, we apply feature selection to the newly sparse-represented target data. Fourth, we provide novel efficient iterative algorithms for the RSNMF and RSGNMF models, with rigorous convergence and correctness analyses for each. Finally, experimental results on both text and image data sets demonstrate that our REGTL model outperforms existing state-of-the-art methods.
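At the core of both RSNMF and RSGNMF is a non-negative factorization X ≈ UV. A bare-bones sketch of that base step using the classic multiplicative updates, without the sparse or graph regularizers the paper adds (function name is mine):

```python
import numpy as np

def nmf(X, r, iters=200, seed=0, eps=1e-9):
    """Classic multiplicative updates for plain NMF, X ~ U @ V with
    U, V >= 0. The paper's RSNMF/RSGNMF models add sparse and graph
    regularization terms on top of this base factorization."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, r)) + eps
    V = rng.random((r, n)) + eps
    for _ in range(iters):
        V *= (U.T @ X) / (U.T @ U @ V + eps)   # updates preserve non-negativity
        U *= (X @ V.T) / (U @ V @ V.T + eps)
    return U, V

X = np.abs(np.random.default_rng(1).normal(size=(6, 5)))
U, V = nmf(X, r=3)
print(np.linalg.norm(X - U @ V) / np.linalg.norm(X))   # relative residual
```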

Collaboration


Dive into Chenping Hou's collaborations.

Top Co-Authors

Dongyun Yi, National University of Defense Technology
Feiping Nie, Northwestern Polytechnical University
Yi Wu, National University of Defense Technology
Hong Tao, National University of Defense Technology
Yuanyuan Jiao, National University of Defense Technology
Wenzhang Zhuge, National University of Defense Technology
Shizhun Yang, National University of Defense Technology
Tingjin Luo, National University of Defense Technology
Jubo Zhu, National University of Defense Technology