Publication


Featured research published by Xi Peng.


Pattern Recognition | 2014

Learning Locality-Constrained Collaborative Representation for Robust Face Recognition

Xi Peng; Lei Zhang; Zhang Yi; Kok Kiong Tan

The low-dimensional manifold and sparse representation models are two well-known concise models which suggest that each datum can be described by a few characteristics. Manifold learning is usually investigated for dimension reduction by preserving some expected local geometric structures from the original space in a low-dimensional one. The structures are generally determined using pairwise distance, e.g., Euclidean distance. Alternatively, sparse representation denotes a data point as a linear combination of points from the same subspace. In practical applications, however, nearby points in terms of pairwise distance may not belong to the same subspace, and vice versa. Consequently, it is interesting and important to explore how to obtain a better representation by integrating these two models. To this end, this paper proposes a novel coding algorithm, called Locality-Constrained Collaborative Representation (LCCR), which introduces a form of local consistency into the coding scheme to improve the discrimination of the representation. The locality term derives from a biological observation that similar inputs have similar codes. The objective function of LCCR has an analytical solution and does not involve local minima. Empirical studies on several popular facial databases show that LCCR is promising in recognizing human faces under varying pose, expression, and illumination, as well as various corruptions and occlusions.
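The analytical solution mentioned in the abstract can be illustrated with a short sketch: a ridge-style objective augmented with a locality-weighted penalty has a closed-form minimizer. The distance-based weights and parameter values here are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def lccr_code(X, y, lam=1e-3, gamma=1e-3):
    """Closed-form coding of query y over dictionary X (columns).

    Minimises ||y - X a||^2 + lam * ||a||^2 + gamma * sum_i w_i * a_i^2,
    where w_i weights each atom by its distance to y (locality term).
    """
    # Locality weights: squared Euclidean distance from y to each atom.
    w = np.linalg.norm(X - y[:, None], axis=0) ** 2
    # The objective is a weighted ridge regression, so the minimiser is
    # the solution of the normal equations below (no local minima).
    A = X.T @ X + lam * np.eye(X.shape[1]) + gamma * np.diag(w)
    return np.linalg.solve(A, X.T @ y)
```

Because the system matrix is positive definite for any positive `lam`, the coding step is a single linear solve, which is consistent with the abstract's claim of an analytical solution.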


IEEE Transactions on Neural Networks | 2016

A Unified Framework for Representation-Based Subspace Clustering of Out-of-Sample and Large-Scale Data

Xi Peng; Huajin Tang; Lei Zhang; Zhang Yi; Shijie Xiao

Under the framework of spectral clustering, the key to subspace clustering is building a similarity graph that describes the neighborhood relations among data points. Some recent works build the graph using sparse, low-rank, and ℓ2-norm-based representations and have achieved state-of-the-art performance. However, these methods suffer from two limitations. First, their time complexity is at least proportional to the cube of the data size, which makes them inefficient for large-scale problems. Second, they cannot cope with out-of-sample data that were not used to construct the similarity graph; to cluster each out-of-sample datum, they must recalculate the similarity graph and the cluster membership of the whole data set. In this paper, we propose a unified framework that makes representation-based subspace clustering algorithms feasible for clustering both out-of-sample and large-scale data. Under our framework, the large-scale problem is tackled by converting it into an out-of-sample problem through sampling, clustering, coding, and classifying. Furthermore, we give an estimation of the error bounds by treating each subspace as a point in a hyperspace. Extensive experimental results on various benchmark data sets show that our methods outperform several recently proposed scalable methods in clustering large-scale data sets.
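The sampling-clustering-coding-classifying strategy can be sketched as follows. Plain k-means stands in for the representation-based spectral step, and nearest-neighbour assignment stands in for the coding/classifying step, so this is only an illustration of the out-of-sample idea, not the paper's algorithm.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Deterministic farthest-point initialisation, then Lloyd updates.
    C = X[[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - C[None]) ** 2).sum(-1).min(1)
        C = np.vstack([C, X[d.argmax()]])
    for _ in range(iters):
        lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.stack([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return lab

def scalable_cluster(X_all, n_in_sample, k, seed=0):
    # Sampling + clustering: cluster only a small in-sample subset.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_all), n_in_sample, replace=False)
    lab_in = kmeans(X_all[idx], k)
    # Coding + classifying: every point (in- or out-of-sample) takes the
    # label of its nearest in-sample neighbour, so no graph is rebuilt.
    d = ((X_all[:, None] - X_all[idx][None]) ** 2).sum(-1)
    return lab_in[d.argmin(1)]
```

The expensive step runs only on the in-sample subset, so the cost of handling each additional datum is linear in the subset size rather than cubic in the full data size.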


IEEE Transactions on Neural Networks | 2018

Connections Between Nuclear-Norm and Frobenius-Norm-Based Representations

Xi Peng; Canyi Lu; Zhang Yi; Huajin Tang

Many works have shown that Frobenius-norm-based representation (FNR) is competitive with sparse representation and nuclear-norm-based representation (NNR) in numerous tasks such as subspace clustering. Despite the success of FNR in experimental studies, little theoretical analysis has been provided to understand its working mechanism. In this brief, we fill this gap by building theoretical connections between FNR and NNR. More specifically, we prove that: 1) when the dictionary provides enough representative capacity, FNR is exactly NNR even when the data set contains Gaussian noise, Laplacian noise, or sample-specified corruption; and 2) otherwise, FNR and NNR are two solutions on the column space of the dictionary.


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Automatic Subspace Learning via Principal Coefficients Embedding

Xi Peng; Jiwen Lu; Zhang Yi; Rui Yan

In this paper, we address two challenging problems in unsupervised subspace learning: 1) how to automatically identify the feature dimension of the learned subspace (i.e., automatic subspace learning) and 2) how to learn the underlying subspace in the presence of Gaussian noise (i.e., robust subspace learning). We show that these two problems can be simultaneously solved by the proposed method, called principal coefficients embedding (PCE). For a given data set X, PCE recovers a clean data set D from X and simultaneously learns a global reconstruction relation C of D. By preserving C into an m-dimensional space, the proposed method obtains a projection matrix that can capture the latent manifold structure of D, where m is automatically determined by the rank of C with theoretical guarantees. PCE has three advantages: 1) it can automatically determine the feature dimension even though data are sampled from a union of multiple linear subspaces in the presence of Gaussian noise; 2) although the objective function of PCE only considers Gaussian noise, experimental results show that it is robust to non-Gaussian noise (e.g., random pixel corruption) and real disguises; and 3) our method has a closed-form solution and can be computed very fast. Extensive experimental results show the superiority of PCE on a range of databases with respect to classification accuracy, robustness, and efficiency.


Knowledge Based Systems | 2015

Fast low rank representation based spatial pyramid matching for image classification

Xi Peng; Rui Yan; Bo Zhao; Huajin Tang; Zhang Yi

Spatial Pyramid Matching (SPM) and its variants have achieved considerable success in image classification; the main difference among them is the encoding scheme. For example, ScSPM incorporates Sparse Coding (SC) instead of Vector Quantization (VQ) into the framework of SPM. Although these methods achieve higher recognition rates than traditional SPM, they take more time to encode the local descriptors extracted from an image. In this paper, we propose using Low Rank Representation (LRR) to encode the descriptors under the framework of SPM. Unlike SC, LRR considers the group effect among data points instead of sparsity. Benefiting from this property, the proposed method (LrrSPM) offers better performance. To further improve generalizability and robustness, we reformulate the rank-minimization problem as a truncated projection problem. Extensive experimental studies show that LrrSPM is more efficient than its counterparts (e.g., ScSPM) while achieving competitive recognition rates on nine image data sets.
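As background for the SPM framework the paper builds on, the following sketch max-pools local descriptor codes over a spatial pyramid and concatenates the cells. The encoding of the codes themselves (by LRR in the paper) is omitted, and the grid levels are an illustrative assumption.

```python
import numpy as np

def spm_pool(codes, positions, levels=(1, 2, 4)):
    """Max-pool descriptor codes over a spatial pyramid.

    codes     : (n, k) array, one code per local descriptor.
    positions : (n, 2) array of descriptor (x, y) locations in [0, 1).
    """
    n, k = codes.shape
    feats = []
    for L in levels:
        # Assign each descriptor to a cell of the L x L grid.
        cell = np.minimum((positions * L).astype(int), L - 1)
        for i in range(L):
            for j in range(L):
                mask = (cell[:, 0] == i) & (cell[:, 1] == j)
                # Max-pool the codes falling into this pyramid cell.
                feats.append(codes[mask].max(0) if mask.any() else np.zeros(k))
    return np.concatenate(feats)
```

With levels (1, 2, 4) the image feature concatenates 1 + 4 + 16 = 21 pooled cells, preserving coarse spatial layout on top of whatever encoding produced the codes.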


IEEE Transactions on Neural Networks | 2017

Bag of Events: An Efficient Probability-Based Feature Extraction Method for AER Image Sensors

Xi Peng; Bo Zhao; Rui Yan; Huajin Tang; Zhang Yi

Address event representation (AER) image sensors represent visual information as a sequence of events denoting the luminance changes of the scene. In this paper, we introduce a feature extraction method for AER image sensors based on probability theory, namely bag of events (BOE). The proposed approach represents each object as the joint probability distribution of the concurrent events, where each event corresponds to a unique activated pixel of the AER sensor. The advantages of BOE are: 1) it is a statistical learning method with good mathematical interpretability; 2) it significantly reduces the effort of tuning parameters for different data sets, because it has only one hyperparameter and is robust to its value; 3) it is an online learning algorithm that does not require the training data to be collected in advance; 4) it achieves competitive results in real time for feature extraction (>275 frames/s and >120,000 events/s); and 5) its implementation involves only basic operations, e.g., addition and multiplication, which guarantees the hardware friendliness of our method. Experimental results on three popular AER databases (i.e., MNIST-dynamic vision sensor, Poker Card, and Posture) show that our method is remarkably faster than two recently proposed AER categorization systems while preserving good classification accuracy.
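The joint-probability idea behind BOE can be illustrated by accumulating events into a normalised histogram over pixel addresses; the exact weighting and normalisation used in the paper may differ from this sketch.

```python
import numpy as np

def bag_of_events(events, width, height):
    """Feature = empirical joint distribution of event addresses.

    `events` is a sequence of (x, y) pixel addresses emitted by the
    AER sensor for one object.
    """
    hist = np.zeros((height, width))
    for x, y in events:
        hist[y, x] += 1.0                 # count events per pixel address
    return (hist / hist.sum()).ravel()    # normalise to a distribution
```

Because the histogram is updated one event at a time with additions only, this kind of feature naturally supports the online, hardware-friendly operation the abstract describes.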


Electronics Letters | 2013

Inductive sparse subspace clustering

Xi Peng; Lei Zhang; Zhang Yi

Sparse subspace clustering (SSC) has achieved state-of-the-art clustering quality by performing spectral clustering over an ℓ1-norm-based similarity graph. However, SSC is a transductive method: it cannot handle out-of-sample data that were not used to construct the graph. For each new datum, SSC requires solving n optimisation problems in O(n) variables, where n is the number of data points, so it is inefficient for fast online clustering and scalable grouping. An inductive spectral clustering algorithm, inductive SSC (iSSC), is proposed, which makes SSC feasible for clustering out-of-sample data. iSSC adopts the assumption that high-dimensional data actually lie on a low-dimensional manifold, so that out-of-sample data can be grouped in the embedding space learned from in-sample data. Experimental results show that iSSC is promising in clustering out-of-sample data.


Information Processing Letters | 2013

Free-gram phrase identification for modeling Chinese text

Xi Peng; Zhang Yi; Xiao-Yong Wei; Dezhong Peng; Yong-Sheng Sang

The vector space model using a bag of phrases plays an important role in modeling Chinese text. However, the conventional way of using fixed-gram scanning to identify free-length phrases is costly. To address this problem, we propose a novel approach for key phrase identification that is capable of identifying phrases of all lengths, thus improving the coding efficiency and discrimination of the data representation. In the proposed method, we first convert each document into a context graph, a directed graph that encapsulates the statistical and positional information of all 2-word strings in the document. We treat every transmission path in the graph as a hypothesis for a phrase and select the corresponding phrase as a candidate if the hypothesis is valid in the original document. Finally, we selectively divide some of the complex candidate phrases into sub-phrases to improve the coding efficiency, resulting in a set of phrases for codebook construction. Experiments on both balanced and unbalanced datasets show that the codebooks generated by our approach are more efficient than those of conventional methods (one syntactical method and three statistical methods are investigated). Furthermore, the data representation created by our approach demonstrates higher discrimination than those of conventional methods in classification tasks.
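The context-graph pipeline can be sketched as follows. The maximum phrase length and the plain substring validity check are illustrative simplifications of the paper's statistical and positional criteria.

```python
from collections import defaultdict

def candidate_phrases(tokens, max_len=4):
    """Build a directed graph over adjacent word pairs, treat every
    path as a phrase hypothesis, and keep those hypotheses that
    actually occur in the document."""
    graph = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):
        graph[a].add(b)                   # edge a -> b per 2-word string
    text = " ".join(tokens)
    found, stack = set(), [[t] for t in graph]
    while stack:                          # DFS over paths up to max_len
        path = stack.pop()
        phrase = " ".join(path)
        if len(path) >= 2 and phrase in text:
            found.add(phrase)             # hypothesis valid in document
        if len(path) < max_len:
            stack.extend(path + [n] for n in graph[path[-1]])
    return found
```

Because the graph records only adjacent pairs, paths of any length become phrase hypotheses without the cost of scanning the document separately for each gram size.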


Neurocomputing | 2016

Semi-supervised subspace learning with L2graph

Xi Peng; Miaolong Yuan; Zhiding Yu; Wei Yun Yau; Lei Zhang

Subspace learning aims to learn a projection matrix from a given training set so that a transformation of raw data into a low-dimensional representation can be obtained. In practice, the labels of some training samples are available and can be used to improve the discrimination of the low-dimensional representation. In this paper, we propose a semi-supervised learning method inspired by the biological observation that similar inputs have similar codes (SISC), i.e., the same collection of cortical columns of the mammalian visual cortex is always activated by similar stimuli. More specifically, we propose a mathematical formulation of SISC which minimizes the distance among data points with the same label while maximizing the separability between different subjects in the projection space. The proposed method, semi-supervised L2graph (SeL2graph), has two advantages: (1) unlike classical dimension reduction methods such as principal component analysis, SeL2graph can automatically determine the dimension of the feature space, which remarkably reduces the effort of finding an optimal feature dimension for good performance; and (2) it fully exploits the prior knowledge carried by the labeled samples, so the obtained features have higher discrimination and compactness. Extensive experiments show that the proposed method outperforms 7 subspace learning algorithms on 15 data sets with respect to classification accuracy, computational efficiency, and robustness to noise and disguises. Highlights: a semi-supervised subspace learning method is proposed; it is inspired by the observation that similar inputs have similar codes; it can automatically determine the feature dimension.


Image and Vision Computing | 2017

Regularization techniques for high-dimensional data analysis

Jiwen Lu; Xi Peng; Weihong Deng; Ajmal S. Mian

Department of Automation, Tsinghua University, State Key Lab of Intelligent Technologies and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Beijing 100084, China; Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), 138632, Singapore; School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China; School of Computer Science and Software Engineering, The University of Western Australia, Crawley, WA 6009, Australia

Collaboration


Dive into Xi Peng's collaboration.

Top Co-Authors

Shijie Xiao, Nanyang Technological University

Bo Zhao, Nanyang Technological University

Jiashi Feng, National University of Singapore