Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nannan Ji is active.

Publication


Featured research published by Nannan Ji.


Pattern Recognition | 2014

A sparse-response deep belief network based on rate distortion theory

Nannan Ji; Jiang-She Zhang; Chun-Xia Zhang

Deep belief networks (DBNs) are currently the dominant technique for modeling the architectural depth of the brain, and can be trained efficiently in a greedy layer-wise unsupervised manner. However, DBNs without a narrow hidden bottleneck typically produce redundant, continuous-valued codes and unstructured weight patterns. Taking inspiration from rate distortion (RD) theory, which encodes the original data using as few bits as possible, we introduce in this paper a variant of the DBN, referred to as the sparse-response DBN (SR-DBN). In this approach, the Kullback–Leibler divergence between the data distribution and the equilibrium distribution defined by the building block of the DBN serves as the distortion function, and a sparse-response regularization induced by the L1-norm of the codes is used to achieve a small code rate. Feature-extraction experiments on image datasets of different scales show that SR-DBN learns codes with a small rate, extracts features at multiple levels of abstraction mimicking computations in the cortical hierarchy, and obtains more discriminative representations than PCA and several basic DBN algorithms.
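
The core mechanism is the L1 penalty on the hidden responses during contrastive-divergence training. Below is a minimal numpy sketch of one CD-1 update with such a penalty; the variable names and the penalty weight lam are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_sparse_step(v0, W, b, c, lr=0.01, lam=0.001):
    """One CD-1 update with an L1 sparse-response penalty on the hidden codes.

    v0 : (n, d) batch of visible vectors in [0, 1]
    W  : (d, k) weights, b : (k,) hidden bias, c : (d,) visible bias
    lam: strength of the L1 penalty on the hidden activations (assumed value)
    """
    # Positive phase: hidden responses to the data.
    h0 = sigmoid(v0 @ W + b)
    # Negative phase: one step of Gibbs sampling.
    h_sample = (rng.random(h0.shape) < h0).astype(float)
    v1 = sigmoid(h_sample @ W.T + c)
    h1 = sigmoid(v1 @ W + b)
    # Contrastive-divergence gradient of the log-likelihood.
    dW = (v0.T @ h0 - v1.T @ h1) / len(v0)
    db = (h0 - h1).mean(axis=0)
    # Subgradient of lam * ||h0||_1: h0 >= 0, so sign(h0) = 1 where active;
    # the chain rule through the sigmoid gives the h0 * (1 - h0) factor.
    sparse_grad = v0.T @ (h0 * (1 - h0)) / len(v0)
    W += lr * (dW - lam * sparse_grad)
    b += lr * (db - lam * (h0 * (1 - h0)).mean(axis=0))
    c += lr * (v0 - v1).mean(axis=0)
    return W, b, c
```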


Pattern Recognition Letters | 2014

Learning ensemble classifiers via restricted Boltzmann machines

Chun-Xia Zhang; Jiang-She Zhang; Nannan Ji; Gao Guo

Recently, restricted Boltzmann machines (RBMs) have attracted considerable interest in the machine learning field due to their strong ability to extract features. Given some training data, an RBM or a stack of several RBMs can be used to extract informative features. Meanwhile, ensemble learning is an active research area in machine learning owing to its potential to greatly increase the prediction accuracy of a single classifier. However, RBMs have so far not been studied in combination with ensemble learning. In this study, we present several methods for integrating RBMs with bagging to generate diverse and accurate individual classifiers. Taking a classification tree as the base learning algorithm, a thorough experimental study conducted on 31 real-world data sets yields some promising conclusions. When using the features extracted by RBMs in ensemble learning, the best strategy is to perform model combination separately on the original feature set and on the one extracted by a single RBM. Prediction performance becomes worse when the features detected by a stack of two RBMs are also considered. As for the features detected by RBMs, good classification can be obtained only when they are used together with the original features.
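
The best-performing scheme above, combining one bagged ensemble trained on the raw features with another trained on single-RBM features, can be sketched with scikit-learn. This is a minimal illustration on the digits dataset, not the paper's experimental setup; all hyperparameters here are assumed.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # BernoulliRBM expects inputs in [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised feature extraction with a single RBM.
rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
Z_tr = rbm.fit_transform(X_tr)
Z_te = rbm.transform(X_te)

# One bagged tree ensemble per feature set, combined by averaging probabilities.
bag_raw = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(X_tr, y_tr)
bag_rbm = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                            random_state=0).fit(Z_tr, y_tr)
proba = (bag_raw.predict_proba(X_te) + bag_rbm.predict_proba(Z_te)) / 2.0
print(f"combined accuracy: {(proba.argmax(axis=1) == y_te).mean():.3f}")
```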


Neurocomputing | 2015

Enhancing performance of the backpropagation algorithm via sparse response regularization

Jiang-She Zhang; Nannan Ji; Junmin Liu; Jiyuan Pan; Deyu Meng

The backpropagation (BP) algorithm is the most commonly used training strategy for a feed-forward artificial neural network (FFANN). However, the BP algorithm often suffers from a low convergence rate, high energy consumption and poor generalization capability. In this paper, motivated by the sparsity of human neurons' responses, we introduce a new sparse-response BP (SRBP) algorithm that improves the capacity of an FFANN by enforcing sparsity on its hidden units through a supplemental L1 term imposed on them. The FFANN model learned by our algorithm thus shares two key mechanisms of the human nervous system: sparse representation and architectural depth. Experiments on several datasets demonstrate that SRBP performs well in terms of convergence rate, energy saving and generalization capability.
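
SRBP amounts to standard backpropagation with an extra L1 term on the hidden activations, so the penalty gradient flows back through the hidden layer alongside the task loss. A minimal PyTorch sketch of one training step follows; the network shape, the random stand-in batch and the penalty weight lam are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in=784, d_hid=256, d_out=10):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(d_in, d_hid), nn.Sigmoid())
        self.out = nn.Linear(d_hid, d_out)

    def forward(self, x):
        h = self.hidden(x)   # hidden responses to be sparsified
        return self.out(h), h

model = MLP()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
lam = 1e-4  # strength of the sparse-response penalty (assumed value)

x = torch.rand(32, 784)             # stand-in batch
y = torch.randint(0, 10, (32,))
logits, h = model(x)
# Standard BP loss plus an L1 term on the hidden activations.
loss = loss_fn(logits, y) + lam * h.abs().mean()
opt.zero_grad()
loss.backward()
opt.step()
```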


Mathematical Problems in Engineering | 2014

A Novel Selective Ensemble Algorithm for Imbalanced Data Classification Based on Exploratory Undersampling

Qing-Yan Yin; Jiang-She Zhang; Chun-Xia Zhang; Nannan Ji

Learning from imbalanced data is one of the emerging challenges in machine learning. Recently, ensemble learning has arisen as an effective solution to class imbalance problems. Combining bagging or boosting with resampling-based data preprocessing, most notably the simple and accurate exploratory undersampling, has become the most popular approach to imbalanced data classification. In this paper, we propose a novel selective ensemble construction method based on exploratory undersampling, RotEasy, which improves storage requirements and computational efficiency through ensemble pruning. Our methodology aims to enhance the diversity between individual classifiers through feature extraction and diversity-regularized ensemble pruning. We make a comprehensive comparison between our method and some state-of-the-art imbalanced learning methods. Experimental results on 20 real-world imbalanced data sets show that RotEasy achieves a significant increase in performance, as confirmed by a nonparametric statistical test under various evaluation criteria.
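
Exploratory undersampling, the building block RotEasy starts from, trains each ensemble member on all minority samples plus an equal-sized random draw from the majority class. Here is a minimal sketch of that core in the style of EasyEnsemble; the paper's rotation-based feature extraction and diversity-regularized pruning are omitted, and the class coding (minority = 1) is an assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def exploratory_undersampling_ensemble(X, y, n_subsets=10, seed=0):
    """Train one classifier per balanced subset (minority class coded as 1)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    models = []
    for _ in range(n_subsets):
        # Each subset keeps all minority samples plus an equal-sized
        # random draw (without replacement) from the majority class.
        sub = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sub])
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def predict(models, X):
    # Average the positive-class probabilities over the ensemble.
    p = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (p > 0.5).astype(int)
```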


Neurocomputing | 2016

Randomizing outputs to increase variable selection accuracy

Chun-Xia Zhang; Nannan Ji; Guan-Wei Wang

Variable selection plays a key role in explanatory modeling, and its aim is to identify the variables that are truly important to the outcome. Recently, ensemble learning techniques have shown great potential for improving the performance of some traditional methods such as the lasso, genetic algorithms and stepwise search. Following the main principle for building a variable selection ensemble, we propose in this paper a novel approach that randomizes the outputs (i.e., adds some random noise to the response) to maximize variable selection accuracy. In order to generate multiple but slightly different importance measures for each variable, some Gaussian noise is artificially added to the response. The new training set (i.e., the original design matrix together with the new response vector) is then fed into a genetic algorithm to perform variable selection. By repeating this process for a number of trials and fusing the results by simple averaging, a more reliable importance measure is obtained for each candidate variable. The variables are then ranked and determined to be important or not by a thresholding rule. The performance of the proposed method is studied on simulated and real-world data in the framework of linear and logistic regression models. The results demonstrate that it compares favorably with several other existing methods.

Highlights:
- Ensemble learning methods are used to perform variable ranking and selection.
- The output smearing technique is employed to build a variable selection ensemble.
- A parameter is introduced to control the amount of added random noise.
- The measure averaged across multiple trials is used to rank and select variables.
- Experiments on simulated and real data sets show the efficiency of the new method.
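
The perturb-and-average loop is easy to illustrate. In the sketch below, the absolute lasso coefficient stands in for the per-trial importance measure, whereas the paper feeds each perturbed training set to a genetic algorithm; the noise scale, trial count and mean-importance threshold are likewise assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def noisy_output_importance(X, y, n_trials=50, noise_scale=0.5, seed=0):
    """Average variable importance over responses perturbed by Gaussian noise.

    noise_scale plays the role of the parameter controlling the amount of
    added noise; |lasso coefficient| is a stand-in importance measure.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_scale * y.std()
    imp = np.zeros(X.shape[1])
    for _ in range(n_trials):
        y_noisy = y + rng.normal(0.0, sigma, size=len(y))  # randomize outputs
        imp += np.abs(Lasso(alpha=0.1).fit(X, y_noisy).coef_)
    imp /= n_trials
    # A simple thresholding rule: keep variables above the mean importance.
    return imp, imp > imp.mean()
```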


Pattern Recognition Letters | 2014

Discriminative restricted Boltzmann machine for invariant pattern recognition with linear transformations

Nannan Ji; Jiang-She Zhang; Chun-Xia Zhang; Lei Wang

Making a machine automatically achieve invariant pattern recognition, as the human brain does, remains very challenging in the machine learning community. In this paper, we present a single-hidden-layer network, TIClassRBM, for invariant pattern recognition, built by incorporating linear transformations into a discriminative restricted Boltzmann machine. In our model, invariant feature extraction and pattern classification are implemented simultaneously. The mapping from input features to class label is represented by two groups of weights: transformed weights that connect hidden units to the data, and pooling weights that connect the pooling units yielded by probabilistic max-pooling to the class label. Both groups of weights play an important role in invariant pattern recognition. Moreover, TIClassRBM can handle general transformations of images, such as translation, rotation and scaling. Experimental studies on variations of the MNIST and NORB datasets demonstrate that the proposed model yields the best performance among several comparison models.
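
The invariance comes from evaluating the filters on several transformed copies of the input and pooling over them. A minimal numpy/scipy sketch of this idea follows, using rotations only and a hard max in place of probabilistic max-pooling; the classification weights and training procedure are omitted, and all shapes are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pooled_responses(img, filters, angles=(-15, 0, 15)):
    """Hidden responses max-pooled over a set of rotations.

    img: (28, 28) image, filters: (k, 28, 28) RBM-style filters.
    Pooling over the transformed copies makes the response invariant
    to the rotations considered.
    """
    acts = []
    for a in angles:
        t = rotate(img, angle=a, reshape=False)  # transformed input copy
        acts.append(sigmoid(filters.reshape(len(filters), -1) @ t.ravel()))
    # Probabilistic max-pooling is approximated here by a hard max.
    return np.max(acts, axis=0)
```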


Knowledge Based Systems | 2017

A new regularized restricted Boltzmann machine based on class preserving

Junying Hu; Jiang-She Zhang; Nannan Ji; Chun-Xia Zhang

It is known that a restricted Boltzmann machine (RBM) can be used as a feature extractor to automatically extract data features in a completely unsupervised manner. In this paper, we develop a new regularized RBM that incorporates class information, referred to as the class preserving RBM (CPr-RBM). Specifically, we impose two constraints on the RBM so that the class information is clearly reflected in the extracted features. One constraint decreases the distance between features of the same class and the other increases the distance between features of different classes. The two constraints introduce class information into the RBM and make the extracted features carry more category information, which contributes to better classification results. Experiments conducted on the MNIST and 20-Newsgroups datasets show that CPr-RBM learns more discriminative representations and outperforms other related state-of-the-art models on classification problems.
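
The two constraints can be read as a within-class pull and a between-class push on the hidden features. Below is a minimal numpy sketch of such a penalty; combining it with the RBM objective by simple addition, and weighting the two terms equally, are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def class_preserving_penalty(H, y):
    """Within-class minus between-class mean squared distance on features H.

    H: (n, k) hidden features, y: (n,) labels. Minimizing this term pulls
    same-class features together and pushes different-class features apart.
    """
    # Pairwise squared Euclidean distances between all feature vectors.
    d2 = ((H[:, None, :] - H[None, :, :]) ** 2).sum(axis=2)
    same = (y[:, None] == y[None, :]) & ~np.eye(len(y), dtype=bool)
    diff = y[:, None] != y[None, :]
    return d2[same].mean() - d2[diff].mean()
```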


Neurocomputing | 2017

A modified version of Helmholtz machine by using a Restricted Boltzmann Machine to model the generative probability of the top layer

Junying Hu; Jiang-She Zhang; Nannan Ji; Chun-Xia Zhang

The Helmholtz machine is an unsupervised deep neural network with separate bottom-up recognition weights and top-down generative weights, which attempts to build probability density models of sensory inputs. The recognition weights determine the recognition probability of each unit from the bottom layer to the top layer, and the generative weights determine the generative probability of each unit from the top layer to the bottom layer. The model parameters can be obtained by minimizing the sum of the Kullback–Leibler divergences between the generative and recognition distributions of all units. In this paper, we propose a modified Helmholtz machine that adds an additional hidden layer on top of the Helmholtz machine, used to model the generative probability of the top layer. This added layer provides a 'complementary prior' to the original top layer and can eliminate the 'explaining away' effect, allowing the Helmholtz machine to fit sensory inputs much better. Experimental results on various data sets show that the modified Helmholtz machine learns better generative models.
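
In the modified model, the prior the top RBM assigns to a top-layer code is its negative free energy, up to the partition function. A minimal numpy sketch of that quantity is given below; the shapes and the way it plugs into wake-sleep training are assumptions, since the full training loop is beyond a short sketch.

```python
import numpy as np

def rbm_unnormalized_log_prob(h_top, W, b, c):
    """Unnormalized log-probability the top RBM assigns to a top-layer code.

    In the modified Helmholtz machine, this RBM replaces the factorial
    generative bias of the original top layer. Assumed shapes: h_top (k,),
    W (k, m), b (m,), c (k,).
    """
    # log p(v) = -F(v) - log Z with free energy
    # F(v) = -c.v - sum_j log(1 + exp(b_j + (v W)_j)); log Z is omitted.
    return h_top @ c + np.sum(np.logaddexp(0.0, b + h_top @ W))
```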


Neurocomputing | 2018

An effective hierarchical extreme learning machine based multimodal fusion framework

Fang Du; Jiang-She Zhang; Nannan Ji; Guang Shi; Chun-Xia Zhang

Deep learning has been successfully applied to multimodal representation learning. Like single-modal deep learning methods, such multimodal methods consist of greedy layer-wise feedforward propagation and backpropagation (BP) fine-tuning driven by diverse targets, which makes these models time-consuming to train. In contrast, the extreme learning machine (ELM) is a fast learning algorithm for single-hidden-layer feedforward neural networks, and previous work has shown the effectiveness of ELM-based hierarchical frameworks for multilayer perceptrons. In this paper, we introduce an ELM-based hierarchical framework for multimodal data. The proposed architecture consists of three main components: (1) self-taught feature extraction for each specific modality by an ELM-based sparse autoencoder, (2) fused representation learning based on the features learned in the previous step, and (3) supervised feature classification based on the fused representation. This is a purely feedforward framework: once a layer is established, its weights are fixed without fine-tuning. It therefore has much better learning efficiency than gradient-based multimodal deep learning methods. In experiments on the MNIST, XRMB and NUS datasets, the proposed algorithm converges faster and achieves better classification performance than other existing multimodal deep learning models.
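
The speed comes from the ELM training rule: hidden weights are random and fixed, and only the output weights are solved in closed form, so no fine-tuning pass exists. A minimal numpy sketch of one such layer follows; the hidden size, the tanh activation and the concatenation-based fusion noted in the comment are assumptions, not the paper's exact architecture.

```python
import numpy as np

def elm_train(X, Y, n_hidden=512, seed=0):
    """One ELM layer: random projection plus closed-form output weights.

    X: (n, d) inputs, Y: (n, c) one-hot targets. The hidden weights are
    random and never updated; only beta is solved, so there is no BP pass.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)        # fixed random feature map
    beta = np.linalg.pinv(H) @ Y  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# A multimodal version would extract features per modality with such layers,
# concatenate them into a fused representation, and train a final supervised
# ELM on the result (assumed fusion scheme for this sketch).
```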


Neural Processing Letters | 2018

Discriminative Representation Learning with Supervised Auto-encoder

Fang Du; Jiang-She Zhang; Nannan Ji; Junying Hu; Chun-Xia Zhang

Recent studies have shown auto-encoders to be powerful unsupervised learning methods that can extract useful features from input data or initialize deep artificial neural networks. In such settings, the extracted features or the initialized networks only capture the data structure and contain no class information, which is a disadvantage in classification tasks. In this paper, we aim to leverage the class information of the input to learn a reconstructive and discriminative auto-encoder. More specifically, we introduce a supervised auto-encoder that combines the reconstruction error and the classification error into a unified objective function while taking noisy concatenations of data and label as input. The noisy concatenated input is constructed so that one third contains only the original data with zeroed labels, one third only the label with zeroed data, and the final third both the data and the label. We show that the representations learned by the proposed supervised auto-encoder are more discriminative and more suitable for classification tasks. Experimental results demonstrate that our model outperforms many existing learning algorithms.
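
The three-part input construction is mechanical enough to sketch directly. Below is a minimal numpy version that builds the concatenated training matrix described above; the noise injection and the joint reconstruction-plus-classification objective are left out, and one-hot label encoding is an assumption.

```python
import numpy as np

def build_concat_input(X, Y):
    """Construct the three-part (data, label) input described above.

    X: (n, d) data, Y: (n, c) one-hot labels. Each row concatenates data
    and label, with one side zeroed in the first two thirds.
    """
    zero_labels = np.zeros_like(Y)
    zero_data = np.zeros_like(X)
    data_only = np.hstack([X, zero_labels])  # data with zeroed labels
    label_only = np.hstack([zero_data, Y])   # labels with zeroed data
    both = np.hstack([X, Y])                 # data and labels together
    return np.vstack([data_only, label_only, both])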

Collaboration


Dive into Nannan Ji's collaborations.

Top Co-Authors

Chun-Xia Zhang, Xi'an Jiaotong University
Jiang-She Zhang, Xi'an Jiaotong University
Junying Hu, Xi'an Jiaotong University
Fang Du, Xi'an Jiaotong University
Guan-Wei Wang, Xi'an Jiaotong University
Qing-Yan Yin, Xi'an Jiaotong University
Deyu Meng, Xi'an Jiaotong University
Gao Guo, Xi'an Jiaotong University
Guang Shi, Xi'an Jiaotong University
Jiyuan Pan, Xi'an Jiaotong University