Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Jinxi Zhao is active.

Publication


Featured research published by Jinxi Zhao.


Neural Computing and Applications | 2013

An online incremental learning support vector machine for large-scale data

Jun Zheng; Furao Shen; Hongjun Fan; Jinxi Zhao

Support Vector Machines (SVMs) have shown outstanding generalization ability in many fields. However, the standard SVM and most of its variants are in essence batch learners, which makes them unable to handle incremental or online learning well. Such SVMs also cannot handle large-scale data effectively, because they are costly in terms of memory and computation. In some situations, a large number of Support Vectors (SVs) are produced, which generally means a long testing time. In this paper, we propose an online incremental learning SVM for large data sets. The proposed method consists of two main components: learning prototypes (LPs) and learning Support Vectors (LSVs). The LP component learns prototypes and continuously adjusts them to the concept of the data; the LSV component obtains a new SVM by combining the learned prototypes with the trained SVs. The proposed method has been compared with other popular SVM algorithms, and experimental results demonstrate that it is effective for both incremental learning and large-scale problems.
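The two-stage design can be pictured compactly. Below is a minimal sketch, assuming a simple distance-threshold rule for the LP stage and scikit-learn's SVC for the LSV retraining step; the class name, learning rate, and threshold are illustrative, not the paper's exact formulation, and retraining assumes both classes have already been observed.

```python
# Hypothetical sketch of the LP + LSV idea: compress the stream into
# prototypes, then retrain an SVM on prototypes plus old support vectors.
import numpy as np
from sklearn.svm import SVC

class OnlineIncrementalSVM:
    def __init__(self, threshold=1.0):
        self.threshold = threshold          # assumed distance threshold
        self.protos, self.labels = [], []   # learning prototypes (LP)
        self.svm = None

    def learn_prototype(self, x, y):
        """LP step: move the nearest same-class prototype toward x,
        or adopt x as a new prototype if none is close enough."""
        best, best_d = None, np.inf
        for i, (p, l) in enumerate(zip(self.protos, self.labels)):
            d = np.linalg.norm(p - x)
            if l == y and d < best_d:
                best, best_d = i, d
        if best is not None and best_d < self.threshold:
            self.protos[best] += 0.1 * (x - self.protos[best])
        else:
            self.protos.append(np.asarray(x, dtype=float))
            self.labels.append(y)

    def retrain(self):
        """LSV step: fit a new SVM on prototypes plus the support
        vectors retained from the previous model."""
        X = np.array(self.protos)
        y = np.array(self.labels)
        if self.svm is not None:
            X = np.vstack([X, self.svm.support_vectors_])
            y = np.concatenate([y, self.sv_labels])
        self.svm = SVC(kernel="rbf").fit(X, y)
        self.sv_labels = y[self.svm.support_]
```

Because testing time scales with the number of SVs, fitting on the compressed prototype set rather than the raw stream is what keeps both memory and prediction cost bounded.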


Neural Computing and Applications | 2012

An incremental learning vector quantization algorithm for pattern classification

Ye Xu; Furao Shen; Jinxi Zhao

Prototype classifiers have been studied for many years, yet few can realize incremental learning. Moreover, most prototype classifiers require users to predetermine the number of prototypes, and an improper choice can undermine classification performance. To address these issues, in this paper we propose an online supervised algorithm named Incremental Learning Vector Quantization (ILVQ) for classification tasks. The proposed method makes three contributions. (1) By designing an insertion policy, ILVQ incrementally learns new prototypes, covering both between-class and within-class incremental learning. (2) By employing an adaptive threshold scheme, ILVQ dynamically learns the number of prototypes needed for each class according to the distribution of the training data; unlike most current prototype classifiers, it therefore needs no prior knowledge of the number of prototypes or their initial values. (3) A technique for removing useless prototypes is used to eliminate noise introduced into the input data. Experimental results show that ILVQ can accommodate an incremental data environment while providing good recognition performance and storage efficiency.
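A rough sketch of the insertion policy with a per-prototype adaptive threshold might look as follows; the update constants and the threshold rule are assumptions for illustration, not the authors' formulas.

```python
# Minimal ILVQ-style sketch: insert a prototype when the winner has the
# wrong class (between-class) or is too far away (within-class).
import numpy as np

class ILVQ:
    def __init__(self):
        self.protos = []   # list of [vector, label, adaptive threshold]

    def _nearest(self, x):
        ds = [np.linalg.norm(p - x) for p, _, _ in self.protos]
        i = int(np.argmin(ds))
        return i, ds[i]

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        if not self.protos:
            self.protos.append([x, y, np.inf])
            return
        i, d = self._nearest(x)
        p, label, thr = self.protos[i]
        if label != y or d > thr:
            # between-class or within-class insertion of a new prototype
            self.protos.append([x, y, d])
        else:
            # classic LVQ attraction toward a correctly matched sample,
            # and move the adaptive threshold toward the local scale
            self.protos[i][0] = p + 0.05 * (x - p)
            self.protos[i][2] = 0.5 * (thr if np.isfinite(thr) else d) + 0.5 * d

    def predict(self, x):
        i, _ = self._nearest(np.asarray(x, dtype=float))
        return self.protos[i][1]
```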


Conference on Information and Knowledge Management | 2009

To obtain orthogonal feature extraction using training data selection

Ye Xu; Shen Furao; Jinxi Zhao; Osamu Hasegawa

Feature extraction is an effective tool in data mining and machine learning, and many feature extraction methods have been investigated recently. However, few methods can produce orthogonal components. Non-orthogonal components distort the metric structure of the original data space and contain redundant information. In this paper, we propose a feature extraction method named incremental orthogonal basis analysis (IOBA) to address these challenges. First, IOBA learns components for the original data that are orthogonal not only theoretically but also numerically. Second, an innovative training data selection scheme helps IOBA pick numerically orthogonal components from the training patterns. Third, thanks to a self-adaptive threshold technique, no prior knowledge about the number of components is needed to use IOBA. Moreover, by avoiding eigenvalue and eigenvector computations, IOBA not only saves substantial computation but also sidesteps ill-conditioning. Experimental results show the efficiency of the proposed IOBA.
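The core of such a scheme is a Gram-Schmidt-style selection loop. Here is a hypothetical sketch in which a training sample is selected as a new component only if its residual against the current basis is large; the relative threshold eps is an assumed stand-in for the paper's self-adaptive rule.

```python
import numpy as np

def ioba_sketch(stream, eps=1e-2):
    """Grow a numerically orthogonal basis from a data stream: a sample
    becomes a new component only when its residual, after projecting
    onto the current basis, is large enough."""
    basis = []                               # orthonormal components
    for x in stream:
        r = np.asarray(x, dtype=float).copy()
        for b in basis:
            r -= np.dot(r, b) * b            # Gram-Schmidt deflation
        norm = np.linalg.norm(r)
        if norm > eps * np.linalg.norm(x):   # assumed selection rule
            basis.append(r / norm)
    return np.array(basis)
```

Selecting components from actual training patterns, rather than accumulating arbitrary directions, is what keeps the basis numerically (not just theoretically) orthogonal.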


Knowledge Discovery and Data Mining | 2009

An Online Incremental Learning Vector Quantization

Ye Xu; Shen Furao; Osamu Hasegawa; Jinxi Zhao

In this paper, we propose an online incremental learning vector quantization (ILVQ) method for supervised classification tasks. As a prototype-based classifier, ILVQ needs no prior knowledge of the number of prototypes in the network or their initial values, unlike most current prototype-based algorithms. It adopts a threshold-based insertion scheme that determines the number of prototypes needed for each class dynamically according to the distribution of the training data. This insertion policy also ensures that the goal of incremental learning is met, covering both between-class and within-class incremental learning. A technique for removing useless prototypes is used to eliminate noise introduced into the input data. Unlike other LVQ-based methods, the learning result is not affected by the order in which input patterns arrive at ILVQ. Experimental results show that ILVQ can accommodate non-stationary data environments while providing good recognition performance and storage efficiency.
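The prototype-removal step can be illustrated with a small helper; the win-count criterion below is an assumed reading of "useless prototypes", not the paper's exact rule.

```python
def prune_prototypes(protos, wins, min_wins=2):
    """Denoising sketch: prototypes that have attracted too few samples
    are treated as noise-induced and removed. wins[i] counts how often
    prototype i was the nearest match during learning."""
    keep = [i for i in range(len(protos)) if wins[i] >= min_wins]
    return [protos[i] for i in keep], [wins[i] for i in keep]
```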


Neural Networks | 2017

An online incremental orthogonal component analysis method for dimensionality reduction

Tao Zhu; Ye Xu; Furao Shen; Jinxi Zhao

In this paper, we introduce a fast linear dimensionality reduction method named incremental orthogonal component analysis (IOCA). IOCA is designed to automatically extract the desired orthogonal components (OCs) in an online environment: the OCs and the low-dimensional representations of the original data are obtained in a single pass through the dataset. Without solving a matrix eigenproblem or inverting a matrix, IOCA learns incrementally from a continuous data stream at low computational cost. An adaptive threshold policy enables IOCA to determine the dimension of the feature subspace automatically while guaranteeing the quality of the learned OCs. Analysis and experiments demonstrate that IOCA is simple yet efficient and effective.
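A one-pass reading of this pipeline might look like the sketch below, where each sample is encoded against the current components and extends the basis when its residual is large; tau and the selection rule are assumptions.

```python
import numpy as np

def ioca_sketch(stream, tau=0.1):
    """Single pass: maintain orthonormal components and emit each
    sample's low-dimensional coefficients immediately."""
    comps = []                # learned orthogonal components (OCs)
    codes = []                # low-dimensional representations
    for x in stream:
        x = np.asarray(x, dtype=float)
        coeff = np.array([np.dot(x, c) for c in comps])
        residual = x - sum(a * c for a, c in zip(coeff, comps))
        r = np.linalg.norm(residual)
        if r > tau * np.linalg.norm(x):      # assumed threshold policy
            comps.append(residual / r)
            coeff = np.append(coeff, r)      # code in the grown basis
        codes.append(coeff)
    return comps, codes
```

Note that samples seen early receive shorter codes; if a fixed-length representation is needed, they can be zero-padded to the final number of components.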


IEEE Transactions on Neural Networks | 2016

Perception Evolution Network Based on Cognition Deepening Model—Adapting to the Emergence of New Sensory Receptor

Youlu Xing; Furao Shen; Jinxi Zhao

The proposed perception evolution network (PEN) is a biologically inspired neural network model for unsupervised, online incremental learning. It automatically learns suitable prototypes from the data in an incremental way and requires neither a predefined number of prototypes nor a predefined similarity threshold. Going beyond existing unsupervised neural network models, PEN also permits a new dimension of perception to emerge in the network's perception field: when a new sensory dimension is introduced, PEN integrates the new inputs with the learned prototypes, mapping the prototypes into a higher-dimensional space that consists of both the original and the new sensory dimensions. Experiments on artificial and real-world data show that PEN works effectively.
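The dimension-emergence idea can be illustrated with a toy class: when inputs gain coordinates, old prototypes are lifted into the larger space rather than discarded. The lifting initialization (copying the new coordinates from the current input) and the constants are assumptions, not PEN's actual mechanism.

```python
import numpy as np

class PENSketch:
    """Toy illustration of one PEN idea: prototypes survive the
    emergence of a new sensory dimension by being mapped into the
    higher-dimensional space (dimensions are assumed to only grow)."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.protos = []

    def observe(self, x):
        x = np.asarray(x, dtype=float)
        # lift old prototypes when a new perception dimension emerges
        for i, p in enumerate(self.protos):
            if p.size < x.size:
                self.protos[i] = np.concatenate([p, x[p.size:]])
        if not self.protos:
            self.protos.append(x)
            return
        d = [np.linalg.norm(p - x) for p in self.protos]
        j = int(np.argmin(d))
        if d[j] < self.threshold:
            self.protos[j] += 0.1 * (x - self.protos[j])   # refine winner
        else:
            self.protos.append(x)                          # new prototype
```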


Conference on Information and Knowledge Management | 2011

TAKES: a fast method to select features in the kernel space

Ye Xu; Furao Shen; Wei Ping; Jinxi Zhao

Feature selection is an effective tool against the curse of dimensionality, and to cope with non-separable problems, feature selection in the kernel space has been investigated. However, previous studies cannot adequately estimate the intrinsic dimensionality of the kernel space, so the learned basis preserves the sketch of the kernel space only inaccurately and feature selection performance suffers. Moreover, the computational cost of these algorithms is at least cubic in the number of training samples. In this paper, we propose a fast framework for feature selection in the kernel space. By designing a fast kernel subspace learning method, we automatically learn the intrinsic dimensionality and construct an orthogonal basis set of the kernel space. The learned basis accurately preserves the sketch of the kernel space, and backed by this basis, we select features directly in the kernel space. The whole framework has quadratic complexity in the number of training samples, which is faster than existing kernel methods for feature selection. We evaluate our work on several typical datasets and find that it not only preserves the sketch of the kernel space more accurately but also achieves better classification performance than many state-of-the-art methods.
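The basis-construction stage can be sketched as a kernelized Gram-Schmidt in which each basis vector is stored as a coefficient vector over selected anchor samples, so everything is computed through kernel evaluations alone. The RBF kernel and the threshold tau are assumptions, and the subsequent feature-selection stage is omitted.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    return float(np.exp(-gamma * np.linalg.norm(np.asarray(a) - np.asarray(b)) ** 2))

def kernel_basis(stream, tau=0.3, kernel=rbf):
    """Incrementally build an orthonormal basis of the kernel space;
    len(A) at the end estimates the intrinsic dimensionality."""
    S, A = [], []                    # anchor samples, basis coefficients
    for x in stream:
        x = np.asarray(x, dtype=float)
        k = np.array([kernel(s, x) for s in S])
        proj = np.array([a @ k for a in A])           # <b_i, phi(x)>
        res2 = max(kernel(x, x) - float(proj @ proj), 0.0)
        if res2 > tau ** 2:                           # assumed threshold
            S.append(x)
            A = [np.append(a, 0.0) for a in A]        # pad old coefficients
            e = np.zeros(len(S)); e[-1] = 1.0         # phi(x) over anchors
            c = e - sum(p * a for p, a in zip(proj, A))
            A.append(c / np.sqrt(res2))               # normalized residual
    return S, A
```

Because each sample touches only the anchors kept so far, the cost stays far below the cubic complexity of eigendecomposing a full Gram matrix.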


International Conference on Neural Information Processing | 2016

A Swarm Intelligence Algorithm Inspired by Twitter

Zhihui Lv; Furao Shen; Jinxi Zhao; Tao Zhu

For many years, evolutionary computation researchers have tried to extract swarm intelligence from biological systems in nature. A series of algorithms built by imitating animal behaviours have established themselves as effective means of solving optimization problems. However, these bio-inspired methods are not yet fully satisfactory, because the behaviour models they draw on, such as foraging birds and bees, are too simple to handle diverse problems. In this paper, by studying a more complicated behaviour model, namely human social behaviour on Twitter, an influential social media platform popular among a vast number of users, we propose a new algorithm named Twitter Optimization (TO). TO solves most real-parameter optimization problems by imitating human social actions on Twitter: following, tweeting and retweeting. Experiments show that TO performs well on the benchmark functions.
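A loose sketch of the metaphor on a benchmark function: following drifts a user toward the most influential solution, retweeting copies from a random user, and tweeting adds fresh noise. The operators and weights are illustrative, not the paper's exact update rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                        # benchmark objective to minimize
    return float(np.sum(x ** 2))

def twitter_optimization(f, dim=10, users=30, iters=200):
    """Illustrative population search driven by follow/retweet/tweet moves."""
    pop = rng.uniform(-5, 5, size=(users, dim))
    for _ in range(iters):
        fit = np.array([f(u) for u in pop])
        star = pop[np.argmin(fit)]                # most influential user
        for i in range(users):
            follow = 0.5 * (star - pop[i])        # drift toward the star
            retweet = 0.3 * (pop[rng.integers(users)] - pop[i])
            tweet = 0.1 * rng.normal(size=dim)    # post something new
            cand = pop[i] + follow + retweet + tweet
            if f(cand) < fit[i]:                  # keep improvements only
                pop[i] = cand
    fit = np.array([f(u) for u in pop])
    return pop[np.argmin(fit)], float(fit.min())

best, value = twitter_optimization(sphere)
```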


Neurocomputing | 2016

Orthogonal component analysis

Tao Zhu; Ye Xu; Furao Shen; Jinxi Zhao

Most existing dimensionality reduction algorithms have two disadvantages: their computational cost is high, and they cannot estimate the intrinsic dimension of the original dataset by themselves. To deal with these problems, in this paper we propose a fast linear dimensionality reduction method named Orthogonal Component Analysis (OCA). By avoiding eigenproblems and matrix inversion, OCA achieves high-speed orthogonal component extraction, and an adaptive threshold scheme lets it estimate the dimension of the feature space automatically. The algorithm is also guaranteed to be numerically stable. In the experiments, OCA is compared with several typical dimensionality reduction algorithms; the results demonstrate that, as a universal algorithm, OCA is efficient and effective.
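To suggest how threshold-based dimension estimation and numerical stability might combine, here is a batch sketch that applies the deflation twice (a standard reorthogonalization trick against floating-point drift); the threshold scale is an assumption.

```python
import numpy as np

def oca_sketch(X, tau=0.05):
    """Scan the dataset, keep a component when the twice-deflated
    residual passes the threshold; the number kept estimates the
    intrinsic dimension."""
    comps = []
    scale = np.mean(np.linalg.norm(X, axis=1))   # assumed threshold scale
    for x in X:
        r = np.asarray(x, dtype=float).copy()
        for _ in range(2):                        # re-orthogonalize
            for c in comps:
                r -= (r @ c) * c
        if np.linalg.norm(r) > tau * scale:
            comps.append(r / np.linalg.norm(r))
    W = np.asarray(comps, dtype=float).reshape(-1, X.shape[1])
    return W, X @ W.T                             # components, projections
```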


International Conference on Neural Information Processing | 2017

Time Series Forecasting Using GRU Neural Network with Multi-lag After Decomposition

Xu Zhang; Furao Shen; Jinxi Zhao; GuoHai Yang

Time series forecasting has a wide range of applications in society, industry, markets, and beyond. In this paper, a new time series forecasting method (FCD-MLGRU) is proposed for short-term forecasting problems. First, we decompose the original time series using Filtering Cycle Decomposition (FCD), proposed in this paper; second, we train a Gated Recurrent Unit (GRU) neural network to forecast each subseries. During training and forecasting, multi-time-lag sampling and ensemble forecasting are adopted, which reduces the dependence on the choice of time lag and enhances the generalization and stability of the model. Comparative experiments on real data sets and theoretical analysis show that the proposed method performs better than related methods.
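Since FCD is specific to this paper, the sketch below skips the decomposition and shows only the multi-lag GRU ensemble on a raw series, using PyTorch; the lag choices, network size, and training settings are illustrative.

```python
import numpy as np
import torch
import torch.nn as nn

class GRUForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, lag, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1])   # predict the next value

def make_windows(series, lag):
    """Slice a 1-D series into (window, next-value) training pairs."""
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(y, dtype=torch.float32).unsqueeze(-1))

def fit_and_predict(series, lag, epochs=100):
    model = GRUForecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    X, y = make_windows(series, lag)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(X), y)
        loss.backward()
        opt.step()
    last = torch.tensor(series[-lag:], dtype=torch.float32).view(1, lag, 1)
    return model(last).item()

# multi-lag ensemble: average the forecasts from several time lags,
# which is what reduces sensitivity to any single lag choice
series = np.sin(np.linspace(0, 20, 300))
forecast = np.mean([fit_and_predict(series, lag) for lag in (8, 16, 24)])
```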

Collaboration


Dive into Jinxi Zhao's collaborations.

Top Co-Authors

Ye Xu, Nanjing University

Chaomin Luo, University of Detroit Mercy