Publication


Featured research published by Furao Shen.


Neural Networks | 2008

A fast nearest neighbor classifier based on self-organizing incremental neural network

Furao Shen; Osamu Hasegawa

A fast prototype-based nearest neighbor classifier is introduced. The proposed Adjusted SOINN Classifier (ASC) is based on the self-organizing incremental neural network (SOINN). It automatically learns the number of prototypes needed to determine the decision boundary, and it learns new information without destroying previously learned information. It is robust to noisy training data and realizes very fast classification. In the experiments, we use artificial and real-world datasets to illustrate ASC, and we compare ASC with other prototype-based classifiers with regard to classification error, compression ratio, and speed-up ratio. The results show that ASC achieves the best performance and is a very efficient classifier.
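The core idea, a 1-NN classifier over a small, incrementally grown prototype set, can be sketched as follows. This is a minimal illustration, not the actual ASC algorithm: the fixed `threshold` stands in for SOINN's learned similarity thresholds, and the prototype-adjustment step is omitted.

```python
import math

class PrototypeNN:
    """Sketch: a sample becomes a new prototype only when the nearest
    existing prototype of its class is farther away than `threshold`,
    so the stored set stays small and classification stays fast."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.prototypes = []  # list of (vector, label)

    def fit_incremental(self, x, label):
        same = [p for p, l in self.prototypes if l == label]
        if not same or min(math.dist(x, p) for p in same) > self.threshold:
            self.prototypes.append((x, label))  # keep as a new prototype

    def predict(self, x):
        # plain 1-NN decision over the compressed prototype set
        return min(self.prototypes, key=lambda pl: math.dist(x, pl[0]))[1]

clf = PrototypeNN(threshold=0.5)
for x, y in [((0.0, 0.0), "a"), ((0.1, 0.0), "a"), ((3.0, 3.0), "b")]:
    clf.fit_incremental(x, y)
```

Note how the second "a" sample is absorbed rather than stored, which is the source of the compression ratio the abstract mentions.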


Neurocomputing | 2015

Forecasting exchange rate using deep belief networks and conjugate gradient method

Furao Shen; Jing Chao; Jinxi Zhao

Forecasting exchange rates is an important financial problem. In this paper, an improved deep belief network (DBN) is proposed for forecasting exchange rates. By using continuous restricted Boltzmann machines (CRBMs) to construct the DBN, we extend the classical DBN to model continuous data. The structure of the DBN is determined through experiments for application to exchange rate forecasting. In addition, the conjugate gradient method is applied to accelerate learning in the DBN. In the experiments, three exchange rate series are tested and six evaluation criteria are adopted to evaluate the performance of the proposed method. Comparison with typical forecasting methods such as the feedforward neural network (FFNN) shows that the proposed method is applicable to the prediction of foreign exchange rates and works better than traditional methods.
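The training primitive behind a DBN layer is contrastive divergence on an RBM. The sketch below shows one deterministic mean-field CD-1 step for an RBM with real-valued visible units and logistic hidden units; it is a generic illustration only, not the paper's exact CRBM formulation, and bias updates are omitted for brevity.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ContinuousRBM:
    """Sketch of CD-1 for continuous visible units (mean-field, no sampling)."""
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.lr = lr
        # small deterministic init so the example is reproducible
        self.W = [[0.01 * (i + j + 1) for j in range(n_vis)] for i in range(n_hid)]
        self.b = [0.0] * n_vis  # visible biases (not updated in this sketch)
        self.c = [0.0] * n_hid  # hidden biases (not updated in this sketch)

    def _hid(self, v):
        return [sigmoid(sum(w * x for w, x in zip(row, v)) + c)
                for row, c in zip(self.W, self.c)]

    def _vis(self, h):
        # linear (Gaussian-mean) reconstruction of the continuous visibles
        return [sum(self.W[i][j] * h[i] for i in range(len(h))) + self.b[j]
                for j in range(len(self.b))]

    def cd1(self, v0):
        h0 = self._hid(v0)   # positive phase
        v1 = self._vis(h0)   # reconstruction
        h1 = self._hid(v1)   # negative phase
        for i in range(len(self.W)):
            for j in range(len(v0)):
                self.W[i][j] += self.lr * (h0[i] * v0[j] - h1[i] * v1[j])
        return sum((a - b) ** 2 for a, b in zip(v0, v1))  # reconstruction error

rbm = ContinuousRBM(n_vis=3, n_hid=2)
err = rbm.cd1([0.2, -0.1, 0.4])
```

In the paper, such layer-wise pre-training is followed by fine-tuning, which is where the conjugate gradient method accelerates learning; that stage is not shown here.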


International Symposium on Neural Networks | 2011

Forecasting exchange rate with deep belief networks

Jing Chao; Furao Shen; Jinxi Zhao

Forecasting exchange rates is an important financial problem which has received much attention. Neural networks have become one of the effective tools in this research field. In this paper, we propose the use of a deep belief network (DBN) to tackle the exchange rate forecasting problem. A DBN is applied to predict both the British pound/US dollar and the Indian rupee/US dollar exchange rates in our experiments, and six evaluation criteria are used to evaluate its performance. We also compare our method to a feedforward neural network (FFNN), the state-of-the-art neural network method for forecasting exchange rates. Experiments indicate that deep belief networks (DBNs) are applicable to the prediction of foreign exchange rates, since they achieve better performance than feedforward neural networks (FFNNs).
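The abstract does not list the six evaluation criteria; the sketch below implements four metrics commonly used in exchange rate forecasting studies (MAE, RMSE, MAPE, and directional accuracy) as an illustration of what such an evaluation looks like.

```python
import math

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def mape(y, yhat):
    # mean absolute percentage error; assumes no zero actual values
    return 100.0 * sum(abs((a - b) / a) for a, b in zip(y, yhat)) / len(y)

def directional_accuracy(y, yhat):
    # fraction of steps where the predicted change has the correct sign
    hits = sum((y[t] - y[t - 1]) * (yhat[t] - y[t - 1]) > 0
               for t in range(1, len(y)))
    return hits / (len(y) - 1)

actual    = [1.50, 1.52, 1.49, 1.53]  # hypothetical GBP/USD series
predicted = [1.51, 1.53, 1.50, 1.52]
```

Directional accuracy matters in this domain because a forecast with a small level error but the wrong direction of movement is useless for trading decisions.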


Neural Computing and Applications | 2011

An incremental online semi-supervised active learning algorithm based on self-organizing incremental neural network

Furao Shen; Hui Yu; Keisuke Sakurai; Osamu Hasegawa

An incremental online semi-supervised active learning algorithm, based on a self-organizing incremental neural network (SOINN), is proposed. This paper improves the two-layer SOINN to a single-layer SOINN that represents the topological structure of the input data and separates the generated nodes into different groups and subclusters. We then actively label some teacher nodes and use these teacher nodes to label all unlabeled nodes. The proposed method can learn from both labeled and unlabeled samples, and it can query the labels of important samples rather than selecting labeled samples randomly. It requires no prior knowledge, such as the number of nodes or the number of classes: it automatically learns the number of nodes and teacher vectors required for the current task. Moreover, it realizes online incremental learning. Experiments using artificial and real-world data show that the proposed method performs effectively and efficiently.
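The final step, spreading teacher labels to unlabeled nodes, can be sketched as a nearest-teacher assignment. This simplifies away the SOINN topology and the active selection of teacher nodes, both of which are central to the actual algorithm; the class names here are illustrative.

```python
import math

def propagate_labels(nodes, teachers):
    """Sketch: every unlabeled node takes the label of its nearest
    teacher node (the real method propagates labels along the learned
    SOINN topology rather than by raw distance)."""
    labeled = {}
    for n in nodes:
        vec, label = min(teachers, key=lambda kv: math.dist(n, kv[0]))
        labeled[n] = label
    return labeled

teachers = [((0.0, 0.0), "class-1"), ((5.0, 5.0), "class-2")]
nodes = [(0.3, 0.2), (4.8, 5.1), (0.1, -0.2)]
labels = propagate_labels(nodes, teachers)
```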


Neural Computing and Applications | 2012

An incremental learning vector quantization algorithm for pattern classification

Ye Xu; Furao Shen; Jinxi Zhao

Prototype classifiers have been studied for many years, yet few methods can realize incremental learning. Moreover, most prototype classifiers require users to predetermine the number of prototypes, and an improper number might undermine classification performance. To deal with these issues, in this paper we propose an online supervised algorithm named Incremental Learning Vector Quantization (ILVQ) for classification tasks. The proposed method makes three contributions. (1) By designing an insertion policy, ILVQ incrementally learns new prototypes, covering both between-class and within-class incremental learning. (2) By employing an adaptive threshold scheme, ILVQ dynamically learns the number of prototypes needed for each class according to the distribution of the training data. Therefore, unlike most current prototype classifiers, ILVQ needs no prior knowledge of the number of prototypes or their initial values. (3) A technique for removing useless prototypes is used to eliminate noise introduced into the input data. Experimental results show that ILVQ accommodates an incremental data environment and provides good recognition performance and storage efficiency.
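The insertion policy from contribution (1) can be sketched as follows. This is a simplified illustration: the fixed per-prototype threshold of 1.0 stands in for ILVQ's adaptive threshold scheme, and the noise-removal technique of contribution (3) is omitted.

```python
import math

class ILVQSketch:
    """Sketch of ILVQ's insertion policy: a sample becomes a new prototype
    when it falls outside the winner's threshold; otherwise the winner is
    pulled toward the sample, LVQ-style."""
    def __init__(self, lr=0.1):
        self.lr = lr
        self.protos = []  # entries: (vector, label, threshold)

    def learn(self, x, label):
        same = [i for i, p in enumerate(self.protos) if p[1] == label]
        if not same:
            self.protos.append((list(x), label, 1.0))  # between-class insertion
            return
        i = min(same, key=lambda i: math.dist(x, self.protos[i][0]))
        v, lab, th = self.protos[i]
        if math.dist(x, v) > th:
            self.protos.append((list(x), label, 1.0))  # within-class insertion
        else:
            for j in range(len(v)):                    # pull winner toward x
                v[j] += self.lr * (x[j] - v[j])

m = ILVQSketch()
for x, y in [((0.0, 0.0), "a"), ((0.2, 0.1), "a"), ((4.0, 4.0), "a"), ((1.0, 5.0), "b")]:
    m.learn(x, y)
```

In the trace above, the second "a" sample only nudges an existing prototype, while the distant third one triggers within-class insertion, which is how the prototype count adapts to the data distribution.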


Neurocomputing | 2013

A general associative memory based on self-organizing incremental neural network

Furao Shen; Qiubao Ouyang; Wataru Kasai; Osamu Hasegawa

This paper proposes a general associative memory (GAM) system that combines the functions of other typical associative memory (AM) systems. The GAM is a network consisting of three layers: an input layer, a memory layer, and an associative layer. The input layer accepts key vectors, response vectors, and the associative relationships between these vectors. The memory layer stores the input vectors incrementally in corresponding classes. The associative layer builds associative relationships between classes. The GAM can store and recall binary or non-binary information, learn key vectors and response vectors incrementally, realize many-to-many associations with no predefined conditions, store and recall both static and temporal sequence information, and recall information from incomplete or noise-polluted inputs. Experiments using binary data, real-value data, and temporal sequences show that GAM is an efficient system. The AM experiments using a humanoid robot demonstrate that GAM can accommodate real tasks and build associations between patterns of different dimensions.
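The recall-from-noisy-input behavior can be sketched with a nearest-key lookup. This is only the recall path in miniature: the real GAM stores classes incrementally in a SOINN-based memory layer and supports many-to-many and temporal associations, none of which appear here; the stored pairs are illustrative.

```python
import math

class AssociativeMemorySketch:
    """Sketch: store (key, response) pairs and, given a possibly
    noise-polluted key, recall the response of the nearest stored key."""
    def __init__(self):
        self.pairs = []

    def store(self, key, response):
        self.pairs.append((key, response))

    def recall(self, key):
        return min(self.pairs, key=lambda kr: math.dist(key, kr[0]))[1]

am = AssociativeMemorySketch()
am.store((1.0, 0.0, 0.0), "apple")
am.store((0.0, 1.0, 0.0), "banana")
```

Because recall is by proximity rather than exact match, a corrupted key such as `(0.9, 0.1, 0.0)` still retrieves the intended response.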


International Conference on Artificial Neural Networks | 2010

An online incremental learning support vector machine for large-scale data

Jun Zheng; Hui Yu; Furao Shen; Jinxi Zhao

Support Vector Machines (SVMs) have shown outstanding generalization performance in many fields. However, the standard SVM and most modified SVMs are in essence batch learners, which makes them unable to handle incremental learning well. Such SVMs also cannot handle large-scale data effectively because they are costly in terms of memory and computation. In some situations, a large number of Support Vectors (SVs) are produced, which generally means a long testing time. In this paper, we propose an online incremental learning SVM for large data sets. The proposed method consists of two main components: Learning Prototypes (LPs) and Learning SVs (LSVs). Experimental results demonstrate that the proposed algorithm is effective for incremental learning problems and large-scale problems.
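The "store few vectors, learn online" idea can be illustrated with a kernel perceptron, which adds an example to its stored set only when the current model misclassifies it. This is a stand-in for intuition, not the paper's LP/LSV components, which differ in detail.

```python
import math

class OnlineKernelPerceptron:
    """Sketch: online learning with a sparse stored set. An example is
    kept only if the current decision function gets it wrong, so the
    stored set (and hence testing time) stays small."""
    def __init__(self, gamma=1.0):
        self.gamma = gamma
        self.sv = []  # stored (vector, label) with label in {-1, +1}

    def _k(self, a, b):
        # RBF kernel
        return math.exp(-self.gamma * sum((x - y) ** 2 for x, y in zip(a, b)))

    def decision(self, x):
        return sum(y * self._k(v, x) for v, y in self.sv)

    def learn(self, x, y):
        if y * self.decision(x) <= 0:   # misclassified (or empty model)
            self.sv.append((x, y))      # store as a support vector

m = OnlineKernelPerceptron()
data = [((0.0, 0.0), -1), ((0.1, 0.1), -1), ((3.0, 3.0), 1), ((3.1, 2.9), 1)]
for x, y in data:
    m.learn(x, y)
```

On this toy stream, only two of the four examples end up stored, yet both clusters are classified correctly.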


International Conference on Artificial Neural Networks | 2010

Self-organizing incremental neural network and its application

Furao Shen; Osamu Hasegawa

The self-organizing incremental neural network (SOINN) is introduced. SOINN is able to represent the topological structure of input data, incrementally learn new knowledge without destroying previously learned knowledge, and process online non-stationary data. It is free of prior conditions such as a suitable network structure or network size, and it is also robust to noise. SOINN has been adapted for unsupervised learning, supervised learning, semi-supervised learning, and active learning tasks. SOINN has also been used in applications such as associative memory, pattern-based reasoning, word grounding, gesture recognition, and robotics.
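SOINN's core competitive rule can be sketched as follows. This is a heavily simplified illustration: the fixed `threshold` replaces SOINN's per-node similarity thresholds computed from each node's neighborhood, and the periodic deletion of noisy low-degree nodes is omitted.

```python
import math

class TinySOINN:
    """Sketch of SOINN's insertion rule: an input becomes a new node when
    it is farther than the threshold from either of its two nearest nodes;
    otherwise the winner moves toward the input and the two winners are
    linked by an edge, building the topological structure."""
    def __init__(self, threshold=1.5, lr=0.1):
        self.threshold = threshold
        self.lr = lr
        self.nodes = []
        self.edges = set()

    def input(self, x):
        if len(self.nodes) < 2:
            self.nodes.append(list(x))
            return
        order = sorted(range(len(self.nodes)),
                       key=lambda i: math.dist(x, self.nodes[i]))
        w1, w2 = order[0], order[1]
        if (math.dist(x, self.nodes[w1]) > self.threshold or
                math.dist(x, self.nodes[w2]) > self.threshold):
            self.nodes.append(list(x))               # new node for new knowledge
        else:
            self.edges.add(tuple(sorted((w1, w2))))  # link the two winners
            for j in range(len(x)):                  # move winner toward input
                self.nodes[w1][j] += self.lr * (x[j] - self.nodes[w1][j])

net = TinySOINN(threshold=1.5)
for x in [(0.0, 0.0), (0.5, 0.0), (0.2, 0.3), (5.0, 5.0)]:
    net.input(x)
```

Note that the distant input `(5.0, 5.0)` becomes a new node instead of dragging an existing one, which is how incremental learning avoids destroying previously learned structure.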


International Symposium on Neural Networks | 2010

An associative memory system for incremental learning and temporal sequence

Furao Shen; Hui Yu; Wataru Kasai; Osamu Hasegawa

An associative memory (AM) system is proposed to realize incremental learning and temporal sequence learning. The proposed system is constructed as a three-layer network: the input layer accepts key vectors, response vectors, and the associative relations between vectors; the memory layer stores input vectors incrementally in corresponding classes; and the associative layer builds associative relations between classes. The proposed method can incrementally learn key vectors and response vectors; store and recall both static information and temporal sequence information; and recall information from incomplete or noise-polluted inputs. Experiments using binary data, real-value data, and temporal sequences show that the proposed method works well.


Neural Networks | 2016

A Self-Organizing Incremental Neural Network based on local distribution learning

Youlu Xing; Xiaofeng Shi; Furao Shen; Ke Zhou; Jinxi Zhao

In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data incrementally, without a priori knowledge such as the structure of the network, and its nodes store rich local information about the learning data. An adaptive vigilance parameter guarantees that LD-SOINN adds new nodes for new knowledge automatically while keeping the number of nodes from growing without bound. As learning proceeds, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A density-based denoising process is designed to reduce the influence of noise. Experiments show that LD-SOINN performs well on both artificial and real-world data.
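The merging criterion can be sketched as a two-part test on node summaries. This is an illustrative reading of the abstract, not the paper's actual procedure: each node is assumed to carry a precomputed mean, principal direction, and sample count, and the thresholds are arbitrary.

```python
import math

def cos_sim(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def maybe_merge(n1, n2, dist_th=1.0, align_th=0.95):
    """Sketch: merge two nodes when their means are close and their
    principal directions are aligned; the merged mean is a count-weighted
    average, and one direction is kept as the merged axis."""
    (m1, d1, c1), (m2, d2, c2) = n1, n2
    if math.dist(m1, m2) > dist_th or abs(cos_sim(d1, d2)) < align_th:
        return None  # nodes represent different local distributions
    c = c1 + c2
    mean = [(c1 * a + c2 * b) / c for a, b in zip(m1, m2)]
    return (mean, d1, c)

# hypothetical node summaries: (mean, principal direction, sample count)
a = ((0.0, 0.0), (1.0, 0.0), 10)
b = ((0.5, 0.0), (0.99, 0.05), 30)
merged = maybe_merge(a, b)
```

Requiring both proximity and alignment prevents merging two nearby nodes whose local distributions stretch in different directions, which is what makes the resulting representation concise without losing shape information.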

Collaboration


Dive into Furao Shen's collaborations.

Top Co-Authors

Osamu Hasegawa

Tokyo Institute of Technology


Chaomin Luo

University of Detroit Mercy
