Publication


Featured research published by Takashi Kuremoto.


Neurocomputing | 2014

Time series forecasting using a deep belief network with restricted Boltzmann machines

Takashi Kuremoto; Shinsuke Kimura; Kunikazu Kobayashi; Masanao Obayashi

Multi-layer perceptrons (MLPs) and other artificial neural networks (ANNs) have been widely applied to time series forecasting since the 1980s. However, because of problems such as initialization and local optima that arise in applications, the improvement of ANNs remains an interesting subject not only for time series forecasting but also for other fields of intelligent computing. In this study, we propose a method for time series prediction using Hinton and Salakhutdinov's deep belief nets (DBNs), which are probabilistic generative neural networks composed of multiple layers of restricted Boltzmann machines (RBMs). We use a 3-layer deep network of RBMs to capture the features of the input space of the time series data, and after pretraining the RBMs using their energy functions, gradient descent training, i.e., the back-propagation learning algorithm, is used to fine-tune the connection weights between the "visible layers" and "hidden layers" of the RBMs. To decide the sizes of the neural networks and the learning rates, Kennedy and Eberhart's particle swarm optimization (PSO) is adopted during the training process. Furthermore, "trend removal", a preprocessing step applied to the original data, is also investigated in the forecasting experiment using the CATS benchmark data. Additionally, approximation and short-term prediction of chaotic time series such as the Lorenz chaos and the logistic map were also performed with the proposed method.
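The pretraining step can be illustrated with a short sketch. The following Python/NumPy code is a minimal, hypothetical example of one RBM layer trained with contrastive divergence (CD-1) on windows of a normalized time series; it is not the authors' implementation, and the window length, layer size, learning rate, and toy data are assumptions for illustration. In the full method, the hidden activations would feed further RBM layers and be fine-tuned by back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1_update(self, v0):
        # positive phase
        p_h0 = self.hidden_probs(v0)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # negative phase: one Gibbs step back to the visible layer and up again
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = self.hidden_probs(p_v1)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

# toy series scaled to [0, 1]; a window of 5 past values forms the visible layer
t = np.arange(500)
series = 0.5 + 0.4 * np.sin(0.1 * t)
window = 5
X = np.array([series[i:i + window] for i in range(len(series) - window)])

rbm = RBM(n_visible=window, n_hidden=10)
for epoch in range(50):
    rbm.cd1_update(X)

# hidden features passed to the next RBM (or to back-propagation fine-tuning)
features = rbm.hidden_probs(X)
print(features.shape)
```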


Expert Systems With Applications | 2013

Enhanced decision making mechanism of rule-based genetic network programming for creating stock trading signals

Shingo Mabu; Kotaro Hirasawa; Masanao Obayashi; Takashi Kuremoto

Evolutionary computation generally aims to create an optimal individual that represents optimal action rules when applied to agent systems. Genetic Network Programming (GNP) has been proposed as one of the graph-based evolutionary computation methods for creating such optimal individuals. GNP with rule accumulation is an extended GNP algorithm that, unlike general evolutionary computation, extracts a large number of rules throughout the generations and stores them in rule pools. Concretely, the individuals of GNP with rule accumulation are regarded as evolving rule generators in the training phase, and the generated rules in the rule pools are actually used for decision making. In this paper, GNP with rule accumulation is enhanced in terms of its rule extraction and classification abilities for generating stock trading signals, considering up and down trends and the occurrence frequency of specific buying/selling timings. A large number of buying and selling rules are extracted by the individuals evolved in the training period. Then, a unique classification mechanism is used to appropriately determine whether to buy or sell stocks based on the extracted rules. In the testing simulations, stock trading is carried out using the extracted rules, and it is confirmed that the rule-based trading model shows higher profits than the conventional individual-based trading model.
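As a rough illustration of how accumulated rules could drive buy/sell decisions, the Python sketch below stores rules as condition patterns with buy/sell occurrence counts and classifies the current market state by comparing the matched rules' accumulated strengths. The condition representation, matching scheme, and example data are assumptions for illustration, not the paper's actual GNP rule format or classification mechanism.

```python
from collections import defaultdict

# Hypothetical rule pool: a rule is a tuple of (indicator, value) conditions,
# accumulated with how often it preceded a buying or selling decision.
rule_pool = defaultdict(lambda: {"buy": 0, "sell": 0})

def accumulate(rule, action):
    """Store a rule extracted during the evolutionary training phase."""
    rule_pool[rule][action] += 1

def decide(market_state):
    """Match stored rules against the current state and vote, weighted by frequency."""
    buy_strength = sell_strength = 0
    for rule, counts in rule_pool.items():
        if all(market_state.get(ind) == val for ind, val in rule):
            buy_strength += counts["buy"]
            sell_strength += counts["sell"]
    if buy_strength > sell_strength:
        return "buy"
    if sell_strength > buy_strength:
        return "sell"
    return "hold"

# Toy usage with made-up technical-indicator conditions
accumulate((("trend", "up"), ("rsi", "low")), "buy")
accumulate((("trend", "down"), ("rsi", "high")), "sell")
print(decide({"trend": "up", "rsi": "low"}))   # -> buy
```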


Artificial Life and Robotics | 2009

A dynamic associative memory system by adopting an amygdala model

Takashi Kuremoto; Tomonori Ohta; Kunikazu Kobayashi; Masanao Obayashi

Although several kinds of computational associative memory models and emotion models have been proposed since the last century, the interaction between memory and emotion is almost always neglected in these conventional models. This study constructs a dynamic memory system, named the amygdala-hippocampus model, which is intended to realize dynamic auto-association and mutual association of time-series patterns more naturally by adopting an emotional factor, i.e., the functional model of the amygdala given by Morén and Balkenius. The output of the amygdala is designed to control the recollection state of multiple chaotic neural networks (MCNN) in CA3 of the hippocampus-neocortex model proposed in our earlier work. The efficiency of the proposed association system is verified by computer simulations using several benchmark time-series patterns.


international conference on intelligent computing | 2012

Time Series Forecasting Using Restricted Boltzmann Machine

Takashi Kuremoto; Shinsuke Kimura; Kunikazu Kobayashi; Masanao Obayashi

In this study, we propose a method for time series prediction using the restricted Boltzmann machine (RBM), which is a kind of stochastic neural network. The idea comes from Hinton and Salakhutdinov's multilayer "encoder" network, which realizes dimensionality reduction of data. A 3-layer deep network of RBMs is constructed, and after pre-training the RBMs using their energy functions, gradient descent training (error back-propagation) is adopted for fine-tuning. Additionally, to deal with the problem of determining the neural network structure, particle swarm optimization (PSO) is used to find a suitable number of units and suitable parameters. Moreover, a preprocessing step, "trend removal", was also applied to the original data in the forecasting. To compare the proposed predictor with a conventional neural network method, i.e., the multi-layer perceptron (MLP), the CATS benchmark data were used in the prediction experiments.
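The PSO step mentioned above is simple to sketch. The code below is a generic PSO loop over two hyperparameters (number of hidden units and learning rate) with a placeholder objective standing in for validation error; the objective, bounds, and PSO constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def validation_error(params):
    # Placeholder objective: in the real setting this would train the
    # RBM/MLP predictor with these hyperparameters and return its error.
    n_hidden, lr = params
    return (n_hidden - 12) ** 2 * 0.01 + (lr - 0.05) ** 2 * 100

# bounds for [number of hidden units, learning rate]
low, high = np.array([2.0, 0.001]), np.array([50.0, 0.5])
n_particles, n_iter = 20, 100
w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants

x = rng.uniform(low, high, (n_particles, 2))        # positions
v = np.zeros_like(x)                                # velocities
pbest = x.copy()
pbest_val = np.array([validation_error(p) for p in x])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, low, high)
    vals = np.array([validation_error(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best hidden units ~", round(gbest[0]), ", best learning rate ~", gbest[1])
```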


international conference on intelligent computing | 2005

Nonlinear prediction by reinforcement learning

Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi

Artificial neural networks have demonstrated their powerful ability and efficiency in nonlinear control, chaotic time series prediction, and many other fields. Reinforcement learning, which is a learning algorithm that rewards the learner for correct actions and punishes wrong actions, has, however, rarely been applied to nonlinear prediction. In this paper, we construct a multi-layer neural network and use reinforcement learning, in particular a learning algorithm called Stochastic Gradient Ascent (SGA), to predict nonlinear time series. The proposed system includes 4 layers: an input layer, a hidden layer, a stochastic parameter layer, and an output layer. Using a stochastic policy, the system optimizes its connection weights and output values to obtain the ability to predict nonlinear dynamics. In the simulations, we used the Lorenz system and compared the short-term prediction accuracy of the proposed method with a classical learning method.
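To make the prediction-by-reinforcement-learning idea concrete, here is a simplified sketch in which a linear Gaussian policy samples the next value, receives the negative squared error as the reward, and updates its weights with a REINFORCE-style stochastic gradient ascent rule using a reward baseline. The linear model, fixed variance, and logistic-map data are simplifying assumptions; the paper's network additionally has hidden and stochastic-parameter layers and uses the Lorenz system.

```python
import numpy as np

rng = np.random.default_rng(2)

# Logistic-map series as the nonlinear dynamics to be predicted
x = np.empty(2000)
x[0] = 0.3
for t in range(1999):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

window, sigma, alpha = 3, 0.1, 0.02
w, b = np.zeros(window), 0.0
baseline = 0.0                       # running average of the reward

for t in range(window, 1500):
    state = x[t - window:t]
    mu = state @ w + b               # mean of the Gaussian policy
    action = rng.normal(mu, sigma)   # sampled one-step-ahead prediction
    reward = -(action - x[t]) ** 2   # reward = negative squared error
    grad_log = (action - mu) / sigma ** 2
    # stochastic gradient ascent on expected reward, with a reward baseline
    w += alpha * (reward - baseline) * grad_log * state
    b += alpha * (reward - baseline) * grad_log
    baseline = 0.99 * baseline + 0.01 * reward

# one-step-ahead errors of the policy mean on held-out points
errors = [(x[t - window:t] @ w + b - x[t]) ** 2 for t in range(1500, 2000)]
print("test MSE of the policy mean:", np.mean(errors))
```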


Applied Soft Computing | 2015

Ensemble learning of rule-based evolutionary algorithm using multi-layer perceptron for supporting decisions in stock trading problems

Shingo Mabu; Masanao Obayashi; Takashi Kuremoto

Highlights: Rule pools for stock trading are generated by a rule-based evolutionary algorithm. Ensemble learning using MLP selects appropriate rule pools for trading decisions. The proposed method shows better profitability than the other methods. The proposed method appropriately selects good rules depending on the situation.

Classification is a major research field in pattern recognition, and many methods have been proposed to enhance the generalization ability of classification. Ensemble learning is one of the methods that enhance classification ability by creating several classifiers and making decisions by combining their classification results. On the other hand, when we consider stock trading problems, market trends are very important for deciding when to buy and sell stocks. In this case, combinations of trading rules that can adapt to various kinds of trends are effective for judging good timings of buying and selling. Therefore, in this paper, to enhance the performance of the stock trading system, an ensemble learning mechanism of a rule-based evolutionary algorithm using a multi-layer perceptron (MLP) is proposed, where several rule pools for stock trading are created by the rule-based evolutionary algorithm, effective rule pools are adaptively selected by the MLP, and the selected rule pools cooperatively make stock trading decisions. In the simulations, it is clarified that the proposed method shows higher profits or lower losses than the method without ensemble learning and than buy-and-hold.
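The ensemble step can be sketched as a gating network over rule pools: an MLP looks at trend features of the current market and outputs weights that decide how much each rule pool's buy/sell signal counts in the final decision. Everything below (the feature vector, the tiny MLP with placeholder parameters, the pool signals) is a made-up illustration of that mechanism, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_pool_weights(features, W1, b1, W2, b2):
    """Tiny MLP: trend features -> softmax weights over rule pools."""
    h = np.tanh(features @ W1 + b1)
    z = h @ W2 + b2
    e = np.exp(z - z.max())
    return e / e.sum()

n_features, n_hidden, n_pools = 4, 6, 3
# Hypothetical (e.g. already-trained) MLP parameters; random here for illustration
W1 = rng.normal(0, 0.5, (n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_pools));    b2 = np.zeros(n_pools)

# Trend features of the current day (e.g. short/long moving-average slopes)
features = np.array([0.8, -0.1, 0.3, 0.05])
# Signals from the three rule pools: +1 = buy, -1 = sell, 0 = hold
pool_signals = np.array([+1, -1, +1])

weights = mlp_pool_weights(features, W1, b1, W2, b2)
score = weights @ pool_signals
decision = "buy" if score > 0 else "sell" if score < 0 else "hold"
print(weights, decision)
```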


International Journal of Intelligent Computing and Cybernetics | 2009

Adaptive swarm behavior acquisition by a neuro‐fuzzy system and reinforcement learning algorithm

Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi

Purpose – The purpose of this paper is to present a neuro-fuzzy system with a reinforcement learning (RL) algorithm for adaptive swarm behavior acquisition. The basic idea is that each individual (agent) has the same internal model and the same learning procedure, and adaptive behaviors are acquired only through reward or punishment from the environment. The formation of the swarm is also designed by RL, e.g., a temporal difference (TD)-error learning algorithm, and it may bring about a faster exploration procedure compared with the case of individual learning. Design/methodology/approach – The internal model of each individual comprises a part that classifies input states with a fuzzy net and a part that learns optimal behaviors with a kind of RL method named the actor-critic method. The membership functions and fuzzy rules in the fuzzy net are formed adaptively online according to the changes of environment states observed during the agents' behavior trials. The weights of connections between the ...
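A compressed view of the actor-critic part is given below: Gaussian membership functions turn a continuous observation into fuzzy firing strengths, the critic estimates the state value from those strengths, and the TD error updates both critic and actor. The one-dimensional state, fixed membership centers, two discrete actions, and toy environment are simplifications for illustration; in the paper the fuzzy net forms its membership functions and rules online and the agents act in a swarm.

```python
import numpy as np

rng = np.random.default_rng(4)

centers, width = np.linspace(-1.0, 1.0, 5), 0.4   # fixed Gaussian memberships

def fuzzy_features(s):
    phi = np.exp(-((s - centers) ** 2) / (2 * width ** 2))
    return phi / phi.sum()                          # normalized firing strengths

n_actions, gamma, a_c, a_a = 2, 0.95, 0.1, 0.05
v = np.zeros(len(centers))                          # critic weights
theta = np.zeros((len(centers), n_actions))         # actor preferences

def policy(phi):
    pref = phi @ theta
    e = np.exp(pref - pref.max())
    return e / e.sum()

s = 0.0
for step in range(1000):
    phi = fuzzy_features(s)
    probs = policy(phi)
    a = rng.choice(n_actions, p=probs)
    # toy environment: action 0 pushes the state left, action 1 right;
    # reward is higher the closer the next state is to s = 0.5
    s_next = np.clip(s + (0.1 if a == 1 else -0.1) + rng.normal(0, 0.02), -1, 1)
    r = -abs(s_next - 0.5)
    phi_next = fuzzy_features(s_next)
    td_error = r + gamma * (phi_next @ v) - (phi @ v)
    v += a_c * td_error * phi                       # critic update
    grad = -probs; grad[a] += 1.0                   # d log pi / d preferences
    theta += a_a * td_error * np.outer(phi, grad)   # actor update
    s = s_next

print("learned action probabilities near s = 0:", policy(fuzzy_features(0.0)))
```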


international congress on image and signal processing | 2014

Forecast chaotic time series data by DBNs

Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi; Takaomi Hirata; Shingo Mabu

Deep belief nets (DBNs) built from multiple artificial neural networks (ANNs) have recently attracted many researchers. In this paper, we propose composing a restricted Boltzmann machine (RBM) and a multi-layer perceptron (MLP) into a DBN to predict chaotic time series data, such as the Lorenz chaos and the Henon map. Experimental results showed that, in terms of prediction precision, the novel DBN performed better than the conventional DBN composed only of RBMs.
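For reference, the benchmark data mentioned above are easy to reproduce: the snippet below generates the Henon map with its standard parameters and frames it as input/target pairs for one-step-ahead prediction. The window length is an assumption; the map equations and a = 1.4, b = 0.3 are the standard values.

```python
import numpy as np

# Henon map: x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n  (a = 1.4, b = 0.3)
a, b, n = 1.4, 0.3, 3000
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.1, 0.0
for i in range(n - 1):
    x[i + 1] = 1.0 - a * x[i] ** 2 + y[i]
    y[i + 1] = b * x[i]

x = x[100:]                      # discard the transient
window = 4                       # assumed number of past values fed to the network
inputs = np.array([x[i:i + window] for i in range(len(x) - window)])
targets = x[window:]
print(inputs.shape, targets.shape)   # pairs for one-step-ahead prediction
```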


computational intelligence for modelling, control and automation | 2008

A Self-Organized Fuzzy-Neuro Reinforcement Learning System for Continuous State Space for Autonomous Robots

Masanao Obayashi; Takashi Kuremoto; Kunikazu Kobayashi

This paper proposes a system that combines self-organized fuzzy neural networks with a reinforcement learning system (Q-learning and stochastic gradient ascent: SGA) to realize autonomous robot behavior learning in continuous state spaces. The self-organized fuzzy neural network works as an adaptive classifier of the input state space to adapt to changes of the environment, while the reinforcement learning part has the ability to learn the rules corresponding to the input state space. Simultaneously, to simulate the real environment, the robot has the ability to estimate its own position. Finally, it is clarified through autonomous robot behavior learning simulations using the Khepera robot simulator that the proposed system is effective.


Journal of Robotics | 2010

Parameterless-Growing-SOM and Its Application to a Voice Instruction Learning System

Takashi Kuremoto; Takahito Komoto; Kunikazu Kobayashi; Masanao Obayashi

An improved self-organizing map (SOM), the parameterless-growing-SOM (PL-G-SOM), is proposed in this paper. To overcome problems existing in the traditional SOM (Kohonen, 1982), various structure-growing SOMs and parameter-adjusting SOMs have been invented, though usually separately. Here, we combine the ideas of growing SOMs (Bauer and Villmann, 1997; Dittenbach et al., 2000) and a parameterless SOM (Berglund and Sitte, 2006) into a novel SOM named PL-G-SOM to realize additional learning, optimal neighborhood preservation, and automatic tuning of parameters. The improved SOM is applied to construct a voice instruction learning system for partner robots that adopts a simple reinforcement learning algorithm. Users' voice instructions are first classified by the PL-G-SOM, and then the robot chooses an expected action according to a stochastic policy. The policy is adjusted by the reward/punishment given by the user of the robot. A feeling map is also designed to express the learning degrees of voice instructions. Learning and additional-learning experiments using instructions in multiple languages, including Japanese, English, Chinese, and Malaysian, confirmed the effectiveness of the proposed system.
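To ground the terminology, here is a minimal conventional SOM training loop (best-matching-unit search plus neighborhood-weighted weight updates) in Python. The PL-G-SOM described above extends this by growing the map and by deriving the learning rate and neighborhood size from the current quantization error instead of the hand-set schedules used here; those schedules, the map size, and the toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

rows, cols, dim = 6, 6, 3                     # 6x6 map over 3-dimensional inputs
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

data = rng.random((500, dim))                 # toy inputs (e.g. voice feature vectors)
n_epochs = 20

for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)         # hand-set decay (replaced in PL-G-SOM)
    radius = 3.0 * (1 - epoch / n_epochs) + 0.5
    for v in data:
        # best-matching unit
        dists = np.linalg.norm(weights - v, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        # neighborhood function on the map grid
        grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
        # move units toward the input, weighted by the neighborhood
        weights += lr * h[..., None] * (v - weights)

print("quantization error:",
      np.mean([np.linalg.norm(weights - v, axis=-1).min() for v in data]))
```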

Collaboration


Dive into Takashi Kuremoto's collaborations.

Top Co-Authors
Kunikazu Kobayashi

Aichi Prefectural University
