Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Kunikazu Kobayashi is active.

Publication


Featured research published by Kunikazu Kobayashi.


Neurocomputing | 2014

Time series forecasting using a deep belief network with restricted Boltzmann machines

Takashi Kuremoto; Shinsuke Kimura; Kunikazu Kobayashi; Masanao Obayashi

Multi-layer perceptrons (MLPs) and other artificial neural networks (ANNs) have been widely applied to time series forecasting since the 1980s. However, because of problems such as initialization and local optima that arise in applications, improving ANNs remains an active topic not only in time series forecasting but also in other fields of intelligent computing. In this study, we propose a method for time series prediction using Hinton and Salakhutdinov's deep belief nets (DBNs), probabilistic generative neural networks composed of multiple layers of restricted Boltzmann machines (RBMs). We use a 3-layer deep network of RBMs to capture the features of the input space of the time series data; after pretraining the RBMs using their energy functions, gradient descent training, i.e., the back-propagation learning algorithm, is used to fine-tune the connection weights between the "visible layers" and "hidden layers" of the RBMs. To decide the sizes of the neural networks and the learning rates, Kennedy and Eberhart's particle swarm optimization (PSO) is adopted during the training processes. Furthermore, "trend removal", a preprocessing step applied to the original data, is also investigated in forecasting experiments using the CATS benchmark data. Additionally, the proposed method was applied to the approximation and short-term prediction of chaotic time series such as the Lorenz chaos and the logistic map.
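The RBM pretraining step described above can be sketched with one-step contrastive divergence (CD-1). This is a minimal illustrative sketch, not the paper's implementation: the layer sizes, learning rate, and toy binary data are assumptions, and the paper's PSO tuning and MLP fine-tuning stages are omitted.

```python
import numpy as np

# Minimal sketch of one restricted Boltzmann machine trained with
# contrastive divergence (CD-1). Sizes and learning rate are
# illustrative, not the paper's PSO-tuned values.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 4, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)          # visible biases
b_h = np.zeros(n_hidden)           # hidden biases

# toy binary training data (a stand-in for windowed time-series input)
data = rng.integers(0, 2, (50, n_visible)).astype(float)

for epoch in range(100):
    for v0 in data:
        # positive phase: sample hidden units given the data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # negative phase: one step of Gibbs sampling (CD-1)
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # move parameters to lower the energy of the training data
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)

# after pretraining, reconstructions should resemble the inputs
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
print(np.mean((data - recon) ** 2))
```

In the paper's full pipeline, weights pretrained this way would then be fine-tuned with back-propagation.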


Artificial Life and Robotics | 2009

A dynamic associative memory system by adopting an amygdala model

Takashi Kuremoto; Tomonori Ohta; Kunikazu Kobayashi; Masanao Obayashi

Although several kinds of computational associative memory models and emotion models have been proposed since the last century, the interaction between memory and emotion is almost always neglected in these conventional models. This study constructs a dynamic memory system, named the amygdala-hippocampus model, which realizes dynamic auto-association and mutual association of time-series patterns more naturally by adopting an emotional factor, i.e., the functional model of the amygdala given by Morén and Balkenius. The output of the amygdala is designed to control the recollection state of multiple chaotic neural networks (MCNNs) in CA3 of the hippocampus-neocortex model proposed in our earlier work. The efficiency of the proposed association system is verified by computer simulations using several benchmark time-series patterns.


international conference on intelligent computing | 2012

Time Series Forecasting Using Restricted Boltzmann Machine

Takashi Kuremoto; Shinsuke Kimura; Kunikazu Kobayashi; Masanao Obayashi

In this study, we propose a method for time series prediction using the restricted Boltzmann machine (RBM), a kind of stochastic neural network. The idea comes from Hinton and Salakhutdinov's multilayer "encoder" network, which realizes dimensionality reduction of data. A 3-layer deep network of RBMs is constructed, and after pre-training the RBMs using their energy functions, gradient descent training (error back-propagation) is adopted for fine-tuning. Additionally, to deal with the problem of determining the neural network structure, particle swarm optimization (PSO) is used to find a suitable number of units and suitable parameters. Moreover, a preprocessing step, "trend removal", was also applied to the original data in the forecasting. To compare the proposed predictor with a conventional neural network method, i.e., the multi-layer perceptron (MLP), the CATS benchmark data was used in the prediction experiments.
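The "trend removal" preprocessing mentioned here can be sketched as fitting and subtracting a linear trend before training, then restoring it after forecasting. This is a hypothetical minimal sketch; the toy series and the choice of a degree-1 fit are assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical sketch of "trend removal": subtract a fitted linear
# trend before training, then add it back to the predictions.
t = np.arange(100, dtype=float)
series = 0.05 * t + np.sin(0.3 * t)   # toy series with a linear trend

coef = np.polyfit(t, series, deg=1)   # least-squares linear fit
trend = np.polyval(coef, t)
detrended = series - trend            # what the network would see

restored = detrended + trend          # invert after forecasting
print(np.allclose(restored, series))
```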


international conference on intelligent computing | 2005

Nonlinear prediction by reinforcement learning

Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi

Artificial neural networks have demonstrated their power and efficiency in nonlinear control, chaotic time series prediction, and many other fields. Reinforcement learning, which trains a learner by rewarding correct actions and punishing wrong ones, has, however, rarely been applied to nonlinear prediction. In this paper, we construct a multi-layer neural network and use reinforcement learning, in particular a learning algorithm called Stochastic Gradient Ascent (SGA), to predict nonlinear time series. The proposed system includes 4 layers: an input layer, a hidden layer, a stochastic parameter layer, and an output layer. Using a stochastic policy, the system optimizes its connection weights and output value to obtain the ability to predict nonlinear dynamics. In simulations, we used the Lorenz system and compared the short-term prediction accuracy of our proposed method with a classical learning method.
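The core idea of SGA-style prediction can be sketched as follows: sample a prediction from a stochastic policy and shift the parameters so that well-rewarded (accurate) samples become more likely. This is an illustrative linear sketch under assumed settings (a 3-step window, fixed spread, sine-wave data), not the paper's 4-layer network.

```python
import numpy as np

# Illustrative SGA-style sketch: a linear predictor outputs the mean
# of a Gaussian, samples a prediction, and updates its weights by
# reward-weighted log-likelihood gradients.
rng = np.random.default_rng(1)

w = np.zeros(3)        # weights on a 3-step input window (assumed)
sigma = 0.5            # fixed exploration spread (assumed)
lr = 0.01

series = np.sin(0.3 * np.arange(300))   # toy nonlinear series
for t in range(3, 300):
    x = series[t - 3:t]
    mu = w @ x
    y = rng.normal(mu, sigma)           # stochastic prediction
    reward = -(y - series[t]) ** 2      # accuracy as reward
    # eligibility: gradient of the log-likelihood of the sample
    e_w = (y - mu) / sigma**2 * x
    w += lr * reward * e_w              # stochastic gradient ascent

print(w)
```

In expectation this update descends the squared prediction error, which is why a stochastic policy can serve as a predictor at all.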


International Journal of Intelligent Computing and Cybernetics | 2009

Adaptive swarm behavior acquisition by a neuro‐fuzzy system and reinforcement learning algorithm

Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi

Purpose – The purpose of this paper is to present a neuro-fuzzy system with a reinforcement learning (RL) algorithm for adaptive swarm behavior acquisition. The basic idea is that each individual (agent) has the same internal model and the same learning procedure, and adaptive behaviors are acquired only from the reward or punishment given by the environment. The formation of the swarm is also designed by RL, e.g., the temporal difference (TD)-error learning algorithm, and this may yield a faster exploration procedure compared with the case of individual learning.

Design/methodology/approach – The internal model of each individual comprises a part that classifies input states with a fuzzy net, and a part that learns optimal behaviors with a network adopting a kind of RL methodology named the actor-critic method. The membership functions and fuzzy rules in the fuzzy net are adaptively formed online from the changes of environment states observed during the agents' trial behaviors. The weights of connections between the ...


international symposium on neural networks | 1999

A new indirect encoding method with variable length gene code to optimize neural network structures

Kunikazu Kobayashi; Masanao Ohbayashi

A new encoding method for optimizing neural network structures is proposed. It is based on an indirect encoding method with a variable-length gene code. Its ability to find an optimal solution is higher than that of direct encoding methods because redundant information in the gene code is reduced and the search space is therefore smaller. The proposed method easily supports adding and deleting hidden units. The performance of the proposed method is evaluated through computer simulations.
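The variable-length idea can be sketched as a genome that holds one gene per hidden unit, so mutation can grow or shrink the network directly. This is a hypothetical toy sketch; the gene fields and mutation operators are illustrative assumptions, not the paper's encoding.

```python
import random

# Hypothetical variable-length indirect encoding: each gene describes
# one hidden unit, so add/delete mutations change the network size.
random.seed(0)

def random_gene():
    # (incoming weight, outgoing weight) for one hidden unit (assumed)
    return (random.uniform(-1, 1), random.uniform(-1, 1))

genome = [random_gene() for _ in range(3)]

def mutate(genome):
    g = list(genome)
    op = random.choice(["add", "delete", "perturb"])
    if op == "add":
        g.append(random_gene())            # grow: one more hidden unit
    elif op == "delete" and len(g) > 1:
        g.pop(random.randrange(len(g)))    # shrink: drop a hidden unit
    else:
        i = random.randrange(len(g))       # perturb one weight
        w_in, w_out = g[i]
        g[i] = (w_in + random.gauss(0, 0.1), w_out)
    return g

child = mutate(genome)
print(len(genome), len(child))
```

Because the genome only lists units that exist, no code is spent on absent connections, which is the redundancy reduction the abstract refers to.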


international congress on image and signal processing | 2014

Forecast chaotic time series data by DBNs

Takashi Kuremoto; Masanao Obayashi; Kunikazu Kobayashi; Takaomi Hirata; Shingo Mabu

Deep belief nets (DBNs) built from multiple artificial neural networks (ANNs) have attracted many researchers recently. In this paper, we propose composing a restricted Boltzmann machine (RBM) and a multi-layer perceptron (MLP) into a DBN to predict chaotic time series data, such as the Lorenz chaos and the Henon map. Experimental results showed that, in terms of prediction precision, the novel DBN performed better than the conventional DBN built only from RBMs.


computational intelligence for modelling, control and automation | 2008

A Self-Organized Fuzzy-Neuro Reinforcement Learning System for Continuous State Space for Autonomous Robots

Masanao Obayashi; Takashi Kuremoto; Kunikazu Kobayashi

This paper proposes a system that combines self-organized fuzzy-neural networks with a reinforcement learning system (Q-learning and stochastic gradient ascent, SGA) to realize autonomous robot behavior learning in a continuous state space. The self-organized fuzzy neural network works as an adaptive classifier of the input state space to adapt to changes in the environment, while the reinforcement learning part learns the rule corresponding to each region of the input state space. Additionally, to simulate the real environment, the robot has the ability to estimate its own position. Finally, the effectiveness of our proposed system is demonstrated through autonomous robot behavior learning simulations using the Khepera robot simulator.


Journal of Robotics | 2010

Parameterless-Growing-SOM and Its Application to a Voice Instruction Learning System

Takashi Kuremoto; Takahito Komoto; Kunikazu Kobayashi; Masanao Obayashi

An improved self-organizing map (SOM), the parameterless-growing-SOM (PL-G-SOM), is proposed in this paper. To overcome problems in the traditional SOM (Kohonen, 1982), various structure-growing SOMs and parameter-adjusting SOMs have been proposed, but usually separately. Here, we combine the ideas of growing SOMs (Bauer and Villmann, 1997; Dittenbach et al., 2000) and a parameterless SOM (Berglund and Sitte, 2006) into a novel SOM named PL-G-SOM that realizes additional learning, optimal neighborhood preservation, and automatic tuning of parameters. The improved SOM is applied to construct a voice instruction learning system for partner robots adopting a simple reinforcement learning algorithm. Users' voice instructions are first classified by the PL-G-SOM; robots then choose an expected action according to a stochastic policy. The policy is adjusted by the reward or punishment given by the user of the robot. A feeling map is also designed to express the learning degrees of voice instructions. Learning and additional-learning experiments using instructions in multiple languages, including Japanese, English, Chinese, and Malaysian, confirmed the effectiveness of our proposed system.
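The "parameterless" part of the idea (after Berglund and Sitte) can be sketched as scaling each SOM update by the current quantization error instead of a hand-tuned decaying learning-rate schedule. This is an illustrative sketch on a fixed 5x5 map with toy 2-D inputs; the growing mechanism of PL-G-SOM and its exact update rule are not reproduced here.

```python
import numpy as np

# Illustrative parameterless-style SOM update: the learning rate is
# the winner's distance normalized by the largest distance seen so
# far, so no rate schedule needs tuning. Constants are assumptions.
rng = np.random.default_rng(0)

grid = np.stack(np.meshgrid(np.arange(5), np.arange(5)),
                axis=-1).reshape(-1, 2)   # 5x5 map coordinates
weights = rng.random((25, 2))             # 2-D toy input space
rho = 1e-6                                # running maximum error

for _ in range(2000):
    x = rng.random(2)                     # input in the unit square
    d = np.linalg.norm(weights - x, axis=1)
    bmu = np.argmin(d)                    # best-matching unit
    rho = max(rho, d[bmu])
    eps = d[bmu] / rho                    # error-driven "rate" in [0, 1]
    # neighborhood width also shrinks as the map fits the data
    sigma = max(eps * 2.5, 0.5)
    h = np.exp(-np.sum((grid - grid[bmu]) ** 2, axis=1)
               / (2 * sigma**2))
    weights += eps * h[:, None] * (x - weights)

print(weights.min(), weights.max())
```

Because every update is a convex step toward an input in the unit square, the codebook stays inside the data range while the effective rate decays automatically.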


international conference on neural information processing | 2009

A Meta-learning Method Based on Temporal Difference Error

Kunikazu Kobayashi; Hiroyuki Mizoue; Takashi Kuremoto; Masanao Obayashi

In general, meta-parameters in a reinforcement learning system, such as the learning rate and the discount rate, are empirically determined and fixed during learning. When the external environment changes, the system therefore cannot adapt itself to the variation. Meanwhile, it has been suggested that the biological brain might conduct reinforcement learning and adapt itself to the external environment by controlling neuromodulators corresponding to the meta-parameters. In the present paper, based on this suggestion, a method to adjust meta-parameters using the temporal difference (TD) error is proposed. Through various computer simulations of a maze search problem and an inverted pendulum control problem, it is verified that the proposed method can appropriately adjust meta-parameters according to variations in the external environment.
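The idea above can be sketched as driving a meta-parameter from the recent magnitude of the TD error: a persistently large error suggests the environment has changed, so the learning rate should rise. This is a hypothetical toy sketch on a cyclic chain task; the adjustment rule, task, and constants are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy sketch: TD(0) value learning where the learning rate alpha is
# itself adjusted from a running average of |TD error|. The mapping
# from error to alpha is an assumption for illustration.
rng = np.random.default_rng(0)

n_states = 5
V = np.zeros(n_states)                 # state values
gamma = 0.9                            # discount rate (fixed here)
alpha = 0.1                            # meta-parameter to be adjusted
avg_abs_td = 0.0                       # running average of |TD error|

for step in range(3000):
    s = rng.integers(n_states)
    r = 1.0 if s == n_states - 1 else 0.0   # reward at the last state
    s_next = (s + 1) % n_states             # deterministic chain
    td = r + gamma * V[s_next] - V[s]
    # large sustained TD errors -> faster learning; small -> slower
    avg_abs_td = 0.99 * avg_abs_td + 0.01 * abs(td)
    alpha = float(np.clip(avg_abs_td, 0.01, 0.5))
    V[s] += alpha * td

print(alpha, V)
```

As the values converge the TD error shrinks, so alpha settles back toward its floor; a sudden change in the reward structure would raise it again.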

Collaboration


Dive into Kunikazu Kobayashi's collaboration.

Top Co-Authors

Takuo Suzuki

Aichi Prefectural University