Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hironori Hirata is active.

Publication


Featured research published by Hironori Hirata.


IEEE International Conference on Evolutionary Computation | 1996

An immunity based genetic algorithm and its application to the VLSI floorplan design problem

Isao Tazawa; Seiichi Koakutsu; Hironori Hirata

The genetic algorithm (GA) paradigm is a search procedure for combinatorial optimization problems. Unlike most other optimization techniques, a GA searches the solution space using a population of solutions. Although the GA has excellent global search ability, its crossover-based search is not effective at exploring the solution space locally, and the diversity of the population sometimes decreases rapidly. To overcome these drawbacks, we propose a new algorithm, the immunity-based GA (IGA), which combines features of the immune system (IS) with the GA. The proposed method is expected to have local search ability and to prevent premature convergence. We apply the IGA to the floorplan design problem of VLSI layout. Experimental results show that the IGA performs better than the GA.
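
The interplay of crossover-based global search and immune-style local search can be sketched in a few lines. Below is a minimal Python sketch assuming a bit-string encoding and a OneMax objective as stand-ins; the paper instead uses a VLSI floorplan representation, and its exact immune operators (hypermutation rate, suppression rule) differ in detail.

```python
# A minimal sketch of an immunity-based GA, assuming a bit-string encoding
# and a OneMax objective as placeholders; the paper instead uses a VLSI
# floorplan representation, and its immune operators differ in detail.
import random

GENES, POP, GENS = 32, 30, 100

def fitness(ind):
    # placeholder objective: count of 1-bits (OneMax)
    return sum(ind)

def crossover(a, b):
    # one-point crossover: the GA's global search operator
    cut = random.randint(1, GENES - 1)
    return a[:cut] + b[cut:]

def hypermutate(ind, rate=0.1):
    # immune-style local search: flip a few bits near a good solution
    return [g ^ (random.random() < rate) for g in ind]

def suppress(pop):
    # diversity maintenance: drop duplicate "antibodies", refill randomly
    seen, out = set(), []
    for ind in pop:
        key = tuple(ind)
        if key not in seen:
            seen.add(key)
            out.append(ind)
    while len(out) < POP:
        out.append([random.randint(0, 1) for _ in range(GENES)])
    return out

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 3]
    # affinity maturation: local refinement of the best solutions
    refined = [max((hypermutate(e) for _ in range(5)), key=fitness)
               for e in elite]
    children = [crossover(random.choice(elite), random.choice(elite))
                for _ in range(POP - len(refined))]
    pop = suppress(refined + children)

print("best fitness:", fitness(max(pop, key=fitness)))
```

The suppression step is what distinguishes this loop from a plain elitist GA: by removing clones and reinjecting random antibodies, it counteracts the rapid loss of diversity the abstract identifies.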


Systems, Man and Cybernetics | 2004

Development of intelligent wheelchair acquiring autonomous, cooperative, and collaborative behavior

Tomoki Hamagami; Hironori Hirata

An intelligent wheelchair (IWC) prototype system, ACCoMo, is developed to aid safe indoor mobility for physically challenged people. As an agent, ACCoMo can acquire autonomous, cooperative, and collaborative behavior. The autonomous behavior realizes safe and effective movement by observing the local real environment. The cooperative behavior emerges dynamically from interactions with other ACCoMo agents. The collaborative behavior aims to assist user operations and provides functions for connecting to various ubiquitous devices. These behaviors are acquired through the learning and evolution of intelligent ACCoMo agents based on their experience in real or virtual environments. Experiments in real-world environments show that the agent can acquire these intelligent behaviors on ACCoMo.
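
As a rough illustration of how three behavior layers might be arbitrated, here is a hypothetical behavior-selection sketch. The priority ordering and the percept/command interface are assumptions for illustration only, not the paper's actual architecture.

```python
# A hypothetical sketch of layered behavior selection in the spirit of
# ACCoMo. The priority ordering and the percept/command interface are
# assumptions, not the paper's actual architecture.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Percept:
    obstacle_ahead: bool          # from local range sensors
    peer_nearby: bool             # another IWC agent close by
    user_command: Optional[str]   # e.g. "forward", "left", or None

def autonomous(p):
    # safety first: never run into an obstacle
    return "stop" if p.obstacle_ahead else None

def cooperative(p):
    # yield so two agents do not block each other
    return "yield_right" if p.peer_nearby else None

def collaborative(p):
    # otherwise, assist the user's intent
    return p.user_command

def select_action(p: Percept) -> str:
    # higher-priority layers override lower ones
    for behavior in (autonomous, cooperative, collaborative):
        action = behavior(p)
        if action is not None:
            return action
    return "idle"

print(select_action(Percept(False, True, "forward")))  # -> yield_right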


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2003

Method of crowd simulation by using multiagent on cellular automata

Tomoki Hamagami; Hironori Hirata

This paper presents a new simulation method for crowd behavior that uses a two-layer model consisting of a multiagent (MA) framework and cellular automata (CA). The features of this method are as follows. (1) Complicated crowd behavior emerges from the autonomous actions of agents. (2) The autonomous action process is separated from the restrictions imposed by physical interference. Using a simulation system implementing the two-layer model, crowd behavior simulations are realized. In particular, collisions of counter-flowing crowds are analyzed in detail, with interesting results. (1) A homogeneous agent crowd tends to form whirlpools, waves, and gaps, and to walk slowly. (2) A heterogeneous agent crowd forms lines and then flows efficiently. Experimental results show that combining MA and CA is effective for easily realizing complicated crowd behavior in various environments.
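
The two-layer separation can be illustrated with a toy counter-flow on a one-dimensional lattice: agents declare intended moves (MA layer), and a CA layer enforces that each cell holds at most one agent. The lattice size and movement rule below are assumptions, not the paper's model.

```python
# A toy illustration of the two-layer model on a one-dimensional lattice.
# Agents walking right (+1) and left (-1) declare intended moves (MA
# layer); the CA layer then enforces physical exclusion: one agent per
# cell, and a contested or occupied target rejects the move. The lattice
# size and movement rule are assumptions, not the paper's model.
import random

WIDTH = 20
cells = [0] * WIDTH
for i in random.sample(range(WIDTH), 10):
    cells[i] = random.choice((1, -1))   # counter-flowing crowds

def step(cells):
    # layer 1 (multiagent): each agent states its intended target cell
    intents = {i: (i + a) % WIDTH for i, a in enumerate(cells) if a != 0}
    # layer 2 (CA): a move succeeds only if the target is currently empty
    # and no other agent claims the same cell this step
    claims = {}
    for i, t in intents.items():
        claims.setdefault(t, []).append(i)
    new = list(cells)
    for t, movers in claims.items():
        if cells[t] == 0 and len(movers) == 1:
            i = movers[0]
            new[t], new[i] = cells[i], 0
    return new

for _ in range(10):
    cells = step(cells)
    print("".join(".><"[a] for a in cells))
```

Because the CA layer resolves conflicts against the current configuration, agents never need to reason about collisions themselves, which is exactly the separation of concerns the abstract describes.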


Systems, Man and Cybernetics | 2002

Reinforcement learning to compensate for perceptual aliasing using dynamic additional parameter: motivational value

Tomoki Hamagami; Seiichi Koakutsu; Hironori Hirata

In this paper, we present a new reinforcement learning approach that compensates for the perceptual aliasing problem by varying policies depending on the behavior context. For this approach, a motivational value (M-value) is introduced as a parameter that temporarily emphasizes specific future action selection probabilities according to the context. In the learning phase, the Q-value update error associated with the current state-action pair is memorized as M-values attached to previously visited experiences. In the control phase, to motivate the next action, the agent awakens the M-values that are linked with the current state and memorized in past experiences. By combining M-values with Q-values, even if an agent observes the same sensory input in different states, it can generate different action selection policies according to the context. The advantage of the proposed approach is that a learning/control system reflecting differences in context can be realized easily, while saving computational memory, through a simple extension of general reinforcement learning: Q-learning. To investigate the validity of the proposed method, we apply it to a maze problem containing perceptual aliasing and compare it with general Q-learning. The maze experiments show that the proposed approach works effectively in non-Markov decision process environments involving perceptual aliasing problems. Keywords: reinforcement learning, Q-learning, POMDPs, perceptual aliasing.
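
A minimal sketch of the M-value mechanism layered on tabular Q-learning follows. The decay rule for writing the TD error back onto the recent trajectory is an assumption; the paper defines its own update and awakening rules.

```python
# A minimal sketch of the M-value idea on top of tabular Q-learning.
# The decay factor BETA * GAMMA**k used to write the TD error back onto
# the recent trajectory is an assumption; the paper defines its own
# update and "awakening" rules for M-values.
import random
from collections import defaultdict

ALPHA, GAMMA, BETA, EPS = 0.5, 0.9, 0.3, 0.1
Q = defaultdict(float)   # long-term value of (observation, action)
M = defaultdict(float)   # context-dependent motivational bonus

def choose(obs, actions):
    # act greedily over Q + M, so the same aliased observation can yield
    # different actions depending on which M-values the context awakened
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(obs, a)] + M[(obs, a)])

def update(obs, action, reward, next_obs, actions, history):
    # standard Q-learning temporal-difference (TD) error
    td = (reward + GAMMA * max(Q[(next_obs, a)] for a in actions)
          - Q[(obs, action)])
    Q[(obs, action)] += ALPHA * td
    # memorize the same error as M-values on recently visited pairs,
    # biasing future action selection along this behavior context
    for k, past in enumerate(reversed(history)):
        M[past] += BETA * td * GAMMA ** (k + 1)
    history.append((obs, action))
```

The point of the split is that Q stores the context-free value of an observation while M carries a trajectory-dependent bias, so two visits to the same aliased observation can be disambiguated without enlarging the state space.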


IEEE International Conference on Evolutionary Computation | 1996

A parallel learning cellular automata for combinatorial optimization problems

Fei Qian; Hironori Hirata

Reinforcement learning is a class of learning methodologies in which the controller (or agent) adapts based on external feedback from a random environment. We present a theoretical model, the stochastic learning cellular automaton (SLCA), as a model of reinforcement learning systems. The SLCA extends the traditional cellular automaton: it is defined as a stochastic cellular automaton together with its random environment. There are three rule spaces for the SLCA: parallel, sequential, and mixed. We study in particular the parallel SLCA with a genetic operator and apply it to combinatorial optimization problems. Computer simulations of graph partitioning problems show that the convergence of the SLCA is better than that of the parallel mean-field algorithm.
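
The parallel rule space can be illustrated with a toy 2-way graph partitioning in which each node is a learning cell updated simultaneously by a linear reward-inaction scheme. The per-cell reward signal below is an assumption standing in for the paper's scheme, and the balance constraint of real partitioning is omitted for brevity.

```python
# A toy parallel SLCA for 2-way graph partitioning: each node is a
# learning cell holding P(label = 1), all cells act simultaneously, and
# each cell's probability is updated by a linear reward-inaction (L_R-I)
# rule. The per-cell reward (agreeing with the neighbor majority) is an
# assumption, and the balance constraint is omitted for brevity.
import random

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
         3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}   # toy adjacency list
prob = {v: 0.5 for v in graph}                  # P(cell v picks label 1)
A = 0.1                                         # learning rate

for _ in range(200):
    # parallel rule space: every cell samples its label at the same time
    label = {v: int(random.random() < prob[v]) for v in graph}
    for v in graph:
        same = sum(label[u] == label[v] for u in graph[v])
        if same >= len(graph[v]) - same:         # reward: neighbor majority
            prob[v] += A * (label[v] - prob[v])  # L_R-I: move toward action
        # on penalty, L_R-I leaves the probability unchanged ("inaction")

cut = sum(label[u] != label[v] for v in graph for u in graph[v]) // 2
print("labels:", label, "cut size:", cut)
```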


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2003

Q-learning automaton

Fei Qian; Hironori Hirata

Reinforcement learning is the problem faced by a controller that must learn behavior through trial-and-error interactions with a dynamic environment. The controller's goal is to maximize reward over time by producing an effective mapping of states to actions, called a policy. To model such systems, we present a generalized learning automaton approach with Q-learning behaviors. Computational experiments on pursuit problems show that, compared to Q-learning, the proposed reinforcement scheme obtains better results in terms of convergence speed and memory size.
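
Here is a minimal sketch of a learning automaton whose action probabilities are reinforced alongside Q-style value estimates. The linear reward-inaction update below is a standard automaton scheme used as an assumption; it is not necessarily the paper's exact reinforcement rule.

```python
# A minimal sketch of a learning automaton maintaining an action
# probability vector alongside Q-style value estimates. The linear
# reward-inaction update is a standard scheme used here as an assumption,
# not necessarily the paper's exact Q-learning automaton rule.
import random

ACTIONS = [0, 1, 2]
p = [1 / 3] * 3                 # action probability vector (the automaton)
Q = [0.0] * 3                   # running value estimate per action
A, ALPHA = 0.05, 0.2

def environment(a):
    # unknown stochastic environment: action 2 is the best choice
    return 1 if random.random() < (0.2, 0.5, 0.8)[a] else 0

for _ in range(2000):
    a = random.choices(ACTIONS, weights=p)[0]
    r = environment(a)
    Q[a] += ALPHA * (r - Q[a])      # Q-style incremental value estimate
    if r:                           # on reward, shift probability mass
        for i in ACTIONS:           # toward the rewarded action (L_R-I)
            p[i] = p[i] + A * (1 - p[i]) if i == a else p[i] * (1 - A)

print("probabilities:", [round(x, 2) for x in p],
      "Q:", [round(q, 2) for q in Q])
```

Storing a probability vector plus a small Q table per state is what gives the automaton formulation its memory-size advantage over schemes that must enumerate full state-action histories.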


SICE Journal of Control, Measurement, and System Integration | 2015

Analysis and Improvements of the Pareto Optimal Solution Visualization Method Using the Self-Organizing Maps

Atsushi Hironaka; Takashi Okamoto; Seiichi Koakutsu; Hironori Hirata


Society of Instrument and Control Engineers of Japan | 2012

A growing complex network design method with an adaptive multi-objective genetic algorithm and an inner link restructuring method

Haruki Mizuno; Takashi Okamoto; Seiichi Koakutsu; Hironori Hirata


IEEJ Transactions on Electronics, Information and Systems | 2001

Learning Cellular Automata for Function Optimization Problems

Fei Qian; Yue Zhao; Hironori Hirata


IEEJ Transactions on Electronics, Information and Systems | 2011

A Growing Complex Network Design Method

Haruki Mizuno; Takashi Okamoto; Seiichi Koakutsu; Hironori Hirata

Collaboration


Dive into Hironori Hirata's collaborations.

Top Co-Authors

Tomoki Hamagami
Yokohama National University

Fei Qian
Hiroshima Kokusai Gakuin University

Yue Zhao
Hiroshima Kokusai Gakuin University