Publications


Featured research published by Koichiro Morihiro.


Meeting of the Association for Computational Linguistics | 2006

A Feedback-Augmented Method for Detecting Errors in the Writing of Learners of English

Ryo Nagata; Atsuo Kawai; Koichiro Morihiro; Naoki Isu

This paper proposes a method for detecting errors in article usage and singular/plural usage based on the mass/count distinction. First, it learns decision lists from automatically generated training data to distinguish mass nouns from count nouns. Then, to improve its performance, it is augmented by feedback obtained from the writing of learners. Finally, it detects errors by applying rules based on the mass/count distinction. Experiments show that it achieves a recall of 0.71 and a precision of 0.72, and that it outperforms the other methods used for comparison when augmented by feedback.
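
As a rough sketch of the decision-list step described above (not the authors' implementation; the feature names and data layout are hypothetical), one can rank candidate context clues by a smoothed log-likelihood ratio and classify each noun with the strongest matching rule:

    import math
    from collections import defaultdict

    def learn_decision_list(training, smoothing=0.5):
        """Rank (feature -> class) rules by log-likelihood ratio.

        training: list of (features, label) pairs, where features is a set
        of context clues (hypothetical examples: 'prev=much', 'suffix=-tion')
        and label is 'mass' or 'count'.
        """
        counts = defaultdict(lambda: {"mass": 0, "count": 0})
        for features, label in training:
            for f in features:
                counts[f][label] += 1
        rules = []
        for f, c in counts.items():
            # Smoothed log-likelihood ratio of the two classes given the feature.
            llr = math.log((c["mass"] + smoothing) / (c["count"] + smoothing))
            label = "mass" if llr > 0 else "count"
            rules.append((abs(llr), f, label))
        rules.sort(reverse=True)  # strongest evidence first
        return rules

    def classify(rules, features, default="count"):
        """Apply the highest-ranked rule whose feature is present."""
        for _, f, label in rules:
            if f in features:
                return label
        return default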


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2006

Emergence of flocking behavior based on reinforcement learning

Koichiro Morihiro; Teijiro Isokawa; Haruhiko Nishimura; Nobuyuki Matsui

Grouping motion, such as bird flocking, land animal herding, and fish schooling, is well known in nature. Many observations have shown that there are no leading agents controlling the behavior of the group. Several models have been proposed for describing flocking behavior, which we regard as a distinctive example of aggregate motion. In these models, a fixed rule is given to each individual a priori for its interactions, in a reductive and rigid manner. Instead, we have proposed a new framework for self-organized flocking of agents by reinforcement learning. Introducing a learning scheme for producing collective behavior will become important in artificial autonomous distributed systems. In this paper, anti-predator behaviors of agents are examined under our scheme through computer simulations. We demonstrate the features of the behavior under two learning modes: toward agents of the same kind and toward predators.
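
To illustrate the kind of learning scheme involved, the following is a minimal tabular Q-learning sketch in which an agent is rewarded for keeping a moderate distance from a perceived neighbor. The state discretization, action set, and reward shaping are simplifying assumptions for illustration, not the authors' formulation:

    import random

    # Hypothetical discretization: distance to the perceived agent.
    STATES = ["too_close", "comfortable", "too_far"]
    ACTIONS = ["approach", "keep_parallel", "avoid"]

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

    def reward(state, action):
        # Assumed shaping: close in when far, back off when crowded.
        if state == "too_far" and action == "approach":
            return 1.0
        if state == "too_close" and action == "avoid":
            return 1.0
        if state == "comfortable" and action == "keep_parallel":
            return 1.0
        return -0.1

    def choose_action(state):
        if random.random() < EPSILON:  # exploratory move
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: Q[(state, a)])

    def q_update(state, action, next_state):
        r = reward(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

    # One learning step, assuming the environment reports the next state:
    s = "too_far"
    a = choose_action(s)
    q_update(s, a, "comfortable")

With a reward of this shape, each agent learns attraction at long range and repulsion at short range, the same qualitative ingredients that rule-based flocking models hard-wire a priori.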


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2004

Effects of Chaotic Exploration on Reinforcement Maze Learning

Koichiro Morihiro; Nobuyuki Matsui; Haruhiko Nishimura

In reinforcement learning, it is necessary to introduce a process of trial and error called exploration. As a generator for exploration, a uniform pseudorandom number generator is commonly used. However, a chaotic source is also known to provide a random-like sequence, much as a stochastic source does. In this research, we propose applying this random-like feature of deterministic chaos to the exploration generator. As a result, we find that a deterministic chaotic exploration generator based on the logistic map gives better performance than a stochastic random exploration generator in a nonstationary shortcut maze problem. To understand why the exploration generator based on the logistic map yields the better result, we investigate the learning structures obtained from the two exploration generators.
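
A minimal sketch of the idea, assuming the chaotic source is wired into epsilon-greedy action selection (the paper's exact selection rule may differ): iterates of the logistic map x_{n+1} = a x_n (1 - x_n) at a = 4 replace the uniform pseudorandom draws.

    def logistic_iterates(x0=0.3, a=4.0):
        """Deterministic chaotic source on (0, 1): x_{n+1} = a * x_n * (1 - x_n)."""
        x = x0
        while True:
            x = a * x * (1.0 - x)
            yield x

    chaos = logistic_iterates()

    def epsilon_greedy(q_values, actions, epsilon=0.1):
        """Epsilon-greedy selection driven by the chaotic source instead of
        a uniform pseudorandom number generator."""
        if next(chaos) < epsilon:
            # Chaotic pick of an exploratory action (guarded index).
            i = min(int(next(chaos) * len(actions)), len(actions) - 1)
            return actions[i]
        return max(actions, key=lambda a: q_values[a])

    q = {"up": 0.2, "down": 0.5, "left": 0.1, "right": 0.4}
    a = epsilon_greedy(q, list(q))

Note that logistic-map iterates are not uniformly distributed (their invariant density is arcsine-shaped), which is part of what distinguishes chaotic exploration from stochastic exploration.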


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2008

Learning Grouping and Anti-predator Behaviors for Multi-agent Systems

Koichiro Morihiro; Haruhiko Nishimura; Teijiro Isokawa; Nobuyuki Matsui

Several models have been proposed for describing grouping behavior such as bird flocking, terrestrial animal herding, and fish schooling. In these models, a fixed rule has been imposed on each individual a priori for its interactions, in a reductive and rigid manner. We have proposed a new framework for self-organized grouping of agents by reinforcement learning. It is important to introduce a learning scheme for developing collective behavior in artificial autonomous distributed systems, and this scheme can be expanded to cases in which predators are present. In this study, we integrate grouping and anti-predator behaviors into our proposed scheme. The behavior of agents is demonstrated and evaluated in detail through computer simulations, and the grouping and anti-predator behaviors developed as a result of learning are shown to be diverse and robust under changes to some parameters of the scheme.


Meeting of the Association for Computational Linguistics | 2006

Reinforcing English Countability Prediction with One Countability per Discourse Property

Ryo Nagata; Atsuo Kawai; Koichiro Morihiro; Naoki Isu

Countability of English nouns is important in various natural language processing tasks. It plays an especially important role in machine translation, since it determines the range of possible determiners. This paper proposes a method for reinforcing countability prediction by introducing a novel concept called one countability per discourse, which claims that when a noun appears more than once in a discourse, all of its occurrences share the same countability. The basic idea of the proposed method is that mispredictions can be correctly overridden by efficiently exploiting the one-countability-per-discourse property. Experiments show that the proposed method successfully reinforces countability prediction and outperforms the other methods used for comparison.
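
The override step can be sketched as a majority vote over all occurrences of a noun within a discourse; the tie-breaking and the flat data layout here are simplifying assumptions:

    from collections import Counter, defaultdict

    def apply_one_countability_per_discourse(predictions):
        """predictions: list of (noun, predicted_countability) pairs for one
        discourse, e.g. [("data", "count"), ("data", "mass"), ...].
        Returns the list with each noun's predictions overridden by the
        majority label among its occurrences."""
        votes = defaultdict(Counter)
        for noun, label in predictions:
            votes[noun][label] += 1
        majority = {noun: c.most_common(1)[0][0] for noun, c in votes.items()}
        return [(noun, majority[noun]) for noun, _ in predictions]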


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2007

Reinforcement learning scheme for grouping and anti-predator behavior

Koichiro Morihiro; Haruhiko Nishimura; Teijiro Isokawa; Nobuyuki Matsui

Collective behavior such as bird flocking, land animal herding, and fish schooling is well known in nature. Many observations have shown that there are no leaders controlling the behavior of a group. Several models have been proposed for describing grouping behavior, which we regard as a distinctive example of aggregate motion. In these models, a fixed rule is provided to each individual a priori for its interactions, in a reductive and rigid manner. In contrast, we propose a new framework for the self-organized grouping of agents by reinforcement learning. It is important to introduce a learning scheme for producing collective behavior in artificial autonomous distributed systems. The behavior of agents is demonstrated and evaluated through computer simulations, and it is shown that grouping and anti-predator behavior emerge as a result of learning.


Society of Instrument and Control Engineers of Japan | 2006

Characteristics of Flocking Behavior Model by Reinforcement Learning Scheme

Koichiro Morihiro; Teijiro Isokawa; Haruhiko Nishimura; Nobuyuki Matsui

Grouping motion of creatures is observed in various scenes in nature; bird flocking, land animal herding, and fish schooling are well-known typical cases. Many observations have shown that there are no leading agents controlling the behavior of the group. Several models have been proposed for describing flocking behavior. In these models, a fixed rule is given to each individual a priori for its interactions, in a reductive and rigid manner. Instead, we have proposed a new framework for self-organized flocking of agents by reinforcement learning. Introducing a learning scheme for producing collective behavior will become important in artificial autonomous distributed systems. In this paper, anti-predator behaviors of agents are examined under our scheme through computer simulations. We demonstrate the features of the behavior under two learning modes: toward agents of the same kind and toward predators.


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2005

Reinforcement learning by chaotic exploration generator in target capturing task

Koichiro Morihiro; Teijiro Isokawa; Nobuyuki Matsui; Haruhiko Nishimura

Exploration, a process of trial and error, plays a very important role in reinforcement learning. As a generator for exploration, a uniform pseudorandom number generator is commonly used. However, a chaotic source is also known to provide a random-like sequence, much as a stochastic source does. Applying this random-like feature of deterministic chaos to the exploration generator, we previously found that a deterministic chaotic exploration generator based on the logistic map gives better performance than a stochastic random exploration generator in a nonstationary shortcut maze problem. In this research, to confirm this difference in performance, we examine target capturing as another nonstationary task. The simulation results for this task corroborate the results of our previous work.


International Journal of Bifurcation and Chaos | 2006

Chaotic Exploration Effects on Reinforcement Learning in Shortcut Maze Task

Koichiro Morihiro; Nobuyuki Matsui; Haruhiko Nishimura

Reinforcement learning usually requires a process of trial and error called exploration, and a uniform pseudorandom number generator is considered effective in that process. As a generator for exploration, chaotic sources are also useful, since they create random-like sequences much as stochastic sources do. In this research, we investigate the efficiency of a deterministic chaotic exploration generator in learning a nonstationary shortcut maze problem. We find that the deterministic chaotic generator based on the logistic map performs better in exploration than the stochastic random generator. This is made clear by analyzing the difference in performance between the two generators in terms of the patterns of exploration occurrence. We also examine the tent map, which is homeomorphic to the logistic map, in comparison with the other generators.
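
For reference, the logistic map at a = 4 and the tent map are related by the change of variables y = (2/pi) * arcsin(sqrt(x)), the standard textbook conjugacy behind the homeomorphism mentioned above; the snippet below checks the relation numerically:

    import math

    def logistic(x):  # x_{n+1} = 4 x (1 - x)
        return 4.0 * x * (1.0 - x)

    def tent(y):      # y_{n+1} = 2y for y < 1/2, else 2(1 - y)
        return 2.0 * y if y < 0.5 else 2.0 * (1.0 - y)

    def h(x):         # conjugacy: y = (2/pi) * arcsin(sqrt(x))
        return (2.0 / math.pi) * math.asin(math.sqrt(x))

    x = 0.3
    for _ in range(5):
        # h(logistic(x)) should equal tent(h(x)) up to rounding error.
        assert abs(h(logistic(x)) - tent(h(x))) < 1e-9
        x = logistic(x)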


International Conference on Knowledge-Based and Intelligent Information and Engineering Systems | 2010

Reinforcement learning scheme for grouping and characterization of multi-agent network

Koichiro Morihiro; Nobuyuki Matsui; Teijiro Isokawa; Haruhiko Nishimura

Several models have been proposed for describing grouping behavior such as bird flocking, terrestrial animal herding, and fish schooling. In these models, a fixed rule has been imposed on each individual a priori for its interactions, in a reductive and rigid manner. We have proposed a new framework for self-organized grouping of agents by reinforcement learning. It is important to introduce a learning scheme for developing collective behavior in artificial autonomous distributed systems. This scheme can be expanded to cases in which predators are present, and we have integrated grouping and anti-predator behaviors into it. The behavior of agents has been demonstrated and evaluated in detail through computer simulations, and the grouping and anti-predator behaviors developed as a result of learning were shown to be diverse and robust under changes to some parameters of the scheme. In this study, we investigate the network structure of agents in the process of learning these behaviors. From the viewpoint of complex networks, the average shortest path length and the clustering coefficient are evaluated through computer simulations.
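
A minimal sketch of the two network measurements using the networkx library, assuming the agents' interaction network is available as an undirected graph; the edge list is hypothetical:

    import networkx as nx

    # Hypothetical interaction network: nodes are agents, edges connect
    # agents that interacted during learning.
    G = nx.Graph()
    G.add_edges_from([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)])

    # Average shortest path length (defined for a connected graph).
    L = nx.average_shortest_path_length(G)

    # Average clustering coefficient over all nodes.
    C = nx.average_clustering(G)

    print(f"average shortest path length: {L:.3f}")
    print(f"clustering coefficient:       {C:.3f}")

A small-world structure would show up as a short average path length combined with a high clustering coefficient.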

Collaboration


Dive into Koichiro Morihiro's collaboration.

Top Co-Authors

Junichi Kakegawa

Hyogo University of Teacher Education


Koji Suda

Hyogo University of Teacher Education
