Publication


Featured research published by Jun Tani.


PLOS Computational Biology | 2008

Emergence of Functional Hierarchy in a Multiple Timescale Neural Network Model: A Humanoid Robot Experiment

Yuichi Yamashita; Jun Tani

It is generally thought that skilled behavior in human beings results from a functional hierarchy of the motor control system, within which reusable motor primitives are flexibly integrated into various sensori-motor sequence patterns. The underlying neural mechanisms governing the way in which continuous sensori-motor flows are segmented into primitives and the way in which series of primitives are integrated into various behavior sequences have, however, not yet been clarified. In earlier studies, this functional hierarchy has been realized through the use of explicit hierarchical structure, with local modules representing motor primitives in the lower level and a higher module representing sequences of primitives switched via additional mechanisms such as gate-selecting. When sequences contain similarities and overlap, however, a conflict arises in such earlier models between generalization and segmentation, induced by this separated modular structure. To address this issue, we propose a different type of neural network model. The current model neither makes use of separate local modules to represent primitives nor introduces explicit hierarchical structure. Rather than forcing architectural hierarchy onto the system, functional hierarchy emerges through a form of self-organization that is based on two distinct types of neurons, each with different time properties (“multiple timescales”). Through the introduction of multiple timescales, continuous sequences of behavior are segmented into reusable primitives, and the primitives, in turn, are flexibly integrated into novel sequences. In experiments, the proposed network model, coordinating the physical body of a humanoid robot through high-dimensional sensori-motor control, also successfully situated itself within a physical environment. Our results suggest that it is not only the spatial connections between neurons but also the timescales of neural activity that act as important mechanisms leading to functional hierarchy in neural systems.
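
The "multiple timescales" mechanism described above can be illustrated with a small continuous-time recurrent network whose units integrate with different time constants. The sketch below is a minimal illustration only; the layer sizes, time constants, input dimensionality, and random weights are assumptions chosen for demonstration, not the parameters of the trained model in the paper.

```python
import numpy as np

# Minimal sketch of the "multiple timescales" idea: leaky-integrator (CTRNN-style)
# units split into a fast group (small time constant) and a slow group (large time
# constant). All sizes, constants, and weights are illustrative assumptions.

rng = np.random.default_rng(0)
n_fast, n_slow = 30, 10                         # fast context units vs. slow context units
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),     # small tau -> fast dynamics
                      np.full(n_slow, 50.0)])   # large tau -> slow dynamics
W = rng.normal(scale=0.1, size=(n, n))          # recurrent weights
W_in = rng.normal(scale=0.1, size=(n, 4))       # input weights (e.g. proprioception)

def step(u, x):
    """One Euler step (dt = 1) of tau * du/dt = -u + W @ tanh(u) + W_in @ x."""
    return u + (-u + W @ np.tanh(u) + W_in @ x) / tau

u = np.zeros(n)
for t in range(100):
    x = np.zeros(4)                             # placeholder sensory input
    u = step(u, x)
```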


Neural Networks | 1999

Learning to perceive the world as articulated: an approach for hierarchical learning in sensory-motor systems

Jun Tani; Stefano Nolfi

This paper describes how agents can learn an internal model of the world structurally by focusing on the problem of behavior-based articulation. We develop an on-line learning scheme, the so-called mixture of recurrent neural net (RNN) experts, in which a set of RNN modules becomes self-organized as experts on multiple levels, in order to account for the different categories of sensory-motor flow which the robot experiences. Autonomous switching of activated modules in the lower level actually represents the articulation of the sensory-motor flow. In the meantime, a set of RNNs in the higher level competes to learn the sequences of module switching in the lower level, by which articulation at a further, more abstract level can be achieved. The proposed scheme was examined through simulation experiments involving the navigation learning problem. Our dynamical system analysis clarified the mechanism of the articulation. The possible correspondence between the articulation mechanism and the attention switching mechanism in thalamo-cortical loops is also discussed.
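
The gating idea behind a mixture-of-experts scheme can be sketched as follows. The experts here are stand-in linear predictors rather than RNN modules, and the soft responsibility rule based on prediction error is one common formulation used purely for illustration; all names, sizes, and the variance parameter are assumptions, not the exact learning rule of the paper.

```python
import numpy as np

# Sketch of soft gating in a mixture of experts: each expert predicts the next
# sensory-motor state, and a softmax over (negative) prediction errors assigns
# responsibility. In the paper the experts are RNN modules; linear predictors
# are used here only to keep the sketch short.

rng = np.random.default_rng(1)

class LinearExpert:
    """Stand-in for an RNN expert module: predicts the next state linearly."""
    def __init__(self, dim):
        self.A = rng.normal(scale=0.1, size=(dim, dim))
    def predict(self, s):
        return self.A @ s

def gate_weights(experts, s, s_next, sigma=0.5):
    """Softmax responsibility of each expert, based on its prediction error."""
    errs = np.array([np.sum((e.predict(s) - s_next) ** 2) for e in experts])
    logits = -errs / (2 * sigma ** 2)
    w = np.exp(logits - logits.max())
    return w / w.sum()

experts = [LinearExpert(3) for _ in range(4)]
s, s_next = rng.normal(size=3), rng.normal(size=3)
print(gate_weights(experts, s, s_next))   # which expert "wins" this segment
```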


Neural Networks | 2004

Self-organization of distributedly represented multiple behavior schemata in a mirror system: reviews of robot experiments using RNNPB

Jun Tani; Masato Ito; Yuuya Sugita

The current paper reviews a connectionist model, the recurrent neural network with parametric biases (RNNPB), in which multiple behavior schemata can be learned by the network in a distributed manner. The parametric biases in the network play an essential role in both generating and recognizing behavior patterns; they act as a mirror system by self-organizing adequate memory structures. Three different robot experiments are reviewed: robot and user interactions; learning and generating different types of dynamic patterns; and linguistic-behavior binding. The hallmark of this study is its explanation of how self-organizing internal structures can contribute to generalization in learning and to diversity in behavior generation within the proposed distributed representation scheme.
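
The role of the parametric biases in generation can be sketched with a toy recurrent network that receives a fixed PB vector at every step, so that different PB settings select different generated sequences. All sizes and the random (untrained) weights below are illustrative assumptions; the actual RNNPB is trained with backpropagation through time.

```python
import numpy as np

# Minimal sketch of the parametric-bias (PB) idea: a small bias vector is fed to
# the same recurrent network at every step, and its value selects which pattern
# the network generates. Weights here are random, for illustration only.

rng = np.random.default_rng(2)
n_hidden, n_pb, n_out = 20, 2, 3
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
W_pb = rng.normal(scale=0.3, size=(n_hidden, n_pb))
W_out = rng.normal(scale=0.3, size=(n_out, n_hidden))

def generate(pb, steps=50):
    """Closed-loop generation: the PB vector is held constant for the whole sequence."""
    h, outputs = np.zeros(n_hidden), []
    for _ in range(steps):
        h = np.tanh(W_h @ h + W_pb @ pb)
        outputs.append(W_out @ h)
    return np.array(outputs)

seq_a = generate(np.array([1.0, -1.0]))    # one PB setting -> one behavior pattern
seq_b = generate(np.array([-1.0, 1.0]))    # another PB setting -> a different pattern
```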


IEEE Transactions on Systems, Man, and Cybernetics | 2003

Self-organization of behavioral primitives as multiple attractor dynamics: A robot experiment

Jun Tani; Masato Ito

This paper investigates how behavior primitives are self-organized in a neural network model utilizing a distributed representation scheme. The model is characterized by so-called parametric biases which adaptively modulate the encoding of different behavior patterns in a single recurrent neural net (RNN). Our experiments, using a real robot arm, showed that a set of end-point and oscillatory behavior patterns are learned by self-organizing fixed points and limit cycle dynamics that form behavior primitives. It was also found that diverse novel behavior patterns, in addition to previously learned patterns, can be generated by modulating the parametric biases arbitrarily. Our analysis showed that such diversity in behavior generation emerges because a nonlinear map is self-organized between the space of parametric biases and that of the behavior patterns. The origin of the observed nonlinearity from the distributed representation is discussed.
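
As a loose analogy for the attractor picture above, the following toy system shows how a single bias-like parameter can switch a dynamical system between a fixed-point attractor (an end-point behavior) and a limit cycle (an oscillatory behavior). It uses the Hopf normal form purely for illustration; it is not the network model of the paper.

```python
import numpy as np

# Toy illustration only: the Hopf normal form, where one parameter mu selects
# between a stable fixed point (mu < 0) and a limit cycle (mu > 0), analogous to
# how a parametric bias can select qualitatively different behavior primitives.

def simulate(mu, steps=2000, dt=0.01):
    x, y, traj = 0.5, 0.0, []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = mu * x - y - r2 * x
        dy = x + mu * y - r2 * y
        x, y = x + dt * dx, y + dt * dy
        traj.append((x, y))
    return np.array(traj)

end_point = simulate(mu=-0.5)    # spirals into the origin (fixed-point attractor)
oscillation = simulate(mu=0.5)   # settles onto a cycle of radius sqrt(mu)
```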


Adaptive Behavior | 2005

Learning Semantic Combinatoriality from the Interaction between Linguistic and Behavioral Processes

Yuuya Sugita; Jun Tani

We present a novel connectionist model for acquiring the semantics of a simple language through the behavioral experiences of a real robot. We focus on the “compositionality” of semantics and examine how it can be generated through experiments. Our experimental results showed that the essential structures for situated semantics can self-organize through dense interactions between linguistic and behavioral processes, whereby a certain generalization in learning is achieved. Our analysis of the acquired dynamical structures indicates that an equivalence of compositionality appears in the combinatorial mechanics self-organized in the neuronal nonlinear dynamics. The paper discusses how this dynamical-systems-based mechanism of compositionality differs from that considered in conventional linguistics and other synthetic computational models.


Neural Networks | 2003

Learning to generate articulated behavior through the bottom-up and the top-down interaction processes

Jun Tani

A novel hierarchical neural network architecture for sensory-motor learning and behavior generation is proposed. Two levels of forward model neural networks are operated on different time scales while parametric interactions are allowed between the two network levels in the bottom-up and top-down directions. The models are examined through experiments of behavior learning and generation using a real robot arm equipped with a vision system. The results of the learning experiments showed that the behavioral patterns are learned by self-organizing the behavioral primitives in the lower level and combining the primitives sequentially in the higher level. The results contrast with prior work by Pawelzik et al. [Neural Comput. 8 (1996) 340], Tani and Nolfi [From animals to animats, 1998], and Wolpert and Kawato [Neural Networks 11 (1998) 1317] in that the primitives are represented in a distributed manner in the network in the present scheme whereas, in the prior work, the primitives were localized in specific modules in the network. Further experiments of on-line planning showed that the behavior could be generated robustly against a background of real world noise while the behavior plans could be modified flexibly in response to changes in the environment. It is concluded that the interaction between the bottom-up process of recalling the past and the top-down process of predicting the future enables both robust and flexible situated behavior.
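
A minimal sketch of the two-level arrangement described above: a higher-level network updates on a slower time scale and emits a parameter vector that biases a faster, lower-level network (top-down), while the lower level's prediction error would be used to adjust that parameter on-line (bottom-up, indicated only as a comment here). All sizes, the update stride, and the random weights are assumptions made for illustration.

```python
import numpy as np

# Sketch of two forward-model levels on different time scales. The higher level
# runs once every `stride` steps and sends a parametric bias to the lower level
# (top-down); the bottom-up error regression of the actual scheme is omitted.

rng = np.random.default_rng(3)
n_low, n_high, n_param, n_obs, stride = 16, 8, 2, 3, 10
W_low = rng.normal(scale=0.3, size=(n_low, n_low))
W_lp = rng.normal(scale=0.3, size=(n_low, n_param))
W_high = rng.normal(scale=0.3, size=(n_high, n_high))
W_hp = rng.normal(scale=0.3, size=(n_param, n_high))
W_pred = rng.normal(scale=0.3, size=(n_obs, n_low))

h_low, h_high = np.zeros(n_low), np.zeros(n_high)
param = np.zeros(n_param)
for t in range(100):
    if t % stride == 0:                               # slower, higher level
        h_high = np.tanh(W_high @ h_high)
        param = W_hp @ h_high                         # top-down parametric bias
    h_low = np.tanh(W_low @ h_low + W_lp @ param)     # faster, lower level
    prediction = W_pred @ h_low                       # predicted next observation
    # bottom-up: compare `prediction` with the observed state and adjust `param`
    # on-line (omitted in this sketch)
```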


IEEE Transactions on Autonomous Mental Development | 2010

Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics

Angelo Cangelosi; Giorgio Metta; Gerhard Sagerer; Stefano Nolfi; Chrystopher L. Nehaniv; Kerstin Fischer; Jun Tani; Tony Belpaeme; Giulio Sandini; Francesco Nori; Luciano Fadiga; Britta Wrede; Katharina J. Rohlfing; Elio Tuci; Kerstin Dautenhahn; Joe Saunders; Arne Zeschel

This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically for the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.


Adaptive Behavior | 2004

On-line imitative interaction with a humanoid robot using a dynamic neural network model of a mirror system

Masato Ito; Jun Tani

This study presents experiments on the imitative interactions between a small humanoid robot and a user. A dynamic neural network model of a mirror system was implemented in a humanoid robot, based on the recurrent neural network model with parametric bias (RNNPB). The experiments showed that after the robot learns multiple cyclic movement patterns as embedded in the RNNPB, it can regenerate each pattern synchronously with the movements of a human who is demonstrating the corresponding movement pattern in front of the robot. Further, the robot exhibits diverse interactive responses when the user demonstrates novel cyclic movement patterns. Those responses were analyzed and categorized. We propose that the dynamics of coherence and incoherence between the robot’s and the user’s movements could enhance close interactions between them, and that they could also explain the essential psychological mechanism of joint attention.
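
Recognition with parametric biases, as used in the mirror-system interpretation above, can be sketched by holding the generative network fixed and adjusting only the PB vector to reduce the mismatch with an observed sequence. The paper does this on-line with backpropagation through time; the crude finite-difference update and the random (untrained) weights below are illustrative assumptions.

```python
import numpy as np

# Sketch of PB-based recognition: keep the generative network fixed and search
# over the PB vector so that the generated sequence matches an observed one.
# Finite-difference gradient descent is used here only to keep the sketch short.

rng = np.random.default_rng(4)
n_hidden, n_pb, n_out = 20, 2, 3
W_h = rng.normal(scale=0.3, size=(n_hidden, n_hidden))
W_pb = rng.normal(scale=0.3, size=(n_hidden, n_pb))
W_out = rng.normal(scale=0.3, size=(n_out, n_hidden))

def generate(pb, steps=30):
    h, out = np.zeros(n_hidden), []
    for _ in range(steps):
        h = np.tanh(W_h @ h + W_pb @ pb)
        out.append(W_out @ h)
    return np.array(out)

observed = generate(np.array([0.8, -0.3]))   # stand-in for the demonstrated pattern

def loss(pb):
    return np.mean((generate(pb) - observed) ** 2)

pb, eps, lr = np.zeros(n_pb), 1e-4, 0.2
for _ in range(200):                         # finite-difference gradient descent
    grad = np.array([(loss(pb + eps * e) - loss(pb - eps * e)) / (2 * eps)
                     for e in np.eye(n_pb)])
    pb -= lr * grad
# `pb` should move toward the PB value that produced the observed pattern
```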


Neural Networks | 2006

2006 Special issue: Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model

Masato Ito; Kuniaki Noda; Yukiko Hoshino; Jun Tani

This study presents experiments on the learning of object handling behaviors by a small humanoid robot using a dynamic neural network model, the recurrent neural network with parametric bias (RNNPB). The first experiment showed that after the robot learned different types of ball handling behaviors through direct human teaching, it was able to generate adequate ball handling motor sequences situated to the relative position between the robot's hands and the ball. The same scheme was applied to a block handling learning task, where it was shown that the robot can switch among the different learned block handling sequences, situated to the ways the human supporters interact with it. Our analysis showed that entrainment of the internal memory structures of the RNNPB through interactions with the objects and the human supporters is the essential mechanism behind the observed situated behaviors of the robot.


Adaptive Behavior | 2005

How Hierarchical Control Self-organizes in Artificial Adaptive Systems

Rainer W. Paine; Jun Tani

Diverse, complex, and adaptive animal behaviors are achieved by organizing hierarchically structured controllers in motor systems. The levels of control progress from simple spinal reflexes and central pattern generators through to executive cognitive control in the frontal cortex. Various types of hierarchical control structures have been introduced and shown to be effective in past artificial agent models, but few studies have shown how such structures can self-organize. This study describes how such hierarchical control may evolve in a simple recurrent neural network model implemented in a mobile robot. Topological constraints on information flow are found to improve system performance by decreasing interference between different parts of the network. One part becomes responsible for generating lower behavior primitives while another part evolves top-down sequencing of the primitives for achieving global goals. Fast and slow neuronal response dynamics are automatically generated in specific neurons of the lower and the higher levels, respectively. A hierarchical neural network is shown to outperform a comparable single-level network in controlling a mobile robot.
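
The topological constraints mentioned above can be sketched as a binary connectivity mask that restricts interaction between a lower and a higher sub-network to a few designated links. The mask layout, sizes, and random weights below are illustrative assumptions; in the paper the network weights are found by evolutionary search.

```python
import numpy as np

# Sketch of a "topological constraint" on information flow: a binary mask limits
# which recurrent connections exist, so the lower and higher sub-networks talk to
# each other only through single designated links. Convention: W[i, j] is the
# connection from unit j to unit i.

n_low, n_high = 8, 4
n = n_low + n_high
mask = np.zeros((n, n))
mask[:n_low, :n_low] = 1.0        # dense recurrence within the lower level
mask[n_low:, n_low:] = 1.0        # dense recurrence within the higher level
mask[n_low - 1, n_low] = 1.0      # one top-down link (higher unit -> lower unit)
mask[n_low, n_low - 1] = 1.0      # one bottom-up link (lower unit -> higher unit)

rng = np.random.default_rng(5)
W = rng.normal(scale=0.3, size=(n, n)) * mask   # disallowed connections stay zero

h = np.zeros(n)
for _ in range(50):
    h = np.tanh(W @ h)            # constrained recurrent update
```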

Collaboration


Top co-authors in Jun Tani's collaboration network.

Jun Namikawa | RIKEN Brain Science Institute

Yuuya Sugita | RIKEN Brain Science Institute

Ryunosuke Nishimoto | RIKEN Brain Science Institute