Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Hiroaki Arie is active.

Publication


Featured research published by Hiroaki Arie.


Robotics and Autonomous Systems | 2014

Multimodal integration learning of robot behavior using deep neural networks

Kuniaki Noda; Hiroaki Arie; Yuki Suga; Tetsuya Ogata

For humans to accurately understand the world around them, multimodal integration is essential because it enhances perceptual precision and reduces ambiguity. Computational models replicating such human ability may contribute to the practical use of robots in daily human living environments; however, primarily because of scalability problems that conventional machine learning algorithms suffer from, sensory-motor information processing in robotic applications has typically been achieved via modal-dependent processes. In this paper, we propose a novel computational framework enabling the integration of sensory-motor time-series data and the self-organization of multimodal fused representations based on a deep learning approach. To evaluate our proposed model, we conducted two behavior-learning experiments utilizing a humanoid robot; the experiments consisted of object manipulation and bell-ringing tasks. From our experimental results, we show that large amounts of sensory-motor information, including raw RGB images, sound spectrums, and joint angles, are directly fused to generate higher-level multimodal representations. Further, we demonstrated that our proposed framework realizes the following three functions: (1) cross-modal memory retrieval utilizing the information complementation capability of the deep autoencoder; (2) noise-robust behavior recognition utilizing the generalization capability of multimodal features; and (3) multimodal causality acquisition and sensory-motor prediction based on the acquired causality.

Highlights:
- Novel computational framework for sensory-motor integration learning.
- Cross-modal memory retrieval utilizing a deep autoencoder.
- Noise-robust behavior recognition utilizing acquired multimodal features.
- Multimodal causality acquisition and sensory-motor prediction.
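
The fusion mechanism described above lends itself to a compact illustration. Below is a minimal NumPy sketch of the idea, not the paper's actual architecture: modality-specific encoders project image, sound, and joint-angle features into a shared bottleneck, and cross-modal retrieval is performed by zeroing out one modality at the input and reading its reconstruction from the decoder. All layer sizes and weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random, untrained weights; the actual model learns these as a
    # deep autoencoder, so only the data flow is meaningful here.
    return rng.normal(0, 0.1, (n_in, n_out))

# Hypothetical feature sizes for image, sound, and joint-angle inputs.
W_img, W_snd, W_jnt = layer(100, 30), layer(40, 30), layer(10, 30)
W_fuse = layer(90, 20)   # shared multimodal bottleneck
W_out = layer(20, 150)   # decoder back to all modalities (100 + 40 + 10)

def encode(img, snd, jnt):
    h = np.concatenate([np.tanh(img @ W_img),
                        np.tanh(snd @ W_snd),
                        np.tanh(jnt @ W_jnt)])
    return np.tanh(h @ W_fuse)   # fused multimodal representation

def decode(z):
    return z @ W_out             # reconstruction of all modalities

snd = rng.normal(size=40)
jnt = rng.normal(size=10)

# Cross-modal retrieval: present sound and joint angles only, zero out
# vision, and read the visual part of the reconstruction from the code.
z = encode(np.zeros(100), snd, jnt)
retrieved_image_features = decode(z)[:100]
print(retrieved_image_features.shape)   # (100,)
```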


IEEE Transactions on Autonomous Mental Development | 2013

Learning to Reproduce Fluctuating Time Series by Inferring Their Time-Dependent Stochastic Properties: Application in Robot Learning Via Tutoring

Shingo Murata; Jun Namikawa; Hiroaki Arie; Shigeki Sugano; Jun Tani

This study proposes a novel type of dynamic neural network model that can learn to extract stochastic or fluctuating structures hidden in time-series data. The network learns to predict not only the mean of the next input state but also its time-dependent variance. The training method is based on maximum likelihood estimation using gradient descent, with the likelihood function expressed as a function of the estimated variance. To evaluate the model, we present numerical experiments in which training data were generated in different ways utilizing Gaussian noise. Our analysis showed that the network can predict both the time-dependent variance and the mean, and that it can reproduce the target stochastic sequence data by utilizing the estimated variance. Furthermore, it was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories. This learning scheme is essential for the acquisition of sensory-guided skilled behavior.
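
The training criterion described, maximum likelihood with an estimated variance, corresponds to minimizing a Gaussian negative log-likelihood. Here is a minimal sketch of that criterion on a toy target series whose noise level grows over time; all specifics below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def gaussian_nll(x, mu, log_var):
    # Negative log-likelihood of x under N(mu, exp(log_var)).
    # Minimizing this trains a network to match both the mean and
    # the time-dependent variance of the target series.
    var = np.exp(log_var)
    return 0.5 * (np.log(2 * np.pi) + log_var + (x - mu) ** 2 / var)

# Toy target: a sine wave whose noise level grows over time.
t = np.linspace(0, 4 * np.pi, 200)
noise_sd = 0.05 + 0.1 * t / t[-1]
x = np.sin(t) + np.random.default_rng(1).normal(0, noise_sd)

# An ideal predictor would output mu ~ sin(t) and var ~ noise_sd^2.
mu_hat = np.sin(t)
log_var_hat = 2 * np.log(noise_sd)
print(gaussian_nll(x, mu_hat, log_var_hat).mean())
```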


Robotics and Autonomous Systems | 2012

Imitating others by composition of primitive actions: A neuro-dynamic model

Hiroaki Arie; Takafumi Arakaki; Shigeki Sugano; Jun Tani

This paper introduces a novel neuro-dynamical model that accounts for possible mechanisms of action imitation and learning. Imitation learning is considered to require at least two classes of generalization: one over sensory-motor trajectory variance, and the other at the cognitive level, concerning a more qualitative understanding of compositional actions, both one's own and others', that does not necessarily depend on exact trajectories. This paper describes a possible model dealing with both classes of generalization by focusing on the problem of action compositionality. The model was evaluated in experiments using a small humanoid robot. The robot was trained on a set of object-manipulation actions that can be decomposed into sequences of action primitives. The robot was then asked to imitate a novel compositional action, demonstrated by a human subject, composed of previously learned action primitives. The results showed that the novel action could be successfully imitated by decomposing it into, and recomposing it from, these primitives, through the organization of a unified intentional representation hosted by mirror neurons, even though the observed and self-generated actions differ in their trajectory-level appearance.
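
A toy illustration of the compositional idea, detached from the paper's neuro-dynamic machinery: a library of learned primitives is recombined, under a sequence of intentions, into a trajectory never demonstrated as a whole. The primitive names and velocity profiles below are invented for illustration.

```python
import numpy as np

# Hypothetical primitive library: each primitive is a short joint-velocity
# profile the robot has already learned.
primitives = {
    "reach": np.linspace(0.0, 1.0, 20),
    "grasp": np.full(10, 0.2),
    "lift":  np.linspace(0.0, 0.5, 15),
}

def compose(intention):
    # A demonstrated action is imitated as a sequence of known primitives,
    # not as a copy of the exact observed trajectory.
    return np.concatenate([primitives[name] for name in intention])

# A novel compositional action never shown during training as a whole.
trajectory = compose(["reach", "grasp", "lift", "reach"])
print(trajectory.shape)
```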


New Mathematics and Natural Computation | 2009

CREATING NOVEL GOAL-DIRECTED ACTIONS AT CRITICALITY: A NEURO-ROBOTIC EXPERIMENT

Hiroaki Arie; Tetsuro Endo; Takafumi Arakaki; Shigeki Sugano; Jun Tani

The present study examines the possible roles of cortical chaos in generating novel actions for achieving specified goals. The proposed neural network model consists of a sensory-forward model responsible for parietal lobe functions, a chaotic network model for premotor functions, and a prefrontal cortex model responsible for manipulating the initial state of the chaotic network. Experiments performed with the model on a humanoid robot showed that action plans satisfying specific novel goals can be generated by diversely modulating and combining previously learned behavioral patterns at critical dynamical states. Although this criticality resulted in fragile goal achievement in the robot's physical environment, reinforcement of the successful trials provided a substantial gain in robustness. The discussion leads to the hypothesis that the consolidation of numerous sensory-motor experiences into memory, the mediation of diverse imagery in that memory by cortical chaos, and the repeated enaction and reinforcement of newly generated effective trials are indispensable for realizing an open-ended development of cognitive behaviors.
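
The core mechanism, diverse action plans generated by modulating the initial state of chaotic dynamics, can be caricatured with a one-dimensional chaotic map. This is a sketch of the idea only; the paper uses a trained neural network model, not a logistic map, and the goal-search procedure below is invented.

```python
import numpy as np

def chaotic_rollout(x0, steps=30, r=3.9):
    # Logistic map in its chaotic regime: tiny changes in the initial
    # state produce qualitatively different trajectories.
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

def plan_for_goal(goal, n_candidates=500):
    # The "prefrontal" role here: search over initial states of the
    # chaotic dynamics for a rollout whose end state satisfies the goal.
    rng = np.random.default_rng(2)
    best = min((rng.uniform(0, 1) for _ in range(n_candidates)),
               key=lambda x0: abs(chaotic_rollout(x0)[-1] - goal))
    return best, chaotic_rollout(best)[-1]

x0, reached = plan_for_goal(goal=0.7)
print(f"initial state {x0:.4f} ends near {reached:.4f}")
```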


Advanced Robotics | 2007

Reinforcement learning of a continuous motor sequence with hidden states

Hiroaki Arie; Tetsuya Ogata; Jun Tani; Shigeki Sugano

Reinforcement learning is a scheme for unsupervised learning in which robots are expected to acquire behavioral skills through self-exploration based on reward signals. There are some difficulties, however, in applying conventional reinforcement learning algorithms to motion control tasks of a robot, because most algorithms are concerned with discrete state spaces and assume complete observability of the state. Real-world environments are often only partially observable, so robots have to estimate unobservable hidden states. This paper proposes a method to solve these two problems by combining a reinforcement learning algorithm with a learning algorithm for a continuous-time recurrent neural network (CTRNN). The CTRNN can learn spatio-temporal structures in a continuous time and space domain, and can preserve contextual flow by self-organizing an appropriate internal memory structure. This enables the robot to deal with the hidden-state problem. We carried out an experiment on the pendulum swing-up task without rotational speed information. The task was accomplished in several hundred trials using the proposed algorithm. In addition, it is shown that information about the rotational speed of the pendulum, which is treated as a hidden state, is estimated and encoded in the activation of a context neuron.
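
A CTRNN's defining update is leaky integration of an internal state, tau * du/dt = -u + W y + input with y = tanh(u). Below is a minimal Euler-integrated sketch; the network size, time constant, weights, and input encoding are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class CTRNN:
    """Minimal continuous-time RNN with Euler integration (a sketch)."""

    def __init__(self, n, tau=5.0, dt=1.0, seed=3):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))
        self.tau, self.dt = tau, dt
        self.u = np.zeros(n)          # internal (membrane) state

    def step(self, inp):
        y = np.tanh(self.u)           # firing rates
        # tau * du/dt = -u + W y + input, integrated with step dt.
        du = (-self.u + self.W @ y + inp) / self.tau
        self.u = self.u + self.dt * du
        return y

net = CTRNN(n=20)
for t in range(100):
    obs = np.zeros(20)
    obs[0] = np.sin(0.1 * t)          # e.g. pendulum angle; no velocity given
    y = net.step(obs)
# The slow internal state can come to encode the unobserved velocity,
# which is the hidden-state estimation role described in the abstract.
print(y[:3])
```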


IEEE Transactions on Neural Networks | 2017

Learning to Perceive the World as Probabilistic or Deterministic via Interaction With Others: A Neuro-Robotics Experiment

Shingo Murata; Yuichi Yamashita; Hiroaki Arie; Tetsuya Ogata; Shigeki Sugano; Jun Tani

We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotics learning experiments in which one robot controlled by the S-MTRNN was required to interact with another robot under the condition of uncertainty about the other’s behavior. The experimental results show that self-organized and sensory reflex behavior—based on probabilistic prediction—emerges when learning proceeds without a precise specification of initial conditions. In contrast, intentional proactive behavior with deterministic predictions emerges when precise initial conditions are available. The results also showed that, in situations where unanticipated behavior of the other robot was perceived, the behavioral context was revised adequately by adaptation of the internal neural dynamics to respond to sensory inputs during sensory reflex behavior generation. On the other hand, during intentional proactive behavior generation, an error regression scheme by which the internal neural activity was modified in the direction of minimizing prediction errors was needed for adequately revising the behavioral context. These results indicate that two different ways of treating uncertainty about perceptual events in learning, namely, probabilistic modeling and deterministic modeling, contribute to the development of different dynamic neuronal structures governing the two types of behavior generation schemes.
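
The combination of a multiple-timescale recurrent state with variance prediction can be sketched as a leaky recurrent update with per-unit time constants and two read-out heads, one for the mean and one for the log-variance. Everything below (sizes, time constants, random weights) is an illustrative assumption, not the S-MTRNN's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(4)
n_fast, n_slow, n_out = 16, 8, 4
# Fast units update quickly; slow units integrate over long horizons.
taus = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])
n = n_fast + n_slow
W = rng.normal(0, 1 / np.sqrt(n), (n, n))
W_mu = rng.normal(0, 0.1, (n, n_out))        # mean prediction head
W_lv = rng.normal(0, 0.1, (n, n_out))        # log-variance head

u = np.zeros(n)
for t in range(50):
    y = np.tanh(u)
    u = (1 - 1 / taus) * u + (1 / taus) * (W @ y)  # leaky multi-timescale update
    mu, log_var = y @ W_mu, y @ W_lv
# Predicting variance lets the network mark which sensory dimensions are
# uncertain (probabilistic) versus reliably predictable (deterministic).
print(mu, np.exp(log_var))
```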


Cognitive Neurodynamics | 2012

Neuro-robotics study on integrative learning of proactive visual attention and motor behaviors

Sungmoon Jeong; Hiroaki Arie; Minho Lee; Jun Tani

The current paper proposes a novel model for the integrative learning of proactive visual attention and sensory-motor control, as inspired by the premotor theory of visual attention. The model is characterized by coupling a slow dynamics network with a fast dynamics network, inheriting our previously proposed multiple timescale recurrent neural network (MTRNN) model, which may correspond to the fronto-parietal networks of the cortex. Neuro-robotics experiments on a multiple-object manipulation task demonstrated that some degree of generalization over position and object-size variation can be achieved by seamlessly integrating proactive object-related visual attention and the related sensory-motor control into a set of action primitives in the distributed neural activities of the fast dynamics network. It was also shown that such action primitives can be combined compositionally when acquiring novel actions in the slow dynamics network. The experimental results substantiate the premotor theory of visual attention.
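
The slow/fast coupling can be sketched as two leaky recurrent populations in which the slow network's state enters the fast network as a top-down bias. The parameters below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(5)

def leaky_step(u, W, drive, tau):
    # One leaky (multiple-timescale) recurrent update.
    return (1 - 1 / tau) * u + (1 / tau) * (W @ np.tanh(u) + drive)

W_slow = rng.normal(0, 0.3, (6, 6))
W_fast = rng.normal(0, 0.3, (12, 12))
W_top = rng.normal(0, 0.3, (12, 6))   # top-down: slow context biases fast net

u_slow, u_fast = np.zeros(6), np.zeros(12)
for t in range(100):
    u_slow = leaky_step(u_slow, W_slow, 0.0, tau=30.0)      # intention level
    top_down = W_top @ np.tanh(u_slow)
    u_fast = leaky_step(u_fast, W_fast, top_down, tau=3.0)  # primitive level
# Attention shifts and motor commands would both be read out of the fast layer.
print(np.tanh(u_fast)[:4])
```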


Intelligent Robots and Systems | 2013

Multimodal integration learning of object manipulation behaviors using deep neural networks

Kuniaki Noda; Hiroaki Arie; Yuki Suga; Tetsuya Ogata

This paper presents a novel computational approach for modeling and generating multiple object manipulation behaviors by a humanoid robot. The contribution of this paper is that deep learning methods are applied not only for multimodal sensor fusion but also for sensory-motor coordination. More specifically, a time-delay deep neural network is applied for modeling multiple behavior patterns represented with multi-dimensional visuomotor temporal sequences. By using the efficient training performance of Hessian-free optimization, the proposed mechanism successfully models six different object manipulation behaviors in a single network. The generalization capability of the learning mechanism enables the acquired model to perform the functions of cross-modal memory retrieval and temporal sequence prediction. The experimental results show that the motion patterns for object manipulation behaviors are successfully generated from the corresponding image sequence, and vice versa. Moreover, the temporal sequence prediction enables the robot to interactively switch multiple behaviors in accordance with changes in the displayed objects.
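
A time-delay network turns sequence prediction into a feed-forward mapping by feeding a sliding window of recent frames as a single input vector. Below is a hedged NumPy sketch of that input scheme; feature sizes, window length, and weights are invented, and the paper additionally trains such a mapping with Hessian-free optimization rather than leaving it untrained.

```python
import numpy as np

rng = np.random.default_rng(6)

def time_delay_input(seq, t, window=5):
    # A time-delay network sees a window of past frames at once,
    # so predicting frame t becomes a feed-forward computation.
    return seq[t - window:t].ravel()

# Toy visuomotor sequence: 8 feature dims (image + motor) per time step.
seq = rng.normal(size=(100, 8))
window, n_in = 5, 5 * 8
W1, W2 = rng.normal(0, 0.1, (n_in, 32)), rng.normal(0, 0.1, (32, 8))

def predict_next(seq, t):
    h = np.tanh(time_delay_input(seq, t, window) @ W1)
    return h @ W2                  # prediction of frame t

err = predict_next(seq, 50) - seq[50]
print(np.mean(err ** 2))
```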


Archive | 2013

Synthetic Approach to Understanding Meta-level Cognition of Predictability in Generating Cooperative Behavior

Jun Namikawa; Ryunosuke Nishimoto; Hiroaki Arie; Jun Tani

We propose that “predictability” is a meta-level cognitive function that accounts for cooperative behaviors and describe this from a dynamical systems perspective based on a neuro-robotic experiment. In order to bring about cooperative behaviors among individuals, individuals should attempt to predict the behavior of their partners by making internal models of them. However, the behaviors of partners are often unpredictable because individuals possess free will to generate their own independent actions. Thus, acquiring internal models which attempt to completely predict the actions of others seems to be intractable. In the current study we suggest that, when learning internal models for interacting with the partners, cooperative agents should maintain predictability monitoring mechanisms by which attention is oriented more toward predictable segments in spatio-temporal sensory input space.
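
One way to caricature such a predictability-monitoring mechanism: weight subsequent learning by a running estimate of prediction error, so that attention concentrates on predictable segments of the sensory stream. The smoothing window and weighting function below are assumptions, not the paper's formulation.

```python
import numpy as np

def predictability_weights(errors, beta=5.0):
    # Attention is oriented toward segments the agent can predict well:
    # low running prediction error -> high weight in subsequent learning.
    smoothed = np.convolve(errors, np.ones(5) / 5, mode="same")
    return np.exp(-beta * smoothed)

errors = np.concatenate([np.full(50, 0.05),   # partner acting predictably
                         np.full(50, 0.8)])   # partner acting freely
w = predictability_weights(errors)
print(w[:3], w[-3:])   # high for the predictable span, low for the free span
```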


International Conference on Development and Learning | 2010

Developmental learning of integrating visual attention shifts and bimanual object grasping and manipulation tasks

Sungmoon Jeong; Minho Lee; Hiroaki Arie; Jun Tani

In order to achieve visually guided object manipulation tasks via learning by example, the current neuro-robotics study considers the integration of two essential mechanisms, visual attention and arm/hand movement, and their adaptive coordination. The study proposes a new dynamic neural network model in which visual attention and motor behavior are associated in task-specific ways by learning, with a self-organizing functional hierarchy required for the cognitive tasks. Top-down visual attention provides a goal-directed sequence of shifts in the visual scan path, and it can guide the generation of a motor plan for hand movement during action via reinforcement and inhibition learning. The proposed model can automatically generate the corresponding goal-directed actions with regard to the current sensory states, including visual stimuli and body postures. The experiments show that developmental learning from basic actions to combinational ones achieves a degree of generalization by which some novel behaviors can be generated without prior learning.
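
The reinforcement-and-inhibition shaping of attention described above can be caricatured as a bandit-style update over candidate fixation targets: attended locations that lead to task success are reinforced, and others are inhibited. The update rule and values below are invented for illustration and are far simpler than the paper's neural network model.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical attention values for three object locations in the scene.
attention_value = np.array([0.5, 0.5, 0.5])

def trial(goal_idx):
    # Top-down attention picks a location; the hand moves toward it.
    focus = int(np.argmax(attention_value + rng.normal(0, 0.1, 3)))
    success = focus == goal_idx
    # Reinforce attended locations that led to task success; inhibit failures.
    attention_value[focus] += 0.1 if success else -0.1
    return success

results = [trial(goal_idx=2) for _ in range(50)]
print(f"success rate: {np.mean(results):.2f}, values: {attention_value}")
```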

Collaboration


Dive into Hiroaki Arie's collaborations.

Top Co-Authors

Jun Namikawa

RIKEN Brain Science Institute
