Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Shingo Murata is active.

Publication


Featured research published by Shingo Murata.


IEEE Transactions on Autonomous Mental Development | 2013

Learning to Reproduce Fluctuating Time Series by Inferring Their Time-Dependent Stochastic Properties: Application in Robot Learning Via Tutoring

Shingo Murata; Jun Namikawa; Hiroaki Arie; Shigeki Sugano; Jun Tani

This study proposes a novel type of dynamic neural network model that can learn to extract stochastic or fluctuating structures hidden in time series data. The network learns to predict not only the mean of the next input state, but also its time-dependent variance. The training method is based on maximum likelihood estimation by using the gradient descent method and the likelihood function is expressed as a function of the estimated variance. Regarding the model evaluation, we present numerical experiments in which training data were generated in different ways utilizing Gaussian noise. Our analysis showed that the network can predict the time-dependent variance and the mean and it can also reproduce the target stochastic sequence data by utilizing the estimated variance. Furthermore, it was shown that a humanoid robot using the proposed network can learn to reproduce latent stochastic structures hidden in fluctuating tutoring trajectories. This learning scheme is essential for the acquisition of sensory-guided skilled behavior.
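The training objective described above, in which the likelihood is expressed as a function of the estimated variance, can be sketched as a Gaussian negative log-likelihood over the network's predicted mean and variance. This is an illustrative reconstruction, not the authors' code; the log-variance parameterization is an assumption made here for numerical stability.

```python
import numpy as np

def gaussian_nll(target, pred_mean, pred_log_var):
    """Negative log-likelihood of targets under a Gaussian whose mean and
    variance are both predicted by the network at every time step.
    Predicting log-variance keeps the variance strictly positive."""
    var = np.exp(pred_log_var)
    return 0.5 * np.mean(np.log(2.0 * np.pi * var)
                         + (target - pred_mean) ** 2 / var)
```

Minimizing this loss by gradient descent trades off fitting the mean against widening the predicted variance wherever the data genuinely fluctuate, which is how the network learns time-dependent stochastic structure.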


IEEE Transactions on Neural Networks | 2017

Learning to Perceive the World as Probabilistic or Deterministic via Interaction With Others: A Neuro-Robotics Experiment

Shingo Murata; Yuichi Yamashita; Hiroaki Arie; Tetsuya Ogata; Shigeki Sugano; Jun Tani

We suggest that different behavior generation schemes, such as sensory reflex behavior and intentional proactive behavior, can be developed by a newly proposed dynamic neural network model, named stochastic multiple timescale recurrent neural network (S-MTRNN). The model learns to predict subsequent sensory inputs, generating both their means and their uncertainty levels in terms of variance (or inverse precision) by utilizing its multiple timescale property. This model was employed in robotics learning experiments in which one robot controlled by the S-MTRNN was required to interact with another robot under the condition of uncertainty about the other’s behavior. The experimental results show that self-organized and sensory reflex behavior—based on probabilistic prediction—emerges when learning proceeds without a precise specification of initial conditions. In contrast, intentional proactive behavior with deterministic predictions emerges when precise initial conditions are available. The results also showed that, in situations where unanticipated behavior of the other robot was perceived, the behavioral context was revised adequately by adaptation of the internal neural dynamics to respond to sensory inputs during sensory reflex behavior generation. On the other hand, during intentional proactive behavior generation, an error regression scheme by which the internal neural activity was modified in the direction of minimizing prediction errors was needed for adequately revising the behavioral context. These results indicate that two different ways of treating uncertainty about perceptual events in learning, namely, probabilistic modeling and deterministic modeling, contribute to the development of different dynamic neuronal structures governing the two types of behavior generation schemes.
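The multiple-timescale property of the S-MTRNN comes from leaky-integrator units whose time constants differ between neuron groups. A minimal sketch of one such update, assuming a discrete-time Euler approximation (the function name and signature are illustrative, not from the paper):

```python
import numpy as np

def leaky_update(u, x, W, tau):
    """One Euler step of a leaky-integrator (CTRNN-style) unit: the internal
    state u decays toward the weighted input W @ x at a rate set by tau.
    Large tau -> slowly changing context units; small tau -> fast units."""
    u_new = (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ x)
    return u_new, np.tanh(u_new)
```

Assigning large tau to one group and small tau to another yields the slow/fast functional hierarchy the abstract refers to.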


Frontiers in Neurorobotics | 2016

Dynamical Integration of Language and Behavior in a Recurrent Neural Network for Human–Robot Interaction

Tatsuro Yamada; Shingo Murata; Hiroaki Arie; Tetsuya Ogata

To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network model both language–behavior relationships and the temporal patterns of interaction. Here, “internal dynamics” refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where it is in the interaction context as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human’s linguistic instruction. After learning, the network indeed formed an attractor structure representing both language–behavior relationships and the task’s temporal pattern in its internal dynamics. Within these dynamics, the language–behavior mapping was achieved by a branching structure. The repetition of human instruction and robot behavioral response was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human on the given task by autonomously switching phases.


International Conference on Artificial Neural Networks | 2014

Tool-Body Assimilation Model Based on Body Babbling and a Neuro-Dynamical System for Motion Generation

Kuniyuki Takahashi; Tetsuya Ogata; Hadi Tjandra; Shingo Murata; Hiroaki Arie; Shigeki Sugano

We propose a model that allows robots to use tools without predetermined parameters, based on a human cognitive model. Almost all existing studies of robot tool use require predetermined motions and tool features, so the motion patterns are limited and the robots cannot use new tools. Other studies perform a full search for new tools; however, this entails an enormous number of calculations. We built a model for tool use based on the phenomenon of tool–body assimilation using the following approach: we used a humanoid robot model to generate random motions, based on human body babbling. These rich motion experiences were then used to train a recurrent neural network to model a body image. Tool features were self-organized in the parametric bias, which modulates the body image according to the tool in use. Finally, we designed the neural network so that the robot can generate motion from the target image alone.


Intelligent Robots and Systems | 2015

Attractor representations of language-behavior structure in a recurrent neural network for human-robot interaction

Tatsuro Yamada; Shingo Murata; Hiroaki Arie; Tetsuya Ogata

In recent years there has been increased interest in studies that explore integrative learning of language and other modalities using neural network models. However, for practical application to human–robot interaction, the acquired semantic structure between language and meaning has to be available immediately and repeatably whenever necessary, just as in everyday communication. As a solution to this problem, this study proposes a method in which a recurrent neural network self-organizes cyclic attractors that reflect the semantic structure and represent interaction flows in its internal dynamics. To evaluate this method, we designed a simple task in which a human verbally directs a robot, which responds appropriately. By training the network with data that represent the interaction series, cyclic attractors that reflect the semantic structure are self-organized. The network first receives a verbal direction, and its internal state moves along the first half of a cyclic attractor, with branch structures corresponding to semantics. The internal state then reaches a point from which it can generate the appropriate behavior. Finally, the internal state moves through the second half and converges on the initial point of the cycle while generating the appropriate behavior. By self-organizing such an internal structure in its forward dynamics, the model achieves immediate and repeatable responses to linguistic directions. Furthermore, the network self-organizes a fixed-point attractor and is thus able to wait for directions. It can therefore repeat the interaction flexibly without explicit turn-taking signs.


Advanced Robotics | 2014

Learning to generate proactive and reactive behavior using a dynamic neural network model with time-varying variance prediction mechanism

Shingo Murata; Hiroaki Arie; Tetsuya Ogata; Shigeki Sugano; Jun Tani

This paper discusses a possible neurodynamic mechanism that enables self-organization of two basic behavioral modes, namely a ‘proactive mode’ and a ‘reactive mode,’ and of autonomous switching between these modes depending on the situation. In the proactive mode, actions are generated based on an internal prediction, whereas in the reactive mode actions are generated in response to sensory inputs in unpredictable situations. In order to investigate how these two behavioral modes can be self-organized and how autonomous switching between the two modes can be achieved, we conducted neurorobotics experiments by using our recently developed dynamic neural network model that has a capability to learn to predict time-varying variance of the observable variables. In a set of robot experiments under various conditions, the robot was required to imitate other’s movements consisting of alternating predictable and unpredictable patterns. The experimental results showed that the robot controlled by the neural network model was able to proactively imitate predictable patterns and reactively follow unpredictable patterns by autonomously switching its behavioral modes. Our analysis revealed that the variance prediction mechanism can lead to self-organization of these abilities with sufficient robustness and generalization capabilities.


International Conference on Artificial Neural Networks | 2016

Dynamical Linking of Positive and Negative Sentences to Goal-Oriented Robot Behavior by Hierarchical RNN

Tatsuro Yamada; Shingo Murata; Hiroaki Arie; Tetsuya Ogata

Meanings of language expressions are constructed not only from words grounded in real-world matters, but also from words such as “not” that participate in the construction by working as logical operators. This study proposes a connectionist method for learning and internally representing functions that deal with both of these word groups, and grounding sentences constructed from them in corresponding behaviors just by experiencing raw sequential data of an imposed task. In the experiment, a robot implemented with a recurrent neural network is required to ground imperative positive and negative sentences given as a sequence of words in corresponding goal-oriented behavior. Analysis of the internal representations reveals that the network fulfilled the requirement by extracting XOR problems implicitly included in the target sequences and solving them by learning to represent the logical operations in its nonlinear dynamics in a self-organizing manner.


International Conference on Artificial Neural Networks | 2014

Learning and Recognition of Multiple Fluctuating Temporal Patterns Using S-CTRNN

Shingo Murata; Hiroaki Arie; Tetsuya Ogata; Jun Tani; Shigeki Sugano

In the present study, we demonstrate the learning and recognition capabilities of our recently proposed recurrent neural network (RNN) model called stochastic continuous-time RNN (S-CTRNN). S-CTRNN can learn to predict not only the mean but also the variance of the next state of the learning targets. The network parameters consisting of weights, biases, and initial states of context neurons are optimized through maximum likelihood estimation (MLE) using the gradient descent method. First, we clarify the essential difference between the learning capabilities of conventional CTRNN and S-CTRNN by analyzing the results of a numerical experiment in which multiple fluctuating temporal patterns were used as training data, where the variance of the Gaussian noise varied among the patterns. Furthermore, we also show that the trained S-CTRNN can recognize given fluctuating patterns by inferring the initial states that can reproduce the patterns through the same MLE scheme as that used for network training.
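Recognition by inferring initial states, as described above, amounts to freezing the trained weights and optimizing only the initial context state against the same objective used for training. A toy sketch, with a mean-squared-error stand-in for the full likelihood and a numerical gradient (both simplifications are assumptions made here for brevity):

```python
import numpy as np

def infer_initial_state(target, rollout, x0, lr=0.1, steps=200):
    """Recognition as inference: with network weights frozen, adjust only
    the initial context state x0 by gradient descent so that rollout(x0)
    reproduces the observed pattern. Uses a finite-difference gradient."""
    x0 = x0.copy()
    eps = 1e-4
    for _ in range(steps):
        base = np.mean((rollout(x0) - target) ** 2)
        grad = np.zeros_like(x0)
        for i in range(x0.size):
            d = np.zeros_like(x0)
            d[i] = eps
            grad[i] = (np.mean((rollout(x0 + d) - target) ** 2) - base) / eps
        x0 -= lr * grad
    return x0
```

In the actual S-CTRNN scheme the gradient flows back through time via the likelihood function; the finite-difference loop here only illustrates the idea of searching over initial states.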


International Conference on Artificial Neural Networks | 2017

Mixing actual and predicted sensory states based on uncertainty estimation for flexible and robust robot behavior

Shingo Murata; Wataru Masuda; Saki Tomioka; Tetsuya Ogata; Shigeki Sugano

In this paper, we propose a method to dynamically modulate the input state of recurrent neural networks (RNNs) so as to realize flexible and robust robot behavior. We employ the so-called stochastic continuous-time RNN (S-CTRNN), which can learn to predict the mean and variance (or uncertainty) of subsequent sensorimotor information. Our proposed method uses this estimated uncertainty to determine a mixture ratio for combining actual and predicted sensory states of network input. The method is evaluated by conducting a robot learning experiment in which a robot is required to perform a sensory-dependent task and a sensory-independent task. The sensory-dependent task requires the robot to incorporate meaningful sensory information, and the sensory-independent task requires the robot to ignore irrelevant sensory information. Experimental results demonstrate that a robot controlled by our proposed method exhibits flexible and robust behavior, which results from dynamic modulation of the network input on the basis of the estimated uncertainty of actual sensory states.
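One way to realize the proposed mixing is to derive the mixture ratio from the estimated variance: the higher the predicted uncertainty of a sensory channel, the more the network input relies on the prediction rather than the actual observation. The specific ratio below, 1/(1 + variance), is an illustrative assumption, not the formula from the paper:

```python
import numpy as np

def mix_inputs(actual, predicted, variance, gain=1.0):
    """Blend actual and predicted sensory states per channel. High estimated
    variance marks a channel as unreliable or irrelevant, so its weight k on
    the actual observation shrinks toward zero (closed-loop operation)."""
    k = 1.0 / (1.0 + gain * variance)  # k in (0, 1]: weight on actual input
    return k * actual + (1.0 - k) * predicted
```

With zero estimated variance the robot operates open-loop on raw sensory input; with large variance it effectively ignores the channel, which matches the sensory-dependent and sensory-independent tasks described above.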


IEEE/SICE International Symposium on System Integration | 2016

Analysis of imitative interactions between humans and a robot with a neuro-dynamical system

Shingo Murata; Kai Hirano; Hiroaki Arie; Shigeki Sugano; Tetsuya Ogata

Human communicative behavior is both dynamic and bidirectional. This study aims to analyze such behavior by conducting imitative interactions between human subjects and a humanoid robot that has a neuro-dynamical system. For this purpose, we take a robot-centered approach in which the change in robot performance according to difference in human partner is analyzed, rather than adopting the typical human-centered approach. A small humanoid robot equipped with a neuro-dynamical system learns imitative arm movement patterns and interacts with humans after the learning process. We analyze the interactive phenomena by different methods, including principal component analysis and use of a recurrence plot. Through this analysis, we demonstrate that different classes of interactions can be observed in the contextual dynamics of the neuro-dynamical system.

Collaboration


Dive into Shingo Murata's collaborations.

Top Co-Authors

Yuichi Yamashita

RIKEN Brain Science Institute

Jun Namikawa

RIKEN Brain Science Institute
