
Publications


Featured research published by Shun Nishide.


Advanced Robotics | 2008

Predicting object dynamics from visual images through active sensing experiences

Shun Nishide; Tetsuya Ogata; Jun Tani; Kazunori Komatani; Hiroshi G. Okuno

Prediction of dynamic features is an important task for determining the manipulation strategies of an object. This paper presents a technique for predicting the dynamics of objects relative to the robot's motion from visual images. During the training phase, the authors use the recurrent neural network with parametric bias (RNNPB) to self-organize the dynamics of objects manipulated by the robot into the PB space. The acquired PB values, static images of objects, and robot motor values are input into a hierarchical neural network to link the images to dynamic features (PB values). The neural network extracts prominent features that each induce object dynamics. To predict the motion sequence of an unknown object, the static image of the object and the robot motor value are input into the neural network to calculate the PB values. By inputting the PB values into the closed-loop RNNPB, the predicted movements of the object relative to the robot's motion are calculated recursively. Experiments were conducted with the humanoid robot Robovie-IIs pushing objects at different heights. The results of the experiment on predicting the dynamics of target objects proved that the technique is effective for predicting object dynamics.
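The closed-loop generation described above can be illustrated with a minimal sketch: a fixed PB vector conditions a recurrent network, and each prediction is fed back as the next input. All sizes and weights below are arbitrary placeholders, not the trained model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class ClosedLoopRNNPB:
    """Toy RNN with parametric-bias inputs; weights are random stand-ins."""
    def __init__(self, n_in=4, n_hid=8, n_pb=2):
        self.W_in = rng.standard_normal((n_hid, n_in)) * 0.1
        self.W_pb = rng.standard_normal((n_hid, n_pb)) * 0.1
        self.W_rec = rng.standard_normal((n_hid, n_hid)) * 0.1
        self.W_out = rng.standard_normal((n_in, n_hid)) * 0.1

    def predict(self, x0, pb, steps):
        """Closed loop: each output becomes the next input, PB held fixed."""
        x, h = x0, np.zeros(self.W_rec.shape[0])
        seq = []
        for _ in range(steps):
            h = np.tanh(self.W_in @ x + self.W_pb @ pb + self.W_rec @ h)
            x = np.tanh(self.W_out @ h)   # prediction fed back recursively
            seq.append(x)
        return np.stack(seq)

model = ClosedLoopRNNPB()
trajectory = model.predict(x0=np.zeros(4), pb=np.array([0.5, -0.5]), steps=10)
print(trajectory.shape)  # (10, 4): a predicted object-feature sequence
```

In the paper the PB values themselves come from the hierarchical network given a static image and motor value; here they are supplied by hand.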


IEEE Transactions on Autonomous Mental Development | 2012

Tool–Body Assimilation of Humanoid Robot Using a Neurodynamical System

Shun Nishide; Jun Tani; Toru Takahashi; Hiroshi G. Okuno; Tetsuya Ogata

Research in the brain science field has uncovered the human capability to use tools as if they were part of the human body (known as tool-body assimilation) through trial and experience. This paper presents a method that applies a robot's active sensing experience to create a tool-body assimilation model. The model is composed of a feature extraction module, a dynamics learning module, and a tool-body assimilation module. A self-organizing map (SOM) is used as the feature extraction module to extract object features from raw images. A multiple timescales recurrent neural network (MTRNN) is used as the dynamics learning module. Parametric bias (PB) nodes are attached to the weights of the MTRNN as a second-order network to modulate the behavior of the MTRNN based on the properties of the tool. The generalization capability of neural networks provides the model with the ability to deal with unknown tools. Experiments were conducted with the humanoid robot HRP-2 using no tool and I-shaped, T-shaped, and L-shaped tools. The distribution of PB values has shown that the model learned that the robot's dynamic properties change when holding a tool. Motion generation experiments show that the tool-body assimilation model can generalize to unknown tools to generate goal-oriented motions.
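The "second-order network" idea, where PB nodes are attached to the weights, can be sketched as the PB vector modulating the effective recurrent weight matrix. The multiplicative form and all sizes below are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sketch: effective recurrent weights = base weights plus a
# PB-weighted combination of modulation matrices (second-order connections).
n_hid, n_pb = 6, 3
W_base = rng.standard_normal((n_hid, n_hid)) * 0.1
W_mod = rng.standard_normal((n_pb, n_hid, n_hid)) * 0.1

def effective_weights(pb):
    # Each tool's PB value shifts the network's dynamics.
    return W_base + np.tensordot(pb, W_mod, axes=1)

def step(h, pb):
    return np.tanh(effective_weights(pb) @ h)

h0 = np.ones(n_hid) * 0.1
h_no_tool = step(h0, np.zeros(n_pb))           # PB at zero: bare-hand dynamics
h_tool = step(h0, np.array([1.0, 0.0, 0.0]))   # nonzero PB: tool-modified dynamics
print(np.allclose(h_no_tool, h_tool))  # False: PB changes the behavior
```

This mirrors the finding above: different PB values correspond to different dynamic properties of the body-plus-tool system.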


Systems, Man and Cybernetics | 2013

Developmental Human-Robot Imitation Learning of Drawing with a Neuro Dynamical System

Keita Mochizuki; Shun Nishide; Hiroshi G. Okuno; Tetsuya Ogata

This paper deals with robot developmental learning of drawing and discusses the influence of physical embodiment on the task. Humans are said to develop their drawing skills through five phases: 1) Scribbling, 2) Fortuitous Realism, 3) Failed Realism, 4) Intellectual Realism, and 5) Visual Realism. We implement phases 1) and 3) on the humanoid robot NAO, holding a pen, using a neuro-dynamical model, namely the Multiple Timescales Recurrent Neural Network (MTRNN). For phase 1), we used random arm motion of the robot as body babbling to associate motor dynamics with pen position dynamics. For phase 3), we developed incremental imitation learning to imitate and develop the robot's drawing skill using basic shapes: circle, triangle, and rectangle. We confirmed two notable features from the experiment. First, drawing was performed better for shapes requiring arm motions used in babbling. Second, the performance of clockwise circle drawing was good from the beginning, a phenomenon also observed in human development. The results imply the model's capability to create a developmental robot that relates to human development.
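The body-babbling phase, associating random motor commands with the pen positions they produce, can be sketched with a toy embodiment. The fixed affine map and linear regression below are illustrative stand-ins for the robot's real kinematics and the MTRNN.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy body babbling: random arm commands are paired with the pen positions
# they produce, and a linear map (a stand-in for the learned model) is fit
# to associate motor dynamics with drawing dynamics.
def pen_position(arm):
    # Unknown "embodiment": here a fixed affine map chosen for illustration.
    return arm @ np.array([[0.8, 0.1], [-0.2, 0.9]]) + 0.05

arm_cmds = rng.uniform(-1, 1, (50, 2))   # random babbling motions
pen_pts = pen_position(arm_cmds)         # observed drawing results
A, *_ = np.linalg.lstsq(arm_cmds, pen_pts - 0.05, rcond=None)  # learn the map
pred = arm_cmds @ A + 0.05
print(np.allclose(pred, pen_pts, atol=1e-6))  # True: association learned
```

After this phase the robot can predict where a candidate arm motion will place the pen, which is the prerequisite for the imitation phase.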


International Conference on Robotics and Automation | 2007

Predicting Object Dynamics from Visual Images through Active Sensing Experiences

Shun Nishide; Tetsuya Ogata; Jun Tani; Kazunori Komatani; Hiroshi G. Okuno

Prediction of dynamic features is an important task for determining the manipulation strategies of an object. This paper presents a technique for predicting the dynamics of objects relative to the robot's motion from visual images. During the learning phase, the authors use the recurrent neural network with parametric bias (RNNPB) to self-organize the dynamics of objects manipulated by the robot into the PB space. The acquired PB values, static images of objects, and robot motor values are input into a hierarchical neural network to link the static images to dynamic features (PB values). The neural network extracts prominent features that each induce object dynamics. To predict the motion sequence of an unknown object, the static image of the object and the robot motor value are input into the neural network to calculate the PB values. By inputting the PB values into the closed-loop RNNPB, the predicted movements of the object relative to the robot's motion are calculated sequentially. Experiments were conducted with the humanoid robot Robovie-IIs pushing objects at different heights. Reduced grayscale images and shoulder pitch angles were input into the neural network to predict the dynamics of target objects. The results of the experiment proved that the technique is effective for predicting object dynamics.


International Conference on Robotics and Automation | 2008

Object dynamics prediction and motion generation based on reliable predictability

Shun Nishide; Tetsuya Ogata; Ryunosuke Yokoya; Jun Tani; Kazunori Komatani; Hiroshi G. Okuno

Consistency of object dynamics, which is related to reliable predictability, is an important factor for generating object manipulation motions. This paper proposes a technique to generate autonomous motions based on the consistency of object dynamics. The technique resolves two issues: construction of an object dynamics prediction model and evaluation of consistency. To address the first issue, the authors utilize the Recurrent Neural Network with Parametric Bias to self-organize the dynamics, and link static images to the self-organized dynamics using a hierarchical neural network. For the evaluation of consistency, the authors define an evaluation function based on object dynamics relative to robot motor dynamics. Experiments have shown that the method is capable of predicting 90% of unknown object dynamics. Motion generation experiments proved that the technique can generate autonomous pushing motions that produce consistent rolling motions.
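One way to picture the consistency criterion is as a variance measure: a motion is "consistent" (and hence reliably predictable) if the object trajectories it induces vary little across repetitions. The function below is my assumption for illustration, not the paper's exact evaluation function.

```python
import numpy as np

# Assumed consistency measure: negative mean variance across repeated
# trials of the same motion; higher means more consistent object dynamics.
def consistency(trajectories):
    """trajectories: array of shape (n_trials, T, dim) object-feature sequences."""
    return -np.mean(np.var(trajectories, axis=0))

rolling = np.tile(np.linspace(0, 1, 5), (3, 1))[:, :, None]  # identical rolls
toppling = np.random.default_rng(2).random((3, 5, 1))        # erratic outcomes
print(consistency(rolling) > consistency(toppling))  # True
```

Under such a measure, a pushing motion that always produces the same rolling trajectory scores higher than one whose outcome varies from trial to trial.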


Intelligent Robots and Systems | 2009

Modeling tool-body assimilation using second-order Recurrent Neural Network

Shun Nishide; Tatsuhiro Nakagawa; Tetsuya Ogata; Jun Tani; Toru Takahashi; Hiroshi G. Okuno

Tool-body assimilation is one of the intelligent human abilities. Through trial and experience, humans are capable of using tools as if they were part of their own bodies. This paper presents a method that applies a robot's active sensing experience to create a tool-body assimilation model. The model is composed of a feature extraction module, a dynamics learning module, and a tool recognition module. A Self-Organizing Map (SOM) is used as the feature extraction module to extract object features from raw images. A Multiple Timescales Recurrent Neural Network (MTRNN) is used as the dynamics learning module. Parametric Bias (PB) nodes are attached to the weights of the MTRNN as a second-order network to modulate the behavior of the MTRNN based on the tool. The generalization capability of neural networks provides the model with the ability to deal with unknown tools. Experiments were performed with HRP-2 using no tool and I-shaped, T-shaped, and L-shaped tools. The distribution of PB values has shown that the model learned that the robot's dynamic properties change when holding a tool. The results of the experiment show that the tool-body assimilation model can generalize to unknown objects to generate goal-oriented motions.


International Conference on Robotics and Automation | 2014

Insertion of pause in drawing from babbling for robot's developmental imitation learning

Shun Nishide; Keita Mochizuki; Hiroshi G. Okuno; Tetsuya Ogata

In this paper, we present a method to improve a robot's imitation performance in a drawing scenario by inserting pauses in motion. Humans' drawing skills are said to develop through five stages: 1) Scribbling, 2) Fortuitous Realism, 3) Failed Realism, 4) Intellectual Realism, and 5) Visual Realism. We focus on stages 1) and 3) for creating our system, corresponding to body babbling and imitation learning, respectively. For stage 1), the robot randomly moves its arm to associate its arm dynamics with the drawing result. Presuming that the robot has no knowledge about its own dynamics, the robot learns its body dynamics in this stage. For stage 3), we consider a scenario in which a robot imitates a human's drawing motion. In creating the system, we focus on the motionese phenomenon, one of the key factors in discussing skill acquisition through parent-child interaction. In motionese, the parent first shows each action elaborately to the child when teaching a skill. As the child improves, the parent's actions are simplified. Likewise, in our scenario, the human first inserts pauses during the drawing motions where the direction of drawing changes (i.e., at corners). As the robot's imitation learning of drawing converges, the human switches to drawing without pauses. The experimental results show that inserting pauses in drawing imitation scenarios greatly improves the robot's drawing performance.
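The pause-insertion idea, holding the pen still where the drawing direction changes sharply, can be sketched on a point trajectory. The angle threshold, pause length, and corner test below are illustrative choices, not the paper's parameters.

```python
import math

# Assumed sketch: repeat trajectory points at sharp direction changes
# (corners), mimicking the pauses the human teacher inserts while drawing.
def insert_pauses(points, pause_len=3, angle_thresh=1.0):
    out = [points[0]]
    for i in range(1, len(points) - 1):
        out.append(points[i])
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)   # incoming stroke direction
        a2 = math.atan2(y2 - y1, x2 - x1)   # outgoing stroke direction
        if abs(a2 - a1) > angle_thresh:      # corner: hold the pen still
            out.extend([points[i]] * pause_len)
    out.append(points[-1])
    return out

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2)]
padded = insert_pauses(square)
print(len(padded) > len(square))  # True: pauses added at the two corners
```

The repeated points give the learner extra time steps at exactly the segments where imitation errors concentrate, which is the effect the experiments above exploit.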


Systems, Man and Cybernetics | 2011

Handwriting prediction based character recognition using recurrent neural network

Shun Nishide; Hiroshi G. Okuno; Tetsuya Ogata; Jun Tani

Humans are said to unintentionally trace handwriting sequences in their brains, based on handwriting experience, when recognizing written text. In this paper, we propose a model that predicts handwriting sequences for written text recognition based on handwriting experience. The model is first trained using image sequences acquired while writing text. Image features are self-organized from the images using a Self-Organizing Map. The feature sequences are used to train a neuro-dynamics learning model. For recognition, the text image is input into the model to predict the handwriting sequence and recognize the text. We conducted two experiments using ten Japanese characters. The results of the experiments show the effectiveness of the model.
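The SOM-based feature extraction step can be sketched minimally: each raw image vector is mapped to its best-matching unit, whose index serves as a compact feature for the sequence model. Unit count, image size, and learning rate below are placeholder assumptions (and neighborhood updates, which a full SOM would use, are omitted).

```python
import numpy as np

rng = np.random.default_rng(3)

class SOM:
    """Minimal winner-take-all self-organizing map sketch."""
    def __init__(self, n_units=4, dim=16, lr=0.5):
        self.w = rng.random((n_units, dim))
        self.lr = lr

    def bmu(self, x):
        # Best-matching unit: the reference vector closest to the input.
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

    def train(self, data, epochs=20):
        for _ in range(epochs):
            for x in data:
                i = self.bmu(x)
                self.w[i] += self.lr * (x - self.w[i])  # pull winner toward input

images = rng.random((10, 16))            # stand-ins for 4x4 handwriting frames
som = SOM()
som.train(images)
features = [som.bmu(x) for x in images]  # per-frame discrete feature codes
print(len(features))  # 10: one feature code per frame
```

The resulting code sequence, rather than raw pixels, is what the neuro-dynamics model above would be trained to predict.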


International Conference on Neural Information Processing | 2011

Use of a sparse structure to improve learning performance of recurrent neural networks

Hiromitsu Awano; Shun Nishide; Hiroaki Arie; Jun Tani; Toru Takahashi; Hiroshi G. Okuno; Tetsuya Ogata

The objective of our study is to find out how a sparse structure affects the performance of a recurrent neural network (RNN). Only a few existing studies have dealt with sparse structure in RNNs trained with methods such as Back Propagation Through Time (BPTT). In this paper, we propose an RNN with sparse connections trained with BPTT, called the Multiple Timescale RNN (MTRNN). We then investigated how sparse connections affect generalization performance and noise robustness. In experiments using data composed of alphabetic sequences, the MTRNN showed the best generalization performance when the connection rate was 40%. We also measured the sparseness of neural activity and found that it corresponds to generalization performance. These results mean that sparse connections improve learning performance and that the sparseness of neural activity can be used as a metric of generalization performance.
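The sparse-connection setup can be sketched as a fixed binary mask over the recurrent weights, re-applied at every step so pruned connections stay at zero (in training, the same mask would also zero the corresponding gradients). The network size is a placeholder; the 40% rate is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of sparse connectivity: a binary mask zeroes a fixed fraction of
# the recurrent weights, and applying it at each step keeps them pruned.
n = 10
connection_rate = 0.4  # the rate the abstract reports as best
mask = (rng.random((n, n)) < connection_rate).astype(float)
W = rng.standard_normal((n, n)) * mask   # sparse recurrent weight matrix

def step(h):
    return np.tanh((W * mask) @ h)       # mask keeps pruned connections at zero

h = step(np.ones(n))
print(round(mask.mean(), 2))  # empirical density, close to 0.4 by construction
```

During BPTT the same mask would be applied to the weight updates, so the connectivity pattern, and hence the sparsity level under study, never changes.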


Journal of Robotics and Mechatronics | 2009

Autonomous Motion Generation Based on Reliable Predictability

Shun Nishide; Tetsuya Ogata; Jun Tani; Kazunori Komatani; Hiroshi G. Okuno

Predictability is an important factor for generating object manipulation motions. In this paper, the authors present a technique to generate autonomous object pushing motions based on the consistency of object dynamics, which is tightly connected to reliable predictability. The technique first creates an internal model of the robot and object dynamics using the Recurrent Neural Network with Parametric Bias, based on transitions of extracted object features and generated robot motions acquired during active sensing experiences with objects. Next, the technique searches through the model for the most consistent object dynamics and the corresponding robot motion using a consistency evaluation function and the Steepest Descent Method. Finally, the initial static image of the object is linked to the acquired robot motion using a hierarchical neural network. The authors conducted a motion generation experiment using pushing motions with cylindrical objects to evaluate the method. The experiment showed that the method generalizes across object postures to generate consistent rolling motions.
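The search step, steepest descent on the evaluation function over motor parameters, can be sketched with a toy objective. The quadratic below stands in for the trained model's consistency score; its minimizer plays the role of the most consistent motor command.

```python
# Toy steepest-descent search: minimize an evaluation function over a
# scalar motor parameter. The quadratic is an illustrative stand-in for
# the consistency evaluation computed through the trained model.
def evaluate(m):
    return (m - 0.7) ** 2  # toy inconsistency score; minimum at m = 0.7

def steepest_descent(m, lr=0.1, iters=200, eps=1e-5):
    for _ in range(iters):
        # Central-difference gradient, since the real model gives no
        # closed-form derivative.
        grad = (evaluate(m + eps) - evaluate(m - eps)) / (2 * eps)
        m -= lr * grad
    return m

motor = steepest_descent(0.0)
print(round(motor, 3))  # 0.7: the most "consistent" motor command
```

In the full technique this optimized motion is then associated with the object's initial static image, so that at run time the image alone selects the motion.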
