Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Yuki Suga is active.

Publication


Featured research published by Yuki Suga.


Robotics and Autonomous Systems | 2014

Multimodal integration learning of robot behavior using deep neural networks

Kuniaki Noda; Hiroaki Arie; Yuki Suga; Tetsuya Ogata

For humans to accurately understand the world around them, multimodal integration is essential because it enhances perceptual precision and reduces ambiguity. Computational models replicating such human ability may contribute to the practical use of robots in daily human living environments; however, primarily because of scalability problems that conventional machine learning algorithms suffer from, sensory-motor information processing in robotic applications has typically been achieved via modal-dependent processes. In this paper, we propose a novel computational framework enabling the integration of sensory-motor time-series data and the self-organization of multimodal fused representations based on a deep learning approach. To evaluate our proposed model, we conducted two behavior-learning experiments utilizing a humanoid robot; the experiments consisted of object manipulation and bell-ringing tasks. From our experimental results, we show that large amounts of sensory-motor information, including raw RGB images, sound spectrums, and joint angles, are directly fused to generate higher-level multimodal representations. Further, we demonstrated that our proposed framework realizes the following three functions: (1) cross-modal memory retrieval utilizing the information complementation capability of the deep autoencoder; (2) noise-robust behavior recognition utilizing the generalization capability of multimodal features; and (3) multimodal causality acquisition and sensory-motor prediction based on the acquired causality.

Highlights: a novel computational framework for sensory-motor integration learning; cross-modal memory retrieval utilizing a deep autoencoder; noise-robust behavior recognition utilizing acquired multimodal features; multimodal causality acquisition and sensory-motor prediction.
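
The fusion architecture described above lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch version of a multimodal deep autoencoder in the spirit of this paper: flattened visual features, sound-spectrum features, and joint angles are concatenated and compressed into a shared code. All layer sizes and names are illustrative assumptions, not the paper's actual network.

```python
# Hypothetical sketch, not the paper's network: a deep autoencoder fusing
# three modalities into one shared multimodal code.
import torch
import torch.nn as nn

class MultimodalAutoencoder(nn.Module):
    def __init__(self, dim_image=300, dim_sound=40, dim_joints=20, dim_code=30):
        super().__init__()
        dim_in = dim_image + dim_sound + dim_joints
        self.encoder = nn.Sequential(
            nn.Linear(dim_in, 120), nn.Tanh(),
            nn.Linear(120, dim_code), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(dim_code, 120), nn.Tanh(),
            nn.Linear(120, dim_in),
        )

    def forward(self, image, sound, joints):
        x = torch.cat([image, sound, joints], dim=-1)
        code = self.encoder(x)                 # shared multimodal representation
        return self.decoder(code), code

model = MultimodalAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
image, sound, joints = torch.rand(64, 300), torch.rand(64, 40), torch.rand(64, 20)
recon, _ = model(image, sound, joints)         # reconstruct all modalities
loss = nn.functional.mse_loss(recon, torch.cat([image, sound, joints], dim=-1))
opt.zero_grad(); loss.backward(); opt.step()   # one training step
```

Cross-modal memory retrieval then amounts to zeroing one modality at the input and reading its reconstruction at the output.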


Intelligent Robots and Systems | 2005

Interactive evolution of human-robot communication in real world

Yuki Suga; Yoshinori Ikuma; Daisuke Nagao; Shigeki Sugano; Tetsuya Ogata

This paper describes how to implement interactive evolutionary computation (IEC) in a human-robot communication system. IEC is an evolutionary computation (EC) in which the fitness function is evaluated by human assessors. We used IEC to configure the human-robot communication system and had previously simulated IEC's application. In this paper, we implement IEC on a real robot. Since this experiment places considerable burdens on both the robot and the experimental subjects, we propose human-machine hybrid evaluation (HMHE) to increase the diversity of the genetic pool without increasing the number of interactions. We used a communication robot, WAMOEBA-3 (Waseda artificial mind on emotion base), which is appropriate for this experiment. In the experiment, human assessors interacted with WAMOEBA-3 in various ways. The fitness values increased gradually, and the assessors felt that the robot learned the motions they desired. We therefore confirmed that IEC is well suited as a communication learning system.
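
As a rough illustration of the HMHE idea, the following sketch runs a toy evolutionary loop in which a person scores only a few genes per generation and the machine fills in the rest by similarity to the rated samples. The rate_by_human function and all numbers are stand-ins, not the paper's setup.

```python
# Toy sketch of interactive EC with human-machine hybrid evaluation (HMHE).
# rate_by_human() is a placeholder for the real human interaction step.
import numpy as np

rng = np.random.default_rng(0)
POP, DIM, RATED = 20, 8, 4          # population size, gene length, human-rated genes

def rate_by_human(gene):
    # Placeholder: a person would interact with the robot and return a
    # subjective score; here we fake one so the sketch runs.
    return float(-np.sum((gene - 0.7) ** 2))

pop = rng.random((POP, DIM))
for gen in range(10):
    idx = rng.choice(POP, RATED, replace=False)         # genes shown to the human
    rated = {i: rate_by_human(pop[i]) for i in idx}
    fitness = np.empty(POP)
    for i in range(POP):
        if i in rated:
            fitness[i] = rated[i]
        else:
            # machine half of HMHE: copy the score of the nearest rated gene
            j = min(rated, key=lambda k: np.linalg.norm(pop[i] - pop[k]))
            fitness[i] = rated[j]
    # selection + mutation: keep the better half, perturb copies of it
    order = np.argsort(fitness)[::-1]
    parents = pop[order[: POP // 2]]
    children = parents + rng.normal(0, 0.1, parents.shape)
    pop = np.vstack([parents, np.clip(children, 0, 1)])
print("best estimated fitness:", fitness.max())
```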


Intelligent Robots and Systems | 2013

Multimodal integration learning of object manipulation behaviors using deep neural networks

Kuniaki Noda; Hiroaki Arie; Yuki Suga; Tetsuya Ogata

This paper presents a novel computational approach for modeling and generating multiple object manipulation behaviors by a humanoid robot. The contribution of this paper is that deep learning methods are applied not only for multimodal sensor fusion but also for sensory-motor coordination. More specifically, a time-delay deep neural network is applied for modeling multiple behavior patterns represented with multi-dimensional visuomotor temporal sequences. By using the efficient training performance of Hessian-free optimization, the proposed mechanism successfully models six different object manipulation behaviors in a single network. The generalization capability of the learning mechanism enables the acquired model to perform the functions of cross-modal memory retrieval and temporal sequence prediction. The experimental results show that the motion patterns for object manipulation behaviors are successfully generated from the corresponding image sequence, and vice versa. Moreover, the temporal sequence prediction enables the robot to interactively switch multiple behaviors in accordance with changes in the displayed objects.
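
A time-delay network of the kind described can be approximated with 1-D convolutions over a sliding window of visuomotor frames. The sketch below is a hypothetical, untrained illustration; the dimensions, window length, and layer counts are assumptions rather than the paper's architecture.

```python
# Hypothetical time-delay network: 1-D convolutions over a time window
# of visuomotor frames predict the next frame.
import torch
import torch.nn as nn

dim_frame, window = 30, 10   # visual features + joint angles per time step

tdnn = nn.Sequential(
    nn.Conv1d(dim_frame, 64, kernel_size=5), nn.Tanh(),   # time-delay taps
    nn.Conv1d(64, 64, kernel_size=5), nn.Tanh(),
    nn.Flatten(),
    nn.Linear(64 * (window - 8), dim_frame),              # next-frame prediction
)

seq = torch.rand(16, dim_frame, window)    # batch of visuomotor windows
next_frame = tdnn(seq)                     # shape: (16, dim_frame)
print(next_frame.shape)
```

Feeding the prediction back in as the newest frame lets such a model both retrieve a missing modality and switch behaviors as its inputs change, which is the usage the abstract describes.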


Robot and Human Interactive Communication | 2004

Imitation based human-robot interaction - roles of joint attention and motion prediction

Y. Akiwa; Tetsuya Ogata; Yuki Suga; Shigeki Sugano

Behavior imitation is crucial for the acquisition of intelligence as well as for communication. This paper describes two kinds of human-robot communication experiments based on behavior imitation. One compared the results obtained when the robot did and did not predict the experimental subjects' behaviors by using past datasets, and the other compared the results obtained with and without target objects in the simulator environment. The result of the former experiment showed that predicting the subjects' behaviors increases the subjects' interest. The result of the latter experiment confirmed that the presence of objects facilitates joint attention and makes human-robot communication possible even when the robot uses a simple imitation mechanism. This result shows that in human-robot communication, humans not only recognize the behaviors of the robot passively but also adapt to the situation actively. In conclusion, it is confirmed that motion prediction and the presence of objects for joint attention are important for human-robot communication.
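
The "prediction using past datasets" ingredient can be illustrated with a simple nearest-neighbor lookup: match the subject's recent motion against recorded history and take the frame that followed the best match as the prediction. This is only a plausible reading of the mechanism; the data and matching scheme here are invented for the sketch.

```python
# Illustrative nearest-neighbor motion prediction over a past dataset.
# The dataset is random stand-in data, not the paper's recordings.
import numpy as np

rng = np.random.default_rng(1)
past = rng.random((200, 12))      # 200 recorded motion frames, 12-D pose each
H = 5                             # history length used for matching

def predict_next(recent):
    # slide a window over the past data and find the closest history
    best_t, best_d = H, np.inf
    for t in range(H, len(past) - 1):
        d = np.linalg.norm(past[t - H:t] - recent)
        if d < best_d:
            best_t, best_d = t, d
    return past[best_t]           # the frame that followed the best match

recent = past[50:55] + rng.normal(0, 0.01, (H, 12))
print(predict_next(recent))      # approximates past[55]
```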


Intelligent Robots and Systems | 2010

Contact detection and reaction of a wheelchair mounted robotic arm equipped with mechanical gravity canceller

Wei Wang; Yuki Suga; Shigeki Sugano

Safety has become the primary concern in wheelchair-mounted robotic arm applications. We introduced a mechanical gravity canceller into the design to realize a lightweight manipulator, which also greatly simplifies the manipulator dynamics. Based on the simplified dynamics, sensor-based contact detection can be easily implemented to ensure safety. Contact reaction schemes are also applied through an impedance control law to achieve the desired backdrivability. Experiments are conducted to illustrate the proposed method.
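
A hypothetical sketch in the spirit of this scheme: with gravity cancelled mechanically, the expected joint torque reduces to inertia and friction terms, so a torque residual above a threshold can flag contact, after which a compliant impedance law takes over. All parameters below are invented for illustration.

```python
# Illustrative per-joint contact detection and impedance reaction,
# assuming gravity is cancelled mechanically (no gravity term needed).
import numpy as np

I_joint, b_friction = 0.05, 0.2        # assumed simplified joint dynamics
K_imp, D_imp = 2.0, 0.5                # impedance stiffness and damping
THRESHOLD = 0.3                        # residual torque that flags contact

def contact_detected(tau_measured, qdd, qd):
    tau_expected = I_joint * qdd + b_friction * qd   # simplified model torque
    return abs(tau_measured - tau_expected) > THRESHOLD

def reaction_torque(q, qd, q_rest):
    # compliant impedance law: drift softly toward a rest posture
    return -K_imp * (q - q_rest) - D_imp * qd

tau, qdd, qd, q = 0.9, 1.0, 0.5, 0.4
if contact_detected(tau, qdd, qd):
    print("contact! reaction torque:", reaction_torque(q, qd, q_rest=0.0))
```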


Advanced Robotics | 2006

Dynamic perception after visually guided grasping by a human-like autonomous robot

Mototaka Suzuki; Kuniaki Noda; Yuki Suga; Tetsuya Ogata; Shigeki Sugano

We explore dynamic perception following the visually guided grasping of several objects by a human-like autonomous robot. This competency serves object categorization. Physical interaction with the hand-held object gives the robot's neural network rich, coherent and multimodal sensory input. Multi-layered self-organizing maps are designed and examined under static and dynamic conditions. The tests under the former condition show the network's capability for robust categorization against noise, and the network outperforms a single-layered map. Under the latter condition we focus on shaking behavior, moving only the forearm of the robot. For some combinations of grasping style and shaking radius, the network categorizes two objects robustly. The results show that the network's ability to achieve the task largely depends on how the objects are grasped and moved. Together with a preliminary simulation, these results are promising steps toward the self-organization of highly autonomous dynamic object categorization.
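
For reference, a self-organizing map of the kind examined above can be written in a few lines: each multimodal feature vector pulls the best-matching unit and its grid neighbors toward itself, so objects come to cluster by which units respond. The grid size, feature dimension, and learning schedule below are assumptions, not the paper's.

```python
# Minimal single-layer SOM training loop over assumed 16-D sensory features.
import numpy as np

rng = np.random.default_rng(2)
GRID, DIM = 6, 16                      # 6x6 map over 16-D feature vectors
weights = rng.random((GRID, GRID, DIM))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), -1)

def train_step(x, lr=0.2, sigma=1.5):
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(d), d.shape)        # best matching unit
    g = np.exp(-np.sum((coords - bmu) ** 2, -1) / (2 * sigma**2))
    weights[...] = weights + lr * g[..., None] * (x - weights)

for _ in range(500):
    train_step(rng.random(DIM))        # stand-in data; real inputs would be
                                       # multimodal features from shaking objects
```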


Systems, Man and Cybernetics | 2013

Intersensory Causality Modeling Using Deep Neural Networks

Kuniaki Noda; Hiroaki Arie; Yuki Suga; Tetsuya Ogata

Our brain is known to enhance perceptual precision and reduce ambiguity about the sensory environment by integrating multiple sources of sensory information acquired from different modalities, such as vision, audition and somatic sensation. From an engineering perspective, building a computational model that replicates this ability to integrate multimodal information and to self-organize the causal dependency among modalities represents one of the central challenges in robotics. In this study, we propose such a model based on a deep learning framework, and we evaluate it by conducting a bell-ringing task with a small humanoid robot. Our experimental results demonstrate that (1) the cross-modal memory retrieval function of the proposed method succeeds in generating the visual sequence from the corresponding sound and bell-ringing motion, and (2) the proposed method acquires accurate causal dependencies among the sensory-motor sequences.
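
The cross-modal retrieval function can be illustrated as querying a trained multimodal autoencoder with the visual input zeroed and reading the decoder's visual slice as the retrieved sequence. The untrained toy model below only shows the mechanics; all sizes and names are assumptions.

```python
# Illustrative cross-modal retrieval: zero the visual input, reconstruct
# it from the remaining modalities. The model here is untrained.
import torch
import torch.nn as nn

dim_vision, dim_sound, dim_motion = 100, 40, 20
dim_in = dim_vision + dim_sound + dim_motion
autoencoder = nn.Sequential(
    nn.Linear(dim_in, 60), nn.Tanh(),
    nn.Linear(60, dim_in),
)

sound, motion = torch.rand(1, dim_sound), torch.rand(1, dim_motion)
query = torch.cat([torch.zeros(1, dim_vision), sound, motion], dim=-1)
reconstruction = autoencoder(query)
retrieved_vision = reconstruction[:, :dim_vision]   # completed visual modality
print(retrieved_vision.shape)
```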


International Conference on Advanced Intelligent Mechatronics | 2012

Solve inverse kinematics through a new quadratic minimization technique

Wei Wang; Yuki Suga; Hiroyasu Iwata; Shigeki Sugano

A new technique was developed to solve inverse kinematics based on quadratic minimization. First, the inverse kinematic problem was formulated as a quadratic minimization problem through the construction of a quadratic objective function derived from a quaternion-based kinematic description; then a new technique was developed to solve the quadratic minimization problem. The iteration procedure of the new technique is organized in two main phases: computing the search direction based on dynamic system synthesis, and seeking an acceptable step along the search direction. Numerical examples showed that inverse kinematic solutions can be delivered in a robust and time-efficient way, meeting the requirements encountered in daily-living support applications.
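
To make the formulation concrete, the sketch below poses IK for a planar 3-link arm as minimizing a quadratic pose-error objective. Note the substitutions: a plain angle error stands in for the paper's quaternion-based description, and SciPy's general-purpose minimizer stands in for the paper's custom two-phase solver.

```python
# Illustrative IK-as-quadratic-minimization for an assumed planar 3-link arm.
import numpy as np
from scipy.optimize import minimize

L = np.array([0.3, 0.25, 0.15])                  # assumed link lengths

def forward(q):
    a = np.cumsum(q)                             # absolute link angles
    x, y = np.sum(L * np.cos(a)), np.sum(L * np.sin(a))
    return np.array([x, y]), a[-1]               # tip position, tip orientation

def objective(q, target_p, target_th):
    # quadratic objective: squared position error + squared orientation error
    p, th = forward(q)
    return np.sum((p - target_p) ** 2) + (th - target_th) ** 2

target_p, target_th = np.array([0.4, 0.3]), 0.5
res = minimize(objective, x0=np.zeros(3), args=(target_p, target_th))
print("joint angles:", res.x, "residual:", res.fun)
```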


Intelligent Robots and Systems | 2004

Acquisition of reactive motion for communication robots using interactive EC

Yuki Suga; Tetsuya Ogata; Shigeki Sugano

We developed an emotional communication robot, WAMOEBA, using behavior-based techniques. We also proposed the motor-agent (MA) model, an autonomous distributed-control algorithm constructed from simple sensor-motor coordination. Though it enables WAMOEBA to behave in various ways, the weighting of the combinations between different motor agents is influenced by the preferences of the developer. Machine-learning algorithms are usually used to configure these parameters automatically for communication robots; however, this makes it difficult to define the quantitative evaluation required for communication. We therefore used interactive evolutionary computation (IEC), which can be applied to problems for which a quantitative evaluation is hard to define. IEC does not require defining a fitness function; this task is performed by users. The biggest problem with IEC, however, is human fatigue, which leaves too few individuals and generations for the EC to converge. To address this problem, we use a prediction function that automatically calculates the fitness values of genes from samples that have received human subjective evaluation. We then carried out a behavior-acquisition experiment using the IEC simulation system with the prediction function. The experimental results confirmed that diversifying the genetic pool is an efficient way to generate a variety of behaviors.
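
The prediction function can be illustrated as simple distance-weighted regression over the human-rated samples: genes close to a highly rated sample inherit a high predicted fitness, so the EC can run many generations without additional human evaluations. The data below are random stand-ins, and this regressor is an assumption, not the paper's exact predictor.

```python
# Illustrative fitness prediction from a few human-rated sample genes.
import numpy as np

rng = np.random.default_rng(3)
rated_genes = rng.random((6, 10))         # genes a person actually scored
rated_scores = rng.random(6)              # their subjective ratings

def predict_fitness(gene, eps=1e-6):
    d = np.linalg.norm(rated_genes - gene, axis=1)
    w = 1.0 / (d + eps)                   # closer rated samples weigh more
    return float(np.sum(w * rated_scores) / np.sum(w))

print(predict_fitness(rng.random(10)))
```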


International Conference on Advanced Intelligent Mechatronics | 2014

Tool-body assimilation model using a neuro-dynamical system for acquiring representation of tool function and motion

Kuniyuki Takahashi; Tetsuya Ogata; Hadi Tjandra; Yuki Yamaguchi; Yuki Suga; Shigeki Sugano

In this paper, we propose a tool-body assimilation model that implements a multiple time-scales recurrent neural network (MTRNN). Our model allows a robot to acquire the representation of a tool's function and the required motion without any prior knowledge of the tool. It is composed of five modules: image feature extraction, body model, tool dynamics feature, tool recognition, and motion recognition. Self-organizing maps (SOMs) are used for image feature extraction from raw images. The MTRNN is used for body-model learning. Parametric bias (PB) nodes are used to learn tool dynamics features; the PB nodes are attached to the neurons of the MTRNN to modulate the body model. A hierarchical neural network (HNN) is implemented for tool and motion recognition. Experiments were conducted using OpenHRP3, a robotics simulator, with multiple tools. The results show that the tool-body assimilation model is capable of recognizing tools, including those with unlearned shapes, and acquires the required motions accordingly.
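
A minimal sketch of an MTRNN cell with parametric bias, under assumed sizes and time constants: fast and slow context units leak-integrate at different rates, and a small PB vector, which would be learned per tool, enters the dynamics alongside the input. This illustrates the mechanism only, not the paper's implementation.

```python
# Illustrative MTRNN cell with parametric bias (PB) modulation.
import torch
import torch.nn as nn

class MTRNNCell(nn.Module):
    def __init__(self, dim_in=20, fast=30, slow=10, dim_pb=3):
        super().__init__()
        self.tau_f, self.tau_s = 2.0, 20.0            # assumed time constants
        dim_h = fast + slow
        self.w = nn.Linear(dim_in + dim_h + dim_pb, dim_h)
        self.out = nn.Linear(fast, dim_in)
        self.fast = fast

    def forward(self, x, h, pb):
        u = torch.tanh(self.w(torch.cat([x, h, pb], dim=-1)))
        tau = torch.cat([torch.full((self.fast,), self.tau_f),
                         torch.full((h.shape[-1] - self.fast,), self.tau_s)])
        h_new = (1 - 1 / tau) * h + (1 / tau) * u     # leaky integration
        return self.out(h_new[..., :self.fast]), h_new

cell = MTRNNCell()
pb = torch.zeros(1, 3)                  # per-tool parametric bias vector
h = torch.zeros(1, 40)
x = torch.rand(1, 20)
for _ in range(5):                      # roll the dynamics forward
    x, h = cell(x, h, pb)
print(x.shape)
```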

Collaboration


Dive into Yuki Suga's collaborations.
