
Publication


Featured research published by Ryo Saegusa.


Neurocomputing | 2004

Nonlinear principal component analysis to preserve the order of principal components

Ryo Saegusa; Hitoshi Sakano; Shuji Hashimoto

Principal component analysis (PCA) is an effective method of linear dimensionality reduction. Because of its simplicity in theory and implementation, it is often used for analysis in various disciplines. However, because of its linearity, PCA is not always suitable and can be redundant in expressing data. To overcome this problem, several nonlinear PCA methods have been proposed; however, most of them have drawbacks, such as requiring the number of principal components to be predetermined and not explicitly defining the order of the generated principal components. In this paper, we propose a nonlinear PCA algorithm that nonlinearly transforms data into principal components while preserving their order, and we also propose a hierarchical neural network model to perform the algorithm. Moreover, our method does not need to know the number of principal components in advance. The effectiveness of the proposed model is shown through experiments.
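As a rough illustration of the ordering property (not the paper's hierarchical network), the sketch below greedily extracts one nonlinear component at a time from the residual left by the previous components, so earlier components account for more reconstruction error than later ones; the one-unit-bottleneck autoencoder, toy data, and hyperparameters are assumptions made for the example.

```python
# Greedy, order-preserving nonlinear component extraction (illustrative sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                     # toy data (hypothetical)
residual = X - X.mean(axis=0)

for k in range(3):                                # components can be added open-endedly
    # one-unit bottleneck autoencoder: one nonlinear "principal component"
    ae = MLPRegressor(hidden_layer_sizes=(16, 1, 16), activation='tanh',
                      max_iter=2000, random_state=k)
    ae.fit(residual, residual)                    # autoencode the current residual
    recon = ae.predict(residual)
    explained = 1 - np.var(residual - recon) / np.var(residual)
    print(f"component {k}: fraction of residual variance explained = {explained:.3f}")
    residual = residual - recon                   # later components see only what is left
```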


Robotics and Biomimetics | 2009

Active motor babbling for sensorimotor learning

Ryo Saegusa; Giorgio Metta; Giulio Sandini; Sophie Sakka

For a complex autonomous robotic system such as a humanoid robot, motor-babbling-based sensorimotor learning is considered an effective method for autonomously developing an internal model of the robot's own body and the environment. In this paper, we propose a method of sensorimotor learning and evaluate its performance in active learning. The proposed model is characterized by a function we call "confidence", which is a measure of the reliability of state prediction and control. The confidence for a state is a useful measure for biasing the next exploration strategy of data sampling, directing attention to areas of the state domain that are less reliably predicted and controlled. We consider the confidence function to be a first step toward an active behavior design for autonomous environment adaptation. The approach was experimentally validated using the humanoid robot James.
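A minimal sketch of the confidence idea, under assumptions of our own (a one-dimensional motor space split into bins and a toy plant, neither from the paper): the sampler keeps a running prediction error per region and draws the next motor command preferentially from regions where prediction is still unreliable.

```python
# Confidence-biased motor babbling on a toy 1-D plant (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
n_bins = 10
error = np.ones(n_bins)                  # running prediction error per motor region
model = np.zeros(n_bins)                 # crude piecewise-constant forward model

def plant(u):                            # hypothetical sensor response to motor command u
    return np.sin(3.0 * u)

for step in range(500):
    confidence = 1.0 / (1.0 + error)     # high error -> low confidence
    p = (1.0 - confidence) / (1.0 - confidence).sum()
    b = rng.choice(n_bins, p=p)          # bias sampling toward low-confidence regions
    u = (b + rng.random()) / n_bins      # motor command inside the chosen region
    y = plant(u)
    e = abs(y - model[b])
    model[b] += 0.2 * (y - model[b])     # update the forward model
    error[b] += 0.1 * (e - error[b])     # update the running error estimate

print("per-region confidence:", np.round(1.0 / (1.0 + error), 2))
```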


IEEE-RAS International Conference on Humanoid Robots | 2010

Learning the skill of archery by a humanoid robot iCub

Petar Kormushev; Sylvain Calinon; Ryo Saegusa; Giorgio Metta

We present an integrated approach that allows the humanoid robot iCub to learn the skill of archery. After being instructed how to hold the bow and release the arrow, the robot learns by itself to shoot the arrow in such a way that it hits the center of the target. Two learning algorithms are proposed and compared for learning the bi-manual skill: one based on Expectation-Maximization reinforcement learning, and one based on chained vector regression, called the ARCHER algorithm. Both algorithms are used to modulate and coordinate the motion of the two hands, while an inverse kinematics controller is used for the motion of the arms. The image processing part recognizes where the arrow hits the target and is based on Gaussian mixture models for color-based detection of the target and the arrow's tip. The approach is evaluated on the 53-DOF humanoid robot iCub.
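To make the image-processing step concrete, here is a minimal sketch of Gaussian-mixture, color-based detection (an illustration, not the authors' code): a mixture is fitted to example pixels of the target color, and pixels of a new frame are labeled by thresholding their log-likelihood under the mixture; the training pixels, frame, and threshold are all hypothetical.

```python
# GMM-based colour detection of a target region (illustrative sketch).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical training pixels (RGB in [0, 1]) sampled from the target colour
target_pixels = rng.normal(loc=[0.8, 0.1, 0.1], scale=0.05, size=(300, 3))

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(target_pixels)

frame = rng.random(size=(120, 160, 3))          # hypothetical camera frame
loglik = gmm.score_samples(frame.reshape(-1, 3)).reshape(120, 160)
mask = loglik > -5.0                            # hand-chosen likelihood threshold
print("candidate target pixels:", int(mask.sum()))
```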


Robot and Human Interactive Communication | 2007

Autonomous navigation of a mobile robot based on passive RFID

Sunhong Park; Ryo Saegusa; Shuji Hashimoto

This paper describes a novel approach to autonomous navigation for a mobile robot based on passive RFID, intended for use in human living environments. Conventional approaches based on dead reckoning or landmarks are affected by disturbances such as lighting conditions and obstacles, whereas the proposed method obtains environmental information more reliably and robustly with the RFID system. The proposed method estimates not only the robot's position but also its current orientation, without additional sensors, from the sequence of detected IC tags. We examined whether the robot reaches the goal when utilizing only the position information and not the orientation information.
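A minimal sketch of how position and orientation can be read off a sequence of detected floor tags (an illustration under an assumed tag layout, not the paper's estimator): the latest tag fixes the position, and the vector between consecutive tag detections fixes the heading.

```python
# Pose estimation from sequentially detected RFID floor tags (illustrative sketch).
import math

# hypothetical map: tag id -> (x, y) position in metres
tag_map = {1: (0.0, 0.0), 2: (0.5, 0.0), 3: (1.0, 0.0), 4: (1.0, 0.5)}

def estimate_pose(detections):
    """detections: ordered list of tag ids read while the robot moves."""
    x, y = tag_map[detections[-1]]              # position from the latest tag
    px, py = tag_map[detections[-2]]            # previous tag fixes the heading
    theta = math.atan2(y - py, x - px)
    return x, y, theta

x, y, theta = estimate_pose([1, 2, 3, 4])
print(f"pose: x={x:.2f} m, y={y:.2f} m, heading={math.degrees(theta):.0f} deg")
```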


Intelligent Robots and Systems | 2010

Own body perception based on visuomotor correlation

Ryo Saegusa; Giorgio Metta; Giulio Sandini

This work proposes a plausible approach for a humanoid robot to identify its own body parts based on the correlation of two different sensory signals: vision and proprioception. A high correlation between motion in vision and in proprioception informs the robot that the visually attended object is related to the motor function of its own body. When the robot finds a highly motor-correlated object during head-arm movements, visuomotor cues such as the body posture and visual features are stored in a visuomotor memory. The robot then developmentally defines the motor-correlated objects as its own body parts, without prior knowledge of body appearance or kinematics. The approach also adapts to extended body parts such as a grasped tool. Body movements are generated by stochastic motor babbling, and the visuomotor memory biases the babbling to keep the own-body parts in sight. This memory-based bias towards the own-body parts helps the robot explore the large head-arm joint space. The acquired visuomotor memory is also used to anticipate the own-body image from motor commands, in advance of the body movement. The proposed approach was evaluated on two humanoid platforms: iCub and James.
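A minimal sketch of the visuomotor-correlation test (an illustration with synthetic signals, not the authors' implementation): an image region whose motion correlates strongly with the joint velocities over a time window is labeled as part of the robot's own body.

```python
# Own-body labelling by visuomotor correlation (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
T = 200
joint_vel = np.sin(np.linspace(0, 8 * np.pi, T))           # proprioceptive signal

# hypothetical visual motion of two tracked regions
hand_motion = 0.9 * joint_vel + 0.1 * rng.normal(size=T)   # moves with the arm
ball_motion = rng.normal(size=T)                            # independent object

def is_own_body(visual_motion, motor_signal, threshold=0.7):
    r = np.corrcoef(visual_motion, motor_signal)[0, 1]
    return abs(r) > threshold, r

for name, motion in [("hand-like region", hand_motion), ("ball-like region", ball_motion)]:
    own, r = is_own_body(motion, joint_vel)
    print(f"{name}: correlation {r:+.2f} -> own body: {own}")
```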


IEEE Transactions on Industrial Electronics | 2012

Body Definition Based on Visuomotor Correlation

Ryo Saegusa; Giorgio Metta; Giulio Sandini

This work proposes a plausible approach for a humanoid robot to define its own body based on visuomotor correlation. A high correlation of motion between vision and proprioception informs the robot that a visually moving object is related to the motor function of its own body. When the robot finds a motor-correlated object during motor exploration, visuomotor cues such as the body posture and the visual features of the object are stored in visuomotor memory. The robot then developmentally defines its own body without prior knowledge of body appearance or kinematics. The body definition also adapts to an extended body, such as a tool that the robot is grasping. Body movements are generated in the manner of stochastic motor babbling, while visuomotor memory biases the babbling to keep the body parts in sight. This ego-attracted bias helps the robot explore the joint space more efficiently. After motor exploration, the visuomotor memory allows the robot to anticipate a visual image of its own body from a motor command. The proposed approach was experimentally evaluated with the humanoid robot iCub.


Intelligent Robots and Systems | 2009

Active learning for multiple sensorimotor coordination based on state confidence

Ryo Saegusa; Giorgio Metta; Giulio Sandini

For a complex autonomous robotic system such as a humanoid robot, motor-babbling-based sensorimotor learning is considered an effective method for autonomously developing an internal model of the robot's own body and the environment. However, the learning process requires considerable time for exploration and computation. In this paper, we propose a method of sensorimotor learning that explores the learning domain actively. Our approach shows that an embodied learning system can actively design its own learning process, in contrast to conventional machine learning with passive data access. The proposed model is characterized by a function we call "confidence", which is a measure of the reliability of state control. The confidence for a state is a useful measure for biasing the exploration strategy of data sampling and directing attention to areas of learning interest. We consider the confidence function to be a first step toward an active behavior design for autonomous environment adaptation. The approach was experimentally validated on typical sensorimotor coordination tasks such as arm reaching and object fixation, using the humanoid robot James and the iCub simulator.


IEEE-RAS International Conference on Humanoid Robots | 2007

Sensory prediction for autonomous robots

Ryo Saegusa; Francesco Nori; Giulio Sandini; Giorgio Metta; Sophie Sakka

For a complex autonomous robotic system such as a humanoid robot, learning-based sensory prediction is considered an effective way for the robot to develop a perceptual model of its environment by itself. We developed a learning system that enables an autonomous robot to predict the next sensory information from the current sensory information and the expected action. The system contains a learning procedure and a behavior generation procedure. The learning procedure uses a multilayer perceptron that minimizes the error between a given sensory input and its predicted value. The behavior generation procedure samples the learning data randomly from a uniform probability density function, which is an effective strategy when the system has no assumptions or knowledge about the environment. We also investigated blind sensory prediction, which should allow action planning as well as offer a reliable forecast for the safe evolution of the robot in the environment. The simulation and experimental results show that the system learns the interaction between the robot and the environment with high fidelity.
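As an illustration of the forward-model idea (not the authors' network), the sketch below trains a multilayer perceptron to map the current sensory reading and action to the next sensory reading, with actions drawn uniformly at random as in motor babbling; the plant, state dimensionality, and hyperparameters are assumptions.

```python
# Learned forward model: predict the next sensory state from (state, action).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def plant(s, a):                                 # hypothetical robot/environment
    return 0.9 * s + 0.5 * np.tanh(a)

# collect babbling data with uniformly sampled actions
s = 0.0
S, A, S_next = [], [], []
for _ in range(2000):
    a = rng.uniform(-1.0, 1.0)
    s_next = plant(s, a)
    S.append(s); A.append(a); S_next.append(s_next)
    s = s_next

X = np.column_stack([S, A])
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X, S_next)                               # minimise prediction error

print("predicted next state:", mlp.predict([[0.2, 0.5]])[0])
print("true next state:     ", plant(0.2, 0.5))
```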


Neural Networks | 2012

Self-protective whole body motion for humanoid robots based on synergy of global reaction and local reflex

Toshihiko Shimizu; Ryo Saegusa; Shuhei Ikemoto; Hiroshi Ishiguro; Giorgio Metta

This paper describes a self-protective whole-body motor controller that enables life-long learning of humanoid robots. In order to reduce damage to the robot caused by physical interaction, such as collisions with obstacles, we introduce self-protective behaviors based on the adaptive coordination of full-body global reactions and local limb reflexes. Global reactions produce adaptive whole-body movements to prepare for harmful situations, and the system incrementally learns a more effective association between states and global reactions. Local reflexes, based on force-torque sensing, reduce the impact load on the limbs independently of high-level motor intention. We examined the proposed method with a robot simulator under various conditions and then applied the system to a real humanoid robot.
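A minimal sketch of a local limb reflex of the kind described here (an illustration, not the paper's controller): when the force-torque reading on a limb exceeds a safety threshold, the limb overrides the high-level command and backs off along the direction opposing the measured force; the threshold and gain are hypothetical.

```python
# Local limb reflex triggered by force-torque sensing (illustrative sketch).
import numpy as np

FORCE_LIMIT = 15.0      # N, hypothetical safety threshold
BACKOFF_GAIN = 0.01     # m per N of excess force, hypothetical

def limb_command(high_level_target, ft_force):
    """Return the limb position command, applying the reflex if needed."""
    magnitude = np.linalg.norm(ft_force)
    if magnitude <= FORCE_LIMIT:
        return high_level_target                 # normal operation
    # reflex: retract along the force direction, independent of intention
    direction = ft_force / magnitude
    return high_level_target - BACKOFF_GAIN * (magnitude - FORCE_LIMIT) * direction

target = np.array([0.3, 0.1, 0.2])               # desired hand position (m)
print(limb_command(target, np.array([0.0, 0.0, 5.0])))    # below limit: unchanged
print(limb_command(target, np.array([0.0, 0.0, 40.0])))   # collision: backs off
```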


IEICE Transactions on Information and Systems | 2005

A Nonlinear Principal Component Analysis of Image Data

Ryo Saegusa; Hitoshi Sakano; Shuji Hashimoto

Principal component analysis (PCA) has been applied in various areas such as pattern recognition and data compression. In some cases, however, PCA does not extract the characteristics of the data distribution efficiently. In order to overcome this problem, we have proposed a novel method of nonlinear PCA that preserves the order of principal components. In this paper, we reduce the dimensionality of image data with the proposed method and examine its effectiveness in compression and recognition of the images.

Collaboration


Dive into Ryo Saegusa's collaborations.

Top Co-Authors

Giorgio Metta
Istituto Italiano di Tecnologia

Giulio Sandini
Istituto Italiano di Tecnologia

Lorenzo Natale
Istituto Italiano di Tecnologia