
Publications

Featured research published by Koh Hosoda.


IEEE Transactions on Autonomous Mental Development | 2009

Cognitive Developmental Robotics: A Survey

Minoru Asada; Koh Hosoda; Yasuo Kuniyoshi; Hiroshi Ishiguro; Toshio Inui; Yuichiro Yoshikawa; Masaki Ogino; Chisato Yoshida

Cognitive developmental robotics (CDR) aims to provide a new understanding of how humans' higher cognitive functions develop, by means of a synthetic approach that developmentally constructs cognitive functions. The core idea of CDR is "physical embodiment," which enables information structuring through interactions with the environment, including other agents. The idea is shaped around a hypothesized development model of human cognitive functions, from body representation to social behavior. Along with the model, studies of CDR and related work are introduced, and the model and future issues are discussed.


Intelligent Robots and Systems | 1994

Versatile visual servoing without knowledge of true Jacobian

Koh Hosoda; Minoru Asada

This paper proposes a versatile visual servoing control scheme with a Jacobian matrix estimator. The estimator needs no a priori knowledge of the kinematic structure and parameters of the robot system, such as camera and link parameters. Using the estimated Jacobian matrix, the proposed scheme ensures convergence of the image features to desired trajectories, which is proved by Lyapunov stability theory. Simulation and experimental results demonstrate the effectiveness of the proposed scheme.
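
The scheme's central trick, servoing on an image Jacobian estimated online rather than derived from calibration, can be sketched as below. The Broyden-style rank-one update and all function names here are illustrative assumptions, not the paper's exact estimator or stability proof.

```python
import numpy as np

def broyden_update(J_hat, dq, ds, lam=0.5):
    """Rank-one update of the estimated image Jacobian J_hat from the last
    observed joint displacement dq and image-feature displacement ds."""
    denom = dq @ dq
    if denom > 1e-9:
        J_hat = J_hat + lam * np.outer(ds - J_hat @ dq, dq) / denom
    return J_hat

def servo_step(J_hat, s, s_des, gain=0.1):
    """One uncalibrated visual-servoing step: compute a joint displacement
    that drives the image features s toward s_des using the pseudo-inverse
    of the current Jacobian estimate."""
    return gain * np.linalg.pinv(J_hat) @ (s_des - s)
```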


Machine Learning | 1996

Purposive behavior acquisition for a real robot by vision-based reinforcement learning

Minoru Asada; Shoichi Noda; Sukoya Tawaratsumida; Koh Hosoda

This paper presents a method of vision-based reinforcement learning by which a robot learns to shoot a ball into a goal. We discuss several issues in applying reinforcement learning to a real robot with a vision sensor, through which the robot obtains information about changes in its environment. First, we construct a state space in terms of the size, position, and orientation of the ball and the goal in the image, and an action space in terms of the commands sent to the left and right motors of a mobile robot. Because the state and action spaces reflect the outputs of physical sensors and actuators, respectively, this causes a "state-action deviation" problem. To deal with this issue, the action set is constructed so that one action consists of the same action primitive executed repeatedly until the current state changes. Next, to shorten the learning time, a mechanism of Learning from Easy Missions (LEM) is implemented. LEM reduces the learning time from exponential to almost linear order in the size of the state space. Results of computer simulations and real robot experiments are given.
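
A minimal sketch of the remedy for the state-action deviation problem follows: an action repeats the same motor primitive until the coarsely discretized state changes, and only then is a Q-learning backup applied. The environment interface (env.step, env.discretize, env.current_observation) is a hypothetical stand-in, not the paper's software.

```python
import numpy as np

def run_primitive_until_state_change(env, s, primitive, max_steps=50):
    """Repeat one motor primitive until the discretized state leaves s,
    so that one 'action' spans a perceptible state transition."""
    total_reward, obs = 0.0, env.current_observation()
    for _ in range(max_steps):
        obs, r, done = env.step(primitive)   # hypothetical interface
        total_reward += r
        if env.discretize(obs) != s or done:
            break
    return env.discretize(obs), total_reward

def q_backup(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update on the coarse state space."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```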


Robotics and Autonomous Systems | 2006

Anthropomorphic Robotic Soft Fingertip with Randomly Distributed Receptors

Koh Hosoda; Yasunori Tada; Minoru Asada

To improve the manipulation ability of robotic fingers, this paper proposes a design for an anthropomorphic soft fingertip with distributed receptors. The fingertip consists of two silicone rubber layers of different hardness containing two kinds of receptors: strain gauges and PVDF (polyvinylidene fluoride) films. The structure of the fingertip is similar to that of a human fingertip: it consists of a bone, a body, a skin layer, and randomly distributed receptors inside. Experimental results demonstrate the discriminating ability of the fingertip: it can discriminate five different materials by pushing and rubbing the objects.
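
One plausible way to turn such receptor signals into a material label is sketched below: a static-deformation feature from the strain gauges, a vibration feature from the PVDF films, and a nearest-centroid classifier. The features and classifier are assumptions for illustration; the paper's actual discrimination procedure may differ.

```python
import numpy as np

def receptor_features(strain, pvdf):
    """Two crude features: mean strain (static deformation while pushing)
    and PVDF vibration energy (high-frequency response while rubbing)."""
    vib = pvdf - np.mean(pvdf)
    return np.array([np.mean(strain), np.mean(vib ** 2)])

def classify_material(feature, centroids):
    """Label a touch with the nearest material centroid (Euclidean).
    centroids: dict mapping material name -> reference feature vector."""
    return min(centroids, key=lambda m: np.linalg.norm(feature - centroids[m]))
```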


Artificial Intelligence | 1999

Cooperative behavior acquisition for mobile robots in dynamically changing real worlds via vision-based reinforcement learning and development

Minoru Asada; Eiji Uchibe; Koh Hosoda

In this paper, we first discuss the meaning of physical embodiment and the complexity of the environment in the context of multi-agent learning. We then propose a vision-based reinforcement learning method that acquires cooperative behaviors in a dynamic environment. We use the robot soccer game initiated by RoboCup (Kitano et al., 1997) to illustrate the effectiveness of our method. Each agent works with other team members to achieve a common goal against opponents. Our method estimates the relationships between a learner's behaviors and those of other agents in the environment through interactions (observations and actions), using a technique from system identification. To identify the model of each agent, Akaike's Information Criterion is applied to the results of Canonical Variate Analysis to clarify the relationship between the observed data, in terms of actions, and future observations. Next, reinforcement learning based on the estimated state vectors is performed to obtain the optimal behavior policy. The proposed method is applied to a soccer-playing situation. The method successfully models a rolling ball and other moving agents and acquires the learner's behaviors. Computer simulations and real experiments are shown, and a discussion is given.
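
The identification step can be caricatured as follows: fit linear predictors of future observations from stacked past observations and actions for several history lengths, and keep the order with the lowest AIC. This replaces the paper's Canonical Variate Analysis with plain least squares for brevity; all shapes and constants are assumptions.

```python
import numpy as np

def select_order_by_aic(obs, act, max_lag=5):
    """Choose the history length (model order) minimizing AIC for a linear
    predictor obs[t+1] ~ W . [obs[t-lag:t], act[t-lag:t]].
    obs: (T, d) array of observations; act: (T, m) array of actions."""
    best = None
    for lag in range(1, max_lag + 1):
        X = np.array([np.concatenate([obs[t - lag:t].ravel(),
                                      act[t - lag:t].ravel()])
                      for t in range(lag, len(obs) - 1)])
        Y = np.array([obs[t + 1] for t in range(lag, len(obs) - 1)])
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        rss = np.sum((Y - X @ W) ** 2)
        aic = len(X) * np.log(rss / len(X) + 1e-12) + 2 * W.size
        if best is None or aic < best[0]:
            best = (aic, lag, W)
    return best   # (aic, chosen lag, predictor weights)
```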


Robotics and Autonomous Systems | 2008

Biped robot design powered by antagonistic pneumatic actuators for multi-modal locomotion

Koh Hosoda; Takashi Takuma; Atsushi Nakamoto; Shinji Hayashi

An antagonistic muscle mechanism that regulates joint compliance contributes enormously to human dynamic locomotion. Antagonism is considered key to realizing more than one locomotion mode. In this paper, we demonstrate how antagonistic pneumatic actuators can be utilized to achieve three dynamic locomotion modes (walking, jumping, and running) in a biped robot. First, we discuss the contribution of joint compliance to dynamic locomotion, which highlights the importance of tunable compliance. Second, we introduce the design of a biped robot powered by antagonistic pneumatic actuators. Finally, we apply simple feedforward controllers to realize walking, jumping, and running, and confirm the contribution of joint compliance to such multi-modal dynamic locomotion. Based on the results, we conclude that antagonistic pneumatic actuators are strong candidates for constructing a human-like dynamic locomotor.
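
Why antagonism gives tunable compliance can be seen in a common linearized McKibben-muscle model, sketched below: the pressure difference sets joint torque while the pressure sum sets joint stiffness, so one actuator pair can be retuned for walking, jumping, or running. All constants are hypothetical, not the robot's values.

```python
def antagonistic_joint(theta, p_flex, p_ext, r=0.02, a=100.0, b=400.0):
    """Antagonistic pneumatic pair under the linearized muscle model
    F = p * (a - b * x), where x is the muscle contraction.

    theta         : joint angle (rad); flexor contracts by r*theta,
                    extensor lengthens by the same amount
    p_flex, p_ext : flexor / extensor pressures
    r             : moment arm (m); a, b: model coefficients
    """
    torque = r * (p_flex * (a - b * r * theta) - p_ext * (a + b * r * theta))
    stiffness = r * r * b * (p_flex + p_ext)   # = -d(torque)/d(theta)
    return torque, stiffness
```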


Autonomous Robots | 2010

Pneumatic-driven jumping robot with anthropomorphic muscular skeleton structure

Koh Hosoda; Yuki Sakaguchi; Hitoshi Takayama; Takashi Takuma

The human musculoskeletal structure plays an important role in adaptive locomotion. Understanding its mechanism is expected to help realize adaptive locomotion in humanoid robots as well. In this paper, a jumping robot driven by pneumatic artificial muscles is designed to duplicate human leg structure and function. It has three joints and nine muscles, three of which are biarticular. To control such a redundant robot, we take biomechanical findings into account: biarticular muscles mainly contribute to joint coordination, whereas monoarticular muscles mainly provide power. Through experiments, we find that (1) the biarticular muscles realize coordinated movement of the joints when the knee and/or hip is extended, (2) extension of the ankle does not lead to coordinated movement, and (3) extension of the knee can be superposed with extension of the hip without losing joint coordination. The obtained knowledge can be used not only for robots but may also contribute to the understanding of adaptive human mechanisms.
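
Finding (3), that knee and hip extension commands superpose without destroying coordination, suggests a simple command structure like the sketch below, where activation patterns over the nine muscles are added linearly. The patterns and muscle ordering are entirely hypothetical.

```python
import numpy as np

# Nine muscles: indices 0-5 monoarticular (power), 6-8 biarticular
# (coordination). These activation patterns are illustrative only.
KNEE_EXT = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.3, 0.0, 0.0])
HIP_EXT  = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.3, 0.0])

def superposed_command(knee_gain, hip_gain):
    """Superpose knee- and hip-extension patterns into one muscle command;
    activations are clipped to the physically valid range [0, 1]."""
    return np.clip(knee_gain * KNEE_EXT + hip_gain * HIP_EXT, 0.0, 1.0)
```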


Intelligent Robots and Systems | 1996

Behavior coordination for a mobile robot using modular reinforcement learning

Eiji Uchibe; Minoru Asada; Koh Hosoda

Coordinating multiple behaviors independently obtained by reinforcement learning is one of the issues that must be addressed for the method to scale to larger and more complex robot learning tasks. Directly combining all the state spaces of the individual modules (subtasks) requires enormous learning time and causes hidden states. This paper presents a modular learning method that coordinates multiple behaviors while taking account of a trade-off between learning time and performance. First, to reduce the learning time, the whole state space is classified into two categories based on the action values separately obtained by Q-learning: the area where one of the learned behaviors is directly applicable (the no-more-learning area), and the area where learning is necessary due to competition among multiple behaviors (the re-learning area). Second, hidden states are detected by fitting models to the learned action values based on an information criterion. Finally, the initial action values in the re-learning area are adjusted so that they are consistent with the values in the no-more-learning area. The method is applied to one-to-one soccer-playing robots. Computer simulation and real robot experiments are given to show the validity of the proposed method.
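
A toy version of the first step, splitting states into the no-more-learning and re-learning areas from the modules' Q-tables, might look like the sketch below. The agreement test and the averaging initialization are simplifications of the paper's action-value criterion.

```python
import numpy as np

def classify_states(q_tables):
    """States where all modules' greedy actions agree need no more learning;
    states where they disagree (behaviors compete) must be re-learned.
    q_tables: list of (n_states, n_actions) arrays, one per module."""
    return [s for s in range(q_tables[0].shape[0])
            if len({int(np.argmax(q[s])) for q in q_tables}) > 1]

def init_relearn_values(q_tables, relearn):
    """Initialize re-learning-area action values consistently with the
    already-learned areas, here simply by averaging the module Q-values."""
    q_mean = np.mean(q_tables, axis=0)
    return {s: q_mean[s].copy() for s in relearn}
```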


IEEE Robotics & Automation Magazine | 1998

Adaptive hybrid control for visual and force servoing in an unknown environment

Koh Hosoda; Katsuji Igarashi; Minoru Asada

An adaptive robot controller is proposed to achieve a contact task with an unknown environment while the robot is visually guided. Since the proposed controller has online estimators for the parameters of the camera-manipulator system and the unknown constraint surface, it needs no a priori knowledge besides the manipulator kinematics. Experimental results validate the proposed scheme.
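
To make the idea of an online constraint-surface estimator concrete, the sketch below recursively estimates the normal of an unknown planar constraint n . x = 1 from measured contact points (the plane is assumed not to pass through the origin). It is plain recursive least squares, a stand-in for the paper's estimator, with hypothetical gains.

```python
import numpy as np

class SurfaceEstimator:
    """Recursive least-squares estimate of an unknown planar constraint
    n . x = 1 from contact-point measurements x."""

    def __init__(self, dim=3, forget=0.99):
        self.n_hat = np.zeros(dim)       # unnormalized normal estimate
        self.P = np.eye(dim) * 100.0     # estimate covariance
        self.forget = forget             # forgetting factor in (0, 1]

    def update(self, x):
        """Refine the estimate from one measured contact point x and
        return the current unit normal of the constraint surface."""
        err = 1.0 - self.n_hat @ x
        g = self.P @ x / (self.forget + x @ self.P @ x)
        self.n_hat = self.n_hat + g * err
        self.P = (self.P - np.outer(g, x @ self.P)) / self.forget
        return self.n_hat / (np.linalg.norm(self.n_hat) + 1e-9)
```

In a hybrid controller, the estimated normal would split the command space into a force-controlled direction (along the normal) and visually servoed directions (in the tangent plane).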


Intelligent Robots and Systems | 1996

Reasonable performance in less learning time by real robot based on incremental state space segmentation

Yasutake Takahashi; Minoru Asada; Koh Hosoda

Reinforcement learning has recently been receiving increased attention as a method for robot learning that requires little or no a priori knowledge and offers greater capability for reactive and adaptive behaviors. However, there are two major problems in applying it to real robot tasks: how to construct the state space, and how to reduce the learning time. This paper presents a method by which a robot learns purposive behavior in less learning time by incrementally segmenting the sensor space based on its experiences. The incremental segmentation is performed by constructing local models in the state space; it is based on function approximation of the sensor outputs, to reduce the learning time, and on the reinforcement signal, to help purposive behavior emerge. The method is applied to a soccer robot that tries to shoot a ball into a goal. Experiments with computer simulations and a real robot are shown. As a result, our real robot learned a shooting behavior in less than one hour of training by incrementally segmenting the state space.
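
An error-driven caricature of the incremental segmentation follows: each region of the sensor space keeps a local linear model of the sensor dynamics and splits along its widest axis when the model stops fitting. The split rule and thresholds are assumptions, not the paper's exact criteria.

```python
import numpy as np

class Region:
    """One cell of the segmented sensor space; low/high are numpy arrays
    bounding the cell, samples holds (sensor, next_sensor) experiences."""
    def __init__(self, low, high):
        self.low, self.high, self.samples = low, high, []

def model_error(samples):
    """Fit a local linear model s' ~ W s and return its mean residual."""
    X = np.array([s for s, _ in samples])
    Y = np.array([s2 for _, s2 in samples])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.mean(np.linalg.norm(Y - X @ W, axis=1))

def maybe_split(region, tol=0.05, min_samples=20):
    """Split a region in half along its widest axis once the local model's
    prediction error exceeds tol; otherwise leave it intact."""
    if len(region.samples) < min_samples or model_error(region.samples) < tol:
        return [region]
    axis = int(np.argmax(region.high - region.low))
    mid = 0.5 * (region.low[axis] + region.high[axis])
    upper_of_left = region.high.copy(); upper_of_left[axis] = mid
    lower_of_right = region.low.copy(); lower_of_right[axis] = mid
    left = Region(region.low, upper_of_left)
    right = Region(lower_of_right, region.high)
    for s, s2 in region.samples:
        (left if s[axis] < mid else right).samples.append((s, s2))
    return [left, right]
```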

Collaboration


Dive into Koh Hosoda's collaborations.

Top Co-Authors

Takashi Takuma

Osaka Institute of Technology
