Publication


Featured research published by Shoichi Noda.


Machine Learning | 1996

Purposive behavior acquisition for a real robot by vision-based reinforcement learning

Minoru Asada; Shoichi Noda; Sukoya Tawaratsumida; Koh Hosoda

This paper presents a method of vision-based reinforcement learning by which a robot learns to shoot a ball into a goal. We discuss several issues in applying reinforcement learning to a real robot with a vision sensor, through which the robot obtains information about changes in its environment. First, we construct a state space in terms of the size, position, and orientation of the ball and the goal in the image, and an action space in terms of the commands sent to the left and right motors of a mobile robot. Constructing state and action spaces that directly reflect the outputs of physical sensors and actuators in this way causes a “state-action deviation” problem. To deal with this issue, the action set is constructed so that one action consists of a series of the same action primitive, executed repeatedly until the current state changes. Next, to speed up learning, a mechanism called Learning from Easy Missions (LEM) is implemented. LEM reduces the learning time from exponential to almost linear order in the size of the state space. Results of computer simulations and real robot experiments are given.
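As a rough illustration of the action semantics described above, the sketch below shows a table-based Q-learning loop in which the chosen action primitive is repeated until the discretized state changes. This is only a sketch under assumed interfaces: the environment object, its observe/step_primitive methods, the primitive set, and the learning constants are hypothetical placeholders rather than the authors' implementation.

```python
import random
from collections import defaultdict

# Hypothetical primitive set and learning constants (not from the paper).
PRIMITIVES = ["forward", "backward", "turn_left", "turn_right", "stop"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the primitive set."""
    if random.random() < EPSILON:
        return random.choice(PRIMITIVES)
    return max(PRIMITIVES, key=lambda a: Q[(state, a)])

def run_episode(env, max_steps=500):
    """One learning episode; env.observe/env.step_primitive are assumed."""
    state = env.observe()  # e.g. discretized ball/goal size-position-orientation
    for _ in range(max_steps):
        action = choose_action(state)
        next_state, reward, done = env.step_primitive(action)
        total_reward = reward
        # Repeat the same primitive until the discretized state changes,
        # which is how the abstract describes handling state-action deviation.
        while next_state == state and not done:
            next_state, reward, done = env.step_primitive(action)
            total_reward += reward
        best_next = max(Q[(next_state, a)] for a in PRIMITIVES)
        Q[(state, action)] += ALPHA * (total_reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
        if done:
            break
```

Learning from Easy Missions would sit outside this loop, ordering episodes from easy initial configurations (e.g., near the goal) toward harder ones; it is not shown here.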


Intelligent Robots and Systems | 1994

Coordination of multiple behaviors acquired by a vision-based reinforcement learning

Minoru Asada; Eiji Uchibe; Shoichi Noda; Sukoya Tawaratsumida; Koh Hosoda

A method is proposed that accomplishes a whole task consisting of multiple subtasks by coordinating behaviors acquired through vision-based reinforcement learning. First, the individual behaviors that achieve the corresponding subtasks are acquired independently by Q-learning, a widely used reinforcement learning method. Each learned behavior can be represented by an action-value function defined over environment states and robot actions. Next, three kinds of coordination of multiple behaviors are considered: simple summation of the different action-value functions, switching between action-value functions according to the situation, and learning with previously obtained action-value functions as initial values of a new action-value function. A task of shooting a ball into a goal while avoiding collisions with an enemy is examined. The task can be decomposed into a ball-shooting subtask and a collision-avoiding subtask. These subtasks should be accomplished simultaneously, but they are not independent of each other.
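The three coordination schemes named in this abstract can be sketched roughly as below, assuming two previously learned action-value tables (here called q_shoot and q_avoid) keyed by (state, action) pairs. The names, the switching condition, and the choice of initialization are illustrative assumptions, not the paper's code.

```python
# q_shoot and q_avoid are plain dicts mapping (state, action) -> value,
# assumed to come from two independently learned behaviors.

def coordinate_by_summation(q_shoot, q_avoid, state, actions):
    """Pick the action maximizing the simple sum of the two value functions."""
    return max(actions, key=lambda a: q_shoot.get((state, a), 0.0) + q_avoid.get((state, a), 0.0))

def coordinate_by_switching(q_shoot, q_avoid, state, actions, enemy_close):
    """Switch between behaviors according to the situation (condition assumed)."""
    q = q_avoid if enemy_close else q_shoot
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def init_from_previous(q_shoot, q_avoid):
    """Seed a new action-value table from the old ones (here, their sum) and
    continue Q-learning on the combined task from these initial values."""
    q_new = {}
    for key in set(q_shoot) | set(q_avoid):
        q_new[key] = q_shoot.get(key, 0.0) + q_avoid.get(key, 0.0)
    return q_new
```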


IEICE Technical Report. Pattern Recognition and Understanding | 1994

Vision-Based Behavior Acquisition for a Shooting Robot by Using a Reinforcement Learning

Minoru Asada; Shoichi Noda; Sukoya Tawaratsumida; Koh Hosoda

We propose a method by which a mobile robot acquires purposive behavior for shooting a ball into a goal using vision-based reinforcement learning. The mobile robot (the agent) does not need to know any parameters of the 3-D environment or its own kinematics/dynamics. The only information about changes in the environment is the image captured by a single TV camera mounted on the robot. An action-value function defined over states is learned, where the image positions of the ball and the goal serve as state variables that reflect the effect of the action previously taken. After the learning process, the robot tries to carry the ball near the goal and shoot it. Both computer simulation and real robot experiments are presented, and the role of vision in vision-based reinforcement learning is discussed.
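As an illustration of the kind of visual state described here, the sketch below bins the image positions (and apparent sizes) of the ball and the goal into a small symbolic state. The bin boundaries, the detector output format, and the image width are hypothetical, not taken from the paper.

```python
# Hypothetical discretization of ball/goal observations into a compact state.

def bin_position(x, width, n_bins=3):
    """Map a pixel x-coordinate into left / center / right style bins."""
    return min(int(x / (width / n_bins)), n_bins - 1)

def bin_size(area, thresholds=(500, 2000)):
    """Map an apparent area in pixels into small / medium / large."""
    for i, t in enumerate(thresholds):
        if area < t:
            return i
    return len(thresholds)

def visual_state(ball, goal, image_width=320):
    """ball and goal are assumed (x, area) tuples from a detector, or None if lost."""
    def encode(obj):
        if obj is None:
            return ("lost",)
        x, area = obj
        return (bin_position(x, image_width), bin_size(area))
    return encode(ball) + encode(goal)
```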


Journal of the Robotics Society of Japan | 1995

Purposive Behavior Acquisition for a Robot by Vision-Based Reinforcement Learning

Minoru Asada; Shoichi Noda; Sukoya Tawaratsumida; Koh Hosoda


Intelligent Robots and Systems | 1996

Action-based sensor space categorization for robot learning

Minoru Asada; Shoichi Noda; Koh Hosoda


International Conference on Robotics and Automation | 1995

Vision-based reinforcement learning for purposive behavior acquisition

Minoru Asada; Shoichi Noda; Sukoya Tawaratsumida; Koh Hosoda


Journal of the Robotics Society of Japan | 1997

Action-Based State Space Construction for Robot Learning

Minoru Asada; Shoichi Noda; Koh Hosoda


Archive | 1995

Non-Physical Intervention in Robot Learning Based on LfE Method

Minoru Asada; Shoichi Noda; Koh Hosoda


Archive | 2007

Sensor Space Segmentation for Mobile Robot Learning

Yasutake Takahashi; Minoru Asada; Shoichi Noda


Archive | 2004

Vision-Based Reinforcement Learning for RoboCup: Towards Real Robot Competition

Eiji Uchibe; Minoru Asada; Shoichi Noda; Yasutake Takahashi
