Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kiyotaka Izumi is active.

Publication


Featured research published by Kiyotaka Izumi.


robot and human interactive communication | 2009

Adaptation of robot behaviors toward user perception on fuzzy linguistic information by fuzzy voice feedback

A. G. Buddhika P. Jayasekara; Keigo Watanabe; Kazuo Kiguchi; Kiyotaka Izumi

This paper proposes a method to adapt robot behaviors toward the user's perception through human teaching. A human-friendly robotic system should be able to understand fuzzy linguistic information based on the user's guidance and the environmental conditions. The contextual meaning of fuzzy linguistic information depends on the conditions of the environment. Therefore, the user's perception is acquired to evaluate the fuzzy linguistic information in user commands based on fuzzy voice feedback. The primitive behaviors are evaluated by a behavior evaluation network (BEN). A feedback evaluation system (FES) is introduced to evaluate the user's feedback and correct the robot's perception by adapting the BEN. This yields the adaptation of the system for understanding fuzzy linguistic information in the corresponding environment. A scenario of cooperative rearrangement of the user's working space is simulated to illustrate the system, and is demonstrated using a PA-10 robot manipulator.
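The BEN/FES loop described above can be sketched as a score table linking a fuzzy voice command to primitive behaviors, adapted by graded voice feedback. This is an illustrative sketch only; the command names, behaviors, and update rule are assumptions, not the paper's implementation.

```python
# Illustrative sketch (names and update rule assumed, not the paper's code):
# a behavior evaluation network (BEN) as scores linking a fuzzy voice
# command to primitive behaviors, adapted by graded user feedback in the
# spirit of the feedback evaluation system (FES).

behaviors = ["move_left_small", "move_left_large", "move_right_small"]

# BEN: command -> evaluation score per primitive behavior (uniform prior).
ben = {"move a little left": {b: 1 / len(behaviors) for b in behaviors}}

def select(command):
    """Pick the primitive behavior the BEN currently rates highest."""
    scores = ben[command]
    return max(scores, key=scores.get)

def adapt(command, behavior, feedback, lr=0.5):
    """feedback in [-1, 1]: fuzzy degree of user approval from voice feedback."""
    scores = ben[command]
    scores[behavior] += lr * feedback
    # Renormalize so the scores remain a comparable evaluation over behaviors.
    low = min(scores.values())
    shifted = {b: s - low for b, s in scores.items()}
    total = sum(shifted.values()) or 1.0
    ben[command] = {b: s / total for b, s in shifted.items()}

# User corrects a wrong interpretation, then approves the right one:
adapt("move a little left", "move_left_large", -1.0)   # disapproval
adapt("move a little left", "move_left_small", 1.0)    # approval
print(select("move a little left"))                    # move_left_small
```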


Artificial Life and Robotics | 2009

Understanding user commands by evaluating fuzzy linguistic information based on visual attention

A. G. Buddhika P. Jayasekara; Keigo Watanabe; Kiyotaka Izumi

This article proposes a method for understanding user commands based on visual attention. Normally, fuzzy linguistic terms such as “very little” are commonly included in voice commands. Therefore, a robot’s capacity to understand such information is vital for effective human-robot interaction. However, the quantitative meaning of such information strongly depends on the spatial arrangement of the surrounding environment. Therefore, a visual attention system (VAS) is introduced to evaluate fuzzy linguistic information based on the environmental conditions. It is assumed that the corresponding distance value for a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects. Therefore, a fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands based on the average distance to the surrounding objects. A situation of object manipulation to rearrange the user’s working space is simulated to illustrate the system. This is demonstrated with a PA-10 robot manipulator.
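The idea that a fuzzy term's crisp meaning scales with the spatial arrangement of the scene can be illustrated with a minimal sketch. The scale factors and function names below are hypothetical, not the paper's VCES; the sketch only shows how the same command can yield different crisp distances in cramped versus sparse workspaces.

```python
# Illustrative sketch (scale factors assumed): interpreting a fuzzy
# linguistic distance term relative to the average distance between
# surrounding objects, as the abstract describes for the VCES.

def average_object_distance(positions):
    """Mean pairwise distance between surrounding objects (1-D here for brevity)."""
    pairs = [abs(a - b)
             for i, a in enumerate(positions)
             for b in positions[i + 1:]]
    return sum(pairs) / len(pairs)

# Hypothetical mapping: each fuzzy term is a fraction of the average
# inter-object distance in the current scene.
FUZZY_SCALE = {"very little": 0.25, "little": 0.5, "medium": 1.0, "far": 2.0}

def interpret_distance(term, positions):
    """Turn a fuzzy linguistic distance into a crisp value for this scene."""
    return FUZZY_SCALE[term] * average_object_distance(positions)

cramped = [0.0, 0.1, 0.2]
sparse = [0.0, 1.0, 2.0]
print(interpret_distance("very little", cramped))  # small step in a cramped scene
print(interpret_distance("very little", sparse))   # larger step in a sparse scene
```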


society of instrument and control engineers of japan | 2008

Controlling a robot manipulator with fuzzy voice commands guided by visual motor coordination learning

Buddhika Jayasekara; Keigo Watanabe; Kiyotaka Izumi

This paper proposes a method for learning and controlling an industrial robot manipulator through fuzzy voice commands guided by visual motor coordination. The visual motor coordination learning is implemented by a supervised self-organizing map (SSOM). The study of human-robot communication is an important research area, and among the various communication media, voice is particularly significant. The fuzzy voice commands are used to control the robot, and visual feedback is used to learn precision control based on visual motor coordination, which is learned under the supervision of the teacher's voice commands. The learned system is capable of positioning the robot manipulator at a point in the 3D working space as instructed by a voice command. The proposed idea is demonstrated with a PA-10 industrial manipulator.
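A minimal sketch of the supervised self-organizing map idea follows. The class name, dimensions, and learning rate are assumptions, and the paper's SSOM would also use neighborhood updates omitted here; the sketch only shows the core pairing of a visual part and a motor part per node, with the winner chosen on the visual part.

```python
import random

# Illustrative sketch of a supervised self-organizing map (SSOM) for
# visual-motor coordination: each node stores a visual part (e.g. image
# coordinates) and a motor part (e.g. joint values). The winner is found
# on the visual part, and both parts are pulled toward the training sample.

class SupervisedSOM:
    def __init__(self, n_nodes=20, vis_dim=2, mot_dim=3, seed=0):
        rng = random.Random(seed)
        self.nodes = [([rng.random() for _ in range(vis_dim)],
                       [rng.random() for _ in range(mot_dim)])
                      for _ in range(n_nodes)]

    def _winner(self, vis):
        # Node whose visual part is closest to the observed target.
        return min(range(len(self.nodes)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(self.nodes[i][0], vis)))

    def train(self, vis, mot, lr=0.3):
        # Move the winning node's visual AND motor parts toward the sample.
        i = self._winner(vis)
        v, m = self.nodes[i]
        self.nodes[i] = ([a + lr * (b - a) for a, b in zip(v, vis)],
                         [a + lr * (b - a) for a, b in zip(m, mot)])

    def recall(self, vis):
        """Given a visual target, return the motor command of the winning node."""
        return self.nodes[self._winner(vis)][1]
```

After training on (visual position, joint configuration) pairs, `recall` maps a visual target directly to a motor command, which is the sense in which the map coordinates vision and motion.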


international conference on information and automation | 2008

Interactive Dialogue for Behavior Teaching to Robots based on Primitive Behaviors with Fuzzy Voice Commands

Buddhika Jayasekara; Keigo Watanabe; Kiyotaka Izumi

This paper proposes a method to teach a robot interactively based on behavior primitives. Fuzzy voice commands are used to activate the behaviors in a fuzzy coach-player system scenario. An intention control module is implemented in the interaction manager to handle the interactivity, and a hierarchical behavior memory is introduced to store the learned behaviors. The proposed system is used to learn complex behaviors based on simple posture movements of a manipulator. The fuzzy voice joint commands are used to trigger the primitive behaviors, which represent single joint movements. The proposed method is demonstrated with a PA-10 robot manipulator.


International Journal of Mechatronics and Manufacturing Systems | 2010

Visual evaluation and fuzzy voice commands for controlling a robot manipulator

A. G. Buddhika P. Jayasekara; Keigo Watanabe; Maki K. Habib; Kiyotaka Izumi

This paper proposes a method for learning and controlling an industrial robot manipulator through Fuzzy Voice Commands (FVCs) guided by visual motor coordination. A Supervised Self-Organising Map (SSOM) is proposed to implement the visual motor coordination. The FVCs are used to control the robot manipulator toward a goal. The visual evaluation process is adapted under the supervision of the human coach. The learned system is capable of navigating the robot manipulator to a point in the 3D working space as instructed by the voice commands. The proposed idea is demonstrated with a PA-10 industrial manipulator.


international symposium on industrial electronics | 2009

Posture control of a robot manipulator by evaluating fuzzy linguistic information based on user feedback

A. G. B. P. Jayasekara; Keigo Watanabe; Kiyotaka Izumi; Maki K. Habib

This paper proposes a method for controlling the posture of a robot manipulator by fuzzy voice commands. A human-friendly robotic system should be able to understand fuzzy linguistic information based on the user's guidance and the environmental conditions. The contextual meaning of fuzzy linguistic information depends on the conditions of the environment. Therefore, the user's feedback is evaluated to understand the fuzzy linguistic information related to the posture movements. The primitive posture movements are evaluated by the behavior evaluation network (BEN). A feedback evaluation system (FES) is introduced to evaluate the user's feedback and correct the robot's perception by adapting the BEN. This enhances the capability of evaluating fuzzy linguistic information based on the current context. A selected set of posture movements is used to illustrate the system with a PA-10 robot manipulator.


international conference on control, automation and systems | 2008

Voice control of a robotic forceps using hierarchical instructions

Kiyotaka Izumi; Shinichi Ishii; Keigo Watanabe

A robotic forceps is controlled by voice instructions in the framework of a fuzzy coach-player system. The proposed system can deal with the fuzziness included in voice instructions, and it is composed of two instruction levels in order to increase the efficiency of voice instructions. One is a local instruction level that uses action commands directly. The other is a global instruction level that uses a task command. This fuzzy coach-player system is applied to the manipulation of a robotic forceps, and the effectiveness of the present system is verified through actual experiments.


2009 ICCAS-SICE | 2009

Voice instructions for controlling a robotic forceps with image and auxiliary information

Kiyotaka Izumi; Takuya Tokunaga; Keigo Watanabe


Journal of System Design and Dynamics | 2010

Interpreting Fuzzy Linguistic Information by Acquiring Robot's Experience Based on Internal Rehearsal

A. G. Buddhika P. Jayasekara; Keigo Watanabe; Kazuo Kiguchi; Kiyotaka Izumi


2009 ICCAS-SICE | 2009

Evaluating fuzzy voice commands by internal rehearsal for controlling a robot manipulator

A. G. B. P. Jayasekara; Keigo Watanabe; Kiyotaka Izumi

Collaboration


Dive into Kiyotaka Izumi's collaborations.

Top Co-Authors

Maki K. Habib

American University in Cairo
