Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Yoshihiro Kawai is active.

Publication


Featured research published by Yoshihiro Kawai.


Intelligent Robots and Systems | 2004

Robust speech interface based on audio and video information fusion for humanoid HRP-2

Isao Hara; Futoshi Asano; Hideki Asoh; Jun Ogata; Naoyuki Ichimura; Yoshihiro Kawai; Fumio Kanehiro; Hirohisa Hirukawa; Kiyoshi Yamamoto

For cooperative work of robots and humans in the real world, a communicative function based on speech is indispensable for robots. To realize such a function in a noisy real environment, it is essential that robots be able to extract target speech spoken by humans from a mixture of sounds by their own resources. We have developed a method of detecting and extracting speech events based on the fusion of audio and video information. In this method, audio information (sound localization using a microphone array) and video information (human tracking using a camera) are fused by a Bayesian network to enable the detection of speech events. The information of detected speech events is then utilized in sound separation using adaptive beamforming. In this paper, some basic investigations for applying the above system to the humanoid robot HRP-2 are reported. Input devices, namely a microphone array and a camera, were mounted on the head of HRP-2, and acoustic characteristics for sound localization/separation performance were investigated. In addition, the human tracking system was improved so that it can be used in dynamic situations. Finally, the overall performance of the system was tested in off-line experiments.
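The fusion step can be pictured with a small numerical sketch. The following is a minimal, illustrative example (not the authors' implementation) of combining the confidences of an audio detector (sound localization) and a video detector (human tracking) under a naive independence assumption; the prior and the helper function are hypothetical.

    def fuse_speech_event(p_audio, p_video, prior=0.3):
        """Naive-Bayes fusion of two single-modality detector outputs into one
        posterior probability that a speech event is occurring.  p_audio and
        p_video are assumed to be the posteriors each detector would report on
        its own under the same prior (illustrative values, not the paper's)."""
        def likelihood_ratio(p):
            # Recover P(observation | event) / P(observation | no event).
            return (p / (1.0 - p)) * ((1.0 - prior) / prior)

        odds = likelihood_ratio(p_audio) * likelihood_ratio(p_video) * prior / (1.0 - prior)
        return odds / (1.0 + odds)

    # Example: audio localization is fairly confident, the tracker is very confident.
    print(fuse_speech_event(p_audio=0.7, p_video=0.9))   # -> roughly 0.98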


International Conference on Robotics and Automation | 2003

Cooperative works by a human and a humanoid robot

Kazuhiko Yokoyama; Hiroyuki Handa; Takakatsu Isozumi; Yutaro Fukase; Kenji Kaneko; Fumio Kanehiro; Yoshihiro Kawai; Fumiaki Tomita; Hirohisa Hirukawa

We have developed a humanoid robot, HRP-2P, with a biped locomotion controller, stereo vision software, and an aural human interface to realize cooperative work between a human and a humanoid robot. The robot can find a target object by vision and carry it cooperatively with a human by biped locomotion according to voice commands from the human. A cooperative control is applied to the arms of the robot while it carries the object, and the walking direction of the robot is controlled by the interaction force and torque measured by the force/torque sensors on the wrists. The experimental results are presented in the paper.
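As a rough illustration of how wrist force/torque measurements could steer the walking direction, here is a minimal admittance-style sketch; the dead zone, gains, and the Wrench container are assumptions for illustration, not HRP-2P's actual controller.

    from dataclasses import dataclass

    @dataclass
    class Wrench:
        fx: float  # force along the robot's forward axis [N]
        fy: float  # lateral force [N]
        tz: float  # torque about the vertical axis [N*m]

    def walking_command(w: Wrench, dead_zone=5.0, k_lin=0.004, k_rot=0.02):
        """Map the interaction wrench sensed at the wrists to a walking velocity
        command (vx, vy, omega).  Inputs inside the dead zone are ignored so that
        sensor noise does not make the robot drift while carrying the object."""
        def shape(x, dz):
            if abs(x) < dz:
                return 0.0
            return x - dz if x > 0 else x + dz

        vx = k_lin * shape(w.fx, dead_zone)     # m/s, forward/backward
        vy = k_lin * shape(w.fy, dead_zone)     # m/s, sideways
        omega = k_rot * shape(w.tz, dead_zone)  # rad/s, turning in place
        return vx, vy, omega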


Robotics and Autonomous Systems | 2004

Humanoid robotics platforms developed in HRP

Hirohisa Hirukawa; Fumio Kanehiro; Kenji Kaneko; Shuuji Kajita; Kiyoshi Fujiwara; Yoshihiro Kawai; Fumiaki Tomita; Shigeoki Hirai; Kazuo Tanie; Takakatsu Isozumi; Kazuhiko Akachi; Toshikazu Kawasaki; Shigehiko Ota; Kazuhiko Yokoyama; Hiroyuki Handa; Yutaro Fukase; Junichiro Maeda; Yoshihiko Nakamura; Susumu Tachi; Hirochika Inoue

This paper presents a humanoid robotics platform that consists of a humanoid robot and an open-architecture software platform developed in METI's Humanoid Robotics Project (HRP). The final version of the robot, called HRP-2, is 1,540 mm tall, weighs 58 kg, and has 30 degrees of freedom. The software platform includes a dynamics simulator and motion controllers for biped locomotion, falling, and getting-up motions. The platform has been used to develop various applications and is expected to stimulate further humanoid robotics research.


International Journal of Computer Vision | 2002

3D Object recognition in cluttered environments by segment-based stereo vision

Yasushi Sumi; Yoshihiro Kawai; Takashi Yoshimi; Fumiaki Tomita

We propose a new method for 3D object recognition that uses segment-based stereo vision. An object is identified in a cluttered environment and its position and orientation (6 DOF) are determined accurately, enabling a robot to pick up the object and manipulate it. The object can be of any shape (planar figures, polyhedra, free-form objects) and may be partially occluded by other objects. Segment-based stereo vision is employed for 3D sensing. Both CAD-based and sensor-based object modeling subsystems are available. Matching is performed by calculating candidates for the object position and orientation using local features, verifying each candidate, and improving the accuracy of the position and orientation by an iterative method. Several experimental results are presented to demonstrate the usefulness of the proposed method.
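The hypothesize-and-verify structure of this pipeline can be sketched as a short loop. The callbacks (match_local, refine_pose, verify_score) and the acceptance threshold below are placeholders standing in for the paper's local-feature matching, iterative refinement, and verification steps.

    def recognize(model, scene, match_local, refine_pose, verify_score, accept=0.8):
        """Hypothesize-and-verify loop: local features propose candidate 6-DOF
        poses, each candidate is refined iteratively and then verified against
        the whole scene; the best pose is returned if it scores well enough."""
        best_pose, best_score = None, 0.0
        for pose in match_local(model, scene):          # candidate poses from local features
            pose = refine_pose(pose, model, scene)      # e.g. iterative alignment
            score = verify_score(pose, model, scene)    # agreement with the whole scene
            if score > best_score:
                best_pose, best_score = pose, score
        return (best_pose if best_score >= accept else None), best_score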


Conference on Computers and Accessibility | 1996

Interactive tactile display system: a support system for the visually disabled to recognize 3D objects

Yoshihiro Kawai; Fumiaki Tomita

We have developed an interactive tactile display system that enables the visually disabled to actively recognize three-dimensional objects and environments. The display presents visual patterns through tactile pins arranged in a two-dimensional array. Each pin's height can be set to one of several levels to increase the tactile information and to display a three-dimensional surface shape. In addition, each pin has a tact switch at its base, so the user can indicate a position to the system by pressing the pin. This paper describes the hardware and software of the system.
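A toy sketch of the pin-height mapping such a display needs is given below; the number of height levels, the depth range, and the nearest-is-highest convention are illustrative assumptions rather than the device's specification.

    def depth_to_pin_heights(depth_map, levels=4, d_min=0.3, d_max=1.5):
        """Quantize a 2-D depth map (metres, nested lists) into discrete
        pin-height levels for a tactile pin array: nearer surfaces raise the
        pins higher.  Values outside [d_min, d_max] are clamped."""
        heights = []
        for row in depth_map:
            out = []
            for d in row:
                d = min(max(d, d_min), d_max)
                t = (d_max - d) / (d_max - d_min)    # 1.0 = nearest, 0.0 = farthest
                out.append(round(t * (levels - 1)))  # integer level 0 .. levels-1
            heights.append(out)
        return heights

    # Example: a 2x3 depth patch.
    print(depth_to_pin_heights([[0.4, 0.8, 1.5],
                                [0.3, 1.0, 1.2]]))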


International Conference on Pattern Recognition | 1998

Stereo correspondence using segment connectivity

Yoshihiro Kawai; T. Ueshiba; Yutaka Ishiyama; Yasushi Sumi; F. Tomita

This paper presents a method for finding stereo correspondences based on the connectivity of segments. The data structure is defined by a boundary representation, so free curves can be handled as curves rather than as straight-line approximations. Candidate correspondence pairs consist of two segments, one in the left image and one in the right image, that satisfy the epipolar constraint. The connectivity of two pairs is decided from their distance, intensity, and angle along each boundary in the left image, and the similarity of paths, which are sequences of pairs, is then evaluated. The main feature of our method is that it works in general situations because the similarity is computed from the connectivity of pairs. Multiple correspondences between pairs are removed using this similarity, and the 3D structure is reconstructed. Applications of this work include recognition of free-form objects and tracking in 3D space.
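As a sketch of the connectivity cue described above, the function below combines distance, intensity, and angle differences between two candidate segment pairs into a single cost; the data layout and weights are assumptions for illustration only.

    import math

    def connectivity_cost(pair_a, pair_b, w_dist=1.0, w_int=0.5, w_ang=1.0):
        """Cost of connecting two candidate correspondence pairs along a boundary
        in the left image, using the three cues named in the abstract: endpoint
        distance, intensity difference, and angle difference (lower is better)."""
        (xa, ya), (xb, yb) = pair_a["endpoint"], pair_b["endpoint"]
        d_dist = math.hypot(xa - xb, ya - yb)
        d_int = abs(pair_a["intensity"] - pair_b["intensity"])
        d_ang = abs(pair_a["angle"] - pair_b["angle"]) % math.pi
        return w_dist * d_dist + w_int * d_int + w_ang * d_ang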


International Conference on Computer Vision | 1998

Recognition of 3D free-form objects using segment-based stereo vision

Yasushi Sumi; Yoshihiro Kawai; Takashi Yoshimi; Fumiaki Tomita

We propose a new method for recognizing 3D free-form objects from their apparent contours. It is an extension of our established method for recognizing objects with fixed edges. Object models are compared with 3D boundaries extracted by segment-based stereo vision. Candidate transformations are generated from the local shapes of the boundaries and are then verified and adjusted based on the whole shapes of the boundaries. The models are built from all-around range data of the objects. Experimental results show the effectiveness of the method.


International Conference on Robotics and Automation | 2012

Pick and place planning for dual-arm manipulators

Kensuke Harada; Torea Foissotte; Tokuo Tsuji; Kazuyuki Nagata; Natsuki Yamanobe; Akira Nakamura; Yoshihiro Kawai

This paper proposes a method for planning the pick-and-place motion of an object by dual-arm manipulators. Our planner is composed of an offline phase and an online phase. The offline phase generates a set of regions on the object and environment surfaces and calculates several parameters needed online. In the online phase, the planner selects a grasping pose for the robot and a putting posture for the object by searching among the regions calculated offline. The proposed method can also plan the trajectory of the robot and the regrasping strategy of the dual arms, and the putting posture of the object is planned by considering the stability of the object placed on the environment. The effectiveness of the proposed method is confirmed by simulation and by experiments using the dual-arm robot NX-HIRO.
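The online search over the regions precomputed offline can be pictured as a nested loop like the one below; the reachability, stability, and path-existence predicates are assumed callbacks, not the paper's actual planner interfaces.

    def plan_online(grasp_regions, place_regions, reachable, stable, path_exists):
        """Online phase of an offline/online pick-and-place planner: search the
        precomputed grasp regions on the object and placement regions in the
        environment for a pair that is reachable, leaves the object stable when
        placed, and is connected by a collision-free trajectory."""
        for grasp in grasp_regions:                   # candidates computed offline
            if not reachable(grasp):
                continue
            for placement in place_regions:           # candidates computed offline
                if stable(placement) and path_exists(grasp, placement):
                    return grasp, placement           # first feasible pair
        return None                                   # caller may fall back to regrasping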


International Conference on Pattern Recognition | 2002

A support system for visually impaired persons to understand three-dimensional visual information using acoustic interface

Yoshihiro Kawai; Fumiaki Tomita

Visual information processing technology is important for sensory substitution for visually impaired persons as well as for applications such as factory automation. This paper outlines the design of a visual support system that provides 3D visual information through 3D virtual sounds. Three-dimensional information required by the visually impaired user, such as a distance map, object recognition results, and object tracking, is obtained by analyzing images captured by stereo cameras. Using a 3D virtual acoustic display based on head-related transfer functions (HRTFs), the user is informed of the locations and movements of objects. The user's external auditory sense is not impeded because the system uses bone-conduction headphones, which do not block out environmental sounds. The proposed system is expected to be useful in situations where the infrastructure is incomplete and conditions change in real time. We plan experiments that use the system to guide users while walking and playing sports.
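To make the acoustic rendering step concrete, here is a small sketch that converts an object position from the stereo camera frame into the azimuth, elevation, and distance used to select an HRTF and set the playback level; the axis convention and attenuation law are assumptions, and the HRTF convolution itself is omitted.

    import math

    def to_virtual_source(x, y, z):
        """Convert an object position (camera frame: x right, y up, z forward,
        metres) into (azimuth_deg, elevation_deg, distance_m, gain) for a simple
        HRTF-based virtual sound source."""
        distance = math.sqrt(x * x + y * y + z * z)
        azimuth = math.degrees(math.atan2(x, z))                           # positive = right
        elevation = math.degrees(math.asin(y / distance)) if distance > 0 else 0.0
        gain = 1.0 / max(distance, 0.5) ** 2                               # crude distance attenuation
        return azimuth, elevation, distance, gain

    # Example: an object 2 m ahead and slightly to the left.
    print(to_virtual_source(-0.5, 0.0, 2.0))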


Intelligent Robots and Systems | 2010

Picking up an indicated object in a complex environment

Kazuyuki Nagata; Takashi Miyasaka; Dragomir N. Nenchev; Natsuki Yamanobe; Kenichi Maruyama; Satoshi Kawabata; Yoshihiro Kawai

This paper presents a grasping system for picking up an indicated object in a complex real-world environment using a parallel jaw gripper. The proposed grasping scheme comprises three main steps: 1) a user indicates a target object and provides the system with a task instruction on how to grasp it, 2) the system acquires geometric information about the target object and constructs a 3D environment model around the target by stereo vision using the information obtained from the task instruction, and 3) the system finds a grasp point based on grasp evaluation using the acquired information. As an example of the scheme, we examined picking up a cylindrical object by grasping its brim. An important and advantageous feature of this scheme is that the user can easily instruct the robot on how to perform the object-picking task through simple clicking operations, and the robot can execute the task without exact models of the target object and the environment being available in advance.
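For the cylindrical-object example, the grasp-candidate generation and selection steps might look roughly like the sketch below; the brim sampling, collision check, and scoring function are illustrative placeholders rather than the system's actual grasp evaluation.

    import math

    def brim_grasp_candidates(center, radius, height, n=16):
        """Sample parallel-jaw grasp poses around the brim of a cylinder whose
        pose (center of the base, radius, height, robot frame) was reconstructed
        by stereo vision; each candidate is a (grasp_point, approach_direction)."""
        cx, cy, cz = center
        candidates = []
        for i in range(n):
            a = 2.0 * math.pi * i / n
            point = (cx + radius * math.cos(a), cy + radius * math.sin(a), cz + height)
            approach = (-math.cos(a), -math.sin(a), 0.0)   # point the jaws inward
            candidates.append((point, approach))
        return candidates

    def select_grasp(candidates, collision_free, score):
        """Keep candidates that are collision-free in the reconstructed 3-D
        environment model and return the highest-scoring one (or None)."""
        feasible = [g for g in candidates if collision_free(g)]
        return max(feasible, key=score) if feasible else None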

Collaboration


Yoshihiro Kawai's most frequent co-authors and their affiliations.

Top Co-Authors

Fumiaki Tomita (National Institute of Advanced Industrial Science and Technology)
Kazuyuki Nagata (National Institute of Advanced Industrial Science and Technology)
Natsuki Yamanobe (National Institute of Advanced Industrial Science and Technology)
Kenichi Maruyama (National Institute of Advanced Industrial Science and Technology)
Takashi Yoshimi (Shibaura Institute of Technology)
Hiromu Onda (National Institute of Advanced Industrial Science and Technology)
Tokuo Tsuji (Systems Research Institute)