Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Koichi Ogawara is active.

Publication


Featured research published by Koichi Ogawara.


IEEE Transactions on Robotics | 2005

A sensor fusion approach for recognizing continuous human grasping sequences using hidden Markov models

Keni Bernardin; Koichi Ogawara; Katsushi Ikeuchi; Ruediger Dillmann

The Programming by Demonstration (PbD) technique aims at teaching a robot to accomplish a task by learning from a human demonstration. In a manipulation context, recognizing the demonstrator's hand gestures, specifically when and how objects are grasped, plays a significant role. Here, a system is presented that uses both hand shape and contact-point information obtained from a data glove and tactile sensors to recognize continuous human-grasp sequences. Sensor fusion, grasp classification, and task segmentation are performed by a hidden Markov model recognizer. Twelve different grasp types from a general, task-independent taxonomy are recognized. An accuracy of up to 95% was achieved for a multiple-user system.
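
A minimal sketch of how such a fusion-based recognizer could be structured, assuming per-frame feature vectors that concatenate data-glove joint angles with tactile readings. The hmmlearn library, the feature dimensions, and the training interface here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn import hmm

# Hypothetical per-frame features: 20 glove joint angles + 12 tactile readings.
N_GLOVE, N_TACTILE = 20, 12

def train_grasp_models(sequences_by_grasp, n_states=5):
    """Train one Gaussian HMM per grasp type (illustrative)."""
    models = {}
    for grasp, seqs in sequences_by_grasp.items():
        X = np.vstack(seqs)                # stack the frames of all training sequences
        lengths = [len(s) for s in seqs]   # per-sequence frame counts for hmmlearn
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[grasp] = m
    return models

def classify_segment(models, segment):
    """Pick the grasp type whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda g: models[g].score(segment))
```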


IEEE Transactions on Robotics | 2006

Representation for knot-tying tasks

Jun Takamatsu; Takuma Morita; Koichi Ogawara; Hiroshi Kimura; Katsushi Ikeuchi

The learning from observation (LFO) paradigm has been widely applied in various types of robot systems and helps reduce the programmer's workload. However, existing systems are limited to the manipulation of rigid objects. Manipulation of deformable objects is rarely considered, because it is difficult to design a representation for the states of deformable objects and the operations on them, and because the number of possible operations is very large. In this paper, we choose knot tying as a case study in deformable-object manipulation, because knot theory is available and the types of operations possible in knot tying are limited. We propose a knot planning from observation (KPO) paradigm, a KPO theory, and a KPO system.
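
To illustrate why a crossing-based description keeps both the state space and the operation set tractable, the following sketch, a simplification of my own rather than the KPO representation from the paper, encodes a rope's state as the ordered list of crossings met along the rope and defines two abstract moves on it.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Crossing:
    over: bool        # True if the rope passes over at this crossing
    partner: int      # index of the paired crossing along the rope

# A knot state is the ordered list of crossings encountered while traversing the rope.
KnotState = List[Crossing]

def insert_crossing(state: KnotState, pos: int, over: bool) -> KnotState:
    """Abstract 'create a crossing' move (illustrative; partner fixed up by the caller)."""
    new = list(state)
    new.insert(pos, Crossing(over=over, partner=-1))
    return new

def remove_crossing(state: KnotState, pos: int) -> KnotState:
    """Abstract 'undo a crossing' move (illustrative)."""
    return state[:pos] + state[pos + 1:]
```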


Intelligent Robots and Systems | 2004

Flexible cooperation between human and robot by interpreting human intention from gaze information

Kenji Sakita; Koichi Ogawara; Shinji Murakami; Kentaro Kawamura; Katsushi Ikeuchi

This paper describes a method for realizing flexible cooperation between a human and a robot that reflects the human's intention and state by using gaze information. Gaze directly expresses the thinking process, so it allows internal conditions such as hesitation or search during decision making to be read. We propose a method that interprets the intention and condition from the most recent history of gaze movement and determines an appropriate cooperative robot action so that the task proceeds smoothly. Finally, we show experimental results using a humanoid robot.
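
A toy sketch of the kind of rule that could map a short gaze history to an inferred state and a cooperative action. The fixation window, thresholds, labels, and helper names are assumptions for illustration, not the interpretation method in the paper.

```python
from collections import Counter

def interpret_gaze(fixations, dwell_decided=1.0):
    """fixations: list of (object_id, dwell_seconds), most recent last.

    Returns 'decided' if gaze has settled on a single object long enough,
    otherwise 'hesitating' (illustrative thresholds).
    """
    if not fixations:
        return "hesitating"
    recent = fixations[-5:]                          # consider only the latest fixations
    targets = Counter(obj for obj, _ in recent)
    obj, _ = targets.most_common(1)[0]
    total_dwell = sum(d for o, d in recent if o == obj)
    if len(targets) == 1 and total_dwell >= dwell_decided:
        return "decided"
    return "hesitating"

def choose_robot_action(state, target_obj=None):
    """Map the inferred state to a cooperative action (illustrative)."""
    return {"decided": ("hand_over", target_obj),
            "hesitating": ("wait_and_observe", None)}[state]
```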


International Conference on Robotics and Automation | 2007

Marker-less Human Motion Estimation using Articulated Deformable Model

Koichi Ogawara; Xiaolu Li; Katsushi Ikeuchi

This paper presents a novel whole-body motion estimation method that fits a deformable articulated model of the human body to the 3D volume reconstructed from multiple video streams. The advantage of the proposed method is twofold: (1) the combination of a robust estimator and an ICP algorithm with kd-tree search in pose and normal space makes it possible to track complex and dynamic motion robustly against noise and interference between limbs and torso, and (2) the hierarchical estimation and backtracking re-estimation algorithm enable accurate estimation. The ability to track challenging whole-body motion in real environments is also demonstrated.
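
A compact sketch of a single ICP iteration with kd-tree nearest-neighbour search and a simple distance-thresholded rejection step, standing in for the robust estimator. It operates on plain rigid 3D point sets rather than the articulated deformable model and pose-and-normal-space search used in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target, max_dist=0.05):
    """One rigid ICP iteration: match points, reject outliers, solve for R and t."""
    tree = cKDTree(target)
    dist, idx = tree.query(source)
    keep = dist < max_dist                      # crude robust rejection of bad matches
    src, dst = source[keep], target[idx[keep]]

    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)         # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                    # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return source @ R.T + t, R, t
```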


Intelligent Robots and Systems | 2000

Recognition of human task by attention point analysis

Koichi Ogawara; Soshi Iba; Tomikazu Tanuki; Hiroshi Kimura; Katsushi Ikeuchi

This paper presents a novel method of constructing a human task model by attention point (AP) analysis. The AP analysis consists of two steps: in the first step, the system broadly observes the human task, constructs a rough task model, and finds APs that require detailed analysis; in the second step, it applies time-consuming analysis to the APs in the same task to refine the task model. The resulting task model is highly abstracted and can change its degree of abstraction to suit the environment, making it applicable in different environments. We describe this method and its implementation using data gloves and a stereo vision system. We also show an experiment in which a real robot observed a human task and performed the same task successfully in a different environment using this model.
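
A schematic of the two-pass structure described above. The coarse observer and the detailed analyser are hypothetical callables supplied by the caller; this is an outline of the control flow, not the authors' system.

```python
def build_task_model(observation, coarse_observe, detailed_analyze):
    """Two-step attention-point analysis (schematic).

    coarse_observe   : returns a rough task model and a list of attention points
    detailed_analyze : refines the model around one attention point
    """
    model, attention_points = coarse_observe(observation)    # step 1: broad, cheap pass
    for ap in attention_points:                               # step 2: costly local analysis
        model = detailed_analyze(model, observation, ap)
    return model
```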


International Conference on Emerging Security Technologies | 2010

Person Identification from Spatio-temporal 3D Gait

Yumi Iwashita; Ryosuke Baba; Koichi Ogawara; Ryo Kurazume

This paper presents a spatio-temporal 3D gait database and a view-independent method for identifying a person from gait. When a target's walking direction differs from that in the database, the correct classification rate drops because of the change in appearance. To deal with this problem, several methods based on a view transformation model, which converts walking images from one direction into virtual images from different viewpoints, have been proposed. However, the converted image may not coincide with the real one, since the target is not included in the training dataset used to obtain the transformation model. We therefore propose a view-independent person identification method that creates a database of virtual images synthesized directly from the target's 3D model. In the proposed method, we first build a spatio-temporal 3D gait database using multiple cameras, consisting of sequential 3D models of multiple walking people. Virtual images from multiple arbitrary viewpoints are then synthesized from the 3D models, and affine moment invariants are derived from these virtual images as gait features. In the identification phase, images of a target walking in an arbitrary direction are taken from one camera and gait features are calculated. Finally, the person is identified and the walking direction is estimated. Experiments using the spatio-temporal 3D gait database show the effectiveness of the proposed method.
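
As a small illustration of the kind of gait feature the abstract mentions, the sketch below computes the first affine moment invariant of a binary silhouette and stacks per-frame values into a descriptor. The per-frame concatenation and the use of only the first invariant are simplifying assumptions, not the feature set from the paper.

```python
import numpy as np
import cv2

def first_affine_moment_invariant(silhouette):
    """I1 = (mu20 * mu02 - mu11^2) / m00^4 for a binary silhouette image."""
    m = cv2.moments(silhouette, binaryImage=True)
    if m["m00"] == 0:
        return 0.0
    return (m["mu20"] * m["mu02"] - m["mu11"] ** 2) / m["m00"] ** 4

def gait_feature(silhouette_sequence):
    """Concatenate per-frame invariants into a simple gait descriptor (illustrative)."""
    return np.array([first_affine_moment_invariant(s) for s in silhouette_sequence])
```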


The International Journal of Robotics Research | 2007

Recognizing Assembly Tasks Through Human Demonstration

Jun Takamatsu; Koichi Ogawara; Hiroshi Kimura; Katsushi Ikeuchi

As a way of reducing the work of programming, the Learning-from-Observation (LFO) paradigm has been widely promoted. This paradigm requires the programmer only to perform a task in front of a robot and does not require expertise. In this paper, the LFO paradigm is applied to assembly tasks involving two rigid polyhedral objects. A method is proposed for recognizing these tasks as a sequence of movement primitives from noise-contaminated data obtained by a conventional 6 degree-of-freedom (DOF) object-tracking system. The system is implemented on a robot with a real-time stereo vision system and dual arms with dexterous hands, and its effectiveness is demonstrated.
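
A rough sketch of one simple way to cut a noisy tracked trajectory into candidate segments by detecting pauses in the motion. This is a much cruder criterion than the contact-state-based primitive recognition the paper describes and is shown only to illustrate working from noisy 6-DOF tracking data.

```python
import numpy as np

def segment_trajectory(positions, dt, speed_thresh=0.01, min_len=5):
    """Split a noisy position trajectory at pauses (crude stand-in for the
    movement-primitive segmentation described in the paper).

    positions : (N, 3) array of object positions from the 6-DOF tracker
    Returns a list of (start, end) frame-index pairs for moving segments.
    """
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    moving = speed > speed_thresh
    segments, start = [], None
    for i, m in enumerate(moving):
        if m and start is None:
            start = i                              # a moving segment begins
        elif not m and start is not None:
            if i - start >= min_len:
                segments.append((start, i))        # keep segments long enough to matter
            start = None
    if start is not None:
        segments.append((start, len(moving)))
    return segments
```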


Robotics and Autonomous Systems | 2009

Painting robot with multi-fingered hands and stereo vision

Shunsuke Kudoh; Koichi Ogawara; Miti Ruchanurucks; Katsushi Ikeuchi

In this paper, we describe a painting robot with multi-fingered hands and stereo vision. The goal of this study is for the robot to reproduce the entire procedure involved in human painting. The painting action is divided into three phases: obtaining a 3D model, composing a picture model, and painting by the robot. The system uses various feedback techniques, including computer vision and force sensing. In experiments, an apple and a human silhouette were painted on a canvas using this system.


Intelligent Robots and Systems | 2003

Grasp recognition using a 3D articulated model and infrared images

Koichi Ogawara; Jun Takamatsu; Kentaro Hashimoto; Katsushi Ikeuchi

A technique is proposed for recognizing the shape of a grasping hand during manipulation tasks, which utilizes a 3D articulated hand model and a 3D volume reconstructed from infrared cameras. Vision-based recognition of a grasping hand is a difficult problem, because the hand may be partially occluded by the grasped object and the degree of occlusion changes as the task progresses. To recognize the shape in a single time frame, a robust recognition method for articulated objects is proposed: a 3D volumetric representation of the hand is reconstructed from multiple silhouette images, and the 3D articulated object model is fitted to the reconstructed data to estimate the pose and joint angles. To deal with large occlusions, a technique is proposed that estimates a time series of reconstructed volumes simultaneously with the above method, which automatically suppresses the effect of badly reconstructed volumes. The proposed techniques are verified in simulation as well as in the real world.
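
A minimal sketch of the first stage the abstract mentions, reconstructing a volume from multiple silhouettes by voxel carving. The pinhole projection matrices are assumed, and the articulated-model fitting stage is omitted entirely.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Keep only voxels whose projection falls inside every silhouette.

    voxels      : (N, 3) candidate voxel centres
    cameras     : list of 3x4 projection matrices (assumed pinhole model)
    silhouettes : list of binary images aligned with the cameras
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, sil in zip(cameras, silhouettes):
        uvw = homog @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < sil.shape[1]) & (v >= 0) & (v < sil.shape[0])
        keep &= inside                                  # drop voxels projecting off-image
        keep[inside] &= sil[v[inside], u[inside]] > 0   # require silhouette foreground
    return voxels[keep]
```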


International Conference on Robotics and Automation | 2001

Acquiring hand-action models by attention point analysis

Koichi Ogawara; Soshi Iba; Tomikazu Tanuki; Hiroshi Kimura; Katsushi Ikeuchi

This paper describes our current research on learning task-level representations by a robot through observation of human demonstrations. We focus on human hand actions and represent them as symbolic task models. We propose a framework for such models that efficiently integrates multiple observations based on attention points, and we then evaluate the models using a human-form robot. We propose a two-step observation mechanism. In the first step, the system roughly observes the entire sequence of the human demonstration, builds a rough task model, and extracts attention points (APs). The attention points indicate the times and positions in the observation sequence that require further detailed analysis. In the second step, the system closely examines the sequence around the APs and obtains attribute values for the task model, such as what to grasp, which hand to use, and the precise trajectory of the manipulated object. We implemented this system on a human-form robot and demonstrated its effectiveness.

Collaboration


Dive into Koichi Ogawara's collaborations.

Top Co-Authors

Jun Takamatsu

University of Electro-Communications
