Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Kikuo Fujimura is active.

Publication


Featured research published by Kikuo Fujimura.


Intelligent Robots and Systems | 2002

The intelligent ASIMO: system overview and integration

Yoshiaki Sakagami; Ryujin Watanabe; Chiaki Aoyama; Shinichi Matsunaga; Nobuo Higaki; Kikuo Fujimura

We present the system overview and integration of the ASIMO autonomous robot, which can function successfully in indoor environments. The first model of ASIMO is already being leased to companies for receptionist work. In this paper, we describe the new capabilities that we have added to ASIMO. We explain the structure of the robot's intelligence system, the integrated subsystems on its body, and their new functions. We describe the behavior-based planning architecture on ASIMO, its vision and auditory systems, its gesture recognition system, human interaction, and task performance. We also discuss the external online database system that can be accessed via the Internet to retrieve desired information, the management system for receptionist work, and various function demonstrations.


Computer Vision and Pattern Recognition | 2004

Visual Tracking Using Depth Data

Harsh Nanda; Kikuo Fujimura

A method is presented for robust tracking in highly cluttered environments. The method makes effective use of 3D depth-sensing technology, resulting in illumination-invariant tracking. Several applications of the tracker are presented, including face tracking and hand tracking.
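
The abstract leaves the tracking mechanism at a high level. As a rough sketch of why depth data yields illumination invariance, one can segment purely by distance and track the centroid of the segmented region; the depth band and frame values below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def track_nearest_blob(depth_frame, near=0.4, far=1.2):
    """Segment pixels inside an assumed depth band and return the
    centroid of the segmented region, or None if nothing is in range.
    Segmentation uses per-pixel distance rather than brightness, so
    the result is unaffected by scene illumination."""
    mask = (depth_frame > near) & (depth_frame < far)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()  # blob centroid in pixel coordinates

# Toy frame: a "hand" 0.8 m away in front of a 3 m background.
frame = np.full((120, 160), 3.0)
frame[40:80, 60:100] = 0.8
print(track_nearest_blob(frame))  # -> (79.5, 59.5)
```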


Computer Vision and Pattern Recognition | 2008

Controlled human pose estimation from depth image streams

Youding Zhu; Behzad Dariush; Kikuo Fujimura

This paper presents a model-based, Cartesian control-theoretic approach for estimating human pose from features detected using depth images obtained from a time-of-flight imaging device. The features represent positions of anatomical landmarks, detected and tracked over time by a probabilistic inferencing algorithm. The detected features are subsequently used as input to a constrained, closed-loop tracking control algorithm which not only estimates the pose of the articulated human model, but also provides feedback to the feature detector in order to resolve ambiguities or to provide estimates of undetected features. Based on a simple kinematic model, constraints such as joint-limit avoidance and self-penetration avoidance are enforced within the tracking control framework. We demonstrate the effectiveness of the algorithm with experimental results of upper-body pose reconstruction from a small set of features. On average, the entire pipeline runs at approximately 10 frames per second on a standard 3 GHz PC using a 17-degree-of-freedom upper-body human model.
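
The closed-loop tracking control idea admits a compact sketch: compare the detected feature positions against the model's forward kinematics and feed the error back through the Jacobian pseudoinverse (a standard resolved-rate update). The planar two-link model, gain, timestep, and clamp-style joint limits below are assumptions; the paper's constrained controller and feedback to the detector are not reproduced.

```python
import numpy as np

def ik_tracking_step(q, jacobian, fk, x_detected, gain=5.0, dt=1/30,
                     q_min=None, q_max=None):
    """One closed-loop update: drive the model's feature positions
    fk(q) toward the positions detected in the depth image."""
    error = x_detected - fk(q)                 # task-space feedback error
    dq = np.linalg.pinv(jacobian(q)) @ (gain * error)  # resolved-rate step
    q_new = q + dq * dt
    if q_min is not None:                      # crude joint-limit handling
        q_new = np.clip(q_new, q_min, q_max)
    return q_new

# Planar two-link "arm" standing in for the articulated human model.
L1, L2 = 0.30, 0.25
def fk(q):
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0] + q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0] + q[1])])
def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

q = np.array([0.3, 0.6])
detected = np.array([0.35, 0.30])              # "feature" from the sensor
for _ in range(200):
    q = ik_tracking_step(q, jacobian, fk, detected)
print(fk(q))                                   # converges toward the detection
```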


IEEE International Conference on Automatic Face and Gesture Recognition | 2004

Hand gesture recognition using depth data

Xia Liu; Kikuo Fujimura

A method is presented for recognizing hand gestures from a sequence of real-time depth images acquired by active sensing hardware. Hand posture and motion information extracted from the video is represented in a gesture space consisting of several aspects, including hand shape, location, and motion. In this space, it is shown to be possible to recognize many types of gestures. Experimental results are shown to validate our approach, and its characteristics are discussed.
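
To give the "gesture space" a concrete, if toy, form: stack weighted shape, location, and motion components into one vector and label an observation by its nearest stored template. The specific features, weights, and nearest-neighbor rule are assumptions for illustration, not the paper's recognizer.

```python
import numpy as np

def gesture_vector(shape, location, motion, w=(1.0, 0.5, 2.0)):
    """Stack weighted shape, location, and motion features into one
    point of the combined gesture space (weights are assumed)."""
    return np.concatenate([w[0] * np.asarray(shape),
                           w[1] * np.asarray(location),
                           w[2] * np.asarray(motion)])

def classify(sample, templates):
    """Label the sample by its nearest stored gesture template."""
    return min(templates, key=lambda g: np.linalg.norm(sample - templates[g]))

templates = {
    "wave":  gesture_vector([0.9, 0.1], [0.5, 0.3], [0.8, 0.0]),
    "point": gesture_vector([0.2, 0.8], [0.6, 0.5], [0.0, 0.0]),
}
observed = gesture_vector([0.85, 0.15], [0.45, 0.35], [0.7, 0.1])
print(classify(observed, templates))  # -> "wave"
```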


International Conference on Robotics and Automation | 2008

Whole body humanoid control from human motion descriptors

Behzad Dariush; Michael Gienger; Bing Jian; Christian Goerick; Kikuo Fujimura

Many advanced motion control strategies developed in robotics use captured human motion data as a valuable source of examples to simplify the process of programming or learning complex robot motions. Direct and online control of robots from observed human motion has several inherent challenges. The most important may be the representation of the large number of mechanical degrees of freedom involved in the execution of movement tasks. Attempting to map all such degrees of freedom from a human to a humanoid is a formidable task from an instrumentation and sensing point of view. More importantly, such an approach is incompatible with mechanisms in the central nervous system which are believed to organize or simplify the control of these degrees of freedom during motion execution and motor learning. Rather than specifying the desired motion of every degree of freedom for the purpose of motion control, it is preferable to describe motion by low-dimensional motion primitives defined in Cartesian (or task) space. In this paper, we formulate the human-to-humanoid retargeting problem as a task space control problem. The control objective is to track desired task descriptors while satisfying constraints such as joint limits, velocity limits, collision avoidance, and balance. The retargeting algorithm generates the joint space trajectories that are commanded to the robot. We present experimental and simulation results of the retargeting control algorithm on the Honda humanoid robot ASIMO.
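
The task space formulation sketches naturally as pseudoinverse tracking of the low-dimensional descriptors, with the redundant degrees of freedom spent in the task's nullspace on a secondary objective. Below, a pull toward joint mid-range stands in for the paper's joint-limit, collision, and balance constraints; the matrices and gains are illustrative, not ASIMO's.

```python
import numpy as np

def retarget_step(q, J, xdot_task, q_min, q_max, k_posture=0.5):
    """Track desired task-descriptor velocities; use leftover nullspace
    freedom to pull joints toward mid-range (a stand-in for limit,
    collision, and balance handling)."""
    J_pinv = np.linalg.pinv(J)
    N = np.eye(q.size) - J_pinv @ J            # nullspace projector of the task
    posture = k_posture * (0.5 * (q_min + q_max) - q)
    dq = J_pinv @ xdot_task + N @ posture      # primary task + secondary goal
    return np.clip(q + dq, q_min, q_max)       # integrate one (unit) step

# Two task dimensions driving a redundant four-joint chain (illustrative).
q = np.zeros(4)
J = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 1.0, 0.3, 0.0]])
q = retarget_step(q, J, np.array([0.1, -0.05]),
                  q_min=-np.ones(4), q_max=np.ones(4))
print(q)
```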


Asian Conference on Computer Vision | 2007

Constrained optimization for human pose estimation from depth sequences

Youding Zhu; Kikuo Fujimura

A new two-step method is presented for human upper-body pose estimation from depth sequences: coarse human part labeling takes place first, followed by more precise joint position estimation. In the first step, a number of constraints are extracted from notable image features such as the head and torso, and the problem of pose estimation is cast as label assignment under these constraints; the major parts of the human upper body are labeled by this process. The second step estimates joint positions optimally under kinematic constraints, using dense correspondences between the depth profile and the human model parts. The proposed framework is shown to overcome some issues of existing approaches to human pose tracking on similar types of data streams. A performance comparison with motion capture data is presented to demonstrate the accuracy of our approach.
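
A toy flavor of the second step, joint estimation under kinematic constraints, is alternating projection: pull the joints toward their detected positions, then restore the fixed bone lengths along the chain. The alternating scheme and all numbers below are assumptions; the paper poses this as a constrained optimization over dense correspondences.

```python
import numpy as np

def fit_chain(detections, bone_lengths, iters=100):
    """Alternate between the data term (pull joints toward detections)
    and the kinematic constraint (restore fixed bone lengths)."""
    joints = detections.copy()
    for _ in range(iters):
        joints = 0.5 * (joints + detections)          # pull toward data
        for i, L in enumerate(bone_lengths):          # enforce bone lengths
            d = joints[i + 1] - joints[i]
            joints[i + 1] = joints[i] + d * L / (np.linalg.norm(d) + 1e-9)
    return joints

# Shoulder-elbow-wrist chain with noisy detections (illustrative numbers).
detected = np.array([[0.00, 0.00], [0.31, 0.02], [0.58, -0.03]])
print(fit_chain(detected, bone_lengths=np.array([0.30, 0.25])))
```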


Workshop on Human Motion | 2007

Recognizing activities with multiple cues

Rahul Biswas; Sebastian Thrun; Kikuo Fujimura

In this paper, we introduce a first-order probabilistic model that combines multiple cues to classify human activities from video data accurately and robustly. Our system works in a realistic office setting with background clutter, natural illumination, different people, and partial occlusion. The model we present is compact, requiring only fifteen sentences of first-order logic grouped as a dynamic Markov logic network (DMLN) to implement the probabilistic model, and it leverages existing state-of-the-art work in pose detection and object recognition.
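
To convey the flavor of weighted first-order rules, the sketch below scores each activity by summing the weights of its rules that the current observations satisfy, and picks the top score. The rules, weights, and the non-probabilistic max-score decision are invented for illustration; the paper's fifteen DMLN sentences and its inference procedure are not reproduced here.

```python
# Each activity scores the summed weights of its satisfied rules over
# the current observations; the highest-scoring activity wins.
# Rules and weights below are invented examples, not the paper's.
RULES = {
    "typing":   [(2.0, lambda w: w["pose"] == "seated" and "keyboard" in w["objects"]),
                 (1.0, lambda w: w["hands"] == "on_desk")],
    "drinking": [(2.0, lambda w: "cup" in w["objects"] and w["hands"] == "raised"),
                 (0.5, lambda w: w["pose"] == "seated")],
}

def classify(world):
    scores = {activity: sum(wt for wt, rule in rules if rule(world))
              for activity, rules in RULES.items()}
    return max(scores, key=scores.get), scores

world = {"pose": "seated", "objects": {"keyboard", "monitor"}, "hands": "on_desk"}
print(classify(world))  # -> ('typing', {'typing': 3.0, 'drinking': 0.5})
```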


Computer Vision and Image Understanding | 2010

Kinematic self retargeting: A framework for human pose estimation

Youding Zhu; Behzad Dariush; Kikuo Fujimura

This paper presents a model-based, Cartesian control-theoretic approach for estimating human pose from a set of key feature points (key-points) detected using depth images obtained from a time-of-flight imaging device. The key-points represent positions of anatomical landmarks, detected and tracked over time by a probabilistic inferencing algorithm that is robust to partial occlusions and capable of resolving ambiguities in detection. The detected key-points are subsequently kinematically self-retargeted, or mapped to the subject's own kinematic model, in order to predict the pose of an articulated human model at the current state, resolve ambiguities in key-point detection, and provide estimates of missing or intermittently occluded key-points. Based on a standard kinematic and mesh model of a human, constraints such as joint-limit avoidance and self-penetration avoidance are enforced within the retargeting framework. The effectiveness of the algorithm is demonstrated experimentally for upper- and full-body pose reconstruction from a small set of detected key-points. On average, the proposed algorithm runs at approximately 10 frames per second for upper-body reconstruction and 5 frames per second for whole-body reconstruction on a standard 2.13 GHz laptop PC.


International Conference on Automatic Face and Gesture Recognition | 2006

Sign recognition using depth image streams

Kikuo Fujimura; Xia Liu

A set of techniques is presented for extracting essential shape information from image sequences. The presented methods are (i) human detection, (ii) human body part detection, and (iii) hand shape analysis, all based on depth image streams. In particular, representative types of hand shapes used in Japanese Sign Language (JSL) are recognized in a non-intrusive manner with a high recognition rate. An experimental JSL recognition system is built that can recognize over 100 words by using active sensing hardware to capture a stream of depth images at video rate. Experimental results are shown to validate our approach, and its characteristics are discussed.


Intelligent Robots and Systems | 2008

Online and markerless motion retargeting with kinematic constraints

Behzad Dariush; Michael Gienger; Arjun Arumbakkam; Christian Goerick; Youding Zhu; Kikuo Fujimura

Transferring motion from a human demonstrator to a humanoid robot is an important step toward developing robots that are easily programmable and that can replicate or learn from observed human motion. The so-called motion retargeting problem has been well studied, and several off-line solutions exist based on optimization approaches that rely on pre-recorded human motion data collected from a marker-based motion capture system. From the perspective of human-robot interaction, there is a growing interest in online and markerless motion transfer. Such requirements place stringent demands on retargeting algorithms and limit the potential use of off-line and pre-recorded methods. To address these limitations, we present an online task space control theoretic retargeting formulation that generates robot joint motions adhering to the robot's joint limit constraints, self-collision constraints, and balance constraints. The inputs to the proposed method are low-dimensional normalized human motion descriptors, detected and tracked using a vision-based feature detection and tracking algorithm. The proposed vision algorithm does not rely on markers placed on anatomical landmarks, nor does it require special instrumentation or calibration. The current implementation requires a depth image sequence, collected from a single time-of-flight imaging device. We present online experimental results of the entire pipeline on the Honda humanoid robot, ASIMO.

Collaboration


Dive into Kikuo Fujimura's collaborations.

Top Co-Authors

David Isele, University of Pennsylvania
Lijie Xu, Ohio State University
Bing Jian, University of Florida