Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Etsuko Ueda is active.

Publication


Featured research published by Etsuko Ueda.


robot and human interactive communication | 2008

Model-based hand pose estimation using multiple viewpoint silhouette images and Unscented Kalman Filter

Albert J. Causo; Etsuko Ueda; Yuichi Kurita; Yoshio Matsumoto; Tsukasa Ogasawara

This paper addresses pose estimation of a hand in motion through a vision-based, model-based system. Previous research on human hand tracking and pose estimation usually suffers from a limited number of estimated degrees of freedom, restrictive camera orientation (i.e., the hand is restricted to a particular pose while moving), and occlusion. We describe and evaluate a system that estimates several DOF of the hand without hindering its motion and while minimizing occlusion. To allow for complete 3D motion, a voxel model and a skeletal model of the hand are used. The system uses multiple viewpoint cameras to obtain information on the hand motion. Due to the non-linear characteristics of the system, an unscented Kalman filter (UKF) is used to track the hand motion. The UKF estimates the hand pose by minimizing the difference between the skeletal model and the voxel model. Estimation results from different hand motions of up to 15 DOF show the feasibility of the system.
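The following is a minimal sketch, not the authors' implementation, of how such UKF-based tracking could be wired up with the filterpy library. The state layout (15 joint angles), the number of measured points, and the hypothetical fk_joint_positions() forward-kinematics helper are assumptions for illustration; the measurement is taken to be a vector of 3D points sampled from the voxel reconstruction.

```python
# Sketch: tracking a 15-DOF hand state with filterpy's UKF, comparing the
# skeletal model's predicted joint positions against voxel-derived points.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

N_DOF = 15            # finger joint angles tracked in the paper
N_MARKERS = 10        # assumed number of 3D points sampled from the voxel model

def fx(x, dt):
    # Simple process model: pose carries over; process noise absorbs motion.
    return x

def fk_joint_positions(angles):
    # Placeholder forward kinematics: replace with the skeletal hand model.
    return np.zeros(3 * N_MARKERS)

def hx(x):
    # Measurement model: predicted 3D joint positions of the skeletal model.
    return fk_joint_positions(x)

points = MerweScaledSigmaPoints(n=N_DOF, alpha=1e-3, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=N_DOF, dim_z=3 * N_MARKERS,
                            dt=1 / 30.0, fx=fx, hx=hx, points=points)
ukf.Q *= 1e-2   # process noise: how fast joint angles may change
ukf.R *= 1e-3   # measurement noise of the voxel-derived points

def track(frames):
    """frames: iterable of (3*N_MARKERS,) voxel-derived point vectors."""
    poses = []
    for z in frames:
        ukf.predict()
        ukf.update(z)
        poses.append(ukf.x.copy())
    return poses
```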


international conference on advanced intelligent mechatronics | 2009

Hand pose estimation using voxel-based individualized hand model

Albert J. Causo; Mai Matsuo; Etsuko Ueda; Kentaro Takemura; Yoshio Matsumoto; Jun Takamatsu; Tsukasa Ogasawara

A robust and adaptive hand pose estimation technique is necessary to create a natural, non-contact man-machine interface. In this regard, individualizing the system's hand model is key to improving hand pose estimation. This paper addresses the issue of calibrating the system's hand model for every user in order to obtain more robust pose estimation results. Our vision-based, model-based system uses a skeletal hand model composed of a finger link structure and a surface structure. The surface structure is composed of voxels derived from silhouette images obtained by multi-viewpoint cameras. The finger link structure is estimated by searching for the optimum lengths among a set of values generated from a calibration motion of the fingers. We compared the output of a standard (non-calibrated) hand model with our proposed individualized hand model in the pose estimation system. Results show that the calibrated hand model yields more robust estimation than the standard model.
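As a rough illustration of the calibration idea, the sketch below scores candidate finger link lengths against voxel data from a calibration motion and keeps the best-fitting set. The search range and the fit_error() stand-in are assumptions, not values from the paper.

```python
# Sketch: individualize finger link lengths by exhaustive search over
# candidate lengths, scored against calibration-motion voxel frames.
import itertools
import numpy as np

CANDIDATE_LENGTHS_MM = np.arange(20.0, 60.0, 2.0)   # assumed per-link search range

def fit_error(link_lengths, voxel_frame):
    # Placeholder: distance between skeletal-model surface (with these link
    # lengths) and the voxel surface of one calibration frame.
    return float(np.random.rand())

def calibrate_finger(voxel_frames, n_links=3):
    """Return the link-length triple that best explains the calibration motion."""
    best, best_cost = None, np.inf
    for lengths in itertools.product(CANDIDATE_LENGTHS_MM, repeat=n_links):
        cost = sum(fit_error(lengths, f) for f in voxel_frames)
        if cost < best_cost:
            best, best_cost = lengths, cost
    return best
```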


Archive | 2010

Predictive Tracking in Vision-based Hand Pose Estimation Using Unscented Kalman Filter and Multi-viewpoint Cameras

Albert J. Causo; Kentaro Takemura; Jun Takamatsu; Tsukasa Ogasawara; Etsuko Ueda; Yoshio Matsumoto

One of the major challenges in human-robot interaction is how to enable the use of unrestricted hand motion as a tool for communication. The direct use of the hand as an input tool lets users connect to systems more naturally, allowing such systems to become an integral part of our daily lives. A vision-based approach, using cameras to capture data, supports non-contact and unrestricted movement of the hand. Nonetheless, the hand's high number of degrees of freedom (DOF) is an essential issue to tackle in articulated hand motion tracking and pose estimation. In this paper, we present our vision-based, model-based approach, which uses multiple cameras and predictive filtering to estimate the pose of the hand. We build on the research of Ueda et al., whose work can estimate the global pose (wrist position and palm orientation) and the local pose (finger joint angles) separately, but not simultaneously (Ueda et al., 2003). We address this problem by using a non-linear filter, the Unscented Kalman Filter (UKF), to track the motion and simultaneously estimate the global and local poses of the hand. The rest of the paper is organized as follows. Section 2 presents related work and Section 3 discusses the UKF. Section 4 explains the hand pose estimation system and Section 5 details how we use the UKF for tracking and pose estimation. Experimental results and discussions are found in Section 6.
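One way to make "simultaneous" estimation concrete is to stack global and local pose into a single filter state, as sketched below. The 3 + 3 + 15 split and the constant-velocity process model are illustrative assumptions, not taken from the chapter.

```python
# Sketch: a single UKF state covering global pose (wrist position, palm
# orientation) and local pose (finger joint angles), plus their velocities.
import numpy as np

N_GLOBAL = 6    # wrist position (x, y, z) + palm orientation (roll, pitch, yaw)
N_LOCAL = 15    # finger joint angles
N_STATE = 2 * (N_GLOBAL + N_LOCAL)   # pose values plus their velocities

def fx_constant_velocity(x, dt):
    """Process model: every pose value advances by its current velocity estimate."""
    n = N_GLOBAL + N_LOCAL
    pose, vel = x[:n], x[n:]
    return np.concatenate([pose + dt * vel, vel])

def split_state(x):
    """Convenience accessor returning (global_pose, joint_angles)."""
    return x[:N_GLOBAL], x[N_GLOBAL:N_GLOBAL + N_LOCAL]
```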


Computational Methods and Experimental Measurements, 2013, ISBN 978-1-84564-732-2, pp. 453-463 | 2013

Modeling of graceful motions: determining characteristics of graceful motions from handover motion

Takahiro Tanaka; Takumi Tsuduki; Etsuko Ueda; Kentaro Takemura; Takayuki Nakamura

Since robots in the welfare and service industries make contact with humans, they have to produce favourable impressions in the human mind. This study investigates the graceful motions of humans and aims to quantify them in order to make robot motions more favourable. We focused on the motion of handing over a glass and selected waiters as subjects. Their motions were recorded as trajectories by a motion capture system and evaluated by observers. The results revealed that the waiters' motions produce favourable and graceful impressions. Each trajectory was projected onto a two-dimensional plane obtained by principal component analysis. The projected trajectory was fitted with spline curves, whose coefficients were extracted as parameters, and graceful motions were found to be characterized by S-shaped trajectories. Four motions were then created using 3DCG by changing the extracted parameters step by step, to verify through simulation which parameters produce graceful impressions. Finally, the parameters that produce graceful and favourable impressions were determined.
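The sketch below outlines the described analysis pipeline under assumptions: a handover trajectory is given as an (N, 3) array of hand positions, projected onto its first two principal components, then fitted with a parametric spline whose coefficients serve as the motion parameters. It is a rough reconstruction, not the study's code.

```python
# Sketch: PCA projection of a 3D handover trajectory followed by spline fitting.
import numpy as np
from sklearn.decomposition import PCA
from scipy.interpolate import splprep, splev

def trajectory_spline_parameters(trajectory_xyz, smoothing=1.0):
    """trajectory_xyz: (N, 3) motion-capture positions of the hand."""
    # 1. Project the 3D trajectory onto the 2D plane of largest variance.
    plane_2d = PCA(n_components=2).fit_transform(trajectory_xyz)
    # 2. Fit a parametric B-spline; its knots/coefficients parameterize the curve.
    tck, u = splprep([plane_2d[:, 0], plane_2d[:, 1]], s=smoothing)
    return tck

def resample_curve(tck, n_points=100):
    """Evaluate the fitted spline, e.g. to inspect its S-shaped profile."""
    u_new = np.linspace(0.0, 1.0, n_points)
    x, y = splev(u_new, tck)
    return np.column_stack([x, y])
```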


intelligent robots and systems | 2010

Generating natural hand motion in playing a piano

Kazuki Yamamoto; Etsuko Ueda; Tsuyoshi Suenaga; Kentaro Takemura; Jun Takamatsu; Tsukasa Ogasawara

Generating natural motion of an articulated object with many DOF (e.g., a humanoid robot or robot hand) is a crucial issue in the robotics and computer graphics fields. Using motion capture data is one solution, but it requires expensive devices and time-consuming measurement. In this paper, we propose a method for generating natural hand motion for playing a piano from an input music score. The proposed method uses inverse kinematics while considering the naturalness of hand poses. We revisit the formulation of inverse kinematics based on maximum likelihood estimation and use a prior model of the hand pose to achieve naturalness. We evaluate the effectiveness of the proposed method using a voluntary survey.
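A hedged sketch of the underlying idea follows: inverse kinematics posed as a maximum-likelihood / MAP problem, trading off fingertip placement on the target keys against a Gaussian prior over joint angles that encodes "natural" hand poses. The forward_kinematics() stub, prior parameters, and weighting are assumptions, not the authors' formulation in detail.

```python
# Sketch: IK with a Gaussian prior on joint angles, solved by L-BFGS-B.
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(q):
    # Placeholder: map joint angles q to fingertip positions, shape (n_fingers, 3).
    return np.zeros((5, 3))

def natural_ik(target_positions, mu_prior, cov_prior, q0, weight=1.0):
    """Solve for joint angles that reach the keys while staying near the prior."""
    cov_inv = np.linalg.inv(cov_prior)

    def cost(q):
        err = forward_kinematics(q) - target_positions
        task_term = np.sum(err ** 2)            # reach the target keys
        d = q - mu_prior
        prior_term = float(d @ cov_inv @ d)     # negative log Gaussian prior
        return task_term + weight * prior_term

    return minimize(cost, q0, method="L-BFGS-B").x
```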


international conference on human-computer interaction | 2017

Classification of Synchronous Non-parallel Shuffling Walk for Humanoid Robot

Masanao Koeda; Daiki Sugimoto; Etsuko Ueda

Humanoid robots are one of the best solutions for interacting with and operating in our living space. We focus on shuffling walks for humanoid robots. A shuffling walk is expected to have an advantage when moving around narrow areas with a constrained posture. In this study, we describe the classification of shuffling walk motions and propose a new classification of the synchronous non-parallel shuffling walk, focused on the load point of the robot's sole.


international conference on digital human modeling and applications in health, safety, ergonomics and risk management | 2017

Quantification of Elegant Motions for Receptionist Android Robot

Makoto Ikawa; Etsuko Ueda; Akishige Yuguchi; Gustavo Alfonso Garcia Ricardez; Ming Ding; Jun Takamatsu; Tsukasa Ogasawara

To improve the general image of robots, in this study we describe a method of achieving “elegant motions based on women’s sense” in an android robot. Many books have been published in Japan containing advice for women on how to have elegant manners. Our approach was to quantify, using an android robot, the elegant motions that are qualitatively expressed in these etiquette books. In this research, we focused on arm- and face-based motions, such as giving directions, with an emphasis on “reception” tasks. We programmed the robot to perform desirable motions, such as “show the palm to a guest and do not raise the hand higher than the shoulder,” which are commonly described in manners books. For each implemented motion, many patterns could be generated by changing certain parameters, such as the movement speed, the angle of the arm and the hand, and the distance and angle to the indicated location. We verified these motions through a subjective evaluation and discussed the elegant, quantified motions based on the results.
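As a small illustration of how such motion patterns could be enumerated for evaluation, the sketch below sweeps the kinds of parameters mentioned above. The parameter names, ranges, and values are assumptions for illustration, not the study's actual settings.

```python
# Sketch: enumerate motion variants by sweeping assumed elegance parameters,
# so each variant can be executed on the android and rated by subjects.
from itertools import product

PARAMETER_GRID = {
    "speed_scale":     [0.5, 0.75, 1.0],       # assumed speeds relative to nominal
    "elbow_angle_deg": [90, 110, 130],          # assumed arm-angle settings
    "hand_height":     ["below_shoulder"],      # manners books: keep hand below shoulder
    "palm_facing":     ["guest"],               # manners books: show the palm to the guest
}

def motion_variants(grid):
    """Yield one parameter dictionary per candidate motion pattern."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

for variant in motion_variants(PARAMETER_GRID):
    print(variant)   # each pattern would drive the android for subjective rating
```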


robotics and biomimetics | 2014

Low-cost and highly reliable MIMAMORI device for patient monitoring

Atsutoshi Ikeda; Yasuyuki Otoda; Etsuko Ueda; Takayuki Nakamura; Tsukasa Ogasawara

In an aging society, it is important to closely monitor elderly people, especially those with cognitive disorders, because they can easily fall and sustain serious injuries. Therefore, demand for monitoring systems that can automatically detect dangerous situations is increasing. In this paper, we propose a low-cost, highly reliable monitoring device, called the MIMAMORI device, for a monitoring system at a medical institute. The MIMAMORI device can detect dangerous situations, such as when a patient is at risk of rolling out of bed. Our MIMAMORI device has the following advantages: 1) open-source software and hardware, 2) component-based software, and 3) construction from inexpensive off-the-shelf products. We detail the device integration and the patient state classification algorithm of the MIMAMORI device, and verify its practical applicability with a preliminary experiment. We found that the MIMAMORI device can identify dangerous situations with an accuracy of 93%. This result shows that the MIMAMORI device is effective in monitoring patients.
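The sketch below gives one plausible shape for such a patient-state classifier. The sensor type (bed load ratios), thresholds, and state names are illustrative assumptions; the paper's actual classification algorithm is not reproduced here.

```python
# Sketch: classify patient state from periodic sensor readings and alert staff
# when the readings suggest a risk of rolling out of bed.
from enum import Enum

class PatientState(Enum):
    SAFE = "safe"
    AT_BED_EDGE = "at_bed_edge"      # dangerous: risk of rolling out of bed
    OUT_OF_BED = "out_of_bed"

def classify(edge_load_ratio, center_load_ratio):
    """Assumed inputs: edge/center load ratios in [0, 1] from sensors under the bed."""
    if edge_load_ratio < 0.05 and center_load_ratio < 0.05:
        return PatientState.OUT_OF_BED
    if edge_load_ratio > 0.6:
        return PatientState.AT_BED_EDGE
    return PatientState.SAFE

def monitor(readings, alert):
    """readings: iterable of (edge, center) ratios; alert: callback to notify staff."""
    for edge, center in readings:
        state = classify(edge, center)
        if state is not PatientState.SAFE:
            alert(state)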


human-robot interaction | 2009

Individualization of voxel-based hand model

Albert J. Causo; Mai Matsuo; Etsuko Ueda; Yoshio Matsumoto; Tsukasa Ogasawara

Improvements in hand pose estimation, made possible by refining the model matching step, are necessary for creating a more natural human-robot interface. Individualizing the user's 3D hand model can result in better hand pose estimation. This paper presents a way to accomplish this individualization by estimating the lengths of the finger links (bones), which are unique to every user. The 3D hand model is made up of voxel data derived from silhouette images obtained by multiple cameras, and the finger link lengths are estimated by searching a set of models generated from a calibration motion of the fingers. Initial pose estimation results using the model show the feasibility of the system.
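For context, the shape-from-silhouette step assumed above can be sketched as voxel carving: a voxel is kept only if it projects inside the hand silhouette in every camera view. The project() helper and silhouette format are hypothetical placeholders, not the paper's implementation.

```python
# Sketch: carve a voxel model of the hand from multi-view binary silhouettes.
import numpy as np

def project(voxel_xyz, camera):
    # Placeholder: project a 3D point into (u, v) pixel coordinates for `camera`.
    return 0, 0

def carve_voxels(voxel_grid, cameras, silhouettes):
    """voxel_grid: (N, 3) candidate voxel centers; silhouettes: binary images."""
    keep = np.ones(len(voxel_grid), dtype=bool)
    for cam, sil in zip(cameras, silhouettes):
        h, w = sil.shape
        for i, voxel in enumerate(voxel_grid):
            if not keep[i]:
                continue
            u, v = project(voxel, cam)
            inside = 0 <= u < w and 0 <= v < h and bool(sil[v, u])
            keep[i] = inside
    return voxel_grid[keep]
```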


international conference of the ieee engineering in medicine and biology society | 2008

Estimation of physical constraint condition by analyzing walking motions

Yuichi Kurita; Daisuke Kuraki; Etsuko Ueda; Yoshio Matsumoto; Tsukasa Ogasawara

In this paper, a system for estimating physical constraint conditions from walking motion data is proposed. Walking motions of subjects wearing braces fixed to their bodies were measured by a motion capture system. We focused on the acceleration data of the body and identified which body parts are influenced by the physical constraint. The physical constraint condition was estimated with a Hidden Markov Model (HMM), based on a reliability map created from the recognition rates. Experimental results show that the constraint condition can be estimated efficiently by using data from particular body parts.
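The sketch below shows one common way to apply HMMs to this kind of task: train one Gaussian HMM per constraint condition on acceleration sequences, then label a new walk by the model with the highest log-likelihood. The feature layout and hmmlearn usage are assumptions, not taken from the paper.

```python
# Sketch: per-condition Gaussian HMMs for classifying walking constraint conditions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_by_condition, n_states=4):
    """sequences_by_condition: {condition: list of (T, n_features) acceleration arrays}."""
    models = {}
    for condition, seqs in sequences_by_condition.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        model.fit(X, lengths)
        models[condition] = model
    return models

def estimate_condition(models, walk_sequence):
    """Return the constraint condition whose HMM best explains the new walk."""
    return max(models, key=lambda c: models[c].score(walk_sequence))
```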

Collaboration


Dive into Etsuko Ueda's collaborations.

Top Co-Authors

Tsukasa Ogasawara
Nara Institute of Science and Technology

Yoshio Matsumoto
National Institute of Advanced Industrial Science and Technology

Albert J. Causo
Nara Institute of Science and Technology

Jun Takamatsu
Nara Institute of Science and Technology

Kenichi Iida
National Archives and Records Administration

Atsutoshi Ikeda
Nara Institute of Science and Technology

Masanao Koeda
Nara Institute of Science and Technology