
Publication


Featured research published by Ee Sian Neo.


IEEE Transactions on Robotics | 2007

Whole-Body Motion Generation Integrating Operator's Intention and Robot's Autonomy in Controlling Humanoid Robots

Ee Sian Neo; Kazuhito Yokoi; Shuuji Kajita; Kazuo Tanie

This paper introduces a framework for whole-body motion generation that integrates the operator's control and the robot's autonomous functions during online control of humanoid robots. Humanoid robots are biped machines that usually possess multiple degrees of freedom (DOF). The complexity of their structure and the difficulty of maintaining postural stability make whole-body control of humanoid robots fundamentally different from that of fixed-base manipulators. Taking hints from conscious and subconscious motion generation in humans, the authors propose a method of generating whole-body motions that integrates the operator's command input and the robot's autonomous functions. Instead of giving commands to all the joints all the time, the operator selects only the necessary points of the humanoid robot's body for manipulation. This paper first explains the concept of the system and the framework for integrating the operator's commands and autonomous functions in whole-body motion generation. Using the framework, autonomous functions were constructed for maintaining the postural stability constraint while satisfying the desired trajectories of operation points, including the feet, while interacting with the environment. Finally, this paper reports on the implementation of the proposed method to teleoperate two 30-DOF humanoid robots, HRP-1S and HRP-2, using only two 3-DOF joysticks. Experiments teleoperating the two robots are reported to verify the effectiveness of the proposed method.
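The division of labor the abstract describes, with the operator commanding only selected operation points while autonomy keeps the robot balanced, can be illustrated with a toy sketch. The two-mass "body", the masses, and the support interval below are all invented for illustration; the paper's actual controller works on a full multi-DOF model.

```python
# Illustrative sketch (not the paper's implementation): the operator commands
# only a selected operation point (here, one hand), and an autonomous function
# adjusts the rest of the body so the center of mass (CoM) stays over the
# support region. All numbers and the two-mass model are hypothetical.

def com_x(waist_x, hand_x, waist_mass=40.0, hand_mass=5.0):
    """Horizontal CoM of a toy two-mass body: waist plus extended hand."""
    total = waist_mass + hand_mass
    return (waist_mass * waist_x + hand_mass * hand_x) / total

def autonomous_balance(hand_x, support=(-0.10, 0.10)):
    """Given the operator's hand command, shift the waist so the CoM
    stays inside the support interval (a 1-D stand-in for the polygon)."""
    lo, hi = support
    waist_x = 0.0
    c = com_x(waist_x, hand_x)
    if c > hi:                       # CoM too far forward: pull the waist back
        waist_x -= (c - hi) * (45.0 / 40.0)
    elif c < lo:                     # CoM too far back: push the waist forward
        waist_x += (lo - c) * (45.0 / 40.0)
    return waist_x

# The operator reaches far forward; autonomy compensates with the waist.
waist = autonomous_balance(hand_x=1.2)
balanced_com = com_x(waist, 1.2)
```

The point of the sketch is the interface: the operator never specifies the waist motion; it is derived automatically from the balance constraint.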


International Conference on Robotics and Automation | 2009

Clothes state recognition using 3D observed data

Yasuyo Kita; Toshio Ueshiba; Ee Sian Neo; Nobuyuki Kita

In this paper, we propose a deformable-model-driven method to recognize the state of hanging clothes using three-dimensional (3D) observed data. For the task of picking up a specific part of the clothes, it is indispensable to obtain the 3D position and posture of that part. In order to robustly obtain such information from 3D observed data of the clothes, we take a deformable-model-driven approach [4], which recognizes the clothes state by comparing the observed data with candidate shapes predicted in advance. To carry out this approach despite the large shape variation of clothes, we propose a two-stage method. First, a small number of representative 3D shapes are calculated through physical simulations of hanging the clothes. Then, after observing the clothes, each representative shape is deformed so as to better fit the observed 3D data. The consistency between the adjusted shapes and the observed data is checked to select the correct state. Experimental results using actual observations have shown the good prospects of the proposed method.
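The two-stage structure, comparing observed data against precomputed representative shapes and refining each candidate before selecting the most consistent one, can be sketched in miniature. Here the "representative shapes" are tiny made-up 2-D point sets and the "deformation" is a pure translation fit, where the paper uses physically simulated 3-D cloth shapes and a richer deformation.

```python
# Hypothetical sketch of the two-stage recognition idea. Stage 1 shapes and
# the observation are invented point sets, not simulated clothing.
import math

def fit_translation(model, observed):
    """Least-squares translation aligning a model shape to the observation."""
    n = len(model)
    tx = sum(o[0] - m[0] for m, o in zip(model, observed)) / n
    ty = sum(o[1] - m[1] for m, o in zip(model, observed)) / n
    return tx, ty

def residual(model, observed, t):
    """RMS point-to-point error after applying the fitted translation."""
    tx, ty = t
    return math.sqrt(sum((m[0] + tx - o[0]) ** 2 + (m[1] + ty - o[1]) ** 2
                         for m, o in zip(model, observed)) / len(model))

def recognize(candidates, observed):
    """Stage 2: fit each representative shape to the observation, then
    select the state whose adjusted shape is most consistent with it."""
    return min(candidates,
               key=lambda s: residual(candidates[s], observed,
                                      fit_translation(candidates[s], observed)))

candidates = {                       # stage 1: representative shapes
    "hung_by_corner": [(0, 0), (1, -1), (-1, -1)],
    "hung_by_edge":   [(0, 0), (1, 0), (0.5, -1)],
}
observed = [(0.2, 0.1), (1.2, -0.9), (-0.8, -0.9)]   # shifted "corner" shape
state = recognize(candidates, observed)
```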


IEEE Transactions on Robotics | 2006

Stepping over obstacles with humanoid robots

Yisheng Guan; Ee Sian Neo; Kazuhito Yokoi; Kazuo Tanie

The wide potential applications of humanoid robots require that the robots be able to walk in complex environments and overcome various obstacles. To this end, we address the problem of humanoid robots stepping over obstacles in this paper. We focus on two aspects: feasibility analysis and motion planning. The former determines whether a robot can step over a given obstacle, and the latter discusses how to step over, if feasible, by planning appropriate motions for the robot. We systematically examine both of these aspects. In the feasibility analysis, using an optimization technique, we cast the problem into global optimization models with nonlinear constraints, including collision-free and balance constraints. The solutions to the optimization models yield answers to the possibility of stepping over obstacles under some assumptions. The presented approach to feasibility provides not only a priori knowledge and a database for implementing stepping over obstacles, but also a tool to evaluate and compare the mobility of humanoid robots. In motion planning, we present an algorithm to generate suitable trajectories for the feet and the waist of the robot using a heuristic methodology, based on the results of the feasibility analysis. We decompose the body motion of the robot into two parts, corresponding to the lower body and upper body of the robot, to meet the collision-free and balance constraints. This novel planning method is adaptive to obstacle sizes and is, hence, oriented to autonomous stepping over by humanoid robots guided by vision or other range finders. Its effectiveness is verified by simulations and experiments on our humanoid platform HRP-2.
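The feasibility question, whether a given obstacle can be stepped over at all under collision-free and balance constraints, can be caricatured with a closed-form check. The leg span, foot length, gap margins, and clearance height below are invented numbers; the paper solves the real problem as a constrained nonlinear optimization over the full robot kinematics.

```python
# Toy feasibility test in the spirit of the paper's analysis (hypothetical
# parameters, drastically simplified geometry).

def can_step_over(obstacle_width, obstacle_height,
                  max_stride=0.50, foot_len=0.24, swing_clearance=0.35,
                  min_gap=0.02):
    """Feasible iff (a) the swing foot can clear the obstacle height and
    (b) the minimal stride that keeps both feet collision-free
    (rear gap + obstacle width + front gap + foot length) fits the span."""
    if obstacle_height >= swing_clearance:
        return False                 # swing leg cannot lift high enough
    min_stride = min_gap + obstacle_width + min_gap + foot_len
    return min_stride <= max_stride

feasible = can_step_over(obstacle_width=0.15, obstacle_height=0.10)
too_wide = can_step_over(obstacle_width=0.40, obstacle_height=0.10)
too_tall = can_step_over(obstacle_width=0.15, obstacle_height=0.40)
```

Precomputing this answer over a grid of obstacle sizes is one way to read the paper's "database" for autonomous stepping over.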


ISRR | 2011

Cybernetic Human HRP-4C: A Humanoid Robot with Human-Like Proportions

Shuuji Kajita; Kenji Kaneko; Fumio Kaneiro; Kensuke Harada; Mitsuharu Morisawa; Shin’ichiro Nakaoka; Kanako Miura; Kiyoshi Fujiwara; Ee Sian Neo; Isao Hara; Kazuhito Yokoi; Hirohisa Hirukawa

Cybernetic human HRP-4C is a humanoid robot whose body dimensions were designed to match those of the average young Japanese female. In this paper, we explain the aim of the development, the realization of its human-like shape and dimensions, and research toward realizing human-like motion and interaction using speech recognition.


Intelligent Robots and Systems | 2009

A method for handling a specific part of clothing by dual arms

Yasuyo Kita; Toshio Ueshiba; Ee Sian Neo; Nobuyuki Kita

In this paper, we propose a strategy for a dual-arm robot to pick up a specific part of clothing with one hand while holding the item of clothing with its other hand. Due to the large deformability of clothing, the handling requirements differ from those for rigid objects. In the case of grasping a specific part of clothing, large deformation leads to a large variety of positions and orientations of the target part, requiring flexibility in both visual recognition and motion control. On the other hand, since the clothing can flexibly curve over the hand, a relatively large range of suitable actions is allowed for grasping it. Considering these characteristics, the following three-stage strategy is proposed. First, the state of the clothing is recognized from visual observation using a deformable model [1]. Then, the theoretically optimal position and orientation of the hand for handling a specific part of the clothing are calculated based on the recognition results. Finally, the position and orientation of the hand are modified by considering the executable motion range of the dual arms. Preliminary experimental results using actual observations by a humanoid robot validate the effectiveness of the proposed strategy.
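The last two stages, computing a theoretically optimal hand pose and then pulling it back into the arm's executable range, can be sketched for one hand. The approach-along-the-normal rule, the yaw limits, and the pose representation are all assumptions made for illustration, not the paper's kinematics.

```python
# Hedged sketch of stages 2 and 3 of the strategy (hypothetical geometry).
import math

def ideal_hand_pose(part_pos, part_normal):
    """Stage 2: approach the target part along its surface normal."""
    yaw = math.atan2(part_normal[1], part_normal[0])
    return part_pos, yaw

def clamp_to_executable(pose, yaw_range=(-math.pi / 3, math.pi / 3)):
    """Stage 3: exploit the cloth's flexibility. If the ideal yaw falls
    outside the arm's executable range, use the nearest executable yaw;
    the cloth will curve over the hand anyway."""
    pos, yaw = pose
    lo, hi = yaw_range
    return pos, min(max(yaw, lo), hi)

pose = ideal_hand_pose((0.4, 0.1, 0.9), (0.0, 1.0, 0.0))  # normal along +y
pos, yaw = clamp_to_executable(pose)
```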


IEEE-RAS International Conference on Humanoid Robots | 2006

A Behavior Level Operation System for Humanoid Robots

Ee Sian Neo; Takeshi Sakaguchi; Kazuhito Yokoi; Yoshihiro Kawai; Kenichi Maruyama

This paper introduces the construction of a behavior-level operation system for humanoid robots. One of the future applications of humanoid robots is to serve in environments such as homes or offices. These places are partially unknown environments, where some knowledge of the world can be described and stored in the system beforehand. A heuristic methodology that utilizes an online operation system to actually operate a humanoid robot acting on objects when designing object-oriented actions is introduced. Autonomous behaviors are constructed by integrating the designed object-oriented actions with 3D visual recognition functions equipped with a library of geometric object models. A behavior-level operation system allowing an operator to monitor and modify the robot's behavior online is constructed. The behavior-level operation system is implemented on the humanoid robot HRP-2, and experiments realizing a drink-serving task, which includes taking a can of drink out of a fridge, are reported.
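The core idea of object-oriented actions, attaching the actions that make sense for an object to the object itself so the operator can command at the behavior level rather than the joint level, can be sketched as a lookup table of motion primitives. The object names, behaviors, and primitives below are invented for illustration; they are not the paper's actual action library.

```python
# Minimal sketch (hypothetical API) of behavior-level operation: the operator
# says "open the fridge", and the system expands it into motion primitives.

ACTIONS = {
    "fridge": {"open":  ["approach", "grasp_handle", "pull"],
               "close": ["push_door"]},
    "can":    {"take":  ["reach", "grasp", "lift"]},
}

def execute_behavior(obj, behavior, log):
    """Expand a behavior-level command into its motion primitives.

    `log` stands in for the monitoring channel; in the real system the
    operator can watch and modify the robot's behavior online."""
    for primitive in ACTIONS[obj][behavior]:
        log.append((obj, primitive))

log = []
for obj, behavior in [("fridge", "open"), ("can", "take")]:
    execute_behavior(obj, behavior, log)
```

The drink-serving experiment in the paper is, in this picture, a sequence of such behavior-level commands rather than a single monolithic program.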


Intelligent Robots and Systems | 2010

Clothes handling using visual recognition in cooperation with actions

Yasuyo Kita; Ee Sian Neo; Toshio Ueshiba; Nobuyuki Kita

In this paper, we propose a method of visual recognition in cooperation with actions for the automatic handling of clothing by a robot. The difficulty of visually recognizing clothing largely depends on its observed shape. Therefore, a strategy of actively bringing the clothing into a shape that is easier to recognize should be effective. First, after the clothing is observed by a trinocular stereo vision system, it is checked whether the observation gives enough information to recognize the clothing's shape robustly. If not, appropriate "recognition-aid" actions, such as rotating and/or spreading the clothing, are automatically planned based on visual analysis of the current shape. After executing the planned action, the clothing is observed again for recognition. The effect of the spreading action was demonstrated through experimental results using an actual humanoid.
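The observe, evaluate, act loop described above can be sketched as a small bounded loop. The confidence threshold, the action choices, and the fake sensor are all invented stand-ins; the paper derives both the ambiguity measure and the action plan from analysis of the stereo observation.

```python
# Sketch of recognition in cooperation with actions (hypothetical thresholds).

def plan_recognition_aid(confidence, spread_done=False):
    """Pick a recognition-aid action when the observation is ambiguous."""
    if confidence >= 0.8:
        return None                  # shape already recognizable as-is
    return "rotate" if spread_done else "spread"

def handle(observe):
    """observe(i) returns (confidence, state); re-observe after each aid."""
    actions = []
    spread_done = False
    for _ in range(3):               # bounded number of aid actions
        confidence, state = observe(len(actions))
        action = plan_recognition_aid(confidence, spread_done)
        if action is None:
            return state, actions
        actions.append(action)
        spread_done = spread_done or action == "spread"
    return None, actions             # give up: still ambiguous

# Fake sensor: ambiguous at first, clear after one spreading action.
readings = [(0.4, "unknown"), (0.9, "hung_by_corner")]
state, actions = handle(lambda i: readings[i])
```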


Intelligent Robots and Systems | 2008

Intercontinental multimodal tele-cooperation using a humanoid robot

Angelika Peer; Sandra Hirche; Carolina Weber; Inga Krause; Martin Buss; Sylvain Miossec; Paul Evrard; Olivier Stasse; Ee Sian Neo; Abderrahmane Kheddar; Kazuhito Yokoi

In multimodal tele-cooperation as considered in this paper, two humans in distant locations jointly perform a task requiring multimodal feedback, including haptic feedback. One human operator teleoperates a remotely placed humanoid robot that is collocated with the human cooperator. Time delay in the communication channel, as a destabilizing factor, is one of the multiple challenges associated with such a tele-cooperation setup. In this paper, we employ a control architecture with force-position exchange, accounting for the admittance type of the haptic input device and the telerobot, both of which are position-based admittance controlled. Llewellyn's stability criteria are employed for tuning the parameters of the virtual impedances in the presence of time delay. The control strategy is successfully validated in an intercontinental tele-cooperation experiment with the humanoid telerobot HRP-2 located in Tsukuba, Japan, and a multimodal human-system interface located in Munich, Germany; see also the corresponding video submission. The proposed setup gives rise to a large number of exciting new research questions to be addressed in the future.
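Llewellyn's absolute-stability criteria for a two-port require, at every frequency, a non-negative real part for both driving-point parameters and that the inequality 2 Re(p11) Re(p22) >= |p12 p21| + Re(p12 p21) holds. A parameter-tuning loop can evaluate these conditions on frequency samples of the teleoperator two-port; the sample matrices below are made-up numbers, not the paper's identified model.

```python
# Sketch of a Llewellyn absolute-stability check over sampled frequencies,
# as used when tuning virtual impedances under time delay. Example two-port
# parameter samples (p11, p12, p21, p22) are hypothetical.

def llewellyn_stable(p11, p12, p21, p22):
    """Llewellyn's criteria at one frequency sample."""
    if p11.real < 0 or p22.real < 0:
        return False
    cross = p12 * p21
    return 2 * p11.real * p22.real >= abs(cross) + cross.real

def stable_over_band(samples):
    """Absolutely stable iff the criteria hold at every sampled frequency."""
    return all(llewellyn_stable(*s) for s in samples)

passive = [(1 + 1j, 0.5, 0.5, 1 - 1j)]   # inequality satisfied
active = [(0.1, 2.0, 2.0, 0.1)]          # cross-coupling dominates
ok = stable_over_band(passive)
bad = stable_over_band(active)
```

In a tuning loop, one would sweep the virtual impedance parameters and keep only settings for which `stable_over_band` holds across the delay range of interest.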


Intelligent Robots and Systems | 2008

Intercontinental cooperative telemanipulation between Germany and Japan

Angelika Peer; Sandra Hirche; Carolina Weber; Inga Krause; Martin Buss; Sylvain Miossec; Paul Evrard; Olivier Stasse; Ee Sian Neo; Abderrahmane Kheddar; Kazuhito Yokoi

The video shows an intercontinental cooperative telemanipulation task in which the operator site is located in Munich, Germany, and the teleoperator site in Tsukuba, Japan. The human operator controls a remotely located teleoperator, which performs a task in the remote environment. The human operator is assisted by another person located at the remote site. The task consists of jointly grasping an object, moving it to a new position, and finally releasing it.


Robotics and Biomimetics | 2006

A Method for Determining the Reaching Behavior of Humanoid Robots

Haitao Na; Ee Sian Neo; Kazuhito Yokoi

This paper addresses the problem of the reaching motion of humanoid robots. In order to determine the necessary motions for reaching, an object-oriented reachable space (ORS) concept is proposed. After checking whether an object is reachable or unreachable, a standing point (i.e., where the humanoid should stand) is calculated. A method of generating a reaching motion is also described. Simulation results with the humanoid simulator OpenHRP demonstrate the effectiveness of the proposed method.
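The reachable-or-not check followed by standing-point calculation can be illustrated with a planar toy model. Modeling the reachable space as an annulus around the robot and the specific radii and margin are assumptions for illustration only; the paper's ORS is defined with respect to the object and the full robot kinematics.

```python
# Toy version of the standing-point idea (hypothetical reachable annulus).
import math

R_MIN, R_MAX = 0.25, 0.70            # assumed reachable annulus radii [m]

def reachable(stand, obj):
    """Is the object inside the annulus around the standing point?"""
    return R_MIN <= math.dist(stand, obj) <= R_MAX

def standing_point(stand, obj, margin=0.50):
    """Keep the current stand if the object is reachable; otherwise move
    along the line toward the object to a point `margin` away from it."""
    if reachable(stand, obj):
        return stand
    d = math.dist(stand, obj)
    t = (d - margin) / d
    return (stand[0] + t * (obj[0] - stand[0]),
            stand[1] + t * (obj[1] - stand[1]))

stand = standing_point((0.0, 0.0), (2.0, 0.0))   # object 2 m away: walk first
```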

Collaboration


Dive into Ee Sian Neo's collaboration.

Top Co-Authors

Kazuhito Yokoi, National Institute of Advanced Industrial Science and Technology
Nobuyuki Kita, National Institute of Advanced Industrial Science and Technology
Takeshi Sakaguchi, National Institute of Advanced Industrial Science and Technology
Toshio Ueshiba, National Institute of Advanced Industrial Science and Technology
Yasuyo Kita, National Institute of Advanced Industrial Science and Technology
Abderrahmane Kheddar, National Institute of Advanced Industrial Science and Technology