Olivier Stasse
University of Toulouse
Publications
Featured research published by Olivier Stasse.
Intelligent Robots and Systems | 2006
Olivier Stasse; Andrew J. Davison; Ramzi Sellaouti; Kazuhito Yokoi
Humanoid robotics and SLAM (simultaneous localisation and mapping) are certainly two of the most significant themes of the current worldwide robotics research effort, but the two fields have up until now largely run independent parallel paths, despite the obvious benefit to be gained in joining the two. The next major step forward in humanoid robotics will be increased autonomy, and the ability of a robot to create its own world map on the fly will be a significant enabling technology. Meanwhile, SLAM techniques have found most success with robot platforms and sensor configurations which are outside of the humanoid domain. Humanoid robots move with high linear and angular accelerations in full 3D, and normally only vision is available as an outward-looking sensor. Building on recently published work on monocular SLAM using vision, and on pattern generation, we show that real-time SLAM for a humanoid can indeed be achieved. Using HRP-2, we present results in which a sparse 3D map of visual landmarks is acquired on the fly using a single camera, and we demonstrate loop closing and drift-free 3D motion estimation within a typical cluttered indoor environment. This is achieved by tightly coupling the pattern generator, the robot odometry and inertial sensing to aid visual mapping within a standard EKF framework. To our knowledge, this is the first implementation of real-time 3D SLAM for a humanoid robot able to demonstrate loop closing.
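As a rough illustration of the EKF framework described in the abstract, here is a minimal sketch assuming a simplified planar state (pose plus 2-D landmarks), odometry from the pattern generator as control input, and direct landmark-position observations. The paper's actual system is a full 3-D monocular SLAM formulation; everything below is a hypothetical simplification.

```python
import numpy as np

# Hypothetical EKF-SLAM sketch: state = [x, y, theta, l1x, l1y, ..., lNx, lNy].
# Odometry (from the walking pattern generator) drives the prediction step;
# camera landmark measurements drive the update step.

def predict(mu, Sigma, odom, Q):
    """Propagate the robot pose with odometry (dx, dy, dtheta in the robot frame)."""
    dx, dy, dth = odom
    c, s = np.cos(mu[2]), np.sin(mu[2])
    mu = mu.copy()
    mu[0] += c * dx - s * dy
    mu[1] += s * dx + c * dy
    mu[2] += dth
    F = np.eye(len(mu))                      # motion Jacobian (identity for landmarks)
    F[0, 2] = -s * dx - c * dy
    F[1, 2] = c * dx - s * dy
    Sigma = F @ Sigma @ F.T
    Sigma[:3, :3] += Q                       # process noise on the pose block only
    return mu, Sigma

def update(mu, Sigma, z, idx, R):
    """Correct with one landmark observation z (2-D position, for brevity)."""
    j = 3 + 2 * idx
    H = np.zeros((2, len(mu)))
    H[:, j:j + 2] = np.eye(2)
    innov = z - mu[j:j + 2]
    S = H @ Sigma @ H.T + R
    K = Sigma @ H.T @ np.linalg.inv(S)
    return mu + K @ innov, (np.eye(len(mu)) - K @ H) @ Sigma
```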
International Conference on Robotics and Automation | 2007
Nicolas Mansard; Olivier Stasse; François Chaumette; Kazuhito Yokoi
In this paper, we apply a general framework for building complex whole-body controllers for highly redundant robots, and we propose to implement it for visually-guided grasping while walking on a humanoid robot. The key idea is to divide the control into several sensor-based control tasks that are simultaneously executed by a general structure called the stack of tasks. This structure provides simple support for task sequencing and can be used for task-level control. This framework was applied to a visual servoing task. The robot walks along a planned path, keeping the specified object in the middle of its field of view, and finally, when it is close enough, the robot grasps the object while walking.
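The prioritized resolution behind a stack of tasks can be illustrated with the classical null-space projection scheme, where each task is solved in the null space of all higher-priority tasks. This is a generic sketch of that technique, not the authors' implementation; `tasks` is a list of (Jacobian, desired task velocity) pairs, highest priority first.

```python
import numpy as np

def stack_of_tasks(tasks, n_dof, damping=1e-6):
    """Prioritized least-squares resolution: each task (J, e_dot) is solved in the
    null space of the tasks above it. Returns the joint velocity command q_dot."""
    q_dot = np.zeros(n_dof)
    P = np.eye(n_dof)                       # null-space projector of the tasks solved so far
    for J, e_dot in tasks:
        JP = J @ P
        # damped pseudo-inverse for robustness near singularities
        JP_pinv = JP.T @ np.linalg.inv(JP @ JP.T + damping * np.eye(J.shape[0]))
        q_dot = q_dot + JP_pinv @ (e_dot - J @ q_dot)
        P = P - JP_pinv @ JP
    return q_dot
```

A typical call would stack, for instance, a balance task, a gaze task and a hand task, in that priority order: `stack_of_tasks([(J_com, de_com), (J_gaze, de_gaze), (J_hand, de_hand)], n_dof)`.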
Intelligent Robots and Systems | 2006
Ramzi Sellaouti; Olivier Stasse; Shuuji Kajita; Kazuhito Yokoi; Abderrahmane Kheddar
This paper addresses the role of toe joints in increasing the walking speed of biped robots. It is noteworthy that adding a toe joint increases the step length thanks to the additional degree of freedom. The originality of this work, however, is that longer steps are obtained thanks to an under-actuated phase and an appropriate ZMP trajectory. The simulations showed that adding passive toe joints allows smoother and 1.5 times faster walking.
IEEE Transactions on Robotics | 2012
Nicolas Perrin; Olivier Stasse; Léo Baudouin; Florent Lamiraux; Eiichi Yoshida
In this paper, we propose a novel and coherent framework for fast footstep planning for legged robots on a flat ground with 3-D obstacle avoidance. We use swept volume approximations that are computed offline in order to considerably reduce the time spent in collision checking during the online planning phase, in which a rapidly exploring random tree variant is used to find collision-free sequences of half-steps (which are produced by a specific walking pattern generator). Then, an original homotopy is used to smooth the sequences into natural motions, gently avoiding the obstacles. The results are experimentally validated on the robot HRP-2.
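The planning loop can be pictured with the following skeleton, which assumes hashable stance states (e.g. tuples), a hypothetical `collides` test backed by the precomputed swept-volume approximations, and a simplified random-node expansion rather than a proper nearest-neighbor RRT. The half-step parameterization and the smoothing homotopy are specific to the paper and omitted here.

```python
import random

def plan_footsteps(start, sample_step, collides, reached_goal, max_iters=5000):
    """Grow a tree of stance states connected by half-steps; edges are validated
    against offline swept-volume approximations via collides(parent, child)."""
    tree = {start: None}                        # node -> parent
    for _ in range(max_iters):
        parent = random.choice(list(tree))      # simplified expansion strategy
        child = sample_step(parent)             # candidate half-step from that stance
        if child in tree or collides(parent, child):
            continue
        tree[child] = parent
        if reached_goal(child):                 # backtrack the collision-free sequence
            seq = [child]
            while tree[seq[-1]] is not None:
                seq.append(tree[seq[-1]])
            return list(reversed(seq))
    return None                                 # no sequence found within the budget
```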
International Conference on Robotics and Automation | 2008
Olivier Stasse; Adrien Escande; Nicolas Mansard; Sylvain Miossec; Paul Evrard; Abderrahmane Kheddar
This paper proposes a real-time implementation of collision and self-collision avoidance for robots. On the basis of a new proximity distance computation method which ensures a continuous gradient, a new controller in the velocity domain is proposed. The gradient continuity guarantees that the generated command contains no jumps. Included in a stack-of-tasks architecture, this controller has been implemented on the humanoid platform HRP-2 and validated in a grasping task while walking and avoiding both collisions with the environment and self-collisions.
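A velocity-domain avoidance constraint is commonly expressed as a "velocity damper" inequality that limits the approach speed along the distance gradient as two bodies get close. The sketch below shows that generic formulation under assumed inputs (the paper's specific contribution is the continuous-gradient proximity distance computation that feeds it).

```python
import numpy as np

def collision_velocity_limit(d, grad_d_q, d_influence, d_security, xi=1.0):
    """Return (a, b) for the linear constraint a @ q_dot >= b, which keeps the
    proximity distance d above d_security.
    d           : current distance between the two bodies
    grad_d_q    : gradient of d w.r.t. the joint positions (continuous by construction)
    d_influence : distance below which the constraint becomes active
    d_security  : minimal allowed distance
    xi          : convergence gain of the damper
    """
    if d >= d_influence:
        return None                                  # inactive far from the obstacle
    b = -xi * (d - d_security) / (d_influence - d_security)
    return np.asarray(grad_d_q), b                   # enforce grad_d_q @ q_dot >= b
```

The returned pair can be stacked with other inequality constraints and handed to the velocity-domain solver that computes the joint command.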
IEEE Transactions on Robotics | 2009
Olivier Stasse; Björn Verrelst; Bram Vanderborght; Kazuhito Yokoi
This study proposes a complete solution to make the humanoid robot HRP-2 dynamically step over large obstacles. As compared with previous results using quasistatic stability, where the robot crosses over a 15-cm obstacle in 40 s, our solution allows HRP-2 to step over the same obstacle in 4 s. This approach allows the robot to clear obstacles as high as 21% of the robot's leg length (15 cm) while walking. Simulations show the possibility to step over an obstacle that is 35% of the leg length (25 cm) with a margin of 3 cm.
IEEE-RAS International Conference on Humanoid Robots | 2006
Björn Verrelst; Olivier Stasse; Kazuhito Yokoi; Bram Vanderborght
Humanoid robots have the potential to navigate through complex environments such as a standard human living environment. One of their advantages is that a biped can negotiate obstacles by stepping over them, which is the topic of the work presented in this paper. The main focus of this research is to investigate stepping over large obstacles. Previous work has reported on algorithms using quasi-static balancing, which resulted in somewhat unnatural, slow motions. This work, however, focuses on stepping over larger obstacles in a fluent dynamic motion, using stability criteria based on the zero moment point instead of the center of gravity. All the work is formulated for the HRP-2 humanoid research platform. The strategy uses a preview controller for dynamic balancing and consists of collision-free trajectory generation for the feet and waist. The paper provides an experimental proof of concept for dynamically stepping over large obstacles and pinpoints some difficulties in the planning for practical implementations. Several measures have been implemented, e.g., to avoid overstretching of the knee and to reduce impact at touch-down.
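The preview controller used for dynamic balancing is typically Kajita's cart-table scheme, in which the CoM jerk is computed from LQ gains and a window of future ZMP references. A minimal sketch of the online step is shown below, assuming the discrete cart-table model (A, B, C) and the gains Gi, Gx, Gd have been precomputed offline; exact indexing conventions vary between implementations.

```python
import numpy as np

def preview_control_step(x, e_sum, zmp_ref_window, A, B, C, Gi, Gx, Gd):
    """One step of ZMP preview control along one horizontal axis.
    x              : CoM state [position, velocity, acceleration]
    e_sum          : accumulated ZMP tracking error (integral term)
    zmp_ref_window : future ZMP references over the preview horizon
    A, B, C        : discrete cart-table dynamics (zmp = C @ x)
    Gi, Gx, Gd     : integral, state and preview gains (precomputed offline)
    """
    e_sum = e_sum + (C @ x - zmp_ref_window[0])        # integrate the tracking error
    u = -Gi * e_sum - Gx @ x - Gd @ zmp_ref_window     # CoM jerk command
    x = A @ x + B.flatten() * u                        # propagate the cart-table model
    return x, e_sum
```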
Intelligent Robots and Systems | 2015
J. Koenemann; A. Del Prete; Y. Tassa; Emanuel Todorov; Olivier Stasse; Nicolas Mansard
Controlling the robot with a permanently-updated optimal trajectory, also known as model predictive control, is the Holy Grail of whole-body motion generation. Before it can be obtained, several challenges must be faced: computation cost, non-linear local minima, algorithm stability, etc. In this paper, we address the problem of applying the updated optimal control in real-time on the physical robot. In particular, we focus on the problems raised by the delays due to computation and by the differences between the real robot and the simulated model. Based on the optimal-control solver of MuJoCo, we implemented a complete model-predictive controller and applied it in real-time on the physical HRP-2 robot. It is the first time that such a whole-body model predictive controller has been applied in real-time on a complex dynamic robot. Aside from the technical contributions cited above, the main contribution of this paper is to report the experimental results of this first implementation.
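The real-time issue the paper addresses can be pictured as a receding-horizon loop in which the trajectory is re-optimized from the latest measured state and applied with an offset that accounts for how long the solve took. The sketch below is purely schematic, with hypothetical callables `get_state`, `solve_trajectory` and `send_command`; the actual work relies on MuJoCo's whole-body optimal-control machinery and on further compensation for the model/robot mismatch.

```python
import time

def mpc_loop(get_state, solve_trajectory, send_command, horizon, control_dt):
    """Schematic whole-body MPC loop: re-optimize from the current state, then
    apply the plan shifted by the solver delay so the command matches the robot
    'now' rather than when the solve started."""
    while True:
        t0 = time.time()
        plan = solve_trajectory(get_state(), horizon)   # warm-started in practice
        delay = time.time() - t0
        offset = int(delay / control_dt)                # steps consumed during the solve
        # apply the controls covering roughly one solver cycle, then re-optimize
        for u in plan[offset : offset + max(offset, 1)]:
            send_command(u)
            time.sleep(control_dt)
```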
Lecture Notes in Computer Science | 2000
Olivier Stasse; Yasuo Kuniyoshi; Gordon Cheng
The aim of this paper is to present our attempt at creating a visual system for a humanoid robot which can take part in nonspecific tasks in real-time. Due to the generic nature of our goal, our models are based on the human visual architecture. Such approaches have usually been at odds with the efficient implementation of real systems because of their demanding computational cost. We show that by using PredN, a system for developing distributed real-time robotic applications, we are able to build a real-time, scalable visual attention system. It is easy to change the structure of the system, or the hardware, in order to investigate new models. We also endow our system with a number of human visual attributes, such as a log-polar retino-cortical mapping and banks of oriented filters providing a generic signature of any object in an image. Additionally, a visual attention mechanism based on a psychophysical model, FeatureGate, is used to elicit a fixation point. The system runs at frame rate, allowing interaction on the same time scale as humans.
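The log-polar retino-cortical mapping mentioned above can be sketched as a direct pixel remapping around a fovea center, giving high resolution near the center and coarse resolution at the periphery. This is a toy, single-image version under assumed parameters; in the paper's system it runs as one node of a distributed real-time pipeline.

```python
import numpy as np

def log_polar_map(image, n_rings=64, n_wedges=128, r_min=1.0):
    """Resample a grayscale image into (ring, wedge) log-polar coordinates
    centred on the image middle; ring radii grow exponentially."""
    h, w = image.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cx, cy)
    out = np.zeros((n_rings, n_wedges), dtype=image.dtype)
    for i in range(n_rings):
        # exponential radial spacing: dense in the fovea, sparse in the periphery
        r = r_min * (r_max / r_min) ** (i / (n_rings - 1))
        for j in range(n_wedges):
            theta = 2.0 * np.pi * j / n_wedges
            y = int(cy + r * np.sin(theta))
            x = int(cx + r * np.cos(theta))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = image[y, x]
    return out
```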
Intelligent Robots and Systems | 2007
Francois Saidi; Olivier Stasse; Kazuhito Yokoi; Fumio Kanehiro
This paper presents an active visual search behavior for objects in a 3D environment, performed by an HRP-2 humanoid robot. The search is formalized as an optimization problem in which the goal is to maximize the target detection probability while minimizing the energy/distance and time needed to achieve the task. Natural constraints on the camera parameter space, based on the characteristics of the recognition system, are used to reduce the dimension of the problem and to speed up the optimization process, so as to achieve real-time behavior. We present simulation and real experimental results using an HRP-2 robot.
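The optimization can be summarized as a next-best-view selection over candidate camera configurations, trading off expected detection gain against motion and time cost. The greedy sketch below is a generic illustration with hypothetical scoring callables, not the paper's exact objective; the candidate set is assumed to be pre-pruned by the recognizer's constraints (e.g. minimum and maximum viewing distance).

```python
def next_best_view(candidates, detection_prob, motion_cost, alpha=1.0):
    """Pick the camera configuration maximizing detection probability minus a
    weighted motion/time cost; `candidates` are reachable (pose, pan-tilt) tuples."""
    best, best_score = None, float("-inf")
    for view in candidates:
        score = detection_prob(view) - alpha * motion_cost(view)
        if score > best_score:
            best, best_score = view, score
    return best
```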
Collaboration
Dive into Olivier Stasse's collaborations.
National Institute of Advanced Industrial Science and Technology