Anthony Mallet
Centre national de la recherche scientifique
Publications
Featured research published by Anthony Mallet.
international conference on robotics and automation | 2000
Anthony Mallet; Simon Lacroix; Laurent Gallo
This paper presents a method that estimates robot displacements in outdoor unstructured terrain. It computes the displacements on the basis of associations between 3D point sets produced by consecutive stereovision frames, the associations being determined by tracking pixels from one image frame to the next. The paper details the various steps of the algorithm and presents first experimental results: they show that the algorithm is able to estimate the six parameters of the robot position with a relative error smaller than about 5%, processing several hundred images over several tens of meters.
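The geometric core of such a pipeline, recovering a six-parameter displacement from two associated 3D point sets, can be sketched as a least-squares rigid alignment. The sketch below uses the standard SVD-based method (Arun et al.); it illustrates that one ingredient under that assumption and is not the paper's implementation.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: (N, 3) arrays of matched 3D points, e.g. stereo points
    associated by tracking pixels across consecutive frames.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)    # centroids
    H = (P - cp).T @ (Q - cq)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # rotation with det(R) = +1
    t = cq - R @ cp                            # translation
    return R, t
```

Accumulating these frame-to-frame estimates yields the robot displacement; the relative error quoted above then grows with the distance traveled.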
international symposium on experimental robotics | 2000
Simon Lacroix; Anthony Mallet; David Bonnafous; Gérard Bauzil; Sara Fleury; Matthieu Herrb; Raja Chatila
Autonomous long-range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundred meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently running on board the Marsokhod-model robot Lama at LAAS/CNRS. We then focus on the necessity of integrating various instances of the perception and decision functionalities, and on the difficulties raised by this integration.
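The scheduling aspect mentioned above, i.e. starting, controlling and interrupting concurrent activities, can be illustrated with a minimal supervisor loop. The names below (Activity, Supervisor) are hypothetical placeholders; the actual LAAS architecture is considerably richer.

```python
# Minimal sketch of activity supervision; names are illustrative,
# not taken from the LAAS software.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Activity:
    name: str
    step: Callable[[], bool]   # one control cycle; returns False when done
    running: bool = False

class Supervisor:
    def __init__(self) -> None:
        self.activities: Dict[str, Activity] = {}

    def register(self, activity: Activity) -> None:
        self.activities[activity.name] = activity

    def start(self, name: str) -> None:
        self.activities[name].running = True

    def interrupt(self, name: str) -> None:
        self.activities[name].running = False

    def spin_once(self) -> None:
        # Run one cycle of every active activity; an activity stops
        # itself by returning False from its step function.
        for activity in self.activities.values():
            if activity.running:
                activity.running = activity.step()
```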
international conference on robotics and automation | 2007
Peter Ford Dominey; Anthony Mallet; Eiichi Yoshida
The current research analyses and demonstrates how spoken language can be used by human users to communicate with the HRP-2 humanoid and program the robot's behavior in a cooperative task. The task involves the human and the HRP-2 working together to assemble a piece of furniture. The objectives of the system are 1) to allow the human to impart knowledge of how to accomplish a cooperative task to the robot, i.e. to program the robot, in the form of a sensory-motor action plan, and 2) to do this in a semi-natural and real-time manner using spoken language. In this framework, a system for spoken language programming (SLP) is presented, along with experimental results from this prototype system. In Experiment 1, the human programs the robot to assist in assembling a small table. In Experiment 2, the generalization of the system is demonstrated as the user programs the robot to assist in taking the table apart. The SLP is evaluated in terms of changes in efficiency, as revealed by task completion time and the number of command operations required to accomplish the tasks with and without SLP. Lessons learned are discussed, along with plans for improving the system, including developing a richer base of robot action and perception predicates that will allow the use of richer language. We thus demonstrate - for the first time - the capability for a human user to tell a humanoid what to do in a cooperative task so that, in real time, the robot performs the task and acquires new skills that significantly facilitate the cooperative human-robot interaction.
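A minimal sketch of the SLP idea, mapping spoken commands to primitive actions and bracketing a recorded sequence into a replayable plan, might look as follows. The command vocabulary ("learn", "ok", "macro") is an assumption made for illustration, not the paper's grammar.

```python
class SLPInterpreter:
    """Toy spoken-language-programming loop over a set of primitives."""

    def __init__(self, primitives):
        self.primitives = primitives   # spoken word -> robot action callable
        self.recording = None          # command list while learning
        self.plan = []                 # last stored sensory-motor plan

    def hear(self, utterance: str):
        word = utterance.strip().lower()
        if word == "learn":                        # start recording a plan
            self.recording = []
        elif word == "ok" and self.recording is not None:
            self.plan = self.recording             # store the macro
            self.recording = None
        elif word == "macro":                      # replay the stored plan
            for name in self.plan:
                self.primitives[name]()
        elif word in self.primitives:
            self.primitives[word]()                # execute immediately
            if self.recording is not None:
                self.recording.append(word)        # and remember it
```

Replaying a stored plan instead of dictating every step is what produces the efficiency gains the experiments measure via task completion time and command counts.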
intelligent robots and systems | 2002
Anthony Mallet; Sara Fleury; Herman Bruyninckx
Robotics systems and software are becoming more and more complex: the need for standard specifications is certainly a key issue in the near future. There is a need to define, from the software point of view, a generic robot. The Orocos project was started to address this problem. It aims at developing a set of robotics software for particular domains and, as a first step, will define a software framework to do so. This paper presents the future evolutions of GenoM (component generator), which we propose as a programming framework in the context of the project. GenoM proposes the definition of generic components that are used to implement robotics functionalities (vision, control, motion planning, ...), and the paper presents the definition of those components. They have been designed so that they can be connected together and externally controlled to form a modular functional layer on robots. We have defined three main entities of which components are made: a set of algorithms, an execution engine, and a communication library. The paper explains their roles and presents a formal description language that lets us achieve a real decoupling between the code of the components (the algorithms) and the execution engine.
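The three-entity decomposition can be conveyed with a toy component in which the algorithms are plain functions, the execution engine dispatches requests, and communication happens through exported ports. This is a conceptual sketch, not GenoM's generated code.

```python
class Component:
    """Toy analogue of a generic component: algorithms, engine, ports."""

    def __init__(self, name, algorithms):
        self.name = name
        self.algorithms = algorithms   # request name -> algorithm callable
        self.ports = {}                # communication: data exported to others

    def request(self, name, **args):
        # Execution engine: run the requested algorithm, publish its result.
        result = self.algorithms[name](**args)
        self.ports[name] = result      # other components read this port
        return result
```

Connecting components then amounts to letting one read another's ports, which is what forms the modular functional layer described above; the algorithms stay decoupled from the dispatching machinery.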
ieee-ras international conference on humanoid robots | 2007
Peter Ford Dominey; Anthony Mallet; Eiichi Yoshida
An apprentice is an able-bodied individual who interactively assists an expert and, through this interaction, acquires knowledge and skill in the given task domain. In this context the robot should have a useful repertoire of sensory-motor acts that the human can command with spoken language. In order to address the additional requirements for learning new behaviors, the robot should additionally have a real-time behavioral sequence acquisition capability. The learned sequences should function as executable procedures that can operate in a flexible manner and are not rigidly sensitive to initial conditions. The current research develops these capabilities in a real-time control system for the HRP-2 humanoid. The task domain involves a human and the HRP-2 working together to assemble a piece of furniture. We previously defined a system for Spoken Language Programming (SLP) that allowed the user to guide the robot through an arbitrary, task-relevant motor sequence via spoken commands, and to store this sequence as a reusable macro. The current research significantly extends the SLP system: it integrates vision and motion planning into the SLP framework, providing a new level of flexibility in the behaviors that can be created. Most importantly, it allows the user to create “generic” functions with arguments (e.g. “Give me X”), and it allows multiple functions to be created. We thus demonstrate - for the first time - a humanoid robot equipped with vision-based grasping and the ability to acquire multiple sensory-motor behavioral procedures in real time through SLP in the context of a cooperative task. The humanoid robot thus acquires new sensory-motor skills that significantly facilitate the cooperative human-robot interaction.
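The generic-function extension can be sketched as a recorded sequence with an argument slot bound at invocation time ("give me the ball" binds X to "ball"). The binding scheme and the primitive names below are illustrative assumptions, not the system's actual predicates.

```python
class GenericMacro:
    """A recorded action sequence with an argument slot 'X'."""

    def __init__(self, steps):
        self.steps = steps   # list of (primitive name, argument or "X")

    def run(self, primitives, X):
        for action, arg in self.steps:
            primitives[action](X if arg == "X" else arg)

# A hypothetical "Give me X" built from grasping-related primitives:
give_me = GenericMacro([("locate", "X"), ("grasp", "X"), ("hand_over", "X")])
primitives = {name: (lambda obj, n=name: print(n, obj))
              for name in ("locate", "grasp", "hand_over")}
give_me.run(primitives, "ball")   # locate ball, grasp ball, hand_over ball
```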
ieee-ras international conference on humanoid robots | 2007
Eiichi Yoshida; Anthony Mallet; Florent Lamiraux; Oussama Kanoun; Olivier Stasse; Mathieu Poirier; Peter Ford Dominey; Jean-Paul Laumond; Kazuhito Yokoi
This paper reports current experiments conducted on HRP-2-based research on robot autonomy. The contribution of the paper is not focused on a specific area; its objective is to highlight the critical issues that had to be solved to allow the humanoid robot HRP-2 to understand and execute the order “give me the purple ball” in an autonomous way. Such an experiment requires: simple object recognition and localization, motion planning and control, natural spoken language supervision, a simple action supervisor, and a control architecture.
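The decomposition of such an order into the listed requirements can be sketched as a linear action supervisor. The function names below are placeholders for the perception, planning and control modules involved, not the experiment's actual architecture.

```python
def give_me(object_name, perceive, plan_motion, execute):
    """Toy supervisor chaining the stages named in the abstract."""
    pose = perceive(object_name)   # object recognition and localization
    path = plan_motion(pose)       # motion planning toward the object
    execute(path)                  # controlled execution: walk, grasp, hand over
```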
From Motor Learning to Interaction Learning in Robots | 2010
Stéphane Lallée; Eiichi Yoshida; Anthony Mallet; Francesco Nori; Lorenzo Natale; Giorgio Metta; Felix Warneken; Peter Ford Dominey
Robots are now physically capable of locomotion, object manipulation, and an essentially unlimited set of sensory-motor behaviors. This sets the scene for the corresponding technical challenge: how can non-specialist human users interact with these robots for human-robot cooperation? Crangle and Suppes stated in [1]: “the user should not have to become a programmer, or rely on a programmer, to alter the robot’s behavior, and the user should not have to learn specialized technical vocabulary to request action from a robot.” To achieve this goal, one option is to consider the robot as a human apprentice and to have it learn through its interaction with a human. This chapter reviews our approach to this problem.
international conference on formal engineering methods | 2016
Mohammed Foughali; Bernard Berthomieu; Silvano Dal Zilio; Félix Ingrand; Anthony Mallet
Software is an essential part of robotic systems. As robots and autonomous systems are increasingly deployed in human environments, we need elaborate validation and verification techniques in order to gain a higher level of trust in our systems. This motivates our determination to apply formal verification methods to robotics software. In this paper, we describe results obtained using model checking on the functional layer of an autonomous robot. We implement an automatic translation from GenoM, a robotics model-based software engineering framework, to the formal specification language Fiacre. This translation takes into account the semantics of the robotics middleware. TINA, our model-checking toolbox, can be used on the synthesized models to prove real-time properties of the functional modules' implementation on the robot. We illustrate our approach with a realistic autonomous navigation example.
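The flavor of a real-time property provable on such models can be conveyed with a toy explicit-state exploration of a single periodic task, standing in for the GenoM-to-Fiacre-to-TINA chain (whose actual input languages are not reproduced here).

```python
from collections import deque

def check_deadline(period=5, exec_time=2, deadline=4):
    """Exhaustively explore the (phase, remaining_work) states of a periodic
    task released at phase 0; the property fails if a job is still pending
    after `deadline` ticks. Assumes deadline < period."""
    assert deadline < period
    start = (0, exec_time)
    seen, frontier = {start}, deque([start])
    while frontier:
        phase, work = frontier.popleft()
        if work > 0 and phase >= deadline:
            return False                    # a deadline miss is reachable
        nphase = (phase + 1) % period
        nwork = max(work - 1, 0)
        if nphase == 0:
            nwork += exec_time              # next periodic release
        state = (nphase, nwork)
        if state not in seen:
            seen.add(state)
            frontier.append(state)
    return True                             # property holds in every state

print(check_deadline(exec_time=2))   # True: the task always meets its deadline
print(check_deadline(exec_time=5))   # False: a miss is reachable
```

Real Fiacre/TINA models additionally handle concurrency between modules and dense time, which this sketch deliberately omits.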
Archive | 2010
Eiichi Yoshida; Claudia Esteves; Oussama Kanoun; Mathieu Poirier; Anthony Mallet; Jean-Paul Laumond; Kazuhito Yokoi
In this chapter we address the problem of planning whole-body motions for humanoid robots. The approach presented benefits from two recent cutting-edge advancements in robotics: powerful probabilistic geometric and kinematic motion planning, and advanced dynamic motion control for humanoids. First, we introduce a two-stage approach that combines these two techniques for simultaneous collision-free locomotion and upper-body tasks. Then a whole-body motion generation method based on generalized inverse kinematics is presented for reaching, including stepping. The third example is the planning of whole-body manipulation of a large object by “pivoting”, making use of the preceding results. Finally, an integrated experiment is shown in which the humanoid robot interacts with its environment through perception. The humanoid robot HRP-2 is used as the platform to validate the results.
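The generalized inverse kinematics ingredient can be sketched with the classical prioritized velocity-level recursion, where each task is solved in the nullspace of the higher-priority ones. This is a textbook formulation, not the authors' whole-body solver.

```python
import numpy as np

def prioritized_ik_step(J_tasks, errors, damping=1e-6):
    """One velocity-level step of prioritized generalized inverse kinematics.

    J_tasks: task Jacobians (m_i x n), highest priority first.
    errors:  task-space error vectors (m_i,).
    """
    n = J_tasks[0].shape[1]
    dq = np.zeros(n)              # joint velocity being built up
    N = np.eye(n)                 # nullspace projector of tasks so far
    for J, e in zip(J_tasks, errors):
        JN = J @ N
        # Damped pseudoinverse, robust near kinematic singularities.
        JN_pinv = JN.T @ np.linalg.inv(JN @ JN.T + damping * np.eye(J.shape[0]))
        dq = dq + JN_pinv @ (e - J @ dq)     # correct this task's residual
        N = N @ (np.eye(n) - JN_pinv @ JN)   # restrict lower-priority tasks
    return dq
```

For a humanoid, the task stack would typically place balance and support constraints above the reaching task itself.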
international conference on robotics and automation | 1998
Anthony Mallet; Simon Lacroix
We present an approach to refine the pose estimate of an outdoor mobile robot moving on flat terrain cluttered with obstacles. We propose an algorithm that extracts relevant obstacle contour lines on the basis of stereovision data. The algorithm is very robust with respect to uncertainties in the data and does not require a fine, precise obstacle extraction procedure. We explain how the contour lines are compared from one image to another to refine the pose estimate provided by the robot's internal sensors.
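The refinement step from matched contour lines can be sketched as a small linear least-squares problem over lines in Hessian normal form. The matching itself, and the treatment of large rotations, are simplified away; everything below is an illustrative assumption, not the paper's algorithm.

```python
import numpy as np

def refine_pose_from_lines(lines_ref, lines_obs):
    """Refine a planar pose from matched lines (theta, rho), where a line
    satisfies x*cos(theta) + y*sin(theta) = rho.

    lines_ref: lines predicted from the internal-sensor pose estimate.
    lines_obs: the same lines as re-observed. Assumes given matches and a
    small rotation; returns (dtheta, t) with rho_obs ~ rho_ref - n.t.
    """
    ref, obs = np.asarray(lines_ref), np.asarray(lines_obs)
    # Heading correction: mean angular offset between matched line normals.
    d = obs[:, 0] - ref[:, 0]
    dtheta = np.arctan2(np.mean(np.sin(d)), np.mean(np.cos(d)))
    # Each match constrains the translation along its normal direction.
    A = np.column_stack([np.cos(ref[:, 0]), np.sin(ref[:, 0])])
    r = ref[:, 1] - obs[:, 1]
    t, *_ = np.linalg.lstsq(A, r, rcond=None)
    return dtheta, t
```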