Publication


Featured research published by Jean-Bernard Hayet.


international conference on robotics and automation | 2004

Face tracking and hand gesture recognition for human-robot interaction

Ludovic Brèthes; Paulo Menezes; Frédéric Lerasle; Jean-Bernard Hayet

The interaction between humans and machines has become an important topic for the robotics community, as it can generalise the use of robots. For an active H/R interaction scheme, the robot needs to detect human faces in its vicinity and then interpret canonical gestures of the tracked person, assuming this interlocutor has been identified beforehand. In this context, we describe functions suitable for detecting and recognising faces in a video stream, and then focus on face and hand tracking functions. An efficient colour segmentation based on a watershed over the skin-like coloured pixels is proposed. A new measurement model takes into account both shape and colour cues in the particle filter used to track face or hand silhouettes in the video stream. An extension of the basic Condensation algorithm achieves recognition of the current hand posture and automatic switching between multiple templates in the tracking loop. Tracking and recognition results illustrated in the paper show the robustness of the process in cluttered environments and under various lighting conditions. The limits of the method and future work are also discussed.
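The select/predict/measure cycle of a Condensation-style tracker with a fused shape-and-colour likelihood can be sketched as follows. This is a toy 2D example, not the paper's implementation: `shape_dist` and `color_dist` are hypothetical per-particle mismatch scores standing in for the silhouette and colour measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, shape_dist, color_dist, sigma=1.0):
    """One select/predict/measure cycle of a Condensation-style tracker.

    shape_dist and color_dist map a particle state to a mismatch score
    against the current frame; the two cues are fused multiplicatively
    in the measurement likelihood.
    """
    n = len(particles)
    # select: resample proportionally to the previous weights
    idx = rng.choice(n, size=n, p=weights / weights.sum())
    particles = particles[idx]
    # predict: random-walk dynamics
    particles = particles + rng.normal(0.0, sigma, size=particles.shape)
    # measure: fuse shape and colour cues into one likelihood
    w = np.exp(-np.array([shape_dist(p) + color_dist(p) for p in particles]))
    w = w / w.sum()
    return particles, w
```

Iterating this step concentrates the particle cloud on states where both cues agree, which is what lets the tracker survive when one cue alone is ambiguous.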


international conference on robotics and automation | 2002

A visual landmark framework for indoor mobile robot navigation

Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

Presents vision functions needed on a mobile robot to deal with landmark-based navigation in buildings. Landmarks are planar, quadrangular surfaces, which must be distinguished from the background, typically a poster on a wall or a door-plate. In a first step, these landmarks are detected and their positions with respect to a global reference frame are learned; this learning step is supervised so that only the best landmarks are memorized, with an invariant representation based on a set of interest points. Then, when the robot looks for visible landmarks, the recognition procedure takes advantage of the partial Hausdorff distance to compare the landmark model and the detected quadrangles. The paper presents the landmark detection and recognition procedures, and discusses their performances.
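The partial Hausdorff comparison used for landmark recognition can be sketched as follows (hypothetical point sets; a minimal illustration, not the paper's implementation):

```python
import numpy as np

def partial_hausdorff(A, B, frac=0.8):
    """Directed partial Hausdorff distance from point set A to point set B.

    Instead of taking the maximum of the nearest-neighbour distances
    (classical Hausdorff), keep the frac-ranked value, which makes the
    comparison robust to outliers and partial occlusions.
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    # nearest-neighbour distance from every point of A to the set B
    d = np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)).min(axis=1)
    k = max(int(frac * len(d)) - 1, 0)
    return float(np.sort(d)[k])
```

With `frac` below 1.0, a few badly matched points (e.g. clutter over part of a poster) no longer dominate the score, which is why the rank-based version suits recognition of partially occluded landmarks.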


computer vision and pattern recognition | 2003

Visual landmarks detection and recognition for mobile robot navigation

Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

This article describes visual functions dedicated to the extraction and recognition of planar quadrangles detected from a single camera. Extraction is based on a relaxation scheme with constraints between image segments, while the characterization we propose allows recognition to be achieved from different viewpoints and viewing conditions. We defined and evaluated several metrics on this representation space: a correlation-based metric and one based on sets of interest points.
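A correlation-based metric of the kind mentioned can be sketched as zero-mean normalised cross-correlation between descriptor vectors (hypothetical descriptors; the paper's exact metric may differ):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalised cross-correlation between two descriptor
    vectors: 1.0 for identical patterns, -1.0 for inverted ones."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the mean is subtracted and the norm divided out, the score is invariant to affine changes of intensity, a useful property when the same landmark is seen under different lighting.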


intelligent robots and systems | 2000

Visual localization of a mobile robot in indoor environments using planar landmarks

V. Ayala; Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

Describes the localization function integrated in a landmark-based navigation system. It relies on planar landmarks (typically, posters) to localize the robot and is based on two periodic processes running at different frequencies. One of them performs the poster tracking (based on the partial Hausdorff distance) and the active control of the camera. The other runs at a lower frequency and localizes the robot from the tracked landmarks, whose positions have been learned during an offline exploration step. The system has been embedded on our indoor Hilare mobile robot and works in real time. Experiments, illustrated in the paper, demonstrate the validity of the approach.


Robotics and Autonomous Systems | 2002

Topological navigation and qualitative localization for indoor environment using multi-sensory perception

Parthasarathy Ranganathan; Jean-Bernard Hayet; Michel Devy; Seth Hutchinson; Frédéric Lerasle

This article describes a navigation system for a mobile robot that must execute motions in a building; the robot is equipped with a belt of ultrasonic sensors and with a camera. The environment is represented by a topological model based on a Generalized Voronoi Graph (GVG) and by a set of visual landmarks. Typically, the topological graph describes the free space in which the robot must navigate: a node is associated with an intersection between corridors, or with a crossing towards another topological area (an open space: rooms, hallways, ...); an edge corresponds to a corridor or to a path in an open space. Landmarks correspond to static, rectangular, planar objects (e.g. doors, windows, posters, ...) located on the walls. The landmarks are located only with respect to the topological graph: some of them are associated with nodes, others with edges. The paper focuses on the preliminary exploration task, i.e. the incremental construction of the topological model. The navigation task is based on this model: the robot's self-localization is expressed only with respect to the graph.
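The node/edge structure described above can be sketched as follows (a hypothetical corridor layout; the real GVG is built incrementally from sensor data):

```python
from collections import deque

# Hypothetical topological model: nodes are corridor intersections or
# crossings into open spaces, edges are corridors, and landmarks
# (posters, doors) are attached to nodes or edges, not to metric
# coordinates.
graph = {
    "n1": ["n2"],
    "n2": ["n1", "n3", "n4"],
    "n3": ["n2"],
    "n4": ["n2"],
}
landmarks = {("n2", "n3"): ["poster_A"], "n4": ["door_B"]}

def route(graph, start, goal):
    """Breadth-first route through the topological graph."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Navigation then reduces to following the node sequence returned by `route`, checking off the landmarks attached to each traversed node or edge instead of maintaining a metric pose estimate.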


The International Journal of Robotics Research | 2015

Vision-guided motion primitives for humanoid reactive walking: Decoupled versus coupled approaches

Mauricio Garcia; Olivier Stasse; Jean-Bernard Hayet; Claire Dune; Claudia Esteves; Jean-Paul Laumond

This paper proposes a novel visual servoing approach to control the dynamic walk of a humanoid robot. Online visual information, given by an on-board camera, is used to drive the robot towards a specific goal. Our work builds upon a recent reactive pattern generator that makes use of model predictive control (MPC) to modify footsteps, center-of-mass and center-of-pressure trajectories so as to track a reference velocity. The contribution of the paper is to formulate the MPC problem so that it considers visual feedback. We compare our approach with a scheme that decouples visual servoing from walking gait generation. Such a decoupled scheme consists of first computing a reference velocity from visual servoing; the reference velocity is then the input of the pattern generator. Our MPC-based approach avoids a number of limitations that appear in decoupled methods. In particular, visual constraints can be introduced directly inside the locomotion controller, and camera motions do not have to be accounted for separately. Both approaches are compared numerically and validated in simulation; our MPC method shows faster convergence.
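The decoupled baseline scheme can be sketched in two stages (a toy proportional servoing law and a trivial CoM integrator standing in for the full MPC pattern generator; neither is the paper's implementation):

```python
import numpy as np

def visual_servo_velocity(feature, feature_goal, gain=0.5):
    """Stage 1 of the decoupled scheme: a proportional visual servoing
    law turns the image-feature error into a reference velocity."""
    return -gain * (np.asarray(feature, float) - np.asarray(feature_goal, float))

def pattern_generator_step(com, v_ref, dt=0.1):
    """Stage 2: the pattern generator only ever sees v_ref; here a plain
    CoM integrator stands in for the MPC walking generator."""
    return com + dt * v_ref
```

The limitation the paper targets is visible in this structure: stage 2 knows nothing about visibility constraints or camera sway, so they cannot be enforced inside the walking optimization, whereas the coupled MPC formulation puts both in one problem.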


International Journal of Humanoid Robotics | 2014

Toward Reactive Vision-Guided Walking on Rough Terrain: An Inverse-Dynamics Based Approach

Oscar E. Ramos; Mauricio Garcia; Nicolas Mansard; Olivier Stasse; Jean-Bernard Hayet; Philippe Souères

This work presents a method to handle walking on rough terrain using inverse dynamics control and information from a stereo vision system. The ideal trajectories for the center of mass (CoM) and the next position of the feet are given by a pattern generator, which is able to automatically find the footsteps for a given direction. An inverse dynamics control scheme relying on a quadratic programming optimization solver then takes each foot from its initial to its final position, while also controlling the CoM and the waist. A 3D model of the ground is reconstructed from the robot's head-mounted cameras, used as a stereo vision pair. The model gives the system the structure of the ground on which the swinging foot is going to step, so that contact points can be handled to adapt the foot position to the ground conditions.
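The task resolution at the heart of such a controller can be sketched as follows (an unconstrained least-squares stand-in for the QP solver; real inverse-dynamics controllers add contact, torque and joint-limit constraints):

```python
import numpy as np

def task_acceleration(J, dJ_dq, xdd_des):
    """Least-squares resolution of one motion task J*qdd + dJ*qd = xdd_des,
    standing in for the controller's QP solver. J is the task Jacobian,
    dJ_dq the drift term dJ/dt * qd, and xdd_des the desired task-space
    acceleration (e.g. for the swing foot, the CoM, or the waist)."""
    qdd, *_ = np.linalg.lstsq(J, xdd_des - dJ_dq, rcond=None)
    return qdd
```

Stacking one such task per controlled body (each foot, the CoM, the waist) and solving them jointly under constraints is what the quadratic-programming formulation does.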


International Journal of Humanoid Robotics | 2012

Humanoid locomotion planning for visually guided tasks

Jean-Bernard Hayet; Claudia Esteves; Gustavo Arechavaleta; Olivier Stasse; Eiichi Yoshida

In this work, we propose a landmark-based navigation approach that integrates (1) high-level motion planning capabilities that take into account the landmarks' positions and visibility and (2) a stack of feasible visual servoing tasks based on the footprints to follow. The path planner computes a collision-free path that considers sensory, geometric, and kinematic constraints specific to humanoid robots. Based on recent results in movement neuroscience suggesting that most humans exhibit nonholonomic constraints when walking in open spaces, the humanoid steering behavior is modeled as a differential-drive wheeled robot (DDR). The obtained paths are made of geometric primitives that are the shortest in distance in free spaces. The footprints around the path and the positions of the landmarks to which the gaze must be directed are used within a stack-of-tasks (SoT) framework to compute the whole-body motion of the humanoid. We provide experiments that verify the effectiveness of the proposed strategy on the HRP-2 platform.
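The DDR model used to shape the steering behavior can be sketched as a plain Euler integration of the unicycle kinematics (an illustration, not the paper's planner):

```python
import math

def ddr_step(x, y, theta, v, omega, dt=0.1):
    """One Euler step of the differential-drive (unicycle) model: the
    robot can only translate along its current heading theta, which is
    the nonholonomic constraint the humanoid steering inherits."""
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)
```

Planning over (v, omega) sequences for this model yields the geometric primitives (straight segments and arcs) that the footprints are then placed around.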


intelligent robots and systems | 2002

Qualitative modeling of indoor environments from visual landmarks and range data

Jean-Bernard Hayet; Claudia Esteves; Michel Devy; Frédéric Lerasle

This article describes the integration, in a complete navigation system, of an environment modeling method based on a Generalized Voronoi Graph (GVG) relying on laser data, on the one hand, and of a localization method based on a monocular vision landmark learning and recognition framework, on the other. Such a system is intended to work in structured environments. It is shown that the two corresponding modules, laser GVG construction and visual landmark learning and recognition, can cooperate to complement each other: image processing can be enhanced by structural knowledge about the scene, whereas the GVG is annotated, even as far as its edges are concerned, with qualitative visual information.


international conference on robotics and automation | 2003

Environment modeling for topological navigation using visual landmarks and range data

Frédéric Lerasle; J. Carbajo; Michel Devy; Jean-Bernard Hayet

This article describes the integration, in a whole navigation system, of visual functions dedicated to the extraction and recognition of visual landmarks, i.e. planar quadrangles detected from a single camera. The extraction of these landmarks is based on a relaxation scheme used to satisfy constraints between image segments. During the exploration of an indoor environment, depending on the current topology, there are several ways to take advantage of these visual landmarks. Two representations are considered, relying both on laser data and on our visual landmark system: a GVG-based model (Generalized Voronoi Graph) for corridors, and a composite stochastic map for open spaces. In the particular case of corridors, the corresponding modules, laser GVG construction and visual landmark extraction and recognition, can cooperate to complement each other: image processing can be enhanced by structural knowledge about the scene, whereas the GVG is annotated, even as far as its edges are concerned, with qualitative visual information.

Collaboration

Dive into Jean-Bernard Hayet's collaborations.

Top Co-Authors

Michel Devy
Centre national de la recherche scientifique

Mauricio Garcia
Centre national de la recherche scientifique

Claudia Esteves
Monterrey Institute of Technology and Higher Education

Francisco Madrigal
Centro de Investigación en Matemáticas

Rogelio Hasimoto-Beltran
Centro de Investigación en Matemáticas

Salvador Ruiz-Correa
Instituto Potosino de Investigación Científica y Tecnológica

Christian Lemaire
Centre national de la recherche scientifique