

Publication


Featured research published by Frédéric Lerasle.


International Conference on Robotics and Automation | 2004

Face tracking and hand gesture recognition for human-robot interaction

Ludovic Brèthes; Paulo Menezes; Frédéric Lerasle; Jean-Bernard Hayet

The interaction between humans and machines has become an important topic for the robotics community, as it can generalise the use of robots. In an active human-robot interaction scheme, the robot needs to detect human faces in its vicinity and then interpret canonical gestures of the tracked person, assuming this interlocutor has been identified beforehand. In this context, we describe functions suitable for detecting and recognising faces in a video stream, and then focus on face and hand tracking functions. An efficient colour segmentation based on a watershed over the skin-like coloured pixels is proposed. A new measurement model is proposed to take both shape and colour cues into account in the particle filter used to track face or hand silhouettes in the video stream. An extension of the basic Condensation algorithm is proposed to achieve recognition of the current hand posture and automatic switching between multiple templates in the tracking loop. Tracking and recognition results illustrated in the paper show the robustness of the process in cluttered environments and under various lighting conditions. The limits of the method and future work are also discussed.
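The predict-weight-resample loop at the heart of Condensation-style tracking can be sketched in a few lines. The snippet below is a hypothetical, one-dimensional illustration, not the authors' implementation: the toy `measure` likelihood stands in for the paper's combined shape-and-colour measurement model, and all names are invented.

```python
import math
import random

def condensation_step(particles, weights, measure, motion_std=2.0):
    """One predict-weight-resample cycle of a Condensation-style filter.
    `measure(x)` scores hypothesis x; here it stands in for the paper's
    combined shape-and-colour cue (a toy assumption, not the real model)."""
    n = len(particles)
    # 1. Resample particles in proportion to the previous weights
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w
        cumulative.append(acc)
    resampled = []
    for _ in range(n):
        r = random.uniform(0.0, acc)
        i = 0
        while cumulative[i] < r:
            i += 1
        resampled.append(particles[i])
    # 2. Predict: diffuse each particle with Gaussian motion noise
    predicted = [x + random.gauss(0.0, motion_std) for x in resampled]
    # 3. Weight each prediction by the measurement model and normalize
    new_weights = [measure(x) for x in predicted]
    total = sum(new_weights) or 1.0
    return predicted, [w / total for w in new_weights]

def estimate(particles, weights):
    """Weighted-mean state estimate."""
    return sum(x * w for x, w in zip(particles, weights))
```

With a few hundred particles and a peaked likelihood, the weighted mean settles near the mode of the measurement model within a handful of iterations.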


Computer Vision and Image Understanding | 2010

Vision and RFID data fusion for tracking people in crowds by a mobile robot

Thierry Germa; Frédéric Lerasle; Noureddine Ouadah; Viviane Cadenat

In this paper, we address the problem of realizing a human-following task in a crowded environment. We consider an active perception system, consisting of a camera mounted on a pan-tilt unit and a 360° RFID detection system, both embedded on a mobile robot. To perform such a task, it is necessary to track humans efficiently in crowds. In a first step, we have dealt with this problem using the particle filtering framework, because it enables the fusion of heterogeneous data, which improves tracking robustness. In a second step, we have considered the problem of controlling the robot's motion to make it follow the person of interest. To this aim, we have designed a multi-sensor-based control strategy based on the tracker outputs and on the RFID data. Finally, we have implemented the tracker and the control strategy on our robot. The experimental results highlight the relevance of the developed perceptual functions. Possible extensions of this work are discussed at the end of the article.


Autonomous Robots | 2012

Two-handed gesture recognition and fusion with speech to command a robot

Brice Burger; Isabelle Ferrané; Frédéric Lerasle; Guillaume Infantes

Assistance is currently a pivotal research area in robotics, with huge societal potential. Since assistant robots directly interact with people, finding natural and easy-to-use user interfaces is of fundamental importance. This paper describes a flexible multimodal interface based on speech and gesture modalities for controlling our mobile robot, named Jido. The vision system uses a stereo head mounted on a pan-tilt unit and a bank of collaborative particle filters devoted to the upper human body extremities to track and recognize pointing and symbolic gestures, both mono- and bi-manual. This framework constitutes our first contribution: it is shown to properly handle natural artifacts (self-occlusion, hands leaving the camera's field of view, hand deformation) when 3D gestures are performed with either hand or both. A speech recognition and understanding system based on the Julius engine is also developed and embedded in order to process deictic and anaphoric utterances. The second contribution is a probabilistic, multi-hypothesis interpreter framework that fuses the results of the speech and gesture components. This interpreter is shown to improve the classification rates of multimodal commands compared to using either modality alone. Finally, we report on successful live experiments in human-centered settings, in the context of an interactive manipulation task where users specify local motion commands to Jido and perform safe object exchanges.
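The fusion step of such an interpreter can be sketched in a much-simplified form. The snippet below combines per-command scores from the two modalities under a naive conditional-independence assumption; the paper's actual multi-hypothesis probabilistic interpreter is richer, and the command names and scores here are purely hypothetical.

```python
def fuse_modalities(speech_scores, gesture_scores, floor=1e-6):
    """Fuse per-command posteriors from the speech and gesture modalities,
    assuming conditional independence (a naive Bayes-style combination).
    Commands unseen by one modality receive a small floor probability."""
    commands = set(speech_scores) | set(gesture_scores)
    fused = {c: speech_scores.get(c, floor) * gesture_scores.get(c, floor)
             for c in commands}
    total = sum(fused.values())
    return {c: p / total for c, p in fused.items()}
```

When speech is ambiguous between two commands, a confident gesture cue tips the fused posterior toward the intended one, which is the intuition behind fusing the modalities instead of using either alone.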


Image and Vision Computing | 2007

A visual landmark framework for mobile robot navigation

Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

This article describes visual functions dedicated to the extraction and recognition of visual landmarks, here planar quadrangles detected by a single camera. Landmarks are extracted among edge segments through a relaxation scheme, used to apply geometrical, topological and appearance constraints on sets of segments. Once extracted, such a landmark is characterized by invariant attributes so that recognition is possible from a large range of viewpoints. Each landmark is represented by an icon, built using the homography between the current viewpoint and a reference shape (a square). When detected again, a landmark is recognized using a distance between icons. We compare several such distances and present an evaluation on real and synthetic images that shows the validity of our approach. Results from experiments with a mobile robot navigating in an indoor environment are finally presented.
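The icon construction relies on the homography between the observed quadrangle and a reference square. A minimal sketch of that mapping, using the classic closed-form unit-square-to-quadrangle homography (Heckbert's formulation) rather than the authors' code, is:

```python
def square_to_quad(corners):
    """Closed-form homography mapping (u, v) in the unit square to (x, y)
    in the quadrangle `corners` = [c0, c1, c2, c3], ordered so that
    (0,0)->c0, (1,0)->c1, (1,1)->c2, (0,1)->c3. Sampling the image of the
    quadrangle through this map rectifies it into the reference square
    from which a landmark icon can be built."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    sx, sy = x0 - x1 + x2 - x3, y0 - y1 + y2 - y3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    if sx == 0 and sy == 0:
        g = h = 0.0           # affine case: no perspective terms
    else:
        den = dx1 * dy2 - dy1 * dx2
        g = (sx * dy2 - sy * dx2) / den
        h = (dx1 * sy - dy1 * sx) / den
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def warp(u, v):
        w = g * u + h * v + 1.0
        return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)

    return warp
```

By construction the four unit-square corners land exactly on the four quadrangle corners, and interior points follow the induced perspective distortion.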


International Conference on Robotics and Automation | 2002

A visual landmark framework for indoor mobile robot navigation

Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

This paper presents the vision functions a mobile robot needs for landmark-based navigation in buildings. Landmarks are planar, quadrangular surfaces that must be distinguished from the background, typically a poster on a wall or a door-plate. In a first step, these landmarks are detected and their positions with respect to a global reference frame are learned; this learning step is supervised, so that only the best landmarks are memorized, with an invariant representation based on a set of interest points. Then, when the robot looks for visible landmarks, the recognition procedure takes advantage of the partial Hausdorff distance to compare the landmark model with the detected quadrangles. The paper presents the landmark detection and recognition procedures and discusses their performance.
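The partial Hausdorff distance used in the recognition step has a compact definition: instead of the maximum of the nearest-neighbour distances, one takes their k-th ranked value, which tolerates outliers. A small illustrative sketch (not the authors' implementation):

```python
def partial_hausdorff(A, B, k):
    """Directed partial Hausdorff distance h_k(A -> B): the k-th smallest
    of the distances from each point of A to its nearest point in B.
    Choosing a rank k < len(A), rather than the maximum used by the
    classic Hausdorff distance, makes the comparison robust to outlier
    points and partial occlusion."""
    dist = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    nearest = sorted(min(dist(a, b) for b in B) for a in A)
    return nearest[k - 1]
```

With k set to, say, 80% of the point count, a detected quadrangle still matches its model even when a fraction of its points are corrupted.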


Computer Vision and Pattern Recognition | 2003

Visual landmarks detection and recognition for mobile robot navigation

Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

This article describes visual functions dedicated to the extraction and recognition of planar quadrangles detected from a single camera. Extraction is based on a relaxation scheme with constraints between image segments, while the characterization we propose allows recognition to be achieved from different viewpoints and viewing conditions. We define and evaluate several metrics on this representation space: one based on correlation and another based on sets of interest points.


Intelligent Robots and Systems | 2000

Visual localization of a mobile robot in indoor environments using planar landmarks

V. Ayala; Jean-Bernard Hayet; Frédéric Lerasle; Michel Devy

This paper describes the localization function integrated in a landmark-based navigation system. It relies on planar landmarks (typically, posters) to localize the robot, and is based on two periodic processes running at different frequencies. One performs the poster tracking (based on the partial Hausdorff distance) and the active control of the camera. The other runs at a lower frequency and localizes the robot from the tracked landmarks, whose positions have been learnt during an offline exploration step. The system has been embedded on our indoor Hilare mobile robot and works in real time. Experiments illustrated in the paper demonstrate the validity of the approach.


Robot and Human Interactive Communication | 2006

Rackham: An Interactive Robot-Guide

Aurélie Clodic; Sara Fleury; Rachid Alami; Raja Chatila; Gérard Bailly; Ludovic Brèthes; Maxime Cottret; Patrick Danès; Xavier Dollat; Frédéric Elisei; Isabelle Ferrané; Matthieu Herrb; Guillaume Infantes; Christian Lemaire; Frédéric Lerasle; Jérôme Manhes; Patrick Marcoul; Paulo Menezes; Vincent Montreuil

Rackham is an interactive robot-guide that has been used in several places and exhibitions. This paper presents its design and reports on results obtained after its deployment in a permanent exhibition. The project is conducted so as to incrementally enhance the robot's functional and decisional capabilities, based on observation of the interaction between the public and the robot. Besides robustness and efficiency of the robot's navigation abilities in a dynamic environment, our focus was to develop and test a methodology for integrating human-robot interaction abilities in a systematic way. We first present the robot and some of its key design issues. Then, we discuss a number of lessons we have drawn from its use in interaction with the public, and how they will serve to refine our design choices and to enhance the robot's efficiency and acceptability.


European Conference on Computer Vision | 2016

Improving Multi-frame Data Association with Sparse Representations for Robust Near-online Multi-object Tracking

Loïc Fagot-Bouquet; Romaric Audigier; Yoann Dhome; Frédéric Lerasle

Multiple Object Tracking remains a difficult problem due to appearance variations and occlusions of the targets, as well as detection failures. Using sophisticated appearance models and performing data association over multiple frames are two common approaches that lead to performance gains. Inspired by the success of sparse representations in Single Object Tracking, we propose to formulate the multi-frame data association step as an energy minimization problem, designing an energy that efficiently exploits sparse representations of all detections. Furthermore, we propose a structured sparsity-inducing norm to compute representations better suited to the tracking context. We perform extensive experiments to demonstrate the effectiveness of the proposed formulation, and evaluate our approach on two authoritative public benchmarks in order to compare it with several state-of-the-art methods.
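A sparse representation of a detection over a dictionary of candidate appearances can be computed with an l1-regularized least-squares solver. The sketch below uses plain iterative soft-thresholding (ISTA) on a toy dense dictionary; the paper's energy additionally couples detections across frames through a structured sparsity-inducing norm, which this simplified example omits.

```python
def soft_threshold(v, t):
    """Proximal operator of t*|.|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def sparse_code(D, y, lam=0.1, step=0.1, iters=500):
    """Minimise 0.5*||y - D x||^2 + lam*||x||_1 by ISTA: a sparse
    representation of detection y over dictionary D, whose columns play
    the role of candidate appearance templates (a toy setup)."""
    m, n = len(D), len(D[0])
    x = [0.0] * n
    for _ in range(iters):
        # residual r = D x - y
        r = [sum(D[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # gradient of the quadratic term: D^T r
        grad = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by soft-thresholding (the l1 proximal map)
        x = [soft_threshold(x[j] - step * grad[j], step * lam) for j in range(n)]
    return x
```

The l1 penalty drives most coefficients exactly to zero, so each detection is explained by only a few dictionary atoms, which is what makes such representations discriminative for data association.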


Robotics and Autonomous Systems | 2002

Topological navigation and qualitative localization for indoor environment using multi-sensory perception

Parthasarathy Ranganathan; Jean-Bernard Hayet; Michel Devy; Seth Hutchinson; Frédéric Lerasle

This article describes a navigation system for a mobile robot that must execute motions in a building; the robot is equipped with a belt of ultrasonic sensors and with a camera. The environment is represented by a topological model based on a Generalized Voronoi Graph (GVG) and by a set of visual landmarks. The topological graph describes the free space in which the robot must navigate: a node is associated with an intersection between corridors, or with a crossing towards another topological area (an open space: rooms, hallways, ...); an edge corresponds to a corridor or to a path in an open space. Landmarks correspond to static, rectangular and planar objects (e.g. doors, windows, posters, ...) located on the walls. The landmarks are located only with respect to the topological graph: some are associated with nodes, others with edges. The paper focuses on the preliminary exploration task, i.e. the incremental construction of the topological model. The navigation task is based on this model: the robot's self-localization is expressed only with respect to the graph.
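Once such a topological model exists, route planning reduces to graph search. A minimal sketch over a hypothetical corridor graph (node and edge names are invented for illustration, and the paper's exploration and localization machinery is not modelled):

```python
from collections import deque

def plan_route(topo_graph, start, goal):
    """Breadth-first search over a topological model: nodes stand for
    corridor intersections or crossings into open spaces, edges for
    corridors or open-space paths. Returns the node sequence of a route
    with the fewest edges, or None if the goal is unreachable."""
    frontier, visited = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in topo_graph.get(path[-1], ()):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Because localization is expressed with respect to the graph, executing the route amounts to traversing the returned node sequence edge by edge.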

Collaboration


Dive into Frédéric Lerasle's collaborations.

Top Co-Authors

Michel Devy, Centre national de la recherche scientifique
Alhayat Ali Mekonnen, Centre national de la recherche scientifique
Jean-Bernard Hayet, Centre national de la recherche scientifique
Ariane Herbulot, Centre national de la recherche scientifique