Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eric Royer is active.

Publication


Featured research published by Eric Royer.


International Journal of Computer Vision | 2007

Monocular Vision for Mobile Robot Localization and Autonomous Navigation

Eric Royer; Maxime Lhuillier; Michel Dhome; Jean-Marc Lavest

This paper presents a new real-time localization system for a mobile robot. We show that autonomous navigation is possible outdoors with a single camera and natural landmarks. To do so, we use a three-step approach. In a learning step, the robot is manually guided along a path and a video sequence is recorded with a front-looking camera. A structure-from-motion algorithm is then used to build a 3D map from this learning sequence. Finally, in the navigation step, the robot uses this map to compute its localization in real time and follows the learning path, or a slightly different path if desired. The vision algorithms used for map building and localization are first detailed. A large part of the paper is then dedicated to the experimental evaluation of the accuracy and robustness of our algorithms, based on data collected over two years in various environments.
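
The navigation step amounts to localizing the robot against a map of landmarks learned offline. The paper computes camera pose from image interest points; the toy sketch below is only an illustration of the same map-then-localize idea, using range measurements to known 2D landmarks and a linear least-squares solve (all names and the measurement model are hypothetical, not the paper's method).

```python
import numpy as np

def localize(landmarks, distances):
    """Estimate a 2D position from distances to known landmarks.

    landmarks: (N, 2) array of mapped landmark positions.
    distances: (N,) array of observed ranges to those landmarks.
    """
    # Subtracting the first landmark's range equation linearizes the
    # system: 2 (l_i - l_0) . p = |l_i|^2 - |l_0|^2 + d_0^2 - d_i^2
    l0, d0 = landmarks[0], distances[0]
    A = 2.0 * (landmarks[1:] - l0)
    b = (d0**2 - distances[1:]**2
         + np.sum(landmarks[1:]**2, axis=1) - np.sum(l0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Map built during the "learning" phase, then a noise-free observation.
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 1.0])
distances = np.linalg.norm(landmarks - true_pos, axis=1)
print(np.round(localize(landmarks, distances), 3))  # -> [1. 1.]
```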


Intelligent Robots and Systems | 2005

Outdoor autonomous navigation using monocular vision

Eric Royer; Jonathan Bom; Michel Dhome; Benoit Thuilot; Maxime Lhuillier; François Marmoiton

In this paper, a complete system for outdoor robot navigation is presented. It uses only monocular vision. The robot is first guided along a path by a human. During this learning step, the robot records a video sequence, from which a three-dimensional map of the trajectory and the environment is built. Once this map has been computed, the robot is able to follow the same trajectory by itself. Experimental results obtained with an urban electric vehicle are shown and compared to the ground truth.


Computer Vision and Pattern Recognition | 2005

Localization in urban environments: monocular vision compared to a differential GPS sensor

Eric Royer; Maxime Lhuillier; Michel Dhome; Thierry Chateau

In this paper we present a method for computing the localization of a mobile robot with reference to a learning video sequence. The robot is first guided along a path by a human while the camera records a monocular learning sequence. A 3D reconstruction of the path and the environment is then computed offline from the learning sequence. The 3D reconstruction is then used to compute the pose of the robot in real time (30 Hz) during autonomous navigation. Results from our localization method are compared to the ground truth measured with a differential GPS.


Computer Vision and Pattern Recognition | 2009

Towards geographical referencing of monocular SLAM reconstruction using 3D city models: Application to real-time accurate vision-based localization

Pierre Lothe; Steve Bourgeois; Fabien Dekeyser; Eric Royer; Michel Dhome

In the past few years, much work has been devoted to Simultaneous Localization and Mapping (SLAM). It is now possible to follow the trajectory of a moving camera in real time in an unknown environment. However, current SLAM methods are still prone to drift errors, which prevent their use in large-scale applications. In this paper, we propose a solution to reduce those errors a posteriori. Our solution is based on a post-processing algorithm that exploits additional geometric constraints, relative to the environment, to correct both the reconstructed geometry and the camera trajectory. These geometric constraints are obtained from a coarse 3D model of the environment, similar to those provided by GIS databases. First, we propose an original articulated transformation model to roughly align the SLAM reconstruction with this 3D model through a non-rigid ICP step. Then, to refine the reconstruction, we introduce a new bundle adjustment cost function that includes, in a single term, the usual 3D point/2D observation consistency constraint as well as the geometric constraints provided by the 3D model. Results on large-scale synthetic and real sequences show that our method successfully improves SLAM reconstructions. Moreover, experiments show that the resulting reconstruction is accurate enough to be used directly for global relocalization applications.
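
The general shape of such a combined cost can be sketched as follows. This is an illustrative toy, not the paper's exact formulation: it mixes a standard reprojection term with a point-to-model term where the "city model" is reduced to a single facade plane, and the weight `w` and all names are hypothetical.

```python
import numpy as np

def reprojection_residuals(points3d, obs2d):
    # Pinhole camera at the origin with unit focal length.
    proj = points3d[:, :2] / points3d[:, 2:3]
    return (proj - obs2d).ravel()

def model_residuals(points3d, plane_n, plane_d):
    # Signed distance of each reconstructed point to the model plane.
    return points3d @ plane_n - plane_d

def combined_cost(points3d, obs2d, plane_n, plane_d, w=0.1):
    # Reprojection consistency plus geometric consistency with the
    # coarse 3D model, combined in a single scalar cost.
    r_obs = reprojection_residuals(points3d, obs2d)
    r_mod = model_residuals(points3d, plane_n, plane_d)
    return r_obs @ r_obs + w * (r_mod @ r_mod)

# Points lying exactly on the facade plane z = 2, observed without noise,
# so both residual terms vanish.
pts = np.array([[0.5, 0.2, 2.0], [-0.3, 0.1, 2.0], [0.0, -0.4, 2.0]])
obs = pts[:, :2] / pts[:, 2:3]
plane_n, plane_d = np.array([0.0, 0.0, 1.0]), 2.0
print(combined_cost(pts, obs, plane_n, plane_d))  # -> 0.0
```

In a real bundle adjustment this scalar would be minimized over camera poses and point positions jointly, e.g. with a Levenberg-Marquardt solver.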


British Machine Vision Conference | 2004

Towards an alternative GPS sensor in dense urban environment from visual memory

Eric Royer; Maxime Lhuillier; Michel Dhome; Thierry Chateau

In this paper we present a method for computing the localization of a mobile robot with reference to a learning video sequence. The robot is first guided along a path by a human while the camera records a monocular learning sequence. The computer then builds a map of the environment. This is done by first extracting key frames from the learning sequence; the epipolar geometry and camera motion are then computed between key frames. Additionally, a hierarchical bundle adjustment is used to refine the reconstruction. The map stored for localization includes the position of the camera associated with each key frame, as well as a set of interest points detected in the images and reconstructed in 3D. Using this map it is possible to compute the localization of the robot in real time during the automatic driving phase.


British Machine Vision Conference | 2010

Calibration of Non-Overlapping Cameras - Application to Vision-Based Robotics

Pierre Lébraly; Eric Royer; Omar Ait-Aider; Michel Dhome

Multi-camera systems are increasingly used in vision-based robotics, and an accurate extrinsic calibration is usually required. In most cases, this task is done by matching features across different views of the same scene. However, if the cameras' fields of view do not overlap, such a matching procedure is no longer feasible. This article presents a simple and flexible extrinsic calibration method for non-overlapping camera rigs. The aim is the calibration of non-overlapping cameras embedded on a vehicle, for visual navigation purposes in urban environments. The cameras do not see the same area at the same time. The calibration procedure consists of manoeuvring the vehicle while each camera observes a static scene. The main contributions are a study of the singular motions and a specific bundle adjustment that both reconstructs the scene and calibrates the cameras. Solutions to handle the singular configurations, such as planar motions, are presented. The proposed approach has been validated with synthetic and real data.


International Journal of Image and Graphics | 2010

Outdoor/Indoor Vision-Based Localization for Blind Pedestrian Navigation Assistance

Sylvie Treuillet; Eric Royer

The most challenging issue facing navigation assistance systems for the visually impaired is the instantaneous and accurate spatial localization of the user. Most previously proposed systems are based on global positioning system (GPS) sensors. However, the accuracy of low-cost receivers is insufficient for pedestrian use. Furthermore, GPS-based systems are confined to outdoor navigation and suffer severe signal losses in urban areas. This paper presents a new approach for localizing a person using a single body-mounted camera and computer vision techniques. Instantaneous, accurate localization and heading estimates of the person are computed from images as the user progresses along a memorized path. A portable prototype has been tested for outdoor as well as indoor pedestrian use. Experimental results demonstrate the effectiveness of the vision-based localization: the accuracy is sufficient to guide and maintain the blind person within a navigation corridor less than 1 m wide along the intended path. In combination with a suitable guiding interface, such a localization system could assist the visually impaired in their everyday movements, outdoors as well as indoors.


International Conference on Robotics and Automation | 2011

Fast calibration of embedded non-overlapping cameras

Pierre Lébraly; Eric Royer; Omar Ait-Aider; Clement Deymier; Michel Dhome

This article presents a simple and flexible extrinsic calibration method for non-overlapping camera rigs. The cameras do not see the same area at the same time; they are rigidly linked and can be moved. The most representative application is the mobile robotics domain. The calibration procedure consists of maneuvering the system while each camera observes a static scene. A linear solution derived from the hand-eye calibration scheme is proposed to compute an initial estimate of the extrinsic parameters. The main contribution is a specific bundle adjustment which refines both the scene geometry and the extrinsic parameters. Finally, an efficient implementation of this bundle adjustment step is described for online calibration purposes. The proposed approach is validated with both synthetic and real data.
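
The rotation part of classical hand-eye calibration can be sketched as follows (this is the textbook formulation, not necessarily the paper's exact derivation): the constraint R_a R_x = R_x R_b implies that the rotation axes of each motion pair satisfy axis(A_i) = R_x axis(B_i), so R_x can be recovered from several non-degenerate motions by solving an orthogonal Procrustes problem over the axes.

```python
import numpy as np

def rot(axis, angle):
    # Rodrigues' formula for a rotation matrix about a given axis.
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def rotation_axis(M):
    # Axis of a rotation matrix, from its skew-symmetric part.
    v = np.array([M[2, 1] - M[1, 2], M[0, 2] - M[2, 0], M[1, 0] - M[0, 1]])
    return v / np.linalg.norm(v)

def hand_eye_rotation(As, Bs):
    # Solve axis(A_i) = R_x axis(B_i) in the least-squares sense
    # (orthogonal Procrustes via SVD).
    H = sum(np.outer(rotation_axis(A), rotation_axis(B))
            for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(H)
    return U @ np.diag([1, 1, np.linalg.det(U @ Vt)]) @ Vt

# Simulated rig: unknown camera-to-vehicle rotation R_x and three
# non-degenerate motions (purely planar motion would be a singular case).
R_x = rot([1.0, 2.0, 3.0], 0.7)
Bs = [rot([1, 0, 0], 0.5), rot([0, 1, 0], 0.8), rot([0, 0, 1], 0.3)]
As = [R_x @ B @ R_x.T for B in Bs]
print(np.allclose(hand_eye_rotation(As, Bs), R_x))  # -> True
```

The singular planar-motion case mentioned in the abstracts appears here directly: if all motion axes are parallel, `H` is rank-deficient and R_x is not fully determined.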


Computer Vision and Pattern Recognition | 2010

Real-time vehicle global localisation with a single camera in dense urban areas: Exploitation of coarse 3D city models

Pierre Lothe; Steve Bourgeois; Eric Royer; Michel Dhome; Sylvie Naudet-Collette

In this system paper, we propose a real-time car localisation process for dense urban areas using a single perspective camera and a priori knowledge of the environment. To tackle this problem, two well-known monocular SLAM limitations must be solved: scale factor drift and error accumulation. The proposed idea is to combine a monocular SLAM process based on bundle adjustment with simple knowledge, i.e. the position and orientation of the camera with respect to the road and a coarse 3D model of the environment, such as those provided by GIS databases. First, we show that, thanks to specific SLAM-based constraints, the road homography can be expressed with respect to the scale factor parameter alone. This allows the scale factor to be robustly and frequently estimated. Then, we propose to use the global information brought by 3D city models to correct the monocular SLAM error accumulation. Even with coarse 3D models, turns give enough geometric constraints to fit the reconstructed 3D point cloud to the 3D model. Experiments on large-scale sequences (several kilometres) show that the entire process permits the real-time localisation of a car in a city centre, even in real traffic conditions.
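
The underlying idea behind the scale-factor estimation can be sketched as follows (a toy, not the paper's homography formulation): when the camera's mounting height above the road is known, the monocular scale follows from comparing that height with the reconstructed distance to the road plane. The names and the axis convention below are hypothetical.

```python
import numpy as np

def estimate_scale(road_points_cam, h_true):
    """road_points_cam: (N, 3) reconstructed road points in the camera
    frame (arbitrary SLAM scale); h_true: known camera height in metres."""
    # Assume the camera y axis points toward the road, so the distance to
    # the road plane is (robustly) the median y coordinate of road points.
    h_slam = np.median(road_points_cam[:, 1])
    return h_true / h_slam

rng = np.random.default_rng(0)
# Simulated road points 0.5 SLAM units below the camera, slightly noisy.
pts = np.column_stack([rng.uniform(-1, 1, 50),
                       0.5 + rng.normal(0, 1e-4, 50),
                       rng.uniform(1, 5, 50)])
s = estimate_scale(pts, h_true=1.5)  # camera mounted 1.5 m above the road
print(round(s, 2))  # -> approximately 3.0
```

Multiplying the reconstructed trajectory and point cloud by `s` restores metric scale; re-estimating `s` frequently, as the abstract describes, counteracts scale drift.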


International Conference on Robotics and Automation | 2013

Using monocular visual SLAM to manually convoy a fleet of automatic urban vehicles

Pierre Avanzini; Eric Royer; Benoit Thuilot; Jean-Pierre Derutin

This paper addresses platooning navigation as part of the new transportation services emerging in urban areas. Platooning formation is ensured using a global decentralized control strategy supported by inter-vehicle communications. Considerable motion flexibility is achieved through a manual guidance mode: the path to follow is inferred online from the motion of the manually driven first vehicle. For this purpose, a visual SLAM algorithm relying on monocular vision is run on the lead vehicle and coupled with a trajectory creation procedure. Both the map and trajectory updates are shared online with the following vehicles, allowing them to derive their absolute location with respect to a common reference trajectory from their current camera image. Full-scale experiments with two urban vehicles demonstrate the performance of the proposed approach.

Collaboration


Dive into Eric Royer's collaborations.

Top Co-Authors

Michel Dhome

Blaise Pascal University

Omar Ait-Aider

Blaise Pascal University

Benoit Thuilot

Blaise Pascal University
