Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jonathan Courbon is active.

Publication


Featured research published by Jonathan Courbon.


Intelligent Robots and Systems | 2007

A generic fisheye camera model for robotic applications

Jonathan Courbon; Youcef Mezouar; Laurent Eck; Philippe Martinet

Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera, which provides a full image (unlike catadioptric visual sensors) and does not increase the size or the fragility of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras, with the distortions directly included in its parameters. This unified projection model consists of a projection onto a virtual unit sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and experimentally validated.
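The two-step projection described in the abstract (onto a virtual unit sphere, then a perspective projection onto the image plane) can be sketched as follows. This is a minimal illustration of the unified model; the parameter values and function name are hypothetical, not taken from the paper.

```python
import math

def unified_projection(X, Y, Z, xi, fx, fy, cx, cy):
    """Unified sphere model sketch.
    Step 1: project the 3D point onto the virtual unit sphere.
    Step 2: perspective projection from a center shifted by xi along
            the sphere axis, then apply the pinhole intrinsics.
    xi, fx, fy, cx, cy are illustrative parameters."""
    rho = math.sqrt(X * X + Y * Y + Z * Z)
    xs, ys, zs = X / rho, Y / rho, Z / rho      # point on the unit sphere
    m_x = xs / (zs + xi)                        # perspective step
    m_y = ys / (zs + xi)
    u = fx * m_x + cx                           # pixel coordinates
    v = fy * m_y + cy
    return u, v

# Sanity check: xi = 0 reduces to an ordinary pinhole projection (u = fx*X/Z + cx).
u, v = unified_projection(1.0, 0.0, 2.0, 0.0, 500.0, 500.0, 320.0, 240.0)
```

For fisheye lenses, the paper's point is that the distortion is absorbed into the model's parameters (here, a nonzero `xi` together with the intrinsics), rather than requiring a separate distortion model.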


IEEE Transactions on Intelligent Transportation Systems | 2009

Autonomous Navigation of Vehicles from a Visual Memory Using a Generic Camera Model

Jonathan Courbon; Youcef Mezouar; Philippe Martinet

In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating in an unknown environment for the first time, usual behavior consists of memorizing some key views along the performed path to use these references as checkpoints for future navigation missions. The navigation framework for the wheeled vehicles presented in this paper is based on this assumption. During a human-guided learning step, the vehicle performs paths that are sampled and stored as a set of ordered key images, as acquired by an embedded camera. The visual paths are topologically organized, providing a visual memory of the environment. Given an image of the visual memory as a target, the vehicle navigation mission is defined as a concatenation of visual path subsets called visual routes. When autonomously running, the control guides the vehicle along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law that is adapted to the nonholonomic constraint. Our navigation framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment have been carried out with a fisheye camera along a 750-m-long trajectory. Results validate our approach.
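The visual-route idea above (a navigation mission as a concatenation of visual path subsets through a topologically organized memory) can be sketched as a shortest-path query over a graph of key images. The memory contents, node names, and the breadth-first search below are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

# Hypothetical visual memory: key images as nodes, edges linking
# consecutive (or crossing) key images of the learned visual paths.
memory = {
    "I0": ["I1"], "I1": ["I0", "I2", "I5"], "I2": ["I1", "I3"],
    "I3": ["I2", "I4"], "I4": ["I3"], "I5": ["I1", "I6"], "I6": ["I5"],
}

def visual_route(graph, current, target):
    """Breadth-first search: the route is the shortest chain of key
    images (a concatenation of visual path subsets) linking the image
    closest to the vehicle's current view to the target image."""
    parent = {current: None}
    queue = deque([current])
    while queue:
        node = queue.popleft()
        if node == target:
            route = []
            while node is not None:   # backtrack from target to start
                route.append(node)
                node = parent[node]
            return route[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None                        # target not reachable in memory

route = visual_route(memory, "I4", "I6")
```

The controller then tracks each key image of `route` in turn, which is what lets the vehicle follow the reference route without planning an explicit metric trajectory.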


Autonomous Robots | 2008

Indoor navigation of a non-holonomic mobile robot using a visual memory

Jonathan Courbon; Youcef Mezouar; Philippe Martinet

When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional cameras). It has been validated on two architectures. In the first, the algorithms have been implemented on dedicated hardware and the robot is equipped with a standard perspective camera. In the second, they have been implemented on a standard PC and an omnidirectional camera is considered.


Intelligent Robots and Systems | 2009

Visual navigation of a quadrotor Aerial Vehicle

Jonathan Courbon; Youcef Mezouar; Nicolas Guénard; Philippe Martinet

This paper presents a vision-based navigation strategy for a Vertical Take-off and Landing (VTOL) Unmanned Aerial Vehicle (UAV) using a single embedded camera observing natural landmarks. In the proposed approach, images of the environment are first sampled, stored as a set of ordered key images (a visual path), and organized to provide a visual memory of the environment. The robot navigation task is then defined as a concatenation of visual path subsets (called a visual route) linking the currently observed image to a target image belonging to the visual memory. The UAV is controlled to reach each image of the visual route using a vision-based control law adapted to its dynamic model, without explicitly planning any trajectory. This framework is substantiated by experiments with an X4-flyer equipped with a fisheye camera.


International Conference on Robotics and Automation | 2008

Efficient hierarchical localization method in an omnidirectional images memory

Jonathan Courbon; Youcef Mezouar; Laurent Eck; Philippe Martinet

An efficient method for global robot localization in a memory of omnidirectional images is presented. This method is valid for indoor and outdoor environments and is not restricted to mobile robots. The proposed strategy is purely vision-based and uses a set of prerecorded images (the visual memory) as a reference. The localization consists of finding, in the visual memory, the image which best fits the current image. We propose a hierarchical process combining global descriptors, computed on a cubic interpolation of a triangular mesh, with patch correlation around Harris corners. To evaluate this method, three large image data sets have been used. The results of the proposed method are compared with those obtained by state-of-the-art techniques in terms of 1) accuracy, 2) the amount of memorized data required per image, and 3) computational cost. The proposed method shows the best compromise in terms of these criteria.
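The coarse-to-fine structure of such a hierarchical localization can be sketched as follows. As simplifying assumptions, the global descriptor comparison is a plain squared distance, patches are flat intensity lists compared with zero-mean normalized cross-correlation (ZNCC), and all names and data are hypothetical; the paper's actual descriptors (mesh interpolation, Harris-corner patches) are richer.

```python
def zncc(a, b):
    """Zero-mean normalized cross-correlation between two patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def localize(query_desc, query_patch, memory, k=2):
    """Coarse-to-fine lookup: rank memory images by global-descriptor
    distance, keep the k best candidates, then rescore only those
    candidates with patch correlation and return the best match."""
    coarse = sorted(memory, key=lambda im: sum(
        (q - d) ** 2 for q, d in zip(query_desc, im["desc"])))[:k]
    return max(coarse, key=lambda im: zncc(query_patch, im["patch"]))

# Hypothetical memory of three key frames (descriptor + one patch each).
memory = [
    {"name": "kf0", "desc": [0.1, 0.9], "patch": [1, 2, 3, 4]},
    {"name": "kf1", "desc": [0.8, 0.2], "patch": [4, 3, 2, 1]},
    {"name": "kf2", "desc": [0.7, 0.3], "patch": [1, 2, 3, 5]},
]
best = localize([0.75, 0.25], [2, 4, 6, 8], memory)
```

The point of the hierarchy is cost: the cheap global descriptor prunes most of the memory, so the expensive patch correlation runs on only `k` candidates per query.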


International Conference on Robotics and Automation | 2012

Image Sequence Partitioning for outdoor mapping

Hemanth Korrapati; Jonathan Courbon; Youcef Mezouar; Philippe Martinet

Most of the existing appearance-based topological mapping algorithms produce dense topological maps in which each image stands as a node in the topological graph. Sparser maps can be built by representing groups of visually similar images as nodes of a topological graph. In this paper, we present a sparse topological mapping framework which uses Image Sequence Partitioning (ISP) techniques to group visually similar images as topological graph nodes. We present four different ISP techniques and evaluate their performance. In order to take advantage of the aforementioned maps, we make use of Hierarchical Inverted Files (HIF), which enable efficient hierarchical loop closure. Outdoor experimental results demonstrating the sparsity, efficiency and accuracy achieved by the combination of ISP and HIF in performing loop closure are presented.
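A hierarchical inverted file for loop closure can be sketched as a two-level lookup: vote at the node level first, then match only the images inside the winning node. The map contents, word IDs, and voting scheme below are illustrative assumptions, not the paper's HIF implementation.

```python
from collections import defaultdict

# Hypothetical sparse topological map: each node groups visually
# similar images; each image is a bag of visual words (integers).
nodes = {
    "n0": {"img0": {1, 2, 3}, "img1": {1, 3, 4}},
    "n1": {"img2": {7, 8, 9}, "img3": {8, 9, 10}},
}

# Node-level inverted file: visual word -> nodes containing it.
node_index = defaultdict(set)
for node, images in nodes.items():
    for words in images.values():
        for w in words:
            node_index[w].add(node)

def loop_closure(query_words):
    """Hierarchical lookup: accumulate node-level votes from the
    inverted file, then score only the images of the winning node
    (cheaper than scanning every image in the map)."""
    votes = defaultdict(int)
    for w in query_words:
        for node in node_index.get(w, ()):
            votes[node] += 1
    if not votes:
        return None
    best_node = max(votes, key=votes.get)
    return max(nodes[best_node],
               key=lambda im: len(nodes[best_node][im] & query_words))

match = loop_closure({8, 9, 10, 11})
```

This is where the sparsity pays off: the expensive image-level comparison is confined to one node's handful of images instead of the whole trajectory.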


Intelligent Robots and Systems | 2010

Wheeled mobile robots navigation from a visual memory using wide field of view cameras

Hector M. Becerra; Jonathan Courbon; Youcef Mezouar; Carlos Sagüés

In this paper, we propose a visual path following control scheme for wheeled mobile robots based on the epipolar geometry. The control law only requires the position of the epipole computed between the current and target views along the sequence of a visual memory. The proposed approach has two main advantages: explicit pose parameter decomposition is not required, and the rotational velocity is smooth, or possibly piecewise constant, avoiding the discontinuities that generally appear when the target image changes. The translational velocity is adapted as required for the path, and the approach is independent of this velocity. Furthermore, our approach is valid for all cameras obeying the unified model, including conventional, central catadioptric and some fisheye cameras. Simulations as well as real-world experiments with a robot illustrate the validity of our approach.
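The flavor of epipole-based steering can be conveyed with a toy proportional law: drive the epipole abscissa toward zero, i.e. turn the robot until its heading points at the target view. This is a heavily simplified sketch, not the paper's actual control law; the gain, saturation bound, and the 1D epipole dynamics in the demo loop are all assumptions.

```python
def steering_from_epipole(e_x, k=0.5, e_x_max=1.0):
    """Illustrative proportional law: rotational velocity proportional
    to the current epipole abscissa e_x, saturated so the command
    stays bounded (the paper's law is more elaborate)."""
    omega = -k * e_x
    limit = k * e_x_max
    return max(-limit, min(limit, omega))

# Crude closed-loop sketch: as the robot turns toward the target view,
# the epipole abscissa decays toward zero (assumed first-order dynamics).
e = 0.8
for _ in range(20):
    e += steering_from_epipole(e) * 0.1   # Euler integration step
```

Because the command depends only on where the epipole sits in the image, no explicit pose decomposition (rotation/translation recovery) is needed, which is the advantage the abstract highlights.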


International Conference on Control, Automation, Robotics and Vision | 2008

Efficient visual memory based navigation of indoor robot with a wide-field of view camera

Jonathan Courbon; Youcef Mezouar; Laurent Eck; Philippe Martinet

In this paper, we present a complete framework for autonomous indoor robot navigation. We show that autonomous navigation is possible in indoor settings using a single camera and natural landmarks. When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of these visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the control guides the robot along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law adapted to the nonholonomic constraint. The proposed framework has been designed for a generic class of cameras (including conventional, catadioptric and fisheye cameras). Experiments with an AT3 Pioneer robot navigating in an indoor environment have been carried out with a fisheye camera. The results validate our approach.


Intelligent Robots and Systems | 2008

Navigation of urban vehicle: An efficient visual memory management for large scale environments

Jonathan Courbon; Youcef Mezouar; Laurent Lequievre; Laurent Eck

In this paper, we present a method to efficiently manage the visual memory for autonomous vehicle navigation in large-scale environments. It addresses two issues crucial for real-time navigation: an efficient organisation of the memory and a small computational cost. A software platform (SoViN) dedicated to visual memory management and navigation strategies (including vision-based memory building, localization and navigation) has been developed to fulfill these requirements. We show that this software architecture makes real-time navigation possible in large-scale outdoor situations using a single camera and natural landmarks.


IAS (1) | 2013

Visual Memory Update for Life-Long Mobile Robot Navigation

Jonathan Courbon; Hemanth Korrapati; Youcef Mezouar

A central requirement for visual-memory-based navigation strategies is efficient point matching between the current image and the key images of the memory. However, the visual memory may become out of date after some time because the appearance of real-world environments keeps changing. It is thus necessary to remove obsolete information from, and to add new data to, the visual memory over time. In this paper, we propose a method based on short-term and long-term memory concepts to update the visual memory of mobile robots during navigation. The results of our experiments show that this method improves the robustness of the localization and path-following steps.
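The short-term/long-term update idea can be sketched as follows: features matched repeatedly are promoted from short-term to long-term memory, while long-term features that go unmatched for too long are forgotten. The thresholds, counters, and feature names are hypothetical; the paper's criteria are more refined.

```python
def update_memory(short_term, long_term, matched,
                  promote_after=3, forget_after=5):
    """Illustrative memory update (hypothetical parameters).
    short_term maps feature -> times matched so far;
    long_term maps feature -> staleness (updates since last match)."""
    for f in matched:
        if f in long_term:
            long_term[f] = 0                     # reset staleness
        else:
            short_term[f] = short_term.get(f, 0) + 1
            if short_term[f] >= promote_after:   # seen often enough
                long_term[f] = 0                 # promote to long-term
                del short_term[f]
    for f in list(long_term):
        if f not in matched:
            long_term[f] += 1                    # grew staler
            if long_term[f] >= forget_after:     # obsolete landmark
                del long_term[f]

# A feature matched three runs in a row becomes long-term; an already
# stale long-term feature that keeps going unmatched is forgotten.
st, lt = {}, {"old": 4}
for _ in range(3):
    update_memory(st, lt, {"lamp"})
```

This mirrors the life-long navigation goal: the memory tracks the environment's current appearance instead of freezing the learning-phase snapshot.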

Collaboration


Dive into Jonathan Courbon's collaborations.

Top Co-Authors


Youcef Mezouar

Centre national de la recherche scientifique
