Publications


Featured research published by Youcef Mezouar.


International Conference on Robotics and Automation | 2002

Path planning for robust image-based control

Youcef Mezouar; François Chaumette

Vision feedback control loop techniques are efficient for a large class of applications, but they come up against difficulties when the initial and desired robot positions are distant. Classical approaches are based on the regulation to zero of an error function computed from the current measurement and a constant desired one. With such an approach, it is not obvious how to introduce constraints on the realized trajectories or to ensure convergence for all initial configurations. In this paper, we propose a new approach that resolves these difficulties by coupling path planning in image space with image-based control. Constraints such as keeping the object in the camera field of view or avoiding the robot's joint limits can be taken into account at the task planning level. Furthermore, with this approach, the current measurements always remain close to their desired values, and control by image-based servoing ensures robustness with respect to modeling errors. The proposed method is based on the potential field approach and applies whether or not the object's shape and dimensions are known, and whether the camera calibration parameters are well or badly estimated. Finally, real-time experimental results using an eye-in-hand robotic system are presented and confirm the validity of our approach.
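The image-based control layer described above regulates the error between the current features and a reference that follows the planned image trajectory. A minimal sketch in Python, assuming point features with known depth estimates; the interaction_matrix helper and the gain lam are illustrative, not the paper's implementation:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating the point's image velocity to the camera twist (v, w)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_control(current, planned, depths, lam=0.5):
    """Camera twist v = -lam * L^+ (s - s*): here s* tracks the planned
    image trajectory, so the error stays small along the whole motion."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(current, depths)])
    error = (np.asarray(current, float) - np.asarray(planned, float)).ravel()
    return -lam * np.linalg.pinv(L) @ error
```

Because the reference is time-varying rather than a distant constant goal, the tracked error remains small, which is what makes the servoing robust to modeling errors.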


Intelligent Robots and Systems | 2007

A generic fisheye camera model for robotic applications

Jonathan Courbon; Youcef Mezouar; Laurent Eck; Philippe Martinet

Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera, which provides a full image, unlike catadioptric visual sensors, and does not increase the size or fragility of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras, with distortions directly included in its parameters. This unified projection model consists of a projection onto a virtual unit sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results help to confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and validated experimentally.
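The unified projection model mentioned here has a compact closed form. A sketch, with an illustrative mirror/distortion parameter xi and intrinsic matrix K (xi = 0 reduces to the pinhole model):

```python
import numpy as np

def unified_project(X, xi, K):
    """Unified model: central projection onto the unit sphere, then
    perspective projection from a point shifted by xi along the axis."""
    Xs = X / np.linalg.norm(X)           # point on the unit sphere
    m = Xs[:2] / (Xs[2] + xi)            # perspective step
    u = K @ np.array([m[0], m[1], 1.0])  # apply intrinsics
    return u[:2]

# Example with illustrative values (a fisheye-like xi and generic intrinsics).
K = np.array([[300.0, 0.0, 320.0], [0.0, 300.0, 240.0], [0.0, 0.0, 1.0]])
print(unified_project(np.array([0.5, -0.2, 1.0]), xi=1.2, K=K))
```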


The International Journal of Robotics Research | 2003

Optimal Camera Trajectory with Image-Based Control

Youcef Mezouar; François Chaumette

Image-based servoing is a local control solution. Thanks to the feedback loop closed in the image space, local convergence and stability in the presence of modeling errors and noise perturbations are ensured when the error is small. The principal deficiency of this approach is that the induced (3D) trajectories are not optimal, and sometimes, especially when the displacement to realize is large, they are not physically valid, leading to the failure of the servoing process. In this paper we address the problem of finding realistic image-space trajectories that correspond to optimal 3D trajectories. The camera calibration and the model of the observed scene are assumed unknown. First, a smooth closed-form collineation path between given start and end points is obtained. This path is generated so as to correspond to an optimal camera path. The trajectories of the image features are then derived and efficiently tracked using purely image-based control. A second path planning scheme, based on the potential field method, is briefly presented. It allows us to introduce constraints on the desired trajectory, for instance ensuring that the object of interest remains in the camera field of view and avoiding the robot joint limits. Experimental results obtained on a six-degrees-of-freedom eye-in-hand robotic system are presented and confirm the validity of the proposed approach.
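The optimal 3D path behind the planned image trajectories couples a straight-line translation with a geodesic on the rotation group. A sketch of one point on such a camera path, assuming known endpoint poses purely for illustration (in the paper the corresponding collineation path is planned in the image, without metric knowledge of the scene):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def camera_path(R0, R1, t0, t1, s):
    """Pose at parameter s in [0, 1]: the rotation follows the shortest
    path on SO(3) and the translation is a straight line, which is the
    kind of optimal 3D path the image trajectories correspond to."""
    dR = (R.from_matrix(R0).inv() * R.from_matrix(R1)).as_rotvec()
    Rs = R.from_matrix(R0) * R.from_rotvec(s * dR)
    ts = (1.0 - s) * np.asarray(t0) + s * np.asarray(t1)
    return Rs.as_matrix(), ts
```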


International Conference on Robotics and Automation | 2000

Path planning in image space for robust visual servoing

Youcef Mezouar; François Chaumette

Vision feedback control loop techniques are efficient for a number of applications, but they come up against difficulties when the initial and desired positions of the camera are distant. We propose a new approach that resolves these difficulties by planning trajectories in the image. Constraints such as keeping the object in the camera field of view can be taken into account. Furthermore, using this process, the current measurements always remain close to their desired values, and control by image-based servoing ensures robustness with respect to modeling errors. We apply our method whether or not the object dimensions are known and whether the camera calibration parameters are well or badly estimated. Finally, real-time experimental results using a camera mounted on the end effector of a 6-DOF robot are presented.
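The planner combines an attractive term pulling the image features toward their goal with a repulsive barrier near the image border, which is what keeps the object in the field of view. A minimal sketch on coordinates normalized to [-1, 1]; the potential shapes and gains are illustrative assumptions, not the paper's exact potentials:

```python
import numpy as np

def attractive_grad(s, s_star):
    """Gradient of a quadratic potential pulling the features to the goal."""
    return s - s_star

def repulsive_grad(s, rho0=0.2):
    """Gradient of a barrier potential active within distance rho0 of the
    image border; it pushes features back toward the image center."""
    g = np.zeros_like(s)
    rho = 1.0 - np.abs(s)                      # distance to the border
    near = rho < rho0
    g[near] = (1.0 / rho[near] - 1.0 / rho0) / rho[near]**2 * np.sign(s[near])
    return g

def planned_step(s, s_star, gain=0.01, k_rep=0.05):
    """One gradient-descent step along the planned image trajectory."""
    return s - gain * (attractive_grad(s, s_star) + k_rep * repulsive_grad(s))
```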


IEEE Transactions on Intelligent Transportation Systems | 2009

Autonomous Navigation of Vehicles from a Visual Memory Using a Generic Camera Model

Jonathan Courbon; Youcef Mezouar; Philippe Martinet

In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating in an unknown environment for the first time, usual behavior consists of memorizing some key views along the performed path to use these references as checkpoints for future navigation missions. The navigation framework for the wheeled vehicles presented in this paper is based on this assumption. During a human-guided learning step, the vehicle performs paths that are sampled and stored as a set of ordered key images, as acquired by an embedded camera. The visual paths are topologically organized, providing a visual memory of the environment. Given an image of the visual memory as a target, the vehicle navigation mission is defined as a concatenation of visual path subsets called visual routes. When autonomously running, the control guides the vehicle along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law that is adapted to the nonholonomic constraint. Our navigation framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment have been carried out with a fisheye camera along a 750-m-long trajectory. Results validate our approach.
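The visual memory itself is a topological structure. A hypothetical sketch of how the learned paths could be stored and a visual route extracted, assuming key images are identified by ids; the class and method names are illustrative, not the paper's software:

```python
from collections import deque

class VisualMemory:
    """Topological map of learned paths: nodes are key images, directed
    edges link consecutive key images of a learned visual path."""
    def __init__(self):
        self.edges = {}                      # key image id -> successor ids

    def add_path(self, key_images):
        for a, b in zip(key_images, key_images[1:]):
            self.edges.setdefault(a, set()).add(b)

    def visual_route(self, start, goal):
        """Shortest concatenation of visual path subsets from the key
        image matching the current view to the target key image (BFS)."""
        parent, frontier = {start: None}, deque([start])
        while frontier:
            node = frontier.popleft()
            if node == goal:
                route = []
                while node is not None:
                    route.append(node)
                    node = parent[node]
                return route[::-1]
            for nxt in self.edges.get(node, ()):
                if nxt not in parent:
                    parent[nxt] = node
                    frontier.append(nxt)
        return None
```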


International Conference on Robotics and Automation | 2005

Indoor Navigation of a Wheeled Mobile Robot along Visual Routes

Guillaume Blanc; Youcef Mezouar; Philippe Martinet

When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission taking a similar path. This assumption is used in this paper as the basis of a navigation framework for wheeled mobile robots in indoor environments. During a human-guided, teleoperated learning step, the robot performs paths that are sampled and stored as a set of ordered key images, acquired by a standard embedded camera. The set of visual paths obtained is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. Real experimental results illustrate the validity of the presented framework.
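The regulation of successive homographies can be sketched with standard tools: estimate the homography between the current view and the next key image, decompose it, and feed the heading and lateral errors to a steering law. The gains and the error extraction below are illustrative assumptions, not the paper's control law:

```python
import cv2
import numpy as np

def steering_from_key_image(cur_pts, key_pts, K, k1=0.8, k2=0.4):
    """Estimate the homography mapping matched points of the current view
    (cur_pts, Nx2) to the next key image (key_pts, Nx2), decompose it, and
    turn the heading and lateral errors into an angular velocity command."""
    H, _ = cv2.findHomography(cur_pts, key_pts, cv2.RANSAC, 3.0)
    _, Rs, ts, _ = cv2.decomposeHomographyMat(H, K)
    R, t = Rs[0], ts[0]  # in practice the physically valid solution is kept
    theta = np.arctan2(R[0, 2], R[2, 2])  # heading error (rotation about y)
    y_err = float(t[0])                   # lateral error, up to scale
    return -k1 * theta - k2 * y_err       # unicycle steering command
```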


Autonomous Robots | 2008

Indoor navigation of a non-holonomic mobile robot using a visual memory

Jonathan Courbon; Youcef Mezouar; Philippe Martinet

When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission. The navigation framework for wheeled mobile robots presented in this paper is based on this assumption. During a human-guided learning step, the robot performs paths that are sampled and stored as a set of ordered key images, acquired by an embedded camera. The set of visual paths obtained is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called a visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. The proposed framework has been designed for the entire class of central catadioptric cameras (including conventional cameras). It has been validated on two architectures: in the first, the algorithms were implemented on dedicated hardware and the robot was equipped with a standard perspective camera; in the second, they were implemented on a standard PC and an omnidirectional camera was used.


IEEE Transactions on Robotics | 2010

Robustness of Image-Based Visual Servoing With a Calibrated Camera in the Presence of Uncertainties in the Three-Dimensional Structure

Ezio Malis; Youcef Mezouar; Patrick Rives

This paper concerns the stability analysis of image-based visual servoing control laws with respect to uncertainties on the 3-D parameters needed to compute the interaction matrix for any calibrated central catadioptric camera. In the recent past, research on image-based visual servoing has concentrated on potential problems of stability and on robustness with respect to camera-calibration errors. Little attention, if any, has been devoted to the robustness of image-based visual servoing to estimation errors on the 3-D structure. It is generally believed that a rough approximation of the 3-D structure is sufficient to ensure the stability of the control law. In this paper, we prove that this is not always true and that extreme care must be taken when approximating the depth distribution to ensure stability of the image-based control law. The theoretical results are obtained not only for conventional pinhole cameras but for the entire class of central catadioptric systems as well.
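The condition at stake can be illustrated numerically for the conventional-camera case: with the law v = -lam * L(Z_hat)^+ e, the closed-loop error dynamics involve L(Z) L(Z_hat)^+, whose eigenvalues must have positive real parts. A sketch with three point features (so the product is a square 6x6 matrix); the test values are illustrative:

```python
import numpy as np

def L_point(x, y, Z):
    """Interaction matrix of a normalized image point at depth Z."""
    return np.array([[-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
                     [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x]])

def min_eig_real(points, true_Z, est_Z):
    """Smallest real part of the eigenvalues of L(Z) L(Z_hat)^+; local
    stability of the image-based law requires it to be positive."""
    L_true = np.vstack([L_point(x, y, Z) for (x, y), Z in zip(points, true_Z)])
    L_est = np.vstack([L_point(x, y, Z) for (x, y), Z in zip(points, est_Z)])
    return np.linalg.eigvals(L_true @ np.linalg.pinv(L_est)).real.min()

# A negative value flags a depth approximation that destabilizes the law.
pts = [(0.1, 0.1), (-0.2, 0.15), (0.05, -0.2)]
print(min_eig_real(pts, true_Z=[1.0, 1.0, 1.0], est_Z=[0.3, 4.0, 0.3]))
```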


Intelligent Robots and Systems | 2004

Central catadioptric visual servoing from 3D straight lines

Youcef Mezouar; Hicham Hadj Abdelkader; Philippe Martinet; François Chaumette

In this paper we consider the problem of controlling a robotic system using the projection of 3D lines in the image plane of central catadioptric systems. Most of the effort in visual servoing has been devoted to points; only a few works have investigated the use of lines in visual servoing with traditional cameras, and none has explored the case of omnidirectional cameras. First, a generic central catadioptric interaction matrix for the projection of 3D straight lines is derived from the projection model of an entire class of cameras. Then an image-based control law is designed and validated through simulation results.
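The geometry underlying such an interaction matrix is compact: under the central model, a 3D line projects onto the unit sphere as a great circle whose normal is that of the line's interpretation plane. A small sketch of that first step (the subsequent mapping of the great circle to a conic in the catadioptric image is omitted here):

```python
import numpy as np

def interpretation_plane_normal(P, u):
    """Unit normal of the interpretation plane of a 3D line given by a
    point P and a direction u (both in the camera/mirror frame): on the
    unit sphere of the central model, the line projects onto the great
    circle { Xs : n . Xs = 0 }, from which the image conic follows."""
    n = np.cross(P, u)
    return n / np.linalg.norm(n)
```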


IEEE Transactions on Robotics | 2010

Decoupled Image-Based Visual Servoing for Cameras Obeying the Unified Projection Model

Omar Tahri; Youcef Mezouar; François Chaumette; Peter Corke

In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows translational motion to be controlled independently of rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
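The rotation invariance that enables the decoupling is easy to check numerically: rotating the camera rotates the sphere, which leaves the area of a spherical region unchanged. A sketch that reduces the surface of the object projection to the solid angle of a spherical triangle, an illustrative simplification of the paper's feature:

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def on_sphere(points):
    """First step of the unified model: central projection onto the sphere."""
    return points / np.linalg.norm(points, axis=1, keepdims=True)

def triangle_solid_angle(a, b, c):
    """Solid angle of the spherical triangle (a, b, c) of unit vectors
    (Van Oosterom-Strackee formula): an area-like feature on the sphere."""
    num = np.dot(a, np.cross(b, c))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    return 2.0 * np.arctan2(num, den)

# Rotating the camera rotates the sphere, so the feature is unchanged:
Xs = on_sphere(np.array([[0.2, 0.1, 1.0], [0.4, -0.3, 1.2], [-0.1, 0.2, 0.9]]))
R = Rot.from_rotvec([0.3, -0.2, 0.5]).as_matrix()
print(triangle_solid_angle(*Xs), triangle_solid_angle(*(Xs @ R.T)))
```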

Collaboration


Dive into Youcef Mezouar's collaboration.

Top Co-Authors

Philippe Martinet
Institut de Recherche en Communications et Cybernétique de Nantes

Lounis Adouane
Centre national de la recherche scientifique

Nicolas Andreff
Centre national de la recherche scientifique

Omar Tahri
University of Orléans

Grigore Gogu
Centre national de la recherche scientifique