Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Philippe Martinet is active.

Publication


Featured research published by Philippe Martinet.


International Conference on Robotics and Automation | 2002

Position based visual servoing: keeping the object in the field of vision

Benoit Thuilot; Philippe Martinet; Lionel Cordesses; Jean Gallice

Visual servoing requires an object in the field of view of the camera in order to control the robot motion; otherwise, the virtual link is broken and the control loop can no longer be closed. In this paper, a novel approach is presented to guarantee that the object remains in the field of view of the camera during the whole robot motion. It consists of tracking an iteratively computed trajectory. A position-based model adapted to a moving target object is established and used to control the trajectory, and a nonlinear decoupling approach is then used to control the robot. Experiments demonstrating the capabilities of this approach have been conducted on a Cartesian robot connected to a real-time vision system, with a CCD camera mounted on the robot's end effector.
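The proportional core of a position-based servoing loop can be sketched as follows (a toy translation-only model; the gain `lam`, the poses, and the time step are illustrative values, not the paper's controller, which also handles orientation and field-of-view constraints):

```python
import numpy as np

def pbvs_control(t_cur, t_des, lam=1.0):
    # Proportional position-based visual servoing law (translation only):
    # command a camera velocity that exponentially decays the pose error.
    return -lam * (t_cur - t_des)

# Closed-loop simulation: integrate the camera position under the law.
t = np.array([1.0, -0.5, 2.0])   # current camera position (illustrative)
t_goal = np.zeros(3)             # desired position
dt = 0.1
for _ in range(100):
    t = t + dt * pbvs_control(t, t_goal)
```

After 100 integration steps the pose error has decayed by roughly a factor of 0.9 per step, which is the exponential convergence that makes proportional servoing laws attractive.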


Autonomous Robots | 2006

High accuracy path tracking for vehicles in presence of sliding: Application to farm vehicle automatic guidance for agricultural tasks

Roland Lenain; Benoit Thuilot; Christophe Cariou; Philippe Martinet

When designing an accurate automated guidance system for vehicles, a major problem is sliding and pseudo-sliding effects. This is especially the case in agricultural applications, where five-centimetre accuracy with respect to the desired trajectory is required even though the vehicles are moving on slippery ground. It has been established that RTK GPS is a very suitable sensor for achieving automated guidance with such high precision: several control laws have been designed for vehicles equipped with this sensor and provide the expected guidance accuracy as long as the vehicles do not slide. In previous work, further control developments were proposed to take sliding into account: guidance accuracy in slippery environments was shown to be preserved, except transiently at the beginning and end of curves. In this paper, the design of this control law is first recalled and discussed. A Model Predictive Control method is then applied in order to preserve guidance accuracy even during these curvature transitions. Finally, the overall control scheme is implemented, and improvements with respect to previous guidance laws are demonstrated through full-scale experiments.
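A minimal receding-horizon flavour of the idea can be sketched as a grid search over a constant steering angle on a kinematic bicycle model tracking a straight path (the speed, wheelbase, horizon, and candidate set are all illustrative; the paper's predictive controller is considerably richer and anticipates known path curvature):

```python
import math

def predict_cost(y, theta, delta, v=1.0, dt=0.1, L=1.2, horizon=5):
    # Roll a kinematic bicycle model forward under a constant steering
    # angle and accumulate squared lateral deviation from the path y = 0.
    cost = 0.0
    for _ in range(horizon):
        y += v * math.sin(theta) * dt
        theta += v / L * math.tan(delta) * dt
        cost += y * y
    return cost

def mpc_steer(y, theta):
    # Receding-horizon step: pick the candidate steering angle whose
    # predicted trajectory deviates least from the reference path.
    candidates = [i * 0.05 - 0.5 for i in range(21)]
    return min(candidates, key=lambda d: predict_cost(y, theta, d))
```

Only the first angle of the minimizing sequence is applied before re-planning, which is what distinguishes predictive control from open-loop trajectory optimization.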


Autonomous Robots | 2002

Automatic Guidance of a Farm Tractor Relying on a Single CP-DGPS

Benoit Thuilot; Christophe Cariou; Philippe Martinet; Michel Berducat

Precision agriculture involves very accurate farm vehicle control along recorded paths, which are not necessarily straight lines. In this paper, we investigate the possibility of achieving this task with a CP-DGPS as the unique sensor. The vehicle heading is derived according to a Kalman state reconstructor, and a nonlinear velocity independent control law is designed, relying on chained systems properties. Field experiments, demonstrating the capabilities of our guidance system, are reported and discussed.
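The key observation, that heading can be reconstructed from a single position sensor, can be illustrated with finite differences on successive antenna positions (a toy stand-in for the paper's Kalman state reconstructor; the sample track is invented):

```python
import math

def headings_from_gps(points):
    # Reconstruct vehicle heading from successive CP-DGPS positions by
    # finite differences: the velocity direction approximates heading.
    return [math.atan2(y1 - y0, x1 - x0)
            for (x0, y0), (x1, y1) in zip(points, points[1:])]

# Illustrative track: due east, then due north.
track = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
h = headings_from_gps(track)
```

A Kalman reconstructor improves on this raw differencing by filtering GPS noise through a vehicle motion model, which matters at the centimetre accuracies the paper targets.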


The International Journal of Robotics Research | 2009

A Review on the Dynamic Control of Parallel Kinematic Machines: Theory and Experiments

Flavien Paccot; Nicolas Andreff; Philippe Martinet

In this article, we review the dynamic control of parallel kinematic machines. It is shown that the classical control strategies from serial robotics generally used for parallel kinematic machines have to be rethought. Indeed, it is first shown that joint space control is not relevant for these mechanisms, for several reasons such as mechanical behavior and computational efficiency. Consequently, Cartesian space control should be preferred over joint space control. Nevertheless, some modifications to the well-known Cartesian space control strategies of serial robotics are proposed to make them perfectly suited to parallel kinematic machines, particularly a solution using an exteroceptive measure of the end-effector pose. The expected improvements in terms of accuracy, stability and robustness are discussed. A comparison between the main presented strategies is finally performed both in simulation and in experiments.
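The flavour of Cartesian-space computed-torque control can be sketched on a 1-DOF point mass (the mass, gains, and time step are illustrative; the machines in the article require full inverse dynamics and, as it argues, an exteroceptive measurement of the end-effector pose):

```python
def simulate(x0=1.0, m=2.0, kp=25.0, kd=10.0, dt=0.01, steps=1000):
    # Computed-torque control in task space: feedback-linearize the
    # dynamics, then impose critically damped error dynamics
    # (poles at -5, -5 for kp=25, kd=10) toward the target x = 0.
    x, xd = x0, 0.0
    for _ in range(steps):
        a_des = -kp * x - kd * xd   # desired Cartesian acceleration
        f = m * a_des               # inverse dynamics: force command
        xdd = f / m                 # plant: point mass under that force
        xd += xdd * dt
        x += xd * dt
    return x

x_final = simulate()
```

The point of the feedback linearization step is that the closed-loop error dynamics become linear and tunable regardless of the (here trivial, in general configuration-dependent) inertia.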


International Conference on Intelligent Robots and Systems | 2007

A generic fisheye camera model for robotic applications

Jonathan Courbon; Youcef Mezouar; Laurent Eck; Philippe Martinet

Omnidirectional cameras have a wide field of view and are thus used in many robotic vision tasks. An omnidirectional view may be acquired by a fisheye camera, which provides a full image (unlike catadioptric visual sensors) and does not increase the size or fragility of the imaging system with respect to perspective cameras. We prove that the unified model for catadioptric systems can model fisheye cameras, with distortions directly included in its parameters. This unified projection model consists of a projection onto a virtual unit sphere, followed by a perspective projection onto an image plane. The validity of this assumption is discussed and compared with other existing models. Calibration and partial Euclidean reconstruction results confirm the validity of our approach. Finally, an application to the visual servoing of a mobile robot is presented and evaluated experimentally.
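The two-step unified model can be written down directly (the intrinsics fx, fy, u0, v0 and the mirror parameter xi below are illustrative values, not a calibrated camera):

```python
import math

def unified_projection(X, Y, Z, xi=1.0, fx=300.0, fy=300.0, u0=320.0, v0=240.0):
    # Step 1: project the 3D point onto a virtual unit sphere.
    rho = math.sqrt(X * X + Y * Y + Z * Z)
    xs, ys, zs = X / rho, Y / rho, Z / rho
    # Step 2: perspective projection from a point shifted by xi above
    # the sphere centre onto the image plane, then apply the intrinsics.
    u = fx * xs / (zs + xi) + u0
    v = fy * ys / (zs + xi) + v0
    return u, v
```

With xi = 0 the model degenerates to a conventional pinhole projection, which is one way to see that perspective cameras are a special case of the unified model.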


European Conference on Computer Vision | 2006

Simultaneous object pose and velocity computation using a single view from a rolling shutter camera

Omar Ait-Aider; Nicolas Andreff; Jean Marc Lavest; Philippe Martinet

An original concept for computing instantaneous 3D pose and 3D velocity of fast moving objects using a single view is proposed, implemented and validated. It takes advantage of the image deformations induced by rolling shutter in CMOS image sensors. First of all, after analysing the rolling shutter phenomenon, we introduce an original model of the image formation when using such a camera, based on a general model of moving rigid sets of 3D points. Using 2D-3D point correspondences, we derive two complementary methods, compensating for the rolling shutter deformations to deliver an accurate 3D pose and exploiting them to also estimate the full 3D velocity. The first solution is a general one based on non-linear optimization and bundle adjustment, usable for any object, while the second one is a closed-form linear solution valid for planar objects. The resulting algorithms enable us to transform a CMOS low cost and low power camera into an innovative and powerful velocity sensor. Finally, experimental results with real data confirm the relevance and accuracy of the approach.
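The core of the image-formation model, that each image row is exposed at its own instant so a moving point is seen from a shifted position, can be sketched as a small fixed-point iteration (the focal length, line delay, and constant-velocity assumption are illustrative; the paper's actual methods are a bundle-adjustment solution and a closed-form solution for planar objects):

```python
def rolling_shutter_project(X, Y, Z, vel, f=500.0, line_delay=1e-4):
    # A row v is exposed at time t = v * line_delay, so the point must
    # be projected from its position at that time: iterate the row
    # estimate until it is consistent with the point's motion.
    v = f * Y / Z                      # initial guess: global-shutter row
    for _ in range(5):
        t = v * line_delay
        Xt, Yt, Zt = X + vel[0] * t, Y + vel[1] * t, Z + vel[2] * t
        v = f * Yt / Zt
    u = f * Xt / Zt
    return u, v
```

A static point projects exactly as with a global shutter, while motion during readout skews the image; inverting that skew is what yields the velocity estimate.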


Robotics and Autonomous Systems | 2006

Trajectory tracking control of farm vehicles in presence of sliding

Hao Fang; Ruixia Fan; Benoit Thuilot; Philippe Martinet

In the automatic guidance of agricultural vehicles, lateral control is not the only requirement. Much research has focused on trajectory tracking control, which can provide high longitudinal and lateral control accuracy. Satisfactory results have been reported as long as the vehicles move without sliding. Unfortunately, pure rolling constraints are not always satisfied, especially in agricultural applications where working conditions are rough and unpredictable. In this paper, the problem of trajectory tracking control of autonomous farm vehicles in the presence of sliding is addressed. To take sliding effects into account, two variables characterizing sliding are introduced into the kinematic model, based on the geometric and velocity constraints in the presence of sliding. With a linearization approximation, a refined kinematic model is obtained in which sliding appears as additive unknown parameters relative to the ideal kinematic model. By integrating a parameter adaptation technique with the backstepping method, a stepwise procedure is proposed to design a robust adaptive controller. It is theoretically proven that, for farm vehicles subject to sliding, the longitudinal and lateral deviations can be stabilized near zero and the orientation errors converge to a neighborhood of the origin. To be more realistic for agricultural applications, an adaptive controller with projection mapping is also proposed. Simulation results show that the proposed (robust) adaptive controllers can guarantee high trajectory tracking accuracy regardless of sliding.
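The "sliding as additive unknown parameters" idea can be illustrated with the simplest possible adaptation law, a scalar gradient update on invented noiseless data (the paper's actual design couples adaptation with backstepping and projection mapping):

```python
def adapt_sliding_bias(samples, lr=0.5, passes=200):
    # The measured lateral drift rate is modeled as the commanded rate
    # plus an unknown additive sliding term beta; estimate beta with a
    # gradient step on the squared prediction error for each sample.
    beta_hat = 0.0
    for _ in range(passes):
        for rate_meas, rate_cmd in samples:
            err = rate_meas - (rate_cmd + beta_hat)
            beta_hat += lr * err
    return beta_hat

# Invented noiseless data with a true sliding bias of 0.3.
data = [(0.4, 0.1), (0.25, -0.05), (0.0, -0.3)]
beta = adapt_sliding_bias(data)
```

Once the additive term is estimated, the nominal controller can simply subtract it, which is why casting sliding as an unknown parameter of an otherwise ideal kinematic model is attractive.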


International Conference on Robotics and Automation | 1996

Visual servoing in robotics scheme using a camera/laser-stripe sensor

Djamel Khadraoui; Guy Motyl; Philippe Martinet; Jean Gallice; François Chaumette

The work presented in this paper belongs to the fields of robotics and computer vision. The problem we seek to solve is the accomplishment of robotic tasks using visual features provided by a special sensor mounted on a robot end effector. This sensor consists of two laser stripes rigidly fixed to a camera, projecting planar light onto the scene. First, we briefly describe the classical visual servoing approach. We then generalize this approach to the case of our special sensor by considering its interaction with a sphere. This interaction allows us to establish a kinematic relation between the sensor and the scene. Finally, results are presented both in simulation and in our experimental cell. They concern a positioning task with respect to a sphere, and show the robustness and stability of the control scheme.


IEEE Transactions on Intelligent Transportation Systems | 2009

Autonomous Navigation of Vehicles from a Visual Memory Using a Generic Camera Model

Jonathan Courbon; Youcef Mezouar; Philippe Martinet

In this paper, we present a complete framework for autonomous vehicle navigation using a single camera and natural landmarks. When navigating in an unknown environment for the first time, a natural behavior consists of memorizing some key views along the performed path, in order to use these references as checkpoints for future navigation missions. The navigation framework for wheeled vehicles presented in this paper is based on this assumption. During a human-guided learning step, the vehicle performs paths that are sampled and stored as a set of ordered key images, as acquired by an embedded camera. The visual paths are topologically organized, providing a visual memory of the environment. Given an image of the visual memory as a target, the vehicle navigation mission is defined as a concatenation of visual path subsets called visual routes. When autonomously running, the control guides the vehicle along the reference visual route without explicitly planning any trajectory. The control consists of a vision-based control law that is adapted to the nonholonomic constraint. Our navigation framework has been designed for a generic class of cameras (including conventional, catadioptric, and fisheye cameras). Experiments with an urban electric vehicle navigating in an outdoor environment have been carried out with a fisheye camera along a 750-m-long trajectory. Results validate our approach.
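The topological organization of the visual memory and the definition of a mission as a concatenation of visual path subsets can be sketched as a graph search over key images (the node names and edges below are invented):

```python
from collections import deque

def visual_route(edges, start, goal):
    # The visual memory is an undirected graph whose nodes are key
    # images; a visual route is the shortest chain of key images
    # linking the current image to the target (plain BFS).
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            route = []
            while node is not None:
                route.append(node)
                node = prev[node]
            return route[::-1]
        for nb in graph.get(node, []):
            if nb not in prev:
                prev[nb] = node
                queue.append(nb)
    return None

# Invented memory: two learned paths sharing their endpoints.
memory = [("I0", "I1"), ("I1", "I2"), ("I2", "I3"), ("I0", "I4"), ("I4", "I3")]
route = visual_route(memory, "I0", "I3")
```

The vehicle then servoes from one key image of the route to the next, so no metric trajectory ever needs to be planned.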


International Conference on Robotics and Automation | 2005

Indoor Navigation of a Wheeled Mobile Robot along Visual Routes

Guillaume Blanc; Youcef Mezouar; Philippe Martinet

When navigating in an unknown environment for the first time, a natural behavior consists in memorizing some key views along the performed path, in order to use these references as checkpoints for a future navigation mission taking a similar path. This assumption is used in this paper as the basis of a navigation framework for wheeled mobile robots in indoor environments. During a human-guided teleoperated learning step, the robot performs paths which are sampled and stored as a set of ordered key images, acquired by a standard embedded camera. The set of these obtained visual paths is topologically organized and provides a visual memory of the environment. Given an image of one of the visual paths as a target, the robot navigation mission is defined as a concatenation of visual path subsets, called visual route. When running autonomously, the robot is controlled by a visual servoing law adapted to its nonholonomic constraint. Based on the regulation of successive homographies, this control guides the robot along the reference visual route without explicitly planning any trajectory. Real experiment results illustrate the validity of the presented framework.

Collaboration


Dive into Philippe Martinet's collaborations.

Top Co-Authors

Benoit Thuilot (Blaise Pascal University)
Nicolas Andreff (Centre national de la recherche scientifique)
Youcef Mezouar (Centre national de la recherche scientifique)
Lounis Adouane (Centre national de la recherche scientifique)
Sébastien Briot (Centre national de la recherche scientifique)
Nicolas Bouton (Centre national de la recherche scientifique)
Tej Dallej (Blaise Pascal University)