Omar Ait-Aider
Blaise Pascal University
Publications
Featured research published by Omar Ait-Aider.
european conference on computer vision | 2006
Omar Ait-Aider; Nicolas Andreff; Jean Marc Lavest; Philippe Martinet
An original concept for computing the instantaneous 3D pose and 3D velocity of fast-moving objects using a single view is proposed, implemented and validated. It takes advantage of the image deformations induced by the rolling shutter in CMOS image sensors. First of all, after analysing the rolling shutter phenomenon, we introduce an original model of image formation when using such a camera, based on a general model of moving rigid sets of 3D points. Using 2D-3D point correspondences, we derive two complementary methods, compensating for the rolling shutter deformations to deliver an accurate 3D pose and exploiting them to also estimate the full 3D velocity. The first solution is a general one based on non-linear optimization and bundle adjustment, usable for any object, while the second one is a closed-form linear solution valid for planar objects. The resulting algorithms enable us to transform a low-cost, low-power CMOS camera into an innovative and powerful velocity sensor. Finally, experimental results with real data confirm the relevance and accuracy of the approach.
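As a rough illustration of the kind of image-formation model the abstract refers to (the notation below, with line-scan period $\tau$ and first-order velocity terms, is assumed here rather than taken from the paper), a rolling shutter projection of a moving point can be sketched as

$t_i = t_0 + \tau\, v_i, \qquad m_i \simeq \pi\!\left( K \left[\, \mathrm{e}^{[\Omega]_\times \tau v_i}\, R_0 \;\middle|\; T_0 + V\, \tau v_i \,\right] \tilde{P}_i \right),$

where $v_i$ is the image row on which point $P_i$ is exposed, $(R_0, T_0)$ the pose at $t_0$, and $(\Omega, V)$ the angular and translational velocities; pose and velocity are then recovered by minimising the reprojection error over the 2D-3D correspondences.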
computer vision and pattern recognition | 2007
Omar Ait-Aider; Adrien Bartoli; Nicolas Andreff
Recent work shows that recovering pose and velocity from a single view of a moving rigid object is possible with a rolling shutter camera, based on feature point correspondences. We extend this method to line correspondences. Owing to the combined effect of rolling shutter and object motion, straight lines are distorted to curves as they get imaged with a rolling shutter camera. Lines thus capture more information than points, which is not the case with standard projection models for which both points and lines give two constraints. We extend the standard line reprojection error, and propose a nonlinear method for retrieving a solution to the pose and velocity computation problem. A careful inspection of the design matrix in the normal equations reveals that it is highly sparse and patterned. We propose a blockwise solution procedure based on bundle-adjustment-like sparse inversion. This makes nonlinear optimization fast and numerically stable. The method is validated using real data.
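For reference, the bundle-adjustment-like sparse inversion mentioned above typically follows the standard Schur-complement pattern (the block names here are generic, not the paper's):

$\begin{pmatrix} U & W \\ W^{\top} & V \end{pmatrix}\begin{pmatrix} \delta_a \\ \delta_b \end{pmatrix} = \begin{pmatrix} \epsilon_a \\ \epsilon_b \end{pmatrix} \;\Rightarrow\; (U - W V^{-1} W^{\top})\,\delta_a = \epsilon_a - W V^{-1}\epsilon_b, \qquad \delta_b = V^{-1}(\epsilon_b - W^{\top}\delta_a),$

where $V$ is block-diagonal, so its inversion is cheap and the remaining reduced system is small.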
international conference on computer vision | 2009
Omar Ait-Aider; François Berry
We describe a spatio-temporal triangulation method to be used with rolling shutter cameras. We show how a single pair of rolling shutter images enables the computation of both the structure and motion of rigid moving objects. Starting from a set of point correspondences in the left and right images, we introduce the velocity and shutter characteristics into the triangulation equations. This results in a non-linear error criterion whose minimization in the least-squares sense provides the shape and velocity parameters. Unlike previous work on rolling shutter cameras, the constraining assumption of a priori knowledge of the object geometry is removed and a full 3D motion model is considered. The aim of this work is thus to make the use of rolling shutter cameras of broader interest. Experimental evaluation results confirm the feasibility of the approach.
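A plausible form of the non-linear criterion alluded to (notation assumed here) is a stereo reprojection error in which each observation carries its own row-dependent exposure time:

$E = \sum_i \left\| m_i^{l} - \pi_l\!\big(X(t_i^{l})\,P_i\big) \right\|^2 + \left\| m_i^{r} - \pi_r\!\big(X(t_i^{r})\,P_i\big) \right\|^2,$

where $X(t)$ extrapolates the object pose to the exposure time of the corresponding image row, and the minimisation runs over the 3D points $P_i$ together with the pose and velocity parameters of $X$.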
british machine vision conference | 2010
Pierre Lébraly; Eric Royer; Omar Ait-Aider; Michel Dhome
Multi-camera systems are increasingly used in vision-based robotics, and an accurate extrinsic calibration is usually required. In most cases, this task is done by matching features across different views of the same scene. However, if the cameras' fields of view do not overlap, such a matching procedure is no longer feasible. This article deals with a simple and flexible extrinsic calibration method for a non-overlapping camera rig. The aim is the calibration of non-overlapping cameras embedded on a vehicle, for visual navigation purposes in urban environments. The cameras do not see the same area at the same time. The calibration procedure consists in manoeuvring the vehicle while each camera observes a static scene. The main contributions are a study of the singular motions and a specific bundle adjustment which both reconstructs the scene and calibrates the cameras. Solutions to handle the singular configurations, such as planar motions, are presented. The proposed approach has been validated with synthetic and real data.
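As an aside on why planar motions are singular for this kind of calibration, the standard hand-eye argument (with notation assumed here, not the paper's) goes as follows: writing the rigidity constraint between a rig motion $A$ and the corresponding camera motion $B$ as $AX = XB$, the translation part gives

$(R_A - I)\, t_X = R_X\, t_B - t_A,$

and $(R_A - I)$ is rank-deficient along the rotation axis of $R_A$. If every motion shares the same rotation axis, as in planar motion, the component of $t_X$ along that axis is unobservable and must be handled separately.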
international conference on computer vision systems | 2006
Omar Ait-Aider; Nicolas Andreff; Jean Marc Lavest; Philippe Martinet
An original method for computing the instantaneous 3D pose and velocity of fast-moving objects using a single view is presented. It exploits the image deformations induced by the rolling shutter in CMOS image sensors. First of all, a general perspective projection model of a moving 3D point is presented. A solution to the pose and velocity recovery problem is then described. The method is based on bundle adjustment and uses point correspondences. The resulting algorithm makes it possible to transform a low-cost, low-power CMOS camera into an original velocity sensor. Finally, experimental results with real data confirm the relevance of the approach.
international conference on robotics and automation | 2011
Pierre Lébraly; Eric Royer; Omar Ait-Aider; Clement Deymier; Michel Dhome
This article deals with a simple and flexible extrinsic calibration method for a non-overlapping camera rig. The cameras do not see the same area at the same time; they are rigidly linked and can be moved. The most representative application is the mobile robotics domain. The calibration procedure consists in maneuvering the system while each camera observes a static scene. A linear solution derived from a hand-eye calibration scheme is proposed to compute an initial estimate of the extrinsic parameters. The main contribution is a specific bundle adjustment which refines both the scene geometry and the extrinsic parameters. Finally, an efficient implementation of the specific bundle adjustment step is described for online calibration purposes. The proposed approach is validated with both synthetic and real data.
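A minimal sketch of a linear AX = XB hand-eye style initialisation of the kind mentioned above (a generic formulation under assumed conventions, not the authors' code; the function and variable names are hypothetical):

import numpy as np

def handeye_linear(As, Bs):
    """As, Bs: lists of 4x4 relative motions of the rig and of one camera.
    Returns a 4x4 estimate of X solving A X = X B in the least-squares sense."""
    # Rotation: R_A R_X = R_X R_B  =>  (kron(R_A, I) - kron(I, R_B^T)) vec(R_X) = 0
    # (row-major vec convention, matching numpy's reshape)
    M = np.vstack([np.kron(A[:3, :3], np.eye(3)) - np.kron(np.eye(3), B[:3, :3].T)
                   for A, B in zip(As, Bs)])
    _, _, Vt = np.linalg.svd(M)
    Rx = Vt[-1].reshape(3, 3)            # null-space vector of M
    U, _, Vt2 = np.linalg.svd(Rx)        # re-project onto SO(3)
    Rx = U @ Vt2
    if np.linalg.det(Rx) < 0:
        Rx = -Rx
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all motion pairs
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tX = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tX
    return X

At least two motion pairs with non-parallel rotation axes are needed for the translation to be fully observable, which is exactly the singular-motion issue discussed above; the specific bundle adjustment then refines this initial estimate together with the scene geometry.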
european conference on computer vision | 2012
Ludovic Magerand; Adrien Bartoli; Omar Ait-Aider; Daniel Pizarro
Low-cost CMOS cameras can have an acquisition mode called rolling shutter which sequentially exposes the scan-lines. When a single object moves with respect to the camera, this creates image distortions. Assuming the 2D-3D correspondences are known, previous work showed that the object pose and kinematics can be estimated from a single rolling shutter image. This was achieved using a suboptimal initialization followed by local iterative optimization. We propose a polynomial projection model for rolling shutter cameras and a constrained global optimization of its parameters. This is done by means of a semidefinite programming problem obtained from the generalized problem of moments method. Contrary to previous work, our optimization does not require an initialization and ensures that the global minimum is achieved. This also allows us to automatically build robust 2D-3D correspondences, using a template to provide an initial set of correspondences. Experiments show that our method slightly improves on previous work on both simulated and real data; the difference is due to local minima into which previous methods get trapped. We also successfully experimented with building 2D-3D correspondences automatically on both simulated and real data.
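For context, the generalized problem of moments approach referred to is Lasserre's moment relaxation; in its generic textbook form (not the paper's specific formulation), a polynomial problem

$\min_{x}\; f(x) \quad \text{s.t.}\quad g_j(x) \ge 0, \; j = 1, \dots, m,$

is relaxed to the semidefinite program

$\min_{y}\; L_y(f) \quad \text{s.t.}\quad M_d(y) \succeq 0, \quad M_{d - d_j}(g_j\, y) \succeq 0, \quad y_0 = 1, \qquad d_j = \lceil \deg g_j / 2 \rceil,$

where $y$ collects the moments of a candidate measure, $L_y$ is the associated linear functional, and $M_d(y)$ and $M_{d-d_j}(g_j\, y)$ are the moment and localizing matrices; under mild conditions the relaxation converges to the global minimum, which is what removes the need for an initialization.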
The International Journal of Robotics Research | 2012
Redwan Dahmouche; Nicolas Andreff; Youcef Mezouar; Omar Ait-Aider; Philippe Martinet
One of the main unsolved drawbacks of vision-based control is the poor dynamic performance caused by the low acquisition frequency of vision systems and the latency due to processing. In this paper we address the challenge of designing a high-performance dynamic visual servo control scheme. Two versatile control laws are developed: position-based dynamic visual servoing and image-based dynamic visual servoing. Both control laws are designed to compute the control torques exclusively from a sequential acquisition of regions of interest containing the visual features, in order to achieve accurate trajectory tracking. The presented experiments on vision-based dynamic control of a high-speed parallel robot show that the proposed control schemes can perform better than joint-based computed torque control.
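As a hedged illustration of the kind of structure such dynamic visual servoing laws build on (a generic resolved-acceleration computed-torque form with assumed gain names, not necessarily the paper's exact law):

$\Gamma = \hat{A}(q)\, \ddot{q}_r + \hat{H}(q, \dot{q}), \qquad \ddot{q}_r = J^{+}\!\left( \ddot{x}_d + K_d\, \dot{e} + K_p\, e - \dot{J}\dot{q} \right),$

where the error $e$ is a pose error (position-based variant) or a feature error (image-based variant) computed from the sequentially acquired regions of interest, and $J$ is the corresponding task Jacobian.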
international conference on robotics and automation | 2008
Redwan Dahmouche; Omar Ait-Aider; Nicolas Andreff; Youcef Mezouar
This paper presents a novel method for high-speed pose and velocity computation from a visual sensor. The main problem in high-speed vision is the bottleneck phenomenon which limits the video transmission rate. The proposed approach circumvents the problem by increasing the information density instead of the data transmission rate. This strategy is based on a cyclic sequential acquisition of selected regions of interest (ROIs) which provides space-time data. This acquisition mode induces an image projection deformation for dynamic objects. This paper shows how to use this artifact for the simultaneous measurement of both pose and velocity, at the same frequency as the ROI acquisition.
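A minimal sketch (assumed names and a first-order constant-velocity model, not the authors' code) of how per-ROI timestamps can enter a joint pose and velocity estimate:

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, pts3d, pts2d, times, K):
    """x = [rvec(3), t(3), omega(3), v(3)]: pose at the reference time plus a constant twist."""
    rvec, t, omega, v = x[:3], x[3:6], x[6:9], x[9:12]
    R0 = Rotation.from_rotvec(rvec).as_matrix()
    res = []
    for P, m, ti in zip(pts3d, pts2d, times):
        # extrapolate the pose to the acquisition time of this ROI
        Ri = Rotation.from_rotvec(omega * ti).as_matrix() @ R0
        Pc = Ri @ P + t + v * ti
        p = K @ Pc
        res.extend(p[:2] / p[2] - m)   # pinhole reprojection error
    return np.asarray(res)

# sol = least_squares(residuals, x0, args=(pts3d, pts2d, times, K))   # hypothetical data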
international conference on robotics and automation | 2006
Omar Ait-Aider; Nicolas Andreff; Philippe Martinet; Jean-Marc Lavest
This paper proposes an original and novel vision sensing method to be used in vision-based dynamic identification of parallel robots. Indeed, it is shown that this problem requires, at the least, estimating or, at best, measuring the end-effector pose and its time derivatives. The sensor we propose, based on a clever modelling of a CMOS rolling shutter camera, simultaneously measures the end-effector pose, via a calibrated visual pattern, and its Cartesian velocity from a single view. Although motivated by parallel robot identification, this low-cost sensor does not make any assumption on the kinematics of the robot and can thus be used for other applications. Experimental results with real data confirm the relevance of the approach and show the sensor's good practical measurement accuracy.
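As a hedged reminder of why dynamic identification calls for the pose and its time derivatives (a standard formulation, with notation assumed here): the inverse dynamic model of a rigid robot is linear in its base inertial parameters,

$\Gamma = D(q, \dot{q}, \ddot{q})\, \chi,$

and for a parallel robot the joint-space quantities $q, \dot{q}, \ddot{q}$ follow from the end-effector pose and its derivatives through the inverse kinematic model; measuring the pose and velocity directly therefore leaves only one numerical differentiation instead of two, which motivates the sensor described above.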