El Mustapha Mouaddib
University of Picardie Jules Verne
Publications
Featured research published by El Mustapha Mouaddib.
Pattern Recognition | 1998
Joan Batlle; El Mustapha Mouaddib; Joaquim Salvi
We present a survey of the most significant techniques of the last few years concerning coded structured light methods used to obtain 3D information. Depth perception is one of the most important subjects in computer vision. Stereovision is an attractive and widely used method, but it is rather limited for building 3D surface maps due to the correspondence problem. The correspondence problem can be alleviated with a method based on the structured light concept, in which a known pattern is projected onto the measured surfaces. However, the relationship between the projected pattern and the observed one must be established. This relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem.
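One classic family covered by such surveys is time-multiplexed Gray-code coding, where each projector column is labeled by the sequence of stripe patterns it belongs to. The sketch below is illustrative only (function names and the tiny 8-column projector are ours, not the survey's): a camera pixel that records which frames lit it can decode the projector column it corresponds to, which is exactly the correspondence information stereovision struggles to recover.

```python
def gray_encode(n: int) -> int:
    """Binary-reflected Gray code of n (adjacent columns differ by one bit)."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code to recover the original column index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns(num_columns: int, num_bits: int):
    """Stripe patterns, one per bit plane: patterns(...)[b][c] is 1
    if projector column c is lit in projected frame b."""
    return [[(gray_encode(c) >> b) & 1 for c in range(num_columns)]
            for b in range(num_bits)]

def decode_pixel(observed_bits) -> int:
    """A camera pixel observes one bit per frame (lit / unlit); stacking
    the bits and inverting the Gray code yields the projector column,
    i.e. the solved correspondence for that pixel."""
    g = 0
    for b, bit in enumerate(observed_bits):
        g |= bit << b
    return gray_decode(g)

# Simulate a pixel imaging column 5 of an 8-column projector, 3 bit planes.
pats = patterns(8, 3)
observed = [pats[b][5] for b in range(3)]
print(decode_pixel(observed))  # -> 5
```

Gray coding is preferred over plain binary here because a one-pixel decoding error at a stripe boundary changes only one bit, hence at most one column index.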
international conference on robotics and automation | 1997
El Mustapha Mouaddib; Joan Batlle; Joaquim Salvi
We present a summary of the most significant techniques of the last few years concerning coded structured light methods used to obtain 3D information. Depth perception is one of the most important subjects in computer vision. Stereovision is an attractive and widely used method, but it is rather limited for building 3D surface maps due to the correspondence problem. The correspondence problem can be alleviated with a method based on the structured light concept, in which a known pattern is projected onto the measured surfaces, although the relationship between the projected pattern and the observed one must be established. This relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. Accurate 3D information benefits many research subjects, such as robotics, autonomous navigation, and shape analysis.
international conference on robotics and automation | 1996
Claude Pégard; El Mustapha Mouaddib
Mobile robots currently use a combination of internal and external sensors to determine their position and orientation while following a path. Incremental encoders and gyrometers are generally used to give an approximate estimate of the localization. Nevertheless, the cumulative drift of these internal sensors must be periodically corrected with an exteroceptive sensor. The authors therefore present, in this paper, an optical omnidirectional sensor which, with suitable software, can provide an absolute localization. This sensor is made of a CCD video camera associated with a cone-shaped reflector, so a view over a 2π-radian field is available to compute the position of the robot. The authors describe the matching algorithm that places a locally observed scene within the global navigation area, conduct an accuracy analysis of this global positioning system, and present experimental results.
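The drift the paper corrects for can be reproduced with a standard differential-drive dead-reckoning model. This is a generic illustration of why encoder-only localization needs periodic absolute fixes (the robot model, wheel base, and the 1% encoder bias are our assumptions, not the paper's setup):

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot from
    incremental left/right wheel displacements (metres)."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0               # forward displacement
    dtheta = (d_right - d_left) / wheel_base   # heading change
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    return (x, y, theta + dtheta)

# A small systematic bias on one encoder makes the estimate drift without
# bound, which is what the exteroceptive absolute fix must correct.
true_pose = est_pose = (0.0, 0.0, 0.0)
for _ in range(1000):
    true_pose = integrate_odometry(true_pose, 0.010, 0.010, 0.5)
    est_pose = integrate_odometry(est_pose, 0.010, 0.0101, 0.5)  # 1% bias
err = math.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1])
print(f"position error after 10 m of straight driving: {err:.2f} m")
```

Even this tiny bias produces roughly a metre of error over ten metres, and the error keeps growing with distance travelled.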
international conference on robotics and automation | 2005
El Mustapha Mouaddib; Ryusuke Sagawa; Tomio Echigo; Yasushi Yagi
Catadioptric omnidirectional stereovision can be built from several mirrors and a single camera. These systems have interesting advantages, for instance for mobile robot navigation and environment reconstruction. Our paper aims at estimating the "quality" of such stereovision systems. What happens when the number of mirrors increases? Is it better to increase the baseline or to increase the number of mirrors? We propose criteria and a methodology to compare seven significant configurations: three existing systems and four new designs that we propose. We also present a global comparison of the best configurations.
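The baseline-versus-number-of-mirrors trade-off can be framed with the classic first-order stereo depth-error model. The numbers below and the independence assumption for pairwise estimates are ours for illustration; they are not the paper's actual comparison criteria:

```python
import math

def depth_uncertainty(Z, baseline, focal_px, disparity_err_px=0.5):
    """First-order stereo depth uncertainty: dZ = Z**2 * dd / (f * B)."""
    return Z**2 * disparity_err_px / (focal_px * baseline)

# Back-of-envelope: doubling the baseline halves the uncertainty of one
# stereo pair, while adding mirrors of similar baseline averages several
# pairwise estimates, reducing it roughly by 1/sqrt(number of extra
# pairings) -- if those estimates were independent, which real mirror
# layouts only approximate.
Z, f = 3.0, 400.0                       # metres, pixels (illustrative)
single = depth_uncertainty(Z, 0.10, f)
double_baseline = depth_uncertainty(Z, 0.20, f)
four_mirrors = single / math.sqrt(3)    # 4 views ~ 3 pairings with view 1
print(single, double_baseline, four_mirrors)
```

On this crude model, doubling the baseline beats going from two mirrors to four, which is one reason a principled comparison methodology such as the paper's is needed.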
Autonomous Robots | 2013
Guillaume Caron; Eric Marchand; El Mustapha Mouaddib
2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras. We propose, in this paper, to extend this technique to central cameras. This generalization allows this kind of method to be applied to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera to control a robot with 6 degrees of freedom, and with a catadioptric camera for a mobile robot navigation task.
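The photometric idea is that the feature vector is the whole image and the control law has the usual visual-servoing form v = -λ L⁺ (I - I*). The toy below is a 1D translation-only analogue of that scheme (the scene function, gains, and the 1D setting are ours, not the paper's implementation on real cameras):

```python
import numpy as np

xs = np.linspace(0, 2 * np.pi, 200)

def scene(u):
    return np.sin(u) + 0.5 * np.sin(3 * u)   # textured 1D "scene"

def image(x):
    return scene(xs + x)     # signal observed at camera position x

x_ref, x, lam = 0.0, 0.3, 0.5
I_ref = image(x_ref)                          # desired image I*
for _ in range(50):
    e = image(x) - I_ref                      # photometric error I - I*
    L = np.gradient(image(x), xs)             # interaction "matrix": dI/dx
    v = -lam * np.linalg.pinv(L[:, None]) @ e # v = -lam * pinv(L) (I - I*)
    x += float(v[0])                          # apply the velocity command
print(round(x, 6))  # near 0.0, the reference position
```

No features are extracted or matched at any point: the raw intensity error alone drives the camera back to the reference pose, which is the appeal of the photometric approach.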
intelligent robots and systems | 2009
Guillaume Caron; Eric Marchand; El Mustapha Mouaddib
Robot vision benefits both from the wide field of view provided by catadioptric cameras and from the redundancy brought by stereovision. Merging these two characteristics in a single sensor is achieved by combining a single camera with multiple mirrors. This paper proposes a 3D model tracking algorithm that allows robust tracking of 3D objects using the stereo catadioptric images given by this sensor. The presented work relies on an adapted virtual visual servoing approach, a non-linear pose computation technique. The model takes into account central projection and multiple mirrors. Results show robustness to illumination changes and mistracking, and even higher robustness with four mirrors than with two.
Computer Vision and Image Understanding | 2011
Amina Radgui; Cédric Demonceaux; El Mustapha Mouaddib; Mohammed Rziza; Driss Aboutajdine
The problem of optical flow estimation has been widely discussed in the computer vision literature for perspective images. It has also been shown that, when analyzing optical flow from such images, it is difficult to distinguish between some motion fields obtained with small camera motions. Omnidirectional cameras provide images with a large field of view. These images contain global information about motion and allow the ambiguity present in the perspective case to be removed. Nevertheless, they contain significant radial distortions that must be taken into account when estimating motion. In this paper, we describe a new way to compute an efficient optical flow for several camera motions, given synthetic and real omnidirectional images. Our formulation of the optical flow estimation problem is given in the spherical domain. The omnidirectional images are mapped onto the sphere and used in a multichannel image decomposition process to constrain the spherical optical flow equation. This decomposition is based on spherical wavelets. The optical flow fields obtained with our approach are illustrated and compared with the multichannel image decomposition method developed for perspective images and with other published methods dedicated to omnidirectional images.
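The constraint being lifted to the sphere is the planar brightness-constancy equation Ix·u + Iy·v + It = 0. As a reference point, here is that planar version solved by least squares for a single global translation of a synthetic periodic image (a generic Lucas-Kanade-style illustration, not the paper's spherical wavelet method):

```python
import numpy as np

x, y = np.meshgrid(np.arange(64), np.arange(64))
I0 = (np.sin(2 * np.pi * x / 16)
      + 0.5 * np.cos(2 * np.pi * y / 16)
      + 0.3 * np.sin(2 * np.pi * (x + y) / 32))
I1 = np.roll(I0, 1, axis=1)          # true flow: (u, v) = (1, 0) pixel

# Central differences with wrap-around (the synthetic image is periodic).
Ix = (np.roll(I0, -1, axis=1) - np.roll(I0, 1, axis=1)) / 2
Iy = (np.roll(I0, -1, axis=0) - np.roll(I0, 1, axis=0)) / 2
It = I1 - I0

# Brightness constancy Ix*u + Iy*v + It = 0, stacked over all pixels.
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
print(flow)  # close to [1, 0]
```

On the sphere the image is resampled over spherical coordinates and the same constancy constraint is written for motion fields on the sphere, which is what removes the small-motion ambiguity of the planar case.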
intelligent robots and systems | 1998
Bruno Marhic; El Mustapha Mouaddib; Claude Pégard
We first present a short description of our conic sensor. We then pay special attention to the projective invariant represented by the cross-ratio. We end the paper by presenting a new 1D cross-ratio application that enables us to localise our mobile robot SARAH efficiently. With the proposed method, the camera does not need to be calibrated, and this vision system, based on an omnidirectional sensor, is a major advantage for the navigation of an autonomous mobile robot.
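The reason the cross-ratio can serve as a landmark signature without camera calibration is that it is preserved by any projective transformation. A minimal numerical check of that invariance (the particular points and projectivity below are arbitrary illustrations):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a,b; c,d) of four collinear points given by 1D abscissas."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def homography_1d(x, h):
    """A 1D projective transformation x -> (h0*x + h1) / (h2*x + h3)."""
    h0, h1, h2, h3 = h
    return (h0 * x + h1) / (h2 * x + h3)

pts = [0.0, 1.0, 2.5, 4.0]
h = (2.0, -1.0, 0.3, 1.5)   # arbitrary non-degenerate projectivity
before = cross_ratio(*pts)
after = cross_ratio(*(homography_1d(p, h) for p in pts))
print(before, after)  # equal up to floating point
```

Since the imaging chain (mirror plus lens, along one image line) acts projectively on collinear scene points, the cross-ratio measured in the image equals the one known for the landmark, whatever the camera parameters.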
international conference on robotics and automation | 2011
Pauline Merveilleux; Ouiddad Labbani-Igbida; El Mustapha Mouaddib
This paper adapts parametric and geometric active contour methods to a new framework for real-time free-space extraction, taking advantage of the properties of omnivision. Both methods were formally and algorithmically adapted and improved. Comparative results, obtained on unknown indoor and outdoor images, are presented to validate the efficiency of our two snake-based approaches. We also show that active contours can be used to make a robot navigate autonomously, using only real omni-images, thanks to the extracted free-space skeleton.
intelligent robots and systems | 1999
Bruno Marhic; El Mustapha Mouaddib; Claude Pégard; Nicolas Hutin
We present a complete localisation method for mobile robots. We deal with low-level image processing, model recognition through an invariant, matching with real noisy goniometric observations, and location estimation through a triangulation technique. We show how useful invariants can be for solving the matching problem. Moreover, we point out how powerful invariants are for discriminating features in images in order to provide quick and reliable robot localisation. Our localisation technique is absolute, i.e. it needs neither an (exterior) position prediction nor a resetting technique. Finally, we present experimental results on a substantial set of real noisy images that demonstrate the reliability of our localisation method.
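The triangulation step can be sketched in its simplest setting: once landmarks are matched, each bearing (goniometric) observation constrains the robot's position to a line, and least squares intersects those lines. For simplicity the heading is assumed already known here, whereas in practice it is part of what the matching establishes; the landmark layout is an illustrative example, not the paper's experimental setup:

```python
import math
import numpy as np

def locate(landmarks, bearings):
    """Each landmark (xi, yi) seen under absolute bearing ti constrains
    the robot position (x, y) to the line
    (xi - x) * sin(ti) - (yi - y) * cos(ti) = 0, which is linear in (x, y)."""
    A, b = [], []
    for (xi, yi), ti in zip(landmarks, bearings):
        A.append([math.sin(ti), -math.cos(ti)])
        b.append(xi * math.sin(ti) - yi * math.cos(ti))
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

landmarks = [(5.0, 0.0), (0.0, 5.0), (-4.0, -3.0)]
true_pos = (1.0, 2.0)
bearings = [math.atan2(yi - true_pos[1], xi - true_pos[0])
            for xi, yi in landmarks]
print(locate(landmarks, bearings))  # ~ [1.0, 2.0]
```

With noisy bearings the same least-squares system simply averages the line constraints, and more matched landmarks mean a better-conditioned, more reliable fix.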