Cédric Demonceaux
Centre national de la recherche scientifique
Publications
Featured research published by Cédric Demonceaux.
Computer Vision and Image Understanding | 2010
Jean Charles Bazin; Cédric Demonceaux; Pascal Vasseur; In So Kweon
Previous work has shown that catadioptric systems are particularly well suited to egomotion estimation thanks to their large field of view, and numerous algorithms have accordingly been proposed in the literature. In this paper, we present a method for estimating six-degree-of-freedom camera motion from central catadioptric images in man-made environments. State-of-the-art methods can obtain very impressive results; however, our proposed system provides two strong advantages over existing methods: first, it implicitly handles the difficulty of planar/non-planar scenes, and second, it is computationally much less expensive. The only assumption concerns the presence of parallel straight lines, which is reasonable in a man-made environment. More precisely, we estimate the motion by decoupling the rotation and the translation. The rotation is computed by an efficient algorithm based on the detection of dominant bundles of parallel catadioptric lines, and the translation is calculated with a robust 2-point algorithm. We also show that the line-based approach makes it possible to estimate the absolute attitude (roll and pitch angles) at each frame, without error accumulation. The efficiency of our approach has been validated by experiments in both indoor and outdoor environments, and by comparison with other existing methods.
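The decoupling above can be illustrated with a minimal, noise-free sketch of a generic 2-point translation step (not the paper's implementation; all names and values are hypothetical). Once the rotation R is known, the epipolar constraint x1 · (t × R x0) = 0 says that each bearing correspondence constrains t to a plane, so two correspondences fix its direction:

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def translation_from_two_points(R, x0s, x1s):
    """Known rotation R, two bearing correspondences (x0_i, x1_i).
    The epipolar constraint x1 . (t x R x0) = 0 means t is orthogonal to
    every n_i = (R x0_i) x x1_i; two such normals fix t up to sign/scale."""
    n1 = np.cross(R @ x0s[0], x1s[0])
    n2 = np.cross(R @ x0s[1], x1s[1])
    t = np.cross(n1, n2)
    return t / np.linalg.norm(t)

# noise-free synthetic check: camera motion X1 = R X0 + t
R_true = rot_z(0.1)
t_true = np.array([1.0, 0.5, 0.2])
P = [np.array([1.0, 2.0, 5.0]), np.array([-2.0, 1.0, 7.0])]
x0s = [p / np.linalg.norm(p) for p in P]
x1s = [(R_true @ p + t_true) / np.linalg.norm(R_true @ p + t_true) for p in P]
t_est = translation_from_two_points(R_true, x0s, x1s)  # +/- t_true direction
```

With real data the sign ambiguity is resolved by cheirality and the estimate is wrapped in a robust (e.g. RANSAC) loop, as the abstract's "robust 2-point algorithm" suggests.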
Computer Vision and Pattern Recognition | 2012
Jean Charles Bazin; Yongduek Seo; Cédric Demonceaux; Pascal Vasseur; Katsushi Ikeuchi; In So Kweon; Marc Pollefeys
The projections of world parallel lines in an image intersect at a single point called the vanishing point (VP). VPs are a key ingredient for various vision tasks including rotation estimation and 3D reconstruction. Urban environments generally exhibit some dominant orthogonal VPs. Given a set of lines extracted from a calibrated image, this paper aims to (1) determine the line clustering, i.e. find which line belongs to which VP, and (2) estimate the associated orthogonal VPs. None of the existing methods is fully satisfactory because of the inherent difficulties of the problem, such as the local minima and the chicken-and-egg aspect. In this paper, we present a new algorithm that solves the problem in a mathematically guaranteed globally optimal manner and can inherently enforce the VP orthogonality. Specifically, we formulate the task as a consensus set maximization problem over the rotation search space, and further solve it efficiently by a branch-and-bound procedure based on the Interval Analysis theory. Our algorithm has been validated successfully on sets of challenging real images as well as synthetic data sets.
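To convey the consensus-set-maximization idea, here is a deliberately simplified sketch. In a Manhattan world the three orthogonal VP directions are the columns of a rotation matrix, and a line (represented by its interpretation-plane normal n) is consistent with a VP direction d when n · d = 0. The paper's branch-and-bound with interval analysis is replaced below by a coarse exhaustive grid over Euler angles; everything here is an illustrative assumption, not the authors' algorithm:

```python
import numpy as np

def euler_to_rot(a, b, c):
    """R = Rz(c) @ Ry(b) @ Rx(a)."""
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def consensus(R, normals, eps=1e-3):
    """A line with interpretation-plane normal n passes through the VP of
    direction d iff n . d = 0; count lines matching one of R's columns."""
    dots = np.abs(normals @ R)                 # N x 3 alignment scores
    return int(np.sum(dots.min(axis=1) < eps))

def grid_search_rotation(normals, angles):
    """Exhaustive stand-in for the paper's branch-and-bound:
    maximize the consensus over a grid of Euler angles."""
    best = (-1, None)
    for a in angles:
        for b in angles:
            for c in angles:
                R = euler_to_rot(a, b, c)
                s = consensus(R, normals)
                if s > best[0]:
                    best = (s, R)
    return best

# synthetic lines generated from a known Manhattan frame
rng = np.random.default_rng(0)
R_true = euler_to_rot(0.1, 0.2, 0.3)
normals = []
for i in range(30):
    d = R_true[:, i % 3]                    # VP direction for this line
    n = np.cross(d, rng.normal(size=3))     # any normal orthogonal to d
    normals.append(n / np.linalg.norm(n))
normals = np.vstack(normals)
best_count, R_best = grid_search_rotation(normals, np.linspace(0.0, 0.4, 5))
```

The clustering then falls out for free: each line is assigned to the column of R_best that its normal is most orthogonal to, which is exactly the rotation-search formulation the abstract describes.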
The International Journal of Robotics Research | 2012
Jean Charles Bazin; Cédric Demonceaux; Pascal Vasseur; In So Kweon
Rotation estimation is a fundamental step in various robotic applications such as automatic control of ground/aerial vehicles, motion estimation and 3D reconstruction. However, it is now well established that traditional navigation equipment, such as global positioning systems (GPS) or inertial measurement units (IMUs), suffers from several disadvantages. Hence, some vision-based approaches have been proposed recently. While they can obtain interesting results, the existing methods have non-negligible limitations, such as difficult feature matching (e.g. repeated textures, blur or illumination changes) and a high computational cost (e.g. analysis in the frequency domain). Moreover, most of them use conventional perspective cameras and thus have a limited field of view. To overcome these limitations, we present in this paper a novel rotation estimation approach based on the extraction of vanishing points in omnidirectional images. The first advantage is that our rotation estimation is decoupled from the translation computation, which accelerates the execution time and leads to a better control solution. This is made possible by our complete framework dedicated to omnidirectional vision, whereas conventional vision suffers from a rotation/translation ambiguity. Second, we propose a top-down approach which maintains the important constraint of vanishing point orthogonality by inverting the problem: instead of performing a difficult preliminary line-clustering step, we directly search for the orthogonal vanishing points. Finally, experimental results on various data sets for diverse robotic applications demonstrate that our framework is accurate, robust, maintains the orthogonality of the vanishing points and can run in real time.
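Once orthogonal vanishing points have been tracked in two frames, the relative rotation follows directly, since the matched direction triplets differ by exactly that rotation. The sketch below solves this with a standard orthogonal-Procrustes step (SVD); it is a generic textbook construction under ideal matching, not the paper's exact pipeline:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rotation_from_vps(V0, V1):
    """Relative rotation from matched vanishing-point direction triplets
    (the columns of V0 and V1): orthogonal Procrustes,
    R = argmin ||V1 - R V0||_F, solved by an SVD."""
    U, _, Vt = np.linalg.svd(V1 @ V0.T)
    if np.linalg.det(U @ Vt) < 0:   # keep a proper rotation (det = +1)
        U[:, -1] *= -1
    return U @ Vt

# synthetic check: frame-1 VPs are the rotated frame-0 VPs
R_true = rot_x(0.3)
V0 = np.eye(3)          # assumed Manhattan directions in frame 0
V1 = R_true @ V0
R_est = rotation_from_vps(V0, V1)
```

The SVD form also averages out noise gracefully when the tracked directions are only approximately orthogonal.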
Journal of Intelligent and Robotic Systems | 2012
Abd El Rahman Shabayek; Cédric Demonceaux; Olivier Morel; David Fofi
Unmanned aerial vehicles (UAVs) are increasingly replacing manned systems in situations that are dangerous, remote, or difficult for manned aircraft to access. Their control tasks are increasingly supported by computer vision, with visual sensors routinely used for stabilization as primary or at least secondary sensors. Hence, UAV stabilization by attitude estimation from visual sensors is a very active research area, and vision-based techniques are proving their effectiveness and robustness in handling this problem. In this work we give a comprehensive review of vision-based UAV attitude estimation approaches, starting from horizon-based methods and continuing through vanishing-point, optical-flow, and stereoscopic techniques. A novel segmentation approach for UAV attitude estimation based on polarization is proposed. Our perspectives on future attitude estimation from uncalibrated catadioptric sensors are also discussed.
International Conference on Robotics and Automation | 2006
Cédric Demonceaux; Pascal Vasseur; C. Regard
Unmanned aerial vehicles (UAVs) are the subject of increasing interest in many applications. Autonomy is one of the major advantages of these vehicles, so it is necessary to develop dedicated sensors that provide efficient navigation functions. In this paper, we propose a method for attitude computation from catadioptric images. We first demonstrate the advantages of the catadioptric vision sensor for this application: the geometric properties of the sensor make it possible to compute the roll and pitch angles easily. The method consists in separating the sky from the earth in order to detect the horizon. We propose an adaptation of Markov random fields to catadioptric images for this segmentation. The second step consists in estimating the parameters of the horizon line with a robust estimation algorithm. We also present the angle estimation algorithm and, finally, show experimental results on synthetic and real images captured from an airplane.
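The final step, attitude from the horizon-line parameters, can be sketched under a strong simplification: a pinhole model in which the horizon projects to a straight line (the paper works on the catadioptric image itself, where the horizon is a conic). With small angles, the line's slope gives roll and its vertical offset at the principal point gives pitch; all parameter names and values below are hypothetical:

```python
import math

def attitude_from_horizon(points, f, cx, cy):
    """Roll/pitch from horizon pixel samples under a pinhole, small-angle
    simplification (the paper fits the horizon in the catadioptric image).
    f: focal length in pixels; (cx, cy): assumed principal point."""
    # least-squares line fit v = a*u + b through the horizon samples
    n = len(points)
    su = sum(u for u, _ in points)
    sv = sum(v for _, v in points)
    suu = sum(u * u for u, _ in points)
    suv = sum(u * v for u, v in points)
    a = (n * suv - su * sv) / (n * suu - su * su)
    b = (sv - a * su) / n
    roll = math.atan(a)                    # horizon slope -> roll
    v_mid = a * cx + b                     # horizon height at the principal point
    pitch = math.atan((v_mid - cy) / f)    # vertical offset -> pitch
    return roll, pitch

# synthetic horizon for roll = 0.05 rad, pitch = 0.1 rad, f = 500 px
f, cx, cy = 500.0, 320.0, 240.0
pts = [(u, math.tan(0.05) * (u - cx) + cy + f * math.tan(0.1))
       for u in (100.0, 200.0, 300.0, 400.0, 500.0)]
roll_est, pitch_est = attitude_from_horizon(pts, f, cx, cy)
```

The paper replaces the naive least-squares fit with a robust estimator precisely because real sky/ground segmentations contain outliers.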
International Conference on Robotics and Automation | 2008
Jean Charles Bazin; In So Kweon; Cédric Demonceaux; Pascal Vasseur
Unmanned aerial vehicles (UAVs) are the subject of increasing interest in many applications, and a key requirement is the stabilization of the vehicle. Previous works have suggested using catadioptric vision instead of traditional perspective cameras in order to gather much more information from the environment and therefore improve the robustness of UAV attitude estimation. This paper belongs to a series of recent publications by our research group concerning catadioptric vision for UAVs. Here, we focus on estimating the complete attitude of a UAV flying in an urban environment. In order to avoid the limitations of horizon-based approaches and the difficulties of traditional epipolar methods (such as the rotation/translation ambiguity, lack of features, or retrieving motion parameters from a matrix decomposition), and to improve UAV dynamic control, we suggest computing the infinite homography. We show how catadioptric vision plays a key role in, first, extracting a large number of lines; second, robustly estimating the associated vanishing points; and third, tracking them even over long video sequences. Therefore it is possible not only to estimate the relative rotation between consecutive frames but also to compute the absolute rotation between two distant frames without error accumulation. Finally, we present experimental results with ground-truth data to demonstrate the accuracy and robustness of our method.
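For calibrated cameras, the infinite homography has the standard closed form H∞ = K R K⁻¹: it transfers the images of infinitely distant features, in particular vanishing points, between two frames regardless of translation. A minimal sketch with assumed intrinsics:

```python
import numpy as np

def infinite_homography(K, R):
    """Transfer of points at infinity (e.g. vanishing points) between two
    frames related by rotation R, independent of translation: x1 ~ K R K^-1 x0."""
    return K @ R @ np.linalg.inv(K)

# assumed intrinsics and a small rotation about the y axis
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.1
R = np.array([[np.cos(a), 0.0, np.sin(a)],
              [0.0, 1.0, 0.0],
              [-np.sin(a), 0.0, np.cos(a)]])
Hinf = infinite_homography(K, R)

d = np.array([0.2, -0.1, 1.0])    # a 3D direction (point at infinity)
x0 = K @ d                         # its image in frame 0 (homogeneous)
x1 = K @ (R @ d)                   # its image in frame 1
# Hinf @ x0 equals x1 up to scale (here exactly, by construction)
```

Inverting this relation is what lets vanishing-point tracks yield R directly, without decomposing an essential matrix.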
Vehicular Technology Conference | 2004
Cédric Demonceaux; Alexis Potelle; Djemaa Kachi-Akkouche
This paper deals with the problem of obstacle detection in traffic applications. The proposed device allows a driver to receive current information about the road and the vehicle's environment. The perception of the environment is performed through fast processing of image sequences acquired from a single camera mounted on a vehicle, based on frame motion analysis. The road motion is first computed through a fast and robust wavelet analysis. We then detect the areas that exhibit a different motion using a Bayesian model. The results shown in this paper demonstrate that the proposed method permits the detection of obstacles on all types of roads under various imaging conditions.
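The idea of flagging regions whose motion disagrees with the road motion can be sketched very roughly as follows: a global affine motion model fitted by least squares stands in for the paper's wavelet-based estimation, and a simple residual threshold stands in for the Bayesian model. All data and thresholds are made up for illustration:

```python
import numpy as np

def detect_moving_obstacles(pts, flow, thresh):
    """Fit a single affine motion model to the flow field (a stand-in for
    the paper's wavelet-based dominant-motion estimation) and flag points
    whose observed flow deviates from it, as obstacle candidates."""
    X = np.column_stack([pts, np.ones(len(pts))])   # N x 3 design matrix
    A, *_ = np.linalg.lstsq(X, flow, rcond=None)    # 3 x 2 affine parameters
    residual = np.linalg.norm(flow - X @ A, axis=1)
    return residual > thresh

# synthetic scene: dominant road motion plus two independently moving points
pts = np.array([[float(i), float(j)] for i in range(10) for j in range(10)])
A_true = np.array([[0.1, 0.0], [0.0, -0.05], [1.0, 0.5]])
flow = np.column_stack([pts, np.ones(len(pts))]) @ A_true
flow[23] += 5.0                    # obstacles moving differently
flow[57] += 5.0
mask = detect_moving_obstacles(pts, flow, thresh=2.5)
```

A Bayesian treatment, as in the paper, would additionally enforce spatial coherence of the detected regions rather than thresholding pixels independently.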
International Conference on Robotics and Automation | 2007
Cédric Demonceaux; Pascal Vasseur; Claude Pégard
Attitude is one of the most important parameters for a UAV during flight. Vision-based attitude computation methods generally use the horizon line as a reference. However, the horizon line becomes an inadequate feature in urban environments. We therefore propose in this paper an omnidirectional vision system based on straight lines (very frequent in urban environments) that is able to compute the roll and pitch angles. The method consists in finding bundles of horizontal and vertical parallel lines in order to obtain an absolute reference for the attitude computation. We also develop a new and efficient method for line extraction and parallel-line bundle detection, and an original method for horizontal and vertical plane detection. We show experimental results on different images extracted from video sequences.
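The link between a vertical line bundle and the roll/pitch angles can be made concrete: the bundle's common direction estimates gravity in the camera frame, from which roll and pitch follow by the same formulas used with accelerometers. The convention below (R = Rz(yaw) Ry(pitch) Rx(roll)) is an assumption for the sketch, not necessarily the paper's:

```python
import math

def roll_pitch_from_vertical(g_b):
    """g_b: unit vector of the vertical (gravity) direction in the camera/body
    frame, e.g. the common direction of the dominant vertical line bundle.
    Aerospace convention R = Rz(yaw) Ry(pitch) Rx(roll); yaw is unobservable
    from the vertical direction alone."""
    pitch = -math.asin(g_b[0])
    roll = math.atan2(g_b[1], g_b[2])
    return roll, pitch

# synthetic check: body-frame gravity for roll = 0.2, pitch = -0.1
roll_t, pitch_t = 0.2, -0.1
g_b = (-math.sin(pitch_t),
       math.sin(roll_t) * math.cos(pitch_t),
       math.cos(roll_t) * math.cos(pitch_t))
roll_est, pitch_est = roll_pitch_from_vertical(g_b)
```

Horizontal bundles then pin down the remaining heading reference, which is why the paper searches for both horizontal and vertical parallel lines.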
Intelligent Robots and Systems | 2010
Damien Eynard; Pascal Vasseur; Cédric Demonceaux; Vincent Fremont
Altitude is one of the most important parameters for an Unmanned Aerial Vehicle (UAV) to know, especially during critical maneuvers such as landing or steady flight. In this paper, we present a mixed stereoscopic vision system made of a fish-eye camera and a perspective camera for altitude estimation. Contrary to classical stereoscopic systems based on feature matching, we propose a plane-sweeping approach in order to estimate the altitude and consequently detect the ground plane. Since a homography exists between the two views, and since the sensor is calibrated and the attitude is estimated by the fish-eye camera, the algorithm consists in searching for the altitude that satisfies this homography. We show that this approach is robust and accurate, and that a CPU implementation allows real-time estimation. Experimental results on real sequences from a small UAV demonstrate the effectiveness of the approach.
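The plane-sweeping idea can be sketched with the standard plane-induced homography: for a ground plane n · X = d in the first view and relative pose (R, t), H(d) = K1 (R + t nᵀ/d) K0⁻¹, so one can sweep candidate altitudes d and keep the one whose homography best explains the two views. Below, photometric scoring is replaced by point-transfer error on synthetic data; all values are hypothetical:

```python
import numpy as np

def ground_homography(K0, K1, R, t, n, d):
    """Homography induced by the plane n . X = d (first-view frame) between
    two calibrated views with relative pose X1 = R X0 + t."""
    return K1 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K0)

def sweep_altitude(K0, K1, R, t, n, pts0, pts1, candidates):
    """Keep the candidate altitude whose homography best maps pts0 (3xN,
    homogeneous) onto pts1 (2xN, pixels); point-transfer error replaces
    the photometric score a real plane sweep would use."""
    best_d, best_err = None, np.inf
    for d in candidates:
        H = ground_homography(K0, K1, R, t, n, d)
        p = H @ pts0
        err = np.mean(np.linalg.norm(p[:2] / p[2] - pts1, axis=0))
        if err < best_err:
            best_d, best_err = d, err
    return best_d

# synthetic downward-looking setup, true altitude 10
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
a = 0.05
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a), np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, -0.3, 0.1])
n = np.array([0.0, 0.0, 1.0])              # ground plane z = d in view 0
X0 = np.array([[1.0, -2.0, 3.0, 0.5],
               [2.0, 1.0, -1.0, -2.0],
               [10.0, 10.0, 10.0, 10.0]])  # points on the plane
x0 = K @ X0
pts0 = x0 / x0[2]
X1 = R @ X0 + t[:, None]
x1 = K @ X1
pts1 = (x1 / x1[2])[:2]
d_est = sweep_altitude(K, K, R, t, n, pts0, pts1, [8.0, 9.0, 10.0, 11.0, 12.0])
```

In the paper the attitude from the fish-eye camera supplies R and n, so the altitude d is the only remaining unknown, which is what makes the one-dimensional sweep possible.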
Intelligent Robots and Systems | 2006
Cédric Demonceaux; Pascal Vasseur; Claude Pégard
Attitude (roll and pitch) is essential data for the navigation of a UAV. Rather than using inertial sensors, we propose a catadioptric vision system allowing a fast, robust and accurate estimation of these angles. We show that the optimization of a sky/ground partitioning criterion, combined with the specific geometric characteristics of the catadioptric sensor, provides very interesting results. Experimental results obtained on real sequences are presented and compared with inertial sensor measurements.