Maxime Lhuillier
Blaise Pascal University
Publication
Featured research published by Maxime Lhuillier.
computer vision and pattern recognition | 2006
Etienne Mouragnon; Maxime Lhuillier; Michel Dhome; Fabien Dekeyser; Patrick Sayd
In this paper we describe a method that estimates the motion of a calibrated camera (mounted on an experimental vehicle) and the three-dimensional geometry of the environment. The only data used is a video input: interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real time, and key frames are selected to enable 3D reconstruction of the features. The algorithm is particularly suited to the reconstruction of long image sequences thanks to the introduction of a fast, local bundle adjustment method that ensures both good accuracy and consistency of the estimated camera poses along the sequence, while greatly reducing computational complexity compared to a global bundle adjustment. Experiments on real data were carried out to evaluate the speed and robustness of the method on a sequence about one kilometer long. Results are also compared to ground truth measured with a differential GPS.
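As a hypothetical sketch of the sliding-window idea behind local bundle adjustment (the window size and index bookkeeping here are illustrative, not taken from the paper): only the most recent keyframes, and the points they observe, are refined at each step, while older keyframes stay fixed, so the cost of each optimization remains bounded as the sequence grows.

```python
def local_ba_windows(num_keyframes, window=3):
    """Yield (fixed, optimized) keyframe index lists for each new keyframe.

    Illustrative only: a real local bundle adjustment would run a
    non-linear least-squares solver over the cameras in `opt` and the
    3D points they observe, with the `fixed` cameras as constraints.
    """
    for k in range(num_keyframes):
        start = max(0, k - window + 1)
        opt = list(range(start, k + 1))    # refined at this step
        fixed = list(range(0, start))      # held constant
        yield fixed, opt

# With 6 keyframes and a window of 3, the last step refines only
# keyframes 3..5 while 0..2 are held fixed:
steps = list(local_ba_windows(6, window=3))
print(steps[-1])  # ([0, 1, 2], [3, 4, 5])
```

The point of the scheme is that each step solves a problem of constant size, instead of one that grows with the whole trajectory as in global bundle adjustment.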
International Journal of Computer Vision | 2007
Eric Royer; Maxime Lhuillier; Michel Dhome; Jean-Marc Lavest
This paper presents a new real-time localization system for a mobile robot. We show that autonomous navigation is possible outdoors using a single camera and natural landmarks. To do so, we use a three-step approach. In a learning step, the robot is manually guided along a path and a video sequence is recorded with a front-looking camera. A structure-from-motion algorithm is then used to build a 3D map from this learning sequence. Finally, in the navigation step, the robot uses this map to compute its localization in real time and follows the learned path, or a slightly different path if desired. The vision algorithms used for map building and localization are first detailed. A large part of the paper is then dedicated to the experimental evaluation of the accuracy and robustness of our algorithms, based on data collected over two years in various environments.
Image and Vision Computing | 2009
Etienne Mouragnon; Maxime Lhuillier; Michel Dhome; Fabien Dekeyser; Patrick Sayd
This paper describes a method for estimating the motion of a calibrated camera and the three-dimensional geometry of the filmed environment. The only data used is video input. Interest points are tracked and matched between frames at video rate. Robust estimates of the camera motion are computed in real time, and key frames are selected to enable 3D reconstruction of the features. We introduce a local bundle adjustment allowing 3D points and camera poses to be refined simultaneously through the sequence. This significantly reduces computational complexity when compared with global bundle adjustment. The method is applied initially to a perspective camera model, then extended to a generic camera model covering most existing kinds of cameras. Experiments performed using real-world data provide evaluations of the speed and robustness of the method. Results are compared to ground truth measured with a differential GPS. The generalized method is also evaluated experimentally, using three types of calibrated cameras: stereo rig, perspective and catadioptric.
european conference on computer vision | 2004
Gang Zeng; Sylvain Paris; Long Quan; Maxime Lhuillier
We present a novel approach to surface reconstruction from multiple images. The central idea is to explore the integration of both 3D stereo data and 2D calibrated images. This is motivated by the fact that only robust and accurate feature points that survived the geometric scrutiny of multiple images are reconstructed in space. The insufficient density and inevitable holes in the stereo data are then filled in using information from the images. The idea is therefore to first construct small surface patches from stereo points, then to progressively propagate only reliable patches into their neighborhoods, from images into the whole surface, using a best-first strategy. The problem reduces to searching for an optimal local surface patch going through a given set of stereo points from images. This constrained optimization for a surface patch is handled by a local graph-cut method that we develop. Real experiments demonstrate the usability and accuracy of the approach.
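The best-first propagation strategy can be sketched with a priority queue, as below. This is a hypothetical illustration: the `score` function, threshold value and grid neighborhood stand in for the paper's actual patch-reliability measure and image-based neighborhood.

```python
import heapq

def propagate(seeds, neighbours, score, threshold=0.5):
    """Best-first region growing: expand the most reliable patch first,
    and enqueue a neighbour only if it passes the reliability threshold.
    `score` and `threshold` are illustrative placeholders."""
    heap = [(-score(s), s) for s in seeds]   # max-heap via negated scores
    heapq.heapify(heap)
    accepted = set(seeds)
    while heap:
        _, patch = heapq.heappop(heap)
        for n in neighbours(patch):
            if n not in accepted and score(n) >= threshold:
                accepted.add(n)
                heapq.heappush(heap, (-score(n), n))
    return accepted

# Toy 1-D example: cell 2 is unreliable, so propagation from the seed
# never reaches cell 3 even though cell 3 itself scores well.
scores = {0: 1.0, 1: 0.9, 2: 0.4, 3: 0.8}
nbrs = lambda i: [j for j in (i - 1, i + 1) if j in scores]
print(sorted(propagate({0}, nbrs, scores.get)))  # [0, 1]
```

The design choice mirrors the paper's argument: unreliable patches are never expanded, so errors cannot propagate through them.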
british machine vision conference | 2007
Etienne Mouragnon; Maxime Lhuillier; Michel Dhome; Fabien Dekeyser; Patrick Sayd
We introduce a generic and incremental Structure from Motion method. By generic, we mean that the proposed method is independent of any specific camera model. During the incremental 3D reconstruction, parameters of 3D points and camera poses are refined simultaneously by a generic local bundle adjustment that minimizes an angular error between rays. This method has three main advantages: it is generic, fast and accurate. The proposed method is evaluated by experiments on real data with three kinds of calibrated cameras: stereo rig, perspective and catadioptric cameras.
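The angular error between rays that makes the method camera-independent can be sketched as follows. This is a minimal illustration, not the paper's implementation: it only computes the angle between an observed ray and the ray toward an estimated 3D point, both expressed in the camera frame.

```python
import math

def angular_error(ray_obs, ray_pred):
    """Angle (radians) between two 3D rays; camera-model independent,
    since any calibrated camera maps a pixel to an observation ray."""
    dot = sum(a * b for a, b in zip(ray_obs, ray_pred))
    na = math.sqrt(sum(a * a for a in ray_obs))
    nb = math.sqrt(sum(b * b for b in ray_pred))
    c = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for acos safety
    return math.acos(c)

# Two orthogonal rays disagree by 90 degrees:
err = angular_error((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(round(math.degrees(err), 3))  # 90.0
```

A bundle adjustment minimizing this quantity needs only a pixel-to-ray mapping, which is why the same refinement applies to perspective, stereo and catadioptric cameras alike.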
international conference on robotics and automation | 2006
Maxime Lhuillier; Mathieu Perriollat
Many methods exist for the automatic and optimal 3D reconstruction of camera motion and scene structure from an image sequence (structure from motion, or SfM). The solution to this problem is not unique: another solution is obtained by changing the coordinate system in which points and cameras are defined. Existing methods therefore provide a measure of confidence or uncertainty for the estimation when ground truth is not available, using a gauge constraint that fixes the reconstruction coordinate system. Here we justify and describe a method that estimates the uncertainty ellipsoids when previous methods are not straightforward to use due to the huge number of parameters to fit. Many examples are given and discussed for large reconstructions.
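Why a gauge constraint is needed before uncertainty can even be defined is visible on a toy problem. The sketch below is an illustrative assumption, far smaller than any real SfM system: two 1-D "cameras" each measure their distance to one point, so shifting everything by a constant changes nothing, and the information matrix J^T J is singular until one camera is fixed.

```python
import numpy as np

# Parameters: [c0, c1, p]; residuals: (p - c0) - d0 and (p - c1) - d1.
# The Jacobian of the residuals w.r.t. the parameters:
J = np.array([[-1.0, 0.0, 1.0],
              [0.0, -1.0, 1.0]])
H = J.T @ J                      # information matrix
print(np.linalg.matrix_rank(H))  # 2: rank-deficient (gauge freedom)

# Gauge constraint: fix c0, keep only the columns for c1 and p.
Jg = J[:, 1:]
cov = np.linalg.inv(Jg.T @ Jg)   # covariance of (c1, p) for unit noise
print(cov)
```

With thousands of cameras and points the same construction yields a huge sparse matrix, which is exactly the regime where direct inversion stops being straightforward and the methods discussed in the paper are needed.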
british machine vision conference | 2010
Alexandre Eudes; Sylvie Naudet-Collette; Maxime Lhuillier; Michel Dhome
Local bundle adjustment was recently introduced for visual SLAM (Simultaneous Localization and Mapping). In monocular visual SLAM, the scale factor is not observable and the reconstruction scale drifts over time. On long trajectories, this drift makes absolute localization unusable. To overcome this major problem, data fusion is a possible solution. In this paper, we describe Weighted Local Bundle Adjustment (W-LBA) for monocular visual SLAM. We show that W-LBA used with local covariance gives better results than local bundle adjustment, especially for scale propagation. Moreover, W-LBA is well suited to sensor fusion. Since the odometer is a common sensor that reliably provides scale information, we apply W-LBA to fuse visual SLAM with odometry data. The performance of the method is demonstrated on a large-scale sequence.
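The core of the odometry fusion, recovering the unobservable monocular scale, can be illustrated with a least-squares scale fit. This is a hedged sketch under simplifying assumptions (a single global scale, 2-D positions, no covariance weighting), not the W-LBA formulation itself:

```python
import math

def scale_from_odometry(visual_positions, odometry_distances):
    """Least-squares scale s minimizing sum((s * d_vis - d_odo)^2),
    aligning up-to-scale monocular translations with metric odometry
    distances between consecutive keyframes. Illustrative only."""
    num = den = 0.0
    for (a, b), d_odo in zip(zip(visual_positions, visual_positions[1:]),
                             odometry_distances):
        d_vis = math.dist(a, b)
        num += d_vis * d_odo
        den += d_vis * d_vis
    return num / den

# Monocular reconstruction says the keyframes are 1 unit apart;
# odometry says each step was 2 meters, so the scale is 2:
s = scale_from_odometry([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], [2.0, 2.0])
print(s)  # 2.0
```

In the paper's setting the scale is not a single constant, it drifts, which is why the correction is applied locally inside the weighted bundle adjustment rather than once globally as here.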
IWMM'04/GIAE'04 Proceedings of the 6th international conference on Computer Algebra and Geometric Algebra with Applications | 2004
Gang Zeng; Maxime Lhuillier; Long Quan
Many objects can be mathematically represented as smooth surfaces with arbitrary topology, and smooth surface reconstruction from images can be cast as a variational problem. The main difficulties are the intrinsic ill-posedness of the reconstruction, image noise, efficiency and scalability. In this paper, we discuss reconstruction approaches that use volumetric, graph-cut, and level-set optimization tools, and objective functionals that use different image information: silhouette, photometry, and texture. Our discussion is accompanied by implementations of these approaches on real examples.
IFAC Proceedings Volumes | 2007
Eric Royer; Maxime Lhuillier; Michel Dhome; François Marmoiton
In this paper, we present a framework used to compute the relative poses of several vehicles in a platooning configuration. The localization of each vehicle is done separately with monocular vision. Wireless communication lets the vehicles share their positions and compute the distances between them. This approach does not assume that the following vehicles can see the leader. The main difficulty is the synchronization of the localization information; a motion model of the vehicles is used to solve this problem. Experimental data show how vision can be used as a localization sensor for vehicle platooning.
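The synchronization step can be sketched with a constant-velocity prediction: each vehicle's last localization is extrapolated to a common timestamp before distances are compared. This is a hypothetical minimal example; the paper's motion model and message format are not specified here.

```python
def extrapolate_pose(pos, vel, t_pose, t_query):
    """Constant-velocity prediction of a vehicle's position at t_query,
    so localizations stamped at different times become comparable.
    A simplifying assumption standing in for the paper's motion model."""
    dt = t_query - t_pose
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Leader localized at t = 10.0 s; the follower needs its position
# at t = 10.2 s to compute an up-to-date inter-vehicle distance:
pred = extrapolate_pose((5.0, 0.0), (1.5, 0.0), 10.0, 10.2)
print(tuple(round(x, 6) for x in pred))  # (5.3, 0.0)
```

Without such extrapolation, the computed inter-vehicle distance would mix positions measured at different instants, which at platooning speeds introduces errors of the same order as the distance being controlled.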
TS. Traitement du signal | 2006
Eric Royer; Maxime Lhuillier; Michel Dhome; Jean-Marc Lavest