Adlane Habed
University of Strasbourg
Publication
Featured research published by Adlane Habed.
international conference on pattern recognition | 1998
Karl Tombre; Christian Ah-Soon; Philippe Dosch; Adlane Habed; Gérald Masini
We claim that the time has come in graphics recognition to choose stable and robust methods, even (or especially) when this means implementing methods proposed by others rather than inventing a new algorithm that ends up being a minor variation on an old idea. In this spirit, we present some of the choices we have made. We do not cover the whole range of methods and steps used in graphics recognition, but rather concentrate on those we deem the most crucial and time consuming, in terms of development time, in the design of a graphics recognition system. They include: binarization, text-graphics segmentation, thin-thick separation, vectorisation and arc recognition, and tiling.
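Binarization, the first step listed above, is typically handled by a global thresholding method. As an illustrative sketch (the abstract does not name a specific algorithm), here is Otsu's classic between-class-variance threshold in NumPy:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability up to each level
    mu = np.cumsum(prob * np.arange(256))    # cumulative mean
    mu_t = mu[-1]
    denom = omega * (1.0 - omega)
    denom[denom == 0] = np.inf               # guard the degenerate extremes
    sigma_b = (mu_t * omega - mu) ** 2 / denom
    return int(np.argmax(sigma_b))

# Synthetic bimodal "image": dark background plus bright strokes.
img = np.concatenate([np.full(900, 30, np.uint8), np.full(100, 200, np.uint8)])
t = otsu_threshold(img)
binary = img > t
```

Otsu's method is parameter-free, which is in keeping with the abstract's preference for stable, robust building blocks over bespoke variants.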
Pattern Recognition | 2008
Adlane Habed; Boubakeur Boufama
This paper presents a new set of equations for the self-calibration of a moving camera with constant intrinsic parameters. Unlike most existing methods, which require solving equations in three or more unknowns, the proposed equations are only bivariate. In particular, we show that the three scale factors appearing in Kruppa's equations due to a triplet of images are not independent but rather closely related. This relationship is used to derive sextic bivariate polynomial equations that allow the recovery of the unknown scale factors using a homotopy continuation method. Once the scale factors are calculated, an estimate of Kruppa's coefficients can be linearly retrieved and then refined through a nonlinear least-squares optimization procedure. The paper also presents the results of our experiments on simulated data, as well as three-dimensional structure reconstructions from real images.
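The reduction to bivariate polynomial systems is the key step. As a minimal stand-in illustration (the toy system below is hypothetical and of much lower degree than the paper's sextic equations in the scale factors), a bivariate polynomial pair can be solved with SymPy:

```python
import sympy as sp

# Hypothetical toy system standing in for the paper's coupled
# bivariate equations in the two unknown scale factors.
x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 - 5    # low-degree stand-in for one constraint
g = x * y - 2          # coupling between the two unknowns

solutions = sp.solve([f, g], [x, y])
# Four real solutions: (1, 2), (2, 1), (-1, -2), (-2, -1)
```

Symbolic elimination works for small examples like this; dedicated numerical homotopy continuation solvers, as used in the paper, track all solution paths and scale to the higher-degree systems that arise in practice.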
Image and Vision Computing | 2006
Adlane Habed; Boubakeur Boufama
This paper presents a new approach for self-calibrating a moving camera with constant intrinsic parameters. Unlike existing methods, the proposed method turns the self-calibration problem into one of solving bivariate polynomial equations. In particular, we show that each pair of images partially identifies a pair of 3D points that lie on the plane at infinity. These points are parameterized in terms of the real eigenvalue of the homography of the plane at infinity. A triplet of images identifies six such points, on which the coplanarity constraint is enforced, leading to a set of quintic and sextic polynomial equations. These equations are solved using a homotopy continuation method. Additional images make it possible to isolate the real eigenvalue associated with each motion and thus to fully identify the points at infinity. The method also provides inequality conditions that eliminate spurious solutions. Degenerate motions, which do not allow the calculation of the eigenvalues, are also presented here. Once the 3D points at infinity are localized, both the plane at infinity and Kruppa's coefficients can be linearly estimated.
Image and Vision Computing | 2004
Boubakeur Boufama; Adlane Habed
The problem of Euclidean 3D reconstruction is closely related to the calibration of the camera. It is well known that self-calibration methods provide only an approximate solution for the camera parameters, and that their accuracy is undermined by the correspondence problem. We demonstrate in this article, however, that the Euclidean 3D structure of a scene can be recovered accurately without resorting to a highly precise estimate of the intrinsic parameters. Specifically, we describe a three-step procedure that jointly uses the simplified form of Kruppa's equations, a normalization of pixel coordinates and the eight-point algorithm to recover the three-dimensional structure with high accuracy, even in the presence of noise.
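The combination of coordinate normalization with the eight-point algorithm is standard in the literature. A minimal NumPy sketch of Hartley-normalized eight-point estimation of the fundamental matrix (an illustration of the technique, not the authors' exact implementation):

```python
import numpy as np

def normalize(pts):
    """Translate points to their centroid and scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
    return (pts - c) * s, T

def eight_point(x1, x2):
    """Fundamental matrix from >= 8 correspondences (normalized 8-point)."""
    n1, T1 = normalize(x1)
    n2, T2 = normalize(x2)
    # Each correspondence gives one linear equation in the 9 entries of F.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1)),
    ])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)            # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                      # undo the normalization
    return F / np.linalg.norm(F)

# Synthetic check: project random 3D points into two views.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0.0, 0.0, 5.0])
x1 = X[:, :2] / X[:, 2:]                   # first view at the origin
X2 = X + np.array([1.0, 0.2, 0.1])         # second view: pure translation
x2 = X2[:, :2] / X2[:, 2:]
F = eight_point(x1, x2)
```

The normalization step is exactly what makes the linear estimate well conditioned in the presence of noise, which is the point the abstract makes.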
international multi-conference on systems, signals and devices | 2011
Frank Boochs; Andreas Marbs; Helmi Ben Hmida; Hung Truong; Ashish Karmachaiya; Christophe Cruz; Adlane Habed; Christophe Nicolle; Yvon Voisin
Object reconstruction is an important task in many fields of application, as it generates digital representations of our physical world that serve as a basis for analysis, planning, construction, visualization and other purposes. A reconstruction is normally based on reliable data (images or 3D point clouds, for example) expressing the object in its full extent. This data then has to be compiled and analyzed in order to extract all the necessary geometrical elements, which represent the object and form a digital copy of it. Traditional strategies are largely based on manual interaction and interpretation because, as the complexity of objects increases, human understanding becomes indispensable for achieving acceptable and reliable results. But human interaction is time consuming and expensive, which is why much research effort has already been invested in algorithmic support that speeds up the process and reduces the manual workload. Presently, most such supporting algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. By means of these models, which typically represent geometrical features (flatness or roughness, for example) or physical ones (color, texture), the data is classified and analyzed. This works well for objects of low complexity, but reaches its limits as the complexity of objects increases: purely numerical strategies are then unable to model reality sufficiently. The intention of our approach is therefore to take human cognitive strategy as an example and to simulate extraction processes based on available human-defined knowledge about the objects of interest. Such processes introduce a semantic structure for the objects and guide the algorithms used to detect and recognize them, yielding higher effectiveness. Hence, our research proposes an approach that uses knowledge to guide the algorithms in 3D point cloud and image processing.
international conference on computer vision | 2015
Danda Pani Paudel; Adlane Habed; Cédric Demonceaux; Pascal Vasseur
This paper deals with the problem of registering a known structured 3D scene and its metric Structure-from-Motion (SfM) counterpart. The proposed work relies on a prior plane segmentation of the 3D scene and aligns the data obtained from both modalities by solving the point-to-plane assignment problem. An inlier-maximization approach within a Branch-and-Bound (BnB) search scheme is adopted. For the first time, a Sum-of-Squares optimization theory framework is employed in this paper for identifying point-to-plane mismatches (i.e. outliers) with certainty. This allows us to iteratively build potential inlier sets and converge to the solution satisfied by the largest number of point-to-plane assignments. Furthermore, our approach is boosted by new plane visibility conditions, which are also introduced in this paper. Using this framework, we solve the registration problem in two cases: (i) a set of putative point-to-plane correspondences (with possibly overwhelmingly many outliers) is given as input and (ii) no initial correspondences are given. In both cases, our approach yields outstanding results in terms of robustness and optimality.
intelligent robots and systems | 2014
Danda Pani Paudel; Cédric Demonceaux; Adlane Habed; Pascal Vasseur; In So Kweon
Accurate estimation of camera motion is very important for many robotics applications involving SfM and visual SLAM. Such accuracy is pursued by refining the estimated motion through nonlinear optimization. As many modern robots are equipped with both 2D and 3D cameras, it is both highly desirable and challenging to exploit data acquired from both modalities to achieve better localization. Existing refinement methods, such as bundle adjustment and loop closing, can be employed only when precise 2D-to-3D correspondences across frames are available. In this paper, we propose a framework for robot localization that benefits from both 2D and 3D information without requiring such accurate correspondences to be established. This is carried out through a 2D-3D based initial motion estimation followed by a constrained nonlinear optimization for motion refinement. The initial motion estimation finds the best possible 2D-to-3D correspondences and localizes the cameras with respect to the 3D scene. The refinement step minimizes the projection errors of 3D points while preserving the existing relationships between images. The problems of occlusion and of missing scene parts are handled by comparing the image-based reconstruction with the 3D sensor measurements. The effect of data inaccuracies is minimized using an M-estimator based technique. Our experiments have demonstrated that the proposed framework yields a good initial motion estimate and a significant improvement through refinement.
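The abstract does not say which M-estimator is used. As a generic illustration of the idea, here is a Huber M-estimator fitted by iteratively reweighted least squares (IRLS) on a toy line-fitting problem; the function name, the delta parameter and the synthetic data are our own choices:

```python
import numpy as np

def huber_irls(A, y, delta=1.0, iters=50):
    """Minimize sum of Huber losses of (y - A @ theta) via IRLS."""
    theta = np.linalg.lstsq(A, y, rcond=None)[0]   # least-squares start
    for _ in range(iters):
        r = y - A @ theta
        # Huber weights: 1 for small residuals, delta/|r| in the tails,
        # so gross outliers are progressively downweighted.
        w = np.where(np.abs(r) <= delta,
                     1.0, delta / np.maximum(np.abs(r), 1e-12))
        Aw = A * w[:, None]
        theta = np.linalg.solve(A.T @ Aw, A.T @ (w * y))
    return theta

# Toy example: an exact line y = 2x + 1 with three gross outliers.
x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0
y[[0, 10, 19]] += 50.0                             # simulated outliers
A = np.column_stack([x, np.ones_like(x)])
slope, intercept = huber_irls(A, y)
```

In motion refinement the same reweighting wraps the reprojection residuals inside the nonlinear optimizer, so inaccurate 2D-3D data pulls the solution far less than it would under plain least squares.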
international conference on image processing | 2010
Adlane Habed; Amirhasan Amintabar; Boubakeur Boufama
This paper deals with the problem of retrieving the affine structure of a scene from two or more images of parallel planes. We propose a new approach that is solely based on plane homographies, calculated from point correspondences, and that does not require the recovery of the 3D structure of the scene. Neither vanishing points nor lines need to be extracted from the images. The case of a moving camera with constant intrinsic parameters and that of cameras with possibly different parameters are both addressed. Extensive experiments with both synthetic and real images have validated our approach.
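Plane homographies from point correspondences, on which the approach relies, are typically estimated with the direct linear transform (DLT). A minimal NumPy sketch with a synthetic check (illustrative only; the abstract does not describe the estimator used):

```python
import numpy as np

def homography_dlt(src, dst):
    """Plane homography H (dst ~ H @ src) from >= 4 correspondences via DLT."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence gives two linear equations in the 9 entries of H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    H = np.linalg.svd(np.array(rows, float))[2][-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic ground truth (hypothetical values) and noiseless projections.
H_true = np.array([[1.2, 0.1, 3.0], [0.05, 0.9, -2.0], [1e-3, 2e-3, 1.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3], [3, 1]], float)
sh = np.column_stack([src, np.ones(6)]) @ H_true.T
dst = sh[:, :2] / sh[:, 2:]
H = homography_dlt(src, dst)
```

Once the inter-image homographies of the parallel planes are available, the affine structure follows from relating them, without ever triangulating 3D points.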
international conference on pattern recognition | 2004
Adlane Habed; Boubakeur Boufama
In this paper, the modulus constraint is used to retrieve the scale factors that are responsible for the nonlinearity of Kruppa's equations. In the case of constant intrinsic parameters, each pair of images identifies a pair of 3D points at infinity whose coordinates are expressed in terms of the sought scale factors. By enforcing the coplanarity constraint on these points, a set of quintic bivariate equations is obtained for each triplet of images. Once the scale factors are calculated, the problems of retrieving the plane at infinity and solving Kruppa's equations become straightforward and linear.
international conference on pattern recognition | 2014
Danda Pani Paudel; Cédric Demonceaux; Adlane Habed; Pascal Vasseur
In this paper we propose a robust and direct 2D-to-3D registration method for localizing 2D cameras in a known 3D environment. Although the 3D environment is known, localizing the cameras remains a challenging problem that is particularly undermined by the unknown 2D-3D correspondences, outliers, scale ambiguities and occlusions. Once the cameras are localized, the Structure-from-Motion reconstruction obtained from image correspondences is refined by means of a constrained nonlinear optimization that benefits from the knowledge of the scene. We also propose a common optimization framework for both localization and refinement steps in which projection errors in one view are minimized while preserving the existing relationships between images. The problem of occlusion and that of missing scene parts are handled by employing a scale histogram while the effect of data inaccuracies is minimized using an M-estimator-based technique.