
Publication


Featured research published by Boubakeur Boufama.


International Conference on Computer Vision | 1995

Epipole and fundamental matrix estimation using virtual parallax

Boubakeur Boufama; Roger Mohr

The paper addresses the problem of computing the fundamental matrix, which describes the geometric relationship between a pair of stereo images: the epipolar geometry. We propose a novel method based on virtual parallax. Instead of directly computing the 3 × 3 fundamental matrix, we compute a homography with one epipole position, and show that this is equivalent to computing the fundamental matrix. Simple equations are derived by reducing the number of parameters to estimate. As a consequence, we obtain an accurate fundamental matrix of rank two with a stable linear computation. Experiments with simulated and real images validate our method and clearly show the improvement over existing methods.
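The identity at the heart of the method can be sketched in a few lines of numpy: given any plane-induced homography H between the two views and the epipole e' in the second image, F = [e']×H is a rank-two fundamental matrix. All numeric values below (intrinsics, motion, reference plane) are hypothetical, chosen only to make the check concrete; this illustrates the factorization, not the authors' estimation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

# Hypothetical shared intrinsics and a small relative motion.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
a = 0.1
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.2, 0.1])

# Homography induced by a reference plane n.X = d (scene points need not lie on it).
n, d = np.array([0.0, 0.0, 1.0]), 5.0
H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

# Epipole in the second image: projection of the first camera centre.
e2 = K @ t

# Virtual-parallax factorization: F = [e2]_x H, rank two by construction.
F = skew(e2) @ H
F /= np.linalg.norm(F)

# Verify the epipolar constraint x2^T F x1 = 0 on random scene points.
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))
x1 = (K @ X.T).T
x2 = (K @ (R @ X.T + t[:, None])).T
x1 /= x1[:, 2:]
x2 /= x2[:, 2:]
residuals = np.abs(np.einsum('ij,jk,ik->i', x2, F, x1))
assert np.linalg.matrix_rank(F, tol=1e-10) == 2
assert residuals.max() < 1e-6
```

In the actual method, the homography and epipole are what get estimated from point correspondences; the sketch only verifies that the factorization has the advertised rank-two and epipolar properties.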


International Conference on Computer Vision | 1993

Euclidean constraints for uncalibrated reconstruction

Boubakeur Boufama; Roger Mohr; Francoise Veillon

It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences between these images. However, such a reconstruction can only be performed up to a projective transformation of the 3D space. Therefore, constraints have to be put on the reconstructed data to obtain the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene, such as the location of points, geometrical constraints on lines, etc. The kinds of constraints that have to be added are discussed, and it is shown how they can be fed into a general framework. Experimental results on real data prove the feasibility, and experiments on simulated data address the accuracy of the results.


Proceedings of the Second Joint European - US Workshop on Applications of Invariance in Computer Vision | 1993

Accurate Projective Reconstruction

Roger Mohr; Boubakeur Boufama; Pascal Brand

It is possible to recover the three-dimensional structure of a scene using images taken with uncalibrated cameras and pixel correspondences. But such a reconstruction can only be computed up to a projective transformation of the 3D space. Therefore, constraints have to be added to the reconstructed data in order to obtain the reconstruction in Euclidean space. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. We first discuss the types of constraints that have to be added, then show how they can be fed into a general framework. Experiments prove that the accuracy needed for industrial applications is reachable when measurements in the image have subpixel accuracy. Accordingly, we show how a real camera can be mapped to an accurate projective camera and how accurate point detection improves the reconstruction results.


Artificial Intelligence | 1995

Understanding positioning from multiple images

Roger Mohr; Boubakeur Boufama; Pascal Brand

It is possible to recover the three-dimensional structure of a scene using only correspondences between images taken with uncalibrated cameras (Faugeras, 1992). The reconstruction obtained this way is only defined up to a projective transformation of the 3D space. However, this kind of structure already allows some spatial reasoning, such as finding a path. In order to perform more specific reasoning, or to work with a robot moving in Euclidean space, Euclidean or affine constraints have to be added to the camera observations. Such constraints arise from knowledge of the scene: location of points, geometrical constraints on lines, etc. This paper first presents a reconstruction method for the scene, then discusses how the framework of projective geometry allows symbolic or numerical information about positions to be derived, and how knowledge about the scene can be used to compute symbolic or numerical relationships. Implementation issues and experimental results are discussed.
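The projective ambiguity that motivates these papers is easy to verify numerically: warping the scene by an arbitrary invertible transformation of projective 3-space while compensating the cameras leaves every image measurement unchanged, so no image-only method can recover more than projective structure. The camera values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibrated stereo pair and a random scene in front of it.
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
a = 0.1
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.0, 0.0])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t[:, None]])

# Homogeneous 3D points, shape (4, N).
X = np.vstack([rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3)).T,
               np.ones((1, 20))])

def project(P, X):
    x = P @ X
    return x[:2] / x[2]          # pixel coordinates

x1, x2 = project(P1, X), project(P2, X)

# Warp the scene by an arbitrary projective transformation G ...
G = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
X_warp = np.linalg.inv(G) @ X    # projectively distorted "reconstruction"
P1_warp, P2_warp = P1 @ G, P2 @ G

# ... and the images do not change: the distorted scene is indistinguishable.
assert np.allclose(project(P1_warp, X_warp), x1)
assert np.allclose(project(P2_warp, X_warp), x2)
```

This is exactly why the Euclidean scene constraints discussed in the abstracts (known point locations, line constraints) are needed: they pin down the otherwise free 4 × 4 transformation G.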


International Journal of Pattern Recognition and Artificial Intelligence | 1998

A Stable and Accurate Algorithm for Computing Epipolar Geometry

Boubakeur Boufama; Roger Mohr

This paper addresses the problem of computing the fundamental matrix, which describes the geometric relationship between a pair of stereo images: the epipolar geometry. In the uncalibrated case, the epipolar geometry captures all the 3D information available from the scene. It is of central importance for problems such as 3D reconstruction, self-calibration and feature tracking; hence, the computation of the fundamental matrix is of great interest. Existing classical methods use two steps: a linear step followed by a nonlinear one. However, in some cases the linear step does not yield a closed-form solution for the fundamental matrix, resulting in more iterations for the nonlinear step, which is not guaranteed to converge to the correct solution. In this paper, a novel method based on virtual parallax is proposed. The problem is formulated differently: instead of directly computing the 3 × 3 fundamental matrix, we compute a homography with one epipole position, and show that this is equivalent to computing the fundamental matrix. Simple equations are derived by reducing the number of parameters to estimate. As a consequence, we obtain an accurate fundamental matrix with a stable linear computation. Experiments with simulated and real images validate our method and clearly show the improvement over the classical 8-point method.
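For reference, the classical linear baseline mentioned above, the 8-point method with Hartley's normalisation, can be sketched as follows (hypothetical camera values; noise-free correspondences, so the linear estimate is essentially exact):

```python
import numpy as np

def normalized_eight_point(x1, x2):
    """Linear F estimate from N >= 8 correspondences, with Hartley normalisation."""
    def normalize(x):
        # Move the centroid to the origin, scale the mean distance to sqrt(2).
        mean = x.mean(axis=0)
        scale = np.sqrt(2) / np.mean(np.linalg.norm(x - mean, axis=1))
        T = np.array([[scale, 0, -scale * mean[0]],
                      [0, scale, -scale * mean[1]],
                      [0, 0, 1.0]])
        return np.column_stack([x, np.ones(len(x))]) @ T.T, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # One row of A f = 0 per correspondence, from p2^T F p1 = 0.
    A = np.column_stack([p2[:, :1] * p1, p2[:, 1:2] * p1, p1])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce rank two by zeroing the smallest singular value.
    U, s, Vt = np.linalg.svd(F)
    F = U @ np.diag([s[0], s[1], 0]) @ Vt
    return T2.T @ F @ T1            # undo the normalisation

# Noise-free synthetic correspondences (hypothetical camera values).
rng = np.random.default_rng(2)
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
a = 0.15
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([1.0, 0.1, 0.05])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(40, 3))
h1 = (K @ X.T).T
h2 = (K @ (R @ X.T + t[:, None])).T
x1, x2 = h1[:, :2] / h1[:, 2:], h2[:, :2] / h2[:, 2:]

F = normalized_eight_point(x1, x2)
F /= np.linalg.norm(F)
o = np.ones((len(x1), 1))
res = np.abs(np.einsum('ij,jk,ik->i', np.hstack([x2, o]), F, np.hstack([x1, o])))
assert res.max() < 1e-6
```

Note the explicit rank-enforcement step: unlike the virtual-parallax formulation, the linear 8-point estimate is not rank two by construction and has to be projected onto the rank-two manifold afterwards.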


European Conference on Computer Vision | 1994

Shape from motion algorithms: a comparative analysis of scaled orthography and perspective

Boubakeur Boufama; Daphna Weinshall; Michael Werman

This paper describes a comparative study of reconstruction algorithms from sequences of images, comparing algorithms that make the weak perspective assumption (also called scaled orthography or paraperspective) to algorithms assuming perspective projection. The weak perspective assumption is usually employed to simplify the computation. Using three sequences of real images, taken under conditions corresponding to small, medium, and large fields of view, we compare two algorithms that compute invariant shape from motion; one assumes scaled orthography and one assumes perspective projection.
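The gap between the two projection models is easy to quantify: under scaled orthography every point is divided by a single average depth, so the approximation error grows as the object gets closer (depth variation becomes large relative to the viewing distance). A small numpy experiment with hypothetical values:

```python
import numpy as np

rng = np.random.default_rng(3)
f = 800.0                                  # hypothetical focal length, in pixels

def perspective(X):
    return f * X[:, :2] / X[:, 2:3]        # divide by each point's own depth

def weak_perspective(X):
    # Scaled orthography: a single scale f / (mean depth) for every point.
    return f * X[:, :2] / X[:, 2].mean()

# A unit cube of points viewed at decreasing distance: the further the
# object, the better weak perspective approximates true perspective.
errs = []
for depth in (100.0, 20.0, 5.0):
    X = rng.uniform(-1, 1, size=(200, 3)) + np.array([0.0, 0.0, depth])
    errs.append(np.abs(perspective(X) - weak_perspective(X)).max())

assert errs[0] < errs[1] < errs[2]         # error grows as the camera nears
```

This is the trade-off the paper's three image sequences (small, medium, and large fields of view) probe experimentally.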


Image and Vision Computing | 1998

Using geometric properties for automatic object positioning

Boubakeur Boufama; Roger Mohr; Luce Morin

This paper presents an application of some recent results in computer vision, in particular the use of geometric properties. The problem we examine here is the accurate positioning of one object with respect to another, say a tool. Such an application could be useful in complex and hazardous environments such as nuclear plants. Because high accuracy in positioning is our goal, we suppose that the objects have planar faces on which targets can be placed. Without loss of generality, we have used only two objects in our experiments; throughout this paper, we call them the reference object and the unknown object, respectively. The positions of the targets of the reference object, with their associated projective invariants, are computed in an off-line stage. Then, given at least two images of the two objects, we can automatically identify the reference object points in the images and reconstruct the points of the unknown object relative to the reference object. The experiments show that a precision of 0.1 mm in relative positioning can be reached for an object observed at a distance of 2 m.
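A minimal example of the kind of projective invariant that can index targets across views is the cross-ratio of four collinear points, which survives any perspective projection. The values below are hypothetical; this illustrates the invariance, not the paper's specific invariants.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by a 1D coordinate."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

# Four collinear "targets" on the reference object.
pts = np.array([0.0, 1.0, 3.0, 7.0])

def homography_1d(x, m):
    # x -> (m00*x + m01) / (m10*x + m11): what a pinhole camera applies
    # along the image of a line.
    return (m[0, 0] * x + m[0, 1]) / (m[1, 0] * x + m[1, 1])

m = np.array([[2.0, 1.0], [0.3, 4.0]])     # arbitrary invertible warp
imaged = homography_1d(pts, m)

# The cross-ratio is unchanged by the projection, so it can be computed
# off-line and matched against any new image to identify the targets.
assert np.isclose(cross_ratio(*pts), cross_ratio(*imaged))
```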


European Conference on Computer Vision | 1994

Self calibration of a stereo head mounted onto a robot arm

Radu Horaud; Fadi Dornaika; Boubakeur Boufama; Roger Mohr

In this paper we propose a new method for solving the hand-eye calibration problem, and we show how this method can be used in conjunction with a reconstruction technique to estimate on-line the relationship between the frame in which the scene has been reconstructed (the calibration frame) and the frame attached to the robot hand. The method is particularly well suited for calibrating stereo heads with respect to the robot on which they are mounted. We discuss the advantages of on-line (self) calibration versus off-line hand-eye and camera calibration. We develop two solutions to the hand-eye calibration problem: a closed-form solution and a nonlinear least-squares solution. Finally, we report on experiments performed with a stereo head mounted onto a six-degrees-of-freedom robot arm.
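A closed-form solution in the spirit of the one described can be sketched under the classical formulation A_i X = X B_i (hand motion A_i, camera motion B_i, unknown hand-eye transform X): first recover the rotation by aligning the rotation axes of corresponding motions, then solve a linear system for the translation. This is a generic textbook construction with synthetic values, not necessarily the authors' exact algorithm.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

def rot(axis, angle):
    """Rodrigues' rotation formula."""
    k = np.asarray(axis, float)
    k /= np.linalg.norm(k)
    Kx = skew(k)
    return np.eye(3) + np.sin(angle) * Kx + (1 - np.cos(angle)) * Kx @ Kx

def rot_axis(R):
    """Unit rotation axis, valid for rotation angles in (0, pi)."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return w / np.linalg.norm(w)

def hand_eye(As, Bs):
    """Closed-form X with A_i X = X B_i, from >= 2 motions with distinct axes."""
    # Rotation: R_A R_X = R_X R_B implies axis(A) = R_X axis(B); align the
    # axis sets with an orthogonal Procrustes (Kabsch) fit.
    H = sum(np.outer(rot_axis(B[:3, :3]), rot_axis(A[:3, :3]))
            for A, B in zip(As, Bs))
    U, _, Vt = np.linalg.svd(H)
    Rx = Vt.T @ np.diag([1, 1, np.linalg.det(Vt.T @ U.T)]) @ U.T
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all motions.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X

# Synthetic check against a known (hypothetical) hand-eye transform.
X_true = np.eye(4)
X_true[:3, :3] = rot([1, 2, 3], 0.4)
X_true[:3, 3] = [0.1, -0.05, 0.2]

As = []
for axis, angle, t in [([1, 0, 0], 0.8, [0.3, 0.0, 0.1]),
                       ([0, 1, 0], 0.6, [0.0, 0.2, 0.0]),
                       ([1, 1, 1], 1.0, [0.1, 0.1, -0.2])]:
    A = np.eye(4)
    A[:3, :3], A[:3, 3] = rot(axis, angle), t
    As.append(A)
Bs = [np.linalg.inv(X_true) @ A @ X_true for A in As]   # consistent camera motions

X_est = hand_eye(As, Bs)
assert np.allclose(X_est, X_true, atol=1e-8)
```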


International Journal of Pattern Recognition and Artificial Intelligence | 1999

On the Recovery of Motion and Structure When Cameras Are Not Calibrated

Boubakeur Boufama

This paper addresses the problem of computing the camera motion and the Euclidean 3D structure of an observed scene from uncalibrated images. Given at least two images with pixel correspondences, the motion of the camera (translation and rotation) and the 3D structure of the scene are calculated simultaneously. We do not assume knowledge of the intrinsic parameters of the camera; however, an approximation of these parameters is required. Such an approximation is always available, either from the camera manufacturer's data or from previous experiments. Classical methods based on the essential matrix are highly sensitive to image noise, and this sensitivity is amplified when the intrinsic parameters of the cameras contain errors. To overcome this instability, we propose a method in which a particular choice of 3D Euclidean coordinate system, together with a different parameterization of the motion/structure problem, significantly reduces the total number of unknowns. In addition, the simultaneous calculation of the camera motion and the 3D structure makes the computation less sensitive to errors in the values of the intrinsic parameters. All steps of our method are linear; however, a final nonlinear optimization step may be added to improve the accuracy of the results and to enforce the orthogonality of the rotation matrix. Experiments with real images validated our method and showed that good-quality motion and structure can be recovered from a pair of uncalibrated images. Extensive experiments with simulated images show the relationship between the errors in the intrinsic parameters and the accuracy of the recovered 3D structure.
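For contrast, the classical essential-matrix route criticized above can be sketched as follows: an SVD of E = [t]×R yields four candidate motions, with the translation recovered only up to scale. In practice a cheirality test (points in front of both cameras) selects among the candidates, and any noise in E, including noise induced by wrong intrinsics, propagates directly into R and t. The motion values below are hypothetical.

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

# Ground-truth motion (hypothetical values).
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a), np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 0.5, 0.2])
t_unit = t_true / np.linalg.norm(t_true)

E = skew(t_true) @ R_true                  # essential matrix

# Decompose E into the four (R, t) candidates.
U, _, Vt = np.linalg.svd(E)
W = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
candidates = []
for Rc in (U @ W @ Vt, U @ W.T @ Vt):
    if np.linalg.det(Rc) < 0:              # keep proper rotations only
        Rc = -Rc
    for tc in (U[:, 2], -U[:, 2]):
        candidates.append((Rc, tc))

# The true motion appears among the candidates (t only up to scale).
err = min(np.linalg.norm(Rc - R_true) + np.linalg.norm(tc - t_unit)
          for Rc, tc in candidates)
assert err < 1e-6
```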


Systems, Man and Cybernetics | 1998

3D structure recovery and errors on the intrinsic parameters

Boubakeur Boufama

This paper addresses the problem of computing the Euclidean 3D structure of an observed scene. Given at least two images with pixel correspondences, the 3D structure of the scene and the motion of the camera (translation and rotation) are calculated simultaneously. We study the effect of inaccurate intrinsic parameters on the quality of the recovered reconstruction. Classical methods based on the essential matrix computation have proven to be very unstable when the intrinsic parameters of the cameras are not known exactly. To overcome this instability, we use a method in which a particular choice of 3D Euclidean coordinate system, together with a different parameterization of the motion/structure problem, significantly reduces the total number of unknowns. In addition, the simultaneous calculation of the camera motion and the 3D structure makes the computation less sensitive to errors in the values of the intrinsic parameters of the camera. Experiments with real images validated our method, and experiments with simulated data show how errors in the intrinsic parameters affect the accuracy of the reconstruction.

Collaboration


Dive into Boubakeur Boufama's collaborations.

Top Co-Authors

- Fadi Dornaika (University of the Basque Country)
- Daphna Weinshall (Hebrew University of Jerusalem)
- Michael Werman (Hebrew University of Jerusalem)