Samia Bouchafa
University of Paris
Publications
Featured research published by Samia Bouchafa.
Image and Vision Computing | 2006
Samia Bouchafa; Bertrand Zavidovique
A new level-line registration technique is proposed for image transform estimation. The approach is robust to contrast changes, requires no prior estimate of the unknown transformation between images, and handles very challenging situations that usually lead to pairing ambiguities, such as repetitive patterns in the images. The registration itself is performed through efficient level-line cumulative matching based on a multi-stage primitive election procedure: each stage provides a coarse estimate of the transformation that the next stage refines. Although this paper deals with similarity transforms (rotation, scale and translation), the approach can be adapted to more general transformations.
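The cumulative-matching principle in this abstract can be illustrated with a minimal sketch. This is not the authors' algorithm: it uses bare 2D points instead of level lines, and only a rotation instead of a full similarity transform. Every candidate pairing votes for the rotation it implies, and the correct transform emerges as the histogram peak even though individual pairings are ambiguous; all names are illustrative.

```python
import numpy as np

def vote_rotation(src, dst, n_bins=360):
    """Accumulate, over all src-dst pairings, the rotation angle each implies."""
    hist = np.zeros(n_bins, dtype=int)
    for p in src:
        for q in dst:
            # Angle difference that would rotate p onto q's direction.
            theta = np.degrees(np.arctan2(q[1], q[0]) - np.arctan2(p[1], p[0]))
            hist[int(np.round(theta)) % n_bins] += 1
    return int(np.argmax(hist))  # coarse estimate, in degrees

rng = np.random.default_rng(0)
src = rng.normal(size=(40, 2))
a = np.radians(30)
rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
dst = src @ rot.T  # the same primitives, rotated by 30 degrees

angle = vote_rotation(src, dst)  # → 30
```

In the paper's multi-stage scheme, a coarse peak like this one would then be refined by the next stage over a narrower parameter range.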
ieee intelligent vehicles symposium | 2010
Adrien Bak; Samia Bouchafa; Didier Aubert
Vision-based autonomous vehicles face numerous challenges before they can operate effectively in practice. Among these is the detection and localization of independently moving objects, so that they can be tracked or avoided. This paper presents a method that addresses this particular issue. Information from stereo and motion is used to extract the ego-motion of the vehicle, and known defects of this estimation are exploited to detect independently moving obstacles. The method allows early and reliable detection, even for partially occluded objects. Moreover, it highlights errors in the disparity map, which could be used in future work to correct depth estimation through motion estimation.
Real-time Imaging | 2004
Didier Aubert; Frédéric Guichard; Samia Bouchafa
This paper presents two algorithms robust to global contrast changes: one detects changes, and the other detects stationary people or objects in image sequences obtained from a fixed camera. The first is based on a level-set representation of images and exploits its suitable properties under contrast variation. The second uses the first at different time scales to discriminate between the scene background, the moving parts and stationarities. This latter algorithm is motivated by and tested in real-life situations: the detection of abnormal stationarities in public transit settings, e.g. subway corridors, is presented with assessments carried out on a large number of real sequences.
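The two-time-scale principle in this abstract can be sketched per pixel: a pixel matching the long-term reference is background; one matching only the short-term reference has recently stopped moving (a stationarity); one matching neither is moving. A simple absolute-difference test stands in for the paper's level-set comparison; the tolerance and values are illustrative.

```python
def classify(current, short_ref, long_ref, tol=5):
    """Classify a pixel from short- and long-term reference values."""
    if abs(current - long_ref) <= tol:
        return "background"   # consistent with the long-term scene
    if abs(current - short_ref) <= tol:
        return "stationary"   # new to the scene, but static lately
    return "moving"           # explained by neither time scale

assert classify(100, 100, 100) == "background"  # unchanged scene
assert classify(60, 60, 100) == "stationary"    # abandoned object
assert classify(20, 60, 100) == "moving"        # matches neither scale
```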
machine vision applications | 2014
Adrien Bak; Samia Bouchafa; Didier Aubert
Road safety, whatever the environment considered, relies heavily on the ability to detect and track moving objects from a moving point of view. To achieve such detection, the vehicle's ego-motion must first be estimated and compensated. This issue is crucial to achieving a fully autonomous vehicle, which is why several approaches have already been proposed. This study presents a method, based solely on visual information, that implements such a process. Information from stereo vision and motion is combined to extract the vehicle's ego-motion, and the ego-motion extraction algorithm is thoroughly evaluated in terms of precision and uncertainty. Given these statistical attributes, a method for dynamic-object detection is presented, relying on 3D image registration and the evaluation of the residual displacement field. The method is then evaluated on several real and synthetic sequences and is shown to allow reliable and early detection, even in hard cases such as occlusions. Given a few additional factors (e.g. the detectable motion range), overall performance can be derived from visual-odometry performance.
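The residual-displacement idea above can be sketched as follows: once the ego-motion is estimated, the flow it predicts is subtracted from the observed flow, and points whose residual stays large are flagged as independently moving. The toy expansion field and threshold are illustrative, not the paper's model.

```python
import numpy as np

def residual_motion_mask(observed, predicted, thresh=1.0):
    """Flag points whose displacement is not explained by ego-motion."""
    residual = np.linalg.norm(observed - predicted, axis=-1)
    return residual > thresh

pts = np.array([[10., 0.], [0., 10.], [-5., -5.], [3., 4.]])
predicted = 0.1 * pts              # toy forward translation: radial flow
observed = predicted.copy()
observed[3] += [2.0, 0.0]          # one independently moving point

mask = residual_motion_mask(observed, predicted)
# mask → [False, False, False, True]
```

In practice the threshold would be set from the precision and uncertainty figures the abstract mentions, so that flow noise does not trigger false detections.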
International Journal of Computer Vision | 2012
Samia Bouchafa; Bertrand Zavidovique
This paper deals with plane detection from a monocular image sequence, without camera calibration or a priori knowledge of the egomotion. Within a framework of driver-assistance applications, the 3D scene is assumed to be a set of 3D planes: the vision process considers obstacles, roads and buildings as planar structures. These planes are detected by exploiting iso-velocity curves after optical flow estimation, using a Hough-transform-like frame called c-velocity. The paper explains how this c-velocity, defined by analogy to the v-disparity in stereovision, can represent planes regardless of their orientation, and how this representation facilitates plane extraction. Under a translational camera motion, planar surfaces are transformed into specific parabolas of the c-velocity space. The error and robustness analysis of the proposed technique confirms that this cumulative approach is very efficient for making the detection more robust and for coping with optical-flow imprecision. Moreover, the results suggest that the concept could be generalized to parameterized surfaces other than planes.
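The cumulative principle behind c-velocity can be sketched with a simplified parameterization: if the points of one plane satisfy a parabola v = a·c² in the voting space, then each (c, v) sample votes for the coefficient it implies, and the plane appears as the histogram peak, robust to optical-flow outliers. The one-coefficient model below is an illustration, not the paper's exact formulation.

```python
import numpy as np

def vote_parabola(c, v, bins=40, lo=0.0, hi=2.0):
    """Hough-like vote for the coefficient a in v = a * c**2."""
    a = v / np.maximum(c, 1e-9) ** 2           # coefficient implied by each sample
    hist, edges = np.histogram(a, bins=bins, range=(lo, hi))
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])     # bin-center estimate

rng = np.random.default_rng(1)
c = rng.uniform(1.0, 10.0, size=200)
v = 0.77 * c ** 2                              # samples of one plane, a = 0.77
v[:40] = rng.uniform(0.0, 80.0, size=40)       # 20% gross flow outliers
a_hat = vote_parabola(c, v)                    # close to 0.77 despite outliers
```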
international symposium on mathematical morphology | 2002
Frédéric Guichard; Samia Bouchafa; Didier Aubert
This paper presents a local measurement based on the level lines within an image. Its most important feature is that it separates local geometry (the shape of the level lines) from local contrast (the grey levels). Using only the first of these, we derive two types of motion detection: one relates to the disappearance of local level lines, the other to a change in their local geometry. The nature of the measurement allows us to use both short-term and long-term time references, and therefore to detect objects that are moving or that were not present a few minutes (for example) before. We have used this technique in a number of applications, and appraisals by transportation operators have provided encouraging results.
2008 First Workshops on Image Processing Theory, Tools and Applications | 2008
Samia Bouchafa; Bertrand Zavidovique
This paper deals with obstacle detection from a moving camera using the new concept of c-velocity space. By analogy with the v-disparity space in stereovision-based approaches, our method focuses on the extraction of 3D planar structures, such as obstacles, roads or buildings, from a moving scene. The camera is first assumed to have a translational motion, so that the dominant apparent motion generates a scale change across images. The c-velocity space is then defined as a cumulative frame in which planar surfaces are transformed into straight lines. The equations governing the phenomenon are given and explained. Results on synthetic images are shown to match the theory, and results on real data are discussed with respect to the uncertainty introduced by the location of the focus of expansion (FOE) and other types of perturbation.
international conference on information and communication technologies | 2004
Nikom Suvonvorn; Samia Bouchafa; L. Lacassagne
In this paper, we propose a fast, reliable level-line extraction algorithm that can be used as a basic and generic feature-extraction algorithm. Many applications need to match features extracted from images, and the nature of those features plays a significant role in the choice of matching strategy: less reliable features lead to very complex matching processes that must compensate for their sensitivity to perturbations such as contrast changes. We choose level lines as the basic feature for motion analysis and registration algorithms. Their invariance to contrast changes has been proved by several authors and verified experimentally on complex outdoor sequences.
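The invariance property this abstract relies on can be checked on a toy image: the level set {x : I(x) ≥ t} is unchanged by any strictly increasing contrast map g, once the threshold is remapped to g(t). The borders of these sets are the level lines; the image and contrast map below are illustrative.

```python
import numpy as np

def level_set(img, t):
    """Binary upper level set of the image at threshold t."""
    return img >= t

img = np.array([[10., 40., 90.],
                [20., 50., 80.],
                [30., 60., 70.]])

g = lambda u: u ** 2 / 10.0  # any strictly increasing contrast change

# The level sets (hence the level lines) survive the contrast change.
same = all(
    np.array_equal(level_set(img, t), level_set(g(img), g(t)))
    for t in (25.0, 45.0, 75.0)
)
# same → True
```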
international conference on image analysis and processing | 2003
Samia Bouchafa; Bertrand Zavidovique
A new level-line registration technique is proposed for image transform estimation. This approach is robust towards contrast changes, does not require any estimate of the unknown transformation between images and tackles very challenging situations that usually lead to pairing ambiguities, such as repetitive patterns in the images. The registration itself is performed through an efficient level-line cumulative matching based on a multistage primitive election procedure. Each stage provides a coarse estimate of the transformation that the next stage gets to refine. Although we deal with similarity transforms (rotation, scale and translation), our approach can be easily adapted to more general transformations.
international conference on intelligent transportation systems | 2011
Samia Bouchafa; Bertrand Zavidovique
Obstacle detection is a key process in automatic driver assistance. The present paper focuses on vision, and particularly on monocular mobile vision, to reconstruct a rough scene structure from motion. Considering the 3D scene as a set of 3D planes, our c-velocity approach segments the optical flow field into plane pieces without any camera calibration or a priori knowledge of the egomotion. The key technical idea is to exhibit iso-velocity curves and to establish relations between their properties and plane orientations. We show in this paper how obstacle detection becomes straightforward, inexpensive and robust with the method. Results confirm the expected robustness, and the method extends to other parameterized surfaces.
Collaboration
Institut national de recherche sur les transports et leur sécurité