Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yassine Ruichek is active.

Publication


Featured research published by Yassine Ruichek.


Pattern Recognition Letters | 1996

A neural matching algorithm for 3-D reconstruction from stereo pairs of linear images

Yassine Ruichek; Jack-Gérard Postaire

In this paper, we propose a neural approach for obstacle detection in front of moving cars, using linear stereo vision. The key problem is the so-called “correspondence problem”, which consists in matching features extracted from two images that are projections of the same entity in the three-dimensional world. The linear stereo correspondence problem is first formulated as an optimization task where an energy function, which represents the constraints on the solution, is to be minimized. The optimization is then performed by means of a Hopfield neural network. Experimental results, using real stereo images, demonstrate the effectiveness of the method.
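
As a rough illustration of the energy-minimization idea, the sketch below (a minimal, assumed formulation, not the authors' exact energy function) relaxes a left/right edge matching matrix with Hopfield-style continuous updates, balancing a similarity term against uniqueness penalties.

```python
import numpy as np

# Minimal sketch: v[i, j] in (0, 1) is the hypothesis that left edge i matches
# right edge j.  The input to each neuron combines the similarity evidence and
# an inhibition term penalizing competing matches on the same row/column.
def hopfield_match(similarity, lam=2.0, iters=500, tau=0.2, dt=0.1):
    u = np.zeros_like(similarity)                     # neuron internal states
    v = 1.0 / (1.0 + np.exp(-u / tau))                # neuron outputs
    for _ in range(iters):
        row_comp = v.sum(axis=1, keepdims=True) - v   # other matches of the same left edge
        col_comp = v.sum(axis=0, keepdims=True) - v   # other matches of the same right edge
        inp = similarity - lam * (row_comp + col_comp)
        u += dt * (-u + inp)                          # relax toward a low-energy state
        v = 1.0 / (1.0 + np.exp(-u / tau))
    return v > 0.5                                    # read off stable states as matches

# Toy usage: 3 left edges vs. 3 right edges with illustrative similarity scores
sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.0, 0.2, 0.7]])
print(hopfield_match(sim).astype(int))
```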


International Journal of Vehicular Technology | 2011

GPS and Stereovision-Based Visual Odometry: Application to Urban Scene Mapping and Intelligent Vehicle Localization

Lijun Wei; Cindy Cappelle; Yassine Ruichek; Frédérick Zann

We propose an approach for vehicle localization in dense urban environments using a stereoscopic system and a GPS sensor. The stereoscopic system is used to capture the stereo video flow, to reconstruct the environment, and to estimate the vehicle motion based on feature detection, matching, and triangulation from every image pair. A relative depth constraint is applied to eliminate tracked point pairs that are inconsistent with the vehicle ego-motion. The optimal rotation and translation between the current and the reference frames are then computed using a RANSAC-based minimization method. Meanwhile, GPS positions are obtained by an on-board GPS receiver and periodically used to adjust the vehicle orientations and positions estimated by stereovision. The proposed method is tested with two real sequences obtained by a GEM vehicle equipped with a stereoscopic system and an RTK-GPS receiver. The results show that the vision/GPS integrated trajectory fits the ground truth better than the vision-only method, especially for the vehicle orientation. Conversely, the stereovision-based motion estimation can correct GPS signal failures (e.g., GPS jumps) caused by multipath effects or other noise.
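
The rotation/translation step can be pictured with the hedged sketch below: a RANSAC loop over minimal samples of matched, triangulated 3D points, with a closed-form SVD (Kabsch) fit. Function names, thresholds, and the minimal sample size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~ R @ P + t (Kabsch/SVD); P, Q are 3xN."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def ransac_motion(P, Q, iters=200, thresh=0.05):
    """P, Q: 3xN matched 3D points triangulated from consecutive stereo frames."""
    best_inliers = np.zeros(P.shape[1], dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(P.shape[1], size=3, replace=False)      # minimal sample
        R, t = rigid_transform(P[:, idx], Q[:, idx])
        err = np.linalg.norm(R @ P + t - Q, axis=0)              # reprojection in 3D
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on the consensus set (assumes enough inliers were found)
    return rigid_transform(P[:, best_inliers], Q[:, best_inliers])
```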


International Conference on Intelligent Transportation Systems | 2008

Representing and Tracking of Dynamic Objects using Oriented Bounding Box and Extended Kalman Filter

Pawel Kmiotek; Yassine Ruichek

Representing and tracking dynamic objects is one of the main components of autonomous navigation in urban areas. In the framework of the development of a multiple-object tracking system using multisensor fusion, this paper presents an oriented bounding box (OBB) representation with uncertainty computation, as well as a model for object tracking. The uncertainty computation method, which takes into account laser range finder sensor uncertainty and the objects' relative position, is evaluated, and the influence of this uncertainty on the accuracy of the estimation is shown. The tracking model, based on the extended Kalman filter, is tested and evaluated using the OBB object representation.
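
A minimal EKF sketch is given below, assuming a simplified state [x, y, theta, v] for the OBB centre and heading, a constant-velocity motion model, and LRF-derived (x, y, theta) measurements; the state layout and noise handling are assumptions, not the paper's exact tracking model.

```python
import numpy as np

def ekf_predict(x, P, dt, Q):
    """Predict with a constant-velocity, constant-heading motion model."""
    px, py, th, v = x
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th,
                       v])
    F = np.array([[1, 0, -v * np.sin(th) * dt, np.cos(th) * dt],
                  [0, 1,  v * np.cos(th) * dt, np.sin(th) * dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])                 # Jacobian of the motion model
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct with an LRF-derived measurement z = (x, y, theta) of the OBB."""
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]])
    y = z - H @ x                                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P
```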


IEEE Transactions on Instrumentation and Measurement | 2013

Camera/Laser/GPS Fusion Method for Vehicle Positioning Under Extended NIS-Based Sensor Validation

Lijun Wei; Cindy Cappelle; Yassine Ruichek

Vehicle localization and autonomous navigation consist of precisely positioning a vehicle on the road using different kinds of sensors. This paper presents a vehicle localization method that integrates a stereoscopic system, a laser range finder (LRF), and a GPS receiver for global localization. For more accurate LRF-based vehicle motion estimation, an outlier-rejection invariant closest point (ICP) method is proposed to reduce the matching ambiguities of scan alignment. The fusion approach starts with a sensor selection step that validates the coherence of the observations from the different sensors. The information provided by the validated sensors is then fused with an unscented information filter. To demonstrate its performance, the proposed multisensor localization method is tested with real data and evaluated against RTK-GPS data as ground truth. The fusion approach also facilitates the incorporation of additional sensors if needed.
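
The sensor-validation step can be illustrated with a standard normalized-innovation-squared (NIS) chi-square gate, sketched below under assumed confidence levels; the paper's extended NIS test is more elaborate than this minimal check.

```python
import numpy as np
from scipy.stats import chi2

def nis_gate(z, z_pred, S, alpha=0.95):
    """Accept a sensor observation if its normalized innovation squared
    (y^T S^-1 y) falls below the chi-square gate for its dimension."""
    y = z - z_pred
    nis = float(y @ np.linalg.inv(S) @ y)
    return nis < chi2.ppf(alpha, df=len(z)), nis

# Toy usage: a 2-D position observation against its prediction
ok, score = nis_gate(np.array([2.1, 0.9]), np.array([2.0, 1.0]),
                     S=np.diag([0.04, 0.04]))
print(ok, score)
```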


IEEE Transactions on Instrumentation and Measurement | 2013

Optimal Extrinsic Calibration Between a Stereoscopic System and a LIDAR

You Li; Yassine Ruichek; Cindy Cappelle

Current perception systems of intelligent vehicles not only make use of visual sensors, but also take advantage of depth sensors. Extrinsic calibration of these heterogeneous sensors is required for fusing information obtained separately by vision sensors and light detection and ranging (LIDAR) sensors. In this paper, an optimal extrinsic calibration algorithm between a binocular stereo vision system and a 2-D LIDAR is proposed. Most extrinsic calibration methods between cameras and a LIDAR proceed by calibrating each camera separately with the LIDAR. We show that, by placing a common planar chessboard with different poses in front of the multisensor system, the extrinsic calibration problem can be solved using a 3-D reconstruction of the chessboard and geometric constraints between the views from the stereovision system and the LIDAR. Furthermore, our method takes sensor noise into account so that it provides optimal results under Mahalanobis distance constraints. To evaluate the performance of the algorithm, experiments based on both computer simulation and real datasets are presented and analyzed. The proposed approach is also compared with a popular camera/LIDAR calibration method to show the benefits of our method.
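
A hedged sketch of the underlying geometric constraint is shown below: LIDAR points transformed by the candidate extrinsics must lie on the chessboard planes reconstructed by the stereo system. For brevity the residuals are plain point-to-plane distances rather than the Mahalanobis-weighted ones described in the paper, and the parameterization (rotation vector plus translation) is an assumption.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, lidar_scans, planes):
    """Point-on-plane residuals for each chessboard pose.
    lidar_scans: list of Nx3 LIDAR points on the board (LIDAR frame)
    planes: list of (n, d) with n.x + d = 0 in the stereo frame
    params: 6-vector [rotation vector, translation]"""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    res = []
    for pts, (n, d) in zip(lidar_scans, planes):
        res.append((R @ pts.T + t[:, None]).T @ n + d)   # signed distances to the plane
    return np.concatenate(res)

def calibrate(lidar_scans, planes, x0=np.zeros(6)):
    """Non-linear least-squares refinement of the LIDAR-to-stereo extrinsics."""
    return least_squares(residuals, x0, args=(lidar_scans, planes)).x
```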


International Conference on Robotics and Automation | 2003

A voting stereo matching method for real-time obstacle detection

Mohamed Hariti; Yassine Ruichek; Abderrafiaa Koukam

Depth from stereo is one of the most active research areas in the computer vision field. The heavily investigated problem in stereo approaches is the matching between two or more images of a scene observed by two or more video cameras from different viewpoints. It consists of identifying features in the left and right images that are projections of the same physical feature in the three-dimensional world. This paper presents a real-time stereo matching method using a voting scheme. The correspondence problem is first mapped onto a two-dimensional matrix, called the matching matrix, where each element represents a possible match between two features extracted from the left and right images. Local and global constraints are then used to search for the true elements of the matching matrix, which represent compatible matches. The valid elements are determined by applying the local constraints, and the global constraints define the voting rules between the valid elements. The voting-based method is evaluated for real-time obstacle detection in front of a moving car using linear stereo vision.
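
The matching-matrix idea can be sketched as below, with an assumed local constraint (disparity range) and an assumed global constraint (ordering) standing in for the paper's exact voting rules; positions and parameters are illustrative.

```python
# Hedged sketch: element (i, j) of the matching matrix pairs left edge i with
# right edge j.  Local constraints prune invalid pairs, each valid pair then
# votes for pairs respecting the ordering constraint, and the best-scored pair
# per left edge is kept.
def voting_match(left_pos, right_pos, max_disp=30):
    valid = [(i, j) for i, xl in enumerate(left_pos)
                    for j, xr in enumerate(right_pos)
                    if 0 <= xl - xr <= max_disp]           # local constraint
    votes = {p: 0 for p in valid}
    for (i, j) in valid:
        for (k, l) in valid:
            if (i < k and j < l) or (i > k and j > l):     # ordering constraint
                votes[(i, j)] += 1
    best = {}
    for (i, j), v in votes.items():                        # keep best match per left edge
        if i not in best or v > votes[best[i]]:
            best[i] = (i, j)
    return sorted(best.values())

# Toy usage: edge abscissas (pixels) along a pair of linear images
print(voting_match([40, 80, 120], [35, 72, 110]))
```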


International Conference on Intelligent Transportation Systems | 2011

Object tracking using Harris corner points based optical flow propagation and Kalman filter

Houssam Salmane; Yassine Ruichek; Louahdi Khoudour

This paper proposes an object tracking method using optical flow information and Kalman filtering. The basic idea of the proposed approach starts from the fact that optical flow computed at interest points is more precise and robust than the optical flow of the other pixels of an object. First, the objects to be tracked are detected based on independent component analysis. For each detected object, Harris corner points are extracted and their local optical flow is calculated. The optical flow of the Harris points is then propagated using a Gaussian-distribution-based technique to estimate the optical flow of the remaining pixels. Finally, the estimated optical flow is corrected using an iterative Kalman filter. Experimental results on frames from a real data set are presented to demonstrate the effectiveness and robustness of the method. This work is developed within the framework of the PANsafer project, supported by the ANR VTT program.
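
The first stage can be sketched with standard OpenCV calls, as below; only Harris corner extraction and pyramidal Lucas-Kanade flow are shown, while the Gaussian propagation and the iterative Kalman correction from the paper are omitted. Parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def harris_flow(prev_gray, next_gray, object_mask=None):
    """Extract Harris corners on the (optionally masked) object region and
    compute their sparse optical flow between two grayscale frames."""
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=100,
                                      qualityLevel=0.01, minDistance=5,
                                      mask=object_mask,
                                      useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, corners, None)
    good = status.ravel() == 1                       # keep successfully tracked corners
    p0 = corners[good].reshape(-1, 2)
    p1 = nxt[good].reshape(-1, 2)
    return p0, p1 - p0                               # corner positions and flow vectors
```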


Expert Systems With Applications | 2016

Building detection from orthophotos using a machine learning approach

Fadi Dornaika; Abdelmalik Moujahid; Youssef El Merabet; Yassine Ruichek

Highlights: automatic building detection in orthophotos via a machine learning approach; a flexible framework that exploits supervised learning; application of the covariance descriptor to the building detection problem; an extended performance study of several segmentation-descriptor combinations; classification performance evaluated with K-NN, Partial Least Squares and SVM.

Building detection from aerial images has many applications in fields like urban planning, real-estate management, and disaster relief. In the last two decades, a large variety of methods for automatic building detection have been proposed in the remote sensing literature. Many of these approaches make use of local features to classify each pixel or segment to an object label, therefore involving an extra step to fuse pixelwise decisions. This paper presents a generic framework that exploits recent advances in image segmentation and region descriptor extraction for the automatic and accurate detection of buildings in aerial orthophotos. The proposed solution is supervised in the sense that the appearances of buildings are learnt from examples. For the first time in the context of building detection, we use the covariance matrix descriptor, which proves to be very informative and compact. Moreover, we introduce a principled evaluation that allows selecting the best segmentation algorithm-region descriptor pair for the task of building detection. Finally, we provide a performance evaluation at pixel level using different classifiers. This evaluation is conducted over 200 buildings using different segmentation algorithms and descriptors. The performance analysis quantifies the quality of both the image segmentation and the descriptor used. The proposed approach presents several advantages in terms of scalability, suitability and simplicity with respect to the existing methods. Furthermore, the proposed scheme (detection chain and evaluation) can be deployed for detecting multiple object categories present in images and can be used by intelligent systems requiring scene perception and parsing, such as intelligent unmanned aerial vehicle navigation and automatic 3D city modeling.
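
The region covariance descriptor at the core of the approach can be sketched as follows, using one plausible per-pixel feature set (coordinates, intensity, gradient magnitudes); the exact features and classifier settings of the paper are not reproduced. Descriptors computed on labelled regions could then be fed to a K-NN or SVM classifier, matching the comparison mentioned above.

```python
import numpy as np

def covariance_descriptor(gray, region_mask):
    """Summarize a segmented region by the covariance of per-pixel features
    [x, y, intensity, |dI/dx|, |dI/dy|] and flatten its upper triangle into a
    fixed-length vector (assumed feature set, for illustration only)."""
    gy, gx = np.gradient(gray.astype(float))
    ys, xs = np.nonzero(region_mask)
    F = np.stack([xs, ys, gray[ys, xs],
                  np.abs(gx[ys, xs]), np.abs(gy[ys, xs])])
    C = np.cov(F)                                  # 5x5 covariance of the features
    iu = np.triu_indices(C.shape[0])
    return C[iu]                                   # compact fixed-length descriptor
```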


International Conference on Intelligent Transportation Systems | 2011

3D triangulation based extrinsic calibration between a stereo vision system and a LIDAR

You Li; Yassine Ruichek; Cindy Cappelle

This paper presents a novel extrinsic calibration algorithm between a binocular stereo vision system and a 2D LIDAR (laser range finder). Extrinsic calibration of these heterogeneous sensors is required to fuse the information obtained separately by the vision sensor and the LIDAR in the context of intelligent vehicles. By placing a planar chessboard at different positions and orientations in front of the sensors, the proposed method solves the problem based on 3D reconstruction of the chessboard and geometric constraints between the views from the stereovision system and the LIDAR. The three principal steps of the approach are: 3D corner point triangulation, 3D plane least-squares estimation, and solving for the extrinsic parameters by applying a non-linear optimization algorithm based on the geometric constraints. To evaluate the performance of the algorithm, experiments based on computer simulation and real data are performed. The proposed approach is also compared with a popular calibration method to show its advantages.
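
One of the listed steps can be illustrated with the short sketch below: fitting a least-squares plane to chessboard corners triangulated by the stereo rig, the geometric primitive that later constrains the LIDAR-to-camera transform. The numeric corner values are purely illustrative.

```python
import numpy as np

def fit_plane(points):
    """points: Nx3 triangulated chessboard corners; returns unit normal n and
    offset d such that n . p + d ~ 0 for points p on the board."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                          # direction of least variance
    return n, -float(n @ centroid)

# Toy usage with four roughly coplanar corners (metres, stereo frame)
corners_3d = np.array([[0.1, 0.2, 2.00], [0.4, 0.2, 2.02],
                       [0.1, 0.5, 1.99], [0.4, 0.5, 2.01]])
print(fit_plane(corners_3d))
```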


Intelligent Vehicles Symposium | 1995

Real-time neural vision for obstacle detection using linear cameras

Yassine Ruichek; Jack-Gérard Postaire

This paper presents a neural vision system for real-time obstacle detection in front of vehicles using a linear stereo vision set-up. The problem addressed here consists of identifying features in the two images that are projections of the same physical entity in the three-dimensional world. The linear stereo correspondence problem is formulated as an optimization problem. An energy function, which represents the constraints on the solution, is mapped onto a two-dimensional Hopfield neural network for minimization. The system has been evaluated with experimental results on real stereo images.
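
Once edges are matched, depth recovery in a rectified linear stereo rig reduces to the usual disparity relation, sketched below with illustrative focal length and baseline values (not the paper's hardware parameters).

```python
# Hedged sketch: for a rectified linear stereo rig with focal length f (pixels)
# and baseline b (metres), depth follows from disparity d = x_left - x_right.
def depth_from_disparity(x_left, x_right, f=700.0, b=1.0):
    d = x_left - x_right
    return f * b / d if d > 0 else None     # metres, None for invalid matches

print(depth_from_disparity(412.0, 398.0))   # ~50 m for this toy disparity
```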

Collaboration


Dive into Yassine Ruichek's collaborations.

Top Co-Authors

Fadi Dornaika, University of the Basque Country
Alireza Bosaghzadeh, University of the Basque Country
Hazem Issa, University of the Sciences
Khadija Lekdioui, Centre national de la recherche scientifique
Abdelmalik Moujahid, University of the Basque Country