Abdelhafid Elouardi
University of Paris-Sud
Publications
Featured research published by Abdelhafid Elouardi.
international symposium on industrial electronics | 2006
Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Lionel Lacassagne; Jacques-Olivier Klein; Roger Reynaud
One solution for reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor and a digital processor. The sensor integrates analog and digital computing units. This architecture makes the vision system more compact and increases performance by reducing the data flow exchanged with the digital processor. A system has been implemented as a proof of concept. This has enabled us to evaluate the performance needed for a possible implementation of a digital processor on the same chip. The approach is compared to two architectures implementing CMOS sensors interfaced to the same processor. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.
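The data-flow reduction argument behind focal-plane processing can be illustrated numerically: if low-level filtering and thresholding happen on the sensor, only a binary feature map (1 bit per pixel) has to cross the sensor-processor interface instead of raw 8-bit pixels. A minimal sketch in plain Python, using a hypothetical gradient-plus-threshold operator rather than the actual PARIS1 operator set:

```python
# Sketch: low-level processing "on the focal plane" reduces the data
# exchanged with the digital processor. Hypothetical example; the
# operator below is not the PARIS1 instruction set.

def gradient_magnitude(img):
    """Approximate |dI/dx| + |dI/dy| with central differences."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            out[y][x] = abs(gx) + abs(gy)
    return out

def edge_map(img, threshold=50):
    """Binary edge map computed 'on chip'; only 1 bit/pixel leaves the sensor."""
    grad = gradient_magnitude(img)
    return [[1 if v > threshold else 0 for v in row] for row in grad]

# Raw 8-bit image: 8 bits/pixel; edge map: 1 bit/pixel -> 8x less traffic.
RAW_BITS_PER_PIXEL, EDGE_BITS_PER_PIXEL = 8, 1
```

For an 8x8 test image with a vertical step edge, `edge_map` marks the two columns adjacent to the step while transmitting one eighth of the raw bandwidth.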
ieee intelligent vehicles symposium | 2004
Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Jacques-Olivier Klein; Roger Reynaud
This paper discusses a design solution for integrating a complete vision system on a single chip. We describe the architecture and the implementation of a smart integrated retina-based vision system dedicated to vehicle applications. The retina is a circuit that combines image acquisition with analog/digital processing operators, allowing real-time image processing. The interest of vision system integration is analysed through comparisons with conventional approaches using CCD cameras and a digital processor, or CMOS sensors combined with wired algorithms in FPGA technology. Our solution takes advantage of both of these approaches.
IEEE Transactions on Instrumentation and Measurement | 2007
Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Lionel Lacassagne; Jacques-Olivier Klein; Roger Reynaud
One way to decrease the computational complexity of computer vision algorithms is to perform some low-level image processing on the sensor focal plane. The sensor then becomes a smart device called a retina. This concept makes vision systems more compact and increases performance thanks to the reduction of data flow exchanged with external circuits. This paper presents a comparison of two vision system architectures. The first implements a logarithmic complementary metal-oxide-semiconductor (CMOS) active pixel sensor interfaced to a microprocessor, where all computations are carried out. The second involves a CMOS sensor including analog processors allowing on-chip image processing; an external microprocessor is used to control the on-chip data flow and the integrated operators. We have designed two vision systems as proof of concept. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth, and subsequent stages of computation.
instrumentation and measurement technology conference | 2004
Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Jacques-Olivier Klein; Roger Reynaud
There are many image processing algorithms for CMOS image sensors, such as image enhancement, segmentation, feature extraction and pattern recognition. These algorithms are frequently implemented in software. Here, the main research interest is how to integrate image processing (vision) algorithms with CMOS integrated systems, i.e. how to implement smart retinas in hardware, in terms of their system-level architecture and design methodologies. The approach is compared with state-of-the-art vision chips built using digital SIMD arrays and with vision systems using CMOS sensors combined with FPGA technology. As a test bench, an advanced algorithm for exposure-time calibration is presented, with experimental results of image processing.
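The general shape of an exposure-time calibration loop can be sketched as follows: measure the mean gray level, then scale the integration time toward a target level. This is a generic illustration under an assumed linear sensor model, not the paper's calibration algorithm; `simulated_sensor` and its parameters are hypothetical:

```python
# Sketch of an exposure-time calibration loop: adjust the integration
# time until the mean gray level reaches a target. Illustrative only;
# the sensor model and constants are assumptions.

def simulated_sensor(t_exposure, scene_radiance=10.0, full_scale=255):
    """Hypothetical linear sensor: response saturates at full scale."""
    return min(full_scale, scene_radiance * t_exposure)

def calibrate_exposure(target=128, t0=1.0, steps=20, tol=2.0):
    """Iterate a proportional update on the exposure time."""
    t = t0
    for _ in range(steps):
        level = simulated_sensor(t)
        if abs(level - target) <= tol:
            break
        # Scale exposure by the target/measured ratio (guard against 0).
        t *= target / max(level, 1e-6)
    return t, simulated_sensor(t)
```

With the linear model the ratio update converges in one step from any unsaturated starting point; a saturated reading still pushes the exposure downward until the sensor leaves saturation.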
application specific systems architectures and processors | 2013
Mickael Njiki; Abdelhafid Elouardi; Samir Bouaziz; Olivier Casula; Olivier Roy
This paper describes a multi-FPGA architecture dedicated to real-time imaging using the Total Focusing Method (TFM) and an advanced acquisition technique called Full Matrix Capture (FMC). The maximum operating frequency is 147.3 MHz for the control blocks and 161.3 MHz for the acquisition blocks. The architecture performs real-time image reconstruction at a maximum frame rate of 73 fps and a maximum resolution of 128 × 128 pixels, which is adequate for a real-time imaging system with good characterization of defects.
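The core of TFM over FMC data is a delay-and-sum: for every image pixel, sum the A-scan samples of every transmitter/receiver pair, delayed by the transmit and receive times of flight. A minimal reference sketch in Python (the paper implements this as a pipelined multi-FPGA design; the array geometry and sampling parameters below are assumptions):

```python
# Sketch of TFM reconstruction from FMC data: delay-and-sum over all
# transmit/receive element pairs for one pixel. Reference formulation
# only, not the paper's FPGA pipeline.
import math

def tfm_pixel(fmc, elements, px, pz, c, fs):
    """Focus the full matrix at pixel (px, pz).
    fmc[tx][rx] is the sampled A-scan for transmitter tx / receiver rx;
    elements gives the x positions of the array elements (at z = 0);
    c is the wave speed, fs the sampling frequency."""
    acc = 0.0
    for tx, xt in enumerate(elements):
        t_tx = math.hypot(px - xt, pz) / c       # transmit time of flight
        for rx, xr in enumerate(elements):
            t_rx = math.hypot(px - xr, pz) / c   # receive time of flight
            idx = int(round((t_tx + t_rx) * fs)) # sample index for this pair
            if 0 <= idx < len(fmc[tx][rx]):
                acc += fmc[tx][rx][idx]
    return acc
```

For a synthetic point scatterer, the pixel at the scatterer position coherently sums one impulse per element pair, while off-focus pixels sum noise, which is what produces the TFM image contrast.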
international multi-conference on systems, signals and devices | 2014
Rabah Louali; Mohand Said Djouadi; Abdelkrim Nemra; Samir Bouaziz; Abdelhafid Elouardi
Fixed-wing Unmanned Aerial Vehicles (UAVs) are a class of UAVs with many advantages, notably a long range of action. However, designing this kind of UAV requires heavy logistics: outdoor tests, runways and experienced pilots. These constraints affect the design of embedded systems for fixed-wing UAVs. Because static tests are not representative, this paper proposes a practical approach for evaluating an embedded system on a vehicle that emulates the dynamic model of a fixed-wing aircraft. To that end, the dynamic models of a fixed-wing aircraft, a tank-type mobile robot and a bicycle are compared. We show that, contrary to the trend in the literature, a mobile robot is not the optimal choice to emulate a fixed-wing UAV. Assuming motion without slip (and a constant altitude for the aircraft), the translation models of the three vehicles all take the form of the Dubins car model. However, the translation and rotation velocities of a tank-type mobile robot are coupled, which is not the case for the aircraft, where propulsion and turning are actuated separately. This constraint defines an allowed kinematic zone which limits the emulation of a fixed-wing airplane. On the other hand, the "bank to turn" effect in the bicycle model is similar to the one observed in the fixed-wing aircraft model. Furthermore, neither model is defined when the translation velocity tends to zero (stalling effect). In conclusion, we propose using the mobile robot to test the navigation layer and the bicycle to evaluate the sensor processing layer of an embedded system for fixed-wing UAV applications.
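The kinematic argument can be made concrete with the two textbook models. Both reduce to the Dubins-car translation x' = v cos θ, y' = v sin θ, but the bicycle's turn rate θ' = (v/L) tan δ vanishes as v → 0 (the stalling-like behaviour shared with the fixed-wing model), whereas a tank-type robot's turn rate ω is an independent input, so it can rotate in place. A minimal sketch with hypothetical parameters:

```python
# Standard kinematic bicycle and differential-drive (tank-type) models,
# used here only to illustrate the coupling argument from the paper.
import math

def bicycle_step(x, y, theta, v, delta, L, dt):
    """Kinematic bicycle: turn rate is coupled to forward speed v,
    as in a fixed-wing aircraft (no rotation possible at v = 0)."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(delta) * dt   # steering angle delta, wheelbase L
    return x, y, theta

def tank_step(x, y, theta, v, omega, dt):
    """Tank-type robot: translation v and rotation omega are independent
    inputs, so the robot can turn on the spot -- unlike the aircraft."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

Stepping both models with v = 0 shows the difference immediately: the bicycle's heading is frozen, while the tank still turns, which is exactly the kinematic zone mismatch the paper identifies.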
ieee/sice international symposium on system integration | 2010
Bastien Vincke; Abdelhafid Elouardi; Alain Lambert
Simultaneous Localization And Mapping (SLAM) is a family of algorithms widely used by autonomous robots operating in unknown environments. The research community has developed numerous SLAM algorithms in recent years, and several studies have presented optimizations of different approaches. However, they have not explored system optimization from the algorithmic development level down to the hardware design. Some application areas, such as indoor mapping, would obviously benefit from low-cost sensor technology and SLAM implementations on a smart architecture. In this paper, a solution to the SLAM problem is presented. It is based on the co-design of a hardware architecture, a feature detector, a SLAM algorithm and an optimization methodology. Experiments were conducted with an instrumented vehicle. The results demonstrate that our approach, based on low-cost sensors interfaced to an adequate architecture and an optimized algorithm, is well suited to designing embedded systems for real-time SLAM applications.
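The predict/update cycle at the heart of filtering-based SLAM can be shown in its simplest one-dimensional Kalman form: odometry grows the pose uncertainty, and a landmark observation shrinks it. This is the generic textbook structure, not the specific algorithm co-designed in the paper:

```python
# Minimal 1-D Kalman illustration of the SLAM predict/update cycle.
# Generic textbook form; the paper's feature detector and filter are
# not reproduced here.

def predict(x, P, u, Q):
    """Motion update from odometry input u with process noise Q:
    the estimate moves, the variance P grows."""
    return x + u, P + Q

def update(x, P, z, R):
    """Correction from a landmark measurement z with noise R:
    the variance P shrinks."""
    K = P / (P + R)                    # Kalman gain
    return x + K * (z - x), (1 - K) * P
```

One cycle from (x = 0, P = 1) with u = 1, Q = 0.5 then z = 1.2, R = 0.5 gives a corrected estimate between prediction and measurement, with a smaller variance than before the update.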
Journal of Physics D | 2006
Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Lionel Lacassagne; Jacques-Olivier Klein; Roger Reynaud
One method of reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor 1) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor coupled with a microcontroller. The smart sensor integrates a set of analog and digital computing units. This architecture paves the way for a more compact vision system and increases performance by reducing the data flow exchanged with the controlling microprocessor. A system has been implemented as a proof of concept and has enabled us to evaluate the performance requirements for a possible integration of a microcontroller on the same chip. The approach is compared with two architectures implementing CMOS active pixel sensors (APS) interfaced to the same microcontroller. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computation.
international conference on robotics and automation | 2015
Abdelhamid Dine; Abdelhafid Elouardi; Bastien Vincke; Samir Bouaziz
The graph-based SLAM (Simultaneous Localization and Mapping) method uses a graph to represent and solve the SLAM problem: building a map of an unknown environment while simultaneously localizing the robot on this map. This paper presents a temporal analysis of the 3D graph-based SLAM method. We also propose an efficient implementation on an OMAP embedded architecture, a widely used open multimedia applications platform. We provide an optimized data structure and efficient memory access management to solve the nonlinear least-squares problem underlying the algorithm. The algorithm takes advantage of the Schur complement to reduce the execution time, and we present an optimized implementation of this task. We also exploit the multi-core architecture to parallelize the algorithm. To evaluate our implementation, we compare its computational performance to the well-known g2o framework. This work demonstrates how optimizing the data structure and multi-threading can significantly decrease the execution time of graph-based SLAM on a low-cost architecture dedicated to embedded applications.
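The Schur complement trick mentioned above works by eliminating the landmark block of the normal equations first, solving the much smaller pose system, then back-substituting the landmarks. A scalar sketch keeps the algebra visible (one pose variable, one landmark variable; g2o and the paper apply the same identity to sparse block matrices):

```python
# Scalar illustration of the Schur complement elimination used in
# graph-based SLAM. Illustrative reduction, not the paper's solver.

def solve_with_schur(Hpp, Hpl, Hll, bp, bl):
    """Solve [[Hpp, Hpl], [Hpl, Hll]] @ [dp, dl] = [bp, bl]
    by eliminating the landmark variable dl first."""
    S = Hpp - Hpl * Hpl / Hll          # Schur complement of the Hll block
    bs = bp - Hpl * bl / Hll           # reduced right-hand side
    dp = bs / S                        # pose increment (small system)
    dl = (bl - Hpl * dp) / Hll         # landmark back-substitution
    return dp, dl
```

The payoff in real graphs is that the landmark block Hll is block-diagonal (one block per landmark), so its inversion is cheap, and the remaining pose system is far smaller than the full problem.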
Measurement Science and Technology | 2007
Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Lionel Lacassagne; Jacques-Olivier Klein; Roger Reynaud
To reduce the computational complexity of computer vision algorithms, one solution is to perform some low-level image processing on the sensor focal plane. The sensor then becomes a smart device called a retina. This concept makes vision systems more compact and increases performance thanks to the reduction of data flow exchanged with external circuits. This paper presents a comparison between two vision system architectures. The first involves a smart sensor including analogue processors allowing on-chip image processing; an external microprocessor is used to control the on-chip data flow and the integrated operators. The second implements a logarithmic CMOS/APS sensor interfaced to the same microprocessor, in which all computations are carried out. We have designed two vision systems as proof of concept. The comparison is related to image processing time.