
Publications


Featured research published by Roger Reynaud.


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1985

Fast minimum variance deconvolution

Guy Demoment; Roger Reynaud

We propose a statistical estimation approach to the deconvolution problem which is optimal in the minimum variance sense when the a priori knowledge on the signal to be restored is strictly limited to its first two moments. By viewing the estimation problem as a degenerate case of a Kalman filter applied to a static system with time-varying measurements and by developing the corresponding Chandrasekhar-type equations, the estimator may be obtained recursively and implemented with a fast practical algorithm.
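
As a rough illustration of the estimator this abstract describes, the sketch below implements the plain batch linear minimum-variance (Gauss-Markov) deconvolution that uses only the first two moments of the signal. It is not the paper's fast Chandrasekhar-type recursion; the function name, the Toeplitz construction of the convolution matrix and the white-noise model are assumptions made for the example.

# Minimal sketch of batch minimum-variance (linear MMSE) deconvolution,
# assuming only the first two moments of the input signal are known.
# This is NOT the fast Chandrasekhar-type recursion of the paper; it is the
# plain O(n^3) estimator that such a recursion accelerates.
import numpy as np

def mmse_deconvolve(y, h, mean_x, var_x, var_noise):
    """Restore x from y = conv(h, x) + noise using only E[x] and Cov[x]."""
    n = len(y)
    # Convolution written as a lower-triangular Toeplitz matrix H so that y ~= H x.
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(max(0, i - len(h) + 1), i + 1):
            H[i, j] = h[i - j]
    P = var_x * np.eye(n)            # prior covariance of the signal
    R = var_noise * np.eye(n)        # measurement noise covariance
    m = np.full(n, mean_x)           # prior mean of the signal
    # Linear MMSE (Gauss-Markov) estimate: m + P H^T (H P H^T + R)^-1 (y - H m)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return m + K @ (y - H @ m)

# Example: blur a random signal with a short impulse response, add noise, restore.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
h = np.array([0.5, 0.3, 0.2])
y = np.convolve(h, x)[:200] + rng.normal(0.0, 0.1, 200)
x_hat = mmse_deconvolve(y, h, mean_x=0.0, var_x=1.0, var_noise=0.01)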


International Conference on Control, Automation, Robotics and Vision | 2014

Lane marking based vehicle localization using particle filter and multi-kernel estimation

Wenjie Lu; Emmanuel Seignez; Sergio A. Rodriguez F.; Roger Reynaud

Vehicle localization is the primary information needed for advanced tasks such as navigation. This information is usually provided by Global Positioning System (GPS) receivers. However, the low accuracy of GPS in urban environments makes it unreliable for further processing. Combining GPS data with additional sensors can improve localization precision. In this article, a marking-feature-based vehicle localization method is proposed to enhance localization performance. To this end, markings are detected from an on-vehicle camera using a multi-kernel estimation method. A particle filter is implemented to estimate the vehicle position with respect to the detected markings. Then, map-based markings are constructed from an open-source map database. Finally, vision-based markings and map-based markings are fused to obtain an improved vehicle position fix. Results on road traffic scenarios from a public database show that the method leads to a clear improvement in localization accuracy.
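
A minimal sketch of the kind of particle-filter update suggested by this abstract is given below: particles are propagated with odometry, weighted by how well a map-derived lateral offset to the marking explains the camera measurement, then resampled. The motion model, the Gaussian measurement model and the map_lateral_offset callable are illustrative assumptions, not the paper's multi-kernel formulation.

# Minimal particle-filter sketch for lane-marking-based localization.
# Motion and measurement models are illustrative placeholders.
import numpy as np

def particle_filter_step(particles, weights, odometry, measured_offset,
                         map_lateral_offset, motion_noise=0.05, meas_noise=0.2):
    """One predict/update/resample cycle on (x, y, heading) particles."""
    n = len(particles)
    dx, dy, dtheta = odometry
    # Predict: apply odometry plus Gaussian diffusion to every particle.
    particles = particles + np.array([dx, dy, dtheta])
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)
    # Update: weight each particle by how well the lateral offset to the
    # map-based marking explains the offset measured by the camera.
    predicted_offset = map_lateral_offset(particles)          # shape (n,)
    likelihood = np.exp(-0.5 * ((measured_offset - predicted_offset) / meas_noise) ** 2)
    weights = weights * likelihood
    weights = weights / (weights.sum() + 1e-12)
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (np.arange(n) + np.random.uniform()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Toy usage: markings along the x-axis, so the lateral offset is simply y.
parts = np.zeros((100, 3))
ws = np.full(100, 1.0 / 100)
parts, ws = particle_filter_step(parts, ws, (1.0, 0.0, 0.0), 0.3, lambda p: p[:, 1])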


International Symposium on Industrial Electronics | 2006

A Smart Sensor for Image Processing: Towards a System on Chip

Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Lionel Lacassagne; Jacques-Olivier Klein; Roger Reynaud

One solution for reducing the computational complexity of image processing is to perform some low-level computations on the sensor focal plane. This paper presents a vision system based on a smart sensor. PARIS1 (Programmable Analog Retina-like Image Sensor) is the first prototype used to evaluate the architecture of an on-chip vision system based on such a sensor and a digital processor. The sensor integrates analog and digital computing units. This architecture makes the vision system more compact and increases performance by reducing data flow exchanges with the digital processor. A system has been implemented as a proof of concept, which enabled us to evaluate the performance needed for a possible implementation of a digital processor on the same chip. The approach is compared with two architectures implementing CMOS sensors interfaced to the same processor. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth and subsequent stages of computations.


IEEE Intelligent Vehicles Symposium | 2004

On chip vision system architecture using a CMOS retina

Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Jacques-Olivier Klein; Roger Reynaud

This paper discusses design solutions for integrating a complete vision system on a single chip. We describe the architecture and the implementation of a smart integrated retina-based vision system dedicated to vehicle applications. The retina is a circuit that combines image acquisition with analog/digital processing operators, allowing real-time image processing. The benefits of vision system integration are analysed through comparisons with conventional approaches using CCD cameras with a digital processor, or CMOS sensors combined with wired algorithms on FPGA technology. Our solution combines the advantages of both approaches.


IEEE Transactions on Instrumentation and Measurement | 2007

Image Processing Vision Systems: Standard Image Sensors Versus Retinas

Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Lionel Lacassagne; Jacques-Olivier Klein; Roger Reynaud

To decrease the computational complexity of computer vision algorithms, one solution is to perform some low-level image processing on the sensor focal plane. The sensor then becomes a smart device called a retina. This concept makes vision systems more compact and increases performance thanks to the reduction of data flow exchanges with external circuits. This paper presents a comparison of two different vision system architectures. The first implements a logarithmic complementary metal-oxide-semiconductor (CMOS) active pixel sensor interfaced to a microprocessor, where all computations are carried out. The second involves a CMOS sensor including analog processors allowing on-chip image processing; an external microprocessor is used to control the on-chip data flow and the integrated operators. We have designed two vision systems as a proof of concept. The comparison covers image processing computation time, processing reliability, programmability, precision, bandwidth, and subsequent stages of computations.


Intelligent Vehicles Symposium | 2003

PICAR: experimental platform for road tracking applications

Samir Bouaziz; M. Fan; A. Lambert; Thierry Maurin; Roger Reynaud

The PICAR platform is an electric car including an embedded electronics system. The generic goal is to design an embedded multi-sensor platform for automotive applications such as collision avoidance. The system therefore includes classical sensors such as a video camera and ultrasonic sensors, a dual-processor PC, a CAN network, and dedicated software for signal and image processing, data fusion and decision making. The platform allows experimenting with customized sensors and specific architectures dedicated to fusion systems and data processing. The target scenarios are collision avoidance, automatic parking, and lateral control on a road lane.


Engineering Applications of Artificial Intelligence | 2015

Evidential framework for data fusion in a multi-sensor surveillance system

Cyrille André; Sylvie Le Hégarat-Mascle; Roger Reynaud

Multi-sensor data fusion relies on combining pieces of information to produce a more accurate or complete description of the environment. In this work, we considered the case of a surveillance system using several heterogeneous sensors in a network. In such a system, the data fusion objective is to merge the detections provided by the different sensors in order to count, locate and track all the targets in the monitored area. The problem was addressed in the context of Belief Function theory in order to cope with the high inaccuracy of the information and its different forms of imprecision. In this framework, we developed a unified approach to model and merge the detections coming from various kinds of sensors with prior knowledge about target location derived from topographical elements. We showed that the developed belief model provides an efficient measure for data association between tracks and detections. Given the scalability constraints of the system, the complexity and consistency of the belief function representation must be controlled, which was achieved by implementing versatile discernment frames and by restricting the number of focal elements. The proof of concept of the proposed data fusion module was achieved by implementing it in an actual detection system. Real-world scenarios were used to draw conclusions about localization performance and end-user perception. Further experiments were also performed on simulated data to focus on the data association and belief function simplification subproblems.
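
The abstract works within Belief Function theory; as a small, self-contained illustration of that machinery (not the paper's detection and association model), the sketch below combines two mass functions over a tiny frame of discernment with Dempster's rule. The sensor names and mass values are made up for the example.

# Minimal sketch of Dempster's rule of combination on a small frame of
# discernment. Focal elements are frozensets of hypotheses; this only
# illustrates the kind of belief-function combination the paper builds on.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dictionaries."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sensors reporting on whether a detection is a 'person' or a 'vehicle'.
PERSON, VEHICLE = frozenset({"person"}), frozenset({"vehicle"})
EITHER = PERSON | VEHICLE                  # ignorance: mass on the whole frame
m_camera = {PERSON: 0.6, EITHER: 0.4}
m_radar = {VEHICLE: 0.3, EITHER: 0.7}
print(dempster_combine(m_camera, m_radar))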


Instrumentation and Measurement Technology Conference | 2004

Image processing vision system implementing a smart sensor

Abdelhafid Elouardi; Samir Bouaziz; Antoine Dupret; Jacques-Olivier Klein; Roger Reynaud

Many image processing vision algorithms exist for CMOS image sensors, such as image enhancement, segmentation, feature extraction and pattern recognition, and they are typically implemented in software. Here, the main research interest is how to integrate image processing (vision) algorithms with CMOS integrated systems, or how to implement smart retinas in hardware, in terms of their system-level architecture and design methodologies. The approach is compared with state-of-the-art vision chips built using digital SIMD arrays and with vision systems using CMOS sensors combined with FPGA technology. As a test bench, an advanced algorithm for exposure time calibration is presented, together with experimental image processing results.
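
The exposure-time calibration algorithm used as a test bench is not detailed in this abstract; purely as an illustration of such a calibration loop, the sketch below adjusts the integration time with a simple proportional rule driven by the mean pixel level. The capture callable, the target level and the bounds are assumptions for the example, not the paper's algorithm.

# Naive auto-exposure sketch: adjust integration time until the mean pixel
# value approaches a target level. Illustrative only.
import numpy as np

def calibrate_exposure(capture, exposure_us=1000.0, target=128.0,
                       gain=0.5, iterations=10):
    """capture(exposure_us) -> 8-bit image; returns a tuned exposure time."""
    for _ in range(iterations):
        frame = capture(exposure_us)
        mean_level = float(np.mean(frame))
        if mean_level <= 0.0:
            exposure_us *= 2.0              # hopelessly dark: double and retry
            continue
        # Proportional correction toward the target brightness.
        exposure_us *= 1.0 + gain * (target - mean_level) / target
        exposure_us = float(np.clip(exposure_us, 10.0, 100000.0))
    return exposure_us

# Toy usage: a fake sensor whose brightness grows with exposure time.
fake_capture = lambda t: np.clip(np.full((32, 32), 0.08 * t), 0, 255)
tuned_exposure = calibrate_exposure(fake_capture)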


Optical Engineering | 1998

Hybrid neural-based decision level fusion architecture: application to road traffic collision avoidance

Kurosh Madani; Abdennasser Chebira; Kamel Bouchefra; Thierry Maurin; Roger Reynaud

A hybrid decision-level architecture for a road collision risk avoidance system is presented. The goal of the decision level is to classify the behavior of the vehicles observed by a smart system or vehicle. Knowledge of vehicle behavior enables the best management of the smart system's resources. Associating a model with each observed vehicle mainly limits the inference and the set of actions to be activated; thus the interactions between system levels can be more intelligent. The decision level of this architecture is composed of a neural classifier associated with a numerical classifier. Each of these classifiers provides decisions that are expressed within the framework of fuzzy theory. An optimal fusion policy is reached using a functional neural network tool.
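
As a minimal stand-in for the decision-level fusion described here, the sketch below fuses the fuzzy membership vectors produced by two classifiers with a simple weighted average and picks the winning behaviour class. The class labels and weights are assumptions, and the paper's functional-neural-network fusion policy is not reproduced.

# Minimal sketch of decision-level fusion of two classifiers whose outputs
# are fuzzy membership degrees over the same set of behaviour classes.
import numpy as np

BEHAVIOURS = ["overtaking", "following", "braking"]   # illustrative labels

def fuse_decisions(mu_neural, mu_numerical, w_neural=0.6):
    """Fuse two membership vectors and return the winning behaviour class."""
    mu_neural = np.asarray(mu_neural, dtype=float)
    mu_numerical = np.asarray(mu_numerical, dtype=float)
    fused = w_neural * mu_neural + (1.0 - w_neural) * mu_numerical
    fused = fused / fused.sum()                 # keep memberships normalized
    return BEHAVIOURS[int(np.argmax(fused))], fused

decision, memberships = fuse_decisions([0.7, 0.2, 0.1], [0.4, 0.5, 0.1])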


IEEE International Conference on Cyber Technology in Automation, Control, and Intelligent Systems | 2014

Monocular multi-kernel based lane marking detection

Wenjie Lu; Sergio A. Rodriguez F.; Emmanuel Seignez; Roger Reynaud

Lane marking detection provides key information for scene understanding in structured environments. Such information has been widely exploited in Advanced Driving Assistance Systems and Autonomous Vehicle applications. This paper presents an enhanced lane marking detection approach intended for low-level perception. It relies on a multi-kernel detection framework with hierarchical weights. First, the detection strategy operates in Bird's Eye View (BEV) space and starts with image filtering using a cell-based blob method. Then, lane marking parameters are optimized following a parabolic model. Finally, a self-assessment process provides an integrity indicator to improve the reliability of the detection output. An evaluation on images from a public dataset confirms the effectiveness of the method.
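
A minimal sketch of the parabolic-model fitting stage is shown below: given marking points already expressed in BEV coordinates, the lateral offset is fitted as a second-order polynomial of the longitudinal distance by least squares, and the residual can feed a simple integrity check. The cell-based blob filtering and the paper's self-assessment process are not reproduced, and all names are illustrative.

# Minimal sketch of the parabolic lane-model fitting stage on marking points
# already detected in bird's-eye-view coordinates.
import numpy as np

def fit_parabolic_lane(points_bev):
    """points_bev: (N, 2) array of (longitudinal, lateral) marking points."""
    x, y = points_bev[:, 0], points_bev[:, 1]
    # Least-squares fit of y = a*x^2 + b*x + c.
    a, b, c = np.polyfit(x, y, deg=2)
    residual = float(np.sqrt(np.mean((np.polyval([a, b, c], x) - y) ** 2)))
    return (a, b, c), residual   # residual can feed a simple integrity check

# Example with synthetic, slightly noisy marking points.
xs = np.linspace(0.0, 30.0, 60)
ys = 0.002 * xs**2 + 0.05 * xs + 1.5 + np.random.normal(0.0, 0.02, xs.size)
coeffs, rmse = fit_parabolic_lane(np.column_stack([xs, ys]))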

Collaboration


Dive into Roger Reynaud's collaborations.

Top Co-Authors

Kamel Bouchefra

École Normale Supérieure


Guy Demoment

École Normale Supérieure
