
Publication


Featured research published by Jean-Yves Ertaud.


IEEE Transactions on Image Processing | 2013

Robust Radial Face Detection for Omnidirectional Vision

Yohan Dupuis; Xavier Savatier; Jean-Yves Ertaud; Pascal Vasseur

Bio-inspired and non-conventional vision systems are highly researched topics. Among them, omnidirectional vision systems have demonstrated their ability to significantly improve the geometrical interpretation of scenes. However, few researchers have investigated how to perform object detection with such systems. The existing approaches require a geometrical transformation prior to the interpretation of the picture. In this paper, we investigate what must be taken into account and how to process omnidirectional images provided by the sensor. We focus our research on face detection and highlight the fact that particular attention should be paid to the descriptors in order to successfully perform face detection on omnidirectional images. We demonstrate that this choice is critical to obtaining high detection rates. Our results imply that the adaptation of existing object-detection frameworks, designed for perspective images, should be focused on the choice of appropriate image descriptors in the design of the object-detection pipeline.


International Conference on Emerging Security Technologies | 2010

Fusion of Omnidirectional and PTZ Cameras for Face Detection and Tracking

H. Amine Iraqui; Yohan Dupuis; Rémi Boutteau; Jean-Yves Ertaud; Xavier Savatier

Many applications in mobile robotics, such as authentication, require the ability to explore a large field of view at high resolution. The proposed vision system combines a catadioptric sensor for full-range monitoring with a pan-tilt-zoom (PTZ) camera, yielding an innovative sensor able to detect and track moving objects at a higher zoom level. In our application, the catadioptric sensor is calibrated and used to detect and track regions of interest (ROIs), especially face regions, within its 360-degree field of view (FOV). Using a joint calibration strategy, the PTZ camera parameters are automatically adjusted by the system to detect and track the face ROI at a higher resolution.
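The joint calibration that steers the PTZ camera can be pictured as a mapping from an ROI position in the omnidirectional image to pan/tilt commands. Below is a minimal sketch, assuming the image centre is the projection of the mirror axis and a simple linear radius-to-tilt mapping; the function name and the `tilt_per_pixel` parameter are illustrative, not the authors' calibration:

```python
import math

def ptz_angles_from_omni(u, v, cx, cy, tilt_per_pixel=0.1):
    """Map an ROI centroid (u, v) in the omnidirectional image to
    pan/tilt commands (in degrees) for the PTZ camera.

    Assumes the image centre (cx, cy) is the projection of the mirror
    axis, and a linear radius-to-tilt mapping (tilt_per_pixel), which
    a joint calibration of the two sensors would replace in practice."""
    dx, dy = u - cx, v - cy
    pan = math.degrees(math.atan2(dy, dx))      # azimuth around the mirror axis
    tilt = math.hypot(dx, dy) * tilt_per_pixel  # elevation grows with radius
    return pan, tilt
```

A face detected to the right of the image centre then yields a zero pan and a tilt proportional to its radial distance.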


2008 International Workshop on Robotic and Sensors Environments | 2008

An omnidirectional stereoscopic system for mobile robot navigation

Rémi Boutteau; Xavier Savatier; Jean-Yves Ertaud; Bélahcène Mazari

This paper proposes a scheme for a 3D metric reconstruction of the environment of a mobile robot. We first introduce the advantages of a catadioptric stereovision sensor for autonomous navigation and how we have designed it with respect to the Single Viewpoint constraint. For applications such as path generation, the robot needs a metric reconstruction of its environment, so calibration of the sensor is required. After justifying the chosen model, a calibration method to obtain the model parameters and the relative pose of the two catadioptric sensors is presented. Knowledge of all the sensor parameters yields the 3D metric reconstruction of the environment by triangulation. Tools for calibration and relative pose estimation are presented and are available on the authors' Web page. The entire process has been evaluated using real data.
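Once both catadioptric sensors are calibrated and their relative pose is known, each pixel yields a viewing ray, and a 3D point is recovered by intersecting the two rays. A hedged sketch of the standard midpoint triangulation method (not the authors' implementation):

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: given two sensor centres c1, c2 and
    viewing directions d1, d2 (as obtained after calibration of the two
    catadioptric sensors), return the 3D point closest to both rays.

    Solves for ray parameters s, t minimising ||(c1 + s*d1) - (c2 + t*d2)||."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = c2 - c1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 ** 2          # ~0 for (near-)parallel rays
    s = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))
```

When the two rays intersect exactly, the midpoint coincides with the intersection; otherwise it is the midpoint of the common perpendicular segment.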


IEEE International Symposium on Robotic and Sensors Environments | 2011

A direct approach for face detection on omnidirectional images

Yohan Dupuis; Xavier Savatier; Jean-Yves Ertaud; Pascal Vasseur

Catadioptric sensors offer abilities that have so far been unexploited. This is especially true for face detection and, more generally, object detection. This paper presents the results of a direct approach to face detection on catadioptric images. Without any geometrical transformation, we are able to successfully apply our detector to distorted images. We present a new method to synthesize large omnidirectional image databases. Inspired by conventional face detection training schemes, our method makes use of newly introduced polygonal Haar-like features. Initial tests demonstrate that our approach gives good performance while speeding up the detection process.
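Polygonal Haar-like features generalise the rectangular Haar features of conventional detectors, which are evaluated in constant time on an integral image. The sketch below shows only the standard rectangular case as background; the polygonal variant introduced in the paper is not reproduced here:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img.cumsum(0).cumsum(1), ((1, 0), (1, 0)))

def rect_sum(ii, y, x, h, w):
    """Sum of the h x w rectangle with top-left corner (y, x), in O(1)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    """Classic two-rectangle Haar-like feature: left half minus right half.
    The paper replaces such axis-aligned rectangles with polygonal supports
    that follow the radial geometry of omnidirectional images."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

On a uniform patch the feature response is zero; on a vertical intensity edge it is proportional to the contrast, which is what makes such features useful as weak classifiers.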


2009 IEEE International Workshop on Robotic and Sensors Environments | 2009

Real-time 3D reconstruction for mobile robot using catadioptric cameras

Romain Rossi; Xavier Savatier; Jean-Yves Ertaud; Bélahcène Mazari

This paper presents a real-time 3D reconstruction algorithm able to compute a dense geometric approximation of the surrounding environment. Image acquisition is performed by a stereoscopic panoramic system with two color catadioptric cameras mounted on a mobile robot. An algorithm running on a Graphics Processing Unit (GPU) performs the 3D reconstruction in real time. As the camera system moves, new views of the scene are used to improve the scene model thanks to an incremental algorithm. The performance of our approach is then evaluated using a synthetic image sequence.
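The incremental improvement of the model as new views arrive can be illustrated with a toy voxel-count fusion. This is a CPU-side sketch of the idea only, with made-up class and parameter names, not the GPU implementation described in the paper:

```python
import numpy as np

class IncrementalModel:
    """Toy incremental scene model: each new 3D point cloud from the moving
    camera pair is fused into a voxel hit count, so the model improves as
    views accumulate and spurious points can be filtered out."""

    def __init__(self, size=32, res=0.25):
        self.res = res                                   # voxel edge length (metres)
        self.counts = np.zeros((size, size, size), dtype=int)

    def fuse(self, points):
        """Accumulate one observation; points is an (N, 3) array in [0, size*res)."""
        idx = (points / self.res).astype(int)
        np.add.at(self.counts, tuple(idx.T), 1)          # unbuffered accumulation

    def occupied(self, min_hits=2):
        """Voxel indices seen in at least min_hits observations."""
        return np.argwhere(self.counts >= min_hits)
```

Requiring a minimum hit count is a crude stand-in for the refinement that repeated views provide: a point seen once may be noise, while a voxel confirmed across views is kept.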


Archive | 2010

A 3D Omnidirectional Sensor For Mobile Robot Applications

Rémi Boutteau; Xavier Savatier; Jean-Yves Ertaud; Bélahcène Mazari

In most of the missions a mobile robot has to achieve – intervention in hostile environments, preparation of military intervention, mapping, etc – two main tasks have to be completed: navigation and 3D environment perception. Therefore, vision-based solutions have been widely used in autonomous robotics because they provide a large amount of information useful for detection, tracking, pattern recognition and scene understanding. Nevertheless, the main limitations of this kind of system are the limited field of view and the loss of depth perception. A 360-degree field of view offers many advantages for navigation, such as easier motion estimation using specific properties of optical flow (Mouaddib, 2005) and more robust feature extraction and tracking. Interest in omnidirectional vision has therefore grown significantly over the past few years, and several methods are being explored to obtain a panoramic image: rotating cameras (Benosman & Devars, 1998), multi-camera systems and catadioptric sensors (Baker & Nayar, 1999). Catadioptric sensors, i.e. the combination of a camera and a mirror with a surface of revolution, are nevertheless the only systems that can provide a panoramic image instantaneously without moving parts, and are thus well adapted to mobile robot applications. The depth perception can be retrieved using a set of images taken from at least two different viewpoints, either by moving the camera or by using several cameras at different positions. The use of the camera motion to recover the geometrical structure of the scene and the camera's positions is known as Structure From Motion (SFM). Excellent results have been obtained in recent years with SFM approaches (Pollefeys et al., 2004; Nister, 2001), but with off-line algorithms that need to process all the images simultaneously.
SFM is consequently not well adapted to the exploration of an unknown environment, because the robot needs to build the map and localize itself in it during exploration. The on-line approach, known as SLAM (Simultaneous Localization and Mapping), is one of the most active research areas in robotics, since it can provide real autonomy to a mobile robot. Some interesting results have been obtained in the last few years, but principally to build 2D maps of indoor environments using laser range-finders. A survey of these algorithms can be found in the tutorials of Durrant-Whyte and Bailey (Durrant-Whyte & Bailey, 2006; Bailey & Durrant-Whyte, 2006).


International Conference on Intelligent Transportation Systems | 2013

Road-line detection and 3D reconstruction using fisheye cameras

Rémi Boutteau; Xavier Savatier; Fabien Bonardi; Jean-Yves Ertaud

In future Advanced Driver Assistance Systems (ADAS), smart monitoring of the vehicle environment is a key issue. Fisheye cameras have become popular as they provide a panoramic view with few low-cost sensors. However, current ADAS have limited use, as most of the underlying image processing has been designed for perspective views only. In this article we illustrate how the theoretical work done in omnidirectional vision over the past ten years can help to tackle this issue. To do so, we have evaluated a simple algorithm for road-line detection, based on the unified sphere model, in real conditions. We first highlight the interest of using fisheye cameras in a vehicle, then outline our method, present our experimental results on the detection of lines in a set of 180 images, and finally show how the 3D position of the lines can be recovered by triangulation.
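Under the unified sphere model, a normalised image point is lifted to the unit sphere, where a 3D line projects to a great circle; line detection then reduces to fitting a plane through the sphere centre to the lifted points. A minimal sketch of the standard lifting formula, assuming `xi` is the usual mirror/fisheye parameter of the model (this is not the authors' code):

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Unified sphere model: lift a normalised image point (x, y) onto the
    unit sphere for model parameter xi.  The forward model projects a
    sphere point (X, Y, Z) to (X/(Z+xi), Y/(Z+xi)); this inverts it.
    Points of a 3D line lift to a great circle, i.e. they satisfy
    n . p = 0 for the plane normal n through the sphere centre."""
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])
```

Lifted points always have unit norm, and re-projecting them through the model recovers the original image coordinates, which is a quick sanity check on the formula.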


Asian Conference on Pattern Recognition | 2013

A Dynamic Programming Algorithm Applied to Omnidirectional Vision for Dense 3D Reconstruction

Rémi Boutteau; Xavier Savatier; Jean-Yves Ertaud

In this paper, we present a dense 3D reconstruction algorithm adapted to stereoscopic omnidirectional sensors. Our main contributions are the generalization of global constraints to central systems and the use of dynamic programming to take them into account. Experimental results demonstrate the value of our algorithm on real data.
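Dynamic programming for stereo classically optimises, along a scanline, a matching cost plus a smoothness penalty between neighbouring disparities. The following is a minimal 1D sketch of that idea only; the paper's contribution is generalising such global constraints to central omnidirectional systems, which is not reproduced here:

```python
import numpy as np

def dp_scanline_disparity(left, right, max_disp, lam=1.0):
    """Disparity along one rectified scanline by dynamic programming:
    minimises the sum of absolute matching costs plus lam * |d_i - d_{i-1}|.
    A toy 1D sketch of DP stereo, not the authors' algorithm."""
    n = len(left)
    cost = np.full((n, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        cost[d:, d] = np.abs(left[d:] - right[:n - d])  # match left[i] with right[i-d]
    acc = cost.copy()
    disps = np.arange(max_disp + 1)
    for i in range(1, n):
        # acc[i][d] = cost[i][d] + min over d' of (acc[i-1][d'] + lam*|d-d'|)
        trans = acc[i - 1][None, :] + lam * np.abs(disps[:, None] - disps[None, :])
        acc[i] = cost[i] + trans.min(axis=1)
    # backtrack the optimal disparity path
    disp = np.empty(n, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for i in range(n - 2, -1, -1):
        disp[i] = int(np.argmin(acc[i] + lam * np.abs(disps - disp[i + 1])))
    return disp
```

The smoothness term lam discourages disparity jumps between neighbouring pixels, which is the simplest instance of the kind of global constraint the paper extends.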


Robotics and Biomimetics | 2011

A new approach for face detection with omnidirectional sensors

Yohan Dupuis; Xavier Savatier; Jean-Yves Ertaud; Pascal Vasseur

This paper tackles the problem of frontal face detection with omnidirectional vision sensors. Adapting traditional face detection frameworks to our system speeds up the detection process while achieving good performance. We propose a method to synthesize an omnidirectional image database, on which we were then able to train our detector. Our experiments suggest that, despite non-linear distortions, our face detection algorithm has a high success rate. We also studied the performance of AdaBoost algorithm variants for our system. The proposed approach matches the speed and robustness of existing face detection algorithms for conventional cameras.


IEEE Intelligent Vehicles Symposium | 2008

Catadioptric vision system for an optimal observation of the driver face and the road scene

Jean-François Layerle; Xavier Savatier; El Mustapha Mouaddib; Jean-Yves Ertaud

In this paper, we propose the design of a new compact sensor for the simultaneous monitoring of driver activity and the road scene. This sensor will be integrated in a driver assistance system to study the correlation between the driver's gaze and the road scene. The device is based on a catadioptric configuration combining two different reflective surfaces. One enables the capture of a panoramic view of the environment in and out of the vehicle; the other has been designed to provide sufficient resolution for eye tracking of the driver. With this new sensor, a 3D reconstruction by stereovision can be computed, as two different projections of the driver's face can be observed in the same image. The complete design is validated by a study of the accuracy of the stereovision reconstruction.

Collaboration


Dive into Jean-Yves Ertaud's collaborations.

Top Co-Authors

Redouane Khemmar
Systems Research Institute

El Mustapha Mouaddib
University of Picardie Jules Verne

Adnane Cabani
Systems Research Institute

Joseph Mouzna
Systems Research Institute