Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Rémi Boutteau is active.

Publication


Featured research published by Rémi Boutteau.


Sensors | 2017

A Study of Vicon System Positioning Performance

Pierre Merriaux; Yohan Dupuis; Rémi Boutteau; Pascal Vasseur; Xavier Savatier

Motion capture setups are used in numerous fields. Studies based on motion capture data can be found in biomechanical, sport or animal science. Clinical science studies include gait analysis as well as balance, posture and motor control. Robotic applications encompass object tracking. Everyday applications include entertainment and augmented reality. Still, few studies investigate the positioning performance of motion capture setups. In this paper, we study the positioning performance of one marker-based optoelectronic motion capture player: the Vicon system. Our protocol includes evaluations of static and dynamic performance. Mean error as well as positioning variability are studied with calibrated ground truth setups that are not based on other motion capture modalities. We introduce a new setup that enables directly estimating the absolute positioning accuracy for dynamic experiments, contrary to state-of-the-art works that rely on inter-marker distances. The system performs well on static experiments, with a mean absolute error of 0.15 mm and a variability lower than 0.025 mm. Our dynamic experiments were carried out at speeds found in real applications. Our work suggests that the system error is less than 2 mm. We also found that marker size and Vicon sampling rate must be carefully chosen with respect to the speeds encountered in the application in order to reach optimal positioning performance, which can reach 0.3 mm in our dynamic study.


International Conference on Emerging Security Technologies | 2010

Fusion of Omnidirectional and PTZ Cameras for Face Detection and Tracking

H. Amine Iraqui; Yohan Dupuis; Rémi Boutteau; Jean-Yves Ertaud; Xavier Savatier

Many mobile robot authentication applications require the ability to explore a large field of view at high resolution. The proposed vision system combines a catadioptric sensor for full-range monitoring with a pan-tilt-zoom (PTZ) camera, leading to an innovative sensor able to detect and track moving objects at a higher zoom level. In our application, the catadioptric sensor is calibrated and used to detect and track regions of interest (ROIs), especially face regions, within its 360-degree field of view (FOV). Using a joint calibration strategy, the PTZ camera parameters are automatically adjusted by the system in order to detect and track the face ROI at a higher resolution.


International Workshop on Robotic and Sensors Environments | 2008

An omnidirectional stereoscopic system for mobile robot navigation

Rémi Boutteau; Xavier Savatier; Jean-Yves Ertaud; Bélahcène Mazari

This paper proposes a scheme for a 3D metric reconstruction of the environment of a mobile robot. We first introduce the advantages of a catadioptric stereovision sensor for autonomous navigation and how we have designed it with respect to the Single Viewpoint constraint. For applications such as path generation, the robot needs a metric reconstruction of its environment, so calibration of the sensor is required. After justification of the chosen model, a calibration method to obtain the model parameters and the relative pose of the two catadioptric sensors is presented. Knowledge of all the sensor parameters yields the 3D metric reconstruction of the environment by triangulation. Tools for calibration and relative pose estimation are presented and are available on the authors' Web page. The entire process has been evaluated using real data.
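The triangulation step described above can be sketched with the classical midpoint method: each calibrated sensor back-projects a matched pixel to a 3D ray, and the reconstructed point is taken halfway along the shortest segment joining the two rays. This is a generic sketch, not necessarily the authors' exact formulation; the function name and interface are illustrative.

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: 3D point closest to two viewing rays.

    c1, c2: ray origins (the two sensor centers)
    d1, d2: ray direction vectors (normalized internally)
    Returns the midpoint of the shortest segment joining the rays.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = c2 - c1                      # baseline between the two sensors
    a = d1 @ d2                      # cosine of the angle between the rays
    denom = 1.0 - a * a              # degenerate (parallel rays) if ~0
    # Parameters of the closest points on each ray (least-squares solution).
    t1 = (d1 @ b - a * (d2 @ b)) / denom
    t2 = (a * (d1 @ b) - d2 @ b) / denom
    p1 = c1 + t1 * d1
    p2 = c2 + t2 * d2
    return 0.5 * (p1 + p2)
```

With noise-free, intersecting rays the midpoint coincides with the true 3D point; with noisy correspondences it gives a reasonable compromise between the two rays.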


International Conference on Image Processing | 2015

3D real-time human action recognition using a spline interpolation approach

Enjie Ghorbel; Rémi Boutteau; Jacques Boonaert; Xavier Savatier; Stéphane Lecoeuche

This paper presents a novel descriptor for human action recognition based on skeleton information provided by RGB-D videos. These features are obtained by considering the motion as continuous trajectories of skeleton joints. From the discrete skeleton joint positions, a cubic-spline interpolation is applied to the joint position, velocity and acceleration components. The training and classification steps are carried out with a linear SVM. In the literature, many human motion descriptors based on RGB-D cameras have already been proposed with good accuracy, but at a high computational cost. The main interest of the proposed approach is its ability to compute human motion descriptors at low computational cost while still achieving acceptable recognition accuracy. This approach can therefore be adapted to human-computer interaction applications. For validation, we apply our method to the challenging MSR-Action3D benchmark and introduce a new indicator: the ratio between accuracy and execution time per descriptor. Using this criterion, we show that our algorithm outperforms state-of-the-art methods in terms of combined rapidity and accuracy.
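The core idea, interpolating discrete joint positions with a cubic spline and sampling position, velocity and acceleration on a uniform grid, can be sketched as follows. This is an assumed simplification: the function name, the sample count and the use of SciPy's `CubicSpline` (whose derivatives give velocity and acceleration directly) are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def kinematic_descriptor(times, joints, n_samples=32):
    """Spline-based kinematic descriptor for one skeleton sequence.

    times:  (T,) frame timestamps
    joints: (T, J, 3) skeleton joint positions per frame
    Fits one cubic spline per joint coordinate, then samples position,
    velocity (1st derivative) and acceleration (2nd derivative) on a
    uniform time grid and concatenates everything into a feature vector.
    """
    spline = CubicSpline(times, joints, axis=0)
    t = np.linspace(times[0], times[-1], n_samples)
    pos = spline(t)        # interpolated positions
    vel = spline(t, 1)     # first derivative: velocities
    acc = spline(t, 2)     # second derivative: accelerations
    return np.concatenate([pos.ravel(), vel.ravel(), acc.ravel()])
```

The fixed-length output makes the descriptor directly usable as input to a linear SVM, regardless of the original sequence length.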


Intelligent Vehicles Symposium | 2014

Visual odometry with unsynchronized multi-cameras setup for intelligent vehicle application

Rawia Mhiri; Pascal Vasseur; Stéphane Mousset; Rémi Boutteau; Abdelaziz Bensrhair

This paper presents a visual odometry method with metric scale estimation for a multi-camera system in a challenging unsynchronized setup. The intended application is in the field of intelligent vehicles. We propose a new algorithm named the “triangle-based” method. The proposed algorithm employs information from both the extrinsic and intrinsic parameters of calibrated cameras. We assume that the trajectory between two consecutive frames of a camera is a linear segment (straight trajectory). The relative camera poses are estimated via classical Structure-from-Motion. Then, the scale factors are computed by imposing the known extrinsic parameters and the linearity assumption. We verify the validity of our method in both simulated and real conditions. For the real world, the motion trajectory estimated for an image sequence from two cameras of the KITTI dataset is compared against the GPS/INS ground truth.


Archive | 2010

A 3D Omnidirectional Sensor For Mobile Robot Applications

Rémi Boutteau; Xavier Savatier; Jean-Yves Ertaud; Bélahcène Mazari

In most of the missions a mobile robot has to achieve – intervention in hostile environments, preparation of military intervention, mapping, etc. – two main tasks have to be completed: navigation and 3D environment perception. Therefore, vision-based solutions have been widely used in autonomous robotics because they provide a large amount of information useful for detection, tracking, pattern recognition and scene understanding. Nevertheless, the main limitations of this kind of system are the limited field of view and the loss of depth perception. A 360-degree field of view offers many advantages for navigation, such as easier motion estimation using specific properties of optical flow (Mouaddib, 2005) and more robust feature extraction and tracking. Interest in omnidirectional vision has therefore grown significantly over the past few years, and several methods are being explored to obtain a panoramic image: rotating cameras (Benosman & Devars, 1998), multi-camera systems and catadioptric sensors (Baker & Nayar, 1999). Catadioptric sensors, i.e. the combination of a camera and a mirror with a revolution shape, are nevertheless the only systems that can provide a panoramic image instantaneously without moving parts, and are thus well adapted to mobile robot applications. Depth perception can be retrieved using a set of images taken from at least two different viewpoints, either by moving the camera or by using several cameras at different positions. The use of camera motion to recover the geometrical structure of the scene and the camera positions is known as Structure From Motion (SFM). Excellent results have been obtained in recent years with SFM approaches (Pollefeys et al., 2004; Nister, 2001), but with off-line algorithms that need to process all the images simultaneously.
SFM is consequently not well adapted to the exploration of an unknown environment, because the robot needs to build the map and localize itself in this map during its world exploration. The on-line approach, known as SLAM (Simultaneous Localization and Mapping), is one of the most active research areas in robotics, since it can provide real autonomy to a mobile robot. Some interesting results have been obtained in the last few years, but principally to build 2D maps of indoor environments using laser range-finders. A survey of these algorithms can be found in the tutorials of Durrant-Whyte and Bailey (Durrant-Whyte & Bailey, 2006; Bailey & Durrant-Whyte, 2006).


Intelligent Robots and Systems | 2014

GPS-based preliminary map estimation for autonomous vehicle mission preparation

Yohan Dupuis; Pierre Merriaux; Peggy Subirats; Rémi Boutteau; Xavier Savatier; Pascal Vasseur

In this paper, we tackle the problem of map estimation from a small set of vehicular GPS traces collected from low-cost devices. Contrary to existing works, we rely only on GPS information. First, we propose a fast implementation of Kalman filtering for spline-based road modeling. Our approach demonstrates a significant boost in computation speed while maintaining a low estimation error. Second, we evaluate our algorithm on real-world data. Our estimation is compared to a high-grade Inertial Navigation System and to vectorial data gathered from major map providers. Our results suggest that good performance can be achieved by fusing multiple GPS traces collected from multiple vehicles and drivers.


International Conference on Intelligent Transportation Systems | 2013

Road-line detection and 3D reconstruction using fisheye cameras

Rémi Boutteau; Xavier Savatier; Fabien Bonardi; Jean-Yves Ertaud

In future Advanced Driver Assistance Systems (ADAS), smart monitoring of the vehicle environment is a key issue. Fisheye cameras have become popular as they provide a panoramic view with a few low-cost sensors. However, current ADAS systems have limited use as most of the underlying image processing has been designed for perspective views only. In this article we illustrate how the theoretical work done in omnidirectional vision over the past ten years can help tackle this issue. To do so, we have evaluated a simple algorithm for road-line detection based on the unified sphere model in real conditions. We first highlight the interest of using fisheye cameras in a vehicle, then outline our method, present our experimental results on the detection of lines in a set of 180 images, and finally show how the 3D position of the lines can be recovered by triangulation.
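The unified sphere model mentioned above back-projects every fisheye pixel onto a unit sphere, where a 3D line projects to a great circle, which is what makes line detection tractable despite the heavy distortion. A standard lifting formula (with mirror parameter ξ, acting on normalized image coordinates) can be sketched as:

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Back-project a normalized image point onto the unit sphere
    under the unified sphere model (xi: mirror/distortion parameter).

    Returns a 3D point of unit norm; a 3D line in the scene maps to a
    great circle of such points, enabling line detection on the sphere.
    """
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])
```

For ξ = 0 the model reduces to an ordinary pinhole camera: the lifted point is simply (x, y, 1) rescaled to unit length.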


Sensors | 2017

PHROG: A Multimodal Feature for Place Recognition

Fabien Bonardi; Samia Ainouz; Rémi Boutteau; Yohan Dupuis; Xavier Savatier; Pascal Vasseur

Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes has to be made with information coming from different visual sources, particularly different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing usual feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method designed to improve repeatability across spectral ranges. We conduct an evaluation of feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes and spectral ranges) with a Bag-of-Words approach. Our tests demonstrate that our method brings a significant improvement to image retrieval in a visual place recognition context, particularly when images from various spectral ranges, such as infrared and visible, must be associated: we have evaluated our approach using visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR) and Long Wavelength InfraRed (LWIR) imagery.
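The Bag-of-Words retrieval step used in the evaluation can be sketched generically: local descriptors from an image are quantized against a visual vocabulary (e.g. k-means centroids), producing a normalized histogram that serves as the image signature. This is a textbook sketch, not the PHROG feature itself.

```python
import numpy as np

def bow_histogram(descriptors, vocabulary):
    """Bag-of-Words signature of one image.

    descriptors: (N, D) local feature descriptors extracted from the image
    vocabulary:  (K, D) visual words (e.g. k-means centroids)
    Each descriptor votes for its nearest visual word; the vote counts
    are L1-normalized so images with different numbers of features
    remain comparable.
    """
    # Pairwise squared distances between descriptors and visual words.
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()
```

Two images are then matched by comparing their histograms (e.g. with cosine similarity), so retrieval quality hinges on the descriptors being repeatable across spectral ranges.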


International Conference on Pattern Recognition | 2016

A fast and accurate motion descriptor for human action recognition applications

Enjie Ghorbel; Rémi Boutteau; Jacques Boonaert; Xavier Savatier; Stéphane Lecoeuche

With the availability of the recent human skeleton extraction algorithm introduced by Shotton et al. [1], interest in skeleton-based action recognition methods has been renewed. Despite the importance of low latency in applications, the majority of recent approaches have not been evaluated in terms of computational cost. In this paper, a novel fast and accurate human action descriptor named Kinematic Spline Curves (KSC) is introduced. This descriptor is built by interpolating the kinematics of joints (position, velocity and acceleration). To overcome anthropometric and execution-rate variability, we propose the use of a skeleton normalization and a temporal normalization, respectively. For this purpose, a new temporal normalization method based on the Normalized Accumulated kinetic Energy (NAE) of the human skeleton is suggested. Finally, the classification step is performed using a linear Support Vector Machine (SVM). Experimental results on challenging benchmarks show the efficiency of our approach in terms of recognition accuracy and computational latency.
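The temporal-normalization idea, resampling a skeleton sequence so that each step covers an equal fraction of the accumulated kinetic energy rather than an equal fraction of wall-clock time, can be sketched as below. This is an assumed simplification of the NAE concept; the function name, the energy approximation from joint velocities, and the sample count are illustrative.

```python
import numpy as np

def nae_time_warp(times, joints, n_samples=20):
    """Resample a skeleton sequence by accumulated kinetic energy.

    times:  (T,) frame timestamps
    joints: (T, J, 3) skeleton joint positions per frame
    Kinetic energy per frame is approximated from joint velocities; its
    normalized cumulative sum defines a motion-based "clock". Returning
    the times at which equal energy fractions are reached cancels
    execution-rate differences between actors.
    """
    vel = np.gradient(joints, times, axis=0)       # (T, J, 3) velocities
    energy = (vel ** 2).sum(axis=(1, 2))           # ~ kinetic energy per frame
    nae = np.cumsum(energy)
    nae = nae / nae[-1]                            # normalized to (0, 1]
    # Times at which equal fractions of the total energy have been spent.
    targets = np.linspace(nae[0], 1.0, n_samples)
    return np.interp(targets, nae, times)
```

Fast executions and slow executions of the same gesture then resample to nearly identical sequences, since both spend their energy in the same order.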

Collaboration


Dive into Rémi Boutteau's collaborations.

Top Co-Authors

Jean-Yves Ertaud

Systems Research Institute


Samia Ainouz

Centre national de la recherche scientifique
