Damien Vivet
Blaise Pascal University
Publications
Featured research published by Damien Vivet.
Sensors | 2013
Damien Vivet; Paul Checchin; Roland Chapuis
Rotating radar sensors are perception systems rarely used in mobile robotics. This paper is concerned with the use of a mobile ground-based panoramic radar sensor which is able to deliver both the distance and velocity of multiple targets in its surroundings. The consequence of using such a sensor in high-speed robotics is the appearance of both geometric and Doppler velocity distortions in the collected data. These effects are, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. Our purpose is to study and use data distortion and the Doppler effect as sources of information in order to estimate the vehicle's displacement. The linear and angular velocities of the mobile robot are estimated by analyzing the distortion of the measurements provided by the panoramic Frequency Modulated Continuous Wave (FMCW) radar, called IMPALA. Without the use of any proprioceptive sensor, these estimates are then used to build the trajectory of the vehicle and the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle moving at high speed.
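The Doppler-based ego-velocity idea can be illustrated with a toy sketch: assuming each radar beam returns the Doppler radial velocity of static targets, the sensor's planar velocity follows from a least-squares fit of a cosine profile over azimuth. This is a simplification of the paper's full distortion analysis, and the function name and sign convention below are illustrative assumptions.

```python
import numpy as np

def ego_velocity_from_doppler(azimuths, radial_velocities):
    """Least-squares fit of the sensor's planar velocity (vx, vy) from
    per-beam Doppler radial velocities of static targets.  For a static
    scene, a beam at azimuth theta observes
        v_r(theta) = -(vx * cos(theta) + vy * sin(theta)).
    """
    A = -np.column_stack([np.cos(azimuths), np.sin(azimuths)])
    v, *_ = np.linalg.lstsq(A, radial_velocities, rcond=None)
    return v  # (vx, vy) in the sensor frame

# Synthetic check: a sensor moving at 5 m/s along x through a static scene.
theta = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
vr = -(5.0 * np.cos(theta))
vx, vy = ego_velocity_from_doppler(theta, vr)
```

In practice the moving targets would first have to be rejected (e.g. as outliers of this fit) before the static returns are used for velocimetry.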
international conference on robotics and automation | 2012
Damien Vivet; Paul Checchin; Roland Chapuis
The use of a rotating range sensor in high-speed robotics creates distortions in the collected data. Such an effect is, in the majority of studies, ignored or considered as noise and then corrected based on proprioceptive sensors or localization systems. In this study, we consider that the distortion contains information about the vehicle's displacement. We propose to extract this information from the distortion without any information other than exteroceptive sensor data. The only sensor used for this work is a panoramic Frequency Modulated Continuous Wave (FMCW) radar called K2Pi. No odometer, gyrometer or other proprioceptive sensor is used. The idea is to resort to velocimetry by analyzing the distortion of the measurements. As a result, the linear and angular velocities of the mobile robot are estimated and used to build, without any other sensor, the trajectory of the vehicle and then the radar map of outdoor environments. In this paper, radar-only localization and mapping results are presented for a ground vehicle and a riverbank application. This work can easily be extended to other slowly rotating range sensors.
EURASIP Journal on Advances in Signal Processing | 2012
Damien Vivet; Paul Checchin; Roland Chapuis; Patrice Faure; Raphaël Rouveure; Marie-Odile Monod
The detection and tracking of moving objects (DATMO) in an outdoor environment from a mobile robot are difficult tasks because of the wide variety of dynamic objects. A reliable discrimination of mobile and static detections without any prior knowledge is often conditioned by a good position estimate obtained using Global Positioning System/Differential Global Positioning System (GPS/DGPS), proprioceptive sensors, inertial sensors or even Simultaneous Localization and Mapping (SLAM) algorithms. In this article, a solution to the DATMO problem is presented that performs this task using only a microwave radar sensor. Indeed, this sensor provides images of the environment from which Doppler information can be extracted and interpreted in order to obtain not only the velocities of detected objects but also the robot's own velocity.
International Journal of Advanced Robotic Systems | 2013
Damien Vivet; Franck Gérossier; Paul Checchin; Laurent Trassoudaine; Roland Chapuis
This paper is concerned with robotic applications using a ground-based radar sensor for simultaneous localization and mapping problems. In mobile robotics, radar technology is interesting because of its long range and the robustness of radar waves to atmospheric conditions, making these sensors well-suited for extended outdoor robotic applications. Two localization and mapping approaches using data obtained from a 360° field of view microwave radar sensor are presented and compared. The first method is a trajectory-oriented simultaneous localization and mapping technique, which makes no landmark assumptions and avoids the data association problem. The estimation of the ego-motion makes use of the Fourier-Mellin transform for registering radar images in a sequence, from which the rotation and translation of the sensor motion can be estimated. The second approach exploits a consequence of using a rotating range sensor in high-speed robotics: in such a situation, the combination of movements creates distortions in the collected data. Velocimetry is achieved here by explicitly analyzing these measurement distortions. As a result, the trajectory of the vehicle and then the radar map of outdoor environments can be obtained. The evaluation of experimental results obtained by the two methods is presented on real-world data from a vehicle moving at 30 km/h over a 2.5 km course.
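For pure translation, the Fourier-Mellin registration at the core of the first method reduces to phase correlation: the normalized cross-power spectrum of two images has a single peak at the displacement. The sketch below shows only that translation step; rotation and scale, which the full transform handles in log-polar coordinates, are omitted.

```python
import numpy as np

def phase_correlation(ref, moved):
    """Integer-pixel translation d = (dy, dx) such that
    moved(x) ~= ref(x - d), found as the peak of the
    normalized cross-power spectrum."""
    cross = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))    # b is a shifted by (3, -5)
dy, dx = phase_correlation(a, b)
```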
international conference on control, automation, robotics and vision | 2010
Damien Vivet; Paul Checchin; Roland Chapuis
This paper is concerned with the Simultaneous Localization And Mapping (SLAM) application with a mobile robot moving in a structured environment using data obtained from rotating sensors such as radars or lasers. A line-based EKF-SLAM (EKF stands for Extended Kalman Filter) algorithm is presented, which is able to deal with data that cannot be considered instantaneous when compared with the dynamics of the vehicle. When the sensor motion is fast relative to the measurement time, scans become locally distorted. A mapping solution is presented that includes the sensor motion in the observation model by taking into account the dynamics of the system. Experimental results with real-world 2D laser scanner data are presented. Moreover, a performance evaluation of the results is carried out. A quantitative performance evaluation method is proposed for dealing with a 2D line map when a ground truth is available. It is based on bipartite graph matching and combines several criteria that are described. A comparative study is made between the output of the proposed method and the data processed without taking distortion phenomena into account.
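The evaluation idea of matching estimated primitives to ground truth via bipartite graph matching can be sketched with the Hungarian algorithm. Here line segments are reduced to their midpoints and gated by a distance threshold; both choices are simplifications for illustration, not the paper's actual criteria.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_segments(est, gt, max_cost=1.0):
    """One-to-one assignment between estimated and ground-truth line
    segments (reduced to midpoints here) via the Hungarian algorithm,
    gated so that far-apart pairs stay unmatched."""
    cost = np.linalg.norm(est[:, None, :] - gt[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    keep = cost[rows, cols] < max_cost
    return [(int(r), int(c)) for r, c in zip(rows[keep], cols[keep])]

# Three estimated midpoints against two ground-truth midpoints:
est = np.array([[0.0, 0.1], [5.0, 5.0], [9.0, 0.0]])
gt = np.array([[0.0, 0.0], [5.2, 5.1]])
pairs = match_segments(est, gt)   # the third estimate stays unmatched
```

From such a matching, precision- and recall-like criteria over the matched pairs can be combined into a map-quality score.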
Pattern Recognition | 2017
Dieudonné Fabrice Atrevi; Damien Vivet; Florent Duculty; Bruno Emile
In this paper, we propose a framework to automatically extract the 3D pose of an individual from a single silhouette image obtained with a classical low-cost camera without any depth information. By pose, we mean the configuration of the human bones needed to reconstruct a 3D skeleton representing the 3D posture of the detected human. Our approach relies on prior learned correspondences between silhouettes and skeletons extracted from simulated 3D human models publicly available on the internet. The main advantages of such an approach are that silhouettes can be extracted very easily from video, and 3D human models can be animated using motion capture data in order to quickly build training data for any movement. In order to match detected silhouettes with simulated silhouettes, we compare geometric invariant moments. Our results show that the proposed method provides very promising results with a very low processing time.
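As an illustration of silhouette matching with geometric invariant moments, the sketch below computes the first four of Hu's classical moment invariants, which are unchanged under translation, scale and in-plane rotation of a binary silhouette. Hu moments are one standard choice; the paper's exact descriptor set may differ.

```python
import numpy as np

def hu_moments(silhouette):
    """First four of Hu's moment invariants of a binary silhouette,
    invariant to translation, scale and in-plane rotation."""
    ys, xs = np.nonzero(silhouette)
    m00 = float(len(xs))
    x, y = xs - xs.mean(), ys - ys.mean()

    def eta(p, q):  # normalized central moment
        return (x ** p * y ** q).sum() / m00 ** (1 + (p + q) / 2)

    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    return np.array([
        n20 + n02,
        (n20 - n02) ** 2 + 4 * n11 ** 2,
        (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2,
        (n30 + n12) ** 2 + (n21 + n03) ** 2,
    ])

# Two identical square silhouettes at different positions share invariants,
# so a detected silhouette can be matched to simulated ones by comparing
# these vectors (e.g. by Euclidean distance).
a = np.zeros((32, 32)); a[4:12, 4:12] = 1
b = np.zeros((32, 32)); b[18:26, 10:18] = 1
```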
international conference on computer vision theory and applications | 2016
Fabrice Dieudonné Atrevi; Damien Vivet; Florent Duculty; Bruno Emile
This work focuses on the problem of automatically extracting human 3D poses from a single 2D image. By pose, we mean the configuration of the human bones needed to reconstruct a 3D skeleton representing the 3D posture of the detected human. This problem is highly non-linear in nature and confounds standard regression techniques. Our approach relies on prior learned correspondences between silhouettes and skeletons extracted from 3D human models. In order to match detected silhouettes with simulated silhouettes, we use the Krawtchouk geometric moments as shape descriptors. We provide quantitative results for image retrieval across different actions and subjects, captured from differing viewpoints. We show that our approach gives promising results for 3D pose extraction from a single silhouette.
international conference on intelligent transportation systems | 2013
Yadu Prabhakar; Peggy Subirats; Christele Lecomte; Damien Vivet; Eric Violette; Abdelaziz Bensrhair
The safety of Powered Two Wheelers (PTWs) is an issue of concern for public authorities and road administrators around the world. In 2011, official figures showed that PTWs were estimated to represent only 2% of the total traffic but 30% of road deaths in France. The uncertainty in these values is due to the fact that PTWs are particularly difficult to detect because of their unknown interactions with the other vehicles on the road. To date, there is no definitive overall solution to this problem that uses a single sensor to detect and count this category of vehicle in traffic. In this paper, we present a robust method for detecting and counting PTWs in real time and in real traffic, named the Last Line Check (LLC) method. This method can adapt to the angle at which the laser scanner is tilted with respect to the road and can estimate the non-observed values in the data, yielding data accurate enough to ease the extraction process. After extraction, a Support Vector Machine (SVM) is used to classify the laser scanner data. The approach gives encouraging results, with a precision of 98.5% even when traffic moves at up to 130 km/h.
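The SVM classification stage can be sketched on synthetic data. The features below, width and length of the extracted object profile, are an illustrative assumption and not the paper's actual feature set; the point is only the shape of the train/predict pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-object features from laser-scanner profiles:
# [width_m, length_m].  PTWs are narrow and short; other vehicles larger.
rng = np.random.default_rng(0)
ptw = rng.normal([0.8, 2.0], 0.1, size=(50, 2))
car = rng.normal([1.8, 4.2], 0.2, size=(50, 2))
X = np.vstack([ptw, car])
y = np.array([0] * 50 + [1] * 50)              # 0 = PTW, 1 = other vehicle

clf = SVC(kernel='rbf').fit(X, y)
pred = clf.predict([[0.75, 1.9], [1.9, 4.5]])  # PTW-like, then car-like
```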
robotics and biomimetics | 2012
Damien Vivet; Clement Deymier; Benoit Priot; Vincent Calmettes
This paper describes a full 6D localization algorithm based on a probabilistic motion field. The motion field is obtained by an adaptation of the video compression algorithm known as block matching, which provides a sparse optical flow. Such a technique is very fast and allows real-time applications. The image is decomposed into a grid of rectangular blocks, and for each block a relative displacement between consecutive images is calculated. The obtained motion flow is analyzed probabilistically in order to extract the uncertainty of each movement detection and to obtain sub-pixel information about the block's movement. This motion flow is then used to obtain a full 6-degrees-of-freedom camera localization using techniques based on epipolar geometry, without requiring any 3D landmark reconstruction. The method is applied to a real data set obtained from a mobile robot and compared with SIFT and Harris detection.
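The block-matching step can be sketched as an exhaustive SAD search: each block of the previous frame is compared against shifted candidates in the current frame. This is integer-pixel only; the paper's probabilistic, sub-pixel analysis of the flow is omitted.

```python
import numpy as np

def block_matching_flow(prev, curr, block=8, search=4):
    """Sparse optical flow by exhaustive block matching: for each block
    of `prev`, find the (dy, dx) within +/-`search` pixels minimizing
    the sum of absolute differences (SAD) against `curr`."""
    H, W = prev.shape
    flow = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            ref = prev[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= H - block and 0 <= x <= W - block:
                        sad = np.abs(curr[y:y + block, x:x + block] - ref).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            flow[(by, bx)] = best
    return flow

rng = np.random.default_rng(1)
f0 = rng.random((32, 32))
f1 = np.roll(f0, shift=(2, -3), axis=(0, 1))  # global motion of (2, -3)
flow = block_matching_flow(f0, f1)
```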
ieee intelligent vehicles symposium | 2013
Clement Deymier; Damien Vivet; Thierry Chateau
This paper presents a fast method to estimate the probability of occupancy of a point in space from a huge set of 3D rays represented in a common reference frame. These data can come from any range-finding sensor such as a Lidar, a Kinect or a Velodyne. The key idea is to consider that the occupancy of a 3D point is linked to 1) the number of 3D points belonging to a local volume around the point and 2) the number of rays crossing through the same volume. We propose a probabilistic non-parametric framework based on a KNN estimator. The major contribution of the paper is an original solution for searching rays in the neighborhood of a 3D point with a five-dimensional binary tree that can handle several million measurements. Experiments show the relevance of the proposed method in terms of both accuracy and computation time. Moreover, the resulting method has been applied to three different 3D sensors: a Kinect, a 3D Lidar (Velodyne HDL-64E) and a mono-planar Lidar.
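The hit-versus-crossing occupancy idea can be sketched as follows. For clarity, the paper's five-dimensional tree over rays is replaced here by a brute-force point-to-segment distance, and only the endpoint search uses a k-d tree; the real contribution is precisely avoiding this brute-force scan.

```python
import numpy as np
from scipy.spatial import cKDTree

def occupancy(query, origins, ends, radius=0.2):
    """Occupancy estimate at `query`: ratio of beam endpoints falling
    inside a ball of `radius` (hits) to rays whose segment passes
    through the same ball (crossings)."""
    hits = len(cKDTree(ends).query_ball_point(query, radius))
    d = ends - origins
    t = np.clip(np.einsum('ij,ij->i', query - origins, d)
                / np.einsum('ij,ij->i', d, d), 0.0, 1.0)
    closest = origins + t[:, None] * d        # nearest point on each segment
    crossings = (np.linalg.norm(closest - query, axis=1) < radius).sum()
    return hits / max(crossings, 1)           # 1 = always hit, 0 = free space

# Two rays from the origin along x: one ends at the query (a hit),
# one passes through it and terminates beyond (evidence of free space).
origins = np.zeros((2, 3))
ends = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
p = occupancy(np.array([1.0, 0.0, 0.0]), origins, ends)
```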