Datta Ramadasan
Blaise Pascal University
Publications
Featured research published by Datta Ramadasan.
international conference on computer vision | 2009
François Bardet; Thierry Chateau; Datta Ramadasan
This paper addresses real-time automatic visual tracking, labeling and classification of a variable number of objects, such as pedestrians and/or vehicles, under time-varying illumination conditions. The illumination and the multi-object configuration are jointly tracked through a Markov Chain Monte Carlo Particle Filter (MCMC PF). The measurement is provided by a static camera associated with a basic foreground/background segmentation. As a first contribution, we propose to jointly track the light source within the Particle Filter, considering it as an additional object. Illumination-dependent shadows cast by objects are modeled and treated as foreground, thus avoiding the difficult task of shadow segmentation. As a second contribution, we estimate object category as a random variable also tracked within the Particle Filter, thus unifying object tracking and classification into a single process. Real-time tracking results are shown and discussed on sequences involving various categories of users such as pedestrians, cars, light trucks and heavy trucks.
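To make the joint-state idea concrete, here is a minimal C++ sketch (not the authors' implementation; all names are invented for illustration) of one MCMC Particle Filter move: the light source is perturbed like any other object, and each object's category is part of the sampled state. The likelihood is only a placeholder.

    // Hypothetical sketch (not the authors' implementation): one MCMC Particle
    // Filter move where the light source is perturbed like any other object and
    // each object's category is part of the sampled state.
    #include <cmath>
    #include <random>
    #include <vector>

    enum class Category { Pedestrian, Car, LightTruck, HeavyTruck };

    struct ObjectState {
        double x, y, heading;  // ground-plane pose
        Category category;     // tracked jointly with the pose
    };

    struct SceneState {
        std::vector<ObjectState> objects;
        double lightAzimuth;   // illumination treated as one more object
        double lightElevation;
    };

    // Placeholder: the paper's likelihood would compare the foreground mask
    // predicted from the objects and their light-dependent cast shadows with
    // the observed background-subtraction mask; here it only returns a constant.
    double logLikelihood(const SceneState&) { return 0.0; }

    SceneState mcmcStep(const SceneState& current, std::mt19937& rng) {
        std::normal_distribution<double> jitter(0.0, 0.1);
        std::uniform_real_distribution<double> unit(0.0, 1.0);
        std::uniform_int_distribution<int> pick(0, static_cast<int>(current.objects.size()));
        SceneState proposal = current;

        int idx = pick(rng);
        if (idx == static_cast<int>(current.objects.size())) {
            // Propose a move of the light source instead of a physical object.
            proposal.lightAzimuth   += jitter(rng);
            proposal.lightElevation += jitter(rng);
        } else {
            proposal.objects[idx].x += jitter(rng);
            proposal.objects[idx].y += jitter(rng);
            // Occasionally propose a category switch as well.
            if (unit(rng) < 0.1)
                proposal.objects[idx].category = static_cast<Category>(
                    std::uniform_int_distribution<int>(0, 3)(rng));
        }

        // Metropolis acceptance on the log-likelihood ratio.
        double a = logLikelihood(proposal) - logLikelihood(current);
        return (std::log(unit(rng)) < a) ? proposal : current;
    }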
international conference on image processing | 2015
Datta Ramadasan; Thierry Chateau; Marc Chevaldonné
We propose a new real-time CSLAM (Constrained Simultaneous Localization And Mapping) algorithm, named DCSLAM (Dynamically Constrained SLAM), designed to dynamically adapt each optimization to a variable number of parameter families and heterogeneous constraints. An automatic method is used to generate, from an exhaustive list of constraints, a dedicated optimization algorithm. This is, to our knowledge, the only implementation that combines flexibility and performance. The proposed experiments show the effectiveness of our approach in terms of accuracy and execution time compared to the state of the art on several public benchmarks of varying complexity. An augmented reality application mixing heterogeneous objects and constraints is also available.
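As an illustration of what generating a dedicated optimizer from a list of constraints can look like, here is a hypothetical C++ sketch (not the DCSLAM source; all type names are invented): the set of constraint families is fixed at compile time through a variadic template, so the solver is specialized for exactly those constraints without virtual dispatch.

    // Hypothetical illustration (not the DCSLAM source; all type names are
    // invented): the constraint families are known at compile time, so a
    // solver specialized for exactly these constraints can be generated.
    #include <tuple>
    #include <utility>
    #include <vector>

    struct ReprojectionConstraint {
        // residual of a 3D point observed from a camera pose
        double operator()(const double* pose, const double* point) const;
    };
    struct OdometryConstraint {
        // residual between two consecutive poses
        double operator()(const double* poseA, const double* poseB) const;
    };

    template <typename... Constraints>
    class ConstrainedSolver {
        // One container per constraint family, stored in a tuple.
        std::tuple<std::vector<Constraints>...> constraints_;
    public:
        template <typename C>
        void add(C c) { std::get<std::vector<C>>(constraints_).push_back(std::move(c)); }
        // buildNormalEquations() / solve() would iterate the tuple with a fold
        // expression, accumulating each family's Jacobian blocks.
    };

    // A SLAM problem mixing reprojection and odometry terms:
    using MySolver = ConstrainedSolver<ReprojectionConstraint, OdometryConstraint>;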
ieee intelligent vehicles symposium | 2009
François Bardet; Thierry Chateau; Datta Ramadasan
This paper addresses real-time automatic visual tracking and classification of a variable number of vehicles in traffic. This off-board surveillance device may cooperate with on-board Advanced Driver Assistance Systems (ADAS), extending their measurement range to areas of the road that are not in the car sensors' field of view (in a curve or at an intersection). Tracking results are also useful for statistical trajectory analysis, devoted to understanding and improving user-user and user-infrastructure interactions. As a main contribution, this paper proposes to unify vehicle tracking and classification in a single processing step. This paper also introduces an anisotropic vehicle distance measurement based on the vehicle's 3D geometric model. Real-time tracking results are shown and discussed on road sequences involving various types of vehicles such as motorcycles, cars, light trucks and heavy trucks.
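A hedged sketch of what an anisotropic, model-based distance could look like in C++ (the paper's actual measure may differ; the function and struct names are invented): the offset between a track and a detection is rotated into the vehicle frame and normalized by the 3D model's length and width.

    // Hedged sketch (the paper's actual measure may differ): the offset between
    // a track and a detection is rotated into the vehicle frame and normalized
    // by the 3D model's length and width, so longitudinal and lateral errors
    // are weighted differently.
    #include <cmath>

    struct VehicleModel { double length, width; };  // per-class 3D box footprint

    double anisotropicDistance(double dx, double dy, double heading,
                               const VehicleModel& m) {
        // Rotate the world-frame offset into the vehicle frame.
        double lon =  std::cos(heading) * dx + std::sin(heading) * dy;
        double lat = -std::sin(heading) * dx + std::cos(heading) * dy;
        // Normalize by the half-dimensions (Mahalanobis-like form).
        double a = lon / (0.5 * m.length);
        double b = lat / (0.5 * m.width);
        return std::sqrt(a * a + b * b);
    }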
international conference on intelligent transportation systems | 2016
Eric Royer; François Marmoiton; Serge Alizon; Datta Ramadasan; Morgan Slade; Ange Nizard; Michel Dhome; Benoit Thuilot; Florent Bonjean
This article presents a large-scale, long-duration experiment carried out as part of the French FUI VipaFleet project. A driverless shuttle was operated for three months on an industrial site, totaling nearly 1500 km of autonomous travel and 300 passengers transported. The localization relies mainly on a multi-camera system and a visual SLAM algorithm. Besides the vision algorithms themselves, this article discusses the practical aspects of a large-scale experiment and the lessons learned from it.
british machine vision conference | 2015
Datta Ramadasan; Marc Chevaldonné; Thierry Chateau
In this paper, we propose a new algorithm, named MCSLAM (Multiple Constrained SLAM), designed to dynamically adapt each optimization to the variable number of parameter families (sensor pose, roughly known object poses and dimensions, delay between sensors, ...) and heterogeneous constraints (reprojection error, distance from points or edges to an object surface, acceleration, ...). The proposed algorithm is based on three contributions: 1) a new Levenberg-Marquardt C++ library named LMA, which is freely available; 2) an architecture allowing a high level of flexibility and performance; 3) the real-time use of a temporal spline curve as the parametric trajectory model, which provides an efficient way to add heterogeneous constraints within the optimization.

The main idea of LMA is to provide a simple interface with a non-intrusive mechanism of adaptation to a problem while maintaining good performance. LMA works as a meta-program using C++ templates to analyse at compile time (CT) the problem to optimize from a list of C++ functors. The parameters are deduced from the functors' arguments, and the degrees of freedom (dof) of each parameter are defined by the user. LMA then generates a data structure that stores the functors and the parameters in tuples according to the number of parameter families and constraints. The resolution of the normal equations is written efficiently using a sparse representation constructed with a set of small matrices of static size. The LMA library solves the normal equations using dense Cholesky or a sparse PCG specially designed to handle small matrices of static size. It also implements the classical optimization tricks to be effective on small, medium, and large problems, as well as common features such as automatic differentiation and robust cost functions.

The continuous-time representation of the trajectory is used to deal with constraints on the motion. This allows mixing data from many unsynchronised sensors and evolution models. To apply constraints on the 3D structure of the environment, 3D models of coarsely known shapes are used. To represent the motion, we use the uniform cumulative b-spline described by Lovegrove et al. [2], but we separate position and orientation into two different splines and we use the Rodrigues formula to compute the exponential and logarithm of the SO(3) group. Moreover, we adapt the keyframe-based SLAM to deal with the spline: key-frames are constrained to be on the spline. We also use every inter-key-frame pose computed by the localization process to apply a weak constraint on the spline. A temporal sliding window of 3 seconds is used to select, from the SLAM and the IMU, the most recent data used to constrain the spline.

The MCSLAM algorithm is based on a graph composed of constraints, parameters and dependencies. First, the problem configuration is analysed at compile time to generate a specialized LMA solver, whose cost function to minimize is the sum of the constraints C dynamically
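As one concrete piece of the machinery described above, here is a short C++ sketch of the SO(3) exponential via the Rodrigues formula, as used to handle the orientation spline separately from the position spline; Eigen is assumed here, and the actual LMA code may be organized differently.

    // Sketch of the SO(3) exponential via the Rodrigues formula. Eigen is
    // assumed; the actual LMA code may be organized differently.
    #include <Eigen/Dense>
    #include <cmath>

    Eigen::Matrix3d expSO3(const Eigen::Vector3d& w) {
        const double theta = w.norm();
        Eigen::Matrix3d W;            // skew-symmetric matrix [w]_x
        W <<     0.0, -w.z(),  w.y(),
              w.z(),    0.0, -w.x(),
             -w.y(),  w.x(),    0.0;
        if (theta < 1e-8)
            return Eigen::Matrix3d::Identity() + W;  // first-order approximation
        return Eigen::Matrix3d::Identity()
             + (std::sin(theta) / theta) * W
             + ((1.0 - std::cos(theta)) / (theta * theta)) * W * W;
    }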
ieee virtual reality conference | 2015
Datta Ramadasan; Marc Chevaldonné; Thierry Chateau
This paper presents a new approach for multi-object tracking from a video camera moving in an unknown environment. The tracking involves static objects of different known shapes, whose poses and sizes are determined online. For augmented reality applications, objects must be precisely tracked even if they are far from the camera or if they are hidden. Camera poses are computed using simultaneous localization and mapping (SLAM) based on a bundle adjustment process to optimize the problem parameters. We propose to include in an incremental bundle adjustment the parameters of the observed objects as well as the camera poses and 3D points. We show, through the example of 3D models of basic shapes (planes, parallelepipeds, cylinders and spheres) coarsely initialized online using a manual selection, that the joint optimization of parameters constrains the 3D points to approach the objects, and also constrains the objects to fit the 3D points. Moreover, we developed a generic and optimized library to solve this modified bundle adjustment and demonstrate the high performance of our solution compared to state-of-the-art alternatives. Real-time augmented reality experiments demonstrate the accuracy and robustness of our method.
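To illustrate how an object can enter the bundle adjustment, here is a hypothetical C++ residual functor (not the paper's code; the struct name is invented) measuring the signed distance from a 3D map point to a sphere's surface; optimizing it jointly with the reprojection terms pulls the points toward the object and the object toward the points.

    // Hypothetical residual functor (not the paper's code): signed distance
    // from a 3D map point to a sphere's surface.
    #include <cmath>

    struct SphereSurfaceResidual {
        // point: 3D map point (x, y, z); sphere: centre (cx, cy, cz) and radius r.
        // Both parameter blocks are refined by the bundle adjustment.
        double operator()(const double point[3], const double sphere[4]) const {
            const double dx = point[0] - sphere[0];
            const double dy = point[1] - sphere[1];
            const double dz = point[2] - sphere[2];
            return std::sqrt(dx * dx + dy * dy + dz * dz) - sphere[3];
        }
    };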
intelligent robots and systems | 2010
Nadir Karam; Hicham Hadj-Abdelkader; Clement Deymier; Datta Ramadasan
We address the problem of vehicle (mobile robot) navigation by combining vision-based reconstruction and localization with metric information given by proprioceptive sensors such as the odometry sensor. The proposed approach extends the navigation system based on monocular vision [1], which is able to build a map and localize the vehicle in real time using only one camera. An extended Kalman filter is used to integrate odometric information to estimate the vehicle position. This position is updated by the localization obtained from the vision system. Experimental results carried out with an urban electric vehicle show the improvement of the navigation system and its robustness to the temporary loss of images.
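A minimal C++ sketch of such a fusion, assuming Eigen and a planar pose state (x, y, theta); this is not the authors' implementation. Odometry drives the EKF prediction, and the vision-based localization provides a full-pose update.

    // Minimal sketch, assuming Eigen and a planar pose state; not the authors'
    // implementation. Odometry drives the prediction and the vision-based
    // localization provides a full-pose update (H = I).
    #include <Eigen/Dense>
    #include <cmath>

    struct PoseEKF {
        Eigen::Vector3d x = Eigen::Vector3d::Zero();     // [x, y, theta]
        Eigen::Matrix3d P = Eigen::Matrix3d::Identity();

        // Prediction from odometry: forward displacement ds, heading change dtheta.
        void predict(double ds, double dtheta, const Eigen::Matrix3d& Q) {
            const double th = x(2);
            x(0) += ds * std::cos(th);
            x(1) += ds * std::sin(th);
            x(2) += dtheta;                              // angle wrapping omitted
            Eigen::Matrix3d F = Eigen::Matrix3d::Identity();
            F(0, 2) = -ds * std::sin(th);
            F(1, 2) =  ds * std::cos(th);
            P = F * P * F.transpose() + Q;
        }

        // Correction with the pose returned by the monocular localization.
        void update(const Eigen::Vector3d& z, const Eigen::Matrix3d& R) {
            const Eigen::Matrix3d S = P + R;             // innovation covariance
            const Eigen::Matrix3d K = P * S.inverse();   // Kalman gain
            x += K * (z - x);
            P = (Eigen::Matrix3d::Identity() - K) * P;
        }
    };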
Archive | 2012
Michel Dhome; Eric Royer; Maxime Lhuillier; Datta Ramadasan; Nadir Karam; Clement Deymier; Vadim Litvinov; Hicham Hadj Abdelkader; Thierry Chateau; Jean-Marc Lavest; François Marmoiton; Serge Alizon; Laurent Malatere; Pierre Lébraly
international conference on computer vision theory and applications | 2009
François Bardet; Thierry Chateau; Datta Ramadasan
Journées francophones des jeunes chercheurs en vision par ordinateur | 2015
Datta Ramadasan; Marc Chevaldonné; Thierry Chateau