Aveek Das
SRI International
Publications
Featured research published by Aveek Das.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2011
Sujit Kuthirummal; Aveek Das; Supun Samarasekera
We present a novel, computationally efficient approach to obstacle detection that is applicable to both structured (e.g. indoor, road) and unstructured (e.g. off-road, grassy terrain) environments. In contrast to previous works that attempt to explicitly identify obstacles, we instead detect the scene regions that are traversable, i.e. safe for the robot to reach from its current position. Traversability is defined on a 2D grid of cells. Given 3D points, we map them to individual cells and compute histograms of the elevations of the points in each cell. This elevation information is then used in a graph-based algorithm to label all traversable cells. In this manner, positive and negative obstacles, as well as unknown regions, are implicitly detected and avoided. Our notion of traversability does not make any flat-world assumptions and does not need sensor pitch-roll compensation. It also accounts for overhanging structures like tree branches. We demonstrate that our approach can be used with both lidar and stereo sensors even though the two sensors differ in their resolution and accuracy. We present several results from our real-time implementation in realistic environments using both lidar and stereo.
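The grid-based traversability labeling described above can be sketched roughly as follows. The cell size, elevation-step threshold, 4-connectivity, and flood-fill formulation are illustrative assumptions, not the paper's actual parameters or its histogram representation:

```python
import numpy as np
from collections import deque

def label_traversable(points, cell_size=0.5, grid_dim=40, max_step=0.15):
    """Sketch of grid-based traversability labeling.

    points: (N, 3) array of 3D points (x, y, z) in the robot frame, with
    the robot assumed to sit at the grid center. All thresholds here are
    illustrative, not the paper's values.
    """
    half = grid_dim // 2
    # Per-cell elevation statistics (a simple stand-in for the per-cell
    # elevation histograms described in the abstract).
    min_z = np.full((grid_dim, grid_dim), np.inf)
    max_z = np.full((grid_dim, grid_dim), -np.inf)
    for x, y, z in points:
        i, j = int(x / cell_size) + half, int(y / cell_size) + half
        if 0 <= i < grid_dim and 0 <= j < grid_dim:
            min_z[i, j] = min(min_z[i, j], z)
            max_z[i, j] = max(max_z[i, j], z)

    # A cell is locally drivable if its elevation spread is small; cells
    # with no points stay "unknown" and are never marked traversable.
    observed = np.isfinite(min_z)
    drivable = observed & ((max_z - min_z) < max_step)

    # Graph-based labeling: flood-fill from the robot's own cell so that
    # only regions reachable without crossing an obstacle (or a large
    # elevation step, e.g. a negative obstacle) are labeled traversable.
    traversable = np.zeros_like(drivable)
    start = (half, half)
    if drivable[start]:
        queue = deque([start])
        traversable[start] = True
        while queue:
            i, j = queue.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < grid_dim and 0 <= nj < grid_dim
                        and drivable[ni, nj] and not traversable[ni, nj]
                        and abs(min_z[ni, nj] - min_z[i, j]) < max_step):
                    traversable[ni, nj] = True
                    queue.append((ni, nj))
    return traversable
```

Because labeling grows outward from the robot's cell, unknown regions and regions cut off by obstacles are implicitly avoided without ever being classified as "obstacle".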
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2014
Han-Pang Chiu; Aveek Das; Phillip Miller; Supun Samarasekera; Rakesh Kumar
This paper proposes a novel vision-aided navigation approach that continuously estimates precise 3D absolute pose for aerial vehicles, using only inertial measurements and monocular camera observations. Our approach is able to provide accurate navigation solutions under long-term GPS outage by tightly incorporating absolute geo-registered information into two kinds of visual measurements: 2D-3D tie-points and geo-registered feature tracks. 2D-3D tie-points are established by finding feature correspondences that align an aerial video frame to a 2D geo-referenced image rendered from the 3D terrain database. These measurements provide global information to correct accumulated error in the navigation estimate. Geo-registered feature tracks are generated by associating features across consecutive frames. They enable the propagation of 3D geo-referenced values to further improve the pose estimation. All sensor measurements are fully optimized in a smoother-based inference framework, which achieves efficient relinearization and real-time estimation of navigation states and their covariances over a constant-length sliding window. Experimental results demonstrate that our approach provides accurate and consistent aerial navigation solutions in several large-scale GPS-denied scenarios.
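As a rough illustration of how a 2D-3D tie-point constrains absolute pose, the measurement can be written as a pinhole reprojection residual between the observed 2D feature and the projected geo-referenced 3D point. The function below is a hypothetical sketch; the names, frame conventions, and pose parametrization are assumptions for illustration, not the paper's notation:

```python
import numpy as np

def tiepoint_residual(R, t, K, tiepoints):
    """Reprojection residual for 2D-3D tie-points (illustrative sketch).

    R (3x3) and t (3,) map world (geo-referenced) coordinates into the
    camera frame; K is the 3x3 intrinsic matrix; tiepoints is a list of
    (uv, Xw) pairs, each a 2D image observation matched to a 3D
    geo-referenced point.
    """
    residuals = []
    for uv, Xw in tiepoints:
        Xc = R @ np.asarray(Xw, dtype=float) + t   # point in camera frame
        u, v, w = K @ Xc                           # pinhole projection
        residuals.append(np.asarray(uv, dtype=float) - np.array([u / w, v / w]))
    # In a smoother-based framework, residuals like these would enter the
    # factor graph as absolute measurements that correct accumulated drift.
    return np.concatenate(residuals)
```

A least-squares solver driving these residuals to zero recovers an absolute (geo-registered) pose, which is what lets the system bound drift during long GPS outages.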
British Machine Vision Conference (BMVC) | 2014
S. Hussain Raza; Omar Javed; Aveek Das; Harpreet S. Sawhney; Hui Cheng; Irfan A. Essa
We present an algorithm to estimate depth in dynamic video scenes. We propose to learn and infer depth in videos from appearance, motion, occlusion boundaries, and the geometric context of the scene. Using our method, depth can be estimated from unconstrained videos without requiring camera pose estimation, and in the presence of significant background/foreground motion. We start by decomposing a video into spatio-temporal regions. For each spatio-temporal region, we learn the relationship of depth to visual appearance, motion, and geometric classes. We then infer the depth of new scenes using a piecewise planar parametrization estimated within a Markov random field (MRF) framework, combining the learned appearance-to-depth mappings with occlusion-boundary-guided smoothness constraints. Subsequently, we perform temporal smoothing to obtain temporally consistent depth maps. To evaluate our depth estimation algorithm, we provide a novel dataset with ground-truth depth for outdoor video scenes. We present a thorough evaluation of our algorithm on our new dataset and on the publicly available Make3D static image dataset.
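The final temporal-smoothing step can be approximated, very loosely, by exponentially blending each frame's depth map with the running estimate. This is a minimal stand-in for the paper's temporal-consistency step, with an assumed blending weight:

```python
import numpy as np

def temporally_smooth(depth_frames, alpha=0.7):
    """Exponential smoothing across per-frame depth maps (illustrative).

    depth_frames: iterable of 2D depth maps for consecutive frames.
    alpha is an assumed weight on the previous estimate, not a value
    from the paper.
    """
    smoothed = []
    prev = None
    for d in depth_frames:
        d = np.asarray(d, dtype=float)
        # Blend the running estimate with the current per-frame depth.
        prev = d if prev is None else alpha * prev + (1 - alpha) * d
        smoothed.append(prev)
    return smoothed
```

The effect is that per-frame estimation noise is damped while slow depth changes (e.g. camera or object motion) still pass through.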
Proceedings of SPIE | 2013
Aveek Das; Dinesh Thakur; James F. Keller; Sujit Kuthirummal; Mihail Pivtoraiko
Autonomous robotic "fetch" operation, where a robot is shown a novel object and then asked to locate it in the field, retrieve it and bring it back to the human operator, is a challenging problem that is of interest to the military. The CANINE competition presented a forum for several research teams to tackle this challenge using the state of the art in robotics technology. The SRI-UPenn team fielded a modified Segway RMP 200 robot with multiple cameras and lidars. We implemented a unique computer-vision-based approach for textureless colored object training and detection to robustly locate previously unseen objects out to 15 meters on moderately flat terrain. We integrated SRI's state-of-the-art Visual Odometry for GPS-denied localization on our robot platform. We also designed a unique scooping mechanism which allowed retrieval of up to basketball-sized objects with a reciprocating four-bar linkage mechanism. Further, all software, including a novel target localization and exploration algorithm, was developed using ROS (Robot Operating System), which is open source and well adopted by the robotics community. We present a description of the system, our key technical contributions, and experimental results.
Archive | 2007
John Benjamin Southall; Mayank Bansal; Aastha Jain; Manish Kumar; Theodore Armand Camus; Aveek Das; John Richard Fields; Gary Alan Greene; Jayakrishnan Eledath
Archive | 2004
Aveek Das; Theodore Camus; Peng Chang
Archive | 2014
Hussain Raza; Omar Javed; Aveek Das; Harpreet S. Sawhney; Hui Cheng; Irfan A. Essa
Computational Optical Sensing and Imaging | 2014
Sehoon Lim; Choongyeun Cho; Aveek Das; Sek M. Chai