Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sudeep Pillai is active.

Publication


Featured research published by Sudeep Pillai.


Robotics: Science and Systems | 2015

Monocular SLAM Supported Object Recognition

Sudeep Pillai; John J. Leonard

In this work, we develop a monocular SLAM-aware object recognition system that is able to achieve considerably stronger recognition performance, as compared to classical object recognition systems that function on a frame-by-frame basis. By incorporating several key ideas including multi-view object proposals and efficient feature encoding methods, our proposed system is able to detect and robustly recognize objects in its environment using a single RGB camera in near-constant time. Through experiments, we illustrate the utility of using such a system to effectively detect and recognize objects, incorporating multiple object viewpoint detections into a unified prediction hypothesis. The performance of the proposed recognition system is evaluated on the UW RGB-D Dataset, showing strong recognition performance and scalable run-time performance compared to current state-of-the-art recognition systems.
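
As a rough illustration of the multi-view fusion step described above, the sketch below fuses per-view classifier scores for a SLAM-tracked object by summing log-probabilities across views; the function, class names, and example data are hypothetical and are not taken from the paper's implementation.

from collections import defaultdict
import math

def fuse_multiview_detections(detections):
    # detections: list of (object_id, {class: probability}) pairs, one per view.
    # Views are fused by summing log-probabilities, treating each view as a
    # (naively) independent observation of the same SLAM-tracked object.
    log_evidence = defaultdict(lambda: defaultdict(float))
    for obj_id, class_scores in detections:
        for cls, p in class_scores.items():
            log_evidence[obj_id][cls] += math.log(max(p, 1e-9))
    # Report the class with the highest accumulated evidence per object.
    return {obj_id: max(scores, key=scores.get)
            for obj_id, scores in log_evidence.items()}

# Hypothetical example: three viewpoints of the same tracked object (id 7).
views = [
    (7, {"mug": 0.6, "bowl": 0.4}),
    (7, {"mug": 0.7, "bowl": 0.3}),
    (7, {"mug": 0.5, "bowl": 0.5}),
]
print(fuse_multiview_detections(views))  # {7: 'mug'}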


Computer Vision and Pattern Recognition | 2015

Line-sweep: Cross-ratio for wide-baseline matching and 3D reconstruction

Srikumar Ramalingam; Michel Goncalves Almeida Antunes; Daniel Snow; Gim Hee Lee; Sudeep Pillai

We propose a simple and useful idea based on cross-ratio constraint for wide-baseline matching and 3D reconstruction. Most existing methods exploit feature points and planes from images. Lines have always been considered notorious for both matching and reconstruction due to the lack of good line descriptors. We propose a method to generate and match new points using virtual lines constructed using pairs of keypoints, which are obtained using standard feature point detectors. We use cross-ratio constraints to obtain an initial set of new point matches, which are subsequently used to obtain line correspondences. We develop a method that works for both calibrated and uncalibrated camera configurations. We show compelling line-matching and large-scale 3D reconstruction.
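
For reference, the projective invariant behind this constraint is the cross-ratio of four collinear points A, B, C, D, written here in standard notation (using signed distances along the line) rather than the paper's own formulation:

    \mathrm{CR}(A, B; C, D) = \frac{\overline{AC}\cdot\overline{BD}}{\overline{BC}\cdot\overline{AD}}

Because any projective transformation preserves this quantity, four collinear points in one image and their candidate correspondences in another must produce the same cross-ratio, which is what allows virtual points generated along lines through keypoint pairs to be matched and verified across wide baselines.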


International Conference on Robotics and Automation | 2016

High-performance and tunable stereo reconstruction

Sudeep Pillai; Srikumar Ramalingam; John J. Leonard

Traditional stereo algorithms have focused on reconstruction quality and have largely avoided prioritizing run-time performance. Robots, on the other hand, require quick maneuverability and efficient computation to observe their immediate environment and perform tasks within it. In this work, we propose a high-performance and tunable stereo disparity estimation method, with a peak frame rate of 120 Hz (VGA resolution, on a single CPU thread), that can potentially enable robots to quickly reconstruct their immediate surroundings and maneuver at high speeds. Our key contribution is a disparity estimation algorithm that iteratively approximates the scene depth via a piecewise-planar mesh from stereo imagery, with a fast depth-validation step for semi-dense reconstruction. The mesh is initially seeded with sparsely matched keypoints, and is recursively tessellated and refined as needed (via a resampling stage) to provide the desired stereo disparity accuracy. The inherent simplicity and speed of our approach, together with the ability to tune it to a desired reconstruction quality and run-time performance, make it a compelling solution for applications in high-speed vehicles.
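
The coarse-to-fine loop described above can be illustrated with a 1-D toy: approximate a scanline's disparity with a piecewise-linear interpolation over a sparse support set, and insert new support points where the residual error is still large. This is only an analogue of the 2-D piecewise-planar mesh refinement, with an arbitrary threshold and a synthetic scanline, not the authors' algorithm:

import numpy as np

def refine_piecewise_linear(signal, threshold=0.5, max_iters=8):
    # `signal` plays the role of the true disparity along one scanline.
    # Start from a sparse support set (the endpoints), interpolate linearly
    # between support points (the 1-D analogue of a planar mesh), and insert
    # a new support point wherever the interpolation error is still largest.
    x = np.arange(len(signal), dtype=float)
    support = {0, len(signal) - 1}

    for _ in range(max_iters):
        xs = np.array(sorted(support), dtype=float)
        estimate = np.interp(x, xs, signal[xs.astype(int)])
        error = np.abs(estimate - signal)

        bad = np.where(error > threshold)[0]
        if bad.size == 0:               # accurate enough: stop early
            break
        support.add(int(bad[np.argmax(error[bad])]))  # refine the worst region

    return estimate

# Synthetic scanline with a depth discontinuity followed by a slanted surface.
scanline = np.concatenate([np.full(50, 10.0), np.linspace(30, 40, 50)])
approx = refine_piecewise_linear(scanline)
print("max error:", np.abs(approx - scanline).max())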


Very Large Data Bases | 2017

Exploring big volume sensor data with vroom

Oscar Moll; Aaron Zalewski; Sudeep Pillai; Samuel Madden; Michael Stonebraker; Vijay Gadepally

State of the art sensors within a single autonomous vehicle (AV) can produce video and LIDAR data at rates greater than 30 GB/hour. Unsurprisingly, even small AV research teams can accumulate tens of terabytes of sensor data from multiple trips and multiple vehicles. AV practitioners would like to extract information about specific locations or specific situations for further study, but are often unable to. Queries over AV sensor data are different from generic analytics or spatial queries because they demand reasoning about fields of view as well as heavy computation to extract features from scenes. In this article and demo we present Vroom, a system for ad-hoc queries over AV sensor databases. Vroom combines domain specific properties of AV datasets with selective indexing and multi-query optimization to address challenges posed by AV sensor data.
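
One of the domain-specific properties mentioned above is that queries over AV sensor data must reason about camera fields of view. The snippet below is a purely illustrative 2-D field-of-view predicate over a hypothetical frame index; it is not Vroom's API or indexing scheme:

import math

def in_field_of_view(cam_xy, cam_heading_deg, fov_deg, max_range, target_xy):
    # Rough 2-D check of whether target_xy lies inside the camera's view cone.
    dx, dy = target_xy[0] - cam_xy[0], target_xy[1] - cam_xy[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = (bearing - cam_heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

# Hypothetical frame index: (timestamp, camera position, heading in degrees).
frames = [(0.0, (0.0, 0.0), 0.0), (0.1, (1.0, 0.0), 90.0)]
landmark = (5.0, 0.5)
hits = [t for t, pos, yaw in frames
        if in_field_of_view(pos, yaw, fov_deg=90.0, max_range=30.0,
                            target_xy=landmark)]
print(hits)  # timestamps of frames whose camera plausibly observed the landmark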


International Conference on Robotics and Automation | 2017

SLAMinDB: Centralized graph databases for mobile robotics

Dehann Fourie; Samuel Claassens; Sudeep Pillai; Roxana Mata; John J. Leonard

Robotic systems typically require memory recall mechanisms for a variety of tasks, including localization, mapping, planning, and visualization. We argue for a novel memory recall framework that enables more complex inference schemas by separating the computation from its associated data. In this work we propose a shared, centralized data persistence layer that maintains an ensemble of online, situationally aware robot states. This is realized through a queryable graph database with an accompanying key-value store for larger data. This approach is scalable and enables a multitude of capabilities, such as experience-based learning and long-term autonomy. Using multi-modal simultaneous localization and mapping and a few example use cases, we demonstrate the versatility and extensibility that centralized persistence and SLAMinDB can provide. To support the notion of life-long autonomy, we envision robots endowed with such a persistence model, enabling them to revisit previous experiences and improve upon their existing task-specific capabilities.
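
The persistence pattern described above (a queryable graph of robot states plus a key-value store for large sensor payloads) can be mimicked in a few lines with in-memory dictionaries; this is an illustrative sketch only, with invented helper names, and not the SLAMinDB interface:

import hashlib

# Stand-ins for the two stores: a "graph" of nodes and factor edges, and a
# separate blob store keyed by content hash for large sensor data.
graph_store = {"nodes": {}, "edges": []}
blob_store = {}

def put_blob(data: bytes) -> str:
    # Store a large payload (e.g. an image) and return its content key.
    key = hashlib.sha1(data).hexdigest()
    blob_store[key] = data
    return key

def add_pose(node_id: str, estimate, image: bytes):
    # Insert a pose node; only the blob key is kept in the graph itself.
    graph_store["nodes"][node_id] = {
        "type": "pose",
        "estimate": estimate,
        "image_key": put_blob(image),
    }

def add_factor(src: str, dst: str, measurement):
    # Insert an odometry or loop-closure factor between two nodes.
    graph_store["edges"].append({"from": src, "to": dst, "z": measurement})

add_pose("x0", (0.0, 0.0, 0.0), b"\x00" * 16)
add_pose("x1", (1.0, 0.0, 0.05), b"\x01" * 16)
add_factor("x0", "x1", (1.0, 0.0, 0.05))

# Any consumer (solver, visualizer, learner) can now query the shared graph
# without holding the raw data itself.
print(len(graph_store["nodes"]), len(graph_store["edges"]), len(blob_store))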


arXiv: Cryptography and Security | 2015

Bitcoin Transaction Graph Analysis.

Michael Fleder; Michael S. Kester; Sudeep Pillai


Robotics: Science and Systems | 2014

Learning Articulated Motions From Visual Demonstration

Sudeep Pillai; Matthew R. Walter; Seth J. Teller


International Conference on Robotics and Automation | 2014

A summary of team MIT's approach to the virtual robotics challenge

Russ Tedrake; Maurice Fallon; Sisir Karumanchi; Scott Kuindersma; Matthew E. Antone; Toby Schneider; Thomas M. Howard; Matthew R. Walter; Hongkai Dai; Robin Deits; Michael Fleder; Dehann Fourie; Riad I. Hammoud; Sachithra Hemachandra; P. Ilardi; Sudeep Pillai; Andrés Valenzuela; Cecilia Cantu; C. Dolan; I. Evans; S. Jorgensen; J. Kristeller; Julie A. Shah; Karl Iagnemma; Seth J. Teller


Intelligent Robots and Systems | 2017

Towards visual ego-motion learning in robots

Sudeep Pillai; John J. Leonard


Archive | 2017

SLAM-aware, self-supervised perception in mobile robots

Sudeep Pillai

Collaboration


Dive into Sudeep Pillai's collaborations.

Top Co-Authors

John J. Leonard (Massachusetts Institute of Technology)
Matthew R. Walter (Toyota Technological Institute at Chicago)
Seth J. Teller (Massachusetts Institute of Technology)
Dehann Fourie (Massachusetts Institute of Technology)
Michael Fleder (Massachusetts Institute of Technology)
Sachithra Hemachandra (Massachusetts Institute of Technology)
Srikumar Ramalingam (Mitsubishi Electric Research Laboratories)
Aaron Zalewski (Massachusetts Institute of Technology)
Andrés Valenzuela (Massachusetts Institute of Technology)
Cecilia Cantu (Massachusetts Institute of Technology)