Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Debadeepta Dey is active.

Publications


Featured research published by Debadeepta Dey.


International Conference on Robotics and Automation | 2013

Learning monocular reactive UAV control in cluttered natural environments

Stéphane Ross; Narek Melik-Barkhudarov; Kumar Shaurya Shankar; Andreas Wendel; Debadeepta Dey; J. Andrew Bagnell; Martial Hebert

Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straightforward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAV's heading. We demonstrate the performance of our system in a more controlled indoor environment and in real natural forest environments outdoors.
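The imitation-learning loop the abstract describes can be sketched in a DAgger-style form (the paper builds on this family of techniques). Everything below is a toy illustration under invented names, not the actual flight controller: `expert`, `rollout`, and `train` are placeholders the caller supplies.

```python
import random

def dagger(expert, rollout, train, n_iters=5):
    """DAgger-style imitation learning loop (simplified sketch).

    expert(state)   -> the expert pilot's action for a state
    rollout(policy) -> states visited when flying with `policy`
    train(dataset)  -> a policy fit on (state, action) pairs
    """
    dataset = []
    policy = expert          # iteration 0: fly like the expert to seed data
    for _ in range(n_iters):
        # Label every visited state with the expert's action, including
        # states the expert alone would never have reached.
        dataset.extend((s, expert(s)) for s in rollout(policy))
        policy = train(dataset)
    return policy
```

The key point of the aggregation step is that the learner is corrected on the states its own mistakes lead it into, which is what makes the reactive heading controller robust at test time.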


Workshop on Applications of Computer Vision | 2012

Classification of plant structures from uncalibrated image sequences

Debadeepta Dey; Lily B. Mummert; Rahul Sukthankar

This paper demonstrates the feasibility of recovering fine-scale plant structure in 3D point clouds by leveraging recent advances in structure from motion and 3D point cloud segmentation techniques. The proposed pipeline is designed to be applicable to a broad variety of agricultural crops. A particular agricultural application is described, motivated by the need to estimate crop yield during the growing season. The structure of grapevines is classified into leaves, branches, and fruit using a combination of shape and color features, smoothed using a conditional random field (CRF). Our experiments show a classification accuracy (AUC) of 0.98 for grapes prior to ripening (while still green) and 0.96 for grapes during ripening (changing color), significantly improving over the baseline performance achieved using established methods.
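The smoothing step can be illustrated with a minimal ICM-style relabeling pass over neighboring points. This is a generic neighborhood-smoothing sketch, not the paper's CRF model; the per-point class scores and the neighbor graph are assumptions supplied by the caller.

```python
def smooth_labels(unary, neighbors, weight=0.5, n_iters=10):
    """Minimal ICM-style label smoothing over 3D points (sketch).

    unary[i][c]  : score of class c for point i (e.g. from color/shape features)
    neighbors[i] : indices of spatially adjacent points
    Each pass re-labels every point to the class maximizing its unary score
    plus `weight` times the number of neighbors currently holding that class.
    """
    labels = [max(range(len(u)), key=u.__getitem__) for u in unary]
    for _ in range(n_iters):
        changed = False
        for i, u in enumerate(unary):
            best = max(
                range(len(u)),
                key=lambda c: u[c] + weight * sum(labels[j] == c for j in neighbors[i]),
            )
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels
```

A point with a noisy color score (e.g. a leaf pixel scored as fruit) gets pulled back toward the label of its spatial neighbors, which is the effect the CRF provides in the paper's pipeline.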


Field and Service Robotics | 2010

Passive, Long-Range Detection of Aircraft: Towards a Field Deployable Sense and Avoid System

Debadeepta Dey; Christopher Geyer; Sanjiv Singh; Matthew Digioia

Unmanned Aerial Vehicles (UAVs) typically fly blind, with operators in distant locations. Most UAVs are too small to carry a traffic collision avoidance system (TCAS) payload or transponder. Collision avoidance is currently handled through flight planning, the use of ground- or air-based human observers, and segregated airspaces. US lawmakers have proposed granting commercial unmanned aerial systems access to the national airspace (NAS) by 30 September 2013. UAVs must not degrade the existing safety of the NAS, but the metrics that will determine this have yet to be fully established. It is still possible, however, to state functional requirements and determine some performance minimums. For both manned and unmanned aircraft to fly safely in the same airspace, UAVs will need to detect other aircraft and follow the same rules as human pilots.


The International Journal of Robotics Research | 2011

A cascaded method to detect aircraft in video imagery

Debadeepta Dey; Christopher Geyer; Sanjiv Singh; Matthew Digioia

Unmanned Aerial Vehicles (UAVs) have recently played vital roles in both military and non-military applications. One of the reasons UAVs today are unable to routinely fly in US National Airspace (NAS) is that they lack the ability to sense and avoid other aircraft. Although certificates of authorization can be obtained for short-term use, obtaining them entails significant delays and bureaucratic hurdles. There is therefore a great need to develop a sensing system with performance equivalent to or greater than that of a human pilot operating under Visual Flight Rules (VFR). This is challenging because of the need to detect aircraft out to at least 3 statute miles, over a field of regard as large as 30° (vertical) × 220° (horizontal), and within the payload constraints of a medium-sized UAV. In this paper we report on recent progress towards the development of a field-deployable sense-and-avoid system, concentrating on the detection and tracking aspects. We tested a number of approaches and chose a cascaded approach that resulted in a 100% detection rate (over about 40 approaches), a 98% tracking rate out to 5 statute miles, and a false positive rate of one every 50 frames. Within a range of 3.75 miles we achieve a nearly 100% tracking rate.
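The cascaded structure, in its most generic form, is a cheap filter followed by an expensive verifier. The sketch below shows only that pattern; the stage functions and thresholds are placeholders, not the paper's actual detection stages.

```python
def cascade_detect(windows, cheap_score, expensive_score,
                   cheap_thresh=0.3, final_thresh=0.7):
    """Two-stage detection cascade (generic sketch).

    A fast, cheap score prunes the vast majority of candidate windows;
    only the survivors pay for the expensive classifier. This is what
    makes scanning a very large field of regard tractable per frame.
    """
    survivors = [w for w in windows if cheap_score(w) >= cheap_thresh]
    return [w for w in survivors if expensive_score(w) >= final_thresh]
```

The design trade-off is that the cheap stage must be tuned for very high recall (missed aircraft cannot be recovered later), while the expensive stage drives down the false-positive rate.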


Robotics: Science and Systems | 2012

Contextual Sequence Prediction with Application to Control Library Optimization

Debadeepta Dey; Tian Yu Liu; Martial Hebert; J. Andrew Bagnell

Sequence optimization, where the items in a list are ordered to maximize some reward, has many applications, such as web advertisement placement, search, and control libraries in robotics. Previous work in sequence optimization produces a static ordering that does not take any features of the items or the context of the problem into account. In this work, we propose a general approach to ordering the items within the sequence based on the context (e.g., perceptual information, environment description, and goals). We take a simple, efficient, reduction-based approach where the choice and order of the items is established by repeatedly learning simple classifiers or regressors for each “slot” in the sequence. Our approach leverages recent work on submodular function maximization to provide a formal regret reduction from submodular sequence optimization to simple cost-sensitive prediction. We apply our contextual sequence prediction algorithm to optimize control libraries and demonstrate results on two robotics problems: manipulator trajectory prediction and mobile robot path planning.
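The reduction can be sketched as greedy per-slot training: for each slot, the supervised target is the item with the largest marginal gain given the items already placed for that context. The `reward` and `fit` functions below are placeholders supplied by the caller, not the paper's components.

```python
def train_slot_predictors(contexts, items, reward, n_slots, fit):
    """Learn one predictor per sequence slot (sketch of the reduction).

    contexts         : training problem instances (context features)
    reward(ctx, seq) : monotone submodular reward of an ordered item list
    fit(examples)    : trains a classifier from (ctx, target_item) pairs
    """
    predictors = []
    prefixes = [[] for _ in contexts]   # items already placed, per context
    for _ in range(n_slots):
        examples = []
        for prefix, c in zip(prefixes, contexts):
            # Supervised target: item with the largest marginal gain.
            best = max(items,
                       key=lambda it: reward(c, prefix + [it]) - reward(c, prefix))
            examples.append((c, best))
            prefix.append(best)
        predictors.append(fit(examples))
    return predictors
```

At test time the learned predictors are queried in slot order on a new context, producing a context-dependent ordering rather than a single static one.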


Field and Service Robotics | 2018

AirSim: High-Fidelity Visual and Physical Simulation for Autonomous Vehicles

Shital Shah; Debadeepta Dey; Chris Lovett; Ashish Kapoor

Developing and testing algorithms for autonomous vehicles in the real world is an expensive and time-consuming process. In addition, exploiting recent advances in machine intelligence and deep learning requires collecting large amounts of annotated training data across a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at high frequency for real-time hardware-in-the-loop (HITL) simulations, with support for popular protocols (e.g., MAVLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms, and software protocols. In addition, its modular design enables various components to be easily used independently in other projects. We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights.


Field and Service Robotics | 2016

Vision and Learning for Deliberative Monocular Cluttered Flight

Debadeepta Dey; Kumar Shaurya Shankar; Sam Zeng; Rupesh Mehta; M. Talha Agcayazi; Christopher Eriksen; Shreyansh Daftry; Martial Hebert; J. Andrew Bagnell

Cameras provide a rich source of information while being passive, cheap, and lightweight for small Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. Two key contributions make this possible: a novel coupling of perception and control via relevant, diverse, multiple interpretations of the scene around the robot, and the use of recent advances in machine learning for anytime budgeted cost-sensitive feature selection and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline via real-world experiments of more than 2 km through dense trees with an off-the-shelf quadrotor. Moreover, our pipeline is designed to incorporate information from other modalities, such as stereo and lidar.
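The receding-horizon pattern itself (score a library of candidate trajectories over a horizon, execute only the first action, replan) can be sketched generically. The additive 1-D dynamics below are a toy stand-in for the paper's monocular depth prediction and trajectory library.

```python
def receding_horizon(state, predict_cost, library, goal_reached, max_steps=50):
    """Receding-horizon control loop (generic sketch, not the paper's pipeline).

    Each cycle: predict a cost function from the current observation
    (monocular depth in the paper), score every candidate trajectory in a
    fixed library against it, then execute only the first action of the
    best trajectory before re-planning from the new state.
    """
    for _ in range(max_steps):
        if goal_reached(state):
            break
        cost = predict_cost(state)          # e.g. proximity-to-obstacle penalty
        def traj_cost(traj):
            s, total = state, 0.0
            for action in traj:             # toy dynamics: state advances by action
                s += action
                total += cost(s)
            return total
        state += min(library, key=traj_cost)[0]   # commit only to the first action
    return state
```

Committing only to the first action is what lets the controller react to new depth predictions at every cycle instead of trusting a stale plan.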


International Conference on Computer Vision | 2015

Predicting Multiple Structured Visual Interpretations

Debadeepta Dey; Varun Ramakrishna; Martial Hebert; J. Andrew Bagnell

We present a simple approach for producing a small number of structured visual outputs with high recall, for a variety of tasks including monocular pose estimation and semantic scene segmentation. Current state-of-the-art approaches learn a single model and modify inference procedures to produce a small number of diverse predictions. We take the alternate route of modifying the learning procedure to directly optimize for good, high-recall sequences of structured-output predictors. Our approach introduces no new parameters, naturally learns diverse predictions, and is not tied to any specific structured learning or inference procedure. We leverage recent advances in the contextual submodular maximization literature to learn a sequence of predictors, and empirically demonstrate the simplicity and performance of our approach on multiple challenging vision tasks, including achieving state-of-the-art results on multiple predictions for monocular pose estimation and image foreground/background segmentation.
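A non-contextual caricature of the idea is greedy list construction for recall: each round adds the predictor that covers the most examples the list still misses. The predictor names and coverage sets in the usage below are invented for illustration.

```python
def greedy_predictor_list(predictors, n_examples, hits, list_size):
    """Greedily build a short list of predictors maximizing joint recall
    (the contextual-submodular idea in its simplest, non-contextual form).

    hits(p) -> set of example indices that predictor p gets right.
    Marginal coverage is monotone submodular, so greedy selection carries
    the usual (1 - 1/e) approximation guarantee for the list's recall.
    """
    chosen, covered = [], set()
    for _ in range(list_size):
        best = max(predictors, key=lambda p: len(hits(p) - covered))
        chosen.append(best)
        covered |= hits(best)
    return chosen, len(covered) / n_examples   # the list and its recall
```

The paper's contribution is the contextual version of this selection, where each slot is a learned predictor conditioned on the input rather than a fixed model.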


International Conference on Robotics and Automation | 2017

Learning to gather information via imitation

Sanjiban Choudhury; Ashish Kapoor; Gireeja Ranade; Debadeepta Dey

The budgeted information gathering problem, where a robot with a fixed fuel budget is required to maximize the amount of information gathered from the world, appears in practice across a wide range of applications in autonomous exploration and inspection with mobile robots. Although there is an extensive amount of prior work investigating effective approximations of the problem, these methods do not address the fact that their performance depends heavily on the distribution of objects in the world. In this paper, we attempt to address this issue by proposing a novel data-driven imitation learning framework. We present an efficient algorithm, EXPLORE, that trains a policy on the target distribution to imitate a clairvoyant oracle, i.e., an oracle that has full information about the world and computes non-myopic solutions to maximize the information gathered. We validate the approach on a number of 2D and 3D exploration problems, demonstrating the ability of EXPLORE to adapt to different object distributions. Additionally, our analysis provides theoretical insight into the behavior of EXPLORE. Our approach paves the way for efficiently applying data-driven methods to the domain of information gathering.
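The clairvoyant oracle can be caricatured as greedy selection with full world knowledge. This toy treats the budget as a count of sensing actions and ignores travel costs (an assumption for brevity, not the paper's setup); the learned policy would then imitate these choices from partial observations only.

```python
def clairvoyant_oracle(world, candidates, budget, info_gain):
    """Greedy clairvoyant oracle (sketch): with full knowledge of the world,
    repeatedly choose the sensing action with the largest marginal
    information gain until the budget runs out.
    """
    visited, remaining = [], list(candidates)
    while budget > 0 and remaining:
        best = max(remaining, key=lambda loc: info_gain(world, visited, loc))
        if info_gain(world, visited, best) <= 0:
            break                      # nothing left to learn
        visited.append(best)
        remaining.remove(best)
        budget -= 1
    return visited
```

Because the oracle sees the true world, its choices are non-myopic labels that a policy trained on partial observations can only approximate; closing that gap is the imitation-learning problem the paper studies.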


International Joint Conference on Artificial Intelligence | 2018

Near Real-Time Detection of Poachers from Drones in AirSim

Elizabeth Bondi; Ashish Kapoor; Debadeepta Dey; James Piavis; Shital Shah; Robert Hannaford; Arvind Iyer; Lucas Joppa; Milind Tambe

The unrelenting threat of poaching has led to increased development of new technologies to combat it. One such example is the use of thermal infrared cameras mounted on unmanned aerial vehicles (UAVs or drones) to spot poachers at night and report them to park rangers before they are able to harm any animals. However, monitoring the live video stream from these conservation UAVs all night is an arduous task. Therefore, we discuss SPOT (Systematic POacher deTector), a novel application that augments conservation drones with the ability to automatically detect poachers and animals in near real time [Bondi et al., 2018b]. SPOT illustrates the feasibility of building upon state-of-the-art AI techniques, such as Faster R-CNN, to address the challenges of automatically detecting animals and poachers in infrared images. This paper reports (i) the design of SPOT, (ii) efficient processing techniques to ensure usability in the field, (iii) evaluation of SPOT based on historical videos and a real-world test run by the end-users, Air Shepherd, in the field, and (iv) the use of AirSim for live demonstration of SPOT. The promising results from a field test have led to a plan for larger-scale deployment in a national park in southern Africa. While SPOT is developed for conservation drones, its design and novel techniques have wider application for automated detection from UAV videos.

Collaboration


Dive into Debadeepta Dey's collaborations.

Top Co-Authors

J. Andrew Bagnell (Carnegie Mellon University)
Martial Hebert (Carnegie Mellon University)
Hanzhang Hu (Carnegie Mellon University)
Sanjiv Singh (Carnegie Mellon University)
Sam Zeng (Carnegie Mellon University)