Publications


Featured research published by Daniel Maturana.


Intelligent Robots and Systems | 2015

VoxNet: A 3D Convolutional Neural Network for real-time object recognition

Daniel Maturana; Sebastian Scherer

Robust object recognition is a crucial skill for robots operating autonomously in real world environments. Range sensors such as LiDAR and RGBD cameras are increasingly found in modern robotic systems, providing a rich source of 3D information that can aid in this task. However, many current systems do not fully utilize this information and have trouble efficiently dealing with large amounts of point cloud data. In this paper, we propose VoxNet, an architecture to tackle this problem by integrating a volumetric Occupancy Grid representation with a supervised 3D Convolutional Neural Network (3D CNN). We evaluate our approach on publicly available benchmarks using LiDAR, RGBD, and CAD data. VoxNet achieves accuracy beyond the state of the art while labeling hundreds of instances per second.
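
A minimal sketch of the VoxNet-style pipeline described above: a small 3D CNN over a 32x32x32 occupancy volume. The layer sizes follow the paper's high-level description (two 3D convolutions, pooling, two fully connected layers); treat the exact hyperparameters here as assumptions.

```python
# Illustrative VoxNet-style 3D CNN over a volumetric occupancy grid (PyTorch).
import torch
import torch.nn as nn

class VoxNetSketch(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=5, stride=2),  # 32^3 -> 14^3
            nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3),           # 14^3 -> 12^3
            nn.ReLU(),
            nn.MaxPool3d(2),                            # 12^3 -> 6^3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6 * 6, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, occupancy: torch.Tensor) -> torch.Tensor:
        # occupancy: (batch, 1, 32, 32, 32) occupancy grid from range data
        return self.classifier(self.features(occupancy))

logits = VoxNetSketch(num_classes=10)(torch.rand(4, 1, 32, 32, 32))
```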


The International Journal of Robotics Research | 2012

Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments

Abraham Bachrach; Sam Prentice; Ruijie He; Peter Henry; Albert S. Huang; Michael Krainin; Daniel Maturana; Dieter Fox; Nicholas Roy

RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm (Prentice and Roy, 2009), a belief-space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
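
The belief-space planning idea admits a compact illustration: propagate the localization covariance along each candidate path under a state-dependent sensing model, and prefer the path that stays well-localized rather than simply the shortest one. This is a simplified sketch; the per-waypoint measurement-noise function is a stand-in for the paper's RGB-D sensing model, and all names are illustrative.

```python
# Belief-space path selection sketch: EKF-style covariance recursion per path.
import numpy as np

def propagate_covariance(path, Q, measurement_noise):
    """Propagate 2x2 position covariance along a path of waypoints."""
    P = np.eye(2) * 0.01               # initial position uncertainty
    for x in path:
        P = P + Q                      # predict: motion noise grows uncertainty
        R = measurement_noise(x)       # sensing quality depends on local structure
        K = P @ np.linalg.inv(P + R)   # Kalman gain for a direct pose measurement
        P = (np.eye(2) - K) @ P        # update: good measurements shrink uncertainty
    return P

def best_path(paths, Q, measurement_noise):
    # Choose the path with the smallest final uncertainty (trace of covariance).
    return min(paths,
               key=lambda p: np.trace(propagate_covariance(p, Q, measurement_noise)))
```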


International Conference on Robotics and Automation | 2015

3D Convolutional Neural Networks for landing zone detection from LiDAR

Daniel Maturana; Sebastian Scherer

We present a system for the detection of small and potentially obscured obstacles in vegetated terrain. The key novelty of this system is the coupling of a volumetric occupancy map with a 3D Convolutional Neural Network (CNN), which to the best of our knowledge has not been previously done. This architecture allows us to train an extremely efficient and highly accurate system for detection tasks from raw occupancy data. We apply this method to the problem of detecting safe landing zones for autonomous helicopters from LiDAR point clouds. Current methods for this problem rely on heuristic rules and use simple geometric features. These heuristics break down in the presence of low vegetation, as they do not distinguish between vegetation that may be landed on and solid objects that should be avoided. We evaluate the system with a combination of real and synthetic range data. We show our system outperforms various benchmarks, including a system integrating various hand-crafted point cloud features from the literature.
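
The first stage of this coupling, converting a LiDAR point cloud into the occupancy volume the 3D CNN consumes, can be sketched in a few lines of NumPy. The grid size, resolution, and simple hit-count occupancy rule here are assumptions, not the system's actual parameters.

```python
# Voxelize LiDAR returns into a binary occupancy grid for a 3D CNN.
import numpy as np

def points_to_occupancy(points: np.ndarray, origin, voxel_size=0.1, dims=(32, 32, 32)):
    """points: (N, 3) LiDAR returns in metres; returns a binary occupancy volume."""
    grid = np.zeros(dims, dtype=np.float32)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    # Keep only points that fall inside the grid bounds.
    in_bounds = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    ix, iy, iz = idx[in_bounds].T
    grid[ix, iy, iz] = 1.0  # mark voxels containing at least one return
    return grid
```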


International Conference on Robotics and Automation | 2016

Real-time 3D scene layout from a single image using Convolutional Neural Networks

Shichao Yang; Daniel Maturana; Sebastian Scherer

We consider the problem of understanding the 3D layout of indoor corridor scenes from a single image in real time. Identifying obstacles such as walls is essential for robot navigation, but also challenging due to the diversity in structure, appearance and illumination of real-world corridor scenes. Many current single-image methods make Manhattan-world assumptions, and break down in environments that do not fit this mold. They may also require complicated hand-designed features for image segmentation or clear boundaries to form certain building models. In addition, most cannot run in real time. In this paper, we propose to combine machine learning with geometric modelling to build a simplified 3D model from a single image. We first employ a supervised Convolutional Neural Network (CNN) to provide a dense, but coarse, geometric class labelling of the scene. We then refine this labelling with a fully connected Conditional Random Field (CRF). Finally, we fit line segments along wall-ground boundaries and “pop up” a 3D model using geometric constraints. We assemble a dataset of 967 labelled corridor images. Our experiments on this dataset and another publicly available dataset show our method outperforms other single image scene understanding methods in pixelwise accuracy while labelling images at over 15 Hz.
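
The final “pop up” step can be illustrated with pinhole geometry: pixels on the fitted wall-ground boundary are back-projected onto the ground plane using the known camera height, and walls rise vertically from the resulting 3D points. This sketch assumes a level camera (horizontal optical axis); the intrinsics f, cx, cy and the camera height are illustrative parameters.

```python
# Back-project wall-ground boundary pixels onto the ground plane ("pop up").
import numpy as np

def pop_up_boundary(boundary_pixels, f, cx, cy, camera_height):
    """boundary_pixels: (N, 2) array of (u, v) image points on the ground line,
    assumed below the horizon (v > cy). Returns (N, 3) points in camera
    coordinates (X right, Y down, Z forward)."""
    u, v = boundary_pixels[:, 0], boundary_pixels[:, 1]
    Z = f * camera_height / (v - cy)        # ground-plane constraint fixes depth
    X = (u - cx) * Z / f
    Y = np.full_like(Z, camera_height)
    return np.stack([X, Y, Z], axis=1)      # walls extrude vertically from these
```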


Field and Service Robotics | 2018

Real-Time Semantic Mapping for Autonomous Off-Road Navigation

Daniel Maturana; Po-Wei Chou; Masashi Uenoyama; Sebastian Scherer

In this paper we describe a semantic mapping system for autonomous off-road driving with an All-Terrain Vehicle (ATV). The system’s goal is to provide a richer representation of the environment than a purely geometric map, allowing it to distinguish, e.g., tall grass from obstacles. The system builds a 2.5D grid map encoding both geometric (terrain height) and semantic information (navigation-relevant classes such as trail, grass, etc.). The geometric and semantic information are estimated online and in real-time from LiDAR and image sensor data, respectively. Using this semantic map, motion planners can create semantically aware trajectories. To achieve robust and efficient semantic segmentation, we design a custom Convolutional Neural Network (CNN) and train it with a novel dataset of labelled off-road imagery built for this purpose. We evaluate our semantic segmentation offline, showing comparable performance to the state of the art with slightly lower latency. We also show closed-loop field results with an autonomous ATV driving over challenging off-road terrain by using the semantic map in conjunction with a simple path planner. Our models and labelled dataset will be publicly available at http://dimatura.net/offroad.
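
The 2.5D map the abstract describes can be pictured as two aligned per-cell layers, one geometric (height from LiDAR) and one semantic (class scores projected from the image CNN). The data layout and the max-height / score-accumulation update rules below are assumptions for illustration, not the system's actual implementation.

```python
# Minimal 2.5D semantic grid map: a height layer plus a class-score layer.
import numpy as np

H, W, K = 200, 200, 5                     # grid dimensions, number of classes
height = np.full((H, W), -np.inf)         # geometric layer: terrain height
semantics = np.zeros((H, W, K))           # semantic layer: accumulated scores

def update_cell(i, j, z, class_scores):
    height[i, j] = max(height[i, j], z)   # keep the highest LiDAR return per cell
    semantics[i, j] += class_scores       # fuse projected CNN scores over time

def cell_label(i, j):
    return semantics[i, j].argmax()       # e.g. trail / grass / obstacle
```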


Field and Service Robotics | 2018

Season-Invariant Semantic Segmentation with a Deep Multimodal Network

Dong-Ki Kim; Daniel Maturana; Masashi Uenoyama; Sebastian Scherer

Semantic scene understanding is a useful capability for autonomous vehicles operating off-road. While cameras are the most common sensor used for semantic classification, the performance of methods using camera imagery may suffer when there is significant variation between the training and test sets caused by illumination, weather, and seasonal variations. On the other hand, 3D information from active sensors such as LiDAR is comparatively invariant to these factors, which motivates us to investigate whether it can be used to improve performance in this scenario. In this paper, we propose a novel multimodal Convolutional Neural Network (CNN) architecture consisting of two streams, 2D and 3D, which are fused by projecting 3D features to image space to achieve a robust pixelwise semantic segmentation. We evaluate our proposed method in a novel off-road terrain classification benchmark, and show a 25% improvement in mean Intersection over Union (IoU) of navigation-related semantic classes, relative to an image-only baseline.
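
A compact sketch of the fusion step: per-point 3D features are scattered into image space at their projected pixel coordinates and concatenated with the 2D stream before the pixelwise classifier. Both backbones here are single-layer placeholders, and the scatter rule is an assumption.

```python
# Two-stream fusion by projecting 3D (LiDAR) features into image space.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, c2d=32, c3d=16, num_classes=5):
        super().__init__()
        self.image_stream = nn.Conv2d(3, c2d, 3, padding=1)   # stand-in 2D backbone
        self.fuse = nn.Conv2d(c2d + c3d, num_classes, 1)      # 1x1 fusion head

    def forward(self, image, point_feats, uv):
        # image: (B, 3, H, W); point_feats: (B, N, C3D) per-point LiDAR features;
        # uv: (B, N, 2) integer (u, v) pixel coordinates of each projected point.
        B, _, H, W = image.shape
        f2d = torch.relu(self.image_stream(image))
        f3d = image.new_zeros(B, point_feats.shape[2], H, W)
        for b in range(B):                                    # scatter 3D features
            f3d[b, :, uv[b, :, 1], uv[b, :, 0]] = point_feats[b].T
        return self.fuse(torch.cat([f2d, f3d], dim=1))        # pixelwise logits
```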


Field and Service Robotics | 2016

Learning a Context-Dependent Switching Strategy for Robust Visual Odometry

Kristen Holtz; Daniel Maturana; Sebastian Scherer

Many applications for robotic systems require the systems to traverse diverse, unstructured environments. State estimation with Visual Odometry (VO) in these applications is challenging because there is no single algorithm that performs well across all environments and situations. The unique trade-offs inherent to each algorithm mean different algorithms excel in different environments. We develop a method to increase robustness in state estimation by using an ensemble of VO algorithms. The method combines the estimates by dynamically switching to the best algorithm for the current context, according to a statistical model of VO estimate errors. The model is a Random Forest regressor that is trained to predict the accuracy of each algorithm as a function of different features extracted from the sensory input. We evaluate our method on a dataset consisting of four unique environments and eight runs, totaling over 25 minutes of data. Our method reduces the mean translational relative pose error by 3.5% and the angular error by 4.3% compared to the single best odometry algorithm. Compared to the poorest performing odometry algorithm, our method reduces the mean translational error by 39.4% and the angular error by 20.1%.
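
Once the error model is trained, the switching strategy reduces to a few lines: a Random Forest regressor predicts each algorithm's error from context features, and the ensemble switches to the predicted-best estimator. The features and error targets below are random placeholders for those described in the paper.

```python
# Context-dependent VO switching via a Random Forest error model (scikit-learn).
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Training: X holds context features per frame; Y holds the measured pose error
# of each VO algorithm on that frame (n_frames x n_algorithms).
X_train = np.random.rand(1000, 8)   # e.g. texture, brightness, feature counts
Y_train = np.random.rand(1000, 3)   # errors of 3 VO algorithms (illustrative)
model = RandomForestRegressor(n_estimators=100).fit(X_train, Y_train)

def select_algorithm(context_features: np.ndarray) -> int:
    predicted_errors = model.predict(context_features.reshape(1, -1))[0]
    return int(predicted_errors.argmin())   # switch to the lowest-error estimator
```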


International Conference on Machine Learning | 2017

Improving Stochastic Policy Gradients in Continuous Control with Deep Reinforcement Learning using the Beta Distribution

Po-Wei Chou; Daniel Maturana; Sebastian Scherer
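
The core idea named in the title is to parameterize a stochastic continuous-control policy with a Beta distribution, whose bounded support matches bounded action spaces and avoids the bias introduced by clipping a Gaussian. A minimal PyTorch policy head, with an assumed network shape:

```python
# Beta-distribution policy head for bounded continuous actions.
import torch
import torch.nn as nn

class BetaPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh())
        self.alpha_head = nn.Linear(64, act_dim)
        self.beta_head = nn.Linear(64, act_dim)

    def forward(self, obs):
        h = self.body(obs)
        # softplus + 1 keeps both shape parameters > 1 (unimodal Beta)
        alpha = nn.functional.softplus(self.alpha_head(h)) + 1.0
        beta = nn.functional.softplus(self.beta_head(h)) + 1.0
        return torch.distributions.Beta(alpha, beta)

dist = BetaPolicy(obs_dim=8, act_dim=2)(torch.rand(1, 8))
a = dist.sample()                      # in (0, 1); rescale to true action bounds
log_prob = dist.log_prob(a).sum(-1)    # used by the policy-gradient update
```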


arXiv: Robotics | 2018

Integrating kinematics and environment context into deep inverse reinforcement learning for predicting off-road vehicle trajectories

Yanfu Zhang; Wenshan Wang; Rogerio Bonatti; Daniel Maturana; Sebastian Scherer


Intelligent Robots and Systems | 2017

Wire detection using synthetic data and dilated convolutional networks for unmanned aerial vehicles

Ratnesh Madaan; Daniel Maturana; Sebastian Scherer
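
The dilated-convolution idea named in the title can be sketched directly: stacking 3x3 convolutions with growing dilation expands the receptive field exponentially without pooling, preserving the fine spatial resolution that thin wires require. Channel counts and the dilation schedule below are assumptions.

```python
# Dilated-convolution stack for per-pixel wire detection (PyTorch).
import torch
import torch.nn as nn

layers = []
in_ch = 3
for dilation in (1, 2, 4, 8):                    # receptive field grows per layer
    layers += [nn.Conv2d(in_ch, 32, kernel_size=3,
                         padding=dilation, dilation=dilation),
               nn.ReLU()]
    in_ch = 32
layers.append(nn.Conv2d(32, 1, kernel_size=1))   # per-pixel wire / no-wire logit
net = nn.Sequential(*layers)

heatmap = net(torch.rand(1, 3, 480, 640))        # same spatial size as the input
```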

Collaboration


Dive into Daniel Maturana's collaborations.

Top Co-Authors

Sebastian Scherer (Carnegie Mellon University)
Po-Wei Chou (Carnegie Mellon University)
Sankalp Arora (Carnegie Mellon University)
Abraham Bachrach (Massachusetts Institute of Technology)
Albert S. Huang (Massachusetts Institute of Technology)
Brad Hamner (Carnegie Mellon University)
David F. Fouhey (Carnegie Mellon University)