Publications


Featured research published by Albert S. Huang.


International Symposium on Robotics Research | 2017

Visual Odometry and Mapping for Autonomous Flight Using an RGB-D Camera

Albert S. Huang; Abraham Bachrach; Peter Henry; Michael Krainin; Daniel Maturana; Dieter Fox; Nicholas Roy

RGB-D cameras provide both a color image and per-pixel depth estimates. The richness of their data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on unreliable wireless links. We evaluate the effectiveness of our system for stabilizing and controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
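
A core step in an RGB-D visual odometry pipeline of this kind is estimating the camera's rigid-body motion from 3D point correspondences (features matched across color frames and back-projected through the depth image). The sketch below shows the standard SVD-based closed-form alignment (the Kabsch/Horn method) in NumPy; it illustrates the general technique, not the paper's exact estimator, and the function name is ours:

    import numpy as np

    def rigid_transform(P, Q):
        """Least-squares rigid transform (R, t) mapping points P onto Q.

        P, Q: (N, 3) arrays of matched 3D points from consecutive frames.
        Returns rotation R (3x3) and translation t (3,) with Q ~= P @ R.T + t.
        """
        cp, cq = P.mean(axis=0), Q.mean(axis=0)     # centroids
        H = (P - cp).T @ (Q - cq)                   # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

In a real pipeline this solver would typically sit inside a RANSAC loop to reject outlier feature matches before the final alignment.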


Intelligent Robots and Systems | 2010

LCM: Lightweight Communications and Marshalling

Albert S. Huang; Edwin Olson; David Moore

We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.
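
In typical LCM use, message types are defined in a small specification language and compiled with lcm-gen into language bindings whose encode/decode methods produce the wire format. The sketch below sidesteps generated types and publishes raw bytes, simply to show the publish/subscribe flow of the Python bindings; the channel name is arbitrary:

    import lcm

    # In practice the payload would be a struct generated by lcm-gen and
    # serialized with its encode() method; raw bytes keep this sketch
    # self-contained.
    def on_message(channel, data):
        print(f"received {len(data)} bytes on {channel}")

    lc = lcm.LCM()                       # UDP multicast transport by default
    sub = lc.subscribe("EXAMPLE", on_message)
    lc.publish("EXAMPLE", b"hello")      # any subscribed process can listen
    lc.handle()                          # wait for and dispatch one message

Because LCM communicates over UDP multicast by default, any process on the local network subscribed to the channel receives the message, with no central broker.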


The International Journal of Robotics Research | 2012

Estimation, planning, and mapping for autonomous flight using an RGB-D camera in GPS-denied environments

Abraham Bachrach; Sam Prentice; Ruijie He; Peter Henry; Albert S. Huang; Michael Krainin; Daniel Maturana; Dieter Fox; Nicholas Roy

RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on an unreliable wireless link to a ground station. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. We show how the belief roadmap algorithm (Prentice and Roy, 2009), a belief-space extension of the probabilistic roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.
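
The planning idea can be caricatured with a scalar stand-in: grow uncertainty with distance traveled, shrink it where sensing is informative, and search the roadmap for the path that reaches the goal with the least accumulated uncertainty. The sketch below does that with a Dijkstra-style search; the actual belief roadmap algorithm instead propagates full covariance factors along edges using one-step transfer functions, so treat this, and all names in it, only as an illustration of planning in belief space:

    import heapq

    def plan_min_uncertainty(graph, sensing_quality, start, goal):
        """Graph search that minimizes accumulated localization
        uncertainty instead of path length.

        graph: dict node -> list of (neighbor, distance)
        sensing_quality: dict node -> information gained on arrival (>= 0),
            e.g. high where the RGB-D camera sees rich structure.
        """
        # uncertainty grows with distance, shrinks where sensing is good
        best = {start: 0.0}
        frontier = [(0.0, start, [start])]
        while frontier:
            u, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, u
            if u > best.get(node, float("inf")):
                continue                 # stale queue entry
            for nbr, dist in graph[node]:
                v = max(0.0, u + dist - sensing_quality[nbr])
                if v < best.get(nbr, float("inf")):
                    best[nbr] = v
                    heapq.heappush(frontier, (v, nbr, path + [nbr]))
        return None, float("inf")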


National Conference on Artificial Intelligence | 2011

A Bayesian nonparametric approach to modeling motion patterns

Joshua Mason Joseph; Finale Doshi-Velez; Albert S. Huang; Nicholas Roy

The most difficult, and often most essential, aspect of many interception and tracking tasks is constructing motion models of the targets. Experts can rarely provide complete information about a target's expected motion pattern, and fitting parameters for complex motion patterns can require large amounts of training data. Specifying how to parameterize complex motion patterns is in itself a difficult task. In contrast, Bayesian nonparametric models of target motion are very flexible and generalize well with relatively little training data. We propose modeling target motion patterns as a mixture of Gaussian processes (GPs) with a Dirichlet process (DP) prior over mixture weights. The GP provides an adaptive representation for each individual motion pattern, while the DP prior allows us to represent an unknown number of motion patterns. Both automatically adjust the complexity of the motion model based on the available data. Our approach outperforms several parametric models on a helicopter-based car-tracking task using data collected from the greater Boston area.
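
Each motion pattern in the mixture is a Gaussian process mapping position to velocity. A minimal NumPy sketch of that ingredient, GP regression with a squared-exponential kernel predicting one velocity component from position, is below; the kernel parameters and data are illustrative, and the full model additionally places a DP prior over pattern assignments:

    import numpy as np

    def rbf(A, B, length=5.0, var=1.0):
        """Squared-exponential kernel between point sets A (N,d) and B (M,d)."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return var * np.exp(-0.5 * d2 / length**2)

    def gp_predict(X_train, y_train, X_query, noise=0.1):
        """GP posterior mean: here, velocity as a function of position."""
        K = rbf(X_train, X_train) + noise**2 * np.eye(len(X_train))
        return rbf(X_query, X_train) @ np.linalg.solve(K, y_train)

    # Observed (x, y) positions along past trajectories and their velocities;
    # one such GP would model each motion pattern in the mixture.
    X = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.3], [3.0, 0.7]])
    vx = np.array([1.0, 1.0, 0.9, 0.8])       # x-velocity at each position
    print(gp_predict(X, vx, np.array([[1.5, 0.2]])))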


International Conference on Robotics and Automation | 2010

A voice-commandable robotic forklift working alongside humans in minimally-prepared outdoor environments

Seth J. Teller; Matthew R. Walter; Matthew E. Antone; Andrew Correa; Randall Davis; Luke Fletcher; Emilio Frazzoli; James R. Glass; Jonathan P. How; Albert S. Huang; Jeong hwan Jeon; Sertac Karaman; Brandon Douglas Luders; Nicholas Roy; Tara N. Sainath

One long-standing challenge in robotics is the realization of mobile autonomous robots able to operate safely in existing human workplaces in such a way that their presence is accepted by the human occupants. We describe the development of a multi-ton robotic forklift intended to operate alongside human personnel, handling palletized materials within existing, busy, semi-structured outdoor storage facilities.


The International Journal of Robotics Research | 2010

A High-rate, Heterogeneous Data Set From The DARPA Urban Challenge

Albert S. Huang; Matthew E. Antone; Edwin Olson; Luke Fletcher; David Moore; Seth J. Teller; John J. Leonard

This paper describes a data set collected by MIT’s autonomous vehicle Talos during the 2007 DARPA Urban Challenge. Data from a high-precision navigation system, five cameras, 12 SICK planar laser range scanners, and a Velodyne high-density laser range scanner were synchronized and logged to disk for 90 km of travel. In addition to documenting a number of large loop closures useful for developing mapping and localization algorithms, this data set also records the first robotic traffic jam and two autonomous vehicle collisions. It is our hope that this data set will be useful to the autonomous vehicle community, especially those developing robotic perception capabilities.
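
The logs are assumed here to be in LCM's event-log format (LCM being the same group's logging and messaging tool); under that assumption, the LCM Python bindings can replay a log directly. A short sketch that tallies message counts per channel; the filename is hypothetical:

    import lcm

    # Hypothetical filename; the dataset's logs are assumed to be in
    # LCM's event-log format, readable with the LCM Python bindings.
    log = lcm.EventLog("urban_challenge_sample.log", "r")
    counts = {}
    for event in log:        # each event carries channel, timestamp, raw bytes
        counts[event.channel] = counts.get(event.channel, 0) + 1
    for channel, n in sorted(counts.items()):
        print(f"{channel}: {n} messages")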


International Conference on Robotics and Automation | 2010

Ground robot navigation using uncalibrated cameras

Olivier Koch; Matthew R. Walter; Albert S. Huang; Seth J. Teller

Precise calibration of camera intrinsic and extrinsic parameters, while often useful, is difficult to obtain during field operation and presents scaling issues for multi-robot systems. We demonstrate a vision-based approach to navigation that does not depend on traditional camera calibration, and present an algorithm for guiding a robot through a previously traversed environment using a set of uncalibrated cameras mounted on the robot. On the first excursion through an environment, the system builds a topological representation of the robot's exploration path, encoded as a place graph. On subsequent navigation missions, the method localizes the robot within the graph and provides robust guidance to a specified destination. We combine this method with reactive collision avoidance to obtain a system able to navigate the robot safely and reliably through the environment. We validate our approach with ground-truth experiments and demonstrate the method on a small ground rover navigating through several dynamic environments.
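
A place graph of this sort reduces to a small data structure: nodes store an appearance descriptor captured on the first excursion, edges link consecutively visited places, and localization matches the current view against the previous node and its neighbors. The toy sketch below, with invented class and method names, shows that structure; the paper's system matches sets of uncalibrated camera features rather than a single descriptor vector:

    import numpy as np

    class PlaceGraph:
        """Toy topological map: nodes hold appearance descriptors recorded
        on the first traversal; edges link consecutively visited places."""

        def __init__(self):
            self.descriptors = []      # one appearance vector per place
            self.edges = {}            # place id -> neighboring place ids

        def add_place(self, descriptor, prev=None):
            idx = len(self.descriptors)
            self.descriptors.append(np.asarray(descriptor, dtype=float))
            self.edges[idx] = set()
            if prev is not None:
                self.edges[idx].add(prev)
                self.edges[prev].add(idx)
            return idx

        def localize(self, descriptor, last_known):
            """Match the current view against the last place and its
            neighbors, keeping localization consistent with the topology."""
            candidates = {last_known} | self.edges[last_known]
            q = np.asarray(descriptor, dtype=float)
            return min(candidates,
                       key=lambda i: np.linalg.norm(self.descriptors[i] - q))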


Intelligent Robots and Systems | 2009

Lane boundary and curb estimation with lateral uncertainties

Albert S. Huang; Seth J. Teller

This paper describes an algorithm for estimating lane boundaries and curbs from a moving vehicle using noisy observations and a probabilistic model of curvature. The primary contribution of this paper is a curve model we call lateral uncertainty, which describes the uncertainty of a curve estimate along the lateral direction at various points on the curve, and does not attempt to capture uncertainty along the longitudinal direction of the curve. Additionally, our method incorporates expected road curvature information derived from an empirical study of a real road network. Our method is notable in that it accurately captures the geometry of arbitrarily complex lane boundary curves that are not well approximated by straight lines or low-order polynomial curves. Our method operates independently of the direction of travel of the vehicle, and incorporates sensor uncertainty associated with individual observations. We analyze the benefits and drawbacks of the approach, and show results of our algorithm applied to real-world data sets.
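
Because lateral uncertainty is one-dimensional (a variance along each point's local normal), fusing a new observation reduces to an independent scalar Kalman update per curve point. A minimal NumPy sketch of that update, with invented function and parameter names, assuming observations have already been associated point-to-point with the current estimate:

    import numpy as np

    def fuse_lateral(points, normals, variances, obs_points, obs_var):
        """One scalar Kalman update per curve point, along its normal.

        points:     (N, 2) current lane-boundary estimate
        normals:    (N, 2) unit normals at each point
        variances:  (N,)   lateral variance at each point
        obs_points: (N, 2) new detections associated with each point
        obs_var:    scalar variance of the detector's lateral error
        """
        # signed lateral offset of each observation from the current estimate
        innovation = ((obs_points - points) * normals).sum(axis=1)
        gain = variances / (variances + obs_var)      # scalar Kalman gain
        points = points + (gain * innovation)[:, None] * normals
        variances = (1.0 - gain) * variances
        return points, variances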


Robotics: Science and Systems | 2008

Multi-Sensor Lane Finding in Urban Road Networks

Albert S. Huang; David Moore; Matthew E. Antone; Edwin Olson; Seth J. Teller

This paper describes a system for detecting and estimating the properties of multiple travel lanes in an urban road network from calibrated video imagery and laser range data acquired by a moving vehicle. The system operates in several stages on multiple processors, fusing detected road markings, obstacles, and curbs into a stable non-parametric estimate of nearby travel lanes. The system incorporates elements of a provided piecewise-linear road network as a weak prior. Our method is notable in several respects: it estimates multiple travel lanes; it fuses asynchronous, heterogeneous sensor streams; it handles high-curvature roads; and it makes no assumption about the position or orientation of the vehicle with respect to the road. We analyze the system’s performance in the context of the 2007 DARPA Urban Challenge. With five cameras and thirteen lidars, it was incorporated into a closed-loop controller to successfully guide an autonomous vehicle through a 90 km urban course at speeds up to 40 km/h amidst moving traffic.


Journal of Artificial Intelligence Research | 2012

Modelling Observation Correlations for Active Exploration and Robust Object Detection

Javier Velez; Garrett A. Hemann; Albert S. Huang; Ingmar Posner; Nicholas Roy

Today, mobile robots are expected to carry out increasingly complex tasks in multifarious, real-world environments. Often, the tasks require a certain semantic understanding of the workspace. Consider, for example, spoken instructions from a human collaborator referring to objects of interest; the robot must be able to accurately detect these objects to correctly understand the instructions. However, existing object detection, while competent, is not perfect. In particular, the performance of detection algorithms is commonly sensitive to the position of the sensor relative to the objects in the scene. This paper presents an online planning algorithm which learns an explicit model of the spatial dependence of object detection and generates plans which maximize the expected performance of the detection, and by extension the overall plan performance. Crucially, the learned sensor model incorporates spatial correlations between measurements, capturing the fact that successive measurements taken at the same or nearby locations are not independent. We show how this sensor model can be incorporated into an efficient forward search algorithm in the information space of detected objects, allowing the robot to generate motion plans efficiently. We investigate the performance of our approach by addressing the tasks of door and text detection in indoor environments and demonstrate significant improvement in detection performance during task execution over alternative methods in simulated and real robot experiments.
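
The central point is that a naive Bayes filter treating successive detections as independent overcounts evidence when measurements come from nearby viewpoints. The sketch below illustrates one simple way to encode that: a log-odds update whose weight is discounted by a distance-based correlation with past viewpoints. The exponential correlation model and all names here are illustrative stand-ins for the paper's learned spatial model:

    import numpy as np

    def update_belief(log_odds, detector_log_lr, viewpoint, past_viewpoints,
                      corr_scale=2.0):
        """Bayes update of an object-presence belief that discounts a new
        detection by its correlation with measurements already taken.

        log_odds:        current belief, log P(object) / P(no object)
        detector_log_lr: log likelihood ratio of the new detection
        corr_scale:      distance (m) over which viewpoints stay correlated
        """
        if past_viewpoints:
            d = min(np.linalg.norm(np.asarray(viewpoint) - np.asarray(v))
                    for v in past_viewpoints)
            rho = np.exp(-d / corr_scale)   # nearby viewpoint: high correlation
        else:
            rho = 0.0
        # a fully correlated repeat (rho = 1) adds no new information
        return log_odds + (1.0 - rho) * detector_log_lr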

Collaboration


Dive into Albert S. Huang's collaborations.

Top Co-Authors

Seth J. Teller, Massachusetts Institute of Technology
Larry Rudolph, Massachusetts Institute of Technology
David Moore, Massachusetts Institute of Technology
Edwin Olson, University of Michigan
Matthew R. Walter, Toyota Technological Institute at Chicago
Nicholas Roy, Massachusetts Institute of Technology
Luke Fletcher, Australian National University
Emilio Frazzoli, Massachusetts Institute of Technology
John J. Leonard, Massachusetts Institute of Technology