Publication


Featured research published by Matthew E. Antone.


Computer Vision and Pattern Recognition | 2000

Automatic recovery of relative camera rotations for urban scenes

Matthew E. Antone; Seth J. Teller

In this paper we describe a formulation of extrinsic camera calibration that decouples rotation from translation by exploiting properties inherent in urban scenes. We then present an algorithm that uses edge features to robustly and accurately estimate relative rotations among multiple cameras given intrinsic calibration and approximate initial pose. The algorithm is linear in both the number of images and the number of features. We estimate the number and directions of vanishing points (VPs) with respect to each camera using a hybrid approach that combines the robustness of the Hough transform with the accuracy of expectation maximization. Matching and labeling methods identify unique VPs and correspond them across all cameras. Finally, a technique akin to bundle adjustment produces globally optimal estimates of relative camera rotations by bringing all VPs into optimal alignment. Uncertainty is modeled and used at every stage to improve accuracy. We assess the algorithm's performance on both synthetic and real data, and compare our results to those of semi-automated photogrammetric methods for a large set of real hemispherical images, using several consistency and error metrics.
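
The final alignment step described above is closely related to Wahba's problem: given matched unit directions in two frames, the best-fitting rotation has a closed-form SVD solution. The sketch below is a generic illustration of that idea, not the authors' pipeline; the function name and toy data are mine, and the paper additionally weights every observation by its modeled uncertainty.

```python
# Illustrative sketch (not the paper's exact pipeline): given unit vanishing-point
# directions matched between two cameras, the relative rotation that best aligns
# them can be recovered in closed form (Wahba's problem) via SVD.
import numpy as np

def relative_rotation_from_vps(vps_a, vps_b, weights=None):
    """Estimate R such that R @ vps_a[i] matches vps_b[i] for matched unit directions.

    vps_a, vps_b : (N, 3) arrays of corresponding vanishing-point directions.
    weights      : optional per-correspondence confidences (e.g. inverse variances).
    """
    vps_a = np.asarray(vps_a, dtype=float)
    vps_b = np.asarray(vps_b, dtype=float)
    w = np.ones(len(vps_a)) if weights is None else np.asarray(weights, float)

    # Weighted cross-covariance of the two direction sets.
    H = (vps_b * w[:, None]).T @ vps_a
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so the result is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

# Usage: three orthogonal building-edge directions seen by both cameras.
vps_cam_a = np.eye(3)
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
vps_cam_b = (true_R @ vps_cam_a.T).T
print(relative_rotation_from_vps(vps_cam_a, vps_cam_b))  # recovers true_R
```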


International Journal of Computer Vision | 2003

Calibrated, Registered Images of an Extended Urban Area

Seth J. Teller; Matthew E. Antone; Zachary Bodnar; Michael Bosse; Satyan R. Coorg; Manish Jethwa; Neel Master

We describe a dataset of several thousand calibrated, time-stamped, geo-referenced, high dynamic range color images, acquired under uncontrolled, variable illumination conditions in an outdoor region spanning several hundred meters. The image data are grouped into several regions that have little mutual inter-visibility. For each group, the calibration data are globally consistent on average to roughly five centimeters and 0.1°, or about four pixels of epipolar registration. All image, feature, and calibration data are available for interactive inspection and download at http://city.lcs.mit.edu/data. Calibrated imagery is of fundamental interest in a variety of applications. We have made this data available in the belief that researchers in computer graphics, computer vision, photogrammetry and digital cartography will find it of value as a test set for their own image registration algorithms, as a calibrated image set for applications such as image-based rendering, metric 3D reconstruction, and appearance recovery, and as input for existing GIS applications.


Journal of Field Robotics | 2015

An Architecture for Online Affordance-based Perception and Whole-body Planning

Maurice Fallon; Scott Kuindersma; Sisir Karumanchi; Matthew E. Antone; Toby Schneider; Hongkai Dai; Claudia Pérez D'Arpino; Robin Deits; Matt DiCicco; Dehann Fourie; Twan Koolen; Pat Marion; Michael Posa; Andrés Valenzuela; Kuan-Ting Yu; Julie A. Shah; Karl Iagnemma; Russ Tedrake; Seth J. Teller

The DARPA Robotics Challenge Trials held in December 2013 provided a landmark demonstration of dexterous mobile robots executing a variety of tasks aided by a remote human operator using only data from the robot's sensor suite transmitted over a constrained, field-realistic communications link. We describe the design considerations, architecture, implementation, and performance of the software that Team MIT developed to command and control an Atlas humanoid robot. Our design emphasized human interaction with an efficient motion planner, where operators expressed desired robot actions in terms of affordances fit using perception and manipulated in a custom user interface. We highlight several important lessons we learned while developing our system on a highly compressed schedule.
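
The abstract does not specify how affordances were represented internally. Purely as an illustration of the idea (a parameterized object model that perception fits to sensor data and the planner consumes as a manipulation target), a minimal hypothetical structure might look like the following; all field names are assumptions, not the team's actual representation.

```python
# Purely illustrative sketch of an "affordance" as a data structure: a named,
# parameterized object model fit by perception and manipulated by the operator,
# which the whole-body planner then targets instead of raw point-cloud geometry.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Affordance:
    name: str                                   # e.g. "valve", "door", "drill"
    pose: np.ndarray                            # 4x4 object-to-world transform, fit by perception
    params: dict = field(default_factory=dict)  # object-specific geometry (radius, handle length, ...)
    grasp_frames: list = field(default_factory=list)  # candidate end-effector poses for the planner

# The operator adjusts `pose`/`params` in the UI; the planner plans to a grasp_frame.
valve = Affordance("valve", pose=np.eye(4), params={"radius_m": 0.10})
```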


International Journal of Computer Vision | 2002

Scalable Extrinsic Calibration of Omni-Directional Image Networks

Matthew E. Antone; Seth J. Teller

We describe a linear-time algorithm that recovers absolute camera orientations and positions, along with uncertainty estimates, for networks of terrestrial image nodes spanning hundreds of meters in outdoor urban scenes. The algorithm produces pose estimates globally consistent to roughly 0.1° (2 milliradians) and 5 centimeters on average, or about four pixels of epipolar alignment. We assume that adjacent nodes observe overlapping portions of the scene, and that at least two distinct vanishing points are observed by each node. The algorithm decouples registration into pure rotation and translation stages. The rotation stage aligns nodes to commonly observed scene line directions; the translation stage assigns node positions consistent with locally estimated motion directions, then registers the resulting network to absolute (Earth) coordinates. The paper's principal contributions include: extension of classic registration methods to large scale and dimensional extent; a consistent probabilistic framework for modeling projective uncertainty; and a new hybrid of Hough transform and expectation maximization algorithms. We assess the algorithm's performance on synthetic and real data, and draw several conclusions. First, by fusing thousands of observations the algorithm achieves accurate registration even in the face of significant lighting variations, low-level feature noise, and error in initial pose estimates. Second, the algorithm's robustness and accuracy increase with image field of view. Third, the algorithm surmounts the usual tradeoff between speed and accuracy; it is both faster and more accurate than manual bundle adjustment.
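
As a rough sketch of the translation stage described above (my own construction, not the paper's code), node positions can be recovered from pairwise unit baseline directions by a linear least-squares problem once one node is anchored and the overall scale is pinned:

```python
# Sketch: with rotations already fixed, each node pair contributes a unit direction
# for its baseline; positions follow from linear least squares after gauge fixing.
import numpy as np

def positions_from_baseline_directions(n, edges, scale_edge=0):
    """edges: list of (i, j, d) where d is the unit direction of p_j - p_i.
    Node 0 is fixed at the origin; edge `scale_edge` is assigned unit projected length."""
    rows, rhs = [], []
    for i, j, d in edges:
        P = np.eye(3) - np.outer(d, d)       # components orthogonal to d must vanish
        block = np.zeros((3, 3 * n))
        block[:, 3*j:3*j+3] = P
        block[:, 3*i:3*i+3] = -P
        rows.append(block)
        rhs.append(np.zeros(3))
    # Gauge fixing: p_0 = 0, and the chosen edge has unit length along its direction.
    anchor = np.zeros((3, 3 * n)); anchor[:, :3] = np.eye(3)
    rows.append(anchor); rhs.append(np.zeros(3))
    i0, j0, d0 = edges[scale_edge]
    scale = np.zeros((1, 3 * n))
    scale[0, 3*j0:3*j0+3] = d0
    scale[0, 3*i0:3*i0+3] = -d0
    rows.append(scale); rhs.append(np.array([1.0]))
    A, b = np.vstack(rows), np.concatenate(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n, 3)
```

With noise-free directions and a well-connected network this recovers the layout exactly up to the fixed gauge; in practice each constraint would be weighted by its modeled uncertainty, in line with the probabilistic framework the abstract describes.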


International Conference on Robotics and Automation | 2010

A voice-commandable robotic forklift working alongside humans in minimally-prepared outdoor environments

Seth J. Teller; Matthew R. Walter; Matthew E. Antone; Andrew Correa; Randall Davis; Luke Fletcher; Emilio Frazzoli; James R. Glass; Jonathan P. How; Albert S. Huang; Jeong hwan Jeon; Sertac Karaman; Brandon Douglas Luders; Nicholas Roy; Tara N. Sainath

One long-standing challenge in robotics is the realization of mobile autonomous robots able to operate safely in existing human workplaces in such a way that their presence is accepted by the human occupants. We describe the development of a multi-ton robotic forklift intended to operate alongside human personnel, handling palletized materials within existing, busy, semi-structured outdoor storage facilities.


European Conference on Computer Vision | 2004

Spectral Solution of Large-Scale Extrinsic Camera Calibration as a Graph Embedding Problem

Matthew Brand; Matthew E. Antone; Seth J. Teller

Extrinsic calibration of large-scale ad hoc networks of cameras is posed as the following problem: Calculate the locations of N mobile, rotationally aligned cameras distributed over an urban region, subsets of which view some common environmental features. We show that this leads to a novel class of graph embedding problems that admit closed-form solutions in linear time via partial spectral decomposition of a quadratic form. The minimum squared error (MSE) solution determines locations of cameras and/or features in any number of dimensions. The spectrum also indicates insufficiently constrained problems, which can be decomposed into well-constrained rigid subproblems and analyzed to determine useful new views for missing constraints. We demonstrate the method with large networks of mobile cameras distributed over an urban environment, using directional constraints that have been extracted automatically from commonly viewed features. Spectral solutions yield layouts that are consistent in some cases to a fraction of a millimeter, substantially improving the state of the art. Global layout of large camera networks can be computed in a fraction of a second.
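
To make the graph-embedding idea concrete, here is a small self-contained sketch (a toy construction under my own assumptions, not the paper's implementation): directional constraints between nodes define a positive semidefinite quadratic form whose near-null eigenvectors contain the global translations plus the layout itself, so the layout falls out of a partial eigendecomposition.

```python
# Toy sketch of recovering camera positions from pairwise baseline directions by
# spectral decomposition of a quadratic form, in the spirit of the abstract.
import numpy as np

def layout_from_directions(n, edges):
    """edges: list of (i, j, d) with d the unit direction of p_j - p_i.
    Returns positions (n, 3), up to global translation, scale, and sign."""
    L = np.zeros((3 * n, 3 * n))
    for i, j, d in edges:
        P = np.eye(3) - np.outer(d, d)      # penalize deviation from direction d
        for a, b, s in [(i, i, 1), (j, j, 1), (i, j, -1), (j, i, -1)]:
            L[3*a:3*a+3, 3*b:3*b+3] += s * P
    evals, V = np.linalg.eigh(L)             # eigenvalues ascending
    null = V[:, :4]                          # 3 translation modes + 1 layout mode
    # Remove the three global-translation modes from the near-null subspace.
    T = np.zeros((3 * n, 3))
    for k in range(3):
        T[k::3, k] = 1.0
    T, _ = np.linalg.qr(T)
    null = null - T @ (T.T @ null)
    # The remaining dominant direction is the layout, up to scale and sign.
    u, s, _ = np.linalg.svd(null, full_matrices=False)
    return u[:, 0].reshape(n, 3)

# Toy example: 4 cameras at known spots, feeding only the pairwise directions back in.
truth = np.array([[0, 0, 0], [10, 0, 0], [10, 8, 0], [0, 8, 3]], float)
edges = []
for i in range(4):
    for j in range(i + 1, 4):
        d = truth[j] - truth[i]
        edges.append((i, j, d / np.linalg.norm(d)))
est = layout_from_directions(4, edges)
# est matches `truth` up to centering, scale, and sign.
```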


Computer Vision and Pattern Recognition | 2001

Calibrated, registered images of an extended urban area

Seth J. Teller; Matthew E. Antone; Zachary Bodnar; Michael Bosse; Satyan R. Coorg; Manish Jethwa; Neel Master

We describe a dataset of several thousand calibrated, geo-referenced, high dynamic range color images, acquired under uncontrolled, variable illumination in an outdoor region spanning hundreds of meters. All image, feature, calibration, and geo-referencing data are available at http://city.lcs.mit.edu/data. Calibrated imagery is of fundamental interest in a wide variety of applications. We have made this data available in the belief that researchers in computer graphics, computer vision, photogrammetry and digital cartography will find it useful in several ways: as a test set for their own algorithms; as a calibrated image set for applications such as image-based rendering, metric 3D reconstruction, and appearance recovery; and as controlled imagery for integration into existing GIS systems and applications. The Web-based interface to the data provides interactive viewing of high-dynamic-range images and mosaics; extracted edge and point features; intrinsic and extrinsic calibration, along with maps of the ground context in which the images were acquired; the spatial adjacency relationships among images; the epipolar geometry relating adjacent images; compass and absolute scale overlays; and quantitative consistency measures for the calibration data.


IEEE-RAS International Conference on Humanoid Robots | 2014

Drift-free humanoid state estimation fusing kinematic, inertial and LIDAR sensing

Maurice Fallon; Matthew E. Antone; Nicholas Roy; Seth J. Teller

This paper describes an algorithm for the probabilistic fusion of sensor data from a variety of modalities (inertial, kinematic and LIDAR) to produce a single consistent position estimate for a walking humanoid. Of specific interest is our approach for continuous LIDAR-based localization which maintains reliable drift-free alignment to a prior map using a Gaussian Particle Filter. This module can be bootstrapped by constructing the map on-the-fly and performs robustly in a variety of challenging field situations. We also discuss a two-tier estimation hierarchy which preserves registration to this map and other objects in the robot's vicinity while also contributing to direct low-level control of a Boston Dynamics Atlas robot. Extensive experimental demonstrations illustrate how the approach can enable the humanoid to walk over uneven terrain without stopping (for tens of minutes), which would otherwise not be possible. We characterize the performance of the estimator for each sensor modality and discuss the computational requirements.
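
The abstract names a Gaussian Particle Filter for the LIDAR localization module. The following is a textbook single measurement update for that filter class, not the authors' code; `lidar_log_likelihood` is a hypothetical stand-in for scoring a candidate pose against the prior map.

```python
# Minimal sketch of one Gaussian-particle-filter measurement update (generic GPF).
import numpy as np

def gpf_update(mean, cov, lidar_log_likelihood, n_particles=500, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # 1. Sample pose hypotheses from the Gaussian prior (e.g. kinematic/inertial prediction).
    particles = rng.multivariate_normal(mean, cov, size=n_particles)
    # 2. Weight each hypothesis by how well the current LIDAR scan matches the map there.
    log_w = np.array([lidar_log_likelihood(p) for p in particles])
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # 3. Refit a single Gaussian to the weighted particles; this becomes the posterior.
    new_mean = w @ particles
    centered = particles - new_mean
    new_cov = (centered * w[:, None]).T @ centered
    return new_mean, new_cov
```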


The International Journal of Robotics Research | 2010

A High-rate, Heterogeneous Data Set from the DARPA Urban Challenge

Albert S. Huang; Matthew E. Antone; Edwin Olson; Luke Fletcher; David Moore; Seth J. Teller; John J. Leonard

This paper describes a data set collected by MIT’s autonomous vehicle Talos during the 2007 DARPA Urban Challenge. Data from a high-precision navigation system, five cameras, 12 SICK planar laser range scanners, and a Velodyne high-density laser range scanner were synchronized and logged to disk for 90 km of travel. In addition to documenting a number of large loop closures useful for developing mapping and localization algorithms, this data set also records the first robotic traffic jam and two autonomous vehicle collisions. It is our hope that this data set will be useful to the autonomous vehicle community, especially those developing robotic perception capabilities.


Signal Processing, Sensor Fusion, and Target Recognition Conference | 2004

Multiple-hypothesis tracking of multiple ground targets from aerial video with dynamic sensor control

Pablo O. Arambel; Jeff Silver; Jon Krant; Matthew E. Antone; Thomas M. Strat

The goal of the DARPA Video Verification of Identity (VIVID) program is to develop an automated video-based ground targeting system for unmanned aerial vehicles that significantly improves operator combat efficiency and effectiveness while minimizing collateral damage. One of the key components of VIVID is the Multiple Target Tracker (MTT), whose main function is to track many ground targets simultaneously by slewing the video sensor from target to target and zooming in and out as necessary. The MTT comprises three modules: (i) a video processor that performs moving object detection, feature extraction, and site modeling; (ii) a multiple hypothesis tracker that processes extracted video reports (e.g. positions, velocities, features) to generate tracks of currently and previously moving targets and confusers; and (iii) a sensor resource manager that schedules camera pan, tilt, and zoom to support kinematic tracking, multiple target track association, scene context modeling, confirmatory identification, and collateral damage avoidance. When complete, VIVID MTT will enable precision tracking of the maximum number of targets permitted by sensor capabilities and by target behavior. This paper describes many of the challenges faced by the developers of the VIVID MTT component, and the solutions that are currently being implemented.
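
As one concrete piece of the generic MHT machinery mentioned above (not the VIVID implementation), data association typically begins with chi-square gating: a video report is only considered for a track if its Mahalanobis distance to the track's predicted measurement is small enough. A minimal sketch:

```python
# Illustrative gating step common to multiple-hypothesis trackers: keep only the
# reports that fall inside a track's chi-square validation gate.
import numpy as np
from scipy.stats import chi2

def gate_reports(predicted_meas, innovation_cov, reports, prob=0.99):
    """Return indices of reports that fall inside the track's validation gate."""
    S_inv = np.linalg.inv(innovation_cov)
    threshold = chi2.ppf(prob, df=len(predicted_meas))
    kept = []
    for idx, z in enumerate(reports):
        v = np.asarray(z) - predicted_meas          # innovation
        if v @ S_inv @ v <= threshold:              # squared Mahalanobis distance
            kept.append(idx)
    return kept
```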

Collaboration


Dive into Matthew E. Antone's collaborations.

Top Co-Authors

Seth J. Teller, Massachusetts Institute of Technology
Matthew R. Walter, Toyota Technological Institute at Chicago
Albert S. Huang, Massachusetts Institute of Technology
Luke Fletcher, Australian National University
David Moore, Massachusetts Institute of Technology
Edwin Olson, University of Michigan
Emilio Frazzoli, Massachusetts Institute of Technology
Jonathan P. How, Massachusetts Institute of Technology
Sertac Karaman, Massachusetts Institute of Technology