Matthew Johnson-Roberson
University of Michigan
Publications
Featured research published by Matthew Johnson-Roberson.
IEEE Transactions on Robotics | 2008
Ian Mahon; Stefan B. Williams; Oscar Pizarro; Matthew Johnson-Roberson
This paper presents a simultaneous localization and mapping algorithm suitable for large-scale visual navigation. The estimation process is based on the viewpoint augmented navigation (VAN) framework using an extended information filter. Cholesky factorization modifications are used to maintain a factor of the VAN information matrix, enabling efficient recovery of state estimates and covariances. The algorithm is demonstrated using data acquired by an autonomous underwater vehicle performing a visual survey of sponge beds. Loop-closure observations produced by a stereo vision system are used to correct the estimated vehicle trajectory produced by dead reckoning sensors.
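The factored information-filter idea can be sketched on a toy problem: given an information matrix and information vector, the state estimate and covariances are recovered through triangular solves against a Cholesky factor rather than a full inversion. This is a minimal NumPy/SciPy illustration of the principle, not the authors' VAN implementation.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy information-form estimate: information matrix Lambda and
# information vector eta encode the Gaussian belief; the mean is
# x = Lambda^{-1} eta, but Lambda is never inverted directly.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Lambda = A @ A.T + 5 * np.eye(5)     # symmetric positive definite
x_true = rng.standard_normal(5)
eta = Lambda @ x_true

# Maintain a Cholesky factor of Lambda; state recovery is then
# two cheap triangular solves instead of a full inversion.
factor = cho_factor(Lambda)
x_hat = cho_solve(factor, eta)

# Covariances are columns of Lambda^{-1}, also obtained via the factor.
Sigma = cho_solve(factor, np.eye(5))

print(np.allclose(x_hat, x_true))
```

Keeping the factor up to date as new observations arrive is what makes the recovery efficient at scale; this sketch only shows the solve step.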
IEEE Robotics & Automation Magazine | 2012
Stefan B. Williams; Oscar Pizarro; Michael V. Jakuba; Craig R. Johnson; Ns Barrett; Russell C. Babcock; Gary A. Kendrick; Peter D. Steinberg; Andrew Heyward; Peter Doherty; Ian Mahon; Matthew Johnson-Roberson; Daniel Steinberg; Ariell Friedman
We have established an Australia-wide observation program that exploits recent developments in autonomous underwater vehicle (AUV) systems to deliver precisely navigated time-series benthic imagery at selected reference stations on Australia's continental shelf. These observations are designed to help characterize changes in benthic assemblage composition and cover derived from precisely registered maps collected at regular intervals. This information will provide researchers with the baseline ecological data necessary to make quantitative inferences about the long-term effects of climate change and human activities on the benthos. Incorporating a suite of observations that capitalize on the unique capabilities of AUVs into Australia's Integrated Marine Observing System (IMOS) [1] is providing a critical link between oceanographic and benthic processes. IMOS is a nationally coordinated program designed to establish and maintain the research infrastructure required to support Australia's marine science research. It has, and will maintain, a strategic focus on the impact of major boundary currents on continental shelf environments, ecosystems, and biodiversity. The IMOS AUV facility observation program is designed to generate physical and biological observations of benthic variables that cannot be cost-effectively obtained by other means.
International Conference on Robotics and Automation | 2011
Jeannette Bohg; Matthew Johnson-Roberson; Beatriz León; Javier Felip; Xavi Gratal; Niklas Bergström; Danica Kragic; Antonio Morales
We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned.
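One common way to close such knowledge gaps is symmetry-based shape completion: mirror the observed partial point cloud about an estimated symmetry plane. The sketch below is a hypothetical NumPy illustration of that general idea; the function name and the choice of a plane through the centroid are our assumptions, not details taken from the paper.

```python
import numpy as np

def complete_by_mirroring(points):
    """Reflect a partial 2-D point cloud about a vertical plane
    through its centroid, a crude stand-in for shape prediction."""
    c = points.mean(axis=0)
    mirrored = points.copy()
    mirrored[:, 0] = 2 * c[0] - mirrored[:, 0]  # reflect x about the centroid
    return np.vstack([points, mirrored])

# The "visible" half of a circular object, as a camera might see it
theta = np.linspace(-np.pi / 2, np.pi / 2, 50)
half = np.c_[np.cos(theta), np.sin(theta)]
full = complete_by_mirroring(half)
print(full.shape)  # twice as many points, symmetric about the centroid
```

A completed cloud like `full` is the kind of state estimate that could then be handed to a grasp simulator, as the abstract describes.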
PLOS ONE | 2013
Mitch Bryson; Matthew Johnson-Roberson; Richard J. Murphy; Daniel L. Bongiorno
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales.
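The step of registering a reconstruction to a small number of ground control points can be sketched as a closed-form similarity-transform fit (the Umeyama method): scale, rotation and translation are recovered from as few as three surveyed points. The function name and toy control points below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def fit_similarity(src, dst):
    """Umeyama closed-form fit of dst = s * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Three ground control points suffice for a 2-D similarity transform
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # model coordinates
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
dst = 2.0 * src @ R_true.T + np.array([10.0, 5.0])     # surveyed coordinates

s, R, t = fit_similarity(src, dst)
print(np.isclose(s, 2.0), np.allclose(s * src @ R.T + t, dst))
```

With the transform in hand, every vertex of the terrain model can be mapped into the surveyed coordinate frame.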
PLOS ONE | 2012
Ariell Friedman; Oscar Pizarro; Stefan B. Williams; Matthew Johnson-Roberson
This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and has far less environmental impact than traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced and, being an image-based technique, it can be deployed on robotic platforms that operate beyond diver depths. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey covering a substantially greater area. Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies. This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements.
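The rugosity measure described above (triangle areas divided by their projections onto a PCA plane of best fit) can be sketched in a few lines of NumPy; the toy two-triangle mesh and function names are illustrative:

```python
import numpy as np

def tri_area(p):
    """Area of a 3-D triangle given as a (3, 3) array of vertices."""
    a, b, c = p
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def rugosity(points, triangles):
    """Surface area divided by area projected onto the PCA plane of best fit."""
    pts = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    basis = vt[:2]                                   # orthonormal in-plane axes
    surf = sum(tri_area(pts[t]) for t in triangles)
    proj = np.pad(pts @ basis.T, ((0, 0), (0, 1)))   # drop out-of-plane part
    flat = sum(tri_area(proj[t]) for t in triangles)
    return surf / flat

tris = [[0, 1, 2], [1, 3, 2]]
flat_mesh = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], float)
print(round(rugosity(flat_mesh, tris), 6))   # a planar patch scores 1.0

bumpy = flat_mesh.copy()
bumpy[3, 2] = 1.0                            # raise one vertex off the plane
print(rugosity(bumpy, tris) > 1.0)           # relief raises rugosity above 1
```

Because projection onto the best-fit plane never enlarges a triangle, rugosity is always at least 1, and fitting the plane first is what decouples the measure from slope.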
International Conference on Robotics and Automation | 2009
Mitch Bryson; Matthew Johnson-Roberson; Salah Sukkarieh
This paper presents a framework for integrating sensor information from an Inertial Measurement Unit (IMU), Global Positioning System (GPS) receiver and monocular vision camera mounted to a low-flying Unmanned Aerial Vehicle (UAV) for building large-scale terrain reconstructions. Our method seeks to integrate all of the sensor information using a statistically optimal non-linear least squares smoothing algorithm to estimate vehicle poses simultaneously with a dense point feature map of the terrain. A visualisation of the terrain structure is then created by building a textured mesh-surface from the estimated point features. The resulting terrain reconstruction can be used for a range of environmental monitoring missions such as invasive plant detection and biomass mapping.
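The batch smoothing idea, jointly estimating vehicle poses and a point feature map in one non-linear least-squares problem, can be illustrated on a toy 2-D, position-only example. Here `scipy.optimize.least_squares` stands in for the authors' smoother, and the noiseless measurements are for illustration only.

```python
import numpy as np
from scipy.optimize import least_squares

# Ground truth: 3 vehicle positions and 2 landmark positions in 2-D.
true_poses = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1]])
true_lms = np.array([[0.5, 1.0], [1.5, 1.2]])

# Measurements: pose-to-pose odometry and pose-to-landmark offsets.
odom = np.diff(true_poses, axis=0)
obs = true_lms[None, :, :] - true_poses[:, None, :]

def residuals(x):
    poses = x[:6].reshape(3, 2)
    lms = x[6:].reshape(2, 2)
    return np.concatenate([
        poses[0],                                        # anchor first pose
        (np.diff(poses, axis=0) - odom).ravel(),         # odometry terms
        ((lms[None] - poses[:, None]) - obs).ravel(),    # observation terms
    ])

# Smooth every pose and landmark simultaneously in one batch solve.
sol = least_squares(residuals, np.zeros(10))
print(np.allclose(sol.x[:6].reshape(3, 2), true_poses, atol=1e-6))
```

Because all states are estimated together, a late GPS fix or loop closure can correct the entire trajectory and map at once, which is the appeal of smoothing over filtering.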
International Journal of Nautical Archaeology | 2013
Jon C. Henderson; Oscar Pizarro; Matthew Johnson-Roberson; Ian Mahon
Creating photo-mosaics and plans of submerged archaeological sites quickly, cost-effectively and, most importantly, to a high level of geometric accuracy remains a huge challenge in underwater archaeology. This paper describes a system that takes geo-referenced stereo imagery from a diver-propelled platform and combines it with mapping techniques widely used in the field of robotic science to create high-resolution 2D photo-mosaics and detailed 3D textured models of submerged archaeological features. The system was field tested on the submerged Bronze Age town of Pavlopetri off the coast of Laconia, Greece, in 2010. This paper outlines the equipment used, data collection in the field, image processing and visualization methodology.
Intelligent Robots and Systems | 2010
Matthew Johnson-Roberson; Jeannette Bohg; Mårten Björkman; Danica Kragic
In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical-model-based energy minimization, comparing them over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.
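The min-cut labeling can be illustrated in its two-label form (the paper performs multi-class labeling, which generalises this core). In the sketch below, a four-point "cloud" gets foreground/background unary costs plus pairwise smoothness terms, and `networkx` serves as an assumed stand-in for a dedicated max-flow solver; all weights are invented for illustration.

```python
import networkx as nx

# Unary terms: cost of NOT taking each label (paid when the edge is cut).
unary_fg = {0: 9, 1: 8, 2: 1, 3: 2}   # cost of labelling the point background
unary_bg = {0: 1, 1: 2, 2: 9, 3: 8}   # cost of labelling the point foreground
edges = [(0, 1, 3), (1, 2, 3), (2, 3, 3)]  # neighbour smoothness weights

G = nx.DiGraph()
for n in unary_fg:
    G.add_edge('s', n, capacity=unary_fg[n])   # source = foreground terminal
    G.add_edge(n, 't', capacity=unary_bg[n])   # sink = background terminal
for a, b, w in edges:
    G.add_edge(a, b, capacity=w)               # penalise label changes
    G.add_edge(b, a, capacity=w)               # between neighbouring points

# The minimum s-t cut gives the globally optimal binary labelling.
cut_value, (fg, bg) = nx.minimum_cut(G, 's', 't')
print(sorted(fg - {'s'}), sorted(bg - {'t'}))  # points split by label
```

Points 0 and 1 end up in the foreground and 2 and 3 in the background: the strong unary preferences win, while the smoothness edges keep neighbours together.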
International Symposium on Experimental Robotics | 2009
Stefan B. Williams; Oscar Pizarro; Ian Mahon; Matthew Johnson-Roberson
This paper reviews current work being undertaken at the University of Sydney's Australian Centre for Field Robotics on efficient, stereo-based Simultaneous Localisation and Mapping and dense scene reconstruction suitable for creating detailed maps of seafloor survey sites. A suite of tools has been developed for creating and visualising accurate models of the seafloor, thereby providing marine scientists with a method for assessing the spatial distribution of organisms on the seafloor. The Autonomous Underwater Vehicle Sirius was operated on two major cruises in 2007 as part of the establishment of an AUV Facility associated with Australia's Integrated Marine Observing System (IMOS). A series of deployments was undertaken in collaboration with scientists from the Australian Institute of Marine Science (AIMS) to assess benthic habitats off the Ningaloo Reef, Western Australia in May. The AUV was also part of a three-week research cruise in September aboard the R/V Southern Surveyor documenting drowned shelf-edge reefs at multiple sites in four areas along the Great Barrier Reef. Preliminary outcomes of these cruises are described.
International Conference on Robotics and Automation | 2017
Matthew Johnson-Roberson; Charles Barto; Rounak Mehta; Sharath Nittur Sridhar; Karl Rosaen; Ram Vasudevan
Deep learning has rapidly transformed the state-of-the-art algorithms used to address a variety of problems in computer vision and robotics. These breakthroughs have relied upon massive amounts of human-annotated training data. This time-consuming annotation process has begun impeding the progress of these deep learning efforts. This paper describes a method to incorporate photo-realistic computer images from a simulation engine to rapidly generate annotated data that can be used for the training of machine learning algorithms. We demonstrate that a state-of-the-art architecture, which is trained only using these synthetic annotations, performs better than the identical architecture trained on human-annotated real-world data, when tested on the KITTI data set for vehicle detection. By training machine learning algorithms on a rich virtual world, real objects in real scenes can be learned and classified using synthetic data. This approach offers the possibility of accelerating deep learning's application to sensor-based classification problems like those that appear in self-driving cars. The source code and data used to train and validate the networks described in this paper are made available for researchers.
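The core idea, training only on cheaply labelled synthetic samples and evaluating on "real" ones, can be sketched with a toy stand-in for the rendering engine: Gaussian clusters instead of photo-realistic images, with a small `shift` invented here to mimic a sim-to-real domain gap. None of the names or numbers come from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def simulate(n, shift=0.0):
    """Stand-in "renderer": two labelled 2-D clusters; `shift`
    mimics the domain gap between simulation and reality."""
    X0 = rng.normal([0, 0], 0.5, (n, 2)) + shift   # class 0 samples
    X1 = rng.normal([2, 2], 0.5, (n, 2)) + shift   # class 1 samples
    X = np.vstack([X0, X1])
    y = np.r_[np.zeros(n), np.ones(n)]             # labels come for free
    return X, y

X_syn, y_syn = simulate(500)                 # cheap, perfectly annotated
X_real, y_real = simulate(200, shift=0.15)   # slightly different domain

clf = LogisticRegression().fit(X_syn, y_syn)   # trained on synthetic only
print(clf.score(X_real, y_real) > 0.95)        # transfers to "real" data
```

The classifier never sees a "real" sample during training, yet generalises because the simulated distribution is close enough, which is the claim the paper tests at scale with rendered imagery and KITTI.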