
Publication


Featured research published by Ryan M. Eustice.


IEEE Transactions on Robotics | 2006

Exactly Sparse Delayed-State Filters for View-Based SLAM

Ryan M. Eustice; Hanumant Singh; John J. Leonard

This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to places the robot has previously visited. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as the sparse extended information filter or the thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information space parameterization without incurring any sparse approximation error. Therefore, it can produce results equivalent to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic.
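
The sparsity the paper describes is easy to visualize on a toy problem. The sketch below (a minimal 1-D illustration with invented noise weights, not the authors' implementation) assembles the information matrix for a five-pose delayed-state chain with odometry links and a single scan-match loop closure; the only nonzero off-diagonal entries are those between directly constrained pose pairs:

```python
import numpy as np

# Toy 1-D delayed-state SLAM: poses x0..x4, odometry between neighbors,
# one camera/scan-match "virtual observation" linking x0 and x4.
n = 5
Lam = np.zeros((n, n))      # information (inverse covariance) matrix
Lam[0, 0] = 1.0             # prior on the first pose

constraints = [(k, k + 1, 4.0) for k in range(n - 1)] + [(0, 4, 2.0)]
for i, j, w in constraints:
    H = np.zeros(n)
    H[i], H[j] = -1.0, 1.0          # relative measurement z = x_j - x_i
    Lam += w * np.outer(H, H)       # Lambda += H^T R^-1 H

# Tridiagonal band plus the (0,4) loop-closure entries; nothing else.
print((np.abs(Lam) > 1e-12).astype(int))
```

Because delayed states are retained rather than marginalized, the measurement updates above are purely additive and the matrix stays exactly sparse.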


The International Journal of Robotics Research | 2007

Exactly Sparse Extended Information Filters for Feature-based SLAM

Matthew R. Walter; Ryan M. Eustice; John J. Leonard

Recent research concerning the Gaussian canonical form for Simultaneous Localization and Mapping (SLAM) has given rise to a handful of algorithms that attempt to solve the SLAM scalability problem for arbitrarily large environments. One such estimator that has received due attention is the Sparse Extended Information Filter (SEIF) proposed by Thrun et al., which is reported to be nearly constant time, irrespective of the size of the map. The key to the SEIF's scalability is to prune weak links in what is a dense information (inverse covariance) matrix to achieve a sparse approximation that allows for efficient, scalable SLAM. We demonstrate that the SEIF sparsification strategy yields error estimates that are overconfident when expressed in the global reference frame, while empirical results show that relative map consistency is maintained. In this paper, we propose an alternative scalable estimator based on an information form that maintains sparsity while preserving consistency. The paper describes a method for controlling the population of the information matrix, whereby we track a modified version of the SLAM posterior, essentially by ignoring a small fraction of temporal measurements. In this manner, the Exactly Sparse Extended Information Filter (ESEIF) performs inference over a model that is conservative relative to the standard Gaussian distribution. We compare our algorithm to the SEIF and standard EKF both in simulation and on two nonlinear datasets. The results convincingly show that our method yields conservative estimates for the robot pose and map that are nearly identical to those of the EKF.
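
The overconfidence of naive link pruning can be reproduced in a few lines of algebra. In the sketch below (an illustrative toy with made-up numbers, not the SEIF or ESEIF code), zeroing an off-diagonal information entry shrinks the recovered marginal variance, i.e. the pruned filter claims more certainty than it has:

```python
import numpy as np

# Two correlated states with information (inverse covariance) matrix Lambda.
Lam = np.array([[2.0, 1.5],
                [1.5, 2.0]])

exact_var = np.linalg.inv(Lam)[0, 0]          # true marginal variance of x0

Lam_pruned = Lam.copy()
Lam_pruned[0, 1] = Lam_pruned[1, 0] = 0.0     # prune the "weak" link
pruned_var = np.linalg.inv(Lam_pruned)[0, 0]

print(exact_var, pruned_var)   # pruned_var < exact_var: overconfident
```

Per the abstract, the ESEIF avoids this step entirely: it keeps the matrix sparse by ignoring a small fraction of temporal measurements, so the tracked posterior is conservative rather than overconfident.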


Robotics: Science and Systems | 2005

Visually Navigating the RMS Titanic with SLAM Information Filters

Ryan M. Eustice; Hanumant Singh; John J. Leonard; Matthew R. Walter; Robert D. Ballard

This paper describes a vision-based large-area simultaneous localization and mapping (SLAM) algorithm that respects the constraints of low-overlap imagery typical of underwater vehicles while exploiting the information associated with the inertial sensors that are routinely available on such platforms. We present a novel strategy for efficiently accessing and maintaining consistent covariance bounds within a SLAM information filter, greatly increasing the reliability of data association. The technique is based upon solving a sparse system of linear equations coupled with the application of constant-time Kalman updates. The method is shown to produce consistent covariance estimates suitable for robot planning and data association. Real-world results are presented for a vision-based 6-DOF SLAM implementation using data from a recent ROV survey of the wreck of the RMS Titanic.
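
The covariance-access step described above, solving a sparse linear system rather than inverting the full information matrix, can be sketched as follows (a minimal illustration using an arbitrary tridiagonal stand-in for the SLAM information matrix, not the authors' code):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# A small symmetric positive-definite "information matrix" standing in for
# the delayed-state filter's Lambda (tridiagonal odometry-chain structure).
n = 200
diag = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
Lam = sp.diags([off, diag, off], [-1, 0, 1], format="csc")

# The covariance column for pose i solves Lambda * sigma_i = e_i; having
# these columns is what enables consistent data-association bounds without
# ever forming the dense inverse.
i = 42
e_i = np.zeros(n); e_i[i] = 1.0
sigma_i = spla.spsolve(Lam, e_i)

print(sigma_i[i])   # marginal variance of pose i
```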


International Conference on Robotics and Automation | 2005

Exactly Sparse Delayed-State Filters

Ryan M. Eustice; Hanumant Singh; John J. Leonard

This paper presents the novel insight that the SLAM information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data, which yields virtual observations of robot motion with respect to a place the robot has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as Sparse Extended Information Filters or Thin Junction Tree Filters, which have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparseness of the delayed-state framework is that it allows one to take advantage of the information space parameterization without having to make any approximations. Therefore, it can produce results equivalent to the “full-covariance” solution.
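
Why feature-based information filters must approximate while the delayed-state form need not comes down to marginalization. The toy below (made-up numbers, purely illustrative) marginalizes one state out of a three-state chain via the Schur complement and shows the fill-in it creates between the state's former neighbors:

```python
import numpy as np

# Chain x0 - x1 - x2: x0 and x2 interact only through x1, so Lambda[0,2] = 0.
Lam = np.array([[ 2.0, -1.0,  0.0],
                [-1.0,  3.0, -1.0],
                [ 0.0, -1.0,  2.0]])

# Marginalize out x1 (index 1) via the Schur complement.
keep, drop = [0, 2], [1]
A = Lam[np.ix_(keep, keep)]
B = Lam[np.ix_(keep, drop)]
C = Lam[np.ix_(drop, drop)]
Lam_marg = A - B @ np.linalg.inv(C) @ B.T

print(Lam_marg)   # the (x0, x2) entry is now nonzero: fill-in
```

Feature-based filters marginalize out past robot poses at every step, creating exactly this kind of fill-in, which SEIF-style methods then prune approximately; a delayed-state filter simply keeps the poses and never pays that price.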


The International Journal of Robotics Research | 2006

Visually Mapping the RMS Titanic: Conservative Covariance Estimates for SLAM Information Filters

Ryan M. Eustice; Hanumant Singh; John J. Leonard; Matthew R. Walter

This paper describes a vision-based, large-area simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of underwater vehicles while exploiting the inertial sensor information that is routinely available on such platforms. We present a novel strategy for efficiently accessing and maintaining consistent covariance bounds within a SLAM information filter, thereby greatly increasing the reliability of data association. The technique is based upon solving a sparse system of linear equations coupled with the application of constant-time Kalman updates. The method is shown to produce consistent covariance estimates suitable for robot planning and data association. Real-world results are reported for a vision-based, six-degree-of-freedom SLAM implementation using data from a recent survey of the wreck of the RMS Titanic.


IEEE Journal of Oceanic Engineering | 2008

Visually Augmented Navigation for Autonomous Underwater Vehicles

Ryan M. Eustice; Oscar Pizarro; Hanumant Singh

As autonomous underwater vehicles (AUVs) are becoming routinely used in an exploratory context for ocean science, the goal of visually augmented navigation (VAN) is to improve the near-seafloor navigation precision of such vehicles without imposing the burden of having to deploy additional infrastructure. This is in contrast to traditional acoustic long baseline navigation techniques, which require the deployment, calibration, and eventual recovery of a transponder network. To achieve this goal, VAN is formulated within a vision-based simultaneous localization and mapping (SLAM) framework that exploits the systems-level complementary aspects of a camera and strap-down sensor suite. The result is an environmentally based navigation technique robust to the peculiarities of low-overlap underwater imagery. The method employs a view-based representation where camera-derived relative-pose measurements provide spatial constraints, which enforce trajectory consistency and also serve as a mechanism for loop closure, allowing for error growth to be independent of time for revisited imagery. This article outlines the multisensor VAN framework and demonstrates it to have compelling advantages over a purely vision-only approach by: 1) improving the robustness of low-overlap underwater image registration; 2) setting the free gauge scale; and 3) allowing for a disconnected camera-constraint topology.
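
The claim that loop closures make error growth independent of time for revisited imagery can be checked numerically. The sketch below (a 1-D toy with invented noise weights, not the VAN system) shows the final pose's variance growing roughly linearly along a dead-reckoned chain, then collapsing once a single camera-derived relative-pose constraint ties it back to the first pose:

```python
import numpy as np

n = 50
Lam = np.zeros((n, n))
Lam[0, 0] = 100.0                 # well-known starting pose

def add_link(Lam, i, j, w):
    """Relative constraint x_j - x_i with information weight w."""
    H = np.zeros(n); H[i], H[j] = -1.0, 1.0
    return Lam + w * np.outer(H, H)

for k in range(n - 1):            # dead reckoning: odometry links only
    Lam = add_link(Lam, k, k + 1, w=1.0)

var_before = np.linalg.inv(Lam)[-1, -1]
Lam = add_link(Lam, 0, n - 1, w=50.0)   # one camera loop closure to pose 0
var_after = np.linalg.inv(Lam)[-1, -1]

print(var_before, var_after)      # variance grows with n, then collapses
```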


International Conference on Robotics and Automation | 2007

Experimental Results in Synchronous-Clock One-Way-Travel-Time Acoustic Navigation for Autonomous Underwater Vehicles

Ryan M. Eustice; Louis L. Whitcomb; Hanumant Singh; Matthew Grund

This paper reports recent experimental results in the development and deployment of a synchronous-clock acoustic navigation system suitable for the simultaneous navigation of multiple underwater vehicles. The goal of this work is to enable the task of navigating multiple autonomous underwater vehicles (AUVs) over length scales of O(100 km), while maintaining error tolerances commensurate with conventional long-baseline transponder-based navigation systems (i.e., O(1 m)), but without the need to deploy, calibrate, and recover seafloor-anchored acoustic transponders. Our navigation system comprises an acoustic modem-based communication/navigation system in which a source node broadcasts its onboard navigational data as a data packet, and all passively receiving nodes decode the packet to obtain a one-way travel-time pseudo-range measurement and ephemeris data. We present results for two different field experiments using a two-node configuration consisting of a global positioning system (GPS) equipped surface ship acting as a global navigation aid to a Doppler-aided AUV. In each experiment, vehicle position was independently corroborated by other standard navigation means. Initial results for a maximum-likelihood sensor fusion framework are reported.
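
The core measurement here is simple to state: with synchronized clocks, a passive receiver can turn a packet's time of flight into a range without replying. A minimal sketch (all numbers, the 2-D state, and the measurement model are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

SOUND_SPEED = 1500.0          # nominal speed of sound in seawater, m/s

# Synchronous-clock OWTT: the source stamps the transmit time into the
# packet; a passive receiver needs only its own clock to form a range.
t_transmit = 1000.000         # s, encoded in the broadcast packet
t_receive = 1000.667          # s, local receive time on the AUV
pseudorange = SOUND_SPEED * (t_receive - t_transmit)   # ~1000 m

# One EKF-style range update against the ship's GPS position (ephemeris
# carried in the same packet), for a 2-D AUV position estimate.
x = np.array([400.0, 950.0])          # AUV position estimate, m
P = np.diag([25.0, 25.0])             # its covariance
ship = np.array([0.0, 0.0])           # surface-ship position from packet

r_hat = np.linalg.norm(x - ship)      # predicted range
H = (x - ship) / r_hat                # Jacobian of range w.r.t. position
R = 3.0**2                            # pseudo-range noise variance, m^2
S = H @ P @ H + R
K = P @ H / S
x = x + K * (pseudorange - r_hat)
P = (np.eye(2) - np.outer(K, H)) @ P

print(x, np.diag(P))
```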


The International Journal of Robotics Research | 2011

Ford Campus vision and lidar data set

Gaurav Pandey; James R. McBride; Ryan M. Eustice

In this paper we describe a data set collected by an autonomous ground vehicle testbed, based upon a modified Ford F-250 pickup truck. The vehicle is outfitted with professional (Applanix POS-LV) and consumer-grade (Xsens MTi-G) inertial measurement units, a Velodyne three-dimensional lidar scanner, two push-broom forward-looking Riegl lidars, and a Point Grey Ladybug3 omnidirectional camera system. Here we present the time-registered data from these sensors mounted on the vehicle, collected while driving the vehicle around the Ford Research Campus and downtown Dearborn, MI, during November–December 2009. The vehicle path trajectory in these data sets contains several large- and small-scale loop closures, which should be useful for testing various state-of-the-art computer vision and simultaneous localization and mapping algorithms.


The International Journal of Robotics Research | 2012

Advanced perception, navigation and planning for autonomous in-water ship hull inspection

Franz S. Hover; Ryan M. Eustice; Ayoung Kim; Brendan J. Englot; Hordur Johannsson; Michael Kaess; John J. Leonard

Inspection of ship hulls and marine structures using autonomous underwater vehicles has emerged as a unique and challenging application of robotics. The problem poses rich questions in physical design and operation, perception and navigation, and planning, driven by difficulties arising from the acoustic environment, poor water quality, and the highly complex structures to be inspected. In this paper, we develop and apply algorithms for the central navigation and planning problems on ship hulls. These divide into two classes: one suitable for the open, forward parts of a typical monohull, and one for the complex areas around the shafting, propellers, and rudders. On the open hull, we have integrated acoustic and visual mapping processes to achieve closed-loop control relative to features such as weld lines and biofouling. In the complex area, we implemented new large-scale planning routines to achieve full imaging coverage of all the structures at high resolution. We demonstrate our approaches in recent operations on naval ships.


OCEANS Conference | 2010

Initial results in underwater single image dehazing

Nicholas Carlevaris-Bianco; Anush Mohan; Ryan M. Eustice

As light is transmitted from subject to observer, it is absorbed and scattered by the medium it passes through. In media with large suspended particles, such as fog or turbid water, the effect of scattering can drastically decrease the quality of images. In this paper we present an algorithm for removing the effects of light scattering, referred to as dehazing, in underwater images. Our key contribution is to propose a simple, yet effective, prior that exploits the strong difference in attenuation between the three image color channels in water to estimate the depth of the scene. We then use this estimate to reduce the spatially varying effect of haze in the image. Our method works with a single image and does not require any specialized hardware or prior knowledge of the scene. As a by-product of the dehazing process, an up-to-scale depth map of the scene is produced. We present results over multiple real underwater images and over a controlled test set where the target distance and true colors are known.
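
The channel-difference idea behind the prior is easy to sketch. The code below is a simplified per-patch variant written for illustration (the patch size, the depth interpretation, and the synthetic test are assumptions, not the paper's tuned algorithm): it scores each patch by how much stronger the red channel is than the best of green and blue, since red attenuates fastest in water and its deficit grows with distance.

```python
import numpy as np

def channel_difference_prior(img, patch=8):
    """Per-patch difference between the max red response and the max of
    the green/blue responses; lower scores suggest greater scene distance.
    img: float array in [0, 1] with shape (H, W, 3), RGB order."""
    H, W, _ = img.shape
    D = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            red_max = block[..., 0].max()
            gb_max = np.maximum(block[..., 1], block[..., 2]).max()
            D[i // patch, j // patch] = red_max - gb_max
    return D

# Synthetic test: a "far" region with suppressed red should score lower.
img = np.full((64, 64, 3), 0.5)
img[:, 32:, 0] = 0.1                  # heavy red attenuation on the right
D = channel_difference_prior(img)
print(D[:, :4].mean(), D[:, 4:].mean())   # near vs. far patch scores
```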
