Edwin Olson
University of Michigan
Publication
Featured research published by Edwin Olson.
International Conference on Robotics and Automation (ICRA) | 2011
Edwin Olson
While the use of naturally-occurring features is a central focus of machine perception, artificial features (fiducials) play an important role in creating controllable experiments, ground truthing, and in simplifying the development of systems where perception is not the central objective. We describe a new visual fiducial system that uses a 2D bar code style “tag”, allowing full 6 DOF localization of features from a single image. Our system improves upon previous systems, incorporating a fast and robust line detection system, a stronger digital coding system, and greater robustness to occlusion, warping, and lens distortion. While similar in concept to the ARTag system, our method is fully open and the algorithms are documented in detail.
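The "stronger digital coding system" mentioned above rests on decoding an observed bit payload to the nearest valid codeword. A minimal sketch of that idea (the codebook, payload width, and error budget here are illustrative, not the actual AprilTag code family):

```python
# Illustrative nearest-codeword decoding under Hamming distance.
# A real tag family also handles the four rotations of the payload;
# that is omitted here for brevity.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two payloads."""
    return bin(a ^ b).count("1")

def decode(observed: int, codebook: list, max_errors: int = 2):
    """Return (tag_id, distance) for the closest codeword, or None
    when the best match exceeds the correctable error budget."""
    best_id, best_d = None, max_errors + 1
    for tag_id, word in enumerate(codebook):
        d = hamming(observed, word)
        if d < best_d:
            best_id, best_d = tag_id, d
    return (best_id, best_d) if best_id is not None else None
```

Designing the codebook so that valid codewords are far apart in Hamming distance is what makes this rejection of corrupted detections robust.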
IEEE Journal of Oceanic Engineering | 2006
Edwin Olson; John J. Leonard; Seth J. Teller
In this paper, we present a system capable of simultaneously estimating the position of an autonomous underwater vehicle (AUV) and the positions of stationary range-only beacons. Notably, our system does not require beacon positions a priori, and our system performs well even when range measurements are severely degraded by noise and outliers. We present a powerful outlier rejection method that can identify groups of range measurements that are consistent with each other, and a method for initializing beacon positions in an extended Kalman filter (EKF). We have successfully applied our algorithms to real-world data and have demonstrated a simultaneous localization and mapping (SLAM) system whose navigation performance is comparable to that of systems that assume known beacon locations.
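One ingredient such a system needs is an initial estimate of a beacon's position from ranges taken at several known vehicle positions. A toy sketch of that step (names and the noise-free setup are illustrative; this is a generic linearized trilateration fit, not the paper's EKF initialization):

```python
# Initialize an unknown beacon position from range measurements taken
# at known 2-D vehicle positions, via linearized least squares.

def init_beacon(poses, ranges):
    """poses: list of (x, y) vehicle positions; ranges: measured
    distances to the beacon from each pose. Returns (bx, by)."""
    (x0, y0), r0 = poses[0], ranges[0]
    # Subtracting the first range equation from the others linearizes
    # (bx - xi)^2 + (by - yi)^2 = ri^2 into A [bx, by]^T = c.
    rows, rhs = [], []
    for (xi, yi), ri in zip(poses[1:], ranges[1:]):
        rows.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations A^T A p = A^T c directly.
    a11 = sum(a * a for a, _ in rows)
    a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for _, b in rows)
    c1 = sum(a * c for (a, _), c in zip(rows, rhs))
    c2 = sum(b * c for (_, b), c in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)
```

In practice such an estimate would only be trusted after the consistent-group outlier test the abstract describes has discarded spurious ranges.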
International Conference on Robotics and Automation (ICRA) | 2009
Edwin Olson
Scan matching, the problem of registering two laser scans in order to determine the relative positions from which the scans were obtained, is one of the most heavily relied-upon tools for mobile robots. Current algorithms, in a trade-off for computational performance, employ heuristics in order to quickly compute an answer. Of course, these heuristics are imperfect: existing methods can produce poor results, particularly when the prior is weak. The computational power available to modern robots warrants a re-examination of these quality vs. complexity trade-offs. In this paper, we advocate a probabilistically-motivated scan-matching algorithm that produces higher quality and more robust results at the cost of additional computation time. We describe several novel implementations of this approach that achieve real-time performance on modern hardware, including a multi-resolution approach for conventional CPUs, and a parallel approach for graphics processing units (GPUs). We also provide an empirical evaluation of our methods and several contemporary methods, illustrating the benefits of our approach. The robustness of the methods makes them especially useful for global loop-closing.
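The core of the correlative approach is an exhaustive search over candidate transforms, scoring each against a rasterized reference map. A deliberately minimal sketch (translation only, unit-weight cells; the paper's method also searches rotation, uses probabilistic cell scores, and a multi-resolution pyramid):

```python
# Exhaustively score candidate x/y grid offsets of a new scan against
# the occupied cells of a reference scan and keep the best offset.

def best_offset(ref_cells, scan_pts, search=2):
    """ref_cells: set of occupied (ix, iy) grid cells from the prior
    scan; scan_pts: grid points of the new scan. Returns the (dx, dy)
    maximizing the number of scan points landing on occupied cells,
    together with that score."""
    best, best_score = (0, 0), -1
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            score = sum((x + dx, y + dy) in ref_cells
                        for x, y in scan_pts)
            if score > best_score:
                best, best_score = (dx, dy), score
    return best, best_score
```

Because every candidate in the window is evaluated, the search cannot be trapped by a bad prior the way iterative hill-climbing matchers can, which is why this family of methods suits loop closing.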
Intelligent Robots and Systems (IROS) | 2010
Albert S. Huang; Edwin Olson; David Moore
We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.
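In LCM itself, marshalling code is generated from type definitions; the sketch below hand-rolls a comparable fixed wire format for a toy pose message using only the standard library. The field layout and names are ours for illustration, not LCM's actual encoding:

```python
import struct

# Toy fixed-layout marshalling for a pose message: a timestamp and a
# 2-D pose, packed big-endian so the bytes are portable across hosts.

POSE_FMT = ">qddd"  # utime (int64), x, y, theta (float64)

def encode_pose(utime, x, y, theta):
    """Serialize the pose fields into a compact byte string."""
    return struct.pack(POSE_FMT, utime, x, y, theta)

def decode_pose(buf):
    """Recover the pose fields from the byte string."""
    utime, x, y, theta = struct.unpack(POSE_FMT, buf)
    return {"utime": utime, "x": x, "y": y, "theta": theta}
```

Generating this kind of encode/decode pair automatically from a type specification is what keeps the marshalling both low-latency and consistent across the languages a robot system mixes.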
Robotics: Science and Systems (RSS) | 2013
Edwin Olson; Pratik Agarwal
The central challenge in robotic mapping is obtaining reliable data associations (or “loop closures”): state-of-the-art inference algorithms can fail catastrophically if even one erroneous loop closure is incorporated into the map. Consequently, much work has been done to push error rates closer to zero. However, a long-lived or multi-robot system will still encounter errors, leading to system failure. We propose a fundamentally different approach: introduce richer error models that allow the probability of a failure to be explicitly modeled. In other words, rather than characterizing loop closures as being “right” or “wrong”, we propose characterizing the error of those loop closures in a more expressive manner that can account for their non-Gaussian behavior. Our approach leads to a fully integrated Bayesian framework for dealing with error-prone data. Unlike earlier multiple-hypothesis approaches, our approach avoids exponential memory complexity and is fast enough for real-time performance. We show that the proposed method not only allows loop closing errors to be automatically identified, but also that in extreme cases, the “front-end” loop-validation systems can be unnecessary. We demonstrate our system both on standard benchmarks and on the real-world data sets that motivated this work.
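A compact way to see the richer error model: replace a single Gaussian on each loop-closure residual with the maximum over mixture components, e.g. a tight "inlier" mode and a broad "outlier" mode. The weights and sigmas below are illustrative placeholders:

```python
import math

# Max-mixture error model sketch: evaluate the negative log-likelihood
# of each Gaussian component and keep the best (max likelihood = min
# NLL), so a wildly wrong loop closure is absorbed by the broad
# outlier component instead of dragging the whole map with it.

def gaussian_nll(residual, sigma, weight):
    """Negative log of weight * N(residual; 0, sigma^2)."""
    return (-math.log(weight)
            + 0.5 * math.log(2 * math.pi * sigma ** 2)
            + 0.5 * (residual / sigma) ** 2)

def max_mixture_nll(residual, components):
    """components: list of (weight, sigma) pairs."""
    return min(gaussian_nll(residual, s, w) for w, s in components)
```

Unlike a sum-mixture, the max keeps the cost function a simple switch between quadratics, which is what lets standard least-squares map optimizers use it at real-time rates.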
IEEE/OES Autonomous Underwater Vehicles (AUV) | 2004
Edwin Olson; John J. Leonard; Seth J. Teller
In this paper, we present a system capable of simultaneously estimating the position of an autonomous underwater vehicle (AUV) and the positions of stationary range-only beacons. Notably, our system does not require beacon positions a priori, and our system performs well even when range measurements are severely degraded by noise and outliers. We present a powerful outlier rejection method that can identify groups of range measurements that are consistent with each other, and a method for initializing beacon positions in an extended Kalman filter (EKF). We have successfully applied our algorithms to real-world data and have demonstrated a simultaneous localization and mapping (SLAM) system whose navigation performance is comparable to that of systems that assume known beacon locations.
Intelligent Robots and Systems (IROS) | 2010
Johannes H. Strom; Andrew Richardson; Edwin Olson
We present an efficient graph-theoretic algorithm for segmenting a colored laser point cloud derived from a laser scanner and camera. Segmentation of raw sensor data is a crucial first step for many high level tasks such as object recognition, obstacle avoidance and terrain classification. Our method enables combination of color information from a wide field of view camera with a 3D LIDAR point cloud from an actuated planar laser scanner. We extend previous work on robust camera-only graph-based segmentation to the case where spatial features, such as surface normals, are available. Our combined method produces segmentation results superior to those derived from either cameras or laser-scanners alone. We verify our approach on both indoor and outdoor scenes.
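The graph-based flavor of this family of methods can be sketched with a union-find over edges whose weight combines color and spatial dissimilarity. The threshold, the linear weight combination, and the data layout below are illustrative simplifications (the paper builds on an adaptive, Felzenszwalb-style merge criterion rather than a fixed threshold):

```python
# Toy graph segmentation of colored points: each point is a node,
# each edge carries a color difference and a surface-normal
# difference, and a union-find merges sufficiently similar neighbors.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def segment(n_points, edges, color_w=1.0, normal_w=1.0, thresh=0.5):
    """edges: (i, j, color_diff, normal_diff) tuples. Returns a root
    label per point; points sharing a root form one segment."""
    uf = UnionFind(n_points)
    # Process cheapest edges first, merging those below the threshold.
    for i, j, dc, dn in sorted(edges,
                               key=lambda e: color_w * e[2] + normal_w * e[3]):
        if color_w * dc + normal_w * dn < thresh:
            uf.union(i, j)
    return [uf.find(i) for i in range(n_points)]
```

The key point the abstract makes is visible even in this toy: an edge that looks uniform in color can still be split by a large normal difference, and vice versa.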
International Conference on Robotics and Automation (ICRA) | 2008
Giorgio Grisetti; D. Lodi Rizzini; Cyrill Stachniss; Edwin Olson; Wolfram Burgard
In this paper, we address the problem of incrementally optimizing constraint networks for maximum likelihood map learning. Our approach allows a robot to efficiently compute configurations of the network with small errors while the robot moves through the environment. We apply a variant of stochastic gradient descent and use a tree-based parameterization of the nodes in the network. By integrating adaptive learning rates in the parameterization of the network, our algorithm can use previously computed solutions to determine the result of the next optimization run. Additionally, our approach updates only the parts of the network which are affected by the newly incorporated measurements and starts the optimization approach only if the new data reveals inconsistencies with the network constructed so far. These improvements yield an efficient solution for this class of online optimization problems. Our approach has been implemented and tested on simulated and on real data. We present comparisons to recently proposed online and offline methods that address the problem of optimizing constraint networks. Experiments illustrate that our approach converges faster to a network configuration with small errors than the previous approaches.
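The stochastic gradient descent core of such methods can be shown on a deliberately tiny problem: 1-D node positions with relative constraints, one randomly chosen constraint corrected per step under a decaying learning rate. This is a heavily simplified sketch; the tree-based parameterization, adaptive per-node rates, and incremental updates the abstract describes are all omitted:

```python
import random

# SGD on a toy 1-D constraint network: constraint (i, j, z) says
# x[j] - x[i] should equal z. Node 0 is held fixed as the anchor.

def sgd_optimize(x, constraints, iters=2000, seed=0):
    rng = random.Random(seed)
    for t in range(1, iters + 1):
        i, j, z = rng.choice(constraints)
        r = (x[j] - x[i]) - z           # signed residual of this edge
        step = r / (1.0 + 0.01 * t)     # decaying learning rate
        # Split the correction between the two endpoints.
        if i != 0:
            x[i] += 0.5 * step
        if j != 0:
            x[j] -= 0.5 * step
    return x
```

Each update pulls one constraint toward satisfaction; with consistent constraints and a decaying rate the network settles near the maximum-likelihood configuration, and randomizing the order keeps any single bad step from dominating.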
Journal of Field Robotics | 2012
Edwin Olson; Johannes H. Strom; Ryan D. Morton; Andrew Richardson; Pradeep Ranganathan; Robert Goeddel; Mihai Bulic; Jacob Crossman; Bob Marinier
Tasks like search-and-rescue and urban reconnaissance benefit from large numbers of robots working together, but high levels of autonomy are needed to reduce operator requirements to practical levels. Reducing the reliance of such systems on human operators presents a number of technical challenges, including automatic task allocation, global state and map estimation, robot perception, path planning, communications, and human-robot interfaces. This paper describes our 14-robot team, which won the MAGIC 2010 competition. It was designed to perform urban reconnaissance missions. In the paper, we describe a variety of autonomous systems that require minimal human effort to control a large number of autonomously exploring robots. Maintaining a consistent global map, which is essential for autonomous planning and for giving humans situational awareness, required the development of fast loop-closing, map optimization, and communications algorithms. Key to our approach was a decoupled centralized planning architecture that allowed individual robots to execute tasks myopically, but whose behavior was coordinated centrally. We will describe technical contributions throughout our system that played a significant role in its performance. We will also present results from our system both from the competition and from subsequent quantitative evaluations, pointing out areas in which the system performed well and where interesting research problems remain.
Robotics and Autonomous Systems | 2009
Edwin Olson
Place recognition is a fundamental perceptual problem at the heart of many basic robot operations, most notably mapping. Failures can result from ambiguous sensor readings and environments with similar appearances. In this paper, we describe a robust place recognition algorithm that fuses a number of uncertain local matches into a high-confidence global match. We describe the theoretical basis of the approach and present extensive experimental results from a variety of sensor modalities and environments.
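One simple way to picture fusing several uncertain local matches into a single global decision is log-odds accumulation; the sketch below is a generic illustration under an independence assumption, not the consistency-based algorithm this paper develops:

```python
import math

# Fuse several local matchers' probabilities that two places are the
# same into one belief by summing log-odds, assuming the matchers'
# evidence is conditionally independent.

def fuse_matches(probabilities, prior=0.5):
    """probabilities: each matcher's estimate in (0, 1) that the two
    places match. Returns the fused match probability."""
    logit = math.log(prior / (1 - prior))
    for p in probabilities:
        logit += math.log(p / (1 - p))
    return 1.0 / (1.0 + math.exp(-logit))
```

Even this toy shows the abstract's central effect: several individually weak matches (say, each at 0.7) compound into a far more confident global match, while a lone ambiguous match barely moves the prior.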