Gabe Sibley
University of Colorado Boulder
Publications
Featured research published by Gabe Sibley.
International Journal of Computer Vision | 2011
Christopher Mei; Gabe Sibley; Mark Cummins; Paul Newman; Ian D. Reid
Large scale exploration of the environment requires a constant time estimation engine. Bundle adjustment or pose relaxation do not fulfil these requirements as the number of parameters to solve grows with the size of the environment. We describe a relative simultaneous localisation and mapping system (RSLAM) for the constant-time estimation of structure and motion using a binocular stereo camera system as the sole sensor. Achieving robustness in the presence of difficult and changing lighting conditions and rapid motion requires careful engineering of the visual processing, and we describe a number of innovations which we show lead to high accuracy and robustness. In order to achieve real-time performance without placing severe limits on the size of the map that can be built, we use a topo-metric representation in terms of a sequence of relative locations. When combined with fast and reliable loop-closing, we mitigate the drift to obtain highly accurate global position estimates without any global minimisation. We discuss some of the issues that arise from using a relative representation, and evaluate our system on long sequences processed at a constant 30–45 Hz, obtaining precisions down to a few metres over distances of a few kilometres.
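To illustrate the relative topo-metric representation described above, the sketch below (an assumption of ours, not the authors' code) stores the map purely as relative SE(3) transforms between neighbouring local frames and renders a local metric view on demand by composing transforms outward from an active frame; the class and method names are hypothetical.

```python
# Minimal sketch (not the authors' code) of a relative topo-metric map:
# poses are never stored in a single global frame; each edge holds the
# relative SE(3) transform between neighbouring local frames, and a metric
# view is rendered on demand by composing transforms outward from an
# active frame.
import numpy as np
from collections import deque

class RelativeMap:
    def __init__(self):
        self.edges = {}   # frame_id -> list of (neighbour_id, 4x4 T_frame_neighbour)

    def add_edge(self, a, b, T_a_b):
        """Store a relative transform T_a_b taking points from frame b to frame a."""
        self.edges.setdefault(a, []).append((b, T_a_b))
        self.edges.setdefault(b, []).append((a, np.linalg.inv(T_a_b)))

    def local_view(self, root, max_depth=10):
        """Compose relative transforms breadth-first to obtain poses of nearby
        frames expressed in the root frame (a local, everywhere-Euclidean view)."""
        poses = {root: np.eye(4)}
        queue = deque([(root, 0)])
        while queue:
            frame, depth = queue.popleft()
            if depth == max_depth:
                continue
            for nbr, T in self.edges.get(frame, []):
                if nbr not in poses:
                    poses[nbr] = poses[frame] @ T
                    queue.append((nbr, depth + 1))
        return poses
```

Because poses are only ever composed over a bounded neighbourhood, the cost of rendering such a view is independent of the total map size, which is what makes constant-time operation possible.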
The International Journal of Robotics Research | 2010
Gabe Sibley; Christopher Mei; Ian D. Reid; Paul Newman
In this paper we describe a relative approach to simultaneous localization and mapping, based on the insight that a continuous relative representation can make the problem tractable at large scales. First, it is well known that bundle adjustment is the optimal non-linear least-squares formulation for this problem, in that its maximum-likelihood form matches the definition of the Cramér-Rao lower bound. Unfortunately, computing the maximum-likelihood solution is often prohibitively expensive: this is especially true during loop closures, which often necessitate adjusting all parameters in a loop. In this paper we note that it is precisely the choice of a single privileged coordinate frame that makes bundle adjustment costly, and that this expense can be avoided by adopting a completely relative approach. We derive a new relative bundle adjustment which, instead of optimizing in a single Euclidean space, works in a metric space defined by a manifold. Using an adaptive optimization strategy, we show experimentally that it is possible to solve for the full maximum-likelihood solution incrementally in constant time, even at loop closure. Our approach is, by definition, everywhere locally Euclidean, and we show that the local Euclidean estimate matches that of traditional bundle adjustment. Our system operates online in real time using stereo data, with fast appearance-based loop closure detection. We show results on over 850,000 images that indicate the accuracy and scalability of the approach, and process over 330 GB of image data into a relative map covering 142 km of Southern England. To demonstrate a baseline sufficiency for navigation, we show that it is possible to find shortest paths in the relative maps we build, in terms of both time and distance. Query images from the web of popular landmarks around London, such as the London Eye or Trafalgar Square, are matched to the relative map to provide route planning goals.
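As a rough illustration of the relative formulation (notation ours, not necessarily the paper's exact derivation), the objective below replaces a single privileged world frame with relative edge transforms: each landmark is parameterised in a base frame and mapped into an observing frame by composing the relative transforms along the connecting path, so only the parameters on that path enter each residual.

```latex
% Sketch of a relative bundle adjustment objective (illustrative notation).
% Landmark l_j lives in base frame b(j); predicting its measurement in frame i
% maps it through the chain of relative edge transforms on the path b(j) -> i.
\min_{\{T_{k,k+1}\},\,\{\mathbf{l}_j\}}
  \sum_{i,j}
  \rho\!\left(
    \Big\|
      \mathbf{z}_{ij}
      - \pi\!\Big(
          \textstyle\prod_{k \in \mathrm{path}(b(j) \rightarrow i)} T_{k,k+1}
          \;\mathbf{l}_j
        \Big)
    \Big\|_{\Sigma_{ij}^{-1}}^{2}
  \right)
```

Because a loop closure only introduces a new relative edge rather than moving every pose around the loop, the set of parameters that must be adjusted stays small, which is what permits the constant-time incremental solution described above.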
The International Journal of Robotics Research | 2009
Paul Newman; Gabe Sibley; Mike Smith; Mark Cummins; Alastair Harrison; Chris Mei; Ingmar Posner; Robbie Shade; Derik Schroeter; Liz Murphy; Winston Churchill; Dave Cole; Ian D. Reid
In this paper we describe a body of work aimed at extending the reach of mobile navigation and mapping. We describe how running topological and metric mapping and pose estimation processes concurrently, using vision and laser ranging, has produced a full six-degree-of-freedom outdoor navigation system. It is capable of producing intricate three-dimensional maps over many kilometers and in real time. We consider issues concerning the intrinsic quality of the built maps and describe our progress towards adding semantic labels to maps via scene de-construction and labeling. We show how our choices of representation, inference methods and use of both topological and metric techniques naturally allow us to fuse maps built from multiple sessions with no need for manual frame alignment or data association.
robotics science and systems | 2006
Gabe Sibley; Gaurav S. Sukhatme; Larry H. Matthies
This paper investigates the use of statistical linearization to improve iterative non-linear least squares estimators. In particular, we look at improving long range stereo by filtering feature tracks from sequences of stereo pairs. A novel filter called the Iterated Sigma Point Kalman Filter (ISPKF) is developed from first principles; this filter is shown to achieve superior performance in terms of efficiency and accuracy when compared to the Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), and Gauss-Newton filter. We also compare the ISPKF to the optimal Batch filter and to a Gauss-Newton Smoothing filter. For the long range stereo problem the ISPKF comes closest to matching the performance of the full batch MLE estimator. Further, the ISPKF is demonstrated on real data in the context of modeling environment structure from long range stereo data.
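The following is a minimal sketch of an iterated sigma-point measurement update in the spirit of the ISPKF, assuming a generic nonlinear measurement model h and Gaussian noise; the function names, fixed iteration count, and weighting scheme are our illustrative choices rather than the paper's.

```python
# Hedged sketch of an iterated sigma-point measurement update: sigma points give
# a statistical linearisation of h about the current iterate, and the IEKF-style
# update is repeated so the linearisation point converges toward the estimate.
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def iterated_sigma_point_update(x0, P0, z, h, R, iterations=5):
    x_i = x0.copy()
    for _ in range(iterations):
        # Statistical linearisation about the current iterate, prior covariance P0.
        X, w = sigma_points(x_i, P0)
        Z = np.array([h(x) for x in X])
        z_bar = w @ Z
        Pxz = (w[:, None] * (X - x_i)).T @ (Z - z_bar)
        Pzz = (w[:, None] * (Z - z_bar)).T @ (Z - z_bar) + R
        H = np.linalg.solve(P0, Pxz).T              # H ~ Pxz^T P0^{-1}
        K = Pxz @ np.linalg.inv(Pzz)
        # Iterated update: innovation evaluated at the iterate, referred back to x0.
        x_i = x0 + K @ (z - h(x_i) - H @ (x0 - x_i))
    P = P0 - K @ Pzz @ K.T
    return x_i, P
```

For long-range stereo, h would map a 3-D point to its observations in successive stereo pairs, which is where re-linearising around the updated estimate pays off relative to a single-shot EKF or UKF update.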
british machine vision conference | 2009
Christopher Mei; Gabe Sibley; Mark Cummins; Paul Newman; Ian D. Reid
Continuous, real-time mapping of an environment using a camera requires a constant-time estimation engine. This rules out optimal global solving such as bundle adjustment. In this article, we investigate the precision that can be achieved with only local estimation of motion and structure provided by a stereo pair. We introduce a simple but novel representation of the environment in terms of a sequence of relative locations. We demonstrate precise local mapping and easy navigation using the relative map, and importantly show that this can be done without requiring a global minimisation after loop closure. We discuss some of the issues that arise from using a relative representation, and evaluate our system on long sequences processed at a constant 30–45 Hz, obtaining precisions down to a few metres over distances of a few kilometres.
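As a small illustration of navigating without global coordinates (our sketch, not the authors' code), a route through the relative map is simply a shortest path in the graph of local frames, returned as the sequence of relative transforms to traverse; edge costs could be distance or traversal time.

```python
# Illustrative route planning over a relative map: no global coordinates are
# needed, only the graph of local frames and the relative transforms on its edges.
import heapq

def plan_route(edges, start, goal):
    """edges: frame_id -> list of (neighbour_id, cost, T_frame_neighbour).
    Returns the sequence of relative transforms to traverse from start to goal."""
    best_cost = {start: 0.0}
    best_path = {start: []}
    queue = [(0.0, start)]
    while queue:
        cost, frame = heapq.heappop(queue)
        if frame == goal:
            return best_path[frame]
        if cost > best_cost.get(frame, float("inf")):
            continue                              # stale queue entry
        for nbr, edge_cost, T in edges.get(frame, []):
            new_cost = cost + edge_cost
            if new_cost < best_cost.get(nbr, float("inf")):
                best_cost[nbr] = new_cost
                best_path[nbr] = best_path[frame] + [T]
                heapq.heappush(queue, (new_cost, nbr))
    return None
```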
international conference on robotics and automation | 2012
Paul Timothy Furgale; Timothy D. Barfoot; Gabe Sibley
Roboticists often formulate estimation problems in discrete time for the practical reason of keeping the state size tractable. However, the discrete-time approach does not scale well for use with high-rate sensors, such as inertial measurement units or sweeping laser imaging sensors. The difficulty lies in the fact that a pose variable is typically included for every time at which a measurement is acquired, rendering the dimension of the state impractically large for large numbers of measurements. This issue is exacerbated for the simultaneous localization and mapping (SLAM) problem, which further augments the state to include landmark variables. To address this tractability issue, we propose to move the full maximum likelihood estimation (MLE) problem into continuous time and use temporal basis functions to keep the state size manageable. We present a full probabilistic derivation of the continuous-time estimation problem, derive an estimator based on the assumption that the densities and processes involved are Gaussian, and show how coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient. Our derivation is presented in steps of increasingly specific assumptions, opening the door to the development of other novel continuous-time estimation algorithms through the application of different assumptions at any point. We use the SLAM problem as our motivation throughout the paper, although the approach is not specific to this application. Results from a self-calibration experiment involving a camera and a high-rate inertial measurement unit are provided to validate the approach.
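A toy example of the underlying idea, under our own simplifying assumptions (a 1-D trajectory, a Gaussian bump basis standing in for the paper's temporal basis functions, and plain linear least squares): thousands of high-rate measurements constrain only a handful of basis-function coefficients, so the state size stays fixed regardless of the measurement rate.

```python
# Continuous-time state with temporal basis functions: x(t) = Phi(t) @ c.
# Measurements at arbitrary times constrain the same compact coefficient
# vector c instead of adding one state variable per measurement time.
import numpy as np

def basis(ts, centres, width):
    """Temporal basis matrix Phi: one Gaussian bump per coefficient (a stand-in
    for the basis functions used in the paper)."""
    ts = np.asarray(ts)[:, None]
    return np.exp(-0.5 * ((ts - centres[None, :]) / width) ** 2)

# Ground-truth 1-D trajectory and high-rate measurements z_k = x(t_k) + noise.
rng = np.random.default_rng(0)
t_meas = np.sort(rng.uniform(0.0, 10.0, size=2000))       # 2000 high-rate measurements
z = np.sin(t_meas) + 0.1 * rng.standard_normal(t_meas.size)

# Only 25 coefficients are estimated, no matter how many measurements arrive.
centres = np.linspace(0.0, 10.0, 25)
Phi = basis(t_meas, centres, width=0.4)
c, *_ = np.linalg.lstsq(Phi, z, rcond=None)                # linear least squares in c

# The continuous-time estimate can be queried at any time of interest.
x_hat = basis([2.5, 7.3], centres, width=0.4) @ c
```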
international conference on robotics and automation | 2006
Anelia Angelova; Larry H. Matthies; Daniel M. Helmick; Gabe Sibley; Pietro Perona
In this paper we predict the amount of slip an exploration rover would experience using stereo imagery by learning from previous examples of traversing similar terrain. To do that, the information of terrain appearance and geometry regarding some location is correlated to the slip measured by the rover while this location is being traversed. This relationship is learned from previous experience, so slip can be predicted later at a distance from visual information only. The advantages of the approach are: 1) learning from examples allows the system to adapt to unknown terrains rather than using fixed heuristics or predefined rules; 2) the feedback about the observed slip is received from the vehicle's own sensors which can fully automate the process; 3) learning slip from previous experience can replace complex mechanical modeling of vehicle or terrain, which is time consuming and not necessarily feasible. Predicting slip is motivated by the need to assess the risk of getting trapped before entering a particular terrain. For example, a planning algorithm can utilize slip information by taking into consideration that a slippery terrain is costly or hazardous to traverse. A generic nonlinear regression framework is proposed in which the terrain type is determined from appearance and then a nonlinear model of slip is learned for a particular terrain type. In this paper we focus only on the latter problem and provide slip learning and prediction results for terrain types, such as soil, sand, gravel, and asphalt. The slip prediction error achieved is about 15%, which is comparable to the measurement errors for slip itself.
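The sketch below illustrates the second stage only, under assumptions of ours: for a known terrain type, slip is learned as a nonlinear function of a single geometry feature (slope angle) using simple kernel regression as a stand-in for the regression framework described above, and is then predicted for unseen slopes from vision-derived geometry alone.

```python
# Slip learning for one terrain type (illustrative): Nadaraya-Watson kernel
# regression predicts slip at a query slope as a weighted average of training
# slips, weighted by how close the training slopes are to the query.
import numpy as np

def fit_predict_slip(train_slope, train_slip, query_slope, bandwidth=3.0):
    w = np.exp(-0.5 * ((train_slope[None, :] - query_slope[:, None]) / bandwidth) ** 2)
    return (w @ train_slip) / w.sum(axis=1)

# Synthetic "sand" examples: measured slip grows nonlinearly with slope (degrees).
rng = np.random.default_rng(1)
slope = rng.uniform(0.0, 20.0, size=200)
slip = (slope / 20.0) ** 2 + 0.05 * rng.standard_normal(slope.size)

predicted = fit_predict_slip(slope, slip, np.array([5.0, 10.0, 15.0]))
```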
british machine vision conference | 2013
Steven Lovegrove; Alonso Patron-Perez; Gabe Sibley
This paper describes a general continuous-time framework for visual-inertial simultaneous localization and mapping and calibration. We show how to use a spline parameterization that closely matches the torque-minimal motion of the sensor. Compared to traditional discrete-time solutions, the continuous-time formulation is particularly useful for solving problems with high-frame-rate sensors and multiple unsynchronized devices. We demonstrate the applicability of the method for multi-sensor visual-inertial SLAM and calibration by accurately establishing the relative pose and internal parameters of multiple unsynchronized devices. We also show the advantages of the approach through evaluation and uniform treatment of both global and rolling shutter cameras within visual and visual-inertial SLAM systems.
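The fragment below is our own minimal illustration of why a continuous-time trajectory suits rolling-shutter cameras: each image row has its own exposure time, so projecting a landmark requires evaluating the trajectory at a row-dependent time, solved here with a short fixed-point iteration. The linear interpolation of position stands in for the full SE(3) spline used in the paper, and all names are hypothetical.

```python
# Rolling-shutter projection against a continuous-time trajectory (illustrative).
import numpy as np

def make_trajectory(times, positions):
    """Return a function t -> camera position, linearly interpolating keyframes."""
    return lambda t: np.array([np.interp(t, times, positions[:, d]) for d in range(3)])

def project_rolling_shutter(landmark, trajectory, t_frame, line_delay, focal=500.0):
    """Iteratively solve for the row in which the landmark is imaged: the row
    determines the exposure time, and the exposure time determines the pose."""
    row = 0.0
    for _ in range(5):                   # fixed-point iteration on the row index
        p = landmark - trajectory(t_frame + row * line_delay)
        u, v = focal * p[0] / p[2], focal * p[1] / p[2]
        row = v                          # row offset measured from the principal point
    return u, v

# e.g. a camera translating along x over one second:
traj = make_trajectory(np.array([0.0, 1.0]),
                       np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]))
u, v = project_rolling_shutter(np.array([2.0, 0.5, 5.0]), traj,
                               t_frame=0.2, line_delay=3e-5)
```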
intelligent robots and systems | 2010
Christopher Mei; Gabe Sibley; Paul Newman
This paper proposes a new topo-metric representation of the world based on co-visibility that simplifies data association and improves the performance of appearance-based recognition. We introduce the concept of dynamic bag-of-words, which is a novel form of query expansion based on finding cliques in the landmark co-visibility graph. The proposed approach avoids the (often arbitrary) discretisation of space from the robot's trajectory that is common to most image-based loop closure algorithms. Instead we show that reasoning on sets of co-visible landmarks leads to a simple model that outperforms pose-based or view-based approaches. Using real and simulated imagery, we demonstrate that dynamic bag-of-words query expansion can improve precision and recall for appearance-based localisation.
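A rough sketch of the co-visibility idea, with our own data structures and thresholds rather than the paper's clique-based formulation: landmarks seen together in a frame are linked, and a query's matched landmarks are expanded with strongly co-visible neighbours before scoring, avoiding any fixed discretisation of the trajectory into places.

```python
# Query expansion on a landmark co-visibility graph (illustrative).
from collections import defaultdict
from itertools import combinations

def build_covisibility(observations):
    """observations: list of sets of landmark ids seen together in one frame."""
    graph = defaultdict(set)
    for seen in observations:
        for a, b in combinations(seen, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def expand_query(matched, graph, min_support=2):
    """Add landmarks co-visible with at least `min_support` matched landmarks."""
    support = defaultdict(int)
    for lm in matched:
        for nbr in graph[lm]:
            support[nbr] += 1
    return set(matched) | {lm for lm, s in support.items() if s >= min_support}
```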
international conference on intelligent transportation systems | 2010
Ashley Napier; Gabe Sibley; Paul Newman
This paper is about online, constant-time pose estimation for road vehicles. We exploit both the state of the art in vision based SLAM and the wide availability of overhead imagery of road networks. We show that by formulating the pose estimation problem in a relative sense, we can estimate the vehicle pose in real-time and bound its absolute error by using overhead image priors. We demonstrate our technique on data gathered from a stereo pair on a vehicle traveling at 40 kph through urban streets. Crucially, our method has no dependence on infrastructure, needs no workspace modification, is not dependent on GPS reception, requires only a single stereo pair and runs on an everyday laptop.
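As a minimal illustration of the anchoring idea (our sketch; the transforms, class, and method names are hypothetical), relative visual odometry is composed in a local frame while occasional registrations against georeferenced overhead imagery reset that frame's absolute anchor, keeping global error bounded without GPS or infrastructure.

```python
# Relative odometry anchored by overhead-image registrations (illustrative).
# Transforms are 3x3 homogeneous 2-D matrices.
import numpy as np

class AnchoredPose:
    def __init__(self):
        self.T_world_anchor = np.eye(3)    # from the latest overhead registration
        self.T_anchor_vehicle = np.eye(3)  # accumulated relative motion since then

    def integrate_vo(self, T_prev_curr):
        """Compose the latest relative visual-odometry estimate."""
        self.T_anchor_vehicle = self.T_anchor_vehicle @ T_prev_curr

    def apply_overhead_fix(self, T_world_vehicle):
        """An overhead-image registration gives an absolute pose; reset the anchor."""
        self.T_world_anchor = T_world_vehicle
        self.T_anchor_vehicle = np.eye(3)

    @property
    def T_world_vehicle(self):
        return self.T_world_anchor @ self.T_anchor_vehicle
```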