Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Paul Timothy Furgale is active.

Publication


Featured research published by Paul Timothy Furgale.


The International Journal of Robotics Research | 2015

Keyframe-based visual-inertial odometry using nonlinear optimization

Stefan Leutenegger; Simon Lynen; Michael Bosse; Roland Siegwart; Paul Timothy Furgale

Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While historically the problem has been addressed with filtering, advancements in visual estimation suggest that nonlinear optimization offers superior accuracy, while still being tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms. The problem is kept tractable, and real-time operation is ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced in time by arbitrary intervals, while still being related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware that accurately synchronizes accelerometer and gyroscope measurements with imagery. A comparison of both a stereo and a monocular version of our algorithm with and without online extrinsics estimation is shown with respect to ground truth. Furthermore, we compare the performance to an implementation of a state-of-the-art stochastic cloning sliding-window filter. This competitive reference implementation performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior performance in terms of accuracy.
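
As a rough illustration of the cost function described above (the notation is ours, simplified, not quoted from the paper), the optimization combines weighted reprojection errors with inertial error terms over the keyframe window:

J(\mathbf{x}) = \sum_{i}\sum_{k}\sum_{j \in \mathcal{J}(i,k)} \mathbf{e}_{r}^{i,j,k\top} \mathbf{W}_{r}^{i,j,k} \mathbf{e}_{r}^{i,j,k} + \sum_{k} \mathbf{e}_{s}^{k\top} \mathbf{W}_{s}^{k} \mathbf{e}_{s}^{k}

Here \mathbf{e}_{r}^{i,j,k} is the reprojection error of landmark j observed in camera i at keyframe k, \mathbf{e}_{s}^{k} is the linearized inertial term relating consecutive keyframes, and the \mathbf{W} matrices are the corresponding information (inverse-covariance) weights; marginalizing old keyframes keeps the window, and hence the problem size, bounded.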


IEEE/RSJ International Conference on Intelligent Robots and Systems | 2013

Unified temporal and spatial calibration for multi-sensor systems

Paul Timothy Furgale; Joern Rehder; Roland Siegwart

In order to increase accuracy and robustness in state estimation for robotics, a growing number of applications rely on data from multiple complementary sensors. For the best performance in sensor fusion, these different sensors must be spatially and temporally registered with respect to each other. To this end, a number of approaches have been developed to estimate these system parameters in a two-stage process, first estimating the time offset and subsequently solving for the spatial transformation between sensors. In this work, we present a novel framework for jointly estimating the temporal offset between measurements of different sensors and their spatial displacements with respect to each other. The approach is enabled by continuous-time batch estimation and extends previous work by seamlessly incorporating time offsets within the rigorous theoretical framework of maximum-likelihood estimation. Experimental results for a camera to inertial measurement unit (IMU) calibration prove the ability of this framework to accurately estimate time offsets up to a fraction of the smallest measurement period.
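
A minimal sketch of the joint estimation idea, under strong simplifying assumptions (angular velocity only, rotation-only extrinsics, no biases; all names below are hypothetical, and the paper itself uses a full continuous-time maximum-likelihood formulation):

# Jointly estimate a time offset and an extrinsic rotation between a camera
# and a gyroscope by aligning angular velocities in a single least-squares
# problem, instead of solving for time and space in two separate stages.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, t_gyro, w_gyro, w_cam_spline):
    d = params[0]                             # time offset (s) between gyro and camera clocks
    R = Rotation.from_rotvec(params[1:4])     # extrinsic rotation, camera frame -> IMU frame
    w_cam = w_cam_spline(t_gyro + d)          # camera-derived angular velocity, time-shifted
    return (R.apply(w_cam) - w_gyro).ravel()  # stacked alignment errors

def calibrate(t_cam, w_cam, t_gyro, w_gyro):
    spline = CubicSpline(t_cam, w_cam, axis=0)   # continuous-time model of the camera signal
    x0 = np.zeros(4)                             # [time offset, rotation vector]
    sol = least_squares(residuals, x0, args=(t_gyro, w_gyro, spline))
    return sol.x[0], Rotation.from_rotvec(sol.x[1:4])

Because the offset and the rotation appear in the same residual, they are estimated jointly rather than in the two-stage fashion criticized above.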


IEEE Intelligent Vehicles Symposium | 2013

Toward automated driving in cities using close-to-market sensors: An overview of the V-Charge Project

Paul Timothy Furgale; Ulrich Schwesinger; Martin Rufli; Wojciech Waclaw Derendarz; Hugo Grimmett; Peter Mühlfellner; Stefan Wonneberger; Julian Timpner; Stephan Rottmann; Bo Li; Bastian Schmidt; Thien-Nghia Nguyen; Elena Cardarelli; Stefano Cattani; Stefan Brüning; Sven Horstmann; Martin Stellmacher; Holger Mielenz; Kevin Köser; Markus Beermann; Christian Häne; Lionel Heng; Gim Hee Lee; Friedrich Fraundorfer; Rene Iser; Rudolph Triebel; Ingmar Posner; Paul Newman; Lars C. Wolf; Marc Pollefeys

Future requirements for drastic reduction of CO2 production and energy consumption will lead to significant changes in the way we see mobility in the years to come. However, the automotive industry has identified significant barriers to the adoption of electric vehicles, including reduced driving range and greatly increased refueling times. Automated cars have the potential to reduce the environmental impact of driving, and increase the safety of motor vehicle travel. The current state-of-the-art in vehicle automation requires a suite of expensive sensors. While the cost of these sensors is decreasing, integrating them into electric cars will increase the price and represent another barrier to adoption. The V-Charge Project, funded by the European Commission, seeks to address these problems simultaneously by developing an electric automated car, outfitted with close-to-market sensors, which is able to automate valet parking and recharging for integration into a future transportation system. The final goal is the demonstration of a fully operational system including automated navigation and parking. This paper presents an overview of the V-Charge system, from the platform setup to the mapping, perception, and planning sub-systems.


IEEE International Conference on Robotics and Automation | 2012

Continuous-time batch estimation using temporal basis functions

Paul Timothy Furgale; Timothy D. Barfoot; Gabe Sibley

Roboticists often formulate estimation problems in discrete time for the practical reason of keeping the state size tractable. However, the discrete-time approach does not scale well for use with high-rate sensors, such as inertial measurement units or sweeping laser imaging sensors. The difficulty lies in the fact that a pose variable is typically included for every time at which a measurement is acquired, rendering the dimension of the state impractically large for large numbers of measurements. This issue is exacerbated for the simultaneous localization and mapping (SLAM) problem, which further augments the state to include landmark variables. To address this tractability issue, we propose to move the full maximum likelihood estimation (MLE) problem into continuous time and use temporal basis functions to keep the state size manageable. We present a full probabilistic derivation of the continuous-time estimation problem, derive an estimator based on the assumption that the densities and processes involved are Gaussian, and show how coefficients of a relatively small number of basis functions can form the state to be estimated, making the solution efficient. Our derivation is presented in steps of increasingly specific assumptions, opening the door to the development of other novel continuous-time estimation algorithms through the application of different assumptions at any point. We use the SLAM problem as our motivation throughout the paper, although the approach is not specific to this application. Results from a self-calibration experiment involving a camera and a high-rate inertial measurement unit are provided to validate the approach.
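
In the spirit of the formulation summarized above (our simplified notation), the trajectory is written as a weighted sum of known temporal basis functions, so that the variables to be estimated are the basis coefficients rather than one pose per measurement time:

\mathbf{x}(t) = \boldsymbol{\Phi}(t)\,\mathbf{c}, \qquad \mathbf{e}_k = \mathbf{y}_k - \mathbf{h}\big(\boldsymbol{\Phi}(t_k)\,\mathbf{c},\, \boldsymbol{\ell}\big)

Here \boldsymbol{\Phi}(t) stacks the basis functions (e.g., B-splines), \mathbf{c} is the coefficient vector, \boldsymbol{\ell} collects landmark parameters, and each measurement \mathbf{y}_k at time t_k contributes an error term \mathbf{e}_k; the Gauss-Newton iterations then operate on (\mathbf{c}, \boldsymbol{\ell}), whose dimension is set by the number of basis functions rather than by the sensor rate.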


IEEE Transactions on Robotics | 2014

Associating Uncertainty With Three-Dimensional Poses for Use in Estimation Problems

Timothy D. Barfoot; Paul Timothy Furgale

In this paper, we provide specific and practical approaches to associate uncertainty with 4×4 transformation matrices, which is a common representation for pose variables in 3-D space. We show constraint-sensitive means of perturbing transformation matrices using their associated exponential-map generators and demonstrate these tools on three simple-yet-important estimation problems: 1) propagating uncertainty through a compound pose change, 2) fusing multiple measurements of a pose (e.g., for use in pose-graph relaxation), and 3) propagating uncertainty on poses (and landmarks) through a nonlinear camera model. The contribution of the paper is the presentation of the theoretical tools, which can be applied in the analysis of many problems involving 3-D pose and point variables.
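
Concretely, and to first order (our notation), uncertainty is attached to a mean transformation through a perturbation in the Lie algebra, and compounding two uncertain poses propagates covariance through the adjoint:

\mathbf{T} = \exp\!\big(\boldsymbol{\xi}^{\wedge}\big)\,\bar{\mathbf{T}}, \quad \boldsymbol{\xi} \sim \mathcal{N}(\mathbf{0}, \boldsymbol{\Sigma}), \qquad \bar{\mathbf{T}} = \bar{\mathbf{T}}_1 \bar{\mathbf{T}}_2, \quad \boldsymbol{\Sigma} \approx \boldsymbol{\Sigma}_1 + \mathrm{Ad}(\bar{\mathbf{T}}_1)\, \boldsymbol{\Sigma}_2\, \mathrm{Ad}(\bar{\mathbf{T}}_1)^{\top}

where \boldsymbol{\xi} is a 6 × 1 perturbation, (\cdot)^{\wedge} maps it onto the exponential-map generators, and \mathrm{Ad}(\cdot) denotes the adjoint of a transformation; the paper develops these tools rigorously, including higher-order correction terms.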


International Conference on 3D Vision | 2014

Placeless Place-Recognition

Simon Lynen; Michael Bosse; Paul Timothy Furgale; Roland Siegwart

Place recognition is a core competency for any visual simultaneous localization and mapping system. Identifying previously visited places enables the creation of globally accurate maps, robust relocalization, and multi-user mapping. To match one place to another, most state-of-the-art approaches must decide a priori what constitutes a place, often in terms of how many consecutive views should overlap, or how many consecutive images should be considered together. Unfortunately, dependence on thresholds such as these limits generality across different types of scenes. In this paper, we present a placeless place-recognition algorithm using a novel vote-density estimation technique that avoids heuristically discretizing the space. Instead, our approach considers place recognition as a problem of continuous matching between image streams, automatically discovering regions of high vote density that represent overlapping trajectory segments. The resulting algorithm has a single free parameter, and all remaining thresholds are set automatically using well-studied statistical tests. We demonstrate the efficiency and accuracy of our methodology on three outdoor sequences: a comprehensive evaluation against ground truth from publicly available datasets shows that our approach outperforms several state-of-the-art algorithms for place recognition.
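
The idea can be caricatured in a few lines (illustrative only: the function names, the generic kernel density estimator, and the quantile threshold below are our stand-ins for the paper's dedicated vote-density estimator and statistical tests):

# Treat every feature match as a vote at a (query frame, database frame)
# location and look for regions where the vote density is unusually high,
# instead of deciding a priori how many frames make up a "place".
import numpy as np
from scipy.stats import gaussian_kde

def dense_match_regions(query_ids, db_ids, grid_step=5, density_quantile=0.99):
    votes = np.vstack([query_ids, db_ids]).astype(float)     # 2 x N vote locations
    kde = gaussian_kde(votes)                                 # smoothed vote density
    qs = np.arange(query_ids.min(), query_ids.max() + 1, grid_step)
    ds = np.arange(db_ids.min(), db_ids.max() + 1, grid_step)
    qq, dd = np.meshgrid(qs, ds, indexing="ij")
    density = kde(np.vstack([qq.ravel(), dd.ravel()])).reshape(qq.shape)
    threshold = np.quantile(density, density_quantile)        # stand-in for a proper statistical test
    mask = density > threshold
    return qq[mask], dd[mask]   # candidate (query, database) index pairs in overlapping segments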


IEEE International Conference on Robotics and Automation | 2012

Visual Teach and Repeat using appearance-based lidar

Colin McManus; Paul Timothy Furgale; Braden Stenning; Timothy D. Barfoot

Visual Teach and Repeat (VT&R) has proven to be an effective method to allow a vehicle to autonomously repeat any previously driven route without the need for a global positioning system. One of the major challenges for a method that relies on visual input to recognize previously visited places is lighting change, as this can make the appearance of a scene look drastically different. For this reason, passive sensors, such as cameras, are not ideal for outdoor environments with inconsistent/inadequate light. However, camera-based systems have been very successful for localization and mapping in outdoor, unstructured terrain, which can be largely attributed to the use of sparse, appearance-based computer vision techniques. Thus, in an effort to achieve lighting invariance and to continue to exploit the heritage of the appearance-based vision techniques traditionally used with cameras, this paper presents the first VT&R system that uses appearance-based techniques with laser scanners for motion estimation. The system has been field tested in a planetary analogue environment for an entire diurnal cycle, covering more than 11 km with an autonomy rate of 99.7% of the distance traveled.


The International Journal of Robotics Research | 2013

Gaussian Process Gauss-Newton for non-parametric simultaneous localization and mapping

Chi Hay Tong; Paul Timothy Furgale; Timothy D. Barfoot

In this paper, we present Gaussian Process Gauss–Newton (GPGN), an algorithm for non-parametric, continuous-time, nonlinear, batch state estimation. This work adapts the methods of Gaussian process (GP) regression to address the problem of batch simultaneous localization and mapping (SLAM) by using the Gauss–Newton optimization method. In particular, we formulate the estimation problem with a continuous-time state model, along with the more conventional discrete-time measurements. Two derivations are presented in this paper, reflecting both the weight-space and function-space approaches from the GP regression literature. Validation is conducted through simulations and a hardware experiment, which utilizes the well-understood problem of two-dimensional SLAM as an illustrative example. The performance is compared with the traditional discrete-time batch Gauss–Newton approach, and we also show that GPGN can be employed to estimate motion with only range/bearing measurements of landmarks (i.e. no odometry), even when there are not enough measurements to constrain the pose at a given timestep.
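
Loosely, in the framework described above (our simplified notation), the continuous-time state is given a Gaussian-process prior and the batch objective combines that prior with the discrete-time measurements, to which Gauss-Newton is then applied:

\mathbf{x}(t) \sim \mathcal{GP}\big(\boldsymbol{\mu}(t), \mathbf{K}(t, t')\big), \qquad J = \tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{\top}\mathbf{K}^{-1}(\mathbf{x} - \boldsymbol{\mu}) + \tfrac{1}{2}\sum_{k}\big(\mathbf{y}_k - \mathbf{h}(\mathbf{x}(t_k))\big)^{\top}\mathbf{R}_k^{-1}\big(\mathbf{y}_k - \mathbf{h}(\mathbf{x}(t_k))\big)

where \mathbf{K} is the kernel (prior covariance) evaluated at the estimation times, \mathbf{h}(\cdot) is the nonlinear measurement model, and \mathbf{R}_k the measurement covariance; roughly speaking, the weight-space and function-space derivations differ in how \mathbf{x}(t) is parameterized rather than in the form of this objective.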


IEEE Transactions on Aerospace and Electronic Systems | 2011

Sun Sensor Navigation for Planetary Rovers: Theory and Field Testing

Paul Timothy Furgale; John Enright; Timothy D. Barfoot

In this paper, we present an experimental study of sun sensing as a rover navigational aid. Algorithms are outlined to determine rover heading in an absolute reference frame. The sensor suite consists of a sun sensor, inclinometer, and clock (as well as ephemeris data). We describe a technique to determine ground-truth orientation in the field (without using a compass) and present a large number of experimental results (both in Toronto and on Devon Island) showing our ability to determine absolute rover heading to within a few degrees.
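
A stripped-down version of the heading computation (our illustration; the frame conventions and names are assumptions, not taken from the paper) reduces to comparing the sun's azimuth predicted from ephemeris with the azimuth measured in a levelled body frame:

# Absolute heading from a sun observation: the ephemeris (plus clock and
# position) predicts the sun direction in the local north-east-down frame,
# the sun sensor measures it in the rover frame, and the inclinometer lets
# us level that measurement so only the yaw (heading) remains unknown.
import numpy as np

def heading_from_sun(sun_ned, sun_body_levelled):
    az_world = np.arctan2(sun_ned[1], sun_ned[0])                     # sun azimuth, east over north
    az_body = np.arctan2(sun_body_levelled[1], sun_body_levelled[0])  # sun azimuth in levelled body frame
    return (az_world - az_body) % (2.0 * np.pi)                       # rover heading, radians east of north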


IEEE International Conference on Robotics and Automation | 2014

Long-term 3D map maintenance in dynamic environments

François Pomerleau; Philipp Andreas Krüsi; Francis Colas; Paul Timothy Furgale; Roland Siegwart

New applications of mobile robotics in dynamic urban areas require more than the single-session geometric maps that have dominated simultaneous localization and mapping (SLAM) research to date; maps must be updated as the environment changes and include a semantic layer (such as road network information) to aid motion planning in dynamic environments. We present an algorithm for long-term localization and mapping in real time using a three-dimensional (3D) laser scanner. The system infers the static or dynamic state of each 3D point in the environment based on repeated observations. The velocity of each dynamic point is estimated without requiring object models or explicit clustering of the points. At any time, the system is able to produce a most-likely representation of underlying static scene geometry. By storing the time history of velocities, we can infer the dominant motion patterns within the map. The result is an online mapping and localization system specifically designed to enable long-term autonomy within highly dynamic environments. We validate the approach using data collected around the campus of ETH Zurich over seven months and several kilometers of navigation. To the best of our knowledge, this is the first work to unify long-term map update with tracking of dynamic objects.
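
As a toy illustration of the per-point static/dynamic inference (a plain log-odds update; the class and parameter names are hypothetical, and the paper's model is richer, e.g. it also estimates per-point velocities):

# Maintain a static-vs-dynamic belief for every map point from repeated
# observations: re-observing a point where expected is evidence that it is
# static, while a laser ray passing through its location is evidence that
# it has moved.
import numpy as np

class PointStateFilter:
    def __init__(self, n_points, p_hit=0.7, p_miss=0.4):
        self.log_odds = np.zeros(n_points)              # 0 = undecided, > 0 leans static
        self.l_hit = np.log(p_hit / (1.0 - p_hit))      # positive increment for a re-observation
        self.l_miss = np.log(p_miss / (1.0 - p_miss))   # negative increment for a see-through

    def update(self, reobserved, seen_through):
        self.log_odds[reobserved] += self.l_hit
        self.log_odds[seen_through] += self.l_miss

    def static_points(self, threshold=0.0):
        return self.log_odds > threshold                # most-likely static scene geometry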

Collaboration


Dive into Paul Timothy Furgale's collaborations.

Top Co-Authors

Michael Bosse

Commonwealth Scientific and Industrial Research Organisation
