Publication


Featured research published by William P. Maddern.


international conference on robotics and automation | 2010

FAB-MAP + RatSLAM: Appearance-based SLAM for multiple times of day

Arren Glover; William P. Maddern; Michael Milford; Gordon Wyeth

Appearance-based mapping and localisation is especially challenging when separate processes of mapping and localisation occur at different times of day. The problem is exacerbated in the outdoors where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic local feature based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.
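
As an illustration of the fusion described above, the sketch below combines an appearance-match probability (of the kind FAB-MAP produces) with a pose-filter belief before accepting a loop closure. The function, the independence assumption and the threshold are hypothetical simplifications, not the authors' implementation.

```python
# Hypothetical sketch: combine an appearance-based match probability (as an
# appearance model like FAB-MAP would provide) with a pose-filter belief (as a
# pose/experience filter like RatSLAM maintains) before accepting a loop closure.
# Names and the independence assumption are illustrative, not the authors' API.

def accept_loop_closure(appearance_prob, pose_belief, threshold=0.9):
    """appearance_prob: P(same place | image features).
    pose_belief:        P(same place | filtered odometry).
    Both are treated as independent evidence for this sketch."""
    same_place = appearance_prob * pose_belief
    new_place = (1.0 - appearance_prob) * (1.0 - pose_belief)
    posterior = same_place / (same_place + new_place + 1e-12)
    return posterior > threshold, posterior

accepted, p = accept_loop_closure(appearance_prob=0.85, pose_belief=0.7)
```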


international conference on robotics and automation | 2012

OpenFABMAP: An open source toolbox for appearance-based loop closure detection

Arren Glover; William P. Maddern; Michael Warren; Stephanie Reid; Michael Milford; Gordon Wyeth

Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios.
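
The sketch below illustrates the bag-of-visual-words pipeline that OpenFABMAP makes configurable, i.e. codebook training and appearance matching, using generic OpenCV and scikit-learn tools. It is not the OpenFABMAP API, and the cosine similarity stands in for FAB-MAP's probabilistic observation model.

```python
# Minimal bag-of-visual-words place comparison, illustrating the codebook
# training and appearance matching that OpenFABMAP makes configurable.
# This is NOT the OpenFABMAP API; it uses generic OpenCV/scikit-learn tools.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def orb_descriptors(image_paths, n_features=500):
    orb = cv2.ORB_create(nfeatures=n_features)
    descs = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, d = orb.detectAndCompute(img, None)
        if d is not None:
            descs.append(d.astype(np.float32))
    return descs

def train_codebook(descriptor_list, k=200):
    # "Rapid codebook training": cluster all descriptors into k visual words.
    all_descs = np.vstack(descriptor_list)
    return KMeans(n_clusters=k, n_init=4).fit(all_descs)

def bow_histogram(descriptors, codebook):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float64)
    return hist / (hist.sum() + 1e-12)

def appearance_similarity(hist_a, hist_b):
    # Cosine similarity between word histograms; FAB-MAP instead evaluates a
    # probabilistic observation model, omitted here for brevity.
    return float(hist_a @ hist_b /
                 (np.linalg.norm(hist_a) * np.linalg.norm(hist_b) + 1e-12))
```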


The International Journal of Robotics Research | 2012

CAT-SLAM: probabilistic localisation and mapping using a continuous appearance-based trajectory

William P. Maddern; Michael Milford; Gordon Wyeth

This paper describes a new system, dubbed Continuous Appearance-based Trajectory Simultaneous Localisation and Mapping (CAT-SLAM), which augments sequential appearance-based place recognition with local metric pose filtering to improve the frequency and reliability of appearance-based loop closure. As in other approaches to appearance-based mapping, loop closure is performed without calculating global feature geometry or performing 3D map construction. Loop-closure filtering uses a probabilistic distribution of possible loop closures along the robot's previous trajectory, which is represented by a linked list of previously visited locations connected by odometric information. Sequential appearance-based place recognition and local metric pose filtering are evaluated simultaneously using a Rao–Blackwellised particle filter, which weights particles based on appearance matching over sequential frames and the similarity of robot motion along the trajectory. The particle filter explicitly models both the likelihood of revisiting previous locations and exploring new locations. A modified resampling scheme counters particle deprivation and allows loop-closure updates to be performed in constant time for a given environment. We compare the performance of CAT-SLAM with FAB-MAP (a state-of-the-art appearance-only SLAM algorithm) using multiple real-world datasets, demonstrating an increase in the number of correct loop closures detected by CAT-SLAM.
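
A heavily simplified sketch of the particle weighting and resampling idea follows; it ignores the Rao–Blackwellised structure of the actual filter, and all function names and parameters are illustrative.

```python
# Simplified sketch of the CAT-SLAM idea: particles live at indices along the
# previously traversed trajectory and are weighted by both appearance similarity
# and agreement with the robot's measured motion. This omits the paper's
# Rao-Blackwellised formulation; functions and parameters are illustrative.
import numpy as np

def update_particles(positions, weights, appearance_sim, measured_motion,
                     trajectory_motion, motion_sigma=0.5):
    """positions: int array of indices into the stored trajectory, one per particle.
    appearance_sim(i): similarity of the current image to stored location i.
    trajectory_motion(i): odometric displacement recorded at location i."""
    for k, i in enumerate(positions):
        motion_err = measured_motion - trajectory_motion(i)
        motion_lik = np.exp(-0.5 * (motion_err / motion_sigma) ** 2)
        weights[k] *= appearance_sim(i) * motion_lik
    weights /= weights.sum() + 1e-12

    # Resample when the effective sample size drops, countering particle deprivation.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = np.random.choice(len(weights), size=len(weights), p=weights)
        positions = positions[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return positions, weights
```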


The International Journal of Robotics Research | 2017

1 year, 1000 km: The Oxford RobotCar dataset

William P. Maddern; Geoffrey Pascoe; Chris Linegar; Paul Newman

We present a challenging new dataset for autonomous driving: the Oxford RobotCar Dataset. Over the period of May 2014 to December 2015 we traversed a route through central Oxford twice a week on average using the Oxford RobotCar platform, an autonomous Nissan LEAF. This resulted in over 1000 km of recorded driving with almost 20 million images collected from 6 cameras mounted to the vehicle, along with LIDAR, GPS and INS ground truth. Data was collected in all weather conditions, including heavy rain, night, direct sunlight and snow. Road and building works over the period of a year significantly changed sections of the route from the beginning to the end of data collection. By frequently traversing the same route over the period of a year we enable research investigating long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments. The full dataset is available for download at: http://robotcar-dataset.robots.ox.ac.uk
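
A minimal sketch of iterating over a traversal follows, assuming a hypothetical layout with one directory of timestamp-named PNG images per traversal; the dataset's actual layout and development tools are documented at the URL above.

```python
# Hypothetical sketch for iterating over a RobotCar-style traversal, assuming
# one directory per traversal containing timestamp-named PNG images. The actual
# dataset documents its own layout and tooling; consult
# http://robotcar-dataset.robots.ox.ac.uk before relying on these paths.
from pathlib import Path
import cv2

def iter_traversal_images(traversal_dir):
    """Yield (timestamp, image) pairs in time order."""
    for png in sorted(Path(traversal_dir).glob("*.png")):
        timestamp = int(png.stem)          # filenames assumed to be timestamps
        yield timestamp, cv2.imread(str(png))

# Example: count frames in two traversals of the same route (directory names
# are placeholders) as a first step towards long-term change analysis.
counts = {d: sum(1 for _ in iter_traversal_images(d)) for d in ["run_a", "run_b"]}
```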


international conference on robotics and automation | 2014

Shady dealings: Robust, long-term visual localisation using illumination invariance

Colin McManus; Winston Churchill; William P. Maddern; Alexander D. Stewart; Paul Newman

This paper is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem involves deciding where to look for correspondences in an image and the second is deciding what to look for. This latter point, which is the main focus of our paper, requires understanding how and why the appearance of visual features can change over time. In particular, such knowledge allows us to better deal with abrupt and challenging changes in lighting. We show how by instantiating a parallel image processing stream which operates on illumination-invariant images, we can substantially improve the performance of an outdoor visual navigation system. We will demonstrate, explain and analyse the effect of the RGB to illumination-invariant transformation and suggest that for little cost it becomes a viable tool for those concerned with having robots operate for long periods outdoors.
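
The one-dimensional illumination-invariant image used in this line of work is commonly formed from log channel ratios; a minimal sketch is below, with a placeholder value for the camera-dependent parameter alpha.

```python
# Sketch of the one-dimensional illumination-invariant transform:
#   I = 0.5 + log(G) - alpha*log(B) - (1 - alpha)*log(R),
# where alpha is fixed by the camera's peak spectral responses via
#   1/lambda_G = alpha/lambda_B + (1 - alpha)/lambda_R.
# The alpha below is a placeholder; the correct value must be derived for the
# specific camera used.
import numpy as np

def illumination_invariant(bgr, alpha=0.48):
    """bgr: HxWx3 uint8 image in OpenCV channel order. Returns an HxW float image."""
    img = bgr.astype(np.float64) / 255.0 + 1e-6      # avoid log(0)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
```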


international conference on robotics and automation | 2012

Lost in translation (and rotation): Rapid extrinsic calibration for 2D and 3D LIDARs

William P. Maddern; Alastair Harrison; Paul Newman

This paper describes a novel method for determining the extrinsic calibration parameters between 2D and 3D LIDAR sensors with respect to a vehicle base frame. To recover the calibration parameters we attempt to optimize the quality of a 3D point cloud produced by the vehicle as it traverses an unknown, unmodified environment. The point cloud quality metric is derived from Rényi Quadratic Entropy and quantifies the compactness of the point distribution using only a single tuning parameter. We also present a fast approximate method to reduce the computational requirements of the entropy evaluation, allowing unsupervised calibration in vast environments with millions of points. The algorithm is analyzed using real world data gathered in many locations, showing robust calibration performance and substantial speed improvements from the approximations.
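
A naive sketch of a Rényi Quadratic Entropy compactness cost is shown below; the sigma value is a placeholder, and the paper's fast approximation for clouds with millions of points is not reproduced.

```python
# Sketch of a Renyi Quadratic Entropy cost for point-cloud compactness, of the
# form H2 = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2*I) ).
# This is the naive O(N^2) evaluation; sigma is a placeholder tuning parameter.
import numpy as np
from scipy.spatial.distance import pdist

def renyi_quadratic_entropy(points, sigma=0.1):
    """points: Nx3 array. Lower entropy means a more compact (better calibrated) cloud."""
    n, d = points.shape
    sq_dists = pdist(points, metric="sqeuclidean")   # pairwise ||xi - xj||^2
    norm = (4.0 * np.pi * sigma ** 2) ** (d / 2.0)   # Gaussian normaliser for cov 2*sigma^2*I
    # Off-diagonal kernel terms (each pair counted twice) plus the N diagonal terms.
    kernel_sum = 2.0 * np.sum(np.exp(-sq_dists / (4.0 * sigma ** 2))) / norm + n / norm
    return -np.log(kernel_sum / n ** 2)
```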


international conference on robotics and automation | 2015

Leveraging experience for large-scale LIDAR localisation in changing cities

William P. Maddern; Geoffrey Pascoe; Paul Newman

Recent successful approaches to autonomous vehicle localisation and navigation typically involve 3D LIDAR scanners and a static, curated 3D map, both of which are expensive to acquire and maintain. In this paper we propose an experience-based approach to matching a local 3D swathe built using a push-broom 2D LIDAR to a number of prior 3D maps, each of which has been collected during normal driving in different conditions. Local swathes are converted to a combined 2D height and reflectance representation, and we exploit the GPU rendering pipeline to densely sample the localisation cost function to provide robustness and a wide basin of convergence. Prior maps are incrementally built into an experience-based framework from multiple traversals of the same environment, capturing changes in environment structure and appearance over time. The LIDAR localisation solutions from each prior map are fused with vehicle odometry in a probabilistic framework to provide a single pose solution suitable for automated driving. Using this framework we demonstrate real-time centimetre-level localisation using LIDAR data collected in a dynamic city environment over a period of a year.
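
As a stand-in for the probabilistic fusion step, the sketch below combines an odometry-predicted pose with a single LIDAR localisation solution by precision weighting; it is an illustrative simplification, not the paper's formulation.

```python
# Illustrative sketch (not the paper's formulation): fuse a LIDAR-to-prior-map
# localisation estimate with an odometry prediction by precision weighting,
# the basic operation behind combining per-experience solutions into one pose.
import numpy as np

def fuse_gaussian(mean_a, cov_a, mean_b, cov_b):
    """Information-form fusion of two Gaussian pose estimates (x, y, heading);
    heading is treated as a small unwrapped angle for this sketch."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)
    mean = cov @ (info_a @ mean_a + info_b @ mean_b)
    return mean, cov

# Example: odometry prediction fused with one LIDAR localisation solution.
odom_pose, odom_cov = np.array([10.0, 2.0, 0.05]), np.diag([0.5, 0.5, 0.01])
lidar_pose, lidar_cov = np.array([10.2, 1.9, 0.04]), np.diag([0.05, 0.05, 0.002])
fused_pose, fused_cov = fuse_gaussian(odom_pose, odom_cov, lidar_pose, lidar_cov)
```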


international conference on robotics and automation | 2015

FARLAP: Fast robust localisation using appearance priors

Geoffrey Pascoe; William P. Maddern; Alexander D. Stewart; Paul Newman

This paper is concerned with large-scale localisation at city scales with monocular cameras. Our primary motivation lies with the development of autonomous road vehicles - an application domain in which low-cost sensing is particularly important. Here we present a method for localising against a textured 3-dimensional prior mesh using a monocular camera. We first present a system for generating and texturing the prior using a LIDAR scanner and camera. We then describe how we can localise against that prior with a single camera, using an information-theoretic measure of image similarity. This process requires dealing with the distortions induced by a wide-angle camera. We present and justify an interesting approach to this issue in which we distort the prior map into the image rather than vice-versa. Finally we explain how the general purpose computation functionality of a modern GPU is particularly apt for our task, allowing us to run the system in real time. We present results showing centimetre-level localisation accuracy through a city over six kilometres.
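
One common information-theoretic image similarity is the normalised information distance computed from a joint intensity histogram; a minimal sketch is below, as an illustration of the kind of measure the paper refers to rather than its exact implementation.

```python
# Sketch of a normalised information distance (NID) between two greyscale
# images, computed from a joint intensity histogram. Lower NID means the
# rendered prior and the live camera image agree better.
import numpy as np

def normalised_information_distance(img_a, img_b, bins=32):
    """img_a, img_b: same-size greyscale uint8 images. Returns NID in [0, 1]."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log(pxy[nz]))            # joint entropy
    h_x = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))
    mutual_info = h_x + h_y - h_xy
    return (h_xy - mutual_info) / h_xy
```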


intelligent vehicles symposium | 2014

LAPS-II: 6-DoF day and night visual localisation with prior 3D structure for autonomous road vehicles

William P. Maddern; Alexander D. Stewart; Paul Newman

Robust and reliable visual localisation at any time of day is an essential component towards low-cost autonomy for road vehicles. We present a method to perform online 6-DoF visual localisation across a wide range of outdoor illumination conditions throughout the day and night using a 3D scene prior collected by a survey vehicle. We propose the use of a one-dimensional illumination invariant colour space which stems from modelling the spectral properties of the camera and scene illumination in conjunction. We combine our previous work on Localisation with Appearance of Prior Structure (LAPS) with this illumination invariant colour space to demonstrate a marked improvement in our ability to localise throughout the day compared to using a conventional RGB colour space. Our ultimate goal is robust and reliable any-time localisation - an attractive proposition for low-cost autonomy for road vehicles. Accordingly, we demonstrate our technique using 32km of data collected over a full 24-hour period from a road vehicle.
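
The sketch below shows the basic geometric step such a localiser relies on: projecting prior 3D structure into a candidate camera pose with a pinhole model, after which appearance agreement can be evaluated at the projected pixels (for example in the illumination-invariant space sketched earlier). All names and parameters are illustrative.

```python
# Minimal pinhole projection of prior 3D points into a candidate camera pose.
# A LAPS-style localiser would evaluate appearance agreement at the projected
# pixels and optimise this over the 6-DoF pose; that loop is omitted here.
import numpy as np

def project_points(points_world, R, t, K):
    """points_world: Nx3 array; R, t: world-to-camera rotation and translation;
    K: 3x3 camera intrinsics. Returns pixel coordinates and a visibility mask."""
    cam = points_world @ R.T + t                 # transform into the camera frame
    in_front = cam[:, 2] > 0.1                   # keep points ahead of the camera
    cam = cam[in_front]
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective division
    return uv, in_front
```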


intelligent vehicles symposium | 2015

Exploiting 3D semantic scene priors for online traffic light interpretation

Dan Barnes; William P. Maddern; Ingmar Posner

In this paper we present a probabilistic framework for increasing online object detection performance when given a semantic 3D scene prior, which we apply to the task of traffic light detection for autonomous vehicles. Previous approaches to traffic light detection on autonomous vehicles have involved either precise knowledge of the relative 3D positions of the vehicle and the traffic light (requiring accurate and expensive mapping and localisation systems), or a classifier-based approach that searches for traffic lights in images (increasing the chance of false detections by searching all possible locations for traffic lights). We combine both approaches by explicitly incorporating both prior map and localisation uncertainty into a classifier-based object detection framework, generating a scale-space search region that only evaluates parts of the image likely to contain traffic lights, and weighting object detection scores by both the classifier score and the 3D occurrence prior distribution. We present results comparing a range of low- and high-cost localisation systems using over 30 km of data collected on an autonomous vehicle platform, demonstrating up to a 40% improvement in detection precision over no prior information and 15% improvement on unweighted detection scores. We demonstrate a 10x reduction in computation time compared to a naïve whole-image classification approach by considering only locations and scales in the image within a confidence bound of the predicted traffic light location. In addition to improvements in detection accuracy, our approach reduces computation time and enables the use of lower cost localisation sensors for reliable and cost-effective object detection.
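
The sketch below illustrates the prior-driven search-region idea: project the mapped traffic light into the image for the current pose estimate, grow the window with the localisation uncertainty, and weight detector scores by a spatial prior. The scaling constants and the scalar treatment of uncertainty are hypothetical simplifications of the paper's use of the full covariance.

```python
# Illustrative sketch of a prior-driven search region and score weighting for
# traffic light detection. Constants and the scalar uncertainty proxy are
# hypothetical; the paper propagates the full localisation covariance.
import numpy as np

def search_region(light_world, R, t, K, pose_cov_trace, base_margin=20.0):
    """Project the mapped 3D light into the image and grow the window with
    a crude scalar proxy for localisation uncertainty."""
    cam = R @ light_world + t
    u, v = (K @ cam)[:2] / cam[2]
    margin = base_margin + 50.0 * np.sqrt(pose_cov_trace)
    return (u - margin, v - margin, u + margin, v + margin), (u, v), margin

def weighted_score(classifier_score, pixel_uv, projected_uv, margin):
    # Gaussian occurrence prior centred on the projected light position.
    d2 = np.sum((np.asarray(pixel_uv) - np.asarray(projected_uv)) ** 2)
    prior = np.exp(-0.5 * d2 / (margin / 2.0) ** 2)
    return classifier_score * prior
```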

Collaboration


Top co-authors of William P. Maddern:

Gordon Wyeth (Queensland University of Technology)

Michael Milford (Queensland University of Technology)

Arren Glover (Istituto Italiano di Tecnologia)