Colin McManus
University of Oxford
Publication
Featured research published by Colin McManus.
International Conference on Robotics and Automation | 2014
Colin McManus; Winston Churchill; William P. Maddern; Alexander D. Stewart; Paul Newman
This paper is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem involves deciding where to look for correspondences in an image and the second is deciding what to look for. This latter point, which is the main focus of our paper, requires understanding how and why the appearance of visual features can change over time. In particular, such knowledge allows us to better deal with abrupt and challenging changes in lighting. We show how by instantiating a parallel image processing stream which operates on illumination-invariant images, we can substantially improve the performance of an outdoor visual navigation system. We will demonstrate, explain and analyse the effect of the RGB to illumination-invariant transformation and suggest that for little cost it becomes a viable tool for those concerned with having robots operate for long periods outdoors.
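The abstract does not reproduce the transform itself; as a rough sketch (assuming the log-chromaticity form used in related work by the same group, with a camera-dependent blending parameter), the mapping from an RGB image to a single illumination-invariant channel might look like the following, where the value of alpha is only a placeholder:

```python
import numpy as np

def illumination_invariant_image(rgb, alpha=0.48):
    """Map an RGB image to a one-channel illumination-invariant image.

    Sketch only: alpha depends on the camera's spectral response, and the
    value used here is an illustrative placeholder, not the paper's setting.
    """
    rgb = rgb.astype(np.float64) / 255.0 + 1e-6   # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
```

The resulting single-channel image can then be fed to the same sparse feature pipeline as the normal greyscale stream, which is the parallel-stream idea described above.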
Robotics: Science and Systems | 2014
Colin McManus; Ben Upcroft; Paul Newman
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach because matching under dramatic appearance changes is brittle and difficult. Point feature detectors are fixed and rigid procedures which pass over an image examining small, low-level structure such as corners or blobs. They apply the same criteria to all images of all places. This paper takes a contrary view and asks what is possible if instead we learn a bespoke detector for every place. Our localisation task then turns into curating a large bank of spatially indexed detectors, and we show that this yields vastly superior performance in terms of robustness in exchange for a reduced but tolerable metric precision. We present an unsupervised system that produces broad-region detectors for distinctive visual elements, called scene signatures, which can be associated across almost all appearance changes. We show, using 21 km of data collected over a period of 3 months, that our system is capable of producing metric localisation estimates from night-to-day or summer-to-winter conditions.
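As a hypothetical illustration of the "bank of spatially indexed detectors" idea (the detector choice and all names below are stand-ins, not the paper's system), localisation reduces to evaluating only the detectors stored for the places the robot expects to be near:

```python
import numpy as np

# Hypothetical sketch: each place along the route keeps its own bank of
# learned broad-region detectors, and at run time only the detectors indexed
# near the predicted place are evaluated against the live image.

def score_detector(live_image, template):
    """Stand-in detector response: best normalised cross-correlation score."""
    t = (template - template.mean()) / (template.std() + 1e-9)
    th, tw = t.shape
    best = -np.inf
    for y in range(live_image.shape[0] - th + 1):
        for x in range(live_image.shape[1] - tw + 1):
            patch = live_image[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            best = max(best, float((p * t).mean()))
    return best

def localise(live_image, detector_bank, predicted_place, window=2):
    """Return the place whose spatially indexed detectors respond most strongly."""
    candidates = [p for p in range(predicted_place - window, predicted_place + window + 1)
                  if p in detector_bank]
    scores = {p: np.mean([score_detector(live_image, d) for d in detector_bank[p]])
              for p in candidates}
    return max(scores, key=scores.get)
```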
International Conference on Robotics and Automation | 2012
Colin McManus; Paul Timothy Furgale; Braden Stenning; Timothy D. Barfoot
Visual Teach and Repeat (VT&R) has proven to be an effective method to allow a vehicle to autonomously repeat any previously driven route without the need for a global positioning system. One of the major challenges for a method that relies on visual input to recognize previously visited places is lighting change, as this can make the appearance of a scene look drastically different. For this reason, passive sensors, such as cameras, are not ideal for outdoor environments with inconsistent/inadequate light. However, camera-based systems have been very successful for localization and mapping in outdoor, unstructured terrain, which can be largely attributed to the use of sparse, appearance-based computer vision techniques. Thus, in an effort to achieve lighting invariance and to continue to exploit the heritage of the appearance-based vision techniques traditionally used with cameras, this paper presents the first VT&R system that uses appearance-based techniques with laser scanners for motion estimation. The system has been field tested in a planetary analogue environment for an entire diurnal cycle, covering more than 11 km with an autonomy rate of 99.7% of the distance traveled.
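For readers unfamiliar with the VT&R paradigm, a minimal sketch of the bookkeeping is shown below (a generic teach-and-repeat structure, not the laser-specific pipeline of this paper; the keyframe fields and `match_fn` are assumptions): the taught route is a chain of locally defined keyframes, and repeating only ever requires localising against the nearest one, so no global frame is needed.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

# Generic teach-and-repeat bookkeeping, sketched for illustration only.

@dataclass
class Keyframe:
    distance_along_route: float   # metres from the start of the taught path
    descriptors: np.ndarray       # appearance features stored at teach time

@dataclass
class TaughtRoute:
    keyframes: List[Keyframe] = field(default_factory=list)

    def nearest(self, estimated_distance: float) -> Keyframe:
        return min(self.keyframes,
                   key=lambda k: abs(k.distance_along_route - estimated_distance))

def repeat_step(route: TaughtRoute, estimated_distance, live_descriptors, match_fn):
    """Localise the live frame against the nearest taught keyframe."""
    kf = route.nearest(estimated_distance)
    # match_fn is assumed to return a relative pose (lateral/heading error)
    # from feature correspondences; its implementation is left abstract here.
    return kf, match_fn(kf.descriptors, live_descriptors)
```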
International Conference on Robotics and Automation | 2013
Colin McManus; Winston Churchill; Ashley Napier; Ben Davis; Paul Newman
This paper is concerned with the problem of egomotion estimation in highly dynamic, heavily cluttered urban environments over long periods of time. This is a challenging problem for vision-based systems because extreme scene movement caused by dynamic objects (e.g., enormous buses) can result in erroneous motion estimates. We describe two methods that combine 3D scene priors with vision sensors to generate background-likelihood images, which act as probability masks for objects that are not part of the scene prior. This results in a system that is able to cope with extreme scene motion, even when most of the image is obscured. We present results on real data collected in central London during rush hour and demonstrate the benefits of our techniques on a core navigation system - visual odometry.
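One way to picture the background-likelihood idea (a hedged sketch, not the paper's exact formulation): render the 3D prior into the predicted camera pose to get an expected depth image, treat live pixels that are markedly closer than the prior as probable occluders, and use the resulting mask to gate the features fed to visual odometry.

```python
import numpy as np

# Hedged sketch: the Gaussian form and the gating threshold are illustrative
# choices, not the paper's exact model.

def background_likelihood(prior_depth, live_depth, sigma=1.0):
    """Per-pixel probability that a pixel belongs to the static scene prior."""
    # Positive residual = live surface in front of the prior (possible occluder).
    residual = np.clip(prior_depth - live_depth, a_min=0.0, a_max=None)
    return np.exp(-0.5 * (residual / sigma) ** 2)

def gate_features(features_uv, likelihood, threshold=0.5):
    """Keep only features that land on likely-background pixels."""
    u = features_uv[:, 0].astype(int)
    v = features_uv[:, 1].astype(int)
    return features_uv[likelihood[v, u] > threshold]
```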
Journal of Field Robotics | 2013
Braden Stenning; Colin McManus; Timothy D. Barfoot
Growing a network of reusable paths is a novel approach to navigation that allows a mobile robot to autonomously seek distant goals in unmapped, GPS-denied environments, which may make it particularly well-suited to rovers used for planetary exploration. A network of reusable paths is an extension to visual-teach-and-repeat systems; instead of a simple chain of poses, there is an arbitrary network. This allows the robot to return to any pose it has previously visited, and it lets a robot plan to reuse previous paths. This paradigm results in closer goal acquisition (through reduced localization error) and a more robust approach to exploration with a mobile robot. It also allows a rover to return a sample to an ascent vehicle with a single command. We show that our network-of-reusable-paths approach is a physical embodiment of the popular rapidly exploring random tree (RRT) planner. Simulation results are presented along with the results from two different robotic test systems. These test systems drove over 14 km in planetary analog environments.
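A minimal sketch of the RRT analogy (a hypothetical data structure, not the authors' code): the network stores previously visited poses and the reusable paths between them, and reaching a new goal means reusing the network up to the nearest existing pose and growing a single new branch from there, mirroring RRT's extend step.

```python
import math

# Hypothetical network-of-reusable-paths structure.

class PathNetwork:
    def __init__(self, root_pose):
        self.poses = [root_pose]   # (x, y) poses the robot has actually visited
        self.edges = []            # (parent_index, child_index) reusable paths

    def nearest(self, goal):
        return min(range(len(self.poses)),
                   key=lambda i: math.dist(self.poses[i], goal))

    def extend_towards(self, goal):
        """Grow a single new branch towards the goal from the nearest pose."""
        parent = self.nearest(goal)
        self.poses.append(goal)    # in practice: drive there, teaching a new path
        self.edges.append((parent, len(self.poses) - 1))
        return parent
```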
Robotics and Autonomous Systems | 2013
Colin McManus; Paul Timothy Furgale; Timothy D. Barfoot
In an effort to facilitate lighting-invariant exploration, this paper presents an appearance-based approach using 3D scanning laser-rangefinders for two core visual navigation techniques: visual odometry (VO) and visual teach and repeat (VT&R). The key to our method is to convert raw laser intensity data into greyscale camera-like images, in order to apply sparse, appearance-based techniques traditionally used with camera imagery. The novel concept of an image stack is introduced, which is an array of azimuth, elevation, range, and intensity images that are used to generate keypoint measurements and measurement uncertainties. Using this technique, we present the following four experiments. In the first experiment, we explore the stability of a representative keypoint detection/description algorithm on camera and laser intensity images collected over a 24 h period outside. In the second and third experiments, we validate our VO algorithm using real data collected outdoors with two different 3D scanning laser-rangefinders. Lastly, our fourth experiment presents promising preliminary VT&R localization results, where the teaching phase was done during the day and the repeating phase was done at night. These experiments show that it is possible to overcome the lighting sensitivity encountered with cameras, yet continue to exploit the heritage of the appearance-based visual odometry pipeline.
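A rough sketch of the image-stack construction under simple assumptions (uniform azimuth/elevation binning, last return wins per cell; the real pipeline also propagates per-keypoint measurement uncertainties, which are omitted here):

```python
import numpy as np

# Illustrative image-stack construction from a 3D laser scan.

def build_image_stack(points_xyz, intensity, height=64, width=1024):
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    rng = np.sqrt(x**2 + y**2 + z**2)
    az = np.arctan2(y, x)                          # azimuth in [-pi, pi)
    el = np.arcsin(z / np.maximum(rng, 1e-9))      # elevation angle

    cols = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    rows = ((el - el.min()) / (el.max() - el.min() + 1e-9) * (height - 1)).astype(int)

    stack = {name: np.zeros((height, width))
             for name in ("azimuth", "elevation", "range", "intensity")}
    stack["azimuth"][rows, cols] = az
    stack["elevation"][rows, cols] = el
    stack["range"][rows, cols] = rng
    stack["intensity"][rows, cols] = intensity     # fed to the keypoint detector
    return stack
```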
International Conference on Robotics and Automation | 2014
Ben Upcroft; Colin McManus; Winston Churchill; William P. Maddern; Paul Newman
In this paper we propose the hybrid use of illuminant invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows thereby invoked) is a non-negotiable requirement for long term autonomy using vision. One aspect of this is the ability to reliably classify scene components in the presence of marked and often sudden changes in lighting. This is the focus of this paper. Posed with the task of classifying all parts in a scene from a full colour image, we propose that lighting invariant transforms can reduce the variability of the scene, resulting in a more reliable classification. We leverage the ideas of “data transfer” for classification, beginning with full colour images for obtaining candidate scene-level matches using global image descriptors. This is commonly followed by superpixel-level matching with local features. However, we show that if the RGB images are subjected to an illuminant invariant transform before computing the superpixel-level features, classification is significantly more robust to scene illumination effects. The approach is evaluated using three datasets: the first is our own dataset and the second is the KITTI dataset, with manually generated ground truth used for quantitative analysis. We qualitatively evaluate the method on a third custom dataset over a 750 m trajectory.
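The pipeline described above could be outlined as follows; the descriptors, the database layout, and the superpixel segmentation callable are hypothetical placeholders, not the paper's implementation. Only the ordering reflects the text: scene-level retrieval on the full-colour image first, then superpixel matching on the illuminant-invariant image.

```python
import numpy as np

# Hedged, simplified outline of the data-transfer classification pipeline.

def global_descriptor(image):
    """Placeholder global descriptor: a coarse, normalised intensity histogram."""
    hist, _ = np.histogram(image, bins=32, range=(0.0, 1.0))
    return hist / (hist.sum() + 1e-9)

def classify_superpixels(rgb, invariant, database, superpixel_features, k=3):
    """database: list of dicts with 'global', 'features' (MxD), 'labels' (M,)."""
    # 1) Scene-level retrieval using the full-colour image.
    g = global_descriptor(rgb)
    nearest = sorted(database, key=lambda d: np.linalg.norm(d["global"] - g))[:k]

    # 2) Superpixel-level label transfer using invariant-image features.
    labels = []
    for f in superpixel_features(invariant):          # one feature row per superpixel
        votes = [d["labels"][np.argmin(np.linalg.norm(d["features"] - f, axis=1))]
                 for d in nearest]
        labels.append(max(set(votes), key=votes.count))  # majority vote
    return labels
```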
Canadian Conference on Computer and Robot Vision | 2012
Timothy D. Barfoot; Braden Stenning; Paul Timothy Furgale; Colin McManus
Visual-teach-and-repeat (VT&R) systems have proven extremely useful for practical robot autonomy where the global positioning system is either unavailable or unreliable; examples include tramming for underground mining using a planar laser scanner as well as a return-to-lander function for planetary exploration using a stereo- or laser-based camera. By embedding local appearance/metric information along an arbitrarily long path, it becomes possible to re-drive the path without the need for a single privileged coordinate frame and using only modest computational resources. For a certain class of long-term autonomy problems (e.g., repeatable long-range driving), VT&R appears to offer a simple yet scalable solution. Beyond single paths, we envision that networks of reusable paths could be established and shared from one robot to another to enable practical tasks such as surveillance, delivery (e.g., mail, hospitals, factories, warehouses), worksite operations (e.g., construction, mining), and autonomous roadways. However, for lifelong operations on reusable paths, robustness to a variety of environmental changes, both transient and permanent, is required. In this paper, we relate our experiences and lessons learned with the three above-mentioned implementations of VT&R systems. Based on this, we enumerate both the benefits and challenges of reusable paths that we see moving forwards. We discuss one such challenge, lighting-invariance, in detail and present our progress in overcoming it.
Robotics: Science and Systems | 2011
Colin McManus; Timothy D. Barfoot
Pose estimation is a critical skill in mobile robotics and is often accomplished using onboard sensors and a Kalman filter estimation technique. For systems to run online, computational efficiency of the filter design is crucial, especially when faced with limited computing resources. In this paper, we present a novel approach to serially process high-dimensional measurements in the Sigma-Point Kalman Filter (SPKF), in order to achieve a low computational cost that is linear in the measurement dimension. Although the concept of serially processing measurements has been around for quite some time in the context of the Extended Kalman Filter (EKF), few have considered this approach with the SPKF. At first glance, it may be tempting to apply the SPKF update step serially. However, we prove that without re-drawing sigma points, this ‘naive’ approach cannot guarantee the positive-definiteness of the state covariance matrix (which is not the case for the EKF). We then introduce a novel method for the Sigma-Point Kalman Filter to process high-dimensional, uncorrelated measurements serially that is algebraically equivalent to processing the measurements in parallel, yet still achieves a computational cost linear in the measurement dimension.
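For context, the classical serial update that the abstract alludes to is shown below for an EKF-style filter with uncorrelated (diagonal-covariance) measurement noise; absorbing one scalar measurement at a time already gives a cost linear in the measurement dimension. The paper's SPKF-specific serial scheme, which is algebraically equivalent to the parallel update, is not reproduced here.

```python
import numpy as np

# Classical serial measurement update with diagonal R: well-known baseline only.

def serial_update(x, P, z, H, r_diag):
    """x: (n,) state, P: (n,n) covariance, z: (m,) measurements,
    H: (m,n) measurement Jacobian, r_diag: (m,) measurement noise variances."""
    x = np.asarray(x, dtype=float).copy()
    P = np.asarray(P, dtype=float).copy()
    for i in range(len(z)):
        h = H[i:i + 1, :]                         # 1 x n row for measurement i
        s = (h @ P @ h.T).item() + r_diag[i]      # scalar innovation variance
        k = (P @ h.T) / s                         # n x 1 Kalman gain
        x = x + k.ravel() * (z[i] - (h @ x).item())
        P = P - k @ (h @ P)
    return x, P
```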
ISRR | 2016
Timothy D. Barfoot; Colin McManus; Sean Anderson; Hang Dong; Erik Beerepoot; Chi Hay Tong; Paul Timothy Furgale; Jonathan D. Gammell; John Enright
Visual navigation of mobile robots has become a core capability that enables many interesting applications from planetary exploration to self-driving cars. While systems built on passive cameras have been shown to be robust in well-lit scenes, they cannot handle the range of conditions associated with a full diurnal cycle. Lidar, which is fairly invariant to ambient lighting conditions, offers one possible remedy to this problem. In this paper, we describe a visual navigation pipeline that exploits lidar’s ability to measure both range and intensity (a.k.a. reflectance) information. In particular, we use lidar intensity images (from a scanning-laser rangefinder) to carry out tasks such as visual odometry (VO) and visual teach and repeat (VT&R) in real time, from full-light to full-dark conditions. This lighting invariance comes at the price of coping with motion distortion, owing to the scanning-while-moving nature of laser-based imagers. We present our results and lessons learned from the last few years of research in this area.
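To illustrate the motion-distortion issue, the sketch below re-expresses each lidar return in the frame at the start of the sweep using a pose interpolated at the return's timestamp; a planar, constant-velocity model is assumed purely for brevity, whereas the actual pipeline handles motion distortion in SE(3) with a richer trajectory representation.

```python
import numpy as np

# Illustrative motion compensation for a scanning-while-moving lidar sweep.

def undistort_sweep(points_xy, timestamps, pose_start, pose_end):
    """points_xy: (N, 2) returns in the sensor frame; poses are (x, y, yaw)."""
    alphas = (timestamps - timestamps[0]) / (timestamps[-1] - timestamps[0] + 1e-12)
    x0, y0, yaw0 = pose_start
    c0, s0 = np.cos(yaw0), np.sin(yaw0)

    undistorted = np.empty((len(points_xy), 2))
    for i, (p, a) in enumerate(zip(points_xy, alphas)):
        x, y, yaw = (1 - a) * np.asarray(pose_start) + a * np.asarray(pose_end)
        c, s = np.cos(yaw), np.sin(yaw)
        wx = c * p[0] - s * p[1] + x          # point in a common world frame
        wy = s * p[0] + c * p[1] + y
        dx, dy = wx - x0, wy - y0             # back into the first pose's frame
        undistorted[i] = (c0 * dx + s0 * dy, -s0 * dx + c0 * dy)
    return undistorted
```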