Michael Paton
University of Toronto
Publications
Featured research published by Michael Paton.
canadian conference on computer and robot vision | 2012
Michael Paton; Jana Kosecka
The advent of RGB-D cameras, which provide synchronized range and video data, creates new opportunities for exploiting both sensing modalities in various robotic applications. This paper exploits the strengths of vision and range measurements and develops a novel, robust algorithm for localization using RGB-D cameras. We show how correspondences established by matching visual SIFT features can effectively initialize the generalized ICP algorithm, and demonstrate situations where such initialization is not viable. We propose an adaptive architecture that computes the pose estimate from the most reliable measurements in a given environment and present a thorough evaluation of the resulting algorithm on a dataset of RGB-D benchmarks, demonstrating superior or comparable performance in the absence of a global optimization stage. Lastly, we run the proposed algorithm on a challenging indoor dataset and show improvements in situations where pose estimation from either pure range sensing or pure vision performs poorly.
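The SIFT-initialized ICP idea can be illustrated with a brief sketch. The function below is not the paper's implementation; it only shows one standard way (a closed-form Kabsch/SVD alignment, assumed here) to turn matched 3D keypoint positions from two RGB-D frames into an initial rigid transform that a generalized-ICP routine could then refine; the `generalized_icp` call in the usage comment is a hypothetical placeholder.

```python
import numpy as np

def rigid_transform_from_matches(src_pts, dst_pts):
    """Estimate a 4x4 rigid transform mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 3) arrays of 3D keypoint positions obtained by
    back-projecting matched SIFT features using the depth channel.
    Closed-form Kabsch/SVD solution (illustrative sketch only).
    """
    src_c, dst_c = src_pts.mean(axis=0), dst_pts.mean(axis=0)
    H = (src_pts - src_c).T @ (dst_pts - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # proper rotation (det = +1)
    t = dst_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical usage: pts_prev/pts_curr come from SIFT matches back-projected
# with depth; generalized_icp stands in for whatever GICP routine is available.
# T_init = rigid_transform_from_matches(pts_prev, pts_curr)
# T_refined = generalized_icp(cloud_prev, cloud_curr, init=T_init)
```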
Journal of Field Robotics | 2017
Michael Paton; François Pomerleau; Kirk MacTavish; Chris J. Ostafew; Timothy D. Barfoot
Vision-based, autonomous, route-following algorithms enable robots to autonomously repeat manually driven routes over long distances. Through the use of inexpensive, commercial vision sensors, these algorithms have the potential to enable robotic applications across multiple industries. However, in order to extend these algorithms to long-term autonomy, they must be able to operate over long periods of time. This poses a difficult challenge for vision-based systems in unstructured and outdoor environments, where appearance is highly variable. While many techniques have been developed to perform localization across extreme appearance change, most are unsuitable or untested for vision-in-the-loop systems such as autonomous route following, which requires continuous metric localization to keep the robot driving. In this paper, we present a vision-based, autonomous, route-following algorithm that combines multiple channels of information during localization to increase robustness against daily appearance change such as lighting. We explore this multichannel visual teach and repeat framework by adding the following channels of information to the basic single-camera, grayscale localization pipeline: images that are resistant to lighting change and images from additional stereo cameras that increase the algorithm's field of view. Using these methods, we demonstrate robustness against appearance change through extensive field deployments spanning over 26 km with an autonomy rate greater than 99.9%. We furthermore discuss the limits of this system when subjected to harsh environmental conditions by investigating keypoint match degradation through time.
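As a rough illustration of the multichannel architecture described above, the sketch below pools feature matches from every available image channel before a single pose solve; the channel names, the `Match` container, and the `solve_pose` call are illustrative assumptions, not the paper's interfaces.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Match:
    map_point: np.ndarray    # 3D landmark from the teach-pass map
    observation: np.ndarray  # 2D keypoint observed in the live image
    channel: str             # e.g. "grayscale", "color-constant", "camera-2"

def pool_matches(channel_matches):
    """Concatenate matches from all image channels into one list.

    channel_matches maps a channel name to the matches its feature pipeline
    produced; a channel that fails under the current conditions (for example,
    washed-out lighting) simply contributes nothing instead of breaking the
    localization step.
    """
    pooled = []
    for matches in channel_matches.values():
        pooled.extend(matches)
    return pooled

# Hypothetical downstream use:
# matches = pool_matches({"grayscale": gray_matches,
#                         "color_constant": cc_matches,
#                         "camera_2": extra_camera_matches})
# T_live_map = solve_pose(matches)  # robust 6DoF solve, assumed available
```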
international conference on robotics and automation | 2015
Michael Paton; Kirk MacTavish; Chris J. Ostafew; Timothy D. Barfoot
Stereo Visual Teach & Repeat (VT&R) is a system for long-range, autonomous route following in unstructured 3D environments. As this system relies on a passive sensor to localize, it is highly susceptible to changes in lighting conditions. Recent work in the optics community has provided a method to transform images collected from a three-channel passive sensor into color-constant images that are resistant to changes in outdoor lighting conditions. This paper presents a lighting-resistant VT&R system that uses experimentally trained color-constant images to autonomously navigate difficult outdoor terrain despite changes in lighting. We show through an extensive field trial that our algorithm is capable of autonomously following a 1 km outdoor route spanning sandy/rocky terrain, grassland, and wooded areas. Using a single visual map created at midday, the route was autonomously repeated 26 times over a period of four days, from sunrise to sunset, with an autonomy rate (by distance) of over 99.9%. These experiments show that a simple image transformation can extend the operation of VT&R from a few hours to multiple days.
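The color-constant imagery referenced here is commonly built from a log-linear combination of the color channels. The sketch below shows that style of transform under stated assumptions: the channel ordering, the weight alpha (which would be trained experimentally for a given camera), and the final rescaling are illustrative placeholders, not the paper's exact parameters.

```python
import numpy as np

def color_constant_image(rgb, alpha=0.48):
    """Map an RGB image to a single-channel, lighting-resistant image.

    Uses a log-chromaticity style combination
        F = log(G) - alpha * log(B) - (1 - alpha) * log(R),
    where alpha is a camera-dependent weight; 0.48 is only a placeholder.
    """
    rgb = rgb.astype(np.float64) + 1.0                   # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    f = np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
    # Rescale to 8-bit so a standard keypoint detector can run on the result.
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)
    return (255.0 * f).astype(np.uint8)
```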
intelligent robots and systems | 2016
Michael Paton; Kirk MacTavish; Michael Warren; Timothy D. Barfoot
Vision-based, route-following algorithms enable autonomous robots to repeat manually taught paths over long distances using inexpensive vision sensors. However, these methods struggle with long-term, outdoor operation due to the challenges of environmental appearance change caused by lighting, weather, and seasons. While techniques exist to address appearance change by using multiple experiences over different environmental conditions, they either provide topological-only localization, require several manually taught experiences in different conditions, or require extensive offline mapping to produce metric localization. For real-world use, we would like to localize metrically to a single manually taught route and gather additional visual experiences during autonomous operations. Accordingly, we propose a novel multi-experience localization (MEL) algorithm developed specifically for route-following applications; it provides continuous, six-degree-of-freedom (6DoF) localization with relative uncertainty to a privileged (manually taught) path using several experiences simultaneously. We validate our algorithm through two experiments: i) an offline performance analysis on a 9 km subset of a challenging 27 km route-traversal dataset and ii) an online field trial where we demonstrate autonomy on a small 250 m loop over the course of a sunny day. Both exhibit significant appearance change due to lighting variation. Through these experiments we show that safe localization can be achieved by bridging the appearance gap.
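One way to picture the MEL idea of using several experiences simultaneously is to pool landmark matches contributed by each experience into a single robust 6DoF solve against the privileged path. The sketch below assumes a simple pinhole camera (fx, fy, cx, cy) and landmarks that sit in front of the camera at the initial guess; it omits the paper's relative-uncertainty estimation and is not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def solve_pose_multi_experience(landmark_sets, observation_sets, fx, fy, cx, cy):
    """Estimate a 6DoF pose from matches pooled across several experiences.

    landmark_sets[i]:    (N_i, 3) landmarks in the privileged-path frame,
                         contributed by experience i.
    observation_sets[i]: (N_i, 2) matched pixel observations in the live image.
    Pose is parameterized as [rotation vector (3), translation (3)] and refined
    with a robust (Cauchy) reprojection-error least squares.
    """
    P = np.vstack(landmark_sets)      # pool landmarks from all experiences
    z = np.vstack(observation_sets)   # and their corresponding observations

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        p_cam = P @ R.T + x[3:]                       # landmarks in camera frame
        u = fx * p_cam[:, 0] / p_cam[:, 2] + cx       # pinhole projection
        v = fy * p_cam[:, 1] / p_cam[:, 2] + cy
        return np.concatenate([u - z[:, 0], v - z[:, 1]])

    sol = least_squares(residuals, x0=np.zeros(6), loss="cauchy", f_scale=2.0)
    return sol.x  # live camera pose relative to the privileged path
```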
field and service robotics | 2016
Kirk MacTavish; Michael Paton; Timothy D. Barfoot
Colour-constant images have been shown to improve visual navigation taking place over extended periods of time. These images use a colour space that aims to be invariant to lighting conditions—a quality that makes them very attractive for place recognition, which tries to identify temporally distant image matches. Place recognition after extended periods of time is especially useful for SLAM algorithms, since it bounds growing odometry errors. We present results from the FAB-MAP 2.0 place recognition algorithm, using colour-constant images for the first time, tested with a robot driving a 1 km loop 11 times over the course of several days. Computation can be improved by grouping short sequences of images and describing them with a single descriptor. Colour-constant images are shown to improve performance without a significant impact on computation, and the grouping strategy greatly speeds up computation while improving some performance measures. These two simple additions contribute robustness and speed, without modifying FAB-MAP 2.0.
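The grouping strategy mentioned above can be sketched as summing the bag-of-words histograms of a short image sequence into one descriptor; the window length and normalization below are illustrative choices, not necessarily those used with FAB-MAP 2.0 in the paper.

```python
import numpy as np

def group_descriptor(bow_vectors):
    """Combine the BoW histograms of a short image sequence into one descriptor.

    bow_vectors: (k, V) array, one V-dimensional bag-of-words histogram per
    image. Summing and re-normalizing lets the place-recognition back end
    compare groups of frames instead of every individual frame.
    """
    grouped = np.asarray(bow_vectors, dtype=np.float64).sum(axis=0)
    total = grouped.sum()
    return grouped / total if total > 0 else grouped

# Example: describe every non-overlapping group of 5 consecutive frames.
# group_descriptors = [group_descriptor(bow[i:i + 5]) for i in range(0, len(bow), 5)]
```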
field and service robotics | 2016
Michael Paton; François Pomerleau; Timothy D. Barfoot
In order for vision-based navigation algorithms to extend to long-term autonomy applications, they must have the ability to reliably associate images across time. This ability is challenged in unstructured and outdoor environments, where appearance is highly variable. This is especially true in temperate winter climates, where snowfall and low sun elevation rapidly change the appearance of the scene. While techniques have been proposed to perform localization across extreme appearance changes, they are not suitable for many navigation algorithms, such as autonomous path following, which requires constant, accurate, metric localization during the robot's traverse. Furthermore, recent methods that mitigate the effects of lighting change for vision algorithms do not perform well in the contrast-limited environments associated with winter. In this paper, we highlight the successes and failures of two state-of-the-art path-following algorithms in this challenging environment. From harsh lighting conditions to deep snow, we show through a series of field trials that there remain serious issues with navigation in these environments, which must be addressed in order for long-term, vision-based navigation to succeed.
canadian conference on computer and robot vision | 2015
Michael Paton; François Pomerleau; Timothy D. Barfoot
Autonomous path-following robots that use vision-based navigation are appealing for a wide variety of tedious and dangerous applications. However, a reliance on matching point-based visual features often renders vision-based navigation unreliable over extended periods of time in unstructured, outdoor environments. Specifically, scene change caused by lighting, weather, and seasonal variation leads to changes in visual features and results in a reduction of feature associations across time. This paper presents an autonomous, path-following system that uses multiple stereo cameras to increase the algorithm's field of view and reliably navigate in these feature-limited scenarios. The addition of a second camera in the localization pipeline greatly increases the probability that a stable feature will be in the robot's field of view at any point in time, extending the amount of time the robot can reliably navigate. We experimentally validate our algorithm through a challenging winter field trial, where the robot autonomously traverses a 250 m path six times with an autonomy rate of 100% despite significant changes in the appearance of the scene due to lighting and melting snow. We show that the addition of a second stereo camera to the system significantly increases the autonomy window when compared to current state-of-the-art path-following methods.
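The benefit of the second camera comes from feeding both cameras' features into one localization problem. A minimal sketch of the bookkeeping this requires, assuming a calibrated extrinsic transform between the two cameras, is shown below; it is not the paper's implementation.

```python
import numpy as np

def to_primary_frame(points_cam2, T_primary_cam2):
    """Express 3D points triangulated by the second stereo camera in the
    primary camera's frame so matches from both cameras share one pose solve.

    points_cam2:    (N, 3) points in the second camera's coordinate frame.
    T_primary_cam2: 4x4 extrinsic transform (from calibration) taking
                    second-camera coordinates to primary-camera coordinates.
    """
    pts_h = np.hstack([points_cam2, np.ones((points_cam2.shape[0], 1))])
    return (pts_h @ T_primary_cam2.T)[:, :3]

# With both cameras' matches in a common frame, the effective field of view
# widens: a stable feature only needs to be visible in *either* camera.
```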
field and service robotics | 2018
Michael Warren; Michael Paton; Kirk MacTavish; Angela P. Schoellig; Timothy D. Barfoot
Most consumer and industrial Unmanned Aerial Vehicles (UAVs) rely on combining Global Navigation Satellite Systems (GNSS) with barometric and inertial sensors for outdoor operation. As a consequence, these vehicles are prone to a variety of potential navigation failures such as jamming and environmental interference. This usually limits their legal activities to locations of low population density within line-of-sight of a human pilot to reduce the risk of injury and damage. Autonomous route-following methods such as Visual Teach and Repeat (VT&R) have enabled long-range navigational autonomy for ground robots without reliance on external infrastructure or an accurate global position estimate. In this paper, we demonstrate the localisation component of VT&R outdoors on a fixed-wing UAV as a method of backup navigation in case of primary sensor failure. We modify the localisation engine of VT&R to work with a single downward-facing camera on a UAV to enable safe navigation under the guidance of vision alone. We evaluate the method using visual data from the UAV flying a 1200 m trajectory (at an altitude of 80 m) several times during a multi-day period, covering a total distance of 10.8 km using the algorithm. We examine the localisation performance for both small (single-flight) and large (inter-day) temporal differences from teach to repeat. Through these experiments, we demonstrate the ability to successfully localise the aircraft on a self-taught route using vision alone, without the need for additional sensing or infrastructure.
field and service robotics | 2018
Michael Paton; Kirk MacTavish; Laszlo-Peter Berczi; Sebastian Kai van Es; Timothy D. Barfoot
Autonomous path-following systems based on the Teach and Repeat paradigm allow robots to traverse extensive networks of manually driven paths using on-board sensors. These methods are well suited for applications that involve repeated traversals of constrained paths such as factory floors, orchards, and mines. In order for path-following systems to be viable for these applications, they must be able to navigate large distances over long time periods, a challenging task for vision-based systems that are susceptible to appearance change. This paper details Visual Teach and Repeat 2.0, a vision-based path-following system capable of safe, long-term navigation over large-scale networks of connected paths in unstructured, outdoor environments. These tasks are achieved through the use of a suite of novel, multi-experience, vision-based navigation algorithms. We have validated our system experimentally through an eleven-day field test in an untended gravel pit in Sudbury, Canada, where we incrementally built and autonomously traversed a 5 km network of paths. Over the span of the field test, the robot logged over 140 km of autonomous driving with an autonomy rate of 99.6%, despite experiencing significant appearance change due to lighting and weather, including driving at night using headlights.
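The network-of-paths idea can be pictured as a graph of keyframes connected by relative transforms, so a route through the network is a chain of relative poses rather than a globally consistent map. The data structure below is an illustrative simplification (the real system also attaches repeat experiences to the graph), with hypothetical names throughout.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple
import numpy as np

@dataclass
class Vertex:
    """A keyframe along a manually taught (privileged) path."""
    vertex_id: int
    landmarks: list = field(default_factory=list)   # 3D features observed here

@dataclass
class PathNetwork:
    """A minimal network of connected teach paths."""
    vertices: Dict[int, Vertex] = field(default_factory=dict)
    edges: Dict[Tuple[int, int], np.ndarray] = field(default_factory=dict)

    def add_edge(self, a: int, b: int, T_a_b: np.ndarray) -> None:
        """Store the relative transform between neighbouring keyframes a and b."""
        self.edges[(a, b)] = T_a_b

    def route(self, vertex_ids: List[int]) -> List[np.ndarray]:
        """Return the chain of relative transforms along a planned route."""
        return [self.edges[(a, b)] for a, b in zip(vertex_ids[:-1], vertex_ids[1:])]
```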
international conference on robotics and automation | 2017
Kirk MacTavish; Michael Paton; Timothy D. Barfoot
Our work builds upon Visual Teach & Repeat 2 (VT&R2): a vision-in-the-loop autonomous navigation system that enables the rapid construction of route networks, safely built through operator-controlled driving. Added routes can be followed autonomously using visual localization. To enable long-term operation that is robust to appearance change, its Multi-Experience Localization (MEL) leverages many previously driven experiences when localizing to the manually taught network. While this multi-experience method is effective across appearance change, the computation becomes intractable as the number of experiences grows into the tens and hundreds. This paper introduces an algorithm that prioritizes the experiences most relevant to live operation, limiting the number of experiences required for localization. The proposed algorithm uses a visual Bag-of-Words description of the live view to select relevant experiences based on what the vehicle is seeing right now, without having to factor in all possible environmental influences on scene appearance. This system runs in the loop, in real time, does not require bootstrapping, can be applied to any point-feature MEL paradigm, and eliminates the need for visual training by using an online, local visual vocabulary. By picking a subset of experiences visually similar to the live view, we demonstrate safe, vision-in-the-loop route following over a 31-hour period, despite appearance as different as night and day.
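The experience-selection step can be approximated by ranking stored experiences by the similarity of their bag-of-words descriptors to the live view and keeping only the best few; the cosine ranking below is a simplified stand-in for the paper's probabilistic BoW recommender, with illustrative names.

```python
import numpy as np

def select_experiences(live_bow, experience_bows, k=4):
    """Pick the k experiences whose BoW descriptors best match the live view.

    live_bow:        (V,) bag-of-words histogram of the live image.
    experience_bows: dict mapping experience id -> (V,) histogram built from
                     that experience's images near the current map vertex.
    """
    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom > 0.0 else 0.0

    scores = {eid: cosine(live_bow, bow) for eid, bow in experience_bows.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```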