Kirk MacTavish
University of Toronto
Publication
Featured research published by Kirk MacTavish.
Journal of Field Robotics | 2017
Michael Paton; François Pomerleau; Kirk MacTavish; Chris J. Ostafew; Timothy D. Barfoot
Vision-based, autonomous, route-following algorithms enable robots to autonomously repeat manually driven routes over long distances. Through the use of inexpensive, commercial vision sensors, these algorithms have the potential to enable robotic applications across multiple industries. However, in order to extend these algorithms to long-term autonomy, they must be able to operate over long periods of time. This poses a difficult challenge for vision-based systems in unstructured and outdoor environments, where appearance is highly variable. While many techniques have been developed to perform localization across extreme appearance change, most are unsuitable for, or untested on, vision-in-the-loop systems such as autonomous route following, which requires continuous metric localization to keep the robot driving. In this paper, we present a vision-based, autonomous, route-following algorithm that combines multiple channels of information during localization to increase robustness against daily appearance change such as lighting. We explore this multichannel visual teach and repeat framework by adding the following channels of information to the basic single-camera, gray-scale localization pipeline: images that are resistant to lighting change and images from additional stereo cameras to increase the algorithm's field of view. Using these methods, we demonstrate robustness against appearance change through extensive field deployments spanning over 26 km with an autonomy rate greater than 99.9%. We furthermore discuss the limits of this system when subjected to harsh environmental conditions by investigating keypoint match degradation through time.
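To make the multichannel idea concrete, here is a minimal sketch, assuming a dictionary of image channels per view and off-the-shelf ORB features (neither of which is the authors' implementation): each channel is matched independently against the map built from the teach pass, and the pooled correspondences feed a single robust pose solve.

```python
# Illustrative sketch only: the channel layout and ORB features are assumptions,
# not the authors' pipeline. Each channel (grayscale, a lighting-resistant
# transform, additional cameras) goes through the same detect/describe/match
# stage, and the correspondences are pooled for one robust pose solve.
import cv2

def extract(image):
    """Detect and describe keypoints on a single channel."""
    orb = cv2.ORB_create(nfeatures=1000)
    return orb.detectAndCompute(image, None)

def multichannel_matches(live_channels, map_channels):
    """Pool feature matches from every channel into one correspondence set."""
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    pooled = []
    for name, live_img in live_channels.items():
        kp_live, desc_live = extract(live_img)
        kp_map, desc_map = extract(map_channels[name])
        for m in matcher.match(desc_live, desc_map):
            pooled.append((name, kp_live[m.queryIdx].pt, kp_map[m.trainIdx].pt))
    return pooled  # handed to a robust pose estimator (e.g., RANSAC + P3P)
```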
international conference on robotics and automation | 2015
Michael Paton; Kirk MacTavish; Chris J. Ostafew; Timothy D. Barfoot
Stereo Visual Teach & Repeat (VT&R) is a system for long-range, autonomous route following in unstructured 3D environments. As this system relies on a passive sensor to localize, it is highly susceptible to changes in lighting conditions. Recent work in the optics community has provided a method to transform images collected from a three-channel passive sensor into color-constant images that are resistant to changes in outdoor lighting conditions. This paper presents a lighting-resistant VT&R system that uses experimentally trained color-constant images to autonomously navigate difficult outdoor terrain despite changes in lighting. We show through an extensive field trial that our algorithm is capable of autonomously following a 1 km outdoor route spanning sandy/rocky terrain, grassland, and wooded areas. Using a single visual map created at midday, the route was autonomously repeated 26 times over a period of four days, from sunrise to sunset, with an autonomy rate (by distance) of over 99.9%. These experiments show that a simple image transformation can extend the operation of VT&R from a few hours to multiple days.
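For readers unfamiliar with colour-constant imagery, the following is a minimal sketch of one common log-chromaticity form of such a transform; the coefficient is camera-dependent and, in this work, trained experimentally, so the fixed value below is purely illustrative.

```python
# Rough sketch of a log-chromaticity colour-constant transform; the single
# coefficient alpha depends on the camera's spectral response and, in the paper,
# is trained experimentally. The value used here is an arbitrary placeholder.
import numpy as np

def colour_constant(image_bgr, alpha=0.45, eps=1e-6):
    """Map a BGR image (H x W x 3) to a one-channel lighting-resistant image."""
    img = image_bgr.astype(np.float64) + eps  # avoid log(0)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    # A weighted difference of log channels approximately cancels the
    # illuminant, leaving a quantity that depends mostly on surface material.
    return 0.5 + np.log(g) - alpha * np.log(b) - (1.0 - alpha) * np.log(r)
```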
intelligent robots and systems | 2016
Michael Paton; Kirk MacTavish; Michael Warren; Timothy D. Barfoot
Vision-based, route-following algorithms enable autonomous robots to repeat manually taught paths over long distances using inexpensive vision sensors. However, these methods struggle with long-term, outdoor operation due to the challenges of environmental appearance change caused by lighting, weather, and seasons. While techniques exist to address appearance change by using multiple experiences over different environmental conditions, they either provide topological-only localization, require several manually taught experiences in different conditions, or require extensive offline mapping to produce metric localization. For real-world use, we would like to localize metrically to a single manually taught route and gather additional visual experiences during autonomous operations. Accordingly, we propose a novel multi-experience localization (MEL) algorithm developed specifically for route-following applications; it provides continuous, six-degree-of-freedom (6DoF) localization with relative uncertainty to a privileged (manually taught) path using several experiences simultaneously. We validate our algorithm through two experiments: i) an offline performance analysis on a 9 km subset of a challenging 27 km route-traversal dataset and ii) an online field trial where we demonstrate autonomy on a small 250 m loop over the course of a sunny day. Both exhibit significant appearance change due to lighting variation. Through these experiments we show that safe localization can be achieved by bridging the appearance gap.
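A simplified sketch of the multi-experience matching step is shown below; the data structures and descriptor threshold are assumptions rather than the paper's implementation. The live frame is matched against landmarks stored from several experiences of the same stretch of the privileged path, and all correspondences feed one 6DoF solve.

```python
# Simplified multi-experience matching: landmarks from several stored
# experiences are all matched against the live frame, and the combined
# correspondences are handed to a single robust 6DoF pose solve.
# Data structures and the descriptor threshold are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class Experience:
    descriptors: np.ndarray  # N x D landmark descriptors
    points: np.ndarray       # N x 3 landmark positions in the privileged frame

def multi_experience_correspondences(live_desc, experiences, max_dist=0.6):
    """Collect (live index, 3D point) pairs across all provided experiences."""
    pairs = []
    for exp in experiences:
        # brute-force nearest neighbour by descriptor distance, for clarity
        dists = np.linalg.norm(live_desc[:, None, :] - exp.descriptors[None, :, :], axis=2)
        nearest = dists.argmin(axis=1)
        keep = dists[np.arange(len(nearest)), nearest] < max_dist
        pairs.extend((i, exp.points[nearest[i]]) for i in np.nonzero(keep)[0])
    return pairs  # input to a robust pose estimator with relative uncertainty
```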
field and service robotics | 2016
Kirk MacTavish; Michael Paton; Timothy D. Barfoot
Colour-constant images have been shown to improve visual navigation taking place over extended periods of time. These images use a colour space that aims to be invariant to lighting conditions—a quality that makes them very attractive for place recognition, which tries to identify temporally distant image matches. Place recognition after extended periods of time is especially useful for SLAM algorithms, since it bounds growing odometry errors. We present results from the FAB-MAP 2.0 place recognition algorithm, using colour-constant images for the first time, tested with a robot driving a 1 km loop 11 times over the course of several days. Computation can be improved by grouping short sequences of images and describing them with a single descriptor. Colour-constant images are shown to improve performance without a significant impact on computation, and the grouping strategy greatly speeds up computation while improving some performance measures. These two simple additions contribute robustness and speed, without modifying FAB-MAP 2.0.
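As a concrete illustration of the grouping strategy, the sketch below combines the bag-of-words vectors of a short run of consecutive images into a single group descriptor; the group length and the binary OR-style combination are assumptions, not FAB-MAP 2.0 itself.

```python
# Sketch of grouping consecutive images into a single bag-of-words descriptor
# so that the place-recognition back end scores far fewer places. The group
# size and the binary OR combination are assumptions.
import numpy as np

def group_bow(bow_vectors, group_size=5):
    """Combine consecutive binary BoW histograms into one descriptor per group."""
    groups = []
    for start in range(0, len(bow_vectors), group_size):
        chunk = np.asarray(bow_vectors[start:start + group_size])
        # a visual word counts as observed by the group if any member saw it
        groups.append((chunk > 0).any(axis=0).astype(np.uint8))
    return groups
```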
The International Journal of Robotics Research | 2015
Sean Anderson; Kirk MacTavish; Timothy D. Barfoot
Appearance-based techniques for simultaneous localization and mapping (SLAM) have been highly successful in assisting robot-motion estimation; however, these vision-based technologies have long assumed the use of imaging sensors with a global shutter, which are well suited to the traditional, discrete-time formulation of visual problems. In order to adapt these technologies to use scanning sensors, we propose novel methods for both outlier rejection and batch nonlinear estimation. Traditionally, the SLAM problem has been formulated in a single-privileged coordinate frame, which can become computationally expensive over long distances, particularly when a loop closure requires the adjustment of many pose variables. Recent discrete-time estimators have shown that a completely relative coordinate framework can be used to incrementally find a close approximation of the full maximum-likelihood solution in constant time. In order to use scanning sensors, we propose moving the relative coordinate formulation of SLAM into continuous time by estimating the velocity profile of the robot. We derive the relative formulation of the continuous-time robot trajectory and formulate an estimator using temporal basis functions. A motion-compensated outlier rejection scheme is proposed by using a constant-velocity model for the random sample consensus algorithm. Our experimental results use intensity imagery from a two-axis scanning lidar; due to the sensor's scanning nature, it behaves similarly to a slow rolling-shutter camera. Both algorithms are validated using a sequence of 6880 lidar frames acquired over a 1.1 km traversal.
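The core modelling step, written here in generic notation that may differ from the paper's exact symbols, is to represent the body-frame velocity as a weighted sum of temporal basis functions and recover poses by integrating the SE(3) kinematics; the estimator then solves for the weighting coefficients rather than a discrete set of poses:

```latex
% Generic form of a continuous-time trajectory parameterized by temporal basis
% functions; notation is illustrative and may differ from the paper's.
\varpi(t) \;=\; \sum_{i=1}^{M} \phi_i(t)\,\mathbf{c}_i \;=\; \boldsymbol{\Phi}(t)\,\mathbf{c},
\qquad
\dot{\mathbf{T}}(t) \;=\; \varpi(t)^{\wedge}\,\mathbf{T}(t)
```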
international conference on robotics and automation | 2016
Patrick McGarey; Kirk MacTavish; François Pomerleau; Timothy D. Barfoot
Mobile robots supported by an electromechanical tether can safely explore extremely rugged terrain in resource-limited environments. While a tether provides power, wired communication, and support on steep surfaces, it also reduces maneuverability; in cluttered environments the tether will contact obstacles, forming intermediate anchor points. In order for the robot to avoid tether entanglement, it must localize itself with respect to any added anchor points. Accordingly, we present a first approach towards nonvisual localization and mapping that utilizes tether measurements and wheel odometry to jointly estimate vehicle trajectory and tether-to-obstacle contact points. The proposed method is inspired by FastSLAM, where instead of updating a map of landmarks, tether length and bearing measurements are used to update sequential lists of anchor points for every particle representing a belief of the robot's trajectory. Results from both simulation and experiment using our Tethered Robotic eXplorer (TReX) demonstrate that (i) our method is more accurate than odometry alone, and (ii) we are able to map intermediate anchor points nonvisually.
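A stripped-down sketch of the particle structure described above follows; the noise values, the length-only measurement update, and the omission of bearing updates and anchor add/drop logic are all simplifications for illustration.

```python
# Stripped-down FastSLAM-style particle for tethered localization: each particle
# carries its own pose hypothesis and its own sequential list of anchor points.
# Noise values and the length-only update are illustrative simplifications; the
# real filter also uses tether bearing and manages anchor addition/removal.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Particle:
    pose: np.ndarray                              # [x, y, theta]
    anchors: list = field(default_factory=list)   # ordered 2D contact points
    weight: float = 1.0

def predict(particle, odom, noise=(0.02, 0.02, 0.01)):
    """Propagate the pose with noisy wheel odometry."""
    particle.pose = particle.pose + np.asarray(odom) + np.random.normal(0.0, noise)

def tether_length(particle, base=np.zeros(2)):
    """Tether length implied by the base, the anchor chain, and the robot pose."""
    pts = [base] + list(particle.anchors) + [particle.pose[:2]]
    return sum(np.linalg.norm(pts[i + 1] - pts[i]) for i in range(len(pts) - 1))

def update(particle, measured_length, sigma=0.1):
    """Re-weight the particle by the tether-length innovation."""
    err = measured_length - tether_length(particle)
    particle.weight *= float(np.exp(-0.5 * (err / sigma) ** 2))
```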
canadian conference on computer and robot vision | 2015
Kirk MacTavish; Timothy D. Barfoot
Camera-based localization techniques must be robust to correspondence errors, i.e., when visual features (landmarks) are matched incorrectly. The two primary techniques to address this issue are RANSAC and robust M-estimation -- each more appropriate for different applications. This paper investigates the use of different robust cost functions for M-estimation to deal with correspondence outliers, and assesses their performance under varying degrees of data corruption. Experimental results show that using an aggressive redescending cost function (e.g., Dynamic Covariance Scaling (DCS) or Geman-McClure (G-M)) best improves accuracy by excluding outliers almost entirely. Additionally, adjusting an error-scaling parameter for the robust cost function over the course of the optimization improves convergence with poor initial conditions.
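For reference, the standard textbook forms of the two redescending costs named above are given below, each with a scale/tuning parameter; the paper's exact parameterizations may differ from these generic forms.

```latex
% Standard forms from the robust-estimation literature; the paper's exact
% parameterizations may differ. Here e is the residual, \sigma and \Phi are scales.
\rho_{\text{G-M}}(e) \;=\; \frac{e^{2}/2}{\sigma^{2} + e^{2}},
\qquad
s_{\text{DCS}} \;=\; \min\!\left(1, \frac{2\Phi}{\Phi + e^{2}}\right),
\quad e \;\mapsto\; s_{\text{DCS}}\, e
```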
field and service robotics | 2018
Michael Warren; Michael Paton; Kirk MacTavish; Angela P. Schoellig; Timothy D. Barfoot
Most consumer and industrial Unmanned Aerial Vehicles (UAVs) rely on combining Global Navigation Satellite Systems (GNSS) with barometric and inertial sensors for outdoor operation. As a consequence, these vehicles are prone to a variety of potential navigation failures such as jamming and environmental interference. This usually limits their legal activities to locations of low population density within line-of-sight of a human pilot to reduce risk of injury and damage. Autonomous route-following methods such as Visual Teach and Repeat (VT&R) have enabled long-range navigational autonomy for ground robots without the need for reliance on external infrastructure or an accurate global position estimate. In this paper, we demonstrate the localisation component of VT&R outdoors on a fixed-wing UAV as a method of backup navigation in case of primary sensor failure. We modify the localisation engine of VT&R to work with a single downward-facing camera on a UAV to enable safe navigation under the guidance of vision alone. We evaluate the method using visual data from the UAV flying a 1200 m trajectory (at an altitude of 80 m) several times during a multi-day period, covering a total distance of 10.8 km using the algorithm. We examine the localisation performance for both small (single flight) and large (inter-day) temporal differences from teach to repeat. Through these experiments, we demonstrate the ability to successfully localise the aircraft on a self-taught route using vision alone without the need for additional sensing or infrastructure.
field and service robotics | 2018
Michael Paton; Kirk MacTavish; Laszlo-Peter Berczi; Sebastian Kai van Es; Timothy D. Barfoot
Autonomous path-following systems based on the Teach and Repeat paradigm allow robots to traverse extensive networks of manually driven paths using on-board sensors. These methods are well suited for applications that involve repeated traversals of constrained paths such as factory floors, orchards, and mines. In order for path-following systems to be viable for these applications, they must be able to navigate large distances over long time periods, a challenging task for vision-based systems that are susceptible to appearance change. This paper details Visual Teach and Repeat 2.0, a vision-based path-following system capable of safe, long-term navigation over large-scale networks of connected paths in unstructured, outdoor environments. These tasks are achieved through the use of a suite of novel, multi-experience, vision-based navigation algorithms. We have validated our system experimentally through an eleven-day field test in an untended gravel pit in Sudbury, Canada, where we incrementally built and autonomously traversed a 5 km network of paths. Over the span of the field test, the robot logged over 140 km of autonomous driving with an autonomy rate of 99.6%, despite experiencing significant appearance change due to lighting and weather, including driving at night using headlights.
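The sketch below shows one plausible way to represent such a network of connected paths as a graph of teach vertices and relative-transform edges, with repeat experiences accumulating at each vertex; it mirrors the description above but is not the authors' implementation.

```python
# Plausible (not the authors') representation of a teach-and-repeat route
# network: vertices created while teaching, edges storing relative transforms,
# and repeat experiences accumulating at the vertices they localized against.
from dataclasses import dataclass, field

@dataclass
class Vertex:
    vid: int
    experiences: list = field(default_factory=list)   # data from repeat passes

@dataclass
class Edge:
    from_vid: int
    to_vid: int
    T_to_from: object            # relative transform (placeholder type)
    privileged: bool = True      # True for manually taught edges

@dataclass
class RouteNetwork:
    vertices: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_teach_vertex(self, vid, prev_vid=None, T_to_prev=None):
        """Append a taught vertex; branching a new path reuses an existing prev_vid."""
        self.vertices[vid] = Vertex(vid)
        if prev_vid is not None:
            self.edges.append(Edge(prev_vid, vid, T_to_prev, privileged=True))
```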
international conference on robotics and automation | 2017
Kirk MacTavish; Michael Paton; Timothy D. Barfoot
Our work builds upon Visual Teach & Repeat 2 (VT&R2): a vision-in-the-loop autonomous navigation system that enables the rapid construction of route networks, safely built through operator-controlled driving. Added routes can be followed autonomously using visual localization. To enable long-term operation that is robust to appearance change, its Multi-Experience Localization (MEL) leverages many previously driven experiences when localizing to the manually taught network. While this multi-experience method is effective across appearance change, the computation becomes intractable as the number of experiences grows into the tens and hundreds. This paper introduces an algorithm that prioritizes experiences most relevant to live operation, limiting the number of experiences required for localization. The proposed algorithm uses a visual Bag-of-Words description of the live view to select relevant experiences based on what the vehicle is seeing right now, without having to factor in all possible environmental influences on scene appearance. This system runs in the loop, in real time, does not require bootstrapping, can be applied to any point-feature MEL paradigm, and eliminates the need for visual training using an online, local visual vocabulary. By selecting a subset of experiences that are visually similar to the live view, we demonstrate safe, vision-in-the-loop route following over a 31-hour period, despite appearance as different as night and day.
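A minimal sketch of the selection step, assuming cosine similarity over bag-of-words vectors and a fixed number of recommended experiences (both assumptions, not the paper's exact scoring), would be:

```python
# Minimal sketch of experience selection: describe the live view with a
# bag-of-words vector over a local vocabulary, score each stored experience by
# similarity, and keep only the best few for multi-experience localization.
# The cosine score and the fixed k are assumptions.
import numpy as np

def cosine(a, b, eps=1e-12):
    return float(a @ b) / (float(np.linalg.norm(a) * np.linalg.norm(b)) + eps)

def recommend_experiences(live_bow, experience_bows, k=5):
    """Return ids of the k experiences whose BoW vectors best match the live view."""
    scores = {eid: cosine(live_bow, bow) for eid, bow in experience_bows.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```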