Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stephanie M. Lowry is active.

Publication


Featured research published by Stephanie M. Lowry.


IEEE Transactions on Robotics | 2016

Visual Place Recognition: A Survey

Stephanie M. Lowry; Niko Sünderhauf; Paul Newman; John J. Leonard; David Cox; Peter Corke; Michael Milford

Visual place recognition is a challenging problem due to the vast range of ways in which the appearance of real-world places can vary. In recent years, improvements in visual sensing capabilities, an ever-increasing focus on long-term mobile robot autonomy, and the ability to draw on state-of-the-art research in other disciplines (particularly recognition in computer vision and animal navigation in neuroscience) have all contributed to significant advances in visual place recognition systems. This paper presents a survey of the visual place recognition research landscape. We start by introducing the concepts behind place recognition: the role of place recognition in the animal kingdom, how a “place” is defined in a robotics context, and the major components of a place recognition system. Long-term robot operations have revealed that changing appearance can be a significant factor in visual place recognition failure; therefore, we discuss how place recognition solutions can implicitly or explicitly account for appearance change within the environment. Finally, we close with a discussion on the future of visual place recognition, in particular with respect to the rapid advances being made in the related fields of deep learning, semantic scene understanding, and video description.


International Conference on Robotics and Automation | 2014

Transforming morning to afternoon using linear regression techniques

Stephanie M. Lowry; Michael Milford; Gordon Wyeth

Visual localization in outdoor environments is often hampered by the natural variation in appearance caused by such things as weather phenomena, diurnal fluctuations in lighting, and seasonal changes. Such changes are global across an environment and, in the case of global light changes and seasonal variation, the change in appearance occurs in a regular, cyclic manner. Visual localization could be greatly improved if it were possible to predict the appearance of a particular location at a particular time, based on the appearance of the location in the past and knowledge of the nature of appearance change over time. In this paper, we investigate whether global appearance changes in an environment can be learned sufficiently to improve visual localization performance. We use time of day as a test case, and generate transformations between morning and afternoon using sample images from a training set. We demonstrate the learned transformation can be generalized from training data and show the resulting visual localization on a test set is improved relative to raw image comparison. The improvement in localization remains when the area is revisited several weeks later.
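As a rough illustration of the idea (not the authors' actual pipeline), a linear transformation between morning and afternoon appearance can be fitted by least squares on downsampled training images and applied to a query before matching. The array shapes and the sum-of-absolute-differences matcher below are assumptions made for the sketch.

```python
# Illustrative sketch: learn a linear transform from morning to afternoon
# appearance and apply it before whole-image matching.
import numpy as np

def learn_transform(morning, afternoon):
    """Least-squares fit of W such that afternoon ~ morning @ W.

    morning, afternoon: (n_images, n_pixels) arrays of flattened,
    intensity-normalized training images captured at the same places.
    """
    W, *_ = np.linalg.lstsq(morning, afternoon, rcond=None)
    return W

def localize(query_morning, afternoon_db, W):
    """Transform a morning query, then match it against an afternoon database
    by sum of absolute differences; returns the best-matching index."""
    predicted_afternoon = query_morning @ W
    sad = np.abs(afternoon_db - predicted_afternoon).sum(axis=1)
    return int(np.argmin(sad))
```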


Computer Vision and Pattern Recognition | 2015

Sequence searching with deep-learnt depth for condition- and viewpoint-invariant route-based place recognition

Michael Milford; Stephanie M. Lowry; Niko Sünderhauf; Sareh Shirazi; Edward Pepperell; Ben Upcroft; Chunhua Shen; Guosheng Lin; Fayao Liu; Cesar Cadena; Ian D. Reid

Vision-based localization on robots and vehicles remains unsolved when extreme appearance change and viewpoint change are present simultaneously. The current state-of-the-art approaches to this challenge either deal with only one of these two problems (for example, FAB-MAP addresses viewpoint invariance and SeqSLAM addresses appearance invariance), or use extensive training within the test environment, an impractical requirement in many application scenarios. In this paper we significantly improve the viewpoint invariance of the SeqSLAM algorithm by using state-of-the-art deep learning techniques to generate synthetic viewpoints. Our approach differs from other deep learning approaches in that it does not rely on the ability of the CNN to learn invariant features, but only on its ability to produce “good enough” depth images, and only from day-time imagery. We evaluate the system on a new multi-lane day-night car dataset specifically gathered to simultaneously test both appearance and viewpoint change. Results demonstrate that the use of synthetic viewpoints improves the maximum recall achieved at 100% precision by a factor of 2.2 and maximum recall by a factor of 2.7, enabling correct place recognition across multiple road lanes and significantly reducing the time between correct localizations.
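The sequence-search component can be illustrated with a minimal SeqSLAM-style sketch. The constant-speed (slope-1) assumption and the `seq_len` parameter below are simplifications, and the synthetic depth-based viewpoint generation described in the paper is not shown.

```python
# Minimal SeqSLAM-style sequence search over a frame difference matrix.
import numpy as np

def sequence_match(diff_matrix, seq_len=10):
    """diff_matrix[i, j]: dissimilarity between query frame i and database
    frame j. Score straight-line sequences of length seq_len and return the
    best database match per query frame (-1 where no full sequence fits)."""
    n_query, n_db = diff_matrix.shape
    matches = np.full(n_query, -1)
    for q in range(seq_len, n_query):
        best_score, best_j = np.inf, -1
        for j in range(seq_len, n_db):
            # assume query and database traverse the route at the same speed
            score = sum(diff_matrix[q - k, j - k] for k in range(seq_len))
            if score < best_score:
                best_score, best_j = score, j
        matches[q] = best_j
    return matches
```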


International Conference on Robotics and Automation | 2014

Towards training-free appearance-based localization: probabilistic models for whole-image descriptors

Stephanie M. Lowry; Gordon Wyeth; Michael Milford

Whole-image descriptors have been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements and the lack of a meaningful interpretation of arbitrary thresholds limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph's functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
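A minimal sketch of the underlying idea, assuming Gaussian likelihoods fitted to matching and non-matching descriptor distances (the paper's probability model and its online estimation are more elaborate than this):

```python
# Convert a whole-image descriptor distance into a match probability via
# Bayes' rule, using Gaussian likelihoods fitted to training distances.
import numpy as np
from scipy.stats import norm

def fit_distance_models(match_dists, nonmatch_dists):
    """Fit one Gaussian to distances between images of the same place and
    one to distances between images of different places."""
    return (norm(np.mean(match_dists), np.std(match_dists)),
            norm(np.mean(nonmatch_dists), np.std(nonmatch_dists)))

def match_probability(d, match_model, nonmatch_model, prior_match=0.01):
    """Posterior P(same place | distance d)."""
    p_m = match_model.pdf(d) * prior_match
    p_n = nonmatch_model.pdf(d) * (1.0 - prior_match)
    return p_m / (p_m + p_n)
```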


Neural Networks | 2015

Bio-inspired homogeneous multi-scale place recognition

Stephanie M. Lowry; Adam Jacobson; Michael E. Hasselmo; Michael Milford

Robotic mapping and localization systems typically operate at either one fixed spatial scale, or over two, combining a local metric map and a global topological map. In contrast, recent high-profile discoveries in neuroscience have indicated that animals such as rodents navigate the world using multiple parallel maps, with each map encoding the world at a specific spatial scale. While a number of theoretical-only investigations have hypothesized several possible benefits of such a multi-scale mapping system, no one has comprehensively investigated the potential mapping and place recognition performance benefits for navigating robots in large real-world environments, especially using more than two homogeneous map scales. In this paper we present a biologically-inspired multi-scale mapping system mimicking the rodent multi-scale map. Unlike hybrid metric-topological multi-scale robot mapping systems, this new system is homogeneous, distinguishable only by scale, like rodent neural maps. We present methods for training each network to learn and recognize places at a specific spatial scale, and techniques for combining the output from each of these parallel networks. This approach differs from traditional probabilistic robotic methods, where place recognition spatial specificity is passively driven by models of sensor uncertainty. Instead we intentionally create parallel learning systems that learn associations between sensory input and the environment at different spatial scales. We also conduct a systematic series of experiments and parameter studies that determine the effect on performance of using different neural map scaling ratios and different numbers of discrete map scales. The results demonstrate that a multi-scale approach universally improves place recognition performance and is capable of producing better than state-of-the-art performance compared to existing robotic navigation algorithms. We analyze the results and discuss the implications with respect to several recent discoveries and theories regarding how multi-scale neural maps are learnt and used in the mammalian brain.
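A toy sketch of combining the same matcher at several homogeneous scales: the paper trains a neural network per scale, whereas here a "scale" is simply a window of consecutive frames and the scale values are placeholders.

```python
# Combine place-match scores computed at several homogeneous spatial scales.
import numpy as np

def multiscale_scores(diff_matrix, scales=(1, 4, 16)):
    """diff_matrix[i, j]: dissimilarity of query frame i vs database frame j.
    Returns a combined score matrix in which lower values are better matches."""
    n_q, n_db = diff_matrix.shape
    combined = np.zeros_like(diff_matrix, dtype=float)
    for s in scales:
        if s > min(n_q, n_db):
            continue  # skip scales larger than the available data
        pooled = np.full((n_q, n_db), np.nan)
        for i in range(n_q - s + 1):
            for j in range(n_db - s + 1):
                # average dissimilarity along an aligned window of s frames
                pooled[i, j] = diff_matrix[i:i + s, j:j + s].trace() / s
        pooled = np.nan_to_num(pooled, nan=np.nanmax(pooled))
        combined += pooled / pooled.max()  # give each scale equal weight
    return combined
```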


Intelligent Robots and Systems | 2015

Distance metric learning for feature-agnostic place recognition

Stephanie M. Lowry; Adam Jacobson; ZongYuan Ge; Michael Milford

The recent focus on performing visual navigation and place recognition in changing environments has resulted in a large number of heterogeneous techniques, each utilizing its own learnt or hand-crafted visual features. This paper presents a generally applicable method for learning the appropriate distance metric by which to compare feature responses from any of these techniques in order to perform place recognition under changing environmental conditions. We implement an approach which learns to cluster images captured at spatially proximal locations under different conditions, separated from frames captured at different places. The formulation is a convex optimization, guaranteeing the existence of a global solution. We evaluate the general applicability of our method on two benchmark change datasets using three typical image pre-processing and feature types: GIST, Principal Component Analysis and learnt Convolutional Neural Network features. The results demonstrate that the distance metric learning approach uniformly improves single-image-based visual place recognition performance across all feature types. Furthermore, we demonstrate that this performance improvement is maintained when the sequence-based algorithm SeqSLAM is applied to the single-image place recognition results, leading to state-of-the-art performance.
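A hedged sketch of the general idea: learn a Mahalanobis-style metric that pulls same-place descriptor pairs together and pushes different-place pairs apart. The projected-gradient loop below is an illustrative stand-in for the paper's convex formulation, and all parameters are placeholders.

```python
# Toy Mahalanobis metric learning for comparing place-recognition descriptors.
import numpy as np

def learn_metric(X, Y, same_place, n_iters=200, lr=1e-3):
    """X, Y: (n_pairs, d) descriptor pairs; same_place: boolean array.
    Minimizes sum of signed squared metric distances, projecting the metric
    matrix back onto the PSD cone after each step (a regularizer or norm
    constraint would be needed for a serious implementation)."""
    d = X.shape[1]
    M = np.eye(d)
    sign = np.where(same_place, 1.0, -1.0)  # pull same-place, push different
    diffs = X - Y
    for _ in range(n_iters):
        grad = (diffs * sign[:, None]).T @ diffs / len(diffs)
        M -= lr * grad
        w, V = np.linalg.eigh(M)             # project onto PSD cone
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M

def metric_distance(x, y, M):
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))
```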


Intelligent Robots and Systems | 2015

Building beliefs: Unsupervised generation of observation likelihoods for probabilistic localization in changing environments

Stephanie M. Lowry; Michael Milford

This paper is concerned with the interpretation of visual information for robot localization. It presents a probabilistic localization system that generates an appropriate observation model online, unlike existing systems which require pre-determined belief models. This paper proposes that probabilistic visual localization requires two major operating modes: one to match locations under similar conditions and the other to match locations under different conditions. We develop dual observation likelihood models to suit these two different states, along with a similarity measure-based method that identifies the current conditions and switches between the models. The system is experimentally tested against different types of ongoing appearance change. The results demonstrate that the system is compatible with a wide range of visual front-ends, and the dual-model system outperforms a single-model or pre-trained approach and state-of-the-art localization techniques.
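A minimal sketch of the dual-model switching idea; the Gaussian likelihood parameters and the switching threshold below are invented placeholders, not values from the paper.

```python
# Switch between two observation likelihood models depending on whether the
# current frame appears to be captured under similar or changed conditions.
from scipy.stats import norm

SIMILAR = norm(loc=0.2, scale=0.1)   # placeholder: distances under matching conditions
CHANGED = norm(loc=0.6, scale=0.2)   # placeholder: distances under appearance change

def observation_likelihood(descriptor_distance, best_distance_in_frame,
                           switch_threshold=0.4):
    """Pick the likelihood model from how good the best match in the current
    frame is, then evaluate the candidate distance under that model."""
    model = SIMILAR if best_distance_in_frame < switch_threshold else CHANGED
    return model.pdf(descriptor_distance)
```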


ARC Centre of Excellence for Robotic Vision; Science & Engineering Faculty | 2014

Unsupervised Online Learning of Condition-Invariant Images for Place Recognition

Stephanie M. Lowry; Gordon Wyeth; Michael Milford


Science & Engineering Faculty | 2013

Training-free probability models for whole-image based place recognition

Stephanie M. Lowry; Gordon Wyeth; Michael Milford


International Conference on Robotics and Automation | 2018

LOGOS: Local Geometric Support for High-Outlier Spatial Verification

Stephanie M. Lowry; Henrik Andreasson

Collaboration


Dive into Stephanie M. Lowry's collaborations.

Top Co-Authors

Michael Milford (Queensland University of Technology)
Gordon Wyeth (Queensland University of Technology)
Niko Sünderhauf (Queensland University of Technology)
Adam Jacobson (Queensland University of Technology)
Peter Corke (Queensland University of Technology)
John J. Leonard (Massachusetts Institute of Technology)
Ben Upcroft (Queensland University of Technology)