
Publication


Featured research published by Edward Pepperell.


Robotics: Science and Systems | 2015

Place Recognition with ConvNet Landmarks: Viewpoint-Robust, Condition-Robust, Training-Free

Niko Sünderhauf; Sareh Shirazi; Adam Jacobson; Feras Dayoub; Edward Pepperell; Ben Upcroft; Michael Milford

Place recognition has long been an incompletely solved problem in that all approaches involve significant compromises. Current methods address many but never all of the critical challenges of place recognition: viewpoint invariance, condition invariance, and minimal training requirements. Here we present an approach that adapts state-of-the-art object proposal techniques to identify potential landmarks within an image for place recognition. We use convolutional neural network features to identify matching landmark proposals between images, performing place recognition over extreme appearance and viewpoint variations. Our system does not require any form of training; all components are generic enough to be used off-the-shelf. We present a range of challenging experiments in varied viewpoint and environmental conditions and demonstrate superior performance to current state-of-the-art techniques. Furthermore, by building on existing and widely used recognition frameworks, this approach provides a highly compatible place recognition system with the potential for easy integration of other techniques such as object detection and semantic scene interpretation.
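As a sketch of the matching step this abstract describes: assuming each image's landmark proposals have already been encoded as CNN feature vectors (the paper's actual proposal method and network are not reproduced here), mutual-nearest-neighbour cosine matching between the two landmark sets could look like this:

```python
import numpy as np

def cosine_sim(A, B):
    """Pairwise cosine similarity between rows of A (n, d) and B (m, d)."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def match_landmarks(feats_a, feats_b, min_sim=0.5):
    """Mutual-nearest-neighbour matching of landmark feature vectors.

    Returns (i, j, similarity) triples where landmark i in image A and
    landmark j in image B each pick the other as their best match."""
    S = cosine_sim(feats_a, feats_b)
    best_b = S.argmax(axis=1)   # best match in B for each landmark in A
    best_a = S.argmax(axis=0)   # best match in A for each landmark in B
    matches = []
    for i, j in enumerate(best_b):
        if best_a[j] == i and S[i, j] >= min_sim:
            matches.append((i, j, S[i, j]))
    return matches

def image_similarity(feats_a, feats_b):
    """Score an image pair by the summed similarity of matched landmarks."""
    return sum(s for _, _, s in match_landmarks(feats_a, feats_b))
```

The mutual-consistency check is what makes the scheme training-free: no threshold has to be learnt per environment, only a loose `min_sim` floor to discard spurious pairs.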


Computer Vision and Pattern Recognition | 2015

Sequence searching with deep-learnt depth for condition- and viewpoint-invariant route-based place recognition

Michael Milford; Stephanie M. Lowry; Niko Sünderhauf; Sareh Shirazi; Edward Pepperell; Ben Upcroft; Chunhua Shen; Guosheng Lin; Fayao Liu; Cesar Cadena; Ian D. Reid

Vision-based localization on robots and vehicles remains unsolved when extreme appearance change and viewpoint change are present simultaneously. Current state-of-the-art approaches to this challenge either deal with only one of the two problems, for example FAB-MAP (viewpoint invariance) or SeqSLAM (appearance invariance), or require extensive training within the test environment, an impractical requirement in many application scenarios. In this paper we significantly improve the viewpoint invariance of the SeqSLAM algorithm by using state-of-the-art deep learning techniques to generate synthetic viewpoints. Our approach differs from other deep learning approaches in that it does not rely on a CNN's ability to learn invariant features, only on its ability to produce “good enough” depth images from day-time imagery. We evaluate the system on a new multi-lane, day-night car dataset gathered specifically to test appearance and viewpoint change simultaneously. Results demonstrate that the use of synthetic viewpoints improves the maximum recall achieved at 100% precision by a factor of 2.2 and maximum recall by a factor of 2.7, enabling correct place recognition across multiple road lanes and significantly reducing the time between correct localizations.
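The sequence-search backbone that SeqSLAM contributes here can be sketched as follows. This minimal version assumes a precomputed frame-difference matrix and a single, known velocity; the real algorithm also normalises differences locally and searches over a range of trajectory slopes:

```python
import numpy as np

def seqslam_score(D, ds):
    """Given a difference matrix D[i, j] between query frame i and database
    frame j, score each database location by summing differences along a
    constant-velocity diagonal of length ds ending at the latest query frame.

    Returns the best-matching database index and its score (lower is better)."""
    n_q, n_db = D.shape
    assert n_q >= ds and n_db >= ds
    qs = np.arange(n_q - ds, n_q)                 # last `ds` query frames
    best_j, best_score = -1, np.inf
    for j in range(n_db - ds + 1):
        score = D[qs, np.arange(j, j + ds)].sum()  # one diagonal trajectory
        if score < best_score:
            best_j, best_score = j + ds - 1, score
    return best_j, best_score
```

Matching whole sequences rather than single frames is what gives the method its appearance invariance: one frame may be ambiguous at night, but a consistent run of moderately good matches rarely is.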


The International Journal of Robotics Research | 2016

Routed roads: Probabilistic vision-based place recognition for changing conditions, split streets and varied viewpoints

Edward Pepperell; Peter Corke; Michael Milford

Vision-based place recognition is becoming an increasingly viable component of navigation systems for autonomous robots and personal aids. However, attaining robustness to variations in environmental conditions—such as time of day, weather and season—and camera viewpoint remains a major challenge. Featureless, sequence-based place recognition techniques have demonstrated promise, but often rely on long image sequences, manually-tuned parameters and exhaustive sequence match searching through multiple locations and image scales. In this paper, we address these deficiencies by implementing a condition-invariant, sequence-based place recognition algorithm suitable for networked environments, such as city streets, and routes with lateral platform shift, such as multiple-lane roads. We achieve this capability by augmenting the traditional 1D image database with a directed graph to describe the branching of contiguous sections of imagery at intersections. A particle filter is then used to efficiently explore these paths, as well as various lateral positions synthesized by rescaling imagery. Our proposed approach eliminates manual tuning of sequence length parameters, improves localization on branched routes, improves overall place recognition accuracy and coverage, and reduces computational requirements. We evaluated the new method against the original SeqSLAM and SMART algorithms on two day–night, road-based datasets and a summer–winter train dataset, where it attained superior precision-recall performance and coverage in all environments. Together, these contributions represent a significant step towards the provision of a robust, near parameter-free condition- and viewpoint-invariant visual place recognition capability for vehicles and robots.
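A toy version of the particle filter over a route graph described above (the segment lengths, graph topology, motion noise, and resampling weights below are illustrative assumptions, not the paper's parameters):

```python
import random

def step_particles(particles, graph, seg_len, speed=1.0, noise=0.3):
    """Advance particles along a directed graph of road segments.

    Each particle is a (segment, position) pair. When a particle runs past
    the end of its segment it branches uniformly onto one of the successor
    segments, so hypotheses spread naturally through intersections."""
    moved = []
    for seg, pos in particles:
        pos += speed + random.gauss(0.0, noise)
        while pos >= seg_len[seg] and graph[seg]:
            pos -= seg_len[seg]
            seg = random.choice(graph[seg])
        moved.append((seg, min(pos, seg_len[seg] - 1e-9)))
    return moved

def resample(particles, weights):
    """Multinomial resampling by image-match weight: hypotheses on branches
    whose imagery matches the camera survive, the rest die out."""
    return random.choices(particles, weights=weights, k=len(particles))
```

Representing the map as a graph of contiguous image sequences is what removes the exhaustive multi-location search: only segments reachable from the surviving particles ever have to be compared against incoming imagery.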


International Conference on Robotics and Automation | 2015

Automatic image scaling for place recognition in changing environments

Edward Pepperell; Peter Corke; Michael Milford

Robustness to variations in environmental conditions and camera viewpoint is essential for long-term place recognition, navigation and SLAM. Existing systems typically solve only one of these problems; invariance to both remains a challenge. This paper presents a training-free approach to lateral viewpoint- and condition-invariant, vision-based place recognition. Our successive-frame patch-tracking technique infers average scene depth along traverses and automatically rescales views of the same place captured at different depths to increase their similarity. We combine our system with the condition-invariant SMART algorithm and demonstrate place recognition between day and night, across entire four-lane-plus-median-strip roads, where current algorithms fail.
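The patch-tracking idea can be sketched in two pieces: the apparent shift of a patch between successive frames serves as a proxy for inverse scene depth, and imagery is rescaled about its centre before comparison. The SAD template search and nearest-neighbour resampling below are simplifying assumptions; the paper's exact tracker is not reproduced:

```python
import numpy as np

def find_shift(template, frame, y, x, search=8):
    """Find the horizontal shift of `template` (taken at (y, x) in the
    previous frame) in the current frame by sum-of-absolute-differences
    search. Larger shifts at a given speed imply a closer scene."""
    h, w = template.shape
    best_dx, best_sad = 0, np.inf
    for dx in range(-search, search + 1):
        xs = x + dx
        if xs < 0 or xs + w > frame.shape[1]:
            continue
        sad = np.abs(frame[y:y + h, xs:xs + w] - template).sum()
        if sad < best_sad:
            best_dx, best_sad = dx, sad
    return best_dx

def rescale_center(img, scale):
    """Nearest-neighbour rescale about the image centre, keeping size.

    scale > 1 zooms in (as if the scene were closer), scale < 1 zooms out."""
    h, w = img.shape
    ys = np.clip(((np.arange(h) - h / 2) / scale + h / 2).astype(int), 0, h - 1)
    xs = np.clip(((np.arange(w) - w / 2) / scale + w / 2).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]
```

Averaging `find_shift` over many patches in a traverse gives a per-place depth proxy; the ratio of these proxies between two traverses of the same road supplies the `scale` at which views taken from different lanes become directly comparable.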


Science & Engineering Faculty | 2013

Towards persistent visual navigation using SMART

Edward Pepperell; Peter Corke; Michael Milford


ARC Centre of Excellence for Robotic Vision; Science & Engineering Faculty | 2014

Towards Vision-Based Pose- and Condition-Invariant Place Recognition along Routes

Edward Pepperell; Peter Corke; Michael Milford


International Conference on Robotics and Automation | 2014

Automated sensory data alignment for environmental and epidermal change monitoring

Michael Milford; Jennifer Firn; James Beattie; Adam Jacobson; Edward Pepperell; Eugene D. Mason; Michael G. Kimlin; Matthew Dunbabin


ARC Centre of Excellence for Robotic Vision; Science & Engineering Faculty | 2015

Repeatable Condition-Invariant Visual Odometry for Sequence-Based Place Recognition

Arren Glover; Edward Pepperell; Gordon Wyeth; Ben Upcroft; Michael Milford





Collaboration


Dive into Edward Pepperell's collaborations.

Top Co-Authors

Michael Milford, Queensland University of Technology
Peter Corke, Queensland University of Technology
Ben Upcroft, Queensland University of Technology
Adam Jacobson, Queensland University of Technology
Feras Dayoub, Queensland University of Technology
Niko Sünderhauf, Queensland University of Technology
Arren Glover, Istituto Italiano di Tecnologia
Eugene D. Mason, Queensland University of Technology