Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Arren Glover is active.

Publication


Featured research published by Arren Glover.


International Conference on Robotics and Automation | 2010

FAB-MAP + RatSLAM: Appearance-based SLAM for multiple times of day

Arren Glover; William P. Maddern; Michael Milford; Gordon Wyeth

Appearance-based mapping and localisation is especially challenging when separate processes of mapping and localisation occur at different times of day. The problem is exacerbated in the outdoors where continuous change in sun angle can drastically affect the appearance of a scene. We confront this challenge by fusing the probabilistic local feature based data association method of FAB-MAP with the pose cell filtering and experience mapping of RatSLAM. We evaluate the effectiveness of our amalgamation of methods using five datasets captured throughout the day from a single camera driven through a network of suburban streets. We show further results when the streets are re-visited three weeks later, and draw conclusions on the value of the system for lifelong mapping.
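
The fusion can be pictured as injecting appearance-match probabilities from FAB-MAP into a RatSLAM-style pose-cell activity profile. The sketch below is a minimal illustration of that idea, not the authors' implementation; the 1-D pose-cell layout, the number of places, the injection gain and the excitation width are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch (not the authors' code): fuse an appearance-based match
# probability (FAB-MAP-style) into a simple 1-D "pose cell" activity vector
# (RatSLAM-style). The number of places, gain and excitation width are
# illustrative assumptions.

N_PLACES = 100
pose_cells = np.ones(N_PLACES) / N_PLACES        # uniform initial belief over places

def inject_appearance_match(pose_cells, match_probs, gain=0.5):
    """Add activity at the places FAB-MAP believes match the current image."""
    boosted = pose_cells + gain * match_probs
    return boosted / boosted.sum()               # re-normalise the belief

def local_excitation(pose_cells, sigma=2.0, support=5):
    """RatSLAM-like local excitation: neighbouring place hypotheses support
    each other through a small Gaussian-weighted circular convolution."""
    offsets = np.arange(-support, support + 1)
    weights = np.exp(-0.5 * (offsets / sigma) ** 2)
    weights /= weights.sum()
    smoothed = sum(w * np.roll(pose_cells, k) for k, w in zip(offsets, weights))
    return smoothed / smoothed.sum()

# Example update: FAB-MAP reports a strong appearance match at place 42.
match_probs = np.zeros(N_PLACES)
match_probs[42] = 0.9
pose_cells = local_excitation(inject_appearance_match(pose_cells, match_probs))
print("most likely place:", int(np.argmax(pose_cells)))
```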


International Conference on Robotics and Automation | 2012

OpenFABMAP: An open source toolbox for appearance-based loop closure detection

Arren Glover; William P. Maddern; Michael Warren; Stephanie Reid; Michael Milford; Gordon Wyeth

Appearance-based loop closure techniques, which leverage the high information content of visual images and can be used independently of pose, are now widely used in robotic applications. The current state of the art in the field is Fast Appearance-Based Mapping (FAB-MAP), which has been demonstrated in several seminal robotic mapping experiments. In this paper, we describe OpenFABMAP, a fully open source implementation of the original FAB-MAP algorithm. Beyond the benefits of full user access to the source code, OpenFABMAP provides a number of configurable options including rapid codebook training and interest point feature tuning. We demonstrate the performance of OpenFABMAP on a number of published datasets and show the advantages of quick algorithm customisation. We present results from OpenFABMAP's application in a highly varied range of robotics research scenarios.
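
As a rough picture of what an appearance-based loop-closure detector does (heavily simplified, and not the OpenFABMAP API), the sketch below quantises local feature descriptors against a visual codebook and compares bag-of-words histograms; the codebook size, descriptor dimension and match threshold are invented for illustration.

```python
import numpy as np

# Illustrative sketch of appearance-based loop-closure detection with a visual
# codebook (the idea behind FAB-MAP), NOT the OpenFABMAP API. The codebook,
# descriptor dimension and threshold are assumptions for the example.

rng = np.random.default_rng(0)
codebook = rng.normal(size=(500, 64))            # 500 visual words, 64-D features

def bow_histogram(descriptors, codebook):
    """Quantise local feature descriptors to their nearest visual word."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)

def detect_loop_closure(query_hist, map_hists, new_place_threshold=0.3):
    """Return the index of the best-matching previous place, or -1 for a new place."""
    if not map_hists:
        return -1
    sims = [np.dot(query_hist, h) / (np.linalg.norm(query_hist) * np.linalg.norm(h) + 1e-9)
            for h in map_hists]
    best = int(np.argmax(sims))
    return best if sims[best] > new_place_threshold else -1

# Usage: each incoming image's descriptors become a histogram, compared to the map.
map_hists = []
descriptors = rng.normal(size=(200, 64))         # stand-in for SURF/SIFT descriptors
h = bow_histogram(descriptors, codebook)
print("match:", detect_loop_closure(h, map_hists))   # -1: first image is a new place
map_hists.append(h)
```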


International Conference on Robotics and Automation | 2014

Condition-invariant, top-down visual place recognition

Michael Milford; Walter J. Scheirer; Eleonora Vig; Arren Glover; Oliver Baumann; Jason B. Mattingley; David Cox

In this paper we present a novel, condition-invariant place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines fast but intolerant low-resolution whole-image matching with highly tolerant sub-image patch matching. The approach does not require prior training and works on single images, alleviating the need for either a velocity signal or image sequence, differentiating it from current state-of-the-art methods. We conduct an exhaustive set of experiments evaluating the relationship between place recognition performance and computational resources using part of the challenging Alderley sunny day - rainy night dataset, which has previously only been solved by integrating over 320-frame-long image sequences. We achieve recall rates of up to 51% at 100% precision, matching places that have undergone drastic perceptual change while rejecting match hypotheses between highly aliased images of different places. Human trials demonstrate the performance is approaching human capability. The results provide a new benchmark for single-image, condition-invariant place recognition.
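
A hypothetical sketch of the two-stage idea, assuming grayscale images as NumPy arrays: a cheap low-resolution whole-image comparison shortlists candidate places, and patch-normalised sub-image matching then picks the best one. The thumbnail size, patch size, stride and shortlist length are illustrative, not the paper's parameters.

```python
import numpy as np

# Two-stage place recognition sketch (illustrative, not the paper's code):
# coarse low-resolution matching shortlists places, local patch matching
# with per-patch normalisation then verifies the best candidate.

def downsample(img, size=8):
    """Crude box downsample of a grayscale image to a size x size thumbnail."""
    h, w = img.shape
    cropped = img[:h - h % size, :w - w % size]
    return cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))

def whole_image_distance(a, b):
    return np.abs(downsample(a) - downsample(b)).mean()

def patch_distance(a, b, patch=16, stride=16):
    """Mean of per-patch normalised differences (tolerant, sub-image matching)."""
    dists = []
    for y in range(0, a.shape[0] - patch + 1, stride):
        for x in range(0, a.shape[1] - patch + 1, stride):
            pa = a[y:y + patch, x:x + patch]
            pb = b[y:y + patch, x:x + patch]
            pa = (pa - pa.mean()) / (pa.std() + 1e-6)   # local normalisation helps
            pb = (pb - pb.mean()) / (pb.std() + 1e-6)   # with condition change
            dists.append(np.abs(pa - pb).mean())
    return float(np.mean(dists))

def recognise(query, database, shortlist=3):
    coarse = sorted(range(len(database)), key=lambda i: whole_image_distance(query, database[i]))
    return min(coarse[:shortlist], key=lambda i: patch_distance(query, database[i]))

# Usage with placeholder images (random noise standing in for camera frames):
rng = np.random.default_rng(0)
database = [rng.random((64, 64)) for _ in range(5)]
query = database[2] + 0.1 * rng.random((64, 64))          # a noisy revisit of place 2
print("recognised place:", recognise(query, database))
```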


Science & Engineering Faculty | 2014

Large Scale Monocular Vision-Only Mapping from a Fixed-Wing sUAS

Michael Warren; David McKinnon; Hu He; Arren Glover; Michael Shiel; Ben Upcroft

This paper presents the application of monocular visual SLAM on a fixed-wing small Unmanned Aerial System (sUAS), capable of simultaneous estimation of aircraft pose and scene structure. We demonstrate the robustness of unconstrained vision alone in producing reliable pose estimates of an sUAS at altitude. It is ultimately capable of online state estimation feedback for aircraft control and next-best-view estimation for complete map coverage without the use of additional sensors. We explore some of the challenges of visual SLAM from an sUAS, including dealing with planar structure, distant scenes and noisy observations. The developed techniques are applied to vision data gathered from a fast-moving fixed-wing radio control aircraft flown over a 1 × 1 km rural area at an altitude of 20–100 m. We present both raw Structure from Motion results and a SLAM solution that includes FAB-MAP based loop-closures and graph-optimised pose. Timing information is also presented to demonstrate near online capabilities. We compare the accuracy of the 6-DOF pose estimates to an off-the-shelf GPS-aided INS over a 1.7 km trajectory. We also present output 3D reconstructions of the observed scene structure and texture that demonstrate future applications in autonomous monitoring and surveying.
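
Comparing a vision-only trajectory against a GPS-aided INS reference is typically done by aligning the two trajectories with a similarity transform and reporting the RMS position error; the sketch below shows that generic recipe (Umeyama alignment), not the authors' evaluation code, and the trajectories are placeholders.

```python
import numpy as np

# Generic sketch of comparing a vision-only trajectory to a GPS/INS reference:
# align corresponding positions with a similarity transform (Umeyama), then
# report the RMS position error. The trajectories below are placeholders.

def align_and_rmse(est, ref):
    """est, ref: (N, 3) arrays of corresponding positions; returns RMS error."""
    n = len(est)
    mu_e, mu_r = est.mean(0), ref.mean(0)
    E, F = est - mu_e, ref - mu_r
    cov = F.T @ E / n                              # cross-covariance of the point sets
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1                               # avoid a reflection
    R = U @ D @ Vt                                 # rotation mapping est onto ref
    c = np.trace(np.diag(S) @ D) / ((E ** 2).sum() / n)   # scale (monocular is up-to-scale)
    aligned = c * E @ R.T + mu_r
    return float(np.sqrt(((aligned - ref) ** 2).sum(axis=1).mean()))

rng = np.random.default_rng(0)
ref = np.cumsum(rng.normal(size=(200, 3)), axis=0)        # placeholder GPS/INS trajectory
est = 0.5 * ref + rng.normal(scale=0.1, size=(200, 3))    # scaled, noisy vision estimate
print("RMSE after alignment:", align_and_rmse(est, ref))
```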


International Conference on Robotics and Automation | 2011

Lingodroids: Studies in spatial cognition and language

Ruth Schulz; Arren Glover; Michael Milford; Gordon Wyeth; Janet Wiles

The Lingodroids are a pair of mobile robots that evolve a language for places and relationships between places (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places - even places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them, and later, when the doors are opened, use those representations to perform goal-directed behavior.
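
A toy illustration of a location-naming game of this flavour (not the Lingodroids software): two agents each keep a private lexicon mapping invented words to places in their own maps, and a shared word emerges when they play the game at the same physical spot. The syllable set and grid-cell places are invented for the example.

```python
import random

# Toy naming-game sketch (not the Lingodroids software): each robot keeps its
# own lexicon mapping invented words to places in its own map. Places here are
# just grid cells; word forms are random syllables.

SYLLABLES = ["ku", "ri", "za", "mo", "pe", "lo"]

def invent_word():
    return "".join(random.choice(SYLLABLES) for _ in range(2))

class Robot:
    def __init__(self):
        self.lexicon = {}            # word -> place, in *this* robot's coordinates

    def word_for(self, place):
        for word, p in self.lexicon.items():
            if p == place:
                return word
        return None

    def hear(self, word, place):
        self.lexicon.setdefault(word, place)

def play_game(speaker, hearer, speaker_place, hearer_place):
    """Both robots are at the same physical spot but hold their own coordinates."""
    word = speaker.word_for(speaker_place) or invent_word()
    speaker.hear(word, speaker_place)
    hearer.hear(word, hearer_place)
    return word

a, b = Robot(), Robot()
shared_word = play_game(a, b, speaker_place=(2, 3), hearer_place=(2, 4))
print(shared_word, a.lexicon, b.lexicon)
```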


Intelligent Robots and Systems | 2016

Fast event-based Harris corner detection exploiting the advantages of event-driven cameras

Valentina Vasco; Arren Glover; Chiara Bartolozzi

The detection of consistent feature points in an image is fundamental for various kinds of computer vision techniques, such as stereo matching, object recognition, target tracking and optical flow computation. This paper presents an event-based approach to the detection of corner points, which benefits from the high temporal resolution, compressed visual information and low latency provided by an asynchronous neuromorphic event-based camera. The proposed method adapts the commonly used Harris corner detector to the event-based data, in which frames are replaced by a stream of asynchronous events produced in response to local light changes at μs temporal resolution. Responding only to changes in its field of view, an event-based camera naturally enhances edges in the scene, simplifying the detection of corner features. We characterised and tested the method on both a controlled pattern and a real scenario, using the dynamic vision sensor (DVS) on the neuromorphic iCub robot. The method detects corners with a typical error distribution within 2 pixels. The error is constant for different motion velocities and directions, indicating a consistent detection across the scene and over time. We achieve a detection rate proportional to speed, higher than that of the frame-based technique for a significant amount of motion in the scene, while also reducing the computational cost.
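
A minimal sketch of the event-based Harris idea, under the assumption that recent events are accumulated on a binary surface and the Harris response is evaluated on a local window around each new event; the sensor resolution, window size and threshold are illustrative, not the paper's values.

```python
import numpy as np

# Minimal sketch of an event-driven Harris detector in the spirit described
# above (not the paper's implementation): recent events are accumulated on a
# binary surface and the Harris response is evaluated on a small window around
# each new event. Resolution, window size, k and the threshold are illustrative.

HEIGHT, WIDTH, WINDOW, K, THRESH = 240, 304, 9, 0.04, 0.5
surface = np.zeros((HEIGHT, WIDTH))              # 1 where a recent event occurred

def harris_score(patch, k=K):
    """Harris response of a small patch of the event surface."""
    gy, gx = np.gradient(patch.astype(float))
    w = np.outer(np.hanning(patch.shape[0]), np.hanning(patch.shape[1]))
    a, b, c = (w * gx * gx).sum(), (w * gx * gy).sum(), (w * gy * gy).sum()
    return (a * c - b * b) - k * (a + c) ** 2

def on_event(x, y):
    """Add a new event to the surface and report whether it looks like a corner."""
    surface[y, x] = 1.0
    half = WINDOW // 2
    if half <= x < WIDTH - half and half <= y < HEIGHT - half:
        patch = surface[y - half:y + half + 1, x - half:x + half + 1]
        return harris_score(patch) > THRESH
    return False

# Example: synthetic events forming two edges that meet at (50, 50).
for x in range(30, 51):
    on_event(x, 50)
for y in range(30, 51):
    on_event(50, y)
print("corner point flagged:  ", on_event(50, 50))   # expect True
print("mid-edge point flagged:", on_event(40, 50))   # expect False
```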


International Conference on Advanced Robotics | 2017

Independent motion detection with event-driven cameras

Valentina Vasco; Arren Glover; Elias Mueggler; Davide Scaramuzza; Lorenzo Natale; Chiara Bartolozzi

Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ∼90% and show that the method is robust to changes in speed of both the head and the target.
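
The ego-motion model can be sketched as a regression from joint velocities to expected corner velocities, fitted during a period with no independently moving objects; at run time, corners whose measured velocity disagrees with the prediction are flagged. The linear model, dimensions and threshold below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Sketch of the idea described above (not the authors' code): fit a model
# predicting corner image velocities from the robot's joint velocities during
# a phase with no independently moving objects, then flag corners whose
# measured velocity disagrees with the prediction. All numbers are assumptions.

rng = np.random.default_rng(1)

# --- "Calibration": joint velocities (N, 3) -> corner velocities (N, 2) ---
joint_vel = rng.normal(size=(500, 3))
true_map = np.array([[2.0, 0.0], [0.0, 1.5], [0.3, -0.2]])   # unknown to the robot
corner_vel = joint_vel @ true_map + rng.normal(scale=0.05, size=(500, 2))

W, *_ = np.linalg.lstsq(joint_vel, corner_vel, rcond=None)   # learned ego-motion model

def is_independent(joint_velocity, measured_corner_velocity, threshold=0.5):
    """Flag a corner whose velocity is inconsistent with predicted ego-motion flow."""
    predicted = joint_velocity @ W
    return np.linalg.norm(measured_corner_velocity - predicted) > threshold

# A corner consistent with ego-motion vs. one belonging to a moving target.
jv = np.array([0.2, -0.1, 0.05])
print(is_independent(jv, jv @ true_map))                 # False: matches ego-motion
print(is_independent(jv, jv @ true_map + [1.0, 0.0]))    # True: extra independent motion
```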


IEEE Transactions on Cognitive and Developmental Systems | 2018

Toward Lifelong Affordance Learning Using a Distributed Markov Model

Arren Glover; Gordon Wyeth

Robots are able to learn how to interact with objects by developing computational models of affordance. This paper presents an approach in which learning and operation occur concurrently, toward achieving lifelong affordance learning. In such a regime a robot must be able to learn about new objects, but without a general rule for what an “object” is, the robot must learn about all elements of the environment to determine their affordances. In this paper, sensorimotor coordination is modeled using a distributed semi-Markov decision process; the model is created online during robot operation, and performs continual action selection to reach a goal state. In an initial experiment we show that this model captures an object’s affordances, which are exploited to perform several different tasks using a mobile robot equipped with a gripper and infrared “tactile” sensor. In a secondary experiment, we show that the robot can learn that the marker is the only visual feature that can be gripped and that walls and floor do not have the affordance of being “grip-able.” The distributed mechanism is necessary for modeling multiple sensory stimuli simultaneously, and the selection of the object with the necessary affordances for the task emerges from the robot’s actions, while other perceived parts of the environment, such as walls and floors, are ignored.
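
A toy sketch of the general flavour (a far cry from the paper's distributed semi-Markov model): transition statistics are accumulated online and actions are chosen by their empirical probability of reaching a goal state. The states, actions and simulated outcomes are invented purely to make the example runnable.

```python
import random
from collections import defaultdict

# Toy sketch of online affordance-style learning (not the paper's distributed
# semi-Markov implementation): transition statistics state --action--> next
# are accumulated during operation, and actions are chosen by their empirical
# probability of reaching the goal state. States and actions are invented.

counts = defaultdict(lambda: defaultdict(int))   # counts[(state, action)][next_state]

def update(state, action, next_state):
    counts[(state, action)][next_state] += 1

def choose_action(state, actions, goal):
    """Pick the action with the highest observed probability of reaching the goal."""
    def p_goal(a):
        outcomes = counts[(state, a)]
        total = sum(outcomes.values())
        return outcomes[goal] / total if total else 0.0
    best = max(actions, key=p_goal)
    return best if p_goal(best) > 0 else None

# Example world: gripping succeeds only when a "marker" is seen, never at a "wall".
ACTIONS = ["grip", "drive"]
for _ in range(200):
    s = random.choice(["marker_seen", "wall_seen"])
    a = random.choice(ACTIONS)
    ns = "gripped" if (s == "marker_seen" and a == "grip") else s
    update(s, a, ns)

print(choose_action("marker_seen", ACTIONS, goal="gripped"))   # expect "grip"
print(choose_action("wall_seen", ACTIONS, goal="gripped"))     # None: wall is not grip-able
```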


Intelligent Robots and Systems | 2016

Event-driven ball detection and gaze fixation in clutter

Arren Glover; Chiara Bartolozzi

The fast temporal dynamics and intrinsic motion segmentation of event-based cameras are beneficial for robotic tasks that require low-latency visual tracking and control, for example a robot catching a ball. When the event-driven iCub humanoid robot grasps an object, its head and torso move, inducing camera motion, and tracked objects are no longer trivially segmented from the mass of background clutter. Current event-based tracking algorithms have mostly considered stationary cameras that have clean event-streams with minimal clutter. This paper introduces novel methods to extend the Hough-based circle detection algorithm using optical flow information that is readily extracted from the spatio-temporal event space. Results indicate the proposed directed-Hough algorithm is more robust to other moving objects and background event-clutter. Finally, we demonstrate successful on-line robot control and gaze following on the iCub robot.
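
The flow-directed voting idea can be sketched as follows (illustrative only, not the paper's implementation): instead of voting over the full circle of radius R around each event, an event votes only at the two candidate centres lying a distance R along its optical-flow direction, which concentrates votes from the ball and suppresses clutter. The sensor size, radius and synthetic events are assumptions.

```python
import numpy as np

# Sketch of a flow-directed Hough vote for circle centres (illustrative, not
# the paper's implementation): each event votes only at the two points a
# distance RADIUS along its optical-flow direction, rather than over the full
# circle. Image size, radius and flow values are assumptions.

HEIGHT, WIDTH, RADIUS = 240, 304, 20
accumulator = np.zeros((HEIGHT, WIDTH))

def vote(x, y, flow):
    """Cast directed Hough votes for an event at (x, y) with flow (fx, fy)."""
    norm = np.hypot(*flow)
    if norm < 1e-6:
        return
    ux, uy = flow[0] / norm, flow[1] / norm
    for sign in (+1, -1):                        # the centre may lie on either side
        cx = int(round(x + sign * RADIUS * ux))
        cy = int(round(y + sign * RADIUS * uy))
        if 0 <= cx < WIDTH and 0 <= cy < HEIGHT:
            accumulator[cy, cx] += 1

# Example: events on a circle of radius 20 centred at (150, 120), with flow
# pointing radially, as it would for an expanding or contracting ball edge.
for theta in np.linspace(0, 2 * np.pi, 200, endpoint=False):
    ex, ey = 150 + RADIUS * np.cos(theta), 120 + RADIUS * np.sin(theta)
    vote(int(round(ex)), int(round(ey)), flow=(np.cos(theta), np.sin(theta)))

peak = np.unravel_index(accumulator.argmax(), accumulator.shape)
print("detected centre (y, x):", peak)           # should be near (120, 150)
```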


IEEE-RAS International Conference on Humanoid Robots | 2016

Vergence control with a neuromorphic iCub

Valentina Vasco; Arren Glover; Yeshasvi Tirupachuri; Fabio Solari; Manuela Chessa; Chiara Bartolozzi

Vergence control and tracking allow a robot to maintain an accurate estimate of a dynamic object in three dimensions, improving depth estimation at the fixation point. Brain-inspired implementations of vergence control are based on models of complex binocular cells of the visual cortex sensitive to disparity. The energy of the cells' activation provides a disparity-related signal that can be reliably used for vergence control. We implemented such a model on the neuromorphic iCub, equipped with a pair of brain-inspired vision sensors. Such sensors provide low-latency, compressed and high temporal resolution visual information related to changes in the scene. We demonstrate the feasibility of a fully neuromorphic system for vergence control and show that this implementation works in real-time, providing fast and accurate control for a stimulus moving at up to 2 Hz, considerably decreasing the latency associated with frame-based cameras. Additionally, thanks to the high dynamic range of the sensor, the control shows the same accuracy under very different illumination conditions.
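
A much-simplified sketch of a disparity-energy-style vergence signal, assuming frame-like left and right image rows rather than event streams: a complex Gabor filter is applied around the fixation point in each eye, and the phase difference of the responses yields a disparity estimate that drives the vergence command. The filter parameters, gain and test signal are invented for the example.

```python
import numpy as np

# Simplified sketch of a disparity-energy-style vergence signal (illustrative,
# not the paper's neuromorphic implementation): a complex Gabor filter is
# applied to matching left and right image rows around the fixation point, and
# the phase difference of the responses gives a disparity estimate that drives
# the vergence command. Filter parameters and the gain are assumptions.

FREQ, SIGMA = 0.1, 8.0                           # cycles/pixel, envelope width

def gabor_response(signal_1d):
    """Complex response of a Gabor filter centred on the patch."""
    x = np.arange(len(signal_1d)) - len(signal_1d) / 2
    gabor = np.exp(-x ** 2 / (2 * SIGMA ** 2)) * np.exp(2j * np.pi * FREQ * x)
    return np.sum(signal_1d * gabor)

def vergence_command(left_row, right_row, gain=0.5):
    """Command proportional to the estimated disparity (sign is a convention)."""
    phase_diff = np.angle(gabor_response(left_row) * np.conj(gabor_response(right_row)))
    disparity = phase_diff / (2 * np.pi * FREQ)  # in pixels
    return gain * disparity

# Example: the right row is the left row shifted by 3 pixels (3-pixel disparity).
left = np.sin(2 * np.pi * FREQ * np.arange(64)) + 0.1 * np.random.randn(64)
right = np.roll(left, 3)
print("vergence command:", vergence_command(left, right))
```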

Collaboration


Dive into Arren Glover's collaborations.

Top Co-Authors

Gordon Wyeth, Queensland University of Technology
Michael Milford, Queensland University of Technology
Chiara Bartolozzi, Istituto Italiano di Tecnologia
Valentina Vasco, Istituto Italiano di Tecnologia
Janet Wiles, University of Queensland
Michael Warren, Queensland University of Technology
Ruth Schulz, University of Queensland
Ben Upcroft, Queensland University of Technology
David McKinnon, University of Queensland