Andrew Vardy
Memorial University of Newfoundland
Publications
Featured research published by Andrew Vardy.
Biological Cybernetics | 2006
Ralf Möller; Andrew Vardy
In natural images, the distance measure between two images taken at different locations rises smoothly with increasing distance between the locations. This fact can be exploited for local visual homing where the task is to reach a goal location that is characterized by a snapshot image: descending in the image distance will lead the agent to the goal location. To compute an estimate of the spatial gradient in the distance measure, its value must be sampled at three noncollinear points. An animal or robot would have to insert exploratory movements into its home trajectory to collect these samples. Here we suggest a method based on the matched-filter concept that allows one to estimate the gradient without exploratory movements. Two matched filters – optical flow fields resulting from translatory movements in the horizontal plane – are used to predict two images in perpendicular directions from the current location. We investigate the relation to differential flow methods applied to the local homing problem and show that the matched-filter approach produces reliable homing behavior on image databases. Two alternative methods that only require a single matched filter are suggested. The matched-filter concept is also applied to derive a home-vector equation for a Fourier-based parameter method.
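The sampling baseline the abstract describes can be made concrete with a toy distance surface (a hypothetical quadratic bowl standing in for the image distance measure, which the paper notes rises smoothly with distance from the goal). Sampling its value at three noncollinear points yields a gradient estimate, and descending that gradient gives a home vector; the goal location and distance function here are illustrative assumptions, not the paper's image data.

```python
# Toy stand-in for the image distance measure: smooth, minimal at the goal.
GOAL = (3.0, 4.0)          # hypothetical goal location

def image_distance(x, y):
    """Rises smoothly with spatial distance from GOAL, like the paper's
    image distance between the current image and the snapshot."""
    return (x - GOAL[0]) ** 2 + (y - GOAL[1]) ** 2

def gradient_from_three_samples(x, y, eps=0.01):
    """Estimate the spatial gradient from three noncollinear samples:
    the exploratory movements that the matched-filter method avoids."""
    d0 = image_distance(x, y)
    dx = (image_distance(x + eps, y) - d0) / eps
    dy = (image_distance(x, y + eps) - d0) / eps
    return dx, dy

gx, gy = gradient_from_three_samples(0.0, 0.0)
home = (-gx, -gy)          # descending the distance points toward GOAL
```

The matched-filter contribution of the paper is precisely to replace the two exploratory samples with images predicted from translatory flow fields, so only `image_distance(x, y)` at the current location is ever measured.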
international symposium on wearable computers | 1999
Andrew Vardy; John A. Robinson; Li-Te Cheng
We show how images of a user's hand from a video camera attached to the underside of the wrist can be processed to yield finger movement information. Discrete (and discreet) movements of the fingers away from a rest position are translated into a small set of base symbols. These are interpreted as input to a wearable computer, providing unobtrusive control.
european conference on artificial life | 2003
Andrew Vardy; Franz Oppacher
We present a variant of the snapshot model [1] for insect visual homing. In this model a snapshot image is taken by an agent at the goal position. The disparity between current and snapshot images is subsequently used to guide the agent’s return. A matrix of local low-level processing elements is applied here to compute this disparity and transform it into a motion vector. This scheme contrasts with other variants of the snapshot model which operate on one-dimensional images, generally taken as views from a synthetic or simplified real-world setting. Our approach operates directly on two-dimensional images of the real world. Although this system is not a model of any known neural structure, it may offer more biological plausibility than competing techniques because the processing applied is low-level, and because the information processed appears to be of the same sort processed by insects. We present a comparison of results obtained on a set of real-world images.
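A minimal block-matching sketch conveys the flavour of computing a disparity field between snapshot and current images (this is an illustrative analogy, not the authors' processing matrix): for each patch of the snapshot, find the best-matching patch in the current image, and the resulting offsets form a disparity field that can be reduced to a motion vector. Images here are plain nested lists of intensities, and the shift and scene are synthetic.

```python
def best_match(snap, cur, r, c, k=3, search=2):
    """Offset (dr, dc) minimizing SSD between the k x k snapshot patch
    at (r, c) and current-image patches within +/- search pixels."""
    best_ssd, best_off = None, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            if not (0 <= r + dr and r + dr + k <= len(cur)
                    and 0 <= c + dc and c + dc + k <= len(cur[0])):
                continue
            ssd = sum((snap[r + i][c + j] - cur[r + dr + i][c + dc + j]) ** 2
                      for i in range(k) for j in range(k))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_off = ssd, (dr, dc)
    return best_off

# Synthetic scene: the current view is the snapshot shifted by (1, 2).
def intensity(r, c):
    return 17 * r + 31 * c

snap = [[intensity(r, c) for c in range(12)] for r in range(12)]
cur = [[intensity(r - 1, c - 2) for c in range(12)] for r in range(12)]
```

Averaging `best_match` offsets over many patches would give one crude motion vector; the paper's contribution is doing this with local low-level elements over real panoramic images.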
Autonomous Robots | 2007
Ralf Möller; Andrew Vardy; Sven Kreft; Sebastian Ruwisch
Gradient descent in image distances can lead a navigating agent to the goal location, but in environments with an anisotropic distribution of landmarks, gradient home vectors deviate from the true home direction. These deviations can be reduced by applying Newton’s method to matched-filter descent in image distances (MFDID). Based on several image databases we demonstrate that the home vectors produced by the Newton-based MFDID method are usually closer to the true home direction than those obtained from the original MFDID method. The greater accuracy of Newton-MFDID home vectors in the vicinity of the goal location would allow a navigating agent to approach the goal on straighter trajectories, improve the results of triangulation procedures, and enhance a robot’s ability to detect its arrival at a goal.
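A small numeric sketch shows why a Newton correction helps on an anisotropic distance surface (a hypothetical quadratic bowl, not the paper's image data): the raw gradient is skewed by unequal curvatures, while scaling it by the inverse Hessian recovers the true home direction.

```python
a, b = 4.0, 1.0                      # unequal curvatures ~ anisotropic landmarks
goal, pos = (2.0, 3.0), (0.0, 0.0)

# Gradient of d(x, y) = a*(x - gx)**2 + b*(y - gy)**2 at pos
grad = (2 * a * (pos[0] - goal[0]), 2 * b * (pos[1] - goal[1]))

# Plain gradient descent: -grad = (16, 6), which deviates from the
# true home direction (2, 3) because a != b.
gd_home = (-grad[0], -grad[1])

# Newton step -H^{-1} grad with Hessian H = diag(2a, 2b) recovers the
# true home vector exactly on this quadratic surface.
newton_home = (-grad[0] / (2 * a), -grad[1] / (2 * b))
```

On real image distances the Hessian must itself be estimated, so the correction reduces rather than eliminates the deviation, as the abstract states.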
robotics and biomimetics | 2006
Andrew Vardy
A biologically-inspired approach to robot route following is presented. Snapshot images of a robot's environment are captured while learning a route. Later, when retracing the route, the robot uses visual homing to move between positions where snapshot images had been captured. This general approach was inspired by experiments on route following in wood ants. The impact of odometric error and another key parameter is studied in relation to the number of snapshots captured by the learning algorithm. Tests in a photo-realistic simulated environment reveal that route following can succeed even on relatively sparse paths. A major change in illumination reduces, but does not eliminate, the robot's ability to retrace a route.
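The retracing loop can be sketched abstractly (an illustrative reduction, not the paper's implementation): the robot stores snapshot positions while learning, then homes to each stored position in turn. Visual homing itself is abstracted here as a bounded step toward the next waypoint, a stand-in for the image-based home vector.

```python
import math

def retrace(route, start, step=0.5, tol=0.3):
    """Follow a learned route: visually home to each stored snapshot
    position in turn, switching waypoints once within tolerance."""
    pos = list(start)
    trace = [tuple(pos)]
    for wp in route:
        while math.dist(pos, wp) > tol:
            d = math.dist(pos, wp)
            s = min(step, d)                 # never overshoot the waypoint
            pos[0] += s * (wp[0] - pos[0]) / d
            pos[1] += s * (wp[1] - pos[1]) / d
            trace.append(tuple(pos))
    return trace

trace = retrace([(1.0, 0.0), (2.0, 1.0)], (0.0, 0.0))
```

The paper's parameter study concerns how sparse the stored snapshots can be while the homing steps between them still succeed.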
symposium on underwater technology and workshop on scientific use of submarine cables and related technologies | 2011
Peter Vandrish; Andrew Vardy; Dan Walker; Octavia A. Dobre
The ability of an AUV to navigate an underwater environment precisely and for an extended period depends on its effectiveness at making accurate observations regarding its location and orientation. An AUV platform equipped with a side-scan sonar system has the potential to register the current sonar image with previously captured images for the purpose of obtaining information about the vehicle's pose. Image registration is a procedure which transforms images viewed from different perspectives into a single coordinate system. The significance of using image registration techniques in a surveying or monitoring context comes from the fact that the registration parameters could provide the AUV with an indication of the discrepancy between its expected and observed pose vectors. As such, image registration provides feedback which can be used to compensate for drift in inertial sensors or to provide a standalone navigation solution in the event that the inertial navigation system fails. In order for image registration to provide an effective means of feedback, a number of requirements on the performance of the image registration method employed must be met. Not only must the method be accurate in the face of possible image variations, but it must operate in real-time using the limited computing resources available within an AUV. In this paper, a number of key image registration techniques are applied to side-scan sonar images. These techniques include those based on the maximization of mutual information, log-polar cross-correlation, the Scale-Invariant Feature Transform (SIFT), and phase correlation. The performance of these techniques is assessed based on a number of metrics including execution time and registration accuracy. The challenges introduced by side-scan sonar imaging systems which degrade the performance of image registration are also discussed in detail.
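Phase correlation, one of the techniques the paper evaluates, can be sketched in a few lines: the normalized cross-power spectrum of two images differing by a pure translation has a phase ramp whose inverse FFT is a delta at the shift. The synthetic random images below stand in for side-scan sonar frames.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))
true_shift = (5, 12)                          # (rows, cols)
shifted = np.roll(img, true_shift, axis=(0, 1))

F1, F2 = np.fft.fft2(img), np.fft.fft2(shifted)
cross = F2 * np.conj(F1)
cross /= np.abs(cross) + 1e-12                # keep only the phase
corr = np.abs(np.fft.ifft2(cross))
peak = np.unravel_index(np.argmax(corr), corr.shape)
est_shift = tuple(int(p) for p in peak)       # recovered mod image size
```

Because only the phase is retained, the method is cheap (two FFTs and an inverse FFT) and insensitive to global intensity changes, which is part of its appeal for the real-time, limited-compute AUV setting the paper describes.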
intelligent robots and systems | 2008
David Churchill; Andrew Vardy
Local visual homing is the process of determining the direction of movement required to return an agent to a goal location by comparing the current image with an image taken at the goal, known as the snapshot image. One way of accomplishing visual homing is by computing the correspondences between features and then analyzing the resulting flow field to determine the correct direction of motion. Typically, some strong assumptions need to be posited in order to compute the home direction from the flow field. For example, it is difficult to locally distinguish translation from rotation, so many authors assume rotation to be computable by other means (e.g. magnetic compass). In this paper we present a novel approach to visual homing using scale change information from Scale Invariant Feature Transforms (SIFT) which we use to compute landmark correspondences. The method described here is able to determine the direction of the goal in the robot's frame of reference, irrespective of the relative 3D orientation with the goal.
Journal of Intelligent and Robotic Systems | 2012
David Churchill; Andrew Vardy
Visual homing is the ability of an agent to return to a goal position by comparing the currently viewed image with an image captured at the goal, known as the snapshot image. In this paper we present additional mathematical justification and experimental results for the visual homing algorithm first presented in Churchill and Vardy (2008). This algorithm, known as Homing in Scale Space, is far less constrained than existing methods in that it can infer the direction of translation without any estimation of the direction of rotation. Thus, it does not require the current and snapshot images to be captured from the same orientation (a limitation of some existing methods). The algorithm is novel in its use of the scale change of SIFT features as an indication of the change in the feature’s distance from the robot. We present results on a variety of image databases and on live robot trials.
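The core intuition of using SIFT scale change can be sketched as a voting scheme (a toy reduction, not the authors' algorithm): each matched feature votes with a unit vector along its bearing in the current image, toward features that appear smaller than in the snapshot (the robot moved away from them) and away from features that appear larger (the robot moved toward them).

```python
import math

def home_direction(features):
    """features: iterable of (bearing_rad, snapshot_scale, current_scale).
    Returns the estimated home bearing in radians."""
    hx = hy = 0.0
    for bearing, s_snap, s_cur in features:
        sign = 1.0 if s_cur < s_snap else -1.0   # smaller now -> goal is that way
        hx += sign * math.cos(bearing)
        hy += sign * math.sin(bearing)
    return math.atan2(hy, hx)
```

For example, if the robot moved along +x away from the goal, a feature ahead (bearing 0) grew in scale and a feature behind (bearing pi) shrank, and both votes point back along -x. Because scale change depends only on distance to the feature, not on the camera's orientation, this is consistent with the paper's claim that no rotation estimate is needed.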
Lecture Notes in Computer Science | 2005
Andrew Vardy; Franz Oppacher
A descriptor is presented for characterizing local image patches in a scale invariant manner. The descriptor is biologically-plausible in that the necessary computations are simple and local. Two different methods for robot visual homing based on this descriptor are also presented and tested. The first method utilizes the common technique of corresponding descriptors between images. The second method determines a home vector more directly by finding the stationary local image patch most similar between the two images. We find that the first method exceeds the performance of Franz et al.'s warping method. No statistically significant difference was found between the second method and the warping method.
ieee/oes autonomous underwater vehicles | 2014
Daniel Cook; Andrew Vardy; Ron Lewis
This paper presents a survey of a selection of currently available simulation software for robots and unmanned vehicles. In particular, the simulators selected are reviewed for their suitability for the simulation of Autonomous Underwater Vehicles (AUVs), as well as their suitability for the simulation of multi-vehicle operations. The criteria for selection are based on the following features: sufficient physical fidelity to allow modelling of manipulators and end effectors; a programmatic interface, via scripting or middleware; modelling of optical and/or acoustic sensors; adequate documentation; previous use in academic research. A subset of the selected simulators is reviewed in greater detail; these are UWSim, MORSE, and Gazebo. These simulators allow virtual sensors such as GPS, side-scan sonar, and multibeam sonar to be simulated, making them suitable for the design and simulation of navigation and mission planning algorithms. We conclude that simulation for underwater vehicles remains a niche problem, but with some additional effort researchers wishing to simulate such vehicles may do so, basing their work on existing software.