Alexander Dewar
University of Sussex
Publications
Featured research published by Alexander Dewar.
Adaptive Behavior | 2014
Alexander Dewar; Andrew Philippides; Paul Graham
The learning walks of ants are an excellent opportunity to study the interaction between brain, body and environment from which adaptive behaviour emerges. Learning walks are a behaviour with the specific function of storing visual information around a goal in order to simplify the computational problem of visual homing, that is, navigation back to a goal. However, it is not known at present why learning walks take the stereotypical shapes they do. Here we investigate how learning-walk form, visual surroundings and the interaction between the two affect homing performance in a range of virtual worlds when using a simple view-based homing algorithm. We show that the ideal form for a learning walk is environment-specific. We also demonstrate that the distant panorama and small objects at an intermediate distance, particularly when the panorama is obscured, are important aspects of the visual environment both when determining the ideal learning walk and when using stored views to navigate. Implications are discussed in the context of behavioural research into the learning walks of ants.
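As a concrete illustration of such a simple view-based homing step, here is a minimal sketch. It assumes panoramic views are grayscale NumPy arrays and that a hypothetical render_view(position) function returns the panorama visible from any point in the virtual world; it is not the paper's implementation.

```python
import numpy as np

def view_difference(current, stored):
    """Root-mean-square pixel difference between two panoramic views."""
    return np.sqrt(np.mean((current.astype(float) - stored.astype(float)) ** 2))

def homing_step(position, stored_view, render_view, step=0.05):
    """One homing step: move to whichever neighbouring position's view best
    matches the view stored at the goal. Repeating this descends the
    image-difference surface back toward the goal.

    position: np.array([x, y]); render_view(position) is assumed to return
    the panoramic image visible from that point.
    """
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    candidates = [position + step * np.array([np.cos(a), np.sin(a)])
                  for a in angles]
    diffs = [view_difference(render_view(p), stored_view) for p in candidates]
    return candidates[int(np.argmin(diffs))]
```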
Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology | 2016
Antoine Wystrach; Alexander Dewar; Andrew Philippides; Paul Graham
The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal’s behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
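A rough sketch of this rotational view-matching scheme, with block-averaging standing in for a low-resolution compound eye. The array shapes and downsampling scheme are illustrative assumptions, not the paper's method.

```python
import numpy as np

def downsample(view, factor):
    """Block-average a panorama to mimic a low-resolution compound eye."""
    h, w = view.shape
    view = view[:h - h % factor, :w - w % factor].astype(float)
    return view.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_heading(current, stored):
    """Rotational image matching: roll the current panorama through every
    azimuth and return the heading (radians) that best matches the stored
    view."""
    current, stored = current.astype(float), stored.astype(float)
    width = current.shape[1]
    diffs = [np.mean((np.roll(current, s, axis=1) - stored) ** 2)
             for s in range(width)]
    return 2 * np.pi * np.argmin(diffs) / width
```

Comparing best_heading on raw views against downsample(view, factor)-coarsened ones is one way to probe the resolution trade-off the abstract describes.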
Current Biology | 2014
Antoine Wystrach; Alexander Dewar; Paul Graham
The neurogenetic tools of Drosophila research allow unique access to the neural circuitry underpinning visually guided behaviours. New research is highlighting how particular areas in the fly's central brain needed for pattern recognition provide a coarse visual encoding.
BioSystems | 2015
Alexander Dewar; Antoine Wystrach; Paul Graham; Andrew Philippides
Drosophila melanogaster is a good system in which to understand the minimal requirements for widespread visually guided behaviours such as navigation, owing to its small brain (adults possess only 100,000 neurons) and the availability of neurogenetic techniques that allow the identification of task-specific cell types. Recently published data describe the receptive fields of two classes of visually responsive neurons (R2 and R3/R4d ring neurons in the central complex) that are essential for visual tasks such as orientation memory for salient objects and simple pattern discriminations. Interestingly, these cells are few in number yet have very large receptive fields, suggesting that each sub-population of cells might be a bottleneck in the processing of visual information for a specific behaviour: each subset of cells effectively condenses information from approximately 3000 visual receptors in the eye to fewer than 50 neurons in total. It has recently been shown that R1 ring neurons, which receive input from the same areas as the R2 and R3/R4d cells, are necessary for place learning in Drosophila. However, how R1 neurons enable place learning is unknown. By examining the information provided by different populations of hypothetical visual neurons in simulations of experimental arenas, we show that neurons with ring-neuron-like receptive fields are sufficient for defining a location visually. In this way we provide a link between the type of information conveyed by ring neurons and the behaviour they support.
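A toy version of that simulation idea: each hypothetical neuron is modelled here as a single large Gaussian receptive field over a panoramic image. This is a deliberate simplification of the mapped ring-neuron receptive fields, which also have inhibitory structure.

```python
import numpy as np

def gaussian_rf(height, width, centre_az, centre_el, sigma):
    """One very large Gaussian receptive field on a panoramic image grid
    (parameters in pixels); azimuth wraps around the panorama."""
    el, az = np.mgrid[0:height, 0:width]
    d_az = np.minimum(np.abs(az - centre_az), width - np.abs(az - centre_az))
    return np.exp(-(d_az ** 2 + (el - centre_el) ** 2) / (2 * sigma ** 2))

def population_response(view, fields):
    """Condense a view of thousands of pixels into one weighted sum per
    receptive field, i.e. a handful of ring-neuron-like responses."""
    return np.array([np.sum(rf * view) for rf in fields])
```

For example, fields = [gaussian_rf(60, 240, az, 30, 25) for az in range(0, 240, 20)] condenses a 60 x 240 panorama into just twelve responses, mimicking the bottleneck described above.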
PLOS ONE | 2015
Douglas D. Gaffin; Alexander Dewar; Paul Graham; Andrew Philippides
Humans have long marveled at the ability of animals to navigate swiftly, accurately, and across long distances. Many mechanisms have been proposed for how animals acquire, store, and retrace learned routes, yet many of these hypotheses appear incongruent with behavioral observations and the animals’ neural constraints. The “Navigation by Scene Familiarity Hypothesis” proposed originally for insect navigation offers an elegantly simple solution for retracing previously experienced routes without the need for complex neural architectures and memory retrieval mechanisms. This hypothesis proposes that an animal can return to a target location by simply moving toward the most familiar scene at any given point. Proof of concept simulations have used computer-generated ant’s-eye views of the world, but here we test the ability of scene familiarity algorithms to navigate training routes across satellite images extracted from Google Maps. We find that Google satellite images are so rich in visual information that familiarity algorithms can be used to retrace even tortuous routes with low-resolution sensors. We discuss the implications of these findings not only for animal navigation but also for the potential development of visual augmentation systems and robot guidance algorithms.
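A simplified sketch of familiarity-based steering over a large satellite image. The unrotated crop standing in for the sensor view, the function names and the cropping geometry are all assumptions for illustration, not the authors' code.

```python
import numpy as np

def extract_view(image, x, y, heading, size=16):
    """Crop a small, low-resolution 'sensor' patch centred a fixed distance
    ahead of (x, y). A stand-in for views read off the satellite image;
    assumes (x, y) is far enough from the image border, and a full
    implementation would also rotate the crop to the heading."""
    cx = int(x + 2 * size * np.cos(heading))
    cy = int(y + 2 * size * np.sin(heading))
    return image[cy - size:cy + size, cx - size:cx + size].astype(float)

def familiarity(view, training_views):
    """Familiarity is the negative distance to the closest remembered view."""
    return -min(np.sum((view - t.astype(float)) ** 2) for t in training_views)

def most_familiar_heading(image, x, y, training_views, n_headings=36):
    """Step toward whichever heading gives the most familiar scene."""
    headings = np.linspace(0, 2 * np.pi, n_headings, endpoint=False)
    scores = [familiarity(extract_view(image, x, y, h), training_views)
              for h in headings]
    return headings[int(np.argmax(scores))]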
PLOS Computational Biology | 2017
Alexander Dewar; Antoine Wystrach; Andrew Philippides; Paul Graham
All organisms wishing to survive and reproduce must be able to respond adaptively to a complex, changing world. Yet the computational power available is constrained by biology and evolution, favouring mechanisms that are parsimonious yet robust. Here we investigate the information carried in small populations of visually responsive neurons in Drosophila melanogaster. These so-called ‘ring neurons’, projecting to the ellipsoid body of the central complex, are reported to be necessary for complex visual tasks such as pattern recognition and visual navigation. Recently the receptive fields of these neurons have been mapped, allowing us to investigate how well they can support such behaviours. For instance, in a simulation of classic pattern discrimination experiments, we show that the pattern of output from the ring neurons matches observed fly behaviour. However, performance of the neurons (as with flies) is not perfect and can be easily improved with the addition of extra neurons, suggesting the neurons’ receptive fields are not optimised for recognising abstract shapes, a conclusion which casts doubt on cognitive explanations of fly behaviour in pattern recognition assays. Using artificial neural networks, we then assess how easy it is to decode more general information about stimulus shape from the ring neuron population codes. We show that these neurons are well suited for encoding information about size, position and orientation, which are more relevant behavioural parameters for a fly than abstract pattern properties. This leads us to suggest that in order to understand the properties of neural systems, one must consider how perceptual circuits put information at the service of behaviour.
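To illustrate decoding stimulus parameters from a population code, here is a minimal linear readout fitted by least squares. The paper used artificial neural networks, so this is a simpler, hypothetical stand-in with assumed array layouts.

```python
import numpy as np

def fit_linear_decoder(responses, params):
    """Least-squares readout from population responses (n_stimuli x n_neurons)
    to stimulus parameters (n_stimuli x 3: size, position, orientation)."""
    X = np.hstack([responses, np.ones((len(responses), 1))])  # append a bias term
    weights, *_ = np.linalg.lstsq(X, params, rcond=None)
    return weights

def decode(responses, weights):
    """Predict stimulus parameters for new population responses."""
    X = np.hstack([responses, np.ones((len(responses), 1))])
    return X @ weights
```

Decoding accuracy on held-out stimuli then gives a measure of how well the population code supports each parameter.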
Conference on Biomimetic and Biohybrid Systems | 2016
Andrew Philippides; Nathan Steadman; Alexander Dewar; Christopher Walker; Paul Graham
This paper discusses the implementation of insect-inspired visual navigation strategies in flying robots, in particular focusing on the impact of changing height. We start by assessing the information available at different heights for visual homing in natural environments, comparing results from an open environment against one where trees and bushes are closer to the camera. We then test a route following algorithm using a gantry robot and show that a robot would be able to successfully navigate a route at a variety of heights using images saved at a different height.
Perception | 2015
Alexander Dewar; Antoine Wystrach; Paul Graham; Andrew Philippides
Conference on Biomimetic and Biohybrid Systems | 2013
Andrew Philippides; Alexander Dewar; Antoine Wystrach; Michael Mangan; Paul Graham
The ability of insects to visually navigate long routes to their nest has provided inspiration to engineers seeking to emulate their robust performance with limited resources [1-2]. Many models have been developed based on the elegant snapshot idea: remember what the world looks like from your goal and subsequently move to make your current view more like your memory [3]. In the majority of these models, a single view is stored at a goal location and acts as a form of visual attractor to that position (for review see [4]). Recently, however, inspired by the behaviour of ants and the difficulties in extending traditional snapshot models to routes [5], we have proposed a new navigation model [6-7]. In this model, rather than using views to recall directions to the place where they were stored, views are used to recall the direction of facing or movement (identical for a forward-facing ant) at the place the view was stored. To navigate, the agent scans the world by rotating and thus actively finds the most familiar view, a behaviour observed in Australian desert ants. A familiar view therefore does not identify a place; it specifies the action to take at that place, as sketched below.
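A minimal sketch of the scanning step in this kind of model, assuming panoramic grayscale views and a list of views stored while facing along the route; the details are illustrative rather than the published implementation.

```python
import numpy as np

def scan_for_heading(panorama, route_memory):
    """Simulate scanning on the spot: roll the current panorama through every
    azimuth and return the facing direction (radians) whose view is closest
    to any view stored along the route. Because stored views were captured
    while facing along the route, that direction is the direction to move."""
    panorama = panorama.astype(float)
    width = panorama.shape[1]
    best_shift, best_diff = 0, np.inf
    for shift in range(width):
        rotated = np.roll(panorama, shift, axis=1)
        diff = min(np.mean((rotated - m.astype(float)) ** 2)
                   for m in route_memory)
        if diff < best_diff:
            best_shift, best_diff = shift, diff
    return 2 * np.pi * best_shift / width
```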
Proceedings of the National Academy of Sciences of the United States of America | 2014
Allen Cheung; Matthew Collett; Thomas S. Collett; Alexander Dewar; Fred C. Dyer; Paul Graham; Michael Mangan; Ajay Narendra; Andrew Philippides; Wolfgang Stürzl; Barbara Webb; Antoine Wystrach; Jochen Zeil