Publication


Featured research published by Michael Mangan.


Adaptive Behavior | 2007

Evolving a Neural Model of Insect Path Integration

Thomas Haferlach; Jan Wessnitzer; Michael Mangan; Barbara Webb

Path integration is an important navigation strategy in many animal species. We use a genetic algorithm to evolve a novel neural model of path integration, based on input from cells that encode the heading of the agent in a manner comparable to the polarization-sensitive interneurons found in insects. The home vector is encoded as a population code across a circular array of cells that integrate this input. This code can be used to control return to the home position. We demonstrate the capabilities of the network under noisy conditions in simulation and on a robot.
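
As a rough illustration of the population-code idea described in this abstract (not the evolved network from the paper), the sketch below accumulates the output of a ring of heading-tuned cells into a second ring that holds the home vector; the cell count, cosine tuning and speed handling are assumptions made for the example.

```python
# Minimal sketch (not the evolved network from the paper): a ring of N
# heading-tuned cells, comparable to polarization-sensitive interneurons,
# whose activity is accumulated into a second ring that holds the home
# vector as a population code. Cell count, tuning and speed handling are
# illustrative assumptions.
import numpy as np

N = 8                                                   # cells per ring (assumed)
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)    # preferred headings

def heading_cells(theta):
    """Rectified cosine-tuned responses of the heading (compass) ring."""
    return np.clip(np.cos(prefs - theta), 0.0, None)

def integrate_path(headings, speeds):
    """Accumulate heading-cell activity, weighted by speed, into a memory ring."""
    memory = np.zeros(N)
    for theta, v in zip(headings, speeds):
        memory += v * heading_cells(theta)
    return memory

def decode_home_vector(memory):
    """Population-vector decode: home direction and relative home-vector length."""
    x = np.sum(memory * np.cos(prefs))
    y = np.sum(memory * np.sin(prefs))
    return np.arctan2(y, x) + np.pi, np.hypot(x, y)     # home = opposite of net outbound

# Example: an L-shaped outbound path (east, then north) at unit speed.
headings = [0.0] * 10 + [np.pi / 2] * 10
speeds = [1.0] * 20
home_dir, home_len = decode_home_vector(integrate_path(headings, speeds))
print(np.degrees(home_dir) % 360)                       # 225 degrees: back toward the start
```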


PLOS Computational Biology | 2016

Using an Insect Mushroom Body Circuit to Encode Route Memory in Complex Natural Environments

Paul Ardin; Fei Peng; Michael Mangan; Konstantinos Lagogiannis; Barbara Webb

Ants, like many other animals, use visual memory to follow extended routes through complex environments, but it is unknown how their small brains implement this capability. The mushroom body neuropils have been identified as a crucial memory circuit in the insect brain, but their function has mostly been explored for simple olfactory association tasks. We show that a spiking neural model of this circuit, originally developed to describe fruitfly (Drosophila melanogaster) olfactory association, can also account for the ability of desert ants (Cataglyphis velox) to rapidly learn visual routes through complex natural environments. We further demonstrate that abstracting the key computational principles of this circuit, which include one-shot learning of sparse codes, enables the theoretical storage capacity of the ant mushroom body to be estimated at hundreds of independent images.
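
The key computational principle named in the abstract, one-shot learning of sparse codes, can be sketched as follows; the layer sizes, the top-k sparsening, the synthetic "views" and the familiarity read-out are illustrative assumptions, not the spiking model used in the paper.

```python
# Minimal sketch of one-shot learning of sparse codes in a mushroom-body-like
# circuit. Layer sizes, sparsening rule and synthetic "views" are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_KC, SPARSITY = 100, 2000, 0.05        # input size, Kenyon cells, active fraction

W_in = rng.standard_normal((N_KC, N_IN))      # fixed random projection (PN -> KC)
w_out = np.ones(N_KC)                         # plastic KC -> output (MBON-like) weights

def sparse_code(view):
    """Top-k winner-take-all: only the most strongly driven KCs become active."""
    drive = W_in @ view
    k = int(SPARSITY * N_KC)
    code = np.zeros(N_KC)
    code[np.argsort(drive)[-k:]] = 1.0
    return code

def learn(view):
    """One-shot learning: depress the output synapses of the KCs active for this view."""
    w_out[sparse_code(view) > 0] = 0.0

def familiarity(view):
    """Lower output = more familiar (less drive reaches the output neuron)."""
    return w_out @ sparse_code(view)

# Store a few synthetic route "views"; a novel view then produces a higher output.
route_views = [rng.standard_normal(N_IN) for _ in range(5)]
for v in route_views:
    learn(v)
print(familiarity(route_views[0]),            # 0.0 for a learned view
      familiarity(rng.standard_normal(N_IN))) # clearly higher for a novel one
```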


The Journal of Experimental Biology | 2013

Snapshots in ants? New interpretations of paradigmatic experiments

Antoine Wystrach; Michael Mangan; Andrew Philippides; Paul Graham

Ants can use visual information to guide long idiosyncratic routes and accurately pinpoint locations in complex natural environments. It has often been assumed that the world knowledge of these foragers consists of multiple discrete views that are retrieved sequentially for breaking routes into sections and for controlling approaches to a goal. Here we challenge this idea using a model of visual navigation that does not store and use discrete views, yet replicates the results of paradigmatic experiments that have been taken as evidence that ants navigate using such discrete snapshots. Instead of sequentially retrieving views, the proposed architecture gathers information from all experienced views into a single memory network, and uses this network all along the route to determine the most familiar heading at a given location. This algorithm is consistent with the navigation of ants in both laboratory and natural environments, and provides a parsimonious solution to deal with visual information from multiple locations.
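
A minimal sketch of the proposed scheme follows: all experienced views feed a single memory, and at each location the agent simply adopts the most familiar heading. Familiarity is approximated here by the best match against the stored views; the panorama format and toy world are assumptions for illustration only.

```python
# Illustrative sketch: one pool of stored views (no sequence index), and a scan
# over candidate headings to find the most familiar one. Panorama format and
# the best-match familiarity measure are assumptions.
import numpy as np

WIDTH = 72                                    # 1-D panorama, 5 degrees per pixel (assumed)
rng = np.random.default_rng(1)

def rotate(view, degrees):
    """Shift a 1-D panoramic view by a given azimuthal rotation."""
    return np.roll(view, int(round(degrees / 5)))

class RouteMemory:
    def __init__(self):
        self.views = []                       # one pool of views, no sequence index

    def store(self, view):
        self.views.append(view.copy())

    def familiarity(self, view):
        """Higher = more familiar: negative of the best sum-squared difference."""
        return -min(np.sum((view - m) ** 2) for m in self.views)

    def best_heading(self, view, step=5):
        """Scan candidate headings and return the most familiar one."""
        return max(range(0, 360, step), key=lambda h: self.familiarity(rotate(view, h)))

# Toy demo: store a view, then recover the rotation that re-aligns an offset copy.
memory = RouteMemory()
view = rng.standard_normal(WIDTH)
memory.store(view)
print(memory.best_heading(rotate(view, 40)))  # 320, i.e. undoing the 40-degree offset
```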


Proceedings of the Royal Society of London B: Biological Sciences | 2008

Place memory in crickets

Jan Wessnitzer; Michael Mangan; Barbara Webb

Certain insect species are known to relocate nest or food sites using landmarks, but the generality of this capability among insects, and whether insect place memory can be used in novel task settings, is not known. We tested the ability of crickets to use surrounding visual cues to relocate an invisible target in an analogue of the Morris water maze, a standard paradigm for spatial memory tests on rodents. Adult female Gryllus bimaculatus were released into an arena with a floor heated to an aversive temperature, with one hidden cool spot. Over 10 trials, the time taken to find the cool spot decreased significantly. The best performance was obtained when a natural scene was provided on the arena walls. Animals could relocate the position from novel starting points, and when the scene was rotated, they preferentially approached the fictive target position corresponding to the rotation. We note that this navigational capability does not necessarily imply that the animal has an internal spatial representation.


Proceedings of the Royal Society B: Biological Sciences | 2015

Optimal cue integration in ants

Antoine Wystrach; Michael Mangan; Barbara Webb

In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy.
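
The optimal-weighting prediction can be illustrated with a small worked example: two directional estimates are combined in inverse proportion to their variances, so the combined direction shifts toward PI as the PI variance shrinks. Treating the cues as approximately Gaussian over a small angular range is an assumption, and all numbers are invented.

```python
# Worked toy example of inverse-variance cue combination (directions in degrees).
# The Gaussian approximation and all numbers are assumptions for illustration.
def combine(mu_pi, var_pi, mu_vis, var_vis):
    """Inverse-variance (maximum-likelihood) weighting of two cues."""
    w_pi, w_vis = 1.0 / var_pi, 1.0 / var_vis
    mu = (w_pi * mu_pi + w_vis * mu_vis) / (w_pi + w_vis)
    return mu, 1.0 / (w_pi + w_vis)

# Short home vector: PI is uncertain, so the estimate sits nearer the visual cue.
print(combine(mu_pi=0.0, var_pi=400.0, mu_vis=45.0, var_vis=100.0))  # (36.0, 80.0)

# Long home vector: PI is reliable, so the estimate shifts toward the PI direction.
print(combine(mu_pi=0.0, var_pi=25.0, mu_vis=45.0, var_vis=100.0))   # (9.0, 20.0)
```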


Biological Cybernetics | 2009

Modelling place memory in crickets

Michael Mangan; Barbara Webb

Insects can remember and return to a place of interest using the surrounding visual cues. In previous experiments, we showed that crickets could home to an invisible cool spot in a hot environment. They did so most effectively with a natural scene surround, though they were also able to home with distinct landmarks or blank walls. Homing was not successful, however, when visual cues were removed through a dark control. Here, we compare six different models of visual homing using the same visual environments. Only models deemed biologically plausible for use by insects were implemented. The average landmark vector model and first order differential optic flow are unable to home better than chance in at least one of the visual environments. Second order differential optic flow and GradDescent on image differences can home better than chance in all visual environments, and best in the natural scene environment, but do not quantitatively match the distributions of the cricket data. Two models—centre of mass average landmark vector and RunDown on image differences—could produce the same pattern of results as observed for crickets. Both the models performed best using simple binary images and were robust to changes in resolution and image smoothing.
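
In the spirit of the descent-on-image-differences strategies compared in the paper (e.g. RunDown), here is a minimal homing sketch; the toy image-difference field, the movement rule and all parameters are assumptions, not the authors' implementations.

```python
# Minimal homing sketch: keep moving while the difference between the current
# view and the stored home view falls, and pick a new heading when it rises.
import numpy as np

rng = np.random.default_rng(2)

def image_difference(pos):
    """Stand-in for the pixel-wise difference between the current view and the
    view stored at home; here it simply grows with distance from home (0, 0)."""
    return np.linalg.norm(pos)

def run_down(start, step=0.2, max_steps=200):
    """Run while the image difference decreases; re-orient randomly when it rises."""
    pos = np.array(start, dtype=float)
    heading = rng.uniform(0, 2 * np.pi)
    last_diff = image_difference(pos)
    for _ in range(max_steps):
        trial = pos + step * np.array([np.cos(heading), np.sin(heading)])
        diff = image_difference(trial)
        if diff < last_diff:
            pos, last_diff = trial, diff           # difference fell: keep this heading
        else:
            heading = rng.uniform(0, 2 * np.pi)    # difference rose: try a new heading
        if last_diff < step:
            break
    return pos

print(run_down(start=(5.0, 5.0)))                  # ends close to the home position (0, 0)
```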


Robotics: Science and Systems | 2014

Sky segmentation with ultraviolet images can be used for navigation

Thomas Stone; Michael Mangan; Paul Ardin; Barbara Webb

Inspired by ant navigation, we explore a method for sky segmentation using ultraviolet (UV) light. A standard camera is adapted to allow collection of outdoor images containing light in the visible range, in UV only and in green only. Automatic segmentation of the sky region using UV only is significantly more accurate and far more consistent than visible wavelengths over a wide range of locations, times and weather conditions, and can be accomplished with a very low complexity algorithm. We apply this method to obtain compact binary (sky vs non-sky) images from panoramic UV images taken along a 2km route in an urban environment. Using either sequence SLAM or a visual compass on these images produces reliable localisation and orientation on a subsequent traversal of the route under different weather conditions.
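
A minimal sketch of the low-complexity idea: because the sky is far brighter than ground objects in the UV band, a single global threshold yields a binary sky/non-sky image. The synthetic panorama and the Otsu-style threshold used here are assumptions; the paper's exact pipeline may differ.

```python
# Toy sky segmentation: global thresholding of a synthetic "UV" panorama.
# The synthetic image and the Otsu-style threshold are assumptions.
import numpy as np

def otsu_threshold(values, bins=256):
    """Pick the threshold that maximises between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_score = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:i] * centers[:i]).sum() / w0
        mu1 = (p[i:] * centers[i:]).sum() / w1
        score = w0 * w1 * (mu0 - mu1) ** 2
        if score > best_score:
            best_t, best_score = centers[i], score
    return best_t

# Synthetic UV panorama: bright sky in the upper rows, darker clutter below.
rng = np.random.default_rng(3)
uv = np.vstack([rng.normal(0.8, 0.05, (40, 180)),     # sky rows
                rng.normal(0.2, 0.10, (40, 180))])    # ground rows
sky_mask = uv > otsu_threshold(uv.ravel())            # binary sky vs non-sky image
print(sky_mask[:40].mean(), sky_mask[40:].mean())     # ~1.0 for sky rows, ~0.0 for ground
```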


Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology | 2015

How variation in head pitch could affect image matching algorithms for ant navigation

Paul Ardin; Michael Mangan; Antoine Wystrach; Barbara Webb

Desert ants are a model system for animal navigation, using visual memory to follow long routes across both sparse and cluttered environments. Most accounts of this behaviour assume retinotopic image matching, e.g. recovering heading direction by finding a minimum in the image difference function as the viewpoint rotates. But most models neglect the potential image distortion that could result from unstable head motion. We report that for ants running across a short section of natural substrate, the head pitch varies substantially: by over 20 degrees with no load, and by 60 degrees when carrying a large food item. There is no evidence of head stabilisation. Using a realistic simulation of the ant’s visual world, we demonstrate that this range of head pitch significantly degrades image matching. The effect of pitch variation can be ameliorated by a memory bank of views densely sampled along a route, so that an image sufficiently similar in pitch and location is available for comparison. However, with large pitch disturbance, inappropriate memories sampled at distant locations are often recalled and navigation along a route can be adversely affected. Ignoring images obtained at extreme pitches, or averaging images over several pitches, does not significantly improve performance.
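
The retinotopic-matching assumption discussed above can be illustrated with a toy rotational image difference function; a vertical shift of the image is used below as a crude stand-in for head pitch, and the random panorama is an assumption made for illustration.

```python
# Toy rotational image difference function (rIDF): a sharp minimum at the true
# heading, destroyed by a vertical shift used as a crude stand-in for head pitch.
import numpy as np

rng = np.random.default_rng(4)
H, W = 30, 90                                  # panoramic image size (assumed)
stored = rng.standard_normal((H, W))           # view memorised at this place

def ridf(current, memory):
    """Sum-squared image difference for every horizontal (azimuthal) rotation."""
    return np.array([np.sum((np.roll(current, r, axis=1) - memory) ** 2)
                     for r in range(W)])

level = ridf(stored, stored)                   # same pitch as the memory
print(level.argmin(), level.min())             # 0 0.0 -- sharp minimum at the true heading

pitched = ridf(np.roll(stored, 5, axis=0), stored)  # rows shifted: a crude "pitch"
print(pitched.min())                           # far above zero: matching is badly degraded
```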


Conference on Biomimetic and Biohybrid Systems | 2015

Route Following Without Scanning

Aleksandar Kodzhabashev; Michael Mangan

Desert ants are expert navigators, foraging over large distances using visually guided routes. Recent models of route following can reproduce aspects of route guidance, yet the underlying motor patterns do not reflect those of foraging ants. Specifically, these models select the direction of movement by rotating to find the most familiar view. Yet scanning patterns are only occasionally observed in ants. We propose a novel route following strategy inspired by klinokinesis. By using familiarity of the view to modulate the magnitude of alternating left and right turns, and the size of forward steps, this strategy is able to continually correct the heading of a simulated ant to maintain its course along a route. Route following by klinokinesis and visual compass are evaluated against real ant routes in a simulation study and on a mobile robot in the real ant habitat. We report that in unfamiliar surroundings the proposed method can also generate ant-like scanning behaviours.
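
A minimal sketch of the klinokinesis-style rule described above: the agent alternates left and right turns, and view familiarity scales both the turn magnitude and the forward step, so unfamiliar views produce large corrective turns and short steps while familiar views produce nearly straight movement. The familiarity function, gains and toy route below are assumptions, not the model of the paper.

```python
# Klinokinesis-style route following: alternating turns whose size, like the
# step length, is modulated by how unfamiliar the current view is.
# Familiarity function, gains and toy route are assumptions.
import numpy as np

def follow_route(familiarity, start, heading, steps=400,
                 max_turn=1.0, max_step=0.2):
    """Alternate turn direction each step; scale turn and step by unfamiliarity."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    sign = 1.0
    for _ in range(steps):
        unfamiliar = 1.0 - familiarity(pos, heading)      # 0 = perfectly familiar
        heading += sign * max_turn * unfamiliar           # big turns when lost
        sign = -sign                                      # alternate left / right
        step = max_step * (1.0 - 0.5 * unfamiliar)        # shorter steps when lost
        pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
        path.append(pos.copy())
    return np.array(path)

# Toy "route" along the x-axis: views are most familiar when the agent is on
# the axis and facing along it (a stand-in for a learned visual memory).
def toy_familiarity(pos, heading):
    return np.exp(-abs(pos[1])) * max(0.0, np.cos(heading))

path = follow_route(toy_familiarity, start=(0.0, 0.5), heading=0.0)
print(path[-1])   # x should advance steadily while y stays close to the route (y = 0)
```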


Frontiers in Behavioral Neuroscience | 2016

Ant Homing Ability Is Not Diminished When Traveling Backwards

Paul Ardin; Michael Mangan; Barbara Webb

Ants are known to be capable of homing to their nest after displacement to a novel location. This is widely assumed to involve some form of retinotopic matching between their current view and previously experienced views. One simple algorithm proposed to explain this behavior is continuous retinotopic alignment, in which the ant constantly adjusts its heading by rotating to minimize the pixel-wise difference of its current view from all views stored while facing the nest. However, ants with large prey items will often drag them home while facing backwards. We tested whether displaced ants (Myrmecia croslandi) dragging prey could still home despite experiencing an inverted view of their surroundings under these conditions. Ants moving backwards with food took similarly direct paths to the nest as ants moving forward without food, demonstrating that continuous retinotopic alignment is not a critical component of homing. It is possible that ants use initial or intermittent retinotopic alignment, coupled with some other direction-stabilizing cue that they can utilize when moving backward. However, though most ants dragging prey would occasionally look toward the nest, we observed that their heading direction was not noticeably improved afterwards. We assume ants must use comparison of current and stored images for corrections of their path, but suggest they are either able to choose the appropriate visual memory for comparison using an additional mechanism, or can make such comparisons without retinotopic alignment.

Collaboration


Dive into Michael Mangan's collaborations.

Top Co-Authors

Barbara Webb
University of Edinburgh

Paul Ardin
University of Edinburgh