Alex Cope
University of Sheffield
Publications
Featured research published by Alex Cope.
Neuroinformatics | 2014
Paul Richmond; Alex Cope; Kevin N. Gurney; David J. Allerton
A declarative extensible markup language (SpineML) for describing the dynamics, network and experiments of large-scale spiking neural network simulations is described, building upon the NineML standard. It utilises a level of abstraction which targets point neuron representation, but addresses the limitations of existing tools by allowing arbitrary dynamics to be expressed. The use of XML promotes model sharing, is human readable and allows collaborative working. The syntax uses a high-level, self-explanatory format which allows straightforward code generation or translation of a model description to a native simulator format. This paper demonstrates the use of code generation to translate, simulate and reproduce the results of a benchmark model across a range of simulators. The flexibility of the SpineML syntax is highlighted by reproducing a pre-existing, biologically constrained model of a neural microcircuit (the striatum). The SpineML code is open source and is available at http://bimpa.group.shef.ac.uk/SpineML.
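The code-generation idea in this abstract — turning a declarative description of point-neuron dynamics into executable simulator code — can be sketched minimally as follows. This is an illustration only, not SpineML's actual schema or toolchain: the dictionary format, `make_stepper`, and the forward-Euler scheme are all assumptions for the example.

```python
# Minimal sketch: compile a declarative {state: derivative-expression} dict
# into a forward-Euler update function. Illustrative only -- this is not
# SpineML syntax, just the flavour of description-to-code generation.

def make_stepper(dynamics, dt=0.1):
    """Build an update function from string expressions for d(state)/dt."""
    def step(state, params):
        env = {**state, **params}
        return {var: state[var] + dt * eval(expr, {}, env)
                for var, expr in dynamics.items()}
    return step

# Leaky integrator: dV/dt = (I - V) / tau
step = make_stepper({"V": "(I - V) / tau"})
s = {"V": 0.0}
for _ in range(100):
    s = step(s, {"I": 1.0, "tau": 10.0})
# V relaxes toward I = 1.0 (about 0.63 after one time constant)
```

A real code generator would emit native simulator source rather than evaluate expressions at runtime, but the separation of model description from numerical scheme is the same.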
PLOS Computational Biology | 2016
Alex Cope; Chelsea Sabo; Kevin N. Gurney; Eleni Vasilaki; James A. R. Marshall
We present a novel neurally based model for estimating angular velocity (AV) in the bee brain, capable of quantitatively reproducing experimental observations of visual odometry and corridor-centering in free-flying honeybees, including previously unaccounted-for behavioural manipulations. The model is fitted using electrophysiological data, and tested using behavioural data. Based on our model we suggest that the AV response can be considered as an evolutionary extension to the optomotor response. The detector is tested behaviourally in silico with the corridor-centering paradigm, where bees navigate down a corridor with gratings (square wave or sinusoidal) on the walls. When combined with an existing flight control algorithm the detector reproduces the invariance of the average flight path to the spatial frequency and contrast of the gratings, including deviations from perfect centering behaviour as found in the real bee’s behaviour. In addition, the summed response of the detector to a unit distance movement along the corridor is constant for a large range of grating spatial frequencies, demonstrating that the detector can be used as a visual odometer.
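The correlation-detector family this work builds on can be illustrated with a minimal Hassenstein-Reichardt-style sketch. The paper's AV detector differs in detail and is fitted to electrophysiological data, so treat the names, the delay scheme, and the stimulus below as assumptions for the example.

```python
import math

def hr_detector(signal_a, signal_b, delay=1):
    """Opponent correlation of each input with a delayed copy of the other.

    Positive output indicates motion in the a-to-b direction; the
    one-sample delay stands in for a neural low-pass filter.
    """
    return [signal_a[t - delay] * signal_b[t]
            - signal_b[t - delay] * signal_a[t]
            for t in range(delay, len(signal_a))]

# A drifting grating: photoreceptor b sees a's signal one sample later.
a = [math.sin(0.5 * t) for t in range(50)]
b = [math.sin(0.5 * (t - 1)) for t in range(50)]
response = hr_detector(a, b)
# the mean response is positive for motion in the preferred direction
```

A detector suitable for visual odometry additionally needs a response that scales with angular velocity rather than saturating with temporal frequency, which is where the paper's model departs from this classic form.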
PLOS ONE | 2017
Alex Cope; Chelsea Sabo; Eleni Vasilaki; Andrew B. Barron; James A. R. Marshall
The insect central complex (CX) is an enigmatic structure whose computational function has evaded inquiry, but has been implicated in a wide range of behaviours. Recent experimental evidence from the fruit fly (Drosophila melanogaster) and the cockroach (Blaberus discoidalis) has demonstrated the existence of neural activity corresponding to the animal’s orientation within a virtual arena (a neural ‘compass’), and this provides an insight into one component of the CX structure. There are two key features of the compass activity: an offset between the angle represented by the compass and the true angular position of visual features in the arena, and the remapping of the 270° visual arena onto an entire circle of neurons in the compass. Here we present a computational model which can reproduce this experimental evidence in detail, and predicts the computational mechanisms that underlie the data. We predict that both the offset and remapping of the fly’s orientation onto the neural compass can be explained by plasticity in the synaptic weights between segments of the visual field and the neurons representing orientation. Furthermore, we predict that this learning is reliant on the existence of neural pathways that detect rotational motion across the whole visual field and uses this rotation signal to drive the rotation of activity in a neural ring attractor. Our model also reproduces the ‘transitioning’ between visual landmarks seen when rotationally symmetric landmarks are presented. This model can provide the basis for further investigation into the role of the central complex, which promises to be a key structure for understanding insect behaviour, as well as suggesting approaches towards creating fully autonomous robotic agents.
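The ring attractor at the heart of the proposed mechanism can be sketched with a few lines of rate-based code. This is an illustrative toy, not the paper's model: the network size, the cosine weight profile, and the rotation-input scheme are assumptions.

```python
import math

N = 16  # compass neurons around the ring (an assumed, illustrative size)

def weight(i, j):
    """Local excitation with broad inhibition via a cosine profile."""
    return 0.6 * math.cos(2 * math.pi * (i - j) / N) - 0.2

def step(rates, rotation=0.0, dt=0.1):
    new = []
    for i in range(N):
        inp = sum(weight(i, j) * r for j, r in enumerate(rates))
        # a whole-field rotation signal would shift the bump by exciting
        # each neuron's neighbour; rotation=0 holds the stored heading
        inp += rotation * rates[(i - 1) % N]
        new.append(max(0.0, min(1.0, rates[i] + dt * (inp - rates[i]))))
    return new

rates = [1.0 if i == 0 else 0.0 for i in range(N)]
for _ in range(200):
    rates = step(rates)
# a single localized bump of activity persists, storing an orientation
```

In the paper's account, plastic synapses from visual-field segments onto such a ring would produce both the offset and the 270°-to-360° remapping observed experimentally.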
international conference on robotics and automation | 2017
Andreagiovanni Reina; Alex Cope; Eleftherios Nikolaidis; James A. R. Marshall; Chelsea Sabo
Working with large swarms of robots presents challenges in calibration, sensing, tracking, and control due to the associated scalability and time requirements. Kilobots address this through their ease of maintenance and programming, and are widely used in research laboratories worldwide, where their low cost enables large-scale swarm studies. However, the small, inexpensive nature of the Kilobots limits their range of capabilities, as they are equipped with only a single sensor. In some studies this limitation can be a source of motivation and inspiration, while in others it is an impediment. As such, we designed, implemented, and tested a novel system to communicate personalized location- and state-based information to each robot, and to receive information on each robot's state. In this way, the Kilobots can sense additional information from a virtual environment in real time; for example, a value on a gradient, a direction toward a reference point, or a pheromone trail. The Augmented Reality for Kilobots (ARK) system implements this in flexible base control software which allows users to define varying virtual environments within a single experiment using integrated overhead tracking and control. We showcase the different functionalities of the system through three demos involving hundreds of Kilobots. ARK provides Kilobots with additional and unique capabilities through an open-source tool which can be implemented with inexpensive, off-the-shelf hardware.
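The core idea of virtual sensing — an overhead tracker turning each robot's pose into a personalized message — can be sketched as follows. The function names, the toy gradient field, and the 8-bit message format are hypothetical, not ARK's actual API.

```python
import math

def virtual_gradient(x, y, source=(0.0, 0.0)):
    """Virtual 'pheromone' strength at a tracked position (toy field)."""
    return 1.0 / (1.0 + math.hypot(x - source[0], y - source[1]))

def messages_for_swarm(tracked_poses):
    """One personalized message per robot id: a quantised gradient value."""
    return {rid: int(255 * virtual_gradient(x, y))
            for rid, (x, y) in tracked_poses.items()}

# robot 1 sits on the gradient source; robot 2 is 5 units away
msgs = messages_for_swarm({1: (0.0, 0.0), 2: (3.0, 4.0)})
```

Because the field exists only in software, the experimenter can swap it for a direction-to-beacon or trail map between (or during) experiments without touching the robots.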
Neuroinformatics | 2017
Alex Cope; Paul Richmond; Sebastian S. James; Kevin N. Gurney; David J. Allerton
There is a growing requirement in computational neuroscience for tools that permit collaborative model building, model sharing, combining existing models into a larger system (multi-scale model integration), and are able to simulate models using a variety of simulation engines and hardware platforms. Layered XML model specification formats solve many of these problems; however, they are difficult to write and visualise without tools. Here we describe a new graphical software tool, SpineCreator, which facilitates the creation and visualisation of layered models of point spiking neurons or rate-coded neurons without the need for programming. We demonstrate the tool through the reproduction and visualisation of published models and show simulation results using code generation interfaced directly into SpineCreator. As a unique application for the graphical creation of neural networks, SpineCreator represents an important step forward for neuronal modelling.
AIAA Infotech @ Aerospace | 2016
Chelsea Sabo; Alex Cope; Kevin N. Gurney; Eleni Vasilaki; James A. R. Marshall
Small Unmanned Air Vehicles (UAVs) are increasingly used in many fields to keep up with demands for technological growth, and because of their unique ability to provide an “eye-in-the-sky”. However, traditional guidance systems are not always suitable for a transition to smaller UAVs. Honeybee navigation has long been proposed as a basis for robot navigation, as honeybees are known to solve complex tasks efficiently and robustly. An approach to bio-inspired navigation using optic flow, based on honeybee reactive flight control, is presented here. This approach is tested and verified on the benchmark hallway navigation problem. It is shown that the control approach can explain a wide range of biological behaviors using minimal sensors, similar to flying insects.
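The optic-flow-balancing principle behind hallway (corridor) centring can be sketched minimally. The gain, the sign convention, and the scalar flow inputs are illustrative assumptions, not the paper's fitted controller.

```python
def centring_command(flow_left, flow_right, gain=0.5):
    """Turn away from the side with larger apparent image motion.

    A nearer wall produces faster image motion, so balancing left and
    right optic-flow magnitudes keeps the agent near the corridor
    midline. Positive output steers left (an arbitrary sign choice).
    """
    return gain * (flow_right - flow_left)

# nearer the left wall -> larger left flow -> steer right (negative)
cmd = centring_command(flow_left=2.0, flow_right=1.0)
```

The appeal for small UAVs is that this requires only wide-field motion estimates from lightweight cameras, with no range sensors or maps.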
PLOS ONE | 2018
Amélie Cabirol; Alex Cope; Andrew B. Barron; Jean-Marc Devaud
Brain structure and learning capacities both vary with experience, but the mechanistic link between them is unclear. Here, we investigated whether experience-dependent variability in learning performance can be explained by neuroplasticity in foraging honey bees. The mushroom bodies (MBs) are a brain center necessary for ambiguous olfactory learning tasks such as reversal learning. Using radio frequency identification technology, we assessed the effects of natural variation in foraging activity, and of the age at foraging onset, on both performance in reversal learning and synaptic connectivity in the MBs. We found that reversal learning performance improved at foraging onset and could decline with greater foraging experience. If bees started foraging before the normal age, as a result of a stress applied to the colony, the decline in learning performance with foraging experience was more severe. Analyses of brain structure in the same bees showed that the total number of synaptic boutons at the MB input decreased when bees started foraging, and then increased with greater foraging intensity. At foraging onset MB structure is therefore optimized for bees to update learned information, but this optimization of MB connectivity deteriorates with foraging effort. In a computational model of the MBs, sparser coding of information at the MB input improved reversal learning performance. We propose, therefore, a plausible mechanistic relationship between experience, neuroplasticity, and cognitive performance in a natural and ecological context.
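The intuition for why sparser MB input coding helps reversal learning is that sparse codes for different odours overlap less, so relearning the value of one odour interferes less with the other. A toy sketch illustrates this (random binary codes and Jaccard overlap are assumptions for the example, not the paper's model):

```python
import random

def kc_code(seed, n_active, n_cells=100):
    """A random set of active Kenyon cells for one odour (illustrative)."""
    return set(random.Random(seed).sample(range(n_cells), n_active))

def overlap(a, b):
    """Jaccard overlap between two codes."""
    return len(a & b) / len(a | b)

def mean_overlap(n_active, trials=200):
    """Average overlap between random odour pairs at a given sparseness."""
    tot = 0.0
    for t in range(trials):
        tot += overlap(kc_code(2 * t, n_active), kc_code(2 * t + 1, n_active))
    return tot / trials

# sparse codes (5/100 cells active) overlap far less than dense ones
# (50/100), so rewriting output weights for one odour disturbs the other less
sparse, dense = mean_overlap(5), mean_overlap(50)
```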
PLOS Computational Biology | 2018
Alex Cope; Eleni Vasilaki; Dorian Minors; Chelsea Sabo; James A. R. Marshall; Andrew B. Barron
The capacity to learn abstract concepts such as ‘sameness’ and ‘difference’ is considered a higher-order cognitive function, typically thought to be dependent on top-down neocortical processing. It is therefore surprising that honey bees apparently have this capacity. Here we report a model of the structures of the honey bee brain that can learn sameness and difference, as well as a range of complex and simple associative learning tasks. Our model is constrained by the known connections and properties of the mushroom body, including the protocerebral tract, and provides a good fit to the learning rates and performances of real bees in all tasks, including learning sameness and difference. The model proposes a novel mechanism for learning the abstract concepts of ‘sameness’ and ‘difference’ that is compatible with the insect brain, and is not dependent on top-down or executive control processing.
BMC Neuroscience | 2015
Alex Cope; Chelsea Sabo; Eleni Vasilaki; Kevin N. Gurney; James A. R. Marshall
In insects the optomotor response produces a motor action to compensate for unintended body rotation. The response is generally modelled as a Reichardt-Hassenstein detector (RHD) or Barlow-Levick (BL) correlation detector, as anatomical and physiological studies in Drosophila melanogaster have demonstrated consistent neural pathways and responses in the insect brain [1]. Recordings from the descending neurons carrying the optomotor response signal in honeybees indicate an ordering effect for different stimulus spatial frequencies, with a greater response with decreasing frequency [2] (see Figure 1A), which is not accounted for by RHD or BL correlation detectors.

Figure 1. A: Cartoon of the ordering effect indicated by honeybee descending neuron responses; spatial frequency decreases from blue to green. B: Model diagram showing annealed synapses (coloured; same colours indicate the same synaptic conductance). C: Slice of the annealing ...

We present a model in the SpineML format of the optomotor system, using Izhikevich point neurons tuned to match the respective physiological responses, shown in Figure 1B. To examine whether the model reproduces the ordering effect found in the honeybee we performed simulated annealing on four conductance values in the model, as shown in Figure 1. The objective function is designed to maximize: correct ordering; a 10 Hz maximum response; and contrast between responses to forward and reverse motion. The data were imported into a commercial software package (MATLAB 7.14, The MathWorks Inc., Natick, MA, 2012) for analysis and interpolated onto a 41⁴ grid; a 3D slice of this 4D grid can be seen in Figure 1C. Spatial frequencies of 32.7, 18.9 and 9.5 Hz were used. A stable region in which a high value of the objective function, and thus correct spatial frequency ordering, could be obtained was found. In this stable region the onset pathway activity is low, so offset activity dominates.
An RHD using the model up to the medulla does not show correct ordering.
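The simulated-annealing search over four conductance values described above can be sketched generically. The objective function below is a stand-in placeholder (a smooth function peaking at an assumed "stable region"), not the neural simulation from the abstract, and the step size, schedule, and bounds are illustrative assumptions.

```python
# Illustrative simulated annealing over four bounded parameters.
import math
import random

def objective(g):
    """Placeholder objective: peaks when all conductances sit at 0.5."""
    return -sum((gi - 0.5) ** 2 for gi in g)

def anneal(steps=2000, t0=1.0):
    rng = random.Random(42)
    g = [rng.random() for _ in range(4)]        # four conductance values
    best, best_val = list(g), objective(g)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3         # linear cooling schedule
        cand = [min(1.0, max(0.0, gi + rng.gauss(0, 0.1))) for gi in g]
        d = objective(cand) - objective(g)
        # accept improvements always; accept worsenings with prob exp(d/t)
        if d > 0 or rng.random() < math.exp(d / t):
            g = cand
        if objective(g) > best_val:
            best, best_val = list(g), objective(g)
    return best, best_val

best, val = anneal()
# best approaches (0.5, 0.5, 0.5, 0.5) for this placeholder objective
```

In the actual study each objective evaluation would run the SpineML optomotor model and score the ordering, peak rate, and direction contrast.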
conference towards autonomous robotic systems | 2011
Alex Cope; Jonathan M. Chambers; Kevin N. Gurney
Navigating the visual world is a challenging problem for autonomous agents, which must be flexible, robust, and preferably easily extensible in order to meet changing task demands. Here, we outline the rationale for an approach to constructing such agents biomimetically, even if they appear ‘over engineered’ at first glance, using the problem of gaze redirection in an attentional task.