
Publication


Featured research published by Abigail Morrison.


Frontiers in Neuroinformatics | 2014

Spiking network simulation code for petascale computers.

Susanne Kunkel; Maximilian Schmidt; Jochen Martin Eppler; Hans E. Plesser; Gen Masumoto; Jun Igarashi; Shin Ishii; Tomoki Fukai; Abigail Morrison; Markus Diesmann; Moritz Helias

Brain-scale networks exhibit a breathtaking heterogeneity in the dynamical properties and parameters of their constituents. At cellular resolution, the entities of theory are neurons and synapses and over the past decade researchers have learned to manage the heterogeneity of neurons and synapses with efficient data structures. Already early parallel simulation codes stored synapses in a distributed fashion such that a synapse solely consumes memory on the compute node harboring the target neuron. As petaflop computers with some 100,000 nodes become increasingly available for neuroscience, new challenges arise for neuronal network simulation software: Each neuron contacts on the order of 10,000 other neurons and thus has targets only on a fraction of all compute nodes; furthermore, for any given source neuron, at most a single synapse is typically created on any compute node. From the viewpoint of an individual compute node, the heterogeneity in the synaptic target lists thus collapses along two dimensions: the dimension of the types of synapses and the dimension of the number of synapses of a given type. Here we present a data structure taking advantage of this double collapse using metaprogramming techniques. After introducing the relevant scaling scenario for brain-scale simulations, we quantitatively discuss the performance on two supercomputers. We show that the novel architecture scales to the largest petascale supercomputers available today.
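The "double collapse" exploited by the data structure can be caricatured in a few lines of plain Python (an illustrative sketch only; the paper's actual implementation uses C++ metaprogramming, and the class and field names here are invented). Because a given source neuron typically has at most one synapse on any compute node, the common case can be stored without per-synapse type and count bookkeeping, falling back to a general container only for the rare heterogeneous case:

```python
from collections import defaultdict

class CompactTargetTable:
    """Toy per-node synapse store: the common case (a single synapse of a
    single type per source neuron on this node) avoids list overhead."""

    def __init__(self):
        self._single = {}                 # source -> (syn_type, target): common case
        self._multi = defaultdict(list)   # source -> [(syn_type, target), ...]: rare

    def add(self, source, syn_type, target):
        if source in self._multi:
            self._multi[source].append((syn_type, target))
        elif source in self._single:
            # promote to the (rare) heterogeneous representation
            self._multi[source] = [self._single.pop(source), (syn_type, target)]
        else:
            self._single[source] = (syn_type, target)

    def targets(self, source):
        if source in self._single:
            return [self._single[source]]
        return list(self._multi.get(source, []))
```

In the real simulator the same idea is resolved at compile time, so that homogeneous target lists carry no runtime type information at all.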


BMC Neuroscience | 2011

NineML: the network interchange for neuroscience modeling language

Ivan Raikov; Robert C. Cannon; Robert Clewley; Hugo Cornelis; Andrew Davison; Erik De Schutter; Mikael Djurfeldt; Padraig Gleeson; Anatoli Gorchetchnikov; Hans E. Plesser; Sean L. Hill; Mike Hines; Birgit Kriener; Yann Le Franc; Chung-Chan Lo; Abigail Morrison; Eilif Muller; Subhasis Ray; Lars Schwabe; Botond Szatmary

The growing number of large-scale neuronal network models has created a need for standards and guidelines to ease model sharing and facilitate the replication of results across different simulators. To foster community efforts towards such standards, the International Neuroinformatics Coordinating Facility (INCF) has formed its Multiscale Modeling program, and has assembled a task force of simulator developers to propose a declarative computer language for descriptions of large-scale neuronal networks.

The name of the proposed language is Network Interchange for Neuroscience Modeling Language (NineML), and its initial focus is restricted to point neuron models.

The INCF Multiscale Modeling task force has identified the key concepts of network modeling to be (1) spiking neurons, (2) synapses, (3) populations of neurons, and (4) connectivity patterns across populations of neurons. Accordingly, the definition of NineML includes a set of mathematical abstractions to represent these concepts.

NineML aims to provide tool support for the explicit declarative definition of spiking neuronal network models, both conceptually and mathematically, in a simulator-independent manner. In addition, NineML is designed to be self-consistent and highly flexible, allowing the addition of new models and mathematical descriptions without modification of the existing structure and organization of the language. To achieve these goals, the language is being iteratively designed using several representative models of varying complexity as test cases.

The design of NineML is divided into two semantic layers: the Abstraction Layer, which consists of the core mathematical concepts necessary to express neuronal and synaptic dynamics and network connectivity patterns, and the User Layer, which provides constructs to specify the instantiation of a network model in terms that are familiar to computational neuroscience modelers.

As part of the Abstraction Layer, NineML includes a flexible block diagram notation for describing spiking dynamics. The notation represents continuous and discrete variables, their evolution according to a set of rules such as a system of ordinary differential equations, and the conditions that induce a regime change, such as the transition from subthreshold mode to spiking and refractory modes.

The User Layer provides syntax for specifying the structure of the elements of a spiking neuronal network. This includes parameters for each of the individual elements (cells, synapses, inputs) and the grouping of these entities into networks. In addition, the User Layer defines the syntax for supplying parameter values to abstract connectivity patterns.

The NineML specification is defined as an implementation-neutral object model representing all the concepts in the User and Abstraction Layers. Libraries for creating, manipulating, querying and serializing the NineML object model to a standard XML representation will be delivered for a variety of languages. The first priority of the task force is to deliver a publicly available Python implementation to support the wide range of simulators that provide a Python user interface (NEURON, NEST, Brian, MOOSE, GENESIS-3, PCSIM, PyNN, etc.). These libraries will allow simulator developers to quickly add support for NineML, and will thus catalyze the emergence of a broad software ecosystem supporting model definition interoperability around NineML.


Frontiers in Neuroanatomy | 2016

Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

Sandra Diaz-Pier; Mikaël Naveau; Markus Butz-Ostendorf; Abigail Morrison

With the emergence of new high-performance computing technology in the last decade, the simulation of large-scale neural networks able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in this work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential for the self-generation of connectivity in large-scale networks. We show and discuss the results of simulations on simple two-population networks and on more complex models of the cortical microcircuit, involving 8 populations and 4 layers, using the new framework.
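The create/delete cycle driven by local homeostatic rules can be sketched in plain Python (a minimal caricature of the mechanism described above, not the NEST implementation; the target rate, growth rate and the linear growth rule are all invented for illustration). Each neuron grows free axonal and dendritic elements while its activity is below target and retracts them above it; free elements are then randomly paired into synapses:

```python
import random

random.seed(42)

TARGET_RATE = 5.0   # desired mean electrical activity (arbitrary units)
GROWTH = 0.1        # elements grown per step per unit of rate deficit

def update_elements(rates, axonal, dendritic):
    """Grow pre-/post-synaptic elements on neurons below the target rate
    and retract them on neurons above it (a linear homeostatic rule)."""
    for i, r in enumerate(rates):
        delta = GROWTH * (TARGET_RATE - r)
        axonal[i] = max(0.0, axonal[i] + delta)
        dendritic[i] = max(0.0, dendritic[i] + delta)

def form_synapses(axonal, dendritic, connections):
    """Randomly pair whole free axonal and dendritic elements into synapses."""
    pre = [i for i, a in enumerate(axonal) for _ in range(int(a))]
    post = [j for j, d in enumerate(dendritic) for _ in range(int(d))]
    random.shuffle(pre)
    random.shuffle(post)
    for i, j in zip(pre, post):
        axonal[i] -= 1.0
        dendritic[j] -= 1.0
        connections.append((i, j))
```

Iterating these two steps alongside a network simulation lets under-active neurons recruit connections until the population reaches the target activity, which is the self-wiring behavior the framework exploits.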


Frontiers in Neuroinformatics | 2014

CyNEST: a maintainable Cython-based interface for the NEST simulator

Yury V. Zaytsev; Abigail Morrison

NEST is a simulator for large-scale networks of spiking point neuron models (Gewaltig and Diesmann, 2007). Originally, simulations were controlled via the Simulation Language Interpreter (SLI), a built-in scripting facility implementing a language derived from PostScript (Adobe Systems, Inc., 1999). The introduction of PyNEST (Eppler et al., 2008), the Python interface for NEST, enabled users to control simulations using Python. As the majority of NEST users found PyNEST easier to use and to combine with other applications, it immediately displaced SLI as the default NEST interface. However, developing and maintaining PyNEST has become increasingly difficult over time. This is partly because adding new features requires writing low-level C++ code intermixed with calls to the Python/C API, which is unrewarding. Moreover, the Python/C API evolves with each new version of Python, which results in a proliferation of version-dependent code branches. In this contribution we present the re-implementation of PyNEST in the Cython language, a superset of Python that additionally supports the declaration of C/C++ types for variables and class attributes, and provides a convenient foreign function interface (FFI) for invoking C/C++ routines (Behnel et al., 2011). Code generation via Cython allows the production of smaller and more maintainable bindings, including increased compatibility with all supported Python releases without additional burden for NEST developers. Furthermore, this novel approach opens up the possibility to support alternative implementations of the Python language at no cost given a functional Cython back-end for the corresponding implementation, and also enables cross-compilation of Python bindings for embedded systems and supercomputers alike.


International Workshop on Brain-Inspired Computing | 2013

Integrating Brain Structure and Dynamics on Supercomputers

S. J. van Albada; Susanne Kunkel; Abigail Morrison; Markus Diesmann

Large-scale simulations of neuronal networks provide a unique view onto brain dynamics, complementing experiments, small-scale simulations, and theory. They enable the investigation of integrative models to arrive at a multi-scale picture of brain dynamics relating macroscopic imaging measures to the microscopic dynamics. Recent years have seen rapid development of the necessary simulation technology. We give an overview of design features of the NEural Simulation Tool (NEST) that enable simulations of spiking point neurons to be scaled to hundreds of thousands of processors. The performance of supercomputing applications is traditionally assessed using scalability plots. We discuss reasons why such measures should be interpreted with care in the context of neural network simulations. The scalability of neural network simulations on available supercomputers is limited by memory constraints rather than computational speed. This calls for future generations of supercomputers that are more attuned to the requirements of memory-intensive neuroscientific applications.


BMC Neuroscience | 2014

Calcium current improves coincidence detection of the LIF model

Yansong Chua; Moritz Helias; Abigail Morrison

Dendritic spikes are known to improve the efficacy of synaptic inputs in causing action potentials [1]. The calcium spike at distal apical dendrites of layer 5 pyramidal neurons has been observed in vitro and argued to support the propagation of synaptic inputs from distal tufts to the soma [2]. When combined with a back-propagating action potential, a smaller distal current is sufficient to trigger a calcium spike [3]. Recently, it has also been shown in vivo that dendritic spikes contribute to the neuronal activity [4,5].

Calcium spikes have been modeled in multi-compartment point neuron models using first-order kinetics [6]. Here we show that calcium spikes, in the regime of large synchronous inputs on top of a background of weakly fluctuating synaptic noise, can be well approximated by a threshold-triggered current of fixed waveform. The exact contribution of the calcium spike to the somatic membrane potential can then be analytically derived. Accurate predictions are only obtained if correlations between the membrane potential and synaptic conductances are taken into account [7].

Comparing neuron models with and without calcium dynamics, we find that the calcium current increases the sensitivity of the neuron's spiking response to sufficiently large coincident input. In numerical simulations carried out with NEST [8], we investigate the effect of the jitter of close-to-synchronous inputs on the probability of eliciting a calcium spike. With increased jitter, fewer calcium spikes are elicited and their average amplitude decreases.
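The "threshold-triggered current of fixed waveform" approximation can be illustrated with a toy leaky integrate-and-fire neuron (a hedged sketch; all parameter values are invented and not fitted to layer-5 pyramidal neuron data). A brief coincident input that on its own fails to reach the firing threshold can still elicit a somatic spike once it crosses the lower calcium threshold and triggers the fixed-waveform current:

```python
import math

def simulate(inputs, dt=0.1, tau_m=10.0, v_th=1.0,
             ca_th=0.8, ca_amp=40.0, ca_tau=5.0):
    """Toy LIF neuron with a threshold-triggered calcium current of fixed
    exponential waveform. Returns the list of spike-time indices."""
    v, ca, spikes, ca_active = 0.0, 0.0, [], False
    for t, i_ext in enumerate(inputs):
        ca *= math.exp(-dt / ca_tau)       # fixed-waveform exponential decay
        if not ca_active and v >= ca_th:   # calcium threshold crossed
            ca += ca_amp
            ca_active = True
        v += dt / tau_m * (-v + i_ext + ca)
        if v >= v_th:                      # somatic spike and reset
            spikes.append(t)
            v, ca, ca_active = 0.0, 0.0, False
    return spikes
```

Running the same input pulse with `ca_amp=0.0` removes the calcium contribution, which is the kind of with/without comparison used in the study to probe coincidence detection.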


BMC Neuroscience | 2015

ROS-MUSIC toolchain for spiking neural network simulations in a robotic environment

Philipp Weidel; Renato Duarte; Karolína Korvasová; Jenia Jitsev; Abigail Morrison

Studying a functional, biologically plausible neural network that performs a particular task is highly relevant for progress in both neuroscience and machine learning. Most tasks used to test the function of a simulated neural network are still very artificial and thus too narrow, providing only little insight into the true value of a particular neural network architecture under study. For example, many models of reinforcement learning in the brain rely on a discrete set of environmental states and actions [1]. In order to move closer towards more realistic models, modeling studies have to be conducted in more realistic environments that provide complex sensory input about those states. A way to achieve this is to provide an interface between a robotic simulation and a neural network simulation, such that a neural network controller gains access to a realistic agent acting in a complex environment that can be flexibly designed by the experimentalist.

To create such an interface, we present a toolchain, consisting of existing, robust tools, that forms the missing link between robotics and neuroscience, with the goal of connecting robotic simulators with neural simulators. This toolchain is a generic solution able to combine various robotic simulators with various neural simulators by connecting the Robot Operating System (ROS) [2] with the Multi-Simulation Coordinator (MUSIC) [3]. ROS is the most widely used middleware in the robotics community, with interfaces to robotic simulators such as Gazebo, Morse and Webots, and it additionally allows users to specify their own robots and sensors in great detail with the Unified Robot Description Format (URDF). MUSIC is a communicator between the major state-of-the-art neural simulators NEST, MOOSE and NEURON. By implementing an interface between ROS and MUSIC, our toolchain combines two powerful middlewares and is therefore a multi-purpose generic solution.

One main purpose is the translation of continuous sensory data, obtained from the sensors of a virtual robot, into spiking data which is passed to a neural simulator of choice. The translation from continuous data to spiking data is performed using the Neural Engineering Framework (NEF) proposed by Eliasmith & Anderson [4]. By sending motor commands from the neural simulator back to the robotic simulator, the interface forms a closed loop between the virtual robot and its spiking neural network controller.

To demonstrate the functionality of the toolchain and the interplay between all its components, we implemented one of the vehicles described by Braitenberg [5] using the robotic simulator Gazebo and the neural simulator NEST.

In future work, we aim to create a testbench, consisting of various environments for reinforcement learning algorithms, to provide a validation tool for the functionality of biologically motivated models of learning.
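The sensor-to-spike translation step can be illustrated with a drastically simplified rate-coding stand-in (a sketch only; the toolchain itself uses the NEF, and the function names and parameters here are invented). A continuous sensor value in [0, 1] modulates the probability of emitting a spike per time step, and the original activation can be recovered from the spike count:

```python
import random

random.seed(0)

def encode(sensor_values, max_rate=100.0, dt=0.001):
    """Translate a continuous sensor trace (values in [0, 1]) into a list
    of spike-time indices by probabilistic rate coding."""
    spikes = []
    for t, x in enumerate(sensor_values):
        if random.random() < max(0.0, min(1.0, x)) * max_rate * dt:
            spikes.append(t)
    return spikes

def decode(spikes, n_steps, max_rate=100.0, dt=0.001):
    """Recover the mean activation from the spike count."""
    return len(spikes) / (n_steps * max_rate * dt)
```

In the closed loop described above, the decoded motor commands would flow in the opposite direction, from the neural simulator back to the robot's actuators.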


bioRxiv | 2018

Inferring health conditions from fMRI-graph data

PierGianLuca Porta Mana; Claudia Bachmann; Abigail Morrison

Automated classification methods for disease diagnosis are currently in the limelight, especially for imaging data. Classification does not fully meet a clinician's needs, however: in order to combine the results of multiple tests and decide on a course of treatment, a clinician needs the likelihood of a given health condition rather than the binary classification yielded by such methods. We illustrate how likelihoods can be derived step by step from first principles and approximations, and how they can be assessed and selected, using as a working example fMRI data from a publicly available data set containing schizophrenic and healthy control subjects. We start from the basic assumption of partial exchangeability, and then introduce the notion of sufficient statistics and the “method of translation” (Edgeworth, 1898) combined with conjugate priors. This method can be used to construct a likelihood that can be used to compare different data-reduction algorithms. Despite the simplifications and possibly unrealistic assumptions used to illustrate the method, we obtain classification results comparable to previous, more realistic studies on schizophrenia, whilst yielding likelihoods that can naturally be combined with the results of other diagnostic tests.
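The difference between a hard classification and a likelihood that a clinician can combine with other tests can be made concrete with a textbook conjugate-prior example (a drastically simplified stand-in for the paper's construction, not its actual model: here a single graph summary statistic per subject is assumed Gaussian with known variance and a conjugate normal prior on the mean):

```python
import math

def posterior_predictive(x, data, sigma=1.0, mu0=0.0, tau0=10.0):
    """Posterior-predictive density of a new observation x, given training
    data, under a Gaussian likelihood with known variance sigma^2 and a
    conjugate normal prior N(mu0, tau0^2) on the mean."""
    n = len(data)
    xbar = sum(data) / n
    prec = 1.0 / tau0**2 + n / sigma**2            # posterior precision of the mean
    mu_n = (mu0 / tau0**2 + n * xbar / sigma**2) / prec
    var_n = 1.0 / prec + sigma**2                  # predictive variance
    return math.exp(-(x - mu_n)**2 / (2 * var_n)) / math.sqrt(2 * math.pi * var_n)

def likelihood_ratio(x, healthy, patients, **kw):
    """Evidence ratio for 'patient' vs. 'healthy' -- handed to the
    clinician instead of a hard binary label."""
    return posterior_predictive(x, patients, **kw) / posterior_predictive(x, healthy, **kw)
```

A ratio above 1 favors the patient group and below 1 the control group, and unlike a binary classifier output it can be multiplied with the evidence from independent diagnostic tests.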


Frontiers in Neuroinformatics | 2018

Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

Christian Nowke; Sandra Diaz-Pier; Benjamin Weyers; Bernd Hentschel; Abigail Morrison; Torsten W. Kuhlen; Alexander Peyser

Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is computed.


European Journal of Neuroscience | 2018

Exploring the role of striatal D1 and D2 medium spiny neurons in action selection using a virtual robotic framework.

Jyotika Bahuguna; Abigail Morrison; Philipp Weidel

The basal ganglia have been hypothesized to be involved in action selection, i.e. resolving competition between simultaneously activated motor programs. It has been shown that the direct pathway facilitates action execution whereas the indirect pathway inhibits it. However, as the pathways are both active during an action, it remains unclear whether their role is co-operative or competitive. In order to investigate this issue, we developed a striatal model consisting of D1 and D2 medium spiny neurons (MSNs) and interfaced it to a simulated robot moving in an environment. We demonstrate that this model is able to reproduce key behavioral features of several experiments involving optogenetic manipulation of the striatum, such as freezing and ambulation. We then investigate the interaction of D1- and D2-MSNs. We find that their fundamental relationship is co-operative within a channel and competitive between channels; this turns out to be crucial for action selection. However, individual pairs of D1- and D2-MSNs may exhibit predominantly competition or co-operation depending on their distance, and D1- and D2-MSN population activity can alternate between co-operation and competition modes during stimulation. Additionally, our results show that D2–D2 connectivity between channels is necessary for effective resolution of competition; in its absence, a conflict of two motor programs typically results in neither being selected.
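The freezing/ambulation observations and channel-based selection can be caricatured with a rate-level toy (purely illustrative, not the paper's spiking model; function names, weights and thresholds are invented): the direct pathway (D1) provides a "go" signal, the indirect pathway (D2) a brake, and an action channel is executed only if its net drive is positive:

```python
def motor_output(d1_rate, d2_rate, gain=1.0):
    """Net 'go' signal for one action channel: the direct pathway (D1)
    facilitates, the indirect pathway (D2) suppresses."""
    return max(0.0, gain * (d1_rate - d2_rate))

def behave(d1_rate, d2_rate):
    """Optogenetic-style manipulation = forcing one population's rate up."""
    speed = motor_output(d1_rate, d2_rate)
    return "freezing" if speed == 0.0 else "ambulation"

def select(channels):
    """channels: list of (d1_rate, effective_d2_rate) per action channel.
    The action with the largest positive net drive is executed; if every
    channel is vetoed by its D2 brake, no action is selected."""
    drives = [motor_output(d1, d2) for d1, d2 in channels]
    best = max(range(len(drives)), key=drives.__getitem__)
    return best if drives[best] > 0.0 else None
```

In this picture, cross-channel D2-D2 connectivity would lower the effective D2 rate of the winning channel during a conflict; without that disinhibition, both channels can remain vetoed, mirroring the finding that neither program is selected.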

Collaboration


Dive into Abigail Morrison's collaborations.

Top Co-Authors

Hans E. Plesser

Norwegian University of Life Sciences


Philipp Weidel

Allen Institute for Brain Science


Anna Lührs

Forschungszentrum Jülich


Jakob Jordan

Allen Institute for Brain Science
