Andrew P. Davison
Centre national de la recherche scientifique
Publications
Featured research published by Andrew P. Davison.
Journal of Computational Neuroscience | 2007
Romain Brette; Michelle Rudolph; Ted Carnevale; Michael L. Hines; David Beeman; James M. Bower; Markus Diesmann; Abigail Morrison; Philip H. Goodman; Frederick C. Harris; Milind Zirpe; Thomas Natschläger; Dejan Pecevski; Bard Ermentrout; Mikael Djurfeldt; Anders Lansner; Olivier Rochel; Thierry Viéville; Eilif Muller; Andrew P. Davison; Sami El Boustani; Alain Destexhe
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then give an overview of the different simulators and simulation environments presently available (restricted to those that are freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley-type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource that facilitates identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks.
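To make the distinction between strategies concrete, the following is a minimal, self-contained sketch of a clock-driven (fixed-time-step) simulation of a current-based integrate-and-fire network; all parameter values are illustrative and are not taken from the benchmarks in the paper.

```python
import numpy as np

# Minimal clock-driven simulation of a current-based leaky integrate-and-fire
# (LIF) network. All parameter values are illustrative only.
rng = np.random.default_rng(42)

n, dt, t_stop = 100, 0.1, 200.0                   # neurons, time step (ms), duration (ms)
tau_m, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0
w = 0.5 * rng.random((n, n))                      # excitatory weights: voltage jump (mV) per spike
np.fill_diagonal(w, 0.0)

v = np.full(n, v_rest)
spikes = []                                       # (time, neuron index) pairs

for step in range(int(t_stop / dt)):
    i_ext = 16.0 + 2.0 * rng.standard_normal(n)   # noisy external drive (mV)
    v += dt / tau_m * (v_rest - v + i_ext)        # forward-Euler membrane update
    fired = v >= v_thresh
    if fired.any():
        t = step * dt
        spikes.extend((t, idx) for idx in np.nonzero(fired)[0])
        v += w[:, fired].sum(axis=1)              # deliver current-based synaptic jumps
        v[fired] = v_reset

print(f"{len(spikes)} spikes in {t_stop} ms")
```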
Frontiers in Neuroinformatics | 2008
Andrew P. Davison; Daniel Brüderle; Jochen Martin Eppler; Jens Kremkow; Eilif Muller; Dejan Pecevski; Laurent Perrinet; Pierre Yger
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
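A minimal PyNN script of the kind described above might look as follows (PyNN 0.8-style API; the network and its parameters are illustrative). Switching simulators only changes the backend import line, e.g. pyNN.neuron instead of pyNN.nest.

```python
# Minimal PyNN script; parameters are illustrative only.
import pyNN.nest as sim                           # the only simulator-specific line

sim.setup(timestep=0.1)                           # ms

stim = sim.Population(20, sim.SpikeSourcePoisson(rate=50.0))
cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
cells.record('spikes')

sim.Projection(stim, cells,
               sim.FixedProbabilityConnector(p_connect=0.2),
               synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))

sim.run(1000.0)                                   # ms

segment = cells.get_data().segments[0]
print(f"{sum(len(st) for st in segment.spiketrains)} spikes recorded")
sim.end()
```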
PLOS Computational Biology | 2010
Padraig Gleeson; Sharon M. Crook; Robert C. Cannon; Michael L. Hines; Guy O. Billings; Matteo Farinella; Thomas M. Morse; Andrew P. Davison; Subhasis Ray; Upinder S. Bhalla; Simon R. Barnes; Yoana Dimitrova; R. Angus Silver
Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML a number of models of voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop a highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.
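To illustrate the declarative, XML-based style of model description, here is a deliberately simplified, NeuroML-flavoured fragment parsed with Python's standard library; the element and attribute names are schematic placeholders rather than the actual NeuroML schema.

```python
# Illustrative only: parsing a simplified, NeuroML-flavoured XML fragment with
# the standard library. Element and attribute names are schematic, not the real schema.
import xml.etree.ElementTree as ET

fragment = """
<cell id="example_cell">
  <channel id="na_channel" ion="na" gmax="120.0" erev="50.0"/>
  <channel id="k_channel"  ion="k"  gmax="36.0"  erev="-77.0"/>
  <segment id="soma" length="20.0" diameter="20.0"/>
</cell>
"""

cell = ET.fromstring(fragment)
print("cell:", cell.get("id"))
for ch in cell.findall("channel"):
    # A simulator-specific importer would map these declarative parameters
    # onto its own channel objects at this point.
    print(f"  channel {ch.get('id')}: gmax={ch.get('gmax')} mS/cm2, erev={ch.get('erev')} mV")
```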
Frontiers in Neuroinformatics | 2009
Michael L. Hines; Andrew P. Davison; Eilif Muller
The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost, because all existing models written in Hoc, including graphical user interface tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the xml module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications.
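A short example session of the kind this integration enables, driving NEURON from Python (values are illustrative):

```python
# Driving NEURON from Python: a single HH soma with a current-clamp stimulus.
from neuron import h
h.load_file("stdrun.hoc")                 # standard run system (continuerun, etc.)

soma = h.Section(name="soma")
soma.L = soma.diam = 20.0                 # µm
soma.insert("hh")                         # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5.0, 20.0, 0.2   # ms, ms, nA

t, v = h.Vector(), h.Vector()
t.record(h._ref_t)                        # record time
v.record(soma(0.5)._ref_v)                # record somatic membrane potential

h.finitialize(-65.0)                      # mV
h.continuerun(40.0)                       # ms
print(f"peak Vm = {max(v):.1f} mV")
```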
Neuroinformatics | 2003
Michele Migliore; Thomas M. Morse; Andrew P. Davison; Luis N. Marenco; Gordon M. Shepherd; Michael L. Hines
Computational neuroscience as a scientific discipline must provide for the ready testing of published models by others in the field. Unfortunately, this requirement has rarely been fulfilled. When exact reproduction of a model simulation is achieved, it is often a long and difficult process. Too often, missing or typographically incorrect equations and parameter values have made it difficult to explore or build upon published models. Compounding this difficulty is the proliferation of platforms and operating systems that are incompatible with the authors' original computing environment. Because of these problems, most models are never subjected to the rigorous testing by others in the field that is a hallmark of the scientific method. This not only impedes validation of a model, but also prevents a deeper understanding of its inner workings, especially through modification of the parameters. Furthermore, modular pieces of the model, e.g. ion channels or the morphology of a cell, cannot be reused to build new models and propel research forward. ModelDB (http://senselab.med.yale.edu/modeldb) is intended to address these issues (Peterson et al., 1996; Shepherd et al., 1998). ModelDB is a database of computational models, either classics in the field or published in recent years. It focuses on models for different types of neurons and presently contains over 60 models for 15 neuron types. In addition to compartmental models, it contains models ranging from ion channels and receptors, through axons, dendrites and whole neurons, to networks. Models can be accessed by author, model name, neuron type, concept (e.g. synaptic plasticity, pattern recognition), or by simulation environment. ModelDB is a member of a major neuroscience database collection called SenseLab. Each SenseLab database has an easily extensible structure achieved through the EAV/CR (Entity-Attribute-Value with Classes and Relationships) data schema (Nadkarni et al., 1999; Miller et al., 2001). ModelDB is integrated with NeuronDB (Marenco et al., 1999), another SenseLab database that stores neuronal properties derived from the neuroscience literature (http://senselab.med.yale.edu/senselab/NeuronDB). Use of the models is free to all. Contributing to the database is also open to all. Contributions are tested for quality-control purposes before being made public. Here we describe how to find, run, and submit models to ModelDB.
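The Entity-Attribute-Value pattern underlying the EAV/CR schema can be illustrated with a toy example; this is a generic sketch of the pattern, not SenseLab's actual implementation.

```python
# Toy illustration of the Entity-Attribute-Value (EAV) idea behind schemas such
# as EAV/CR: new attributes can be added without changing the table layout.
# This is NOT SenseLab's actual implementation, just the general pattern.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE entity (id INTEGER PRIMARY KEY, class TEXT, name TEXT);
    CREATE TABLE eav    (entity_id INTEGER, attribute TEXT, value TEXT);
""")

db.execute("INSERT INTO entity VALUES (1, 'Model', 'CA1 pyramidal neuron model')")
db.executemany("INSERT INTO eav VALUES (?, ?, ?)", [
    (1, "neuron_type", "hippocampal CA1 pyramidal cell"),
    (1, "concept", "synaptic plasticity"),
    (1, "simulation_environment", "NEURON"),
])

rows = db.execute("""
    SELECT e.name, a.attribute, a.value
    FROM entity e JOIN eav a ON a.entity_id = e.id
    WHERE a.attribute = 'concept'
""").fetchall()
print(rows)   # models annotated with a 'concept' attribute
```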
Biological Cybernetics | 2011
Daniel Brüderle; Mihai A. Petrovici; Bernhard Vogginger; Matthias Ehrlich; Thomas Pfeil; Sebastian Millner; Andreas Grübl; Karsten Wendt; Eric Müller; Marc-Olivier Schwartz; Dan Husmann de Oliveira; Sebastian Jeltsch; Johannes Fieres; Moritz Schilling; Paul Müller; Oliver Breitwieser; Venelin Petkov; Lyle Muller; Andrew P. Davison; Pradeep Krishnamurthy; Jens Kremkow; Mikael Lundqvist; Eilif Muller; Johannes Partzsch; Stefan Scholze; Lukas Zühl; Christian Mayr; Alain Destexhe; Markus Diesmann; Tobias C. Potjans
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware–software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and forms the basis for maturing the model-to-hardware mapping software, whose functionality and flexibility are demonstrated with a variety of experimental results.
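The kind of check performed by such an evaluation scheme can be sketched as follows; the data layout and the rate-based metric are simplified assumptions, not the benchmark library's actual code.

```python
# Schematic of the kind of check an evaluation scheme can run: compare per-neuron
# firing rates from a hardware (or virtual-hardware) run against a reference
# software simulation. The metric and data layout are simplified assumptions.
import numpy as np

def firing_rates(spike_times_per_neuron, t_stop_ms):
    """Mean firing rate in Hz for each neuron, from lists of spike times in ms."""
    return np.array([1000.0 * len(st) / t_stop_ms for st in spike_times_per_neuron])

def compare_runs(hw_spikes, ref_spikes, t_stop_ms, tolerance_hz=1.0):
    """Return (max absolute rate difference in Hz, pass/fail against a tolerance)."""
    diff = np.abs(firing_rates(hw_spikes, t_stop_ms) - firing_rates(ref_spikes, t_stop_ms))
    return diff.max(), bool((diff <= tolerance_hz).all())

# Toy data: 3 neurons, 1 s of activity.
hw  = [[12.0, 310.5, 650.0], [80.0],        [5.0, 500.0, 995.0]]
ref = [[10.0, 305.0, 655.0], [75.0, 900.0], [4.0, 498.0, 990.0]]
print(compare_runs(hw, ref, t_stop_ms=1000.0))
```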
Computing in Science and Engineering | 2012
Andrew P. Davison
Published scientific research that relies on numerical computations is too often not reproducible. For computational research to become consistently and reliably reproducible, the process must become easier to achieve, as part of day-to-day research. A combination of best practices and automated tools can make it easier to create reproducible research.
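As an illustration of the kind of automation argued for here (and implemented far more thoroughly by dedicated tools), a script can record its own provenance alongside its results; the snippet below is a generic sketch, not any particular tool's API.

```python
# Generic sketch of automated provenance capture: record when, on what, and with
# which parameters and code version a computation was run. Not any specific tool's API.
import json, platform, subprocess, sys, time

def capture_provenance(parameters, outfile="provenance.json"):
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "python": sys.version,
        "platform": platform.platform(),
        "parameters": parameters,
    }
    try:  # record the exact code version if the script lives in a Git repository
        record["git_commit"] = subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip()
    except (OSError, subprocess.CalledProcessError):
        record["git_commit"] = None
    with open(outfile, "w") as f:
        json.dump(record, f, indent=2)
    return record

capture_provenance({"n_neurons": 100, "seed": 42, "t_stop_ms": 1000.0})
```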
Frontiers in Neuroinformatics | 2009
Daniel Brüderle; Eric Müller; Andrew P. Davison; Eilif Muller; Johannes Schemmel; K. Meier
Neuromorphic hardware systems provide new possibilities for the neuroscience modeling community. Due to the intrinsic parallelism of the micro-electronic emulation of neural computation, such models are highly scalable without a loss of speed. However, the communities of software simulator users and of neuromorphic engineers in neuroscience remain rather disjoint. We present a software concept that makes it possible to establish such hardware devices as valuable modeling tools. It is based on the integration of the hardware interface into a simulator-independent language, which allows unified experiment descriptions that can be run on various simulation platforms without modification, giving experiment portability and greatly simplifying the quantitative comparison of hardware and simulator results. We introduce an accelerated neuromorphic hardware device and describe the implementation of the proposed concept for this system. An example setup is presented, together with results acquired using both the hardware system and a software simulator.
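The portability this concept provides can be sketched as follows: the same network-building function is run unchanged on several PyNN backends. The software backends named here exist; a hardware backend module would slot into the same loop under whatever name a given device exposes.

```python
# Sketch of backend portability: one network function, several PyNN backends.
import importlib

def run_network(sim, t_stop=1000.0):
    sim.setup(timestep=0.1)
    stim = sim.Population(20, sim.SpikeSourcePoisson(rate=50.0))
    cells = sim.Population(100, sim.IF_cond_exp())
    sim.Projection(stim, cells, sim.FixedProbabilityConnector(p_connect=0.2),
                   synapse_type=sim.StaticSynapse(weight=0.005, delay=1.0))
    cells.record('spikes')
    sim.run(t_stop)
    n_spikes = sum(len(st) for st in cells.get_data().segments[0].spiketrains)
    sim.end()
    return n_spikes

# A hardware backend module would be added to this list under its own name.
for backend in ("pyNN.nest", "pyNN.neuron"):
    sim = importlib.import_module(backend)
    print(backend, "->", run_network(sim), "spikes")
```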
International Journal of Neural Systems | 2006
Mathilde Badoual; Quan Zou; Andrew P. Davison; Michael Rudolph; Thierry Bal; Yves Frégnac; Alain Destexhe
Spike-timing dependent plasticity (STDP) is a form of associative synaptic modification which depends on the relative timing of pre- and post-synaptic spikes. The biophysical mechanisms underlying this form of plasticity are currently not known. We present here a biophysical model which captures the characteristics of STDP, such as its frequency dependence and the effects of spike pair or spike triplet interactions. We also make links with other well-known plasticity rules. A simplified phenomenological model is also derived, which should be useful for fast numerical simulation and for analytical investigation of the impact of STDP at the network level.
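For reference, the standard pair-based phenomenological STDP rule expresses the weight change as a function of the pre/post spike-time difference; the parameter values below are illustrative, and this is the textbook rule rather than necessarily the exact phenomenological model derived in the paper.

```python
# Standard pair-based phenomenological STDP window: weight change as a function
# of dt = t_post - t_pre. Parameter values are illustrative only.
import numpy as np

def stdp_weight_change(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """dt_ms = t_post - t_pre. Positive dt (pre before post) -> potentiation (LTP)."""
    dt_ms = np.asarray(dt_ms, dtype=float)
    return np.where(dt_ms >= 0.0,
                    a_plus * np.exp(-dt_ms / tau_plus),       # LTP branch
                    -a_minus * np.exp(dt_ms / tau_minus))     # LTD branch

print(stdp_weight_change([-40.0, -10.0, 10.0, 40.0]))
```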
Brain Research Bulletin | 2000
Andrew P. Davison; Jianfeng Feng; David Brown
We have developed two-, three- and four-compartment models of a mammalian olfactory bulb mitral cell as a reduction of a complex 286-compartment model [1]. A minimum of three compartments, representing the soma, the secondary (basal) dendrites and the glomerular tuft of the primary dendrite, is required to adequately reproduce the behaviour of the full model over a broad range of firing rates. Adding a fourth compartment to represent the shaft of the primary dendrite gives a substantial further improvement. The reduced models reproduce behaviours of the full model that were not used in fitting the model parameters. They also run 75 or more times faster than the full model, making their use in large, realistic network models of the olfactory bulb practical.
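The principle behind such reductions can be sketched with a minimal two-compartment model: two passive compartments coupled by an axial conductance, integrated with forward Euler. The parameters are illustrative, not those of the mitral-cell models.

```python
# Minimal two-compartment sketch: passive soma and dendrite coupled by an axial
# conductance, integrated with forward Euler. Parameters are illustrative only.
import numpy as np

dt, t_stop = 0.025, 100.0                # ms
c_m = 1.0                                # membrane capacitance, µF/cm²
g_leak, e_leak = 0.05, -65.0             # leak conductance (mS/cm²) and reversal (mV)
g_couple = 0.2                           # soma-dendrite coupling conductance, mS/cm²
i_inj = 1.5                              # current injected into the soma, µA/cm²

v_soma, v_dend = e_leak, e_leak
trace = []
for step in range(int(t_stop / dt)):
    i_axial = g_couple * (v_dend - v_soma)            # current flowing dendrite -> soma
    dv_s = (g_leak * (e_leak - v_soma) + i_axial + i_inj) / c_m
    dv_d = (g_leak * (e_leak - v_dend) - i_axial) / c_m
    v_soma += dt * dv_s
    v_dend += dt * dv_d
    trace.append((step * dt, v_soma, v_dend))

print(f"steady state: soma {v_soma:.1f} mV, dendrite {v_dend:.1f} mV")
```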