Eilif Muller
École Polytechnique Fédérale de Lausanne
Publications
Featured research published by Eilif Muller.
Journal of Computational Neuroscience | 2007
Romain Brette; Michelle Rudolph; Ted Carnevale; Michael L. Hines; David Beeman; James M. Bower; Markus Diesmann; Abigail Morrison; Philip H. Goodman; Frederick C. Harris; Milind Zirpe; Thomas Natschläger; Dejan Pecevski; Bard Ermentrout; Mikael Djurfeldt; Anders Lansner; Olivier Rochel; Thierry Viéville; Eilif Muller; Andrew P. Davison; Sami El Boustani; Alain Destexhe
We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We give an overview of the different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin–Huxley-type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the codes are made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks.
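To make the distinction between clock-driven and event-driven strategies concrete, the sketch below shows a minimal clock-driven update loop for a leaky integrate-and-fire neuron in Python. It is an illustration only, not the paper's benchmark code (which is distributed with the review), and all parameter values are arbitrary; note how threshold crossings are only detected on the fixed time grid, which is the source of the timing-precision issues discussed above.

    import numpy as np

    # Illustrative clock-driven (fixed time step) leaky integrate-and-fire neuron.
    # All parameters are arbitrary and chosen only for demonstration.
    dt = 0.1          # time step, ms: the state advances on a fixed grid
    tau_m = 20.0      # membrane time constant, ms
    v_rest, v_reset, v_thresh = -65.0, -65.0, -50.0   # mV
    t_sim = 1000.0    # simulated time, ms

    rng = np.random.default_rng(42)
    v = v_rest
    spike_times = []

    for step in range(int(t_sim / dt)):
        i_drive = rng.normal(1.2, 0.5)                   # noisy input drive, mV/ms
        v += (dt / tau_m) * (v_rest - v) + dt * i_drive  # forward-Euler membrane update
        if v >= v_thresh:                                # threshold checked once per step, so
            spike_times.append(step * dt)                # spike times are quantized to the grid
            v = v_reset

    print("number of spikes:", len(spike_times))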
Frontiers in Neuroinformatics | 2008
Andrew P. Davison; Daniel Brüderle; Jochen Martin Eppler; Jens Kremkow; Eilif Muller; Dejan Pecevski; Laurent Perrinet; Pierre Yger
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
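As a flavour of what such a simulator-agnostic script looks like, here is a minimal sketch in the PyNN 0.8+ style; the cell model, connector and all parameter values are illustrative choices, not taken from the paper, and the exact API differs slightly between PyNN versions.

    import pyNN.nest as sim   # swap for pyNN.neuron, pyNN.brian2, ... without changing the rest

    sim.setup(timestep=0.1)   # ms

    # Conductance-based integrate-and-fire population driven by Poisson spike sources
    cells = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="cells")
    noise = sim.Population(100, sim.SpikeSourcePoisson(rate=400.0), label="noise")

    sim.Projection(noise, cells,
                   sim.OneToOneConnector(),
                   sim.StaticSynapse(weight=0.02, delay=1.0),   # weight in µS for IF_cond_exp
                   receptor_type="excitatory")

    cells.record("spikes")
    sim.run(1000.0)           # ms

    spiketrains = cells.get_data().segments[0].spiketrains
    print("recorded", sum(len(st) for st in spiketrains), "spikes")
    sim.end()

The same script should run on any of the supported backends by changing only the import line, which is the point of the common interface described above.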
Cell | 2015
Henry Markram; Eilif Muller; Srikanth Ramaswamy; Michael W. Reimann; Marwan Abdellah; Carlos Aguado Sanchez; Anastasia Ailamaki; Lidia Alonso-Nanclares; Nicolas Antille; Selim Arsever; Guy Antoine Atenekeng Kahou; Thomas K. Berger; Ahmet Bilgili; Nenad Buncic; Athanassia Chalimourda; Giuseppe Chindemi; Jean Denis Courcol; Fabien Delalondre; Vincent Delattre; Shaul Druckmann; Raphael Dumusc; James Dynes; Stefan Eilemann; Eyal Gal; Michael Emiel Gevaert; Jean Pierre Ghobril; Albert Gidon; Joe W. Graham; Anirudh Gupta; Valentin Haenel
We present a first-draft digital reconstruction of the microcircuitry of somatosensory cortex of juvenile rat. The reconstruction uses cellular and synaptic organizing principles to algorithmically reconstruct detailed anatomy and physiology from sparse experimental data. An objective anatomical method defines a neocortical volume of 0.29 ± 0.01 mm³ containing ~31,000 neurons, and patch-clamp studies identify 55 layer-specific morphological and 207 morpho-electrical neuron subtypes. When digitally reconstructed neurons are positioned in the volume and synapse formation is restricted to biological bouton densities and numbers of synapses per connection, their overlapping arbors form ~8 million connections with ~37 million synapses. Simulations reproduce an array of in vitro and in vivo experiments without parameter tuning. Additionally, we find a spectrum of network states with a sharp transition from synchronous to asynchronous activity, modulated by physiological mechanisms. The spectrum of network states, dynamically reconfigured around this transition, supports diverse information processing strategies.
Frontiers in Neuroinformatics | 2009
Michael L. Hines; Andrew P. Davison; Eilif Muller
The NEURON simulation program now allows Python to be used, alone or in combination with NEURON's traditional Hoc interpreter. Adding Python to NEURON has the immediate benefit of making available a very extensive suite of analysis tools written for engineering and science. It also catalyzes NEURON software development by offering users a modern programming tool that is recognized for its flexibility and power to create and maintain complex programs. At the same time, nothing is lost because all existing models written in Hoc, including graphical user interface tools, continue to work without change and are also available within the Python context. An example of the benefits of Python availability is the use of the xml module in implementing NEURON's Import3D and CellBuild tools to read MorphML and NeuroML model specifications.
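As a simple illustration of NEURON driven entirely from Python (an illustrative script, not one from the paper; it assumes a reasonably recent NEURON build with the neuron Python module installed):

    from neuron import h
    h.load_file("stdrun.hoc")            # standard run system: h.run(), h.tstop, ...

    # Single-compartment Hodgkin-Huxley soma
    soma = h.Section(name="soma")
    soma.L = soma.diam = 20.0            # µm
    soma.insert("hh")

    # Current-clamp stimulus
    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 5.0, 20.0, 0.2   # ms, ms, nA

    # Record membrane potential and time into NEURON Vectors
    v, t = h.Vector(), h.Vector()
    v.record(soma(0.5)._ref_v)
    t.record(h._ref_t)

    h.finitialize(-65.0)
    h.tstop = 40.0
    h.run()

    # With Python available, the recordings can go straight into standard analysis tools
    print("peak Vm: %.1f mV" % v.max())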
Frontiers in Neuroinformatics | 2008
Jochen Martin Eppler; Moritz Helias; Eilif Muller; Markus Diesmann; Marc-Oliver Gewaltig
The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10⁴ neurons and 10⁷ to 10⁹ synapses. NEST is implemented in C++ and can be used on a wide range of architectures, from single-core laptops through multi-core desktop computers to supercomputers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in computational neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
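A minimal PyNEST session of the kind described might look like the sketch below. It is illustrative only; parameter values are arbitrary, and model names follow recent NEST releases (NEST 3 calls the recording device spike_recorder, whereas the NEST 2.x series contemporary with this paper called it spike_detector).

    import nest

    nest.ResetKernel()

    # A small population of integrate-and-fire point neurons
    neurons = nest.Create("iaf_psc_alpha", 100)

    # Poisson background drive and a recording device
    noise = nest.Create("poisson_generator", params={"rate": 8000.0})   # spikes/s
    recorder = nest.Create("spike_recorder")    # "spike_detector" in NEST 2.x

    nest.Connect(noise, neurons, syn_spec={"weight": 10.0, "delay": 1.0})  # pA, ms
    nest.Connect(neurons, recorder)

    nest.Simulate(1000.0)   # ms
    print("total spikes recorded:", recorder.get("n_events"))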
Physical Review E | 2011
Farzad Farkhooi; Eilif Muller; Martin P. Nawrot
Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their intervals of events. Here, we employ a master equation for generalized non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of neurons with spike-frequency adaptation, this results in the regularization of the population activity and an enhanced postsynaptic signal decoding. We confirm our theoretical results in a population of cortical neurons recorded in vivo.
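The specific master-equation treatment is not reproduced here, but the kind of non-renewal dynamics being analyzed can be summarized by a generic spike-frequency adaptation model (the notation below is illustrative, not the paper's): each event increments a slow variable that transiently lowers the event rate, so successive inter-event intervals become correlated.

    \tau_a \, \frac{da}{dt} = -a(t) + q \sum_{k} \delta(t - t_k),
    \qquad
    \lambda(t) = f\bigl( I(t) - a(t) \bigr),

where the t_k are the event times, q is the per-event adaptation increment, \tau_a is the slow adaptation time constant, I(t) is the input, and f is a monotonic rate-intensity function.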
Frontiers in Neural Circuits | 2015
Srikanth Ramaswamy; Jean-Denis Courcol; Marwan Abdellah; Stanisław Adaszewski; Nicolas Antille; Selim Arsever; Guy Atenekeng; Ahmet Bilgili; Yury Brukau; Athanassia Chalimourda; Giuseppe Chindemi; Fabien Delalondre; Raphael Dumusc; Stefan Eilemann; Michael Emiel Gevaert; Padraig Gleeson; Joe W. Graham; Juan Hernando; Lida Kanari; Yury Katkov; Daniel Keller; James G. King; Rajnish Ranjan; Michael W. Reimann; Christian Rössert; Ying Shi; Julian C. Shillcock; Martin Telefont; Werner Van Geit; Jafet Villafranca Díaz
We have established a multi-constraint, data-driven process to digitally reconstruct and simulate prototypical neocortical microcircuitry, using sparse experimental data. We applied this process to reconstruct the microcircuitry of the somatosensory cortex in juvenile rat at the cellular and synaptic levels. The resulting reconstruction is broadly consistent with current knowledge about the neocortical microcircuit and provides an array of predictions on its structure and function. To engage the community in exploring, challenging, and refining the reconstruction, we have developed a collaborative, internet-accessible facility, the Neocortical Microcircuit Collaboration portal (NMC portal; https://bbp.epfl.ch/nmc-portal). The NMC portal allows users to access the experimental data used in the reconstruction process, download cellular and synaptic models, and analyze the predicted properties of the microcircuit: six layers, ~31,000 neurons, 55 morphological types, 11 electrical types, 207 morpho-electrical types, 1,941 unique synaptic connection types between neurons of specific morphological types, and predicted properties for the anatomy and physiology of ~40 million intrinsic synapses. It also provides data supporting comparison of the anatomy and physiology of the reconstructed microcircuit against results in the literature. The portal aims to catalyze consensus on the cellular and synaptic organization of neocortical microcircuitry (ion channel, neuron and synapse types and distributions, connectivity, etc.). Community feedback will contribute to refined versions of the reconstruction to be released periodically. We consider that the reconstructions and the simulations they enable represent a major step in the development of in silico neuroscience.
Frontiers in Neuroscience | 2009
Andrew P. Davison; Michael L. Hines; Eilif Muller
Neuroscience simulators allow scientists to express models in terms of biological concepts, without having to concern themselves with low-level computational details of their implementation. The expressiveness, power and ease-of-use of the simulator interface is critical in efficiently and accurately translating ideas into a working simulation. We review long-term trends in the development of programmable simulator interfaces, and examine the benefits of moving from proprietary, domain-specific languages to modern dynamic general-purpose languages, in particular Python, which provide neuroscientists with an interactive and expressive simulation development environment and easy access to state-of-the-art general-purpose tools for scientific computing.
Frontiers in Computational Neuroscience | 2015
Michael W. Reimann; James G. King; Eilif Muller; Srikanth Ramaswamy; Henry Markram
Experimentally mapping synaptic connections, in terms of the numbers and locations of their synapses, and estimating connection probabilities is still not a tractable task, even for small volumes of tissue. In fact, the six layers of the neocortex contain thousands of unique types of synaptic connections between the many different types of neurons, of which only a handful have been characterized experimentally. Here we present a theoretical framework and a data-driven algorithmic strategy to digitally reconstruct the complete synaptic connectivity between the different types of neurons in a small, well-defined volume of tissue: the micro-scale connectome of a neural microcircuit. By enforcing a set of established principles of synaptic connectivity, and leveraging interdependencies between fundamental properties of neural microcircuits to constrain the reconstructed connectivity, the algorithm yields three parameters per connection type that predict the anatomy of all types of biologically viable synaptic connections. The predictions reproduce a spectrum of experimental data on synaptic connectivity not used by the algorithm. We conclude that an algorithmic approach to the connectome can serve as a tool to accelerate experimental mapping, indicating the minimal dataset required to make useful predictions, identifying the datasets required to improve their accuracy, testing the feasibility of experimental measurements, and making it possible to test hypotheses of synaptic connectivity.
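The abstract does not spell out the algorithm itself, but the flavour of reasoning from such constraints can be illustrated with a back-of-the-envelope calculation: given a bouton density, a total axonal length and a mean number of synapses per connection, the expected number of distinct postsynaptic partners follows by simple division. All numbers below are hypothetical and are not values from the reconstruction.

    # Hypothetical, illustrative numbers only (not values from the paper).
    bouton_density = 0.2           # boutons per micrometre of axon
    axon_length_um = 40_000.0      # total axonal length of one presynaptic neuron, µm
    synapses_per_connection = 4.5  # mean synapses per connection for this connection type

    total_synapses = bouton_density * axon_length_um          # expected synapses along the axon
    n_connections = total_synapses / synapses_per_connection  # expected distinct partners

    print(f"~{total_synapses:.0f} synapses -> ~{n_connections:.0f} connections")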
PLOS Computational Biology | 2013
Farzad Farkhooi; Anja Froese; Eilif Muller; Randolf Menzel; Martin P. Nawrot
Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture.
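The transient-then-adapted response that drives this effect can be reproduced with a toy adapting integrate-and-fire neuron; the sketch below is illustrative only (a single neuron, not the paper's mean-field or population-density treatment) and every parameter value is arbitrary.

    # Illustrative adapting leaky integrate-and-fire neuron: a step stimulus first
    # evokes a vigorous response, then the adaptation variable builds up and the
    # firing rate drops.
    dt, t_sim = 0.1, 500.0          # ms
    tau_m, tau_a = 10.0, 120.0      # membrane and adaptation time constants, ms
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV
    q_a = 0.3                       # adaptation increment per spike (same units as the drive)
    drive = 2.4                     # step stimulus, mV/ms, switched on at t = 100 ms

    v, a, spikes = v_rest, 0.0, []
    for step in range(int(t_sim / dt)):
        t = step * dt
        i_ext = drive if t >= 100.0 else 0.0
        v += (dt / tau_m) * (v_rest - v) + dt * (i_ext - a)   # adaptation opposes the drive
        a += -(dt / tau_a) * a                                # slow decay of adaptation
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
            a += q_a                # each spike strengthens adaptation

    early = sum(100.0 <= s < 200.0 for s in spikes)
    late = sum(400.0 <= s < 500.0 for s in spikes)
    print(f"spikes in first 100 ms of stimulus: {early}, in a late 100 ms window: {late}")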