Publication


Featured research published by Marcel Stimberg.


Frontiers in Neuroscience | 2007

Dynamics of orientation tuning in cat V1 neurons depend on location within layers and orientation maps

James Schummers; Beau Cronin; Klaus Wimmer; Marcel Stimberg; Robert Martin; Klaus Obermayer; Konrad Koerding; Mriganka Sur

Analysis of the timecourse of the orientation tuning of responses in primary visual cortex (V1) can provide insight into the circuitry underlying tuning. Several studies have examined the temporal evolution of orientation selectivity in V1 neurons, but there is no consensus regarding the stability of orientation tuning properties over the timecourse of the response. We have used reverse-correlation analysis of the responses to dynamic grating stimuli to re-examine this issue in cat V1 neurons. We find that the preferred orientation and tuning curve shape are stable in the majority of neurons; however, more than forty percent of cells show a significant change in either preferred orientation or tuning width between early and late portions of the response. To examine the influence of the local cortical circuit connectivity, we analyzed the timecourse of responses as a function of receptive field type, laminar position, and orientation map position. Simple cells are more selective, and reach peak selectivity earlier, than complex cells. There are pronounced laminar differences in the timing of responses: middle layer cells respond faster, deep layer cells have prolonged response decay, and superficial cells are intermediate in timing. The average timing of neurons near and far from pinwheel centers is similar, but there is more variability in the timecourse of responses near pinwheel centers. This result was reproduced in an established network model of V1 operating in a regime of balanced excitatory and inhibitory recurrent connections, confirming previous results. Thus, response dynamics of cortical neurons reflect circuitry based on both vertical and horizontal location within cortical networks.
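The reverse-correlation approach described above can be illustrated with a toy example: given a random sequence of grating orientations and a spike train, the spike-triggered orientation histogram at a fixed lag recovers the tuning curve at that response delay. Below is a minimal stdlib-Python sketch; the stimulus statistics, lag, and the cosine tuning model are invented for illustration and are not taken from the paper:

```python
import math
import random

random.seed(0)
ORIENTS = [0, 30, 60, 90, 120, 150]   # grating orientations (degrees)
LAG = 5                               # response lag, in stimulus frames

# Random dynamic-grating sequence: one orientation per frame.
frames = [random.choice(ORIENTS) for _ in range(20000)]

# Toy neuron: fires more often when the frame LAG steps earlier was near 60 deg
# (cosine tuning with 180-degree periodicity, purely illustrative).
spikes = []
for t in range(LAG, len(frames)):
    drive = math.cos(2 * math.radians(frames[t - LAG] - 60)) + 1  # in [0, 2]
    if random.random() < 0.05 * drive:
        spikes.append(t)

def tuning_at_lag(spikes, frames, lag):
    """Spike-triggered orientation histogram at a given lag (reverse correlation)."""
    counts = {o: 0 for o in ORIENTS}
    for t in spikes:
        if t - lag >= 0:
            counts[frames[t - lag]] += 1
    return counts

curve = tuning_at_lag(spikes, frames, LAG)
best = max(curve, key=curve.get)
print(best)  # recovers the preferred orientation built into the toy neuron (60)
```

Repeating `tuning_at_lag` for a range of lags gives the temporal evolution of the tuning curve, which is the quantity whose stability the study examines.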


bioRxiv | 2016

Fast and accurate spike sorting in vitro and in vivo for up to thousands of electrodes

Pierre Yger; Giulia Spampinato; Elric Esposito; Baptiste Lefebvre; Stephane Deny; Christophe Gardella; Marcel Stimberg; Florian Jetter; Guenther Zeck; Serge Picaud; Jens Duebel; Olivier Marre

Understanding how assemblies of neurons encode information requires recording from large populations of cells in the brain. In recent years, multi-electrode arrays and large silicon probes have been developed to record simultaneously from hundreds or thousands of densely packed electrodes. However, these new devices challenge the classical way of doing spike sorting. Here we developed a new method to address this challenge, based on a highly automated algorithm to extract spikes from extracellular data, and show that this algorithm reaches near-optimal performance both in vitro and in vivo. The algorithm is composed of two main steps: 1) a “template-finding” phase that extracts the cell templates, i.e. the pattern of activity evoked over many electrodes when one neuron fires an action potential; and 2) a “template-matching” phase in which the templates are matched to the raw data to find the locations of the spikes. Manual intervention by the user is reduced to a minimum, and the time spent on manual curation does not scale with the number of electrodes. We tested our algorithm on large-scale data from in vitro and in vivo recordings, from 32 to 4225 electrodes. We performed simultaneous extracellular and patch recordings to obtain “ground truth” data, i.e. cases where the solution to the sorting problem is at least partially known. The performance of our algorithm was always close to the best expected performance. We thus provide a general solution to sorting spikes from large-scale extracellular recordings.
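The template-matching idea can be sketched in a few lines: correlate each template against the trace, accept high-scoring positions as spikes, and subtract the matched template so that overlapping spikes from other cells remain detectable. The toy below is a single-channel, fixed-amplitude caricature (the actual algorithm works across many electrodes and fits amplitudes); the templates, trace, and threshold are invented for illustration:

```python
# Toy greedy template matching on a 1-D trace (single electrode for simplicity).
def match_templates(signal, templates, threshold):
    """Return sorted (time, unit) pairs whose normalized correlation exceeds threshold."""
    signal = list(signal)  # work on a copy; matched spikes are peeled off below
    found = []
    for unit, tpl in templates.items():
        norm = sum(x * x for x in tpl)
        for t in range(len(signal) - len(tpl) + 1):
            score = sum(signal[t + i] * tpl[i] for i in range(len(tpl))) / norm
            if score > threshold:
                found.append((t, unit))
                for i in range(len(tpl)):          # subtract the matched spike
                    signal[t + i] -= tpl[i]
    return sorted(found)

# Two invented waveform templates and a trace containing one spike of each.
templates = {"A": [0, -3, -6, -3, 0], "B": [0, 2, 4, 2, 0]}
trace = [0.0] * 30
for t0, unit in [(5, "A"), (18, "B")]:
    for i, v in enumerate(templates[unit]):
        trace[t0 + i] += v

print(match_templates(trace, templates, 0.8))  # [(5, 'A'), (18, 'B')]
```

The subtraction step is what lets template matching resolve temporally overlapping spikes, which classical clustering-based sorters handle poorly.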


BMC Neuroscience | 2014

Brian 2: neural simulations on a variety of computational hardware

Dan F. M. Goodman; Marcel Stimberg; Pierre Yger; Romain Brette

Brian 2 is a fundamental rewrite of the Brian [1,2] simulator for spiking neural networks. It is written in the Python programming language and focuses on simplicity and extensibility: neuronal and synaptic models can be described using mathematical formulae with physical units [3]. The same formalism can also be used to specify connectivity patterns (e.g. spatial connectivity), with mathematical expressions defining connections, connection probabilities, the number of synapses between neurons, and synaptic delays.

Brian 2 offers two modes of operation: a “runtime mode”, where executable code is generated from the model descriptions on the fly and executed from Python, and a “standalone mode”, where a set of source code files is generated that can then be compiled and executed with no dependency on Brian or Python. The runtime mode is ideal for rapid prototyping and interactive exploration, e.g. from a Python console. The standalone mode, on the other hand, is designed for maximum performance and for simulating models on a variety of hardware and platforms.

We show a number of example applications for the standalone mode, generating code for a wide range of devices:

- C++ code that is completely independent of Brian, Python or Python libraries. Optionally, this code can be parallelized over multiple CPU cores using OpenMP.
- Java/RenderScript for Android-based devices, enabling Brian to run neural models on commodity hardware (e.g. phones) for robotic applications [5].
- The GPU enhanced Neuronal Networks (GeNN) framework [6,7], a neural simulator using GPUs to accelerate neural simulations.
- The same approach would also allow the generation of code targeted at neuromorphic computing architectures such as the SpiNNaker platform [8], for which we have started preliminary work.

Brian is made available under a free software license and all development takes place in public code repositories [9].
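The core idea, generating executable update code from an equation given as a string, can be illustrated with a toy sketch. This is not Brian 2's actual code-generation machinery: the equation, the forward-Euler scheme, and the helper name are invented for illustration only.

```python
# Toy illustration of equation-based code generation: turn a model equation
# string into an executable state-update step (forward Euler).
def make_stepper(equation, dt):
    """equation like 'dv/dt = (1 - v) / tau' -> function updating state in place."""
    lhs, rhs = (s.strip() for s in equation.split("="))
    var = lhs[1:lhs.index("/")]          # 'dv/dt' -> 'v'
    code = compile(rhs, "<model>", "eval")
    def step(state, params):
        namespace = {**params, **state}
        state[var] += dt * eval(code, {}, namespace)
    return step

step = make_stepper("dv/dt = (1 - v) / tau", dt=0.1)
state = {"v": 0.0}
for _ in range(100):
    step(state, {"tau": 1.0})
print(round(state["v"], 3))   # v relaxes toward its fixed point at 1
```

Brian 2 generalizes this idea far beyond the sketch: it parses units-aware equations, chooses appropriate integration schemes, and emits optimized Python, C++, or device-specific code from the same description.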


Neurocomputing | 2007

The effect of background noise on the precision of pulse packet propagation in feed-forward networks

Marcel Stimberg; Thomas Hoch; Klaus Obermayer

Evidence suggests that precise spike timing plays a functional role in cortical information processing. However, it is still a matter of debate whether such precision can be achieved in the presence of ongoing synaptic background activity. We investigate this question by modeling a feed-forward network of Hodgkin-Huxley neurons. Extending the basic model, we additionally include variable synaptic delays. We find that suprathreshold waves of synchronous activity propagate through the network reliably and with submillisecond precision. On the other hand, background activity allows slightly subthreshold activity to propagate reliably as well, but not with high temporal precision.
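The basic pulse-packet (synfire-chain) mechanism can be caricatured in a few lines of stdlib Python. This sketch deliberately omits everything that makes the paper's model realistic (Hodgkin-Huxley dynamics, synaptic delays, background noise); all numbers are invented. It only shows the two qualitative regimes: a strong packet propagates and its temporal jitter shrinks toward the synaptic jitter, while a weak packet dies out.

```python
import random
import statistics

random.seed(42)

def next_layer(spike_times, n_neurons=100, syn_jitter=0.2, threshold=50):
    """Each downstream neuron fires near the center of mass of the incoming
    packet (plus synaptic jitter), but only if the packet carries enough spikes."""
    if len(spike_times) < threshold:
        return []                       # subthreshold packet fails to propagate
    center = statistics.mean(spike_times)
    return [center + random.gauss(0, syn_jitter) for _ in range(n_neurons)]

# Broad initial packet: 100 spikes with 1.0 ms of jitter.
packet = [random.gauss(0, 1.0) for _ in range(100)]
for _ in range(5):                      # five feed-forward layers
    packet = next_layer(packet)
print(round(statistics.stdev(packet), 2))   # jitter shrinks toward syn_jitter

weak = [random.gauss(0, 1.0) for _ in range(40)]   # too few spikes
print(len(next_layer(weak)))                       # 0: the weak packet dies out
```

The paper's finding that background noise lets slightly subthreshold packets survive (at the cost of precision) has no counterpart in this caricature; capturing it requires the stochastic membrane dynamics of the full model.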


bioRxiv | 2018

Brian2GeNN: a system for accelerating a large variety of spiking neural networks with graphics hardware

Marcel Stimberg; Dan F. M. Goodman; Thomas Nowotny

“Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user’s perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, typical models can run tens to hundreds of times faster than on CPU.


bioRxiv | 2017

Modeling neuron-glia interactions with the Brian 2 simulator

Marcel Stimberg; Dan F. M. Goodman; Romain Brette; Maurizio De Pittà

Despite compelling evidence that glial cells could crucially regulate neural network activity, the vast majority of available neural simulators ignore the possible contribution of glia to neuronal physiology. Here, we show how to model glial physiology and neuron-glia interactions in the Brian 2 simulator. Brian 2 offers facilities to explicitly describe any model in mathematical terms with limited and simple simulator-specific syntax, automatically generating high-performance code from the user-provided descriptions. The flexibility of this approach allows us to model not only networks of neurons, but also individual glial cells, electrical coupling of glial cells, and the interaction between glial cells and synapses. We therefore conclude that Brian 2 provides an ideal platform to efficiently simulate glial physiology and, specifically, the influence of astrocytes on neural activity.


BMC Neuroscience | 2015

Origin of the kink of somatic action potentials

Maria Telenczuk; Marcel Stimberg; Romain Brette

The Hodgkin and Huxley (1952) model of action potential (AP) generation accounts for many properties of APs observed experimentally and has been successfully used in modeling neurons of different types. In this model, however, the spike onset is much shallower than in experimental recordings from the soma, suggesting different activation properties of sodium channels in real tissue. Three hypotheses have been proposed to explain the origin of the observed sharpness (“kink”) at spike onset:

1. Cooperative hypothesis: sodium channels in the axon initial segment cooperate, which makes their collective activation curve much sharper [1]. However, there is no experimental evidence for this hypothesis.
2. Active backpropagation hypothesis: spikes are initiated in the axon and backpropagate to the soma. The kink is caused by the sharpening of the axonal spike by active conductances during its backpropagation through the axon [2].
3. Compartmentalization hypothesis: the kink comes from distal initiation and the current sink caused by the difference in size between the soma and the axon [3].

To find out what actually happens in the cell during the action potential, we investigated the active backpropagation and compartmentalization hypotheses by means of computational modeling and theoretical analysis. To differentiate between the hypotheses, we systematically varied the morphology of the neuron and the distribution of ionic channels along the cell, and tested how they contribute to the appearance of the kink. We show that the kink at spike onset is primarily due to compartmentalization rather than to active backpropagation.

Figure 1. Kink in the action potential. Patch-clamp recordings (red) from a cortical pyramidal cell and the action potential produced by a Hodgkin-Huxley-type model (black). Left: voltage as a function of time. Right: phase plot (dV/dt vs. V) of the same traces.
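A phase plot of the kind referred to in Figure 1 is simply dV/dt plotted against V; for a sampled voltage trace it can be computed with a central difference. The sketch below uses an invented quadratic toy trace rather than a real action potential, purely to show the construction:

```python
# Computing a phase plot (dV/dt vs. V) from a sampled voltage trace; on a real
# somatic recording, the "kink" appears as an abrupt jump in dV/dt at spike onset.
def phase_plot(v, dt):
    """Return (V, dV/dt) pairs using a central difference."""
    return [(v[i], (v[i + 1] - v[i - 1]) / (2 * dt)) for i in range(1, len(v) - 1)]

dt = 0.01
v = [(i * dt) ** 2 for i in range(100)]   # toy trace v(t) = t^2, not a real AP
pts = phase_plot(v, dt)
print(pts[49])   # at t = 0.5: V = 0.25 and dV/dt = 2t = 1.0 (exact for a quadratic)
```

Applied to the recordings and model traces of Figure 1, this transformation is what makes the difference in onset sharpness between model and experiment visible at a glance.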


BMC Neuroscience | 2009

Map location affects center-surround modulation in a network model of V1

Marcel Stimberg; Klaus Obermayer

The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input. Each parametrization gives rise to a different model instance for which the tuning of model neurons at different locations of the orientation map is compared to the experimentally measured orientation tuning of membrane potential, spike output, excitatory, and inhibitory conductances. A quantitative analysis shows that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. This recurrent regime is close to a regime of “instability”, where strong, self-sustained activity of the network occurs. The firing rate of neurons in the best-fitting network is particularly sensitive to small modulations of model parameters, which could be one of the functional benefits of a network operating in this particular regime.


Cerebral Cortex | 2009

The Operating Regime of Local Computations in Primary Visual Cortex

Marcel Stimberg; Klaus Wimmer; Robert Martin; Lars Schwabe; Jorge Mariño; James Schummers; David C. Lyon; Mriganka Sur; Klaus Obermayer


Neural Information Processing Systems | 2008

Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1

Klaus Wimmer; Marcel Stimberg; Robert Martin; Lars Schwabe; Jorge Mariño; James Schummers; David C. Lyon; Mriganka Sur; Klaus Obermayer

Collaboration

Dive into Marcel Stimberg's collaborations.

Top co-authors:

- Klaus Obermayer (Technical University of Berlin)
- Klaus Wimmer (Technical University of Berlin)
- Robert Martin (Technical University of Berlin)
- Romain Brette (École Normale Supérieure)
- David C. Lyon (University of California)
- Mriganka Sur (Massachusetts Institute of Technology)
- Pierre Yger (Centre national de la recherche scientifique)