Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Richard Naud is active.

Publication


Featured research published by Richard Naud.


Biological Cybernetics | 2008

Firing patterns in the adaptive exponential integrate-and-fire model

Richard Naud; Nicolas Marcille; Claudia Clopath; Wulfram Gerstner

For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
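
To make the two-equation structure concrete, the sketch below integrates the adaptive exponential integrate-and-fire equations with a forward-Euler step under a constant current. The parameter values are standard textbook values for one tonic-firing regime; they are illustrative, not the fitted values or the full phase diagram reported in the paper.

import numpy as np

# Adaptive exponential integrate-and-fire (AdEx): membrane potential V plus one
# adaptation variable w. Illustrative textbook parameters, one regime only.
C, gL, EL = 281e-12, 30e-9, -70.6e-3                  # capacitance (F), leak (S), rest (V)
VT, DT = -50.4e-3, 2e-3                               # exponential threshold and slope factor (V)
tau_w, a, b, Vr = 144e-3, 4e-9, 80.5e-12, -70.6e-3    # adaptation dynamics and reset
dt, T, I = 1e-4, 1.0, 0.8e-9                          # time step (s), duration (s), input (A)

V, w, spikes = EL, 0.0, []
for step in range(int(T / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    dw = (a * (V - EL) - w) / tau_w
    V, w = V + dt * dV, w + dt * dw
    if V > 0.0:                                       # spike detected: reset V, increment adaptation
        V, w = Vr, w + b
        spikes.append(step * dt)

print(len(spikes), "spikes; changing (a, b, tau_w, Vr) switches the firing pattern")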


Science | 2009

How Good Are Neuron Models?

Wulfram Gerstner; Richard Naud

A recent competition encouraged modelers to predict neuronal activity. Which neuron model performed the best? Opinions strongly diverge on what constitutes a good model of a neuron (1–3). Two lines of thought on this have coexisted for a long time: detailed biophysical models (of the style proposed in 1952 by the physiologists Alan Hodgkin and Andrew Huxley) that describe ion channels on the tree-like spatial structure of the neuronal cell (4), and simple “integrate-and-fire” models based on the much older insight that pulsatile electrical activity (known as an action potential or spike) is a threshold process. Electrophysiologists, familiar with the notion of ion channels that open and close (and hence alter neuronal activity) depending on environmental conditions, generally prefer the biophysical models. Theoreticians, by contrast, typically prefer simple neuron models with few parameters that are amenable to mathematical analysis. Earlier this year, following previous attempts at model comparison on a smaller scale (5), the International Neuroinformatics Coordinating Facility (INCF) launched an international competition (6) that allowed a quantitative comparison of neuron models.


Journal of Neuroscience Methods | 2008

A benchmark test for a quantitative assessment of simple neuron models

Renaud Jolivet; Ryota Kobayashi; Alexander Rauch; Richard Naud; Shigeru Shinomoto; Wulfram Gerstner

Several methods and algorithms have recently been proposed that allow for the systematic evaluation of simple neuron models from intracellular or extracellular recordings. Models built in this way generate good quantitative predictions of the future activity of neurons under temporally structured current injection. It is, however, difficult to compare the advantages of various models and algorithms since each model is designed for a different set of data. Here, we report on one of the first attempts to establish a benchmark test that permits a systematic comparison of methods and performance in predicting the activity of rat cortical pyramidal neurons. We present early submissions to the benchmark test and discuss implications for the design of future tests and simple neuron models.
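
The benchmark scores a model by comparing its predicted spike times with the recorded ones. A score commonly used for this purpose is the coincidence factor Gamma, which counts spikes matching within a small window and corrects for coincidences expected by chance. The sketch below is a generic implementation of that idea; the 4 ms window and the greedy matching are illustrative choices, not the competition's exact scoring code.

import numpy as np

def coincidence_factor(data, model, delta=0.004, duration=None):
    """Coincidence factor Gamma between recorded and predicted spike trains (times in s).

    Gamma is roughly 1 for a perfect prediction and about 0 when the model does
    no better than a Poisson process firing at the same rate.
    """
    data, model = np.sort(np.asarray(data, float)), np.sort(np.asarray(model, float))
    duration = duration if duration is not None else max(data[-1], model[-1])
    n_coinc, j = 0, 0
    for t in data:                                # greedy matching: each spike used at most once
        while j < len(model) and model[j] < t - delta:
            j += 1
        if j < len(model) and abs(model[j] - t) <= delta:
            n_coinc, j = n_coinc + 1, j + 1
    nu = len(model) / duration                    # firing rate of the model train
    expected = 2.0 * nu * delta * len(data)       # chance coincidences for a Poisson process
    norm = 1.0 - 2.0 * nu * delta
    return (n_coinc - expected) / (0.5 * (len(data) + len(model)) * norm)

print(coincidence_factor([0.1, 0.2, 0.35], [0.101, 0.21, 0.34], duration=0.5))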


Biological Cybernetics | 2008

The quantitative single-neuron modeling competition

Renaud Jolivet; Felix Schürmann; Thomas K. Berger; Richard Naud; Wulfram Gerstner; Arnd Roth

As large-scale, detailed network modeling projects are flourishing in the field of computational neuroscience, it is more and more important to design single neuron models that not only capture qualitative features of real neurons but are also quantitatively accurate in silico representations of them. Recent years have seen substantial effort being put into the development of algorithms for the systematic evaluation and optimization of neuron models with respect to electrophysiological data. It is, however, difficult to compare these methods because of the lack of appropriate benchmark tests. Here, we describe one such effort to provide the community with a standardized set of tests to quantify the performance of single neuron models. Our effort takes the form of a yearly challenge similar to those that have existed in the machine learning community for some time. This paper gives an account of the first two challenges, which took place in 2007 and 2008, and discusses future directions. The results of the competition suggest that best performance on data obtained from single or double electrode current or conductance injection is achieved by models that combine features of standard leaky integrate-and-fire models with a second variable reflecting adaptation, refractoriness, or a dynamic threshold.
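
The conclusion that a leaky integrate-and-fire equation plus one slow variable performs best can be illustrated with the dynamic-threshold variant below: every spike raises the firing threshold, which then relaxes back, producing refractoriness and spike-frequency adaptation. All parameter values are arbitrary placeholders chosen only for illustration.

import numpy as np

# Leaky integrate-and-fire with a moving threshold (one "second variable").
tau_m, EL, R = 20e-3, -70e-3, 100e6                   # membrane time constant (s), rest (V), resistance (Ohm)
tau_th, VT0, dVT, Vr = 100e-3, -50e-3, 10e-3, -70e-3  # threshold dynamics and voltage reset
dt, T, I = 1e-4, 1.0, 0.25e-9                         # step (s), duration (s), constant input (A)

V, VT, spikes = EL, VT0, []
for step in range(int(T / dt)):
    V += dt * (-(V - EL) + R * I) / tau_m
    VT += dt * (VT0 - VT) / tau_th                    # threshold relaxes back to its baseline
    if V >= VT:
        spikes.append(step * dt)
        V, VT = Vr, VT + dVT                          # reset the voltage, raise the threshold

isis = np.diff(spikes)
print(f"{len(spikes)} spikes; interspike intervals lengthen from {isis[0]:.3f} s to {isis[-1]:.3f} s")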


Nature Neuroscience | 2013

Temporal whitening by power-law adaptation in neocortical neurons

Christian Pozzorini; Richard Naud; Skander Mensi; Wulfram Gerstner

Spike-frequency adaptation (SFA) is widespread in the CNS, but its function remains unclear. In neocortical pyramidal neurons, adaptation manifests itself by an increase in the firing threshold and by adaptation currents triggered after each spike. Combining electrophysiological recordings in mice with modeling, we found that these adaptation processes lasted for more than 20 s and decayed over multiple timescales according to a power law. The power-law decay associated with adaptation mirrored and canceled the temporal correlations of input current received in vivo at the somata of layer 2/3 somatosensory pyramidal neurons. These findings suggest that, in the cortex, SFA causes temporal decorrelation of output spikes (temporal whitening), an energy-efficient coding procedure that, at high signal-to-noise ratio, improves the information transfer.
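
Simulating an adaptation process that decays as a power law over more than 20 s raises a practical question, since no single exponential time constant can reproduce it. A common workaround, sketched below, is to approximate t^(-beta) by a weighted sum of exponentials with logarithmically spaced time constants; the exponent, the number of exponentials and the 10 ms to 20 s range are illustrative choices, not the values fitted in the paper.

import numpy as np

# Approximate a power-law adaptation kernel t**(-beta) by a sum of exponentials.
beta = 0.8                                        # illustrative exponent
t = np.logspace(-2, np.log10(20.0), 400)          # evaluation grid, 10 ms .. 20 s
taus = np.logspace(-2, np.log10(20.0), 8)         # log-spaced time constants (s)
target = t ** (-beta)

basis = np.exp(-t[:, None] / taus[None, :])       # one exponential per column
# weight the least-squares fit by 1/target so that relative error is minimized
weights, *_ = np.linalg.lstsq(basis / target[:, None], np.ones_like(t), rcond=None)
approx = basis @ weights

rel_err = np.max(np.abs(approx / target - 1.0))
print(f"worst relative deviation over 10 ms .. 20 s: {rel_err:.1%}")

In a spiking simulation the same sum of exponentials can be updated online with one auxiliary variable per time constant, so the long power-law memory costs only a handful of state variables per neuron.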


Journal of Neurophysiology | 2012

Parameter extraction and classification of three cortical neuron types reveals two distinct adaptation mechanisms

Skander Mensi; Richard Naud; Christian Pozzorini; Michael Avermann; Carl C. H. Petersen; Wulfram Gerstner

Cortical information processing originates from the exchange of action potentials between many cell types. To capture the essence of these interactions, it is of critical importance to build mathematical models that reflect the characteristic features of spike generation in individual neurons. We propose a framework to automatically extract such features from current-clamp experiments, in particular the passive properties of a neuron (i.e., membrane time constant, reversal potential, and capacitance), the spike-triggered adaptation currents, as well as the dynamics of the action potential threshold. The stochastic model that results from our maximum likelihood approach accurately predicts the spike times, the subthreshold voltage, the firing patterns, and the type of frequency-current curve. Extracting the model parameters for three cortical cell types revealed that cell types show highly significant differences in the time course of the spike-triggered currents and moving threshold, that is, in their adaptation and refractory properties but not in their passive properties. In particular, GABAergic fast-spiking neurons mediate weak adaptation through spike-triggered currents only, whereas regular spiking excitatory neurons mediate adaptation with both moving threshold and spike-triggered currents. GABAergic nonfast-spiking neurons combine the two distinct adaptation mechanisms with reduced strength. Differences between cell types are large enough to enable automatic classification of neurons into three different classes. Parameter extraction is performed for individual neurons so that we find not only the mean parameter values for each neuron type but also the spread of parameters within a group of neurons, which will be useful for future large-scale computer simulations.
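
The maximum likelihood step has the generic structure shared by this family of models: an escape rate that grows with the distance between the membrane potential and a moving threshold, and a point-process log-likelihood summed over the recorded spike times. In simplified notation (the exact parametrization in the paper differs):

\lambda(t) = \lambda_0 \exp\!\left( \frac{V(t) - V_T(t)}{\Delta V} \right),
\qquad
\log L = \sum_{\hat t \in \text{spikes}} \log \lambda(\hat t) \; - \; \int_0^T \lambda(t)\,\mathrm{d}t ,

where V(t) contains the passive dynamics plus the summed spike-triggered adaptation currents and V_T(t) is the baseline threshold plus the summed spike-triggered threshold movements; both sums are linear in the unknown kernel parameters, which is what keeps the fit tractable.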


Neural Computation | 2011

Improved similarity measures for small sets of spike trains

Richard Naud; Felipe Gerhard; Skander Mensi; Wulfram Gerstner

Multiple measures have been developed to quantify the similarity between two spike trains. These measures have been used for the quantification of the mismatch between neuron models and experiments as well as for the classification of neuronal responses in neuroprosthetic devices and electrophysiological experiments. Frequently only a few spike trains are available in each class. We derive analytical expressions for the small-sample bias present when comparing estimators of the time-dependent firing intensity. We then exploit analogies between the comparison of firing intensities and previously used spike train metrics and show that improved spike train measures can be successfully used for fitting neuron models to experimental data, for comparing spike trains, and for classifying spike train data. In classification tasks, the improved similarity measures can increase the recovered information. We demonstrate that when similarity measures are used for fitting mathematical models, all previous methods systematically underestimate the noise. Finally, we show a striking implication of this deterministic bias by reevaluating the results of the single-neuron prediction challenge.
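
The small-sample bias arises because, with only a few trials, a smoothed spike train ends up being compared with itself. One standard remedy, sketched below for a kernel-based similarity, is to average the inner product only over pairs of different trials; this illustrates the general idea rather than reproducing the exact estimators derived in the paper, and the Gaussian kernel and 5 ms width are placeholders.

import numpy as np

def smoothed(train, grid, sigma=0.005):
    """Kernel-smoothed firing intensity of one spike train on a time grid (s)."""
    if len(train) == 0:
        return np.zeros_like(grid)
    d = grid[:, None] - np.asarray(train)[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2).sum(axis=1) / (sigma * np.sqrt(2.0 * np.pi))

def cross_trial_similarity(trials, grid):
    """Inner product of smoothed intensities, averaged over pairs of DIFFERENT trials.

    Excluding same-trial pairs removes the small-sample bias that appears when a
    noisy intensity estimate is effectively correlated with itself.
    """
    dt = grid[1] - grid[0]
    pairs = [(a, b) for i, a in enumerate(trials)
                    for j, b in enumerate(trials) if i != j]
    return np.mean([np.sum(smoothed(a, grid) * smoothed(b, grid)) * dt
                    for a, b in pairs])

# toy example: two short spike trains recorded in the same condition
grid = np.arange(0.0, 1.0, 0.001)
trials = [[0.1, 0.35, 0.7], [0.12, 0.36, 0.68, 0.9]]
print(cross_trial_similarity(trials, grid))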


PLOS Computational Biology | 2012

Coding and Decoding with Adapting Neurons: A Population Approach to the Peri-Stimulus Time Histogram

Richard Naud; Wulfram Gerstner

The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike therefore needs to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a ‘quasi-renewal equation’ which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large, decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
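
For orientation, the structure of the result can be sketched as follows, in simplified notation that may differ from the paper's. The population activity A(t) obeys a renewal-type integral equation, and the 'quasi-renewal' step consists of treating the last spike at time \hat t exactly while replacing the influence of all earlier spikes by its average over the past activity:

A(t) = \int_{-\infty}^{t} P_{\mathrm{qr}}(t \mid \hat t)\, A(\hat t)\, \mathrm{d}\hat t,
\qquad
P_{\mathrm{qr}}(t \mid \hat t) = \rho_{\mathrm{qr}}(t \mid \hat t)\,
\exp\!\left( -\int_{\hat t}^{t} \rho_{\mathrm{qr}}(s \mid \hat t)\, \mathrm{d}s \right),

with a hazard \rho_{\mathrm{qr}}(t \mid \hat t) that depends explicitly on the last spike time \hat t, while the adaptation accumulated from spikes before \hat t enters only through the expected activity A(s) for s < \hat t (treating those earlier spikes as Poisson and using the identity \langle e^{\sum_i \eta(t - t_i)} \rangle = \exp\!\left( \int \big(e^{\eta(t-s)} - 1\big) A(s)\, \mathrm{d}s \right)).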


Proceedings of the National Academy of Sciences of the United States of America | 2013

Speed-invariant encoding of looming object distance requires power law spike rate adaptation

Stephen E. Clarke; Richard Naud; André Longtin; Leonard Maler

Neural representations of a moving object’s distance and approach speed are essential for determining appropriate orienting responses, such as those observed in the localization behaviors of the weakly electric fish, Apteronotus leptorhynchus. We demonstrate that a power law form of spike rate adaptation transforms an electroreceptor afferent’s response to “looming” object motion, effectively parsing information about distance and approach speed into distinct measures of the firing rate. Neurons with dynamics characterized by fixed time scales are shown to confound estimates of object distance and speed. Conversely, power law adaptation modifies an electroreceptor afferent’s response according to the time scales present in the stimulus, generating a rate code for looming object distance that is invariant to speed and acceleration. Consequently, estimates of both object distance and approach speed can be uniquely determined from an electroreceptor afferent’s firing rate, a multiplexed neural code operating over the extended time scales associated with behaviorally relevant stimuli.


PLOS Computational Biology | 2015

Automated High-Throughput Characterization of Single Neurons by Means of Simplified Spiking Models

Christian Pozzorini; Skander Mensi; Olivier Hagens; Richard Naud; Christof Koch; Wulfram Gerstner

Single-neuron models are useful not only for studying the emergent properties of neural circuits in large-scale simulations, but also for extracting and summarizing in a principled way the information contained in electrophysiological recordings. Here we demonstrate that, using a convex optimization procedure we previously introduced, a Generalized Integrate-and-Fire model can be accurately fitted with a limited amount of data. The model is capable of predicting both the spiking activity and the subthreshold dynamics of different cell types, and can be used for online characterization of neuronal properties. A protocol is proposed that, combined with emerging technologies for automatic patch-clamp recordings, permits automated, in vitro high-throughput characterization of single neurons.
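
One reason a limited amount of data suffices is that large parts of the fit reduce to convex problems. For the subthreshold part, for instance, the membrane equation is linear in the unknown parameters, so they can be recovered by ordinary least squares on the measured derivative of the voltage. The sketch below illustrates that step on synthetic data; the variable names and the omission of spike-triggered currents are simplifications relative to the actual fitting procedure.

import numpy as np

# Synthetic "recording": a passive membrane driven by a known current.
dt, n = 1e-4, 20000
rng = np.random.default_rng(0)
I = rng.normal(0.0, 0.3e-9, n)                          # injected current (A)
C_true, gL_true, EL_true = 200e-12, 10e-9, -70e-3
V = np.empty(n); V[0] = EL_true
for k in range(n - 1):
    V[k + 1] = V[k] + dt * (-gL_true * (V[k] - EL_true) + I[k]) / C_true

# dV/dt = (1/C)*I - (gL/C)*V + (gL*EL/C) is linear in the three unknowns,
# so a single least-squares solve recovers them from the data.
dVdt = np.diff(V) / dt
X = np.column_stack([I[:-1], V[:-1], np.ones(n - 1)])
(a, b, c), *_ = np.linalg.lstsq(X, dVdt, rcond=None)
C_hat = 1.0 / a
gL_hat = -b * C_hat
EL_hat = c * C_hat / gL_hat
print(f"C ~ {C_hat*1e12:.0f} pF, gL ~ {gL_hat*1e9:.1f} nS, EL ~ {EL_hat*1e3:.1f} mV")

The same idea extends to spike-triggered currents by adding basis functions of the time since each past spike as extra regressors, which keeps that part of the problem linear as well.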

Collaboration


Dive into Richard Naud's collaborations.

Top Co-Authors

Wulfram Gerstner (École Polytechnique Fédérale de Lausanne)
Werner M. Kistler (Technische Universität München)
Skander Mensi (École Polytechnique Fédérale de Lausanne)
Christian Pozzorini (École Polytechnique Fédérale de Lausanne)
Carl C. H. Petersen (École Polytechnique Fédérale de Lausanne)
Michael Avermann (École Polytechnique Fédérale de Lausanne)
Brice Bathellier (Centre national de la recherche scientifique)