Publications


Featured research published by Moritz Deger.


PLOS Computational Biology | 2010

Instantaneous Non-Linear Processing by Pulse-Coupled Threshold Units

Moritz Helias; Moritz Deger; Stefan Rotter; Markus Diesmann

Contemporary theory of spiking neuronal networks is based on the linear response of the integrate-and-fire neuron model derived in the diffusion limit. We find that for non-zero synaptic weights, the response to transient inputs differs qualitatively from this approximation. The response is instantaneous rather than exhibiting low-pass characteristics, non-linearly dependent on the input amplitude, asymmetric for excitation and inhibition, and is promoted by a characteristic level of synaptic background noise. We show that at threshold the probability density of the potential drops to zero within the range of one synaptic weight and explain how this shapes the response. The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled networks of threshold units.

Gaussian white noise is frequently used to model fluctuations in physical systems. In Fokker-Planck theory, this leads to a vanishing probability density near the absorbing boundary of threshold models. Here we derive the boundary condition for the stationary density of a first-order stochastic differential equation for additive finite-grained Poisson noise and show that the response properties of threshold units are qualitatively altered. Applied to the integrate-and-fire neuron model, the response turns out to be instantaneous rather than exhibiting low-pass characteristics, highly non-linear, and asymmetric for excitation and inhibition. The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled systems of threshold units.
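
Illustrative sketch (assumed parameters, not the authors' analysis): drive a leaky integrate-and-fire neuron with finite-weight Poisson input and measure how much time the membrane potential spends within one synaptic weight of threshold, the boundary layer where the paper predicts the density to fall off.

```python
import numpy as np

# Illustrative parameters (not from the paper)
tau_m = 10.0      # membrane time constant (ms)
V_th = 15.0       # spike threshold (mV)
V_reset = 0.0     # reset potential (mV)
w = 0.5           # finite synaptic weight (mV)
rate_in = 2800.0  # total excitatory input rate (Hz)
dt = 0.01         # simulation step (ms)
T = 2000.0        # simulated time (ms)

rng = np.random.default_rng(0)
steps = round(T / dt)
decay = np.exp(-dt / tau_m)                          # exact leak per step
n_in = rng.poisson(rate_in * dt * 1e-3, size=steps)  # input spikes per step

V = 0.0
samples = np.empty(steps)
for k in range(steps):
    V = V * decay + w * n_in[k]   # leak, then add finite-weight input
    if V >= V_th:                 # fire and reset
        V = V_reset
    samples[k] = V

# Occupancy of the last two synaptic weights below threshold
p1 = np.mean((samples > V_th - w) & (samples <= V_th))
p2 = np.mean((samples > V_th - 2 * w) & (samples <= V_th - w))
print(f"P(V_th - w  < V <= V_th)     = {p1:.4f}")
print(f"P(V_th - 2w < V <= V_th - w) = {p2:.4f}")
```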


Journal of Computational Neuroscience | 2015

Reconstruction of recurrent synaptic connectivity of thousands of neurons from simulated spiking activity

Yury V. Zaytsev; Abigail Morrison; Moritz Deger

Dynamics and function of neuronal networks are determined by their synaptic connectivity. Current experimental methods to analyze synaptic network structure on the cellular level, however, cover only small fractions of functional neuronal circuits, typically without a simultaneous record of neuronal spiking activity. Here we present a method for the reconstruction of large recurrent neuronal networks from thousands of parallel spike train recordings. We employ maximum likelihood estimation of a generalized linear model of the spiking activity in continuous time. For this model the point process likelihood is concave, such that a global optimum of the parameters can be obtained by gradient ascent. Previous methods, including those of the same class, did not allow recurrent networks of that order of magnitude to be reconstructed due to prohibitive computational cost and numerical instabilities. We describe a minimal model that is optimized for large networks and an efficient scheme for its parallelized numerical optimization on generic computing clusters. For a simulated balanced random network of 1000 neurons, synaptic connectivity is recovered with a misclassification error rate of less than 1 % under ideal conditions. We show that the error rate remains low in a series of example cases under progressively less ideal conditions. Finally, we successfully reconstruct the connectivity of a hidden synfire chain that is embedded in a random network, which requires clustering of the network connectivity to reveal the synfire groups. Our results demonstrate how synaptic connectivity could potentially be inferred from large-scale parallel spike train recordings.
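
As a toy version of the estimation principle (the paper works with a continuous-time likelihood and thousands of neurons on a computing cluster), the sketch below fits the incoming weights of a single model neuron by gradient ascent on a discrete-time Poisson GLM log-likelihood; with an exponential inverse link this likelihood is concave, so the ascent approaches the global optimum. All sizes and rates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes (illustrative; the paper reconstructs networks of ~1000 neurons)
N, T = 20, 20000                     # presynaptic neurons, time bins
true_w = rng.normal(0.0, 0.3, N)     # ground-truth coupling weights
true_b = -3.0                        # ground-truth baseline log-rate

# Simulate presynaptic spikes and the target neuron's spike counts
X = (rng.random((T, N)) < 0.02).astype(float)   # presynaptic spikes per bin
lam = np.exp(true_b + X @ true_w)               # conditional intensity per bin
y = rng.poisson(lam)                            # target spike counts

# Gradient ascent on the concave Poisson log-likelihood (exponential link)
w, b, eta = np.zeros(N), 0.0, 5e-5
for _ in range(5000):
    lam_hat = np.exp(b + X @ w)
    w += eta * (X.T @ (y - lam_hat))   # d logL / d w
    b += eta * np.sum(y - lam_hat)     # d logL / d b

print("estimated baseline:", round(float(b), 2), "vs. true", true_b)
print("weight correlation with ground truth:",
      round(float(np.corrcoef(w, true_w)[0, 1]), 3))
```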


Journal of Computational Neuroscience | 2012

Statistical properties of superimposed stationary spike trains

Moritz Deger; Moritz Helias; Clemens Boucsein; Stefan Rotter

The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for several second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
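
A simple way to reproduce the qualitative statistics (not the efficient generation algorithms of the paper) is to draw each component train as a Poisson process with dead-time, i.e. ISIs equal to a fixed dead-time plus an exponential interval, and then merge the trains. Rates, dead-time, and the number of trains below are arbitrary example values.

```python
import numpy as np

def ppd_spike_train(rate, dead_time, duration, rng):
    """Poisson process with dead-time (PPD): ISI = dead_time + exponential.

    The exponential rate is chosen so that the total mean rate equals `rate`
    (requires rate * dead_time < 1).
    """
    lam = rate / (1.0 - rate * dead_time)      # rate of the exponential part
    n_max = int(2 * rate * duration) + 100     # generous upper bound on count
    isis = dead_time + rng.exponential(1.0 / lam, size=n_max)
    spikes = np.cumsum(isis)
    return spikes[spikes < duration]

rng = np.random.default_rng(2)
rate, dead_time, duration, n_trains = 10.0, 0.02, 100.0, 50   # units of seconds

# Superposition: merge and sort the component spike trains
trains = [ppd_spike_train(rate, dead_time, duration, rng) for _ in range(n_trains)]
pooled = np.sort(np.concatenate(trains))

isi = np.diff(pooled)
print(f"pooled rate: {pooled.size / duration:.1f} spikes/s")
print(f"pooled ISI CV: {isi.std() / isi.mean():.3f}  (1.0 for a Poisson process)")

# Count statistics in 100 ms windows: refractoriness pushes the Fano factor
# below 1, unlike a Poisson process of the same rate (Fano factor 1)
counts, _ = np.histogram(pooled, bins=1000, range=(0.0, duration))
print(f"count Fano factor (100 ms windows): {counts.var() / counts.mean():.3f}")
```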


PLOS Computational Biology | 2012

Spike-Timing Dependence of Structural Plasticity Explains Cooperative Synapse Formation in the Neocortex

Moritz Deger; Moritz Helias; Stefan Rotter; Markus Diesmann

Structural plasticity governs the long-term development of synaptic connections in the neocortex. While the underlying processes at the synapses are not fully understood, there is strong evidence that a process of random, independent formation and pruning of excitatory synapses can be ruled out. Instead, there must be some cooperation between the synaptic contacts connecting a single pre- and postsynaptic neuron pair. So far, the mechanism of cooperation is not known. Here we demonstrate that local correlation detection at the postsynaptic dendritic spine suffices to explain the synaptic cooperation effect, without assuming any hypothetical direct interaction pathway between the synaptic contacts. Candidate biomolecular mechanisms for dendritic correlation detection have been identified previously, as well as for structural plasticity based thereon. By analyzing and fitting a simple model, we show that spike-timing-correlation-dependent structural plasticity, without additional mechanisms of cross-synapse interaction, can reproduce the experimentally observed distributions of numbers of synaptic contacts between pairs of neurons in the neocortex. Furthermore, the model yields a first explanation for the existence of both transient and persistent dendritic spines and allows us to make predictions for future experiments.
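
The flavor of cooperative contact dynamics can be conveyed by a deliberately crude birth-and-death caricature (not the spike-timing-dependent model fitted in the paper): contacts between a neuron pair form at a fixed rate, while existing contacts of the same pair lower each other's pruning probability, standing in for a shared correlation signal. All rates below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy birth-death dynamics of synaptic contacts per neuron pair
# (illustrative parameters only, not the fitted model from the paper)
n_pairs  = 20000
steps    = 5000
p_form   = 0.001          # probability per step to form a new contact
p_prune0 = 0.02           # baseline pruning probability per contact per step
coop     = 0.5            # each extra contact reduces pruning (correlation proxy)

contacts = np.zeros(n_pairs, dtype=int)
for _ in range(steps):
    # formation: independent of the current state of the pair
    contacts += rng.random(n_pairs) < p_form
    # pruning: weaker when the pair already has many contacts (cooperation)
    p_prune = p_prune0 / (1.0 + coop * contacts)
    contacts -= rng.binomial(contacts, p_prune)

# Distribution of contact numbers across pairs after the run
values, freq = np.unique(contacts, return_counts=True)
for v, c in zip(values, freq):
    print(f"{v} contacts: fraction {c / n_pairs:.4f}")
```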


Frontiers in Computational Neuroscience | 2013

The relevance of network micro-structure for neural dynamics.

Volker Pernice; Moritz Deger; Stefano Cardanobile; Stefan Rotter

The activity of cortical neurons is determined by the input they receive from presynaptic neurons. Many previous studies have investigated how specific aspects of the statistics of the input affect the spike trains of single neurons and neurons in recurrent networks. However, typically very simple random network models are considered in such studies. Here we use a recently developed algorithm to construct networks based on a quasi-fractal probability measure, which are much more variable than commonly used network models and therefore promise to sample the space of recurrent networks in a more exhaustive fashion than previously possible. We use the generated graphs as the underlying network topology in simulations of networks of integrate-and-fire neurons in an asynchronous and irregular state. Based on an extensive dataset of networks and neuronal simulations, we assess statistical relations between features of the network structure and the spiking activity. Our results highlight the strong influence that some details of the network structure have on the activity dynamics of both single neurons and populations, even if some global network parameters are kept fixed. We observe specific and consistent relations between activity characteristics, like spike-train irregularity or correlations, and network properties, for example the distributions of the numbers of in- and outgoing connections or clustering. Exploiting these relations, we demonstrate that it is possible to estimate structural characteristics of the network from activity data. We also assess higher-order correlations of spiking activity in the various networks considered here, and find that their occurrence strongly depends on the network structure. These results provide directions for further theoretical studies on recurrent networks, as well as new ways to interpret spike train recordings from neural circuits.
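
The structural features related to activity in this study can be read off an adjacency matrix directly. The sketch below uses a plain Erdős-Rényi random graph as a stand-in for the quasi-fractal construction and computes in/out-degree statistics and a simple directed clustering coefficient.

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in network: directed Erdos-Renyi graph (the paper's quasi-fractal
# construction produces far more variable graphs than this)
N, p = 500, 0.05
A = (rng.random((N, N)) < p).astype(float)
np.fill_diagonal(A, 0.0)

in_deg = A.sum(axis=0)    # number of incoming connections per neuron
out_deg = A.sum(axis=1)   # number of outgoing connections per neuron

# Directed clustering: fraction of directed two-paths i -> j -> k (i != k)
# that are closed by an edge k -> i
paths2 = (A @ A).sum() - np.trace(A @ A)
triangles = np.trace(A @ A @ A)
clustering = triangles / paths2 if paths2 > 0 else 0.0

print(f"in-degree  mean {in_deg.mean():.1f}, std {in_deg.std():.1f}")
print(f"out-degree mean {out_deg.mean():.1f}, std {out_deg.std():.1f}")
print(f"directed clustering coefficient: {clustering:.4f}")
```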


Physical Review E | 2010

Nonequilibrium dynamics of stochastic point processes with refractoriness

Moritz Deger; Moritz Helias; Stefano Cardanobile; Fatihcan M. Atay; Stefan Rotter

Stochastic point processes with refractoriness appear frequently in the quantitative analysis of physical and biological systems, such as the generation of action potentials by nerve cells, the release and reuptake of vesicles at a synapse, and the counting of particles by detector devices. Here we present an extension of renewal theory to describe ensembles of point processes with time-varying input. This is made possible by a representation in terms of occupation numbers of two states: active and refractory. The dynamics of these occupation numbers follows a distributed delay differential equation. In particular, our theory enables us to uncover the effect of refractoriness on the time-dependent rate of an ensemble of encoding point processes in response to modulation of the input. We present exact solutions that demonstrate generic features, such as stochastic transients and oscillations in the step response as well as resonances, phase jumps and frequency doubling in the transfer of periodic signals. We show that a large class of renewal processes can indeed be regarded as special cases of the model we analyze. Hence our approach represents a widely applicable framework to define and analyze nonstationary renewal processes.
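
The stochastic transients mentioned above can be reproduced with a direct Monte Carlo stand-in for the theory: an ensemble of units that fire with a common hazard rate but stay silent for an absolute refractory period after each spike. Step time, rates, and refractory period below are example values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble of point-process units with absolute refractoriness (example values)
N = 20000                        # ensemble size
dt = 0.5e-3                      # time step (s)
T, t_step = 0.4, 0.2             # total time and time of the input step (s)
tau_ref = 0.02                   # absolute refractory period (s)
rate_lo, rate_hi = 20.0, 60.0    # hazard rate before/after the step (Hz)

steps = round(T / dt)
last_spike = np.full(N, -np.inf)         # time of each unit's last spike
pop_rate = np.zeros(steps)

for i in range(steps):
    t = i * dt
    hazard = rate_hi if t >= t_step else rate_lo
    ready = (t - last_spike) >= tau_ref                 # units out of refractoriness
    spikes = ready & (rng.random(N) < hazard * dt)
    last_spike[spikes] = t
    pop_rate[i] = spikes.sum() / (N * dt)               # population rate estimate

# Around the step the population rate overshoots and relaxes with damped
# ringing towards the new stationary rate hazard / (1 + hazard * tau_ref).
i_step = round(t_step / dt)
after = pop_rate[i_step:]
print(f"rate just before step: {pop_rate[i_step - 1]:.1f} Hz")
print(f"peak rate after step:  {after.max():.1f} Hz")
print(f"stationary rate (last 50 ms): {after[-100:].mean():.1f} Hz, "
      f"theory: {rate_hi / (1 + rate_hi * tau_ref):.1f} Hz")
```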


PLOS Computational Biology | 2017

Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

Tilo Schwalger; Moritz Deger; Wulfram Gerstner

Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
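
Finite-size fluctuations of a population activity can be seen in a microscopic stand-in that is much simpler than the generalized integrate-and-fire model of the paper: one homogeneous population of refractory Poisson-like units. The sketch below shows how the variance of the population activity shrinks with the population size N; all parameters are example values.

```python
import numpy as np

def population_activity(N, hazard, tau_ref, dt, steps, rng):
    """Population activity A_N(t) of N independent refractory Poisson-like units."""
    last_spike = np.full(N, -np.inf)
    A = np.empty(steps)
    for i in range(steps):
        t = i * dt
        ready = (t - last_spike) >= tau_ref
        spikes = ready & (rng.random(N) < hazard * dt)
        last_spike[spikes] = t
        A[i] = spikes.sum() / (N * dt)   # spikes per neuron per unit time
    return A

rng = np.random.default_rng(6)
dt, steps = 1e-3, 5000               # 1 ms resolution, 5 s of activity
hazard, tau_ref = 30.0, 0.004        # hazard rate (Hz), refractory period (s)

# The mean activity is independent of N, but the fluctuations scale like 1/N
for N in (50, 500, 2000):
    A = population_activity(N, hazard, tau_ref, dt, steps, rng)
    print(f"N = {N:5d}: mean {A.mean():6.1f} Hz, variance {A.var():12.1f} Hz^2")
```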


PLOS Computational Biology | 2017

On the stability and dynamics of stochastic spiking neuron models: Nonlinear Hawkes process and point process GLMs.

Felipe Gerhard; Moritz Deger; Wilson Truccolo

Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability that rates remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a stability framework for data-driven PP-GLMs and shed new light on the stochastic dynamics of state-of-the-art statistical models of neuronal spiking activity.
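
The self-consistency idea behind the stability analysis can be illustrated in a crude mean-field form, simpler than the quasi-renewal approximation used in the paper: for an exponential-link Hawkes model whose spike-history kernel integrates to H, a stationary rate must satisfy lambda = exp(b + H * lambda), and the number of solutions already separates stable, fragile, and divergent regimes. The values of b and H below are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

def stationary_rates(b, H, lam_max=1000.0, n_grid=200001):
    """Positive solutions of lambda = exp(b + H * lambda), found in log space."""
    g = lambda lam: np.log(lam) - (b + H * lam)   # zero iff lambda is a fixed point
    grid = np.linspace(1e-6, lam_max, n_grid)
    vals = g(grid)
    sign_change = np.where(vals[:-1] * vals[1:] < 0)[0]   # brackets of roots
    return [brentq(g, grid[i], grid[i + 1]) for i in sign_change]

# Inhibition-dominated history (H < 0): a single stationary rate exists.
print("H = -0.05:", [round(r, 2) for r in stationary_rates(b=3.0, H=-0.05)])

# Weakly excitatory history: two fixed points; the upper one is unstable,
# so the model is metastable (fragile) and can escape to divergent rates.
print("H = +0.02:", [round(r, 2) for r in stationary_rates(b=2.0, H=0.02)])

# Strongly excitatory history: no fixed point at all, rates diverge.
print("H = +0.20:", [round(r, 2) for r in stationary_rates(b=2.0, H=0.20)])
```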


Physical Review E | 2014

Fluctuations and information filtering in coupled populations of spiking neurons with adaptation

Moritz Deger; Tilo Schwalger; Richard Naud; Wulfram Gerstner

Finite-sized populations of spiking elements are fundamental to brain function but are also used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasirenewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.


Frontiers in Computational Neuroscience | 2010

Equilibrium and Response Properties of the Integrate-and-Fire Neuron in Discrete Time.

Moritz Helias; Moritz Deger; Markus Diesmann; Stefan Rotter

The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still have the highest performance at moderate numerical errors, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula to calculate the neuron's equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near threshold enables us to devise the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white noise input.
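
A minimal discrete-time implementation makes the setting concrete: the subthreshold dynamics are propagated exactly by a decay factor per grid step, finite-weight input spikes are lumped onto the grid, and the threshold is checked only at grid points. The sketch below, with made-up parameters, can be used to probe how the measured firing rate depends on the grid spacing h.

```python
import numpy as np

# Discrete-time LIF with exponential decay per step (illustrative parameters)
tau_m   = 10.0    # membrane time constant (ms)
V_th    = 15.0    # threshold (mV)
V_reset = 0.0     # reset potential (mV)
w       = 0.2     # finite synaptic weight (mV)
nu_in   = 7000.0  # total excitatory input rate (Hz)

def firing_rate(h, T=20000.0, seed=8):
    """Mean firing rate (Hz) of the discrete-time LIF for grid spacing h (ms)."""
    rng = np.random.default_rng(seed)
    decay = np.exp(-h / tau_m)                        # exact subthreshold propagator
    steps = round(T / h)
    n_in = rng.poisson(nu_in * h * 1e-3, size=steps)  # input spikes lumped per step
    V, n_spikes = 0.0, 0
    for k in range(steps):
        V = V * decay + w * n_in[k]
        if V >= V_th:          # threshold is only checked on the time grid
            V = V_reset
            n_spikes += 1
    return 1000.0 * n_spikes / T

for h in (1.0, 0.1, 0.01):
    print(f"h = {h:5.2f} ms: firing rate {firing_rate(h):6.1f} Hz")
```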

Collaboration


Top co-authors of Moritz Deger and their affiliations.


Wulfram Gerstner

École Polytechnique Fédérale de Lausanne


Hesam Setareh

École Polytechnique Fédérale de Lausanne
