Tom Tetzlaff
Norwegian University of Life Sciences
Publications
Featured research published by Tom Tetzlaff.
PLOS Computational Biology | 2012
Tom Tetzlaff; Moritz Helias; Gaute T. Einevoll; Markus Diesmann
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. 
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
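The one-dimensional negative feedback loop described above can be sketched numerically: a leaky compound rate with an inhibitory feedback term shows much smaller fluctuations than the same system with the feedback channel removed. All parameters below are hypothetical and chosen only for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, tau, steps = 0.1, 10.0, 200_000   # time step (ms), rate time constant (ms)
sigma = 1.0                           # amplitude of external fluctuations

def compound_activity(w):
    """Euler simulation of da/dt = -a/tau + w*a + noise (w < 0: inhibitory feedback)."""
    a = np.zeros(steps)
    noise = rng.normal(0.0, sigma * np.sqrt(dt), steps)
    for t in range(steps - 1):
        a[t + 1] = a[t] + dt * (-a[t] / tau + w * a[t]) + noise[t]
    return a

var_feedback = compound_activity(-0.5).var()    # intact negative feedback loop
var_feedforward = compound_activity(0.0).var()  # feedback channel removed
print(var_feedback, var_feedforward)            # feedback suppresses rate fluctuations
```

The same comparison in the paper is done between the intact recurrent network and a perturbed feedforward surrogate; here the "perturbation" is simply setting the feedback gain to zero.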
Neural Computation | 2008
Tom Tetzlaff; Stefan Rotter; Eran Stark; Moshe Abeles; Ad Aertsen; Markus Diesmann
Correlated neural activity has been observed at various signal levels (e.g., spike count, membrane potential, local field potential, EEG, fMRI BOLD). Most of these signals can be considered as superpositions of spike trains filtered by components of the neural system (synapses, membranes) and the measurement process. It is largely unknown how the spike train correlation structure is altered by this filtering and what the consequences for the dynamics of the system and for the interpretation of measured correlations are. In this study, we focus on linearly filtered spike trains and particularly consider correlations caused by overlapping presynaptic neuron populations. We demonstrate that correlation functions and statistical second-order measures like the variance, the covariance, and the correlation coefficient generally exhibit a complex dependence on the filter properties and the statistics of the presynaptic spike trains. We point out that both contributions can play a significant role in modulating the interaction strength between neurons or neuron populations. In many applications, the coherence allows a filter-independent quantification of correlated activity. In different network models, we discuss the estimation of network connectivity from the high-frequency coherence of simultaneous intracellular recordings of pairs of neurons.
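The filter dependence of measured correlations is easy to reproduce with surrogate data: two spike trains built from jittered common events plus independent events appear weakly or strongly correlated depending on the filter applied before computing the correlation coefficient. Rates, jitter, and filter time constants below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_bins = 1e-3, 200_000            # 1 ms bins, 200 s of data
duration = n_bins * dt
rate_shared, rate_indep = 10.0, 10.0  # Hz

def poisson_times(rate):
    return rng.uniform(0.0, duration, rng.poisson(rate * duration))

def binned(times):
    counts = np.zeros(n_bins)
    idx = np.clip((times / dt).astype(int), 0, n_bins - 1)
    np.add.at(counts, idx, 1)
    return counts

shared = poisson_times(rate_shared)
jitter = lambda t: t + rng.normal(0.0, 10e-3, t.size)   # 10 ms jitter per copy
s1 = binned(jitter(shared)) + binned(poisson_times(rate_indep))
s2 = binned(jitter(shared)) + binned(poisson_times(rate_indep))

def low_pass(train, tau):
    kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
    return np.convolve(train, kernel, mode='same')

# identical spike data, different filters -> different correlation coefficients
corr_fast = np.corrcoef(low_pass(s1, 1e-3), low_pass(s2, 1e-3))[0, 1]
corr_slow = np.corrcoef(low_pass(s1, 50e-3), low_pass(s2, 50e-3))[0, 1]
print(corr_fast, corr_slow)
```

The slow filter integrates over the jitter and recovers most of the shared-input covariance; the fast filter misses it, even though the underlying spike correlation structure is identical in both cases.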
PLOS Computational Biology | 2013
Szymon Łęski; Henrik Lindén; Tom Tetzlaff; Klas H. Pettersen; Gaute T. Einevoll
Despite its century-old use, the interpretation of local field potentials (LFPs), the low-frequency part of electrical signals recorded in the brain, is still debated. In cortex the LFP appears to mainly stem from transmembrane neuronal currents following synaptic input, and obvious questions regarding the ‘locality’ of the LFP are: What is the size of the signal-generating region, i.e., the spatial reach, around a recording contact? How far does the LFP signal extend outside a synaptically activated neuronal population? And how do the answers depend on the temporal frequency of the LFP signal? Experimental inquiries have given conflicting results, and we here pursue a modeling approach based on a well-established biophysical forward-modeling scheme incorporating detailed reconstructed neuronal morphologies in precise calculations of population LFPs including thousands of neurons. The two key factors determining the frequency dependence of LFP are the spatial decay of the single-neuron LFP contribution and the conversion of synaptic input correlations into correlations between single-neuron LFP contributions. Both factors are seen to give low-pass filtering of the LFP signal power. For uncorrelated input only the first factor is relevant, and here a modest reduction (<50%) in the spatial reach is observed for higher frequencies (>100 Hz) compared to the near-DC value. Much larger frequency-dependent effects are seen when populations of pyramidal neurons receive correlated and spatially asymmetric inputs: the low-frequency LFP power can here be an order of magnitude or more larger than at 60 Hz. Moreover, the low-frequency LFP components have larger spatial reach and extend further outside the active population than high-frequency components. Further, the spatial LFP profiles for such populations typically span the full vertical extent of the dendrites of neurons in the population.
Our numerical findings are backed up by an intuitive simplified model for the generation of population LFP.
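The outsized effect of input correlations has a simple back-of-the-envelope core: summing N partially correlated single-neuron contributions gives a compound variance of roughly N + N(N-1)c times the single-contribution variance, versus N for uncorrelated contributions. A sketch with Gaussian surrogates (the correlation value c and all sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, c = 100, 20_000, 0.2   # contributions, samples, pairwise correlation

shared = rng.normal(size=T)
# single-"neuron" contributions with pairwise correlation c, unit variance each
corr = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal(size=(N, T))
uncorr = rng.normal(size=(N, T))

power_corr = corr.sum(axis=0).var()
power_uncorr = uncorr.sum(axis=0).var()
print(power_uncorr / N)   # ~1: compound power scales linearly with N
print(power_corr / N)     # ~1 + (N-1)*c: the correlated term dominates
```

Since only correlated input engages the N² term, any frequency dependence of the input correlations is strongly amplified in the population signal, consistent with the large low-frequency effects reported above.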
The Journal of Neuroscience | 2009
Clemens Boucsein; Tom Tetzlaff; Ralph Meier; Ad Aertsen; Björn Naundorf
To understand the mechanisms of fast information processing in the brain, it is necessary to determine how rapidly populations of neurons can respond to incoming stimuli in a noisy environment. Recently, it has been shown experimentally that an ensemble of neocortical neurons can track a time-varying input current in the presence of additive correlated noise very fast, up to frequencies of several hundred hertz. Modulations in the firing rate of presynaptic neuron populations affect, however, not only the mean but also the variance of the synaptic input to postsynaptic cells. It has been argued that such modulations of the noise intensity (multiplicative modulation) can be tracked much faster than modulations of the mean input current (additive modulation). Here, we compare the response characteristics of an ensemble of neocortical neurons for both modulation schemes. We injected sinusoidally modulated noisy currents (additive and multiplicative modulation) into layer V pyramidal neurons of the rat somatosensory cortex and measured the trial and ensemble-averaged spike responses for a wide range of stimulus frequencies. For both modulation paradigms, we observed low-pass behavior. The cutoff frequencies were markedly high, considerably higher than the average firing rates. We demonstrate that modulations in the variance can be tracked significantly faster than modulations in the mean input. Extremely fast stimuli (up to 1 kHz) can be reliably tracked, provided the stimulus amplitudes are sufficiently high.
PLOS Computational Biology | 2014
Moritz Helias; Tom Tetzlaff; Markus Diesmann
Correlated neuronal activity is a natural consequence of network connectivity and shared inputs to pairs of neurons, but the task-dependent modulation of correlations in relation to behavior also hints at a functional role. Correlations influence the gain of postsynaptic neurons, the amount of information encoded in the population activity and decoded by readout neurons, and synaptic plasticity. Further, they affect the power and spatial reach of extracellular signals like the local-field potential. A theory of correlated neuronal activity accounting for recurrent connectivity as well as fluctuating external sources is currently lacking. In particular, it is unclear how the recently found mechanism of active decorrelation by negative feedback on the population level affects the network response to externally applied correlated stimuli. Here, we present such an extension of the theory of correlations in stochastic binary networks. We show that (1) for homogeneous external input, the structure of correlations is mainly determined by the local recurrent connectivity, (2) homogeneous external inputs provide an additive, unspecific contribution to the correlations, (3) inhibitory feedback effectively decorrelates neuronal activity, even if neurons receive identical external inputs, and (4) identical synaptic input statistics to excitatory and to inhibitory cells increase intrinsically generated fluctuations and pairwise correlations. We further demonstrate how the accuracy of mean-field predictions can be improved by self-consistently including correlations. As a byproduct, we show that the cancellation of correlations between the summed inputs to pairs of neurons does not originate from the fast tracking of external input, but from the suppression of fluctuations on the population level by the local network.
This suppression is a necessary constraint, but not sufficient to determine the structure of correlations; specifically, the structure observed at finite network size differs from the prediction based on perfect tracking, even though perfect tracking implies suppression of population fluctuations.
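Point (3), decorrelation despite identical external input, can be caricatured with a small stochastic binary network in which every neuron receives the same fluctuating drive plus a common inhibitory population feedback. The coupling, biases, and update rate below are hypothetical and tuned only so both conditions operate near 20% mean activity.

```python
import numpy as np

rng = np.random.default_rng(3)
N, steps, alpha = 200, 20_000, 0.1   # neurons, time steps, update probability

def population_activity(J, theta):
    """Stochastic binary network with population feedback gain J and bias theta."""
    n = rng.random(N) < 0.2
    a = np.empty(steps)
    for t in range(steps):
        x = rng.normal(0.0, 1.0)                # identical external input to all
        h = J * n.mean() + x + theta
        p = 1.0 / (1.0 + np.exp(-2.0 * h))      # activation probability
        update = rng.random(N) < alpha          # asynchronous-like partial updates
        n = np.where(update, rng.random(N) < p, n)
        a[t] = n.mean()
    return a

# biases chosen so both networks sit near 20% mean activity
var_inhib = population_activity(J=-8.0, theta=0.9).var()
var_uncoupled = population_activity(J=0.0, theta=-0.7).var()
print(var_inhib, var_uncoupled)   # feedback cancels the common drive
```

Although all neurons share the same external fluctuation x, the inhibitory feedback tracks and subtracts it, leaving smaller population fluctuations than in the uncoupled network.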
New Journal of Physics | 2013
Moritz Helias; Tom Tetzlaff; Markus Diesmann
Correlations are employed in modern physics to explain microscopic and macroscopic phenomena, like the fractional quantum Hall effect and the Mott insulator state in high temperature superconductors and ultracold atoms. Simultaneously probed neurons in the intact brain reveal correlations between their activity, an important measure to study information processing in the brain that also influences the macroscopic signals of neural activity, like the electroencephalogram (EEG). Networks of spiking neurons differ from most physical systems: the interaction between elements is directed, time delayed, mediated by short pulses and each neuron receives events from thousands of neurons. Even the stationary state of the network cannot be described by equilibrium statistical mechanics. Here we develop a quantitative theory of pairwise correlations in finite-sized random networks of spiking neurons. We derive explicit analytic expressions for the population-averaged cross correlation functions. Our theory explains why the intuitive mean field description fails, how the echo of single action potentials causes an apparent lag of inhibition with respect to excitation and how the size of the network can be scaled while maintaining its dynamical state. Finally, we derive a new criterion for the emergence of collective oscillations from the spectrum of the time-evolution propagator.
Frontiers in Computational Neuroscience | 2013
Dmytro Grytskyy; Tom Tetzlaff; Markus Diesmann; Moritz Helias
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question of how these models relate to each other. In particular, it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein–Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class-dependent differences between covariances in the time and the frequency domain.
Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra.
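The Ornstein–Uhlenbeck special case makes the closed-form covariance concrete: for dx = -(x/τ)dt + σ dW, the stationary autocovariance is c(Δ) = (σ²τ/2)·exp(-|Δ|/τ). A quick numerical check of this textbook result, with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, tau, sigma, steps = 0.01, 1.0, 1.0, 400_000

x = np.zeros(steps)
noise = rng.normal(0.0, sigma * np.sqrt(dt), steps)
for t in range(steps - 1):
    x[t + 1] = x[t] - dt * x[t] / tau + noise[t]   # Euler-Maruyama OU update

lag = int(tau / dt)                    # one correlation time
c0 = x.var()
c_tau = np.mean(x[:-lag] * x[lag:])
print(c0)            # ~ sigma**2 * tau / 2 = 0.5
print(c_tau / c0)    # ~ exp(-1) ≈ 0.37
```

For the spiking (output-noise) class, the full covariance additionally contains the echo term discussed in the abstract; the OU expression above is only the input-noise backbone.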
Neurocomputing | 2003
Tom Tetzlaff; Michael Buschermöhle; Theo Geisel; Markus Diesmann
The analysis of the spatial and temporal structure of spike cross-correlation in experimental data is an important tool in the exploration of cortical processing. Recent theoretical studies investigated the impact of correlation between afferents on the spike rate of single neurons and the effect of input correlation on the output correlation of pairs of neurons. Here, this knowledge is combined into a model simultaneously describing the spatial propagation of rate and correlation, allowing for an interpretation of its constituents in terms of network activity. The application to an embedded feed-forward network provides insight into the mechanisms stabilizing its asynchronous mode.
Neurocomputing | 2002
Tom Tetzlaff; Theo Geisel; Markus Diesmann
The occurrence of spatio-temporal spike patterns in the cortex is explained by models of divergent/convergent feed-forward subnetworks—synfire chains. Their excited mode is characterized by spike volleys propagating from one neuron group to the next. We demonstrate the existence of an upper bound for group size: above a critical value synchronous activity develops spontaneously from random fluctuations. Stability of the ground state, in which neurons independently fire at low rates, is lost. Comparison of an analytic rate model with network simulations shows that the transition from the asynchronous into the synchronous regime is driven by an instability in rate dynamics.
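The existence of a critical group size can be illustrated with a crude iterated rate map, not the paper's analytic model: the active fraction in group k+1 is a sigmoidal function of the excitation delivered by group k, with hypothetical gain and threshold parameters.

```python
import numpy as np

def propagate(N, a0=0.02, J=0.2, theta=1.0, beta=4.0, groups=20):
    """Iterate the active fraction a through successive groups of size N."""
    a = a0
    for _ in range(groups):
        a = 1.0 / (1.0 + np.exp(-beta * (J * N * a - theta)))
    return a

a_small = propagate(N=25)    # ground state stable: activity stays low
a_large = propagate(N=100)   # above the critical size: synchrony develops
print(a_small, a_large)
```

For small N the map has a stable low-activity fixed point (the asynchronous ground state); for large N that fixed point disappears and even tiny initial activity grows toward full synchrony along the chain, mirroring the rate instability described above.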
Cerebral Cortex | 2016
Espen Hagen; David Dahmen; Maria L. Stavrinou; Henrik Lindén; Tom Tetzlaff; Sacha J. van Albada; Sonja Grün; Markus Diesmann; Gaute T. Einevoll
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm2 patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.
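The two-stage logic of the hybrid scheme can be sketched in a few lines: spiking output of a point-neuron network (replaced here by surrogate Poisson populations) is mapped to an LFP by convolving per-population spike counts with temporal kernels. In hybridLFPy those kernels arise from full multicompartment forward modeling; the population names, rates, and damped-exponential kernel shapes below are pure placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps = 1e-3, 10_000                 # 1 ms resolution, 10 s

# stage 1: network simulation output -- surrogate population spike counts
populations = {'L4E': (1000, 8.0), 'L4I': (250, 15.0)}   # (size, rate in Hz)
counts = {p: rng.poisson(n * r * dt, steps) for p, (n, r) in populations.items()}

# stage 2: LFP prediction -- convolve counts with per-population kernels
t_k = np.arange(0.0, 0.05, dt)
kernels = {'L4E': -1.0 * np.exp(-t_k / 0.01),    # placeholder kernel shapes
           'L4I': 0.5 * np.exp(-t_k / 0.005)}
lfp = sum(np.convolve(counts[p], kernels[p])[:steps] for p in populations)
print(lfp.shape)
```

Because the network simulation only enters through the spike counts, the two stages are fully decoupled: the same network output can be reused with refined kernels, which is the separation of network dynamics and LFP prediction emphasized in the abstract.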