Self-sustained activity, bursts, and variability in recurrent networks
Marc-Oliver Gewaltig
1) Blue Brain Project, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
2) Honda Research Institute Europe GmbH, D-63073 Offenbach, Germany
E-mail: marc-oliver.gewaltig@epfl.ch
Abstract
There is consensus in the current literature that stable states of asynchronous irregular spiking activity require (i) large networks of 10 000 or more neurons and (ii) external background activity or pacemaker neurons. Yet already in 1963, Griffith showed that networks of simple threshold elements can be persistently active at intermediate rates. Here, we extend Griffith's work and demonstrate that sparse networks of integrate-and-fire neurons assume stable states of self-sustained asynchronous and irregular firing without external input or pacemaker neurons. These states can be robustly induced by a brief pulse to a small fraction of the neurons, or by a short period of irregular input, and last for several minutes. Self-sustained activity states emerge when a small fraction of the synapses is strong enough to significantly influence the firing probability of a neuron, consistent with the recently proposed long-tailed distribution of synaptic weights. During self-sustained activity, each neuron exhibits highly irregular firing patterns, similar to experimentally observed activity. Moreover, the interspike interval distribution reveals that neurons switch between discrete states of high and low firing rates. We find that self-sustained activity states can exist even in small networks of only a thousand neurons. We investigated networks of up to 100 000 neurons. Finally, we discuss the implications of self-sustained activity for learning, memory, and signal propagation.
Author Summary
Neurons in many brain areas are active even in the absence of a sensory stimulus. Many models have tried to explain this spontaneous activity by spiking activity reverberating in recurrent networks of excitatory and inhibitory neurons. But so far the conclusion has been that such networks can only sustain spontaneous activity under certain conditions: the networks must be large, and there must be either endogenously firing neurons (so-called pacemaker neurons) or diffuse external input which keeps the network active. Here we show that recurrent networks of excitatory and inhibitory neurons can sustain spontaneous activity for periods of many minutes, provided that a small percentage of the connections is sufficiently strong. Thus, contrary to previous findings, self-sustained (spontaneous) activity requires neither large networks nor external input or pacemaker neurons. The spike patterns observed during self-sustained activity are chaotic and highly irregular. The interspike interval distribution during self-sustained activity reveals that the network switches between different discrete states, each characterized by its own time scale. Our results provide a possible explanation of self-sustained cortical activity and of the role of the recently observed long-tailed weight distributions in the mammalian cortex.
Introduction
Spontaneous activity, that is, activity in the absence of a sensory stimulus, is a ubiquitous phenomenon in the brain that has puzzled generations of researchers. Spontaneous activity is highly irregular and has a strong effect on evoked neuronal responses [1, 2]. In fact, many researchers argue that this ongoing activity represents information rather than noise [3, 4]. Moreover, spontaneous activity in the cortex is stable and robust and can be observed in awake as well as in anesthetized animals throughout their entire life. It is commonly assumed that spontaneous activity is created by reverberating activity within recurrent neuronal circuits, but the exact mechanisms by which neurons maintain their low-rate firing are still not well understood.

Already in 1956, Beurle showed that networks of excitatory neurons ("a mass of units capable of emitting regenerative pulses") generally have an inherently unstable activity in which all or none of the units are excited [5]. In 1962, Ashby and co-workers [6] reproduced Beurle's findings under more simplified assumptions and delivered a mathematical proof of the instability of recurrently connected excitatory threshold units. They derived an expression for the probability of an output pulse as a function of the probability of an input pulse and showed that this function takes the form of the now well-known sigmoid. They concluded that "the more richly organized regions of the brain offer us something of a paradox. They use threshold intensively, but usually transmit impulses at some moderate frequency, seldom passing in physiological conditions into total inactivity or maximal excitation. Evidently there must exist factors or mechanisms for stability which do not rely on fixed threshold alone" [6].

In 1963, Griffith extended the models of Beurle [5] and Ashby [6] in two ways.
First, he remarked that networks with dedicated connectivity can support stable states of low or intermediate firing rates. For example, the complete transmission line consists of consecutive groups of neurons that are connected by diverging/converging connections. In such a network, activity will travel unperturbed from one group to the next, without exciting the entire network. This network architecture later became known as the synfire chain [7]. Griffith's second addition was inhibitory neurons, which Ashby had neglected. Griffith also found that networks of excitatory and inhibitory neurons don't support stable states at low or intermediate levels of activity if the neurons have many excitatory and inhibitory inputs with correspondingly small synaptic weights. He then went on to show that for the special case of a few synaptic inputs per neuron, such stable activity states should indeed exist. Since computational power in 1963 was more limited than it is today, he restricted his analysis to the case of few excitatory inputs with global inhibition, i.e., one inhibitory input strong enough to suppress the combined input of all excitatory neurons. In this case, activity is stable at an intermediate rate. This suggests that stable low or intermediate firing rates should exist for more realistic network configurations.

Thirty years later, van Vreeswijk and Sompolinsky revived interest in self-sustaining activity in recurrent networks with two seminal papers [8, 9] in which the authors introduced the concept of balanced excitation and inhibition as a criterion for the emergence of stable activity states. But in their model, external input is needed to obtain stable activity at low or intermediate rates. Since then, several studies have shown that self-sustained activity is possible under some conditions. In recurrent networks of conductance-based neurons, self-sustained activity can survive for a limited time [10, 11].
Otherwise, additional activating mechanisms, like external input [12, 13], endogenously firing neurons [14, 15], or cells which respond to an inhibitory stimulus [16], are needed to sustain activity. Moreover, self-sustained activity in recurrent network models is still too regular compared to experimental data [17, 18], and additional de-correlating mechanisms are needed [19].

In this paper we show that highly irregular self-sustained activity is an inherent property of recurrent neural networks with excitation and inhibition. These states occur in relatively small networks (one thousand neurons and more) if the connectivity is sparse and the connection strengths are large. Self-sustained activity can be robustly induced by a brief pulse to a small fraction of the neurons. We will show that these self-sustained states differ in their survival time statistics and their interspike interval distributions from previously reported self-sustained states.

In the following section, we will briefly revisit the results of Ashby [6] and Griffith [20] to show under which conditions recurrent networks of excitatory and inhibitory neurons can sustain states of low rate almost indefinitely. We will then investigate the nature and properties of these states in computer simulations. We demonstrate that self-sustained activity, even in small networks, is stable and long-lived, provided the connectivity is sparse and synapses are strong. We compare self-sustained activity with the asynchronous irregular state (AI state) characterized by Brunel [13]. We find that the firing patterns of neurons in the self-sustained states are highly irregular. This irregularity can be seen in the wide range of firing rates and large coefficients of variation (CV) of the interspike intervals. While self-sustained activity states require sparse and strong connections, the required post-synaptic potential (PSP) amplitudes are still in the physiological range. Moreover, self-sustained activity states emerge in weakly coupled networks with a few strong connections, corresponding to the recently proposed idea that cortical connectivity is best described as a few strong synapses in a sea of weak ones [21], or a long-tailed distribution of synaptic weights.

Results
Stability in networks of simple threshold elements
Following Ashby [6], we describe the activity of a neuron by a single number p which satisfies 0 ≤ p ≤ 1:

p = lim_{t→∞} (n/C) · (Δt/t)    (1)

where n is the number of spikes arriving at the C inputs up to time t, counted in bins of width Δt. The probability of observing exactly n spikes on the C inputs follows a binomial distribution:

p_n = (C choose n) p^n (1 − p)^{C−n}    (2)

Now assume that a neuron fires if there are n ≥ θ input spikes. Then the probability of producing an output spike is given by the cumulative binomial probability function:

P(p) = Σ_{n=θ}^{C} (C choose n) p^n (1 − p)^{C−n}    (3)

We can estimate the long-term behavior of the network activity by considering equation (3) as an iterative map. Starting from an initial activity p, we repeatedly apply (3). Ashby found that the only stable fixed points in this iteration are p = 0 and p = 1, that is, either all neurons are silent or all neurons are active.

Griffith [20] noted that this situation changes if the network also contains inhibitory neurons. In the case of C_E excitatory and C_I inhibitory inputs, the new threshold condition becomes:

n_E − g · n_I ≥ θ    (4)

where n_E is the number of spikes at the C_E excitatory inputs, n_I the number of spikes at the C_I inhibitory inputs, and g a factor that captures the difference in synaptic efficacies. To reflect the cortical ratio of 80% excitatory neurons to 20% inhibitory neurons, we choose C_I = γ · C_E with γ = 0.25. Consider n_E excitatory and n_I inhibitory spikes. The respective cumulative probability function is given by:

P(p) = Σ_{n_E − g·n_I ≥ θ} (C_E choose n_E) (C_I choose n_I) p^{n_E + n_I} q^{C_E + C_I − n_E − n_I}    (5)

where q = 1 − p and the sum is over all combinations of n_E and n_I that satisfy the threshold condition (4). Again, we can use this expression as an iterative map to determine whether a given probability p is stable under repeated application of equation (5), that is:

p* = p = P(p)    (6)

with the condition that

|∂P(p)/∂p|_{p*} < 1.    (7)

This condition is violated when C_E and C_I become large. But if the number of inputs is small and the inhibitory efficacy is very strong, then there is a stable solution at p = 1/2. Griffith solved equation (5) for the case of global inhibition which is strong enough to suppress an output spike, even if all excitatory inputs are active.

Unfortunately, it is not easy to determine whether other stable solutions to equation (5) exist, because the equation is discrete and involves the binomial coefficients of all possible input combinations that reach or exceed threshold. Typical approximations use the assumptions that the number of inputs is large and the individual probabilities are low. Then, the law of large numbers allows us to replace the binomial distribution with a normal distribution. Unfortunately, these are the conditions for which we already know that all solutions are unstable [20] and that external input is needed to obtain self-reproducing activities [9, 12, 13].

In this paper we won't attempt to simplify equation (5) further, but rather we will investigate it numerically. We will demonstrate that for sparse networks with large synaptic efficacies, there indeed exist stable self-sustaining activity states. We will then investigate these states in large-scale network simulations of current-based integrate-and-fire neurons. In a companion paper by Enger et al. (2013) we derive a theory which explains self-sustained activity states as well as their survival times.
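Ashby's bistability result can be checked directly by iterating equation (3). The following sketch (the values C = 100 and θ = 50 are illustrative choices, not parameters taken from the text) shows that the map drives any initial activity towards one of the two trivial fixed points:

```python
# Numerical check of Ashby's instability result: iterate the map
# P(p) = sum_{n >= theta} C(C, n) p^n (1-p)^(C-n) of equation (3)
# and observe that the activity converges to either p = 0 or p = 1.
from math import comb

def transfer(p, C=100, theta=50):
    """Probability of an output spike given input probability p (equation (3))."""
    return sum(comb(C, n) * p**n * (1.0 - p)**(C - n) for n in range(theta, C + 1))

def iterate(p0, steps=50, C=100, theta=50):
    """Apply the map repeatedly, starting from initial activity p0."""
    p = p0
    for _ in range(steps):
        p = transfer(p, C, theta)
    return p

# Starting below theta/C the activity dies out; starting above it saturates.
print(iterate(0.45))  # -> close to 0
print(iterate(0.55))  # -> close to 1
```

There is no intermediate stable state: the only attractors of the purely excitatory map are total silence and maximal excitation, exactly the paradox Ashby described.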
Many inputs and high threshold
Neurons in cortex have a large number of synapses [22, 23] and it is often assumed that this also implies that the individual synapses must be small [23], that is, that the threshold θ is large. Unfortunately, it is difficult to reach threshold under these conditions, because a large number of neurons must be simultaneously active. This is reflected in the very low output probability for this situation. Figure 1A shows the numerical solution of equation (5) for a network with C_E = 1 000, C_I = 250, θ = 100. The probability stays orders of magnitude below the diagonal and the only stable fixed point is P(0.0) = 0.0. Figure 1B shows the same network with the lower threshold θ = 50; as a result the peak probability has increased by a factor of almost 1 000, but the curve is still far below the diagonal.

Figure 1. Effect of the firing threshold on the firing probability transfer function (numerical solution of equation (5)). (A) Many synapses, high threshold; θ = 100, C_E = 1 000, C_I = 250. The probability stays orders of magnitude below the diagonal and the only stable fixed point is P(0.0) = 0.0. (B) Many synapses, lower threshold; θ = 50, C_E = 1 000, C_I = 250. The peak probability is increased by almost three orders of magnitude.

Stability with low threshold
The threshold has a strong non-linear influence on the firing probability, because the odds of finding sufficiently many coincident (or near-coincident) spikes decrease at least exponentially with increasing threshold. We can use this lever to our advantage and lift the firing probability above the diagonal, thus generating stable self-sustaining activity. We do this by considerably lowering the firing threshold. Figure 2A shows the probability for θ = 5 (C_E = 1 000, C_I = 250). This time, the probability crosses the diagonal, indicating that there are stable, self-reproducing activity states. The curve has three important points p+, p*, and p−.

First, the ignition point p+, defined as

p+ = min { p ∈ (0, 1] with P(p) = p }.    (8)

Second, the stable self-reproducing point p*, defined as

p* = min { p ∈ (p+, 1] with P(p) = p }.    (9)

Third, the shut-off point p−, beyond which all activity ceases again. It is defined as:

p− = min { p ∈ (p*, 1] with P(p) = p+ }.    (10)

For p < p+ the firing probability quickly tends to zero, for p+ < p < p* the firing probability is amplified, and for p > p− the probability P(p) quickly tends to zero again. Thus, stable activity is only possible within the range p+ < p < p−. If too few or too many neurons in the network are active, activity will cease in the next iteration.

In figure 2A, the ignition point p+ lies just above zero and the stable self-reproducing point is p* ≈ 0.11. Beyond p+, P(p) quickly rises towards its maximum and then falls off again; if the activity exceeds the shut-off point p− ≈ 0.6, activity will cease in the next iteration.

Figure 2B demonstrates that the stable point persists even if we reduce the number of synapses by a factor of 10 (C_E = 100, C_I = 25, θ = 5). The probability still rises above the diagonal and crosses it again at p* ≈ 0.2. Decreasing the number of connections has obviously increased the activity at the stable point: because g = 5 > 1/γ, the synaptic weights decrease more strongly than the number of connections. Another notable change is the slope of the curve. The probability rises more gently, has a wider peak, and also decreases more slowly, compared to 2A. Thus, we expect the rates in a network with these parameters to be less volatile. Moreover, the shut-off point p− is beyond 0.9; thus the network can endure much higher activities without shutting off.

Figure 2. Stable points of the firing probability transfer function (numerical solution of equation (5)). (A) Many synapses, low threshold; θ = 5, C_E = 1 000, C_I = 250. The probability steeply rises to its peak and crosses the diagonal at p* ≈ 0.11, a stable fixed point. (B) Low threshold and fewer synapses (C_E = 100, C_I = 25, θ = 5). The curve rises more slowly than in (A) and also falls off more slowly towards zero. The intersection with the diagonal at p* ≈ 0.2 is again a stable fixed point.
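The fixed points of equation (5) can be located numerically by brute-force summation over all input combinations. A sketch for the configuration of figure 2B (C_E = 100, C_I = 25, θ = 5, g = 5); the bracketing interval used for the bisection is an assumption based on the shape of the curve:

```python
# Sketch: evaluate the transfer function of equation (5) exactly and find the
# stable self-reproducing point p* by bisection on P(p) - p.
from math import comb

C_E, C_I, G, THETA = 100, 25, 5.0, 5

def transfer(p):
    """Output spike probability P(p) of equation (5)."""
    q = 1.0 - p
    total = 0.0
    for n_e in range(C_E + 1):
        for n_i in range(C_I + 1):
            if n_e - G * n_i >= THETA:  # threshold condition, equation (4)
                total += (comb(C_E, n_e) * comb(C_I, n_i)
                          * p**(n_e + n_i) * q**(C_E + C_I - n_e - n_i))
    return total

def fixed_point(lo, hi, tol=1e-6):
    """Bisection on P(p) - p; assumes a single sign change on [lo, hi]."""
    f = lambda p: transfer(p) - p
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

p_star = fixed_point(0.1, 0.9)  # stable self-reproducing point p*
print(round(p_star, 3))
```

The resulting value should lie in the neighborhood of the p* ≈ 0.2 read off figure 2B, while activities below the ignition point (e.g. p = 0.01) are mapped towards zero.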
Stability in recurrent neural networks
We now turn from numerical evaluations of the simplified Griffith model to simulations of recurrent networks of current-based integrate-and-fire neurons. In these simulations we investigate whether the stable states of self-sustaining activity found in Griffith's model can indeed be induced in networks of spiking neurons.
Neuron and network model
The network model that we will be using is based on the sparse random network of Brunel [13]. It consists of N_E excitatory and N_I inhibitory neurons, with N_I = γN_E and γ = 0.25. Each neuron receives input from C_E excitatory and C_I inhibitory neurons, with C_E = εN_E and C_I = εN_I, where ε satisfies 0 ≤ ε ≤ 1. The cases ε = 0 and ε = 1 correspond to an unconnected and a fully connected network, respectively.

We consider integrate-and-fire neurons with current-based synapses [24], whose membrane potential is given by:

τ_m dV_m^i(t)/dt = −V_m^i(t) + R_m I^i(t),    (11)

for each neuron i = 1 ... N = N_E + N_I, where τ_m is the membrane time constant, R_m the membrane resistance, and I^i(t) the synaptic current. Whenever the membrane potential V_m reaches the threshold value V_th, a spike is sent to all post-synaptic neurons. Each spike induces a post-synaptic current, modeled as an alpha function:

psc(t) = α · t · exp(−t/τ_syn),    (12)

where α is chosen such that the resulting post-synaptic potential has amplitude J_E or J_I for excitatory and inhibitory synapses, respectively. J_ij is the efficacy and D_ij the delay of the synapse from neuron j to neuron i. Excitatory synapses have efficacy J_E and inhibitory synapses efficacy J_I = −g · J_E, with g > 0. The parameters g and γ determine the ratio of excitation to inhibition. The regime g ≈ 1/γ is called the balanced regime, g > 1/γ the inhibition dominated regime, and g < 1/γ the excitation dominated regime. The details of our model are summarized in figures 10 and 11.

Relation to Griffith's model
The parameters of Griffith's model are the number of excitatory and inhibitory inputs C_E and C_I, the ratio between excitation and inhibition g, and the threshold θ. C_E and C_I are determined by the respective numbers of excitatory and inhibitory neurons, N_E and N_I, as well as the connection probability ε. The threshold θ is given by the membrane threshold V_θ and the excitatory synaptic weight J_E:

θ = V_θ / J_E    (13)

Before we turn to stable states of self-sustained activity, we will review the model of Brunel [13], which describes how external excitatory spike input can induce low-rate activity in a recurrent network. We use this model as a reference for comparison with the self-sustained states that don't require external input. Next, we consider the cases of stable self-sustained activity discussed in the previous section. We will then extend our simulations and investigate how the firing rate and the survival time of self-sustained states depend on the ratio of excitation and inhibition as well as the size of the excitatory synaptic weight J_E.

Brunel's model: many connections, high threshold
In his model of asynchronous irregular activity, Brunel [13] assumed that each neuron has a large number of synapses with accordingly small synaptic weights. In Griffith's terms, these assumptions correspond to the case of many connections with high threshold, and we have seen that in this regime there is no stable activity state except for p = 0. To overcome this problem the model is usually supplied with external excitatory input [12, 13] or pacemaker neurons [15, 25].

Figure 3 shows the spiking activity induced by an external Poisson input, as proposed by Brunel [13]. The configuration corresponds to the case used in figure 1A and to the asynchronous irregular state shown in figure 8C of [13]. Figure 3A shows a raster plot of the spiking activity of 50 neurons over an epoch of 500 ms. Each point in the raster plot corresponds to a spike of a neuron at the respective time. At time t = 0 ms all neurons are in the quiescent state. At t = 50 ms an external Poisson input is switched on and induces spiking activity in the network. The Poisson input then persists until it is switched off at t = 400 ms, after which the network falls back into the quiescent state again.

Figure 3. Activity in a random network with 12 500 neurons, g = 5, ε = 0.1, and Poisson background activity with ν_ext = 38 Hz. Panels B–D are computed from simulations that lasted 100 seconds. (A) Raster plot (top) and spike count (bottom) of 50 neurons for 500 milliseconds. Poissonian background activity is supplied between t = 50 ms and t = 400 ms. (B) Firing rate distribution of all excitatory neurons. The firing rates are approximately Gaussian distributed within a narrow range around 33–34 Hz. (C) Interspike interval distribution. The interspike intervals are Gaussian distributed with a mean of 30 ms and a standard deviation of approx. 6 ms. (D) Histogram of the coefficient of variation. The coefficients of variation of the interspike intervals are also Gaussian distributed with mean 0.22.

Barbieri and Brunel [18] observed that the spiking activity produced by these types of networks cannot explain the irregularity of experimentally recorded data. This is illustrated in figures 3B–3D. Figure 3B shows that the firing rates in the network, measured over 100 seconds, follow a very narrow Gaussian distribution with a mean of about 33 Hz. Figure 3C shows that the interspike intervals are Gaussian distributed (30 ms ± 6 ms), as opposed to the exponential distribution that would be the signature of Poissonian firing. The coefficients of variation that result from this narrow ISI distribution (figure 3D) are accordingly small (CV ≈ 0.2).

Self-sustained activity in recurrent networks of integrate-and-fire neurons
Griffith's model predicts that stable self-sustained activity states exist in networks where the firing threshold is sufficiently low. We will first look for these states in simulation. Next, we will investigate how abundant and how robust these self-sustained activity states are. Do they exist only for a very limited set of parameters, or do they exist in a larger region of the parameter space? We will then have a closer look at the firing pattern of self-sustained activity.

Our starting point is the configuration shown in figure 2B. Assuming a firing threshold of V_th = 20 mV, Griffith's model predicts that we can observe stable self-sustained activity with an excitatory synaptic weight of J_E = 20 mV / 5 = 4 mV. Indeed, in simulation we found that a network with J_E = 4 mV shows self-sustained activity states.

Figure 4 shows the results of a simulation that lasted 100 s. Figure 4A shows the spiking activity of 50 excitatory neurons during the first 500 ms (top) and the corresponding ensemble rate of 1 000 excitatory neurons (bottom). Again, at t = 0, all neurons are in the quiescent state. To trigger activity, we stimulated each neuron with weak Poisson noise for a period of 150 ms. After that, no further input was given and the network was left to its own devices. The noise stimulus causes a transient network response, after which the network settles to a mean firing rate of 32 Hz. This state persists until the end of the simulation at t = 100 s. Figure 4B shows the spiking activity after 90 seconds of simulation to illustrate that the self-sustained state is stable.

There are a number of obvious differences between the raster plots of the Brunel network in figure 3 and the raster plots in figure 4. First, the Brunel network is only active with external Poisson input, whereas the network in figure 4 continues to fire even in the absence of external input. This state persists for very long times, in this case 100 seconds.
We call this network state the self-sustained asynchronous irregular (SSAI) state. The SSAI state is not stable in the strict mathematical sense, because the random connectivity may cause the number of active neurons to drop below a critical limit, after which spiking stops altogether. Since there is no noise input into the network, it depends on the instantiation of the random connectivity if and when the self-sustained state will stop (see also [11, 16]).

Second, in figure 3 the spikes are relatively homogeneously distributed, whereas in figure 4 the distribution of spikes is very irregular. Neurons in the SSAI state exhibit long periods with few or no spikes at all, as well as bursting periods where spikes are so close together that they can hardly be distinguished. It appears as if the neurons switch between states of high activity and states of silence.

Third, the population rates (lower part of each panel) of the self-sustained networks differ from the Brunel case. In Brunel's case (figure 3), the population rate stays quite constant after the initial transient, or it oscillates slightly [26]. By contrast, the population rate of the self-sustained network in figure 4 shows large fluctuations without any obvious temporal structure.
Figure 4.
Self-sustained activity in a network of 12 500 neurons. Each panel shows the spiking activity of 50 representative excitatory neurons (top) and the population rate of 1 000 neurons (bottom). (A) First 500 ms of a simulation with parameters g = 5, J = 4.0 mV, ε = 0.01, which lasted 100 s. Between t = 50 ms and t = 200 ms, the neurons were supplied with a weak Poisson stimulus to trigger spiking. After that, spiking activity continues for the whole simulation epoch at an average rate of 31.83 Hz. (B) Activity of the same network as in A after 90 s. (C) Distribution of the mean firing rates of the neurons. The mean firing rates follow a Gaussian distribution with mean µ = 31.83 Hz and standard deviation σ = 2.68 Hz. (D) Distribution of the coefficients of variation (CV) of the interspike interval (ISI) distribution. The CV follows a Gaussian distribution with mean µ_CV = 2.29 and standard deviation σ_CV = 0.08. The rate and CV distributions were estimated from the activity of 1 000 neurons, recorded for 100 seconds.

To quantify the irregularity of neuronal activity we looked at the distribution of mean firing rates (figure 4C) as well as the distribution of the coefficients of variation (figure 4D). For figure 4C, we computed the mean firing rates of 1 000 neurons over the simulation period of 100 seconds. Firing rates are roughly Gaussian distributed with a mean of 31.83 Hz and a standard deviation of 2.68 Hz. The lowest firing rate in the population is 20 Hz and the highest rate is 40 Hz.

From figures 4A and 4B we know that over time the firing rate of each neuron fluctuates considerably. These fluctuations lead to a correspondingly large coefficient of variation (CV), which is the standard deviation of the interspike intervals (ISIs) divided by their mean. Figure 4D shows the distribution of the coefficients of variation of 1 000 neurons. We find a Gaussian distribution with mean 2.29 and standard deviation 0.08. The smallest CV is 2.1, which is more than twice as large as the CV of a Poisson process.
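The CV computation described above can be illustrated on synthetic spike trains. In this sketch, the two firing rates (500 Hz and 9 Hz) and the equal state occupancy are illustrative assumptions, chosen only to mimic switching between a burst state and a quiet state; they are not the fitted values of the paper:

```python
# Sketch: coefficient of variation (CV) of interspike intervals, computed as
# std(ISI) / mean(ISI). A neuron switching between a fast and a slow Poisson
# state has CV well above 1, whereas a plain Poisson process has CV close to 1.
import random
from statistics import mean, stdev

def cv(intervals):
    """Coefficient of variation of a list of interspike intervals."""
    return stdev(intervals) / mean(intervals)

random.seed(42)

# Plain Poisson process at 32 Hz: exponentially distributed intervals.
poisson_isi = [random.expovariate(32.0) for _ in range(20000)]

# Two-state firing: each interval drawn from a fast (500 Hz) or a slow (9 Hz)
# exponential with equal probability, mimicking burst and quiet states.
burst_isi = [random.expovariate(500.0 if random.random() < 0.5 else 9.0)
             for _ in range(20000)]

print(round(cv(poisson_isi), 2))  # close to 1
print(round(cv(burst_isi), 2))    # well above 1
```

This is the sense in which CVs above 2, as observed in the SSAI state, point to firing that is far more irregular than a single Poisson process.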
Interspike interval statistics in the SSAI state
Next, we investigate the statistical properties of interspike intervals in the SSAI state. If the neurons were firing Poissonian with rate λ, their interspike intervals should be exponentially distributed:

f_exp(t) = λ exp(−λ · t)    (14)

On a logarithmic scale, the exponential distribution is a straight line whose slope is proportional to the rate of the exponential distribution, which is also the rate of the Poisson process:

log(f_exp(t)) = log(λ exp(−λ · t))    (15)
              = log(λ) − λ log(e) · t    (16)
              = a · t + b    (17)

with a = −λ log(e) and b = log(λ). If the firing patterns in the SSAI state can at least partially be described by a Poisson process, the logarithmic interspike interval distribution should have a linear part.

Figure 5A shows the interspike interval distribution for the network from figure 4 on a logarithmic scale. For intervals larger than 80 ms, the distribution is well described by a straight line whose slope corresponds to a firing rate of λ_{t>80} = 8.81 Hz. Intervals smaller than 80 ms cannot be explained by this Poisson process. In fact, almost 80% (78.8%) of all spikes contribute to these intervals, although the neurons spend 80% (79.2%) of their time in the remaining intervals larger than 80 ms.

To understand how this surplus of short intervals is generated, we return to the hypothesis that during SSAI activity the neurons are in one of two states: a low-rate state and a high-rate state. The low-rate state accounts for all intervals larger than 80 ms, while the high-rate state must account for the surplus of intervals below this threshold. Under this hypothesis, the interval distribution in figure 5A should be the convolution of two exponential distributions with rates λ_1 and λ_2 (λ_1 > λ_2):

f_ISI(t) = f_exp1(t) ∗ f_exp2(t)    (18)
         = (λ_1 · λ_2)/(λ_1 − λ_2) · (exp(−λ_2 t) − exp(−λ_1 t))    (19)

For t · λ_1 ≫ 1, we get

f_ISI(t) ≈ (λ_1 · λ_2)/(λ_1 − λ_2) · exp(−λ_2 t)    (20)

Figure 5.
Interspike interval distribution for the network in figure 4 on a logarithmic scale. In each panel, a regression line (black) shows which region of the distribution is well described by an exponential distribution. In panels B–D, only the gray bars were used to determine the regression lines. (A) ISI distribution for all intervals smaller than 1 000 ms. (B) ISI distribution for intervals smaller than 80 ms, corrected by the intervals explained by the distribution in panel A. (C) ISI distribution for intervals smaller than 30 ms, corrected by the intervals explained by the distributions in panels A and B. (D) ISI distribution for intervals smaller than 15 ms, corrected by the intervals explained by the distributions in panels A through C. See text for details.

Table 1. Time scales and firing rates of a network during the SSAI state. The rates correspond to the slopes of the regression lines in figure 5.

No.  range (ms)    rate (Hz)   Σ time (%)   Σ spikes (%)
1    3 < t ≤ 10    786.41      4            28
2    10 < t ≤ 25   –           7            37
3    25 < t ≤ 80   –           6            11
4    80 < t        8.82        79           21

Using equation (20) we can estimate the slow firing rate component. We can also estimate the fast firing rate component. To this end, we subtract the number of intervals expected from the slow component, because from equation (19) we know that the interval distribution is essentially the superposition of the two components. Thus:

f_ISI(t) − (λ_1 · λ_2)/(λ_1 − λ_2) · exp(−λ_2 t) = −(λ_1 · λ_2)/(λ_1 − λ_2) · exp(−λ_1 t)    (21)

Figure 5B shows the ISI distribution for intervals smaller than 80 ms with the intervals generated by the slow rate subtracted. Again, a good part of the distribution is well described by a Poisson process, but for intervals shorter than 20 ms the model again fails. We can repeat this procedure to estimate the rate of the next faster process. This is shown in figures 5C and 5D.

Altogether we find that the interval distribution of this network can be described by four Poisson processes with widely different rates, valid for different ranges of interspike intervals. If the four Poisson processes were active in parallel, it would be impossible to separate them in the interval distribution, because the sum of two or more Poisson processes with rates λ_1, λ_2, ..., λ_n is again a Poisson process with rate λ* = Σ_i λ_i. Thus, we must assume that the neurons switch between a small number of Poisson states, each with its own firing rate and its own range of intervals. These states and their interval ranges are summarized in table 1.

The fastest process has a rate of λ_1 = 786.41 Hz and is responsible for the high-frequency bursts with interspike intervals between 3 and 10 ms observed in the raster plots of figure 4. Note that although this rate is larger than the theoretical maximum of λ_max = 1/t_ref = 500 Hz, the smallest interval is still larger than the refractory period of 2 ms.
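The peeling procedure of equations (19)–(21) can be sketched on synthetic data: draw intervals from a two-state mixture, fit the logarithmic tail by linear regression to obtain the slow rate, subtract the extrapolated slow component, and fit the residual to obtain the fast rate. The generating rates (9 Hz and 500 Hz), the sample size, and the fitting windows are illustrative assumptions, not the values measured in the paper:

```python
# Sketch: recover the slow and fast rates of a two-state ISI mixture by fitting
# straight lines to the log of an interval histogram (the "peeling" of fig. 5).
import math
import random

def lsq_slope(xs, ys):
    """Least-squares slope and intercept of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

random.seed(1)
lam_slow, lam_fast = 9.0, 500.0          # generating rates in Hz (assumed)
isi = [random.expovariate(lam_fast if random.random() < 0.5 else lam_slow)
       for _ in range(50000)]             # intervals in seconds

width = 0.001                             # 1 ms histogram bins
hist = [0] * 1000
for t in isi:
    if t < 1.0:
        hist[int(t / width)] += 1
centers = [(i + 0.5) * width for i in range(1000)]

# Step 1: fit the tail (80-300 ms), where only the slow state contributes.
tail = [(c, math.log(hist[i])) for i, c in enumerate(centers)
        if 0.08 < c < 0.3 and hist[i] > 0]
slope_s, icpt_s = lsq_slope([c for c, _ in tail], [y for _, y in tail])
rate_slow = -slope_s                      # ln-counts decay with slope -lambda

# Step 2: subtract the extrapolated slow component and fit the residual (1-8 ms).
resid = [(c, math.log(hist[i] - math.exp(icpt_s + slope_s * c)))
         for i, c in enumerate(centers)
         if 0.001 < c < 0.008 and hist[i] > math.exp(icpt_s + slope_s * c)]
slope_f, _ = lsq_slope([c for c, _ in resid], [y for _, y in resid])
rate_fast = -slope_f

print(round(rate_slow), round(rate_fast))  # estimates near the generating rates
```

On the natural-log scale the slope of each linear segment is minus the rate of the corresponding exponential, so the two regression slopes should recover values near the generating 9 Hz and 500 Hz.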
The very high rate of the fastest state is the result of the large number of intervals between 3 and 10 ms. The slowest process is responsible for the large gaps between spikes observed in figure 4 and has a rate of 8.82 Hz.

Firing rates and survival times depend on J and g

So far we have looked at specific network configurations which were suggested by the discrete Griffith model. We now step up from these specific cases to see how the mean firing rate and the survival time of the self-sustained activity states depend on the ratio between excitation and inhibition g and on the synaptic strength J. To this end, we simulated a network with 10 000 excitatory and 2 500 inhibitory neurons for up to 100 seconds for combinations of (g, J), where g was varied from 3 to 6 in steps of 0.01 and J was varied from 1 mV to 5 mV in small steps. In each simulation, we measured the mean firing rate of the neurons as well as the survival time of the SSAI state.

The results of these simulations are summarized in figure 6. Figure 6A shows the contour plot of the mean firing rate as a function of J_E and g. Darker colors correspond to lower rates. Along the contour lines, the firing rate remains approximately constant (iso-rate boundaries).

Figure 6. (A) Contour plot of the mean firing rate as a function of J and g. For small g, the mean rate is close to the theoretical maximum 1/t_ref; with increasing g, the mean rate decreases monotonically. The region between 0–20 Hz is characterized by short-lived self-sustained states. Stable self-sustained states can be found for mean rates greater than 20 Hz. (B) Contour plot of the survival time as a function of J and g; survival times range from less than 100 ms (dark area) to more than 10 s (light area).
There is a sharp transition between immediate network death (blue area) and long survival (beige area) which does not soften significantly with increasing J and g.

The abscissa represents the ratio of inhibition to excitation g. For g = 4, excitation and inhibition contribute equally to the synaptic current. For g < 4, excitation contributes more than inhibition, and for g > 4, inhibition contributes more than excitation. For g < 4, the network fires near the maximal rate ν_max = 1/t_ref. This is true even for small excitatory amplitudes. As g increases with the inhibitory amplitude, rates become lower, until they reach the physiologically interesting range of 0–40 Hz. This range is reached even before inhibition balances excitation; however, this activity is usually unstable. This is also evident from the very steep transition between the white area of high rates and the dark area of low rates. With further increasing relative inhibition (larger g), the slope of the transition between high and low rates decreases, creating wide regions of intermediate rates.

Figure 6B shows the survival time of network activity for the same parameter range. Here, we observe a sharp transition between basically two states: either activity ceases after less than 100 ms, or it survives for a very long time. In contrast to the firing rates in figure 6A, the transition remains sharp for all values of the relative inhibition g. This means that SSAI states in inhibition-dominated networks (g > 4) are more stable than in balanced (g = 4) or excitation-dominated networks (g < 4).

Self-sustained states exist in a wide range of network sizes

Self-sustained asynchronous irregular states are an emergent network property which requires a minimal network size. Thus, we were interested in the smallest network which still shows SSAI states.
We were also interested in larger networks, where the number of connections per neuron increases and the synaptic strength becomes smaller relative to the firing threshold.

Figure 7. Self-sustained activity in networks of different sizes. Raster plots of the spiking activity are shown in the top row, the interval distributions in the bottom row. The firing rates decrease with increasing network size. This is also visible in the interval distributions, which shift towards larger intervals. (A) 1 000 excitatory and 250 inhibitory neurons, (B) 10 000 excitatory and 2 500 inhibitory neurons, and (C) 100 000 excitatory and 25 000 inhibitory neurons.

A trivial way of scaling the network is to start with a given configuration (e.g. N = 10 000, g = 5, and J = 3 mV) and to increase the number of neurons, while keeping the synaptic amplitude J and the numbers of connections C_E and C_I constant. This strategy works well and yields qualitatively the same results as the original smaller network (not shown). In particular, the mean firing rates stay constant. Alternatively, we can keep the connection probability constant. Then the number of connections per neuron will grow as the network increases. To compensate for the increasing number of connections, we must decrease the synaptic amplitudes J proportionally to √(N/N′). This scaling works until the synaptic amplitudes become small compared to the threshold.

Similarly, there is a lower limit to the network size. If we reduce the number of neurons for a fixed amplitude J and fixed numbers of connections C_E and C_I, the connectivity ε increases accordingly.
Thus, we quickly reach the case where ε approaches 1 and each neuron receives input from all other neurons. In this state of total symmetry, asynchronous irregular states cannot exist. If we keep the connectivity ε constant, we can, in theory, reduce the network size until ⌊ε · N⌋ = 0. Thus, for ε = 0.1 this limit would be reached around N = 10. However, simulations show that already for ε = 0.1 and N = 100, the self-sustained state requires synaptic amplitudes of J > θ and shows unphysiological spike trains, where each neuron switches between high-frequency bursts and silence. Moreover, the self-sustained state becomes unstable. The main reason seems to be that for such small networks each neuron receives less than 10 inputs, each sufficiently strong to trigger a spike. At the same time, each neuron sees a considerable portion of the entire network. Thus, correlations amplify quickly until coherent down-states stop the network activity. In between these extremes, there is a wide range of network sizes and connectivities where self-sustained states of asynchronous irregular activity exist.

Figure 7 shows the raster plots and interval distributions for example networks of three sizes: 1 000, 10 000, and 100 000 excitatory neurons. The network in figure 7A has the highest rate and the largest PSP amplitude, but the smallest number of connections per neuron.

Figure 8. Self-sustained activity in a network with many weak and few strong synapses. (A) Raster plot of spiking activity (top) and population firing rate (bottom). (B) ISI histogram on a logarithmic scale.

The networks in figures 7B and 7C have the same synaptic amplitudes and the same connection probability; thus, they differ only in the number of connections per neuron and in the resulting firing rates. Apart from the firing rates, there is no qualitative difference in the raster plots or the interval distributions.
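The two scaling strategies discussed above can be summarized in a small helper; a sketch (the function names and example numbers are ours, not from the paper):

```python
import math

def scale_fixed_indegree(J, C_E, C_I):
    """Strategy 1: grow the network but keep the synaptic amplitude J and
    the in-degrees C_E, C_I fixed; firing rates stay approximately constant."""
    return J, C_E, C_I

def scale_fixed_connectivity(J, C_E, C_I, N_old, N_new):
    """Strategy 2: keep the connection probability constant, so in-degrees
    grow with N; compensate by shrinking J proportional to sqrt(N_old/N_new)."""
    f = N_new / N_old
    return J * math.sqrt(N_old / N_new), round(C_E * f), round(C_I * f)

# Quadrupling a network with 1 000 excitatory inputs per neuron under
# fixed connectivity halves the synaptic amplitude:
# scale_fixed_connectivity(3.0, 1000, 250, 10_000, 40_000)  -> (1.5, 4000, 1000)
```

Strategy 2 eventually fails, as noted in the text, once the scaled J becomes small compared to the threshold.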
The most prominent feature seems to be that small networks are more dominated by small spike intervals than large networks. This is best seen in the interval distribution, which widens considerably as the networks become larger.

Few strong synapses in a sea of weak ones

Recently, it has been proposed that cortical networks have a long-tailed distribution of synaptic weights, where most connections are very weak, but some connections are very strong [21, 27]. While the strong connections in such a network might facilitate self-sustained activity, the larger number of weak connections may destroy these states again. In the following, we tested whether a network with many weak and a few strong synapses will exhibit self-sustained activity states. To do so, we took the Brunel network of figure 3 and increased the weight of 1% of its connections to J = 4 mV. The spiking activity of this network is shown in figure 8A and its ISI distribution in figure 8B.

Figure 8A shows the spiking activity after 90 seconds of simulation, which means that a Brunel-type network which is doped with a small number (1%) of strong synapses also shows self-sustained activity. The firing rate in this state is, at 26 Hz, about 20% lower than in the undoped network. Figure 8B shows the ISI distribution on a logarithmic scale, which clearly resembles the ISI distributions of the plain SSAI networks. In the doped network it is also possible to describe the ISI histogram as a superposition of different states. Here, the fastest state has a rate of 956 Hz, which is assumed for 3% of the time. The slowest state has a rate of 5 Hz.

Discussion

In this paper, we show that self-sustained states of asynchronous irregular activity can be induced in recurrent networks of simple threshold elements, like integrate-and-fire neurons, under the following conditions:

1. a fraction of the connections is sufficiently strong (i.e. Θ/J ≪ C_E);

2.
a sufficient number of excitatory neurons is activated to trigger the self-sustained activity state.

Condition 1 contradicts the common view that the dense cortical connectivity implies weak connection strengths [13, 23]. Even if all connections in a recurrent network are strong, the firing rate will not approach the theoretical limit of 1/τ_ref. Moreover, in a network with weak synaptic connections, a small fraction of strong synapses will actually decrease the overall firing rate in the network.

Condition 2 implies that SSAI states can be induced by temporarily lifting the network activity above the ignition point (equation (8)). This can be achieved by supplying either a brief synchronized input to the excitatory neurons or an asynchronous low-rate stimulus over a longer period of time. Once the SSAI state is reached, the ignition stimulus may be switched off and the AI state will persist.

Self-sustained activity as a model for ongoing or spontaneous activity

Spontaneous activity in the mammalian nervous system is marked by low firing rates between 5 and 10 Hz. By contrast, the self-sustained activity states considered here show much higher rates of 20 Hz and more. This raises the question if and how self-sustained states can give rise to low-rate spontaneous activity. There are several possibilities, and we will outline the two most probable ones.

Leaking excitation. It has been repeatedly shown that weakly coupled networks with random external input show stable low-rate firing. Thus, the simplest model to explain low-rate spontaneous activity is excitation from a strongly coupled assembly which acts as external drive for a weakly coupled network.

Coupled assemblies. Consider a network of many self-sustained networks (assemblies) which are coupled by competitive inhibition, such that at any point in time only one or a small fraction of the assemblies can be active.
During normal operation, each assembly would be activated for a short time before activity switches to another assembly. As a result, the average firing rate of the entire network will be much lower than within each of the assemblies.

Self-sustained activity in conductance-based networks

The self-sustained activity states described here result from the combinatorics of strong inputs. They do not rely on specific properties of the synapses or the membrane dynamics. Thus, SSAI states will occur under similar conditions in networks of more complex conductance-based neurons. Kumar et al. reported self-sustained activity in large networks of conductance-based neurons. In their model, they found that the survival time of self-sustained states grows exponentially with the network size. For networks of tens of thousands of neurons, they found a survival time of less than one second. By applying the reasoning developed in this manuscript, we found that also in networks of conductance-based neurons with few strong connections, spiking activity persists for very long periods.

Figure 9. Self-sustained activity in a network of conductance-based neurons. (A) Raster plot of spiking activity (top) and population firing rate (bottom). (B) ISI histogram on a logarithmic scale.

Figure 9 shows the results of a simulation that lasted 100 seconds. Both the raster plot (figure 9A) and the ISI distribution (figure 9B) look qualitatively identical to what we found in the current-based case. This is a good indication that the arguments and consequences of self-sustained activity induced by strong synaptic weights also apply to networks of more realistic model neurons, or even to networks in the brain.
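The rate reduction in the coupled-assemblies scenario above is simple arithmetic; a sketch with illustrative numbers (the 26 Hz value is the doped-network rate from the results, the assembly count is an arbitrary assumption):

```python
def network_mean_rate(nu_assembly, n_assemblies, n_active=1):
    """Time-averaged rate over the whole network when, at any moment, only
    `n_active` of `n_assemblies` equal-size assemblies is in the SSAI state
    and the remaining assemblies are silent."""
    return nu_assembly * n_active / n_assemblies

# Five competing assemblies, each firing at 26 Hz when active, yield a
# network average in the observed spontaneous range:
# network_mean_rate(26.0, 5)  -> 5.2 (Hz)
```

This shows how assemblies sustaining 20 Hz or more internally are compatible with network-wide spontaneous rates of 5–10 Hz.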
Variability of firing

It has been argued that the high irregularity of cortical cells is inconsistent with the temporal integration of random EPSPs [17], because such a counting process would inevitably be more regular than a Poisson process [2]. This phenomenon can be observed in Brunel's model in figure 3 [18]: the random superposition of small synaptic weights leads to very regular firing patterns with CV_i ≪ 1. In the strongly coupled networks studied here, by contrast, the neurons fire with CV_i > 1, more consistent with the irregularity of experimentally observed ISI distributions [29]. We found that during SSAI activity, the neurons switch between several discrete states, which results in the observed high variability of the ISIs.

Survival and survival time

Kumar et al. [11] observed that the survival time of self-sustained activity states in networks of conductance-based neurons grows exponentially with the network size. Networks of 10 000 neurons could sustain firing for as little as 10 ms, while larger networks of 50 000 neurons sustained firing for up to 10 seconds. By contrast, we observed little dependence of the survival time on the network size. Rather, the networks discussed here show bimodal survival times: either the activity ceases after a few milliseconds, or it persists for many seconds or minutes (see figure 6B). This was also true for the conductance-based network of figure 9, which means that the SSAI state described here is different from the persistent activity described by Kumar et al. [11].

The strong coupling between neurons results in activity which is determined by the combinatorics of the connectivity matrix (the network). Since there is no external source of noise, the response of a given network to a particular stimulus is deterministic. At each point in time, a different subset of the connections carries the activity forward. Only if one of these subsets is too small to fulfill the threshold criterion will the activity cease.
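The bimodal survival times described above can be quantified with a simple criterion; a sketch (the 100 ms silence threshold is an assumption chosen to match the "network death" boundary of figure 6B):

```python
def survival_time(spike_times, t_end, max_gap=0.1):
    """Time (in seconds) until the first population-wide silent period
    longer than `max_gap`; returns t_end if the network stays active
    throughout the observation interval."""
    prev = 0.0
    for t in sorted(spike_times):
        if t - prev > max_gap:
            return prev          # activity died at the last spike
        prev = t
    return t_end if t_end - prev <= max_gap else prev

# A burst that dies after 90 ms vs. activity spanning the full run:
# survival_time([0.01, 0.05, 0.09], t_end=10.0)  -> 0.09
```

Applied to the pooled spike times of all recorded neurons, this yields the sharply bimodal distribution seen in figure 6B: values near zero or near the full simulation time, with little in between.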
Burstiness and multi-state interspike interval distributions

A common measure for the burstiness of firing is the surplus of small ISIs (e.g. ISIs shorter than about 10 ms).

Self-sustained activity as memory

Self-sustained activity states do not require very large networks, but can be found in networks larger than about 100 neurons. Moreover, self-sustained states are very stable and persist for many seconds. Thus, a small population of a few hundred neurons could already store information in its activity state, which is very economical compared to other attractor memory models [33]. However, in the presence of strong connections, even small amounts of activity leaking into the memory population will trigger the SSAI state, irrespective of a stimulus. This poses a serious problem for models that store information in the activity state of a neural population. To be reliable, such memories need an additional control mechanism which prevents spurious activation of memory populations by the embedding network. Such a control mechanism could be provided by inhibition between different memory pools, so that at any time only one or a few memories can be active [34, 35].

Self-sustained activity as signal propagation

In self-sustained networks, the only source of randomness is the connectivity matrix. Once the connectivity is fixed, the activity in the network is deterministic. Consequently, we may describe self-sustained activity as a sequence of activated neuron groups: an initially activated group of neurons G₀ will trigger spikes in another group of neurons G₁, which in turn will activate the group G₂, and so on. Since the connections are strong, activating G₀ will always result in the same activation sequence G₀ → G₁ → … → G_N, and a different starting group G′₀ will result in a different activation sequence G′₀ → G′₁ → … → G′_N.
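The picture of deterministic group-to-group propagation can be illustrated with a toy binary network. The sketch below is not the paper's model: it replaces the integrate-and-fire dynamics with a one-step binary update and an assumed global inhibition that lets exactly the m most strongly driven neurons fire, which keeps the activity from dying out or saturating:

```python
import numpy as np

rng = np.random.default_rng(7)
N, K, m = 200, 20, 30      # neurons, strong inputs per neuron, active group size

# Fixed random connectivity: row i lists the neurons that project to neuron i.
W = np.zeros((N, N), dtype=int)
for i in range(N):
    W[i, rng.choice(N, size=K, replace=False)] = 1

def step(active):
    """Deterministic update: the m neurons receiving the largest summed
    input from the currently active group fire in the next step."""
    drive = W @ active
    nxt = np.zeros(N, dtype=int)
    nxt[np.argsort(drive, kind="stable")[-m:]] = 1
    return nxt

def sequence(g0, steps=15):
    states, a = [], g0
    for _ in range(steps):
        a = step(a)
        states.append(a)
    return states

g0 = np.zeros(N, dtype=int); g0[:m] = 1      # initial group G0
g0b = g0.copy(); g0b[0], g0b[m] = 0, 1       # G0 with one neuron exchanged
s1, s1_again, s2 = sequence(g0), sequence(g0), sequence(g0b)
# s1 equals s1_again (determinism); s2 typically diverges from s1,
# illustrating sensitivity to the initial group.
```

The same starting group always reproduces the same activation sequence, while swapping a single neuron in G₀ generally sends the trajectory down a different sequence.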
Thus, we may regard the initially activated neuron group G₀ as a signal which propagates through the network. An alternative interpretation of the self-sustained activity is that of recalling a sequence which has previously been stored by an appropriate learning mechanism. In this respect, strongly coupled networks differ from other network architectures, like weakly coupled networks and synfire chains [23].

In weakly coupled networks, signal propagation is difficult due to the weak connections and the high level of noise [15]. Strongly coupled networks could also be interpreted as an intertwined synfire chain, because both are characterized by a fixed sequence in which neurons are activated. However, there is an important difference: activity in strongly coupled networks is not only deterministic, it is also chaotic [8]. Thus, changing only one neuron in G₀ will quickly result in an activation sequence G′₁ → G′₂ → … which is distinct from the original activation sequence. By contrast, synfire chains are robust to small variations in their activation sequence [23, 36]. If one or a few neurons are missing or replaced in the initial set G₀, the divergent/convergent architecture of the synfire chain will repair this defect in the subsequent activation stages [37].

Methods

Neuron and network parameters. All networks were simulated using a current-based integrate-and-fire model where synaptic currents are modeled as alpha functions. The neuron model and network architecture are described in figures 10 and 11, according to the recommendations of Nordlie et al. [38]. Brunel [13] originally used an integrate-and-fire model with delta currents. These lead to instantaneous voltage jumps in response to each spike. In networks with weak synaptic couplings, such as Brunel's network, these discontinuities are smeared out by the noise.
In networks with strong synaptic couplings, however, the voltage jumps remain visible in the interval and rate distributions, since most spikes are then bound to the time grid imposed by the synaptic delays. Thus, we used the more realistic alpha functions as synaptic currents.

Data acquisition. Spike data was recorded to file from the first 1 000 non-input neurons and analyzed offline. Since the network was connected as a random graph, we could record from a consecutive range of neurons without introducing a measurement bias.

Spike data. For each neuron i, we record the sequence of spike times S_i = {t₁, t₂, …}_i over an observation interval T. Unless stated otherwise, the observation interval was 100 s. In the following, we write spike trains as

s_i(t) = Σ_{t′ ∈ S_i} δ(t − t′)   (22)

Firing rates. The average firing rate of a neuron i was determined by dividing the total number of spikes n_i within the observation interval T by its duration:

n_i = ∫₀ᵀ s_i(t′) dt′   (23)

⟨ν_i⟩ = n_i / T   (24)

Since there is no noise disturbing the neurons, each independent simulation run (trial) yields the same spike trains. The random connectivity, however, allows us to interpret different neurons as independent realizations of the same random process. Thus, the instantaneous population rate is a good approximation of the average instantaneous firing rate of a neuron. We computed the population rate by summing the spikes of all observed neurons within a Δt = 1 ms window around the time of interest:

ν_N(t, Δt) = (1/N) Σ_{i=1}^{N} ∫_t^{t+Δt} s_i(t′) dt′   (25)

A Model Summary
Populations: Three: excitatory, inhibitory, external input
Topology: —
Connectivity: Random convergent connections with probability P = 0.1 and fixed in-degrees C_E = P · N_E and C_I = P · N_I.
Neuron model: Leaky integrate-and-fire, fixed voltage threshold, fixed absolute refractory time (voltage clamp)
Channel models: —
Synapse model: α-current inputs
Plasticity: —
Input: Independent fixed-rate Poisson spike trains to all neurons
Measurements: Spike activity

B Populations
Name | Elements | Size
E | iaf neuron | N_E = 4 N_I
I | iaf neuron | N_I
E_ext | Poisson generator | C_E (N_E + N_I)

C Connectivity
Name | Source | Target | Pattern
EE | E | E | Random convergent C_E → 1, weight J, delay D
IE | E | I | Random convergent C_E → 1, weight J, delay D
EI | I | E | Random convergent C_I → 1, weight −gJ, delay D
II | I | I | Random convergent C_I → 1, weight −gJ, delay D
Ext | E_ext | E ∪ I | Non-overlapping C_E → 1, weight J, delay D

D Neuron and Synapse Model
Name: iaf_psc_alpha (NEST 2.0)
Type: Leaky integrate-and-fire, α-current input
Membrane potential:
τ_m dV_m(t)/dt = −V_m(t) + R_m I(t) if not refractory (t > t* + τ_rp)
V_m(t) = V_r while refractory (t* < t ≤ t* + τ_rp)
I(t) = (τ_m/R_m) Σ_{t̃} w δ(t − (t̃ + D))
Spiking: If V_m(t−) < V_θ ∧ V_m(t+) ≥ V_θ: 1. set t* = t; 2. emit spike with time stamp t*
Synaptic current: I_syn(t) = J (e/τ_syn) · t · exp(−t/τ_syn)

E Input
Type: Poisson generators. Description: fixed rate ν_ext, C_E generators per neuron, each generator projects to one neuron.

F Measurements
Spike times of all neurons.

Figure 10.
Model description according to [38], part 1.

G Network Parameters
Parameter | Figure 3 | Figures 4, 5, 7B | Figure 7A | Figure 7C | Figure 8
Number of excitatory neurons N_E | 10 000 | 10 000 | 1 000 | 100 000 | 10 000
Number of inhibitory neurons N_I | 2 500 | 2 500 | 250 | 25 000 | 2 500
Excitatory synapses per neuron C_E | 1 000 | 1 000 | 100 | 10 000 | 1 000
Inhibitory synapses per neuron C_I | 250 | 250 | 25 | 2 500 | 250

H Neuron Parameters
Parameter | Figure 3 | Figures 4, 5, 7B | Figure 7A | Figure 7C | Figure 8
Membrane time constant τ_m/ms | 30 | 30 | 30 | 30 | 30
Refractory period τ_rp/ms | – | – | – | – | –
Firing threshold V_th/mV | 20 | 20 | 20 | 20 | 20
Membrane capacitance C_m/pF | – | – | – | – | –
Resting potential V_E/mV | – | – | – | – | –
Reset potential V_reset/mV | 10 | 0 | 0 | 0 | 0
Excitatory PSP amplitude J_E/mV | – | – | – | – | –, 4
Ratio g = J_I/J_E | – | – | – | – | –
Synaptic delay D/ms | – | – | – | – | –
Synaptic time constant τ_syn/ms | – | – | – | – | –
Stimulus rate ν_ext/s⁻¹ | – | – | – | – | –
Stimulation period (ms) | 50–200 | 50–200 | 50–200 | 50–200 | 50–200

Figure 11. Model description according to [38], part 2.

The refractory period of 1 ms ensures that each neuron contributes at most one spike to the population rate.

Rate distribution. To compute the distribution of firing rates, we first compute the average firing rate of each neuron according to equation (24) and then construct a histogram with bin size Δr = 1 Hz from the set of all firing rates:

H_ν(r, Δr) = (1/N) Σ_{i=1}^{N} ∫_r^{r+Δr} δ(ν_i − r′) dr′   (26)

ISI distribution. Given the ordered sequence of spike times S_i of neuron i, we construct the set of interspike intervals as:

ISI_i = {t₂ − t₁, t₃ − t₂, …}_i = (τ₁, τ₂, …, τ_{n(ISI_i)})_i   (27)

The ISI distribution was then constructed by counting the number of intervals that fall in consecutive 1 ms bins of a histogram. To improve the statistics of the histogram, we combined the intervals of all recorded neurons.
H_{ISI,i}(isi, Δisi) = (1/n(ISI_i)) Σ_{τ ∈ ISI_i} ∫_{isi}^{isi+Δisi} δ(τ − τ′) dτ′   (28)

H_ISI(isi, Δisi) = (1/N) Σ_{i=1}^{N} H_{ISI,i}(isi, Δisi)   (29)

CV distribution. To compute the CV distribution, we first computed the ISI distributions for each neuron i individually, according to the procedure described above. We then determined the coefficient of variation for each neuron i according to:

CV_i = √(⟨ISI_i²⟩ − ⟨ISI_i⟩²) / ⟨ISI_i⟩   (30)

Simulation and analysis. All simulations were done with the Neural Simulation Tool NEST [39], using its Python interface PyNEST [40]. The simulation data was written to disk and analyzed offline, using the NumPy and SciPy libraries for Python.

Acknowledgements

The author would like to thank Gaute Einevoll, Håkon Enger, Moritz Helias, Edgar Körner, Birgit Kriener, Arvind Kumar, Hans-Ekkehard Plesser, and Tom Tetzlaff for discussions and comments.

References

1. Arieli A, Sterkin A, Aertsen A, Grinvald A (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273: 1868.
2. Shadlen MN, Newsome WT (1998) The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. The Journal of Neuroscience 18: 3870–96.
3. Tsodyks MV, Kenet T, Grinvald A, Arieli A (1999) The spontaneous activity of single cortical neurons depends on the underlying global functional architecture. Science 286: 1943–1946.
4. Fukushima M, Saunders RC, Leopold DA, Mishkin M, Averbeck BB (2012) Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque. Neuron 74: 899–910.
5. Beurle RL (1956) Properties of a mass of cells capable of regenerating pulses. Philosophical Transactions of the Royal Society of London, Series B 240: 55–94.
6.
Ashby WR, Von Foerster H, Walker CC (1962) Instability of pulse activity in a net with threshold. Nature 196: 561–562.
7. Abeles M (1982) Local Cortical Circuits: An Electrophysiological Study. Berlin, Heidelberg, New York: Springer Verlag.
8. van Vreeswijk C, Sompolinsky H (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274: 1724–1726.
9. van Vreeswijk C, Sompolinsky H (1998) Chaotic balanced state in a model of cortical circuits. Neural Computation 10: 1321–71.
10. Kuhn A, Aertsen A, Rotter S (2004) Neuronal integration of synaptic input in the fluctuation-driven regime. The Journal of Neuroscience 24: 2345–56.
11. Kumar A, Schrader S, Aertsen A, Rotter S (2008) The high-conductance state of cortical networks. Neural Computation 20: 1–43.
12. Amit D, Brunel N (1997) Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex 7: 237–52.
13. Brunel N (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8: 183–208.
14. Latham PE, Nirenberg S (2004) Computing and stability in cortical networks. Neural Computation 16: 1385–412.
15. Vogels T, Abbott LF (2005) Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience 25: 10786–10795.
16. Destexhe A (2009) Self-sustained asynchronous irregular states and Up-Down states in thalamic, cortical and thalamocortical networks of nonlinear integrate-and-fire neurons. Journal of Computational Neuroscience 27: 493–506.
17. Softky WR, Koch C (1993) The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. The Journal of Neuroscience 13: 334–50.
18.
Barbieri F, Brunel N (2008) Can attractor network models account for the statistics of firing during persistent activity in prefrontal cortex? Frontiers in Neuroscience 2: 114–22.
19. Barbieri F, Brunel N (2007) Irregular persistent activity induced by synaptic excitatory feedback. Frontiers in Computational Neuroscience 1: 5.
20. Griffith JS (1963) On the stability of brain-like structures. Biophysical Journal 3: 299–308.
21. Song S, Sjöström PJ, Reigl M, Nelson S, Chklovskii DB (2005) Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology 3: e68.
22. Braitenberg V, Schüz A (1991) Anatomy of the Cortex: Statistics and Geometry. Springer-Verlag, Berlin.
23. Abeles M (1991) Corticonics: Neural Circuits of the Cerebral Cortex. Cambridge University Press, 1st edition, 294 pp. doi:10.2277/0521376173.