Emergence of irregular activity in networks of strongly coupled conductance-based neurons
A. Sanzeni,^{1,2} M. H. Histed,^2 and N. Brunel^{1,3}
^1 Department of Neurobiology, Duke University, Durham, NC, USA
^2 National Institute of Mental Health Intramural Program, NIH, Bethesda, MD, USA
^3 Department of Physics, Duke University, Durham, NC, USA
Cortical neurons are characterized by irregular firing and a broad distribution of rates. The balanced state model explains these observations with a cancellation of mean excitatory and inhibitory currents, which makes fluctuations drive firing. In networks of neurons with current-based synapses, the balanced state emerges dynamically if coupling is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. When synapses are conductance-based, current fluctuations are suppressed when coupling is strong, questioning the applicability of the balanced state idea to biological neural networks. We analyze networks of strongly coupled conductance-based neurons and show that asynchronous irregular activity and broad distributions of rates emerge if synaptic efficacy is of order 1/log(K). In such networks, unlike in the standard balanced state model, current fluctuations are small and firing is maintained by a drift-diffusion balance. This balance emerges dynamically, without fine tuning, if inputs are smaller than a critical value, which depends on synaptic time constants and coupling strength, and is significantly more robust to connection heterogeneities than the classical balanced state model. Our analysis makes experimentally testable predictions of how the network response properties should evolve as input increases.

I. INTRODUCTION
Each neuron in cortex receives inputs from hundreds to thousands of pre-synaptic neurons. If these inputs were to sum to produce a large net current, the central limit theorem argues that fluctuations should be small compared to the mean, leading to regular firing, as observed during in vitro experiments under constant current injection [1, 2]. Cortical activity, however, is highly irregular, with a coefficient of variation of interspike intervals (CV of ISI) close to one [3, 4]. To explain the observed irregularity, it has been proposed that neural networks operate in a balanced state, where strong feedforward and recurrent excitatory inputs are canceled by recurrent inhibition and firing is driven by fluctuations [5, 6]. At the single neuron level, in order for this state to emerge, input currents must satisfy two constraints. First, excitatory and inhibitory currents must be fine tuned so as to produce an average input below threshold. Specifically, if K and J represent the average number of input connections per neuron and synaptic efficacy, respectively, the difference between excitatory and inhibitory presynaptic inputs must be of order 1/(KJ). Second, input fluctuations should be large enough to drive firing.

It has been shown that the balanced state emerges dynamically (without fine tuning) in randomly connected networks of binary units [7, 8] and networks of current-based spiking neurons [9, 10], provided that coupling is strong and recurrent inhibition is powerful enough to counterbalance instabilities due to recurrent excitation. However, these results have all been derived assuming that the firing of a presynaptic neuron produces a fixed amount of synaptic current, hence neglecting the dependence of synaptic current on the membrane potential, a key aspect of neuronal biophysics.
In real synapses, synaptic inputs are mediated by changes in conductance, due to the opening of synaptic receptor-channels on the membrane, and synaptic currents are proportional to the product of synaptic conductance and a driving force which depends on the membrane potential. Models that incorporate this description are referred to as 'conductance-based synapses'. Large synaptic conductances have been shown to have major effects on the stationary [11] and dynamical [12] response of single cells, and form the basis of the 'high-conductance state' [13-19] that has been argued to describe well in vivo data [20-22] (but see [23] and Discussion). At the network level, conductance modulation plays a role in controlling signal propagation [24], input summation [25], and firing statistics [26]. However, most of the previously mentioned studies rely exclusively on numerical simulations, and in spite of a few attempts at analytical descriptions of networks of conductance-based neurons [17, 27-31], an understanding of the behavior of such networks when coupling is strong is still lacking.

Here, we investigate networks of strongly coupled conductance-based neurons. We find that, for synapses of order 1/√K, fluctuations are too weak to sustain firing, questioning the relevance of the balanced state idea to cortical dynamics. Our analysis, on the other hand, shows that stronger synapses (of order 1/log(K)) generate irregular firing when coupling is strong. We characterize the properties of networks with such a scaling, showing that they match properties observed in cortex, and discuss constraints induced by synaptic time constants. The model generates qualitatively different predictions compared to the current-based model, which could be tested experimentally.

II. MODELS OF SINGLE NEURON AND NETWORK DYNAMICS
Membrane potential dynamics.
We study the dynamics of networks of leaky integrate-and-fire (LIF) neurons with conductance-based synaptic inputs. The membrane potential V_j of the j-th neuron in the network follows the equation

C_j dV_j/dt = −Σ_{A=L,E,I} g_{jA} (V_j − E_A),   (1)

where C_j is the neuronal capacitance; E_L, E_E and E_I are the reversal potentials of the leak, excitatory and inhibitory currents; while g_{jL}, g_{jE} and g_{jI} are the leak, excitatory and inhibitory conductances. Assuming instantaneous synapses (the case of finite synaptic time constants is discussed at the end of the Results section), excitatory and inhibitory conductances are given by

g_{jE,I}/g_{jL} = τ_j Σ_m a_{jm} Σ_n δ(t − t^n_m).   (2)

In Eq. (2), τ_j = C_j/g_{jL} is the single neuron membrane time constant, the a_{jm} are dimensionless measures of synaptic strength between neuron j and neuron m, and Σ_n δ(t − t^n_m) represents the sum of all the spikes generated at times t^n_m by neuron m. Every time the membrane potential V_j reaches the firing threshold θ, the j-th neuron emits a spike, its membrane potential is set to a reset value V_r and stays at that value for a refractory period τ_rp; after this time the dynamics resumes, following Eq. (1).

We use a_{jm} = a (ag) for all excitatory (inhibitory) synapses. In the homogeneous case, each neuron receives synaptic inputs from K_E = K (K_I = γK) excitatory (inhibitory) cells. In the network case, each neuron receives additional K_X = K excitatory inputs from an external population firing with Poisson statistics with rate ν_X. We use excitatory and inhibitory neurons with the same biophysical properties; hence the above assumptions imply that the firing rates of excitatory and inhibitory neurons are equal, ν = ν_E = ν_I. Models taking into account the biophysical diversity between the excitatory and inhibitory populations are discussed in Appendix D.
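To make the model concrete, the following is a minimal forward-Euler sketch of Eqs. (1)-(2) for a single neuron driven by Poisson spike trains. All parameter values below are illustrative placeholders, not the values used in the paper; in the small-a limit, each presynaptic spike shifts V by a(E_A − V), which is how the code implements the conductance jumps of Eq. (2).

```python
import numpy as np

# Forward-Euler sketch of a single conductance-based LIF neuron (Eqs. 1-2)
# driven by K excitatory and gamma*K inhibitory Poisson spike trains.
# All parameter values are illustrative placeholders, not the paper's.
rng = np.random.default_rng(0)

dt, T = 1e-4, 1.0                 # time step and duration (s)
tau_L, tau_rp = 20e-3, 2e-3       # leak time constant, refractory period (s)
E_L, E_E, E_I = -60.0, 0.0, -80.0 # reversal potentials (mV)
theta, V_r = -50.0, -60.0         # threshold and reset (mV)
K, gamma = 1000, 0.25             # in-degrees: K excitatory, gamma*K inhibitory
a, g = 0.01, 10.0                 # synaptic strengths (dimensionless)
r_E, r_I = 5.0, 5.0               # presynaptic rates (spk/s)

V, last_spike, spikes = E_L, -np.inf, []
for step in range(int(T / dt)):
    t = step * dt
    # Total presynaptic spike counts in this time step (Poisson inputs)
    nE = rng.poisson(K * r_E * dt)
    nI = rng.poisson(gamma * K * r_I * dt)
    if t - last_spike < tau_rp:
        V = V_r                    # clamp during the refractory period
        continue
    dV = -(V - E_L) * dt / tau_L   # leak term of Eq. (1)
    # Small-a jump a*(E_A - V) per incoming spike (driving-force dependence)
    dV += a * nE * (E_E - V) + a * g * nI * (E_I - V)
    V += dV
    if V >= theta:
        spikes.append(t)
        V, last_spike = V_r, t

print(len(spikes), "spikes in", T, "s")
```

Note how, unlike in a current-based model, the size of each synaptic jump shrinks as V approaches the corresponding reversal potential.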
When heterogeneity is taken into account, the above defined values of K_{E,I,X} represent the means of Gaussian distributions. We use the following single neuron parameters: τ_rp = 2 ms, θ = −… mV, V_r = −… mV, E_E = 0 mV, E_I = −… mV, E_L = −… mV, τ_j = τ_L = 20 ms. We explore various scalings of a with K and, in all cases, we assume that a ≪ 1. When a ≪ 1, an incoming spike produced by an excitatory presynaptic neuron produces a jump in the membrane potential of amplitude a(E_E − V), where V is the voltage just before spike arrival. In cortex, V ∼ −…-−… mV, hence we expect a to be of the order of 0.01.

Diffusion and effective time constant approximations.
We assume that each cell receives projections from a large number of cells (K ≫ 1), each of which provides a small contribution to the total conductance (a ≪ 1) and fires approximately as a Poisson process. Under the diffusion approximation, the conductances of Eq. (2) become

g_E/g_L = a τ_L [K r_E + √(K r_E) ζ_E],
g_I/g_L = a g τ_L [γK r_I + √(γK r_I) ζ_I],   (3)

where r_E and r_I are the firing rates of pre-synaptic E and I neurons, respectively, and ζ_E and ζ_I are independent Gaussian white noise terms with zero mean and unit variance density. In the single neuron case, we take r_E = ν_X, r_I = η ν_X, where η represents the ratio of I to E input rates. In the network case, r_E = ν_X + ν, r_I = ν, where ν_X is the external rate, while ν is the firing rate of excitatory and inhibitory neurons in the network, determined self-consistently (see below). We point out that, for some activity levels, the assumption of Poisson pre-synaptic firing made in the derivation of Eq. (3) breaks down, as neurons in the network show interspike intervals with CV significantly different from one (e.g. see Fig. 3C). However, comparisons between mean field results and numerical simulations (see Appendix E) show that neglecting non-Poissonianity (as well as other contributions discussed above Eq. (3)) generates quantitative but not qualitative discrepancies, with a magnitude that decreases with coupling strength. Moreover, in Appendix B, we show that if a ≪ 1 and ν ≪ 1/τ_rp, deviations from Poissonianity become negligible.

Using the diffusion approximation, Eq. (1) reduces to

τ dV/dt = −V + μ + σ(V) √τ ζ,   (4)

where ζ is a white noise term, with zero mean and unit variance density, while

τ⁻¹ = τ_L⁻¹ + aK(r_E + r_I gγ),
μ = τ {E_L/τ_L + aK [r_E E_E + r_I gγ E_I]},
σ²(V) = a²Kτ [r_E (V − E_E)² + g²γ r_I (V − E_I)²].   (5)

In Eq. (4), τ is an effective membrane time constant, while μ and σ²(V) represent the average and the variance of the synaptic current generated by incoming spikes, respectively.

The noise term in Eq. (4) can be decomposed into an additive and a multiplicative component.
The latter has an effect on membrane voltage statistics that is of the same order as the contribution coming from synaptic shot noise [39], a factor which has been neglected in deriving Eq. (3). Therefore, for a consistent analysis, we neglect the multiplicative component of the noise in the above derivation; this leads to an equation of the form of Eq. (4) with the substitution

σ²(V) → σ²(μ).   (6)

This approach has been termed the effective time constant approximation [39]. Note that the substitution of Eq. (6) greatly simplifies mathematical expressions, but it is not a necessary ingredient for the results presented in this paper. In fact, all our results can be obtained without having to resort to this approximation (see Appendices A, B and D).

Current-based model.
The previous definitions and results translate directly to current-based models, with the only exception that the dependency of excitatory and inhibitory synaptic currents on the membrane potential is neglected (see [10] for more details). Therefore, Eq. (1) becomes

τ_j dV_j/dt = −V_j + I_{jE} − I_{jI},   (7)

where

I_{jA} = τ_j Σ_m J_{jm} Σ_n δ(t − t^n_m)

represents the excitatory and inhibitory input currents. Starting from Eq. (7), making assumptions analogous to those discussed above and using the diffusion approximation [10], the dynamics of current-based neurons is given by an equation of the form of Eq. (4) with

τ = τ_L,
μ = τ JK [r_E − gγ r_I],
σ² = τ J²K [r_E + g²γ r_I].   (8)

Note that, unlike what happens in conductance-based models, τ is a fixed parameter and does not depend on network firing rate or external drive. Another difference between the current-based and conductance-based models is that in the latter, but not the former, model σ depends on V; as we discussed above, this difference is neglected in the main text, where we use the effective time constant approximation.

III. BEHAVIOR OF SINGLE NEURON RESPONSE FOR LARGE K

We start our analysis by investigating the effects of synaptic conductance on single neuron response. We consider a neuron receiving K (γK) excitatory (inhibitory) inputs, each with synaptic efficacy J (gJ), from cells firing with Poisson statistics with rates

r_E = ν_X, r_I = η ν_X,   (9)

and analyze its membrane potential dynamics in the frameworks of current-based and conductance-based models. In both models, the membrane potential V follows a stochastic differential equation of the form of Eq. (4); differences emerge in the dependency of τ, μ and σ on the parameters characterizing the connectivity, K and J. In particular, in the current-based model, the different terms in Eq. (8) can be written as

τ ∼ τ_curr, μ ∼ KJ μ_curr, σ ∼ √K J σ_curr,

where τ_curr, μ_curr, and σ_curr are independent of J and K.
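These scalings can be checked directly from Eqs. (5) and (8). The sketch below (illustrative parameter values, not the paper's) evaluates both sets of expressions for increasing K at fixed synaptic strength, showing that the current-based mean input grows linearly with K while the conductance-based mean saturates and its effective time constant shrinks.

```python
import numpy as np

# Numerical check of how (tau, mu, sigma) depend on K at fixed synaptic
# strength, using Eq. (8) (current-based) and Eqs. (5)-(6) (conductance-based).
# Parameter values are illustrative placeholders.
tau_L, E_L, E_E, E_I = 20e-3, -60.0, 0.0, -80.0
g, gamma, eta = 10.0, 0.25, 1.0
nu_X = 5.0
r_E, r_I = nu_X, eta * nu_X

def current_based(K, J):
    tau = tau_L                                   # fixed membrane time constant
    mu = tau * J * K * (r_E - g * gamma * r_I)    # mean input, Eq. (8)
    sigma2 = tau * J**2 * K * (r_E + g**2 * gamma * r_I)
    return tau, mu, np.sqrt(sigma2)

def conductance_based(K, a):
    tau = 1.0 / (1.0 / tau_L + a * K * (r_E + g * gamma * r_I))  # Eq. (5)
    mu = tau * (E_L / tau_L + a * K * (r_E * E_E + g * gamma * r_I * E_I))
    sigma2 = a**2 * K * tau * (r_E * (mu - E_E)**2
                               + g**2 * gamma * r_I * (mu - E_I)**2)
    return tau, mu, np.sqrt(sigma2)

for K in (100, 1000, 10000):
    print(K, current_based(K, 0.2), conductance_based(K, 0.01))
```

The qualitative difference is visible already at moderate K: the conductance-based equilibrium potential μ is pinned between the reversal potentials, whatever the coupling.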
In the conductance-based model, the efficacy of excitatory and inhibitory synapses depends on the membrane potential as J = a(E_{E,I} − V); the different terms in Eq. (4), under the assumption that Ka ≫ 1, become of order

τ ∼ τ_cond/(Ka), μ ∼ μ_cond, σ ∼ √a σ_cond.

Here, all these terms depend on parameters in a completely different way than in the current-based case. As we will show below, these differences drastically modify how the neural response changes as K and J are varied, and hence the size of J ensuring finite response for a given value of K.

The dynamics of a current-based neuron is shown in Fig. 1Ai, with parameters leading to irregular firing. Because of the chosen parameter values, the mean excitatory and inhibitory inputs approximately cancel each other, generating subthreshold average input and fluctuation-driven spikes, which leads to irregularity of firing. If all parameters are fixed while K is increased (J ∼ K⁰), the response changes drastically (Fig. 1Aii), since the mean input becomes much larger than threshold and firing becomes regular. To understand this effect, we analyze how the terms in Eq. (4) are modified as K increases. The evolution of the membrane potential in time is determined by two terms: a drift term −(V − μ)/τ, which drives the membrane potential toward its mean value μ, and a noise term σ/√τ, which leads to fluctuations around this mean value. Increasing K modifies the equilibrium value μ of the drift force and the input noise, which increase proportionally to KJ(1 − γgη) and KJ²(γg²η + 1), respectively (Fig. 1B,C).

This observation suggests that, to preserve irregular firing as K is increased, two ingredients are needed. First, the rates of excitatory and inhibitory inputs must be fine tuned to maintain a mean input below threshold; this can be achieved by choosing γgη − 1 ∼ 1/(KJ). Second, the amplitude of input fluctuations should be preserved; this can be achieved by scaling synaptic efficacy as J ∼ 1/√K. Once these two conditions are met, irregular firing is restored (Fig. 1Aiii).
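The two ingredients above can be verified numerically. The sketch below (illustrative parameter values, not the paper's) applies J = J0/√K together with the fine tuning γgη − 1 = c/(KJ) to Eq. (8), and checks that both the mean input μ and the noise σ stay of order one as K grows.

```python
import numpy as np

# Balanced-state scaling for the current-based model of Eq. (8): with
# J ~ 1/sqrt(K) and the I/E rate ratio tuned so that gamma*g*eta - 1 is of
# order 1/(K*J), both mu and sigma remain of order one as K grows.
# Values are illustrative placeholders.
tau_L, g, gamma = 20e-3, 4.0, 0.25
nu_X, J0, c = 5.0, 1.0, 2.0        # J0: base efficacy (mV); c: tuning constant

for K in (100, 1000, 10000, 100000):
    J = J0 / np.sqrt(K)                        # balanced-state synaptic scaling
    eta = (1.0 + c / (K * J)) / (gamma * g)    # fine-tuned I/E rate ratio
    r_E, r_I = nu_X, eta * nu_X
    mu = tau_L * J * K * (r_E - g * gamma * r_I)
    sigma = np.sqrt(tau_L * J**2 * K * (r_E + g**2 * gamma * r_I))
    print(K, round(mu, 3), round(sigma, 3))
```

With this tuning, μ = −τ_L ν_X c exactly, independent of K, while σ converges to a constant: exactly the balance the text describes.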
Importantly, in a network with J ∼ 1/√K, irregular firing emerges without fine tuning, since rates dynamically adjust to balance excitatory and inhibitory inputs and maintain mean inputs below threshold [7, 8].

FIG. 1.
Effects of coupling strength on the firing behavior of current-based and conductance-based neurons. (A) Membrane potential of a single current-based neuron for (i) J = 0.…, K = 10…, g = γ = 1, η such that 1 − gγη = 0.…; (ii) K = 5·10…; (iii) K = 5·10… and scaled synaptic efficacy (J ∼ 1/√K, which gives J = 0.…) and 1 − gγη = 0.01. (B, C) Effect of coupling strength on drift force and input noise in a current-based neuron. (D) Membrane potential of a single conductance-based neuron for fixed input difference (g − γη = −0.8) and (i) a = 0.…, K = 10…; (ii) K = 5·10…; (iii) K = 5·10… and scaled synaptic efficacy (a ∼ 1/√K, a = 0.…). (E, F) Effect of coupling strength on drift force and input noise in a conductance-based neuron. In panels A and D, dashed lines represent threshold and reset (black) and the equilibrium value of the membrane potential (green). In panels Aii and Dii, light purple traces represent dynamics in the absence of a spiking mechanism. Input fluctuations in C and F represent input noise per unit time, i.e. the integral of σ√τ ζ of Eq. (4) computed over an interval Δt and normalized by Δt.

We now show that this solution does not work once synaptic conductance is taken into account. The dynamics of a conductance-based neuron in response to the inputs described above is shown in Fig. 1Di. As in the current-based neuron, it features irregular firing, with mean input below threshold and spiking driven by fluctuations, and firing becomes regular for larger K, leaving all other parameters unchanged (Fig. 1Dii). However, unlike in the current-based neuron, the mean input remains below threshold at large K; regular firing is produced by large fluctuations, which saturate the response and produce spikes that are regularly spaced because of the refractory period. These observations can be understood by looking at the equation for the membrane potential dynamics: increasing K leaves invariant the equilibrium value of the membrane potential μ but increases the drift force and the input noise amplitude as Ka and √K a, respectively (Fig. 1E,F). Since the equilibrium membrane potential is fixed below threshold, response properties are determined by the interplay between drift force and input noise, which have opposite effects on the probability of spike generation. The response saturation observed in Fig. 1Dii shows that, as K increases at fixed a, fluctuations dominate over the drift force.
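This saturation, and the behavior under alternative scalings of a with K, can be illustrated with a standard first-passage (Kramers) estimate of the firing rate, ν ≈ v_max e^{−v_max²}/(τ√π), capped at the refractory limit. This is an approximation of the exact rate formula, not the paper's own expression, and the parameter values are illustrative placeholders.

```python
import numpy as np

# Kramers-type escape-rate estimate of the firing rate of a conductance-based
# neuron (moments from Eqs. (5)-(6)), under fixed a and under a ~ 1/sqrt(K).
# Values are illustrative placeholders.
tau_L, E_L, E_E, E_I, theta = 20e-3, -60.0, 0.0, -80.0, -50.0
g, gamma, eta, nu_X = 10.0, 0.25, 1.0, 5.0
r_E, r_I = nu_X, eta * nu_X
tau_rp = 2e-3

def escape_rate(K, a):
    tau = 1.0 / (1.0 / tau_L + a * K * (r_E + g * gamma * r_I))
    mu = tau * (E_L / tau_L + a * K * (r_E * E_E + g * gamma * r_I * E_I))
    sigma = np.sqrt(a**2 * K * tau * (r_E * (mu - E_E)**2
                                      + g**2 * gamma * r_I * (mu - E_I)**2))
    v_max = (theta - mu) / sigma            # distance to threshold, noise units
    nu0 = v_max * np.exp(-v_max**2) / (tau * np.sqrt(np.pi))
    return 1.0 / (tau_rp + 1.0 / nu0)       # cap at the refractory-period limit

K0, a0 = 1000, 0.005
for K in (1000, 10000, 100000):
    print(K,
          escape_rate(K, a0),                        # fixed a: rate saturates
          escape_rate(K, a0 * np.sqrt(K0 / K)))      # a ~ 1/sqrt(K): suppressed
```

The fixed-a column climbs toward the refractory limit 1/τ_rp, while the balanced-scaling column collapses toward zero: the two failure modes discussed in the text.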
On the other hand, using the scaling a ∼ 1/√K leaves the amplitude of fluctuations unchanged, but generates a restoring force of order √K (Fig. 1E) which dominates and completely abolishes firing at strong coupling (Fig. 1Diii).

The results in Fig. 1 show that the response of a conductance-based neuron when K is large depends on the balance between drift force and input noise. The scalings a ∼ O(1) and a ∼ 1/√K let one of the two contributions dominate, suggesting that an intermediate scaling could keep a balance between them. Below we derive such a scaling, showing that it preserves firing rate and CV of ISI when K becomes large.

IV. A SCALING RELATION THAT PRESERVES SINGLE NEURON RESPONSE FOR LARGE K

We analyze under what conditions the response of a single conductance-based neuron is preserved when K is large. For a LIF neuron described by Eqs. (4, 5, 6), the single cell transfer function, i.e. the dependency of the firing rate ν on the external drive ν_X, is given by [40, 41]

ν = [τ_rp + τ √π ∫_{v_min}^{v_max} dx e^{x²} (1 + erf(x))]⁻¹,   (10)

with

v(x) = (x − μ)/σ, v_min = v(V_r), v_max = v(θ).   (11)

FIG. 2.
The scaling of Eq. (14) preserves the response of a single conductance-based neuron for large K. (A) Scaling relation preserving firing in conductance-based neurons (Eq. (14), solid line); constant scaling (a ∼ K⁰, dotted line) and the scaling of the balanced state model (a ∼ 1/√K, dashed line) are shown for comparison. Colored dots indicate the values of a, K used in the subsequent panels. (B-H) Response of conductance-based neurons, for different values of coupling strength and synaptic efficacy (colored lines). The scaling of Eq. (14) preserves how the firing rate (B, C), the equilibrium value of the membrane potential (D), and the CV of the inter-spike interval distribution (E) depend on the external input rate ν_X. This invariance is achieved by increasing the drift force (F) and input fluctuations (G) in a way that weakly decreases (logarithmically in K) membrane potential fluctuations (H). Different scalings either saturate or suppress the response (B, black lines correspond to K = 10… and a values as in panel A). Parameters: a = 0.01 for K = 10…, g = 12, η = 1.…, γ = 1/….

In the biologically relevant case of a ≪
1, Eq. (10) simplifies significantly. In fact, the distance between threshold and the equilibrium membrane potential, measured in units of noise, v_max, is of order 1/√a (except for inputs ν_X ≪ 1/(aKτ_L), where it is of order 1/(a√(Kν_X τ_L)) ≫ 1/√a). Therefore v_max is large when a is small; in this limit, the firing rate is given by the Kramers escape rate [42], and Eq. (10) becomes:

ν = 1/(τ_rp + Q/ν_X),  Q = (τ̄ √π)/(√a K v̄) exp(v̄²/a),   (12)

where we have defined v̄ = √a v_max and τ̄ = aKν_X τ. The motivation to introduce v̄ and τ̄ is that they remain of order 1 in the small a limit, provided the external inputs ν_X are at least of order 1/(aKτ_L). When the external inputs are such that ν_X ≫ 1/(aKτ_L), these quantities become independent of ν_X, a and K and are given by

τ̄ = (1 + gγη)⁻¹,  v̄ = (θ − μ̄)/σ̄,
μ̄ = τ̄ (E_E + gγη E_I),
σ̄² = τ̄ [(μ̄ − E_E)² + g²γη (μ̄ − E_I)²].   (13)

The firing rate given by Eq. (12) remains finite when a is small and/or K is large if Q remains of order one; this condition leads to the following scaling relationship

K ∼ (τ̄/(√a v̄)) exp(v̄²/a),   (14)

i.e. a should be of order 1/log(K).

In Appendix C, we show that expressions analogous to Eq. (12) can be derived in integrate-and-fire neuron models which feature additional intrinsic voltage-dependent currents, as long as synapses are conductance-based and input noise is small (a ≪ 1). In these models, firing at strong coupling is again preserved with a ∼ 1/log(K), and voltage-dependent currents generate corrections to the logarithmic scaling which are negligible when coupling is strong.

Since v̄ and τ̄ vary with ν_X, Eq. (14) can be satisfied, and hence firing can be supported, only if the inputs span a small range of values, such that τ̄ and v̄ are approximately constant, or if ν_X ≫ 1/(aKτ_L). Note that, while in the strong coupling limit (i.e.
when K goes to infinity), only the second of these two possibilities can be implemented with input rates spanning physiologically relevant values, both are admissible when coupling is moderate (i.e. when K is large but finite, a condition consistent with experimental data on cortical networks [44, 45]). In what follows, with the exception of the section on finite synaptic time constants, we focus on the case ν_X ≫ 1/(aKτ_L), and investigate how different properties evolve with K using the scaling defined by Eq. (14) with v̄ and τ̄ given by Eq. (13). Importantly, all the results discussed below hold for inputs outside the region ν_X ≫ 1/(aKτ_L), as long as ν_X is at least of order 1/(aKτ_L) (a necessary condition for the derivation of Eq. (12) to be valid) and inputs span a region small enough for the variations of v̄ and τ̄ to be negligible.

In Fig. 2A, we compare the scaling defined by Eq. (14) with the a ∼ 1/√K scaling of current-based neurons. At low values of K, the values of a obtained with the two scalings are similar; at larger values of K, the synaptic strength defined by Eq. (14) decays as a ∼ 1/log(K), i.e. synapses are stronger in the conductance-based model than in the current-based model. Examples of single neuron transfer functions computed from Eq. (10) for different coupling strengths are shown in Fig. 2B,C. Responses are nonlinear at onset and close to saturation. As predicted by the theory, scaling a with K according to Eq. (14) preserves the firing rate over a region of inputs that increases with coupling strength (Fig. 2C,D), while the average membrane potential remains below threshold (Fig. 2D). The quantity v̄/√a represents the distance from threshold of the equilibrium membrane potential in units of input fluctuations; Eq.
(14) implies that this distance increases with coupling strength. When K is very large, the effective membrane time constant, which is of order τ ∼ 1/(aKν_X), becomes small and firing is driven by fluctuations that, on the time scale of this effective membrane time constant, are rare.

We next investigated whether the above scaling preserves irregular firing by analyzing the CV of interspike intervals. This quantity is given by [10]

CV² = 2πν²τ² ∫_{v_min}^{v_max} dx e^{x²} ∫_{−∞}^{x} dy e^{y²} (1 + erf(y))²   (15)

and, in the biologically relevant case of a ≪ 1 and μ < θ, reduces to (see Appendix B for details)

CV = 1 − τ_rp ν;   (16)

i.e. the CV is close to one at low rates and decays monotonically as the neuron approaches saturation. Critically, Eq. (16) depends on coupling strength only through ν, hence any scaling relation preserving the firing rate will also produce a CV of order one at low rates. We validated this result numerically in Fig. 2E.

We now investigate how Eq. (14) preserves irregular firing in conductance-based neurons. We have shown that increasing K at fixed a produces large input and membrane fluctuations, which saturate firing; the scaling a ∼ 1/√K preserves input fluctuations but, because of the strong drift force, suppresses membrane potential fluctuations, and hence firing. The scaling of Eq. (14), at every value of K, yields the value of a that balances the contributions of drift and input fluctuations, so that membrane fluctuations are of the right size to preserve the rate of threshold crossing. Note that, unlike what happens in the current-based model, both input fluctuations and drift force increase with K (Fig. 2F,G), while the membrane potential distribution, which is given by [46]

P(V) = (2ντ/σ) ∫_{v(V)}^{v_max} dx Θ(x − v(V_r)) exp[x² − v(V)²],   (17)

slowly becomes narrower (Fig. 2H). This result can be understood by noticing that, when a ≪ 1, Eq. (17) reduces to

P(V) = (1/(σ√π)) exp(−(V − μ)²/σ²).
(18)

Hence, the probability distribution becomes Gaussian when coupling is strong, with a variance proportional to σ² ∼ a. We note that, since a is of order 1/log K, the width of the distribution becomes small only for unrealistically large values of K.

V. ASYNCHRONOUS IRREGULAR ACTIVITY IN NETWORK RESPONSE AT STRONG COUPLING
We have so far considered the case of a single neuron subjected to stochastic inputs. We now show how the above results generalize to the network case, where inputs to a neuron are produced by a combination of external and recurrent inputs.

We consider networks of recurrently connected excitatory and inhibitory neurons, firing at rate ν, stimulated by an external population firing with Poisson statistics with rate ν_X. Using again the diffusion approximation, the response of a single neuron in the network is given by Eq. (10) (and hence Eq. (12)) with

r_E = ν_X + ν, r_I = ν.   (19)

If all neurons in a given population are described by the same single cell parameters and the network is in an asynchronous state in which cells fire at a constant rate, Eq. (10) provides an implicit equation whose solution is the network transfer function. Example solutions are shown in Fig. 3B (numerical validation of the mean field results is provided in Appendix E). In Appendix D, we prove that firing in the network is preserved when coupling is strong if parameters are rescaled according to Eq. (14). Moreover, we show that response nonlinearities are suppressed and the network response in the strong coupling limit (i.e. when K goes to infinity) is given, up to saturation, by

ν = ρ ν_X.   (20)

FIG. 3.
Response of networks of conductance-based neurons for large K. (A) Scaling relation defined by the self-consistency condition given by Eqs. (14) and (19) (black line); values of parameters used in panels B-D (colored dots). Constant scaling (a ∼ K⁰, dotted line) and the scaling of the balanced state model (a ∼ 1/√K, dashed line) are shown for comparison. (B, C) Firing rate and CV of ISI as a function of external input, obtained from Eqs. (10) and (15) (colored lines), with the strong coupling limit solutions of Eqs. (20) and (16) (black line). (D) Probability distribution of the membrane potential obtained from Eq. (17). In panels B-D, dotted and dashed lines represent quantities obtained with the scalings J ∼ K⁰ and J ∼ 1/√K, respectively, for the values of K and J indicated in panel A (black dots). Parameters: γ = 1/…, g = 30.

The parameter ρ, which is obtained by solving Eq. (12) self-consistently (see Appendix D for details), is the response gain in the strong coupling limit. Finally, our derivation implies that Eq. (14) preserves irregular firing and creates a probability distribution of membrane potentials whose width decreases only logarithmically as K increases (Fig. 3C,D and numerical validation in Appendix E), as in the single neuron case. While this logarithmic decrease is a qualitative difference with the current-based balanced state, in which the width stays finite in the large K limit, in practice, for realistic values of K, realistic fluctuations of the membrane potential (a few mV) can be observed in both cases.

We now turn to the question of what happens in networks with different scalings between a and K. Our analysis of single neuron response described above shows that scalings different from that of Eq. (14) fail to preserve firing for large K, as they let either input noise or drift dominate.
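For readers who want to reproduce the network transfer function, the sketch below solves the self-consistency condition of Eqs. (10) and (19) under the effective time constant approximation, finding the fixed point ν = φ(ν_X + ν, ν) by bisection. Parameter values are illustrative placeholders, not those of Fig. 3.

```python
import numpy as np
from math import erf, sqrt, pi

# Self-consistent network rate: Eq. (10) evaluated with the moments of
# Eqs. (5)-(6) and the recurrent rates of Eq. (19). Values are illustrative.
tau_L, E_L, E_E, E_I = 20e-3, -60.0, 0.0, -80.0
theta, V_r, tau_rp = -50.0, -60.0, 2e-3
K, gamma, a, g = 1000, 0.25, 0.005, 10.0

def single_rate(r_E, r_I):
    """Firing rate of one neuron, Eq. (10), by trapezoidal quadrature."""
    tau = 1.0 / (1.0 / tau_L + a * K * (r_E + g * gamma * r_I))
    mu = tau * (E_L / tau_L + a * K * (r_E * E_E + g * gamma * r_I * E_I))
    sigma = sqrt(a**2 * K * tau * (r_E * (mu - E_E)**2
                                   + g**2 * gamma * r_I * (mu - E_I)**2))
    x = np.linspace((V_r - mu) / sigma, (theta - mu) / sigma, 2000)
    f = np.exp(x**2) * (1.0 + np.array([erf(v) for v in x]))
    integral = np.sum((f[1:] + f[:-1]) * 0.5 * (x[1] - x[0]))
    return 1.0 / (tau_rp + tau * sqrt(pi) * integral)

def network_rate(nu_X):
    """Bisection on nu - phi(nu_X + nu, nu); with inhibition-dominated
    recurrence the right-hand side decreases in nu, so the root is unique."""
    lo, hi = 0.0, 1.0 / tau_rp
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if single_rate(nu_X + mid, mid) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for nu_X in (2.0, 5.0, 10.0):
    print(nu_X, round(network_rate(nu_X), 3))
```

Bisection is used instead of naive fixed-point iteration because the recurrent inhibition makes the map steep, which can cause plain iteration to oscillate.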
However, the situation in networks might be different, since recurrent interactions could in principle adjust the statistics of input currents such that irregular firing at low rates is preserved when coupling becomes strong. Thus, we turn to the analysis of the network behavior when a scaling a ∼ K^(−α) is assumed. For α ≤ 0, the dominant contribution of input noise at the single neuron level (Figs. 1 and 2) generates saturation of the response and regular firing in the network (Fig. 3). This can be understood by noticing that, for large K, the factor Q in Eq. (12) becomes negligible and the self-consistency condition defining the network rate is solved by ν = 1/τ_rp. For α > 0, the network response for large K is determined by two competing elements. On the one hand, input drift dominates and tends to suppress firing (Figs. 1 and 2). On the other hand, for the network to be stable, inhibition must dominate recurrent interactions [9]. Hence, any suppression in network activity reduces recurrent inhibition and tends to increase neural activity. When these two elements conspire to generate a finite network response, the factor Q in Eq. (12) must be of order one and v̄² ∼ a ∼ K^(−α). In this scenario, the network activity exhibits the following features (Fig. 3): (i) the mean inputs drive neurons very close to threshold (θ − μ̄ ∼ √a σ̄ ∼ K^(−α/2)); (ii) the response of the network to external inputs is linear and, up to corrections of order K^(−α), given by

ν = (E_E − θ) ν_X / [θ(1 + gγ) − E_E − gγ E_I];   (21)

(iii) firing is irregular (because of Eq. (16)); (iv) the width of the membrane potential distribution is of order √a ∼ K^(−α/2) (because of Eq. (18)). Therefore, scalings different from that of Eq. (14) can produce asynchronous irregular activity in networks of conductance-based neurons, but this leads to networks with membrane potentials narrowly distributed close to threshold, a property which seems at odds with what is observed in cortex [47-52].

VI. ROBUST LOGNORMAL DISTRIBUTION OF FIRING RATES IN NETWORKS WITH HETEROGENEOUS CONNECTIVITY
Up to this point, we have assumed a number of connections equal for all neurons. In real networks, however, this number fluctuates from cell to cell. The goal of this section is to analyze the effects of heterogeneous connectivity in networks of conductance-based neurons. We investigated numerically the effects of connection heterogeneity as follows. We chose a Gaussian distribution of the number of connections per neuron, with mean K and variance ΔK for excitatory connections, and mean γK and variance γΔK for inhibitory connections. The connectivity matrix was constructed by first randomly drawing E and I in-degrees K^i_{E,X,I} from these Gaussian distributions for each neuron, and then selecting at random K^i_{E,X,I} E/I pre-synaptic neurons. We then simulated the network dynamics and measured the distribution of rates and CV of the ISI in the population. Results for different values of CV_K = ΔK/K are shown in Fig. 4A-C.

FIG. 4. Effects of heterogeneous connectivity on the network response. (A-B) Distributions of ν and CV of ISI computed from network simulations (dots) and from the mean field analysis (A, black lines) for different values of CV_K (values are indicated by dots in panel C). (C) Δν/ν (green, left axis) and fraction of quiescent cells (brown, right axis) computed from network simulations as a function of CV_K. For CV_K ≲ CV*_K, Δν/ν increases linearly, as predicted by the mean field analysis; deviations from linear scaling emerge for CV_K ≳ CV*_K, when a significant fraction of cells becomes quiescent. The deviation from linear scaling at low CV_K is due to sampling error in estimating the firing rate from simulations. (D) CV*_K as a function of K computed from the mean field theory (green, left axis), with a rescaled according to Eq. (14). For large K, CV*_K decays proportionally to a (brown, right axis). When K is too low, the network is silent and CV*_K = 0. In panels A-C: K = 10…, g = 20, a = 1.…·10⁻…, N_E = N_X = N_I/γ = 10K, ν_X = 0.…/τ_rp. In network simulations, the dynamics was run for 20 seconds using a time step of 50 µs. Parameters in panel D are as in Fig. 3.

For small and moderate values of connection heterogeneity, increasing CV_K broadens the distribution of rates and CV of the ISI, but both distributions remain peaked around a mean rate that is close to that of homogeneous networks (Fig. 4A,B). For larger CV_K, on the other hand, the distribution of rates changes its shape, with a large fraction of neurons moving to very low rates, while others increase their rates (Fig. 4A), and the distribution of the CV of ISI becomes bimodal, with a peak at low CV corresponding to the high rate neurons, while the peak at a CV close to 1 corresponds to neurons with very low firing rates (Fig.
4B).

To characterize more systematically the change in the distribution of rates with CV_K, we measured, for each value of CV_K, the fraction of quiescent cells, defined as the fraction of cells that did not spike during 20 seconds of the simulated dynamics (Fig. 4C). This analysis shows that the number of quiescent cells, and hence the distribution of rates, changes abruptly once CV_K exceeds a critical value CV*_K. Importantly, unlike our definition of the fraction of quiescent cells, this abrupt change is a property of the network that is independent of the duration of the simulation.

To understand these numerical results, we performed a mean field analysis of the effects of connection heterogeneity on the distribution of rates (Appendix F). This analysis captures quantitatively the numerical simulations (Fig. 4A) and shows that, in the limit of small CV_K and a, rates in the network are given by

ν_i = ν exp[ Ω (CV_K / a) z_i ]   (22)

where ν is the population average in the absence of heterogeneity, z_i is a Gaussian random variable, and the prefactor Ω is independent of a, K and ν_X. The exponent in Eq. (22) represents a quenched disorder in the value of v̄_i, i.e. in the distance from threshold of the single cell µ_i in units of input noise. As shown in Appendix F, Eq. (22) implies that the distribution of rates is lognormal, a feature consistent with experimental observations [53–55] and with distributions of rates in networks of current-based LIF neurons [56]. It also implies that the width of the distribution, ∆ν/ν, should increase linearly with CV_K, a prediction which is confirmed by numerical simulations (Fig. 4C). The derivation in Appendix F also provides an explanation for the change in the shape of the distribution for larger CV_K.
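The lognormal distribution implied by Eq. (22) and the linear growth of ∆ν/ν with CV_K can be checked with a few lines of code (a minimal sketch; the values of ν, Ω and a below are illustrative placeholders, not fitted parameters):

```python
import numpy as np

# Sketch of the rate distribution implied by Eq. (22):
#   nu_i = nu * exp(Omega * (CV_K / a) * z_i),  z_i ~ N(0, 1).
# nu, Omega and a are illustrative placeholders, not fitted values.
rng = np.random.default_rng(0)
nu, Omega, a = 5.0, 1.0, 0.1      # mean rate (spk/s), prefactor, synaptic efficacy
n_neurons = 100_000

def rate_sample(cv_k):
    """Draw rates for one realization of the quenched disorder."""
    z = rng.standard_normal(n_neurons)
    return nu * np.exp(Omega * cv_k / a * z)

# The distribution is lognormal; for small CV_K its relative width
# Delta nu / nu grows linearly with CV_K (slope Omega / a).
for cv_k in (0.01, 0.02, 0.04):
    rates = rate_sample(cv_k)
    print(f"CV_K = {cv_k:.2f}  ->  std/mean = {rates.std() / rates.mean():.3f}")
```

For small CV_K the printed ratio tracks Ω CV_K/a; for larger CV_K the heavy lognormal tail makes it grow faster than linearly.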
In fact, for larger heterogeneity, the small-CV_K approximation is not valid, and fluctuations in input connectivity produce cells for which µ_i is far from θ, which are either firing at extremely low rates (µ_i < θ) or regularly (µ_i > θ). The latter generates the peak at low values of the CV of the ISI seen for large values of CV_K.

The quantity CV*_K represents the level of connection heterogeneity above which significant deviations from the asynchronous irregular state emerge, i.e. large fractions of neurons show extremely low or regular firing. Eq. (22) suggests that CV*_K should increase linearly with a. We validated this prediction with our mean field model, by computing the minimal value of CV_K at which 1% of the cells fire at a rate of 10− spk/s (Fig. 4D). Note that the derivation of Eq. (22) only assumes a to be small and does not depend on the scaling relation between a and K. On the other hand, the fact that CV*_K increases linearly with a makes the state emerging in networks of conductance-based neurons with a ∼ 1/log(K) significantly more robust to connection fluctuations than that emerging with a ∼ K^{−α}, for which CV*_K ∼ K^{−α}, and than that of current-based neurons, where CV*_K ∼ 1/√K [57]. Note that, while in randomly connected networks CV_K ∼ 1/√K, a larger degree of heterogeneity has been observed in cortical networks [51, 57–61]. Our results show that networks of conductance-based neurons could potentially be much more robust to such heterogeneities than networks of current-based neurons.

VII. COMPARISON WITH EXPERIMENTAL DATA
The relation between synaptic efficacy and number of connections per neuron has been recently studied experimentally using a culture preparation [62]. In this study, it was found that cultures in which K was larger had weaker synapses than cultures with smaller K (Fig. 5). In what follows, we compare these data with the scalings expected in networks of current-based and conductance-based neurons, and discuss implications for in vivo networks.

In the current-based model, the strength of excitatory and inhibitory post-synaptic potentials as a function of K can be written as J_E = J/√K and J_I = g J_E. In the conductance-based model, these quantities become J_E = (V − E_E)a and J_I = g(V − E_I)a, where a = a(K, v̄) is given by Eq. (14) while, for the dataset of [62], V ∼ −, J_E ∼ J_I, E_E ∼, E_I ∼ −. We fitted both models to the data (g = 1., J = 20 mV in the current-based model; g = 3.4, v̄ = 0.
08 in the conductance-based model) and computed the expected synaptic strength as a function of K (lines in Fig. 5A). Our analysis shows that the performances of the current-based and the conductance-based model in describing the data, over the range of K explored in the experiment, are similar, with the former being slightly better than the latter (root mean square error 2.2 mV vs 2.4 mV). This result is consistent with the observation made in [62] that, when fitted with a power law J ∼ K^{−β}, the data are best described by β = 0.
59 but are compatible with a broad range of values (95% confidence interval: [0.47:0.70]). Note that, even though both models give similar results for PSP amplitudes in the range of values of K present in cultures (∼), differences between the two scalings become important at larger K. For instance, for K = 10, J_E is expected to be ∼ and ∼ in the two models. In Fig. 5B, we plot the distance between µ and threshold θ in units of input fluctuations, v̄/√a, as a function of K, using the value of v̄ obtained above, and find that the expected value in vivo, where K ∼ −, is in the range 2-3. In Fig. 5C,D, we plot how the total synaptic excitatory conductance and the effective membrane time constant change as a function of K. Both quantities change significantly faster using the conductance-based scaling (g_E/g_L ∼ K/log(K); τ/τ_L ∼ log(K)/K) than what is expected from the scaling of the current-based model (g_E/g_L ∼ √K; τ/τ_L ∼ 1/√K). For K in the range 10 − and mean firing rates in the range 1-5 spk/s, the total synaptic conductance is found to be in a range from about 2 to 50 times the leak conductance, while the effective membrane time constant is found to be smaller than the membrane time constant by a factor 2 to 50. We compare these values with available experimental data in the Discussion.

VIII. EFFECT OF FINITE SYNAPTIC TIME CONSTANTS
Results shown in Fig. 5 beg the question of whether the assumption of negligible synaptic time constants we have made in our analysis is reasonable. In fact, synaptic decay time constants of experimentally recorded post-synaptic currents range from a few ms (for AMPA and GABA_A receptor-mediated currents) to tens of ms (for GABA_B and NMDA receptor-mediated currents, see e.g. [63]), i.e. they are comparable to the membrane time constant already at weak coupling, where τ ∼ τ_L is typically in the range 10-30 ms [64, 65]. In the strong coupling limit, the effective membrane time constant goes to zero, so this assumption clearly breaks down. In this section, we investigate the range of validity of this assumption, and what happens once the assumption of negligible synaptic time constants is no longer valid.

With finite synaptic time constants, the temporal evolution of the conductances of Eq. (2) is replaced by

τ_{E,I} dg^j_{E,I}/dt = −g^j_{E,I} + g_L τ_{E,I} Σ_m a_{jm} Σ_n δ(t − t^n_m) .   (23)

It follows that the single-neuron membrane potential dynamics is described by Eqs. (1,23). Here, for simplicity, we take excitatory and inhibitory synaptic currents to have the same decay time constant τ_S. Fig. 6A shows how increasing the synaptic time constant modifies the mean firing rate of single integrate-and-fire neurons in response to K (γK) excitatory (inhibitory) inputs with synaptic strength a (ga) and frequency ν_X (ην_X). The figure shows that, though the mean firing rate is close to predictions obtained with instantaneous synapses for small τ_S/τ, deviations emerge for τ_S/τ ∼
1, and firing is strongly suppressed as τ_S/τ becomes larger. To understand these numerical results, we resort again to the diffusion approximation [67], together with the effective time constant approximation [11, 68], to derive a simplified expression for the single-neuron membrane potential dynamics with finite synaptic time constant (details in Appendix G); this is given by

τ dV/dt = −(V − µ) + σ √(τ/τ_S) z ,   (24)

where τ, µ and σ are as in the case of negligible synaptic time constant (Eq. (5)), whilst z is an Ornstein-Uhlenbeck process with correlation time τ_S. It follows that, with respect to Eq. (4), input fluctuations with frequency larger than 1/τ_S are suppressed and, for large τ_S/τ, the membrane potential dynamics is given by

V(t) = µ + σ √(τ/τ_S) z(t) ,   (25)

FIG. 5.
Comparison of predictions given by the current-based and the conductance-based models in describing experimental data from cultures. (A) Strength of excitatory (EPSP) and inhibitory (IPSP) post-synaptic potentials recorded in [62], compared with best fits using scaling relationships derived from networks with current-based synapses (dashed line) and conductance-based synapses (continuous line). Root mean square (RMS) errors and best-fit parameters are: RMS = 2.2 mV, g = 1., J = 20 mV, for the current-based model; and RMS = 2.4 mV, g = 3.4, v̄ = 0.08, for the conductance-based model. (B) Value of v̄/√a predicted by the conductance-based model as a function of K. (C) Ratio between excitatory and leak conductance as a function of K, for ν_E = ν_I = ν_X = 1. spk/s (black) and ν_E = ν_I = ν_X = 5 spk/s (gray), obtained with a rescaled as in Eq. (14) (continuous line) and as 1/√K (dashed line). (D) Ratio between τ and τ_L as a function of K; parameters and scaling as in panel C.

i.e. the membrane potential is essentially slaved to a time-dependent effective reversal potential corresponding to the r.h.s. of Eq. (25) [14]. Note that Eq. (25) is valid only in the subthreshold regime. When the r.h.s. of Eq. (25) exceeds the threshold, the neuron fires a burst of action potentials whose frequency, in the strong coupling limit, is close to the inverse of the refractory period [69]. As ν_X increases, the equilibrium value µ remains constant while τ decreases, leading to a suppression of membrane fluctuations (Fig. 6D), and in turn to the suppression of response observed in Fig. 6A.

In Appendix G, we use existing analytical expansions [67, 69, 70], as well as numerical simulations, to show that neural responses obtained with finite τ_S are in good agreement with predictions obtained with instantaneous synapses as long as τ_S/τ ≲ 0.
1. It follows that the single neuron properties we discussed in the case of instantaneous synapses hold in the region of inputs for which τ_S/τ ≲ 0.1 (i.e. ν_X ≲ 0.1/aKτ_S) and the derivation of Eq. (14) is valid (i.e. ν_X is at least of order 1/aKτ_L). Thus, there is at best a narrow range of inputs for which these properties carry over to the finite synaptic time constant case. Interestingly, when biologically relevant parameters are considered (e.g. Fig. 6), inputs within this region generate firing rates that are in the experimentally observed range in cortical networks [23, 47–55]. The analysis of Appendix G also shows that, when τ_S/τ ∼
1, i.e. once the input rate ν_X is of order 1/aKτ_S, firing is suppressed exponentially. The scaling relation of Eq. (14) does not prevent this suppression, which emerges for external rates of order 1/aKτ_S ∼ log(K)/Kτ_S. Scalings of the form a ∼ K^{−α}, with α >
0, on the other hand, create a larger region of inputs for which τ_S/τ ≪ 1 at large K. We next asked if another scaling relation between a and K could prevent the suppression of neural response when τ_S is finite. The single neuron response computed in Appendix G is a nonlinear function of the input ν_X, which depends parametrically on a and K. It follows that, in order to preserve the single neuron response, a should scale differently with K for different values of ν_X. Since in cortical networks input rates, i.e. ν_X, change dynamically on a time scale much shorter than that over which plasticity can modify synaptic connections, we conclude that a biologically realistic scaling between a and K, which prevents suppression of neural response when τ_S is finite in a broad range of external inputs, does not exist. Moreover, for large K and τ_S/τ, the membrane potential dynamics (Eq. (25)) becomes independent of a. This shows that rescaling synaptic efficacy with K cannot prevent suppression of response.

We next examined the effect of the synaptic time constant on the network response. Numerically computed responses in networks of neurons with finite synaptic time constant are shown in Fig. 6B,C. The network response is close to the prediction obtained with instantaneous synapses for small τ_S/τ, and deviations emerge for τ_S/τ ∼ 1.

FIG. 6.
Effects of synaptic time constant on single neuron and network response. (A) Single neuron response as a function of input rate ν_X, computed numerically from Eqs. (1), (23) for different synaptic time constants τ_S (indicated in the bottom right of the figure). In all panels, black lines correspond to the prediction obtained with instantaneous synapses. Colored bars below the first and the fourth row indicate inputs that give 0.1 < τ_S/τ < 1, i.e. the region where deviations of the neural response from the case of instantaneous synapses emerge. Firing rates (first row) match predictions obtained for instantaneous synapses for small τ_S/τ; significant deviations and response suppression emerge for larger τ_S/τ. The effective membrane time constant (τ, second row) decreases with input rate, is independent of τ_S, and reaches values comparable to τ_S (τ_S/τ ∼ 1). The equilibrium value of the membrane potential (µ, third row) increases with input rate and is independent of τ_S (the dotted line represents the spiking threshold). The fluctuation of the membrane potential (σ_V, fourth row) has a non-monotonic relationship with input rate, and peaks at a value of ν_X for which τ is of the same order as τ_S. (B) Analogous to panel A but in the network case. Firing rates are no longer suppressed as τ_S/τ increases, but approach the response scaling predicted by Eq. (21) (dashed line). As discussed in the text, high firing rates are obtained by an increase of µ towards threshold. (C) Zoom of panel B into the neurobiologically relevant region of low rates. (D, E) Examples of membrane potential dynamics for a single neuron (D) and the network (E) in the absence of spiking mechanisms (ν_X = 5 spk/s in D and 20 spk/s in E). High frequency fluctuations are suppressed at large τ_S. In the network case, increasing τ_S reduces recurrent inhibition and produces membrane potential trajectories which are increasingly closer to firing threshold. Single neuron parameters: a = 0., K = 10, g = 8, η = 1., γ = 1/4. Network parameters: a = 0., K = 10, g = 20, γ = 1/4. Simulations were performed with the simulator BRIAN2 [66], with neurons receiving inputs from N_{EX,IX} = 10K independent Poisson units firing at rates ν_X, ην_X, in the single neuron case, or ν_X, in the network case. Network simulations used N_{E,I} = 10K excitatory and inhibitory neurons.

Unlike in the single neuron case, no suppression appears for larger τ_S/τ. This lack of suppression in the network response, analogously to the one we discussed in networks with instantaneous synapses and a ∼ K^{−α}, is a consequence of the fact that, to have stable dynamics when K is large, inhibition must dominate recurrent interactions [9]. In this regime, any change which would produce suppression of the single neuron response (e.g. an increase of ν_X or τ_S) lowers recurrent inhibition and increases the equilibrium value of the membrane potential µ (Fig. 6B,C,E). The balance between these two effects determines the network firing rate and, when τ_S/τ ≫
1, generates a response which (see derivation in Appendix G), up to corrections of order 1/√(Kτ_S), is given by Eq. (21) (dashed line in Fig. 6B). Similarly to what happens in networks with instantaneous synapses and a ∼ K^{−α}, this finite response emerges because recurrent interactions set µ very close to threshold, at a distance θ − µ ∼ 1/√K that matches the size of the membrane potential fluctuations (Eq. (25), σ√(τ/τ_S) ∼ 1/√K). In addition, the network becomes much more sensitive to connection heterogeneity, with CV*_K ∼ 1/√K. However, here the dynamics of the single neuron membrane potential is correlated over a timescale τ_S (Fig. 6E) and firing is bursty, with periods of regular spiking randomly interspersed in time. Moreover, the properties discussed here are independent of the scaling of a with K, since they always emerge once τ_S/τ ≫
1, a condition that is met for any scaling once ν_X ≫ 1/aKτ_S. The specific scaling relation, on the other hand, is important to determine the input strength at which τ_S/τ ∼ 1. Properties similar to those observed in cortex emerge if a ∼ 1/log(K), while other scalings generate network properties that are at odds with experimental data (see Tab. I for a summary). In this section, we have found that, when the synaptic time constant is taken into account, these properties are preserved in the model for low inputs. As the input increases, the structure of the network response evolves gradually and, for large inputs (ν_X ≫ 1/aKτ_S), significant deviations from the case of instantaneous synapses emerge (see Tab. I for a summary). In particular, as the input to the network increases, our analysis shows that: the membrane potential approaches threshold while its fluctuations become smaller and temporally correlated; firing becomes more bursty; the network becomes more sensitive to heterogeneity in the in-degree and, if the heterogeneity is larger than that of random networks, significant fractions of neurons become quiescent or fire regularly. These features of the model provide a list of predictions which could be tested experimentally by measuring how the membrane potential dynamics of cells in cortex evolves with the intensity of inputs to the network.

IX. DISCUSSION
In this work, we analyzed networks of strongly coupled conductance-based neurons. The study of this regime is motivated by the experimental observation that in cortex K is large, with single neurons receiving inputs from hundreds or thousands of pre-synaptic cells. We showed that the classical balanced state idea [5, 6], which was developed in the context of current-based models and features synaptic strengths of order 1/√K [7, 8], results in current fluctuations of very small amplitude, which can generate firing in networks only if the mean membrane potential is extremely close to threshold. This seems problematic, since intracellular recordings in cortex show large membrane potential fluctuations (see e.g. [47–52]). To overcome this problem, we introduced a new scaling relation which, in the case of instantaneous synaptic currents, maintains firing by preserving the balance of input drift and diffusion at the single neuron level. Assuming this scaling, the network response automatically shows multiple features that are observed in cortex in vivo: irregular firing, a wide distribution of rates, and a membrane potential with non-negligible distance from threshold and fluctuation size. When finite synaptic time constants are included in the model, we showed that these properties are preserved for low inputs, but are gradually modified as inputs increase: the mean membrane potential approaches threshold while its fluctuations decrease in size and develop non-negligible temporal correlations. These properties, which are summarized in Tab. I, provide a list of predictions that could be tested experimentally by analyzing the membrane potential dynamics as a function of input strength in cortical neurons.

When synaptic time constants are negligible with respect to the membrane time constant, our theory shows properties that are analogous to those of the classical balanced state model: linear transfer function, CV of order one, and a distribution of membrane potentials with finite width.
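The fluctuation-suppression argument can be illustrated with an order-of-magnitude sketch (our own illustration, not the paper's exact formulas): in the effective-time-constant approximation, τ ∝ 1/(aKν) and σ_V² ∝ a²Kντ, so membrane fluctuations scale as σ_V ∝ √a.

```python
import numpy as np

# sigma_V ~ sqrt(a): under the classical 1/sqrt(K) scaling, fluctuations
# vanish as K^(-1/4); under the 1/log(K) scaling they shrink only
# logarithmically, leaving fluctuation-driven firing possible at large K.
# (Order-of-magnitude proxies only; prefactors are dropped.)
for K in (1e2, 1e3, 1e4, 1e5, 1e6):
    sigma_classical = np.sqrt(1.0 / np.sqrt(K))   # a ~ 1/sqrt(K)
    sigma_log       = np.sqrt(1.0 / np.log(K))    # a ~ 1/log(K)
    print(f"K={K:.0e}  classical {sigma_classical:.3f}  log-scaling {sigma_log:.3f}")
```

The printed values make the contrast concrete: at K = 10⁶ the classical scaling leaves fluctuations nearly an order of magnitude smaller than the logarithmic one.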
However, these properties emerge from a different underlying dynamics than in the current-based model. In current-based models, the mean input current is at a distance of order one from threshold in units of input fluctuations. In conductance-based models, this distance increases with coupling strength, and firing is generated by large fluctuations at strong coupling. The different operating mechanism manifests itself in two ways: the strength of synapses needed to sustain firing, and the robustness to connection heterogeneity, as we discuss in the next paragraphs.

The scaling relation determines how strong synapses should be to allow firing at a given rate, for a given value of K. In current-based neurons, irregular firing is produced as long as synaptic strengths are of order 1/√K. In conductance-based neurons, stronger synapses are needed, with a scaling which approaches 1/log(K) for large K. We showed that both scaling relations are in agreement with data obtained from culture preparations [62], which are limited to relatively small networks, and argued that differences might be important in vivo, where K should be larger.

In current-based models, the mean input current must be set at an appropriate level to produce irregular firing; this constraint is realized by recurrent dynamics in networks with random connectivity and strong enough inhibition [7–9]. However, in networks with structural heterogeneity, with connection heterogeneity larger than 1/√K, the variability in mean input currents produces significant departures from the asynchronous irregular state, with large fractions of neurons that become silent or fire regularly [57].
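The 1/√K sensitivity of the current-based model can be made explicit with a one-line estimate (our sketch, with J_0 denoting the O(1) synaptic strength constant): an in-degree fluctuation δK_i ∼ CV_K K shifts the mean input of neuron i by

```latex
\delta\mu_i \sim J\,\delta K_i
            \sim \frac{J_0}{\sqrt{K}}\,\mathrm{CV}_K\,K
            = J_0\,\mathrm{CV}_K\,\sqrt{K},
\qquad
\frac{\delta\mu_i}{\sigma} \sim \mathrm{CV}_K\,\sqrt{K}
\;\Rightarrow\;
\mathrm{CV}_K^{*} \sim \frac{1}{\sqrt{K}} .
```

Since the input fluctuations themselves are of order σ ∼ J_0, mean-input shifts exceed the noise as soon as CV_K exceeds 1/√K, which is the tolerance quoted above.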
This problem is relevant in cortical networks [57], where significant heterogeneity of

Synaptic model | Ratio of synaptic and membrane time constants (τ_S/τ) | Synaptic strength | Membrane potential statistics | Activity structure | Heterogeneity of in-degree supported (CV*_K)
Current-based (balanced state model) | constant, independent of ν_X, a, and K | J ∼ 1/√K | θ − µ ∼ σ_V ∼ 1; τ_V ∼ τ_L | Irregular firing, CV of ISI ∼ 1 | ∼ 1/√K
Conductance-based | τ_S/τ ≪ 1 (ν_X ≪ 1/aKτ_S; always satisfied for instantaneous synapses, τ_S = 0) | a ∼ 1/log K | θ − µ ∼ σ_V ∼ √(log K); τ_V ∼ log(K)/K | Irregular firing, CV of ISI ∼ 1 | ∼ 1/log K
Conductance-based | τ_S/τ ≪ 1 (as above) | a ∼ K^{−α}, α > 0 | θ − µ ∼ σ_V ∼ K^{−α}; τ_V ∼ K^{α−1} | Irregular firing, CV of ISI ∼ 1 | ∼ K^{−α}
Conductance-based | τ_S/τ ≫ 1 (ν_X ≫ 1/aKτ_S) | any scaling | θ − µ ∼ σ_V ∼ 1/√K; τ_V ∼ τ_S | Irregular bursting | ∼ 1/√K

TABLE I.
Overview of networks of current-based and conductance-based neurons.
Synaptic strength and time constants strongly affect response properties in networks of conductance-based neurons. Properties similar to what is observed in cortex emerge in these networks if a ∼ 1/log K and input rates are lower than a critical value, which is fixed by the synaptic time constant and coupling strength. The model predicts that these properties should gradually mutate as the input to the network increases and, for large inputs, should coincide with those indicated in the last line of the table. In the table, the different quantities related to the membrane potential represent: the mean distance from threshold (θ − µ); the size of temporal fluctuations (σ_V); the membrane potential correlation time constant (τ_V).

in-degrees has been reported [51, 58–61], and different mechanisms have been proposed to solve it [57]. Here we showed that networks of conductance-based neurons also generate irregular activity without any need for fine tuning, and furthermore can support irregular activity with substantial structural heterogeneity, up to order 1/log(K). Therefore, these networks are more robust to connection heterogeneity than the current-based model, and do not need the introduction of additional mechanisms to sustain the asynchronous irregular state.

The strength of coupling in a network, both in the current-based model [71, 72] and in the conductance-based model (e.g. Fig. 3), determines the structure of its response and hence the computations it can implement. Recent theoretical work, analyzing experimental data in the framework of current-based models, has suggested that cortex operates in a regime of moderate coupling [44, 45], where response nonlinearities are prominent. In conductance-based models, the effective membrane time constant can be informative about the strength of coupling in a network, as it decreases with coupling strength.
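This last point can be made concrete with a small sketch (our illustration; the τ_L and g_E values are assumptions, while the g_I ≈ 3 g_L figure is the in vivo estimate from [21] discussed in the text):

```python
# Effective membrane time constant of a conductance-based neuron:
#   tau = tau_L * g_L / (g_L + g_E + g_I).
# Stronger coupling -> larger synaptic conductances -> faster membrane.
tau_L = 20.0            # leak membrane time constant, ms (illustrative)
g_L = 1.0               # leak conductance (reference unit)
g_E, g_I = 1.0, 3.0     # synaptic conductances in units of g_L; g_I ~ 3 g_L as in [21]
tau = tau_L * g_L / (g_L + g_E + g_I)
print(tau, tau / tau_L)  # 4.0 ms, i.e. a 5-fold speed-up
```

A limited conductance increase of this kind yields only a moderate speed-up, which is the sense in which the recordings argue for moderate rather than extreme coupling.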
Results from in vivo recordings in cat parietal cortex [21] showed evidence that the single neuron response is sped up by network interactions. In particular, the measurements are compatible with an inhibitory conductance approximately 3 times larger than the leak conductance, and support the idea that cortex operates in a "high-conductance state" [22] (but see [23] and the discussion below). This limited increase in conductance supports the idea of moderate coupling in cortical networks, in agreement with what was found in previous work [44, 45].

When the synaptic time constant is much larger than the membrane time constant, we showed that, regardless of synaptic strength, the size of membrane potential fluctuations decreases and firing in the network is preserved by a reduction of the distance from threshold of the mean membrane potential. Moreover, the robustness to heterogeneity in connection fluctuations decreases substantially (the maximum supported heterogeneity becomes of order 1/√K) and the membrane potential dynamics becomes correlated over a time scale fixed by the synaptic time constant. For very strong coupling, the regime of large synaptic time constant is reached already at low input rates. In the case of moderate coupling, which is consistent with experimental data on cortical networks [44, 45], the network response at low rates is well approximated by that of networks with instantaneous synapses, and the regime of large synaptic time constant is reached gradually, as the input to the network increases (Fig. 6). This observation provides a list of predictions on how the properties of cortical networks should evolve with input strength (summary in Tab. I) that are testable experimentally.

Experimental evidence suggests that the response to multiple inputs in cortex is non-linear (for an overview, see [73]).
Such nonlinearities, which are thought to be fundamental to perform complex computations, cannot be captured by the classical balanced state model, as it features a linear transfer function [7, 8]. Alternative mechanisms have been proposed [71, 73, 74], but their biophysical foundations [71, 73] or their ability to capture experimentally observed nonlinearities [74] are still not fully understood. We have recently shown [72] that, in networks of current-based spiking neurons, nonlinearities compatible with those used in [71, 73] to explain the phenomenology of input summation in cortex emerge at moderate coupling. Here we have shown that, as in the case of networks of current-based neurons [72], nonlinear responses appear in networks of conductance-based neurons at moderate coupling, both at response onset and close to single neuron saturation. In addition, we have found that synaptic time constants provide an additional source of nonlinearity, with nonlinear responses emerging as the network transitions between the two regimes described above. A full classification of the nonlinearities generated in these networks is outside the scope of this work, but could be performed by generalizing the approach developed in [72].

Recent whole cell recordings have reported that an intrinsic voltage-gated conductance, whose strength decreases with membrane potential, contributes to the modulation of the neuronal conductance of cells in primary visual cortex of awake macaques and anesthetized mice [23]. During spontaneous activity, this intrinsic conductance is the dominant contribution to the cell conductance and drives its (unexpected) decrease with increased depolarization. During activity driven by sensory stimuli, on the other hand, modulations coming from synaptic interactions overcome the effect of the intrinsic conductance, and the neuronal conductance increases with increased depolarization.
Our analysis shows that voltage-dependent currents, such as those produced by voltage-gated channels [23] or during spike generation [43], affect quantitatively, but not qualitatively, the single neuron response and the scaling relation allowing firing. Therefore, the results we described in this contribution seem to be a general property of networks of strongly coupled integrate-and-fire neurons with conductance-based synapses.

Understanding the dynamical regime of operation of the cortex is an important open question in neuroscience, as it constrains which computations can be performed by a network [71]. Most theories of neural networks have been derived using rate models or current-based spiking neurons. Our work provides the first theory of the dynamics of strongly coupled conductance-based neurons; it can be easily related to measurable quantities because of its biological detail, and it suggests predictions that could be tested experimentally.

ACKNOWLEDGMENTS
Supported by the NIMH Intramural Research Program and by NIH BRAIN U01 NS108683 (to M.H. and N.B.). We thank Alex Reyes for providing his data, and Vincent Hakim and Magnus Richardson for comments on the manuscript. This work used the computational resources of the NIH HPC Biowulf cluster (http://hpc.nih.gov) and the Duke Compute Cluster (https://rc.duke.edu/dcc).

[1] Rodney J. Douglas and Kevan A. C. Martin. A functional microcircuit for cat visual cortex. Journal of Physiology, 440:735–769, 1991.
[2] ZF Mainen and TJ Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268(5216):1503–1506, 1995.
[3] WR Softky and C Koch. The highly irregular firing of cortical cells is inconsistent with temporal integration of random epsps. Journal of Neuroscience, 13(1):334–350, 1993.
[4] Albert Compte, Christos Constantinidis, Jesper Tegnér, Sridhar Raghavachari, Matthew V. Chafee, Patricia S. Goldman-Rakic, and Xiao-Jing Wang. Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task. Journal of Neurophysiology, 90(5):3441–3454, 2003. PMID: 12773500.
[5] Michael N. Shadlen and William T. Newsome. Noise, neural codes and cortical organization. Current Opinion in Neurobiology, 4(4):569–579, 1994.
[6] Michael N. Shadlen and William T. Newsome. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. Journal of Neuroscience, 18(10):3870–3896, 1998.
[7] C van Vreeswijk and H Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724–6, 1996.
[8] C van Vreeswijk and H Sompolinsky. Chaotic balanced state in a model of cortical circuits. Neural Computation, 10(6):1321–71, 1998.
[9] D J Amit and N Brunel. Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex. Cerebral Cortex (New York, N.Y.: 1991), 7(3):237–252, 1997.
[10] N Brunel.
Dynamics of sparsely connected networks of excitatory and inhibitory neurons. Journal of Computational Neuroscience, 8:183–208, 2000.
[11] Magnus J E Richardson. Effects of synaptic conductance on the voltage distribution and firing rate of spiking neurons. Physical Review E, 69(5 Pt 1):051918, 2004.
[12] Charles Capaday and Carl van Vreeswijk. Direct control of firing rate gain by dendritic shunting inhibition. Journal of Integrative Neuroscience, 05(02):199–222, 2006.
[13] A Destexhe, M Rudolph, J M Fellous, and T J Sejnowski. Fluctuating synaptic conductances recreate in vivo-like activity in neocortical neurons. Neuroscience, 107(1):13–24, 2001.
[14] Michael Shelley, David McLaughlin, Robert Shapley, and Jacob Wielaard. States of high conductance in a large-scale model of the visual cortex. Journal of Computational Neuroscience, 13(2):93–109, Sep 2002.
[15] M Rudolph and A Destexhe. Characterization of subthreshold voltage fluctuations in neuronal membranes. Neural Comput, 15(11):2577–2618, Nov 2003.
[16] Alexander Lerchner, Mandana Ahmadi, and John Hertz. High-conductance states in a mean-field cortical network model. Neurocomputing, 58-60:935–940, 2004. Computational Neuroscience: Trends in Research 2004.
[17] Hamish Meffin, Anthony N. Burkitt, and David B. Grayden. An analytical model for the "large, fluctuating synaptic conductance state" typical of neocortical neurons in vivo. Journal of Computational Neuroscience, 16(2):159–175, 2004.
[18] Michael Rudolph, Joe Guillaume Pelletier, Denis Paré, and Alain Destexhe. Characterization of synaptic conductances and integrative properties during electrically induced eeg-activated states in neocortical neurons in vivo. Journal of Neurophysiology, 94(4):2805–2821, 2005. PMID: 16014785.
[19] A. Kumar, S. Schrader, A. Aertsen, and S. Rotter. The high-conductance state of cortical networks. Neural Computation, 20(1):1–43, 2008.
[20] D Pare, E Shink, H Gaudreau, A Destexhe, and E J Lang.
Impact of spontaneous synaptic activity on the resting properties of cat neocortical pyramidal neurons in vivo. Journal of Neurophysiology, 79(3):1450–1460, 1998.
[21] Alain Destexhe and Denis Paré. Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo. Journal of Neurophysiology, 81(4):1531–1547, 1999.
[22] Alain Destexhe, Michael Rudolph, and Denis Paré. The high-conductance state of neocortical neurons in vivo. Nature Reviews Neuroscience, 4(9):739–751, 2003.
[23] Baowang Li, Brandy N. Routh, Daniel Johnston, Eyal Seidemann, and Nicholas J. Priebe. Voltage-gated intrinsic conductances shape the input-output relationship of cortical neurons in behaving primate V1. Neuron, 107(1):185–196.e4, 2020.
[24] Tim P. Vogels and L. F. Abbott. Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience, 25(46):10786–10795, 2005.
[25] Mark H. Histed. Feedforward inhibition allows input summation to vary in recurrent cortical networks. eNeuro, 2018.
[26] Stefano Cavallari, Stefano Panzeri, and Alberto Mazzoni. Comparison of the dynamics of neural interactions between current-based and conductance-based integrate-and-fire recurrent networks. Frontiers in Neural Circuits, 8:12, 2014.
[27] Nicolas Brunel and Xiao-Jing Wang. Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. Journal of Computational Neuroscience, 11(1):63–85, 2001.
[28] Yann Zerlaut, Sandrine Chemla, Frederic Chavane, and Alain Destexhe. Modeling mesoscopic cortical dynamics using a mean-field model of conductance-based networks of adaptive exponential integrate-and-fire neurons. Journal of Computational Neuroscience, 44(1):45–61, 2018.
[29] Christopher Ebsch and Robert Rosenbaum. Imbalanced amplification: A mechanism of amplification and suppression from local imbalance of excitation and inhibition in cortical circuits.
PLOS Computational Biology, 14(3):1–28, 2018.
[30] Cristiano Capone, Matteo di Volo, Alberto Romagnoni, Maurizio Mattia, and Alain Destexhe. State-dependent mean-field formalism to model different activity states in conductance-based networks of spiking neurons. Phys. Rev. E, 100:062413, 2019.
[31] Matteo di Volo, Alberto Romagnoni, Cristiano Capone, and Alain Destexhe. Biologically realistic mean-field models of conductance-based networks of spiking neurons with adaptation. Neural Computation, 31(4):653–680, 2019.
[32] Randy M. Bruno and Bert Sakmann. Cortex is driven by weak but synchronously active thalamocortical synapses. Science, 312(5780):1622–1627, 2006.
[33] Henry Markram, Joachim Lubke, Michael Frotscher, and Bert Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213–215, 1997.
[34] Per Jesper Sjöström, Gina G. Turrigiano, and Sacha B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32(6):1149–1164, 2001.
[35] Carl Holmgren, Tibor Harkany, Björn Svennenfors, and Yuri Zilberter. Pyramidal cell communication within local networks in layer 2/3 of rat neocortex. Journal of Physiology, 551(1):139–153, 2003.
[36] Sandrine Lefort, Christian Tomm, J. C. Floyd Sarria, and Carl C. H. Petersen. The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex. Neuron, 61(2):301–316, 2009.
[37] Rodrigo Perin, Thomas K. Berger, and Henry Markram. A synaptic organizing principle for cortical neuronal groups. Proceedings of the National Academy of Sciences, 108(13):5419–5424, 2011.
[38] Xiaolong Jiang, Shan Shen, Cathryn R. Cadwell, Philipp Berens, Fabian Sinz, Alexander S. Ecker, Saumil Patel, and Andreas S. Tolias. Principles of connectivity among morphologically defined cell types in adult neocortex. Science, 350(6264):aac9462, 2015.
[39] Magnus J. E. Richardson and Wulfram Gerstner.
Synaptic shot noise and conductance fluctuations affect the membrane voltage with equal significance. Neural Computation, 17(4):923–947, 2005.
[40] Arnold J. F. Siegert. On the first passage time probability problem. Phys. Rev., 81:617–623, 1951.
[41] Luigi M. Ricciardi. Diffusion Processes and Related Topics in Biology. Springer-Verlag, Berlin, 1977.
[42] Crispin Gardiner. Stochastic Methods. Springer, 4th edition, 2009.
[43] Nicolas Fourcaud-Trocmé, David Hansel, Carl van Vreeswijk, and Nicolas Brunel. How spike generation mechanisms determine the neuronal response to fluctuating inputs. Journal of Neuroscience, 23(37):11628–11640, 2003.
[44] Alessandro Sanzeni, Bradley Akitake, Hannah C. Goldbach, Caitlin E. Leedy, Nicolas Brunel, and Mark H. Histed. Inhibition stabilization is a widespread property of cortical networks. eLife, 9:e54875, 2020.
[45] Yashar Ahmadian and Kenneth D. Miller. What is the dynamical regime of cerebral cortex? arXiv, page 1908.10101, 2019.
[46] Nicolas Brunel and Vincent Hakim. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 11(7):1621–1671, 1999.
[47] Jeffrey Anderson, Ilan Lampl, Iva Reichova, Matteo Carandini, and David Ferster. Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neuroscience, 3(6):617–621, 2000.
[48] Sylvain Crochet and Carl C. H. Petersen. Correlating whisker behavior with membrane potential in barrel cortex of awake mice. Nature Neuroscience, 9(5):608–610,

Appendix A: Calculations in the multiplicative noise case
In the main text, we analyzed the distribution of membrane potential, the firing rate and the CV using the effective time constant approximation, which neglects the dependence of the noise amplitude on the membrane potential. This approximation is motivated by the fact that corrections to it are of the same order as shot noise corrections to the diffusion approximation used to describe synaptic inputs [75]. In this section, we derive results without resorting to the effective time constant approximation (i.e. keeping the voltage dependence of the noise term), and show that the results derived in the main text remain valid, even though abandoning the approximation complicates the calculations. The inclusion of shot noise corrections is outside the scope of this contribution.
1. Equations for arbitrary drift and diffusion terms
In this section, we compute the probability distribution of the membrane potential, the firing rate, and the CV of the ISI of a neuron whose membrane potential follows the equation

\frac{dV}{dt} = A(V) + B(V)\,\zeta. \quad (A1)

Eq. (4) of the main text is a special form of Eq. (A1) with

A(V) = \frac{\mu - V}{\tau}, \qquad B(V) = \frac{\sigma(V)}{\sqrt{\tau}}. \quad (A2)

The Fokker-Planck equation associated to Eq. (A1), in the Stratonovich regularization scheme, is given by \partial P/\partial t = -\partial J/\partial V, where P is the probability of finding a neuron with membrane potential V and J is the corresponding probability current, given by

J = \left( A + \frac{1}{2} B \frac{\partial B}{\partial V} \right) P - \frac{1}{2}\frac{\partial (B^2 P)}{\partial V}. \quad (A3)

We are interested in the stationary behavior of the system, in which P does not depend on time and the current J is piecewise constant. In particular, for V between the reset potential V_r and the activation threshold \theta, J is equal to the neuron firing rate \nu, and the normalization condition implies

\int_{-\infty}^{\theta} P(V)\,dV + \nu\,\tau_{rp} = 1,

where \tau_{rp} is the refractory period. To derive the probability distribution of the membrane potential, we introduce in Eq. (A3) the integrating factor

W(V) = \exp\left[ -\int^{V} du\, \frac{2A(u) + B(u)\,\partial B(u)/\partial u}{B^2(u)} \right]

and obtain

-2\nu\, W(V)\,\theta(V - V_r) = \frac{\partial}{\partial V}\left[ W(V)\, B^2(V)\, P(V) \right].

Using the boundary condition P(\theta) = 0, we find

P(V) = \frac{2\nu}{W(V)\, B^2(V)} \int_{V}^{\theta} du\, W(u)\,\theta(u - V_r) \quad (A4)

and

\frac{1}{\nu} = \tau_{rp} + 2 \int_{-\infty}^{\theta} \frac{dx}{W(x)\, B^2(x)} \int_{x}^{\theta} du\, W(u)\,\theta(u - V_r).

Integrating by parts, we obtain

\frac{1}{\nu} = \tau_{rp} + 2 \int_{V_r}^{\theta} dv\, W(v) \int_{-\infty}^{v} \frac{dx}{W(x)\, B^2(x)}. \quad (A5)

This solution has been obtained in general form in [40] and for the specific form of Eq. (A2) in [11]. We now compute the coefficient of variation of the interspike interval. The moments T_k of the interspike intervals of the stochastic process defined by Eq.
(A1) are given by (see [42])

\frac{1}{2} B^2(x)\, \frac{d^2 T_k(x)}{dx^2} + \left( A(x) + \frac{1}{2} B(x) \frac{\partial B(x)}{\partial x} \right) \frac{d T_k(x)}{dx} = -k\, T_{k-1}(x)

with boundary conditions

T_k(\theta) = 0, \qquad \frac{dT_k(b)}{dx} = 0,

i.e. \theta is an absorbing boundary and b is a reflecting boundary (we will then consider the limit b \to -\infty). The general solution of an equation of the form

\frac{d^2 f(x)}{dx^2} + P(x)\, \frac{df(x)}{dx} = Q(x) \quad (A6)

is

f(x) = \int_{\theta}^{x} dt \int_{-\infty}^{t} dz\, Q(z) \exp\left( \int_{z}^{t} dw\, P(w) \right).

For T_1(x) we have

P(x) = \frac{2A(x) + B(x)\,\partial B(x)/\partial x}{B^2(x)}, \qquad Q(x) = -\frac{2}{B^2(x)}.

For T_2(x) we look for a solution of the form T_2(x) = T_1^2(x) + R(x) and find that R obeys an equation of the form of Eq. (A6) with

P(x) = \frac{2A(x) + B(x)\,\partial B(x)/\partial x}{B^2(x)}, \qquad Q(x) = -2\left( \frac{dT_1(x)}{dx} \right)^2.

Combining the previous results, the CV of the ISI is obtained as

CV^2 = \frac{R(V_r)}{T_1^2(V_r)}; \quad (A7)

the explicit expression of the CV is given in the following section. Eqs. (17), (10) and (15) of the main text have been obtained from Eqs. (A4), (A5) and (A7) using Eq. (A2).
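For the constant-noise special case of Eq. (A2) (the effective time constant approximation of the main text, where B = \sigma/\sqrt{\tau} is independent of V), Eq. (A5) reduces to the classical Siegert formula [40]. As a quick numerical sanity check, the sketch below evaluates it with a plain trapezoidal rule; all parameter values are illustrative and not taken from the paper.

```python
import math

def siegert_rate(mu, sigma, tau, theta, v_r, tau_rp, n=2000):
    """Firing rate from Eq. (A5) with A(V) = (mu - V)/tau and constant B = sigma/sqrt(tau).

    For these drift/diffusion terms W(V) = exp((V - mu)^2 / sigma^2), and Eq. (A5)
    becomes, in the rescaled variable u = (V - mu)/sigma, the standard result
    1/nu = tau_rp + tau*sqrt(pi) * int_{(v_r-mu)/sigma}^{(theta-mu)/sigma} e^{u^2} (1 + erf(u)) du.
    """
    lo, hi = (v_r - mu) / sigma, (theta - mu) / sigma
    du = (hi - lo) / n
    acc = 0.0
    for i in range(n + 1):
        u = lo + i * du
        w = 0.5 if i in (0, n) else 1.0   # trapezoid weights
        acc += w * math.exp(u * u) * (1.0 + math.erf(u)) * du
    return 1.0 / (tau_rp + tau * math.sqrt(math.pi) * acc)

# Illustrative numbers (not from the paper): threshold 20 mV, reset 10 mV, tau = 20 ms.
nu = siegert_rate(mu=15.0, sigma=5.0, tau=0.02, theta=20.0, v_r=10.0, tau_rp=0.002)
```

As expected for a LIF neuron, the rate returned grows monotonically with the mean input µ.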
2. Equations for conductance-based LIF neurons
Starting from Eqs. (4,5) of the main text, we write the different terms as

\tau^{-1} = \tau_L^{-1} + aK\,\omega^{-1}, \qquad \mu = \tau\left\{ E_L/\tau_L + aK\left[ r_E E_E + r_I g\gamma E_I \right] \right\}, \qquad \sigma^2(V) = \frac{a^2 K \tau}{\chi}\left[ (V - E_S)^2 + E_D^2 \right], \quad (A8)

where, to shorten the expressions, we have introduced two auxiliary variables with time dimension,

\omega^{-1} = r_E + r_I g\gamma, \qquad \chi^{-1} = r_E + r_I g^2\gamma, \quad (A9)

FIG. 7.
Drift and diffusion terms of Eq. (4) as a function of voltage. (A) Input drift as a function of membrane potential V produced with both inhibitory and excitatory inputs (black line), excitatory inputs only (red dotted line), or inhibitory inputs only (blue dotted line). The drift term decreases monotonically with V and is zero at V = \mu, which is a stable fixed point of the deterministic dynamics. (B) The noise variance is quadratic in V; its minimum, at V = E_S, is equal to E_D \sqrt{a^2 K/\chi}. Note that the minimum amplitudes of drift and variance are obtained at different values of V.

as well as two variables with voltage dimensions,

E_S = \chi \left( r_E E_E + r_I g^2\gamma E_I \right), \qquad E_D = \chi\, g \sqrt{\gamma\, r_E r_I}\, (E_E - E_I). \quad (A10)

The terms -(V - \mu)/\tau and \sigma(V)\zeta/\sqrt{\tau} of Eq. (4) represent the input drift and noise to the membrane dynamics, respectively. The voltage dependence of these terms is sketched in Fig. 7. In the large K limit, the different terms in Eq. (4) scale as

\tau \sim \frac{\omega}{aK}, \qquad \mu \sim \omega\left( r_E E_E + r_I g\gamma E_I \right), \qquad \sigma\sqrt{\tau} \sim \frac{\omega}{\sqrt{\chi K}} \sqrt{(V - E_S)^2 + E_D^2}, \quad (A11)

while the values of \omega, \mu, E_S, and E_D are independent of K. It follows that the noise term \sigma\sqrt{\tau} and the time constant \tau in Eq. (4) become small in the strong coupling limit. This result is analogous to what we obtained in the main text with the effective time constant approximation, since this approximation does not change how these terms scale with a and K. We now insert the drift and diffusion terms of the conductance-based LIF neuron in Eqs.
(A4), (A5), and (A7), and obtain

P(V) = \frac{2\nu\chi E_D\, e^{-F(u(V))/a}}{a^2 K \left[ (V - E_S)^2 + E_D^2 \right]} \int_{u(V)}^{v_{max}} dx\, \theta(x - v_{min})\, e^{F(x)/a}, \quad (A12)

\frac{1}{\nu} = \tau_{rp} + \frac{2\chi}{a^2 K} \int_{v_{min}}^{v_{max}} dv \int_{-\infty}^{v} \frac{dx}{x^2 + 1} \exp\left[ \frac{F(v) - F(x)}{a} \right], \quad (A13)

and

CV^2 = \frac{8 \chi^2 \nu^2}{a^4 K^2} \int_{v_{min}}^{v_{max}} dv \int_{-\infty}^{v} dz\, \exp\left[ \frac{F(v) - F(z)}{a} \right] \left\{ \int_{-\infty}^{z} \frac{dw}{w^2 + 1} \exp\left[ \frac{F(z) - F(w)}{a} \right] \right\}^2 \quad (A14)

where

F(x) = \frac{\chi}{aK\tau} \left[ \left( 1 - \frac{a^2 K \tau}{2\chi} \right) \log(x^2 + 1) - 2\alpha \arctan(x) \right], \qquad u(V) = \frac{V - E_S}{E_D}, \qquad v_{min} = u(V_r), \qquad v_{max} = u(\theta), \qquad \alpha = u(\mu). \quad (A15)

Eqs. (A12) and (A13) are analogous to those derived in [11]. To simplify the following analysis, we will neglect the contribution of the term a^2 K \tau / (2\chi), which derives from the regularization scheme. This assumption is justified by the fact that, for large K, \tau \sim 1/aK, so the factor a^2 K \tau / (2\chi) is of order a \ll 1.

Appendix B: Calculations in the strong coupling regime - Single neurons
In the main text, we derived a simplified expression for the single neuron response, neglecting the dependence of the noise on the membrane potential. In this section we generalize this result to the case in which the full noise expression is considered. We compute simplified expressions of the single neuron transfer function and CV, both in the sub-threshold regime µ < θ and in the supra-threshold regime µ > θ. These expressions are validated numerically in Fig. 8 and used in the last part of this section to define a scaling relation between a and K which preserves single neuron firing in the strong coupling limit. FIG. 8.
Response of a single conductance-based neuron to noisy inputs.
Estimates of the firing rate (A, B, E, F), µ (C, G) and CV (D, H) obtained by numerical integration of Eqs. (A13), (13) and (A14) for different values of a and K (colored dots). In the two regimes µ < θ (first row) and µ > θ (second row), the transfer function saturates as K increases. Note that the same change in a has a more drastic effect if µ < θ; this is due to the exponential dependence that appears in Eq. (B6). The approximated expressions (continuous lines) capture the properties of the transfer function (A, Eq. (B6) and E, Eq. (B4)) and of the CV (D, Eq. (B17) and H, Eq. (B9)). For small inputs (F), Eq. (B4) fails to describe the transfer function for some values of K because the corresponding µ is below threshold. Simulation parameters are: g = 12; γ = 1/…; η = 1.…
1. Single neuron transfer function at strong coupling
The starting point of our analysis is the observation that the integrand in Eq. (A13) depends exponentially on 1/a ≫ 1. This suggests performing the integration through a perturbative expansion of the exponent. We will show below that, since the exponent has a stationary point at x = v = α (see Fig. 9), the integration gives two qualitatively different results depending on whether α is larger or smaller than the upper bound of the integral, v_max. Moreover, since the condition α ≷ v_max corresponds to µ ≷ θ, the two behaviors correspond to the supra- and sub-threshold regimes, respectively. FIG. 9.
Graphical representation of the exponent in Eq. (A13). The function F(v) - F(x) is stationary at x = v = α; this point is a maximum in x and a minimum in v. Parameters are as in Fig. 7; in this figure, α = 1.…

Supra-threshold regime v_max < α (θ < µ)

The exponent in Eq. (A13) is negative for every value of x, except at x = v where it is zero. The integral in x can be written as

I = \int_{-\infty}^{v} dx\, g(x)\, e^{f_v(x)/a} = \int_{-\infty}^{v} dx\, g(x)\, \exp\left[ \frac{1}{a}\left( f'_v(v)(x - v) + \frac{1}{2} f''_v(v)(x - v)^2 + \dots \right) \right] \quad (B1)

where f_v(x) = F(v) - F(x). With a change of variable z = (x - v)/a we obtain

I = a \int_{-\infty}^{0} dz\, g(v + az)\, e^{f'_v(v)\, z + \frac{a}{2} f''_v(v)\, z^2 + \dots}. \quad (B2)

Neglecting all the terms of order a we get

I = a\, \frac{g(v)}{f'_v(v)}. \quad (B3)

Performing the integration in v we obtain

\frac{1}{\nu} = \tau_{rp} + \tau \log\left( \frac{\mu - V_r}{\mu - \theta} \right). \quad (B4)

Eq. (B4) is the transfer function of a deterministic conductance-based neuron with the addition of the refractory period. This is not surprising, since the noise term becomes negligible compared to the mean inputs in the small a limit. In Fig. 8E we show that Eq. (B4) gives a good description of the transfer function predicted by the mean field theory in the supra-threshold regime.

Sub-threshold regime v_max > α (θ > µ)

First we consider α < v_min (µ < V_r). For every value of v, the integral in x in Eq. (A13) has a maximum in the integration interval, hence it can be performed through the saddle-point method and gives

\frac{1}{\nu} - \tau_{rp} = \sqrt{ \frac{4\pi \chi \tau}{a^2 K (\alpha^2 + 1)} } \int_{v_{min}}^{v_{max}} dv\, \exp\left[ \frac{F(v) - F(\alpha)}{a} \right]. \quad (B5)

In the last equation, the exponent in the integrand has a minimum at v = α and is maximal at v = v_max; we expand the exponent around v = v_max and, keeping terms up to first order, obtain

\frac{1}{\nu} - \tau_{rp} = \tau \sqrt{ \frac{\pi a^2 K \tau}{\chi (\alpha^2 + 1)} }\; \frac{v_{max}^2 + 1}{|v_{max} - \alpha|}\; \exp\left[ \frac{F(v_{max}) - F(\alpha)}{a} \right]. \quad (B6)

In the regime v_min < α < v_max, the integral in v of Eq.
(A13) can be divided into three parts,

\int_{v_{min}}^{v_{max}} dv = \int_{v_{min}}^{\alpha - \epsilon} dv + \int_{\alpha - \epsilon}^{\alpha + \epsilon} dv + \int_{\alpha + \epsilon}^{v_{max}} dv; \quad (B7)

the third integral is analogous to the case α < v_min, hence it has an exponential dependence on the parameters and dominates the other terms. In Fig. 8A we show that Eq. (B6) gives a good description of the transfer function predicted by the mean field theory for µ < θ.
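The key step used repeatedly in (B1)-(B3) is a boundary Laplace approximation: when the exponent is maximal at the upper integration limit, \int_{-\infty}^{v} g(x)\, e^{[F(v)-F(x)]/a} dx \approx a\, g(v)/|F'(v)| for a ≪ 1, with relative corrections of order a. A quick numerical check on a toy decreasing F (an illustrative choice, not the F of Eq. (A15)):

```python
import math

def boundary_laplace_check(a=0.01, v=1.0):
    """Compare I = int_{-inf}^{v} g(x) exp([F(v)-F(x)]/a) dx with a*g(v)/|F'(v)|.

    Toy choice: F(x) = -x, so F(v) - F(x) = x - v <= 0 for x <= v and |F'| = 1;
    g(x) = 1/(1+x^2). Both are illustrative, not taken from the paper.
    """
    g = lambda x: 1.0 / (1.0 + x * x)
    # exact integral by trapezoid; the integrand decays like e^{(x-v)/a},
    # so a lower cutoff 40 decay lengths below v is plenty
    lo, n = v - 40.0 * a, 20000
    dx = (v - lo) / n
    exact = 0.0
    for i in range(n + 1):
        x = lo + i * dx
        w = 0.5 if i in (0, n) else 1.0
        exact += w * g(x) * math.exp((x - v) / a) * dx
    approx = a * g(v)          # a * g(v) / |F'(v)| with |F'(v)| = 1
    return exact, approx

exact, approx = boundary_laplace_check()
# the relative error is of order a (here about 1%)
```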
2. Single neuron CV of ISI at strong coupling

In this section we provide details of the derivation of the approximated expressions of the response CV. Starting from the mean field result of Eq. (A14), we compute the integrals using the approach discussed above.

Supra-threshold regime v_max < α (θ < µ)

The inner integral in Eq. (A14) yields, in the small a limit,

\int_{-\infty}^{z} \frac{dw}{w^2 + 1} \exp\left[ \frac{F(z) - F(w)}{a} \right] = \frac{a}{(z^2 + 1)\, \frac{dF(z)}{dz}}, \quad (B8)

from which we obtain

CV^2 = \frac{a^2 K \nu^2 \tau^3}{\chi} \left[ \log\left( \frac{v_{min} - \alpha}{v_{max} - \alpha} \right) + \frac{1 - 3\alpha^2 + 4\alpha v_{max}}{2(\alpha - v_{max})^2} - \frac{1 - 3\alpha^2 + 4\alpha v_{min}}{2(\alpha - v_{min})^2} \right]; \quad (B9)

hence the rescaling needed to preserve the deterministic component, a ∼ 1/K, produces CV ∼ \sqrt{a} ≪ 1. We validated this result numerically in Figs. 8H and 10F.
Subthreshold regime v_max > α (θ > µ)

The integral defining the CV, Eq. (A14), can be expressed as

\int_{-\infty}^{v} dz\, \exp\left[ \frac{F(v) - F(z)}{a} \right] g(z) = \int_{-\infty}^{v^*} dz\, \exp\left[ \frac{F(v) - F(z)}{a} \right] g(z) + \int_{v^*}^{v} dz\, \exp\left[ \frac{F(v) - F(z)}{a} \right] g(z) \quad (B10)

with

g(z) = \left\{ \int_{-\infty}^{z} \frac{dw}{w^2 + 1} \exp\left[ \frac{F(z) - F(w)}{a} \right] \right\}^2, \qquad v^* = \alpha - \epsilon. \quad (B11)

The first integral gives

\int_{-\infty}^{v^*} dz\, \exp\left[ \frac{F(v) - F(z)}{a} \right] g(z) = \frac{a^3}{(v^{*2} + 1)^2 \left[ \frac{dF(v^*)}{dz} \right]^3} \exp\left[ \frac{F(v) - F(v^*)}{a} \right]. \quad (B12)

In the second integral,

g(z) = \frac{2\pi a}{(\alpha^2 + 1)^2\, \frac{d^2 F(\alpha)}{dz^2}} \exp\left[ \frac{2\left( F(z) - F(\alpha) \right)}{a} \right], \quad (B13)

from which we get

\int_{v^*}^{v} dz\, \exp\left[ \frac{F(v) + F(z) - 2F(\alpha)}{a} \right] \frac{2\pi a}{(\alpha^2 + 1)^2\, \frac{d^2 F(\alpha)}{dz^2}}. \quad (B14)

Integrating in z we obtain

\int_{v^*}^{v} dz\, \exp\left[ \frac{F(z)}{a} \right] = \frac{a}{\frac{dF(v)}{dz}} \exp\left[ \frac{F(v)}{a} \right]. \quad (B15)

Integrating in v we obtain

CV^2 = \frac{\pi \nu^2 a^2 K \tau^3 \left( v_{max}^2 + 1 \right)^2}{\chi \left( \alpha^2 + 1 \right) \left( v_{max} - \alpha \right)^2} \exp\left[ \frac{2\left( F(v_{max}) - F(\alpha) \right)}{a} \right]. \quad (B16)

Using Eq. (B6) we obtain

CV = 1 - \nu \tau_{rp}, \quad (B17)

which corresponds to the CV of the ISIs of a Poisson process with dead time, with rate ν and refractory period τ_rp. We validated this result numerically in Figs. 8D and 10C.
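Eq. (B17) is indeed the CV of a Poisson process with dead time, which is easy to confirm by direct simulation: for ISIs of the form τ_rp + Exp(λ), the mean ISI is 1/ν = τ_rp + 1/λ while the standard deviation is 1/λ, so CV = (1/λ)/(τ_rp + 1/λ) = 1 - ντ_rp. A minimal check (rate and dead time values are arbitrary):

```python
import math
import random

def dead_time_poisson_cv(lam=100.0, tau_rp=0.002, n=200000, seed=1):
    """Simulate ISIs tau_rp + Exp(lam) and return (empirical CV, 1 - nu*tau_rp)."""
    rng = random.Random(seed)
    isis = [tau_rp + rng.expovariate(lam) for _ in range(n)]
    mean = sum(isis) / n
    var = sum((t - mean) ** 2 for t in isis) / n
    nu = 1.0 / mean                     # empirical firing rate
    return math.sqrt(var) / mean, 1.0 - nu * tau_rp

cv_sim, cv_pred = dead_time_poisson_cv()
```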
3. Scaling relations preserving firing in the strong coupling limit
In this section we use the simplified expressions derived above to define scaling relations of a with K which preserve the neural response in the strong coupling limit. Importantly, the scaling defined here depends on the operating regime of the neuron, i.e. on the asymptotic value of µ. In the limit of large K, the terms in Eq. (A8) can be written as

\tau^{-1} = aK\nu_X(1 + \eta g\gamma), \qquad \omega^{-1} = \nu_X(1 + \eta g\gamma), \qquad \chi^{-1} = \nu_X\left( 1 + \eta g^2\gamma \right), \quad (B18)

while µ, E_D, E_S, v_max, α and the function F(x) are independent of K, a and ν_X. We have shown in the previous section that the single neuron transfer function is given by

\frac{1}{\nu} = \tau_{rp} + \frac{Q}{\nu_X} \quad (B19)

with

Q = \begin{cases} \left\{ \dfrac{1}{K\sqrt{a}} \exp\left[ \dfrac{F(v_{max}) - F(\alpha)}{a} \right] \right\} \sqrt{ \dfrac{\pi (1 + \eta g^2\gamma)}{(1 + \eta g\gamma)^3 (\alpha^2 + 1)} }\; \dfrac{v_{max}^2 + 1}{|v_{max} - \alpha|} & \text{for } \mu < \theta \\[2ex] \dfrac{1}{aK(1 + \eta g\gamma)} \log\left( \dfrac{\mu - V_r}{\mu - \theta} \right) & \text{for } \mu > \theta \end{cases} \quad (B20)

For µ > θ, the parameters a and K in Eq. (B20) appear only in the combination aK. It follows that the rescaling

a \sim \frac{1}{K} \quad (B21)

leaves the neural response invariant for large K. For µ < θ, Eq. (B20), and hence the transfer function, is invariant under the rescaling

K \sim \frac{1}{\sqrt{a}} \exp\left[ \frac{F(v_{max}) - F(\alpha)}{a} \right]. \quad (B22)

[FIG. 10 panel residue: the panels show rate, membrane potential (mV) and CV for µ < θ (η = 1.5, top row) and µ > θ (η = 0.6, bottom row); the legends list the (K, a) pairs used, from K = 5.0e+02 down to a = 2.5e-03 at K = 5.0e+04 (top) and a = 2.0e-04 at K = 5.0e+04 (bottom).]
Scaling relationships preserving firing in the large K limit. Colored dots represent the mean field transfer function (A, D), membrane potential (B, E) and CV (C, F), obtained from Eqs. (A13), (A8) and (A14), respectively. Different colors correspond to different values of a and K, scaled according to Eqs. (B22) (first row) and (B21) (second row). Mean field predictions are well described by the relevant approximated expressions (continuous lines). For µ < θ, the transfer function and CV are described by Eqs. (B6) (A) and (B17) (C); both quantities are invariant as K increases. For µ > θ, the transfer function and CV are described by Eqs. (B4) (D) and (B9) (F); note that, as explained in the text, firing is preserved while the CV becomes smaller as K increases (different line colors correspond to different values of K). Parameters: g = 12; γ = 1/…

In Fig. 10A,D we show neural responses computed for different values of K, with a rescaled according to Eqs. (B21) or (B22); as predicted, the transfer function remains invariant as K increases. Note that the response remains nonlinear in the limit of large K; we will show in the next section that, in the network case, nonlinearities are suppressed by the scaling relation because of the self-consistency condition. Finally, from Fig. 10C,F, we see that the rescaling preserves the CV for µ < θ and suppresses it for µ > θ. In the case µ < θ, the CV is given by Eq. (B17). This expression shows that the scaling relation of Eq. (B22) also leaves the CV invariant. Interestingly, in some parameter regimes, the CV in Figs. 8D and 10C shows a non-monotonic behavior with ν_X which is not captured by Eq. (B17). In particular, a CV above one is observed when µ is below the reset V_r. As pointed out in [76], this supra-Poissonian firing is explained by the fact that, when µ < V_r, the spiking probability is higher just after firing than it is later.
In agreement with this interpretation, we find that the non-monotonic behavior of the CV disappears in the large K limit, where the region of inputs for which µ < V_r becomes negligible. Thus, our analysis shows that the irregularity of firing is preserved in the strong coupling limit of a single neuron with µ < θ. In the case µ > θ, the CV is given by Eq. (B9). This expression shows that the scaling relation of Eq. (B21) produces a CV which decreases as 1/\sqrt{K} in the strong coupling limit. It follows that, in a single neuron with µ > θ, the strong coupling limit produces finite firing that is regular. Starting from the next section, we turn our attention to networks of conductance-based neurons. Since we are interested in describing the irregular firing observed in cortex, we will focus our study on networks with µ < θ.

Appendix C: Firing rate and scaling relation in leaky integrate-and-fire neuron models with voltage-dependent currents
In the main text, we have shown that, when coupling is strong and a ≪ 1, the response of a single LIF neuron with conductance-based synapses is well approximated by Eq. (12), i.e. by Kramers escape rate. Using this expression, we have shown that the scaling relation of Eq. (14) allows finite firing in single neurons and in networks of neurons. Here, we show that the first order approximation of this scaling, i.e. a ∼ 1/log(K), appears also in neuron models with additional biophysical details, such as spike generating currents [43] and voltage-gated subthreshold currents [23], as long as coupling is strong, a is small, and synapses are conductance-based. We consider integrate-and-fire models featuring voltage-dependent currents, indicated here as ψ(V), and conductance-based synapses. In these models, the membrane potential dynamics can be written as

C_j \frac{dV_j}{dt} = -\sum_{A = L,E,I} g_{jA} \left( V_j - E_A \right) + \psi(V_j). \quad (C1)

In the leaky integrate-and-fire model (LIF), ψ(V) = 0 and Eq. (C1) reduces to Eq. (1) analyzed in the main text. In the exponential integrate-and-fire model (EIF) [43], the function ψ(V) = \Delta_T g_L \exp[(V - \theta)/\Delta_T] describes the spike generation current; in this model, once the membrane potential crosses the threshold θ, it diverges to infinity in finite time. The current generated by inward rectifier voltage-gated channels, such as the one recently reported in [23], is captured by an expression of the form ψ(V) = -g_{in}(V)(V - E_{in}), where g_{in}(V) and E_{in} represent the conductance and the reversal potential of the channels, respectively; in the case of [23], 1/g_{in}(V) was shown to be well approximated by a linearly increasing function of V. Following an approach analogous to the one used in the derivation of Eq. (4), the dynamics of Eq. (C1) can be approximated by

\tau \frac{dV}{dt} = -\frac{\partial H(V)}{\partial V} + \sigma \sqrt{\tau}\, \zeta, \qquad H(V) = \frac{1}{2}(V - \mu)^2 - \frac{\tau}{\tau_L g_L} \int^{V} \psi(x)\, dx, \quad (C2)

where ζ is a white noise term with zero mean and unit variance density, while τ, µ and σ(V) are as in Eq. (5).
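For the EIF choice ψ(V) = \Delta_T g_L \exp[(V - \theta)/\Delta_T], the integral in Eq. (C2) is explicit, giving H(V) = \frac{1}{2}(V - \mu)^2 - (\tau/\tau_L)\Delta_T^2 e^{(V-\theta)/\Delta_T}. The sketch below locates the minimum and the maximum of H (the points called µ* and θ* below) by scanning H' for sign changes; units and all parameter values are illustrative rescaled quantities (θ = 1), not taken from the paper.

```python
import math

def eif_energy_extrema(mu=0.6, theta=1.0, delta_t=0.05, tau_ratio=1.0):
    """Roots of H'(V) = (V - mu) - tau_ratio * delta_t * exp((V - theta)/delta_t),
    i.e. the minimum mu* and maximum theta* of the EIF energy function
    H(V) = 0.5*(V - mu)**2 - tau_ratio * delta_t**2 * exp((V - theta)/delta_t).
    tau_ratio stands for tau/tau_L; all numbers are illustrative."""
    dH = lambda v: (v - mu) - tau_ratio * delta_t * math.exp((v - theta) / delta_t)
    roots, v, dv = [], mu - 0.5, 1e-4
    while v < theta + 0.5:
        if dH(v) * dH(v + dv) < 0:          # sign change brackets a root
            lo, hi = v, v + dv
            for _ in range(60):              # bisection refinement
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if dH(lo) * dH(mid) > 0 else (lo, mid)
            roots.append(0.5 * (lo + hi))
        v += dv
    return roots                              # [mu*, theta*]

mu_star, theta_star = eif_energy_extrema()
# mu* sits just above the nominal mu; theta* lies above the nominal threshold theta
```

Consistent with the discussion below, the "effective threshold" θ* of the EIF sits above the nominal θ, and H decreases to -∞ beyond it.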
In what follows, as in the main text, we use the effective time constant approximation [39], i.e. we neglect the multiplicative component of the noise term in Eq. (C2) and make the substitution σ(V) → σ(µ*), where µ* is the mean value of the membrane potential dynamics. The firing rate of a neuron following Eq. (C2) can be computed exactly using Eq. (A5) and is given by

\nu = \left[ \tau_{rp} + \frac{2\tau}{\sigma^2} \int_{-\infty}^{\infty} dx \int_{\max(V_r, x)}^{\infty} \exp\left( 2\, \frac{H(z) - H(x)}{\sigma^2} \right) dz \right]^{-1}. \quad (C3)

In what follows, we provide a more intuitive derivation of the single neuron response, which is valid in the biologically relevant case of a ≪ 1. The function H in Eq. (C2) can be thought of as an energy function which drives the dynamics of the membrane potential. In the case of LIF neurons, H is a quadratic function with a minimum at V = µ. In neuron models with a spike generation current, such as the EIF model [43], the shape of the function H far from threshold is qualitatively similar to that of the LIF model (with a minimum at V = µ*), but becomes markedly different close to threshold, where the potential energy has a maximum at V = θ* and goes to -∞ for V > θ*. Here, we focus on the case in which additional subthreshold voltage-gated currents do not lead to additional minima of the energy function, a scenario that can happen with potassium inward-rectifier currents (e.g. see [77], chapter 4.4.3). In models in which H has a single minimum in the subthreshold range at µ* and a maximum at θ*, the firing rate of a neuron when input noise is small (i.e. when a ≪ 1) can again be computed using Kramers escape rate, which gives the average time it takes for the membrane potential to go from µ* to θ* (see [42], section 5.5.3):

\frac{1}{\nu} - \tau_{rp} = \frac{2\pi \bar{\tau}\, \bar{\Upsilon}}{aK\nu_X} \exp\left( \frac{\bar{\Delta}}{a} \right) \quad (C4)

where

\bar{\Upsilon} = \left( \left| \frac{d^2 H}{dV^2} \right|_{\theta^*} \frac{d^2 H}{dV^2}\bigg|_{\mu^*} \right)^{-1/2}, \qquad \bar{\Delta} = 2\, \frac{H(\theta^*) - H(\mu^*)}{\bar{\sigma}^2}, \qquad \bar{\tau} = aK\nu_X\, \tau, \qquad \bar{\sigma} = \frac{\sigma}{\sqrt{a}},

while the bar indicates quantities that remain of order 1 in the small a limit, provided the external inputs ν_X are at least of order 1/(aKτ_L). Eq. (C4) is the generalization of Eq. (12) to the case of integrate-and-fire neuron models with voltage-dependent currents; it shows that, at the dominant order, finite firing emerges if a ∼ 1/log(K). Moreover, Eq. (C4) shows that corrections to the logarithmic scaling depend on the specific type of voltage-dependent currents used in the model.

Appendix D: Calculations in the strong coupling regime - Networks
In this section, we show how the results on the strong coupling limit of the single neuron response generalize to the network case. First, we analyze the case in which excitatory and inhibitory neurons have the same biophysical properties (model A); for this model, we first discuss the results obtained with the effective time constant approximation, and then the full results. We then study the case in which excitatory and inhibitory neurons have different biophysical properties (model B).
1. Model A, effective time constant approximation
As discussed in the main text, the network response in model A with the effective time constant approximation is obtained by solving the self-consistency condition given by Eqs. (19) and (10). At strong coupling, this condition can be simplified to the form of Eq. (12). In the strong coupling limit, when ν_X ≫ 1/(aKτ_L) and ν ≪ 1/τ_rp, the right hand side of Eq. (10) depends on ν and ν_X only through their ratio. Therefore, we look for solutions of the simplified self-consistency condition in the form of a Taylor expansion

\frac{\nu}{\nu_X} = \sum_{k=1}^{\infty} \rho_k x^{k-1}, \qquad x = \tau_{rp} \nu_X. \quad (D1)

Keeping only terms up to first order in x, the self-consistency condition becomes

\frac{1}{\rho_1} - \frac{\rho_2}{\rho_1^2}\, x = Q(\rho_1) + \rho_2 \frac{dQ(y)}{dy}\bigg|_{y = \rho_1} x,

from which we find

\rho_1 = \frac{1}{Q(\rho_1)}. \quad (D2)

The solution of Eq. (D2) provides the linear component of the network response; this is preserved in the strong coupling limit with an expression analogous to Eq. (14), but with

\frac{r_E}{\nu_X} = 1 + \rho_1, \qquad \frac{r_I}{\nu_X} = \rho_1.

This uniquely defines a scaling between a and K (see Fig. 3A for an example of the scaling function). We test the validity of our result in Fig. 3B. The numerical analysis shows that, as K increases, the scaling relation prevents saturation and suppression of the network response. However, unlike what happens in the single neuron case, the shape of the transfer function is not preserved and becomes increasingly linear as K becomes larger. This is analogous to what happens in the balanced state model [7, 8, 10, 72], where the network transfer function becomes linear in the strong coupling limit. For the case under investigation here, we can understand this suppression of nonlinearities by looking at the second order terms in the expansion of Eq. (D1). Keeping the dominant contribution in a, we find

\rho_2 \sim \frac{a\, \rho_1\, \bar{\sigma}^2}{2 v_{max} \left( \bar{\sigma}\, \frac{d\mu}{dy} + (\theta - \mu) \frac{d\bar{\sigma}}{dy} \right)}. \quad (D3)

Hence \rho_2 goes to zero as a decreases, producing a linear transfer function.
This follows directly from the self-consistency relation and is not present in the single neuron case, where in fact a nonlinear transfer function is observed in the large K limit. Fig. 3B shows that linearity is approached very slowly with K; this follows directly from Eq. (D3), where the suppression of nonlinear terms is controlled by a, which goes to zero slowly (approximately logarithmically) with K.
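The scaling between a and K defined here is implicit in a. Writing the invariant factor in the form suggested by Eq. (B22), exp(Δ/a)/(K√a) = const, and solving for a at each K by bisection makes the leading-order behavior a ≈ Δ/log(K) explicit. A sketch (Δ and the reference point (K₀, a₀) are illustrative choices, not fitted to Fig. 3A):

```python
import math

def scaled_a(K, delta=1.0, K0=100.0, a0=0.2):
    """Solve exp(delta/a) / (K * sqrt(a)) = const for a, with the constant fixed
    by the reference point (K0, a0). Structure follows Eq. (B22); delta, K0 and
    a0 are illustrative values. Valid for K > K0 (a decreases with K)."""
    const = math.exp(delta / a0) / (K0 * math.sqrt(a0))
    f = lambda a: math.exp(delta / a) / (K * math.sqrt(a)) - const
    lo, hi = 0.05, a0          # f(lo) > 0 (exp dominates), f(hi) < 0 for K > K0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

a_at_1e4 = scaled_a(1e4)
# close to the leading-order estimate delta / log(K)
```

The slow, roughly logarithmic decrease of a with K seen here is what makes the approach to linearity in Fig. 3B slow.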
2. Model A, multiplicative noise
In this section, we generalize the approach used above, relaxing the effective time constant approximation. As discussed in Appendix B, Eq. (A13) in the strong coupling limit becomes

\frac{1}{\nu} = \tau_{rp} + \frac{Q}{\nu_X} \quad (D4)

FIG. 11.
Strong coupling limit of networks of conductance-based neurons in model A.
Numerically computed network transfer function (A), CV (B) and probability distribution of the membrane potential (D), obtained from Eqs. (D4), (B17) and (A12). Different colors correspond to different values of a and K, changed according to the scaling relation (D10) (C). As K increases, the network transfer function and CV converge to the expressions derived in the main text (black lines). Note that, unlike the case of the single neuron, the network transfer function becomes linear. The probability distribution of the membrane potential becomes Gaussian and slowly converges to a delta function. Panels (E) and (F) show the network gain and membrane potential for different values of a at fixed K. Note that, unlike what happens in current-based networks (black dashed lines), the gain is not monotonic in g. Simulation parameters are as in Fig. 8; in panels A-D, g = 20.

with

Q = \left\{ \frac{1}{K\sqrt{a}} \exp\left[ \frac{F(v_{max}) - F(\alpha)}{a} \right] \right\} \sqrt{ \frac{\pi \left[ 1 + \frac{\nu}{\nu_X}(1 + g^2\gamma) \right]}{\left[ 1 + \frac{\nu}{\nu_X}(1 + g\gamma) \right]^3 (\alpha^2 + 1)} }\; \frac{v_{max}^2 + 1}{|v_{max} - \alpha|}, \quad (D5)

and

\tau^{-1} = aK\omega^{-1}, \qquad \omega^{-1} = \nu_X \left[ 1 + \frac{\nu}{\nu_X}(1 + g\gamma) \right], \qquad \chi^{-1} = \nu_X \left[ 1 + \frac{\nu}{\nu_X}(1 + g^2\gamma) \right],

\mu = \frac{E_E + \frac{\nu}{\nu_X}\left( E_E + g\gamma E_I \right)}{1 + \frac{\nu}{\nu_X}(1 + g\gamma)}, \qquad E_S = \frac{E_E + \frac{\nu}{\nu_X}\left( E_E + g^2\gamma E_I \right)}{1 + \frac{\nu}{\nu_X}(1 + g^2\gamma)}, \qquad E_D = \frac{(E_E - E_I)\, g \sqrt{ \gamma \left( 1 + \frac{\nu}{\nu_X} \right) \frac{\nu}{\nu_X} }}{1 + \frac{\nu}{\nu_X}(1 + g^2\gamma)}. \quad (D6)

Here we assumed aK ≫ 1/(τ_L ν_X), so that the function Q depends on ν and ν_X only through the combination ν/ν_X. We will show below that a scaling relation analogous to that of single neurons holds; hence, for K large enough, aK ≫ 1/(τ_L ν_X) is automatically satisfied. To solve the self consistency condition, we express the firing rate ν with a Taylor expansion

\tau_{rp} \nu = \sum_{k=1}^{\infty} \rho_k x^k, \qquad x = \tau_{rp} \nu_X. \quad (D7)

Note that in Eq. (D7) we assumed \rho_0 = 0; we will come back to this point at the end of the section.
Under this assumption, y := ν/ν_X = Σ_{k=1}^∞ ρ_k x^{k−1} and the function Q depends only on powers of the dimensionless variable x. Keeping only terms up to first order in x, Eq. (D4) becomes
$$\frac{1}{\rho_1}-\frac{\rho_2}{\rho_1^2}\,x = Q(\rho_1)+\left[\rho_2\left.\frac{\mathrm{d}Q(y)}{\mathrm{d}y}\right|_{y=\rho_1}+1\right]x,\tag{D8}$$
from which we find
$$\rho_1 = \frac{1}{Q(\rho_1)}.\tag{D9}$$
The solution of Eq. (D9) provides the linear component of the network response, i.e. its gain; we discuss this function in more detail at the end of this section. From Eq. (D9), we find that the network gain ρ₁ is preserved in the strong coupling limit if the factor
$$\left\{\sqrt{aK}\,\exp\left[\frac{F(v_{\max})-F(\alpha)}{a}\right]\right\}^{-1}\tag{D10}$$
is constant. Eq. (D10) uniquely defines a scaling between a and K (see Fig. 11C for an example of the scaling function). We test the validity of this scaling in Fig. 11 as follows: given a set of parameters a, K and ρ₁, we compute numerically the transfer function from Eq. (A13); we then increase K, determine the corresponding change in a using Eq. (D10), and compute the transfer function again; results of this procedure are shown in Fig. 11A. The numerical analysis shows that, as K increases, our scaling relation prevents saturation and the network response remains finite. As in the case of the diffusion approximation, the shape of the transfer function is not preserved by the scaling, and an increasingly linear response is observed. We can understand this suppression of nonlinearities by looking at the second order terms in the expansion of Eq. (D4); we find
$$\rho_2 = -\frac{\rho_1^2}{\rho_1\left.\frac{\mathrm{d}\log Q(y)}{\mathrm{d}y}\right|_{\rho_1}+1},\tag{D11}$$
and, keeping the dominant contribution in 1/a in the denominator,
$$\rho_2 \sim -\frac{a\,\rho_1}{\left.\frac{\mathrm{d}F(v_{\max}(y),y)}{\mathrm{d}y}\right|_{\rho_1}+\left.\frac{\mathrm{d}F(\alpha(y),y)}{\mathrm{d}y}\right|_{\rho_1}}.\tag{D12}$$
Hence ρ₂ goes to zero as a decreases, producing a linear transfer function. The nonlinearities at low rate in Fig. 11A (e.g. the red and yellow lines) show that our assumption ρ₀ = 0 is not valid in general.
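The scaling implied by Eq. (D10) can be explored numerically. Below is a minimal sketch (ours, not the paper's code) that treats the exponent F(v_max) − F(α) as a fixed constant C and solves the constancy condition for a at a given K by bisection; the function name `scaled_a`, the constant C, and the reference point (a0, K0) are illustrative assumptions.

```python
import math

def scaled_a(K, C=1.0, a0=0.5, K0=1000):
    """Given K, solve  exp(C/a) / sqrt(a*K) = const  for a by bisection.

    C stands in for F(v_max) - F(alpha), treated here as a fixed
    constant; the reference constant is set by the point (a0, K0).
    """
    target = math.exp(C / a0) / math.sqrt(a0 * K0)

    def f(a):
        return math.exp(C / a) / math.sqrt(a * K) - target

    lo, hi = 0.01, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # f decreases with a: exp(C/a) shrinks and sqrt(a*K) grows
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these illustrative numbers, increasing K from 10³ to 10⁶ only reduces a from 0.5 to about 0.2: a decreases logarithmically with K, consistent with the 1/log(K) scaling discussed in the main text.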
However, it turns out that the scaling relation defined above also suppresses these nonlinearities in the limit of strong coupling (e.g. the blue and cyan lines). We now characterize the dependency of the transfer function gain, i.e. its slope, on network parameters. For fixed network parameters, the network gain ρ₁ is defined as the solution of Eq. (D9); solutions as a function of a and g are shown in Fig. 11E. At fixed values of a, the gain initially decreases as g increases and, for g large enough, the opposite trend appears. This behavior is due to two different effects produced by an increase of g: on one hand, it increases the strength of recurrent inhibition; on the other hand, it decreases the equilibrium membrane potential µ and brings it closer to the inhibitory reversal potential E_I, which in turn weakens inhibition (see Fig. 11F). Fig. 11E shows that the gain is finite only in a finite range of the parameter g; divergences appear when recurrent inhibition is not sufficiently strong to balance excitation. At small g, the imbalance is produced by the weak efficacy of inhibitory synapses; at large g, inhibition is suppressed by the approach of the membrane potential to the reversal potential of inhibitory synapses. Increasing the value of a produces an upward shift of the curve and, at the same time, decreases the range of values in which the gain is finite. The decrease in gain at low values of g is also observed in networks of current-based neurons [10], where the gain is found to be 1/(gγ − 1). To conclude this analysis, we give an approximate expression for the probability distribution of the membrane potential of Eq. (A12), which, in the strong coupling limit, becomes
$$P(V) = \frac{\nu\,\omega\,aK}{|v_{\max}-\mu|}\left[\frac{u(V_{\max})+1}{u(V)+1}\right]e^{\frac{F(v_{\max})-F(V)}{a}},\tag{D13}$$
where V_max is the value of the membrane potential V which maximizes the integrand of Eq. (A12), while the function u(·) has been defined in Eq. (A15).
Examples of the probability distribution and the corresponding approximate expressions are given in Fig. 11D.
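Eq. (D9) is a one-dimensional fixed-point problem. A damped fixed-point iteration is enough to solve it for any smooth, positive Q; the sketch below is generic, and the toy Q used in the example is purely illustrative (the model's actual Q is given by Eq. (D5)).

```python
def network_gain(Q, rho0=1.0, tol=1e-12, max_iter=100000):
    """Solve rho = 1/Q(rho) by damped fixed-point iteration.

    Q: positive, smooth function of rho (model dependent).
    The 0.5 damping weights stabilize the map when 1/Q alone
    is not a contraction.
    """
    rho = rho0
    for _ in range(max_iter):
        new = 0.5 * rho + 0.5 / Q(rho)
        if abs(new - rho) < tol:
            return new
        rho = new
    raise RuntimeError("fixed-point iteration did not converge")

# toy example: Q(rho) = 1 + rho gives rho*(1 + rho) = 1,
# i.e. rho = (sqrt(5) - 1)/2 ~ 0.618
gain = network_gain(lambda rho: 1.0 + rho)
```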
3. Model B, multiplicative noise
In this section, we generalize the results obtained so far to networks of excitatory and inhibitory neurons with different biophysical properties.
FIG. 12.
Limit of large K for networks, model B. Firing rate and CV of excitatory and inhibitory neurons in a network predicted by the mean field model for different values of inputs and K; the expected asymptotic behavior is shown in black. On the left (C, F), we show the corresponding scaling relations, with dots associated to the connectivity parameters. Simulation parameters: the two populations have g_e = 20.0 and g_i = 19.0; for both populations a = 0. and K = 10; other parameters as in Fig. 8.

a. Model definition

Here we take into account the diversity of the two types of neurons with
$$\tau_j = \tau_E,\qquad a_{jm} = a_{EX},\,a_{EE},\,a_{EI}\tag{D14}$$
for excitatory neurons and
$$\tau_j = \tau_I,\qquad a_{jm} = a_{IX},\,a_{IE},\,a_{II}\tag{D15}$$
for inhibitory neurons. We use the parametrization
$$a_{EX}=a_E,\quad a_{EE}=a_E,\quad a_{EI}=g_E\,a_E,\qquad a_{IX}=a_I,\quad a_{IE}=a_I,\quad a_{II}=g_I\,a_I,\tag{D16}$$
and
$$K_{EX}=K_E,\quad K_{EE}=K_E,\quad K_{EI}=\gamma_E K_E,\qquad K_{IX}=K_I,\quad K_{IE}=K_I,\quad K_{II}=\gamma_I K_I.\tag{D17}$$
Eq. (1) becomes
$$\begin{cases}\tau_E\,\dfrac{\mathrm{d}V_E}{\mathrm{d}t} = -(V_E-\mu_E)-\sigma_E(V_E)\,\sqrt{\tau_E}\,\zeta_E,\\[4pt] \tau_I\,\dfrac{\mathrm{d}V_I}{\mathrm{d}t} = -(V_I-\mu_I)-\sigma_I(V_I)\,\sqrt{\tau_I}\,\zeta_I.\end{cases}\tag{D18}$$
The expressions for excitatory neurons are
$$\tau_E^{-1} = \tau_{L,E}^{-1}+a_E K_E\,\omega_E^{-1},\qquad \omega_E^{-1} = \nu_{EX}+\nu_E+g_E\gamma_E\nu_I,$$
$$\mu_E = \frac{\tau_E}{\tau_{L,E}}\left\{E_L+a_E K_E\,\tau_{L,E}\left[\nu_{EX}E_E+\nu_E E_E+\nu_I g_E\gamma_E E_I\right]\right\},$$
$$\sigma_E^2(V) = \frac{a_E^2 K_E\,\tau_E}{\chi_E}\left[(V-E_{S,E})^2+E_{D,E}^2\right],\qquad \chi_E^{-1} = \nu_{EX}+\nu_E+g_E^2\gamma_E\nu_I,$$
$$E_{S,E} = \chi_E\left[\nu_{EX}E_E+\nu_E E_E+\nu_I g_E^2\gamma_E E_I\right],\qquad E_{D,E} = \chi_E\sqrt{(\nu_{EX}+\nu_E)\,g_E^2\gamma_E\,\nu_I}\,\left(E_E-E_I\right);\tag{D19}$$
analogous expressions hold for inhibitory neurons. The firing rate is given by solving a system of two equations,
$$\begin{cases}\nu_E^{-1}-\tau_{rp} = \dfrac{\chi_E}{a_E K_E}\displaystyle\int_{v_{\min,E}}^{v_{\max,E}}\mathrm{d}v\int_{-\infty}^{v}\dfrac{\mathrm{d}x}{x^2+1}\,\exp\left[\dfrac{F_E(v)-F_E(x)}{a_E}\right],\\[8pt] \nu_I^{-1}-\tau_{rp} = \dfrac{\chi_I}{a_I K_I}\displaystyle\int_{v_{\min,I}}^{v_{\max,I}}\mathrm{d}v\int_{-\infty}^{v}\dfrac{\mathrm{d}x}{x^2+1}\,\exp\left[\dfrac{F_I(v)-F_I(x)}{a_I}\right],\end{cases}\tag{D20}$$
with
$$F_E(x) = \frac{2\chi_E}{a_E K_E\,\tau_E}\left[\frac12\log(x^2+1)-\alpha_E\arctan(x)\right],\qquad v_{\min,E} = \frac{V_r-E_{S,E}}{E_{D,E}},\quad v_{\max,E} = \frac{\theta-E_{S,E}}{E_{D,E}},\quad \alpha_E = \frac{\mu_E-E_{S,E}}{E_{D,E}},\tag{D21}$$
and analogous expressions for the inhibitory population. The probability distribution of the membrane potential and the CV are straightforward generalizations of Eqs. (A12) and (A14).

b. Scaling analysis

We parametrize inputs to the two populations as ν_EX and ν_IX = ην_EX. Using an analysis analogous to the one described above, we obtain a simplified expression for the self-consistency Eq. (D20),
$$\begin{cases}\nu_E^{-1}-\tau_{rp} = Q_E\!\left(\nu_E/\nu_{EX},\,\nu_I/\nu_{EX}\right)/\nu_{EX},\\ \nu_I^{-1}-\tau_{rp} = Q_I\!\left(\nu_E/\nu_{EX},\,\nu_I/\nu_{EX}\right)/\nu_{EX},\end{cases}\tag{D22}$$
where
$$Q_E = \left\{\sqrt{a_E K_E}\,\exp\left[\frac{F_E(v_{\max,E})-F_E(\alpha_E)}{a_E}\right]\right\}^{-1}\sqrt{\frac{\pi\left[1+\frac{\nu_E}{\nu_{EX}}+g_E^2\gamma_E\frac{\nu_I}{\nu_{EX}}\right]}{\left[1+\frac{\nu_E}{\nu_{EX}}+g_E\gamma_E\frac{\nu_I}{\nu_{EX}}\right]}\,\frac{\alpha_E^2+1}{v_{\max,E}^2+1}}\;\frac{1}{|v_{\max,E}-\alpha_E|},\tag{D23}$$
and
$$Q_I = \left\{\sqrt{a_I K_I}\,\exp\left[\frac{F_I(v_{\max,I})-F_I(\alpha_I)}{a_I}\right]\right\}^{-1}\sqrt{\frac{\pi\left[\eta+\frac{\nu_E}{\nu_{EX}}+g_I^2\gamma_I\frac{\nu_I}{\nu_{EX}}\right]}{\left[\eta+\frac{\nu_E}{\nu_{EX}}+g_I\gamma_I\frac{\nu_I}{\nu_{EX}}\right]}\,\frac{\alpha_I^2+1}{v_{\max,I}^2+1}}\;\frac{1}{|v_{\max,I}-\alpha_I|}.\tag{D24}$$
We investigate the solution in the strong coupling limit using the expansion
$$\tau_{rp}\,\nu_E = \sum_{k=1}^{\infty}\rho_{Ek}\,x^k,\qquad \tau_{rp}\,\nu_I = \sum_{k=1}^{\infty}\rho_{Ik}\,x^k,\qquad x = \tau_{rp}\,\nu_{EX},\tag{D25}$$
and obtain
$$\begin{cases}\rho_{E1} = 1/Q_E(\rho_{E1},\rho_{I1}),\\ \rho_{I1} = 1/Q_I(\rho_{E1},\rho_{I1}).\end{cases}\tag{D26}$$
Eq. (D26) defines the gains of the excitatory and inhibitory populations. As for model A, requiring that the network gains are preserved in the large K limit is equivalent to requiring that the products
$$\left\{\sqrt{a_j K_j}\,\exp\left[\frac{F_j(v_{\max,j})-F_j(\alpha_j)}{a_j}\right]\right\}^{-1},\qquad j=E,I,\tag{D27}$$
remain constant; these constraints define how synaptic strength should scale with K to preserve the response gain.
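The coupled system (D26) can be solved in the same spirit as the single-population case. A hedged sketch, assuming the fixed point takes the form ρ = 1/Q as in Eq. (D9); the lambdas in the example are placeholders standing in for the actual Q_E, Q_I of Eqs. (D23)-(D24).

```python
def two_population_gains(QE, QI, rho0=(1.0, 1.0), tol=1e-12, max_iter=100000):
    """Damped fixed-point iteration for the coupled gains
    rho_E = 1/QE(rho_E, rho_I) and rho_I = 1/QI(rho_E, rho_I)."""
    rE, rI = rho0
    for _ in range(max_iter):
        nE = 0.5 * rE + 0.5 / QE(rE, rI)
        nI = 0.5 * rI + 0.5 / QI(rE, rI)
        if abs(nE - rE) + abs(nI - rI) < tol:
            return nE, nI
        rE, rI = nE, nI
    raise RuntimeError("fixed-point iteration did not converge")

# placeholder interaction functions, for illustration only
rE, rI = two_population_gains(lambda e, i: 1.0 + e + 2.0 * i,
                              lambda e, i: 2.0 + e + i)
```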
We note that, since F_j(v_max,j) − F_j(α_j) is different for the two populations, in the general case there are two different scalings for the two populations; in Fig. 12 we verify this prediction.

Appendix E: Simulations vs theory
All the results shown in the main text are based on the mean field analysis of the network dynamics. In this section, we investigate how the predictions of the mean field theory compare to numerical simulations of networks of conductance-based neurons. Using the simulator Brian2 [66], we simulated the dynamics of networks of spiking neurons defined by Eq. (1). We investigated networks of N_E excitatory and N_I inhibitory neurons; the two groups were driven by two populations of Poisson units of size N_EX and N_IX, respectively. Simulations were performed for N_E = N_I = N_EX = N_IX = 10K and 100K, with no significant differences between the two. Delays of excitatory and inhibitory synapses were drawn randomly and independently at each existing synapse from uniform distributions in the range [0, 10] ms (E synapses) and [0, 1] ms (I synapses). For fixed network parameters, the dynamics was simulated for 10 seconds with a time step of 10 µs. We performed simulations for different values of K; the value of a was rescaled according to the scaling relation of Eq. (D10). From the resulting activity we measured the firing rate, the CV, and the probability distribution of the membrane potential; results are shown in Fig. 13. Mean field predictions are in qualitative agreement with numerical simulations, and the agreement improves as a decreases. Deviations from mean field theory can potentially arise from three factors: (1) the finite size of conductance jumps due to pre-synaptic action potentials; (2) correlations in synaptic inputs to different neurons in the network due to recurrent connectivity; (3) temporal correlations in synaptic inputs due to non-Poissonian firing. In our simulations, deviations due to (1) and (2) become small when both a and the connection probability are small. Deviations due to (3) become small when ν ≪ 1/τ_rp since, as shown in Eq.
(B17) of Appendix B, the firing statistics of presynaptic neurons then tend to those of a Poisson process. As predicted by the mean field analysis, with increasing K (and decreasing a) the network response becomes linear and approaches the asymptotic scaling; firing remains irregular, as shown by the CV, and the membrane potential becomes Gaussian distributed.

Appendix F: Effects of heterogeneity in the connectivity between neurons
In this section, we describe how fluctuations in single cell properties modify the expressions derived above; in particular, we investigate the effect of heterogeneities in the number of connections per neuron in the simplified framework of model A. The formalism described here generalizes to networks of conductance-based neurons the analysis done in refs. [56, 78] for networks of current-based neurons. We assume that the i-th neuron in the network receives projections from K_iX, K_iE and K_iI external, excitatory and inhibitory neurons, respectively. These numbers are drawn randomly from Gaussian distributions with mean K (γK) and variance ∆K² (γ∆K²) for excitatory (inhibitory) synapses. Note that ∆K is assumed to be sufficiently small that the probability of generating a negative number can be neglected. Fluctuations in the number of connections

FIG. 13.
Comparison of mean field theory and numerical simulations.
Network transfer function (first row), CV of the ISI distribution (second row), and probability distribution of the membrane potential at ν_E = 0. τ_rp (third row). In every panel we show the mean field prediction (green), results from numerical simulations (red), and the value expected in the strong coupling limit (black). Different columns correspond to different values of K and a, which were scaled according to Eq. (D10). The agreement between network simulations (red) and mean field predictions (green) improves as a decreases, as expected since we used the diffusion approximation to derive the results. Simulation parameters are: g = 20, N_E = N_I = N_EX = N_IX = 100K.

are expected to produce a distribution of rates in the population, characterized by mean ν and variance ∆ν². As a result, the rates of incoming excitatory and inhibitory spikes differ from cell to cell and become
$$K_{iE}\,r_{iE} = K\left(r_E+\Delta_E z_{iE}\right),\qquad K_{iI}\,r_{iI} = \gamma K\left(r_I+\Delta_I z_{iI}\right),\qquad r_E = \nu+\nu_X,\quad r_I = \nu,$$
$$\Delta_E^2 = CV_K^2\left(\nu+\nu_X\right)^2+\frac{\Delta\nu^2}{K}\approx CV_K^2\left(\nu+\nu_X\right)^2,\qquad \Delta_I^2 = CV_K^2\,\nu^2+\frac{\Delta\nu^2}{\gamma K}\approx CV_K^2\,\nu^2;\tag{F1}$$
where r_{E,I} are the average presynaptic rates and z_{iE,I} are realizations of a quenched normal noise with zero mean and unit variance, fixed in a given realization of the network connectivity. Starting from Eq. (F1), the rate ν_i of each cell is derived as in the case without heterogeneities; the main difference is that it is now a function of the particular realizations of z_{iE} and z_{iI}. The quantities ν and ∆ν are obtained from population averages through the self-consistency relations
$$\nu = \langle\nu(z_E,z_I)\rangle,\qquad \Delta\nu^2 = \langle\nu^2(z_E,z_I)\rangle-\nu^2,\tag{F2}$$
where ⟨·⟩ represents the Gaussian average over the variables z_E and z_I. Once ν and ∆ν are known, the probability distribution of firing rates in the population is given by
$$P(\nu) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}z_E\,\mathrm{d}z_I\;e^{-z_E^2/2}\,e^{-z_I^2/2}\;\delta\left[\nu-\nu(z_E,z_I)\right].$$
(F3) As shown in the main text (Fig. 4A), Eq. (F3) captures quantitatively the heterogeneity in rates observed in numerical simulations. In the large K (small a) limit, the mathematical expressions derived above simplify significantly. First, as long as the parameter µ_i of the i-th neuron is below threshold, its rate is given by an expression analogous to Eq. (12) which, for small ∆_{E,I}, can be written
$$Q_i = Q\,e^{\Gamma z_i},\qquad \Gamma^2 = \left(\frac{\partial v_{\max}}{\partial r_E}\,\Delta_E\right)^2+\left(\frac{\partial v_{\max}}{\partial r_I}\,\Delta_I\right)^2,\tag{F4}$$
where z_i is a Gaussian random variable with zero mean and unit variance. Moreover, if responses are far from saturation, the single neuron rate can be written as
$$\nu_i = \frac{\nu_X}{Q_i} = \nu_0\,e^{-\Gamma z_i},\qquad \Gamma = \frac{\Omega\,CV_K}{a},\qquad \Omega^2 = \left(a\frac{\partial v_{\max}}{\partial(r_E/\nu_X)}\right)^2\left(\rho_1+1\right)^2+\left(a\frac{\partial v_{\max}}{\partial(r_I/\nu_X)}\right)^2\rho_1^2,\tag{F5}$$
where ν₀ is the rate in the absence of quenched noise (i.e. Eq. (20) of the main text). It is easy to show that, in Eq. (F5), Ω is independent of a, K and ν_X in the large K (small a) limit. Finally, as noted in [56], if the single neuron rate can be expressed as an exponential function of a quenched variable z, Eq. (F3) can be integrated exactly and the distribution of rates is lognormal,
$$P(\nu_i) = \frac{1}{\sqrt{2\pi}\,\Gamma\,\nu_i}\exp\left(-\frac{\left(\log\nu_i-\log\nu_0\right)^2}{2\Gamma^2}\right).\tag{F6}$$
Therefore, when the derivation of Eq. (F5) is valid, rates in the network should follow a lognormal distribution, with mean and variance given by
$$\nu = \nu_0\,e^{\Gamma^2/2},\qquad \Delta\nu^2 = \nu^2\left[e^{\Gamma^2}-1\right].\tag{F7}$$
For Γ ≪
1, we find ∆ν/ν ≈ Γ ∝ CV_K, consistent with numerical results shown in Fig. 4C.

Appendix G: Finite synaptic time constants
In this section, we discuss the effect of synaptic time constants on single neuron and network responses. First, we derive an approximate expression for the single neuron membrane time constant; we then compute approximate expressions for the firing rate which are valid in different regimes of the ratio τ_S/τ; at the end of the section, we discuss the response of networks of neurons with large τ_S/τ. The single neuron membrane potential dynamics is given by
$$\begin{cases}C_j\,\dot V_j(t) = -g_{jL}\left(V_j-E_L\right)-\sum_{A=E,I}g_{jA}(t)\left(V_j-E_A\right),\\ \tau_E\,\dot g_{jE} = -g_{jE}+g_{jL}\,\tau_E\sum_m a_{jm}\sum_n\delta\left(t-t_m^n-D\right),\\ \tau_I\,\dot g_{jI} = -g_{jI}+g_{jL}\,\tau_I\sum_m a_{jm}\sum_n\delta\left(t-t_m^n-D\right).\end{cases}\tag{G1}$$
Using the effective time constant approximation [39], we have
$$\begin{cases}C\,\dot V = -g\left(V-\mu\right)-g_{EF}\left(\mu-E_E\right)-g_{IF}\left(\mu-E_I\right),\\ \tau_E\,\dot g_{EF} = -g_{EF}+\sigma_E\sqrt{\tau_E}\,\zeta_E,\\ \tau_I\,\dot g_{IF} = -g_{IF}+\sigma_I\sqrt{\tau_I}\,\zeta_I,\end{cases}\tag{G2}$$
where g_{AF} represents the fluctuating component of the conductance g_A, i.e.
$$g_A(t) = \bar g_A+g_{AF}(t),\tag{G3}$$
and
$$\langle\zeta_A(t)\,\zeta_B(t')\rangle = \delta_{A,B}\,\delta(t-t'),\qquad g = g_L+\bar g_E+\bar g_I,\qquad \bar g_A = a_A\,\tau_A R_A,\qquad \sigma_A^2 = a_A^2\,\tau_A R_A.\tag{G4}$$
Since we are interested in the stationary response, we introduce the variable
$$z = \left(\mu-E_E\right)g_{EF}+\left(\mu-E_I\right)g_{IF}\tag{G5}$$
with derivative
$$\dot z = \left(\mu-E_E\right)\frac{-g_{EF}+\sigma_E\sqrt{\tau_E}\,\zeta_E}{\tau_E}+\left(\mu-E_I\right)\frac{-g_{IF}+\sigma_I\sqrt{\tau_I}\,\zeta_I}{\tau_I}.\tag{G6}$$
Since we are interested in understanding the effect of an additional time scale, we can simplify the analysis by assuming a unique synaptic time scale τ_E = τ_I = τ_S, and obtain
$$\tau_S\,\dot z = -z+\sigma_z\sqrt{\tau_S}\,\zeta,\qquad \sigma_z^2 = \sigma_E^2\left(\mu-E_E\right)^2+\sigma_I^2\left(\mu-E_I\right)^2.\tag{G7}$$
To have the correct limit for τ_S →
0, we impose a_A = a_A^0 τ_L/τ_S, where a_A^0 is the value of the synaptic efficacy in the limit of an instantaneous synaptic time scale. With these assumptions the system becomes
$$\begin{cases}\tau\,\dfrac{\mathrm{d}V}{\mathrm{d}t} = -(V-\mu)-\sigma\sqrt{\dfrac{\tau}{\tau_S}}\,z,\\[4pt] \tau_S\,\dfrac{\mathrm{d}z}{\mathrm{d}t} = -z+\sqrt{\tau_S}\,\zeta.\end{cases}\tag{G8}$$
One can check that in the limit τ_S →
0, the equations become analogous to those of the main text, with η = z/√τ_S. In what follows, we provide approximate expressions for the single neuron transfer function in three regimes: small time constant [67], large time constant [69], and intermediate values [70]. We also note that a numerical procedure to compute the firing rate exactly for any value of the synaptic time constant was introduced recently, using Fredholm theory [79].
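Eq. (G8) is straightforward to integrate numerically, which is useful for checking the approximations derived below. A minimal Euler-Maruyama sketch (threshold and reset are added by hand; all parameter values in the example are illustrative, not fitted to the paper's figures):

```python
import math
import random

def simulate_g8(mu, sigma, tau, tau_s, theta, v_reset,
                dt=1e-4, t_max=5.0, seed=1):
    """Euler-Maruyama integration of Eq. (G8): the membrane potential V
    is driven by an Ornstein-Uhlenbeck variable z with correlation time
    tau_s; a hard threshold (theta) and reset (v_reset) are applied.
    Returns the list of spike times."""
    rng = random.Random(seed)
    v, z, t = v_reset, 0.0, 0.0
    spikes = []
    for _ in range(int(t_max / dt)):
        # OU update: tau_s dz = -z dt + sqrt(tau_s) dW
        z += (-z * dt + math.sqrt(tau_s * dt) * rng.gauss(0.0, 1.0)) / tau_s
        # membrane update: tau dV = -(V - mu) dt - sigma sqrt(tau/tau_s) z dt
        v += (-(v - mu) - sigma * math.sqrt(tau / tau_s) * z) * dt / tau
        t += dt
        if v >= theta:
            spikes.append(t)
            v = v_reset
    return spikes

# fluctuation-driven regime: mean input below threshold
spikes = simulate_g8(mu=0.0, sigma=2.0, tau=0.02, tau_s=0.002,
                     theta=1.0, v_reset=0.0)
```

The returned spike times can be used to estimate the firing rate and the CV of the ISI distribution, for direct comparison with the formulas below.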
1. Single neuron transfer function for different values of τ_S/τ

For τ_S/τ ≪
1, as shown in [67], the firing rate can be computed with a perturbative expansion and is given by
$$\frac{1}{\nu} = \tau\sqrt{\pi}\int_{\tilde v_{\min}}^{\tilde v_{\max}}\mathrm{d}x\;e^{x^2}\left(1+\mathrm{erf}(x)\right),\qquad \tilde v(x) = \frac{x-\mu}{\sigma}-\bar\alpha\sqrt{\frac{\tau_S}{\tau}},\tag{G9}$$
with ᾱ = −ζ(1/2) ≈ 1.46. As shown in Fig. 14, Eq. (G9) generates small corrections around the prediction obtained with instantaneous synapses, and captures well the response for small values of τ_S/τ. For τ_S/τ ≈ 1, as shown in [70] using Rice's formula [80], the single neuron firing rate is well approximated by the rate of upward threshold crossings of the membrane potential dynamics without reset. Starting from Eq. (G8) and using the results of [70], we obtain
$$\nu = \frac{1}{2\pi\sqrt{\tau\,\tau_S}}\exp\left[-\frac{v_{\max}^2}{2}\left(1+\frac{\tau_S}{\tau}\right)\right].\tag{G10}$$

FIG. 14.
Synaptic time constant suppresses single neuron response in the strong coupling limit.
Single neuron response for different values of K, with a rescaled according to Eq. (14). Rates are plotted as a function of K (first row) and τ_S/τ (second row); different columns correspond to different values of the synaptic time constant τ_S (titles). As K increases, because the synaptic time constant τ_S is not negligible compared to the membrane time constant τ, rates computed numerically from Eq. (23) (black dots) depart from the prediction of Eq. (10) (green). The dependency of the rate on K is captured by Eq. (G9) (blue) for small values of τ_S/τ, and by Eq. (G11) (red) for large values of τ_S/τ. This decay cannot be prevented by a new scaling relation of a with K, and provides an upper bound on how much coupling can be increased while preserving the response. Simulation parameters: a = 0.006 for K = 10, g = 12, η = 1. For τ_S/τ ≫
1, as shown in [69], the neuron fires only when fluctuations of z are large enough for V to be above threshold; the corresponding rate is given by
$$\nu = \int_{v_{\max}/\epsilon}^{\infty}\frac{\mathrm{d}w\;e^{-w^2/2}}{\sqrt{2\pi}}\left[\tau_{rp}+\tau\,\log\left(\frac{v_{\min}-\epsilon w}{v_{\max}-\epsilon w}\right)\right]^{-1},\qquad \epsilon = \sqrt{\frac{\tau}{\tau_S}}.\tag{G11}$$
As shown in Fig. 14, Eq. (G11) captures the response for large values of τ_S/τ. Higher order terms in the τ_S/τ expansion could be computed using the approach described in [79]. However, Fig. 14 shows that Eqs. (G9)-(G11) are sufficient to capture quantitatively the responses observed in numerical simulations in the different regimes of τ_S/τ. Eqs. (G9)-(G11) show that the single neuron response is a nonlinear function of input rates; this nonlinearity prevents a scaling relation between a and K from rescuing the suppression observed in Fig. 14 and Fig. 6A.
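For reference, in the τ_S → 0 limit Eq. (G9) reduces to the classical white-noise rate formula, which can be evaluated by direct quadrature. A self-contained sketch (ours, not the paper's code; the asymptotic branch avoids overflow of exp(x²) at very negative arguments):

```python
import math

def siegert_rate(mu, sigma, theta, v_reset, tau, tau_rp=0.002, n=4000):
    """Firing rate of a leaky integrate-and-fire neuron with white-noise
    input (the tau_S -> 0 limit of Eq. (G9)):
        1/nu = tau_rp + tau*sqrt(pi) *
               integral of exp(x^2)*(1 + erf(x)) dx
               from (v_reset - mu)/sigma to (theta - mu)/sigma,
    computed with the trapezoidal rule."""
    def integrand(x):
        if x < -5.0:
            # two-term asymptotic form of exp(x^2)*erfc(-x):
            # avoids overflow and catastrophic cancellation
            return (1.0 - 0.5 / (x * x)) / (abs(x) * math.sqrt(math.pi))
        return math.exp(x * x) * (1.0 + math.erf(x))

    lo = (v_reset - mu) / sigma
    hi = (theta - mu) / sigma
    h = (hi - lo) / n
    ys = [integrand(lo + i * h) for i in range(n + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return 1.0 / (tau_rp + tau * math.sqrt(math.pi) * integral)
```

In the strongly suprathreshold, low-noise regime the formula reduces to ν ≈ 1/(τ_rp + τ log[(µ − V_r)/(µ − θ)]), which provides a quick sanity check of the quadrature.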
2. Network response for τ_S/τ larger than one

In this section, we study responses in networks of neurons with large τ_S/τ. As in the case of instantaneous synapses, the network response can be obtained by solving the self-consistency relation given by the single neuron transfer function, using input rates r_E = ν_X + ν, r_I = ν.

FIG. 15.
Approximation of the network response for large τ_S/τ. Plots analogous to Fig. 6B,C of the main text. Dots represent the network response as a function of the input rate ν_X, computed numerically from Eqs. (1), (23) for τ_S = 1 ms (green) and τ_S = 10 ms (red). Continuous lines correspond to the predictions obtained with instantaneous synapses (black) and for large synaptic time constants (Eqs. (G12), (5), (G13); colored lines). As explained in the text, the latter predictions are valid only for large τ_S/τ; because of this, we plotted only values obtained for τ_S/τ > 1. For τ_S/τ ≫ 1, the network response is well described by Eq. (21) of the main text.
In particular, solutions of the implicit equation generated by Eq. (G11) give the network response in the region of inputs for which τ_S/τ ≫