What is the dynamical regime of cerebral cortex?
Yashar Ahmadian and Kenneth D. Miller

Institute of Neuroscience, Departments of Biology and Mathematics, University of Oregon, OR, USA. As of March 2020: Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, UK.

Center for Theoretical Neuroscience, Swartz Program in Theoretical Neuroscience, Kavli Institute for Brain Science, and Dept. of Neuroscience, College of Physicians and Surgeons and Morton B. Zuckerman Mind Brain Behavior Institute, Columbia University, NY, USA

August 29, 2019
Summary
Many studies have shown that the excitation and inhibition received by cortical neurons remain roughly balanced across many conditions. A key question for understanding the dynamical regime of cortex is the nature of this balancing. Theorists have shown that network dynamics can yield systematic cancellation of most of a neuron's excitatory input by inhibition. We review a wide range of evidence pointing to this cancellation occurring in a regime in which the balance is loose, meaning that the net input remaining after cancellation of excitation and inhibition is comparable in size to the factors that cancel, rather than tight, meaning that the net input is very small relative to the cancelling factors. This choice of regime has important implications for cortical functional responses, as we describe: loose balance, but not tight balance, can yield many nonlinear population behaviors seen in sensory cortical neurons, allow the presence of correlated variability, and yield decrease of that variability with increasing external stimulus drive as observed across multiple cortical areas.

In what regime does cerebral cortex operate? This is a fundamental question for understanding cerebral cortical function. The concept of a "regime" can be defined in various ways. Here we will focus on a definition in terms of the balance of excitation and inhibition: how strong are the excitation and inhibition that cortical cells receive, and how tight is the balance between them? As we will see, the answers to these questions have important implications for the dynamical function of cortex.

We first consider several more fundamental distinctions in cortical regime. First, neurons may fire in a regular or irregular fashion, where regular firing refers to emitting spikes in a more clock-like manner, while irregular firing refers to emitting spikes in a more random manner, like a Poisson process.
Cortex appears to be in an irregular regime (Shadlen and Newsome, 1998; Softky and Koch, 1993), though some areas are less irregular than others (Maimon and Assad, 2009). Second, neurons may fire in a synchronous regime, meaning with strong correlations between the firing of different neurons, or an asynchronous regime, meaning with weak (or no) correlations. Cortical firing, particularly in the awake state, generally appears to be in an asynchronous regime (Cohen and Kohn, 2011; Doiron et al., 2016; Ecker et al., 2014, 2010), although some conditions may show more synchronous firing (DeWeese and Zador, 2006; Poulet and Petersen, 2008; Stevens and Zador, 1998; Tan et al., 2014). Thus, we will take cortex to be in an asynchronous irregular regime. Brunel (2000) first defined conditions on networks of excitatory and inhibitory neurons that led them to operate in the asynchronous irregular regime.

A third distinction is whether a network goes to a stable fixed rate of firing for a fixed input (with noisy fluctuations about that fixed rate given noisy inputs), or whether it shows more complex behaviors, such as movement between multiple fixed points, oscillations, or chaotic wandering of firing rates. We will focus on the case of a single fixed point, which seems likely to reasonably approximate at least awake sensory cortex (see discussion in Miller, 2016). The fixed point is taken to be stable, meaning that the network dynamics cause firing rates to return to the fixed point levels after small perturbations. Finally, for a given fixed point, recurrent excitation may be strong enough to destabilize the fixed point in the absence of feedback inhibition; that is, if inhibitory firing were held frozen at its fixed point level, a small perturbation of excitatory firing rates would cause them to either grow very large or collapse to zero.
In that case, the fixed point is stabilized by feedback inhibition, and the network is known as an inhibition-stabilized network (ISN). Alternatively, the recurrent excitation may be weak enough to remain stable even without feedback inhibition. A number of studies have found strong evidence that at least primary visual and auditory cortices are ISNs both at spontaneous (Sanzeni et al., 2019) and stimulus-driven (Adesnik, 2017; Kato et al., 2017; Ozeki et al., 2009) levels of activity.

Note that for some of the distinctions we describe between regimes, there is a sharp transition from one regime to the other, while for others the transition is gradual. We use the word "regime" in either case to describe qualitatively different network behaviors.

The assumption that cortex is in an irregularly-firing regime (as well as its operation as an ISN) strongly points to the need for some kind of balance between excitation and inhibition. Stochasticity of cellular and synaptic mechanisms (Mainen and Sejnowski, 1995; O'Donnell and van Rossum, 2014; Schneidman et al., 1998) and input correlations (DeWeese and Zador, 2006; Stevens and Zador, 1998) may contribute to irregular firing. However, a number of authors have argued that, assuming inputs are un- or weakly-correlated, irregular firing will arise if the mean input to cortical cells is sub- or peri-threshold, so that firing is induced by fluctuations from the mean rather than by the mean itself (Amit and Brunel, 1997; Troyer and Miller, 1997; Tsodyks and Sejnowski, 1995; van Vreeswijk and Sompolinsky, 1996). This is referred to as the fluctuation-driven regime, as opposed to the mean-driven regime in which the mean input is strongly suprathreshold and spiking is largely driven by integration of this mean input.
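The distinction between fluctuation-driven and mean-driven firing can be illustrated with a toy simulation, not from the text: a leaky integrate-and-fire neuron driven by white-noise input with mean mu and fluctuation strength sigma, both measured in units of the rest-to-threshold distance. The specific parameter values and the simple noise model are illustrative assumptions. Irregularity is quantified by the coefficient of variation (CV) of the inter-spike intervals, near 1 for Poisson-like firing and near 0 for clock-like firing.

```python
import numpy as np

def isi_cv(mu, sigma, tau=0.02, dt=1e-4, T=20.0, seed=0):
    """CV of inter-spike intervals for a leaky integrate-and-fire neuron.

    Voltage is in units of the rest-to-threshold distance (threshold = 1,
    reset and rest = 0); tau is the membrane time constant in seconds.
    """
    rng = np.random.default_rng(seed)
    v, t, spikes = 0.0, 0.0, []
    for _ in range(int(T / dt)):
        # Euler step of tau dv/dt = -v + mu, plus white-noise fluctuations
        v += (dt / tau) * (mu - v) + sigma * np.sqrt(dt / tau) * rng.standard_normal()
        t += dt
        if v >= 1.0:          # threshold crossing: emit a spike and reset
            spikes.append(t)
            v = 0.0
    isi = np.diff(spikes)
    return isi.std() / isi.mean()

cv_fluct = isi_cv(mu=0.8, sigma=0.5)    # subthreshold mean, large fluctuations
cv_mean = isi_cv(mu=3.0, sigma=0.05)    # suprathreshold mean, small fluctuations
```

In this sketch the fluctuation-driven cell fires irregularly (CV well above zero), while the mean-driven cell fires almost clock-regularly (CV near zero).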
The fluctuation-driven regime yields random, Poisson-process-like firing, because fluctuations are equally likely to occur at any time, whereas the mean-driven regime yields regular firing.

Given the strength of inputs to cortex (to be discussed below), the mean excitation received by a strongly-responding cell is likely to be sufficient to drive the cell near or above threshold. Therefore, for the mean input to be subthreshold, the mean inhibition is likely to cancel a significant portion of the mean excitation; that is, the excitation and inhibition received by a cortical cell should be at least roughly balanced (Tsodyks and Sejnowski, 1995; van Vreeswijk and Sompolinsky, 1996). Consistent with the idea that inhibition balances excitation, many experimental investigations have suggested that cortical or hippocampal excitation and inhibition remain balanced or inhibition-dominated across varying activity levels (Anderson et al., 2000; Atallah and Scanziani, 2009; Barral and Reyes, 2016; Dehghani et al., 2016; Galarreta and Hestrin, 1998; Graupner and Reyes, 2013; Haider et al., 2006, 2013; Higley and Contreras, 2006; Marino et al., 2005; Okun and Lampl, 2008; Shu et al., 2003; Wehr and Zador, 2003; Wu et al., 2008, 2006; Yizhar et al., 2011; Zhou et al., 2014).

Excitation and inhibition may be balanced in at least two ways. First, inhibitory and excitatory synaptic weights may be co-tuned, so that cells that receive more (or less) excitatory weight receive correspondingly more (or less) inhibitory weight (Bhatia et al., 2019; Xue et al., 2014). This does not ensure balancing of excitation and inhibition received across varying patterns of activity. Second, given sufficiently strong feedback inhibitory weights, the network dynamics may ensure that the mean inhibition and mean excitation received by neurons remain balanced across patterns of activity, without requiring tuning of synaptic weights.
Here, we will focus on this second, dynamic form of balancing.

As we will discuss, theorists have described mechanisms by which inhibition and excitation dynamically remain balanced, keeping neurons in the fluctuation-driven regime, without any need for fine tuning of parameters such as synaptic weights. This dynamical balance can be a "tight balance", which we define to mean that the excitation and inhibition that cancel are much larger than the residual input that remains after cancellation, or a "loose balance", meaning that the canceling inputs are comparable in size to the remaining residual input (terms to describe balanced networks are not yet standardized; see Appendix 1 for comparison of our usage to other nomenclatures). The question of whether the balance is tight or loose has important implications for the behavior of the network. Here we will review these issues and argue that the evidence is most consistent with a loosely balanced regime.

A Theoretical Problem: How to Achieve Input Mean and Fluctuations That Are Both Comparable in Size to Threshold?
How do cortical neurons stay in the irregularly firing regime? There are two requirements for being in the fluctuation-driven regime, which yields irregular firing: (1) the mean input the neurons receive must be sub- or peri-threshold; (2) input fluctuations must be sufficiently large to bring neuronal voltages to spiking threshold sufficiently often to create reasonable firing rates. We will measure the voltage effects of a neuron's inputs in units of the voltage distance from the neuron's rest to threshold; this distance, typically around 20 mV for a cortical cell (e.g., Constantinople and Bruno, 2013, Fig. 3K), is equal to 1 in these units. Thus, a necessary condition for being in the irregularly firing regime is that the voltage mean driven by the mean input (henceforth abbreviated to "the mean input") should have order of magnitude 1, which we write as O(1), or smaller. The second requirement above then dictates that the voltage fluctuations driven by the input fluctuations from the mean (henceforth abbreviated to "input fluctuations") should also be O(1). In particular, this means that the ratio of the mean input to the input fluctuations should be O(1). (Note, we use the O() notation simply to indicate order of magnitude, and not in its more technical, asymptotic sense of the scaling with some parameter as that parameter goes to zero or infinity.)

Several authors have considered the requirements for these conditions to be true (Renart et al., 2010; Tsodyks and Sejnowski, 1995; van Vreeswijk and Sompolinsky, 1996, 1998). Following these authors, we assume the network is composed of excitatory (E) and inhibitory (I) neuron populations, which receive excitatory inputs from an external (X) population. The latter could represent any cortical or subcortical neurons outside the local cortical network, for example, the thalamic input to an area of primary sensory cortex.
As a simplified toy model of the assumption that the network is in the asynchronous irregular regime, we assume that the cortical cells fire as Poisson processes without any correlations between them, as do the cells in the external population.

Suppose that a neuron receives K_E excitatory inputs. Suppose these inputs produce EPSPs that have an exponential time course, with mean amplitude J_E and time constant τ_E, and have mean rate r_E. Then the mean depolarization produced by these excitatory inputs is J_E K_E r_E τ_E. Defining n_E = r_E τ_E to be the mean number of spikes of an input in time τ_E, we find that the mean excitatory input to the neuron is

    µ_E = J_E K_E n_E    (1)

Let σ_E denote the standard deviation (SD) of fluctuations in this input. Assuming the spike counts of pre-synaptic neurons are uncorrelated, their spike count variances just add. Because they are firing as Poisson processes, the variance in a neuron's spike count in time τ_E is equal to its mean spike count n_E. Thus, the variance of input from one pre-synaptic neuron is J_E^2 n_E, and so the variance in the total input is K_E J_E^2 n_E and

    σ_E = J_E √(K_E n_E)    (2)

Therefore the ratio of the mean to the SD of the neuron's excitatory input is

    µ_E / σ_E = √(K_E n_E),    (3)

independent of J_E. Similar reasoning about the neuron's inhibitory or external input leads to all the same expressions, except with E subscripts replaced with I or X subscripts to represent quantities describing the inhibitory or external input the cell receives.

Again assuming that the different populations are uncorrelated so that their contributed variances add, the total or net input the neuron receives has mean, µ, and standard deviation, σ, given by:

    µ = µ_E + µ_X − µ_I    (4)

    σ = √(σ_E^2 + σ_X^2 + σ_I^2)    (5)

We imagine that K_E and K_I are the same order of magnitude, O(K) for some number K, and similarly n_E and n_I are O(n) for some number n.
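The input statistics of Eqs. 1-3 can be checked numerically. In the following sketch the parameter values are illustrative choices, not taken from the text; the Monte-Carlo step uses the fact that a sum of independent Poisson counts is itself Poisson.

```python
import numpy as np

# Illustrative parameters: K_E inputs, EPSP amplitude J_E (mV),
# input rate r_E (Hz), EPSP time constant tau_E (s).
K_E, J_E, r_E, tau_E = 1000, 0.1, 5.0, 0.02

n_E = r_E * tau_E                    # mean spikes per input in a window tau_E
mu_E = J_E * K_E * n_E               # Eq. 1: mean excitatory input (mV)
sigma_E = J_E * np.sqrt(K_E * n_E)   # Eq. 2: SD of the excitatory input (mV)
ratio = mu_E / sigma_E               # Eq. 3: equals sqrt(K_E * n_E)

# Monte-Carlo check: the summed spike count of K_E independent Poisson
# inputs is Poisson with mean K_E * n_E; scaling by J_E gives the input.
rng = np.random.default_rng(0)
samples = J_E * rng.poisson(K_E * n_E, size=200_000)
```

The sample mean and SD of `samples` match µ_E and σ_E, and the mean-to-SD ratio is √(K_E n_E) regardless of the value chosen for J_E.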
We also assume µ_X and σ_X are the same order of magnitude as µ_E or µ_I and σ_E or σ_I, respectively, or smaller. Then, if √(Kn) is O(1), both µ and σ can simultaneously be made O(1) with suitable choice of the J's (generically, i.e. barring special cases in which the elements of µ precisely cancel). This means that the irregularly-firing regime is self-consistent: having assumed that neurons are in the irregularly-firing regime, we arrive at expressions for the mean and SD of their input that indeed can keep the network in this regime.

If K is very large, however – large enough that √(Kn) ≫ 1 – then the ratio of the mean to the SD of each type of input, and hence of the net input, is much greater than 1. Van Vreeswijk and Sompolinsky (1996, 1998) considered the case of such very large K and showed how the network could remain in the asynchronous irregular regime. They proposed choosing the J's proportional to 1/√K, so that the standard deviations σ_E and σ_I are O(1) (Eq. 2), but then by Eq. 3 the means µ_E and µ_I are large, O(√K). Then, for the neurons to be in the asynchronous irregular regime, inhibitory input, −µ_I, must cancel or "balance" a sufficient portion of the excitatory input, µ_E + µ_X, so that the mean input, µ, is O(1).

If a neuron's mean excitatory and inhibitory inputs very precisely cancel each other, so that the mean net input µ is much smaller than either of these factors alone, we say there is tight balance. If the net input is more comparable in size to the factors that are cancelling, we call this loose balance. The two cases may be distinguished by the size of a dimensionless balance index β:

    β = |µ| / (µ_E + µ_X)    (6)

Note that, using the above analysis, in the limit of large K considered by Van Vreeswijk and Sompolinsky (1996, 1998), β ∼ 1/√K. Tight balance means that the balance index is very small, β ≪ 1; in loose balance, the index is not so small, say 0.1 < β < 1, very roughly. As we will see, whether the network is in tight or loose balance has important implications for the network's behavior and computational ability. (Note that, in general, the degree of balance can be different in different neurons in the same network. In the above discussion we assumed that different neurons of the same E/I type are statistically equivalent, e.g., in terms of the number and activity of presynaptic inputs; this is the case in the randomly connected network of Van Vreeswijk and Sompolinsky (1996, 1998). In that case β would not vary systematically between neurons of the same type.)

The Tightly Balanced Solution
Van Vreeswijk and Sompolinsky (1996, 1998) showed that, for very large K, and hence very large Kn, and all J's ∝ 1/√K, the network dynamics would produce a tightly balanced solution provided only that some mild (inequality) conditions on the weights are satisfied, without any requirements for fine tuning. This is known more generally as the "balanced network" solution. To understand this solution, we define the mean number of inputs, PSP amplitude, and time constant from population B (B ∈ {E, I, X}) to a neuron in population A (A ∈ {E, I}) to be K_AB, J_AB and τ_AB respectively. We define the mean effective weight from population B to a neuron in population A as W_AB = J_AB K_AB τ_AB. Letting r_B be the average firing rate of population B, then W_AB r_B = J_AB K_AB n_B, the mean input from population B to population A. We assume that W_AB r_B = O(√K) for all A, B. The requirements for balance are then that the mean net input to both excitatory and inhibitory cells, u_E and u_I respectively, are O(1), where (from Eqs. 1, 4),

    u_E = W_EE r_E − W_EI r_I + W_EX r_X    (7)

    u_I = W_IE r_E − W_II r_I + W_IX r_X    (8)

If we define the external inputs to the network I_E = W_EX r_X, I_I = W_IX r_X, then these equations can be written as the vector equation

    u = W r + I    (9)

where u ≡ (u_E, u_I)^T, r ≡ (r_E, r_I)^T, I ≡ (I_E, I_I)^T, and W is the weight matrix

    W = ( W_EE  −W_EI )
        ( W_IE  −W_II )

The balanced network solution arises by noting that the left side of Eq. 9 is very small (O(1)) relative to the individual terms on the right (O(√K)). So we first find an approximate solution r_0 to Eq. 9 in which the small left side is replaced by 0 to yield the equation for perfect balance, i.e. all inputs perfectly cancelling: W r_0 + I = 0, or r_0 = −W^(-1) I, where W^(-1) is the matrix inverse of W. Note that r_0 is O(1), because the elements of W and I are all the same order of magnitude, so their ratio is generically O(1). We can then write r as an expansion in powers of 1/√K, r = r_0 + r_1/√K + …, where r_0, r_1, … are all O(1), to obtain a consistent solution: u = W r_1/√K + …, where the first term on the right is O(1), as desired, and the remaining terms are very small (O(1/√K) or smaller). The authors showed that, with some mild general conditions on the weights W and inputs I, this tightly balanced solution would be the unique stable solution of the network dynamics. That is, for a given fixed input I, the network's excitatory/inhibitory dynamics will lead it to flow to this balanced solution for the mean rates: r = −W^(-1) I + O(1/√K).

We immediately see two points about the tightly balanced (√(Kn) ≫ 1) solution:

1. Mean population responses are linear in the inputs. −W^(-1) I is a linear function of the input I. Tight balance implies that nonlinear corrections to r ≈ r_0 = −W^(-1) I are very small relative to this linear term, except for very small external inputs, so the mean response r is for practical purposes a linear function of the input.

2. External input must be large relative to the net input and to the distance from rest to threshold. The external input I must have the same order of magnitude as the recurrent input W r, so that balance can occur, W r_0 = −I, with rates that are O(1). If I were smaller, the firing rates r ≈ r_0 would correspondingly be unrealistically small.

In the above treatment we focused on population-averaged responses, r_E and r_I. We emphasize that the balancing only applies to the mean input across neurons of each type, and leaves unaffected input components with mean of zero across a given type; while the mean input is very large in the tightly balanced regime, zero-mean input components can be O(1) and yet elicit O(1) responses in individual neurons (see e.g. Hansel and van Vreeswijk, 2012; Pehlevan and Sompolinsky, 2014; Sadeh and Rotter, 2015). Furthermore, even in the tightly balanced regime, individual neurons can exhibit nonlinearities in their responses, but these are washed out at the level of population-averaged responses. We also note that synaptic nonlinearities, e.g. synaptic depression, which were neglected here, can allow a tightly balanced state with nonlinear population-averaged responses (Mongillo et al., 2012).

A Loosely Balanced Regime
As we have seen, if Kn is O(1), the mean and the fluctuations of the input that neurons receive can both be O(1) without requiring any balancing. Given that there is both excitatory and inhibitory input, there will always be some input cancellation or "balancing" – some portion of the input excitation will be cancelled by input inhibition, leaving some smaller net input. When Kn is O(1), all of these quantities – the excitatory input, the inhibitory input, and the net input after cancellation – will generically be O(1), and thus balancing is "loose" – the factors that cancel and the net input after cancellation are of comparable size, and the balance index β is not small.

However, the fact that there is some inhibition that cancels some excitation does not by itself imply interesting consequences for network behavior. We will use the term "loosely balanced solution" to refer more specifically to a solution having two features: (1) the dynamics yields a systematic cancellation of excitation by inhibition like that in the tightly balanced solution. In particular, in the loosely balanced networks on which we will focus, a signature of this systematic cancellation is that the net input a neuron receives grows sublinearly as a function of its external input (we will make this more precise below); (2) this cancellation is "loose", as just described. As we will discuss, such a loosely balanced solution produces various specific nonlinear network behaviors that are observed in cortex.

Ahmadian et al. (2013) showed that such a loosely balanced solution would naturally arise from E/I dynamics for recurrent weights and external inputs that are not large, provided that the neuronal input/output function, determining firing rate vs. input level, is supralinear (having ever-increasing slope) over the neuron's dynamic range. They modeled this supralinear input/output function as a power law with power greater than 1 (Fig. 1). Such a power-law input-output function is theoretically expected for a spiking neuron when firing is induced by input fluctuations rather than the input mean (Hansel and van Vreeswijk, 2002; Miller and Troyer, 2002), and is observed in intracellular recordings over the full dynamic range of neurons in primary visual cortex (V1) (Priebe and Ferster, 2008). Of course, a neuron's input/output function must ultimately saturate but, at least in V1, the neurons do not reach the saturating portion of their input/output function under normal operation. For the loosely balanced solution to arise, some general conditions on the weight matrix, similar to those for the tightly balanced network solution but less restrictive, must also be satisfied.

Figure 1. The supralinear (power-law) neuronal transfer function. The transfer function of neurons in cat V1 is non-saturating in the natural dynamic range of their inputs and outputs, and is well fit by a supralinear rectified power law with exponents empirically found to be in the range 2-5. Such a curve exhibits increasing input-output gain (i.e. slope, indicated by red lines) with growing inputs, or equivalently with increasing output firing rates. Gray points indicate a studied neuron's average membrane potential and firing rate in 30 ms bins; blue points are averages for different voltage bins; and black line is fit of power law, r = [V − θ]_+^p, where r is firing rate, V is voltage, [x]_+ = x for x > 0 and 0 otherwise; θ is a fitted threshold; and p, the fitted exponent, here is 2.79. Figure modified from (Priebe et al., 2004).

In the presence of a supralinear input/output function, the loosely balanced solution arises as follows. Whereas previously we considered the effects of increasing K when recurrent and external inputs were all O(√K), now we consider the more biological case of increasing external input (i.e., stimulus) strength while recurrent weights are at some fixed level. The supralinear input/output function means that a neuron's gain – its change in output for a given change in input – is continuously increasing with its activation level. This in turn means that effective synaptic strengths are increasing with increasing network activation. The effective synaptic strength measures the change in the postsynaptic cell's firing rate per change in presynaptic firing rate. This is the product of the actual synaptic strength – the change in postsynaptic input induced by a change in presynaptic firing – and the postsynaptic neuron's gain; hence the effective synaptic strengths increase with increasing gains.

The increasing effective synaptic strengths lead to two regimes of network operation. For very weak external drive and thus weak network activation, all effective synaptic strengths are very weak, for both externally-driven and network-driven synapses. External drive to a neuron is delivered monosynaptically, via the weak externally-driven synapses. In contrast, assuming that the network is inactive in the absence of external input, network drive involves a chain of two or more weak synapses: the weak externally driven synapses activate cortical cells, which then drive the weak network-driven synapses. From the same principle that x^2 ≪ x when x ≪ 1, the network drive is therefore much weaker than the external drive. Thus, the input to neurons is dominated by the external input, with only relatively small contributions from recurrent network input. In sum, in this weakly-activated regime, the neurons are weakly coupled, largely responding directly to their external input with little modification by the local network.

With increasing external (stimulus) drive and thus increasing network activation, the gains and thus the effective synaptic strengths grow. This causes the relative contribution of network drive to grow until the network drive is the dominant input. At some point, the effective E → E connections become strong enough that the network would be prone to instability – a small upward fluctuation of excitatory activities would recruit sufficient recurrent excitation to drive excitatory rates still higher, which if unchecked would lead to runaway, epileptic activity (and to ever-growing effective synaptic strengths and thus ever-more-powerful instability). However, if feedback inhibition is strong and fast enough, the inhibition will stabilize the network; that is, it becomes an ISN. This stabilization is achieved by a loose balancing of excitation and inhibition, as we will explain in more detail below. Thus, in this more strongly-activated regime, the neurons are strongly coupled and are loosely balanced. Note that the input driving spontaneous activity may already be strong enough to obscure the weakly coupled regime, as suggested by the finding that V1 under spontaneous activity is already an ISN (Sanzeni et al., 2019).

As in the tightly balanced network, the network's excitatory/inhibitory dynamics lead it to flow to this loosely balanced solution, without any need for fine tuning of parameters. Because this mechanism involves stabilization, by inhibition, of the instability induced by the supralinear input/output function of individual neurons along with E → E connections, it has been termed the Stabilized Supralinear Network (SSN) (Ahmadian et al., 2013; Rubin et al., 2015).

Figure 2. Loose vs. tight balance. We illustrate the external (blue), recurrent or within-network (green) and net (orange, equal to external plus recurrent) inputs to a typical excitatory cell in a Stabilized Supralinear Network. At all biological ranges of external input (stimulus) strength, the balance is loose, as exhibited by the left set of three arrows (representing the external, recurrent and net inputs): the net input is comparable in size to the other two. Nevertheless the balance systematically tightens with increasing external input (right set of arrows), as the net input grows only sublinearly with growing external input strength. At very high (possibly non-biological) levels of external input, the balance can become very tight, with the net input much smaller in magnitude than the external and recurrent inputs.

To describe the mathematics of this mechanism, we again consider an excitatory and an inhibitory population along with external input. We define the vectors r, u and I and the matrix W as before. Then the power-law input/output function means that the network's steady-state firing rate r_SS for a steady input I satisfies

    r_SS = k (u)_+^p = k (W r_SS + I)_+^p    (10)

where (v)_+ is the vector v with negative elements set to zero, (v)_+^p means raising each element of (v)_+ to the power p, p is a number greater than 1 (typically, 2 to 5; Priebe and Ferster, 2008), and k is a constant with units Hz/(mV)^p (and the units of W, r, and I are mV/Hz, Hz, and mV respectively).
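A minimal sketch of solving the steady-state condition of Eq. 10 for one E and one I population is shown below, by integrating rate dynamics of the form τ dr/dt = −r + k(Wr + I)_+^p until they settle. All parameter values here are illustrative choices, not taken from the text; inhibition is given a faster time constant than excitation, which aids stability.

```python
import numpy as np

k, p = 0.04, 2.0                     # k in Hz/mV^p; supralinear exponent p
W = np.array([[1.0, -1.0],           # [[ W_EE, -W_EI ],
              [1.0, -0.5]])          #  [ W_IE, -W_II ]]  in mV/Hz
I = np.array([25.0, 10.0])           # external inputs to E and I cells (mV)
tau = np.array([0.020, 0.010])       # E and I time constants (s)

r, dt = np.zeros(2), 5e-4            # rates (Hz), starting from silence
for _ in range(8000):                # integrate 4 s of rate dynamics
    u = W @ r + I                    # net input to each population (mV)
    r += (dt / tau) * (k * np.maximum(u, 0.0) ** p - r)
```

At the end of the integration, r satisfies Eq. 10 to numerical precision, and the net input to the E cell is visibly smaller than its summed external plus recurrent excitation, because recurrent inhibition has cancelled part of it.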
It is convenient to absorb k into effective weights and inputs by writing W̃ = k^(1/p) W, Ĩ = k^(1/p) I, so the equation becomes

    r_SS = (W̃ r_SS + Ĩ)_+^p    (11)

If we let ψ = ‖W̃‖ represent a norm of W̃ (think of it as a typical total E or I recurrent weight received by a neuron), and similarly let c = ‖Ĩ‖ represent a typical input strength, then it turns out the network regime is controlled by the dimensionless parameter

    α = ψ c^(p−1).    (12)

As we increase the strength of external drive and thus of network activation, c and thus α is increasing. When α ≪ 1, the network is in the weakly coupled regime; for α ≫ 1, the network is in the strongly coupled regime; and the transition between regimes generically occurs when α is O(1) (Ahmadian et al., 2013).

The loosely-balanced solution then turns out to be of the form

    r = −W^(-1) I + (c/ψ) (r_1 α^(−1/p) + …)    (13)

where r_1 is dimensionless and O(1), and the higher-order terms (indicated by …) involve higher powers of α^(−1/p) (see Appendix 2). Equation 13 is precisely the same equation as for the tightly balanced solution, in the case that the input/output function is a power law. In the tightly balanced network, ψ and c are both O(√K), so α is O(K^(p/2)), i.e. very large, and the α^(−1/p) in the second term becomes O(1/√K), as expected. The loosely-balanced solution arises, however, when α is O(1). In particular, in the biological case of fixed weights but increasing stimulus drive, and given the supralinear neuronal input/output functions, the same E/I dynamics that lead to the tightly balanced solution when inputs are very large will already yield a loosely balanced solution when inputs are O(1). The conditions for this loosely balanced solution to arise are further discussed in Appendix 2.

The fact that the solution is loosely balanced can be seen by computing the balance index, β (Eq. 6). The network excitatory drive is O(ψ r), the external drive is O(c), and because the first term on the right side of Eq. 13 cancels the external input, the net input after balancing is W (which is O(ψ)) times the 2nd term on the right side of Eq. 13, or O(c α^(−1/p)) = O((c/ψ)^(1/p)). Since 1/p < 1, the net input grows sublinearly with c, as illustrated in Fig. 2. Moreover, it follows that the balance index (Eq. 6) is O((c α^(−1/p)) / (c + ψ r)), which is O(α^(−1/p)) (assuming the order of magnitude of the recurrent input, ψ r, is the same as or smaller than that of the external input strength, c).
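The tightening of balance with growing drive can be sketched numerically for a two-population SSN: as the external input strength c increases, the net input to the E cell grows only sublinearly, so the balance index β = |µ|/(µ_E + µ_X) falls (cf. Fig. 2). All parameter values below are illustrative choices, not taken from the text; inhibition is made faster than excitation to aid stability, and each solution warm-starts the next.

```python
import numpy as np

k, p = 0.04, 2.0
W = np.array([[1.0, -1.0],
              [1.0, -0.5]])              # [[W_EE, -W_EI], [W_IE, -W_II]]
I_dir = np.array([2.5, 1.0])             # direction of the external input
tau = np.array([0.020, 0.010])           # E and I time constants (s)
dt = 5e-4

def steady_state(c, r0, T=4.0):
    """Integrate tau dr/dt = -r + k(Wr + c*I_dir)_+^p to its fixed point."""
    r = r0.copy()
    for _ in range(int(T / dt)):
        u = W @ r + c * I_dir
        r += (dt / tau) * (k * np.maximum(u, 0.0) ** p - r)
    return r, W @ r + c * I_dir

results = {}
r = np.zeros(2)
for c in (1.0, 10.0, 100.0):             # increasing external drive
    r, u = steady_state(c, r)
    beta = u[0] / (c * I_dir[0] + W[0, 0] * r[0])   # net / total excitation
    results[c] = (u[0], beta)
    print(f"c = {c:5.1f}: net input to E cell = {u[0]:6.2f} mV, "
          f"balance index = {beta:.2f}")
```

In this sketch the balance index decreases steadily with c, while the net input grows much more slowly than the hundredfold increase in external drive, the signature of systematic, loose balancing.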
Again, for the tightly bal-anced solution this is very small, O (cid:16) √ K (cid:17) , but the looselybalanced solution arises when this is O (1). In more complex models (involving many neurons withstructured connectivity and stimulus selectivity), loosely-balanced solutions still arise when α is O (1). That is, evenin such cases the full nonlinear steady state equations,Eq. (10), can yield biologically plausible solutions, andwhen that happens the net inputs to activated neuronsgrow sublinearly with growing external input strength,and balance indices are O (1). The case of structured net-works with stimulus selectivity is further discussed in Ap-pendix 2.We can now see that the loosely balanced regime differsfrom the tightly balanced in the two points summarizedpreviously:1. In the loosely balanced regime, mean population re-sponses are nonlinear in the inputs.
This is because, when balance is loose, the second term in Eq. 13, which is not linear in the input, cannot be neglected relative to the first, linear term. In particular, the nonlinear population behaviors observed in the loosely balanced regime with a supralinear input/output function closely match the specific nonlinear behaviors observed in sensory cortex (Rubin et al., 2015), as we will discuss below.

2.
In the loosely balanced regime, external input can be comparable to the net input and to the distance from rest to threshold.
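These two regimes can be illustrated with a small numerical sketch (our own toy parameters, chosen only for illustration and not fitted to any dataset): a two-population rate model with a supralinear (here quadratic) input/output function, in which we track the net input to the E population and a simple balance index as the external drive c grows.

```python
# Toy two-population SSN (illustrative parameters, not biological fits):
#   tau_E drE/dt = -rE + f(wEE*rE - wEI*rI + c)
#   tau_I drI/dt = -rI + f(wIE*rE - wII*rI + c)
# with f(x) = k * max(x, 0)^2 (supralinear power law, p = 2).

def ssn_fixed_point(c, k=0.04, wEE=1.0, wEI=1.0, wIE=1.5, wII=1.25,
                    tauE=0.020, tauI=0.010, dt=0.0002, T=1.0):
    """Euler-integrate the rate dynamics to steady state; return rates and
    the net (post-cancellation) input to the E population."""
    f = lambda x: k * max(x, 0.0) ** 2
    rE = rI = 0.0
    for _ in range(int(T / dt)):
        rE += dt / tauE * (-rE + f(wEE * rE - wEI * rI + c))
        rI += dt / tauI * (-rI + f(wIE * rE - wII * rI + c))
    zE = wEE * rE - wEI * rI + c          # net input to E after cancellation
    return rE, rI, zE

for c in (10.0, 40.0):
    rE, rI, zE = ssn_fixed_point(c)
    beta = zE / (c + 1.0 * rE)            # net input / gross excitatory input
    print(f"c={c:4.0f}: rE={rE:5.1f}, net input zE={zE:5.1f}, beta={beta:.2f}")
# Quadrupling c roughly triples zE (sublinear growth, ~sqrt(c) for p = 2),
# while beta stays O(1): loose, not tight, balance.
```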
What Regime Do Experimental Measurements Suggest?
As we have seen above, the same model can give a loosely balanced solution (Eq. 13) when α is O(1) (e.g., when c and ψ are both O(1)), but give a tightly balanced solution when α is large (e.g., when c and ψ are both O(√K)). Which of these regimes is supported by experimental measurements?

Measurements of Biological Parameters

How large is √Kn? We saw in Eq. 3 that the ratio of the mean to the standard deviation, µ_Y/σ_Y, of the input of type Y (Y ∈ {E, I, X}) received by a neuron is equal to √(K_Y n_Y), where K_Y is the number of inputs of type Y a given neuron receives and n_Y is the average number of spikes one of these inputs fires in a PSP decay time τ_Y

              K_E = 200    K_E = 1000    K_E = 5000
r_E = 0.1 Hz      0.6          1.4           3.2
r_E = 1 Hz        2.0          4.5          10.0
r_E = 10 Hz       6.3         14.1          31.6

Table 1.
Values of √(K_E n_E) for varying K_E and r_E, for τ = 20 ms. (n_Y = r_Y τ_Y, where r_Y is the average firing rate of one of these inputs). Here we estimate √(K_E n_E).

Note that √(K_Y n_Y) is actually an upper bound for the ratio µ_Y/σ_Y for a given input type, because we have neglected a number of factors that would increase fluctuations for a given mean. These include (i) correlations among inputs which, even if weak, can significantly boost the input SD, σ_Y, without altering the input mean; (ii) variance in the weights, J_Y, which would increase the estimate of σ_Y by a factor √(⟨J_Y²⟩/⟨J_Y⟩²); and (iii) network effects that can amplify input variances by creating firing-rate fluctuations, although this amplification may be small for strong stimuli (Hennequin et al., 2018). Furthermore, the ratio µ/σ of the total input is expected to be smaller than the ratio µ_Y/σ_Y for any single type. This is because σ involves the sum of three variances (Eq. 5), while µ involves a difference of one mean from the sum of two others (Eq. 4), representing the effect of loose balancing.

Given these considerations, we are primarily concerned with estimating the overall magnitude of √(K_E n_E) rather than detailed values. If this magnitude is very much larger than the observed µ/σ ratio in vivo, then tight balancing may be needed to explain the in vivo ratio. To estimate the in vivo µ/σ ratio, we note that, in anesthetized cat V1, σ varies from 1 to 7 mV and µ ranges from 0 to 15 mV (20 mV in one case) for a strong stimulus (Finn et al., 2007; Sadagopan and Ferster, 2012). While these authors did not give the paired µ and σ values for individual cells, it seems reasonable to guess from these values that the value of µ/σ for the total input to these cells is generally in the range 0 −
15. Finn et al. (2007) also reported that, at the peak of a simple cell's voltage modulation to a high-contrast drifting grating, the ratio σ/µ had an average value of about 0.15 (here, we are taking µ to be the mean voltage at the peak). This suggests that the average value of µ/σ at peak activation is around 1/0.15 ≈ 7.

How large, then, is √(K_E n_E)? In a study of input to excitatory cells in layer 4 of rat S1 (Schoonover et al., 2014), the EPSP decay time τ_E was around 20 ms. From 1800 to 4000 non-thalamic-recipient spines were found on studied cells which, with an estimated average of 3.4 synapses per connection between layer 4 cells (Feldmeyer et al., 1999), corresponds to a K_E – the number of other cortical cells providing input to one cell – of 530 to 1200. If r_E is expressed in Hz, then √(K_E n_E) ranges from 3.3√r_E to 4.9√r_E. Thus, even if average input firing rates were 10 Hz, which would be very high for rodent S1 (Barth and Poulet, 2012) (note that the average is over all inputs, not just those that are well driven in a given situation, and so is likely far below the rate of a well-driven neuron), these ratios would be 10.4 − 15.5, comparable to in vivo levels of µ/σ.

More generally, estimates across species and cortical areas of the number of spines on excitatory cells, and thus of the number of excitatory synapses they receive, range from 700 to 16,000, with numbers increasing from primary sensory to higher sensory to frontal cortices (Amatrudo et al., 2012; Elston, 2003; Elston and Fujita, 2014; Elston and Manger, 2014). Estimates of the mean number of synapses per connection between excitatory cells range from 3.4 to 5.5 across different areas and layers studied (Fares and Stepanyants, 2009; Feldmeyer et al., 1999, 2002; Markram et al., 1997). These numbers yield a K_E of 130 to 4700. In Table 1, we show the value of √(K_E n_E) for K_E ranging from 200 to 5000 (rounded upward to bias results most in favor of a need for tight balancing) and for rates r_E of 0.1 to 10 Hz.
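The entries of Table 1 follow directly from √(K_E n_E) with n_E = r_E τ; a one-line check (assuming τ = 20 ms, as in the table):

```python
from math import sqrt

tau = 0.020  # PSP decay time in seconds (20 ms)
for rE in (0.1, 1.0, 10.0):                      # mean input rates, Hz
    row = [sqrt(KE * rE * tau) for KE in (200, 1000, 5000)]
    print(f"r_E = {rE:4.1f} Hz:", "  ".join(f"{v:5.1f}" for v in row))
# Reproduces Table 1: e.g. 0.6, 1.4, 3.2 for r_E = 0.1 Hz.
```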
The results are all comparable to the µ/σ's observed in vivo, except for the most extreme case considered (K_E = 5000, r_E = 10 Hz), and even that case is only off by a factor of 2. Thus, the numbers strongly argue that tight balancing is not needed for the ratio of voltage mean to standard deviation to have values as observed in vivo.

External input is comparable in strength to net input.
Several studies have silenced cortical firing while recording intracellularly to determine the strength of external input, with cortex silenced, relative to the net input with cortex intact. These find the external input to be comparable to the net input, consistent with the loose-balance scenario, rather than much larger as the tight-balance scenario requires.

Ferster et al. (1996) cooled V1 and surrounding V2 in anesthetized cats to block spiking of almost all cortical cells, both excitatory and inhibitory, leaving axonal transmission (e.g. of thalamic inputs) intact, though with weakened release. By measuring the size of EPSPs evoked by electrical stimulation of the thalamic lateral geniculate nucleus (LGN) in thalamic-recipient cells in layer 4 of V1, they could estimate the degree of voltage attenuation of EPSPs induced by cooling. Correcting for this attenuation, they estimated that the first-harmonic voltage response to an optimal drifting luminance grating stimulus of a layer 4 V1 cell was, on average, about 1/3 as great with cortex cooled as with cortex intact, suggesting that the external input to cortex is smaller than the net input with cortex intact. Chung and Ferster (1998) and Finn et al. (2007) assayed the same question by using cortical shock to silence the local cortex for about 150 ms, during which time the voltage response to an optimal flashed grating was measured. They found that on average the transient voltage response in layer 4 cells with cortex silenced was about 1/2 the size of that with cortex intact (Chung and Ferster, 1998), and more generally ranged from 0% to 100% of the intact cortical response (Finn et al., 2007). This again suggests that the external input to cortex is smaller than the net input, i.e. the external input is O(1), consistent with loose but not tight balance.

Total excitatory or inhibitory conductance is comparable to threshold.
The above results suggest that depolarization due to thalamus alone is less than that induced by the combination of thalamic and cortical excitation plus cortical inhibition, i.e. after cortical "balancing" has occurred. One can also ask what proportion of the total excitation is provided by thalamus. This has been addressed in voltage-clamp recordings in anesthetized mice by silencing cortex through light activation of parvalbumin-expressing inhibitory cells expressing channelrhodopsin. In layer 4 cells of V1 (Li et al., 2013b; Lien and Scanziani, 2013) and primary auditory cortex (A1) (Li et al., 2013a), mean stimulus-evoked excitatory conductance with cortex silenced was estimated to be 30-40% of that with cortex intact.

This tells us that the external and cortical contributions to excitation are comparable. How large are they compared to the excitation needed to depolarize the cell from rest to threshold, which is typically a distance of about 20 mV (Constantinople and Bruno, 2013)? With cortical spiking intact, these authors (Li et al., 2013a; Lien and Scanziani, 2013) found mean stimulus-evoked peak excitatory currents ranging from 60 to 150 pA for various stimuli. Even assuming a membrane resistance of 200 MΩ, which seems on the high end for in vivo recordings (Li et al. (2013a) reported input resistances of 150 −
200 MΩ), these would induce depolarizations of 12 to 30 mV; that is, the total excitatory current is comparable to threshold, i.e. it is O(1).

A similar result can be found from decomposition of excitatory and inhibitory conductances from current-clamp recordings at varying hyperpolarizing current levels. In cat V1 cells for an optimal visual stimulus, one finds that peak excitatory and inhibitory stimulus-induced conductances, g_E and g_I respectively, are typically < g_L and almost always < 2g_L, where the leak conductance g_L is around 10 nS (Anderson et al., 2000; Ozeki et al., 2009). A study of response to whisker stimulation in rat barrel cortex found comparably modest excitatory and inhibitory conductances (Wilent and Contreras, 2005). The depolarization driven by the excitatory conductance is approximately g_E/(g_E + g_L) · V_E, where V_E is the driving potential of the excitatory conductance, about 50 mV at a spike threshold of around −50 mV (e.g. Wilent and Contreras, 2005). Using the cat V1 numbers, this means that the depolarization driven by excitatory conductance is typically < 25 mV and almost always < 33 mV. Hyperpolarization driven by the inhibitory conductance alone would be 0.4 to 0.6 times these values, given an inhibitory driving force of −20 to −30 mV at spike threshold. These values are all quite comparable to the distance from rest to threshold, ∼20 mV; i.e., they are O(1).

How large is the expected mean excitatory input?
We have seen that the expected mean depolarization induced by recurrent excitation is J_E K_E n_E, where J_E is the mean EPSP amplitude. Based on the measurements of Lien and Scanziani (2013) and Li et al. (2013b), discussed above, total excitation may be about 1.5 times greater than recurrent excitation. J_E can be difficult to estimate, because some of the K_E anatomical synapses may be very weak and not sampled in physiology, and because synaptic failures, depression, or facilitation can alter average EPSP size relative to measured EPSP sizes. Furthermore, measurements are variable; for example, J_E for layer 4 to layer 4 connections in rodent barrel cortex has been estimated to be 1.6 mV in vitro (Feldmeyer et al., 1999) vs. 0.66 mV in vivo (Schoonover et al., 2014). If we assume typical values for J_E of 0.5 − 1 mV, J_E K_E n_E would exceed 75 mV for √(K_E n_E) ≳ 10 and exceed 150 mV for √(K_E n_E) ≳ 14 (compare values of √(K_E n_E) in Table 1). We can very roughly guess that neural responses may become better described by tight rather than loose balance somewhere in this range of mean excitatory input (and corresponding √(K_E n_E)). While the measurements of excitatory currents and conductances described above argue that such a range is not reached in primary sensory cortex, it could conceivably be reached (Table 1) in areas with higher K_E, e.g. frontal cortex.

Nonlinear Behaviors
Sensory cortical neuronal responses display a variety of nonlinear behaviors that, as we'll describe, are expected from the SSN loosely balanced regime but not from the tightly balanced regime. Many of these nonlinearities are often summarized as "normalization" (Carandini and Heeger, 2012), meaning that responses can be fit by a phenomenological model of an unnormalized response that is divided by (normalized by) some function of all of the unnormalized responses of all the neurons within some region. To describe these nonlinear behaviors, we must first define the classical receptive field (CRF): the localized region of sensory space in which appropriate stimuli can drive a neuron's response.

One nonlinear property is sublinear response summation: across many cortical areas, the response to two stimuli simultaneously presented in the CRF is less than the sum of the responses to the individual stimuli, and is often closer to the average than the sum of the individual responses (reviewed in Carandini and Heeger, 2012; Reynolds and Chelazzi, 2004). An additional nonlinearity is that the form of the summation changes with the strength of the stimulus: summation becomes linear for weaker stimuli (Heuer and Britten, 2002; Ohshiro et al., 2013). It is often difficult to determine if such nonlinear behaviors are computed in the recorded area or involve changes in the inputs to that area. However, some recent experiments studied summation of responses to an optogenetic and a visual stimulus, a case in which the inputs driven by each stimulus should not alter those driven by the other. Sublinear summation of responses to these stimuli was found (Nassi et al., 2015; Wang et al., 2019, but see Histed, 2018), which became linear for weak stimuli (Wang et al., 2019).

Another set of nonlinearities involves interaction of a CRF stimulus and a "surround" stimulus, which is located outside the CRF.
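The sublinear-to-linear summation described above is captured by the standard phenomenological normalization model (Carandini and Heeger, 2012); a minimal sketch, in which the exponent, semi-saturation constant, and stimulus strengths are arbitrary illustrative choices:

```python
def response(c1, c2, n=2.0, sigma=10.0, r_max=1.0):
    """Normalization model: summed drives divided by (sigma^n + total drive)."""
    drive = c1 ** n + c2 ** n
    return r_max * drive / (sigma ** n + drive)

# Strong stimuli: the response to the pair is far below the sum of the
# individual responses -- close to their average, since each single-stimulus
# response is near saturation.
print(response(50, 50), 2 * response(50, 0))   # ~0.98 vs ~1.92: sublinear

# Weak stimuli: normalization is negligible and summation is near-linear.
print(response(1, 1), 2 * response(1, 0))      # ~0.0196 vs ~0.0198: linear
```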
Across many cortical areas, surround stimuli can suppress the response to a CRF stimulus ("surround suppression"; reviewed in Angelucci et al., 2017; Rubin et al., 2015), but this effect varies with stimulus strength. When the center stimulus is weak, a surround stimulus can facilitate rather than suppress response (Ichida et al., 2007; Polat et al., 1998; Sato et al., 2014; Schwabe et al., 2010; Sengpiel et al., 1997). Similarly, the summation field size – the size of a stimulus that elicits the strongest response, before further expansion yields surround suppression – is largest for weak stimuli and shrinks with increasing stimulus strength (Anderson et al., 2001; Cavanaugh et al., 2002; Nienborg et al., 2013; Sceniak et al., 1999; Shushruth et al., 2009; Song and Li, 2008; Tsui and Pack, 2011). The summation field size in feature space – the optimal range of simultaneously presented motion directions in monkey area MT – similarly shrinks with increasing stimulus strength (Liu et al., 2018).

Additional nonlinearities include a decrease, with increasing stimulus strength, in the ratio of excitation to inhibition received by neurons (Adesnik, 2017) and in the wavelength of a characteristic spatial oscillation of activity (Rubin et al., 2015).

All of these nonlinear cortical response properties, and more, follow naturally (Ahmadian et al., 2013; Rubin et al., 2015) from the two regimes of the loosely balanced scenario with a supralinear input/output function, along with simple assumptions on connectivity (e.g. that connections decrease in strength and/or probability with spatial distance, e.g.
Markov et al., 2011, or with difference in preferred features, e.g.
Cossell et al., 2015; Ko et al., 2011). In contrast, as described previously, the tightly balanced scenario causes population-averaged responses to be linear responses to the input (individual neurons, but not the population average, may have nonlinear behaviors), and thus appears inconsistent with these nonlinear cortical behaviors, which in most cases are consistent enough across neurons that they should characterize the mean population response. Synaptic nonlinearities can give nonlinear population-averaged behavior in the tightly balanced regime (Mongillo et al., 2012), but it has not been claimed or demonstrated that this could produce specific nonlinearities like those seen in cortical responses.

Correlations and Variability
Across many cortical systems, the correlated component of neuronal variability is decreased when a stimulus is given, with variability decrease seen both in neurons that respond to the stimulus and those that don't respond (Churchland et al., 2010). This is also naturally explained by the loosely balanced SSN network (Hennequin et al., 2018). In the strongly coupled regime of the loosely balanced SSN network, increasing stimulus strength increases the strength with which correlated patterns of activity inhibit themselves, thus damping their responses to input fluctuations. The tightly balanced state represents the end state of this process – a fully asynchronous regime in which correlations are completely suppressed (Renart et al., 2010; van Vreeswijk and Sompolinsky, 1998; with dense connectivity, the mean correlation is proportional to 1/K, and the standard deviation of the distribution of correlations is proportional to 1/√K (Renart et al., 2010); recall that K is meant to be a very large number to achieve the tightly balanced state). Thus, the tightly balanced state appears incompatible with the observed decrease in correlated variability induced by a stimulus, because the state has essentially no correlated variability. However, it should be noted that variants of the tightly balanced network involving structured connectivity can yield finite correlated variability among preferentially connected neurons while maintaining tight balance, although the average correlation over all neuron pairs can still go to zero with increasing √K (Litwin-Kumar et al., 2012; Rosenbaum et al., 2017).

Discussion
We have seen that many independent lines of evidence are all consistent with cortex being in a loosely balanced regime, and are inconsistent with tight balance. We define balance to mean that the dynamics yields a systematic cancellation of excitation by inhibition; a signature of this for the loosely balanced scenario that we consider is that the net input a neuron receives after cancellation grows sublinearly as a function of its external input. Loose balance means that the net input after cancellation is comparable in size to the factors that cancel, whereas tight balance means that the net input is very small relative to the cancelling factors. In both cases, the net input after cancellation is comparable in size to the distance from rest to threshold so that neuronal firing can be in the fluctuation-driven regime that produces irregular firing like that observed in cortex.

One line of evidence for loose balance involves a variety of measurements on the numbers and/or strengths of the inputs cells receive, including spine counts, strengths of external and total input, and strengths of excitatory and of inhibitory input. These measurements show that the expected ratio of mean to standard deviation of the network input before any tight balancing is already consistent with the ratios observed for a cell's net input as judged by its voltage response. That is, tight cancellation is not needed to achieve the ratios observed. These measurements further show that external input and network input are comparable in size to the net input remaining after cancellation, and that they and the total excitatory and total inhibitory input are all comparable to the distance from rest to threshold, consistent with loose but not tight balancing.
Other lines of evidence include a variety of nonlinear population response properties of sensory cortical neurons, as well as the presence of correlated variability in neural responses and its decrease upon presentation of a stimulus, all of which emerge naturally from loose balance with a supralinear input/output function, but appear largely incompatible with tight balance.

It should be emphasized that the number of excitatory synapses received by an excitatory cell, K_E, increases from primary sensory to higher sensory to frontal cortex (e.g. Elston, 2003). Higher numbers are expected to push in the direction of tighter balance. The expected ratio of input mean to standard deviation and the expected size of the mean input both can become high enough to potentially yield tight balance for the highest K_E's, particularly if higher average firing rates r_E are assumed. Our other arguments depend largely, but not entirely, on measurements from sensory cortex. The measurements of net input and external input are all from primary sensory cortex. The studied nonlinear response properties are primarily from both lower and higher visual cortices (reviewed in Rubin et al., 2015). Suppression of correlated variability by a stimulus, however, has been observed in frontal and parietal as well as sensory cortex (Churchland et al., 2010). In sum, while the evidence strongly favors loose balance in sensory cortex, the evidence as to the regime of frontal cortex is weaker.

The seminal discovery of the tightly balanced network (van Vreeswijk and Sompolinsky, 1996, 1998) solved a key problem in theoretical neuroscience: how can neurons remain in the fluctuation-driven regime, so that they have irregular firing with reasonable firing rates, without requiring fine tuning of parameters?
The answer was that when external and network inputs were very large, the network's dynamics could robustly tightly balance the excitation and inhibition that neurons receive, leaving a net input after cancellation that is negligibly small relative to the factors that cancel. This allows both the mean and standard deviation of the net input to be comparable to the distance from rest to threshold despite the very large size assumed for the factors that cancel, yielding the fluctuation-driven regime. This achievement along with the model's mathematical tractability have made it a popular model for the theoretical study of neural circuits. However, for all of the reasons stated above, this tightly balanced regime does not seem to match observations of at least sensory cortical anatomy and physiology. The loosely balanced solution shows that, when neuronal input/output functions are supralinear, the same dynamical balancing can arise from network dynamics without fine tuning, but in a regime in which external and network inputs are not large. Instead, the balancing arises when these inputs, and the net input remaining after cancellation of excitatory and inhibitory input, are all comparable in size to one another and to the distance from rest to threshold. Furthermore, for weak inputs this same scenario produces a weakly-coupled, feedforward-driven regime, which explains the observation that summation changes from sublinear, or suppressive, for stronger stimuli to linear, or facilitative, for weak inputs.

The tightly balanced network demonstrated that a network could self-consistently generate its own variability. As we showed in the section "How large is √Kn?", the loosely balanced regime can also generate realistic levels of variability.
However, biologically there is no need for the network to generate all of its own variability, as all inputs to cortex are noisy (and there are other sources of noise such as stochasticity of cellular and synaptic mechanisms (Mainen and Sejnowski, 1995; O'Donnell and van Rossum, 2014; Schneidman et al., 1998) and input correlations (DeWeese and Zador, 2006; Stevens and Zador, 1998)). In at least one case (Sadagopan and Ferster, 2012), the noise derived from the cortical area's input was shown to be large enough to potentially fully account for the noise seen in the cortical neurons. In the SSN network, the network will amplify input noise in the weakly coupled regime, and then decrease noise for increasingly strong inputs in the strongly coupled, loosely balanced regime; the result is that, for higher input strengths, noise can be reduced to the level driven by the inputs (e.g., see Fig. 2D of Hennequin et al., 2018), consistent with the observations of Sadagopan and Ferster (2012).

In conclusion, we believe that at least sensory, and perhaps all of, cortex operates in a regime in which the inhibition and excitation neurons receive are loosely balanced. This along with the supralinear input/output function of individual neurons and simple assumptions on connectivity explains a large set of cortical response properties. A key outstanding question is the computational function or functions of this loosely balanced state and the response properties it creates (e.g., see Echeveste et al., 2019; G. Barello and Y. Ahmadian, in preparation).

Appendix 1: Nomenclature for balanced solutions
There is no standard nomenclature for describing balanced solutions. Here we have used loose vs. tight balance to describe, given systematic cancellation, whether the remainder after cancellation is comparable to, or much smaller than, the factors that cancel.

Deneve and Machens (2016) used loose balance to mean that fast fluctuations of excitation and inhibition are uncorrelated, although they are balanced in the mean, as in the sparsely-connected network of van Vreeswijk and Sompolinsky (1998); and used tight balance to mean that fast fluctuations of excitation and inhibition are tightly correlated with a small temporal offset, as in the densely-connected network of Renart et al. (2010) and in the spiking networks of Deneve, Machens and colleagues in which recurrent connectivity has been optimized for efficient coding (Barrett et al., 2013; Boerlin et al., 2013; Bourdoukan et al., 2012). All of these networks are tightly balanced under our definition.

Hennequin et al. (2017) also defined balance to be tight if it occurs on fast timescales, and loose otherwise, but they implied that this is equivalent to our definition, that tight balance means the remainder is small compared to the factors that cancel, and loose balance means the remainder is comparable to the factors that cancel. The implied equivalence rests on the fact that tight balance under our definition produces very large (i.e., O(√K)) negative eigenvalues (in linearization about the fixed point), which means very fast dynamics, approaching instantaneous population response as K, and hence the negative eigenvalues, go to infinity.
We point out, however, that loose balance under our definition can produce negative eigenvalues large enough to produce quite fast dynamics, with effective time constants on the order of single neurons' membrane time constant, or even as small as a few milliseconds, depending on parameters.

An additional source of confusion is that there are two forms of fast fluctuations, with different behaviors. Fast fluctuations can be shared (correlated) across neurons, or they can be independent. The large negative eigenvalues in tightly balanced networks (in our definition) affect shared fluctuations corresponding to eigenmodes in which the activities of excitatory and inhibitory neurons fluctuate coherently. Thus, shared fluctuations are balanced on fast time scales. By contrast, spatial activity patterns in which neurons fluctuate independently are largely unaffected by those eigenvalues, and need not be balanced.

Fluctuations due to changes in population mean rates of the external input are shared, and so this form of fluctuation is followed on fast time scales in all balanced networks (at finite rates in loosely balanced networks, and approaching instantaneous following in tightly balanced networks). Fluctuations also arise from network and external neuronal spiking noise. In networks with sparse connectivity (van Vreeswijk and Sompolinsky, 1998), this yields independent fluctuations in different neurons, and thus independent fluctuations of excitation and inhibition on fast time scales (though their means are balanced). In networks with dense connectivity (Renart et al., 2010), these spiking fluctuations become shared fluctuations due to common inputs arising from the dense connectivity, and so in these networks excitation and inhibition are balanced on fast time scales.
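Returning to the relaxation-rate point above, it can be illustrated with a toy two-population SSN (illustrative parameters, not biological fits): linearizing the rate dynamics τ dr/dt = −r + f(Wr + c) around its fixed point gives a 2×2 Jacobian whose eigenvalues set the relaxation rates; under loose balance these are already fast relative to the membrane time constant, and get faster with stronger input.

```python
import cmath

k, wEE, wEI, wIE, wII = 0.04, 1.0, 1.0, 1.5, 1.25
tauE, tauI = 0.020, 0.010          # membrane time constants, seconds

def relaxation_times(c, dt=0.0002, T=1.0):
    """Find the fixed point by Euler integration, then return the (slow, fast)
    effective time constants of the linearized dynamics, in seconds."""
    f = lambda x: k * max(x, 0.0) ** 2
    rE = rI = 0.0
    for _ in range(int(T / dt)):
        rE += dt / tauE * (-rE + f(wEE * rE - wEI * rI + c))
        rI += dt / tauI * (-rI + f(wIE * rE - wII * rI + c))
    gE = 2 * k * (wEE * rE - wEI * rI + c)    # f'(z) at the fixed point
    gI = 2 * k * (wIE * rE - wII * rI + c)
    # Jacobian of (drE/dt, drI/dt) with respect to (rE, rI):
    a, b = (-1 + gE * wEE) / tauE, (-gE * wEI) / tauE
    cc, d = (gI * wIE) / tauI, (-1 - gI * wII) / tauI
    tr, det = a + d, a * d - b * cc
    disc = cmath.sqrt(tr * tr - 4 * det)
    lams = ((tr + disc) / 2, (tr - disc) / 2)
    taus = sorted(-1.0 / lam.real for lam in lams)
    return taus[1], taus[0]                    # (slow, fast)

for c in (10.0, 40.0):
    slow, fast = relaxation_times(c)
    print(f"c={c:4.0f}: slow mode {1e3*slow:.1f} ms, fast mode {1e3*fast:.1f} ms")
# Both modes relax on the order of the 20 ms membrane time constant or
# faster, and both speed up as the external drive increases.
```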
To summarize, all balanced networks will balance shared fluctuations, such as those due to changing external input rates, on fast time scales; but excitation and inhibition can nonetheless be unbalanced on fast time scales in sparse networks, due to independent fluctuations induced by spiking noise.

To conclude, we would argue for a future standardized terminology for dynamically-induced balancing of excitation and inhibition, in which "loose" vs. "tight" balance refer to our definition as to whether the remainder after cancellation is comparable to, or much smaller than, the factors that cancel. We suggest the use of "temporal" vs. "mean" balance to refer to whether or not excitation and inhibition are balanced on fast time scales, which depends on whether there are substantial shared input fluctuations across neurons. "Finite" vs. "instantaneous" time scales of balancing can distinguish whether relaxation rates – the rates of balancing shared fluctuations – are moderately sized vs. very large.

Appendix 2: When do balanced solutions arise?
We consider a rate model in which the neuron's input/output function is described by some function f(x), which is zero for x ≤ 0 and monotonically increasing for x ≥
0. Then the network's steady-state firing rate r_SS for a steady input I is

r_SS = f(W r_SS + I)     (14)

where f acts element by element on its argument, that is, f(u) is a vector whose i-th element is f(u_i) (the f's might differ for different neurons, which we neglect for simplicity). As before, we let ψ = ‖W‖ and c = ‖I‖. We define the dimensionless and O(1) matrix J = W/ψ and vector g = I/c, so that J and g represent the relative synaptic strengths and relative input strengths, respectively, while their overall magnitudes and dimensions are in ψ and c. Then, as in Ahmadian et al. (2013), we can define the dimensionless variable y = (ψ/c) r, and Eq. 14 can be rewritten

y_SS = (ψ/c) f(c(J y_SS + g))     (15)

Note that this equation ensures that y_SS ≥
0. Note also that, when f(x) = (x)_+^p ((x)_+ = x, x ≥
0; = 0, otherwise), then this equation can be rewritten y_SS = α(J y_SS + g)_+^p, where α = ψ c^{p−1}. This is the origin of the dimensionless constant α mentioned in the main text.

If we define f⁻¹(0) = 0, then because f is monotonically increasing for non-negative arguments, it is invertible over that range, i.e. f⁻¹(x) is defined for x ≥
0. We can then rewrite Eq. 15 as

(1/c) f⁻¹((c/ψ) y_SS) = (J y_SS + g)_+     (16)

If we assume

(J y_SS + g)_i ≥ 0 for all i,     (17)

that is, if (J y_SS)_i ≥ −(g)_i for all i, then we can replace the right side of Eq. 16 with J y_SS + g (without the ()_+). This condition, Eq. 17, is a condition on the solution y_SS, which we must check is self-consistently met for any solution we derive under this assumption. Note also that, from Eq. 15, the condition (J y_SS + g)_i > 0 is equivalent to (y_SS)_i >
0, so if we find a solution y_SS that has all positive elements, it will automatically satisfy Eq. 17. Given this assumption, a bit of further manipulation then yields

y_SS = −J⁻¹g + (1/c) J⁻¹ f⁻¹((c/ψ) y_SS)     (18)

The first term, y_b ≡ −J⁻¹g, is the balancing term, which cancels the external input g, i.e. (J y_b + g) = 0. If the second term becomes small relative to the first in some limit, then the tightly balanced solution, y_SS ≈ −J⁻¹g, or equivalently r_SS ≈ −W⁻¹I, exists in that limit, while a loosely balanced solution (balance index O(1)) arises when the second term is comparable in size to the first. (More careful analysis is needed to ensure that this solution is stable, and that there are not also other solutions.) Note that Eq. 18 gives an equation of the form of Eq. 13 when we (1) take f⁻¹(x) = (x)_+^{1/p} and (2) multiply both sides of Eq. 18 by c/ψ to convert y_SS to r_SS.

Assuming all the elements of y_b ≡ −J⁻¹g are >
0, a self-consistent solution in which the second term in Eq. 18 becomes small can be found in at least three cases:

• If c and ψ are scaled by the same factor, which becomes arbitrarily large, then there is a self-consistent solution in which y_SS is converging to −J⁻¹g. Then the f⁻¹ factor is not changing (except for the small changes due to the changes in y_SS as it converges), but it is multiplied by the factor 1/c, which becomes arbitrarily small; thus the second term becomes arbitrarily small, regardless of the particular structure of f. This is the case studied for the tightly balanced solution, where both c and ψ are taken proportional to √K with K very large. (Note that the mean-field equations derived in van Vreeswijk and Sompolinsky (1998) differ from the generic steady-state rate equations, Eq. 14, in that they also involve the self-consistently calculated input fluctuation strengths, σ_A; the scaling argument given here nevertheless holds in that case too.)

• Suppose c is scaled for fixed ψ, which is the biological case in which synaptic strengths are fixed and the strength of the external input is varied from small to large. Then if f⁻¹(x) grows more slowly than linearly in increasing x, the 1/c factor shrinks faster than the f⁻¹ term grows, so again there is a self-consistent solution in which y_SS is converging to −J⁻¹g and the second term becomes arbitrarily small with increasing c. This is the case studied for the loosely balanced solution in the SSN, in which f(x) grows supralinearly with x and therefore f⁻¹(x) grows sublinearly with x.

• We again suppose c is scaled for fixed ψ, but now imagine that f⁻¹(x) grows faster than linearly in increasing x, i.e. f(x) is sublinear (for example, f(x) = (x)_+^p for 0 < p < 1). Then there is a self-consistent solution with y_SS → −J⁻¹g as c →
0, with the second term in Eq. 18 going to zero as c → 0. This case is the reverse of the SSN: the strongly coupled, balanced regime arises for c →
0, while the weakly coupled, feedforward-driven regime arises for large c.

In sum, if the elements of -J^{-1} g are positive, then a self-consistent tightly balanced solution arises for any f if c and ψ are scaled together by an increasing factor; for supralinear f, if c is scaled by an increasing factor; or for sublinear f, if c is scaled by a decreasing factor. In all of these cases, for moderate sizes of the scaled parameter(s) (e.g., for the SSN, for α = O(1)) such that the second term of Eq. 18 is comparable in size to the first, a loosely balanced solution should arise. Note that, since r_SS = (c/ψ) y_SS, Eq. 18 implies that the net input after balancing should grow with increasing external input c as f^{-1}(c); this is sublinear in c for the SSN case of supralinear f.

If one considers a two-population model – a population of E cells and a population of I cells, with each population's average rate represented by a single variable – then conditions on J and g can be defined such that the elements of -J^{-1} g are positive and the balanced fixed point is stable and is the only fixed point (Ahmadian et al., 2013; Kraynyukova and Tchumatchenko, 2018; van Vreeswijk and Sompolinsky, 1998). On the other hand, when the E element of -J^{-1} g is negative, Eq. 18 cannot serve as a basis for an asymptotic expansion with the leading term -J^{-1} g, and the tightly balanced state does not exist. (Given (-J^{-1} g)_E <
0, if the tightly balanced state existed – meaning that the second term of Eq. 18 becomes much smaller than the first while Eq. 18 remains applicable – then y_SS must have crossed zero to become negative; but once that has happened we could no longer proceed past Eq. 16, and Eq. 18 would no longer be applicable, which is a contradiction; hence the tightly balanced state cannot exist.) However, the loosely balanced state can still arise in this case in a broad parameter regime of J and g, and can be found as the fixed point of the iterative equation

    y_SS(t) = -J^{-1} g + (1/c) J^{-1} f^{-1}((c/ψ) y_SS(t-1)),

given appropriate initial conditions y_SS(0); see Ahmadian et al. (2013). In this case, with increasing c, r_E grows, but then saturates and starts decreasing, and eventually is pushed down to 0. However, if we assume that the maximal external input (i.e. the maximal c; for example, the maximal firing rate of the thalamic input to a primary sensory cortical area) can only drive r_E to saturation or slightly beyond, this represents a viable model of cortical systems (Ahmadian et al., 2013; Hennequin et al., 2018; Rubin et al., 2015).

A two-population model accurately describes the behavior of an unstructured model with many E and I neurons, i.e. with random connectivity and with neurons in each population receiving comparable stimulus inputs. In some cases this model can also form a good approximation to the behavior of a multi-neuron circuit with structured connectivity and stimulus selectivity (Ahmadian et al., 2013). More generally, though, in such a structured circuit with localized connectivity, for larger/stronger localized stimuli, some set of neurons (e.g., neurons not selective for the stimulus) may eventually receive a net inhibition and become silent, meaning that the condition of Eq. 17 is not met and Eq.
18 does not apply. (However, if the connectivity is translation-invariant – the same at any position in the model – and the external input extends more narrowly than the network connections, then a balanced fixed point can still be attained; Rosenbaum and Doiron, 2014.) Nonetheless, we find in simulations (Ahmadian et al., 2013; Hennequin et al., 2018; Rubin et al., 2015) that for reasonable stimulus input strengths, SSN behavior is reasonably described by the two-population model, in that (1) there is a transition with increasing input strength from a weakly coupled, feedforward-driven regime to a strongly coupled, loosely balanced regime in which the input to excited neurons grows sublinearly as a function of the external input strength; and (2) if we define the W and g of the two-population model as describing the net input received by a cell in the larger, structured model – e.g., W_EE represents the mean summed synaptic strength from excitatory cells to a single excitatory cell, and g_E represents the mean external input received by stimulus-selective excitatory cells – then reasonably good insight into the operating regime of the larger model can be obtained from the analysis of the two-population model presented here and, in much more detail, in Ahmadian et al. (2013) and Kraynyukova and Tchumatchenko (2018).

We believe the same overall analysis of a transition from a weakly coupled regime to a strongly coupled, loosely balanced regime will apply to multi-population models incorporating multiple subtypes of inhibitory cells (e.g., Garcia Del Molino et al., 2017; Kuchibhotla et al., 2017; Litwin-Kumar et al., 2016), but the more detailed aspects of the analysis of the two-population model (Ahmadian et al., 2013; Kraynyukova and Tchumatchenko, 2018) need to be generalized to that case.
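The loose-to-tight trend described above can be illustrated numerically. The sketch below iterates Eq. 18 with damping for a two-population (E, I) model; it assumes, as in the text, a supralinear nonlinearity f(x) = (x)_+^p with recurrent weights W = ψJ and external input cg, but all numerical values of J, g, p, and c are illustrative choices of ours, not parameters taken from the cited papers.

```python
import numpy as np

# Toy two-population (E, I) sketch of the fixed-point iteration of Eq. 18.
# All parameter values below are assumptions chosen for demonstration only.
p = 2.0                       # supralinear power: f(x) = (x)_+^p (SSN-like)
psi = 1.0                     # recurrent-strength scale (W = psi * J)
J = np.array([[1.0, -1.5],    # connectivity pattern; columns are (E, I)
              [2.0, -1.0]])
g = np.array([1.0, 0.5])      # external input pattern (external input = c * g)

f = lambda x: np.maximum(x, 0.0) ** p              # rate nonlinearity
f_inv = lambda x: np.maximum(x, 0.0) ** (1.0 / p)  # its inverse on x >= 0

Jinv = np.linalg.inv(J)
y_bal = -Jinv @ g             # balancing term -J^{-1} g; both elements > 0 here

def solve_y(c, n_iter=2000, eta=0.3):
    """Damped iteration of Eq. 18: y <- -J^{-1} g + (1/c) J^{-1} f^{-1}((c/psi) y)."""
    y = y_bal.copy()
    for _ in range(n_iter):
        y_new = y_bal + (1.0 / c) * (Jinv @ f_inv((c / psi) * y))
        y = (1.0 - eta) * y + eta * y_new
    return y

balance_ratios = []
for c in (5.0, 50.0, 500.0):
    y = solve_y(c)
    r = (c / psi) * y                              # rates: r_SS = (c/psi) y_SS
    # self-check: r solves the steady-state rate equation r = f(W r + c g)
    assert np.allclose(r, f(psi * (J @ r) + c * g), atol=1e-6)
    second_term = y - y_bal                        # residual (loose-balance) term
    balance_ratios.append(np.linalg.norm(second_term) / np.linalg.norm(y_bal))

print(balance_ratios)   # ratio of second term to balancing term shrinks with c
```

At the fixed point, r_SS = (c/ψ) y_SS satisfies the steady-state equation, and the printed ratios shrink only gradually (roughly as c^{1/p - 1} = c^{-1/2} for p = 2), so the net input f^{-1}(r_E) grows sublinearly with c while balance remains loose at moderate c, as described above.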
Acknowledgements
We thank Larry Abbott, Mario DiPoppa and Agostina Palmigiano for many helpful comments on the manuscript, David Hansel, Gianluigi Mongillo and Alfonso Renart for many useful discussions, and Randy Bruno for help with references. YA is supported by start-up funds from the University of Oregon. KDM is supported by NSF DBI-1707398, NIH U01NS108683, NIH R01EY029999, NIH U19NS107613, Simons Foundation award 543017, and the Gatsby Charitable Foundation.
References
Adesnik, H. (2017). Synaptic Mechanisms of Feature Coding in the Visual Cortex of Awake Mice. Neuron, 95:1147–1159.
Ahmadian, Y., Rubin, D. B., and Miller, K. D. (2013). Analysis of the stabilized supralinear network. Neural Computation, 25:1994–2037.
Amatrudo, J. M., Weaver, C. M., Crimins, J. L., Hof, P. R., Rosene, D. L., and Luebke, J. I. (2012). Influence of highly distinctive structural properties on the excitability of pyramidal neurons in monkey visual and prefrontal cortices. J. Neurosci., 32(40):13644–13660.
Amit, D. and Brunel, N. (1997). Dynamics of a recurrent network of spiking neurons before and following learning. Network: Comput. Neural Syst., 8:373–404.
Anderson, J. S., Carandini, M., and Ferster, D. (2000). Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. J. Neurophysiol., 84:909–926.
Anderson, J. S., Lampl, I., Gillespie, D. C., and Ferster, D. (2001). Membrane potential and conductance changes underlying length tuning of cells in cat primary visual cortex. J. Neurosci., 21:2104–2112.
Angelucci, A., Bijanzadeh, M., Nurminen, L., Federer, F., Merlin, S., and Bressloff, P. (2017). Circuits and mechanisms for surround modulation in visual cortex. Ann. Rev. Neurosci., 40:425–451.
Atallah, B. V. and Scanziani, M. (2009). Instantaneous modulation of gamma oscillation frequency by balancing excitation with inhibition. Neuron, 62:566–577.
Barral, J. and Reyes, A. (2016). Synaptic scaling rule preserves excitatory-inhibitory balance and salient neuronal network dynamics. Nat. Neurosci., 19:1690–1696.
Barrett, D., Deneve, S., and Machens, C. (2013). Firing rate predictions in optimal balanced networks. In Pereira, F., Burges, C., Bottou, L., and Weinberger, K., editors, Advances in Neural Information Processing Systems, pages 1538–1546. MIT Press.
Barth, A. L. and Poulet, J. F. (2012). Experimental evidence for sparse firing in the neocortex. Trends Neurosci., 35:345–355.
Bhatia, A., Moza, S., and Bhalla, U. S. (2019). Precise excitation-inhibition balance controls gain and timing in the hippocampus. eLife, 8.
Boerlin, M., Machens, C. K., and Deneve, S. (2013). Predictive coding of dynamical variables in balanced spiking networks. PLoS Comput. Biol., 9(11):e1003258.
Bourdoukan, R., Barrett, D., Deneve, S., and Machens, C. (2012). Learning optimal spike-based representations. In Pereira, F., Burges, C., Bottou, L., and Weinberger, K., editors, Advances in Neural Information Processing Systems, pages 2285–2293. MIT Press.
Brunel, N. (2000). Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci, 8:183–208.
Carandini, M. and Heeger, D. J. (2012). Normalization as a canonical neural computation. Nat. Rev. Neurosci., 13:51–62.
Cavanaugh, J. R., Bair, W., and Movshon, J. A. (2002). Nature and interaction of signals from the receptive field center and surround in macaque V1 neurons. J. Neurophysiol., 88:2530–2546.
Chung, S. and Ferster, D. (1998). Strength and orientation tuning of the thalamic input to simple cells revealed by electrically evoked cortical suppression. Neuron, 20:1177–89.
Churchland, M. M., Yu, B. M., Cunningham, J. P., Sugrue, L. P., Cohen, M. R., Corrado, G. S., Newsome, W. T., Clark, A. M., Hosseini, P., Scott, B. B., Bradley, D. C., Smith, M. A., Kohn, A., Movshon, J. A., Armstrong, K. M., Moore, T., Chang, S. W., Snyder, L. H., Lisberger, S. G., Priebe, N. J., Finn, I. M., Ferster, D., Ryu, S. I., Santhanam, G., Sahani, M., and Shenoy, K. V. (2010). Stimulus onset quenches neural variability: a widespread cortical phenomenon. Nat. Neurosci., 13:369–378.
Cohen, M. R. and Kohn, A. (2011). Measuring and interpreting neuronal correlations. Nat. Neurosci., 14:811–819.
Constantinople, C. M. and Bruno, R. M. (2013). Deep cortical layers are activated directly by thalamus. Science, 340:1591–1594.
Cossell, L., Iacaruso, M. F., Muir, D. R., Houlton, R., Sader, E. N., Ko, H., Hofer, S. B., and Mrsic-Flogel, T. D. (2015). Functional organization of excitatory synaptic strength in primary visual cortex. Nature, 518:399–403.
Dehghani, N., Peyrache, A., Telenczuk, B., Le Van Quyen, M., Halgren, E., Cash, S. S., Hatsopoulos, N. G., and Destexhe, A. (2016). Dynamic Balance of Excitation and Inhibition in Human and Monkey Neocortex. Sci Rep, 6:23176.
Deneve, S. and Machens, C. K. (2016). Efficient codes and balanced networks. Nat. Neurosci., 19:375–382.
DeWeese, M. and Zador, A. (2006). Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. J. Neurosci., 26:12206–12218.
Doiron, B., Litwin-Kumar, A., Rosenbaum, R., Ocker, G. K., and Josic, K. (2016). The mechanics of state-dependent neural correlations. Nat. Neurosci., 19:383–393.
Echeveste, R., Aitchison, L., Hennequin, G., and Lengyel, M. (2019). Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference. bioRxiv, doi: http://dx.doi.org/10.1101/696088.
Ecker, A. S., Berens, P., Cotton, R. J., Subramaniyan, M., Denfield, G. H., Cadwell, C. R., Smirnakis, S. M., Bethge, M., and Tolias, A. S. (2014). State dependence of noise correlations in macaque primary visual cortex. Neuron, 82:235–248.
Ecker, A. S., Berens, P., Keliris, G. A., Bethge, M., Logothetis, N. K., and Tolias, A. S. (2010). Decorrelated neuronal firing in cortical microcircuits. Science, 327(5965):584–587.
Elston, G. N. (2003). Cortex, cognition and the cell: new insights into the pyramidal neuron and prefrontal function. Cereb. Cortex, 13:1124–1138.
Elston, G. N. and Fujita, I. (2014). Pyramidal cell development: postnatal spinogenesis, dendritic growth, axon growth, and electrophysiology. Front Neuroanat, 8:78.
Elston, G. N. and Manger, P. (2014). Pyramidal cells in V1 of African rodents are bigger, more branched and more spiny than those in primates. Front Neuroanat, 8:4.
Fares, T. and Stepanyants, A. (2009). Cooperative synapse formation in the neocortex. Proc. Natl. Acad. Sci. U.S.A., 106(38):16463–16468.
Feldmeyer, D., Egger, V., Lübke, J., and Sakmann, B. (1999). Reliable synaptic connections between pairs of excitatory layer 4 neurones within a single "barrel" of developing rat somatosensory cortex. J. Physiol., 521:169–190.
Feldmeyer, D., Lübke, J., Silver, R. A., and Sakmann, B. (2002). Synaptic connections between layer 4 spiny neurone-layer 2/3 pyramidal cell pairs in juvenile rat barrel cortex: physiology and anatomy of interlaminar signalling within a cortical column. J Physiol, 538:803–822.
Ferster, D., Chung, S., and Wheat, H. (1996). Orientation selectivity of thalamic input to simple cells of cat visual cortex. Nature, 380:249–252.
Finn, I. M., Priebe, N. J., and Ferster, D. (2007). The emergence of contrast-invariant orientation tuning in simple cells of cat visual cortex. Neuron, 54:137–152.
Galarreta, M. and Hestrin, S. (1998). Frequency-dependent synaptic depression and the balance of excitation and inhibition in the neocortex. Nat. Neurosci., 1:587–594.
Garcia Del Molino, L. C., Yang, G. R., Mejias, J. F., and Wang, X. J. (2017). Paradoxical response reversal of top-down modulation in cortical circuits with three interneuron types. eLife, 6.
Graupner, M. and Reyes, A. D. (2013). Synaptic input correlations leading to membrane potential decorrelation of spontaneous activity in cortex. J. Neurosci., 33:15075–15085.
Haider, B., Duque, A., Hasenstaub, A. R., and McCormick, D. A. (2006). Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. J. Neurosci., 26:4535–4545.
Haider, B., Hausser, M., and Carandini, M. (2013). Inhibition dominates sensory responses in the awake cortex. Nature, 493:97–100.
Hansel, D. and van Vreeswijk, C. (2002). How noise contributes to contrast invariance of orientation tuning in cat visual cortex. J. Neurosci., 22:5118–5128.
Hansel, D. and van Vreeswijk, C. (2012). The mechanism of orientation selectivity in primary visual cortex without a functional map. J. Neurosci., 32:4049–4064.
Hennequin, G., Agnes, E. J., and Vogels, T. P. (2017). Inhibitory Plasticity: Balance, Control, and Codependence. Annu. Rev. Neurosci., 40:557–579.
Hennequin, G., Ahmadian, Y., Rubin, D. B., Lengyel, M., and Miller, K. D. (2018). The Dynamical Regime of Sensory Cortex: Stable Dynamics around a Single Stimulus-Tuned Attractor Account for Patterns of Noise Variability. Neuron, 98(4):846–860.
Heuer, H. W. and Britten, K. H. (2002). Contrast dependence of response normalization in area MT of the rhesus macaque. J. Neurophysiol., 88:3398–3408.
Higley, M. and Contreras, D. (2006). Balanced excitation and inhibition determine spike timing during frequency adaptation. J. Neurosci., 26:448–457.
Histed, M. H. (2018). Feedforward Inhibition Allows Input Summation to Vary in Recurrent Cortical Networks. eNeuro, 5(1).
Ichida, J. M., Schwabe, L., Bressloff, P. C., and Angelucci, A. (2007). Response facilitation from the "suppressive" receptive field surround of macaque V1 neurons. J. Neurophysiol., 98:2168–2181.
Kato, H. K., Asinof, S. K., and Isaacson, J. S. (2017). Network-Level Control of Frequency Tuning in Auditory Cortex. Neuron, 95(2):412–423.
Ko, H., Hofer, S. B., Pichler, B., Buchanan, K. A., Sjostrom, P. J., and Mrsic-Flogel, T. D. (2011). Functional specificity of local synaptic connections in neocortical networks. Nature, 473:87–91.
Kraynyukova, N. and Tchumatchenko, T. (2018). Stabilized supralinear network can give rise to bistable, oscillatory, and persistent activity. Proc. Natl. Acad. Sci. U.S.A., 115(13):3464–3469.
Kuchibhotla, K. V., Gill, J. V., Lindsay, G. W., Papadoyannis, E. S., Field, R. E., Sten, T. A., Miller, K. D., and Froemke, R. C. (2017). Parallel processing by cortical inhibition enables context-dependent behavior. Nat. Neurosci., 20:62–71.
Lankarany, M., Heiss, J. E., Lampl, I., and Toyoizumi, T. (2016). Simultaneous Bayesian Estimation of Excitatory and Inhibitory Synaptic Conductances by Exploiting Multiple Recorded Trials. Front Comput Neurosci, 10:110.
Li, L. Y., Li, Y. T., Zhou, M., Tao, H. W., and Zhang, L. I. (2013a). Intracortical multiplication of thalamocortical signals in mouse auditory cortex. Nat. Neurosci., 16:1179–1181.
Li, Y. T., Ibrahim, L. A., Liu, B. H., Zhang, L. I., and Tao, H. W. (2013b). Linear transformation of thalamocortical input by intracortical excitation. Nat. Neurosci., 16:1324–1330.
Lien, A. D. and Scanziani, M. (2013). Tuned thalamic excitation is amplified by visual cortical circuits. Nat. Neurosci., 16:1315–1323.
Litwin-Kumar, A., Chacron, M. J., and Doiron, B. (2012). The spatial structure of stimuli shapes the timescale of correlations in population spiking activity. PLoS Comput. Biol., 8:e1002667.
Litwin-Kumar, A., Rosenbaum, R., and Doiron, B. (2016). Inhibitory stabilization and visual coding in cortical circuits with multiple interneuron subtypes. J. Neurophysiol., 115:1399–1409.
Liu, L. D., Miller, K. D., and Pack, C. C. (2018). A Unifying Motif for Spatial and Directional Surround Suppression. J. Neurosci., 38(4):989–999.
Maimon, G. and Assad, J. A. (2009). Beyond Poisson: increased spike-time regularity across primate parietal cortex. Neuron, 62:426–440.
Mainen, Z. F. and Sejnowski, T. J. (1995). Reliability of spike timing in neocortical neurons. Science, 268:1503–1506.
Marino, J., Schummers, J., Lyon, D. C., Schwabe, L., Beck, O., Wiesing, P., Obermayer, K., and Sur, M. (2005). Invariant computations in local cortical networks with balanced excitation and inhibition. Nature Neurosci., 8:194–201.
Markov, N. T., Misery, P., Falchier, A., Lamy, C., Vezoli, J., Quilodran, R., Gariel, M. A., Giroud, P., Ercsey-Ravasz, M., Pilaz, L. J., Huissoud, C., Barone, P., Dehay, C., Toroczkai, Z., Van Essen, D. C., Kennedy, H., and Knoblauch, K. (2011). Weight consistency specifies regularities of macaque cortical networks. Cereb. Cortex, 21(6):1254–1272.
Markram, H., Lubke, J., Frotscher, M., Roth, A., and Sakmann, B. (1997). Physiology and anatomy of synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex. J. Physiol. (Lond.), 500(Pt 2):409–440.
Miller, K. D. (2016). Canonical computations of cerebral cortex. Curr. Opin. Neurobiol., 37:75–84.
Miller, K. D. and Troyer, T. W. (2002). Neural noise can explain expansive, power-law nonlinearities in neural response functions. J. Neurophysiol., 87:653–659.
Mongillo, G., Hansel, D., and van Vreeswijk, C. (2012). Bistability and spatiotemporal irregularity in neuronal networks with nonlinear synaptic transmission. Phys. Rev. Lett., 108:158101.
Nassi, J. J., Avery, M. C., Cetin, A. H., Roe, A. W., and Reynolds, J. H. (2015). Optogenetic Activation of Normalization in Alert Macaque Visual Cortex. Neuron, 86:1504–1517.
Nienborg, H., Hasenstaub, A., Nauhaus, I., Taniguchi, H., Huang, Z. J., and Callaway, E. M. (2013). Contrast dependence and differential contributions from somatostatin- and parvalbumin-expressing neurons to spatial integration in mouse V1. J. Neurosci., 33:11145–11154.
O'Donnell, C. and van Rossum, M. C. (2014). Systematic analysis of the contributions of stochastic voltage gated channels to neuronal noise. Front Comput Neurosci, 8:105.
Ohshiro, T., Angelaki, D. E., and DeAngelis, G. (2013). A normalization model of multisensory integration accounts for distinct forms of cross-modal and within-modal cue integration by cortical neurons. Program No. 360.19. 2013 Neuroscience Meeting Planner. San Diego, CA: Society for Neuroscience, 2013. Online.
Okun, M. and Lampl, I. (2008). Instantaneous correlation of excitation and inhibition during ongoing and sensory-evoked activities. Nat. Neurosci., 11:535–537.
Ozeki, H., Finn, I. M., Schaffer, E. S., Miller, K. D., and Ferster, D. (2009). Inhibitory stabilization of the cortical network underlies visual surround suppression. Neuron, 62:578–592.
Pehlevan, C. and Sompolinsky, H. (2014). Selectivity and sparseness in randomly connected balanced networks. PLoS ONE, 9:e89992.
Polat, U., Mizobe, K., Pettet, M. W., Kasamatsu, T., and Norcia, A. M. (1998). Collinear stimuli regulate visual responses depending on cell's contrast threshold. Nature, 391:580–584.
Poulet, J. F. and Petersen, C. C. (2008). Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature, 454:881–885.
Priebe, N., Mechler, F., Carandini, M., and Ferster, D. (2004). The contribution of spike threshold to the dichotomy of cortical simple and complex cells. Nat. Neurosci., 7:1113–22.
Priebe, N. J. and Ferster, D. (2008). Inhibition, spike threshold and stimulus selectivity in primary visual cortex. Neuron, 57:482–497.
Renart, A., de la Rocha, J., Bartho, P., Hollender, L., Parga, N., Reyes, A., and Harris, K. D. (2010). The asynchronous state in cortical circuits. Science, 327:587–590.
Reynolds, J. H. and Chelazzi, L. (2004). Attentional modulation of visual processing. Annu. Rev. Neurosci., 27:611–647.
Rosenbaum, R. and Doiron, B. (2014). Balanced networks of spiking neurons with spatially dependent recurrent connections. Physical Review X, 4(2):021039.
Rosenbaum, R., Smith, M. A., Kohn, A., Rubin, J. E., and Doiron, B. (2017). The spatial structure of correlated neuronal variability. Nat. Neurosci., 20:107–114.
Rubin, D. B., Van Hooser, S. D., and Miller, K. D. (2015). The stabilized supralinear network: A unifying circuit motif underlying multi-input integration in sensory cortex. Neuron, 85:402–417.
Sadagopan, S. and Ferster, D. (2012). Feedforward origins of response variability underlying contrast invariant orientation tuning in cat visual cortex. Neuron, 74:911–923.
Sadeh, S. and Rotter, S. (2015). Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics. PLoS Comput. Biol., 11:e1004045.
Sanzeni, A., Akitake, B., Goldback, H., Leedy, C., Brunel, N., and Histed, M. (2019). Inhibition stabilization is a widespread property of cortical networks. bioRxiv, doi: https://doi.org/10.1101/656710.
Sato, T. K., Hausser, M., and Carandini, M. (2014). Distal connectivity causes summation and division across mouse visual cortex. Nat. Neurosci., 17:30–32.
Sceniak, M., Ringach, D. L., Hawken, M., and Shapley, R. (1999). Contrast's effect on spatial summation by macaque V1 neurons. Nature Neurosci., 2:733–739.
Schneidman, E., Freedman, B., and Segev, I. (1998). Ion channel stochasticity may be critical in determining the reliability and precision of spike timing. Neural Comput, 10:1679–1703.
Schoonover, C. E., Tapia, J. C., Schilling, V. C., Wimmer, V., Blazeski, R., Zhang, W., Mason, C. A., and Bruno, R. M. (2014). Comparative strength and dendritic organization of thalamocortical and corticocortical synapses onto excitatory layer 4 neurons. J. Neurosci., 34:6746–6758.
Schwabe, L., Ichida, J. M., Shushruth, S., Mangapathy, P., and Angelucci, A. (2010). Contrast-dependence of surround suppression in Macaque V1: experimental testing of a recurrent network model. Neuroimage, 52:777–792.
Sengpiel, F., Blakemore, C., and Sen, A. (1997). Characteristics of surround inhibition in cat area 17. Exp. Brain Res., 116:216–228.
Shadlen, M. N. and Newsome, W. T. (1998). The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J. Neurosci., 18:3870–3896.
Shu, Y., Hasenstaub, A., and McCormick, D. A. (2003). Turning on and off recurrent balanced cortical activity. Nature, 423:288–293.
Shushruth, S., Ichida, J. M., Levitt, J. B., and Angelucci, A. (2009). Comparison of spatial summation properties of neurons in macaque V1 and V2. J. Neurophysiol., 102:2069–2083.
Softky, W. and Koch, C. (1993). The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J. Neurosci., 13:334–350.
Song, X. M. and Li, C. Y. (2008). Contrast-dependent and contrast-independent spatial summation of primary visual cortical neurons of the cat. Cerebral Cortex, 18:331–336.
Stevens, C. and Zador, A. (1998). Input synchrony and the irregular firing of cortical neurons. Nat. Neurosci., 1:210–217.
Tan, A. Y., Chen, Y., Scholl, B., Seidemann, E., and Priebe, N. J. (2014). Sensory stimulation shifts visual cortex from synchronous to asynchronous states. Nature, 509:226–229.
Troyer, T. W. and Miller, K. D. (1997). Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell. Neural Comput., 9:971–983.
Tsodyks, M. V. and Sejnowski, T. (1995). Rapid state switching in balanced cortical network models. Network, 6:111–124.
Tsui, J. M. and Pack, C. C. (2011). Contrast sensitivity of MT receptive field centers and surrounds. J. Neurophysiol., 106:1888–1900.
van Vreeswijk, C. and Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274:1724–1726.
van Vreeswijk, C. and Sompolinsky, H. (1998). Chaotic balanced state in a model of cortical circuits. Neural Computation, 10:1321–1371.
Wang, S., Miller, K. D., and van Hooser, S. B. (2019). Combined visual and patterned optogenetic stimulation of ferret visual cortex reveals that cortical circuits respond to independent inputs in a sublinear manner. Cosyne Abstracts 2019, Lisbon, Portugal, (to appear).
Wehr, M. and Zador, A. M. (2003). Balanced inhibition underlies tuning and sharpens spike timing in auditory cortex. Nature, 426:442–446.
Wilent, W. B. and Contreras, D. (2005). Stimulus-dependent changes in spike threshold enhance feature selectivity in rat barrel cortex neurons. J. Neurosci., 25:2983–2991.
Wu, G. K., Arbuckle, R., Liu, B. H., Tao, H. W., and Zhang, L. I. (2008). Lateral sharpening of cortical frequency tuning by approximately balanced inhibition. Neuron, 58:132–143.
Wu, G. K., Li, P., Tao, H. W., and Zhang, L. I. (2006). Nonmonotonic synaptic excitation and imbalanced inhibition underlying cortical intensity tuning. Neuron, 52:705–715.
Xue, M., Atallah, B. V., and Scanziani, M. (2014). Equalizing excitation-inhibition ratios across visual cortical neurons. Nature, 511:596–600.
Yizhar, O., Fenno, L. E., Prigge, M., Schneider, F., Davidson, T. J., O'Shea, D. J., Sohal, V. S., Goshen, I., Finkelstein, J., Paz, J. T., Stehfest, K., Fudim, R., Ramakrishnan, C., Huguenard, J. R., Hegemann, P., and Deisseroth, K. (2011). Neocortical excitation/inhibition balance in information processing and social dysfunction. Nature, 477:171–178.
Zhou, M., Liang, F., Xiong, X. R., Li, L., Li, H., Xiao, Z., Tao, H. W., and Zhang, L. I. (2014). Scaling down of balanced excitation and inhibition by active behavioral states in auditory cortex.