Mean-field approximations of networks of spiking neurons with short-term synaptic plasticity
Richard Gast,∗ Thomas R. Knösche,† and Helmut Schmidt
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
(Dated: January 18, 2021)

Low-dimensional descriptions of spiking neural network dynamics are an effective tool for bridging different scales of organization of brain structure and function. Recent advances in deriving mean-field descriptions for networks of coupled oscillators have sparked the development of a new generation of neural mass models. Of notable interest are mean-field descriptions of all-to-all coupled quadratic integrate-and-fire (QIF) neurons, which have already seen numerous extensions and applications. These extensions include different forms of short-term adaptation (STA) considered to play an important role in generating and sustaining dynamic regimes of interest in the brain. It is an open question, however, whether the incorporation of pre-synaptic forms of synaptic plasticity, driven by single-neuron activity, would still permit the derivation of mean-field equations using the same method. Here, we discuss this problem using an established model of short-term synaptic plasticity at the single-neuron level, for which we present two different approaches for the derivation of the mean-field equations. We compare these models with a recently proposed mean-field approximation that assumes stochastic spike timings. In general, the latter fails to accurately reproduce the macroscopic activity in networks of deterministic QIF neurons with distributed parameters. We show that the mean-field models we propose provide a more accurate description of the network dynamics, although they are mathematically more involved. Thus, we provide novel insight into the macroscopic effects of short-term synaptic plasticity in spiking neural networks, as well as two different mean-field descriptions for future investigations of such networks.
I. LOW-DIMENSIONAL MANIFOLDS OF SPIKING NEURAL NETWORK ACTIVITY
The brain can generate a variety of highly complex and chaotic patterns of neural activity [1]. However, given the vast number of neurons in the brain, these patterns appear to be less complex than they could be theoretically, indicating a high level of neuronal redundancy [2, 3]. Electrophysiological recordings of macroscopic neural activity have revealed highly stereotyped responses to sensory stimulation as well as strongly synchronized regimes of neural activity [4–7]. More recently, multi-unit recordings have demonstrated that strong redundancies are present at the level of spiking neurons as well [8, 9]. These findings indicate the existence of low-dimensional manifolds in the state space of the brain that typically govern its neural dynamics and its response to extrinsic stimulation. The identification and description of such low-dimensional manifolds has been a central topic of neuroscientific research for many years [10–15]. Different approaches for the derivation of mathematical descriptions of the temporal evolution of low-dimensional neural activity have been proposed [16]. Among those are classic neural mass models that use direct, phenomenological descriptions of macroscopic measures of neural dynamics [17–21]. For these neural mass models, equivalent spiking neural networks do not exist in general. Other approaches make use of probabilistic descriptions of the evolution of the collective behavior inside a neural population [22–24], which make it possible to capture the statistics inside the spiking neural network up to a certain order. However, some of these approaches are restricted to asynchronous regimes of neural activity [22, 23], whereas others use approximations of random fluctuations in the spiking neural network [24].

∗ [email protected]
† Also at Institute for Biomedical Engineering and Informatics, TU Ilmenau, Germany
Hence, neither of these approaches provides a mathematically exact set of mean-field equations that can describe the macroscopic dynamics of a spiking neural network in general.

The Ott-Antonsen ansatz has provided a new tool to derive mean-field models of spiking neural networks [25]. While originally devised for networks of all-to-all coupled Kuramoto oscillators [26], it has since been applied to networks of theta neurons [27, 28], and, most relevant to this study, to networks of all-to-all coupled quadratic integrate-and-fire (QIF) neurons [29]. For future applications of this method, it is of interest how well the derivation of the mean-field equations generalizes to descriptions of neural dynamics other than the particular QIF networks considered in [29]. Consequently, different extensions of the QIF model have been proposed that added biophysical mechanisms or structural details to the model in order to explain interesting neurodynamic phenomena, such as the onset of synchronized neural activity [30–34]. Particularly interesting are extensions that include dynamic variables which are not driven by the mean-field activity of the network, but by neuron- or synapse-specific processes instead. In such cases, it is unclear whether mean-field equations can still be found. In [34], the QIF network was extended by a spike-frequency adaptation mechanism, where a neuron-specific adaptation current was elicited by the spiking activity of the same neuron. Thus, the adaptation variable was not simply driven by the mean-field activity of the network. To derive the mean-field equations nonetheless, the authors applied an adiabatic approximation to the adaptation dynamics.
This approximation assumes that the adaptation variable evolves slowly in comparison to the membrane potential dynamics and permits applying the mean-field derivation on the fast time scale. Based on this mean-field model, it will be possible to investigate the effects of neuron-specific currents at meso- and macroscopic scales, such as the effects of calcium-dependent spikes on thalamic dynamics [35] or the effects of spike-frequency adaptation on cortical microcircuits [36].

In this work, we address the question of whether exact mean-field equations can be derived for QIF networks with synapse-specific dynamic variables. Synaptic dynamics are especially interesting for the computational modeling of macroscopic neurodynamic phenomena. This is because synaptic currents are thought to trigger the potential changes visible in macroscopic electrophysiological recordings of brain activity, and different synapse types come with different dynamic characteristics that are pivotal for our understanding of brain dynamics. Classic neural mass models, for example, typically use different synaptic time scales to model rhythm generation in the brain [18, 20, 21]. The QIF mean-field reduction generalizes to any convolution of the synaptic input with a synaptic response kernel [29, 30] and, hence, allows one to derive mean-field descriptions of QIF networks with standard descriptions of synaptic dynamics such as the alpha kernel convolution [20, 21]. However, given appropriate stimulation, synaptic dynamics also undergo short-term plasticity (STP) that changes the properties of the synaptic response. It has been shown that synapses can express short-term depression and facilitation, and that the time scales and strengths of these two STP forms differ between synapse and neuron types. Moreover, synaptic STP has been linked to various functions and dynamic properties of the brain, such as working memory [37] or operating in a critical regime [38].
A generalization of the above-discussed mean-field approaches to neural networks with synaptic STP would thus provide a valuable tool for modeling brain dynamics and function at the meso- and macroscopic level.

Here, we discuss the descriptions of synaptic STP that are admissible in the context of deriving Ott-Antonsen manifolds for heterogeneous QIF networks. Recent work has demonstrated that mean-field equations can be derived for QIF networks with synaptic STP if two conditions are satisfied [34]: First, each time a neuron spikes in the network, it triggers synaptic STP at every other neuron, which is the case in all-to-all coupled networks. Second, a single incoming spike triggers synaptic STP at all synapses of a neuron. Under those conditions, synaptic STP is no longer neuron specific and can simply be treated as a macroscopic variable driven by the mean-field activity of the network. This form of synaptic STP could be used to model forms of post-synaptic receptor desensitization, short-term changes in the number of available post-synaptic receptors, or resource depletion at the post-synaptic complex. Importantly, it cannot be considered to represent pre-synaptic forms of plasticity, such as vesicle depletion. While the first assumption would still hold for pre-synaptic STP in all-to-all coupled QIF networks, the second assumption would not. Pre-synaptic resource depletion cannot be assumed to affect all network connections, but only the efferent connections of a specific neuron (see Fig. 1).

FIG. 1. Pre- vs. post-synaptic forms of short-term plasticity. Nodes represent neurons in an all-to-all coupled network and edges between the nodes represent bidirectional synaptic couplings. Red nodes are active, i.e., just spiked, whereas blue nodes have not spiked for a sufficient period of time. Edges colored in red show adaptation in response to the activity of the red nodes, whereas grey edges do not.
A well-established model of pre-synaptic STP is the phenomenological model introduced in [39], which describes the dynamics of pre-synaptic facilitation and depression. We will discuss the derivation of mean-field equations for QIF networks with pre-synaptic STP with respect to this model, though we will also discuss the implications of our findings for general descriptions of pre-synaptic STP dynamics. In the following section, we define the microscopic model under consideration. This is followed by sections in which we discuss different approaches to derive equations for the low-dimensional network dynamics. While we do not find the exact mean-field equations for QIF networks with pre-synaptic STP, we provide two different approximations that match the QIF network dynamics well. We point to the problems that would have to be solved in future attempts at an exact mean-field derivation and evaluate the accuracy of our approximate solutions via numerical simulations and bifurcation analysis.
II. LOW-DIMENSIONAL MANIFOLDS OF QIF NETWORKS WITH STP
We consider a network of N all-to-all coupled QIF neurons with pre-synaptic STP:

τ dV_i/dt = V_i^2 + η_i + I(t) + (Jτ/N) Σ_{j=1}^{N} X_j^- U_j^+ S_j, (1a)
τ_x dX_i/dt = 1 − X_i − α X_i^- U_i^+ S_i τ_x, (1b)
τ_u dU_i/dt = U − U_i + U (1 − U_i^-) S_i τ_u, (1c)
S_i = Σ_k δ(t − t_i^k), (1d)

where t_i^k denotes the k-th spike time of neuron i, such that S_i represents the spiking activity of neuron i. Eq. (1b) and eq. (1c) resemble the pre-synaptic STP mechanism described in [39]. We note here that ·^- denotes a quantity just before a spike occurs (left limit), and ·^+ denotes a quantity just after the neuron spiked (right limit). This discontinuity accounts for the biological fact that a pre-synaptic spike triggers synaptic facilitation before it can affect the post-synaptic neuron, by moving vesicles closer to the membrane. Synaptic depression, however, results from the consumption of vesicles in the synaptic transmission process and is thus affected slightly later than synaptic facilitation. We assume neural spiking activity to affect all outgoing synapses of a neuron equally; hence X_i and U_i can be considered neuron- and not synapse-specific. The adaptation dynamics are controlled by the depression and facilitation time constants τ_x and τ_u, a depression strength α, and a baseline synaptic efficacy U. Eq. (1a) describes the evolution of the membrane potential V_i of neuron i, which depends on a background excitability parameter η_i, an extrinsic forcing term I(t), the membrane time constant τ, and the coupling with the network activity. The latter is given by a sum over the output S_i of each neuron in the network, weighted by a global coupling strength J and the neuron-specific synaptic depression X_i and facilitation U_i.

The membrane potential V_i of a neuron can be directly related to its phase via the transform V_i = tan(θ_i/2). Under this transformation, (1a)-(1d) represent a network of theta neurons [40], which can be considered a network of globally coupled oscillators.
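To make the model concrete, eqs. (1a)-(1d) can be integrated directly with an explicit Euler scheme. The following sketch is illustrative only: the spike and reset are handled with a finite peak potential v_peak instead of infinity, extreme tails of the Lorentzian distribution over η_i are clipped for numerical stability, and all parameter values are hypothetical.

```python
import numpy as np

def simulate_snn_pre(t_end=10.0, dt=1e-3, N=300, J=8.0, eta_bar=1.0,
                     delta=0.5, tau=1.0, tau_x=50.0, tau_u=20.0,
                     U0=0.5, alpha=0.1, v_peak=100.0, seed=0):
    """Euler integration of the QIF network with pre-synaptic STP, eqs. (1).

    U0 denotes the baseline synaptic efficacy U of the text. Returns the
    time-averaged population firing rate and the final X_i and U_i.
    """
    rng = np.random.default_rng(seed)
    # Lorentzian-distributed background excitabilities, cf. eq. (6);
    # tails are clipped to keep the Euler step stable (a numerical choice).
    eta = eta_bar + delta * np.tan(np.pi * (rng.random(N) - 0.5))
    eta = np.clip(eta, eta_bar - 50 * delta, eta_bar + 50 * delta)
    v = np.zeros(N)
    x = np.ones(N)          # depression variables X_i
    u = np.full(N, U0)      # facilitation variables U_i
    n_spikes = 0
    for _ in range(int(t_end / dt)):
        spiking = v >= v_peak
        n_spikes += spiking.sum()
        # facilitation jumps first (U_i^+), transmission uses X_i^- U_i^+ ...
        u_plus = u + U0 * (1.0 - u) * spiking
        drive = J * tau * np.sum(x * u_plus * spiking) / (N * dt)  # ~ J tau r_eff
        # ... and depression consumes resources afterwards, eq. (1b)
        x = x - alpha * u_plus * x * spiking
        v = np.where(spiking, -v_peak, v)
        v += dt * (v**2 + eta + drive) / tau
        x += dt * (1.0 - x) / tau_x
        u = u_plus + dt * (U0 - u_plus) / tau_u
    rate = n_spikes / (N * t_end)
    return rate, x, u
```

Note that the order of the updates implements the left and right limits X_i^- and U_i^+ discussed above: the facilitation jump is applied before the spike is transmitted, and the depression jump afterwards.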
Thus, the network satisfies the conditions for the existence of the Ott-Antonsen manifold, a low-dimensional manifold along which the network dynamics are guaranteed to evolve for N → ∞ [25]. This manifold can be described for (1a)-(1d) by following the Lorentzian ansatz described in [29], i.e., by making the assumption that the state variables V_i are distributed according to a Lorentzian at any given time:

ρ(V|η, t) = (1/π) z(η, t) / ([V − y(η, t)]^2 + z(η, t)^2). (2)

The half-width-at-half-maximum (HWHM) z(η, t) and center y(η, t) of eq. (2) are associated with the mean firing rate r(η, t) and the average membrane potential v(η, t) via z(η, t) = πτ r(η, t) and y(η, t) = v(η, t), respectively. Due to the conservation of the number of neurons, the network dynamics obey the following continuity equation:

∂_t ρ + ∂_V [ ((V^2 + η + I)/τ + J r_eff) ρ ] = 0, (3)

where r_eff = (1/N) Σ_{j=1}^{N} X_j U_j S_j is the effective mean-field network activity that arrives at each neuron. By inserting eq. (2) into eq. (3), it can be shown that the dynamics of z(η, t) and y(η, t) obey

τ ∂_t w(η, t) = i [ −w(η, t)^2 + η + I(t) + Jτ r_eff ], (4)

for any η, with w(η, t) = z(η, t) + i y(η, t). Without synaptic STP, i.e., for U_i(t) = X_i(t) = 1, and assuming a Lorentzian distribution of the background excitabilities across neurons, eq. (4) can be solved for the average firing rate r and average membrane potential v in the network:

τ dr/dt = Δ/(πτ) + 2rv, (5a)
τ dv/dt = v^2 + η̄ + I(t) + Jrτ − (πrτ)^2. (5b)

Here, we additionally used r_eff = (1/N) Σ_{j=1}^{N} S_j = r, and η̄ and Δ represent the center and HWHM of a Lorentzian distribution over the background excitabilities η_i:

g(η) = (1/π) Δ / ((η − η̄)^2 + Δ^2). (6)

However, for non-constant X and U, solving eq. (4) for r and v becomes a non-trivial problem.
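In the STP-free case, the closed system (5a), (5b) is low-dimensional enough to integrate with a few lines of code. The sketch below uses an explicit Euler scheme and hypothetical parameter values; at a stable equilibrium the right-hand sides of eqs. (5) vanish, which by eq. (5a) implies v = −Δ/(2πτr).

```python
import numpy as np

def integrate_fre(eta_bar=1.0, delta=1.0, J=2.0, tau=1.0,
                  dt=1e-3, t_end=200.0, r=0.1, v=0.0):
    """Euler integration of the firing-rate equations (5a), (5b) with I(t) = 0."""
    for _ in range(int(t_end / dt)):
        dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
        dv = (v**2 + eta_bar + J * r * tau - (np.pi * r * tau)**2) / tau
        r, v = r + dt * dr, v + dt * dv
    return r, v
```

For these parameter values the trajectory settles onto a stable equilibrium; for other parametrizations, eqs. (5) can also exhibit bistability or damped oscillations [29].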
In this case, r_eff = (1/N) Σ_{j=1}^{N} X_j U_j S_j ≠ r and, hence, r_eff must be calculated to arrive at closed-form equations for r and v. Two major problems have to be solved in this regard: (a) The effective network input r_eff has to be expressed via mean-field variables such as the average firing rate r and average depression and facilitation variables X* and U*. If this cannot be done, the mean-field equations would still contain neuron-specific variables, thus increasing their dimensionality dramatically. (b) The mean-field equations for the average depression x = (1/N) Σ_{i=1}^{N} X_i and facilitation u = (1/N) Σ_{i=1}^{N} U_i have to be solved. However, the evaluation of these sums requires solving the coupled, non-linear differential equations (1b) and (1c), which so far has only been achieved for stationary network input [39]. In the following section, we will address problem (b) and compare our results with recently proposed mean-field equations for a similar synaptic STP model [41]. The remainder of this article will address different attempts to solve problem (a).

III. ANALYTICAL SOLUTIONS FOR MICROSCOPIC STP

As argued in the previous section, finding closed-form mean-field equations for the system given by equations (1) requires calculating the average depression x = (1/N) Σ_{i=1}^{N} X_i and average facilitation u = (1/N) Σ_{i=1}^{N} U_i across neurons. We start by considering a steady-state solution of (1), where each neuron has a static firing rate S_i(t) = const. In this scenario, solutions for the microscopic STP variables can be obtained analytically [39]. The evolution equations for synaptic short-term depression X_i and short-term facilitation U_i are given by eq. (1b) and eq. (1c), respectively. For the remainder of this section, we will omit the neuron index i for brevity. The (relative) strength of a synapse is given by 0 < U^+ X^- < 1.
In the case of stationary firing activity, the neuron under consideration spikes periodically with a period T, which corresponds to a firing rate of 1/T. We denote U just before the neuron emits its n-th spike by U_n^-, and just after the n-th spike by U_n^+. Solving the homogeneous part of the model equation, we obtain

U_{n+1}^- = U + (U_n^+ − U) exp(−T/τ_u), (7)

and the change of U due to a spike is found to be

U_{n+1}^+ = U_{n+1}^- + U (1 − U_{n+1}^-). (8)

These expressions can be reformulated into the following iteration scheme:

U_{n+1}^+ = U + (1 − U)(U + (U_n^+ − U) e^{−T/τ_u}), (9a)
U_{n+1}^- = U + (1 − U) U_n^- e^{−T/τ_u}. (9b)

For the depression variable X, we find the following set of equations:

X_{n+1}^- = 1 + ((1 − αU_n^+) X_n^- − 1) e^{−T/τ_x}, (10a)
X_{n+1}^+ = (1 − αU_{n+1}^+)(1 + (X_n^+ − 1) e^{−T/τ_x}). (10b)

In the stationary case, i.e., in the absence of transient dynamics, stationary solutions U*^+ = U_n^+, U*^- = U_n^-, X*^+ = X_n^+ and X*^- = X_n^-, ∀n, can be found:

U*^+ = (U + U(1 − U)(1 − exp(−T/τ_u))) / (1 − (1 − U) exp(−T/τ_u)), (11a)
U*^- = U / (1 − (1 − U) exp(−T/τ_u)), (11b)
X*^+ = (1 − αU*^+)(1 − exp(−T/τ_x)) / (1 − (1 − αU*^+) exp(−T/τ_x)), (11c)
X*^- = (1 − exp(−T/τ_x)) / (1 − (1 − αU*^+) exp(−T/τ_x)). (11d)

It is interesting to note that these results differ from the results obtained when the firing rate is assumed to be a constant:

U* = (U + U τ_u/T) / (1 + U τ_u/T), (12a)
X* = 1 / (1 + ατ_x U*/T), (12b)

which is obtained by setting S(t) = 1/T and dU/dt = dX/dt = 0 in the model equations.

In figure 2 we compare these solutions for varying values of T. As can be seen, the results for constant firing rates r are more closely related to the adaptation variables before spikes than after spikes.
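The spike-to-spike map (9)-(10), its fixed points (11), and the constant-rate solution (12) are straightforward to check numerically. The sketch below (parameter values hypothetical; U0 denotes the baseline efficacy U) iterates the map to convergence and compares it against both closed-form solutions.

```python
import math

def stp_spike_driven(T, U0, alpha, tau_x, tau_u, n_iter=2000):
    """Iterate the spike-to-spike map, eqs. (9)-(10), to its fixed point."""
    eu, ex = math.exp(-T / tau_u), math.exp(-T / tau_x)
    Um, Up, Xp, Xm = U0, U0, 1.0, 1.0
    for _ in range(n_iter):
        Um = U0 + (1.0 - U0) * Um * eu          # eq. (9b)
        Up = Um + U0 * (1.0 - Um)               # facilitation jump, eq. (8)
        Xm = 1.0 + (Xp - 1.0) * ex              # relaxation between spikes
        Xp = (1.0 - alpha * Up) * Xm            # depression jump
    return Up, Um, Xp, Xm

def stp_stationary(T, U0, alpha, tau_x, tau_u):
    """Closed-form fixed points, eqs. (11a)-(11d)."""
    eu, ex = math.exp(-T / tau_u), math.exp(-T / tau_x)
    Um = U0 / (1.0 - (1.0 - U0) * eu)                                    # (11b)
    Up = (U0 + U0 * (1.0 - U0) * (1.0 - eu)) / (1.0 - (1.0 - U0) * eu)   # (11a)
    Xm = (1.0 - ex) / (1.0 - (1.0 - alpha * Up) * ex)                    # (11d)
    Xp = (1.0 - alpha * Up) * Xm                                         # (11c)
    return Up, Um, Xp, Xm

def stp_rate_based(T, U0, alpha, tau_x, tau_u):
    """Constant-rate approximation, eqs. (12a)-(12b), with r = 1/T."""
    Us = (U0 + U0 * tau_u / T) / (1.0 + U0 * tau_u / T)  # (12a)
    Xs = 1.0 / (1.0 + alpha * tau_x * Us / T)            # (12b)
    return Us, Xs
```

For the parameters used here, the constant-rate values track the pre-spike fixed points (11b), (11d) more closely than the post-spike ones, consistent with Fig. 2.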
This shows that it matters for microscopic STP whether exact spike timings and the time of evaluation of U and X are considered or not, a finding which we expect to hold for non-stationary firing rates S(t) as well. The expressions derived above can be used to evaluate the mean-field quantities x and u, if the spike times or firing rates of all neurons are known. Alternatively, they can be used to evaluate r_eff directly. In the following sections, we will address the problem of evaluating r_eff to derive the mean-field equations for equations (1). We will derive two different mean-field models, for which the results of this section will be used to refine the mean-field descriptions of the pre-synaptic STP dynamics. In this context, we will evaluate how eq. (11) vs. eq. (12) affect the mean-field dynamics of the QIF network.

IV. MEAN-FIELD DERIVATION UNDER A POISSONIAN ASSUMPTION OF NEURAL DYNAMICS

Recently, an approach for the derivation of a mean-field model for the system defined by eqs. (1) has been presented in [37]. The authors used a mean-field approximation of the macroscopic quantities x and u, averaged over all neurons in the network, that has been proposed in [41]. In that article, a mean-field approximation of the effective network input

r_eff(t) = (1/N) Σ_{j=1}^{N} U_j^- X_j^- S_j (13)

is derived, where X_j^- and U_j^- are given by eq. (1b) and eq. (1c), respectively, with the modification that U_j^+ is replaced by U_j^-. Whereas the original STP model formulation described in [39] uses U_j^+ X_j^- as the effective weight of a synapse at the time of an incoming spike, Schmutz et al. use U_j^- X_j^- instead [41]. As shown in Fig. 2C, these two choices can lead to substantial differences in the synaptic weight for small input rates. Since an effective synaptic weight of U_j^- X_j^- is also used in [37], we will discuss the validity of their mean-field description for both the spiking neural network given by eq. (1) and the spiking neural network considered in [37]. Henceforth, we will refer to the former as SNN_pre and to the latter as SNN_pre II.

FIG. 2. Comparison of the microscopic adaptation variables before and after spikes for discrete spikes, and for constant firing rates r. The inter-spike interval T is varied. The constant firing rate is expressed as r = 1/T. Parameters: τ_x = 50.0, τ_u = 20.0.

Under the assumption that all S_i follow independent Poisson processes, the effective network input in SNN_pre II is approximated by r_eff ≈ u(t) x(t) r(t), where r(t) is the average firing rate across neurons at time t. As explained in [41], this mean-field approximation rests on two assumptions: (I) Synapse indices can be randomized, i.e., the spike times matter, but not the synapses at which those spikes occur. (II) The average impact of a spike on X_i and U_i, ∀i, can be approximated by sampling from Gaussian distributions around the current values of x and u. A first-order mean-field approximation is then given by

τ_x dx/dt = 1 − x − ατ_x x u r, (14a)
τ_u du/dt = U − u + U τ_u (1 − u) r. (14b)

As can be seen from these equations, both x and u are driven by the average firing rate r = (1/N) Σ_{j=1}^{N} S_j of the QIF network. This allows one to apply the Lorentzian ansatz in the same way as demonstrated for post-synaptic depression in [34]. The dynamics of the complex variable w(η, t) can be expressed as

τ ∂_t w(η, t) = i [ −w(η, t)^2 + η + I(t) + Jτ x u r ], (15)

and by evaluating eq. (15) at πτ r(t) + i v(t) = w(η̄ − iΔ, t), one finds that the dynamics of r and v follow

τ dr/dt = Δ/(πτ) + 2rv, (16a)
τ dv/dt = v^2 + η̄ + I(t) + J x u r τ − (πrτ)^2. (16b)

We will refer to the set of mean-field equations given by (14) and (16) as FRE_Poisson, where FRE stands for firing rate equations. It is important to note, however, that FRE_Poisson cannot be considered exact.
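As a reference for the comparisons below, FRE_Poisson, i.e., the coupled system (14) and (16), can be integrated with the same explicit Euler scheme as before. Parameter values in this sketch are hypothetical and do not correspond to any of the figures; U0 denotes the baseline efficacy U.

```python
import numpy as np

def integrate_fre_poisson(eta_bar=1.0, delta=1.0, J=2.0, tau=1.0,
                          tau_x=50.0, tau_u=20.0, U0=0.2, alpha=0.1,
                          dt=1e-3, t_end=100.0):
    """Euler integration of FRE_Poisson, eqs. (14) and (16), with I(t) = 0."""
    r, v, x, u = 0.1, 0.0, 1.0, U0
    for _ in range(int(t_end / dt)):
        dr = (delta / (np.pi * tau) + 2.0 * r * v) / tau
        dv = (v**2 + eta_bar + J * x * u * r * tau - (np.pi * r * tau)**2) / tau
        dx = (1.0 - x - alpha * tau_x * x * u * r) / tau_x   # eq. (14a)
        du = (U0 - u + U0 * tau_u * (1.0 - u) * r) / tau_u   # eq. (14b)
        r, v = r + dt * dr, v + dt * dv
        x, u = x + dt * dx, u + dt * du
    return r, v, x, u
```

Note that since 0 < x u ≤ 1, the effective coupling J x u in eq. (16b) is always weaker than the STP-free coupling J in eq. (5b).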
While assumption (I) holds for a network of independent, homogeneous Poisson neurons (hence called the Poissonian assumption), it does not hold in general [41]. Therefore, the mean-field derivation essentially approximates a heterogeneous network of deterministic QIF neurons by a homogeneous network of stochastic Poisson neurons. Furthermore, the first-order approximation given by eq. (14a) and eq. (14b) ignores the non-linear interaction between X_i and U_i in eq. (1b). As shown in [41], considering second-order dynamics can improve the accuracy of the mean-field approximation, especially in the vicinity of transient inputs to the network. Adding second-order dynamics would involve sampling from a multivariate Gaussian distribution over (x, u), however. This means that the mean-field derivation could no longer be considered deterministic and, hence, also not exact.

Still, it has been shown in [37] that FRE_Poisson can accurately describe the mean-field dynamics of SNN_pre II under certain conditions. To test whether this holds in general, we compared the dynamics of the two models for three different STP parametrizations, leading to synapses that are either depressing, facilitating, or depressing and facilitating. We solved the initial value problem of both SNN_pre II and FRE_Poisson via an explicit Euler scheme with a fixed integration step size. As can be seen in Fig. 3A for the average depression x, there is a considerable offset between the mean-field model (orange) and the average of X_i evaluated across neurons in the QIF network (black). With respect to purely facilitating synapses, we find that the mean-field model provides a reasonable approximation of the QIF network. Even though offsets can be observed between the mean-field model and the QIF network (see dynamics of v in Fig. 3B), the qualitative behavior of the QIF network is captured well by the mean-field model.
This holds both U i X i A: Short-term depression 50 100050100 n e u r o n r − v . . x . . . u B: Short-term facilitation 50 100050100 0 20 40 60 80 1001 . . . . − . − . . . . . 05 SNN pre IIFRE Poisson time . . . 905 0 25 50 75 100050100 C: Short-term depr. + facil. 50 100050100 0 20 40 60 80 1000 . . . − . − . − . . . . . . . . . FIG. 3. Evolution of the state variables of a QIF network and a mean-field approximation thereof for three different typesof synaptic short-term plasticity ( A: depression, B: facilitation, combined C: depression and facilitation). The first two rowsshow the distribution over the synaptic state X j U j and the spiking activity of 100 randomly selected neurons, respectively.The last 4 rows show a comparison between the spiking neural network (black) and the mean-field approximation (orange)for the average firing rate r , the average membrane potential v , the average depression x , and the average facilitation u . Inthe SNN, averages were calculated across neurons i . Grey-shaded areas depict time intervals in which a rectangular input of I ( t ) = 2 . X i U i .Parameters for A : U = 1 . α = 0 . 1. Parameters for B : U = 0 . α = 0 . 0. Parameters for C : U = 0 . α = 0 . 1. Other modelparameters: τ = 1 . 0, ∆ = 2 . 0, ¯ η = − . J = 15 . √ ∆, τ x = 50 . τ u = 20 . N = 10000. in the steady-state regimes and during transient behavioraround the on- and offsets of the input I ( t ). In the caseof synapses with short-term depression and facilitation,the mean-field model expresses a substantial mismatch tothe QIF network dynamics again. For example, Fig. 3Cshows that the dynamics of the average firing rate r ex-press focus dynamics for FRE Poisson after the onset of thefirst stimulus, whereas the average firing inside SNN pre II does not show such behavior. In the upper row ofFig. 
3, we show the evolution of the distribution over the combined synaptic state X_i U_i in the microscopic model. We find that this distribution tends to express multi-modalities in regions with a strong mismatch between the mean-field and the microscopic model. These results suggest that the mean-field model can approximate the low-dimensional dynamics of the QIF network only if X_i and U_i express uni-modal, narrow distributions. This finding makes intuitive sense, since the mean-field approximation of the dynamics of U_i and X_i given by eqs. (14) represents a first-order approximation. Our results confirm that this approximation only performs well if the mean over X_i and U_i contains much information about the actual underlying distributions. Thus, by providing these counter-examples, we have shown that the mean-field model resulting from the Poisson assumption does not provide an exact mean-field description of the QIF network.

Since we are actually interested in the mean-field equations for SNN_pre given by eqs. (1), we now examine whether FRE_Poisson can nonetheless provide an approximation of SNN_pre under some conditions. To gain further insight into the relationship between the mean-field equations and the QIF network, we asked whether there exists a QIF network description for which the mean-field model given by (14a, 14b, 16a, 16b) can be considered exact. Indeed, such a network exists and is easy to find. Since x and u are only driven by the mean-field firing rate r, we can simply introduce microscopic variables U_i and X_i that enter the microscopic evolution equation for V_i in the same way as the macroscopic evolution equation for v (16b), and that are also driven by the mean-field activity of the QIF network:

τ dV_i/dt = V_i^2 + η_i + I(t) + Jτ U_i X_i s, (17a)
τ_x dX_i/dt = 1 − X_i − α X_i U_i s τ_x, (17b)
τ_u dU_i/dt = U − U_i + U (1 − U_i) s τ_u, (17c)
s = (1/N) Σ_{j=1}^{N} Σ_k δ(t − t_j^k). (17d)

We will refer to this network as SNN_post, since its STP variables are driven by the mean-field activity s rather than by neuron-specific spiking. We compared the behavior of SNN_pre, SNN_post, and FRE_Poisson for two levels of heterogeneity, Δ = 0.01 and Δ = 0.4.
The latter introduces two different levels of firing rate heterogeneity to the QIF network. We expected this firing rate heterogeneity to directly affect the broadness of the distributions over X_i and U_i. If that is indeed the case, the mean-field model should provide a better description of the SNN_pre dynamics for Δ = 0.01 than for Δ = 0.4.

For facilitating synapses, bifurcation analysis of the mean-field model in η̄ revealed two fold bifurcations. These fold bifurcations mark the outer limits of a bi-stable regime in which a stable high-activity focus and a stable low-activity node can co-exist, separated by a saddle-focus. Indeed, we find that the steady-state behavior of the mean-field model and SNN_post can be forced towards either of the two stable equilibria via extrinsic stimulation. In SNN_pre, however, we failed to identify the bi-stable regime for Δ = 0.4. In Fig. 4A, it can be seen that the system behavior is only governed by a high-activity focus, even though the mean-field model predicts the co-existence of a low-activity stable node for η̄ = −0.6. Thus, the mean-field model fails to predict the behavior of the QIF network with pre-synaptic STP in this case. However, in the case of very low heterogeneity, we identified both stable states in SNN_pre and found good agreement with the mean-field model (see Fig. 4B).

For depressing synapses, we found regimes of synchronized oscillations that emerge via Andronov-Hopf bifurcations for small as well as for high firing rate heterogeneity (see Fig. 4C and D). Again, these oscillations could be induced in FRE_Poisson as well as in SNN_post, with a very good match between the two.

FIG. 4. Comparison between FRE_Poisson (orange), SNN_pre (black), and SNN_post (purple) for four different parameter sets (A-D). The first column shows 1D bifurcation diagrams in η̄. Grey triangles represent fold bifurcations and green circles represent Andronov-Hopf bifurcations. Blue dashed lines mark the value of η̄ that was used for the firing rate and spike raster plots in the second column. Spike raster plots show the spiking activity of 50 randomly selected neurons of SNN_pre. Grey-shaded areas represent time intervals during which an extrinsic input I(t) was applied to the models. Parameters: A: α = 0.0, Δ = 0.4; B: α = 0.0, Δ = 0.01; C: U = 1.0, α = 0.04, Δ = 0.4; D: U = 1.0, α = 0.04, Δ = 0.01. Remaining model parameters: J = 8.0, τ_u = 20.0, τ_x = 50.0, τ = 1.0, N = 10000.

Consistent with our findings for facilitating synapses, SNN_pre expressed oscillations only for Δ = 0.01 (see Fig. 4D). For higher firing rate heterogeneity (Δ = 0.4), we could not identify such oscillations in SNN_pre at η̄ = −0.85 (see Fig. 4C). Thus, our results confirm that FRE_Poisson is indeed an exact mean-field equation of SNN_post. Furthermore, they demonstrate that SNN_pre and SNN_post can behave both very differently and very similarly, depending on the firing rate heterogeneity inside the network. In our simulations, we were able to control this heterogeneity successfully via the parameter Δ. In regimes of low firing rate heterogeneity, SNN_pre and SNN_post expressed similar behavior, thus allowing for a good approximation of the mean-field dynamics of SNN_pre via FRE_Poisson. In regimes of high firing rate heterogeneity, the opposite was the case. In the next sections, we investigate whether more accurate mean-field models of QIF networks with pre-synaptic STP can be derived and, if so, how they perform near the parameter regimes described in this section.

V.
MULTI-POPULATION APPROXIMATION OF DISTRIBUTED PARAMETERS IN THE QIF NETWORK

In the previous section, we found that FRE_Poisson is in good agreement with the dynamics of SNN_pre when the distribution of η_i is particularly narrow, i.e., when Δ ≪ 1. Here, we exploit this fact and approximate the mean-field dynamics by dividing the microscopic network into sub-networks with narrow distributions in η_i. In other words, the Cauchy distribution with {η̄, Δ} is divided into a set of M Cauchy distributions with {η̄_m, Δ_m}, m = 1, ..., M, such that

(Δ/π) / ((η − η̄)^2 + Δ^2) ≈ (1/M) Σ_{m=1}^{M} (Δ_m/π) / ((η − η̄_m)^2 + Δ_m^2). (18)

The resulting set of equations for the evolution of the mean-field variables is then given by

τ dr_m/dt = Δ_m/(πτ) + 2 r_m v_m,
τ dv_m/dt = v_m^2 + η̄_m + I(t) + (Jτ/M) Σ_{n=1}^{M} x_n u_n r_n − (πr_m τ)^2,
dx_m/dt = (1 − x_m)/τ_x − α u_m x_m r_m,
du_m/dt = (U − u_m)/τ_u + U (1 − u_m) r_m. (19)

We will refer to this set of mean-field equations as FRE_mpa, for multi-population approximation. One assumption we make here is that each sub-network contains the same number of neurons, which means that the weights for each sub-network are the same, and the mean-field variables can be obtained by computing the mean y = (1/M) Σ_{m=1}^{M} y_m, where y represents the mean-field variable under consideration. The parameters η̄_m and Δ_m are chosen as follows:

η̄_m = η̄ + Δ tan[(π/2)(2m − M − 1)/(M + 1)], (20)
Δ_m = (Δ/2) tan[(π/2)(2m − M)/(M + 1)] (21)
    − (Δ/2) tan[(π/2)(2m − M − 2)/(M + 1)]. (22)

The density of the parameters η̄_m follows the Cauchy distribution, and the Δ_m are chosen such that the half-widths approximately match the distances between the centers of the distributions of the sub-networks, i.e., η̄_{m+1} − η̄_m ≈ Δ_{m+1} + Δ_m. The results are shown in figure 5(a-c).
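The sub-network parametrization in eqs. (20)-(22) amounts to placing the centers η̄_m at the m/(M+1) quantiles of the target Cauchy distribution and giving each sub-network half the width of its quantile cell. A minimal sketch (with an illustrative numerical check of eq. (18) at the center of the distribution; the parameter values are hypothetical):

```python
import numpy as np

def subpopulation_grid(eta_bar, delta, M):
    """Centers and half-widths of the M sub-networks, eqs. (20)-(22)."""
    m = np.arange(1, M + 1)
    eta_m = eta_bar + delta * np.tan(0.5 * np.pi * (2 * m - M - 1) / (M + 1))
    delta_m = 0.5 * delta * (np.tan(0.5 * np.pi * (2 * m - M) / (M + 1))
                             - np.tan(0.5 * np.pi * (2 * m - M - 2) / (M + 1)))
    return eta_m, delta_m

def cauchy(eta, center, hwhm):
    """Lorentzian density, cf. eq. (6)."""
    return hwhm / (np.pi * ((eta - center)**2 + hwhm**2))

eta_m, delta_m = subpopulation_grid(eta_bar=-2.0, delta=0.5, M=100)
# mixture approximation of the target density at its center, eq. (18)
mix = np.mean(cauchy(-2.0, eta_m, delta_m))
target = cauchy(-2.0, -2.0, 0.5)
```

For interior sub-networks, the half-widths indeed satisfy η̄_{m+1} − η̄_m ≈ Δ_m + Δ_{m+1} up to a small discretization error; near the tails, where the quantile function is strongly curved, the approximation is coarser.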
As can be seen, even for large M the adaptation variables still show a small discrepancy with the result obtained from the spiking neural network SNN pre. We hypothesise that this difference is due to the different results obtained for the adaptation variables when the firing rate is assumed constant versus when it is assumed to be a spike train with constant ISI, as shown in Fig. 2. In other words, we expect that accounting for the fact that FRE Poisson was derived for SNN pre II instead of SNN pre will reduce the difference. As the adaptation variables are in essence time-averaged quantities, they could be posed as x = (X⁻ + X⁺)/2 and u = (U⁻ + U⁺)/2. However, with the update rules U⁺ = U⁻ + U(1 − U⁻) and X⁺ = X⁻ − αU⁺X⁻, this would yield out-of-bound values for X⁻ at x = 1, and for U⁻ at u = 0. The results shown in Figure 2 suggest that the mean-field variables are closest to X⁻ and U⁻, which is why we set X⁻ ≈ x and U⁻ ≈ u. The update rule for U⁺ then gives the following correction term:

U⁺(u) ≈ u + U(1 − u).  (23)

Inserting this term into the mean-field equations of FRE mpa produces a closer match of the mean-field variables with the results of the microscopic model SNN pre, see figure 5(d-f).

As a final test of the predictive accuracy of FRE MPA, we examined how well the model can predict the onset of oscillations in the QIF network. Using bifurcation analysis, we identified the Hopf bifurcation leading to the oscillations in Fig. 4E and investigated the locus of that Hopf bifurcation in the 2D parameter space spanned by ¯η and ∆. We did this for both FRE Poisson and FRE MPA with M = 100 mean-field populations. As can be seen in Fig. 6A, the difference between the Hopf curves predicted by FRE Poisson and FRE MPA becomes larger as ∆ increases. For ∆ = 0.4, FRE Poisson predicts stable oscillations to exist at ¯η = −0.85, which we already failed to find in the QIF network in Fig. 4D. FRE MPA, however, predicts the existence of a stable node at ¯η = −0.85 and the existence of stable oscillations within a range of ¯η values that includes ¯η = −0.62. To test whether the oscillations predicted by FRE MPA indeed exist in SNN pre, we performed numerical simulations in which we initialized the QIF network at ¯η = −0.85 and then forced it towards ¯η = −0.62 via extrinsic stimulation.

FIG. 5. Comparison of the mean-field variables (r, x, u) of the microscopic spiking neural network and the mean-field model of the spiking neural network divided into M sub-networks with narrow distributions (multi-population approximation, MPA), shown for different values of M. Grey shaded areas indicate time intervals with I(t) = 3.0. A: MPA with standard mean-field description. B: MPA with correction term for U⁺. Parameters: τ = 1.0, ∆ = 2.0, J = 15.0√∆, τ_x = 50.0, τ_u = 20.0, N = 10000.

As can be seen in Fig. 6B, the QIF network expressed steady-state behavior for ¯η = −0.85 and started to oscillate when pushed to ¯η = −0.62. Hence, FRE MPA correctly predicted the existence of oscillatory bursts in the QIF network for M = 100, but not for M = 1, for which FRE MPA reduces to FRE Poisson. The bursts have similar properties to the ones found in QIF networks with post-synaptic plasticity [34] and can be expected to result from the interaction between synaptic short-term depression and recurrent excitation via the network. Comparing the firing rate dynamics of FRE MPA and SNN pre in Fig. 6 reveals a slight difference between the oscillation periods of the mean-field model and the QIF network. This difference shows that FRE MPA cannot be considered an exact mean-field model, even for M = 100. Still, we find that it captures the phase transitions inside SNN pre well and thus provides a reasonable trade-off between accuracy and computational complexity.

VI.
ADIABATIC APPROXIMATION OF STP DYNAMICS

For simplification, we consider synapses with pure short-term depression in this section, since we showed in section IV that the mismatch between the mean-field model FRE Poisson and the QIF networks SNN pre and SNN pre II could be reproduced in this simpler case as well. We thus consider the microscopic system given by

τ V̇_i = V_i² + η_i + I(t) + (Jτ/N) Σ_{j=1}^{N} X⁻_j S_j,  (24)
τ_x Ẋ_i = 1 − X_i − α X⁻_i S_i τ_x,  (25)

with the spike train S_i = Σ_k δ(t − t_i^k).

FIG. 6. Phase transitions between steady-state and oscillatory regimes in FRE Poisson and FRE MPA. A: 2D bifurcation diagram of the Hopf curve in FRE Poisson (orange) and FRE MPA (blue). The arrow represents the phase transition introduced by I(t) in either model. The black square represents the Bogdanov-Takens bifurcation from which the Hopf bifurcations emerge. B: The first row shows the simulated firing dynamics of the spiking neural network and both mean-field models. The second row shows the corresponding spiking activity of 100 randomly selected neurons of SNN pre. Parameters: U = 1.0, τ = 1.0, ∆ = 0.4, ¯η = −0.85, J = 8.0, τ_x = 50.0, τ_u = 20.0, N = 10000, M = 100; I(t) = 0.23 during the stimulation interval and I(t) = 0 otherwise.

To solve eq. (28) for r and v, the effective firing rate r_eff = ∫_{−∞}^{∞} (G ∗ r(η)) r(η) g(η) dη must be determined, which requires evaluating the product between the single-cell firing rate and a convolution of itself. This makes it difficult to find a closed-form solution for r and v, since the synaptic depression kernel G cannot simply be pulled out of the convolution integral. The simplest approximation of this problem is to replace the convolution integral by a mean synaptic depression, as is done under the Poissonian assumption. Alternatively, we assume that the dynamics of X_i are slow in comparison to the dynamics of v_i. For the relaxation dynamics of X_i, this assumption is met if τ_x ≫ τ.
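As a concrete reference for the approximations that follow, the microscopic system (24)-(25) can be simulated directly. The sketch below is a minimal Euler scheme assuming delta-pulse coupling and a finite spike threshold as a proxy for the infinite QIF peak; the threshold, step size, and all parameter values are illustrative assumptions, not the implementation used in the paper. With U = 1, each spike depresses the efferent synapse via X⁺ = (1 − α)X⁻.

```python
import numpy as np

# Minimal Euler sketch of eqs. (24)-(25): QIF neurons with purely
# depressing synapses and delta-pulse coupling (illustrative parameters).
rng = np.random.default_rng(0)

N, dt, T = 500, 1e-3, 50.0
tau, tau_x, alpha, J = 1.0, 50.0, 0.05, 8.0
eta_bar, Delta = -0.85, 0.4
v_peak = 100.0                       # finite proxy for the infinite QIF threshold

# Cauchy-distributed background currents via quantile sampling (reproducible)
k = (np.arange(1, N + 1) - 0.5) / N
eta = eta_bar + Delta * np.tan(np.pi * (k - 0.5))

V = rng.uniform(-2.0, 0.0, N)        # membrane potentials
X = np.ones(N)                       # per-neuron synaptic depression variables
n_spikes = 0

for _ in range(int(T / dt)):
    spiked = V >= v_peak
    n_spikes += spiked.sum()
    # each spike of neuron j kicks every neuron by (J / N) * X_j,
    # using the depression value X- just before the spike
    kick = (J / N) * X[spiked].sum()
    V[spiked] = -v_peak              # reset
    X[spiked] *= (1.0 - alpha)       # X+ = (1 - alpha) X-  (U = 1, pure depression)
    V += dt / tau * (V ** 2 + eta) + kick
    X += dt / tau_x * (1.0 - X)

mean_rate = n_spikes / (N * T)       # population-averaged firing rate
```

Such a simulation also makes visible the fast, spike-driven component of eq. (25) that coexists with the slow recovery on the time scale τ_x.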
We note here, however, that the spiking activity of the neuron also introduces a relatively fast time scale to eq. (25), which may violate our assumption. Still, under this assumption, we can apply an adiabatic approximation to the system and consider the dynamics of the fast sub-system for effectively constant adaptation (see [34, 44] for a similar approach):

τ V̇_i = V_i² + η_i + I(t) + (Jτ/N) Σ_{j=1}^{N} X⁻_j S_j,  (29)

with S_i = Σ_k δ(t − t_i^k). The frozen adaptation values X_i U_i are, however, confined to the interval [0, 1], and hence a Lorentzian distribution over them cannot be assumed. In the upper row of Fig. 3, we show the evolution of the distribution over X_i U_i for three different parametrizations, corresponding to a purely depressing synapse, a purely facilitating synapse, and a synapse with facilitation and depression acting on different time scales. Importantly, the evolution of the distribution reveals that it is not always uni-modal. For purely depressing synapses, it clearly expresses an at least bi-modal distribution over the whole time course. Thus, finding an appropriate form of h that holds in general is a highly non-trivial problem for which we did not find a solution.

To further simplify the problem, we assume that the depression of a neuron's efferent synapses X_i is merely a function of the firing rate r_i of the same neuron. The stationary firing rate of a QIF neuron in response to an external input I_in is √I_in/π if I_in > 0, and zero otherwise. Hence, the distribution of firing rates for a given input is (in the stationary case) given by

r(η; I_in) = H(η + I_in) √(η + I_in)/π,  (32)

where H is the Heaviside step function.
Therefore, for any given mean-field firing rate r one can find a unique constant I_r for which

r = ∫_{−∞}^{∞} r(η; I_r) g(η) dη,  (33)

which allows us to translate the mean-field variable r into the distribution r(η; I_r). Similarly, we can use the assumption that X_i is a function of r_i to translate the mean-field variable for synaptic depression, x, into the distribution x(η; I_x). First, we use the rate relationship given by eq. (12) to approximate

x(η; I_x) = 1 / (1 + ατ_x r(η; I_x)),  (34)

for any given input I_x, and then define

x = ∫_{−∞}^{∞} g(η) / (1 + ατ_x r(η; I_x)) dη.  (35)

Alternatively, we can use eq. (11) to approximate the distribution x(η) in the spiking scenario:

x(η; I_x) = [1 − exp(−1/(τ_x r(η; I_x)))] / [1 − (1 − α) exp(−1/(τ_x r(η; I_x)))],  (36)

which yields

x = ∫_{−∞}^{∞} [1 − exp(−1/(τ_x r(η; I_x)))] g(η) / [1 − (1 − α) exp(−1/(τ_x r(η; I_x)))] dη.  (37)

Having obtained I_r and I_x, we can ultimately compute

r_eff = ∫_{−∞}^{∞} r(η; I_r) x(η; I_x) g(η) dη,  (38)

where x(η; I_x) is chosen either for the rate scenario (eq. (34)) or for the spike scenario (eq. (36)). This requires solving

r_eff = (∆/π) ∫_{min(−I_x, −I_r)}^{∞} √(η + I_r) / {(π + ατ_x √(η + I_x)) [(η − ¯η)² + ∆²]} dη,  (39)

in the rate scenario, and

r_eff = (∆/π²) ∫_{min(−I_x, −I_r)}^{∞} √(η + I_r) [1 − exp(−π/(τ_x √(η + I_x)))] / {[1 − (1 − α) exp(−π/(τ_x √(η + I_x)))] [(η − ¯η)² + ∆²]} dη,  (40)

in the spiking scenario.
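The inversion of eq. (33) and the evaluation of r_eff in both scenarios can be sketched with standard quadrature and root-finding. This is a hedged sketch: the parameter values are illustrative assumptions, and in practice such evaluations would be tabulated over a grid of (I_r, I_x) values, as described below for the look-up tables.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

eta_bar, Delta = -0.85, 0.4       # Cauchy distribution over eta (assumed values)
alpha, tau_x = 0.05, 50.0         # depression parameters (assumed values)

def g(eta):
    """Cauchy density over the background excitabilities eta."""
    return (Delta / np.pi) / ((eta - eta_bar) ** 2 + Delta ** 2)

def r_single(eta, I):
    """Stationary QIF firing rate for total input eta + I (eq. 32)."""
    return np.sqrt(max(eta + I, 0.0)) / np.pi

def mean_rate(I):
    """Mean-field rate generated by a common input I (rhs of eq. 33)."""
    return quad(lambda e: r_single(e, I) * g(e), -I, np.inf)[0]

def input_for_rate(r):
    """Invert eq. (33): find the unique I_r with mean_rate(I_r) = r."""
    return brentq(lambda I: mean_rate(I) - r, -50.0, 50.0)

def x_rate(eta, I_x):
    """Rate-scenario depression (eq. 34)."""
    return 1.0 / (1.0 + alpha * tau_x * r_single(eta, I_x))

def x_spike(eta, I_x):
    """Spike-scenario depression (eq. 36); x -> 1 where the neuron is silent."""
    r_x = r_single(eta, I_x)
    if r_x == 0.0:
        return 1.0
    e = np.exp(-1.0 / (tau_x * r_x))
    return (1.0 - e) / (1.0 - (1.0 - alpha) * e)

def r_eff(I_r, I_x, x_fun):
    """Effective firing rate (eq. 38) for either depression scenario."""
    lo = min(-I_r, -I_x)
    return quad(lambda e: r_single(e, I_r) * x_fun(e, I_x) * g(e), lo, np.inf)[0]

r_target = 0.5
I_r = input_for_rate(r_target)
r_eff_rate = r_eff(I_r, I_r, x_rate)     # here x is driven by the same input
r_eff_spike = r_eff(I_r, I_r, x_spike)
```

In the full scheme, I_x is obtained analogously by inverting eq. (35) or (37) for the current value of x; tabulating the maps (r, x) → (I_r, I_x) → r_eff then yields the look-up tables used to integrate the resulting mean-field model.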
We refer to this mean-field model as FRE aa, for adiabatic approximation. The integrals involved in this approximation are hard to evaluate analytically; we therefore solve them numerically for a range of values of I_r and I_x and create look-up tables for I_r, I_x, and r_eff, which allows us to integrate the resulting model equations numerically. In Figure 7, we compare the results of the mean-field model FRE aa with the dynamics of the spiking neural network SNN pre and the mean-field model FRE Poisson. We find that FRE aa is closer to the microscopic dynamics of SNN pre than FRE Poisson.

VII. CONCLUSION

In this work, we examined whether spiking neural networks with pre-synaptic short-term plasticity allow for the derivation of low-dimensional mean-field equations via the Lorentzian ansatz described in [29]. To this end, we considered heterogeneous, all-to-all coupled QIF networks with pre-synaptic STP dynamics, described by a well-known phenomenological model of synaptic short-term depression and facilitation [39]. For such QIF networks, other forms of STP have already been shown to be compatible with the Lorentzian ansatz [34]. In the case of pre-synaptic STP, we identified the evaluation of the effective network input r_eff as the central problem for a mean-field derivation via the Lorentzian ansatz. This effective network input represents a weighted sum of incoming spikes, where the weights are given by the pre-synaptic depression and facilitation terms.
We presented three different approaches to express r_eff and thus find the mean-field equations: first, a mean-field description of the STP dynamics via the Poissonian assumption used in [37]; second, a multi-population approximation that approximates distributed parameters inside the QIF network via a set of coupled sub-populations with different parametrizations; and third, an adiabatic approximation of the STP time scales.

For the first approach, the effective network input r_eff is approximated by a modulation of the mean-field firing rate with an average depression and an average facilitation. Our analysis revealed that this approach essentially approximates pre-synaptic STP with post-synaptic STP. We compared the behavior of QIF networks with pre- vs. post-synaptic STP and found that they can express substantial qualitative differences in their dynamics, especially when SNN pre expresses a high firing rate heterogeneity across neurons. Near such regimes, FRE Poisson follows the dynamics of SNN post and thus fails to capture the behavior of SNN pre.

FIG. 7. Comparison of the mean-field variables (r, v, x) of the microscopic spiking neural network (SNN pre), the mean-field model using the Poissonian assumption (FRE Poisson), and the mean-field models with approximation of the effective firing rate (FRE aa1, FRE aa2). Grey shaded areas indicate time intervals with I(t) = 3.0. Parameters: τ = 1.0, ∆ = 1.0, J = 15.0, τ_x = 50.0, τ_u = 20.0, N = 10000.

It is worth noting that the mean-field derivation via the Poissonian assumption works well for networks of homogeneous Poisson neurons with independent noise [41]. In such networks, single-cell firing rates can differ momentarily due to noise, but approach the same rate when averaged over increasing time intervals.
This is a very different scenario from the QIF network considered here, where the Lorentzian distribution over η_i causes substantial heterogeneity in the single-cell firing rates. Hence, the Poissonian approximation deteriorates as the heterogeneity of single-cell firing rates inside the QIF network increases. In [37], where the Poissonian approximation was first applied to a QIF network with pre-synaptic STP, the authors chose QIF networks with relatively low firing rate heterogeneity, leading to a good correspondence with the mean-field model. Here, we clarified that this correspondence does not generalize to regimes where the QIF network expresses more heterogeneous firing rates.

Populations of neurons that naturally express heterogeneous firing rates exist, for example, in sub-cortical structures. Single-cell firing rates in the globus pallidus have been shown to differ substantially across neurons [45, 46]. This firing rate heterogeneity has been suggested as an important de-synchronization mechanism of pallidal activity [47, 48]. Our results suggest that studying the mean-field dynamics of such a population via FRE Poisson carries the risk of substantial errors. We thus developed a mean-field model that addresses the issue of high firing rate heterogeneity. Since the distribution over η_i is the source of heterogeneity in the QIF network, we attempted to improve the mean-field model by considering a set of coupled sub-networks with distinct but narrow distributions over η_i. This way, the neurons inside each sub-population are parametrized such that they express a considerably lower firing rate heterogeneity than the overall network. We found that, by increasing the number of sub-populations, the mean-field model converges to the QIF network behavior. Of course, this approach leads to mean-field models of relatively high dimensionality. Still, we found that a mean-field model with 100 sub-populations (i.e.
a 400-dimensional model) accurately predicted phase transitions of the QIF network from steady-state to oscillatory behavior in a regime where FRE Poisson failed to do so. Thus, we argue that this multi-population approximation provides a flexible mean-field description, the dimensionality of which can be chosen based on the expected firing rate heterogeneity in the neural population under investigation.

As an alternative to the Poissonian approximation, we applied an adiabatic approximation to the QIF network, assuming slow STP dynamics in comparison to the QIF dynamics. This assumption is supported by experimental results that suggest depression and facilitation recovery time scales at least 10 times slower than typical membrane potential time scales [37, 39, 49]. Previously, this approach has been used successfully for the derivation of mean-field equations for QIF networks with spike-frequency adaptation [34]. By approximating the pre-synaptic STP dynamics as slow, they can be treated as constant, distributed quantities in the fast sub-system. This way, the STP dynamics do not have to be considered for the evaluation of r_eff. Instead, appropriate distributions over the STP constants have to be chosen. In our work, we derived analytical solutions of the microscopic STP dynamics in the stationary case and used these solutions to approximate the STP distributions. This approach can be considered exact for the description of steady-state solutions, but not for transient dynamics. That is, the network must have converged to an equilibrium for our approximation to be accurate. Still, we find that our adiabatic approximation provides a more accurate description of the mean-field dynamics of the QIF network than the Poissonian approximation, even for transient dynamics. A disadvantage of this method, however, is that we had to approximate the integrals over the STP distributions numerically and calculate r_eff via look-up tables.
This makes it more difficult to implement the model equations and perform parameter continuations.

In conclusion, we performed a thorough analysis of the problems that arise when attempting to derive mean-field equations for QIF networks with synaptic short-term plasticity. Though we did not find a set of exact, closed-form mean-field equations, we provided two different mean-field approximations that we found to be more accurate than a previously proposed mean-field model. Both of these mean-field approximations can capture the qualitative dynamics of the QIF network and can thus be used for future investigations of its macroscopic dynamics. Finally, our work provides insight into the distinct effects that pre- vs. post-synaptic STP can have on the mean-field dynamics of spiking neural networks.

ACKNOWLEDGEMENTS

R.G. was funded by the Studienstiftung des deutschen Volkes. H.S. was supported by the German Research Foundation (DFG (KN 588/7-1) awarded to T.R.K. via Priority Program 2041, "Computational Connectomics").

[1] E. Başar, Chaos in Brain Function: Containing Original Chapters by E. Başar and T. H. Bullock and Topical Articles Reprinted from the Springer Series in Brain Dynamics (Springer Science & Business Media, 2012).
[2] D. R. Chialvo, Emergent complex neural dynamics, Nature Physics, 744 (2010).
[3] G. Deco, V. K. Jirsa, and A. R. McIntosh, Emerging concepts for the dynamical organization of resting-state activity in the brain, Nature Reviews Neuroscience, 43 (2011).
[4] A. K. Engel, P. Fries, and W. Singer, Dynamic predictions: Oscillations and synchrony in top-down processing, Nature Reviews Neuroscience, 704 (2001).
[5] T. R. Knösche, C. Neuhaus, J. Haueisen, K. Alter, B. Maess, O. W. Witte, and A. D. Friederici, Perception of phrase structure in music, Human Brain Mapping, 259 (2005).
[6] T. Kujala, M. Tervaniemi, and E.
Schröger, The mismatch negativity in cognitive and clinical neuroscience: Theoretical and methodological considerations, Biological Psychology, 1 (2007).
[7] V. K. Jirsa, W. C. Stacey, P. P. Quilichini, A. I. Ivanov, and C. Bernard, On the nature of seizure dynamics, Brain, 2210 (2014).
[8] P. T. Sadtler, K. M. Quick, M. D. Golub, S. M. Chase, S. I. Ryu, E. C. Tyler-Kabara, B. M. Yu, and A. P. Batista, Neural constraints on learning, Nature, 423 (2014).
[9] J. D. Murray, A. Bernacchia, N. A. Roy, C. Constantinidis, R. Romo, and X.-J. Wang, Stable population coding for working memory coexists with heterogeneous neural dynamics in prefrontal cortex, Proceedings of the National Academy of Sciences, 394 (2017).
[10] A. Babloyantz and A. Destexhe, Low-dimensional chaos in an instance of epilepsy, Proceedings of the National Academy of Sciences, 3513 (1986).
[11] J. A. S. Kelso, Dynamic Patterns: The Self-organization of Brain and Behavior (MIT Press, 1995).
[12] A. Celletti and A. E. P. Villa, Low-dimensional chaotic attractors in the rat brain, Biological Cybernetics, 387 (1996).
[13] A. Bollimunta, Y. Chen, C. E. Schroeder, and M. Ding, Neuronal Mechanisms of Cortical Alpha Oscillations in Awake-Behaving Macaques, Journal of Neuroscience, 9976 (2008).
[14] A. Spiegler, T. R. Knösche, K. Schwab, J. Haueisen, and F. M. Atay, Modeling Brain Resonance Phenomena Using a Neural Mass Model, PLOS Computational Biology, e1002298 (2011).
[15] G. Deco and V. K. Jirsa, Ongoing Cortical Activity at Rest: Criticality, Multistability, and Ghost Attractors, Journal of Neuroscience, 3366 (2012).
[16] G. Deco, V. K. Jirsa, P. A. Robinson, M. Breakspear, and K. Friston, The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields, PLOS Computational Biology, e1000092 (2008).
[17] H. R. Wilson and J. D. Cowan, Excitatory and Inhibitory Interactions in Localized Populations of Model Neurons, Biophysical Journal, 1 (1972).
[18] F. H. Lopes da Silva, A. Hoeks, H. Smits, and L.
H. Zetterberg, Model of brain rhythmic activity, Kybernetik, 27 (1974).
[19] W. Freeman, Models of the dynamics of neural populations, Electroencephalography and Clinical Neurophysiology. Supplement, 9 (1978).
[20] B. H. Jansen and V. G. Rit, Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns, Biological Cybernetics, 357 (1995).
[21] P. A. Robinson, C. J. Rennie, and J. J. Wright, Propagation and stability of waves of electrical activity in the cerebral cortex, Physical Review E, 826 (1997).
[22] S. El Boustani and A. Destexhe, A Master Equation Formalism for Macroscopic Modeling of Asynchronous Irregular Activity States, Neural Computation, 46 (2009).
[23] M. A. Buice, J. D. Cowan, and C. C. Chow, Systematic Fluctuation Expansion for Neural Network Activity Equations, Neural Computation, 377 (2009).
[24] T. Schwalger, M. Deger, and W. Gerstner, Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size, PLOS Computational Biology, e1005507 (2017).
[25] E. Ott and T. M. Antonsen, Low dimensional behavior of large systems of globally coupled oscillators, Chaos: An Interdisciplinary Journal of Nonlinear Science, 037113 (2008).
[26] Y. Kuramoto, Collective synchronization of pulse-coupled oscillators and excitable units, Physica D: Nonlinear Phenomena, 15 (1991).
[27] T. B. Luke, E. Barreto, and P. So, Complete Classification of the Macroscopic Behavior of a Heterogeneous Network of Theta Neurons, Neural Computation, 3207 (2013).
[28] S. Coombes and A. Byrne, Next Generation Neural Mass Models, in Nonlinear Dynamics in Computational Neuroscience, PoliTO Springer Series, edited by F. Corinto and A. Torcini (Springer International Publishing, Cham, 2019) pp. 1-16.
[29] E. Montbrió, D. Pazó, and A. Roxin, Macroscopic Description for Networks of Spiking Neurons, Physical Review X, 021028 (2015).
[30] I. Ratas and K.
Pyragas, Macroscopic self-oscillations and aging transition in a network of synaptically coupled quadratic integrate-and-fire neurons, Physical Review E, 032215 (2016).
[31] A. Byrne, M. J. Brookes, and S. Coombes, A mean field model for movement induced changes in the beta rhythm, Journal of Computational Neuroscience, 143 (2017).
[32] M. di Volo and A. Torcini, Transition from Asynchronous to Oscillatory Dynamics in Balanced Spiking Networks with Instantaneous Synapses, Physical Review Letters, 128301 (2018).
[33] B. Pietras, F. Devalle, A. Roxin, A. Daffertshofer, and E. Montbrió, Exact firing rate model reveals the differential effects of chemical versus electrical synapses in spiking networks, Physical Review E, 042412 (2019).
[34] R. Gast, H. Schmidt, and T. R. Knösche, A Mean-Field Description of Bursting Dynamics in Spiking Neural Networks with Short-Term Adaptation, Neural Computation, 1615 (2020).
[35] P. Suffczynski, S. Kalitzin, G. Pfurtscheller, and F. Lopes da Silva, Computational model of thalamo-cortical networks: dynamical control of alpha rhythms in relation to focal attention, International Journal of Psychophysiology, 25 (2001).
[36] R. Moran, S. Kiebel, K. Stephan, R. Reilly, J. Daunizeau, and K. Friston, A neural mass model of spectral responses in electrophysiology, NeuroImage, 706 (2007).
[37] H. Taher, A. Torcini, and S. Olmi, Exact neural mass model for synaptic-based working memory, PLOS Computational Biology, e1008533 (2020).
[38] A. Levina, J. M. Herrmann, and T. Geisel, Dynamical synapses causing self-organized criticality in neural networks, Nature Physics, 857 (2007).
[39] M. Tsodyks, K. Pawelzik, and H. Markram, Neural Networks with Dynamic Synapses, Neural Computation, 821 (1998).
[40] G. B. Ermentrout and N. Kopell, Parabolic Bursting in an Excitable System Coupled with a Slow Oscillation, SIAM Journal on Applied Mathematics, 233 (1986).
[41] V. Schmutz, W. Gerstner, and T.
Schwalger, Mesoscopic population equations for spiking neural networks with synaptic short-term plasticity, The Journal of Mathematical Neuroscience, 5 (2020).
[42] R. Gast, D. Rose, C. Salomon, H. E. Möller, N. Weiskopf, and T. R. Knösche, PyRates—A Python framework for rate-based neural simulations, PLOS ONE, e0225900 (2019).
[43] E. J. Doedel, T. F. Fairgrieve, B. Sandstede, A. R. Champneys, Y. A. Kuznetsov, and X. Wang, AUTO-07P: Continuation and bifurcation software for ordinary differential equations, Tech. Rep. (2007).
[44] G. Gigante, M. Mattia, and P. D. Giudice, Diverse Population-Bursting Modes of Adapting Spiking Neurons, Physical Review Letters, 148101 (2007).
[45] H. Kita, A. Nambu, K. Kaneda, Y. Tachibana, and M. Takada, Role of Ionotropic Glutamatergic and GABAergic Inputs on the Firing Activity of Neurons in the External Pallidum in Awake Monkeys, Journal of Neurophysiology, 3069 (2004).
[46] J. N. Mercer, C. S. Chan, T. Tkatch, J. Held, and D. J. Surmeier, Nav1.6 Sodium Channels Are Critical to Pacemaking and Fast Spiking in Globus Pallidus Neurons, Journal of Neuroscience, 13552 (2007).
[47] C. J. Wilson, Active decorrelation in the basal ganglia, Neuroscience, 467 (2013).
[48] R. Gast, R. Gong, H. Schmidt, H. G. E. Meijer, and T. R. Knoesche, On the role of arkypallidal and prototypical neurons for phase transitions in the external pallidum, bioRxiv, 2021.01.06.425526 (2021).
[49] M. V. Tsodyks and H. Markram, The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability, Proceedings of the National Academy of Sciences 94