Coherence resonance in neuronal populations: mean-field versus network model
Emre Baspinar, Leonhard Schülen, Simona Olmi, Anna Zakharova
Inria Sophia Antipolis Méditerranée Research Centre, 2004 Route des Lucioles, 06902 Valbonne, France
Institut für Theoretische Physik, Technische Universität Berlin, Hardenbergstraße 36, 10623 Berlin, Germany
CNR - Consiglio Nazionale delle Ricerche - Istituto dei Sistemi Complessi, 50019, Sesto Fiorentino, Italy
Abstract
The counter-intuitive phenomenon of coherence resonance describes a non-monotonic behavior of the regularity of noise-induced oscillations in the excitable regime, leading to an optimal response in terms of regularity of the excited oscillations for an intermediate noise intensity. We study this phenomenon in populations of FitzHugh-Nagumo (FHN) neurons with different coupling architectures. For networks of FHN systems in the excitable regime, coherence resonance has been previously analyzed numerically. Here we focus on an analytical approach studying the mean-field limits of the locally and globally coupled populations. The mean-field limit refers to the averaged behavior of a complex network as the number of elements goes to infinity. We derive a mean-field limit approximating the locally coupled FHN network at low noise intensities. Further, we apply the mean-field approach to the globally coupled FHN network. We compare the results of the mean-field and network frameworks for coherence resonance and find a good agreement in the globally coupled case, where the correspondence between the two approaches is sufficiently good to capture the emergence of anticoherence resonance. Finally, we study the effects of the coupling strength and noise intensity on coherence resonance for both the network and the mean-field model.

∗ [email protected]
† [email protected]
‡ simona.olmi@fi.isc.cnr.it
§ [email protected]

I. INTRODUCTION

All real-world processes are affected by random fluctuations that are intrinsically produced by the system itself and/or introduced to the system by extrinsic mechanisms. These fluctuations are modeled mathematically as noise [1–3]. In neural networks noise occurs naturally, for example, due to random synaptic input from other neurons, spontaneous neural activity or the random opening and closing of ionic channels, resulting in so-called intrinsic brain noise [4–6].
Investigation of the impact of noise in the brain, and in nonlinear dynamical networks in general, is a challenging problem. Noise can give rise to new dynamic behavior, e.g., stochastic bifurcations [2, 7, 8] or stochastic synchronization [9, 10]. It can even induce partial synchronization patterns such as chimera states [11–15]. Intriguingly, random fluctuations do not always have a destructive impact deteriorating the regularity of a deterministic system. Instead, they can play a constructive role and lead to an increase of coherence for increasing noise intensities. The most prominent examples are stochastic resonance [16–29], which is observed for periodically driven bistable systems, and coherence resonance, which takes place in autonomous systems, i.e., under purely noise-induced excitation without any periodic driving signal [7, 30–43].

The counter-intuitive phenomenon of coherence resonance was originally detected for an excitable FitzHugh-Nagumo neuron [31]. It describes a non-monotonic behavior of the regularity of noise-induced oscillations in the excitable regime, leading to an optimal response in terms of regularity of the excited oscillations for an intermediate noise intensity. Since the discovery of coherence resonance, it has been investigated theoretically and experimentally in various systems and networks [11, 13, 38, 43–50]. It has been shown that coherence resonance can be observed not only in excitable [31, 51], but also in non-excitable systems [7, 8, 10, 40–42]. Depending on the nature of the external noisy inputs, different mechanisms for coherence resonance have been observed, like double coherence resonance, occurring for an optimal combination of noise variance and correlation of inputs stimulating a single FitzHugh-Nagumo neuron [52, 53]. On the other hand, in complex networks of FitzHugh-Nagumo units, the existence of coherence resonance has been reported for one-layer [50] and two-layer [54] networks.
Further topologies include local, nonlocal and global coupling, lattice networks, as well as more complex structures such as random or small-world networks [50, 55–59].

Interestingly, large ensembles of neurons in the brain demonstrate a rich variety of coherent dynamics at the macroscopic scale, which results from random perturbations [60]. Therefore, understanding coherence resonance is important for the study of the brain and neural networks. Coherence is significant for communication between brain regions [61] and, as recently suggested in [62], an improvement of neural communication can be reached via coherence resonance. In more detail, the brain adjusts its internal noise in order to maximize the coherence. Furthermore, information processing and its encoding for transmission to different areas of the brain require a coherent activity of neuronal populations. In particular, information processing in the brain can be represented as a non-stationary spatiotemporal process of activity propagation [63, 64]. In this view, brain activity during task conditions simultaneously evolves in a hierarchy of characteristic network activations. For instance, information processing in sensorimotor coordination [65] and in auditory, visual and linguistic tasks [66] shows robust propagation through well-tuned activation chains of characteristic subnetworks. Especially in the sensory cortex, many neurons locally sensitive to similar stimulus features give a similar response to a given input stimulus (see for example [67–69] for neurophysiological studies, and also [70–73] for some models using the local tuning feature of neurons).
This suggests that the activity of such neurons can be measured and studied at a macroscopic scale, which provides reliable data due to averaging effects diminishing the independent chaotic random behaviors of single neurons observed at a microscopic scale.

The growing interest in the phenomenon of coherence resonance for neural networks is confirmed by a number of works [50, 55–58, 74, 75], including very recent ones [54, 59, 62]. While the majority of these investigations is based on numerical simulations or on experimental data, much less attention has been paid to the analytical treatment of coherence resonance in complex networks. Although analytical treatment has been provided for single systems (e.g., for the FHN system in [76], for a generalized van der Pol oscillator in [10] and for the leaky integrate-and-fire neuron in [77]), it remains a demanding problem for networks. Here we address this challenging question by developing a mean-field framework for analyzing coherence resonance in neural populations. Using the paradigmatic FHN neuron model, we study the phenomenon of coherence resonance both at the network level (i.e., where we model each neuron in the population as a perturbed coupled FHN system) and at the level of the mean-field limit (known also as the thermodynamic limit), which is the asymptotic limit of the averaged network as we send the number of neurons to infinity.

Our main contributions are at both the theoretical and the numerical level. At the theoretical level, we provide a mathematical underpinning of the results obtained from numerical simulations of locally and globally coupled neural networks presented in [50]. In particular, we derive a mean-field description for the locally coupled case which coincides with the network framework for small noise intensities and a low excitability threshold.
For the globally coupled case, we employ the mean-field description provided in [78, 79] for a generic family of stochastic differential equations of stochastic FHN type, and we consider the model in two versions, which differ in the number of noise terms employed: either a single additive noise term in the differential equation for the recovery variable, or two different noise terms, one for each variable equation. The latter mean-field model, which represents an extended version of the former, has been designed to take into account the noise effects experienced by electrical synapses when arranging the passive flow of the ionic current through the pores between neurons: this flow, together with the charge carriers, has a stochastic nature.

At the numerical level, we investigate the emergence of anticoherence resonance in the globally coupled framework, which results in different outcomes depending on the investigated model. When a single noise term is present, anticoherence resonance does not emerge in the system, while, when two noise terms are present, anticoherence resonance emerges for large noise intensity and is characterized by an alteration of the dynamics, which is then guided by the noise.

The manuscript is organized as follows. In Section II, we provide the network settings for each topological architecture. Afterwards, we explain the associated mean-field frameworks in Section III. In Section IV, we explain the experimental setting and present our simulation results, where we compare the coherence resonance results obtained from the mean-field systems and the network framework to find out to which level they provide the same outcome, i.e., we identify the limitations of the mean-field approach. Finally, we give the conclusions in Section V.
II. NETWORK EQUATIONS
Several dynamical models have been proposed to study both single neuron and coupled neuronal population behaviors, such as the Hodgkin-Huxley model [80] or other conductance-based models, like the Morris-Lecar model [81]. Conductance-based models are the simplest possible biophysical representation of an excitable cell, such as a neuron, in which its protein molecule ion channels are represented by conductances and its lipid bilayer by a capacitor. However, other types of models have been developed that predict the dynamics of the membrane output voltage as a function of electrical stimulation at the input stage, like the Hindmarsh-Rose [82], integrate-and-fire [83–85] or Galves-Löcherbach model [86]. At the mean-field level, heuristic firing rate models are commonly used, like the Wilson-Cowan excitatory-inhibitory neural mass model [87, 88] and its stochastically modified versions [89]. Only recently have neural mass models been developed that are not derived heuristically, but that reproduce exactly the dynamics of excitatory and inhibitory networks of spiking neurons for any degree of synchronization [90–93]. In particular, these neural masses reproduce the macroscopic dynamics of quadratic integrate-and-fire neurons, which are normal forms for the saddle-node on invariant circle (SNIC) bifurcation [94] and describe, in general, the dynamics of class I neurons (i.e. neurons with a continuous gain function), to which the Hindmarsh-Rose model and, under some circumstances, the Morris-Lecar model belong. In the present work, we are particularly interested in the celebrated FitzHugh-Nagumo (FHN) model [95, 96], which represents a reduced version of the Hodgkin-Huxley model. It still captures closely the dynamical behaviors produced by the Hodgkin-Huxley model and has the advantage of facilitating efficient large-scale simulation of groups of neurons.

Network equations describe the dynamics of each neuron belonging to the network at a microscopic level.
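The excitability that makes the FHN model attractive is easy to demonstrate numerically. The following sketch (plain Python; parameter values are our own illustrative choices, anticipating the excitable regime |a| > 1 defined in this section) integrates a single deterministic FHN unit that is kicked above threshold: it fires one spike and then relaxes back to its rest state.

```python
import numpy as np

# Single deterministic FHN unit in the excitable regime (|a| > 1).
# Parameter values are our own illustrative choices.
eps, a, dt = 0.01, 1.05, 1e-4
u_rest, v_rest = -a, -a + a**3 / 3     # unique stable equilibrium

u, v = 0.5, v_rest                     # super-threshold kick in the activator
u_max = u
for _ in range(200_000):               # 20 time units of explicit Euler
    du = (u - u**3 / 3 - v) / eps      # fast activator dynamics
    dv = u + a                         # slow inhibitor dynamics
    u, v = u + du * dt, v + dv * dt
    u_max = max(u_max, u)

print(u_max)    # large excursion: a single spike was fired
print(u, v)     # trajectory has relaxed back near the rest state
```

A sub-threshold kick would instead decay monotonically to the rest state; it is precisely this all-or-none response that noise exploits in the stochastic model below.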
The classical deterministic FitzHugh-Nagumo (FHN) equations describing the evolution of a single neuron belonging to a coupled population read as

$$\varepsilon \frac{du_i(t)}{dt} = f(u_i(t), v_i(t)) + \frac{\sigma}{2P}\sum_{j=i-P}^{i+P}\left[u_j(t) - u_i(t)\right] = u_i(t) - \frac{u_i^3(t)}{3} - v_i(t) + \frac{\sigma}{2P}\sum_{j=i-P}^{i+P}\left[u_j(t) - u_i(t)\right],$$
$$\frac{dv_i(t)}{dt} = g_a(u_i(t)) = u_i(t) + a, \tag{1}$$

where σ is a positive constant denoting the coupling strength. u_i and v_i represent the activator (membrane potential) and inhibitor (recovery) variables of the i-th neuron, respectively (i = 1, …, N, where N is the population size). ε > 0 is a small parameter characterizing the time-scale separation between the fast activator and the slow inhibitor; throughout we fix ε = 0.01. Parameter a determines the nature of the equilibrium points and thus the excitability threshold of the isolated, uncoupled neuron. In particular, the parameter a serves as a threshold in our model and determines whether the single neuron is in the excitable regime (|a| > 1) or in the oscillatory regime (|a| < 1), with a Hopf bifurcation occurring at a = 1. Finally, P denotes, for the i-th neuron, the number of its nearest neighbours in each direction of the ring. Thus, it determines the topology of the population: if P = 1 we have local coupling, if P = (N − 1)/2 (N is odd) we have global coupling and finally if 1 < P < (N − 1)/2 we have nonlocal coupling.

We assign a stochastic behavior to the system by adding a Gaussian white noise term dW_i(t) to the recovery variable equation of each neuron, as described in [50] and [54]. The noise term dW_i(t) is built, for each neuron i, from an independent Wiener process. More precisely,

$$\mathbb{E}[dW_i(t)] = 0, \qquad \mathbb{E}[dW_i(t)\, dW_j(t')] = \delta_{ij}\,\delta(t - t')\, dt\, dt', \qquad \forall\, i, j, \tag{2}$$

with E and δ denoting the expectation value and the Dirac delta function, respectively (δ_ij is the Kronecker delta). For the sake of simplicity, we drop the explicit time dependency from now on, unless it is otherwise required. The stochastic FHN equations read as follows:

$$\varepsilon\, du_i(t) = f(u_i(t), v_i(t))\, dt + \frac{\sigma}{2P}\sum_{j=i-P}^{i+P}\left[u_j(t) - u_i(t)\right] dt,$$
$$dv_i(t) = g_a(u_i(t))\, dt + \sqrt{D}\, dW_i(t), \qquad i = 1, \ldots, N, \tag{3}$$

where D denotes the noise intensity level. The Gaussian white noise is intended to represent triggering perturbations that alter the state of the system. Here we focus on the dynamics of neuronal populations in their excitable regime. In this regime, a sufficiently strong perturbation triggers the whole population to produce spikes before the population comes back to its steady state. If the perturbation is not strong enough, the population relaxes back to its unique stable steady state without producing a spike. In particular, we are interested in the state of the network characterized by the best temporal regularity of the noise-induced spiking dynamics, achieved for an intermediate optimal noise intensity, i.e., when the network undergoes coherence resonance. In Sec. IV, when comparing the dynamics emergent in the network with the mean-field prediction in the globally coupled regime, we will also integrate a second set of equations that differ from Eqs.
(3) by the addition of a second noise term $\sqrt{\varepsilon \bar D}\, d\bar W_i(t)$ in the first differential equation, where $\bar D$ denotes the noise intensity level and $d\bar W_i(t)$ represents, as before, a Gaussian noise source. The reason for this will become clear when introducing the extended version of the mean-field limit associated to the globally coupled FHN system (see Eqs. (9) in Sec. III).

A. Macroscopic indicators of coherence resonance
Coherence resonance characterizes the emergence of relatively coherent noise-induced oscillations occurring for an optimal noise intensity. It was initially found in a single FHN system in the excitable regime and later detected for neural networks. There exist several different measures for quantifying coherence resonance, such as the normalized standard deviation of the interspike interval, the correlation time, and the signal-to-noise ratio [30, 31, 41]. Since in the present work we deal with a neural model showing spiking behaviour, it is convenient to use the normalized standard deviation of the interspike interval (ISI), defined as

$$R_{ISI} = \frac{\sqrt{\langle t_{ISI}^2\rangle - \langle t_{ISI}\rangle^2}}{\langle t_{ISI}\rangle}, \tag{4}$$

where t_ISI is the time between two subsequent spikes and ⟨···⟩ indicates the average over the time series. A system undergoing coherence resonance will show a pronounced minimum in the value of R_ISI [31]. The definition (4) is limited to characterizing coherence resonance for a single FitzHugh-Nagumo oscillator. For a network of oscillators, coherence resonance can be measured by redefining R as follows [50]:

$$R = \frac{\sqrt{\overline{\langle t_{ISI}^2\rangle} - \overline{\langle t_{ISI}\rangle}^{\,2}}}{\overline{\langle t_{ISI}\rangle}}, \tag{5}$$

where the over-bar indicates the additional average over the nodes.

To illustrate the dynamics of the system in the regime of coherence resonance, we provide here space-time plots and time series for a locally and a globally coupled network in the stochastic regime (see Fig. 1). In more detail, we show the dynamical behavior emerging for different threshold parameter values (a = 1.05 and a = 1.3) for the locally coupled (Fig. 1 a) and b)) and the globally coupled (Fig. 1 c) and d)) case. In the locally coupled case we observe, for the lower threshold value (a = 1.05), a more coherent spiking response than for a = 1.3, for which the dynamics is more irregular and the R value is correspondingly higher (panel b)). When the threshold is increased, it is harder for the system to overcome it. A similar scenario emerges when passing from local to global coupling. For the lower threshold value the coherent response of the network is clearly observable. In this case the FHN neurons spike in a highly synchronized way (see the straight yellow lines in panel c)), due to the fact that each element receives input from the entire network, being connected with all the others. The simultaneous interaction among all neurons helps them to overcome the threshold all at once. On the contrary, in the locally coupled case, each neuron pulls only its immediate neighbors over the threshold, which explains why the excitation needs some time to "travel" over the ring. Similarly to what is shown in panel b), the case of a higher threshold value gives a more irregular picture even in the globally coupled configuration (panel d)). The same conclusions can be drawn looking at the time series of single units. In particular, Fig. 1 e) shows the time series of one selected neuron in the locally coupled network for a = 1.05 (blue curve) and a = 1.3 (red curve), and Fig. 1 f) shows the analogous time series for the globally coupled network. The loss of regularity with increasing a can be clearly seen for both coupling topologies.

III. MEAN-FIELD POPULATION EQUATIONS

A. Globally coupled equations
In the globally coupled mean-field framework, we have P = (N − 1)/2 and we study the limit N → ∞. We assume that the coupling strength is the same and constant for all neurons in a single population. To avoid using negative indices in Eqs. (3), we change the indexing of the coupling term according to

$$\frac{\sigma}{N}\sum_{j=1}^{N}\left[u_j - u_i\right], \tag{6}$$

without loss of generality.

We emphasize that the noise terms in Eqs. (3) are assumed to be independent and identically distributed for each neuron and also for each state variable. This allows us to describe, in the thermodynamic limit, each state variable of a neuron as a continuous set of random variables, each one corresponding to an instant of time. In other words, each state variable can be thought of as a stochastic process representing the values of the state variable changing randomly in time. As such, the state variables u_i and v_i of the i-th neuron (for any i = 1, …, N) evolve in accordance with their associated probability distributions, which converge, in the thermodynamic limit, to the ones of the mean-field state variables u and v characterized by

$$\lim_{N\to\infty}\ \max_{i=1,\ldots,N}\ \mathbb{E}\Big[\sup_{s\le t}\big(u_i(s) - u(s)\big)^2\Big] = 0, \qquad \lim_{N\to\infty}\ \max_{i=1,\ldots,N}\ \mathbb{E}\Big[\sup_{s\le t}\big(v_i(s) - v(s)\big)^2\Big] = 0. \tag{7}$$

As was shown in [78, 79], the state variables u and v comprise the solution to the following mean-field system:

$$\varepsilon\, du = f(u, v)\, dt + \sigma\big(\mathbb{E}[u] - u\big)\, dt, \qquad dv = g_a(u)\, dt + \sqrt{D}\, dW, \tag{8}$$

where dW denotes Gaussian white noise with the properties given in Eqs. (2) and D defines the noise intensity level. Here σ represents the coupling constant, as introduced in the network Eqs. (3). This mean-field equation was derived in [78, 79] from a network of FHN neurons of the same type as Eqs. (3), based on two steps: i) the first step shows that a unique solution to Eqs. (8) exists for a finite time, under the assumption that the terms f and g_a are locally sufficiently regular (locally Lipschitz); ii) the second step shows that for each neuron the probability distributions of the processes u_i and v_i converge towards the probability distributions of the mean-field state variables u and v, respectively.

FIG. 1. Space-time plots for the optimal noise intensities D (i.e., where R takes its minimum value) for different network topologies and a values: a) a locally coupled network with a = 1.05, b) a locally coupled network with a = 1.3 and D = 0.08, c) a globally coupled network with a = 1.05, d) a globally coupled network with a = 1.3. Panels e) and f): time series of one selected neuron for a = 1.05 (blue) and a = 1.3 (red) in the locally and globally coupled network, respectively. Other parameters: N = 100, ε = 0.01.

While the mean-field framework shown in Eqs. (8) corresponds to the classical mean-field limit of the globally coupled networks of FHN oscillators of the type given in [50, 54], we consider in the following an extended version of the mean-field limit associated to the globally coupled FHN system. In [78, 79] it was shown that the same results and properties as in Eqs.
(8) hold also for the extended version, which is adapted from [78, 79] as follows:

$$\varepsilon\, du = f(u, v)\, dt + \sigma\big(\mathbb{E}[u] - u\big)\, dt + \sqrt{\varepsilon \bar D}\, d\bar W, \qquad dv = g_a(u)\, dt + \sqrt{D}\, dW, \tag{9}$$

where $d\bar W$ and dW are the noise terms with standard normal distribution, constructed from two independent Brownian motions $\bar W$ and W. We denote the corresponding noise intensity levels as $\bar D$ and D. As for Eqs. (8), the presence of σ(E[u] − u) dt requires studying the state variable u, whose solution depends on its own expectation value.

The choice of this extended model is motivated by the fact that the coupling term is based on modeling electrical synapses. Such synapses arrange the passive flow of the ionic current through the pores between neurons and, as a result of the stochastic nature of ionic currents and charge carriers, they experience noise effects. The coupling-related terms persist in the mean-field framework of the globally coupled configuration, while this is not the case in the locally coupled one (see Eqs. (18)). Therefore, we model the noise effects arising from the global coupling by introducing an additional white noise term ($d\bar W$) in Eqs. (8), as shown in Eqs. (9). In order to guarantee that $d\bar W$ dominates neither dW nor the whole system behavior, we impose $\bar D$ to be of the same order as D and introduce the scaling $\sqrt{\varepsilon}$. Finally, we remark that, when comparing the results obtained from the simulation of Eqs. (9) with those obtained from the network, the corresponding network equations are described, for consistency, by Eqs. (3) with an additional noise term, $\sqrt{\varepsilon \bar D}\, d\bar W_i(t)$, in the first equation, where $\bar D$ denotes the noise intensity level for the Gaussian white noise $d\bar W_i(t)$.

The mean-field limit models the population behavior by employing a single FHN system once the number of neurons in the population is sufficiently high, whereas, for the network equations, we need a separate FHN equation system for each neuron in the population.
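As a numerical illustration of how the network and mean-field descriptions interact in practice, the sketch below (Python, with illustrative parameter values of our own choosing; the paper's actual simulations follow the Euler-Maruyama scheme detailed in Appendix A) integrates the globally coupled network of Eqs. (3) alongside the mean-field system of Eqs. (8), supplying the expectation E[u] from the empirical network average at every step.

```python
import numpy as np

# Globally coupled stochastic FHN network, Eqs. (3) with P = (N-1)/2:
# for large N the coupling term reduces to sigma * (mean(u) - u_i).
# The mean-field system, Eqs. (8), is driven by the empirical E[u].
# All parameter values are our own illustrative choices.
N, eps, a, sigma, D = 100, 0.01, 1.05, 0.1, 0.001
dt, steps = 1e-4, 20_000                 # 2 time units
rng = np.random.default_rng(0)

def f(u, v):
    return u - u**3 / 3 - v

u = np.full(N, -a); v = np.full(N, -a + a**3 / 3)   # network at rest
u_mf, v_mf = -a, -a + a**3 / 3                      # mean-field at rest

for _ in range(steps):
    Eu = u.mean()                                   # empirical first moment
    du = (f(u, v) + sigma * (Eu - u)) * dt / eps
    dv = (u + a) * dt + np.sqrt(D * dt) * rng.standard_normal(N)
    u, v = u + du, v + dv
    # Mean-field step, with E[u] taken from the network simulation
    du_mf = (f(u_mf, v_mf) + sigma * (Eu - u_mf)) * dt / eps
    dv_mf = (u_mf + a) * dt + np.sqrt(D * dt) * rng.standard_normal()
    u_mf, v_mf = u_mf + du_mf, v_mf + dv_mf

print(u_mf, v_mf)
```

Because the expectation term is supplied by data, the mean-field system itself remains two-dimensional regardless of N; this is exactly what makes the approach semi-analytical.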
In other words, the network equations require a high-dimensional dynamical system, while the mean-field limit requires only a two-dimensional dynamical system, which is a good representative of the averaged dynamics of the population and makes the analytical treatment of the system feasible. However, studying the statistics of such a system is not trivial, since the right-hand side requires the expectation of the solution of the equation. Yet, it is possible to use a semi-analytical approach, where we obtain the first-moment statistics of u from numerical simulations of the network equations given by Eqs. (3), as shown in [78]. Practically, the expectation value of u is introduced numerically in the differential equations (9) by obtaining the statistics of the state variables from the network simulations. In this way it is possible to investigate the interplay of coupling and noise in determining the emergence of coherence resonance and to provide an analytical framework for the numerical results.

Finally, we note that, although the notation for the mean-field state variables u and v will be the same for both the globally and the locally coupled settings, the corresponding definitions are different and, in the rest of the paper, the intended meaning should be tacitly understood from the coupling type.

B. Locally coupled equations
In the locally coupled topology P = 1 and the network equations for the i-th neuron can be written as

$$\varepsilon\, du_i = f(u_i, v_i)\, dt + \frac{\sigma}{2}\big(u_{i-1} + u_{i+1} - 2u_i\big)\, dt, \qquad dv_i = g_a(u_i)\, dt + \sqrt{D}\, dW_i, \qquad i = 1, \ldots, N. \tag{10}$$

We assume periodic boundary conditions: the i-th neuron is coupled to the (i − 1)-th and (i + 1)-th neurons for all i ∈ {1, 2, …, N}, while it holds that

$$u_{N+K} = u_K \quad \text{for all } K \in \{-N, -N+1, \ldots, N-1, N\}. \tag{11}$$

The average dynamics of a locally coupled system in the thermodynamic limit can be found by considering that

$$\frac{\varepsilon}{N}\sum_{i=1}^{N} du_i = \frac{1}{N}\sum_{i=1}^{N} f(u_i, v_i)\, dt + \frac{\sigma}{2N}\sum_{i=1}^{N}\big(u_{i-1} + u_{i+1} - 2u_i\big)\, dt,$$
$$\frac{1}{N}\sum_{i=1}^{N} dv_i = \frac{1}{N}\sum_{i=1}^{N} g_a(u_i)\, dt + \frac{1}{N}\sum_{i=1}^{N}\sqrt{D}\, dW_i, \tag{12}$$

where the term proportional to the coupling constant σ on the r.h.s. of the first equation vanishes for N → ∞. Moreover, we define in the following

$$u := \frac{1}{N}\sum_{i=1}^{N} u_i, \qquad v := \frac{1}{N}\sum_{i=1}^{N} v_i, \tag{13}$$

and

$$\frac{1}{N}\sum_{i=1}^{N} dW_i = dW, \tag{14}$$

where dW is a Gaussian white noise as a result of the central limit theorem. It is not straightforward to write the average dynamics directly from Eqs. (12) (nor the exact mean-field limit) due to the nonlinearity of f. In order to handle the nonlinearity, we approximate the state variables u_i as random variables distributed according to a Gaussian distribution, as described in [97] (see also [98–101] for details regarding the use of Gaussian random variables in such approximations). This approximation assumes that the excitable system is sufficiently close to the equilibrium point and that the noise intensity D is small. We employ the law of large numbers (see, for example, [102]), more precisely

$$\frac{1}{N}\sum_{i=1}^{N} u_i = \mathbb{E}[u_i] \quad \text{as } N \to \infty, \tag{15}$$

(and analogously for the higher powers of u_i), and write the mean-field limit from the average dynamics given in Eqs. (12).
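The Gaussian approximation invoked above closes the cubic nonlinearity through the third moment of a Gaussian variable: E[X³] = u³ + 3ρu when X has mean u and variance ρ. This identity is easy to verify by Monte Carlo sampling (illustrative values of our own choosing):

```python
import numpy as np

# Gaussian moment closure used in the mean-field derivation:
# for X ~ N(u, rho), with rho denoting the VARIANCE as in the text,
# E[X^3] = u^3 + 3*rho*u.
u, rho = 0.7, 0.09                 # illustrative mean and variance
rng = np.random.default_rng(42)
x = u + np.sqrt(rho) * rng.standard_normal(1_000_000)

empirical = np.mean(x**3)
closure = u**3 + 3 * rho * u
print(empirical, closure)          # the two values agree closely
```

The closure holds exactly because all odd central moments of a Gaussian vanish, which is precisely why the approximation is restricted to near-equilibrium dynamics with small D.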
Since we approximate u_i as a Gaussian random variable with expectation value u and variance ρ, we have

$$\mathbb{E}[u_i] = \frac{1}{\sqrt{2\pi\rho}}\int_{-\infty}^{\infty} u_i\, e^{-\frac{(u_i - u)^2}{2\rho}}\, du_i = u, \qquad \mathbb{E}[u_i^3] = \frac{1}{\sqrt{2\pi\rho}}\int_{-\infty}^{\infty} u_i^3\, e^{-\frac{(u_i - u)^2}{2\rho}}\, du_i = u^3 + 3\rho u. \tag{16}$$

By implementing Eqs. (14), (15) and (16) in Eqs. (12) we find, in the limit N → ∞, that

$$\varepsilon\, du = f(u, v)\, dt - \rho u\, dt, \qquad dv = g_a(u)\, dt + \sqrt{D}\, dW, \tag{17}$$

and more explicitly

$$\varepsilon\, du = \Big((1 - \rho)u - \frac{u^3}{3} - v\Big)\, dt, \qquad dv = (u + a)\, dt + \sqrt{D}\, dW. \tag{18}$$

The fact that the coupling term vanishes in Eqs. (12) indicates that there is no effect of the coupling at the mean-field level. Moreover, since we do not have any information about ρ a priori, we obtain the ρ values from the numerical network simulations and introduce them at each time sample when we perform the numerical integration of Eqs. (18).

C. Nonlocally coupled equations
Finally, we consider the intermediate nonlocally coupled case with 1 < P < N. It is convenient to distinguish between two cases, which represent two classes of systems [103]: sparse (or strongly diluted) networks, where P ≪ N and, specifically, P is independent of N as N → ∞; and massive networks, where P is proportional to the network size N. In our ring topology, these cases can be translated into the following limits

$$\lim_{N\to\infty} \frac{P}{N} = 0 \qquad \text{and} \qquad \lim_{N\to\infty} \frac{P}{N} = C, \tag{19}$$

where C is a constant value and C ≤ 1/2. In the first case (P ≪ N), if we fix P to be a constant connectivity, we can write the following averaged network equations for N → ∞:

$$\frac{\varepsilon}{N}\sum_{i=1}^{N} du_i = \frac{1}{N}\sum_{i=1}^{N} f(u_i, v_i)\, dt + \frac{1}{N}\sum_{i=1}^{N}\frac{\sigma}{2P}\big(u_{i-P} + \cdots + u_{i+P} - 2P u_i\big)\, dt,$$
$$\frac{1}{N}\sum_{i=1}^{N} dv_i = \frac{1}{N}\sum_{i=1}^{N} g_a(u_i)\, dt + \frac{1}{N}\sum_{i=1}^{N}\sqrt{D}\, dW_i, \tag{20}$$

which turns out to be the same as the locally coupled system given in Eqs. (18). In the second case (P ∝ N), we may write P as a function of N, i.e. P = P(N), such that the same limit holds for P and N when N → ∞. This means that, in the thermodynamic limit, the mean-field system will converge to the globally coupled case given in Eqs. (8), as long as the coupling constant σ is rescaled in accordance with the limit value comparable to C.

We conclude that the nonlocally coupled topology cannot be treated as a separate one, since its dynamics can be attributed to the one emergent in either sparse or massive networks. This case confirms what was already found in the literature: i) for massive networks, i.e. when the connectivity scales with N, the network behaves like a globally coupled system with a coupling constant rescaled to account for the different fraction of active links [104]; ii) for sparse networks, characterized by a constant connectivity not increasing with N, the thermodynamic limit shows a completely different behavior, typical of the locally coupled topology [105].

IV. SIMULATION RESULTS

A. Globally coupled framework
Here we study the role of the noise intensity D and the coupling strength σ in inducing coherence resonance in a network of globally coupled FitzHugh-Nagumo oscillators. In particular, for the mean-field model we initially simulate the system given by Eqs. (9), employing the classical Euler-Maruyama numerical scheme, as detailed in Appendix A. We measure R in two different parameter settings: first we increase D, keeping all other parameters fixed, and second we increase σ, keeping all the other parameters fixed. The results are shown in Figs. 2 and 3 for different excitability threshold values. Note that the x-axis is logarithmic in both cases.

Coherence resonance is visible both for a = 1.05 (Fig. 2 a)) and for a = 1.3 (Fig. 3 a)): a pronounced minimum of R emerges; the location of the minimum depends on the excitability threshold value, and it occurs for different noise intensities D in the two cases. It is worth noticing here that, if the system is closer to the Hopf bifurcation point, i.e., for a = 1.05, it requires a lower noise intensity for coherence resonance to occur. On the other hand, if the system is further away from the Hopf bifurcation point, i.e., for a = 1.3, the system requires a higher noise intensity. An interesting observation is that for both a = 1.05 and a = 1.3 the R(D)-curve has both a minimum and a maximum (Fig. 2 a) and Fig. 3 a)). The occurrence of the maximum is associated with the phenomenon of anticoherence resonance investigated in [106–108]. Here we show that the anticoherence is captured by the mean-field analysis.

FIG. 2. Normalized standard deviation of the interspike interval R for a globally coupled network (red circles) with noise terms in both system variable equations and its mean-field system (blue triangles) with a = 1.05: a) for fixed coupling strength σ and varying noise intensity D, b) for fixed noise intensity D = 0.001 and varying coupling strength σ. The results are obtained by integrating over 10000 time units and then averaging over time, oscillators, and realizations (5 simulations for each σ). The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01.

FIG. 3. Normalized standard deviation of the interspike interval R for a globally coupled network (red circles) with noise terms in both system variable equations and its mean-field system (blue triangles) with a = 1.3: a) for fixed coupling strength σ and varying noise intensity D, b) for fixed noise intensity D and varying coupling strength σ. The results are obtained by integrating over 10000 time units and then averaging over time, oscillators, and realizations (5 simulations for each σ). The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01.

FIG. 4. Probability distribution p(t_ISI) of the measured interspike intervals t_ISI in a globally coupled network with a = 1.05 and with noise terms in both system variable equations, for four increasing values of the noise intensity, from a) D = 0.001 up to noise intensities of order unity and above in panels c) and d).

To study the effects of the coupling strength on the above observed coherence resonance, we measure R as σ is varied, for fixed D, in two different parameter settings: first for a = 1.05 (Fig. 2 b)) and second for a = 1.3 (Fig. 3 b)). We observe for a = 1.05 that coherence resonance is enhanced for a certain range of coupling strength; for a = 1.3 the same can be observed. As can be seen in Fig. 3 b), the network behavior is captured well in the mean-field framework.

The emergence of anticoherence resonance is due to the increasing role played by the noise, which destroys the refractory time proper of each neuron, thus allowing for infinitely small ISIs. This can be seen by plotting the probability distribution p(t_ISI) of the time between two successive spikes t_ISI for increasing noise intensity (see Figs. 4 and 5 for a = 1.05 and a = 1.3, respectively): starting from D = 0.001 in panel a), the distributions show how, at large D, probability weight appears at t_ISI values below the refractory time.

FIG. 5. Probability distributions p(t_ISI) of the measured interspike intervals t_ISI in a globally coupled network with a = 1.3, for four increasing values of the noise intensity, from a) D = 0.001 up to noise intensities of order unity and above in panels c) and d).

FIG. 6. Normalized standard deviation of the interspike interval R for a globally coupled network with a single noise term as given in Eqs. (3) (red circles) and its mean-field system (blue triangles) with a = 1.05: a) for fixed coupling strength σ and varying noise intensity D and b) for fixed noise intensity D and varying coupling strength σ. The results are obtained by integrating over 10000 time units and then averaging over time, oscillators, and realizations (5 simulations each). The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01.

Although the system requires a higher noise intensity to achieve coherence resonance at a = 1.3, the level of coherence is lower with respect to the previous case, and the probability distribution at the minimum shows t_ISI values that are smaller than the refractory time, due to the presence of strong fluctuations that guide the network dynamics, thus triggering the anticoherence phenomenon.

The shown results are stable with respect to the integration scheme and do not depend on the chosen network size. More details are given in Appendix A on the stability of the mean-field solution with respect to the integration scheme, while the impact of the network size on the results is further discussed in Appendix B. In the following we consider the globally coupled mean-field system with only one noise term, as given in Eqs. (8), which is the exact mean-field limit in the globally coupled case of Eqs. (3). As previously done for the extended mean-field model, we study the role of the noise intensity D and the coupling strength σ in inducing coherence resonance, and we compare the results obtained from the mean-field and network simulations. In Figs. 6 and 7 the results are shown for a = 1.05 and a = 1.3, respectively.
05 and a = 1 .
3, respectively. In both cases18
FIG. 7. Normalized standard deviation of the interspike interval R for a globally coupled network (red circles) with single noise term as given in Eqs. (3) and its mean-field system (blue triangles) with a = 1.3: a) for fixed coupling strength σ and varying noise intensity D and b) for fixed noise intensity D and varying coupling strength σ. The results are obtained by integrating over 10000 time units and then averaging over time, oscillators, and realizations (for 5 simulations each). The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01.

coherence resonance is visible and a minimum in R emerges (panel a). As for the extended mean-field model, the location of the minimum depends on the excitability threshold value and it occurs for higher noise intensities D when a higher excitability threshold is chosen. Moreover, we observe that the R values obtained from the mean-field framework and the direct network simulation overlap almost completely for the whole considered range of noise values. In order to study the effects of coupling strength on the observed coherence resonance we have measured R as σ is varied for fixed D (panel b): for both excitability threshold values coherence resonance can be obtained by choosing appropriate coupling strengths. Also for this numerical experiment we see a good agreement between the network and the mean-field system.

Finally, we observe a different behavior, in terms of anticoherence resonance, for the cases shown in Figs. 6 a) and 7 a) with respect to the cases shown in Figs. 2 a) and 3 a): differently from the extended model, in the latter case we do not observe the presence of a relative maximum for high noise intensity values. A maximum appears in the mean-field system for very small values of noise intensity, similarly to what is reported in [106], where anticoherence resonance has been shown to appear for a single stochastic FHN neuron due to the mixing of two different time scales: when adding additive noise of small intensity, the neuron responds in what appears to be trains consisting of a small number of enchained pulses. While the pulses belonging to a single spike train are separated by a small deterministic time scale, the time interval between two consecutive trains is longer and unpredictable. However, the same conjecture cannot be extended to our globally coupled network of FHN neurons since we observe, in the network system, purely Poissonian statistics, with the Poisson limit R = 1 reached from above. Therefore we conclude that the maximum in the mean-field system is rather an artificial effect due to the poor prediction of the model in the Poissonian regime.

B. Locally coupled framework
In the following we show the results obtained from the investigation of the mean-field system in the locally coupled regime (Eqs. (18)), which has been simulated by applying the discrete scheme given in Eqs. (23). These results are compared with the simulations of the network system (Eqs. (3)) with the equivalent topological structure. Similarly to what is shown in the globally coupled framework, we perform two different numerical experiments. In the first one, we keep the coupling strength parameter σ fixed and vary the noise intensity level for two different excitability threshold values (i.e., a = 1.05 and a = 1.3). In the second one, we keep the noise intensity fixed and vary the coupling strength σ. The first set of experiments shows the relation between the mean-field approximation and its small-noise requirement. In the second set of experiments, we see the effect of the distance of the excitable system from the equilibrium point on the mean-field approximation.

FIG. 8. Normalized standard deviation of the interspike interval R for a locally coupled network (red circles) and the mean-field system (blue triangles) with a = 1.05. Left: fixed coupling strength σ and varying noise intensity D. Right: fixed noise intensity D and varying coupling strength σ. The results are obtained by integrating over 10000 time units and then averaging over time, oscillators, and realizations (for 5 simulations each). The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01 and T = MΔt = 10000.

The dependence on the noise intensity of the normalized standard deviation R of the interspike intervals is shown in Fig. 8 for a = 1.05 and in Fig. 9 for a = 1.3, respectively. While the parameters ε = 0.01 and σ are kept fixed, the noise intensity D has been varied over logarithmically sampled values. We start with a = 1.05 on the left panel. It can be seen that R depends non-monotonically on D and a minimum of this quantity is observable for small noise intensity, thus indicating coherence resonance. We remark that in the case of a = 1.05, for the mean-field system, the coherence of oscillations grows (R decreases) as we increase the noise intensity D towards an optimal value, where it reaches its highest level, as shown in Fig. 8 a) in terms of blue triangles. Then it starts decaying as we increase D further. The coherence indicator R shows, for the network system, a pattern similar to the one obtained in the mean-field framework as long as the noise intensity remains sufficiently small, as shown in Fig. 8 in terms of red circles. As we increase the noise intensity D beyond the optimal value, the coherence of the oscillations starts decreasing and the mean-field curve does not overlap with the one obtained from the network equations any longer. This is due to the fact that, in order to derive the mean-field system, we approximate the state variables u and v via Gaussian random processes for small noise intensity values, and this approximation does not hold for high noise intensity values. On the right panel, we keep the level of noise intensity fixed and vary the coupling strength. We observe that the mean-field model overlaps with the network in the coherence resonance region, while there is no overlap outside that region.

In the case of a = 1.3, the excitable system is further away from the border between excitable and oscillatory regimes in comparison to the case of a = 1.05. This results in a rather incoherent spiking of neurons and the Gaussian approximation is not valid, thus affecting the mean-field predictions, as can be seen in Fig. 9 a). On the right panel, where the coupling strength is varied while keeping the noise intensity level fixed, we observe that
FIG. 9. Normalized standard deviation of the interspike interval R for a locally coupled network (red circles) and the mean-field system (blue triangles) with a = 1.3. Left: fixed coupling strength σ and varying noise intensity D. Right: fixed noise intensity D = 0.001 and varying coupling strength σ. The results are obtained by integrating over 10000 time units and then averaging over time, oscillators, and realizations (for 5 simulations each). The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01 and T = MΔt = 10000.

the mean-field model and the network coincide in the coherence resonance region, as in the case with a = 1.05. Elsewhere, although they follow the same (almost constant) pattern for small coupling strength values, they do not overlap outside the coherence resonance region. We note that, in both cases a = 1.05 and a = 1.3, the mean-field is able to reproduce the network dynamics only when the network spikes coherently. In this regime the variance is low and it is possible to observe a regular spike activity in the mean-field system. For small coupling strength values, while there are always some neurons in the network which spike occasionally, it is not possible to observe spike emissions in the mean-field system due to the high variance values at each time instant: due to the high variance, the mean-field system drags u to negative values as soon as u exceeds 0, thus impeding the spike emission. This is at the origin of the different behaviors observable at a = 1.05 for small coupling strength values (see Fig. 8 b)); the same holds at a = 1.3 (Fig. 9 b)).

CONCLUSION

In this paper we have developed a mean-field framework that allows analytical treatment of coherence resonance in complex networks of FHN neurons. In particular, we have compared the obtained results with the direct numerical simulation of the network equations for the locally and globally coupled networks. For the globally coupled case, we demonstrate a good agreement. Moreover, all the "nuances" of coherence resonance, such as the sensitivity to the excitability threshold and the coupling strength, are captured by the mean-field framework. On the other hand, for the locally coupled case, we find disagreement for large noise values, due to the fact that we approximate the state variables via Gaussian random processes, which is valid only for small noise intensity values. Therefore, we identify the limitations of our mean-field framework, which works well only for small noise values in the locally coupled case.

The better agreement for the globally coupled case compared to the locally coupled case can be explained by the fact that the two mean-field models are different. The locally coupled case needs the variance ρ of u at each time step. Since the excitation travels through the network (compare Fig. 9), the variances become quite large when spikes occur, making it harder, if not impossible, to capture the full dynamics in a mean-field model. The globally coupled case simply takes the mean of all oscillators. As all oscillators have the same coupling term, they also show the same spiking behavior. This high similarity in the behavior leads to a very small variance in u (and v), meaning that the mean-field captures the entire behavior. Further, the better agreement for the globally coupled case compared to the local topology can be explained by the push-pull effect generated by the all-to-all interaction among the neurons.
This allows neurons to spike coherently, while inthe locally coupled architecture, due to the absence of such push-pull effect, highly variedspiking patterns emerge for each neuron in the network. This affects the mean-field modelvia the ρ term, thus preventing the mean-field from giving reasonable insights in the networkdynamics (see Eqs. (18)).Finally in the globally coupled system we have found, for the extended model Eqs. (9),anticoherence resonance, which takes place for high noise intensity values. For the extendedmodel anticoherence is originated from noise induced activation processes: the noise is sostrong that guides neurons over threshold continuously, thus inducing firing emissions at23nfinitely small t ISI values. At the contrary, for the original mean-field model Eqs. (8), theanticoherence phenomenon cannot be observed neither at low nor at high intensity values.Interesting future research directions on the topic would be to extend the mean-fieldframework to a multi-layer topology, where the emergence and control of coherence reso-nance have been recently found [54, 109] and to introduce biological features in the model,like electrical and chemical synapses, that play functional roles in information processing,similarly to what shown in [110] for self-induced stochastic resonance.
ACKNOWLEDGMENTS
This work was supported by the Deutscher Akademischer Austauschdienst (DAAD, German Academic Exchange Service) - Projektkennziffer 57445304 - PPP Frankreich Phase I and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 163436311 - SFB 910. From the French side, this work was supported by Campus France - programme PHC PROCOPE 2019 - Numéro de projet: 42511TA.
APPENDIX A: INTEGRATION SCHEME
In the globally coupled framework we simulate the system given by Eqs. (9) by employing the classical Euler-Maruyama numerical scheme. The discretized equations read as

\hat{u}(t_{k+1}) - \hat{u}(t_k) = \frac{\Delta t}{\varepsilon} \left\{ f\big(\hat{u}(t_k), \hat{v}(t_k)\big) + \sigma \big( E[\hat{u}(t_k)] - \hat{u}(t_k) \big) \right\} + \sqrt{\frac{D}{\varepsilon}}\, d\hat{\bar{W}}(t_k),
\hat{v}(t_{k+1}) - \hat{v}(t_k) = \big( \hat{u}(t_k) + a \big) \Delta t + \sqrt{D}\, d\hat{W}(t_k),    (21)

where the hat denotes the variables discretized in time, Δt is the time step and t_k is the k-th sample of time. More precisely, t_k = k Δt with k = 0, 1, ..., M, where the initial and final time instants of the evolution are t_0 = 0 and t_M = T = M Δt, respectively. In the Euler-Maruyama scheme the noise terms are discretized as

d\hat{\bar{W}}(t_k) = \sqrt{\Delta t}\, \bar{\mu}(t_k),  \qquad  d\hat{W}(t_k) = \sqrt{\Delta t}\, \mu(t_k),    (22)

where \bar{\mu}(t_k) and \mu(t_k) are random numbers generated independently from the standard normal distribution at each time instant t_k.

In the locally coupled framework we simulate the system given by Eqs. (18). We employ the classical Euler-Maruyama numerical scheme, where we use the following explicit discretized equations:

\hat{u}(t_{k+1}) - \hat{u}(t_k) = \frac{\Delta t}{\varepsilon} \left\{ \big( 1 - \hat{\rho}(t_k) \big) \hat{u}(t_k) - \frac{\hat{u}^3(t_k)}{3} - \hat{v}(t_k) \right\},
\hat{v}(t_{k+1}) - \hat{v}(t_k) = \big( \hat{u}(t_k) + a \big) \Delta t + \sqrt{D}\, d\hat{W}(t_k),    (23)

where the notation and the parameters are the same as in Eqs. (21). In order to perform the numerical integration, the value of E[\hat{u}] (respectively \hat{\rho}) is needed at each time instant t_k for the globally (locally) coupled setting. We find the values of E[\hat{u}] (respectively \hat{\rho}) at each time step by integrating the network Eqs. (3), with initial conditions generated from a Gaussian distribution, and by implementing the Euler-Maruyama method. In particular, we have performed L = 20 simulations of globally (locally) coupled networks, where the initial conditions are randomly generated in each simulation.
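For concreteness, the update rule of Eqs. (21)-(22) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the FHN nonlinearity is taken in its standard form f(u, v) = u - u^3/3 - v, all parameter values are placeholders, and the sequence of E[û(t_k)] values is assumed to be supplied externally (in the text it is obtained from network simulations).

```python
import numpy as np

def euler_maruyama_mean_field(E_u, a=1.05, eps=0.01, sigma=0.1, D=0.001,
                              dt=0.001, u0=-1.0, v0=-0.5, seed=0):
    """One realization of the discretized mean-field Eqs. (21).

    E_u : array of E[u(t_k)] values precomputed from network simulations,
          one entry per time step. All parameter values here are
          illustrative placeholders, not the paper's exact choices.
    """
    rng = np.random.default_rng(seed)
    M = len(E_u)
    u = np.empty(M + 1)
    v = np.empty(M + 1)
    u[0], v[0] = u0, v0
    for k in range(M):
        # Standard FHN nonlinearity f(u, v) = u - u^3/3 - v.
        f = u[k] - u[k] ** 3 / 3.0 - v[k]
        # Independent Wiener increments dW = sqrt(dt) * mu, cf. Eqs. (22).
        dWbar = np.sqrt(dt) * rng.standard_normal()
        dW = np.sqrt(dt) * rng.standard_normal()
        u[k + 1] = (u[k]
                    + (dt / eps) * (f + sigma * (E_u[k] - u[k]))
                    + np.sqrt(D / eps) * dWbar)
        v[k + 1] = v[k] + (u[k] + a) * dt + np.sqrt(D) * dW
    return u, v
```

A single call returns one realization of (û, v̂), from which interspike intervals can then be extracted.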
Each repetition provides a different set of E[\hat{u}] (respectively \hat{\rho}) values for each time instant. Finally, the E[\hat{u}] (respectively \hat{\rho}) values to be introduced in the associated mean-field framework, given by Eqs. (21) (Eqs. (23)), are determined as the average over the L realizations of the values calculated at each time instant.

Finally, we investigate the influence of the time step size used for the integration of the globally coupled mean-field Eqs. (9). It is important to use a sufficiently small time step (see Fig. 10).

APPENDIX B: DEPENDENCE ON THE NETWORK SIZE
Here we analyze the impact of network size on our results. In more detail, we compare the R(D) curves obtained for the globally coupled case from the direct numerical simulation of the network equations for N = 50 and N = 500 with the curve resulting from the mean-field framework. In particular, for the mean-field model we simulate the system given by Eqs. (9) and, analogously, for the network equations we integrate Eqs. (3) with noise terms in both system variable equations. It turns out that the system size does not have an impact here and all three curves agree well (see Fig. 11).

FIG. 10. Normalized standard deviation of the interspike interval R for a globally coupled network (red circles) with N = 100 nodes and with noise terms in both state variable equations, and the same for the corresponding mean-field system simulated with different time steps: Δt = 0.001 (dark blue triangles), Δt = 0.002 (light blue squares) and Δt = 0.004 (orange stars), for fixed coupling strength σ and varying noise intensity D. The results are obtained by integrating over 5000 time units and then averaging over time and oscillators. The x-axis has logarithmic scaling. Other parameters: N = 100, ε = 0.01.

FIG. 11. Normalized standard deviation of the interspike interval R for a globally coupled network with noise terms in both system variable equations. Cases with N = 50 (red circles) and N = 500 nodes (orange diamonds) are compared to the mean-field system (blue triangles), where the coupling strength σ is fixed and the noise intensity D is varied. The results are obtained by integrating over 5000 time units and then averaging over time and oscillators. The x-axis has logarithmic scaling. Other parameters: ε = 0.01.
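For reference, the coherence measure used throughout, the normalized standard deviation R of the interspike intervals, can be computed from a simulated trace as sketched below. This is a minimal illustration rather than the authors' analysis code; detecting spikes as upward crossings of u through zero is an assumption.

```python
import numpy as np

def isi_coherence(u, dt, threshold=0.0):
    """Normalized standard deviation R = std(t_ISI) / mean(t_ISI).

    Spike times are taken as upward crossings of `u` through `threshold`
    (an illustrative detection rule; the threshold value is an assumption).
    Returns NaN if fewer than two spikes are detected.
    """
    u = np.asarray(u)
    # Indices where u crosses the threshold from below.
    crossings = np.flatnonzero((u[:-1] < threshold) & (u[1:] >= threshold))
    spike_times = crossings * dt
    t_isi = np.diff(spike_times)
    if t_isi.size == 0:
        return np.nan
    return t_isi.std() / t_isi.mean()
```

For a perfectly periodic spike train R vanishes, while Poissonian spiking gives R ≈ 1; the curves reported in the figures additionally average R over oscillators and, where stated, over realizations.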
[1] R. L. Stratonovich, Topics in the Theory of Random Noise, Vol. I+II (Gordon and Breach, 1965, 1967).
[2] L. Arnold, Random Dynamical Systems (Springer, 1998).
[3] V. S. Anishchenko, V. Astakhov, A. B. Neiman, T. Vadivasova, and L. Schimansky-Geier, Nonlinear Dynamics of Chaotic and Stochastic Systems: Tutorial and Modern Developments (Springer, Berlin, 2007).
[4] G. Deco, E. T. Rolls, and R. Romo, Prog. Neurobiol., 1 (2009).
[5] A. N. Pisarchik, R. Jaimes-Reátegui, C. A. Magallón-García, and C. O. Castillo-Morales, Biol. Cybern., 397 (2014).
[6] A. E. Runnova, A. E. Hramov, V. V. Grubov, A. A. Koronovskii, M. K. Kurovskaya, and A. N. Pisarchik, Chaos, Solitons & Fractals, 201 (2016).
[7] A. Zakharova, T. Vadivasova, V. Anishchenko, A. Koseska, and J. Kurths, Phys. Rev. E, 011106 (2010).
[8] A. Zakharova, J. Kurths, T. Vadivasova, and A. Koseska, PLOS One (2011).
[9] V. S. Anishchenko and A. B. Neiman, in Stochastic Dynamics (Springer, 1997) pp. 154-166.
[10] A. Zakharova, A. Feoktistov, T. Vadivasova, and E. Schöll, Eur. Phys. J. Spec. Top., 2481 (2013).
[11] N. Semenova, A. Zakharova, V. S. Anishchenko, and E. Schöll, Phys. Rev. Lett., 014102 (2016).
[12] A. Zakharova, N. Semenova, V. S. Anishchenko, and E. Schöll, in Patterns of Dynamics, Springer Proceedings in Mathematics & Statistics, Vol. 205, edited by P. Gurevich, J. Hell, and B. Sandstede (Springer, 2018) p. 44.
[13] A. Zakharova, N. Semenova, V. S. Anishchenko, and E. Schöll, Chaos, 114320 (2017).
[14] S. Olmi and A. Torcini, in Nonlinear Dynamics in Computational Neuroscience (Springer, 2019) pp. 65-79.
[15] A. Zakharova, Chimera Patterns in Networks: Interplay between Dynamics, Structure, Noise, and Delay, Understanding Complex Systems (Springer, 2020).
[16] R. Benzi, G. Parisi, A. Sutera, and A. Vulpiani, Tellus, 10 (1982).
[17] J. K. Douglass, L. Wilkens, E. Pantazelou, and F. Moss, Nature, 337 (1993).
[18] F. Moss, D. Pierson, and D. O'Gorman, Int. J. Bifurc. Chaos, 1383 (1994).
[19] K. Wiesenfeld and F. Moss, Nature, 33 (1995).
[20] Z. Gingl, L. Kiss, and F. Moss, EPL (Europhys. Lett.), 191 (1995).
[21] W.-J. Rappel and A. Karma, Phys. Rev. Lett., 3256 (1996).
[22] A. R. Bulsara and A. Zador, Phys. Rev. E, R2185 (1996).
[23] J. J. Collins, T. T. Imhoff, and P. Grigg, J. Neurophysiol., 642 (1996).
[24] L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni, Rev. Mod. Phys., 223 (1998).
[25] V. S. Anishchenko, A. B. Neiman, F. Moss, and L. Schimansky-Geier, Phys. Usp., 7 (1999).
[26] C. B. Muratov, E. Vanden-Eijnden, and E. Weinan, Physica D, 227 (2005).
[27] M. D. McDonnell and D. Abbott, PLOS Comput. Biol. (2009).
[28] L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni, Eur. Phys. J. B, 1 (2009).
[29] V. Semenov, A. Zakharova, Y. Maistrenko, and E. Schöll, EPL (Europhys. Lett.), 10005 (2016).
[30] H. Gang, T. Ditzinger, C.-Z. Ning, and H. Haken, Phys. Rev. Lett., 807 (1993).
[31] A. S. Pikovsky and J. Kurths, Phys. Rev. Lett., 775 (1997).
[32] A. Neiman, P. I. Saparin, and L. Stone, Phys. Rev. E, 270 (1997).
[33] S.-G. Lee, A. Neiman, and S. Kim, Phys. Rev. E, 3292 (1998).
[34] J. R. Pradines, G. V. Osipov, and J. J. Collins, Phys. Rev. E, 6407 (1999).
[35] G. Giacomelli, M. Giudici, S. Balle, and J. R. Tredicce, Phys. Rev. Lett., 3298 (2000).
[36] K. Miyakawa and H. Isikawa, Phys. Rev. E, 046204 (2002).
[37] T. Tateno and K. Pakdaman, Chaos, 511 (2004), https://doi.org/10.1063/1.1756118.
[38] B. Lindner, J. García-Ojalvo, A. Neiman, and L. Schimansky-Geier, Phys. Rep., 321 (2004).
[39] R. L. DeVille, E. Vanden-Eijnden, and C. B. Muratov, Phys. Rev. E, 031105 (2005).
[40] O. Ushakov, H.-J. Wünsche, F. Henneberger, I. Khovanov, L. Schimansky-Geier, and M. Zaks, Phys. Rev. Lett., 123903 (2005).
[41] P. M. Geffert, A. Zakharova, A. Vüllings, W. Just, and E. Schöll, Eur. Phys. J. B, 291 (2014).
[42] V. Semenov, A. Feoktistov, T. Vadivasova, E. Schöll, and A. Zakharova, Chaos, 033111 (2015).
[43] W. Just, P. M. Geffert, A. Zakharova, and E. Schöll, in Control of Self-Organizing Nonlinear Systems (Springer, 2016) pp. 147-168.
[44] B. Hu and C. Zhou, Phys. Rev. E, 1001(R) (2000).
[45] A. G. Balanov, N. B. Janson, and E. Schöll, Physica D, 1 (2004).
[46] E. Schöll, A. G. Balanov, N. B. Janson, and A. B. Neiman, Stoch. Dyn., 281 (2005).
[47] B. Hauschildt, N. B. Janson, A. G. Balanov, and E. Schöll, Phys. Rev. E, 051906 (2006).
[48] D. Ziemann, R. Aust, B. Lingnau, E. Schöll, and K. Lüdge, EPL (Europhys. Lett.), 14002 (2013).
[49] P. Balenzuela, P. Rué, S. Boccaletti, and J. García-Ojalvo, New J. Phys., 013036 (2014).
[50] M. Masoliver, N. Malik, E. Schöll, and A. Zakharova, Chaos, 101102 (2017).
[51] R. Aust, P. Hövel, J. Hizanidis, and E. Schöll, Eur. Phys. J. Spec. Top., 77 (2010).
[52] T. Kreuz, S. Luccioli, and A. Torcini, Phys. Rev. Lett., 238101 (2006).
[53] T. Kreuz, S. Luccioli, and A. Torcini, Neurocomputing, 1970 (2007).
[54] N. Semenova and A. Zakharova, Chaos, 051104 (2018).
[55] E. Wang, D. Chik, and Z. Wang, Phys. Rev. E, 740 (2000).
[56] O. Kwon and H. T. Moon, Phys. Lett. A, 319 (2002).
[57] X. Sun, Q. Lu, and J. Kurths, Physica A, 6679 (2008).
[58] E. Yilmaz, M. Ozer, V. Baysal, and M. Perc, Sci. Rep., 30914 (2016).
[59] A. V. Andreev, V. V. Makarov, A. E. Runnova, A. N. Pisarchik, and A. E. Hramov, Chaos, Solitons & Fractals, 80 (2018).
[60] N. Brunel and X.-J. Wang, J. Neurophysiol., 415 (2003).
[61] G. Deco and M. L. Kringelbach, Trends Neurosci., 125 (2016).
[62] A. Pisarchik, V. A. Maksimenko, A. V. Andreev, N. S. Frolov, V. V. Makarov, M. O. Zhuravlev, A. E. Runnova, and A. E. Hramov, Sci. Rep., 18325 (2019).
[63] C. Kirst, M. Timme, and D. Battaglia, Nat. Commun., 1 (2016).
[64] A. Palmigiano, T. Geisel, F. Wolf, and D. Battaglia, Nat. Neurosci., 1014 (2017).
[65] B. van Wijk, P. J. Beek, and A. Daffertshofer, Front. Hum. Neurosci., 252 (2012).
[66] B. Horwitz and A. R. Braun, Brain Lang., 377 (2004).
[67] D. H. Hubel and T. N. Wiesel, J. Physiol. (Lond.), 574 (1959).
[68] D. H. Hubel and T. N. Wiesel, J. Physiol. (Lond.), 106 (1962).
[69] B. B. Averbeck, P. E. Latham, and A. Pouget, Nat. Rev. Neurosci., 358 (2006).
[70] G. Citti and A. Sarti, J. Math. Imaging Vis., 307 (2006).
[71] A. Sarti, G. Citti, and J. Petitot, Biol. Cybern., 33 (2008).
[72] E. Baspinar, G. Citti, and A. Sarti, J. Math. Imaging Vis., 900 (2018).
[73] E. Baspinar, A. Sarti, and G. Citti, J. Math. Neurosci., 1 (2020).
[74] R. Toral, C. R. Mirasso, and J. D. Gunton, EPL (Europhys. Lett.), 162 (2003).
[75] W. Du-Qu and L. Xiao-Shu, Commun. Theor. Phys., 759 (2008).
[76] B. Lindner and L. Schimansky-Geier, Phys. Rev. E, 7270 (1999).
[77] S. Olmi, D. Angulo-Garcia, A. Imparato, and A. Torcini, Sci. Rep., 1 (2017).
[78] J. Baladron, D. Fasoli, O. Faugeras, and J. Touboul, J. Math. Neurosci., 10 (2012).
[79] M. Bossy, O. Faugeras, and D. Talay, J. Math. Neurosci., 19 (2015).
[80] A. L. Hodgkin and A. F. Huxley, J. Physiol. (Lond.), 500 (1952).
[81] C. Morris and H. Lecar, Biophys. J., 193 (1981).
[82] J. L. Hindmarsh and R. Rose, Proc. R. Soc. Lond. B, 87 (1984).
[83] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge University Press, 2002).
[84] R. Brette, M. Rudolph, T. Carnevale, M. Hines, D. Beeman, J. M. Bower, M. Diesmann, A. Morrison, P. H. Goodman, F. C. Harris, et al., J. Comput. Neurosci., 349 (2007).
[85] E. M. Izhikevich, Dynamical Systems in Neuroscience (MIT Press, 2007).
[86] A. Galves and E. Löcherbach, J. Stat. Phys., 896 (2013).
[87] H. R. Wilson and J. D. Cowan, Biophys. J., 1 (1972).
[88] H. R. Wilson and J. D. Cowan, Kybernetik, 55 (1973).
[89] J. Touboul, G. Hermann, and O. Faugeras, SIAM J. Appl. Dyn. Syst., 49 (2012).
[90] E. Montbrió, D. Pazó, and A. Roxin, Phys. Rev. X, 021028 (2015).
[91] F. Devalle, A. Roxin, and E. Montbrió, PLoS Comput. Biol., e1005881 (2017).
[92] A. Ceni, S. Olmi, A. Torcini, and D. Angulo-Garcia, Chaos, 053121 (2020).
[93] M. Segneri, H. Bi, S. Olmi, and A. Torcini, Front. Comput. Neurosci.
[94] G. B. Ermentrout and N. Kopell, SIAM J. Appl. Math., 233 (1986).
[95] R. FitzHugh, Biophys. J., 445 (1961).
[96] J. Nagumo, S. Arimoto, and S. Yoshizawa, Proc. IRE, 2061 (1962).
[97] N. Burić, D. Ranković, K. Todorović, and N. Vasović, Physica A, 3956 (2010).
[98] R. Rodriguez and H. C. Tuckwell, Phys. Rev. E, 5585 (1996).
[99] H. Tuckwell, J. Comput. Neurosci., 91 (1998).
[100] S. Tanabe and K. Pakdaman, Phys. Rev. E, 031911 (2001).
[101] M. Zaks, X. Sailer, L. Schimansky-Geier, and A. Neiman, Chaos, 026117 (2005).
[102] N. Etemadi, Z. Wahrscheinlichkeitstheor. Verwandte Geb., 119 (1981).
[103] D. Golomb, D. Hansel, and G. Mato, in Handbook of Biological Physics, Vol. 4 (Elsevier, 2001) pp. 887-968.
[104] S. Olmi, R. Livi, A. Politi, and A. Torcini, Phys. Rev. E, 046119 (2010).
[105] S. Luccioli, S. Olmi, A. Politi, and A. Torcini, Phys. Rev. Lett., 138103 (2012).
[106] A. Lacasta, F. Sagues, and J. Sancho, Phys. Rev. E, 045105 (2002).
[107] B. Lindner, L. Schimansky-Geier, and A. Longtin, Phys. Rev. E, 031916 (2002).
[108] S. Luccioli, T. Kreuz, and A. Torcini, Phys. Rev. E, 041902 (2006).
[109] M. E. Yamakou and J. Jost, Phys. Rev. E, 022313 (2019).
[110] M. E. Yamakou, P. G. Hjorth, and E. A. Martens, Front. Comput. Neurosci. (2020).