Noisy threshold in neuronal models: connections with the noisy leaky integrate-and-fire model
G. Dumont · J. Henry · C.O. Tarniceriu

Abstract
Providing an analytical treatment of the stochastic features of neurons' dynamics is one of the biggest current challenges in mathematical biology. The noisy leaky integrate-and-fire model and its associated Fokker-Planck equation are probably the most popular way to deal with neural variability. Another well-known formalism is the escape-rate model: a model giving the probability that a neuron fires at a certain time knowing the time elapsed since its last action potential. This model leads to a so-called age-structured system, a partial differential equation with a non-local boundary condition famous in the field of population dynamics, where the age of a neuron is the amount of time that has passed since its previous spike. In this theoretical paper, we investigate the mathematical connection between the two formalisms. We derive an integral transform of the solution to the age-structured model into the solution of the Fokker-Planck equation. This integral transform highlights the link between the two stochastic processes. As far as we know, an explicit mathematical correspondence between the two solutions has not been introduced until now.
G. Dumont
Physics Department, 150 Louis Pasteur, Ottawa, Ontario, Canada K1N 6N5
E-mail: [email protected]

J. Henry
INRIA Bordeaux Sud-Ouest, 200 avenue de la vieille tour, 33405 Talence Cedex, France
E-mail: [email protected]

C.O. Tarniceriu
Department of Sciences of "Al. I. Cuza" University, Lascar Catargi 54, 700107 Iași, Romania
E-mail: [email protected]

Neurons are intrinsically noisy. They never respond in the same way under repeated exposure to identical stimuli, and it is difficult for theoreticians to apply the correct analytical treatment in order to express this variability. Two distinct sources of noise are usually mentioned: external and internal [15]. While the external source of noise usually refers to random fluctuations attributed to the environment of the neurons, the internal source is mainly imputed to the probabilistic nature of the chemical reactions governing the firing process of neurons. More precisely, noise is present because a neuron is bombarded by thousands of synaptic inputs, and also because of the randomness in the openings and closings of the ion channels underlying action potentials [28].

The noisy leaky integrate-and-fire (NLIF) model is a mathematical model that takes into account the stochastic features of neurons [7]. The model is preferred by theoreticians since it can be seen as a simplification of the biophysiological Hodgkin-Huxley model [22], yet it is sufficiently detailed to allow a qualitative comparison with physiological data obtained via intracranial recordings [24] (see also [20] for a recent discussion of the quality of neural modeling). Nonetheless, despite its apparent simplicity, many questions regarding its dynamics remain open.

By definition, the NLIF model describes a stochastic process, given by a Langevin equation plus a discontinuous reset mechanism that mimics the onset of the action potential (see [7] and [24]).
Starting with the Langevin equation, one can write the well-known associated Fokker-Planck (FP) equation [16], which gives the evolution in time of the probability density of finding a neuron's membrane potential at a certain voltage value [27].

Let us recall that, in mathematical neuroscience, the concept of probability density function already has a long history, as can be seen in [40], [2], and it is used in a variety of contexts. Indeed, assuming the number of neurons to be infinitely large, one can write the so-called thermodynamic mean-field equation, where the effect of the whole network on any given neuron is approximated by a single averaged effect. Under some assumptions and approximations, the equation takes the form of a nonlinear FP equation. It is particularly pertinent for the simulation of large sparsely connected populations of neurons [33], [32], [13]. Furthermore, this density approach has brought important added value to the theoretical understanding of synchronization and brain rhythms. In particular, it has been successfully used to understand synchronization caused by recurrent excitation [11], [12], [8], by delayed inhibitory feedback [5], by both recurrent excitation and inhibition [4], and by gap junctions [35]. In a similar vein, it has been used to study the occurrence of neural cascades [30], [31] and the emergence of self-criticality [29] with synaptic adaptation.

In this paper, we do not investigate the effect of interactions among neurons, but focus on the analytical treatment of neural noise. The NLIF model is a popular way to deal with the stochastic aspect of neurons; another is the escape-rate model [38], [19]. The main difference between the two approaches consists in the treatment of noise: while in the NLIF model the noise acts on the trajectories, the escape-rate model considers deterministic trajectories, and the noise is present in expressing the variability of firings, modeled in the form of a hazard function.
Therefore, the noisy trajectories with a fixed threshold are replaced by deterministic trajectories with a noisy threshold. In the equivalent description of the NLIF model as a FP equation with an absorbing boundary condition at the firing threshold, the noise is expressed by the diffusion term of the FP equation; it has been shown in [38] that, in the subthreshold regime, the integrate-and-fire model with stochastic input (diffusive noise) can be mapped onto an escape-rate model with a certain escape rate. Starting from this, the equivalence of the FP equation with escape noise and a partial differential equation that describes the evolution of refractory densities has been shown [19]; the latter equation is strikingly similar to the well-known age-structured (AS) models, and it gives the evolution in time of the refractory densities with respect to their refractory state, which is in fact the time elapsed since the last firing. To underline the above-mentioned similarity, we will refer to this variable throughout this paper as age. Age structure in a neural context has also been discussed in [36].

In our paper, we shall prove that the solution to the AS system can be transformed via an integral transform into the solution to the FP equation associated to the NLIF model. The kernel of the integral transform will involve in particular the notion of survivor function [18], [19]. In renewal theory, the hazard is also known as the age-dependent death rate and expresses the rate of decay of the survivor function [10]. The concept of a time-dependent interspike interval (ISI) distribution and the corresponding survivor function has been considered later [17]. In the neuroscience context, the survivor function, which was introduced initially to describe the probability of a particle reaching a given target, will give the probability for a neuron to "survive" without firing.
For more about these considerations we refer again to [19]; further analysis of these functions and the related first passage time problem in the neural context can be found in the review [3]. The first passage time problem in cellular domains has been investigated in [39] and [23].

There is a strong advantage in using an AS formalism: AS systems have been thoroughly investigated in the last decades, and many qualitative results on the various forms of AS population models have been obtained. By proving an equivalence between a membrane potential density model and an AS model, we will be in a position to obtain insights into the qualitative behavior of the population density function, such as long time behavior, stability, bifurcation points, and so on.

The paper is structured as follows: in the first two sections we recall the NLIF model and its associated FP equation, and we present some simulations of the models. Next, some considerations about the survivor function, the interspike interval distribution and the first passage time problem are presented. We then introduce the stochastic threshold model and the corresponding AS system. We prove our main theoretical results in the last two sections: first, we establish in Proposition 1 an integral correspondence between the solution to the AS system and the solution to the FP equation. We also consider the stationary case, and show that, by our integral transform, we obtain an expression of the corresponding stationary solution which verifies the stationary FP equation.
Last, we show the asymptotic convergence of the solution to the FP system to the solution of the stationary system defined through our transform. The existence of an inverse transform to the one introduced here, as well as extensions of the problem to the case of time-dependent parameters, are subjects of our further investigations.

Before getting started, let us summarize in Table 1 the main mathematical notations used throughout this document and their associated biophysiological meanings.
Table 1  Main notations used throughout this paper and their biophysiological interpretations

  Notation   Biophysiological interpretation
  v(t)       Neuron's membrane potential
  v_r        Reset potential
  µ          Bias current
  σ          Noise intensity
  p(t,v)     Population density with respect to potential
  r(t)       Neuron's firing rate
  a(t)       Neuron's age, i.e. time elapsed since the last spike
  q(a,v)     Joint probability density of the membrane potential and neuron's age
  ISI(a)     First passage time (interspike interval) distribution
  S(a)       Age-dependent death rate
  n(t,a)     Population density for the age-structured model
The NLIF model is a well-known model in the field of computational neuroscience [24]. The model consists of an ordinary differential equation describing the subthreshold dynamics of a single neuron's membrane potential, together with the onset of an action potential described by a reset mechanism: a spike occurs whenever a given threshold V_T is reached by the membrane potential variable V. Whenever the firing threshold is reached, it is considered that a spike has been fired and the membrane potential is instantaneously reset to a given value V_R. The dynamics of the subthreshold potential are given by

    \tau \frac{d}{dt} V(t) = -g\,(V(t) - V_L) + \eta(t),

where V(t) is the membrane potential at time t, \tau is the membrane capacitance, g the leak conductance, V_L the reversal potential and \eta(t) a Gaussian white noise; see [6] and [1] for the history of the model, [7] for a recent review and [24] for other spiking models. In what follows, we will use a normalized version of the above equation, i.e. we define µ as the bias current and v the membrane potential, given by the rescaling below.

Fig. 1
Simulation of the neuron model (1) for different values of the noise coefficient. The parameters of the simulation are: v_r = 0., µ = 20, and σ = 0., σ = 0., σ = 0.

The normalized variables are

    \mu = \frac{V_L}{V_T}, \qquad v = \frac{V}{V_T}, \qquad v_r = \frac{V_R}{V_T}.

After rescaling time in units of the membrane time constant \tau/g, the normalized model reads

    \frac{d}{dt} v(t) = \mu - v(t) + \xi(t); \qquad \text{if } v > 1, \text{ then } v = v_r.    (1)

Again, \xi(t) is a Gaussian white noise stochastic process with intensity σ:

    \langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\,\xi(t') \rangle = \sigma^2 \delta(t - t').

In Fig. 1, a simulation of the neuron model (1) is presented. The three panels correspond to the same simulation with different levels of noise. As expected, when the stochastic coefficient is increased, the corresponding dynamics become much more irregular. Note that for µ small enough, the equilibrium of the membrane potential lies below the threshold. In this situation, the neuron fires only due to the stochastic Brownian motion. We refer to this situation as the subthreshold regime. In a real-world setting, such a situation appears for instance in a balanced neural network, when the excitatory and inhibitory pre-synaptic inputs cancel out.

Considering a population of neurons individually described by the stochastic equation (1), the evolution of the population density function has been proven to satisfy the FP equation. We recall that the FP equation has been used in two different contexts in mathematical neuroscience: to model the evolution of both the probability density function and the population density function. For more considerations about the link between the two approaches, we refer to [32], [26], [25], [5].

In this paper we shall use both formalisms: we shall consider a density of neurons characterized by a population density function, denoted here by p(t,v), which satisfies the FP equation, and each neuron of the given population has the evolution of its membrane potential given by the NLIF model.
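For readers who wish to reproduce trajectories like those in Fig. 1, a minimal Euler-Maruyama discretization of the normalized model (1) can be sketched as follows; the function name and all parameter values below are our illustrative choices, not those used in the paper's figures:

```python
import numpy as np

def simulate_nlif(mu, sigma, v_r, t_end, dt=1e-3, v0=0.0, seed=0):
    """Euler-Maruyama integration of model (1):
    dv = (mu - v) dt + sigma dW, with reset to v_r when v reaches 1."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_end / dt)
    v = np.empty(n_steps + 1)
    v[0] = v0
    spike_times = []
    for k in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()
        v[k + 1] = v[k] + (mu - v[k]) * dt + sigma * dw
        if v[k + 1] >= 1.0:            # threshold crossing: spike, then reset
            spike_times.append((k + 1) * dt)
            v[k + 1] = v_r
    return v, np.array(spike_times)

# Subthreshold regime (mu < 1): the drift equilibrium lies below threshold.
v, spikes = simulate_nlif(mu=0.8, sigma=0.5, v_r=0.0, t_end=100.0)
```

With µ = 0.8 < 1 the equilibrium of the drift lies below the threshold, so every spike recorded in `spikes` is noise-induced, which is exactly the subthreshold regime described above.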
Fig. 2
Schematic representation of the state space for the FP equation (2).

Then, the probability density function for each neuron to be at a certain voltage at a given time will be described by the same FP equation [16], this time considered only on an inter-spike interval, as we shall see in the next section.

This equation is a conservation law taking into account three phenomena, modeled by: a drift term due to the continuous evolution in the NLIF model, a diffusion term due to the noise, and a term due to the reset to v_r right after the firing process. Let r(t) be the firing rate of the population, i.e. the flux through the threshold. Then, the dynamics of the population density p(t,v) are

    \frac{\partial}{\partial t} p(t,v) + \overbrace{\frac{\partial}{\partial v}\left[(\mu - v)\, p(t,v)\right]}^{\text{Drift}} - \overbrace{\frac{\sigma^2}{2}\, \frac{\partial^2}{\partial v^2} p(t,v)}^{\text{Diffusion}} = \overbrace{\delta(v - v_r)\, r(t)}^{\text{Reset}}.    (2)

We show in Fig. 2 a schematic representation of the state space for the FP equation (2). Because a neuron reaching the threshold fires an action potential and is instantaneously reset to v_r, we impose an absorbing condition at the threshold ([21]), namely

    p(t, 1) = 0, \qquad \forall t \ge 0.    (3)

Usually, a reflecting boundary is imposed at v = -\infty in order to ensure the conservation property:

    \lim_{v \to -\infty}\, (v - \mu)\, p(t,v) + \frac{\sigma^2}{2}\, \frac{\partial}{\partial v} p(t,v) = 0, \qquad \forall t \ge 0.    (4)

Of course, an initial distribution of the membrane potential is taken as a given function:

    p(0, v) = p^0(v), \qquad v \in (-\infty, 1).    (5)

As previously said, the firing rate r(t) is defined as the flux at the threshold and, due to the boundary condition for the population density function at this value, is given by

    r(t) = -\frac{\sigma^2}{2}\, \frac{\partial}{\partial v} p(t, 1).    (6)
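Equations (2)-(6) can be discretized in conservative form so that the threshold flux re-injected at v_r preserves total mass exactly. The following sketch (upwind drift, centred diffusion; grid sizes, time step and all parameter values are our illustrative choices) is one possible such scheme, not the one used for the paper's figures:

```python
import numpy as np

def fp_step(p, v, dt, mu, sigma, v_r):
    """One explicit step of the FP equation (2) on a cell-centred grid.
    Interface flux J = (mu - v) p - (sigma^2/2) dp/dv; a ghost cell enforces
    the absorbing condition p(t,1) = 0 (eq. (3)); the threshold flux r(t)
    (eq. (6)) is re-injected at v_r, so total mass is conserved (eq. (8))."""
    dv = v[1] - v[0]
    D = 0.5 * sigma ** 2
    pe = np.append(p, 0.0)                    # ghost cell beyond the threshold
    vi = v + 0.5 * dv                         # interface positions (last one = 1)
    drift = mu - vi
    upwind = np.where(drift > 0.0, p, pe[1:])
    J = drift * upwind - D * (pe[1:] - p) / dv
    J = np.concatenate(([0.0], J))            # reflecting (zero-flux) left end
    r = J[-1]                                 # firing rate = flux at threshold
    p_new = p - dt / dv * np.diff(J)
    p_new[np.argmin(np.abs(v - v_r))] += dt * r / dv   # reset re-injection
    return p_new, r

# Cell-centred grid on (-2, 1); Gaussian initial condition, as in Fig. 3.
m, v_min = 300, -2.0
dv = (1.0 - v_min) / m
v = v_min + (np.arange(m) + 0.5) * dv
p = np.exp(-(v / 0.2) ** 2)
p /= p.sum() * dv
for _ in range(2000):
    p, r = fp_step(p, v, dt=2e-4, mu=0.8, sigma=0.4, v_r=0.0)
```

By construction the re-injection adds back exactly the mass lost through the threshold, which is the discrete counterpart of the conservation property (7)-(8) below.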
Fig. 3  Simulations of the FP equation (2)-(6) and of the stochastic process (1): black curve for the FP equation, blue curve for the stochastic process. A Gaussian was taken as initial condition; the parameters of the simulation are: v_r = 0., µ = 20, σ = 0.4. The plots show the evolution in time of the solution at t = 0, t = 0., t = 0., t = 0., t = 0., t = 7.

Using the boundary condition and the expression of r(t) given by (6), one can easily check the conservation property of equation (2) by directly integrating it over the interval (-\infty, 1): if the initial condition satisfies

    \int_{-\infty}^{1} p^0(v)\, dv = 1,    (7)

then the solution to (2)-(6) necessarily satisfies the normalization condition

    \int_{-\infty}^{1} p(t, w)\, dw = 1.    (8)

Despite its "weird" singular source term, the existence of a solution to the above model has been proved in [9]. The FP equation can be written as a Stefan problem and an implicit solution can be given in the form of an integral equation. Note that in the literature, equation (2) is often expressed in terms of a conservation law. In this setting, the flux, which we denote J(t,v), is defined as

    -J(t,v) = (v - \mu)\, p(t,v) + \frac{\sigma^2}{2}\, \frac{\partial}{\partial v} p(t,v).

Therefore, the evolution in time of the density function p is given by

    \frac{\partial}{\partial t} p(t,v) = -\frac{\partial}{\partial v} J(t,v).

In this formulation, the singular source term that appears in (2) can be seen as a flux discontinuity, see [4] for instance:

    \lim_{v \to v_r^+} J(t,v) - \lim_{v \to v_r^-} J(t,v) = J(t, 1).
Fig. 4  Simulation of the firing rate (6) via the FP formalism (2)-(6), black curve, and the stochastic process (1), blue curve. The parameters of the simulation are: v_r = 0., µ = 15 and σ = 0., µ = 20 and σ = 0., µ = 30 and σ = 0.

We present in Fig. 3 a simulation of the FP model (2)-(6). The numerical results are compared with Monte Carlo simulations of the stochastic NLIF model (1). In Fig. 3, the black curve corresponds to the FP equation (2)-(6) and the blue curve to the stochastic process (1). A Gaussian was taken as initial condition (see the first panel of Fig. 3). Under the drift and diffusion effects, the density function yields a non-zero flux at the threshold. This flux is re-injected at v_r according to the reset process. This effect can be seen clearly in the third panel of the simulation presented in Fig. 3. Asymptotically, the solution reaches a stationary density. The steady state is shown in the last panel of Fig. 3. Note that the stationary state can easily be computed (we recall its expression later in the text). One can show the convergence of the solution towards the stationary density using the general relative entropy principle.

In Fig. 4, a comparison of the firing rate (6) computed via the FP formalism (2) and via the stochastic model (1) is represented. Again, the blue curve is obtained by direct simulation of the stochastic process (1), and the black curve corresponds to the simulation of the FP model (2)-(6). To be more precise, we also show a raster plot depicting the spike timings of the neurons for each simulation run. In the three different simulations that we present, we have varied the drift term µ.

The stationary state of the FP equation has been known for decades. A straightforward computation shows that the steady state p_∞(v) is given by

    p_\infty(v) = \frac{2 r_\infty}{\sigma^2}\, e^{-\frac{(v-\mu)^2}{\sigma^2}} \int_{\max(v,\,v_r)}^{1} e^{\frac{(w-\mu)^2}{\sigma^2}}\, dw,    (9)

with r_∞ the corresponding stationary firing rate (see Fig. 4).
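As a quick numerical check, the steady state (9) can be evaluated by straightforward quadrature, with r_∞ fixed by the normalization of p_∞ (the condition stated next as (10)). A sketch, with subthreshold parameter values of our own choosing so the exponentials stay small:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule, kept explicit for self-containedness."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def stationary_fp(mu, sigma, v_r, v_min=-4.0, m=1500):
    """Steady state (9): p_inf(v) proportional to
    exp(-((v-mu)/sigma)^2) * int_{max(v,v_r)}^{1} exp(((w-mu)/sigma)^2) dw,
    with r_inf fixed by the normalization int p_inf = 1 (eq. (10))."""
    v = np.linspace(v_min, 1.0, m)
    inner = np.empty(m)
    for i, vi in enumerate(v):
        w = np.linspace(max(vi, v_r), 1.0, 400)
        inner[i] = trapezoid(np.exp(((w - mu) / sigma) ** 2), w)
    shape = (2.0 / sigma ** 2) * np.exp(-((v - mu) / sigma) ** 2) * inner
    r_inf = 1.0 / trapezoid(shape, v)      # normalization fixes r_infinity
    return v, r_inf * shape, r_inf

v, p_inf, r_inf = stationary_fp(mu=0.8, sigma=0.5, v_r=0.0)
```

By construction `p_inf` vanishes at the threshold, as required by the absorbing condition (3), and `r_inf` approximates the stationary firing rate.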
The latter is determined by the normalization condition:

    r_\infty^{-1} = \frac{2}{\sigma^2} \int_{-\infty}^{1} e^{-\frac{(v-\mu)^2}{\sigma^2}} \int_{\max(v,\,v_r)}^{1} e^{\frac{(w-\mu)^2}{\sigma^2}}\, dw\, dv.    (10)

These expressions are well known and details can be found in [14], for example.

Fig. 5  Simulation of the model (11)-(14), black curve, and the stochastic process (1), blue curve. A Dirac mass at the reset potential was taken as initial condition. The parameters of the simulation are v_r = 0., µ = 20, σ = 0.4. The plots show the evolution in age of the solution at a = 0, a = 0., a = 0., a = 0., a = 0., a = 7.

In the following, we will define the age of a neuron as the time passed since its last firing.
Age is a somewhat forced notion in this context, but we have chosen to use it due to the similarity of the model that we will present in the next section to those from AS systems theory. The evolution of the probability density function q(a,v) for a neuron's membrane potential to be at age a at the potential value v is given by a similar FP equation:

    \frac{\partial}{\partial a} q(a,v) + \overbrace{\frac{\partial}{\partial v}\left[(\mu - v)\, q(a,v)\right]}^{\text{Drift}} - \overbrace{\frac{\sigma^2}{2}\, \frac{\partial^2}{\partial v^2} q(a,v)}^{\text{Diffusion}} = 0,    (11)

again with an absorbing boundary condition at the threshold value v = 1,

    q(a, 1) = 0, \qquad \forall a \ge 0,    (12)

and a reflecting boundary condition at v = -\infty,

    \lim_{v \to -\infty}\, (v - \mu)\, q(a,v) + \frac{\sigma^2}{2}\, \frac{\partial}{\partial v} q(a,v) = 0, \qquad \forall a \ge 0.    (13)

Since a neuron firing an action potential is reset to v_r, we consider the initial density given by

    q(0, v) = \delta(v - v_r), \qquad v \in (-\infty, 1),    (14)
where \delta is the Dirac distribution. As for equation (2), equation (11) is often represented in terms of a conservation law. In this setting, the flux, which we denote F(a,v), is defined as

    -F(a,v) = (v - \mu)\, q(a,v) + \frac{\sigma^2}{2}\, \frac{\partial}{\partial v} q(a,v).

Therefore, the evolution in age of the density function q is given by

    \frac{\partial}{\partial a} q(a,v) = -\frac{\partial}{\partial v} F(a,v).

Note that, in the case considered here, the re-injection of the (probability) flux at the reset value (the right-hand side of equation (2)) is not included; therefore the above model represents the evolution of the probability density of neurons before firing, i.e. within an inter-spike interval. The interpretation is that, once the neuron has fired, it becomes of age zero, and the source term \delta(v - v_r) as initial condition can be understood intuitively as saying that, right after a spike, the membrane potential is v_r with probability 1. It should be stressed that, as the bias current µ and the noise intensity σ do not depend on time, the probability density q does not depend on time but only on age a. The flux at the threshold value, which is again given only by the diffusive part of the flux, is a quantity of great interest since it gives the ISI distribution (for a neuron having at age a = 0 the potential v_r):

    ISI(a) = -\frac{\sigma^2}{2}\, \frac{\partial}{\partial v} q(a, 1), \qquad \forall a \ge 0.    (15)

Until now, no analytical expression for the ISI curve has been found.

Fig. 6  Simulation of the ISI function (15) via the FP formalism (11), black curve, and the stochastic process (1), blue curve. The parameters of the simulation are v_r = 0., µ = 15 and σ = 0., µ = 20 and σ = 0., µ = 30, σ = 0.

We present in Fig. 5 a simulation of the problem (11)-(14). Again, we have made a comparison between the stochastic process (blue curve) and the evolution of the density function (black curve). The simulation starts with a Dirac mass as initial condition (see the first panel of Fig. 5).
Under the influence of the drift term and the diffusion process (Gaussian white noise), the density function spreads towards the threshold. This is clearly seen in the upper plots of Fig. 5. At last, as expected, it converges to a zero density (see the last panel in Fig. 5). In Fig. 6, we show several different simulations of the ISI curve. As before, the blue curve corresponds to the stochastic process simulated via a Monte Carlo method, and the black curve to the deterministic process (11). We present three different panels corresponding to three different simulations in which the bias current µ was increased. We also present a raster plot depicting the spike timing of each neuron simulated via the NLIF model (see the upper panels of Fig. 6). It can be clearly seen that increasing the intensity of the current leads to a more concentrated ISI density. The ISI curve starts at zero, which means that right after spiking the neuron needs some time before spiking again. Then, the ISI curve increases and, after reaching a maximum, it decreases rapidly. This is depicted by the raster plot presented in Fig. 6.

The first passage time problem is intimately related to the FP equation. Starting from the Chapman-Kolmogorov equation, it has been shown that the probability of finding the state (potential) of a neuron at time t at a certain value v is the solution to the FP, or Kolmogorov forward, equation. From it, one can derive the equation that describes the evolution in time of the probability that a neuron which started at time 0 from a potential value v has not yet reached the threshold value, named the survival probability density. This equation is known as the Kolmogorov backward equation, and the choice of boundary conditions has been discussed in [16].

In what follows, we shall use the concept of survival probability, or survivor function, as in [19].
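The first passage time density, i.e. the ISI distribution of (15), can also be estimated directly by Monte Carlo: release many independent trajectories of (1) from the reset potential and record when each first hits the threshold. A sketch, with illustrative parameter values of our own choosing:

```python
import numpy as np

def first_passage_times(mu, sigma, v_r, n_trials=2000, dt=1e-3,
                        t_max=50.0, seed=0):
    """Simulate n_trials independent copies of (1) started at v_r and return
    the first time each hits the threshold v = 1 (its first passage time).
    Trials that have not fired by t_max are left as NaN."""
    rng = np.random.default_rng(seed)
    v = np.full(n_trials, v_r)
    fpt = np.full(n_trials, np.nan)           # NaN = not yet fired
    alive = np.ones(n_trials, dtype=bool)
    t = 0.0
    while t < t_max and alive.any():
        dw = np.sqrt(dt) * rng.standard_normal(alive.sum())
        v[alive] += (mu - v[alive]) * dt + sigma * dw
        fired = alive.copy()
        fired[alive] = v[alive] >= 1.0        # which survivors just crossed
        fpt[fired] = t + dt
        alive &= ~fired
        t += dt
    return fpt

fpt = first_passage_times(mu=1.5, sigma=0.3, v_r=0.0)
```

A normalized histogram of `fpt` approximates the ISI density, and the fraction of trials still `NaN` at a given time approximates the survivor function introduced next.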
In our case, this function will be only age-dependent, since we shall consider it for the case of a neuron that starts at age zero from the reset potential v_r. Namely, if q(a,v) is the solution to (11)-(14), then the quantity

    \int_{-\infty}^{1} q(a,v)\, dv

stands for the probability of survival at age a of a neuron that started at age 0 from the position v_r. Again, "survival" at age a means that, up to that age, the potential of the neuron's membrane has not yet reached the threshold value. Then, the rate of decay of the survivor function,

    S(a) = -\frac{\frac{d}{da} \int_{-\infty}^{1} q(a,w)\, dw}{\int_{-\infty}^{1} q(a,w)\, dw},    (16)

represents the rate at which the threshold is reached, and it has been called the age-dependent death rate or hazard. S(a) has the interpretation that, in order
Fig. 7  Simulation of the function S(a) given by (16) in the lower panels and the corresponding ISI given by (15) in the upper panels. The parameters of the simulation are v_r = 0., µ = 5, σ = 0., σ = 0., σ = 0.

to emit a spike, the neuron has to "survive" without firing in the interval (0, a) and then fire at age a.

In Fig. 7, numerical simulations of the age-dependent death rate for different parameters are presented. Let us notice that S clearly defines a positive function that converges towards a constant. Indeed, q(a,v) has the same asymptotic behavior as e^{-\lambda a}\, \hat{q}(v), which implies that

    \lim_{a \to +\infty} S(a) = \lambda,

with \lambda the dominant eigenvalue of the operator of the stationary FP equation and \hat{q} the corresponding eigenvector (see [34] for example).

Note that S can also be expressed in terms of the ISI function given above,

    S(a) = \frac{ISI(a)}{1 - \int_0^a ISI(s)\, ds},

which is the expression that we used in our numerical estimations of S.

We can now properly define the new stochastic process. The model is given by the evolution of the age of the neuron plus a stochastic reset mechanism to take into account the initiation of an action potential:

    \frac{d}{dt} a(t) = 1; \quad \text{the spiking probability in } (t, t+dt) \text{ is } S(a(t))\, dt; \quad \text{if a spike is triggered, then } a(t) = 0,    (17)

where S is the age-dependent death rate given by (16). In this model, the age of the neuron follows a trivial deterministic process, but the firing threshold is stochastic, since at each time the neuron can fire. When this happens, its age is reset to zero. As mentioned before, the difference between the models (1) and (17) is that, in the NLIF model the dynamics are stochastic and the reset
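The escape-rate process (17) is straightforward to simulate: in each small time step the neuron fires with probability S(a)dt. As a sanity check of our own choosing (not taken from the text), a constant hazard S ≡ λ must produce exponentially distributed ages at firing with mean 1/λ:

```python
import numpy as np

def simulate_escape(S, t_end, dt=1e-3, seed=0):
    """Simulate the noisy-threshold model (17): age a grows at unit speed and,
    in each step, the neuron fires with probability S(a) dt and resets a to 0."""
    rng = np.random.default_rng(seed)
    a = 0.0
    ages_at_spike = []
    for _ in range(int(t_end / dt)):
        if rng.random() < S(a) * dt:          # noisy threshold: fire, reset age
            ages_at_spike.append(a)
            a = 0.0
        else:
            a += dt
    return np.array(ages_at_spike)

ages = simulate_escape(S=lambda a: 2.0, t_end=1000.0)
```

With λ = 2, the empirical mean of `ages` is close to 1/λ = 0.5, and the recorded ages scatter widely: the neuron never fires at exactly the same age, as noted below.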
Fig. 8  Simulation of the escape-rate model (17). The parameters of the simulation are v_r = 0., µ = 5, σ = 0., σ = 0., σ = 0.

process is deterministic, while in the escape-rate model above the dynamics are deterministic but the reset mechanism is stochastic.

As we pointed out in the introduction, the escape-rate models were introduced in [38] in order to arrive at models that are more tractable from a mathematical point of view. It was shown there that, in the subthreshold regime for integrate-and-fire neurons, the diffusive noise can be replaced by a hazard noise (noisy threshold) described by a certain escape rate. More considerations about viable choices of age-dependent death rates, as well as the derivation of the refractory-densities model that we recall in the next section, can be found in [19].

We present in Fig. 8 a numerical simulation of the stochastic process defined by (17) for different age-dependent death rates. Note that the neuron never fires exactly at the same age, since its probability to fire (escape) is purely stochastic.

We can now introduce the AS model in the same way as it has been done in [19], [36] and [37]. The model describes the evolution in time of the population density function with respect to the age of a neuron in the following way: denoting by n(t,a) the density of neurons at time t at age a, the evolution of n is

    \frac{\partial}{\partial t} n(t,a) + \overbrace{\frac{\partial}{\partial a} n(t,a)}^{\text{Drift}} + \overbrace{S(a)\, n(t,a)}^{\text{Spiking}} = 0.    (18)

In Fig. 9, a schematic representation of the state space of the AS equation (18) is presented. Because, once a neuron triggers a spike, its age is reset to zero, we get the natural boundary condition

    n(t, 0) = r(t), \qquad \forall t > 0,

where r(t) is the firing rate, given by

    r(t) = \int_0^{+\infty} S(a)\, n(t,a)\, da, \qquad \forall t \ge 0.    (19)
Fig. 9
Schematic representation of the state space of the AS equation (18).
An initial distribution is assumed to be known:

    n(0, a) = n^0(a), \qquad \forall a > 0.    (20)

In the above equations, S(a) stands for the age-dependent death rate given by (16). Using the boundary condition and the expression of r(t) given by (19), one can easily check the conservation property of equation (18) by integrating it over the interval (0, \infty): if the initial condition satisfies

    \int_0^{+\infty} n^0(a)\, da = 1,    (21)

then the solution at any t > 0 satisfies

    \int_0^{+\infty} n(t,a)\, da = 1.    (22)

We present in Fig. 10 a simulation of the problem (18)-(20). Again, we have made a comparison between the stochastic process (blue curve) given by (17) and the evolution in time of the density function (black curve) given by (18)-(20). The simulation starts with a Gaussian as initial condition (the first panel of Fig. 10). Under the influence of the drift term, the density function advances in age, which is clearly seen in the upper plots of Fig. 10. After the spiking process, the age of the neuron is reset to zero. This effect is well perceived in the lower panels of Fig. 10. As expected from the model, the density function converges to an equilibrium density (see the last panel in Fig. 10).

The stationary state of the AS model can easily be computed; denoting by r_∞ the stationary firing rate, we get

    n_\infty(a) = r_\infty\, e^{-\int_0^a S(s)\, ds},

and, taking into account the normalization condition, we obtain the expression of the stationary firing rate:

    r_\infty^{-1} = \int_0^{+\infty} e^{-\int_0^a S(s)\, ds}\, da.

Since e^{-\int_0^a S(s)\, ds} is the expression of the survivor function, its integral over (0, \infty) has the interpretation of the mean firing time; therefore the last relation says nothing else than that the stationary firing rate equals the inverse of the mean firing time.
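A numerical sketch of (18)-(20): taking the time step equal to the age step makes the unit-speed transport an exact shift, the hazard acts as an exponential decay factor, and the rate (19) is fed back at age zero. With a constant hazard S ≡ λ (our test case, not from the text), the scheme should stay near the stationary profile above, with r ≈ r_∞ = λ. Grid and parameter values are illustrative:

```python
import numpy as np

def as_step(n, a, S):
    """One step of the AS equation (18) with dt = da: exact shift by one age
    cell, decay by the hazard, and the firing rate (19) re-injected at age 0."""
    da = a[1] - a[0]
    r = float(np.sum(S(a) * n) * da)          # firing rate, eq. (19)
    n_new = np.empty_like(n)
    n_new[1:] = n[:-1] * np.exp(-S(a[1:]) * da)
    n_new[0] = r                              # boundary condition n(t,0) = r(t)
    return n_new, r

lam = 2.0
a = np.arange(0.0, 5.0, 0.005)
n = lam * np.exp(-lam * a)                    # stationary profile n_inf(a)
n /= n.sum() * (a[1] - a[0])                  # normalize, eq. (21)
for _ in range(200):
    n, r = as_step(n, a, S=lambda s: lam * np.ones_like(s))
```

The total mass drifts only at second order in the step size, the discrete counterpart of the conservation property (21)-(22), and the computed rate stays near λ, consistent with r_∞ being the inverse of the mean firing time.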
Fig. 10  Simulation of the AS model (18)-(20). Here, the black curve represents the simulation of the AS model and the blue curve the simulation of the stochastic process (17). A Gaussian was taken as initial condition; the parameters of the simulation are: v_r = 0., µ = 5, σ = 0.1. The six plots in the figure show the evolution in time of the solution at t = 0, t = 0., t = 0., t = 0., t = 0., t = 7.

In this section, we present our main result, which introduces an analytical link between the two formalisms: there exists an integral transform that maps the solution to the problem (18)-(20) into the solution to (2)-(5).
Proposition 1
Let p be a solution to (2)-(5) and n a solution to (18)-(20), with p^0(v) and n^0(a) the two corresponding initial distributions. Then, if p^0 and n^0 satisfy

    p^0(v) = \int_0^{+\infty} \frac{q(a,v)}{\int_{-\infty}^{1} q(a,w)\, dw}\, n^0(a)\, da,

the following relation holds true:

    p(t,v) = \int_0^{+\infty} \frac{q(a,v)}{\int_{-\infty}^{1} q(a,w)\, dw}\, n(t,a)\, da.    (23)

Here, q(a,v) is the solution to (11)-(14).

Remark 1  The integral transform given by equation (23) can be interpreted with the help of probability theory. Since the integral \int_{-\infty}^{1} q(a,w)\, dw is the survivor function and q(a,v) is the probability density for a neuron to be at age a and at potential v, the kernel of the transform can be interpreted as the probability density for a neuron to be at the potential value v given that it has survived up to age a. The solution n(t,a) denotes the density of the population at time t in state a; the integral over all possible states a of the kernel multiplied by the density n indeed gives the density of the population at time t in the state v.

Remark 2
In Proposition 1, the integral transform is understood in the sense of distributions, as we shall define below.

Let us first show that the integral in (23) is well defined. Denote
$$ N(t, a) = \frac{n(t, a)}{\int_{-\infty}^1 q(a, v)\, dv}. $$
Since
$$ \frac{\partial}{\partial a} \frac{n(t, a)}{\int_{-\infty}^1 q(a, v)\, dv} = \frac{\frac{\partial}{\partial a} n(t, a)}{\int_{-\infty}^1 q(a, v)\, dv} + n(t, a)\, \frac{\partial}{\partial a} \left( \frac{1}{\int_{-\infty}^1 q(a, v)\, dv} \right) = \frac{\frac{\partial}{\partial a} n(t, a)}{\int_{-\infty}^1 q(a, v)\, dv} + \frac{S(a)\, n(t, a)}{\int_{-\infty}^1 q(a, v)\, dv}, $$
where we have used the definition of $S(a)$, one can see that $N$ is a solution to the following system:
$$ \frac{\partial}{\partial t} N(t, a) + \frac{\partial}{\partial a} N(t, a) = 0, $$
$$ N(t, 0) = r(t) = \int_0^\infty ISI(a)\, N(t, a)\, da, $$
$$ N(0, a) = \frac{n_0(a)}{\int_{-\infty}^1 q(a, v)\, dv}. \qquad (24) $$
The system above can be easily integrated:
$$ N(t, a) = \begin{cases} N_0(a - t), & a > t, \\ r(t - a), & t \ge a, \end{cases} $$
and the regularity of the solution is dictated by the regularity of the initial condition. In particular, if $N_0 \in L^1(0, \infty)$ then $N(t, \cdot) \in L^1(0, \infty)$. Also, choosing $n_0$ such that $N_0 \in H^1(0, \infty)$, and as soon as
$$ N_0(0) = \int_0^\infty ISI(a)\, N_0(a)\, da, $$
we obtain that $N(t, \cdot) \in H^1(0, \infty)$.

On the other hand, $q(a, v)$ is the solution to (11)-(14), which is a parabolic equation with zero right-hand side and homogeneous boundary conditions; its exponential convergence towards zero as $a \to \infty$, for a.e. $v \in (-\infty, 1]$, is therefore immediate. We can thus assert that, for every $\varepsilon > 0$, there is an $a_\varepsilon$ such that $q(a, v) < \varepsilon\, c(v)$ for a.e. $v \in (-\infty, 1]$ and all $a > a_\varepsilon$. Since the product of a function from $L^\infty$ with an $L^1$ function is integrable, the transform (23) is well defined.

We also point out that, due to the large-time behavior of $q$, we have
$$ \int_0^\infty S(a)\, da = +\infty, $$
a condition which is known, in the theory of age-structured systems, to imply that $n(t, a)$ tends to zero as $a$ tends to infinity (as can easily be seen by simply integrating (18)). We have chosen to work on $[0, \infty)$ as the age interval, but one could have chosen, exactly as in AS systems theory, to work on a finite interval $[0, A_{max}]$, where $A_{max}$ is the maximal age that can be reached. In that context, the condition that the integral of the mortality rate over the age interval be infinite has the biological interpretation that the density of the population at ages greater than the maximal one is zero, so the meaning of the maximal age is exact. In our context, it would mean that all the neurons have fired before reaching this maximal value. Let us also notice that the system in $n$ has a classical solution on the defined domain: since the mortality rate does not depend explicitly on $t$, the derivatives with respect to $t$ and $a$ exist in the classical sense.

Before starting the proof, let us make some remarks on the solutions to the systems (2)-(5) and (11)-(14). We shall consider weak solutions to both systems in the sense introduced in [8], namely:

Definition 1
A pair of nonnegative functions $(p, r)$ such that
$$ p \in L^\infty\big(0, T; L^1(-\infty, 1)\big), \quad r \in L^1(0, T) $$
is a solution to (2)-(5) if, for any test function $\varphi(t, v) \in L^\infty([0, T] \times (-\infty, 1])$ such that
$$ \frac{\partial^2}{\partial v^2} \varphi(t, v),\ \frac{\partial}{\partial t} \varphi(t, v),\ (\mu - v)\, \frac{\partial}{\partial v} \varphi(t, v) \in L^\infty\big((0, T) \times (-\infty, 1]\big), \quad \varphi(T, v) = 0, $$
the following relation takes place:
$$ \int_0^T \int_{-\infty}^1 p(t, v) \left[ -\frac{\partial \varphi(t, v)}{\partial t} - (\mu - v)\, \frac{\partial \varphi(t, v)}{\partial v} - \sigma\, \frac{\partial^2 \varphi(t, v)}{\partial v^2} \right] dv\, dt = \int_0^T r(t)\, [\varphi(t, v_r) - \varphi(t, 1)]\, dt + \int_{-\infty}^1 p_0(v)\, \varphi(0, v)\, dv. \qquad (25) $$
As the functions of the form $\Phi(v)\Psi(t)$, with $\Phi(v) \in L^\infty(-\infty, 1)$ such that
$$ (\mu - v)\, \Phi'(v),\ \Phi''(v) \in L^\infty(-\infty, 1), $$
and $\Psi(t) \in L^\infty(0, T)$ with $\Psi'(t) \in L^\infty(0, T)$, $\Psi(T) = 0$, form a dense subset of the test functions in Definition 1, we may restrict (25) to
$$ \int_0^T \int_{-\infty}^1 p(t, v) \left[ -\Psi'(t)\Phi(v) - \Psi(t)(\mu - v)\Phi'(v) - \sigma\, \Psi(t)\Phi''(v) \right] dv\, dt = \int_0^T r(t)\, [\Phi(v_r) - \Phi(1)]\, \Psi(t)\, dt + \int_{-\infty}^1 p_0(v)\, \Phi(v)\, \Psi(0)\, dv, \qquad (26) $$
which gives the expression of the distributional derivative with respect to $t$:
$$ \frac{\partial}{\partial t} \int_{-\infty}^1 p(t, v)\, \Phi(v)\, dv = \int_{-\infty}^1 p(t, v) \left[ (\mu - v)\Phi'(v) + \sigma\, \Phi''(v) \right] dv + r(t)\, [\Phi(v_r) - \Phi(1)], \qquad (27) $$
again for all test functions $\Phi$ defined as above.

In the same way, we will use a weak formulation for (11):
$$ \int_0^\infty \int_{-\infty}^1 q(a, v) \left[ -\chi'(a)\Phi(v) - \chi(a)(\mu - v)\Phi'(v) - \chi(a)\, \sigma\, \Phi''(v) \right] dv\, da = \Phi(v_r)\, \chi(0) - \Phi(1) \int_0^\infty \chi(a)\, ISI(a)\, da, \qquad (28) $$
with $\chi(a) \in L^\infty(0, \infty)$, $\chi'(a) \in L^\infty(0, \infty)$ and $\Phi(v)$ as in the previous definition.

We can now proceed with our proof.

Proof
Let us first notice that the AS system in $n(t, a)$ can be formulated equivalently as (24) in terms of the new variable
$$ N(t, a) = \frac{n(t, a)}{\int_{-\infty}^1 q(a, v)\, dv}. $$
Moreover, using this notation, the integral transform reads
$$ p(t, v) = \int_0^\infty q(a, v)\, N(t, a)\, da. \qquad (29) $$
In order to show that the above formula defines a solution to the FP system, let us apply the distributional derivative to (29) and show that it satisfies (27):
$$ \frac{\partial}{\partial t} \int_{-\infty}^1 p(t, v)\, \Phi(v)\, dv = \int_{-\infty}^1 \int_0^\infty q(a, v)\, \frac{\partial}{\partial t} N(t, a)\, \Phi(v)\, da\, dv. $$
Using the fact that $N(t, a)$ is a solution to (24), the last equation becomes
$$ \frac{\partial}{\partial t} \int_{-\infty}^1 p(t, v)\, \Phi(v)\, dv = -\int_{-\infty}^1 \int_0^\infty q(a, v)\, \frac{\partial}{\partial a} N(t, a)\, \Phi(v)\, da\, dv. $$
Let us now turn to the definition of the weak solution $q(a, v)$ given by (28). Noticing that, under proper assumptions on the initial state $N_0$, for each arbitrary but fixed $t$, $N(t, a)$ and $\frac{\partial}{\partial a} N(t, a)$ are in $L^\infty(0, \infty)$, we can apply the definition of $q$ by choosing $\chi(a)$ as $N(t, \cdot)$. Then we get
$$ -\int_{-\infty}^1 \int_0^\infty q(a, v)\, \frac{\partial}{\partial a} N(t, a)\, \Phi(v)\, da\, dv = \int_{-\infty}^1 \int_0^\infty N(t, a)\, q(a, v) \left[ (\mu - v)\Phi'(v) + \sigma\, \Phi''(v) \right] da\, dv + \Phi(v_r)\, N(t, 0) - \Phi(1) \int_0^\infty N(t, a)\, ISI(a)\, da. $$
Since $N(t, 0) = r(t) = \int_0^\infty N(t, a)\, ISI(a)\, da$, the last two terms in the expression above give
$$ r(t)\, [\Phi(v_r) - \Phi(1)], $$
and we have therefore obtained that
$$ \frac{\partial}{\partial t} \int_{-\infty}^1 \int_0^\infty q(a, v)\, N(t, a)\, \Phi(v)\, da\, dv = \int_{-\infty}^1 \int_0^\infty q(a, v)\, N(t, a) \left[ (\mu - v)\Phi'(v) + \sigma\, \Phi''(v) \right] da\, dv + r(t)\, [\Phi(v_r) - \Phi(1)], $$
which is exactly (27) for $p(t, v)$ given by (29). This completes our proof.

The integral transform also gives a corresponding relation between the two stationary states. Due to the form of the age-dependent death rate $S$, the stationary density of the AS system is completely determined by the solution to (11)-(14):
$$ n_\infty(a) = \frac{\int_{-\infty}^1 q(a, v)\, dv}{\int_0^{+\infty} \int_{-\infty}^1 q(a, v)\, dv\, da}. $$
Using now the integral transform, we obtain that the stationary solution to (2)-(5) satisfies
$$ p_\infty(v) = \frac{\int_0^{+\infty} q(a, v)\, da}{\int_0^{+\infty} \int_{-\infty}^1 q(a, v)\, dv\, da}. $$
Since we have the relation
$$ r_\infty = \frac{1}{\int_0^{+\infty} \int_{-\infty}^1 q(a, v)\, dv\, da}, $$
one may check directly that the above formula is indeed a solution to the stationary problem of the potential-structured system, by multiplying (11) by $r_\infty$ and integrating it on $(0, \infty)$.

In the previous section, we have shown that there is an integral transform relating the solution of the time-elapsed model to the solution of the FP equation. Our integral transform goes one way: to a density in age, $n(t, a)$, one associates a corresponding solution $p(t, v)$ to (2)-(5). Let us define the operator
$$ F : L^1(0, +\infty) \to L^1(-\infty, 1), \quad n_\infty \mapsto p_\infty, $$
defined by
$$ p_\infty(v) = \int_0^{+\infty} \frac{q(a, v)}{\int_{-\infty}^1 q(a, w)\, dw}\, n_\infty(a)\, da, $$
where $p_\infty$ and $n_\infty$ are the stationary solutions to the potential- and age-structured systems, respectively. In the previous section, we have shown that, as soon as the initial conditions $p_0$ and $n_0$ are related by $F$, the whole trajectories $p(\cdot, v)$ and $n(\cdot, a)$ are also related by $F$.
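The stationary bookkeeping above can be checked numerically on a grid. In the sketch below, $q(a, v)$ is a synthetic, hypothetical stand-in for the solution of (11)-(14) (any nonnegative density decaying in age serves for this consistency check); it is not the actual kernel of the paper.

```python
import numpy as np

# Grid check of the stationary relations. q(a, v) below is a synthetic,
# hypothetical stand-in for the solution of (11)-(14): any nonnegative
# density decaying in age works for this bookkeeping test.
a = np.linspace(0.0, 8.0, 801)
v = np.linspace(-4.0, 1.0, 501)
da, dv = a[1] - a[0], v[1] - v[0]
A, V = np.meshgrid(a, v, indexing="ij")
q = np.exp(-A) * np.exp(-((V - (1.0 - np.exp(-A))) ** 2))

surv = q.sum(axis=1) * dv       # survivor function: integral of q(a, w) dw
Z = surv.sum() * da             # double integral of q = mean firing time
r_inf = 1.0 / Z                 # stationary firing rate
n_inf = surv / Z                # stationary age density n_inf(a)
p_inf = q.sum(axis=0) * da / Z  # stationary potential density p_inf(v)

# Both densities are normalized, and p_inf coincides with F applied to n_inf:
print(n_inf.sum() * da, p_inf.sum() * dv)                 # both ≈ 1
F_n = ((q / surv[:, None]) * n_inf[:, None]).sum(axis=0) * da
print(np.max(np.abs(F_n - p_inf)))                        # ≈ 0
```

The agreement between `F_n` and `p_inf` holds to rounding error because, on a fixed grid, $F n_\infty$ and $p_\infty$ are two factorizations of the same quadrature of $q$.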
Here we show that, in case the relation between the initial conditions is not satisfied, $F$ transforms the known convergence of $n$ to $n_\infty$ as $t \to \infty$ into the convergence of $p$ to $p_\infty = F n_\infty$. This gives an additional way to study the behavior of $p$ for large time, already studied in [8], [9] by other means.

Proposition 2
For all initial conditions $p_0$ belonging to $L^1(-\infty, 1)$, the solution $p$ of the potential-structured problem (2)-(5) converges to $F n_\infty$, where $n_\infty$ is such that, for any initial condition, the solution $n$ to (18) verifies
$$ \lim_{t \to +\infty} \| n(t, \cdot) - n_\infty(\cdot) \|_{L^1(0, \infty)} = 0. $$

Proof
The model (18)-(20) is a classical McKendrick-von Foerster model, well known in population dynamics, with the particularity that the age-specific mortality and fertility rates are the same. Then, defining the intrinsic reproduction number $R_0$ as
$$ R_0 = \int_0^{+\infty} S(a)\, \exp\left( -\int_0^a S(a')\, da' \right) da, $$
by direct computation we get $R_0 = 1$. It is then well known that, in this case, for $t \to \infty$, $n$ converges to $n_\infty$ satisfying
$$ \frac{d}{da} n_\infty(a) + S(a)\, n_\infty(a) = 0, \quad a > 0, \qquad (30) $$
with
$$ n_\infty(0) = \int_0^\infty S(a)\, n_\infty(a)\, da, \qquad (31) $$
and
$$ \int_0^\infty n_\infty(a)\, da = 1. \qquad (32) $$
Let us now define $\alpha(t, v)$ the solution to
$$ \frac{\partial}{\partial t} \alpha(t, v) + \frac{\partial}{\partial v} \big[ (\mu - v)\, \alpha(t, v) \big] - \sigma\, \frac{\partial^2}{\partial v^2} \alpha(t, v) = \delta(v - v_r)\, r(t), $$
with boundary conditions similar to (3), (4), $r(t)$ being the firing rate relative to (18), and with the initial condition given by
$$ \alpha(0, \cdot) = \alpha_0 = F n_0. $$
Then, thanks to Proposition 1, $\alpha(t, \cdot) = F n(t, \cdot)$. The transform $F$ being continuous from $L^1(0, +\infty)$ to $L^1(-\infty, 1)$,
$$ \alpha(t, v) \to p_\infty(v) = F n_\infty \quad \text{in } L^1(-\infty, 1), \text{ as } t \to \infty, $$
where one can check, as in Proposition 1, that $p_\infty(v)$ satisfies
$$ \frac{\partial}{\partial v} \big[ (\mu - v)\, p_\infty(v) \big] - \sigma\, \frac{\partial^2}{\partial v^2} p_\infty(v) = \delta(v - v_r)\, r_\infty, \qquad (33) $$
with
$$ r_\infty = -\sigma\, \frac{\partial}{\partial v} p_\infty(1), $$
and boundary conditions
$$ \lim_{v \to -\infty} (-\mu + v)\, p_\infty(v) + \sigma\, \frac{\partial}{\partial v} p_\infty(v) = 0, \quad p_\infty(1) = 0. $$
Now let us consider $\beta(t, v) = p(t, v) - \alpha(t, v)$. It satisfies
$$ \frac{\partial}{\partial t} \beta(t, v) + \frac{\partial}{\partial v} \big[ (\mu - v)\, \beta(t, v) \big] - \sigma\, \frac{\partial^2}{\partial v^2} \beta(t, v) = 0, $$
with boundary conditions
$$ \lim_{v \to -\infty} (-\mu + v)\, \beta(t, v) + \sigma\, \frac{\partial}{\partial v} \beta(t, v) = 0, \quad \beta(t, 1) = 0, $$
and with the initial condition given by
$$ \beta(0, v) = p_0(v) - (F n_0)(v). $$
Using a change of variable similar to the one in [9], this equation is transformed into a heat equation on $(-\infty, 0)$ with a zero Dirichlet condition at $0$. It is then clear that $\beta(t, v)$ goes to $0$ as $t \to \infty$ in $L^\infty(-\infty, 1) \cap L^1(-\infty, 1)$.

It has been shown in [38] that the integrate-and-fire model with stochastic input can be mapped approximately onto an escape-rate model. Despite the fact that the two systems reproduce the same statistical activity, no analytical connection between them had been given until now. This paper is intended as a first step in this direction. We have proven here the existence of an exact analytical transform of the solution to the AS system into the solution to the FP system, which is an equivalent description of the NLIF model. Our finding highlights the theoretical relationship between the two stochastic processes and explains why the statistical firing distributions across time are similar for both models; see the red dots in Fig. 11. To our knowledge, such a result had not been proven until now.

As we have pointed out in the introduction, there are several advantages to using the AS formalism, the main one being that it has already been well studied by mathematicians throughout the past decades. Another advantage of the age-structured model concerns its numerical simulation, see Fig. 11: while the NLIF model requires the numerical implementation of the Euler-Maruyama scheme, the escape-rate model can be simulated via a Gillespie-like algorithm. However, the noisy threshold model is probably somewhat more difficult to relate to the underlying biophysics of a cell. On this crucial point, one would prefer the NLIF model, where each parameter can be easily measured by neuroscientists.

We have to stress, though, that the results obtained here have been proven in the case of an unconnected neural population, which is a strong simplifying assumption. The case we considered is known in the framework of renewal systems as a stationary process.
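The two simulation strategies just mentioned can be sketched side by side. Both spike-train generators below are illustrative: the parameters, the time step, and the escape rate $S(\text{age})$ are hypothetical choices, not fitted to each other.

```python
import numpy as np

# Sketch of the two simulation strategies mentioned above. All parameters
# and the escape rate S(age) are illustrative, hypothetical choices;
# the firing threshold is v_th = 1, as throughout the paper.
rng = np.random.default_rng(0)
dt, T = 1e-3, 50.0
mu, sigma, v_r, v_th = 1.5, 0.5, 0.0, 1.0

# 1) Euler-Maruyama for the NLIF Langevin equation with reset.
v, spikes_nlif = v_r, []
for k in range(int(T / dt)):
    v += (mu - v) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    if v >= v_th:
        spikes_nlif.append(k * dt)
        v = v_r

# 2) Gillespie-like simulation of the escape-rate model: at each step,
#    a neuron of age a fires with probability 1 - exp(-S(a) dt).
S = lambda age: 2.0 * age            # hypothetical age-dependent escape rate
age, spikes_esc = 0.0, []
for k in range(int(T / dt)):
    if rng.random() < 1.0 - np.exp(-S(age) * dt):
        spikes_esc.append(k * dt)
        age = 0.0
    else:
        age += dt

print(len(spikes_nlif), len(spikes_esc))   # spike counts of the two models
```

Matching the two statistics, as in [38], would require choosing $S$ from the first-passage statistics of the NLIF process; here the two generators merely illustrate the two algorithmic styles.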
A possible extension of the present transform to the case of interconnected neurons thus remains an open issue to be investigated. But the most important question to be investigated, and which is currently work in progress, is the existence of an inverse transform to the one introduced here. Nevertheless, in the absence of such an inverse transform, we have proven here that the set of solutions to the FP system defined through our transform is an attractor set of the solutions as $t$ tends to infinity.

The integral transform given in Proposition 1 has a probabilistic meaning, and this meaning can be interpreted using Bayes' rule. Indeed, the kernel of our transformation can be read as $P(v \mid a)$, the probability of finding a neuron at potential $v$ knowing its age $a$. The most important feature of this kernel is that it is time-independent: the very nature of the age $a$ contains all the information about time that is needed to properly define the integral transform. On the contrary, to define an inverse transform, one faces the problem of having a kernel that must depend on time; transforming the solution of the FP system into the solution of the AS system is therefore a little trickier. Another important aspect of the AS formalism is that the variable $a$ also carries information about the last firing moment: indeed, attributing an age to a neuron presupposes that the considered neuron has already fired an action potential. From our perspective, the membrane

Fig. 11
In the top panel, we show a schematic representation of the integral transformation between the two mathematical representations of the neural noise. In the bottom panel, we give two simulations illustrating the different mathematical treatments of noise in the neuroscience context. The red dots indicate the firing times of the cell; the time distributions of those dots are similar in the two representations. The parameters of the simulation are $v_r = 0$, $\mu = 5$, $\sigma = 0.1$.

potential variable $v$ does not carry such knowledge. There is therefore a hidden piece of information in the AS model that is not present in the FP approach. It is therefore our belief that, in order to properly define an inverse transform, one would be forced to assume that the FP initial density shares the information about the last firing event. A compatibility condition on the initial data is then required; we therefore believe that $p_0$ and $n_0$ should be related through the same transform presented in this paper.

The easiest way to formalize all this is probably to write down Bayes' rule:
$$ P(a \mid v; t) = P(v \mid a)\, P(a; t) / P(v; t) = P(v \mid a)\, n(t, a) / p(t, v). $$
Note that the time-dependence of $P(a \mid v; t)$ is inescapable. Moreover, the full form of $P(a \mid v; t)$ is not useful within the context of an inverse transformation, since such an inverse transformation is trivial (reducing to $n(t, a)$ on both sides). Given these observations, it might make sense to assume that the system is close to equilibrium. With this maximum-entropy-like assumption, we can write a time-independent version of $P(a \mid v; t)$:
$$ P(a \mid v) \approx P(v \mid a)\, n_\infty(a) / p_\infty(v), $$
where $n_\infty(a)$ and $p_\infty(v)$ are the equilibrium distributions for $n(t, a)$ and $p(t, v)$, respectively. These equilibria can be calculated.
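The equilibrium construction above can be illustrated on a grid. As before, $q(a, v)$ below is a synthetic, hypothetical stand-in for the solution of (11)-(14); the point is only that the kernel $P(a \mid v) \approx P(v \mid a)\, n_\infty(a) / p_\infty(v)$ is a proper probability density in $a$ for every fixed $v$.

```python
import numpy as np

# Grid illustration of the approximate inverse built from Bayes' rule at
# equilibrium. q(a, v) is again a synthetic, hypothetical stand-in for the
# solution of (11)-(14); only nonnegativity and decay in age matter here.
a = np.linspace(0.0, 5.0, 501)
v = np.linspace(-3.0, 1.0, 401)
da, dv = a[1] - a[0], v[1] - v[0]
A, V = np.meshgrid(a, v, indexing="ij")
q = np.exp(-0.7 * A) * np.exp(-2.0 * (V - np.tanh(A)) ** 2)

surv = q.sum(axis=1) * dv              # integral of q(a, w) dw
Z = surv.sum() * da
P_v_given_a = q / surv[:, None]        # kernel of the direct transform
n_inf = surv / Z                       # equilibrium age density
p_inf = q.sum(axis=0) * da / Z         # equilibrium potential density

# Approximate inverse kernel P(a|v) = P(v|a) n_inf(a) / p_inf(v):
# it integrates to one in a for every fixed v.
P_a_given_v = P_v_given_a * n_inf[:, None] / p_inf[None, :]
print(np.max(np.abs(P_a_given_v.sum(axis=0) * da - 1.0)))  # ≈ 0
```

At equilibrium this kernel reduces, column by column, to $q(a, v) / \int_0^\infty q(a', v)\, da'$, which is why the normalization check succeeds to rounding error.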
With such reasoning, a perfect theoretical inverse from the AS model to the FP representation can be found, and an approximate inverse transformation can be constructed using the values of $n$ and $p$ at equilibrium, see Fig. 11.

As we stressed in the introduction, the benefit of such a representation would be to transfer the analysis of special behaviors of the function $p(t, v)$, the solution of a system that raises technical problems, to the study of the AS system, which is obviously simpler. The only significant quantity will then be the age-dependent death rate, which contains all the information needed to give insight into the behavior of the system.

References
1. Abbott, L.F.: Lapicque's introduction of the integrate-and-fire model neuron (1907). Brain Research Bulletin (5), 303–304 (1999)
2. Abbott, L.F., van Vreeswijk, C.: Asynchronous states in networks of pulse-coupled oscillators. Phys. Rev. E, 1483–1490 (1993)
3. Bressloff, P.C., Newby, J.M.: Stochastic models of intracellular transport. Reviews of Modern Physics 85(1) (2013)
4. Brunel, N.: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience, 183–208 (2000)
5. Brunel, N., Hakim, V.: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Computation, 1621–1671 (1999)
6. Brunel, N., van Rossum, M.: Lapicque's 1907 paper: from frogs to integrate-and-fire. Biological Cybernetics, 341–349 (2007)
7. Burkitt, A.N.: A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics, 1–19 (2006)
8. Cáceres, M.J., Carrillo, J.A., Perthame, B.: Analysis of nonlinear noisy integrate & fire neuron models: blow-up and steady states. The Journal of Mathematical Neuroscience (2011)
9. Carrillo, J.A., González, M.d.M., Gualdani, M.P., Schonbek, M.E.: Classical solutions for a nonlinear Fokker-Planck equation arising in computational neuroscience. Communications in Partial Differential Equations 38, 385–409 (2013)
10. Cox, D.R.: Renewal Theory. Methuen, London (1962)
11. Dumont, G., Henry, J.: Population density models of integrate-and-fire neurons with jumps: well-posedness. Journal of Mathematical Biology (3), 453–481 (2013)
12. Dumont, G., Henry, J.: Synchronization of an excitatory integrate-and-fire neural network. Bulletin of Mathematical Biology (4), 629–648 (2013)
13. Dumont, G., Henry, J., Tarniceriu, C.: A density model for a population of theta neurons. Journal of Mathematical Neuroscience (2014)
14. Ermentrout, G.B., Terman, D.: Mathematical Foundations of Neuroscience. Springer (2010)
15. Faisal, A., Selen, L., Wolpert, D.: Noise in the nervous system. Nature Reviews Neuroscience (4), 292–303 (2008)
16. Gardiner, C.W.: Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. Springer (1996)
17. Gerstner, W.: Time structure of the activity in neural network models. Phys. Rev. E, 738–758 (1995)
18. Gerstner, W.: Population dynamics of spiking neurons: fast transients, asynchronous states, and locking. Neural Computation, 43–89 (2000)
19. Gerstner, W., Kistler, W.: Spiking Neuron Models. Cambridge University Press (2002)
20. Gerstner, W., Naud, R.: How good are neuron models? Science (5951), 379–380 (2009)
21. Gillespie, D.T., Seitaridou, E.: Simple Brownian Diffusion: An Introduction to the Standard Theoretical Models. Oxford University Press (2012)
22. Hodgkin, A.L., Huxley, A.F.: A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology (4), 500–544 (1952)
23. Holcman, D., Schuss, Z.: The narrow escape problem. SIAM Review, 213–257 (2014)
24. Izhikevich, E.M.: Dynamical Systems in Neuroscience. The MIT Press (2007)
25. Knight, B.: Dynamics of encoding in neuron populations: some general mathematical features. Neural Computation (3), 473–518 (2000)
26. Knight, B., Manin, D., Sirovich, L.: Dynamical models of interacting neuron populations in visual cortex. Robotics and Cybernetics, 4–8 (1996)
27. Longtin, A.: Stochastic dynamical systems. Scholarpedia (4), 1619 (2010)
28. Longtin, A.: Neuronal noise. Scholarpedia (9), 1618 (2013)
29. Millman, D., Mihalas, S., Kirkwood, A., Niebur, E.: Self-organized criticality occurs in non-conservative neuronal networks during 'up' states. Nature Physics, 801–805 (2010)
30. Newhall, K.A., Kovacic, G., Kramer, P.R., Cai, D.: Cascade-induced synchrony in stochastically-driven neuronal networks. Physical Review (2010)
31. Newhall, K.A., Kovacic, G., Kramer, P.R., Zhou, D., Rangan, A.V., Cai, D.: Dynamics of current-based, Poisson driven, integrate-and-fire neuronal networks. Communications in Mathematical Sciences, 541–600 (2010)
32. Nykamp, D.Q., Tranchina, D.: A population density approach that facilitates large-scale modeling of neural networks: analysis and an application to orientation tuning. Journal of Computational Neuroscience, 19–50 (2000)
33. Omurtag, A., Knight, B., Sirovich, L.: On the simulation of large populations of neurons. Journal of Computational Neuroscience, 51–63 (2000)
34. Ostojic, S.: Interspike interval distributions of spiking neurons driven by fluctuating inputs. Journal of Neurophysiology, 361–373 (2011)
35. Ostojic, S., Brunel, N., Hakim, V.: Synchronization properties of networks of electrically coupled neurons in the presence of noise and heterogeneities. Journal of Computational Neuroscience, 369–392 (2009)
36. Pakdaman, K., Perthame, B., Salort, D.: Dynamics of a structured neuron population. Nonlinearity, 23–55 (2009)
37. Pakdaman, K., Perthame, B., Salort, D.: Relaxation and self-sustained oscillations in the time elapsed neuron network model. SIAM Journal on Applied Mathematics (3), 1260–1279 (2013)
38. Plesser, H.E., Gerstner, W.: Noise in integrate-and-fire neurons: from stochastic input to escape rates. Neural Computation, 367–384 (2000)
39. Schuss, Z., Singer, A., Holcman, D.: The narrow escape problem for diffusion in cellular domains. Proceedings of the National Academy of Sciences, 16098–16103 (2007)
40. Wilbur, W.J., Rinzel, J.: A theoretical basis for large coefficient of variation and bimodality in interspike interval distributions. J. Theor. Biol. 105