Library-based Fast Algorithm for Simulating the Hodgkin-Huxley Neuronal Networks
Zhong-Qi Kyle Tian and Douglas Zhou∗
School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
Abstract.
We present a modified library-based method for simulating Hodgkin-Huxley (HH) neuronal networks. By pre-computing a high-resolution data library covering the interval of an action potential (spike), we can avoid evolving the HH equations during the spike and can therefore use a large time step to raise efficiency. The library method can stably achieve a speedup of up to 10 times compared with the regular Runge-Kutta method while capturing most statistical properties of HH neurons, such as the distribution of spikes, data that are widely used in statistical analyses like transfer entropy and Granger causality. The idea of the library method can be easily and successfully applied to other HH-type models, such as the prominent "regular spiking", "fast spiking", "intrinsically bursting" and "low-threshold spike" types of HH models.
Keywords
Library method; Hodgkin-Huxley networks; Efficiency; Fast algorithm
The Hodgkin-Huxley (HH) model [1–3], originally proposed to describe the behavior of the squid giant axon, is one of the most realistic neuronal models. It is regarded as the foundation for other neuronal models with more complicated behaviors like bursting and adaptation. Because of its complexity, the regular Runge-Kutta scheme is often used in numerical simulations to study the dynamics of this model. However, the HH equations become stiff when the neuron fires a spike. As a consequence, a sufficiently small time step must be taken to avoid stability problems. Yet the model needs to be simulated quite frequently to study its properties with different aims; sometimes a network of hundreds of neurons must be simulated for hours to record the spikes or voltages [4, 5]. Therefore, it is important to find a fast algorithm to simulate the model.

The stiff part during a firing event comes from the activities of the sodium and potassium ion channels and lasts for about 3 ms. We offer a library method to deal with this stiff period; it can use much larger time steps than the regular Runge-Kutta method. The library method treats an HH neuron as an integrate-and-fire (I&F) one. Once an HH neuron's membrane potential reaches the threshold, we stop evolving its HH equations and restart after the stiff part. The time courses of the membrane potential and gating variables during the stiff part can be recovered from a pre-computed high-resolution data library. So once the membrane potential reaches the threshold, we record the neuron's state and determine the restart state by interpolation from the data library. Therefore, we can avoid the stiff part and use a large time step to evolve the HH model. The library method can use time steps one order of magnitude larger than the regular Runge-Kutta method while achieving precise statistical information of the HH model, e.g., the distribution of spikes.

Recently, statistical tools like Granger causality, transfer entropy and maximum entropy have proven effective in probing neural interactions, e.g., detecting causality, identifying effective connectivity and reconstructing firing patterns [4–7]. These works are mainly based on recorded spike trains, which must span sufficiently long periods (∼ minutes) to obtain an accurate distribution. We specially point out that the library method can stably achieve a speedup of up to 10 times compared with the regular Runge-Kutta method, which may make the HH model attractive as a base model for these statistical tools.

We emphasize that the causality of synaptic spikes within a single time step should be taken into account. In general, when we evolve the HH neurons for one time step in the regular Runge-Kutta methods [8, 9], we only use the feedforward input during the time step. Without knowing when and which neurons will fire during the time step, we have to wait until the end of the time step to consider the effect of these possible synaptic spikes. This approach may work well with a sufficiently small time step, for which there are only O(1) spikes per time step and the causal effect is negligible. However, when we use a large time step, the first spikes may influence the network via spike-spike interactions: some extra spikes may appear if the first spikes are excitatory, and some later spikes may vanish otherwise. Therefore, to use a large time step, we should take the spike-spike correction procedure [10] to obtain accurate spike sequences.

We also investigate the validity of the library method in more complicated HH-type models with more voltage-dependent currents, namely the four most prominent types: "fast spiking", "regular spiking", "intrinsically bursting" and "low-threshold spike" [11, 12]. For each type of model, we build and use the data library in the same way as in the standard HH model.

∗ [email protected]
The library method can still achieve accurate statistical information of these HH-type models with remarkable computational speedup.

The dynamics of the i-th neuron of an excitatory Hodgkin-Huxley (HH) neuronal network is governed by

    C dV_i/dt = −(V_i − V_Na) G_Na m_i^3 h_i − (V_i − V_K) G_K n_i^4 − (V_i − V_L) G_L + I_i^input
    dm_i/dt = (1 − m_i) α_m(V_i) − m_i β_m(V_i)
    dh_i/dt = (1 − h_i) α_h(V_i) − h_i β_h(V_i)
    dn_i/dt = (1 − n_i) α_n(V_i) − n_i β_n(V_i)                                  (1)

where V_i is the membrane potential, m_i, h_i and n_i are gating variables, I_i^input is the input current, and α and β are empirical functions of V:

    α_m(V) = 0.1 (V + 40) / (1 − exp(−(V + 40)/10)),    β_m(V) = 4 exp(−(V + 65)/18),
    α_h(V) = 0.07 exp(−(V + 65)/20),                    β_h(V) = 1 / (1 + exp(−(V + 35)/10)),
    α_n(V) = 0.01 (V + 55) / (1 − exp(−(V + 55)/10)),   β_n(V) = 0.125 exp(−(V + 65)/80).    (2)

The other parameters are constants: C = 1 µF·cm⁻² is the membrane capacitance; V_Na = 50 mV, V_K = −77 mV and V_L = −54.387 mV are the reversal potentials; G_Na = 120 mS·cm⁻², G_K = 36 mS·cm⁻² and G_L = 0.3 mS·cm⁻² are the maximum conductances. The input current is given by I_i^input = −G_i(t)(V_i − V_G) with

    dG_i(t)/dt = −G_i(t)/σ_r + H_i(t)                                                   (3)
    dH_i(t)/dt = −H_i(t)/σ_d + f Σ_l δ(t − s_il) + Σ_{j≠i} Σ_l S_ij δ(t − τ_jl)         (4)

where V_G = 0 mV is the reversal potential, G_i(t) is the conductance, H_i(t) is an auxiliary variable describing the rise of G_i(t), σ_r and σ_d are the fast rise and slow decay time scales, respectively, and δ(·) is the Dirac delta function. In this Letter, we use σ_r = 0.5 ms and σ_d = 3.0 ms. The second term in Eq. (4) is the feedforward input with magnitude f; the input times s_il are generated from a Poisson process with rate ν. The third term in Eq. (4) is the synaptic current from synaptic interactions within the network, where S_ij is the coupling strength from the j-th neuron to the i-th neuron and τ_jl is the l-th spike time of the j-th neuron.

When the voltage V_i, evolving continuously according to Eq. (1), reaches the threshold V_th, we say the i-th neuron fires a spike at this time. Instantaneously, all its postsynaptic neurons receive this spike, and their corresponding variable H jumps by an appropriate amount, e.g., S_ji for the j-th neuron. For simplicity, we mainly consider an all-to-all coupled network with S_ij = S/N, where S is the coupling strength and N is the number of neurons. The given methods can easily be extended to other types of networks, e.g., with inhibitory neurons or with random, inhomogeneous connectivity.

We first introduce the most widely used fourth-order Runge-Kutta scheme (RK4) with fixed time step ∆t to evolve the HH model.
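As a concrete reference for the model equations above, a minimal Python sketch of the single-neuron right-hand side of Eq. (1) and one RK4 step might look as follows. The function and variable names are illustrative (not from the authors' code), and the input current is held fixed over the step:

```python
import numpy as np

# Standard HH parameters quoted in the text.
C, V_NA, V_K, V_L = 1.0, 50.0, -77.0, -54.387   # µF/cm^2, mV
G_NA, G_K, G_L = 120.0, 36.0, 0.3               # mS/cm^2

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hh_rhs(state, I_input):
    """Time derivatives of (V, m, h, n) for one HH neuron, Eq. (1)."""
    V, m, h, n = state
    dV = (-(V - V_NA) * G_NA * m**3 * h
          - (V - V_K) * G_K * n**4
          - (V - V_L) * G_L + I_input) / C
    dm = (1 - m) * alpha_m(V) - m * beta_m(V)
    dh = (1 - h) * alpha_h(V) - h * beta_h(V)
    dn = (1 - n) * alpha_n(V) - n * beta_n(V)
    return np.array([dV, dm, dh, dn])

def rk4_step(state, I_input, dt):
    """One classical RK4 step with the input current held constant over dt."""
    k1 = hh_rhs(state, I_input)
    k2 = hh_rhs(state + 0.5 * dt * k1, I_input)
    k3 = hh_rhs(state + 0.5 * dt * k2, I_input)
    k4 = hh_rhs(state + dt * k3, I_input)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

With a sufficiently strong constant input, repeatedly applying `rk4_step` from rest produces the spiking trajectories whose stiff segments motivate the library method below.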
Since the neurons interact with each other through spikes, by influencing the conductances of their postsynaptic neurons, it is important to obtain accurate spike sequences. Several issues then need to be clarified in the simulation, for example, how to determine the spike time accurately [8, 9]. Suppose the i-th neuron fires a spike at time t̃ in [t, t + ∆t]. A naive way to determine the spike time is to set t̃ = t + ∆t, which introduces an error of order ∆t; the whole scheme is then limited to first-order accuracy. To solve this problem, we can use numerical interpolation to determine the spike time more accurately [8, 9]. After evolving the trajectory of the i-th neuron from t to t + ∆t, we can use the obtained values V_i(t), V_i(t + ∆t), dV_i/dt(t), dV_i/dt(t + ∆t) to perform a cubic Hermite interpolation to determine the spike time. The whole scheme then retains fourth-order accuracy.

Another issue is the causality of spike events [10]. A commonly used strategy is to evolve the network (1), say from t to t + ∆t, considering only the feedforward input within the time interval [t, t + ∆t]. If some synaptic spikes are fired during this interval, they are assigned at the end of the time step t + ∆t. This strategy leads to two problems. First, since the synaptic spikes are assigned at the end of the time step rather than at the actual spike times, the accuracy is limited to first order. Second, the first few synaptic spikes may strongly influence the other spiking neurons through spike-spike interactions, especially in simulations with a large time step, so the rest of the synaptic spikes may be incorrect. To solve this problem, we take the spike-spike correction procedure [10], whose strategy is similar to the event-driven approach [13–15]. Suppose we evolve the all-to-all connected network from t to t + ∆t. The details are as follows.
We first preliminarily evolve the neurons in the network independently from t to t + ∆t, considering only the feedforward input. If any neuron fires a spike during this interval, say the i-th neuron, we denote the spike time by t_fire^(i), determined with the cubic Hermite interpolation. If a neuron does not fire during [t, t + ∆t], say again the i-th neuron, we set t_fire^(i) = t + ∆t. After obtaining all the t_fire^(i) values independently, we find the minimum one, without loss of generality t_fire^(1). If t_fire^(1) = t + ∆t, then there are no synaptic spikes in [t, t + ∆t], hence no causality problem, and we accept the preliminary trajectories as the solution. Otherwise, t_fire^(1) is the first synaptic spike in [t, t + ∆t] and there is no causality problem during [t, t_fire^(1)]. So we update all the neurons from t to t_fire^(1), at which time neuron 1 fires. We then start another loop to find the next first synaptic spike time in [t_fire^(1), t + ∆t], and so on until the whole step is finished.

With the cubic Hermite interpolation and the spike-spike correction, we now give the regular RK4 scheme. For ease of illustration, we use the vector

    X_i(t) = (V_i(t), m_i(t), h_i(t), n_i(t), G_i(t), H_i(t))                    (5)

to represent the state of the i-th neuron. The detailed numerical algorithm to evolve the network from t to t + ∆t is given in Algorithm 1.

Algorithm 1:
Regular RK4 algorithm
Input: t, ∆t, {X_i(t)} and {s_il}
Output: {X_i(t + ∆t)} and {τ_il} (if any neuron fired)

Preliminarily evolve the network from t to t + ∆t to find the first synaptic spike:
for i = 1 to N do
    Let M denote the total number of feedforward spikes of the i-th neuron within [t, t + ∆t] and sort them into an increasing list {T_m^sorted}. Extend this notation so that T_0^sorted = t and T_{M+1}^sorted = t + ∆t.
    for m = 1 to M + 1 do
        Advance the equations for the i-th HH neuron from T_{m−1}^sorted to T_m^sorted using the standard RK4 scheme. Then update the conductance variable H_i(T_m^sorted) by adding f (for m ≤ M).
        if V_i(T_{m−1}^sorted) < V_th and V_i(T_m^sorted) ≥ V_th then
            Find the spike time t_fire^(i) by cubic Hermite interpolation using the values V_i(T_{m−1}^sorted), V_i(T_m^sorted), dV_i/dt(T_{m−1}^sorted) and dV_i/dt(T_m^sorted).
        else
            Set t_fire^(i) = t + ∆t.
while the minimum of {t_fire^(i)} < t + ∆t do
    Suppose t_fire^(1) is the minimum.
    Evolve neuron 1 to t_fire^(1) and generate a spike at this moment. Then update all the remaining neurons to t_fire^(1).
    Preliminarily evolve the network from t_fire^(1) to t + ∆t to find the next first synaptic spike.
Accept {X_i(T_{M+1}^sorted)} as the solution {X_i(t + ∆t)}.

When a neuron fires a spike, the HH equations are stiff for a few milliseconds, a period we denote by T_stiff, as shown in Fig. 1. This stiff period requires a sufficiently small time step to avoid stability problems. Therefore, in the regular RK4 scheme, we have to use a relatively small time step, e.g., the widely used ∆t = 1/32 ms. To overcome this limitation on the time step, we propose a modified library method [16]. It is based on the regular RK4 scheme and has the advantage of using a large time step to raise efficiency while retaining comparable accuracy in statistical quantities, e.g., the mean firing rate and the spike pattern (details are given in Section 3).
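The cubic Hermite spike-time determination used above can be sketched as follows. Bisection on the Hermite cubic is one simple way to locate the threshold crossing (the text does not specify the root-finding method, so this choice is an assumption); all names are illustrative:

```python
# Given V and dV/dt at both ends of a step [t, t+dt], build the cubic Hermite
# interpolant of V and solve V(t_spike) = V_th by bisection.
def hermite_spike_time(t, dt, v0, v1, dv0, dv1, v_th):
    def v_interp(s):            # s in [0, 1] parameterizes [t, t + dt]
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*v0 + h10*dt*dv0 + h01*v1 + h11*dt*dv1
    lo, hi = 0.0, 1.0           # v crosses v_th upward inside the step
    for _ in range(60):         # bisection to ~2^-60 of the step width
        mid = 0.5 * (lo + hi)
        if v_interp(mid) < v_th:
            lo = mid
        else:
            hi = mid
    return t + 0.5 * (lo + hi) * dt
```

Because the interpolant matches both endpoint values and derivatives of the RK4 solution, the located spike time is consistent with the fourth-order accuracy of the scheme.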
The library method depends on the length of the stiff period, which is set empirically to a value slightly above 3 ms, long enough to cover the stiff parts of general firing events.
Figure 1: A typical firing event. (a) The trajectory of the voltage V (mV); the black line indicates the threshold V_th and the red dotted lines delimit the stiff period T_stiff. (b) The trajectories of the gating variables m, h, n. (c) The trajectory of dV/dt. (d) The trajectory of the input current I_i^input (µA·cm⁻²). (e) The trajectory of the intrinsic current (µA·cm⁻²), i.e., the sum of the ionic and leakage currents.

The library method [16] treats the HH neuron's firing event like that of an I&F neuron. Once a neuron's membrane potential reaches the threshold V_th, its voltage rises to its peak value very quickly because of a large influx of sodium current, then drops back down to its lowest point through the potassium current. This process is the action potential and lasts for about 3 ms, which is exactly the stiff period, as shown in Fig. 1(a). If we have a pre-computed high-resolution data library of V, m, h, n, we can recover their time courses. In other words, once a neuron's membrane potential reaches the threshold V_th, we stop evolving V, m, h, n for the following stiff period T_stiff and restart with values interpolated from the library. Thus the stiff part is avoided and we can use a large time step to evolve the model, raising efficiency.

Now we describe how to build the library in detail. Once a neuron's membrane potential reaches the threshold V_th, we record the values of I^input, m, h, n and denote them by I^th, m^th, h^th, n^th, respectively. If we knew the exact trajectory of I^input for the following stiff period T_stiff, we could use a sufficiently small time step to evolve Eq. (1) over T_stiff with initial values V_th, m^th, h^th, n^th to obtain high-resolution trajectories of V, m, h, n.
We denote the values obtained at the end of this evolution by V^re, m^re, h^re, n^re, where the superscript "re" stands for the reset value. However, it is impossible to obtain the exact trajectory of I^input without knowing the feedforward and synaptic spike information. As shown in Fig. 1(d, e), I^input varies during the stiff period with a peak value of order O(5) µA·cm⁻², while the intrinsic current, the sum of the ionic and leakage currents, is about 30 µA·cm⁻² at the spike time, quickly rises to a peak value of about 250 µA·cm⁻², and then stays at O(10) µA·cm⁻² for the remainder of the stiff period. The intrinsic current is therefore dominant during the stiff period. With this observation, we keep I^input constant throughout the whole stiff period. We emphasize that this is the only assumption made in the library method. Then, given a tuple I^th, m^th, h^th, n^th, we can obtain a tuple V^re, m^re, h^re, n^re.

Figure 2: The ranges of the values I^th, m^th, h^th, n^th (at threshold V_th = −50 mV).

Before building the library, we choose N_I, N_m, N_h, N_n different values of I^th, m^th, h^th, n^th, respectively, equally distributed over their ranges as shown in Fig. 2. For each tuple I^th, m^th, h^th, n^th, we evolve Eq. (1) over a time interval of T_stiff with a sufficiently small time step to obtain V^re, m^re, h^re, n^re. Note that I^input is held constant at I^input = I^th throughout the whole interval T_stiff. The data library then has total size N_I N_m N_h N_n. Larger values of N_I, N_m, N_h, N_n increase the accuracy of the library but also greatly increase its size. In our simulations, we take N_I = 21, N_m = 16, N_h = 21, N_n = 16, which makes the library sufficiently accurate, as shown in Section 3.
The library occupies only 6.89 megabytes in binary form, quite small for today's computers. One key point in building the library is choosing a proper threshold value V_th. The threshold should be low enough that the HH equations are not yet stiff there, allowing a large time step while satisfying the stability requirement. On the other hand, it should be high enough that a neuron will definitely fire a spike after its membrane potential reaches it. In this Letter, we take V_th = −50 mV.

We now illustrate how to use the library. Once a neuron's membrane potential reaches the threshold, we first record the values I^th, m^th, h^th, n^th, then stop evolving its HH equations for V, m, h, n for the following T_stiff ms and restart with the values V^re, m^re, h^re, n^re linearly interpolated from the pre-computed high-resolution data library. For ease of notation, suppose I^th falls between the two neighboring grid points I_0^th and I_1^th in the library; similarly find the bracketing points m_0^th and m_1^th, h_0^th and h_1^th, n_0^th and n_1^th. We thus need 16 tuples of values from the library to perform the multilinear interpolation. The interpolated reset value V^re is

    V^re = Σ_{i,j,k,l ∈ {0,1}} V^re(I_i^th, m_j^th, h_k^th, n_l^th) · [(I^th − I_{1−i}^th)/(I_i^th − I_{1−i}^th)] · [(m^th − m_{1−j}^th)/(m_j^th − m_{1−j}^th)] · [(h^th − h_{1−k}^th)/(h_k^th − h_{1−k}^th)] · [(n^th − n_{1−l}^th)/(n_l^th − n_{1−l}^th)]    (6)

and the same formula applies to m^re, h^re, n^re. As for the variables G and H, they are not affected by V, m, h, n and are evolved as usual. With the high-resolution library in hand, we can use a large time step to evolve the HH neuronal network. The detailed numerical algorithm is given in Algorithm 2.
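The multilinear interpolation of Eq. (6) can be sketched as follows; the array layout and function names are illustrative assumptions, with the library stored as a 5-D array of reset states:

```python
import numpy as np

def interp_reset(lib, grids, query):
    """Eq. (6): weighted sum over the 16 surrounding grid points.

    lib:   array of shape [N_I, N_m, N_h, N_n, 4] of reset states
    grids: four sorted 1-D axes for I^th, m^th, h^th, n^th
    query: the recorded tuple (I^th, m^th, h^th, n^th)
    """
    idx, w = [], []
    for g, q in zip(grids, query):
        j = int(np.clip(np.searchsorted(g, q) - 1, 0, len(g) - 2))
        idx.append(j)
        w.append((q - g[j]) / (g[j + 1] - g[j]))   # fraction toward upper point
    out = np.zeros(lib.shape[-1])
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                for dl in (0, 1):
                    weight = ((w[0] if di else 1 - w[0]) *
                              (w[1] if dj else 1 - w[1]) *
                              (w[2] if dk else 1 - w[2]) *
                              (w[3] if dl else 1 - w[3]))
                    out += weight * lib[idx[0]+di, idx[1]+dj, idx[2]+dk, idx[3]+dl]
    return out
```

Each weight is exactly the product of normalized distances in Eq. (6), so the interpolant reproduces any function that is linear in each coordinate.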
Algorithm 2:
Library algorithm
Input: t, ∆t, {X_i(t)} and {s_il}
Output: {X_i(t + ∆t)} and {τ_il} (if any neuron fired)

Preliminarily evolve the network from t to t + ∆t to find the first synaptic spike:
for i = 1 to N do
    Let M denote the total number of feedforward spikes of the i-th neuron within [t, t + ∆t] and sort them into an increasing list {T_m^sorted}. Extend this notation so that T_0^sorted = t and T_{M+1}^sorted = t + ∆t.
    for m = 1 to M + 1 do
        Advance the equations for the i-th HH neuron from T_{m−1}^sorted to T_m^sorted using the standard RK4 scheme. Then update the conductance variable H_i(T_m^sorted) by adding f (for m ≤ M).
        if V_i(T_{m−1}^sorted) < V_th and V_i(T_m^sorted) ≥ V_th then
            Find the spike time t_fire^(i) by cubic Hermite interpolation using the values V_i(T_{m−1}^sorted), V_i(T_m^sorted), dV_i/dt(T_{m−1}^sorted) and dV_i/dt(T_m^sorted).
        else
            Set t_fire^(i) = t + ∆t.
while the minimum of {t_fire^(i)} < t + ∆t do
    Suppose t_fire^(1) is the minimum.
    Evolve neuron 1 to t_fire^(1) and generate a spike at this moment.
    Record the values I^th, m^th, h^th, n^th, then perform a linear interpolation from the library to get V^re, m^re, h^re, n^re.
    Meanwhile, stop evolving V, m, h, n of neuron 1 for the next T_stiff ms, but still evolve its conductance variables G, H as usual.
    Update all the remaining neurons.
    Preliminarily evolve the network from t_fire^(1) to t + ∆t to find the next first synaptic spike.
Accept {X_i(T_{M+1}^sorted)} as the solution {X_i(t + ∆t)}.

The library method just introduced is intuitive: it regards the reset values
V, m, h, n as a function of the input values I^th, m^th, h^th, n^th. When building the library, we require that the ranges of I^th, m^th, h^th, n^th cover almost all cases arising in general firing events. Therefore, given enough grid values of I^th, m^th, h^th, n^th, the library method can predict the reset values quite accurately. We now consider the accuracy of the library method under the ideal condition of a single HH neuron driven by a constant input I^input, i.e., when the assumption made in building the library is satisfied exactly.

The HH neuron shows type II behavior: only when the input current is larger than a critical value (about 6 µA·cm⁻²) does the neuron fire regularly and periodically [17, 18]. Around this critical value the HH model jumps suddenly from zero firing rate to a regular nonzero firing rate because of a subcritical Hopf bifurcation [18], as shown in Fig. 3(a). Below the critical value, some spikes may appear before the neuron converges to the stable zero-firing-rate state. The number of spikes during this transient period depends on how close the constant input is to the critical value, as shown in Fig. 3(b). Because our library is built on the full information of I^th, m^th, h^th, n^th, the library method can indeed capture the Hopf bifurcation and the transient states. We point out that the library method misses one spike at an input value near the critical current. This is because the library we use is relatively coarse, with N_I = 21, N_m = 16, N_h = 21, N_n = 16.
Figure 3: (a) The firing rate (Hz) as a function of the constant input I^input (µA·cm⁻²). (b) The number of spikes during the transient period, starting from a fixed initial voltage. The blue and red curves in (a) and (b) indicate the regular method with a small time step and the library method with a large time step, respectively.

We now check whether the library method can still capture this Hopf phenomenon in a large-scale network. We use an all-to-all connected network of 100 excitatory neurons. Each neuron is driven by a constant input I^input drawn from a uniform distribution with mean value around the critical value. As shown in Fig. 4, the library method still captures the spike events well, with only a few spikes missed for the same coarseness reason.
Figure 4: Raster plots of spike events in an all-to-all connected network of 100 excitatory neurons with coupling strength (a) S = 0.5 mS·cm⁻² and (b) S = 2 mS·cm⁻². Each neuron is driven by a constant input I^input drawn from a uniform distribution around the critical value. Time steps and colors are the same as in Fig. 3.

Results
In this section, we show that the library method with large time steps can capture the statistical properties of the HH network. For simplicity, we present numerical results mainly for an all-to-all connected network of 100 excitatory neurons with fixed feedforward Poisson input rate ν = 100 Hz and magnitude f. The coupling strength S is then the only remaining variable. Other types of HH networks and other dynamical regimes can be treated in the same way, with similar results.

We first study the dynamical properties of the system by computing the largest Lyapunov exponent, one of the most important tools for characterizing chaotic dynamics [19]. The spectrum of Lyapunov exponents measures the average rate of divergence or convergence of a reference orbit and an initially perturbed orbit [20–22]. If the largest Lyapunov exponent is positive, the reference and perturbed orbits diverge exponentially and the dynamics is chaotic; otherwise, the dynamics is non-chaotic. When calculating the largest Lyapunov exponent, denoted by λ, we use X = [X_1, X_2, ..., X_N] to represent all the variables of the neurons in the HH network. Denoting the reference and perturbed trajectories by X(t) and X̃(t), respectively,

    λ = lim_{t→∞} lim_{ǫ→0} (1/t) ln( ||X̃(t) − X(t)|| / ||ǫ|| )                  (7)

where ǫ is the initial separation. However, we cannot use Eq. (7) to compute λ directly, because for a chaotic system the separation ||X̃(t) − X(t)|| does not remain small as t → ∞ and the computation becomes numerically ill-conditioned. The standard algorithm to compute the largest Lyapunov exponent, based on repeated renormalization of the separation, can be found in [22–24]. The regular method can use these algorithms directly. However, for the library method, the information on V, m, h, n is blank during the stiff period and these algorithms do not work.
The extended algorithm in [25] solves this problem, and we use it in this Letter. As shown in Fig. 5(a), we compute the largest Lyapunov exponent as a function of the coupling strength S from 0 to 2 mS·cm⁻² by the regular and library methods, respectively. The total run time T is 60 seconds, which is sufficiently long to obtain convergent results. The library method with a large time step obtains an accurate largest Lyapunov exponent compared with the regular method with a small time step. The results show three typical dynamical regimes: the system is chaotic over an intermediate range of S and non-chaotic for small and large S, depending on whether λ is positive or negative.

As shown in Fig. 5(b), we compute the mean firing rates, denoted by R, obtained by the regular and library methods to further quantify the accuracy of the library method. We also give the relative error in the mean firing rate, defined by

    E_R = |R_library − R_regular| / R_regular                                    (8)

As shown in Fig. 5(c), the library method achieves at least two digits of accuracy using large time steps.
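The renormalization idea behind the standard largest-Lyapunov-exponent algorithms cited above can be sketched compactly. The demonstration below uses the Hénon map as a small stand-in test system (not the HH network); the function names are illustrative:

```python
import numpy as np

def largest_lyapunov(step, x0, eps=1e-8, n_steps=20000, burn_in=100):
    """Benettin-style estimate of the largest Lyapunov exponent of a map.

    Evolve a reference and a perturbed trajectory, rescale the separation
    back to eps after every step, and average the log growth factors.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(burn_in):            # relax onto the attractor first
        x = step(x)
    y = x + eps / np.sqrt(x.size)       # initial separation of norm ~ eps
    total = 0.0
    for _ in range(n_steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        total += np.log(d / eps)        # log of the one-step growth factor
        y = x + eps * (y - x) / d       # renormalize separation back to eps
    return total / n_steps

# Henon map, whose largest exponent is known to be about 0.42 per iteration.
henon = lambda v: np.array([1.0 - 1.4 * v[0]**2 + v[1], 0.3 * v[0]])
lam = largest_lyapunov(henon, np.array([0.1, 0.1]))
```

For a continuous-time system like the HH network, `step` would be one integration step and the result is divided by the step size; the extension of [25] additionally handles the blank stiff periods of the library method.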
Figure 5: (a) The largest Lyapunov exponent of the HH network versus the coupling strength S. (b) The mean firing rate (Hz) versus the coupling strength S. (c) The relative error in the mean firing rate between the library and regular methods. The blue and red curves in (a) and (b) indicate the regular method with a small time step and the library method with a large time step, respectively. The total run time is 60 seconds to obtain convergent results.

From the calculation of the largest Lyapunov exponent, we know that there are three typical dynamical regimes in the HH network. As shown in Fig. 6, these are an asynchronous regime at small S, a chaotic regime at intermediate S and a synchronous regime at large S. Hence, we choose three typical coupling strengths, S = 0.3, 0.7 and 1 mS·cm⁻², to represent these dynamical regimes in the following numerical tests.
Figure 6: Raster plots of firing events in the three typical dynamical regimes, with coupling strength (a) S = 0.3 mS·cm⁻², (b) S = 0.7 mS·cm⁻² and (c) S = 1 mS·cm⁻². Since all three methods give similar rasters, we only show the result obtained by the regular method.

We now study the statistical accuracy of the voltage. We first compute the power spectrum of the voltage trace, averaged over all neurons, as shown in Fig. 7. The library method with large time steps captures the frequencies as well as the regular method, i.e., it captures the first-order information of the voltage. We point out that computing the power spectrum requires the voltage trace V, which is blank during the stiff period in the library method. To solve this problem, we record the trace of the voltage V(t; I^th, m^th, h^th, n^th) when building the library, where I^th, m^th, h^th, n^th are the corresponding initial values; in this Letter, we record at a 2 kHz sampling rate. We can then linearly interpolate to estimate the voltage during the stiff period. Note that recording the voltage trace is needed only for this analysis and is not necessary for evolving the HH model with the library method.
Figure 7: The averaged power spectrum of the voltage trace in the three typical dynamical regimes, with coupling strength (a) S = 0.3 mS·cm⁻², (b) S = 0.7 mS·cm⁻² and (c) S = 1 mS·cm⁻². We choose 2 kHz as the sampling rate. The time steps and colors are the same as in Fig. 5.

We further demonstrate that the library method can capture higher-order information of the voltage, e.g., second- and third-order statistics. Given the voltage traces of two neurons, we can compute their cross power spectral density (cpsd) and its norm; in this Letter we use the cpsd function in Matlab. Figs. 8(a-c) show the norm of the cpsd between two randomly chosen neurons for coupling strengths S = 0.3, 0.7 and 1 mS·cm⁻², while Figs. 8(d-f) show the corresponding results among three neurons. The library method with large time steps thus captures the second- and third-order information of the voltage as well as the regular method.
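The cross power spectral density used here (computed in the paper with Matlab's cpsd) can be sketched with a Welch-style estimator; segment length and windowing below are illustrative choices, not the authors' settings:

```python
import numpy as np

def cpsd(x, y, fs=2000.0, nseg=256):
    """Welch-style cross power spectral density of two equal-length signals.

    Splits x and y into non-overlapping windowed segments, averages the
    segment cross-spectra conj(FFT(x)) * FFT(y), and returns (freqs, Pxy).
    """
    win = np.hanning(nseg)
    norm = fs * np.sum(win**2)              # standard PSD normalization
    n_blocks = len(x) // nseg
    acc = np.zeros(nseg // 2 + 1, dtype=complex)
    for b in range(n_blocks):
        xs = win * (x[b*nseg:(b+1)*nseg] - np.mean(x[b*nseg:(b+1)*nseg]))
        ys = win * (y[b*nseg:(b+1)*nseg] - np.mean(y[b*nseg:(b+1)*nseg]))
        acc += np.conj(np.fft.rfft(xs)) * np.fft.rfft(ys) / norm
    freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
    return freqs, acc / n_blocks
```

With x = y this reduces to the ordinary power spectrum of Fig. 7; for the third-order comparison one of the traces is additionally time-shifted by τ before calling the estimator.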
Figure 8: Top panel : The L -norm of cpsd between two randomly chosen neurons with coupling strength S = 0 . , . and mS · cm − . Bottom panel : The L -norm of cpsd among three randomly chosenneurons with coupling strength S = 0 . , . and mS · cm − . For example, given the traces of voltage V ( t ) , V ( t ) and V ( t ) , we use the information of V ( t ) , V ( t + τ ) and V ( t ) to compute cpsd. The timesteps and colors used are the same as the one in Fig 5. Thanks to the advances in the spike train measurement like multiple electrode recording techniques,neuroscientists can obtain large amounts of spike data much easier than the voltage. The data can be used11n statistical tools, e.g. , transfer entropy, maximum entropy and Granger causality [5, 7, 26–28]. Basedon the spike trains, these tools can not only solve the directed causal information and network inferenceproblem [4, 5] but also probe the structure of fire patterns [7, 27, 29]. Another advantage over the voltageis that the spike train data is binary, hence the state space is very small. For example, considering thevector x (10) = ( x , x , ..., x ) from the spike data, the state space is only = 1024 . So it is mucheasier applied to these tools while achieving faster calculation. Therefore, it is necessary to check if thelibrary method can still obtain accurate spike trains.As shown in Fig 5, we demonstrate the accuracy of mean firing rate by the library method. However,this is the first order information of spike trains and the at least two digits of accuracy is still very coarse.We now further demonstrate the statistical accuracy of spikes. When evolving the HH model (1), werandomly choose 10 neurons from the network, record their spike times and transform them into binarytime series with a time bin 10 ms. We set the value 1 if there is a spike event during the time bin and0 otherwise. Therefore, the 10 neurons can make up a 10-dimensional vector with total 1024 kinds ofcombinations. 
After evolving for a sufficiently long run time, we can obtain a quite accurate distribution of the 10-dimensional vector. As shown in Fig 9, we compare the probabilities of the 10-dimensional vectors computed by the library and regular methods. Each star indicates the probability of the same vector (fire pattern) computed by the two methods. If the stars lie on the diagonal line, the library method captures essentially the same distribution as the regular method.
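Whether the two empirical pattern distributions agree can be assessed with a chi-square two-sample test. A sketch using scipy's chi2_contingency on the stacked count table (the helper and the surrogate multinomial data are ours; the authors' exact test construction is not given):

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi2_two_sample(counts_a, counts_b):
    """Chi-square two-sample test on two fire-pattern histograms.
    Patterns unobserved in both samples are dropped so that all
    expected counts stay positive."""
    table = np.vstack([counts_a, counts_b])
    table = table[:, table.sum(axis=0) > 0]
    chi2, p_value, dof, expected = chi2_contingency(table)
    return chi2, p_value

# two surrogate count vectors over the 1024 fire patterns,
# drawn from the same underlying distribution
rng = np.random.default_rng(1)
base = rng.dirichlet(np.ones(1024))
counts_a = rng.multinomial(50_000, base)
counts_b = rng.multinomial(50_000, base)
chi2, p_value = chi2_two_sample(counts_a, counts_b)
```

A large p-value means the two sets of spike trains cannot be statistically distinguished, which is the criterion used below.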
Figure 9: The comparison of the probabilities of the 10-dimensional vectors by the regular and library methods. The stars indicate the probability of the same vector computed by the two methods. Stars on the diagonal line y = x mean the probabilities are the same, i.e., the compared method performs as well as the regular method. Time steps are the same as those in Fig 5. The total run time is long enough to obtain a precise distribution.

We also perform chi-square two-sample tests to compare the distributions obtained by the library and regular methods. The test statistics for all coupling strengths have large p-values. Therefore, the distributions from the regular and library methods cannot be distinguished in a statistical sense, i.e., the library method captures the fire patterns very well. Since the statistical tools need only the distribution of fire patterns, the spike trains computed by the library method with large time steps can be used in applications.

We now demonstrate the computational efficiency of the library method by comparing the time each method costs for the same total run time. To reduce system error from the computer, we use a sufficiently long run time of 50 seconds. In the comparison, we fix a small time step in the regular method and vary the time step in the library method up to a maximum value of ∆t = 0.354 ms. As shown in Fig 10(a), the library method can overall stably obtain a maximum computational speedup of around 6 times compared with the regular method. We find that the computational speedup does not increase linearly with the time step. This is because when we use a large time step, the spike-spike correction procedure requires more computation. Besides, once a neuron fires a spike, the library method has to call the library once and evolve the parameters G and H during the stiff period as usual, which also costs some time. Therefore, the computational speedup does not increase straightforwardly with the time step.
Figure 10: (a) The efficiency of the library method versus the coupling strength S with five time steps (maximum ∆t = 0.354 ms) from top to bottom. Total run time is 50 seconds. (b) Mean firing rate (Hz) and (c) efficiency in a sparse network of 100 excitatory neurons randomly connected with probability 15%. The coupling strength S_ij = S/N if there is a connection from the j-th neuron to the i-th neuron. Other parameters and time steps are the same as those in (a).

In the all-to-all connected network, once a neuron fires a spike, all the other neurons must update their states and preliminarily evolve the HH equations (1) to find the next spike time. We should point out that real neuronal networks are often sparsely connected [30–33], so only a small portion of the remaining neurons need to update their states when there is a spike event. Therefore, the efficiency shown in Fig 10(a) is underestimated. We consider a sparse network with 100 excitatory neurons randomly connected with probability 15%. As shown in Fig 10(c), the library method can achieve at most 10 times speedup with the maximum time step ∆t = 0.354 ms.

The results presented in section 3 are mainly based on an all-to-all coupled network. We now consider networks with more complicated structure, e.g., letting the firing rates and coupling strengths of the model neurons follow a distribution [31, 34] rather than a homogeneous value of
S/N or a nearly fixed firing rate, and further check the validity of the library method in these more complicated situations.

We first use a network of 100 excitatory neurons, randomly coupled with probability 15% as shown in Fig 11(a). The coupling strength of each connection follows a Uniform distribution U(0, 0.04) mS·cm^−2 and the Poisson input rate to each neuron also follows a Uniform distribution. The firing rate of each neuron then has a large range from 0 to tens of Hz, as shown in Fig 11(b). We check the statistical accuracy of both the voltage and the fire patterns in Fig 11(c-f). The test statistics of the chi-square two-sample tests always have large p-values, unless stated otherwise. The library method with a large time step ∆t = 0.354 ms can still achieve good performance compared with the regular method.

Figure 11: The performance of the library method in a network of 100 excitatory neurons, randomly coupled with probability 15%. The coupling strength S_ij ∼ U(0, 0.04) mS·cm^−2 and the Poisson input rate ν_i follows a Uniform distribution. The blue and red colors indicate the results from the regular method and the library method with ∆t = 0.354 ms, respectively. (a) The synaptic adjacency matrix, with black indicating a connection and white otherwise. (b) Firing rate of each neuron. (c) The comparison of the probabilities of the 10-dimensional vectors. (d)-(f) The L-norm of the cpsd between two neurons with different coupling strengths.

According to data from living cortical neuronal networks, the coupling strength follows a Log-normal distribution [31].
We then change the coupling strength from the Uniform distribution to a Log-normal distribution while keeping the mean value of 0.02 mS·cm^−2. As shown in Fig 12, the library method can still achieve good performance.

Figure 12: The performance of the library method in the same network as in Fig 11 but with coupling strength S_ij following a Log-normal distribution with mean value 0.02 mS·cm^−2. The colors and time steps are the same as those in Fig 11. (a) The distribution of the coupling strength. (b) Firing rate of each neuron. (c) The comparison of the probabilities of the 10-dimensional vectors. (d)-(f) The L-norm of the cpsd between two neurons with different coupling strengths.

In this part, we apply the library method to other types of HH models for the four most prominent classes of neurons: "regular spiking" (RS), "fast spiking" (FS), "intrinsically bursting" (IB) and "low-threshold spike" (LTS) cells, classified according to their patterns of spiking and bursting in intracellular recordings [35]. These HH-type models are obtained by fitting to different experimental data, such as rat somatosensory cortex in vitro, ferret visual cortex in vitro, cat visual cortex in vivo and cat association cortex in vivo.
All four extended HH-type models can be described by the following equation [12]:

C_m dV_i/dt = −g_leak (V_i − E_leak) − I_i^Na − I_i^Kd − I_i^M − I_i^T − I_i^L + I_i^input,   (9)

where V_i is the membrane potential of the i-th neuron; I_i^Na, I_i^Kd, I_i^M, I_i^T and I_i^L are voltage-dependent currents; I_i^input is the input current; C_m = 1 µF·cm^−2 is the specific capacitance of the membrane; and g_leak and E_leak are the resting membrane conductance and reversal potential, respectively. Detailed functions and parameters for the four extended HH-type models [12] are given in the Appendix.

RS neurons are the most typical neurons in neocortex and are in general excitatory. When injected with a constant depolarizing current, these neurons first fire with a short inter-spike interval (ISI); the ISI then increases and tends to a stable value, as shown in Fig 13(a). This behavior, called spike-frequency adaptation, is one of their main features. We again use an excitatory RS-type network, with coupling strengths following a Log-normal distribution and Poisson input rates following a Uniform distribution, to illustrate the validity of the library method. The statistical accuracy of the spikes is given in Fig 13(e, i).

We should point out that an extended HH-type model contains more voltage-dependent currents; e.g., the IB-type model requires 7 parameters, I_th, m_th, h_th, n_th, p_th, q_th, r_th, when building the library. As a consequence, the voltage trace during the stiff period would occupy a huge amount of storage space (>10 gigabytes in binary form). Therefore, we do not record the voltage trace when building the library and do not present the accuracy of the voltage.

FS neurons are another kind of typical neuron in cortex and are in general inhibitory. The main feature of an FS neuron is that it can fire high-frequency spikes with little or no adaptation, as shown in Fig 13(b).
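Returning to Eq. (9): the regular method advances the membrane potential with a classical Runge-Kutta step of its right-hand side. The following is a simplified sketch, not the paper's implementation: the gating dynamics are omitted and the ionic currents enter as a lump term held fixed over the step, and the g_leak and E_leak values are placeholders rather than the per-class values from the Appendix:

```python
# Illustrative constants (placeholders, not the paper's fitted values)
C_M = 1.0        # specific capacitance, uF/cm^2
G_LEAK = 0.1     # mS/cm^2
E_LEAK = -70.0   # mV

def dvdt(v, i_ionic, i_input):
    """Right-hand side of Eq. (9): leak current plus lumped
    voltage-dependent and input currents, over the capacitance."""
    return (-G_LEAK * (v - E_LEAK) - i_ionic + i_input) / C_M

def rk4_step(v, dt, i_ionic, i_input):
    """One classical fourth-order Runge-Kutta step for the membrane
    potential, holding the currents fixed over the step."""
    k1 = dvdt(v, i_ionic, i_input)
    k2 = dvdt(v + 0.5 * dt * k1, i_ionic, i_input)
    k3 = dvdt(v + 0.5 * dt * k2, i_ionic, i_input)
    k4 = dvdt(v + dt * k3, i_ionic, i_input)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# relax toward the depolarized rest state under a constant input
v = -70.0
for _ in range(1000):
    v = rk4_step(v, 0.03125, i_ionic=0.0, i_input=1.0)
```

In the library method, this stepping is suspended during the ~3 ms stiff period of a spike and resumed from the state interpolated out of the pre-computed library.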
We use an inhibitory FS-type network to show the validity of the library method in Fig 13(f, j). IB neurons are usually excitatory and can produce bursts of spikes, while LTS neurons are usually inhibitory and can fire high-frequency spikes with spike-frequency adaptation. The validity of the library method for the IB and LTS networks is shown in the third and last columns of Fig 13, respectively.
Figure 13: The validity of the library method for the extended HH-type models. Top panel: Voltage traces of a single neuron driven by a constant depolarizing input for the RS, FS, IB and LTS neurons, respectively. Middle panel: Firing rate of each neuron of the RS, FS, IB and LTS networks. Bottom panel: The comparison of the probabilities of the 10-dimensional vectors for the RS, FS, IB and LTS networks. The colors and time steps are the same as those in Fig 11. All four networks have 100 neurons (the RS and IB networks are excitatory while the FS and LTS networks are inhibitory), coupled as in Fig 11(a). For each class the coupling strength S_ij follows a Log-normal distribution, and the Poisson input rate ν_i follows a Uniform distribution with lower bound 0 Hz (200 Hz for the FS network).
In conclusion, we have presented a modified library method to handle the stiff part of the firing event when evolving the HH model. The library method can enlarge the time step (maximum time step 0.354 ms) to reduce the computational cost while achieving highly accurate statistical information of the HH neurons. It is worth pointing out that the library method with large time steps captures the fire patterns, i.e., the distribution of the spikes, as well as the regular method. This is especially attractive for statistical analyses like transfer entropy and maximum entropy, which require a sufficiently long run time (on the order of minutes) to obtain a precise distribution of spikes. However, due to the extra error introduced by calling the library, the method cannot attain numerical convergence. Nevertheless, it still retains most of the properties of HH neurons, such as the chaotic dynamics observed in the HH model by computing the largest Lyapunov exponent.

We emphasize that the library method is very attractive, with a stable maximum speedup of about 10 times in a sparse network. This remarkable speedup holds regardless of the size of the network or the structure of its connectivity.

We can successfully extend the idea of the library method to more complicated HH-type models with bursting and adaptation behavior. The way to build and use the library is the same as for the standard HH model, although there are more voltage-dependent currents. The library method can also capture most of the statistical properties of these HH-type neurons with a large speedup.

Finally, we emphasize that the spike-spike correction procedure [10] is necessary in both methods, especially when using a large time step in the library method. This procedure ensures that the estimated spiking sequences are accurate; even in a strongly coupled network, the synaptic interactions are treated correctly and do not impair the accuracy of the library method.
References

[1] Alan L Hodgkin and Andrew F Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4):500, 1952.
[2] Brian Hassard. Bifurcation of periodic solutions of the Hodgkin-Huxley model for the squid giant axon. Journal of Theoretical Biology, 71(3):401–420, 1978.
[3] Peter Dayan and LF Abbott. Theoretical neuroscience: computational and mathematical modeling of neural systems. Journal of Cognitive Neuroscience, 15(1):154–155, 2003.
[4] Shinya Ito, Michael E Hansen, Randy Heiland, Andrew Lumsdaine, Alan M Litke, and John M Beggs. Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model. PLoS ONE, 6(11):e27431, 2011.
[5] Douglas Zhou, Yanyang Xiao, Yaoyu Zhang, Zhiqin Xu, and David Cai. Granger causality network reconstruction of conductance-based integrate-and-fire neuronal systems. PLoS ONE, 9(2):e87636, 2014.
[6] Douglas Zhou, Yanyang Xiao, Yaoyu Zhang, Zhiqin Xu, and David Cai. Causal and structural connectivity of pulse-coupled nonlinear networks. Physical Review Letters, 111(5):054102, 2013.
[7] Zhi-Qin John Xu, Guoqiang Bi, Douglas Zhou, and David Cai. A dynamical state underlying the second order maximum entropy principle in neuronal networks. Communications in Mathematical Sciences, 15(3):665–692, 2017.
[8] David Hansel, Germán Mato, Claude Meunier, and L Neltner. On numerical simulations of integrate-and-fire neural networks. Neural Computation, 10(2):467–483, 1998.
[9] Michael J Shelley and Louis Tao. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 11(2):111–119, 2001.
[10] Aaditya V Rangan and David Cai. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 22(1):81–100, 2007.
[11] Eugene M Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569–1572, 2003.
[12] Martin Pospischil, Maria Toledo-Rodriguez, Cyril Monier, Zuzanna Piwkowska, Thierry Bal, Yves Frégnac, Henry Markram, and Alain Destexhe. Minimal Hodgkin–Huxley type models for different classes of cortical and thalamic neurons. Biological Cybernetics, 99(4):427–441, 2008.
[13] Maurizio Mattia and Paolo Del Giudice. Efficient event-driven simulation of large networks of spiking neurons and dynamical synapses. Neural Computation, 12(10):2305–2329, 2000.
[14] Jan Reutimann, Michele Giugliano, and Stefano Fusi. Event-driven simulation of spiking neurons with stochastic dynamics. Neural Computation, 15(4):811–830, 2003.
[15] Michelle Rudolph and Alain Destexhe. How much can we trust neural simulation strategies? Neurocomputing, 70(10):1966–1969, 2007.
[16] Yi Sun, Douglas Zhou, Aaditya V Rangan, and David Cai. Library-based numerical reduction of the Hodgkin–Huxley neuron for network simulation. Journal of Computational Neuroscience, 27(3):369–390, 2009.
[17] Wulfram Gerstner and Werner M Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[18] Christof Koch and Idan Segev. Methods in Neuronal Modeling: From Ions to Networks. MIT Press, 1998.
[19] Valery Iustinovich Oseledec. A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc., 19(2):197–231, 1968.
[20] Edward Ott. Chaos in Dynamical Systems. Cambridge University Press, 2002.
[21] John Michael Tutill Thompson and H Bruce Stewart. Nonlinear Dynamics and Chaos. John Wiley & Sons, 2002.
[22] Thomas S Parker and Leon Chua. Practical Numerical Algorithms for Chaotic Systems. Springer Science & Business Media, 2012.
[23] Douglas Zhou, Yi Sun, Aaditya V Rangan, and David Cai. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type. Journal of Computational Neuroscience, 28(2):229–245, 2010.
[24] Alan Wolf, Jack B Swift, Harry L Swinney, and John A Vastano. Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena, 16(3):285–317, 1985.
[25] Douglas Zhou, Aaditya V Rangan, Yi Sun, and David Cai. Network-induced chaos in integrate-and-fire neuronal ensembles. Physical Review E, 80(3):031918, 2009.
[26] Donald H Perkel, George L Gerstein, and George P Moore. Neuronal spike trains and stochastic point processes: II. Simultaneous spike trains. Biophysical Journal, 7(4):419–440, 1967.
[27] Jonathon Shlens, Greg D Field, Jeffrey L Gauthier, Matthew I Grivich, Dumitru Petrusca, Alexander Sher, Alan M Litke, and EJ Chichilnisky. The structure of multi-neuron firing patterns in primate retina. Journal of Neuroscience, 26(32):8254–8266, 2006.
[28] Christopher J Quinn, Todd P Coleman, Negar Kiyavash, and Nicholas G Hatsopoulos. Estimating the directed information to infer causal relationships in ensemble neural spike train recordings. Journal of Computational Neuroscience, 30(1):17–44, 2011.
[29] Jonathon Shlens, Greg D Field, Jeffrey L Gauthier, Martin Greschner, Alexander Sher, Alan M Litke, and EJ Chichilnisky. The structure of large-scale synchronized firing in primate retina. Journal of Neuroscience, 29(15):5022–5031, 2009.
[30] David Golomb and David Hansel. The number of synaptic inputs and the synchrony of large, sparse neuronal networks. Neural Computation, 12(5):1095–1139, 2000.
[31] Sen Song, Per Jesper Sjöström, Markus Reigl, Sacha Nelson, and Dmitri B Chklovskii. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3):e68, 2005.
[32] Christopher J Honey, Rolf Kötter, Michael Breakspear, and Olaf Sporns. Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proceedings of the National Academy of Sciences, 104(24):10240–10245, 2007.
[33] Wulfram Gerstner, Werner M Kistler, Richard Naud, and Liam Paninski. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press, 2014.
[34] Tomáš Hromádka, Michael R DeWeese, and Anthony M Zador. Sparse representation of sounds in the unanesthetized auditory cortex. PLoS Biology, 6(1):e16, 2008.
[35] Barry W Connors and Michael J Gutnick. Intrinsic firing patterns of diverse neocortical neurons. Trends in Neurosciences, 13(3):99–104, 1990.