Design Fast Algorithms For Hodgkin-Huxley Neuronal Networks
Zhong-Qi Kyle Tian and Douglas Zhou∗
School of Mathematical Sciences, MOE-LSC, and Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, China
∗ [email protected]
Abstract.
The stiffness of the Hodgkin-Huxley (HH) equations during an action potential (spike) limits the use of large time steps. We observe that the neurons can be evolved independently between spikes, i.e., different neurons can be evolved with different methods and different time steps. This observation motivates us to design fast algorithms to raise efficiency. We present an adaptive method, an exponential time differencing (ETD) method and a library-based method to deal with the stiff period. All the methods can use time steps one order of magnitude larger than the regular Runge-Kutta methods to raise efficiency while achieving precise statistical properties of the original HH neurons, such as the largest Lyapunov exponent and mean firing rate. We point out that the ETD and library methods can stably achieve a maximum of 8 and 10 times speedup, respectively.
Keywords
Fast algorithm; Hodgkin-Huxley networks; Adaptive method; Exponential time differencing method; Library method
The Hodgkin-Huxley (HH) system [1–3] is widely used to simulate neuronal networks in computational neuroscience. One of its attractive properties is that it describes the detailed generation of action potentials realistically, e.g., the spiking of the squid's giant axon. For numerical simulation in practice, it is generally evolved with simple explicit methods, e.g., the Runge-Kutta scheme, and often with a fixed time step. However, the HH equations become stiff when an HH neuron fires a spike, and we have to take a sufficiently small time step to avoid stability problems. This stiffness limits the application of the HH system; as a substitute, conductance-based integrate-and-fire (I&F) systems are often used for efficiency [4–7], although the detailed generation of action potentials is omitted in I&F equations. Therefore, it is meaningful to solve the stiffness problem of the HH system and design fast algorithms allowing a large time step.

The stiffness of the HH system occurs only during action potentials and arises from the activities of sodium and potassium ion channels. A natural idea is then to use an adaptive method so that a larger time step can be used outside the stiff period. This works well for a single HH neuron but fails in large-scale HH networks [8], since firing events occur almost everywhere in time and the time step obtained by a standard adaptive method is very small. One key point we observed is that the neurons interact only at spike moments, so they can be evolved independently between spikes. Our strategy for the adaptive method is to use a large time step to evolve the network and split it into subintervals (small time steps) only for the neurons that are inside the stiff period.

The exponential time differencing (ETD) method [8–11], proposed for stiff systems, may be another effective and efficient method for the HH equations.
The idea of the ETD method is that, in each time step, the equations can be written as a dominant linear term plus a relatively small residual term. After multiplying by an integrating factor, evolving the HH equations reduces to estimating the integral of the residual, which can be approximated, e.g., by a polynomial. However, the neurons interact through spikes within a time step, e.g., a spike changes the conductance and membrane potential of its postsynaptic neurons. Without a careful recalibration, the accuracy of the network simulation is only first order. We point out that the spike-spike correction procedure introduced in Ref. [7] can solve this problem by iteratively sorting the possible spike times, updating the network to the first one and recomputing all future spikes within the time step. In this Letter, we give a fourth-order ETD method for networks with spike-spike correction.

The above adaptive and ETD methods can reproduce quantitative features of HH neurons, like the spike shapes, with high accuracy. But sometimes the goal of numerical simulations is qualitative insight or statistical accuracy, since with uncertainty in the model parameters the HH equations can only approximately describe the biological reality. In this case, we offer a library method [14] to raise efficiency, which avoids the stiff period by treating the HH neuron as an I&F one. The idea of the library method is that once an HH neuron's membrane potential reaches the threshold, we stop evolving its HH equations and restart after the stiff period with reset values interpolated from a pre-computed high-resolution data library. Therefore, we avoid the stiff period and can use a large time step to evolve the HH model. Compared with the original library method in Ref.
[14], our library method significantly simplifies the way to build the library and improves the precision of the library (see Section 6).

We use the regular fourth-order Runge-Kutta method (RK4) as the baseline to compare the performance of the adaptive, ETD and library methods, with the spike-spike correction procedure included in all the methods. The three advanced methods can use time steps one order of magnitude larger than that in the regular RK4 method while achieving precise statistical properties of the HH model, e.g., firing rates and chaotic dynamical properties. We also give a detailed comparison of efficiency for the given methods. Our numerical results show that the ETD and library methods can achieve stable speedups of up to 8 and 10 times, respectively.

The outline of the paper is as follows. In Section 2, we give the equations of the HH model. In Sections 3, 4 and 5, we describe the regular RK4, adaptive and ETD methods, respectively. In Section 6, we give the details of how to build and use the data library. In Section 7, we give the numerical results. We discuss and conclude in Section 8.

The dynamics of the $i$th neuron of a Hodgkin-Huxley (HH) network with $N$ excitatory neurons is governed by

$$C \frac{dV_i}{dt} = -(V_i - V_{Na}) G_{Na} m_i^3 h_i - (V_i - V_K) G_K n_i^4 - (V_i - V_L) G_L + I_i^{input} \quad (1)$$

$$\frac{dz_i}{dt} = (1 - z_i)\,\alpha_z(V_i) - z_i\,\beta_z(V_i) \quad \text{for } z = m, h \text{ and } n \quad (2)$$

where $V_i$ is the membrane potential; $m_i$, $h_i$ and $n_i$ are gating variables; $V_{Na}$, $V_K$ and $V_L$ are the reversal potentials for the sodium, potassium and leak currents, respectively; and $G_{Na}$, $G_K$ and $G_L$ are the corresponding maximum conductances.
$I_i^{input} = -G_i(t)(V_i - V_G)$ is the input current, with

$$\frac{dG_i(t)}{dt} = -\frac{G_i(t)}{\sigma_r} + H_i(t) \quad (3)$$

$$\frac{dH_i(t)}{dt} = -\frac{H_i(t)}{\sigma_d} + f \sum_l \delta(t - s_{il}) + \sum_{j \neq i} \sum_l S_{ij}\,\delta(t - \tau_{jl}) \quad (4)$$

where $V_G$ is the reversal potential, $G_i(t)$ is the conductance, $H_i(t)$ is an additional variable to smooth $G_i(t)$, $\sigma_r$ and $\sigma_d$ are the fast rise and slow decay time scales, respectively, and $\delta(\cdot)$ is the Dirac delta function. The second term in Eq. (4) is the feedforward input with magnitude $f$; the input times $s_{il}$ are generated from a Poisson process with rate $\nu$. The third term in Eq. (4) describes the synaptic interactions in the network, where $S_{ij}$ is the coupling strength from the $j$th neuron to the $i$th neuron and $\tau_{jl}$ is the $l$th spike time of the $j$th neuron. The forms of $\alpha_z$ and $\beta_z$ and other model parameters are given in the Appendix.

When the voltage $V_i$, evolving continuously according to Eqs. (1, 2), reaches the threshold $V_{th}$, we say the $i$th neuron fires a spike at this time. Instantaneously, all its postsynaptic neurons receive this spike, and the corresponding variable $H_j$ of the $j$th postsynaptic neuron jumps by an amount $S_{ji}$. For the sake of simplicity, we mainly consider a homogeneously and randomly connected network with $S_{ij} = A_{ij}S$, where $A = (A_{ij})$ is the adjacency matrix and $S$ is the coupling strength. But note that the conclusions of this paper will not change if we extend the given methods to more complicated networks, e.g., networks of both excitatory and inhibitory neurons, or more realistic connectivity with coupling strengths following the typical log-normal distribution [15, 16].

We first introduce the regular fourth-order Runge-Kutta scheme (RK4) with fixed time step $\Delta t$ to evolve the HH model. For ease of illustration, we use the vector

$$X_i(t) = (V_i(t), m_i(t), h_i(t), n_i(t), G_i(t), H_i(t)) \quad (5)$$

to represent the variables of the $i$th neuron.
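To make the model concrete, the right-hand side of Eqs. (1, 2) for a single neuron can be sketched in Python as below. The rate functions and parameter values shown here are the standard Hodgkin-Huxley choices and are assumptions of this sketch; the Appendix of the paper is authoritative for the values actually used.

```python
import math

# Standard HH parameter values (assumed here for illustration; see the Appendix
# of the paper for the values actually used)
C = 1.0                                  # membrane capacitance (uF/cm^2)
G_NA, G_K, G_L = 120.0, 36.0, 0.3        # maximum conductances (mS/cm^2)
V_NA, V_K, V_L = 50.0, -77.0, -54.4      # reversal potentials (mV)

# Standard rate functions alpha_z(V), beta_z(V) for z = m, h, n
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def hh_rhs(V, m, h, n, I_input):
    """Right-hand side of Eqs. (1, 2) for one neuron."""
    dV = (-(V - V_NA) * G_NA * m**3 * h - (V - V_K) * G_K * n**4
          - (V - V_L) * G_L + I_input) / C
    dm = (1.0 - m) * alpha_m(V) - m * beta_m(V)
    dh = (1.0 - h) * alpha_h(V) - h * beta_h(V)
    dn = (1.0 - n) * alpha_n(V) - n * beta_n(V)
    return dV, dm, dh, dn
```

With these standard values, the gating derivatives vanish at the steady-state values $z_\infty(V) = \alpha_z/(\alpha_z + \beta_z)$, and the resting potential sits near -65 mV.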
The neurons interact with each other through spikes by changing the postsynaptic state $X$, so it is important to obtain accurate spike sequences, e.g., with at least fourth-order accuracy. We determine the spike times as follows [12, 13]. If neuron $i$ fires in $[t_n, t_{n+1}]$, i.e., $V_i(t_n) < V_{th}$ and $V_i(t_{n+1}) \ge V_{th}$, we use the membrane potential and its derivative at $t = t_n$ and $t = t_{n+1}$, namely $V_i(t_n)$, $\frac{dV_i}{dt}(t_n)$, $V_i(t_{n+1})$ and $\frac{dV_i}{dt}(t_{n+1})$, to build a cubic Hermite interpolant $p(t)$ and determine the spike time as the solution $t_{spike}$ of $p(t) = V_{th}$ with $t_n < t_{spike} \le t_{n+1}$. The obtained spike time $t_{spike}$ then has fourth-order accuracy, provided that the membrane potential and its derivative have fourth-order accuracy as well.

In between spikes, the RK4 method achieves fourth-order accuracy. However, for a spike-containing time step, a careful recalibration of $X$ is required to account for the spike-spike interaction. For example, consider two bidirectionally connected neurons 1 and 2. If neuron 1 fires in the time step, then it is necessary to recalibrate $X$ to retain fourth-order accuracy. If neuron 2 also fires, then $X$ and the spike time of neuron 1 are imprecise and we should recalibrate them; then again $X$ and the spike time of neuron 2 are imprecise.

Figure 1: Local error analysis of neuron 1 over the time step $[t, t + \Delta t]$. The blue solid line is the true membrane potential $v$, the magenta dash-dot line is the true membrane potential $\tilde{v}$ without considering the spike from neuron 2, and the red dashed line is the cubic Hermite interpolant $p$. The solid horizontal line is the threshold and the dotted vertical lines are spike times.

We take the spike-spike correction procedure [7] to solve this problem.
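The cubic Hermite spike-time computation described above can be sketched as follows. The helper name `hermite_spike_time` is hypothetical, and a simple bisection on $p(t) = V_{th}$ stands in for whatever root-finder the authors actually use.

```python
def hermite_spike_time(t0, t1, V0, V1, dV0, dV1, V_th, iters=80):
    """Find t in (t0, t1] with p(t) = V_th, where p is the cubic Hermite
    interpolant matching V and dV/dt at both endpoints of [t0, t1]."""
    dt = t1 - t0

    def p(t):
        s = (t - t0) / dt  # map to the unit interval
        h00 = 2*s**3 - 3*s**2 + 1
        h10 = s**3 - 2*s**2 + s
        h01 = -2*s**3 + 3*s**2
        h11 = s**3 - s**2
        return h00*V0 + h10*dt*dV0 + h01*V1 + h11*dt*dV1

    lo, hi = t0, t1  # p(lo) < V_th <= p(hi) by the firing condition
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if p(mid) < V_th:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since the cubic Hermite interpolant reproduces any cubic exactly, a membrane potential that happens to be a cubic polynomial yields the exact crossing time.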
The strategy is that we preliminarily evolve neurons 1 and 2 considering only the feedforward input in the time step from $t$ to $t + \Delta t$ and determine the spike times $t^{(1)}_{spike}$ and $t^{(2)}_{spike}$ by cubic Hermite interpolation. Suppose that $t^{(1)}_{spike} < t^{(2)}_{spike}$. Let $v(s)$ denote the true membrane potential of neuron 1, $\tilde{v}(s)$ denote the true membrane potential of neuron 1 without considering the spike of neuron 2 fired at $t^{(2)}_{spike}$, and $p(s)$ be the cubic Hermite interpolant for neuron 1, as shown in Fig. 1. Then we have

$$p(s) - \tilde{v}(s) = O(\Delta t^4) \quad \text{for } t \le s \le t + \Delta t. \quad (6)$$

Note that the spike from neuron 2 starts to affect neuron 1 only at time $t^{(2)}_{spike}$, i.e.,

$$v(s) = \tilde{v}(s) \quad \text{for } t \le s \le t^{(2)}_{spike}. \quad (7)$$

Therefore, the preliminarily computed $t^{(1)}_{spike}$ indeed has fourth-order accuracy, as shown in Fig. 1. Then we can update neurons 1 and 2 from $t$ to $t^{(1)}_{spike}$, accept the spike time of neuron 1 at $t^{(1)}_{spike}$, and evolve them to $t + \Delta t$ to obtain the spike time of neuron 2 and the values $X_1(t + \Delta t)$ and $X_2(t + \Delta t)$ with fourth-order accuracy. For large networks, the strategy is the same, and the detailed regular RK4 scheme is given in Algorithm 1. Algorithm 1:
Regular RK4 algorithm
Input:
An initial time $t$, time step $\Delta t$, feedforward input times $\{s_{il}\}$.
Output: $\{X_i(t + \Delta t)\}$ and $\{\tau_{il}\}$ (if any neuron fired).

Preliminarily evolve the network from $t$ to $t + \Delta t$ to find the first synaptic spike:
for $i = 1$ to $N$ do
    Let $M$ denote the total number of feedforward spikes of the $i$th neuron within $[t, t + \Delta t]$; sort them into an increasing list $\{T^{sorted}_m\}$ and extend this notation so that $T^{sorted}_0 := t$ and $T^{sorted}_{M+1} := t + \Delta t$.
    for $m = 1$ to $M + 1$ do
        Advance the equations of the $i$th HH neuron from $T^{sorted}_{m-1}$ to $T^{sorted}_m$ using the standard RK4 scheme; then update the conductance variable $H_i(T^{sorted}_m)$ by adding $f$.
        if $V_i(T^{sorted}_{m-1}) < V_{th}$ and $V_i(T^{sorted}_m) \ge V_{th}$ then
            The $i$th neuron spiked during $[T^{sorted}_{m-1}, T^{sorted}_m]$; find the spike time $t^{(i)}_{spike}$ by cubic Hermite interpolation using the values $V_i(T^{sorted}_{m-1})$, $V_i(T^{sorted}_m)$, $\frac{dV_i}{dt}(T^{sorted}_{m-1})$ and $\frac{dV_i}{dt}(T^{sorted}_m)$.
while $\{t^{(i)}_{spike}\}$ is non-empty do
    Find the minimum of $\{t^{(i)}_{spike}\}$, say $t^{(i^*)}_{spike}$. Evolve neuron $i^*$ to $t^{(i^*)}_{spike}$, generate a spike at this moment and update its postsynaptic neurons to $t^{(i^*)}_{spike}$. Preliminarily evolve the affected neurons from $t^{(i^*)}_{spike}$ to $t + \Delta t$ to update their spike times $\{t_{spike}\}$.
Accept $\{X_i(T^{sorted}_{M+1})\}$ as the solution $\{X_i(t + \Delta t)\}$.

When a neuron fires a spike, the HH equations are stiff for a few milliseconds, a period we denote by $T_{stiff}$, as shown in Fig. 2. This stiff period requires a sufficiently small time step to avoid stability problems. In the regular RK4 scheme, we use a fixed time step, so to satisfy the stability requirement we have to use a relatively small time step, e.g., $\Delta t = 1/$ ms. But we may need to simulate the HH model frequently to analyze the system's behavior under different parameters, or evolve the model for a long run time (hours) to obtain precise statistical properties.
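For reference, the "standard RK4 scheme" invoked throughout Algorithm 1 is the classical fourth-order Runge-Kutta step. A minimal sketch for a generic system $dX/dt = F(X)$ (the same update applies componentwise to the vector $X_i$ of Eq. (5)):

```python
import math

def rk4_step(F, X, dt):
    """One classical fourth-order Runge-Kutta step for dX/dt = F(X).
    Works for scalars, or elementwise for array-valued states."""
    k1 = F(X)
    k2 = F(X + 0.5 * dt * k1)
    k3 = F(X + 0.5 * dt * k2)
    k4 = F(X + dt * k3)
    return X + (dt / 6.0) * (k1 + 2.0*k2 + 2.0*k3 + k4)
```

For example, ten steps of size 0.1 on $du/dt = -u$ reproduce $e^{-1}$ with an error far below the step size, consistent with fourth-order global accuracy.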
Therefore, it is important to enlarge the time step as much as possible to raise efficiency.

We note that between spikes the neurons do not affect each other and thus can be evolved independently. As shown in Algorithm 1, we introduce the spike-spike correction to obtain a highly accurate fourth-order method, i.e., we split up the time step once a presynaptic neuron fires. Therefore, the neurons are indeed evolved independently in the simulation. With this observation, we can design efficient methods by treating the neurons inside the stiff period appropriately.

We first introduce our adaptive method. Note that the derivative of the voltage is quite small outside the stiff period, as shown in Fig. 2(b). The strategy is that we take a large time step $\Delta t$ to evolve the network. Once a neuron is inside the stiff period, the large time step is split up into small time steps $\Delta t_S$, e.g., with the value $\Delta t_S = 1/$ ms in this Letter, as shown in Fig. 3. For each time step, we still use the standard RK4 scheme to evolve the neurons, so the adaptive method has an accuracy of $O(\Delta t^4)$. The detailed adaptive algorithm is the same as Algorithm 1 except that step 3 of Algorithm 1 (the construction of the sorted time nodes) is replaced by the following algorithm.
Figure 2: Typical firing event of a single HH neuron. (a) The trajectory of the voltage $V$ (mV). The black line indicates the threshold $V_{th}$ and the red dotted lines indicate the stiff period $T_{stiff}$. (b) The trajectories of the intrinsic current and the input current $I^{input}_i$ ($\mu$A$\cdot$cm$^{-2}$). The intrinsic current is the sum of the ionic and leakage currents.

Algorithm 2:
Adaptive algorithm
if the $i$th neuron is outside the stiff period then
    Let $M$ denote the total number of feedforward spikes of the $i$th neuron within $[t, t + \Delta t]$.
else
    Let $M$ denote the total number of feedforward spikes of the $i$th neuron within $[t, t + \Delta t]$ together with the extra time nodes $t + \Delta t_S$, $t + 2\Delta t_S$, ..., $t + k\Delta t_S$, $k = [\Delta t / \Delta t_S]$ ($[\cdot]$ rounds to the nearest integer).
Sort them into an increasing list $\{T^{sorted}_m\}$ and extend this notation so that $T^{sorted}_0 := t$ and $T^{sorted}_{M+1} := t + \Delta t$.

We now introduce the exponential time differencing (ETD) method, which was proposed for stiff systems [8–10]. To describe the ETD method for HH networks, it is instructive to first consider a simple ordinary differential equation

$$\frac{du}{dt} = F(t, u) \quad (8)$$

where $F(t, u)$ represents the stiff nonlinear forcing term. For a single time step from $t = t_k$ to $t = t_{k+1} = t_k + \Delta t$, we rewrite the equation as

$$\frac{du}{dt} = au + b + (F(t, u) - au - b) \quad (9)$$

where $au + b$ is the dominant linear approximation of $F(t, u)$ and $F(t, u) - au - b$ is the residual. Multiplying Eq. (9) by the integrating factor $e^{-at}$ and integrating, we obtain

$$u(t_{k+1}) = u(t_k)e^{a\Delta t} + e^{a\Delta t}\int_0^{\Delta t} e^{-a\tau}\,\tilde{F}(t_k + \tau, u(t_k + \tau))\,d\tau \quad (10)$$

where $\tilde{F}(t, u) = F(t, u) - au$. This formula is exact, and the essence of the ETD method is to find a proper way to approximate the integral in this expression, e.g., by approximating $\tilde{F}$ with a polynomial.
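The simplest realization of this idea freezes $\tilde{F}$ at its value at $t_k$ (a first-order ETD scheme); a sketch for a scalar equation, with function names chosen here for illustration:

```python
import math

def etd1_step(u, dt, a, F):
    """One first-order ETD step for du/dt = F(u), linearized as a*u plus the
    residual F~ = F - a*u. The linear part is integrated exactly; the residual
    is frozen at its value at the start of the step (Eq. (10) with F~ constant)."""
    F_tilde = F(u) - a * u
    return u * math.exp(a * dt) + F_tilde * (math.exp(a * dt) - 1.0) / a
```

For a linear right-hand side $F(u) = au + b$ the residual is exactly constant, so this single step already reproduces the exact solution; the fourth-order scheme ETD4RK refines the treatment of a varying residual.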
Figure 3: Illustration of the adaptive method for two HH neurons unidirectionally connected from neuron 1 to neuron 2. After neuron 1 fires a spike, it is evolved with a small time step $\Delta t_S$ during the stiff period, indicated by the stars. The dots indicate the time nodes of the large time step. The green square indicates an update time node due to the synaptic spike from neuron 1.

Here we give an ETD method with fourth-order Runge-Kutta time stepping (ETD4RK). We write $u_k$ and $\tilde{F}_k$ for the numerical approximations of $u(t_k)$ and $\tilde{F}(t_k, u_k)$, respectively (we use similar notation in the derivation of the ETD4RK method for the HH equations). The ETD4RK method is given by

$$a_k = u_k e^{a\Delta t/2} + \tilde{F}_k \,\frac{e^{a\Delta t/2} - 1}{a} \quad (11)$$

$$b_k = u_k e^{a\Delta t/2} + \tilde{F}(t_k + \Delta t/2, a_k)\,\frac{e^{a\Delta t/2} - 1}{a} \quad (12)$$

$$c_k = a_k e^{a\Delta t/2} + \left(2\tilde{F}(t_k + \Delta t/2, b_k) - \tilde{F}_k\right)\frac{e^{a\Delta t/2} - 1}{a} \quad (13)$$

$$u_{k+1} = u_k e^{a\Delta t} + g_1\tilde{F}_k + g_2\left[\tilde{F}(t_k + \Delta t/2, a_k) + \tilde{F}(t_k + \Delta t/2, b_k)\right] + g_3\tilde{F}(t_k + \Delta t, c_k) \quad (14)$$

where

$$g_1 = \left[-4 - a\Delta t + e^{a\Delta t}\left(4 - 3a\Delta t + a^2\Delta t^2\right)\right]/(a^3\Delta t^2)$$
$$g_2 = 2\left[2 + a\Delta t + e^{a\Delta t}\left(-2 + a\Delta t\right)\right]/(a^3\Delta t^2) \quad (15)$$
$$g_3 = \left[-4 - 3a\Delta t - a^2\Delta t^2 + e^{a\Delta t}\left(4 - a\Delta t\right)\right]/(a^3\Delta t^2)$$

The above formulas require an appropriate linear approximation $au + b$ of $F(t, u)$ during the single time step, so that the residual is relatively small and we can use a large time step to raise efficiency. Note that each of the variables $V, m, h$ and $n$ appears linearly in its own equation of the HH system (1, 2). Therefore, we can directly apply the ETD4RK method to the HH system. For ease of writing, we give the standard ETD4RK method for a single HH neuron evolving over a single time step from $t = t_k$ to $t = t_{k+1} = t_k + \Delta t$.
The HH equations (1, 2) are rewritten in the form

$$\frac{dz}{dt} = a_z z + \tilde{F}_z(t, V, m, h, n) \quad \text{for } z = V, m, h \text{ and } n \quad (16)$$

where

$$a_V = \left(-G_{Na} m_k^3 h_k - G_K n_k^4 - G_L\right)/C \quad (17)$$

$$a_z = -\alpha_z(V_k) - \beta_z(V_k) \quad \text{for } z = m, h \text{ and } n \quad (18)$$

and

$$\tilde{F}_V = \left[-(V - V_{Na})G_{Na}m^3h - (V - V_K)G_K n^4 - (V - V_L)G_L + I^{input}\right]/C - a_V V \quad (19)$$

$$\tilde{F}_z = (1 - z)\,\alpha_z(V) - z\,\beta_z(V) - a_z z \quad \text{for } z = m, h \text{ and } n. \quad (20)$$

The standard ETD4RK method is given by

$$a_{z,k} = z_k e^{a_z\Delta t/2} + \tilde{F}_{z,k}\,\frac{e^{a_z\Delta t/2} - 1}{a_z} \quad (21)$$

$$b_{z,k} = z_k e^{a_z\Delta t/2} + \tilde{F}_z(t_k + \Delta t/2, a_{V,k}, a_{m,k}, a_{h,k}, a_{n,k})\,\frac{e^{a_z\Delta t/2} - 1}{a_z} \quad (22)$$

$$c_{z,k} = a_{z,k} e^{a_z\Delta t/2} + \left(2\tilde{F}_z(t_k + \Delta t/2, b_{V,k}, b_{m,k}, b_{h,k}, b_{n,k}) - \tilde{F}_{z,k}\right)\frac{e^{a_z\Delta t/2} - 1}{a_z} \quad (23)$$

$$z_{k+1} = z_k e^{a_z\Delta t} + g_{z,1}\tilde{F}_{z,k} + g_{z,2}\left[\tilde{F}_z(t_k + \Delta t/2, a_{V,k}, a_{m,k}, a_{h,k}, a_{n,k}) + \tilde{F}_z(t_k + \Delta t/2, b_{V,k}, b_{m,k}, b_{h,k}, b_{n,k})\right] + g_{z,3}\tilde{F}_z(t_k + \Delta t, c_{V,k}, c_{m,k}, c_{h,k}, c_{n,k}) \quad (24)$$

for $z = V, m, h$ and $n$. The forms of $g_{z,1}, g_{z,2}, g_{z,3}$ are the same as in Eq. (15) except that $a$ is replaced by $a_z$.
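A direct transcription of the scalar scheme (11)-(15) can be sketched as follows (names are illustrative; the per-variable HH version above simply replaces $a$ by $a_z$):

```python
import math

def etd4rk_step(u, t, dt, a, F):
    """One ETD4RK step (Eqs. (11)-(15)) for du/dt = F(t, u), linearized as
    a*u plus the residual F~ = F - a*u."""
    Ft = lambda s, v: F(s, v) - a * v           # residual F~
    e_half = math.exp(a * dt / 2.0)
    e_full = math.exp(a * dt)

    # Internal stages, Eqs. (11)-(13)
    ak = u * e_half + Ft(t, u) * (e_half - 1.0) / a
    bk = u * e_half + Ft(t + dt / 2.0, ak) * (e_half - 1.0) / a
    ck = ak * e_half + (2.0 * Ft(t + dt / 2.0, bk) - Ft(t, u)) * (e_half - 1.0) / a

    # Weights g1, g2, g3 of Eq. (15)
    ad = a * dt
    g1 = (-4.0 - ad + e_full * (4.0 - 3.0 * ad + ad * ad)) / (a**3 * dt**2)
    g2 = 2.0 * (2.0 + ad + e_full * (-2.0 + ad)) / (a**3 * dt**2)
    g3 = (-4.0 - 3.0 * ad - ad * ad + e_full * (4.0 - ad)) / (a**3 * dt**2)

    # Final update, Eq. (14)
    return (u * e_full + g1 * Ft(t, u)
            + g2 * (Ft(t + dt / 2.0, ak) + Ft(t + dt / 2.0, bk))
            + g3 * Ft(t + dt, ck))
```

As a sanity check, the weights satisfy $g_1 + 2g_2 + g_3 = (e^{a\Delta t} - 1)/a$, so the step is exact whenever the residual is constant (e.g., $F(t,u) = au + b$), and it attains fourth-order accuracy on smooth nonlinear problems such as the logistic equation.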
Figure 4: Illustration of the ETD4RK method for two HH neurons unidirectionally connected from neuron 1 to neuron 2. After neuron 1 fires a spike, we use the standard ETD4RK scheme to evolve it during the stiff period, indicated by the stars. The dots indicate the time nodes where we use the standard RK4 scheme. The green square indicates an update time node due to the synaptic spike from neuron 1.

Compared with the standard RK4 scheme, the standard ETD4RK scheme requires extra calculation of the linear approximation terms in Eqs. (1, 2). Besides, the HH equations are stiff only during an action potential, as shown in Fig. 2. Therefore, it is more suitable to design the ETD4RK method for HH networks as follows: if a neuron is inside the stiff period, we use the standard ETD4RK scheme to evolve its HH equations; otherwise, we still use the standard RK4 scheme, as shown in Fig. 4. The detailed ETD4RK algorithm is also based on Algorithm 1, except that step 5 (advancing a neuron between adjacent time nodes) is replaced by the following algorithm.
Algorithm 3:
ETD4RK algorithm
if the $i$th neuron is outside the stiff period then
    Advance the equations of the $i$th HH neuron from $T^{sorted}_{m-1}$ to $T^{sorted}_m$ using the standard RK4 scheme.
else
    Advance the equations of the $i$th HH neuron from $T^{sorted}_{m-1}$ to $T^{sorted}_m$ using the standard ETD4RK scheme.
Then update the conductance variable $H_i(T^{sorted}_m)$ by adding $f$.

The introduced adaptive and ETD4RK methods can evolve HH neuronal networks quite accurately, e.g., they can both capture precise spike shapes. The library method [14] sacrifices the accuracy of the spike shape by treating the HH neuron's firing event like that of an I&F neuron. Once a neuron's membrane potential reaches the threshold $V_{th}$, we stop evolving its $V, m, h, n$ for the following stiff period $T_{stiff}$, and restart with values interpolated from a pre-computed high-resolution library. Thus the library method can in principle achieve the highest efficiency by skipping the stiff period. Besides, it can still capture accurate statistical properties of HH neuronal networks, like the mean firing rates and chaotic dynamics.

We first describe how to build the library in detail. When a neuron's membrane potential reaches the threshold $V_{th}$, we record the values of $I^{input}, m, h, n$ and denote them by $I^{th}, m^{th}, h^{th}, n^{th}$, respectively. If we knew the exact trajectory of $I^{input}$ for the following stiff period $T_{stiff}$, we could use a sufficiently small time step to evolve Eqs. (1, 2) over $T_{stiff}$ with initial values $V_{th}, m^{th}, h^{th}, n^{th}$ to obtain high-resolution trajectories of $V, m, h, n$. The values at the end of the stiff period play the role of the reset values in I&F neurons and are denoted by $V^{re}, m^{re}, h^{re}, n^{re}$.

However, it is impossible to obtain the exact trajectory of $I^{input}$ without knowing the feedforward and synaptic spike information. As shown in Fig.
2(b), $I^{input}$ varies during the stiff period with a peak magnitude of order 5 $\mu$A$\cdot$cm$^{-2}$, while the intrinsic current, the sum of the ionic and leakage currents, is about 30 $\mu$A$\cdot$cm$^{-2}$ at the spike time, quickly rises to a peak value of about 250 $\mu$A$\cdot$cm$^{-2}$, and remains dominant over $I^{input}$ for the remainder of the stiff period. Therefore, the intrinsic current dominates in the stiff period. With this observation, we take $I^{input}$ as constant throughout the whole stiff period when we build the library. We emphasize that this is the only assumption made in the library method. Then, given a suite of values $I^{th}, m^{th}, h^{th}, n^{th}$, we can obtain the corresponding suite of $V^{re}, m^{re}, h^{re}, n^{re}$.

Figure 5: The ranges of the values $I^{th}, m^{th}, h^{th}, n^{th}$.

When building the library, we choose $N_I, N_m, N_h, N_n$ different values of $I^{th}, m^{th}, h^{th}, n^{th}$, respectively, equally distributed over their ranges, as shown in Fig. 5. For each suite of $I^{th}, m^{th}, h^{th}, n^{th}$, we evolve Eqs. (1, 2) over a time interval of $T_{stiff}$ with the RK4 method and a sufficiently small time step to obtain $V^{re}, m^{re}, h^{re}, n^{re}$. Note that we take $I^{input} = I^{th}$ throughout the whole time interval $T_{stiff}$. The library thus has a total of $N_I N_m N_h N_n$ entries. In our simulation, we take ranges for $I^{th}, m^{th}, h^{th}, n^{th}$ that cover almost all possible situations, and take sample numbers $N_I = 21$, $N_m = 16$, $N_h = 21$, $N_n = 16$, which make the library quite accurate. The library occupies only 6.89 megabytes in binary form and is quite small for today's computers.

One key point in building the library is to choose a proper threshold value $V_{th}$. The threshold should be relatively low, so that the HH equations are not yet stiff below it and a large time step still satisfies the stability requirement.
On the other hand, it should be high enough that a neuron will definitely fire a spike after its membrane potential reaches the threshold. In this Letter, we take $V_{th} = -$ mV. Correspondingly, we take the stiff period $T_{stiff} = 3.$ ms, which is long enough to cover the stiff parts of general firing events.

6.2 Use the library
Figure 6: Illustration of the library method for two HH neurons unidirectionally connected from neuron 1 to neuron 2. After neuron 1 fires a spike, we compute and record the values $I^{th}, m^{th}, h^{th}, n^{th}$; we then stop evolving its $V, m, h, n$ for the following $T_{stiff}$ ms and restart with the values $V^{re}, m^{re}, h^{re}, n^{re}$ interpolated from the library, indicated by the triangle. The dots indicate the time nodes where we use the standard RK4 scheme. The green square indicates an update time node due to the synaptic spike from neuron 1.

We now illustrate how to use the library. Once a neuron's membrane potential reaches the threshold, we first record the values $I^{th}, m^{th}, h^{th}, n^{th}$, then stop evolving this neuron's equations for $V, m, h, n$ for the following $T_{stiff}$ ms and restart with values $V^{re}, m^{re}, h^{re}, n^{re}$ linearly interpolated from the pre-computed high-resolution data library, as shown in Fig. 6. For ease of writing, suppose $I^{th}$ falls between the two grid points $I^{th}_0$ and $I^{th}_1$ in the library. Similarly, we can find the grid points $m^{th}_0$ and $m^{th}_1$, $h^{th}_0$ and $h^{th}_1$, and $n^{th}_0$ and $n^{th}_1$, respectively. So we need 16 suites of values in the library to perform the linear interpolation

$$z^{re} = \sum_{i,j,k,l=0,1} z^{re}(I^{th}_i, m^{th}_j, h^{th}_k, n^{th}_l)\, \frac{I^{th} - I^{th}_{1-i}}{I^{th}_i - I^{th}_{1-i}}\; \frac{m^{th} - m^{th}_{1-j}}{m^{th}_j - m^{th}_{1-j}}\; \frac{h^{th} - h^{th}_{1-k}}{h^{th}_k - h^{th}_{1-k}}\; \frac{n^{th} - n^{th}_{1-l}}{n^{th}_l - n^{th}_{1-l}} \quad (25)$$

for $z = V, m, h, n$. We note that the variables $G$ and $H$ have analytic solutions, so they can be evolved as usual during the stiff period. After building the high-resolution library, we can use a large time step to evolve the HH neuronal network with the standard RK4 method. The detailed library algorithm is the same as Algorithm 1 except that step 13 (processing the earliest spike in the while loop) is replaced by the following algorithm. Algorithm 4:
Library algorithm
Find the minimum of $\{t^{(i)}_{spike}\}$, say $t^{(i^*)}_{spike}$. Evolve neuron $i^*$ to $t^{(i^*)}_{spike}$ and generate a spike at this moment.
Record the values $I^{th}, m^{th}, h^{th}, n^{th}$, then perform a linear interpolation from the library to get $V^{re}, m^{re}, h^{re}, n^{re}$. Meanwhile, stop evolving $V, m, h, n$ of neuron $i^*$ for the next $T_{stiff}$ ms, but still evolve the conductance variables $G, H$ as usual.
Update its postsynaptic neurons to $t^{(i^*)}_{spike}$. Preliminarily evolve the affected neurons from $t^{(i^*)}_{spike}$ to $t + \Delta t$ to update their spike times $\{t_{spike}\}$.

There are three kinds of error in the library method: the error from the time step $\Delta t$, the error from the interpolation used to query the library, and the error from the assumption that $I^{input}$ is kept constant throughout the stiff period $T_{stiff}$. The first is simply $O(\Delta t^4)$, since the library method is based on the RK4 scheme. The other two kinds of error come from the calls to the library. For simplicity, we consider the error of the voltage for one single neuron,

$$\Delta V = |V_{library} - V_{regular}| \quad (26)$$

where $|\cdot|$ takes the absolute value, the subscript "regular" indicates the high-precision solution computed by the RK4 method with a sufficiently small time step, and "library" indicates the solution from the library method.

The error from the interpolation can be well described by the error of the reset value, $\Delta V^{re}$. We use a constant input $I^{input}$ to drive the HH neuron so as to exclude the influence of the constant-input assumption. Denote the sample intervals by $\Delta I, \Delta m, \Delta h, \Delta n$ for $I^{th}, m^{th}, h^{th}, n^{th}$, respectively. A linear interpolation yields an error of $\Delta V^{re} = O(\Delta I^2 + \Delta m^2 + \Delta h^2 + \Delta n^2)$. If, instead of the sample intervals used to build the library, we use intervals $c\Delta I, c\Delta m, c\Delta h, c\Delta n$, then it is straightforward that $\Delta V^{re} = O(c^2)$, as shown in Fig. 7(a) (the same results hold for $m, h, n$).
In the same way, a cubic interpolation yields $\Delta V^{re} = O(c^4)$, as shown in Fig. 7(b). Thus the cubic interpolation can indeed improve the accuracy when the network is driven by a constant input, as shown in Fig. 7(c).
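The multilinear interpolation of Eq. (25) over one 4-D grid cell can be sketched as follows (hypothetical names; the actual library stores the reset values at the grid points):

```python
import itertools

def interp4_linear(values, lo, hi, query):
    """Multilinear interpolation over one 4-D grid cell, as in Eq. (25).
    values[(i,j,k,l)] is the stored reset value at the cell corner picked by
    indices i,j,k,l in {0,1}; lo/hi are the per-dimension cell edges."""
    out = 0.0
    for corner in itertools.product((0, 1), repeat=4):
        w = 1.0
        for d, i in enumerate(corner):
            node = hi[d] if i == 1 else lo[d]      # x_d at this corner
            other = lo[d] if i == 1 else hi[d]     # the opposite edge x_{1-i}
            w *= (query[d] - other) / (node - other)
        out += values[corner] * w
    return out
```

Multilinear interpolation reproduces any affine function of the four coordinates exactly, consistent with the second-order ($O(c^2)$) error for smooth data noted above.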
Figure 7: (a, b) The error of the reset values for a single HH neuron driven by constant input, using (a) a linear interpolation and (b) a cubic interpolation to query the library. (c) The error $\Delta V$ for a single HH neuron driven by constant input with a linear (blue) and cubic (red) interpolation to query the library. The short black lines indicate spikes. A sufficiently small time step is used in (a-c).

We now consider the error due to the assumption of constant input. Driving a single HH neuron by Poisson input, we compare the error $\Delta V$ for different time steps and interpolation schemes, as shown in Fig. 8. We find that once the neuron fires a spike and calls the library, the reset error $\Delta V^{re}$ (the dotted line in Fig. 8) is significantly greater than the contributions to $\Delta V$ from the time step and the interpolation. Therefore, the largest error comes from the assumption of constant input, and it cannot be reduced. Hence, we build a relatively coarse library and choose linear interpolation in this Letter. Besides, we also find that the error $\Delta V$ does not accumulate linearly with the number of library calls. As shown in Fig. 8, when the neuron fires, the error $\Delta V$ first rises to the level of $\Delta V^{re}$ and then quickly decays until the next spike time.
Figure 8: (a) The error $\Delta V$ for one single neuron driven by Poisson input with a linear interpolation to query the library. Time steps are $\Delta t = 0.25$ ms (blue) and $\Delta t = 0.004$ ms (red). (b) The error $\Delta V$ for one single neuron driven by Poisson input with a linear (blue) and cubic (red) interpolation to query the library. The dotted line indicates the error of the reset value $\Delta V^{re}$ and the short black lines indicate spikes.

In this section, we show the performance of the given adaptive, ETD4RK and library methods with large time steps. We first examine whether the given methods can capture the type II behavior of an individual HH neuron. Driven by constant input, the neuron fires regularly and periodically [17, 18] only when the input current is greater than a critical value. The HH model has a sudden jump around this critical value from zero firing rate to a regular nonzero firing rate because of a subcritical Hopf bifurcation [18], as shown in Fig. 9(a). Below the critical value, some spikes may appear before the neuron converges to the stable zero-firing-rate state. The number of spikes during this transient period depends on how close the constant input is to the critical value, as shown in Fig. 9(b). We point out that the adaptive, ETD4RK and library methods, with time steps 8 times larger than that of the regular method, can all capture the type II behavior. In particular, the original library method in Ref. [14] fails to capture the spikes in the transient period, since it uses the stable information of $V, m, h$ and $n$ to build the library. Our library method uses the more intuitive suite of threshold values, which already covers both the possible stable and transient cases and is thus more precise.
For the performance on a network, we mainly consider a homogeneously and randomly connected network of 100 excitatory neurons with connection probability 10%, driven by feedforward Poisson input with rate ν = 100 Hz and a fixed input strength f (mS · cm⁻²). The coupling strength S is then the only remaining variable. Other types of HH networks and other dynamical regimes can be treated in the same way, with similar results.

We first study the chaotic dynamics of the HH system by computing the largest Lyapunov exponent, one of the most important tools for characterizing chaotic dynamics [19]. The spectrum of Lyapunov exponents measures the average rate of divergence or convergence of a reference orbit and an initially perturbed orbit [20-22]. If the largest Lyapunov exponent is positive, the reference and perturbed orbits diverge exponentially and the dynamics is chaotic; otherwise, they converge exponentially and the dynamics is non-chaotic.
Figure 9: (a) The firing rate as a function of constant input I_input (µA · cm⁻²). (b) The number of spikes during the transient period for a fixed initial voltage V. The blue stars, green circles, cyan squares and red diamonds in (a) and (b) indicate the results using the regular method with a small time step and the adaptive, ETD4RK and library methods with a time step 8 times larger, respectively.

When calculating the largest Lyapunov exponent, we use X = [X_1, X_2, ..., X_N] to represent all the variables of the neurons in the HH model. Denote the reference and perturbed trajectories by X(t) and X̃(t), respectively. The largest Lyapunov exponent can be computed by

lim_{t→∞} lim_{||ε||→0} (1/t) ln( ||X̃(t) − X(t)|| / ||ε|| ),   (27)

where ε is the initial separation. However, Eq. (27) cannot be used directly: for a chaotic system the separation ||X̃(t) − X(t)|| is unbounded as t → ∞ and a numerical ill-condition occurs. Standard algorithms for computing the largest Lyapunov exponent can be found in Refs. [22-24]. The regular, ETD4RK and adaptive methods can use these algorithms directly. However, for the library method the values of V, m, h and n are not available during the stiff period, so these algorithms do not work; we use the extended algorithm introduced in Ref. [25] to solve this problem.

As shown in Fig. 10(a), we compute the largest Lyapunov exponent as a function of the coupling strength S from 0 to 0.1 mS · cm⁻² by the regular, adaptive, ETD4RK and library methods, respectively. The total run time T is 60 seconds, which is sufficiently long to obtain convergent results. The adaptive, ETD4RK and library methods with large time steps all obtain an accurate largest Lyapunov exponent compared with the regular method. The results show three typical dynamical regimes: the system is chaotic, with a positive largest Lyapunov exponent, in an intermediate range of S, and non-chaotic for smaller and larger S. As shown in Fig.
10(b), we compute the mean firing rate, denoted by R, obtained by these methods to further demonstrate the accuracy of the adaptive, ETD4RK and library methods. We define the relative error in the mean firing rate as

E_R = |R_* − R_regular| / R_regular,   (28)

where * = adaptive, ETD4RK, library. As shown in Fig. 10(c), all three methods achieve at least 2 digits of accuracy using large time steps. Note that the relative error may be exactly zero in some cases; when this happens we set the logarithmic relative error to −8 in Fig. 10(c). The adaptive method has a slight advantage over the ETD4RK and library methods for the mean firing rate.
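The renormalization idea behind the standard algorithms [22-24] can be sketched on a simple chaotic map: after every step the separation between the reference and perturbed orbits is rescaled back to ε, so it never becomes unbounded. The one-dimensional logistic map below is an illustrative stand-in for the full HH flow:

```python
import numpy as np

def largest_lyapunov(f, x0, n_steps=20_000, eps=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent of a map f:
    evolve a reference and a perturbed orbit together and rescale their
    separation back to eps after every step, avoiding the numerical
    ill-conditioning of Eq. (27) applied directly."""
    x, y = x0, x0 + eps
    log_sum = 0.0
    for _ in range(n_steps):
        x, y = f(x), f(y)
        d = abs(y - x)
        if d == 0.0:                  # degenerate step; restart perturbation
            y, d = x + eps, eps
        log_sum += np.log(d / eps)    # per-step stretching factor
        y = x + eps * (y - x) / d     # renormalize the separation to eps
    return log_sum / n_steps

# The chaotic logistic map x -> 4x(1-x) has known largest exponent ln 2
lyap = largest_lyapunov(lambda x: 4.0 * x * (1.0 - x), x0=0.2)
```

The accumulated logs of the per-step stretching factors average to the exponent, exactly the quantity plotted in Fig. 10(a) for the network.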
Figure 10: (a) The largest Lyapunov exponent of the HH network versus the coupling strength S. (b) The mean firing rate versus the coupling strength S. (c) The relative error in the mean firing rate (the ordinate value −8 indicates no error). The blue, green, cyan and red curves represent the regular, adaptive, ETD4RK and library methods, respectively. The regular method uses a small time step and the adaptive, ETD4RK and library methods use a time step 8 times larger. The total run time is 60 seconds to obtain convergent results.

From the calculation of the largest Lyapunov exponent, we know that there are three typical dynamical regimes in the HH model. As shown in Fig. 11, these are an asynchronous regime at weak coupling, a chaotic regime at intermediate coupling, and a synchronous regime at strong coupling. We therefore choose three coupling strengths, S = 0.02, 0.05 and 0.08 mS · cm⁻², to represent these three typical dynamical regimes in the following numerical tests.
Figure 11: Raster plots of firing events in the three typical dynamical regimes, with coupling strengths (a) S = 0.08 mS · cm⁻², (b) S = 0.05 mS · cm⁻², (c) S = 0.02 mS · cm⁻². Since all the given methods produce almost the same raster, we only show the result obtained by the regular method.

We now verify whether the algorithms given above have fourth-order accuracy by performing convergence tests. For each method, we use a sufficiently small dyadic time step (with a correspondingly small stiff-period step ∆t_S for the adaptive method) to evolve the HH model and obtain a high-precision solution X_high at time t = 2000 ms. To perform a convergence test, we also compute the solution X(∆t) using a decreasing sequence of dyadic time steps ∆t = 2^(−j) ms. The numerical error is measured as

E = ||X(∆t) − X_high||.   (29)

As shown in Fig. 12(a), the regular method achieves fourth-order accuracy for the non-chaotic regimes S = 0.02 and 0.08 mS · cm⁻². For the chaotic case S = 0.05 mS · cm⁻², convergence of the numerical solution can never be achieved. The adaptive, ETD4RK and library methods show similar convergence behavior, as shown in Fig. 12.
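The convergence-test procedure itself, a high-precision reference, a sequence of dyadic steps and the slope of the log-log error curve, can be sketched on a scalar ODE with a known solution; the test problem below is an illustrative stand-in for the HH system:

```python
import numpy as np

def rk4(f, y0, t_end, dt):
    """Standard RK4 integration of y' = f(t, y) from 0 to t_end, fixed step dt."""
    y, t = y0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return y

# Smooth test problem with known solution y(t) = exp(sin t)
f = lambda t, y: np.cos(t) * y
exact = np.exp(np.sin(2.0))
dts = [2.0 ** (-k) for k in range(3, 8)]          # dyadic time steps
errs = [abs(rk4(f, 1.0, 2.0, dt) - exact) for dt in dts]
# Least-squares slope of log2(error) versus log2(dt) estimates the order
order = float(np.polyfit(np.log2(dts), np.log2(errs), 1)[0])
```

For the HH network the exact solution is unavailable, so X_high computed at a much smaller step plays the role of `exact` above; the fitted slope should approach 4 in the non-chaotic regimes.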
Figure 12: Convergence tests of (a) the regular method, (b) the adaptive method, (c) the ETD4RK method and (d) the library method. The convergence tests are performed by evolving the HH model for a total run time of T = 2000 ms. We show the results for the coupling strengths S = 0.02 (blue), 0.05 (green) and 0.08 (red) mS · cm⁻², respectively.

In this section, we compare the efficiency of the adaptive, ETD4RK and library methods. A straightforward way is to compare the time cost of each method for the same total run time T. We notice that all the methods are based on the standard RK4 scheme, so we can also compare the number of calls to the standard RK4 scheme that each method makes, which yields an analytical comparison of efficiency. For the ETD4RK method, the computational cost of one standard ETD4RK step is slightly higher than that of one standard RK4 step; here we omit the difference and no longer distinguish them for simplicity.

We start by computing the number of standard RK4 calls for one neuron in one single time step of the regular method, which is presented in Theorem 1.

Theorem 1.
Suppose the presynaptic spike train to a neuron can be modeled by a Poisson train with rate λ. Then the number of standard RK4 calls for the neuron in one time step from t to t + ∆t in the regular method is ν∆t + (2∆t + ν∆t)λ on average.

Proof.
During the initial preliminary evolution from t to t + ∆t, a call number of ν∆t is required on average. Suppose there are n spikes fired by the presynaptic neurons during [t, t + ∆t], which happens with probability (λ∆t)^n / n! · e^(−λ∆t). Denote the spike times by t_1, t_2, ..., t_n, where t_1 < t_2 < ... < t_n.

Figure 13: (a) The number of standard RK4 calls per neuron for the regular, adaptive, ETD4RK and library methods. The circles represent the call numbers from numerical tests while the stars represent the approximations from Eqs. (33)-(35). (b) The relative error in the call number between the approximations and the numerical tests for the regular, adaptive, ETD4RK and library methods. (c) The efficiency measured by the time cost and by the call number versus the coupling strength S for the adaptive, ETD4RK and library methods. Time steps and colors are the same as in Fig. 10. The total run time is 50 seconds.

As shown in Fig. 13, we count the call numbers from the numerical tests and from the approximations in Eqs. (33)-(35) for the regular, adaptive, ETD4RK and library methods, respectively. These equations are indeed close approximations of the call numbers, achieving at least 1 digit of accuracy. We should point out that the values of R and p in the approximations are obtained from the numerical tests, since they are hard to estimate directly.

Fig. 13(c) shows the efficiency of the adaptive, ETD4RK and library methods, comparing their time cost and call number with those of the regular method. These two measures of efficiency are quite consistent, apart from small differences for the ETD4RK and library methods. For the ETD4RK method, this is because we use the standard ETD4RK scheme during the stiff period, which costs slightly more time than the standard RK4 scheme, so the efficiency measured by the call number is slightly overestimated.
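The counting step at the start of the proof of Theorem 1, where n presynaptic spikes split [t, t + ∆t] into n + 1 subintervals, can be checked with a short Monte Carlo sketch; the rate, step and trial count below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def mean_subintervals(lam, dt, n_trials=200_000):
    """Average number of subintervals created when Poisson presynaptic
    spikes (rate lam per ms) split one time step of length dt (ms)."""
    n_spikes = rng.poisson(lam * dt, size=n_trials)  # spikes per step
    return float(np.mean(n_spikes + 1))              # n spikes -> n+1 pieces

lam, dt = 0.1, 0.25             # e.g. 100 Hz presynaptic rate, 0.25 ms step
avg = mean_subintervals(lam, dt)
expected = 1.0 + lam * dt       # E[n] + 1 = 1 + lam * dt
```

Each subinterval forces its own sub-evolution, which is the origin of the λ-proportional term in the expected call number.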
For the library method, this is because when a neuron fires a spike, we have to call the library and evolve the parameters G and H during the stiff period, which costs some time that is not included in the efficiency measured by the call number. When the mean firing rate is high, this extra time is no longer negligible. Even so, the two measures of efficiency still show good agreement for the ETD4RK and library methods. Therefore, we use the efficiency measured by the call number for comparison.

Figure 14: Efficiency measured by the expected call number for the adaptive, ETD4RK and library methods, respectively. The time steps are ∆t = 0.354, 0.25, 0.125, 0.0625 and 0.03125 ms from top to bottom. The total run time is 50 seconds.

Fig. 14 shows the efficiency of the adaptive, ETD4RK and library methods with different time steps. When the mean firing rate is low, the adaptive method achieves high efficiency with a maximum speedup of 7 times. However, this high efficiency decreases rapidly with the coupling strength (mean firing rate), since the neurons use the smaller time step much more often when the firing rate is high. The ETD4RK and library methods are not so sensitive to the firing rate and achieve a globally high efficiency. In particular, the library method achieves a maximum speedup of 10 times for the largest time step, ∆t = 0.354 ms.

In conclusion, we have presented three methods to deal with the stiff part during the firing event when evolving the HH equations. All of them can use a large time step to save computational cost. The adaptive method is the easiest to use and retains almost all the information of the original HH neurons, such as the spike shapes. However, it has limited efficiency and is more appropriate for dynamical regimes with low firing rates.
The ETD4RK method obtains both a precise trajectory of the variables and high efficiency. The library method achieves the highest efficiency, but it sacrifices the accuracy of the spike shapes and is more suitable for obtaining highly accurate statistical information of HH neuronal networks.

For the adaptive method, we should point out that a standard adaptive method such as the Runge-Kutta-Fehlberg method is not suitable for HH neuronal networks. When evolving a single neuron, the standard adaptive method requires an extra RK4 calculation to decide the adaptive time step. When the neuron fires, its equations become stiff and the chosen time step is very small, as illustrated in Ref. [8]. So it is no better than our adaptive method for a single neuron. For networks, the standard adaptive method has to evolve the entire network to choose the adaptive time step. In large-scale networks there are firing events almost everywhere, so the chosen adaptive time step is always quite small [8], which makes the method ineffective.

Our library method is based on the original work in Ref. [14]. The main difference is the way the library is built and used. In the original work, once the membrane potential reaches the threshold, the input current I_th and the gating variables m_th, h_th, n_th are recorded. Driving a single neuron with constant input I_th, one obtains a periodic trajectory of the membrane potential after the transient period. A stable section of this trajectory is then intercepted, whose initial value is the threshold V_th and whose length is the stiff period. Given this section of the membrane potential and the initial values m_th, h_th, n_th, the corresponding reset gating variables m_re, h_re, n_re can be obtained. Denoting the sample numbers of I_th, m_th, h_th, n_th by N_I, N_m, N_n, N_h, the size of the library is N_I(1 + N_m + N_n + N_h), i.e., the input current I_th is the most important variable.
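The library-building loop can be sketched as follows, using the standard HH rate functions and parameters listed in the appendix. The threshold V_th, stiff-period length T_stiff, inner step and the initial suite of gating variables below are illustrative choices, not the values used in the Letter, and the RK4 integrator stands in for whichever scheme is used offline:

```python
import numpy as np

# Standard HH rate functions and parameters (Dayan & Abbott convention; V in mV)
alpha_m = lambda V: (0.1 * V + 4.0) / (1.0 - np.exp(-0.1 * V - 4.0))
beta_m  = lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0)
alpha_h = lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)
beta_h  = lambda V: 1.0 / (1.0 + np.exp(-3.5 - 0.1 * V))
alpha_n = lambda V: (0.01 * V + 0.55) / (1.0 - np.exp(-0.1 * V - 5.5))
beta_n  = lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0)

C = 1.0                                   # muF / cm^2
V_Na, V_K, V_L = 50.0, -77.0, -54.4       # mV
G_Na, G_K, G_L = 120.0, 36.0, 0.3         # mS / cm^2

def hh_rhs(y, I):
    """Right-hand side of the HH equations under a constant current I."""
    V, m, h, n = y
    dV = (G_Na * m**3 * h * (V_Na - V) + G_K * n**4 * (V_K - V)
          + G_L * (V_L - V) + I) / C
    dm = alpha_m(V) * (1.0 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1.0 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1.0 - n) - beta_n(V) * n
    return np.array([dV, dm, dh, dn])

def evolve(y, I, t_len, dt=0.001):
    """RK4 with a small inner step dt (ms) over an interval of length t_len."""
    for _ in range(int(round(t_len / dt))):
        k1 = hh_rhs(y, I)
        k2 = hh_rhs(y + 0.5 * dt * k1, I)
        k3 = hh_rhs(y + 0.5 * dt * k2, I)
        k4 = hh_rhs(y + dt * k3, I)
        y = y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

def library_entry(I_th, m_th, h_th, n_th, V_th=-50.0, T_stiff=3.5):
    """One library entry: the state reached T_stiff ms after threshold crossing."""
    return evolve(np.array([V_th, m_th, h_th, n_th]), I_th, T_stiff)

# One illustrative entry, starting from roughly resting gating variables
entry = library_entry(10.0, 0.053, 0.596, 0.318)
```

Looping `library_entry` over a grid of (I_th, m_th, h_th, n_th) values fills the table; at run time the stiff period is skipped and the reset state is interpolated from the table instead of being integrated.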
In our library method, the gating variables are as important as the input current. Given a suite of I_th, m_th, h_th, n_th, we evolve the HH equations with constant input I_th for the stiff period T_stiff, obtaining the corresponding reset values V_re, m_re, h_re, n_re simultaneously. The advantage of our library method lies in two aspects: 1) it is much easier to build the library; 2) our data library is more accurate since it covers both the transient and the stable periodic information.

Finally, we emphasize that the spike-spike correction procedure [7] is necessary in the given methods to achieve fourth-order accuracy. However, it greatly decreases the efficiency of the ETD4RK and library methods for large-scale networks. When many neurons fire in one time step, each neuron calls the standard RK4 scheme many times during the updating and preliminary evolving procedures. As the size of the network tends to infinity, the given methods with a large time step can even become slower than the regular method with a small time step, as illustrated by Eqs. (33), (34) and (35). We point out that this problem can be solved by reducing the accuracy from fourth order to second order. In each time step, we evolve each neuron without considering the feedforward and synaptic spikes, and recalibrate their effects at the end of the time step. The spike-spike correction procedure is then avoided while the methods still have second-order accuracy [13]. Besides, the call number per neuron is merely T/∆t for the regular and ETD4RK methods and (1 − T_stiff · R) T/∆t for the library method. Therefore, the ETD4RK and library methods can stably achieve over 10 times of speedup with a large time step in any kind of network, e.g., all-to-all connected networks, large-scale networks and networks of both excitatory and inhibitory neurons.

Appendix: Parameters and variables for the Hodgkin-Huxley equations

Definitions of α and β are as follows [28]:

α_m(V) = (0.1V + 4) / (1 − exp(−0.1V − 4)),
β_m(V) = 4 exp(−(V + 65)/18),
α_h(V) = 0.07 exp(−(V + 65)/20),
β_h(V) = 1 / (1 + exp(−3.5 − 0.1V)),
α_n(V) = (0.01V + 0.55) / (1 − exp(−0.1V − 5.5)),
β_n(V) = 0.125 exp(−(V + 65)/80).

Other model parameters are set as C = 1 µF · cm⁻², V_Na = 50 mV, V_K = −77 mV, V_L = −54.4 mV, G_Na = 120 mS · cm⁻², G_K = 36 mS · cm⁻², G_L = 0.3 mS · cm⁻², V_G = 0 mV, σ_r = 0.5 ms, and σ_d = 3.0 ms [28].

References

[1] Alan L. Hodgkin and Andrew F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology, 117(4):500, 1952.
[2] Brian Hassard. Bifurcation of periodic solutions of the Hodgkin-Huxley model for the squid giant axon. Journal of Theoretical Biology, 71(3):401–420, 1978.
[3] Peter Dayan and L. F. Abbott. Theoretical neuroscience: computational and mathematical modeling of neural systems. Journal of Cognitive Neuroscience, 15(1):154–155, 2003.
[4] David C. Somers, Sacha B. Nelson, and Mriganka Sur. An emergent model of orientation selectivity in cat visual cortical simple cells. Journal of Neuroscience, 15(8):5448–5465, 1995.
[5] David McLaughlin, Robert Shapley, Michael Shelley, and Dingeman J. Wielaard. A neuronal network model of macaque primary visual cortex (V1): Orientation selectivity and dynamics in the input layer 4Cα. Proceedings of the National Academy of Sciences, 97(14):8087–8092, 2000.
[6] David Cai, Aaditya V. Rangan, and David W. McLaughlin. Architectural and synaptic mechanisms underlying coherent spontaneous activity in V1. Proceedings of the National Academy of Sciences of the United States of America, 102(16):5868–5873, 2005.
[7] Aaditya V. Rangan and David Cai. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 22(1):81–100, 2007.
[8] Christoph Borgers and Alexander R. Nectow. Exponential time differencing for Hodgkin–Huxley-like ODEs.
SIAM Journal on Scientific Computing, 35(3):B623–B643, 2013.
[9] Peter G. Petropoulos. Analysis of exponential time-differencing for FDTD in lossy dielectrics. IEEE Transactions on Antennas and Propagation, 45(6):1054–1057, 1997.
[10] Steven M. Cox and Paul C. Matthews. Exponential time differencing for stiff systems. Journal of Computational Physics, 176(2):430–455, 2002.
[11] Lili Ju, Xiao Li, Zhonghua Qiao, and Hui Zhang. Energy stability and error estimates of exponential time differencing schemes for the epitaxial growth model without slope selection. Mathematics of Computation, 87(312):1859–1885, 2018.
[12] David Hansel, Germán Mato, Claude Meunier, and L. Neltner. On numerical simulations of integrate-and-fire neural networks. Neural Computation, 10(2):467–483, 1998.
[13] Michael J. Shelley and Louis Tao. Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. Journal of Computational Neuroscience, 11(2):111–119, 2001.
[14] Yi Sun, Douglas Zhou, Aaditya V. Rangan, and David Cai. Library-based numerical reduction of the Hodgkin–Huxley neuron for network simulation. Journal of Computational Neuroscience, 27(3):369–390, 2009.
[15] Sen Song, Per Jesper Sjöström, Markus Reigl, Sacha Nelson, and Dmitri B. Chklovskii. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology, 3(3):e68, 2005.
[16] Yuji Ikegaya, Takuya Sasaki, Daisuke Ishikawa, Naoko Honma, Kentaro Tao, Naoya Takahashi, Genki Minamisawa, Sakiko Ujita, and Norio Matsuki. Interpyramid spike transmission stabilizes the sparseness of recurrent network activity. Cerebral Cortex, 23(2):293–304, 2012.
[17] Wulfram Gerstner and Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[18] Christof Koch and Idan Segev. Methods in Neuronal Modeling: From Ions to Networks. MIT Press, 1998.
[19] Valery Iustinovich Oseledec. A multiplicative ergodic theorem.
Lyapunov characteristic numbers for dynamical systems. Trans. Moscow Math. Soc., 19(2):197–231, 1968.
[20] Edward Ott. Chaos in Dynamical Systems. Cambridge University Press, 2002.
[21] John Michael Tutill Thompson and H. Bruce Stewart. Nonlinear Dynamics and Chaos. John Wiley & Sons, 2002.
[22] Thomas S. Parker and Leon Chua. Practical Numerical Algorithms for Chaotic Systems. Springer Science & Business Media, 2012.
[23] Douglas Zhou, Yi Sun, Aaditya V. Rangan, and David Cai. Spectrum of Lyapunov exponents of non-smooth dynamical systems of integrate-and-fire type. Journal of Computational Neuroscience, 28(2):229–245, 2010.
[24] Alan Wolf, Jack B. Swift, Harry L. Swinney, and John A. Vastano. Determining Lyapunov exponents from a time series. Physica D: Nonlinear Phenomena, 16(3):285–317, 1985.
[25] Douglas Zhou, Aaditya V. Rangan, Yi Sun, and David Cai. Network-induced chaos in integrate-and-fire neuronal ensembles. Physical Review E, 80(3):031918, 2009.
[26] Peter A. W. Lewis. Stochastic point processes: statistical analysis, theory, and applications. 1972.
[27] Erhan Cinlar and R. A. Agnew. On the superposition of point processes. Journal of the Royal Statistical Society, Series B (Methodological), pages 576–581, 1968.
[28] Peter Dayan and Laurence F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.