On a variable step size modification of Hines' method in computational neuroscience
Michael Hanke ∗ October 18, 2018
KTH Royal Institute of Technology, Department of Mathematics, 100 44 Stockholm, Sweden
Abstract
For simulating large networks of neurons, Hines proposed a method which extensively uses the structure of the arising systems of ordinary differential equations in order to obtain an efficient implementation. The original method requires constant step sizes and produces the solution on a staggered grid. In the present paper, a one-step modification of this method is introduced and analyzed with respect to its stability properties. The new method allows for step size control. Local error estimators are constructed. The method has been implemented in Matlab and tested using simple Hodgkin-Huxley type models. Comparisons with standard state-of-the-art solvers are provided.
Keywords: partitioned midpoint rule, stability of splitting methods, Hodgkin-Huxley models, networks of neurons
Classification : AMS MSC (2010) 65L20, 65L05, 65L06, 92C42
When simulating large networks of neurons, a considerable part of the model consists of the electrical subsystem, which in turn extensively uses the classical Hodgkin-Huxley model of nerve activity. Owing to the large size of the networks to be modeled, the efficient numerical solution of the arising high-dimensional system of ordinary differential equations is of extraordinary importance. Complex program systems, e.g., NEURON [4], GENESIS [2], and many others are in routine use in order to solve them. In order to construct efficient numerical methods it is necessary to tailor the methods to the special properties of the system. The building block is often (a variant of) the Hodgkin-Huxley model

C dV/dt = I(t) − g_K n⁴ (V − V_K) − g_Na m³h (V − V_Na) − g_L (V − V_L),   (1)
dn/dt = α_n(V)(1 − n) − β_n(V) n,   (2)
dm/dt = α_m(V)(1 − m) − β_m(V) m,   (3)
dh/dt = α_h(V)(1 − h) − β_h(V) h.   (4)

The coefficients α_i(V) and β_i(V) are highly nonlinear functions of their argument. The key observation in this system is that the differential equation for the voltage V is linear in V, while the differential equations for the gate variables n, m, h are linear in those. Slightly more generally, this system has the structure

x′ = A(y) x + b(y, t),   (5)
y′ = c(x, t) + D(x) y.   (6)

Here, x denotes the voltage while y is the vector of gate variables, or vice versa. In general, this model leads to stiff differential equations such that implicit time stepping methods are necessary. Standard approaches require the solution of a nonlinear system of equations in every step by using variants of Newton's method. Given the large dimension of the usual models, this property may become a severe restriction. Hines [8] came up with the idea to discretize the system in two steps: First, the differential equation for x is discretized, leading to a linear system to be solved. Then, the differential equation for y is discretized.

∗ email: [email protected]
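To make the partitioned structure (5)-(6) concrete, the following sketch assembles A(y), b(y, t), c(x, t) and D(x) for the Hodgkin-Huxley system (1)-(4). The parameter values and rate functions are the classical ones of [9] in its original sign convention; the constant stimulus `I_app` and all identifiers are assumptions of this sketch, not taken from the paper's code.

```python
import math

# Classical Hodgkin-Huxley parameters in the original 1952 sign
# convention (assumed here; the paper takes its values from [9]).
C_m, g_K, g_Na, g_L = 1.0, 36.0, 120.0, 0.3
V_K, V_Na, V_L = 12.0, -115.0, -10.613
I_app = -10.0  # constant stimulus, an assumed value

def psi(x):
    # psi(x) = x / (e^x - 1), continuously extended at x = 0
    return x / math.expm1(x) if x != 0.0 else 1.0

def rates(V):
    # (alpha, beta) pairs for the gates n, m, h
    return ((0.1 * psi(0.1 * (V + 10.0)), 0.125 * math.exp(V / 80.0)),
            (psi(0.1 * (V + 25.0)),       4.0 * math.exp(V / 18.0)),
            (0.07 * math.exp(0.05 * V),   1.0 / (1.0 + math.exp(0.1 * (V + 30.0)))))

# Partitioned form (5)-(6): x = V is linear in V, y = (n, m, h) is
# linear in the gates.
def A_of_y(n, m, h):          # 1x1 "matrix" A(y)
    return -(g_K * n**4 + g_Na * m**3 * h + g_L) / C_m

def b_of_y(n, m, h, t):       # inhomogeneity b(y, t)
    return (I_app + g_K * n**4 * V_K + g_Na * m**3 * h * V_Na + g_L * V_L) / C_m

def c_of_x(V, t):             # c(x, t): the alpha terms
    return [ab[0] for ab in rates(V)]

def D_of_x(V):                # diagonal of D(x): -(alpha + beta)
    return [-(ab[0] + ab[1]) for ab in rates(V)]
```

By construction, A(y)V + b(y, t) reproduces the right-hand side of (1), and c(x, t) + D(x)y reproduces (2)-(4); both A and D are diagonal, which is what makes the two linear solves in Hines' approach cheap.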
Also here, only a linear system remains to be solved, in contrast to a fully nonlinear system in the standard approach. Note that the lower dimensional systems often have a very special structure such that they can be solved very efficiently. In particular, for the Hodgkin-Huxley system (1)–(4), both systems have a diagonal system matrix.

Hines chose the implicit midpoint rule as the basic discretization. The discrete approximations of x and y are defined on a staggered grid. In order to fix notation, let t ∈ [0, T] for some T > 0. For a step size h > 0 and n = 0, 1, 2, . . . let t_n = nh and t_{n+1/2} = (n + 1/2)h. Hines' method reads

x_{n+1} = x_n + h ( A(y_{n+1/2}) (x_{n+1} + x_n)/2 + b(y_{n+1/2}, t_{n+1/2}) ),   (7)
y_{n+3/2} = y_{n+1/2} + h ( c(x_{n+1}, t_{n+1}) + D(x_{n+1}) (y_{n+3/2} + y_{n+1/2})/2 ).   (8)

This method has the following properties:

• The approximations are available on a staggered grid, only: x_n ≈ x(t_n) and y_{n+1/2} ≈ y(t_{n+1/2}).
• Since initial values are available for t = 0 only, y_{1/2} must be computed by other means.
• The method is second order accurate. However, this property is only preserved if the step size is constant.

In particular the last property calls for a modification of this method such that a step size control becomes possible. In this paper, we consider the following modification of Hines' method:

x_{n+1/2} = x_n + (h/2) ( A(y_n) x_n + b(y_n, t_n) ),   (9)
y_{n+1} = y_n + h ( c(x_{n+1/2}, t_{n+1/2}) + D(x_{n+1/2}) (y_{n+1} + y_n)/2 ),   (10)
x_{n+1} = x_{n+1/2} + (h/2) ( A(y_{n+1}) x_{n+1} + b(y_{n+1}, t_{n+1}) ).   (11)

Since the first of these three equations is an explicit Euler step, the computational work of the latter method is only slightly more expensive than that of the original proposal by Hines. However, the modified version is a genuine one-step method, allowing for a step size change without sacrificing the order. In this note we will investigate the numerical properties of this method. In particular, we are interested in the asymptotic stability of this method.
Moreover, we will propose an efficient implementation. In the final section a few numerical examples will be provided.

In order to simplify the notation slightly, we consider the system

x′ = f(x, y, t),   (12)
y′ = g(x, y, t).   (13)

The method (9)–(11) reduces to

x_{n+1/2} = x_n + (h/2) f(x_n, y_n, t_n),   (14)
y_{n+1} = y_n + h g(x_{n+1/2}, (y_{n+1} + y_n)/2, t_{n+1/2}),   (15)
x_{n+1} = x_{n+1/2} + (h/2) f(x_{n+1}, y_{n+1}, t_{n+1}).   (16)

By eliminating the intermediate approximation x_{n+1/2}, this system reduces to

x_{n+1} = x_n + (h/2) ( f(x_{n+1}, y_{n+1}, t_{n+1}) + f(x_n, y_n, t_n) ),   (17)
y_{n+1} = y_n + h g(x_n + (h/2) f(x_n, y_n, t_n), (y_{n+1} + y_n)/2, t_{n+1/2}).   (18)

Let (x∗, y∗) denote the solution of (12)–(13) on [0, T] subject to the initial condition x(0) = x_0 and y(0) = y_0. Uniqueness is guaranteed if f and g are Lipschitz continuous with respect to x and y and continuous with respect to t in a neighborhood U of the trajectory Γ = {(x∗(t), y∗(t), t) | t ∈ [0, T]}.¹ For a grid sequence {(x_n, y_n)}_{n=0}^{N(h)}, define the discretization error by

N_x(x_n, y_n) = (x_{n+1} − x_n)/h − (1/2) ( f(x_n, y_n, t_n) + f(x_{n+1}, y_{n+1}, t_{n+1}) ),
N_y(x_n, y_n) = (y_{n+1} − y_n)/h − g(x_n + (h/2) f(x_n, y_n, t_n), (y_{n+1} + y_n)/2, t_{n+1/2}).

The method is called stable if there exist an h_0 and a K such that, for h < h_0 and for any two sequences {(x_n^j, y_n^j)}_{n=0}^{N(h)}, j = 1, 2, in U, it holds

max_{n=0,...,N(h)} ( |x_n^1 − x_n^2| + |y_n^1 − y_n^2| )
  ≤ K { |x_0^1 − x_0^2| + |y_0^1 − y_0^2|
      + max_{n=0,...,N(h)} ( |N_x(x_n^1, y_n^1) − N_x(x_n^2, y_n^2)| + |N_y(x_n^1, y_n^1) − N_y(x_n^2, y_n^2)| ) }.

This definition follows [1, Section 5.2.3].

¹ Gustaf Söderlind (2013), personal communication.
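For a linear test problem, every implicit stage of (14)-(16) reduces to a scalar linear solve, and the second order accuracy can be observed experimentally. The coefficients below are illustrative choices satisfying (the later condition) ab < μλ; the error ratio between step sizes h and h/2 approaches 2² = 4.

```python
# One step of scheme (14)-(16) for the linear test problem
# x' = mu*x + a*y, y' = b*x + lam*y (illustrative coefficients).
mu, lam, a, b = -1.0, -2.0, 0.5, 0.3

def step(x, y, h):
    x_half = x + 0.5 * h * (mu * x + a * y)                                 # (14)
    y_new = (y * (1.0 + 0.5 * h * lam) + h * b * x_half) / (1.0 - 0.5 * h * lam)  # (15)
    x_new = (x_half + 0.5 * h * a * y_new) / (1.0 - 0.5 * h * mu)           # (16)
    return x_new, y_new

def solve(h, T=1.0):
    x, y = 1.0, 1.0
    for _ in range(round(T / h)):
        x, y = step(x, y, h)
    return x, y

# order check against a fine-step reference
xr, yr = solve(1.0 / 4096)
def err(h):
    x, y = solve(h)
    return abs(x - xr) + abs(y - yr)
ratio = err(0.1) / err(0.05)   # approaches 4 for a second order method
```

The divisions implement the scalar linear solves of the implicit stages (15) and (16); in the structured setting (9)-(11) with diagonal A and D these remain componentwise divisions.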
Proposition 1.
Let (x∗, y∗) be the solution of (12)–(13) on [0, T] subject to the initial condition x(0) = x_0 and y(0) = y_0. Let f and g be Lipschitz continuous with respect to x and y and continuous with respect to t in a neighborhood U of the trajectory Γ = {(x∗(t), y∗(t), t) | t ∈ [0, T]}. Then, the method (14)–(16) is stable.

Proof. The method is a combination of two implicit Runge-Kutta methods. So the proof is standard.
Proposition 2.
Let the assumptions of Proposition 1 be fulfilled and let f and g be sufficiently often differentiable. Then, (14)–(16) is a second order method and the global error has an asymptotic expansion in even powers of h.

Proof. As a Runge-Kutta method, the local error possesses an expansion in powers of h [5, Theorem 3.2]. Taylor expansion shows that, for the truncation error, it holds

N_x(x∗(t_n), y∗(t_n)) = O(h²),   N_y(x∗(t_n), y∗(t_n)) = O(h²).

Since the method is stable, it is second order convergent. Rewriting (17), it holds

x_n + (h/2) f(x_n, y_n, t_n) = x_{n+1} − (h/2) f(x_{n+1}, y_{n+1}, t_{n+1})

such that (18) can be rewritten as

y_n = y_{n+1} − h g(x_{n+1} − (h/2) f(x_{n+1}, y_{n+1}, t_{n+1}), (y_{n+1} + y_n)/2, t_{n+1/2}).

Hence, the method is symmetric and the assertion about the asymptotic expansion follows from [5, Theorem 8.10].

A more interesting question concerns the asymptotic properties of this method. Recall that the system to be solved is usually stiff. This is also the case if the two components are considered individually, that is, (12) for fixed y and (13) for fixed x. So the standard notions of asymptotic stability that are based on the scalar test equation z′ = λz are not useful here. A more appropriate test equation must have at least two components. This leads to the proposal

x′ = μx + ay,   (19)
y′ = bx + λy,   (20)

where

μ, λ < 0,   ab < μλ.   (21)

Under these conditions, the system is asymptotically stable, and so are the individual components. This test system has been proposed by Strehmel & Weiner [18] in order to characterize stability properties of partitioned Runge-Kutta methods. The method (14)–(16) applied to (19)–(20) gives rise to

A (x_{n+1}, y_{n+1})ᵀ = B (x_n, y_n)ᵀ   (22)

where

A = [ 1 − hμ/2    −ha/2    ]
    [ 0           1 − hλ/2 ],

B = [ 1 + hμ/2         ha/2                ]
    [ hb(1 + hμ/2)     1 + hλ/2 + abh²/2   ].
This recursion can be rewritten in the form

(x_{n+1}, y_{n+1})ᵀ = C (x_n, y_n)ᵀ

with C = A⁻¹B, explicitly

C = [ α(1 + γ(hμ/2)(β − 1))        ha(1 + abh²/4)((1 − hμ/2)(1 − hλ/2))⁻¹ ]
    [ hb(1 + hμ/2)(1 − hλ/2)⁻¹     β + γ(hμ/2)(β − 1)                     ]

where

α = (1 + hμ/2)/(1 − hμ/2),   β = (1 + hλ/2)/(1 − hλ/2)

are the stability functions of the midpoint and trapezoidal rule, respectively, applied to the equations (19)–(20) individually. Note that, under the conditions (21), it holds |α| < 1 and |β| < 1. A short computation shows that the characteristic polynomial of C has the form

χ(s) = s² − (α + β + γ(α − 1)(β − 1)) s + αβ

where γ = ab/(μλ).

Lemma 3.
Consider the polynomial χ(s) = s² − (α + β + γ(α − 1)(β − 1)) s + αβ with |α| < 1 and |β| < 1. For the roots s_1 and s_2 of χ(s) = 0 it holds max(|s_1|, |s_2|) < 1 if and only if

−(1 + α)(1 + β)/((1 − α)(1 − β)) < γ < 1.

Remark.
Under the assumption (21) it holds γ < 1. Note that γ may be negative.

Proof. We use the change of variables w(z) = (1 + z)/(1 − z). w maps the open left half plane C⁻ = {z ∈ C | Re z < 0} uniquely onto the open disk S = {z ∈ C | |z| < 1}. So it holds s_1, s_2 ∈ S if and only if z_1, z_2 ∈ C⁻ for the zeros of χ(w(z)). Denote for short χ(s) = s² + a_1 s + a_0. A short calculation provides

χ(w(z)) = ( z²(1 − a_1 + a_0) + 2z(1 − a_0) + (1 + a_1 + a_0) ) (1 − z)⁻².

The zeros of this function are those of the numerator polynomial η(z) = c_2 z² + c_1 z + c_0 where

c_2 = 1 + (α + β + γ(α − 1)(β − 1)) + αβ,
c_1 = 2(1 − αβ),
c_0 = 1 − (α + β + γ(α − 1)(β − 1)) + αβ.

According to the Routh-Hurwitz criterion [5, Theorem I.13.4], the roots of η all lie in C⁻ if and only if all coefficients c_i have the same sign. Since |αβ| < 1, it holds c_1 > 0. Moreover,

c_0 > 0  ⟺  (1 + αβ − α − β)/((α − 1)(β − 1)) = 1 > γ,

while

c_2 > 0  ⟺  −(α + 1)(β + 1)/((α − 1)(β − 1)) < γ.

This completes the proof.
Theorem 4.
Under the assumption μ, λ < 0, the recursion (22) is asymptotically stable if and only if

−(1 + α)(1 + β)/((1 − α)(1 − β)) < γ < 1.

The assertion is a consequence of Lemma 3.
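Theorem 4 can be checked numerically. The sketch below computes the spectral radius of the recursion (22) from the characteristic polynomial and compares it with the stability criterion; the parameter sets in the test are illustrative.

```python
import cmath

def _alpha_beta_gamma(mu, lam, a, b, h):
    alpha = (1 + 0.5 * h * mu) / (1 - 0.5 * h * mu)
    beta = (1 + 0.5 * h * lam) / (1 - 0.5 * h * lam)
    return alpha, beta, a * b / (mu * lam)

def spectral_radius(mu, lam, a, b, h):
    # roots of chi(s) = s^2 - (alpha+beta+gamma(alpha-1)(beta-1)) s + alpha*beta
    alpha, beta, gamma = _alpha_beta_gamma(mu, lam, a, b, h)
    tr = alpha + beta + gamma * (alpha - 1) * (beta - 1)
    det = alpha * beta
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))

def criterion(mu, lam, a, b, h):
    # the condition of Theorem 4
    alpha, beta, gamma = _alpha_beta_gamma(mu, lam, a, b, h)
    lower = -(1 + alpha) * (1 + beta) / ((1 - alpha) * (1 - beta))
    return lower < gamma < 1.0
```

For γ ∈ (0, 1) the recursion stays stable for all h, while for negative γ the criterion is eventually violated as h grows, in line with Corollary 5 below.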
Corollary 5.
For every γ < 0, there exists an h_0(μ, λ, γ) > 0 such that the recursion (22) is unstable for all h > h_0(μ, λ, γ).

Proof. One can easily show that α = α(h) and β = β(h) are monotonically decreasing functions of h. Moreover, φ(h) = (1 + α(h))/(1 − α(h)) is a monotonically decreasing function of h, and it holds lim_{h→0} φ(h) = ∞ and lim_{h→∞} φ(h) = 0. The same applies with β in place of α. Hence the lower bound −(1 + α)(1 + β)/((1 − α)(1 − β)) in Theorem 4 tends to 0 as h → ∞, so that every γ < 0 violates it for all sufficiently large h.

Remark.
It is interesting to see how the corresponding results for the original Hines' method look like. The recursion becomes

(x_{n+1}, y_{n+3/2})ᵀ = C_Hines (x_n, y_{n+1/2})ᵀ

with

C_Hines = [ α                       ha(1 − hμ/2)⁻¹                     ]
          [ hbα(1 − hλ/2)⁻¹         β + abh²(1 − hμ/2)⁻¹(1 − hλ/2)⁻¹   ]

        = [ α                       ha(1 − hμ/2)⁻¹      ]
          [ hbα(1 − hλ/2)⁻¹         β + γ(1 − α)(1 − β) ].

It holds

trace(C_Hines) = α + β + γ(1 − α)(1 − β),
det(C_Hines) = αβ + αγ(1 − α)(1 − β) − abh²α(1 − hμ/2)⁻¹(1 − hλ/2)⁻¹ = αβ.

So the stability polynomials of Hines' method and of its modification (9)–(11) are identical.
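The coincidence of the two stability polynomials can be confirmed numerically: the sketch below builds C = A⁻¹B for the modified method and C_Hines explicitly, and compares trace and determinant (parameter values illustrative).

```python
def char_poly_modified(mu, lam, a, b, h):
    # trace and determinant of C = A^{-1} B for the recursion (22)
    d = (1 - 0.5*h*mu) * (1 - 0.5*h*lam)
    A = [[1 - 0.5*h*mu, -0.5*h*a], [0.0, 1 - 0.5*h*lam]]
    B = [[1 + 0.5*h*mu, 0.5*h*a],
         [h*b*(1 + 0.5*h*mu), 1 + 0.5*h*lam + 0.5*a*b*h*h]]
    Ainv = [[A[1][1]/d, -A[0][1]/d], [0.0, A[0][0]/d]]
    C = [[sum(Ainv[i][k]*B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return C[0][0] + C[1][1], C[0][0]*C[1][1] - C[0][1]*C[1][0]

def char_poly_hines(mu, lam, a, b, h):
    # trace and determinant of C_Hines
    alpha = (1 + 0.5*h*mu) / (1 - 0.5*h*mu)
    beta = (1 + 0.5*h*lam) / (1 - 0.5*h*lam)
    C = [[alpha, h*a/(1 - 0.5*h*mu)],
         [h*b*alpha/(1 - 0.5*h*lam),
          beta + a*b*h*h/((1 - 0.5*h*mu)*(1 - 0.5*h*lam))]]
    return C[0][0] + C[1][1], C[0][0]*C[1][1] - C[0][1]*C[1][0]
```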
In this section, we will consider an approach to solving (12)–(13) using a splitting method. For that, let

U = (x, y)ᵀ,   F(U, t) = ( f(x, y, t), g(x, y, t) )ᵀ.

Then (12)–(13) is equivalent to U′ = F(U, t). Introduce the splitting

F(U, t) = F_1(U, t) + F_2(U, t) = ( f(x, y, t), 0 )ᵀ + ( 0, g(x, y, t) )ᵀ.   (23)

A classical example of operator splitting is Strang's approach [17]. In order to advance the solution one step of step size h from t_n to t_{n+1}, the system U′ = F_1(U, t) is first integrated over the interval [t_n, t_n + h/2], then the result is used to integrate the system U′ = F_2(U, t) on [t_n, t_{n+1}], and finally the first system is integrated on [t_{n+1} − h/2, t_{n+1}]. For the decomposition (23), this gives rise to the following three steps:

1. Integrate x′ = f(x, y_n, t), x(t_n) = x_n on [t_n, t_n + h/2]. Denote x_{n+1/2} = x(t_n + h/2).
2. Integrate y′ = g(x_{n+1/2}, y, t), y(t_n) = y_n on [t_n, t_{n+1}]. Denote y_{n+1} = y(t_{n+1}).
3. Integrate x′ = f(x, y_{n+1}, t), x(t_{n+1} − h/2) = x_{n+1/2} on [t_{n+1} − h/2, t_{n+1}]. Let x_{n+1} = x(t_{n+1}).

Even if f, g are linear functions, the operators F_1 and F_2 do not commute. So we expect the splitting to be second order accurate at least in the linear case (e.g., [10, Chapter 7]). A comparison with the finite difference method (14)–(16) reveals that the latter can be interpreted as a second order discretization of Strang's splitting.

Let us ask the question of asymptotic stability of Strang's splitting applied to the test system (19)–(20). Similarly as before, the recursion can be written in the form

(x_{n+1}, y_{n+1})ᵀ = C_Strang (x_n, y_n)ᵀ   (24)

with

C_Strang = [ α + α^{1/2} T               (a/μ)(α^{1/2} − 1)(α^{1/2} + T + β) ]
           [ (b/λ) α^{1/2} (β − 1)       T + β                               ]

where T = γ(α^{1/2} − 1)(β − 1). Here, α = e^{μh} and β = e^{λh}.
The characteristic polynomial becomes

χ(s) = s² − (α + β + γ(α − 1)(β − 1)) s + αβ.

This is structurally identical to the one for the discrete case.
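As a cross-check of C_Strang and its characteristic polynomial, one can realize the three exact sub-flows and read off the recursion matrix columnwise (parameter values in the test are illustrative):

```python
import math

# Columnwise construction of the Strang recursion matrix (24) from
# the three exact sub-flows of the splitting (23).
def strang_matrix(mu, lam, a, b, h):
    def flow_x(x, y_fixed, s):      # exact flow of x' = mu*x + a*y_fixed
        e = math.exp(mu * s)
        return e * x + (a * y_fixed / mu) * (e - 1.0)
    def flow_y(y, x_fixed, s):      # exact flow of y' = b*x_fixed + lam*y
        e = math.exp(lam * s)
        return e * y + (b * x_fixed / lam) * (e - 1.0)
    cols = []
    for x0, y0 in ((1.0, 0.0), (0.0, 1.0)):   # images of the unit vectors
        xh = flow_x(x0, y0, 0.5 * h)          # step 1: half step in x
        y1 = flow_y(y0, xh, h)                # step 2: full step in y
        x1 = flow_x(xh, y1, 0.5 * h)          # step 3: half step in x
        cols.append((x1, y1))
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]
```

Trace and determinant of the returned matrix match α + β + γ(α − 1)(β − 1) and αβ with α = e^{μh}, β = e^{λh}, confirming the stated characteristic polynomial.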
Theorem 6.
Let μ < 0 and λ < 0. The recursion (24) is asymptotically stable if and only if

−(1 + e^{hμ})(1 + e^{hλ}) / ((1 − e^{hμ})(1 − e^{hλ})) < γ < 1.

Proof.
The result is a consequence of Lemma 3 by setting α = e^{hμ} and β = e^{hλ}.

Note that always γ < 1 since ab < μλ. Moreover, for μ < 0 and λ < 0, the function

ψ(h) = (1 + e^{hμ})(1 + e^{hλ}) / ((1 − e^{hμ})(1 − e^{hλ})),   h > 0,

is monotonically decreasing with lim_{h→0} ψ(h) = ∞ and lim_{h→∞} ψ(h) = 1. We have immediately:
Corollary 7. If γ < −1 and μ < 0, λ < 0, then there always exists an h_0(γ, μ, λ) > 0 such that the recursion (24) is unstable for h > h_0(γ, μ, λ).

Compared to the discrete case, the stability domain is slightly larger for Strang's splitting. However, even here the stability domain is bounded.
The Peaceman-Rachford method was introduced in [13] in order to solve semidiscretized linear parabolic partial differential equations. If the system (12)–(13) is split according to (23), the solution U is advanced from t_n to t_{n+1} by

U_{n+1/2} = U_n + (h/2) ( F_2(U_{n+1/2}, t_{n+1/2}) + F_1(U_n, t_n) ),
U_{n+1} = U_{n+1/2} + (h/2) ( F_2(U_{n+1/2}, t_{n+1/2}) + F_1(U_{n+1}, t_{n+1}) ).

Writing out the components, this recursion becomes

(i) x_{n+1/2} = x_n + (h/2) f(x_n, y_n, t_n),
(ii) y_{n+1/2} = y_n + (h/2) g(x_{n+1/2}, y_{n+1/2}, t_{n+1/2}),
(iii) y_{n+1} = y_{n+1/2} + (h/2) g(x_{n+1/2}, y_{n+1/2}, t_{n+1/2}),
(iv) x_{n+1} = x_{n+1/2} + (h/2) f(x_{n+1}, y_{n+1}, t_{n+1}).

Inserting (i) into (iv) we obtain

x_{n+1} = x_n + (h/2) ( f(x_n, y_n, t_n) + f(x_{n+1}, y_{n+1}, t_{n+1}) ).

Similarly, from (ii) and (iii) we have

y_{n+1} = y_n + h g(x_n + (h/2) f(x_n, y_n, t_n), y_{n+1/2}, t_{n+1/2}).

By subtracting (ii) from (iii) we arrive at y_{n+1/2} = (y_n + y_{n+1})/2. Hence, the Peaceman-Rachford method applied to our system becomes

x_{n+1} = x_n + (h/2) ( f(x_n, y_n, t_n) + f(x_{n+1}, y_{n+1}, t_{n+1}) ),
y_{n+1} = y_n + h g(x_n + (h/2) f(x_n, y_n, t_n), (y_n + y_{n+1})/2, t_{n+1/2}).

So this method is equivalent to (17)–(18).

The Peaceman-Rachford method has been used extensively to solve partial differential equations. There, one is mainly interested in showing stability properties independent of the spatial discretization. These stability estimates are often based on monotonicity assumptions (one-sided Lipschitz conditions) on the right-hand sides F_1, F_2.
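Before turning to the monotonicity condition, the componentwise equivalence just derived can be confirmed in a few lines: one Peaceman-Rachford step and one step of (14)-(16) agree to rounding error on the linear test system (coefficients illustrative).

```python
# Equivalence of Peaceman-Rachford and the modified scheme on
# x' = mu*x + a*y, y' = b*x + lam*y (illustrative coefficients).
mu, lam, a, b, h = -1.0, -2.0, 0.5, 0.3, 0.1

def pr_step(x, y):
    x_half = x + 0.5*h*(mu*x + a*y)                          # (i)
    # (ii): implicit in y_half, a scalar linear solve
    y_half = (y + 0.5*h*b*x_half) / (1.0 - 0.5*h*lam)
    y_new = y_half + 0.5*h*(b*x_half + lam*y_half)           # (iii)
    x_new = (x_half + 0.5*h*a*y_new) / (1.0 - 0.5*h*mu)      # (iv), solved
    return x_new, y_new

def modified_step(x, y):
    x_half = x + 0.5*h*(mu*x + a*y)                                      # (14)
    y_new = (y*(1.0 + 0.5*h*lam) + h*b*x_half) / (1.0 - 0.5*h*lam)       # (15)
    x_new = (x_half + 0.5*h*a*y_new) / (1.0 - 0.5*h*mu)                  # (16)
    return x_new, y_new
```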
In particular, let the condition

⟨F_i(w̃, t) − F_i(w, t), w̃ − w⟩ ≤ ν ‖w̃ − w‖²,   i = 1, 2,

hold for all w̃, w and t with a certain constant ν ∈ R. Here, ⟨·,·⟩ denotes the Euclidean inner product and ‖·‖ the Euclidean norm. Hundsdorfer & Verwer [11] show that the method is unconditionally stable (that is, for all step sizes h) if ν ≤ 0. However, if ν > 0, stability can only be guaranteed under a restriction on hν. What does this mean for the test system

x′ = μx + ay,   y′ = bx + λy,   μ, λ < 0,   ab < μλ?

In the linear autonomous case, the monotonicity requirement reduces to wᵀF_i(w) ≤ ν‖w‖² for all w. We have

wᵀF_2(w) ≤ ν‖w‖² for all w
  ⟺ y(bx + λy) ≤ ν(x² + y²) for all x, y
  ⟺ 0 ≤ νx² − bxy + (ν − λ)y² for all x, y.

For the latter inequality to hold for all x, y, we must have ν ≥ 0. Assuming ν > 0 and completing the square,

wᵀF_2(w) ≤ ν‖w‖² for all w
  ⟺ 0 ≤ ( √ν x − (b/(2√ν)) y )² + ( ν − λ − b²/(4ν) ) y² for all x, y
  ⟺ 0 ≤ ν − λ − b²/(4ν)
  ⟺ (1/2)( λ + √(λ² + b²) ) ≤ ν.

Similarly,

wᵀF_1(w) ≤ ν‖w‖² for all w  ⟺  (1/2)( μ + √(μ² + a²) ) ≤ ν.

Hence, we always have ν > 0 unless a = b = 0.

For an efficient implementation, estimates of the local error and step size control are of utmost importance. In this section we will discuss these issues. The method (14)–(16) does not have an embedded error estimator. So we have two possibilities for estimating the local error:

1. Since the discrete solution possesses an asymptotic expansion in powers of h, the error can be estimated via Richardson extrapolation. This can be combined with local extrapolation.
2. Use the detailed representation of the leading error term.

The first idea seems to be rather straightforward. However, one has to keep in mind that the individual equations in (5)–(6) are often stiff, and the discretization reduces to the trapezoidal rule and the implicit midpoint rule, respectively, for decoupled systems. For such systems and these discretizations, we expect a simple step size halving to behave very badly when using local extrapolation since the resulting method has a bounded stability domain [6, p. 133]. In fact, when applying our method and step size halving to Hodgkin-Huxley systems, we observed a behavior of the method which is typical for instabilities due to too large step sizes at low tolerances.
Therefore, we implemented two versions:

• a version with step size subdivision but without extrapolation (modhines);
• a version with step size subdivision and local extrapolation (modhext).

It should be mentioned that a theoretical analysis of the extrapolation method is missing so far. It is known that the domains of absolute stability for the extrapolated trapezoidal rule become smaller and smaller with the number of extrapolation steps. However, the present method should be investigated using the test system (19)–(20). In practice, there is no problem to extrapolate further in the extrapolation tableau. However, in the applications we are aiming at, we expect mainly low accuracies to be required such that high order methods will not provide much benefit.

The discrete solution x_{n+1} is computed using the trapezoidal rule (17). Therefore, the local discretization error has the representation

τ_{x,n} = N_x(x(t_n), y(t_n)) = −(1/12) x‴(t_n) h² + O(h³).

A similar computation leads to

τ_{y,n} = N_y(x(t_n), y(t_n))
        = (1/24) ( y‴(t_n) + 3 ( (∂g/∂x)(x(t_n), y(t_n), t_n) x″(t_n) − (∂g/∂y)(x(t_n), y(t_n), t_n) y″(t_n) ) ) h² + O(h³).

In order to approximate the derivatives x‴, y‴, and y″, the discrete solution is interpolated by a 3rd order Hermite polynomial using the values (x_n, y_n), (x_{n+1}, y_{n+1}) and the function values (f(x_n, y_n, t_n), g(x_n, y_n, t_n)), (f(x_{n+1}, y_{n+1}, t_{n+1}), g(x_{n+1}, y_{n+1}, t_{n+1})). In case of an accepted step, f(x_n, y_n, t_n) and f(x_{n+1}, y_{n+1}, t_{n+1}) are available for free. It should be noted that in the case of a constant coefficient system y′ = Ay it holds y‴ = Ay″ = (∂g/∂y) y″ such that

τ_{y,n} = −(1/12) y‴(t_n) h² + O(h³).
In our implementation (modhnew) we use the "simplified" approximation τ_{y,n} ≈ −(1/12) y‴(t_n) h² for y even in the general case. We also tested a number of step size selection strategies following proposals in [15, 16]. The PI.4.2 controller [15] was finally chosen.

We show the performance on two benchmark problems: a pure Hodgkin-Huxley system (Example 1) and a model of a nerve cell with a spine using compartment modeling (Example 2).
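The first estimation strategy can be sketched as follows: a step-doubling (Richardson) error estimate combined with an elementary controller. The paper's implementation uses the PI.4.2 controller of [15]; the simple rule h_new = 0.9 h (tol/err)^{1/3} below is a stand-in for illustration only, and all constants are assumptions of this sketch.

```python
# Step-doubling error estimate with an elementary step size
# controller, shown on the linear test problem (illustrative).
mu, lam, a, b = -1.0, -2.0, 0.5, 0.3

def step(x, y, h):   # scheme (14)-(16), scalar linear solves
    x_half = x + 0.5*h*(mu*x + a*y)
    y_new = (y*(1.0 + 0.5*h*lam) + h*b*x_half) / (1.0 - 0.5*h*lam)
    x_new = (x_half + 0.5*h*a*y_new) / (1.0 - 0.5*h*mu)
    return x_new, y_new

def adaptive(x, y, T, tol, h=0.1):
    t, n_acc = 0.0, 0
    while t < T - 1e-14:
        h = min(h, T - t)
        X1 = step(x, y, h)                       # one step of size h
        xa, ya = step(x, y, 0.5*h)               # two steps of size h/2
        X2 = step(xa, ya, 0.5*h)
        # Richardson estimate for an order 2 method: |X1 - X2| / (2^2 - 1)
        err = max(abs(X1[0] - X2[0]), abs(X1[1] - X2[1])) / 3.0
        if err <= tol:
            x, y = X2                            # accept the finer result
            t += h
            n_acc += 1
        # elementary controller; PI.4.2 of [15] would smooth this sequence
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16))**(1.0/3.0)))
    return x, y, n_acc
```

Accepting the two-half-step result X2 without adding the extrapolation term corresponds to the modhines variant; adding (X2 − X1)/3 would give local extrapolation as in modhext.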
Example 1
In the Hodgkin-Huxley system we choose the parameters from [9], in the original sign convention of that paper:

C = 1,  g_K = 36,  g_Na = 120,  g_L = 0.3,
V_K = 12,  V_Na = −115,  V_L = −10.613,

a constant applied current I(t), and the rate functions

α_n(V) = 0.1 ψ(0.1(V + 10)),   β_n(V) = 0.125 exp(V/80),
α_m(V) = ψ(0.1(V + 25)),   β_m(V) = 4 exp(V/18),
α_h(V) = 0.07 exp(0.05 V),   β_h(V) = (1 + exp(0.1(V + 30)))⁻¹,

where ψ(x) = x/(eˣ − 1). The system is solved on a bounded time interval with initial values V(0), m(0), n(0), h(0) close to the resting state. Figure 1 shows a plot of the solutions.
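The modified scheme (9)-(11) applied to this example can be sketched compactly: the voltage update requires only scalar linear solves, and each gate is advanced by a scalar implicit midpoint step. The parameter values (classical ones in the convention of [9]) and the stimulus are assumptions of this sketch.

```python
import math

# Modified Hines scheme (9)-(11) for the Hodgkin-Huxley system;
# x = V, y = (n, m, h). Parameter values assumed (classical [9]).
C_m, gK, gNa, gL = 1.0, 36.0, 120.0, 0.3
VK, VNa, VL, I_app = 12.0, -115.0, -10.613, -10.0

def psi(x):
    return x / math.expm1(x) if x != 0.0 else 1.0

def rates(V):
    return ((0.1*psi(0.1*(V + 10.0)), 0.125*math.exp(V/80.0)),
            (psi(0.1*(V + 25.0)),     4.0*math.exp(V/18.0)),
            (0.07*math.exp(0.05*V),   1.0/(1.0 + math.exp(0.1*(V + 30.0)))))

def hh_rhs_parts(n, m, h):
    G = gK*n**4 + gNa*m**3*h + gL            # -A(y)*C_m
    rhs = (I_app + gK*n**4*VK + gNa*m**3*h*VNa + gL*VL) / C_m   # b(y)
    return G, rhs

def step(V, gates, dt):
    # (9): explicit Euler half step for V with frozen gates
    G, rhs = hh_rhs_parts(*gates)
    V_half = V + 0.5*dt*(-G/C_m*V + rhs)
    # (10): implicit midpoint step for each gate, a scalar solve each
    new = []
    for (al, be), p in zip(rates(V_half), gates):
        s = al + be
        new.append(((1.0 - 0.5*dt*s)*p + dt*al) / (1.0 + 0.5*dt*s))
    # (11): implicit half step for V with the updated gates
    G, rhs = hh_rhs_parts(*new)
    V_new = (V_half + 0.5*dt*rhs) / (1.0 + 0.5*dt*G/C_m)
    return V_new, tuple(new)

# start at the resting state and advance 20 ms with dt = 0.01 ms
V, gates = 0.0, tuple(al/(al + be) for al, be in rates(0.0))
for _ in range(2000):
    V, gates = step(V, gates, 0.01)
```

Note that the gate update keeps each gate in [0, 1] for any step size with 1 − dt(α + β)/2 ≥ 0, a practical advantage of the implicit midpoint form.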
Example 2

We consider the neuron model consisting of three compartments, the soma, the dendrite and a spine, that has been proposed in [3]. Each compartment carries its own potential V_i, i = 1, 2, 3. The soma has three gate variables (n, m, h) while the spine has two of them (r, s). There are no channels attached to the dendrite. Additionally, the calcium ion dynamics is taken into account. The latter contains an additional degradation in a calcium pool. The equations become

C_1 dV_1/dt = I(t) − g_K n⁴ (V_1 − V_K) − g_Na m³h (V_1 − V_Na) + (V_2 − V_1)/R_{a,1} − (V_1 − V_L)/R_{m,1},
C_2 dV_2/dt = (V_1 − V_2)/R_{a,1} + (V_3 − V_2)/R_{a,2} − (V_2 − V_L)/R_{m,2},
C_3 dV_3/dt = −g_Ca s²r (V_3 − V_Ca) − g_KCa c_Ca (V_3 − V_K) + (V_2 − V_3)/R_{a,2} − (V_3 − V_L)/R_{m,3},
dc_Ca/dt = g_Ca s²r B (V_Ca − V_3) − c_Ca/τ,
dP/dt = α_P(V)(1 − P) − β_P(V) P,   P ∈ {n, m, h, r, s}.

The opening and closing rates α_P, β_P are exponentials and logistic functions of the respective membrane potential, of the same type as in Example 1. Their coefficients as well as the remaining parameters (capacitances C_i, membrane and axial resistances R_{m,i} and R_{a,i}, reversal potentials, conductances, the stimulus I(t), the time constant τ and the constant B) are given in SI units and taken from [3], as are the initial values. Figure 2 shows a plot of the solution.

All methods have been implemented in Matlab. The experiments have been carried out by running the codes with a decreasing, logarithmically equidistant sequence of tolerances TOL. In the applications, the coarser tolerances are of most interest. The tolerance requirements in the codes implementing the new method are modeled according to those used in the Matlab ode suite: For z = (x, y), the criterion

|err_i| ≤ TOL |z_i| + AbsTol_i

shall be satisfied for all components z_i of z. Here, err_i denotes the error estimate for z_i. AbsTol_i has been chosen as a typical size of |z_i| multiplied by TOL.

The diagrams show the obtained accuracy versus the computational effort. The accuracy is measured as the error of the numerical solution at the final time. Since the analytical solutions are not known, the systems have been solved with very tight tolerances using ode15s in order to obtain a reference solution. The computational effort has been measured in terms of function evaluations. One function evaluation corresponds to a computation of both f and g. A Jacobian computation is counted as expensive as one function evaluation. So one step of Hines' method corresponds to two function evaluations while one step of the modified method needs 2.5 function evaluations. This is consistent with the statistics provided by the codes of the Matlab ode suite.

The following codes have been compared:

hines, cmhines These are the implementations of the original method (7)–(8) and the modification (9)–(11), respectively, for constant step sizes. The main purpose of using these codes consists in showing second order convergence of both as well as comparing the relative accuracy.

modhines, modhext, modhnew
These are the new implementations with error estimation and step size control according to the descriptions above. Since these methods are no longer symmetric in x and y (in contrast to the original method), the experiments are run in two versions: one where the voltages are used as x-components, and one where the gate variables of the channels are used as x-components.

ode15s, ode23s, ode23t, ode23tb These are Matlab's ode solvers. Here, the system is solved as a whole. In these codes the complete Jacobian is used. As an experiment, we also modified ode15s in such a way that only the diagonal blocks of the Jacobian are used (ode15sm). We intended to understand how important the off-diagonal blocks are in the given examples. It should be noted, however, that this approach is questionable since it may break internal control strategies.

radau5
This is the Fortran code RADAU5 developed by Hairer & Wanner [6, Section IV.8]. We used the interface in [12] to call this code from Matlab.

drcvode
This is the cvodes code from the Sundials package [7], version 2.8.2, using the Matlab interface version 2.5.0 [14].¹

The following observations can be made:

• Figures 3 and 6 show clearly that the new methods as well as Hines' method have second order of accuracy. While Hines' method is symmetric in both the x- and y-components, the symmetry is broken in the new method. Therefore, it becomes important how the splitting is defined. For the Hodgkin-Huxley system, a much better accuracy is obtained if the gates are chosen as x-components. This difference is not seen in the soma-dendrite-spine example. Here, the difference in efficiency between Hines' method and the new method is mainly due to the fact that the new method is slightly more expensive per step.
• In both examples, the usage of higher order methods is preferred, in particular in the case of higher accuracies. Thus, it is only the version with local extrapolation which is competitive, at least for low tolerances.
• Among the second order methods, the additional flexibility offered by variable step size solvers compared to the constant step size Hines' method does not seem to pay off.
• It is surprising how irregular the effort-accuracy curve is for many of the well-established solvers. This holds in particular for the Hodgkin-Huxley system. This is emphasized in Figure 9 where the accuracy is plotted versus the tolerance requirement.

¹ Matlab release 2016a, The MathWorks, Inc., Natick, MA, USA.
In the present note we have introduced a modification of a method proposed by Hines for solving the large systems of ordinary differential equations arising when simulating large networks of neurons. The basic motivation for the new method was to allow for step size variation while at the same time retaining the possibility of efficient linear algebra exploiting the special structure of the systems, as is done in Hines' method. The relation of the new method to Strang splitting and the Peaceman-Rachford method has been shown.

Since the systems are often stiff, an investigation of the domain of absolute stability has been carried out. Here, a test equation inspired by similar considerations for partitioned Runge-Kutta methods was of much use. It turned out that, in many cases, these domains are bounded.

The method has been implemented in different versions in Matlab, including also an implementation of Richardson extrapolation. A number of tests and comparisons to standard state-of-the-art solvers for ordinary differential equations have been done. In the tests it turned out that higher order methods are most efficient even in the case of rather low tolerances, which are of most practical interest.

The competitiveness of the new method compared to standard solvers depends mostly on an efficient implementation of the linear algebra involved. This could not be tested in the Matlab environment. Therefore, we will implement the new method in actual neuron simulators in the future in order to obtain more realistic comparisons.
References

[1] U.M. Ascher and L.R. Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, Philadelphia, 1998.
[2] J.M. Bower and D. Beeman. The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System. Springer, 2nd edition, 1998.
[3] M. Brandi. Simulating a Multi-Scale and Multi-Physics Model of a Dendritic Spine – Towards a Communication Framework for Multi-Scale Modeling. Master thesis, KTH Royal Institute of Technology, 2011.
[4] N.T. Carnevale and M.L. Hines. The NEURON Book. Cambridge University Press, 2005.
[5] E. Hairer, S. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I, volume 8 of Springer Series in Computational Mathematics. Springer, Berlin, 2nd rev. edition, 1992.
[6] E. Hairer and G. Wanner. Solving Ordinary Differential Equations II, volume 14 of Springer Series in Computational Mathematics. Springer, Berlin, 2nd rev. edition, 1996.
[7] A.C. Hindmarsh, P.N. Brown, K.E. Grant, S.L. Lee, R. Serban, D.E. Shumaker, and C.S. Woodward. SUNDIALS, suite of nonlinear and differential/algebraic equation solvers. ACM Trans. Math. Software, 31:363–396, 2005.
[8] M. Hines. Efficient computation of branched nerve equations. Int. J. Bio-Med. Comput., 15:69–76, 1984.
[9] A.L. Hodgkin and A.F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol., 117:500–544, 1952.
[10] W. Hundsdorfer and J. Verwer. Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations, volume 33 of Springer Series in Computational Mathematics. Springer, 2003.
[11] W.H. Hundsdorfer and J.G. Verwer. Stability and convergence of the Peaceman-Rachford ADI method for initial-boundary value problems. Math. Comp., 53:81–101, 1989.
[13] D.W. Peaceman and H.H. Rachford. The numerical solution of parabolic and elliptic differential equations. J. Soc. Indust. Appl. Math., 3:28–41, 1955.
[14] R. Serban. sundialsTB, a MATLAB Interface to SUNDIALS. Technical Report UCRL-SM-212121, LLNL, 2005.
[15] G. Söderlind. Automatic control and adaptive time-stepping. Numer. Algorithms, 31:281–310, 2002.
[16] G. Söderlind. Digital filters in adaptive time-stepping. ACM Trans. Math. Software, 29:1–26, 2003.
[17] G. Strang. On the construction and comparison of difference schemes. SIAM J. Numer. Anal., 5:506–517, 1968.
[18] K. Strehmel and R. Weiner. Partitioned adaptive Runge-Kutta methods and their stability. Numer. Math., 45:283–300, 1984.

Figure 1: Solution of the Hodgkin-Huxley system. (a) Voltage V, (b) gate variables m, n, h.

Figure 2: Solution of the soma-dendrite-spine system. (a) Voltages V_1, V_2, V_3; two of the voltages coincide up to plotting accuracy, (b) gate variables m, n, h, r, s, (c) calcium concentration c_Ca.
Figure 3: The Hodgkin-Huxley system: constant step size solvers.

Figure 4: The Hodgkin-Huxley system: comparison of solvers. (a) Behavior of the new solvers in comparison to Matlab's ode solvers, (b) behavior of the new solvers with different assignments to the x- and y-components. The tags "voltages" and "channels" indicate which set of variables has been considered as x-components.

Figure 5: The Hodgkin-Huxley system: comparison of solvers. (a) The new solvers and the constant step size Hines' method, (b) state-of-the-art methods and the most efficient new version.
Figure 6: The soma-dendrite-spine system: constant step size solvers.

Figure 7: The soma-dendrite-spine system: comparison of solvers. (a) Behavior of the new solvers in comparison to Matlab's ode solvers, (b) behavior of the new solvers with different assignments to the x- and y-components. The tags "voltages" and "channels" indicate which set of variables has been considered as x-components.

Figure 8: The soma-dendrite-spine system: comparison of solvers. (a) The new solvers and the constant step size Hines' method, (b) state-of-the-art methods and the most efficient new version.

Figure 9: Tolerance-accuracy plots. (a) Hodgkin-Huxley model, (b) soma-dendrite-spine system.