Characterizing Order of Convergence in the Obreshkov Method in Differential-Algebraic Equations
EMAD GAD∗

Abstract.
The Obreshkov method is a single-step multi-derivative method used in the numerical solution of differential equations and has been used in recent years in efficient circuit simulation. It has been shown that it can be made of arbitrarily high local order of convergence while maintaining unconditional numerical stability. Nevertheless, the theoretical basis for the high order of convergence has been known only for the special case where the underlying system of differential equations is of the ordinary type, i.e., for ordinary differential equations (ODE). On the other hand, theoretical analysis of the order of convergence for the more general case of a system consisting of differential and algebraic equations (DAE) is still lacking in the literature. This paper presents the theoretical characterization of the local order of convergence of the Obreshkov method when used in the numerical solution of a system of DAE. The contribution presented in this paper demonstrates that, in DAE, the local order of convergence is a function of the differentiation index of the system and, under certain conditions, becomes lower than the order obtained in ODE.
Key words.
Numerical Methods for Differential Equations, High-Order Approximation, Differential-Algebraic Equations, A-stability and L-stability, The Differentiation Index of Differential-Algebraic Equations.

AMS subject classifications.
1. Introduction.
Numerical solution of differential equations (DEs) is one of the fundamental tools used in all walks of applied and computational sciences. The main goal in any method used to solve a system of DEs of the form

(1.1)   f( x(t), dx(t)/dt, t ) = 0,

(with x(t) ∈ R^N, f : R^N × R^N × R → R^N) is to approximate x(t) at time points, t_n, n = 0, 1, ···. The process of computing approximations to x(t_i) is typically known as time marching, and starts from an initial point, t_0, where x(t_0) is known. In many situations, it is possible that the derivatives dx(t)/dt can be expressed explicitly in terms of x(t) and t, in which case the system of DEs is said to be in the ordinary differential equations (ODE) form. If that is not possible, the DEs are referred to as a system of differential-algebraic equations (DAE).

Research activities targeting the development of numerical methods for solving DEs span several decades and are still going strong. Section 2 provides a more detailed account of those activities. One particular method that has been applied recently in the area of circuit simulation [15, 31, 13, 12, 28, 23] is based on the Obreshkov formula [27]. The Obreshkov-based numerical method was shown to provide key features, foremost among them the fact that the method can be implemented with arbitrarily high order without losing the A-stability or the L-stability properties. In addition, the parameters of the method (the coefficients of the Obreshkov formula) are given in a pre-determined analytical form. These features made the application of this method advantageous in circuit simulation.

∗School of Electrical Engineering and Computer Science, Ottawa, ON, Canada K1N 5N6 ([email protected])
Funding:
This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
Nevertheless, the existing literature does not provide a characterization of the Obreshkov order of convergence if the underlying DEs are in the form of DAE. Indeed, the order of convergence in the Obreshkov-based method is only known if the DEs take the form of ODE. The lack of rigorous analysis for the Obreshkov order of convergence in DAE is viewed as a gap in the literature and is addressed in this paper. The analysis and results presented in this paper show that the order of the Obreshkov-based method is not determined only by the parameters of the integration formula, as in the case of ODE, but also by the differentiation index of the DAE. More particularly, it is shown that the differentiation index of the DAE has the potential to lower the order of the Obreshkov method below the order nominally afforded by its parameters.
Section 2 provides the background to the topic addressed in this paper. It also provides a substantial overview of the various methods proposed in the literature to numerically solve DEs, and positions the Obreshkov method within this body of work, highlighting the scope of the new contribution. Section 3 describes the recent progress on using the Obreshkov formula in circuit simulation, underscoring its high order for the special case of ODE. Section 4 develops the mathematical analysis needed to characterize the Obreshkov order in DAE. Section 5 provides experimental validation of the theoretical results. Finally, Section 6 draws important conclusions from the presented work.
2. General Background and Scope of the Paper.
This section describes the main landscape of the methods used in the numerical solution of DEs. Section 2.2 dwells on the major criteria through which those methods can be viewed and classified. Section 2.3 underscores the ideal characteristics that are typically sought in a general-purpose method and reviews the literature describing how the various methods score on those characteristics. The review of the Obreshkov method and its recent utilization in circuit simulation is presented in Section 2.4. Finally, Section 2.5 presents the scope of the contribution of the paper.

x(t_n) will be taken as the exact value of the solution to the DEs at t = t_n. On the other hand, the approximation generated by any given solution method at t = t_n will be denoted x̂_n. In a similar manner, the exact i-th order derivative of x(t) at t = t_n will be denoted by x^(i)(t_n), whereas its approximation is denoted by x̂^(i)_n, i = 0, 1, 2, ···, with the convention that x^(0)(t_n) ≡ x(t_n) and x̂^(0)_n ≡ x̂_n. h in the context of this paper will refer to the latest step size, h = t_n − t_{n−1}. O(h^q) will mark series terms that are proportional to h^u, with u ≥ q, and O_r(h^q) is used to extend this notation to a vector of size r. Finally, I_H will be used to denote the H × H identity matrix.

Several themes pervade the literature of methods that numerically solve DEs as an Initial Value Problem (IVP). This section reviews those themes, treating them as distinct classes, while noting that a particular method can lie at the intersection of two or more classes.
The methods which have been proposed, or used in software packages, can be classified based on the approach they use to advance from point t_{n−1} to t_n. Those methods are often grouped under the following three categories.

1. Linear Multi-Step (LMS).
These methods rely on the approximations generated at the past r points, that is, x̂_{n−1}, x̂_{n−2}, ···, x̂_{n−r}, together with the corresponding first-order derivative approximations, to advance to x(t_n). LMS methods are usually represented by the following formula,

(2.1)   Σ_{l=0}^{r} μ_l x̂_{n−l} = h Σ_{l=0}^{r} ζ_l x̂^(1)_{n−l}

where the coefficients μ_i, ζ_i are specific to each integration method and are chosen to make the first q Taylor series terms in the operator L(x(t)) := Σ_{l=0}^{r} μ_l x(t_{n−l}) − h Σ_{l=0}^{r} ζ_l x^(1)(t_{n−l}) vanish [16]. Notable examples among these methods are the backward Euler (BE), the trapezoidal rule (TR), and the backward differentiation formulae (BDF).

2. Single-Step Multi-Stage (SSMS).
Those methods use the approximation at a single past time point, t_{n−1}, along with approximations at off-step points (known as stages), t_i = t_{n−1} + c_i h, where c_i < 1, i = 1, ···, s. The Runge-Kutta (RK) methods [5] are the best-known example in this class of methods. An RK method is typically represented by its Butcher tableau (c, A, b), where A ∈ R^{s×s}, b, c ∈ R^s, and s is the number of stages. The formulation of the RK method for a system of ordinary differential equations x^(1) = f(t, x) takes the following form

ĝ_i = f( t_{n−1} + c_i h, x̂_{n−1} + h Σ_{j=1}^{s} a_ij ĝ_j ),   i = 1, ···, s
x̂_n = x̂_{n−1} + h Σ_{i=1}^{s} b_i ĝ_i

where the ĝ_i are the stage values, approximating the derivative x^(1)(t_{n−1} + c_i h), and a_ij, b_i, c_i are the components of A, b and c, respectively.

3. General Linear Methods (GLM).
The GLM are typically viewed as a hybrid between the LMS and SSMS methods, since they use the past r points along with s stages to advance to the next step. The construction of a GLM is usually represented by the block matrix [ A U ; B V ], where A ∈ R^{s×s}, B ∈ R^{r×s}, U ∈ R^{s×r}, V ∈ R^{r×r}. The time stepping to x̂_n is represented as

ĝ_i = f( t_{n−1} + c_i h, h Σ_{j=1}^{s} a_ij ĝ_j + Σ_{j=1}^{r} u_ij x̂_{n−j} ),   i = 1, ···, s
x̂_{n−i+1} = h Σ_{j=1}^{s} b_ij ĝ_j + Σ_{j=1}^{r} v_ij x̂_{n−j},   i = 1, ···, r

where u_ij, v_ij, a_ij, b_ij are the components of U, V, A and B, respectively.

The stability properties of a given integration method are usually studied by characterizing its behavior in approximating the solution to the scalar test problem, defined by

(2.2)   dx(t)/dt = λ x(t)

A method is said to be A-stable if the successive approximations to the scalar test problem satisfy |x̂_j| < |x̂_i|, (j > i), for all λ ∈ C⁻; L-stable if they do so for all λ ∈ C⁻ ∪ {∞}; and A(α)-stable if they do so for all λ ∈ { λ : |arg(−λ)| < α, λ ≠ 0 }. Note here that L-stability necessarily implies A-stability.
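As a concrete illustration of the test-problem criterion in (2.2) (the code below is an editorial sketch, not material from the paper): applying a one-step method to dx/dt = λx reduces it to x̂_n = R(hλ) x̂_{n−1} for a method-specific amplification factor R, so the stability definitions above become conditions on |R(z)| over z = hλ:

```python
# Hedged sketch: probing A- and L-stability of three classical methods on the
# scalar test problem (2.2).  Each method gives x_n = R(h*lambda) * x_{n-1},
# so stability reduces to conditions on the amplification factor R.

def R_fe(z):   # forward (explicit) Euler
    return 1 + z

def R_be(z):   # backward (implicit) Euler
    return 1 / (1 - z)

def R_tr(z):   # trapezoidal rule
    return (1 + z / 2) / (1 - z / 2)

# sample points in the open left half-plane C^-
samples = [complex(re, im) for re in (-0.1, -1.0, -10.0, -100.0)
           for im in (-5.0, 0.0, 5.0)]

assert all(abs(R_be(z)) < 1 for z in samples)  # consistent with A-stability
assert all(abs(R_tr(z)) < 1 for z in samples)  # consistent with A-stability
assert any(abs(R_fe(z)) > 1 for z in samples)  # forward Euler is not A-stable

# L-stability additionally demands R(z) -> 0 as |z| -> infinity:
assert abs(R_be(-1e12)) < 1e-9        # backward Euler: L-stable
assert abs(R_tr(-1e12) + 1) < 1e-9    # trapezoidal: R -> -1, not L-stable
```

For orientation, backward Euler and the trapezoidal rule coincide with the Obreshkov formula discussed later, for (l, m) = (0, 1) and (1, 1), respectively.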
A third way to characterize a given method is to derive the relation between the approximation x̂_n and the exact value x(t_n) in terms of the step size h = t_n − t_{n−1}, assuming that approximations generated at, or prior to, t_{n−1} are exact. Such a relation defines the "local order" of the method and is made more precise through the following definition.

Definition. An r-step method used to approximate the solution of a general differential equation is said to be of local order q if the approximate solution x̂_n is related to the exact solution x(t_n) through

(2.3)   x̂_n = x(t_n) + C h^{q+1} d^{q+1}x(t)/dt^{q+1} |_{t=t_{n−1}} + O_N( h^{q+2} ),

assuming that x̂_{n−p} = x(t_{n−p}), for p = 1, ···, r.

Within the above classes, a method can be further classified as explicit vs. implicit. For example, an LMS method is implicit if its integration formula (2.1) includes both the (unknown) x̂_n and its derivative x̂^(1)_n, i.e., if ζ_0 ≠ 0; otherwise, it is called explicit. SSMS or GLM methods are considered explicit if the stage ĝ_n depends strictly on the stages ĝ_m for m < n, which is the case if the matrix A is strictly lower triangular, i.e., a_ij = 0 for j ≥ i; otherwise, the method is considered implicit. Explicit methods have the advantage that computing the unknown is done only through linear combinations of known values, with complexity proportional to N, i.e., O(N), while implicit methods require a linear system solution, which scales in the general case as O((sN)^α), with s the number of stages and 1 < α ≤ 3.

The common thrust in the methods proposed under the above classes has been a quest for an unconditionally stable method with high order and a computational complexity that scales modestly with the problem size N. In the domain of circuit simulation, for instance, the requirement of unconditional stability is found in the general class of A-stable methods.
Therefore, it becomes imperative that a method constructed using one of the three approaches mentioned in Section 2.2.1 satisfy the A-stability condition while delivering a high local order of approximation, with higher order being more desirable than lower order.

Dahlquist, in his landmark paper [8], showed that A-stable methods constructed through the LMS approach cannot have order higher than 2. It was also shown that explicit methods constructed through the LMS approach cannot be A-stable. This result, rightly termed the Dahlquist barriers [18], put to rest the quest for deriving LMS A-stable methods of order higher than 2 (q ≤ 2), and shifted attention toward relaxing the A-stability requirement and adopting A(α)-stability instead, with the BDF method being the best-known example of these methods.

Unlike LMS-constructed methods, SSMS methods such as the RK do not have a theoretical barrier that precludes the existence of arbitrarily high-order A-stable methods. For example, it is known that the highest order for an A-stable RK method is only restricted by the number of stages s; the identity that the order q satisfies q ≤ 2s was first proposed by the Daniel-Moore conjecture [9] before being proved by the theory of order stars [20, 30]. Nevertheless, the difficulty in finding high-order RK methods is not due to the existence of a theoretical barrier but rather lies in constructing the appropriate Butcher tableau. For example, to derive the Butcher tableau corresponding to a method of order 10, one would need to solve a system of sN strongly coupled nonlinear equations to obtain the stage values, ĝ_i [3].

Similar to SSMS, GLM methods also do not have theoretical "barriers" on the order for A-stability. Rather, their stability is characterized through a two-dimensional polynomial Φ(w, z) := det( w I − M(z) ), with M(z) = V + z B (I − z A)^{−1} U [6].
In fact, A-stability is guaranteed in the GLM so long as all the w-roots of the polynomial Φ(w, z) = 0 lie inside the unit disk of the complex plane whenever ℜ{z} < 0, with only simple roots allowed on the unit circle. The barrier on the highest possible order of an A-stable GLM is given by the following theorem.

Theorem ([30]). Let Φ(w, z) be the two-dimensional polynomial corresponding to an A-stable GLM method of order q. Then q ≤ 2s. Furthermore, methods attaining the highest possible order, i.e., q = 2s, will have an error constant satisfying the inequality C ≥ (−1)^s s! s! / ( (2s)! (2s+1)! ).

Examples of GLM are the diagonally implicit multistage integration methods (DIMSIM) [4] and multistep collocation techniques [22].

To the best of the author's knowledge, the construction of arbitrarily high-order A-stable GLM methods is still an open question. However, as the above theorem states, the only limit to the order of any such method, when it is found, will be determined by the number of stages s used in the construction.

The Obreshkov method, although dating back to 1942 [27], has not received serious attention during the above developments. In fact, this method can be considered as a fourth class of methods as far as its construction approach is concerned, since it relies on higher-order derivatives. The method is constructed by forcing two successive approximations, x̂_{n−1}, x̂_n ∈ R^N at t = t_{n−1}, t_n, to satisfy the Obreshkov formula [27, 11], which is given by

(2.4)   Σ_{i=0}^{m} (−1)^i α_{i,l,m} h^i x̂^(i)_n = Σ_{i=0}^{l} α_{i,m,l} h^i x̂^(i)_{n−1}

where

(2.5)   α_{i,l,m} = ( (m + l − i)! / (m + l)! ) ( m choose i ),

and l and m are integers that define the parameters of the formula. It should be obvious that the Obreshkov method is a single-step method in the sense that it uses the immediate past approximation (for t = t_{n−1}) to advance to the time point at t = t_n. However, in contrast to LMS methods, it requires the higher-order derivatives at the past and current time points.
Furthermore, the method is implicit since its formula includes the derivatives at the unknown point t_n.

In 1968, Ehle [11] showed that the Obreshkov formula, when implemented in a single-step implicit mode with specific parameters, m − 2 ≤ l ≤ m, can be used to derive A-stable and L-stable methods. One salient advantage of the Obreshkov-based approach, compared to the RK or the GLM, was the fact that the coefficients α_{i,l,m} defining the method are pre-determined and are given by the expression in (2.5) for arbitrarily high order. Despite its potential advantages, Ehle remarked that this integration formula appears to be "largely of theoretical interest" due to having to work with implicit high-order derivatives [11]. Consequently, interest in working toward a practical implementation of the Obreshkov formula in general nonlinear systems waned. In fact, Gear, in his classical work on BDF methods [16], notes that, due to such a difficulty, "the application of these methods is a major computing task for large systems and is not generally practical." Such an attitude toward using Obreshkov-like formulas persisted over three decades, with only one exception, when Nørsett attempted to modify them in an effort to reduce the computational difficulties in solving linear ordinary DEs. The modified method, however, lost the A-stability property for orders higher than five. In addition, its generalization to nonlinear DAE-based systems was not obvious [26].

In a more recent development, research in the area of circuit simulation renewed the interest in an Obreshkov-based approach to the numerical solution of DEs. In [15], a graphical methodology, called "rooted trees", was used to represent the nonlinear functions used in nonlinear circuit device models and proved successful in computing the high-order derivatives in an efficient manner, thereby removing the main obstacle that impeded using the Obreshkov method in industrial-scale problems.
Following this work, the Obreshkov method was tailored to more specialized circuits in [13, 12, 23, 14], with increasing success compared to the conventional LMS-based methods used in circuit simulators. Further characterization of the Obreshkov method was also presented in [31] along with more efficient implementation techniques.

The work presented in this paper is motivated by the fact that the existing literature provides a characterization for the order of the Obreshkov method only when it is applied to a system of ordinary differential equations (ODE). On the other hand, the treatment of the more general case of DAE is still lacking in the literature. It is a well-known fact that DAE have fundamentally different and richer structural characteristics than ODE, and that those characteristics can cause a particular method to behave differently when applied to DAE (compared with its application to ODE). For example, in the class of SSMS-RK methods, the order in solving DAE deviates from the order in solving ODE [1, 21, 24], leading to loss of order. The core contribution of this work expands the existing theory, which describes the order of the Obreshkov method in ODE, to cover the general case of DAE. It demonstrates that the order of the Obreshkov method, when used to solve DAE, is a function of the system's differentiation index [21]. It is shown in this work that DAE with high differentiation index do in fact lower the order of the Obreshkov method, which is a result that, to the best of the author's knowledge, is novel.
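Before moving to the implementation details, the coefficients in (2.5) can be evaluated and sanity-checked by machine. The sketch below is an editorial illustration (not from the paper): it computes α_{i,l,m} exactly and verifies that, on the scalar test problem, the (l, m) formula (2.4) reproduces the Taylor series of e^z through order l + m, the nominal order discussed above.

```python
# Sketch: exact Obreshkov coefficients (2.5), plus a check that the resulting
# rational amplification factor R(z) matches exp(z) through order l + m.
from fractions import Fraction
from math import comb, factorial

def alpha(i, l, m):
    # (2.5): alpha_{i,l,m} = ((m + l - i)! / (m + l)!) * binom(m, i)
    return Fraction(factorial(m + l - i), factorial(m + l)) * comb(m, i)

# l = m = 1 recovers the trapezoidal rule x_n - (h/2)x'_n = x_{n-1} + (h/2)x'_{n-1}
assert [alpha(i, 1, 1) for i in (0, 1)] == [1, Fraction(1, 2)]

def first_mismatch_with_exp(l, m, terms=12):
    """Index of the first Taylor coefficient of R(z) that differs from e^z,
    where R(z) = sum_i alpha_{i,m,l} z^i / sum_i (-1)^i alpha_{i,l,m} z^i."""
    num = [alpha(i, m, l) for i in range(l + 1)] + [Fraction(0)] * terms
    den = [(-1) ** i * alpha(i, l, m) for i in range(m + 1)]
    r = []                               # series of num/den; note den[0] == 1
    for k in range(terms):
        acc = num[k]
        for j in range(1, min(k, m) + 1):
            acc -= den[j] * r[k - j]
        r.append(acc)
    return next(k for k in range(terms) if r[k] != Fraction(1, factorial(k)))

for l, m in [(1, 1), (1, 2), (2, 2), (3, 3)]:
    assert first_mismatch_with_exp(l, m) == l + m + 1
```

The exact rational arithmetic makes the order statement a hard equality rather than a floating-point approximation.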
3. Implementation and Order of the Obreshkov Method.
Characterization of the order achieved by a given numerical method can be carried out by considering a linear version of the DEs in (1.1). For the case of an ODE, such DEs are represented by

(3.1)   dx(t)/dt = A x(t) + u(t)

where A is a matrix in R^{N×N}. On the other hand, the DAE formulation for linear DEs can be put in the form

(3.2)   C dx(t)/dt + G x(t) = b(t)

where C and G are matrices in R^{N×N}, with C being, in general, a singular matrix. In case the matrix C is nonsingular, (3.2) can be put in the form of the ODE (3.1) using A = −C^{−1}G and u(t) = C^{−1}b(t).

In the circuit simulation framework, the matrix C carries the so-called "stamps" of memory elements such as inductors and capacitors, G carries the stamps of the memoryless elements (including resistors and independent/dependent voltage/current sources), while the vector b(t) ∈ R^N groups the contributions of the independent voltage/current sources in the circuit. The standard approach used to construct those matrices is based on the modified nodal analysis (MNA) formulation [19, 25].

The application of the Obreshkov formula in the context of circuit simulation was proposed recently in [15, 31, 13, 12]. In this approach to using the Obreshkov formula (2.4) on (3.2), the time stepping from t = t_{n−1} to t_n is done by solving the following system

(3.3)   ( C̃ + G̃ ) ξ̂_n = b̃_n

The matrices C̃ and G̃ ∈ R^{(m+1)N × (m+1)N} in (3.3) are obtained from C and G as follows

(3.4)   C̃ = (1/h) [ 0 C 0 ··· 0 ; 0 0 C ··· 0 ; ⋮ ⋱ ; 0 0 0 ··· C ; 0 0 0 ··· 0 ]

(3.5)   G̃ = [ G 0 ··· 0 0 ; 0 G ··· 0 0 ; ⋮ ⋱ ; 0 0 ··· G 0 ; α_{0,l,m} I_N  −α_{1,l,m} I_N  ···  (−1)^m α_{m,l,m} I_N ]

and the vectors b̃_n and ξ̂_n are given by

(3.6)   b̃_n = [ (b^(0)(t_n))ᵀ  (h b^(1)(t_n))ᵀ  ···  (h^{m−1} b^(m−1)(t_n))ᵀ  ( Σ_{i=0}^{l} α_{i,m,l} h^i x̂^(i)_{n−1} )ᵀ ]ᵀ

(3.7)   ξ̂_n = [ x̂^(0)ᵀ_n  h x̂^(1)ᵀ_n  ···  h^m x̂^(m)ᵀ_n ]ᵀ

where b^(i)(t_n) = d^i b(t)/dt^i evaluated at t = t_n. The above implementation of the Obreshkov method in circuit simulation was shown to be computationally advantageous, as it preserves the inherent sparsity of the matrices G and C that arise from the circuit formulation and enables using block forms of LU factorization, such as KLU [10], which scales almost linearly with the circuit size and the order m [23]. The initial point of the time marching is obtained by computing x^(i)(0), i = 0, ···, l, and the time marching proceeds from t = t_{n−1} to t_n with an appropriately sized time step, as shown in [15].
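To make the stepping system (3.3)-(3.7) concrete, here is a toy instantiation (added in editing; the scalar example and the miniature Gaussian solver are illustrative assumptions, not the paper's implementation) for a scalar problem c·x'(t) + g·x(t) = 0, i.e., N = 1 and b(t) = 0:

```python
# Sketch: one Obreshkov time step via (3.3)-(3.7) for the scalar problem
# c*x' + g*x = 0, whose exact solution is x(t) = x(0) * exp(-(g/c) t).
from math import comb, factorial, exp

def alpha(i, l, m):                       # coefficients (2.5)
    return factorial(m + l - i) / factorial(m + l) * comb(m, i)

def obreshkov_step(c, g, prev_derivs, h, l, m):
    """Solve (C~ + G~) xi_n = b~_n with N = 1 and b(t) = 0.

    prev_derivs[i] holds x^(i)(t_{n-1}); returns xi_n = [x_n, h x'_n, ...]."""
    n = m + 1
    A = [[0.0] * n for _ in range(n)]
    for i in range(m):                    # rows of (3.4)+(3.5): g on the
        A[i][i] = g                       # diagonal, c/h on the superdiagonal
        A[i][i + 1] = c / h
    for i in range(n):                    # last row: the Obreshkov formula (2.4)
        A[m][i] = (-1) ** i * alpha(i, l, m)
    rhs = [0.0] * n                       # (3.6) with b(t) = 0: history term only
    rhs[m] = sum(alpha(i, m, l) * h ** i * prev_derivs[i] for i in range(l + 1))
    # tiny dense Gaussian elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], rhs[k], rhs[p] = A[p], A[k], rhs[p], rhs[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            rhs[r] -= f * rhs[k]
            for j in range(k, n):
                A[r][j] -= f * A[k][j]
    xi = [0.0] * n
    for k in reversed(range(n)):
        xi[k] = (rhs[k] - sum(A[k][j] * xi[j] for j in range(k + 1, n))) / A[k][k]
    return xi

# x' = -2x, exact derivatives at t = 0 are (-2)^i; one step with l = m = 2
xi = obreshkov_step(1.0, 2.0, [(-2.0) ** i for i in range(3)], 0.1, 2, 2)
assert abs(xi[0] - exp(-0.2)) < 1e-6               # high local accuracy of x_n
assert abs(xi[1] - (-2.0) * 0.1 * xi[0]) < 1e-12   # h*x'_n = -(g/c) h x_n
```

Note how the unknown vector packs x̂_n together with its scaled derivatives, exactly as in (3.7).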
The first result listed here is concerned with the order of the Obreshkov method when applied to a system of ODE (3.1).
Lemma 3.1.
Assume that the C matrix is non-singular and, hence, the DAE system (3.2) can be put in the form of the ODE (3.1). Also assume that

(3.8)   x̂^(i)_{n−1} = d^i x(t)/dt^i |_{t=t_{n−1}},   i = 0, ···, m.

It then follows that

(3.9)   h^i x̂^(i)_n = h^i x^(i)(t_n) + O_N( h^{l+m+i+1} ),   i = 0, ···, m.

The proof of Lemma 3.1 will be given as part of the proof of the more general result pertinent to the case of DAE, in Section 4.5.2.
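Lemma 3.1 can be spot-checked numerically (an editorial sketch, not from the paper, using the standard reduction for the scalar test problem: for x' = ax with exact data at t_{n−1}, one Obreshkov step gives x̂_n = R(ah) x̂_{n−1}, with R the rational function defined by (2.4)); the local error should then shrink like h^{l+m+1}, consistent with (3.9) at i = 0:

```python
# Sketch: observed local order of the (l, m) Obreshkov step on x' = a*x,
# estimated by halving h and comparing |R(ah) - exp(ah)|.
from math import comb, factorial, exp, log

def alpha(i, l, m):
    return factorial(m + l - i) / factorial(m + l) * comb(m, i)

def R(z, l, m):
    num = sum(alpha(i, m, l) * z ** i for i in range(l + 1))
    den = sum((-1) ** i * alpha(i, l, m) * z ** i for i in range(m + 1))
    return num / den

a = -1.0
for l, m in [(1, 1), (2, 2), (2, 3)]:
    e1 = abs(R(a * 0.1, l, m) - exp(a * 0.1))     # local error at h = 0.1
    e2 = abs(R(a * 0.05, l, m) - exp(a * 0.05))   # local error at h = 0.05
    observed = log(e1 / e2) / log(2.0)            # empirical order exponent
    assert abs(observed - (l + m + 1)) < 0.3      # matches (3.9) for i = 0
```

The tolerance 0.3 absorbs the higher-order terms in the error expansion at these finite step sizes.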
Remark. The order of the approximation x̂_n (i.e., i = 0) in the Obreshkov method, when applied to a system of ODE, is determined solely based on the parameters l and m used in the Obreshkov formula.

Remark. The i-th order derivative is approximated to the same degree as the lower-order derivatives. This fact follows from (3.9), or alternatively from noting that x̂^(i)_n − x^(i)(t_n) is given by O_N( h^{l+m+1} ), independent of i.
4. Obreshkov Order Characterization in DAE.

4.1. Preliminaries.
The development of the analysis is predicated on the following preliminaries.
Definition 4.1.
The minimum number of times that all or part of the DAE (3.2) must be differentiated with respect to t in order to explicitly express the derivative dx(t)/dt in terms of x(t) and t is defined as the differentiation index of the DAE.

Definition 4.2.
A nilpotent square matrix N is defined such that N^ν = 0 and N^{ν−1} ≠ 0 for some positive integer ν. ν is referred to as the nilpotency index of N.

Definition 4.3.
Let I be an open interval of R, Ω a connected open subset of R^{2N+1}, and f of (1.1) differentiable from Ω to R^N. Then the DAE (1.1) is solvable on I in Ω if there is an r-dimensional family of solutions φ(t, c) defined on a connected open set I × Ω̄, Ω̄ ⊂ R^r, such that:
1. φ(t, c) is defined on all of I for each c ∈ Ω̄;
2. (t, φ(t, c), φ^(1)(t, c)) ∈ Ω for (t, c) ∈ I × Ω̄;
3. if ψ(t) is any other solution with (t, ψ(t), ψ^(1)(t)) ∈ Ω, then ψ(t) = φ(t, c) for some c ∈ Ω̄;
4. the graph of φ as a function of (t, c) is an (r+1)-dimensional manifold.

Definition 4.4.
Let A and B be N × N matrices. The matrix pencil is defined as the matrix A + λB for some complex parameter λ. The matrix pencil A + λB is said to be regular if its determinant, denoted det(A + λB), is not identically zero as a function of λ.

Theorem 4.5.
The DAE system (3.2) is solvable if and only if the matrix pencil G + λC is regular.

Theorem 4.6.
Suppose that A, B ∈ R^{q×q} are real matrices and A + λB is a regular pencil. Then there exist non-singular real matrices P, Q ∈ R^{q×q} such that

(4.1)   PAQ = [ I_r 0 ; 0 N ],   PBQ = [ J 0 ; 0 I_s ],

where N ∈ R^{s×s} is a nilpotent matrix with nilpotency index k, J ∈ R^{r×r} is a matrix in Jordan canonical form, and r + s = q. In case N = 0, define k = 1. In the special case that A is nonsingular, take PAQ = I_q, PBQ = J and define k = 0. If det(A + λB) is identically constant as a function of λ, then (4.1) simplifies to PAQ = N and PBQ = I_q. The matrix pair P and Q are referred to as the Weierstrass transform of A and B.

Theorem 4.7.
Suppose the DAE system in (3.2) is solvable with differentiation index μ ≥ 1, and therefore there exist matrices P and Q satisfying (4.1). Then the nilpotency index of the resulting N matrix is equal to μ.

Lemma 4.8.
Let z(t) be a continuously differentiable function in t and let h = t_n − t_{n−1}. Then,

(4.2)   Σ_{i=0}^{m} (−1)^i α_{i,l,m} h^i d^i z(t)/dt^i |_{t=t_n} − Σ_{i=0}^{l} α_{i,m,l} h^i d^i z(t)/dt^i |_{t=t_{n−1}} = O( h^{l+m+1} ).

The analysis presented in this section is developed along three main steps.
1. The first step will utilize a Weierstrass transformation on the matrices C and G, which decouples the DAE in (3.2) into two subsystems: the first takes the form of an ODE, while the other subsystem is purely algebraic. This step will result in decomposing the state x(t) into two components: the first is a solution to the ODE subsystem, which will be denoted z_D(t), and the other is a solution to the algebraic subsystem, which is denoted by z_A(t).
2. In the second step, another Weierstrass transformation is applied to the discrete system matrices C̃ and G̃ in (3.3), decoupling the system into two discrete systems and decomposing ξ̂_n into two components, which will be denoted by ζ̂_{D,n} and ζ̂_{A,n}. It will be demonstrated that ζ̂_{D,n} and ζ̂_{A,n} carry the approximations to z_D(t) and z_A(t), and their derivatives up to the m-th order (at t = t_n), to varying orders of the step size h.
3. The third step will then link the entries of ζ̂_{D,n} and ζ̂_{A,n} to the entries in ξ̂_n, providing the path to describing the approximations generated by the Obreshkov method in ξ̂_n in terms of the exact x(t_n) and the step size h.

Moving forward, it will be assumed that the past time step is devoid of error, that is,

(4.3)   h^i x̂^(i)_{n−1} = h^i x^(i)(t_{n−1}),   i = 0, 1, ···, l,

so that the error derived at t = t_n reflects only the "local" approximation error committed in a single time step.
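The two-point identity (4.2) lends itself to a quick numerical sanity check (an editing-time sketch; z(t) = sin t is an arbitrary smooth test function chosen here, not one used by the paper): the combination on its left-hand side should decay like h^{l+m+1} as h is halved.

```python
# Sketch: evaluate the left-hand side of (4.2) for z(t) = sin(t) and confirm
# the O(h^{l+m+1}) decay of the residual under step-size halving.
from math import sin, cos, comb, factorial, log

def alpha(i, l, m):
    return factorial(m + l - i) / factorial(m + l) * comb(m, i)

def dsin(i, t):                          # i-th derivative of sin at t
    return (sin(t), cos(t), -sin(t), -cos(t))[i % 4]

def residual(tn, h, l, m):               # magnitude of the LHS of (4.2)
    left = sum((-1) ** i * alpha(i, l, m) * h ** i * dsin(i, tn)
               for i in range(m + 1))
    right = sum(alpha(i, m, l) * h ** i * dsin(i, tn - h)
                for i in range(l + 1))
    return abs(left - right)

for l, m in [(1, 1), (2, 2)]:
    r1, r2 = residual(1.0, 0.1, l, m), residual(1.0, 0.05, l, m)
    assert abs(log(r1 / r2) / log(2.0) - (l + m + 1)) < 0.3
```

This is the same cancellation that (4.31) later exploits componentwise for the algebraic variable z_A(t).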
It will also be convenient to group all the exact derivatives, scaled by powers of the step size h, in a single vector ξ(t), defined by

(4.4)   ξ(t) := [ (x^(0)(t))ᵀ  (h x^(1)(t))ᵀ  ···  (h^m x^(m)(t))ᵀ ]ᵀ.

The previous steps are detailed in the following subsections.
Assuming the DAE system (3.2) is solvable, it follows by Theorem 4.6 that there are nonsingular matrices P and Q that can be used in a Weierstrass transform on C and G. To this end, Q is used in the following change of variables x(t) → z(t) in (3.2),

(4.5)   z(t) := Q^{−1} x(t),

which, after pre-multiplying the DAE system (3.2) by P, decouples it (using (4.1)) into

(4.6)   dz_D(t)/dt = −J z_D(t) + u_D(t)
(4.7)   N dz_A(t)/dt = −z_A(t) + u_A(t)

where z_D(t), u_D(t) ∈ R^r and z_A(t), u_A(t) ∈ R^s are obtained from

(4.8)   [ u_D(t)ᵀ  u_A(t)ᵀ ]ᵀ = P b(t),   [ z_D(t)ᵀ  z_A(t)ᵀ ]ᵀ = z(t).

It is obvious that (4.6) is a classical ODE whose solution is uniquely determined based on the initial condition at t = 0 and the stimulus u_D(t). However, (4.7) is a purely algebraic part whose solution depends solely on the driving stimulus b(t) and its derivatives. This fact can be demonstrated when the solution to (4.7) is written explicitly, as explained in [2], in the following form

(4.9)   z_A(t) = Σ_{i=0}^{k−1} (−1)^i N^i d^i u_A(t)/dt^i

indicating that z_A(t) at any time t is determined solely from b(t) and its derivatives at the same time t.

For convenience and later usage, let Q_D mark the first r columns of Q and Q_A its remaining (s = N − r) columns.
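The closed form (4.9) can be verified on a small concrete instance (an illustrative example added in editing, not from the paper): take s = 2, N = [[0, 1], [0, 0]] (so N² = 0, nilpotency index k = 2) and u_A(t) = [t², 3t]; then (4.9) gives z_A = u_A − N u_A', and substituting back into (4.7) leaves a zero residual:

```python
# Sketch: check that z_A from (4.9) satisfies N z_A' = -z_A + u_A for
# N = [[0, 1], [0, 0]] (k = 2) and u_A(t) = [t^2, 3t].

def u(t):  return (t * t, 3.0 * t)      # driving term u_A(t)
def du(t): return (2.0 * t, 3.0)        # u_A'(t)

def z(t):
    # (4.9) with k = 2: z_A = u_A - N u_A'; here N maps (a, b) -> (b, 0)
    a, b = u(t)
    db = du(t)[1]
    return (a - db, b)                  # = (t^2 - 3, 3t)

def dz(t):
    return (2.0 * t, 3.0)               # derivative of z(t) above

for t in (0.0, 0.7, -2.5):
    z1, z2 = z(t)
    dz1, dz2 = dz(t)
    u1, u2 = u(t)
    # N z' = (z2', 0); the residual of N z' + z - u must vanish componentwise
    assert abs(dz2 + z1 - u1) < 1e-12
    assert abs(0.0 + z2 - u2) < 1e-12
```

Note that z_A involves u_A', i.e., the solution depends on derivatives of the stimulus, which is the hallmark of a differentiation index greater than one.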
Thus,

(4.10)   x(t) = Q_D z_D(t) + Q_A z_A(t).

It would also be convenient for the purposes of the following analysis to stack the vectors z(t), z_D(t) and z_A(t) and their high-order derivatives (scaled by powers of h) in single vectors ζ(t), ζ_D(t), ζ_A(t) defined, respectively, by

(4.11)   ζ(t) := [ (z^(0)(t))ᵀ  (h z^(1)(t))ᵀ  ···  (h^m z^(m)(t))ᵀ ]ᵀ
(4.12)   ζ_D(t) := [ (z_D^(0)(t))ᵀ  (h z_D^(1)(t))ᵀ  ···  (h^m z_D^(m)(t))ᵀ ]ᵀ
(4.13)   ζ_A(t) := [ (z_A^(0)(t))ᵀ  (h z_A^(1)(t))ᵀ  ···  (h^m z_A^(m)(t))ᵀ ]ᵀ

Using the Kronecker (tensor) operator ⊗, it is obvious from (4.4) and the above that

(4.14)   ξ(t) = ( I_{m+1} ⊗ Q_D ) ζ_D(t) + ( I_{m+1} ⊗ Q_A ) ζ_A(t).

In this step, the matrices P̃, Q̃ ∈ R^{(m+1)N × (m+1)N}, defined by

(4.15)   P̃ := [ P 0 ··· 0 0 ; 0 P ··· 0 0 ; ⋮ ⋱ ; 0 0 ··· P 0 ; 0 0 ··· 0 Q^{−1} ],   Q̃ := [ Q 0 ··· 0 ; 0 Q ··· 0 ; ⋮ ⋱ ; 0 0 ··· Q ],

are used to Weierstrass transform the matrices C̃ and G̃. This is carried out by performing the change of variables ξ̂_n → ζ̂_n given by

(4.16)   ζ̂_n := Q̃^{−1} ξ̂_n = [ (Q^{−1} x̂^(0)_n)ᵀ  (h Q^{−1} x̂^(1)_n)ᵀ  ···  (h^m Q^{−1} x̂^(m)_n)ᵀ ]ᵀ.

Define vectors ẑ^(i)_n ∈ R^N, ẑ^(i)_{n−1} ∈ R^N,

(4.17)   ẑ^(i)_n := Q^{−1} x̂^(i)_n,   i = 0, ···, m
(4.18)   ẑ^(i)_{n−1} := Q^{−1} x̂^(i)_{n−1},   i = 0, ···, l

and partition them into vectors of sizes r and s,

(4.19)   ẑ^(i)_n = [ ẑ^(i)ᵀ_{D,n}  ẑ^(i)ᵀ_{A,n} ]ᵀ,   ẑ^(i)_{n−1} = [ ẑ^(i)ᵀ_{D,n−1}  ẑ^(i)ᵀ_{A,n−1} ]ᵀ.

Next, the system (3.3) is pre-multiplied by the matrix P̃, which, along with the change of variables in (4.16), decouples it into the following two systems

(4.20)   K ζ̂_{D,n} = e_{D,n}
(4.21)   M ζ̂_{A,n} = e_{A,n}

where the vectors ζ̂_{D,n} ∈ R^{(m+1)r} and ζ̂_{A,n} ∈ R^{(m+1)s} group the vectors ẑ^(i)_{D,n} and ẑ^(i)_{A,n}, respectively, i.e.,

(4.22)   ζ̂_{D,n} = [ ẑ^(0)ᵀ_{D,n}  h ẑ^(1)ᵀ_{D,n}  ···  h^m ẑ^(m)ᵀ_{D,n} ]ᵀ
(4.23)   ζ̂_{A,n} = [ ẑ^(0)ᵀ_{A,n}  h ẑ^(1)ᵀ_{A,n}  ···  h^m ẑ^(m)ᵀ_{A,n} ]ᵀ.

Moreover, the matrices K ∈ R^{(m+1)r × (m+1)r} and M ∈ R^{(m+1)s × (m+1)s}, and the vectors e_{D,n} ∈ R^{(m+1)r}, e_{A,n} ∈ R^{(m+1)s}, are, respectively, given by

(4.24)   K = [ J (1/h)I_r 0 ··· 0 ; 0 J (1/h)I_r ··· 0 ; ⋮ ⋱ ; 0 0 ··· J (1/h)I_r ; α_{0,l,m} I_r  −α_{1,l,m} I_r  ···  (−1)^m α_{m,l,m} I_r ]

(4.25)   M = [ I_s (1/h)N 0 ··· 0 ; 0 I_s (1/h)N ··· 0 ; ⋮ ⋱ ; 0 0 ··· I_s (1/h)N ; α_{0,l,m} I_s  −α_{1,l,m} I_s  ···  (−1)^m α_{m,l,m} I_s ]

(4.26)   e_{D,n} = [ (u_D^(0)(t_n))ᵀ  ···  (h^{m−1} u_D^(m−1)(t_n))ᵀ  ( Σ_{i=0}^{l} α_{i,m,l} h^i ẑ^(i)_{D,n−1} )ᵀ ]ᵀ

(4.27)   e_{A,n} = [ (u_A^(0)(t_n))ᵀ  ···  (h^{m−1} u_A^(m−1)(t_n))ᵀ  ( Σ_{i=0}^{l} α_{i,m,l} h^i ẑ^(i)_{A,n−1} )ᵀ ]ᵀ

In (4.26) and (4.27), the vectors ẑ^(i)_{D,n−1} and ẑ^(i)_{A,n−1} are the r- and s-partitions, respectively, of the vector ẑ^(i)_{n−1} = Q^{−1} x̂^(i)_{n−1}, i = 0, ···, l. Similarly to (4.14), ξ̂_n can be expressed as

(4.28)   ξ̂_n = ( I_{m+1} ⊗ Q_D ) ζ̂_{D,n} + ( I_{m+1} ⊗ Q_A ) ζ̂_{A,n}.
The task of characterizing the order of the Obreshkov method can be accomplished if the difference between the exact $\xi(t_n)$ and the Obreshkov approximation $\hat{\xi}_n$ is derived, showing its relation to the step size $h$. Using (4.14) and (4.28), this difference is given by

(4.29) $\hat{\xi}_n - \xi(t_n) = (I_{m+1} \otimes Q_D)\left(\hat{\zeta}_{D,n} - \zeta_D(t_n)\right) + (I_{m+1} \otimes Q_A)\left(\hat{\zeta}_{A,n} - \zeta_A(t_n)\right)$

indicating that the approximation error $\hat{\xi}_n - \xi(t_n)$ of the Obreshkov method consists of two components: a differential component, arising from $\hat{\zeta}_{D,n} - \zeta_D(t_n)$, and an algebraic component, arising from $\hat{\zeta}_{A,n} - \zeta_A(t_n)$. Each of those components is characterized separately in the following subsections. Starting from (4.7), multiplying by $h^i$ and differentiating both sides $i$ times with respect to $t$ yields

(4.30) $\dfrac{1}{h}N\dfrac{d^{i+1}}{dt^{i+1}}\left(h^{i+1}z_A(t)\right) = -\dfrac{d^i}{dt^i}\left(h^i z_A(t)\right) + \dfrac{d^i}{dt^i}\left(h^i u_A(t)\right), \qquad i = 0, \cdots, m-1$

while applying the Obreshkov formula to $z_A(t)$ gives

(4.31) $\displaystyle\sum_{i=0}^{m}(-1)^i\alpha_{i,l,m}h^i\left.\frac{d^i z_A(t)}{dt^i}\right|_{t=t_n} = \sum_{i=0}^{l}\alpha_{i,m,l}h^i\left.\frac{d^i z_A(t)}{dt^i}\right|_{t=t_{n-1}} + O_s\left(h^{l+m+1}\right)$

The $m$ systems in (4.30) (at $t = t_n$) along with the system in (4.31) can be put in a matrix form

(4.32) $M\zeta_A(t_n) = \Psi$

where $M$ is the matrix defined by (4.25), $\zeta_A(t_n)$ is defined in (4.13) and the vector $\Psi$ is given by

(4.33) $\Psi := \begin{bmatrix} u^{(0)}_A(t_n)^\top & \cdots & h^{m-1}u^{(m-1)}_A(t_n)^\top & \left(\sum_{i=0}^{l}\alpha_{i,m,l}h^i z^{(i)}_A(t_{n-1}) + O_s\left(h^{l+m+1}\right)\right)^\top \end{bmatrix}^\top$

Subtracting (4.32) from (4.21), and noting from (4.3) and (4.18) that $\hat{z}^{(i)}_{A,n-1} = z^{(i)}_A(t_{n-1})$, yields

(4.34) $\hat{\zeta}_{A,n} - \zeta_A(t_n) = M^{-1}\Delta$

where

(4.35) $\Delta = \begin{bmatrix} 0, & \cdots, & O_s\left(h^{l+m+1}\right)^\top \end{bmatrix}^\top$
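The coefficients $\alpha_{i,l,m}$ of the Obreshkov formula are known in closed form; one common normalization (an assumption here — the paper may absorb the alternating signs into the coefficients differently) is $\alpha_{i,l,m} = \frac{m!\,(l+m-i)!}{(l+m)!\,i!\,(m-i)!}$, which makes the formula applied to $y' = \lambda y$ reproduce the $(l,m)$ Padé approximant of $e^{h\lambda}$ — the source of the $O(h^{l+m+1})$ residual in (4.31). A sketch in exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def alpha(i, l, m):
    # Assumed closed form: alpha_{i,l,m} = m!(l+m-i)! / ((l+m)! i! (m-i)!)
    return Fraction(factorial(m) * factorial(l + m - i),
                    factorial(l + m) * factorial(i) * factorial(m - i))

def pade_check(l, m):
    # Denominator / numerator of the (l, m) Pade approximant of exp(z)
    # built from the coefficients above (note the swapped (l, m) in num,
    # mirroring the alpha_{i,m,l} on the right-hand side of the formula).
    den = [(-1) ** j * alpha(j, l, m) for j in range(m + 1)]
    num = [alpha(j, m, l) for j in range(l + 1)]
    # The Taylor coefficients of den(z)*exp(z) - num(z) must vanish
    # through order l + m, the defining Pade property.
    for i in range(l + m + 1):
        conv = sum(den[j] * Fraction(1, factorial(i - j))
                   for j in range(min(i, m) + 1))
        target = num[i] if i <= l else Fraction(0)
        if conv != target:
            return False
    return True

# Sanity checks: (1,1) gives the trapezoidal-like Pade(1,1) = (1+z/2)/(1-z/2).
assert alpha(0, 1, 1) == 1 and alpha(1, 1, 1) == Fraction(1, 2)
assert all(pade_check(l, m) for l in range(4) for m in range(4))
```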
The inverse of the matrix $M$ can be expressed in block-structured format by partitioning it into four blocks,

(4.36) $M = \begin{bmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{bmatrix}$

where

$M_{11} = \begin{bmatrix} I_s & \frac{1}{h}N & 0 & \cdots & 0 \\ 0 & I_s & \frac{1}{h}N & & \vdots \\ \vdots & & \ddots & \ddots & \frac{1}{h}N \\ 0 & 0 & \cdots & & I_s \end{bmatrix}, \qquad M_{12} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ \frac{1}{h}N \end{bmatrix},$

$M_{21} = \begin{bmatrix} \alpha_{0,l,m}I_s & \cdots & (-1)^{m-1}\alpha_{m-1,l,m}I_s \end{bmatrix}, \qquad M_{22} = (-1)^m\alpha_{m,l,m}I_s$

The inverse of the block-partitioned matrix (4.36) is given by the block-partitioned matrix

(4.37) $M^{-1} = \begin{bmatrix} M_{11}^{-1} + M_{11}^{-1}M_{12}S_A^{-1}M_{21}M_{11}^{-1} & -M_{11}^{-1}M_{12}S_A^{-1} \\ -S_A^{-1}M_{21}M_{11}^{-1} & S_A^{-1} \end{bmatrix}$

with

(4.38) $S_A = M_{22} - M_{21}M_{11}^{-1}M_{12}$
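The block-inverse formula (4.37)–(4.38) is the standard Schur-complement identity and can be sanity-checked on random blocks; the sizes and the 0.1 scaling below are arbitrary choices made only to keep the blocks well conditioned:

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 4, 2                                   # arbitrary block sizes

# Random block matrix with well-conditioned diagonal blocks.
M11 = np.eye(p) + 0.1 * rng.standard_normal((p, p))
M12 = 0.1 * rng.standard_normal((p, q))
M21 = 0.1 * rng.standard_normal((q, p))
M22 = np.eye(q) + 0.1 * rng.standard_normal((q, q))
M = np.block([[M11, M12], [M21, M22]])

# Schur complement of M11, as in (4.38).
M11i = np.linalg.inv(M11)
S = M22 - M21 @ M11i @ M12
Si = np.linalg.inv(S)

# Block form of the inverse, as in (4.37).
Minv = np.block([
    [M11i + M11i @ M12 @ Si @ M21 @ M11i, -M11i @ M12 @ Si],
    [-Si @ M21 @ M11i, Si],
])
assert np.allclose(Minv @ M, np.eye(p + q))
```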
The error in the algebraic component is therefore given by

(4.39) $\hat{\zeta}_{A,n} - \zeta_A(t_n) = \begin{bmatrix} -M_{11}^{-1}M_{12}S_A^{-1} \\ S_A^{-1} \end{bmatrix} O_s\left(h^{l+m+1}\right)$

Given that $M_{11}$ is a block upper bidiagonal matrix with identity matrices on the diagonal blocks, its inverse is given by

(4.40) $M_{11}^{-1} = \begin{bmatrix} I_s & -\frac{N}{h} & \frac{N^2}{h^2} & \cdots & (-1)^{m-1}\frac{N^{m-1}}{h^{m-1}} \\ 0 & I_s & -\frac{N}{h} & & \vdots \\ \vdots & & \ddots & \ddots & -\frac{N}{h} \\ 0 & 0 & \cdots & & I_s \end{bmatrix}$

$S_A$ is expanded as

(4.41) $S_A = \displaystyle\sum_{i=0}^{m}\alpha_{i,l,m}(-1)^i\left(\frac{N}{h}\right)^{m-i}$

and its inverse $S_A^{-1}$ can be expanded in a Taylor-series format

(4.42) $S_A^{-1} = \displaystyle\sum_{p=0}^{\infty}\gamma_p\left(\frac{N}{h}\right)^{p}$

where $\gamma_p$ denotes the sum of all the coefficients that appear in front of the terms with $\left(\frac{N}{h}\right)^p$. Substituting (4.42) and (4.40) into (4.39) and using the definition of $M_{12}$ yields

(4.43) $\hat{\zeta}_{A,n} - \zeta_A(t_n) = \begin{bmatrix} (-1)^m\sum_{p=0}^{\infty}\gamma_p\left(\frac{N}{h}\right)^{p+m} \\ \vdots \\ (-1)^{m-i}\sum_{p=0}^{\infty}\gamma_p\left(\frac{N}{h}\right)^{p+m-i} \\ \vdots \\ -\sum_{p=0}^{\infty}\gamma_p\left(\frac{N}{h}\right)^{p+1} \\ \sum_{p=0}^{\infty}\gamma_p\left(\frac{N}{h}\right)^{p} \end{bmatrix} O_s\left(h^{l+m+1}\right)$

Let us now assume that the differentiation index of the DAE system is $k$. Thus, from Theorem 4.7, the matrix $N$ is nilpotent with nilpotency index $k$, i.e., $N^q = 0$ for $q \geq k$. This fact makes the algebraic component of the error in the $i$-th order derivative (that is, $\hat{z}^{(i)}_{A,n} - z^{(i)}_A(t_n)$, $i = 0, \cdots, m$) vanish completely if $m - i \geq k$. On the other hand, the case $m - i < k$ entails truncating the infinite series in (4.43) (in accordance with $N^q = 0$ for $q \geq k$) at $p = k - m + i - 1$:

(4.44) $\hat{\zeta}_{A,n} - \zeta_A(t_n) = \begin{bmatrix} (-1)^m\sum_{p=0}^{k-m-1}\gamma_p\left(\frac{N}{h}\right)^{p+m} \\ \vdots \\ (-1)^{m-i}\sum_{p=0}^{k-m+i-1}\gamma_p\left(\frac{N}{h}\right)^{p+m-i} \\ \vdots \\ -\sum_{p=0}^{k-2}\gamma_p\left(\frac{N}{h}\right)^{p+1} \\ \sum_{p=0}^{k-1}\gamma_p\left(\frac{N}{h}\right)^{p} \end{bmatrix} O_s\left(h^{l+m+1}\right)$

The above expression enables deducing the order of the error by finding the smallest positive power of $h$ that appears in each component. For example, in the first component the smallest positive power of $h$ appears at $p = k - m - 1$, giving rise to an order of $l + m + 2 - k$. Analogous reasoning leads to the following result

(4.45) $h^i\hat{z}^{(i)}_{A,n} - h^i z^{(i)}_A(t_n) = \begin{cases} O_s\left(h^{l+m+2-k}\right) & \text{if } m - i < k \\ 0 & \text{if } m - i \geq k \end{cases}$

This part of the error analysis also lays out the proof of Lemma 3.1, since the system (4.6) is in the ODE form and its error resembles the error of a general ODE addressed in that lemma. The error in the differential component can be derived in a manner similar to the algebraic component. Starting with (4.30) and (4.31), with $z_A(t)$ replaced by $z_D(t)$, yields the following

(4.46) $\hat{\zeta}_{D,n} - \zeta_D(t_n) = K^{-1}\Delta$

where $K$ is given by (4.24) and $\hat{\zeta}_{D,n}$ and $\zeta_D(t_n)$ are, respectively, defined in (4.22) and (4.12). A process similar to the derivation of the algebraic error can be followed if, in the matrix $M$, $N$ is replaced with $I_r$ above the diagonal, $I_s$ is replaced with $J$ on the main diagonal, and $I_s$ is replaced by $I_r$ in the last block row. Using the formula derived above for the inverse of a $2 \times 2$ block-partitioned matrix yields

(4.47) $\hat{\zeta}_{D,n} - \zeta_D(t_n) = \begin{bmatrix} \left(\sum_{j=0}^{m}(-1)^j\alpha_{j,l,m}h^j\left(J^{-1}\right)^{m-j}\right)^{-1}\left(J^{-1}\right)^{m} \\ \vdots \\ h^i\left(\sum_{j=0}^{m}(-1)^j\alpha_{j,l,m}h^j\left(J^{-1}\right)^{m-j}\right)^{-1}\left(J^{-1}\right)^{m-i} \\ \vdots \\ h^m\left(\sum_{j=0}^{m}(-1)^j\alpha_{j,l,m}h^j\left(J^{-1}\right)^{m-j}\right)^{-1} \end{bmatrix} O_r\left(h^{l+m+1}\right)$

Since the sum inside the inverse tends to the nonsingular matrix $\alpha_{0,l,m}\left(J^{-1}\right)^m$ as $h \to 0$, the $i$-th block above is $O_r(h^i)$ times a bounded factor, and the order of convergence in the $i$-th order derivative is therefore given by

(4.48) $h^i\hat{z}^{(i)}_{D,n} - h^i z^{(i)}_D(t_n) = O_r\left(h^{l+m+1+i}\right)$
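The truncation step leading to (4.44) rests entirely on the nilpotency of $N$: the formal series (4.42) contains negative powers of $h$, but only finitely many of them survive, up to $h^{-(k-1)}$, which is what converts the $O_s(h^{l+m+1})$ factor into the reduced order $l+m+2-k$. A minimal numerical illustration with the $k \times k$ upper-shift matrix (a standard nilpotent example, not one of the paper's circuits):

```python
import numpy as np

k = 3                      # nilpotency (differentiation) index
h = 0.01
# Upper-shift matrix: N^k = 0 exactly.
N = np.diag(np.ones(k - 1), 1)
assert np.allclose(np.linalg.matrix_power(N, k), 0)

# For nilpotent N, the formal Neumann series of (I - N/h)^{-1} terminates:
# (I - N/h)^{-1} = sum_{p=0}^{k-1} (N/h)^p, no matter how small h is,
# and the largest surviving negative power of h is h^{-(k-1)}.
A = np.eye(k) - N / h
series = sum(np.linalg.matrix_power(N / h, p) for p in range(k))
assert np.allclose(np.linalg.inv(A), series)
```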
The preceding analysis enables establishing the order of theObreshkov method in a general DAE using the following theorem.
Theorem 4.9.
Let the DAE system in (3.2) be given with a differentiation index $k > 0$, and assume that the Obreshkov method with parameters $l, m$ is used to approximate $x(t)$. Furthermore, assume that $x(t)$ and $\frac{d^i x(t)}{dt^i}$, $i = 0, 1, \cdots, l$, are readily available at $t = t_{n-1}$ and have been assigned to $\hat{x}^{(i)}_{n-1}$, $i = 0, \cdots, l$. It then follows that the approximations $\hat{x}^{(i)}_n$ converge asymptotically to $\frac{d^i x(t)}{dt^i}$ at $t = t_n$ with the following order:

(4.49) $h^i\hat{x}^{(i)}_n - h^i x^{(i)}(t_n) = \begin{cases} O_N\left(h^{l+m+2-k}\right) & \text{if } m - i < k \\ O_N\left(h^{l+m+1+i}\right) & \text{if } m - i \geq k \end{cases}$

Proof.
Substituting (4.44)–(4.45) and (4.46)–(4.47) into (4.29) and letting $h \to 0$, so that $\hat{\xi}_n - \xi(t_n)$ is dominated by the smallest positive power of $h$, proves the above theorem.

Remark 4.10. The Obreshkov method with parameters $l, m$, when applied to a DAE with index $k$, matches the order of convergence obtained when the method is applied to an ODE if and only if $m \geq k$. Equivalently put, the Obreshkov method suffers order reduction if it is used in a DAE with differentiation index $k > m$.
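The case split in (4.49) can be packaged as a small helper; the assertions below reproduce the orders that the four panels of Figure 5.2 are expected to exhibit. This is a direct transcription of (4.49), not an independent derivation:

```python
def obreshkov_dae_order(l, m, k, i):
    """Local order of h^i x^(i) per (4.49): reduced when m - i < k."""
    return l + m + 2 - k if m - i < k else l + m + 1 + i

# The four panels of Fig. 5.2, for i = 0 and i = 1:
assert obreshkov_dae_order(0, 2, 2, 0) == 3   # (a)
assert obreshkov_dae_order(0, 2, 2, 1) == 2
assert obreshkov_dae_order(1, 2, 3, 0) == 2   # (b)
assert obreshkov_dae_order(1, 2, 3, 1) == 2
assert obreshkov_dae_order(1, 3, 1, 0) == 5   # (c)
assert obreshkov_dae_order(1, 3, 1, 1) == 6
assert obreshkov_dae_order(1, 3, 3, 0) == 5   # (d)
assert obreshkov_dae_order(1, 3, 3, 1) == 3
```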
5. Experimental Validation.
Numerical validation of the theoretical results presented above requires problems for which the exact solution of the DAE (a solution free from truncation error) is obtainable. This is generally difficult, since problems modelled by systems of DAE rarely have their analytical solutions readily available. Fortunately, this difficulty can be handled in the domain of circuit simulation using the following steps. • The circuit is excited by sinusoidal sources. In this case the source vector is given in the form $b(t) = b_c\cos(\omega t) + b_s\sin(\omega t)$, with $\omega$ denoting the radial frequency in rad/sec and $b_c$ and $b_s$ constant vectors. • With a sinusoidal stimulus at the input, the circuit response at steady state ($t \to \infty$) settles to a sinusoidal waveform represented by $x_{ss}(t) := X_c\cos(\omega t) + X_s\sin(\omega t)$, where $X_c$ and $X_s$ are constant vectors that can be computed using the AC analysis method [29]. This approach for computing the steady-state response of the circuit, i.e., the steady-state solution of the DAE modelling the circuit, is free from the truncation error associated with methods that solve the DAE as an IVP. It can therefore serve as the accurate reference against which results from any such IVP method are compared. • Next, the initial value $\hat{x}_0$ used in starting the Obreshkov method is taken from an arbitrary point on the periodic trajectory traced by $x_{ss}(t)$. For example, the point $t = 0$ is a possible choice, whence $\hat{x}_0$ is assigned the value $x_{ss}(0) = X_c$. In a similar manner, the initial values of the derivatives are assigned from the derivatives of the steady-state response $x_{ss}(t)$ at $t = 0$; thus $\hat{x}^{(1)}_0 = \omega X_s$, $\hat{x}^{(2)}_0 = -\omega^2 X_c$, and so forth. • Running the Obreshkov method with specified values of the parameters $l, m$, starting from the point computed in the previous step, generates a sequence of points $\hat{x}_n$ that approximate $x_{ss}(t)$. In fact, $\hat{x}_1 - x_{ss}(h)$, for sufficiently small values of $h$, should asymptotically approach $O(h^q)$, where $q$ is the power described by Theorem 4.9 in (4.49). Thus, the validation of Theorem 4.9 can be carried out by examining the behaviour of the error $||\hat{x}_1 - x_{ss}(h)||$ versus $h$. • In order to display the results clearly, the error $||\hat{x}_1 - x_{ss}(h)||$ is plotted against the values of $h$ on a log-scaled graph. Naturally, if the error asymptotically approaches $h^q$ (as it should), the log plot will exhibit a linear behaviour whose slope matches the value described by (4.49) in accordance with the values of $l, m$ and the index $k$ of the DAE.

The above steps were executed on the three circuits shown in Figure 5.1, whose differentiation indices are, from left to right, $k = 1, 2, 3$, respectively. The determination of the indices of these circuits was done using the procedure described in [17] to compute the matrices $P$ and $Q$ of the Weierstrass transformation. The $N$ matrix resulting from the transformation was used to determine the differentiation index $k$ (using $N^k = 0$) of the DAE modelling each circuit.

Fig. 5.1. Circuits used in the numerical validation of the theoretical results. All resistors are equal, capacitors are 1F and inductors are 1H. The independent voltage source is $\cos(2\pi t) + \sin(2\pi t)$.

The plots in Figure 5.2 display the error $||h^i\hat{x}^{(i)}_1 - h^i x^{(i)}_{ss}(h)||$ for $i = 0, 1$, with the values of $h$ logarithmically distributed within one decade. Each plot indicates the values of $l, m$ used with the Obreshkov method and the differentiation index $k$ of the DAE system. The plots also highlight the slopes observed in each line. As shown, the slope in each case matches the order predicted by Theorem 4.9 given the values of $l, m$ and $k$.

Worthy of observation in Figure 5.2 is the behaviour of the error in Figure 5.2(b) compared with that in 5.2(d). Those two panels show the method applied to the same circuit (the one with $k = 3$) but with different values of $m$. What needs to be noted here is the increase in the order from 2 in the former to 5 in the latter, which underscores the order-reduction phenomenon mentioned in Remark 4.10.
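The truncation-free reference described in the first two steps amounts to a single complex (phasor) solve. The sketch below uses a hypothetical two-node network in the form $C\dot{x} + Gx = b(t)$ — the element values are illustrative and are not the circuits of Figure 5.1 — and verifies that the resulting steady state satisfies the DAE exactly:

```python
import numpy as np

# Hypothetical network C x' + G x = b(t); singular C makes it a genuine DAE.
C = np.diag([1.0, 0.0])
G = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
omega = 2 * np.pi
b_c = np.array([1.0, 0.0])        # b(t) = b_c cos(wt) + b_s sin(wt)
b_s = np.array([0.0, 1.0])

# Phasor (AC analysis) solve: with x_ss(t) = Re{X e^{jwt}} and
# b(t) = Re{B e^{jwt}}, we have B = b_c - j b_s and (G + jwC) X = B.
B = b_c - 1j * b_s
X = np.linalg.solve(G + 1j * omega * C, B)
X_c, X_s = X.real, -X.imag        # x_ss(t) = X_c cos(wt) + X_s sin(wt)

# The steady state satisfies the DAE exactly -- no truncation error --
# which is what qualifies it as the reference solution.
for t in np.linspace(0.0, 1.0, 7):
    x = X_c * np.cos(omega * t) + X_s * np.sin(omega * t)
    xdot = -omega * X_c * np.sin(omega * t) + omega * X_s * np.cos(omega * t)
    b = b_c * np.cos(omega * t) + b_s * np.sin(omega * t)
    assert np.allclose(C @ xdot + G @ x, b)
```

The same solve works for any singular $C$ as long as $(G + j\omega C)$ is nonsingular, which is exactly the condition under which AC analysis is well posed.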
6. Conclusions.
This paper presented a novel theoretical result characterizing the local order of convergence of the recently proposed high-order A- and L-stable method based on the Obreshkov formula. The main focus of this work has been the derivation of this order of convergence when the system of differential equations takes the form of differential-algebraic equations (DAE). The derived results showed that, for certain configurations of the Obreshkov formula, this order may be lower than the order of convergence obtained for ordinary differential equations (ODE). The theoretical results have been validated with careful numerical simulation of several circuits.

The slope on a log-scale graph is defined as the number of decades of increase/decrease in the error $\hat{x}_1 - x_{ss}(h)$ per one decade of change in $h$.

Fig. 5.2. Log graph of the error $||h^i\hat{x}^{(i)}_1 - h^i x^{(i)}_{ss}(h)||$ at $i = 0, 1$ versus the step size $h$: (a) $l = 0, m = 2, k = 2$ (slopes 2 and 3); (b) $l = 1, m = 2, k = 3$ (slopes 2 and 2); (c) $l = 1, m = 3, k = 1$ (slopes 6 and 5); (d) $l = 1, m = 3, k = 3$ (slopes 5 and 3). The highlighted slopes refer to the number of decades the error drops within one decade of variation in $h$. The slopes match the orders predicted by Theorem 4.9 given the corresponding values of $l, m$ and $k$.

REFERENCES

[1]
K. Brenan, S. Campbell, and L. Petzold, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, Society for Industrial and Applied Mathematics, 1995.[2]
K. E. Brenan, S. L. Campbell, and L. R. Petzold, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, SIAM, 1996.[3]
K. Burrage, Parallel and Sequential Methods for Ordinary Differential Equations, Oxford University Press, New York, 1995.[4]
J. C. Butcher, Diagonally-implicit multi-stage integration methods, Appl. Numer. Math., 11 (1993), pp. 347–363.[5]
J. C. Butcher , Numerical Methods for Ordinary Differential Equations , Wiley, 2003.[6] ,
General linear methods , Acta Numerica, 15 (2006), pp. 157–256.[7]
S. Campbell , Singular Systems of Differential Equations , Pitman, Boston, 1980.[8]
G. Dahlquist , A special stability problem for linear multistep methods , BIT, 3 (1963), pp. 27–43.[9]
J. W. Daniel and R. E. Moore, Computation and Theory in Ordinary Differential Equations, W. H. Freeman and Company, 1970.[10]
T. A. Davis and E. Palamadai Natarajan, Algorithm 907: KLU, a direct sparse solver for circuit simulation problems, ACM Trans. Math. Softw., 37 (2010), pp. 36:1–36:17.[11]
B. L. Ehle , High-order A-stable methods for the numerical solution of D.E.’s , BIT, 8 (1968),pp. 276–278.[12]
M. Farhan, E. Gad, M. Nakhla, and R. Achar, Fast simulation of microwave circuits with nonlinear terminations using high-order stable methods, IEEE Transactions on Microwave Theory and Techniques, 61 (2013), pp. 360–371.[13] M. Farhan, E. Gad, M. Nakhla, and R. Achar, New method for fast transient simulation of large linear circuits using high-order stable methods, IEEE Transactions on Components, Packaging and Manufacturing Technology, 3 (2013), pp. 661–669.[14]
M. A. Farhan, M. Nakhla, E. Gad, and R. Achar, Parallel high-order envelope-following method for fast transient analysis of highly oscillatory circuits, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 25 (2017), pp. 261–270.[15]
E. Gad, M. Nakhla, R. Achar, and Y. Zhou, A-stable and L-stable high-order integration methods for solving stiff differential equations, IEEE Trans. Computer-Aided Design of Integrated Circ. Sys., 28 (2009), pp. 1359–1372.[16]
C. W. Gear , Numerical initial value problems in ordinary differential equations , Prentice-Hall,N.J., 1971.[17]
M. Gerdin, Computation of a canonical form for linear differential-algebraic equations, Tech. Rep. LiTH-ISY-R-2602, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, Apr. 2004.[18]
E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems, Springer, Berlin Heidelberg, 3rd ed., 2008.[19]
C.-W. Ho, A. Ruehli, and P. Brennan , The modified nodal approach to network analysis ,Circuits and Systems, IEEE Transactions on, 22 (1975), pp. 504 – 509.[20]
A. Iserles and S. P. Nørsett , Order Stars , Chapman & Hall, 1991.[21]
P. Kunkel and V. Mehrmann, Differential-Algebraic Equations: Analysis and Numerical Solution, EMS, 2006.[22]
I. Lie and S. P. Nørsett , Superconvergence for multistep collocation , Math. Comp., 52 (1989),pp. 65–79.[23]
Y. Lin and E. Gad, Formulation of the Obreshkov-based transient circuit simulator in the presence of nonlinear memory elements, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 34 (2015), pp. 86–94.[24]
P. Maffezzoni, L. Codecasa, and D. D'Amore, Time-domain simulation of nonlinear circuits through implicit Runge–Kutta methods, IEEE Transactions on Circuits and Systems I: Regular Papers, 54 (2007), pp. 391–400.[25]
F. N. Najm , Circuit simulation , John Wiley & Sons, 2010.[26]
S. P. Nørsett, One-step methods of Hermite type for numerical integration of stiff systems, BIT, 14 (1974), pp. 68–77.[27]
N. Obreshkov, Sur les quadratures mécaniques, (Bulgarian, French summary) Akad. Nauk., 65 (1942), pp. 191–289.[28]
J. C. G. Pimentel, E. Gad, and S. Roy, High-order A-stable and L-stable state-space discrete modeling of continuous systems, IEEE Transactions on Circuits and Systems I: Regular Papers, 59 (2012), pp. 346–359.[29] J. Vlach and K. Singhal, Computer Methods for Circuit Analysis and Design, John Wiley & Sons, Inc., New York, NY, USA, 2nd ed., 1993.[30]
G. Wanner, E. Hairer, and S. P. Nørsett, Order stars and stability theorems, BIT, 18 (1978), pp. 475–489.[31]
Y. Zhou, E. Gad, M. S. Nakhla, and R. Achar, Structural characterization and efficient implementation techniques for A-stable high-order integration methods.