Conversion Methods for Improving Structural Analysis of Differential-Algebraic Equation Systems
BIT manuscript No. (will be inserted by the editor)
Guangning Tan · Nedialko S. Nedialkov · John D. Pryce
Received: date / Accepted: date
Abstract
Differential-algebraic equation systems (DAEs) are generated routinely by simulation and modeling environments. Before a simulation starts and a numerical method is applied, some kind of structural analysis (SA) is used to determine which equations to differentiate, and how many times. Pantelides's algorithm and Pryce's Σ-method are equivalent: if one of them finds correct structural information, the other does also. Nonsingularity of the Jacobian produced by SA indicates success, which occurs on many problems of interest. However, these methods can fail on simple, solvable DAEs and give incorrect structural information, including the index. This article investigates the Σ-method's failures and presents two conversion methods for fixing them. Both methods convert a DAE on which the Σ-method fails to an equivalent problem on which this SA is more likely to succeed.

Keywords differential-algebraic equations · structural analysis · modeling · symbolic computation

Mathematics Subject Classification (2010) · · ·

1 Introduction

Differential-algebraic equation systems (DAEs) arise from disciplines such as electrical circuits, chemical engineering, optimal control, and mechanical systems. To
G. Tan
School of Computational Science and Engineering, McMaster University,
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada. E-mail: [email protected]

N. S. Nedialkov
Department of Computing and Software, McMaster University,
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada. E-mail: [email protected]

J. D. Pryce
School of Mathematics, Cardiff University,
Senghennydd Road, Cardiff CF24 4AG, Wales, UK. E-mail: [email protected]

simulate the dynamic behaviour of such systems, a variety of algorithms is employed, from building a mathematical model to producing a numerically solvable system of equations. In the modeling stage, components and modules are selected from libraries and integrated into subsystems. Each having its own physical dynamics, these subsystems can be further interconnected via interface or coupling formulas; see [29] for example. The result of this approach can be a large, sparse, and nonlinear DAE system, which is typically structured: the dependence between components is stronger within a subsystem, but is weaker between subsystems. Moreover, such a DAE may have a high index.

To solve a DAE numerically, derivatives of some equations usually need to be appended to the original DAE, and the augmented system is solved as a whole. With some index reduction methods [6,9] or regularization techniques [7,29], this enlarged system is reduced to a DAE of index 1 or to a regularized DAE, respectively, so that a standard DAE numerical solution method can be applied. However, it is not easy to tell which equations are to be differentiated, and exactly how many times. If the numerical method is not chosen properly for a DAE of high index, then the integration can lead to instabilities and non-convergence of this method [29].

Hence it is desirable to understand the structure of a DAE before a simulation starts on it.
As a preprocessing tool, some structural analysis (SA) algorithm is applied to determine the index, the number of degrees of freedom (DOF), the constraints, and which variables and derivatives need initial values. This preprocessing gives more insight into the underlying structure of a DAE and indicates how to carry out a numerical integration.

The widely used SA method of Pantelides [21] is an algorithm that requires graph theory for understanding and implementation. Pryce's Σ-method [22] is equivalent to it, for they both produce the same structural index when applied to first-order systems [22, Theorem 5.8]. This index is an upper bound for the differentiation index, and often they are the same [22]. However, Reißig et al. show that the structural index can be arbitrarily high for a family of DAEs of differentiation index 1 [27]. We show that some simple manipulation of the equations can make the Σ-method report the correct (structural) index 1 on these DAEs [31].

The Σ-method can also work on high-order systems. The SA results can help decide how to apply an index reduction algorithm [9,11,12,13,14,23,24], perform a regularization process [29], or design a solution scheme for a Taylor series method [1,2,16,17,18,20].

Although the Σ-method succeeds on many problems of practical interest, it can fail—hence Pantelides's algorithm fails as well—on simple, solvable DAEs, producing an identically singular System Jacobian.

In this article, we investigate the Σ-method's failures and present two conversion methods that reformulate such a DAE into an equivalent problem with the same solution. After each conversion, provided some conditions are satisfied, the value of the signature matrix is guaranteed to decrease. We conjecture that this decrease usually leads to a better formulation of a problem, so that the SA may produce a (generically) nonsingular System Jacobian and hence succeed.

The rest of this article is organized as follows.
Section 2 summarizes the Σ-method theory and the notation we use throughout this article. Section 3 describes this SA's failures. Section 4 introduces the conversion methods and illustrates them with simple examples. Section 5 presents two more illustrative examples. Section 6 gives conclusions.

2 Summary of the Σ-method.

We consider DAEs of the general form

0 = f_i ( t, the x_j and derivatives of them ),   i = 1:n,   (2.1)

where the x_j(t), j = 1:n, are state variables that are functions of an independent variable t, usually regarded as time. The colon notation p:q, for integers p, q, denotes either the unordered set or the enumerated list of integers i with p ≤ i ≤ q, depending on context. Throughout this article, "derivatives of x_j" include x_j itself as its 0th derivative: x_j^(l) = x_j if l = 0.

We introduce notation that will be used later. For more details, see [16,22,25]. Terms are set in slanted font at their defining occurrence.

The Σ-method constructs for a DAE (2.1) an n × n signature matrix Σ, whose (i, j) entry σ_ij is either an integer ≥ 0, the order of the highest derivative to which variable x_j occurs in equation f_i, or −∞ if neither x_j nor its derivatives occur in f_i.

A highest-value transversal (HVT) of Σ is a set T of n positions (i, j), with one entry in each row and each column, such that the sum of these entries is maximized. This sum is the value of Σ, written Val(Σ). If Val(Σ) is finite, then the DAE is structurally well posed (SWP); otherwise, Val(Σ) = −∞ and the DAE is structurally ill posed (SIP). In the SIP case, there exists no one-to-one correspondence between equations and variables.

We henceforth consider the SWP case. Using a HVT, we find 2n integers c = (c_1, ..., c_n) and d = (d_1, ..., d_n) associated with the equations and variables of (2.1), respectively. These integers satisfy

c_i ≥ 0;   d_j − c_i ≥ σ_ij for all i, j, with equality on a HVT.   (2.2)

We refer to such c and d, written as a pair (c; d), as a valid offset pair. It is not unique, but there exists a unique elementwise smallest solution (c; d) of (2.2), which we refer to as the canonical offset pair [22].

Any valid (c; d) can be used to prescribe a stage-by-stage solution scheme for solving DAEs by a Taylor series method. The derivatives of the solution are computed in stages

k = k_d, k_d + 1, ..., −1, 0, 1, ...,   where k_d = −max_j d_j.   (2.3)

At each stage k, we solve

0 = f_i^(c_i + k)   for all i such that c_i + k ≥ 0   (2.4)

for the unknowns

x_j^(d_j + k)   for all j such that d_j + k ≥ 0,   (2.5)
using the x_j^(<d_j + k), j = 1:n, found in the previous stages. Here z^(<r) is short notation for z, z′, ..., z^(r−1), and z^(≤r) includes z^(<r) and z^(r).

If the solution scheme (2.3–2.5) can be carried out for stages k = k_d : 0, and the derivatives x_j^(≤d_j), j = 1:n, can be uniquely determined, then we say the solution scheme and the Σ-method succeed. Otherwise they fail, in the sense that the Jacobian used to solve (2.4) at some stage k ∈ k_d : 0 does not have full row rank.

The Jacobian used to solve (2.4) for stages k ≥ 0 is the n × n System Jacobian J = J(c; d) = (J_ij) defined by

J_ij = ∂f_i^(c_i) / ∂x_j^(d_j) = ∂f_i / ∂x_j^(d_j − c_i) = { ∂f_i / ∂x_j^(σ_ij) if d_j − c_i = σ_ij, and 0 otherwise },   (2.6)

with i, j = 1:n. The second "=" in (2.6) results from Griewank's Lemma (see Lemma 4.1 later), and the third "=" follows from (2.2).

Using the derivatives computed in stages k = k_d : 0, we have found a consistent point: it is either (t, x_1^(<d_1), ..., x_n^(<d_n)), if every x_j^(d_j) occurs in a jointly linear way in every f_i^(c_i), or (t, x_1^(≤d_1), ..., x_n^(≤d_n)), if some x_j^(d_j) occurs nonlinearly in (2.4) at stage k = 0.

Although each valid (c; d) produces a different solution scheme (2.3–2.5) and generally a different System Jacobian J(c; d), all J's nevertheless share the same determinant [16]. If one J is nonsingular—and hence all J's are—at a consistent point, then there exists (locally) a unique solution through this point [22]. The SA can now use the canonical (c; d) to determine the structural index and the number of DOF:

ν_S = max_i c_i + { 1 if some d_j = 0, 0 otherwise }

and

DOF = Val(Σ) = ∑_{(i,j)∈T} σ_ij = ∑_j d_j − ∑_i c_i.

Here "DOF" refers to the phrase "degrees of freedom", while DOF is the corresponding number.
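These combinatorial quantities are easy to compute for small systems. The following pure-Python sketch is our own illustration, not code from the paper or from the DAETS/DAESA packages, and all function names are hypothetical. It finds a HVT and Val(Σ) by brute force (feasible only for tiny n), iterates to the canonical offsets, and evaluates the structural index and DOF on the pendulum's signature matrix of Example 2.1 below.

```python
from itertools import permutations

NINF = float("-inf")  # a "blank" (minus-infinity) entry of Sigma

def hvt_and_val(sigma):
    """Return (Val(Sigma), HVT) by brute force over all transversals."""
    n = len(sigma)
    best_val, best_T = NINF, None
    for perm in permutations(range(n)):
        entries = [sigma[i][perm[i]] for i in range(n)]
        if NINF in entries:
            continue  # not a transversal of finite entries
        val = sum(entries)
        if best_val == NINF or val > best_val:
            best_val, best_T = val, [(i, perm[i]) for i in range(n)]
    return best_val, best_T

def canonical_offsets(sigma, hvt):
    """Smallest (c; d) with c_i >= 0, d_j - c_i >= sigma_ij, equality on the HVT.
    For the small structurally well-posed examples here, this fixed-point
    iteration terminates at the canonical offset pair."""
    n = len(sigma)
    c = [0] * n
    while True:
        d = [max(sigma[i][j] + c[i] for i in range(n) if sigma[i][j] != NINF)
             for j in range(n)]
        new_c = [d[j] - sigma[i][j] for (i, j) in sorted(hvt)]
        if new_c == c:
            return c, d
        c = new_c

# Simple pendulum (2.7): rows f1, f2, f3; columns x, y, lambda
sigma = [[2, NINF, 0],
         [NINF, 2, 0],
         [0, 0, NINF]]
val, hvt = hvt_and_val(sigma)
c, d = canonical_offsets(sigma, hvt)
index = max(c) + (1 if 0 in d else 0)  # structural index nu_S
dof = sum(d) - sum(c)                  # degrees of freedom
print(val, c, d, index, dof)  # → 2 [0, 0, 2] [2, 2, 0] 3 2
```

The output reproduces the values derived by hand in Example 2.1: Val(Σ) = 2, c = (0, 0, 2), d = (2, 2, 0), ν_S = 3, and DOF = 2.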
Example 2.1
We illustrate the above concepts with the simple pendulum, a DAE of differentiation index 3:

0 = f_1 = x″ + xλ
0 = f_2 = y″ + yλ − g
0 = f_3 = x² + y² − L².   (2.7)

The state variables are x, y, and λ; g is gravity and L > 0 is the length of the pendulum. The signature matrix and System Jacobian are

Σ:       x    y    λ    c_i        J:       x″   y″   λ
  f_1    2•   −    0◦   0            f_1    1    0    x
  f_2    −    2◦   0•   0            f_2    0    1    y
  f_3    0◦   0•   −    2            f_3″   2x   2y   0
  d_j    2    2    0

Positions on a HVT of Σ are marked with •, and the other positions where d_j − c_i = σ_ij with ◦. A "−" entry in Σ denotes −∞, and the zero entries of J are identical zeros. The row and column labels of J, showing equations and variables differentiated to orders c_i and d_j, aim to remind the reader of the formula for J in (2.6). (When we present a DAE example, we also present its signature matrix Σ, the canonical offset pair (c; d), and the associated System Jacobian J.)

Since det(J) = −2(x² + y²) = −2L² ≠ 0, J is nonsingular, and the SA succeeds. The derivatives x″, y″, λ occur in a jointly linear way in (2.7), so a consistent point is

(t, x^(<2), y^(<2), λ^(<0)) = (t, x, x′, y, y′)

that satisfies (2.4) in stages k = −2, −1, that is, f_3 = f_3′ = 0. The structural index is ν_S = max_i c_i + 1 = 2 + 1 = 3, since some d_j = d_3 = 0, and DOF = Val(Σ) = ∑_j d_j − ∑_i c_i = 4 − 2 = 2.

The solution scheme prescribed by the canonical (c; d) is shown in Table 2.1.

  k     solve                          for                          using                           Jacobian
  −2    f_3                            x, y                         —                               [2x 2y]
  −1    f_3′                           x′, y′                       x, y                            [2x 2y]
  ≥0    f_1^(k), f_2^(k), f_3^(k+2)    λ^(k), x^(k+2), y^(k+2)      λ^(<k), x^(<k+2), y^(<k+2)      J

Table 2.1: Solution scheme for the simple pendulum DAE.
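To make the last row of Table 2.1 concrete, here is a small numeric sketch of the stage k = 0 solve (our illustration, not code from the paper): at a consistent point, the scheme solves the 3×3 linear system whose matrix is the System Jacobian J for (x″, y″, λ). The values L = 1 and g = 9.8 are assumed sample data.

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for cc in range(k, n + 1):
                M[r][cc] -= f * M[k][cc]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

# Consistent point for the pendulum with assumed L = 1, g = 9.8:
# f3 = x^2 + y^2 - L^2 = 0 and f3' = 2(x x' + y y') = 0 both hold.
g, L = 9.8, 1.0
x, y, xp, yp = 0.6, 0.8, 0.8, -0.6

# Stage k = 0: solve f1 = f2 = f3'' = 0 for (x'', y'', lambda), where
# f1 = x'' + x*lam, f2 = y'' + y*lam - g, f3'' = 2(x'^2 + x x'' + y'^2 + y y'').
A = [[1.0, 0.0, x],     # the System Jacobian J of (2.7)
     [0.0, 1.0, y],
     [2*x, 2*y, 0.0]]
rhs = [0.0, g, -2*(xp**2 + yp**2)]
xpp, ypp, lam = solve3(A, rhs)
print(xpp, ypp, lam)  # approximately -5.304  2.728  8.84
```

For this point the computed multiplier agrees with the closed form λ = g·y + (x′² + y′²) for L = 1, a quick sanity check on the scheme.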
3 Structural analysis's failures.

We discuss the Σ-method's failures in this section. Hidden symbolic cancellation is the easiest way to make the Σ-method fail with a structurally singular System Jacobian [16]; see §3.1. In §3.2 we treat failures where the System Jacobian is structurally nonsingular but still identically singular.

Below, we write u ≢ 0 to mean that u is generically nonzero (that is, not identically zero) for all values of the variables occurring in the expressions that define u. This u may be a scalar, a vector, or a matrix, depending on context. Similarly, we use det(A) ≢ 0 to mean that A is generically nonsingular, that is, not identically singular.

3.1 Symbolic cancellation may cause failure.

In the encoding of a DAE, an equation f_1 may be, for instance, x_1 + (x_1 x_2)′ − x_1′ x_2 or x_1 + x_2 + cos² x_1′ + sin² x_1′. We say a symbolic cancellation occurs in f_1, because it simplifies to x_1 + x_1 x_2′ and x_1 + x_2 + 1, respectively. That is, f_1 does not truly depend on x_1′. However, we note that the problem of detecting such true dependence (which is equivalent to recognizing zero) in arbitrary expressions is unsolvable in general [28].

Codes like DAETS [18] and
DAESA [19,25], which are implemented through operator overloading and do not perform symbolic simplifications, compute a formal σ̃_ij instead of a true one when constructing the signature matrix. For example, both codes would find for the f_1 above the formal σ̃_11 = 1, whereas the true σ_11 = 0. By a formal σ̃_ij, we mean that x_j^(σ̃_ij) appears as a highest-order derivative (HOD) in the encoding of an equation f_i, while a true σ_ij means that f_i is not constant with respect to a HOD x_j^(σ_ij) and thus truly depends on it—equivalently ∂f_i / ∂x_j^(σ_ij) ≢
0. Obviously σ̃_ij ≥ σ_ij.

For a formally computed Σ̃ = (σ̃_ij), a valid offset pair (c̃; d̃) is also found, and a System Jacobian J̃ is derived from (c̃; d̃) and Σ̃ by (2.6). Suppose symbolic cancellations happen in some f_i and make σ̃_ij > σ_ij. Then f_i does not truly depend on x_j^(σ̃_ij), and J̃_ij is identically zero by (2.6), whether d̃_j − c̃_i = σ̃_ij holds or not. In this case, J̃ has more identically zero entries than does a J based on the true Σ and (c; d), and hence is more likely to be structurally singular.

Overestimating some σ_ij of Σ may seem dangerous to the SA's success. Fortunately, modern modeling environments usually perform simplifications on the problem formulation [5,8,30]. They can reduce the occurrence of a structurally singular J when SA is applied. Theorems 5.1 and 5.2 in [16] also ensure that, if Val(Σ̃) = Val(Σ) and det(J) ≢ 0, then an offset pair (c̃; d̃) of the formal Σ̃ is also valid for Σ, and det(J̃) = det(J) ≢ 0. In this case, such an overestimation merely treats some identically zero entries of J as nonzeros and makes the solution scheme slightly less efficient; see [16, Examples 5.1 and 5.2]. By the same theorems, in the case Val(Σ̃) > Val(Σ), J̃ must be structurally singular.

3.2 SA can fail when J is structurally nonsingular.

Hereafter we focus on the case where an identically singular System Jacobian J is structurally nonsingular—that is, there exists a HVT T of Σ such that J_ij ≢ 0 for all (i, j) ∈ T. We shall simply say "identically singular" to refer to this case.

When J is identically singular, the DAE may still be solvable, but the way its equations are written may not properly reflect its structure. For example, if the pendulum DAE (2.7) is rewritten as 0 = f̄ = Mf, with M a random nonsingular constant 3 × 3 matrix, then every row of the signature matrix is [2, 2, 0], the canonical offset pair is (c; d) = (0, 0, 0; 2, 2, 0), and the resulting System Jacobian is identically singular [15].

Example 3.1
We illustrate a failure case with the following DAE in [3, p. 23]:

0 = f_1 = x′ + ty′ − h_1(t)
0 = f_2 = x + ty − h_2(t)

Σ:       x    y    c_i        J:       x′   y′
  f_1    1•   1◦   0            f_1    1    t
  f_2    0◦   0•   1            f_2′   1    t
  d_j    1    1

The SA fails since det(J) ≡ 0. Here J is identically singular but not structurally singular. (The original formulation in [3] denotes the driving functions by f_1, f_2.)

One simple fix is to replace f_1 by f̄_1 = −f_1 + f_2′, which results in the problem below; cf. [9, Example 5]:

0 = f̄_1 = y + h_1(t) − h_2′(t)
0 = f_2 = x + ty − h_2(t)

Σ̄:      x    y    c_i        J̄:       x    y
  f̄_1   −    0•   0            f̄_1    0    1
  f_2    0•   0◦   0            f_2     1    t
  d_j    0    0

Since det(J̄) = −1, the SA succeeds. Notice Val(Σ̄) = 0 < 1 = Val(Σ). This is a simple illustration of our linear combination method in §4.1.

Another fix is to introduce z = x + ty and eliminate x in f_1 and f_2:

0 = f̄_1 = −y + z′ − h_1(t)
0 = f̄_2 = z − h_2(t)

Σ̄:      y    z    c_i        J̄:       y    z′
  f̄_1   0•   1◦   0            f̄_1    −1   1
  f̄_2   −    0•   1            f̄_2′   0    1
  d_j    0    1

Again det(J̄) = −1, and the SA succeeds. After solving for y and z, we can obtain x = z − ty. This fix also gives Val(Σ̄) = 0 < 1 = Val(Σ), and is a simple illustration of our expression substitution method in §4.2.

In both fixes, reducing Val(Σ) may be the key to deriving a better problem formulation of a DAE. Our conversion methods aim to do so, and are the main contribution of this article.

4 Conversion methods.

We present two conversion methods that attempt to fix SA's failures in a systematic way. The first method is based on replacing an existing equation by a linear combination of some equations and derivatives of them. We call this method the linear combination (LC) method and describe it in §4.1. The second method is based on substituting newly introduced variables for some expressions and appending the corresponding equations; we call it the expression substitution (ES) method and describe it in §4.2.

For both methods, we assume that Val(Σ) is finite and that a System Jacobian J is identically singular but structurally nonsingular. We also assume that the equations in (2.1) are sufficiently differentiable, so that our methods fit into the Σ-method theory; see Theorem 4.2 in [22].

After a conversion, we denote the signature matrix of the resulting DAE as Σ̄ and its System Jacobian as J̄. If Val(Σ̄) is finite and J̄ is identically singular still, then we can perform another conversion, using either of the methods, provided the corresponding conditions are satisfied. Suppose a sequence of conversions produces a solvable DAE with Val(Σ) ≥ 0 and a generically nonsingular J. Given the fact that each conversion reduces the value of the signature matrix by at least one, the total number of conversions does not exceed the value of the original signature matrix.
If the resulting system is SIP after a conversion, that is, Val(Σ̄) = −∞, then we say the original DAE is ill posed.

4.1 Linear combination method.

Let u = [u_1, ..., u_n]^T be a nonzero n-vector function in the cokernel of J, that is, u ∈ coker(J), or equivalently J^T u = 0. We consider J and u as functions of t and derivatives of the x_j(t)'s, j = 1:n.

For convenience, denote

σ(x_j, w) = { the order of the highest derivative to which x_j occurs in w; or −∞ if x_j does not occur in w.   (4.1)

Here w can be a scalar, a vector, or a matrix, depending on context. This notation is a generalization of the (i, j) entry of Σ: σ_ij = σ(x_j, f_i).

Lemma 4.1 (Griewank's Lemma) [17] Let w be a function of t, the x_j(t), j = 1:n, and derivatives of them. Denote w^(p) = d^p w / dt^p, where p ≥ 0. If σ(x_j, w) ≤ q, then

∂w / ∂x_j^(q) = ∂w′ / ∂x_j^(q+1) = ··· = ∂w^(p) / ∂x_j^(q+p).   (4.2)

Denote

I = { i | u_i ≢ 0 },   c = min_{i∈I} c_i,   and   L = { i ∈ I | c_i = c }.   (4.3)

We prove two preliminary lemmas before the main Theorem 4.1, on which the LC method is based.

Lemma 4.2
Assume that u ∈ coker(J) and u ≢ 0. If

σ(x_j, u) < d_j − c   for all j = 1:n,   (4.4)

then σ(x_j, f̄) < d_j − c for all j = 1:n, where

f̄ = ∑_{i∈I} u_i f_i^(c_i − c).   (4.5)

Proof
The formula for c gives c_i − c ≥ 0 for all i ∈ I. By (2.2), σ(x_j, f_i) = σ_ij ≤ d_j − c_i. Applying Griewank's Lemma (4.2) to (2.6) with w = f_i, q = d_j − c_i, and p = c_i − c yields

J_ij = ∂f_i / ∂x_j^(d_j − c_i) = ∂f_i^(c_i − c) / ∂x_j^(d_j − c_i + c_i − c) = ∂f_i^(c_i − c) / ∂x_j^(d_j − c)   for i ∈ I and all j = 1:n.   (4.6)

This shows that each f_i^(c_i − c), i ∈ I, depends on x_j^(≤ d_j − c) only. Then for all j = 1:n,

∂f̄ / ∂x_j^(d_j − c)
  = ∂( ∑_{i∈I} u_i f_i^(c_i − c) ) / ∂x_j^(d_j − c)   [by the definition of f̄ in (4.5)]
  = ∑_{i∈I} u_i · ∂f_i^(c_i − c) / ∂x_j^(d_j − c)   [by (4.4)]
  = ∑_{i∈I} u_i J_ij   [by (4.6)]   (4.7)
  = (J^T u)_j = 0   [since u ∈ coker(J)].

Hence f̄ depends on x_j^(<d_j − c) only, for all j—this results in the inequality in (4.5). ⊓⊔

Lemma 4.3
Assume that an n × n signature matrix Σ has a finite Val(Σ) and a valid offset pair (c; d). Given a row index l, if we replace in row l all entries σ_lj by σ̄_lj < d_j − c_l, then Val(Σ̄) < Val(Σ), where Σ̄ is the resulting signature matrix.

Proof Since σ̄_lj < d_j − c_l for all j, the intersection of a HVT T̄ of Σ̄ with row l is a position (l, r) with σ̄_lr < d_r − c_l. Then

Val(Σ̄) = ∑_{(i,j)∈T̄} σ̄_ij = σ̄_lr + ∑_{(i,j)∈T̄\{(l,r)}} σ_ij < ∑_j d_j − ∑_i c_i = Val(Σ).   ⊓⊔

The LC method is based on the following theorem.
Theorem 4.1
Let I, c, and L be as defined in (4.3). If we replace an equation f_l, l ∈ L, by f̄ in (4.5), then Val(Σ̄) < Val(Σ), where Σ̄ is the signature matrix of the resulting DAE.

Proof By Lemma 4.2, such a replacement results in σ̄_lj = σ(x_j, f̄_l) < d_j − c_l for all j = 1:n. Immediate from Lemma 4.3 is Val(Σ̄) < Val(Σ). ⊓⊔

Usually we write f̄ as f̄_l in the resulting DAE.

We call (4.4) the LC condition; it is merely sufficient for the strict decrease: if (4.4) becomes σ(x_j, u) ≤ d_j − c for all j = 1:n, with equality for some j, then we can achieve only Val(Σ̄) ≤ Val(Σ), while the strict "<" may not hold.

Example 4.1
We illustrate the LC method with the following simple example:

0 = f_1 = −x_1′ + x_3
0 = f_2 = −x_2′ + x_4
0 = f_3 = x_1 x_2 + g_1(t)
0 = f_4 = x_1 x_4 + x_2 x_3 + x_1 + x_2 + g_2(t),

where g_1 and g_2 are driving functions.

Σ:       x_1  x_2  x_3  x_4  c_i        J:       x_1′  x_2′  x_3  x_4
  f_1    1•   −    0◦   −    0            f_1    −1    0     1    0
  f_2    −    1◦   −    0•   0            f_2    0     −1    0    1
  f_3    0◦   0•   −    −    1            f_3′   x_2   x_1   0    0
  f_4    0*   0*   0•   0◦   0            f_4    0     0     x_2  x_1
  d_j    1    1    0    0

An entry σ_ij marked with * (shaded in the original) is a position (i, j) where d_j − c_i > σ_ij ≥ 0, whence J_ij ≡ 0 by (2.6). The SA fails here since det(J) ≡ 0.

Compute u = (x_2, x_1, 1, −1)^T ∈ coker(J). Then (4.3) becomes

I = { i | u_i ≢ 0 } = {1, 2, 3, 4},   c = min_{i∈I} c_i = 0,   L = { i ∈ I | c_i = c } = {1, 2, 4}.

Checking the condition (4.4) is not difficult; for example, σ(x_1, u) = 0 < 1 = d_1 − c. We pick l = 4 ∈ L (we shall reason below why this choice is desirable) and replace f_4 by

f̄_4 = ∑_{i∈I} u_i f_i^(c_i − c) = x_2 f_1 + x_1 f_2 + f_3′ − f_4 = −x_1 − x_2 + g_1′(t) − g_2(t).

The resulting DAE is 0 = (f_1, f_2, f_3, f̄_4), with

Σ̄:      x_1  x_2  x_3  x_4  c_i        J̄:       x_1′  x_2′  x_3  x_4
  f_1    1◦   −    0•   −    0            f_1    −1    0     1    0
  f_2    −    1◦   −    0•   0            f_2    0     −1    0    1
  f_3    0•   0◦   −    −    1            f_3′   x_2   x_1   0    0
  f̄_4   0◦   0•   −    −    1            f̄_4′  −1    −1    0    0
  d_j    1    1    0    0

Now Val(Σ̄) = 0 < 1 = Val(Σ). The SA succeeds at all points where det(J̄) = x_1 − x_2 ≠ 0.

From (4.3) and (4.5), we can recover the replaced equation f_l by

f_l = ( f̄_l − ∑_{i∈I\{l}} u_i f_i^(c_i − c) ) / u_l.   (4.8)

Provided u_l ≠ 0 at all t in the interval of interest, it is not difficult to show that the original DAE and the resulting one have the same solution, if there exists one. From our experience, it is desirable to choose a row index l ∈ L such that u_l is an expression that never becomes zero; for example, a nonzero constant, x_1² + 1, or 2 + cos x_2. Such a choice of l guarantees that the resulting DAE is "equivalent" to the original DAE, in the sense that they always have the same solution if there exists one. The reader is referred to [31] for more details. We consider a constant nonzero u_l as the most preferable choice among all l ∈ L, and derive a set L̄ that contains all l for such u_l: L̄ = { l ∈ L | u_l is constant }.

We summarize the steps of the LC method.
1) Obtain a symbolic form of J.
2) Compute a u ∈ coker(J).
3) Derive I, c, and L as defined in (4.3).
4) Check condition (4.4). If it is not satisfied, then set L ← ∅ to mean that the LC method is not applicable; otherwise proceed to the next step.
5) Set L̄ ← { l ∈ L | u_l is constant }. If L̄ ≠ ∅, then choose an l ∈ L̄; otherwise choose an l ∈ L.
6) Replace f_l by f̄_l = f̄ as defined in (4.5).

The sets L and L̄ are used to decide on a desirable conversion method; see Table 4.1.

We show below that the LC method cannot fix the following (artificially constructed) DAE (4.9), because the condition (4.4) is not satisfied.

Example 4.2
Consider

0 = f_1 = x_1 + e^(−x_1′ − x_2 x_2″) + h_1(t)
0 = f_2 = x_1 + x_2 x_2′ + x_2² + h_2(t).   (4.9)

Σ:       x_1  x_2  c_i        J:       x_1′  x_2″
  f_1    1•   2◦   0            f_1    −a    −a x_2
  f_2    0◦   1•   1            f_2′   1     x_2
  d_j    1    2

Here h_1 and h_2 are given driving functions, and a = e^(−x_1′ − x_2 x_2″). Obviously det(J) ≡ 0. Compute

u = (a^(−1), 1)^T = (e^(x_1′ + x_2 x_2″), 1)^T ∈ coker(J).

Then (4.3) becomes

I = { i | u_i ≢ 0 } = {1, 2},   c = min_{i∈I} c_i = 0,   and   L = { i ∈ I | c_i = c } = {1}.

Obviously, σ(x_j, u) = d_j − c for j = 1, 2, which violates (4.4). Choosing l = 1 ∈ L and replacing f_1 by

f̄_1 = u_1 f_1 + u_2 f_2′ = b + x_1′ + x_2 x_2″ + (x_2′)² + 2 x_2 x_2′ + h_2′(t)

results in the DAE 0 = (f̄_1, f_2). Here b = a^(−1) (x_1 + h_1(t)) + 1, and

Σ̄:      x_1  x_2  c_i        J̄:       x_1′  x_2″
  f̄_1   1•   2◦   0            f̄_1    b     b x_2
  f_2    0◦   1•   1            f_2′    1     x_2
  d_j    1    2

The SA fails still, since det(J̄) ≡ 0. Now Val(Σ̄) = Val(Σ) = 2.

4.2 Expression substitution method.

Let v = [v_1, ..., v_n]^T be a nonzero n-vector function in the kernel of J, that is, v ∈ ker(J), or equivalently Jv = 0. Denote

𝒥 = { j | v_j ≢ 0 },   s = |𝒥|,   M = { i | d_j − c_i = σ_ij for some j ∈ 𝒥 },   and   c = max_{i∈M} c_i.   (4.10)

We choose an l ∈ 𝒥, and introduce s − 1 new variables

y_j = x_j^(d_j − c) − (v_j / v_l) · x_l^(d_l − c)   for all j ∈ 𝒥 \ {l}.   (4.11)

In each f_i, we replace every x_j^(σ_ij) = x_j^(d_j − c_i) with j ∈ 𝒥 \ {l} by

( y_j + (v_j / v_l) · x_l^(d_l − c) )^(c − c_i).   (4.12)

From the formula (4.10) for M, these replacements (or substitutions) occur only in f_i's with i ∈ M, because at least one equality d_j − c_i = σ_ij must hold for some j ∈ 𝒥. Hence they use the fact that, for such an x_j^(σ_ij), i ∈ M and j ∈ 𝒥 \ {l},

x_j^(σ_ij) = x_j^(d_j − c_i) = ( x_j^(d_j − c) )^(c − c_i) = ( y_j + (v_j / v_l) · x_l^(d_l − c) )^(c − c_i).

After the replacements, denote each equation by f̄_i (for all i ∉ M, f_i and f̄_i are the same). Equivalent to (4.11) are the s − 1 equations

0 = g_j = −y_j + x_j^(d_j − c) − (v_j / v_l) · x_l^(d_l − c)   for all j ∈ 𝒥 \ {l}   (4.13)

that prescribe the substitutions in (4.12). Appending (4.13) to the f̄_i's results in an enlarged DAE consisting of equations 0 = (f̄_1, ..., f̄_n) and 0 = g_j for all j ∈ 𝒥 \ {l}, in variables x_1, ..., x_n and y_j for all j ∈ 𝒥 \ {l}.

The ES method is based on the following theorem.
Theorem 4.2
Let 𝒥, s, M, and c be as defined in (4.10). Assume that

σ(x_j, v) { < d_j − c if j ∈ 𝒥, ≤ d_j − c otherwise },   and   d_j − c ≥ 0 for all j ∈ 𝒥.   (4.14)

For any l ∈ 𝒥, if we
1) introduce the s − 1 new variables y_j, j ∈ 𝒥 \ {l}, as defined in (4.11),
2) perform the substitutions in the f_i, for all i = 1:n, by (4.12), and
3) append the s − 1 equations g_j, j ∈ 𝒥 \ {l}, as defined in (4.13),
then Val(Σ̄) < Val(Σ), where Σ̄ is the signature matrix of the resulting DAE.

We call (4.14) the ES conditions, which are again merely sufficient for Val(Σ̄) < Val(Σ).

Example 4.3
We illustrate the ES method on the DAE (4.9). Suppose we choose v = [x_2, −1]^T ∈ ker(J). Then (4.10) becomes

𝒥 = {1, 2},   s = |𝒥| = 2,   M = {1, 2},   and   c = max_{i∈M} c_i = c_2 = 1.

We can apply the ES method as the conditions (4.14) hold:

σ(x_1, v) = −∞ ≤ d_1 − c − 1 = 1 − 1 − 1 = −1,   d_1 − c = 1 − 1 = 0 ≥ 0,
σ(x_2, v) = 0 ≤ d_2 − c − 1 = 2 − 1 − 1 = 0,   d_2 − c = 2 − 1 = 1 ≥ 0.

We choose l = 2 ∈ 𝒥. Now 𝒥 \ {l} = {1}. Using (4.11) and (4.13), we introduce for x_1 a new variable

y_1 = x_1^(d_1 − c) − (v_1 / v_2) · x_2^(d_2 − c) = x_1 + x_2 x_2′,

and append the equation 0 = g_1 = −y_1 + x_1 + x_2 x_2′. Then we replace x_1′ by (y_1 − x_2 x_2′)′ in f_1 to obtain f̄_1, and replace x_1 by y_1 − x_2 x_2′ in f_2 to obtain f̄_2. The resulting DAE and its SA results are shown below:

0 = f̄_1 = x_1 + e^(−y_1′ + (x_2′)²) + h_1(t)
0 = f̄_2 = y_1 + x_2² + h_2(t)
0 = g_1 = −y_1 + x_1 + x_2 x_2′

Σ̄:      x_1  x_2  y_1  c_i        J̄:       x_1  x_2′      y_1′
  f̄_1   0◦   1◦   1•   0            f̄_1    1    2x_2′γ    −γ
  f̄_2   −    0•   0◦   1            f̄_2′   0    2x_2      1
  g_1    0•   1◦   0    0            g_1     1    x_2       0
  d_j    0    1    1

Here γ = e^(−y_1′ + (x_2′)²). Now Val(Σ̄) = 1 < 2 = Val(Σ). The SA succeeds at all points where det(J̄) = 2γ (x_2 + x_2′) − x_2 ≠ 0.

For the analysis below, we make two simplifying assumptions: (a) the nonzero v_j's are in the first s positions of v, that is, v = [v_1, ..., v_s, 0, ..., 0]^T, so that 𝒥 = {1, ..., s} in (4.10); and (b) we introduce one more variable y_l = x_l^(d_l − c) for the chosen l ∈ 𝒥, and append correspondingly one more equation 0 = g_l = −y_l + x_l^(d_l − c).

Lemma 4.4
Let (c; d) = (c_1, ..., c_n; d_1, ..., d_n) be a valid offset pair of Σ. Let c̃ and d̃ be the two (n + s)-vectors defined as

d̃_j = { d_j if j = 1:n;  c if j = n+1 : n+s }   and   c̃_i = { c_i if i = 1:n;  c if i = n+1 : n+s },   (4.15)

where c is as defined in (4.10). Then the signature matrix Σ̄ of the DAE resulting from the ES method has the form in Figure 4.1.

The proof of this lemma is rather technical, so we present it in Appendix A. Using Lemma 4.4, we prove Theorem 4.2.
Proof
Let T be a HVT of Σ̄. By Lemma 4.4,

Val(Σ̄) = ∑_{(i,j)∈T} σ̄_ij
  ≤ ∑_{(i,j)∈T} ( d̃_j − c̃_i )   [since d̃_j − c̃_i ≥ σ̄_ij for all i, j]
  = ∑_{j=1}^{n+s} d̃_j − ∑_{i=1}^{n+s} c̃_i
  = ∑_{j=1}^{n} d_j + sc − ∑_{i=1}^{n} c_i − sc   [by (4.15)]
  = ∑_{j=1}^{n} d_j − ∑_{i=1}^{n} c_i = Val(Σ).

[Figure 4.1 appears here: the form of Σ̄ for the DAE resulting from the ES method, with the x-columns and y-columns, the f̄-rows and g-rows, and regions marked <, ≤, and = indicating the relation between σ̄_ij and d̃_j − c̃_i; for instance, every σ̄_ij whose (i, j) position is in the region marked "≤" satisfies σ̄_ij ≤ d̃_j − c̃_i. The y_l column has a single finite entry, an "=" at position (n+l, n+l).]

We assert Val(Σ̄) < Val(Σ), and show that equality leads to a contradiction. Assume that Val(Σ̄) = Val(Σ). Then there exists a transversal T of Σ̄ such that

d̃_j − c̃_i = σ̄_ij > −∞   for all (i, j) ∈ T.   (4.16)

Consider the positions (i_1, 1), ..., (i_s, s) ∈ T in the first s columns. Since the y_l column has only one finite entry, σ̄_{n+l, n+l} = 0, position (n+l, n+l) is in T, and thus the row numbers i_1, ..., i_s can only take values among

1, 2, ..., n, n+1, ..., n+l−1, n+l+1, ..., n+s.

Here only s − 1 of these values exceed n, so at least one of i_1, ..., i_s is among 1:n. In other words, there exists a position (r, j) ∈ T with 1 ≤ r ≤ n and 1 ≤ j ≤ s, in the "<" region in Figure 4.1. Hence d̃_j − c̃_r > σ̄_rj, which contradicts (4.16). Therefore Val(Σ̄) < Val(Σ).

Finally, we remove the y_l column and its matched row g_l. The resulting signature matrix still has value Val(Σ̄), since (n+l, n+l) ∈ T and σ̄_{n+l, n+l} = 0. ⊓⊔

From the steps of applying the ES method, we can recover the original DAE by reverting the expression substitutions and removing the introduced variables y_j and equations g_j. Similar to the LC method, the ES method also guarantees that, provided v_l ≠ 0 at all t in the interval of interest, the original DAE and the resulting one have (at least locally) the same solution, if there is one; this is shown in [31]. Again, it is desirable to choose an l ∈ 𝒥 such that v_l is a (nonzero) constant. With this choice, the equivalence of the original DAE and the resulting one is always guaranteed. We hence derive a set 𝒥̄, a subset of 𝒥, that contains these l's for which v_l is constant.

We summarize the steps of the ES method.
1) Obtain a symbolic form of J.
2) Compute a v ∈ ker(J).
3) Derive 𝒥, s, M, and c as defined in (4.10).
4) Check the conditions (4.14). If they are not satisfied, then set 𝒥 ← ∅ to mean that the ES method is not applicable; otherwise proceed to the next step.
5) Set 𝒥̄ ← { l ∈ 𝒥 | v_l is constant }. If 𝒥̄ ≠ ∅, then choose an l ∈ 𝒥̄; otherwise choose an l ∈ 𝒥.
6) For each j ∈ 𝒥 \ {l}, introduce y_j as defined in (4.11), and append the corresponding equation g_j as defined in (4.13).
7) Replace each x_j^(d_j − c_i) in f_i by ( y_j + (v_j / v_l) · x_l^(d_l − c) )^(c − c_i), for all i ∈ M and all j ∈ 𝒥 \ {l}.
8) (Optional) For consistency, rename the variables y_j, j ∈ 𝒥 \ {l}, to x_{n+1}, ..., x_{n+s−1}, and rename the equations g_j, j ∈ 𝒥 \ {l}, to f_{n+1}, ..., f_{n+s−1}.

The sets 𝒥 and 𝒥̄ are used to decide on a desirable conversion method; see below.

4.3 Which method to choose?

We present our rationale for choosing a conversion method in Table 4.1 and base our choice on the following observations. For some failure cases, our experiments find that usually one of the LC condition (4.4) and the ES conditions (4.14) is satisfied while the other is not, so we can apply only one conversion method. For the other cases, where both methods are applicable, we consider as a priority the equivalence between the original DAE and the resulting one. As discussed in §4.1 and §4.2, equivalence is guaranteed by a constant u_l for the LC method, respectively a constant v_l for the ES method. Our experience suggests that such a constant frequently exists for one of the methods. If both methods guarantee equivalence, or neither of them does, then we choose the LC method, as it replaces only one existing equation and maintains the problem size.

We summarize in Table 4.1 the above logic of finding the desirable conversion in the sense of equivalence. For instance, suppose the LC method finds L ≠ ∅ and L̄ = ∅, while the ES method finds 𝒥̄ ≠ ∅.
Then either method can provide some conversion for reducing Val(S). However, the LC method does not guarantee the equivalence while the ES method does, so we choose the ES method with a column index l ∈ J̄.

                               ES method
    LC method           J̄ ≠ ∅    J̄ = ∅ and J ≠ ∅    J = ∅
    L̄ ≠ ∅                LC           LC              LC
    L̄ = ∅ and L ≠ ∅      ES           LC              LC
    L = ∅                ES           ES               –

    Table 4.1: Rationale for choosing a conversion method.

5 Examples

We show how to iterate the LC method on a linear constant coefficient DAE in §5.1, and how the ES method handles a nonlinear DAE in §5.2.

5.1 A linear constant coefficient DAE

The following DAE is by Scholz and Steinbrecher [28]. We consider it with parameters β = ε = α₁ = α₂ = δ = 1 and γ = −1. Superscripts are used as indices there, while we use subscripts instead: their equations g¹, g² are renamed f₃, f₄, and their variables y¹, y² are renamed x₃, x₄. The DAE is

    0 = f₁ = −x₁′ + x₃ − b₁
    0 = f₂ = −x₂′ + x₄ − b₂
    0 = f₃ = x₂ + x₃ + x₄ − b₃
    0 = f₄ = −x₁ + x₃ + x₄ − b₄,

where each b_i is a forcing function b_i(t), i = 1:4. Its signature matrix and System Jacobian are

    S:         x₁   x₂   x₃   x₄     c_i
        f₁  [  1•   −    0    −  ]    0
        f₂  [  −    1•   −    0  ]    0
        f₃  [  −    0    0•   0  ]    0
        f₄  [  0    −    0    0• ]    0
        d_j    1    1    0    0          Val(S) = 2

    J:         x₁′  x₂′  x₃   x₄
        f₁  [ −1    0    1    0 ]
        f₂  [  0   −1    0    1 ]
        f₃  [  0    0    1    1 ]
        f₄  [  0    0    1    1 ]

Here a "−" in S stands for −∞, and a • marks a position in a highest-value transversal (HVT). Below, a superscript on S and J means an iteration number, not a power. Since det(J) ≡ 0, the SA fails.

Choose u = [0, 0, −1, 1]ᵀ ∈ coker(J). Using (4.3) gives I = {3, 4}, c̄ = 0, and L = {3, 4}. Since u is a constant vector, the condition (4.4) is surely satisfied, as σ(x_j, u) = −∞ for all j. Choosing l = 4 ∈ L and replacing f₄ by

    f₄¹ = Σ_{i∈I} u_i f_i^(c_i−c̄) = −f₃ + f₄ = −x₁ − x₂ + b₃ − b₄

results in 0 = (f₁, f₂, f₃, f₄¹), with

    S¹:        x₁   x₂   x₃   x₄     c_i
        f₁  [  1    −    0•   −  ]    0
        f₂  [  −    1•   −    0  ]    0
        f₃  [  −    0    0    0• ]    0
        f₄¹ [  0•   0    −    −  ]    1
        d_j    1    1    0    0          Val(S¹) = 1

    J¹:           x₁′  x₂′  x₃   x₄
        f₁     [ −1    0    1    0 ]
        f₂     [  0   −1    0    1 ]
        f₃     [  0    0    1    1 ]
        (f₄¹)′ [ −1   −1    0    0 ]

Again det(J¹) ≡ 0. We then apply the LC method again by choosing u¹ = [−1, −1, 1, 1]ᵀ ∈ coker(J¹). This gives I = {1, 2, 3, 4}, c̄ = 0, and L = {1, 2, 3}. Choosing l = 3 ∈ L and replacing f₃ by

    f₃² = −f₁ − f₂ + (f₄¹)′ + f₃ = x₂ + b₁ + b₂ + b₃′ − b₄′ − b₃

results in 0 = (f₁, f₂, f₃², f₄¹), with

    S²:        x₁   x₂   x₃   x₄     c_i
        f₁  [  1    −    0•   −  ]    0
        f₂  [  −    1    −    0• ]    0
        f₃² [  −    0•   −    −  ]    1
        f₄¹ [  0•   0    −    −  ]    1
        d_j    1    1    0    0          Val(S²) = 0

    J²:           x₁′  x₂′  x₃   x₄
        f₁     [ −1    0    1    0 ]
        f₂     [  0   −1    0    1 ]
        (f₃²)′ [  0    1    0    0 ]
        (f₄¹)′ [ −1   −1    0    0 ]

and det(J²) = 1, so the SA succeeds. Note Val(S²) = 0 < Val(S¹) = 1 < Val(S) = 2.

5.2 A nonlinear DAE

The next problem is constructed from the simple pendulum by a linear transformation of its states x, y, λ:

    (x, y, λ)ᵀ = A·(x₁, x₂, x₃)ᵀ,    A = [ 1 1 0 ; 0 1 1 ; 1 0 1 ],

that is, x = x₁ + x₂, y = x₂ + x₃, and λ = x₁ + x₃. The resulting problem is

    0 = f₁ = (x₁+x₂)′′ + (x₁+x₂)(x₁+x₃)
    0 = f₂ = (x₂+x₃)′′ + (x₂+x₃)(x₁+x₃) − G
    0 = f₃ = (x₁+x₂)² + (x₂+x₃)² − L².    (5.1)

Its signature matrix and System Jacobian are

    S:         x₁   x₂   x₃     c_i
        f₁  [  2•   2    0  ]    0
        f₂  [  0    2    2• ]    0
        f₃  [  0    0•   0  ]    2
        d_j    2    2    2          Val(S) = 4

    J:          x₁′′    x₂′′      x₃′′
        f₁   [   1       1         0  ]
        f₂   [   0       1         1  ]
        f₃′′ [  2a     2(a+b)     2b  ]

Here a = x₁ + x₂ and b = x₂ + x₃. That det(J) ≡ 0 can be seen from the third row being 2a times the first plus 2b times the second. The LC method computes u = [2a, 2b, −1]ᵀ ∈ coker(J). Using (4.3) it finds I = { i | u_i ≠ 0 } = {1, 2, 3}, c̄ = min_{i∈I} c_i = 0, and L = { i ∈ I | c_i = c̄ } = {1, 2}. For all l ∈ L, u_l is not a constant, so L̄ = ∅. Since L ≠ ∅ and L̄ = ∅, we try the ES method to seek a conversion that guarantees equivalence.

We show below how the ES method reveals the linear transformation of the states without having knowledge about the equations. Compute v = [1, −1, 1]ᵀ ∈ ker(J) and, using (4.10), find J = {1, 2, 3}, s = |J| = 3, M = {1, 2, 3}, and c̄ = 2. Since σ(x_j, v) = −∞ and d_j − c̄ = 0 for all j ∈ J, the ES condition (4.14) is satisfied, and we have J̄ = J = {1, 2, 3}. Below we show the case l = 1 ∈ J̄ for example.

As J \ {l} = {2, 3}, we introduce new variables y₂ and y₃ for x₂ and x₃, respectively. Then (4.11) becomes

    y₂ = x₂^(d₂−c̄) − (v₂/v₁)·x₁^(d₁−c̄) = x₂ + x₁   and   y₃ = x₃^(d₃−c̄) − (v₃/v₁)·x₁^(d₁−c̄) = x₃ − x₁.

The corresponding two equations are

    0 = g₂ = −y₂ + x₂ + x₁   and   0 = g₃ = −y₃ + x₃ − x₁.

We first write explicitly the derivatives x₁′′, x₂′′, and x₃′′ in f₁ and f₂:

    0 = f₁ = x₁′′ + x₂′′ + (x₁+x₂)(x₁+x₃),
    0 = f₂ = x₂′′ + x₃′′ + (x₂+x₃)(x₁+x₃) − G.

Then we perform expression substitutions as described in the table.

    substitute      for     in
    y₂′′ − x₁′′     x₂′′    f₁, f₂
    y₃′′ + x₁′′     x₃′′    f₂
    y₂ − x₁         x₂      f₃
    y₃ + x₁         x₃      f₃

One may want to make the variable names consistent and do the same for the equation names. By Step 8 of the ES method, the variables y₂, y₃ are renamed x₄, x₅, while the equations g₂, g₃ are renamed f₄, f₅.
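The values Val(S) quoted in these examples are optima of an assignment problem over the transversals of the signature matrix. For matrices of this size they can be checked by brute force; the sketch below is ours for illustration, whereas serious implementations (e.g. DAESA [19,25]) solve a linear assignment problem instead.

```python
from itertools import permutations

NEG_INF = float("-inf")  # marks entries sigma_ij = -infinity (derivative absent)

def sigma_value(S):
    """Val(S): the largest sum S[0][p[0]] + ... + S[n-1][p[n-1]] over all
    transversals (column permutations p) of the square signature matrix S."""
    n = len(S)
    return max(sum(S[i][p[i]] for i in range(n)) for p in permutations(range(n)))
```

For the 3-by-3 signature matrix of (5.1), with rows [2, 2, 0], [0, 2, 2], [0, 0, 0], this gives 4; for the 5-by-5 signature matrix of (5.2) it gives 2, confirming the drop in Val.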
The resulting DAE is

    0 = f₁ = x₄′′ + (x₁+x₂)(x₁+x₃)
    0 = f₂ = (x₄+x₅)′′ + (x₂+x₃)(x₁+x₃) − G
    0 = f₃ = x₄² + (x₄+x₅)² − L²
    0 = f₄ = −x₄ + x₁ + x₂
    0 = f₅ = −x₅ + x₃ − x₁.    (5.2)

Its signature matrix and System Jacobian are

    S̃:        x₁   x₂   x₃   x₄   x₅     c_i
        f₁  [  0    0    0•   2    −  ]    0
        f₂  [  0    0    0    2•   2  ]    0
        f₃  [  −    −    −    0    0• ]    2
        f₄  [  0    0•   −    0    −  ]    0
        f₅  [  0•   −    0    −    0  ]    0
        d_j    0    0    0    2    2          Val(S̃) = 2

    J̃:          x₁           x₂      x₃            x₄′′      x₅′′
        f₁   [ 2x₁+x₂+x₃    x₁+x₃   x₁+x₂           1         0 ]
        f₂   [ x₂+x₃        x₁+x₃   x₁+x₂+2x₃       1         1 ]
        f₃′′ [  0            0       0           2(x₄+m)     2m ]
        f₄   [  1            1       0              0         0 ]
        f₅   [ −1            0       1              0         0 ],

where m = x₄ + x₅. Note Val(S̃) = 2 < Val(S) = 4. The SA succeeds on (5.2): expanding det(J̃) and simplifying with f₄ = f₅ = 0 (so that x₁ + x₂ = x₄ and x₂ + x₃ = x₄ + x₅ = m), we have det(J̃) = −4(x₄² + (x₄+x₅)²), which by f₃ = 0 equals −4L² ≠ 0.

6 Conclusion

We proposed two conversion methods for improving the Σ-method's structural analysis. They convert a DAE with finite Val(S) and an identically (but not structurally) singular System Jacobian to a DAE that may have a nonsingular System Jacobian. A conversion guarantees that both DAEs have equivalent solutions (if any). The conditions for applying these methods can be checked automatically, and the main result of a conversion is Val(S̃) < Val(S), where S̃ is the signature matrix of the resulting DAE.

An implementation of these methods requires the following steps.
1) Compute a symbolic form of a System Jacobian J.
2) Find a vector in coker(J) [respectively ker(J)].
3) Check the LC condition (4.4) [respectively the ES condition (4.14)].
4) Generate the equations for the resulting DAE.

In general, the computational cost of a conversion depends on the size of the DAE, its sparsity, and the intricacy of its equations, and this cost is hard to determine in advance. For example, a DAE 0 = f may contain a generically nonsingular dense n × n matrix M that can contain any derivatives of the x_j's, typically of order lower than the d_jth. So far, all the fixes we have found for the failure cases we encountered are not difficult to compute.

In [32], a continuation of this paper, we shall show how to combine the conversion methods with block triangularization of a DAE. For DAEs whose J can be permuted into a block-triangular form [25, 26], we can locate the diagonal blocks that are singular and then apply a conversion to each such block, instead of working on the whole DAE. This approach improves the efficiency of finding a useful conversion for reducing Val(S). Using our block conversion methods, we shall show the remedies for the Campbell-Griepentrog robot arm [4], and for the transistor amplifier and the ring modulator in [10].

A Preliminary results and proof of Lemma 4.4
Let the notation be as at the start of the section describing the ES method.

Lemma A.1
Let r ∈ J \ {l} and w = y_r + (v_r/v_l)·x_l^(d_l−c̄). Then

    σ(x_j, w)  { < d_j − c̄   if j ∈ J \ {l},
               { ≤ d_j − c̄   otherwise.        (A.1)

Proof
Consider the case j = l ∈ J. Obviously σ(x_l, w) = d_l − c̄. Now consider the case j ≠ l. Since x_j can occur in w only inside v_r and v_l, we have σ(x_j, w) ≤ σ(x_j, v). It follows from (4.14) and the case j = l that (A.1) holds. ⊓⊔

Lemma A.2
Let r ∈ J \ {l}, i ∈ M, and

    w̄ = w^(c̄−c_i) = ( y_r + (v_r/v_l)·x_l^(d_l−c̄) )^(c̄−c_i).    (A.2)

Then

    σ(x_j, w̄)  { < d_j − c_i   if j ∈ J \ {l},
               { ≤ d_j − c_i   otherwise.      (A.3)

Proof
Since c̄ = max_{i∈M} c_i, we have c̄ − c_i ≥ 0 for all i ∈ M. From (A.2), combining σ(x_j, w̄) = σ(x_j, w) + (c̄ − c_i) with (A.1) immediately yields (A.3). ⊓⊔

Using the two assumptions before Lemma 4.4, we prove it below.
Proof
Write S̃ in Figure 4.1 in the following 2 × 2 block form:

    S̃ = [ S̃₁₁  S̃₁₂ ]
        [ S̃₂₁  S̃₂₂ ].    (A.4)

We verify below the relations between σ̃_ij and d̃_j − c̃_i in each block.

(1) S̃₁₁, the "<" part. Consider j, r ∈ J \ {l}. By (4.12), we substitute w̄ in (A.2) for every x_r^(d_r−c_i) in f_i, for all i = 1:n. By (A.3), σ(x_j, w̄) < d_j − c_i for all i ∈ M, so these expression substitutions do not introduce x_r^(d_r−c_i) in f_i, where r ∈ J \ {l}. Given M in (4.10), we have d_j − c_i > σ_ij for all i ∉ M and j ∈ J. Hence

    σ(x_j, f_i) < d_j − c_i   for j ∈ J \ {l}, i = 1:n.    (A.5)

What remains is the case j = l. From (4.11),

    x_r^(d_r−c̄) = y_r + (v_r/v_l)·x_l^(d_l−c̄).

Taking the partial derivatives of both sides with respect to x_l^(d_l−c̄) and applying Griewank's Lemma (4.2) with w = x_r^(d_r−c̄) and q = c̄ − c_i ≥ 0, i ∈ M, we have

    v_r/v_l = ∂x_r^(d_r−c̄) / ∂x_l^(d_l−c̄) = ∂x_r^(d_r−c̄+c̄−c_i) / ∂x_l^(d_l−c̄+c̄−c_i) = ∂x_r^(d_r−c_i) / ∂x_l^(d_l−c_i).    (A.6)

Then, after the substitutions,

    ∂f_i/∂x_l^(d_l−c_i) = ∂f_i/∂x_l^(d_l−c_i) + Σ_{r∈J\{l}} ∂f_i/∂x_r^(d_r−c_i) · ∂x_r^(d_r−c_i)/∂x_l^(d_l−c_i)    by the chain rule
                        = J_il + Σ_{r∈J\{l}} J_ir·(v_r/v_l)    by (A.6)
                        = (1/v_l)·Σ_{r∈J} J_ir·v_r = (1/v_l)·(Jv)_i = 0,    since Jv = 0.

This gives σ(x_l, f_i) < d_l − c_i for all i = 1:n. Together with (A.5) we have proved the "<" part in S̃₁₁.

(2) S̃₁₁, the "≤" part. The substitutions do not affect x_j, for all j ∉ J. By (A.3), such an x_j occurs in every w̄ of order ≤ d_j − c_i, where i ∈ M. Hence also σ(x_j, f_i) ≤ d_j − c_i for all i = 1:n and j ∉ J.

(3) S̃₁₂. Consider r ∈ J \ {l}. For an i ∈ M, y_r occurs of order c̄ − c_i in w̄ in (A.2). For all i = 1:n, if a substitution occurs for an x_r^(d_r−c_i) in f_i, then σ(y_r, f_i) = c̄ − c_i; otherwise σ(y_r, f_i) = −∞.
In either case σ(y_r, f_i) ≤ c̄ − c_i.

(4) S̃₂₁. Equalities hold on the diagonal and in the lth column, as x_r^(d_r−c̄) and x_l^(d_l−c̄) occur in g_r, where r ∈ J. What remains to show is the "<" part. Assume that j, r, l ∈ J are distinct. Then by (4.11) and (4.14),

    σ(x_j, g_r) = σ( x_j, y_r − x_r^(d_r−c̄) + (v_r/v_l)·x_l^(d_l−c̄) ) ≤ σ(x_j, v) < d_j − c̄.    (A.7)

(5) S̃₂₁, the "≤" part. Assume again that j, r, l are distinct, where r ∈ J and j ∉ J. Then replacing the "<" in (A.7) by "≤" proves the "≤" part in S̃₂₁.

(6) S̃₂₂. Consider r, j ∈ J. By 0 = g_l = −y_l + x_l^(d_l−c̄) and (4.11), y_j occurs in g_r only if j = r, and σ(y_j, g_j) =
0. Hence, on the diagonal lie zeros, and everywhere else is filled with −∞.

Also worth noting is that in the y_l column there is only one finite entry, σ̃_{n+l,n+l} =
0, and that in the g_l row there are only two finite entries, σ̃_{n+l,n+l} = 0 and σ̃_{n+l,l} = d_l − c̄.

Recalling (4.15) for the formulas of c̃_i and d̃_j of S̃, we can summarize that the above items (1)–(6) verify the relations between σ̃_ij and d̃_j − c̃_i in S̃ for all i, j = 1:n+s; see Figure 4.1. ⊓⊔

Acknowledgements
The authors acknowledge with thanks the financial support for this research: GT is supported in part by the McMaster Centre for Software Certification through the Ontario Research Fund, Canada; NSN is supported in part by the Natural Sciences and Engineering Research Council of Canada; and JDP is supported in part by the Leverhulme Trust, UK.
References
1. Barrio, R.: Performance of the Taylor series method for ODEs/DAEs. Appl. Math. Comput., 525–545 (2005)
2. Barrio, R.: Sensitivity analysis of ODEs/DAEs using the Taylor series method. SIAM J. Sci. Comput., 1929–1947 (2006)
3. Brenan, K.E., Campbell, S.L., Petzold, L.R.: Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, second edn. SIAM, Philadelphia (1996)
4. Campbell, S.L., Griepentrog, E.: Solvability of general differential-algebraic equations. SIAM Journal on Scientific Computing (2), 257–270 (1995)
5. Carpanzano, E., Maffezzoni, C.: Symbolic manipulation techniques for model simplification in object-oriented modelling of large scale continuous systems. Mathematics and Computers in Simulation (2), 133–150 (1998)
6. Kunkel, P., Mehrmann, V.: Index reduction for differential-algebraic equations by minimal extension. ZAMM-Journal of Applied Mathematics and Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik (9), 579–597 (2004)
7. Kunkel, P., Mehrmann, V.L.: Differential-Algebraic Equations: Analysis and Numerical Solution. European Mathematical Society, Zürich, Switzerland (2006)
8. MapleSim: Technological superiority in multi-domain physical modeling and simulation (2012).
9. Mattsson, S.E., Söderlind, G.: Index reduction in differential-algebraic equations using dummy derivatives. SIAM J. Sci. Comput. (3), 677–692 (1993)
10. Mazzia, F., Iavernaro, F.: Test set for initial value problem solvers. Tech. Rep. 40, Department of Mathematics, University of Bari, Italy (2003). http://pitagora.dm.uniba.it/~testset/
11. McKenzie, R.: Structural analysis based dummy derivative selection for differential-algebraic equations. Tech. rep., Cardiff University, UK (2015). Submitted to BIT Numerical Mathematics, Oct 2015
12. McKenzie, R.: Reducing the index of differential-algebraic equations by exploiting underlying structures. PhD thesis, School of Mathematics, Cardiff University, Senghennydd Road, Cardiff CF24 4AG, UK (2016)
13. McKenzie, R., Pryce, J.D.: Structural Analysis and Dummy Derivatives: Some Relations, pp. 293–299. Springer International Publishing, Cham (2015). DOI 10.1007/978-3-319-12307-3_42. URL http://dx.doi.org/10.1007/978-3-319-12307-3_42
14. McKenzie, R., Pryce, J.D.: Solving Differential-Algebraic Equations by Selecting Universal Dummy Derivatives, pp. 665–676. Springer International Publishing, Cham (2016). DOI 10.1007/978-3-319-30379-6_60. URL http://dx.doi.org/10.1007/978-3-319-30379-6_60
15. Nedialkov, N., Pryce, J.D.: DAETS user guide. Tech. Rep. CAS 08-08-NN, Department of Computing and Software, McMaster University, Hamilton, ON, Canada (2013). 68 pages; DAETS is available at
16. Nedialkov, N.S., Pryce, J.D.: Solving differential-algebraic equations by Taylor series (I): Computing Taylor coefficients. BIT Numerical Mathematics (3), 561–591 (2005)
17. Nedialkov, N.S., Pryce, J.D.: Solving differential-algebraic equations by Taylor series (II): Computing the system Jacobian. BIT Numerical Mathematics (1), 121–135 (2007)
18. Nedialkov, N.S., Pryce, J.D.: Solving differential-algebraic equations by Taylor series (III): The DAETS code. JNAIAM J. Numer. Anal. Indust. Appl. Math., 61–80 (2008)
19. Nedialkov, N.S., Pryce, J.D., Tan, G.: Algorithm 948: DAESA—a Matlab tool for structural analysis of differential-algebraic equations: Software. ACM Trans. Math. Softw. (2), 12:1–12:14 (2015)
20. Nedialkov, N.S., Tan, G., Pryce, J.D.: Exploiting fine block triangularization and quasilinearity in differential-algebraic equation systems (2016). McMaster University, Cardiff University. In preparation
21. Pantelides, C.C.: The consistent initialization of differential-algebraic systems. SIAM J. Sci. Stat. Comput., 213–231 (1988)
22. Pryce, J.D.: A simple structural analysis method for DAEs. BIT Numerical Mathematics (2), 364–394 (2001)
23. Pryce, J.D.: A simple approach to Dummy Derivatives for DAEs. Tech. rep., Cardiff University (2015). In preparation
24. Pryce, J.D., McKenzie, R.: A New Look at Dummy Derivatives for Differential-Algebraic Equations, pp. 713–723. Springer International Publishing, Cham (2016). DOI 10.1007/978-3-319-30379-6_64. URL http://dx.doi.org/10.1007/978-3-319-30379-6_64
25. Pryce, J.D., Nedialkov, N.S., Tan, G.: DAESA—a Matlab tool for structural analysis of differential-algebraic equations: Theory. ACM Trans. Math. Softw. (2), 9:1–9:20 (2015)
26. Pryce, J.D., Nedialkov, N.S., Tan, G.: Fine block triangular structure of DAEs and its uses (2016). Cardiff University, McMaster University. In preparation
27. Reissig, G., Martinson, W.S., Barton, P.I.: Differential-algebraic equations of index 1 may have an arbitrarily high structural index. SIAM J. Sci. Comput. 21