Index reduction of differential algebraic equations by differential algebraic elimination
Xiaolin Qin, Lu Yang, Yong Feng, Bernhard Bachmann, Peter Fritzson
Xiaolin Qin$^{a,b,*}$, Lu Yang$^{a}$, Yong Feng$^{a}$, Bernhard Bachmann$^{c}$, Peter Fritzson$^{b}$

$^{a}$ Chengdu Institute of Computer Applications, Chinese Academy of Sciences, Chengdu 610041, China
$^{b}$ Department of Computer and Information Science, Linköping University, Linköping SE-581 83, Sweden
$^{c}$ Department of Mathematics and Engineering, Bielefeld University of Applied Sciences, Bielefeld D-33609, Germany
Abstract
High index differential algebraic equations (DAEs) are ordinary differential equations (ODEs) with constraints and arise frequently in many mathematical models of physical phenomena and engineering fields. In this paper, we generalize the idea of differential elimination with the Dixon resultant to polynomially nonlinear DAEs. We propose a new algorithm for index reduction of DAEs and establish the notion of differential algebraic elimination, which provides the differential algebraic resultant of the enlarged system of original equations. To exploit the structure of DAEs, a variable pencil technique is given to determine the termination of differentiation. Moreover, we also provide a heuristic method for removing the extraneous factors from the differential algebraic resultant. Experiments show that the proposed algorithm outperforms existing ones on many examples taken from the literature.

Key words:
Index reduction; Differential algebraic resultant; Variable pencil; Differential algebraic equations
1. Introduction
Modeling with differential algebraic equations (DAEs) plays a vital role in a variety of applications [16], such as constrained mechanical systems, control theory, electrical circuits, chemical reaction kinetics, and many other areas. In general, it is difficult to solve a system of DAEs directly by numerical computation. The index of a DAE is a measure of the number of times it must be differentiated to obtain an equivalent low index system or ordinary differential equations (ODEs). There exist many different index concepts for specific DAEs, such as the differentiation index [1, 3], perturbation index [3, 12], tractability index [19], structural index [23], and Kronecker index [31]. There has been considerable research on general linear and low index DAEs [16, 19, 21, 31]. In particular, only some special DAEs can be solved by direct numerical methods [8, 18]. It is more difficult to solve systems of high index nonlinear DAEs [1, 2, 3, 11, 20, 23]. Therefore, index reduction techniques may be necessary to get a solution [1].

∗ Corresponding author
Email address: [email protected] (Xiaolin Qin )
Preprint, November 14, 2018

Index reduction in the pre-analysis of DAE solving is an active area of research. It is equivalent to applying a sequence of differentiations and eliminations to an input system of DAEs. In [21], Pantelides gave a systematic way to reduce high index DAEs to low index ones by selectively adding differentiated forms of the equations already appearing in the system. However, the algorithm can succeed yet not correctly in some instances, and it is restricted to first order [27]. Campbell's derivative array theory is computationally expensive, especially for computing the singular value decomposition of the Jacobian of the derivative array equations using nonlinear singular least squares methods [2]. In [20], Mattsson et al. proposed the dummy derivative method for index reduction based on Pantelides' algorithm, which takes an algebraic viewpoint. In [23], Pryce proposed the signature matrix method (also called the Σ-method), which can be viewed as an extension of Pantelides' method to any order. Recently, Wu et al. generalized the Σ-method from DAEs to partial differential algebraic equations with constraints (PDAEs) [32]. Qin et al. presented the structural analysis of high index DAEs for process simulation by the Σ-method [24]. But the Σ-method relies heavily on the system being square (i.e., having the same number of DAEs and dependent variables) and on its sparsity structure, and it shares the drawback that it can succeed yet not correctly on DAEs arising from specific types of problems.

A principal aim of this paper is the development of an efficient differential elimination approach for index reduction of DAEs that extends the direct elimination treatment of [34]. From the algebraic standpoint, differential elimination algorithms are key for simplifying systems of polynomial differential equations and computing formal power series solutions for them. The underlying theory is the differential algebra of Ritt [28] and Kolchin [15].
Differential elimination in algebraic elimination theory is an active field and a powerful tool with many important applications [7, 10, 26, 29, 34, 35]. Almost all authors focus on differential elimination theory for ODEs. Only Reid et al. presented an effective algorithm for computing the index of polynomially nonlinear DAEs and a framework for the algorithmic analysis of perturbed systems of PDAEs. This work relies on the jet space approach based on differential geometry.

In this paper, we want to promote the efficient differential elimination algorithm as a natural generalization to DAEs, which is a direct and elementary approach. In particular, differential elimination with the Dixon resultant can eliminate several variables at a time, simplify the system with respect to its constraints, or determine its singular cases [34]. We can directly transform a system of DAEs into its equivalent ODEs by differential algebraic elimination. Differential algebraic elimination applies a finite number of differentiations and eliminations to uncover all hidden constraints of the system of DAEs. We define a new minimal number of differentiations, the weak differentiation index for DAEs/ODEs. It can be used as a unified formulation of the number of differentiations for differential elimination of ODEs and differential algebraic elimination of DAEs. Meanwhile, we provide the new index reduction with variable pencil and the notion of differential algebraic resultant. In order to overcome the drawback of factoring a large polynomial system [34], we consider a heuristic method for removing the extraneous factors from the differential algebraic elimination matrix. Our algorithm is also suitable for non-square nonlinear DAEs/ODEs. To the best of our knowledge, this is the first time that the generalized Dixon resultant formulation has been directly extended to systems of DAEs.

The rest of the paper is organized as follows.
Section 2 gives a brief description of the generalized Dixon resultant formulation, and analyzes the size of the Dixon matrix and the complexity of computing its entries. Section 3 proposes the new index reduction procedure for systems of DAEs and defines the weak differentiation index. Section 4 provides the differential algebraic elimination algorithm and some basic properties of the differential algebraic resultant. Section 5 presents some specific examples in detail and comparisons of our algorithm on systems of ODEs. The final section concludes this paper.
2. Generalized Dixon resultant formulation
Following Kapur et al. [4, 5, 13, 14, 36, 37], we introduce the concept of the generalized Dixon resultant formulation and its properties. This technique will play a central role in our subsequent analysis. Let $X = \{x_1, x_2, \cdots, x_n\}$ and $\bar{X} = \{\bar{x}_1, \bar{x}_2, \cdots, \bar{x}_n\}$ be two sets of $n$ variables, respectively. The determinant of a square matrix $A$ is denoted by $\det(A)$.

Definition 2.1.
Let $F = \{f_1, f_2, \cdots, f_{n+1}\} \subset \mathbb{Q}[X]$ be a set of $n+1$ polynomials in $n$ variables. The cancellation matrix $C_F$ of $F$ is the $(n+1) \times (n+1)$ matrix
$$
C_F = \begin{pmatrix}
f_1(x_1, x_2, \cdots, x_n) & \cdots & f_{n+1}(x_1, x_2, \cdots, x_n) \\
f_1(\bar{x}_1, x_2, \cdots, x_n) & \cdots & f_{n+1}(\bar{x}_1, x_2, \cdots, x_n) \\
f_1(\bar{x}_1, \bar{x}_2, \cdots, x_n) & \cdots & f_{n+1}(\bar{x}_1, \bar{x}_2, \cdots, x_n) \\
\vdots & \vdots & \vdots \\
f_1(\bar{x}_1, \bar{x}_2, \cdots, \bar{x}_n) & \cdots & f_{n+1}(\bar{x}_1, \bar{x}_2, \cdots, \bar{x}_n)
\end{pmatrix},
$$
where $f_i(\bar{x}_1, \bar{x}_2, \cdots, \bar{x}_k, x_{k+1}, x_{k+2}, \cdots, x_n)$ stands for uniformly replacing $x_j$ by $\bar{x}_j$ for all $1 \le j \le k \le n$ in $f_i$. The Dixon polynomial of $F$ is denoted by $\theta_F \in \mathbb{Q}[X, \bar{X}]$,
$$
\theta_F = \frac{\det(C_F)}{\prod_{i=1}^{n}(x_i - \bar{x}_i)}, \qquad (1)
$$
the row vector of Dixon derived polynomials of $F$ is denoted by $P_F$, and the Dixon matrix of $F$ is denoted by $D_F$, as follows:
$$
\theta_F = P_F V_{\bar{X}}(\theta_F) = V_X(\theta_F)\, D_F\, V_{\bar{X}}(\theta_F), \qquad (2)
$$
where $V_{\bar{X}}(\theta_F)$ is a column vector of all monomials in $\bar{X}$ which appear in $\theta_F$, and $V_X(\theta_F)$ is a row vector of all monomials in $X$ which appear in $\theta_F$. The determinant of $D_F$ is called the Dixon resultant, denoted by $res(f_1, f_2, \cdots, f_{n+1})$.

It is well known that the Dixon resultant is a projection operator whose vanishing is a necessary condition for the system $F$ to have a common affine solution. However, the Dixon matrix may be non-square, in which case its determinant cannot be computed directly. Even if it is square, the Dixon resultant may vanish identically without providing any information about the affine solutions. In [14], Kapur et al. presented a heuristic method to remedy this drawback by extracting a non-trivial projection operator.

Lemma 2.1. ([14]) If there exists a column which is linearly independent of all other columns in $D_F$, then the determinant of any non-singular rank submatrix of $D_F$ is a non-trivial projection operator.

Remark 2.2.
From Lemma 2.1, this method may fail if there is no column which is linearly independent of all other columns in $D_F$. However, the method is quite efficient and practical, as demonstrated in [14, 34, 36, 37]; such failures are very rare and never occurred on the numerous problems tested. Furthermore, the projection operator may contain extraneous factors in the Dixon resultant.
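For concreteness, in the special case $n = 1$ the construction of Definition 2.1 reduces to the classical Bézout matrix of two univariate polynomials. The following Python sketch (our own illustration; the function name and coefficient-list encoding are not from the paper) builds it by forming $f(x)g(\bar{x}) - f(\bar{x})g(x)$ and dividing out $(x - \bar{x})$:

```python
from fractions import Fraction

def dixon_matrix_1var(f, g):
    """Dixon matrix for the case n = 1 (two univariate polynomials),
    i.e. the classical Bezout matrix.  f and g are coefficient lists,
    lowest degree first; returns an n x n matrix, n = max(deg f, deg g)."""
    n = max(len(f), len(g)) - 1
    f = [Fraction(c) for c in f] + [Fraction(0)] * (n + 1 - len(f))
    g = [Fraction(c) for c in g] + [Fraction(0)] * (n + 1 - len(g))
    # P[i][j] = coefficient of x^i xbar^j in f(x)g(xbar) - f(xbar)g(x)
    P = [[f[i] * g[j] - f[j] * g[i] for j in range(n + 1)]
         for i in range(n + 1)]
    # Synthetic division of P by (x - xbar): view P as a polynomial in x
    # whose coefficients are polynomials in xbar (coefficient lists).
    q = [None] * (n + 1)
    carry = [Fraction(0)] * (n + 1)
    for i in range(n, 0, -1):
        q[i - 1] = [P[i][j] + carry[j] for j in range(n + 1)]
        carry = [Fraction(0)] + q[i - 1][:-1]   # multiply quotient by xbar
    # The Dixon matrix entry D[i][j] is the coefficient of x^i xbar^j.
    return [[q[i][j] for j in range(n)] for i in range(n)]
```

For $f = x^2 - 1$ and $g = x - 2$ this yields the $2 \times 2$ matrix $\begin{pmatrix} 1 & -2 \\ -2 & 1 \end{pmatrix}$, whose determinant $-3$ agrees up to sign with the Sylvester resultant of $f$ and $g$.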
In this article, we shall use the following properties of the Dixon resultant.
Lemma 2.3. ([37]) The Dixon resultant can be expressed as a linear combination of the original polynomial system $F$,
$$
res(f_1, f_2, \cdots, f_{n+1}) = \sum_{i=1}^{n+1} K_i f_i, \qquad (3)
$$
where $K_i$ is a polynomial with respect to $X$ and can be deduced from $P_F$. Moreover, it has been proved that the extraneous factors mentioned above may include three parts, which are taken from $P_F$, $D_F$, and the resulting resultant expression by substituting $P_F$, respectively.

Remark 2.4.
Extraneous factors arising from the Dixon resultant are a troublesome problem when it is used for elimination in a variety of applications. The Gather-and-Sift method [33] is a complete method to remove extraneous factors by the simplicial decomposition algorithm, but it suffers from very high computational complexity because of the intermediate expression swell in symbolic computation. Therefore, we mainly use the technique based on Lemma 2.3, which can be viewed as a heuristic method.

Theorem 2.5.
The size of the Dixon matrix is at most $n! \prod_{i=1}^{n} d_i \times n! \prod_{i=1}^{n} d_i$, and the complexity of computing the entries of the Dixon matrix is $O(d\,(n! \prod_{i=1}^{n} d_i)^2)$ in the worst case, where $d_i$ is the highest degree of the variable $x_i$.

Proof. Similar to the proof of computing the entries of the Dixon matrix in the bivariate case, combined with the multivariate Sylvester resultant and the general case in [36, 38]. □

Remark 2.6.
Here we give the size and computational complexity of the Dixon matrix in the general setting. In particular, the complexity of computing the entries of the Dixon matrix is a new result. The highest degree $d_i$ of the variable $x_i$ can be obtained by using the algorithm in [25]. To make use of sparsity in polynomial systems, a bound on the size of the Dixon matrix of specific systems is derived in terms of their Newton polytopes in [13].
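As a small arithmetic illustration of Theorem 2.5 (our own, not from the paper), the bound $n! \prod_{i=1}^{n} d_i$ can be evaluated directly from the degree profile of a system:

```python
from math import factorial

def dixon_size_bound(degrees):
    """Upper bound n! * d_1 * ... * d_n of Theorem 2.5 on the number of
    rows (and columns) of the Dixon matrix, where degrees[i-1] is the
    highest degree d_i of the variable x_i."""
    bound = factorial(len(degrees))
    for d in degrees:
        bound *= d
    return bound
```

For instance, for two variables of highest degrees 2 and 3, the Dixon matrix has at most $2! \cdot 2 \cdot 3 = 12$ rows and columns.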
3. Index reduction algorithm
In this section, let $\delta = d/dt$ denote the differentiation operator, and let $\mathcal{R}$ be a differential ring, i.e., a commutative ring with unit and a differentiation $\delta$ acting on it. Let $\mathbb{N} = \{0, 1, \cdots, n, \cdots\}$; $\upsilon \in \mathbb{N}^n$ represents a multi-index $\upsilon = (\upsilon_1, \upsilon_2, \cdots, \upsilon_n)^T$; $m \in \mathbb{N}$; $Y = \{y_1, y_2, \ldots, y_m\}$. If $r \in \mathbb{N}$, then the order of $\delta^r$ is $ord(\delta^r) = r$. We denote by $y_j^{(k)}$ the $k$-th derivative of $y_j$, and by $y_j^{[k]}$ the set $\{y_j^{(i)}, i = 1, \cdots, k\}$; in particular, $y_j^{(1)}$ and $y_j^{(2)}$ are written $\dot{y}_j$ and $\ddot{y}_j$ in the following examples for notational simplicity. $|Y| = m$, where $|\cdot|$ denotes the cardinality of a set.

We give a new index reduction technique for DAEs and define the weak differentiation index. Without loss of generality, consider $n$ polynomial DAEs with $m$ dependent variables $y_j = y_j(t)$, with $t$ a scalar independent variable, of the form
$$
f_i = f_i(t, \text{the } y_j \text{ and derivatives of them}) = 0, \quad 1 \le i \le n, \ 1 \le j \le m, \qquad (4)
$$
where $T$ denotes transposition, used in the same way for the rest of this article.
The above notation leads to the following equivalent form:
$$
f_i = c_i + \sum_{k=1}^{l_i} c_{ik} P_{ik}, \qquad (5)
$$
where $c_i$, $c_{ik}$ are coefficients that are known forcing functions of $t$ or constants, $P_{ik} = (Y^{[r_j]})^{\alpha_{ik}}$ is a monomial in $\{y_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_1^{(r_1)}, \cdots, y_m^{(1)}, \cdots, y_m^{(r_m)}\}$ with exponent vector $\alpha_{ik}$, and $l_i = |\alpha_{ik}| \ge 1$. In particular, the highest degrees of $y_j$ and of its derivative $y_j^{(r_j)}$ in $\{f_1, f_2, \cdots, f_n\}$ are denoted by $d_j$ and $d_{jr_j}$, respectively.

In order to compute the differential algebraic resultant in Section 4, the outline of the index reduction procedure is as follows:

Phase 1
Initialization.

(a) Collect every dependent variable $y_j$ and its derivatives $y_j^{[r_j]}$ for each $f_i$, and then gather the set of dependent variables $V$.

(b) Sort $V$ for every $y_j$ and $y_j^{[r_j]}$ into a lexicographic order under the assumption of the ordering $\cdots \succ y_2 \succ y_1^{(r_1)} \succ \cdots \succ y_1^{(1)} \succ y_1$, and let $V_j$ represent the set of $y_j$ and its derivatives $y_j^{[r_j]}$.

(c) Construct a matrix $M = (m_{ij})$, called the variable pencil, defined for (4) by
$$
m_{ij} = \begin{cases} \text{the } y_j \text{ or its derivative in } y_j^{[r_j]} \text{ that appears in equation } f_i, \\ 0, \quad \text{if the variable does not occur}, \end{cases} \qquad (6)
$$
where $row(M)$ and $col(M)$ denote the number of rows and columns of $M$, respectively.

Phase 2 Differentiation.

(a) Determine the set of differential equations $F_o$, consisting of the $f_i$ in which some $m_{ij}$ is a proper derivative, and the set of algebraic equations $F_a$, consisting of the $f_i$ in which every nonzero $m_{ij}$ is an undifferentiated $y_j$, where $|F_o| = s$, $|F_a| = n - s$.

(b) Select the algebraic constraints $f_k$ $(s+1 \le k \le n)$ from $F_a$ and differentiate each $\upsilon_k$ times such that $ord(y_j) \le r_j$, which can be viewed as the homogeneous order. If new differential dependent variables are generated, then the rows and columns of the variable pencil must be augmented, giving $M'$, and the dependent variable sets updated to $V'$ and $V'_j$. The termination condition of algebraic differentiation is
$$
n + \sum_{k=s+1}^{n} \upsilon_k = |V'| - |V'_j| + 1, \quad \text{that is,} \quad row(M') = col(M') - |y_j^{[r_j]}|, \qquad (7)
$$
where $\upsilon_1 = \upsilon_2 = \cdots = \upsilon_s = 0$.

(c) If (7) fails, select the differential equations $f_k$ $(1 \le k \le s)$ from $F_o$ and differentiate each $\upsilon_k$ times such that $ord(f_k) \le \max_{j=1}^{m} r_j$ with respect to $y_j$; augment the rows and columns of the variable pencil, giving $M''$, and update the dependent variable sets to $V''$, $V''_j$ and the orders $r_j$ to $r'_j$. The termination condition of differentiation is
$$
n + \sum_{k=1}^{n} \upsilon_k = |V''| - |V''_j| + 1, \quad \text{that is,} \quad row(M'') = col(M'') - |y_j^{[r'_j]}|. \qquad (8)
$$

Remark 3.1.
We remark that the termination of the index reduction procedure is guaranteed by condition (7) or (8), because of the construction of a square elimination matrix. In particular, the procedure may fully degenerate into a problem of ODEs if (8) fails in Phase 2(c); in that case we obtain the new set of ODEs from Phase 2(b) and (c), and then refer to the algorithm of [34].
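The bookkeeping of Phase 1 and Phase 2(a) can be sketched as follows (a schematic of our own; the encoding of an equation as a set of (variable, derivative-order) pairs is a hypothetical choice made for illustration, not the paper's data structure):

```python
# Schematic bookkeeping for Phase 1 and Phase 2(a) of the index
# reduction procedure.  An equation is represented by the set of
# (variable, derivative-order) pairs occurring in it.

def variable_pencil(equations):
    """Build the column index and the pencil M = (m_ij), where m_ij is
    the set of derivative orders of y_j occurring in f_i (an empty set
    plays the role of a 0 entry in (6))."""
    variables = sorted({v for eq in equations for (v, _) in eq})
    pencil = [{v: {k for (w, k) in eq if w == v} for v in variables}
              for eq in equations]
    return variables, pencil

def split_differential_algebraic(equations):
    """Phase 2(a): F_o = equations containing some derivative,
    F_a = purely algebraic constraints."""
    F_o = [eq for eq in equations if any(k >= 1 for (_, k) in eq)]
    F_a = [eq for eq in equations if all(k == 0 for (_, k) in eq)]
    return F_o, F_a
```

On the system (11) of Example 3.1 below, $f_1$ is encoded as {("y1", 1), ("y2", 0), ("y2", 1)} and $f_2$ as {("y1", 0), ("y2", 0)}, and the split correctly yields $|F_o| = |F_a| = 1$.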
Definition 3.1.
The number of differentiations specified by the index reduction procedure gives a formula for the weak differentiation index of a system of DAEs, denoted by $d_w = \max_{k=1}^{n} \upsilon_k$. Obviously, if no differentiation is required, the original system has index zero; ODEs may have weak differentiation index greater than zero.
Theorem 3.2. Let $F = \{f_1, f_2, \cdots, f_n\} \in \mathcal{R}[y_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_1^{(r_1)}, \cdots, y_m^{(1)}, \cdots, y_m^{(r_m)}]$, and suppose the index reduction procedure satisfies the termination condition (7) or (8). Then $\upsilon = (\upsilon_1, \upsilon_2, \cdots, \upsilon_n)$ can be computed correctly as specified.

Proof. In the initialization of Phase 1, we easily obtain the set of dependent variables $V$, construct the variable pencil $M = (m_{ij})$ from $F$, and initialize $\upsilon_1 = \upsilon_2 = \cdots = \upsilon_n = 0$. According to Phase 2(a), we have
$$
\{\underbrace{f_1, f_2, \cdots, f_s}_{\text{ODEs } (F_o)}, \underbrace{f_{s+1}, f_{s+2}, \cdots, f_n}_{\text{algebraic equations } (F_a)}\}. \qquad (9)
$$
To compute $\upsilon = (\upsilon_1, \upsilon_2, \cdots, \upsilon_n)$, two cases are considered.

Case (a): only differentiate $F_a = \{f_{s+1}, f_{s+2}, \cdots, f_n\}$ to satisfy condition (7) such that $ord(y_j) \le r_j$, based on the homogeneous order; the differentiation can be repeated to obtain the differentiation times $\{\upsilon_{s+1}, \upsilon_{s+2}, \cdots, \upsilon_n\}$. In the general setting, since $|y_j^{[r_j]}| = r_j$, the termination condition (7) is easily reached. In the sparse case, where $|y_j^{[r_j]}| < r_j$, differentiation always generates new differential dependent variables $\{y_1^{[k_1]}, y_2^{[k_2]}, \cdots, y_m^{[k_m]}\}$ with $k_j < r_j$, and requires updating the set of dependent variables to $V' = V \cup y_1^{[k_1]} \cup y_2^{[k_2]} \cup \cdots \cup y_m^{[k_m]}$ and $F = F \cup \{\delta f_{s+1}, \cdots, \delta^{\upsilon_{s+1}} f_{s+1}, \cdots, \delta f_n, \cdots, \delta^{\upsilon_n} f_n\}$. Consequently, we need to augment the rows and columns of the variable pencil $M$, giving $M'$. This yields
$$
row(M') = col(M' \setminus \{y_j, y_j^{[r_j]}\}) + 1.
$$

Case (b): following Case (a), if condition (7) fails, condition (8) must be reached. The problem is then transferred to the general $n \times m$ system of ODEs $F_o = F_o \cup \{\delta^{\upsilon_{s+1}} f_{s+1}, \cdots, \delta^{\upsilon_n} f_n\}$. In order to reduce redundant differentiations, we only differentiate some low order ODEs from $F_o$ such that $ord(f_k) \le r_j$ with respect to $y_j$, namely those whose entries $m_{ij}$ in $M'$ vanish for $\{y_1^{(r_1)}, y_2^{(r_2)}, \cdots, y_m^{(r_m)}\}$. Furthermore, it may be necessary to differentiate some general ODEs from $F_o$ to satisfy condition (8); the differentiation can be repeated to obtain the differentiation times $\{\upsilon_1, \upsilon_2, \cdots, \upsilon_n\}$.
This always generates new differential dependent variables $\{y_1^{[k'_1]}, y_2^{[k'_2]}, \cdots, y_m^{[k'_m]}\}$ and requires updating the set of dependent variables to $V'' = V' \cup y_1^{[k'_1]} \cup y_2^{[k'_2]} \cup \cdots \cup y_m^{[k'_m]}$, $F' = F \cup \{\delta f_1, \cdots, \delta^{\upsilon_1} f_1, \cdots, \delta f_n, \cdots, \delta^{\upsilon_n} f_n\}$, and the orders $r_j$ to $r'_j$. Consequently, we need to augment the rows and columns of the variable pencil $M'$, giving $M''$. This yields
$$
row(M'') = col(M'' \setminus \{y_j, y_j^{[r'_j]}\}) + 1. \qquad \Box
$$

Remark 3.3. Yang et al. [34] gave a formulation of the number of differentiations for differential elimination of ODEs. However, their method may lead to some redundant differentiations in practical applications, such as constrained mechanical systems. Here, we propose a variable pencil technique to analyze the number of differentiations for DAEs. It makes the number of differentiations as small as possible for differential algebraic elimination. It is also suitable for differential elimination of ODEs with the Dixon resultant formulation.

We present a simple example to illustrate our index reduction procedure as follows:
Example 3.1.
This example is the linear, time-dependent, index two DAE discussed in Gear [11]:
$$
\begin{pmatrix} 1 & \eta t \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \end{pmatrix} + \begin{pmatrix} 0 & 1+\eta \\ 1 & \eta t \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} p_1 \\ p_2 \end{pmatrix}, \qquad (10)
$$
where the dependent variables are $y_1$, $y_2$, with $t$ a scalar independent variable, $p_1$ and $p_2$ are known forcing functions of $t$, and $\eta$ is a parameter. The expanded form is
$$
0 = f_1 = \dot{y}_1 + (1+\eta) y_2 + \eta t \dot{y}_2 - p_1, \qquad 0 = f_2 = y_1 + \eta t y_2 - p_2. \qquad (11)
$$
We initialize the original system (11) as follows: (a) collect the set of dependent variables $V = \{\dot{y}_1, y_1, \dot{y}_2, y_2\}$; (b) sort the set $V = \{y_1, \dot{y}_1, y_2, \dot{y}_2\}$; (c) construct the variable pencil
$$
M \Rightarrow \begin{array}{c|cccc} & y_1 & \dot{y}_1 & y_2 & \dot{y}_2 \\ \hline f_1 & 0 & \dot{y}_1 & y_2 & \dot{y}_2 \\ f_2 & y_1 & 0 & y_2 & 0 \end{array}.
$$
Obviously, we can determine $F_a$ and $F_o$ with $|F_a| = |F_o| = 1$, and differentiate $f_2$ based on the homogeneous order:
$$
M' \Rightarrow \begin{array}{c|cccc} & y_1 & \dot{y}_1 & y_2 & \dot{y}_2 \\ \hline f_1 & 0 & \dot{y}_1 & y_2 & \dot{y}_2 \\ f_2 & y_1 & 0 & y_2 & 0 \\ \delta f_2 & 0 & \dot{y}_1 & y_2 & \dot{y}_2 \end{array}.
$$
Therefore, we have
$$
0 = f_3 = \delta f_2 = \dot{y}_1 + \eta y_2 + \eta t \dot{y}_2 - \dot{p}_2. \qquad (12)
$$
With the differentiated equation $\delta f_2$ appended to the original system, the system of three equations $f_1$, $f_2$, $f_3$ has four dependent variables $y_1$, $\dot{y}_1$, $y_2$, and $\dot{y}_2$. For eliminating the dependent variables $\{y_1, \dot{y}_1\}$ or $\{y_2, \dot{y}_2\}$, the termination condition of algebraic differentiation (7) holds. Consequently, we get the differentiation times $\upsilon = (0, 1)^T$.

Remark 3.4.
We only differentiate the equation $f_2$ once, i.e., $d_w = 1$, and treat the algebraic equations and ODEs uniformly together. In contrast, the existing methods need to differentiate $f_2$ twice, until no algebraic equations remain after substitution; that is, the problem is index two [11, 26].

4. Differential algebraic elimination

In this section, the definition of differential algebraic elimination of DAEs is introduced by using the generalized Dixon resultant formulation. Based on the index reduction algorithm in Section 3, we also present an algorithm for computing the differential algebraic resultant. Moreover, its basic properties are given.

4.1. Differential algebraic elimination

The fundamental tool is the idea of the algebraic Dixon resultant, used to create the differential algebraic elimination. Firstly, we construct the differential algebraic cancellation matrix, and then compute the entries of the differential algebraic elimination matrix, whose determinant contains the differential algebraic resultant as a factor. That is, the DAEs can be treated as a polynomial system: $y_j$ and its derivatives can be viewed as parameters, while the other dependent variables and their derivatives, regarded as purely algebraic variables, are eliminated simultaneously. Therefore, we obtain a single ODE in $y_j$ and its derivatives, to which numerical methods can be applied directly.

Let $F = \{f_1, f_2, \cdots, f_n, \delta f_1, \cdots, \delta^{\upsilon_1} f_1, \cdots, \delta f_n, \cdots, \delta^{\upsilon_n} f_n\} \in \mathcal{R}[y_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_1^{(r_1)}, \cdots, y_m^{(1)}, \cdots, y_m^{(r_m)}]$ and $\bar{Y} = \{\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_m, \bar{y}_1^{(1)}, \cdots, \bar{y}_1^{(r_1)}, \cdots, \bar{y}_m^{(1)}, \cdots, \bar{y}_m^{(r_m)}\}$. The system
$$
\{f_1 = 0, f_2 = 0, \cdots, f_n = 0\} \qquad (13)
$$
has a solution if and only if the system $F$ has solutions, for $\upsilon = (\upsilon_1, \upsilon_2, \cdots, \upsilon_n)^T$.
In order to define the differential algebraic elimination of (13), it is necessary to find a weak differentiation index $\upsilon$ for eliminating $y_1, y_2, \cdots, y_{j-1}, y_{j+1}, y_{j+2}, \cdots, y_m$ and their derivatives, such that $f_1, \cdots, f_n, \delta f_1, \cdots, \delta^{\upsilon_1} f_1, \cdots, \delta^{\upsilon_n} f_n$ are $(\upsilon_1 + 1) + (\upsilon_2 + 1) + \cdots + (\upsilon_n + 1)$ polynomials in $\upsilon_1 + \upsilon_2 + \cdots + \upsilon_n + n - 1$ variables.

Definition 4.1.
Let $f_i$ be a differential polynomial in $\mathcal{R}[y_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_1^{(r_1)}, \cdots, y_m^{(1)}, \cdots, y_m^{(r_m)}]$, and let $N = (\upsilon_1 + 1) + (\upsilon_2 + 1) + \cdots + (\upsilon_n + 1) - 1$ as mentioned above. The differential algebraic cancellation matrix $DC_F$ of $F$ with respect to $y_j$ and its derivatives is the $(N+1) \times (N+1)$ matrix
$$
DC_F = \begin{pmatrix}
f_1(y_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_m^{(r_m)}) & \cdots & f_{N+1}(y_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_m^{(r_m)}) \\
f_1(\bar{y}_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_m^{(r_m)}) & \cdots & f_{N+1}(\bar{y}_1, y_2, \cdots, y_m, y_1^{(1)}, \cdots, y_m^{(r_m)}) \\
f_1(\bar{y}_1, \bar{y}_2, \cdots, y_m, y_1^{(1)}, \cdots, y_m^{(r_m)}) & \cdots & f_{N+1}(\bar{y}_1, \bar{y}_2, \cdots, y_m, y_1^{(1)}, \cdots, y_m^{(r_m)}) \\
\vdots & \vdots & \vdots \\
f_1(\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_m, \bar{y}_1^{(1)}, \cdots, \bar{y}_m^{(r_m)}) & \cdots & f_{N+1}(\bar{y}_1, \bar{y}_2, \cdots, \bar{y}_m, \bar{y}_1^{(1)}, \cdots, \bar{y}_m^{(r_m)})
\end{pmatrix},
$$
where $\{y_j, y_j^{(1)}, \cdots, y_j^{(r_j)}\}$, being parameters, are not replaced by $\{\bar{y}_j, \bar{y}_j^{(1)}, \cdots, \bar{y}_j^{(r_j)}\}$ in $f_i$ $(1 \le i \le N+1)$. The differential algebraic elimination polynomial of $F$ is denoted by $D\theta_F \in \mathcal{R}[Y, \bar{Y}]$,
$$
D\theta_F = \frac{\det(DC_F)}{\prod_{i=1, i \ne j}^{m} (y_i - \bar{y}_i)(y_i^{(1)} - \bar{y}_i^{(1)}) \cdots (y_i^{(r_i)} - \bar{y}_i^{(r_i)})}, \qquad (14)
$$
the row vector of differential algebraic elimination derived polynomials of $F$ is denoted by $DP_F$, and the differential algebraic elimination matrix of $F$ is denoted by $DA_F$:
$$
D\theta_F = V_Y(D\theta_F)\, DA_F\, V_{\bar{Y}}(D\theta_F), \qquad (15)
$$
where $V_Y(D\theta_F)$ is the row vector of all monomials in the eliminated variables $Y \setminus \{y_j, y_j^{[r_j]}\}$ appearing in $D\theta_F$, and $V_{\bar{Y}}(D\theta_F)$ is the column vector of all monomials in $\bar{Y}$ appearing in $D\theta_F$; the rows and columns of $DA_F$ are indexed by the orderings $y_m^{(r_m)} \succ \cdots \succ y_m^{(1)} \succ \cdots \succ y_1^{(r_1)} \succ \cdots \succ y_1^{(1)} \succ y_m \succ \cdots \succ y_1$ and $\bar{y}_m^{(r_m)} \succ \cdots \succ \bar{y}_m^{(1)} \succ \cdots \succ \bar{y}_1^{(r_1)} \succ \cdots \succ \bar{y}_1^{(1)} \succ \bar{y}_m \succ \cdots \succ \bar{y}_1$, respectively. The coefficient matrix $DA_F$ is a square matrix, whose determinant is called the differential algebraic resultant, denoted by $DARes(y_j, y_j^{[r_j]})$.

Here, we can write $DA_F$ in the following block structure notation:
$$
DA_F = \begin{pmatrix}
D_{1,1} & D_{1,2} & \cdots & D_{1, N d_1 - 1} \\
D_{2,1} & D_{2,2} & \cdots & D_{2, N d_1 - 1} \\
\vdots & \vdots & \ddots & \vdots \\
D_{d_1 - 1, 1} & D_{d_1 - 1, 2} & \cdots & D_{d_1 - 1, N d_1 - 1}
\end{pmatrix}, \qquad (16)
$$
where each block $D_{ij}$ is of size $(N - r_j - 1) \prod_{i=1, i \ne j}^{m} \prod_{\mu_i=1}^{r_i} d_i d_{i\mu_i} \times (N - r_j - 1) \prod_{i=1, i \ne j}^{m} \prod_{\mu_i=1}^{r_i} d_i d_{i\mu_i}$.

For increasingly large scale systems of DAEs, we can make use of the structure and block triangularization to decompose a problem into subproblems by permuting the rows and columns of a rectangular or square, unsymmetric matrix. For more details refer to [22].

The following properties are easily proved from those of the Dixon resultant.
Theorem 4.1. The differential algebraic elimination matrix $DA_F$ is of size at most $N! \prod_{i=1, i \ne j}^{m} \prod_{\mu_i=1}^{r_i} d_i d_{i\mu_i} \times N! \prod_{i=1, i \ne j}^{m} \prod_{\mu_i=1}^{r_i} d_i d_{i\mu_i}$, and the complexity of computing the entries of $DA_F$ is $O(d\,(N! \prod_{i=1, i \ne j}^{m} \prod_{\mu_i=1}^{r_i} d_i d_{i\mu_i})^2)$ in the worst case, where $d_i$ and $d_{i\mu_i}$ are as mentioned above.

Theorem 4.2. The differential algebraic resultant can be expressed as a linear combination of the enlarged system of equations $F$ with respect to $y_j$,
$$
DARes(y_j, y_j^{[r_j]}) = \sum_{i=1}^{n} \sum_{\mu_i=0}^{\upsilon_i} K_{i\mu_i} \delta^{\mu_i} f_i, \qquad (17)
$$
where $K_{i\mu_i}$ is a polynomial with respect to $Y$ and can be deduced from $DP_F$. Moreover, if $DARes(y_j, y_j^{[r_j]})$ is a reducible differential equation, it can also be proved that the extraneous factors mentioned above may include three parts, which are taken from $DP_F$, $DA_F$, and the resulting resultant expression by substituting $DP_F$, respectively.

Remark 4.3. From Theorem 4.2, we can remove the extraneous factors from the differential algebraic resultant when greatest common divisors exist in the rows or columns of the differential algebraic elimination matrix. That is, the extraneous factors are the greatest common divisors in the algebraic cofactors of $DA_F$.

Theorem 4.4. The vanishing of the differential algebraic resultant is a necessary condition for the existence of a common solution of the system of DAEs.
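The extraneous-factor removal of Remark 4.3 can be illustrated on an integer matrix (an analogue of our own; in the algorithm the entries are polynomials and polynomial GCDs are taken): dividing each row by the GCD of its entries factors the determinant as $\det(M) = (\prod_i g_i) \det(M')$, and the factors $g_i$ are the candidates for removal.

```python
from math import gcd
from functools import reduce

def strip_row_gcds(M):
    """Divide each row of an integer matrix by the GCD of its entries.
    Returns (stripped matrix, list of removed row factors); the removed
    factors play the role of the extraneous factors of Remark 4.3."""
    factors, stripped = [], []
    for row in M:
        g = reduce(gcd, (abs(x) for x in row)) or 1   # 'or 1' guards all-zero rows
        factors.append(g)
        stripped.append([x // g for x in row])
    return stripped, factors
```

For instance, $\det \begin{pmatrix} 2 & 4 \\ 3 & 9 \end{pmatrix} = 6 = 2 \cdot 3 \cdot \det \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}$.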
Corollary 4.5. Let $y_1, y_2, \cdots, y_m$ be solutions of the system of DAEs (13). Then $y_j$ satisfies $DARes(y_j, y_j^{[r_j]}) = 0$.

4.2. Algorithm

In this subsection, we give the following procedure for differential algebraic elimination.

Input:
DAE system $F = \{f_1 = 0, f_2 = 0, \cdots, f_n = 0\}$ and the dependent variables $Y \setminus \{y_j, y_j^{[r_j]}\}$.

Output: a polynomial ODE that contains only $y_j$ and its derivatives.

Step 1: Count the number of DAEs and the number of variables in $Y \setminus \{y_j, y_j^{[r_j]}\}$, denoted $n$ and $m$ respectively; if $n$ is equal to $m + 1$, then go to Step 3.

Step 2: Call the index reduction algorithm of Section 3, taking $t$-derivatives of the $f_i$ and updating $n$ and $m$ until $n = m + 1$, which yields the enlarged system $F$ and the new $Y \setminus \{y_j, y_j^{[r_j]}\}$; the collections are as follows:
$$
F \leftarrow \begin{pmatrix}
f_1, \delta f_1, \cdots, \delta^{\upsilon_1} f_1 \\
f_2, \delta f_2, \cdots, \delta^{\upsilon_2} f_2 \\
\vdots \\
f_n, \delta f_n, \cdots, \delta^{\upsilon_n} f_n
\end{pmatrix} = 0, \qquad
Y \leftarrow \begin{pmatrix}
y_1, y_1^{(1)}, \cdots, y_1^{(r_1)} \\
y_2, y_2^{(1)}, \cdots, y_2^{(r_2)} \\
\vdots \\
y_m, y_m^{(1)}, \cdots, y_m^{(r_m)}
\end{pmatrix} \setminus \{y_j, y_j^{[r_j]}\}. \qquad (18)
$$

Step 3: Construct the differential algebraic cancellation matrix $DC_F$, obtain the entries of the differential algebraic elimination matrix $DA_F$, remove the greatest common divisors from each row or column of $DA_F$, and then compute its determinant $DARes(y_j, y_j^{[r_j]})$.

Step 4: Return $DARes(y_j, y_j^{[r_j]})$.

Theorem 4.6.
The above algorithm works correctly as specified, and its complexity is mainly that of the index reduction algorithm plus the computation of the differential algebraic resultant.

Proof. Correctness of the algorithm follows from the Dixon elimination method. Regarding the dependent variables $\{y_j, y_j^{[r_j]}\}$ as parameters and the other ones as algebraic variables, we can treat the enlarged system of equations $F$ as an algebraic system. As shown in [34], a necessary condition for the existence of a common solution of algebraic differential equations is that the differential resultant is equal to zero; Theorem 4.4 then follows easily.

From the description of the algorithm, we observe that two major steps dominate the time complexity. In Step 2, we obtain the differentiation times $\sum_{k=1}^{n} \upsilon_k$. The problem is solved by the homogeneous order rule, which makes the number of differentiations as small as possible so as to reduce the size of the enlarged system of equations $F$. If the vector $\upsilon = (\upsilon_1, \upsilon_2, \cdots, \upsilon_n)^T$ exists, this can be done in polynomial time. In Step 3, the complexity has three parts: (a) obtaining the entries of the differential algebraic elimination matrix $DA_F$ as in Theorem 4.1; supposing $d_i = d_{i\mu_i} = d$ and $r_i = r$ for each $i = 1, 2, \cdots, m$, this needs at most
$$
O\Big(d\,\big(N! \prod_{i=1, i \ne j}^{m} \prod_{\mu_i=1}^{r_i} d_i d_{i\mu_i}\big)^2\Big) \le O\big(N!^2\, d^{O(mr)}\big)
$$
operations, which is singly exponential; (b) calculating the greatest common divisors of each row or column of $DA_F$, which takes polynomial time; (c) computing the determinant $DARes(y_j, y_j^{[r_j]})$, which takes polynomial time. Therefore, if the differential algebraic resultant in a single dependent variable and its derivatives exists, we can transform the system of DAEs into its equivalent ODEs in singly exponential time. □

Remark 4.7.
From Lemma 2.1 and Theorem 4.6, our algorithm is not a complete method. However, it is a really effective and practical technique on numerous problems. A well known advantage of Dixon resultant elimination is that it can eliminate a block of unknowns from a system of polynomial equations in one step, like Macaulay's method. Moreover, the size of the Dixon matrix is much smaller than the size of the Macaulay matrix, though the entries of the Dixon matrix are complicated in contrast to the entries of the Macaulay matrix, which are either 0 or coefficients of the monomials in the polynomial system. Fortunately, we can easily apply the extended fast algorithm for constructing the Dixon matrix [36]. In particular, for a fixed number $n$ of variables of a polynomial system, the construction cost of the Dixon matrix is at most $O(mvol(F))$ arithmetic operations [13], where $mvol(F)$ is the $n$-fold mixed volume. As shown in [34], our algorithm is also appropriate for systems of ODEs. In many practical applications of DAEs, we can easily see that the degrees $d_i$ and $d_{i\mu_i}$ in $P_{ik}$ are very low.

Example 4.1.
Continuing Example 3.1, we can construct the differential algebraic elimination matrix with respect to $y_1$ and $\dot{y}_1$:
$$
\big[\, \eta t p_1 - \eta t \dot{p}_2 + y_1 - p_2 \,\big]_{1 \times 1}.
$$
We can also eliminate $y_1$ and $\dot{y}_1$ by the same method simultaneously. Therefore, we get the following differential algebraic resultants:
$$
DARes(y_1, \dot{y}_1) = y_1 - p_2 + \eta t (p_1 - \dot{p}_2), \qquad DARes(y_2, \dot{y}_2) = y_2 - p_1 + \dot{p}_2.
$$
These are the same as the results obtained in [11].
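As a quick sanity check of Example 4.1 (our own, with the particular forcing functions $p_1 = t^2$, $p_2 = t^3$ chosen only for illustration), the two resultants determine $y_2 = p_1 - \dot{p}_2$ and $y_1 = p_2 - \eta t (p_1 - \dot{p}_2)$, and substituting these back into $f_1$ and $f_2$ of (11) gives identically zero:

```python
# Verify Example 4.1 numerically: with p1 = t^2, p2 = t^3 (an arbitrary
# choice of forcing functions), DARes(y2, y2') = 0 gives y2 = p1 - p2'
# and DARes(y1, y1') = 0 gives y1 = p2 - eta*t*(p1 - p2'), which must
# satisfy f1 = f2 = 0 of the original DAE (11).
def residuals(t, eta):
    p1, dp1 = t**2, 2 * t
    p2, dp2, ddp2 = t**3, 3 * t**2, 6 * t
    y2 = p1 - dp2                      # from DARes(y2, y2') = 0
    dy2 = dp1 - ddp2
    y1 = p2 - eta * t * y2             # from DARes(y1, y1') = 0
    dy1 = dp2 - eta * y2 - eta * t * dy2
    f1 = dy1 + (1 + eta) * y2 + eta * t * dy2 - p1
    f2 = y1 + eta * t * y2 - p2
    return f1, f2
```

Both residuals vanish (up to floating-point rounding) for arbitrary values of $t$ and $\eta$.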
5. Examples
In this section, we present some small examples in detail and compare the matrix sizes of the differential resultants of two models with other methods. Examples 5.1 and 5.4 illustrate how to deal with nonlinear, high index DAEs of constrained mechanical systems. Example 5.2 uses a simple example to test our algorithm on a nonlinear, non-square system of DAEs. Example 5.3 is a practical application to a linear electrical network problem.

5.1. Some examples in detail

Example 5.1.
Consider the nonlinear DAEs for the simulation of the dynamics of multibody systems, which is a major application area. Here, we use the simple pendulum to illustrate many of the principles. The DAEs can be written as

0 = f₁ = ÿ₁ + y₁λ,
0 = f₂ = ÿ₂ + y₂λ − g,        (19)
0 = f₃ = y₁² + y₂² − L²,

where g > 0 and L > 0 are constants. From (6), its variable pencil, labeled by equations and variables, is

M ⇒        y₁  ÿ₁  y₂  ÿ₂  λ
      f₁ [  1   1   0   0   1 ]
      f₂ [  0   0   1   1   1 ]
      f₃ [  1   0   1   0   0 ].

Obviously, we can get F_a and F_o with |F_a| = 1 and |F_o| = 2, and differentiate f₃ based on the homogeneous order as follows:

M′ ⇒         y₁  ẏ₁  ÿ₁  y₂  ẏ₂  ÿ₂  λ
      f₁   [  1   0   1   0   0   0   1 ]
      f₂   [  0   0   0   1   0   1   1 ]
      f₃   [  1   0   0   1   0   0   0 ]
      δf₃  [  1   1   0   1   1   0   0 ]
      δ²f₃ [  1   1   1   1   1   1   0 ].

Therefore, we have

0 = f₄ = δf₃ = y₁ẏ₁ + y₂ẏ₂,
0 = f₅ = δ²f₃ = y₁ÿ₁ + y₂ÿ₂ + ẏ₁² + ẏ₂².        (20)

With the differentiated equations δf₃ and δ²f₃ appended to the original system, the system of five equations f₁, f₂, f₃, f₄ and f₅ has seven dependent variables y₁, ẏ₁, ÿ₁, y₂, ẏ₂, ÿ₂, and λ. For eliminating the dependent variables {y₁, ẏ₁, ÿ₁} or {y₂, ẏ₂, ÿ₂}, the termination condition (7) of algebraic differentiation is satisfied. Consequently, we get the differentiation times υ = (0, 0, 2)^T, that is, d_w = 2.

We can construct the differential algebraic elimination matrix with y₁, ẏ₁ and ÿ₁; its entries are polynomials in y₂, ẏ₂, ÿ₂, g and L. Obviously, we can also eliminate y₂, ẏ₂ and ÿ₂ by the same method simultaneously. Up to extraneous factors, eliminating y₁, ẏ₁, ÿ₁ and λ from f₁, …, f₅ yields

DARes(y₁, ẏ₁, ÿ₁) = L²(L² − y₂²)ÿ₂ + L²y₂ẏ₂² − g(L² − y₂²)²,

and eliminating y₂, ẏ₂, ÿ₂ and λ instead yields

DARes(y₂, ẏ₂, ÿ₂) = L⁴((L² − y₁²)ÿ₁ + y₁ẏ₁²)² − g²y₁²(L² − y₁²)³.

The remaining dependent variable λ is determined by y₁ and y₂.
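The pendulum elimination can be cross-checked with a general-purpose computer algebra system by treating the multiplier and the derivatives as plain algebraic unknowns. The following sympy sketch (our own verification script, not the paper's algorithm; variable names are ours) takes f₁ = ÿ₁ + y₁λ, f₂ = ÿ₂ + y₂λ − g, f₃ = y₁² + y₂² − L² with the prolongations f₄ = δf₃ and f₅ = δ²f₃, and eliminates λ, ÿ₁, ẏ₁ and y₁ by chained univariate resultants. The naive chain returns the square of the reduced relation in the y₂-block, illustrating the extraneous factors that the heuristics of Section 4 are designed to remove:

```python
import sympy as sp

g, L = sp.symbols('g L', positive=True)
y1, dy1, ddy1 = sp.symbols('y1 dy1 ddy1')  # y1 and its derivatives, as algebraic unknowns
y2, dy2, ddy2 = sp.symbols('y2 dy2 ddy2')
lam = sp.symbols('lam')

f1 = ddy1 + y1 * lam
f2 = ddy2 + y2 * lam - g
f3 = y1**2 + y2**2 - L**2
f4 = y1 * dy1 + y2 * dy2                       # delta f3
f5 = y1 * ddy1 + y2 * ddy2 + dy1**2 + dy2**2   # delta^2 f3

# eliminate lam, ddy1, dy1 and y1 by successive univariate resultants
r = sp.resultant(f1, f2, lam)
r = sp.resultant(r, f5, ddy1)
r = sp.resultant(r, f4, dy1)
r = sp.expand(sp.resultant(r, f3, y1))

# reduced ODE in the y2-block; the naive chain above returns its square
target = L**2*(L**2 - y2**2)*ddy2 + L**2*y2*dy2**2 - g*(L**2 - y2**2)**2
print(sp.expand(r - target**2))  # 0
```

The last resultant is taken with respect to y₁, whose even powers are the only ones that occur, which is why the extraneous factor appears as an exact square here.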
The example is the nonlinear, non-square system of DAEs discussed in [17]:

[ c₁₁  c₁₂   0    0  ]   [   1    ]
[ c₂₁   0   c₂₂   0  ] · [ ẏ₁ẏ₂  ] = 0,        (21)
[ c₃₁   0    0   c₃₂ ]   [ ẏ₁y₂  ]
                         [ y₁y₂  ]

where the dependent variables are y₁ and y₂, and the c_{ij} (i = 1, 2, 3, j = 1, 2) are the known forcing functions of t. We can get the expanded form as follows:

0 = f₁ = c₁₁ + c₁₂ẏ₁ẏ₂,
0 = f₂ = c₂₁ + c₂₂ẏ₁y₂,        (22)
0 = f₃ = c₃₁ + c₃₂y₁y₂.

We can initialize the original system (22) as follows: (a) collect the set of differential variables V = {ẏ₁, ẏ₂, y₁, y₂}; (b) sort the set V = {y₁, ẏ₁, y₂, ẏ₂}; (c) construct the variable pencil

M ⇒        y₁  ẏ₁  y₂  ẏ₂
      f₁ [  0   1   0   1 ]
      f₂ [  0   1   1   0 ]
      f₃ [  1   0   1   0 ].

Obviously, we can get F_a and F_o with |F_a| = 1 and |F_o| = 2, and the system of three equations f₁, f₂ and f₃ has four dependent variables y₁, ẏ₁, y₂ and ẏ₂. For eliminating the dependent variables {y₁, ẏ₁} or {y₂, ẏ₂}, the termination condition (7) of algebraic differentiation is already satisfied. Consequently, we get the differentiation times υ = (0, 0, 0)^T, that is, d_w = 0. Here, we can construct the differential algebraic elimination matrix with y₁ and ẏ₁ as follows:

[ c₁₁c₂₂y₂ − c₁₂c₂₁ẏ₂ ]₁ₓ₁.

We can eliminate y₂ and ẏ₂ by the same method simultaneously. Therefore, we get the following differential algebraic resultants:

DARes(y₁, ẏ₁) = c₁₁c₂₂y₂ − c₁₂c₂₁ẏ₂,
DARes(y₂, ẏ₂) = −c₂₁c₃₂y₁ + c₂₂c₃₁ẏ₁.

Example 5.3. Consider a practical linear electrical network example; differential algebraic equations of index 1 may have an arbitrarily high structural index, as shown in [27]:

0 = f₁ = ẏ₁ + ẏ₂ + y₂ − a(t),
0 = f₂ = ẏ₂ + ẏ₃ + y₃ − b(t),
0 = f₃ = ẏ₃ + ẏ₄ + y₄ − c(t),        (23)
0 = f₄ = ẏ₄ + ẏ₁ + y₁ − d(t),
0 = f₅ = y₁ − e(t),

where a(t), b(t), c(t), d(t) and e(t) are the known forcing functions of t. It is clear that y₁ is known, i.e., y₁ = e(t). We can get the variable pencil as follows:

M ⇒        y₁  ẏ₁  y₂  ẏ₂  y₃  ẏ₃  y₄  ẏ₄
      f₁ [  0   1   1   1   0   0   0   0 ]
      f₂ [  0   0   0   1   1   1   0   0 ]
      f₃ [  0   0   0   0   0   1   1   1 ]
      f₄ [  1   1   0   0   0   0   0   1 ]
      f₅ [  1   0   0   0   0   0   0   0 ].
Obviously, we can get F_a and F_o with |F_a| = 1 and |F_o| = 4, and differentiate f₅ based on the homogeneous order as follows:

M′ ⇒         y₁  ẏ₁  y₂  ẏ₂  y₃  ẏ₃  y₄  ẏ₄
      f₁  [  0   1   1   1   0   0   0   0 ]
      f₂  [  0   0   0   1   1   1   0   0 ]
      f₃  [  0   0   0   0   0   1   1   1 ]
      f₄  [  1   1   0   0   0   0   0   1 ]
      f₅  [  1   0   0   0   0   0   0   0 ]
      δf₅ [  0   1   0   0   0   0   0   0 ].

Therefore, we have

0 = f₆ = δf₅ = ẏ₁ − ė(t).        (24)

With the differentiated equation δf₅ appended to the original system, the system of six equations f₁, f₂, f₃, f₄, f₅ and f₆ has seven dependent variables ẏ₁, y₂, ẏ₂, y₃, ẏ₃, y₄ and ẏ₄. For eliminating the dependent variables {y₂, ẏ₂}, {y₃, ẏ₃} or {y₄, ẏ₄}, the termination condition (7) of algebraic differentiation is satisfied. Consequently, we get the differentiation times υ = (0, 0, 0, 0, 1)^T. It only needs to differentiate f₅ once, that is, d_w = 1.

Remark 5.1.
We easily compute the differentiation times of the five equations f₁, f₂, f₃, f₄, f₅ in sequence by the Σ-method [23]; the structural index is 3. As the dimension increases, the Σ-method may perform an arbitrarily high number of differentiations. However, our weak differentiation index is the same as the differentiation index. In general, this is the situation for linear DAEs of the form

A Ẏ + B Y = S,        (25)

where S is the vector of m sufficiently smooth forcing functions of t, Y is as mentioned above, and A and B are m × m matrices. Consider, for instance, the linear DAEs (25) with m = 2k + 1, zero vector S, the identity matrix B, and A consisting solely of k blocks of the form (1 1), the lower-left element of each lying on the main diagonal of A. By the Kronecker canonical form [31], this system has index 1. However, the structural index algorithm [23] needs to differentiate the last equation k times, that is, the structural index is k + 1. In [21], Pantelides' algorithm needs to perform k + 1 iterations before termination. Therefore, it leads to a large number of redundant differentiations.

Example 5.4.
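For linear constant-coefficient DAEs of the form (25) with B the identity, the Kronecker index can be read off as the nilpotency index of the nilpotent part of A, i.e., the smallest ν with rank(A^ν) = rank(A^{ν+1}). The sketch below is our own illustration (the placement of the (1 1) blocks follows the description above and is our reconstruction of that family); it checks that the k-block matrix has index 1 independently of k:

```python
import numpy as np

def kronecker_index(A):
    """For the linear DAE A*y' + y = s (i.e. B = I), the Kronecker index
    is the smallest nu with rank(A^nu) == rank(A^(nu+1)),
    the nilpotency index of the nilpotent part of A."""
    m = A.shape[0]
    nu = 0
    P = np.eye(m)
    while np.linalg.matrix_rank(P) != np.linalg.matrix_rank(P @ A):
        P = P @ A
        nu += 1
    return nu

k = 2
m = 2 * k + 1
A = np.zeros((m, m))
for i in range(0, 2 * k, 2):   # k blocks (1 1), lower-left entry on the diagonal
    A[i, i] = 1.0
    A[i, i + 1] = 1.0

print(kronecker_index(A))  # 1, independent of k
```

By contrast, a single nilpotent Jordan block of size ν fed to the same routine returns ν, which is the situation where repeated structural differentiation is genuinely needed.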
We present a double pendulum model to demonstrate our index reduction technique for dynamical systems. It is modeled by the motion in Cartesian coordinates; see Figure 1.
Figure 1: Double Pendulum

We can derive the governing DAEs using Newton's second law of motion as follows:

0 = f₁ = m₁ẍ₁ − (λ₁/l₁)x₁ − (λ₂/l₂)(x₁ − x₂),
0 = f₂ = m₁ÿ₁ − (λ₁/l₁)y₁ − (λ₂/l₂)(y₁ − y₂) − m₁g,
0 = f₃ = m₂ẍ₂ − (λ₂/l₂)(x₂ − x₁),
0 = f₄ = m₂ÿ₂ − (λ₂/l₂)(y₂ − y₁) − m₂g,        (26)
0 = f₅ = x₁² + y₁² − l₁²,
0 = f₆ = (x₂ − x₁)² + (y₂ − y₁)² − l₂²,

where g > 0, l₁ > 0, l₂ > 0, m₁ > 0 and m₂ > 0 are constants, x₁, x₂, y₁, y₂ are the dependent variables with t a scalar independent variable, and λ₁, λ₂ are the Lagrange multipliers. From (6), we can construct its variable pencil M, from which we easily see that f₅ and f₆ need to be differentiated twice each. Then we obtain the new variable pencil M′ to determine the termination of differentiation, which satisfies condition (7). Finally, we can eliminate the dependent variables {x₁, ẋ₁, ẍ₁}, {x₂, ẋ₂, ẍ₂}, {y₁, ẏ₁, ÿ₁}, or {y₂, ẏ₂, ÿ₂}, where λ₁, λ₂ can be determined by x₁, x₂, y₁ and y₂. Therefore, we get the differentiation times υ = (0, 0, 0, 0, 2, 2)^T, that is, d_w = 2.

5.2. Some comparisons

In this subsection, we also apply our algorithm to systems of ODEs and compare the matrix size of the differential resultant with other methods. The algebraic manipulation of differential equations has gained importance in recent years.

Example 5.5.
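The two differentiations of the constraints f₅ and f₆ can be generated mechanically. The following sympy sketch (our own illustration, with our variable names) prolongs the holonomic constraints and confirms that second derivatives of the coordinates enter only after differentiating twice, matching the differentiation times υ = (0, 0, 0, 0, 2, 2)^T:

```python
import sympy as sp

t = sp.symbols('t')
l1, l2 = sp.symbols('l1 l2', positive=True)
x1, y1, x2, y2 = (sp.Function(n)(t) for n in ('x1', 'y1', 'x2', 'y2'))

# holonomic constraints of the double pendulum
f5 = x1**2 + y1**2 - l1**2
f6 = (x2 - x1)**2 + (y2 - y1)**2 - l2**2

df5 = sp.diff(f5, t)       # first prolongation: only first derivatives appear
ddf5 = sp.diff(f5, t, 2)   # second prolongation: second derivatives appear

second = {sp.diff(v, t, 2) for v in (x1, y1, x2, y2)}
print(second & df5.atoms(sp.Derivative))         # set()
print(bool(second & ddf5.atoms(sp.Derivative)))  # True
```

After the second prolongation the accelerations ẍᵢ, ÿᵢ occur linearly, so the enlarged system can be handled exactly as in Example 5.1.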
Consider two nonlinear generic ordinary differential polynomials of order one and degree two from [35]:

0 = f₁ = ẏ² + a₁yẏ + a₂y² + a₃ẏ + a₄y + a₅,
0 = f₂ = ẏ² + b₁yẏ + b₂y² + b₃ẏ + b₄y + b₅,        (27)

where the a_i, b_i are differential constants, i.e., δa_i = δb_i = 0, i = 1, 2, ⋯, 5.

Example 5.6.
Consider the simplified version of a predator–prey model from [6]:

0 = f₁ = a₁y₁ + (a₂ + a₃y₁)y₂ + ẏ₁ + (a₄ + a₅y₁)ẏ₂ + a₆y₂,
0 = f₂ = ẏ₂ + (b₁ + b₂y₂)y₁ + (b₃ + b₄y₂)ẏ₁ + b₅y₂,        (28)

where the a_i, b_j (i = 1, 2, ⋯, 6, j = 1, 2, ⋯, 5) are the known forcing functions of t.

Example    ZYG [35]    Rueda [30]    Our algorithm
5.5        36 × 36     *             9 × 9
5.6        *           13 × 13       5 × 5

Table 1: Matrix size for computing the differential resultant in Examples 5.5 and 5.6, where '*' indicates that the computation is not compared.

From Table 1, we have the following observations. In the two examples above, the matrix size of the differential resultant via our algorithm is much smaller than that of the two other methods. The smaller matrix reduces the time needed to compute its symbolic determinant, which is consistent with the generalized Dixon resultant formulation. ZYG [35] is based on the idea of the algebraic sparse resultant and the Macaulay resultant for a class of special ordinary differential polynomials. Rueda [30] presents differential elimination by differential specialization of Sylvester-style matrices, focusing on the sparsity with respect to the order of derivation. In practice, Dixon's method is the most efficient technique to simultaneously eliminate several variables from a system of nonhomogeneous polynomial equations.
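The size gap between Sylvester-style and Dixon-style matrices is already visible in the univariate case, where the Dixon construction reduces to the Bézout matrix: for two polynomials of degree n, the Sylvester matrix is 2n × 2n while the Bézout matrix is n × n, and both determinants give the resultant up to sign. The following sketch (our own illustration; function names and sample polynomials are not from the paper) makes the comparison concrete:

```python
import sympy as sp

x, a = sp.symbols('x a')

def sylvester_matrix(f, g, x):
    """Classical Sylvester matrix of size (deg f + deg g)."""
    m, n = sp.degree(f, x), sp.degree(g, x)
    fc = [f.coeff(x, i) for i in range(m, -1, -1)]
    gc = [g.coeff(x, i) for i in range(n, -1, -1)]
    M = sp.zeros(m + n, m + n)
    for r in range(n):             # n shifted rows of f's coefficients
        for k, c in enumerate(fc):
            M[r, r + k] = c
    for r in range(m):             # m shifted rows of g's coefficients
        for k, c in enumerate(gc):
            M[n + r, r + k] = c
    return M

def bezout_matrix(f, g, x, a):
    """Bezout (univariate Dixon) matrix: only n x n."""
    n = max(sp.degree(f, x), sp.degree(g, x))
    delta = sp.expand(sp.cancel((f * g.subs(x, a) - f.subs(x, a) * g) / (x - a)))
    return sp.Matrix(n, n, lambda i, j: delta.coeff(x, i).coeff(a, j))

f = x**3 - 2*x + 5
g = x**3 + x**2 - 7
S = sylvester_matrix(f, g, x)
B = bezout_matrix(f, g, x, a)
print(S.shape, B.shape)  # (6, 6) (3, 3)
```

The smaller matrix comes at the price of entries that are sums of 2 × 2 minors of the input coefficients rather than raw coefficients, mirroring the Dixon–Macaulay trade-off discussed in Remark 4.7.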
6. Conclusions
In this paper, we propose a new index reduction for high index DAEs and establish a relationship between the generalized Dixon resultant formulation and the solving of systems of DAEs, which we call differential algebraic elimination. A significant problem in differential algebraic elimination is to devise methods that control the growth of differentiations. Our method can deal with mixed algebraic and differential equations simultaneously, and the variable pencil technique is given to determine the termination of differentiation. From the viewpoint of algebraic geometry, it can be considered as index reduction via symbolic computation.

Our method is also suitable for systems of ODEs and for high index nonlinear non-square systems of DAEs, i.e., systems where the number of dependent variables is not equal to the number of equations. The weak differentiation index is defined to unify the formulation of differentiation times for the differential elimination of ODEs and the differential algebraic elimination of DAEs. Moreover, a heuristics method is given for sifting the extraneous factors in differential algebraic resultants, to remedy the drawback of factoring large polynomial systems. Parallel computation can be used to speed up the computation of the differential algebraic resultant for each dependent variable.

However, the disadvantages of our method are its limitation to polynomial coefficients and its incompleteness, both inherited from the generalized Dixon elimination. Moreover, for many practically relevant applications, a large scale system of ODEs/DAEs remains a challenging problem for purely symbolic methods; for instance, the full robot in the Modelica context [9] has, before symbolic simplification, about 2391 equations and 254 dependent variables, which are reduced to 743 equations and 36 states, requiring a large amount of index reduction.
An obvious future work is to attempt block triangularization and sparsity considerations in constructing the differential algebraic elimination matrices. The sparseness is reflected in the quantity l_i of P_ik in Section 3. Furthermore, a symbolic-numeric differential algebraic elimination method is a very interesting direction in numerical algebraic geometry.

Acknowledgement
This research was partly supported by China 973 Project NKBRPC-2011CB302402, the National Natural Science Foundation of China (No. 61402537, 91118001), the West Light Foundation of the Chinese Academy of Sciences, and the Open Project of Chongqing Key Laboratory of Automated Reasoning and Cognition (No. CARC2014004).

The first author is also grateful to Dr. Shizhong Zhao for his valuable discussions about removing the extraneous factors from resultant computations.
References

[1] K. E. Brenan, S. L. Campbell, L. R. Petzold, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, second ed., SIAM, 1996.
[2] S. L. Campbell, Least squares completions for nonlinear differential algebraic equations, Numerische Mathematik 65 (1993) 77–94.
[3] S. L. Campbell, C. W. Gear, The index of general nonlinear DAEs, Numerische Mathematik 72(2) (1995) 173–196.
[4] A. D. Chtcherba, D. Kapur, Exact resultants for corner-cut unmixed multivariate polynomial systems using the Dixon formulation, Journal of Symbolic Computation 36(3–4) (2003) 289–315.
[5] A. D. Chtcherba, D. Kapur, Constructing Sylvester-type resultant matrices using the Dixon formulation, Journal of Symbolic Computation 38(1) (2004) 777–814.
[6] A. C. Casal, A. S. Somolinos, Parametric excitation in a predator-prey model, in: The First 60 Years of Jean Mawhin, World Scientific, River Edge, N.J., 2004, pp. 41–54.
[7] G. Carrà-Ferro, A resultant theory for the systems of two ordinary algebraic differential equations, Applicable Algebra in Engineering, Communication and Computing 8 (1997) 539–560.
[8] M. A. El-Khateb, H. S. Hussien, An optimization method for solving some differential algebraic equations, Communications in Nonlinear Science and Numerical Simulation 14 (2009) 1970–1977.
[9] P. Fritzson, Principles of Object-Oriented Modeling and Simulation with Modelica 3.3: A Cyber-Physical Approach, second ed., Wiley-IEEE Press, 2015.
[10] X. S. Gao, W. Li, C. M. Yuan, Intersection theory in differential algebraic geometry: generic intersections and the differential Chow form, Transactions of the American Mathematical Society 365(9) (2013) 4575–4632.
[11] C. Gear, Differential-algebraic equation index transformations, SIAM Journal on Scientific and Statistical Computing 9(1) (1988) 39–47.
[12] E. Hairer, G. Wanner, Solving Ordinary Differential Equations II, second ed., Springer-Verlag, Berlin, 1996.
[13] D. Kapur, T. Saxena, Sparsity considerations in Dixon resultants, in: Proceedings of the 28th Annual ACM Symposium on Theory of Computing (STOC '96), Philadelphia, May 1996, pp. 184–191.
[14] D. Kapur, T. Saxena, L. Yang, Algebraic and geometric reasoning using Dixon resultants, in: Proceedings of the International Symposium on Symbolic and Algebraic Computation, ACM, New York, 1994, pp. 99–107.
[15] E. R. Kolchin, Differential Algebra and Algebraic Groups, Academic Press, London–New York, 1973.
[16] P. Kunkel, V. Mehrmann, Differential-Algebraic Equations: Analysis and Numerical Solution, EMS Publishing House, Zürich, Switzerland, 2006.
[17] W. Li, X. S. Gao, C. M. Yuan, Sparse differential resultant, in: Proceedings of the International Symposium on Symbolic and Algebraic Computation, ACM, New York, 2011, pp. 225–232.
[18] C. S. Liu, A new sliding control strategy for nonlinear system solved by the Lie-group differential algebraic equation method, Communications in Nonlinear Science and Numerical Simulation 19 (2014) 2012–2038.
[19] R. März, Numerical methods for differential algebraic equations, Acta Numerica 1 (1992) 141–198.
[20] S. E. Mattsson, G. Söderlind, Index reduction in differential-algebraic equations using dummy derivatives, SIAM Journal on Scientific Computing 14(3) (1993) 677–692.
[21] C. C. Pantelides, The consistent initialization of differential-algebraic systems, SIAM Journal on Scientific and Statistical Computing 9(2) (1988) 213–231.
[22] A. Pothen, C. J. Fan, Computing the block triangular form of a sparse matrix, ACM Transactions on Mathematical Software 16(4) (1990) 303–324.
[23] J. D. Pryce, A simple structural analysis method for DAEs, BIT Numerical Mathematics 41(2) (2001) 364–394.
[24] X. L. Qin, W. Y. Wu, Y. Feng, et al., Structural analysis of high index DAE for process simulation, International Journal of Modeling, Simulation, and Scientific Computing 4(4) (2013) 1–16.
[25] X. L. Qin, Z. Sun, T. Leng, et al., Computing the determinant of a matrix with polynomial entries by approximation, 2014, arXiv preprint.
[26] G. J. Reid, P. Lin, A. D. Wittkopf, Differential elimination-completion algorithms for DAE and PDAE, Studies in Applied Mathematics 106 (2001) 1–45.
[27] G. Reißig, W. Martinson, P. I. Barton, Differential-algebraic equations of index 1 may have an arbitrarily high structural index, SIAM Journal on Scientific Computing 21(6) (2000) 1987–1990.
[28] J. F. Ritt, Differential Algebra, Colloquium Publications, Vol. 33, American Mathematical Society, New York, 1950.
[29] S. L. Rueda, Linear sparse differential resultant formulas, Linear Algebra and its Applications 438(11) (2013) 4296–4321.
[30] S. L. Rueda, Differential elimination by differential specialization of Sylvester style matrices, 2014, arXiv preprint.
[31] M. Takamatsu, S. Iwata, Index reduction for differential-algebraic equations by substitution method, Linear Algebra and its Applications 429 (2008) 2268–2277.
[32] W. Y. Wu, G. Reid, S. Ilie, Implicit Riquier bases for PDAE and their semi-discretizations, Journal of Symbolic Computation 44(7) (2009) 923–941.
[33] L. Yang, X. R. Hou, Gather-and-shift: a symbolic method for solving polynomial systems, in: Proceedings of the First Asian Technology Conference on Mathematics, Association of Mathematics Educators, Singapore, 1995, pp. 771–780.
[34] L. Yang, Z. B. Zeng, W. N. Zhang, Differential elimination with Dixon resultants, Applied Mathematics and Computation 218 (2012) 10679–10690.
[35] Z. Y. Zhang, C. M. Yuan, X. S. Gao, Matrix formulae of differential resultant for first order generic ordinary differential polynomials, in: R. Feng et al. (Eds.), Computer Mathematics, Springer-Verlag, Berlin Heidelberg, 2014, pp. 479–503.
[36] S. Z. Zhao, H. G. Fu, An extended fast algorithm for constructing the Dixon resultant matrix, Science in China Series A: Mathematics 48(1) (2005) 131–143.
[37] S. Z. Zhao, H. G. Fu, Three kinds of extraneous factors in Dixon resultants, Science in China Series A: Mathematics 52(1) (2009) 160–172.
[38] S. Z. Zhao, H. G. Fu, Multivariate Sylvester resultant and extraneous factors (in Chinese), Science in China Series A: Mathematics 40(7) (2010) 649–660.