Computing Canonical Bases of Modules of Univariate Relations
Vincent Neiger
Technical University of Denmark, Kgs. Lyngby, [email protected]
Vu Thi Xuan
ENS de Lyon, LIP (CNRS, Inria, ENSL, UCBL), Lyon, [email protected]
ABSTRACT
We study the computation of canonical bases of sets of univariate relations (p_1, . . . , p_m) ∈ K[x]^m such that p_1 f_1 + · · · + p_m f_m = 0; here, f_1, . . . , f_m are from a quotient K[x]^n/M, where M is a K[x]-module of rank n given by a basis M ∈ K[x]^{n×n} in Hermite form. We exploit the triangular shape of M to generalize a divide-and-conquer approach which originates from fast minimal approximant basis algorithms. Besides recent techniques for this approach, we rely on high-order lifting to perform fast modular products of polynomial matrices of the form PF mod M.

Our algorithm uses Õ(m^{ω−1} D + n^ω D/m) operations in K, where D = deg(det(M)) is the K-vector space dimension of K[x]^n/M, Õ(·) indicates that logarithmic factors are omitted, and ω is the exponent of matrix multiplication. This had previously only been achieved for a diagonal matrix M. Furthermore, our algorithm can be used to compute the shifted Popov form of a nonsingular matrix within the same cost bound, up to logarithmic factors, as the previously fastest known algorithm, which is randomized.

KEYWORDS
Polynomial matrix; shifted Popov form; division with remainder; univariate equations; syzygy module.
In what follows, K is a field, K[x] denotes the set of univariate polynomials in x over K, and K[x]^{m×n} denotes the set of m × n (univariate) polynomial matrices.

Univariate relations. Let us consider a (free) K[x]-submodule M ⊆ K[x]^n of rank n, specified by one of its bases, represented as the rows of a nonsingular matrix M ∈ K[x]^{n×n}. Besides, let some elements f_1, . . . , f_m ∈ K[x]^n/M be represented as a matrix F ∈ K[x]^{m×n}. Then, the kernel of the module morphism

φ_{M,f} : K[x]^m → K[x]^n/M, (p_1, . . . , p_m) ↦ p_1 f_1 + · · · + p_m f_m

consists of relations between the f_i's, and is known as a syzygy module [10]. From the matrix viewpoint above, we write it as

R(M, F) = { p ∈ K[x]^{1×m} | pF = 0 mod M },

ISSAC ’17, Kaiserslautern, Germany. © 2017 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in
Proceedings of ISSAC ’17, July 25-28, 2017, http://dx.doi.org/10.1145/3087604.3087656.

where the notation A = 0 mod M stands for “A = QM for some Q”, which means that the rows of A are in the module M. Hereafter, the elements of R(M, F) are called relations of R(M, F). Examples of such relations are the following.

• Hermite–Padé approximants are relations for n = 1 and M = x^D K[x]. That is, given polynomials f_1, . . . , f_m, the corresponding approximants are all (p_1, . . . , p_m) ∈ K[x]^m such that p_1 f_1 + · · · + p_m f_m = 0 mod x^D. Fast algorithms for finding such approximants include [3, 15, 19, 31, 37].

• Multipoint Padé approximants: the fast computation of relations when M is a product of ideals, corresponding to a diagonal basis M = diag(M_1, . . . , M_n), was studied in [2, 4, 19, 20, 26, 32]. Many of these references focus on M_1, . . . , M_n which split over K with known roots and multiplicities; then, relations are known as multipoint Padé approximants [1], or also as interpolants [4, 20]. In this case, a relation can be thought of as a solution to a linear system over K[x] in which the jth equation is modulo M_j.

Canonical bases. Since det(M) K[x]^m ⊆ R(M, F) ⊆ K[x]^m, the module R(M, F) is free of rank m [8, Sec. 12.1, Thm. 4]. Hence, any of its bases can be represented as the rows of a nonsingular matrix in K[x]^{m×m}, which we call a relation basis for R(M, F).

Here, we are interested in computing relation bases in shifted Popov form [5, 27]. Such bases are canonical in terms of the module R(M, F) and of a shift, the latter being a tuple s ∈ Z^m used as column weights in the notion of degree for row vectors. Furthermore, the degrees in shifted Popov bases are well controlled, which helps to compute them faster than less constrained types of bases (see [19] and [25, Sec. 1.2.2]) and then, once obtained, to exploit them for other purposes (see for example [28, Thm. 12]).
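To make the Hermite–Padé case concrete, here is a toy instance (n = 1, M = x^D K[x]) checked in plain Python. The polynomials f_i, p_i and the helper names are ours, chosen for illustration only.

```python
# Toy Hermite-Pade instance: a relation (p1, p2) with p1*f1 + p2*f2 = 0 mod x^D.
# Polynomials are coefficient lists, index = degree; helpers are illustrative.

def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

D = 4
f1 = [1, 1, 1, 1]   # truncation of 1/(1 - x) at order D
f2 = [1]            # the constant 1
p1 = [1, -1]        # 1 - x
p2 = [-1]           # -1

# (1 - x)(1 + x + x^2 + x^3) - 1 = -x^4, which vanishes mod x^4,
# so (p1, p2) is a relation in R(x^D K[x], [f1; f2]).
residue = poly_add(poly_mul(p1, f1), poly_mul(p2, f2))
assert all(c == 0 for c in residue[:D])
```

The same membership test, with truncation replaced by genuine remainders, applies to any single-modulus instance.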
Having a shifted Popov basis of a submodule M ⊆ K[x]^n is particularly useful for efficient computations in the quotient K[x]^n/M (see Section 3). In fact, shifted Popov bases coincide with Gröbner bases for K[x]-submodules of K[x]^n [9, Chap. 15], for a term-over-position monomial order weighted by the entries of the shift. For more details about this link, we refer to [24, Chap. 6] and [25, Chap. 1].

For a shift s = (s_1, . . . , s_n) ∈ Z^n, the s-degree of a row vector p = [p_1, . . . , p_n] ∈ K[x]^{1×n} is max_{1⩽j⩽n}(deg(p_j) + s_j); the s-row degree of a matrix P ∈ K[x]^{m×n} is rdeg_s(P) = (d_1, . . . , d_m) with d_i the s-degree of the ith row of P. Then, the s-leading matrix of P = [p_{i,j}]_{i,j} is the matrix lm_s(P) ∈ K^{m×n} whose entry (i, j) is the coefficient of degree d_i − s_j of p_{i,j}. Similarly, the list of column degrees of a matrix P is denoted by cdeg(P).

Definition 1.1 ([5, 21]).
Let P ∈ K[x]^{m×m} be nonsingular, and let s ∈ Z^m. Then, P is said to be in

• s-reduced form if lm_s(P) is invertible;

• s-Popov form if lm_s(P) is unit lower triangular and lm(P^T) is the identity matrix.
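These definitions can be checked on a tiny example. Below is a minimal Python sketch (coefficient lists; the helper names are ours) that computes the s-row degree and s-leading matrix of a 2 × 2 polynomial matrix and verifies that it is s-reduced.

```python
# s-row degree and s-leading matrix of a polynomial matrix, following the
# definitions above. Polynomials are coefficient lists; names are ours.

def deg(p):
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1  # convention used here for the zero polynomial

def coeff(p, d):
    return p[d] if 0 <= d < len(p) else 0

def rdeg_s(P, s):
    return [max(deg(pij) + sj for pij, sj in zip(row, s)) for row in P]

def lm_s(P, s):
    d = rdeg_s(P, s)
    return [[coeff(pij, di - sj) for pij, sj in zip(row, s)]
            for row, di in zip(P, d)]

# P = [[x^2 + 1, x], [3, x + 2]] with shift s = (0, 1)
P = [[[1, 0, 1], [0, 1]],
     [[3],       [2, 1]]]
s = [0, 1]
assert rdeg_s(P, s) == [2, 2]
L = lm_s(P, s)                                   # [[1, 1], [0, 1]]
assert L[0][0] * L[1][1] - L[0][1] * L[1][0] != 0  # invertible: P is s-reduced
```

Note that this P is s-reduced but not in s-Popov form, since its s-leading matrix is not unit lower triangular.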
Problem 1:
Relation basis
Input:
• nonsingular matrix M ∈ K[x]^{n×n},
• matrix F ∈ K[x]^{m×n},
• shift s ∈ Z^m.

Output:
• the s-Popov relation basis P ∈ K[x]^{m×m} for R(M, F).

Hereafter, when we introduce a matrix by saying that it is reduced, it is understood that it is nonsingular. Similar forms can be defined for modules generated by the columns of a matrix rather than by its rows; in the context of polynomial matrix division with remainder, we will use the notion of a matrix P in column reduced form, meaning that lm(P^T) is invertible. In particular, we remark that any matrix in shifted Popov form is also column reduced.

Considering relation bases P for R(M, F) in shifted Popov form offers a strong control over the degrees of their entries. As shifted (row) reduced bases, they satisfy the predictable degree property [12], which is at the core of the correctness of a divide-and-conquer approach behind most algorithms for the two specific situations described above, for example [3, 15, 16, 20]. Furthermore, as column reduced matrices they have small average column degree, which is central in the efficiency of fast algorithms for non-uniform shifts [19, 26]. Indeed, we will see in Corollary 2.4 that

|cdeg(P)| = deg(det(P)) ⩽ deg(det(M)),

where | · | denotes the sum of the entries of a tuple.

Below, triangular canonical bases will play an important role. A matrix M ∈ K[x]^{n×n} is in Hermite form if M is upper triangular and lm(M^T) is the identity matrix; or, equivalently, if M is in (dn, d(n−1), . . . , d)-Popov form for any d ⩾ deg(det(M)).

Relations modulo Hermite forms. Our main focus is on the case where M is in Hermite form and F is already reduced modulo M. In this article, all comparisons of tuples are componentwise.

Theorem 1.2.
If M is in Hermite form and cdeg(F) < cdeg(M), there is a deterministic algorithm which solves Problem 1 using

Õ(m^{ω−1} D + n^ω D/m)

operations in K, where D = deg(det(M)) = |cdeg(M)|.

Here, the exponent ω is so that we can multiply m × m matrices over K in O(m^ω) operations in K, the best known bound being ω < 2.
38 [7, 23]. The notation Õ(·) means that we have omitted the logarithmic factors in the asymptotic bound.

To put this cost bound in perspective, we note that the representation of the input F and M requires at most (m + n)D field elements, while that of the output basis uses at most mD elements. In many applications we have n ∈ O(m), in which case the cost bound becomes Õ(m^{ω−1} D), which is satisfactory.

To the best of our knowledge, previous algorithms with a comparable cost bound focus on the case of a diagonal matrix M. The case of minimal approximant bases, M = x^d I_n, has concentrated a lot of attention. A first algorithm with cost quasi-linear in d was given in [3]. It was then improved in [15, 30, 37], obtaining the cost bound Õ(m^{ω−1} nd) = Õ(m^{ω−1} D) under assumptions on the dimensions m and n or on the shift.

In [20], the divide-and-conquer approach of [3] was carried over and made efficient in the more general case M = diag(M_1, . . . , M_n), where the polynomials M_i split over K with known linear factors. This approach was then augmented in [19] with a strategy focusing on degree information to efficiently compute the shifted Popov bases for arbitrary shifts, achieving the cost bound Õ(m^{ω−1} D). Then, the case of a diagonal matrix M, with no assumption on the diagonal entries, was solved within Õ(m^{ω−1} D + n^ω D/m) [26]. The main new ingredient developed in [26] was an efficient algorithm for the case n =
1, that is, when solving a single linear equation modulo a polynomial; we will also make use of this algorithm here.

In this paper we obtain the same cost bound as [26] for any matrix M in Hermite form. For a more detailed comparison with earlier algorithms focusing on diagonal matrices M, we refer the reader to [26, Sec. 1.2] and in particular Table 2 therein.

Our algorithm essentially follows the approach of [26]. In particular, it uses the algorithm developed there for n =
1. However, working modulo Hermite forms instead of diagonal matrices makes the computation of residuals much more involved. The residual is a modular product PF mod M which is computed after the first recursive call and is to be used as an input replacing F for the second recursive call. When M is diagonal, its computation boils down to the multiplication of P and F, although care has to be taken to account for their possibly unbalanced column degrees. However, when M is triangular, computing PF mod M becomes a much greater challenge: we want to compute a matrix remainder instead of simply taking polynomial remainders for each column separately. We handle this, while still taking unbalanced degrees into account, by resorting to high-order lifting [29].

Shifted Popov forms of matrices. A specific instance of Problem 1 yields the following problem: given a shift s ∈ Z^n and a nonsingular matrix M ∈ K[x]^{n×n}, compute the s-Popov form of M. Indeed, the latter is the s-Popov relation basis for R(M, I_n) (see Lemma 2.7). To compute this relation basis efficiently, we start by computing the Hermite form H of M, which can be done deterministically in Õ(n^ω ⌈D_M/n⌉) operations [22]. Here, D_M is the generic determinant bound [17]; writing M = [a_{i,j}], it is defined as

D_M = max_{π ∈ S_n} Σ_{1⩽i⩽n} max(0, deg(a_{i,π_i}))

where S_n is the set of permutations of {1, . . . , n}. In particular, D_M/n is bounded from above by both the average of the degrees of the columns of M and that of its rows. For more details about this quantity, we refer to [17, Sec. 6] and [22, Sec. 2.3].

Since the rows of H generate the same module as those of M, we have R(M, I_n) = R(H, I_n) (see Lemma 2.5). Then, applying our algorithm for relations modulo H has a cost of Õ(n^{ω−1} deg(det(H))) operations, according to Theorem 1.2. This yields the next result.

Theorem 1.3.
Given a shift s ∈ Z^n and a nonsingular matrix M ∈ K[x]^{n×n}, there is a deterministic algorithm which computes the s-Popov form of M using

Õ(n^ω ⌈D_M/n⌉) ⊆ Õ(n^ω deg(M))

operations in K.

A similar cost bound was obtained in [26], yet with a randomized algorithm. The latter follows the approach of [18] for computing Hermite forms, whose first step determines the Smith form S of M along with a matrix F such that the sought matrix is the s-Popov relation basis for R(S, F), with S being therefore a diagonal matrix. Here, relying on the deterministic computation of the Hermite form of M, our algorithm for relation bases modulo Hermite forms allows us to circumvent the computation of S, for which the currently fastest known algorithm is Las Vegas randomized [29]. For a more detailed comparison with earlier row reduction and Popov form algorithms, we refer to [26, Sec. 1.1] and Table 1 therein.

General relation bases. To solve the general case of Problem 1, one can proceed as follows:
• find the Hermite form H of M, using [22, Algo. 1 and 3];
• reduce F modulo H, for example using Algorithm 1;
• apply Algorithm 5 for relations modulo a Hermite form.

Outline. We first give basic properties about matrix division and relation bases (Section 2). We then focus on the fast computation of residuals (Section 3). After that, we discuss three situations which have already been solved efficiently in the literature (Section 4): when n =
1, when information on the output degrees is available, and when D ⩽ m. Finally, we present our algorithm for relations modulo Hermite forms (Section 5).

Division with remainder. Polynomial matrix division is a central notion in this paper, since we aim at solving equations modulo M.

Theorem 2.1 ([13, IV.§2], [21, Thm. 6.3-15]). For any F ∈ K[x]^{m×n} and any column reduced M ∈ K[x]^{n×n}, there exist unique matrices Q, R ∈ K[x]^{m×n} such that F = QM + R and cdeg(R) < cdeg(M).

Hereafter, we write Quo(F, M) and Rem(F, M) for the quotient Q and the remainder R. We have the following properties.

Lemma 2.2. We have
Rem(P Rem(F, M), M) = Rem(PF, M) and

Rem([F^T G^T]^T, M) = [Rem(F, M)^T Rem(G, M)^T]^T

for any F ∈ K[x]^{m×n}, G ∈ K[x]^{∗×n}, P ∈ K[x]^{∗×m}, and any column reduced M ∈ K[x]^{n×n}.

Degree control for relation bases. We first relate the vector space dimension of quotients and the degree of the determinant of bases.

Lemma 2.3.
Let M be a K[x]-submodule of K[x]^n of rank n. Then, the dimension of K[x]^n/M as a K-vector space is deg(det(M)), for any matrix M ∈ K[x]^{n×n} whose rows form a basis of M.

Proof. Since the degree of the determinant is the same for all bases of M, we may assume that M is column reduced. Then, Theorem 2.1 implies that there is a K-vector space isomorphism K[x]^n/M ≅ K[x]/(x^{d_1}) × · · · × K[x]/(x^{d_n}), where (d_1, . . . , d_n) = cdeg(M). Thus, the dimension of K[x]^n/M is d_1 + · · · + d_n, which is equal to deg(det(M)) according to [21, Sec. 6.3.2]. □

This allows us to bound the sum of column degrees of any column reduced relation basis; for example, a shifted Popov relation basis.

Corollary 2.4.
Let F ∈ K[x]^{m×n}, and let M ∈ K[x]^{n×n} be nonsingular. Then, any relation basis P ∈ K[x]^{m×m} for R(M, F) is such that deg(det(P)) ⩽ deg(det(M)). In particular, if P is column reduced, then |cdeg(P)| ⩽ deg(det(M)).

Proof. Let M be the row space of M. By definition, R(M, F) is the kernel of φ_{M,f} (see Section 1), hence K[x]^m/R(M, F) is isomorphic to a submodule of K[x]^n/M. Since, by Lemma 2.3, the dimensions of K[x]^m/R(M, F) and K[x]^n/M are deg(det(P)) and deg(det(M)), we obtain deg(det(P)) ⩽ deg(det(M)). □

Properties of relation bases. We now formalize the facts that R(M, F) is not changed if M is replaced by another basis of the module generated by its rows; or if F and M are right-multiplied by the same nonsingular matrix; or yet if F is considered modulo M.

Lemma 2.5. Let F ∈ K[x]^{m×n}, and let M ∈ K[x]^{n×n} be nonsingular. Then, for any nonsingular A ∈ K[x]^{n×n}, any matrix B ∈ K[x]^{m×n}, and any unimodular U ∈ K[x]^{n×n}, we have R(M, F) = R(UM, F) = R(MA, FA) = R(M, F + BM).

A first consequence is that we may discard identity columns in M.

Corollary 2.6. Let F ∈ K[x]^{m×n}, and let M ∈ K[x]^{n×n} be nonsingular. Suppose that M has at least k ∈ Z_{>0} identity columns, and that the corresponding columns of F are zero. Then, let π_1, π_2 be n × n permutation matrices such that

π_1 M π_2 = [I_k B; 0 N] and F π_2 = [0 G],

where N ∈ K[x]^{(n−k)×(n−k)} and G ∈ K[x]^{m×(n−k)}. Then, R(M, F) = R(N, G).

Another consequence concerns the transformation of a matrix into shifted Popov form. Indeed, Lemma 2.5 together with the next lemma imply in particular that the s-Popov form of M is the s-Popov relation basis for R(H, I_n), where H is the Hermite form of M.

Lemma 2.7. Let M ∈ K[x]^{n×n} be nonsingular. Then, M is a relation basis for R(M, I_n).
It follows that the s-Popov form of M is the s-Popov relation basis for R(M, I_n), for any s ∈ Z^n.

Proof. Let P ∈ K[x]^{n×n} be a relation basis for R(M, I_n). Then, P I_n = QM for some Q ∈ K[x]^{n×n}; since the rows of M belong to R(M, I_n), we also have M = RP for some R ∈ K[x]^{n×n}. Since P is nonsingular, P = QRP implies that QR = I_n, and therefore R is unimodular. Thus, M = RP is a relation basis for R(M, I_n). □

Divide and conquer approach. Here we give properties in the case of a block triangular matrix M. They imply, if M is in Hermite form, that Problem 1 can be solved recursively by splitting the instance in dimension n into two instances in dimension n/2.

Lemma 2.8. Let M_1 ∈ K[x]^{n_1×n_1}, M_2 ∈ K[x]^{n_2×n_2}, and A ∈ K[x]^{n_1×n_2} be such that M = [M_1 A; 0 M_2] is column reduced. For any F_1 ∈ K[x]^{m×n_1} and F_2 ∈ K[x]^{m×n_2}, we have

Rem([F_1 F_2], M) = [Rem(F_1, M_1)  Rem(F_2 − Quo(F_1, M_1) A, M_2)].

Proof. Writing [F_1 F_2] = [Q_1 Q_2] M + [R_1 R_2] where cdeg([R_1 R_2]) < cdeg(M), we obtain F_1 = Q_1 M_1 + R_1 as well as cdeg(R_1) < cdeg(M_1), and therefore R_1 = Rem(F_1, M_1) and Q_1 = Quo(F_1, M_1). The result follows from F_2 = Q_1 A + Q_2 M_2 + R_2. □

Theorem 2.9.
Let M = [M_1 ∗; 0 M_2] be column reduced, where M_1 ∈ K[x]^{n_1×n_1} and M_2 ∈ K[x]^{n_2×n_2}, and let F_1 ∈ K[x]^{m×n_1} and F_2 ∈ K[x]^{m×n_2}. If P_1 is a basis for R(M_1, F_1), then Rem(P_1 [F_1 F_2], M) has the form [0 G] for some G ∈ K[x]^{m×n_2}; if furthermore P_2 is a basis for R(M_2, G), then P_2 P_1 is a basis for R(M, [F_1 F_2]).
Proof. It follows from Lemma 2.8 that the first n_1 columns of Rem(P_1 [F_1 F_2], M) are Rem(P_1 F_1, M_1), which is zero, and that Rem([0 G], M) = [0 Rem(G, M_2)]. Then, the first identity in Lemma 2.2 implies both that R(M, [0 G]) = R(M_2, G) and that the rows of P_2 P_1 are in R(M, [F_1 F_2]). Now let p ∈ R(M, [F_1 F_2]). Lemma 2.8 implies that p ∈ R(M_1, F_1), hence p = λ P_1 for some λ. Then, the first identity in Lemma 2.2 shows that 0 = Rem(λ P_1 [F_1 F_2], M) = Rem(λ [0 G], M), and therefore λ ∈ R(M_2, G). Thus λ = µ P_2 for some µ, and p = µ P_2 P_1. □

In this section, we aim at designing a fast algorithm for the modular products that arise in our relation basis algorithm.
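For a single polynomial (the case n = 1), such a modular product is ordinary fast Euclidean division: the reversed quotient is a truncated inverse computed by Newton iteration. The following is a minimal exact-arithmetic sketch of that scalar prototype, with our own function names; it is not the matrix algorithm developed below.

```python
# Scalar prototype of division via reversal: for f = q*m + r with
# deg r < deg m, the reversed quotient is rev(f) * rev(m)^{-1} mod x^k,
# where k = deg f - deg m + 1. Exact arithmetic over Q; names are ours.
from fractions import Fraction

def poly_mul(a, b, trunc=None):
    c = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c if trunc is None else c[:trunc]

def inv_mod_xk(f, k):
    # Newton iteration g <- g*(2 - f*g), doubling the precision each step
    g = [Fraction(1) / f[0]]
    prec = 1
    while prec < k:
        prec = min(2 * prec, k)
        e = [-c for c in poly_mul(f[:prec], g, prec)]
        e[0] += 2
        g = poly_mul(g, e, prec)
    return g

def divmod_by_reversal(f, m):
    n, d = len(f) - 1, len(m) - 1        # degrees, leading coefficients nonzero
    k = n - d + 1
    qrev = poly_mul(f[::-1][:k], inv_mod_xk(m[::-1], k), k)
    qrev += [Fraction(0)] * (k - len(qrev))
    q = qrev[::-1]
    r = [fi - gi for fi, gi in zip(f, poly_mul(q, m))][:d]
    return q, r

f = [Fraction(c) for c in (4, 3, 2, 1)]  # x^3 + 2x^2 + 3x + 4
m = [Fraction(c) for c in (1, 1)]        # x + 1
q, r = divmod_by_reversal(f, m)
assert q == [2, 1, 1] and r == [2]       # f = (x^2 + x + 2)(x + 1) + 2
```

The matrix case handled next replaces the scalar reversals by column-wise reversals and the truncated inverse by a high-order lifting computation.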
For univariate polynomials, fast Euclidean division can be achieved by first computing the reversed quotient via Newton iteration, and then deducing the remainder [14, Chap. 9]. This directly translates into the context of polynomial matrices, as was noted for example in the proof of [15, Lem. 3.4] or in [36, Chap. 10]. In the latter reference, it is shown how to efficiently compute remainders Rem(E F, M) for a matrix E as in Eq. (1) below; this is not general enough for our purpose. Algorithms for the general case have been studied [6, 11, 33–35], but we are not aware of any that achieves the speed we desire. Thus, as a preliminary to the computation of residuals in Section 3.2, we now detail this extension of fast polynomial division to fast polynomial matrix division.

As mentioned above, we will start by computing the quotient. The degrees of its entries are controlled thanks to the reducedness of the divisor, which ensures that no high-degree cancellation can occur when multiplying the quotient and the divisor.

Lemma 3.1. Let M ∈ K[x]^{n×n}, F ∈ K[x]^{m×n}, and δ ∈ Z_{>0} be such that M is column reduced and cdeg(F) < cdeg(M) + (δ, . . . , δ). Then, deg(Quo(F, M)) < δ.

Proof. First, lm(M^T)^T = lm_{−d}(M) where d = cdeg(M) ∈ Z^n_{⩾0}: the 0-column leading matrix of M is equal to its −d-row leading matrix. Since M is 0-column reduced, it is also −d-row reduced. Thus, by the predictable degree property [21, Thm. 6.3-13] and since rdeg_{−d}(M) = 0, we have rdeg_{−d}(QM) = rdeg(Q). Here, we write Q = Quo(F, M) and R = Rem(F, M).

Now, our assumption cdeg(F) < d + (δ, . . . , δ) and the fact that cdeg(R) < d imply that cdeg(F − R) < d + (δ, . . . , δ), and thus rdeg_{−d}(F − R) < (δ, . . . , δ). Since F − R = QM, from the previous paragraph we obtain rdeg(Q) < (δ, . . . , δ), hence deg(Q) < δ. □

Corollary 3.2.
Let M ∈ K[x]^{n×n} and F ∈ K[x]^{m×n} be such that M is column reduced and cdeg(F) < cdeg(M), and let P ∈ K[x]^{k×m}. Then, rdeg(Quo(PF, M)) < rdeg(P).

Proof. For the case k =
1, the inequality follows from Lemma 3.1 since cdeg(PF) ⩽ (δ, . . . , δ) + cdeg(F) < (δ, . . . , δ) + cdeg(M), where δ = deg(P). Then, the general case k ∈ Z_{>0} follows by considering separately each row of P. □

Going back to the division F = QM + R, to obtain the reversed quotient we will right-multiply the reversed F by an expansion of the inverse of the reversed M. This operation is performed efficiently by means of high-order lifting; we will use the next result.

Lemma 3.3. Let M ∈ K[x]^{n×n} with M(0) nonsingular, and let F ∈ K[x]^{m×n}. Then, defining d = ⌈|cdeg(M)|/n⌉, the truncated x-adic expansion F M^{−1} mod x^{kd} can be computed deterministically using Õ(⌈mk/n⌉ n^ω d) operations in K.

Proof. This is a minor extension of [29, Prop. 15], incorporating the average column degree of the matrix M instead of the largest degree of its entries. This can be done by means of partial column linearization [17, Sec. 6], as follows. One first expands the high-degree columns of M and inserts elementary rows to obtain a matrix M̄ ∈ K[x]^{n̄×n̄} such that n ⩽ n̄ < 2n, deg(M̄) ⩽ d, and M^{−1} is the n × n principal leading submatrix of M̄^{−1} [17, Thm. 10 and Cor. 2]. Then, defining F̄ = [F 0] ∈ K[x]^{m×n̄}, we have that F M^{−1} is the submatrix of F̄ M̄^{−1} formed by its first n columns. Thus, the sought truncated expansion is obtained by computing F̄ M̄^{−1} mod x^{kd}, which is done efficiently by [29, Alg. 4] with the choice X = x^d; this is valid since this polynomial is coprime to det(M̄) = det(M) and its degree is at least the degree of M̄. □
PM-QuoRem
Input:
• M ∈ K[x]^{n×n} column reduced,
• F ∈ K[x]^{m×n},
• δ ∈ Z_{>0} such that cdeg(F) < cdeg(M) + (δ, . . . , δ).

Output: the quotient Quo(F, M), the remainder Rem(F, M).

/* reverse order of coefficients */
1. (d_1, . . . , d_n) ← cdeg(M)
2. M_rev ← M(x^{−1}) diag(x^{d_1}, . . . , x^{d_n})
3. F_rev ← F(x^{−1}) diag(x^{δ+d_1−1}, . . . , x^{δ+d_n−1})

/* compute quotient via expansion */
4. Q_rev ← F_rev M_rev^{−1} mod x^δ
5. Q ← x^{δ−1} Q_rev(x^{−1})
6. Return (Q, F − QM)
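For a diagonal column reduced M, the division computed above degenerates to entrywise scalar division, which gives a quick sanity check of the degree bounds of Theorem 2.1 and Lemma 3.1. A toy Python sketch; the data and helper names are ours.

```python
# Matrix division F = Q*M + R with cdeg(R) < cdeg(M) when M = diag(m1, m2):
# entry (i, j) of R is simply F[i][j] mod M[j][j]. We check the degree
# bounds of Theorem 2.1 and Lemma 3.1 on a toy instance (names are ours).

def poly_deg(p):
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def poly_divmod(f, m):
    # naive division by a monic m, exact over the integers
    f = list(f)
    d = poly_deg(m)
    q = [0] * max(1, len(f) - d)
    for i in range(len(f) - 1, d - 1, -1):
        c = f[i]
        if c:
            q[i - d] = c
            for j in range(d + 1):
                f[i - d + j] -= c * m[j]
    return q, f[:d] + [0] * (d - len(f[:d]))

diag = ([-1, 0, 0, 1],    # m1 = x^3 - 1, column degree 3
        [2, 1])           # m2 = x + 2,   column degree 1
F = [[[5, 0, 0, 0, 3],    # degree 4 = 3 + delta - 1
      [1, 1]]]            # degree 1 = 1 + delta - 1
delta = 2                 # so cdeg(F) < cdeg(M) + (delta, delta)
Q, R = [[None, None]], [[None, None]]
for j in (0, 1):
    Q[0][j], R[0][j] = poly_divmod(F[0][j], diag[j])
assert all(poly_deg(R[0][j]) < poly_deg(diag[j]) for j in (0, 1))  # Thm 2.1
assert all(poly_deg(Q[0][j]) < delta for j in (0, 1))              # Lem 3.1
```

For a genuinely triangular M the columns interact through the quotient, which is exactly what the reversal-and-lifting computation above handles.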
Algorithm 1 is correct. Assuming that both mδ and n are in O(D), where D = |cdeg(M)|, this algorithm uses Õ(⌈m/n⌉ n^{ω−1} D) operations in K.

Proof. Let Q = Quo(F, M), R = Rem(F, M), and (d_1, . . . , d_n) = cdeg(M). We have the bounds cdeg(F) < (δ + d_1, . . . , δ + d_n), cdeg(R) < (d_1, . . . , d_n), and Lemma 3.1 gives deg(Q) < δ. Thus, we can define the reversals of these polynomial matrices as

M_rev = M(x^{−1}) diag(x^{d_1}, . . . , x^{d_n}),
F_rev = F(x^{−1}) diag(x^{δ+d_1−1}, . . . , x^{δ+d_n−1}),
Q_rev = x^{δ−1} Q(x^{−1}),
R_rev = R(x^{−1}) diag(x^{d_1−1}, . . . , x^{d_n−1}),

for which the same degree bounds hold. Then, right-multiplying both sides of the identity F(x^{−1}) = Q(x^{−1}) M(x^{−1}) + R(x^{−1}) by diag(x^{δ+d_1−1}, . . . , x^{δ+d_n−1}), we obtain F_rev = Q_rev M_rev + x^δ R_rev.

Now, note that the constant term M_rev(0) ∈ K^{n×n} is equal to the column leading matrix of M, which is invertible since M is column reduced, hence M_rev is invertible (over the fractions). Thus, since deg(Q_rev) < δ, this reversed quotient matrix can be determined as the truncated expansion Q_rev = F_rev M_rev^{−1} mod x^δ. This proves the correctness of the algorithm.

Concerning the cost bound, the computation of Q_rev uses Õ(⌈(mδ)/(nd)⌉ n^ω d) operations according to Lemma 3.3, where d = ⌈D/n⌉. We have by assumption d ∈ Θ(D/n) as well as mδ/(nd) ∈ O(1), so that this cost bound is in Õ(n^{ω−1} D).

In the last step, we multiply the m × n matrix Q of degree less than δ with the n × n matrix M such that |cdeg(M)| = D. First consider the case m ⩽ n.
To perform this product efficiently, we expand the rows of Q so as to obtain an O(n) × n matrix Q̄ of degree in O(⌈mδ/n⌉) and such that QM is easily retrieved from Q̄M (see Section 3.2 for more details about how such expansions are carried out). Thus, this product is done in Õ(n^{ω−1} D), since ⌈mδ/n⌉ ∈ O(D/n). On the other hand, if m > n, we have δ ∈ O(D/m) ⊆ O(D/n). Then, we can compute the product QM via ⌈m/n⌉ products of n × n matrices of degree O(D/n), which each cost Õ(n^{ω−1} D) operations; hence the total cost Õ(m n^{ω−2} D) when m > n. □

Here, we focus on performing modular products Rem(PF, M), where F ∈ K[x]^{m×n} and P ∈ K[x]^{m×m} are such that cdeg(F) < cdeg(M) and |cdeg(P)| ⩽ |cdeg(M)|, and M ∈ K[x]^{n×n} is column reduced. The difficulty in designing a fast algorithm for this operation comes from the non-uniformity of cdeg(P): in particular, the product PF cannot be computed within the target cost bound.

To start with, we use the same strategy as in [19, 26]: we make the column degrees of P uniform, at the price of introducing another, simpler matrix E for which we want to compute Rem(E F, M). Let (δ_1, . . . , δ_m) = cdeg(P), δ = ⌈(δ_1 + · · · + δ_m)/m⌉ ⩾
1, and for i ∈ {1, . . . , m} write δ_i = (α_i − 1)δ + β_i, with α_i = ⌈δ_i/δ⌉ and 1 ⩽ β_i ⩽ δ if δ_i > 0, and with α_i = 1 and β_i = 0 if δ_i = 0. Then, let m̄ = α_1 + · · · + α_m, and define E ∈ K[x]^{m̄×m} as the transpose of

E^T = diag([1 x^δ · · · x^{(α_1−1)δ}], . . . , [1 x^δ · · · x^{(α_m−1)δ}]).    (1)

Define also the expanded column degrees δ̄ ∈ Z^{m̄}_{⩾0} as

δ̄ = (δ, . . . , δ, β_1, . . . , δ, . . . , δ, β_m),    (2)

where the ith block (δ, . . . , δ, β_i) has α_i entries. Then, we expand the columns of P by considering P̄ ∈ K[x]^{m×m̄} such that P = P̄ E and deg(P̄) ⩽ δ. (Note that P̄ can be made unique by specifying more constraints on cdeg(P̄).) The aim of this construction is that the dimension is at most doubled while the degree of the expanded matrix becomes the average column degree of P. Precisely, m ⩽ m̄ < 2m and max(δ̄) = δ = ⌈|cdeg(P)|/m⌉.

Now, we have Rem(PF, M) = Rem(P̄ E F, M) = Rem(P̄ F̄, M) by Lemma 2.2, where F̄ = Rem(E F, M). Thus, Rem(PF, M) can be obtained by computing first F̄ and then Rem(P̄ F̄, M). For the latter, since P̄ has small degree, one can compute the product and then perform the division (the last two steps of Algorithm 3). The second phase of Algorithm 3 efficiently computes F̄, relying on Algorithm 2.

Algorithm 2:
RemOfShifts
Input:
• M ∈ K[x]^{n×n} column reduced,
• F ∈ K[x]^{m×n} such that cdeg(F) < cdeg(M),
• δ ∈ Z_{>0} and k ∈ Z_{⩾0}.

Output: the list of remainders (Rem(x^{rδ} F, M))_{0⩽r<2^k}.

1. If k = 0 then Return (F)
2. Else:
  a. (∗, G) ← PM-QuoRem(M, x^{2^{k−1}δ} F, 2^{k−1}δ)
  b. ([R_r^T R̄_r^T]^T)_{0⩽r<2^{k−1}} ← RemOfShifts(M, [F^T G^T]^T, δ, k − 1)
  c. Return (R_r)_{0⩽r<2^{k−1}} ∪ (R̄_r)_{0⩽r<2^{k−1}}
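The recursion is easiest to see for n = 1. The following scalar analogue (our own code, mirroring steps a–c above with naive division in place of PM-QuoRem) doubles the dividend list while halving the shift exponent:

```python
# Scalar analogue of RemOfShifts: compute x^(r*delta)*f mod m for
# 0 <= r < 2^k by doubling the dividend list while halving the shift.
# m is monic; polynomials are integer coefficient lists (names are ours).

def poly_rem(f, m):
    f = list(f)
    d = len(m) - 1
    for i in range(len(f) - 1, d - 1, -1):
        c = f[i]
        if c:
            for j in range(d + 1):
                f[i - d + j] -= c * m[j]
    f = f[:d]
    return f + [0] * (d - len(f))

def rem_of_shifts(fs, m, delta, k):
    # returns a list indexed by r < 2^k; entry r lists x^(r*delta)*f mod m
    if k == 0:
        return [[poly_rem(f, m) for f in fs]]
    half = (1 << (k - 1)) * delta
    gs = [poly_rem([0] * half + f, m) for f in fs]   # step a: one division
    sub = rem_of_shifts(fs + gs, m, delta, k - 1)    # step b: stacked call
    lo = [row[:len(fs)] for row in sub]              # r in [0, 2^(k-1))
    hi = [row[len(fs):] for row in sub]              # r in [2^(k-1), 2^k)
    return lo + hi                                   # step c

m = [-2, 0, 1]                            # x^2 - 2
out = rem_of_shifts([[1, 1]], m, 1, 2)    # f = 1 + x, shifts x^0, ..., x^3
assert [row[0] for row in out] == [[1, 1], [2, 1], [2, 2], [4, 2]]
```

The correctness rests on the scalar analogue of Lemma 2.2: reducing x^{rδ}·(x^{half} f mod m) modulo m gives x^{rδ+half} f mod m.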
Algorithm 2 is correct. Assuming that both 2^k mδ and n are in O(D), where D = |cdeg(M)|, this algorithm uses Õ((2^k m n^{ω−2} + k n^{ω−1}) D) operations in K.

Proof. The correctness is a consequence of the two properties in Lemma 2.2. Now, if 2^k mδ and n are in O(D), the assumptions in Proposition 3.4 about the input parameters for PM-QuoRem are always satisfied in the recursive calls, since the row dimension m is doubled while the exponent 2^{k−1}δ is halved. From the same proposition, we deduce the cost bound Õ((Σ_{0⩽r⩽k−1} ⌈2^r m/n⌉) n^{ω−1} D). □

Algorithm 3:
Residual
Input:
• M ∈ K[x]^{n×n} column reduced,
• F ∈ K[x]^{m×n} such that cdeg(F) < cdeg(M),
• P ∈ K[x]^{m×m}.

Output: the remainder Rem(PF, M).

/* expand high-degree columns of P */
1. (δ_i)_{1⩽i⩽m} ← cdeg(P)
2. δ ← ⌈(δ_1 + · · · + δ_m)/m⌉
3. α_i ← max(1, ⌈δ_i/δ⌉) for 1 ⩽ i ⩽ m
4. m̄ ← α_1 + · · · + α_m
5. P̄ ∈ K[x]^{m×m̄} ← matrix such that P = P̄ E and deg(P̄) ⩽ δ, for E as in Eq. (1)

/* compute F̄ = Rem(E F, M) */
6. For 1 ⩽ i ⩽ m such that α_i = 1 do F̄_i ∈ K[x]^{α_i×n} ← row i of F
7. For 1 ⩽ k ⩽ ⌈log_2(max_i(α_i))⌉ do:
  a. (i_1, . . . , i_ℓ) ← {i ∈ {1, . . . , m} | 2^{k−1} < α_i ⩽ 2^k}
  b. G ← submatrix of F formed by its rows i_1, . . . , i_ℓ
  c. (R_r)_{0⩽r<2^k} ← RemOfShifts(M, G, δ, k)
  d. For 1 ⩽ j ⩽ ℓ do F̄_{i_j} ∈ K[x]^{α_{i_j}×n} ← stack the rows j of (R_r)_{0⩽r<α_{i_j}}
8. F̄ ← [F̄_1^T · · · F̄_m^T]^T ∈ K[x]^{m̄×n}

/* left-multiply by the expanded P */
9. G ← P̄ F̄

/* complete the remainder computation */
10. (∗, R) ← PM-QuoRem(M, G, δ)
11. Return R
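The column expansion in the first steps above can be illustrated numerically. The sketch below (our own names and toy values) computes δ and the α_i for a small column-degree tuple, and checks the chunk recombination that underlies P = P̄E.

```python
# Parameters of the column partial linearization: delta is the ceiling of
# the average column degree, alpha_i the number of chunks for column i.
# Toy values; helper names are ours.

def expansion_params(col_degs):
    m = len(col_degs)
    delta = -(-sum(col_degs) // m)              # ceiling division
    alpha = [max(1, -(-d // delta)) for d in col_degs]
    return delta, alpha

delta, alpha = expansion_params([7, 0, 1, 4])
assert delta == 3                               # ceil(12 / 4)
assert alpha == [3, 1, 1, 2]
assert len(alpha) <= sum(alpha) < 2 * len(alpha)   # m <= m_bar < 2m

# A column of degree 7 splits into alpha = 3 chunks of low degree;
# multiplying chunk j by x^(j*delta) and summing recovers the column,
# which is the scalar shadow of the identity P = P_bar * E.
p = [1, 2, 3, 4, 5, 6, 7, 8]
chunks = [p[i:i + delta] for i in range(0, len(p), delta)]
recombined = [0] * len(p)
for j, ch in enumerate(chunks):
    for t, c in enumerate(ch):
        recombined[j * delta + t] += c
assert len(chunks) == 3 and recombined == p
```

The point of the construction is visible here: the row dimension grows by less than a factor of two while every chunk has degree close to the average column degree.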
Proposition 3.6.
Algorithm 3 is correct. Assuming that all of |cdeg(P)|, m, and n are in O(D), where D = |cdeg(M)|, this algorithm uses Õ((m^{ω−1} + n^{ω−1}) D) operations in K.

Proof. Let us consider
E ∈ K[x]^{m̄×m} defined as in Eq. (1) from the parameters δ and α_1, . . . , α_m computed at the first step. We claim that the matrix F̄ computed at the second step is equal to Rem(E F, M). Then, having cdeg(P̄ F̄) < cdeg(M) + (δ, . . . , δ), the correctness of PM-QuoRem implies R = Rem(P̄ F̄, M), which is Rem(PF, M) by Lemma 2.2.

To prove our claim, it is enough to show that, for 1 ⩽ i ⩽ m, the ith block F̄_i of F̄ is the matrix formed by stacking the remainders involving the row i of F, that is, (Rem(x^{rδ} F_{i,∗}, M))_{0⩽r<α_i}. This is clear from the first For loop if α_i =
1. Otherwise, let k ∈ Z_{>0} be such that 2^{k−1} < α_i ⩽ 2^k. Then, at the kth iteration of the second loop, we have i_j = i for some 1 ⩽ j ⩽ ℓ. Thus, the correctness of RemOfShifts implies that, for 0 ⩽ r < 2^k, the row j of R_r is Rem(x^{rδ} G_{j,∗}, M) = Rem(x^{rδ} F_{i,∗}, M). Since 2^k ⩾ α_i, this contains the wanted remainders and the claim follows.

Let us show the cost bound, assuming that |cdeg(P)|, m, and n are in O(D). Note that this implies mδ ∈ O(D).

We first study the cost of the iteration k of the second loop. We have that 2^{k−1} ℓ ⩽ α_1 + · · · + α_m = m̄ ⩽ 2m, the row dimension of G is ℓ, and k ⩽ ⌈log_2(max_i(α_i))⌉ ∈ O(log(m)). Thus, the call to RemOfShifts costs Õ((m n^{ω−2} + n^{ω−1}) D) operations according to Proposition 3.5, and the same cost bound holds for the whole loop. Concerning the final division, the cost bound Õ(⌈m/n⌉ n^{ω−1} D) follows directly from Proposition 3.4.

The product P̄ F̄ involves the m × m̄ matrix P̄ whose degree is at most δ and the m̄ × n matrix F̄ such that cdeg(F̄) < cdeg(M); we recall that m̄ < 2m. If n ⩾ m, we expand the columns of F̄ similarly to how P̄ was obtained from P: this yields an m̄ × O(n) matrix of degree at most ⌈D/n⌉, whose left-multiplication by P̄ directly yields P̄ F̄ by compressing back the columns. Thus, this product is done in Õ(m^{ω−2} n D) operations, since both δ and D/n are in O(D/m) when n ⩾ m. If m ⩾ n, we do a similar column expansion of F̄, yet into a matrix with O(m) columns and degree O(D/m); thus, the product can be performed in Õ(m^{ω−1} D) operations in this case. □

Here, we discuss fast solutions to specific instances of Problem 1. These will be important ingredients of our main algorithm for relations modulo Hermite forms (Algorithm 5).
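As a hand-checkable illustration of the first such instance (n = 1, treated next), consider the relations of f_1 = x and f_2 = x modulo M = x^2. The basis below was worked out by hand; the Python sketch (with our own helpers) only verifies that it consists of relations, that its 0-leading matrix is unit lower triangular as required for 0-Popov form, and that the determinant degree bound of Corollary 2.4 holds.

```python
# n = 1 toy instance: relations (p1, p2) with p1*x + p2*x = 0 mod x^2.
# P below was computed by hand; we only verify its claimed properties.

def poly_mul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def poly_deg(p):
    return max((i for i, c in enumerate(p) if c != 0), default=-1)

def coeff(p, d):
    return p[d] if 0 <= d < len(p) else 0

M = [0, 0, 1]                 # x^2
f = [[0, 1], [0, 1]]          # f1 = x, f2 = x
P = [[[0, 1], [0]],           # row (x, 0)
     [[-1],   [1]]]           # row (-1, 1)

# each row of P is a relation: since M = x^2, the remainder of a product
# is its truncation, so it suffices that the coefficients below x^2 vanish
for row in P:
    acc = [0, 0, 0]
    for pi, fi in zip(row, f):
        for d, c in enumerate(poly_mul(pi, fi)):
            if d < len(acc):
                acc[d] += c
    assert acc[0] == 0 and acc[1] == 0

# 0-Popov: the 0-leading matrix is unit lower triangular
rdeg = [max(poly_deg(pij) for pij in row) for row in P]
lm = [[coeff(pij, d) for pij in row] for row, d in zip(P, rdeg)]
assert lm == [[1, 0], [-1, 1]]

# Corollary 2.4: deg(det(P)) <= deg(M); here det(P) = x (the off-diagonal
# product poly_mul(P[0][1], P[1][0]) is zero)
det = poly_mul(P[0][0], P[1][1])
assert poly_deg(det) <= poly_deg(M)
```

Computing such a basis fast, rather than merely verifying it, is exactly what the algorithm of [26] recalled below provides.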
We first focus on Problem 1 when n = 1; this is one of the two base cases of the recursion in Algorithm 5 (Step 2). In this case, the input matrix M is a nonzero polynomial M ∈ K[x]. In other words, the input module is the ideal (M) of K[x], and we are looking for the s-Popov basis for the set of relations between m elements of K[x]/(M). A fast algorithm for this task was given in [26, Sec. 2.2]; precisely, the following result is achieved by running [26, Alg. 2] on input M, F, s, D.

Proposition 4.1. Assuming n = 1 and deg(F) < D = deg(M), there is an algorithm which solves Problem 1 using Õ(m^{ω−1} D) operations in K.

When the s-minimal degree is known

Now, we consider Problem 1 with an additional input: the s-minimal degree of R(M, F), which is the column degree of its s-Popov basis. This is motivated by a technique from [19], used in Algorithm 5 to control the degrees of all the bases computed in the process. Namely, we find this s-minimal degree recursively, and then we compute the s-Popov relation basis using this knowledge.

The same question was tackled in [18, Sec. 3] and [26, Sec. 2.1] for a diagonal matrix M. Here, we extend this to the case of a column reduced M, relying in particular on the fast computation of Rem(E F, M) designed in Section 3.2. We first extend [26, Lem. 2.1] to this more general setting (Lemma 4.2), and then we give the slightly modified version of [26, Alg. 1] (Algorithm 4).

Lemma 4.2. Let M ∈ K[x]^{n×n} be column reduced, let F ∈ K[x]^{m×n} be such that cdeg(F) < cdeg(M), and let s ∈ Z^m. Furthermore, let P ∈ K[x]^{m×m}, and let w ∈ Z^n be such that max(w) ≤ min(s). Then, P is the s-Popov relation basis for R(M, F) if and only if [P Q] is the u-Popov kernel basis of [Fᵀ Mᵀ]ᵀ for some Q ∈ K[x]^{m×n} and u = (s, w) ∈ Z^{m+n}. In this case, deg(Q) < deg(P) and [P Q] has u-pivot index (1, 2, . . . , m).

Proof. Let N = [Fᵀ Mᵀ]ᵀ. It is easily verified that P is a relation basis for R(M, F) if and only if there is some Q ∈ K[x]^{m×n} such that [P Q] is a kernel basis of N.

Then, for any matrix [P Q] ∈ K[x]^{m×(m+n)} in the kernel of N, we have P F = −Q M, and therefore Corollary 3.2 shows that rdeg(Q) < rdeg(P); since max(w) ≤ min(s), this implies rdeg_w(Q) < rdeg_s(P). Thus, we have lm_u([P Q]) = [lm_s(P) 0], and therefore P is in s-Popov form if and only if [P Q] is in u-Popov form with u-pivot index (1, . . . , m). □

Algorithm 4:
KnownDegreeRelations

Input:
• M ∈ K[x]^{n×n} column reduced,
• F ∈ K[x]^{m×n} such that cdeg(F) < cdeg(M),
• s ∈ Z^m,
• δ = (δ_1, . . . , δ_m) the s-minimal degree of R(M, F).
Output: the s-Popov relation basis for R(M, F).
1. /* define partial linearization parameters */
δ̄ ← ⌈(δ_1 + · · · + δ_m)/m⌉, α_i ← max(1, ⌈δ_i/δ̄⌉) for 1 ≤ i ≤ m,
m̄ ← α_1 + · · · + α_m, δ′ ← tuple as in Eq. (2)
2. /* for E as in Eq. (1), compute F̄ = Rem(E F, M) */
F̄ ← follow Step 2 of Algorithm 3 (Residual)
3. /* compute the kernel basis */
u ← (−δ′_1, −δ′_2, . . . , −δ′_{m̄}, −δ̄, . . . , −δ̄) ∈ Z^{m̄+n}
τ ← (cdeg(M_{∗,j}) + δ̄ + 1)_{1 ≤ j ≤ n}
P̃ ← u-Popov approximant basis for [F̄ᵀ Mᵀ]ᵀ and orders τ
4. /* retrieve the relation basis */
P̄ ← the principal m̄ × m̄ submatrix of P̃
Return the submatrix of P̄ E formed by the rows at indices α_1 + · · · + α_i, for 1 ≤ i ≤ m

Proposition 4.3.
Algorithm 4 is correct and, assuming that m and n are in O(D), where D = |cdeg(M)|, it uses Õ(m^{ω−1} D + n^ω D/m) operations in K.

Proof. The correctness follows from the material in [26, Sec. 2.1] and [19, Sec. 4]. Concerning the cost bound, we first note that we have δ_1 + · · · + δ_m ≤ D according to Corollary 2.4. Thus, the cost analysis in Proposition 3.6 shows that Step 2 uses Õ((m n^{ω−2} + n^{ω−1}) D) operations. [19, Thm. 1.4] states that the approximant basis computation at Step 3 uses Õ((m + n)^{ω−1} (1 + n/m) D) operations, since the row dimension of the input matrix is m̄ + n ≤ 2m + n and the sum of the orders is |τ| = |cdeg(M)| + n(δ̄ + 1) ∈ O((1 + n/m) D). □

Here, we detail how previous work can be used to handle a base case of the recursion in Algorithm 5 (Step 1): when the vector space dimension deg(det(M)) of the input module is small compared to the number m of input elements. Then, we rely on an interpretation of Problem 1 as a question of dense linear algebra over K, which is solved efficiently by [20, Alg. 9]. This yields the following result.

Proposition 4.4. Assuming that M is in shifted Popov form and that cdeg(F) < cdeg(M), there is an algorithm which solves Problem 1 using Õ(D^ω ⌈m/D⌉) operations in K, where D = deg(det(M)).

This cost bound is Õ(D^{ω−1} m) ⊆ Õ(m^{ω−1} D) when D ∈ O(m). To see why relying on fast linear algebra is sufficient to obtain a fast algorithm when D ∈ O(m), we note that this implies that the average column degree of the s-Popov relation basis P is
|cdeg(P)|/m = deg(det(P))/m ≤ D/m ∈ O(1).
For example, if D ≤ m, most entries in this basis have degree 0: we are essentially dealing with matrices over K.
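To make this linear-algebra viewpoint concrete, the constructions of the coefficient vector e and of the multiplication matrix X described below can be sketched in pure Python over K = Z/pZ. This is our own illustrative code, not the paper's implementation; it assumes M is given as diag(x^{d_1}, . . . , x^{d_n}) − A via the column degrees d and the coefficient lists of the entries of A.

```python
# Sketch of the multiplication matrix X in K^(DxD) and the coefficient
# vector e in K^(1xD) of Section 4.3, over K = Z/pZ. Here M is assumed
# given as diag(x^d[0], ..., x^d[n-1]) - A, where A[i][j] is the
# coefficient list (low degree first, length d[j]) of the entry a_ij.
# All names are ours, for illustration only.

def build_X(d, A, p):
    """Multiplication matrix of x on K[x]^n / M, in the monomial basis
    (x^k e_i), ordered by blocks i = 0..n-1 and, inside each block,
    k = 0..d[i]-1."""
    D = sum(d)
    off = [sum(d[:i]) for i in range(len(d))]  # start of each block
    X = [[0] * D for _ in range(D)]
    for i in range(len(d)):
        for k in range(d[i]):
            if k + 1 < d[i]:
                # x * (x^k e_i) = x^(k+1) e_i is again a basis element
                X[off[i] + k][off[i] + k + 1] = 1
            else:
                # x^d[i] e_i is congruent to row i of A modulo M
                X[off[i] + k] = [c % p for j in range(len(d)) for c in A[i][j]]
    return X

def vec_of(f, d, p):
    """Coefficient vector e in K^(1xD) of f in K[x]^(1xn) with
    cdeg(f) < d; f[j] is the coefficient list of the jth entry."""
    return [(f[j][k] if k < len(f[j]) else 0) % p
            for j in range(len(d)) for k in range(d[j])]

def vec_mat(e, X, p):
    """Row vector times matrix over Z/pZ; e*X represents x*f mod M."""
    return [sum(e[r] * X[r][c] for r in range(len(e))) % p
            for c in range(len(X[0]))]
```

For instance, with n = 1, d = (2), and M = x² − 5x − 3 over F₇ (so A = (3 + 5x)), build_X returns the companion-type matrix [[0, 1], [3, 5]], and repeated applications of vec_mat compute the vectors representing x^k mod M.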
On the other hand, when m ∈ O(D), this approach based on linear algebra uses Õ(D^ω) operations, which largely exceeds our target cost.

We now describe how to translate our problem into the K-linear algebra framework of [20]. Let M denote the row space of M; we assume that M has no identity column. In order to compute in the quotient K[x]^n/M, which has finite dimension D, it is customary to make use of the multiplication matrix of x with respect to a given monomial basis. Here, since the basis M of M is in shifted Popov form with column degree (d_1, . . . , d_n) ∈ Z^n_{>0}, Lemma 2.3 suggests to use the monomial basis
{(x^i, 0, . . . , 0), 0 ≤ i < d_1} ∪ · · · ∪ {(0, . . . , 0, x^i), 0 ≤ i < d_n}.

Above, we have represented an element of K[x]^n/M by a polynomial vector f ∈ K[x]^{1×n} such that cdeg(f) < (d_1, . . . , d_n). In the linear algebra viewpoint, we rather represent it by a constant vector e ∈ K^{1×D}, which is formed by concatenating the coefficient vectors of the entries of f. Applying this to each row of the input matrix F yields a constant matrix E ∈ K^{m×D}, which is another representation of the same m elements of the quotient.

Besides, the multiplication matrix X ∈ K^{D×D} is the matrix such that eX ∈ K^{1×D} corresponds to the remainder in the division of x f by M. Since the basis M is in shifted Popov form, the computation of X is straightforward. Indeed, writing M = diag(x^{d_1}, . . . , x^{d_n}) − A, where A ∈ K[x]^{n×n} is such that cdeg(A) < (d_1, . . . , d_n), then
• the row d_1 + · · · + d_{i−1} + j of X is the unit vector with 1 at index d_1 + · · · + d_{i−1} + j + 1, for 1 ≤ j < d_i and 1 ≤ i ≤ n,
• the row d_1 + · · · + d_i of X is the concatenation of the coefficient vectors of the row i of A, for 1 ≤ i ≤ n.
That is, writing A = [a_{ij}]_{1≤i,j≤n} and denoting by {a^{(k)}_{ij}, 0 ≤ k < d_j} the coefficients of a_{ij}, the matrix X ∈ K^{D×D} consists of n row blocks, where the ith block has d_i rows: its first d_i − 1 rows form the shifted identity [0 I_{d_i−1}] placed in the columns of the ith block, and its last row is
( a^{(0)}_{i1} a^{(1)}_{i1} · · · a^{(d_1−1)}_{i1} | · · · | a^{(0)}_{in} a^{(1)}_{in} · · · a^{(d_n−1)}_{in} ).

In this section, we give a fast algorithm for solving Problem 1 when M is in Hermite form; this matrix is denoted by H in what follows. The cost bound is given under the assumption that H has no identity column; how to reduce to this case by discarding columns of H and F was discussed in Corollary 2.6. We recall that Steps 1, 2, and 3.i have been discussed in Section 4.

Proposition 5.1. Algorithm 5 is correct and, assuming that the entries of cdeg(H) are positive, it uses Õ(m^{ω−1} D + n^ω D/m) operations in K, where D = |cdeg(H)| = deg(det(H)).

Proof. Following the recursion in the algorithm, our proof is by induction on n, with two base cases (Steps 1 and 2). The correctness and the cost bound for Step 1 follow from the discussion in Section 4.3, as summarized in Proposition 4.4. From Section 4.1, Step 2 correctly computes the s-Popov relation basis and uses Õ(m^{ω−1} D) operations in K.

Now, we focus on the correctness of Step 3, assuming that the two recursive calls at Steps 3.d and 3.g correctly compute the shifted Popov relation bases. Since KnownDegreeRelations is correct, it is enough to prove that the s-minimal degree of R(H, F) is δ_1 + δ_2; for this, we will show that P_2 P_1 is a relation basis for R(H, F) whose s-Popov form has column degree δ_1 + δ_2.

From Theorem 2.9, P_2 P_1 is a relation basis for R(H, F). Furthermore, the fact that the s-Popov form of P_2 P_1 has column degree δ_1 + δ_2 follows from [19, Sec. 3], since P_1 is in s-Popov form and P_2 is in t-Popov form, where t = s + δ_1 = rdeg_s(P_1).

Concerning the cost of Step 3, we remark that m < D, that n ≤ D is ensured by cdeg(H) > 0, and that |δ_1 + δ_2| = deg(det(P_2 P_1)) ≤ D according to Corollary 2.4. Furthermore, there are two recursive calls with dimension about n/2, and with H_1 and H_2 that are in Hermite form and have determinant degrees D_1 = deg(det(H_1)) and D_2 = deg(det(H_2)) such that D = D_1 + D_2. Besides, the entries of both cdeg(H_1) and cdeg(H_2) are all positive.

In particular, the assumptions on the parameters in Propositions 3.6 and 4.3, concerning the computation of the residual at Step 3.f and of the relation basis when the degrees are known at Step 3.i, are satisfied. Thus, these steps use Õ((m^{ω−1} + n^{ω−1}) D) and Õ(m^{ω−1} D + n^ω D/m) operations, respectively. The announced cost bound follows. □

Algorithm 5:
RelationsModHermite

Input:
• matrix H ∈ K[x]^{n×n} in Hermite form,
• matrix F ∈ K[x]^{m×n} such that cdeg(F) < cdeg(H),
• shift s ∈ Z^m.
Output: the s-Popov relation basis for R(H, F).
1. If D = |cdeg(H)| ≤ m:
a. build X ∈ K^{D×D} from H as in Section 4.3
b. build E ∈ K^{m×D} from F as in Section 4.3
c. P ← [20, Alg. 9] on input (E, X, s, ⌈log(D)⌉)
d. Return P
2. Else if n = 1 then:
a. P ← [26, Alg. 2] on input (H, F, s, D)
b. Return P
3. Else:
a. n_1 ← ⌊n/2⌋; n_2 ← ⌈n/2⌉
b. H_1 and H_2 ← the n_1 × n_1 leading and n_2 × n_2 trailing principal submatrices of H
c. F_1 ← first n_1 columns of F
d. P_1 ← RelationsModHermite(H_1, F_1, s)
e. δ_1 ← diagonal degrees of P_1
f. G ← last n_2 columns of Residual(H, P_1, F)
g. P_2 ← RelationsModHermite(H_2, G, s + δ_1)
h. δ_2 ← diagonal degrees of P_2
i. Return KnownDegreeRelations(H, F, s, δ_1 + δ_2)

ACKNOWLEDGMENTS
The authors thank Claude-Pierre Jeannerod for interesting discussions, Arne Storjohann for his helpful comments on high-order lifting, and the reviewers, whose remarks helped to prepare the final version of this paper. The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement number 609405 (COFUNDPostdocDTU). Vu Thi Xuan acknowledges financial support provided by the scholarship Explora Doc from Région Rhône-Alpes, France, and by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon, within the program Investissements d'Avenir (ANR-11-IDEX-0007) operated by the French National Research Agency.
REFERENCES
[1] G. A. Baker and P. R. Graves-Morris. 1996. Padé Approximants. Cambridge University Press.
[2] B. Beckermann. 1992. A reliable method for computing M-Padé approximants on arbitrary staircases. J. Comput. Appl. Math. 40, 1 (1992), 19–42.
[3] B. Beckermann and G. Labahn. 1994. A Uniform Approach for the Fast Computation of Matrix-Type Padé Approximants. SIAM J. Matrix Anal. Appl. 15, 3 (July 1994), 804–823.
[4] B. Beckermann and G. Labahn. 1997. Recursiveness in matrix rational interpolation problems. J. Comput. Appl. Math. 77, 1–2 (1997), 5–34.
[5] B. Beckermann, G. Labahn, and G. Villard. 1999. Shifted Normal Forms of Polynomial Matrices. In ISSAC'99. ACM, 189–196.
[6] B. Codenotti and G. Lotti. 1989. A fast algorithm for the division of two polynomial matrices. IEEE Trans. Automat. Control 34, 4 (Apr 1989), 446–448.
[7] D. Coppersmith and S. Winograd. 1990. Matrix multiplication via arithmetic progressions. J. Symbolic Comput. 9, 3 (1990), 251–280.
[8] D. S. Dummit and R. M. Foote. 2004. Abstract Algebra. John Wiley & Sons.
[9] D. Eisenbud. 1995. Commutative Algebra: with a View Toward Algebraic Geometry. Springer-Verlag, New York.
[10] D. Eisenbud. 2005. The Geometry of Syzygies. Springer-Verlag, New York.
[11] P. Favati and G. Lotti. 1991. Parallel algorithms for matrix polynomial division. Computers and Mathematics with Applications 22, 7 (1991), 37–42.
[12] G. D. Forney, Jr. 1975. Minimal Bases of Rational Vector Spaces, with Applications to Multivariable Linear Systems. SIAM Journal on Control 13, 3 (1975), 493–520.
[13] F. R. Gantmacher. 1959. The Theory of Matrices. Chelsea.
[14] J. von zur Gathen and J. Gerhard. 2013. Modern Computer Algebra (third edition). Cambridge University Press.
[15] P. Giorgi, C.-P. Jeannerod, and G. Villard. 2003. On the complexity of polynomial matrix computations. In ISSAC'03. ACM, 135–142.
[16] P. Giorgi and R. Lebreton. 2014. Online Order Basis Algorithm and Its Impact on the Block Wiedemann Algorithm. In ISSAC'14. ACM, 202–209.
[17] S. Gupta, S. Sarkar, A. Storjohann, and J. Valeriote. 2012. Triangular x-basis decompositions and derandomization of linear algebra algorithms over K[x]. J. Symbolic Comput. 47, 4 (2012), 422–453.
[18] S. Gupta and A. Storjohann. 2011. Computing Hermite Forms of Polynomial Matrices. In ISSAC'11. ACM, 155–162.
[19] C.-P. Jeannerod, V. Neiger, É. Schost, and G. Villard. 2016. Fast computation of minimal interpolation bases in Popov form for arbitrary shifts. In ISSAC'16. ACM, 295–302.
[20] C.-P. Jeannerod, V. Neiger, É. Schost, and G. Villard. 2017. Computing minimal interpolation bases. J. Symbolic Comput. 83 (2017), 272–314.
[21] T. Kailath. 1980. Linear Systems. Prentice-Hall.
[22] G. Labahn, V. Neiger, and W. Zhou. 2017. Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix. J. Complexity (2017). In press.
[23] F. Le Gall. 2014. Powers of Tensors and Fast Matrix Multiplication. In ISSAC'14. ACM, 296–303.
[24] J. Middeke. 2011. A computational view on normal forms of matrices of Ore polynomials. Ph.D. Dissertation.
[25] V. Neiger. 2016. Bases of relations in one or several variables: fast algorithms and applications. Ph.D. Dissertation. École Normale Supérieure de Lyon. https://tel.archives-ouvertes.fr/tel-01431413
[26] V. Neiger. 2016. Fast computation of shifted Popov forms of polynomial matrices via systems of modular polynomial equations. In ISSAC'16. ACM, 365–372.
[27] V. M. Popov. 1972. Invariant Description of Linear, Time-Invariant Controllable Systems. SIAM Journal on Control 10, 2 (1972), 252–264.
[28] J. Rosenkilde and A. Storjohann. 2016. Algorithms for Simultaneous Padé Approximations. In ISSAC'16. ACM, 405–412.
[29] A. Storjohann. 2003. High-order lifting and integrality certification. J. Symbolic Comput. 36, 3–4 (2003), 613–648.
[30] A. Storjohann. 2006. Notes on computing minimal approximant bases. In Challenges in Symbolic Computation Software (Dagstuhl Seminar Proceedings). http://drops.dagstuhl.de/opus/volltexte/2006/776
[31] M. Van Barel and A. Bultheel. 1991. The computation of non-perfect Padé-Hermite approximants. Numer. Algorithms 1, 3 (1991), 285–304.
[32] M. Van Barel and A. Bultheel. 1992. A general module theoretic framework for vector M-Padé and matrix rational interpolation. Numer. Algorithms.
[33] IEEE Trans. Automat. Control 31, 2 (Feb 1986), 165–166.
[34] W. Wolovich. 1984. A division algorithm for polynomial matrices. IEEE Trans. Automat. Control 29, 7 (Jul 1984), 656–658.
[35] S.-Y. Zhang and C.-T. Chen. 1983. An algorithm for the division of two polynomial matrices. IEEE Trans. Automat. Control 28, 2 (Feb 1983), 238–240.
[36] W. Zhou. 2012. Fast Order Basis and Kernel Basis Computation and Related Problems. Ph.D. Dissertation. University of Waterloo.
[37] W. Zhou and G. Labahn. 2012. Efficient Algorithms for Order Basis Computation.