Computing Popov and Hermite forms of rectangular polynomial matrices
Vincent Neiger ∗ Univ. Limoges, CNRS, XLIM, UMR 7252
F-87000 Limoges, [email protected]
Johan Rosenkilde
Technical University of Denmark, Kgs. Lyngby, [email protected]
Grigory Solomatov
Technical University of Denmark, Kgs. Lyngby, [email protected]
ABSTRACT
We consider the computation of two normal forms for matrices over the univariate polynomials: the Popov form and the Hermite form. For matrices which are square and nonsingular, deterministic algorithms with satisfactory cost bounds are known. Here, we present deterministic, fast algorithms for rectangular input matrices. The obtained cost bound for the Popov form matches the previous best known randomized algorithm, while the cost bound for the Hermite form improves on the previous best known ones by a factor which is at least the largest dimension of the input matrix.
KEYWORDS
Polynomial matrix; Reduced form; Popov form; Hermite form.
In this paper we deal with (univariate) polynomial matrices, i.e. matrices in K[x]^{m×n} where K is a field admitting exact computation, typically a finite field. Given such an input matrix whose row space is the real object of interest, one may ask for a "better" basis for the row space, that is, another matrix which has the same row space but also has additional useful properties. Two important normal forms for such bases are the Popov form [21] and the Hermite form [11], whose definitions are recalled in this paper. The Popov form has rows which have the minimal possible degrees, while the Hermite form is in echelon form. A classical generalisation is the shifted Popov form of a matrix [1], where one incorporates degree weights on the columns: with zero shift this is the Popov form, while under some extremal shift this becomes the Hermite form [2]. We are interested in the efficient computation of these forms, which has been studied extensively along with the computation of the related but non-unique reduced forms [6, 13] and weak Popov forms [16].

Hereafter, complexity estimates count basic arithmetic operations in K on an algebraic RAM, and asymptotic cost bounds omit

∗ Part of the research leading to this work was conducted while Vincent Neiger was with Technical University of Denmark, Kgs. Lyngby, Denmark, with funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement number 609405 (COFUNDPostdocDTU).
ISSAC '18, July 16–19, 2018, New York, NY, USA. © 2018 Copyright held by the owner/author(s). Publication rights licensed to the Association for Computing Machinery. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in
ISSAC '18: 2018 ACM International Symposium on Symbolic and Algebraic Computation, July 16–19, 2018, New York, NY, USA, https://doi.org/10.1145/3208976.3208988.

factors that are logarithmic in the input parameters, denoted by O˜(·). We let 2 ≤ ω ≤ 3 be such that two matrices in K^{m×m} can be multiplied in O(m^ω) operations. As shown in [5], the multiplication of two polynomials in K[x] of degree at most d can be done in O˜(d) operations, and more generally the multiplication of two polynomial matrices in K[x]^{m×m} of degree at most d uses O˜(m^ω d) operations.

Consider a square, nonsingular M ∈ K[x]^{m×m} of degree d. For the computation of a reduced form of M, the complexity O˜(m^ω d) was first achieved by a Las Vegas algorithm of Giorgi et al. [7]. All the subsequent work mentioned in the next paragraph achieved the same cost bound, which was taken as a target: up to logarithmic factors, it is the same as the cost for multiplying two matrices with dimensions and degree similar to those of M.

The approach of [7] was de-randomized by Gupta et al. [9], while Sarkar and Storjohann [23] showed how to compute the Popov form from a reduced form; combining these results gives a deterministic algorithm for the Popov form. Gupta and Storjohann [8, 10] gave a Las Vegas algorithm for the Hermite form; a Las Vegas method for computing the shifted Popov form for any shift was described in [18]. Then, a deterministic Hermite form algorithm was given by Labahn et al. [14], which was one ingredient in a deterministic algorithm due to Neiger and Vu [19] for the arbitrary shift case.

The Popov form algorithms usually exploit the fact that, by definition, this form has degree at most d = deg(M). While no similarly strong degree bound holds for shifted Popov forms in general (including the Hermite form), these forms still share a remarkable property in the square, nonsingular case: each entry outside the diagonal has degree less than the entry on the diagonal in the same column.
These diagonal entries are called pivots [13]. Furthermore, their degrees sum to deg(det(M)) ≤ md, so that these forms can be represented with O(m²d) field elements, just like M. This is especially helpful in the design of fast algorithms since this provides ways to control the degrees of the manipulated matrices.

These degree constraints exist but become weaker in the case of rectangular shifted Popov forms, say m × n with m < n. Such a normal form does have m columns containing pivots, whose average degree is at most the degree d of the input matrix M. Yet it also contains n − m columns without pivots, which may all have large degree: up to Θ(md) in the case of the Hermite form. As a result, a dense representation of the latter form may require Ω(m²(n−m)d) field elements, a factor of m larger than for M. Take for example some U ∈ K[x]^{m×m} of degree d which is unimodular, meaning that U^{−1} has entries in K[x]. Then, the Hermite form of [U I_m · · · I_m] is [I_m U^{−1} · · · U^{−1}], and the entries of U^{−1} may have degree in Ω(md). However, the Popov form, having minimal degree, has size in O(mnd), just like M. Thus, unlike in the nonsingular case, one could set different target costs for the computation of Popov and Hermite forms, such as O˜(m^{ω−1}nd) for the former and O˜(m^ω nd) for the latter (note that the exponent affects the small dimension).

For a rectangular matrix M ∈ K[x]^{m×n}, Mulders and Storjohann [16] gave an iterative Popov form algorithm which costs O(rmnd), where r is the rank of M. Beckermann et al. [3] obtain the shifted Popov form for any shift by computing a basis of the left kernel of [M^T I_n]^T. This approach also produces a matrix which transforms M into its normal form and whose degree can be in Ω(md): efficient algorithms usually avoid computing this transformation.
To find the sought kernel basis, the fastest known method is to compute a shifted Popov approximant basis of the (m + n) × n matrix above, at an order which depends on the shift. The algorithm of [3] relies on a fraction-free algorithm for the latter computation, and hence lends itself well to cases where K is not finite. In our context, following this approach with the fastest known approximant basis algorithm [12] yields the cost bounds O˜((m+n)^{ω−1}nmd) for the Popov form and O˜((m+n)^{ω−1}n²md) for the Hermite form. For the latter this is the fastest existing algorithm, to the best of our knowledge.

For M with full rank and m ≤ n, Sarkar [22] showed a Las Vegas algorithm for the Popov form achieving the cost O˜(m^{ω−1}nd). This uses random column operations to compress M into an m × m matrix, which is then transformed into a reduced form. Applying the same transformation on M yields a reduced form of M with high probability, and from there the Popov form can be obtained. Lowering this cost further seems difficult, as indicated in the square case by the reduction from polynomial matrix multiplication to Popov form computation described in [23, Thm. 22].

For a matrix M ∈ K[x]^{m×n} which is rank-deficient or has m > n, the computation of a basis of the row space of M was handled by Zhou and Labahn [29] with cost O˜(m^{ω−1}(m+n)d). Their algorithm is deterministic, and the output basis B ∈ K[x]^{r×n} has degree at most d. This may be used as a preliminary step: the normal form of M is also that of B, and the latter has full rank with r ≤ n.

We stress that, from a rectangular matrix M ∈ K[x]^{m×n}, it seems difficult in general to predict which columns of its shifted Popov form will be pivot-free. For this reason, there seems to be no obvious deterministic reduction from the rectangular case to the square case, even when n is only slightly larger than m.
Sarkar's algorithm is a Las Vegas reduction, compressing the matrix to a nonsingular m × m matrix; another Las Vegas reduction consists in completing the matrix to a nonsingular n × n matrix (see Section 3).

In the nonsingular case, exploiting information on the pivots has led to algorithmic improvements for normal form algorithms [10, 12, 14, 23]. Following this, we put our effort into two computational tasks: finding the location of the pivots in the normal form (the pivot support), and using this knowledge to compute this form.

Our first contribution is to show how to efficiently find the pivot support of M. For this we resort to the so-called saturation of M, computed in a form which reveals the pivot support (Section 4.1), making use of an idea from [28]. While this is only efficient for n ∈ O(m), using this method repeatedly on well-chosen submatrices of M with about 2m columns allows us to find the pivot support using O˜(m^{ω−1}nd) operations for any dimensions m ≤ n (Section 4.2).

In our second main contribution, we consider the shifted Popov form of M, for any shift. We show that once its pivot support is known, then this form can be computed efficiently (Section 6 and Proposition 6.1). In particular, combining both contributions yields a fast and deterministic Popov form algorithm.

Theorem 1.1. For a matrix M ∈ K[x]^{m×n} of degree at most d and with m ≤ n, there is a deterministic algorithm which computes the Popov form of M using O˜(m^{ω−1}nd) operations in K.

The second contribution may of course be useful in situations where the pivot support is known for some reason. Yet, there are even general cases where it can be computed efficiently, namely when the shift has very unbalanced entries. This is typically the case of the Hermite form, for which the pivot support coincides with the column rank profile of M. The latter can be efficiently obtained via an algorithm due to Zhou [26, Sec. 11], based on the kernel basis algorithm from [30].
This leads us to the next result.

Theorem 1.2. Let M ∈ K[x]^{m×n} with full rank and m < n. There is a deterministic algorithm which computes the Hermite form of M using O˜(m^{ω−2}nδ) operations in K, where δ is the minimum of the sum of column degrees of M and of the sum of row degrees of M.

Using this quantity δ (see Eq. (6) for a more precise definition), the mentioned cost for the kernel basis approach of [3] becomes O˜((m+n)^{ω−1}nδ). Thus, when n ∈ O(m) the cost in the above theorem already gains a factor n compared to this approach; when n is large compared to m, this factor becomes n(n/m)^{ω−2}.

If M is an m × n matrix and 1 ≤ j ≤ n, we denote by M_{∗,j} the jth column of M. If J ⊆ {1, . . . , n} is a set of column indices, M_{∗,J} is the submatrix of M formed by the columns at the indices in J. We use analogous row-wise notation. Similarly, for a tuple t ∈ Z^n, t_J is the subtuple of t formed by the entries at the indices in J. When adding a constant to an integer tuple, for example t + 1 with t = (t_1, . . . , t_m) ∈ Z^m, we really mean (t_1 + 1, . . . , t_m + 1); when comparing a tuple to a constant, for example t ≤ 1, we mean max(t) ≤ 1. Two tuples of the same length will always be compared entrywise: s ≤ t stands for s_i ≤ t_i for all i. We use the notation amp(t) = max(t) − min(t), and |t| = t_1 + · · · + t_m (note that the latter will mostly be used when t has nonnegative entries). For a given nonnegative integer tuple t = (t_1, . . . , t_m) ∈ Z^m_{≥0}, we denote by x^t the diagonal matrix with diagonal entries x^{t_1}, . . . , x^{t_m}.

For a matrix M ∈ K[x]^{m×n}, its row space is the K[x]-module generated by its rows, that is, {λM, λ ∈ K[x]^{1×m}}. Then, a matrix B ∈ K[x]^{r×n} is a row basis of M if its rows form a basis of the row space of M, in which case r is the rank of M. The left kernel of M is the K[x]-module {p ∈ K[x]^{1×m} | pM = 0}. A matrix K ∈ K[x]^{k×m} is a left kernel basis of M if its rows form a basis of this kernel, in which case k = m − r. Similarly, a right kernel basis of M is a matrix K ∈ K[x]^{n×(n−r)} whose columns form a basis of the right kernel of M.

Given d = (d_1, . . . , d_n) ∈ Z^n_{>0}, the set of approximants for M at order d is the K[x]-module of rank m defined as A_d(M) = {p ∈ K[x]^{1×m} | pM = 0 mod x^d}. The identity pM = 0 mod x^d means that the jth entry of the vector pM ∈ K[x]^{1×n} is divisible by x^{d_j}, for all j.

Two m × n matrices M_1, M_2 have the same row space if and only if they are unimodularly equivalent, that is, there is a unimodular matrix U ∈ K[x]^{m×m} such that UM_1 = M_2. For M_1 ∈ K[x]^{r×n} with r ≤ m, M_1 and M_2 have the same row space exactly when M_1 padded with m − r zero rows is unimodularly equivalent to M_2.

For a matrix M ∈ K[x]^{m×n}, we denote by rdeg(M) the tuple of the degrees of its rows, that is, (deg(M_{1,∗}), . . . , deg(M_{m,∗})). If M has no zero row, the (row-wise) leading matrix of M, denoted by lm(M), is the matrix in K^{m×n} whose entry (i, j) is equal to the coefficient of degree deg(M_{i,∗}) of the entry (i, j) of M.

For a matrix R ∈ K[x]^{m×n} with no zero row and m ≤ n, we say that R is (row) reduced if lm(R) has full rank. Thus, here a reduced matrix must have full rank (and no zero row), as in [6]. For more details about reduced matrices, we refer the reader to [3, 6, 13, 25]. In particular, we have the following characterizing properties:

• Predictable degree property [6] [13, Thm. 6.3-13]: we have deg(λR) = max{deg(λ_i) + rdeg(R_{i,∗}), 1 ≤ i ≤ m} for any vector λ = [λ_i]_i ∈ K[x]^{1×m}.

• Minimality of the sum of row degrees [6]: for any nonsingular matrix U ∈ K[x]^{m×m}, we have |rdeg(UR)| ≥ |rdeg(R)|.

• Minimality of the tuple of row degrees [26, Sec. 2.7]: for any nonsingular matrix U ∈ K[x]^{m×m}, we have s ≤ t where the tuples s and t are the row degrees of R and of UR sorted in nondecreasing order, respectively.

From the last item, it follows that two unimodularly equivalent reduced matrices have the same row degree up to permutation. For a matrix M ∈ K[x]^{m×n}, we call reduced form of M any reduced matrix R ∈ K[x]^{r×n} which is a row basis of M. The third item above shows that deg(R) ≤ deg(M).

For a nonzero vector p = [p_j]_j ∈ K[x]^{1×n}, the pivot index of p is the largest index j such that deg(p_j) = deg(p) [13, Sec. 6.7.2]. In this case we call p_j the pivot entry of p. For the zero vector, we define its degree to be −∞ and its pivot index to be 0. Further, the pivot index of a matrix M ∈ K[x]^{m×n} is the tuple (j_1, . . . , j_m) ∈ Z^m_{≥0} such that j_i is the pivot index of M_{i,∗}.
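To make these row-wise notions concrete, here is a small sketch under our own conventions (not from the paper): a polynomial is a coefficient list in increasing degree, a matrix is a list of rows, and pivot indices are 1-based as in the text.

```python
def pdeg(p):
    """Degree of a coefficient list; -1 encodes the zero polynomial (deg -inf)."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def row_degree(row):
    """deg of a row vector: the maximum of its entry degrees."""
    return max(pdeg(p) for p in row)

def pivot_index(row):
    """Largest index j (1-based) with deg(p_j) = deg(row); 0 for a zero row."""
    d = row_degree(row)
    if d == -1:
        return 0
    return max(j + 1 for j, p in enumerate(row) if pdeg(p) == d)

def leading_matrix(M):
    """Row-wise leading matrix lm(M): coefficient of degree deg(M_{i,*}) per entry."""
    out = []
    for row in M:
        d = row_degree(row)
        out.append([p[d] if 0 <= d < len(p) else 0 for p in row])
    return out
```

For instance, the 2 × 2 matrix with rows (x², 2) and (3, x) has pivot index (1, 2) and identity leading matrix, hence it is reduced (in fact in ordered weak Popov form).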
Note that we will only use the word "pivot" in this row-wise sense.

A matrix P ∈ K[x]^{m×n} is in weak Popov form if it has no zero row and the entries of the pivot index of P are all distinct [16]; a weak Popov form is further called ordered if its pivot index is in (strictly) increasing order. A weak Popov matrix is also reduced. The (ordered) weak Popov form is not canonical: a given row space may have many (ordered) weak Popov forms. The Popov form adds a normalization property, yielding a canonical form; we use the definition from [2, Def. 3.3]:

A matrix P ∈ K[x]^{m×n} is in Popov form if it is in ordered weak Popov form, the corresponding pivot entries are monic, and in each column of P which contains a (row-wise) pivot the other entries have degree less than this pivot entry. For a matrix M ∈ K[x]^{m×n} of rank r, there exists a unique P ∈ K[x]^{r×n} which is in Popov form and has the same row space as M [3, Thm. 2.7]. We call P the Popov form of M. For a more detailed treatment of Popov forms, see [2, 3, 13].

For example, consider the unimodularly equivalent matrices

[ x² + 3   x + 2 ]        [ x²   2 ]
[   3        x   ]  and   [ 3    x ]

defined over F_7[x]; the first one is in weak Popov form and the second one is its Popov form. Note that any deterministic rule for ordering the rows would lead to a canonical form; we use that of [2, 3], while that of [13, 16] sorts the rows by degrees and would consider the second matrix not to be normalized.

Going back to the general case, we denote by π(M) ∈ Z^r_{>0} the pivot index of the Popov form of M, called the pivot support of M. In most cases, π(M) differs from the pivot index of M. We have the following important properties:

• The pivot index of M is equal to the pivot support π(M) if and only if M is in ordered weak Popov form.
• For any λ ∈ K[x]^{1×m} such that λM ≠ 0, the pivot index of λM appears in the pivot support π(M); in particular each nonzero entry of the pivot index of M is in π(M).

For the first item, we refer to [3, Sec. 2] (in this reference, the set formed by the entries of the pivot support is called "pivot set" and ordered weak Popov forms are called quasi-Popov forms). The second item is a simple extension of the predictable degree property (see for example [17, Lem. 1.17] for a proof).

We will rely on the following result from [30, Cor. 4.6 and Thm. 3.4] about the computation of kernel bases in reduced form. Note that a matrix is column reduced if its transpose is reduced.

Theorem 2.1 ([30]).
There is an algorithm
MinimalKernelBasis which, given a matrix M ∈ K[x]^{m×n} with m ≤ n and rank r, returns a right kernel basis K ∈ K[x]^{n×(n−r)} of M in column reduced form using O˜(n^ω ⌈m deg(M)/n⌉) ⊆ O˜(n^ω deg(M)) operations in K. Furthermore, |cdeg(K)| ≤ r deg(M).

For the computation of normal forms of square, nonsingular matrices, we use the next result (s-Popov forms will be introduced in Section 5; Popov forms as above correspond to s = 0).

Theorem 2.2 ([19]). There is an algorithm
NonsingularPopov which, given a nonsingular matrix M ∈ K[x]^{m×m} and a shift s ∈ Z^m, returns the s-Popov form of M using O˜(m^ω ⌈|rdeg(M)|/m⌉) ⊆ O˜(m^ω deg(M)) operations in K.

This is [19, Thm. 1.3] with a minor modification: we have replaced the so-called generic determinant bound by a larger quantity (the sum of row degrees), since this is sufficient for our needs here.
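The cost parameters in Theorems 2.1 and 2.2 are driven by row and column degrees rather than by deg(M) alone. A tiny helper (our own representation: entries as coefficient lists) computing these quantities and the average-degree bound ⌈|rdeg(M)|/m⌉:

```python
def pdeg(p):
    """Degree of a coefficient list; -1 for the zero polynomial."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def rdeg(M):
    """Tuple of row degrees rdeg(M)."""
    return [max(pdeg(p) for p in row) for row in M]

def cdeg(M):
    """Tuple of column degrees cdeg(M)."""
    return [max(pdeg(row[j]) for row in M) for j in range(len(M[0]))]

def avg_rdeg_bound(M):
    """ceil(|rdeg(M)| / m), the quantity appearing in Theorem 2.2's cost."""
    s = sum(max(d, 0) for d in rdeg(M))
    return -(-s // len(M))  # ceiling division
```

For a matrix with rows of degrees 2 and 3, for instance, the bound is ⌈5/2⌉ = 3, which may be much smaller than m · deg(M) when the row degrees are unbalanced.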
We now present a new Las Vegas algorithm for computing the (non-shifted) Popov form P of a rectangular matrix M ∈ K[x]^{m×n} with full rank and m < n, relying on algorithms for the case of square, nonsingular matrices. In the case n ∈ O(m), this results in a cost bounded by O˜(m^ω deg(M)), which has already been obtained by the Las Vegas algorithm of Sarkar [22]; however, the advantage of our approach is that it becomes asymptotically faster if the average row degree of M is significantly smaller than deg(M).

The idea is to find a matrix C ∈ K[x]^{(n−m)×n} such that the Popov form of [M^T C^T]^T contains P as an identifiable subset of its rows. We will show that if C is drawn randomly of sufficiently high degree, then this is true with high probability.

Definition 3.1.
Let M ∈ K[x]^{m×n} have full rank with m < n and let P ∈ K[x]^{m×n} be the Popov form of M. A completion of M is any matrix C ∈ K[x]^{(n−m)×n} such that

min(rdeg(C)) > deg(P)   and   [P^T C^T]^T is row reduced.

The next lemma shows that: 1) if C is a completion, then P will appear as a submatrix of the Popov form of [M^T C^T]^T; and 2) we can easily check from that Popov form whether C is a completion or not. The latter is essential for a Las Vegas algorithm.

Lemma 3.2. Let M ∈ K[x]^{m×n} have full rank with m < n and with Popov form P, and let C ∈ K[x]^{(n−m)×n} be such that [M^T C^T]^T has full rank and min(rdeg(C)) > deg(P). Then, C is a completion of M if and only if rdeg(ˆP) contains a permutation of rdeg(C), where ˆP is the Popov form of [M^T C^T]^T. In this case, P is the submatrix of ˆP formed by its rows of degree less than min(rdeg(C)).

Proof. First, we assume that C is a completion of M. Then [P^T C^T]^T is reduced, and therefore it has the same row degree as its Popov form ˆP up to permutation. Hence, in particular, rdeg(ˆP) contains a permutation of rdeg(C).

Now, we assume that rdeg(ˆP) contains a permutation of rdeg(C), and our goal is to show that [P^T C^T]^T is reduced and ˆP contains P as a submatrix. Let ˆP_1 be the submatrix of ˆP formed by its rows of degree less than min(rdeg(C)), and let ˆP_2 be the submatrix of the remaining rows. By assumption, ˆP_2 has at least n − m rows and ˆP_1 has at most m rows. Since ˆP is also the Popov form of [P^T C^T]^T, there is a unimodular transformation

[ U_{11}  U_{12} ] [ ˆP_1 ]     [ P ]
[ U_{21}  U_{22} ] [ ˆP_2 ]  =  [ C ] .   (1)

By the predictable degree property we obtain U_{12} = 0; thus, since P has full rank m, ˆP_1 has exactly m rows, and U_{11} is unimodular. Therefore ˆP_1 = P since both matrices are in Popov form. As a result, rdeg(ˆP) is a permutation of (rdeg(P), rdeg(C)). □

Lemma 3.3.
Let M ∈ K[x]^{m×n} have full rank with m < n. Let S ⊆ K be finite of cardinality q and let L ∈ K^{(n−m)×n} have entries chosen independently and uniformly at random from S. Then x^{deg(M)+1} L is a completion of M with probability at least ∏_{i=1}^{n−m} (1 − q^{−i}) if K is finite and S = K, and at least 1 − (n−m)/q otherwise.

Proof. Let d = deg(M). We first note that for C = x^{d+1} L to be a completion of M, it is enough that the matrix

lm([P^T C^T]^T) = [lm(P)^T lm(C)^T]^T = [lm(P)^T L^T]^T ∈ K^{n×n}

be invertible. Indeed, this implies first that [P^T C^T]^T is reduced; and second, that C has no zero row, hence rdeg(C) = (d+1, . . . , d+1) and min(rdeg(C)) = d + 1 > deg(M) ≥ deg(P).

In the case of a finite field K with q elements, the probability that the above matrix is invertible is ∏_{i=1}^{n−m} (1 − q^{−i}). If K is infinite or of cardinality at least q, the Schwartz-Zippel lemma implies that the probability that the above matrix is singular is at most (n − m)/q. □

Thus, if K is infinite, it is sufficient to take S of cardinality at least 2(n − m) to ensure that x^{d+1} L is a completion with probability at least 1/2. On the other hand, if K is finite of cardinality q, we have the following bounds on the probability:

∏_{i=1}^{n−m} (1 − q^{−i})  >  0.28 if q = 2,   > 0.55 if q = 3,   > 0.75 if q > 4.

In Algorithm 1, we first test the nonsingularity of N = [M^T C^T]^T before computing ˆP, since the fastest known Popov form algorithms in the square case do not support singular matrices. Over a field with at least 2n deg(N) elements, this can be done in Las Vegas fashion by evaluating N at a random point α ∈ K and testing the resulting scalar matrix for nonsingularity; this falsely reports singularity only if det(N) is divisible by (x − α). Alternatively, a deterministic check is as follows. First, apply the partial linearization of [9, Sec. 6], yielding a matrix N' ∈ K[x]^{n'×n'} such that N is nonsingular if and only if N' is nonsingular; n' ∈ O(n); and deg(N') ≤ ⌈|rdeg(N)|/n⌉. This does not involve arithmetic operations. Since N' is nonsingular if and only if its kernel is trivial, it then remains to compute a kernel basis via the algorithm in [27], using O˜((n')^ω deg(N')) ⊆ O˜(n^ω ⌈|rdeg(N)|/n⌉) operations in K. Instead of considering the kernel, one could also test the nonsingularity of N' using algorithms from [9], as explained in [22, p. 24].

Algorithm 1:
RandomCompletionPopov
Input: matrix M ∈ K[x]^{m×n} with full rank and m < n; subset S ⊆ K of cardinality q.
Output: the Popov form of M, or failure.
1. L ← matrix in K^{(n−m)×n} with entries chosen uniformly and independently at random from S
2. C ← x^{deg(M)+1} L
3. If [M^T C^T]^T is singular then return failure
4. ˆP ← NonsingularPopov([M^T C^T]^T)
5. If rdeg(ˆP) does not contain a permutation of rdeg(C) then return failure
6. Return the submatrix of ˆP formed by its rows of degree less than min(rdeg(C))
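The randomized singularity test used in the third step can be sketched as follows. This is our own toy rendition, not the paper's routine: entries are coefficient lists over F_q with q an assumed prime (so inverses come from Fermat's little theorem), and a False answer may be a false report of singularity, exactly as discussed above.

```python
import random

def eval_matrix(N, alpha, q):
    """Evaluate a polynomial matrix (entries = coefficient lists) at x = alpha over F_q."""
    return [[sum(c * pow(alpha, i, q) for i, c in enumerate(p)) % q for p in row]
            for row in N]

def det_mod(A, q):
    """Determinant of a scalar matrix over F_q (q prime) by Gaussian elimination."""
    A = [row[:] for row in A]
    n, det = len(A), 1
    for c in range(n):
        piv = next((r for r in range(c, n) if A[r][c] % q), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det = (det * A[c][c]) % q
        inv = pow(A[c][c], q - 2, q)
        for r in range(c + 1, n):
            f = (A[r][c] * inv) % q
            A[r] = [(a - f * b) % q for a, b in zip(A[r], A[c])]
    return det % q

def is_nonsingular_las_vegas(N, q, trials=3):
    """True is certain; False is wrong only if every sampled alpha is a root of det(N)."""
    for _ in range(trials):
        alpha = random.randrange(q)
        if det_mod(eval_matrix(N, alpha, q), q) != 0:
            return True
    return False
```

For instance, the matrix [[1, x], [0, 1]] has determinant 1, so it is reported nonsingular for every sampled α, while a matrix with two equal rows is always reported singular.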
Algorithm 1 is correct and the probability thata failure is reported at Step 3 or Step 5 is as indicated in Lemma 3.3.If
NonsingularPopov is the algorithm of [19], Algorithm 1 uses

O˜( n^ω ⌈ (|rdeg(M)| + (n − m) deg(M)) / n ⌉ ) ⊆ O˜( n^ω deg(M) )

operations in K.

Indeed, from Theorem 2.2, Step 4 uses O˜(n^ω ⌈∆/n⌉) operations, where ∆ = |rdeg([M^T C^T]^T)| = |rdeg(M)| + (n − m)(deg(M) + 1). While other Popov form algorithms could be used, that of [19] allows us to take into account the average row degree of M. Indeed, if |rdeg(M)| ≪ m deg(M) and n − m ≪ n, the cost bound above is asymptotically better than O˜(n^ω deg(M)).

Remark 1:
As we mentioned in Section 2.4, each nonzero entry of the pivot index of M appears in π(M). Therefore, one can let L be zero at all columns where M has a pivot, or at indices one otherwise knows appear in π(M). If M has uneven degrees (e.g. it has the form ˆM x^s for some shift s, see Section 5.1), then this can be particularly worthwhile. In the case where for some reason we know π(M), then L can simply be taken such that L_{∗,{1,...,n}\π(M)} is the identity matrix. In that case, Algorithm 1 becomes deterministic.

We now consider a matrix M ∈ K[x]^{m×n} with m < n, possibly rank-deficient, and we focus on the computation of its pivot support π(M). In Section 4.1, we give a deterministic algorithm which is efficient when n ∈ O(m). In Section 4.2 we explain how this can be used iteratively to efficiently find the pivot support when m ≪ n.

Our approach stems from the fact (see Lemma 4.2) that π(M) is also the pivot support of any basis of the saturation of the row space of M [4, Sec. II.§2.4], defined as {λM, λ ∈ K(x)^{1×m}} ∩ K[x]^{1×n}. This notion of saturation was already used in [28] in order to compute column bases of M by relying on the following factorization:

Lemma 4.1 ([28, Sec. 3]). Let M ∈ K[x]^{m×n} have rank r ∈ Z_{>0}, let K ∈ K[x]^{n×(n−r)} be a right kernel basis of M, and let S ∈ K[x]^{r×n} be a left kernel basis of K. Then, we have M = CS for some column basis C ∈ K[x]^{m×r} of M.

One can easily verify that the left kernel of K is precisely the saturation of M, and therefore the matrix S is a (row) basis of this saturation. Here, we are particularly interested in the following consequence of this result:

Lemma 4.2. The matrices M and S in Lemma 4.1 have the same pivot support, that is, π(M) = π(S).

Proof. Since M = CS, the row space of M is contained in that of S. Hence, by the properties at the end of Section 2.4, π(M) ⊆ π(S) as sets.
But since M and S both have rank r, both pivot supports have exactly r distinct elements, and must therefore be equal. □

We will read off π(S) from S by ensuring that this matrix is in ordered weak Popov form. First, we obtain a column reduced right kernel basis K of M using MinimalKernelBasis (see Theorem 2.1). However, the degree profile of K prevents us from using the same algorithm to compute a left kernel basis S efficiently, since the average row degree of K could be as large as r deg(M). To circumvent this issue, we combine the observations that deg(S) is bounded and that K has small average column degree to conclude that S can be efficiently obtained via an approximant basis (see Section 2).

Lemma 4.3. Let M ∈ K[x]^{m×n} have rank r ∈ Z_{>0} and let K ∈ K[x]^{n×(n−r)} be a right kernel basis of M. Then, any left kernel basis of K which is in reduced form must have degree at most d = deg(M). As a consequence, if ˆP ∈ K[x]^{n×n} is a reduced basis of A_d(K), where d = cdeg(K) + d + 1 ∈ Z^{n−r}_{>0}, then the submatrix P of ˆP formed by its rows of degree at most d is a reduced left kernel basis of K.

Proof. Let S ∈ K[x]^{r×n} be a left kernel basis of K in reduced form. By Lemma 4.1, M = CS for some matrix C ∈ K[x]^{m×r}. Then, the predictable degree property implies that deg(S) ≤ deg(CS) = d. For the second claim (which is a particular case of [28, Lem. 4.2]), note that P is reduced as a subset of the rows of a reduced matrix. Besides, cdeg(PK) < d by construction, hence PK = 0 mod x^d implies PK = 0. It remains to show that P generates the left kernel of K. Indeed, there exists a basis of this kernel which has degree at most d, and on the other hand any vector of degree at most d in this kernel is in particular in A_d(K) and therefore is a combination of the rows of ˆP; using the predictable degree property, we obtain that this combination only involves rows from the submatrix P.
□

If we compute ˆP in ordered weak Popov form, then the submatrix P is in ordered weak Popov form as well, and therefore π(M) can be directly read off from it. The computation of an approximant basis in ordered weak Popov form can be done via the algorithm of [12], which returns one in Popov form.

Algorithm 2:
PivotSupportViaFactor
Input: matrix M ∈ K[x]^{m×n} with m ≤ n.
Output: the pivot support π(M) of M.
1. If M = 0 then return the empty tuple () ∈ Z^0
2. K ∈ K[x]^{n×(n−r)} ← MinimalKernelBasis(M)
3. ˆP ∈ K[x]^{n×n} ← ordered weak Popov basis of A_d(K), with d = cdeg(K) + (deg(M) + 1) ∈ Z^{n−r}_{>0}
4. S ∈ K[x]^{r×n} ← the rows of ˆP of degree at most deg(M)
5. Return the pivot index of S
Algorithm 2 is correct and uses O ˜ ( n ω deg ( M )) operations in K . Proof. Note that we compute the rank of M as r by the indirectassignment at Step 2. Besides, S is in ordered weak Popov form sinceit is a submatrix formed by rows of ˆP itself in ordered weak Popovform. This implies that Step 5 indeed returns the pivot support of S .Then, the correctness directly follows from Lemmas 4.2 and 4.3.By Theorem 2.1, Step 2 costs O ˜ ( n ω d ) , where d = deg ( M ) , and | cdeg ( K )| ≤ rd . Thus, the sum of the approximation order definedat Step 3 is | d | = | cdeg ( K )| + ( n − r )( d + ) < n ( d + ) . Then, thisstep uses O ˜ ( n ω − | d |) ⊆ O ˜ ( n ω d ) operations [12, Thm. 1.4]. □ Note that in this algorithm we do not require that M has fullrank. The only reason why we assume m ≤ n is because the costbound for the computation of a kernel basis at Step 2 is not clear tous in the case m > n (the same assumption is made in [30]).Here, it seems more difficult to take average degrees into accountthan in Algorithm 1. While the average degree of the m columns of M with largest degree could be taken into account by the kernelbasis algorithm of [30], it seems that the computation of S via anapproximant basis remains in O ˜ ( n ω d ) nevertheless. .2 The case of wide matrices In this section we will deal with pivots of submatrices M ∗ , J , where J = { j < . . . < j k } ⊆ { , . . . , n } . To use column indices of M ∗ , J in M , we introduce for any such J the operator ϕ J : { , . . . , k } →{ , . . . , n } satisfying ϕ J ( i ) = j i . We abuse notation by applying ϕ J element-wise to tuples, such as in ϕ J ( π ( M ∗ , J )) .The following simple lemma is the crux of the algorithm:Lemma 4.5. Let M ∈ K [ x ] m × n , and consider any set of indices J ⊆ { , . . . , n } . Then ( π ( M ) ∩ J ) ⊆ ϕ J ( π ( M ∗ , J )) with equalitywhenever π ( M ) ⊆ J . Proof. If a vector v ∈ K [ x ] × n in the row space of M is suchthat π ( v ) ∈ J , then π ( v ) = ϕ J ( π ( v ∗ , J )) . 
This implies (π(M) ∩ J) ⊆ ϕ_J(π(M_{∗,J})), since the pivot index of any vector in the row space of M (resp. M_{∗,J}) appears in π(M) (resp. π(M_{∗,J})); see Section 2.4. It also immediately implies the equality whenever π(M) ⊆ J. □

These properties lead to a fast method for computing the pivot support when n ≫ m, relying on a black box PivotSupport which efficiently finds the pivot support when n ∈ O(m): one first considers the 2m leftmost columns M_{∗,{1,...,2m}} and uses PivotSupport to compute their pivot support π_1. Then, Lemma 4.5 suggests to discard all columns of M with index in {1, . . . , 2m} \ π_1, thus obtaining a matrix M_1. Then, we repeat the same process to obtain M_2, M_3, etc.

Algorithm 3:
WideMatrixPivotSupport
Input: matrix M ∈ K[x]^{m×n}.
Output: the pivot support π(M) of M.
Assumption: the algorithm PivotSupport takes as input M and returns π(M).

1. If n ≤ 2m then return PivotSupport(M)
2. π_1 ← PivotSupport(M_{*,{1,...,2m}})
3. M̂ ← [M_{*,π_1}  M_{*,{2m+1,...,n}}]
4. [π_2  π_3] ← WideMatrixPivotSupport(M̂), such that max(π_2) ≤ #π_1 and min(π_3) > #π_1, where #π_1 is the cardinality of π_1
5. Return [φ_{π_1}(π_2)  φ_{{2m+1,...,n}}(π_3 − #π_1)] (subtraction applied entrywise)

Proposition 4.6.
Algorithm 3 is correct. It uses at most ⌈n/m⌉ calls to PivotSupport, each with an m × k submatrix of M as input, where k ≤ 2m. If m ≤ n and PivotSupport is Algorithm 2, then Algorithm 3 uses O~(m^{ω−1} n deg(M)) operations in K.

Proof. The correctness follows from Lemma 4.5, and the operation count is obvious. When using Algorithm 2 for PivotSupport, the correctness and cost bound follow from Proposition 4.4. □

The notions of reduced and Popov forms presented in Sections 2.3 and 2.4 can be extended by introducing additive integer weights in the degree measure for vectors, following [24, Sec. 3]: a shift is a tuple s = (s_1,...,s_n) ∈ Z^n, and the shifted degree of a row vector p = [p_1 ··· p_n] ∈ K[x]^{1×n} is

rdeg_s(p) = max(deg(p_1)+s_1, ..., deg(p_n)+s_n) = rdeg(p x^s),

where x^s = diag(x^{s_1},...,x^{s_n}). Note that here p x^s may be over the ring of Laurent polynomials if min(s) <
0; below, actual computations will always remain over K[x]. Note that with s = 0 we recover the notion of degree used in the previous sections.

This leads to shifted reduced forms for cases where one is interested in matrices whose rows minimize the s-degree, instead of the usual 0-degree. The generalized definitions from Section 2 can be concisely described as follows. For a matrix M ∈ K[x]^{m×n}, its s-row degree is rdeg_s(M) = rdeg(M x^s). If M has no zero row, its s-leading matrix is lm_s(M) = lm(M x^s), and the s-pivot index and entries of M are the pivot index and entries of M x^s. The s-pivot degree of M is the tuple of the degrees of its s-pivot entries; this is equal to rdeg_s(M) − s_J, where J is the s-pivot index of M and s_J the corresponding subshift.

If M has no zero row and m ≤ n, then M is in s-reduced, s-(ordered) weak Popov or s-Popov form if M x^s has the respective non-shifted form, whenever min(s) ≥
0. Since adding a constant to all the entries of s simply shifts the s-degree of vectors by this constant, this does not change the s-leading matrix or the s-pivots, and thus does not affect the shifted forms. Therefore we can extend these definitions to also cover s with negative entries; one may alternatively assume min(s) = 0.

The s-Popov form P of a matrix M ∈ K[x]^{m×n} is the unique row basis of M which is in s-Popov form. The s-pivot support of M is the s-pivot index of P and is denoted by π_s(M) ∈ Z^r_{>0}, where r is the rank of M. For more details on shifted forms, we refer to [3].

Computationally, it is folklore that finding the shifted Popov form easily reduces to the non-shifted case: given a matrix M ∈ K[x]^{m×n} and a nonnegative shift s ∈ Z^n, the non-shifted Popov form P̂ of M x^s has the form P̂ = P x^s, with P the s-Popov form of M. If m < n and the computation of P̂ can be carried out in O~(m^{ω−1} n deg(M x^s)) operations, this approach yields P in O~(m^{ω−1} n (deg(M) + amp(s))). While this cost is satisfactory whenever amp(s) ∈ O(deg(M)), one may hope for improvements especially when amp(s) > m deg(M). Indeed, Eq. (5) in Lemma 5.1 shows deg(P) ≤ m deg(M), suggesting the target cost O~(m^ω n deg(M)) for the computation of P.

A matrix H = [h_{i,j}] ∈ K[x]^{r×n} with r ≤ n is in Hermite form [11, 15, 20] if there are indices 1 ≤ j_1 < ··· < j_r ≤ n such that:
• h_{i,j} = 0 for 1 ≤ j < j_i and 1 ≤ i ≤ r,
• h_{i,j_i} is monic (therefore nonzero) for 1 ≤ i ≤ r,
• deg(h_{i',j_i}) < deg(h_{i,j_i}) for 1 ≤ i' < i ≤ r.
We call (j_1,...,j_r) the Hermite pivot index of H; note that it is precisely the column rank profile of H.

For a matrix M ∈ K[x]^{m×n}, its Hermite form H ∈ K[x]^{r×n} is the unique row basis of M which is in Hermite form. We call Hermite pivot support of M the Hermite pivot index of H.
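As a concrete illustration of this definition (a sketch of our own, not code from the paper; the helper names poly_deg and is_hermite_form are ours), the following Python snippet checks the three conditions above on a small matrix whose entries are stored as coefficient lists in increasing powers of x, with [] denoting the zero polynomial:

```python
def poly_deg(p):
    """Degree of a coefficient list, with the convention deg(0) = -1."""
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def is_hermite_form(H):
    """Check the three Hermite-form conditions for H, given row by row."""
    pivots = []
    for i, row in enumerate(H):
        degs = [poly_deg(entry) for entry in row]
        nonzero = [j for j, d in enumerate(degs) if d >= 0]
        if not nonzero:
            return False  # a zero row cannot occur in a row basis
        ji = nonzero[0]  # h_{i,j} = 0 for j < j_i holds by this choice
        pivot = row[ji]
        if pivot[poly_deg(pivot)] != 1:
            return False  # the pivot entry h_{i,j_i} must be monic
        # entries above the pivot must have strictly smaller degree
        if any(poly_deg(H[ip][ji]) >= poly_deg(pivot) for ip in range(i)):
            return False
        pivots.append(ji)
    # the pivot indices must satisfy j_1 < ... < j_r
    return all(a < b for a, b in zip(pivots, pivots[1:]))

# [x+1  3    5]
# [0    x^2  2]  is in Hermite form, with pivots in columns 1 and 2
H = [[[1, 1], [3],       [5]],
     [[],     [0, 0, 1], [2]]]
print(is_hermite_form(H))  # True
```

Swapping the two rows (so the pivot indices no longer increase) or scaling a pivot by a constant (so it is no longer monic) makes the check return False.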
Note that this is also the column rank profile of M, since M is unimodularly equivalent to H (up to padding H with zero rows).

For a given M, the Hermite form can be seen as a specific shifted Popov form: defining the shift h = (nt, ..., 2t, t) for any t > deg(H), the h-Popov form of M coincides with its Hermite form [3, Lem. 2.6]. Besides, the h-pivot index of H is (j_1,...,j_r); in other words, the Hermite pivot support π_h(M) is the column rank profile of M.

5.3 Degree bounds for shifted Popov forms

The next result states that the unimodular transformation U between M and its s-Popov form P only depends on the submatrices of M and P formed by the columns in the s-pivot support. It also gives useful degree bounds for the matrices U and P; for a more general study of such bounds, we refer to [3, Sec. 5].

Lemma 5.1. Let M ∈ K[x]^{m×n} have full rank with m ≤ n, let s ∈ Z^n, let P ∈ K[x]^{m×n} be the s-Popov form of M, and let π = π_s(M) be the s-pivot index of P. Then M_{*,π} ∈ K[x]^{m×m} is nonsingular, P_{*,π} is its s_π-Popov form, and U = P_{*,π} M_{*,π}^{−1} ∈ K[x]^{m×m} is the unique unimodular matrix such that UM = P.

Furthermore, we have the following degree bounds:

deg(P) ≤ deg(M) + amp(s), (2)
cdeg(U_{*,i}) ≤ |rdeg(M)| − rdeg(M_{i,*}) for 1 ≤ i ≤ m, (3)
deg(U) ≤ |cdeg(M_{*,π})|, (4)
deg(P) ≤ min(|rdeg(M)|, |cdeg(M')|) ≤ m deg(M), where M' is M with its zero columns removed. (5)

Proof. Let P̂ = P_{*,π}, M̂ = M_{*,π}, and ŝ = s_π. Note first that P̂ is nonsingular and in ŝ-Popov form. Let V ∈ K[x]^{m×m} be any unimodular matrix such that VM = P. Then in particular V M̂ = P̂, hence M̂ is nonsingular and unimodularly equivalent to P̂, which is therefore the ŝ-Popov form of M̂. Besides, we have V = P̂ M̂^{−1} = U.

It remains to prove the degree bounds. The first one comes from the minimality of P.
Indeed, since P is an s-reduced form of M we have max(rdeg_s(P)) ≤ max(rdeg_s(M)); the left-hand side of this inequality is at least deg(P) + min(s), while its right-hand side is at most deg(M) + max(s).

Let δ ∈ Z^m_{≥0} be the s-pivot degree of P. Then, P̂ is in (−δ)-Popov form with rdeg_{−δ}(P̂) = 0 and cdeg(P̂) = δ [12, Lem. 4.1]. Besides, P̂ is column reduced and thus |cdeg(P̂)| = deg(det(P̂)) [13, Sec. 6.3.2], hence |δ| = deg(det(M̂)).

Let t = (t_1,...,t_m) = rdeg(U^{−1}). We obtain rdeg_{−δ}(M̂) = rdeg_{−δ}(U^{−1} P̂) = rdeg(U^{−1}) = t by the predictable degree property (with shifts, see e.g. [26, Lem. 2.17]). Now, U being the transpose of the matrix of cofactors of U^{−1} divided by the constant det(U^{−1}) ∈ K \ {0}, we obtain cdeg(U_{*,i}) ≤ |t| − t_i for 1 ≤ i ≤ m. Since −δ ≤ 0 we have t = rdeg_{−δ}(M̂) ≤ rdeg(M), hence |t| − t_i ≤ |rdeg(M)| − rdeg(M_{i,*}). This proves (3).

Every entry of the adjugate of M̂ has degree at most |cdeg(M̂)|. Then, U = P̂ M̂^{−1} gives deg(U) ≤ deg(P̂) − deg(det(M̂)) + |cdeg(M̂)|. This yields (4) since deg(P̂) = max(δ) ≤ |δ| = deg(det(M̂)).

The second inequality in (5) is implied by |rdeg(M)| ≤ m deg(M). Besides, from P = UM = Σ_{i=1}^{m} U_{*,i} M_{i,*} we see that (3) implies deg(P) ≤ |rdeg(M)|. For j ∈ π we have cdeg(P_{*,j}) ≤ |cdeg(P̂)| = deg(det(M̂)) ≤ |cdeg(M')|. Now, let j ∈ {1,...,n} \ π: if M_{*,j} = 0 then P_{*,j} = 0, and otherwise it follows from (4) that cdeg(P_{*,j}) = cdeg(U M_{*,j}) ≤ |cdeg(M̂)| + cdeg(M_{*,j}) ≤ |cdeg(M')|. □

Now, we focus on computing the s-Popov form P of M when the s-pivot support π_s(M) is known; here, M has full rank with m < n. To exploit the knowledge of π = π_s(M), a first approach follows from Remark 1: use Algorithm 1 with L such that L_{*,{1,...,n}\π} is the identity matrix and its other columns are zero.
Then, it is easily checked that C = L x^{max(rdeg_s(M)) − s} is a completion of M̂ = M x^s; hence Algorithm 1 returns the Popov form P̂ = P x^s of M̂. This yields P deterministically in O~(n^ω (deg(M) + amp(s))) operations.

Both factors in this cost bound are unsatisfactory in some parameter ranges. When n ≫ m, a sensible improvement would be to replace the matrix dimension factor n^ω with one which has the exponent on the smallest dimension, such as m^{ω−1} n. Similarly, when amp(s) ≫ m deg(M), a sensible improvement would be to replace the polynomial degree factor deg(M) + amp(s) with one suggested by the bounds on deg(P) given in Eq. (5) of Lemma 5.1.

We achieve both improvements with our second approach, which works in three steps and is formalised as Algorithm 4. First, we compute the s_π-Popov form of the submatrix M_{*,π}, which can be done efficiently since this submatrix is square and nonsingular. Then, we use polynomial matrix division to obtain the unimodular transformation U ∈ K[x]^{m×m} such that M_{*,π_s(M)} = U P_{*,π_s(M)}. Lastly, we compute the remaining part of the s-Popov form of M as U^{−1} M_{*,{1,...,n}\π}. Note that, even for s = 0, all entries of U^{−1} may have degree in Θ(m deg(M)); we avoid handling such large degrees by computing this product truncated at precision x^δ, where δ is a (strict) upper bound on the degree of the s-Popov form P. For example, if s = 0 we can take δ = 1 + deg(M).

Algorithm 4:
KnownSupportPopov
Input:
• matrix M ∈ K[x]^{m×n} with full rank and m < n,
• shift s ∈ Z^n,
• the s-pivot support π = π_s(M) of M,
• bound δ ∈ Z_{>0} on the degree of the s-Popov form of M.
Default: δ = 1 + min(|rdeg(M)|, |cdeg(M')|, deg(M) + amp(s)), where M' is M with zero columns removed.
Output: the s-Popov form of M.

1. P ← zero matrix in K[x]^{m×n}
2. P_{*,π} ← NonsingularPopov(M_{*,π}, s_π)
3. U ← M_{*,π} P_{*,π}^{−1} ∈ K[x]^{m×m}
4. δ ← min(δ, 1 + max(rdeg_{s_π}(P_{*,π})) − min(s_{{1,...,n}\π}))
5. P_{*,{1,...,n}\π} ← U^{−1} M_{*,{1,...,n}\π} mod x^δ
6. Return P

Proposition 6.1.
Algorithm 4 is correct and uses O~(m^{ω−1} n δ) operations in K, where δ = 1 + min(|rdeg(M)|, |cdeg(M')|, deg(M) + amp(s)), and M' is M with zero columns removed.

Proof. Let Q ∈ K[x]^{m×n} be the s-Popov form of M. For correctness we prove that P = Q. The first part of Lemma 5.1 shows that indeed Q_{*,π} = P_{*,π}, and that U = M_{*,π} P_{*,π}^{−1} = M_{*,π} Q_{*,π}^{−1} computed at Step 3 is the unimodular matrix such that M = UQ. The last item of Lemma 5.1 proves that the input default value of δ is more than deg(Q). Besides, by definition of s-pivots and s-Popov form, the column j of Q has degree at most max(rdeg_{s_π}(Q_{*,π})) − s_j = max(rdeg_{s_π}(P_{*,π})) − s_j. It follows that δ > deg(Q_{*,{1,...,n}\π}) holds after Step 4, and thus the submatrix Q_{*,{1,...,n}\π} is equal to the truncated product U^{−1} M_{*,{1,...,n}\π} mod x^δ computed at Step 5. Hence Q = P.

Now we explain the cost bound. Step 2 uses O~(m^ω deg(M_{*,π})) operations, by Theorem 2.2. Step 3 has the same cost by Lemma 6.2 below; note that P_{*,π} is in s_π-Popov form and thus column reduced. This is within the announced bound since O~(m^ω deg(M_{*,π})) ⊆ O~(m^{ω−1} n deg(M)) and deg(M) ≤ δ holds by definition of δ. Finally, Step 5 costs O~(m^{ω−1} n δ) operations in K: since U(0) ∈ K^{m×m} is invertible, the truncated inverse of U is computed by Newton iteration in time O~(m^ω δ); then, the truncated product uses O~(m^ω ⌈(n−m)/m⌉ δ) operations. □

At Step 3, we compute a product of the form B A^{−1}, knowing that it has polynomial entries and that A is column reduced; in particular, deg(B A^{−1}) ≤ deg(B) [19, Lem. 3.1]. Then, it is customary to obtain B A^{−1} via a Newton iteration on the "reversed matrices" (see e.g. [22, Chap. 5] and [26, Chap. 10]).

Lemma 6.2.
For a column reduced matrix A ∈ K[x]^{m×m} and a matrix B ∈ K[x]^{m×m} which is a left multiple of A, the quotient B A^{−1} can be computed using O~(m^ω deg(B)) operations in K.

Proof. We follow Steps 1 and 2 of the algorithm PM-QuoRem from [19], on input A, B, and d = deg(B) +
1; hence the requirement cdeg(B) < cdeg(A) + (d,...,d) is satisfied. It is proved in [19, Prop. 3.4] that these steps correctly compute the quotient B A^{−1}; yet we do a different cost analysis since the assumptions on the parameters in [19, Prop. 3.4] might not be satisfied here.

Step 1 of PM-QuoRem computes a type of reversals  and B̂ of the matrices A and B: this uses no arithmetic operation. These matrices also have dimensions m × m, and the constant coefficient of  is invertible because A is column reduced. Step 2 computes the truncated product B̂ Â^{−1} mod x^{d+1}, which can be done via Newton iteration in O~(m^ω d) operations in K. □

Since Algorithm 4 works for an arbitrary shift, it allows us in particular to find the Hermite form of M when its Hermite pivot support is known. It turns out that the latter can be computed efficiently via a column rank profile algorithm from [26].

Proof of Theorem 1.2. Here, the integer δ is defined as

δ = 1 + min(|rdeg(M)|, |cdeg(M')|), (6)

where M' is M with zero columns removed. Let h = (nδ, ..., 2δ, δ). By Lemma 5.1, δ is more than the degree of the Hermite form of M; therefore the h-Popov form of M is also its Hermite form (see Section 5.2). Thus, up to the knowledge of the Hermite pivot support π_h(M) of M, we can compute the Hermite form of M using O~(m^{ω−1} n δ) operations via Algorithm 4.

As mentioned in Section 5.2, π_h(M) is also the column rank profile of M. It is shown in [26, Sec. 11.2] how to use row basis and kernel basis computations to obtain this rank profile in O~(m^{ω−1} n σ) operations, where σ = ⌈|rdeg(M)|/m⌉ is roughly the average row degree of M. We have σ ≤ 1 + |rdeg(M)|/m by definition, and it is easily verified that |rdeg(M)|/m ≤ |cdeg(M')|, hence σ ≤ δ. □

ACKNOWLEDGMENTS
The authors are grateful to Clément Pernet for pointing out the notion of saturation.
REFERENCES

[1] B. Beckermann and G. Labahn. 2000. Fraction-Free Computation of Matrix Rational Interpolants and Matrix GCDs. SIAM J. Matrix Anal. Appl. 22, 1 (2000), 114–144.
[2] B. Beckermann, G. Labahn, and G. Villard. 1999. Shifted Normal Forms of Polynomial Matrices. In ISSAC'99. ACM, 189–196.
[3] B. Beckermann, G. Labahn, and G. Villard. 2006. Normal forms for general polynomial matrices. J. Symbolic Comput. 41, 6 (2006), 708–737.
[4] N. Bourbaki. 1972. Commutative Algebra. Hermann.
[5] D. G. Cantor and E. Kaltofen. 1991. On fast multiplication of polynomials over arbitrary algebras. Acta Inform. 28, 7 (1991), 693–701.
[6] G. D. Forney, Jr. 1975. Minimal Bases of Rational Vector Spaces, with Applications to Multivariable Linear Systems. SIAM Journal on Control 13, 3 (1975), 493–520.
[7] P. Giorgi, C.-P. Jeannerod, and G. Villard. 2003. On the complexity of polynomial matrix computations. In ISSAC'03. ACM, 135–142.
[8] S. Gupta. 2011. Hermite forms of polynomial matrices. Master's thesis. University of Waterloo, Canada.
[9] S. Gupta, S. Sarkar, A. Storjohann, and J. Valeriote. 2012. Triangular x-basis decompositions and derandomization of linear algebra algorithms over K[x]. J. Symbolic Comput. 47, 4 (2012), 422–453.
[10] S. Gupta and A. Storjohann. 2011. Computing Hermite Forms of Polynomial Matrices. In ISSAC'11. ACM, 155–162.
[11] C. Hermite. 1851. Sur l'introduction des variables continues dans la théorie des nombres. Journal für die reine und angewandte Mathematik 41 (1851), 191–216.
[12] C.-P. Jeannerod, V. Neiger, É. Schost, and G. Villard. 2016. Fast computation of minimal interpolation bases in Popov form for arbitrary shifts. In ISSAC'16. ACM, 295–302.
[13] T. Kailath. 1980. Linear Systems. Prentice-Hall.
[14] G. Labahn, V. Neiger, and W. Zhou. 2017. Fast, deterministic computation of the Hermite normal form and determinant of a polynomial matrix. J. Complexity 42 (2017), 44–71.
[15] C. C. MacDuffee. 1933. The Theory of Matrices. Springer-Verlag Berlin Heidelberg. https://doi.org/10.1007/978-3-642-99234-6
[16] T. Mulders and A. Storjohann. 2003. On lattice reduction for polynomial matrices. J. Symbolic Comput. 35, 4 (2003), 377–401.
[17] V. Neiger. 2016. Bases of relations in one or several variables: fast algorithms and applications. Ph.D. Dissertation. École Normale Supérieure de Lyon. https://tel.archives-ouvertes.fr/tel-01431413
[18] V. Neiger. 2016. Fast computation of shifted Popov forms of polynomial matrices via systems of modular polynomial equations. In ISSAC'16. ACM, 365–372.
[19] V. Neiger and T. X. Vu. 2017. Computing canonical bases of modules of univariate relations. In ISSAC'17. ACM.
[20] M. Newman. 1972. Integral Matrices. Pure and Applied Mathematics, Vol. 45. Academic Press.
[21] V. M. Popov. 1972. Invariant Description of Linear, Time-Invariant Controllable Systems. SIAM Journal on Control 10, 2 (1972), 252–264.
[22] S. Sarkar. 2011. Computing Popov Forms of Polynomial Matrices. Master's thesis. University of Waterloo, Canada.
[23] S. Sarkar and A. Storjohann. 2011. Normalization of row reduced matrices. In ISSAC'11. ACM, 297–304.
[24] M. Van Barel and A. Bultheel. 1992. A general module theoretic framework for vector M-Padé and matrix rational interpolation. Numer. Algorithms 3 (1992), 451–462.
[25] W. A. Wolovich. 1974. Linear Multivariable Systems. Applied Mathematical Sciences, Vol. 11. Springer-Verlag New-York.
[26] W. Zhou. 2012. Fast Order Basis and Kernel Basis Computation and Related Problems. Ph.D. Dissertation. University of Waterloo.
[27] W. Zhou and G. Labahn. 2012. Efficient Algorithms for Order Basis Computation. J. Symbolic Comput. 47, 7 (2012), 793–819.
[28] W. Zhou and G. Labahn. 2013. Computing Column Bases of Polynomial Matrices. In ISSAC'13. ACM, 379–386.
[29] W. Zhou and G. Labahn. 2014. Unimodular Completion of Polynomial Matrices. In ISSAC'14. ACM, 413–420.
[30] W. Zhou, G. Labahn, and A. Storjohann. 2012. Computing Minimal Nullspace Bases. In ISSAC'12. ACM.