Fast and deterministic computation of the determinant of a polynomial matrix
WEI ZHOU AND GEORGE LABAHN

Cheriton School of Computer Science, University of Waterloo, Waterloo ON, Canada N2L 3G1
{w2zhou, glabahn}@uwaterloo.ca
Abstract.
Given a square, nonsingular matrix of univariate polynomials $F \in \mathbb{K}[x]^{n \times n}$ over a field $\mathbb{K}$, we give a deterministic algorithm for finding the determinant of $F$. The complexity of the algorithm is $O^{\sim}(n^{\omega} s)$ field operations, where $s$ is the average column degree or the average row degree of $F$. Here $O^{\sim}$ is big-$O$ with log factors omitted and $\omega$ is the exponent of matrix multiplication.

1. Introduction
Let $F \in \mathbb{K}[x]^{n \times n}$ be a square, nonsingular polynomial matrix with $\mathbb{K}$ a field. In this paper we give a deterministic algorithm for finding the determinant of $F$. The complexity of the algorithm is $O^{\sim}(n^{\omega} s)$ field operations from $\mathbb{K}$, where $s$ is the average column degree or the average row degree of $F$. Here $O^{\sim}$ denotes $O$ with $\log^{c}(nd)$ factors suppressed for some positive real constant $c$, and $\omega$ is the exponent of matrix multiplication. The fact that the complexity of determinant computation is related to the complexity of matrix multiplication is well known. In the case of matrices over a field, for example, Bunch and Hopcroft [5] showed that if there exists a fast algorithm for matrix multiplication then there also exists an algorithm for determinant computation with the same exponent.

In the case of square matrices of polynomials of degree at most $d$, Storjohann [13] gives a recursive deterministic algorithm to compute a determinant making use of fraction-free Gaussian elimination with a cost of $O^{\sim}(n^{\omega+1} d)$ operations. An $O(n^{3} d^{2})$ deterministic algorithm was later given by Mulders and Storjohann [12], modifying their weak Popov form computation. Using low rank perturbations, Eberly et al. [7] gave a determinant algorithm requiring $O^{\sim}(n^{2+\omega/2} d)$ field operations, while Storjohann [14] used high-order lifting to give an algorithm which reduces the complexity to $O(n^{\omega} (\log n)^{2} d^{1+\epsilon})$ field operations. Finally, we mention the algorithm of Giorgi et al. [9] which computes the determinant with complexity $O^{\sim}(n^{\omega} d)$. However, the algorithms in both [7] and [14] are probabilistic, while the algorithm from [9] only works efficiently on a class of generic input matrices, that is, matrices that are well behaved in the computation. Deterministic algorithms for general polynomial matrices with complexity similar to that of fast matrix multiplication have not appeared previously.

In the case of an arbitrary commutative ring (with multiplicative unit) or the integers, other fast determinant algorithms have been given by Kaltofen [10], Abbott et al. [1], Eberly et al. [7] and Kaltofen and Villard [11]. We refer the reader to the last named paper and the references therein for more details on efficient determinant computation for such matrices.

Our algorithm takes advantage of a fast algorithm [19] for computing a shifted minimal kernel basis to efficiently eliminate blocks of a polynomial matrix. More specifically, we use kernel bases to partition our input $F$ as
$$F \cdot U = \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix}$$
with $U$ unimodular. Such a unimodular transformation almost preserves the determinant of $F$, but results in an extra factor coming from the determinant of the unimodular matrix $U$, a nonzero field element in $\mathbb{K}$. The computation of the determinant of $F$ can therefore be reduced to the computations of $\det U$, $\det G_1$ and $\det G_2$. The computations of $\det G_1$ and $\det G_2$ are similar to the original problem of computing $\det F$, but with input matrices of lower dimension and possibly higher degrees. To achieve the desired efficiency, however, these computations need to be done without actually determining the unimodular matrix $U$, since its potentially large degree size may prevent it from being efficiently computed. We show how the determinant of $U$ can be computed without actually computing the entire unimodular multiplier $U$.
In addition, for fast, recursive computation, the degrees of each of the diagonal blocks need to be controlled in order to ensure that these are also not too large. We accomplish this by making use of the concepts of shifted minimal kernel bases and column bases of polynomial matrices. Shifts basically allow us to control the computations using column degrees rather than the degree of the polynomial matrix. This becomes an issue when the column degrees of the input vary considerably from column to column (and hence differ considerably from the degree of the input). The shifted kernel and column bases computations can be done efficiently using algorithms from [19] and [17]. We remark that shifted minimal kernel bases and column bases, used in the context of fast block elimination, have also been used for deterministic algorithms for inversion [20] and unimodular completion [18] of polynomial matrices.

The remainder of this paper is organized as follows. In the next section we give preliminary information for shifted degrees, kernel bases and column bases of polynomial matrices. Section 3 then contains the algorithm for recursively computing the diagonal elements of a triangular form and a method to compute the determinants of the unimodular matrices. The paper ends with a conclusion and topics for future research.

2. Preliminaries
In this section we give the basic definitions and properties of shifted degree, minimal kernel basis and column basis for a matrix of polynomials. These will be the building blocks used in our algorithm.

2.1. Shifted Degrees.
Our methods make use of the concept of shifted degrees of polynomial matrices [3], basically shifting the importance of the degrees in some of the rows of a basis. For a column vector $p = [p_1, \dots, p_n]^{T}$ of univariate polynomials over a field $\mathbb{K}$, its column degree, denoted by $\mathrm{cdeg}\, p$, is the maximum of the degrees of the entries of $p$, that is,
$$\mathrm{cdeg}\, p = \max_{1 \le i \le n} \deg p_i.$$
The shifted column degree generalizes this standard column degree by taking the maximum after shifting the degrees by a given integer vector that is known as a shift. More specifically, the shifted column degree of $p$ with respect to a shift $\vec{s} = [s_1, \dots, s_n] \in \mathbb{Z}^{n}$, or the $\vec{s}$-column degree of $p$, is
$$\mathrm{cdeg}_{\vec{s}}\, p = \max_{1 \le i \le n} [\deg p_i + s_i] = \deg(x^{\vec{s}} \cdot p),$$
where $x^{\vec{s}} = \mathrm{diag}(x^{s_1}, x^{s_2}, \dots, x^{s_n})$. For a matrix $P$, we use $\mathrm{cdeg}\, P$ and $\mathrm{cdeg}_{\vec{s}}\, P$ to denote respectively the list of its column degrees and the list of its shifted $\vec{s}$-column degrees. When $\vec{s} = [0, \dots, 0]$, the shifted column degree specializes to the standard column degree.

Shifted degrees have been used previously in polynomial matrix computations and in generalizations of some matrix normal forms [4]. The shifted column degree is equivalent to the notion of defect commonly used in the literature.

Along with shifted degrees we also make use of the notion of a matrix polynomial being column (or row) reduced. A polynomial matrix $F$ is column reduced if the leading column coefficient matrix, that is the matrix
$$[\mathrm{coeff}(f_{ij}, x, d_j)]_{1 \le i, j \le n}, \quad \text{with } \vec{d} = \mathrm{cdeg}\, F,$$
has full rank. A polynomial matrix $F$ is $\vec{s}$-column reduced if $x^{\vec{s}} F$ is column reduced; we write $\mathrm{lcoeff}_{\vec{s}}(F)$ for the leading column coefficient matrix of $x^{\vec{s}} F$.
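To make these definitions concrete, here is a small Python sketch (our own illustration using sympy; `cdeg` and `s_lcoeff` are hypothetical helper names, not routines from the cited algorithms) that computes shifted column degrees and the $\vec{s}$-leading coefficient matrix used to test $\vec{s}$-column reducedness.

```python
# Illustrative helpers for shifted column degrees, assuming sympy.
# Zero columns are assumed absent; these are not the fast routines of [17, 19].
import sympy as sp

x = sp.symbols('x')

def cdeg(F, s=None):
    # cdeg_s(F)_j = max_i ( deg F[i,j] + s_i )
    m, n = F.shape
    s = s or [0] * m
    return [max(sp.degree(F[i, j], x) + s[i]
                for i in range(m) if F[i, j] != 0) for j in range(n)]

def s_lcoeff(F, s=None):
    # (i,j) entry: coefficient of x^(d_j - s_i) in F[i,j], with d = cdeg_s(F)
    m, n = F.shape
    s = s or [0] * m
    d = cdeg(F, s)
    return sp.Matrix(m, n, lambda i, j:
                     sp.expand(F[i, j]).coeff(x, d[j] - s[i])
                     if d[j] - s[i] >= 0 else 0)

F = sp.Matrix([[x**2 + 1, x],
               [2*x**2,   3]])
print(cdeg(F))                   # [2, 1]
print(s_lcoeff(F))               # Matrix([[1, 1], [2, 0]])
print(s_lcoeff(F).rank() == 2)   # True, so F is column reduced
```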
The usefulness of the shifted degrees can be seen from their applications in polynomial matrix computation problems [16, 18, 19, 20]. One of its uses is illustrated by the following lemma, which follows directly from the definition of shifted degree.

Lemma 1. Let $\vec{s}$ be a shift whose entries bound the corresponding column degrees of $A \in \mathbb{K}[x]^{* \times m}$. Then for any polynomial matrix $B \in \mathbb{K}[x]^{m \times *}$, the column degrees of $A \cdot B$ are bounded by the corresponding $\vec{s}$-column degrees of $B$.

An essential subroutine needed in our algorithm, also based on the use of shifted degrees, is the efficient multiplication of a pair of matrices $A \cdot B$ with unbalanced degrees. The following result follows as a special case of [15, Theorem 5.6]. The notation $\sum \vec{s}$, for any list $\vec{s}$, denotes the sum of all entries in $\vec{s}$.
Theorem 2. Let $A \in \mathbb{K}[x]^{n \times m}$ and $B \in \mathbb{K}[x]^{m \times n}$ be given, with $m \le n$. Suppose $\vec{s} \in \mathbb{Z}^{n}_{\ge 0}$ is a shift that bounds the corresponding column degrees of $A$, and $\sum \vec{s} \ge \sum \mathrm{cdeg}_{\vec{s}}\, B$. Then the product $A \cdot B$ can be computed in $O^{\sim}(n^{\omega} s)$ field operations from $\mathbb{K}$, where $s = \sum \vec{s} / n$ is the average of the entries of $\vec{s}$.
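As a quick numerical check of Lemma 1's degree bound (again an illustration of ours, not part of the fast algorithm), one can compare the column degrees of a product $A \cdot B$ with the $\vec{s}$-column degrees of $B$:

```python
# Checking Lemma 1 on a small example, assuming sympy.
import sympy as sp

x = sp.symbols('x')

def cdeg(F, s=None):
    # shifted column degrees, as defined in Section 2.1
    m, n = F.shape
    s = s or [0] * m
    return [max(sp.degree(F[i, j], x) + s[i]
                for i in range(m) if F[i, j] != 0) for j in range(n)]

A = sp.Matrix([[x**3, 1], [x, x**2]])   # cdeg A = [3, 2]
B = sp.Matrix([[1, x**2], [x**4, x]])
s = [3, 2]                              # entrywise bound on cdeg A

print(cdeg((A * B).expand()))   # [6, 5]
print(cdeg(B, s))               # [6, 5]: bounds cdeg(A*B) entrywise, as Lemma 1 says
```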
2.2. Kernel and Column Bases. The kernel of $F \in \mathbb{K}[x]^{m \times n}$ is the $\mathbb{K}[x]$-module $\{p \in \mathbb{K}[x]^{n} \mid F p = 0\}$, with a kernel basis of $F$ being a basis of this module. Formally, we have:
Definition 3. Given $F \in \mathbb{K}[x]^{m \times n}$, a polynomial matrix $N \in \mathbb{K}[x]^{n \times k}$ is a (right) kernel basis of $F$ if the following properties hold:
(1) $N$ is full-rank.
(2) $N$ satisfies $F \cdot N = 0$.
(3) Any $q \in \mathbb{K}[x]^{n}$ satisfying $F q = 0$ can be expressed as a linear combination of the columns of $N$, that is, there exists some polynomial vector $p$ such that $q = N p$.

It is not difficult to show that any pair of kernel bases $N$ and $M$ of $F$ are unimodularly equivalent, that is, $N = M \cdot V$ for some unimodular matrix $V$.

An $\vec{s}$-minimal kernel basis of $F$ is just a kernel basis that is $\vec{s}$-column reduced.
Definition 4. Given $F \in \mathbb{K}[x]^{m \times n}$, a polynomial matrix $N \in \mathbb{K}[x]^{n \times k}$ is an $\vec{s}$-minimal (right) kernel basis of $F$ if $N$ is a kernel basis of $F$ and $N$ is $\vec{s}$-column reduced. We also call an $\vec{s}$-minimal (right) kernel basis of $F$ an $(F, \vec{s})$-kernel basis.

We will need the following result from [19] to bound the sizes of kernel bases.
Theorem 5. Suppose $F \in \mathbb{K}[x]^{m \times n}$ and $\vec{s} \in \mathbb{Z}^{n}_{\ge 0}$ is a shift with entries bounding the corresponding column degrees of $F$. Then the sum of the $\vec{s}$-column degrees of any $\vec{s}$-minimal kernel basis of $F$ is bounded by $\sum \vec{s}$.

A column basis of $F$ is a basis for the $\mathbb{K}[x]$-module $\{F p \mid p \in \mathbb{K}[x]^{n}\}$. Such a basis can be represented as a full rank matrix $T \in \mathbb{K}[x]^{m \times r}$ whose columns are the basis elements. A column basis is not unique and indeed any column basis right multiplied by a unimodular polynomial matrix gives another column basis.

The cost of kernel basis computation is given in [19] while the cost of column basis computation is given in [17]. In both cases they make heavy use of fast methods for order bases (also sometimes referred to as sigma bases or minimal approximant bases) [2, 9, 16].
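For small examples one can compute a candidate kernel basis naively, as in the following sketch of ours. The caveat is that clearing denominators in a fraction-field nullspace only guarantees a spanning set of the kernel over $\mathbb{K}(x)$, not an $\vec{s}$-minimal (or even module) basis in general, which is why the algorithm from [19] is what is actually used in the computation.

```python
# Naive kernel computation for small examples, assuming sympy.
# Caveat: this spans the kernel over the fraction field K(x) after clearing
# denominators; it is not guaranteed to be a K[x]-module basis, nor s-minimal.
import sympy as sp

x = sp.symbols('x')

def naive_right_kernel(F):
    cols = []
    for v in F.nullspace():                      # vectors over K(x)
        den = sp.lcm([sp.denom(sp.cancel(e)) for e in v])
        cols.append((v * den).expand())          # clear denominators
    return sp.Matrix.hstack(*cols)               # assumes a nontrivial kernel

F = sp.Matrix([[1, x, x**2]])
N = naive_right_kernel(F)
print(N)                   # Matrix([[-x, -x**2], [1, 0], [0, 1]])
print((F * N).expand())    # Matrix([[0, 0]]): F * N = 0 as required
```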
Theorem 6. Let $F \in \mathbb{K}[x]^{m \times n}$ with $\vec{s} = \mathrm{cdeg}\, F$. Then an $(F, \vec{s})$-kernel basis can be computed with a cost of $O^{\sim}(n^{\omega} s)$ field operations, where $s = \sum \vec{s} / n$ is the average column degree of $F$.

Theorem 7. The algorithm from [17] can compute a column basis of a polynomial matrix $F$ deterministically with $O^{\sim}(n m^{\omega-1} s)$ field operations in $\mathbb{K}$, where $s$ is the average column degree of $F$. In addition, if $r$ is the rank of $F$, then the column basis computed has column degrees bounded by the $r$ largest column degrees of $F$.
Example 8. Let $F \in \mathbb{Z}[x]^{3 \times 5}$ with column degrees $\vec{s} = \mathrm{cdeg}\, F$. Then $F$ has a column basis $G \in \mathbb{Z}[x]^{3 \times 3}$ and a kernel basis $N \in \mathbb{Z}[x]^{5 \times 2}$ (the explicit matrices are not reproduced here). If $\{g_i\}_{i=1,2,3}$ denote the columns of $G$, then each column of $F$ is a $\mathbb{Z}[x]$-linear combination of the $g_i$. The kernel basis $N$ satisfies $F \cdot N = 0$, with the sum of its $\vec{s}$-column degrees $\mathrm{cdeg}_{\vec{s}}\, N$ bounded by $\sum \vec{s}$, in accordance with Theorem 5. Since the shifted leading coefficient matrix $\mathrm{lcoeff}_{\vec{s}}(N)$ has full rank, $N$ is $\vec{s}$-column reduced, and we have that $N$ is an $\vec{s}$-minimal kernel basis. $\square$

Column bases and kernel bases are closely related, as shown by the following result from [15, 17].
Lemma 9.
Let $F \in \mathbb{K}[x]^{m \times n}$ and suppose $U \in \mathbb{K}[x]^{n \times n}$ is a unimodular matrix such that $F \cdot U = [0, T]$, with $T$ of full column rank. Partition $U = [U_L, U_R]$ such that $F \cdot U_L = 0$ and $F \cdot U_R = T$. Then
(1) $U_L$ is a kernel basis of $F$ and $T$ is a column basis of $F$.
(2) If $N$ is any other kernel basis of $F$, then $U^* = [N, U_R]$ is unimodular and also unimodularly transforms $F$ to $[0, T]$.

3. Recursive Computation
In this section we show how to recursively compute the determinant of a nonsingular input matrix $F \in \mathbb{K}[x]^{n \times n}$ having column degrees $\vec{s}$. The computation makes use of fast kernel basis and column basis computation.

Consider unimodularly transforming $F$ to
$$F \cdot U = G = \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix}, \tag{1}$$
which eliminates a top right block and gives two square diagonal blocks $G_1$ and $G_2$ in $G$. Then the determinant of $F$ can be computed as
$$\det F = \frac{\det G}{\det U} = \frac{\det G_1 \cdot \det G_2}{\det U}, \tag{2}$$
which requires us to first compute $\det G_1$, $\det G_2$, and $\det U$. The same procedure can then be applied to compute the determinant of $G_1$ and the determinant of $G_2$. This can be repeated recursively until the dimension becomes $1$.

One major obstacle of this approach, however, is that the degrees of the unimodular matrix $U$ and the matrix $G$ can be too large for efficient computation. To sidestep this issue, we will show that the matrices $G_1$, $G_2$, and the scalar $\det U$ can in fact be computed without computing the entire matrices $G$ and $U$.
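Both steps in (2) rest on two facts used at every level of the recursion: the determinant is multiplicative, and the determinant of a block triangular matrix is the product of the determinants of its diagonal blocks. Spelled out,
$$\det F \cdot \det U = \det(F \cdot U) = \det \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix} = \det G_1 \cdot \det G_2,$$
and since $U$ is unimodular, $\det U$ is a nonzero element of $\mathbb{K}$, so dividing by it is always possible.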
3.1. Computing the diagonal blocks. Suppose we want $G_1$ to have dimension $k$. We can partition
$$F = \begin{bmatrix} F_U \\ F_D \end{bmatrix}$$
with $k$ rows in $F_U$, and note that both $F_U$ and $F_D$ are of full rank since $F$ is assumed to be nonsingular. Partitioning $U = [U_L, U_R]$, with $k$ columns in $U_L$, we obtain
$$F \cdot U = \begin{bmatrix} F_U \\ F_D \end{bmatrix} [U_L, U_R] = \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix} = G. \tag{3}$$
Notice that the matrix $G_1$ is nonsingular and is therefore a column basis of $F_U$. As such it can be efficiently computed as mentioned in Theorem 7. In addition, the column basis algorithm makes the resulting column degrees of $G_1$ small enough for $G_1$ to be efficiently used again as the input matrix of a new subproblem in the recursive procedure.

Lemma 10.
The first diagonal block $G_1$ in $G$ can be computed with a cost of $O^{\sim}(n^{\omega} s)$ field operations and with column degrees bounded by the $k$ largest column degrees of $F$.

For computing the second diagonal block $G_2$, notice that we do not need a complete unimodular matrix $U$, as only $U_R$ is needed to compute $G_2 = F_D U_R$. In fact, Lemma 9 tells us much more. It tells us that the matrix $U_R$ is a right kernel basis of $F_U$, which makes the top right block of $G$ zero. In addition, the kernel basis $U_R$ can be replaced by any other kernel basis of $F_U$ to give another unimodular matrix that also transforms $F_U$ to a column basis and also eliminates the top right block of $G$.

Lemma 11.
Partition $F = \begin{bmatrix} F_U \\ F_D \end{bmatrix}$ and suppose $G_1$ is a column basis of $F_U$ and $N$ a kernel basis of $F_U$. Then there is a unimodular matrix $U = [\,*, N\,]$ such that
$$F \cdot U = \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix},$$
where $G_2 = F_D \cdot N$. If $F$ is square and nonsingular, then $G_1$ and $G_2$ are also square and nonsingular.

Note that the blocks represented by the symbol $*$ are not needed in our computation. These blocks may have very large degrees and cannot be computed efficiently.

We have just seen how $G_1$ and $G_2$ can be determined without computing the unimodular matrix $U$. We still need to make sure that $G_2$ can be computed efficiently, which can be done by using the existing algorithms for kernel basis computation and for the multiplication of matrices with unbalanced degrees. We also require that the column degrees of $G_2$ be small enough for $G_2$ to be efficiently used again as the input matrix of a new subproblem in the recursive procedure.
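On a toy example, the construction of $G_2$ is just one polynomial matrix product. The sketch below (ours) reuses the kernel basis computed in the earlier naive sketch of Section 2.2:

```python
# Computing the second diagonal block G2 = F_D * N for a toy example,
# assuming sympy; N is a kernel basis of the top block F_U.
import sympy as sp

x = sp.symbols('x')

FU = sp.Matrix([[1, x, x**2]])          # top block
FD = sp.Matrix([[x, 1, 0],
                [0, x, 1]])             # bottom block
N = sp.Matrix([[-x, -x**2],
               [1,   0],
               [0,   1]])               # kernel basis of FU (FU * N = 0)

G2 = (FD * N).expand()
print(G2)                   # Matrix([[1 - x**2, -x**3], [x, 1]])
print((FU * N).expand())    # Matrix([[0, 0]]): confirms the kernel property
```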
Lemma 12. The second diagonal block $G_2$ can be computed with a cost of $O^{\sim}(n^{\omega} s)$ field operations. Furthermore, $\sum \mathrm{cdeg}\, G_2 \le \sum \vec{s}$.

Proof. From Lemma 11 we have that $G_2 = F_D \cdot N$ with $N$ a kernel basis of $F_U$. In fact, this kernel basis can be made $\vec{s}$-minimal using the algorithm from [19], and computing such an $\vec{s}$-minimal kernel basis of $F_U$ costs $O^{\sim}(n^{\omega} s)$ field operations by Theorem 6. In addition, from Theorem 5 the sum of the $\vec{s}$-column degrees of such an $\vec{s}$-minimal $N$ is bounded by $\sum \vec{s}$.

For the matrix multiplication $F_D \cdot N$, the sum of the column degrees of $F_D$ and the sum of the $\vec{s}$-column degrees of $N$ are both bounded by $\sum \vec{s}$. Therefore we can apply Theorem 2 directly to multiply $F_D$ and $N$ with a cost of $O^{\sim}(n^{\omega} s)$ field operations.

The second statement follows from Lemma 1. $\square$

The computation of a kernel basis $N$ of $F_U$ is actually also used as an intermediate step by the column basis algorithm for computing the column basis $G_1$ [17]. In other words, we can get this kernel basis from the computation of $G_1$ with no additional work.

3.2. Determinant of the unimodular matrix.
Lemma 10 and Lemma 12 show that the two diagonal blocks in (1) can be computed efficiently. In order to compute the determinant of $F$ using (2), we still need to know the determinant of the unimodular matrix $U$ satisfying (3) or, equivalently, the determinant of $V = U^{-1}$. The column basis computation from [17] for computing the diagonal block $G_1$ also gives $U_R$, the matrix consisting of the right $(n-k)$ columns of $U$, which is also a right kernel basis of $F_U$. In fact, this column basis computation also gives a right factor which, multiplied with the column basis $G_1$, gives $F_U$. The following lemma shows that this right factor coincides with the matrix $V_U$ consisting of the top $k$ rows of $V$. The column basis computation therefore gives both $U_R$ and $V_U$ with no additional work.

Lemma 13.
Let $k$ be the dimension of $G_1$. The matrix $V_U \in \mathbb{K}[x]^{k \times n}$ satisfies $G_1 \cdot V_U = F_U$ if and only if $V_U$ is the submatrix of $V = U^{-1}$ consisting of the top $k$ rows of $V$.

Proof. The proof follows directly from
$$G \cdot V = \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix} \begin{bmatrix} V_U \\ V_D \end{bmatrix} = \begin{bmatrix} F_U \\ F_D \end{bmatrix} = F. \qquad \square$$

While the determinant of $V$ or the determinant of $U$ is needed to compute the determinant of $F$, a major problem is that we do not know $U_L$ or $V_D$, which may not be efficiently computable due to their large degrees. This means we need to compute the determinant of $V$ or $U$ without knowing the complete matrix $V$ or $U$. The following lemma shows how this can be done using just $U_R$ and $V_U$, which are obtained from the computation of the column basis $G_1$.

Lemma 14.
Let $U = [U_L, U_R]$ and $F$ be as before, that is, they satisfy
$$F \cdot U = \begin{bmatrix} F_U \\ F_D \end{bmatrix} [U_L, U_R] = \begin{bmatrix} G_1 & 0 \\ * & G_2 \end{bmatrix} = G,$$
where the row dimension of $F_U$, the column dimension of $U_L$, and the dimension of $G_1$ are all $k$. Let $V = \begin{bmatrix} V_U \\ V_D \end{bmatrix}$ be the inverse of $U$, with $k$ rows in $V_U$. If $U_L^* \in \mathbb{K}[x]^{n \times k}$ is a matrix such that $U^* = [U_L^*, U_R]$ is unimodular, then
$$\det F = \det G \cdot \frac{\det(V_U U_L^*)}{\det(U^*)}.$$

Proof.
Since $\det F = \det G \cdot \det V$, we just need to show that $\det V = \det(V_U U_L^*) / \det(U^*)$. This follows from
$$\det V \cdot \det U^* = \det(V \cdot U^*) = \det\left( \begin{bmatrix} V_U \\ V_D \end{bmatrix} [U_L^*, U_R] \right) = \det \begin{bmatrix} V_U U_L^* & 0 \\ * & I \end{bmatrix} = \det(V_U U_L^*). \qquad \square$$

Lemma 14 shows that the determinant of $V$ can be computed using $V_U$, $U_R$, and a unimodular completion $U^*$ of $U_R$. In fact, this can be made more efficient still by noticing that the higher degree parts do not affect the computation.

Lemma 15. If $U \in \mathbb{K}[x]^{n \times n}$ is unimodular, then $\det U = \det(U \bmod x) = \det(U(0))$.

Algorithm 1 determinant($F$)
Input: $F \in \mathbb{K}[x]^{n \times n}$, nonsingular.
Output: the determinant of $F$.
1. if $n = 1$ then return $F$; endif;
2. $\begin{bmatrix} F_U \\ F_D \end{bmatrix} := F$, with $F_U$ consisting of the top $k := \lceil n/2 \rceil$ rows of $F$;
3. $G_1, U_R, V_U :=$ ColumnBasis($F_U$); (Note: here ColumnBasis() also returns the kernel basis $U_R$ and the right factor $V_U$ it computed, in addition to the column basis $G_1$.)
4. $G_2 := F_D U_R$;
5. $U_R^0 := U_R \bmod x$; $V_U^0 := V_U \bmod x$;
6. compute $U_L^* \in \mathbb{K}^{n \times k}$, a matrix that makes $U^* = [U_L^*, U_R^0]$ unimodular;
7. $d_V := \det(V_U^0 U_L^*) / \det(U^*)$;
8. $d_G :=$ determinant($G_1$) $\cdot$ determinant($G_2$);
9. return $d_V \cdot d_G$;

Proof.
Note that $\det(U(\alpha)) = (\det U)(\alpha)$ for any $\alpha \in \mathbb{K}$, that is, the result is the same whether we do the evaluation before or after computing the determinant. Since $U$ is unimodular, $\det U$ is a nonzero constant from $\mathbb{K}$, and taking $\alpha = 0$ we have
$$\det(U \bmod x) = \det(U(0)) = (\det U)(0) = \det(U) \bmod x = \det U. \qquad \square$$
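Lemma 15 is easy to validate computationally. The following small script (our illustration, assuming sympy) builds a unimodular matrix as a product of elementary operations and compares $\det U$ with $\det(U(0))$:

```python
# Illustration of Lemma 15, assuming sympy: for unimodular U in K[x]^(n x n),
# det U is a nonzero constant, so it equals det of the constant term U(0).
import sympy as sp

x = sp.symbols('x')

# A unimodular matrix built as a product of elementary operations.
E1 = sp.Matrix([[1, x**2 + 3, 0],
                [0, 1,        0],
                [0, 0,        1]])
E2 = sp.Matrix([[1,    0,   0],
                [0,    1,   0],
                [x**3, 5*x, 1]])
U = (E1 * E2).expand()

print(sp.expand(U.det()))    # 1: a nonzero constant, as unimodularity requires
print(U.subs(x, 0).det())    # 1: det(U(0)) = det U, as in Lemma 15
```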
Lemma 15 allows us to use just the degree zero coefficient matrices in the computation. Hence Lemma 14 can be improved as follows.

Lemma 16.
Let $F$, $U = [U_L, U_R]$, and $V = \begin{bmatrix} V_U \\ V_D \end{bmatrix}$ be as before. Let $U_R^0 = U_R \bmod x$ and $V_U^0 = V_U \bmod x$ be the constant term matrices of $U_R$ and $V_U$, respectively. If $U_L^* \in \mathbb{K}^{n \times k}$ is a matrix such that $U^* = [U_L^*, U_R^0]$ is unimodular, then
$$\det F = \det G \cdot \frac{\det(V_U^0 U_L^*)}{\det(U^*)}.$$

Proof.
Lemma 15 implies that $\det V = \det(V \bmod x)$ and $\det U^* = \det(U^* \bmod x)$. These can then be substituted in the proof of Lemma 14 to obtain the result. $\square$

Remark 17. Lemma 16 requires us to compute $U_L^* \in \mathbb{K}^{n \times k}$, a matrix such that $U^* = [U_L^*, U_R^0]$ is unimodular. This can be obtained from the unimodular matrix that transforms $V_U^0$ to its reduced column echelon form, computed using the Gauss-Jordan transform algorithm from [13] with a cost of $O(n m^{\omega-1})$ field operations, where $m$ is the column dimension of $U_L^*$.

We now have all the ingredients needed for computing the determinant of $F$. A recursive algorithm is given in Algorithm 1, which computes the determinant of $F$ as the product of the determinant of $V$ and the determinant of $G$. The determinant of $G$ is computed by recursively computing the determinants of its diagonal blocks $G_1$ and $G_2$.
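As an illustration of the completion step in Remark 17 (a naive sketch of ours; the cost-sensitive version instead works from the column echelon form of $V_U^0$ via the Gauss-Jordan transform of [13]), one can complete a full column rank constant matrix to a nonsingular one by appending standard basis columns at its non-pivot rows:

```python
# Naive completion sketch, assuming sympy: given a constant matrix UR0 of
# full column rank r < n, find UstarL so that [UstarL, UR0] is nonsingular.
import sympy as sp

def complete_to_nonsingular(UR0):
    n, r = UR0.shape
    pivots = UR0.T.rref()[1]          # pivot rows of UR0 (as columns of UR0^T)
    free_rows = [i for i in range(n) if i not in pivots]
    I = sp.eye(n)
    # standard basis columns for the non-pivot rows complete UR0
    return sp.Matrix.hstack(*[I[:, i] for i in free_rows])

UR0 = sp.Matrix([[1, 0], [2, 1], [0, 3]])
UstarL = complete_to_nonsingular(UR0)
Ustar = sp.Matrix.hstack(UstarL, UR0)
print(Ustar.det() != 0)    # True: [UstarL, UR0] is nonsingular
```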
Example 18. To see the algorithm at work, let $F \in \mathbb{Z}[x]^{5 \times 5}$ be nonsingular with its top three rows forming the matrix $F_U$ of Example 8, working over $\mathbb{Z}[x]$ (again the explicit matrices are not reproduced here). A column basis $G_1 = G$ and an $\vec{s}$-minimal kernel basis $U_R = N$ of $F_U$ were given in Example 8, and the column basis computation also gives the right factor $V_U$ satisfying $G_1 \cdot V_U = F_U$. Taking the constant term matrices $U_R^0 = U_R \bmod x$ and $V_U^0 = V_U \bmod x$, Gauss-Jordan elimination is used to find a unimodular completion $U_L^*$ making $U^* = [U_L^*, U_R^0]$ nonsingular. The determinant of $V$ is then computed as
$$\det V = \frac{\det(V_U^0 U_L^*)}{\det(U^*)} = 2.$$
Recursively computing the determinants of the diagonal blocks $G_1$ and $G_2 = F_D U_R$ and accumulating gives the determinant of $F$ as
$$\det F = \det V \cdot \det G_1 \cdot \det G_2 = 2 \cdot \det G_1 \cdot \det G_2. \qquad \square$$

3.3. Computational cost.

Theorem 19.
Algorithm 1 costs $O^{\sim}(n^{\omega} s)$ field operations to compute the determinant of a nonsingular matrix $F \in \mathbb{K}[x]^{n \times n}$, where $s$ is the average column degree of $F$.

Proof. From Lemma 10 and Lemma 12, the computation of the two diagonal blocks $G_1$ and $G_2$ costs $O^{\sim}(n^{\omega} s)$ field operations, which dominates the cost of the other operations in the algorithm.

Now consider the cost of the algorithm on a subproblem in the recursive computation. If we let the cost be $g(m)$ for a subproblem whose input matrix has dimension $m$, then by Lemma 10 and Lemma 12 the sum of the column degrees of the input matrix is still bounded by $ns$, but the average column degree is now bounded by $ns/m$. The cost of the subproblem is then
$$g(m) \in O^{\sim}(m^{\omega}(ns/m)) + g(\lceil m/2 \rceil) + g(\lfloor m/2 \rfloor) \subset O^{\sim}(m^{\omega-1} ns) + 2 g(\lceil m/2 \rceil) \subset O^{\sim}(m^{\omega-1} ns),$$
where the last inclusion holds because $\omega > 2$ makes the top level dominate the geometric sum over the recursion levels. The cost on the original problem, where the dimension $m = n$, is therefore $O^{\sim}(n^{\omega} s)$. $\square$

4. Conclusion
In this paper we have given a new, fast, deterministic algorithm for computing the determinant of a nonsingular polynomial matrix. Our method relies on the efficient, deterministic computation of the diagonal elements of a triangularization of the input matrix. This in turn relies on recent efficient methods [17, 19] for computing shifted minimal kernel bases and column bases of polynomial matrices.

In a future report we will show how our triangularization technique results in a fast, deterministic algorithm for finding a Hermite normal form. Other directions of interest include making use of the diagonalization procedures in domains such as matrices of differential or, more generally, of Ore operators, particularly for computing normal forms. Partial results have been reported in [6], at least in the case of Popov normal forms. Similarly, we are interested in applying our block elimination techniques using kernel bases to computing the Dieudonné determinant and quasideterminant of matrices over Ore polynomial rings. These are the two main generalizations of determinants for matrices of noncommutative polynomials. Degree bounds for these noncommutative determinants have been used in [8] for modular computation of normal forms.
References

[1] J. Abbott, M. Bronstein, and T. Mulders. Fast deterministic computation of determinants of dense matrices. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'99, pages 197-204. ACM Press, 1999.
[2] B. Beckermann and G. Labahn. A uniform approach for the fast computation of matrix-type Padé approximants. SIAM Journal on Matrix Analysis and Applications, 15(3):804-823, 1994.
[3] B. Beckermann, G. Labahn, and G. Villard. Shifted normal forms of polynomial matrices. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'99, pages 189-196, 1999.
[4] B. Beckermann, G. Labahn, and G. Villard. Normal forms for general polynomial matrices. Journal of Symbolic Computation, 41(6):708-737, 2006.
[5] J. Bunch and J. Hopcroft. Triangular factorization and inversion by fast matrix multiplication. Mathematics of Computation, 28:231-236, 1974.
[6] P. Davies, H. Cheng, and G. Labahn. Computing the Popov form of Ore polynomial matrices. In Milestones in Computer Algebra, pages 149-156, 2008.
[7] W. Eberly, M. Giesbrecht, and G. Villard. On computing the determinant and Smith normal form of an integer matrix. In Proceedings of the 41st IEEE Symposium on Foundations of Computer Science (FOCS 2000), pages 675-687, 2000.
[8] M. Giesbrecht and M. Sub Kim. Computing the Hermite form of a matrix of Ore polynomials. Journal of Algebra, 376:341-362, 2013.
[9] P. Giorgi, C.-P. Jeannerod, and G. Villard. On the complexity of polynomial matrix computations. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'03, pages 135-142. ACM Press, 2003.
[10] E. Kaltofen. On computing determinants of matrices without divisions. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'92, pages 342-349. ACM, 1992.
[11] E. Kaltofen and G. Villard. On the complexity of computing determinants. Computational Complexity, 13:91-130, 2004.
[12] T. Mulders and A. Storjohann. On lattice reduction for polynomial matrices. Journal of Symbolic Computation, 35(4):377-401, April 2003.
[13] A. Storjohann. Algorithms for Matrix Canonical Forms. PhD thesis, Department of Computer Science, Swiss Federal Institute of Technology (ETH), 2000.
[14] A. Storjohann. High-order lifting and integrality certification. Journal of Symbolic Computation, 36:613-648, 2003.
[15] W. Zhou. Fast Order Basis and Kernel Basis Computation and Related Problems. PhD thesis, University of Waterloo, 2012.
[16] W. Zhou and G. Labahn. Efficient algorithms for order basis computation. Journal of Symbolic Computation, 47(7):793-819, 2012.
[17] W. Zhou and G. Labahn. Fast computation of column bases of polynomial matrices. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'13, pages 379-387. ACM, 2013.
[18] W. Zhou and G. Labahn. Fast computation of unimodular completion of polynomial matrices. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'14, pages 413-420. ACM, 2014.
[19] W. Zhou, G. Labahn, and A. Storjohann. Computing minimal nullspace bases. In Proceedings of the International Symposium on Symbolic and Algebraic Computation, ISSAC'12, pages 375-382. ACM, 2012.
[20] W. Zhou, G. Labahn, and A. Storjohann. A deterministic algorithm for inverting a polynomial matrix. Submitted to Journal of Complexity, 2014.