Effective Matrix Methods in Commutative Domains
Gennadi I. Malaschonok
Tambov State University, 392622 Tambov, Russia
[email protected]

⋆ This paper was published in the book: Formal Power Series and Algebraic Combinatorics (Ed. by D. Krob, A. A. Mikhalev, A. V. Mikhalev), Springer, 2000, 506-517. No part of this material may be reproduced, stored in a retrieval system, or transmitted, in any form without prior permission of the copyright owner.
Abstract.
Effective matrix methods for solving standard linear algebra problems in commutative domains are discussed. Two of them are new: a method for computing the adjoint matrix and a method for solving systems of linear equations in a commutative domain.
Let R be a commutative domain with identity and K the field of quotients of R. We assume that R is equipped with an algorithm allowing exact division. This means that if two elements a and b of R are given (a being different from zero) such that b = ac with c ∈ R, then this algorithm can exhibit the exact quotient c. Let R^{n×m} denote the set of n × m matrices with entries in R.

This paper is devoted to a review of effective matrix methods in the domain R for the solution of standard linear algebra problems. They are: (1) multiplying two matrices, (2) solving linear systems in K, (3) solving linear systems in R, (4) computing the adjoint matrix, (5) computing the matrix determinant, (6) computing the characteristic polynomial of a matrix. We estimate the algorithms according to the total number of multiplication and division operations in the ring R.

(1) Multiplication of two matrices. Let O(n^β) be the number of multiplication operations necessary for the multiplication of square matrices of order n. For the standard multiplication of matrices β = 3, for the Strassen algorithm [21] β = log_2 7, and for the best algorithm known today β < 2.376 [9].
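As an aside, a minimal sketch of the Strassen scheme mentioned above may be helpful: it multiplies two square matrices of order n = 2^p with 7 recursive multiplications per level instead of 8, which gives the exponent β = log_2 7. The code below is purely illustrative (plain Python lists of integers, no padding for sizes that are not powers of two); it is not taken from the paper.

# Illustrative Strassen multiplication for integer matrices of order n = 2^p.

def mat_add(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def mat_sub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split into 2 x 2 blocks
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # seven recursive products
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    return [C11[i] + C12[i] for i in range(h)] + [C21[i] + C22[i] for i in range(h)]

if __name__ == "__main__":
    A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
    B = [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 1, 0], [0, 3, 0, 1]]
    C = strassen(A, B)
    # compare with the classical product
    D = [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]
    assert C == D
    print(C)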
(2) Solving linear systems in K. Let Ax = b be a system of linear equations whose coefficients belong to the commutative domain R: A ∈ R^{n×m}, b ∈ R^n, x ∈ K^m. The main method here is the so-called Gauss method with exact divisions, with complexity O(n^2 m) operations in R. This method was first published in the paper of Dodgson [11] and later developed in the works [5], [22], [14], [17], [18]. The asymptotic complexities of these methods are all of order n^2 m multiplications and divisions, the constant factor being largest for the Dodgson [11] and Bareiss [5] methods and smallest for the one-pass method [17] and the generalized method [18]. A fast method for solving systems of linear equations over a commutative domain is published in [19]; its complexity is O(n^{β-1} m), the same as the complexity of matrix multiplication.

(3) Solving linear systems in R. Let Ax = b be a system of linear equations, A ∈ R^{n×m}, b ∈ R^n. The problem is to find all the solutions x of this system in the module R^m. Particular cases of this problem are discussed in [15], [16]. A randomized algorithm for finding all the solutions in R is discussed in Section 3 of this paper. It is supposed that there exists an algorithm that can ascertain whether the finitely generated ideal I = (a_1, a_2, ..., a_s) is the unit ideal or not, and, if I is the unit ideal, compute the coefficients k_i ∈ R in the expansion of the unit 1 = Σ_{i=1}^{s} k_i a_i. One may take as such an algorithm the algorithm of Gröbner basis computation in R.

(4) Computing the adjoint matrix. The best known method for computing the adjoint matrix in an arbitrary commutative ring has complexity O(n^3 √n log n log log n) operations of addition, subtraction and multiplication [13]. If exact division is possible in the commutative ring, then the best method has complexity O(n^3) operations of multiplication and exact division [16], [17]. In Section 2 of this work we suggest a method whose complexity is equal to the complexity of matrix multiplication, i.e. O(n^β).

(5) Computing the matrix determinant. The intermediate result of each of the algorithms [11], [5], [14], [17], [18], [19] for solving systems in K is the computation of the matrix determinant; thus these methods compute the determinant of an n × n matrix in of order n^3 multiplicative operations, while for the method [19] the complexity is O(n^β). The best method for computing the determinant of a matrix without divisions was published by Kaltofen [13]; its complexity is O(n^3 √n log n log log n).

(6) Computing the characteristic polynomial of a matrix. In the case of an arbitrary commutative ring, the best algorithms are the Chistov algorithm [7] and the Improved Berkowitz Algorithm [1], with complexity O(n^{β+1} log n). In the paper [2] two new efficient methods are described with O(n^3) ring operations (addition, subtraction, multiplication and exact division): the Quasi-triangular method and the Tri-diagonal method, both with asymptotic multiplicative complexity of order n^3.
As in the case of Hessenberg's method [12], they proceed by reducing the given matrix A to a particular upper quasi-triangular (Hessenberg) form similar to A.
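To make the exact-division technique that underlies the methods in (2), (4) and (5) concrete, here is a small illustrative sketch (not the paper's own algorithm) of fraction-free Gaussian elimination in the Bareiss style [5] for an integer matrix: every division shown is exact, so all intermediate values stay in the domain Z, and the last pivot is the determinant.

# Fraction-free (Bareiss) elimination over the integers: a sketch.
# The division by `prev` at each step is exact, so no fractions appear;
# the final pivot equals the determinant of the matrix.

def bareiss_det(M):
    A = [row[:] for row in M]      # work on a copy
    n = len(A)
    sign = 1
    prev = 1                       # pivot of the previous step
    for k in range(n - 1):
        if A[k][k] == 0:           # find a nonzero pivot below and swap rows
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0           # the whole column is zero: det = 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # 2 x 2 condensation step; the division by `prev` is exact
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

if __name__ == "__main__":
    M = [[2, 3, 1], [4, 7, 5], [6, 2, 9]]
    print(bareiss_det(M))          # -> 54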
Commutative domain of principal ideals. This is the basic application field. In Section 4 we discuss the problem of solving linear systems in a principal ideal domain R and in its field of fractions K. Let Ax = c be a system of linear equations, A ∈ R^{n×m}, c ∈ R^n.
Solving linear systems in the field of fractions. The best method known today for solving a determined system Ax = c in the field of fractions in the case R = Z, m = n, det A ≠ 0, is the Dixon method [10], which uses linear p-adic lifting. Its complexity is O(n^3 (log n + log ‖A‖ + log p)^2) bit operations.

If m > n, R = Z and the Gauss method with exact divisions is used, then solving the system Ax = c in usual arithmetic needs O(m n^4 (log n + log ‖A‖)^2) bit operations; here ‖A‖ denotes the absolute value of the greatest coefficient of the system. Using the Chinese remaindering method the complexity may be reduced to O(m n^3 (log n + log ‖A‖)^2) bit operations [6].

My approach to this problem uses p-adic lifting as in [10]. The complexity of the algorithm in the case of the ring Z is O((m − n + 1) n^2 m (log m + log ‖A‖ + log p)^2) bit operations.
Solving linear systems in the principal ideal domain. In [20] a randomized algorithm is given for finding one solution of a system in the domain R in the cases R = Z and R = F[x] (the ring of polynomials over a field F). This method is based on the Dixon algorithm. I suggest a randomized algorithm for finding all the solutions of a system in a commutative domain of principal ideals. It is based on p-adic lifting. Its complexity is essentially cubic in the dimension of the system, as in [20], but the number of matrix inversions is now m − n times less.

Let A = ( A_1  C ; B  D ) be an invertible matrix with an invertible block A_1. Its inverse matrix A^{-1} can be factorized:
A^{-1} = ( I  −A_1^{-1}C ; 0  I ) ( I  0 ; 0  (D − B A_1^{-1} C)^{-1} ) ( I  0 ; −B  I ) ( A_1^{-1}  0 ; 0  I ).   (2.1)

(Here and below block matrices are written row by row, the block rows being separated by semicolons.)

Let A be a matrix of order n = 2^p. If block inversion by formula (2.1) is possible for all blocks down to the second order, then the computation of the inverse matrix needs 2^{p-1} second-order block inversions and 6·2^{p-k-1} multiplications of blocks of order 2^k (k = 1, 2, ..., p−1); counting M(m) = m^β operations per block multiplication, this amounts to 3(n^β − 2^{β-1} n)/(2^{β-1} − 1) multiplication operations, i.e. O(n^β) with β = log_2 7 when Strassen multiplication is used.

Let R be a commutative ring and A = (a_{i,j}) a square matrix of order n over the ring R. We denote by A^k_{i,j} the square submatrix of order k obtained by bordering the upper left block of order k − 1 with the row i and the column j (i, j ≥ k). Its determinant is denoted by a^k_{i,j} = det A^k_{i,j}. Denote the corner minor of order k by δ_k = a^k_{k,k} (with δ_0 = 1). The determinant of the matrix obtained from the upper left block A^k_{k,k} of order k by replacing the column i by the column j is denoted by δ^k_{(i,j)} (1 ≤ i ≤ k, k < j ≤ n).

Consider the matrices

A^{(s)}_t = (a^s_{i,j}), i = s, ..., t, j = s, ..., t,   and   G^{(t)}_s = (δ^t_{(i,j)}), i = s, ..., t, j = t+1, ..., n,   (2.2)

of sizes (t − s + 1) × (t − s + 1) and (t − s + 1) × (n − t), correspondingly. With the preceding notation the Sylvester determinant identity [3] may be written in the following way:

det A^{(s)}_t = δ_{s-1}^{t-s} δ_t,   0 < s < t ≤ n.   (2.3)
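As a quick illustration (not part of the paper), the following sketch checks the Sylvester identity (2.3) on a random integer matrix, computing the bordered minors a^s_{i,j} with exact rational arithmetic; the choice of n, s, t is arbitrary.

# Numerical check of the Sylvester determinant identity (2.3)
# on a random integer matrix, using exact rational arithmetic.
from fractions import Fraction
from random import randint

def det(M):
    """Exact determinant by fraction-based Gaussian elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    n = len(A)
    d = Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if A[r][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            A[k], A[piv] = A[piv], A[k]
            d = -d
        d *= A[k][k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
    return d

def bordered_minor(M, k, i, j):
    """a^k_{i,j}: minor of order k bordering the leading (k-1) x (k-1) block
    with row i and column j (1-based indices, i, j >= k)."""
    rows = list(range(k - 1)) + [i - 1]
    cols = list(range(k - 1)) + [j - 1]
    return det([[M[r][c] for c in cols] for r in rows])

n, s, t = 6, 2, 4
M = [[randint(-5, 5) for _ in range(n)] for _ in range(n)]
delta = lambda k: bordered_minor(M, k, k, k)        # corner minor of order k
A_st = [[bordered_minor(M, s, i, j) for j in range(s, t + 1)]
        for i in range(s, t + 1)]                   # the matrix A^{(s)}_t
lhs = det(A_st)
rhs = delta(s - 1) ** (t - s) * delta(t)
print(lhs, rhs, lhs == rhs)                         # identity (2.3) holds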
Let us prove the two main theorems of the adjoint matrix factorization.

Theorem 1. Let a square matrix A of order n over the ring R be decomposed into the blocks A = ( A_1  C ; B  D ), where A_1 is the square block of order s (1 < s < n) whose determinant δ_s is neither zero nor a zero divisor in R. Then the adjoint matrix A^* can be factorized as

A^* = ( δ_s^{-1}δ_n I  −δ_s^{-1}FC ; 0  I ) ( I  0 ; 0  G ) ( I  0 ; −B  δ_s I ) ( F  0 ; 0  I ),   (2.4)

where F = A_1^*, G = δ_s^{-n+s+1} A^{(s+1)*}_n, I is the identity matrix, and the following identity takes place:

A^{(s+1)}_n = δ_s D − B F C.   (2.5)
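The factorization (2.4) can be checked directly on a small example. The sketch below is illustrative only: it computes adjoints naively via cofactors with exact Fraction arithmetic (not by the recursive scheme of this section), builds the four factors for a random integer matrix with δ_s ≠ 0 and det A ≠ 0, and compares their product with the classical adjoint.

# Direct check of the adjoint factorization (2.4) on a random integer matrix.
from fractions import Fraction
from random import randint

def minor(M, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor(M, 0, j)) for j in range(n))

def adj(M):
    n = len(M)
    # adjugate: transpose of the cofactor matrix
    return [[(-1) ** (i + j) * det(minor(M, j, i)) for j in range(n)] for i in range(n)]

def mul(X, Y):
    return [[sum(a * b for a, b in zip(rx, cy)) for cy in zip(*Y)] for rx in X]

def scale(c, X):
    return [[c * x for x in row] for row in X]

def block(TL, TR, BL, BR):
    return [tl + tr for tl, tr in zip(TL, TR)] + [bl + br for bl, br in zip(BL, BR)]

def eye(k, c=1):
    return [[c if i == j else 0 for j in range(k)] for i in range(k)]

def zeros(r, c):
    return [[0] * c for _ in range(r)]

n, s = 5, 2
while True:
    A = [[Fraction(randint(-4, 4)) for _ in range(n)] for _ in range(n)]
    A1 = [row[:s] for row in A[:s]]
    if det(A) != 0 and det(A1) != 0:
        break
C = [row[s:] for row in A[:s]]
B = [row[:s] for row in A[s:]]
D = [row[s:] for row in A[s:]]

d_s, d_n = det(A1), det(A)
F = adj(A1)                                   # F = A_1^*
BFC = mul(mul(B, F), C)
S = [[d_s * D[i][j] - BFC[i][j] for j in range(n - s)]
     for i in range(n - s)]                   # A^{(s+1)}_n = d_s D - B F C, identity (2.5)
G = scale(d_s ** (-(n - s) + 1), adj(S))      # G = d_s^{-n+s+1} (A^{(s+1)}_n)^*

F1 = block(eye(s, d_n / d_s), scale(-1 / d_s, mul(F, C)), zeros(n - s, s), eye(n - s))
F2 = block(eye(s), zeros(s, n - s), zeros(n - s, s), G)
F3 = block(eye(s), zeros(s, n - s), scale(-1, B), eye(n - s, d_s))
F4 = block(F, zeros(s, n - s), zeros(n - s, s), eye(n - s))

lhs = mul(mul(F1, F2), mul(F3, F4))
print(lhs == adj(A))                          # True: the product equals A^*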
Theorem 2. Let the square matrix A^{(s+1)}_n of order n − s (s > 0, n − s > 1) over the ring R be decomposed into the blocks A^{(s+1)}_n = ( A  C ; B  D ), where A is the square block of order t − s (s < t < n), and δ_s and δ_t are neither zeros nor zero divisors in R. Then the matrix δ_s^{-n+s+1} A^{(s+1)*}_n can be factorized:

δ_s^{-n+s+1} A^{(s+1)*}_n = ( δ_t^{-1}δ_n I  −δ_t^{-1}FC ; 0  I ) ( I  0 ; 0  δ_s^{-1}G ) ( I  0 ; −B  δ_t I ) ( F  0 ; 0  I ),   (2.6)

where F = δ_s^{-t+s+1} A^{(s+1)*}_t, G = δ_t^{-n+t+1} A^{(t+1)*}_n, I is the identity matrix, and the following identity is true:

A^{(t+1)}_n = δ_s^{-1} (δ_t D − B F C).   (2.7)

Proof. Calculate the products of the matrix A^{(s+1)}_n by the factors of the matrix (2.6) step by step from the right to the left:

A^{(s+1)}_n → ( δ_t I  FC ; B  D ) → ( δ_t I  FC ; 0  δ_t D − BFC ) → ( δ_t I  FC ; 0  δ_n I ) → δ_n I.

It is necessary to prove the identity (2.7) and the following identities:

F A = δ_t I,   (2.8)

δ_t^{-n+t+1} A^{(t+1)*}_n A^{(t+1)}_n = δ_n I.   (2.9)

Since A = A^{(s+1)}_t, the equality (2.8) follows from the determinant Sylvester identity det A^{(s+1)}_t = δ_s^{t-s-1} δ_t. The identity (2.9) follows from the determinant Sylvester identity det A^{(t+1)}_n = δ_t^{n-t-1} δ_n.

Let us prove the identity (2.7). Denote by a^{(s+1)*}_{l,j} the cofactor of the element (l, j) in the matrix A^{(s+1)}_t. From the determinant Sylvester identity we obtain

δ_s^{t-s-1} δ^t_{(i,j)} = Σ_{l=s+1}^{t} a^{(s+1)*}_{l,i} a^{s+1}_{l,j}.

Since F = δ_s^{-t+s+1} A^{(s+1)*}_t and C is a block of the matrix A^{(s+1)}_n, the last equality, taken elementwise, implies the matrix identity G^{(t)}_{s+1} = FC. We decompose the determinant of the matrix A^{s+1}_{i,j} according to the last column and obtain

a^{s+1}_{i,j} = a_{i,j} δ_s − Σ_{l=1}^{s} a_{l,j} σ^s_{(l,i)},   (2.10)

where σ^s_{(l,i)} is the determinant of the matrix obtained from A^s_{s,s} by replacing the row l by the row i. Let σ = (σ^s_{(1,i)}, σ^s_{(2,i)}, ..., σ^s_{(s,i)}, 0, 0, ..., 0), α = (a_{1,j}, a_{2,j}, ..., a_{t,j}), β = (a^{s+1}_{i,1}, a^{s+1}_{i,2}, ..., a^{s+1}_{i,t}) denote rows with t elements. Then according to (2.10) we obtain the matrix identity

( I  0 ; −σ  δ_s ) · A^{t+1}_{i,j} = ( A^t_{t,t}  α^T ; β  a^{s+1}_{i,j} ).

Correspondingly, we write the following determinant identity, in which the determinant of the matrix on the right-hand side is decomposed according to the last row:

δ_s a^{t+1}_{i,j} = δ_t a^{s+1}_{i,j} − Σ_{l=1}^{t} a^{s+1}_{i,l} δ^t_{(l,j)}.

In matrix form this may be written as δ_s A^{(t+1)}_n = δ_t D − B G^{(t)}_{s+1}. Taking into account G^{(t)}_{s+1} = FC, we get the identity (2.7).

2.3 The Estimate of the Complexity

The dimension of the upper left block A in the process of the factorization of the matrix may be taken arbitrarily. Consider the case when the dimension of the block A is a power of 2. We call such a decomposition of the adjoint matrix the binary factorization.

Let M(n) be the complexity of the multiplication of two matrices of order n, with asymptotic estimate αn^β. Then the complexity of the adjoint matrix computation for a matrix of order n = 2^p by means of the binary factorization is C(n) = Σ_{k=0}^{p-1} 3·2^k M(2^{p-k}).
We neglect the complexity of the multiplication of a matrix by a scalar, i.e. the terms of order n^2. Therefore, the asymptotic estimate of the complexity of the adjoint matrix computation is 3αn^β / (1 − 2^{1-β}).

Finally, for the ratio of the complexities of the adjoint matrix computation and of the matrix multiplication we obtain the asymptotic estimate k(β) = lim_{n→∞} C(n)/M(n) = 3 / (1 − 2^{1-β}). For example, we have k(3) = 4 for classical multiplication and k(log_2 7) = 4.2 for Strassen multiplication.
We now turn to the solution of systems of linear equations over a commutative domain. Let R be a commutative domain with an identity, K a field of fractions of R, A ∈ R^{n×m}, rank A = r, c ∈ R^n, and consider the system

Ax = c.   (3.1)

Let S and T be permutation matrices which transpose linearly independent rows and columns of the matrix A to the upper left corner. We obtain in this corner a square matrix of size r × r; denote it by A_1 (det A_1 ≠ 0). The matrices SAT and Sc may be written in block form:

SAT = ( A_1  A_2 ; A_3  A_4 ),   Sc = ( c_1 ; c_2 ),   c_1 ∈ R^r.

Denote by M = { x | x ∈ K^m, Ax = c } the set of all the solutions of the system (3.1). We denote by I_r the identity matrix of order r, and by E_{i,j} the square matrices having only one nonzero element, equal to 1, in position (i, j).

We need some facts from the theory of linear equations.

1. If rank(A, c) ≠ r, then M = ∅.
2. If rank(A, c) = r, then M is a hyperplane of dimension m − r in a space of dimension m. It is defined by m − r + 1 points which do not belong to one hyperplane of lower dimension.

3. If the system (3.1) is homogeneous (c = 0) and x_1, x_2, ..., x_{m-r} are its linearly independent solutions, then M = { Σ_{i=1}^{m-r} x_i u_i | u_i ∈ K }.

4. If the system (3.1) is nonhomogeneous (c ≠ 0) and x_1, x_2, ..., x_{m-r+1} are its linearly independent solutions, then M = { Σ_{i=1}^{m-r+1} x_i u_i | u_i ∈ K, Σ_{i=1}^{m-r+1} u_i = 1 }.
Definition. A basis set of solutions of a homogeneous system of linear equations (3.1) is a set consisting of m − r linearly independent solutions of the system (3.1). A basis set of solutions of a nonhomogeneous system of linear equations (3.1) is a set consisting of m − r + 1 linearly independent solutions of the system (3.1).

The next two theorems reduce the problem of obtaining the basis solutions of (3.1) to several problems of solving determined systems. The first theorem considers homogeneous systems, the second nonhomogeneous ones.
Theorem 3. Let (3.1) be a homogeneous system of linear equations and A_2 = (a_1, a_2, ..., a_{m-r}), a_j ∈ R^r. Then the systems

A_1 x_j = −a_j,   j = 1, ..., m − r,   (3.2)

are determined and their solutions x_j ∈ K^r define the basis set of solutions of (3.1):

T ( x_j ; e_j ),   j = 1, ..., m − r,   (3.3)

where e_j ∈ R^{m-r} are the columns of the identity matrix I_{m-r} = (e_1, e_2, ..., e_{m-r}).

Proof. Denote y = T^{-1}x. By the condition we have (A_1, A_2)y = 0. We search for the solution in the form y = ( x_j ; e_j ) and obtain the system (3.2).
Corollary 1. Let B = A_1^{-1}A_2 = (b_1, b_2, ..., b_{m-r}), b_j ∈ K^r. Then

T ( −b_1 ; e_1 ),  T ( −b_2 ; e_2 ),  ...,  T ( −b_{m-r} ; e_{m-r} )

is the basis set of solutions of (3.1).
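For concreteness, here is a small illustrative sketch (assuming, for simplicity, that the first r columns of A are already independent, so S = T = I) that builds the basis of Corollary 1 with exact rational arithmetic and verifies Ax = 0 for every basis vector.

# Basis set of solutions of a homogeneous system A x = 0 (Corollary 1),
# assuming the leading r x r block A1 of A is nonsingular (S = T = I).
from fractions import Fraction

def solve(A1, rhs):
    """Solve the square system A1 y = rhs exactly by Gauss-Jordan elimination."""
    n = len(A1)
    M = [[Fraction(A1[i][j]) for j in range(n)] + [Fraction(rhs[i])] for i in range(n)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        for i in range(n):
            if i != k:
                f = M[i][k] / M[k][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[1, 2, 1, 3],
     [0, 1, 4, 1]]          # r = 2, m = 4, rank 2
r, m = 2, 4
A1 = [row[:r] for row in A]
A2 = [row[r:] for row in A]

basis = []
for j in range(m - r):
    b_j = solve(A1, [A2[i][j] for i in range(r)])         # column of B = A1^{-1} A2
    v = [-x for x in b_j] + [1 if i == j else 0 for i in range(m - r)]
    basis.append(v)                                        # T (-b_j ; e_j) with T = I
    assert all(sum(A[i][k] * v[k] for k in range(m)) == 0 for i in range(r))

print(basis)     # m - r = 2 linearly independent solutions of A x = 0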
Theorem 4. Let (3.1) be a nonhomogeneous system of linear equations and P a permutation matrix such that the last element of the vector b = P A_1^{-1} c_1 is not equal to 0. Let B = P A_1^{-1} A_2, let J ⊂ {1, ..., m−r} be the set of numbers of the columns of the matrix B with zero elements in the last row, and let

U = I_{m-r+1} + Σ_{j∈J} E_{1,j+1},   W = diag(I_{r-1}, U),   Q = diag(P, I_{m-r}),   V = QW,

I_{m-r+1} = (e'_0, e'_1, ..., e'_{m-r}), where e'_j ∈ R^{m-r+1} are the columns of the unit matrix, and

(A', a_0, a_1, ..., a_{m-r}) = P(A_1, A_2)V,   where A' ∈ R^{r×(r-1)}, a_j ∈ R^r.

Then the systems

(A', a_j) x_j = P c_1,   j = 0, 1, ..., m − r,   (3.4)

are determined. The solutions of these systems, x_j = ( x'_j ; ξ_j ), x'_j ∈ K^{r-1}, ξ_j ∈ K, define the basis set of solutions of (3.1):

T V ( x'_j ; ξ_j e'_j ),   j = 0, 1, ..., m − r.   (3.5)

Proof. Denote y = V^{-1}T^{-1}x. By the condition we have P(A_1, A_2)Vy = Pc_1 and P(A_1, A_2)V = (A', a_0, a_1, ..., a_{m-r}). If we search for the solution in the form y = ( x'_j ; ξ_j e'_j ), then we obtain the systems (3.4).

Let us show that the systems (3.4) are determined and that ξ_j ≠ 0. Multiply them by P A_1^{-1} P from the left. Since P = P^{-1}, we get P A_1^{-1} P (A', a_j) x_j = P A_1^{-1} c_1, j = 0, 1, ..., m − r. As b = P A_1^{-1} c_1 and P A_1^{-1} P (A', a_0) = I_r, the first of the systems (3.4) takes the form I_r x_0 = b. Denote I_r = (I', e), e ∈ R^r. We see that P A_1^{-1} P A' = I' and P A_1^{-1} P a_0 = e. Since P A_1^{-1} P (a_0, a_1, ..., a_{m-r}) = P A_1^{-1} P (a_0, P A_2) U = (e, B)U, the vectors d_j = P A_1^{-1} P a_j, j = 1, ..., m − r, are the last m − r columns of the matrix (e, B)U. As U = I_{m-r+1} + Σ_{j∈J} E_{1,j+1} and J consists of the numbers of the columns of the matrix B with zero elements in the last row, the last elements of all columns d_j of the matrix (e, B)U are nonzero. The systems (3.4) thus take the form

(I', d_j) x_j = b,   j = 0, 1, ..., m − r,   (3.6)

with det(I', d_j) ≠ 0. Since the last element of the vector b is nonzero, the solutions x_j = ( x'_j ; ξ_j ) of the systems (3.6) are such that ξ_j ≠ 0. So the vectors ξ_j e'_j, j = 1, ..., m − r, are linearly independent, and therefore the vectors (3.5) are linearly independent.
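The construction of Theorem 4 simplifies considerably when the last entry of b = A_1^{-1}c_1 is already nonzero and no column of B ends in zero (so P, U, Q, V are identity matrices). The sketch below is illustrative only, under these simplifying assumptions and with S = T = I; it produces the m − r + 1 basis solutions in the explicit form that Corollary 2 below gives, and checks Ax = c for each of them.

# Basis set of solutions of a nonhomogeneous system A x = c (Theorem 4 /
# Corollary 2) in the simplest situation: the leading r x r block A1 is
# nonsingular, the last entry of b = A1^{-1} c is nonzero, and no column of
# B = A1^{-1} A2 ends in zero, so all permutation matrices are identities.
from fractions import Fraction

def solve(A1, rhs):
    n = len(A1)
    M = [[Fraction(A1[i][j]) for j in range(n)] + [Fraction(rhs[i])] for i in range(n)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] != 0)
        M[k], M[piv] = M[piv], M[k]
        for i in range(n):
            if i != k:
                f = M[i][k] / M[k][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [M[i][n] / M[i][i] for i in range(n)]

A = [[1, 2, 1, 3],
     [0, 1, 4, 1]]
c = [5, 7]
r, m = 2, 4
A1 = [row[:r] for row in A]
A2 = [row[r:] for row in A]

b = solve(A1, c)                       # b = A1^{-1} c1, last entry assumed nonzero
B = [solve(A1, [A2[i][j] for i in range(r)]) for j in range(m - r)]   # columns of A1^{-1} A2
beta = b[-1]

basis = [b + [0] * (m - r)]            # the solution (b ; 0)
for j, b_j in enumerate(B):
    xi = beta / b_j[-1]                # assumes the last entry of b_j is nonzero
    head = [x - xi * y for x, y in zip(b, b_j)]      # b - xi * b_j; its last entry is 0
    tail = [xi if i == j else 0 for i in range(m - r)]
    basis.append(head + tail)

for v in basis:                        # m - r + 1 solutions of A x = c
    assert all(sum(A[i][k] * v[k] for k in range(m)) == c[i] for i in range(r))
print(basis)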
Corollary 2. Let B = (b_1, b_2, ..., b_{m-r}), b = ( b' ; β ), b_j = ( b'_j ; β_j ); ξ_j = β/β_j, f_j = e'_j for j ∉ J; ξ_j = β, f_j = e'_j + e'_0 for j ∈ J. Then

T Q ( b' ; β e'_0 ),   T Q ( b' − ξ_j b'_j ; ξ_j f_j ),   j = 1, ..., m − r,   (3.7)

is the basis set of solutions of (3.1).

Proof. Substitute the solutions of (3.6) into (3.5), taking into account that, according to the construction, d_j = ( b'_j ; δ_j ) with δ_j = 1 for j ∈ J and δ_j = β_j for j ∉ J; then multiply by the matrix W.

Corollaries 1 and 2 allow us to present an algorithm constructing the basis set of solutions of a system of linear equations over an arbitrary commutative domain.

Now we consider the solution of a linear system in the commutative domain itself. Note that this is not a problem for homogeneous systems, since any solution in the field of fractions, multiplied by a suitable factor, gives a solution in the domain. So further we shall consider only nonhomogeneous systems.

Let α be a nonempty finite subset of R. The intersection of the principal ideals generated by the elements of the set α is the principal ideal ∩_{p∈α}(p). We denote by LCM(α) the generator of this ideal, i.e. (LCM(α)) = ∩_{p∈α}(p). Let K be a field of fractions of R, x a vector of the space K^m, and α_x ⊂ R the set of denominators of the components of x.
Definition. A denominator of a vector x is χ = LCM(α_x). The vector x will be written as a product x = x̄ χ^{-1}, x̄ ∈ R^m, χ ∈ R. The denominator of a vector x is denoted by DEN(x).

Let M = { x | x ∈ K^m, Ax = c } be the set of all solutions of (3.1) in K^m. Denote by M_D = M ∩ R^m the set of solutions lying in the module R^m. We call M_D the set of Diophantine solutions.

For x = (x_1, x_2, ..., x_s) and y = (y_1, y_2, ..., y_s) we denote ⟨x, y⟩ = Σ_{i=1}^{s} x_i y_i.
Theorem 5. Let { x_i = x̄_i χ_i^{-1} | i = 1, 2, ..., h } be a basis set of solutions of the nonhomogeneous system (3.1), x̄ = (x̄_1, x̄_2, ..., x̄_h), χ = (χ_1, χ_2, ..., χ_h). Then

M = { ⟨x̄, q⟩ / ⟨χ, q⟩  |  q ∈ R^h, ⟨χ, q⟩ ≠ 0 }.
Proof. Let u = (u_1, u_2, ..., u_h) ∈ K^h be such that Σ_{i=1}^{h} u_i = 1. Further we use the notation: s_i = u_i χ_i^{-1}; g is the LCM of the denominators of all the numbers {s_i | i = 1, 2, ..., h}; q_i = g s_i ∈ R, i = 1, 2, ..., h; q = (q_1, q_2, ..., q_h).

Let z = Σ_{i=1}^{h} u_i x_i with Σ_{i=1}^{h} u_i = 1, so that z ∈ M. Then we obtain

z = Σ_{i=1}^{h} u_i x_i = Σ_{i=1}^{h} x̄_i χ_i^{-1} u_i = Σ_{i=1}^{h} x̄_i s_i = ⟨x̄, q⟩ g^{-1},   (3.8)

1 = Σ_{i=1}^{h} u_i = Σ_{i=1}^{h} χ_i χ_i^{-1} u_i = Σ_{i=1}^{h} χ_i s_i = ⟨χ, q⟩ g^{-1}.   (3.9)

Hence ⟨x̄, q⟩ / ⟨χ, q⟩ = z ∈ M. Conversely, if q ∈ R^h, ⟨χ, q⟩ = g ≠ 0 and z = ⟨x̄, q⟩ / ⟨χ, q⟩, then, denoting u_i = χ_i q_i g^{-1}, we obtain z = Σ_{i=1}^{h} u_i x_i and Σ_{i=1}^{h} u_i = 1. Therefore z ∈ M.

Corollary 3. The set I_A = { DEN(x) | Ax = c, x ∈ K^m } ∪ {0} is an ideal in R.

Corollary 4. M_D ≠ ∅ if and only if I_A = R.

Corollary 5. A system (3.1) has Diophantine solutions if the ideal generated by the denominators of a basis set of solutions is the unit ideal.
Corollary 6.
Let the ideal generated by the denominators of the basis solutions x_i = x̄_i χ_i^{-1}, i = 1, 2, ..., h, of the system (3.1) be the unit ideal, and let χ = (χ_1, χ_2, ..., χ_h). Then there exists a nonzero vector q = (q_1, q_2, ..., q_h) ∈ R^h such that ⟨χ, q⟩ = 1, and ⟨x̄, q⟩ is a Diophantine solution of (3.1). If in addition q_s ≠ 0, then x_1, ..., x_{s-1}, ⟨x̄, q⟩, x_{s+1}, ..., x_h is a basis set of solutions for this system.
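Over R = Z the vector q of Corollary 6 can be obtained from an extended gcd of the denominators. The following sketch is illustrative (the sample system and its two basis solutions are hypothetical, not taken from the paper): it turns two rational basis solutions with coprime denominators into an integer solution.

# Corollary 6 over Z: from rational basis solutions x_i = xbar_i / chi_i whose
# denominators generate the unit ideal, an integer (Diophantine) solution is
# <xbar, q> where <chi, q> = 1.  The toy system below is hypothetical.
from fractions import Fraction

def ext_gcd(a, b):
    """Return (g, u, v) with u*a + v*b = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = ext_gcd(b, a % b)
    return (g, v, u - (a // b) * v)

A = [[2, 3]]                       # the system 2*x1 + 3*x2 = 1
c = [1]

# two rational basis solutions (m - r + 1 = 2 of them), written as xbar / chi
xbar = [[1, 0], [0, 1]]
chi = [2, 3]
for xb, ch in zip(xbar, chi):      # sanity check: A (xb / ch) = c
    assert all(sum(Fraction(A[i][k], ch) * xb[k] for k in range(2)) == c[i]
               for i in range(len(A)))

g, q1, q2 = ext_gcd(chi[0], chi[1])
assert g == 1                      # the denominators generate the unit ideal
q = [q1, q2]                       # <chi, q> = 1

x = [sum(q[i] * xbar[i][k] for i in range(2)) for k in range(2)]   # <xbar, q>
assert all(sum(A[i][k] * x[k] for k in range(2)) == c[i] for i in range(len(A)))
print(x)                           # an integer solution, here [-1, 1]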
Definition. We call a Diophantine basis of solutions for a system Ax = c a basis set of solutions for this system that wholly belongs to R^m. In other words, a Diophantine basis consists of m + 1 − r linearly independent solutions of a nonhomogeneous system (m − r for a homogeneous one) that belong to R^m.
Corollary 7. Let x_i = x̄_i χ_i^{-1}, i = 1, 2, ..., h, be a basis set of solutions of (3.1) and let χ_1 = 1. Then the set x_1, x̄_i − x_1(χ_i − 1), i = 2, 3, ..., h, is a Diophantine basis of solutions for this system.

Proof. Since all the z_i = x̄_i − x_1(χ_i − 1), i = 2, 3, ..., h, belong to R^m and are linearly independent together with x_1, it remains to show that the z_i are solutions of (3.1). As χ_1 = 1, we have

z_i = x̄_i − x_1(χ_i − 1) = x_1 · (1 − χ_i) + x_i · χ_i,   and   (1 − χ_i) + χ_i = 1,

so z_i is an affine combination of the solutions x_1 and x_i; therefore z_i, i = 2, 3, ..., h, are solutions of the system Ax = c.

The corollaries of Theorem 5 allow us to present a randomized algorithm for computing a Diophantine basis of the solutions of a system. If the ideal generated by the denominators of a basis set of solutions is the unit ideal, then we compute a Diophantine basis. Otherwise we must choose new permutation matrices S and T, compute a new rational basis, and so on; thus we obtain a kind of iterative Diophantine solver. As is proved in [20], the expected number of rational solutions necessary for obtaining a Diophantine solution is O(log n + log log ‖(A, c)‖) for the ring Z. One of the questions here is how to obtain such an estimate for other rings. As we get m − n + 1 rational solutions in one iteration when rank A = n, the expected number of iterations for the ring Z is O((m − n + 1)^{-1}(log n + log log ‖(A, c)‖)).

Theorems 3 and 4 reduce the problem of obtaining the basis solutions of the system (3.1) to several problems of solving determined systems. Computing a basis set of solutions of a nonhomogeneous system requires solving m − r + 1 systems with an r × r matrix of coefficients (the case of a homogeneous system requires solving m − r systems). To solve a determined system one may use p-adic lifting.

Recall the general scheme of p-adic methods. A suitable prime element p of the ring R is chosen; this element must not divide the determinant of the coefficient matrix. For example, in the ring Z this choice may turn out to be unsuccessful with probability no more than 1/p. If the check shows that the solution is incorrect, then another prime element p is chosen. The ring of residues modulo the prime p is a field, and the solution in this field may be found, for example, by the Gauss method. Upper bounds for the numerators and denominators of the system solutions are calculated by means of the Hadamard inequality; from them the bound p^k — the boundary of the lifting — is obtained. Then the solution is lifted modulo p up to p^k and a rational solution is reconstructed.

For solving one determined system we apply the algorithm given by Dixon [10], which uses linear p-adic lifting. Its complexity is O(n^3 (log n + log ‖A‖ + log p)^2) bit operations. So the algorithm for obtaining all the rational solutions of the system Ax = c, when rank A = n, has complexity O(n^2 m (m − n + 1)(log n + log ‖A‖ + log p)^2) bit operations.

One iterative step of the computation of a Diophantine basis consists in computing a rational basis and a Diophantine solution, due to the corollaries of Theorem 5. The complexity of the first is O(m n^{β-1}), of the second O(m(m − r) + C_G) operations in the ring R, where C_G is the number of operations necessary for an expansion of the unit in the m − r generators of the unit ideal in the ring R. Such an expansion of the unit may be obtained, for example, with the help of the algorithm for computing a Gröbner basis; the evaluation of the complexity of such an algorithm in general is not the subject of this paper. Note that for Euclidean rings C_G is the number of operations of the Euclidean algorithm computing the GCD of m − r elements.

More precise estimates may be obtained for the algorithms which use linear p-adic lifting. It is known that the complexity of the Dixon algorithm [20] in the case of the integers Z, for n = m and det A ≠ 0, is bounded by D_Z = O(n^3 (log n + log ‖A‖ + log p)^2 + n^2 (log ‖c‖)^2) bit operations.
In the case of the ring of polynomials F[x] over a field F this complexity is bounded by D_{F[x]} = O(n^3 (‖A‖ + ‖p‖)^2 + n^2 ‖c‖^2) operations in the field F. The function ‖·‖ has the following meaning: ‖α‖ = |α| for α ∈ Z, ‖α‖ = deg α for α ∈ F[x]; for a matrix A, ‖A‖ = max_{α∈A} ‖α‖.

Computing a basis from m − r solutions needs not more than C_Z = (m − r)D_Z and C_{F[x]} = (m − r)D_{F[x]} operations in the respective cases. The average number of matrix inversions for computing one rational solution is now m − r times less than in the algorithm [20]. The complexity of computing a Diophantine basis for Z and F[x] is the same as the complexity of computing a rational basis; the expected number of iterations is O((m − n + 1)^{-1}(log n + log log ‖(A, c)‖)).
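To make the p-adic scheme recalled above concrete, here is a small illustrative sketch of linear p-adic lifting in the spirit of Dixon [10] over Z. For brevity the example system has an integer solution, so the answer is read off from the symmetric residue mod p^k instead of performing full rational reconstruction; it is not the paper's algorithm.

# Linear p-adic lifting in the spirit of Dixon [10], over Z (sketch).
# The example system has an integer solution, so the answer is recovered
# from the symmetric residue mod p^k (no rational reconstruction).

def solve_mod(A, rhs, p):
    """Solve A y = rhs over the field Z/pZ (A assumed invertible mod p)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [rhs[i] % p] for i in range(n)]
    for k in range(n):
        piv = next(r for r in range(k, n) if M[r][k] % p != 0)
        M[k], M[piv] = M[piv], M[k]
        inv = pow(M[k][k], -1, p)          # modular inverse (Python 3.8+)
        M[k] = [v * inv % p for v in M[k]]
        for i in range(n):
            if i != k and M[i][k]:
                M[i] = [(a - M[i][k] * b) % p for a, b in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]

def dixon_lift(A, c, p, steps):
    """Return x with A x = c (mod p^steps), built digit by digit."""
    n = len(A)
    x = [0] * n
    r = c[:]                               # residual; divisible by p^i at step i
    for i in range(steps):
        d = solve_mod(A, r, p)             # next p-adic digit vector
        x = [xj + dj * p ** i for xj, dj in zip(x, d)]
        r = [(rj - sum(A[j][k] * d[k] for k in range(n))) // p
             for j, rj in enumerate(r)]    # exact division by p
    return x

A = [[3, 1], [4, 2]]                       # det = 2, so p = 5 is a suitable prime
c = [5, 8]                                 # exact solution is x = (1, 2)
p, steps = 5, 6
x_mod = dixon_lift(A, c, p, steps)
q = p ** steps
x = [v if v <= q // 2 else v - q for v in x_mod]   # symmetric residue
print(x)                                   # -> [1, 2]
assert all(sum(A[i][k] * x[k] for k in range(2)) == c[i] for i in range(2))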
References

1. Abdeljaoued, J.: Berkowitz Algorithm, Maple and computing the characteristic polynomial in an arbitrary commutative ring. Computer Algebra MapleTech, No. 3, Birkhauser, Boston (1997)
2. Abdeljaoued, J., Malaschonok, G. I.: Efficient Algorithms for Computing the Characteristic Polynomial in a Domain. Journal of Pure and Applied Algebra (to appear)
3. Akritas, A. G., Akritas, E. K., Malaschonok, G. I.: Various proofs of Sylvester's (determinant) identity. Mathematics and Computers in Simulation (1996) 585-593
4. Akritas, A. G., Akritas, E. K., Malaschonok, G. I.: Matrix computation of subresultant polynomial remainder sequences in integral domains. Reliable Computing, No. 4 (1995) 375-381
5. Bareiss, E. H.: Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination. Math. Comp. 22 (103) (1968) 565-578
6. Bareiss, E. H.: Computational solutions of matrix problems over an integral domain. J. Inst. Maths Applics (1972) 68-104
7. Chistov, A. L.: Fast parallel calculation of the rank of matrices over a field of arbitrary characteristic. Proc. FCT '85, Springer Lecture Notes in Computer Science (1985) 147-150
8. Collins, G. E., Encarnacion, M. J.: Efficient rational number reconstruction. Journal of Symbolic Computation (1995) 287-297
9. Coppersmith, D., Winograd, S.: Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation (1990) 251-280
10. Dixon, J.: Exact solution of linear equations using p-adic expansions. Numer. Math. (1982) 137-141
11. Dodgson, C. L.: Condensation of determinants, being a new and brief method for computing their arithmetical values. Proc. Royal Soc. Lond. A 15 (1866) 150-155
12. Faddeev, D. K., Faddeeva, V. N.: Computational Methods of Linear Algebra. W. H. Freeman & Co., San Francisco (1963)
13. Kaltofen, E.: On Computing Determinants of Matrices Without Divisions. In: Wang, P. S. (ed.): Proc. Internat. Symp. Symbolic Algebraic Comput. ISSAC '92. ACM Press (1992) 342-349
14. Malaschonok, G. I.: Solution of a system of linear equations in an integral domain. USSR Journal of Computational Mathematics and Mathematical Physics (1983) 1497-1500
15. Malaschonok, G. I.: System of linear equations over a commutative ring. Academy of Sciences of Ukraine, Lvov (1986) (in Russian)
16. Malaschonok, G. I.: On the solution of a linear equation system over a commutative ring. Math. Notes of the Acad. Sci. USSR, No. 4 (1987) 543-548
17. Malaschonok, G. I.: Algorithms for the solution of systems of linear equations in commutative rings. In: Mora, T., Traverso, C. (eds.): Effective Methods in Algebraic Geometry. Progress in Mathematics 94, Birkhauser (1991) 289-298
18. Malaschonok, G. I.: Algorithms for computing determinants in commutative rings. Diskretnaya Matematika, No. 4 (1995) 68-76
19. Malaschonok, G. I.: Recursive Method for the Solution of Systems of Linear Equations. In: Sydow, A. (ed.): Computational Mathematics (Proceedings of the 15th IMACS World Congress, Vol. I, Berlin, August 1997). Wissenschaft & Technik Verlag, Berlin (1997) 475-480
20. Mulders, T., Storjohann, A.: Diophantine Linear System Solving. In: Proceedings of ISSAC '99: ACM International Symposium on Symbolic and Algebraic Computation, July 1999, Vancouver, Canada
21. Strassen, V.: Gaussian Elimination is not optimal. Numerische Mathematik (1969) 354-356
22. Sasaki, T., Murao, H.: Efficient Gaussian elimination method for symbolic determinants and linear systems. ACM Trans. Math. Software 8 (1982)