Resultants over principal Artinian rings
arXiv [cs.SC]
CLAUS FIEKER, TOMMY HOFMANN, AND CARLO SIRCANA
Abstract.
The resultant of two univariate polynomials is an invariant of great importance in commutative algebra and is widely used in computer algebra systems. Here we present an algorithm to compute it over Artinian principal rings with a modified version of the Euclidean algorithm. Using the same strategy, we show how the reduced resultant and a pair of Bézout coefficients can be computed. Particular attention is devoted to the special case of Z/nZ, where we perform a detailed analysis of the asymptotic cost of the algorithm. Finally, we illustrate how the algorithms can be exploited to improve ideal arithmetic in number fields and polynomial arithmetic over p-adic fields.

1. Introduction
The computation of the resultant of two univariate polynomials is an important task in computer algebra and it is used for various purposes in algebraic number theory and commutative algebra. It is well known that, over an effective field F, the resultant of two polynomials of degree at most d can be computed in O(M(d) log d) ([vzGG03, Section 11.2]), where M(d) is the number of operations required for the multiplication of polynomials of degree at most d. Whenever the coefficient ring is not a field (or an integral domain), the method to compute the resultant is given directly by the definition, via the determinant of the Sylvester matrix of the polynomials; thus the problem of determining the resultant reduces to a problem of linear algebra, which has a worse complexity.

In this paper, we focus on polynomials over principal Artinian rings, and we present an algorithm to compute the resultant of two polynomials. In particular, the method applies to polynomials over quotients of Dedekind domains, e.g. Z/nZ or quotients of valuation rings in p-adic fields, and can be used in the computation of the minimum and the norm of an ideal in the ring of integers of a number field, provided we have a 2-element presentation of the ideal.

The properties of principal Artinian rings play a crucial role in our algorithm: we recall them in Section 2 and we define the basic operations that we will need in the algorithm. The algorithm we present does not involve any matrix operations but performs only operations on the polynomials in order to get the result. Inspired by the subresultant algorithm, we define a division with remainder between polynomials in the case the divisor is primitive, which leads us immediately to an adjusted version of the Euclidean algorithm. As a result, we obtain algorithms to compute the reduced resultant, a pair of Bézout coefficients and the resultant of two polynomials.
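The direct approach via the determinant of the Sylvester matrix can be sketched as follows. This is only a minimal illustration over Z/nZ (all function names are ours): the determinant is computed exactly over Z with the fraction-free Bareiss elimination and then reduced modulo n.

```python
def sylvester(f, g):
    # f, g: integer coefficient lists, lowest degree first
    n, m = len(f) - 1, len(g) - 1
    fd, gd = f[::-1], g[::-1]          # descending coefficients
    rows = [[0] * i + fd + [0] * (m - 1 - i) for i in range(m)]
    rows += [[0] * i + gd + [0] * (n - 1 - i) for i in range(n)]
    return rows

def det_bareiss(M):
    # fraction-free Gaussian elimination; exact over the integers
    M = [row[:] for row in M]
    size = len(M)
    sign, prev = 1, 1
    for k in range(size - 1):
        if M[k][k] == 0:               # pivot by row swap
            for i in range(k + 1, size):
                if M[i][k] != 0:
                    M[k], M[i] = M[i], M[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, size):
            for j in range(k + 1, size):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return sign * M[-1][-1]

def resultant_mod(f, g, n):
    # determinant of the Sylvester matrix, reduced into Z/nZ
    return det_bareiss(sylvester(f, g)) % n
```

For example, res(x² + 1, x + 2) = 5, so `resultant_mod([1, 0, 1], [2, 1], 3)` yields 2. This sketch avoids the complexity analysis entirely; it only makes the "linear algebra" baseline concrete.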
The runtime of the algorithms depends heavily on the properties of the base ring, in particular on the number F of maximal ideals and on their multiplicity. The asymptotic cost of our method is O(d M(d) log(E) F), improving upon the asymptotic complexity of the direct approach, which consists of a computation of the echelon normal form of the Sylvester matrix. In the special case of Z/nZ, we present a detailed analysis of the asymptotic cost of our method in terms of bit operations, yielding a bit complexity in O(d log(d) log log(d) log(n) log log(n)). Finally, we illustrate an algorithm to compute the greatest common divisor of polynomials over a p-adic field based on the ideas developed in the previous sections.

Date: April 8, 2020.

Acknowledgments.
The authors were supported by Project II.2 of SFB-TRR 195 ‘Symbolic Tools in Mathematics and their Application’ of the German Research Foundation (DFG).

2. Preliminaries
Our aim is to describe asymptotically fast algorithms for the computation of (reduced) resultants and Bézout coefficients. In this section we describe the complexity model that we use to quantify the runtime of the algorithms.

2.1. Basic operations and complexity.
Let R be a principal ideal ring. By M : Z_{>0} → R_{>0} we denote a multiplication time for R[x], that is, two polynomials in R[x] of degree at most n can be multiplied using at most M(n) many additions and multiplications in R. We will express the cost of our algorithms in terms of the number of basic operations of R, by which we mean any of the following operations:
(1) Given a, b ∈ R, return a + b, a − b, a · b and true or false depending on whether a = b or not.
(2) Given a, b ∈ R, decide if a | b and return b/a in case it exists.
(3) Given a ∈ R, return true or false depending on whether a is a unit or not.
(4) Given a, b ∈ R, return g, s, t ∈ R such that (g) = (a, b) and g = sa + tb.
(5) Given two ideals I, J ⊆ R, return a principal generator of the colon ideal (I : J) = { r ∈ R | rJ ⊆ I }.
A common strategy of the algorithms we describe is the reduction to computations in non-trivial quotient rings. The following remark shows that working in quotients is as efficient as working in the original ring R.

Remark 2.1.
Let r ∈ R and consider the quotient ring R̄ = R/(r). We will represent elements in R̄ by a representative, that is, by an element of R. It is straightforward to see that a basic operation in R̄ can be done using at most O(1) basic operations in R:
(1) Computing ā + b̄, ā − b̄, ā · b̄ is trivial. In order to decide if ā = b̄, it is enough to test whether a − b ∈ (r).
(2) Deciding if ā divides b̄ in R/(r) is equivalent to the computation of a principal generator d of (a, r) ⊆ R and testing if d divides b.
(3) Deciding if an element ā is a unit is equivalent to testing (a, r) = R.
(4) Given ā, b̄ ∈ R̄, a principal generator of (ā, b̄) is given by the image of a principal generator g of (a, b). If sa + tb = g, then s̄ and t̄ are such that ḡ = s̄ā + t̄b̄.
(5) Given two ideals Ī, J̄ of R̄, the colon ideal is generated by the principal generator of ((I + (r)) : (J + (r))).

2.2. Principal Artinian rings.
The rings we will be most interested in are principal Artinian rings, that is, unitary commutative rings whose ideals are principal and which satisfy the descending chain condition. These have been studied extensively and their structure is well known, see [AM69, McL73, Bou06]. Prominent examples of these rings include non-trivial quotient rings Z/nZ, F_q[x]/(f) and non-trivial quotients of residually finite Dedekind domains. By [Bou06, chap. IV], R has only finitely many maximal ideals m_1, . . . , m_r and there exist minimal positive integers e_1, . . . , e_r ∈ Z_{>0} such that the canonical map R → ∏_{1 ≤ i ≤ r} R/m_i^{e_i} is an isomorphism of rings. We denote by π_i : R → R/m_i^{e_i} the canonical projection onto the i-th component. For every index 1 ≤ i ≤ r, the ring R/m_i^{e_i} is a local principal Artinian ring. We call e_i the nilpotency index of m_i and denote by E = max_{1 ≤ i ≤ r} e_i the maximum of the nilpotency indices of the maximal ideals of R. Note that E is equal to the nilpotency index of the nilradical √(0) = { r ∈ R | r is nilpotent } of R. We will keep this notation for the rest of the paper whenever we work with a principal Artinian ring.

When investigating polynomials over R, the following well-known characterization of nilpotent elements will be very helpful, see [Bou06, chap. II].

Lemma 2.2.
An element a ∈ R is nilpotent if and only if a ∈ m_1 ∩ · · · ∩ m_r = m_1 · · · m_r.

As R is a principal ideal ring, given elements a_1, . . . , a_n of R, there exists an element g generating the ideal (a_1, . . . , a_n). We call g a greatest common divisor of a_1, . . . , a_n. Such an element is uniquely defined up to multiplication by units. By abuse of notation we will denote by gcd(a_1, . . . , a_n) any such element. Similarly, lcm(a_1, . . . , a_n) denotes a least common multiple of a_1, . . . , a_n, that is, a generator of the intersection of the ideals generated by each a_i. As R is in general not a domain, quotients of elements are not well-defined. To keep the notation lightweight, for two elements a, b ∈ R with a | b we will still denote by b/a ∈ R an element c ∈ R with ca = b (the element c is uniquely defined up to addition by an element of the annihilator Ann_R(a) = { s ∈ R | sa = 0 }).

The strategy for the (reduced) resultant algorithm will be to split the ring as soon as we encounter a non-favorable leading coefficient of a polynomial. The (efficient) splitting of the ring is based on the following simple observation.

Proposition 2.3.
Let a ∈ R be a zero-divisor which is not nilpotent. Using O(log(E)) many basic operations in R, we can find an idempotent element e ∈ R such that
(1) the canonical morphism R → R/(e) × R/(1 − e) is an isomorphism with inverse (a, b) ↦ (1 − e)a + eb;
(2) the image of a in R/(e) is nilpotent and the image of a in R/(1 − e) is invertible.

Proof.
For i ∈ Z_{≥0} consider the ideal I_i = (a^i) of R. Since R is Artinian, there exists n ∈ Z_{≥0} such that I_n = I_{n+1}. In particular I_n^2 = I_n, that is, I_n is idempotent. (Note that we can always take n = E.) Consider c = a^n. Since c is a generator of the idempotent ideal I_n, we can find b ∈ R such that c^2 · b = c. Then e = c · b satisfies e ∈ I_n, (1 − e)I_n = 0 and e^2 = e, and therefore (1) and (2) follow. The cost of finding e is the computation of the power of a, one multiplication and one division. □

Definition 2.4.
Let a ∈ R be an element. We say that a is a splitting element if it is a non-nilpotent zero divisor.

3. Polynomials over principal Artinian rings
We now discuss theoretical and practical aspects of polynomials over a principal Artinian ring R. Note that due to the presence of zero-divisors the ring structure of R[x] is more intricate than in the case of integral domains. For example, it is no longer true that every invertible element in R[x] is a non-zero constant or that every polynomial can be written as the product of its content and a primitive polynomial. In this section, we show how to overcome these difficulties and describe asymptotically fast algorithms to compute inverses, modular inverses and division with remainder (in case it exists).

3.1. Basic properties.
We recall some theoretical properties of polynomials in R[x]. For the sake of completeness, we include the short proofs.

Definition 3.1. Let f = Σ_{i=0}^{d} a_i x^i ∈ R[x] be a polynomial. We define the content ideal C(f) of f to be the ideal (a_0, . . . , a_d) of R generated by the coefficients of f. We say that f is primitive if C(f) = R. By abuse of notation, we will often denote by c(f) a generator of this ideal.

Lemma 3.2.
Let f, g ∈ R[x] be primitive polynomials. Then the product fg is primitive.

Proof.
Assume that the product fg is not primitive and let C = C(fg) be its content ideal. Then C is contained in a maximal ideal m of R, so fg ≡ 0 (mod m[x]). By assumption we have f, g ≢ 0 (mod m[x]), yielding a contradiction since (R/m)[x] is an integral domain. □

However, in general, due to the presence of idempotent elements, it is not true that if we write a polynomial f ∈ R[x] as f = c(f)·f̃ for some f̃ ∈ R[x], then f̃ is primitive.

Example 3.3.
Consider the non-primitive polynomial f = 4x + 8 over Z/12Z, for which we clearly have C(f) = (4) and c(f) = 4. As 4f = f, we can set f̃ = f and write f = c(f)·f̃, although f̃ is not primitive.

Nevertheless, in case the content of f is nilpotent, we can say something about the content of f̃.

Lemma 3.4.
Let f ∈ R[x] be a non-zero polynomial with nilpotent content. Let f̃ ∈ R[x] be any polynomial such that f = c(f)·f̃. Then c(f̃) is not nilpotent.

Proof.
Assume by contradiction that c(f̃) is nilpotent. It holds that C(f) = C(c(f)·f̃) = C(f)C(f̃), and therefore, iterating, C(f) = C(f)C(f̃)^k for every k ∈ Z_{≥0}. Since C(f̃) is nilpotent, this yields C(f) = 0, contradicting f ≠ 0. □

Next we give the well-known characterization of units and nilpotent elements of R[x]. We include a proof, since it gives a bound on the degree of the inverse of an invertible polynomial. Recall that E is the nilpotency index of the nilradical of R.

Proposition 3.5.
Let f ∈ R[x] be a polynomial.
(1) The polynomial f is nilpotent if and only if its content is nilpotent.
(2) The polynomial f is a unit if and only if the constant term f_0 of f is a unit in R and f − f_0 is nilpotent.
(3) If f is invertible, then the degree of the inverse f^{−1} is bounded by deg(f) · E.

Proof. (1): Assume that c(f) is nilpotent. Since c(f) divides f in R[x], also f is nilpotent. Vice versa, if f is nilpotent, the projection of f to a residue field R/p is nilpotent too, so it must be zero. Hence all the coefficients of f are in the intersection of all the maximal ideals of R, which coincides with the set of nilpotent elements by Lemma 2.2. As the content is then generated by nilpotent elements, it is nilpotent.
(2): Assume that the constant term r is a unit of R and that g = f − r is nilpotent. Without loss of generality, r = 1, since being a unit or nilpotent in R[x] is invariant under multiplication with elements from R^× ⊆ R[x]^×. Since g^k = 0 for a sufficiently large k ∈ Z_{≥0}, we get (1 + g) Σ_{i=0}^{k−1} (−1)^i g^i = 1 − (−g)^k = 1, showing that f = 1 + g indeed is a unit. Vice versa, assume that f is a unit. Then for every prime ideal p of R the image of f in (R/p)[x] is a unit. In particular, as R/p is a domain, all non-constant coefficients of f are contained in p. Since this holds for all prime ideals p, by Lemma 2.2 the non-constant coefficients are nilpotent.
(3): If g ∈ R[x] is nilpotent, then g^E = 0. Thus the claim follows as in the proof of part (2). □

Remark 3.6.
The bound from Proposition 3.5 is sharp. If p ∈ Z is a prime, then the inverse of the polynomial 1 − px over Z/p^k Z is Σ_{i=0}^{k−1} (px)^i.

Proposition 3.5 allows us to use classical Hensel lifting (see [vzGG03, Algorithm 9.3]) to compute the inverse of units in R[x]. For the sake of completeness, we include the algorithm.

Algorithm 3.7.
Given a unit f ∈ R[x], the following steps return the inverse f^{−1} ∈ R[x].
(1) Set i = 0 and define v_0 as the inverse of the constant term of f.
(2) While 2^i ≤ E · deg(f):
(a) Set v_{i+1} = v_i (2 − v_i f) (mod x^{2^{i+1}}).
(b) Increase i.
(3) Return v_i.

Proposition 3.8.
Algorithm 3.7 is correct and computes the inverse of f ∈ R[x] using O(M(E · deg(f))) basic operations in R.

Proof.
See [vzGG03, Theorem 9.4]. □
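Over R = Z/nZ, Algorithm 3.7 can be sketched as follows. This is only an illustration under our own conventions: a naive quadratic product stands in for fast multiplication, the helper names are ours, and the parameter E is the nilpotency index of the nilradical of Z/nZ, so that by Proposition 3.5 the inverse has at most E·deg(f) + 1 coefficients.

```python
def polymul(a, b, n):
    # naive product of two polynomials over Z/nZ (ascending coefficients)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % n
    return out

def trim(a):
    # drop trailing zero coefficients
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_inverse(f, n, E):
    # Newton iteration v <- v*(2 - v*f) mod x^(2^(i+1)), as in Algorithm 3.7
    coeffs = E * (len(f) - 1) + 1      # enough precision for the inverse
    v = [pow(f[0], -1, n)]             # inverse of the constant term
    prec = 1
    while prec < coeffs:
        prec *= 2
        w = polymul(v, f, n)[:prec]
        t = [(-c) % n for c in w]      # t = 2 - v*f, truncated
        t[0] = (2 - w[0]) % n
        v = polymul(v, t, n)[:prec]
    return trim(v[:coeffs])
```

With n = 27 and f = 1 − 3x (coefficient list `[1, 24]`), this recovers the inverse 1 + 3x + 9x² from Remark 3.6.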
3.2. Quotient and remainder.
We now consider the task of computing divisions with remainder.
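In the favourable case where the leading coefficient of the divisor is invertible, schoolbook long division already works verbatim. A minimal sketch over Z/nZ (function name and conventions ours; coefficient lists are ascending):

```python
def polydivmod(f, g, n):
    # long division of f by g over Z/nZ; requires the leading
    # coefficient of g to be invertible modulo n
    rem = f[:]
    inv_lead = pow(g[-1], -1, n)
    dq = len(f) - len(g)
    if dq < 0:
        return [0], rem
    q = [0] * (dq + 1)
    for k in range(dq, -1, -1):
        c = (rem[k + len(g) - 1] * inv_lead) % n
        q[k] = c
        for j, gj in enumerate(g):     # subtract c * x^k * g
            rem[k + j] = (rem[k + j] - c * gj) % n
    r = rem[:len(g) - 1] or [0]
    return q, r
```

For instance, dividing f = x³ + 2x + 1 by g = 2x + 1 over Z/9Z (where 2 is invertible, with 2⁻¹ = 5) gives q = 5x² + 2x and r = 1.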
Remark 3.9.
Let f, g ∈ R[x] be polynomials. If g has invertible leading coefficient, one can use well-known asymptotically fast algorithms to find q, r ∈ R[x] such that f = qg + r and deg(r) < deg(g) (see for example [vzGG03, Algorithm 9.5]). This can be done using O(M(d)) basic operations in R.

Things are more complicated when the leading coefficient of g is not invertible. Under certain hypotheses, we can factorize the polynomial as the product of a unit and a polynomial with invertible leading coefficient.

Proposition 3.10.
Assume that f = Σ_{i=0}^{d} f_i x^i ∈ R[x] is a primitive polynomial of degree d and that there exists 0 ≤ k ≤ d such that for k + 1 ≤ i ≤ d the coefficient f_i is nilpotent and f_k is invertible. Then there exist a unit u ∈ R[x]^× of degree deg(u) = d − k and a polynomial f̃ ∈ R[x] with invertible leading coefficient and deg(f̃) = k such that f = u · f̃. The polynomials f̃ and u can be computed using O(M(d) log(E)) basic operations in R.

Proof.
This is just an application of Hensel lifting. More precisely, consider the ideal a = (f_i | i = k + 1, . . . , d) of R. Since a is generated by nilpotent elements, it is nilpotent and a^E = (0). Consider the polynomials ū = 1 + Σ_{i=1}^{d−k} f_{k+i} x^i and f̄ = Σ_{i=0}^{k} f_i x^i in R[x]. By construction f ≡ ū · f̄ (mod a) and ū ≡ 1 (mod a). Thus ū, f̄ are coprime modulo a and 1 ≡ 1 · ū + 0 · f̄ (mod a). Furthermore, the leading coefficient of f̄ is invertible. Therefore, by means of Hensel lifting and since a is nilpotent, we can lift the factorization of f modulo a to a factorization of f in R[x]. The lifting can be done using [vzGG03, Algorithm 15.10]. As in our case f is not monic, the degree of the lift of ū will increase during the lifting process, but since the polynomial f̄ has invertible leading coefficient, the degree of u will be d − k. The cost of every step in the lifting process is O(M(d)), as it involves a constant number of additions, multiplications and divisions between polynomials of degree at most d. As the number of steps we need is at most log(E), the claim follows. □

Example 3.11. In Z/8Z[x], f = 2x^5 + x^3 + 1 satisfies the hypotheses of Proposition 3.10. The corresponding factorization is f = (2x^2 + 4x + 1) · (x^3 + 6x^2 + 4x + 1).

Proposition 3.12.
Let f, g ∈ R[x] be polynomials of degree at most d and assume that g is primitive. Then using at most O(M(E · d) · min(F, d)) basic operations in R we can find q, r ∈ R[x] such that f = qg + r and 0 ≤ deg(r) < deg(g), where F is the number of maximal ideals of R.

Proof.
Assume first that g satisfies the hypotheses of Proposition 3.10. Then we can compute a factorization g = u·g̃ with g̃ ∈ R[x] monic of degree ≤ d and u ∈ R[x]^× a unit. As g̃ is monic, we can perform division with remainder of f by g̃ and find q, r ∈ R[x] such that f = q g̃ + r and deg(r) < deg(g̃) ≤ deg(g). Multiplying q by the inverse of u, we get f = (q u^{−1}) g + r. By Remark 3.9 and Proposition 3.8, the division needs O(M(d)) and the inversion O(M(E · d)) basic operations respectively. As the degree of u^{−1} is bounded by deg(u)E ≤ deg(g)E and the degree of q by deg(f) = d, the final multiplication of u^{−1} with q requires O(M(E · d)) basic operations. Thus the costs are in O(M(E · d)).

Now, we deal with the general case. In particular, g has trivial content but does not satisfy the assumption of Proposition 3.10. This means that the first non-nilpotent coefficient c of g is a splitting element. Therefore, by Proposition 2.3, using O(log E) basic operations we can find an isomorphism R → R_1 × R_2 of R with the direct product of two non-trivial quotient rings. In the quotients, the coefficient c will be either nilpotent or invertible by Proposition 2.3. If c is invertible in the quotient R_i, then the projection of g to R_i[x] satisfies the assumption of Proposition 3.10. Thus the division can be performed using O(M(E · d)) basic operations. In case c is nilpotent in the quotient R_i, we need to repeat this process until the first non-nilpotent coefficient of the polynomial is invertible. This has to happen eventually, as the content of the polynomial is trivial and the number of maximal ideals is finite. As at every step the degree of the term of the polynomial which is non-nilpotent decreases, the splitting can happen at most min(F, d) times. At the end, we reconstruct the quotient and the remainder by means of the Chinese remainder theorem. It follows that the algorithm requires O(M(E · d) · min(d, F)) basic operations in R.
□

3.3. Modular inverses.
Finally, note that using a similar strategy as in Algorithm 3.7 we can compute modular inverses.
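Over Z/nZ this can be sketched as follows (helper names and conventions are ours; naive arithmetic, and f is assumed monic for simplicity). The loop is the iteration v ← v(2 − vu) (mod f); it terminates when u is a unit of (Z/nZ)[x], because the error v·u − 1 then has nilpotent coefficients and is squared at every step.

```python
def polymul(a, b, n):
    # naive product of two polynomials over Z/nZ (ascending coefficients)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % n
    return out

def polymod(a, f, n):
    # reduce a modulo a monic polynomial f over Z/nZ
    a = a[:]
    while len(a) >= len(f):
        c, shift = a[-1], len(a) - len(f)
        for j, fj in enumerate(f):
            a[shift + j] = (a[shift + j] - c * fj) % n
        a.pop()                        # leading coefficient is now zero
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def modular_inverse(u, f, n):
    # iterate v <- v*(2 - v*u) (mod f) until v*u = 1 mod f;
    # u must be a unit of (Z/nZ)[x] and f monic
    v = [pow(u[0], -1, n)]
    while True:
        w = polymod(polymul(v, u, n), f, n)
        if w == [1]:
            return v
        t = [(-c) % n for c in w]
        t[0] = (2 - w[0]) % n
        v = polymod(polymul(v, t, n), f, n)
```

For example, the inverse of the unit 1 + 3x modulo f = x² over Z/9Z is 1 + 6x.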
Algorithm 3.13.
Given a unit u ∈ R[x] and a polynomial f ∈ R[x] with invertible leading coefficient, the following steps return the inverse u^{−1} ∈ R[x] modulo f.
(1) Set i = 0 and define v_0 as the inverse of the constant term of u.
(2) While 2^i ≤ deg(f):
(a) Set v_{i+1} = v_i (2 − v_i u) (mod x^{2^{i+1}}).
(b) Increase i.
(3) While 2^i ≤ E · deg(u):
(a) Set v_{i+1} = v_i (2 − v_i u) (mod f).
(b) Increase i.
(4) Return v_i.

Lemma 3.14.
Algorithm 3.13 is correct and computes the inverse of u modulo f using O(M(deg(f)) log(E deg(u))) basic operations in R.

Proof.
The correctness follows from [vzGG03, Theorem 9.4] as above. The complexity result follows from the fact that the degrees of the polynomials that we compute during the algorithm are bounded by deg(f). □

4. Resultants and reduced resultants via linear algebra
In this section, we will describe algorithms to compute the resultant, the reduced resultant and the Bézout coefficients of univariate polynomials over an arbitrary principal ideal ring R, which in this section is not assumed to be Artinian. The algorithms we present here will be based on linear algebra over R, for which the complexity is described in [Sto00]. Note that in [Sto00] a slightly different notion of basic operations is used, which can be used to derive the basic operations from Section 2.1. For the sake of simplicity, in this section we will use the term basic operations to refer to basic operations as described in [Sto00].

We start by recalling the definition of the objects that we want to compute. Let f, g ∈ R[x] be polynomials of degree n and m respectively. Recall that the Sylvester matrix of the pair f, g is the matrix S(f, g) ∈ Mat_{(n+m)×(n+m)}(R) representing the R-linear map
ϕ : P_m × P_n → P_{n+m}, (s, t) ↦ sf + tg,
where P_k = { h ∈ R[x] | deg(h) < k }, with respect to the canonical bases (x^{k−1}, x^{k−2}, . . . , x, 1).

Definition 4.1.
Let f, g ∈ R[x] be polynomials. We define the resultant res(f, g) of f, g to be the determinant det(S(f, g)) ∈ R of the Sylvester matrix, and the reduced resultant Rres(f, g) of f, g as the ideal (f, g) ∩ R. Two elements u, v ∈ R[x] are called Bézout coefficients of the reduced resultant of f and g if they satisfy uf + vg ∈ R and (uf + vg) = Rres(f, g). As usual, by abuse of notation, we will call any generator of Rres(f, g) a reduced resultant of f, g and denote it by rres(f, g).

4.1. Reduced resultant.
We begin by showing that, similarly to the resultant, the reduced resultant can also be characterized in terms of invariants of the Sylvester matrix (at least in the case that one of the leading coefficients is invertible).
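For very small examples over Z/nZ, the definition can also be checked by brute force, enumerating cofactors within the degree bounds of Lemma 4.2 below. The function name is ours and this is only a sanity check, not an efficient method:

```python
from itertools import product
from math import gcd

def rres_bruteforce(f, g, n):
    # generator of the ideal (f, g) ∩ Z/nZ, found by enumerating u, v with
    # deg(u) < deg(g) and deg(v) < deg(f) (the bounds of Lemma 4.2);
    # ascending coefficient lists, one leading coefficient invertible mod n
    du, dv = len(g) - 1, len(f) - 1
    found = n                                  # gcd accumulator
    for u in product(range(n), repeat=du):
        for v in product(range(n), repeat=dv):
            h = [0] * (len(f) + len(g))
            for i, ui in enumerate(u):
                for j, fj in enumerate(f):
                    h[i + j] = (h[i + j] + ui * fj) % n
            for i, vi in enumerate(v):
                for j, gj in enumerate(g):
                    h[i + j] = (h[i + j] + vi * gj) % n
            if all(c == 0 for c in h[1:]):     # u*f + v*g is a constant
                found = gcd(found, h[0])
    return found % n
```

For f = x² + 1 and g = x + 2 over Z/25Z this returns 5: indeed f − (x − 2)·g = 5, and every constant combination of f and g is a multiple of 5.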
Lemma 4.2.
Let f, g, u, v ∈ R[x] such that uf + vg = r ∈ R and assume that the leading coefficient of f or g is invertible. Then we can find ũ, ṽ ∈ R[x] such that deg(ũ) < deg(g), deg(ṽ) < deg(f) and r = ũf + ṽg.

Proof.
Without loss of generality, we may assume that f is monic. Thus we can use polynomial division to write v = qf + ṽ with deg(ṽ) < deg(f). Now uf + vg = uf + (qf + ṽ)g = (u + qg)f + ṽg. Let ũ = u + qg. Since f is monic, deg(ũ) + deg(f) = deg(ũf) = deg(r − ṽg) ≤ deg(ṽ) + deg(g) < deg(f) + deg(g). This shows deg(ũ) < deg(g), as claimed. □

Recall that the strong echelon form of a matrix over a principal ideal ring is the same as the Howell form with reordered rows. In the case of a principal ideal domain, the strong echelon form is the same as a Hermite normal form where the rows are reordered such that all the pivot entries are on the diagonal, see [How86, Sto00, FH14]. In case the matrix has full rank, it is just the last diagonal entry. We will make use only of the following property of the upper right strong echelon form of a matrix A ∈ Mat_{k×k}(R): if v = (v_1, . . . , v_k) ∈ R^k is contained in the row span of A and v_1 = v_2 = · · · = v_l = 0 for some 1 ≤ l ≤ k, then v is in the row span of the last k − l rows of A. In particular, if v_1 = v_2 = · · · = v_{k−1} = 0, then v is a multiple of the last row of A, that is, v_k is a multiple of the last diagonal entry of A.

Proposition 4.3.
Let f, g ∈ R[x] be polynomials of degree n, m respectively, k = n + m and H = (h_{ij})_{1 ≤ i,j ≤ k} ∈ Mat_{k×k}(R) the upper right strong echelon form of S(f, g). Assume that one of the leading coefficients of f and g is invertible. Then (h_{k,k}) = Rres(f, g).

Proof.
Under the R-isomorphism R^k → P_k, (v_{k−1}, . . . , v_0) ↦ Σ_{0 ≤ i < k} v_i x^i, the row span of S(f, g) corresponds to the set of polynomials sf + tg with deg(s) < m and deg(t) < n. By Lemma 4.2, every element of (f, g) ∩ R arises in this way, so the constant polynomials contained in the row span of S(f, g) are exactly the elements of Rres(f, g). By the property of the strong echelon form recalled above, these are exactly the multiples of h_{k,k}, hence (h_{k,k}) = Rres(f, g). □

Corollary 4.6. Let f, g ∈ R[x] be polynomials of degree n, m respectively. Assume that one of the leading coefficients of f and g is invertible. Both a reduced resultant rres(f, g) and Bézout coefficients of a reduced resultant of f, g can be computed using O((n + m)^ω) many basic operations in R, where ω is the exponent of matrix multiplication.

Proof. Follows from Proposition 4.3 and [Sto00, Chapter 4] (see also [SM98]). □

4.2. Resultants. For the sake of completeness, we also state the corresponding result for the resultant. Note that here, in contrast to the reduced resultant, we do not need any assumption on the leading coefficients of the polynomials. While the resultant can easily be computed as the determinant of the Sylvester matrix, a pair of Bézout coefficients can be found via linear algebra.

Proposition 4.7. Let f, g ∈ R[x] be two polynomials of degree n, m respectively. Both the resultant res(f, g) and Bézout coefficients for the resultant can be computed using O((n + m)^ω) many basic operations in R.

Proof. By [Col71, Theorem 3], there exist u, v ∈ R[x] such that deg(u) < deg(g), deg(v) < deg(f) and res(f, g) = uf + vg. Therefore the computation of a pair of Bézout coefficients reduces to a determinant computation and linear system solving involving an (n + m) × (n + m) matrix over R. Thus the result follows from [Sto00, Chapter 4]. □

5. Resultants and polynomial arithmetic

In this section, we show how to compute the resultant, the reduced resultant and Bézout coefficients of univariate polynomials over a principal Artinian ring by directly manipulating the polynomials, avoiding the reduction to linear algebra problems. At the same time, this will allow us to get rid of the assumption on the leading coefficients of the polynomials, as present in Corollary 4.6. For the rest of this section we will denote by R a principal Artinian ring.

5.1. Reduced resultant.
Let f, g ∈ R[x] be polynomials. The basic idea of the reduced resultant algorithm is to use that (f, g) = (f − pg, g) for every p ∈ R[x] in order to make the degree of the operands decrease. Thus the computation reduces to the following base cases.

Lemma 5.1. Let c, d ∈ R ⊆ R[x] be constant polynomials and f ∈ R[x] a polynomial with invertible leading coefficient and deg(f) > 0. Then the following hold:
(1) Rres(c, d) = (c, d) and rres(c, d) = gcd(c, d),
(2) Rres(f, c) = (c) and rres(f, c) = c.

Proof. (1): Clear. (2): Since c is contained in (f, c) ∩ R, we can reduce to the computation of (f̄) ∩ R/(c), where f̄ ∈ (R/(c))[x] is the projection of f modulo (c). As f̄ has invertible leading coefficient, (f̄) ∩ R/(c) = (0). □

Unfortunately, this process may fail, mainly for the following reasons: the leading coefficient of the divisor is a zero divisor, or the polynomials are not primitive. We now describe a strategy to overcome these issues.

Reduction to primitive polynomials. First of all, we show how to reduce to the case of primitive polynomials. Let f, g ∈ R[x] be polynomials of positive degree. We want to show that either we find a splitting element r ∈ R or we reduce the computation to primitive polynomials. In some cases, we will need to change the ring over which we are working. To clarify this, when necessary, we will specify the ring in which we are computing the reduced resultant by writing it as a subscript; for example, rres_R denotes the reduced resultant over R. For a quotient R/(r) we will denote by rres_{R/(r)}(f, g) ∈ R a lift of the reduced resultant of the polynomials f̄, ḡ ∈ (R/(r))[x].

Lemma 5.2. Let f, g ∈ R[x] be polynomials and assume g is of positive degree with invertible leading coefficient. Let h ∈ R[x] be such that f = c(f) · h. Then rres_R(f, g) = c(f) · rres_{R/Ann(c(f))}(h, g).

Proof.
Let r = rres(f, g) be the reduced resultant of f and g and write s · c(f) h + tg = r with s, t ∈ R[x]. Since tg ≡ r (mod c(f)), and since g has invertible leading coefficient and r is a constant, both t and r lie in the ideal generated by c(f). Indeed, the additivity of the degree of the product holds if one of the polynomials has invertible leading coefficient. Thus
sh + (t/c(f)) g ≡ r/c(f) (mod Ann(c(f))),
giving the result. □

Lemma 5.3. Let f, g ∈ R[x] be two polynomials of positive degree. Assume that g is primitive and write f = c(f) h, where h ∈ R[x]. Suppose that neither c(f), c(h) nor any of the coefficients of g are splitting elements. Denoting by g = u · g̃ a factorization from Proposition 3.10, we have rres(f, g) = c(f) · rres_{R/Ann(c(f))}(h, g̃).
(cid:3) We use these three cases either to split the ring R or to reduce to the case of primitivepolynomials. Algorithm 5.5. Input: f , g ∈ R [ x ] non-constant.Output: r ∈ R or ( c, ¯ f , ¯ g ) ∈ R × R [ x ] × R [ x ].(1) If f and g are primitive, return (1 , f, g ).(2) If c ( f ) is a splitting element, return c ( f ).(3) If c ( g ) is a splitting element, return c ( g ).(4) If c ( f ) and c ( g ) are nilpotent, then:(a) Compute d = ( c ( f ) , c ( g )).(b) Compute ˜ f = f /d , ˜ g = g/d .(c) Apply Algorithm 5.5 to ( ˜ f , ˜ g ). If it returns an element r ∈ R , return r .Otherwise if it returns a , f , g then return ( d · a , f , g ).(5) If c ( f ) is not nilpotent, swap f and g .(6) Find the term a i x i of g of highest degree which is not nilpotent.(7) If a i is invertible, then:(a) Factorize g = u · ˜ g using Proposition 3.10 and set ˜ f = f /c ( f ).(b) Apply Algorithm 5.5 to ( ˜ f , ˜ g ). If it returns an element r ∈ R , return r .Otherwise, if it returns a , f , g , then return ( c ( f ) · a , f , g ).(8) If a i is not invertible, return ( a i , f, g ). Proposition 5.6. Let f , g ∈ R [ x ] be two polynomials of degree at most d . ThenAlgorithm 5.5 either returns ( c, ˜ g, ˜ f ) with c ∈ R and ˜ f , ˜ g ∈ R [ x ] primitive polynomialssuch that rres R ( f, g ) = c · rres R/ Ann( c ) ( ˜ f , ˜ g ) or returns a splitting element of R . Thealgorithm requires O (M( d ) log( E )) basic operations. Proof. If both polynomials are primitive, the statement is trivial. Assume now that oneof the polynomials is primitive, say g . If the content of f is a zero divisor, the algorithmis clearly correct. Otherwise, the algorithm follows the proof of Lemma 5.3 and performsa recursive call on the resulting polynomials. By Lemma 3.4, in the recursive call, eitherboth polynomials are primitive or the content of one of them is a splitting element, asdesired.Let us now assume that both polynomials are not primitive. 
If one of them is notnilpotent, its content is a splitting element of R . Therefore the only case that remainsis when both polynomials are nilpotent. By means of Lemma 5.4, we can reduce to thecase when at least one of the two polynomials is not nilpotent, and we have already dealtwith this case above. Therefore the claim follows. e now analyse the runtime. All the operations except for the factorization in Step (7a)using Proposition 3.10 are at most linear in d . If we are in the case of the factorization ofProposition 3.10, then in the recursive call we have a monic polynomial and a polynomialwhich is not nilpotent. Therefore the recursive call will take at most linear time in d andthe algorithm requires in this case O (M( d ) log( E )) operations. (cid:3) Chinese remainder theorem. Algorithm 5.5 returns in some cases a splitting element r ∈ R . In such a case, we use Proposition 2.3 and we continue the computation over thefactor rings. Therefore we need to explain how to recover the reduced resultant over R from the one computed over the quotients. Lemma 5.7. Let f, g ∈ R [ x ] be two polynomials and let e be a non-trivial idempotent.Denote by π and π the projections onto the components ( R/ ( e ))[ x ] and R/ (1 − e ))[ x ]respectively. Then π i (Rres( f, g )) = Rres( π i ( f ) , π i ( g )) for i = 1 , 2. In particular we haverres( f, g ) = e · rres R/ (1 − e ) ( f, g ) + (1 − e ) rres R/ ( e ) ( f, g ). Proof. Let r be the reduced resultant of f, g ; as r ∈ ( f, g ), there exist s, t ∈ R [ x ] suchthat sf + tg = r . Therefore, applying π i we get π i ( r ) ∈ ( π i ( f ) , π i ( g )) and therefore π i (Rres( f, g )) ⊆ Rres( π i ( f ) , π i ( g )). On the other hand, let r i = rres( π i ( f ) , π i ( g )) = u i π i ( f ) + v i π i ( g ). Then the Chinese remainder theorem implies that there exists r ∈ R and u, v ∈ R [ x ] such that π i ( r ) = r i and uf + vg = r , as desired. (cid:3) The main algorithm. 
Using Algorithm 5.5, we may assume that the input polynomials for the reduced resultant algorithm are primitive. In order to compute the reduced resultant of two polynomials f and g, we perform a modified version of the Euclidean algorithm on f, g. During the computation, we will potentially split the base ring using Proposition 2.3 and reconstruct the result using Lemma 5.7. We will now describe the computation of the reduced resultant. Before stating the algorithm, we briefly outline the basic idea. Let us assume that deg(f) ≥ deg(g).
• If g has invertible leading coefficient, we can divide f by g with the standard algorithm (Remark 3.9).
• If the leading coefficient of g is not invertible, we want to apply Proposition 3.10. If the first non-nilpotent coefficient is invertible, then we get a factorization g = u · g̃, where u is a unit and g̃ is monic. Therefore rres(f, g) = rres(f, g̃) and (f, g̃) satisfy the hypotheses of item (1). If it is not invertible, then it is a splitting element and Proposition 2.3 applies.
We repeat this until the degree of one of the polynomials drops to 0. In this case, we can just use one of the base cases from Lemma 5.1. Summarizing, we get the following recursive algorithm to compute the reduced resultant.

Algorithm 5.8 (Reduced resultant). Input: Polynomials f, g ∈ R[x]. Output: A reduced resultant rres(f, g).
(1) If f or g is constant, use Lemma 5.1 to return rres(f, g).
(2) If deg(g) > deg(f), swap f and g.
(3) Apply Algorithm 5.5 to (f, g).
(4) If Step (3) returns an element r (which is necessarily a splitting element), then:
(a) Compute a non-trivial idempotent e ∈ R using Proposition 2.3.
(b) Recursively compute r_1 = rres_{R/(e)}(f, g) and r_2 = rres_{R/(1−e)}(f, g) and return e·r_2 + (1 − e)·r_1.
(5) Now Step (3) returned (c, f_1, g_1).
(6) If c ∉ R^×, then return rres(f, g) = c · rres_{R/Ann(c)}(f_1, g_1).
(7) Now c ∈ R^×, so that rres(f, g) = c · rres(f_1, g_1) and both f_1 and g_1 are primitive.
(8) Let a_i x^i be the term of g_1 of highest degree such that a_i is not nilpotent.
(9) If a_i is invertible, then:
(a) Factorize g_1 = u · g̃ with u ∈ R[x]^× a unit and g̃ ∈ R[x] monic using Proposition 3.10.
(b) Divide f_1 by g̃ using Remark 3.9 to obtain q, r ∈ R[x] with deg(r) < deg(g̃) and f_1 = qg̃ + r.
(c) Return c · rres(g̃, r).
(10) If a_i is not invertible, then a_i is a splitting element and we proceed as in Step (4) with (f, g) replaced by (f_1, g_1) and multiply the result by c.

Theorem 5.9. Algorithm 5.8 is correct and terminates. If the degrees of f and g are bounded by d, then the number of basic operations is in O(d M(d) F log(E)).

Proof. We first discuss the correctness of the algorithm. At every recursive call we either have that:
• the polynomials we produce generate the same ideal as the starting ones; in this case correctness is clear;
• the reduced resultant of the input polynomials is the same as the reduced resultant of the polynomials we produce in output, up to a constant, as stated in Proposition 5.6;
• we find a splitting element and continue the computation over the residue rings, following Lemma 5.7.
Termination is straightforward too, as at every recursive call we either split the ring, pass to a residue ring, or the sum of the degrees of the polynomials decreases, and these operations can happen only a finite number of times. Let us analyse the complexity of the algorithm. Algorithm 5.5 costs at most O(M(d) · log(E)) operations by Proposition 5.6, and it is the most expensive operation that can be performed in every recursive call. The splitting of the ring can happen at most F times, and every time we need to continue the computation in every quotient ring. The recursive call in Step (6) will start again with Step (8), as the input polynomials are primitive.
Therefore, as passing to the quotient ring takes a constant number of operations, it does not affect the asymptotic cost of the algorithm. The recursive call in Step (9) can happen at most d times. Summing up, the total cost of the algorithm is O(d M(d) F log(E)) operations. □

Example 5.10. We consider the polynomials f = x^2 + 2x + 3 and g = x^2 + 1 over Z/12Z. The polynomials are primitive and monic, so we go directly to Step (9) of the algorithm and divide f by g, so that we get (f, g) = (g, 2x + 2). As the second polynomial now has content 2, we can use it as a splitting element. This means that we need to continue the computation over Z/3Z and Z/4Z. In the first of these rings, as 2 is invertible, we divide g by 2x + 2; the remainder is a unit, so that rres_{Z/3Z}(2x + 2, g) = 1. Let us now consider the second ring, Z/4Z. Here 2 is nilpotent, so we get rres_{Z/4Z}(g, 2x + 2) = 2 · rres_{Z/2Z}(g, x + 1), as g is monic. By dividing, (g, x + 1) = (0, x + 1) and therefore rres_{Z/2Z}(g, x + 1) = 0. Therefore, rres_{Z/4Z}(g, 2x + 2) = 0. Applying the Chinese remainder theorem, we therefore get rres(f, g) = (4).

Remark 5.11. When one of the input polynomials has invertible leading coefficient, at every recursive call of Algorithm 5.8 one of the two polynomials will still have invertible leading coefficient.

Computation of Bézout coefficients. Let f, g ∈ R[x] be polynomials. We want to find two polynomials a, b ∈ R[x] such that af + bg = rres(f, g). To this end, in the same way as in the Euclidean algorithm, we will keep track of the operations that we perform during the computation of the reduced resultant. In order to describe the algorithm, we just need to explain how to obtain cofactors in the base case and how to update cofactors during the various operations of Algorithm 5.8. We begin with the base case, which follows trivially from Lemma 5.1.

Lemma 5.12.
Let f, g ∈ R[x] be polynomials.
(1) If f and g are constant, let a, b ∈ R be such that (f, g) = (af + bg) as ideals of R. Then rres(f, g) = af + bg.
(2) If f has positive degree and g is constant, then rres(f, g) = g = 0 · f + 1 · g.
In particular, if f or g is constant, then Bézout coefficients for the reduced resultant can be computed using O(1) basic operations.

An easy calculation shows the following:

Lemma 5.13. Let f, g ∈ R[x] be polynomials of positive degree.
(1) If f = qg + r with q, r ∈ R[x] and a_1, b_1 ∈ R[x] satisfy a_1 g + b_1 r = rres(g, r), then rres(f, g) = b_1 f + (a_1 − q b_1) g.
(2) If c ∈ R, f = c · f̃ with f̃ ∈ R[x], and a_1, b_1 ∈ R[x] satisfy rres(f̃, g) = a_1 f̃ + b_1 g, then rres(f, g) = a_1 f + c b_1 g.
(3) If g = u · g̃, where u ∈ R[x]^× is a unit and g̃ ∈ R[x], and a_1, b_1 ∈ R[x] satisfy rres(g̃, f) = a_1 g̃ + b_1 f = rres(f, g), then rres(f, g) = b_1 f + (a_1 · u^{−1}) g.
In particular, given a, b ∈ R[x] of degree at most d, in cases (1) and (2) we can compute Bézout coefficients using at most O(M(d)) basic operations. In case (3) we can compute Bézout coefficients using O(M(d · E)) basic operations.

Finally, we mention the case where the ring is split using a non-trivial idempotent.

Lemma 5.14. Assume that e is a non-trivial idempotent, f, g ∈ R[x], and that rres(f̄, ḡ) = a_1 f̄ + b_1 ḡ over R/(e) with a_1, b_1 ∈ (R/(e))[x], and rres(f̄, ḡ) = a_2 f̄ + b_2 ḡ over R/(1 − e) with a_2, b_2 ∈ (R/(1 − e))[x]. Then rres(f, g) = ((1 − e) a_1 + e a_2) f + ((1 − e) b_1 + e b_2) g. The complexity is bounded by O(d), where d is the maximum of the degrees of a_1, b_1, a_2, b_2.

Notice that, as the inverse of a unit may have large degree, the Bézout coefficients that we compute this way may have large degree too. If we assume that one of the polynomials has invertible leading coefficient, we can reduce the degrees of the cofactors using Lemma 4.2.
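In the base case where R is a field (for instance Z/pZ, after all splittings have been performed), the cofactor bookkeeping of Lemma 5.13(1) reduces to the classical extended Euclidean algorithm. A minimal sketch; the function name and the representation of polynomials as ascending coefficient lists are ours, not the paper's:

```python
def ext_gcd_poly(f, g, p):
    """Extended Euclid in (Z/pZ)[x] for prime p: returns (d, a, b) with
    a*f + b*g = d, maintaining the cofactor update a0, a1 = a1, a0 - q*a1
    as in Lemma 5.13(1).  Polynomials are ascending coefficient lists."""
    def trim(h):
        while h and h[-1] % p == 0:
            h.pop()
        return h

    def add(u, v):
        w = [(ui + vi) % p for ui, vi in
             zip(u + [0] * (len(v) - len(u)), v + [0] * (len(u) - len(v)))]
        return trim(w)

    def mul(u, v):
        if not u or not v:
            return []
        w = [0] * (len(u) + len(v) - 1)
        for i, ui in enumerate(u):
            for j, vj in enumerate(v):
                w[i + j] = (w[i + j] + ui * vj) % p
        return trim(w)

    def divmod_(u, v):
        u, q = u[:], [0] * max(1, len(u) - len(v) + 1)
        inv = pow(v[-1], -1, p)
        while len(u) >= len(v):
            c, s = u[-1] * inv % p, len(u) - len(v)
            q[s] = c
            for i, vi in enumerate(v):
                u[s + i] = (u[s + i] - c * vi) % p
            trim(u)
        return trim(q), u

    f, g = trim([c % p for c in f]), trim([c % p for c in g])
    a0, b0, a1, b1 = [1], [], [], [1]
    while g:
        q, r = divmod_(f, g)
        f, g = g, r
        a0, a1 = a1, add(a0, mul([p - 1], mul(q, a1)))  # a0 - q*a1
        b0, b1 = b1, add(b0, mul([p - 1], mul(q, b1)))
    return f, a0, b0
```

For f = x^2 + 1 and g = x + 3 over Z/7Z this returns d = 3 with cofactors 1 and 6x + 3, i.e. 1 · f + (6x + 3) · g = 3.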
Under this assumption, this reduction must be done every time we invert the unit, in order to control the degrees of the cofactors. More precisely, assume that during one of the steps of the algorithm f has invertible leading coefficient and we factorize the second polynomial g as u · g̃ with g̃ monic. Then we do not need to compute the full inverse of u: to update the cofactors we only need to know the inverse of u modulo f. The complexity analysis changes completely under this assumption, so let us first assume that one of the input polynomials has invertible leading coefficient.

Lemma 5.15. Let f, g ∈ R[x] be polynomials of degree at most d and assume that f or g has invertible leading coefficient. Then we can compute Bézout coefficients a, b ∈ R[x] with af + bg = rres(f, g) using O(d M(d) F log(dE)) basic operations.

Proof. By Remark 5.11 we know that at every recursive call one of the polynomials will have invertible leading coefficient. Thus the most expensive operation that we perform in Algorithm 5.8 is the computation of the inverse of the unit modulo the polynomial with invertible leading coefficient in Step (9), which requires at most O(M(d) log(dE)) operations. Therefore the claim follows as in Theorem 5.9. □

Now we determine the complexity of the algorithm in the case where none of the input polynomials has invertible leading coefficient.

Lemma 5.16. Let f, g ∈ R[x] be polynomials of degree at most d. Then we can compute Bézout coefficients a, b ∈ R[x] with af + bg = rres(f, g) using at most O(M(d · E) + d M(d) F log(dE)) operations.

Proof. We can assume that both polynomials are primitive. If one of them has invertible leading coefficient, Lemma 5.15 applies. Otherwise, by means of Proposition 3.10 and, if needed, splitting the ring, we can reduce to the case of a monic polynomial g̃.
Therefore we can compute Bézout coefficients for f and g̃ using O(d M(d) F log(dE)) operations by Lemma 5.15. To get Bézout coefficients for f and g, we need to invert the unit, which requires at most O(M(d · E)) basic operations by Proposition 3.8. □

Resultant. Let f, g ∈ R[x] be two polynomials. In this subsection, we show how to compute the resultant res(f, g). We want to follow the same approach as in the computation of the reduced resultant: via a modified version of the Euclidean algorithm, we compute successive remainders in order to reduce the degree of the polynomials, until we arrive at one of the following base cases, which are easily verified using the definition.

Lemma 5.17. Let f ∈ R[x] be a polynomial of positive degree and a, b, c ∈ R constants. Then the following hold:
(1) res(f, c) = c^{deg(f)},
(2) res(x − a, x − b) = a − b.

In order to apply this strategy, we need to understand how the resultant behaves with respect to division with remainder. We have the following classical statement.

Lemma 5.18. Let f, g ∈ R[x] be non-constant polynomials. If f = qg + r with deg(q) = deg(f) − deg(g) and deg(r) < deg(g), then res(f, g) = lc(g)^d res(r, g), where d = deg(f) − deg(r).

As usual, the problem is that division works well only if the leading coefficient of the divisor is invertible. Therefore, we will now explain how to reduce to this case. Given polynomials f, g ∈ R[x], we will describe an algorithm that either returns a splitting element r ∈ R or returns two primitive polynomials f_1, g_1 and a constant c such that c · res(f_1, g_1) = res(f, g). The resultant behaves well with respect to the content of a polynomial, which is a consequence of the multilinearity of the resultant.

Lemma 5.19. Let f, g ∈ R[x] be polynomials and c ∈ R a constant. Then res(f, cg) = c^{deg(f)} res(f, g).

Chinese remainder theorem. Every time we encounter a splitting element, we can use Proposition 2.3 to split R.
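Over R = Z/nZ the ring-level steps involved here are elementary gcd computations. A sketch of the trichotomy unit/nilpotent/splitting element, of the idempotent produced by Proposition 2.3, and of the gluing of Lemma 5.7; the function names are ours, and Python 3.8+ is assumed for pow(x, -1, m):

```python
from math import gcd

def classify(c, n):
    """Trichotomy in Z/nZ: 'unit', 'nilpotent' or 'splitting'.
    c is nilpotent iff every prime divisor of n divides c; raising c to a
    power >= bit length of n detects this, since all exponents in n are
    smaller than that."""
    c %= n
    if gcd(c, n) == 1:
        return "unit"
    if pow(c, n.bit_length(), n) == 0:
        return "nilpotent"
    return "splitting"

def idempotent_from(r, n):
    """Proposition 2.3 in Z/nZ: from a splitting element r, write
    n = n1 * n2 with gcd(n1, n2) = 1 and return (e, n1, n2), where e is
    the non-trivial idempotent with e = 1 mod n1 and e = 0 mod n2."""
    n1 = gcd(r, n)
    while (t := gcd(n1, n // n1)) > 1:   # absorb all primes shared with r
        n1 *= t
    n2 = n // n1
    e = n2 * pow(n2, -1, n1) % n
    return e, n1, n2

def recombine(r1, r2, e, n):
    """Lemma 5.7: glue the values r1 (computed mod n1) and r2 (mod n2)."""
    return (e * r1 + (1 - e) * r2) % n
```

With n = 12 and the splitting element 2, this yields e = 9, n1 = 4, n2 = 3; gluing the two branch values of Example 5.10, namely 0 modulo 4 and 1 modulo 3, gives recombine(0, 1, 9, 12) = 4.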
In order to exploit this in the computation of the resultant, we need to understand the behaviour of the resultant with respect to this operation. As the projection onto one of the components is a homomorphism of rings, we recall the behaviour of the resultant with respect to homomorphisms, see [GCL92, Theorem 9.2].

Lemma 5.20. Let f, g ∈ R[x] be non-constant polynomials and let ϕ : R → S be a ring homomorphism.
• If deg(f) = deg(ϕ(f)) and deg(g) = deg(ϕ(g)), then ϕ(res(f, g)) = res(ϕ(f), ϕ(g));
• if deg(f) = deg(ϕ(f)) and deg(g) > deg(ϕ(g)), then ϕ(res(f, g)) = ϕ(lc(f))^{deg(g) − deg(ϕ(g))} res(ϕ(f), ϕ(g));
• if deg(f) > deg(ϕ(f)) and deg(g) > deg(ϕ(g)), then ϕ(res(f, g)) = 0.

Proof. The image ϕ(res(f, g)) of the resultant is equal to the determinant of ϕ(S(f, g)), i.e. the matrix whose entries are the images of the entries of S(f, g) under ϕ. If the degrees of the polynomials are invariant under ϕ, then ϕ(S(f, g)) = S(ϕ(f), ϕ(g)), giving the first formula. The second result follows by using Laplace expansion on the first deg(g) − deg(ϕ(g)) rows of ϕ(S(f, g)). The third statement follows from the fact that, under the given hypotheses, the first column of the matrix ϕ(S(f, g)) is zero. □

Proposition 5.21. Let ϕ : R → R_1 × R_2 be a ring isomorphism, π_i : R → R_i the canonical projections, i = 1, 2, and f, g ∈ R[x] polynomials. For i = 1, 2 set f_i = π_i(f) and g_i = π_i(g) and set
r_i = 0, if deg(f_i) < deg(f) and deg(g_i) < deg(g),
r_i = π_i(lc(g))^{deg(f) − deg(f_i)} res(f_i, g_i), if deg(f_i) ≤ deg(f) and deg(g_i) = deg(g),
r_i = π_i(lc(f))^{deg(g) − deg(g_i)} res(f_i, g_i), if deg(f_i) = deg(f) and deg(g_i) ≤ deg(g).
Then res(f, g) = ϕ^{−1}(r_1, r_2).

Proof. Follows from Lemma 5.20. □

Corollary 5.22. Let e be a non-trivial idempotent and f, g ∈ R[x] polynomials of degree at most n.
Given the resultants of f, g over R/(e) and R/(1 − e) respectively, we can compute the resultant res(f, g) using O(log(n)) many basic operations.

It remains to describe what happens when we factorize one of the polynomials, say g, as g = u · g̃ with u ∈ R[x]^× a unit and g̃ ∈ R[x] monic.

Lemma 5.23. Let f, g, h ∈ R[x] be non-constant polynomials. If deg(f) + deg(g) = deg(fg), then res(fg, h) = res(f, h) res(g, h).

Proof. The formula holds over every integral domain; in particular, it holds over a multivariate polynomial ring over Z. Let a_0, ..., a_{deg(f)} be the coefficients of f, b_0, ..., b_{deg(g)} the coefficients of g and c_0, ..., c_{deg(h)} the coefficients of h. Consider the polynomial ring S = Z[s_{f,0}, ..., s_{f,deg(f)}, s_{g,0}, ..., s_{g,deg(g)}, s_{h,0}, ..., s_{h,deg(h)}] and the homomorphism ϕ : S → R sending s_{f,i} to a_i, s_{g,i} to b_i and s_{h,i} to c_i. The map ϕ induces a homomorphism S[x] → R[x], which by abuse of notation we also denote by ϕ. By construction, f, g, h are in the image of ϕ and hence there exist f̃, g̃ and h̃ in S[x] which map to f, g and h respectively. Invoking Lemma 5.20 twice, we have
res(fg, h) = res(ϕ(f̃ g̃), ϕ(h̃)) = ϕ(res(f̃ g̃, h̃)) = ϕ(res(f̃, h̃)) ϕ(res(g̃, h̃)) = res(f, h) res(g, h). □

By virtue of this lemma, if we factorize g using Proposition 3.10 as g = u · g̃, where u ∈ R[x]^× is a unit and g̃ ∈ R[x] is monic, then res(f, g) = res(f, u) · res(f, g̃). We continue with the Euclidean algorithm in the computation of res(f, g̃). For what concerns res(f, u), we pass to the reciprocal polynomial.

Definition 5.24. Let f ∈ R[x] be a polynomial. We define the reciprocal polynomial i(f) ∈ R[x] as the polynomial x^{deg(f)} f(1/x).

Lemma 5.25. Let f, g ∈ R[x] and assume that the constant term of f is non-zero.
Then res(f, g) = (−1)^{deg(f) · deg(g)} · lc(i(f))^{deg(g) − deg(i(g))} · res(i(f), i(g)).

Proof. Note that f(0) ≠ 0 is equivalent to deg(i(f)) = deg(f). If deg(i(g)) = deg(g), the Sylvester matrix of the reversed polynomials i(f), i(g) is obtained by permuting the columns of the Sylvester matrix of f, g, so the claim follows. If deg(i(g)) < deg(g), we use the Laplace formula for the determinant on the first columns in order to reduce to the Sylvester matrix of i(f) and i(g); this corresponds to the multiplication by a power of the leading term of i(f). □

The resultant algorithm.

Algorithm 5.26 (Resultant). Input: Polynomials f, g ∈ R[x]. Output: The resultant res(f, g).
(1) If f or g is constant or deg(f) = deg(g) = 1, use Lemma 5.17 to return res(f, g).
(2) If deg(g) > deg(f), swap f and g.
(3) Write f = c(f) · f̃ and set r = c(f).
(4) If r ∉ R^×, then:
(a) Split the ring R ≃ R/(e) × R/(1 − e), using r as a splitting element (Proposition 2.3).
(b) Compute res_{R/(e)}(f̃, g) and res_{R/(1−e)}(f̃, g) recursively.
(c) Use Corollary 5.22 to determine res(f̃, g) and return c(f)^{deg(g)} · res(f̃, g).
(5) Write g = c(g) · g̃ and set r = c(g).
(6) If r ∉ R^×, then:
• Split the ring R ≃ R/(e) × R/(1 − e), using r as a splitting element.
• Compute res_{R/(e)}(f̃, g̃) and res_{R/(1−e)}(f̃, g̃) recursively.
• Use Corollary 5.22 to determine res(f̃, g̃) and return c(f)^{deg(g)} · c(g)^{deg(f)} · res(f̃, g̃).
(7) Let b_i x^i be the term of g̃ of highest degree with b_i not nilpotent.
(8) If b_i is not invertible, then:
• Split the ring R ≃ R/(e) × R/(1 − e), using b_i as a splitting element.
• Compute res_{R/(e)}(f̃, g̃) and res_{R/(1−e)}(f̃, g̃) recursively.
• Use Corollary 5.22 to determine res(f̃, g̃) and return c(f)^{deg(g)} · c(g)^{deg(f)} · res(f̃, g̃).
(9) Factorize g̃ = u ĝ, where u ∈ R[x]^× is a unit and ĝ ∈ R[x] is monic, using Proposition 3.10.
(10) Compute the remainder f_1 of the division of f̃ by ĝ.
(11) Set f_2 = i(f̃) and u_1 = i(u).
(12) Compute the remainder f_3 of the division of f_2 by u_1.
(13) Return c(f)^{deg(g)} · c(g)^{deg(f)} · lc(u_1)^{deg(f_2) − deg(f_3)} · res(f_3, u_1) · res(f_1, ĝ).

Theorem 5.27. Algorithm 5.26 is correct and for input of degree at most d requires at most O(d M(d) log(E) F) basic operations.

Proof. The algorithm terminates, as the number of maximal ideals and of quotients of R is finite and every time we perform a division, the sum of the degrees of the polynomials decreases. Correctness follows from the discussion above. Let us now compute the computational cost of the algorithm. We denote by R(d) the cost of the algorithm for input polynomials of degree at most d. Notice that R(d) is superlinear, that is lim sup_{d→∞} R(d)/d = ∞, as at every recursive call we compute a division of polynomials and this operation is already superlinear. This implies that R(d) ≥ R(d − s) + R(s) for d ≫ 0 and 0 ≤ s < d. Steps (1) and (3) require O(d) basic operations. The division in Step (10) takes at most O(M(d)) basic operations, while the computation of the factorization in Step (9) requires O(M(d) log E) basic operations. Every time we use Proposition 3.10, we need to split the computation into two resultants of lower degree. In particular, the degrees of the input polynomials in the recursive calls are bounded by d − s_1 and s_1, where s_1 is the degree of the unit u in the factorization at Step (9). Putting together all these considerations, we get the bound R(d) ≤ O(M(d) log(E)) + R(d − s_1) + R(s_1); moreover, the splitting of the ring can happen at most F times. Therefore, the asymptotic complexity of the algorithm is O(d M(d) log(E) F). □

Example 5.28.
We want to compute the resultant of f = x^4 + 2x^2 + 3 and g = x^3 + 2x^2 + 2 over Z/4Z. As the polynomials are primitive and monic, we divide directly f by g. Denoting by f_1 the remainder, we have f_1 = 2x^2 + 2x + 3, so that res(f, g) = res(f_1, g). Now, both polynomials are primitive, but the leading coefficient of f_1 is not invertible. Therefore, we apply Proposition 3.10. As f_1 is a unit, the corresponding factorization is trivial. Following the algorithm, we pass to the reciprocal polynomials, so that res(f, g) = res(i(f_1), i(g)). As i(f_1) has invertible leading coefficient, we can divide i(g) by i(f_1), so that res(f, g) = res(i(f_1), 2x + 1). Both polynomials are primitive, and the algorithm recognises that 2x + 1 is a unit and passes again to the reciprocal polynomials, res(f, g) = res(f_1, x + 2). Dividing again, res(f, g) = res(3, x + 2), so that we are in the case of Lemma 5.17. Therefore res(f, g) = 3.

Resultant ideal. Let f, g ∈ R[x] be polynomials. In many applications, we are interested in the ideal (res(f, g)) ⊆ R generated by the resultant and not in the resultant itself. In such a case, the algorithm can be simplified, as the resultant of a unit and a monic polynomial is a unit of R:

Lemma 5.29. Let f, u ∈ R[x] be polynomials. Assume that u ∈ R[x]^× is a unit and f has invertible leading coefficient. Then res(u, f) is a unit.

Proof. Let m be a maximal ideal of R. By Lemma 2.2, the projection of u modulo m is a non-zero constant. This implies that the Sylvester matrix S(u, f) is invertible over the residue field R/m. Applying the same argument to all maximal ideals of R, we obtain det S(u, f) ∈ R^×.
□ Therefore, if we make sure that at every recursive call the polynomial of higher degree has invertible leading coefficient, in the recursive call at Step (13) we only need to compute one resultant instead of two, since
lc(u_1)^{deg(f_2) − deg(f_3)} · res(f_3, u_1) = res(f_2, u_1) ∈ R^×
by Lemma 5.29. If the polynomial of higher degree does not have invertible leading coefficient, we still need to invoke Proposition 3.10. Even if this does not change the asymptotic complexity, it makes the computation faster in practice.

5.3. Multivariate polynomials. A classical application of the resultant algorithm is the computation of the resultant of multivariate polynomials with respect to one variable using a modular approach, see [Col71] and [GCL92] for the case R = Z. Here, for the sake of simplicity, we will focus on bivariate polynomials. Of course, using an inductive process it is possible to use the same argument to deal with more variables. The idea of the algorithm is to reduce to the univariate case in order to use Algorithm 5.26.

Let f, g ∈ R[x, y] be polynomials and assume that we want to compute the resultant with respect to y, so that S(f, g) is a matrix with entries in R[x]. The degree of the resultant is bounded by B = deg_y(g) deg_x(f) + deg_y(f) deg_x(g). This bound can be easily computed just by looking at the degrees of the entries of S(f, g), see also [GCL92, (9.34)].

Assume that there exist B + 1 interpolation points in R, that is, there are elements a_0, ..., a_B ∈ R such that a_i − a_j ∈ R^× is invertible for i ≠ j. We then consider the ideals I_i = (x − a_i) ⊆ R[x, y], 0 ≤ i ≤ B, whose residue rings are all isomorphic to R[y]. As a_i − a_j is invertible, the ideals are pairwise coprime.
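For R = Z and small degrees the whole evaluation–interpolation scheme can be sketched directly: build the Sylvester matrix, take exact determinants for the univariate resultants at each evaluation point, and Lagrange-interpolate. Everything below (the function names and the dict encoding of bivariate polynomials) is our illustration, not the paper's notation:

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of univariate f, g (ascending coefficient lists)."""
    m, n = len(f) - 1, len(g) - 1
    fd, gd = f[::-1], g[::-1]
    return ([[0] * i + fd + [0] * (n - 1 - i) for i in range(n)] +
            [[0] * i + gd + [0] * (m - 1 - i) for i in range(m)])

def det(M):
    """Determinant by cofactor expansion (fine for the tiny matrices here)."""
    if not M:
        return 1
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)) if M[0][j])

def res(f, g):
    """Univariate resultant as the determinant of the Sylvester matrix."""
    return det(sylvester(f, g))

def res_y(F, G, points):
    """Sketch of Algorithm 5.30 over Z: F, G are dicts {(i, j): c} encoding
    c * x^i * y^j.  Evaluate x at each point, take univariate resultants in
    y, then Lagrange-interpolate the coefficients of res_y(F, G) in x."""
    def at(P, a):
        f = [0] * (max(j for _, j in P) + 1)
        for (i, j), c in P.items():
            f[j] += c * a ** i
        return f
    vals = [(a, res(at(F, a), at(G, a))) for a in points]
    coeffs = [Fraction(0)] * len(points)
    for a, v in vals:
        num, den = [Fraction(1)], Fraction(1)
        for b, _ in vals:
            if b != a:
                num = [Fraction(0)] + num       # multiply basis by x ...
                for k in range(len(num) - 1):   # ... then subtract b * old
                    num[k] -= b * num[k + 1]
                den *= a - b
        for k, ck in enumerate(num):
            coeffs[k] += v * ck / den
    return [int(c) for c in coeffs]
```

For example, for F = y^2 − x and G = y − x the bound is B = 3, and interpolating over the points 0, 1, 2, 3 recovers res_y(F, G) = x^2 − x. The univariate helper also exhibits the base cases: res(f, c) = c^{deg(f)} (Lemma 5.17) and res(f, cg) = c^{deg(f)} res(f, g) (Lemma 5.19).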
Thus we can apply the Chinese remainder theorem, compute the resultants over the residue rings by means of Algorithm 5.26, and reconstruct the resultant in R[x] using the formula given by Lemma 5.20, in the same way as we did for the resultant itself.

Algorithm 5.30 (Resultant of bivariate polynomials). Input: Polynomials f, g ∈ R[x, y] and a_0, ..., a_B ∈ R such that a_i − a_j ∈ R^× for i ≠ j, where B = deg_y(g) deg_x(f) + deg_y(f) deg_x(g). Output: The resultant res_y(f, g) ∈ R[x].
(1) For i = 0, ..., B, compute the resultant r_i ∈ R of f, g in R[x, y]/(x − a_i) ≅ R[y] using Algorithm 5.26.
(2) Using Lemma 5.20, reconstruct the resultant over R[x].

Remark 5.31. In general it is not easy (or not possible at all) to find such elements. We show what to do in the case of Z/nZ and for residue rings of valuation rings of a p-adic field.
(1) Assume first that R = Z/nZ. Let p_1, ..., p_k be the prime numbers smaller than B. Then we factorize n as n = p_1^{e_1} ⋯ p_k^{e_k} m with m coprime to every p_i. Over Z/mZ, the elements 0, 1, ..., B are a set satisfying the requirements and therefore we can compute the resultant by using Algorithm 5.30. To treat the rings Z/p_i^{e_i}Z, we construct a base ring extension as follows. Denote by p one of the small prime divisors p_i of n and by e = e_i the corresponding exponent. Let k = ⌈log_p(B)⌉, S = Z/p^eZ and λ ∈ S[t] a monic polynomial of degree k which is irreducible modulo p. Then in S[t]/(λ) the elements a_{i,j} = i t̄^j for i ∈ {1, ..., p − 1} and j ∈ {0, ..., k − 1} are a set of elements such that a_{i,j} − a_{i',j'} is invertible if (i, j) ≠ (i', j'). Therefore we can work over the extension ring S[t]/(λ) and compute the resultant over (S[t]/(λ))[x] by using Algorithm 5.30.
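The first phase of case (1), splitting off the primes up to B, is plain trial division; a sketch (the function name is ours):

```python
def split_modulus(n, B):
    """Write n = p1^e1 * ... * pk^ek * m with every pi <= B and m coprime
    to all of them.  Trial division in increasing order suffices: a
    composite trial divisor never divides the remaining cofactor, since
    its prime factors have already been removed."""
    parts, m = {}, n
    for p in range(2, B + 1):
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        if e:
            parts[p] = e
    return parts, m
```

For instance, split_modulus(720, 3) returns ({2: 4, 3: 2}, 5): the resultant is then computed over Z/5Z with the points 0, ..., B directly, and over Z/16Z and Z/9Z via the extension S[t]/(λ).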
As the input polynomials have coefficients in the subring Z/p^eZ ⊆ S[t]/(λ), the result will lie in Z/p^eZ, so that the reconstruction is not affected by this.
(2) Let us now assume that R is the valuation ring of a p-adic field with residue field κ. Let q be the cardinality of the residue field. If q > B, then we can take lifts of the elements of the residue field κ as elements a_0, ..., a_B and apply Algorithm 5.30. Otherwise, as we did in the first case, we need to work over a ring extension. Let k = ⌈log_q(B)⌉ and let λ ∈ R[t] be a polynomial of degree k which is irreducible over κ. Then in S = R[t]/(λ) we can consider the elements b_{i,j} = a_i t̄^j, for i ∈ {1, ..., q} and j ∈ {0, ..., k − 1}, where the a_i ∈ R are lifts of the elements of κ. The difference b_{i,j} − b_{i',j'} is invertible if (i, j) ≠ (i', j'), so that these elements can be used to apply Algorithm 5.30.

6. Complexity over residue rings of the integers

In Section 5 we have determined the algebraic complexity of computing resultants and reduced resultants over a principal ideal ring R. The number of operations depends not only on the degree of the input polynomials, but also on the number F of maximal ideals and the nilpotency index E. For the ring R = Z/nZ, we want to translate the algebraic complexity into a bit complexity statement. We will denote by N : Z_{≥0} → Z_{≥0} the bit complexity of multiplying two integers; thus two integers a, b ∈ Z with |a|, |b| ≤ 2^k can be multiplied using O(N(k)) many bit operations.
In this regard, [Sto00, Theorem 1.5] shows that in Z/nZ,
• basic operations (1), (2) and (3) can be computed using O(N(log(n))) many bit operations, and
• basic operations (4) and (5) can be computed using O(N(log(n)) log(n)) many bit operations.
We will improve upon the naive translation into bit complexity of Theorems 5.27 and 5.9 by exploiting the fact that operations in non-trivial quotients of Z/nZ are less expensive than operations in Z/nZ.

Lemma 6.1. Let f ∈ Z/nZ[x] be a primitive polynomial. The bit complexity of the lifting of Proposition 3.10 is O(M(d)N(log(n))).

Proof. Without loss of generality, we can assume n = m^k. The cost of a lifting step performed over Z/m_iZ is O(M(d)N(log m_i)), so the complexity of the entire algorithm is M(d) Σ_{i=1}^k N(log m_i), where m_1 | m_2 | ⋯ | m_k = n and the precision at least doubles at every step. As N is superlinear, we get
Σ_{i=1}^k N(log m_i) ≤ N(log m_k) + N(Σ_{i=1}^{k−1} log m_i) ≤ 2 N(log m_k).
Summarizing, the cost of the Hensel lifting is O(M(d)N(log n)), as desired. □

Theorem 6.2. Let f, g ∈ Z/nZ[x] be polynomials of degree at most d. The bit complexity of the reduced resultant and resultant algorithms is O(d M(d)N(log(n))).

Proof. We only need to show that the number of distinct prime factors of n does not affect the complexity. We show the result by induction on the number of distinct prime factors of n. If n is prime, it follows from the general statement. Let us now assume that the statement is true for every n with k factors and let n be a modulus with k + 1 distinct prime factors. Let us assume that, in the algorithm, we find a factor m of n such that m and n/m are coprime. Then, following the algorithm, we need to split the computation into two, i.e. continue over Z/mZ and Z/(n/m)Z. By induction, we know that over these rings the number of operations is bounded by O(d M(d)N(log(m))) and O(d M(d)N(log(n/m))).
As N(log(m)) + N(log(n/m)) ≤ N(log(m) + log(n/m)) = N(log(n)) by the superlinearity of N, the statement follows. □

7. Polynomials over complete discrete valuation rings

We now show how to use the algorithms of the previous sections to improve operations over complete discrete valuation rings. The most important example is given by the valuation rings of local fields, for example, the ring Z_p of p-adic integers. To this end, let R be a complete discrete valuation ring with fraction field K. Let π be a uniformizer of K, that is, π is a generator of the unique maximal ideal of R. In particular, if we fix a set of representatives S ⊆ R of R/(π), then every element a ∈ R can be written as a = Σ_{i=0}^∞ a_i π^i for unique elements a_i ∈ S. We assume that every element of R is represented with a fixed precision k ∈ Z_{≥1}. This is equivalent to saying that elements are represented as elements of the quotient R/(π^k), which is a local principal Artinian ring. In particular, the algorithms for computing (reduced) resultants developed in Section 4 apply. Note that this is an important building block, for example, in the following tasks:
• Computation of the norm of elements in algebraic extensions of p-adic fields, which is just a resultant computation.
• The computation of a bivariate resultant is one of the main tools to compute the factorization of p-adic polynomials, see [FPR02] and [CG00].

Computing greatest common divisors. Apart from these direct applications, the ideas that we used in Section 4 can be exploited also in the computation of the greatest common divisor of polynomials over R. Note that while the classical Euclidean algorithm can be applied in this setting, it suffers from a great loss of precision, as shown for example in [Car17]. Let f, g ∈ R[x] be two polynomials with coefficients in R, for which we want to compute gcd(f, g). First of all, we notice that we can assume that the polynomials are primitive.
If not, we can just divide the polynomials by their content. The division between polynomials, in the case where the divisor is monic, works as usual. In the case where the divisor g is not monic, we can apply the following adapted version of Proposition 3.10.

Lemma 7.1. Let f = Σ_{i=0}^n a_i x^i ∈ R[x] and g = Σ_{i=0}^m b_i x^i be primitive polynomials over R and assume that v(a_i) ≥ 0 for 0 ≤ i ≤ n, v(a_n) = 0, v(b_i) > 0 for 1 ≤ i ≤ m and v(b_0) = 0. Then f and g are coprime.

Proof. Let d be the greatest common divisor of f and g. As d divides f and g, the same must hold over the residue ring. However, the greatest common divisor of the projections of f and g over the residue field is constant, because the valuation of the coefficients of g of positive degree is greater than zero. As the leading coefficient of f is invertible, the same must hold for d. Therefore, d ∈ R and, since f and g are primitive, d = 1. □

Proposition 7.2. Let f = Σ_{i=0}^d f_i x^i ∈ R[x] be a primitive polynomial. Assume that there exists 0 ≤ s ≤ d such that for s + 1 ≤ i ≤ d the coefficient f_i has positive valuation and f_s is invertible. Then there exists a factorization f = f_1 · f_2 in R[x] such that f_1 modulo (π) is constant and f_2 is monic of degree s. The polynomials f_1 and f_2 are coprime and can be computed using O(M(d) log k) basic operations in R.

Proof. The existence of f_1 and f_2 follows as in the proof of Proposition 3.10. The coprimality follows from Lemma 7.1. □

Therefore, if the leading coefficient of g has positive valuation, we can factorize it as g = g_1 · g_2 with g_1, g_2 ∈ R[x] coprime and g_2 monic. In particular, gcd(f, g) = gcd(f, g_1) gcd(f, g_2). As g_2 is monic, we can divide f by g_2 and continue with the algorithm recursively. For what concerns gcd(f, g_1), we distinguish two cases. If f is monic, we can conclude that f and g_1 are coprime, using again Lemma 7.1. If f is not monic, then we can apply again Proposition 7.2 to f and get a factorization f = f_1 · f_2 with f_1, f_2 ∈ R[x] coprime and f_2 monic.
In particular, we have gcd(f, g_1) = gcd(f_1, g_1) gcd(f_2, g_1). As above, gcd(f_2, g_1) = 1, so that gcd(f, g_1) = gcd(f_1, g_1). Therefore we are reduced to the case in which both polynomials are of the form g_1 = g_{1,0} + πx g̃ and f_1 = f_{1,0} + πx f̃ for some polynomials f̃, g̃ ∈ R[x] and g_{1,0}, f_{1,0} ∈ R^×. In this case, we again take advantage of the reciprocal polynomials:

Lemma 7.3. Let u = Σ_{i=0}^n a_i x^i ∈ R[x] and v = Σ_{i=0}^m b_i x^i be polynomials over R and assume that v(a_i) > 0 for 1 ≤ i ≤ n, v(a_0) = 0, v(b_i) > 0 for 1 ≤ i ≤ m and v(b_0) = 0. Then gcd(u, v) = i(gcd(i(u), i(v))).

Proof. Let g ∈ R[x] be a greatest common divisor of i(u), i(v). As i(u), i(v) have invertible leading coefficient, the same must hold for g by Gauss's lemma. Let s, t ∈ R[x] be such that i(u) = g · s and i(v) = g · t. Then u = i(i(u)) = i(g · s) and v = i(g · t). As the constant term of i(u) is non-zero, i(g) · i(s) = i(g · s), and therefore we get i(g) | gcd(u, v). We now need to prove that i(s) and i(t) are coprime, but this follows in the same way, by noticing that s and t have invertible leading coefficient and non-zero constant term. □

Summarizing, we get the following algorithm:

Algorithm 7.4. Input: Polynomials f, g ∈ R[x]. Output: gcd(f, g).
(1) If f or g is not primitive, return gcd(c(f), c(g)) · gcd(f/c(f), g/c(g)).
(2) If deg(g) > deg(f), swap f and g.
(3) If g = 0, return f.
If g is a non-zero constant, return gcd(c(f), g) ∈ R.
(4) If lc(g) is not invertible, then
(a) Split g as g_1 · g_2 using Proposition 7.2, with g_2 ∈ R[x] monic.
(b) If lc(f) is invertible, then return gcd(f, g_2).
(c) Split f as f_1 · f_2 using Proposition 7.2, with f_2 ∈ R[x] monic.
(d) Return gcd(f_2, g_2) · i(gcd(i(f_1), i(g_1))).
(5) Compute the remainder r of the division of f by g.
(6) Return gcd(g, r).

The correctness of the algorithm is clear, as we know that the Euclidean division preserves the greatest common divisor and that the factorization of Proposition 7.2 is given by coprime polynomials. As usual, we can keep track of the transformations we perform in order to recover a pair of Bézout coefficients, in the same way as we explained in Section 5.1.

8. Ideal arithmetic in number fields

In this section, we apply the algorithms from Section 4 to the basic problems of computing minima and norms of ideals in rings of integers of number fields. More precisely, let γ be an integral primitive element for a number field K and let f ∈ Z[x] be its monic minimal polynomial, that is, K = Q(γ) = Q[x]/(f). Let O_K be the maximal order of K and consider a non-zero ideal of O_K with a normal presentation I = (a, α) with a ∈ Z and α ∈ O_K, as defined in [PZ97, 6.3 Ideal calculus]. We will show how to compute the norm and the minimum of I given such a presentation.

8.1. Norm of an ideal. We want to compute the norm N(I) = |O_K/I| of I. Since I has a normal presentation, it is easy to see that N(I) = gcd(N(a), N(α)) = gcd(a^n, N(α)), where N(α) ∈ Q denotes the usual field norm. In order to compute the norm of an element, we make use of the following folklore statement:

Lemma 8.1. Let α ∈ Z[γ] = Z[x]/(f) ⊆ K and let g ∈ Z[x] be such that g(γ) = α.
Then N(α) = res(g, f).

In particular, the norm of α, and therefore the norm of I, can be determined using one resultant computation of two polynomials in Z[x]. On the other hand, we do not need to compute the norm of α exactly: it is sufficient to determine the norm of α modulo a^n. We can thus compute the norm of an ideal as follows.

Proposition 8.2. Let I = (a, α) and g ∈ Q[x] such that g(γ) = α.
(1) If g ∈ Z[x] and r̄ = res(ḡ, f̄) ∈ Z/a^n Z with ḡ, f̄ ∈ (Z/a^n Z)[x], then N(I) = gcd(a^n, r).
(2) Let d ∈ Z be an integer such that d · g ∈ Z[x] and denote g_1 = d · g. If r̄ = res(ḡ_1, f̄) ∈ Z/a^n d^n Z with ḡ_1, f̄ ∈ (Z/a^n d^n Z)[x], then N(I) = gcd(a^n, r/d^n).

Let us assume first that α ∈ Z[γ] and let, as above, g ∈ Z[x] be such that g(γ) = α. In order to compute the norm of the ideal, we compute res(g, f) (mod a^n) using Algorithm 5.26. If α is not in Z[γ], we compute its denominator d, i.e. the minimal positive integer such that dα ∈ Z[γ]. Write d = c · u, where u is the largest divisor of d coprime to a. We compute the resultant of f and a polynomial representing dα modulo c^n a^n and return the result divided by c^n. The result will be correct, as N(dα) = d^n N(α) and u^n is invertible modulo a^n. Thus we have replaced the resultant computation over Z[x] with a resultant computation over a residue ring Z/mZ for a suitable m ≠ 0.

8.2. Minimum of an ideal. We want to compute the minimum of the ideal I, that is, the positive integer min(I) ∈ Z_{>0} which satisfies min(I)Z = I ∩ Z. Again we distinguish two cases, depending on the index of Z[γ] in O_K.

Lemma 8.3. Assume that [O_K : Z[γ]] and a are coprime. Then min(I) = r, where r̄ = rres(ḡ, f̄) ∈ Z/aZ with ḡ, f̄ ∈ (Z/aZ)[x] and g(γ) = α.

Proof. First of all, we notice that since a and the index are coprime, the projection of g modulo a is well defined. Let now m be the minimum of I.
Then m can be written as an element of I, that is, m = s(γ) · a + t(γ) · α with s, t ∈ Q[x] whose denominators d_1, d_2 (the smallest positive integers d_1, d_2 such that d_1 · s ∈ Z[x] and d_2 · t ∈ Z[x]) are coprime to a. Thus there exists u ∈ Q[x] such that m = s(x) · a + t(x) · g(x) + u(x) · f(x). Reducing the equation modulo a, we get that m ∈ (f̄, ḡ) ∩ Z/aZ. Hence m is in the reduced resultant ideal of (f̄, ḡ). On the other hand, any relation over Z/aZ can be lifted, giving the equality. □

This lemma provides an easy way of computing the minimum of an ideal in the case the index is coprime to the first generator of the ideal. However, it does not work in general, as the denominators of the elements appearing in the equations and the first generator of I need not be coprime. In this unlucky case, we take advantage of the following lemma:

Lemma 8.4. Let I = (a, α) be an ideal of O_K. Then min(I) = gcd(a, den(α^{-1})).

This lemma allows us to compute the minimum of I from a Bézout identity. Indeed, let s · g + t · f = r be a Bézout identity for f and g, where g(γ) = α. Then s(γ)/r is the inverse of α in the number field. As we want to work modularly, we need to take care of the denominators. Let d be the denominator of α and write d = d_1 · u, where u is the largest divisor of d which is coprime to a. In the same way, let e be the exponent of O_K/Z[γ] and write e = e_1 · v with v the largest divisor coprime to a. Then we compute a Bézout identity of f and dα modulo a · e_1 · d_1 in order to find the denominator of the inverse. The minimum of the ideal will then be the greatest common divisor of a and the denominator of the modular inverse.

References

[AM69] M. F. Atiyah and I. G. Macdonald. Introduction to commutative algebra. Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1969.
[Bou06] Nicolas Bourbaki. Éléments de mathématique. Algèbre commutative. Chapitres 1 à 4. Berlin: Springer, 2006.
[Car17] Xavier Caruso.
Numerical stability of Euclidean algorithm over ultrametric fields. Journal de Théorie des Nombres de Bordeaux, 29(2):503–534, 2017.
[CG00] David G. Cantor and Daniel M. Gordon. Factoring polynomials over p-adic fields. In ANTS, 2000.
[Col71] George E. Collins. The calculation of multivariate polynomial resultants. J. Assoc. Comput. Mach., 18:515–532, 1971.
[FH14] Claus Fieker and Tommy Hofmann. Computing in quotients of rings of integers. LMS J. Comput. Math., 17(suppl. A):349–365, 2014.
[FPR02] David Ford, Sebastian Pauli, and Xavier-François Roblot. A fast algorithm for polynomial factorization over Q_p. Journal de Théorie des Nombres de Bordeaux, 14(1):151–169, 2002.
[GCL92] K. O. Geddes, S. R. Czapor, and G. Labahn. Algorithms for computer algebra. Kluwer Academic Publishers, Boston, MA, 1992.
[How86] John A. Howell. Spans in the module (Z_m)^s. Linear and Multilinear Algebra, 19(1):67–77, 1986.
[McL73] K. R. McLean. Commutative Artinian principal ideal rings. Proc. London Math. Soc. (3), 26:249–272, 1973.
[PZ97] M. Pohst and H. Zassenhaus. Algorithmic algebraic number theory, volume 30 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1997. Revised reprint of the 1989 original.
[SM98] Arne Storjohann and Thom Mulders. Fast algorithms for linear algebra modulo N. In Algorithms—ESA '98 (Venice), volume 1461 of Lecture Notes in Comput. Sci., pages 139–150. Springer, Berlin, 1998.
[Sto00] Arne Storjohann. Algorithms for Matrix Canonical Forms. PhD thesis, Department of Computer Science, Swiss Federal Institute of Technology – ETH, 2000.
[vzGG03] Joachim von zur Gathen and Jürgen Gerhard. Modern computer algebra. Cambridge University Press, Cambridge, second edition, 2003.
[vzGH98] Joachim von zur Gathen and Silke Hartlieb. Factoring modular polynomials. J. Symbolic Comput., 26(5):583–606, 1998.
Claus Fieker, Fachbereich Mathematik, Technische Universität Kaiserslautern, 67663 Kaiserslautern, Germany
E-mail address: [email protected]
URL: ∼fieker

Tommy Hofmann, Fakultät für Mathematik und Informatik, Universität des Saarlandes, 66123 Saarbrücken, Germany
E-mail address: [email protected]
URL: ∼thofmann

Carlo Sircana, Fachbereich Mathematik, Technische Universität Kaiserslautern, 67663 Kaiserslautern, Germany
E-mail address: [email protected]