On the complexity of computing integral bases of function fields
aa r X i v : . [ c s . S C ] M a y On the complexity of computing integral basesof function fields
Simon Abelard Laboratoire d’informatique de l’École polytechnique (LIX, UMR 7161)CNRS, Institut Polytechnique de Paris [email protected]
Abstract.
Let C be a plane curve given by an equation f ( x, y ) = 0with f ∈ K [ x ][ y ] a monic squarefree polynomial. We study the prob-lem of computing an integral basis of the algebraic function field K ( C )and give new complexity bounds for three known algorithms dealingwith this problem. For each algorithm, we study its subroutines and,when it is possible, we modify or replace them so as to take advantageof faster primitives. Then, we combine complexity results to derive anoverall complexity estimate for each algorithm. In particular, we modifyan algorithm due to Böhm et al. and achieve a quasi-optimal runtime. Acknowledgements.
Part of this work was completed while the author was at theSymbolic Computation Group of the University of Waterloo. This paper is partof a project that has received funding from the French Agence de l’Innovationde Défense. The author is grateful to Grégoire Lecerf, Adrien Poteaux and ÉricSchost for helpful discussions and to Grégoire Lecerf for feedback on a prelimi-nary version of this paper.
When handling algebraic function fields, it is often helpful –if not necessary– toknow an integral basis. Computing such bases has a wide range of applicationsfrom symbolic integration to algorithmic number theory and applied algebraicgeometry. It is the function field analogue of a well-known and difficult problem:computing rings of integers in number fields and, as often, the function fieldversion is easier: the algorithm of Zassenhaus [25] described for number fields inthe late 60’s can indeed be turned into a polynomial-time algorithm for functionfields which was later precisely described by Trager [23].However, there are very few complexity results going further than just statinga polynomial runtime. Consequently, most of the existing algorithms in the lit-erature are compared based on their runtimes on a few examples and this yieldsno consensus on which algorithm to use given an instance of the problem. In thispaper, we provide complexity bounds for three of the best-known algorithms tocompute integral bases and provide complexity bounds based on state-of-the artresults for the primitives they rely on.
Simon Abelard
In this paper, we focus on the case of plane curves C given by equationsof the form f ( x, y ) = 0 with f ∈ K [ x ][ y ] monic in y and squarefree. We setthe notation n = deg y f and d x = deg x f . The associated function field is K ( C ) = Frac ( K ( x )[ y ] /f ( x, y )), it is an algebraic extension of degree n of K ( x ).An element h ( x, y ) of K ( C ) is integral (over K [ x ]) if there exists a monic bivari-ate polynomial P ( x, y ) such that P ( x, h ( x, y )) equals 0 in K ( C ). The set of suchelements forms a free K [ x ]-module of rank n and a basis of this module is calledan integral basis of K ( C ).Computing integral bases of algebraic function fields has applications in sym-bolic integration [23] but more generally an integral basis can be useful to handlefunction fields. For instance, the algorithm of van Hoeij and Novocin [13] usessuch a basis to “reduce” the equation of function fields and thus makes themeasier to handle. The algorithm of Hess [11] to compute Riemann-Roch spacesis based on the assumption that integral closures have been precomputed. Thisassumption is sufficient to establish a polynomial runtime, but a more precisecomplexity estimate for Hess’ approach requires to assess the cost of computingintegral closures as well. Our contribution.
We provide complexity estimates for three algorithms ded-icated to computing integral bases of algebraic function fields in characteris-tic 0 or greater than n . To the best of our knowledge, no previous boundswere given for these algorithms. Another approach which has received a lotof attention is the use of Montes’ algorithm. We do not tackle this approachin the present paper, a complexity estimate has been given by Bauch in [2,Lemma 3.10] in the case of number fields. Using the Montes algorithm, a lo-cal integral basis of a Dedeking domain A at a prime ideal p is computed in O (cid:0) n ε δ log q + n ε δ ε + n ε δ ε (cid:1) p -small operations, with δ the p -valuationof Disc( f ) and the cardinal of A/ p .Our contribution is actually not limited to a complexity analysis: the algo-rithms that we present have been slightly modified so that we could establishbetter complexity results. We also discuss possible improvements to van Hoeij’salgorithm in some particular cases that are not uncommon in the literature. Ourmain complexity results are Theorems 1, 2 and 3. Note that we count field oper-ations and do not take into account the coefficient growth in case of infinite fieldsnor the field extensions incurred by the use of Puiseux series. We also made thechoice not to delve into probabilistic aspects: all the algorithms presented hereare “at worst” Las Vegas due to the use of Poteaux and Weimann’s algorithm,see for instance [21, Remark 3].We decided to give worst-case bounds and to only involve n and Disc( f ) inour theorems so as to give ready-to-use results. Our proofs, however, are meantto allow the interested reader to derive sharper bounds involving more preciseparameters such as the regularity and ramification indices of Puiseux series.We summarize these complexity estimates in Table 1 in a simpler context:we ignore the cost of factorizations and bound both n and d x = deg x f by D .In this case, the input size is in O ( D ) and output size in O ( D ). The constant2 ≤ ω ≤ n the complexity of computing integral bases of function fields 3 smallest value currently known. Translating the above bound, the complexity ofthe Montes approach is at best in e O ( D ) but only for a computing a local integralbasis at one singularity, while the algorithm detailed in Section 4 computes aglobal integral basis for a quasi-optimal arithmetic complexity (i.e. in e O ( D )). Organization of the paper.
We sequentially analyze the three algorithms: Sec-tion 2 is dedicated to van Hoeij’s algorithm [12], Section 3 to Trager’s algo-rithm [23] and Section 4 to an algorithm by Böhm et al. introduced in [3]. Ineach section, we first give an overview of the corresponding algorithm and insiston the parts where we perform some modifications. The algorithms we describeare variations of the original algorithms so we give no proof exactness and re-fer to the original papers in which they were introduced. Then, we establishcomplexity bounds for each algorithm by putting together results from variousfields of computer algebra. We were especially careful about how to handle linearalgebra, Puiseux series and factorization over K [[ x ]] [ y ]. Table 1.
Simplified complexity estimates for computing integral bases.Algorithm Worst-case complexityTrager’s algorithm [23] e O ( D )Van Hoeij’s algorithm [12] e O ( D ω +4 )Böhm et al.’s algorithm [3] e O ( D ) We recall some basic concepts about Puiseux series and refer to [24] for moredetails. Assuming that the characteristic of K is either 0 or > n , the Puiseuxtheorem states that f ∈ K [ x ][ y ] has n roots in the field of Puiseux series ∪ e ≥ K (cid:0) ( x /e ) (cid:1) .Following Duval [10], we group these roots into irreducible factors of f . First,one can write f = Q ri =1 f i with each f i irreducible in K [[ x ]][ y ]. Then, for 1 ≤ i ≤ r we write f i = Q ϕ i j =1 f ij , where each f ij is irreducible in K [[ x ]][ y ]. Finally,for any ( i, j ) ∈ { , . . . r } × { , . . . , ϕ i } we write f ij = e i − Y k =0 (cid:16) Y − S ij ( x /e i ζ ke i ) (cid:17) , where S ij ∈ K (( x )) and ζ e i is a primitive e i -th root of unity. Simon Abelard
Definition 1.
The n fractional Laurent series S ijk ( x ) = S ij ( x /e i ζ ke i ) are calledthe classical Puiseux series of f above 0. The integer e i is called the ramificationindex of S ijk . Proposition 1.
For a fixed i , the f ij ’s all have coefficients in K i , a degree- ϕ i extension of K and they are conjugated by the action of the associated Galoisgroup. We have P ri =1 e i ϕ i = n . Definition 2. [21, Definition 2] A system of rational Puiseux expansions over K ( K -RPE) of f above is a set { R i } ≤ i ≤ r such that • R i ( T ) = ( X i ( T ) , Y i ( T )) ∈ K i (( T )) , • R i ( T ) = ( γ i T e i , P ∞ j = n i β ij T j ) , where n i ∈ Z , γ i = 0 and β in i = 0 , • f i ( X i ( T ) , Y i ( T )) = 0 , • the integer e i is minimal. In the above setting, we say that R i is centered at ( X i (0) , Y i (0)). We mayhave Y i (0) = ∞ if n i < f is monic. Definition 3. [21, Definition 3] The regularity index of a Puiseux series S of f with ramification index e is the smallest N ≥ min(0 , ev x ( S )) such that no otherPuiseux series S ′ have the same truncation up to exponent N/e . The truncationof S up to its regularity index is called the singular part of S . It can be shown that two Puiseux series associated to the same RPE share thesame regularity index so we can extend this notion (and the notion of singularpart) to RPE’s.
We will be looking for an integral basis of the form p i ( x, y ) /d i ( x ), where the p i are degree- i monic polynomials in y . It is known that the irreducible factors ofthe denominators d i are among the irreducible factors of the discriminant withmultiplicity at least 2. We can treat these factors one by one by first lookingfor local integral bases at each of these factors, i.e. bases whose denominatorscan only be powers of such an irreducible factor. A global integral basis is thenrecovered from these local bases by CRT.To compute a local integral basis at a fixed factor φ , van Hoeij [12] followsthe following strategy. Starting from (1 , y, · · · , y n − ) and updating it so that itgenerates a larger module, until this module is the integral closure. This basis ismodified by multiplying it by an appropriate triangular matrix in the followingway. Let us fix a d , then b d must be a linear combination of the b , · · · , b d − such that ( yb d − + P d − i =0 a i b i ) /φ j is integral with j as large as possible.To this end, the coefficients of the linear combination are first set to bevariables and we write equations enforcing the fact that the linear combinationdivided by φ has to be integral. If a solution of this system is found, the value of b d is updated and we repeat the process so as to divide by the largest possible n the complexity of computing integral bases of function fields 5 power of φ . Note that a solution is necessary unique otherwise the difference oftwo solutions would be an integral element with numerator of degree d −
1, whichmeans that the j computed in the previous step was not maximal. When there isno solution, we have reached the maximal exponent and move on to computing b d +1 .For the sake of completeness, we give a description of van Hoeij’s algorithmbut we refer to van Hoeij’s original paper [12] for a proof that this algorithmis correct. This algorithm is originally described for fields of characteristic 0but also works in the case of positive characteristic provided that we avoidwild ramification (see [12, Section 6.2.]). To deal with this issue, we make theassumption that we are either considering characteristic zero or greater than n . Input :
A monic irreducible polynomial f ( y ) over K [ x ] Output :
An integral basis for K [ x, y ] / h f i n ← deg y ( f ) ; S fac ← set of factors P such that P | Disc( f ); for φ in S fac do Compute α a root of φ (possibly in extension) ;Compute r i the n Puiseux expansions of f at α with precision N ; b ← for d ← to n − do b d ← yb d − ;solutionfound ← true ;Let a , · · · a d − be variables ; a ← ( b d + P d − i =0 a i b i ) / ( x − α ) ; while solutionfound do Write the equations, i.e. the coefficients of a ( r i ) with negative powerof ( x − α ) for any i ;Solve this linear system in the a i ’s ; if no solution then solutionfound ← false; else There is a unique solution ( a i ) in K ( α ) d ;Substitute α by x in each a i ; b d ← ( b d + P d − i =0 a i b i ) /φ ; endendendendend From all the local bases perform CRT to deduce B an integral basis ; return B ; Algorithm 1:
Van Hoeij’s algorithm [12]
Simon Abelard
In this section, we prove the following theorem.
Theorem 1.
Let f ( x, y ) be a degree- n monic squarefree polynomial in y . Al-gorithm 1 returns an integral basis for the corresponding function field andcosts the factorization of Disc( f ) and e O ( n ω +2 deg Disc( f )) field operations, where ≤ ω ≤ is a feasible exponent for linear algebra.Proof. First, we need to compute the discriminant and recover its square factors,which costs a factorization of a univariate polynomial of degree ≤ nd x .Then, we need to compute the Puiseux expansions η i of f at one root of eachfactor in S fac , up to precision N = max i P i = j v ( η i − η j ). Using the algorithmof Poteaux and Weimann [21], the Puiseux expansions are computed up to pre-cision N in e O ( n ( δ + N )) field operations, where δ stands for the valuation ofDisc( f ). Indeed, these expansions are computed throughout their factorizationalgorithm, which runs in e O ( n ( δ + N )) field operations as stated in [21, Theo-rem 3]. Therefore, in theory, we will see that computing the Puiseux expansionshas a negligible cost compared to other parts of the algorithm since N ≤ n .Another problem coming from the use of Puiseux expansions is that we haveto evaluate bivariate polynomials (the b i ’s) at the Puiseux expansions of f .However this matter can be dealt with by keeping them in memory and updatingthem along the computations. This way, for a fixed d we first initialize b d = yb d − so we just have to perform a product of Puiseux expansions at precision O ( n ) and then each time b d is updated it will amount to performing a linearcombination of Puiseux expansions. Since we fix precision at N ≤ n , taking intoaccount the denominator in the exponents of the Puiseux series this amounts tohandling polynomials of degrees ≤ n . Thus, in our case, arithmetic operationson Puiseux series can be performed in e O ( n ) field operations.The main task in this algorithm is to solve a linear system of c equationsin d variables over the extension K ( α ), where c is the total number of terms ofdegrees < n Puiseux expansions. Since we know the linear system musthave at most one solution, we have the lower bound c ≥ d but in the worst case,each Puiseux series has n terms of degrees < c can be bounded aboveby n . More precisely, we can bound it by ne , where e is the maximum of theramification indices of the classical Puiseux expansions of f .In most cases, this system will be rectangular of size c × d so we solve it intime e O ( cd ω − ) using [5, Theorem 8.6]. This step is actually the bottleneck foreach iteration and using the bounds on d and c it runs in e O ( n ω +1 deg φ ) fieldoperations, since the extension K ( α ) of K has degree ≤ deg φ .This process is iterated over the irreducible factors of the discriminant ap-pearing with multiplicity at least 2, and for φ such a factor we have to solveat most n + M ( φ ) / M ( φ ) is the multiplicity of φ in Disc( f ).Indeed, each time a solution to a system is found the discriminant is divided by φ so that cannot happen more than M ( φ ) / d we will have to handle n n the complexity of computing integral bases of function fields 7 additional systems. Thus, for a fixed factor φ the cost of solving the systems isbounded by O ( n · n ω +1 deg φ + n ω +1 deg φM ( φ )), where the factor deg φ comesfrom the fact that the linear systems are solved over a degree deg φ -extension ofthe base field.Thus, the complexity is in e O (cid:16)P φ ∈ S fac n ω +1 M ( φ ) deg φ + n ω +2 P φ ∈ S fac deg φ (cid:17) . Remark 1.
If the base field is a finite field F q , factoring the discriminant is donein e O (( nd x ) . log q + nd x (log q ) ) bit operations [16]. Remark 2.
The above formula shows how the size of the input is unsufficientto give an accurate estimate of the runtime of van Hoeij’s algorithm. Indeed, inthe best possible case S fac , deg φ and M ( φ ) might be constant, and all the c φ,i ’s might be equal to d , leading to an overall complexity in O ( n ω +2 ). In theworst possible case however, the sum P φ ∈ S fac deg φ is equal to the degree of thediscriminant, leading to an overall complexity in e O ( n ω +2 deg Disc( f )). Instead of incrementally computing the b i ’s, it is possible to compute one b k bysolving the exact same systems, except that this time the previous b i ’s may nothave been computed (and are thus set to their initial value y i ). The apparentdrawback of this strategy is that it computes b k without exploiting previousknowledge of smaller b i ’s and therefore leads to solving more systems than usingthe previous approach. More precisely, if we already know b k − then we have tosolve e k − e k − + 1 systems otherwise we may have to solve up to e k + 1 systems.Using the complexity analysis above, we can bound the complexity of finding agiven b k without knowing other b i ’s by e O ( n k ω − ( e k + 1) deg φ ).However, we know that for a fixed φ , the b i ’s can be taken of the form p i ( x, y ) /φ e i where the exponents are non-decreasing and bounded by M ( φ ).Therefore, when M ( φ ) is small enough compared to n , it makes sense to pick anumber k and compute b k . If b k = y k then we know that b i = y i for any i smallerthan k . If b k = p k ( x, y ) /φ M ( φ ) then we know that we can take b i = y i − k b k for i greater than k . In most cases neither of this will happen but then we can repeatthe process recursively and pick one number between 1 and k − k + 1 and n and repeat.In the extreme case where we treat M ( φ ) as a constant (but deg φ is stillallowed to be as large as deg(Disc( f )) /
2) this approach saves a factor e O ( n )compared to the iterative approach computing the b i ’s one after another. Thisis summarized by the following proposition. Proposition 2.
Let f ( x, y ) be a degree- n monic squarefree polynomial in y suchthat irreducible factors of Disc( f ) only appear with exponent bounded by an abso-lute constant. The above modification of van Hoeij’s algorithm returns an integralbasis for the corresponding function field and costs a univariate factorization ofdegree ≤ nd x and e O ( n ω +1 deg Disc( f )) field operations, where ω is a feasibleexponent for linear algebra. Simon Abelard
Proof.
Let us first assume that M ( φ ) = 1 : then the problem is just to find thesmallest k such that e k = 1. Since we the e i ’s are non-decreasing, we can usebinary search and find this k after computing O (log n ) basis elements b i ’s, for atotal cost in e O ( n ω +1 deg φ ) and we indeed gain a quasi-linear factor comparedto the previous approach. As long as M ( φ ) is constant, a naive way to get thesame result is to repeat binary searches to find the smallest k such that e k = 1,then the smallest k such that e k = 2 and so on. Remark 3.
Such extreme cases are not uncommon among the examples pre-sented in the literature and we believe that beyond this extreme, there will be atrade-off between this strategy and the classical one for non-constant but smallmultiplicities. We do not investigate this trade-off further because finding properturning points should be addressed in practice as it depends both on theory andimplementation.
In the other extreme case where M ( φ ) is greater than n our strategy will performworse than the original one. Therefore, two ideas seem natural to find the e i :performing larger “jumps” by testing values of e i which are multiples of a fixed ν > e i . We briefly explainwhy these strategies do not beat the classical one.Given a root α of the discriminant, and fixing a d between 1 and n , it is indeedpossible for any ν > (cid:16) b d + P d − i =0 a i b i (cid:17) / ( X − α ) ν is integral, thus allowing to skip steps in the iterated updates and divisions.But there is a price to pay for this: the system that we will have to solve isbigger. When dividing by ( X − α ) the number of equations is the number of termsof the Puiseux expansions of exponent ≤ ne max . Whendividing by ( X − α ) ν , however, the number of equations is bounded by νne max .When solving our rectangular system, recall that the complexity depends linearlyon the number of equations, and thus even though this approach reduces thenumber of iterations by a factor close to ν , it increases the complexity of eachiteration by a factor ν .To sum up, if we want to know whether ( b d + P α i b i ) / ( X − α ) m is integralthen it costs the same (up to logarithmic factors) to either repeat divisions by( X − α ) as in van Hoeij’s algorithm, to perform repeated divisions by ( X − α ) ν or even to solve a single system to directly divide by ( X − α ) m . Therefore, thisstrategy does not bring any advantage over the classical strategy in the contextof van Hoeij’s algorithm. Computing an integral basis amounts to computing the integral closure of the K [ x ]-module generated by the powers of y . Trager’s algorithm [23] computes such n the complexity of computing integral bases of function fields 9 an integral closure iteratively using the following integrality criterion to decidewhen to stop. Note that there exists many similar algorithms like Round 2 andRound 4 using various criteria for integrality. A more precise account on thesealgorithms and their history is given in the final paragraphs of [9, Section 2.7]. Proposition 3. [23, Theorem 1] Let R be a principal domain ( K [ X ] in ourcase) and V a domain that is a finite integral extension of R . Then V is integrallyclosed if and only if the idealizer of every prime ideal containing the discriminantequals V .Proof. See [23].More precisely, Trager’s algorithm uses the following corollary to the aboveproposition:
Proposition 4. [23, Corollary 2] The module V is integrally closed if and onlyif the idealizer of the radical of the discriminant equals V . Starting from any basis of integral elements generating a module V the ideais to compute ˆ V the idealizer of the radical of the product of all such ideals in V . Either ˆ V is equal to V and we have found an integral basis, or ˆ V is strictlylarger and we can repeat the operation. We therefore build a chain of moduleswhose length has to be finite. Indeed, the discriminant of each V i has to be astrict divisor of that of V i − . Input :
A degree- n monic squarefree polynomial f ( y ) over K [ x ] Output :
An integral basis for K [ x, y ] / h f i D ← Disc( f ) ; B ← (1 , y, · · · , y n − ); while true do Set V the K [ x ]-module generated by B ; Q ← Q P i , where P i | D ;If Q is a unit then return B ;Compute J Q ( V ) the Q -trace radical of V ;Compute ˆ V the idealizer of J Q ( V ) ;Compute M the change of basis matrix from ˆ V to V ;Compute det M , if it is a unit then return V ;Update B by applying the change of basis ; D ← D/ (det M ) ; V ← ˆ V ; end Algorithm 2:
A bird’s eye view of Trager’s algorithm [23]
Computing the radical.
Following Trager, we avoid computing the radical ofthe ideal generated by Disc( f ) directly. First, we note that this radical is theintersection of the radical of the prime ideals generated by the irreducible factorsof Disc( f ). Let P be such a factor, we then use the fact that in characteristic zeroor greater than n , the radical of h P i is exactly the so-called P -trace radical of V (see [23]) i.e. the set J P ( V ) = { u ∈ V |∀ w ∈ V, P | tr( uw ) } , where the trace tr( w )is the sum of the conjugates of a w ∈ K ( x )[ y ] viewed as a degree- n algebraicextension of K ( x ).The reason we consider this set is that it is much easier to compute thanthe radical. Note that Ford and Zassenhaus’ Round 2 algorithm is designed tohandle the case where this assumption fails but we do not consider this possibilitybecause if it should happen it would be more suitable to use van Hoeij’s algorithmfor the case of small characteristic [14]. This latter algorithm is different from theone we detailed in Section 2 but follows the same principle, replacing Puiseuxseries by a criterion for integrality based on the Frobenius endomorphism.Finally, for Q = Q P i we define the Q -trace radical of V to be the intersectionof all the J P i ( V ). Here, we further restricted the P i ’s to be the irreducible factorsof Disc( f ) whose square still divide Disc( f ). In what follows, we summarize how J Q ( V ) is computed in Trager’s algorithm. Once again, we refer to [23] for furtherdetails and proofs.Let M be the trace matrix of the module V , i.e. the matrix whose entriesare the (tr( w i w j )) i,j , where the w i ’s form a basis of V . An element u is in the Q -trace radical if and only if M u is in Q · R n . In Trager’s original algorithm, the Q -trace radical is computed via a 2 n × n row reduction and one n × n polynomialmatrix inversion.We replace this step and compute a K [ x ]-module basis of the Q -trace radicalby using an approach due to Neiger [20] instead. Indeed, given a basis w i of the K [ x ]-module v , the Q -trace radical can be identified to the set ( f , · · · , f n ∈ K [ x ] n (cid:12)(cid:12)(cid:12)(cid:12) ∀ ≤ j ≤ n, n X i =1 f i tr( w i w j ) = 0 mod Q ( x ) ) . Using [20, Theorem 1.4] with n = m and the shift s = 0, there is a determin-istic algorithm which returns a basis of the Q -trace radical in Popov form for acost of e O ( n ω deg( Q )) field operations. Computing the idealizer.
The idealizer of an ideal m of V is the set of u ∈ Frac( V )such that u m ⊂ m . Let M i represent the multiplication matrix by m i with inputbasis ( v , · · · , v n ) and output basis ( m , · · · , m m ). Then to find the elements u in the idealizer we have to find all u ∈ Frac( R ) such that M u ∈ R n . Notethat building these multiplication matrices has negligible cost (in O ( n ) fieldoperations) using the technique of [22].Following Trager, we row-reduce the matrix M and consider ˆ M the top left n × n submatrix and the elements of the idealizer are now exactly the u such thatˆ M u ∈ R n . Thus, the columns of ˆ M − form a basis of the idealizer. Furthermore,the transpose of ˆ M − is the change of basis matrix from V i to V i +1 . n the complexity of computing integral bases of function fields 11 The purpose of this section is to prove the following theorem.
Theorem 2.
Consider f a degree- n monic squarefree polynomial in K [ x ][ y ] ,then Algorithm 2 returns an integral basis for the cost of factoring Disc( f ) and e O ( n deg Disc( f )) operations in K .Proof. The dominant parts in this algorithm are the computations of radicalsand idealizers, which have been reduced to linear algebra operations on polyno-mial matrices. First, we have already seen how to compute the Q -trace radical J Q ( V ) in e O ( n ω deg( Q )) field operations using the algorithm presented in [20].To compute the idealizer of J Q ( V ), we row-reduce a n × n matrix withentries in K [ x ] using naive Gaussian elimination. This costs a total of O ( n )operations in K ( x ).Then we extract the top n × n square submatrix ˆ M from this row-reduced n × n matrix and invert it for e O ( n ω ) operations in K ( x ). The output ˆ M − ofthis gives a basis of a module ˆ V such that V ⊂ ˆ V ⊂ V .To translate operations in K ( x ) into operations in K , one can bound thedegrees of all the rational fractions encountered, however it is quite fastidious totrack degree-growth while performing the operations described above. In fact,we exploit the nature of the problem we are dealing with.Our first task is to row-reduce a matrix M built such that a u = P ni =1 ρ i v i is in ˆ V if and only if M ( ρ , . . . , ρ n ) t ∈ K [ x ] n . The ρ i ’s are rational fractionsbut their denominator divides Q . Therefore, we fall back to finding solutions of M (˜ u , . . . , ˜ u n ) t ∈ ( Q ( x ) · K [ x ]) n , where the ˜ u i ’s are polynomials. In this case,it does no harm to reduce the entries of the matrix M modulo Q , howeverperforming Gaussian elimination will induce a degree growth that may causeus to handle polynomials of degree up to n deg Q instead of deg Q . With thisbound, the naive Gaussian elemination costs a total of O ( n deg Q ) operationsin K .After elimination, we retrieve a n × n matrix ˆ M whose entries have degreesbounded by n deg Q . Inverting it will cause another degree increase by a factorat most n . Thus, the inversion step has cost in e O ( n ω +2 deg Q ). Since ω ≤
3, eachiteration of Trager’s algorithm has cost bounded by O ( n deg Q ).Now, let us assess how many iterations are necessary. Let us assume thatwe are exiting step i and have just computed V i +1 from V i . Let us consider P a square factor of Disc( V i ). Let m be a prime ideal of V i containing P . Let usconsider u ∈ V i +1 , then by definition uP ∈ m because P ∈ m and therefore u ∈ P m ⊂ P V i . Thus, V i +1 ⊂ P V i . This means that at each step i we haveDisc( V i +1 ) = Disc( V i ) /Q i , where Q i is the product of square factors of Disc( V i ).Thus, the total number of iterations is at most half the multiplicity of the largestfactor of Disc( f ).More precisely, if we assume that the irreducible factors of Disc( f ) are r poly-nomials of respective degrees d i and multiplicity ν i , then the overall complexity of Trager’s algorithm is in e O ν X i =1 n X j ≤ r, ν j ≥ i d j , where ν = ⌊ max ν i / ⌋ .Since P ri =1 ν i d i ≤ deg Disc( f ), the above bound is in e O ( n deg Disc( f )),which ranges between e O ( n d x ) and e O ( n ) depending on the input f . Remark 4.
In the above proof, our consideration of degree growth seems quitepessimistic given that the change of basis matrix has prescribed determinant. Itwould be appealing to perform all the computations modulo Q but it is unclearto us whether the algorithm remains valid. However, even assuming that it ispossible, our complexity estimate would become e O ( n deg Disc( f )), which is stillno better than the bound we give in next section. As van Hoeij’s algorithm, this algorithm due to Böhm et al. [3] relies on comput-ing local integral bases at each “problematic” singularity and then recovering aglobal integral basis. But this algorithm then splits the problem again into com-puting contributions to the integral basis at each branch of each singularity.More precisely, given a reduced Noetherian ring A we denote by A its nor-malization i.e. the integral closure of A in its fraction field Frac( A ). In order tocompute the normalization of A = K [ x, y ] / h f ( x, y ) i we use the following resultto perform the task locally at all the singularities. Proposition 5. [3, Proposition 3.1] Let A be a reduced Noetherian ring witha finite singular locus { P , . . . , P s } . For ≤ i ≤ s , let an intermediate ring A ⊂ A ( i ) ⊂ A be given such that A ( i ) P i = A P i . Then P si =1 A ( i ) = A .Proof. See the proof of [4, Proposition 3.2].Each of these intermediate rings is respectively called a local contribution to A at P i . In the case where A ( i ) P j = A P j for any j = i , we say that A ( i ) is a minimallocal contribution to A at P i . Here, we consider the case A = K [ x, y ] / h f ( x, y ) i and will compute minimal local contributions at each singularity of f . This issummarized in Algorithm 3.In this section, we revisit the algorithm presented by Böhm et al. in [3] andreplace some of its subroutines in order to give complexity bounds for theirapproach. Note that these modifications are performed solely for the sake ofcomplexity and rely on algorithms for which implementations may not be avail-able. However, we note that our new description makes this algorithm bothsimpler and more efficient because we avoid using Hensel lifting to compute the E ( f ) and the triples ( a i , b i , c i ) which are actually obtained as byproducts of thefactorization of f over K [[ x ]][ y ]. This allows us to prove the following theorem. n the complexity of computing integral bases of function fields 13 Input :
A monic irreducible polynomial f ( y ) over K [ x ] Output :
An integral basis for K [ x, y ] / h f i n ← deg y ( f ) ; S fac ← set of factors φ such that φ | Disc( f ); for φ in S fac do Compute α a root of φ (possibly in extension) ;Apply a linear transform to fall back to the case of a singularity at x = 0 ;Compute the maximal integrality exponent E ( f ) ;Using Proposition 9, factor f over K [[ x ]][ y ];Compute the Bézout relations of Proposition 7 ;Compute integral bases for each factor as in Section 4.1 ;As in Section 4.2, recover the local contribution corresponding to φ ;(For this, use Proposition 7 and Proposition 11) end From all the local contributions, use CRT to deduce B an integral basis ; return B ; Algorithm 3:
Adaptation of the algorithm by Böhm et al. [3]
Theorem 3.
Let f ( x, y ) be a degree- n monic squarefree polynomial in y . ThenAlgorithm 3 returns an integral basis of K [ x, y ] / h f i and costs a univariate factor-ization of degree deg Disc( f ) over K , at most n factorizations of degree- n poly-nomials over an extension of K of degree ≤ deg(Disc( f )) and e O ( n deg Disc( f )) operations in K . Let us first address the particular case when f ( x, y ) is an irreducible Weier-strass polynomial. This way, we will be able to compute integral bases for eachbranches at a given singularity. The next section will then show how to glue thisinformation first into a local integral basis and then a global integral basis canbe computed using CRT as in van Hoeij’s algorithm. The main result of thissection is the following proposition. Proposition 6.
Let g be an irreducible Weierstrass polynomial of degree m whose Puiseux expansions have already been computed up to sufficiently largeprecision ρ . An integral basis for the normalization of K [[ x ]][ y ] / h g i can be com-puted in e O ( ρm ) operations in K . As in van Hoeij’s algorithm, the idea is to compute for any 1 ≤ d < m apolynomial p d ∈ K [ x ][ y ] and an integer e d such that p d ( x, y ) /x e d is integral and e d is maximal. However, the building process is quite different. We clarify thisnotion of maximality in the following definition. Definition 4.
Let P ∈ K [ x ][ y ] be a degree- d monic polynomial (in y ). Wesay that P is d -maximal if there exists an exponent e d such that P ( x, y ) /x e d is integral and such that there is no degree- d monic polynomial Q satisfying Q ( x, y ) /x e d +1 . Remark 5.
We introduce the notion of d -maximality for the sake of clarity andbrevity. To the best of our knowledge this notion has not received a standardname in the literature and was often referred to using the word maximal.Let us consider the m Puiseux expansions γ i of g . Since g is irreducible, theseexpansions are conjugated but let us first make a stronger assumption : thereexists a t ∈ Q such that all the terms of degree lower than t of the expansions γ i are equal and the terms of degree t are conjugate. We truncate all these seriesby ignoring all terms of degree greater or equal to t . This way, all the expansionsshare the same truncation γ . Lemma 1. [3, Lemma 7.5] Using the notation and hypotheses of previous para-graph, for any ≤ d < m the polynomial p d = ( y − γ ) d is d -maximal.Proof. See [3].In a more general setting, more truncations are iteratively performed so asto fall back in the previous case. We recall below the strategy followed in [3] forthe sake of completeness.Initially we have g = g = Q mi =1 ( y − γ i ). We compute the smallest exponent t such that the expansions γ i are pairwise different. We truncate the expan-sions to retain only the exponents smaller than t and denote these truncations γ (1) j . Among these expansions, we extract a set of r mutually distinct expan-sions which we denote by η i . Note that by local irreducibility, each of theseexpansions correspond to the same number of identical γ (1) j . We further denote g = Q mi =1 ( y − γ (1) i ) and g = Q ri =1 ( y − η i ) and u = m/r . We actually have g = g u .We recursively repeat the operation: starting from a polynomial g j − Q r i − i =1 ( y − η i ), we look for the first exponent such that all the truncations of the η i are pair-wise different. Truncating these expansions up to exponent strictly smaller, wecompute g j − = Q m j i =1 ( y − γ ( j ) i ). Once again we retain only one expansion perset of identical truncations and we define a g j = Q r j i =1 ( y − η i ) and u j = m j /r j .The numerators of the integral basis that the algorithm shall return areproducts of these g i ’s. Speaking very loosely, the g i have decreasing degrees in y and decreasing valuations so for a fixed d the denominator p d is chosen of theform Q g ν i i where the ν i ’s are incrementally built as follow : ν is the largestinteger such that deg y ( g ν ) ≤ d and ν ≤ u , then ν is the largest integer suchthat deg y ( g ν g ν ) ≤ d and ν ≤ u , and so on. This is Algorithm 6 of [3], werefer to the proof of [3, Lemma 7.8] for a proof of exactness.Since we assumed that we are treating a singularity at 0, the denominatorsare powers of x . The proper exponents are deduced in the following way: foreach g i we keep in memory the set of expansions that appear, we denote this setby N g i . Then for any γ in the set Γ of all Puiseux expansions of g we compute σ i = P η ∈ N gi v ( γ − η ) which does not depend on the choice of γ ∈ Γ . For any j ,if p j = Q k g ν k k then the exponent e j of the denominator is given by ⌊ P k ν k σ k ⌋ .Further justifications of this are given in [3]. n the complexity of computing integral bases of function fields 15 Complexity analysis.
Let us now give a proof of Proposition 6. To do so, remarkthat the g k ’s are polynomials whose Puiseux series are precisely the truncation η i ’s of the above γ ( i ) j . Equivalently, one can say that the g k ’s are the norms ofthe Puiseux expansions η i ’s.To compute them, we can appeal to the Algorithm NormRPE of Poteauxand Weimann [21, Section 4.1.]. Suppose we know all the expansions involvedup to precision ρ sufficiently large. These expansions are not centered at (0 , ∞ )because g is monic. Therefore, the hypotheses of [21, Lemma 8] are satisfied andthe algorithm NormRPE compute each of the g i ’s above in time e O ( ρ deg y ( g i ) ).Then we remark that the total number of such g i ’s is in O (log m ). Indeed,at each step the number of expansions to consider is at least halved (Puiseuxexpansions are grouped according to their truncations being the same, at leasttwo series having the same truncation). Since the degree of each g i is no greaterthan m −
1, all these polynomials can be computed in e O ( m ρ ) operations in K .Once the g i ’s are known we can deduce the numerators p i ’s as explainedabove. Building them incrementally starting from p each p i is either equal to a g j or can be expressed as one product of quantities that were already computed(either a g j or a p k for k < i ). Therefore, computing all the numerators amountsto computing at most m products of polynomials whose degrees are bounded by m over K [ x ] / h x ρ i . Using Schönhage-Strassen’s algorithm for these products thetotal cost is in e O ( ρm ) operations in K . The computation of denominators thenhas a negligible cost. This concludes the proof. Once again, let us assume that we are treating the local contribution at thesingularity x = 0. In the setting of van Hoeij’s algorithm, this corresponds todealing with a single irreducible factor of the discriminant. We further divide theproblem by considering the factorization f = f Q ri =1 f i , where f is a unit in K [[ x ]][ y ] and the other f i ’s are irreducible Weierstrass polynomials in K [[ x ]][ y ].We can apply the results from the previous section to each f i for i > K [[ x ]][ y ] / h f i i . In this section, we deal withtwo problems: we explain how to compute the factorization of f and how toefficiently perform an analogue of the Chinese Remainder Theorem to computean integral basis of K [[ x ]][ y ] / h f · · · f r i from the integral bases at each branch.For the sake of completeness, we recall in Section 4.3, how Böhm et al. take f into account and deduce a minimal local contribution at any given singularity. Proposition 7. [3, Proposition 5.9] Let f ,. . . , f r be the irreducible Weierstrasspolynomials in K [[ x ]][ y ] appearing in the factorization of f into branches. Let usset h i = Q j =1 ,j = i f j . Then the f i and h i are coprime in K (( x ))[ y ] so that thereare polynomials a i , b i in K [[ x ]][ y ] and positive integers c i such that a i f i + b i h i = x c i for any ≤ i ≤ r .Furthermore, the normalization of K [[ x ]][ y ] / ( f · · · f r ) splits as K [[ x ]][ y ] / h f · · · f r i ∼ = r M i =1 K [[ x ]][ y ] / h f i i and the splitting is given explicitly by ( t mod f , . . . , t r mod f r ) r X i =1 b i h i t i x c i mod f · · · f r . Proof.
See [7, Theorem 1.5.20].The following corollary will be used in practice to recover an integral basisfor K [[ x ]][ y ] / h f · · · f r i . Proposition 8. [3, Corollary 5.10] With the same notation, let , p ( i )1 ( x, y ) x e ( i )1 , . . . , p ( i ) m i − ( x, y ) x e ( i ) mi − ! represent an integral basis for f i , where each p ( i ) j ∈ K [ x ][ y ] is a monic degree- j polynomial in y . For ≤ i ≤ r , set B ( i ) = b i h i x c i , b i h i p ( i )1 x c i + e ( i )1 , . . . , b i h i p ( i ) m i − x c i + e ( i ) mi − ! . Then B (1) ∪ · · · ∪ B ( r ) is an integral basis for f · · · f r . In [3], these results are not used straightforwardly because the authors re-marked that it was time-consuming in practice. Instead, the c i ’s are computedfrom the singular parts of the Puiseux expansions of f and polynomials β i replacethe b i ’s, playing a similar role but being easier to compute.Indeed, these β i ’s are computed in [3, Algorithm 8] and they are actuallyproducts of the polynomials g i ’s already computed by [3, Algorithm 7], which isthe algorithm that we detailed above to describe the computation of an integralbasis for each branch. The only new thing to compute in order to deduce the β i ’s are the suitable exponents of the g i ’s. This is achieved through solvinglinear congruence equations. This step can be fast on examples considered inpractice and we also note that the β i ’s seem more convenient to handle becausethey are in K [ x ][ y ] and they contain less monomials than the b i ’s. However thecomplexity of this problem (often denoted LCON in the literature) has beenwidely studied, see for example [1,6] but, to the best of our knowledge, none ofthe results obtained provide bounds that we could use here.For the sake of complexity bounds, we therefore suggest another way which isbased on computing the b i ’s of Proposition 7. We also compute the factorizationof f into branches in a different way: instead of following the algorithms of [3,Section 7.3 & 7.4] we make direct use of the factorization algorithm of Poteaux n the complexity of computing integral bases of function fields 17 and Weimann [21] so we also invoke their complexity result [21, Theorem 3] whichis recalled below. Another advantage to this is that we will see that the b i ’s canactually be computed using a subroutine involved the factorization algorithm,which simplifies even further the complexity analysis. Proposition 9. [21, Theorem 3] There exists an algorithm that computes the ir-reducible factors of f in K [[ x ]][ y ] with precision N in an expected e O (deg y ( f )( δ + N )) operations in K plus the cost of one univariate factorization of degree atmost deg y ( f ) , where δ stands for the valuation of Disc( f ) .Proof. See [21, Section 7].Let us now get back to the first steps of Algorithm 3: we have to compute E ( f ) to assess up to what precision we should compute the Puiseux series andthen compute the factorization of f , the integers c i and the polynomials b i .In each section, we tried to keep the notation of the original papers as muchas we could which is why we introduced E ( f ) but the definition given in [3,Section 4.8] is exactly the same as the N in van Hoeij’s paper [12]. This boundcan be directly computed from the singular part of the Puiseux expansions of f . We recall its definition: E ( f ) = max i P i = j v ( γ i − γ j ), where the γ i ’s are thePuiseux expansions of f . We will see later on an alternate definition which willmake it easier to bound E ( f ).Following [3], we need to compute the factorization of f into branches up toprecision E ( f ) + c i . Using Poteaux and Weimann’s factorization algorithm fromProposition 9, we can compute the factors f i up to the desired precision.Furthermore, using a subroutine contained within this algorithm, we cancompute the Bézout relation a i f i + b i h i = x c i up to precision E ( f ) + c i . This isdetailed in [21, Section 4.2], where our c i is the lifting order κ and our f i and h i are respectively the H and G of Poteaux and Weimann. The algorithm used tocompute the Bézout relations is due to Moroz and Schost [19] and its complexityis given by [19, Corollary 1]. Complexity analysis.
We analyze the cost of the computations performed in thissection and summarize them by the following proposition.
Proposition 10.
Let f ( x, y ) be a degree- n monic squarefree polynomial in y and let δ be the x -valuation of Disc( f ) . Then the integers c i ’s and E ( f ) , a fac-torization in branches f = f Q ri =1 f i as well as the polynomials a i ’s and b i ’sor Proposition 7 can be computed up to precision E ( f ) + c i for a univariatefactorization degree n over K and a total of e O ( n δ ) field operations.Proof. First, the singular parts of the Puiseux series of f above 0 are computedfor e O ( nδ ) field operations by [21, Theorem 1]. This allows us to compute E ( f ).Then we compute the factorization in branch up to a sufficient precisionto compute the c i ’s. We then extend the precision further so as to compute thefactorization and the Bézout relations a i f i + b i h i = x c i up to precision E ( f )+ c i . Invoking [19, Corollary 1], computing a single Bézout relation up to precision E ( f ) + c i costs e O ( n ( E ( f ) + c i )) field operations. Computing the factorization of f in branches up to the same precision with Proposition 9 accounts for e O ( n ( δ + c i + E ( f )) operations in K and one univariate factorisation of degree n over K .Using [3, Definition 4.14], we note that E ( f ) can also be seen as e n − , whichis bounded by the valuation δ of the discriminant because we assumed that wewere handling a singularity at x = 0. Thanks to [21, Proposition 8] we can bound c i by v x (cid:16) ∂f∂y (cid:17) which is itself bounded by δ .Putting these bounds together, the overall cost is one univariate factorizationof degree n over K and e O ( nδ ) operations in K for the factorization step while the n Bézout relations requires e O ( n δ ) operations in K . This concludes the proof. f To deal with this problem, we reuse the following result without modification.
Proposition 11. [3, Proposition 6.1] Let f = f g be a factorization of f with f and g in K [[ x ]][ y ] , f a unit and g a Weierstrass polynomial of y -degree m .Let (cid:0) p = 1 , p x e , · · · , p m − x em − (cid:1) be an integral basis for K [[ x ]][ y ] / h g i such that the p i ’s are degree- i monic polynomials in K [ x ][ y ] and let f a monic polynomial in K [ x ][ y ] such that f = f mod x e m − . Let us denote d = deg y ( f ) .Then (cid:18) , y, . . . , y d − , f p , f p x e , · · · , f p m − x e m − (cid:19) is an integral basis for the normalization of K [[ x ]][ y ] / h f i .Proof. See [3]Since we handle a single singularity at 0, the previous basis is also a K [ x ]-module basis of the minimal local contribution at this singularity by [3, Corol-lary 6.4]. Complexity analysis.
This step involves a truncation of f modulo x e m − and m products of polynomials in K [[ x ]] [ y ] / h x e m − i whose y -degrees are bounded by n = deg y ( f ). This incurs e O ( mne m − ) field operations. Since we are treating asingularity at x = 0, we have e m − = O ( δ ) with δ the valuation of Disc( f ) sothat we can simplify the above bound as e O ( n δ ) field operations. In this section, we put all the previous bounds together and prove Theorem 3.
Proof.
As in van Hoeij’s algorithm, we first compute Disc( f ) and factor it inorder to recover its irreducible square factors. For each irreducible factor φ suchthat φ | Disc( f ), we compute the corresponding minimal local contribution. For n the complexity of computing integral bases of function fields 19 each of them, we first perform a translation so as to handle a singularity at x = 0. If there are several conjugated singularities we can handle them like invan Hoeij’s algorithm, at the price of a degree-deg φ extension of K which wedenote by K ′ in this proof. Also note that through this transform the multiplicity M ( φ ) corresponds to the valuation δ of the discriminant.First, we split f into branches using Proposition 10 for a cost in e O ( n M ( φ ))operations in K ′ and one univariate factorization of degree ≤ n over K ′ .Then, at each branch f i , we apply Proposition 6 with precision ρ = E ( f ) + c i . Therefore, the cost of computing an integral basis at each branch f i is in e O ( M ( φ ) deg y ( f i ) ) operations in K ′ . Since P i deg( f i ) ≤ n , computing the inte-gral bases at all the branches costs e O ( n M ( φ )) operations in K ′ .At the end of this step, we have integral bases B i of the form (cid:18) , p ( x, y ) x e , . . . , p m i − ( x, y ) x e mi − (cid:19) with m i = deg y f i but the p i ’s are in K ′ [[ x ]][ y ].At first glance, this is a problem because Proposition 8 requires the p i ’s tobe in K ′ [ x ][ y ]. However, the power of x in the denominators is bounded a prioriby E := E ( f ) + max ≤ i ≤ r c i so we can truncate all series beyond this exponent.Indeed, forgetting the higher order terms amounts to subtracting each elementof the basis by a polynomial in K ′ [ x ]. Such polynomials are obviously integralelements so they change nothing concerning integrality.We can thus apply Proposition 8 to get an integral basis for f · · · f r . Thiscosts O ( n ) operations in K ′ [[ x ]] [ y ] / h x E , f ( x, y ) i . Each such operation amountsto nE operations in K ′ . We have previously seen that E is in O ( M ( φ )) so theoverall cost of applying Proposition 8 is in O ( n M ( φ )) operations in K ′ .After this process, the basis that we obtained must be put in “triangularform” (i.e. each numerator p i should have degree- i in y in order for us to applyProposition 11. To do this, we first reduce every power of y greater or equalto n using the equation f ( x, y ) = 0. For a fixed i , by the Bézout relations, h i has y -degree ≤ n − m i and b i has y -degree < m i , so we have to reduce atotal of O ( n ) bivariate polynomials whose degrees in y are in O ( n ). Using a fastEuclidean algorithm, this amounts to e O ( n ) operations in K ′ [ x ] / h x E i , hence acost in O ( n M ( φ )) operations in K ′ .Once done, every element in the basis can be represented by a vector ofpolynomials in K ′ [ x ] whose degrees are bounded by E . To put the above integralbasis in triangular form, it suffices to compute a Hermite Normal Form of a fullrank n × n polynomial matrix. Using [17, Theorem 1.2] an algorithm by Labahn,Neiger and Zhou performs this task in e O ( n ω − M ( φ )) operations in K ′ .We can finally apply Proposition 11 and deduce the minimal local contribu-tion for the factor φ in e O ( n M ( φ )) operations in K ′ .Overall, given a factor φ , computing the corresponding minimal local con-tribution to the normalization of K [ C ] costs the factorization of Disc( f ), oneunivariate factorization of degree ≤ n over K and e O ( n M ( φ )) operations in K ′ .Computing all the local contributions can therefore be done for the factorization of Disc( f ), S fac univariate factorization of degree ≤ n over extensions of K ofdegree ≤ max φ ∈ S fac deg φ and e O ( n deg Disc( f )) operations in K .In the case of conjugate singularities, we follow the idea of van Hoeij ratherthan [3, Remark 7.17] and simply replace α by x in the numerators and ( x − α )by φ in the denominators because it does not harm our complexity bound. Inthis process, some coefficients of the numerators are multiplied by polynomialsin x , which clearly preserves integrality. Since the numerators are monic in y , nosimplification can occur and the basis property is also preserved.Finally, a global integral basis for K [ x, y ] / h f i is deduced by a Chinese re-mainder theorem. This can be achieved in quasi-linear time in the size of thelocal bases. Each of them being in O ( n deg Disc( f )), this last CRT does notincrease our complexity bound. This concludes the proof. Remark 6.
The n factorizations incurred by the use of Poteaux and Weimann’salgorithm are only necessary to ensure that quotient rings are actually fields,this cost can be avoided by using the D5 principle [8] at the price of a potentialcomplexity overhead. However, using directed evaluation [15] yields the sameresult without hurting our complexity bounds. Conclusion
In the setting of Table 1, the best bound given in this paper is in e O ( D ) whichis quasi-quadratic in the input size, but quasi-linear in the output size. It issurprising that we are able to reach optimality without even treating the localfactors f i through a divide-and-conquer approach like in [21]. This would allowus to work at precision δ/n instead of δ most of the time, but this does not affectthe worst-case complexity of the whole algorithm. From an implementation pointof view, however, this approach will probably make a significant different.Note that we are still relatively far from having implementations of algo-rithms actually reaching these complexity bounds because we lack implementa-tions for the primitives involved in computing Popov/Hermite forms, Puiseuxseries and factorizations over K [[ x ]] [ y ]. In some experiments we performed,Puiseux series were actually the most time-consuming part, which is why Trager’salgorithm may still be a competitive choice despite our complexity results. References
1. Vikraman Arvind and TC Vijayaraghavan. The complexity of solving linear equa-tions over a finite ring. In
Annual Symposium on Theoretical Aspects of ComputerScience , pages 472–484. Springer, 2005.2. Jens-Dietrich Bauch. Computation of integral bases.
Journal of Number Theory ,165:382–407, 2016.3. Janko Böhm, Wolfram Decker, Santiago Laplagne, and Gerhard Pfister. Com-puting integral bases via localization and Hensel lifting. arXiv preprintarXiv:1505.05054 , 2015.n the complexity of computing integral bases of function fields 214. Janko Böhm, Wolfram Decker, Santiago Laplagne, Gerhard Pfister, Andreas Steen-paß, and Stefan Steidel. Parallel algorithms for normalization.
Journal of SymbolicComputation , 51:99–114, 2013.5. Alin Bostan, Frédéric Chyzak, Marc Giusti, Romain Lebreton, Grégoire Lecerf,Bruno Salvy, and Éric Schost.
Algorithmes efficaces en calcul formel . 2017.6. Niel de Beaudrap. On the complexity of solving linear congruences and computingnullspaces modulo a constant. arXiv preprint arXiv:1202.3949 , 2012.7. Theo De Jong and Gerhard Pfister.
Local analytic geometry: Basic theory andapplications . Springer Science & Business Media, 2013.8. Jean Della Dora, Claire Dicrescenzo, and Dominique Duval. About a new methodfor computing in algebraic number fields. In
European Conference on ComputerAlgebra , pages 289–290. Springer, 1985.9. Claus Diem.
On arithmetic and the discrete logarithm problem in class groups ofcurves . Habilitation, Universität Leipzig, 2009.10. Dominique Duval. Rational Puiseux expansions.
Compositio mathematica ,70(2):119–154, 1989.11. Florian Hess. Computing Riemann–Roch spaces in algebraic function fields andrelated topics.
Journal of Symbolic Computation , 33(4):425–445, 2002.12. Mark van Hoeij. An algorithm for computing an integral basis in an algebraicfunction field.
Journal of Symbolic Computation , 18(4):353–363, 1994.13. Mark van Hoeij and Andrew Novocin. A reduction algorithm for algebraic functionfields. 2008.14. Mark van Hoeij and Michael Stillman. Computing an integral basis for an algebraicfunction field, 2015.15. Joris van der Hoeven and Grégoire Lecerf. Directed evaluation. working paper orpreprint, December 2018.16. Kiran S Kedlaya and Christopher Umans. Fast polynomial factorization and mod-ular composition.
SIAM Journal on Computing , 40(6):1767–1802, 2011.17. George Labahn, Vincent Neiger, and Wei Zhou. Fast, deterministic computationof the Hermite normal form and determinant of a polynomial matrix.
Journal ofComplexity , 42:44–71, 2017.18. François Le Gall. Powers of tensors and fast matrix multiplication. In
Proceedingsof the 39th international symposium on symbolic and algebraic computation , pages296–303, 2014.19. Guillaume Moroz and Éric Schost. A fast algorithm for computing the truncatedresultant. In
Proceedings of the ACM on International Symposium on Symbolicand Algebraic Computation , pages 341–348, 2016.20. Vincent Neiger. Fast computation of shifted Popov forms of polynomial matricesvia systems of modular polynomial equations. In
Proceedings of the ACM onInternational Symposium on Symbolic and Algebraic Computation , pages 365–372,2016.21. Adrien Poteaux and Martin Weimann. Computing Puiseux series: a fast divideand conquer algorithm. arXiv preprint arXiv:1708.09067 , 2017.22. Barry Marshall Trager. Algorithms for manipulating algebraic functions.
SM thesisMIT , 1976.23. Barry Marshall Trager.
Integration of algebraic functions . PhD thesis, Mas-sachusetts Institute of Technology, 1984.24. Robert J Walker.
Algebraic curves . 1950.25. Hans Zassenhaus. Ein algorithmus zur berechnung einer minimalbasis übergegebener ordnung. In