Algebraic Diagonals and Walks: Algorithms, Bounds, Complexity
ALIN BOSTAN, LOUIS DUMONT, AND BRUNO SALVY
Abstract.
The diagonal of a multivariate power series F is the univariate power series Diag F generated by the diagonal terms of F. Diagonals form an important class of power series; they occur frequently in number theory, theoretical physics and enumerative combinatorics. We study algorithmic questions related to diagonals in the case where F is the Taylor expansion of a bivariate rational function. It is classical that in this case Diag F is an algebraic function. We propose an algorithm that computes an annihilating polynomial for Diag F. We give a precise bound on the size of this polynomial and show that generically, this polynomial is the minimal polynomial and that its size reaches the bound. The algorithm runs in time quasi-linear in this bound, which grows exponentially with the degree of the input rational function. We then address the related problem of enumerating directed lattice walks. The insight given by our study leads to a new method for expanding the generating power series of bridges, excursions and meanders. We show that their first N terms can be computed in quasi-linear complexity in N, without first computing a very large polynomial equation.

1. Introduction
The diagonal of a multivariate power series with coefficients $a_{i_1,\dots,i_k}$ is the univariate power series with coefficients $a_{i,\dots,i}$. Particularly interesting is the class of diagonals of rational power series (ie, Taylor expansions of rational functions). In particular, diagonals of bivariate rational power series are always roots of nonzero bivariate polynomials (ie, they are algebraic series) [34, 21]. This property persists for multivariate rational power series, but only in positive characteristic, while the converse inclusion — algebraic series being diagonals of rational series — always holds [21, 36, 19]. As far as we are aware, the first occurrence of this result in the literature is an article of Pólya's [34], which deals with a particular class of bivariate rational functions; the proof uses elementary complex analysis. Along the lines of Pólya's approach, Furstenberg [21] gave a (sketchy) proof of the general result, over the field of complex numbers; the same argument has been enhanced later [25], [38, §6.3]. Three more different proofs exist: a purely algebraic one that works over arbitrary fields of characteristic zero [23, Th. 6.1] (see also [38, Th. 6.3.3]), one based on non-commutative power series [20, Prop. 5], and a combinatorial proof [9, §3.4.1] that relies on an encoding of the diagonal using unidimensional walks, seen themselves as words of a non-ambiguous context-free language. Various other generalizations are known [21, 18, 24, 33].

Polynomial equations.
Despite the richness of the topic and the fact that most proofs are constructive in essence, we were not able to find in the literature any explicit algorithm for computing a bivariate polynomial that cancels the diagonal of a general bivariate rational function. We design in Section 5 such an algorithm for computing a polynomial equation for the diagonal of an arbitrary bivariate rational function. We show in Proposition 20 that generically, the size of the minimal polynomial for the diagonal of a rational function is exponential in the degree of the input and that our algorithm computes it in quasi-optimal complexity (Theorem 18). The algorithm has two main steps that may be of independent interest. The first step is the computation of a polynomial equation for the residues of a bivariate rational function. We propose an efficient algorithm for this task, that is a polynomial-time version of Bronstein's algorithm [12]; corresponding size and complexity bounds are given in Theorem 8. The second step is the computation of a polynomial equation for the sums of a fixed number of roots of a given polynomial. We design an additive version of the Platypus algorithm [2, §2.3] and analyze it in Theorem 12.
Recurrences.
Since it is also classical that algebraic series are differentially finite (ie, satisfy linear differential equations with polynomial coefficients), the coefficients of these bivariate diagonals satisfy linear recurrences, and this leads to an optimal algorithm for the computation of their first terms [16, 17, 4]. We show, however, that computing an annihilating polynomial of the diagonal first is usually not the right approach and that a direct computation of the recurrence [3] will be more efficient. For completeness, we mention that in more than two variables, diagonals of rational functions are still differentially finite [15, 30] and currently the most efficient algorithm in that situation is the one based on the Griffiths-Dwork method [7, 27].
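As a concrete illustration of the recurrence-based approach (our example, not taken from the paper): for $F = 1/(1-x-y)$ the diagonal coefficients are the central binomial coefficients $\binom{2n}{n}$, and the linear recurrence they satisfy yields the first N terms without ever computing a polynomial equation for the diagonal. The function name below is ours.

```python
# Hedged sketch: expanding the diagonal of F = 1/(1 - x - y) from its
# linear recurrence (n + 1) a_{n+1} = (4n + 2) a_n, a_0 = 1.
from math import comb

def diagonal_terms(N):
    """First N diagonal coefficients of 1/(1 - x - y), in O(N) big-integer ops."""
    a = [1]
    for n in range(N - 1):
        # exact integer division: (4n + 2) a_n is always divisible by n + 1
        a.append(a[-1] * (4 * n + 2) // (n + 1))
    return a

# Cross-check against the direct bivariate expansion: the coefficient of
# x^i y^j in 1/(1 - x - y) is C(i + j, i), so the diagonal term is C(2n, n).
assert diagonal_terms(8) == [comb(2 * n, n) for n in range(8)]
```

The point of the comparison made in the text is that this recurrence has polynomial size in the input, whereas an annihilating polynomial of the diagonal is generically of exponential size.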
Walks.
Diagonals of rational functions appear naturally in enumerative combinatorics. In particular, the enumeration of unidimensional walks has been the subject of recent activity, see [2] and the references therein. Three generating functions of different types of walks are of interest: the generating series B of bridges, E of excursions and M of meanders (these are defined precisely in Section 6). The algebraicity of these generating functions is classical as well, and related to that of bivariate diagonals. Beyond this structural result, several quantitative and effective results are known. Explicit formulas give the generating functions in terms of implicit algebraic functions attached to the set of allowed steps in the cases of excursions [11, §4], [23], bridges and meanders [2]. Moreover, Bousquet-Mélou gave a tight exponential bound on the degree of the annihilating polynomial in the case of excursions [10, §2.1], while Banderier and Flajolet designed an algorithm (called the Platypus Algorithm) computing it [2, §2.3].

Our message for these walks is that again, precomputing a polynomial equation is too costly if one is only interested in the enumeration. Instead, we propose to precompute a differential equation for B, that has polynomial size only, to use it for expanding B, and to recover the expansion of E from that of B. For meanders, we compute a polynomial-size differential equation for log M, from which the expansion of M can be computed efficiently. Our algorithms have quasi-linear complexity in the precision of the expansion, while keeping the precomputation step in polynomial complexity (Theorem 24).

Structure of the article.
After a preliminary section on background and notation, we first discuss several special bivariate resultants of broader general interest in Sections 3 and 4. Next, we consider diagonals, the size of their minimal polynomials and an efficient way of computing annihilating polynomials in Section 5. Finally, we turn to walks in Section 6 and show how to compute the coefficients of the generating functions of excursions and of meanders efficiently.
A preliminary version of this article has appeared at the ISSAC'15 conference [5]. In the present version, we give tight bounds in the main results (Theorems 12 and 18), an improved algorithm for the algebraic residues and more detailed proofs throughout.
Acknowledgments.
This work was supported in part by the project FastRelax ANR-14-CE25-0018-01.

2. Background and Notation
In this section, that might be skipped at first reading, we introduce notation and technical results that will be used throughout the article.

2.1. Notation.
In this article, K denotes a field of characteristic 0, and $\bar K$ an algebraic closure of K. We denote by $K[x]_n$ the set of polynomials in $K[x]$ of degree less than n. Similarly, $K(x)_n$ stands for the set of rational functions in $K(x)$ with numerator and denominator in $K[x]_n$, and $K[[x]]_n$ for the set of power series in $K[[x]]$ truncated at precision n.

If P is a polynomial in $K[x,y]$, then its degree with respect to x (resp. y) is denoted $\deg_x P$ (resp. $\deg_y P$). We take the convention that $\deg 0 = -\infty$. The bidegree of P is the pair $\operatorname{bideg} P = (\deg_x P, \deg_y P)$. The notation $\deg$ without any subscript is used for univariate polynomials. Inequalities between bidegrees are component-wise. The set of polynomials in $K[x,y]$ of bidegree less than $(n,m)$ is denoted by $K[x,y]_{n,m}$, and similarly for more variables.

The valuation of a polynomial $F \in K[x]$ or a power series $F \in K[[x]]$ is its smallest exponent with nonzero coefficient. It is denoted $\operatorname{val} F$, with the convention $\operatorname{val} 0 = \infty$. The reciprocal of a polynomial $P \in K[x]$ is the polynomial $\operatorname{rec}(P) = x^{\deg P} P(1/x)$. If $P = c(x - \alpha_1)\cdots(x - \alpha_d)$ with $c \neq 0$ and $\alpha_i \in \bar K$ for all i, the notation $N(P)$ stands for the generating series of the Newton sums of P:
$$N(P) = \sum_{n \ge 0} \left(\alpha_1^n + \alpha_2^n + \cdots + \alpha_d^n\right) x^n.$$
A polynomial is called square-free when its gcd with its derivative is trivial. A square-free decomposition of a nonzero polynomial $Q \in A[y]$, where $A = K$ or $K[x]$, is a factorization $Q = Q_1 Q_2^2 \cdots Q_m^m$, with $Q_i \in A[y]$ square-free, the $Q_i$'s pairwise coprime and $\deg_y(Q_m) > 0$. The corresponding square-free part of Q is the polynomial $Q^\star = Q_1 Q_2 \cdots Q_m$. If Q is square-free then $Q = Q^\star$.

The coefficient of $x^n$ in a power series $A \in K[[x]]$ is denoted $[x^n]A$. If $A = \sum_{i=0}^{\infty} a_i x^i$, then $A \bmod x^n$ denotes the polynomial $\sum_{i=0}^{n-1} a_i x^i$. The exponential series $\sum_n x^n/n!$ is denoted $\exp(x)$. The Hadamard product of two power series A and B is the power series $A \odot B$ such that $[x^n]\, A \odot B = [x^n]A \cdot [x^n]B$ for all n. If $F(x,y) = \sum_{i,j \ge 0} f_{i,j}\, x^i y^j$ is a bivariate power series in $K[[x,y]]$, the diagonal of F, denoted $\operatorname{Diag} F$, is the univariate power series in $K[[t]]$ defined by $\operatorname{Diag} F(t) = \sum_{n \ge 0} f_{n,n}\, t^n$.

2.2. Complexity Estimates.
We recall classical complexity notation and facts for later use. Let K be again a field of characteristic zero. Unless otherwise specified, we estimate the cost of our algorithms by counting arithmetic operations in K (denoted "ops.") at unit cost. The soft-O notation $\tilde O(\cdot)$ indicates that polylogarithmic factors are omitted in the complexity estimates (see [22, Def. 25.8] for a precise definition). The arithmetic size of an element of K is 1. That of a univariate polynomial is its degree plus 1 (ie, we are considering dense representations). That of tuples of polynomials is the sum of their sizes, and this defines the size for rational functions and multivariate polynomials. We say that an algorithm has quasi-linear complexity if its complexity is $\tilde O(d)$, where d is the maximal arithmetic size of the input and of the output. In that case, the algorithm is said to be quasi-optimal.

Univariate operations.
Throughout this article we will use the fact that most operations on polynomials, rational functions and power series in one variable can be performed in quasi-linear time. Standard references for these questions are the books [22] and [13], as well as [37]. The needed results are summarized in Fact 1 below.
Fact 1.
The following operations can be performed in $\tilde O(n)$ ops. in K:
(1) addition, product and differentiation of elements in $K[x]_n$, $K(x)_n$ and $K[[x]]_n$; integration in $K[x]_n$ and $K[[x]]_n$;
(2) extended gcd, square-free decomposition and resultant in $K[x]_n$;
(3) multipoint evaluation in $K[x]_n$, $K(x)_n$ at $O(n)$ points in K; interpolation in $K[x]_n$ and $K(x)_n$ from n (resp. $2n-1$) values at pairwise distinct points in K;
(4) inverse, logarithm, exponential in $K[[x]]_n$ (when defined);
(5) conversions between $P \in K[x]_n$ and $N(P) \bmod x^n \in K[x]_n$.

Multivariate operations.
Basic operations on polynomials, rational functions and power series in several variables are hard questions from the algorithmic point of view. For instance, no general quasi-optimal algorithm is currently known for computing resultants of bivariate polynomials, even though in several important cases such algorithms are available [6]. Multiplication is the most basic non-trivial operation in this setting. The following result can be proved using Kronecker's substitution; it is quasi-optimal for a fixed number of variables $m = O(1)$. For polynomials with more complicated monomial supports, or when the number of variables grows, more sophisticated techniques apply [14, 31, 29, 40].

Fact 2.
For fixed m, polynomials in $K[x_1,\dots,x_m]_{d_1,\dots,d_m}$ and power series in $K[[x_1,\dots,x_m]]_{d_1,\dots,d_m}$ can be multiplied using $\tilde O(d_1 \cdots d_m)$ ops.

A related operation is multipoint evaluation and interpolation. The simplest case is when the evaluation points form an m-dimensional tensor product grid $I_1 \times \cdots \times I_m$, where $I_j$ is a set of cardinality $d_j$; it extends to subgrids of tensor product grids [40].

Fact 3. [31] For fixed m, polynomials in $K[x_1,\dots,x_m]_{d_1,\dots,d_m}$ can be evaluated and interpolated from the values that they take on $d_1 \cdots d_m$ points that form an m-dimensional tensor product grid using $\tilde O(d_1 \cdots d_m)$ ops.

Again, the complexity in Fact 3 is quasi-optimal for fixed $m = O(1)$. A general (although non-optimal) technique to deal with more involved operations on multivariate algebraic objects (eg, in $K[x,y]$) is to use (multivariate) evaluation and interpolation on polynomials and to perform operations on the evaluated algebraic objects using Facts 1-3. To put this strategy in practice, the size of the output needs to be well controlled. We illustrate this philosophy on the example of resultant computation, based on the following easy variation of [22, Thm. 6.22].
Fact 4.
Let $P(x,y)$ and $Q(x,y)$ be bivariate polynomials of respective bidegrees $(d_x^P, d_y^P)$ and $(d_x^Q, d_y^Q)$. Then
$$\deg \operatorname{Resultant}_y\bigl(P(x,y), Q(x,y)\bigr) \le d_x^P d_y^Q + d_x^Q d_y^P,$$
and this is an equality whenever one of $d_x^Q$ or $d_x^P$ is zero.

Lemma 5.
Let P and Q be polynomials in $K[x_1,\dots,x_m,y]_{d_1,\dots,d_m,d}$. Then $R = \operatorname{Resultant}_y(P, Q)$ belongs to $K[x_1,\dots,x_m]_{D_1,\dots,D_m}$, where $D_i = 1 + 2(d-1)(d_i-1)$. Moreover, the coefficients of R can be computed using $\tilde O(2^m d_1 \cdots d_m\, d^{m+1})$ ops. in K.

Proof. The degree estimates follow from Fact 4. To compute R, we use an evaluation-interpolation scheme: P and Q are evaluated at $D = D_1 \cdots D_m$ points $(x_1,\dots,x_m)$ forming an m-dimensional tensor product grid; D univariate resultants in $K[y]_d$ are computed; R is recovered by interpolation. By Fact 3, the evaluation and interpolation steps are performed in $\tilde O(mD)$ ops. The second step has cost $\tilde O(dD)$. Using the inequality $D \le 2^m d_1 \cdots d_m\, d^m$ concludes the proof. □

We conclude this section by recalling two complexity results on bivariate polynomials and rational functions; for proofs, see [28] and [3].
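The evaluation-interpolation scheme of the proof can be made concrete in the bivariate case $m = 1$. The following SymPy sketch (helper names are ours) samples x at enough points, computes univariate resultants, and interpolates; it assumes the y-leading coefficients of P and Q do not vanish at the sample points, which holds below because they are constants.

```python
# Sketch of resultant computation by evaluation-interpolation (case m = 1).
import sympy as sp

x, y = sp.symbols('x y')

def resultant_eval_interp(P, Q):
    """Res_y(P, Q): evaluate x, take univariate resultants, interpolate.

    Assumes no degree drop at the sample points (e.g. P, Q monic in y)."""
    dPx, dPy = sp.degree(P, x), sp.degree(P, y)
    dQx, dQy = sp.degree(Q, x), sp.degree(Q, y)
    D = dPx * dQy + dQx * dPy + 1   # Fact 4 degree bound, plus one point
    samples = [(a, sp.resultant(P.subs(x, a), Q.subs(x, a), y))
               for a in range(D)]
    return sp.expand(sp.interpolate(samples, x))

P_demo = y**2 + x*y + 1             # monic in y, so evaluation is safe
Q_demo = y**3 + x**2*y + 2
R_demo = resultant_eval_interp(P_demo, Q_demo)
```

One can check that `R_demo` agrees with a direct call to `sp.resultant(P_demo, Q_demo, y)`.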
Fact 6. (1) A square-free decomposition of polynomials in $K[x,y]_{d_x,d_y}$ can be computed using $\tilde O(d_x d_y)$ ops.
(2) If $P, Q \in K[x,y]$ are non-zero coprime polynomials such that $\operatorname{bideg}(P) < \operatorname{bideg}(Q)$ and Q is primitive wrt y, then a minimal telescoper for $P/Q$ of degree $O(d_x d_y^\star d_y)$ and order at most $d_y^\star$ can be computed using $\tilde O(d_x d_y d_y^{\star 2})$ ops, where $(d_x, d_y) = \operatorname{bideg}(Q)$ and $d_y^\star$ is the degree in y of any square-free part of Q.

Recall that a minimal telescoper for
$P/Q$ is a differential operator $L \in K[x]\langle \partial_x \rangle$ of minimal order such that $L \cdot (P/Q) = \partial_y(g)$ with $g \in K[x,y]$.

3. Polynomials for Residues
3.1. Algorithm.

We are interested in a polynomial that vanishes at some or all of the residues of a given rational function. It is a classical result in symbolic integration that in the case of simple poles, there is a resultant formula for such a polynomial, first introduced by Rothstein [35] and Trager [39]. This was later generalized by Bronstein [12] to accommodate multiple poles as well. However, as mentioned by Bronstein, the complexity of his method grows exponentially with the multiplicity of the poles. Instead, we develop in this section an algorithm with polynomial complexity.

Let $f = P/Q$ be a nonzero element in $K(y)$, where P, Q are two coprime polynomials in $K[y]$. Let also $\hat Q$ be a divisor of Q such that $\hat Q$ and $Q/\hat Q$ are coprime. In our context, $\hat Q$ represents the subset of the roots of Q at which we want to compute an annihilating polynomial of the residues. Let $Q_1 Q_2^2 \cdots Q_m^m$ be a square-free decomposition of $\hat Q$. For $i \in \{1,\dots,m\}$, if $\alpha$ is a root of $Q_i$ in an algebraic extension of K, then it is simple and the residue of f at $\alpha$ is the coefficient of $t^{-1}$ in the Laurent expansion of $f(\alpha + t)$ at $t = 0$. Consider the polynomial $V_i(y,t) = (Q_i(y+t) - Q_i(y))/t$. Since $\alpha$ is a simple root of $Q_i$, $V_i$ satisfies $V_i(\alpha, t) = Q_i(\alpha+t)/t$ and $V_i(\alpha, 0) = Q_i'(\alpha) \neq 0$. Therefore, the rational function g defined by $g(y,t) = f(y+t)\, Q_i^i(y+t)/V_i^i(y,t)$ satisfies $g(\alpha, t) = f(\alpha+t) \cdot t^i$ and has the advantage of being regular at $t = 0$.
Algorithm AlgebraicResidues(P/Q, $\hat Q$)
Input: Three polynomials P, Q and $\hat Q$, a divisor of Q in $K[y]$ such that $\hat Q$ and $Q/\hat Q$ are coprime ($\hat Q$ can be Q).
Output: A polynomial in $K[z]$ canceling the residues of $P/Q$ at the roots of $\hat Q$.

1. Compute $Q_1 Q_2^2 \cdots Q_m^m$, a square-free decomposition of $\hat Q$;
2. for $i \leftarrow 1$ to m do
   if $\deg_y Q_i = 0$ then $R_i \leftarrow 1$
   else
     $U_i(y) \leftarrow Q(y)/Q_i^i(y)$; $V_i(y,t) \leftarrow (Q_i(y+t) - Q_i(y))/t$;
     Expand $\dfrac{P(y+t)}{U_i(y+t)\, V_i^i(y,t)} = S_0 + \cdots + S_{i-1}\, t^{i-1} + O(t^i)$;
     Write $S_{i-1}$ as $A_i(y)/B_i(y)$ with $A_i$ and $B_i$ coprime polynomials;
     $R_i(z) \leftarrow \operatorname{Resultant}_y(A_i - zB_i, Q_i)$;
3. return $R_1 R_2 \cdots R_m$

Algorithm 1. Polynomial canceling the residues

The residue of f at $\alpha$ may hence be computed as the evaluation at $y = \alpha$ of $[t^{i-1}]\, g(y,t)$. If this coefficient is denoted $S_{i-1}(y) = A_i(y)/B_i(y)$, with polynomials $A_i$ and $B_i$, the residue at $\alpha$ is thus a root of $\operatorname{Resultant}_y(A_i - zB_i, Q_i)$. When the multiplicity of the pole is 1 ($m = 1$), this is exactly the Rothstein-Trager resultant. This computation leads to Algorithm 1, which avoids the exponential blowup of the complexity that would follow from a symbolic precomputation of the Bronstein resultants.

Example 7.
Let $d \ge 0$ and let $G_d(x,y) \in \mathbb{Q}(x)[y]$ be the rational function $y^d/(y - y^2 - x)^{d+1}$. The poles have order $d+1$. In this example, the algorithm can be performed by hand for arbitrary d: a square-free decomposition has $m = d+1$ and $Q_m = y - y^2 - x$, the other $Q_i$'s being 1. Then $V_m = 1 - 2y - t$ and the next step is to expand
$$\frac{(y+t)^d}{(1 - 2y - t)^{d+1}} = \frac{(y+t)^d}{(1-2y)^{d+1}\left(1 - \frac{t}{1-2y}\right)^{d+1}}.$$
Expanding the binomial series gives the coefficient of $t^d$ as $A_m/B_m$, with
$$A_m = \sum_{k=0}^{d} \binom{d}{k}\binom{d+k}{k}\, y^k (1-2y)^{d-k}, \qquad B_m = (1-2y)^{2d+1}.$$
The residues are then cancelled by $R_m = \operatorname{Resultant}_y(A_m - zB_m, Q_m)$, namely by
$$(1)\qquad R_m = (1-4x)^{2d+1}\, z^2 - \Bigl(\sum_{k=0}^{\lfloor d/2 \rfloor} \binom{d}{2k}\binom{2k}{k}\, x^k \Bigr)^{2}.$$
(Equality (1) is a consequence of the identity $A_m = \sum_{k=0}^{\lfloor d/2 \rfloor} \binom{d}{2k}\binom{2k}{k}\, (y - y^2)^k$, which implies $A_m \bmod Q_m = \sum_{k=0}^{\lfloor d/2 \rfloor} \binom{d}{2k}\binom{2k}{k}\, x^k$, while $B_m \bmod Q_m = (1-4x)^d (1-2y)$.) Both sides of the identity satisfy
$$(2y-1)^2 (d+1)\, u_d - (2d+3)\, u_{d+1} + (d+2)\, u_{d+2} = 0, \qquad u_0 = u_1 = 1.$$
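A direct, unoptimized rendering of Algorithm 1 in SymPy makes the steps concrete; this is our sketch, using generic routines rather than the quasi-optimal evaluation-interpolation subroutines of the paper, and the helper names are ours. It is checked against Example 7 with $d = 1$, where the residues of $y/(y - y^2 - x)^2$ are canceled by $(1-4x)^3 z^2 - 1$ up to a constant factor.

```python
# Sketch of Algorithm 1 (AlgebraicResidues) with generic sympy routines.
import sympy as sp

x, y, t, z = sp.symbols('x y t z')

def algebraic_residues(P, Q, Qhat):
    """Polynomial in z canceling the residues of P/Q at the roots (in y)
    of Qhat, a divisor of Q with Qhat and Q/Qhat coprime."""
    _, factors = sp.sqf_list(sp.expand(Qhat), y)  # Qhat = Q1 * Q2^2 * ... * Qm^m
    R = sp.Integer(1)
    for Qi, i in factors:
        if sp.degree(Qi, y) == 0:
            continue                              # R_i = 1
        Ui = sp.cancel(Q / Qi**i)
        Vi = sp.cancel((Qi.subs(y, y + t) - Qi) / t)
        # expand P(y+t)/(U_i(y+t) V_i^i) = S_0 + ... + S_{i-1} t^{i-1} + O(t^i)
        F = sp.cancel(P.subs(y, y + t) / (Ui.subs(y, y + t) * Vi**i))
        S = sp.series(F, t, 0, i).removeO()
        Si = S.coeff(t, i - 1) if i > 1 else S
        Ai, Bi = sp.fraction(sp.cancel(sp.together(Si)))
        R = R * sp.resultant(Ai - z * Bi, Qi, y)
    return sp.expand(R)

# Example 7 with d = 1: expected annihilator (1 - 4x)^3 z^2 - 1, up to a constant.
Q_ex = sp.expand((y - y**2 - x)**2)
R_ex = algebraic_residues(y, Q_ex, Q_ex)
ratio = sp.cancel(R_ex / ((1 - 4*x)**3 * z**2 - 1))
```

Since the resultant is only determined up to a nonzero constant here, the check is that `ratio` is a nonzero number.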
In our applications, as in the previous example, the polynomials P and Q have coefficients that are themselves polynomials in another variable x. The rest of this section is devoted to the proof of the following.

Theorem 8.
Let $P(x,y)/Q(x,y) \in K(x,y)_{d_x+1,\, d_y+1}$. Let $\hat Q$ be a divisor of Q, $\hat Q^\star$ be a square-free part of it wrt y, and denote by m the number of factors in the square-free decomposition of $\hat Q$. Let $(d_x^\star, d_y^\star)$ be bounds on the bidegree of $Q^\star$. Then the polynomial computed by Algorithm 1 annihilates the residues of $P/Q$ at the roots of $\hat Q$, has degree in z bounded by $\deg_y \hat Q^\star$ and degree in x bounded by
$$d_x^\star(2d_y + 1) + (2d_y^\star - 1)\, d_x - 2 d_x^\star d_y^\star.$$
It can be computed in $\tilde O\bigl(m^2 d_x^\star d_y^\star (m^2 + d_y^{\star 2})\bigr)$ operations in K.

Note that rewriting the bound under the equivalent form
$$2 d_x d_y - (d_x - d_x^\star)(2 d_y - 2 d_y^\star + 1)$$
shows that the degree in x is bounded by $2 d_x d_y$, independently of the multiplicities. The complexity is also bounded independently of the multiplicities by $\tilde O(d_x^\star d_y^\star d_y^2)$.

3.2. Bounds.
By Fact 4, the resultant $R_i$ has degree in z exactly $\deg Q_i$, so that the degree in z of the result is bounded by $\deg_y Q_1 + \cdots + \deg_y Q_m = \deg_y \hat Q^\star$. The degree in x is the sum of the degrees in x of all the $R_i$'s. In order to derive a bound on the degree of $R_i$ using Fact 4, we first consider the degrees in x and y of $A_i$ and $B_i$. The important point is that these degrees do not depend so much on Q as on its square-free part. In order to quantify this precisely, we first focus on power series expansions of a special type, about which we state a few useful lemmas.

For a polynomial $Q \in K[x]$ and a real number $\alpha$, we denote by $E_\alpha(Q)$ the subset of $K(x)[[t]]$ formed of power series that can be written
$$c_0 + c_1\, \frac{t}{Q} + \cdots + c_n\, \frac{t^n}{Q^n} + \cdots,$$
with $c_n \in K[x]$ and $\deg c_n \le n\alpha$ for all n (recall that $\deg 0 = -\infty$, which makes it convenient to allow negative $\alpha$). This notation extends to the case when x is a tuple of variables, with $\alpha$ replaced by a tuple of real numbers. The main properties of $E_\alpha(Q)$ are summarized as follows.

Lemma 9.
Let
$Q, R \in K[x]$, $\alpha, \beta \in \mathbb{R}$ and $f \in K[[t]]$.
(1) The set $E_\alpha(Q)$ is a subring of $K(x)[[t]]$;
(2) Let $S \in E_\alpha(Q)$ with $S(0) = 0$; then $f(S) \in E_\alpha(Q)$;
(3) The products obey $E_\alpha(Q) \cdot E_\beta(R) \subset E_{\max(\alpha + \deg R,\; \beta + \deg Q)}(QR)$.

Proof.
For (3), if $A = \sum_n a_n t^n/Q^n$ and $B = \sum_n b_n t^n/R^n$ belong respectively to $E_\alpha(Q)$ and $E_\beta(R)$, then the n-th coefficient of their product is a sum of terms of the form $a_i(x)\, Q^{n-i}\, b_{n-i}(x)\, R^i/(QR)^n$. Therefore, the degree of the numerator is bounded by $i(\alpha + \deg R) + (n-i)(\beta + \deg Q)$, whence (3) is proved. Property (1) is proved similarly, the n-th coefficient of the product being a sum of terms $a_i(x)\, b_{n-i}(x)\, t^n/Q^n$. In Property (2), the condition on $S(0)$ makes $f(S)$ well-defined. The result then follows from (1). □
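Property (3) and its proof can be checked on a toy instance (our own example): take $A = \sum_n x^n t^n/Q^n \in E_1(Q)$ and $B = \sum_n t^n/R^n \in E_0(R)$; writing the n-th coefficient of $AB$ over $(QR)^n$, the numerator degree must be at most $n \cdot \max(1 + \deg R,\; 0 + \deg Q)$.

```python
# Numeric check of the degree bound in Lemma 9(3) on a small example.
import sympy as sp

x = sp.symbols('x')

Q, R = x**2 + 1, x + 3                 # deg Q = 2, deg R = 1
bound = max(1 + sp.degree(R, x), 0 + sp.degree(Q, x))   # = 2

degrees = []
for n in range(6):
    # numerator of the n-th coefficient of A*B over the denominator (QR)^n
    numerator = sp.expand(sum(x**i * Q**(n - i) * R**i for i in range(n + 1)))
    degrees.append(sp.degree(numerator, x))
    assert degrees[-1] <= n * bound
```

On this instance the bound is attained at every order, illustrating that the exponent in part (3) cannot be improved in general.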
Corollary 10.
Let $Q \in K[x,t]$ be such that $Q(0,0) \neq 0$. Let $Q^\star$ be a square-free part of Q and $\delta(Q^\star)$ its total degree in $(x,t)$. Then
$$\frac{1}{Q(x,t)} \in \frac{1}{Q(x,0)}\; E_{\min(\deg_x(Q^\star),\, \delta(Q^\star)-1)}\bigl(Q^\star(x,0)\bigr).$$

Proof.
For all i, the coefficient of $t^i$ in Q has degree at most $\min(\deg_x(Q),\, \delta(Q) - i)$. Thus $R := (Q(x,t) - Q(x,0))/Q(x,0)$ belongs to $E_{\min(\deg_x(Q),\, \delta(Q)-1)}(Q(x,0))$. Then $Q(x,t) = Q(x,0)(1 + R)$, and using Part (2) of Lemma 9 with $f = 1/(1+y)$ then gives the result when Q is square-free. Using $f = 1/(1+y)^i$ gives the result for a pure power by Part (1) of the lemma. The general case then follows from Part (3) by induction on the number of parts in the square-free decomposition of Q, using additivity of degree and total degree. □

Now, we turn to the fraction $F_i := P(x, y+t)\,/\,\bigl(U_i(x, y+t)\, V_i(x,y,t)^i\bigr)$, with $U_i(x,y) = Q(x,y)/Q_i(x,y)^i$ and $V_i(x,y,t) = (Q_i(x,y+t) - Q_i(x,y))/t$. We use bidegrees with respect to $(x,y)$ and observe that
$$\operatorname{bideg} U_i^\star = \operatorname{bideg} Q^\star - \operatorname{bideg} Q_i, \qquad \operatorname{bideg} V_i \le \operatorname{bideg} Q_i - (0,1).$$
The total degrees in $(y,t)$ behave similarly: that of $U_i^\star(x, y+t)$ is $\deg_y Q^\star - \deg_y Q_i$, while that of $V_i(x,y,t)$ is $\deg_y Q_i - 1$. Corollary 10 gives
$$(2)\qquad \frac{1}{U_i(x, y+t)} \in \frac{1}{U_i(x,y)}\; E_{\operatorname{bideg} Q^\star - \operatorname{bideg} Q_i - (0,1)}\bigl(U_i^\star(x,y)\bigr),$$
$$(3)\qquad \frac{1}{V_i(x,y,t)^i} \in \frac{1}{V_i(x,y,0)^i}\; E_{\operatorname{bideg} Q_i - (0,1)}\bigl(V_i(x,y,0)\bigr).$$
From there, Part (3) of Lemma 9 shows that the product of these series belongs to
$$\frac{1}{U_i(x,y)\, V_i(x,y,0)^i}\; E_{\operatorname{bideg} Q^\star - (0,1)}\bigl(U_i^\star(x,y)\, V_i(x,y,0)\bigr).$$
Thus the coefficient $S_{i-1}$ of $t^{i-1}$ in the power series expansion of $F_i$ can be written as $A_i/B_i$ with
$$B_i = U_i(x,y)\, V_i(x,y,0)^i\, \bigl(U_i^\star(x,y)\, V_i(x,y,0)\bigr)^{i-1},$$
and finally
$$(4)\qquad \operatorname{bideg} A_i \le \operatorname{bideg} P + (i-1)\operatorname{bideg} Q^\star - (0,\, i-1), \qquad \operatorname{bideg} B_i \le \operatorname{bideg} Q + (i-1)\operatorname{bideg} Q^\star - (0,\, 2i-1),$$
whence
$$\operatorname{bideg}(A_i - zB_i) \le \max\bigl(\operatorname{bideg} P,\; \operatorname{bideg} Q - (0,1)\bigr) + (i-1)\operatorname{bideg} Q^\star - (0,\, i-1).$$
Fact 4 can now be exploited, leading to a bound on the degree of the resultant:
$$\deg_x R_i \le \deg_y Q_i \bigl(\max(\deg_x P, \deg_x Q) + (i-1)\deg_x Q^\star\bigr) + \deg_x Q_i \bigl(\max(\deg_y P, \deg_y Q - 1) + (i-1)(\deg_y Q^\star - 1)\bigr).$$
Next, we sum over the indices i corresponding to factors of $\hat Q$. This leads to the following bound for the degree in x of the result:
$$\deg_y \hat Q^\star \max(\deg_x P, \deg_x Q) + (\deg_y \hat Q - \deg_y \hat Q^\star)\deg_x Q^\star + \deg_x \hat Q^\star \max(\deg_y P, \deg_y Q - 1) + (\deg_x \hat Q - \deg_x \hat Q^\star)(\deg_y Q^\star - 1).$$
This bound being an increasing function of each of the degrees that appear, it is itself upper bounded by replacing any of those degrees by an upper bound. In the context of Theorem 8, the bidegrees of P, Q and $\hat Q$ are bounded by $(d_x, d_y)$, while those of $Q^\star$ and $\hat Q^\star$ are bounded by $(d_x^\star, d_y^\star)$. This leads to the bound
$$d_y^\star d_x + (d_y - d_y^\star)\, d_x^\star + d_x^\star d_y + (d_x - d_x^\star)(d_y^\star - 1),$$
which rewrites as the bound in the Theorem and completes that part of the proof.

3.3. Complexity.
By Fact 6, a square-free decomposition of $\hat Q$ can be computed using $\tilde O(d_x d_y)$ ops. We now focus on the computations performed inside the i-th iteration of the loop and write $(d_x^{(i)}, d_y^{(i)})$ for the bidegree of $Q_i$. Computing $U_i$ requires an exact division of polynomials of bidegrees at most $(d_x, d_y)$; this division can be performed by evaluation-interpolation in $\tilde O(d_x d_y)$ ops. Similarly, the trivariate polynomial $V_i$ can be computed by evaluation-interpolation wrt $(x,y)$ in time $\tilde O\bigl(d_x^{(i)} (d_y^{(i)})^2\bigr)$. By Eq. (4), both $A_i(x,y)$ and $B_i(x,y)$ have bidegrees at most $(D_x^{(i)}, D_y^{(i)})$, where $D_x^{(i)} = d_x + i\, d_x^\star$ and $D_y^{(i)} = d_y + i\, d_y^\star$. They can be computed by evaluation-interpolation in $\tilde O\bigl(i\, D_x^{(i)} D_y^{(i)}\bigr)$ ops. Finally, the resultant $R_i(x,z)$ has bidegree at most $\bigl(d_x^{(i)} D_y^{(i)} + d_y^{(i)} D_x^{(i)},\; d_y^{(i)}\bigr)$, and since the degree in y of $A_i - zB_i$ and $Q_i$ is at most $D_y^{(i)}$, it can be computed by evaluation-interpolation in $\tilde O\bigl((d_x^{(i)} D_y^{(i)} + d_y^{(i)} D_x^{(i)})\, d_y^{(i)} D_y^{(i)}\bigr)$ ops by Lemma 5. The total cost of the loop is thus $\tilde O(L)$, where
$$L = \sum_{i=1}^{m} \Bigl( \bigl(i + (d_y^{(i)})^2\bigr)\, D_x^{(i)} D_y^{(i)} + d_x^{(i)} d_y^{(i)} (D_y^{(i)})^2 \Bigr).$$
Using the (crude) bounds $D_x^{(i)} \le D_x^{(m)}$, $D_y^{(i)} \le D_y^{(m)}$, $\sum_{i=1}^{m} (d_y^{(i)})^2 \le d_y^{\star 2}$ and $\sum_{i=1}^{m} d_x^{(i)} d_y^{(i)} \le d_x^\star d_y^\star$ shows that L is bounded by
$$D_x^{(m)} D_y^{(m)} \sum_{i=1}^{m} \bigl(i + (d_y^{(i)})^2\bigr) + (D_y^{(m)})^2 \sum_{i=1}^{m} d_x^{(i)} d_y^{(i)} \le D_x^{(m)} D_y^{(m)} \bigl(m^2 + d_y^{\star 2}\bigr) + (D_y^{(m)})^2\, d_x^\star d_y^\star,$$
which, by using the inequalities $D_x^{(m)} \le 2m\, d_x^\star$ and $D_y^{(m)} \le 2m\, d_y^\star$, is seen to belong to $O\bigl(m^2 d_x^\star d_y^\star (m^2 + d_y^{\star 2})\bigr)$, as was to be proved. This completes the proof of the theorem.

Remark.
Note that one could also use Hermite reduction combined with the usual Rothstein-Trager resultant in order to compute a polynomial $\tilde R(x,z)$ that annihilates the residues. Indeed, Hermite reduction computes an auxiliary rational function that admits the same residues as the input, while only having simple poles. A close inspection of this approach provides the same bound $d_y^\star$ for the degree in z of $\tilde R(x,z)$, but a less tight bound for its degree in x, namely worse by a factor of $d_y^\star$. The complexity of this alternative approach appears to be $\tilde O\bigl(d_x d_y (d_y + d_y^\star)\bigr)$ (using results from [3]), to be compared with the complexity bound from Theorem 8.

4. Sums of roots of a polynomial
4.1. Algorithm.
Given a polynomial $P \in K[y]$ of degree d with coefficients in a field K of characteristic 0, let $\alpha_1, \dots, \alpha_d$ be its (not necessarily distinct) roots in the algebraic closure of K. For any positive integer $c \le d$, the polynomial of degree $\binom{d}{c}$ defined by
$$(5)\qquad \Sigma_c P = \prod_{i_1 < \cdots < i_c} \bigl(y - (\alpha_{i_1} + \cdots + \alpha_{i_c})\bigr)$$
is computed by Algorithm 2 below.

Algorithm PureComposedSum(P, c)
Input: A polynomial P of degree d in $K[y]$, a positive integer $c \le d$.
Output: The polynomial $\Sigma_c P$ from Eq. (5).

1. $D \leftarrow \binom{d}{c}$
2. $N(P) \leftarrow \operatorname{rec}(P')/\operatorname{rec}(P) \bmod y^{D+1}$
3. $S \leftarrow N(P) \odot \exp(y) \bmod y^{D+1}$
4. $F \leftarrow \exp\Bigl(\sum_{n=1}^{c} (-1)^{n-1}\, \dfrac{S(ny)\, z^n}{n}\Bigr) \bmod (y^{D+1}, z^{c+1})$
5. $N(\Sigma_c P) \leftarrow ([z^c]\, F) \odot \sum_n n!\, y^n \bmod y^{D+1}$
6. return $\operatorname{rec}\Bigl(\exp\Bigl(\displaystyle\int \frac{D - N(\Sigma_c P)}{y}\, \mathrm{d}y\Bigr) \bmod y^{D+1}\Bigr)$

Algorithm 2. Polynomial canceling the sums of c roots
Lemma 11. Let $P \in K[y]$ be a polynomial of degree d, let $N(P)$ denote the generating series of its Newton sums and let S be the series $N(P) \odot \exp(y)$. Let $\Psi_c$ be the polynomial in $K[t_1,\dots,t_c]$ defined by
$$\Psi_c(t_1, \dots, t_c) = [z^c]\, \exp\Bigl(\sum_{n \ge 1} (-1)^{n-1}\, \frac{t_n z^n}{n}\Bigr).$$
Then the following equality holds:
$$N(\Sigma_c P) \odot \exp(y) = \Psi_c\bigl(S(y), S(2y), \dots, S(cy)\bigr).$$

Proof.
By construction, the series S is
$$S(y) = \sum_{n \ge 0} \left(\alpha_1^n + \alpha_2^n + \cdots + \alpha_d^n\right) \frac{y^n}{n!} = \sum_{i=1}^{d} \exp(\alpha_i y).$$
When applied to the polynomial $\Sigma_c P$, this becomes
$$N(\Sigma_c P) \odot \exp(y) = \sum_{i_1 < \cdots < i_c} \exp\bigl((\alpha_{i_1} + \cdots + \alpha_{i_c})\, y\bigr).$$
This is the c-th elementary symmetric function of the quantities $\exp(\alpha_1 y), \dots, \exp(\alpha_d y)$, whose power sums are $S(y), S(2y), \dots$; the classical generating series $\sum_c e_c z^c = \exp\bigl(\sum_{n \ge 1} (-1)^{n-1} p_n z^n/n\bigr)$ relating elementary symmetric functions to power sums then gives the result. □

Theorem 12.
Let $P \in K[x,y]_{d_x+1,\, d_y+1}$, and let $c \le d_y$ be a positive integer. Let $a \in K[x]$ denote the leading coefficient of P wrt y and let $\Sigma_c P$ be defined as in Eq. (5). We also denote
$$D_x := \binom{d_y - 1}{c - 1}, \qquad D_y := \binom{d_y}{c}.$$
Then $a^{D_x} \cdot \Sigma_c P$ is a polynomial in $K[x,y]$ that cancels all sums $\alpha_{i_1} + \cdots + \alpha_{i_c}$ of c roots $\alpha_i(x)$ of P, with $i_1 < \cdots < i_c$, and satisfies
$$\deg_x\bigl(a^{D_x} \cdot \Sigma_c P\bigr) \le d_x D_x, \qquad \deg_y\bigl(a^{D_x} \cdot \Sigma_c P\bigr) = D_y.$$
Moreover, this polynomial can be computed in $\tilde O(c\, d_x D_x D_y)$ ops.

These bounds are sharp. Experiments suggest that for generic P of bidegree $(d_x, d_y)$ the minimal polynomial of $\alpha_{i_1} + \cdots + \alpha_{i_c}$ has bidegree precisely $(d_x D_x, D_y)$. Similarly, the complexity result is quasi-optimal up to a factor of c only.

4.2. Bounds.
We start with the following effective version of a very classical result on symmetric functions [43, Theorem 6.21].
Lemma 13.
Let $\alpha_1, \dots, \alpha_n$ be indeterminates, and $\sigma_1, \dots, \sigma_n$ be the associated elementary symmetric functions. Let $P \in K[\alpha_1, \dots, \alpha_n]$ be a symmetric polynomial satisfying $\deg_{\alpha_i} P \le d$ for all $1 \le i \le n$. Then P can be expressed as a polynomial in $\sigma_1, \dots, \sigma_n$ of total degree at most d.
This is a consequence of the form of the matrix of the change of bases from the elementary symmetric functions to the monomial symmetric functions, as described for instance in the proof of [38, Theorem 7.4.4]. Since P is symmetric and has degree at most d with respect to each variable, it can be written as a linear combination of monomial symmetric functions of the form $\sum \alpha_{i_1}^{\lambda_1} \cdots \alpha_{i_k}^{\lambda_k}$ with $d \ge \lambda_1 \ge \cdots \ge \lambda_k \ge 1$, and each such function is a polynomial of total degree at most $\lambda_1 \le d$ in $\sigma_1, \dots, \sigma_n$. □
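The lemma is easy to verify on small instances (our own examples, with $n = 3$ and per-variable degree $d = 2$): both the power sum $p_2$ and the monomial symmetric function attached to $\lambda = (2,1)$ are expressible in the $\sigma_i$ with total degree 2.

```python
# Sanity check of Lemma 13 on two symmetric polynomials in three variables.
import sympy as sp

a1, a2, a3 = sp.symbols('a1 a2 a3')
s1 = a1 + a2 + a3
s2 = a1*a2 + a1*a3 + a2*a3
s3 = a1*a2*a3

# p2 has degree 2 in each variable; its sigma-expression s1^2 - 2 s2
# has total degree 2, as the lemma predicts.
p2 = a1**2 + a2**2 + a3**2
assert sp.expand(p2 - (s1**2 - 2*s2)) == 0

# Monomial symmetric function for lambda = (2, 1): per-variable degree 2,
# and s1*s2 - 3*s3 again has total degree 2 in the sigma's.
m21 = sum(u**2 * v for u in (a1, a2, a3) for v in (a1, a2, a3) if u != v)
assert sp.expand(m21 - (s1*s2 - 3*s3)) == 0
```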
4.3. Complexity.

The computation is performed by evaluation and interpolation at $1 + d_x D_x$ values of x. By Fact 1, at each of these values, the computation of the truncated series expansions $N(P)$ and S in $K[[y]]_{D_y}$ have complexity $\tilde O(D_y)$; so do the computations of $N(\Sigma_c P)$ and the last step; the most expensive step is the computation of F, which costs $\tilde O(c D_y)$ ops. in K. Since this is executed $O(d_x D_x)$ times, the total cost is $\tilde O(c\, d_x D_x D_y)$.

5. Diagonals
In this section we turn to our main topic, namely the computation of annihilating polynomials for diagonals of bivariate rational functions. The algorithm relies on a classical expression of the diagonal as a sum of residues (see Lemma 14), and on the results of Sections 3 and 4. The conclusions of the analysis of Algorithm 3 can be found in Theorem 18 and Proposition 20.

5.1. Algebraic equations for diagonals.
Let $F(x,y) = \sum_{i,j \ge 0} a_{i,j} x^i y^j$ be a rational function in $K(x,y)$ whose denominator does not vanish at $(0,0)$. The diagonal of $F$ is defined as $\operatorname{Diag} F(t) = \sum_{i \ge 0} a_{i,i} t^i$. A first basic, but very important, remark is that
$$\operatorname{Diag} F(t) = [y^{-1}]\, \frac{1}{y}\, F\Big(\frac{t}{y}, y\Big).$$
When $K = \mathbb{C}$, this coefficient can be viewed as a Cauchy integral and computed by the residue formula [21]. For general $K$ (of characteristic 0), we proceed similarly with a purely algebraic approach, adapted from [23, Theorem 6.1]. (The reader who is not interested in the general proof may also skip directly to Lemma 14.) The starting point is the partial fraction decomposition of $G(t,y) := \frac{1}{y} F(\frac{t}{y}, y)$ considered as a rational function in $K(t)(y)$:
$$(7)\qquad G(t,y) = \sum_{i=1}^{n} \sum_{j=1}^{m_i} f_{i,j}(t,y), \qquad \text{where } f_{i,j}(t,y) = \frac{r_{i,j}(t)}{(y - y_i(t))^j},\quad 1 \le i \le n,\ 1 \le j \le m_i.$$
In particular, $r_{i,1}(t)$ is the residue of $G$ at $y_i(t)$ for all $i \in \{1, 2, \dots, n\}$. By Puiseux's theorem, there exists $N \in \mathbb{N}^\star$ such that the $y_i$'s and $r_{i,j}$'s all lie in the field $K((t^{1/N}))$. In order to apply the operator $[y^{-1}]$ on both sides of Equation (7), it is necessary to find a ring where both the equality and the operator $[y^{-1}]$ make sense. We are going to check that $A = K((y))((t^{1/N}))$, with $[y^{-1}]$ computed coefficient-wise, is suitable for this. First, as a rational function, it is immediate that $G(t,y)$ belongs to $A$. In order to expand the right-hand side, we consider each term separately and distinguish between the cases $\operatorname{val}_t(y_i) \le 0$ and $\operatorname{val}_t(y_i) >$
0. If $\operatorname{val}_t(y_i) \le 0$, $f_{i,j}$ can be written as follows:
$$f_{i,j} = \frac{r_{i,j}}{(-y_i)^j} \cdot \frac{1}{(1 - y/y_i)^j} = \frac{r_{i,j}}{(-y_i)^j} \sum_{k \ge 0} \binom{-j}{k} \frac{(-1)^k\, y^k}{y_i^k} \ \in\ K((t^{1/N}))[[y]].$$
Since $\operatorname{val}_t(1/y_i) \ge$
0, the series $f_{i,j}/r_{i,j}$ actually belongs to $K[[t^{1/N}]][[y]] \cong K[[y]][[t^{1/N}]]$. Hence $f_{i,j} \in K[[y]]((t^{1/N})) \subset A$, and in particular $[y^{-1}] f_{i,j} = 0$. On the other hand, if $\operatorname{val}_t(y_i) > 0$, $f_{i,j}$ can be expanded directly in $A$ as:
$$f_{i,j} = \frac{r_{i,j}}{y^j} \cdot \frac{1}{(1 - y_i/y)^j} = \frac{r_{i,j}}{y^j} \sum_{k \ge 0} \binom{-j}{k} \frac{(-1)^k\, y_i^k}{y^k}.$$
Since $y_i/y \in A$ and $\operatorname{val}_t(y_i/y) >$
0, this last quantity is the sum of a convergent series (in the sense of formal Laurent series) of elements of $A$, hence belongs to $A$. In this case we obtain $[y^{-1}] f_{i,j} = r_{i,1}$ if $j = 1$, and $0$ otherwise. We have everything we need to apply $[y^{-1}]$ on both sides of Equation (7), leading to the generalization to any base field of characteristic 0 of Furstenberg's classical result [21, §2].

Lemma 14. If $F(x,y)$ is a rational function in $K(x,y)$ whose denominator does not vanish at $(0,0)$, then
$$(8)\qquad \operatorname{Diag} F(t) = \sum_{\substack{y(t) \in \mathcal{P} \\ \operatorname{val}_t(y(t)) > 0}} \operatorname{Residue}\Big(\frac{1}{y}\, F\Big(\frac{t}{y}, y\Big),\ y = y(t)\Big),$$
where $\mathcal{P}$ is the set of poles of $\frac{1}{y} F(\frac{t}{y}, y)$.

The poles $y(t) \in \mathcal{P}$ such that $\operatorname{val}_t(y(t)) > 0$ are called the small branches of $Q$ and we denote their number by $\operatorname{Nsmall}(Q)$. Since the elements of $\mathcal{P}$ are algebraic and finite in number, and residues are obtained by series expansion, which entails only rational operations, it follows that the diagonal is algebraic too. Combining the algorithms of the previous sections gives Algorithm 3, which produces a polynomial equation for $\operatorname{Diag} F$.

Example 15.
Let $d \ge 0$ and let $F_d(x,y)$ be the rational function $1/(1 - x - y)^{d+1}$. The diagonal of $F_d$ is equal to
$$\sum_{n \ge 0} \binom{n+d}{n}\binom{n+d}{d}\, t^n.$$

Algorithm
AlgebraicDiagonal (A/B)
Input :
Two polynomials $A$ and $B$ in $K[x,y]$, with $B(0,0) \ne 0$
A polynomial $\Phi \in K[t, \Delta]$ such that $\Phi(t, \operatorname{Diag} A/B) = 0$
1. $P, Q, \alpha \leftarrow y^{\operatorname{ddeg}^-(A)} A(t/y, y),\ \ y^{\operatorname{ddeg}^-(B)} B(t/y, y),\ \ \operatorname{ddeg}^-(B) - \operatorname{ddeg}^-(A) - 1$
2. if $\alpha < 0$ then $r \leftarrow$ residue of $y^\alpha P/Q$ at $y = 0$
3. $R \leftarrow$ AlgebraicResidues$(y^\alpha P/Q,\ Q)$
4. $c \leftarrow$ number of small branches of $Q$
5. $\Phi(t, \Delta) \leftarrow \operatorname{numer}($PureComposedSum$(R, c))$
6. if $\alpha < 0$ then $\Phi(t, \Delta) \leftarrow \operatorname{numer}(\Phi(t, \Delta - r))$
7. return $\Phi(t, \Delta)$
Polynomial canceling the diagonal of a rational function. The notation ddeg is defined in Eq. (9); numer denotes the numerator of the irreducible form of a fraction.
By the previous argument, it is an algebraic series, which is the sum of the residues of the rational function $G_d$ of Example 7 over its small branches (with $x$ replaced by $t$). In this case, the denominator is $y - t - y^2$. It has one solution tending to 0 with $t$; the other one tends to 1. Thus the diagonal is canceled by the quadratic polynomial (1).
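For illustration, this residue computation can be replayed in a computer algebra system. The sketch below (assuming sympy is available) computes the residue of $1/(y - t - y^2)$ at its small branch and checks that it matches the central binomial coefficients $\binom{2n}{n}$, i.e., the diagonal of $1/(1-x-y)$.

```python
import sympy as sp

t, y = sp.symbols('t y')
# G(t, y) = (1/y) F(t/y, y) for F = 1/(1 - x - y): its denominator is y - t - y^2
den = y - t - y**2
ysmall = (1 - sp.sqrt(1 - 4*t)) / 2          # the small branch: root tending to 0 with t
res = (1 / sp.diff(den, y)).subs(y, ysmall)  # residue of 1/den at the simple root
res = sp.simplify(res)                       # reduces to 1/sqrt(1 - 4t)
series = sp.series(res, t, 0, 5).removeO()
coeffs = [series.coeff(t, n) for n in range(5)]
assert coeffs == [sp.binomial(2*n, n) for n in range(5)]   # 1, 2, 6, 20, 70
```

The residue is $1/\sqrt{1-4t}$, whose minimal polynomial is exactly the quadratic annihilating the diagonal.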
Example 16. For an integer $d > 0$, we consider the rational function
$$F_d(x,y) = \frac{x^{d-1}}{1 - x^d - y^{d+1}},$$
of bidegree $(d, d+1)$. The first step of Algorithm 3 produces
$$G_d(t,y) = y^\alpha \frac{P}{Q} = \frac{t^{d-1}}{y^d - t^d - y^{2d+1}},$$
of bidegree $(d, 2d+1)$, whose denominator is irreducible with $d$ small branches. From there, Algorithm 3 computes a polynomial $\Phi_d$ annihilating $\operatorname{Diag} F_d$, which is experimentally irreducible and whose bidegrees for $d = 1, 2, \dots, 8$ are given by
$$\Big(d(d+1)\binom{2d-2}{d-1},\ \binom{2d+1}{d}\Big),$$
of exponential growth in the bidegree of $F_d$. In general, these bidegrees do not grow faster than in this example. In Theorem 18 below, we prove bounds that are barely larger than the values above.

Sloped diagonals. If $p$ and $q$ are relatively prime positive integers and $F(x,y) = \sum_{i,j \ge 0} f_{i,j} x^i y^j$, then the sloped diagonal of $F$, $\operatorname{Diag}_{p,q} F(t)$, is $\sum_{n \ge 0} f_{pn,qn}\, t^n$. Direct manipulations show that
$$\operatorname{Diag}_{p,q} F(t^{pq}) = \operatorname{Diag}\big(F(x^q, y^p)\big)(t),$$
so that our bounds and algorithm apply almost directly to these more general diagonals.
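The sloped-diagonal identity can be checked coefficient-wise. A small sketch for $F = 1/(1-x-y)$ (so that $f_{i,j} = \binom{i+j}{i}$), with $p = 1$ and $q = 2$; only the standard library is used.

```python
from math import comb

# f_{i,j} = [x^i y^j] 1/(1 - x - y) = binom(i+j, i)
f = lambda i, j: comb(i + j, i)

p, q, N = 1, 2, 6
# left-hand side: Diag_{p,q} F (t^{pq}) has f_{pn,qn} as coefficient of t^{pq*n}
lhs = {p * q * n: f(p * n, q * n) for n in range(N)}
# right-hand side: coefficient of t^m in Diag(F(x^q, y^p)) is [x^m y^m] F(x^q, y^p),
# i.e. f_{i,j} with q*i = m and p*j = m
def rhs(m):
    return f(m // q, m // p) if m % q == 0 and m % p == 0 else 0

assert all(rhs(m) == lhs.get(m, 0) for m in range(p * q * N))
```

Here both sides give $\binom{3n}{n}$ at $t^{2n}$ and $0$ at odd exponents, as the identity predicts.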
5.2. Degree Bounds and Complexity.
The rest of this section is devoted tothe derivation of bounds on the complexity of Algorithm 3 and on the size of thepolynomial it computes, which are given in Theorem 18.
Degrees.
A bound on the bidegree of $\Phi$ will be obtained from the bounds successively given by Theorems 8 and 12. In order to follow the impact of the change of variables in the first step, we define the lower diagonal degree and upper diagonal degree of a polynomial $P(x,y) = \sum_{i,j} a_{i,j} x^i y^j$ respectively as the integers
$$(9)\qquad \operatorname{ddeg}^-(P) = \sup\{\, i - j \mid a_{i,j} \ne 0 \,\}, \qquad \operatorname{ddeg}^+(P) = \sup\{\, j - i \mid a_{i,j} \ne 0 \,\}.$$
We collect the properties of interest in the following.

Lemma 17.
For any $P$ and $Q$ in $K[x,y]$:
(1) $\operatorname{ddeg}^-(P) \le \deg_x P$ and $\operatorname{ddeg}^+(P) \le \deg_y P$;
(2) $\operatorname{ddeg}^\pm(PQ) = \operatorname{ddeg}^\pm(P) + \operatorname{ddeg}^\pm(Q)$;
(3) there exists a polynomial $\tilde P \in K[x,y]$ such that $P(x/y, y) = y^{-\operatorname{ddeg}^-(P)}\, \tilde P(x,y)$, with $\tilde P(x, 0) \ne 0$ and $\operatorname{bideg}(\tilde P) = (\deg_x P,\ \operatorname{ddeg}^-(P) + \operatorname{ddeg}^+(P))$;
(4) $\operatorname{bideg}((\tilde P)^\star) = (\deg_x P^\star,\ \operatorname{ddeg}^-(P^\star) + \operatorname{ddeg}^+(P^\star))$.

Proof. Part (1) is immediate. The quantities $\operatorname{ddeg}^-(P)$ and $\operatorname{ddeg}^+(P)$ are nothing else than $-\operatorname{val}_y P(x/y, y)$ and $\deg_y P(x/y, y)$, which makes Parts (2) and (3) clear too. From there, we get the identity $\widetilde{PQ} = \tilde P \tilde Q$ for arbitrary $P$ and $Q$, whence $(\tilde P)^\star = \widetilde{P^\star}$, and Part (4) is a consequence of Parts (2) and (3). □

Thus, starting with a rational function $F = A/B \in K(x,y)$, with $(d_x, d_y)$ a bound on the bidegrees of $A$ and $B$, and $(d^\star_x, d^\star_y)$ a bound on the bidegree of a square-free part $B^\star$ of $B$, the first step of the algorithm constructs $G(t,y) = y^\alpha P/Q$, with polynomials $P$ and $Q$, $\alpha = \operatorname{ddeg}^-(B) - \operatorname{ddeg}^-(A) - 1$, and
$$(10)\qquad \operatorname{bideg} P \le (d_x,\ \operatorname{ddeg}^-(A) + \operatorname{ddeg}^+(A)), \quad \operatorname{bideg} Q \le (d_x,\ \operatorname{ddeg}^-(B) + \operatorname{ddeg}^+(B)), \quad \operatorname{bideg} Q^\star = (d^\star_x,\ \operatorname{ddeg}^-(B^\star) + \operatorname{ddeg}^+(B^\star)).$$
We first explain how to compute the number $c$ of small branches of $Q$.
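Lemma 17 (2) is easy to test on random polynomials with positive coefficients, so that no cancellation can occur in the product. A sketch using a dictionary support representation (standard library only):

```python
import random

# exponent-support representation: {(i, j): a_ij}
def ddeg_minus(P): return max(i - j for (i, j), a in P.items() if a)
def ddeg_plus(P):  return max(j - i for (i, j), a in P.items() if a)

def mul(P, Q):
    R = {}
    for (i, j), a in P.items():
        for (k, l), b in Q.items():
            R[(i + k, j + l)] = R.get((i + k, j + l), 0) + a * b
    return R

random.seed(0)
for _ in range(100):
    P = {(random.randrange(4), random.randrange(4)): random.randrange(1, 5) for _ in range(3)}
    Q = {(random.randrange(4), random.randrange(4)): random.randrange(1, 5) for _ in range(3)}
    # Lemma 17 (2): ddeg± is additive on products (positive coefficients,
    # hence no cancellation between terms)
    assert ddeg_minus(mul(P, Q)) == ddeg_minus(P) + ddeg_minus(Q)
    assert ddeg_plus(mul(P, Q)) == ddeg_plus(P) + ddeg_plus(Q)
```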
Small branches. It is classical that for a polynomial $P = \sum a_{i,j} x^i y^j \in K[x,y]$, the number of its solutions $y(x)$ tending to 0 can be read off its Newton polygon (see, e.g., [42]). This polygon is the lower convex hull of the union of $(i,j) + \mathbb{N}^2$ for $(i,j)$ such that $a_{i,j} \ne 0$. The number of solutions tending to 0 is given by the minimal $y$-coordinate of its leftmost points. Since the number of small branches counts only distinct solutions, it is thus given by
$$(11)\qquad \operatorname{Nsmall}(P) = \operatorname{Nsmall}(P^\star) = \operatorname{val}_y\big([x^{\operatorname{val}_x P^\star}]\, P^\star\big).$$
The change of variables $x \mapsto x/y$ changes the coordinates of the point corresponding to $a_{i,j}$ into $(i, j-i)$. This transformation maps the vertices of the original Newton polygon to the vertices of the Newton polygon of the Laurent polynomial $P(x/y, y)$. Multiplying by $y^{\operatorname{ddeg}^-(P)}$ yields a polynomial and shifts the Newton polygon up by $\operatorname{ddeg}^-(P)$, thus
$$\operatorname{Nsmall}\big(y^{\operatorname{ddeg}^-(P)}\, P(x/y, y)\big) = \operatorname{Nsmall}(P^\star) + \operatorname{ddeg}^-(P^\star).$$
The number of small branches of the polynomial $Q$ constructed above is then given by
$$(12)\qquad c := \operatorname{Nsmall}(B^\star) + \operatorname{ddeg}^-(B^\star).$$

Degree in $\Delta$. At this point, there is a slight difference between the cases $\alpha \ge 0$ and $\alpha <$
0. Indeed, in the latter case we have to take the additional small branch at 0 into account. To do this, we denote by $r$ the residue of $G$ at 0. Since $r$ is rational, we may compute a polynomial $R$ that vanishes only on the residues at the non-zero small branches of the denominator of $G$. If $\tilde\Phi(t, \Delta)$ is the polynomial produced by applying Algorithm 2 to $(R, c)$, then the polynomial $\Phi(t, \Delta) = \tilde\Phi(t, \Delta - r)$ cancels $\operatorname{Diag} F$. Thus we apply Algorithm 1 to $((y^\alpha P)/Q,\ Q)$ if $\alpha \ge$
0, and to $(P/(y^{-\alpha} Q),\ Q)$ otherwise. By Theorem 8, in both cases we obtain a polynomial $R$ of degree $D_y$, with
$$(13)\qquad D_y := \operatorname{ddeg}^-(B^\star) + \operatorname{ddeg}^+(B^\star),$$
and applying Algorithm 2 gives a polynomial $\Phi$ with $\deg_\Delta \Phi = \binom{D_y}{c}$.

Degree in $t$. To bound the degree of $\Phi$ in $t$, we can neglect our optimization and apply Algorithm 1 to $(y^\alpha P, Q, Q)$ or
$(P, y^{-\alpha} Q, y^{-\alpha} Q)$ depending on whether $\alpha \ge 0$ or $\alpha <$
0. Indeed, the polynomial $\Phi$ obtained this way is clearly a multiple of the one computed by the algorithm. By Theorem 8, since the bidegrees of $P$, $y^\alpha P$, $Q$ and $y^{-\alpha} Q$ are all bounded by $(d_x,\ d_x + d_y + 1)$, we compute a polynomial $R$ of degree bounded by $D_x$, where
$$(14)\qquad D_x := 2\, d_x (d_x + d_y + 1) + d_x - (d_x - d^\star_x)(d_x - d^\star_x + d_y - d^\star_y + 1).$$
Applying Theorem 12 to
$(R, c)$ or $(R, c+1)$ depending on the sign of $\alpha$ yields in both cases $\deg_t \Phi \le D_x \binom{D_y}{c}$.

Complexity.
We now analyze the cost of Algorithm 3. The computation of $P$ and $Q$ does not require any arithmetic operation. Next, the computation of $R$ and $r$ takes $\tilde O((d_x + d_y)^3)$ ops. (see the comment after Theorem 8). The number of small branches is obtained with no arithmetic operation from a square-free decomposition computed in Algorithm 1. The bounds of the discussion above and Theorem 12 show that Algorithm 2 uses $\tilde O\big(c D_x \binom{D_y}{c}\big)$ ops. Finally, if a translation of the variable is needed, it can be performed by evaluation-interpolation in $\tilde O\big(D_x \binom{D_y}{c}\big)$ ops. (One may as well evaluate and interpolate w.r.t. $x$ and apply better algorithms for univariate translation [6, §5].) We summarize all the results of this section in the following theorem.

Theorem 18.
Let $F = A/B$ be a rational function in $K(x,y)$ with $B(0,0) \ne 0$. Let $(d_x, d_y)$ (resp. $(d^\star_x, d^\star_y)$) be a bound on the bidegrees of $A$ and $B$ (resp. of a square-free part of $B$). Let $D_x$, $D_y$, $c$ be defined as in Eqs. (14), (13), (12). Then there exists a polynomial $\Phi \in K[t, \Delta]$ such that $\Phi(t, \operatorname{Diag} F(t)) = 0$ and
$$\deg_\Delta \Phi = \binom{D_y}{c}, \qquad \deg_t \Phi \le D_x \binom{D_y}{c}.$$
Algorithm 3 computes it in $\tilde O\big(c D_x \binom{D_y}{c} + (d_x + d_y)^3\big)$ ops.

A general bound on $\operatorname{bideg} \Phi$ depending only on a bound $(d, d)$ on the bidegree of the input can be deduced from the above as
$$\operatorname{bideg} \Phi \le \big(d(4d+3),\ 1\big) \times \binom{2d}{d}.$$

5.3. Optimization.
Assume that the denominator of $F(x/y, y)/y$ is already partially factored as $Q(y) = \tilde Q(y) \prod_{i=1}^{k} (y - y_i(x))^{\alpha_i}$, where the $y_i$'s are $k$ distinct rational branches among the $c$ small branches of $Q$. Then their corresponding (rational) residues $r_i$ contribute to the diagonal. The special case where $k = 1$ and $y_1 = 0$ is exactly the situation that occurred in the discussion on $\deg_\Delta \Phi$ before Theorem 18, when $\alpha < 0$. The trick that we used extends directly to the general case: it suffices to apply Algorithm 1 to $\tilde Q$, Algorithm 2 with $c - k$ roots, and $\Phi$ is then recovered through a change of variable.

5.4. Generic case.
The bounds from Theorem 18 on the bidegree of $\Phi$ are slightly pessimistic w.r.t. the variable $t$, but generically tight w.r.t. the variable $\Delta$, as will be proved in Proposition 20 below. We first need a lemma.

Lemma 19.
Let $K$ be a field of characteristic 0, and let $P \in K[y]$ be a polynomial of degree $d$ with Galois group $S_d$ over $K$. Assume that the roots $\alpha_1, \dots, \alpha_d$ of $P$ are algebraically independent over $\mathbb{Q}$. Then, for any $c \le d$, the degree-$\binom{d}{c}$ polynomial $\Sigma_c P$ is irreducible in $K[y]$.

Proof. Since $\Sigma = \alpha_1 + \cdots + \alpha_c$ is a root of $\Sigma_c P$, it suffices to prove that $K(\Sigma)$ has degree $\binom{d}{c}$ over $K$. The $\alpha_i$'s being algebraically independent, any permutation $\sigma \in S_d$ of the $\alpha_i$'s that leaves $\Sigma$ unchanged has to preserve the sets $\{\alpha_1, \dots, \alpha_c\}$ and $\{\alpha_{c+1}, \dots, \alpha_d\}$. Conversely, any such permutation induces an automorphism of $K(\alpha_1, \dots, \alpha_d)$ that leaves $\Sigma$ invariant. In other words, the Galois group of $K(\alpha_1, \dots, \alpha_d)$ over $K(\Sigma)$ is equal to $S_c \times S_{d-c}$. It follows that $K(\alpha_1, \dots, \alpha_d)$ has degree $c!\,(d-c)!$ over $K(\Sigma)$ and degree $d!$ over $K$, so that $K(\Sigma)$ has degree $\binom{d}{c}$ over $K$. □

Proposition 20.
Let $A$ be a polynomial in $\mathbb{Q}[x,y]$. Let $d_x, d_y$ be non-negative integers, $s^- \le d_x$, $s^+ \le d_y$, and
$$B(x,y) = \sum_{i=0}^{s^-} b^{(x)}_i x^i + \sum_{j=1}^{s^+} b^{(y)}_j y^j + \sum_{\substack{i \le d_x,\ j \le d_y \\ -s^- \le j - i \le s^+}} b_{i,j}\, x^i y^j \ \in\ \mathbb{Q}\big[(b^{(x)}_i), (b^{(y)}_j)\big][x, y],$$
where the $b^{(x)}_i$ and $b^{(y)}_j$ are indeterminates and the $b_{i,j} \in \mathbb{Q}$. Then the polynomial computed by Algorithm 3 with input $A/B$ is irreducible of degree $\binom{s^- + s^+}{s^-}$ over $K = \mathbb{Q}\big((b^{(x)}_i), (b^{(y)}_j)\big)(x)$.

Proof. First apply the change of variables to obtain $G = y^\alpha P/Q$, with
$$Q(x,y) = \sum_{i=0}^{s^-} b^{(x)}_i x^i y^{s^- - i} + \sum_{j=1}^{s^+} b^{(y)}_j y^{s^- + j} + \sum_{i,j} b_{i,j}\, x^i y^{s^- - i + j}.$$
Denote $d = s^- + s^+$. Then the polynomial $Q(1, y)$ has the form $\sum_{j \le d} t_j y^j$, where each of the $t_j$'s is the sum of one of the indeterminates and rational constants. This implies that the $t_j$'s are algebraically independent over $\mathbb{Q}$. Therefore, $Q(1,y)$ has Galois group $S_d$ over $\mathbb{Q}(t_0, \dots, t_d)$ and its roots are algebraically independent over $\mathbb{Q}$ [41, §57]. This property lifts to $Q(x, y)$ [41, §61], which thus has Galois group $S_d$ and algebraically independent roots, denoted $y_1, \dots, y_d$. Now define the polynomial $R(x,y) = \prod_i \big(y - \tilde P(x, y_i)/\partial_y Q(x, y_i)\big)$, where $\tilde P = y^\alpha P$ if $\alpha \ge 0$ and $\tilde P = P$ otherwise. Since $Q$ has simple roots, this is exactly the polynomial that is computed by Algorithm 1. The family $\{\tilde P(x, y_i)/\partial_y Q(x, y_i)\}_i$ is algebraically independent, since any algebraic relation between them would induce one for the $y_i$'s by clearing out denominators. In particular, the natural morphism $\operatorname{Gal}(Q/K) = S_d \to \operatorname{Gal}(R/K)$ is injective, whence an isomorphism. (Here, $\operatorname{Gal}(P/K)$ denotes the Galois group of $P \in K[y]$ over $K$.)
Since an immediate investigation of the Newton polygon of $Q$ shows that it has $s^-$ small branches, we conclude using Lemma 19 and the fact that the translation of the variable does not change the irreducibility of $\Phi$. □

Proposition 20 should be viewed as an optimality result. Indeed, for a generic rational function
$A/B$ as in the proposition, we have $B = B^\star$, $\operatorname{ddeg}^-(B) = s^-$, $\operatorname{ddeg}^+(B) = s^+$, and $B$ has $s^-$ small branches. This implies that the bound of Theorem 18 for $\deg_\Delta \Phi$ is optimal in this (generic) case. If one believes that random examples should behave like the generic case, then the proposition means that the polynomial computed by Algorithm 3 will be irreducible most of the time. As an example, we consider the special case of Proposition 20 where $s^- = s^+ = d_x = d_y = d$. In this case, $\deg_\Delta \Phi$ is $\binom{2d}{d}$. We compare this to the following experiment on random examples.

Example 21.
We consider a rational function $F(x,y) = 1/B(x,y)$, where $B(x,y)$ is a dense polynomial of bidegree $(d, d)$ chosen at random. For $d = 1, 2, 3, 4$, algorithm AlgebraicDiagonal$(F)$ produces irreducible outputs with bidegrees
$$\Big(2d\binom{2d-2}{d-1},\ \binom{2d}{d}\Big),$$
so that the bound on $\deg_\Delta \Phi$ is tight in this case, and the irreducibility of the output shows that Theorem 18 cannot be improved further.

6. Walks
The key ingredient in the fact that diagonals may have a big minimal polynomial was the possibility of writing them as a sum of residues. The same exponential growth as in Proposition 20 therefore occurs for other functions bearing this same structure. For instance, constant terms of rational functions in $\mathbb{C}(x)[[y]]$ can also be written as contour integrals of rational functions around the origin, and thus, by the residue theorem, be expressed as a sum of residues. By contrast, such sums of residues of rational functions always satisfy a differential equation of only polynomial size [3]. Thus, when an algebraic function appears to be connected to a sum of residues of a rational function, the use of this differential structure is much better adapted to the computation of series expansions than going through a potentially large polynomial.
Figure 1 (taken from [2]): the four types of paths (walks, bridges, meanders and excursions) and the corresponding generating functions.
As an example where this phenomenon occurs naturally, we consider here the enumeration of unidimensional lattice walks, following Banderier and Flajolet [2] and Bousquet-Mélou [10]. Our goal in this section is to study, from the algorithmic perspective, the series expansions of various generating functions (for bridges, excursions, meanders) that have been identified as algebraic [2]. One of our contributions is to point out that although algebraic series can be expanded fast [16, 17, 4], the precomputation of a polynomial equation could have prohibitive cost. We overcome this difficulty by precomputing differential (instead of polynomial) equations that have polynomial size only, and using them to compute series expansions to precision $N$ for bridges, excursions and meanders in time quasi-linear in $N$.

6.1. Preliminaries.
We start with some vocabulary on lattice walks. A simple step is a vector $(1, u)$ with $u \in \mathbb{Z}$. A step set $S$ is a finite set of simple steps. A unidimensional walk in the plane $\mathbb{Z}^2$ built from $S$ is a finite sequence $(A_0, A_1, \dots, A_n)$ of points in $\mathbb{Z}^2$ such that $A_0 = (0, 0)$ and $\overrightarrow{A_{k-1} A_k} = (1, u_k)$ with $(1, u_k) \in S$. In this case $n$ is called the length of the walk, and $S$ is the step set of the walk. The $y$-coordinate of the endpoint $A_n$, namely $\sum_{i=1}^{n} u_i$, is called the final altitude of the walk. The characteristic polynomial of the step set $S$ is
$$\Gamma_S(y) = \sum_{(1,u) \in S} y^u.$$
Following Banderier and Flajolet, we consider three specific families of walks: bridges, excursions and meanders [2].
Bridges are walks with final altitude 0, meanders are walks confined to the upper half plane, and excursions are bridges that are also meanders. Figure 1, taken from [2], summarizes these definitions graphically. We define the full generating power series of walks
$$W_S(x, y) = \sum_{n \ge 0,\ k \in \mathbb{Z}} w_{n,k}\, x^n y^k \ \in\ \mathbb{Z}[y, y^{-1}][[x]],$$
where $w_{n,k}$ is the number of walks with step set $S$, of length $n$ and final altitude $k$. We denote by $B_S(x)$ (resp. $E_S(x)$, and $M_S(x)$) the power series $\sum_{n \ge 0} u_n x^n$, where $u_n$ is the number of bridges (resp. excursions, and meanders) of length $n$ with step set $S$. We omit the step set $S$ as a subscript when there is no ambiguity. Several properties of the power series $W$, $B$, $E$ and $M$ are classical:

Fact 22. [2, §2.1-2.2] The power series $W$, $B$, $E$ and $M$ satisfy
(1) $W(x,y)$ is rational and $W(x,y) = 1/(1 - x\,\Gamma(y))$;
(2) $B(x)$, $E(x)$ and $M(x)$ are algebraic;
(3) $B(x) = [y^0]\, W(x,y)$;
(4) $E(x) = \exp\big(\int (B(x) - 1)/x \, \mathrm{d}x\big)$.

In what follows, we describe and analyze three methods to compute the power series expansions of $B$, $E$ and $M$. In the next two sections, we first study two previously known methods; then we introduce a new one.

6.2. Expanding the generating power series.
From now on, we fix a step set $S$, and we denote by $u^-$ (resp. $u^+$) the largest $u$ such that $(1, -u) \in S$ (resp. $(1, u) \in S$). We also define $d = u^- + u^+$. The integer $d$ measures the vertical amplitude of $S$; this makes $d$ a good scale for measuring the complexity of the algorithms that follow. We assume that both $u^-$ and $u^+$ are positive, since otherwise the study of bridges, excursions and meanders becomes trivial.

The direct method.
The combinatorial definition of walks yields a recurrence relation for $w_{n,k}$:
$$(16)\qquad w_{n,k} = \sum_{(1,u) \in S} w_{n-1,\, k-u},$$
with initial conditions $w_{0,0} = 1$ and $w_{0,k} = 0$ for $k \ne 0$. If $\tilde w_{n,k}$ denotes the number of walks of length $n$ and final altitude $k$ that never exit the upper half plane, then $\tilde w_{n,k}$ also satisfies recurrence (16), but with the additional initial conditions $\tilde w_{n,k} = 0$ for all $k < 0$. Then the bridges (resp. excursions, meanders) are counted by the numbers $w_{n,0}$ (resp. $\tilde w_{n,0}$, $\sum_k \tilde w_{n,k}$). One can compute these numbers by unrolling the recurrence relation (16). Each use of the recurrence costs $O(d)$ ops., and in the worst case one has to compute $O(d N^2)$ terms of the sequence (for example, if the step set is $S = \{(1,1), \dots, (1,d)\}$). This leads to the computation of each of the generating series in $O(d^2 N^2)$ ops. This quadratic complexity in $N$ is unsatisfactory, and any method that requires the complete expansion of the generating series $W(x,y)$ is bound to be quadratic in $N$. The two other methods that we are going to present are designed to achieve linear or quasi-linear complexity in $N$. As will be explained, this comes at the cost of a precomputation that must be taken into account in the analysis.
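The direct method is straightforward to implement. A sketch for the Dyck step set $S = \{(1,1), (1,-1)\}$, where bridges, excursions and meanders are counted by central binomials, Catalan numbers and $\binom{n}{\lfloor n/2 \rfloor}$ respectively (standard library only):

```python
# Direct unrolling of recurrence (16) for the Dyck step set S = {(1,1), (1,-1)},
# for both the free table w_{n,k} and the confined table w~_{n,k}.
from math import comb

steps = [1, -1]                        # the u's of the simple steps (1, u)
N = 8

def counts(nonnegative):
    w = {0: 1}                         # w_{0,k}: one empty walk, at altitude 0
    out = []
    for n in range(N + 1):
        out.append(w)
        nxt = {}
        for k, v in w.items():
            for u in steps:
                if nonnegative and k + u < 0:
                    continue           # confined to the upper half plane
                nxt[k + u] = nxt.get(k + u, 0) + v
        w = nxt
    return out

w, wt = counts(False), counts(True)
bridges    = [w[n].get(0, 0) for n in range(N + 1)]
excursions = [wt[n].get(0, 0) for n in range(N + 1)]
meanders   = [sum(wt[n].values()) for n in range(N + 1)]
assert bridges[8] == comb(8, 4)         # bridges of length 2n: binom(2n, n)
assert excursions[8] == comb(8, 4) // 5 # Catalan number C_4 = 14
assert meanders[4] == 6                 # binom(4, 2) nonnegative prefixes
```

The tables grow linearly in $n$, which is exactly why this method is quadratic in $N$.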
Using algebraic equations. In [2, §2.3], a method relying on the algebraicity of $B$, $E$ and $M$ (Fact 22 (2)) is suggested. The series $E$ and $M$ can be expressed as products in terms of the small branches of the characteristic polynomial $\Gamma_S$ (see [2, Th. 1, Cor. 1]). From there, a polynomial equation can be obtained using the Platypus algorithm [2, §2.3], which computes a polynomial canceling the products of a fixed number of roots of a given polynomial. Given a polynomial equation $P(z, E) = 0$, another one for $B$ can be deduced from the relation $B = z E'/E + 1$ as $\operatorname{Resultant}_E\big((B - 1) E P_E + z P_z,\ P\big)$. Once a polynomial equation is known for one of these three series, it can be used to compute a linear recurrence with polynomial coefficients satisfied by its coefficients [16, 17, 4]. The naive algorithm introduced above provides a way to compute a sufficiently large number of initial conditions to unroll this recurrence. (For a quantitative result on the required number of initial conditions, see Corollary 28 below.) This method produces an algorithm that computes the first $N$ terms of $B$, $E$ and $M$ in $O(N)$ ops. For this to be an improvement over the naive method for large $N$, the dependence on $d$ of the constant in the $O(\,)$ should not be too large and the precomputation not too costly. Indeed, the cost of the precomputation of an algebraic equation is not negligible. The bound $\binom{d}{u^-}$ on the degrees of equations for excursions has been obtained by Bousquet-Mélou, and shown to be tight for a specific family of step sets, as well as generically [10, §2.1]. This bound may be exponentially large with respect to $d$. Empirically, the polynomials for $B$ and $M$ are similarly large. The situation for differential equations and recurrences is different: $B$ satisfies a differential equation of only polynomial size (see below), whereas (empirically) those for $E$ and $M$ have a potentially exponential size.
These sizes then transfer to the corresponding recurrences and thereby to the constant in the complexity of unrolling them. The purpose of Theorem 24 below is to give explicitly the polynomial dependence in $d$ when using this method, showing at the same time that a true improvement over the naive method can be achieved.

Example 23.
With the step set $S = \{(1, d), (1, 1), (1, -d)\}$ and $d \ge 2$, the counting series $W_S$ equals
$$W_S(x, y) = \frac{y^d}{y^d - x\,(1 + y^{d+1} + y^{2d})}.$$
Experiments indicate that the minimal polynomial of $B_S(x)$ has bidegree $\big(2d\binom{2d-2}{d-1},\ \binom{2d}{d}\big)$, exhibiting an exponential growth in $d$. On the other hand, they show that $B_S(x)$ satisfies a linear differential equation of order $O(d)$ and degree $O(d^2)$ only.

New method.
We now give a method that runs in quasi-linear time (with respect to $N$) and avoids the computation of an algebraic equation. Our method relies on the fact that periods of rational functions such as the one in Part (3) of Fact 22 satisfy differential equations of polynomial size in the degree of the input rational function [3]. We summarize our results in the following theorem, and then go over the proof in each case individually.

Theorem 24.
Let $S$ be a finite set of simple steps and $d = u^- + u^+$. The series $B_S$ (resp. $E_S$ and $M_S$) can be expanded at order $N$ in $O(d^2 N)$ ops. (resp. $\tilde O(d^2 N)$ ops.), after a precomputation in $\tilde O(d^6)$ ops.

6.3. Fast algorithms. Bridges.
To expand $B(x)$, we rely on Fact 22 (3). The formula can be written $B = \frac{1}{2\pi i} \oint W(x,y)\, \frac{\mathrm{d}y}{y}$, the integration path being a
Algorithm Walks$(S, N)$
Input:
A set $S$ of simple steps and an integer $N$
Output: $B_S$, $E_S$, $M_S$ mod $x^{N+1}$
1. $F \leftarrow W(x,y)/y$ [case $B$, $E$] or $W(x,y)/(1-y)$ [case $M$]
2. $D \leftarrow$ HermiteTelescoping$(F)$ [3, Fig. 3]
3. $R \leftarrow$ the recurrence of order $r$ associated to $D$
4. $I \leftarrow [y^0]\, W(x,y) \bmod x^{r+1}$ [case $B$, $E$] or $[y^0]\, y\,W(x,y)/(1-y) \bmod x^{r+1}$ [case $M$]
5. $B \leftarrow [y^0]\, W(x,y) \bmod x^{N+1}$ (from $R$, $I$)
6. $A \leftarrow [y^0]\, y\,W(x,y)/(1-y) \bmod x^{N+1}$ (from $R$, $I$)
7. $E \leftarrow \exp\big(\int (B(x) - 1)/x\, \mathrm{d}x\big) \bmod x^{N+1}$
8. $M \leftarrow \exp\big(-\int A(x)/x\, \mathrm{d}x\big)\big/(1 - \Gamma(1)\, x) \bmod x^{N+1}$
9. return $B$, $E$, $M$
Algorithm 4. Expanding the generating functions of bridges, excursions and meanders.

circle inside a small annulus around the origin [2, proof of Th. 1]. Moreover, $W(x,y)/y$ is of the form $P/Q$, where $\operatorname{bideg} Q \le (1, d)$ and $\operatorname{bideg} P \le (0, d-1)$. Since $P$ and $Q$ are relatively prime and $Q$ is primitive with respect to $y$, Algorithm HermiteTelescoping [3, Fig. 3] computes a telescoper for
$P/Q$, which is also a differential equation satisfied by $B$. By Fact 6 (2), the resulting differential equation has order at most $d$ and degree $O(d^2)$, and is computed using $\tilde O(d^4)$ ops. This differential equation can be turned into a recurrence of order $r = O(d^2)$ in quasi-optimal time (see the discussion after [8, Cor. 2]). We may use it to expand $B(x) \bmod x^N$ in $O(d^2 N)$ ops., once enough initial conditions are known. Again, the initial conditions are computed by means of the direct method. The only remaining question is the number of initial conditions needed. Indeed, the recurrence may be singular, i.e., its leading coefficient may have positive integer roots. If we denote by $\alpha$ the largest such root, then we need to compute the first terms of the recurrence up to index $\max(r - 1, \alpha)$. In order not to break the flow of reading, we postpone the discussion on the size of $\alpha$ to the next section. For now, we only state the result.

Proposition 25.
Let $S$ be a set of simple steps, and $d = \max_{(1,u),(1,v) \in S} |u - v|$. Then the largest integer root of the leading term of the recurrence computed by Algorithm 4 is at most $O(d^2)$.

Proof.
See Section 6.4. □

Thus, a sufficient number of initial conditions is computed with $O(d^6)$ ops. by the direct method, and the total cost of the precomputation is $\tilde O(d^6)$, as announced.

Excursions. If $B(x) \bmod x^{N+1}$ is known, it is then possible to recover $E(x) \bmod x^{N+1}$ thanks to Fact 22 (4). Expanding $E(x)$ comes down to the computation of the exponential of a series, which can be performed using $\tilde O(N)$ ops. (Fact 1 (4)).
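Fact 22 (4) can be checked on truncated power series. A sketch (assuming sympy) for the Dyck step set, where $B(x) = 1/\sqrt{1-4x^2}$ and the excursions are the aerated Catalan numbers:

```python
import sympy as sp

x = sp.symbols('x')
N = 10
# For S = {(1,1), (1,-1)}: B(x) = 1/sqrt(1 - 4x^2) (central binomials at even indices)
B = 1 / sp.sqrt(1 - 4 * x**2)
Bser = sp.series(B, x, 0, N).removeO()
# Fact 22 (4): E = exp( integral of (B - 1)/x ), computed on truncated series
E = sp.series(sp.exp(sp.integrate(sp.expand((Bser - 1) / x), x)), x, 0, N).removeO()
coeffs = [E.coeff(x, n) for n in range(N)]
assert [coeffs[2 * n] for n in range(N // 2)] == [sp.catalan(n) for n in range(N // 2)]
assert all(coeffs[2 * n + 1] == 0 for n in range(N // 2))
```

Only series exponentiation is needed once $B$ is known, which is the source of the quasi-linear cost in $N$.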
Meanders. As in the case of excursions, the logarithmic derivative of $M(x)$ is recovered from a sum of residues by the following.
Proposition 26.
The series $W$ and $M$ are related through
$$A(x) = [y^0]\, \frac{y}{1 - y}\, W(x, y), \qquad M(x) = \frac{\exp\big(-\int A(x)/x\, \mathrm{d}x\big)}{1 - x\,\Gamma(1)}.$$
Proof.
Denote by $y_1, \dots, y_{u^-}$ the small branches of the polynomial $y^{u^-} - x\, y^{u^-} \Gamma(y)$. Then $M$ is given as [2, Cor. 1]:
$$M(x) = \frac{1}{1 - x\,\Gamma(1)} \prod_{i=1}^{u^-} (1 - y_i).$$
On the other hand,
$$A(x) = \frac{1}{2\pi i} \oint \frac{W(x,y)}{1 - y}\, \mathrm{d}y = \sum_{i=1}^{u^-} \operatorname{Residue}_{y = y_i(x)} \Big(\frac{1}{(1-y)(1 - x\,\Gamma(y))}\Big) = -\sum_{i=1}^{u^-} \frac{1}{(1 - y_i)\, x\, \Gamma'(y_i)},$$
where the integral has been taken over a circle around the origin enclosing the small branches. Differentiating the equation $1 - x\,\Gamma(y_i) = 0$ with respect to $x$ leads to $-x\,\Gamma'(y_i) = 1/(x\, y_i')$, whence
$$A(x) = x \sum_{i=1}^{u^-} \frac{y_i'}{1 - y_i}.$$
Therefore, $\prod_i (1 - y_i) = \exp\big(-\int A/x\, \mathrm{d}x\big)$, finishing the proof. □

Thus we apply the same method as in the case of the excursions. We first compute a differential equation for $A(x)$ using the method of [3]. The computation of the initial conditions for $A$ can also be performed naively from its definition as a constant term, by simply expanding $y\,W(x,y)/(1-y)$. The formula of the proposition then recovers $M(x)$. The complexity analysis goes exactly as in the previous case, giving a global cost of $\tilde O(d^6)$ ops. for the precomputation.
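Proposition 26 can be tested on the Dyck step set, for which the meanders of length $n$ are counted by $\binom{n}{\lfloor n/2 \rfloor}$. A sketch (assuming sympy) that computes $A(x)$ naively from its combinatorial meaning, namely the count of walks with final altitude $< 0$:

```python
from itertools import product
import sympy as sp

x = sp.symbols('x')
N = 8
steps = [1, -1]                                   # Dyck steps: Gamma(y) = y + 1/y
# A(x) = [y^0] (y/(1-y)) W(x,y): A_n counts length-n walks with final altitude < 0
def walks_below(n):
    return sum(1 for w in product(steps, repeat=n) if sum(w) < 0)

A = sum(walks_below(n) * x**n for n in range(N))
# Proposition 26: M(x) = exp(-integral of A/x) / (1 - Gamma(1) x), here Gamma(1) = 2
M = sp.series(sp.exp(-sp.integrate(sp.expand(A / x), x)) / (1 - 2 * x), x, 0, N).removeO()
meanders = [M.coeff(x, n) for n in range(N)]
assert meanders == [sp.binomial(n, n // 2) for n in range(N)]
```

The brute-force computation of $A$ plays the role of the naive initial-condition step; in the actual algorithm $A$ is produced by unrolling the recurrence attached to its telescoper.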
6.4. Singular recurrences.

We now come back to the problem of singular recurrences. In our context, the recurrences that we encounter have a very specific structure: they are associated to differential resolvents of polynomials. (The differential resolvent of a polynomial is the least-order differential operator canceling all of its roots.) This structure can be exploited to derive bounds on the singularities of our recurrences. If $P \in K[x][y]$ is a polynomial, consider the recurrence associated to its differential resolvent $L$. The leading coefficient of this recurrence is called the indicial polynomial of $L$ at 0. Its largest integer root will be denoted $\alpha$. The fundamental idea is that there exists a Laurent series solution of $L$ which has valuation $\alpha$ [26, §15.31]. Therefore, it is sufficient to find bounds on the valuations of the solutions of $L$. This is done in the following theorem.
Let $P$ be a polynomial in $K[x][y]$ of bidegree at most $(d_x, d_y)$, and let $L$ be the differential resolvent of $P$. Then all the Laurent series solutions $y(x)$ of $L$ uniformly satisfy $\operatorname{val}_x(y(x)) = O(d_x d_y^2)$.

Proof.
Choose a subfamily $y_1, y_2, \dots, y_n$ of the Puiseux series roots of $P$ that constitutes a basis of the solution space of the resolvent (in particular, $n \le d_y$). Let $y = \sum_{i=1}^{n} \lambda_i y_i$ be a Laurent series solution of the differential resolvent of $P$. Then the fact that $\operatorname{val}(f') \ge \operatorname{val}(f) - 1$ for $f \in K((x))$ implies that
$$\operatorname{val}\big(\operatorname{Wr}(y, y_2, \dots, y_n)\big) \ge \operatorname{val}(y) + \sum_{i=2}^{n} \operatorname{val}(y_i) - \binom{n}{2}.$$
1) max( d x , d y ) + d y ( d y − . The proof is then reduced to showing that val(Wr( y , y , . . . , y n )) = O ( d x d y ). Thisis very similar to the computations conducted in [4, §2.2]. We start by recallingsome facts that are proved there. There exist polynomials W k ∈ K [ x, y ] such thatfor all i ∈ { , , . . . , n } and all k (cid:62)
1, the derivative y ( k ) i can be expressed as y ( k ) i = W k ( x, y i ) P y ( x, y i ) k − . Moreover, the polynomials W k satisfy(17) deg x W k (cid:54) (2 d x − k − d x , deg y W k (cid:54) d y − k − d y + 2 . It follows that D = Q ni =1 P y ( x, y i ) n − ∈ K [ x, y , y , . . . , y n ] is a polynomial suchthat Wr( y , y , . . . , y n ) · D ∈ K [ x, y , y , . . . , y n ]. We will denote by R this lastpolynomial. R is the determinant of the matrix N = y P y ( x, y ) n − · · · y n P y ( x, y n ) n − W ( x, y ) P y ( x, y ) n − · · · W ( x, y n ) P y ( x, y n ) n − W ( x, y ) P y ( x, y ) n − · · · W ( x, y n ) P y ( x, y n ) n − ... ... ... W n − ( x, y ) · · · W n − ( x, y n ) .R is an anti-symmetric polynomial in y , y , . . . , y n , but R is symmetric, as well as D , so we can apply Lemma 13 to see that R and D belong to K ( x ). Therefore, theequality Wr( y , y , . . . , y n ) = RD shows that Wr( y , y , . . . , y n ) is the square root of a rational function in x . Weare going to use this structure and Lemma 13 to derive the desired bound on thevaluation of the Wronskian determinant.If det( N ) is viewed as a polynomial in K [ x, y , y , . . . , y n ], thendeg x det( N ) (cid:54) n − X k =0 ((2 n − d x − k ) (cid:54) n (2 n − d x + n ( n − , and for all i ∈ { , , . . . , n } ,deg y i det( N ) (cid:54) n − d y − n − . Similarly, when D is viewed as a polynomial in K [ x, y , y , . . . , y n ], we have:deg x D = n (2 n − d x , deg y i D = (2 n − d y − . LGEBRAIC DIAGONALS AND WALKS 25
Applying Lemma 13, we deduce that, denoting by p(x) the leading coefficient of P(x, y),

    Wr(y_1, y_2, …, y_n)^2 = U(x) / (p(x)^{s} V(x)^{2})

for polynomials U, V ∈ K[x] and a nonnegative integer s, where

    deg_x U ≤ 2n(2n − 3)d_x + n(n − 1) + 2(2n − 3)d_x d_y − (4n − 8)d_x.

Finally, the inequalities val(Wr(y_1, y_2, …, y_n)^2) ≤ val(U) ≤ deg_x(U) and n ≤ d_y yield

    val(Wr(y_1, y_2, …, y_n)) = O(d_x d_y^2),

which concludes the proof. □

We immediately deduce the following corollary on the number of initial conditions required to expand an algebraic power series.
Corollary 28.
Let P ∈ K[x, y] be a polynomial of bidegree bounded by (d_x, d_y). Let R be the recurrence associated to the differential resolvent of P. Then the largest integer root of the leading coefficient of R is O(d_x d_y^2).

Proof. Immediate from the theorem and the discussion that precedes it. □
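In concrete terms, the corollary bounds how many initial terms must be supplied before the recurrence can be unrolled at constant cost per term. The following sketch is our own illustration, not the paper's Algorithm 4 (the helper names are hypothetical), applied to the recurrence (n + 2) c_{n+1} = 2(2n + 1) c_n satisfied by the Catalan numbers, which count the excursions with steps {+1, −1} by half-length:

```python
from fractions import Fraction
import sympy as sp

n = sp.symbols('n')

def largest_integer_root(leading_coeff):
    """Largest integer root of the leading coefficient of a recurrence;
    past this index, dividing by the leading coefficient is always legitimate."""
    int_roots = [int(r) for r in sp.roots(sp.Poly(leading_coeff, n)) if r.is_integer]
    return max(int_roots, default=None)

# For (n + 2) c_{n+1} = 2(2n + 1) c_n, the only integer root is n = -2,
# so c_0 alone determines the whole sequence.
assert largest_integer_root(n + 2) == -2

def catalan_terms(N):
    """Unroll the recurrence: the first N terms in O(N) arithmetic operations."""
    c = [Fraction(1)]
    for k in range(N - 1):
        c.append(c[-1] * 2 * (2*k + 1) / (k + 2))
    return [int(v) for v in c]
```

For instance, `catalan_terms(6)` returns [1, 1, 2, 5, 14, 42], the numbers of excursions of half-length 0 through 5.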
We are now able to prove Proposition 25.
Proof (of Proposition 25). We only treat the case where the recurrence is computed for B; the proof transposes directly to the case of A. Let S and d be as in the proposition, and denote by P the minimal polynomial of B. Then the recurrence computed by Algorithm 4 is associated to the minimal annihilating differential operator for B, which is also the differential resolvent of P. We denote it by L_P. Now, since B = [y^{−1}] W(x, y)/y, it can be written as a sum of residues similar to formula (8). If we denote by R the polynomial that cancels these residues, then P divides Σ^c R for some c. This implies in particular that all the solutions of L_P are linear combinations of the roots of R. Thus, if L_R is the differential resolvent of R, then all the solutions of L_P are solutions of L_R. Since W has bidegree (1, d), Theorem 8 and Theorem 27 show that all the roots of P have valuation at most O(d^3), and the result follows. □

Conclusion
We gave a complete and efficient algorithm that calculates a polynomial equation satisfied by the diagonal of a bivariate rational function in characteristic 0. Generically, the degree in ∆ of the polynomial P(t, ∆) output by the algorithm is optimal. The bound on the degree in t is not tight. The gap between this bound and the actual degrees is not yet fully understood: it is already present for the Rothstein–Trager and Bronstein resultants. Our complexity results are given in the arithmetic complexity model; the corresponding study in the binary model remains to be done.

The case of positive characteristic requires different methods and algorithms. In that case, diagonals are algebraic even for rational functions with more than two variables. To the best of our knowledge, these questions have never been studied from the complexity viewpoint. One possible direction is to try to make effective Furstenberg's proof that these diagonals are algebraic [21]. Some work has also been done by Adamczewski and Bell [1], who among other things studied how the sizes of the polynomial equations satisfied by diagonals vary with the characteristic of the base field.

References

[1] B. Adamczewski and J. P. Bell. Diagonalization and rationalization of algebraic Laurent series. Ann. Sci. Éc. Norm. Supér. (4), 46(6):963–1004, 2013.
[2] C. Banderier and P. Flajolet. Basic analytic combinatorics of directed lattice paths. Theoret. Comput. Sci., 281(1-2):37–80, 2002.
[3] A. Bostan, S. Chen, F. Chyzak, and Z. Li. Complexity of creative telescoping for bivariate rational functions. In ISSAC'10, pages 203–210. ACM, 2010.
[4] A. Bostan, F. Chyzak, G. Lecerf, B. Salvy, and É. Schost. Differential equations for algebraic functions. In ISSAC'07, pages 25–32. ACM Press, 2007.
[5] A. Bostan, L. Dumont, and B. Salvy. Algebraic diagonals and walks. In ISSAC'15, pages 77–84. ACM Press, 2015.
[6] A. Bostan, P. Flajolet, B. Salvy, and É. Schost. Fast computation of special resultants. J. Symbolic Comput., 41(1):1–29, 2006.
[7] A. Bostan, P. Lairez, and B. Salvy. Creative telescoping for rational functions using the Griffiths–Dwork method. In ISSAC'13, pages 93–100. ACM Press, 2013.
[8] A. Bostan and É. Schost. Polynomial evaluation and interpolation on special sets of points. J. Complexity, 21(4):420–446, 2005.
[9] M. Bousquet-Mélou. Rational and algebraic series in combinatorial enumeration. In International Congress of Mathematicians, pages 789–826. EMS, 2006.
[10] M. Bousquet-Mélou. Discrete excursions. Séminaire Lotharingien de Combinatoire, 57:Art. B57d, 1–23, 2008.
[11] M. Bousquet-Mélou and M. Petkovšek. Linear recurrences with constant coefficients: the multivariate case. Discrete Math., 225(1-3):51–75, 2000.
[12] M. Bronstein. Formulas for series computations. Appl. Algebra Engrg. Comm. Comput., 2(3):195–206, 1992.
[13] P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic Complexity Theory, volume 315 of Grundlehren der Mathematischen Wissenschaften. Springer, 1997.
[14] J. F. Canny, E. Kaltofen, and Y. N. Lakshman. Solving systems of nonlinear polynomial equations faster. In ISSAC'89, pages 121–128, 1989.
[15] G. Christol. Diagonales de fractions rationnelles et équations de Picard-Fuchs. In Study group on ultrametric analysis, 12th year, 1984/85, No. 1 (Exp. No. 13), pages 1–12, Paris, 1985.
[16] D. V. Chudnovsky and G. V. Chudnovsky. On expansion of algebraic functions in power and Puiseux series, I. J. Complexity, 2(4):271–294, 1986.
[17] D. V. Chudnovsky and G. V. Chudnovsky. On expansion of algebraic functions in power and Puiseux series, II. J. Complexity, 3(1):1–25, 1987.
[18] P. Deligne. Intégration sur un cycle évanescent. Invent. Math., 76(1):129–143, 1984.
[19] J. Denef and L. Lipshitz. Algebraic power series and diagonals. J. Number Theory, 26(1):46–67, 1987.
[20] M. Fliess. Sur divers produits de séries formelles. Bull. Soc. Math. France, 102:181–191, 1974.
[21] H. Furstenberg. Algebraic functions over finite fields. J. Algebra, 7(2):271–277, 1967.
[22] J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge Univ. Press, second edition, 2003.
[23] I. M. Gessel. A factorization for formal Laurent series and lattice path enumeration. J. Combin. Theory Ser. A, 28(3):321–337, 1980.
[24] B. Haible. The diagonal of a rational function. Preprint, 1997.
[25] M. L. J. Hautus and D. A. Klarner. The diagonal of a double power series. Duke Math. J., 38:229–235, 1971.
[26] E. L. Ince. Ordinary Differential Equations. Dover Publications, New York, 1956. Reprint of the 1926 edition.
[27] P. Lairez. Computing periods of rational integrals. Math. Comp. To appear. arXiv:1404.5069.
[28] G. Lecerf. Fast separable factorization and applications. Appl. Algebra Engrg. Comm. Comput., 19(2):135–160, 2008.
[29] G. Lecerf and É. Schost. Fast multivariate power series multiplication in characteristic zero. SADIO Electron. J. Inf. Oper. Res., 5:1–10, 2003.
[30] L. Lipshitz. The diagonal of a D-finite power series is D-finite. J. Algebra, 113(2):373–378, 1988.
[31] V. Y. Pan. Simple multivariate polynomial multiplication. J. Symbolic Comput., 18(3):183–186, 1994.
[32] V. Y. Pan. New techniques for the computation of linear recurrence coefficients. Finite Fields Appl., 6(1):93–118, 2000.
[33] D. Y. Pochekutov. Diagonals of the Laurent series of rational functions. Sibirsk. Mat. Zh., 50(6):1370–1383, 2009.
[34] G. Pólya. Sur les séries entières, dont la somme est une fonction algébrique. L'Enseignement Mathématique, 22:38–47, 1921.
[35] M. Rothstein. Aspects of Symbolic Integration and Simplification of Exponential and Primitive Functions. PhD thesis, 1976.
[36] K. V. Safonov. On conditions for the sum of a power series to be algebraic and rational. Math. Notes, 41(3-4):185–189, 1987.
[37] A. Schönhage. The fundamental theorem of algebra in terms of computational complexity. Technical report, Tübingen, 1982.
[38] R. P. Stanley. Enumerative Combinatorics, volume II. Cambridge Univ. Press, 1999.
[39] B. M. Trager. Algebraic factoring and rational function integration. In SYMSAC'76, pages 219–226. ACM, 1976.
[40] J. van der Hoeven and É. Schost. Multi-point evaluation in higher dimensions. Appl. Algebra Engrg. Comm. Comput., 24(1):37–52, 2013.
[41] B. L. van der Waerden. Modern Algebra. Vol. I. Frederick Ungar Publ. Co., 1949.
[42] R. J. Walker. Algebraic Curves. Springer-Verlag, New York, 1978. Reprint of the 1950 edition.
[43] C. K. Yap. Fundamental Problems of Algorithmic Algebra. Oxford University Press, New York, 2000.
Inria (France)
E-mail address: [email protected]

Inria (France)
E-mail address: [email protected]

Inria (France), LIP (U. Lyon, CNRS, ENS Lyon, UCBL)
E-mail address: