Accelerated Approximation of the Complex Roots and Factors of a Univariate Polynomial
Victor Y. Pan [1,2,a], Elias P. Tsigaridas [3], Vitaly Zaderman [2,b], and Liang Zhao [2,c]

[1] Department of Mathematics and Computer Science, Lehman College of the City University of New York, Bronx, NY 10468 USA
[2] Ph.D. Programs in Mathematics and Computer Science, The Graduate Center of the City University of New York, New York, NY 10036 USA
[3] INRIA, Paris-Rocquencourt Center, PolSys, Sorbonne Universités, UPMC Univ. Paris 06, UMR 7606, LIP6, F-75005, Paris, France
[a] [email protected], http://comet.lehman.cuny.edu/vpan/
[b] [email protected]
[c] [email protected]

A preliminary version of this paper has been presented at SNC 2014

Abstract
The algorithms of Pan (1995) [20] and Pan (2002) [22] approximate the roots of a complex univariate polynomial in nearly optimal arithmetic and Boolean time but require a precision of computing that exceeds the degree of the polynomial. This causes numerical stability problems when the degree is large. We observe, however, that such a difficulty disappears at the initial stage of the algorithms, and in our present paper we extend this stage to root-finding within nearly optimal arithmetic and Boolean complexity bounds provided that some mild initial isolation of the roots of the input polynomial has been ensured. Furthermore our algorithm is nearly optimal for the approximation of the roots isolated in a fixed disc, square, or another region on the complex plane rather than all complex roots of a polynomial. Moreover the algorithm can be applied to a polynomial given by a black box for its evaluation (even if its coefficients are not known); it promises to be of practical value for polynomial root-finding and factorization, the latter task being of interest in its own right. We also provide a new support for a winding number algorithm, which enables extension of our progress to obtaining mild initial approximations to the roots. We conclude with summarizing our algorithms and their extension to the approximation of isolated multiple roots and root clusters.

Keywords: polynomial equation, roots, root-finding, root-refinement, power sums, winding number, complexity

1 Introduction

The classical problem of univariate polynomial root-finding has been central in Mathematics and Computational Mathematics for about four millennia, since the Sumerian times, and is still important for Signal and Image Processing, Control, Geometric Modeling, Computer Algebra, and Financial Mathematics.
It is closely linked to the approximation of the linear and nonlinear factors of a polynomial, which is also important in its own right because of the applications to time series analysis, Wiener filtering, noise variance estimation, covariance matrix computation, and the study of multi-channel systems (see Wilson (1969) [34], Box and Jenkins (1976) [6], Barnett (1983) [1], Demeure and Mullis (1989 and 1990) [8], [9], and Van Dooren (1994) [32]).

Solutions of both problems within nearly optimal arithmetic and Boolean complexity bounds (up to polylogarithmic factors) have been obtained in Pan (1995) [20] and Pan (2002) [22], but the supporting algorithms require a precision of computing that exceeds the degree of the input polynomial, and this causes numerical stability problems when the degree is large.

The most popular packages of numerical subroutines for complex polynomial root-finding, such as MPSolve 2000 (see Bini and Fiorentino (2000) [3]), EigenSolve 2001 (see Fortune (2002) [10]), and MPSolve 2012 (see Bini and Robol (2014) [5]), employ alternative root-finders based on functional iterations (namely, Ehrlich–Aberth's and WDK, that is, Weierstrass', also known as Durand–Kerner's) and on the QR algorithm applied to eigen-solving for the companion matrix of the input polynomial. Users consider these root-finders practically superior, relying on the empirical data about their excellent convergence, even though these data have no formal support. To their disadvantage, these algorithms compute the roots of a polynomial in an isolated region of the complex plane not much faster than all its roots.

We re-examine the subject, still assuming input polynomials with complex coefficients, and show that the cited deficiency of the algorithms of [20] and [22] disappears if we modify the initial stage of these algorithms and apply them under some mild assumptions about the initial isolation of the root sets of the input polynomial.
Moreover, like the algorithms of [20] and [22] and unlike the WDK and Ehrlich–Aberth algorithms, the resulting algorithms are nearly optimal for the approximation of the roots in an isolated region of the complex plane.

Next we briefly comment on our results. In the next sections we elaborate upon them, deduce the computational cost estimates, and outline some natural extensions.

Recall that polynomial root-finding iterations can be partitioned into two stages. At first, crude (although reasonably good) initial approximations to all roots or to a set of roots are relatively slowly computed. Then these approximations are refined faster by means of the same or distinct iterations. Our first algorithm applies at the second stage and is nearly optimal, under both the arithmetic and the Boolean complexity models, given a mild initial isolation of every root, some roots, or some root sets. Such an isolation can be observed at some stages of root approximation by the Ehrlich–Aberth and WDK algorithms, but with no estimates for the computational cost of reaching isolation. Towards a solution with a controlled computational cost, one can apply advanced variants of the Quad-tree construction of Weyl 1924 [33], successively refined in Henrici 1974 [12], Renegar 1987 [28], and Pan 2000 [21]. The algorithm of the latter paper computes all roots of a polynomial or its roots in a fixed isolated region at a nearly optimal arithmetic cost, and it is nearly optimal for computing the initial isolation of the roots as well; the paper [21] does not estimate the Boolean cost of its algorithm, but most of its steps allow rather straightforward control of the precision of computing.

Moreover, in Section 5 we present a new winding number algorithm that computes the number of roots in a fixed square on the complex plane at a nearly optimal Boolean cost.
By incorporating this algorithm into the refined variant of Weyl's Quad-tree construction of [21], we extend our progress to obtaining mild initial isolation of all the roots of a polynomial as well as of its roots isolated in a fixed region. (More recent variations of the Quad-tree algorithm have been studied by various authors under the name of subdivision algorithms for polynomial root-finding.) This root-finder is performed at a nearly optimal Boolean cost and can be applied even where a polynomial is given by a black box for its evaluation.

If properly implemented, our algorithms have very good chances to become the user's choice. Besides the root-finding applications, they can be valuable ingredients of polynomial factorization algorithms. Recall that one can extend factorization to root-finding (see Schönhage (1982) [29], [20], [22], and the present paper), but also root-finding to factorization (see Pan (2012) [23]). Our algorithm can be technically linked to those of [29], [20], and [22]; our results (Theorems 8 and 10) can also be viewed as an extension of the recent record and nearly optimal bounds for the approximation of the real roots [26, 27]; see also [17].

An interesting challenge is the design of polynomial factorization algorithms that both are simple enough for practical implementation and support factorization at a nearly optimal computational complexity. An efficient solution outlined in our last section combines our present algorithms with the ones of Pan 2012 [23] and McNamee and Pan 2013 [18, Section 15.23], which is a simplified version of the efficient but very much involved algorithm of Kirrinnis (1998) [13]. Like our present algorithms, these solution algorithms are nearly optimal and remain nearly optimal when they are applied to a polynomial given by a black box for its evaluation, even when its coefficients are not known.
Organization of the paper.
We recall the relevant definitions and some basic results in the remainder of this section and in the next section. In Section 3 we present our main algorithm, prove its correctness, and estimate its arithmetic cost when it is applied to the approximation of a single root and of d simple isolated roots of a dth degree polynomial. In Section 4 we extend our analysis to estimate the Boolean cost of these computations. In Section 5 we present our new winding number algorithm. In our concluding Section 6 we summarize our results and outline their extension to factorization of a polynomial and to root-finding in the cases of isolated multiple roots and root clusters.

Some definitions.

• For a polynomial u = u(x) = Σ_{i=0}^d u_i x^i, the norms ||u||_γ denote the norms ||u||_γ of its coefficient vector u = (u_i)_{i=0}^d, for γ = 1, 2, ∞.
• D(X, r) denotes the complex disc {x : |x − X| ≤ r}.
• "ops" stands for "arithmetic operations".
• DFT(q) denotes the discrete Fourier transform at q points. It can be performed by using O(q log(q)) ops.

The following concept of the isolation ratio is basic for us, as well as for [20] and [22]. Assume a real or complex polynomial

p = p(x) = Σ_{i=0}^d p_i x^i = p_d Π_{j=1}^d (x − z_j), p_d ≠ 0, (1)

of degree d, and an annulus A(X, R, r) = {x : r ≤ |x − X| ≤ R} on the complex plane with a center X and the radii r and R of the boundary circles. If the polynomial p has no roots in this annulus, then the internal disc D(X, r) is R/r-isolated, and we call R/r its isolation ratio. The isolation ratios, for all discs D(0, r) and for all positive r, can be approximated as long as we can approximate the root radii |z_j|, for j = 1, ..., d.

The algorithms of [29] (cf. also Pan (2000) [21] and [22]) yield such approximations within a constant relative error, say 0.01, by using O(d log²(d)) ops, but involve Dandelin's root-squaring iteration (see Householder (1959) [11]), and this leads to numerical stability problems. Alternative heuristic algorithms of Bini (1996) [2] and [3] are slightly faster, but also cannot produce close approximations without using root-squaring iterations.

The Schur–Cohn test does not use these iterations and can be applied to estimate the isolation ratio more directly. For the disc D(0, r), a variant of this test in Renegar (1987) [28, Section 7] amounts to performing FFT at d′ = 2^h points, for d ≤ d′ < 2d, with an overhead of O(d) ops and comparisons of real numbers with 0. This means a reasonably low precision of computing and Boolean cost. Also see Brunie and Picart (2000) [7].

The following result from Tilli (1998) [30] shows that Newton's classical iteration converges quadratically to a single simple root of p if it is initiated at the center of a 3d-isolated disc that contains just this root. The result softens the corresponding restriction of [28, Corollary 4.5].

Theorem 1.
Suppose that both discs D(c, r) and D(c, r/s), for s ≥ 3d, contain a single simple root α of a polynomial p = p(x) of degree d. Then Newton's iteration

x_{k+1} = x_k − p(x_k)/p′(x_k), k = 0, 1, ..., (2)

converges quadratically to the root α right from the start provided that x_0 = c.

We can shift and scale the variable x, and so with no loss of generality we assume that we deal with a (1 + η)-isolated disc D(0, r), for r = 1/(1 + η) and a fixed η > 0, and with a polynomial p of (1) having precisely k not necessarily distinct roots z_1, ..., z_k in this disc. In Section 6 we outline some important extensions of our current study based on our results in the general case of any k < d, which we produce next, but in Sections 3 and 4 we assume that k = 1. Now, under the above assumptions for any k < d, can we increase the isolation ratio, say, to 3d? We apply the following algorithm.

Algorithm 2. The Power Sums.
Input: three integers d, k, and q such that 0 < k < d and k < q; a fixed η > 0; ω = ω_q = exp(2π√−1/q), a primitive qth root of unity; and the coefficients p_0, ..., p_d of a polynomial p = p(x) of (1). We write r = 1/(1 + η) and assume that the disc D(0, r) (i) is (1 + η)-isolated and (ii) contains the k roots z_1, ..., z_k of the polynomial p = p(x) and no other roots.

Output: the values σ*_1, ..., σ*_k such that

|σ_g − σ*_g| ≤ Δ_{k,q} = (k r^{q+1} + (d − k) r^{q−k})/(1 − r^q), for g = 1, ..., k, (3)

σ_g = Σ_{j=1}^k z_j^g, for g = 1, ..., k. (4)

Computations:
1. Compute the coefficients of the two auxiliary polynomials p_q(x) = Σ_{i=0}^{q−1} p_{q,i} x^i and p̄_q(x) = Σ_{i=0}^{q−1} p̄_{q,i} x^i where p_{q,i} = Σ_{j=0}^{l} p_{i+jq} and p̄_{q,i} = Σ_{j=0}^{l̄} (i + 1 + jq) p_{i+1+jq}, for i = 0, ..., q − 1, l = ⌊d/q⌋, and l̄ = ⌊(d − 1)/q⌋, so that p_q(x) = p(x) mod (x^q − 1) and p̄_q(x) = p′(x) mod (x^q − 1).
2. Compute the values p_q(ω^j), p̄_q(ω^j), and r_j = p̄_q(ω^j)/p_q(ω^j), for j = 0, 1, ..., q − 1.
3. Compute σ*_g = (1/q) Σ_{j=0}^{q−1} ω^{(g+1)j} r_j, for g = 1, ..., k.

Theorem 3.
Algorithm 2 involves 3d + O(q log(q)) ops.

Proof. Stage 1 of the algorithm involves at most 2d additions and d multiplications, Stage 2 amounts to performing two DFT(q) and q divisions, and Stage 3 amounts to performing a DFT(q) (because the values ω^{g+1}, for g = 1, ..., k, are among the qth roots of 1) and k divisions.

In order to prove bound (3), implying the correctness of the algorithm, at first observe that p(ω^j) = p_q(ω^j) and p′(ω^j) = p̄_q(ω^j), for j = 0, 1, ..., q − 1, and hence

σ*_g = (1/q) Σ_{j=0}^{q−1} ω^{(g+1)j} p′(ω^j)/p(ω^j), for g = 1, ..., k. (5)

The proof of bound (3) also exploits the Laurent expansion

p′(x)/p(x) = Σ_{j=1}^d 1/(x − z_j) = −Σ_{g=1}^∞ σ′_g x^{g−1} + Σ_{g=0}^∞ σ_g x^{−g−1} = Σ_{h=−∞}^{+∞} c_h x^h (6)

where |x| = 1, σ_0 = k, σ_g = Σ_{j=1}^k z_j^g (cf. (4)), and σ′_g = Σ_{i=k+1}^d z_i^{−g}, for g = 1, 2, ..., that is, σ′_g is the gth power sum of the roots of the reverse polynomial p_rev(x) that lie in the disc D(0, r). The leftmost equation of (6) is verified by the differentiation of p(x) = p_d Π_{j=1}^d (x − z_j). The middle equation is implied by the decompositions 1/(x − z_j) = (1/x) Σ_{h=0}^∞ (z_j/x)^h, for j ≤ k, and 1/(x − z_i) = −(1/z_i) Σ_{h=0}^∞ (x/z_i)^h, for i > k, where |x| = 1. (Note a link of these expressions with the quadrature formulae for numerical integration, σ_g = (1/(2π√−1)) ∮_{C(0,1)} x^g p′(x)/p(x) dx, for g = 1, ..., k.)

In order to deduce bound (3), we next combine equations (5) and (6) and obtain

σ*_g = Σ_{l=−∞}^{+∞} c_{−g−1+lq}.

Moreover, equation (6), for h = −g − 1, implies that σ_g = c_{−g−1}, while the same equation, for h = g − 1, g ≥ 1, implies that σ′_g = −c_{g−1}. Consequently

σ*_g − σ_g = Σ_{l=1}^∞ (c_{lq−g−1} + c_{−lq−g−1}).

We assumed that 0 < g ≤ k < q. It follows that c_{−lq−g−1} = σ_{lq+g} and c_{lq−g−1} = −σ′_{lq−g}, for l = 1, 2, ..., and we obtain

σ*_g − σ_g = Σ_{l=1}^∞ (σ_{lq+g} − σ′_{lq−g}). (7)

Now recall that |σ_h| ≤ k z^h and |σ′_h| ≤ (d − k) z^h, for h = 1, 2, ... and z = max_{j=1}^d min(|z_j|, 1/|z_j|), and so z ≤ r in our case. Substitute these bounds into (7) and obtain

|σ*_g − σ_g| ≤ (k z^{q+g} + (d − k) z^{q−g})/(1 − z^q).

Therefore bound (3) follows because z ≤ r and 1 ≤ g ≤ k.

By choosing q of order log(d) in bound (3), we can increase the isolation ratio of the disc D(0, r) by a factor of gd^h, for any pair of positive constants g and h. Therefore we obtain the following estimates.

Theorem 4. Suppose the disc D(0, r) = {x : |x| ≤ r} is (1 + η)-isolated, for (1 + η)r = 1 and a fixed η > 0, and contains exactly k roots of a polynomial p = p(x) of degree d. Let g and h be a pair of positive constants. Then it is sufficient to perform O(3d + log(d) log(log(d))) ops in order to compute a gd^h-isolated subdisc of D(0, r) containing exactly the same roots of p = p(x).

If k = 1, then σ_1 = z_1, and by combining Theorems 1 and 4 we obtain the following result.

Corollary 5.
Let a polynomial p(x) satisfy the assumptions of Theorem 4, for k = 1. Then we can approximate its root within ε, for 0 < ε < 1, by using O(log(d) log(log(d)) + d log(log(1/ε))) ops.

Hereafter we write

z_+ = max(|z_1|, |z_2|, ..., |z_d|). (8)

Corollary 6.
Suppose that we are given d discs, each containing a single simple root of a polynomial p = p(x) of degree d and each being (1 + η)-isolated, for a fixed η > 0. Then we can approximate all d roots of this polynomial within εz_+, for z_+ of (8) and a fixed ε, 0 < ε < 1, by using O(d log²(d)(1 + log(log(1/ε)))) ops.

Proof. Apply Algorithm 2 concurrently in all d given discs, but instead of the qth roots of unity use q equally spaced points on the boundary circle of each input disc (that is, dq = O(d log d) points overall), and instead of DFT(q) apply the Moenck–Borodin algorithm for multipoint polynomial evaluation [19]. Also use it at the stage of performing concurrent Newton's iterations initialized at the centers of the 3d-isolated subdiscs of the d input discs, each subdisc computed by the algorithm that supports Theorem 4. Here we work with the dth degree polynomial p rather than with the qth degree polynomials p_q. Indeed, in order to support the transition to polynomials p_q of degree q, for d discs, we would need to perform d shifts and scalings of the variable x, which would involve the order of d² log(d) ops, whereas by employing the Moenck–Borodin algorithm we obtain a nearly optimal root-refiner, which involves O(d) ops up to polylogarithmic factors.

We replace the matrix Ω = [ω^{(g+1)j}]_{j,g} in (5) by the matrix [c + ω^{(g+1)j}]_{j,g} = c[1]_{j,g} + Ω where c is invariant in j and g. The multiplication of the new matrix by a vector v is still reduced to the multiplication of the matrix Ω by the vector v and to additional O(d) ops, for computing the vector c[1]_{j,g}v and adding it to the vector Ωv.

The Moenck–Borodin algorithm uses nearly linear arithmetic time, and [13] proved that this algorithm supports multipoint polynomial evaluation at a low Boolean cost as well (see also J. van der Hoeven (2008) [31], Pan and Tsigaridas (2013a,b) [26], [27], Kobel and Sagraloff (2013) [14], Pan (2015) [24], and Pan (2015a) [25]).
This immediately implies an extension of our algorithm that supports Corollary 6 to refining all simple isolated roots of a polynomial at a nearly optimal Boolean cost, but actually such an extension can also be obtained directly by using classical polynomial evaluation algorithms.

Various other iterative root-refiners (see McNamee (2002) [15], McNamee (2007) [16], and McNamee and Pan (2013) [18]) applied instead of Newton's also support our nearly optimal complexity estimates as long as the isolation of the roots obtained by our power sum algorithm is sufficient in order to ensure subsequent superlinear convergence of the selected iterations. In particular Ehrlich–Aberth's and WDK iterations converge globally to all roots with cubic and quadratic rates, respectively, if all the d discs have isolation ratios at least 3√d, for Ehrlich–Aberth's iterations, and 8d/3, for WDK iterations (cf. [30]).
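The power-sum computation at the core of Algorithm 2 can be sketched as follows. This minimal Python sketch is an illustration of ours, not the authors' implementation: it evaluates p and p′ naively by Horner's rule at the qth roots of unity instead of performing the reduction modulo x^q − 1 followed by DFTs, so it uses O(dq) ops rather than 3d + O(q log q); the approximation error decays roughly like r^{q−k} when the disc D(0, r), r < 1, is well isolated.

```python
import cmath

def power_sums_in_unit_disc(coeffs, k, q):
    """Approximate the power sums s_g = z_1^g + ... + z_k^g, g = 1, ..., k,
    of the k roots of p lying in a well isolated disc D(0, r), r < 1, via
    s*_g = (1/q) * sum_j w^{(g+1)j} p'(w^j)/p(w^j) over the q-th roots of
    unity w^j (cf. equation (5)).  coeffs lists p_0, ..., p_d."""
    def horner(c, x):
        v = 0j
        for a in reversed(c):
            v = v * x + a
        return v
    dcoeffs = [i * c for i, c in enumerate(coeffs)][1:]  # coefficients of p'
    sums = []
    for g in range(1, k + 1):
        s = 0j
        for j in range(q):
            w = cmath.exp(2j * cmath.pi * j / q)
            s += w ** (g + 1) * horner(dcoeffs, w) / horner(coeffs, w)
        sums.append(s / q)
    return sums
```

For instance, for p(x) = (x − 0.4)(x + 0.3)(x − 2) = 0.24 + 0.08x − 2.1x² + x³, with two roots in the unit disc, power_sums_in_unit_disc([0.24, 0.08, -2.1, 1], 2, 32) approximates σ_1 = 0.1 and σ_2 = 0.25 up to an error of roughly 0.5^30.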
Remark 7.
The algorithm of [24] and [25] (the latter paper provides more details) approximates a polynomial of degree d at O(d) points within εz_+, for z_+ of (8) and a fixed ε such that 0 < ε < 1, by using O(d log²(d)(1 + log(1/ε))) ops. This matches the bound of [19], for 1/ε of order gd^h and for positive constants g and h. Moreover the algorithm of [24] and [25], performed with the IEEE standard double precision, routinely outputs close approximations to the values of a polynomial p(x) of degree d = 4096, at d selected points, whereas the algorithm of [19] routinely fails numerically already for d of about 40 (cf. [25]).

4 Boolean Cost Bounds
Hereafter Õ_B denotes the bit (Boolean) complexity ignoring logarithmic factors. By lg(·) we denote the logarithm with base 2. To estimate the Boolean complexity of the algorithms supported by Corollaries 5 and 6 we apply some results from Pan and Tsigaridas (2013) [26] and Pan and Tsigaridas (2015) [27], which hold in the general case where the coefficients of the polynomials are known up to an arbitrary precision. In this section we assume that the polynomial p = p(x) has Gaussian (that is, complex integer) coefficients known exactly; the parameter λ, to be specified in the sequel, should be considered as the working precision. We assume that we perform the computations using fixed point arithmetic.

At first we consider the algorithm for approximating one complex root, z_1, of a polynomial p up to any desired precision ℓ. By an approximation we mean absolute approximation, that is, computing a z̃ such that |z_1 − z̃| ≤ 2^{−ℓ}. We assume that the degree of p is d and that lg ||p||_∞ ≤ τ.

Following the discussion that preceded Theorem 4, we compute the polynomial p_q and then apply two DFTs, for p_q and p̄_q, and the inverse DFT, for p̄_q/p_q.

Assume that p is given by its λ-approximation p̃ such that lg ||p − p̃||_∞ ≤ −λ. We perform all the operations with p̃ and keep track of the precision loss to estimate the precision of computations required in order to obtain the desired approximation.

We compute p_q using d additions. This results in a polynomial such that lg ||p_q||_∞ ≤ τ + lg(d) and lg(||p_q − p̃_q||_∞) = O(−λ + τ log(d) + log²(d)). Similar bounds hold for p̄_q, that is, lg(||p̄_q||_∞) ≤ τ + 2 lg(d) and the logarithm of its approximation error is also in O(−λ + τ log(d) + log²(d)).
The application of the DFT to p̄_q leads us to the following bounds: lg |p̄_q(ω^i)| ≤ τ + 2 lg(d) + lg lg(d) + 2 = O(τ + log(d)), while the logarithms of the approximation errors of the values p̄_q(ω^i) are in O(−λ + τ log(d) + log²(d)), for all i [27, Lemma 16]. Similar bounds hold for p_q(ω^i).

The divisions r_i = p̄_q(ω^i)/p_q(ω^i) output complex numbers such that lg |r_i| ≤ τ + 2 lg(d) + lg lg(d) + 2. Define their approximations r̃_i such that lg(|r_i − r̃_i|) = O(−λ + τ log(d) + log²(d)).

The final DFT produces numbers such that the logarithms of their magnitudes are not greater than τ + 2 lg(d) + 2 lg lg(d) + 4 and the logarithms of their approximation errors are in O(−λ + τ log(d) + log²(d)) [27, Lemma 16].

To achieve an error within 2^{−ℓ} in the final result, we perform all the computations with accuracy λ = O(ℓ + τ log(d) + log²(d)) = Õ(ℓ + τ). We perform the d additions at the cost O_B(dλ) and perform the rest of the computations, that is the 3 DFTs, at the cost O_B(log(d) log log(d) μ(λ)), for μ(λ) denoting the cost of multiplying two λ-bit integers; the overall cost is in Õ_B(d(ℓ + τ)) [27, Lemma 16]. If the root that we want to refine is not in the unit disc, then we replace τ in our bounds by dτ.

We apply a similar analysis from [26, Section 2.3] to the Newton iteration (see also [27, Section 2.3]) and arrive at the same asymptotic bounds on the Boolean complexity. In [26] and [27] the error bounds of the Newton operator have been estimated by using the properties of real interval arithmetic. In this paper we perform our computations in the field of complex numbers, but this affects only the constants of interval arithmetic, and so asymptotically both the error bounds and the complexity bounds of the Newton iterations are the same.
Thus, the overall complexity is Õ_B(d²τ + dℓ) and the working precision is O(dτ + ℓ). In our case we also assume the exact input, that is, we assume that the coefficients of the input polynomials are known up to arbitrary precision; for example, they are integers. For the refinement of the root up to a precision of L bits, we arrive at an algorithm that supports the following complexity estimates.

Theorem 8.
Under the assumptions of Theorem 4 we can approximate the root z_1 of the polynomial p(x) ∈ Z[x], which is of degree d and satisfies lg ||p||_∞ ≤ τ, up to a precision of L bits in Õ_B(d²τ + dL).

If we are interested in refining all complex roots, we cannot work anymore with the polynomial p_q of degree q = O(lg(d)) unless we add the cost of d shifts of the initial approximations to the origin. Instead we rely on fast algorithms for multipoint evaluation. Initially we evaluate the polynomial p of degree d at O(d lg(d)) points, and we assume that lg ||p||_∞ ≤ τ. These points approximate the roots of p, and so the logarithms of their magnitudes are in O(τ).

We use the following result of [27, Lemma 21]. Similar bounds appear in [13, 14, 31].

Theorem 9 (Modular representation). Assume that we are given m + 1 polynomials, F ∈ C[x] of degree mn and P_j ∈ C[x] of degree n, for j = 1, ..., m, such that lg ||F||_∞ ≤ τ and all roots of the polynomials P_j, for all j, have magnitudes of at most 2^ρ. Furthermore assume λ-approximations F̃ of F and P̃_j of P_j such that ||F − F̃||_∞ ≤ 2^{−λ} and ||P_j − P̃_j||_∞ ≤ 2^{−λ}. Let ℓ = λ − O(τ lg(m) + mnρ). Then we can compute ℓ-approximations F̃_j of F_j = F mod P_j, for j = 1, ..., m, such that ||F_j − F̃_j||_∞ ≤ 2^{−ℓ}, in Õ_B(mn(ℓ + τ + mnρ)).

Using this theorem we bound the overall complexity of the multipoint evaluation by Õ_B(d(L + dτ)). The same bound holds at the stage where we perform Newton's iteration. We need to apply Newton's operator Õ(1) times for each root. Each application of the operator consists of two polynomial evaluations. We perform the evaluations simultaneously and apply Theorem 9 to bound the complexity. On similar estimates for the refinement of the real roots see [27]. We have the following theorem, which complements Corollary 6 with the Boolean complexity estimates.

Theorem 10.
Suppose that we are given d discs, each containing a single simple root of a polynomial p(x) ∈ Z[x] of degree d with lg ||p||_∞ ≤ τ, and each being (1 + η)-isolated, for a fixed η > 0. Then we can approximate all d roots of this polynomial within L bits in Õ_B(d²τ + dL).

5 A Winding Number Algorithm

Assume that D(X, r) := {x : |X − x| ≤ r} denotes the disc of radius r > 0 with a center X and that no roots of a polynomial p(x) of degree n lie on its boundary circle C(X, r) = ∂D(X, r) = {x : |X − x| = r}. Let ω_j = X + r e^{2π√−1 j/d′}, and so ω_0, ..., ω_{d′−1} denote the d′ = 2^{⌈log n⌉} equally spaced points on this circle. We first recall the following algorithm of Henrici and Renegar (cf. [12], [28]).

Step 1.
Evaluate p(x) at ω_0, ..., ω_{d′−1}.

Step 2. For each i, "label" p(ω_i) according to the scheme

L[p(x)] = 0 if Re[p(x)] > 0 and Im[p(x)] ≥ 0,
L[p(x)] = 1 if Re[p(x)] ≤ 0 and Im[p(x)] > 0,
L[p(x)] = 2 if Re[p(x)] < 0 and Im[p(x)] ≤ 0,
L[p(x)] = 3 if Re[p(x)] ≥ 0 and Im[p(x)] < 0, (9)

where Re[p(x)] and Im[p(x)] denote the real and imaginary parts of p(x).
Step 3. If L[p(ω_{i+1})] − L[p(ω_i)] ≡ 2 (mod 4) for some i, then terminate and write "FAILED" (using the definition ω_{d′} = ω_0).

Step 4. Define ∗ : (a, b) ↦ {−1, 0, 1} to be the operation on ordered pairs of integers (a, b), a − b ≢ 2 (mod 4), where a ∗ b is the integer in {−1, 0, 1} congruent to (a − b) mod 4. Compute the integer

W = (1/4) Σ_{i=0}^{d′−1} L[p(ω_{i+1})] ∗ L[p(ω_i)].

Write "SUCCESS; W". The integer W is the number of roots of p(x) in the disc D(X, r) provided that no roots of p(x) are close to the boundary of the disc D(X, r).

Lemma 11 ([28]). Assume that all roots of a polynomial p(x) contained in the disc D(X, 2r) are also contained in the disc D(X, r/2). Then the above algorithm, applied to D(X, r), returns "SUCCESS; W", and W is the number of roots of p(x) in D(X, r).

By using only the signs of the real and imaginary parts of the polynomial evaluations at the points ω_0, ..., ω_{d′−1}, Henrici–Renegar's algorithm tracks the variation of the argument of p(x) along the boundary of the disc D(X, r), but there are still numerical problems that must be resolved. (i) If the real or imaginary part of an evaluation p(ω_j) vanishes or is very close to 0 (even if |p(ω_j)| is not close to 0), then the signs of the values p(ω_j), for some j, can be difficult to determine. (ii) The number computed by the algorithm may not be the correct winding number of the curve.

Inspired by [12] and [35], we provide a remedy to these problems by "tracking the location", namely, we further divide the complex plane and use the additional information in order to guarantee a stable output in spite of potential numerical errors. For the ease of application, we also change the region from a disc to the region bounded by the square S(z_0, r) with the four vertices {z_0 + r + r√−1, z_0 + r − r√−1, z_0 − r + r√−1, z_0 − r − r√−1}.

Modified Renegar's Winding Number Algorithm
Let p(z) be a polynomial of degree n, and let γ(t), for t ∈ [0, 1], denote the parametrization of the boundary of the square S(z_0, r) defined in the proof of Lemma 12 below. Let a_i = i/n′, for i = 0, ..., n′ and n′ = 2^{⌈log₂(64n/π)⌉}. Let L be an upper bound on the magnitude of the derivative |dp(γ(t))/dt| over [0, 1].

Step 1. (Check for singularity.) Evaluate p(γ(a_0)), ..., p(γ(a_{n′−1})). If any of these evaluations has absolute value less than (r/2)^n, then terminate and return "FAILED".

Step 2. (Check for the correctness of the computations.) Check whether the sum of the absolute values of two consecutive evaluations is less than 2L/n′. If any such sum is less than this bound, terminate and return "FAILED".
For each i, label p(γ(a_i)) according to its argument:

L[p(x)] = ⌊4 arg(p(x))/π⌋. (10)

Since 0 ≤ arg(p(x)) < 2π, L[p(x)] returns an integer between 0 and 7. In other words, L[p(x)] marks the octant to which the evaluation belongs.
Define an operation on ordered pairs of integers: let a ∗ b be the integer in {−4, ..., 3} congruent to (a − b) mod 8. By Lemma 12, if no roots of p(z) lie close to the boundary of the square S(z_0, r), then the value L[p(γ(a_{i+1}))] ∗ L[p(γ(a_i))] can only lie in the set {−1, 0, 1}. Lemma 17 shows that the algorithm works even if each labelling is off by ±1.

Step 5.
Compute and return the integer

W = (1/8) Σ_{i=0}^{n′−1} L[p(γ(a_{i+1}))] ∗ L[p(γ(a_i))]. (11)

Then W is the winding number of p(z) along the boundary of the square S(z_0, r).

Lemma 12.
Let α = 1/2 + √2 and β = 1/2. Assume that the only roots of p(z) contained in the disc D(z_0, αr) are contained in the disc D(z_0, βr). Then the operation a ∗ b in the algorithm above can only produce a value in the set {−1, 0, 1}.

Proof. Define a parametrization of the square S(z_0, r) as follows: γ(t) : [0, 1] → S(z_0, r),

γ(t) = z_0 + r + r√−1 (8t − 1) for 0 ≤ t < 1/4,
γ(t) = z_0 + r(3 − 8t) + r√−1 for 1/4 ≤ t < 1/2,
γ(t) = z_0 − r + r√−1 (5 − 8t) for 1/2 ≤ t < 3/4,
γ(t) = z_0 + r(8t − 7) − r√−1 for 3/4 ≤ t ≤ 1.

γ(t) travels along S(z_0, r) once at a constant pace, so that |γ(a_{i+1}) − γ(a_i)| ≤ 8r |a_{i+1} − a_i| = 8r/n′, for i = 0, ..., n′ − 1.

Next we show that under the hypothesis of the lemma the argument of p(γ(t)) cannot vary by more than π/4 as t increases from a_i to a_{i+1}. Therefore L[p(γ(a_{i+1}))] ∗ L[p(γ(a_i))] can only take a value in the set {−1, 0, 1}.

Suppose otherwise. Then for some a_i ≤ t′ < t″ ≤ a_{i+1} the argument of p(γ(t′)) would differ from that of p(γ(t″)) by more than π/4. Let ξ_1, ..., ξ_n be the roots of p(z). Since arg(p(z)) = Σ_i arg(z − ξ_i) (mod 2π), for at least one root ξ the argument of γ(t′) − ξ would differ from that of γ(t″) − ξ by more than π/(4n). Now let us show that this is impossible. Let dis denote the distance from ξ to the line segment connecting γ(t′) and γ(t″). Under the hypothesis of the lemma we have dis ≥ r/2, and so

|arg(γ(t′) − ξ) − arg(γ(t″) − ξ)| ≤ 2 tan⁻¹(|γ(t′) − γ(t″)|/(2 dis)) ≤ 2 tan⁻¹(8r/(2n′ dis)) ≤ 2 tan⁻¹(8/n′) ≤ 2 tan⁻¹(π/(8n)) < π/(4n).

Therefore the argument of p(γ(t)) cannot vary by more than π/4 as t increases from a_i to a_{i+1}. This proves the lemma.

As explained in the algorithm description, L[p(γ(a_i))] marks the octant to which p(γ(a_i)) belongs, and the value L[p(γ(a_{i+1}))] ∗ L[p(γ(a_i))] indicates the change of the octant as i runs through 0, 1, ..., n′ − 1. The value is 1 if the evaluation rotates counter-clockwise, −1 if the evaluation rotates clockwise, and 0 otherwise. Thus every full counter-clockwise cycle increases the sum in (11) by 8.

Lemma 13. Let L be an upper bound on the magnitude of the derivative |dp(γ(t))/dt| over [0, 1]. If |p(γ(a_i))| > L/n′ for all i = 0, ..., n′ − 1, then the integer W of (11) is the correct winding number of p(z) along the boundary of the square S(z_0, r).

Proof. This lemma follows directly from [35, Lemma 3].
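The octant-labelling and transition-counting steps above can be sketched as follows. This is a minimal Python illustration of ours (not the authors' code): it samples p along the boundary of the square S(z_0, r), labels each value by its octant, and divides the sum of the signed octant transitions by 8; the singularity and correctness checks of Steps 1 and 2 are omitted for brevity.

```python
import cmath

def winding_number_square(p, z0, r, nprime=512):
    """Winding number of p(z) along the boundary of the square S(z0, r),
    computed from octant labels of the sampled values (Steps 3-5)."""
    def gamma(t):
        # counter-clockwise parametrization of S(z0, r) at a constant pace
        s = 8.0 * t
        if s < 2:  return z0 + complex(r, -r + r * s)        # right side, upward
        if s < 4:  return z0 + complex(r - r * (s - 2), r)   # top side, leftward
        if s < 6:  return z0 + complex(-r, r - r * (s - 4))  # left side, downward
        return     z0 + complex(-r + r * (s - 6), -r)        # bottom side, rightward

    # octant label L[p(x)] = floor(4 arg(p(x)) / pi), an integer in 0..7
    labels = [int(cmath.phase(p(gamma(i / nprime))) // (cmath.pi / 4)) % 8
              for i in range(nprime)]
    total = 0
    for i in range(nprime):
        # a * b: the integer in {-4, ..., 3} congruent to a - b (mod 8);
        # with fine enough sampling it lies in {-1, 0, 1}
        total += (labels[(i + 1) % nprime] - labels[i] + 4) % 8 - 4
    return total // 8
```

For instance, winding_number_square(lambda z: z**3, 0j, 1.0) returns 3, the multiplicity of the root 0 of z³ inside S(0, 1), while winding_number_square(lambda z: z - 3, 0j, 1.0) returns 0.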
Remark.
We can decrease the gap between D(z_0, αr) and D(z_0, βr) by increasing n′. In fact, it can be seen from the proof that the gap can be decreased by half every time we double the value of n′. From now on we call the algorithm with the number of samples doubled g times the modified winding number algorithm of degree g.

Let us next estimate the precision required in the above computations. What precision is enough for labelling the evaluations correctly? For precise labelling we need to know the signs of both the real and the imaginary parts of the evaluations, as well as the correct comparison of their absolute values. This requirement is impractical because these values can be smaller than any fixed working precision allows us to detect. However, the following discussion indicates that even if the labels are off by ±1, the algorithm still correctly returns the winding number.
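As a small numerical illustration of this halving, assuming the bounds α = (1/2)^g + √2 and β = 1 − (1/2)^g of Theorem 18 (the function names below are ours): the width of the root-free annulus in excess of its limiting value √2 − 1 equals 2^{1−g} and halves each time g grows by one.

```python
import math

def alpha(g):
    # outer radius factor of the root-free annulus, degree-g modification
    return 0.5 ** g + math.sqrt(2)

def beta(g):
    # inner radius factor of the root-free annulus, degree-g modification
    return 1 - 0.5 ** g

def excess_gap(g):
    # width of the annulus beyond its limiting value sqrt(2) - 1
    return (alpha(g) - beta(g)) - (math.sqrt(2) - 1)

# doubling the sample count (g -> g + 1) halves the excess width
for g in range(1, 12):
    assert abs(excess_gap(g) - 2 * excess_gap(g + 1)) < 1e-12
```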
Definition 14.
For each k = 0, ..., n′, let L_k denote the true label of the evaluation p(γ(a_k)), and let L̃_k denote the computed label of this evaluation. Note that L_{n′} = L_0 and L̃_{n′} = L̃_0.

The following discussion is based on the condition of Lemma 12 and the following assumption: Assumption.
For each k = 0, ..., n′, the computation of the real and the imaginary parts of p(γ(a_k)) is precise enough to ensure that |L̃_k − L_k| ≤ 1.

This assumption can be satisfied for the evaluations on S(z_0, r) because the absolute value of such an evaluation can be readily bounded. If neither the real nor the imaginary part is small enough to create confusion, then at least L̃_k points to the same quadrant as L_k. If one of them is indeed small, then the other one must be large enough so that the comparison between the real and the imaginary parts is also clear. In particular, we can prove the following result: Lemma 15.
Keep the assumptions of Lemma 12. If the working precision 2^{−b} is smaller than (r/2)^n/√2, then for any z ∈ S(z_0, r) at most one of the following events occurs:
• |Re(p(z))| < 2^{−b},
• |Im(p(z))| < 2^{−b},
• |Re(p(z)) + Im(p(z))| < 2^{−b},
• |Re(p(z)) − Im(p(z))| < 2^{−b}.
Proof.
For z ∈ S(z_0, r) the norm of p(z) can be bounded as follows:

|p(z)| = |Π_{j=1}^n (z − z_j)| = Π_{j=1}^n |z − z_j| ≥ (r/2)^n ≥ √2 · 2^{−b},

because every root z_j lies at the distance of at least r/2 from the square. Since |p(z)|^2 = |Re(p(z))|^2 + |Im(p(z))|^2, the occurrence of any two of the four aforementioned events would imply that |p(z)| < √2 · 2^{−b}, contradicting the bound above and thus proving the lemma.

Lemma 16. For each k = 0, ..., n′ − 1, L̃_{k+1} ∗ L̃_k ≠ 4 (mod 8). Proof.
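Lemma 16 admits a direct exhaustive check once the ∗ operation is fixed. The sketch below assumes our reading of ∗, namely reduction of the label difference modulo 8 into the range (−4, 4], where the value 4 would leave the direction of rotation ambiguous.

```python
def star(l1, l0):
    # signed octant change l1 - l0, reduced modulo 8 into (-4, 4]
    d = (l1 - l0) % 8
    return d - 8 if d > 4 else d

# True consecutive labels differ by -1, 0 or 1 (Lemma 12), and computed
# labels are off by at most one (the Assumption); the ambiguous
# half-turn value 4 then never occurs.
for l0 in range(8):
    for true_step in (-1, 0, 1):
        l1 = (l0 + true_step) % 8
        for e0 in (-1, 0, 1):
            for e1 in (-1, 0, 1):
                assert star((l1 + e1) % 8, (l0 + e0) % 8) != 4
```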
Lemma 12 essentially proves that L_{k+1} ∗ L_k ∈ {−1, 0, 1}. Since L̃_k can differ from L_k by at most one, it can be easily seen that L̃_{k+1} ∗ L̃_k ≠ 4 (mod 8).

Let ǫ_k = L_k ∗ L̃_k (mod 8) denote the difference between the true label and the computed label. By assumption ǫ_k can be chosen such that |ǫ_k| ≤ 1. Lemma 17 shows that the computation of the winding number is not affected by such errors.
Lemma 17. (1/8) Σ_{k=0}^{n′−1} [L_{k+1} ∗ L_k] = (1/8) Σ_{k=0}^{n′−1} [L̃_{k+1} ∗ L̃_k]. Proof.
For each k = 0, ..., n′ − 1,

(L_{k+1} ∗ L_k) − (L̃_{k+1} ∗ L̃_k) ≡ (L_{k+1} − L_k) − (L̃_{k+1} − L̃_k) (mod 8)
= (L_{k+1} − L̃_{k+1}) − (L_k − L̃_k) (mod 8)
≡ (L_{k+1} ∗ L̃_{k+1}) − (L_k ∗ L̃_k) (mod 8)
≡ ǫ_{k+1} − ǫ_k (mod 8).

Since (L_{k+1} ∗ L_k) − (L̃_{k+1} ∗ L̃_k) belongs to {0, ±1, ±2, ±3, ±4} while ǫ_{k+1} − ǫ_k belongs to {0, ±1, ±2}, the congruence implies that

(L_{k+1} ∗ L_k) − (L̃_{k+1} ∗ L̃_k) = ǫ_{k+1} − ǫ_k, for all k = 0, ..., n′ − 1. (12)

Therefore

(1/8) Σ_{k=0}^{n′−1} [L_{k+1} ∗ L_k] − (1/8) Σ_{k=0}^{n′−1} [L̃_{k+1} ∗ L̃_k]
= (1/8) Σ_{k=0}^{n′−1} [(L_{k+1} ∗ L_k) − (L̃_{k+1} ∗ L̃_k)]
= (1/8) Σ_{k=0}^{n′−1} [ǫ_{k+1} − ǫ_k]
= (ǫ_{n′} − ǫ_0)/8 = 0,

because L_{n′} = L_0 and L̃_{n′} = L̃_0 imply that ǫ_{n′} = ǫ_0. This completes the proof of Lemma 17.

The following theorem summarizes our results:
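The telescoping argument above is easy to exercise numerically. The sketch below uses our reading of ∗ (reduction modulo 8 into (−4, 4]) and a synthetic closed label walk standing in for the true labels; every label is then corrupted by an error of at most one octant, and the accumulated sum is unchanged.

```python
import random

def star(l1, l0):
    # signed octant change l1 - l0, reduced modulo 8 into (-4, 4]
    d = (l1 - l0) % 8
    return d - 8 if d > 4 else d

def winding_sum(labels):
    # sum of octant changes around the closed loop of labels
    n = len(labels)
    return sum(star(labels[(k + 1) % n], labels[k]) for k in range(n))

random.seed(1)
for trial in range(100):
    # a true closed walk making two counter-clockwise cycles (steps sum
    # to 16), with every consecutive change in {-1, 0, 1}
    steps = [1] * 16 + [0] * 48
    random.shuffle(steps)
    labels = [0]
    for s in steps[:-1]:
        labels.append((labels[-1] + s) % 8)
    assert winding_sum(labels) == 16
    # corrupt every label by an error in {-1, 0, 1}: the sum survives
    noisy = [(l + random.choice((-1, 0, 1))) % 8 for l in labels]
    assert winding_sum(noisy) == 16
```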
Theorem 18.
Let α = (1/2)^g + √2 and β = 1 − (1/2)^g. Assume that the only roots of p(z) contained in D(z_0, αr) are contained in D(z_0, βr) as well. Further assume that the working precision 2^{−b} is smaller than min{ (r/2)^n/√2, 8rL/n′ } for n′ = 2^{g+⌊log_2(64n/π)⌋}. Then the modified winding number algorithm of degree g returns the number of roots of p(z) in S(z_0, r).

Given the square S = S(z_0, r), let τ_p denote the minimum distance between a root of p(x) and this square. Then, by applying the modified winding number algorithm of degree g = O(log(1/τ_p)) to S, we correctly calculate the number of roots in the region bounded by S. This requires n′ = 2^{g+⌊log_2(64n/π)⌋} polynomial evaluations with a precision up to min{ (r/2)^n/√2, 8rL/n′ }. Suppose that the cost of a single polynomial evaluation with the precision 2^{−b} is E(n, b); then the required precision b is asymptotically bounded by O(n log L log(1/r) + log(1/τ_p) log n), and the overall evaluation cost is O(n(1/τ_p)E(n, b)).

Summary

Now we summarize the key properties of the modified winding number algorithm of degree g:
• If the algorithm succeeds on a square S = S(z_0, r), then it returns the number of roots in S.
• If there are no roots between D(z_0, [(1/2)^g + √2]r) and D(z_0, [1 − (1/2)^g]r), then the algorithm is guaranteed to succeed.
• The algorithm requires polynomial evaluation with b = O(n log L log(1/r) + log(1/τ_p) log n) bits of precision, and the bit-complexity is O(n(1/τ_p)E(n, b)), where E(n, b) denotes the cost of one b-bit polynomial evaluation.

Our polynomial root-finding consists of three stages.
(i) At first, one computes some initial isolation of the roots or the clusters of the roots of a polynomial of a degree d by applying some known algorithms.
One can apply Ehrlich–Aberth's and WDK iterations, which have an excellent empirical record of fast and reliable global convergence, but this record has not been supported by formal analysis, and these iterations are not efficient for the approximation of roots isolated in a disc or in a fixed region of the complex plane. One can avoid these deficiencies by applying advanced variants of Weyl's Quad-tree, a.k.a. subdivision, algorithms. In particular a variant of [21] yields such an initial isolation at a nearly optimal arithmetic cost, and furthermore the control of the precision of computing is rather straightforward at most of its steps. By incorporating our new accelerated version of the winding number algorithm of Henrici and Renegar, we enable application of the entire construction within a nearly optimal cost bound even where a polynomial is given by a black box subroutine for its evaluation.
(ii) Next one increases the isolation in order to facilitate the final stage.
(iii) Finally, one computes close approximations to all roots or to the roots in a fixed region by applying a simplified version of the algorithm of [13] presented in [23] and [18, Section 15.23]. For the approximation of all roots one can instead apply Ehrlich–Aberth's or WDK iterations.
Besides the acceleration of the winding number algorithm, we contribute at stage (ii). Namely, we measure the isolation of a root or a root cluster by the isolation ratio of its covering disc, that is, by the ratio of the distance from the center of the disc to the external roots and the disc radius. Then our algorithm increases the isolation ratio from any constant 1 + η, for a positive η, to g d^h, for any pair of positive constants g and h.
The algorithm performs at a nearly optimal arithmetic and Boolean cost and can be applied even when a polynomial is given by a black box for its evaluation, while its coefficients are unknown.
If such an isolation is ensured, then Newton's iterations of [13], [23] and [18, Section 15.23], as well as Ehrlich–Aberth's and WDK iterations, converge right from this point with quadratic or cubic rate (cf. [30]) and also perform at a nearly optimal arithmetic and Boolean cost even where a polynomial is given by a black box for its evaluation.
Our analysis can be readily extended to prove the same results even if we assume weaker initial isolation with the ratio as low as 1 + η, for η of order 1/log(n). Then we could move to stage (ii) after fewer iterations of stage (i). Further extension of our results to the case of η of order 1/n^h, for a positive constant h, is a theoretical challenge, but progress at stages (ii) and (iii) is important for both theory and practice.
Unlike Ehrlich–Aberth's and WDK iterations, Newton's iterations of [13], [23] and [18, Section 15.23] produce (as a by-product) a polynomial factorization, which is important in its own right (see the first paragraph of the introduction) and enables root-finding in the cases of multiple and clustered roots. Let us conclude with some comments on this subject.
Our Theorem 4 covers the case where we are given a (1 + η)-isolated disc containing k roots of a polynomial p(x) of (1). In that case, at the cost of performing O(d + log(d) log(log(d))) ops, Algorithm 2 outputs approximations to the power sums of these roots within the error bound ∆_{q,k} of order r^{q−k}, for r = 1/(1 + η) and positive integers q = O(d) and k < q of our choice, say, q = 2k. The algorithm can be extended to the case where all roots or root clusters of the polynomial p(x) are covered by s such (1 + η)-isolated discs.
Then we can still approximate the power sums of the roots in all discs simultaneously by using O(d log(d)) ops.
At this point the known numerically stable algorithm for the transition from the power sums to the coefficients (cf. [4, Problem I·POWER·SUMS, pages 34–35]) approximates the coefficients of the s factors of p(x) whose root sets are precisely the roots of p(x) lying in these s discs. The algorithm performs within the dominated arithmetic and Boolean cost bounds.
Having these factors computed, one can reduce root-finding for p(x) to root-finding for the s factors of smaller degrees. In particular, this covers root-finding in the harder case where the discs contain multiple roots or root clusters. As has been estimated in [17] and [23], such a divide and conquer approach can dramatically accelerate stage (iii), even versus the Ehrlich–Aberth and WDK algorithms, except that the initialization of the respective algorithm requires a little closer approximation to the coefficients of the s factors of p(x) than we obtain from the power sums of their roots.
The algorithm of [13], extending the one of [29], produces such a desired refinement of the output of Algorithm 2 at a sufficiently low asymptotic Boolean cost, but is quite involved. In particular, it reduces all operations with polynomials to multiplication of long integers. In our further work we will seek the same asymptotic complexity results at this stage, but with smaller overhead constants, based on the simplified version of the algorithm of [13] given in [23] and [18, Section 15.23].
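For intuition only, the power-sums-to-coefficients transition can be illustrated with the classical Newton's identities. This naive floating-point sketch (the function name is ours) is not the numerically stable algorithm of [4], but it shows the direction of the computation.

```python
def coeffs_from_power_sums(p):
    """Newton's identities: from the power sums p[0] = s_1, ..., p[k-1] = s_k
    of k roots, recover the coefficients of the monic degree-k factor
    x^k - e_1 x^{k-1} + e_2 x^{k-2} - ... via the elementary symmetric
    functions e_1, ..., e_k."""
    k = len(p)
    e = [1.0]                              # e_0 = 1
    for j in range(1, k + 1):
        # j * e_j = sum_{i=1}^{j} (-1)^{i-1} e_{j-i} s_i
        s = sum((-1) ** (i - 1) * e[j - i] * p[i - 1] for i in range(1, j + 1))
        e.append(s / j)
    # monic coefficient list, highest degree first
    return [(-1) ** j * e[j] for j in range(k + 1)]

# roots 1, 2, 3: factor (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
sums = [1 + 2 + 3, 1 + 4 + 9, 1 + 8 + 27]  # s_1, s_2, s_3
assert all(abs(a - b) < 1e-9
           for a, b in zip(coeffs_from_power_sums(sums), [1, -6, 11, -6]))
```

The stable transition of [4] serves the same purpose, but avoids the loss of accuracy that this direct recursion suffers for larger degrees.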
VP has been supported by NSF Grant CCF 1116736 and by PSC CUNY Award 67699-00 45. ET has been partially supported by GeoLMI (ANR 2011 BS03 011 06), HPAC (ANR-11-BS02-013) and an FP7 Marie Curie Career Integration Grant.
References

[1] S. Barnett, Polynomial and Linear Control Systems, Marcel Dekker, New York, 1983.
[2] D. Bini, Numerical Computation of Polynomial Zeros by Means of Aberth's Method, Numerical Algorithms, 179–200, 1996.
[3] D. A. Bini, G. Fiorentino, Design, Analysis, and Implementation of a Multiprecision Polynomial Rootfinder, Numerical Algorithms, 127–173, 2000.
[4] D. Bini, V. Y. Pan, Polynomial and Matrix Computations, Volume 1: Fundamental Algorithms (XVI + 415 pages), Birkhäuser, Boston, 1994.
[5] D. A. Bini, L. Robol, Solving Secular and Polynomial Equations: A Multiprecision Algorithm, J. Computational and Applied Mathematics, 276–292, 2014.
[6] G. E. P. Box, G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1976.
[7] C. Brunie, P. Picart, A Fast Version of the Schur–Cohn Algorithm, Journal of Complexity, 54–69, 2000.
[8] C. J. Demeure, C. T. Mullis, The Euclid Algorithm and the Fast Computation of Cross-covariance and Autocovariance Sequences, IEEE Trans. Acoust., Speech, Signal Processing, 545–552, 1989.
[9] C. J. Demeure, C. T. Mullis, A Newton–Raphson Method for Moving-average Spectral Factorization Using the Euclid Algorithm, IEEE Trans. Acoust., Speech, Signal Processing, 1697–1709, 1990.
[10] S. Fortune, An Iterated Eigenvalue Algorithm for Approximating Roots of Univariate Polynomials, J. of Symbolic Computation, 627–646, 2002.
[11] A. S. Householder, Dandelin, Lobachevskii, or Graeffe, American Mathematical Monthly, 464–466, 1959.
[12] P. Henrici, Applied and Computational Complex Analysis, Wiley-Interscience, New York, 1974.
[13] P. Kirrinnis, Polynomial Factorization and Partial Fraction Decomposition by Simultaneous Newton's Iteration, J. of Complexity, 378–444, 1998.
[14] A. Kobel, M. Sagraloff, Fast Approximate Polynomial Multipoint Evaluation and Applications, arXiv:1304.8069v1 [cs.NA], 30 April 2013.
[15] J. M. McNamee, A 2002 Update of the Supplementary Bibliography on Roots of Polynomials, J. of Computational and Applied Math., 433–434, 2002.
[16] J. M. McNamee, Numerical Methods for Roots of Polynomials (Part 1), Elsevier, Amsterdam, 2007.
[17] J. M. McNamee, V. Y. Pan, Efficient Polynomial Root-refiners: A Survey and New Record Efficiency Estimate, Computers and Mathematics with Applications, 239–254, 2012.
[18] J. M. McNamee, V. Y. Pan, Numerical Methods for Roots of Polynomials, Part 2 (XXII + 718 pages), Elsevier, Amsterdam, 2013.
[19] R. Moenck, A. Borodin, Fast Modular Transform via Division, Proc. 13th Annual Symposium on Switching and Automata Theory, 90–96, IEEE Computer Society Press, Washington, DC, 1972.
[20] V. Y. Pan, Optimal (up to Polylog Factors) Sequential and Parallel Algorithms for Approximating Complex Polynomial Zeros, Proc. 27th Annual ACM Symposium on Theory of Computing (STOC '95), 741–750, ACM Press, New York, 1995.
[21] V. Y. Pan, Approximating Complex Polynomial Zeros: Modified Quadtree (Weyl's) Construction and Improved Newton's Iteration, J. of Complexity, 16, 1, 213–264, 2000.
[22] V. Y. Pan, Univariate Polynomials: Nearly Optimal Algorithms for Factorization and Rootfinding, Journal of Symbolic Computation, 701–733, 2002.
[23] V. Y. Pan, Root-refining for a Polynomial Equation, Proceedings of CASC 2012 (V. P. Gerdt, W. Koepf, E. W. Mayr, and E. V. Vorozhtsov, editors), Lecture Notes in Computer Science, 271–282, Springer, Heidelberg, 2012.
[24] V. Y. Pan, Transformations of Matrix Structures Work Again, Linear Algebra and Its Applications, 1–32, 2015.
[25] V. Y. Pan, Fast Approximate Computations with Cauchy Matrices and Polynomials, Mathematics of Computation, in print; also arXiv:1506.02285 [math.NA] (32 pages, 6 figures, 8 tables), 7 June 2015.
[26] V. Y. Pan, E. P. Tsigaridas, On the Boolean Complexity of the Real Root Refinement, Proc. International Symposium on Symbolic and Algebraic Computation (ISSAC 2013), Boston, Massachusetts, June 2013 (M. Kauers, editor), 299–306, ACM Press, New York, 2013.
[27] V. Y. Pan, E. P. Tsigaridas, Nearly Optimal Refinement of Real Roots of a Univariate Polynomial, J. of Symbolic Computation, 181–204, 2016. http://dx.doi.org/10.1016/j.jsc.2015.06.009
[28] J. Renegar, On the Worst-case Arithmetic Complexity of Approximating Zeros of Polynomials, J. of Complexity, 3, 2, 90–113, 1987.
[29] A. Schönhage, The Fundamental Theorem of Algebra in Terms of Computational Complexity, Preliminary Report, Mathematisches Institut der Universität Tübingen, Germany, 1982.
[30] P. Tilli, Convergence Conditions of Some Methods for the Simultaneous Computations of Polynomial Zeros, Calcolo, 3–15, 1998.
[31] J. van der Hoeven, Fast Composition of Numeric Power Series, Tech. Report 2008-09, Université Paris-Sud, Orsay, France, 2008.
[32] P. M. Van Dooren, Some Numerical Challenges in Control Theory, Linear Algebra for Control Theory, IMA Vol. Math. Appl., 1994.
[33] H. Weyl, Randbemerkungen zu Hauptproblemen der Mathematik, II, Fundamentalsatz der Algebra und Grundlagen der Mathematik, Math. Z., 131–151, 1924.
[34] G. T. Wilson, Factorization of the Covariance Generating Function of a Pure Moving-average Process, SIAM J. on Numerical Analysis, 1–7, 1969.
[35] J.-L. G. Zapata, J. C. D. Martin, A Geometric Algorithm for Winding Number Computation with Complexity Analysis, Journal of Complexity, 28, 2012.