On exact division and divisibility testing for sparse polynomials
Pascal Giorgi, Bruno Grenet, Armelle Perret du Cray
LIRMM, Univ. Montpellier, CNRS, Montpellier, France
{pascal.giorgi,bruno.grenet,armelle.perret-du-cray}@lirmm.fr

February 10, 2021

Abstract
Assessing that a sparse polynomial G divides another sparse polynomial F is not yet known to admit a polynomial time algorithm. While computing the quotient Q = F quo G can be done in polynomial time with respect to the sparsities of F, G and Q, this is not yet sufficient to get a polynomial time divisibility test in general. Indeed, the sparsity of the quotient Q can be exponentially larger than the ones of F and G. In the favorable case where the sparsity #Q of the quotient is polynomial, the best known algorithm to compute Q has a non-linear factor #G#Q in its complexity, which is not optimal.

In this work, we are interested in both aspects of this problem. First, we propose a new randomized algorithm that computes the quotient of two sparse polynomials when the division is exact. Its complexity is quasi-linear in the sparsities of F, G and Q. Our approach relies on sparse interpolation and it works over any finite field or the ring of integers. Then, as a step toward faster divisibility testing, we provide a new polynomial time algorithm when the divisor has a specific shape. More precisely, we reduce the problem to finding a polynomial S such that QS is sparse and testing divisibility by S can be done in polynomial time. We identify some structure patterns in the divisor G for which we can efficiently compute such a polynomial S.

1 Introduction

The existence of quasi-optimal algorithms for most operations on dense polynomials yields a strong basis for fast algorithms in computer algebra [ ] and more generally in computational mathematics. The situation is somewhat different for algorithms involving sparse polynomials.
Indeed, when a sparse representation is used, a polynomial F = ∑_{i=0}^{D} f_i X^i ∈ R[X] is expressed as a list of pairs (e_i, f_{e_i}) such that each f_{e_i} is nonzero. Therefore, the size of the sparse representation of F is O(#F(B + log D)) bits, where B and #F are bounds on the size of the coefficients and the number of nonzero coefficients of F, respectively. There, a bit complexity of Õ(D log D), which is quasi-optimal for dense polynomials, is not even polynomial. It is then necessary that fast algorithms for sparse polynomials have a (poly-)logarithmic dependency on the degree. Unfortunately, as shown by several NP-hardness results, such fast algorithms might not even exist unless P = NP. This is for instance the case for GCD computations [ ]. Fortunately, polynomial-time algorithms are known for many important operations such as multiplication, division or sparse interpolation. We refer to the survey by Roche [ ] for a thorough discussion of their complexity and of the remaining major open problems.

The main difficulty with sparse polynomial operations is the fact that the size of the output does not exclusively depend on the size of the inputs, contrary to the dense case. For instance, the product of two polynomials F and G has at most #F#G nonzero coefficients, but it may have as few as 2 nonzero coefficients [ ]. The size growth can be even more dramatic for sparse polynomial division. For instance, the quotient of F = X^D − 1 by G = X − 1 is F/G = ∑_{i=0}^{D−1} X^i. The output can therefore be exponentially larger than the inputs.
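To make this size growth concrete, here is a small Python sketch (not from the paper; the dict representation and the function name are ours) of naive exact division on sparse polynomials. Dividing the 2-sparse X^D − 1 by the 2-sparse X − 1 yields a quotient with D nonzero terms:

```python
# Sketch (not from the paper): sparse polynomials as {exponent: coefficient}
# dicts. Naive exact division shows the output-size blow-up: the quotient of
# the 2-sparse F = X^D - 1 by the 2-sparse G = X - 1 has D nonzero terms.

def sparse_exact_div(F, G):
    """Naive exact division of sparse polynomials (assumes G divides F)."""
    F, Q = dict(F), {}
    g_deg = max(G)
    g_lc = G[g_deg]
    while F:
        f_deg = max(F)
        e, c = f_deg - g_deg, F[f_deg] // g_lc
        Q[e] = c
        for ge, gc in G.items():  # remainder -= c * X^e * G
            nc = F.get(ge + e, 0) - c * gc
            if nc:
                F[ge + e] = nc
            else:
                F.pop(ge + e, None)
    return Q

D = 64
Q = sparse_exact_div({D: 1, 0: -1}, {1: 1, 0: -1})
print(len(Q))  # 64: exponentially larger than the 2-term inputs
```

The loop runs once per quotient term, so even this tiny example makes the cost of a dense quotient visible: the input takes O(log D) bits, the output Θ(D) terms.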
Such a growth leads to even greater difficulties in designing efficient algorithms for Euclidean division, since it is hard to predict the sparsity of the quotient and the remainder, which can range from constant to exponential.

One important line of work with sparse polynomials is to find algorithms with a quasi-optimal bit complexity Õ(T(log D + log C)), where T is the number of nonzero coefficients of the input and output, D the degree and log C a bound on the coefficient bitsize. While it is trivial that such algorithms exist for addition and subtraction, many attempts have been made for multiplication [
5, 27, 23, 16, 15, 20, 8, 25] but none of them was quasi-optimal in the general case. Only recently, the authors proposed a quasi-optimal algorithm for the multiplication of sparse polynomials over finite fields of large characteristic or over the integers [ ].

To obtain such a quasi-optimal output-sensitive algorithm, we strongly relied on sparse polynomial interpolation. In the latter problem, a sparse polynomial is given implicitly, using either a straight-line program (SLP) or a blackbox. Though efficient output-sensitive algorithms exist in the blackbox model [ ], they are not well suited for sparse polynomial arithmetic, since one probe of the blackbox is assumed to take constant time, which is not the case in our setting. Using sparse interpolation algorithms on SLPs is not a trivial solution either, since no quasi-optimal bit complexity bound is known despite the remarkable recent progress [
18, 3, 2, 1, 4, 17, 11, 9, 6]. The best known result is due to Huang [ ], who obtained a bit complexity Õ(LT log D log C) to interpolate an SLP of length L representing a T-sparse polynomial of degree at most D with coefficients of size log C. Nevertheless, we showed in [ ] how to reuse sparse interpolation to derive a quasi-optimal bit complexity Õ(T(log D + log C)) for sparse polynomial multiplication.

In this work, we are interested in using fast sparse interpolation to derive a better complexity bound for sparse polynomial division, in the special case where the division is exact. As a second goal, we make progress on the very related problem of testing the divisibility of two sparse polynomials.

Euclidean division of sparse polynomials.
Let F = GQ + R ∈ K[X] where F and G are two polynomials with at most T nonzero coefficients (#F, #G ≤ T), D = deg(F) > n = deg(G), and deg(R) < deg(G). Computing Q and R through classic Euclidean division requires O(#G#Q) operations in K. Yet keeping track of the coefficients of the remainder during the computation could dominate the cost. Indeed, each new nonzero coefficient of Q implies adding a multiple of G to the current remainder: at the i-th step, the current remainder can have as many as #F + (i − 1)(#G − 1) nonzero coefficients. Using sorted lists to store these monomials leads to a total complexity of O(#F² + #Q²(#G)²) exponent comparisons. A good alternative to reduce this cost is to prefer a binary heap structure that allows insertion and extraction within logarithmic time. This yields O((#F + #Q#G) log(#F + #Q#G)) comparisons. We shall mention that the geobucket structure [ ] can reach a similar complexity.

To further reduce the number of comparisons, Johnson [ ] adapts the Euclidean algorithm to maintain a heap with at most #Q + 1 elements: one monomial q_i g_j X^{e_k} for each known coefficient q_i of the quotient Q, as well as the highest nonzero term of F that has not yet been cancelled out. When Q is computed, the heap is used to extract the nonzero coefficients of the remainder using a similar strategy. This division algorithm performs O(#G#Q) operations in K, plus O((#F + #G#Q) log #Q log D) bit operations for heap management.

Monagan and Pearce have proposed a variant of this algorithm that maintains a heap of size O(#G) instead of O(#Q) [ ]. The best solution to date for sparse polynomial division is to switch from a quotient heap to a divisor heap whenever the quotient is getting larger than the divisor, reducing the logarithmic factor of the heap management to min(log #Q, log #G) [ ]. As pointed out in [ ], for rational coefficients, using the heap together with a fraction-free algorithm helps to reduce the number of operations with rational numbers, since operations on denominators can be delayed.
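The quotient-heap technique just described can be illustrated by the following simplified Python sketch (restricted to exact division over Z; it is not the authors' implementation, and the name `heap_div` and its term-list format are chosen here for illustration). Terms of F − Q·G are produced in decreasing exponent order, with one pending product per known quotient term:

```python
# Sketch of a Johnson-style quotient-heap division (exact case over Z; the
# leading coefficient of G is assumed to divide evenly). The heap holds at
# most #Q + 1 pending products, never the whole remainder.
import heapq

def heap_div(F, G):
    """F, G: lists of (exponent, coefficient) sorted by decreasing exponent.
    Returns the quotient terms of the exact division F / G."""
    g0e, g0c = G[0]                 # leading term of the divisor
    Q, heap, f_idx = [], [], 0      # heap entries: (-exp, quot idx, div idx)

    def push(qi, gj):
        heapq.heappush(heap, (-(Q[qi][0] + G[gj][0]), qi, gj))

    while f_idx < len(F) or heap:
        # next exponent: the largest among F's stream and pending products
        if heap and (f_idx >= len(F) or -heap[0][0] > F[f_idx][0]):
            e = -heap[0][0]
        else:
            e = F[f_idx][0]
        c = 0
        if f_idx < len(F) and F[f_idx][0] == e:
            c += F[f_idx][1]
            f_idx += 1
        while heap and -heap[0][0] == e:     # subtract colliding products
            _, qi, gj = heapq.heappop(heap)
            c -= Q[qi][1] * G[gj][1]
            if gj + 1 < len(G):              # advance this product lazily
                push(qi, gj + 1)
        if c == 0:
            continue
        assert e >= g0e and c % g0c == 0     # exact-division assumption
        Q.append((e - g0e, c // g0c))        # new quotient term found
        if len(G) > 1:
            push(len(Q) - 1, 1)              # its product with G's tail
    return Q
```

For instance, with G = X^5 + 3X^2 + 1 and F = G·(2X^3 − X + 4), `heap_div` recovers the three quotient terms without ever materializing a large remainder.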
This minimizes the number of GCDs to #Q instead of O(#F + #G#Q), and it reduces the overall constant for the multiplications and divisions of integers, see [24, Table 1-2]. If F and G have coefficients in Z, their algorithm has bit complexity Õ(T²(log C + log D)) where D = deg(F), T bounds the sparsities of F, G and Q, and C bounds the absolute values of their coefficients.

To the best of our knowledge, no algorithm has been specifically designed for the special case of exact division.

Sparse divisibility testing.
The problem of sparse divisibility testing is to determine, given two sparse polynomials F and G, whether G divides F. It is an open problem whether it admits a polynomial-time algorithm, that is an algorithm that runs in time (T log D)^{O(1)} where T bounds the sparsities of the inputs and D their degrees. We note that the division algorithms do not settle the problem. Indeed, as mentioned earlier, the quotient of two sparse polynomials F and G can be exponentially larger than F and G. Such a growth also explains the difficulty of designing optimal algorithms, since it is hard to predict the sparsity of the quotient and the remainder, which can range from constant to exponential.

The only general complexity result on this problem is due to Grigoriev, Karpinski and Odlyzko [ ], who show that the problem is in coNP under the Extended Riemann Hypothesis (ERH). Besides, the problem admits polynomial-time algorithms in the easy cases where deg(G) or deg(F) − deg(G) are polynomially bounded [ ].

1.2 Our contributions

We focus on the exact division of sparse polynomials. On the one hand, we provide the first algorithms whose bit complexity is quasi-linear in the input and output sparsities. Our algorithms work over finite fields and the integers, and are randomized. Over a finite field F_q of characteristic larger than the degree D of the inputs, our algorithm computes the quotient of two polynomials in Õ_ε(T log D log q) bit operations with probability at least 1 − ε, where T bounds the sparsities of both the inputs and the output. For smaller characteristic, the complexity bound is Õ_ε(T log D(log D + log q)). For polynomials over Z with coefficients bounded by C in absolute value, our algorithm performs Õ_ε(T(log C + log D log S) + log² S) bit operations, where S is the maximum of D and the absolute value of the coefficients of the result.
Our main technique is to adapt sparse polynomial interpolation algorithms to our needs. As a more general remark, our approach generalizes the interpolation of straight-line programs that contain not only additions, subtractions and multiplications, but also divisions. Our work focuses on the univariate case, but it can be straightforwardly extended to the multivariate case using Kronecker substitution, or randomized Kronecker substitution [ ].

On the other hand, we provide a polynomial time algorithm for special cases of the sparse polynomial divisibility testing problem when deg(F) = O(deg(G)). We prove that if G contains a small chunk of coefficients, with large gaps surrounding it, then one can test in polynomial time whether G divides F. More precisely, we require G to be written as G₀ + X^k G₁ + X^ℓ G₂ with deg(G₁) = (T log D)^{O(1)}, k − deg(G₀) = Ω(D) and ℓ − deg(X^k G₁) = Ω(D). This encompasses polynomials of the form G₀ + X^k G₁ or G₁ + X^ℓ G₂. Our technique is to prove that in this situation, even if the quotient F/G may have an exponential number of nonzero terms, we are able to efficiently compute a multiple of the quotient that is sparse.

Notations.
Let F = ∑_{i=1}^{T} f_i X^{e_i}. We use #F = T to denote its sparsity (number of nonzero terms) and supp(F) = {e₁, . . . , e_T} for its support. If F ∈ Z[X], we use ‖F‖∞ = max_i |f_i| to denote its height. The reciprocal of F is the polynomial F* = X^{deg(F)} F(1/X).

Our method to compute the exact quotient F/G of two sparse polynomials F and G relies on sparse interpolation algorithms. These algorithms usually take as input a straight-line program (SLP), or sometimes a blackbox, representing a sparse polynomial Q, together with bounds on the sparsity and the degree of Q. The output is the sparse polynomial Q given by the list of its nonzero monomials.

For polynomials over F_q, we rely on the best known sparse interpolation algorithms, namely the one of Huang [ ] when q > deg Q and the one of Arnold, Giesbrecht and Roche [ ] otherwise. These two algorithms share a common technique originally due to Garg and Schost [ ]. Given an SLP representing a sparse polynomial Q, they compute the reduction of Q modulo X^p − 1 for some prime p. This computation is known as SLP probing and it can use dense polynomial arithmetic when p is small enough. The goal is then to reconstruct Q from Q_p = Q mod X^p −
1. One difficulty comes from exponent recovery, since a monomial cX^e of Q is mapped to cX^{e mod p} in Q_p. A second one, called exponent collision, occurs when two distinct exponents e₁, e₂ ∈ supp(Q) are congruent modulo p. Such a collision creates the monomial (c₁ + c₂)X^{e₁ mod p} in Q_p, from which neither c₁X^{e₁} nor c₂X^{e₂} can be easily recovered.

The latter difficulty is handled similarly in the algorithms from [ ] and [ ]. Taking p at random in a large enough set of prime numbers, one can show that a substantial fraction of the monomials of Q do not collide during the reduction modulo X^p − 1. Therefore, working with several random primes p allows the full reconstruction. The two algorithms mainly differ in the way they overcome the first difficulty.

Huang's very natural approach is to consider the derivative Q′ of Q [ ]. If the characteristic of F_q is larger than the degree of Q, a monomial cX^e of Q is mapped to ceX^{e−1} in Q′. Then it is mapped to ceX^{(e−1) mod p} in (Q′)_p = Q′ mod X^p − 1. Given an SLP for Q, one can efficiently compute an SLP for Q′ using automatic differentiation [ ]. If the monomial cX^e does not collide modulo X^p − 1, then Q_p contains cX^{e mod p} and (Q′)_p the monomial ceX^{(e−1) mod p}. The original monomial is retrieved using a mere division of the coefficients.

(Throughout, we let Õ(f(n)) = f(n)(log f(n))^{O(1)} and Õ_ε(f(n)) = Õ(f(n) log(1/ε)).)

With smaller characteristic, Huang's idea no longer works, since not all the integer exponents exist in F_q. Instead, Arnold, Giesbrecht and Roche work modulo several primes p_i and use the Chinese remainder theorem to recover the exponents. To do so, they introduce the diversification technique, to be able to match the coefficients across the images Q mod X^{p_i} −
1. Indeed, replacing Q with Q(α_j X) for several randomly chosen α_j's, the nonzero coefficients of Q(α_j X) are pairwise distinct with good probability.

We now propose to adapt both approaches to the computation of Q = F/G, which raises further difficulties, as the division in F_q[X]/(X^p − 1) is not well-defined for all divisors. Given F, G ∈ F_q[X] such that F = GQ, our aim is to compute Q mod X^p − 1 ∈ F_q[X]. Let F_p = F mod X^p − 1, G_p = G mod X^p − 1 and Q_p = Q mod X^p − 1; then

F_p = G_p Q_p mod X^p − 1.    (1)

If gcd(G_p, X^p − 1) = 1, then G_p is invertible modulo X^p − 1, and Q_p can be computed. Nevertheless, when G_p and X^p − 1 are not coprime, we cannot directly recover Q_p. The following lemma defines a probabilistic approach to overcome the latter difficulty.

Lemma 2.1.
Let A and B ∈ F_q[X] be two nonzero polynomials, and let α be randomly chosen in some extension F_{q^s} of F_q. Then A(αX) and B(X) are coprime with probability at least 1 − deg(A) deg(B)/q^s.

Proof. Let β be a root of B in an algebraic closure F̄_q of F_q. Then β is a root of A(αX) if and only if A(αβ) = 0, that is, if and only if α is a root of A(βX). Since deg(A(βX)) = deg(A), there exist at most deg(A) roots of A(βX) in F̄_q. Since B has at most deg(B) roots in F̄_q, there are at most deg(A) deg(B) values of α such that there exists a common root β of A(αX) and B(X). Therefore, with probability at least 1 − deg(A) deg(B)/q^s, A(αX) and B(X) do not share a common root in F̄_q, that is, they are coprime.

Notations.
For A ∈ F_q[X], α ∈ F_{q^s} and p ≥ 2, let A[α] be the polynomial A(αX), A_p be the polynomial A(X) mod X^p − 1, and A[α]_p be the polynomial A[α](X) mod X^p − 1 = A(αX) mod X^p − 1. We remark that A[α]_p ≠ A_p(αX).

The idea is to apply Lemma 2.1 to G and X^p − 1. Instead of applying Equation (1) to F and G, we apply it to F[α] and G[α] to get Q[α]_p = Q(αX) mod X^p − 1. In other words, we compute Q[α]_p from the equation F[α]_p = G[α]_p Q[α]_p mod X^p − 1. If α is chosen at random in some extension F_{q^s} of F_q, G[α] and X^p − 1 are coprime with probability at least 1 − p deg(G)/q^s, and then G[α]_p is invertible modulo X^p − 1. Since it works with Q[α]_p for any α, we can adapt the algorithm of Arnold, Giesbrecht and Roche [ ].

In order to adapt Huang's algorithm [ ], we also need to compute Q′(X) mod X^p − 1. To this end, we rely on the equality

(F′)_p − (G′)_p Q_p mod X^p − 1 = G_p (Q′)_p mod X^p − 1,    (2)

where (A′)_p denotes A′(X) mod X^p − 1 for any A ∈ F_q[X]. We notice that this equation is similar to Equation (1). Knowing Q_p, the equation defines (Q′)_p if and only if G_p is invertible modulo X^p − 1. This means that if α is chosen at random in F_{q^s}, Equations (1) and (2) allow us to compute both Q[α]_p and ((Q[α])′)_p with probability at least 1 − p deg(G)/q^s, where ((Q[α])′)_p is the polynomial [Q(αX)]′ mod X^p − 1. The next lemmas give the cost of these operations.
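Before the cost analysis, the probing step just described can be sketched in Python over a prime field GF(q). This is a simplified stand-in for the paper's setting (which uses an extension F_{q^s} and fast dense arithmetic, whereas this sketch uses naive quadratic arithmetic, and the function names are ours): given sparse F, G with F = GQ, it draws random α until G(αX) is invertible modulo X^p − 1 and then returns Q(αX) mod X^p − 1.

```python
# Sketch over a prime field GF(q) (the paper works in an extension F_{q^s}).
# Given sparse F, G with F = G*Q, compute Q(a*X) mod X^p - 1 for a random a.
import random

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def pdivmod(a, b, q):  # dense little-endian coefficient lists, b trimmed
    a, out = a[:], [0] * max(len(a) - len(b) + 1, 0)
    inv = pow(b[-1], -1, q)
    for i in range(len(a) - len(b), -1, -1):
        c = a[i + len(b) - 1] * inv % q
        out[i] = c
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - c * bj) % q
    return out, trim(a)

def pinv(g, m, q):
    """Inverse of g modulo m by the extended Euclidean algorithm, or None."""
    r0, r1, t0, t1 = m[:], g[:], [], [1]
    while trim(r1[:]):
        quo, rem = pdivmod(r0, r1, q)
        prod = [0] * (len(quo) + len(t1))
        for i, x in enumerate(quo):
            for j, y in enumerate(t1):
                prod[i + j] = (prod[i + j] + x * y) % q
        nt = [((t0[k] if k < len(t0) else 0)
               - (prod[k] if k < len(prod) else 0)) % q
              for k in range(max(len(t0), len(prod)))]
        r0, r1, t0, t1 = r1, rem, t1, trim(nt)
    r0 = trim(r0)
    if len(r0) != 1:
        return None                       # gcd(g, m) is not constant
    c = pow(r0[0], -1, q)
    return [x * c % q for x in t0]

def probe_quotient_mod(F, G, p, q):
    """F, G: sparse dicts over GF(q). Returns (Q(a*X) mod X^p - 1, a)."""
    m = [q - 1] + [0] * (p - 1) + [1]     # X^p - 1
    while True:
        a = random.randrange(1, q)
        Fp, Gp = [0] * p, [0] * p
        for e, c in F.items():
            Fp[e % p] = (Fp[e % p] + c * pow(a, e, q)) % q
        for e, c in G.items():
            Gp[e % p] = (Gp[e % p] + c * pow(a, e, q)) % q
        inv = pinv(trim(Gp[:]), m, q)
        if inv is None:                   # G(aX), X^p - 1 not coprime: retry
            continue
        prod = [0] * (2 * p)              # Fp * inv, then reduce mod X^p - 1
        for i, x in enumerate(inv):
            for j, y in enumerate(Fp):
                prod[i + j] = (prod[i + j] + x * y) % q
        return [(prod[i] + prod[i + p]) % q for i in range(p)], a
```

The returned vector is exactly Q[α]_p = Q(αX) mod X^p − 1 by Equation (1), since F[α]_p = G[α]_p Q[α]_p and G[α]_p has been checked to be invertible.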
Lemma 2.2.
Let A ∈ F_q[X] of degree D and sparsity T, and let α ∈ F_{q^s}. Then A[α] can be computed in Õ(T log D s log q) bit operations, and A[α]_p in O(T log D log log p + T s log q) more bit operations.

Proof. Computing A[α] = A(αX) requires T exponentiations of α, that is O(T log D) operations in F_{q^s}, which gives a bit complexity of Õ(T log D s log q). Computing A[α]_p from A[α] needs T exponent divisions and at most T additions in F_{q^s}.

Lemma 2.3.
Let F and G ∈ F_q[X] be such that G divides F, and let p ≥ 2 and α ∈ F_{q^s} be such that G[α] and X^p − 1 are coprime. If Q = F/G, the polynomials Q[α]_p and ((Q[α])′)_p can be computed in Õ(T log D s log q + p s log q) bit operations, where D = deg(F) and T is a bound on the sparsities of F and G.

Proof. To get Q[α]_p, the first step computes F[α]_p and G[α]_p. Then we invert G[α]_p modulo X^p − 1 and multiply the result by F[α]_p. Then, to get ((Q[α])′)_p, we compute the derivatives of F[α] and G[α] and perform two more multiplications and one addition of dense polynomials, according to Equation (2). All dense polynomial operations cost Õ(p s log q) bit operations, while the cost of the first step is given by Lemma 2.2. This concludes the proof, since the cost of computing the derivatives is negligible.

Huang's algorithm recovers the monomials of Q from Q_p and (Q′)_p. In our approach, we compute Q[α]_p and ((Q[α])′)_p instead, and thus recover monomials of Q[α] instead of Q. Yet, if cX^e is a monomial of Q[α], the corresponding monomial in Q is cα^{−e}X^e and it can be computed in Õ(log(e) s log q) = Õ(log D s log q) bit operations.

We first consider the case where the characteristic of F_q is larger than the degree D of the input polynomials. We begin with the main ingredient of Huang's algorithm, to further adapt it to our needs. Recall that the idea is to recover Q from Q_p and (Q′)_p.

Definition 2.4.
Let Q ∈ F_q[X] and p a prime number. Then DerivLift(Q_p, (Q′)_p) is the polynomial Q̂ = ∑_e cX^e where the sum ranges over all the integers e such that Q_p contains the monomial cX^{e mod p} and (Q′)_p contains the monomial ceX^{(e−1) mod p}.

Clearly, if one knows both Q_p and (Q′)_p, DerivLift(Q_p, (Q′)_p) can be computed in Õ(p log q) bit operations. The next lemma revamps the core of Huang's result [ ]. Similar results are used in several interpolation algorithms [11, 1, 19].

Lemma 2.5.
Let Q ∈ F_q[X] of degree at most D and sparsity at most T. Let p₁, . . . , p_k be randomly chosen among the first N prime numbers, where N = max(2, ⌈(T − 1) log D⌉). Let i be the index that maximizes #Q_{p_i}, and let Q̂ = DerivLift(Q_{p_i}, (Q′)_{p_i}). Then with probability at least 1 − 2^{−k}, #(Q − Q̂) ≤ T/2.

The main idea in Huang's algorithm is to use this lemma log(T) times to recover all the coefficients of Q with probability at least (1 − 2^{−k})^{log T}. To extend the algorithm to our case, we need to compute Q_p and (Q′)_p by choosing α in a suitable extension of F_q and computing F(αX)/G(αX) mod X^p − 1.

Corollary 2.6.
Let G ∈ F_q[X] of degree at most D, and let α be a random element of F_{q^s} where s = ⌈log_q(D⁴/ε)⌉. Then with probability at least 1 − ε, G(αX) is coprime with X^p − 1 for each of the first N prime numbers p, where N is defined as in Lemma 2.5.

Proof. The polynomial G(αX) is coprime with all these polynomials if and only if it is coprime with their product. The degree of their product is the sum of the first N prime numbers, which is bounded by N² ln N (for N > 2). Hence, by Lemma 2.1, the probability that G(αX) is coprime with this product is at least 1 − DN² ln N/q^s if α is chosen at random in F_{q^s}. Since s ≥ log_q(D⁴/ε), we have q^s ≥ D⁴/ε. Furthermore, since T ≤ D, N ≤ D log D. This implies

DN² ln N/q^s ≤ ε D³ log²(D) ln(D log D)/D⁴ ≤ ε

for large enough D.

By hypothesis on F_q, we have q ≥ D. This implies that s = O(1) in Corollary 2.6 as long as 1/ε remains polynomial in D. We now provide an algorithm for the exact division of sparse polynomials over large finite fields, given a bound on the sparsity of the quotient.

Theorem 2.7.
Algorithm 1 is correct. It uses Õ_ε(T log D log q) bit operations.

Proof. For the algorithm to succeed, G(αX) must be coprime with all the polynomials X^{p_i} − 1. By Corollary 2.6 applied with ε/2, this holds for every p ∈ P with probability at least 1 − ε/2. Next, the algorithm succeeds if at each iteration, the number of monomials of Q[α] − Q̂[α] is halved. According to Lemma 2.5, this probability is at least (1 − 2^{−k})^{log T} ≥ 1 − 2^{−k} log T ≥ 1 − ε/2. Therefore, the overall probability of success is at least 1 − ε.

Let us first note that s log q = O_ε(log q) since D ≤ q. Step 2 takes Õ(N) = Õ(T log D) bit operations, while Step 3 takes Õ(s log q) = Õ_ε(log q) bit operations. Computing the polynomials F[α] and G[α] costs Õ(T log D s log q) = Õ_ε(T log D log q) bit operations by Lemma 2.2. In the loop, as p_i = O(N log N), computing F[α]_{p_i} and G[α]_{p_i} costs Õ_ε(T(log D + log q)) bit operations. Then the operations on polynomials of degree at most p take Õ(p s log q) bit operations, that is Õ_ε(T log D log q) bit operations. Since this must be done for Õ_ε(log T) primes in total, the overall cost of the algorithm is Õ_ε(T log D log q) bit operations.

Algorithm 1 SparseDivLargeCharacteristic
Input: F, G ∈ F_q[X] such that G divides F; a bound T on #F, #G and #(F/G); 0 < ε < 1
Output: F/G ∈ F_q[X] with probability ≥ 1 − ε
1: Let k = ⌈log(2 log(T)/ε)⌉ and N = max(2, ⌈(T − 1) log D⌉)
2: Compute the set P of the first N prime numbers
3: Compute an extension field F_{q^s} where s = ⌈log_q(2D⁴/ε)⌉
4: Choose α at random in F_{q^s}
5: Compute F[α] = F(αX) and G[α] = G(αX)
6: Set Q̂[α] = 0
7: loop ⌈log T⌉ times
8:   Choose p₁, . . . , p_k at random in P
9:   for each p_i do
10:     Compute F[α]_{p_i}, G[α]_{p_i}    ▷ Lemma 2.2
11:     if G[α]_{p_i} is not coprime with X^{p_i} − 1 then return failure
12:     Compute Q[α]_{p_i} and ((Q[α])′)_{p_i}    ▷ Lemma 2.3
13:   Let p ∈ {p₁, . . . , p_k} that maximizes #Q[α]_p
14:   Add DerivLift(Q[α]_p − Q̂[α]_p, ((Q[α])′)_p − ((Q̂[α])′)_p) to Q̂[α]
15: Return Q̂[α](α^{−1}X)

When the field F_q has a characteristic smaller than the degree, Huang's technique is no longer possible, and the best alternative is to use the algorithm of Arnold, Giesbrecht and Roche [ ]. As mentioned before, their sparse interpolation algorithm computes Q[α]_p = Q(αX) mod X^p − 1 for several α and p, and they use the Chinese Remainder Theorem to recover the coefficients of Q. This is the last part of [
2, procedure BuildApproximation], and we denote it by CRTLift. The next lemma summarizes their approach. It is the combination of [2, Lemma 3.1] for the value of λ, [2, Corollary 3.2] for γ and [2, Lemma 4.1] for m and s.

Lemma 2.8 ([ ]). Let Q ∈ F_q[X] be a sparse polynomial of degree D and sparsity T. Let 0 < µ < 1, λ = max(2, ⌈(T − 1) ln D⌉), γ = ⌈max(log_λ D, 8 ln(1/µ))⌉, m = ⌈log_{1/µ}(2T(1 + ⌈log_λ D⌉))⌉ and s ≥ log_q(D + 1). Let Q̂ = CRTLift((Q[α_j]_{p_i})_{i,j}) where Q[α_j]_{p_i} = Q(α_j X) mod X^{p_i} − 1, for random primes p₁, . . . , p_γ in ]λ, 2λ[ and random nonzero elements α₁, . . . , α_m of F_{q^s}. Then with probability at least 1 − µ, #(Q − Q̂) ≤ T/2.

In order to use such a lemma for sparse polynomial division, we need to compute Q[α_j]_{p_i} for the γ primes p_i and the m points α_j. As explained in Section 2.1, Q[α_j]_{p_i} can be efficiently computed as soon as G(α_j X) and X^{p_i} − 1 are coprime. To ensure this for all pairs p_i and α_j, we choose the α_j's in a somewhat larger extension of F_q. That is, we need to increase the bound on s given in Lemma 2.8 according to Lemma 2.1.

Lemma 2.9.
Let G ∈ F_q[X] of degree D, let 0 < µ < 1, let λ, γ, m be three integers and let p₁, . . . , p_γ be prime numbers in ]λ, 2λ[. Let s = ⌈log_q(2λγmD/µ)⌉ and let α₁, . . . , α_m be random elements of F_{q^s}. Then with probability at least 1 − µ, G(α_j X) and X^{p_i} − 1 are coprime for all pairs (i, j), 1 ≤ i ≤ γ and 1 ≤ j ≤ m.

Proof. Let Π = ∏_{i=1}^{γ} X^{p_i} − 1. Its degree is ∑_{i=1}^{γ} p_i ≤ 2λγ. For any α_j, G(α_j X) is coprime with all the polynomials X^{p_i} − 1 if and only if G(α_j X) is coprime with Π. Since α_j is randomly chosen in F_{q^s}, by Lemma 2.1 the probability that G(α_j X) and Π are not coprime is at most 2λγD/q^s ≤ µ/m by definition of s. Therefore, the probability that there is at least one α_j such that G(α_j X) and Π are not coprime is at most µ.

Lemma 2.9 shows that interpolating a sparse polynomial quotient needs a larger extension than the original algorithm in [ ] for an arbitrary SLP. Actually, the growth is asymptotically negligible. Using Lemma 2.8 and Lemma 2.9, one can derive Algorithm 2, which is an adaptation of the sparse interpolation algorithm from [ ] to the exact quotient of two sparse polynomials.

Theorem 2.10.
Algorithm 2 is correct. It uses Õ_ε(T log D(log D + log q)) bit operations, where D = deg(F).

Algorithm 2 SparseDivSmallCharacteristic
Input: F, G ∈ F_q[X] such that G divides F; a bound T on #F, #G and #(F/G); 0 < ε < 1
Output: F/G ∈ F_q[X] with probability ≥ 1 − ε
1: Let µ = ε/(2⌈log T⌉) and set λ, γ and m as in Lemma 2.8
2: Compute the set P of the prime numbers in ]λ, 2λ[
3: Compute an extension field F_{q^s} where s = ⌈log_q(2λγmD/µ)⌉
4: Set Q̂ = 0
5: loop ⌈log T⌉ times
6:   Choose p₁, . . . , p_γ at random in P
7:   Choose α₁, . . . , α_m at random in F_{q^s}
8:   for each pair (p_i, α_j) do
9:     Compute F[α_j]_{p_i}, G[α_j]_{p_i}    ▷ Lemma 2.2
10:    if G[α_j]_{p_i} is not coprime with X^{p_i} − 1 then return failure
11:    Compute Q[α_j]_{p_i}    ▷ Lemma 2.3
12:   Add CRTLift((Q[α_j]_{p_i} − Q̂[α_j]_{p_i})_{i,j}) to Q̂
13: Return Q̂

Proof.
The algorithm is a modification of [2, Procedure MajorityVoteSparseInterpolate]. The algorithm succeeds if at each iteration, every G[α_j]_{p_i} is coprime with X^{p_i} − 1 and CRTLift succeeds in recovering at least half of the coefficients of Q − Q̂. Both conditions hold with probability at least 1 − 2µ at each step. The global probability of success is thus at least (1 − 2µ)^{⌈log T⌉} ≥ 1 − 2µ⌈log T⌉ ≥ 1 − ε.

As in the original algorithm, the cost is dominated by the computation of all the Q[α_j]_{p_i}. There are γm⌈log T⌉ such polynomials to compute. Since p_i < 2λ and α_j ∈ F_{q^s}, each computation costs Õ((T log D + λ)s log q) bit operations according to Lemma 2.3. As λ = O(T log D), γ = Õ_ε(log D), m is logarithmic and s = Õ_ε(1 + log_q D), the algorithm requires Õ_ε(T log D(log D + log q)) bit operations.

Both interpolation algorithms presented in the previous sections require a bound on the sparsity of the quotient. To overcome this difficulty, we use the same strategy as for sparse polynomial multiplication [ ]. The idea is to guess the sparsity bound as we go, using a fast verification procedure. For verifying an exact quotient, we can directly reuse Algorithm Verify
SP from [ ], which verifies whether F = GQ, with an error probability at most ε when F ≠ GQ. It requires Õ_ε(T(log D + log q)) bit operations over F_q and Õ_ε(T(log D + log C)) bit operations over Z, where C is a bound on the heights of F, G and Q.

Algorithm 3 SparseExactDivision
Input: F, G ∈ F_q[X] such that G divides F; 0 < ε < 1
Output: F/G ∈ F_q[X] with probability at least 1 − ε
1: t ← 1
2: repeat
3:   t ← 2t
4:   Compute a tentative quotient Q̂ using Algorithm 1 or 2, with sparsity bound t and probability ε/(2t)
5: until VerifySP(F, G, Q̂, ε/(2t))
6: return Q̂

Theorem 2.11.
Let F, G ∈ F_q[X] be such that G divides F, let 0 < ε < 1, D = deg(F) and T = max(#F, #G, #(F/G)). With probability at least 1 − ε, Algorithm 3 returns F/G in Õ_ε(T log D log q) bit operations if char(F_q) > D, or Õ_ε(T log D(log D + log q)) bit operations otherwise.

Proof. The probability 1 − ε concerns both the correctness and the complexity of the algorithm. More precisely, we prove that the algorithm is correct with probability ≥ 1 − ε/2 and that it performs the claimed number of bit operations with probability ≥ 1 − ε/2.

The algorithm is incorrect when F ≠ GQ̂. This happens if at some iteration, Algorithm 1 or 2 returns an incorrect quotient but the verification algorithm fails to detect it. Let us pretend we execute the algorithm with a deterministic (infallible) verification test, and let t₀ be the first value of t for which the tentative quotient is correct. Our actual algorithm returns a correct quotient if and only if all the verification tests have been correct for t = 2, 4, . . . , t₀/2. This happens with probability at least 1 − ∑_t ε/(2t) ≥ 1 − ε/2, since the sum ranges over powers of two.

For the complexity, we first need to bound the number of iterations. Since the values of t are powers of two, the first value ≥ #(F/G) is at most 2#(F/G). If t attains this value, Algorithm 1 or 2 correctly computes F/G with probability at least 1 − ε/2 according to Theorems 2.7 and 2.10. That is, with probability at least 1 − ε/2, t is bounded by 2T and the number of iterations is O(log T). Depending on the characteristic, using Theorems 2.7 or 2.10 and the complexity of [12, Algorithm VerifySP], we obtain the claimed complexity with probability at least 1 − ε/2.

For polynomials over Z[X], we cannot directly use Algorithm 3 with the variant of Huang's algorithm over Z [ ]. Indeed, the coefficients arising during the computation may be dramatically larger than the inputs and the output. This is mostly due to the inversion of G modulo X^p −
1. Instead we use the standard techniquethat maps the computation over some large enough prime finite field. As we cannot tightly predict the sizeof the coefficients, we can again reuse our guess-and-check approach with several prime numbers of growingsize to discover it as we go. In order to use the fastest algorithm (Algorithm 1) we consider prime finite fields (cid:70) q such that q is larger than the input degree.We first define a bound on the coefficient of the quotient of two sparse polynomial over (cid:90) [ X ] as the classicMignotte’s bound [ ] on dense polynomial is too loose and it has no equivalent in the sparse case. Lemma 2.12.
We first define a bound on the coefficients of the quotient of two sparse polynomials over $\mathbb{Z}$, as the classical Mignotte bound [10] on dense polynomials is too loose and has no equivalent in the sparse case.

Lemma 2.12. Let $F, G, Q \in \mathbb{Z}[X]$ be three sparse polynomials such that $F = QG$, and let $T = \#Q$ be the number of nonzero coefficients of $Q$. Then $\|Q\|_\infty \leq (\|G\|_\infty + 1)^{\lceil T/2\rceil - 1}\|F\|_\infty$.

Proof. Write $Q = \sum_{i=1}^{T} q_iX^{e_i}$ with $e_1 > e_2 > \cdots > e_T$. We use induction on the remainder and quotient sequence in the Euclidean division algorithm. Let $Q_j = \sum_{i=1}^{j} q_iX^{e_i}$ and $R_j = F - Q_jG$ be the elements of that sequence, starting with $R_0 = F$ and $Q_0 = 0$. The integer coefficients of $Q$ are defined as $q_i = \mathrm{LC}(R_{i-1})/\mathrm{LC}(G)$, where $\mathrm{LC}$ denotes the leading coefficient; in particular $|q_i| \leq \|R_{i-1}\|_\infty$ since $|\mathrm{LC}(G)| \geq 1$. We know from the algorithm that $R_i = R_{i-1} - q_iX^{e_i}G$ and $Q_i = Q_{i-1} + q_iX^{e_i}$. Since $R_0 = F$ and $\|R_i\|_\infty \leq \|R_{i-1}\|_\infty + |q_i|\cdot\|G\|_\infty \leq \|R_{i-1}\|_\infty(1 + \|G\|_\infty)$, we have $\|R_i\|_\infty \leq \|F\|_\infty(1+\|G\|_\infty)^i$. Since the reciprocal $Q^\star$ of $Q$ is the quotient $F^\star/G^\star$, we also get $\|R_i\|_\infty \leq \|F\|_\infty(1+\|G\|_\infty)^{T-i}$. Therefore, $|q_i| \leq \|F\|_\infty(1+\|G\|_\infty)^{\min(i-1,\,T-i)}$, whence $\|Q\|_\infty = \max_i(|q_i|) \leq \|F\|_\infty(1+\|G\|_\infty)^{\lceil T/2\rceil - 1}$.

Algorithm 4 SparseExactDivisionOverZ
Input: $F, G \in \mathbb{Z}[X]$ such that $G$ divides $F$; $0 < \varepsilon < 1$.
Output: $F/G$, with probability at least $1-\varepsilon$.
1: Let $n = 2\deg(F)$ and $i = 1$.
2: Let $\mu = \varepsilon/(4\lceil\log\log((\|G\|_\infty+1)^{\lceil T/2\rceil-1}\|F\|_\infty)\rceil)$.
3: repeat
4: Choose $q$ at random in $]n, 2n[$, prime with probability $\geq 1-\mu$.
5: Compute the reductions $F_q = F \bmod q$ and $G_q = G \bmod q$.
6: Compute $\hat{Q} = \textsc{SparseExactDivision}(F_q, G_q, \mu)$.
7: $n \leftarrow n^2$, $i \leftarrow i+1$.
8: until $\textsc{VerifySP}(F, G, \hat{Q}, \varepsilon/2^i)$
9: return $\hat{Q}$
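The guess-and-check structure of the algorithm can be sketched as follows. This is a simplified stand-in, not Algorithm 4 itself: it replaces the probabilistic VerifySP test of [12] by a deterministic product recomputation, uses naive trial-division primality testing instead of Miller–Rabin, assumes that $G$ divides $F$, and all names are ours.

```python
def is_prime(p):
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def poly_mul(a, b):
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            c[ea + eb] = c.get(ea + eb, 0) + ca * cb
    return {e: v for e, v in c.items() if v}

def div_mod_q(f, g, q):
    """Euclidean division over F_q, returning the quotient (remainder dropped)."""
    f = dict(f); quo = {}
    dg = max(g); lg = pow(g[dg], -1, q)
    while f and max(f) >= dg:
        df = max(f); c = f[df] * lg % q
        quo[df - dg] = c
        for e, v in g.items():
            ne = df - dg + e
            nv = (f.get(ne, 0) - c * v) % q
            if nv:
                f[ne] = nv
            else:
                f.pop(ne, None)
    return quo

def exact_div_over_Z(F, G):
    """Quotient F/G for sparse F, G in Z[X], assuming G divides F exactly."""
    n = 2 * max(F)                      # primes just above twice the degree first
    while True:
        q = n + 1
        while not is_prime(q) or G[max(G)] % q == 0:
            q += 1
        Fq = {e: c % q for e, c in F.items() if c % q}
        Gq = {e: c % q for e, c in G.items() if c % q}
        Qq = div_mod_q(Fq, Gq, q)
        Q = {e: (c if c <= q // 2 else c - q) for e, c in Qq.items()}
        if poly_mul(G, Q) == F:         # deterministic stand-in for VerifySP
            return Q
        n = n * n                       # coefficients were too large: square n
```

On $F = (X^9-7)(3X^4+1000)$ and $G = X^9-7$, the first two primes are too small for the coefficient $1000$, so the loop squares $n$ until the symmetric lift is correct.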
The error probability analysis of [12, Algorithm 2] is flawed and should be replaced by this new one.

Theorem 2.13. Let $F$, $G$ be two sparse polynomials in $\mathbb{Z}[X]$ such that $G$ divides $F$, let $0 < \varepsilon < 1$, $D = \deg(F)$, $C = \max(\|F\|_\infty, \|G\|_\infty)$ and $T = \max(\#F, \#G, \#(F/G))$. With probability at least $1-\varepsilon$, Algorithm 4 returns $F/G$ in $\tilde{O}_\varepsilon(T(\log C + \log^2 D) + \log^3 D)$ bit operations if $D > \|F/G\|_\infty$, or $\tilde{O}_\varepsilon(T(\log C + \log D\log\|F/G\|_\infty) + \log^3\|F/G\|_\infty)$ bit operations otherwise.

Proof. The proof goes along the same lines as for Theorem 2.11. The probability concerns both the running time and the success. With the same arguments, the probability that the algorithm returns an incorrect quotient is at most $\varepsilon/2$.

In Step 4, we can use a Miller–Rabin based algorithm to compute a number $q$ that is prime with probability at least $1-\mu$ in $\tilde{O}(\log^3 q)$ bit operations. Step 5 performs $O(T)$ divisions, in $\tilde{O}(T\log C)$ bit operations. Thus, by Theorem 2.11, an iteration of the loop runs in $\tilde{O}_\mu(T\log D\log q + T\log C + \log^3 q)$ bit operations.

Let $Q = F/G$. As soon as $n > 2\|Q\|_\infty$, the loop correctly computes $Q$ if $q$ is prime and SparseExactDivision is correct. This happens with probability at least $1-2\mu$. Therefore, the algorithm stops with $n \leq 4\|Q\|_\infty^2$ with probability at least $1-2\mu$. If $D > \|Q\|_\infty$, $q$ satisfies $2\|Q\|_\infty < q < 4D$ at the first iteration. Hence, the algorithm correctly computes $Q$ in one iteration with probability at least $1-2\mu \geq 1-\varepsilon/2$. Its bit complexity is then $\tilde{O}_\mu(T\log^2 D + T\log C + \log^3 D)$. Otherwise, at most $\lceil\log\log\|Q\|_\infty\rceil$ iterations are needed to get $2\|Q\|_\infty < n < q < 8\|Q\|_\infty^2$. Thus, $Q$ is correctly computed in $\tilde{O}_\mu(T\log D\log\|Q\|_\infty + T\log C + \log^3\|Q\|_\infty)$ bit operations, with probability at least $1 - 2\mu\lceil\log\log\|Q\|_\infty\rceil$. By Lemma 2.12, $\|Q\|_\infty \leq (\|G\|_\infty+1)^{\lceil T/2\rceil-1}\|F\|_\infty$, whence $2\mu\lceil\log\log\|Q\|_\infty\rceil \leq \varepsilon/2$. Since $1/\mu = O(\varepsilon^{-1}\log(T\log C))$, $\tilde{O}_\mu(\cdot)$ is $\tilde{O}_\varepsilon(\cdot)$.

In both cases, whether or not $D > \|Q\|_\infty$, the loop computes the expected result in the claimed time with probability at least $1-\varepsilon/2$. Since the verification with probability of success at least $1-\varepsilon/2^i$ requires $\tilde{O}_{\varepsilon/2^i}(T(\log D + \log C + \log\|Q\|_\infty))$ bit operations and the maximal value of $i$ is expected to be $O(\log\log\|Q\|_\infty)$, its cost is negligible compared to that of the loop iterations. Thus, the algorithm works as stated with probability at least $1-\varepsilon$.

Given two sparse polynomials $F, G \in \mathbb{K}[X]$, we want to check whether $G$ divides $F$ in polynomial time. If $\deg(F) = m+n-1$, $\deg(G) = m$ and $\#F, \#G \leq T$, then the input size is $O(T\log(m+n))$ and the divisibility check must cost $(T\log(m+n))^{O(1)}$. We first recall the only known positive results.

Proposition 3.1.
Let $F$ and $G \in \mathbb{K}[X]$ be of degrees $m+n-1$ and $m$ respectively, with at most $T$ nonzero coefficients. One can check whether $G$ divides $F$ in polynomial time if either $m$ or $n$ is polynomial in $T\log(m+n)$.

Proof. Let $F = GQ + R$ be the Euclidean division of $F$ by $G$. When $n$ is polynomially bounded, the Euclidean division algorithm runs in polynomial time and the verification is trivial. When $m$ is polynomially bounded, the degree of the remainder is polynomial and it can be computed in polynomial time without computing $Q$. Indeed, it suffices to compute $X^e \bmod G$ for each exponent $e \in \mathrm{supp}(F)$, in polynomial time by fast exponentiation.
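The second case of the proof can be sketched as follows, over $\mathbb{K} = \mathbb{Q}$: each monomial $X^e$ of $F$ is reduced modulo $G$ by binary powering in $\mathbb{K}[X]/(G)$, so the cost depends on $\deg(G)$ and $\log\deg(F)$, never on the dense size of $F$. The code and its names are ours, a sketch rather than the paper's implementation.

```python
from fractions import Fraction

def poly_mod(a, g):
    """Remainder of a modulo g, over Q (only used for small deg(g))."""
    a = {e: Fraction(c) for e, c in a.items()}
    dg, lg = max(g), Fraction(g[max(g)])
    while a and max(a) >= dg:
        da = max(a); c = a[da] / lg
        for e, v in g.items():
            ne = da - dg + e
            nv = a.get(ne, 0) - c * v
            if nv:
                a[ne] = nv
            else:
                a.pop(ne, None)
    return a

def poly_mul(a, b):
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            c[ea + eb] = c.get(ea + eb, 0) + ca * cb
    return {e: v for e, v in c.items() if v}

def xpow_mod(e, g):
    """X^e mod g by square and multiply: O(log e) products in K[X]/(g)."""
    r, b = {0: Fraction(1)}, {1: Fraction(1)}
    while e:
        if e & 1:
            r = poly_mod(poly_mul(r, b), g)
        b = poly_mod(poly_mul(b, b), g)
        e >>= 1
    return r

def divides_small_divisor(g, f):
    """Test g | f by summing the reductions of the monomials of f."""
    r = {}
    for e, c in f.items():
        for me, mc in xpow_mod(e, g).items():
            r[me] = r.get(me, 0) + c * mc
    return all(v == 0 for v in r.values())
```

For instance, $X^3 - 1$ divides $X^{300000} - 1$ but not $X^{300001} - 1$, and both checks only manipulate polynomials of degree below $3$.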
The rest of the section can be seen as a generalization of this proposition. As long as one has a polynomial bound on the size of the quotient, the divisibility test is polynomial, by either computing $F - GQ$ or asserting that $F = GQ$. We begin with a very simple remark, that we shall use repeatedly.

Remark 3.2. Let $F, G \in \mathbb{K}[X]$, and let $F^\star = X^{\deg F}F(1/X)$ and $G^\star = X^{\deg G}G(1/X)$ be their reciprocal polynomials. Then $G$ divides $F$ if and only if $G^\star$ divides $F^\star$. In this case, the quotient $Q^\star = F^\star \,\mathrm{quo}\, G^\star$ is the reciprocal of the quotient $Q = F \,\mathrm{quo}\, G$.

Proof. From the definition, $(AB)^\star = A^\star B^\star$ for $A, B \in \mathbb{K}[X]$. Therefore, if there exists $Q$ such that $F = GQ$, then $F^\star = G^\star Q^\star$. The converse follows since the reciprocal is involutive.

We note that the equality $(F \,\mathrm{quo}\, G)^\star = F^\star \,\mathrm{quo}\, G^\star$ only holds when $G$ divides $F$. Otherwise, let $F = GQ + R$ for some nonzero $R$. Then $F^\star = X^{\deg(F)}G(1/X)Q(1/X) + X^{\deg(F)}R(1/X)$. Since $\deg(F) = \deg(G) + \deg(Q)$, the first summand is $G^\star Q^\star$, while the second is $X^{\deg(F)}R(1/X) = X^{\deg(F)-\deg(R)}R^\star$, of degree at least $\deg(F)-\deg(R) > \deg(Q)$. Therefore, $Q^\star \neq F^\star \,\mathrm{quo}\, G^\star$ in that case.
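The multiplicativity of reciprocals used in this proof is easy to check on a small instance; the snippet below (our code, with polynomials as exponent-to-coefficient dictionaries) verifies $(AB)^\star = A^\star B^\star$ on an example.

```python
def reciprocal(f):
    """Reciprocal polynomial X^{deg f} * f(1/X)."""
    d = max(f)
    return {d - e: c for e, c in f.items()}

def poly_mul(a, b):
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            c[ea + eb] = c.get(ea + eb, 0) + ca * cb
    return {e: v for e, v in c.items() if v}

G = {2: 1, 1: 3, 0: 1}    # X^2 + 3X + 1
Q = {5: 1, 0: 2}          # X^5 + 2
F = poly_mul(G, Q)        # exact division by construction
assert reciprocal(F) == poly_mul(reciprocal(G), reciprocal(Q))
```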
3.1 Bounding the sparsity of the quotient

In this section, we provide a bound on the sparsity of the quotient $Q = F \,\mathrm{quo}\, G$, depending on the gap between the highest and second highest exponents in $G$. We make use of the following estimation.

Lemma 3.3. For any integers $1 \leq k \leq (N+1)/3$, $\sum_{i=0}^{k}\binom{N}{i} \leq \frac{2}{k!}N^k$.

Proof. For $0 \leq i < k$, $\binom{N}{i} = \frac{i+1}{N-i}\binom{N}{i+1} \leq \frac{k}{N-k+1}\binom{N}{i+1}$, whence $\binom{N}{i} \leq \left(\frac{k}{N-k+1}\right)^{k-i}\binom{N}{k}$. Therefore,
$$\sum_{i=0}^{k}\binom{N}{i} \leq \binom{N}{k}\sum_{j\geq 0}\left(\frac{k}{N-k+1}\right)^j = \frac{N-k+1}{N-2k+1}\binom{N}{k}.$$
The result follows since $\binom{N}{k} \leq \frac{N^k}{k!}$, and $\frac{N-k+1}{N-2k+1} \leq 2$ as soon as $N \geq 3k-1$.
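The estimate can be checked by brute force on small instances; the snippet below (ours) tests the inequality as reconstructed above, in exact integer arithmetic, for all $N$ and $k$ with $3k-1 \leq N$ in the tested range.

```python
from math import comb, factorial

def bound_holds(N, k):
    # k! * sum_{i=0}^{k} C(N, i) <= 2 * N^k, kept in exact integers
    return factorial(k) * sum(comb(N, i) for i in range(k + 1)) <= 2 * N**k

assert all(bound_holds(N, k)
           for N in range(2, 80)
           for k in range(1, (N + 1) // 3 + 1))
```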
Lemma 3.4. Let $G \in \mathbb{K}[X]$ be of degree $m$ and sparsity $\#G$, such that $G = 1 - X^kG_1$ with $G_1 \in \mathbb{K}[X]$ of degree $m-k$. Then for all $t \geq 1$, $1/G \bmod X^{kt}$ has at most $\binom{\#G+t-2}{t-1} \leq \frac{(\#G+t-2)^{t-1}}{(t-1)!}$ nonzero monomials.

Proof. Since $G(0) = 1$, $G$ is invertible in the ring of power series. Let $\varphi \in \mathbb{K}[[X]]$ be its inverse. Then $\varphi = \sum_{i\geq 0}X^{ki}G_1^i$. As soon as $i \geq t$, $X^{ki} \bmod X^{kt} = 0$. Therefore,
$$\varphi \bmod X^{kt} = \sum_{i=0}^{t-1}X^{ki}G_1^i \bmod X^{kt}.$$
Since $G_1$ has $\#G-1$ nonzero monomials, $G_1^i$ has sparsity at most $\binom{\#G-2+i}{i}$. Therefore, the number of nonzero monomials of $\varphi \bmod X^{kt}$ is at most $\sum_{i=0}^{t-1}\binom{\#G-2+i}{i} = \binom{\#G+t-2}{t-1}$, by the hockey-stick identity.
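The lemma can be illustrated numerically. The snippet below (our code) computes the truncated inverse via the geometric series $\sum_{i<t}X^{ki}G_1^i$ and checks both the inversion property and the binomial sparsity bound as reconstructed above; on this instance the bound is attained with equality.

```python
from math import comb

def poly_mul(a, b):
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            c[ea + eb] = c.get(ea + eb, 0) + ca * cb
    return {e: v for e, v in c.items() if v}

def inverse_trunc(G1, k, t):
    """1/(1 - X^k*G1) mod X^{kt}, by summing the geometric series."""
    phi, p = {}, {0: 1}                # p runs over the powers G1^i
    for i in range(t):
        for e, c in p.items():
            if k * i + e < k * t:
                phi[k * i + e] = phi.get(k * i + e, 0) + c
        p = poly_mul(p, G1)
    return phi

G1, k, t = {0: 1, 3: 2}, 10, 4         # G = 1 - X^10*(1 + 2X^3), #G = 3
phi = inverse_trunc(G1, k, t)
G = {0: 1, 10: -1, 13: -2}
prod = {e: c for e, c in poly_mul(G, phi).items() if e < k * t}
assert prod == {0: 1}                           # G * phi = 1 mod X^{kt}
assert len(phi) <= comb(3 + t - 2, t - 1)       # binomial sparsity bound
```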
Corollary 3.5. Let $F$ and $G \in \mathbb{K}[X]$ be of respective degrees $m+n-1$ and $m$, and respective sparsities $\#F$ and $\#G$. If $G = X^m - G_1$ with $G_1 \in \mathbb{K}[X]$ of degree $m-k$, then the quotient $Q = F \,\mathrm{quo}\, G$ has at most $\#F\binom{\#G+\lceil n/k\rceil-2}{\lceil n/k\rceil-1}$ nonzero monomials.

Proof. Let $F = GQ + R$ with $\deg(R) < m$. It is classical that the reciprocal $Q^\star$ of $Q$ equals $F^\star/G^\star \bmod X^n$ [10]. We can apply Lemma 3.4 to $G^\star$, since $G^\star = 1 - X^kG_1^\star$. Hence, $1/G^\star \bmod X^n$ has at most $\binom{\#G+\lceil n/k\rceil-2}{\lceil n/k\rceil-1}$ nonzero monomials, using $t = \lceil n/k\rceil$ and noting that $n \leq kt$. This implies that the sparsity of $Q^\star$, that is the sparsity of $Q$, is at most $\#F\binom{\#G+\lceil n/k\rceil-2}{\lceil n/k\rceil-1}$.

Corollary 3.6. Let $F, G \in \mathbb{K}[X]$ be of respective degrees $m+n-1$ and $m$. If $G = 1 - X^kG_1$ and $G$ divides $F$, then the quotient $Q = F \,\mathrm{quo}\, G$ has at most $\#F\binom{\#G+\lceil n/k\rceil-2}{\lceil n/k\rceil-1}$ nonzero monomials.

Proof. We apply Corollary 3.5 to $F^\star$ and $G^\star$. Indeed, $G^\star = X^m - G_0$ for some $G_0$ of degree $m-k$, and since $G$ divides $F$, we have $F \,\mathrm{quo}\, G = (F^\star \,\mathrm{quo}\, G^\star)^\star$.

The next example shows that the bound does not hold anymore if $G$ does not divide $F$.

Example. Let $F = X^{m+n-1} - 1$ and $G = X^m - X^{m-1} + 1$. Then $F \,\mathrm{quo}\, G = \sum_{i=0}^{n-1}X^i$ is as dense as possible.

If $F = GQ + R$ with some nonzero $R$, then obviously $F - R = GQ$, that is, $G$ divides $F - R$. This implies that if $R$ has few nonzero monomials, then so does $Q$, since $F - R$ is a sparse polynomial. Conversely, if $Q$ has few nonzero monomials, so does $R = F - GQ$. As a result, we observe that the sparsities of the quotient and the remainder in the Euclidean division of $F$ by $G = 1 - X^kG_1$ are polynomially related.
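The example is easy to check programmatically; the snippet below (our code) performs the sparse Euclidean division and confirms that the quotient is dense while the division is not exact.

```python
def quo_rem(f, g):
    """Sparse Euclidean division by a monic divisor (dicts {exp: coeff})."""
    f = dict(f); q = {}
    dg = max(g)
    while f and max(f) >= dg:
        df = max(f); c = f[df]           # g is monic, no division needed
        q[df - dg] = c
        for e, v in g.items():
            ne = df - dg + e
            nv = f.get(ne, 0) - c * v
            if nv:
                f[ne] = nv
            else:
                f.pop(ne, None)
    return q, f

m, n = 40, 12
F = {m + n - 1: 1, 0: -1}                # X^{m+n-1} - 1
G = {m: 1, m - 1: -1, 0: 1}              # X^m - X^{m-1} + 1
Q, R = quo_rem(F, G)
assert Q == {i: 1 for i in range(n)}     # the quotient is dense: n terms
assert R                                 # and the division is not exact
```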
Let $F, G \in \mathbb{K}[X]$ be of respective degrees $m+n-1$ and $m$, with $n = O(m)$. The results of the previous section show that if $G = X^m - G_1$ with $\deg(G_1) \leq m-k$ for some $k = \Omega(n)$, the sparsity of the quotient $F \,\mathrm{quo}\, G$ is polynomially bounded in the input size. If $G = 1 - X^kG_1$, the same holds for the quotient $F^\star \,\mathrm{quo}\, G^\star$. In both cases, this implies that one can check whether $G$ divides $F$ by a mere application of the Euclidean division algorithm. Our aim is to extend this approach to a larger family of divisors $G$, through a generalization of Lemma 3.4. It is based on the following lemma.

Lemma 3.7. Let $F$, $G$ and $C \in \mathbb{K}[X]$, with $C \neq 0$. Then $G$ divides $F$ if and only if $G$ divides $FC$ and $C$ divides $FC/G$.

Proof. If $G$ divides $F$, then $G$ clearly divides $FC$. Writing $F = GQ$, it is also clear that $C$ divides $FC/G = QC$. Conversely, if $G$ divides $FC$ and $C$ divides $FC/G$, we can write $FC/G = CQ$. Hence $F = GQ$ and $G$ divides $F$.

The generalization of Lemma 3.4 is given by the following lemma.

Lemma 3.8. Let $G \in \mathbb{K}[X]$ be of degree $m$ and sparsity $\#G$, such that $G = G_0 - X^kG_1$ with $\deg(G_0) < k$ and $G_0(0) \neq 0$. Then for all $t \geq 1$, $G_0^t/G \bmod X^{tk}$ has at most $\binom{\#G+t-2}{t-1} \leq \frac{(\#G+t-2)^{t-1}}{(t-1)!}$ nonzero monomials.

Proof. Since $G_0(0) \neq 0$, $G$ is invertible in the ring of power series. Let $\varphi = G_0/G \in \mathbb{K}[[X]]$. Then $\varphi = 1/(G/G_0) = 1/(1 - X^kG_1G_0^{-1}) = \sum_{i\geq 0}X^{ki}G_1^iG_0^{-i}$. Therefore, for all $t$, $G_0^t/G = \sum_{i\geq 0}X^{ki}G_1^iG_0^{t-1-i}$. If $i \geq t$, $X^{ki}$ has degree $\geq kt$, and
$$G_0^t/G \bmod X^{kt} = \sum_{i=0}^{t-1}X^{ki}G_1^iG_0^{t-1-i} \bmod X^{kt}.$$
The sparsities of $G_0^{t-1-i}$ and $G_1^i$ are at most $\binom{\#G_0+t-2-i}{t-1-i}$ and $\binom{\#G_1+i-1}{i}$ respectively. Since $\#G_0 + \#G_1 = \#G$, the number of terms in $G_0^t/G \bmod X^{kt}$ is at most $\sum_{i=0}^{t-1}\binom{\#G_0+t-2-i}{t-1-i}\binom{\#G_1+i-1}{i} = \binom{\#G+t-2}{t-1}$ by the Chu–Vandermonde identity [13].

Theorem 3.9.
Let $F$ and $G \in \mathbb{K}[X]$ be two sparse polynomials, of degrees $m+n-1$ and $m$ respectively, and of sparsity at most $T$. One can check whether $G$ divides $F$ in polynomial time if $G = G_0 - X^kG_1$, where $k - \deg(G_0) = \Omega(n)$ and either $\deg(G_0)$ or $\deg(G_1)$ is bounded by a polynomial function of the input size.

Proof. We first note that we can remove any power of $X$ that divides $F$ or $G$. If $X^a$ divides $F$ and $X^b$ divides $G$, then $G$ divides $F$ if and only if $b \leq a$ and $G/X^b$ divides $F/X^a$. Therefore, we assume from now on that $G(0)$ and $F(0)$ are nonzero. This implies in particular that $G$ and $G_0$ are both invertible in the ring of power series over $\mathbb{K}$. We treat the case $\deg(G_0) = (T\log(m+n))^{O(1)}$; the second case is directly obtained by taking reciprocals.

By Lemma 3.7, for any integer $t \geq 1$, $G$ divides $F$ if and only if $G$ divides $G_0^tF$ and $G_0^t$ divides $FG_0^t/G$. Our algorithm checks these conditions for some $t$ such that $k - \ell \geq n/t$, where $\ell = \deg(G_0)$. By Lemma 3.8, $G_0^t/G \bmod X^{kt}$ has at most $\binom{T+t-2}{t-1}$ nonzero terms, whence $FG_0^t/G \bmod X^{kt}$ has at most $T\binom{T+t-2}{t-1}$. Note that $t = O(1)$ since $k - \ell = \Omega(n)$, and that $kt \geq n + \ell t$. Since $FG_0^t/G \bmod X^{n+\ell t} = ((FG_0^t)^\star \,\mathrm{quo}\, G^\star)^\star$ when the division is exact, the sparsity of $(FG_0^t)^\star \,\mathrm{quo}\, G^\star$ is at most $T^{O(1)}$. One can compute this quotient and check whether the remainder vanishes, to test in polynomial time whether $G$ divides $FG_0^t$. If the test fails, $G$ does not divide $F$. Otherwise, we have computed a polynomial $Q_1$ such that $FG_0^t = Q_1G$. It remains to check whether $G_0^t$ divides $Q_1$. Proposition 3.1 provides a polynomial-time algorithm for this, since $\deg(G_0^t)$ is polynomially bounded.

The previous proof shows that our technique extends to more general divisors. In particular, we only need to have a polynomial bound on the sparsity of $Q_1$ and to be able to test whether $G_0^t$ divides $Q_1$ in polynomial time.
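The proof translates into the following sketch over $\mathbb{K} = \mathbb{Q}$. This is our own code, not the paper's algorithm: the divisor is given with its decomposition $G = G_0 - X^kG_1$, the step cap is a crude stand-in for the sparsity bounds of the text, the inputs are assumed to satisfy $F(0), G(0) \neq 0$, and the final test reuses the small-divisor method of Proposition 3.1.

```python
from fractions import Fraction
from math import ceil

def poly_mul(a, b):
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            c[ea + eb] = c.get(ea + eb, 0) + ca * cb
    return {e: v for e, v in c.items() if v}

def poly_pow(a, t):
    r = {0: 1}
    for _ in range(t):
        r = poly_mul(r, a)
    return r

def reciprocal(f):
    d = max(f)
    return {d - e: c for e, c in f.items()}

def quo_rem(f, g, cap):
    """Sparse Euclidean division, aborting after cap quotient terms."""
    f = {e: Fraction(c) for e, c in f.items()}
    q = {}
    dg, lg = max(g), Fraction(g[max(g)])
    while f and max(f) >= dg:
        if len(q) > cap:
            raise RuntimeError("quotient is not sparse")
        df = max(f); c = f[df] / lg
        q[df - dg] = c
        for e, v in g.items():
            ne = df - dg + e
            nv = f.get(ne, 0) - c * v
            if nv:
                f[ne] = nv
            else:
                f.pop(ne, None)
    return q, f

def poly_mod(a, g):                      # for small deg(g) only
    a = {e: Fraction(c) for e, c in a.items()}
    dg, lg = max(g), Fraction(g[max(g)])
    while a and max(a) >= dg:
        da = max(a); c = a[da] / lg
        for e, v in g.items():
            ne = da - dg + e
            nv = a.get(ne, 0) - c * v
            if nv:
                a[ne] = nv
            else:
                a.pop(ne, None)
    return a

def xpow_mod(e, h):                      # X^e mod h, binary powering
    r, b = {0: Fraction(1)}, {1: Fraction(1)}
    while e:
        if e & 1:
            r = poly_mod(poly_mul(r, b), h)
        b = poly_mod(poly_mul(b, b), h)
        e >>= 1
    return r

def small_deg_divides(h, f):             # Proposition 3.1, small deg(h)
    r = {}
    for e, c in f.items():
        for me, mc in xpow_mod(e, h).items():
            r[me] = r.get(me, 0) + c * mc
    return all(v == 0 for v in r.values())

def divides(F, G, G0, k):
    """Test G | F for G = G0 - X^k*G1 with deg(G0) < k and deg(G0) small."""
    n = max(F) - max(G) + 1
    t = ceil(n / (k - max(G0)))
    P = poly_mul(F, poly_pow(G0, t))     # G | F iff G | P and G0^t | P/G
    _, r = quo_rem(reciprocal(P), reciprocal(G), cap=10**4)
    if r:
        return False                     # G does not divide P, hence not F
    Q1, _ = quo_rem(P, G, cap=10**4)     # exact by the test above
    return small_deg_divides(poly_pow(G0, t), Q1)
```

For instance, with $G = 1 - X + X^5 - X^6 = (1-X)(1+X^5)$, so $G_0 = 1-X$ and $k = 5$, the polynomial $F = 1 + X^5 - X^9 - X^{14}$ is divisible by $G$ (the quotient is the dense $\sum_{i=0}^{8}X^i$), while changing its constant term breaks divisibility.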
The second step can be a recursive call, if $G_0^t$ satisfies the conditions of the theorem. We identify one such situation that further generalizes the theorem.

Corollary 3.10.
Let $F$ and $G \in \mathbb{K}[X]$ be two sparse polynomials, of degrees $m+n-1$ and $m$ respectively, and of sparsity at most $T$. One can check whether $G$ divides $F$ in polynomial time if $G = G_0 + X^kG_1 - X^\ell G_2$ with $G_0, G_1, G_2 \in \mathbb{K}[X]$ such that $k - \deg(G_0)$ and $\ell - k - \deg(G_1)$ are both $\Omega(n)$, and $\deg(G_1) = (T\log(m+n))^{O(1)}$.

Proof. We assume that $T\log n = n^{o(1)}$; otherwise, one can use a dense divisibility test. Using Lemma 3.8, $F(G_0 + X^kG_1)^t/G \bmod X^{\ell t}$ has at most $T^{O(1)}$ nonzero monomials. Therefore, as previously, we can compute the quotient $(F(G_0+X^kG_1)^t)^\star \,\mathrm{quo}\, G^\star$ for $t = \lceil n/(\ell - k - \deg(G_1))\rceil$, in polynomial time. If the remainder is nonzero, $G$ does not divide $F$. Otherwise, we have computed a polynomial $Q_1$ such that $F(G_0+X^kG_1)^t = Q_1G$, and it remains to test whether $H = (G_0+X^kG_1)^t$ divides $Q_1$. We show that the polynomial $H$ satisfies the conditions of Theorem 3.9. Let us write
$$H = \sum_{i=0}^{t}\binom{t}{i}X^{ki}G_1^iG_0^{t-i} = X^{kt}G_1^t + \sum_{i=0}^{t-1}\binom{t}{i}X^{ki}G_1^iG_0^{t-i} = X^{kt}G_1^t + H_0,$$
where $H_0$ has degree at most $k(t-1) + \deg(G_1)(t-1) + \deg(G_0)$. Then $kt - \deg(H_0) \geq k - \deg(G_0) - (t-1)\deg(G_1) = \Omega(n)$, since $k - \deg(G_0) = \Omega(n)$ and $\deg(G_1) = (T\log n)^{O(1)} = n^{o(1)}$. One can thus test whether $H$ divides $Q_1$ in polynomial time using Theorem 3.9.

Theorem 3.9 and Corollary 3.10 allow to handle cases where the quotient of the polynomials and the quotient of their reciprocals are both dense, as shown in the following example.

Example. Let $F = 1 + X^n - X^{2n-1} - X^{3n-1}$ and $G = G_0 + X^nG_1$ where $G_0 = G_1 = 1 - X$. Then $F \,\mathrm{quo}\, G = F^\star \,\mathrm{quo}\, G^\star = \sum_{i=0}^{2n-2}X^i$ is as dense as possible.

References

[1] A. Arnold, M. Giesbrecht, and D. S. Roche. Faster Sparse Interpolation of Straight-Line Programs. In
CASC'13, pages 61–74, 2013.

[2] A. Arnold, M. Giesbrecht, and D. S. Roche. Sparse interpolation over finite fields via low-order roots of unity. In ISSAC'14, pages 27–34, 2014.

[3] A. Arnold, M. Giesbrecht, and D. S. Roche. Faster sparse multivariate polynomial interpolation of straight-line programs. J. Symb. Comput., 75:4–24, 2016.

[4] A. Arnold and D. S. Roche. Multivariate sparse interpolation using randomized Kronecker substitutions. In ISSAC'14, pages 35–42, 2014.

[5] A. Arnold and D. S. Roche. Output-sensitive algorithms for sumset and sparse polynomial multiplication. In ISSAC'15, pages 29–36, 2015.

[6] M. Ben-Or and P. Tiwari. A deterministic algorithm for sparse multivariate polynomial interpolation. In STOC'88, pages 301–309, 1988.

[7] P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic Complexity Theory, volume 315 of Grundlehren der mathematischen Wissenschaften. Springer, 1997.

[8] R. Cole and R. Hariharan. Verifying candidate matches in sparse and wildcard matching. In STOC'02, pages 592–601, 2002.

[9] S. Garg and É. Schost. Interpolation of polynomials given by straight-line programs. Theor. Comput. Sci., 410(27):2659–2662, 2009.

[10] J. von zur Gathen and J. Gerhard. Modern Computer Algebra. Cambridge University Press, 3rd edition, 2013.

[11] M. Giesbrecht and D. S. Roche. Diversification improves interpolation. In ISSAC'11, pages 123–130, 2011.

[12] P. Giorgi, B. Grenet, and A. Perret du Cray. Essentially optimal sparse polynomial multiplication. In ISSAC'20, pages 202–209, 2020.

[13] R. L. Graham, D. E. Knuth, and O. Patashnik. Concrete Mathematics. Addison-Wesley Professional, 2nd edition, 1994.

[14] D. Grigoriev, M. Karpinski, and A. M. Odlyzko. Short proofs for nondivisibility of sparse polynomials under the extended Riemann hypothesis. Fund. Inform., 28(3-4):297–301, 1996.

[15] J. van der Hoeven and G. Lecerf. On the complexity of multivariate blockwise polynomial multiplication. In ISSAC'12, pages 211–218, 2012.

[16] J. van der Hoeven and G. Lecerf. On the bit-complexity of sparse polynomial and series multiplication. J. Symb. Comput., 50:227–254, 2013.

[17] J. van der Hoeven and G. Lecerf. Sparse polynomial interpolation in practice. ACM Commun. Comput. Algebra, 48(3/4), 2014.

[18] Q. Huang. Sparse polynomial interpolation over fields with large or zero characteristic. In ISSAC'19, pages 219–226, 2019.

[19] Q.-L. Huang and X.-S. Gao. Faster interpolation algorithms for sparse multivariate polynomials given by straight-line programs. J. Symb. Comput., 101:367–386, 2020.

[20] S. C. Johnson. Sparse polynomial arithmetic. ACM SIGSAM Bulletin, 8(3):63–71, 1974.

[21] E. Kaltofen and W.-s. Lee. Early termination in sparse interpolation algorithms. J. Symb. Comput., 36(3):365–400, 2003.

[22] M. Monagan and R. Pearce. Polynomial division using dynamic arrays, heaps, and packed exponent vectors. In CASC'07, pages 295–315, 2007.

[23] M. Monagan and R. Pearce. Parallel sparse polynomial multiplication using heaps. In ISSAC'09, pages 263–270, 2009.

[24] M. Monagan and R. Pearce. Sparse polynomial division using a heap. J. Symb. Comput., 46(7):807–822, 2011.

[25] V. Nakos. Nearly optimal sparse polynomial multiplication. IEEE Trans. Inform. Theory, 66(11):7231–7236, 2020.

[26] D. A. Plaisted. New NP-hard and NP-complete polynomial and integer divisibility problems. Theor. Comput. Sci., 31(1):125–138, 1984.

[27] D. S. Roche. Chunky and equal-spaced polynomial multiplication. J. Symb. Comput., 46(7):791–806, 2011.

[28] D. S. Roche. What can (and can't) we do with sparse polynomials? In ISSAC'18, pages 25–30, 2018.

[29] T. Yan. The geobucket data structure for polynomials. J. Symb. Comput., 25(3):285–293, 1998.