Homotopy techniques for solving sparse column support determinantal polynomial systems
George Labahn∗, Mohab Safey El Din†, Éric Schost∗, Thi Xuan Vu†∗

Abstract
Let $K$ be a field of characteristic zero with $\overline{K}$ its algebraic closure. Given a sequence of polynomials $g = (g_1, \dots, g_s) \in K[x_1, \dots, x_n]^s$ and a polynomial matrix $F = [f_{i,j}] \in K[x_1, \dots, x_n]^{p \times q}$, with $p \le q$, we are interested in determining the isolated points of $V_p(F, g)$, the algebraic set of points in $\overline{K}^n$ at which all polynomials in $g$ and all $p$-minors of $F$ vanish, under the assumption $n = q - p + s + 1$. Such polynomial systems arise in a variety of applications including, for example, polynomial optimization and computational geometry.

We design a randomized sparse homotopy algorithm for computing the isolated points in $V_p(F, g)$ which takes advantage of the determinantal structure of the system defining $V_p(F, g)$. Its complexity is polynomial in the maximum number of isolated solutions to such systems sharing the same sparsity pattern, and in some combinatorial quantities attached to the structure of such systems. It is the first algorithm which takes advantage of both the determinantal structure and the sparsity of the input polynomials. We also derive complexity bounds for the particular but important case where $g$ and the columns of $F$ satisfy weighted degree constraints. Such systems arise naturally in the computation of critical points of maps restricted to algebraic sets when both are invariant under the action of the symmetric group.

Introduction

Let $g = (g_1, \dots, g_s)$ be a sequence of polynomials in $K[x_1, \dots, x_n]^s$, and let $F = [f_{i,j}]$ be a polynomial matrix in $K[x_1, \dots, x_n]^{p \times q}$, where $K$ is a field of characteristic zero with algebraic closure $\overline{K}$. Assuming $p \le q$, we are interested in describing the set

$$V_p(F, g) = \{ x \in \overline{K}^n \mid \operatorname{rank}(F(x)) < p \ \text{and} \ g_1(x) = \cdots = g_s(x) = 0 \}. \quad (1)$$

∗ David R. Cheriton School of Computer Science, University of Waterloo, Waterloo ON, Canada N2L 3G1, emails: {glabahn, eschost, txvu}@uwaterloo.ca
† Sorbonne Université, CNRS, Laboratoire d'Informatique de Paris 6 (LIP6, UMR 7606), Équipe POLSYS, 4 place Jussieu, F-75252, Paris Cedex 05, France, email: [email protected]

If for any positive integer $r$ we let $M_r(F)$ be the set of all $r$-minors of $F$, then our set of points is given by $V(\langle M_p(F) \rangle + \langle g_1, \dots, g_s \rangle)$. As an example, when $F$ denotes the Jacobian of $(g_1, \dots, g_s, \phi)$ with respect to the variables $x_1, \dots, x_n$, for some $\phi \in K[x_1, \dots, x_n]$, then $V_{s+1}(F, g)$ is the set of critical points of $\phi$ over the algebraic set $V(g)$, assuming $g$ is a reduced regular sequence and $V(g)$ is smooth. The problem of computing such points appears in many areas such as polynomial optimization and real algebraic geometry. Note that in this example we have $n = q - p + s + 1$ (since $F$ has dimensions $p = s + 1$ and $q = n$); we will assume that this holds throughout this paper.

We wish to describe the isolated zeros of our algebraic set $V_p(F, g)$ when all entries of $F$ and $g$ are sparse polynomials. We also want to take advantage of the special determinantal structure of our algebraic set to obtain complexity results which are polynomial in the generic number of solutions in $\overline{K}^n$ of such systems (this is the number of solutions obtained when the coefficients of the terms appearing in the entries of $F$, $g$ are algebraically independent indeterminates) and in some combinatorial data attached to the monomial structure of the entries.

In order to achieve this, we make use of the technique of symbolic homotopy continuation and show how it can be used to obtain a solver with such a good complexity. Homotopy continuation has become a foundational tool for numerical algorithms, while the use of symbolic homotopy continuation algorithms is more recent.
Such algorithms first appeared in [7, 19], without any structure on the system. Later, symbolic homotopies were used for square sparse systems [25, 20, 21, 22] and multi-homogeneous systems [30, 18, 17].

Homotopy continuation involves defining a deformation between our system defining $V_p(F, g)$ and a second system defining $V_p(M, r)$ which is similar but whose solutions are easy to describe. Formally, we let $t$ be a new variable and construct a matrix

$$V = (1 - t) \cdot M + t \cdot F \in K[t, x_1, \dots, x_n]^{p \times q} \quad (2)$$

which connects a start matrix $M \in K[x_1, \dots, x_n]^{p \times q}$ to our target matrix $F$, together with polynomials $u = (u_1, \dots, u_s)$ of the form

$$u = (1 - t) \cdot r + t \cdot g \in K[t, x_1, \dots, x_n]^s, \quad (3)$$

which connect a starting polynomial system $r$ to our target system $g$. Such a homotopy allows us to define a homotopy curve, steering the solutions of the start system to the isolated solutions of our input system (we do not assume that our input system has finitely many solutions).

We will use a data structure known as a zero-dimensional parametrization to represent finite algebraic sets. If $V$ is such a set, defined by polynomials over $K$, a zero-dimensional parametrization $R = ((w, v_1, \dots, v_n), \Lambda)$ of $V$ consists of

(i) a square-free polynomial $w$ in $K[y]$, where $y$ is a new indeterminate,

(ii) polynomials $(v_1, \dots, v_n)$ in $K[y]$ with each $\deg(v_i) < \deg(w)$ and satisfying
$$V = \{ (v_1(\tau), \dots, v_n(\tau)) \in \overline{K}^n \mid w(\tau) = 0 \},$$

(iii) a linear form $\Lambda = \lambda_1 x_1 + \cdots + \lambda_n x_n$ with coefficients in $K$, such that $\lambda_1 v_1 + \cdots + \lambda_n v_n = y$ (so the roots of $w$ are the values taken by $\Lambda$ on $V$).

When this holds, we write $V = Z(R)$.
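As a concrete (and entirely hypothetical) toy instance of this data structure, the following pure-Python check parametrizes the finite set $V = \{(1, 2), (3, 4)\}$ with the linear form $\Lambda = x_1$; the equations `f1`, `f2` are made up only for illustration.

```python
# Toy zero-dimensional parametrization R = ((w, v1, v2), Lambda), Lambda = x1,
# for the finite set V = {(1, 2), (3, 4)} (illustrative data only).
def w(y):  return (y - 1)*(y - 3)   # square-free; roots = values of Lambda on V
def v1(y): return y                  # deg(v_i) < deg(w) = 2
def v2(y): return y + 1

# Hypothetical equations cutting out V
def f1(x1, x2): return x2 - x1 - 1
def f2(x1, x2): return (x1 - 1)*(x1 - 3)

points = [(v1(tau), v2(tau)) for tau in (1, 3)]   # tau runs over the roots of w
assert all(f1(a, b) == 0 and f2(a, b) == 0 for (a, b) in points)
# Lambda recovers the parameter: 1*v1 + 0*v2 = y, as required in item (iii)
print(points)  # [(1, 2), (3, 4)]
```

The point of the structure is that a single univariate polynomial $w$ plus the $v_i$'s encode the whole finite set.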
This representation was introduced in early work of Kronecker and Macaulay [26, 27] and has been widely used as a data structure in computer algebra; see for instance [13, 1, 14, 15, 28, 16].

Then, given a zero-dimensional parametrization $R$ of $V_p(M, r)$, we will apply the algorithm in [17] to the system $(M_p(V), u)$ to lift $R$ to a zero-dimensional parametrization of the isolated zeros of $V_p(F, g)$. At a high level, the strategy for using homotopy methods to determine isolated zeros is relatively simple to describe, but also difficult to realize. The start system should have at least the same number of solutions as the target system and should be "easy" to solve. Also, we want a sparse homotopy algorithm; that is, we also wish to have a complexity which depends on the support of the polynomials appearing in our target system.

The main contribution of this paper is to provide the needed ingredients for a sparse homotopy algorithm for our determinantal systems which makes use of the column support of $F$. We determine a family of possible start systems, and we show that a generic member of this family allows us to carry out the procedure successfully; we also show how to compute the solutions of this start system. Our runtime is polynomial in the degree of the start system and the degree of the homotopy curve, both depending on certain mixed volumes related to the polynomials $g$ and the columns of $F$; see Theorem 5.1. As far as we are aware, this is the first homotopy algorithm which simultaneously exploits both determinantal structure and sparsity.

The tools used to create our sparse column support homotopy also allow us to build a column homotopy algorithm for determinantal systems of weighted degree polynomials. These are important when all our input polynomials (including those in the input matrix) are invariant under the action of the group of permutations on $n$ letters.
In that case, one can perform an algebraic change of coordinates to express all entries with respect to the elementary symmetric functions, which are naturally weighted (the $k$-th elementary symmetric function then has weighted degree $k$). We show that one obtains a speed-up which is polynomial in the product of the weights; see Theorem 5.3.

This is not the first time that determinantal structures have been exploited to speed up polynomial system solvers. Previous work includes, for example, [17], which is also based on homotopy techniques: we borrow some results and techniques from that reference, but our discussion of the "sparse" aspects is new. Note also that one can encode rank deficiencies in a polynomial matrix using extra variables (sometimes called Lagrange multipliers in the context of polynomial optimization) to encode that the kernel of the considered matrix is non-trivial. This leads to Lagrange systems with a sparse structure, which could be solved using homotopy techniques from [25, 20, 21, 22]. However, this technique does not work when isolated solutions to our determinantal system lead to rank deficiencies higher than one: such isolated points of our determinantal system do not correspond to isolated points of the Lagrange system. Still, we will see that such systems play an important role in proving the intermediate results needed to achieve our results.

The use of geometric resolution algorithms is investigated in the series of works [2, 3, 4, 31] (and references therein). In this latter setting, relating the complexity parameters (which are mainly geometric degrees of some algebraic sets defined by the input) to the sparsity of these inputs is still a non-trivial problem. Determinantal systems in the context of Gröbner bases are also considered in [11, 12, 32]. Again, this series of works does not take into account the sparsity of the entries.

The structure of the paper is as follows.
Section 2 gives some of the preliminary background on sparse polynomials; it is followed by Section 3, which introduces the template of a homotopy algorithm and states properties that guarantee it succeeds; at this stage, we do not specify how to choose the start system. In Section 4, we introduce a family of start systems and prove that a generic member of this family satisfies the properties needed for our symbolic homotopy algorithm. The cost of our algorithm is analyzed in Section 5, first in the general case of sparse polynomials, then in the important case of weighted domains. An example illustrating the steps of our homotopy algorithm is given in Section 6. The paper ends with a conclusion and topics for future research.

Sparse polynomials.
Consider a set $x = (x_1, \dots, x_n)$ of indeterminates. Polynomials in $x$ are represented in the form of finite sums $f = \sum_{\alpha = (\alpha_1, \dots, \alpha_n) \in A} c_\alpha x_1^{\alpha_1} \cdots x_n^{\alpha_n}$, with $A$ being a finite subset of $\mathbb{N}^n$; the set $\{\alpha \in \mathbb{N}^n : c_\alpha \ne 0\} \subset A$ is the support $\operatorname{supp}(f)$ of $f$. The Newton polytope of $f$, denoted by $\operatorname{conv}(f)$, is the convex hull of the support of $f$ in $\mathbb{R}^n$.

We will often work in the following setup. Consider $\ell$ finite sets $A_1, \dots, A_\ell$ in $\mathbb{N}^n$, with $k_i$ denoting the cardinality of $A_i$ for all $i$. For each $i$, we let $M_i = (m_{i,1}, \dots, m_{i,k_i})$ be the corresponding set of monomials in $x_1, \dots, x_n$. This allows us to define the "generic polynomials" $f_1, \dots, f_\ell$ supported on $A_1, \dots, A_\ell$ by

$$f_i = \sum_{j=1}^{k_i} c_{i,j}\, m_{i,j} \in K[C][x_1, \dots, x_n],$$

where $C = (c_{i,j})_{1 \le i \le \ell,\, 1 \le j \le k_i}$ are new indeterminates. The total number of indeterminates $C$ is $N = \sum_{i=1}^{\ell} k_i$.

Identifying $K^N$ with $K^{k_1} \times \cdots \times K^{k_\ell}$, we can view any element $\rho \in K^N$ as a vector of coefficients, first for $f_1$, then for $f_2$, etc. Then, for such a $\rho$, we will denote by $\Theta_\rho$ the mapping

$$\Theta_\rho : K[C][x_1, \dots, x_n] \to K[x_1, \dots, x_n], \qquad \sum_{j=1}^{k_i} c_{i,j}\, m_{i,j} \mapsto \sum_{j=1}^{k_i} \rho_{i,j}\, m_{i,j},$$

which evaluates each indeterminate coefficient $c_{i,j}$ at the corresponding entry of $\rho$; the notation carries over to sequences, with $\Theta_\rho(f_1, \dots, f_\ell) = (\Theta_\rho(f_1), \dots, \Theta_\rho(f_\ell))$.

For the first proposition, $\ell$ is arbitrary, but we impose a restriction on the sets $A_i$.

Proposition 2.1.
Suppose that for $i = 1, \dots, \ell$, $A_i$ contains the origin $0 \in \mathbb{N}^n$. Then there exists a non-empty Zariski open set $O \subset K^N$ such that for $\rho \in O$, we have the following:

(i) if $\ell \le n$, $\Theta_\rho(f_1, \dots, f_\ell)$ generates a radical ideal, whose zero-set in $\overline{K}^n$ is either empty or smooth and $(n - \ell)$-equidimensional;

(ii) if $\ell > n$, the zero-set of $\Theta_\rho(f_1, \dots, f_\ell)$ in $\overline{K}^n$ is empty.

Proof. Without loss of generality, assume that $m_{i,k_i} = 1$ holds, since we assume that $A_i$ contains the origin $0 \in \mathbb{N}^n$ for all $1 \le i \le \ell$. Consider the mapping

$$\Phi : (x, \rho) \in \overline{K}^n \times \overline{K}^N \mapsto \Theta_\rho(f_1, \dots, f_\ell)(x).$$

We first claim that $0$ is a regular value of $\Phi$, that is, the Jacobian matrix of this sequence of polynomials has full rank at all points $(x, \rho)$ of its zero-set. Indeed, since $m_{i,k_i} = 1$, the columns corresponding to partial derivatives with respect to $C$ contain an $\ell \times \ell$ identity matrix.

As a result, by Thom's weak transversality theorem (see the algebraic version in e.g. [29]), there exists a non-empty Zariski open set $O \subset K^N$ such that for $\rho$ in $O$, $0$ is a regular value of the induced mapping $\Phi_\rho : x \in \overline{K}^n \mapsto \Theta_\rho(f_1, \dots, f_\ell)(x)$. In other words, the Jacobian matrix of $\Theta_\rho(f_1, \dots, f_\ell)$ has rank $\ell$ at any zero $x \in \overline{K}^n$ of $\Theta_\rho(f_1, \dots, f_\ell)$. For $\ell \le n$, by the Jacobian criterion [9, Theorem 16.19], the ideal $\langle \Theta_\rho(f_1, \dots, f_\ell) \rangle$ is therefore radical, and its zero-set is either empty or smooth and $(n - \ell)$-equidimensional. For $\ell > n$, this means that this set is empty (since the matrix above has $n$ columns, it cannot have rank $\ell$).

For the next properties, we take $\ell = n$. In what follows, $C_1, \dots, C_n$ are the convex hulls of $A_1, \dots, A_n$, respectively, with the Euclidean volume of $C_i$ in $\mathbb{R}^n$ denoted by $\operatorname{vol}_{\mathbb{R}^n}(C_i)$. Consider the function

$$\varphi : (\lambda_1, \dots, \lambda_n) \mapsto \operatorname{vol}_{\mathbb{R}^n}(\lambda_1 C_1 + \cdots + \lambda_n C_n),$$

where

$$\lambda_1 C_1 + \cdots + \lambda_n C_n = \Big\{ x \in \mathbb{R}^n : x = \sum_{i=1}^{n} \lambda_i x_i \ \text{with} \ x_i \in C_i \Big\}$$

is the Minkowski sum of polytopes. The function $\varphi$ is a homogeneous polynomial function of degree $n$ in the $\lambda_i$ (see e.g. [8, Proposition 4.9]). The mixed volume $\operatorname{MV}(C_1, \dots, C_n)$ is then defined as the coefficient of the monomial $\lambda_1 \cdots \lambda_n$ in $\varphi$. Then, the Bernstein–Khovanskii–Kushnirenko (BKK) theorem [6] gives a bound on the number of isolated zeros of $\Theta_\rho(f_1, \dots, f_n)$ in the torus in terms of this quantity (note that here, we do not assume that the supports $A_i$ contain the origin).

Proposition 2.2. For any $\rho$ in $K^N$, the number of isolated zeros of $\Theta_\rho(f_1, \dots, f_n)$ in $(\overline{K} - \{0\})^n$ is at most $\operatorname{MV}(C_1, \dots, C_n)$. Furthermore, there exists a non-empty Zariski open set $O_{\mathrm{BKK}} \subset K^N$ such that the bound is tight for $\rho$ in $O_{\mathrm{BKK}}$.

A first application of Proposition 2.1 is the following refinement of this statement (which of course requires the assumptions of Proposition 2.1 to hold). Again, we take $\ell = n$.

Proposition 2.3.
Suppose that for $i = 1, \dots, n$, $A_i$ contains the origin $0 \in \mathbb{N}^n$. Then there exists a non-empty Zariski open set $O'_{\mathrm{BKK}} \subset K^N$ such that for $\rho$ in $O'_{\mathrm{BKK}}$, $\Theta_\rho(f_1, \dots, f_n)$ has $\operatorname{MV}(C_1, \dots, C_n)$ solutions in $\overline{K}^n$.

Proof. Consider a subset $i = \{i_1, \dots, i_m\}$ of $\{1, \dots, n\}$, with $1 \le m \le n$, and let $(f_{i,1}, \dots, f_{i,n})$ be the polynomials $(f_1, \dots, f_n)$ where the coordinates $x_{i_1}, \dots, x_{i_m}$ have been set to zero; they depend on a certain number $N_i \le N$ of indeterminate coefficients $\rho_i$.

This is thus a system of $n$ equations in $n - m < n$ unknowns, and the support of each of these equations still contains the origin. Proposition 2.1 then implies that there exists a non-empty Zariski open $\omega_i \subset K^{N_i}$ such that for $\rho_i$ in $\omega_i$, $\Theta_{\rho_i}(f_{i,1}, \dots, f_{i,n})$ has no solution in $\overline{K}^{n-m}$. Let then $\Omega_i$ be the preimage of $\omega_i$ in $K^N$ (under the canonical projection), and define $\Omega$ as the intersection of all $\Omega_i$, for $i = \{i_1, \dots, i_m\}$ a subset of $\{1, \dots, n\}$. For $\rho$ in $\Omega$, all coordinates of all solutions of $\Theta_\rho(f_1, \dots, f_n)$ are non-zero. To conclude, we define $O'_{\mathrm{BKK}}$ as the intersection of $O_{\mathrm{BKK}}$ (from Proposition 2.2) and $\Omega$.
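The mixed volume can be made concrete in two variables, where expanding $\operatorname{vol}(\lambda_1 C_1 + \lambda_2 C_2) = \lambda_1^2 \operatorname{vol}(C_1) + \lambda_1 \lambda_2 \operatorname{MV}(C_1, C_2) + \lambda_2^2 \operatorname{vol}(C_2)$ and setting $\lambda_1 = \lambda_2 = 1$ gives $\operatorname{MV}(C_1, C_2) = \operatorname{vol}(C_1 + C_2) - \operatorname{vol}(C_1) - \operatorname{vol}(C_2)$. The sketch below (toy supports, pure Python) checks the BKK counts for generic bilinear and affine systems:

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    """Convex hull in the plane (Andrew's monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(pts[::-1])
    return lower[:-1] + upper[:-1]

def area2(poly):
    """Twice the Euclidean area, via the shoelace formula."""
    return abs(sum(x1*y2 - x2*y1
                   for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

def mixed_volume_2d(A1, A2):
    """MV(C1, C2) = vol(C1 + C2) - vol(C1) - vol(C2) in dimension 2."""
    C1, C2 = hull(A1), hull(A2)
    S = hull([(p[0] + q[0], p[1] + q[1]) for p in C1 for q in C2])
    return (area2(S) - area2(C1) - area2(C2)) // 2

square = [(0, 0), (1, 0), (0, 1), (1, 1)]   # support of a bilinear polynomial
tri    = [(0, 0), (1, 0), (0, 1)]           # support of an affine polynomial
print(mixed_volume_2d(square, square))       # 2: generic bilinear 2x2 system
print(mixed_volume_2d(tri, tri))             # 1: generic affine 2x2 system
```

These counts match the BKK bound: a generic pair of bilinear equations in two variables has two torus solutions, a generic pair of affine equations has one.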
Initial forms.
Let $e = (e_1, \dots, e_n)$ be non-zero in $\mathbb{Q}^n$ and consider a polynomial

$$p = \sum_{\alpha = (\alpha_1, \dots, \alpha_n) \in S} c_\alpha x_1^{\alpha_1} \cdots x_n^{\alpha_n}$$

with support $S = \operatorname{supp}(p)$. The field of definition may be our field $K$ or, as will also happen below, a rational function field. Define

$$m(e, p) = \min(\langle e, \alpha \rangle \mid \alpha \in S) \quad \text{and} \quad S_{e,p} = \{ \alpha \in S \mid \langle e, \alpha \rangle = m(e, p) \},$$

where $\langle \cdot, \cdot \rangle$ is the usual dot product in $\mathbb{R}^n$. Thus, $S_{e,p}$ is the intersection of $S$ with its "support hyperplane" in the direction $e$. The initial form of $p$ with respect to $e$ is defined as

$$\operatorname{init}_e(p) = \sum_{\alpha = (\alpha_1, \dots, \alpha_n) \in S_{e,p}} c_\alpha x_1^{\alpha_1} \cdots x_n^{\alpha_n}.$$

In other words, $\operatorname{init}_e(p)$ is the sum over all terms $c_\alpha x_1^{\alpha_1} \cdots x_n^{\alpha_n}$ for which the dot product $\langle e, \alpha \rangle$ is minimized. For a vector $p = (p_1, \dots, p_n)$ of polynomials, we let

$$\operatorname{init}_e(p) = (\operatorname{init}_e(p_1), \dots, \operatorname{init}_e(p_n)).$$

Even though there is an infinite number of possible directions $e$, the number of polynomial systems $\{\operatorname{init}_e(p) \mid e \ \text{non-zero in} \ \mathbb{Q}^n\}$ obtained in this manner is finite, since the support of each $p_i$ has finitely many support hyperplanes.

Determinantal homotopy
In this section, we review a few useful properties of homotopy continuation methods for determinantal ideals. As input, we are given $g = (g_1, \dots, g_s)$ and $F$ in $K[x_1, \dots, x_n]^{p \times q}$, and we assume $n = q - p + s + 1$. Let $t$ be a new variable and construct a matrix

$$V = (1 - t) \cdot M + t \cdot F \in K[t, x_1, \dots, x_n]^{p \times q}$$

which connects a start matrix $M \in K[x_1, \dots, x_n]^{p \times q}$ to our target matrix $F$, together with polynomials $u = (u_1, \dots, u_s)$ of the form

$$u = (1 - t) \cdot r + t \cdot g \in K[t, x_1, \dots, x_n]^s,$$

which connect a starting polynomial system $r$ to our target system $g$. Then, $V$ and $u$ define a deformation which allows us to connect the solutions of the start system $V_p(M, r)$ to the isolated solutions of our system $V_p(F, g)$.

Algorithms for symbolic homotopy continuation require several ingredients. We need a start system that can be solved efficiently and has the "right" number of solutions, a description of the solutions of this start system, and a bound $\varrho$ that determines the number of steps we perform.

Proposition 3.1 below makes these requirements more precise; it is a minor modification of [17, Propositions 13 and 24]. To state it, it will be convenient to describe our homotopy process using only vectors of polynomials. To this end, we fix an ordering $\succ$ on the $p$-minors of $p \times q$ matrices and set $m = s + \binom{q}{p}$. Consider the system of equations

$$B = (u_1, \dots, u_s, b_{s+1}, \dots, b_m) \in K[t, x_1, \dots, x_n]^m,$$

where $u_1, \dots, u_s$ are as defined above, and where the polynomials $(b_{s+1}, \dots, b_m)$ are the $p$-minors of $V$, following the ordering $\succ$. For $\tau \in K$, we write $B_{t=\tau}$ for the polynomials in $K[x_1, \dots, x_n]$ obtained by the evaluation $t \mapsto \tau$ in $B$. In particular, $B_{t=0}$ is the set of equations in our start system, and $B_{t=1}$ are the equations we want to solve.

Consider the ideal $J$ generated by $B$ in $K(t)[x_1, \dots, x_n]$.
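A minimal numerical sketch of this construction, with hypothetical $2 \times 3$ matrices (so $p = 2$, $q = 3$): the $p$-minors of $V$ specialize to the minors of the start matrix at $t = 0$ and of the target matrix at $t = 1$.

```python
from itertools import combinations

# Hypothetical start and target 2x3 matrices, given as functions of a point
# x = (x1, x2); the entries are chosen only for illustration.
def F(x):
    x1, x2 = x
    return [[x1, x2, x1*x2], [1, x1, x2]]

def M(x):
    x1, x2 = x
    return [[1, x1, x2], [x2, 1, x1]]

def V(t, x):
    """Homotopy matrix V = (1 - t)*M + t*F, evaluated at (t, x)."""
    A, B = M(x), F(x)
    return [[(1 - t)*A[i][j] + t*B[i][j] for j in range(3)] for i in range(2)]

def p_minors(mat):
    """All 2-minors of a 2x3 matrix, in a fixed column ordering."""
    return [mat[0][i]*mat[1][j] - mat[0][j]*mat[1][i]
            for i, j in combinations(range(3), 2)]

x = (2, 5)
assert p_minors(V(0, x)) == p_minors(M(x))   # start system at t = 0
assert p_minors(V(1, x)) == p_minors(F(x))   # target system at t = 1
print(p_minors(V(1, x)))                      # [-1, 0, 5]
```

Together with the $u_i$'s, these minors are exactly the vector of polynomials $B$ above.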
The roots of $J$ have coordinates in an algebraic closure of $K(t)$, so we can view them in $\overline{K}\langle\langle t \rangle\rangle^n$, where $\overline{K}\langle\langle t \rangle\rangle$ is the field of Puiseux series with coefficients in $\overline{K}$. Thus, these solutions are meant to describe the local behaviour of the solutions of $B$ at $t = 0$. A vector $\alpha$ in $\overline{K}\langle\langle t \rangle\rangle^n$ admits a valuation $\nu(\alpha)$, defined as the minimum of the valuations (with respect to $t$) of its coordinates, and we say that $\alpha$ is bounded when $\nu(\alpha) \ge 0$. This will be one of the conditions we impose on the solutions of $J$.

The algorithm is in essence a form of Newton iteration with respect to $t$. One input needed for the algorithm is an upper bound $\varrho$ on the precision in $t$ at which we need to do the computations. A sufficient upper bound for $\varrho$ is the degree of the homotopy curve, which is the union of all dimension-1 irreducible components of $V(B) \subset \overline{K}^{n+1}$ whose projections on the $t$-space are Zariski dense. In effect, this is the number of isolated solutions of the system in $K[t, x_1, \dots, x_n]$ obtained by taking all equations in $B$, together with a linear form in $t, x_1, \dots, x_n$ with random coefficients.

Finally, as in [17], the following proposition assumes that we are given a straight-line program $\Gamma$ that computes the polynomials $B$, that is, a sequence of operations $+, -, \times$ that takes as input $t, x_1, \dots, x_n$ and evaluates $B$. Its length is simply the number of operations it performs.

Proposition 3.1.
Suppose that the following conditions hold:

(i) the ideal generated by $B_{t=0}$ is radical and of dimension zero in $K[x_1, \dots, x_n]$, with $\chi$ solutions;

(ii) all points in $V(B) \subset \overline{K}\langle\langle t \rangle\rangle^n$ are bounded.

Then, the ideal $J$ generated by $B$ in $K(t)[x_1, \dots, x_n]$ is radical and of dimension zero, with $\chi$ solutions, and the system $B_{t=1}$ admits at most $\chi$ isolated solutions (counted with multiplicities).

Furthermore, given a zero-dimensional parametrization of the solutions of $B_{t=0}$, a straight-line program $\Gamma$ of length $\beta$ that computes $B$, and the upper bound $\varrho$ as above, there exists a randomized algorithm Homotopy which computes a zero-dimensional parametrization of the isolated solutions of $B_{t=1}$ using

$$\tilde{O}(\chi (\varrho + \chi)\, n\, \beta) \quad (4)$$

operations in $K$.

Given $g = (g_1, \dots, g_s)$ and $F = [f_{i,j}]_{1 \le i \le p,\, 1 \le j \le q}$ as in Section 3, our goal in this section is to specify the homotopy algorithm. We design a suitable start system for the symbolic homotopy algorithm, and we establish that this system satisfies the assumptions of Proposition 3.1. The cost analysis is done in the next section.

In order to build the polynomials $r = (r_1, \dots, r_s)$ of (3), we take polynomials with the same supports as $g = (g_1, \dots, g_s)$ and generic coefficients, taking care to add the constant 1 to their monomial supports if it is missing. The main new ingredient is the determination of the start matrix $M$ of (2). In this paper, we focus on what we call the column support homotopy, where the construction of $M$ is derived from the unions of the supports of the entries of $F$ per column. This extends a similar construction given in [17] for dense polynomials, which was instead based on the total degrees of the columns of $F$. For $1 \le i \le s$, let $A_i \subset \mathbb{N}^n$ denote the support of $g_i$, to which we add the origin $0 \in \mathbb{N}^n$.
For $1 \le j \le q$, let $B_j \subset \mathbb{N}^n$ be the union of the supports of the polynomials in the $j$-th column of $F$, to which we add $0$ as well.

For given $i$ and $j$ we denote by $\kappa_i$ the cardinality of $A_i$ and by $\mu_j$ the cardinality of $B_j$, and let $(n_{i,1}, \dots, n_{i,\kappa_i})$ and $(m_{j,1}, \dots, m_{j,\mu_j})$ denote the monomials in $x_1, \dots, x_n$ supported by $A_i$ and $B_j$, respectively. We can then define the "generic" polynomials supported on $A_1, \dots, A_s$ and $B_1, \dots, B_q$:

$$r_i = \sum_{k=1}^{\kappa_i} d_{i,k}\, n_{i,k} \quad (1 \le i \le s) \qquad \text{and} \qquad m_j = \sum_{k=1}^{\mu_j} e_{j,k}\, m_{j,k} \quad (1 \le j \le q),$$

where all $d_{i,k}$ and $e_{j,k}$ are new indeterminates. Let $c_{i,j}$, for $1 \le i \le p$ and $1 \le j \le q$, be $pq$ additional new indeterminates, so that

$$\mathcal{A} = \{ (d_{i,k})_{1 \le i \le s,\, 1 \le k \le \kappa_i},\ (e_{j,k})_{1 \le j \le q,\, 1 \le k \le \mu_j},\ (c_{i,j})_{1 \le i \le p,\, 1 \le j \le q} \},$$

the set of all these new indeterminates, has size

$$N = \sum_{i=1}^{s} \kappa_i + \sum_{i=1}^{q} \mu_i + pq.$$

We then define the matrix

$$M = \begin{bmatrix} c_{1,1}\, m_1 & c_{1,2}\, m_2 & \cdots & c_{1,q}\, m_q \\ \vdots & \vdots & & \vdots \\ c_{p,1}\, m_1 & c_{p,2}\, m_2 & \cdots & c_{p,q}\, m_q \end{bmatrix} \in K[\mathcal{A}][x_1, \dots, x_n]^{p \times q}.$$

As before, for $\rho$ in $K^N$ and any polynomial $f$ having coefficients in $K[\mathcal{A}]$, $\Theta_\rho(f)$ is the polynomial with coefficients in $K$ obtained through evaluation of the indeterminates $\mathcal{A}$ at $\rho$; the notation carries over to polynomial matrices.

We will use $M$ and $r = (r_1, \dots, r_s)$ to construct our start system, by assigning random values to all indeterminates in $\mathcal{A}$. Thus, we let $t$ be a new indeterminate and we denote by $\mathcal{B}$ the polynomials in $K[\mathcal{A}][t, x_1, \dots, x_n]$ obtained by considering the equations $(1 - t) \cdot r + t \cdot g$ and the $p$-minors of $(1 - t) \cdot M + t \cdot F$. Our goal in this section is to establish the following result.
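The rank-one-per-column shape of $M$ is what makes the start system tractable: every $p$-minor of $M$ factors as a constant $p \times p$ minor of the $c_{i,j}$'s times a product of column polynomials $m_j$ (this factorization is used in the proof of Proposition 4.2 below). A toy numerical check, with made-up monomials and random integer coefficients:

```python
from itertools import combinations
import random

random.seed(0)

# Toy column polynomials m_j(x) (hypothetical supports, fixed coefficients)
def m1(x): return 1 + x[0]
def m2(x): return x[0]*x[1]
def m3(x): return 1 + x[1]**2
cols = [m1, m2, m3]

p, q = 2, 3
c = [[random.randint(1, 9) for _ in range(q)] for _ in range(p)]

# Start matrix M with entries c[i][j] * m_j(x)
def M(x):
    return [[c[i][j] * cols[j](x) for j in range(q)] for i in range(p)]

def det2(a, b, cc, d):
    """Determinant of the 2x2 matrix [[a, b], [cc, d]]."""
    return a*d - b*cc

x = (3, 4)
A = M(x)
for j1, j2 in combinations(range(q), 2):
    minor = det2(A[0][j1], A[0][j2], A[1][j1], A[1][j2])
    cdet = det2(c[0][j1], c[0][j2], c[1][j1], c[1][j2])
    # p-minor = (constant minor of the c's) * m_{j1}(x) * m_{j2}(x)
    assert minor == cdet * cols[j1](x) * cols[j2](x)
```

So the $p$-minors of $M$ vanish exactly where enough of the $m_j$'s vanish, provided the constant minors are non-zero.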
Proposition 4.1. There exists a non-empty Zariski open subset $\Omega$ of $K^N$ such that for $\rho$ in $\Omega$, $B := \Theta_\rho(\mathcal{B})$ satisfies the assumptions of Proposition 3.1.

In other words, we will prove that, for such a choice of $\rho$, the ideal generated by $B_{t=0}$ in $K[x_1, \dots, x_n]$ is radical and zero-dimensional (this is done in the next subsection) and that the solutions of $B$ in $\overline{K}\langle\langle t \rangle\rangle^n$ are bounded. This boundedness property is proved in Subsection 4.4, using properties of Lagrange-type systems which are established in Subsection 4.3. Note also the following consequence of Proposition 3.1: the number of isolated solutions of the system we want to solve (counting multiplicities) is bounded above by the number of solutions of a generic start system $\Theta_\rho(\mathcal{B})_{t=0}$.

In this subsection, we prove that for a generic choice of $\rho$ in $K^N$, if we write $B := \Theta_\rho(\mathcal{B})$, then the ideal generated by $B_{t=0}$ in $K[x_1, \dots, x_n]$ is radical and zero-dimensional.

Proposition 4.2. There exists a non-empty Zariski open set $\Omega_1 \subset K^N$ such that for $\rho$ in $\Omega_1$, writing $B := \Theta_\rho(\mathcal{B})$, the ideal generated by $B_{t=0}$ in $K[x_1, \dots, x_n]$ is radical of dimension zero.

Proof. Note first that the equations $B_{t=0}$ that we are considering are the $p$-minors of $\Theta_\rho(M)$, together with $\Theta_\rho(r_1, \dots, r_s)$. Now, any $p$-minor of $M$ has the form $C_{i_1, \dots, i_p}\, m_{i_1} \cdots m_{i_p}$, for some choice of columns $i_1, \dots, i_p$, where $C_{i_1, \dots, i_p}$ is the determinant

$$C_{i_1, \dots, i_p} = \begin{vmatrix} c_{1,i_1} & c_{1,i_2} & \cdots & c_{1,i_p} \\ \vdots & \vdots & & \vdots \\ c_{p,i_1} & c_{p,i_2} & \cdots & c_{p,i_p} \end{vmatrix} \in K[\mathcal{A}].$$

Our first constraint on $\rho$ is thus that $\Theta_\rho(C_{i_1, \dots, i_p}) \in K$ is non-zero for all $\{i_1, \dots, i_p\}$. In this case, a point $\alpha$ in $\overline{K}^n$ cancels all the $p$-minors of $\Theta_\rho(M)$ if and only if it cancels all products $\Theta_\rho(m_{i_1}) \cdots \Theta_\rho(m_{i_p})$. This is the case if and only if there exists $i = \{i_1, \dots, i_{q-p+1}\} \subset \{1, \dots, q\}$ such that $\Theta_\rho(m_{i_1}), \dots, \Theta_\rho(m_{i_{q-p+1}})$ all vanish at $\alpha$.

Since we assume $n = q - p + s + 1$, we can rewrite $q - p + 1$ as $n - s$. Then, for a subset $i = \{i_1, \dots, i_{n-s}\} \subset \{1, \dots, q\}$, consider the polynomials $M_i = (m_{i_1}, \dots, m_{i_{n-s}})$. By Proposition 2.1(i), there exists a non-empty Zariski open set $O_i \subset K^N$ such that for $\rho$ in $O_i$, the ideal generated by $\Theta_\rho(M_i, r)$ is radical and admits finitely many solutions. For subsets $i'$ and $i$ of $\{1, \dots, q\}$ of cardinality $n - s$ such that $i \ne i'$, the system defined by $M_{i \cup i'}$ and $r$ contains at least $n + 1$ polynomials in $K[\mathcal{A}][x_1, \dots, x_n]$. By Proposition 2.1(ii), there exists a non-empty Zariski open set $O_{i \cup i'} \subset K^N$ such that for $\rho$ in $O_{i \cup i'}$, the system $\Theta_\rho(M_{i \cup i'}, r)$ has no solutions in $\overline{K}^n$.

Taking the intersection of these $O_i$ and $O_{i \cup i'}$ (which are finite in number), together with the condition that the determinants $\Theta_\rho(C_{i_1, \dots, i_p})$ do not vanish, defines a non-empty Zariski open $\Omega_1 \subset K^N$. Thus, for $\rho$ in $\Omega_1$, the sets $V(\Theta_\rho(M_i, r))$, for any subset $i$ of $\{1, \dots, q\}$ of cardinality $n - s$, are finite and pairwise disjoint, and their union is $V(B_{t=0})$. In particular, the latter set is finite.

Take $\rho$ in $\Omega_1$ and $\alpha$ in $V(B_{t=0})$. We now prove that the ideal generated by $B_{t=0}$, that is, by the $p$-minors of $\Theta_\rho(M)$ and $\Theta_\rho(r_1, \dots, r_s)$, has multiplicity one at $\alpha$. This will imply that $B_{t=0}$ generates a radical ideal. For this, we will use the fact that $\alpha$ is a root of the system $\Theta_\rho(M_i, r)$ for a unique subset $i = (i_1, \dots, i_{n-s})$ of $\{1, \dots, q\}$ of cardinality $n - s$, and that $\Theta_\rho(M_i, r)$ has multiplicity one at $\alpha$.

Let then $j = (j_1, \dots, j_{p-1})$ denote the $q - (n - s) = p - 1$ columns of $M$ not indexed by $i$. For any index $u$ in $i$, the equation $\Theta_\rho(C_{j_1, \dots, j_{p-1}, u}\, m_{j_1} \cdots m_{j_{p-1}} m_u)$ appears among the generators of $B_{t=0}$. In the local ring at $\alpha$, we can divide by the non-zero quantity $\Theta_\rho(C_{j_1, \dots, j_{p-1}, u}\, m_{j_1} \cdots m_{j_{p-1}})(\alpha)$. This implies that, locally at $\alpha$, $B_{t=0}$ is generated by the polynomials $\Theta_\rho(m_{i_1}), \dots, \Theta_\rho(m_{i_{n-s}})$ and $\Theta_\rho(r)$. The conclusion follows.

To establish the boundedness property, since $\mathcal{B}$ is overdetermined, it will be convenient to introduce new variables $\ell = (\ell_1, \dots, \ell_p)$ and to work with the Lagrange system, which consists of $s + q + 1$ equations defined by

$$(1 - t)\, r + t\, g = 0, \quad [\ell_1 \cdots \ell_p]\, ((1 - t)\, M + t\, F) = 0, \quad t_1 \ell_1 + \cdots + t_p \ell_p - 1 = 0, \quad (5)$$

where $t = (t_1, \dots, t_p)$ are new indeterminate coefficients. Recall that $n = q - p + s + 1$, so $s + q + 1 = n + p$; we will write these equations as $H = (H_1, \dots, H_{n+p})$.

There are now $N + p$ parameters in these equations, with elements of the parameter space $K^{N+p}$ written as $\sigma = (\rho, \tau)$, with $\rho$ in $K^N$ and $\tau$ in $K^p$. For $\sigma$ in $K^{N+p}$ and $f$ a polynomial with coefficients in $K[\mathcal{A}, t]$, we write as usual $\Theta_\sigma(f)$ for the polynomial whose coefficients are obtained from those of $f$, with $\mathcal{A}$ evaluated at $\rho$ and $t$ evaluated at $\tau$. As before, the notation carries over to vectors or matrices of polynomials.

For $1 \le i \le n + p$, $H_i$ can be decomposed as $H_i = \eta_i + t\, h_i$, with both $\eta_i$ and $h_i$ in $K[\mathcal{A}, t][x, \ell]$. In particular, note that the polynomials $\eta = (\eta_1, \dots, \eta_{n+p})$ form the Lagrange system

$$r_1 = \cdots = r_s = 0, \quad [\ell_1 \cdots \ell_p]\, M = 0, \quad t_1 \ell_1 + \cdots + t_p \ell_p - 1 = 0$$

in $K[\mathcal{A}, t][x, \ell]$, so for $i = 1, \dots, q$, the polynomial $\eta_{s+i}$ is $(c_{1,i} \ell_1 + \cdots + c_{p,i} \ell_p)\, m_i$.

In what follows, we discuss properties of the polynomials $\Theta_\sigma(\eta)$ and their initial forms $\operatorname{init}_e(\Theta_\sigma(\eta))$, for $e$ in $\mathbb{Q}^{n+p}$.
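A tiny numerical sketch of the Lagrange encoding (toy $2 \times 3$ data, chosen only for illustration): a matrix of rank $< p$ admits a left kernel vector $\ell$, which the last equation normalizes.

```python
# Rank(A) < p = 2 for a 2x3 matrix A iff some l = (l1, l2) != 0 satisfies
# [l1 l2] A = 0; the equation t1*l1 + t2*l2 - 1 = 0 normalizes l.
A = [[1, 2, 3],
     [2, 4, 6]]              # rank 1: the second row is twice the first
l = [2, -1]                  # left kernel vector: 2*row1 - row2 = 0
assert all(l[0]*A[0][j] + l[1]*A[1][j] == 0 for j in range(3))

t1, t2 = 3, 5                # random normalization coefficients
s = t1*l[0] + t2*l[1]        # non-zero for a generic choice of (t1, t2)
assert s != 0
l = [li / s for li in l]     # rescale so that t1*l1 + t2*l2 = 1
assert abs(t1*l[0] + t2*l[1] - 1) < 1e-12
```

This is the one-dimensional-kernel situation; as noted in the introduction, higher rank deficiencies are exactly where the Lagrange formulation stops capturing isolated points, which is why it is used here only as an auxiliary tool.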
Our first claim is the following; the proof is straightforward. Lemma 4.3.
For $\sigma$ in $(K - \{0\})^{N+p}$ and $e$ in $\mathbb{Q}^{n+p}$, $\operatorname{init}_e(\Theta_\sigma(\eta)) = \Theta_\sigma(\operatorname{init}_e(\eta))$.

The second proposition uses the specific shape of the equations $H$ to derive information about their roots.

Proposition 4.4.
Let $\varphi = (t^{e_1} c_1 + \cdots,\, \dots,\, t^{e_{n+p}} c_{n+p} + \cdots)$ be in $\overline{K}\langle\langle t \rangle\rangle^{n+p}$ with, for all $i = 1, \dots, n + p$, $e_i$ in $\mathbb{Q}$ and $c_i$ in $\overline{K} - \{0\}$. Then for $\sigma$ in $(K - \{0\})^{N+p}$, we have the following: if $\varphi$ cancels $\Theta_\sigma(H)$, then $c = (c_1, \dots, c_{n+p})$ cancels $\Theta_\sigma(\operatorname{init}_e(\eta))$, with $e = (e_1, \dots, e_{n+p})$.

Proof. For $i = 1, \dots, s$, we have $H_i = r_i + t(g_i - r_i)$, so $\eta_i = r_i$ and $h_i = g_i - r_i$. Thus, by construction, the monomial support of $h_i$ (with respect to $x_1, \dots, x_n, \ell_1, \dots, \ell_p$) is the same as that of $r_i$. This means that for any term $k\, x_1^{u_1} \cdots \ell_p^{u_{n+p}}$ in $h_i$, with $k$ in $K[\mathcal{A}]$, there exists a term $k'\, x_1^{u_1} \cdots \ell_p^{u_{n+p}}$ in $\eta_i$, where $k'$ is one of the indeterminates $d_{i,j}$.

Take $\sigma$ as in the statement of the proposition, and write $a = \Theta_\sigma(H_i)$, $b = \Theta_\sigma(\eta_i)$ and $c = \Theta_\sigma(h_i)$, so that $b(\varphi) + t\, c(\varphi) = a(\varphi) = 0$. Using our assumption on $\sigma$, we deduce that for any term of the form $k\, t\, \varphi_1^{u_1} \cdots \varphi_{n+p}^{u_{n+p}}$ appearing in $t\, c(\varphi)$, there is a term $k'\, \varphi_1^{u_1} \cdots \varphi_{n+p}^{u_{n+p}}$ appearing in $b(\varphi)$, with non-zero coefficient $k'$. In particular, all terms of smallest valuation in $a(\varphi)$ appear in $b(\varphi)$, and must add up to zero. Taking their first coefficients, this implies that the vector $(c_1, \dots, c_{n+p})$ cancels $\operatorname{init}_e(b)$.

The proof for the polynomials $H_{s+1}, \dots, H_{s+q}$, $\eta_{s+1}, \dots, \eta_{s+q}$ and $h_{s+1}, \dots, h_{s+q}$ is similar, taking into account that $\eta_{s+i} = (c_{1,i} \ell_1 + \cdots + c_{p,i} \ell_p)\, m_i$. Indeed, again, for $i = 1, \dots, q$, the monomial support of $h_{s+i}$ is the same as that of $\eta_{s+i}$; if we define $a, b, c$ as above, our assumption that no entry of $\sigma$ vanishes implies as before that all terms of smallest valuation in $a(\varphi)$ appear in $b(\varphi)$, and add up to zero. Finally, for $H_{s+q+1} = H_{n+p}$, we have $h_{n+p} = 0$, and the claim follows as above.

Our last property requires a longer proof.
For generic choices of $\sigma$, it constrains the possible roots of the system $\Theta_\sigma(\operatorname{init}_e(\eta))$ introduced in the previous proposition.

Proposition 4.5.
There exists a non-empty Zariski open set $\Omega \subset K^{N+p}$ such that for $\sigma \in \Omega$, the following holds for any $e$ in $\mathbb{Q}^{n+p}$: for $j = 1,\ldots,n+p$, the system obtained by setting the $j$-th variable to $1$ in $\Theta_\sigma(\operatorname{init}_e(\eta))$ has no solution in $(K\setminus\{0\})^{n+p-1}$.

Proof. Even though there is an infinite number of vectors $e$ to take into account, there is only a finite number of possible systems $\operatorname{init}_e(\eta)$. Thus, in what follows, we assume $e$ is fixed and prove the existence of a suitable Zariski open set, knowing that we will eventually take the intersection of the open sets corresponding to the finite number of systems $\operatorname{init}_e(\eta)$. Similarly, without loss of generality, we assume $j = 1$, so that we are setting $x_1$ to $1$. Thus, we call $\bar\eta = (\bar\eta_1,\ldots,\bar\eta_{n+p})$ the polynomials in $K[A, t][x_2,\ldots,x_n,\ell_1,\ldots,\ell_p]$ obtained by setting $x_1$ to $1$ in $\operatorname{init}_e(\eta)$. We will prove that for a generic $\sigma$ in $K^{N+p}$, the system $\Theta_\sigma(\bar\eta) \subset K[x_2,\ldots,x_n,\ell_1,\ldots,\ell_p]$ has no solution in $(K\setminus\{0\})^{n+p-1}$ (this system is indeed the one mentioned in the statement of the proposition, since $\Theta_\sigma$ and variable evaluation commute).

For $i = 1,\ldots,n+p$, denote by $S_i$ the subset of $(A, t)$ consisting of those indeterminates that appear in the coefficients of $\eta_i$ (so it also contains those that appear in the coefficients of $\bar\eta_i$). With this convention, the sets $S_i$ are pairwise disjoint, and $(S_1,\ldots,S_{n+p})$ is the set of all indeterminate coefficients $(A, t)$ that appear in $\eta$. For all $i$, we let $t_i$ be the cardinality of $S_i$, and we will write the elements of $K^{t_i}$ as $\rho_i$, so that a vector $\sigma \in K^{N+p}$ can be decomposed as $\sigma = (\rho_1,\ldots,\rho_{n+p})$. Given $(\rho_1,\ldots,\rho_i)$ in $K^{t_1+\cdots+t_i}$, $\Theta_{(\rho_1,\ldots,\rho_i)}$ denotes as usual the mapping that evaluates the $t_1+\cdots+t_i$ indeterminates $S_1,\ldots,S_i$ at $(\rho_1,\ldots,\rho_i)$.

The key property we will use below is the following: for any $\alpha$ in $(K\setminus\{0\})^{n+p-1}$, the polynomial $\gamma \in K[S_i]$ obtained by evaluating $x_2,\ldots,x_n,\ell_1,\ldots,\ell_p$ at the coordinates of $\alpha$ in $\bar\eta_i$ is non-zero. For $i = 1,\ldots,s$ and $i = n+p$, this is because the coefficients of $\bar\eta_i$ are sums of elements of $S_i$, no element in $S_i$ appears in two such coefficients, and all coordinates of $\alpha$ are non-zero. For $i = s+1,\ldots,n+p-1$, since $\eta_i$ is $(c_{1,i-s}\ell_1 + \cdots + c_{p,i-s}\ell_p)\,m_{i-s}$, its initial form $\operatorname{init}_e(\eta_i)$ is the product $\operatorname{init}_e(c_{1,i-s}\ell_1 + \cdots + c_{p,i-s}\ell_p)\operatorname{init}_e(m_{i-s})$. After setting $x_1$ to $1$, we deduce that $\bar\eta_i$ factors as $\bar\eta_i = f_i g_i$, where the coefficients of both $f_i$ and $g_i$ are sums of elements of $S_i$, and again, no element in $S_i$ appears in two such coefficients. Thus, the evaluations of $f_i$ and $g_i$ at $\alpha$ are non-zero, and the same holds for $\bar\eta_i$.

To describe algebraic sets in the torus $(K\setminus\{0\})^{n+p-1}$, we work in $K^{n+p}$, using a new indeterminate $Z$ and taking into account the relation $x_2\cdots x_n\ell_1\cdots\ell_p Z = 1$. Then, for $i = 0,\ldots,n+p$, we will prove the following: for a generic choice of $(\rho_1,\ldots,\rho_i)$ in $K^{t_1+\cdots+t_i}$ (in the Zariski sense), the zero-set of $\Theta_{(\rho_1,\ldots,\rho_i)}(\bar\eta_1,\ldots,\bar\eta_i)$ and $x_2\cdots x_n\ell_1\cdots\ell_p Z - 1$ has dimension at most $n+p-1-i$ in $K^{n+p}$. Taking $i = n+p$ proves our claim.

The proof is by induction on $i$. For $i = 0$, there is nothing to prove, so let us assume that our claim holds for index $i - 1$, with $i \geq 1$. We proceed by contradiction, assuming our claim does not hold. In this case, the vectors $(\rho_1,\ldots,\rho_i)$ for which the zeros of $\Theta_{(\rho_1,\ldots,\rho_i)}(\bar\eta_1,\ldots,\bar\eta_i)$ and $x_2\cdots x_n\ell_1\cdots\ell_p Z - 1$ have dimension at most $n+p-1-i$ in $K^{n+p}$ are contained in a hypersurface of the parameter space $K^{t_1+\cdots+t_i}$. Thus they satisfy a relation $P(\rho_1,\ldots,\rho_i) = 0$, for some non-zero polynomial $P$ in $K[S_1,\ldots,S_i]$. Then, take $(\rho_1,\ldots,\rho_{i-1})$ in $K^{t_1+\cdots+t_{i-1}}$ such that

• $P(\rho_1,\ldots,\rho_{i-1}, S_i) \in K[S_i]$ is not identically zero;

• the zero-set $V$ of $\Theta_{(\rho_1,\ldots,\rho_{i-1})}(\bar\eta_1,\ldots,\bar\eta_{i-1})$ and $x_2\cdots x_n\ell_1\cdots\ell_p Z - 1$ has dimension at most $n+p-i$ in $K^{n+p}$ (this is possible by the induction assumption).
By Krull's theorem, all its irreducible components have dimension exactly $n+p-i$. The first condition implies that for a generic $\rho_i$ in $K^{t_i}$, the zero-set of $\Theta_{(\rho_1,\ldots,\rho_i)}(\bar\eta_1,\ldots,\bar\eta_i)$ and $x_2\cdots x_n\ell_1\cdots\ell_p Z - 1$ has dimension at least $n+p-i$. Equivalently, this means that the intersection of $V$ and $V(\Theta_{(\rho_1,\ldots,\rho_i)}(\bar\eta_i))$ has dimension $n+p-i$. Let us see how to derive a contradiction.

Let $V_1,\ldots,V_d$ be the irreducible components of $V$. Pick $\alpha_1$ in $V_1$, \ldots, $\alpha_d$ in $V_d$, and let $\gamma_1,\ldots,\gamma_d$ be the polynomials in $K[S_i]$ obtained by evaluating $x_2,\ldots,x_n,\ell_1,\ldots,\ell_p$ at the coordinates of $\alpha_1,\ldots,\alpha_d$, respectively, in $\bar\eta_i$. As we pointed out above, all $\gamma_j$'s are non-zero, and thus so is $\Gamma := \gamma_1\cdots\gamma_d \in K[S_i]$. In particular, for a generic choice of $\rho_i$ in $K^{t_i}$, $\Theta_{(\rho_1,\ldots,\rho_i)}(\bar\eta_i)$ vanishes at none of $\alpha_1,\ldots,\alpha_d$, and so it intersects each $V_j$ (and thus $V$) in dimension at most $n+p-i-1$. This contradicts the previous paragraph.
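The bookkeeping with initial forms and coefficient specializations used in Propositions 4.4 and 4.5, and the commutation property stated earlier, can be checked mechanically on toy instances. The sketch below is our own illustration and not part of the algorithm: the dict-of-exponents encoding and the list-of-values stand-in for $\Theta_\sigma$ are assumptions. `init_form` keeps the terms of minimal $e$-weighted degree, and it commutes with specializing the indeterminate coefficients at a point with non-zero entries.

```python
def wdeg(expt, e):
    # e-weighted degree of the monomial with exponent vector `expt`
    return sum(w * a for w, a in zip(e, expt))

def init_form(poly, e):
    # poly: dict mapping exponent tuples to coefficients;
    # keep only the terms of minimal e-weighted degree
    m = min(wdeg(a, e) for a in poly)
    return {a: c for a, c in poly.items() if wdeg(a, e) == m}

def specialize(poly, sigma):
    # toy Theta_sigma: each coefficient is an index into the vector sigma
    return {a: sigma[c] for a, c in poly.items()}

# eta with indeterminate coefficients A0, A1, A2 (stored as indices 0, 1, 2)
eta = {(3, 0): 0, (1, 1): 1, (0, 2): 2}
sigma = [5, -2, 7]          # a specialization point with no zero coordinate
e = (1, 2)                  # weight vector

lhs = init_form(specialize(eta, sigma), e)
rhs = specialize(init_form(eta, e), sigma)
assert lhs == rhs           # init_e and Theta_sigma commute
```

Since the selection of minimal-weight terms depends only on the exponents, the two orders of operations agree as soon as no specialized coefficient vanishes, which mirrors the role of the open set $\Omega$.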
Using the results in the previous subsection, we finally establish the second property needed for our homotopy algorithm: we prove that for a generic $\rho$ in $K^N$, the solutions of $B = \Theta_\rho(\mathbf{B})$ in $K\langle\langle t\rangle\rangle^n$ are bounded.

Proposition 4.6.
There exists a non-empty Zariski open set $\Omega \subset K^N$ such that for $\rho \in \Omega$, writing $B := \Theta_\rho(\mathbf{B})$, all points in $V(B) \subset K\langle\langle t\rangle\rangle^n$ are bounded.

Proof. By Proposition 4.5, there exists a non-empty Zariski open set $\Omega_1 \subset K^{N+p}$ such that for any $\sigma = (\rho, \tau)$ in $\Omega_1$, the following holds: for any $e$ in $\mathbb{Q}^{n+p}$ and any $j$ in $\{1,\ldots,n+p\}$, the system obtained by setting the $j$-th variable to $1$ in $\Theta_\sigma(\operatorname{init}_e(\eta))$ has no solution in $(K\setminus\{0\})^{n+p-1}$.

We then let $\Omega' \subset K^N$ be the image of $\Omega_1$ through the projection $\pi: \sigma = (\rho, \tau) \mapsto \rho$; this is a non-empty Zariski open set. Finally, we let $\Omega$ be the intersection of $\Omega'$ with $(K\setminus\{0\})^N \subset K^N$. We take $\rho$ in $\Omega$ and we prove that all solutions of $\Theta_\rho(\mathbf{B})$ in $K\langle\langle t\rangle\rangle^n$ are bounded.

Take such a solution, and write it $\alpha = (\alpha_1,\ldots,\alpha_n) \in K\langle\langle t\rangle\rangle^n$. By construction, there exists a non-zero $(\lambda_1,\ldots,\lambda_p) \in K\langle\langle t\rangle\rangle^p$ such that $[\lambda_1 \cdots \lambda_p]$ is in the left nullspace of $M(\alpha)$. Let $v \in \mathbb{Q}$ be the valuation of this vector, and let $(\lambda'_1,\ldots,\lambda'_p) \in K^p$ be the vector of coefficients of $t^v$ in $(\lambda_1,\ldots,\lambda_p)$, so that $(\lambda'_1,\ldots,\lambda'_p)$ is not identically zero. Let us then take $\tau = (\tau_1,\ldots,\tau_p)$ such that $\sigma := (\rho, \tau)$ is in $\Omega_1$ and in addition $\tau_1 \neq 0,\ldots,\tau_p \neq 0$ and $\tau_1\lambda'_1 + \cdots + \tau_p\lambda'_p \neq 0$ (this is possible, since all these conditions are Zariski-open). In particular, $\tau_1\lambda_1 + \cdots + \tau_p\lambda_p \neq 0$. We can then define $\bar\lambda = (\bar\lambda_1,\ldots,\bar\lambda_p)$ by $\bar\lambda_i = \lambda_i/(\tau_1\lambda_1 + \cdots + \tau_p\lambda_p)$ for all $i$. Let us write $\varphi = (\alpha, \bar\lambda)$; our goal is then to prove that $\varphi$ is bounded, since this will imply that $\alpha$ is bounded.

By construction, the vector $[\bar\lambda_1 \cdots \bar\lambda_p]$ is still in the left nullspace of $M(\alpha)$ and satisfies $\tau_1\bar\lambda_1 + \cdots + \tau_p\bar\lambda_p = 1$, so that $\varphi$ is in $V(\Theta_\sigma(H))$.
Let us then write $\varphi = (t^{e_1}c_1 + \cdots,\ \ldots,\ t^{e_{n+p}}c_{n+p} + \cdots)$ with, for all $i = 1,\ldots,n+p$, $e_i$ in $\mathbb{Q}$ and $c_i$ in $K\setminus\{0\}$. Because none of the coordinates of $\sigma$ vanishes, we can apply Proposition 4.4, and deduce that $c = (c_1,\ldots,c_{n+p})$ cancels $\Theta_\sigma(\operatorname{init}_e(\eta))$, with $e = (e_1,\ldots,e_{n+p})$.

Suppose then, by way of contradiction, that some $e_i$ is negative; without loss of generality, we can assume that $e_1 < 0$. The polynomials $\Theta_\sigma(\operatorname{init}_e(\eta))$ are weighted-homogeneous for the weight vector $e$. In particular, the point
\[
\tilde c = \big(1,\ c_2\varepsilon^{e_2},\ \ldots,\ c_{n+p}\varepsilon^{e_{n+p}}\big)
\]
is also a solution of these equations, where $\varepsilon$ denotes any element in $K$ such that $c_1\varepsilon^{e_1} = 1$. Note that none of the coordinates of the vector $\tilde c$ vanishes. However, by construction, $\sigma$ is in $\Omega_1$, so Proposition 4.5 asserts that the system obtained by setting the first variable $x_1$ to $1$ in $\Theta_\sigma(\operatorname{init}_e(\eta))$ has no solution in $(K\setminus\{0\})^{n+p-1}$. This is the contradiction we wanted, so we have $e_i \geq 0$ for all $i$, as claimed.

At this stage, to prove Proposition 4.1, it suffices to let $\Omega$ be the intersection of the open set from Proposition 4.2 and the one from the proposition above.

Let the polynomials in $g = (g_1,\ldots,g_s)$ and $F = [f_{i,j}]_{1\le i\le p,\,1\le j\le q}$ be as before. To find the isolated points in $V_p(F, g)$, we take $B = \Theta_\rho(\mathbf{B})$ as in the previous section, for a randomly chosen $\rho \in K^N$, and apply the Homotopy algorithm of Proposition 3.1.

Proposition 4.1 established the basic properties needed for the correctness of our homotopy algorithm. To finish the analysis and establish a cost bound, we now give upper bounds on the parameters that appear in the runtime reported in Proposition 3.1, such as the size of the input, the number of solutions of our start system, and the degree of the homotopy curve; we also have to give the cost of solving the start system.

We first consider the case of arbitrary sparse polynomials, for which we state our results in terms of certain mixed volumes; later we discuss the particular case of weighted-degree polynomials. Some quantities will be defined similarly in both cases. As before, for $i = 1,\ldots,s$, $A_i \subset \mathbb{N}^n$ denotes the support of $g_i$, to which we add the origin $0 \in \mathbb{N}^n$, and for $j = 1,\ldots,q$, $B_j \subset \mathbb{N}^n$ is the union of the supports of the polynomials in the $j$-th column of $F$, to which we add $0$ as well.
For indices $i, j$ as above, we let $a_i$, respectively $b_j$, be the cardinality of $A_i$, respectively $B_j$. As input, in either case, we are given $g$ and $F$ through the list of their non-zero terms; this involves $O(\gamma)$ elements in $K$, with
\[
\gamma := a_1 + \cdots + a_s + p\,(b_1 + \cdots + b_q). \tag{6}
\]
Finally, we let $d$ be the maximum degree of all the polynomials in $g$ and $F$.

5.1 General sparse polynomials

Representing the input.
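The straight-line program described in this paragraph evaluates each monomial by repeated squaring, using $O(n\log(d))$ operations in $K$ per monomial. The following minimal Python sketch of that single step is our own and purely illustrative:

```python
def eval_monomial(point, expt):
    """Evaluate x1^u1 * ... * xn^un at `point` by binary powering.

    Each exponent is processed bit by bit, so the number of
    multiplications is O(n log d) with d the maximum exponent,
    mirroring the straight-line-program construction in the text."""
    result = 1
    for x, u in zip(point, expt):
        base = x
        while u:
            if u & 1:
                result *= base
            base *= base       # repeated squaring
            u >>= 1
    return result

assert eval_monomial((2, 3, 5), (3, 0, 2)) == 2**3 * 5**2  # 200
```

A direct term-by-term product would instead use $O(nd)$ multiplications, which is why the logarithmic dependence on $d$ appears in the length bound (7).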
The algorithm in Proposition 3.1 takes as input a straight-line program representation of the polynomials $B = \Theta_\rho(\mathbf{B})$. Obtaining such a straight-line program is straightforward. We first compute the values of all monomials supported on $A_1,\ldots,A_s, B_1,\ldots,B_q$; we then combine them to obtain the polynomials $(1-t)\cdot\Theta_\rho(r) + t\cdot g$ and the matrix $(1-t)\cdot\Theta_\rho(M) + t\cdot F$, and take all $p$-minors of this matrix.

Computing the value of a single monomial supported on $A_i$, respectively $B_j$, can be done through repeated squaring, using $O(n\log(d))$ operations in $K$. Hence, we can obtain the values of all monomials supported on $A_1,\ldots,A_s, B_1,\ldots,B_q$ by a straight-line program of length $O(n\gamma\log(d))$. Combining these monomials to obtain $(1-t)\cdot\Theta_\rho(r) + t\cdot g$ and $(1-t)\cdot\Theta_\rho(M) + t\cdot F$ takes another $O(\gamma)$ operations. Finally, it takes $O\big(p\binom{q}{p}\big)$ operations to compute all $p$-minors of the latter matrix using a division-free determinant algorithm. Altogether, we obtain a straight-line program of length
\[
\beta \in O\Big(n\gamma\log(d) + p\binom{q}{p}\Big) \tag{7}
\]
to compute all entries of $B$.

Number of solutions of the start system.
For $\rho$ in the open set $\Omega \subset K^N$ defined in Proposition 4.1, we saw that the solutions of the start system $B_{t=0}$ are the disjoint union of the solutions of the systems $\Theta_\rho(M_i, r)$, where for a subset $i = \{i_1,\ldots,i_{n-s}\}$ of $\{1,\ldots,q\}$ we write $M_i = (m_{i_1},\ldots,m_{i_{n-s}})$.

For $i = 1,\ldots,s$ and $j = 1,\ldots,q$, we let $C_i$ and $D_j$ be the convex hulls of respectively $A_i$ and $B_j$. Proposition 2.3 then implies that, for $i$ as above, the number of solutions of $\Theta_\rho(M_i, r)$ in $K^n$ is the mixed volume
\[
\chi_i := \mathrm{MV}(C_1,\ldots,C_s, D_{i_1},\ldots,D_{i_{n-s}})
\]
for any $\rho$ in a certain non-empty Zariski open set $O^{\mathrm{BKK}}_i \subset K^N$. Define
\[
\chi := \sum_{i = \{i_1,\ldots,i_{n-s}\}\subset\{1,\ldots,q\}} \chi_i = \sum_{i = \{i_1,\ldots,i_{n-s}\}\subset\{1,\ldots,q\}} \mathrm{MV}(C_1,\ldots,C_s, D_{i_1},\ldots,D_{i_{n-s}}), \tag{8}
\]
and let $\Omega'$ be the intersection of $\Omega$ with the finitely many $O^{\mathrm{BKK}}_i$. Then, for $\rho$ in $\Omega'$, the start system $B_{t=0}$ has precisely $\chi$ solutions. As we pointed out after Proposition 4.1, this implies that the system $B_{t=1}$ which we want to solve admits at most $\chi$ isolated solutions, counted with multiplicities.

Solving the start system.
To solve the systems $\Theta_\rho(M_i, r)$, we rely on the sparse symbolic homotopy algorithm of [25, Section 5]. This algorithm finds the solutions of a sparse system of $n$ equations in $n$ unknowns, with arbitrary support and generic coefficients (in the Zariski sense); this means that in addition to the constraint $\rho \in \Omega$, our choice of $\rho$ will also have to satisfy the constraints stated in that reference.

The runtime of this algorithm depends on some combinatorial quantities (we refer to the original reference for a more extensive discussion): we need a so-called lifting function $\omega_i$, and the associated fine mixed subdivision $\mathcal{M}_i$, for the supports $A_1,\ldots,A_s, B_{i_1},\ldots,B_{i_{n-s}}$ of $r$ and $M_i$ [23]. We then let $w_i$ be the maximum value taken by $\omega_i$ on the support, and $\mu_i$ be the maximum norm of the (primitive, integer) normal vectors to the cells of $\mathcal{M}_i$. Then, the algorithm in [25, Theorem 6.2] computes a zero-dimensional parametrization $R_i$ such that $Z(R_i) = V(\Theta_\rho(M_i, r))$ using $\tilde O(n\gamma\log(d)\,\chi_i\,\mu_i\,w_i)$ operations in $K$.

Taking the union of all these parametrizations, using for example [29, Lemma J.3], does not introduce any added cost. Thus we obtain a randomized algorithm to compute a zero-dimensional parametrization of $V_p(\Theta_\rho(M, r))$ using
\[
\tilde O(n\gamma\log(d)\,\chi\,\mu\,w) \tag{9}
\]
operations in $K$, where we write $\mu := \max_i(\mu_i)$ and $w := \max_i(w_i)$.

Degree of the homotopy curve.
The complexity of the Homotopy algorithm depends on $\chi$, which measures the number of solutions which are tracked during the homotopy, and on the precision $t^{\varrho}$ at which we need to do the computations. As mentioned in Section 3, an upper bound for $\varrho$ is the number of isolated points defined by the equations in $B = \Theta_\rho(\mathbf{B})$ together with a generically chosen hyperplane.

Let $h = \zeta_0 + \zeta_1 x_1 + \cdots + \zeta_n x_n + \zeta_{n+1} t$ be a linear form defining such a hyperplane (here, we take $\zeta_i \in K$). Using it allows us to rewrite $t$ as
\[
\wp(x_1,\ldots,x_n) = -(\zeta_0 + \zeta_1 x_1 + \cdots + \zeta_n x_n)/\zeta_{n+1}.
\]
The isolated points in $V(B) \cap V(h)$ are in one-to-one correspondence with the isolated solutions of the system $B' = (b'_1,\ldots,b'_s, b'_{s+1},\ldots,b'_m)$, where $b'_i = (1-\wp)\,r_i + \wp\,g_i$ for $i = 1,\ldots,s$, and $(b'_{s+1},\ldots,b'_m)$ are the $p$-minors of the matrix $V' = [v'_{i,j}] = (1-\wp)\,M + \wp\,F \in K[x_1,\ldots,x_n]^{p\times q}$. Hence it is sufficient to bound the number of isolated solutions of $V(B')$.

For $i = 1,\ldots,p$ and $j = 1,\ldots,q$, let $B'_{i,j}$ be the support of $v'_{i,j}$. We then define $B'_j = \cup_{1\le i\le p}\,B'_{i,j}$, to which we add the origin if needed, and let $D'_j$ be its Newton polytope. Similarly, for $i = 1,\ldots,s$ we let $C'_i$ denote the Newton polytope of the support of $b'_i$. Then the discussion on the number of solutions of the target system still applies, and shows that the system $B'$ admits at most
\[
\varrho = \sum_{\{i_1,\ldots,i_{n-s}\}\subset\{1,\ldots,q\}} \mathrm{MV}(C'_1,\ldots,C'_s, D'_{i_1},\ldots,D'_{i_{n-s}}) \tag{10}
\]
solutions.

Completing the cost analysis.
The previous discussion allows us to use the
Homotopy algorithm from Proposition 3.1. In addition to the polynomials $g$ and matrix $F$, we also need the combinatorial information $\omega_i$, $\mathcal{M}_i$ described previously. The sum of the costs of solving the start system and of the Homotopy algorithm is as follows.

Theorem 5.1.
The set $V_p(F, g)$ admits at most $\chi$ isolated solutions, counted with multiplicities. There exists a randomized algorithm which takes $g$, $F$, all lifting functions $\omega_i$ and subdivisions $\mathcal{M}_i$ as input and computes a zero-dimensional parametrization of these isolated solutions using
\[
\tilde O\Big(n\Big(\gamma\log(d)\,\chi\,\mu\,w + \chi(\varrho + \chi)\binom{q}{p}\Big)\Big)
\]
operations in $K$, where $\gamma, \chi, \varrho$ are as in respectively (6), (8) and (10), and $\mu$ and $w$ as in (9).

5.2 Weighted domains

Weighted polynomial domains are multivariate polynomial rings $K[x_1,\ldots,x_n]$ where each variable $x_i$ has an integer weight $w_i \geq 1$ (written $\operatorname{wdeg}(x_i) = w_i$). The weighted degree of a monomial $x_1^{\alpha_1}\cdots x_n^{\alpha_n}$ is then $\sum_{i=1}^n w_i\alpha_i$, and the weighted degree $\operatorname{wdeg}(f)$ of a polynomial $f$ is the maximum of the weighted degrees of its terms with non-zero coefficients.

Weighted domains arise naturally in determining isolated critical points of a symmetric function $\phi$ defined over a variety $V(f_1,\ldots,f_s)$ defined by symmetric functions $f_i$. In [10], with J.-C. Faugère, we show that the orbits of these critical points can be described by domains of the form $K[e_{1,1},\ldots,e_{1,\ell_1}, e_{2,1},\ldots,e_{2,\ell_2}, \ldots, e_{r,1},\ldots,e_{r,\ell_r}]$, with $e_{i,k}$ the $k$-th elementary symmetric function on $\ell_i$ letters. Measured in terms of these letters, each $e_{i,k}$ has naturally weighted degree $k$.

Polynomials in weighted domains have a natural sparse structure when compared to polynomials in classical domains. For example, a polynomial $p \in K[x_1, x_2, x_3]$ having total degree bounded by $10$ has $286$ possible terms in a classical domain; however, in a weighted domain with weights $w = (5,\,\cdot\,,2)$ there are only $19$ possible terms. Such a reduction also exists when considering bounds for solutions of polynomial systems, when comparing classical to weighted domains. For instance, Bézout's theorem bounds the number of isolated solutions of a polynomial system of equations by the product of their degrees. For polynomial systems lying in a weighted polynomial domain $K[x_1,\ldots,x_n]$ with weights $w = (w_1,\ldots,w_n) \in \mathbb{Z}_{>0}^n$, the weighted Bézout theorem (see e.g. [24]) states that the number of isolated points of $V(f_1,\ldots,f_n) \subset K^n$ is bounded by
\[
\delta = \frac{d_1\cdots d_n}{w_1\cdots w_n} \quad\text{with } d_i = \operatorname{wdeg}(f_i). \tag{11}
\]
In this section we show how our sparse homotopy algorithm also allows us to describe the isolated points of $V_p(F, g)$, where $F = [f_{i,j}] \in K[x_1,\ldots,x_n]^{p\times q}$ and $g = (g_1,\ldots,g_s) \in K[x_1,\ldots,x_n]^s$ with $n = q - p + s + 1$, assuming bounds on the weighted degrees of all polynomials $f_{i,j}$ and $g_j$. Without loss of generality, we will assume that $w_1 \le \cdots \le w_n$, and we will let $(\gamma_1,\ldots,\gamma_s)$ be the weighted degrees of $(g_1,\ldots,g_s)$ and $(\delta_1,\ldots,\delta_q)$ be the weighted column degrees of $F$.

In particular, the monomial supports $A_1,\ldots,A_s$ of $g_1,\ldots,g_s$ are contained in the sets $A'_1,\ldots,A'_s$, where $A'_i$ is the set of all $(e_1,\ldots,e_n) \in \mathbb{N}^n$ such that $w_1 e_1 + \cdots + w_n e_n \le \gamma_i$. Similarly, for $1 \le j \le q$, $B_j \subset \mathbb{N}^n$ is contained in the set $B'_j$ of all $(e_1,\ldots,e_n) \in \mathbb{N}^n$ for which $w_1 e_1 + \cdots + w_n e_n \le \delta_j$. The sets $A'_i$, respectively $B'_j$, are the supports of generic polynomials of weighted degrees at most $\gamma_i$, respectively $\delta_j$. We denote their cardinalities by $a'_1,\ldots,a'_s$ and $b'_1,\ldots,b'_q$.

Representing the input.
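The quantities $a'_i$ and $b'_j$ introduced above govern the size of the straight-line program constructed in this paragraph. The short Python helper below (our own, for illustration) counts monomials of bounded weighted degree and evaluates the weighted Bézout bound (11); with unit weights it recovers the $286$ monomials of degree at most $10$ in three variables mentioned earlier. The bound is returned as an exact fraction, since the quotient in (11) need not be an integer:

```python
from fractions import Fraction
from math import prod

def count_weighted_monomials(weights, bound):
    """Number of exponent vectors (a1,...,an) with sum(w_i * a_i) <= bound."""
    if not weights:
        return 1
    w, rest = weights[0], weights[1:]
    return sum(count_weighted_monomials(rest, bound - w * a)
               for a in range(bound // w + 1))

def weighted_bezout(wdegs, weights):
    # the bound (11): d_1 ... d_n / (w_1 ... w_n)
    return Fraction(prod(wdegs), prod(weights))

# 286 monomials of ordinary degree <= 10 in three variables
assert count_weighted_monomials((1, 1, 1), 10) == 286
```

For instance, `weighted_bezout((4, 6), (2, 2))` returns $6$, against the unweighted Bézout bound $24$ for the same pair of equations.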
We follow the same approach as in the last subsection to obtain a straight-line program for $B = \Theta_\rho(\mathbf{B})$, simply by computing all monomials of respective weighted degrees at most $(\gamma_1,\ldots,\gamma_s)$ and $(\delta_1,\ldots,\delta_q)$, combining them to form the polynomials $(1-t)\cdot\Theta_\rho(r) + t\cdot g$ and the matrix $(1-t)\cdot\Theta_\rho(M) + t\cdot F$, and taking the $p$-minors of the latter. We benefit from a minor improvement here, as for a fixed $\gamma_i$ or $\delta_j$ we can compute all these monomials in an incremental manner, starting from the monomial $1$, foregoing the use of repeated squaring: this saves a factor $n\log(d)$. Altogether, this results in a straight-line program of size
\[
\Gamma \in O\Big((a'_1 + \cdots + a'_s + p\,(b'_1 + \cdots + b'_q)) + p\binom{q}{p}\Big)
\]
to compute all entries of $B$.

Recall that a term such as $a'_i$ denotes the number of monomials of weighted degree at most $\gamma_i$ in $n$ variables, with $\gamma_i \le d$ for all $i$ (and similarly for $b'_j$, for the weighted degree bound $\delta_j$). A crude bound is thus $a'_i, b'_j \le \binom{n+d}{n}$, resulting in the estimate
\[
\Gamma \in O\Big(n\binom{n+d}{n} + n\binom{q}{p}\Big). \tag{12}
\]
This is not the sharpest possible bound. Bounding $a'_i$ by the volume of the non-negative simplex defined by $w_1(e_1 - 1) + \cdots + w_n(e_n - 1) \le \gamma_i$ results in the upper bound $a'_i \le (\gamma_i + w_1 + \cdots + w_n)^n/(n!\,w_1\cdots w_n)$. Using [5] and [33, Theorem 1.1] gives more refined results for $a'_i$ and $b'_j$, and hence also for $\Gamma$.

Number of solutions of the start system.
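The count derived in the paragraph below (formula (13)) sums the weighted Bézout numbers over all column subsets, which collapses into an elementary symmetric polynomial in the column degrees. A direct evaluation in Python (our own helper, not part of the algorithm):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def elem_sym(k, values):
    # elementary symmetric polynomial of degree k in the given values
    return sum(prod(c) for c in combinations(values, k))

def start_solution_bound(gammas, deltas, weights):
    # formula (13): c = gamma_1...gamma_s * eta_{n-s}(delta_1,...,delta_q)
    #                   / (w_1...w_n)
    n, s = len(weights), len(gammas)
    return Fraction(prod(gammas) * elem_sym(n - s, deltas), prod(weights))

# n = 3, s = 1: one equation of weighted degree 3, column degrees (2, 2, 1)
assert start_solution_bound((3,), (2, 2, 1), (1, 1, 1)) == 24
```

With unit weights and the degrees of the worked example later in the paper ($\gamma_1 = 3$, column degrees $(2,2,1)$), this gives $24$, the dense bound quoted at the end of that example.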
As in the case of sparse polynomials, we take $\rho$ in the open set $\Omega \subset K^N$ of Proposition 4.1 and set $B = \Theta_\rho(\mathbf{B})$. In this case, the solutions of the start system $B_{t=0}$ are the disjoint union of the solutions of the systems $\Theta_\rho(M_i, r)$, with $M_i = (m_{i_1},\ldots,m_{i_{n-s}})$ for $i = \{i_1,\ldots,i_{n-s}\} \subset \{1,\ldots,q\}$.

By the weighted Bézout theorem, the system $\Theta_\rho(M_i, r)$ has
\[
c_i = \frac{\gamma_1\cdots\gamma_s\,\delta_{i_1}\cdots\delta_{i_{n-s}}}{w_1\cdots w_n}
\]
solutions in $K^n$. Taking the sum over all subsets $i$ of $\{1,\ldots,q\}$ of cardinality $n-s$, we deduce that the number of solutions of $B_{t=0}$ is at most
\[
c = \sum_i c_i = \frac{\gamma_1\cdots\gamma_s\,\eta_{n-s}(\delta_1,\ldots,\delta_q)}{w_1\cdots w_n}, \tag{13}
\]
where $\eta_{n-s}(\delta_1,\ldots,\delta_q)$ is the elementary symmetric polynomial of degree $n-s$ in $\delta_1,\ldots,\delta_q$. The discussion following Proposition 4.1 implies that the system $B_{t=1}$ which we want to solve admits at most $c$ isolated solutions.

Solving the start system. To find these solutions, as in the previous subsection, we solve all systems $\Theta_\rho(M_i, r)$ independently. We are not aware of a dedicated algorithm for weighted-degree polynomial systems whose complexity would be suitable; instead, we rely on the geometric resolution algorithm as presented in [16]. In what follows, our first requirement is that $\rho$ be in the open set $\Omega \subset K^N$ of Proposition 4.1, but we will add finitely many Zariski-open conditions on $\rho$.

For a subset $i = \{i_1,\ldots,i_{n-s}\} \subset \{1,\ldots,q\}$, let $(d_{i,1},\ldots,d_{i,n})$ denote the sequence $(\gamma_1,\ldots,\gamma_s,\delta_{i_1},\ldots,\delta_{i_{n-s}})$ sorted in non-decreasing order; we write
\[
\kappa_i = \max_{1\le k\le n}\big(d_{i,1}\cdots d_{i,k}\,w_{k+1}\cdots w_n\big) \quad\text{and}\quad \kappa = \sum_{i = \{i_1,\ldots,i_{n-s}\}\subset\{1,\ldots,q\}} \kappa_i. \tag{14}
\]
Recall as well that we set $d = \max(\gamma_1,\ldots,\gamma_s,\delta_1,\ldots,\delta_q)$.

Lemma 5.2.
For $i = \{i_1,\ldots,i_{n-s}\} \subset \{1,\ldots,q\}$, and a generic $\rho \in K^N$, one can solve $\Theta_\rho(M_i, r)$ by a randomized algorithm that uses
\[
\tilde O\bigg(n\,\Gamma\,d\,\Big(\frac{\kappa_i}{w_1\cdots w_n}\Big)^{2}\bigg)
\]
operations in $K$.

Proof. The polynomials $\Theta_\rho(M_i, r)$ have weighted degrees at most $(\gamma_1,\ldots,\gamma_s,\delta_{i_1},\ldots,\delta_{i_{n-s}})$. We first reorder these equations in non-decreasing order of weighted degree; we write the reordered sequence of polynomials as $(h_1,\ldots,h_n)$, their respective weighted degrees being at most $(d_{i,1},\ldots,d_{i,n})$.

By Proposition 4.2, since the supports of $M_i$ and $r$ contain the origin, for a generic choice of $\rho$, the equations $\Theta_\rho(M_i, r)$ define a reduced regular sequence (possibly terminating early and thus defining the empty set). We can thus apply the geometric resolution algorithm as in [16, Theorem 1].

The algorithm in [16] takes its input represented as a straight-line program. To obtain one, we take our straight-line program of length $\Gamma$ that computes $B$ and set $t = 0$; the resulting straight-line program computes all $\Theta_\rho(r)$ and $\Theta_\rho(m_1,\ldots,m_q)$, and in particular $\Theta_\rho(M_i)$. We deduce that we can compute a zero-dimensional parametrization of the solutions of $\Theta_\rho(M_i, r)$ using $\tilde O(n\,\Gamma\,d\,\Sigma_i^{2})$ operations in $K$. Here, $\Sigma_i$ is the maximum of the degrees of the "intermediate varieties" $V_1,\ldots,V_n$, where $V_i$ is defined by the first $i$ equations in $\Theta_\rho(M_i, r)$. Hence, to conclude, it suffices to prove that $\Sigma_i \le \kappa_i/(w_1\cdots w_n)$.

Fix an index $\ell$ in $\{1,\ldots,n\}$. We identify degree-$1$ polynomials $P = p_0 + p_1 x_1 + \cdots + p_n x_n$ in $K[x_1,\ldots,x_n]$ with points in $K^{n+1}$. Then, there exists a non-empty Zariski open set $\mathscr{P} \subset K^{(n+1)(n-\ell)}$ such that for $(p_{i,j})_{0\le j\le n,\,1\le i\le n-\ell} \in \mathscr{P}$, defining $P_i$ as $P_i = p_{i,0} + p_{i,1}x_1 + \cdots + p_{i,n}x_n$ implies that $V_\ell \cap V(P_1) \cap \cdots \cap V(P_{n-\ell})$ has cardinality $\deg(V_\ell)$.
Up to taking the $p_{i,j}$'s in the intersection of $\mathscr{P}$ with another non-empty Zariski open set, one can perform Gaussian elimination to rewrite $P_1,\ldots,P_{n-\ell}$ as
\[
x_{\ell+1} - \wp_{\ell+1}(x_1,\ldots,x_\ell),\ \ldots,\ x_n - \wp_n(x_1,\ldots,x_\ell).
\]
For $k = 1,\ldots,\ell$, let $g_k(x_1,\ldots,x_\ell) = h_k(x_1,\ldots,x_\ell,\wp_{\ell+1}(x_1,\ldots,x_\ell),\ldots,\wp_n(x_1,\ldots,x_\ell))$ in $K[x_1,\ldots,x_\ell]$. Because the sequence of weights is non-decreasing, these have respective weighted degrees at most $d_{i,1},\ldots,d_{i,\ell}$ and, by construction, $V(g_1,\ldots,g_\ell)$ is finite and $\deg(V_\ell) = \deg(V(g_1,\ldots,g_\ell))$. Using the weighted Bézout theorem implies
\[
\deg(V(g_1,\ldots,g_\ell)) \le \frac{d_{i,1}\cdots d_{i,\ell}}{w_1\cdots w_\ell} = \frac{d_{i,1}\cdots d_{i,\ell}\,w_{\ell+1}\cdots w_n}{w_1\cdots w_n} \le \frac{\kappa_i}{w_1\cdots w_n}.
\]
Taking all possible $i$ into account, we see that for a generic $\rho$ we can compute zero-dimensional parametrizations for all $\Theta_\rho(M_i, r)$ using
\[
\tilde O\bigg(n\,\Gamma\,d\,\Big(\frac{\kappa}{w_1\cdots w_n}\Big)^{2}\bigg)
\]
operations in $K$. As in the previous subsection, taking the union of all these parametrizations does not introduce any added cost.

Degree of the homotopy curve.
Finally, we need an upper bound on the precision $t^e$ to which we do the computations. As before, a suitable upper bound is the number of isolated intersection points in $K^{n+1}$ between $V(B)$ and a generic hyperplane.

Let $\zeta = \zeta_0 + \zeta_1 x_1 + \cdots + \zeta_n x_n + \zeta_{n+1} t$ be a linear form defining such a hyperplane (here, we take $\zeta_i \in K$). We are interested in counting the isolated solutions of all equations $g' = (\zeta, (1-t)\cdot\Theta_\rho(r) + t\cdot g)$, and all $p$-minors of $F' = (1-t)\cdot\Theta_\rho(M) + t\cdot F$, that is, of $V_p(F', g')$.

Assign weight $w_t = 1$ to $t$, so the weighted degree of $\zeta$ is $w_n$. Then, the system above is of the kind considered in this section, but with $n+1$ variables instead of $n$, and $s+1$ equations $g'$ instead of $s$. The weighted degrees of the equations $g'$ are $(w_n, \gamma_1+1,\ldots,\gamma_s+1)$ and the weighted column degrees of $F'$ are $(\delta_1+1,\ldots,\delta_q+1)$. As we pointed out when counting the solutions of the start system, this implies that our equations admit at most $e$ isolated solutions, with
\[
e = \frac{(\gamma_1+1)\cdots(\gamma_s+1)\,\eta_{n-s}(\delta_1+1,\ldots,\delta_q+1)}{w_1\cdots w_{n-1}}, \tag{15}
\]
where $\eta_{n-s}$ is the elementary symmetric polynomial of degree $n-s$.

Completing the weighted homotopy algorithm.
The previous paragraphs allow us to use the
Homotopy algorithm from Proposition 3.1; we obtain the following result.
Theorem 5.3.
The set $V_p(F, g)$ admits at most $c$ isolated solutions, counted with multiplicities. There exists a randomized algorithm which takes $g$ and $F$ as input and computes a zero-dimensional parametrization of these isolated solutions using
\[
\tilde O\bigg(\Big(c\,(e + c) + d\,\Big(\frac{\kappa}{w_1\cdots w_n}\Big)^{2}\Big)\,n\,\Gamma\bigg)
\]
operations in $K$, where $\Gamma, c, \kappa, e$ are as in respectively (12), (13), (14) and (15).

6 An example

In this section we provide an example illustrating the steps of our homotopy algorithm. Let
\[
g_1 = 99x_1^3 + 92x_1^2 - x_1x_2 + 67x_1 - x_2 + 98x_3 + 25 \in \mathbb{Q}[x_1, x_2, x_3]
\]
and $F \in \mathbb{Q}[x_1, x_2, x_3]^{2\times 3}$ be
\[
\begin{pmatrix}
x_1^2 + 65471x_1 + 59x_2 + 42308x_3 + 65504 & 86x_1^2 + 65460x_1 + 65414x_2 + 12381x_3 + 44 & 65477x_1 + 59898x_3 + 76\\[2pt]
65501x_1^2 + 51x_1 + 65466x_2 + 57496x_3 + 35 & 16x_1^2 + 99x_1 + 65503x_2 + 17950x_3 + 31 & 65454x_1 + 41178x_3 + 65453
\end{pmatrix}.
\]
The support of $g_1$ is
\[
A_1 = \{(3,0,0), (2,0,0), (1,1,0), (1,0,0), (0,1,0), (0,0,1), (0,0,0)\} \subset \mathbb{Z}^3,
\]
with the unions of the column supports of $F$ being
\[
B_1 = B_2 = \{(2,0,0), (1,0,0), (0,1,0), (0,0,1), (0,0,0)\}, \qquad B_3 = \{(1,0,0), (0,0,1), (0,0,0)\}.
\]

Start system.
The start system for $(F, g_1)$ is built as follows. Let
\[
r_1 = 88x_1^3 - x_1^2 - x_1x_2 + 41x_1 + 91x_2 + 29x_3 + 70 \in \mathbb{Q}[x_1, x_2, x_3]
\]
be a polynomial supported on $A_1$, and define $m_1 = -x_1^2 - x_1 + 5x_2 - x_3 - \,\cdot\,$, $m_2 = 63x_1^2 + 10x_1 - x_2 - x_3 - \,\cdot\,$, and $m_3 = 88x_1 + 95x_3 + 9$, polynomials in $\mathbb{Q}[x_1, x_2, x_3]$ supported on $(B_1, B_2, B_3)$. The starting polynomial system $r = (r_1)$ and the start matrix are given as
\[
M = \begin{pmatrix} -m_1 & m_2 & m_3 \\ -m_1 & -m_2 & -m_3 \end{pmatrix} \in \mathbb{Q}[x_1, x_2, x_3]^{2\times 3}.
\]
We remark that the coefficients in the start vector and start matrix for this example were chosen randomly, in this case with the help of the rand() command in Maple.
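The random choices above (made in Maple with rand()) are easy to mimic. The following Python sketch is our own mock-up of the construction, with the dict-of-exponents encoding and the coefficient range being assumptions rather than the paper's conventions:

```python
import random

def random_sparse_poly(support, rng, bound=100):
    """Random integer polynomial with non-zero coefficients on a fixed
    monomial support (a list of exponent tuples), mimicking the
    construction of the start system."""
    return {e: rng.choice([c for c in range(-bound, bound + 1) if c != 0])
            for e in support}

rng = random.Random(2024)
A1 = [(3,0,0), (2,0,0), (1,1,0), (1,0,0), (0,1,0), (0,0,1), (0,0,0)]
B3 = [(1,0,0), (0,0,1), (0,0,0)]

r1 = random_sparse_poly(A1, rng)   # analogue of the start polynomial r1
m3 = random_sparse_poly(B3, rng)   # analogue of the start column m3
assert set(r1) == set(A1) and all(c != 0 for c in r1.values())
```

Forcing every coefficient to be non-zero keeps the support of the start polynomial exactly equal to the prescribed set, which is what the genericity arguments of Section 4 require.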
A parametrization of the start system.
The set of $2$-minors of $M$ consists of scalar multiples of $m_1m_2$, $m_1m_3$ and $m_2m_3$, and hence $V_2(M, r_1) = V_1 \cup V_2 \cup V_3$, where
\[
V_1 = V(m_1, m_2, r_1), \quad V_2 = V(m_1, m_3, r_1), \quad\text{and}\quad V_3 = V(m_2, m_3, r_1).
\]
$V_1$, $V_2$, and $V_3$ are given by
\[
R_{2,1} = \big((10671923044484\,y^3 + 164650405712264\,y^2 + 541980679674061\,y + 393540496795784,\ y + 197994419338092137205138445880446701\,y + 3859258707817950205138445880446701,\ y - y - \,\cdot\,,\ y),\ x_1\big),
\]
\[
R_{2,2} = \big((1076005625\,y^3 + 2749690925\,y^2 + 2278375403\,y + 797867887,\ -y - \,\cdot\,,\ y + 2011619680\,y + 17194319360,\ y),\ x_1\big),
\]
\[
R_{2,3} = \big((410682625\,y^3 + 773879025\,y^2 + 2045246267\,y - \,\cdot\,,\ -y - \,\cdot\,,\ y - y - \,\cdot\,,\ y),\ x_1\big).
\]
Taking the union of $(R_{2,i})_{1\le i\le 3}$ gives a parametrization $R_0$ of $V_2(M, r_1)$ with
\[
R_0 = ((q, v_1, v_2, v_3), \Lambda) = \big((4715888798904593238258009062500\,y^9 + \cdots,\ y + \cdots,\ y + \cdots,\ y + \cdots),\ x_1\big).
\]

Degree bounds.
The mixed volumes associated to our square sub-systems are
\[
\mathrm{MV}_1 = \mathrm{MV}(\mathrm{conv}(A_1), \mathrm{conv}(B_1), \mathrm{conv}(B_2)) = 3, \quad \mathrm{MV}_2 = \mathrm{MV}(\mathrm{conv}(A_1), \mathrm{conv}(B_1), \mathrm{conv}(B_3)) = 3,
\]
and finally $\mathrm{MV}_3 = \mathrm{MV}(\mathrm{conv}(A_1), \mathrm{conv}(B_2), \mathrm{conv}(B_3)) = 3$. So $\chi = \mathrm{MV}_1 + \mathrm{MV}_2 + \mathrm{MV}_3 = 9$, which is a bound on the number of isolated solutions of $V_2(F, g_1)$. Note that this number coincides with the actual number of isolated solutions of $V_2(M, r_1)$, as the degree of $q$ equals $9$.

A parametrization $R_1$ of $V_2(F, g_1)$. We apply the
Homotopy algorithm to the system given by the $2$-minors of $(1-t)\,M + t\,F$ together with $(1-t)\,r_1 + t\,g_1$, and to $R_0$, to obtain $R_1$. As the coefficients of the result over $\mathbb{Q}$ are quite large, we illustrate this calculation over $\mathbb{F}_{65521}$, the finite field with $65521$ elements. In this case we obtain
\[
R_0 = \big((y^9 + 42377y^8 + 63439y^7 + 23268y^6 + 1541y^5 + 21916y^4 + 24479y^3 + 1064y^2 + 47617y + 765,
\]
\[
y^8 + 58286y^7 + 48619y^6 + 49312y^5 + 42721y^4 + 44021y^3 + 47621y^2 + 39038y + 13072,
\]
\[
y^8 + 30892y^7 + 29236y^6 + 63043y^5 + 623y^4 + 8249y^3 + 22956y^2 + 23577y + 41427,
\]
\[
y^8 + 19233y^7 + 56323y^6 + 58151y^5 + 8939y^4 + 30577y^3 + 13156y),\ x_1\big)
\]
and
\[
R_1 = \big((y^9 + 27502y^8 + 1022y^7 + 42474y^6 + 21370y^5 + 47501y^4 + 37694y^3 + 13474y^2 + 49870y + 26489,
\]
\[
y^8 + 28497y^7 + 23045y^6 + 29265y^5 + 32212y^4 + 8948y^3 + 16460y^2 + 19357y + 9600,
\]
\[
y^8 + 24119y^7 + 48429y^6 + 34031y^5 + 32994y^4 + 13559y^3 + 34993y^2 + 59636y + 64778,
\]
\[
y),\ x_1\big).
\]
We note that using the non-sparse homotopy algorithm from [17] produces a degree bound of $24$, a considerable overestimate of the number of isolated zeros.
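The mixed-volume bounds used in this example require dedicated software in three variables, but in two variables one can experiment directly, since $\mathrm{MV}(P, Q) = \operatorname{area}(P+Q) - \operatorname{area}(P) - \operatorname{area}(Q)$. The following self-contained Python sketch (our own helper, bivariate only) recovers the Bézout count $d\cdot e$ for a pair of dense supports:

```python
from itertools import product

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull(points):
    # Andrew's monotone-chain convex hull
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def chain(pl):
        h = []
        for p in pl:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return chain(pts)[:-1] + chain(pts[::-1])[:-1]

def area2(points):
    # twice the Euclidean area of the convex hull (shoelace formula)
    h = hull(points)
    return abs(sum(h[i][0]*h[(i+1) % len(h)][1] - h[(i+1) % len(h)][0]*h[i][1]
                   for i in range(len(h))))

def mixed_volume(A, B):
    # MV(conv A, conv B) = area(A+B) - area(A) - area(B); working with
    # doubled areas makes this an integer, normalized so that the MV of
    # two unit simplices is 1
    AB = [(a[0]+b[0], a[1]+b[1]) for a, b in product(A, B)]
    return (area2(AB) - area2(A) - area2(B)) // 2

simplex = lambda d: [(0, 0), (d, 0), (0, d)]
assert mixed_volume(simplex(2), simplex(3)) == 6   # Bezout: 2 * 3
```

For genuinely sparse supports the mixed volume drops below the Bézout number, which is exactly the gap between the bound $\chi = 9$ above and the dense bound $24$.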
We have presented a new homotopy algorithm for determining the isolated solutions of algebraic sets V_p(F, g), for F a p × q matrix and g a vector with entries in a multivariate polynomial ring. Our algorithm derives the degree bounds central to homotopy algorithms from the column support of the matrix F. This column-support homotopy algorithm also applies when the entries come from a weighted polynomial domain. Such weighted domains arise when computing the isolated critical points of a symmetric function φ restricted to a variety V(f) defined by symmetric polynomials f; the resulting complexity is then improved by a factor depending on the size of the symmetric group.

Still regarding critical point computations, but for non-symmetric input F, g, the natural bounds for a sparse homotopy would come from considering the row support rather than the column support of F. An interesting approach would be to follow the algorithm given in [17] for dense polynomials. However, proving that, in the sparse case, the corresponding start systems satisfy the genericity properties we need is not straightforward; this is the subject of future work. Acknowledgements.
G. Labahn is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), grant number RGPIN-2020-04276. É. Schost is supported by an NSERC Discovery Grant. T. X. Vu is supported by a labex CalsimLab fellowship/scholarship. The labex CalsimLab, reference ANR-11-LABX-0037-01, is funded by the program "Investissements d'avenir" of the Agence Nationale de la Recherche, reference ANR-11-IDEX-0004-02. M. Safey El Din and T. X. Vu are supported by the ANR grants ANR-18-CE33-0011 Sesame, ANR-19-CE40-0018 De Rerum Natura and ANR-19-CE48-0015 ECARP, the PGMO grant CAMiSAdo and the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 813211 (
POEMA).

References

[1] M.-E. Alonso, E. Becker, M.-F. Roy, and T. Wörmann. Zeros, multiplicities, and idempotents for zero-dimensional systems. In Algorithms in Algebraic Geometry and Applications, pages 1–15. Springer, 1996.
[2] B. Bank, M. Giusti, J. Heintz, G. Lecerf, G. Matera, and P. Solernó. Degeneracy loci and polynomial equation solving. Foundations of Computational Mathematics, 15(1):159–184, 2015.
[3] B. Bank, M. Giusti, J. Heintz, and L. M. Pardo. Generalized polar varieties and an efficient real elimination. Kybernetika, 40(5):519–550, 2004.
[4] B. Bank, M. Giusti, J. Heintz, and L. M. Pardo. Generalized polar varieties: geometry and algorithms. Journal of Complexity, 21(4):377–412, 2005.
[5] A. G. Beged-Dov. Lower and upper bounds for the number of lattice points in a simplex. SIAM Journal on Applied Mathematics, 22(1):106–108, 1972.
[6] D. N. Bernstein. The number of roots of a system of equations. Funkcional. Anal. i Priložen., 9(3):1–4, 1975.
[7] A. Bompadre, G. Matera, R. Wachenchauzer, and A. Waissbein. Polynomial equation solving by lifting procedures for ramified fibers. Theoretical Computer Science, 315(2-3):335–369, 2004.
[8] D. A. Cox, J. Little, and D. O'Shea. Using Algebraic Geometry, volume 185. Springer Science & Business Media, 2006.
[9] D. Eisenbud. Commutative Algebra: with a View Toward Algebraic Geometry. Graduate Texts in Mathematics. Springer, New York, 1995.
[10] J.-C. Faugère, G. Labahn, M. Safey El Din, É. Schost, and T. X. Vu. Computing critical points for invariant algebraic systems. 2020.
[11] J.-C. Faugère, M. Safey El Din, and P.-J. Spaenlehauer. Computing loci of rank defects of linear matrices using Gröbner bases and applications to cryptology. In Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation, pages 257–264, 2010.
[12] J.-C. Faugère, M. Safey El Din, and P.-J. Spaenlehauer. Critical points and Gröbner bases: the unmixed case. In Proceedings of the 37th International Symposium on Symbolic and Algebraic Computation, pages 162–169, 2012.
[13] P. Gianni and T. Mora. Algebraic solution of systems of polynomial equations using Groebner bases. In AAECC, volume 356 of LNCS, pages 247–257. Springer, 1989.
[14] M. Giusti, J. Heintz, J.-E. Morais, J. Morgenstern, and L.-M. Pardo. Straight-line programs in geometric elimination theory. Journal of Pure and Applied Algebra, 124:101–146, 1998.
[15] M. Giusti, J. Heintz, J.-E. Morais, and L.-M. Pardo. When polynomial equation systems can be solved fast? In AAECC-11, volume 948 of LNCS, pages 205–231. Springer, 1995.
[16] M. Giusti, G. Lecerf, and B. Salvy. A Gröbner-free alternative for polynomial system solving. Journal of Complexity, 17(1):154–211, 2001.
[17] J. D. Hauenstein, M. Safey El Din, É. Schost, and T. X. Vu. Solving determinantal systems using homotopy techniques. 2019.
[18] J. Heintz, G. Jeronimo, J. Sabia, and P. Solernó. Intersection theory and deformation algorithms: the multi-homogeneous case. 2002.
[19] J. Heintz, T. Krick, S. Puddu, J. Sabia, and A. Waissbein. Deformation techniques for efficient polynomial equation solving. Journal of Complexity, 16(1):70–109, 2000.
[20] M. I. Herrero, G. Jeronimo, and J. Sabia. Computing isolated roots of sparse polynomial systems in affine space. Theoretical Computer Science, 411(44):3894–3904, 2010.
[21] M. I. Herrero, G. Jeronimo, and J. Sabia. Affine solution sets of sparse polynomial systems. Journal of Symbolic Computation, 51:34–54, 2013.
[22] M. I. Herrero, G. Jeronimo, and J. Sabia. Elimination for generic sparse polynomial systems. Discrete and Computational Geometry, 51(3):578–599, 2014.
[23] B. Huber and B. Sturmfels. A polyhedral method for solving sparse polynomial systems. Mathematics of Computation, 64(212):1541–1555, 1995.
[24] D. James. A global weighted version of Bézout's theorem. The Arnoldfest (Toronto, ON, 1997), 24:115–129, 1999.
[25] G. Jeronimo, G. Matera, P. Solernó, and A. Waissbein. Deformation techniques for sparse systems. Foundations of Computational Mathematics, 9(1):1–50, 2009.
[26] L. Kronecker. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. Journal für die Reine und Angewandte Mathematik, 92:1–122, 1882.
[27] F. S. Macaulay. The Algebraic Theory of Modular Systems. Cambridge University Press, 1916.
[28] F. Rouillier. Solving zero-dimensional systems through the Rational Univariate Representation. Applicable Algebra in Engineering, Communication and Computing, 9(5):433–461, 1999.
[29] M. Safey El Din and É. Schost. A nearly optimal algorithm for deciding connectivity queries in smooth and bounded real algebraic sets. Journal of the ACM, 63(6):1–48, 2017.
[30] M. Safey El Din and É. Schost. Bit complexity for multi-homogeneous polynomial system solving: application to polynomial minimization. Journal of Symbolic Computation, 87:176–206, 2018.
[31] M. Safey El Din and P.-J. Spaenlehauer. Critical point computations on smooth varieties: degree and complexity bounds. In Proceedings of the ACM on International Symposium on Symbolic and Algebraic Computation, pages 183–190, 2016.
[32] P.-J. Spaenlehauer. On the complexity of computing critical points with Gröbner bases. SIAM Journal on Optimization, 24(3):1382–1401, 2014.
[33] S. S.-T. Yau and L. Zhang. An upper estimate on integral points in real simplices with an application in singularity theory.