Exact Moment Representation in Polynomial Optimization
arXiv [math.AC]
Lorenzo Baldi*, Bernard Mourrain*
Inria, Université Côte d’Azur, Sophia Antipolis, France
In memory of Carlo Casolo
Abstract
We investigate the problem of representation of moment sequences by measures in Polynomial Optimization Problems, consisting in finding the infimum f^* of a real polynomial f on a real semialgebraic set S defined by a quadratic module Q. We analyse the exactness of Moment Matrix (MoM) relaxations, dual to the Sum of Squares (SoS) relaxations, which are hierarchies of convex cones introduced by Lasserre to approximate measures and positive polynomials. We show that the MoM relaxation coincides with the dual of the SoS relaxation extended with the real radical of the support of the associated quadratic module Q. We prove that the vanishing ideal of the semialgebraic set S is generated by the kernel of the Hankel operator associated to a generic element of the truncated moment cone, for a sufficiently high order of the MoM relaxation. When the quadratic module Q is Archimedean, we show the convergence, in Hausdorff distance, of the convex sets of the MoM relaxations to the convex set of probability measures supported on S truncated in a given degree. We prove the exactness of the MoM relaxation when S is finite and when regularity conditions, known as Boundary Hessian Conditions, hold on the minimizers. This implies that MoM exactness holds generically. When the set of minimizers is finite, we describe a MoM relaxation which involves f^*, show its MoM exactness and propose a practical algorithm to achieve MoM exactness. We prove that if the real variety of polar points is finite then the MoM relaxation extended with the polar constraints is exact. Effective numerical computations illustrate this MoM exactness property.

* This work has been supported by European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions, grant agreement 813211 (POEMA).
Let f, g_1, ..., g_s ∈ R[X_1, ..., X_n] be polynomials in the indeterminates X_1, ..., X_n with real coefficients. The goal of Polynomial Optimization is to find:

f^* := inf { f(x) ∈ R | x ∈ R^n, g_i(x) ≥ 0, i = 1, ..., s },   (1)

that is, the infimum f^* of the objective function f on the basic semialgebraic set S := { x ∈ R^n | g_i(x) ≥ 0, i = 1, ..., s }. It is a general problem, which appears in many contexts (e.g. real solution of polynomial equations) and with many applications, to cite a few of them: in combinatorics, network optimization design, control. See for instance [Las10].

To solve this NP-hard problem, Lasserre [Las01] proposed to use two hierarchies of finite dimensional convex cones depending on an order d ∈ N, and he proved, for Archimedean quadratic modules, the convergence when d → ∞ of the optima associated to these hierarchies to the minimum f^* of f on S. The first hierarchy replaces non-negative polynomials by Sums of Squares (SoS) and non-negative polynomials on S by polynomials of degree ≤ d in the truncated quadratic module Q_d(g) generated by g = {g_1, ..., g_s}. The second and dual hierarchy replaces positive measures by linear functionals σ ∈ L_d(g) which are non-negative on the polynomials of the truncated quadratic module Q_d(g). We will describe these constructions more precisely in Section 2.1.

This approach has many interesting properties (see e.g. [Las15], [Lau09], [Mar08]). But it also raises a challenging question, of practical importance: can the solution of (1) be recovered at a finite order of these convex relaxations?
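As a minimal illustration of the dual hierarchy (our own univariate sketch, not an example from the paper), take S = [0, 1], e.g. defined by g = {X(1 − X)}, and f = X^2 − X, whose minimum on S is f^* = −1/4. The moment sequence of any probability measure supported on S is a feasible point of the moment relaxation, so its value at f can only overestimate f^*:

```python
from fractions import Fraction

# Moments of the uniform probability measure on S = [0, 1]:
# sigma_k = ∫_0^1 x^k dx = 1/(k+1).
moments = [Fraction(1, k + 1) for k in range(5)]

# f(x) = x^2 - x, as a coefficient list [f_0, f_1, f_2].
f = [Fraction(0), Fraction(-1), Fraction(1)]

# The pairing <sigma | f> = sum_k f_k sigma_k is the expectation of f.
value = sum(c * m for c, m in zip(f, moments))

print(value)  # 1/3 - 1/2 = -1/6
# Any probability measure on S yields a feasible moment sequence, hence
# an upper bound on the relaxation's optimal value, while the relaxation
# itself lower-bounds f* = min_[0,1] f = -1/4.
assert Fraction(-1, 4) <= value
```

Solving the moment relaxation itself requires a semidefinite programming solver; the point here is only that measures supply feasible moment sequences, which is why the relaxation's infimum lower-bounds f^* while staying finite.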
The aim is to recover the infimum f^* and, if this infimum is reached, the minimizer set { ξ ∈ S | f(ξ) = f^* }. To answer this question, one can first address the finite convergence problem, that is, when the value f^* can be obtained at a given order of the relaxation(s). The second problem is the exactness of the relaxations, which is the main topic of this paper. The Sum of Squares (SoS) exactness is when the non-negative polynomial f − f^* belongs to the truncated quadratic module Q_d(g) for some d ∈ N. The Moment Matrix (MoM) exactness is when an optimal linear functional σ^* ∈ L_d(g) for f is coming from a positive measure supported on S for some d ∈ N. We are going to investigate this MoM exactness property in detail.

Several works have been developed over the last decades to tackle these problems. [Par02] showed that if the complex variety V_C(I) defined by an ideal I generated by real polynomials is finite and I is radical, then f − f^* has a representation as a sum of squares modulo I and the SoS relaxation is exact. [Lau07] showed the finite convergence property if the complex variety V_C(I) is finite, and a moment sequence representation property if, moreover, the ideal I is radical. [Nie13b] showed that if the semialgebraic set S is finite, then the finite convergence property holds for a finitely generated preordering defining S. [LLR08] proved that if S is finite, the value f^* and the minimizers can be recovered from moment matrices associated to the truncated preordering defining S. In [Las+13], the kernel of moment matrices is used to compute a border basis of the real radical ideal R√I when S = V_R(I) is finite. [Sch05a] proved that f − f^* is in the quadratic module Q defining S modulo (f − f^*)^2 if and only if f − f^* ∈ Q, and then the SoS relaxation is exact.
[Mar06], [Mar09] proved that under some regularity conditions on the minimizers, known as Boundary Hessian Conditions (BHC), f − f^* is in the quadratic module and the SoS exactness property holds. [NDS06], [DNP07] showed that, adding gradient constraints when S = R^n or KKT constraints when S is a general basic semialgebraic set, the SoS exactness property holds when the corresponding Jacobian ideal is radical. [Nie13a] showed that, adding the Jacobian constraints, the finite convergence property holds under some regularity assumption on the complex variety associated to these constraints and on the compactness of S. In [Nie14], it is shown that BHC imply finite convergence and that BHC are generic. [KS19] showed the SoS exactness property if the quadratic module defining S is Archimedean and some strict concavity properties of f at the finite minimizers are satisfied.

Though many works focused on the SoS relaxation and on the representation of positive polynomials with sums of squares, the MoM relaxation has been much less studied. It has interesting features, which deserve a deeper exploration: the convex cones L_d(g) of truncated non-negative linear functionals are closed; finite convergence can be decided by flat extension tests on moment matrices [CF98], [LM09]; finite minimizers can be extracted from moment matrices [HL05], [Mou18]. On the other hand, exact SoS relaxations can provide certificates of positivity, which is also interesting from a theoretical and practical point of view.

In this paper, we investigate the truncated moment relaxation from a new perspective, developing a theoretical and computational study of truncated positive linear functionals.
We analyse in detail the properties of moment relaxations and present new results on the representation of moments of positive linear functionals as moments of measures.

We first show in Theorem 3.9 that the MoM relaxation (L_d(g))_{d ∈ N}, dual to the SoS relaxation (Q_d(g))_{d ∈ N}, is the same as the one associated to the quadratic module Q extended with the real radical of the support of Q. This yields the vanishing ideal of S as the ideal generated by the kernel of the Hankel operator H_σ associated to a generic element σ ∈ L_d(g), for d sufficiently large (see Theorem 3.16).

The second result concerns the convergence of truncated moment sequences. When the quadratic module Q defining S is Archimedean, the optima of the SoS and MoM relaxations converge to the minimum f^* of f on S [Las01]. We generalize this result in Theorem 3.19, showing that the convex sets L^(1)_d(g) of linear functionals σ non-negative on Q_d(g) such that σ(1) = 1, truncated in degree t, converge in Hausdorff distance to the probability measures supported on S truncated in degree t, when d → ∞.

Our main result on exact moment representations is given in Theorem 3.23. When S is finite and the quotient by the support of Q is of dimension zero, we prove that the linear functionals in L_d(g), truncated in a degree greater than twice the regularity of the points in S, coincide with the measures supported on S, that is, the convex hull of the evaluations at the points of S. Moreover, the ideal generated by the kernel of the Hankel operator of a generic element in L_d(g) is the vanishing ideal of S.

We apply these results to Polynomial Optimization Problems, showing in Theorem 4.1 that when the set S is finite, the MoM relaxation is exact if the quotient by the support of the quadratic module Q is of dimension zero. This generalizes the results of [LLR08] on semi-definite moment representations.

The main result on exactness is Theorem 4.8.
We prove that when the Boundary Hessian Conditions are satisfied, the MoM relaxation is exact. This generalizes the results on SoS exactness proved in [Mar06], [Mar09]. It also shows that MoM exactness holds generically (Corollary 4.9). When the set of minimizers is finite, we describe a MoM relaxation which involves f^*, show its MoM exactness (Corollary 4.10) and propose a practical algorithm to achieve MoM exactness using approximate numerical computation. In Theorem 4.13, we prove that if the real variety of polar points is finite then the relaxation extended with Jacobian constraints is MoM exact. This generalizes the results of finite convergence and SoS exactness of the KKT and Jacobian relaxations under regularity conditions, proved in [NDS06], [DNP07], [Nie13a].

The paper is structured as follows. In the next sections of the introduction, we define the algebraic objects that we will use and recall their main properties. In Section 2, we describe in detail the notions of finite convergence and exactness for the Sum of Squares (SoS) and Moment Matrix (MoM) relaxations. We give several examples showing how these notions are related. In Section 3, we recall the properties of full moment sequences (Section 3.1), investigate the truncated moment sequence properties (Section 3.2), analyse the convergence of truncated moment sequences (Section 3.3) and prove the moment representation property for a finite semialgebraic set S (Section 3.4). Finally, in Section 3.5 we describe the hierarchies of truncated moments and kernels of generic elements. In Section 4, we apply these results to polynomial optimization problems on finite semialgebraic sets (Section 4.1), show that the MoM exactness property holds if the Boundary Hessian Conditions are satisfied (Section 4.2) and investigate Polynomial Optimization Problems with a finite number of minimizers from a moment representation point of view (Section 4.3).
In Section 4.4, we define polar ideals and analyse the MoM exactness property of the relaxation extended with these polar constraints. Examples of Polynomial Optimization Problems and numerical experiments with the Julia package MomentTools.jl are presented in Section 4.5.
We provide the basic definitions on real polynomials and refer to [Mar08] for more details. Let R[X] := R[X_1, ..., X_n] be the R-algebra of polynomials in n indeterminates X_1, ..., X_n. Let Σ = Σ[X] := { f ∈ R[X] | ∃ r ∈ N, g_i ∈ R[X] : f = g_1^2 + ··· + g_r^2 } be the convex cone of Sum of Squares polynomials (SoS). If A ⊂ R[X], A_d := { f ∈ A | deg f ≤ d }. In particular, R[X]_d is the vector space of polynomials of degree ≤ d.

We denote by (h_1, ..., h_r) ⊂ R[X] the ideal generated by h_1, ..., h_r ∈ R[X]. A set Q ⊂ R[X] is called a quadratic module if 1 ∈ Q, Σ · Q ⊂ Q and Q + Q ⊂ Q. If in addition Q · Q ⊂ Q, Q is a preordering. For Q ⊂ R[X], we define supp Q := Q ∩ −Q. If Q is a quadratic module then supp Q is an ideal. We say that a quadratic module Q is finitely generated (f.g.) if ∃ g_1, ..., g_l ∈ R[X] : Q = Q(g_1, ..., g_l) := Σ + Σ · g_1 + ··· + Σ · g_l (it is the smallest quadratic module containing g_1, ..., g_l). We say that a preordering O is finitely generated if ∃ g_1, ..., g_l ∈ R[X] : O = O(g_1, ..., g_l) := Q(∏_{j ∈ J} g_j | J ⊂ {1, ..., l}) (it is the smallest preordering containing g_1, ..., g_l).

For G ⊂ R[X], let Q_t(G) := { s_0 + Σ_{j=1}^r s_j g_j ∈ R[X]_t | r ∈ N, g_j ∈ G, s_0 ∈ Σ_t, s_j ∈ Σ_{t − deg g_j} } and ⟨G⟩_t := { Σ_{i=1}^r f_i h_i ∈ R[X]_t | r ∈ N, h_i ∈ G, f_i ∈ R[X]_{t − deg h_i} }.

For a sequence of polynomials g := g_1, ..., g_s we define Πg := (∏_{j ∈ J} g_j : J ⊂ {1, ..., s}) and ±g := g_1, −g_1, ..., g_s, −g_s. Observe that Q_t(g, ±h) = Q_t(g) + ⟨h⟩_t and Q_t(Π(g, ±h)) = Q_t(Πg, ±h). Notice that ⟨h⟩_t ⊂ (h)_t and Q_t(g) ⊂ Q(g)_t, but (unluckily) these inclusions are strict in general. Finally, if A ⊂ R[X] we define S(A) := { x ∈ R^n | f(x) ≥ 0 ∀ f ∈ A }. In particular, we denote S(g) = { x ∈ R^n | g(x) ≥ 0 ∀ g ∈ g } (the basic semialgebraic set defined by g). If Q = Q(g), notice that S(g) = S(Πg) = S(Q).
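As a toy illustration of the truncated quadratic module Q_t(G) (our own example, not from the paper): for g = 1 − X^2, the polynomial 3 − 2X − X^2 belongs to Q_2(g), with certificate s_0 = (1 − X)^2 ∈ Σ_2 and s_1 = 2 ∈ Σ_{2 − deg g} = Σ_0. The identity s_0 + s_1 g = 3 − 2X − X^2 can be checked by expanding coefficient lists:

```python
def poly_mul(p, q):
    """Multiply coefficient lists p, q (index = degree)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

g = [1, 0, -1]                      # g = 1 - X^2
s0 = poly_mul([1, -1], [1, -1])     # s0 = (1 - X)^2, a square
s1 = [2]                            # s1 = 2 = (sqrt(2))^2, in Sigma_0

f = poly_add(s0, poly_mul(s1, g))
print(f)  # [3, -2, -1], i.e. 3 - 2X - X^2
assert f == [3, -2, -1]
```

The degree bounds deg s_0 ≤ t and deg(s_1 g) ≤ t are exactly what the definition of Q_t(G) imposes on such certificates.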
We denote by Pos(S) = { f ∈ R[X] : ∀ x ∈ S, f(x) ≥ 0 } the cone of positive polynomials on S.

We describe the dual algebraic objects and refer to [Mou18] for more details. For σ ∈ (R[X])^* = { σ : R[X] → R | σ is linear }, we denote by ⟨σ | f⟩ = σ(f) the application of σ to f ∈ R[X]. Recall that (R[X])^* ≅ R[[Y]] := R[[Y_1, ..., Y_n]], with the isomorphism given by:

(R[X])^* ∋ σ ↦ Σ_{α ∈ N^n} ⟨σ | X^α⟩ Y^α/α! ∈ R[[Y]],

where { Y^α/α! } is the dual basis to { X^α }, i.e. ⟨Y^α | X^β⟩ = α! δ_{α,β}. With this basis we can also identify σ ∈ (R[X])^* with its sequence of coefficients (σ_α)_α, where σ_α := ⟨σ | X^α⟩. We will consider Borel measures with support included in S ⊂ R^n, denoted M(S), as linear functionals, i.e. M(S) ⊂ (R[X])^*. In this case the sequence (μ_α)_α associated with a measure μ is the sequence of moments: μ_α = ∫ X^α dμ. Moreover, M^(1)(S) will denote the Borel probability measures supported on S. We recall a version of Haviland’s theorem [Mar08, th. 3.1.2]: if σ ∈ (R[X])^*, then σ ∈ M(S) if and only if ∀ f ∈ Pos(S), ⟨σ | f⟩ ≥
0. In particular, we are interested in evaluations: if ξ ∈ R^n then e_ξ(f) = ⟨e_ξ | f⟩ = ∫ f de_ξ = f(ξ) for all f ∈ R[X].

If σ ∈ (R[X])^* and g ∈ R[X], we define the convolution of g and σ as g ⋆ σ := σ ∘ m_g ∈ (R[X])^* (i.e. ⟨g ⋆ σ | f⟩ = ⟨σ | g f⟩ ∀ f) and the Hankel operator H_σ : R[X] → (R[X])^*, g ↦ g ⋆ σ. If σ = (σ_α)_α and g = Σ_α g_α X^α, then g ⋆ σ = (Σ_β g_β σ_{α+β})_α; the matrix of H_σ in the bases { X^α } and { Y^α/α! } is H_σ = (σ_{α+β})_{α,β}. Notice that g ⋆ σ = 0 ⟺ H_{g⋆σ} = 0. We say that σ is positive semidefinite (psd) ⟺ H_σ is psd, i.e. ⟨H_σ(f) | f⟩ = ⟨σ | f^2⟩ ≥ 0 ∀ f ∈ R[X] (see [Sch17] or [Mar08] for basic properties of psd matrices).

If σ ∈ (R[X])^* then σ^[t] ∈ (R[X]_t)^* denotes its restriction to R[X]_t (and the same for σ ∈ (R[X]_r)^*, r ≥ t); moreover, if B ⊂ (R[X])^* then B^[t] := { σ^[t] ∈ (R[X]_t)^* | σ ∈ B } (and the same for B ⊂ (R[X]_r)^*, r ≥ t). If σ ∈ (R[X]_t)^* and g ∈ R[X]_t, then g ⋆ σ := σ ∘ m_g ∈ (R[X]_{t − deg g})^*. If σ ∈ (R[X])^* (or σ ∈ (R[X]_r)^*, r ≥ 2t), then we define H^t_σ : R[X]_t → (R[X]_t)^*, g ↦ (g ⋆ σ)^[t]. We have (g ⋆ σ)^[2t] = 0 ⟺ H^t_{g⋆σ} = 0. Notice that, if s ≤ t, we can identify the matrix of H^s_σ with the submatrix of H^t_σ indexed by monomials of degree ≤ s.

Let A ⊂ R[X] (resp. A ⊂ R[X]_t). We define A^⊥ := { σ ∈ (R[X])^* | ⟨σ | f⟩ = 0 ∀ f ∈ A } (resp. A^⊥ := { σ ∈ (R[X]_t)^* | ⟨σ | f⟩ = 0 ∀ f ∈ A }). Notice that σ ∈ ⟨h⟩_t^⊥ (resp. (h)^⊥) if and only if (h ⋆ σ)^[t − deg h] = 0 ∀ h ∈ h (resp. h ⋆ σ = 0 ∀ h ∈ h).

For G ⊂ R[X]_t we define: L_t(G) = { σ ∈ (R[X]_t)^* | ∀ q ∈ Q_t(G), ⟨σ | q⟩ ≥ 0 }. Equivalently, σ ∈ L_t(G) if and only if ⟨σ | s⟩ ≥ 0 ∀ s ∈ Σ_t and ⟨σ | s f⟩ ≥ 0 ∀ f ∈ G, ∀ s ∈ Σ : deg(f s) ≤ t. For the non-truncated version we write L(A). Notice that if Q = Q(g) then L(g) = L(Q) (resp.
L_t(g) = L_t(Q_t(g))) is the dual convex cone to Q (resp. to Q_t(g)), see [Mar08, sec. 3.6]: L(g) = Q^∨ and L_t(g) = Q_t(g)^∨. We equip R[X] and (R[X])^* with the locally convex topology defined as follows. If V = R[X] or V = (R[X])^* and W ⊂ V is a finite dimensional vector subspace, W is equipped with the Euclidean topology; we define U ⊂ V to be open if and only if U ∩ W is open in W for every finite dimensional vector subspace W. By conic duality, cl Q = L(g)^∨ and cl Q_t(g) = L_t(g)^∨. If A ⊂ V, we denote by cone(A) the convex cone generated by A, by conv(A) its convex hull and by ⟨A⟩ its linear span.

We refer to [BCR98] and [Mar08] for real algebra and geometry. An ideal I is called real (or real radical) if a_1^2 + ··· + a_s^2 ∈ I ⇒ a_i ∈ I ∀ i. We define the real radical of an ideal I as:

R√I := { f ∈ R[X] | ∃ h ∈ N, s ∈ Σ : f^{2h} + s ∈ I }.   (2)

This is the classical definition; the real radical of I is the smallest real ideal containing I. If Q is a quadratic module and I is an ideal, we say that I is Q-convex if ∀ g_1, g_2 ∈ Q, g_1 + g_2 ∈ I ⇒ g_1, g_2 ∈ I. Then I is a real ideal if and only if I is radical and Σ-convex. Minimal primes lying over supp Q are Q-convex (see [Mar08, prop. 2.1.7]), and thus Σ-convex. Prime ideals are radical, so minimal primes lying over supp Q are real; therefore √(supp Q) = R√(supp Q). If I ⊂ R[X] is an ideal, we denote by V(I) its (complex) variety, and we define V_R(I) := V(I) ∩ R^n. Moreover, if S ⊂ R^n, we denote by I(S) its (real) vanishing ideal. We recall the description of the Zariski closure of basic semialgebraic sets.

Theorem 1.1 (Real Nullstellensatz, [Mar08, th. 2.2.1], [BCR98, cor. 4.4.3]). Let S = S(g) be a basic semialgebraic set. Then I(S) = R√(supp O(g)). In other words, f = 0 on S ⟺ ∃ h ∈ N : −f^{2h} ∈ O(g).
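A toy illustration of the real radical (our own example, not from the paper): for I = (X^2 + Y^2) ⊂ R[X, Y] we have V_R(I) = {0}, and X ∈ R√I by taking h = 1 and the square s = Y^2 in the definition, since X^2 + Y^2 is the generator itself. A quick symbolic check, with polynomials encoded as monomial-to-coefficient dictionaries:

```python
# Polynomials as {(deg_X, deg_Y): coeff} dictionaries; just enough
# machinery to check the membership f^(2h) + s ∈ I for this example.
def poly_mul(p, q):
    r = {}
    for (a, b), c in p.items():
        for (d, e), k in q.items():
            key = (a + d, b + e)
            r[key] = r.get(key, 0) + c * k
    return {m: c for m, c in r.items() if c != 0}

def poly_add(p, q):
    r = dict(p)
    for m, c in q.items():
        r[m] = r.get(m, 0) + c
    return {m: c for m, c in r.items() if c != 0}

f = {(1, 0): 1}                  # f = X
s = {(0, 2): 1}                  # s = Y^2, a square
gen = {(2, 0): 1, (0, 2): 1}     # generator X^2 + Y^2 of I

# f^2 + s equals 1 * (X^2 + Y^2), exhibiting X ∈ R√I with h = 1.
lhs = poly_add(poly_mul(f, f), s)
assert lhs == gen
print("X belongs to the real radical of (X^2 + Y^2)")
```

By symmetry Y ∈ R√I as well, consistent with I(V_R(I)) = I({0}) = (X, Y) = R√I.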
In particular, S(g) is empty if and only if −1 ∈ O(g) (and thus R[X] = O(g)), and if I is an ideal then I(V_R(I)) = √(supp(Σ + I)) = R√I.

If S ⊂ R^n, we denote by Pos(S) the convex cone of non-negative polynomials on S: Pos(S) := { f ∈ R[X] | f(x) ≥ 0 ∀ x ∈ S }. We say that a quadratic module Q is Archimedean if ∃ 0 ≤ r ∈ R : r − ‖X‖^2 ∈ Q. Notice that if Q is Archimedean then S(Q) is compact. By [Wö98] (see also [Mar08, th. 6.1.1]), for a finitely generated preordering O = O(g), S(g) is compact if and only if O is Archimedean. When S is compact, one can obtain an Archimedean quadratic module from Q(g) by adding a generator g_M = M − ‖X‖^2 ≥
0, for M big enough, or by adding all the products of the g_i's, replacing the generators g by Πg. The importance of Archimedean quadratic modules is illustrated by Schmüdgen's and Putinar's characterization of strictly positive polynomials, and their solution of the moment problem (see Theorem 3.1).

Theorem 1.2 (Schmüdgen/Putinar Positivstellensatz, [Sch91], [Put93]). Let S(g) be a basic semialgebraic set and suppose that Q(g) is Archimedean. If f > 0 on S(g) then f ∈ Q(g).

As a corollary, one can prove that, if Q is Archimedean, then cl Q = Pos(S).

We now move to interpolator polynomials, a tool which will often be used in the proofs. Consider a finite set of points Ξ = { ξ_1, ..., ξ_r } ⊂ C^n. It is well known that it admits a family of interpolator polynomials: such a family (u_i) ⊂ C[X] is by definition such that u_i(ξ_j) = δ_{i,j}. The minimal degree ι(Ξ) of a family of interpolator polynomials is called the interpolation degree of Ξ. Let I(Ξ) = { p ∈ C[X] | p(ξ_i) = 0 ∀ i = 1, ..., r } be the complex vanishing ideal of the points Ξ. The Castelnuovo–Mumford regularity of an ideal I (resp. Ξ) is max_i (deg S_i − i), where S_i is the i-th module of syzygies in a minimal resolution of I (resp. I(Ξ)); we denote it by ρ(I) (resp. ρ(Ξ)). Since a family of interpolator polynomials (u_i) is a basis of C[X]/I(Ξ), the ideal I(Ξ) is generated in degree ≤ ι(Ξ) + 1 and ρ(Ξ) ≤ ι(Ξ) + 1. A classical result [Eis05, th. 4.1] relates the interpolation degree of Ξ with its regularity, and the minimal degree of a basis of C[X]/I(Ξ). This result can be stated as follows, for real points Ξ ⊂ R^n:

Proposition 1.3.
Let Ξ = { ξ_1, ..., ξ_r } ⊂ R^n with regularity ρ(Ξ). Then ι(Ξ) = ρ(Ξ) − 1, the minimal degree of a basis of R[X]/I(Ξ) is ρ(Ξ) − 1, and there exist interpolator polynomials u_1, ..., u_r ∈ R[X]_{ρ(Ξ)−1}.

We say that h = { h_1, ..., h_s } is a graded basis of an ideal I if for all p ∈ I, there exist q_i ∈ R[X] with deg(q_i) ≤ deg(p) − deg(h_i) such that p = Σ_{i=1}^s h_i q_i. Equivalently, we have ⟨h⟩_t = I_t for all t ∈ N. For p ∈ R[X] and I an ideal of R[X], let λ_deg(p) be its homogeneous component of highest degree, which we call the initial of p, and let λ_deg(I) = ({ λ_deg(p) | p ∈ I }) be the initial of I. A family h = (h_1, ..., h_s) is a graded basis of the ideal I = (h_1, ..., h_s) iff λ_deg(I) = (λ_deg(h_1), ..., λ_deg(h_s)). For more properties of graded bases, also known as H-bases, see e.g. [Mac16]. A graded basis of an ideal I = (h) can be computed as a Gröbner basis using a monomial ordering ≺ which refines the degree ordering (see e.g. [CLO15]). It can also be computed as a border basis for a monomial basis of least degree of R[X]/I (see e.g. [MT05]). The degree of a graded basis of an ideal I is bounded by its regularity ρ(I) (see e.g. [BS87]). For a set of points Ξ = { ξ_1, ..., ξ_r }, the ideal I(Ξ) has a graded (resp. Gröbner, resp. border) basis of degree equal to the regularity ρ(Ξ). The minimal degree of a monomial basis B of R[X]/I(Ξ) is ι(Ξ) = ρ(Ξ) − 1. Such a basis B can be chosen so that it is stable under monomial division.

Proposition 1.4.
Let Ξ = { ξ_1, ..., ξ_r } ⊂ R^n, let I = I(Ξ) be its real vanishing ideal and let ρ = ρ(Ξ) be the regularity of Ξ. For t ≥ ρ − 1, σ ∈ I_t^⊥ if and only if σ ∈ ⟨e^[t]_{ξ_1}, ..., e^[t]_{ξ_r}⟩. Moreover, if t ≥ 2(ρ − 1) and σ ∈ L_t(I_t), then σ ∈ cone(e^[t]_{ξ_1}, ..., e^[t]_{ξ_r}).

Proof. Let u_1, ..., u_r ∈ R[X]_t be interpolator polynomials of degree ≤ ρ − 1 ≤ t (Proposition 1.3). Consider the sequence of vector space maps:

0 → I_t → R[X]_t --ψ--> ⟨u_1, ..., u_r⟩ → 0,   ψ : p ↦ Σ_{i=1}^r p(ξ_i) u_i,

which is exact since ker ψ = { p ∈ R[X]_t | p(ξ_i) = 0 } = I_t. Therefore we have R[X]_t = ⟨u_1, ..., u_r⟩ ⊕ I_t. Let σ ∈ I_t^⊥. Then σ̃ = σ − Σ_{i=1}^r ⟨σ | u_i⟩ e^[t]_{ξ_i} ∈ I_t^⊥ is such that ⟨σ̃ | u_i⟩ = 0 for i = 1, ..., r. Thus σ̃ ∈ ⟨u_1, ..., u_r⟩^⊥ ∩ I_t^⊥ = (⟨u_1, ..., u_r⟩ ⊕ I_t)^⊥ = R[X]_t^⊥, i.e. σ̃ = 0, showing that I_t^⊥ ⊂ ⟨e^[t]_{ξ_1}, ..., e^[t]_{ξ_r}⟩. The reverse inclusion is direct since I_t is the space of polynomials of degree ≤ t vanishing at ξ_i for i = 1, ..., r.

Assume that t ≥ 2(ρ − 1) and that σ ∈ L_t(I_t). Then σ ∈ I_t^⊥ and ⟨σ | p^2⟩ ≥ 0 ∀ p ∈ R[X]_{⌊t/2⌋}. By the previous analysis, σ = Σ_{i=1}^r ω_i e^[t]_{ξ_i}. As ⟨σ | u_i^2⟩ = ω_i ≥ 0 for i = 1, ..., r, we deduce that σ ∈ cone(e^[t]_{ξ_1}, ..., e^[t]_{ξ_r}). □

We now describe the Lasserre SoS and MoM relaxations [Las01], and we define the exactness property. Hereafter we assume that the minimum f^* of the objective function f is always attained on S, that is: S_min := { x ∈ S | f(x) = f^* } ≠ ∅. We define the
SoS relaxation of order d of problem (1) as Q_{2d}(g) and the supremum:

f^*_{SoS,d} := sup { λ ∈ R | f − λ ∈ Q_{2d}(g) }.   (4)

When necessary, we will replace g by Πg (that is, Q(g) by O(g)). We want to define the dual approximation of the polynomial optimization problem. We are interested in an affine hyperplane section of the cone L_d(g):

L^(1)_d(g) := { σ ∈ L_d(g) | ⟨σ | 1⟩ = 1 }.

We will use the notation L^(1)(g) in the infinite dimensional case. The convex set L^(1)_d(g) is also called the state space of (R[X]_d, Q_d(g), 1) (see [KS19]). The pure states are the extreme points of this convex set. With this notation, we define the MoM relaxation of order d of problem (1) as L_{2d}(g) and the infimum:

f^*_{MoM,d} := inf { ⟨σ | f⟩ ∈ R | σ ∈ L^(1)_{2d}(g) }.   (5)

When necessary, we will replace g by Πg (that is, Q(g) by O(g)). We are interested, in particular, in the linear functionals that realize the minimum. We easily verify that f^*_{SoS,d} ≤ f^*_{MoM,d} ≤ f^*. When S_min ≠ ∅, the infimum f^*_{MoM,d} is reached since L^(1)_{2d}(g) is closed.

Definition 2.1.
Let f ∈ R[X] and let f^* denote its minimum on S(g). We define the set of functional minimizers as: L^min_{2d}(g) := { σ ∈ L^(1)_{2d}(g) | ⟨σ | f⟩ = f^* }.

L_d(g) is the cone over L^(1)_d(g), since for σ ∈ L_d(g) we have ⟨σ | 1⟩ = 0 ⇒ σ = 0 (see [Las+13, lem. 3.12]), and if σ ∈ L_d(g) with ⟨σ | 1⟩ ≠ 0 then σ/⟨σ | 1⟩ ∈ L^(1)_d(g). We introduce two convergence properties that will be central in the article.

Definition 2.2 (Finite Convergence). We say that the SoS relaxation (Q_d(g))_{d ∈ N} (resp. the MoM relaxation (L_d(g))_{d ∈ N}) has the Finite Convergence property for f if ∃ k ∈ N such that for every d ≥ k, f^*_{SoS,d} = f^* (resp. f^*_{MoM,d} = f^*).

Notice that if the SoS relaxation has finite convergence then the MoM relaxation has finite convergence too, since f^*_{SoS,d} ≤ f^*_{MoM,d} ≤ f^*.

Definition 2.3 (SoS Exactness). We say that the SoS relaxation (Q_d(g))_{d ∈ N} is exact for f if it has the finite convergence property and, for all d big enough, we have f − f^* ∈ Q_d(g) (in other words, sup = max in the definition of f^*_{SoS,d}).

For the moment relaxation we can ask the (stronger) property that every truncated functional minimizer is coming from a measure:

Definition 2.4 (MoM Exactness). We say that the MoM relaxation (L_d(g))_{d ∈ N} is exact for f on the basic closed semialgebraic set S if:
• it has the finite convergence property;
• for every k ∈ N big enough and for d = d(k) ∈ N big enough, every truncated functional minimizer is coming from a probability measure supported on S, i.e. L^min_{2d}(g)^[k] ⊂ M^(1)(S)^[k].

If not specified, S will be the semialgebraic set S = S(g) defined by g. MoM exactness may be considered as a particular instance of the so-called Moment Problem (i.e. asking if σ ∈ R[X]^* is coming from a measure) or of the Strong Moment Problem (i.e. asking that the measure has a specified support). More precisely, MoM exactness can be considered as a Truncated Strong Moment Property (since we are considering functionals restricted to polynomials up to a certain degree).

We recall results of strong duality, i.e. cases when we know that f^*_{SoS,d} = f^*_{MoM,d}, which we will be using. See also Proposition 3.10.

Theorem 2.5 (Strong duality). Let Q = Q(g) be a quadratic module and f the objective function. Then:
• if supp Q = {0} then for all d, f^*_{SoS,d} is attained (i.e. f − f^*_{SoS,d} ∈ Q_{2d}(g)) and f^*_{SoS,d} = f^*_{MoM,d} [Mar08, prop. 10.5.1];
• if there exists 0 ≤ r ∈ R : r − ‖X‖^2 ∈ Q_{2d}(g), then f^*_{SoS,d} = f^*_{MoM,d} [JH16].

We recall that we are assuming S_min ≠ ∅ (in particular f^* is finite: otherwise it may happen that f^*_{SoS,d} = −∞). Notice that if strong duality holds, then SoS finite convergence is equivalent to MoM finite convergence. In this section, we give examples showing how these notions are (not) related.
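To make Definition 2.3 concrete, here is a minimal unconstrained sketch (our own example, not from the paper): for f = X^4 − 2X^3 + 2X^2 − 2X + 1, the minimum f^* = 0 is attained at X = 1, and f − f^* = (X^2 − X)^2 + (X − 1)^2 lies in Σ_4, the truncated quadratic module with no constraints, so the SoS relaxation is exact with an explicit certificate. The decomposition can be verified coefficient by coefficient:

```python
def poly_mul(p, q):
    """Multiply coefficient lists (index = degree)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

s1 = [0, -1, 1]   # X^2 - X
s2 = [-1, 1]      # X - 1
sos = poly_add(poly_mul(s1, s1), poly_mul(s2, s2))

f = [1, -2, 2, -2, 1]   # f = X^4 - 2X^3 + 2X^2 - 2X + 1, with f* = 0 at X = 1
assert sos == f
print("f - f* is an explicit sum of squares: the SoS relaxation is exact")
```

Here sup = max in (4): the optimal λ = f^* = 0 is attained together with a degree-4 SoS certificate, which is exactly the exactness property.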
No finite convergence.
The first example shows that SoS relaxations for polynomial optimization on algebraic curves do not necessarily have the finite convergence property.
Example 2.6 ([Sch00]). Let C ⊂ R^n be a smooth connected curve of genus ≥ 1, with only real points at infinity. Let h = { h_1, ..., h_s } ⊂ R[X] be such that I = I(C) = (h). Then there exists f ∈ R[X] such that the SoS relaxation Q_d(±h) and the MoM relaxation L_d(±h) have no finite convergence and are not exact.

Indeed, by [Sch00, Theorem 3.2], there exists f ∈ R[X] such that f ≥ 0 on C = S(±h), which is not a sum of squares in R[C] = R[X]/I. Consequently, f ∉ Σ[X] + I = Q(±h). As f ≥ 0 on C, its infimum f^* is non-negative and we also have f − f^* ∉ Q(±h). As I = supp Q(±h) is real radical, using Proposition 3.10 we deduce that Q_d(±h) is closed, that there is no duality gap and that the supremum f^*_{SoS,d} is reached. Thus, if the SoS relaxation had finite convergence then f − f^* ∈ Q_d(±h) for some d ∈ N. This is a contradiction, showing that the SoS and the MoM relaxations have no finite convergence and cannot be SoS exact for f.

In dimension 2, there are also cases where the SoS and MoM relaxations cannot have finite convergence or be exact.

Example 2.7 ([Mar08]). Let g_1 = X_1^3 − X_2^2, g_2 = 1 − X_1. Then S = S(g) is a compact semialgebraic set of dimension 2 and O(g) is Archimedean. We have f = X_1 ≥ 0 on S but X_1 ∉ O(g) (see [Mar08, Example 9.4.6(3)]). The infimum of f on S is f^* = 0. By Theorem 2.5, Q_d(Πg) is closed, the supremum f^*_{SoS,d} is reached and strong duality holds. Assume that f^*_{SoS,d} = f^*_{MoM,d} = f^* = 0 for d ∈ N big enough; then f − f^* = f ∈ O(g): but this is a contradiction. Therefore, the relaxations Q_d(Πg) and L_d(Πg) cannot have finite convergence and thus cannot be exact for f = X_1.

The next example shows that non-finite convergence and non-exactness always happen in dimension ≥ 3.

Example 2.8.
Let n ≥ 3. Let Q be an Archimedean quadratic module generated by g_1, ..., g_s ∈ R[X] such that S(Q) ⊂ R^n is of dimension m ≥ 3. If Q is reduced, i.e. if supp Q = R√(supp Q) (in particular this happens if supp Q = {0} or if m = n, i.e. S(Q) is of maximal dimension), then there exists f ∈ R[X] such that the SoS relaxation (Q_d(g))_{d ∈ N} and the MoM relaxation (L_d(g))_{d ∈ N} do not have the finite convergence property (and thus are not exact).

Indeed, by Proposition 3.10, f^*_{SoS,d} = f^*_{MoM,d} for d big enough and the supremum f^*_{SoS,d} is reached. By [Sch00, Prop. 6.1], for m ≥ 3, Pos(S(Q)) ⊋ Q. So let f ∈ Pos(S(Q)) \ Q and let f^* be its minimum on S(Q). Suppose that f − f^* ∈ Q; then f ∈ Q + f^* = Q, a contradiction. Then the SoS and the MoM relaxations do not have the finite convergence property (and they are not exact).

Remark.
The reducedness condition in Example 2.8 is not restrictive: if Q is a quadratic module then Q + R√(supp Q) is reduced (see [Sch05b, lemma 3.16]) and S(Q) = S(Q + R√(supp Q)).

SoS exactness, no MoM exactness.

Example 2.9.
We want to find the global minimum of f = X_1^2 ∈ R[X_1, ..., X_n] = R[X] for n ≥ 3. Let d ≥ 6, let X′ = (X_2, ..., X_n) and let σ′ ∈ L_d(Σ[X′]) be such that σ′ ∉ M(R^{n−1})^[d]. Such a linear functional exists because when n > 2 there are positive polynomials in R[X′] which are not sums of squares, such as the Motzkin polynomial (see [Rez96]). As Q_d(Σ[X′]) is closed, such a polynomial can be separated from Q_d(Σ[X′]) by a linear functional σ′ ∈ L_d(Σ[X′]), which cannot be the truncation of a measure (i.e. Σ[X′] does not have the truncated moment property). Define σ : h ↦ ⟨σ | h⟩ = ⟨σ′ | h(0, X_2, ..., X_n)⟩. We have σ ∈ L_d(Σ[X]) since σ′ ∈ L_d(Σ[X′]). Obviously ⟨σ | f⟩ = 0 = f^* (the minimum of X_1^2), f − f^* = X_1^2 ∈ Σ and the SoS relaxation is exact. Since σ is coming from a measure if and only if σ′ is coming from a measure, the MoM relaxation cannot be exact.

The previous example generalizes easily to quadratic modules Q with supp(Q) ≠ {0} which do not have the (truncated) moment property, i.e. such that there exists σ ∈ L_d(Q) with σ ∉ M(S(Q))^[d]. Taking f = h^2 with h ∈ supp(Q), h ≠ 0, we have ⟨σ | f⟩ = 0 = f^* and the MoM relaxation cannot be exact since σ ∉ M(S(Q))^[d], while the SoS relaxation is exact (f − f^* = h^2 ∈ Q).

SoS finite convergence, MoM exactness.

Example 2.10.
Let f = (X⁴Y² + X²Y⁴ + Z⁶ − 3X²Y²Z²) + X⁸ + Y⁸ + Z⁸ ∈ R[X, Y, Z]. We want to optimize f over the gradient variety V_R(∂f/∂X, ∂f/∂Y, ∂f/∂Z), which is zero dimensional (see [NDS06]). By Theorem 4.1 the MoM relaxation is exact, and by Corollary 4.4 the SoS relaxation has the finite convergence property. But the SoS relaxation is not exact, as shown in [NDS06].

Table 1: Summary of convergence results.

Expl.   SoS f.c.   SoS ex.   MoM f.c.   MoM ex.   m
2.6     NO         NO        NO         NO        1
2.7     NO         NO        NO         NO        2
2.8     NO         NO        NO         NO        ≥ 3
2.9     YES        YES       YES        NO        ≥ 3
2.10    YES        NO        YES        YES       0
2.11    YES        NO        YES        YES       0

Example 2.11.
Let f = X₁. We want to find its value at the origin, the set defined by ‖X‖² = 0. As proved in [Nie13b] there is finite convergence but not exactness for the SoS relaxation. By Theorem 4.1 the MoM relaxation is exact.

We summarize the previous examples in Table 1 in terms of the properties of finite convergence (SoS f.c. and MoM f.c.), exactness (SoS ex. and MoM ex.) and the dimension m of the semialgebraic set S.

We give a description of the moment linear functionals in the full dimensional and truncated case.
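Before the formal treatment, the objects involved can be made concrete: a truncated linear functional is handled through its moments, and for an atomic functional σ = Σᵢ wᵢ e_{ξᵢ} the pairing ⟨σ | f⟩ is the weighted sum of evaluations f(ξᵢ). A minimal stdlib sketch with exact rational arithmetic (the points, weights and test polynomial are illustrative, not taken from the text):

```python
from fractions import Fraction
from itertools import product

def monomials(n, deg):
    # Exponent tuples of the monomials of degree <= deg in n variables.
    return [a for a in product(range(deg + 1), repeat=n) if sum(a) <= deg]

def moments(points, weights, n, deg):
    # Moments <sigma | x^a> of sigma = sum_i w_i e_{xi_i}.
    mom = {}
    for a in monomials(n, deg):
        s = Fraction(0)
        for w, xi in zip(weights, points):
            m = Fraction(1)
            for x, e in zip(xi, a):
                m *= Fraction(x) ** e
            s += w * m
        mom[a] = s
    return mom

def pair(mom, f):
    # Pairing <sigma | f> for f given as {exponent tuple: coefficient}.
    return sum(c * mom[a] for a, c in f.items())

# Two evaluation points in R^2 with weights 1/3 and 2/3 (a probability "measure").
pts, wts = [(1, 2), (-1, 0)], [Fraction(1, 3), Fraction(2, 3)]
mom = moments(pts, wts, n=2, deg=4)
# f = x^2*y + 3: the pairing must equal the weighted sum of evaluations.
f = {(2, 1): Fraction(1), (0, 0): Fraction(3)}
val = pair(mom, f)
```

The normalization ⟨σ | 1⟩ = 1 corresponds to the truncated cone of probability measures used below.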
With our setting, the classical moment problem can be stated as follows: given σ ∈ R[X]∗, when does there exist µ ∈ M(R^n) such that:

∀f ∈ R[X], ⟨σ | f⟩ = ∫ f dµ.

Haviland's theorem (see [Mar08, th. 3.1.2] and [Sch17, th. 1.12]) says that this happens if and only if σ is positive on positive polynomials. Since checking this is a computationally hard task, it is interesting to find (proper) subsets of positive polynomials that have the same property, chosen in such a way that checking this condition is easy. Important results in this direction are the theorems of Schmüdgen and Putinar.

Theorem 3.1 ([Sch91], [Put93]). Let Q be an Archimedean finitely generated quadratic module and S = S(Q). Then L(Q) = M(S) = cone(e_ξ : ξ ∈ S).

This theorem solves the moment problem in the Archimedean (compact) case. Notice that M(S(Q)) depends only on S = S(Q) and not on the generators of Q. In particular, if Q and Q' are Archimedean and S(Q) = S(Q') then L(Q) = L(Q').

If we have a generic measure µ ∈ M(S), i.e. one which is nonzero on any polynomial that is nonzero on S, obviously its support is equal to S: supp µ = S. We want to generalize this property to linear functionals which are not necessarily coming from measures. In particular we want to recover information about the semialgebraic set S = S(g) from linear functionals σ ∈ L(g). We are interested in generic elements σ∗ ∈ L(Q), which we characterize in terms of the kernel of the Hankel operator (see also Proposition 3.15). Definition 3.2.
We say that σ∗ ∈ L(Q) is generic if ker H_{σ∗} ⊂ ker H_σ for all σ ∈ L(Q). Proposition 3.3.
Let I be an ideal of R[X] and let σ∗ ∈ L(I) be generic. Then ker H_{σ∗} = ᴿ√I.

Proof. Notice that if x ∈ V_R(I) then e_x ∈ L(I). Moreover ker H_{e_x} = I(x). This implies:

ker H_{σ∗} ⊂ ∩_{x∈V_R(I)} ker H_{e_x} = ∩_{x∈V_R(I)} I(x) = I(V_R(I)) = ᴿ√I,

where the last equality is the Real Nullstellensatz, Theorem 1.1. By definition, I ⊂ ker H_{σ∗}. Since ker H_{σ∗} is a real radical ideal (see [Las+13, prop. 3.13]) we have ᴿ√I ⊂ ker H_{σ∗}, which proves that ker H_{σ∗} = ᴿ√I.

Proposition 3.3 generalizes to quadratic modules as follows. Proposition 3.4.
Let Q be a quadratic module, S = S(Q) and let σ∗ ∈ L(Q) be generic. Then ᴿ√(supp Q) ⊂ ker H_{σ∗} ⊂ I(S). Moreover, if Q is Archimedean then ker H_{σ∗} = I(S).

Proof. As in the proof of Proposition 3.3, we get:

ker H_{σ∗} ⊂ ∩_{x∈S} ker H_{e_x} = ∩_{x∈S} I(x) = I(S).

Now observe that supp Q ⊂ ker H_{σ∗} by definition. Since ker H_{σ∗} is a real radical ideal (see [Las+13, prop. 3.13]), then ᴿ√(supp Q) ⊂ ker H_{σ∗}.

For the second part, if Q is Archimedean, then by Theorem 3.1 L(Q) = M(S). In particular σ∗ is a measure µ ∈ M(S) supported on S: for all f ∈ R[X], ⟨σ∗ | f⟩ = ∫ f dµ. Let h ∈ I(S) and f ∈ R[X]. Then:

⟨σ∗ | f h⟩ = ∫ f h dµ = 0,

i.e. h ∈ ker H_{σ∗}, which proves the reverse inclusion.

Now we describe L(Q) without the Archimedean hypothesis (compare with Theorem 3.1). Lemma 3.5.
Let Q be a quadratic module. Then L(Q) = L(ᴿ√(supp Q) + Q). In particular, for any ideal I ⊂ ᴿ√(supp Q) we have L(Q) = L(ᴿ√I + Q).

Proof. Since √(supp Q) ⊂ Q̄ (see [Mar08, th. 4.1.2]), we have √(supp Q) + Q ⊂ Q̄ + Q̄ = Q̄. Then:

L(Q) = L(Q̄) ⊂ L(√(supp Q) + Q) ⊂ L(Q).

Since ᴿ√(supp Q) = √(supp Q) (see Section 1.3) we have L(ᴿ√(supp Q) + Q) = L(√(supp Q) + Q) = L(Q). Remark.
Lemma 3.5 shows that, even if the semialgebraic set is not compact, we can replace any ideal in the description of the semialgebraic set with its real radical. In particular, since I(S(g)) = ᴿ√(supp O(g)) (by Theorem 1.1), we have L(O(g)) = L(O(g) + I(S(g))).

The inclusion Q + ᴿ√(supp Q) ⊂ Q̄ can be strict, as shown by the following example.

Example 3.6 ([Sch05a, ex. 3.2], [Sch05b, rem. 3.15]). Let Q = Q(1 − X − Y, −XY, X − Y, Y − X) ⊂ R[X, Y]. Notice that S = S(Q) = {0} and that, since Q is Archimedean, Q̄ = Pos({0}). In this case supp Q = (0) and I(S) = supp Q̄ = (X, Y), and thus Q + ᴿ√(supp Q) ⊊ Q̄.

Now we prove the corresponding results in the truncated case. For a finitely generated quadratic module Q = Q(g) ⊂ R[X], we denote Q[k] = Q_k(g). Definition 3.7.
Let Q = Q(g) be a finitely generated quadratic module. We define Q̃ = ∪_d Q̄_d(g) = ∪_d Q̄[d].

Notice that Q̃ depends a priori on the generators g of Q: we will prove that Q̃ is a finitely generated quadratic module and that it does not depend on the particular choice of generators. Moreover notice that Q ⊂ Q̃ = ∪_d Q̄[d] ⊂ Q̄, but these inclusions may be strict, as we will see. Lemma 3.8.
Let Q = Q(g) and J = ᴿ√(supp Q). Then for every d ∈ N there exists k ≥ d such that J_d ⊂ Q̄[k].

Proof. Let m be big enough so that for all f ∈ J = ᴿ√(supp Q) = √(supp Q) we have f^{2^m} ∈ supp Q (if J = (h₁, ..., h_t) and h_i^{a_i} ∈ supp Q, we can take m such that 2^m ≥ a₁ + ··· + a_t). Let f ∈ J_d, with deg f ≤ d. Then f^{2^m} ∈ supp Q[k'] ⊂ Q[k'] for k' ∈ N big enough. Using the identity [Sch05b, remark 2.2]:

m − a = (1 − a/2)² + (1 − a²/8)² + (1 − a⁴/128)² + ··· + (1 − a^{2^{m−1}}/2^{2^m − 1})² − a^{2^m}/2^{2^{m+1} − 2},

substituting a by −2mf/ε and multiplying by ε/(2m), we obtain that for all ε > 0, f + ε ∈ Q[k] for k = max{k', 2^m d} (the degree of the representation of f + ε does not depend on ε). This implies that f ∈ Q̄[k].

We can now prove the main result of this section. Theorem 3.9.
Let Q = Q(g) be a finitely generated quadratic module and let J = ᴿ√(supp Q). Then Q̃ = ∪_{d∈N} Q̄[d] = Q + J and supp Q̃ = J. In particular, Q̃ is a finitely generated quadratic module and does not depend on the particular choice of generators of Q.

Proof. By [Mar08, lemma 4.1.4], Q[d] + J_d is closed in R[X]_d, thus Q̄[d] ⊂ Q[d] + J_d. Taking unions we prove that Q̃ ⊂ Q + J.

Conversely, by Lemma 3.8, for d ∈ N and k ≥ d big enough, J_d ⊂ Q̄[k]. Then we have Q[d] + J_d ⊂ Q̄[k] + Q̄[k] ⊂ Q̄[k]. Taking unions on both sides gives Q + J ⊂ Q̃.

Finally supp Q̃ = supp(Q + J) = J by [Sch05b, lemma 3.16]. Remark.
We proved that Q̃ = Q + ᴿ√(supp Q), and thus in Example 3.6 we have Q̃ ⊊ Q̄. We also have supp Q̃ = ᴿ√(supp Q), so that if supp Q is not real radical then Q ⊊ Q̃. Example 2.11 is such a case, where supp Q ≠ ᴿ√(supp Q). We notice that, by Theorem 3.9 and [Sch05b, th. 3.17], if Q is stable then Q̃ = Q̄.

As a consequence we have no duality gap when supp Q is real radical. This generalizes the condition supp Q = 0 in Theorem 2.5. Proposition 3.10.
Let Q = Q(g) be a finitely generated quadratic module. If ᴿ√(supp Q) = supp Q then for any d ∈ N, Q_d(g) = Q̄_d(g) is closed and Q̃ = Q. Moreover, for any f ∈ R[X] such that f∗ > −∞ we have that f∗_{SoS,d} is attained (i.e. f − f∗_{SoS,d} ∈ Q_d(g)) and there is no duality gap: f∗_{SoS,d} = f∗_{MoM,d}.

Proof. Let I = supp Q. By hypothesis, ᴿ√I = I, so that Q̃ = Q + I = Q. By [Mar08, lemma 4.1.4], Q[d] + I_d is closed. As supp Q[d] ⊂ I_d is a closed finite-dimensional subspace of I_d, we also have that Q[d] + supp Q[d] = Q[d] is closed. Therefore we have L_d(g)^∨ = Q[d]^{∨∨} = Q̄[d] = Q[d], from which we deduce that there is no duality gap, by classical convexity arguments, as follows.

If f ∈ R[X] is such that f∗ > −∞, then {λ ∈ R | f − λ ∈ Q_d(g)} is bounded from above. Since Q_d(g) is closed, f∗_{SoS,d} = sup{λ ∈ R | f − λ ∈ Q_d(g)} is attained. If f∗_{SoS,d} < f∗_{MoM,d}, then f − f∗_{MoM,d} ∉ Q[d]. Thus there exists a separating functional σ ∈ L^{(1)}_d(g) such that ⟨σ | f − f∗_{MoM,d}⟩ < 0, which implies that ⟨σ | f⟩ < f∗_{MoM,d}, in contradiction with the definition of f∗_{MoM,d}. Consequently, f∗_{SoS,d} = f∗_{MoM,d}.

(Recall that Q(g) is stable if for all d ∈ N there exists k ∈ N such that Q(g) ∩ R[X]_d = Q_k(g) ∩ R[X]_d.)
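The telescoping identity from [Sch05b, remark 2.2] invoked in the proof of Lemma 3.8 above, namely m − a = Σ_{k=0}^{m−1} (1 − a^{2^k}/2^{2^{k+1}−1})² − a^{2^m}/2^{2^{m+1}−2}, can be verified mechanically with exact rational arithmetic; a minimal stdlib sketch (the test values of a and m are arbitrary):

```python
from fractions import Fraction

def rhs(a, m):
    # Right-hand side of the identity
    #   m - a = sum_{k=0}^{m-1} (1 - a^{2^k} / 2^{2^{k+1}-1})^2 - a^{2^m} / 2^{2^{m+1}-2}.
    total = Fraction(0)
    for k in range(m):
        total += (1 - a ** (2 ** k) / Fraction(2 ** (2 ** (k + 1) - 1))) ** 2
    return total - a ** (2 ** m) / Fraction(2 ** (2 ** (m + 1) - 2))

# The squares telescope: each expanded square cancels the correction term
# produced by the previous one, leaving exactly m - a.
checks = all(rhs(a, m) == m - a
             for m in (1, 2, 3)
             for a in (Fraction(0), Fraction(1, 2), Fraction(-3), Fraction(7, 5)))
```

Since the right-hand side is a sum of scaled squares plus a multiple of a^{2^m}, the substitution a ↦ −2mf/ε in the proof yields a representation of f + ε/2 in the quadratic module.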
We describe now relations between the truncated parts of L_d(g). Lemma 3.11.
Let J = ᴿ√(supp Q(g)). If (h) ⊂ J, deg h ≤ d, then there exists k ≥ d such that L_k(g)[d] ⊂ L_d(g, ±h) ⊂ L_d(g). In particular L_k(g)[d] ⊂ L_d(±h).

Proof. By Lemma 3.8, ⟨h⟩_d ⊂ (h)_d ⊂ Q̄_k(g) for some k ≥ d. Let h_i ∈ h and f ∈ R[X]_{d − deg h_i}. Then ±f h_i ∈ Q̄_k(g), and for σ ∈ L_k(g) we have ⟨σ[d] | f h_i⟩ = ⟨σ | f h_i⟩ = 0, i.e. L_k(g)[d] ⊂ L_d(g, ±h). The other inclusion L_d(g, ±h) ⊂ L_d(g) follows by definition. Remark.
Lemma 3.11 says that the MoM relaxation (L_d(g))_{d∈N} is equivalent to the MoM relaxation (L_d(g, ±h))_{d∈N}, where (h) = ᴿ√(supp Q(g)). Since supp(Q(g) + (h)) = supp Q̃ = (h) is a real radical ideal, we can apply Proposition 3.10 to it: then the MoM relaxation (L_d(g))_{d∈N} is equivalent to the SoS relaxation (Q_d(g, ±h))_{d∈N}.

Lemma 3.11 is an algebraic result, in the sense that supp Q(g) may be unrelated to the geometry of the semialgebraic set S(g) that it defines. If some additional conditions hold (namely if we have only equalities, or a preordering, or small dimension), it can however provide geometric characterizations that will be useful in Section 4. Corollary 3.12.
Suppose that S(g) ⊂ V_R(h). Then for every t₁ ≥ deg h there exists t₂ ≥ t₁ such that: L_{t₂}(Πg)[t₁] ⊂ L_{t₁}(±h). In particular this holds when (h) = I(S(g)).

Proof. S(g) ⊂ V_R(h) if and only if ᴿ√((h)) ⊂ I(S(g)) = ᴿ√(supp Q(Πg)) by Theorem 1.1. Then we can apply Lemma 3.11. Corollary 3.13.
Let Q = Q(g). Suppose that S(g) ⊂ V_R(h) and dim R[X]/supp Q ≤ 1. Then for every t₁ ≥ deg h there exists t₂ ≥ t₁ such that: L_{t₂}(g)[t₁] ⊂ L_{t₁}(±h). In particular this holds when (h) = I(S(g)).

Proof. We prove it as Corollary 3.12, using [Mar08, cor. 7.4.2 (3)] instead of Theorem 1.1.

With the characterization of Q̃ we can now describe the kernel of Hankel operators associated to truncated moment sequences, in analogy to the infinite dimensional case analyzed in Proposition 3.4. First we recall the definition of genericity in the truncated setting and equivalent characterizations. Definition 3.14.
We say that σ∗ ∈ L_k(g) is generic if rank H^k_{σ∗} = max{rank H^k_η | η ∈ L_k(g)}.

This genericity can be characterized as follows, see [Las+13, prop. 4.7]. Proposition 3.15.
Let σ ∈ L_k(g). The following are equivalent:
(i) σ is generic;
(ii) ker H^k_σ ⊂ ker H^k_η for all η ∈ L_k(g);
(iii) for all d ≤ k, we have: rank H^d_σ = max{rank H^d_η | η ∈ L_k(g)}.

Remark. By Proposition 3.15 notice that for all d ≤ k, if σ∗ ∈ L_k(g) is generic then (σ∗)[2d] is generic in L_k(g)[2d]. In particular, ker H^d_{σ∗} ⊂ ker H^d_η for all η ∈ L_k(g).

We are now ready to describe the kernel of generic elements.

Theorem 3.16. Let Q = Q(g) and J = ᴿ√(supp Q). Then there exist d, t ∈ N such that for σ∗ ∈ L_d(g) generic, we have J = (ker H^t_{σ∗}).

Proof. Let t ∈ N be such that J is generated in degree ≤ t, by the graded basis h = {h₁, ..., h_s}. From Lemma 3.8 we deduce that there exists d ∈ N such that J_{2t} ⊂ Q̄_d(g). Let σ∗ ∈ L_d(g) be generic.

We first prove that J ⊂ (ker H^t_{σ∗}). By Proposition 3.15 we have ker H^t_{σ∗} = ∩_{σ∈L_d(g)} ker H^t_σ. Then it is enough to prove that J_t ⊂ ker H^t_σ for all σ ∈ L_d(g).

By Lemma 3.11, L_d(g)[2t] ⊂ L_{2t}(±h) ⊂ ⟨h⟩⊥_{2t}. Then for all f ∈ J_t = ⟨h⟩_t, all p ∈ R[X]_t and all σ ∈ L_d(g), we have f p ∈ ⟨h⟩_{2t} and ⟨σ[2t] | f p⟩ = 0. This shows that H^t_σ(f)(p) = ⟨(f ⋆ σ)[t] | p⟩ = ⟨σ | f p⟩ = 0, i.e. f ∈ ker H^t_σ.

Conversely, we show that (ker H^t_{σ∗}) ⊂ J for σ∗ generic in L_d(g). Since J = supp Q̃ = supp ∪_j Q̄_j(g) (by Theorem 3.9), it is enough to prove that ker H^t_{σ∗} ⊂ supp Q̄_d(g) = supp L_d(g)^∨.

Let f ∈ ker H^t_{σ∗} = ∩_{σ∈L_d(g)} ker H^t_σ (we use again Proposition 3.15) and let σ ∈ L_d(g). Then ⟨σ | f⟩ = ⟨(f ⋆ σ)[t] | 1⟩ = H^t_σ(f)(1) = 0. In particular f ∈ L_d(g)^∨. We prove that −f ∈ L_d(g)^∨ in the same way. Then f ∈ supp Q̄_d(g), which proves that ker H^t_{σ∗} ⊂ supp Q̃ = J.

As a corollary of this theorem we have the following result: there exist d, t ∈ N such that ᴿ√((h)) = (ker H^t_{σ∗}) for σ∗ ∈ L_d(±h) generic.
The particular cases of zero-dimensional ideals were investigated in [Lau07], [LLR08], [Las+13]. The geometric corollary of this theorem is the following: Corollary 3.17.
Let O = O(g) and S = S(g). Then there exist d, t ∈ N such that for σ∗ ∈ L_d(Πg) generic, we have I(S) = (ker H^t_{σ∗}).

Proof. Apply Theorem 3.16 and Theorem 1.1.
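Corollary 3.17 can be observed numerically: for σ∗ = Σᵢ wᵢ e_{ξᵢ} with distinct positive weights on the points of a finite set S (a generic functional in this setting, by Proposition 3.15), the kernel of the truncated moment matrix consists exactly of the polynomials of bounded degree vanishing on S. A minimal stdlib sketch with exact rational arithmetic (the points and weights are illustrative):

```python
from fractions import Fraction
from itertools import product

# Monomial basis of R[x, y] in degree <= 2.
mons = sorted(a for a in product(range(3), repeat=2) if sum(a) <= 2)

def ev(xi, a):
    # Evaluate the monomial x^a at the point xi.
    return Fraction(xi[0]) ** a[0] * Fraction(xi[1]) ** a[1]

# sigma* = sum_i w_i e_{xi_i} on S = {(0,0), (1,0), (0,1)}, positive weights.
points = [(0, 0), (1, 0), (0, 1)]
weights = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
H = [[sum(w * ev(p, tuple(x + y for x, y in zip(a, b)))
          for w, p in zip(weights, points))
      for b in mons] for a in mons]

def nullspace(M):
    # Gauss-Jordan elimination over the rationals; returns a kernel basis.
    m = [row[:] for row in M]
    rows, cols, pivots, r = len(m), len(m[0]), [], 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if p is None:
            continue
        m[r], m[p] = m[p], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [x - m[i][c] * y for x, y in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for c in range(cols):
        if c in pivots:
            continue
        v = [Fraction(0)] * cols
        v[c] = Fraction(1)
        for i, pc in enumerate(pivots):
            v[pc] = -m[i][c]
        basis.append(v)
    return basis

kernel = nullspace(H)
# rank H = #points, and each kernel vector is a polynomial vanishing on S.
```

Here the evaluation vectors of the three points are linearly independent, so the moment matrix has rank 3 and the kernel has dimension 6 − 3 = 3.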
Lasserre [Las01] proved that, if Q(g) is an Archimedean quadratic module, then lim_{d→∞} f∗_{MoM,d} = f∗, i.e. the minimum of the truncated moment relaxation is close to the evaluation f∗ of f at the minimizers. We will show that this happens because truncated linear functionals are indeed close to measures. We first recall a compactness property for measures with compact support. Lemma 3.18.
Let S ⊂ R^n be compact. Then M^{(1)}(S)[k] is compact.

Proof. Every truncated linear functional coming from a measure is coming from (finite) sums of evaluations, see [Sch17, th. 1.24]. Then M^{(1)}(S)[k] is the image of a compact set under a continuous map, so that it is compact. Theorem 3.19.
Let Q = Q(g) be an Archimedean quadratic module and S = S(g). Then for all d:

∩_{k=d}^∞ L_k(g)[d] = M(S)[d].

Moreover:

lim_{k→∞} d_H(L^{(1)}_k(g)[d], M^{(1)}(S)[d]) = 0,

where d_H denotes the Hausdorff distance.

Proof. Since L_k(g)[d] = cone(L^{(1)}_k(g)[d]) and M(S(g))[d] = cone(M^{(1)}(S)[d]), it is enough to prove:

∩_{k=d}^∞ L^{(1)}_k(g)[d] = M^{(1)}(S)[d].

The inclusion ∩_{k=d}^∞ L^{(1)}_k(g)[d] ⊃ M^{(1)}(S)[d] is direct. Conversely, suppose that τ ∈ ∩_{k=d}^∞ L^{(1)}_k(g)[d] \ M^{(1)}(S)[d]. We want to prove that there exists h ∈ N such that τ ∉ L^{(1)}_h(g)[d]. By [Sch17, th. 17.6], as τ is not coming from a measure supported on S, there exists f ∈ Pos(S)_d such that ⟨τ | f⟩ < 0. Let ε > 0 be such that ⟨τ | f⟩ < −ε. Now, f + ε > 0 on S(g), and by Theorem 1.2 there exists h ≥ d such that f + ε ∈ Q_h(g). Then for all σ ∈ L^{(1)}_h(g):

⟨σ[d] | f + ε⟩ = ⟨σ | f + ε⟩ ≥ 0 ⟹ ⟨σ[d] | f⟩ ≥ −ε,

and thus σ[d] ≠ τ, i.e. τ ∉ L^{(1)}_h(g)[d], which is a contradiction. We deduce the reverse inclusion, which concludes the proof of the first point.

For the second part, we proceed by contradiction. If the distance does not go to zero, then for all k ∈ N there exists τ_k ∈ L^{(1)}_k(g)[d] with d_H(τ_k, M^{(1)}(S)[d]) = ε > 0 (up to restricting to a subsequence, we can take the distance constantly equal to ε, since L^{(1)}_k(g)[d] is convex). Since M^{(1)}(S)[d] is compact (see Lemma 3.18), the set of points at distance ε from M^{(1)}(S)[d] is compact too. Then, up to restricting to a subsequence, we can assume that τ_k has a limit τ ∈ (R[X]_d)∗, and d_H(τ, M^{(1)}(S)[d]) = ε by continuity. But since the L^{(1)}_k(g)[d] are closed and τ_k ∈ L^{(1)}_h(g)[d] for k ≥ h, then τ ∈ ∩_{k=d}^∞ L^{(1)}_k(g)[d] = M^{(1)}(S)[d], which is a contradiction with d_H(τ, M^{(1)}(S)[d]) = ε > 0. Remark.
Notice that since M^{(1)}(S)[d] is compact, this proves that L^{(1)}_k(g)[d] is also compact. As L^{(1)}_k(g)[d] ⊃ L^{(1)}_k(Πg)[d] ⊃ M^{(1)}(S)[d], we deduce the following result.

Corollary 3.20. If Q(g) is Archimedean then for d ∈ N, lim_{k→∞} d_H(L^{(1)}_k(g)[d], L^{(1)}_k(Πg)[d]) = 0.

From a computational point of view, Corollary 3.20 says that, in the Archimedean case, working with L_d(g) yields a good numerical approximation of L_d(Πg).

It would be interesting to investigate the rate of convergence in Theorem 3.19. A possible approach (on the polynomial SoS side) could be to apply results from [NS07], where the degree of the Putinar representation is analyzed.

In Section 3.4 we will show that, in the case of a finite semialgebraic set, we need only a finite number of steps in the intersection of Theorem 3.19.

In the case of a finite semialgebraic set we have an easy characterization of positive functionals. Theorem 3.1 reads as follows: if Q is Archimedean and S(Q) = {ξ₁, ..., ξ_r} is finite then L(Q) = L(I(S(Q))) = cone(e_{ξ₁}, ..., e_{ξ_r}).

We want to prove that this holds also for truncated positive functionals. We recall an auxiliary lemma. Lemma 3.21.
Let I be a real radical ideal such that |V_R(I)| < +∞. Then V_C(I) = V_R(I).

Proof. Let y ∈ V_C(I) \ V_R(I) and let ȳ be its conjugate. Since |V_R(I)| < +∞ we can consider the interpolation polynomial u_y ∈ C[X] such that:

u_y(x) = 1 if x = y, and u_y(x) = 0 for x ∈ V_R(I) ∪ {ȳ}.

Since ū_y = u_ȳ, we have u_y + u_ȳ ∈ R[X] and u_y + u_ȳ vanishes on V_R(I), i.e. u_y + u_ȳ ∈ I(V_R(I)) = I. But (u_y + u_ȳ)(y) = 1, so y ∉ V_C(I), a contradiction.

We prove a strong moment property for ideals whose associated real variety is finite. Theorem 3.22.
Let I = (h) be a real radical ideal with finite variety: V(I) = {ξ₁, ..., ξ_r} ⊂ R^n. Let ρ = ρ(ξ₁, ..., ξ_r) be the regularity of the points. Then there exists d ∈ N such that for all k ∈ N:

• dim cone(e_{ξ₁}, ..., e_{ξ_r})[ρ−1+k] = r;
• L_{d+k}(±h)[2(ρ−1+k)] = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)].

Proof. We start by proving the first point. Let Ξ = {ξ₁, ..., ξ_r} = V(I) and ρ = ρ(Ξ) be the regularity, t ≥ ρ − 1, and let u₁, ..., u_r ∈ R[X]_t be interpolation polynomials of degree < ρ (see Proposition 1.3). Suppose that a₁ e^{[t]}_{ξ₁} + ··· + a_r e^{[t]}_{ξ_r} = 0. Then for any i ∈ {1, ..., r}:

a_i = ⟨a_i e^{[t]}_{ξ_i} | u_i⟩ = −⟨Σ_{j≠i} a_j e^{[t]}_{ξ_j} | u_i⟩ = 0.

For the second part, the inclusion cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)] ⊂ L_{d+k}(±h)[2(ρ−1+k)] is obvious. Let us take d ≥ ρ − 1 big enough such that ⟨h⟩_d contains a graded basis h' of degree ρ of I = I(Ξ). Then we have I_{ρ−1+k} = ⟨h'⟩_{ρ−1+k} ⊂ ⟨h⟩_{d+k} and L_{d+k}(±h)[2(ρ−1+k)] ⊂ L_{ρ−1+k}(±h') = L_{ρ−1+k}(I_{ρ−1+k}). By Proposition 1.4, for k ∈ N, L_{ρ−1+k}(I_{ρ−1+k}) = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)], which proves the reverse inclusion and L_{d+k}(±h)[2(ρ−1+k)] = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)].

Now we prove a theorem that is central in the paper. We generalize Theorem 3.22 to the case of inequalities: finite semialgebraic sets enjoy a truncated strong moment property. Theorem 3.23.
Suppose that dim R[X]/supp Q(g) = 0. Then S = S(g) = {ξ₁, ..., ξ_r} is non-empty and finite, and there exists d ∈ N such that for all k ∈ N:

L_{d+k}(g)[2(ρ−1+k)] = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)],

where ρ = ρ(ξ₁, ..., ξ_r) is the regularity of S.

Proof. Let I = supp Q(g) and J = ᴿ√(supp Q(g)) = √(supp Q(g)). Since dim R[X]/J = dim R[X]/I = 0, we have I(S(g)) = ᴿ√(supp Q(g)) = J by [Mar08, cor. 7.4.2 (3)]. Then dim R[X]/I(S(g)) = 0 and V_R(J) = V_R(I(S(g))) = S(g) = {ξ₁, ..., ξ_r} is finite.

We choose a graded basis h of J with deg h ≤ ρ = ρ(ξ₁, ..., ξ_r), see Section 1.4. By Corollary 3.13 and Proposition 1.4, there exists d ∈ N big enough such that for every k ∈ N:

L_{d+k}(g)[2(ρ−1+k)] ⊂ L_{ρ−1+k}(±h) = L_{ρ−1+k}(J_{ρ−1+k}) = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)].

Since the converse inclusion is obvious, we have proved that L_{d+k}(g)[2(ρ−1+k)] = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)]. Remark.
Notice that there exist examples with S(g) finite and dim R[X]/supp Q(g) > 1, see Example 3.6. However the hypotheses:

(i) dim R[X]/supp Q(g) = 0; and
(ii) S(g) is finite and dim R[X]/supp Q(g) ≤ 1

are equivalent. (i) ⇒ (ii) is shown in the proof of Theorem 3.23, while (ii) ⇒ (i) follows from I(S(g)) = ᴿ√(supp Q(g)) (see [Mar08, cor. 7.4.2 (3)]). Corollary 3.24.
Suppose that S(g) = {ξ₁, ..., ξ_r} is non-empty and finite, and let ρ = ρ(ξ₁, ..., ξ_r) be the regularity. Then there exists d ∈ N such that for all k ∈ N: L_{d+k}(Πg)[2(ρ−1+k)] = cone(e_{ξ₁}, ..., e_{ξ_r})[2(ρ−1+k)]. Proof.
We combine Theorem 3.23 and Theorem 1.1, replacing g by Πg.

Related results were obtained in [LLR08] and [Las+13]. In particular Theorem 3.23 generalizes their setting, i.e. the case when the description of the quadratic module defines an ideal with finite real variety. Theorem 3.23 gives a complete description of the truncated moment sequences in terms of convex sums of evaluations at the points of the semialgebraic set.

3.5 The truncated moment hierarchy

We summarize the relations between the truncated moment cones that we have seen. Let Q = Q(g), S = S(g) and let h be a (graded) basis of J = ᴿ√(supp Q). For d, t big enough, we have the following inclusions:

L_t(g, ±h) ⊃(1) L_d(g)[t] ⊃(2) L_{d+1}(g)[t] ⊃ ··· ⊃ ∩_{d=t}^∞ L_d(g)[t] ⊃(3) L(Q)[t] ⊃(4) M(S)[t].

All these inclusions are obvious, except for (1), which is Lemma 3.11. In this section we analyzed cases where these inclusions are equalities.

(1) is an equality if we can extend degree-t positive functionals on Q_t(g) to degree-d positive functionals on Q_d(g), i.e. if L_t(g) = L_d(g)[t].

(2) is an equality if Q is stable: if Q_t(g) = (Q_d(g))_t then L_t(g) = L_d(g)[t] (see [Mar08, ch. 4]).

(3) and (4) are equalities if Q is Archimedean, see Theorem 3.19.

(4) is an equality if L(Q) has the strong moment property (and this is the case when Q is Archimedean, see Theorem 1.2). We cannot deduce that (3) is an equality from the strong moment property hypothesis alone.

Theorem 3.23 says that, if J defines a finite real variety, then all these inclusions are equalities. Moreover notice that, if Q is a reduced, Archimedean quadratic module with dim S ≥
3, then we cannot have finite convergence in general (see Example 2.8) and the inclusions (2) are always proper: (3) and (4) are equalities since Q is Archimedean, but there exists σ ∈ L_d(g)[t] \ ∩_{e=t}^∞ L_e(g)[t] for all d.

The situation is simpler if we consider generic linear functionals and kernels of Hankel operators. If τ∗_d ∈ L_d(g) and σ∗ ∈ L(Q) are generic, then:

J =(a) (ker H^t_{τ∗_d}) =(b) (ker H^t_{τ∗_{d+1}}) = ··· ⊂(c) (ker H^t_{σ∗}) ⊂(d) I(S).

(a) and (b) are Theorem 3.16. (d) is an equality if Q is Archimedean, see Proposition 3.4. Inclusion (c) can be proper, even in the Archimedean case: in Example 3.6, (ker H^t_{τ∗_d}) = (0) but (ker H^t_{σ∗}) = (X, Y).

4 Applications to Polynomial Optimization

In this section, we consider the Polynomial Optimization Problem of minimizing f ∈ R[X] on a basic semialgebraic set S = S(g), where g = {g₁, ..., g_s} ⊂ R[X]. Theorem 4.1.
Let f∗ denote the infimum of f on S = S(g) and let Q = Q(g). Suppose that dim R[X]/supp Q = 0. Then the moment relaxation (L_d(g))_{d∈N} is exact. For t ∈ N and d ≥ t big enough,

L^{min}_{2d}(g)[t] = conv(e_{ξ₁}, ..., e_{ξ_l})[t],

where {ξ₁, ..., ξ_l} ⊂ R^n is the finite set of minimizers of f on S. Moreover, if d ≥ t ≥ ρ = ρ(ξ₁, ..., ξ_l) and σ ∈ L^{min}_{2d}(g) is generic, then (ker H^t_σ) = I(ξ₁, ..., ξ_l) is the vanishing ideal of the minimizers {ξ₁, ..., ξ_l} of f on S.

Proof. By Theorem 3.23, for t ≥ deg f and d ≥ t big enough, L_{2d}(g)[t] = cone(e_{ξ₁}, ..., e_{ξ_r})[t]. Assume that the minimizers of f on S = {ξ₁, ..., ξ_r} are {ξ₁, ..., ξ_l} (l ≤ r), so that f(ξ_i) > f∗ if l < i ≤ r. Then for all σ ∈ L^{min}_{2d}(g), ⟨σ | f⟩ = ⟨σ[t] | f⟩ = f∗ = Σ_{i=1}^r w_i f(ξ_i) is a convex sum of evaluations of f at the points of the semialgebraic set S, with Σ_{i=1}^r w_i = ⟨σ[t] | 1⟩ = 1. We deduce that Σ_{i=1}^r w_i (f(ξ_i) − f∗) = 0, so that if f(ξ_i) > f∗ then w_i = 0. Consequently σ[t] ∈ cone(e_{ξ₁}, ..., e_{ξ_l})[t] and the first part of the theorem follows.

Let us choose t ≥ ρ and d ≥ t big enough. We have shown that if σ ∈ L^{min}_{2d}(g) then σ[2t] = Σ_{i=1}^l ω_i e^{[2t]}_{ξ_i} with ω_i ≥ 0. Let (u_i)_{i=1,...,l} be a family of interpolation polynomials at the points ξ₁, ..., ξ_l, of degree ≤ ρ − 1. Since ⟨σ[2t] | u_i²⟩ = ω_i, we have u_i ∈ ker H^t_σ if and only if ω_i = 0. Therefore a generic element σ ∈ L^{min}_{2d}(g) is such that ω_i > 0 for all i, since the kernel of H^t_σ is included in all the other kernels.

For p ∈ I = I(ξ₁, ..., ξ_l) with deg(p) ≤ t and q ∈ R[X]_t, we have ⟨σ[2t] | p q⟩ = Σ_{i=1}^l ω_i p(ξ_i) q(ξ_i) = 0, so that p ∈ ker H^t_σ. Conversely, for p ∈ ker H^t_σ, ⟨σ[2t] | p²⟩ = Σ_{i=1}^l ω_i p(ξ_i)² = 0, which implies that p(ξ_i) = 0 and p ∈ I_t. This shows that ker H^t_σ = I_t. As I is generated in degree ρ ≤ t, we have proved the second part of the theorem: (ker H^t_σ) = I. Corollary 4.2.
With the same notation, if S = S(g) is finite then the moment relaxation (L_d(Πg))_{d∈N} is exact. For t ∈ N and d ≥ t big enough,

L^{min}_{2d}(Πg)[t] = conv(e_{ξ₁}, ..., e_{ξ_l})[t],

where {ξ₁, ..., ξ_l} is the finite set of minimizers of f on S. Moreover, if d ≥ t ≥ ρ = ρ(ξ₁, ..., ξ_l) and σ ∈ L^{min}_{2d}(Πg) is generic, then (ker H^t_σ) = I(ξ₁, ..., ξ_l) is the vanishing ideal of the minimizers {ξ₁, ..., ξ_l} of f on S.

Proof. It is a consequence of Theorem 4.1 and Theorem 1.1, since if S(g) is finite then dim R[X]/supp O(g) = dim R[X]/ᴿ√(supp O(g)) = dim R[X]/I(S(g)) = 0.

We consider now the case of a finite semialgebraic set S = S(g, ±h) defined with equations h with an associated finite real variety: |V_R(h)| < ∞.

Corollary 4.3. If V_R(h) is finite then the moment relaxation (L_d(g, ±h))_{d∈N} is exact. For t ∈ N and d ≥ t big enough,

L^{min}_{2d}(g, ±h)[t] = conv(e_{ξ₁}, ..., e_{ξ_l})[t],

where {ξ₁, ..., ξ_l} is the finite set of minimizers of f on S = S(g, ±h). Moreover, if d ≥ t ≥ ρ = ρ(ξ₁, ..., ξ_l) and σ ∈ L^{min}_{2d}(g, ±h) is generic, then (ker H^t_σ) = I(ξ₁, ..., ξ_l) is the vanishing ideal of the minimizers {ξ₁, ..., ξ_l} of f on S.

Proof. Let Q = Q(g, ±h). If |V_R(h)| < ∞ then dim R[X]/ᴿ√((h)) = 0. Since √(supp Q) = ᴿ√(supp Q) (see Section 1.3) and (h) ⊂ supp Q, we have:

dim R[X]/supp Q = dim R[X]/√(supp Q) = dim R[X]/ᴿ√(supp Q) ≤ dim R[X]/ᴿ√((h)) = 0.

Then the relaxation is exact by Theorem 4.1.

Applying strong duality we can also deduce finite convergence for the SoS relaxation.
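In the setting of Corollary 4.3, the kernel computation can be carried out by hand on a toy instance. Take h = {X² + Y² − 1, XY}, so that V_R(h) is the four points (±1, 0), (0, ±1), and minimize f = X + Y, so that f∗ = −1 with minimizers (−1, 0) and (0, −1); the minimizing functional σ = ½(e_{(−1,0)} + e_{(0,−1)}) is supplied by hand below, in place of the SDP computation (all the data is illustrative):

```python
from fractions import Fraction

# Minimizers of f = X + Y on V_R(X^2 + Y^2 - 1, X*Y): (-1, 0) and (0, -1).
pts = [(-1, 0), (0, -1)]
w = [Fraction(1, 2), Fraction(1, 2)]

basis = [(0, 0), (1, 0), (0, 1)]  # monomials 1, X, Y

def ev(p, a):
    # Evaluate the monomial X^a0 * Y^a1 at the point p.
    return Fraction(p[0]) ** a[0] * Fraction(p[1]) ** a[1]

# Truncated moment matrix of sigma in the basis 1, X, Y.
H = [[sum(wi * ev(p, (a[0] + b[0], a[1] + b[1])) for wi, p in zip(w, pts))
      for b in basis] for a in basis]

# <sigma | f> = <sigma | X> + <sigma | Y> recovers f* = -1, and
# f - f* = 1 + X + Y (coefficient vector (1, 1, 1)) lies in ker H.
val_f = H[0][1] + H[0][2]
Hv = [sum(row[j] for j in range(3)) for row in H]
```

The kernel element 1 + X + Y is exactly f − f∗, and it cuts out the line through the two minimizers, consistent with (ker H^t_σ) = I(ξ₁, ..., ξ_l).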
Corollary 4.4.
Under the assumptions of Theorem 4.1 (resp. Corollary 4.2, resp. Corollary 4.3) the SoS relaxation:

f∗_{SoS,d} = sup{λ ∈ R | f − λ ∈ Q_d(g)}

(resp. f∗_{SoS,d} = sup{λ ∈ R | f − λ ∈ Q_d(Πg)}, resp. f∗_{SoS,d} = sup{λ ∈ R | f − λ ∈ Q_d(g, ±h)}) has the finite convergence property.

Proof. For the quadratic module cases, Q(g) is Archimedean by [Mar08, cor. 7.4.3]. Then by strong duality, Theorem 2.5, the result follows. For the preordering case, O(g) is Archimedean by [Wö98] (see also [Mar08, th. 6.1.1]). Then by strong duality, Theorem 2.5, the result follows.

Notice that, even if the SoS relaxation has the finite convergence property, it may not be SoS exact, as shown in Example 2.10 and Example 2.11.

Boundary Hessian Conditions (BHC) are conditions on the minimizers of a polynomial f on a basic semialgebraic set S, introduced by Marshall in [Mar06] and [Mar09]. These conditions are particular cases of the so-called local-global principle, which allows one to prove global properties of polynomials (e.g. f ∈ Q) by analyzing local properties (e.g. checking the BHC at the minimizers of f on S(Q)). We refer to [Sch05a], [Sch06] and [Mar08, ch. 9] for more details. We introduce the BHC following [Sch09].

Definition 4.5 (Boundary Hessian Conditions). Let V ⊂ R^n be a variety, and let Q be a finitely generated Archimedean quadratic module in R[V] ≅ R[X]/I(V) (or equivalently, Q + I(V) is Archimedean in R[X]). Let S = S(Q) ∩ V and f ∈ Pos(S). We say that the Boundary Hessian Conditions hold at x ∈ V(f) ∩ S if there exist t₁, ..., t_m ∈ Q such that:
• t₁, ..., t_m are part of a regular system of parameters for V at x;
• ∇f(x) = a₁∇t₁(x) + ··· + a_m∇t_m(x), where the a_i are strictly positive real numbers;
• the Hessian of f restricted to V(t₁, ..., t_m) ∩ V is positive definite at x.

When the BHC hold, the minimizers are non-singular, isolated points, and thus finite.
It is proved in [Mar09] that if the BHC hold at every minimizer of f on S(g) then f − f∗ ∈ Q(g), which implies that the SoS relaxation f∗_{SoS,d} is exact. [Nie14] proved that the BHC at every minimizer of f, which hold generically, imply the SoS finite convergence property.

In this section, we prove that, if the BHC hold, then the MoM relaxation is exact. We need some preliminary lemmas. Lemma 4.6.
Let p, g ∈ R[X], k ≥ deg p + deg g and d ≥ k + deg g. If σ = σ[d] ∈ L_d(g) then ⟨σ | p² g⟩ = 0 implies p g ∈ ker H^k_σ.

Proof. Let h ∈ R[X]_k and σ = σ[d] ∈ L_d(g). Since σ[d] is positive on Q_d(g) and ⟨σ[d] | p² g⟩ = 0, then for all t ∈ R:

0 ≤ ⟨σ[d] | (p + t h)² g⟩ = t² ⟨σ[d] | h² g⟩ + 2t ⟨σ[d] | p h g⟩.

As a function of t the last expression is non-negative, and equal to 0 for t = 0. Then t = 0 must be a double root, and thus ⟨σ[d] | p h g⟩ = ⟨(p g ⋆ σ)[k] | h⟩ = 0 for all h ∈ R[X]_k. But this means p g ∈ ker H^k_σ. Lemma 4.7.
Let f ∈ Q_l(g). Then for k and d ≥ k big enough, if σ ∈ L_d(g) then: ⟨σ | f⟩ = 0 if and only if f ∈ ker H^k_σ.

Proof. The "if" part is obvious. For the "only if" part, we set g₀ = 1 for notational convenience. Since f ∈ Q_l(g), we have f = Σ_i s_i g_i, with s_i = Σ_j p²_{i,j} and deg s_i g_i ≤ 2l. Let d ≥ max_{i,j}{deg(p_{i,j}) + deg(g_i)} and σ ∈ L_d(g). By hypothesis:

0 = ⟨σ[d] | f⟩ = Σ_{i,j} ⟨σ[d] | p²_{i,j} g_i⟩,

which implies ⟨σ[d] | p²_{i,j} g_i⟩ = 0 for all i and j, since each summand is non-negative. Let k_{i,j} and d_{i,j} be given by Lemma 4.6 (applied to p_{i,j} and g_i). Let k' ≥ max_{i,j}{k_{i,j}}. Then p_{i,j} g_i ∈ ker H^{k'}_σ for all i and j, which implies that p²_{i,j} g_i ∈ ker H^{k' − deg p_{i,j}}_σ. Letting k = min_{i,j}{k' − deg p_{i,j}}, we finally get p²_{i,j} g_i ∈ ker H^k_σ for all i and j, and f = Σ_{i,j} p²_{i,j} g_i ∈ ker H^k_σ. Theorem 4.8.
Let f ∈ R[X], let Q = Q(g) be an Archimedean finitely generated quadratic module, and assume that the BHC hold at every minimizer of f on S = S(g). Then the moment relaxation (L_d(g))_{d∈N} is exact. For t ∈ N and d, e ≥ t big enough:

L^{min}_{2d}(g)[t] = L_e(g, ±(f − f∗))[t] = conv(e_{ξ₁}, ..., e_{ξ_r})[t],

where {ξ₁, ..., ξ_r} is the finite set of minimizers of f on S. Moreover, if d ≥ t ≥ ρ(ξ₁, ..., ξ_r) and σ∗ ∈ L^{min}_{2d}(g) is generic, then (ker H^t_{σ∗}) = I(ξ₁, ..., ξ_r) is the vanishing ideal of the minimizers of f on S.

Proof. We can assume without loss of generality that f∗ = 0, so that f = f − f∗ ∈ Q(g) by [Mar09], since the BHC hold. For d, e big enough, if σ ∈ L^{min}_{2d}(g) then f ∈ ker H^e_σ by Lemma 4.7. This implies that L^{min}_{2d}(g)[2e] ⊂ L_e(g, ±f). Since the BHC hold, we know that dim R[X]/supp(Q + (f)) = 0 (see the proof of [Mar06, th. 2.3]). By Theorem 3.23 applied to L_e(g, ±f), we have L_e(g, ±f)[t] = conv(e_{ξ₁}, ..., e_{ξ_r})[t] for t ∈ N and e big enough. Since conv(e_{ξ₁}, ..., e_{ξ_r})[t] ⊂ L^{min}_{2d}(g)[t] by definition, we have proved the first part: up to truncation, minimizing functionals are coming from convex sums of evaluations at the minimizers of f.

The proof of the second part is equal to that of Theorem 4.1. Remark.
In Theorem 4.8 we use the BHC to prove the following:
• f − f* ∈ Q (i.e. SoS exactness);
• dim R[X]/supp(Q + (f − f*)) = 0.
If these two conditions hold, the conclusions of Theorem 4.8 remain valid.

We now show that moment exactness holds generically. For polynomials f ∈ R[X]_d and g_1 ∈ R[X]_{d_1}, …, g_s ∈ R[X]_{d_s}, we say that a property holds generically (or that the property holds for generic f, g_1, …, g_s) if there exist finitely many nonzero polynomials φ_1, …, φ_l in the coefficients of polynomials in R[X]_d and R[X]_{d_1}, …, R[X]_{d_s} such that the property holds whenever φ_1(f, g) ≠ 0, …, φ_l(f, g) ≠ 0.
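As a simple illustration of this notion (our example, not taken from the results above): the property "a univariate quadratic polynomial has two distinct complex roots" holds generically, witnessed by two polynomial conditions on the coefficients:

```latex
% Property: f = aX^2 + bX + c has two distinct roots in \mathbb{C}.
% It holds for generic f: take \phi_1 = a and \phi_2 = b^2 - 4ac, since
% the roots are distinct exactly when the discriminant does not vanish.
\phi_1(f) = a \neq 0,\qquad \phi_2(f) = b^2 - 4ac \neq 0
\;\Longrightarrow\; f \ \text{has two distinct roots in } \mathbb{C}.
```

The set where a property fails is thus contained in a proper algebraic subset of the coefficient space, which has measure zero.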
Corollary 4.9.
For f ∈ R[X]_d and g_1 ∈ R[X]_{d_1}, …, g_s ∈ R[X]_{d_s} generic, the moment relaxation (L_d(g))_{d∈N} is exact.

Proof. By [Nie14, th. 1.2], the BHC hold generically. We apply Theorem 4.8 to conclude.
In this section, we consider Polynomial Optimization Problems for which the non-empty set of minimizers is finite, and we propose a strategy to recover them.

If the set of minimizers is non-empty and finite, and we know the minimum f* of f on S = S(g), then by adding the equation f − f* = 0 to the definition of the truncated quadratic module we obtain a quadratic module Q′ = Q(g, ±(f − f*)), which defines the finite set S(Q′) of minimizers of f on S. We can then apply the results of Section 4.1 to the relaxation (L_d(g, ±(f − f*)))_d or (L_d(Π g, ±(f − f*)))_d.

Corollary 4.10.
Let f ∈ R[X] and let Q = Q(g) be a finitely generated quadratic module. Assume that the set of minimizers of f on S = S(g) is finite: { x ∈ S | f(x) = f* } = { ξ_1, …, ξ_r }. Then for any t ∈ N and d ≥ t big enough:

L_d(Π g, ±(f − f*))^[t] = conv(e_{ξ_1}, …, e_{ξ_r})^[t].

Moreover, if d ≥ t ≥ ρ = ρ(ξ_1, …, ξ_r) and σ ∈ L^min_{2d}(g, ±(f − f*)) is generic, then (ker H^t_σ) = I(ξ_1, …, ξ_r) is the vanishing ideal of the minimizers of f on S.
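The last statement can be made concrete in a toy univariate setting (a sketch of ours, not the paper's multivariate machinery): for a functional σ = e_{ξ_1} + e_{ξ_2} given by its moments, the kernel of the truncated Hankel (moment) matrix H^t_σ is spanned by the coefficient vectors of polynomials vanishing at ξ_1 and ξ_2.

```python
import numpy as np

# sigma = e_1 + e_2: sum of evaluations at xi_1 = 1 and xi_2 = 2,
# with moments s_k = <sigma | x^k> = 1^k + 2^k.
s = [1**k + 2**k for k in range(5)]

# Truncated Hankel (moment) matrix of sigma in the monomial basis {1, x, x^2}.
H = np.array([[s[i + j] for j in range(3)] for i in range(3)])

# H has rank 2; its 1-dimensional kernel is spanned by the last
# right-singular vector (smallest singular value is 0).
v = np.linalg.svd(H)[2][-1]
v = v / v[2]  # normalize so the coefficient of x^2 equals 1

# v now holds the coefficients of (x - 1)(x - 2) = 2 - 3x + x^2,
# a generator of the vanishing ideal I(xi_1, xi_2).
```

Here the kernel vector directly encodes a generator of the vanishing ideal of the support points, which is the univariate shadow of (ker H^t_σ) = I(ξ_1, …, ξ_r).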
In practice, the minimum f* is usually not known. Since the computation of moment minimizers σ* ∈ L^min_d(g, ±(f − f*)) is based on numerical Semi-Definite Programming (SDP) solvers, we can replace f* by an approximate value, taking for instance

f*_{MoM,d} = inf { ⟨σ | f⟩ ∈ R | σ ∈ L^(1)_{2d}(g) } ≤ f*

for d ∈ N. Notice that if v < f* then L^(1)_{2d}(g, ±(f − v)) is empty, since S(Π g, ±(f − v)) is empty. If v is not close to f*, the SDP solvers can detect the feasibility or infeasibility of the relaxation, that is, whether L_d(Π g, ±(f − v)) is empty or not.

Notice also that by [Wö98] (or [Mar08, th. 6.1.1]), O(g) is Archimedean if the semialgebraic set is finite. If Q(g) is also Archimedean, since the SDP solvers perform approximate numerical computations, and since lim_{d→∞} d_H(L^(1)_d(g)^[k], L^(1)_d(Π g)^[k]) = 0 for k big enough by Corollary 3.20, we can also replace the relaxation associated to the preordering by the relaxation associated to the quadratic module. This leads to the following algorithm for the computation of finite minimizers.

Algorithm 1:
Finite Minimizers
  input: d ∈ N, f, g_1, …, g_r ∈ R[X]_d such that f has a finite set of minimizers on S = S(g).
  output: the minimizers { ξ_1, …, ξ_r } of f on S and f* = inf_{x∈S} f(x).

  k = ⌈d/2⌉
  repeat
    Compute f*_{MoM,k} = inf { ⟨σ | f⟩ ∈ R | σ ∈ L^(1)_{2k}(g) }.
    Compute a generic element σ* ∈ L^(1)_{2k}(g, ±(f − f*_{MoM,k})).
    Extract the minimizers ξ_1, …, ξ_r from H^t_{σ*} for t ≤ k big enough.
    k = k + 1
  until the minimizer extraction succeeds
  return the minimizers { ξ_1, …, ξ_r } and f* = ⟨σ* | f⟩

Each loop of this algorithm requires two calls to SDP solvers: the first to compute f*_{MoM,k} on the convex set L^(1)_{2k}(g), the second to compute an interior or generic point σ* of L^(1)_{2k}(g, ±(f − f*_{MoM,k})), using an interior point SDP solver.

The extraction of the minimizers from the Hankel matrix H^t_{σ*} is based on the algorithm of polynomial-exponential decomposition of series described in [Mou18]. It involves numerical linear algebra routines such as SVD, eigenvalue and eigenvector computations. It provides an approximation of the linear functional σ* as a weighted sum of evaluations σ* ≈ Σ_{i=1}^r ω_i e_{ξ_i}. We consider that the minimizer extraction succeeds when such an approximation of σ* is obtained within a given threshold.

If the set of minimizers is finite and the moment relaxation (L_d(g))_{d∈N} is exact for f, then this algorithm terminates. When this is not the case, it still terminates using approximate computation. Indeed, increasing the degree k, we obtain better approximations of f* and of a generic element of L_d(Π g, ±(f − f*))^[2t] = conv(e_{ξ_1}, …, e_{ξ_r})^[2t]. When a sufficiently good approximation of a generic element of conv(e_{ξ_1}, …, e_{ξ_r})^[2t] is obtained, the minimizer extraction succeeds and Algorithm 1 outputs an approximation of the minimizers { ξ_1, …, ξ_r } and the minimum f*. We will illustrate it in Example 4.14.
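The extraction step can be illustrated in the simplest univariate case (our sketch, in the spirit of the decomposition of [Mou18], not its full multivariate algorithm): when σ = Σ_i ω_i e_{ξ_i} with distinct real points ξ_i, the points are the generalized eigenvalues of the shifted Hankel pencil (H¹_σ, H⁰_σ), i.e. Prony's method.

```python
import numpy as np

def extract_points(moments, r):
    """Recover r distinct support points of sigma = sum_i w_i e_{xi_i}
    from its univariate moments s_k = sum_i w_i xi_i^k, as the
    generalized eigenvalues of the shifted Hankel pencil (H1, H0)."""
    H0 = np.array([[moments[i + j] for j in range(r)] for i in range(r)])
    H1 = np.array([[moments[i + j + 1] for j in range(r)] for i in range(r)])
    # Generalized eigenproblem H1 x = lambda H0 x: the lambda's are the xi_i.
    return np.sort(np.linalg.eigvals(np.linalg.solve(H0, H1)).real)

# sigma = 1 * e_{-1} + 2 * e_{2}: moments s_k = (-1)^k + 2 * 2^k
moments = [(-1)**k + 2 * 2**k for k in range(4)]
points = extract_points(moments, 2)  # close to [-1, 2]
```

In the multivariate setting the same idea is carried out with multiplication operators built from H^t_{σ*}, and eigenvector computations additionally recover the weights ω_i.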
Another approach that has been investigated to make the relaxations exact is to add equality constraints satisfied by the minimizers (and independent of the minimum f*) to the Polynomial Optimization Problem.

For global optimization, we can consider the gradient equations (see [NDS06]): obviously ∇f(x*) = 0 for all the minimizers x* of f on S = R^n. For constrained optimization, we can consider the Karush–Kuhn–Tucker (KKT) constraints, adding new variables (see [DNP07]), or projecting them to the variables X (Jacobian equations, see [Nie13a]). We describe them briefly.

Let g_1, …, g_r, h_1, …, h_s ∈ R[X] be polynomials defining S = S(g, ±h), and let f ∈ R[X] be the objective function. Let Λ = (Λ_1, …, Λ_r) and Γ = (Γ_1, …, Γ_s) be variables representing the Lagrange multipliers associated with g and h. The KKT constraints associated to the optimization problem min f(x) : x ∈ S(g, ±h) are:

∂f/∂X_i − Σ_{k=1}^r Λ_k ∂g_k/∂X_i − Σ_{j=1}^s Γ_j ∂h_j/∂X_i = 0  ∀ i,
Λ_k g_k = 0,  h_j = 0,  g_k ≥ 0  ∀ j, k,    (6)

where the polynomials belong to R[X, Γ, Λ]. Without further regularity assumptions, these conditions need not hold at a minimizer x* ∈ S.

For x ∈ S, we say that g_i is an active constraint at x if g_i(x) = 0. Let x* ∈ S and let g_{i_1}, …, g_{i_k} be the active constraints at x*. The KKT constraints are necessary if the Linear Independence Constraint Qualification (LICQ) holds, that is, if ∇h_1(x*), …, ∇h_s(x*), ∇g_{i_1}(x*), …, ∇g_{i_k}(x*) are linearly independent at the minimizer x* ∈ S (see [NW06, th. 12.1]). We cannot avoid the LICQ hypothesis: for example, if f = X ∈ R[X] and g = X³ ∈ R[X], then x* = 0 is a minimizer of f on S(g) = [0, +∞), but the KKT equations are not satisfied at x* = 0.

To avoid this problem we define the polar ideal. Observe from eq. (6) that, if the KKT constraints are satisfied at x, then:
• if g_i is not an active constraint at x, then Λ_i = 0;
• if g_{i_1}, …, g_{i_k} are the active constraints at x, then the gradients ∇f(x), ∇h_1(x), …, ∇h_s(x), ∇g_{i_1}(x), …, ∇g_{i_k}(x) are linearly dependent.

Definition 4.11.
For f, g_1, …, g_r, h_1, …, h_s ∈ R[X] as before, the polar ideal is defined as follows:

J ≔ (h) + Π_{ {a_1,…,a_k} ⊂ {1,…,r} } ( (g_{a_1}, …, g_{a_k}) + ( rank Jac(f, h, g_{a_1}, …, g_{a_k}) < s + k + 1 ) ),

where ( rank Jac(f, h, g_{a_1}, …, g_{a_k}) < l ) denotes the ideal generated by the l × l minors of the Jacobian matrix Jac(f, h, g_{a_1}, …, g_{a_k}). The generators of J, besides h, are thus products of active constraints and of generators of rank ideals.

In this definition, we could replace the product of ideals by their intersection, and the l × l minors of the Jacobian matrices by any polynomials defining the same varieties.

We prove that every minimizer belongs to V_R(J).

Lemma 4.12.
Let x* be a minimizer of f on S = S(g, ±h). Then x* ∈ V_R(J).

Proof. Since x* ∈ S, we have x* ∈ V_R(h). If the LICQ hold at x*, then x* is a KKT point (see [NW06, th. 12.1]) and ∇f(x*) = Σ_j γ_j ∇h_j(x*) + Σ_j λ_j ∇g_j(x*) for some γ_j and λ_j in R. As λ_k = 0 if g_k is not an active constraint, the gradients ∇f(x*), ∇h_1(x*), …, ∇h_s(x*), ∇g_{i_1}(x*), …, ∇g_{i_k}(x*) are linearly dependent, where g_{i_1}, …, g_{i_k} are the active constraints at x*. Thus x* ∈ V_R(g_{i_1}, …, g_{i_k}) and rank Jac(f, h_1, …, h_s, g_{i_1}, …, g_{i_k})(x*) < s + k + 1. This implies x* ∈ V_R(J).

If the LICQ do not hold at x* and g_{i_1}, …, g_{i_k} are the active constraints, then the gradients ∇h_1(x*), …, ∇h_s(x*), ∇g_{i_1}(x*), …, ∇g_{i_k}(x*) are linearly dependent. This implies that ∇f(x*), ∇h_1(x*), …, ∇h_s(x*), ∇g_{i_1}(x*), …, ∇g_{i_k}(x*) are also linearly dependent, and we conclude as in the previous case.

Theorem 4.13.
Let Q = Q(g, ±h) and let J = (h′) be the polar ideal, where h′ ⊂ R[X] is a finite set of generators. If V_R(J) is finite then the moment relaxation (L_d(g, ±h′))_{d∈N} is exact.

Proof. The minimizers belong to V_R(J) by Lemma 4.12. Then MoM exactness follows from Corollary 4.3.

The assumptions in [NDS06], [DNP07] and [Nie13a] for finite convergence and SoS exactness are smoothness conditions or radicality assumptions on the associated complex variety. Our condition for MoM exactness is of a different nature, since it bears on the finiteness of the real polar variety (see Example 4.17).

Notice that by taking equations h′ such that (h′) is the real radical of J, instead of generators of J, we obtain the same MoM relaxation (by Lemma 3.11 and the following remark). Then the SoS exactness property under the (real) radicality assumption implies SoS exactness for the extended relaxation (Q_d(g, h′))_{d∈N}.

We give some examples where we compute the minimum and the minimizers for POPs whose MoM relaxation is exact. Computations were performed with the Julia package
MomentTools.jl, using the SDP solver Mosek, based on an interior point method.
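As an independent sanity check of Example 4.14 below (ours, not part of the paper's SDP computations), one can verify numerically that the Motzkin polynomial vanishes at the four claimed minimizers (±1, ±1) and is nonnegative on a sample grid:

```python
import numpy as np

def motzkin(x, y):
    """The bivariate Motzkin polynomial x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1."""
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

# The four global minimizers (+-1, +-1) all give the value f* = 0.
minimizer_values = [motzkin(sx, sy) for sx in (-1, 1) for sy in (-1, 1)]

# Nonnegativity on a sample grid (f is globally nonnegative, yet not SoS).
grid = np.linspace(-2.0, 2.0, 101)
grid_min = min(motzkin(x, y) for x in grid for y in grid)
```

Such a direct check says nothing about the relaxations, of course; it only confirms the claimed minimum and minimizers of the example.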
Example 4.14 (Motzkin polynomial). We find the global minimizers of the bivariate Motzkin polynomial f = x⁴y² + x²y⁴ − 3x²y² + 1. This is an example of a (globally) nonnegative polynomial which is not a sum of squares (so that the SoS relaxation cannot be exact). Its minimum is f* = 0 and the four minimizers are (±1, ±1) ∈ R² (see [Rez96]).

v0, M = minimize(f, [], [], X, 4, Mosek.Optimizer)

Here f*_{MoM,4} ≈ v0 is a small negative value close to zero, but we cannot recover the minimizers: exactness does not hold. We add f − f*_{MoM,4} = 0 to find them, i.e. we use L_d(±(f − f*_{MoM,4})).

v1, M = minimize(f, [f-v0], [], X, 4, Mosek.Optimizer)

Here the new optimum v1 is again close to zero. The approximation of the minimum is of the same order as before, but we can now recover the minimizers by Corollary 4.10:

w, Xi = get_measure(M)

We obtain numerical approximations of the four minimizers, close to:

ξ_1 ≈ (1, 1),  ξ_2 ≈ (1, −1),  ξ_3 ≈ (−1, 1),  ξ_4 ≈ (−1, −1).

Example 4.15 (Robinson form). We find the minimizers of the Robinson form f = x⁶ + y⁶ + z⁶ + 3x²y²z² − x⁴(y² + z²) − y⁴(x² + z²) − z⁴(x² + y²) on the unit sphere h = x² + y² + z² −
1. The Robinson polynomial has minimum f* = 0 (globally and on the unit sphere), and the minimizers on V_R(h) are:

(√3/3)(±1, ±1, ±1),  (√2/2)(0, ±1, ±1),  (√2/2)(±1, 0, ±1),  (√2/2)(±1, ±1, 0).

The BHC are satisfied at every minimizer (see [Nie14, ex. 3.2]) and we can recover the minimizers by Theorem 4.8.

v, M = minimize(f, [h], [], X, 5, Mosek.Optimizer)
w, Xi = get_measure(M)

Here f*_{MoM,5} ≈ v is a small negative value close to zero, and all twenty minimizers are found; the coordinates of the recovered points approximate 0, ±√3/3 ≈ ±0.577 and ±√2/2 ≈ ±0.707.

Example 4.16 (Gradient ideal). We compute the minimizers of Example 2.11. Let f = (X⁴Y² + X²Y⁴ + Z⁶ − 3X²Y²Z²) + X² + Y² + Z² ∈ R[X, Y, Z]. We want to minimize f over the gradient variety V_R(∂f/∂X, ∂f/∂Y, ∂f/∂Z).

v, M = minimize(f, differentiate(f,X), [], X, 4, Mosek.Optimizer)
w, Xi = get_measure(M, 2.e-2)

The approximation of the minimum f* = 0 is a small negative value v, and the decomposition with a threshold of 2·10⁻² gives a numerical approximation ξ_1 of the minimizer (the origin), with all three coordinates of small magnitude.

Example 4.17 (Singular minimizer). We minimize f = x on the compact semialgebraic set S = S(x³ − y², 1 − x² − y²). The only minimizer is the origin, which is a singular point of the boundary of S. Thus the BHC do not hold. The regularity conditions for the Jacobian and KKT constraints are not satisfied, but the real polar variety is finite. Adding the polar constraints, we have an exact MoM relaxation. We can recover an approximation of the minimizer from the MoM relaxation of order 5:

v, M = polar_minimize(f, [], [x^3-y^2,1-x^2-y^2], X, 5, Mosek.Optimizer)
w, Xi = get_measure(M, 2.e-3)

The approximation of the minimum f* = 0 is a small value v, and the decomposition gives an approximation ξ_1 of the minimizer (the origin). The error of approximation on the minimizer is of the same order as the error on the minimum f*.

References

[BCR98] Jacek Bochnak, Michel Coste, and Marie-Françoise Roy.
Real Algebraic Geometry. Ergebnisse der Mathematik und ihrer Grenzgebiete, 3. Folge / A Series of Modern Surveys in Mathematics. Berlin Heidelberg: Springer-Verlag, 1998. isbn: 978-3-540-64663-1. [BS87] David Bayer and Michael Stillman. “A criterion for detecting m-regularity”.
Inventiones Mathematicae (1987). [CF98] Raúl E. Curto and Lawrence A. Fialkow. Flat Extensions of Positive Moment Matrices: Recursively Generated Relations. American Mathematical Soc., 1998. 73 pp. isbn: 978-0-8218-0869-6. [CLO15] David A. Cox, John B. Little, and Donal O’Shea.
Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra. Fourth edition. Undergraduate Texts in Mathematics. Cham Heidelberg New York Dordrecht London: Springer, 2015. isbn: 978-3-319-16720-6, 978-3-319-16721-3. (MomentTools.jl: https://gitlab.inria.fr/AlgebraicGeometricModeling/MomentTools.jl) [DNP07] James Demmel, Jiawang Nie, and Victoria Powers. “Representations of positive polynomials on noncompact semialgebraic sets via KKT ideals”. Journal of Pure and Applied Algebra
[Eis05] David Eisenbud. The Geometry of Syzygies: A Second Course in Algebraic Geometry and Commutative Algebra. Graduate Texts in Mathematics. New York: Springer-Verlag, 2005. isbn: 978-0-387-22215-8. [HL05] Didier Henrion and Jean-Bernard Lasserre. “Detecting global optimality and extracting solutions in GloptiPoly”.
Chapter in D. Henrion, A. Garulli (editors), Positive Polynomials in Control. Lecture Notes in Control and Information Sciences. Springer Verlag, 2005. [JH16] Cédric Josz and Didier Henrion. “Strong duality in Lasserre’s hierarchy for polynomial optimization”.
Optimization Letters
Foundations of Computational Mathematics
SIAM Journal on Optimization
Moments, positive polynomials and their applications. Imperial College Press Optimization Series v. 1. London; Singapore; Hackensack, NJ: Imperial College Press; distributed by World Scientific Publishing Co, 2010. isbn: 978-1-84816-445-1. [Las+13] Jean-Bernard Lasserre, Monique Laurent, Bernard Mourrain, Philipp Rostalski, and Philippe Trébuchet. “Moment matrices, border bases and real radical computation”.
Journal of Symbolic Computation 51 (2013), pp. 63–85. [Las15] Jean-Bernard Lasserre.
An Introduction to Polynomial and Semi-Algebraic Optimization. Cambridge: Cambridge University Press, 2015. isbn: 978-1-107-44722-6. [Lau07] Monique Laurent. “Semidefinite representations for finite varieties”.
Mathematical Programming
In Emerging Applications of Algebraic Geometry. Vol. 149. IMA Volumes in Mathematics and Its Applications. Springer, 2009, pp. 157–270. [LLR08] Jean-Bernard Lasserre, Monique Laurent, and Philipp Rostalski. “Semidefinite Characterization and Computation of Zero-Dimensional Real Radical Ideals”.
Foundations of Computational Mathematics
Archiv der Mathematik
The Algebraic Theory of Modular Systems. Cambridge University Press, 1916. 148 pp. isbn: 978-0-521-45562-6. [Mar06] Murray Marshall. “Representations of Non-Negative Polynomials Having Finitely Many Zeros”.
Annales de la faculté des sciences de Toulouse Mathématiques
Positive Polynomials and Sums of Squares. American Mathematical Soc., 2008. isbn: 978-0-8218-7527-8. [Mar09] M. Marshall. “Representations of Non-Negative Polynomials, Degree Bounds and Applications to Optimization”.
Canadian Journal of Mathematics
Foundations of Computational Mathematics
ISSAC: Proceedings of the ACM SIGSAM International Symposium on Symbolic and Algebraic Computation. Ed. by M. Kauers. 2005, pp. 253–260. [NDS06] Jiawang Nie, James Demmel, and Bernd Sturmfels. “Minimizing Polynomials via Sum of Squares over the Gradient Ideal”.
Mathematical Programming
Mathematical Programming
SIAM Journal on Optimization
Mathematical Programming
Journal of Complexity
Numerical Optimization. 2nd ed. Springer Series in Operations Research and Financial Engineering. New York: Springer-Verlag, 2006. isbn: 978-0-387-30303-1. [Par02] Pablo A. Parrilo.
An Explicit Construction of Distinguished Representations of Polynomials Nonnegative Over Finite Sets. 2002. [Put93] Mihai Putinar. “Positive Polynomials on Compact Semi-algebraic Sets”.
Indiana University Mathematics Journal
In Contemporary Mathematics. American Mathematical Society, 1996, pp. 251–272. [Sch00] Claus Scheiderer. “Sums of squares of regular functions on real algebraic varieties”.
Transactions of the American Mathematical Society
Journal of Algebra
Journal of Complexity. manuscripta mathematica
In Emerging Applications of Algebraic Geometry. Ed. by Mihai Putinar and Seth Sullivant. The IMA Volumes in Mathematics and its Applications. New York, NY: Springer, 2009, pp. 271–324. isbn: 978-0-387-09686-5. [Sch17] Konrad Schmüdgen.
The Moment Problem. Graduate Texts in Mathematics. Springer International Publishing, 2017. isbn: 978-3-319-64545-2. [Sch91] Konrad Schmüdgen. “The K-moment problem for compact semi-algebraic sets”.
Mathematische Annalen