Convex Generalized Nash Equilibrium Problems and Polynomial Optimization
JIAWANG NIE AND XINDONG TANG
Abstract.
This paper studies convex Generalized Nash Equilibrium Problems (GNEPs) that are given by polynomials. We use rational and parametric expressions for Lagrange multipliers to formulate efficient polynomial optimization problems for computing Generalized Nash Equilibria (GNEs). The Moment-SOS hierarchy of semidefinite relaxations is used to solve the polynomial optimization. Under some general assumptions, we prove the method can find a GNE if one exists, or detect nonexistence of GNEs. Numerical experiments are presented to show the efficiency of the method.

1. Introduction
The Generalized Nash Equilibrium Problem (GNEP) is a kind of game in which each player's objective function is optimized, given the other players' strategies. Suppose there are $N$ players and the $i$th player's strategy is a vector $x_i \in \mathbb{R}^{n_i}$ (the $n_i$-dimensional real Euclidean space). We write

$x_i := (x_{i,1}, \ldots, x_{i,n_i}), \qquad x := (x_1, \ldots, x_N).$

The total dimension of all strategies is $n := n_1 + \cdots + n_N$. The main task of the GNEP is to find a tuple $u = (u_1, \ldots, u_N)$ of strategies such that each $u_i$ is a minimizer of the $i$th player's optimization (denote $u_{-i} := (u_1, \ldots, u_{i-1}, u_{i+1}, \ldots, u_N)$)

(1.1)  $F_i(u_{-i}): \quad \min_{x_i \in \mathbb{R}^{n_i}} \ f_i(u_1, \ldots, u_{i-1}, x_i, u_{i+1}, \ldots, u_N)$
       s.t. $g_{i,j}(u_1, \ldots, u_{i-1}, x_i, u_{i+1}, \ldots, u_N) = 0 \ (j \in \mathcal{E}_i),$
       $g_{i,j}(u_1, \ldots, u_{i-1}, x_i, u_{i+1}, \ldots, u_N) \ge 0 \ (j \in \mathcal{I}_i),$

where the $f_i$ and $g_{i,j}$ are continuously differentiable functions in $x_i$, and the $\mathcal{E}_i, \mathcal{I}_i$ are disjoint finite (possibly empty) labeling sets. A point $u$ satisfying the above is called a Generalized Nash Equilibrium (GNE). For notational convenience, when the $i$th player's strategy is considered, we use $x_{-i}$ to denote the subvector of all players' strategies except the $i$th one, i.e., $x_{-i} := (x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_N)$, and write $x = (x_i, x_{-i})$ accordingly.

This paper focuses on the Generalized Nash Equilibrium Problem of Polynomials (GNEPP), i.e., all the functions $f_i$ and $g_{i,j}$ are polynomials in $x$.

Key words and phrases. Generalized Nash Equilibrium Problem, Convex Polynomials, Polynomial Optimization, Moment-SOS relaxation, Lagrange multiplier expression.

For each $i = 1, \ldots, N$, let $X_i$ be the point-to-set map such that

(1.2)  $X_i(x_{-i}) := \left\{ x_i \in \mathbb{R}^{n_i} \;\middle|\; g_{i,j}(x_i, x_{-i}) = 0 \ (j \in \mathcal{E}_i), \ g_{i,j}(x_i, x_{-i}) \ge 0 \ (j \in \mathcal{I}_i) \right\}.$

The $X_i(x_{-i})$ is the feasible strategy set of $F_i(x_{-i})$. The domain of $X_i$ is

$\operatorname{dom}(X_i) := \{ x_{-i} \in \mathbb{R}^{n - n_i} : X_i(x_{-i}) \ne \emptyset \}.$

The tuple $x$ is said to be a feasible point of the GNEP if $x_i \in X_i(x_{-i})$ for all $i$. Denote the set

(1.3)  $X := \left\{ x \in \mathbb{R}^n \;\middle|\; g_{i,j}(x_i, x_{-i}) = 0 \ (j \in \mathcal{E}_i, \, i = 1, \ldots, N), \ g_{i,j}(x_i, x_{-i}) \ge 0 \ (j \in \mathcal{I}_i, \, i = 1, \ldots, N) \right\}.$

Then $x$ is a feasible point for the GNEP if and only if $x \in X$.

Definition 1.1.
The GNEP given by (1.1) is called convex if for all $i = 1, \ldots, N$ and for all given $x_{-i} \in \operatorname{dom}(X_i)$, the objective $f_i(x_i, x_{-i})$ is convex in $x_i$ on $X_i(x_{-i})$, all $g_{i,j}(x_i, x_{-i})$ $(j \in \mathcal{E}_i)$ are affine linear in $x_i$, and all $g_{i,j}(x_i, x_{-i})$ $(j \in \mathcal{I}_i)$ are concave in $x_i$.

For instance, consider the 2-player GNEPP

(1.4)  $\min_{x_1 \in \mathbb{R}^3} \ \sum_{j=1}^{3} (x_{1,j} - x_{2,j})^2$ s.t. $x_2^T x_1 - 1 = 0$, $(x_{1,1}, x_{1,2}, x_{1,3}) \ge 0$;
       $\min_{x_2 \in \mathbb{R}^3} \ \sum_{j=1}^{3} \big( (x_{2,j})^2 - x_{2,j} \prod_{k=1}^{3} x_{1,k} \big)$ s.t. $\|x_1\|^2 - \|x_2\|^2 \ge 0$, $(x_{2,1}, x_{2,2}, x_{2,3}) \ge 0$.

In the above, $\|\cdot\|$ denotes the Euclidean norm. For each $i$, the Hessian of $f_i$ with respect to $x_i$ is positive semidefinite for all $x_{-i} \in \operatorname{dom}(X_i)$. All players have convex optimization problems, so this is a convex GNEPP. One can directly check that it has a unique GNE $u = (u_1, u_2)$ with

$u_1 = \Big( \sqrt[4]{\tfrac{2}{3}}, \ \sqrt[4]{\tfrac{2}{3}}, \ \sqrt[4]{\tfrac{2}{3}} \Big), \qquad u_2 = \Big( \tfrac{1}{\sqrt[4]{54}}, \ \tfrac{1}{\sqrt[4]{54}}, \ \tfrac{1}{\sqrt[4]{54}} \Big).$

GNEPs originated from economics in [4, 8]. Recently, they have been widely used in many areas, such as economics, transportation, telecommunications and pollution control. Convex GNEPs often appear in applications. We refer to [1, 3, 7] for recent applications of GNEPPs. Some application examples are shown in Section 6. For classical Nash Equilibrium Problems (NEPs) of polynomials, there exist semidefinite relaxation methods [2, 45]. Convex GNEPs can be reformulated as variational inequality (VI) or quasi-variational inequality (QVI) problems [11, 18, 19, 34, 48]. The Karush-Kuhn-Tucker (KKT) system for all players' optimization problems is considered in [10]. Penalty functions are used to solve convex GNEPs in [14, 15, 17]. Some methods using the Nikaido-Isoda function are given in [23, 24]. For general nonconvex GNEPs, we refer to [9, 12, 25, 44]. It is generally quite difficult to solve GNEPs, even if they are convex. This is because the KKT system of a convex GNEP may still be difficult to solve. The set of GNEs may be nonconvex, even for convex NEPs (see [45]). We refer to [13] for a survey on GNEPs.
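To make the definition of a GNE concrete, here is a small numerical sketch (our own toy instance with scalar strategies, not the GNEPP above): each player's best response is approximated by a grid search, and a GNE is a point where every $u_i$ is a best response to $u_{-i}$ on its feasible set.

```python
def best_response_1(x2, grid):
    """Player 1: min (x1 - x2)^2  s.t.  x1 >= 0,  x1 + x2 <= 1."""
    feas = [x1 for x1 in grid if x1 >= 0 and x1 + x2 <= 1 + 1e-12]
    return min(feas, key=lambda x1: (x1 - x2) ** 2)

def best_response_2(x1, grid):
    """Player 2: min (x2 - 0.5)^2  s.t.  x2 >= 0,  x1 + x2 <= 1."""
    feas = [x2 for x2 in grid if x2 >= 0 and x1 + x2 <= 1 + 1e-12]
    return min(feas, key=lambda x2: (x2 - 0.5) ** 2)

grid = [k / 1000.0 for k in range(0, 1001)]
u = (0.5, 0.5)   # candidate equilibrium for this toy instance
# each strategy is a best response to the other, so u is a GNE
assert abs(best_response_1(u[1], grid) - u[0]) < 1e-6
assert abs(best_response_2(u[0], grid) - u[1]) < 1e-6
```

Note how the shared constraint $x_1 + x_2 \le 1$ makes each player's feasible set depend on the other player's strategy, which is exactly what distinguishes a GNEP from a classical NEP.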
Contributions.
This paper focuses on convex GNEPPs. Under some constraint qualifications, a feasible point is a GNE if and only if it satisfies the KKT conditions. We introduce rational and parametric expressions for Lagrange multipliers and formulate polynomial optimization for computing GNEs. Our major results are:

• For GNEPPs, we introduce rational expressions for Lagrange multipliers and study their properties. We prove the existence of rational expressions and give a sufficient and necessary condition for positivity of denominators. Moreover, we give parametric expressions for Lagrange multipliers in several cases. For all GNEPs, parametric expressions always exist.

• Using rational and parametric expressions, we formulate polynomial optimization problems and propose an algorithm for computing GNEs. Under some general assumptions, we prove that the algorithm can compute a GNE if one exists, or detect nonexistence of GNEs. To the best of the authors' knowledge, this is the first numerical method that has these properties.

• The Moment-SOS semidefinite relaxations are used to solve the polynomial optimization problems for finding and verifying GNEs. Numerical experiments are presented to show the efficiency of the method.

The paper is organized as follows. Some preliminaries about polynomial optimization are given in Section 2. We introduce rational expressions for Lagrange multipliers in Section 3. Parametric expressions for Lagrange multipliers are given in Section 4. We formulate polynomial optimization problems for computing GNEs and show how to solve them using the Moment-SOS hierarchy in Section 5. Numerical experiments and applications are given in Section 6. Conclusions and some discussions are given in Section 7.

2. Preliminaries
Notation
The symbol $\mathbb{N}$ (resp., $\mathbb{R}$, $\mathbb{C}$) stands for the set of nonnegative integers (resp., real numbers, complex numbers). For a positive integer $k$, denote the set $[k] := \{1, \ldots, k\}$. For a real number $t$, $\lceil t \rceil$ (resp., $\lfloor t \rfloor$) denotes the smallest integer not smaller than $t$ (resp., the biggest integer not bigger than $t$). We use $e_i$ to denote the vector whose $i$th entry is 1 and all others are zeros. By writing $A \succeq 0$ (resp., $A \succ 0$), we mean that the matrix $A$ is symmetric positive semidefinite (resp., positive definite). For the $i$th player's strategy vector $x_i \in \mathbb{R}^{n_i}$, the $x_{i,j}$ denotes the $j$th entry of $x_i$, for $j = 1, \ldots, n_i$. When we write $(y, x_{-i})$, it means that the $i$th player's strategy is $y \in \mathbb{R}^{n_i}$, while the vector of all other players' strategies is fixed to be $x_{-i}$. Let $\mathbb{R}[x]$ denote the ring of polynomials with real coefficients in $x$, and $\mathbb{R}[x]_d$ denote its subset of polynomials whose degrees are not greater than $d$. For the $i$th player's strategy vector $x_i$, the notation $\mathbb{R}[x_i]$ and $\mathbb{R}[x_i]_d$ are defined in the same way. For the $i$th player's objective $f_i(x)$, the notation $\nabla_{x_i} f_i$, $\nabla_{x_i}^2 f_i$ respectively denotes its gradient and Hessian with respect to $x_i$.

In the following, we use the letter $z$ to represent either $x$, $x_i$, or $(x, \omega)$ for some new variables $\omega$, for convenience of discussion. Suppose $z := (z_1, \ldots, z_l)$. For a polynomial $p(z) \in \mathbb{R}[z]$, the $p = 0$ means that $p(z)$ is identically zero on $\mathbb{R}^l$. We say the polynomial $p$ is nonzero if $p \ne 0$. Let $\alpha := (\alpha_1, \ldots, \alpha_l) \in \mathbb{N}^l$, and we denote

$z^\alpha := z_1^{\alpha_1} \cdots z_l^{\alpha_l}, \qquad |\alpha| := \alpha_1 + \cdots + \alpha_l.$

For an integer $d > 0$, denote the monomial power set

$\mathbb{N}^l_d := \{ \alpha \in \mathbb{N}^l : |\alpha| \le d \}.$
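The monomial power set can be enumerated by total degree; its cardinality is $\binom{l+d}{d}$. A minimal sketch (our own illustration, not part of the paper):

```python
from itertools import product

def monomial_power_set(l, d):
    """All exponent tuples a in N^l with |a| <= d, grouped by total degree."""
    return [a for total in range(d + 1)
              for a in product(range(d + 1), repeat=l)
              if sum(a) == total]

# for l = 2 and d = 3 there are binom(5, 3) = 10 exponents,
# matching the length of the monomial vector [z]_3 below
S = monomial_power_set(2, 3)
assert len(S) == 10 and S[0] == (0, 0)
```

Each tuple $(\alpha_1, \ldots, \alpha_l)$ corresponds to the monomial $z^\alpha = z_1^{\alpha_1} \cdots z_l^{\alpha_l}$.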
We use $[z]_d$ to denote the vector of all monomials in $z$ whose degree is at most $d$, ordered in the graded alphabetical ordering. For instance, if $z = (z_1, z_2)$, then

$[z]_3 = (1, \ z_1, \ z_2, \ z_1^2, \ z_1 z_2, \ z_2^2, \ z_1^3, \ z_1^2 z_2, \ z_1 z_2^2, \ z_2^3).$

Throughout the paper, a property is said to hold generically if it holds for all points in the space of input data except a set of Lebesgue measure zero.

2.1. Ideals and positive polynomials.
Let $\mathbb{F} := \mathbb{R}$ or $\mathbb{C}$. For a polynomial $p \in \mathbb{F}[z]$ and subsets $I, J \subseteq \mathbb{F}[z]$, define the product and Minkowski sum

$p \cdot I := \{ pq : q \in I \}, \qquad I + J := \{ a + b : a \in I, \, b \in J \}.$

The subset $I$ is an ideal if $p \cdot I \subseteq I$ for all $p \in \mathbb{F}[z]$ and $I + I \subseteq I$. For a tuple of polynomials $q = (q_1, \ldots, q_m)$, the set

$\operatorname{Ideal}[q] := q_1 \cdot \mathbb{F}[z] + \cdots + q_m \cdot \mathbb{F}[z]$

is the ideal generated by $q$, which is the smallest ideal containing each $q_i$.

We review basic concepts in polynomial optimization. A polynomial $\sigma \in \mathbb{R}[z]$ is said to be a sum of squares (SOS) if $\sigma = p_1^2 + \cdots + p_k^2$ for some polynomials $p_i \in \mathbb{R}[z]$. The set of all SOS polynomials in $z$ is denoted as $\Sigma[z]$. For a degree $d$, we denote the truncation $\Sigma[z]_d := \Sigma[z] \cap \mathbb{R}[z]_d$. For a tuple $g = (g_1, \ldots, g_t)$ of polynomials in $z$, its quadratic module is the set

$\operatorname{Qmod}[g] := \Sigma[z] + g_1 \cdot \Sigma[z] + \cdots + g_t \cdot \Sigma[z].$

Similarly, we denote the truncation of $\operatorname{Qmod}[g]$

$\operatorname{Qmod}[g]_d := \Sigma[z]_d + g_1 \cdot \Sigma[z]_{d - \deg(g_1)} + \cdots + g_t \cdot \Sigma[z]_{d - \deg(g_t)}.$

The tuple $g$ determines the basic closed semi-algebraic set

(2.1)  $S(g) := \{ z \in \mathbb{R}^l : g_1(z) \ge 0, \ \ldots, \ g_t(z) \ge 0 \}.$

For a tuple $h = (h_1, \ldots, h_s)$ of polynomials in $\mathbb{R}[z]$, its real zero set is

$Z(h) := \{ z \in \mathbb{R}^l : h_1(z) = \cdots = h_s(z) = 0 \}.$

The set $\operatorname{Ideal}[h] + \operatorname{Qmod}[g]$ is said to be archimedean if there exists $\rho \in \operatorname{Ideal}[h] + \operatorname{Qmod}[g]$ such that the set $S(\rho)$ is compact. If $\operatorname{Ideal}[h] + \operatorname{Qmod}[g]$ is archimedean, then $Z(h) \cap S(g)$ must be compact. Conversely, if $Z(h) \cap S(g)$ is compact, say, $Z(h) \cap S(g)$ is contained in the ball $R - \|z\|^2 \ge 0$, then $\operatorname{Ideal}[h] + \operatorname{Qmod}[g, R - \|z\|^2]$ is archimedean and $Z(h) \cap S(g) = Z(h) \cap S(g, R - \|z\|^2)$. Clearly, if $f \in \operatorname{Ideal}[h] + \operatorname{Qmod}[g]$, then $f \ge 0$ on $Z(h) \cap S(g)$. The reverse is not necessarily true. However, when $\operatorname{Ideal}[h] + \operatorname{Qmod}[g]$ is archimedean, if $f > 0$ on $Z(h) \cap S(g)$, then $f \in \operatorname{Ideal}[h] + \operatorname{Qmod}[g]$. This conclusion is referenced as Putinar's Positivstellensatz [49]. Interestingly, if $f \ge 0$ on $Z(h) \cap S(g)$, we also have $f \in \operatorname{Ideal}[h] + \operatorname{Qmod}[g]$ under some standard optimality conditions [38].
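A Putinar-type certificate is just a polynomial identity, so it can be sanity-checked numerically. As a small sketch (our own toy example, not from the paper): for $g = 1 - z^2$, the polynomial $f = 2 - z$ is positive on $S(g) = [-1, 1]$ and admits the representation $f = \big(1 + \tfrac{1}{2}(1-z)^2\big) + \tfrac{1}{2}\, g$, where both coefficients are SOS.

```python
import random

def f(z):       # target polynomial, positive on S(g) = [-1, 1]
    return 2.0 - z

def sigma0(z):  # SOS coefficient: 1 + (1/2)(1 - z)^2
    return 1.0 + 0.5 * (1.0 - z) ** 2

def g(z):       # constraint polynomial defining S(g)
    return 1.0 - z ** 2

# the identity f = sigma0 + (1/2) * g holds for ALL z, which certifies
# f > 0 on S(g); check it at random sample points
for _ in range(1000):
    z = random.uniform(-10.0, 10.0)
    assert abs(f(z) - (sigma0(z) + 0.5 * g(z))) < 1e-8
```

Finding such certificates in general is the job of semidefinite programming; verifying a given one only requires evaluating polynomials.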
2.2. Localizing and moment matrices.
Let $\mathbb{R}^{\mathbb{N}^l_{2d}}$ denote the space of all real vectors that are labeled by $\alpha \in \mathbb{N}^l_{2d}$. A vector $y \in \mathbb{R}^{\mathbb{N}^l_{2d}}$ is labeled as $y = (y_\alpha)_{\alpha \in \mathbb{N}^l_{2d}}$. Such a $y$ is called a truncated multi-sequence (tms) of degree $2d$. For a polynomial $f = \sum_{\alpha \in \mathbb{N}^l_{2d}} f_\alpha z^\alpha \in \mathbb{R}[z]_{2d}$, define the operation

(2.2)  $\langle f, y \rangle := \sum_{\alpha \in \mathbb{N}^l_{2d}} f_\alpha y_\alpha.$

The operation $\langle f, y \rangle$ is a bilinear function in $(f, y)$. For a polynomial $q \in \mathbb{R}[z]$ with $\deg(q) \le 2d$, and the integer $t = d - \lceil \deg(q)/2 \rceil$, the outer product $q \cdot [z]_t ([z]_t)^T$ is a symmetric matrix polynomial in $z$, with length $\binom{l+t}{t}$. We write the expansion as

$q \cdot [z]_t ([z]_t)^T = \sum_{\alpha \in \mathbb{N}^l_{2d}} z^\alpha Q_\alpha,$

for some symmetric matrices $Q_\alpha$. Then we define the matrix function

(2.3)  $L^{(d)}_q[y] := \sum_{\alpha \in \mathbb{N}^l_{2d}} y_\alpha Q_\alpha.$

It is called the $d$th localizing matrix of $q$ generated by $y$. For given $q$, the matrix $L^{(d)}_q[y]$ is linear in $y$. Localizing and moment matrices are important for getting semidefinite relaxations of solving polynomial optimization [27, 36, 37]. They are also useful for solving truncated moment problems [16, 40] and tensor decompositions [41, 42]. We refer to [29, 30, 32, 33, 35, 39] for more references about polynomial optimization and moment problems.

2.3. Lagrange multiplier expressions.
We study optimality conditions for GNEs. Consider the $i$th player's optimization. For convenience, suppose $\mathcal{E}_i \cup \mathcal{I}_i = [m_i]$ and $g_i = (g_{i,1}, \ldots, g_{i,m_i})$. For a given $x_{-i}$, under some suitable constraint qualifications (e.g., the linear independence constraint qualification (LICQ), the Mangasarian-Fromovitz constraint qualification (MFCQ), or Slater's condition; see [6] for them), if $x_i$ is a minimizer of $F_i(x_{-i})$, then there exists a Lagrange multiplier vector $\lambda_i := (\lambda_{i,1}, \ldots, \lambda_{i,m_i})$ such that

(2.4)  $\nabla_{x_i} f_i(x) - \sum_{j=1}^{m_i} \lambda_{i,j} \nabla_{x_i} g_{i,j}(x) = 0,$
       $\lambda_i \perp g_i(x), \quad g_{i,j}(x) = 0 \ (j \in \mathcal{E}_i),$
       $\lambda_{i,j} \ge 0 \ (j \in \mathcal{I}_i), \quad g_{i,j}(x) \ge 0 \ (j \in \mathcal{I}_i).$

This is called the first order Karush-Kuhn-Tucker system for $F_i(x_{-i})$. Such an $(x_i, \lambda_i)$ is called a critical pair of $F_i(x_{-i})$. Therefore, if $x$ is a GNE, under constraint qualifications, then (2.4) holds for all $i \in [N]$, i.e., there exist Lagrange multiplier vectors $\lambda_1, \ldots, \lambda_N$ such that

(2.5)  $\nabla_{x_i} f_i(x) - \sum_{j=1}^{m_i} \lambda_{i,j} \nabla_{x_i} g_{i,j}(x) = 0 \ (i \in [N]),$
       $\lambda_i \perp g_i(x) \ (i \in [N]), \quad g_{i,j}(x) = 0 \ (i \in [N], \, j \in \mathcal{E}_i),$
       $\lambda_{i,j} \ge 0 \ (i \in [N], \, j \in \mathcal{I}_i), \quad g_{i,j}(x) \ge 0 \ (i \in [N], \, j \in \mathcal{I}_i).$

A point $x$ satisfying (2.5) is called a KKT point for the GNEP. For convex GNEPs, each KKT point is a GNE [13, Theorem 4.6].
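The conditions in (2.4) can be verified directly once a candidate critical pair is given. A minimal sketch (a hypothetical one-variable player, not an example from the paper): for $\min_x (x-2)^2$ subject to $1 - x \ge 0$, the minimizer is $x = 1$ with multiplier $\lambda = 2$.

```python
def check_kkt(x, lam, grad_f, grad_g, g, tol=1e-9):
    """Check the first-order KKT system of (2.4) for a single
    inequality constraint g(x) >= 0: stationarity, complementarity,
    and primal/dual feasibility."""
    stationarity    = abs(grad_f(x) - lam * grad_g(x)) < tol
    complementarity = abs(lam * g(x)) < tol
    feasibility     = g(x) >= -tol and lam >= -tol
    return stationarity and complementarity and feasibility

# player's data: f(x) = (x - 2)^2,  g(x) = 1 - x
grad_f = lambda x: 2.0 * (x - 2.0)
g      = lambda x: 1.0 - x
grad_g = lambda x: -1.0

assert check_kkt(1.0, 2.0, grad_f, grad_g, g)      # a critical pair
assert not check_kkt(0.0, 0.0, grad_f, grad_g, g)  # interior point, not stationary
```

In the GNEP setting, the same check is run for every player simultaneously, with the other players' strategies held fixed.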
For each critical pair $(x_i, \lambda_i)$ of $F_i(x_{-i})$, the equation (2.4) implies that

(2.6)  $\underbrace{\begin{bmatrix} \nabla_{x_i} g_{i,1}(x) & \nabla_{x_i} g_{i,2}(x) & \cdots & \nabla_{x_i} g_{i,m_i}(x) \\ g_{i,1}(x) & 0 & \cdots & 0 \\ 0 & g_{i,2}(x) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & g_{i,m_i}(x) \end{bmatrix}}_{G_i(x)} \underbrace{\begin{bmatrix} \lambda_{i,1} \\ \lambda_{i,2} \\ \vdots \\ \lambda_{i,m_i} \end{bmatrix}}_{\lambda_i} = \underbrace{\begin{bmatrix} \nabla_{x_i} f_i(x) \\ 0 \\ \vdots \\ 0 \end{bmatrix}}_{\hat{f}_i(x)}.$

If there exists a matrix polynomial $L_i(x)$ such that

(2.7)  $L_i(x) G_i(x) = I_{m_i},$

then the Lagrange multipliers $\lambda_i$ can be expressed as $\lambda_i = L_i(x) \hat{f}_i(x)$. The vector of polynomials $\lambda_i(x) := (\lambda_{i,1}(x), \ldots, \lambda_{i,m_i}(x))$ is called a polynomial expression for Lagrange multipliers [43], where $\lambda_{i,j}(x)$ is the $j$th component of $L_i(x) \hat{f}_i(x)$. The matrix polynomial $G_i(x)$ is said to be nonsingular if it has full column rank for all $x \in \mathbb{C}^n$. It was shown that $G_i(x)$ is nonsingular if and only if there exists $L_i(x) \in \mathbb{R}[x]^{m_i \times (m_i + n_i)}$ such that (2.7) holds [43, Proposition 5.1]. The nonsingularity of $G_i(x)$ is independent of objective functions and of the other players' constraints.

For example, consider the GNEP given by (1.4). The first player's optimization has a polynomial expression for Lagrange multipliers

(2.8)  $\lambda_{1,1} = x_1^T \nabla_{x_1} f_1, \qquad \lambda_{1,j+1} = \frac{\partial f_1(x)}{\partial x_{1,j}} - \lambda_{1,1} x_{2,j} \quad (j = 1, 2, 3).$

For the second player, the matrix polynomial $G_2(x)$ is not nonsingular, and polynomial expressions do not exist. In Section 6, we give a rational expression for the second player's Lagrange multipliers.

3. Rational expressions for Lagrange Multipliers
In Section 2.3, a polynomial expression for the $i$th player's Lagrange multipliers exists if and only if the matrix $G_i(x)$ is nonsingular. For classical NEPs of polynomials, the nonsingularity holds generically [43, 45]. However, this is often not the case for GNEPs. Let $g_i = (g_{i,1}, \ldots, g_{i,m_i})$ be the tuple of constraining polynomials in $F_i(x_{-i})$ and let $G_i(x)$ be the matrix polynomial as in (2.6). If there exist a matrix polynomial $\hat{L}_i(x)$ and a nonzero scalar polynomial $q_i(x)$ such that

(3.1)  $\hat{L}_i(x) G_i(x) = q_i(x) \cdot I_{m_i},$

then $q_i(x) \lambda_i = \hat{L}_i(x) \hat{f}_i(x)$ for all critical pairs $(x_i, \lambda_i)$ of $F_i(x_{-i})$. Let

(3.2)  $\hat{\lambda}_i(x) := \hat{L}_i(x) \hat{f}_i(x).$

Denote by $\hat{\lambda}_{i,j}(x)$ the $j$th entry of $\hat{\lambda}_i(x)$.

Definition 3.1.
For the $i$th player's optimization $F_i(x_{-i})$, if there exist polynomials $\hat{\lambda}_{i,1}, \ldots, \hat{\lambda}_{i,m_i}$ and a nonzero polynomial $q_i$ such that $q_i(x) \ge 0$ for all $x \in X$, and $\hat{\lambda}_{i,j}(x) = q_i(x) \lambda_{i,j}$ holds for all critical pairs $(x_i, \lambda_i)$, then we call the tuple

$\hat{\lambda}_i / q_i := \big( \hat{\lambda}_{i,1}(x)/q_i(x), \ \ldots, \ \hat{\lambda}_{i,m_i}(x)/q_i(x) \big)$

a rational expression for Lagrange multipliers.

The following is an example of a rational expression.
Example 3.2.
Consider the 2-player convex GNEP

(3.3)  $\min_{x_1 \in \mathbb{R}^2} \ f_1(x_1, x_2)$ s.t. $2 - x_1^T x_1 - x_2 \ge 0$;
       $\min_{x_2 \in \mathbb{R}} \ f_2(x_1, x_2)$ s.t. $x_2 - \tfrac{1}{3} x_1^T x_1 \ge 0, \ 1 - x_2 \ge 0$.

The matrices of polynomials $G_1(x)$ and $G_2(x)$ are

$G_1(x) := \begin{bmatrix} -2x_{1,1} \\ -2x_{1,2} \\ 2 - x_1^T x_1 - x_2 \end{bmatrix}, \qquad G_2(x) := \begin{bmatrix} 1 & -1 \\ x_2 - \tfrac{1}{3} x_1^T x_1 & 0 \\ 0 & 1 - x_2 \end{bmatrix}.$

For $x_1 = (0, 0)$ and $x_2 = 2$, the $G_1(x)$ is the zero vector. For $x_1 = (\sqrt{3}, 0)$ and $x_2 = 1$, $\operatorname{rank}(G_2(x)) = 1$. Both $G_1(x)$, $G_2(x)$ are not nonsingular, so there are no polynomial expressions for Lagrange multipliers. However, (3.1) holds for

(3.4)  $q_1(x) = 2 - x_2, \qquad q_2(x) = 1 - \tfrac{1}{3} x_1^T x_1,$
       $\hat{L}_1(x) = \begin{bmatrix} -\tfrac{1}{2} x_{1,1} & -\tfrac{1}{2} x_{1,2} & 1 \end{bmatrix}, \qquad \hat{L}_2(x) = \begin{bmatrix} 1 - x_2 & 1 & 1 \\ \tfrac{1}{3} x_1^T x_1 - x_2 & 1 & 1 \end{bmatrix}.$

The Lagrange multiplier expressions are

(3.5)  $\lambda_1 = \frac{- x_1^T \nabla_{x_1} f_1}{2 q_1}, \qquad \lambda_{2,1} = \frac{1 - x_2}{q_2} \cdot \frac{\partial f_2}{\partial x_2}, \qquad \lambda_{2,2} = \frac{\tfrac{1}{3} x_1^T x_1 - x_2}{q_2} \cdot \frac{\partial f_2}{\partial x_2}.$

In Section 3.2, we show that if none of the $g_{i,j}$ is identically zero, then a rational expression for $\lambda_i$ always exists.

3.1. Optimality conditions and rational expressions.
Suppose for each $i$, there exists a rational expression $\hat{\lambda}_i / q_i$ for the $i$th player's Lagrange multiplier vector. Since $q_i(x) \lambda_{i,j} = \hat{\lambda}_{i,j}(x)$ and $q_i(x) \ge 0$ on $X$, the following holds for all GNEs:

(3.6)  $q_i(x) \nabla_{x_i} f_i(x) - \sum_{j=1}^{m_i} \hat{\lambda}_{i,j}(x) \nabla_{x_i} g_{i,j}(x) = 0 \ (i \in [N]),$
       $\hat{\lambda}_i(x) \perp g_i(x), \quad g_{i,j}(x) = 0 \ (j \in \mathcal{E}_i, \, i \in [N]),$
       $g_{i,j}(x) \ge 0, \ \hat{\lambda}_{i,j}(x) \ge 0 \ (j \in \mathcal{I}_i, \, i \in [N]).$

Under some constraint qualifications, if $x$ is a GNE, then it satisfies (3.6). For convex GNEPs, if $x$ satisfies (3.6) and $q_i(x) > 0$ for all $i$, then $x$ must be a GNE, since it satisfies (2.5) with $\lambda_{i,j}$ given by $\lambda_{i,j} = \hat{\lambda}_{i,j}(x)/q_i(x)$. This leads us to consider the following optimization problem:

(3.7)  $\min_{x \in X} \ [x]_1^T \, \Theta \, [x]_1$
       s.t. $q_i(x) \nabla_{x_i} f_i(x) - \sum_{j=1}^{m_i} \hat{\lambda}_{i,j}(x) \nabla_{x_i} g_{i,j}(x) = 0 \ (i \in [N]),$
       $\hat{\lambda}_{i,j}(x) \perp g_{i,j}(x) \ (j \in \mathcal{E}_i \cup \mathcal{I}_i, \, i \in [N]),$
       $\hat{\lambda}_{i,j}(x) \ge 0 \ (j \in \mathcal{I}_i, \, i \in [N]).$

In the above, $\Theta$ is a generically chosen positive definite matrix. The following proposition is straightforward.
Proposition 3.3.
For the GNEPP given by (1.1), suppose for each $i \in [N]$, the Lagrange multiplier vector $\lambda_i$ has a rational expression as in Definition 3.1.

(i) If (3.7) is infeasible, then the GNEP has no KKT points. Therefore, if every GNE is a KKT point, then the infeasibility of (3.7) implies the nonexistence of GNEs.

(ii) Assume the GNEP is convex. If $u$ is a feasible point of (3.7) and $q_i(u) > 0$ for all $i \in [N]$, then $u$ must be a GNE.

In Proposition 3.3 (ii), if $q_i(u) = 0$, then $u$ may not be a GNE. The following is such an example.

Example 3.4. [14, Example A.8] Consider the 3-player convex GNEP

$\min_{x_1 \in \mathbb{R}} \ -x_1$  s.t. $x_3 \le x_1 + x_2 \le 1, \ x_1 \ge 0$;
$\min_{x_2 \in \mathbb{R}} \ (x_2 - 0.5)^2$  s.t. $x_3 \le x_1 + x_2 \le 1, \ x_2 \ge 0$;
$\min_{x_3 \in \mathbb{R}} \ (x_3 - 1.5 x_1)^2$  s.t. $0 \le x_3 \le 2$.

For $i = 1, 2$, with the constraints ordered as $(x_1 + x_2 - x_3, \ 1 - x_1 - x_2, \ x_i)$, we have the rational expression

(3.8)  $\lambda_{i,1} = \frac{x_i (1 - x_1 - x_2)}{q_i} \frac{\partial f_i}{\partial x_i}, \qquad \lambda_{i,2} = \frac{- x_i (x_1 + x_2 - x_3)}{q_i} \frac{\partial f_i}{\partial x_i},$
       $\lambda_{i,3} = \frac{\partial f_i}{\partial x_i} - \lambda_{i,1} + \lambda_{i,2}, \qquad q_i(x) = x_i (1 - x_3).$

For the third player, we have the polynomial expression

(3.9)  $\lambda_{3,1} = \frac{2 - x_3}{2} \frac{\partial f_3}{\partial x_3}, \qquad \lambda_{3,2} = \lambda_{3,1} - \frac{\partial f_3}{\partial x_3}.$

Let $q_3(x) = 1$. Then $u_1 = 0$, $u_2 = 0.5$, $u_3 = 0$ satisfy (3.6) with $q_1(u) = 0$. However, $u_1 = 0$ is not a minimizer for the first player's optimization $F_1(u_{-1})$. It is interesting to note that for $u_1 = \tfrac{2}{3}$, $u_2 = \tfrac{1}{3}$, $u_3 = 1$, the tuple $u = (u_1, u_2, u_3)$ satisfies (3.6) with $q_1(u) = q_2(u) = 0$, but $u$ is still a GNE [14].

We would like to remark that for some special GNEPs, the equality $q_i(u) = 0$ may imply that $u_i$ is a minimizer of $F_i(u_{-i})$. See Example 3.8 for such a case.

3.2. Existence of rational expressions.
We study the existence of rational expressions with nonnegative $q_i(x)$. The following is a useful lemma.

Lemma 3.5.
For the $i$th player's optimization $F_i(x_{-i})$, if every $g_{i,j}(x)$ is not identically zero, then a rational expression exists for $\lambda_i$.

Proof. Let $H_i(x) = G_i(x)^T G_i(x)$, where $G_i(x)$ is the matrix polynomial in (2.6). If every $g_{i,j}(x)$ is not identically zero, then the determinant $\det H_i(x)$ is also not identically zero. Let $\operatorname{adj} H_i(x)$ denote the adjoint matrix of $H_i(x)$; then

$H_i(x) \cdot \operatorname{adj} H_i(x) = \det H_i(x) \cdot I_{m_i}.$

For $\hat{L}_i(x) := \operatorname{adj} H_i(x) \cdot G_i(x)^T$, we get the rational expression

(3.10)  $\lambda_i(x) = \frac{1}{\det H_i(x)} \hat{L}_i(x) \cdot \hat{f}_i(x).$

Moreover, $q_i(x) := \det H_i(x) \ge 0$ for all $x$, since $H_i(x)$ is positive semidefinite everywhere. □

The rational expression in (3.10) may not be very practical, because the determinantal polynomials often have high degrees. In practice, we usually have rational expressions with lower degrees. If each $q_i(x) > 0$ for all $x \in X$, then every solution of (3.7) is a GNE. One wonders when a rational expression exists with $q_i(x) > 0$ on $X$. The matrix polynomial $G_i$ is said to be nonsingular on $X$ if $G_i(x)$ has full column rank for all $x \in X$. For the GNEP given in Example 3.2, both $G_1(x)$ and $G_2(x)$ are nonsingular on $X$. The following proposition is useful.
Proposition 3.6.
The matrix $G_i(x)$ is nonsingular on $X$ if and only if there exists a matrix polynomial $\hat{L}_i(x)$ satisfying (3.1) with $q_i(x) > 0$ on $X$.

Proof. First, if $G_i(x)$ has full column rank for all $x \in X$, let $H_i(x) := G_i(x)^T G_i(x)$; then $H_i(x)$ is positive definite and the determinant $\det H_i(x) > 0$ for all $x \in X$. Therefore, for $\hat{L}_i(x) := \operatorname{adj} H_i(x) \cdot G_i(x)^T$, the equation (3.1) is satisfied with $q_i(x) := \det H_i(x) > 0$ on $X$. Second, if (3.1) holds with $q_i(x) > 0$ on $X$, then $G_i(x)$ is clearly nonsingular on $X$. □

If $G_i(x)$ is nonsingular on $X$, then the LICQ must hold for the $i$th player's optimization. For such a case, every GNE must be a KKT point. We remark that even for the case that $q_i(x) < 0$ for some $x \in X$, it is still possible to get a GNE. We refer to Example 6.6 for such a case.

3.3. A numerical method for finding rational expressions.
We give a numerical method for finding rational expressions for Lagrange multipliers. It was introduced in [46] for solving bilevel optimization problems. Let $G_i(x)$ be the matrix polynomial defined in (2.6). For convenience, denote the tuples

$g_{\mathcal{E}} := (g_{i,j})_{i \in [N], \, j \in \mathcal{E}_i}, \qquad g_{\mathcal{I}} := (g_{i,j})_{i \in [N], \, j \in \mathcal{I}_i}.$

For a priori degree $d$, consider the following linear convex optimization:

(3.11)  $\max_{\hat{L}_i, \, q_i, \, \gamma} \ \gamma$
        s.t. $\hat{L}_i \cdot G_i = q_i \cdot I_{m_i}, \quad q_i(v) = 1,$
        $q_i - \gamma \in \operatorname{Ideal}[g_{\mathcal{E}}]_d + \operatorname{Qmod}[g_{\mathcal{I}}]_d,$
        $\hat{L}_i \in (\mathbb{R}[x]_{d - \deg G_i})^{m_i \times (m_i + n_i)}.$

In the above, the first equality is the same as (3.1). The second equality ensures that $q_i$ is not identically zero, where $v$ is a priori point in $X$. The constraint $q_i - \gamma \in \operatorname{Ideal}[g_{\mathcal{E}}]_d + \operatorname{Qmod}[g_{\mathcal{I}}]_d$ forces $q_i(x) \ge \gamma$ on $X$. Therefore, if the maximum $\gamma$ is positive, then $q_i(x) > 0$ on $X$. By Lemma 3.5, one can always find a feasible $\gamma \ge 0$ when $d \ge \deg(H_i(x))$, if none of the $g_{i,j}(x)$ is identically zero. By Proposition 3.6, if each $G_i(x)$ is nonsingular on $X$ and the archimedeanness holds for $X$, then there must exist $\gamma > 0$ for some $d$. If $(\hat{L}_i, q_i, \gamma)$ is a feasible point of (3.11), then one can get a rational expression for Lagrange multipliers by letting $\hat{\lambda}_i(x) = \hat{L}_i(x) \hat{f}_i(x)$.

Example 3.7.
Consider the GNEP in Example 3.2. We have $g_{\mathcal{E}} = \emptyset$ and

$g_{\mathcal{I}} = \big( 2 - x_1^T x_1 - x_2, \ \ x_2 - \tfrac{1}{3} x_1^T x_1, \ \ 1 - x_2 \big).$

Let $\hat{L}_1(x)$ and $\hat{L}_2(x)$ be the matrix polynomials in (3.4), and $q_1(x) = 2 - x_2$, $q_2(x) = 1 - \tfrac{1}{3} x_1^T x_1$. Let $v := (0, 0, 1)$ for both players, and $\gamma_1 = 1$, $\gamma_2 = 1/2$. Then, the $(\hat{L}_1(x), q_1(x), \gamma_1)$ and $(\hat{L}_2(x), q_2(x), \gamma_2)$ are feasible points of (3.11) for $i = 1, 2$, since $q_1(v) = q_2(v) = 1$, and

$q_1(x) - \gamma_1 = 1 - x_2 = 0 + 1 \cdot (1 - x_2) \in \operatorname{Qmod}[g_{\mathcal{I}}],$
$q_2(x) - \gamma_2 = \tfrac{1}{2} - \tfrac{1}{3} x_1^T x_1 = 0 + \tfrac{1}{4} \cdot \big(2 - x_1^T x_1 - x_2\big) + \tfrac{1}{12} \cdot \big(3 x_2 - x_1^T x_1\big) \in \operatorname{Qmod}[g_{\mathcal{I}}].$

The rational expressions for Lagrange multipliers are given by (3.5).
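The key identity $\hat{L}_i(x) G_i(x) = q_i(x) I_{m_i}$ in (3.1) can be checked numerically at random points. Below is a sketch for a hypothetical player with one scalar strategy $x_2$ and two constraints $g_1 = x_2 - \tfrac{1}{3} x_1^T x_1 \ge 0$, $g_2 = 1 - x_2 \ge 0$ (the same shape as the second player of Example 3.2; the specific constants are assumptions of this illustration):

```python
import random

def G2(x1, x2):
    """Matrix G(x) from (2.6): one scalar strategy, two constraints, so 3 x 2."""
    g1 = x2 - sum(v * v for v in x1) / 3.0
    g2 = 1.0 - x2
    return [[1.0, -1.0],
            [g1,  0.0],
            [0.0, g2]]

def L2hat(x1, x2):
    s = sum(v * v for v in x1)
    return [[1.0 - x2,     1.0, 1.0],
            [s / 3.0 - x2, 1.0, 1.0]]

def q2(x1):
    return 1.0 - sum(v * v for v in x1) / 3.0

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# verify L2hat * G2 == q2 * I_2 at random sample points
for _ in range(200):
    x1 = [random.uniform(-2, 2), random.uniform(-2, 2)]
    x2 = random.uniform(-2, 2)
    P, q = matmul(L2hat(x1, x2), G2(x1, x2)), q2(x1)
    assert abs(P[0][0] - q) < 1e-9 and abs(P[1][1] - q) < 1e-9
    assert abs(P[0][1]) < 1e-9 and abs(P[1][0]) < 1e-9
```

Since both sides are polynomials, agreement on sufficiently many sample points certifies the identity; in practice one would compare coefficients symbolically.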
Example 3.8.
Consider the following GNEPmin x ∈ R f ( x , x ) min x ∈ R f ( x , x ) s . t . − x T x − x T x ≥ s . t . − x T x − x T x ≥ . The constraining tuples g E := ∅ , g I := (1 − x T x − x T x ) . Let v := (0 , , γ = γ = 0, q ( x ) = 1 − x T x , q ( x ) = 1 − x T x , andˆ L = (cid:20) − x , , − x , , − x , , (cid:21) , ˆ L = (cid:20) − x , , − x , , − x , , (cid:21) . One can verify that q ( v ) = q ( v ) = 1 and q ( x ) − γ = 1 − x T x = x T x + 1 · (1 − x T x − x T x ) ∈ Qmod[ g I ] ,q ( x ) − γ = 1 − x T x = x T x + 1 · (1 − x T x − x T x ) ∈ Qmod[ g I ] . By Proposition 3.6, we know ( ˆ L ( x ) , q ( x ) , γ ) and ( ˆ L ( x ) , q ( x ) , γ ) are minimizersof (3.11) for i = 1 , λ = − x T ∇ x f · q ( x ) , λ = − x T ∇ x f · q ( x ) . For each i = 1 ,
2, if q i ( x ) = 0, then 0 ≤ x iT x i ≤ − x − iT x − i = 0 . This implies x i = (0 , ,
0) is the only feasible point of the i th player’s optimization and hence itis the minimizer. Therefore, each feasible point of (3.7) is a GNE.One can solve (3.11) numerically for getting rational expressions. This is donein Example 6.5.4. Parametric expressions for Lagrange multipliers
For some GNEPs, it may be difficult to find convenient rational expressions forLagrange multipliers. Sometimes, the denominators may have high degrees. Thisis the case especially when m i > n i . If some q i has high degree, the polynomialoptimization (3.6) also has a high degree, which makes the result moment SDPrelaxations (see subsections 5.1 and 5.2) very difficult to be solved. To fix suchissues, we introduce parametric expressions for Lagrange multipliers. Definition 4.1.
For the i th player’s optimization F i ( x − i ), a parametric expressionfor the Lagrange multipliers is a tuple of polynomialsˆ λ i ( x, ω i ) := (ˆ λ i, ( x, ω i ) , . . . , ˆ λ i,m i ( x, ω i )) , in x and in a parameter ω i := ( ω i, , . . . , ω i,s i ) with s i ≤ m i , such that ( x i , λ i ) isa critical pair if and only if there is a value of ω i such that (2.4) is satisfied for λ i,j = ˆ λ i,j ( x, ω i ) with j ∈ [ m i ].The following is an example of parametric expressions. Example 4.2.
Consider the 2-player convex GNEPmin x ∈ R f ( x , x ) min x ∈ R f ( x , x ) s . t . x , − x , + x , ≥ , s . t . x , + x , − x , + 1 ≥ , − x , · x T x ≥ , − x , ≥ , x , ≥ ,x , ≥ , x , ≥ x , ≥ . ONVEX GNEPS AND POLYNOMIAL OPTIMIZATION 11
The Lagrange multipliers can be expressed as(4.1) λ , = ω , ,λ , = x , ( ∂f ∂x , − ω , ) + x , ( ∂f ∂x , + 2 ω , ) ,λ , = ∂f ∂x , − ω , + 2 x , x , λ , ,λ , = ∂f ∂x , + 2 ω , + 2 x , x , λ , ; λ , = ω , ,λ , = − · h ( ∂f ∂x , + 2 x , ω , ) x , + ( ∂f ∂x , − ω , )( x , + 1) i ,λ , = ∂f ∂x , + λ , − ω , ,λ , = ∂f ∂x , + 2 x , ω , . Parametric expressions are quite useful for solving the GNEPs. The followingare some useful cases.(i) Suppose the i th player’s optimization F i ( x − i ) contains the nonnegativeconstraints, i.e., its constraints are x i, ≥ , . . . , x i,n i ≥ , g i,j ( x ) ≥ j = n i + 1 , . . . , m i ) . Let s i := m i − n i , then a parametric expression is(4.2) ( λ i, , . . . , λ i,n i ) = ∇ x i f i − P s i k =1 ω i,k · ∇ x i g i,k + n i , ( λ i,n i +1 , . . . , λ i,m i ) = ( ω i, , . . . , ω i,s i ) . (ii) Suppose the i th player’s optimization F i ( x − i ) contains box constraints, i.e.,its constraints are x i,j − a i,j ≥ , b i,j − x i,j ≥ , j = 1 , . . . , n i g i,j ( x ) ≥ . j = n i + 1 , . . . , m i . Let s i := m i − n i , then a parametric expression is(4.3) λ i,j = b − x i,j b − a (cid:16) ∂f i ∂x i,j − P s i k =1 ω i,k ∂g i,k +2 ni ∂x i,j (cid:17) , j = 1 , , . . . , n i − ,λ i,j = a − x i,j b − a (cid:16) ∂f i ∂x i,j − P s i k =1 ω i,k ∂g i,k +2 ni ∂x i,j (cid:17) , j = 2 , , . . . , n i ,λ i,j = ω i,j − n i , j = 2 n i + 1 , . . . , m i . (iii) Suppose the i th player’s optimization F i ( x − i ) contains simplex constraints,i.e., its constraints are1 − e T x i ≥ , x i, ≥ , . . . , x i,n i ≥ , g i,j ( x ) ≥ , j = n i + 2 , . . . , m i . Let s i := m i − n i −
1, then a parametric expression is(4.4) λ i,j = ( ∇ x i f i − P s i k =1 ω i,k · ∇ x i g i,k + n i +1 ) T x i , j = 1 λ i,j = ∂f i ∂x i,j − − P s i k =1 ω i,k · ∂g i,k + ni +1 ∂x i,j − − λ i, , j = 2 , . . . , n i + 1 λ i,j = ω i,j − n i − . j = n i + 2 , . . . , m i (iv) Suppose the i th player’s optimization F i ( x − i ) contains linear constraints,i.e., its constraints are a Tj x i − b j ( x − i ) ≥ , j = 1 , . . . , r, g i,j ( x ) ≥ , j = r + 1 , . . . , m i , where each b j is a polynomial in x − i . Let A = (cid:2) a · · · a r (cid:3) T . Assumerank A = r . If we let s i := m i − r , then a parametric expression is( λ i, , . . . , λ i,r ) = ( AA T ) − A ( ∇ x i f i − P s i k =1 ω i,k · ∇ x i g i,k + r ) , ( λ i,r +1 , . . . , λ i,m i ) = ( ω i, , . . . , ω i,s i ) . (v) Suppose there exists a label subset T i := ( t , . . . , t r ) ⊆ [ m i ] such thatˆ G i ( x ) := ∇ x i g i,t ( x ) . . . ∇ x i g i,t r ( x ) g i,t ( x ) . . . g i,t r ( x ) is nonsingular for all x ∈ C n . By [43, Proposition 5.1], there exists a matrixpolynomial D i ( x ) such that D i ( x ) · ˆ G i ( x ) = I r . Let s i := m i − r , then aparametric expression is( λ i, , . . . , λ i,r ) = D i ( x )( ∇ x i f i − P s i k =1 ω i,k · ∇ x i g i,k + r ) , ( λ i,r +1 , . . . , λ i,m i ) = ( ω i, , . . . , ω i,s i ) . We would like to remark that a parametric expression always exists. For instance,one can set ω i,j = λ i,j for all j . However, it is preferable to have small s i , to savecomputational costs.4.1. Optimality conditions and parametric expressions.
Suppose all playershave parametric expressions for their Lagrange multipliers as in Definition 4.1. Let s := s + . . . + s N , and denote x := ( x, ω , . . . , ω N ) . The optimality conditions (2.5) can be equivalently expressed as(4.5) ∇ x i f i ( x ) − P m i j =1 ˆ λ i,j ( x ) ∇ x i g i,j ( x ) = 0 ( i ∈ [ N ]) , ˆ λ i ( x ) ⊥ g i ( x ) , g i,j ( x ) = 0 ( j ∈ E i , i ∈ [ N ]) ,g i,j ( x ) ≥ , ˆ λ i,j ( x ) ≥ j ∈ I i , i ∈ [ N ]) . For convex GNEPs, a point x is a GNE if and only if there exists ω := ( ω , . . . , ω N )such that x satisfies (4.5). Therefore, we consider the optimization(4.6) min x ∈ X × R s [ x ] T Θ [ x ] s . t . ∇ x i f i ( x ) − P m i j =1 ˆ λ i,j ( x ) ∇ x i g i,j ( x ) = 0 ( i ∈ [ N ]) , ˆ λ i,j ( x ) ⊥ g i,j ( x ) ( j ∈ E i ∪ I i , i ∈ [ N ]) , ˆ λ i,j ( x ) ≥ j ∈ I i , i ∈ [ N ]) . In the above, the Θ is a generically chosen positive definite matrix. The followingproposition is straightforward
Proposition 4.3.
For the GNEPP given by (1.1), suppose each player’s optimiza-tion has a parametric expression for their Lagrange multipliers as in Definition 4.1.(i) If (4.6) is infeasible, then the GNEP has no KKT points. If every GNE is aKKT point, then the infeasibility of (4.6) implies nonexistence of GNEs.(ii) Assume the GNEP is convex. If ( u, w ) is a feasible point of (4.6), then u is aGNE. ONVEX GNEPS AND POLYNOMIAL OPTIMIZATION 13 The polynomial optimization reformulation
In this section, we give an algorithm for solving convex GNEPs. We assume each λ_i has either a rational or a parametric expression, as in Definition 3.1 or 4.1. If λ_i has a polynomial or parametric expression, we let q_i(x) := 1. If λ_i has a polynomial or rational expression, then we let s_i = 0. Recall the notation x̃ := (x, ω_1, ..., ω_N). Choose a generic positive definite matrix Θ. Then solve the following polynomial optimization:

(5.1)   min_{x̃ ∈ X × R^s}   [x̃]_1^T Θ [x̃]_1
        s.t.  q_i(x) ∇_{x_i} f_i(x) − Σ_{j=1}^{m_i} λ̂_{i,j}(x̃) ∇_{x_i} g_{i,j}(x) = 0   (i ∈ [N]),
              λ̂_{i,j}(x̃) ⊥ g_{i,j}(x)   (j ∈ E_i ∪ I_i, i ∈ [N]),
              g_{i,j}(x) = 0   (j ∈ E_i, i ∈ [N]),
              g_{i,j}(x) ≥ 0   (j ∈ I_i, i ∈ [N]),
              λ̂_{i,j}(x̃) ≥ 0   (j ∈ I_i, i ∈ [N]).

If (5.1) is infeasible, then there are no KKT points. Since Θ is positive definite, if (5.1) is feasible, then it must have a minimizer, say, (u, ω) ∈ X × R^s. For convex GNEPs, if q_i(u) > 0 for all i, then u must be a GNE. If q_i(u) ≤ 0 for some i, then u may or may not be a GNE. To check this, we solve the following optimization problem for those i with q_i(u) ≤ 0:

(5.2)   δ_i := min_{x_i}   f_i(x_i, u_{−i}) − f_i(u_i, u_{−i})
        s.t.  g_{i,j}(x_i, u_{−i}) = 0   (j ∈ E_i),   g_{i,j}(x_i, u_{−i}) ≥ 0   (j ∈ I_i).

This is a polynomial optimization in x_i. Since u ∈ X, the point u_i is feasible for (5.2), so δ_i ≤ 0. If δ_i ≥ 0 for every such i, then u must be a GNE. The following is an algorithm for solving the GNEP.

Algorithm 5.1.
For the convex GNEP given by (1.1), do the following:
Step 0. Choose a generic positive definite matrix Θ of length n + s + 1.
Step 1. Solve the polynomial optimization (5.1). If it is infeasible, then there are no KKT points and stop; otherwise, solve it for a minimizer (u, ω).
Step 2. If q_i(u) > 0 for all i, then u is a GNE. Otherwise, for those i with q_i(u) ≤ 0, solve (5.2) for δ_i. If δ_i ≥ 0 for all such i, then u is a GNE; otherwise, it is not.

In Step 0, we can choose Θ = R^T R for a randomly generated square matrix R of length n + s + 1. The objective in (5.1) is a positive definite quadratic function, so (5.1) must have a minimizer if it is feasible. Since the objective f_i(x_i, u_{−i}) is assumed to be convex in x_i, if it is bounded from below on X_i(u_{−i}), then (5.2) must have a minimizer (see [5, Theorem 3]). In applications, we are mostly interested in cases where (5.2) has a minimizer, for the existence of a GNE. In subsections 5.1 and 5.2, we will discuss how to solve the polynomial optimization problems in Algorithm 5.1 by the Moment-SOS hierarchy of semidefinite relaxations. The convergence of Algorithm 5.1 is shown as follows.

Theorem 5.2.
For the convex GNEPP given by (1.1), suppose each Lagrange multiplier vector λ_i has a rational expression as in Definition 3.1 or a parametric expression as in Definition 4.1.
(i) If (u, ω) is a feasible point of (5.1) such that q_i(u) > 0 for all i, then u is a GNE.
(ii) Assume every GNE is a KKT point. If (5.1) is infeasible, then the GNEP has no GNEs. If Θ is positive definite and q_i(x) > 0 for every feasible point of (5.1) and every i, then Algorithm 5.1 will find a GNE if one exists.

Proof. (i) This is directly implied by Propositions 3.3 and 4.3.
(ii) If (5.1) is infeasible, then there is no GNE, because every GNE is assumed to be a KKT point and hence must be feasible for (5.1). Next, assume (5.1) is feasible. Since Θ is positive definite, the optimization (5.1) has a minimizer, say, (u, ω). By the given assumption, we have q_i(u) > 0 for all i. So u is a GNE, by item (i). □

In Theorem 5.2(ii), if q_i(x) > 0 for all x ∈ X, then we must have q_i(x) > 0 for all feasible points of (5.1). Suppose (u, ω) is a computed minimizer of (5.1). If u is not a GNE, i.e., δ_i < 0 for some i, we can let 𝒩 ⊆ [N] be the labeling set of the i with δ_i < 0. By Theorem 5.2, we know q_i(u) = 0 for all i ∈ 𝒩. For an a priori small ε > 0, we can add the constraint q_i(x) ≥ ε (i ∈ 𝒩) to the optimization (5.1), to exclude u from the feasible set. Then we solve the following new optimization:

(5.3)   min_{x̃ ∈ X × R^s}   [x̃]_1^T Θ [x̃]_1
        s.t.  q_i(x) ∇_{x_i} f_i(x) − Σ_{j=1}^{m_i} λ̂_{i,j}(x̃) ∇_{x_i} g_{i,j}(x) = 0   (i ∈ [N]),
              λ̂_{i,j}(x̃) ⊥ g_{i,j}(x)   (j ∈ E_i ∪ I_i, i ∈ [N]),
              λ̂_{i,j}(x̃) ≥ 0   (j ∈ I_i, i ∈ [N]),
              q_i(x) ≥ ε   (i ∈ 𝒩).

If ε > 0 is not small enough, the constraint q_i(x) ≥ ε may also exclude some GNEs. If the new optimization (5.3) is infeasible, one can heuristically get a candidate GNE by choosing a different generic positive definite Θ in (5.1). In computational practice, when a GNE exists, it is very likely that we can get one by doing this. However, how to detect nonexistence of GNEs when (5.1) is feasible can be theoretically difficult. The theoretical side of this problem is mostly open, to the best of the authors' knowledge.

5.1. The optimization for all players.
We discuss how to solve the polynomial optimization problems in Algorithm 5.1 by using the Moment-SOS hierarchy of semidefinite relaxations [27, 29, 30, 32, 33]. We refer to the notation in subsections 2.1 and 2.2.

First, we discuss how to solve the optimization (5.1). Denote the polynomial tuples

(5.4)   Φ_i := { q_i(x) ∇_{x_i} f_i(x) − Σ_{j=1}^{m_i} λ̂_{i,j}(x̃) ∇_{x_i} g_{i,j}(x) } ∪ { g_{i,j}(x) : j ∈ E_i } ∪ { λ̂_{i,j}(x̃) · g_{i,j}(x) : j ∈ I_i },

(5.5)   Ψ_i := { g_{i,j}(x) : j ∈ I_i } ∪ { λ̂_{i,j}(x̃) : j ∈ I_i }.

For notational convenience, for a vector p = (p_1, ..., p_s), the set {p} stands for {p_1, ..., p_s} in the above. Denote the unions

Φ := ∪_{i=1}^N Φ_i,   Ψ := ∪_{i=1}^N Ψ_i.

They are both finite sets of polynomials. Then, the optimization (5.1) can be equivalently written as

(5.6)   ϑ_min := min_{x̃}   θ(x̃) := [x̃]_1^T Θ [x̃]_1
        s.t.  p(x̃) = 0   (∀ p ∈ Φ),
              q(x̃) ≥ 0   (∀ q ∈ Ψ).

Denote the degree d := max{⌈deg(p)/2⌉ : p ∈ Φ ∪ Ψ}. For a degree k ≥ d, consider the kth order Lasserre type semidefinite moment relaxation for solving (5.6):

(5.7)   ϑ_k := min_y   ⟨θ, y⟩
        s.t.  y_0 = 1,   L_p^{(k)}[y] = 0   (p ∈ Φ),
              M_k[y] ⪰ 0,   L_q^{(k)}[y] ⪰ 0   (q ∈ Ψ),
              y ∈ R^{N^{n+s}_{2k}}.

Its dual optimization problem is the kth order SOS relaxation

(5.8)   max γ   s.t.  θ − γ ∈ Ideal[Φ]_{2k} + Qmod[Ψ]_{2k}.

For relaxation orders k = d, d + 1, ..., we get the Moment-SOS hierarchy of semidefinite relaxations (5.7)-(5.8). This produces the following algorithm for solving the polynomial optimization problem (5.6).

Algorithm 5.3.
Let θ, Φ, Ψ be as in (5.6). Initialize k := d.
Step 1. Solve the semidefinite relaxation (5.7). If it is infeasible, then (5.6) has no feasible points and stop; otherwise, solve it for a minimizer y*.
Step 2. Let ũ = (u, ω) := (y*_{e_1}, ..., y*_{e_{n+s}}). If ũ is feasible for (5.6) and ϑ_k = θ(ũ), then ũ is a minimizer of (5.6). Otherwise, let k := k + 1 and go to Step 1.

In Step 2, e_i denotes the labeling vector whose ith entry is 1 while all other entries are 0. For instance, when n = s = 2, y_{e_1} = y_{1000}. The optimization (5.7) is a relaxation of (5.6). This is because if x̃ is a feasible point of (5.6), then y = [x̃]_{2k} must be feasible for (5.7). Hence, if (5.7) is infeasible, then (5.6) must be infeasible, which also implies the nonexistence of KKT points. Moreover, the optimal value ϑ_k of (5.7) is a lower bound for the minimum value of (5.6), i.e., ϑ_k ≤ θ(x̃) for every x̃ that is feasible for (5.6). In Step 2, if ũ is feasible for (5.6) and ϑ_k = θ(ũ), then ũ must be a minimizer of (5.6). Algorithm 5.3 can be implemented in GloptiPoly [22]. The convergence of Algorithm 5.3 is shown as follows.
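The feasibility-plus-bound test in Step 2 is simple to state in code. The following is a minimal sketch of that check (the helper name, the toy instance, and the tolerances are ours, not from the paper); the relaxation bound ϑ_k itself would come from a semidefinite solver:

```python
import numpy as np

def certify_minimizer(theta, Phi, Psi, x, lower_bound, tol=1e-6):
    # Feasibility for (5.6): p(x) = 0 for every p in Phi, q(x) >= 0 for every q in Psi.
    feasible = (all(abs(p(x)) <= tol for p in Phi)
                and all(q(x) >= -tol for q in Psi))
    # theta(x) can never beat the relaxation lower bound; equality certifies optimality.
    return feasible and abs(theta(x) - lower_bound) <= tol

# Toy instance: minimize x^2 subject to x - 1 = 0 and x >= 0; the minimum value is 1 at x = 1.
theta = lambda x: x[0] ** 2
Phi = [lambda x: x[0] - 1.0]
Psi = [lambda x: x[0]]
print(certify_minimizer(theta, Phi, Psi, np.array([1.0]), lower_bound=1.0))  # True
```

If the test fails, the relaxation order k is increased, exactly as in Step 2 of the algorithm.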
Theorem 5.4.
Assume the set Ideal[Φ] + Qmod[Ψ] ⊆ R[x̃] is archimedean.
(i) If (5.6) is infeasible, then the moment relaxation (5.7) must be infeasible when the order k is big enough.
(ii) Suppose (5.6) is feasible and Θ is a generic positive definite matrix. Let ũ^{(k)} be the point ũ produced in Step 2 of Algorithm 5.3 in the kth loop. Then ũ^{(k)} converges to the unique minimizer of (5.6). In particular, if the real zero set of Φ is finite, then ũ^{(k)} is the unique minimizer of (5.6), when k is sufficiently large.

Proof. (i) If (5.6) is infeasible, then the constant polynomial −1 belongs to Ideal[Φ]_{2k} + Qmod[Ψ]_{2k} for k big enough, by the Putinar Positivstellensatz [49]. For such a big k, the SOS relaxation (5.8) is unbounded from above, hence the moment relaxation (5.7) must be infeasible.
(ii) When the optimization (5.6) is feasible, it must have a unique minimizer, say, x̃*, because its objective is a generic positive definite quadratic polynomial. The convergence of ũ^{(k)} to x̃* is shown in [50] (or [36, Theorem 3.3]). For the special case that Φ(x̃) = 0 has finitely many real solutions, the point ũ^{(k)} must be equal to x̃* when k is large enough. This is shown in [31] (also see [37]). □

The archimedeaness of the set Ideal[Φ] + Qmod[Ψ] essentially requires that the feasible set of (5.6) is compact. Archimedeaness is sufficient but not necessary for Algorithm 5.3 to converge. Even if archimedeaness fails to hold, Algorithm 5.3 is still applicable for solving (5.1). If the point ũ^{(k)} is feasible and ϑ_k = θ(ũ^{(k)}), then ũ^{(k)} must be a minimizer of (5.1), regardless of whether archimedeaness holds or not. Moreover, without archimedeaness, the infeasibility of (5.7) still implies that (5.1) is infeasible. In our computational practice, Algorithm 5.3 almost always has finite convergence.

The polynomial optimization (5.3) can be solved in the same way by the Moment-SOS hierarchy of semidefinite relaxations.
The convergence property is the same. For the cleanness of this paper, we omit the details.

5.2. Checking Generalized Nash Equilibria.
Suppose ũ = (u, ω) ∈ R^n × R^s is a minimizer of (5.1). For convex GNEPPs, if q_i(u) > 0 for all i, then u is a GNE, by Theorem 5.2(i). If q_i(u) ≤ 0 for some i, we need to solve the optimization (5.2) to check whether u = (u_i, u_{−i}) is a GNE or not. Note that (5.2) is a convex polynomial optimization problem in x_i. For given u_{−i}, if it is bounded from below, then (5.2) achieves its optimal value at a minimizer.

Consider the ith player's optimization for an i with q_i(u) ≤ 0. For notational convenience, we denote the polynomial tuples

(5.9)   H_i(u) := { g_{i,j}(x_i, u_{−i}) : j ∈ E_i } ∪ { λ̂_{i,j}(x_i, u_{−i}) · g_{i,j}(x_i, u_{−i}) : j ∈ I_i } ∪ { q_i(x_i, u_{−i}) ∇_{x_i} f_i(x_i, u_{−i}) − Σ_{j=1}^{m_i} λ̂_{i,j}(x_i, u_{−i}) ∇_{x_i} g_{i,j}(x_i, u_{−i}) },

(5.10)   J_i(u) := { g_{i,j}(x_i, u_{−i}) : j ∈ I_i } ∪ { λ̂_{i,j}(x_i, u_{−i}) : j ∈ I_i }.

Like in (5.4)-(5.5), the set {p} stands for {p_1, ..., p_s} when p = (p_1, ..., p_s) is a vector of polynomials. The sets H_i(u), J_i(u) are finite collections.
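The check that (5.2) performs is just a best-response computation: fix u_{−i}, minimize the ith objective over the ith feasible set, and compare with the value at u_i. A small numerical sketch on a toy convex two-player game (the game, the helper names, and the use of a generic local solver in place of the Moment-SOS machinery are our assumptions, for illustration only):

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex game: player 1 minimizes (x1 - x2)^2, player 2 minimizes (x2 - 0.5)^2,
# each over the box [0, 1]. The point u = (0.5, 0.5) is an equilibrium.
f = [lambda xi, other: (xi - other) ** 2,
     lambda xi, other: (xi - 0.5) ** 2]

def delta(i, u):
    # delta_i as in (5.2): best-response value minus current value (always <= 0).
    other = u[1 - i]
    current = f[i](u[i], other)
    best = minimize(lambda z: f[i](z[0], other), x0=[u[i]], bounds=[(0.0, 1.0)])
    return best.fun - current

u = (0.5, 0.5)
print(all(delta(i, u) >= -1e-8 for i in range(2)))  # True: u passes the equilibrium check
```

If some delta(i, u) is clearly negative, player i has a profitable deviation and u is not an equilibrium.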
Under some suitable constraint qualification conditions (e.g., Slater's Condition), when (5.2) has a minimizer, it is equivalent to

(5.11)   η_i := min_{x_i ∈ R^{n_i}}   ζ_i(x_i) := f_i(x_i, u_{−i}) − f_i(u_i, u_{−i})
         s.t.  p(x_i) = 0   (p ∈ H_i(u)),
               q(x_i) ≥ 0   (q ∈ J_i(u)).

Denote the degree in the variables x_i for its objective and constraining polynomials:

(5.12)   d_i := max{ ⌈deg(ζ_i)/2⌉, ⌈deg(p)/2⌉, ⌈deg(q)/2⌉ : p ∈ H_i(u), q ∈ J_i(u) }.

For a degree k ≥ d_i, the kth order moment relaxation for (5.11) is

(5.13)   η_i^{(k)} := min_y   ⟨ζ_i, y⟩
         s.t.  y_0 = 1,   L_p^{(k)}[y] = 0   (p ∈ H_i(u)),
               M_k[y] ⪰ 0,   L_q^{(k)}[y] ⪰ 0   (q ∈ J_i(u)),
               y ∈ R^{N^{n_i}_{2k}}.

The dual optimization problem of (5.13) is the kth order SOS relaxation

(5.14)   max γ   s.t.  ζ_i − γ ∈ Ideal[H_i(u)]_{2k} + Qmod[J_i(u)]_{2k}.

By solving the above relaxations for k = d_i, d_i + 1, ..., we get the Moment-SOS hierarchy of relaxations (5.13)-(5.14). This gives the following algorithm.

Algorithm 5.5.
For a minimizer ũ = (u, ω) of (5.1) and an i with q_i(u) ≤ 0, solve the ith player's optimization (5.11) as follows. Initialize k := d_i.
Step 1. Solve the moment relaxation (5.13) for the minimum value η_i^{(k)} and a minimizer y*. If η_i^{(k)} ≥ 0, then η_i = 0 and stop; otherwise, go to the next step.
Step 2. Let t := d_i as in (5.12). If y* satisfies the rank condition

(5.15)   rank M_t[y*] = rank M_{t−d_i}[y*],

then extract a set U_i of r := rank M_t[y*] minimizers for (5.11) and stop.
Step 3. If (5.15) fails to hold and t < k, let t := t + 1 and then go to Step 2; otherwise, let k := k + 1 and go to Step 1.

We would like to remark that the optimization (5.11) is always feasible, because u_i is a feasible point, since u is a minimizer of (5.1). The moment relaxation (5.13) is also feasible. Because η_i^{(k)} is a lower bound for η_i, and η_i ≤ ζ_i(u_i) = 0, if η_i^{(k)} ≥ 0, then η_i must be 0. In Step 2, the rank condition (5.15) is called flat truncation [36]. It is a sufficient (and almost necessary) condition for checking convergence of moment relaxations. When (5.15) holds, the method in [21] can be used to extract r minimizers for (5.11). Algorithm 5.5 can also be implemented in GloptiPoly [22]. If Ideal[H_i(u)] + Qmod[J_i(u)] is archimedean, then η_i^{(k)} → η_i as k → ∞ [27]. It is interesting to remark that

I_1 := Ideal[g_{i,j}(x_i, u_{−i}) : j ∈ E_i] ⊆ Ideal[H_i(u)],
I_2 := Qmod[g_{i,j}(x_i, u_{−i}) : j ∈ I_i] ⊆ Qmod[J_i(u)].

If I_1 + I_2 is archimedean, then Ideal[H_i(u)] + Qmod[J_i(u)] must also be archimedean. Furthermore, we have the following convergence theorem for Algorithm 5.5.

Theorem 5.6.
For the convex polynomial optimization (5.2), assume its optimal value is achieved at a KKT point. If either one of the following conditions holds:
(i) the set I_1 + I_2 is archimedean, and the Hessian ∇²_{x_i} ζ_i(x_i*, u_{−i}) ≻ 0 for a minimizer x_i* of (5.11); or
(ii) the real zero set of the polynomials in H_i(u) is finite,
then Algorithm 5.5 must terminate within finitely many loops.

Proof. Since its optimal value is achieved at a KKT point, the optimization problem (5.2) is equivalent to (5.11).
(i) If I_1 + I_2 is archimedean and ∇²_{x_i} ζ_i(x_i*, u_{−i}) ≻ 0 at a minimizer x_i* of (5.11), then ζ_i − η_i ∈ I_1 + I_2, by [26, Corollary 3.3]. Since I_1 + I_2 ⊆ Ideal[H_i(u)] + Qmod[J_i(u)], we have ζ_i − η_i ∈ Ideal[H_i(u)]_{2k} + Qmod[J_i(u)]_{2k} for all k big enough. Therefore, Algorithm 5.5 must terminate within finitely many loops, by the duality theory.
(ii) If the real zero set of the polynomials in H_i(u) is finite, then the conclusion is implied by [37, Theorem 1.1] and [36, Theorem 2.2]. □
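For intuition on the flat truncation condition (5.15), consider a univariate moment vector generated by a single Dirac measure: its moment matrices are Hankel matrices of rank one, so the rank condition holds with r = 1 and the single atom can be read off from the first-order moment. This toy computation is ours and only illustrates the rank test, not the full extraction procedure of [21]:

```python
import numpy as np

def moment_matrix(y, t):
    # Univariate moment matrix M_t[y]: the Hankel matrix (y_{i+j}) for 0 <= i, j <= t.
    return np.array([[y[i + j] for j in range(t + 1)] for i in range(t + 1)])

# Truncated moments y_a = u^a of the Dirac measure at u = 2, up to degree 4 (order k = 2).
u = 2.0
y = np.array([u ** a for a in range(5)])

r2 = np.linalg.matrix_rank(moment_matrix(y, 2))
r1 = np.linalg.matrix_rank(moment_matrix(y, 1))

# Flat truncation: rank M_t[y] = rank M_{t-1}[y] certifies an r-atomic representing
# measure with r = rank; here r = 1 and the atom is the first-order moment y_1 / y_0.
print(r2, r1, y[1] / y[0])  # 1 1 2.0
```

For moment vectors coming from relaxations that have not yet converged, the two ranks typically differ, which is why the algorithm then increases t or the order k.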
If the objective polynomial in (5.2) is SOS-convex and its constraining ones are SOS-concave (see [20] for the definition of SOS-convex polynomials), then Algorithm 5.5 must terminate in the first loop (see [28]). If the optimal value of (5.2) is not achieved at a KKT point, the classical Moment-SOS hierarchy of semidefinite relaxations can be used to solve it. We refer to [26–30, 32, 33] for work on solving general polynomial optimization.

6. Numerical experiments
In this section, we apply Algorithm 5.1 to solve convex GNEPs. To use it, we need Lagrange multiplier expressions. This can be done as follows.
• When polynomial expressions exist, we always use them. In particular, we use polynomial expressions for the first player of the GNEP given by (1.4), the third player in Example 3.4, and the production unit and market players in Example 6.8.
• We use rational expressions for all players in Examples 6.2, 6.3 and 6.5. Moreover, rational expressions are used for the second player of the GNEP given by (1.4), the first two players in Example 3.4, the first and third players in Example 6.6, and the consumer players in Example 6.8. For Example 6.5, the rational expressions are obtained by solving (3.11) numerically.
• When it is difficult to find convenient polynomial or rational expressions, we use parametric expressions for Lagrange multipliers. We do this for all players in Examples 6.4 and 6.7, and for the second player in Example 6.6.

We apply the software GloptiPoly 3 [22] and SeDuMi [51] to solve the Moment-SOS relaxations for the polynomial optimizations (5.6) and (5.11). We use the software
YALMIP for solving (3.11). The computation is implemented on an Alienware Aurora R8 desktop, with an Intel® Core(TM) i7-9700 CPU at 3.00GHz and 16GB of RAM, in a Windows 10 operating system. For neatness of the paper, only four decimal digits are shown for the computational results.

In Step 2 of Algorithm 5.1, if δ_i ≥ 0 for all i such that q_i(u) ≤ 0, then the computed minimizer of (5.1) is a GNE. In numerical computations, we may not get δ_i ≥ 0 exactly, due to round-off errors. If every such δ_i is near zero, we regard the computed solution as an accurate GNE. In the following, all the GNEPs are convex.

Example 6.1. (i) For the GNEP given by (1.4), the first player has a polynomial expression for Lagrange multipliers given by (2.8), and the second player has the rational expression

λ_{2,1} = −x_2^T ∇_{x_2} f_2 / q_2(x),   q_2(x) = x_2^T x_2.

For each i, q_i(x) > 0 for all x ∈ X. We ran Algorithm 5.1 and obtained the GNE u = (u_1, u_2) with u_1 ≈ (0. , . , . ), u_2 ≈ (0. , . , . ). It took around 3.06 seconds.

However, if the first player's objective is changed to

f_1(x) = ( x , + x , − x , )( x , + x , − x , ) + x , + x , − x , ,

then the GNEP has no GNE, as detected by Algorithm 5.1. It took around 70 seconds to detect the nonexistence. The matrix polynomials G_1(x) and G_2(x) are nonsingular on X, so all GNEs must be KKT points if they exist.

(ii) For the GNEP in Example 3.4, we use the rational expression given by (3.8) for the first two players, and the polynomial expression (3.9) for the third player. By Algorithm 5.1, we obtained a feasible point û = 10^{−} · (0. , . , . ), with q_1(û) ≈ . · 10^{−} and q_2(û) ≈ . · 10^{−}. We solved (5.2), for i = 1, 2, to check whether û is a GNE or not, and got δ_1 ≈ −. and δ_2 ≈ −. · 10^{−}. Therefore, we solved (5.3) with 𝒩 = { } and ε = 0.1, and obtained a GNE u = (u_1, u_2, u_3) with u_1 ≈ . , u_2 ≈ . , u_3 ≈ . , q_1(u) ≈ q_2(u) ≈ . . It took around 0.89 seconds.
Example 6.2.
Consider the GNEP in Example 3.2 with objectives f_1(x) = Σ_{j=1} ( x ,j − + x ( x , − x , ), f_2(x) = ( x ) − x , x , x − x . The rational expressions for both players are given by (3.5). For each i, q_i(x) > 0 for all x ∈ X. We ran Algorithm 5.1 and got the GNE u = (u_1, u_2) with u_1 ≈ (0. , . ), u_2 ≈ . . It took around 0.62 seconds.
Example 6.3.
Consider the GNEP in Example 3.8 with objectives f_1(x) = 10 x^T x − Σ_{j=1} x ,j , f_2(x) = Σ_{j=1} ( x ,j x ,j ) + (3 Π_{j=1} x ,j − Σ_{j=1} x ,j . We use rational expressions as in (3.12). From Example 3.8, we know all feasible points of (5.1) are GNEs. By Algorithm 5.1, we got the GNE u = (u_1, u_2) with u_1 ≈ (0. , . , . ), u_2 ≈ (0. , . , . ). It took around 2.03 seconds.
Example 6.4.
Consider the GNEP in Example 4.2 with objectives f_1(x) = x , ( x , ) + ( x , ) − Σ_{j=1} x ,j · Σ_{j=1} x ,j , f_2(x) = ( x , + x , )( x , ) − x , + ( x , ) + x , x , x , . We use parametric expressions as in (4.1). For each i, q_i(x) > 0 for all x ∈ X. By Algorithm 5.1, we got the GNE u = (u_1, u_2) with u_1 ≈ (0. , . ), u_2 ≈ (1. , −. ). It took around 63.97 seconds.
Example 6.5.
Consider the 2-player GNEPmin x ∈ R ( x , ) + 2( x , ) min x ∈ R k x k · k x k + x , − x , + P j =1 x ,j ( x ,j ) +2 x , x , x , x , s . t . x , + 2 x , − x , ≤ , s . t . ( x , ) + x , x , ≤ , x , + 2 x , − x , ≤ . , ( x , ) + ( x , ) ≤ , ( x , ) + ( x , ) ≤ , x , ≥ .x , ≥ , We solve (3.11) numerically for i = 1 , v = (0 , , , , d = 2 to get rationalexpressions for λ i ’s. By Algorithm 5.1, we got the GNE u = ( u , u ) with u ≈ (0 . , − . , u ≈ ( − . , . , q ( u ) ≈ . , q ( u ) ≈ . . It took around 0 .
34 seconds in solving (3.11) for both players, and 8.40 seconds to find the GNE. For neatness of the paper, we do not display the Lagrange multiplier expressions obtained by solving (3.11).
Example 6.6.
Consider the 3-player GNEP ( min x ∈ R x , ( x , ) − x , x , s . t . x T x + x T x + ( x , + x , ) x T x ≤ min x ∈ R ( x , ) + ( x , − x , + ( x , x , ) − x , x , s . t . x , x , x , + x , x , x , + 0 . ≥ , − P j =1 x ,j ≥ ( min x ∈ R ( x − x + x ) T x s . t . ( x , − x , ) ≤ x , , ( x , + x , ) ≤ . The first player’s Lagrange multipliers have a rational expression, that λ = − x T ∇ x f q ( x ) , q ( x ) = 1 − x T x − ( x , + x , ) x T x . ONVEX GNEPS AND POLYNOMIAL OPTIMIZATION 21
For the second player, we use the parametric expression in (4.4), with s_2 = 1. For λ_2, if we let q_2 = 2( x , − x , ) + 2( x , ), then

λ_{2,1} = (1/q_2) ( −x_2^T ∇_{x_2} f_2 + ( x , + x , ) ∂f_2/∂x_{2, } ),
λ_{2,2} = 16 ( −x , λ_{2,1} − x_2^T ∇_{x_2} f_2 / q_2 ).

Note that q_2(x) is not always nonnegative on X. So we change the constraint λ̂_{2,j}(x̃) ≥ 0 into q_2 · λ̂_{2,j}(x̃) ≥ 0, to make it work. By Algorithm 5.1, we got the GNE u = (u_1, u_2, u_3) with

u_1 ≈ (0. , −. ),   u_2 ≈ (0. , . ),   u_3 ≈ (−. , −. ),
q_1(u) ≈ . ,   q_2(u) = 1,   q_3(u) ≈ . .

It took around 10.44 seconds.
Example 6.7. [14, Example A.3] Consider the GNEP of 3 players. For i = 1 , , i th player aims to minimize the quadratic function f i ( x ) = 12 x Ti A i x i + x Ti ( B i x − i + b i ) . All variables have box constraints − ≤ x i,j ≤
10, for all i, j . In addition to them,the first player has linear constraints x , + x , + x , ≤ , x , + x , − x , ≤ x , − x , + 5; the second player has x , − x , ≤ x , + x , − x , + 7; and thethird player has x , ≤ x , + x , − x , + 4 . The values of parameters are set asfollows A =
20 5 35 5 − − , A = (cid:20) − − (cid:21) , A = (cid:20)
48 3939 53 (cid:21) ,B = − − −
17 915 8 −
22 21 , B = (cid:20)
20 1 − − ],   B_3 = [ − − ],   b_1 = [ − ],   b_2 = [ ],   b_3 = [ − ].

We use parametric expressions for Lagrange multipliers as in (4.3). It is clear that q_i(x) > 0 for all x ∈ X and for all i = 1, 2, 3. By Algorithm 5.1, we got the GNE u = (u_1, u_2, u_3) with u_1 ≈ (−. , −. , −. ), u_2 ≈ (−. , −. ), u_3 ≈ (−. , . ). It took around 7.63 seconds.
Example 6.8.
Consider the GNEP based on the Arrow and Debreu model of a competitive economy [4, 14]. The first N_1 players are consumers, the next N_2 players are production units, and the last player is the market, so N = N_1 + N_2 + 1. Each player has P variables. Let Q_i ∈ R^{P×P}, b_i ∈ R^P, ξ_i ∈ R^P_+, a_{i,k} ∈ R_+ and c_1, ..., c_{N_2} ∈ R_+ be parameters. These players' optimization problems are:

The ith player (a consumer, i = 1, ..., N_1):
   min_{x_i ∈ R^P_+}   x_i^T Q_i x_i − b_i^T x_i
   s.t.   x_N^T x_i ≤ x_N^T ξ_i + Σ_{k=N_1+1}^{N−1} a_{i,k} x_N^T x_k.

The ith player (a production unit, i = N_1+1, ..., N_1+N_2):
   min_{x_i ∈ R^P_+}   −x_N^T x_i
   s.t.   x_i^T x_i ≤ c_{i−N_1}.

The Nth player (the market):
   min_{x_N ∈ R^P_+}   x_N^T ( Σ_{k=N_1+1}^{N−1} x_k − Σ_{k=1}^{N_1} (x_k − ξ_k) )
   s.t.   Σ_{j=1}^{P} x_{N,j} = 1.

For each i ∈ [N_1], the Lagrange multipliers have the rational expressions

λ_{i,1} = −x_i^T ∇_{x_i} f_i / q_i(x),   λ_{i,j+1} = ∂f_i/∂x_{i,j} + x_{N,j} · λ_{i,1}   (j = 1, ..., P),

where q_i(x) = x_N^T ξ_i + Σ_{k=N_1+1}^{N−1} a_{i,k} x_N^T x_k > 0 for all x ∈ X. For each i = N_1+1, ..., N_1+N_2, the ith player (a production unit) has the polynomial expressions

λ_{i,1} = −x_i^T ∇_{x_i} f_i / (2 c_{i−N_1}),   λ_{i,j+1} = ∂f_i/∂x_{i,j} + 2 x_{i,j} · λ_{i,1}   (j = 1, ..., P).

For the last player (the market), we substitute x_{N,P} by 1 − Σ_{j=1}^{P−1} x_{N,j}; then the constraints become

1 − Σ_{j=1}^{P−1} x_{N,j} ≥ 0,   x_{N,1} ≥ 0, ..., x_{N,P−1} ≥ 0,

and hence

λ_{N,1} = −Σ_{j=1}^{P−1} (∂f_N/∂x_{N,j}) · x_{N,j},   λ_{N,j+1} = ∂f_N/∂x_{N,j} + λ_{N,1}   (j = 1, ..., P−1).

For the case N_1 = 2, N_2 = 2, P = 3, we run Algorithm 5.1 with the following parameter setting:

Q_1 = [ − − ],   Q_2 = [ − − − − ],   b_1 = [ ],   b_2 = [ ],
ξ_1 = [ ],   ξ_2 = [ ],   a , = 0. ,   a , = a , = a , = 0. .

By the algorithm, we got the GNE u = (u_1, u_2, u_3, u_4, u_5) with

u_1 ≈ (0. , . , . ),   u_2 ≈ (0. , . , . ),   u_3 ≈ (1. , . , . ),
u_4 ≈ (2. , . , . ),   u_5 ≈ (0. , . , . ).

It took around 67.12 seconds.

6.1.
Comparison with other methods.
We compare our Algorithm 5.1 with some other classical methods for solving convex GNEPPs, such as the two-step method in [18] based on the quasi-variational inequality (QVI) formulation, the penalty method in [14], and the interior point method based on the KKT system in [10]. The tested GNEPPs are those in (1.4), Examples 6.2, 6.3 and 6.5. For the jointly convex GNEP in Example 6.3, we also compare with the relaxation method based on the Nikaido-Isoda function in [23].

For a computed tuple u := (u_1, ..., u_N), we use the value

ξ := max{ max_{i ∈ [N], j ∈ I_i} {−g_{i,j}(u)},  max_{i ∈ [N], j ∈ E_i} {|g_{i,j}(u)|} }

to measure the feasibility violation. Clearly, the point u is feasible if and only if ξ ≤ 0. If we solve (5.2) for all i ∈ [N], the accuracy parameter of u is δ := max_{i ∈ [N]} |δ_i|. For these methods, we use the following stopping criterion: each time we get a
Table 1.
Comparison with some methodsAlgorithm u time max { δ, ξ } Problem (1.4)QVI (0 . , . , . , . , . , . . · − Penalty 10 − · (9 . , . , . , . , . , . . , . , . , . , . , . . , . , . , . , . , . . · − Example 6.2QVI (0 . , . , . . · − Penalty (0 . , . , . . · − IPM (0 . , . , . . · − ALG 5.1 (0 . , . , . . · − Example 6.3QVI ( − . , − . , − . , . , . , . . , . , . , . , . , . . · − IPM (0 . , . , . , . , . , . . , . , . , . , . , . . , . , . , . , . , . . · − Example 6.5QVI (0 . , − . , − . , . . , − . , . , . . , − . , . , − . . , − . , − . , . . · − new iterate u , if its feasibility violation ξ < − , then we compute the accuracyparameter δ . If δ < − , then we stop the iteration. For all these methods, theparameters are chosen the same as in [10,14,18,23], except the penalty method, forwhich the maximum number of inner iterations is 100. Moreover, we allow 1000maximum iterations for the QVI method and NI-function method, 1000 maximumouter iterations for the penalty method, and 100 ,
000 maximum iterations for the interior point method. For initial points, we use (1, 0, 0, 0, 0, 0) for (1.4), and the zero vectors for the other GNEPs. If the maximum number of iterations is reached but the stopping criterion is not met, we still solve (5.2) to check whether the latest iterate is a GNE or not.

The computational results are shown in Table 1. The "QVI" stands for the QVI method, "Penalty" for the penalty method, "IPM" for the interior point method, "NI" for the NI-function method, and "ALG 5.1" for Algorithm 5.1. The "u" is the latest iterate for each method, "time" is the consumed time (in seconds), and max{δ, ξ} is the bigger one of the feasibility violation and the accuracy parameter of u. "Not convergent" means the sequence cannot reach a limit point, or the limit point is far from being a GNE.

For the GNEP (1.4), the QVI method seems to converge, but the accuracy parameter after 1000 iterations is still around 1. · 10^{−}. The penalty method and the interior point method failed to converge. For Example 6.2, the QVI method and the interior point method successfully got a GNE in 1.83 and 0.02 seconds respectively, and the penalty method got a candidate GNE with accuracy parameter around 3. · 10^{−} after 1000 outer iterations. For Example 6.3, the penalty method got a candidate GNE with accuracy parameter around 2. · 10^{−} at the maximum number of iterations. However, the QVI method, the interior point method and the NI-function method did not converge. For Example 6.5, the QVI method, the penalty method and the interior point method failed to find a GNE. In contrast, Algorithm 5.1 solved all these convex GNEPPs very quickly, with accuracy parameters less than 6 · 10^{−}. Algorithm 5.1 is more reliable for solving convex GNEPs given by polynomials.

7. Conclusions and Discussions
This paper studies convex GNEPs given by polynomials. Rational and parametric expressions for Lagrange multipliers are used. Based on these expressions, Algorithm 5.1 is proposed for computing a GNE. The Moment-SOS hierarchy of semidefinite relaxations is used to solve the appearing polynomial optimization problems. Under some general assumptions, we show that Algorithm 5.1 is able to find a GNE if one exists, or detect nonexistence of GNEs if there is none.

For future work, it is interesting to solve nonconvex GNEPPs. Under some constraint qualifications, the KKT system (2.5) is necessary but not sufficient for GNEs. A solution u of (2.5) may not be a GNE for nonconvex GNEPPs. If u is not a GNE, one needs an efficient method to obtain a different candidate. Such a method is proposed for solving NEPs [45]. For GNEPs, it is not clear how to generalize the method in [45]. When the point u is not a GNE, how can we exclude it and find a better candidate? When (5.1) is feasible, how do we detect nonexistence of GNEs? These questions are mostly open, to the best of the authors' knowledge.

References

[1] J. Anselmi, D. Ardagna and M. Passacantando, Generalized Nash equilibria for SaaS/PaaS clouds,
European Journal of Operational Research, 236.1 (2014): 326–339.
[2] A. A. Ahmadi and J. Zhang, Semidefinite programming and Nash equilibria in bimatrix games, INFORMS Journal on Computing, to appear.
[3] D. Ardagna, M. Ciavotta and M. Passacantando, Generalized Nash equilibria for the service provisioning problem in multi-cloud systems,
IEEE Transactions on Services Computing , 10(2017), pp. 381–395.[4] K. Arrow and G. Debreu, Existence of an equilibrium for a competitive economy,
Econometrica: Journal of the Econometric Society, 22 (1954), pp. 265–290.
[5] E.G. Belousov and D. Klatte, A Frank–Wolfe type theorem for convex polynomial programs,
Computational Optimization and Applications , 22.1 (2002): 37-48.[6] D. Bertsekas.
Nonlinear programming , second edition, Athena Scientific, 1995.
[7] M. Breton, G. Zaccour, and M. Zahaf, A game-theoretic formulation of joint implementation of environmental projects,
European Journal of Operational Research , 168 (2006), pp. 221–239.[8] G. Debreu, A social equilibrium existence theorem,
Proceedings of the National Academy of Sciences, 38 (1952), pp. 886–893.
[9] A. Dreves, F. Facchinei, A. Fischer, and M. Herrich, A new error bound result for Generalized Nash Equilibrium Problems and its algorithmic application, Computational Optimization and Applications, 59 (2014), pp. 63–84.
[10] A. Dreves, F. Facchinei, C. Kanzow, and S. Sagratella, On the solution of the KKT conditions of Generalized Nash Equilibrium Problems, SIAM Journal on Optimization, 21 (2011), pp. 1082–1108.
[11] F. Facchinei, A. Fischer, and V. Piccialli, On generalized Nash games and variational inequalities,
Operations Research Letters , 35 (2007), pp. 159–164.[12] F. Facchinei, A. Fischer, and V. Piccialli, Generalized Nash Equilibrium Problems and New-ton methods,
Mathematical Programming , 117 (2009), pp. 163–194.[13] F. Facchinei and C. Kanzow, Generalized Nash Equilibrium Problems,
Annals of Operations Research, 175.1 (2010): 177–211.
[14] F. Facchinei and C. Kanzow, Penalty methods for the solution of Generalized Nash Equilibrium problems,
SIAM Journal on Optimization , 20 (2010), pp. 2228–2253.[15] F. Facchinei and L. Lampariello, Partial penalization for the solution of Generalized NashEquilibrium Problems,
Journal of Global Optimization , 50(1):39-57, 2011.[16] L. Fialkow and J. Nie, The truncated moment problem via homogenization and flat exten-sions,
Journal of Functional Analysis , 263(6), 1682–1700, 2012.[17] M. Fukushima, Restricted generalized Nash equilibria and controlled penalty algorithm,
Computational Management Science, 8 (2011), pp. 201–208.
[18] D. Han, H. Zhang, G. Qian, and L. Xu, An improved two-step method for solving Generalized Nash Equilibrium Problems, European Journal of Operational Research, 216(3), 613–623, 2012.
[19] P. Harker, Generalized Nash games and quasi-variational inequalities, European Journal of Operational Research, 54 (1991), pp. 81–94.
[20] J.W. Helton and J. Nie, Semidefinite representation of convex sets, Mathematical Programming, 122.1 (2010): 21–64.
[21] D. Henrion and J. Lasserre, Detecting global optimality and extracting solutions in GloptiPoly, Positive polynomials in control, pp. 293–310, Lecture Notes in Control and Inform. Sci., 312, Springer, Berlin, 2005.
Optimization Methods and Software, 24(4-5):761–779, 2009.
[23] A. von Heusinger and C. Kanzow, Relaxation methods for Generalized Nash Equilibrium Problems with inexact line search, Journal of Optimization Theory and Applications, 143 (2009), pp. 159–183.
[24] A. von Heusinger and C. Kanzow, Optimization reformulations of the Generalized Nash Equilibrium Problem using Nikaido-Isoda-type functions, Computational Optimization and Applications, 43 (2009), pp. 353–377.
[25] C. Kanzow and D. Steck, Augmented Lagrangian methods for the solution of Generalized Nash Equilibrium Problems, SIAM Journal on Optimization, 26 (2016), pp. 2034–2058.
[26] E. de Klerk and M. Laurent, On the Lasserre hierarchy of semidefinite programming relaxations of convex polynomial optimization problems,
SIAM Journal on Optimization , 21.3(2011): 824-832.[27] J. Lasserre, Global optimization with polynomials and the problem of moments,
SIAM Journal on Optimization, 11 (2001), pp. 796–817.
[28] J. Lasserre, Convexity in semialgebraic geometry and polynomial optimization, SIAM Journal on Optimization.
[29] J. Lasserre, An introduction to polynomial and semi-algebraic optimization, Cambridge University Press, Volume 52, 2015.
[30] J. Lasserre, The Moment-SOS Hierarchy, Proceedings of the International Congress of Mathematicians (ICM 2018), vol. 3, B. Sirakov, P. Ney de Souza and M. Viana (Eds.), pp. 3761–3784, World Scientific, 2019.
[31] J. Lasserre, M. Laurent and P. Rostalski, Semidefinite characterization and computation of zero-dimensional real radical ideals,
Foundations of Computational Mathematics , 8(5), 607–647, 2008.[32] M. Laurent, Sums of squares, moment matrices and optimization over polynomials,
EmergingApplications of Algebraic Geometry of IMA Volumes in Mathematics and its Applications ,vol. 149, pp. 157–270, Springer, 2009.[33] M. Laurent, Optimization over polynomials: Selected topics,
Proceedings of the InternationalCongress of Mathematicians , ICM 2014, S. Jang, Y. Kim, D-W. Lee, and I. Yie (eds.), pp.843-869, 2014.[34] K. Nabetani, P. Tseng, and M. Fukushima, Parametrized variational inequality approaches toGeneralized Nash Equilibrium Problems with shared constraints,
Computational Optimiza-tion and Applications , 48 (2011), pp. 423–452.[35] J. Nie and B. Sturmfels, Matrix cubes parameterized by eigenvalues,
SIAM journal on matrixanalysis and applications , 31 (2), 755–766, 2009.[36] J. Nie, Certifying convergence of Lasserre’s hierarchy via flat truncation,
Mathematical Pro-gramming , 142(1-2):485–510, 2013.[37] J. Nie, Polynomial optimization with real varieties,
SIAM Journal On Optimization
Mathematicalprogramming , 146(1-2):97–121, 2014.[39] J. Nie, The hierarchy of local minimums in polynomial optimization,
Mathematical Program-ming
151 (2), pp. 555–583, 2015.[40] J. Nie, Linear optimization with cones of moments and nonnegative polynomials,
Mathemat-ical Programming , 153(1), 247–274, 2013.[41] J. Nie, Generating polynomials and symmetric tensor decompositions,
Foundations of Com-putational Mathematics
17 (2), 423–465, 2017.[42] J. Nie, Low rank symmetric tensor approximations,
SIAM Journal on Matrix Analysis andApplications , 38(4), 1517–1540, 2017.[43] J. Nie, Tight relaxations for polynomial optimization and Lagrange multiplier expressions,
Mathematical Programming.
[44] Computational Optimization and Applications, (2020).
[45] J. Nie and X. Tang, Nash Equilibrium Problems of Polynomials, Preprint, 2020. arXiv:2006.09490
[46] J. Nie, L. Wang, J. Ye and S. Zhong, A Lagrange Multiplier Expression Method for Bilevel Polynomial Optimization, Preprint, 2020. arXiv:2007.07933
[47] J. Nie, Z. Yang and G. Zhou, The Saddle Point Problem of Polynomials, Preprint, 2018. arXiv:1809.01218
[48] J. Pang and M. Fukushima, Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games, Computational Management Science, 2 (2005), pp. 21–56.
[49] M. Putinar, Positive polynomials on compact semi-algebraic sets, Indiana University Mathematics Journal, 42(3) (1993), pp. 969–984.
[50] M. Schweighofer, Optimization of polynomials on compact semialgebraic sets, SIAM Journal on Optimization, 15(3) (2005), pp. 805–825.
[51] J. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software, 11(1-4) (1999), pp. 625–653.
Jiawang Nie, Xindong Tang, Department of Mathematics, University of California San Diego, 9500 Gilman Drive, La Jolla, CA, USA, 92093.
Email address: