A new characterization of symmetric $H^+$-tensors
Xin Shi$^{a,*}$, Luis F. Zuluaga$^{a}$

$^a$Department of Industrial and Systems Engineering, P.C. Rossin College of Engineering & Applied Science, Lehigh University, PA, 18015

$^*$Corresponding author. Email addresses: [email protected] (Xin Shi), [email protected] (Luis F. Zuluaga).

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Abstract
In this work, we present a new characterization of symmetric $H^+$-tensors. It is known that a symmetric tensor is an $H^+$-tensor if and only if it is a generalized diagonally dominant tensor with nonnegative diagonal elements. By exploring the diagonal dominance property, we derive new necessary and sufficient conditions for a symmetric tensor to be an $H^+$-tensor. Based on these conditions, we propose a new method that allows one to check whether a tensor is a symmetric $H^+$-tensor in polynomial time. In particular, this allows one to efficiently compute the minimum $H$-eigenvalue of tensors in the related and important class of $M$-tensors. Furthermore, we show how this result can be used to approximately solve polynomial optimization problems.

Keywords: $H^+$-tensors, Generalized diagonally dominant tensors, Power cone optimization, Polynomial optimization, Minimum $H$-eigenvalues
1. Introduction
Tensors can be regarded as a high-order generalization of matrices. For $m, n \in \mathbb{N}$, an $m$-order $n$-dimensional real tensor is a multidimensional array
of the form
$$\mathcal{A} = (a_{i_1 i_2 \ldots i_m}), \qquad a_{i_1 i_2 \ldots i_m} \in \mathbb{R}, \quad 1 \le i_1, i_2, \ldots, i_m \le n.$$
Matrices are tensors with order $m = 2$. Denote $\mathbb{T}_{m,n}$ as the space of all real tensors with order $m$ and dimension $n$. Then $\mathbb{T}_{m,n} = \underbrace{\mathbb{R}^n \otimes \mathbb{R}^n \otimes \cdots \otimes \mathbb{R}^n}_{m}$, where $\otimes$ is the outer product. Denote $[n] = \{1, 2, \ldots, n\}$. The tensor $\mathcal{A} = (a_{i_1 \ldots i_m}) \in \mathbb{T}_{m,n}$ is called symmetric if its entries $a_{i_1 \ldots i_m}$ are invariant under any permutation of $(i_1, \ldots, i_m)$ for $i_j \in [n]$, $j \in [m]$. Denote $\mathbb{S}_{m,n}$ as the set of symmetric tensors in $\mathbb{T}_{m,n}$. The entries $a_{ii\ldots i}$ for $i \in [n]$ are called the diagonal elements (or entries) of $\mathcal{A}$.

Following [7, 28, 38], for $\mathcal{A} \in \mathbb{T}_{m,n}$, $\lambda \in \mathbb{C}$ is called an eigenvalue of $\mathcal{A}$ if there exists an eigenvector $x \in \mathbb{C}^n \setminus \{0\}$ such that $\mathcal{A}x^{m-1} = \lambda x^{[m-1]}$, where $\mathcal{A}x^{m-1} \in \mathbb{C}^n$ is defined by
$$(\mathcal{A}x^{m-1})_i = \sum_{i_2, \ldots, i_m = 1}^{n} a_{i i_2 \ldots i_m} x_{i_2} \cdots x_{i_m},$$
and $x^{[m-1]} \in \mathbb{C}^n \setminus \{0\}$ is defined by $(x^{[m-1]})_i = x_i^{m-1}$ for all $i \in [n]$. In particular, if $x$ is real, then $\lambda$ is also real; in this case, we say that $\lambda$ is an $H$-eigenvalue of $\mathcal{A}$.

The comparison tensor of $\mathcal{A} \in \mathbb{T}_{m,n}$, denoted $\mathcal{M}(\mathcal{A})$, is defined in [12, 20] as follows:
$$\mathcal{M}(\mathcal{A})_{i_1 \ldots i_m} = \begin{cases} |a_{i_1 \ldots i_m}| & \text{if } i_1 = \cdots = i_m, \\ -|a_{i_1 \ldots i_m}| & \text{otherwise}. \end{cases} \tag{1}$$
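To make these two operations concrete, the following is a minimal NumPy sketch (ours, not from the original paper) of the tensor-vector product $\mathcal{A}x^{m-1}$ and the comparison tensor (1); the example tensor and its values are illustrative only.

```python
import itertools
import numpy as np

def tensor_apply(A, x):
    """Compute A x^{m-1}: the i-th entry is the sum of
    a_{i i_2...i_m} x_{i_2} ... x_{i_m} over all (i_2, ..., i_m)."""
    out = A
    for _ in range(A.ndim - 1):
        out = out @ x          # contract the last index with x
    return out

def comparison_tensor(A):
    """Comparison tensor M(A) of (1): |a| on the diagonal, -|a| elsewhere."""
    M = -np.abs(A)
    for i in range(A.shape[0]):
        M[(i,) * A.ndim] = abs(A[(i,) * A.ndim])
    return M

# A symmetric 3rd-order, 2-dimensional tensor: diagonal entries 2 and 3,
# every permutation of the index (1,1,2) set to -0.5 (0-based: (0,0,1)).
A = np.zeros((2, 2, 2))
A[0, 0, 0], A[1, 1, 1] = 2.0, 3.0
for p in set(itertools.permutations((0, 0, 1))):
    A[p] = -0.5
x = np.ones(2)
print(tensor_apply(A, x))        # -> [1.  2.5]
print(comparison_tensor(A)[0])   # first slice of M(A)
```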
Following [12, 20], we introduce the following classes of tensors. A tensor is called a nonnegative tensor if all its entries are nonnegative, and a tensor is called a diagonal tensor if all its off-diagonal elements are zero. A tensor $\mathcal{A} \in \mathbb{T}_{m,n}$ is said to be a $Z$-tensor if there exists a nonnegative tensor $\mathcal{D} \in \mathbb{T}_{m,n}$ and a nonnegative scalar $s$ such that $\mathcal{A} = s\mathcal{I} - \mathcal{D}$, where $\mathcal{I} \in \mathbb{T}_{m,n}$ is the diagonal tensor with all diagonal elements equal to one. For a tensor $\mathcal{A}$, denote $\rho(\mathcal{A})$ as its spectral radius, that is, the maximum modulus of its eigenvalues. A $Z$-tensor $\mathcal{A} = s\mathcal{I} - \mathcal{D}$ is said to be an $M$-tensor if $s \ge \rho(\mathcal{D})$; if $s > \rho(\mathcal{D})$, then $\mathcal{A}$ is called a strong $M$-tensor. A tensor is called an $H$-tensor if its comparison tensor is an $M$-tensor, and a strong $H$-tensor if its comparison tensor is a strong $M$-tensor. An $H$-tensor with nonnegative diagonal elements is called an $H^+$-tensor.

The authors in [20, Theorem 4.9] show that a symmetric tensor is an $H$-tensor if and only if it is a generalized diagonally dominant tensor (see Definition 1). The matrix version (i.e., when $m = 2$) of this result is given in [6, Theorem 8] and [45]. Furthermore, the authors in [6] prove that a symmetric matrix is an $H^+$-matrix if and only if it can be written as the sum of a number of positive semidefinite matrices that have a special sparse structure. Based on this result, the authors in [1] show that membership in the set of symmetric $H^+$-matrices can be decided in polynomial time by solving a second-order cone optimization problem [see, e.g., 30].

In this work, we generalize these results to symmetric $H^+$-tensors. Namely, we prove that a symmetric tensor is an $H^+$-tensor if and only if it can be written as the sum of a number of tensors that have a special sparse structure (see Theorem 11). Based on this result, we obtain (see Theorem 13) a novel characterization of $H^+$-tensors that is amenable to the use of conic optimization techniques [see, e.g., 48]. In particular, we show (see Corollary 15 and (27)) that membership in the set of symmetric $H^+$-tensors can be decided in polynomial time by solving a power cone optimization problem [see, e.g., 9, 15].

A lot of effort has been made to characterize $H^+$-tensors [see, e.g., 17, 25, 27, 29, 46, 49, 51]. However, these articles typically focus on studying sufficient conditions for a tensor to be an $H$-tensor. A notable exception is recent work based on the spectral theory of nonnegative tensors. Namely, the authors in [31] present a necessary and sufficient condition for strong $H$-tensors and propose an iterative algorithm for identifying strong $H$-tensors. In contrast to their methodology, here we study necessary and sufficient conditions for a symmetric tensor to be an $H^+$-tensor by exploring the diagonal dominance property. Unlike the recent results in [31], this type of characterization allows one to directly optimize over the set of $H^+$-tensors, as we illustrate in Section 4 and Section 5.

In particular, in Section 4, we consider the problem of computing the minimum $H$-eigenvalue of $M$-tensors (which generalize $M$-matrices), which play an important role in a wide range of interesting applications [see, 18, and the references therein]. In contrast with the problem of obtaining bounds on the minimum $H$-eigenvalue of $M$-tensors, which has received significant attention in the literature [14, 18, 24, 43], here we use our characterization of $H^+$-tensors to compute $H$-eigenvalues of $M$-tensors by solving a power cone optimization problem (see Corollary 21). A comparison of the $H$-eigenvalues obtained in this way with bounds proposed in the literature is provided in Table 1.

Further, in Section 5, we show that our characterization can be applied to address the solution of polynomial optimization problems [see, e.g., 22]; that is, optimization problems in which both the objective and the constraints are defined by polynomials.
The connection between tensors and polynomials stems from the fact that the coefficients of a polynomial can be described using tensors. Of particular importance is the problem of deciding whether a given polynomial is nonnegative. For example, this problem appears in the field of shape-constrained function estimation [36], the stability study of nonlinear autonomous systems in automatic control [2], and spectral hypergraph theory [26]. However, checking whether an $m$-order, $n$-variate polynomial is nonnegative is, in general, an NP-hard problem (already when $m = 4$). This has motivated the study of tractable subclasses of nonnegative polynomials [see, e.g., 1, 8, 19, 42]. One classical choice is to use sums of squares (SOS) to certify the nonnegativity of a polynomial [see, e.g., 22]. To further improve tractability, the authors in [1] propose more tractable alternatives to SOS to certify the nonnegativity of a polynomial. Their approach is based on properties of symmetric $H^+$-matrices and the fact that polynomials that correspond to symmetric $H^+$-matrices are nonnegative. Note that $H^+$-tensors correspond to a high-order generalization of $H^+$-matrices. Thus, it is relevant to consider whether $H^+$-tensors can be used to certify the nonnegativity of a polynomial. In Section 5, we introduce results that allow the use of polynomials whose coefficients are given by $H^+$-tensors (see Proposition 25 and Proposition 26) to obtain alternative approaches for the approximation of polynomial optimization problems. We illustrate our results by presenting and analyzing the numerical results in Table 2 and Table 3. In particular, we show that the use of $H^+$-tensor induced polynomials can provide a good trade-off between the tightness of the approximations and the effort required to obtain them.

For ease of exposition, in what follows we use small letters $a, b, \ldots$ for scalars and vectors; capital letters
$A, B, \ldots$ for matrices; calligraphic letters $\mathcal{A}, \mathcal{B}, \ldots$ for tensors and for index sets; and blackboard bold letters $\mathbb{T}, \mathbb{D}, \ldots$ for other kinds of sets or spaces.

The remainder of the article is structured as follows: Section 2 introduces additional notation, definitions, and some basic results. In Section 3, the characterizations of symmetric $H^+$-tensors are presented; with these characterizations, we provide a way to check whether a tensor is a symmetric $H^+$-tensor in polynomial time. In Section 4, these results are used to compute the minimum $H$-eigenvalue of $M$-tensors. The applications of these results in polynomial optimization are illustrated in Section 5. Section 6 concludes this work. In the Appendix, we derive additional results regarding the relationship between $H^+$-tensor induced polynomials and other classes of polynomials; these results are used in Section 5, but are also interesting on their own.
2. Preliminaries
First we introduce additional notation and fundamental properties of tensors.
Let $\mathbb{R}[x] := \mathbb{R}[x_1, \ldots, x_n]$ be the set of polynomials in $n$ variables with real coefficients. A polynomial $p \in \mathbb{R}[x]$ is called a sum of squares (SOS) if it can be written as $p = \sum_i q_i^2$ for a finite number of polynomials $q_i \in \mathbb{R}[x]$. A tensor $\mathcal{A} \in \mathbb{S}_{m,n}$ is said to have an SOS decomposition if its corresponding polynomial $\mathcal{A}x^m$ is an SOS [see, e.g., 33]. The authors in [10] show that every symmetric $H^+$-tensor of even order has an SOS decomposition.

Theorem 1 ([10], Theorem 3.7). Let $m, n \in \mathbb{N}$ and $\mathcal{A} \in \mathbb{S}_{m,n}$ be an $H^+$-tensor. If $m$ is even, then $\mathcal{A}$ has an SOS tensor decomposition.

From Theorem 1, it follows that a symmetric $H^+$-tensor of even order is also a PSD tensor. On the other hand, symmetric $H^+$-tensors can be characterized using the notion of diagonally dominant tensors (see Definition 1). Most of the work related to $H^+$-tensors makes use of the diagonal dominance property [see, e.g., 17, 25, 27, 46, 51], and we will also make use of this property in this work. The definitions of diagonally dominant tensors and generalized diagonally dominant tensors are given below.

Definition 1 ([32], Definition 6.5). Let $m, n \in \mathbb{N}$ and $\mathcal{A} = (a_{i_1 \ldots i_m}) \in \mathbb{T}_{m,n}$.

(i) $\mathcal{A}$ is called a diagonally dominant (DD) tensor if
$$|a_{ii\ldots i}| \ge \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} |a_{i i_2 \ldots i_m}|, \quad \forall i \in [n]. \tag{2}$$

(ii) $\mathcal{A}$ is called a generalized diagonally dominant (GDD) tensor if there exists a positive diagonal matrix $D$ such that the tensor $\mathcal{A}D^{-(m-1)}D\cdots D$, defined as
$$(\mathcal{A}D^{-(m-1)}D\cdots D)_{i_1 \ldots i_m} = a_{i_1 \ldots i_m}\, d_{i_1}^{-(m-1)} d_{i_2} \cdots d_{i_m}, \quad \forall i_1, \ldots, i_m \in [n], \tag{3}$$
is diagonally dominant, where $d_i = D_{ii}$ is the $i$th diagonal element of $D$.

From the definition of DD tensors and GDD tensors, one can derive an equivalent definition of GDD tensors that will be useful throughout the article (Proposition 2 below).
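Since condition (2) is entirely explicit, it can be checked directly. The following is a minimal NumPy sketch (ours, not from the paper) of the DD check, together with the diagonal scaling $\mathcal{A}DD\cdots D$ that appears in Proposition 2 below.

```python
import numpy as np

def is_dd(A, tol=1e-12):
    """Check condition (2): for every i, the diagonal entry |a_{ii...i}|
    dominates the sum of |a_{i i_2...i_m}| over the rest of slice i."""
    m, n = A.ndim, A.shape[0]
    for i in range(n):
        diag = abs(A[(i,) * m])
        off = np.abs(A[i]).sum() - diag   # slice i minus its diagonal entry
        if diag < off - tol:
            return False
    return True

def scale(A, d):
    """The tensor A D D ... D of Proposition 2:
    (A D...D)_{i_1...i_m} = a_{i_1...i_m} d_{i_1} d_{i_2} ... d_{i_m}."""
    out = A.copy()
    for axis in range(A.ndim):
        shape = [1] * A.ndim
        shape[axis] = d.size
        out = out * d.reshape(shape)
    return out

# A tensor A is GDD exactly when is_dd(scale(A, d)) holds for some d > 0.
```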
Proposition 2. Let $m, n \in \mathbb{N}$. Then $\mathcal{A} \in \mathbb{T}_{m,n}$ is a GDD tensor if and only if there exists a positive diagonal matrix $D$ such that the tensor $\mathcal{A}DD\cdots D$, defined as
$$(\mathcal{A}DD\cdots D)_{i_1 \ldots i_m} = a_{i_1 \ldots i_m}\, d_{i_1} d_{i_2} \cdots d_{i_m}, \quad \forall i_1, \ldots, i_m \in [n], \tag{4}$$
is diagonally dominant, where $d_i = D_{ii}$ is the $i$th diagonal element of $D$. If $\mathcal{A} \in \mathbb{S}_{m,n}$, then $\mathcal{A}DD\cdots D \in \mathbb{S}_{m,n}$.

Proof. From Definition 1(ii), if $\mathcal{A} = (a_{i_1 \ldots i_m}) \in \mathbb{T}_{m,n}$ is a GDD tensor, then there exists a positive diagonal matrix $D$ such that $\mathcal{A}D^{-(m-1)}D\cdots D$ is a DD tensor. That is, for all $i \in [n]$,
$$|(\mathcal{A}D^{-(m-1)}D\cdots D)_{i\ldots i}| \ge \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} |(\mathcal{A}D^{-(m-1)}D\cdots D)_{i i_2 \ldots i_m}|. \tag{5}$$
Note that (5) is equivalent to
$$|a_{i\ldots i}| \ge \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} |a_{i i_2 \ldots i_m}|\, d_i^{-(m-1)} d_{i_2} \cdots d_{i_m}. \tag{6}$$
Considering that $d_i > 0$ for all $i \in [n]$, and multiplying both sides of (6) by $d_i^m$, we have that
$$|a_{i\ldots i}|\, d_i^m \ge \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} |a_{i i_2 \ldots i_m}|\, d_i d_{i_2} \cdots d_{i_m} \tag{7}$$
for all $i \in [n]$. Thus, the tensor $\mathcal{A}DD\cdots D$ defined by (4) is a DD tensor. For the other direction, if the tensor $\mathcal{A}DD\cdots D$ defined by (4) is a DD tensor for a positive diagonal matrix $D$, then inequality (7) holds for all $i \in [n]$. Dividing both sides of (7) by $d_i^m > 0$, we obtain inequality (6), which is equivalent to (5) for all $i \in [n]$ and indicates that $\mathcal{A}$ is a GDD tensor. □

For the remainder of this work, we assume that every tensor is a symmetric tensor unless it is explicitly stated otherwise. Denote by $DD_{m,n}$ and $GDD_{m,n}$ the set of DD tensors and the set of GDD tensors in $\mathbb{S}_{m,n}$, respectively. DD and GDD tensors with nonnegative diagonal elements will be referred to as $DD^+$ and $GDD^+$ tensors, respectively. Also, denote by $DD^+_{m,n}$ and $GDD^+_{m,n}$ the set of $DD^+$ tensors and the set of $GDD^+$ tensors in $\mathbb{S}_{m,n}$, respectively.

For $n \in \mathbb{N}$, a set $\mathbb{W} \subset \mathbb{R}^n$ is called a cone if $0 \in \mathbb{W}$ and $x \in \mathbb{W}$ implies $\lambda x \in \mathbb{W}$ for any $\lambda \ge 0$. A set $\mathbb{W}$ is called a convex cone if it contains $\lambda x + \mu y$ for any $x, y \in \mathbb{W}$ and any $\lambda, \mu \ge 0$. Given a set $\mathbb{W}$, let $cone(\mathbb{W}) = \{\lambda x \mid x \in \mathbb{W},\ \lambda \ge 0\}$ be the conic hull of $\mathbb{W}$, and $convex(\mathbb{W}) = \{\lambda x + \mu y \mid x, y \in \mathbb{W},\ \lambda, \mu \ge 0,\ \lambda + \mu = 1\}$ be the convex hull of $\mathbb{W}$. Clearly, for $m, n \in \mathbb{N}$, $DD_{m,n}$ is a cone and $DD^+_{m,n}$ is a convex cone. We will show later that $GDD^+_{m,n}$ is also a convex cone (see Proposition 10). Next, we present a characterization of symmetric $H$-tensors using symmetric GDD tensors.

Theorem 3 ([20], Theorem 4.9). Let $m, n \in \mathbb{N}$ and $\mathcal{A} \in \mathbb{S}_{m,n}$. Then $\mathcal{A}$ is an $H$-tensor if and only if $\mathcal{A} \in GDD_{m,n}$.

Corollary 4. Let $m, n \in \mathbb{N}$ and $\mathcal{A} \in \mathbb{S}_{m,n}$. Then $\mathcal{A}$ is an $H^+$-tensor if and only if $\mathcal{A} \in GDD^+_{m,n}$.

From Theorem 1 and Corollary 4, if $m$ is even, we have the following inclusion relationships: $DD^+_{m,n} \subseteq GDD^+_{m,n} \subseteq PSD_{m,n}$. In light of Corollary 4, in what follows we will take the liberty of using both $H^+$ and $GDD^+$ interchangeably to refer to $H^+$-tensors.

Denote $card(\mathbb{A})$ as the cardinality of the set $\mathbb{A}$. For $m, n \in \mathbb{N}$, define the index sets
$$\mathbb{D}^m_n = \{(i_1, \ldots, i_m) \mid 1 \le i_1 \le \cdots \le i_m \le n\} \cap \{(i_1, \ldots, i_m) : card(\{i_1, \ldots, i_m\}) > 1\}$$
and $\mathbb{F}^m_n = \{(\underbrace{i, i, \ldots, i}_{m}) \mid i \in [n]\}$. For any index $(i_1, \ldots, i_m) \in \mathbb{D}^m_n \cup \mathbb{F}^m_n$, denote $\mathbb{P}_{i_1 \ldots i_m}$ as the set of all permutations of $i_1, \ldots, i_m$, and denote $\mathbb{Q}_{i_1 \ldots i_m} = \{(\underbrace{p, p, \ldots, p}_{m}) \mid p \in \{i_1, \ldots, i_m\}\}$. Also, for $(i_1, \ldots, i_m) \in \mathbb{D}^m_n \cup \mathbb{F}^m_n$, let $\mathbb{D}^{i_1 \ldots i_m}_{m,n} \subseteq \mathbb{S}_{m,n}$ be the set of sparse tensors defined as follows:
$$\mathbb{D}^{i_1 \ldots i_m}_{m,n} = \{(a_{j_1 \ldots j_m}) \in \mathbb{S}_{m,n} \mid a_{j_1 \ldots j_m} = 0 \text{ if } (j_1, \ldots, j_m) \notin \mathbb{P}_{i_1 \ldots i_m} \cup \mathbb{Q}_{i_1 \ldots i_m}\}. \tag{8}$$
Further, let $\mathbb{D}_{m,n} = \bigcup_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \mathbb{D}^{i_1 \ldots i_m}_{m,n}$. To assist the proofs in this work, we introduce the following class of tensors.
Definition 2. For $m, n \in \mathbb{N}$ and any $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, $c \in \{0, 1\}$, denote $\mathcal{V}^{c, i_1 \ldots i_m} = (v^{c, i_1 \ldots i_m}_{j_1 \ldots j_m}) \in \mathbb{D}^{i_1 \ldots i_m}_{m,n}$ as the tensor defined by:

(i) $v^{c, i_1 \ldots i_m}_{j_1 \ldots j_m} = (-1)^c$ if $(j_1, \ldots, j_m) \in \mathbb{P}_{i_1 \ldots i_m}$.

(ii) The value of the $j$-th diagonal element is equal to the sum of the absolute values of the off-diagonal entries on the $j$-th slice (the diagonal elements are excluded from the sum); that is,
$$v^{c, i_1 \ldots i_m}_{jj\ldots j} = \sum_{(j_2, \ldots, j_m) \neq (j, \ldots, j)} |v^{c, i_1 \ldots i_m}_{j j_2 \ldots j_m}|, \quad \forall j \in [n].$$

Further, for all $i \in [n]$, denote $\mathcal{V}^{0, ii\ldots i}$ as the tensor whose only nonzero entry is $v^{0, ii\ldots i}_{ii\ldots i} = 1$, and $\mathcal{V}^{1, ii\ldots i}$ as the tensor with all entries equal to $0$. Also, denote $\mathbb{E}_{m,n} = \{\mathcal{V}^{c, i_1 \ldots i_m} \mid c \in \{0, 1\},\ (i_1, \ldots, i_m) \in \mathbb{D}^m_n \cup \mathbb{F}^m_n\}$.

Clearly, from Definition 2, it follows that for all $(i_1, \ldots, i_m) \in \mathbb{D}^m_n \cup \mathbb{F}^m_n$ and $c \in \{0, 1\}$, $\mathcal{V}^{c, i_1 \ldots i_m} \in DD^+_{m,n}$. For example, when $m = 2$ and $n = 4$, we have
$$\mathcal{V}^{0,12} = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \qquad \mathcal{V}^{1,12} = \begin{pmatrix} 1 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
These generators can also be built mechanically from Definition 2, as in the sketch below.
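The following is a small NumPy sketch (ours) that constructs the generators $\mathcal{V}^{c, i_1 \ldots i_m}$ directly from Definition 2; indices are 0-based in the code.

```python
import itertools
import numpy as np

def V(c, idx, n):
    """The tensor V^{c, i_1...i_m} of Definition 2 (0-based indices):
    (-1)^c on every permutation of idx, and each diagonal entry equal
    to the sum of absolute off-diagonal values on its slice."""
    m = len(idx)
    T = np.zeros((n,) * m)
    for p in set(itertools.permutations(idx)):
        T[p] = (-1) ** c
    for j in range(n):
        T[(j,) * m] = np.abs(T[j]).sum()   # diagonal = slice's off-diagonal mass
    return T

print(V(0, (0, 1), 4)[:2, :2])   # top-left block of V^{0,12}: [[1, 1], [1, 1]]
print(V(1, (0, 1), 4)[:2, :2])   # top-left block of V^{1,12}: [[1, -1], [-1, 1]]
```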
For ease of exposition, we also introduce an auxiliary notation for indices. For $m, n \in \mathbb{N}$, an index $\vec{i} := (i_1, i_2, \ldots, i_m) \in \mathbb{D}^m_n$, and some $l_{\vec{i}} \in [m]$, we call $((j^{\vec{i}}_1, j^{\vec{i}}_2, \ldots, j^{\vec{i}}_{l_{\vec{i}}}), (\alpha^{\vec{i}}_1, \alpha^{\vec{i}}_2, \ldots, \alpha^{\vec{i}}_{l_{\vec{i}}})) \in [n]^{l_{\vec{i}}} \times [m]^{l_{\vec{i}}}$ the tight pair of $\vec{i}$ if $(j^{\vec{i}}_1, \ldots, j^{\vec{i}}_{l_{\vec{i}}})$ and $(\alpha^{\vec{i}}_1, \ldots, \alpha^{\vec{i}}_{l_{\vec{i}}})$ satisfy
$$x_{i_1} x_{i_2} \cdots x_{i_m} = x_{j^{\vec{i}}_1}^{\alpha^{\vec{i}}_1} x_{j^{\vec{i}}_2}^{\alpha^{\vec{i}}_2} \cdots x_{j^{\vec{i}}_{l_{\vec{i}}}}^{\alpha^{\vec{i}}_{l_{\vec{i}}}}, \tag{9}$$
where $1 \le j^{\vec{i}}_1 < j^{\vec{i}}_2 < \cdots < j^{\vec{i}}_{l_{\vec{i}}} \le n$. We will refer to $(j^{\vec{i}}_1, \ldots, j^{\vec{i}}_{l_{\vec{i}}})$ as the tight index and to $(\alpha^{\vec{i}}_1, \ldots, \alpha^{\vec{i}}_{l_{\vec{i}}})$ as the tight power. However, we will routinely drop the superscript $\vec{i}$ in the notation when the $\vec{i}$ we are referring to is clear from (or fixed in) the context. Also, denote $e_j$ as the unitary vector in the $j$th direction of appropriate dimension.

3. New characterization of symmetric $H^+$-tensors

Next, we present a new characterization of symmetric $H^+$-tensors (or, equivalently, $GDD^+$ tensors (cf. Corollary 4)) based on the power cone [9, 15]. First, we characterize $DD^+$ tensors with the following result.

Proposition 5. For $m, n \in \mathbb{N}$, $DD^+_{m,n} = convex(cone(\mathbb{E}_{m,n}))$ and each tensor in $\mathbb{E}_{m,n}$ generates an extreme ray of $DD^+_{m,n}$.
Proof. First, from Definition 2, it follows that $\mathbb{E}_{m,n} \subseteq DD^+_{m,n}$. This, together with the fact that $DD^+_{m,n}$ is a convex cone, implies that $convex(cone(\mathbb{E}_{m,n})) \subseteq DD^+_{m,n}$. Second, for $\mathcal{A} = (a_{i_1 \ldots i_m}) \in DD^+_{m,n}$, denote $\mathbb{P}^+ = \{(i_1, \ldots, i_m) \in \mathbb{D}^m_n \mid a_{i_1 i_2 \ldots i_m} \ge 0\}$ and $\mathbb{P}^- = \{(i_1, \ldots, i_m) \in \mathbb{D}^m_n \mid a_{i_1 i_2 \ldots i_m} < 0\}$. Then
$$\mathcal{A} = \sum_{i=1}^{n} \Big( a_{ii\ldots i} - \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} |a_{i i_2 \ldots i_m}| \Big) \mathcal{V}^{0, ii\ldots i} + \sum_{(i_1, \ldots, i_m) \in \mathbb{P}^+} a_{i_1 \ldots i_m} \mathcal{V}^{0, i_1 \ldots i_m} + \sum_{(i_1, \ldots, i_m) \in \mathbb{P}^-} (-a_{i_1 \ldots i_m}) \mathcal{V}^{1, i_1 \ldots i_m}. \tag{10}$$
Since $\mathcal{A} \in DD^+_{m,n}$, $a_{ii\ldots i} \ge \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} |a_{i i_2 \ldots i_m}|$ for all $i \in [n]$. Thus, after noticing that all the coefficients on the right-hand side of (10) are nonnegative, $\mathcal{A}$ is in the convex hull of the conic hull of $\mathbb{E}_{m,n}$. That is, $DD^+_{m,n} \subseteq convex(cone(\mathbb{E}_{m,n}))$. □
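The decomposition (10) is completely explicit and can be computed entry by entry. A minimal NumPy sketch (ours), written for any order but shown on a matrix example, is:

```python
import itertools
import numpy as np

def dd_plus_decomposition(A):
    """Coefficients of the decomposition (10) of a DD+ tensor A:
    {('diag', i): coeff} for V^{0,ii...i} and {(c, idx): coeff} for
    V^{c, i_1...i_m} with idx in D_n^m (0-based indices; ours)."""
    m, n = A.ndim, A.shape[0]
    coeffs = {}
    for idx in itertools.combinations_with_replacement(range(n), m):
        if len(set(idx)) == 1:
            continue                              # handled by the diagonal residual
        a = A[idx]
        if a != 0:
            coeffs[(0 if a > 0 else 1, idx)] = abs(a)
    for i in range(n):
        off = np.abs(A[i]).sum() - abs(A[(i,) * m])
        coeffs[('diag', i)] = A[(i,) * m] - off   # nonnegative iff A is DD+
    return coeffs

A = np.array([[2.0, -1.0], [-1.0, 3.0]])
print(dd_plus_decomposition(A))
# {(1, (0, 1)): 1.0, ('diag', 0): 1.0, ('diag', 1): 2.0}
```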
To give a similar characterization for $GDD^+$ tensors, we need Theorems 6 and 7 and Propositions 8 and 9.

Theorem 6 ([39], Theorem 1(a)). For $m, n \in \mathbb{N}$, if $\mathcal{D} \in \mathbb{S}_{m,n}$ is a nonnegative tensor, then $\rho(\mathcal{D})$ is an $H$-eigenvalue of $\mathcal{D}$.

Denote the largest $H$-eigenvalue of a tensor $\mathcal{A} \in \mathbb{S}_{m,n}$ as $\lambda_{max}(\mathcal{A})$.

Theorem 7 ([39], Theorem 2). For $m, n \in \mathbb{N}$, if $\mathcal{A} \in \mathbb{S}_{m,n}$ is a nonnegative tensor, then
$$\lambda_{max}(\mathcal{A}) = \max \Big\{ \mathcal{A}x^m : x \in \mathbb{R}^n_+,\ \sum_{i=1}^{n} x_i^m = 1 \Big\}.$$

Proposition 8. For $m, n \in \mathbb{N}$, if both $\mathcal{A} \in \mathbb{S}_{m,n}$ and $\mathcal{B} \in \mathbb{S}_{m,n}$ are nonnegative tensors, then $\rho(\mathcal{A} + \mathcal{B}) \le \rho(\mathcal{A}) + \rho(\mathcal{B})$.

Proof. Let $\mathcal{D} \in \mathbb{T}_{m,n}$. From the definitions of $\rho(\mathcal{D})$ and $\lambda_{max}(\mathcal{D})$, it clearly follows that $\rho(\mathcal{D}) \ge \lambda_{max}(\mathcal{D})$. If $\mathcal{D}$ is a symmetric nonnegative tensor, it then follows from Theorem 6 that
$$\rho(\mathcal{D}) = \lambda_{max}(\mathcal{D}). \tag{11}$$
Let $\mathcal{A} \in \mathbb{S}_{m,n}$ and $\mathcal{B} \in \mathbb{S}_{m,n}$ be nonnegative tensors. Then we have from equation (11) that $\rho(\mathcal{A}) = \lambda_{max}(\mathcal{A})$ and $\rho(\mathcal{B}) = \lambda_{max}(\mathcal{B})$. Furthermore, from Theorem 7, we have
$$\begin{aligned} \lambda_{max}(\mathcal{A} + \mathcal{B}) &= \max \Big\{ (\mathcal{A} + \mathcal{B})x^m : x \in \mathbb{R}^n_+,\ \textstyle\sum_{i=1}^n x_i^m = 1 \Big\} \\ &= \max \Big\{ \mathcal{A}x^m + \mathcal{B}y^m : x, y \in \mathbb{R}^n_+,\ \textstyle\sum_{i=1}^n x_i^m = 1,\ \sum_{i=1}^n y_i^m = 1,\ x = y \Big\} \\ &\le \max \Big\{ \mathcal{A}x^m : x \in \mathbb{R}^n_+,\ \textstyle\sum_{i=1}^n x_i^m = 1 \Big\} + \max \Big\{ \mathcal{B}y^m : y \in \mathbb{R}^n_+,\ \textstyle\sum_{i=1}^n y_i^m = 1 \Big\} \\ &= \lambda_{max}(\mathcal{A}) + \lambda_{max}(\mathcal{B}). \end{aligned}$$
To finish, notice that $\mathcal{A} + \mathcal{B}$ is a symmetric nonnegative tensor. Thus, after using equation (11) for the tensor $\mathcal{A} + \mathcal{B}$, we conclude that $\rho(\mathcal{A} + \mathcal{B}) = \lambda_{max}(\mathcal{A} + \mathcal{B}) \le \lambda_{max}(\mathcal{A}) + \lambda_{max}(\mathcal{B}) = \rho(\mathcal{A}) + \rho(\mathcal{B})$. □

Proposition 9 ([20], Proposition 2.7). For $m, n \in \mathbb{N}$, let $\mathcal{B} \in \mathbb{S}_{m,n}$ be a $Z$-tensor such that $\mathcal{A} \le \mathcal{B}$, where $\mathcal{A}$ is an $M$-tensor. Then $\mathcal{B}$ is also an $M$-tensor.
Proposition 10. For $m, n \in \mathbb{N}$, $GDD^+_{m,n}$ is a convex cone.

Proof.
Let $\mathcal{A} = (a_{i_1 \ldots i_m}) \in GDD^+_{m,n}$ and $\mathcal{B} = (b_{i_1 \ldots i_m}) \in GDD^+_{m,n}$. From Corollary 4, both $\mathcal{A}$ and $\mathcal{B}$ are symmetric $H^+$-tensors. Thus $\mathcal{M}(\mathcal{A})$ and $\mathcal{M}(\mathcal{B})$ are symmetric $M$-tensors. That is, there exist nonnegative scalars $s_1, s_2$ and nonnegative tensors $\mathcal{D}_1$ and $\mathcal{D}_2$ such that $\mathcal{M}(\mathcal{A}) = s_1\mathcal{I} - \mathcal{D}_1$, $\mathcal{M}(\mathcal{B}) = s_2\mathcal{I} - \mathcal{D}_2$, and $s_1 \ge \rho(\mathcal{D}_1)$, $s_2 \ge \rho(\mathcal{D}_2)$. Then $\mathcal{M}(\mathcal{A}) + \mathcal{M}(\mathcal{B}) = (s_1 + s_2)\mathcal{I} - (\mathcal{D}_1 + \mathcal{D}_2)$. Since $s_1 + s_2 \ge 0$ and $\mathcal{D}_1 + \mathcal{D}_2$ is a nonnegative tensor, $\mathcal{M}(\mathcal{A}) + \mathcal{M}(\mathcal{B})$ is a symmetric $Z$-tensor. Also, from Proposition 8, it follows that $\rho(\mathcal{D}_1 + \mathcal{D}_2) \le \rho(\mathcal{D}_1) + \rho(\mathcal{D}_2) \le s_1 + s_2$. Thus, $\mathcal{M}(\mathcal{A}) + \mathcal{M}(\mathcal{B})$ is also a symmetric $M$-tensor.

Next, we prove that $\mathcal{M}(\mathcal{A} + \mathcal{B})$ is a $Z$-tensor. Recall that $\mathcal{M}(\mathcal{A} + \mathcal{B})$ is the comparison tensor of $\mathcal{A} + \mathcal{B}$; thus, all its diagonal elements are nonnegative and all its off-diagonal elements are nonpositive. Denote $s = \max\{|a_{ii\ldots i}| + |b_{ii\ldots i}| : i \in [n]\}$. Then $\mathcal{M}(\mathcal{A} + \mathcal{B}) = s\mathcal{I} - (s\mathcal{I} - \mathcal{M}(\mathcal{A} + \mathcal{B}))$, where $s\mathcal{I} - \mathcal{M}(\mathcal{A} + \mathcal{B})$ is a nonnegative tensor. Thus, $\mathcal{M}(\mathcal{A} + \mathcal{B})$ is a $Z$-tensor.

From the definition of comparison tensors and the fact that $\mathcal{A}$ and $\mathcal{B}$ have nonnegative diagonal elements, $\mathcal{M}(\mathcal{A} + \mathcal{B}) \ge \mathcal{M}(\mathcal{A}) + \mathcal{M}(\mathcal{B})$ componentwise. From the fact that $\mathcal{M}(\mathcal{A}) + \mathcal{M}(\mathcal{B})$ is an $M$-tensor and $\mathcal{M}(\mathcal{A} + \mathcal{B})$ is a $Z$-tensor, it follows from Proposition 9 that $\mathcal{M}(\mathcal{A} + \mathcal{B})$ is also an $M$-tensor. Thus $\mathcal{A} + \mathcal{B}$ is a symmetric $H^+$-tensor, and from Corollary 4, $\mathcal{A} + \mathcal{B}$ is a $GDD^+$ tensor; that is, $\mathcal{A} + \mathcal{B} \in GDD^+_{m,n}$. This, together with the fact that $\mathcal{A} \in GDD^+_{m,n}$ implies $\lambda\mathcal{A} \in GDD^+_{m,n}$ for any nonnegative scalar $\lambda$, implies that $GDD^+_{m,n}$ is a convex cone. □
Theorem 11. For $m, n \in \mathbb{N}$, $\mathcal{A} \in GDD^+_{m,n}$ if and only if $\mathcal{A} = \sum_{i=1}^{r} \mathcal{B}_i$, where $r \in \mathbb{N}$ and $\mathcal{B}_i \in \mathbb{D}_{m,n} \cap GDD^+_{m,n}$.

Proof. For $m, n \in \mathbb{N}$, let $\mathcal{A} \in GDD^+_{m,n}$. Then, from Proposition 2, there exists a positive diagonal matrix $D$ such that $\mathcal{B} := \mathcal{A}DD\cdots D \in DD^+_{m,n}$. From Proposition 5, it follows that there exist $r \in \mathbb{N}$, $\lambda_i \ge 0$, and $\mathcal{C}_i \in \mathbb{E}_{m,n} \subset \mathbb{D}_{m,n} \cap DD^+_{m,n}$ for $i \in [r]$ such that $\mathcal{B} = \sum_{i=1}^{r} \lambda_i \mathcal{C}_i$. Then $\mathcal{A} = \sum_{i=1}^{r} \lambda_i\, \mathcal{C}_i D^{-1} D^{-1} \cdots D^{-1}$. Let $\mathcal{B}_i = \lambda_i\, \mathcal{C}_i D^{-1} D^{-1} \cdots D^{-1}$ for all $i \in [r]$. Then the only if statement follows after noticing that, for all $i \in [r]$, $\mathcal{B}_i \in GDD^+_{m,n}$ and $\mathcal{B}_i \in \mathbb{D}_{m,n}$ (as multiplying by positive numbers does not affect the sparse structure of the tensors $\mathcal{C}_i \in \mathbb{D}_{m,n}$, $i \in [r]$). For the if statement, note that if $\mathcal{A} = \sum_{i=1}^{r} \mathcal{B}_i$ with $\mathcal{B}_i \in \mathbb{D}_{m,n} \cap GDD^+_{m,n}$ for all $i \in [r]$, then, from Proposition 10, we have $\mathcal{A} \in GDD^+_{m,n}$. □

The matrix version of Theorem 11 has been presented in [1, 6].

Lemma 12 ([1], Lemma 3.8). For $n \in \mathbb{N}$, a matrix $A \in \mathbb{S}_{2,n}$ is a $GDD^+$ matrix if and only if $A = \sum_i M_i$, where each $M_i \in \mathbb{S}_{2,n}$ is a positive semidefinite matrix whose nonzero entries lie in a single $2 \times 2$ principal submatrix.

Next, we characterize the tensors in $\mathbb{D}_{m,n} \cap GDD^+_{m,n}$ that appear in Theorem 11.

Theorem 13. Let $m, n \in \mathbb{N}$, $(i_1, \ldots, i_m) \in \mathbb{D}^m_n \cup \mathbb{F}^m_n$, and a tensor $\mathcal{B} = (b_{p_1 \ldots p_m}) \in \mathbb{D}^{i_1 \ldots i_m}_{m,n}$ be given. Then,

(i) if $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, $\mathcal{B} \in GDD^+_{m,n}$ if and only if its entries satisfy
$$\prod_{k=1}^{l} b_{j_k j_k \ldots j_k}^{\alpha_k} \ge c\, |b_{i_1 \ldots i_m}|^m, \tag{12}$$
where $c = \prod_{k=1}^{l} \binom{m-1}{\alpha - e_k}^{\alpha_k}$ and $((j_1, \ldots, j_l), \alpha = (\alpha_1, \ldots, \alpha_l))$ is the tight pair associated with $(i_1, \ldots, i_m)$, and
$$b_{pp\ldots p} \ge 0, \quad \forall (p p \ldots p) \in \mathbb{Q}_{i_1 \ldots i_m}. \tag{13}$$

(ii) If $(i_1, \ldots, i_m) \in \mathbb{F}^m_n$, $\mathcal{B} \in GDD^+_{m,n}$ if and only if $\mathcal{B}$ is a diagonal tensor satisfying $b_{i_1 \ldots i_m} \ge 0$.
Proof. Let $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$ be given, and denote $((j_1, \ldots, j_l), \alpha = (\alpha_1, \ldots, \alpha_l))$ as the tight pair associated with $(i_1, \ldots, i_m)$. Let $\mathcal{B} \in \mathbb{D}^{i_1 \ldots i_m}_{m,n}$. Then all the off-diagonal elements of $\mathcal{B}$ are zero except for the elements $b_{p_1 \ldots p_m}$ with $(p_1, \ldots, p_m) \in \mathbb{P}_{i_1 \ldots i_m}$. Then, using Proposition 2, it follows that $\mathcal{B} \in GDD^+_{m,n}$ if and only if its entries satisfy (13) and
$$b_{j_k j_k \ldots j_k}\, d_{j_k}^m \ge \binom{m-1}{\alpha - e_k} |b_{i_1 \ldots i_m}|\, d_{i_1} d_{i_2} \cdots d_{i_m} \tag{14}$$
for all $k \in [l]$ and some $d_{j_k} > 0$, $k \in [l]$, after using in (7) the sparsity pattern and symmetry of $\mathcal{B}$, and the fact that the number of equal summands on the right-hand side of (7) is in this case $\binom{m-1}{\alpha - e_k}$.

Now note that if (13) and (14) hold, then (13) and
$$b_{j_k j_k \ldots j_k}^{\alpha_k}\, d_{j_k}^{m\alpha_k} \ge \binom{m-1}{\alpha - e_k}^{\alpha_k} |b_{i_1 \ldots i_m}|^{\alpha_k}\, d_{i_1}^{\alpha_k} d_{i_2}^{\alpha_k} \cdots d_{i_m}^{\alpha_k} \tag{15}$$
hold for all $k \in [l]$ and some $d_{j_k} > 0$, $k \in [l]$, since (15) is obtained by taking the $\alpha_k$th power on both sides of (14), whose (multiplicative) terms are all nonnegative. Given that both the left-hand side and the right-hand side of (15) are nonnegative, it follows, after multiplying the left-hand sides and the right-hand sides of (15) for all $k \in [l]$, and using the fact that $\sum_k \alpha_k = m$, that (13) and (15) imply (13) and
$$\prod_{k=1}^{l} \big( b_{j_k j_k \ldots j_k}^{\alpha_k}\, d_{j_k}^{m\alpha_k} \big) \ge \Big( \prod_{k=1}^{l} \binom{m-1}{\alpha - e_k}^{\alpha_k} \Big) |b_{i_1 \ldots i_m}|^m (d_{i_1} d_{i_2} \cdots d_{i_m})^m \tag{16}$$
for some $d_{j_k} > 0$, $k \in [l]$. In turn, (16) is equivalent to (12), with $c := \prod_{k=1}^{l} \binom{m-1}{\alpha - e_k}^{\alpha_k}$, after noticing that from the definition of the tight pair (9) it follows that
$$\prod_{k=1}^{l} d_{j_k}^{\alpha_k} = d_{i_1} d_{i_2} \cdots d_{i_m}. \tag{17}$$

Now, to complete the proof, we show that (13) and (12) imply (13) and (14) (i.e., that $\mathcal{B}$ is a $GDD^+_{m,n}$ tensor). First note that if $b_{j_k j_k \ldots j_k} = 0$ for some $k \in [l]$, then (12) implies that $b_{i_1 \ldots i_m} = 0$. Thus, in this case, given (13) and the fact that $d_{j_k} > 0$ for all $k \in [l]$, it follows that (14) is satisfied for all $k \in [l]$. Moreover, in the case where $b_{i_1 \ldots i_m} = 0$, condition (14) follows from (13), given the fact that $d_{j_k} > 0$ for all $k \in [l]$. Thus, it is enough to consider the case in which $b_{j_k j_k \ldots j_k} > 0$ for all $k \in [l]$ and $b_{i_1 \ldots i_m} \neq 0$. In this case, we can write
$$d_{j_k} = z\, \sqrt[m]{\binom{m-1}{\alpha - e_k} \Big/ b_{j_k j_k \ldots j_k}} \tag{18}$$
for some $z > 0$ and all $k \in [l]$. Thus, for any $k \in [l]$, it follows that
$$|b_{i_1 \ldots i_m}|\, d_{i_1} \cdots d_{i_m} = z^m\, \sqrt[m]{\frac{c\, |b_{i_1 \ldots i_m}|^m}{\prod_{k=1}^{l} b_{j_k j_k \ldots j_k}^{\alpha_k}}} \le z^m = \frac{b_{j_k j_k \ldots j_k}\, d_{j_k}^m}{\binom{m-1}{\alpha - e_k}}, \tag{19}$$
where the first equality follows by using (17), (18), and the definition of $c$; the inequality follows from (12); and the last equality follows by using (18) again. After noticing that (19) is equivalent to (14), it then follows that (13) and (12) imply (13) and (14); that is, $\mathcal{B} \in GDD^+_{m,n}$.

If $(i_1, \ldots, i_m) \in \mathbb{F}^m_n$ and $\mathcal{B} = (b_{p_1 \ldots p_m}) \in \mathbb{D}^{i_1 \ldots i_m}_{m,n}$, it follows from the definition of $\mathbb{D}^{i_1 \ldots i_m}_{m,n}$ (i.e., (8)) that $\mathcal{B}$ is a diagonal tensor whose only nonzero entry is $b_{i_1 \ldots i_m}$. Thus, $\mathcal{B}$ is a $GDD^+_{m,n}$ tensor if and only if $\mathcal{B}$ is a diagonal tensor satisfying $b_{i_1 \ldots i_m} \ge 0$. □

Theorem 13 readily provides the following necessary and sufficient conditions for $\mathcal{A} \in \mathbb{S}_{m,n}$ to be an $H^+$-tensor (or, equivalently, a $GDD^+$ tensor).

Corollary 14. Let $m, n \in \mathbb{N}$. Then $\mathcal{A} = (a_{p_1 p_2 \ldots p_m}) \in \mathbb{S}_{m,n}$ is a $GDD^+$ tensor if and only if there exist $b^{\vec{i}}_j \ge 0$ for all $\vec{i} = (i_1, \ldots, i_m) \in \mathbb{D}^m_n$, $j \in \vec{i}$, satisfying:

(i) For $\vec{i} \in \mathbb{D}^m_n$,
$$\prod_{k=1}^{l_{\vec{i}}} \big( b^{\vec{i}}_{j_k} \big)^{\alpha^{\vec{i}}_k} \ge c(\vec{i})\, |a_{\vec{i}}|^m, \tag{20}$$
where $c(\vec{i}) = \prod_{k=1}^{l_{\vec{i}}} \binom{m-1}{\alpha^{\vec{i}} - e_k}^{\alpha^{\vec{i}}_k}$ and $((j^{\vec{i}}_1, \ldots, j^{\vec{i}}_{l_{\vec{i}}}), \alpha^{\vec{i}} = (\alpha^{\vec{i}}_1, \ldots, \alpha^{\vec{i}}_{l_{\vec{i}}}))$ is the tight pair associated with $\vec{i}$.

(ii) For $j \in [n]$,
$$a_{jj\ldots j} \ge \sum_{\vec{i} \in \mathbb{D}^m_n :\, j \in \vec{i}} b^{\vec{i}}_j. \tag{21}$$

Proof. Let $m, n \in \mathbb{N}$. From Theorem 11, $\mathcal{A} = (a_{p_1 p_2 \ldots p_m}) \in \mathbb{S}_{m,n}$ is a $GDD^+$ tensor if and only if
$$\mathcal{A} = \sum_{\vec{i} \in \mathbb{D}^m_n \cup \mathbb{F}^m_n} \mathcal{B}^{\vec{i}} \tag{22}$$
and, for $\vec{i} \in \mathbb{D}^m_n \cup \mathbb{F}^m_n$, $\mathcal{B}^{\vec{i}} \in \mathbb{D}_{m,n} \cap GDD^+_{m,n}$ satisfies conditions (i) and (ii) in Theorem 13. Note that from the sparse structure of the tensors $\mathcal{B}^{\vec{i}}$ used in (22), it follows that for any $j \in [n]$,
$$a_{jj\ldots j} = \sum_{\vec{i} \in \mathbb{D}^m_n :\, (jj\ldots j) \in \mathbb{Q}_{\vec{i}}} b^{\vec{i}}_{jj\ldots j} + b^{jj\ldots j}_{jj\ldots j}, \tag{23}$$
and for any $\vec{i} \in \mathbb{D}^m_n$,
$$a_{\vec{i}} = b^{\vec{i}}_{\vec{i}}. \tag{24}$$
From Theorem 13(i) and (24), it follows that $c(\vec{i})\, |a_{\vec{i}}|^m = c(\vec{i})\, |b^{\vec{i}}_{\vec{i}}|^m \le \prod_{k=1}^{l_{\vec{i}}} (b^{\vec{i}}_{j_k j_k \ldots j_k})^{\alpha^{\vec{i}}_k}$, with $c(\vec{i})$ and the tight pair as above, and $b^{\vec{i}}_{pp\ldots p} \ge 0$ for all $p \in \mathbb{Q}_{\vec{i}}$. The statement then follows from this and (23), after noticing that, from Theorem 13(ii), $b^{jj\ldots j}_{jj\ldots j} \ge 0$ for all $j \in [n]$, and after simplifying notation to let $b^{\vec{i}}_j := b^{\vec{i}}_{jj\ldots j}$ for any $\vec{i} \in \mathbb{D}^m_n$ with $(jj\ldots j) \in \mathbb{Q}_{\vec{i}}$; that is, for any $\vec{i} \in \mathbb{D}^m_n$ with $j \in \vec{i}$. □

Now we provide an example to illustrate the results in Theorem 11 and Corollary 14.

Example 1. Consider a symmetric tensor $\mathcal{A} = [\mathcal{A}(1,1,:,:),\ \mathcal{A}(1,2,:,:);\ \mathcal{A}(2,1,:,:),\ \mathcal{A}(2,2,:,:)] \in \mathbb{S}_{4,2}$ that can be written as
$$\mathcal{A} = \lambda_1 \mathcal{V}^{0,1111} + \lambda_2 \mathcal{V}^{0,2222} + \mathcal{B}^{(1112)} + \mathcal{B}^{(1122)} + \mathcal{B}^{(1222)},$$
where $\lambda_1, \lambda_2 \ge 0$, $\mathcal{B}^{(1112)} = \mathcal{V}^{1,1112} D_1 D_1 D_1 D_1$, $\mathcal{B}^{(1122)} = \mathcal{V}^{1,1122} D_2 D_2 D_2 D_2$, and $\mathcal{B}^{(1222)} = \mathcal{V}^{1,1222} D_3 D_3 D_3 D_3$ for positive diagonal matrices $D_1, D_2, D_3$. The resulting values $b^{\vec{i}}_j = \mathcal{B}^{\vec{i}}_{jjjj} \ge 0$, $j \in \vec{i}$, for $\vec{i} \in \mathbb{D}^4_2$ satisfy (21) and (20). As a result, $\mathcal{A}$ is a symmetric $H^+$-tensor ($GDD^+$ tensor). On the other hand, for a suitable positive diagonal matrix $D$, the scaled tensor $\bar{\mathcal{A}} = \mathcal{A}DDDD$ is a $DD^+$ tensor.

3.1. Checking membership with power cone optimization

Corollary 14 readily implies that membership in the set of $H^+$-tensors can be tested using tractable conic optimization techniques and, more precisely, the power cone [see, e.g., 9, 15].
To illustrate this, let us first introduce the high-dimensional power cone.

Definition 3 (High-dimensional power cone [9, Sec. 4.1.2]). For any $\alpha \in \mathbb{R}^m_+$ such that $e^\top\alpha = 1$, the high-dimensional power cone is defined by
$$K^{(m)}_{\alpha} = \{(x, z) \in \mathbb{R}^m_+ \times \mathbb{R} : x_1^{\alpha_1} \cdots x_m^{\alpha_m} \ge |z|\}. \tag{25}$$

Now, for any tensor $\mathcal{A} \in \mathbb{S}_{m,n}$, let
$$F(\mathcal{A}) = \Big\{ d^{\vec{i}}_j \in \mathbb{R},\ \vec{i} \in \mathbb{D}^m_n,\ j \in \vec{i} \ :\ a_{jj\ldots j} \ge \sum_{\vec{i} \in \mathbb{D}^m_n :\, j \in \vec{i}} d^{\vec{i}}_j,\ \forall j \in [n];\quad \big( d^{\vec{i}}_{i_1}, \ldots, d^{\vec{i}}_{i_m},\, c(\vec{i})^{\frac{1}{m}} a_{\vec{i}} \big) \in K^{(m)}_{\frac{1}{m}e},\ \forall \vec{i} \in \mathbb{D}^m_n \Big\}. \tag{26}$$
The next corollary then follows from Definition 3 and Corollary 14.

Corollary 15. Let $m, n \in \mathbb{N}$. Then $\mathcal{A} = (a_{p_1 p_2 \ldots p_m}) \in \mathbb{S}_{m,n}$ is a $GDD^+$ tensor if and only if $F(\mathcal{A}) \neq \emptyset$.

Furthermore, the condition $F(\mathcal{A}) \neq \emptyset$ in Corollary 15 can be checked in polynomial time using appropriate interior point methods [see, e.g., 40]. To show this, we make use of the power cone, which is a lower-dimensional version of the high-dimensional power cone introduced in Definition 3. Namely, for any $\alpha \in [0, 1]$, $K_{\alpha} := K_{\alpha, 1-\alpha} = \{(x, z) \in \mathbb{R}^2_+ \times \mathbb{R} : x_1^{\alpha} x_2^{1-\alpha} \ge |z|\}$ [see, e.g., 21, 35, 41]. As shown in [9, eq. (4.3), Sec. 4.1.2], the high-dimensional power cone $K^{(m)}_{\alpha}$ can be decomposed into $m - 1$ three-dimensional power cones. In particular,
$$F(\mathcal{A}) = \Big\{ d^{\vec{i}}_j \in \mathbb{R},\ \vec{i} \in \mathbb{D}^m_n,\ j \in \vec{i};\ v^{\vec{i}}_l \in \mathbb{R}_+,\ \vec{i} \in \mathbb{D}^m_n,\ l \in [m-2]\ :\ a_{jj\ldots j} \ge \sum_{\vec{i} \in \mathbb{D}^m_n :\, j \in \vec{i}} d^{\vec{i}}_j,\ \forall j \in [n];$$
$$\big( d^{\vec{i}}_{i_1}, v^{\vec{i}}_1, c(\vec{i})^{\frac{1}{m}} a_{\vec{i}} \big) \in K_{\frac{1}{m}},\ \forall \vec{i} \in \mathbb{D}^m_n;\quad \big( d^{\vec{i}}_{i_l}, v^{\vec{i}}_l, v^{\vec{i}}_{l-1} \big) \in K_{\frac{1}{m-l+1}},\ \forall \vec{i} \in \mathbb{D}^m_n,\ l = 2, \ldots, m-2;$$
$$\big( d^{\vec{i}}_{i_{m-1}}, d^{\vec{i}}_{i_m}, v^{\vec{i}}_{m-2} \big) \in K_{\frac{1}{2}},\ \forall \vec{i} \in \mathbb{D}^m_n \Big\}. \tag{27}$$
The relevance of introducing the power cone in (27) is that [9, 35, 41] provide different self-concordant barriers for the power cone. In short, this means that for any $\mathcal{A} \in \mathbb{S}_{m,n}$, the nonsymmetric conic feasibility system defined by (27) can be solved in polynomial time using a primal-dual predictor-corrector method [48]. The reference to nonsymmetric stems from the fact that the power cone is not a symmetric cone when $\alpha \neq \frac{1}{2}$ [15, 44].
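In practice, a feasible point of (26)-(27) is found with a conic solver (e.g., MOSEK [4]). The following minimal Python sketch (ours, not a solver) only verifies a given candidate certificate against conditions (20)-(21) of Corollary 14, using 0-based indices.

```python
import itertools
from collections import Counter
from math import factorial, prod

def tight_pair(idx):
    """Tight pair of an index: distinct entries j_1 < ... < j_l and
    their multiplicities alpha_1, ..., alpha_l (cf. (9))."""
    cnt = Counter(idx)
    js = sorted(cnt)
    return js, [cnt[j] for j in js]

def multinom(total, parts):
    return factorial(total) // prod(factorial(p) for p in parts)

def c_const(idx):
    """c(i) = prod_k multinom(m-1; alpha - e_k)^{alpha_k} (Theorem 13)."""
    _, alpha = tight_pair(idx)
    m, out = len(idx), 1
    for k in range(len(alpha)):
        reduced = alpha.copy()
        reduced[k] -= 1
        out *= multinom(m - 1, reduced) ** alpha[k]
    return out

def certifies_gdd_plus(A, b, tol=1e-9):
    """Verify that b = {(idx, j): value} satisfies (20)-(21) of
    Corollary 14 for the symmetric tensor A."""
    if any(v < -tol for v in b.values()):
        return False
    m, n = A.ndim, A.shape[0]
    used = {j: 0.0 for j in range(n)}
    for idx in itertools.combinations_with_replacement(range(n), m):
        if len(set(idx)) == 1:
            continue
        js, alpha = tight_pair(idx)
        lhs = prod(b.get((idx, j), 0.0) ** ak for j, ak in zip(js, alpha))
        if lhs + tol < c_const(idx) * abs(A[idx]) ** m:         # condition (20)
            return False
        for j in js:
            used[j] += b.get((idx, j), 0.0)
    return all(A[(j,) * m] + tol >= used[j] for j in range(n))  # condition (21)
```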
Theorem 16. For $m, n \in \mathbb{N}$, checking whether a tensor in $\mathbb{S}_{m,n}$ is an $H^+$-tensor ($GDD^+$ tensor) is equivalent to solving a power cone optimization problem of size polynomial in $n$ for fixed $m$.

Proof. The result follows from Corollary 15, equation (27), and the fact that $|\mathbb{D}^m_n| = \binom{n+m-1}{m} - n$. □

For a detailed discussion of the properties of, and optimization over, the power cone, we direct the reader to [4, 9].

4. Computing the minimum $H$-eigenvalue of $M$-tensors

The problem of obtaining bounds on the minimum $H$-eigenvalue of $M$-tensors has received significant attention in the literature [14, 18, 24, 43]. This is due to the important role that $M$-tensors play in a wide range of interesting applications [see, 18, and the references therein]. However, these bounds are loose [see, e.g., 18, Table 1] and even expensive to compute [see, e.g., 18, Table 2]. Next, we show that the characterization in Corollary 15 can be applied to obtain the minimum $H$-eigenvalue of $M$-tensors in polynomial time. For that purpose, we first introduce the following results.

Lemma 17 ([50], Lemma 2.2). For $m, n \in \mathbb{N}$, let $\mathcal{A} \in \mathbb{T}_{m,n}$. Suppose that $\mathcal{B} = a(\mathcal{A} + b\mathcal{I})$, where $a$ and $b$ are two real numbers. Then $\mu$ is an eigenvalue ($H$-eigenvalue) of $\mathcal{B}$ if and only if $\mu = a(\lambda + b)$ and $\lambda$ is an eigenvalue ($H$-eigenvalue) of $\mathcal{A}$.

Lemma 18. For $m, n \in \mathbb{N}$, if $\mathcal{A} = s\mathcal{I} - \mathcal{D} \in \mathbb{S}_{m,n}$, where $\mathcal{D}$ is a nonnegative tensor and $s$ is a scalar, then $s - \rho(\mathcal{D})$ is the minimum $H$-eigenvalue of $\mathcal{A}$.

Proof. First, from Theorem 6, it follows that $\rho(\mathcal{D})$ is an $H$-eigenvalue of $\mathcal{D}$. Then, from Lemma 17, $s - \rho(\mathcal{D})$ is an $H$-eigenvalue of $\mathcal{A}$. Now, assume that $\lambda$ is an $H$-eigenvalue of $\mathcal{A}$. Then $s - \lambda$ is an $H$-eigenvalue of $\mathcal{D}$. Thus, $\rho(\mathcal{D}) \ge |s - \lambda| \ge s - \lambda$; that is, $\lambda \ge s - \rho(\mathcal{D})$. Thus, $s - \rho(\mathcal{D})$ is the minimum $H$-eigenvalue of $\mathcal{A}$. □

In what follows, for any $\mathcal{A} \in \mathbb{S}_{m,n}$, let $\lambda_{min}(\mathcal{A})$ denote the smallest $H$-eigenvalue of $\mathcal{A}$.

Proposition 19. For $m, n \in \mathbb{N}$, if $\mathcal{A} \in \mathbb{S}_{m,n}$ is an $M$-tensor, then for any $\lambda \le \lambda_{min}(\mathcal{A})$, $\mathcal{A} - \lambda\mathcal{I}$ is also an $M$-tensor. Moreover, for any $\lambda > \lambda_{min}(\mathcal{A})$, $\mathcal{A} - \lambda\mathcal{I}$ is not an $M$-tensor.

Proof. Since $\mathcal{A} \in \mathbb{S}_{m,n}$ is an $M$-tensor, there exist a nonnegative tensor $\mathcal{D}$ and a nonnegative scalar $s \ge \rho(\mathcal{D})$ such that $\mathcal{A} = s\mathcal{I} - \mathcal{D}$. Then, for any $\lambda \le \lambda_{min}(\mathcal{A})$, $\mathcal{A} - \lambda\mathcal{I} = (s - \lambda)\mathcal{I} - \mathcal{D}$. From Lemma 18, $\lambda_{min}(\mathcal{A}) = s - \rho(\mathcal{D})$. Thus $s - \lambda - \rho(\mathcal{D}) \ge s - \lambda_{min}(\mathcal{A}) - \rho(\mathcal{D}) = 0$. Furthermore, $s - \lambda \ge \rho(\mathcal{D}) \ge 0$. As a result, $\mathcal{A} - \lambda\mathcal{I}$ is an $M$-tensor. Now, for $\lambda > \lambda_{min}(\mathcal{A})$, assume that $\mathcal{A} - \lambda\mathcal{I}$ is an $M$-tensor. Then there exist a nonnegative tensor $\tilde{\mathcal{D}}$ and a nonnegative scalar $\tilde{s} \ge \rho(\tilde{\mathcal{D}})$ such that $\mathcal{A} - \lambda\mathcal{I} = \tilde{s}\mathcal{I} - \tilde{\mathcal{D}}$. Thus $\mathcal{A} = (\lambda + \tilde{s})\mathcal{I} - \tilde{\mathcal{D}}$. From Lemma 18, $\lambda_{min}(\mathcal{A}) = (\lambda + \tilde{s}) - \rho(\tilde{\mathcal{D}}) \ge \lambda$, which contradicts the condition $\lambda > \lambda_{min}(\mathcal{A})$. Thus, $\mathcal{A} - \lambda\mathcal{I}$ is not an $M$-tensor. □

Note that from Corollary 15 and the definition of $H^+$-tensors in terms of the comparison tensor (cf. (1)), one obtains the following characterization of $M$-tensors.

Corollary 20. Let $m, n \in \mathbb{N}$. Then $\mathcal{A} = (a_{i_1 i_2 \ldots i_m}) \in \mathbb{S}_{m,n}$ is an $M$-tensor if and only if $a_{i_1 i_2 \ldots i_m} \le 0$ for all $(i_1, i_2, \ldots, i_m) \in \mathbb{D}^m_n$ and $F(\mathcal{A}) \neq \emptyset$.

Proposition 19 and the characterization of $M$-tensors in Corollary 20 and (27) readily provide a way to compute the minimum $H$-eigenvalue of $M$-tensors in polynomial time.

Corollary 21. For $m, n \in \mathbb{N}$, if $\mathcal{A} \in \mathbb{S}_{m,n}$ is an $M$-tensor, then
$$\lambda_{min}(\mathcal{A}) = \max \{ \lambda : F(\mathcal{A} - \lambda\mathcal{I}) \neq \emptyset,\ a_{i_1 i_2 \ldots i_m} \le 0,\ \forall (i_1, i_2, \ldots, i_m) \in \mathbb{D}^m_n \}. \tag{28}$$

Proof. From Proposition 19, it follows that $\lambda_{min}(\mathcal{A}) = \max\{\lambda : \mathcal{A} - \lambda\mathcal{I} \text{ is an } M\text{-tensor}\}$. The result then follows by using Corollary 20 to characterize membership in the set of $M$-tensors. □
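Corollary 21 computes $\lambda_{min}(\mathcal{A})$ by solving a single power cone problem. Alternatively, the monotonicity in Proposition 19 also justifies a simple bisection on $\lambda$ given any membership oracle for $M$-tensors (e.g., one that solves the feasibility system $F(\mathcal{A} - \lambda\mathcal{I})$ of Corollary 20 with a conic solver). The following schematic Python sketch (ours, with a placeholder oracle) illustrates this alternative.

```python
def lambda_min_by_bisection(is_m_tensor, lam_lo, lam_hi, tol=1e-8):
    """Proposition 19: lambda <= lambda_min(A) iff A - lambda*I is an
    M-tensor, so lambda_min(A) can be located by bisection.
    `is_m_tensor(lam)` is a placeholder oracle deciding whether
    A - lam*I is an M-tensor (e.g., via Corollary 20)."""
    assert is_m_tensor(lam_lo) and not is_m_tensor(lam_hi)
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if is_m_tensor(mid):
            lam_lo = mid      # mid <= lambda_min(A), move up
        else:
            lam_hi = mid      # mid > lambda_min(A), move down
    return lam_lo
```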
Equation (27) and the discussion that follows it mean that one can compute the minimum $H$-eigenvalue of an $M$-tensor by solving the power cone optimization problem (28). To show the performance of the proposed method, we apply it to obtain the minimum $H$-eigenvalue of the $M$-tensors considered in Example 3.1 and Example 3.2 in [18]. Specifically, in Table 1, we compare the best upper and lower bounds for the minimum $H$-eigenvalue of these $M$-tensors obtained in [18] with the value of the minimum $H$-eigenvalue of these $M$-tensors obtained using (28).

M-tensor            m   n   best lower bound [18]   value (28)   best upper bound [18]
Example 3.1 [18]    3   3   3.0738                  5.8046       6.8390
Example 3.2 [18]    3   3   4.0768                  7.7442       9.0313

Table 1: Minimum H-eigenvalues of M-tensors.

The results in Table 1 show that neither the lower nor the upper bounds are particularly tight in comparison with the actual minimum $H$-eigenvalues. All the tests in Table 1 were implemented in MATLAB using the Systems Polynomial Optimization Toolbox (SPOT) [34] and the solver MOSEK [4], on an Intel Core i7-4770HQ computer with 2.20 GHz frequency and 16 GB RAM memory.

5. Application to polynomial optimization

Let us begin by introducing some additional notation. Let $\mathbb{R}_m[x] := \mathbb{R}_m[x_1, \ldots, x_n]$ be the set of polynomials in $n$ variables with real coefficients of degree at most $m$. Denote $\Sigma_m[x] := \Sigma_m[x_1, \ldots, x_n]$ as the cone of SOS polynomials in $n$ variables with real coefficients of degree at most $2m$. Let $P_m[x] := P_m[x_1, \ldots, x_n]$ be the cone of nonnegative polynomials in $n$ variables with real coefficients of degree at most $m$. For ease of exposition, in what follows we work mainly with homogeneous polynomials; the results presented for homogeneous polynomials can be extended to general polynomials by setting one of the homogeneous polynomial variables to 1. For that purpose, let $H_m := H_m[x_1, \ldots, x_n]$ be the set of homogeneous polynomials in $n$ variables with real coefficients of degree $m$.

If $\mathcal{A}$ is a symmetric $H^+$-tensor of even order, then $\mathcal{A}x^m$ is an SOS by Theorem 1. This fact can be used to formulate a restricted (i.e., with a potentially smaller feasible set) optimization problem that approximately solves global optimization problems in polynomial time. To illustrate this, let us first state the well-known characterization of SOS polynomials using a Gram matrix [11].

Theorem 22 ([see, e.g., 1, Thm. 2.1]). For $m \in \mathbb{N}$ and $f(x) \in \mathbb{R}_{2m}[x]$, denote $z(x)$ as the vector of all monomials of degree less than or equal to $m$. Then $f(x)$ is an SOS if and only if $f = z(x)^T Q z(x)$, where $Q \succeq 0$.

In Theorem 22, the matrix $Q$ satisfying $f = z(x)^T Q z(x)$ is called the Gram matrix of $f$ [see, e.g., 11]. From Theorem 22, optimization over SOS polynomials is equivalent to a semidefinite optimization (SDO) problem, which is solvable in polynomial time [see, e.g., 48]. However, in practice it is prohibitively time consuming to solve SDOs when the involved polynomial(s) is of high degree and/or has a large number of variables. Given that optimization over SOS polynomials can be used to solve polynomial optimization problems [see, e.g., 3, 5], different methods have been proposed to obtain good feasible solutions of SOS optimization problems efficiently [see, e.g., 1, 13, 37, 47, 52]. The authors in [1] propose two subclasses of SOS: DSOS and SDSOS, which are constructed via $DD^+$ matrices and $GDD^+$ matrices, respectively (in their work, $GDD^+$ matrices are called scaled diagonally dominant (SDD) matrices).

Definition 4 ([1], Definition 3.1). A polynomial $p(x)$ is a diagonally dominant sum of squares (DSOS) if it can be written as
$$p(x) = \sum_i \gamma_i\, m_i(x)^2 + \sum_{i,j} \beta^+_{ij} (m_i(x) + m_j(x))^2 + \sum_{i,j} \beta^-_{ij} (m_i(x) - m_j(x))^2 \tag{29}$$
for some monomials $m_i(x), m_j(x)$ and some nonnegative scalars $\gamma_i, \beta^+_{ij}, \beta^-_{ij}$. For $m, n \in \mathbb{N}$, the set of polynomials in $n$ variables and degree $m$ that are DSOS is denoted by $DSOS_{m,n}$.

Definition 5 ([1], Definition 3.2). A polynomial $p(x)$ is a scaled diagonally dominant sum of squares (SDSOS) if it can be written as
$$p(x) = \sum_i \gamma_i\, m_i(x)^2 + \sum_{i \neq j} \big( \hat{\beta}^+_{ij} m_i(x) + \tilde{\beta}^+_{ij} m_j(x) \big)^2 + \sum_{i \neq j} \big( \hat{\beta}^-_{ij} m_i(x) - \tilde{\beta}^-_{ij} m_j(x) \big)^2 \tag{30}$$
for some monomials $m_i(x), m_j(x)$ and some nonnegative scalars $\gamma_i, \hat{\beta}^+_{ij}, \tilde{\beta}^+_{ij}, \hat{\beta}^-_{ij}, \tilde{\beta}^-_{ij}$. For $m, n \in \mathbb{N}$, the set of polynomials in $n$ variables and degree $m$ that are SDSOS is denoted by $SDSOS_{m,n}$.

In [1, Theorem 3.4] (resp., [1, Theorem 3.6]), the authors prove that $f \in DSOS_{m,n}$ (resp., $SDSOS_{m,n}$) if and only if there is a $DD^+$ (resp., $GDD^+$) matrix $Q$ such that $f = z(x)^T Q z(x)$, where $z(x)$ is the vector of all monomials of degree less than or equal to $m$.

Similar to their results, we present the polynomials that are induced by $DD^+$ and $GDD^+$ tensors. Let $m, n \in \mathbb{N}$, $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, and $((j_1, \ldots, j_l), \alpha := (\alpha_1, \ldots, \alpha_l))$ be the tight pair associated with $(i_1, \ldots, i_m)$.
Define the polynomials
$$f^+_{i_1 \ldots i_m}(x) = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} x_{j_k}^m + \binom{m}{\alpha} x_{i_1} \cdots x_{i_m}, \tag{31}$$
$$f^-_{i_1 \ldots i_m}(x) = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} x_{j_k}^m - \binom{m}{\alpha} x_{i_1} \cdots x_{i_m}, \tag{32}$$
$$g^+_{i_1 \ldots i_m}(x) = \sum_{k=1}^{l} \beta^{(+k)}_{i_1 \ldots i_m} x_{j_k}^m + \binom{m}{\alpha} m(i_1, \ldots, i_m)^+\, x_{i_1} \cdots x_{i_m}, \tag{33}$$
where $\beta^{(+k)}_{i_1 \ldots i_m}$, $k \in [l]$, are nonnegative scalars and
$$m(i_1, \ldots, i_m)^+ = \sqrt[m]{\frac{\prod_{k=1}^{l} \big( \beta^{(+k)}_{i_1 \ldots i_m} \big)^{\alpha_k}}{\prod_{k=1}^{l} \binom{m-1}{\alpha - e_k}^{\alpha_k}}};$$
$$g^-_{i_1 \ldots i_m}(x) = \sum_{k=1}^{l} \beta^{(-k)}_{i_1 \ldots i_m} x_{j_k}^m - \binom{m}{\alpha} m(i_1, \ldots, i_m)^-\, x_{i_1} \cdots x_{i_m}, \tag{34}$$
where $\beta^{(-k)}_{i_1 \ldots i_m}$, $k \in [l]$, are nonnegative scalars and
$$m(i_1, \ldots, i_m)^- = \sqrt[m]{\frac{\prod_{k=1}^{l} \big( \beta^{(-k)}_{i_1 \ldots i_m} \big)^{\alpha_k}}{\prod_{k=1}^{l} \binom{m-1}{\alpha - e_k}^{\alpha_k}}}.$$
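As an illustration (ours), take $m = 4$, $n = 2$, and $(i_1, i_2, i_3, i_4) = (1,1,1,2)$, whose tight pair is $((1,2), (3,1))$. Then (31) gives
$$f^+_{1112}(x) = \binom{3}{(2,1)} x_1^4 + \binom{3}{(3,0)} x_2^4 + \binom{4}{(3,1)} x_1^3 x_2 = 3x_1^4 + x_2^4 + 4x_1^3 x_2,$$
which is nonnegative, since the weighted AM-GM inequality gives $\tfrac{3}{4}x_1^4 + \tfrac{1}{4}x_2^4 \ge |x_1|^3 |x_2| \ge -x_1^3 x_2$. The polynomials $g^{\pm}$ rebalance the coefficients of the $x_{j_k}^m$ terms while scaling the mixed term so that exactly this AM-GM certificate is preserved.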
Definition 6. For $m, n \in \mathbb{N}$, a polynomial $p(x)$ is a diagonally dominant tensor homogeneous (DDTH) polynomial in $n$ variables and degree $m$ if it can be written as
$$p(x) = \sum_{i \in [n]} \gamma_i x_i^m + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \beta^+_{i_1 \ldots i_m} f^+_{i_1 \ldots i_m}(x) + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \beta^-_{i_1 \ldots i_m} f^-_{i_1 \ldots i_m}(x) \tag{35}$$
for some nonnegative scalars $\gamma_i, \beta^+_{i_1 \ldots i_m}, \beta^-_{i_1 \ldots i_m}$. The set of DDTH polynomials in $n$ variables and degree $m$ is denoted as $DDTH_{m,n}$.

The polynomials defined in Definition 6 and Definition 4 are closely related.

Proposition 23. $DDTH_{2,n} = DSOS_{2,n} \cap H_2$ for $n \in \mathbb{N}$.

Proof. Let $n \in \mathbb{N}$ and $p \in DDTH_{2,n}$. Then it follows from (35), (31), and (32) that
$$p(x) = \sum_{i \in [n]} \gamma_i x_i^2 + \sum_{i,j \in [n],\, i \neq j} \beta^+_{ij} (x_i + x_j)^2 + \sum_{i,j \in [n],\, i \neq j} \beta^-_{ij} (x_i - x_j)^2 \tag{36}$$
for some nonnegative $\gamma_i$, $i \in [n]$, and $\beta^+_{ij}, \beta^-_{ij}$, $i, j \in [n]$, $i \neq j$. Comparing (36) with (29), it is clear that $p \in DSOS_{2,n} \cap H_2$. Next, notice that for $p \in DSOS_{2,n} \cap H_2$ to hold, the monomials in (29) must be given by $m_i(x) = x_i$, $i \in [n]$. Thus, $p \in DSOS_{2,n} \cap H_2$ implies (36), which completes the proof. □

Definition 7. For $m, n \in \mathbb{N}$, a polynomial $p(x)$ is a generalized diagonally dominant tensor homogeneous (GDDTH) polynomial in $n$ variables and degree $m$ if it can be written as
$$p(x) = \sum_{i \in [n]} \gamma_i x_i^m + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} g^+_{i_1 \ldots i_m}(x) + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} g^-_{i_1 \ldots i_m}(x) \tag{37}$$
for some nonnegative scalars $\gamma_i$, $i \in [n]$. The set of GDDTH polynomials in $n$ variables and degree $m$ is denoted as $GDDTH_{m,n}$.

Similarly, the polynomials in Definition 7 and Definition 5 are closely related.

Proposition 24. $GDDTH_{2,n} = SDSOS_{2,n} \cap H_2$ for $n \in \mathbb{N}$.

Proof. Let $n \in \mathbb{N}$ and $p \in GDDTH_{2,n}$. Then it follows from (37), (33), and (34) that
$$p(x) = \sum_{i \in [n]} \gamma_i x_i^2 + \sum_{i,j \in [n],\, i \neq j} \Big( \sqrt{\beta^{(+1)}_{ij}}\, x_i + \sqrt{\beta^{(+2)}_{ij}}\, x_j \Big)^2 + \sum_{i,j \in [n],\, i \neq j} \Big( \sqrt{\beta^{(-1)}_{ij}}\, x_i - \sqrt{\beta^{(-2)}_{ij}}\, x_j \Big)^2 \tag{38}$$
for some nonnegative $\gamma_i$, $i \in [n]$, and $\beta^{(\pm k)}_{ij}$, $i, j \in [n]$, $i \neq j$, $k \in \{1, 2\}$. Comparing (38) with (30), it is clear that $p \in SDSOS_{2,n} \cap H_2$. Next, notice that for $p \in SDSOS_{2,n} \cap H_2$ to hold, the monomials in (30) must be given by $m_i(x) = x_i$, $i \in [n]$. Thus, $p \in SDSOS_{2,n} \cap H_2$ implies (38), which completes the proof. □

As mentioned earlier, DDTH and GDDTH polynomials are induced by $DD^+$ tensors and $GDD^+$ tensors, respectively. To formally see this, first denote $\langle \cdot, \cdot \rangle_{m,n}$ as the inner product in $\mathbb{T}_{m,n}$ defined by
$$\langle \mathcal{A}, \mathcal{B} \rangle_{m,n} = \sum_{i_1, \ldots, i_m = 1}^{n} a_{i_1 \ldots i_m} b_{i_1 \ldots i_m},$$
where $\mathcal{A}, \mathcal{B} \in \mathbb{T}_{m,n}$. Notice that when $m = 2$, the inner product $\langle \cdot, \cdot \rangle_{2,n}$ is the Frobenius inner product for matrices. For any tensor $\mathcal{A} \in \mathbb{T}_{m,n}$, define its corresponding polynomial as
$$\mathcal{A}x^m = \langle \mathcal{A}, \underbrace{x \otimes \cdots \otimes x}_{m} \rangle_{m,n} = \sum_{i_1, \ldots, i_m = 1}^{n} a_{i_1 \ldots i_m} x_{i_1} \cdots x_{i_m}.$$

Proposition 25. For $m, n \in \mathbb{N}$, a polynomial $p \in DDTH_{m,n}$ if and only if there is a tensor $\mathcal{A} \in DD^+_{m,n}$ such that $p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle$.

Proof. Assume $p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle$, where $\mathcal{A} \in DD^+_{m,n}$. From (10),
$$p(x) = \sum_{i \in [n]} \gamma_i \langle \mathcal{V}^{0,ii\ldots i}, x \otimes \cdots \otimes x \rangle + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \beta^+_{i_1 \ldots i_m} \langle \mathcal{V}^{0,i_1 \ldots i_m}, x \otimes \cdots \otimes x \rangle + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \beta^-_{i_1 \ldots i_m} \langle \mathcal{V}^{1,i_1 \ldots i_m}, x \otimes \cdots \otimes x \rangle \tag{39}$$
for some nonnegative $\gamma_i, \beta^+_{i_1 \ldots i_m}, \beta^-_{i_1 \ldots i_m}$, $i \in [n]$, $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$. For $i \in [n]$,
$$\langle \mathcal{V}^{0,ii\ldots i}, x \otimes \cdots \otimes x \rangle = x_i^m. \tag{40}$$
Furthermore, for $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, it follows from Definition 2 that
$$\langle \mathcal{V}^{0,i_1 \ldots i_m}, x \otimes \cdots \otimes x \rangle = f^+_{i_1 \ldots i_m}(x) \tag{41}$$
and
$$\langle \mathcal{V}^{1,i_1 \ldots i_m}, x \otimes \cdots \otimes x \rangle = f^-_{i_1 \ldots i_m}(x). \tag{42}$$
After replacing (40), (41), (42) in (39) and comparing with (35), it follows that $p \in DDTH_{m,n}$. Similarly, if $p \in DDTH_{m,n}$, one obtains (39) after replacing (40), (41), (42) into (35), which implies that $p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle$ for some $\mathcal{A} \in DD^+_{m,n}$. □

An analogous result holds for $GDDTH_{m,n}$.

Proposition 26. For $m, n \in \mathbb{N}$, a polynomial $p \in GDDTH_{m,n}$ if and only if there is a tensor $\mathcal{A} \in GDD^+_{m,n}$ such that $p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle$.

Proof. Assume $p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle$, where $\mathcal{A} \in GDD^+_{m,n}$. Then there exists a diagonal matrix $D$ with positive diagonal entries $d_i$, $i \in [n]$, such that $\mathcal{B} = \mathcal{A}DD\cdots D$ is a $DD^+$ tensor. Thus
$$p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle = \langle \mathcal{B}D^{-1}D^{-1}\cdots D^{-1}, x \otimes \cdots \otimes x \rangle = \langle \mathcal{B}, \bar{x} \otimes \cdots \otimes \bar{x} \rangle,$$
where $\bar{x} = (x_1 d_1^{-1}, x_2 d_2^{-1}, \ldots, x_n d_n^{-1})$. From the proof of Proposition 25,
$$p(x) = \sum_{i \in [n]} \hat{\gamma}_i \bar{x}_i^m + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \beta^+_{i_1 \ldots i_m} f^+_{i_1 \ldots i_m}(\bar{x}) + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \beta^-_{i_1 \ldots i_m} f^-_{i_1 \ldots i_m}(\bar{x}), \tag{43}$$
where $\hat{\gamma}_i$, $i \in [n]$, and $\beta^+_{i_1 \ldots i_m}, \beta^-_{i_1 \ldots i_m}$, $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, are nonnegative. Assume $((j_1, \ldots, j_l), \alpha := (\alpha_1, \ldots, \alpha_l))$ is the tight pair of $(i_1, \ldots, i_m)$; then
$$f^+_{i_1 \ldots i_m}(\bar{x}) = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} \bar{x}_{j_k}^m + \binom{m}{\alpha} \bar{x}_{i_1} \cdots \bar{x}_{i_m} = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} d_{j_k}^{-m} x_{j_k}^m + \binom{m}{\alpha} d_{i_1}^{-1} \cdots d_{i_m}^{-1} x_{i_1} \cdots x_{i_m}, \tag{44}$$
$$f^-_{i_1 \ldots i_m}(\bar{x}) = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} \bar{x}_{j_k}^m - \binom{m}{\alpha} \bar{x}_{i_1} \cdots \bar{x}_{i_m} = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} d_{j_k}^{-m} x_{j_k}^m - \binom{m}{\alpha} d_{i_1}^{-1} \cdots d_{i_m}^{-1} x_{i_1} \cdots x_{i_m}. \tag{45}$$
Now, let $\gamma_i := \hat{\gamma}_i d_i^{-m}$ for $i \in [n]$, and $\beta^{(+k)}_{i_1 \ldots i_m} = \beta^+_{i_1 \ldots i_m} \binom{m-1}{\alpha - e_k} d_{j_k}^{-m} \ge 0$, $\beta^{(-k)}_{i_1 \ldots i_m} = \beta^-_{i_1 \ldots i_m} \binom{m-1}{\alpha - e_k} d_{j_k}^{-m} \ge 0$ for $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$ and $k \in [l]$. Notice that
$$\beta^+_{i_1 \ldots i_m}\, d_{i_1}^{-1} \cdots d_{i_m}^{-1} = \sqrt[m]{\frac{\prod_{k=1}^{l} \big( \beta^{(+k)}_{i_1 \ldots i_m} \big)^{\alpha_k}}{\prod_{k=1}^{l} \binom{m-1}{\alpha - e_k}^{\alpha_k}}} = m(i_1, \ldots, i_m)^+,$$
and similarly $\beta^-_{i_1 \ldots i_m}\, d_{i_1}^{-1} \cdots d_{i_m}^{-1} = m(i_1, \ldots, i_m)^-$ for $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$. Thus, after replacing (44), (45) in (43), and using the definitions of $\gamma_i$, $i \in [n]$, $\beta^{(+k)}_{i_1 \ldots i_m}, \beta^{(-k)}_{i_1 \ldots i_m}$, $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, $k \in [l]$, as well as (33) and (34), it follows that $p$ satisfies (37); that is, $p \in GDDTH_{m,n}$.

For the other direction, let $p \in GDDTH_{m,n}$. Then there exist nonnegative scalars $\gamma_i, \beta^{(+k)}_{i_1 \ldots i_m}, \beta^{(-k)}_{i_1 \ldots i_m}$, $i \in [n]$, $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, such that
$$p(x) = \sum_{i \in [n]} \gamma_i x_i^m + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} g^+_{i_1 \ldots i_m}(x) + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} g^-_{i_1 \ldots i_m}(x),$$
where $g^+_{i_1 \ldots i_m}(x)$ and $g^-_{i_1 \ldots i_m}(x)$ are defined in equations (33) and (34) for $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$. First note that
$$\sum_{i=1}^{n} \gamma_i x_i^m = \sum_{i=1}^{n} \gamma_i \langle \mathcal{V}^{0,ii\ldots i}, x \otimes \cdots \otimes x \rangle.$$
Also, for a given $(i_1, \ldots, i_m) \in \mathbb{D}^m_n$, let $((j_1, \ldots, j_l), \alpha = (\alpha_1, \ldots, \alpha_l))$ be its tight pair and denote
$$d_{j_k}^{-1} := \sqrt[m]{\beta^{(+k)}_{i_1 \ldots i_m} \Big/ \binom{m-1}{\alpha - e_k}}, \quad k \in [l].$$
Then
$$g^+_{i_1 \ldots i_m}(x) = \sum_{k=1}^{l} \binom{m-1}{\alpha - e_k} d_{j_k}^{-m} x_{j_k}^m + \binom{m}{\alpha} d_{i_1}^{-1} \cdots d_{i_m}^{-1} x_{i_1} \cdots x_{i_m} = \langle \mathcal{V}^{0,i_1 \ldots i_m} D^{-1} \cdots D^{-1}, x \otimes \cdots \otimes x \rangle = \langle \bar{\mathcal{V}}^{0,i_1 \ldots i_m}, x \otimes \cdots \otimes x \rangle,$$
where $D = (d_{ij})$ is the $n \times n$ diagonal matrix with diagonal elements
$$d_{ii} = \begin{cases} d_{j_k} & \text{if } i = j_k \text{ for some } k \in [l], \\ 1 & \text{otherwise}, \end{cases} \quad i \in [n],$$
and $\bar{\mathcal{V}}^{0,i_1 \ldots i_m} := \mathcal{V}^{0,i_1 \ldots i_m} D^{-1} \cdots D^{-1} \in GDD^+_{m,n}$. Similarly, $g^-_{i_1 \ldots i_m}(x) = \langle \bar{\mathcal{V}}^{1,i_1 \ldots i_m}, x \otimes \cdots \otimes x \rangle$, where $\bar{\mathcal{V}}^{1,i_1 \ldots i_m} := \mathcal{V}^{1,i_1 \ldots i_m} \hat{D}^{-1} \cdots \hat{D}^{-1} \in GDD^+_{m,n}$ and $\hat{D} = (\hat{d}_{ij})$ is the $n \times n$ diagonal matrix whose diagonal elements satisfy
$$\hat{d}_{ii}^{-1} = \sqrt[m]{\beta^{(-k)}_{i_1 \ldots i_m} \Big/ \binom{m-1}{\alpha - e_k}} \ \text{ if } i = j_k \text{ for some } k \in [l], \quad \text{and } \hat{d}_{ii} = 1 \text{ otherwise}, \quad i \in [n].$$
Thus, from Proposition 10, it follows that
$$\mathcal{M} := \sum_{i \in [n]} \gamma_i \mathcal{V}^{0,ii\ldots i} + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \bar{\mathcal{V}}^{0,i_1 \ldots i_m} + \sum_{(i_1, \ldots, i_m) \in \mathbb{D}^m_n} \bar{\mathcal{V}}^{1,i_1 \ldots i_m} \in GDD^+_{m,n},$$
and the result follows from the fact that $p(x) = \langle \mathcal{M}, x \otimes \cdots \otimes x \rangle$. □
From Propositions 25 and 26, $p(x) \in DDTH_{m,n}$ (resp., $GDDTH_{m,n}$) if and only if $p(x) = \langle \mathcal{A}, x \otimes \cdots \otimes x \rangle$ for some $\mathcal{A} \in DD^+_{m,n}$ (resp., $GDD^+_{m,n}$). This polynomial equality can be imposed with a finite set of linear equations on the coefficients of $p(x)$ and the elements of $\mathcal{A}$. The diagonal dominance constraints on $\mathcal{A} = (a_{i_1 \ldots i_m})$ can then be formulated using the linear inequalities
$$a_{ii\ldots i} \ge \sum_{(i_2, \ldots, i_m) \neq (i, \ldots, i)} z_{i i_2 \ldots i_m},\ i \in [n]; \qquad -z_{i_1 \ldots i_m} \le a_{i_1 \ldots i_m} \le z_{i_1 \ldots i_m},\ (i_1, \ldots, i_m) \in \mathbb{D}^m_n.$$
The number of linear constraints is at most $n + 2\binom{n+m-1}{m}$. The constraints defining $GDD^+$ tensors can be formulated as a power cone optimization problem of size polynomial in $n$ for fixed $m$ (Theorem 16). Thus, in both cases, the resulting linear and power cone optimization problems are of polynomial size in $n$ for fixed $m$.
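As a sanity check (ours), the cardinality $|\mathbb{D}^m_n| = \binom{n+m-1}{m} - n$ used in Theorem 16 and the constraint count above can be enumerated directly:

```python
from itertools import combinations_with_replacement
from math import comb

def index_set_D(m, n):
    """Enumerate D_n^m: nondecreasing index tuples of length m over [n]
    that use at least two distinct indices."""
    return [idx for idx in combinations_with_replacement(range(1, n + 1), m)
            if len(set(idx)) > 1]

m, n = 4, 10
D = index_set_D(m, n)
assert len(D) == comb(n + m - 1, m) - n       # |D_n^m| = C(n+m-1, m) - n
print(len(D), n + 2 * comb(n + m - 1, m))     # index count and DD constraint bound
```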
Next, we illustrate how the polynomial classes $DDTH_{m,n}$ and $GDDTH_{m,n}$ can be used to address the solution of polynomial optimization problems.

5.1. Optimization over $DSOS_{m,n}$ ($SDSOS_{m,n}$) and $DDTH_{m,n}$ ($GDDTH_{m,n}$)

From Theorem 1 and Propositions 25 and 26, polynomials in $DDTH_{m,n}$ and $GDDTH_{m,n}$ are all nonnegative, and from Theorem 16, membership in these polynomial classes can be checked in polynomial time. As a result, we can make use of them to approximately solve polynomial optimization problems. To illustrate this, we consider the particular problem of finding the smallest $H$-eigenvalue of an even order symmetric tensor and compare the performance of the approximations obtained using the polynomial classes $DSOS_{m,n}$ ($SDSOS_{m,n}$) and $DDTH_{m,n}$ ($GDDTH_{m,n}$).

For $m, n \in \mathbb{N}$, denote the smallest $H$-eigenvalue of $\mathcal{A} \in \mathbb{S}_{m,n}$ as $\lambda_{min}(\mathcal{A})$. Then, from [38, 39],
$$\lambda_{min}(\mathcal{A}) = \min \Big\{ \mathcal{A}x^m : x \in \mathbb{R}^n,\ \sum_{i=1}^{n} x_i^m = 1 \Big\}. \tag{46}$$
It is well known that the problem of computing eigenvalues of higher order tensors (i.e., $m \ge 3$) is NP-hard [16]. Thus, it is not easy to obtain the optimal value of problem (46). On the other hand, its optimal value is easily seen to be equivalent to the optimal value of the following problem:
$$\max \Big\{ \lambda : \mathcal{A}x^m - \lambda \sum_{i=1}^{n} x_i^m \ge 0,\ \forall x \in \mathbb{R}^n \Big\}. \tag{47}$$
For any $K \subseteq \mathbb{R}_m[x] \cap H_m[x]$, define the following problem $P_K$:
$$\max \Big\{ \lambda : \mathcal{A}x^m - \lambda \sum_{i=1}^{n} x_i^m \in K \Big\}, \tag{48}$$
and denote its optimal value as $\lambda(\mathcal{A})_K$. With this notation, $\lambda_{min}(\mathcal{A}) = \lambda(\mathcal{A})_{P_m[x]}$. From (48), it follows that $\lambda(\mathcal{A})_K \le \lambda_{min}(\mathcal{A})$ if $K \subseteq P_m[x]$. One classical choice of $K$ is $\Sigma_m[x] \cap H_m[x]$, and the resulting SDO problem is solvable in polynomial time. However, it is still computationally expensive to solve the resulting SDO problem if either the degree $m$ or the number of variables $n$ is large. Given this, the authors in [1] propose two scalable alternatives by setting $K = DSOS_{m,n}$ and $K = SDSOS_{m,n}$. Considering their close relationship with $DSOS_{m,n}$ and $SDSOS_{m,n}$, it is natural to also consider the choices $K = DDTH_{m,n}$ and $K = GDDTH_{m,n}$. In what follows, we compare the approximations obtained from these four polynomial classes when (48) is used to approximate the value of problem (46) from below.

In this numerical test, $\mathcal{A}x^m$ is set as a homogeneous polynomial of order 4 in $n$ variables, whose coefficients are sampled from the standard normal distribution. The test is implemented in MATLAB using the Systems Polynomial Optimization Toolbox (SPOT) [34] and the solver MOSEK [4], on an Intel Core i7-4770HQ computer with 2.20 GHz frequency and 16 GB RAM memory. The lower bounds and computational times for solving problem (48) with each method and different numbers of variables are listed in Table 2 and Table 3.

K               n = 5      n = 10     n = 15      n = 20
DSOS_{4,n}      -11.7592   -61.9733   -170.8438   -364.0555
SDSOS_{4,n}     -9.4565    -57.4002   -162.8298   -352.0931
DDTH_{4,n}      -14.1803   -65.4432   -177.6707   -370.2231
GDDTH_{4,n}     -11.5839   -61.9366   -168.7655   -360.0859

Table 2: Comparison of lower bounds on problem (46) using the restriction (48) for different choices of K.

Table 3: Comparison of solution time (in seconds) required to obtain the bounds on problem (46) using the restriction (48) for different choices of K.

In this numerical test, we observe the following facts. The ranking in terms of the strength of the bounds for these four methods is not altered by the size of the problems. In particular, $\lambda(\mathcal{A})_{SDSOS_{4,n}}$ gives the best bound, while $\lambda(\mathcal{A})_{DDTH_{4,n}}$ gives the worst bound. Compared to $\lambda(\mathcal{A})_{GDDTH_{4,n}}$ (resp., $\lambda(\mathcal{A})_{DDTH_{4,n}}$), $\lambda(\mathcal{A})_{SDSOS_{4,n}}$ (resp., $\lambda(\mathcal{A})_{DSOS_{4,n}}$) provides a stronger bound. This concurs with the fact that the set $DDTH_{4,n}$ (resp., $GDDTH_{4,n}$) is contained in the set $DSOS_{4,n}$ (resp., $SDSOS_{4,n}$); we prove this result for fourth order polynomials in Proposition 27 in the Appendix.

From Table 3, optimization over $DDTH_{4,n}$ is faster than optimization over $DSOS_{4,n}$. Thus, there is a trade-off between the quality of the bounds and the solution time when choosing between optimization over $DDTH_{4,n}$ and optimization over $DSOS_{4,n}$ to approximately solve polynomial optimization problems. One might expect that optimization over $GDDTH_{4,n}$ would be faster than optimization over $SDSOS_{4,n}$, so that there is a similar trade-off between the strength of the bound and the solution time. However, Table 3 shows the opposite result.
The main reason for this is that efficient power cone optimization solvers are still underdeveloped in comparison with the highly developed second-order cone optimization solvers used to optimize over $SDSOS_{4,n}$.

6. Conclusions

In this work, a new characterization of symmetric $H^+$-tensors is presented (see Corollary 14). As a result of this characterization, it follows that one can decide whether a tensor is a symmetric $H^+$-tensor in polynomial time (see Theorem 16). Compared to other characterizations, which typically focus on sufficient conditions for a tensor to be an $H^+$-tensor, our characterization provides necessary and sufficient conditions. Moreover, the set of symmetric $H^+$-tensors is described using tractable convex cones; in particular, the power cone.

We apply the new characterization of symmetric $H^+$-tensors to compute the minimum $H$-eigenvalue of $M$-tensors. In particular, we show how these $H$-eigenvalues, which can be computed in polynomial time, compare with the best bounds for the minimum $H$-eigenvalues of $M$-tensors proposed in the related literature. Furthermore, we illustrate how this new characterization of symmetric $H^+$-tensors can be used to obtain alternative solution approaches to approximately solve polynomial optimization problems (see Section 5). In particular, we compare and discuss the trade-offs between the use of $H^+$-tensor induced polynomials versus the use of DSOS and SDSOS polynomials to approximately solve the polynomial optimization problem associated with finding the minimum $H$-eigenvalue of an even order symmetric tensor.

References

[1] Ahmadi, A. A. and Majumdar, A. (2019). DSOS and SDSOS optimization: more tractable alternatives to sum of squares and semidefinite optimization. SIAM Journal on Applied Algebra and Geometry, 3(2):193-230.

[2] Anderson, B., Bose, N., and Jury, E. (1975). Output feedback stabilization and related problems-solution via decision methods. IEEE Transactions on Automatic Control, 20(1):53-66.

[3] Anjos, M. F. and Lasserre, J. B. (2011). Handbook on Semidefinite, Conic and Polynomial Optimization, volume 166. Springer Science & Business Media.

[4] ApS, M. (2019). MOSEK Modeling Cookbook.

[5] Blekherman, G., Parrilo, P. A., and Thomas, R. R. (2012). Semidefinite Optimization and Convex Algebraic Geometry. SIAM.

[6] Boman, E. G., Chen, D., Parekh, O., and Toledo, S. (2005). On factor width and symmetric H-matrices. Linear Algebra and its Applications, 405:239-248.

[7] Cartwright, D. and Sturmfels, B. (2013). The number of eigenvalues of a tensor. Linear Algebra and its Applications, 438(2):942-952.

[8] Chandrasekaran, V. and Shah, P. (2016). Relative entropy relaxations for signomial optimization. SIAM Journal on Optimization, 26(2):1147-1173.

[9] Chares, R. (2009). Cones and interior-point algorithms for structured convex optimization involving powers and exponentials. PhD thesis, UCL-Université Catholique de Louvain, Louvain-la-Neuve, Belgium.

[10] Chen, H., Li, G., and Qi, L. (2015). SOS tensor decomposition: theory and applications. arXiv preprint arXiv:1504.03414.

[11] Choi, M.-D., Lam, T. Y., and Reznick, B. (1995). Sums of squares of real polynomials. In Proceedings of Symposia in Pure Mathematics, volume 58, pages 103-126. American Mathematical Society.

[12] Ding, W., Qi, L., and Wei, Y. (2013). M-tensors and nonsingular M-tensors. Linear Algebra and its Applications, 439(10):3264-3278.

[13] Gatermann, K. and Parrilo, P. A. (2004). Symmetry groups, semidefinite programs, and sums of squares.
[14] He, J. and Huang, T.-Z. (2014). Inequalities for M-tensors. Journal of Inequalities and Applications, 2014(1):114.
[15] Hien, L. T. K. (2015). Differential properties of Euclidean projection onto power cone. Mathematical Methods of Operations Research, 82:265–284.
[16] Hillar, C. J. and Lim, L.-H. (2013). Most tensor problems are NP-hard. Journal of the ACM (JACM), 60(6):1–39.
[17] Huang, B. and Ma, C. (2019). Iterative criteria for identifying strong H-tensors. Journal of Computational and Applied Mathematics, 352:93–109.
[18] Huang, Z.-G., Wang, L.-G., Xu, Z., and Cui, J.-J. (2018). Some new inequalities for the minimum H-eigenvalue of nonsingular M-tensors. Linear Algebra and its Applications, 558:146–173.
[19] Iliman, S. and De Wolff, T. (2016). Amoebas, nonnegative polynomials and sums of squares supported on circuits. Research in the Mathematical Sciences, 3(1):9.
[20] Kannan, M. R., Shaked-Monderer, N., and Berman, A. (2015). Some properties of strong H-tensors and general H-tensors. Linear Algebra and its Applications, 476:42–55.
[21] Koecher, M. (1957). Positivitätsbereiche im R^n. American Journal of Mathematics, 79(3):575–596.
[22] Lasserre, J. B. (2001). Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 11(3):796–817.
[23] Lasserre, J. B. (2007). A sum of squares approximation of nonnegative polynomials. SIAM Review, 49(4):651–669.
[24] Li, C., Li, Y., and Zhao, R. (2013a). New inequalities for the minimum eigenvalue of M-matrices. Linear and Multilinear Algebra, 61(9):1267–1279.
[25] Li, C., Wang, F., Zhao, J., Zhu, Y., and Li, Y. (2014). Criterions for the positive definiteness of real supersymmetric tensors. Journal of Computational and Applied Mathematics, 255:1–14.
[26] Li, G., Qi, L., and Yu, G. (2013b). The Z-eigenvalues of a symmetric tensor and its application to spectral hypergraph theory. Numerical Linear Algebra with Applications, 20(6):1001–1029.
[27] Li, Y., Liu, Q., and Qi, L. (2017). Programmable criteria for strong H-tensors. Numerical Algorithms, 74(1):199–221.
[28] Lim, L.-H. (2005). Singular values and eigenvalues of tensors: a variational approach. In Proceedings of the 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP 2005), pages 129–132. IEEE.
[29] Liu, Q., Li, C., and Li, Y. (2017). On the iterative criterion for strong H-tensors. Computational and Applied Mathematics, 36(4):1623–1635.
[30] Lobo, M. S., Vandenberghe, L., Boyd, S., and Lebret, H. (1998). Applications of second-order cone programming. Linear Algebra and its Applications, 284(1-3):193–228.
[31] Luan, Z. and Zhang, L. (2019). A new programmable iterative algorithm for identifying strong H-tensors. Computational and Applied Mathematics, 38(2):43.
[32] Luo, Z. and Qi, L. (2016). Completely positive tensors: properties, easily checkable subclasses, and tractable relaxations. SIAM Journal on Matrix Analysis and Applications, 37(4):1675–1698.
[33] Luo, Z., Qi, L., and Ye, Y. (2015). Linear operators and positive semidefiniteness of symmetric tensor spaces. Science China Mathematics, 58(1):197–212.
[34] Megretski, A. (2013). SPOT: Systems Polynomial Optimization Toolbox.
[35] Nesterov, Y. (2012). Towards non-symmetric conic optimization. Optimization Methods and Software, 27(4-5):893–917.
[36] Papp, D. (2011). Optimization Models for Shape-Constrained Function Estimation Problems Involving Nonnegative Polynomials and Their Restrictions. PhD thesis, Rutgers University-Graduate School-New Brunswick.
[37] Pataki, G. (1998). On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues. Mathematics of Operations Research, 23(2):339–358.
[38] Qi, L. (2005). Eigenvalues of a real supersymmetric tensor. Journal of Symbolic Computation, 40(6):1302–1324.
[39] Qi, L. (2013). Symmetric nonnegative tensors and copositive tensors. Linear Algebra and its Applications, 439(1):228–238.
[40] Renegar, J. (2001). A Mathematical View of Interior-Point Methods in Convex Optimization, volume 3. SIAM.
[41] Roy, S. and Xiao, L. (2018). On self-concordant barriers for generalized power cones. Technical report, Optimization Online.
[42] Saunderson, J. (2019). Certifying polynomial nonnegativity via hyperbolic optimization. SIAM Journal on Applied Algebra and Geometry, 3(4):661–690.
[43] Tian, G.-X. and Huang, T.-Z. (2010). Inequalities for the minimum eigenvalue of M-matrices. The Electronic Journal of Linear Algebra, 20.
[44] Tunçel, L. and Nemirovski, A. (2010). Self-concordant barriers for convex approximations of structured convex sets. Foundations of Computational Mathematics, 10(5):485–525.
[45] Varga, R. S. (1962). Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs, NJ.
[46] Wang, F., Sun, D., Zhao, J., and Li, C. (2017). New practical criteria for H-tensors and its application. Linear and Multilinear Algebra, 65(2):269–283.
[47] Weisser, T., Lasserre, J. B., and Toh, K.-C. (2018). Sparse-BSOS: a bounded degree SOS hierarchy for large scale polynomial optimization with sparsity. Mathematical Programming Computation, 10(1):1–32.
[48] Wright, S. J. (1997). Primal-Dual Interior-Point Methods. SIAM.
[49] Zhang, K. and Wang, Y. (2016). An H-tensor based iterative scheme for identifying the positive definiteness of multivariate homogeneous forms. Journal of Computational and Applied Mathematics, 305:1–10.
[50] Zhang, L., Qi, L., and Zhou, G. (2014). M-tensors and some applications. SIAM Journal on Matrix Analysis and Applications, 35(2):437–452.
[51] Zhao, R., Gao, L., Liu, Q., and Li, Y. (2016). Criterions for identifying H-tensors. Frontiers of Mathematics in China, 11(3):661–678.
[52] Zheng, Y., Sootla, A., and Papachristodoulou, A. (2019). Block factor-width-two matrices and their applications to semidefinite and sum-of-squares optimization. arXiv preprint arXiv:1909.11076.

Appendix

From Theorem 1, Definition 4, and Definition 5, for $m, n \in \mathbb{N}$, the cones $DDTH_{m,n}$, $GDDTH_{m,n}$, $DSOS_{m,n}$, and $SDSOS_{m,n}$ are all contained in $\Sigma_m[x]$. In this section, we explore the relationship between $DDTH_{m,n}$, $GDDTH_{m,n}$ and $DSOS_{m,n}$, $SDSOS_{m,n}$. For fourth-order polynomials, we obtain the following result.

Proposition 27. For $n \in \mathbb{N}$, $DDTH_{4,n} \subseteq DSOS_{4,n}$ and $GDDTH_{4,n} \subseteq SDSOS_{4,n}$.

Proof. Let $n \in \mathbb{N}$. From Proposition 25, $f \in DDTH_{4,n}$ if and only if $f$ is a sum of terms $\gamma_i x_i^4$, $\beta^+_{i_1i_2i_3i_4} f^+_{i_1i_2i_3i_4}$, and $\beta^-_{i_1i_2i_3i_4} f^-_{i_1i_2i_3i_4}$ for some nonnegative $\gamma_i$, $\beta^+_{i_1i_2i_3i_4}$, $\beta^-_{i_1i_2i_3i_4}$, $i \in [n]$, $(i_1,i_2,i_3,i_4) \in D^4_n$. Also, from Proposition 26, $g \in GDDTH_{4,n}$ if and only if $g$ is a sum of terms $\gamma_i x_i^4$, $g^+_{i_1i_2i_3i_4}$, and $g^-_{i_1i_2i_3i_4}$ for some nonnegative $\gamma_i$, $\beta^{(+k)}_{i_1i_2i_3i_4}$, $\beta^{(-k)}_{i_1i_2i_3i_4}$, $i \in [n]$, $(i_1,i_2,i_3,i_4) \in D^4_n$. Considering that both $DSOS_{4,n}$ and $SDSOS_{4,n}$ are convex cones, the proof is finished if we can prove that for $i \in [n]$ and $(i_1,i_2,i_3,i_4) \in D^4_n$,

$$x_i^4, \ f^+_{i_1i_2i_3i_4}, \ f^-_{i_1i_2i_3i_4} \in DSOS_{4,n}, \qquad (49)$$

and

$$x_i^4, \ g^+_{i_1i_2i_3i_4}, \ g^-_{i_1i_2i_3i_4} \in SDSOS_{4,n}. \qquad (50)$$

Clearly, $x_i^4 \in DSOS_{4,n} \subseteq SDSOS_{4,n}$ for $i \in [n]$.
The tuples $(i_1,i_2,i_3,i_4) \in D^4_n$ can be classified into four cases, depending on $\mathrm{card}(\{i_1,i_2,i_3,i_4\})$ and on the pairing of the indices:

(i) $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 4$;
(ii) $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 3$;
(iii) $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 2$ and $x_{i_1}x_{i_2}x_{i_3}x_{i_4} \neq x_{j_1}^2 x_{j_2}^2$ for any $j_1 \neq j_2$;
(iv) $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 2$ and $x_{i_1}x_{i_2}x_{i_3}x_{i_4} = x_{j_1}^2 x_{j_2}^2$ for some $j_1 \neq j_2$.

Next, we prove (49) and (50) in each of these four cases. Without loss of generality, in the proof we assume $n = 4$ and $\{i_1,i_2,i_3,i_4\} \subseteq \{1,2,3,4\}$.

Case 1: If $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 4$, then

$$f^{\pm}_{i_1i_2i_3i_4} = f^{\pm}_{1234} = 6\sum_{j=1}^{4} x_j^4 \pm 24\, x_1x_2x_3x_4 = 6(x_1^2 - x_2^2)^2 + 6(x_3^2 - x_4^2)^2 + 12(x_1x_2 \pm x_3x_4)^2 \in DSOS_{4,n}.$$

Also, recalling the nonnegativity of $\beta^{(+k)}_{1234}$ and $\beta^{(-k)}_{1234}$, $k \in [4]$, we have

$$g^{\pm}_{i_1i_2i_3i_4} = g^{\pm}_{1234} = \sum_{k=1}^{4} \beta^{(\pm k)}_{1234} x_k^4 \pm 4\sqrt[4]{\beta^{(\pm 1)}_{1234}\beta^{(\pm 2)}_{1234}\beta^{(\pm 3)}_{1234}\beta^{(\pm 4)}_{1234}}\, x_1x_2x_3x_4$$
$$= \left(\sqrt{\beta^{(\pm 1)}_{1234}}\,x_1^2 - \sqrt{\beta^{(\pm 2)}_{1234}}\,x_2^2\right)^2 + \left(\sqrt{\beta^{(\pm 3)}_{1234}}\,x_3^2 - \sqrt{\beta^{(\pm 4)}_{1234}}\,x_4^2\right)^2 + 2\left(\sqrt[4]{\beta^{(\pm 1)}_{1234}\beta^{(\pm 2)}_{1234}}\,x_1x_2 \pm \sqrt[4]{\beta^{(\pm 3)}_{1234}\beta^{(\pm 4)}_{1234}}\,x_3x_4\right)^2 \in SDSOS_{4,n}.$$

Case 2: If $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 3$, without loss of generality assume $(i_1,i_2,i_3,i_4) = (1,1,2,3)$; then

$$f^{\pm}_{i_1i_2i_3i_4} = f^{\pm}_{1123} = 6x_1^4 + 3x_2^4 + 3x_3^4 \pm 12\, x_1^2x_2x_3 = 3(x_1^2 - x_2^2)^2 + 3(x_1^2 - x_3^2)^2 + 6(x_1x_2 \pm x_1x_3)^2 \in DSOS_{4,n}.$$

Also, recalling the nonnegativity of $\beta^{(+k)}_{1123}$ and $\beta^{(-k)}_{1123}$, $k \in [3]$, we have

$$g^{\pm}_{i_1i_2i_3i_4} = g^{\pm}_{1123} = \sum_{k=1}^{3} \beta^{(\pm k)}_{1123} x_k^4 \pm 2\sqrt{2}\,\sqrt[4]{\left(\beta^{(\pm 1)}_{1123}\right)^2 \beta^{(\pm 2)}_{1123}\beta^{(\pm 3)}_{1123}}\, x_1^2x_2x_3$$
$$= \left(\sqrt{\tfrac{\beta^{(\pm 1)}_{1123}}{2}}\,x_1^2 - \sqrt{\beta^{(\pm 2)}_{1123}}\,x_2^2\right)^2 + \left(\sqrt{\tfrac{\beta^{(\pm 1)}_{1123}}{2}}\,x_1^2 - \sqrt{\beta^{(\pm 3)}_{1123}}\,x_3^2\right)^2 + 2\left(\sqrt[4]{\tfrac{\beta^{(\pm 1)}_{1123}\beta^{(\pm 2)}_{1123}}{2}}\,x_1x_2 \pm \sqrt[4]{\tfrac{\beta^{(\pm 1)}_{1123}\beta^{(\pm 3)}_{1123}}{2}}\,x_1x_3\right)^2 \in SDSOS_{4,n}.$$

Case 3: If $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 2$ and $x_{i_1}x_{i_2}x_{i_3}x_{i_4} \neq x_{j_1}^2 x_{j_2}^2$ for $j_1 < j_2$, without loss of generality assume $(i_1,i_2,i_3,i_4) = (1,1,1,2)$; then

$$f^{\pm}_{i_1i_2i_3i_4} = f^{\pm}_{1112} = 3x_1^4 + x_2^4 \pm 4\, x_1^3x_2 = (x_1^2 - x_2^2)^2 + 2(x_1^2 \pm x_1x_2)^2 \in DSOS_{4,n}.$$

Also, recalling the nonnegativity of $\beta^{(+k)}_{1112}$ and $\beta^{(-k)}_{1112}$, $k \in [2]$, we have

$$g^{\pm}_{i_1i_2i_3i_4} = g^{\pm}_{1112} = \sum_{k=1}^{2} \beta^{(\pm k)}_{1112} x_k^4 \pm \frac{4}{3^{3/4}}\sqrt[4]{\left(\beta^{(\pm 1)}_{1112}\right)^3 \beta^{(\pm 2)}_{1112}}\, x_1^3x_2$$
$$= \left(\sqrt{\tfrac{\beta^{(\pm 1)}_{1112}}{3}}\,x_1^2 - \sqrt{\beta^{(\pm 2)}_{1112}}\,x_2^2\right)^2 + 2\left(\sqrt{\tfrac{\beta^{(\pm 1)}_{1112}}{3}}\,x_1^2 \pm \sqrt[4]{\tfrac{\beta^{(\pm 1)}_{1112}\beta^{(\pm 2)}_{1112}}{3}}\,x_1x_2\right)^2 \in SDSOS_{4,n}.$$

Case 4: If $\mathrm{card}(\{i_1,i_2,i_3,i_4\}) = 2$ and $x_{i_1}x_{i_2}x_{i_3}x_{i_4} = x_{j_1}^2 x_{j_2}^2$ for $j_1 < j_2$, without loss of generality assume $(i_1,i_2,i_3,i_4) = (1,1,2,2)$; then

$$f^{\pm}_{i_1i_2i_3i_4} = f^{\pm}_{1122} = 3x_1^4 + 3x_2^4 \pm 6\, x_1^2x_2^2 = 3(x_1^2 \pm x_2^2)^2 \in DSOS_{4,n}.$$

Also, recalling the nonnegativity of $\beta^{(+k)}_{1122}$ and $\beta^{(-k)}_{1122}$, $k \in [2]$, we have

$$g^{\pm}_{i_1i_2i_3i_4} = g^{\pm}_{1122} = \sum_{k=1}^{2} \beta^{(\pm k)}_{1122} x_k^4 \pm 2\sqrt{\beta^{(\pm 1)}_{1122}\beta^{(\pm 2)}_{1122}}\, x_1^2x_2^2 = \left(\sqrt{\beta^{(\pm 1)}_{1122}}\,x_1^2 \pm \sqrt{\beta^{(\pm 2)}_{1122}}\,x_2^2\right)^2 \in SDSOS_{4,n}.$$

This proves (49) and (50) in all four cases, which completes the proof. □

Finally, a similar argument can be used to show that, for general even order $m \in \mathbb{N}$, $DDTH_{m,n} \subseteq GDDTH_{m,n} \subseteq SDSOS_{m,n}$ for all $n \in \mathbb{N}$. The proof is based on the fact that, for $(i_1, \ldots, i_m) \in D^m_n$, the polynomials $f^+_{i_1 \ldots i_m}$, $f^-_{i_1 \ldots i_m}$, $g^+_{i_1 \ldots i_m}$, and $g^-_{i_1 \ldots i_m}$ defined in (31) and (32) are related to circuit polynomials [see, e.g., 19].
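The four case identities above are plain polynomial identities, so they can be spot-checked numerically. The following Python sketch verifies the Case 1 and Case 2 identities, as displayed above (including the stated normalizations of the cross-term coefficients), at random points and random positive β values; the function names and testing harness are ours, added for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def check_case1(trials=1000):
    """Check the Case 1 identities for f^{+/-}_{1234} and g^{+/-}_{1234}."""
    for _ in range(trials):
        x1, x2, x3, x4 = rng.standard_normal(4)
        b = rng.uniform(0.1, 5.0, size=4)        # beta^{(k)}_{1234} > 0
        for s in (+1.0, -1.0):                   # s = +1 gives f^+, g^+; s = -1 gives f^-, g^-
            f = 6*(x1**4 + x2**4 + x3**4 + x4**4) + s*24*x1*x2*x3*x4
            f_dec = (6*(x1**2 - x2**2)**2 + 6*(x3**2 - x4**2)**2
                     + 12*(x1*x2 + s*x3*x4)**2)
            g = (b @ np.array([x1, x2, x3, x4])**4
                 + s*4*np.prod(b)**0.25*x1*x2*x3*x4)
            g_dec = ((np.sqrt(b[0])*x1**2 - np.sqrt(b[1])*x2**2)**2
                     + (np.sqrt(b[2])*x3**2 - np.sqrt(b[3])*x4**2)**2
                     + 2*((b[0]*b[1])**0.25*x1*x2 + s*(b[2]*b[3])**0.25*x3*x4)**2)
            assert np.isclose(f, f_dec) and np.isclose(g, g_dec)

def check_case2(trials=1000):
    """Check the Case 2 identity for g^{+/-}_{1123}."""
    for _ in range(trials):
        x1, x2, x3 = rng.standard_normal(3)
        b1, b2, b3 = rng.uniform(0.1, 5.0, size=3)
        for s in (+1.0, -1.0):
            g = (b1*x1**4 + b2*x2**4 + b3*x3**4
                 + s*2*np.sqrt(2)*(b1**2*b2*b3)**0.25*x1**2*x2*x3)
            g_dec = ((np.sqrt(b1/2)*x1**2 - np.sqrt(b2)*x2**2)**2
                     + (np.sqrt(b1/2)*x1**2 - np.sqrt(b3)*x3**2)**2
                     + 2*((b1*b2/2)**0.25*x1*x2 + s*(b1*b3/2)**0.25*x1*x3)**2)
            assert np.isclose(g, g_dec)

check_case1()
check_case2()
print("Case 1 and Case 2 identities hold at all sampled points.")
```

Since each right-hand side is an explicit sum of squares of binomials in monomials, such a check also makes the DSOS/SDSOS membership claims above easy to audit term by term.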