Strict Complementarity in MaxCut SDP
MARCEL K. DE CARLI SILVA AND LEVENT TUNÇEL
Abstract.
The MaxCut SDP is one of the most well-known semidefinite programs, and it has many favorable properties. One of its nicest geometric/duality properties is the fact that the vertices of its feasible region correspond exactly to the cuts of a graph, as proved by Laurent and Poljak in 1995. Recall that a boundary point $x$ of a convex set $C$ is called a vertex of $C$ if the normal cone of $C$ at $x$ is full-dimensional. We study how often strict complementarity holds or fails for the MaxCut SDP when a vertex of the feasible region is optimal, i.e., when the SDP relaxation is tight. While strict complementarity is known to hold when the objective function is in the interior of the normal cone at any vertex, we prove that it fails generically at the boundary of such normal cone. In this regard, the MaxCut SDP displays the nastiest behavior possible for a convex optimization problem.

We also study strict complementarity with respect to two classes of objective functions. We show that, when the objective functions are sampled uniformly from the negative semidefinite rank-one matrices in the boundary of the normal cone at any vertex, the probability that strict complementarity holds lies in $(0,1)$. We also extend a construction due to Laurent and Poljak of weighted Laplacian matrices for which strict complementarity fails. Their construction works for complete graphs, and we extend it to cosums of graphs under some mild conditions.

1. Introduction
Complementary slackness is a fundamental optimality condition, and hence ubiquitous in optimization. In the most general setting of nonlinear programming (formulated below in the classical language of nonlinear optimization), it requires a pair $(\bar{x}, \bar{y})$ of primal-dual feasible solutions to an optimization problem
$$\min\{\, f(x) : g_i(x) \geq 0 \ \forall i \in \{1, \ldots, m\} \,\} \tag{1}$$
and its (Lagrangean) dual to satisfy $\bar{y}_i g_i(\bar{x}) = 0$ for every $i \in [m] := \{1, \ldots, m\}$; that is, at least one of the feasibility conditions $g_i(\bar{x}) \geq 0$ (in the primal) and $\bar{y}_i \geq 0$ (in the dual) must be tight, i.e., they cannot both have a slack. In this case, we say that $(\bar{x}, \bar{y})$ is complementary. This condition can be stated very conveniently in structured convex optimization. For a linear program (LP)
$$\max\{\, c^T x : Ax = b,\ x \geq 0 \,\} \tag{2}$$
and its dual $\min\{\, b^T y : s = A^T y - c,\ s \geq 0 \,\}$, where $A \in \mathbb{R}^{m \times n}$ is a matrix, and $b \in \mathbb{R}^m$ and $c \in \mathbb{R}^n$ are vectors, a pair $(\bar{x}, \bar{y} \oplus \bar{s})$ of primal-dual feasible solutions is complementary if $\bar{x}_i \bar{s}_i = 0$ for every $i \in [n]$. One can similarly consider a semidefinite program (SDP)
$$\max\{\, \operatorname{Tr}(CX) : \mathcal{A}(X) = b,\ X \succeq 0 \,\}; \tag{3}$$
here, as usual, we equip the space $\mathbb{S}^n$ of symmetric $n$-by-$n$ matrices with the trace inner-product $\langle C, X \rangle := \operatorname{Tr}(CX^T) = \sum_{i,j} C_{ij} X_{ij}$, the map $\mathcal{A} : \mathbb{S}^n \to \mathbb{R}^m$ is linear, and $X \succeq 0$ denotes that $X \in \mathbb{S}^n$ is positive semidefinite; most of our notation can be found in Tables 1 to 5. The dual SDP is $\min\{\, b^T y : S = \mathcal{A}^*(y) - C,\ S \succeq 0 \,\}$, where $\mathcal{A}^* : \mathbb{R}^m \to \mathbb{S}^n$ is the adjoint of $\mathcal{A}$, and a pair $(\bar{X}, \bar{y} \oplus \bar{S})$ of primal-dual feasible solutions is called complementary if $\operatorname{Tr}(\bar{X}\bar{S}) = 0$; equivalently, if $\bar{X}\bar{S} = 0$, since $\bar{X}, \bar{S} \succeq 0$.

Date: June 2, 2018.
Research of the first author was supported in part by Discovery Grants from NSERC and by U.S.
Office of Naval Research under award number N00014-12-1-0049, while at the Department of Combinatorics and Optimization, University of Waterloo, and by FAPESP (Proc. 2013/03447-6), CNPq (Proc. 477203/2012-4), CNPq (Proc. 456792/2014-7), and CAPES, while at the Institute of Mathematics and Statistics, University of São Paulo.
Research of the second author was supported in part by Discovery Grants from NSERC and by U.S. Office of Naval Research under award numbers N00014-12-1-0049, N00014-15-1-2171, and N00014-18-1-2078. Part of this work was done while the second author was visiting the Simons Institute for the Theory of Computing, supported in part by the DIMACS/Simons Collaboration on Bridging Continuous and Discrete Optimization through an NSF grant.
Strict complementarity is a refinement of the notion of complementary slackness where we require precisely one of the feasibility conditions involved to be tight, which forces the other one to have a slack. A pair $(\bar{x}, \bar{y})$ of primal-dual feasible solutions for the optimization problem in (1) and its dual is strictly complementary if $\bar{y}_i g_i(\bar{x}) = 0$ and $\bar{y}_i + g_i(\bar{x}) > 0$ for every $i \in [m]$. A pair $(\bar{x}, \bar{y} \oplus \bar{s})$ of primal-dual feasible solutions for the LP in (2) and its dual is strictly complementary if $\bar{x}_i \bar{s}_i = 0$ and $\bar{x}_i + \bar{s}_i > 0$ for every $i \in [n]$. Finally, a pair $(\bar{X}, \bar{y} \oplus \bar{S})$ of primal-dual feasible solutions for the SDP in (3) and its dual is strictly complementary if $\bar{X}\bar{S} = 0$ and $\bar{X} + \bar{S} \succ 0$, i.e., $\bar{X} + \bar{S}$ is positive definite. The latter two notions can be neatly unified in the context of convex conic optimization via the concept of faces (see [17]).

Complementary slackness characterizes optimality whenever Strong Duality holds, in both LPs and SDPs: a primal-dual pair of feasible solutions is optimal if and only if it is complementary. This is sometimes described by saying that complementary slackness holds for the (primal-dual pair of) programs. In the case of LPs, whenever primal and dual are both feasible, there exists a primal-dual pair of optimal solutions that is strictly complementary [7]; i.e., strict complementarity holds for every primal-dual pair of feasible LPs. However, there exist primal-dual pairs of SDPs (which satisfy strong regularity conditions sufficient for SDP Strong Duality) that have no strictly complementary primal-dual pair of optimal solutions (see [22]); in such cases, we say that strict complementarity fails for said primal-dual pair of SDPs.
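Since complementary pairs of positive semidefinite matrices recur throughout the paper, a small numerical illustration may help. The following numpy sketch (our own toy construction, not from the paper) builds PSD matrices with orthogonal ranges and checks that $\operatorname{Tr}(XS) = 0$ forces $XS = 0$, and that this particular pair is even strictly complementary since $X + S = I \succ 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random orthogonal basis; split it into two complementary column blocks.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
X = Q[:, :2] @ Q[:, :2].T      # PSD, range = span of first two columns
S = Q[:, 2:] @ Q[:, 2:].T      # PSD, range = span of last two columns

print(np.trace(X @ S))          # ~0: the pair is complementary
print(np.linalg.norm(X @ S))    # ~0: Tr(XS) = 0 forces XS = 0 for PSD X, S
# Here X + S = QQ^T = I is positive definite, so the pair is strictly
# complementary as well.
print(np.linalg.eigvalsh(X + S).min() > 1e-8)
```

The choice of complementary column blocks of an orthogonal matrix is just one convenient way to produce such a pair; any PSD pair with $\operatorname{Im}(S) \subseteq \operatorname{Null}(X)$ behaves the same way.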
In fact, failure of strict complementarity is deeply related to failure of Strong Duality in the context of convex conic optimization [23]. Existence of a strictly complementary pair of optimal solutions for an SDP is crucial for some key properties of interior-point methods used to solve such an optimization problem; see, e.g., [2, 10, 15, 11] for superlinear convergence and [8] for convergence of the central path to the analytic center of the optimal face. Strict complementarity is also very useful in the identification of optimal faces (in the primal and dual problems), for detection of infeasibility and unboundedness as well as efficient recovery of certificates of these [25, 16]. Hence, it is important to determine whether strict complementarity holds for a given SDP.

It is known that strict complementarity holds generically for SDPs [1]; for a generalization to convex optimization problems, see [18]. However, there are some generic properties of LPs that fail in some natural, highly structured formulations arising in combinatorial optimization. For instance, whereas systems of linear inequalities are well-known to be generically nondegenerate, the natural description of many classical polytopes is degenerate (e.g., for the matching polytope, see [21, Theorem 25.4]), and "...most real-world LP problems are degenerate" according to [24]. Thus, one ought to be careful about strict complementarity when approaching combinatorial optimization problems via SDP relaxations.

In this paper, we study how often strict complementarity holds or fails for the MaxCut SDP and its dual, when an optimal solution of the primal occurs at a vertex of its feasible region. Recall that the MaxCut problem for a given graph $G = (V, E)$ on $V = [n]$ and weight function $w : E \to \mathbb{R}$ can be cast as the optimization problem $\max\{\, x^T C x : x \in \{\pm 1\}^n \,\}$, where $C \in \mathbb{S}^n$ is defined as
$$C := L_G(w) := \sum_{\{i,j\} \in E} w_{\{i,j\}} (e_i - e_j)(e_i - e_j)^T \tag{4}$$
and $\{e_1, \ldots, e_n\}$ is the standard basis of $\mathbb{R}^n$. The matrix $L_G(w)$ is known as (a weighted) Laplacian matrix of $G$, and it is simple to check that $L_G(w) \succeq 0$ if $w \geq 0$. The natural SDP relaxation for this problem is the following MaxCut SDP, which we write along with its dual:
$$\max\{\, \operatorname{Tr}(CX) : \operatorname{diag}(X) = \mathbb{1},\ X \succeq 0 \,\} = \min\{\, \mathbb{1}^T y : S = \operatorname{Diag}(y) - C,\ S \succeq 0 \,\}; \tag{5}$$
here, $\operatorname{diag} : \mathbb{S}^n \to \mathbb{R}^n$ extracts the diagonal, $\operatorname{Diag} : \mathbb{R}^n \to \mathbb{S}^n$ is the adjoint of $\operatorname{diag}$, and $\mathbb{1}$ is the vector of all-ones. Strong Duality holds for every $C \in \mathbb{S}^n$ since both SDPs have Slater points, i.e., feasible solutions that are positive definite.

The feasible region of the MaxCut SDP, called the elliptope and denoted by $\mathcal{E}_n$, is a compact convex set in $\mathbb{S}^n$ and its vertices are precisely its elements that are rank-one matrices [13], i.e., matrices of the form $xx^T$ with $x \in \{\pm 1\}^n$. Thus, they correspond precisely to the exact solutions of the MaxCut problem, for which the SDP is a relaxation. The vertices of $\mathcal{E}_n$ are by definition the points of $\mathcal{E}_n$ whose normal cones are full-dimensional (we postpone the definition of normal cone to Section 2.2). It is known [3] that strict complementarity holds in (5) precisely when $C$ lies in the relative interior of the normal cone of some $X \in \mathcal{E}_n$. In particular, if $\bar{X}$ is a vertex of $\mathcal{E}_n$, then strict complementarity holds for (5) whenever $C$ is in the interior of the normal cone of $\mathcal{E}_n$ at $\bar{X}$. However, when $C$ lies in the boundary of this normal cone, it is not clear whether strict complementarity holds.

In this paper, we prove that, when $C$ is chosen from the boundary of the normal cone at a vertex of the elliptope, strict complementarity almost always fails for (5); in this regard, surprisingly, the MaxCut SDP displays the worst possible behavior for a convex optimization problem. In order to make the statement "almost always fails" rigorous, we shall make use of Hausdorff measures.
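As a quick sanity check on (4) and the combinatorial formulation, the following numpy sketch (the triangle $K_3$ with unit weights is our own toy choice) builds the weighted Laplacian, confirms that $L_G(w) \succeq 0$ for $w \geq 0$, and enumerates all cuts; for a Laplacian objective, $x^T C x = \sum_{\{i,j\} \in E} w_{\{i,j\}}(x_i - x_j)^2$ is four times the weight of the cut induced by $x$:

```python
import numpy as np
from itertools import product

def laplacian(n, edges, w):
    """Weighted Laplacian L_G(w) = sum over {i,j} in E of w_ij (e_i - e_j)(e_i - e_j)^T."""
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        L += wij * np.outer(e, e)
    return L

# Toy instance: the triangle K_3 with unit weights.
n, edges = 3, [(0, 1), (0, 2), (1, 2)]
C = laplacian(n, edges, [1.0, 1.0, 1.0])

print(np.linalg.eigvalsh(C).min() >= -1e-12)   # L_G(w) is PSD when w >= 0

# Enumerate all x in {+-1}^n; x^T C x = 4 * (weight of edges cut by x).
best = max(x @ C @ x for x in (np.array(s) for s in product([-1, 1], repeat=n)))
print(best)   # 8.0: a max cut of K_3 has 2 edges, and 4 * 2 = 8
```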
However, our treatment is self-contained and it does not require in-depth knowledge of the theory of Hausdorff measures.

We also focus on two classes of objective functions for (5). We prove that, when $C$ is sampled uniformly from (a normalization of) the negative semidefinite rank-one matrices in the normal cone at a vertex of the elliptope, the probability that strict complementarity fails for (5) is in $(0,1)$. Naturally, we shall also use Hausdorff measures to achieve this. Finally, we also extend a construction due to Laurent and Poljak [14], who proved that strict complementarity may fail for (5) when $C$ is a weighted Laplacian matrix. Their construction works for complete graphs, and we extend it to graphs which are cosums where one of the summands is connected and with some mild condition relating the maximum eigenvalues of their Laplacians.

The order in which our results are presented is different from what we described above. Since the weighted Laplacian construction generalized from Laurent and Poljak involves only matrix analysis and spectral graph theory, and no measure theory, we start with that result. Only then shall we delve into measure theory tools to prove the other results. Hence, the rest of this paper is organized as follows. Section 2 contains some preliminaries, such as notation and background results about the MaxCut SDP (5). In Section 3 we discuss failure of strict complementarity for (5) using previous results by Laurent and Poljak and we extend their Laplacian construction to cosums of graphs. In Section 4, we develop some Hausdorff measure basics and use them to prove that strict complementarity fails generically ("almost everywhere") for the MaxCut SDP (5) when the objective function is in the boundary of the normal cone of a vertex of the elliptope. Finally, in Section 5, we zoom into the set of negative semidefinite rank-one matrices in the latter boundary, and prove that in this case the probability that strict complementarity holds is in $(0,1)$.
2. Preliminaries
We refer the reader to Tables 1 to 5 for our mostly standard notation and terminology. In order to treat $\mathbb{R}^n$ and $\mathbb{S}^n$ uniformly, we adopt the language of Euclidean spaces, i.e., finite-dimensional real vector spaces equipped with an inner product. We denote arbitrary Euclidean spaces by $\mathbb{E}$ and $\mathbb{Y}$. We adopt Minkowski's notation; for instance, $\mathcal{C} + \Lambda \mathcal{D} := \{\, x + \lambda y : x \in \mathcal{C},\ \lambda \in \Lambda,\ y \in \mathcal{D} \,\}$ for $\mathcal{C}, \mathcal{D} \subseteq \mathbb{E}$ and $\Lambda \subseteq \mathbb{R}$. Also, whenever possible we shorten singletons to their single elements, e.g., we write $\mathbb{R}_+(1 \oplus \mathcal{C})$ to denote the conic homogenization of the set $\mathcal{C}$ in one higher dimensional space.

Table 1.
Notation for special sets.

$[n]$ := $\{1, \ldots, n\}$ for each $n \in \mathbb{N}$
$\mathcal{P}(X)$ := the power set of $X$
$\mathbb{R}_+$ := $\{x \in \mathbb{R} : x \geq 0\}$, the set of nonnegative reals
$\mathbb{R}_{++}$ := $\{x \in \mathbb{R} : x > 0\}$, the set of positive reals
$\mathbb{R}^{n \times n}$ := the space of $n \times n$ real-valued matrices
$\mathbb{S}^n$ := $\{X \in \mathbb{R}^{n \times n} : X = X^T\}$, the space of symmetric $n \times n$ matrices
$\mathbb{S}^n_+$ := $\{X \in \mathbb{S}^n : h^T X h \geq 0\ \forall h \in \mathbb{R}^n\}$, the cone of positive semidefinite matrices
$\mathbb{S}^n_{++}$ := $\{X \in \mathbb{S}^n : h^T X h > 0\ \forall h \in \mathbb{R}^n \setminus \{0\}\}$, the cone of positive definite matrices
$\mathcal{E}_n$ := the elliptope; see (10)

2.1. Uniqueness of Dual Optimal Solutions.
Delorme and Poljak [4] proved that the dual SDP in (5) has a unique optimal solution. We shall state a slightly generalized version of their result with some changes and include a proof for the sake of completeness.
Table 2.
Notation for linear algebra.

$\mathcal{A}^*$ := the adjoint of a linear map $\mathcal{A}$ between Euclidean spaces
$\operatorname{Tr}(X)$ := $\sum_{i=1}^n X_{ii}$, the trace of $X \in \mathbb{R}^{n \times n}$
$I$ := the identity matrix in the appropriate space
$\mathbb{1}$ := the vector of all-ones in the appropriate space
$\{e_1, \ldots, e_n\}$ := the set of canonical basis vectors of $\mathbb{R}^n$
$\operatorname{Im}(A)$ := the range of $A \in \mathbb{R}^{n \times n}$
$\operatorname{Null}(A)$ := the nullspace of $A \in \mathbb{R}^{n \times n}$
$\operatorname{supp}(x)$ := $\{i \in [n] : x_i \neq 0\}$, the support of $x \in \mathbb{R}^n$
$\operatorname{diag}(X)$ := $\sum_{i=1}^n X_{ii} e_i$ for each $X \in \mathbb{R}^{n \times n}$, so $\operatorname{diag} : \mathbb{R}^{n \times n} \to \mathbb{R}^n$ extracts the diagonal
$\operatorname{Diag}(x)$ := $\sum_{i=1}^n x_i e_i e_i^T \in \mathbb{R}^{n \times n}$ for each $x \in \mathbb{R}^n$, so $\operatorname{Diag}$ is the adjoint of $\operatorname{diag}$
$\mathcal{C}^\perp$ := $\{x \in \mathbb{E} : \langle x, s \rangle = 0\ \forall s \in \mathcal{C}\}$ for each subset $\mathcal{C}$ of an Euclidean space $\mathbb{E}$
$\oplus$ := the direct sum of two vectors or two sets of vectors
$x \perp y$ := denotes that $x, y \in \mathbb{E}$ are orthogonal, i.e., $\langle x, y \rangle = 0$
$\succeq$ := the Löwner partial order on $\mathbb{S}^n$, i.e., $A \succeq B \iff A - B \in \mathbb{S}^n_+$ for $A, B \in \mathbb{S}^n$
$\succ$ := the partial order on $\mathbb{S}^n$ defined as $A \succ B \iff A - B \in \mathbb{S}^n_{++}$ for $A, B \in \mathbb{S}^n$
$\lambda_{\max}(A)$ := the largest eigenvalue of $A \in \mathbb{S}^n$
$A^\dagger$ := the Moore-Penrose pseudoinverse of $A \in \mathbb{R}^{m \times n}$; see [9]
$\operatorname{vec}$ := the map that sends a matrix in $\mathbb{R}^{n \times n}$ to a vector indexed by $[n] \times [n]$

Table 3.
Notation for convex analysis on an Euclidean space $\mathbb{E}$.

$\operatorname{cl}(C)$ := the closure of $C \subseteq \mathbb{E}$
$\operatorname{int}(C)$ := the interior of $C \subseteq \mathbb{E}$
$\operatorname{ri}(C)$ := the relative interior of a convex set $C \subseteq \mathbb{E}$
$\operatorname{bd}(C)$ := $\operatorname{cl}(C) \setminus \operatorname{int}(C)$, the boundary of $C \subseteq \mathbb{E}$
$\operatorname{rbd}(C)$ := $\operatorname{cl}(C) \setminus \operatorname{ri}(C)$, the relative boundary of a convex set $C \subseteq \mathbb{E}$
$F \trianglelefteq C$ := denotes that $F$ is a face of a convex set $C \subseteq \mathbb{E}$; see Section 4.2
$F \vartriangleleft C$ := denotes that $F$ is a proper face of a convex set $C \subseteq \mathbb{E}$; see Section 4.2
$\operatorname{Faces}(C)$ := the set of faces of a convex set $C \subseteq \mathbb{E}$; see Section 4.2
$\operatorname{Normal}(C; x)$ := the normal cone of a convex set $C \subseteq \mathbb{E}$ at $x \in C$; see (9)
$B$ := the unit ball in the appropriate Euclidean space
$B_\infty$ := the unit ball for the $\infty$-norm in the appropriate $\mathbb{R}^n$

Table 4.
Notation for the theory of Hausdorff measures in a normed space $\mathbb{V}$.

$\mathcal{H}_d(X)$ := the $d$-dimensional Hausdorff outer measure of $X \subseteq \mathbb{V}$; see (22)
$\lambda_d(X)$ := the $d$-dimensional Lebesgue outer measure of $X \subseteq \mathbb{R}^d$
$\dim_H(X)$ := the Hausdorff dimension of $X \subseteq \mathbb{V}$; see (26)

Table 5. Notation for a graph $G = (V, E)$.

$V(G)$ := the vertex set of $G$
$E(G)$ := the edge set of $G$
$L_G(w)$ := the weighted Laplacian matrix of $G$ with weights $w \in \mathbb{R}^E$; see (4)
$G + H$ := the cosum of graphs $G$ and $H$; see (14)

Proposition 1 ([4, Theorem 2]). Consider the primal-dual pair of SDPs
$$\max\{\, \operatorname{Tr}(CX) : \mathcal{A}(X) = b,\ X \succeq 0 \,\} \quad \text{and} \tag{6}$$
$$\min\{\, b^T y : S = \mathcal{A}^*(y) - C,\ S \succeq 0 \,\}, \tag{7}$$
where $\mathcal{A} : \mathbb{S}^n \to \mathbb{R}^m$ is a surjective linear map, $C \in \mathbb{S}^n$, and $b \in \mathbb{R}^m$. Assume there exist $\mathring{X} \in \mathbb{S}^n_{++}$ and $\mathring{y} \in \mathbb{R}^m$ such that $\mathcal{A}(\mathring{X}) = b$ and $\mathcal{A}^*(\mathring{y}) \in \mathbb{S}^n_{++}$. Suppose that, for every nonzero $y \in \mathbb{R}^m$, there exists $z \in \mathbb{R}^m$ such that $b^T z \neq 0$ and $\operatorname{Null}(\mathcal{A}^*(y)) \subseteq \operatorname{Null}(\mathcal{A}^*(z))$. Then (7) has a unique optimal solution.

Proof.
Since $\mathring{X}$ is a Slater point for (6), there exists an optimal solution for (7). Suppose for the sake of contradiction that $y_1 \oplus S_1$ and $y_2 \oplus S_2$ are distinct optimal solutions for (7). Set $\bar{y} := \frac{1}{2}(y_1 + y_2)$ and $\bar{S} := \mathcal{A}^*(\bar{y}) - C = \frac{1}{2}(S_1 + S_2) \succeq 0$. We have $\bar{S} \neq 0$ since $\mathcal{A}$ is surjective. Then $\bar{y} \oplus \bar{S}$ is also optimal in (7). Let $\bar{z} \in \mathbb{R}^m$ be such that $b^T \bar{z} \neq 0$ and $\operatorname{Null}(\mathcal{A}^*(y_1 - y_2)) \subseteq \operatorname{Null}(\mathcal{A}^*(\bar{z}))$, which exists by assumption. Then
$$\operatorname{Null}(\bar{S}) \subseteq \operatorname{Null}(\mathcal{A}^*(\bar{z})); \tag{8}$$
indeed, if $h$ lies in $\operatorname{Null}(\bar{S}) = \operatorname{Null}(S_1) \cap \operatorname{Null}(S_2)$, then we get $\mathcal{A}^*(y_1) h = C h = \mathcal{A}^*(y_2) h$, whence $h \in \operatorname{Null}(\mathcal{A}^*(y_1 - y_2)) \subseteq \operatorname{Null}(\mathcal{A}^*(\bar{z}))$. Define
$$\beta := -\frac{b^T \mathring{y}}{b^T \bar{z}}, \qquad d := \mathring{y} + \beta \bar{z},$$
and note that $b^T d = 0$. Let $\mu > 0$ be the smallest positive eigenvalue of $\bar{S} \in \mathbb{S}^n_+ \setminus \{0\}$. Let $\|\cdot\|$ denote the operator $2$-norm. If $\beta \|\mathcal{A}^*(\bar{z})\| = 0$, set $\varepsilon := 1$; otherwise set $\varepsilon := \mu / (|\beta| \|\mathcal{A}^*(\bar{z})\|) > 0$. Also, set $\tilde{y} := \bar{y} + \varepsilon d$ and $\tilde{S} := \mathcal{A}^*(\tilde{y}) - C$. Let $h \in \mathbb{R}^n$. Write $h = h_0 + h_1$ with $h_0 \in \operatorname{Null}(\bar{S})$ and $h_1 \in [\operatorname{Null}(\bar{S})]^\perp$. By (8) we have
$$h^T \tilde{S} h = h^T \bar{S} h + \varepsilon h^T \mathcal{A}^*(d) h \geq \mu \|h_1\|^2 + \varepsilon h^T \mathcal{A}^*(\mathring{y}) h + \varepsilon \beta\, h_1^T \mathcal{A}^*(\bar{z}) h_1 \geq \mu \|h_1\|^2 + \varepsilon h^T \mathcal{A}^*(\mathring{y}) h - \varepsilon |\beta| \|\mathcal{A}^*(\bar{z})\| \|h_1\|^2 \geq \varepsilon h^T \mathcal{A}^*(\mathring{y}) h.$$
Thus, $\tilde{S} \succeq \varepsilon \mathcal{A}^*(\mathring{y}) \succ 0$, so there exists a feasible solution for (7) with objective value strictly smaller than $b^T \tilde{y} = b^T \bar{y}$, a contradiction. $\square$

Corollary 2 ([4, Theorem 2]). The dual SDP in (5) has a unique optimal solution.
Proof.
We shall apply Proposition 1 to (5). Let us see that the map $\mathcal{A} := \operatorname{diag}$ satisfies the required properties. Take $\mathring{X} := I$ and $\mathring{y} := \mathbb{1}$. Let $y \in \mathbb{R}^n$ be nonzero. Define $z \in \mathbb{R}^n$ as $z_i := |y_i|$ for every $i \in [n]$, and note that $\operatorname{Null}(\operatorname{Diag}(y)) = \operatorname{Null}(\operatorname{Diag}(z))$ and that $\mathbb{1}^T z > 0$ since $y \neq 0$. $\square$

2.2. Vertices of the Elliptope.
Let $\mathcal{C}$ be a convex set in an Euclidean space $\mathbb{E}$. The normal cone of $\mathcal{C}$ at $\bar{x} \in \mathcal{C}$ is
$$\operatorname{Normal}(\mathcal{C}; \bar{x}) := \{\, a \in \mathbb{E} : \langle a, x \rangle \leq \langle a, \bar{x} \rangle\ \forall x \in \mathcal{C} \,\}, \tag{9}$$
i.e., it is the set of all normals to supporting halfspaces of $\mathcal{C}$ at $\bar{x}$. Note that we are identifying the dual space $\mathbb{E}^*$ of $\mathbb{E}$ with $\mathbb{E}$. We say that $\bar{x} \in \mathcal{C}$ is a vertex of $\mathcal{C}$ if $\operatorname{Normal}(\mathcal{C}; \bar{x})$ is full-dimensional. The set of vertices of the elliptope
$$\mathcal{E}_n := \{\, X \in \mathbb{S}^n_+ : \operatorname{diag}(X) = \mathbb{1} \,\} \tag{10}$$
was determined by Laurent and Poljak [13]:

Theorem 3 ([13, Theorem 2.5]). The set of vertices of $\mathcal{E}_n$ is $\{\, xx^T : x \in \{\pm 1\}^n \,\}$.

An automorphism of $\mathcal{E}_n$ is a nonsingular linear operator $T$ on $\mathbb{S}^n$ that preserves $\mathcal{E}_n$, i.e., $T(\mathcal{E}_n) = \mathcal{E}_n$. For $s \in \{\pm 1\}^n$, the map $X \in \mathbb{S}^n \mapsto \operatorname{Diag}(s) X \operatorname{Diag}(s)$ is easily checked to be an automorphism of $\mathcal{E}_n$. If $x, y \in \{\pm 1\}^n$, then $y = \operatorname{Diag}(s) x$ for $s \in \{\pm 1\}^n$ defined by $s_i := x_i y_i$ for each $i \in [n]$. Hence, any vertex of $\mathcal{E}_n$ can be mapped into the vertex $\mathbb{1}\mathbb{1}^T$ by an automorphism of $\mathcal{E}_n$; i.e., the automorphism group of $\mathcal{E}_n$ acts transitively on the vertices of $\mathcal{E}_n$. This allows us to prove many linear properties about the vertices of $\mathcal{E}_n$ by just proving them for the vertex $\mathbb{1}\mathbb{1}^T$. We shall make extensive use of this fact without further mention.
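The transitivity just described is easy to check numerically. In the following sketch (plain numpy, with a randomly drawn sign vector of our own choosing), the map $X \mapsto \operatorname{Diag}(s) X \operatorname{Diag}(s)$ with $s := x$ carries the vertex $xx^T$ to $\mathbb{1}\mathbb{1}^T$, and it preserves membership in $\mathcal{E}_n$ since it fixes diagonals and positive semidefiniteness:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
x = rng.choice([-1.0, 1.0], size=n)        # generator of an arbitrary vertex
X = np.outer(x, x)                          # the vertex xx^T of E_n

# The automorphism T(X) = Diag(s) X Diag(s) with s := x maps xx^T to 11^T,
# because (s o x)_i = x_i^2 = 1 for every i.
D = np.diag(x)
TX = D @ X @ D
print(np.allclose(TX, np.ones((n, n))))     # True

# T preserves E_n: here Y is a random correlation matrix (a point of E_n).
A = rng.standard_normal((n, n))
A = A @ A.T
d = np.sqrt(np.diag(A))
Y = A / np.outer(d, d)
TY = D @ Y @ D
print(np.allclose(np.diag(TY), 1))          # diagonal is preserved
print(np.linalg.eigvalsh(TY).min() >= -1e-10)  # PSD-ness is preserved
```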
Laurent and Poljak [13] also provided formulas for the normal cones of the elliptope. Here we shall use slightly different formulas from [3, Proposition 2.1]:
$$\operatorname{Normal}(\mathcal{E}_n; X) = \operatorname{Im}(\operatorname{Diag}) - (\mathbb{S}^n_+ \cap \{X\}^\perp) = \operatorname{Im}(\operatorname{Diag}) - \{\, Y \in \mathbb{S}^n_+ : \operatorname{Im}(Y) \subseteq \operatorname{Null}(X) \,\} \qquad \forall X \in \mathcal{E}_n. \tag{11}$$
When $\bar{X}$ is a vertex of $\mathcal{E}_n$, every element of $\operatorname{Normal}(\mathcal{E}_n; \bar{X})$ can be described in a unique way as an element of the Minkowski sum at the RHS of (11):

Lemma 4.
Let $\bar{X}$ be a vertex of $\mathcal{E}_n$. Let $\bar{y}, \tilde{y} \in \mathbb{R}^n$ and $\bar{S}, \tilde{S} \in \mathbb{S}^n_+ \cap \{\bar{X}\}^\perp$ be such that $\operatorname{Diag}(\bar{y}) + \bar{S} = \operatorname{Diag}(\tilde{y}) + \tilde{S}$. Then $\bar{y} = \tilde{y}$ and $\bar{S} = \tilde{S}$.

Proof.
We may assume that $\bar{X} = \mathbb{1}\mathbb{1}^T$. Then $\bar{S} \in \mathbb{S}^n_+ \cap \{\mathbb{1}\mathbb{1}^T\}^\perp$ implies that $\bar{S}\mathbb{1} = 0$, since $\mathbb{1}^T \bar{S} \mathbb{1} = 0$ and $\bar{S} \succeq 0$. Analogously, $\tilde{S}\mathbb{1} = 0$. Thus $\bar{y} = \operatorname{Diag}(\bar{y})\mathbb{1} = (\operatorname{Diag}(\bar{y}) + \bar{S})\mathbb{1} = (\operatorname{Diag}(\tilde{y}) + \tilde{S})\mathbb{1} = \operatorname{Diag}(\tilde{y})\mathbb{1} = \tilde{y}$, so $\bar{S} = \tilde{S}$. $\square$

3. Failure of Strict Complementarity with Laplacian Objectives
Existence of strictly complementary optimal solutions is known to be equivalent to membership of the objective vector in the relative interior of some normal cone:
Proposition 5 ([3, Proposition 4.2]). If the feasible region $\mathcal{C}$ of the SDP (3) contains a positive definite matrix, then strict complementarity holds for (3) and its dual if and only if $C \in \operatorname{ri}(\operatorname{Normal}(\mathcal{C}; X))$ for some $X \in \mathcal{C}$.

Hence, strict complementarity is locally generic when the objective function is chosen in the normal cone of a given feasible solution; see [3, Corollary 4.3]. By (11) and standard convex analysis,
$$\operatorname{ri}(\operatorname{Normal}(\mathcal{E}_n; X)) = \operatorname{Im}(\operatorname{Diag}) - \operatorname{ri}(\mathbb{S}^n_+ \cap \{X\}^\perp) = \operatorname{Im}(\operatorname{Diag}) - \{\, Y \in \mathbb{S}^n_+ : \operatorname{Im}(Y) = \operatorname{Null}(X) \,\} \qquad \forall X \in \mathcal{E}_n. \tag{12}$$
When $\bar{X}$ is a vertex of $\mathcal{E}_n$, we may combine (11) with (12) and Lemma 4 to conclude that
$$\operatorname{bd}(\operatorname{Normal}(\mathcal{E}_n; \bar{X})) = \operatorname{Im}(\operatorname{Diag}) - \operatorname{rbd}(\mathbb{S}^n_+ \cap \{\bar{X}\}^\perp) = \operatorname{Im}(\operatorname{Diag}) - \{\, Y \in \mathbb{S}^n_+ : \operatorname{Im}(Y) \subsetneq \operatorname{Null}(\bar{X}) \,\}. \tag{13}$$
In [3], we noted that strict complementarity holds for (5) for every $C$ of the form $C = L_G(w)$ with $w \geq 0$ provided that the polar $\mathcal{E}_n^\circ := \{\, Y \in \mathbb{S}^n : \operatorname{Tr}(YX) \leq 1\ \forall X \in \mathcal{E}_n \,\}$ of the elliptope is facially exposed, and we (implicitly) asked whether the latter holds. It turns out, Laurent and Poljak [14, Example 5.10] showed, even before we raised the question, in a different context and using a slightly different terminology, that strict complementarity may fail for (5) for every $n \geq 3$, hence answering the question in the negative. For each complete graph $G = K_n$ with $n \geq 3$, they provided a weight function $w \geq 0$ for which strict complementarity fails for (5) with $C = L_G(w)$.

We generalize their construction showing that strict complementarity may fail with a weighted Laplacian objective for graphs which are cosums, with mild conditions on the (co-)summands. Recall that, if $G = (V, E)$ and $H = (U, F)$ are graphs such that $V \cap U = \emptyset$, the cosum of $G$ and $H$ is the graph
$$G + H := (V \cup U,\ E \cup F \cup \{\, \{v, u\} : (v, u) \in V \times U \,\}). \tag{14}$$
We shall use a characterization of positive semidefinite matrices partitioned in blocks using Schur complements and the Moore-Penrose pseudoinverse:

Lemma 6 (see [6, Theorem 4.3]).
For $A \in \mathbb{S}^m$, $C \in \mathbb{S}^n$, and $B \in \mathbb{R}^{m \times n}$, we have
$$\begin{bmatrix} A & B \\ B^T & C \end{bmatrix} \succeq 0 \iff A \succeq 0, \quad (I - AA^\dagger)B = 0, \quad \text{and} \quad C \succeq B^T A^\dagger B. \tag{15}$$
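Lemma 6 is easy to probe numerically. The sketch below (a random Gram matrix of our own choosing, so the block matrix is PSD by construction) partitions it into blocks and verifies the three conditions of (15) using numpy's pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 2

# A PSD block matrix [[A, B], [B^T, C]] built as a Gram matrix G G^T.
G = rng.standard_normal((m + n, m + n))
M = G @ G.T
A, B, Cm = M[:m, :m], M[:m, m:], M[m:, m:]

Adag = np.linalg.pinv(A)
cond1 = np.linalg.eigvalsh(A).min() >= -1e-10                 # A PSD
cond2 = np.allclose((np.eye(m) - A @ Adag) @ B, 0)            # Im(B) inside Im(A)
cond3 = np.linalg.eigvalsh(Cm - B.T @ Adag @ B).min() >= -1e-8  # Schur complement PSD
print(cond1 and cond2 and cond3)   # True: all three conditions of (15) hold
```

Conversely, perturbing `Cm` so that the Schur complement acquires a negative eigenvalue destroys positive semidefiniteness of the block matrix, matching the "only if" direction.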
Theorem 7.
Let $G$ and $H$ be graphs with $n_G \geq 2$ and $n_H \geq 1$ vertices, respectively. Let $w_G : E(G) \to \mathbb{R}_{++}$ and $w_H : E(H) \to \mathbb{R}_{++}$ be weight functions, and denote the respective weighted Laplacians by $L_G := L_G(w_G)$ and $L_H := L_H(w_H)$. Set $\mu_G := \lambda_{\max}(L_G)$ and $\mu_H := \lambda_{\max}(L_H)$. Suppose that $n_G \mu_G > n_H \mu_H$ and that $H$ is connected. Define $\bar{w} : E(G + H) \to \mathbb{R}_{++}$ as $\bar{w} := w_G \oplus w_H \oplus \alpha\mathbb{1}$ where $\alpha := \mu_G / n_H$. For enhanced clarity denote the vectors of all-ones in $\mathbb{R}^{V(G)}$ and $\mathbb{R}^{V(H)}$ by $\mathbb{1}_G$ and $\mathbb{1}_H$, respectively. Then the unique pair of primal-dual optimal solutions for (5) with $C := L_{G+H}(\bar{w})$ is $(X^*, y^* \oplus S^*)$ where
$$X^* := \begin{bmatrix} \mathbb{1}_G \\ -\mathbb{1}_H \end{bmatrix}\begin{bmatrix} \mathbb{1}_G \\ -\mathbb{1}_H \end{bmatrix}^T, \qquad y^* := 2\alpha \begin{bmatrix} n_H \mathbb{1}_G \\ n_G \mathbb{1}_H \end{bmatrix}, \qquad S^* := \begin{bmatrix} \mu_G I - L_G & \alpha \mathbb{1}_G \mathbb{1}_H^T \\ \alpha \mathbb{1}_H \mathbb{1}_G^T & \alpha n_G I - L_H \end{bmatrix}. \tag{16}$$
In particular, since $(X^* + S^*)(h \oplus 0) = 0$ for any $\mu_G$-eigenvector $h$ of $L_G$, there is no strictly complementary pair of primal-dual optimal solutions for (5).

Proof.
It is easy to check that $X^*$ is feasible in the primal. We have
$$S^* = 2\begin{bmatrix} \mu_G I & 0 \\ 0 & \alpha n_G I \end{bmatrix} - \begin{bmatrix} L_G + \mu_G I & -\alpha \mathbb{1}_G \mathbb{1}_H^T \\ -\alpha \mathbb{1}_H \mathbb{1}_G^T & L_H + \alpha n_G I \end{bmatrix} = \operatorname{Diag}(y^*) - L_{G+H}(\bar{w}),$$
and, by Lemma 6, the condition $S^* \succeq 0$ is equivalent to the conditions
$$A := \mu_G I - L_G \succeq 0, \tag{17a}$$
$$(I - AA^\dagger)\mathbb{1}_G = 0, \tag{17b}$$
$$\alpha n_G I \succeq L_H + \alpha^2 (\mathbb{1}_G^T A^\dagger \mathbb{1}_G)\, \mathbb{1}_H \mathbb{1}_H^T. \tag{17c}$$
Note that (17a) holds trivially. Also $A\mathbb{1}_G = \mu_G \mathbb{1}_G$, so $\mathbb{1}_G \in \operatorname{Im}(A)$ and (17b) holds since $I - AA^\dagger$ is the orthogonal projector onto $\operatorname{Null}(A) = \operatorname{Im}(A)^\perp$. Finally, $A^\dagger \mathbb{1}_G = \mu_G^{-1} \mathbb{1}_G$, so (17c) is equivalent to
$$\alpha n_G I \succeq L_H + \frac{\alpha n_G}{n_H}\, \mathbb{1}_H \mathbb{1}_H^T,$$
which holds since $\alpha n_G > \mu_H$ by assumption. It follows that $y^* \oplus S^*$ is feasible in the dual. It is easy to check that $\operatorname{Tr}(X^* S^*) = 0$, so $X^*$ and $y^* \oplus S^*$ are optimal solutions. By Corollary 2, $y^* \oplus S^*$ is the unique optimal solution for the dual.

It remains to show that $X^*$ is the unique optimal solution for the primal. Let
$$X = \begin{bmatrix} X_G & B \\ B^T & X_H \end{bmatrix}$$
be an optimal solution for the primal. Complementary slackness yields $XS^* = 0$; written out,
$$0 = XS^* = \begin{bmatrix} X_G(\mu_G I - L_G) + \alpha B \mathbb{1}_H \mathbb{1}_G^T & \alpha X_G \mathbb{1}_G \mathbb{1}_H^T + B(\alpha n_G I - L_H) \\ B^T(\mu_G I - L_G) + \alpha X_H \mathbb{1}_H \mathbb{1}_G^T & \alpha B^T \mathbb{1}_G \mathbb{1}_H^T + X_H(\alpha n_G I - L_H) \end{bmatrix}. \tag{18}$$
If $h \perp \mathbb{1}_H$ is an eigenvector of $L_H$, multiplying the bottom right block in (18) by $h$ yields $X_H h = 0$, where we used the assumption that $\alpha n_G > \mu_H$. Since $H$ is connected, this implies that $X_H$ is a nonnegative scalar multiple of $\mathbb{1}_H \mathbb{1}_H^T$, and so $X_H = \mathbb{1}_H \mathbb{1}_H^T$. Next apply $\mathbb{1}_G^T(\cdot)\mathbb{1}_H$ and $\mathbb{1}_H^T(\cdot)\mathbb{1}_G$ to the top right block and bottom left block of (18), respectively, to get
$$n_H \mathbb{1}_G^T X_G \mathbb{1}_G + n_G \mathbb{1}_G^T B \mathbb{1}_H = 0, \tag{19}$$
$$n_H \mathbb{1}_H^T B^T \mathbb{1}_G + n_G \mathbb{1}_H^T X_H \mathbb{1}_H = 0. \tag{20}$$
Hence, $\frac{\mathbb{1}_G^T X_G \mathbb{1}_G}{n_G^2} = \frac{\mathbb{1}_H^T X_H \mathbb{1}_H}{n_H^2}$, so $\mathbb{1}_G^T X_G \mathbb{1}_G = n_G^2$, which forces $X_G = \mathbb{1}_G \mathbb{1}_G^T$ since the entries of $X \in \mathcal{E}_n$ lie in $[-1, 1]$. Finally, by (19) we get $\mathbb{1}_G^T B \mathbb{1}_H = -n_G n_H$, and so $B = -\mathbb{1}_G \mathbb{1}_H^T$. Hence, $X = X^*$. $\square$
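The construction of Theorem 7 can be verified numerically on a small instance. In the sketch below (our own toy choice: $G = K_4$ with unit weights and $H = K_2$, so that $n_G\mu_G = 16 > 4 = n_H\mu_H$), we build $C = L_{G+H}(\bar w)$, check that $(X^*, y^* \oplus S^*)$ from (16) is a complementary pair of feasible solutions, and observe that $\operatorname{rank}(X^*) + \operatorname{rank}(S^*) = 1 + n_H < n$, so strict complementarity fails:

```python
import numpy as np

def laplacian(n, edges, w=None):
    """Weighted Laplacian of a graph on vertices 0..n-1 (unit weights if w is None)."""
    L = np.zeros((n, n))
    for k, (i, j) in enumerate(edges):
        wij = 1.0 if w is None else w[k]
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        L += wij * np.outer(e, e)
    return L

# Toy instance: G = K_4 with unit weights, H = K_2 with unit weight.
nG, nH = 4, 2
LG = laplacian(nG, [(i, j) for i in range(nG) for j in range(i + 1, nG)])
LH = laplacian(nH, [(0, 1)])
muG, muH = np.linalg.eigvalsh(LG).max(), np.linalg.eigvalsh(LH).max()
alpha = muG / nH                           # cross-edge weight from Theorem 7

# Laplacian of the cosum G + H with all cross edges weighted alpha.
n = nG + nH
C = np.zeros((n, n))
C[:nG, :nG], C[nG:, nG:] = LG, LH
C += alpha * laplacian(n, [(i, nG + j) for i in range(nG) for j in range(nH)])

v = np.concatenate([np.ones(nG), -np.ones(nH)])
Xstar = np.outer(v, v)
ystar = 2 * alpha * np.concatenate([nH * np.ones(nG), nG * np.ones(nH)])
Sstar = np.diag(ystar) - C

print(np.allclose(np.diag(Xstar), 1))                 # X* is primal feasible
print(np.linalg.eigvalsh(Sstar).min() >= -1e-9)       # S* is PSD: dual feasible
print(abs(np.trace(Xstar @ Sstar)) < 1e-9)            # the pair is complementary
# Strict complementarity fails: rank X* + rank S* = 1 + n_H < n.
r = np.linalg.matrix_rank(Xstar) + np.linalg.matrix_rank(Sstar, tol=1e-8)
print(r, n)   # 3 6
```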
In particular, when G is the complete graph and w G = , we have rank( X ∗ ) + rank( S ∗ ) = 1 + n H .Theorem 7 shows that, if F is a graph which is a cosum (i.e., the complement of F is not connected) F = G + H , where G has at least one edge and H is connected, then there is a nonnegative weightfunction w : E ( F ) → R ++ such that strict complementarity fails for (5) with C = L F ( w ) ; one may just fix MARCEL K. DE CARLI SILVA AND LEVENT TUNÇEL w H ∈ R E ( H )++ arbitrarily, e.g., w H = , and set w G := M for large enough M so that n G µ G > n H µ H . Anatural question following from this is: Problem 8.
Characterize the set of graphs for which there exists a positive weight function on the edges such that strict complementarity fails for (5) when $C$ is the corresponding weighted Laplacian matrix.

4. Generic Failure of Strict Complementarity on the Boundaries of Normal Cones
In this section, we consider how often strict complementarity holds for (5) when $C$ lies in the (relative) boundary of $\operatorname{Normal}(\mathcal{E}_n; \bar{X})$ for some vertex $\bar{X}$ of $\mathcal{E}_n$. Note that this boundary is described as a Minkowski sum in (13).

We start by considering the case $n = 3$, where (13) simplifies to
$$\operatorname{bd}(\operatorname{Normal}(\mathcal{E}_3; \bar{x}\bar{x}^T)) = \operatorname{Im}(\operatorname{Diag}) - \{\, zz^T : z \in \{\bar{x}\}^\perp \,\} \tag{21}$$
for every $\bar{x} \in \{\pm 1\}^3$.

Proposition 9.
Let $\bar{x} \in \{\pm 1\}^3$, and let $C = \operatorname{Diag}(\bar{y}) - \bar{z}\bar{z}^T$ for some $\bar{y} \in \mathbb{R}^3$ and $\bar{z} \in \{\bar{x}\}^\perp$, so that $C \in \operatorname{bd}(\operatorname{Normal}(\mathcal{E}_3; \bar{x}\bar{x}^T))$. Then strict complementarity holds for (5) if and only if $\bar{z}_i = 0$ for some $i \in [3]$.

Proof.
Set $\bar{S} := \operatorname{Diag}(\bar{y}) - C = \bar{z}\bar{z}^T$ and $\bar{X} := \bar{x}\bar{x}^T$. Clearly, $\bar{y} \oplus \bar{S}$ is feasible in the dual and $\operatorname{Tr}(\bar{S}\bar{X}) = (\bar{z}^T\bar{x})^2 = 0$, so $(\bar{X}, \bar{y} \oplus \bar{S})$ is a pair of primal-dual optimal solutions. By Corollary 2, $\bar{y} \oplus \bar{S}$ is the unique optimal solution in the dual.

Suppose that $\bar{z}_i \neq 0$ for every $i \in [3]$. We claim that $\bar{X}$ is the unique optimal solution in the primal. Indeed, let $X \in \mathcal{E}_3$ be optimal in the primal. Then $0 = \operatorname{Tr}(\bar{S}X) = \bar{z}^T X \bar{z}$, so $X\bar{z} = 0$. Thus
$$\begin{bmatrix} 1 & X_{12} & X_{13} \\ X_{12} & 1 & X_{23} \\ X_{13} & X_{23} & 1 \end{bmatrix}\begin{bmatrix} \bar{z}_1 \\ \bar{z}_2 \\ \bar{z}_3 \end{bmatrix} = 0, \qquad \text{i.e.,} \qquad \begin{bmatrix} \bar{z}_2 & \bar{z}_3 & 0 \\ \bar{z}_1 & 0 & \bar{z}_3 \\ 0 & \bar{z}_1 & \bar{z}_2 \end{bmatrix}\begin{bmatrix} X_{12} \\ X_{13} \\ X_{23} \end{bmatrix} = -\bar{z}.$$
The determinant of the matrix defining the latter linear system is $-2\bar{z}_1\bar{z}_2\bar{z}_3 \neq 0$, so the unique solution is given by the off-diagonal entries of $\bar{X}$.

Suppose now that $\bar{z}_i = 0$ for some $i \in [3]$. If $\bar{z} = 0$ then $(I, \bar{y} \oplus 0)$ satisfies strict complementarity, so assume $\bar{z} \neq 0$. Set $\tilde{x} := \operatorname{Diag}(\mathbb{1} - e_i)\bar{x}$ and $\tilde{X} := \tilde{x}\tilde{x}^T + e_i e_i^T \in \mathcal{E}_3$. Then $\operatorname{Tr}(\bar{S}\tilde{X}) = \bar{z}^T(\tilde{x}\tilde{x}^T + e_i e_i^T)\bar{z} = (\bar{z}^T\tilde{x})^2 + \bar{z}_i^2 = 0$ since $\bar{z}^T\tilde{x} = \bar{z}^T\bar{x} = 0$. Hence, $(\tilde{X}, \bar{y} \oplus \bar{S})$ is a strictly complementary pair of primal-dual optimal solutions for (5). $\square$

For $n \geq 4$, a characterization of strict complementarity in (5) is not as easily described. However, we can prove the following condition sufficient for the failure of strict complementarity, which will turn out to be sufficient for our purposes.

Theorem 10.
Let $n \geq 3$. Let $C = \operatorname{Diag}(\bar{y}) - \bar{S}$ for some $\bar{y} \in \mathbb{R}^n$ and $\bar{S} \in \mathbb{S}^n_+$, so that $C \in \operatorname{bd}(\operatorname{Normal}(\mathcal{E}_n; \mathbb{1}\mathbb{1}^T))$. Suppose that $\operatorname{Null}(\bar{S}) = \operatorname{span}\{\mathbb{1}, h\}$ for some $h \in \{\mathbb{1}\}^\perp$ and that $h$ has at least three distinct coordinates. Then strict complementarity fails for (5).

Proof.
Set $y^* := \bar{y}$ and $S^* := \operatorname{Diag}(y^*) - C = \bar{S}$. Set $\bar{X} := \mathbb{1}\mathbb{1}^T$. Clearly, $y^* \oplus S^*$ is feasible in the dual and $\operatorname{Tr}(S^*\bar{X}) = 0$, so $(\bar{X}, y^* \oplus S^*)$ is a pair of primal-dual optimal solutions. By Corollary 2, $y^* \oplus S^*$ is the unique optimal solution in the dual. We shall prove that $\bar{X}$ is the unique optimal solution in the primal.

Let $X \in \mathcal{E}_n$ be an optimal solution in the primal. By complementary slackness, $\operatorname{Tr}(XS^*) = 0$, so $S^*X = 0$ and $\operatorname{Im}(X) \subseteq \operatorname{Null}(S^*) = \operatorname{span}\{\mathbb{1}, h\}$. Hence, $X = \alpha_1 \mathbb{1}\mathbb{1}^T + \alpha_2 hh^T + \alpha_3(\mathbb{1}h^T + h\mathbb{1}^T)$ for some $\alpha \in \mathbb{R}^3$. Since $\operatorname{diag}(X) = \mathbb{1}$, we find that $\alpha_1 + \alpha_2 h_i^2 + 2\alpha_3 h_i = 1$ for every $i \in [n]$. Let $i, j, k \in [n]$ be such that $|\{h_i, h_j, h_k\}| = 3$. Then
$$\begin{bmatrix} 1 & h_i^2 & 2h_i \\ 1 & h_j^2 & 2h_j \\ 1 & h_k^2 & 2h_k \end{bmatrix}\begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \alpha_3 \end{bmatrix} = \mathbb{1}.$$
The determinant of the matrix defining this linear system is, up to sign and a factor of $2$, a Vandermonde determinant; it equals $-2(h_j - h_i)(h_k - h_i)(h_k - h_j) \neq 0$ by assumption. Hence, $\alpha = e_1$ is its unique solution. Thus, $X = \mathbb{1}\mathbb{1}^T$. $\square$
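The proof of Theorem 10 is easy to replay numerically. In the sketch below (numpy, with our own choice $n = 4$ and $h = (-3, -1, 1, 3)$, which is orthogonal to $\mathbb{1}$ and has distinct coordinates), we take $\bar S$ to be the orthogonal projector onto $\operatorname{span}\{\mathbb{1}, h\}^\perp$, solve the $3 \times 3$ system from the proof, and confirm that the ranks are too small for strict complementarity:

```python
import numpy as np

n = 4
ones = np.ones(n)
h = np.array([-3.0, -1.0, 1.0, 3.0])   # h ⟂ 1 with distinct coordinates

# Sbar PSD with Null(Sbar) = span{1, h}: project onto the orthogonal complement.
B = np.linalg.qr(np.column_stack([ones, h]))[0]   # orthonormal basis of span{1, h}
Sbar = np.eye(n) - B @ B.T                        # one valid choice of Sbar

Xbar = np.outer(ones, ones)
print(np.allclose(Sbar @ ones, 0) and np.allclose(Sbar @ h, 0))  # nullspace as required
print(abs(np.trace(Sbar @ Xbar)) < 1e-12)                        # complementary pair

# diag(X) = 1 for X = a1*11^T + a2*hh^T + a3*(1h^T + h1^T) reads
# a1 + a2*h_i^2 + 2*a3*h_i = 1; three distinct h_i make the system nonsingular.
i, j, k = 0, 1, 2
V = np.array([[1.0, h[t] ** 2, 2 * h[t]] for t in (i, j, k)])
a = np.linalg.solve(V, np.ones(3))
print(np.allclose(a, [1.0, 0.0, 0.0]))    # forced back to X = 11^T

# rank(Xbar) + rank(Sbar) = 1 + (n - 2) < n: strict complementarity fails.
print(np.linalg.matrix_rank(Xbar) + np.linalg.matrix_rank(Sbar) < n)
```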
Theorem 10 seems to indicate that strict complementarity fails "almost everywhere" on the boundary of
$\operatorname{Normal}(\mathcal{E}_n; \mathbb{1}\mathbb{1}^T)$, since the high rank matrices make up the bulk of the boundary (consider that the set of nonsingular matrices is open and dense) and for "most" of them the extra vector $h$ in the nullspace has at least three distinct coordinates. Unfortunately, we are dealing with somewhat complicated sets (e.g., the high rank matrices in the boundary of a normal cone). In order to make our previous statements precise, we shall make use of the theory of Hausdorff measures, which we introduce next.

4.1. Preliminaries on Hausdorff Measures.
We refer the reader to [20], though we use different notation and more standard terminology. See also [5, 18] for a somewhat similar presentation. We focus our presentation on finite-dimensional normed spaces (over the reals) but most of it could be developed for arbitrary metric spaces. Our main normed spaces are (subspaces of) $\mathbb{R}^n$ and $\mathbb{S}^n$. Since these are Euclidean spaces, they are equipped with a norm induced by their inner-products, and that is the norm that we will consider unless explicitly stated otherwise. We shall only use other norms in Section 5.

Let $\mathbb{V}$ be a finite-dimensional normed space. Let $d \in \mathbb{R}_+$ and $\varepsilon \in \mathbb{R}_{++}$. For each $X \subseteq \mathbb{V}$, define
$$\mathcal{H}^\varepsilon_d(X) := \inf\left\{ \sum_{i=0}^{\infty} \big[\operatorname{diam}(U_i)\big]^d : \{U_i\}_{i \in \mathbb{N}} \subseteq \mathcal{P}(\mathbb{V}),\ X \subseteq \bigcup_{i=0}^{\infty} U_i,\ \operatorname{diam}(U_i) < \varepsilon\ \forall i \in \mathbb{N} \right\},$$
where the diameter of $U \subseteq \mathbb{V}$ is $\operatorname{diam}(U) := \sup_{x,y \in U} \|x - y\|$. The function $\mathcal{H}_d : \mathcal{P}(\mathbb{V}) \to [0, +\infty]$ defined by
$$\mathcal{H}_d(X) := \sup_{\varepsilon > 0} \mathcal{H}^\varepsilon_d(X) = \lim_{\varepsilon \downarrow 0} \mathcal{H}^\varepsilon_d(X) \qquad \forall X \subseteq \mathbb{V} \tag{22}$$
is an outer measure on $\mathbb{V}$. Hence, the restriction of $\mathcal{H}_d$ to the $\mathcal{H}_d$-measurable subsets of $\mathbb{V}$ is a complete measure on $\mathbb{V}$, called the $d$-dimensional Hausdorff measure on $\mathbb{V}$. The $0$-dimensional Hausdorff measure $\mathcal{H}_0$ is the cardinality of a set, $\mathcal{H}_1$ is its length, $\mathcal{H}_2$ is its area, and so on.

Let $d$ be a positive integer and set $\mathbb{V} := \mathbb{R}^d$. Let $\lambda_d : \mathcal{P}(\mathbb{R}^d) \to [0, +\infty]$ denote the $d$-dimensional Lebesgue outer measure on $\mathbb{R}^d$. It can be proved [20, Theorem 30] that
$$\frac{\lambda_d(X)}{\lambda_d(B)} = \frac{\mathcal{H}_d(X)}{2^d} \qquad \forall X \subseteq \mathbb{R}^d. \tag{23}$$
In particular, the $\mathcal{H}_d$-measurable subsets of $\mathbb{R}^d$ are the same as the $\lambda_d$-measurable sets.

Let $a, b \in \mathbb{R}_+$ with $a < b$ and let $X \subseteq \mathbb{V}$. It is not hard to prove from the definition that
$$\mathcal{H}_a(X) < \infty \implies \mathcal{H}_b(X) = 0, \tag{24}$$
$$\mathcal{H}_b(X) > 0 \implies \mathcal{H}_a(X) = \infty. \tag{25}$$
Hence,
$$\sup\{\, d \in \mathbb{R}_+ : \mathcal{H}_d(X) = \infty \,\} = \inf\{\, d \in \mathbb{R}_+ : \mathcal{H}_d(X) = 0 \,\}, \tag{26}$$
and the common value in (26) is the Hausdorff dimension of $X$, denoted by $\dim_H(X)$.
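A toy computation illustrates how (24)-(25) pin down the Hausdorff dimension. Covering the unit segment in $\mathbb{R}$ by $N$ intervals of diameter $1/N$ gives candidate covering sums $N \cdot (1/N)^d = N^{1-d}$ in the definition of $\mathcal{H}^\varepsilon_d$, which blow up for $d < 1$, stay at the length $1$ for $d = 1$, and vanish for $d > 1$ (this particular cover only bounds $\mathcal{H}_d$ from above, but it conveys the dichotomy):

```python
# Candidate covering sums for [0, 1] with N intervals of diameter 1/N:
# sum of diam(U_i)^d = N * (1/N)**d = N**(1 - d).
for d in (0.5, 1.0, 1.5):
    sums = [N * (1.0 / N) ** d for N in (10, 100, 1000)]
    print(d, sums)
# d = 0.5: sums grow without bound as the mesh shrinks (H_d = infinity below the dimension)
# d = 1.0: sums stay at 1, the length of the segment
# d = 1.5: sums shrink toward 0 (H_d = 0 above the dimension)
```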
In particular, if d ∈ R_+ and X ⊆ V satisfy H_d(X) ∈ (0, ∞), then dim_H(X) = d. (27)

We may now define genericity precisely. Let X be a subset of a finite-dimensional normed space V. Let P be a property that may hold or fail for points in X, i.e., P(x) is either true or false for each x ∈ X. We say that P holds generically on X if H_d({x ∈ X : P(x) is false}) = 0 for d := dim_H(X). We say that P fails generically on X if the negation of P holds generically on X. In Section 4.3, we will use Theorem 10 to prove that strict complementarity fails generically at the boundary of the normal cone of any vertex of E_n, for n ≥ 3, modulo some qualification on the ambient space. In the remainder of this section and in the next one, we will describe a few more measure-theoretic tools that we shall use towards this goal.

Let V and U be finite-dimensional normed spaces. Let X ⊆ V. Recall that a function φ : X → U is Lipschitz continuous with Lipschitz constant
L > 0 if

‖φ(x) − φ(y)‖ ≤ L‖x − y‖  ∀x, y ∈ X. (28)

The following is well known and easy to prove:

Theorem 11.
Let V and U be finite-dimensional normed spaces. Let X ⊆ V and d ∈ R_+. Let φ : X → U be Lipschitz continuous with Lipschitz constant L. Then

H_d(φ(X)) ≤ L^d H_d(X). (29)

Theorem 11 is especially useful for determining some Hausdorff dimensions via bi-Lipschitz maps. We recall the definition here. Let V and U be finite-dimensional normed spaces. Let X ⊆ V, and let φ : X → U be a one-to-one function with range Y := φ(X). We say that φ is bi-Lipschitz continuous with Lipschitz constants L_1 > 0 and L_2 > 0 if φ is Lipschitz continuous with Lipschitz constant L_1 and φ^{−1} : Y → V is Lipschitz continuous with Lipschitz constant L_2.

Corollary 12.
Let V and U be finite-dimensional normed spaces. Let X ⊆ V and d ∈ R_+. Let φ : X → U be bi-Lipschitz continuous with Lipschitz constants L_1 and L_2. Then

L_2^{−d} H_d(X) ≤ H_d(φ(X)) ≤ L_1^d H_d(X). (30)

In particular, if H_d(X) ∈ (0, ∞), then dim_H(φ(X)) = d.

This corollary may be used, for instance, to regard any d-dimensional Euclidean space V as R^d by considering the coordinate map φ : V → R^d with respect to a fixed orthonormal basis of V. Another frequent use of Corollary 12 goes as follows. Equip the space S^n with the trace inner-product. If Q ∈ R^{n×n} is an orthogonal matrix, the map X ∈ S^n ↦ QXQ^T preserves inner-products, and hence norms and distances; hence, the map is Lipschitz continuous with Lipschitz constant 1. Its inverse is X ∈ S^n ↦ Q^T X Q, and so the map X ∈ S^n ↦ QXQ^T is bi-Lipschitz continuous with Lipschitz constants 1 and 1.

The next result is useful for determining the Hausdorff dimension of some simple unbounded sets in the σ-finite case, when (27) is not directly applicable:

Proposition 13.
Let X be a subset of a finite-dimensional normed space V. For each i ∈ N, let Y_i be a subset of a finite-dimensional normed space U_i, and let φ_i : Y_i → V be a Lipschitz continuous function with Lipschitz constant L_i. If X ⊆ ∪_{i∈N} φ_i(Y_i), then dim_H(X) ≤ sup_{i∈N} dim_H(Y_i).

Proof.
Set d := sup_{i∈N} dim_H(Y_i). Let d̄ > d. Then (26) yields H_{d̄}(Y_i) = 0 for each i ∈ N, so by Theorem 11 we have H_{d̄}(X) ≤ Σ_{i∈N} L_i^{d̄} H_{d̄}(Y_i) = 0. □

For instance, R^d = ∪_{M∈N} M B, and the ball M B ⊆ R^d with nonzero M has Hausdorff dimension d by (27) and (23), so Proposition 13 shows that dim_H(R^d) ≤ d. Since R^d ⊇ B shows that H_d(R^d) ≥ H_d(B) > 0 by (23), we conclude by (27) that dim_H(R^d) = d. Together with Corollary 12, this shows that Hausdorff dimension and the usual (linear) dimension coincide on linear subspaces, and hence also for convex sets by translation invariance.

4.2. Hausdorff Measures and the Boundary Structure of Convex Sets.
In this section we collect some results relating Hausdorff measures and the boundary structure of convex sets, including a quick review of basic facts about faces. The following result is well known:
Theorem 14.
Let E be a Euclidean space. If C ⊆ E is a compact convex set with dimension d ≥ 1, then dim_H(rbd(C)) = d − 1.

Proof.
We may assume that dim(E) = d, so that C has nonempty interior. By choosing an orthonormal basis for E, we may assume that E = R^d. We may also assume that 0 ∈ int(C) by translation invariance of Hausdorff measure. Set X := bd(B_∞), and note that H_{d−1}(X) ∈ (0, +∞) by (23) and Corollary 12. Let ε, M ∈ R_++ be such that εB_∞ ⊆ C ⊆ MB_∞. Let p_C : R^d → C be the metric projection onto C, i.e., {p_C(x)} = arg min_{y∈C} ‖y − x‖ for each x ∈ R^d. Then p_C is Lipschitz continuous (with Lipschitz constant 1). Theorem 11 applied to p_C ↾ MX, together with positive homogeneity of H_{d−1} (of degree d − 1), yields H_{d−1}(bd(C)) < ∞. Similarly, applying Theorem 11 to the restriction to bd(C) of the metric projection onto εB_∞ yields H_{d−1}(bd(C)) > 0. The theorem now follows from (27). □

Since we are dealing with convex cones, the previous result will be more useful to us when stated in a lifted form about pointed closed convex cones:
Corollary 15.
Let E be a Euclidean space. If K ⊆ E is a pointed closed convex cone with dimension d ≥ 1, then dim_H(rbd(K)) = d − 1.
Proof.
We may assume that E = R^d. Since K is pointed, after applying some rotation, which preserves Hausdorff measures by Corollary 12, we may assume that K = R_+(1 ⊕ C) for some compact convex set C ⊆ R^{d̄}, where d̄ := d − 1. For each N ∈ N, define the compact convex set K_N := K ∩ ([N, N + 1] ⊕ R^{d̄}). Since

rbd(K) ⊆ ∪_{N=0}^∞ rbd(K_N), (31)

the result follows from Proposition 13 and Theorem 14. □

The next result refers to faces of a convex set, so before we state it we shall briefly recall the basic theory; see [19, Sec. 18]. Let E be a Euclidean space. Let C ⊆ E be a convex set. A convex subset F of C is a face of C if, for each x, y ∈ C such that the open line segment (x, y) := {(1 − λ)x + λy : λ ∈ (0, 1)} between x and y meets F, we have x, y ∈ F. We use the notation F ⊴ C to denote that F is a face of C, and F ◁ C to denote that F is a proper face of C, i.e., F ⊴ C and F ≠ C. Denote Faces(C) := {F : F ⊴ C}. Faces of closed convex sets are closed, and faces of convex cones are convex cones. An arbitrary intersection of faces of C is a face of C and, since the faces of a convex set are partially ordered by inclusion and C ⊴ C, every point x of C lies in a unique minimal face F of C; this face F is characterized by the property that x ∈ ri(F). Also, it can be proved that {ri(F) : ∅ ≠ F ⊴ C} is a partition of C. If C is a compact convex set, it is not hard to prove that the faces of the homogenization of C are described by:

Faces(R_+(1 ⊕ C)) = {∅, {0}} ∪ {R_+(1 ⊕ F) : ∅ ≠ F ⊴ C}. (32)

Theorem 16 (Larman [12]). Let E be a Euclidean space. If C ⊆ E is a compact convex set with dimension d ≥ 1, then

H_{d−1}(∪_{F ◁ C} rbd(F)) = 0.

As before, we shall need a conic version of Larman's Theorem. We apply tools similar to the ones used to lift Theorem 14 to Corollary 15:
Theorem 17.
Let E be a Euclidean space. If K ⊆ E is a pointed closed convex cone with dimension d ≥ 1, then

H_{d−1}(∪_{F ◁ K} rbd(F)) = 0.

Proof.
The case d = 1 is easy to verify; assume that d ≥ 2. We may assume that E = R ⊕ R^{d̄} for d̄ := d − 1 and, as in the beginning of the proof of Corollary 15, we may assume that K = R_+(1 ⊕ C) for some compact convex set C ⊆ R^{d̄} with nonempty interior. For each N ∈ N, define the compact convex set K_N := K ∩ ([N, N + 1] ⊕ R^{d̄}). By elementary convex analysis,

∪_{F ◁ K} rbd(F) ⊆ ∪_{N=0}^∞ ∪_{F_N ◁ K_N} rbd(F_N). (33)

Hence,

H_{d−1}(∪_{F ◁ K} rbd(F)) ≤ Σ_{N=0}^∞ H_{d−1}(∪_{F_N ◁ K_N} rbd(F_N)) = 0,

where we used the fact that each summand is zero by Theorem 16. □

4.3. Generic Failure of Strict Complementarity.
In this section, we prove one of our main results: strict complementarity fails generically in the relative boundary of the normal cone of the elliptope at any of its vertices.

We shall apply Theorem 17 to S^n_+. Let us briefly recall some well-known descriptions of the faces of the positive semidefinite cone S^n_+. Let L^n denote the set of linear subspaces of R^n. For each L ∈ L^n, denote

F_L := {X ∈ S^n_+ : Null(X) ⊇ L} (34)

and note that

ri(F_L) = {X ∈ S^n_+ : Null(X) = L}. (35)

Then
Faces(S^n_+) = {∅} ∪ {F_L : L ∈ L^n}. (36)

Note that, for L ∈ L^n such that L ≠ R^n, there is an orthogonal matrix Q ∈ R^{n×n} such that

F_L = { Q (U ⊕ 0) Q^T : U ∈ S^r_+ }, (37)

where r := n − dim(L) and U ⊕ 0 denotes the n-by-n matrix with U as its leading r-by-r block and zeros elsewhere.

Lemma 18.
Let n ≥ 2 be an integer. Then the property "C ↦ rank(C) = n − 1" holds generically in bd(S^n_+).

Proof.
Set d := dim_H(S^n_+). Note that d − 1 = dim_H(bd(S^n_+)) by Corollary 15. Let X ∈ bd(S^n_+) be such that "rank(X) = n − 1" fails. Then rank(X) ≤ n − 2. For each nonzero h ∈ Null(X), let L be the linear subspace of R^n spanned by h, and note that X ∈ rbd(F_L), following the notation from (34). Hence,

{X ∈ bd(S^n_+) : rank(X) ≠ n − 1} = {X ∈ S^n_+ : rank(X) ≤ n − 2} ⊆ ∪_{F ◁ S^n_+} rbd(F).

The (d − 1)-dimensional Hausdorff measure of the set on the RHS above is zero by Theorem 17. □

We are ready to prove one of our main results:
Theorem 19.
Let n ≥ 3, and let X̄ be a vertex of E_n. Then the property "C ↦ strict complementarity holds for (5)" fails generically on rbd(S^n_+ ∩ {X̄}^⊥).

Proof.
By Theorem 3 and the discussion of linear automorphisms of E_n from Section 2.2, we may assume that X̄ = T. Set m := n − 1. Let Q ∈ R^{n×n} be an orthogonal matrix such that Q^T e_n = n^{−1/2} 1 and Q^T e_m = 2^{−1/2}(e_1 − e_2). Using the map M ∈ S^n ↦ QMQ^T and Corollary 12, we find that rbd(S^n_+ ∩ {X̄}^⊥) and rbd(S^n_+ ∩ {e_n e_n^T}^⊥) have the same Hausdorff dimension. Since the cone S^n_+ ∩ {e_n e_n^T}^⊥ is an embedding of S^m_+ into S^n_+, the Hausdorff dimension of rbd(S^n_+ ∩ {e_n e_n^T}^⊥) is dim_H(S^m_+) − 1 by Corollary 15. Hence,

d := dim_H(rbd(S^n_+ ∩ {X̄}^⊥)) = (n choose 2) − 1. (38)

Set

𝒞 := {C ∈ rbd(S^n_+ ∩ {X̄}^⊥) : strict complementarity holds in (5)}.

By Theorem 10,

𝒞 ⊆ D ∪ ∪_{i,j∈[n], i≠j} D_ij, (39)

where

D := {C ∈ S^n_+ ∩ {X̄}^⊥ : rank(C) ≤ n − 3},
D_ij := {C ∈ S^n_+ : ∃h ∈ {1, e_i − e_j}^⊥, h ≠ 0, Null(C) = span{1, h}},

for each pair of distinct i, j ∈ [n]. Clearly all the sets D_ij have the same d-dimensional Hausdorff measure, so it suffices to prove that

H_d(D) = 0, (40)
H_d(D_12) = 0. (41)

By using the map M ∈ S^n ↦ QMQ^T and Corollary 12, the sets D and {C ∈ S^m_+ : rank(C) ≤ m − 2} have the same d-dimensional Hausdorff measure. Hence, (40) follows from Lemma 18 and Corollary 15. Again using the map M ∈ S^n ↦ QMQ^T and Corollary 12, we find that H_d(D_12) = H_d(D′), where

D′ := {U ∈ S^m_+ : rank(U) = m − 1, e_m ∈ Im(U)}.

Hence, to prove (41) and thus the theorem, it suffices to prove that

H_d(D′) = 0. (42)
For each k ∈ [m − 1], define the permutation matrix P_k := Σ_{i∈[m]\{k,m}} e_i e_i^T + e_k e_m^T + e_m e_k^T ∈ S^m. Set P_m := I. For each k ∈ [m], define the map φ_k : S^{m−1} ⊕ R^{m−1} → S^m by setting

φ_k(A ⊕ c) := P_k^T [A, Ac; c^T A, c^T A c] P_k.

It is easy to verify that

{U ∈ S^m_+ : rank(U) = m − 1} ⊆ ∪_{k∈[m]} φ_k(S^{m−1} ⊕ R^{m−1}), (43)
Null(φ_k(A ⊕ c)) = P_k span{−c ⊕ 1} for every A ⊕ c ∈ S^{m−1} ⊕ R^{m−1} with A nonsingular. (44)

Let U ∈ S^m_+ with rank(U) = m − 1, and let k ∈ [m] and A ⊕ c ∈ S^{m−1} ⊕ R^{m−1} be such that U = φ_k(A ⊕ c). Then e_m ∈ Im(U) is equivalent to e_m ⊥ P_k(−c ⊕ 1), which is equivalent to k ∈ [m − 1] and c ⊥ e_k. Hence,

D′ ⊆ ∪_{k∈[m−1]} φ_k(S^{m−1} ⊕ {e_k}^⊥). (45)

Let k ∈ [m − 1]. Since each entry of φ_k(A ⊕ c) is a polynomial function of the input, the map φ_k is Lipschitz continuous on any compact subset of the domain. It follows from Proposition 13 that

dim_H(φ_k(S^{m−1} ⊕ {e_k}^⊥)) ≤ (m choose 2) + m − 2 = d − 1; (46)

note that the subspace {e_k}^⊥ in the LHS is (m − 2)-dimensional, as this subspace is the set of vectors in R^{m−1} orthogonal to e_k. Now (42) follows from (45) and (46). □

5. Failure of Strict Complementarity for Rank-One Objectives
In Section 4, we zoomed into the boundary of the normal cone of an arbitrary vertex of the elliptope and proved that strict complementarity fails generically there.
Informally, we might say that with zero "probability" a "uniformly chosen" objective function in the boundary of such a normal cone yields an SDP that satisfies strict complementarity. Now we zoom in even further in that boundary, into the set of negative semidefinite rank-one objectives, and consider again how often strict complementarity holds. We will state and prove a self-contained result in Theorem 24 below. However, in order to motivate the objects of the construction and the intermediate results, we start with an informal discussion.

Assume throughout this discussion that n ≥ 4. We will normalize the "sample space" so that we can have a probability space. For the sake of discussion, let us focus our attention on the vertex T of E_n and consider the sample space to be

Ω_M := {C ∈ bd(Normal(E_n; T)) : C ⪯ 0, rank(C) = 1, ‖vec(C)‖_∞ = 1}. (47)

Accordingly, equip S^n with the norm X ∈ S^n ↦ ‖vec(X)‖_∞. Set d := dim_H(Ω_M). In order to obtain a probability space on Ω_M, we will define a probability measure

P_M(A_M) := H_d(A_M) / H_d(Ω_M) (48)

over all H_d-measurable subsets A_M of Ω_M; we shall prove that H_{n−2}(Ω_M) ∈ (0, ∞), so that (48) is properly defined and d = n − 2. Our goal is to prove that the probability of the event

G_M := {C ∈ Ω_M : strict complementarity holds for (5) with C} (49)

lies in (0, 1).

In order to achieve this, we shall reduce the problem to the space of vectors that generate the rank-one tensors in Ω_M and G_M, which live in the matrix space. In order to carry results back and forth between these spaces, we rely on Corollary 12. For each s ∈ {±1}^n, define

R^n_s := Diag(s) R^n_+, (50)
φ_s : b ∈ R^n_s ∩ bd(B_∞) ↦ −bb^T. (51)

Equip R^n with the norm x ∈ R^n ↦ ‖x‖_∞. We shall split our analysis over each of the 2^n bi-Lipschitz maps φ_s, one for each chamber/orthant of R^n, according to their sign vectors:

Theorem 20.
Let s ∈ {±1}^n. Then the map φ_s defined in (51) is bi-Lipschitz continuous with Lipschitz constants 2 and 1, where we equip the domain with the ∞-norm, and we equip the range with the norm ‖vec(·)‖_∞.

Proof.
To see that φ_s is Lipschitz continuous with Lipschitz constant 2, let x, y ∈ R^n_s ∩ bd(B_∞) and note that

‖vec(xx^T − yy^T)‖_∞ = (1/2)‖vec[(x − y)(x + y)^T + (x + y)(x − y)^T]‖_∞ ≤ ‖x + y‖_∞ ‖x − y‖_∞ ≤ 2‖x − y‖_∞.

The proof that φ_s^{−1} is Lipschitz continuous with Lipschitz constant 1 is also simple, but it involves case analysis. Set A := xx^T − yy^T. Let k ∈ [n] be such that |x_k| = 1, so x_k = s_k. Similarly, let ℓ ∈ [n] be such that |y_ℓ| = 1, so y_ℓ = s_ℓ. Let j ∈ [n]. We shall make use of the following facts:

α_k := y_k s_k ∈ [0, 1],  β_ℓ := x_ℓ s_ℓ ∈ [0, 1],  |A_kj| = |x_j − α_k y_j|,  |A_ℓj| = |β_ℓ x_j − y_j|.

We consider 4 cases, according to which of x_j or y_j is largest, and according to their signs; note that both x_j and y_j have the same sign. We have

x_j ≥ y_j ≥ 0 ⟹ 0 ≤ |x_j − y_j| = x_j − y_j ≤ x_j − α_k y_j = |A_kj|;
y_j ≥ x_j ≥ 0 ⟹ 0 ≤ |x_j − y_j| = y_j − x_j ≤ y_j − β_ℓ x_j = |A_ℓj|;
0 ≥ x_j ≥ y_j ⟹ 0 ≤ |x_j − y_j| = x_j − y_j ≤ β_ℓ x_j − y_j = |A_ℓj|;
0 ≥ y_j ≥ x_j ⟹ 0 ≤ |x_j − y_j| = y_j − x_j ≤ α_k y_j − x_j = |A_kj|.

Hence, ‖x − y‖_∞ ≤ ‖vec(xx^T − yy^T)‖_∞. □

Note that restricting the domain of φ_s in Theorem 20 to chambers of R^n is necessary. Indeed, consider x := (1, −1, ε)^T and y := (−1, 1, 0)^T, for an arbitrary ε ∈ (0, 1). Then ‖x − y‖_∞ = 2 but ‖vec(xx^T − yy^T)‖_∞ = ε.

Next we relate the description of Ω_M to the vectors that appear in the rank-one tensors:

Proposition 21.
For n ≥ 3, we have

Ω_M = { −bb^T : b ∈ R^n and either b = e_i − αe_j for some distinct i, j ∈ [n] and α ∈ [0, 1], or (b ⊥ 1 and |supp(b)| ≥ 3 and ‖b‖_∞ = 1) }. (52)

Proof.
We first prove the inclusion '⊇'. If b ⊥ 1 and ‖b‖_∞ = 1, it follows from (11) that −bb^T ∈ Ω_M. Suppose that b = e_i − αe_j for distinct i, j ∈ [n] and α ∈ [0, 1]. Set β := 1 − α ∈ [0, 1] and y := −βb. It is easy to verify that S := Diag(y) + bb^T ⪰ 0 and S1 = 0; now −bb^T = Diag(y) − S ∈ Ω_M follows from (11). In both cases, we rely on n ≥ 3 to ensure that −bb^T lies in the boundary.

Now we prove the inclusion '⊆'. Let b ∈ R^n be such that −bb^T =: C ∈ Ω_M. Clearly ‖b‖_∞ = 1. We may assume that β := 1^T b ≥ 0 and that b_1 = 1. Use (11) to write C = Diag(y) − S for some y ∈ R^n and S ∈ S^n_+ such that S1 = 0. Then −βb = −bb^T 1 = C1 = y − S1 = y, so

0 ⪯ S = Diag(y) + bb^T = −β Diag(b) + bb^T. (53)

We claim that

b_i < 0 for every i ∈ supp(b) \ {1}. (54)

Indeed, by restricting (53) to a principal submatrix we get

[b_1^2, b_1 b_i; b_1 b_i, b_i^2] ⪰ β [b_1, 0; 0, b_i]. (55)

If b_i > 0, then the RHS is positive definite, whereas the LHS is singular. This proves (54).

Suppose first that |supp(b)| ≤ 2. Then b = e_1 − αe_j for some j ∈ [n] \ {1} and α ∈ [−1, 1]. By (54), we have α ∈ [0, 1], and so −bb^T lies in the RHS of (52).

Suppose next that |supp(b)| ≥ 3. We must prove that

b ⊥ 1. (56)

Suppose for the sake of contradiction that β > 0. Next let i, j ∈ supp(b) \ {1} be distinct. Again by (53), we get that

[b_1(b_1 − β), b_1 b_i; b_1 b_i, b_i(b_i − β)] ⪰ 0, (57)

so the determinant of the matrix in (57) is nonnegative. This yields b_1 + b_i ≤ β, using β > 0. But (54) implies that β ≤ b_1 + b_i + b_j < b_1 + b_i, a contradiction. This concludes the proof of (56), and hence −bb^T lies in the RHS of (52). □

Finally, we need to relate G_M to the vectors that appear in the rank-one tensors. A vector b ∈ R^n is strictly balanced if |b_i| < Σ_{j∈[n]\{i}} |b_j| for every i ∈ [n].
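This last definition is straightforward to put into code (an illustrative sketch; the function name and the sample vectors are ours, not the paper's):

```python
import numpy as np

def is_strictly_balanced(b):
    """True iff |b_i| < sum over j != i of |b_j|, for every i in [n]."""
    a = np.abs(np.asarray(b, dtype=float))
    return bool(np.all(2 * a < a.sum()))   # |b_i| < a.sum() - |b_i|

# b1 is balanced only with equality at its first coordinate, hence not strict;
# b2 satisfies the strict inequality at every coordinate.
b1 = np.array([1.0, -0.5, -0.5])
b2 = np.array([1.0, -0.7, -0.6, 0.3])
assert not is_strictly_balanced(b1)
assert is_strictly_balanced(b2)
```

Note that b2 is also orthogonal to the all-ones vector and fully supported, the setting in which strict balancedness will matter below.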
It is easy to verify that, if b ∈ R^n and i ∈ [n] is such that |b_i| = ‖b‖_∞, then

b is strictly balanced ⟺ |b_i| < Σ_{j∈[n]\{i}} |b_j|. (58)

We shall rely on yet another result by Laurent and Poljak:

Theorem 22 ([14, Theorem 2.6]). Let b ∈ R^n be such that b ⊥ 1 and supp(b) = [n]. Then there exists X ∈ E_n such that Null(X) = span{b} if and only if b is strictly balanced.

Proposition 23.
Let b ∈ R^n be such that b ⊥ 1 and supp(b) = [n]. Then strict complementarity holds for (5) with C = −bb^T if and only if b is strictly balanced.

Proof.
Note that T is an optimal solution for (5) if C = −bb^T. By Proposition 5, we must show that the existence of X ∈ E_n such that −bb^T ∈ ri(Normal(E_n; X)) is equivalent to strict balancedness of b. We will show that, for each X ∈ E_n,

−bb^T ∈ ri(Normal(E_n; X)) ⟺ bb^T ∈ {Z ∈ S^n_+ : Im(Z) = Null(X)}. (59)

Since the existence of X ∈ E_n such that the RHS of (59) holds is equivalent to b being strictly balanced by Theorem 22, the result will follow.

The proof of sufficiency in (59) follows from (12) and ri(S^n_+ ∩ {X}^⊥) = {Z ∈ S^n_+ : Im(Z) = Null(X)}. For the proof of necessity, recall (12) and suppose that there exists X ∈ E_n such that −bb^T = Diag(y) − S for some y ∈ R^n and S ∈ ri(S^n_+ ∩ {X}^⊥). Then

−bb^T 1 = (Diag(y) − S)1 = y − S1 shows that y = S1. (60)

Since X and T are both optimal solutions for (5), we find that 0 = Tr(−bb^T T) = Tr(−bb^T X) = y^T diag(X) − Tr(SX), so 1^T y = Tr(SX) = 0. By (60), 1^T S 1 = 1^T y = 0, so 1 ∈ Null(S) and y = S1 = 0; hence bb^T = S ∈ ri(S^n_+ ∩ {X}^⊥). □

We are now in position to present the main result of this section:
Theorem 24.
Let n ≥ 4 be an integer. Equip S^n with the norm ‖vec(·)‖_∞. Set

Ω_M := {C ∈ bd(Normal(E_n; T)) : C ⪯ 0, rank(C) = 1, ‖vec(C)‖_∞ = 1} ⊆ S^n,
G_M := {C ∈ Ω_M : strict complementarity holds for (5) with C},
d := dim_H(Ω_M).

Let Σ_d be the σ-algebra of H_d-measurable subsets of S^n, and set Σ_M := {A_M ∈ Σ_d : A_M ⊆ Ω_M}. Then

(i) Ω_M ∈ Σ_d and G_M ∈ Σ_M;
(ii) H_{n−2}(Ω_M) ∈ (0, ∞), so d = n − 2;
(iii) H_d(G_M) > 0 and H_d(G_M^c) > 0, where G_M^c := Ω_M \ G_M.

In particular, if we set

P_M(A_M) := H_d(A_M) / H_d(Ω_M)  ∀A_M ∈ Σ_M, (61)

then (Ω_M, Σ_M, P_M) is a probability space and the event G_M satisfies P_M(G_M) ∈ (0, 1).

Proof.
We start by proving that

Ω_M ∈ Σ_d. (62)

By standard Hausdorff measure theory, Σ_d contains every Borel set of S^n; see, e.g., [20, Theorem 27]. Recall that the Borel sets of S^n are the elements of the smallest σ-algebra on S^n that contains all the open subsets of S^n. For distinct i, j ∈ [n], set B_ij := e_i − [0, 1]e_j. For each S ∈ ([n] choose 3) and m ∈ N \ {0}, define

B_{S,m} := {b ∈ R^n : b ⊥ 1, ‖b‖_∞ = 1, |b_i| ≥ 1/m ∀i ∈ S}.

Clearly, each B_ij and each B_{S,m} is compact. Let φ : b ∈ R^n ↦ −bb^T ∈ S^n. By Proposition 21,

Ω_M = ∪_{i∈[n]} ∪_{j∈[n]\{i}} φ(B_ij) ∪ ∪_{m=1}^∞ ∪_{S∈([n] choose 3)} φ(B_{S,m}). (63)

Since each φ(B_ij) and each φ(B_{S,m}) is compact, (63) shows that Ω_M is an F_σ, i.e., a countable union of closed sets, and hence a Borel set. This proves (62).

Next we prove that

H_{n−2}(Ω_M) ∈ (0, ∞), (64)

from which it will follow via (27) that

d = n − 2. (65)

Again we shall use Proposition 21. By Corollary 12 and Theorem 20,

H_1(∪_{i∈[n]} ∪_{j∈[n]\{i}} φ(B_ij)) ∈ (0, ∞). (66)

Moreover,

Ω_M ⊇ {−bb^T : b = (−1) ⊕ c, c ∈ (0, 1]^{n−1}, 1^T c = 1} ⟹ H_{n−2}(Ω_M) > 0.

For each s ∈ {±1}^n and i ∈ [n], the polytope B_{s,i} := {b ∈ R^n_s : b ⊥ 1, −1 ≤ b ≤ 1, b_i = s_i} has dimension less than or equal to n − 2. Since

Ω_M ⊆ N ∪ ∪_{s∈{±1}^n} ∪_{i∈[n]} φ(B_{s,i})

for some set N of zero d-dimensional Hausdorff measure (namely, the union in (66), by (24)), and each φ(B_{s,i}) has finite d-dimensional Hausdorff measure by Corollary 12 and Theorem 20, the proof of (64) is complete.

In the remainder of the proof we shall use subsets of R^n with constraints on the coordinates that are zero:

Z_i := {b ∈ R^n : b_i = 0} ∀i ∈ [n], and Z_∅ := R^n \ ∪_{i∈[n]} Z_i = {b ∈ R^n : supp(b) = [n]}.
Define also

Ω_V := {b ∈ R^n : b ⊥ 1, |supp(b)| ≥ 3, ‖b‖_∞ = 1},
G_V := {b ∈ Ω_V : −bb^T ∈ G_M},  G_V^c := Ω_V \ G_V,
B_bal := {b ∈ Ω_V : b is strictly balanced},  B_bal^c := Ω_V \ B_bal.

Proposition 23 implies that

G_V ∩ Z_∅ = B_bal ∩ Z_∅, (67)
G_V^c ∩ Z_∅ = B_bal^c ∩ Z_∅. (68)

For each i ∈ [n], we have G_V ∩ Z_i ⊆ Ω_V ∩ Z_i, and the set on the RHS has zero d-dimensional Hausdorff measure. Hence,

H_d(G_V ∩ Z_i) = 0 ∀i ∈ [n]. (69)

Define φ_s as in (51) for each s ∈ {±1}^n. By putting together (66), (69), and (67), we find that

G_M = N ∪ ∪_{s∈{±1}^n} φ_s(G_V ∩ Z_∅ ∩ R^n_s) = N ∪ ∪_{s∈{±1}^n} φ_s(B_bal ∩ Z_∅ ∩ R^n_s) (70)

for some subset N ⊆ Ω_M such that H_d(N) = 0.

Let us prove that

G_M ∈ Σ_M. (71)

For each m ∈ N \ {0} and each U ∈ ([n] choose 3), define

B_{bal,m,U} := {b ∈ R^n : b ⊥ 1, ‖b‖_∞ = 1, |b_i| ≥ 1/m ∀i ∈ U, |b_i| + 1/m ≤ Σ_{j∈[n]\{i}} |b_j| ∀i ∈ [n]}.
Clearly, B_bal = ∪_{m=1}^∞ ∪_{U∈([n] choose 3)} B_{bal,m,U}. Hence, by (70),

G_M = N ∪ ∪_{m=1}^∞ ∪_{U∈([n] choose 3)} ∪_{s∈{±1}^n} φ_s(B_{bal,m,U} ∩ Z_∅ ∩ R^n_s). (72)

Since each φ_s(B_{bal,m,U} ∩ Z_∅ ∩ R^n_s) is compact, it follows that G_M is the union of a null set with an F_σ, and hence G_M ∈ Σ_d. This proves (71).

Set ˚x := 1 ⊕ (1/(n−1)) ⊕ (−n/((n−1)(n−2)))1 ∈ R^n, ε := 3/(4(n−1)(n−2)), and s^(x) := 1 ⊕ 1 ⊕ (−1)1 ∈ {±1}^n. It is not hard to verify that

˚x + ε(B_∞ ∩ {e_1, 1}^⊥) ⊆ B_bal ∩ Z_∅ ∩ R^n_{s^(x)}. (73)

Since the set in the LHS of (73) has positive d-dimensional measure, so does the set in the RHS of (73), whence

H_d(G_M) > 0 (74)

by Corollary 12, Theorem 20, and (70).

Set ˚y := 1 ⊕ (−1/(n−1))1 ∈ R^n, δ := 1/(2(n−1)), and s^(y) := 1 ⊕ (−1)1 ∈ {±1}^n. It is not hard to verify that

˚y + δ(B_∞ ∩ {e_1, 1}^⊥) ⊆ B_bal^c ∩ Z_∅ ∩ R^n_{s^(y)}. (75)

Hence,

G_M^c ⊇ φ(G_V^c ∩ Z_∅) = φ(B_bal^c ∩ Z_∅) ⊇ φ_{s^(y)}(B_bal^c ∩ Z_∅ ∩ R^n_{s^(y)}) ⊇ φ_{s^(y)}(˚y + δ(B_∞ ∩ {e_1, 1}^⊥)).

Thus, H_d(G_M^c) > 0 by Corollary 12 and Theorem 20. □

6. Conclusion
We proved in Section 4 that the MaxCut SDP (5) has the worst possible behavior with respect to strict complementarity when the objective function is in the boundary of the normal cone of the elliptope at any of its vertices. At first glance, this may seem surprising, since the MaxCut SDP is so elementary and has so many favorable properties. However, as we explain next, from a properly chosen viewpoint this bad behavior is not so surprising.

Consider, for instance, the convex set C ⊆ R^2 in Figure 1. For concreteness, an explicit description of C is given by

C := {x ∈ R^2 : ‖x‖^2 + |x_1| ≤ 1} = {x ∈ R^2 : |x_1| ≤ (√5 − 1)/2, |x_2| ≤ √(1 − |x_1| − x_1^2)}, (76)

and it is not hard to show that C is the projection of the feasible region of an SDP. It is intuitive and simple to verify that the vector e_1 + 2e_2 lies in (the boundary of) the normal cone of C at its vertex e_2, but is not in the relative interior of any normal cone of C. We can trace this phenomenon to the smooth, nonpolyhedral boundary of C around e_2. It is straightforward to extend this example to R^3 by considering the solid of revolution obtained by rotating C around the e_2 axis, i.e., an American football.

The elliptope looks somewhat similar to C in the following sense. Let us consider the projection E′_n ⊆ R^{(n choose 2)} of the elliptope E_n onto its off-diagonal entries. For n ≥ 3, the set E′_n is a compact nonpolyhedral convex set with 2^{n−1} vertices by Theorem 3. Intuitively, E′_n can be thought of as being obtained from the polytope which is the convex hull of these 2^{n−1} vertices by inflating it like a balloon, while keeping the vertices fixed. (In fact, by [13, Proposition 2.9], the line segments between the 2^{n−1} vertices are also kept fixed.) In this way, E′_n is a round, plump convex set, whose boundary is smooth almost everywhere, and the neighborhood of E′_n around any vertex looks like (a generalization of) what is depicted by the set C from the previous paragraph.
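With C as in (76), the normal-cone claim at the vertex e_2 is easy to check numerically by brute force (an illustrative sketch; the grid resolution is an arbitrary choice of ours):

```python
import numpy as np

# Dense grid over the box [-1, 1]^2; keep the points of C from (76).
g = np.linspace(-1.0, 1.0, 801)
X1, X2 = np.meshgrid(g, g)
inside = X1**2 + X2**2 + np.abs(X1) <= 1.0
pts = np.stack([X1[inside], X2[inside]], axis=1)

e2 = np.array([0.0, 1.0])
for u in (np.array([1.0, 2.0]), np.array([-1.0, 2.0])):
    # u is normal to C at e2: <u, x - e2> <= 0 for every x in C.
    assert (pts @ u - u @ e2).max() <= 1e-9

# Both (1, 2) and (-1, 2) are normals at e2, so the normal cone there is
# two-dimensional: e2 is a vertex of C, and these two rays bound the cone.
```

Since two linearly independent directions are normal at e_2, the normal cone there is full-dimensional, while at every other (smooth) boundary point of C it is a single ray.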
Thus, when one considers that the elliptope around a vertex "locally" looks like C around e_2, the poor behavior of the MaxCut SDP described in Section 4 makes more intuitive sense.

Figure 1. The set C defined in (76) and its normal cone N(C; e_2) at e_2.

The discussion above indicates a natural direction for future research: namely, to extend Theorem 19 to more general SDPs, by requiring the feasible region to be "locally nonpolyhedral" around its vertices.

References

[1] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Complementarity and nondegeneracy in semidefinite programming.
Math. Programming , 77(2, Ser. B):111–128, 1997. Semidefinite programming.[2] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Primal-dual interior-point methods for semidefinite programming:convergence rates, stability and numerical results.
SIAM J. Optim. , 8(3):746–768 (electronic), 1998.[3] M. K. de Carli Silva and L. Tunçel. Vertices of spectrahedra arising from the elliptope, the theta body, and their relatives.
SIAM J. Optim. , 25(1):295–316, 2015.[4] C. Delorme and S. Poljak. Laplacian eigenvalues and the maximum cut problem.
Math. Programming , 62(3, Ser. A):557–574,1993.[5] D. Drusvyatskiy and A. S. Lewis. Generic nondegeneracy in convex optimization.
Proc. Amer. Math. Soc. , 139(7):2519–2527,2011.[6] J. Gallier. The Schur complement and symmetric positive semidefinite (and definite) matrices. December 10, 2010.
URL: (visited on 04/11/2018).[7] A. J. Goldman and A. W. Tucker. Theory of linear programming. In
Linear inequalities and related systems , Annals ofMathematics Studies, no. 38, pages 53–97. Princeton University Press, Princeton, N.J., 1956.[8] M. Halická, E. de Klerk, and C. Roos. On the convergence of the central path in semidefinite optimization.
SIAM J. Optim. ,12(4):1090–1099 (electronic), 2002.[9] R. A. Horn and C. R. Johnson.
Matrix analysis . Cambridge University Press, Cambridge, 1990. Corrected reprint of the1985 original.[10] J. Ji, F. A. Potra, and R. Sheng. On the local convergence of a predictor-corrector method for semidefinite programming.
SIAM J. Optim. , 10(1):195–210, 1999.[11] M. Kojima, M. Shida, and S. Shindoh. Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPsand SDLCPs.
Math. Programming , 80(2, Ser. A):129–160, 1998.[12] D. G. Larman. On a conjecture of Klee and Martin for convex bodies.
Proc. London Math. Soc. (3) , 23:668–682, 1971.[13] M. Laurent and S. Poljak. On a positive semidefinite relaxation of the cut polytope.
Linear Algebra Appl. , 223/224:439–461,1995. Special issue honoring Miroslav Fiedler and Vlastimil Pták.[14] M. Laurent and S. Poljak. On the facial structure of the set of correlation matrices.
SIAM J. Matrix Anal. Appl. ,17(3):530–547, 1996.[15] Z.-Q. Luo, J. F. Sturm, and S. Zhang. Superlinear convergence of a symmetric primal-dual path following algorithm forsemidefinite programming.
SIAM J. Optim. , 8(1):59–81, 1998.[16] Yu. Nesterov, M. J. Todd, and Y. Ye. Infeasible-start primal-dual methods and infeasibility detectors for nonlinearprogramming problems.
Math. Program. , 84(2, Ser. A):227–267, 1999.[17] G. Pataki. The geometry of semidefinite programming. In
Handbook of semidefinite programming , volume 27 of
Internat.Ser. Oper. Res. Management Sci. , pages 29–65. Kluwer Acad. Publ., Boston, MA, 2000.[18] G. Pataki and L. Tunçel. On the generic properties of convex optimization problems in conic form.
Math. Program. , 89(3,Ser. A):449–457, 2001.[19] R. T. Rockafellar.
Convex analysis . Princeton Landmarks in Mathematics. Princeton University Press, Princeton, NJ, 1997.Reprint of the 1970 original, Princeton Paperbacks.[20] C. A. Rogers.
Hausdorff measures . Cambridge Mathematical Library. Cambridge University Press, Cambridge, 1998. Reprintof the 1970 original, With a foreword by K. J. Falconer.[21] A. Schrijver.
Combinatorial optimization , volume 24. Springer-Verlag, Berlin, 2003.[22] A. Shapiro and K. Scheinberg. Duality and optimality conditions. In
Handbook of semidefinite programming , volume 27 of
Internat. Ser. Oper. Res. Management Sci. , pages 67–110. Kluwer Acad. Publ., Boston, MA, 2000.[23] L. Tunçel and H. Wolkowicz. Strong duality and minimal representations for cone optimization.
Comput. Optim. Appl. ,53(2):619–648, 2012.
[24] Y. Ye, O. Güler, R. A. Tapia, and Y. Zhang. A quadratically convergent O(√(n)L)-iteration algorithm for linear programming. Math. Programming, 59(2, Ser. A):151–162, 1993.
[25] Y. Ye, M. J. Todd, and S. Mizuno. An O(√(n)L)-iteration homogeneous and self-dual linear programming algorithm. Math. Oper. Res., 19(1):53–67, 1994.

(Marcel K. de Carli Silva)
Instituto de Matemática e Estatística, Universidade de São Paulo
E-mail address : [email protected] (Levent Tunçel) Department of Combinatorics and Optimization, University of Waterloo
E-mail address: