On semidefinite programming relaxations of the traveling salesman problem
Etienne de Klerk ∗ Dmitrii V. Pasechnik † Renata Sotirov ‡ February 11, 2009
Abstract
We consider a new semidefinite programming (SDP) relaxation of the symmetric traveling salesman problem (TSP), that may be obtained via an SDP relaxation of the more general quadratic assignment problem (QAP). We show that the new relaxation dominates the one in the paper: [D. Cvetković, M. Čangalović and V. Kovačević-Vujčić. Semidefinite Programming Methods for the Symmetric Traveling Salesman Problem. In Proceedings of the 7th International IPCO Conference on Integer Programming and Combinatorial Optimization, 1999, 126–136, Springer-Verlag, London, UK.] Unlike the bound of Cvetković et al., the new SDP bound is not dominated by the Held-Karp linear programming bound, or vice versa.
Keywords: traveling salesman problem, semidefinite programming, quadratic assignment problem, association schemes
AMS classification:

JEL code: C61
Introduction

The quadratic assignment problem (QAP) may be stated in the following form:

  min_{X ∈ Π_n} trace(AXBX^T),      (1)

where A and B are given symmetric n × n matrices, and Π_n is the set of n × n permutation matrices.

∗ Department of Econometrics and OR, Tilburg University, The Netherlands. [email protected]
† School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore. [email protected]
‡ Department of Econometrics and OR, Tilburg University, The Netherlands. [email protected]
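To make formulation (1) concrete, the following Python sketch (not from the paper; it assumes numpy and uses illustrative data) minimizes trace(AXBX^T) by brute force over all n! permutation matrices, which is viable only for very small n:

```python
import itertools

import numpy as np

def qap_brute_force(A, B):
    """Minimize trace(A X B X^T) over all n x n permutation matrices X."""
    n = A.shape[0]
    best_val, best_perm = np.inf, None
    for perm in itertools.permutations(range(n)):
        X = np.eye(n)[list(perm)]  # permutation matrix with X[i, perm[i]] = 1
        val = np.trace(A @ X @ B @ X.T)
        if val < best_val:
            best_val, best_perm = val, perm
    return best_val, best_perm

# Illustrative symmetric data for n = 4 (B is the adjacency matrix of C_4).
A = np.array([[0., 3., 1., 2.],
              [3., 0., 1., 3.],
              [1., 1., 0., 2.],
              [2., 3., 2., 0.]])
B = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
val, perm = qap_brute_force(A, B)
```

Since B here is the adjacency matrix of a 4-cycle, this particular instance is already a (tiny) traveling salesman problem in the sense discussed next.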
It is well-known that the QAP contains the symmetric traveling salesman problem (TSP) as a special case. To show this, we denote the complete graph on n vertices, with edge lengths (weights) D_ij = D_ji > 0 (i ≠ j), by K_n(D), where D is called the matrix of edge lengths (weights). The TSP is to find a Hamiltonian circuit of minimum length in K_n(D). The n vertices are often called cities, and the Hamiltonian circuit of minimum length the optimal tour.

To see that TSP is a special case of QAP, let C denote the adjacency matrix of C_n (the standard circuit on n vertices), i.e. the symmetric 0-1 matrix with C_ij = 1 if and only if i - j ≡ ±1 (mod n):

  C :=
  [ 0 1 0 ⋯ 0 1 ]
  [ 1 0 1 ⋱   0 ]
  [ 0 1 0 ⋱ ⋱ ⋮ ]
  [ ⋮ ⋱ ⋱ ⋱ 1 0 ]
  [ 0   ⋱ 1 0 1 ]
  [ 1 0 ⋯ 0 1 0 ]

Now the TSP problem is obtained from the QAP problem (1) by setting A = D and B = C. To see this, note that every Hamiltonian circuit in a complete graph has adjacency matrix XCX^T for some X ∈ Π_n. Thus we may concisely state the TSP as

  TSP_opt := min_{X ∈ Π_n} trace(DXCX^T).      (2)

The symmetric TSP is NP-hard in the strong sense [20], and therefore so is the more general QAP. In the special case where the distance function of the TSP instance satisfies the triangle inequality (metric TSP), there is a celebrated 3/2-approximation algorithm due to Christofides [9].

Main results and outline of this paper
In this paper we will consider semidefinite programming (SDP) relaxations of the TSP. We will introduce a new SDP relaxation of the TSP in Section 2, that is motivated by the theory of association schemes. Subsequently, we will show in Section 3 that the new SDP relaxation coincides with the SDP relaxation for QAP introduced in [28] when applied to the QAP reformulation of TSP in (2). Then we will show in Section 4 that the new SDP relaxation dominates the relaxation due to Cvetković et al. [5]. The relaxation of Cvetković et al. is known to be dominated by the Held-Karp linear programming bound [6, 15], but we show in Section 5 that the new SDP bound is not dominated by the Held-Karp bound (or vice versa).

Notation
The space of p × q real matrices is denoted by R^{p×q}, the space of k × k symmetric matrices is denoted by S_k, and the space of k × k symmetric positive semidefinite matrices by S_k^+. We will sometimes also use the notation X ⪰ 0 instead of X ∈ S_k^+, if the order of the matrix is clear from the context. By diag(X) we mean the n-vector composed of the diagonal entries of X ∈ S_n.

We use I_n to denote the identity matrix of order n. Similarly, J_n and e_n denote the n × n all-ones matrix and the all-ones n-vector respectively, and 0_{n×n} is the zero matrix of order n. We will omit the subscript if the order is clear from the context.

The Kronecker product A ⊗ B of matrices A ∈ R^{p×q} and B ∈ R^{r×s} is defined as the pr × qs matrix composed of pq blocks of size r × s, with block ij given by A_ij B (i = 1, ..., p; j = 1, ..., q).

The Hadamard (component-wise) product of matrices A and B of the same size will be denoted by A ∘ B.

A new SDP relaxation of the TSP

In this section we show that the optimal value of the following semidefinite program provides a lower bound on the length
TSP_opt of an optimal tour:

  min  trace(DX^(1))
  subject to
    X^(k) ≥ 0,  k = 1, ..., d,
    ∑_{k=1}^d X^(k) = J - I,
    I + ∑_{k=1}^d cos(2πik/n) X^(k) ⪰ 0,  i = 1, ..., d,
    X^(k) ∈ S_n,  k = 1, ..., d,      (3)

where d = ⌊n/2⌋ is the diameter of C_n. Note that this problem involves nonnegative matrix variables X^(1), ..., X^(d) of order n. The matrix variables X^(k) have an interesting interpretation in terms of association schemes.

Association schemes
We will give a brief overview of this topic; for an introduction to association schemes, see Chapter 12 in [10], and, in the context of SDP, [11].
Definition 1 (Association scheme). Assume that a given set of n × n matrices B_0, ..., B_t has the following properties:

(1) B_i is a 0-1 matrix for all i, and B_0 = I;
(2) ∑_i B_i = J;
(3) B_i^T = B_{i*} for some i*;
(4) B_i B_j = B_j B_i for all i, j;
(5) B_i B_j ∈ span{B_0, ..., B_t}.

Then we refer to {B_0, ..., B_t} as an association scheme. If the B_i's are also symmetric, then we speak of a symmetric association scheme.

Note that item (4) (commutativity) implies that the matrices B_0, ..., B_t share a common set of eigenvectors, and therefore can be simultaneously diagonalized. Note also that an association scheme is a basis of a matrix ∗-algebra (viewed as a vector space). Moreover, one clearly has

  trace(B_i B_j^T) = 0 if i ≠ j,

since the B_i's are 0-1 matrices with disjoint supports. Since the B_i's share a system of eigenvectors, there is a natural ordering of their eigenvalues with respect to any fixed ordering of the eigenvectors. Thus the last equality may be interpreted as:

  ∑_k λ_k(B_i) λ_k(B_j) = 0 if i ≠ j,      (4)

where the λ_k(B_i)'s are the eigenvalues of B_i with respect to the fixed ordering.

The association scheme of particular interest to us arises as follows. Given a connected graph G = (V, E) with diameter d, we define |V| × |V| matrices A^(k) (k = 1, ..., d) as follows:

  A^(k)_ij = 1 if dist(i, j) = k, and A^(k)_ij = 0 otherwise  (i, j ∈ V),

where dist(i, j) is the length of the shortest path from i to j. Note that A^(1) is simply the adjacency matrix of G. Moreover, one clearly has

  I + ∑_{k=1}^d A^(k) = J.

It is well-known that, for G = C_n, the matrices A^(k) (k = 1, ..., d ≡ ⌊n/2⌋) together with A^(0) := I form an association scheme, since C_n is a distance regular graph.

It is shown in the Appendix to this paper that, for G = C_n, the eigenvalues of the matrix A^(k) are:

  λ_m(A^(k)) = 2 cos(2πmk/n),  m = 0, ..., n-1,  k = 1, ..., ⌊(n-1)/2⌋,

and, if n is even,

  λ_m(A^(n/2)) = cos(mπ) = (-1)^m.

In particular, we have

  λ_m(A^(k)) = λ_k(A^(m)),  k, m = 1, ..., ⌊(n-1)/2⌋.      (5)

Also note that

  λ_m(A^(k)) = λ_{n-m}(A^(k)),  k, m = 1, ..., ⌊(n-1)/2⌋,      (6)

so that each matrix A^(k) (k = 1, ..., d) has only 1 + ⌊n/2⌋ distinct eigenvalues.

Verifying the SDP relaxation (3)

We now show that setting X^(k) = A^(k) (k = 1, ..., d) gives a feasible solution of (3). We only need to verify that

  I + ∑_{k=1}^d cos(2πik/n) A^(k) ⪰ 0,  i = 1, ..., d.

We will show this for odd n, the proof for even n being similar. Since the A^(k)'s may be simultaneously diagonalized, the last LMI is the same as:

  2 + ∑_{k=1}^d λ_k(A^(i)) λ_j(A^(k)) ≥ 0,  i, j = 1, ..., d,

and by using (5) this becomes:

  2 + ∑_{k=1}^d λ_k(A^(i)) λ_k(A^(j)) ≥ 0,  i, j = 1, ..., d.

Since λ_0(A^(i)) = 2 (i = 1, ..., d), and using (4), one can easily verify that the last inequality holds. Indeed, one has

  2 + ∑_{k=1}^d λ_k(A^(i)) λ_k(A^(j))
    = 2 + (1/2) ∑_{k=1}^{n-1} λ_k(A^(i)) λ_k(A^(j))      (by (6))
    = 2 - (1/2) λ_0(A^(i)) λ_0(A^(j)) + (1/2) ∑_{k=0}^{n-1} λ_k(A^(i)) λ_k(A^(j))
    = 0                                            if i ≠ j  (by (4)),
    = 2 - 2 + (1/2) ∑_{k=0}^{n-1} (λ_k(A^(i)))^2 ≥ 0  if i = j.

Thus we have established the following result.
Theorem 1. The optimal value of the SDP problem (3) provides a lower bound on the optimal value TSP_opt of the associated TSP instance.
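The feasibility argument behind Theorem 1 is easy to check numerically for small n: build the distance matrices A^(k) of C_n and test the linear matrix inequalities in (3) directly. A Python sketch (not from the paper; assumes numpy, with n = 7 as an illustrative choice):

```python
import numpy as np

n = 7
d = n // 2  # diameter of the cycle C_n

def circular_dist(i, j):
    """Graph distance between vertices i and j on the cycle C_n."""
    return min((i - j) % n, (j - i) % n)

# Distance matrices A^(k): entry (i, j) is 1 iff dist(i, j) = k.
A = {k: np.array([[1.0 if circular_dist(i, j) == k else 0.0
                   for j in range(n)] for i in range(n)])
     for k in range(1, d + 1)}

# Scheme identity: I + sum_k A^(k) = J.
assert np.allclose(np.eye(n) + sum(A.values()), np.ones((n, n)))

# The LMIs of (3) with X^(k) = A^(k):
#   I + sum_k cos(2*pi*i*k/n) * A^(k) is positive semidefinite, i = 1..d.
for i in range(1, d + 1):
    M = np.eye(n) + sum(np.cos(2 * np.pi * i * k / n) * A[k]
                        for k in range(1, d + 1))
    assert np.linalg.eigvalsh(M).min() >= -1e-9
```

In agreement with the proof, the smallest eigenvalue is exactly zero for some of these matrices, so the LMIs are tight.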
The SDP relaxation of [28] applied to the TSP

An SDP relaxation of the QAP problem (1) was introduced in [28], and further studied for specially structured instances in [7]. When applied to the QAP reformulation of TSP in (2), this SDP relaxation takes the form:

  min  trace((C ⊗ D)Y)
  subject to
    trace((I ⊗ (J-I))Y + ((J-I) ⊗ I)Y) = 0,
    trace(Y) - 2e^T y = -n,
    [ 1  y^T ]
    [ y   Y  ] ⪰ 0,  Y ≥ 0.      (7)

It is easy to verify that this is indeed a relaxation of problem (2), by noting that setting Y = vec(X)vec(X)^T and y = diag(Y) gives a feasible solution if X ∈ Π_n.

In this section we will show that the optimal value of the SDP problem (7) actually equals the optimal value of the new SDP relaxation (3). The proof is via the technique of symmetry reduction.

Symmetry reduction of the SDP problem (7)
Consider the following form of a general semidefinite programming problem:

  p* := min_{X ⪰ 0, X ≥ 0} { trace(A_0 X) : trace(A_k X) = b_k, k = 1, ..., m },      (8)

where the A_i (i = 0, ..., m) are given symmetric matrices. If we view (7) as an SDP problem in the form (8), the data matrices of problem (7) are:

  [ 0  0^T  ]    [ 0  0^T                  ]    [  0  -e^T ]    [ 1  0^T ]
  [ 0  C ⊗ D ],  [ 0  I ⊗ (J-I) + (J-I) ⊗ I ],  [ -e   I   ],   [ 0   0  ].      (9)

Definition 2.
We define the automorphism group of a matrix Z ∈ R^{k×k} as

  aut(Z) = { P ∈ Π_k : PZP^T = Z }.

Symmetry reduction of problem (8) is possible under the assumption that the multiplicative matrix group

  G := ∩_{i=0}^m aut(A_i)

is non-trivial. We call G the symmetry group of the SDP problem (8). For the matrices (9), the group G is given by the matrices

  G := { [ 1  0^T ; 0  P ⊗ I ] : P ∈ D_n },      (10)

where D_n is the (permutation matrix representation of the) dihedral group of order 2n, i.e. the automorphism group of C_n.

The basic idea of symmetry reduction is given by the following result.

Theorem 2 (see e.g. [8]). If X is a feasible (resp. optimal) solution of the SDP problem (8) with symmetry group G, then

  X̄ := (1/|G|) ∑_{P ∈ G} P^T X P

is also a feasible (resp. optimal) solution of (8).

Thus there exist optimal solutions in the set

  A_G := { (1/|G|) ∑_{P ∈ G} P^T X P : X ∈ R^{(n²+1)×(n²+1)} }.

This set is called the centralizer ring (or commutant) of G and it is a matrix ∗-algebra. For the group defined in (10), it is straightforward to verify that the centralizer ring is given by (spans of) the matrices

  A_G := { [ α  x^T ; y  C ⊗ Z ] : α ∈ R, C = C^T circulant, Z ∈ R^{n×n}, x, y ∈ R^{n²} },      (11)

where x^T = [x_1 e^T ⋯ x_n e^T] and y^T = [y_1 e^T ⋯ y_n e^T] for some scalars x_i and y_i (i = 1, ..., n), and where e ∈ R^n is the all-ones vector, as before. Thus we may restrict the feasible set of problem (7) to feasible solutions of the form (11).

If we divide y and Y in (7) into blocks:

  y = ( (y^(1))^T ⋯ (y^(n))^T )^T,  and  Y = [ Y^(11) ⋯ Y^(1n) ; ⋮ ⋱ ⋮ ; Y^(n1) ⋯ Y^(nn) ],

where y^(i) ∈ R^n and Y^(ij) = (Y^(ji))^T ∈ R^{n×n}, then feasible solutions of (7) satisfy

  [ 1      (y^(1))^T ⋯ (y^(n))^T ]
  [ y^(1)   Y^(11)   ⋯  Y^(1n)   ]
  [ ⋮         ⋮      ⋱    ⋮      ]
  [ y^(n)   Y^(n1)   ⋯  Y^(nn)   ] ⪰ 0.
(12)

Feasible solutions have the following additional structure (see [28] and Theorem 3.1 in [7]):

• Y^(ii) (i = 1, ..., n) is a diagonal matrix;
• Y^(ij) (i ≠ j) is a matrix with zero diagonal;
• trace(JY^(ij)) = 1 (i, j = 1, ..., n);
• ∑_{i=1}^n Y^(ij) = e (y^(j))^T (j = 1, ..., n);
• diag(Y) = y.

Since diag(Y) = y for feasible solutions, we have y^(i) = diag(Y^(ii)) (i = 1, ..., n). Moreover, since we may also assume the structure (11), we have that

  y^(i) = y_i e  (i = 1, ..., n),

for some scalar values y_i. This implies that the diagonal elements of Y^(ii) all equal y_i. Since the diagonal elements of Y^(ii) sum to 1, we have y_i = 1/n and diag(Y^(ii)) = (1/n)e. Thus the condition

  [ 1  y^T ; y  Y ] ⪰ 0

becomes

  Y - (1/n²) J ⪰ 0,

or, equivalently,

  (I ⊗ Q*) Y (I ⊗ Q) - (1/n²) (I ⊗ Q*) J (I ⊗ Q) ⪰ 0,

where Q is the discrete Fourier transform matrix defined in (25) in the Appendix. Using the properties of the Kronecker product and of Q we get

  [ Q*Y^(11)Q ⋯ Q*Y^(1n)Q ; ⋮ ⋱ ⋮ ; Q*Y^(n1)Q ⋯ Q*Y^(nn)Q ] - (1/n) J ⊗ e_1 e_1^T ⪰ 0,

where e_1 denotes the first standard basis vector. Recall that Y^(ii) = (1/n) I and that we may assume Y^(ij) (i ≠ j) to be symmetric circulant, say

  Y^(ij) = ∑_{k=1}^d x_k^(ij) C^(k)  (i ≠ j),

where the C^(k) (k = 1, ..., d) form a basis of the symmetric circulant matrices with zero diagonal (see the Appendix for the precise definition). Note that the nonnegativity of Y^(ij) is equivalent to x_k^(ij) ≥ 0 (k = 1, ..., d). Since trace(JY^(ij)) = 1 one has

  ∑_{k=1}^d x_k^(ij) = 1/(2n)  (i ≠ j).

Since ∑_{i=1}^n Y^(ij) = e (y^(j))^T = (1/n) J, one also has

  ∑_{k=1}^d ∑_{i=1}^n x_k^(ij) C^(k) = (1/n) J.
By the definition of the C^(k)'s, this implies that

  ∑_{i=1}^n x_k^(ij) = 1/n   if 1 ≤ k ≤ ⌊(n-1)/2⌋,  and  = 1/(2n)  if k = n/2 (n even).      (13)

Moreover,

  Q* Y^(ij) Q = ∑_{k=1}^d x_k^(ij) D^(k)  (i ≠ j),

where D^(k) is the diagonal matrix with the eigenvalues (26) of C^(k) on its diagonal. Thus the LMI becomes

  [ (1/n) I             ⋯  ∑_{k=1}^d x_k^(1n) D^(k) ]
  [ ⋮                   ⋱  ⋮                        ]
  [ ∑_{k=1}^d x_k^(n1) D^(k)  ⋯  (1/n) I            ]  -  (1/n) J ⊗ e_1 e_1^T  ⪰ 0.      (14)

The left hand side of this LMI is a block matrix with each block being a diagonal matrix. Thus this matrix has a chordal sparsity structure (n disjoint cliques of size n). We may now use the following lemma to obtain the system of LMIs (3).

Lemma 1 (cf. [14]). Assume an nt × nt matrix has the block structure

  M := [ D^(11) ⋯ D^(1n) ; ⋮ ⋱ ⋮ ; D^(n1) ⋯ D^(nn) ],

where the D^(ij) ∈ S_t are diagonal (i, j = 1, ..., n). Then

  M ⪰ 0  ⟺  [ D^(11)_ii ⋯ D^(1n)_ii ; ⋮ ⋱ ⋮ ; D^(n1)_ii ⋯ D^(nn)_ii ] ⪰ 0,  i = 1, ..., t.

Applying the lemma to the LMI (14), and setting

  X^(k)_ij = 2n x_k^(ij),  k = 1, ..., ⌊n/2⌋,      (15)

yields the system of LMIs in (3). Thus we have established the following result.

Theorem 3.
The optimal values of the semidefinite programs (3) and (7) are equal.
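The group-averaging step (Theorem 2) can be illustrated numerically. In the sketch below (not from the paper; assumes numpy) the dihedral group acts directly on n points rather than through the representation (10): averaging a random positive semidefinite matrix over the group preserves positive semidefiniteness and lands in the centralizer ring, which here consists of the symmetric circulant matrices.

```python
import numpy as np

n = 5

def perm_matrix(p):
    """Permutation matrix P with P[i, p[i]] = 1."""
    return np.eye(n)[list(p)]

# Dihedral group D_n acting on {0, ..., n-1}: n rotations and n reflections.
rotations = [[(i + s) % n for i in range(n)] for s in range(n)]
reflections = [[(s - i) % n for i in range(n)] for s in range(n)]
group = [perm_matrix(p) for p in rotations + reflections]

# Average a random positive semidefinite matrix over the group (Theorem 2).
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))
X = W @ W.T  # positive semidefinite "solution"
X_bar = sum(P.T @ X @ P for P in group) / len(group)

# The average is still positive semidefinite ...
assert np.linalg.eigvalsh(X_bar).min() >= -1e-9
# ... and lies in the centralizer ring: it commutes with every group element.
for P in group:
    assert np.allclose(P @ X_bar, X_bar @ P)
```

Since D_n is transitive on the ordered pairs at each circular distance, the entries of X_bar depend only on the distance between their indices, i.e. X_bar is symmetric circulant.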
Relation of (3) to an SDP relaxation of Cvetković et al.
We will now show that the new SDP relaxation (3) dominates an SDP relaxation (16) due to Cvetković et al. [5]. This latter relaxation is based on the fact that the spectrum of the Hamiltonian circuit C_n is known. In particular, the smallest eigenvalue of its Laplacian is zero and corresponds to the all-ones eigenvector, while the second smallest eigenvalue equals 2 - 2cos(2π/n). The relaxation takes the form:

  TSP_opt ≥ min  (1/2) trace(DX)
  subject to
    Xe = 2e,
    diag(X) = 0,
    0 ≤ X ≤ J,
    2I - X + (2 - 2cos(2π/n))(J - I) ⪰ 0.      (16)

Note that the matrix variable X corresponds to the adjacency matrix of the minimal length Hamiltonian circuit.

Theorem 4.
The SDP relaxation (3) dominates the relaxation (16).
Proof.
Assume that the given X^(k) ∈ S_n (k = 1, ..., d) satisfy (3). Then diag(X^(1)) = 0, while (13) and (15) imply

  X^(k) e = 2e  (k = 1, ..., ⌊(n-1)/2⌋),

and X^(n/2) e = e if n is even. In particular, one has X^(1) e = 2e. It remains to show that

  2I - X^(1) + (2 - 2cos(2π/n))(J - I) ⪰ 0,

which is the same as showing that

  2I - X^(1) + (2 - 2cos(2π/n)) ∑_{k=1}^d X^(k) ⪰ 0,      (17)

since ∑_{k=1}^d X^(k) = J - I. We will show that the LMI (17) may be obtained as a nonnegative aggregation of the LMIs

  I + ∑_{k=1}^d X^(k) ⪰ 0,  and  I + ∑_{k=1}^d cos(2πik/n) X^(k) ⪰ 0  (i = 1, ..., d).

The matrix of coefficients of these LMIs is a (d+1) × (d+1) matrix, say A, with entries:

  A_ij = cos(2πij/n)  (i, j = 0, ..., d).

Since we may rewrite (17) as

  2I + (1 - 2cos(2π/n)) X^(1) + (2 - 2cos(2π/n)) ∑_{k=2}^d X^(k) ⪰ 0,

we need to show that the linear system Ax = b has a nonnegative solution, where

  b := [ 2, 1 - 2cos(2π/n), 2 - 2cos(2π/n), ..., 2 - 2cos(2π/n) ]^T.

One may verify that, for n odd, the system Ax = b has a (unique) solution given by

  x_0 = (4/n) d (1 - cos(2π/n)),  x_i = (4/n) (cos(2π/n) - cos(2πi/n))  for i = 1, ..., d.

Note that x is nonnegative, as it should be. If n is even, the solution is

  x_0 = (4/n) ((n-1)/2) (1 - cos(2π/n)),
  x_i = (4/n) (cos(2π/n) - cos(2πi/n))  for i = 1, ..., d-1,
  x_d = (2/n) (cos(2π/n) - cos(2πd/n)).

In the section with numerical examples, we will present instances where the new SDP relaxation (3) is strictly better than (16).

Comparison with the Held-Karp bound

One of the best-known linear programming (LP) relaxations of TSP is the LP with sub-tour elimination constraints:
  TSP_opt ≥ min  (1/2) trace(DX)
  subject to
    Xe = 2e,
    diag(X) = 0,
    0 ≤ X ≤ J,
    ∑_{i ∈ I, j ∉ I} X_ij ≥ 2  for all ∅ ≠ I ⊂ {1, ..., n}.      (18)

This LP relaxation dates back to 1954 and is due to Dantzig, Fulkerson and Johnson [6]. Its optimal value coincides with the LP bound of Held and Karp [15] (see e.g. Theorem 21.34 in [16]), and the optimal value of the LP is commonly known as the Held-Karp bound.

The last constraints are called sub-tour elimination inequalities and model the fact that C_n is 2-connected. Although there are exponentially many sub-tour elimination inequalities, it is well-known that the LP (18) may be solved in polynomial time using the ellipsoid method; see e.g. Schrijver [24].

Theorem 5.
The LP sub-tour elimination relaxation (18) does not dominate the new SDP relaxation (3), or vice versa.
Proof.
Define the 8 × 8 matrix X̄ as the weighted adjacency matrix of the graph shown in Figure 1.

[Figure 1: The weighted graph used in the proof of Theorem 5.]

The matrix X̄ satisfies the sub-tour elimination inequalities, since the minimum cut in the graph in Figure 1 has weight 2. On the other hand, there does not exist a feasible solution of (3) that satisfies X^(1) = X̄, as may be shown using SDP duality theory.

Conversely, in Section 7 we will provide examples where the optimal value of (18) is strictly greater than the optimal value of (3) (see e.g. the instances gr17, gr24 and bays29 there).

In addition to the sub-tour elimination inequalities, there are several families of linear inequalities known for the TSP polytope; for a review, see Naddef [18] and Schrijver [24], Chapter 58.

Of particular interest to us is a valid nonlinear inequality that models the fact that C_n has n distinct spanning trees. To introduce the inequality we require a general form of the matrix tree theorem; see e.g. Theorem VI.29 in [26] for a proof.
Theorem 6 (Matrix tree theorem). Let a simple graph G = (V, E) be given and associate with each edge e ∈ E a real variable x_e. Define the (generalized) Laplacian of G with respect to x as the |V| × |V| matrix with entries

  L(G)(x)_ij := ∑_{e : i ∈ e} x_e if i = j,  -x_e if {i, j} = e ∈ E,  and 0 otherwise.

Then all principal minors of L(G)(x) of order |V| - 1 equal

  ∑_T ∏_{e ∈ T} x_e,      (19)

where the sum is over all distinct spanning trees T of G.

In particular, if L(G)(x) is the usual Laplacian of a given graph, then x_e = 1 for all edges e of the graph, and expression (19) evaluates to the number of spanning trees in the graph. Thus if X corresponds to the approximation of the adjacency matrix of a minimum tour, then one may require that:

  det((2I - X)_{1,1}) ≥ n,      (20)

where Z_{1,1} denotes the principal submatrix of Z obtained by deleting the first row and column.

The inequality (20) may be added to the above SDP relaxations (16) and (3) (with X = X^(1)), since the set {Z ⪰ 0 : det(Z) ≥ n} is LMI representable; see e.g. Nemirovski [19]. In particular, (20) may be imposed on the matrix variable X^(1) of the new relaxation (3). Nevertheless, we have been unable to show that (20) (with X = X^(1)) is implied by (3).

Numerical results

In Table 1 we give the lower bounds on some small TSPLIB instances for the two SDP relaxations (3) and (16), as well as the LP relaxation with all sub-tour elimination constraints (18) (the Held-Karp bound). These instances are taken from the TSPLIB library.
  Problem  SDP bound (16)  SDP bound (3) (time)  LP bound (18)  TSP_opt
  gr17     1810            2007 (39s)            2085           2085
  gr21     2707            2707 (139s)           2707           2707
  gr24     1230            1271 (1046s)          1272           1272
  bays29   1948            2000 (2863s)          2014           2020
Table 1: Lower bounds on some small TSPLIB instances from various convex relaxations.

Note that the relaxation (3) can indeed be strictly better than (16), as is clear from the gr17, gr24 and bays29 instances. Also, since the LP relaxation (18) gives better bounds than (3) for all four instances, it is worth recalling that this will not happen in general, by Theorem 5. The LMI cut from (20) was already satisfied by the optimal solutions of (16) and (3) for the four instances.

A second set of test problems was generated by considering all facet defining inequalities for the TSP polytope on 8 nodes; see [3] for a description of these inequalities, as well as the SMAPO project web site. The facet defining inequalities are of the form trace(DX) ≥ RHS, where D ∈ S_n has nonnegative integer entries and RHS is an integer. From each inequality, we form a symmetric TSP instance with distance matrix D. Thus the optimal value of the TSP instance is the value RHS. In Table 2 we give the optimal values of the LP relaxation (18) (i.e. the Held-Karp bound), the SDP relaxation of Cvetković et al. (16), and the new SDP relaxation (3) for these instances, as well as the right-hand-side
RHS of each inequality trace(DX) ≥ RHS. For n = 8, there are 24 classes of facet-defining inequalities. The members of each class are equal modulo a permutation of the nodes, and we therefore need only consider one representative per class. The first three classes of inequalities are sub-tour elimination inequalities. The numbering of the instances in Table 2 coincides with the numbering of the classes of facet defining inequalities on the SMAPO project web site.

The new SDP bound (3) is only stronger than the Held-Karp bound (18) for the instances 16, 21 and 23 in Table 2, and for the instances 1, 5, 9, 12, 14, 17, 18, 19 and 22 the two bounds coincide. For the remaining instances the Held-Karp bound is better than the SDP bound (3). However, if the bounds are rounded up, the SDP bound (3) is still better for the instances 16, 21 and 23, whereas the two (rounded) bounds are equal for all the other instances. Adding the LMI cut from (20) did not change the optimal values of the SDP relaxations (16) or (3) for any of the instances.

  Inequality  SDP bound (16)  SDP bound (3)  Held-Karp bound (18)  RHS
  1           2               2              2                     2
  2           1.098           1.628          2                     2
  3           1.172           1.172          2                     2
  4           8.507           8.671          9                     10
  5           9               9              9                     10
  6           8.566           8.926          9                     10
  7           8.586           8.586          9                     10
  8           8.570           8.926          9                     10
  9           9               9              9                     10
  10          8.411           8.902          9                     10
  11          8.422           8.899          9                     10
  12          0               0              0                     0
  13          10.586          10.667         11                    12
  14          12              12             12                    13
  15          12.408          12.444

Table 2: Results for instances on n = 8 cities, constructed from the facet defining inequalities.

For n = 9, there are 192 classes of facet defining inequalities of the TSP polytope [4]. Here the SDP bound (3) is better than the Held-Karp bound for 23 out of the 192 associated TSP instances. Similar to the n = 8 case, when rounding up, the rounded SDP bound remains better in all 23 cases and coincides with the rounded Held-Karp bound in all the remaining cases.

Wolsey [27] showed that the optimal value of the LP relaxation (18) is at least 2/3 of the length of an optimal tour.

Acknowledgements
Etienne de Klerk would like to thank Dragoš Cvetković and Vera Kovačević-Vujčić for past discussions on the SDP relaxation (16). The authors would also like to thank an anonymous referee for suggestions that led to a significant improvement of this paper.
Appendix: Circulant matrices
Our discussion of circulant matrices is condensed from the review paper by Gray [13]. A circulant matrix has the form

  C =
  [ c_0     c_1  c_2  ⋯    c_{n-1} ]
  [ c_{n-1} c_0  c_1  ⋯    c_{n-2} ]
  [ ⋮       ⋱    ⋱    ⋱    ⋮       ]
  [ c_1     c_2  ⋯    c_{n-1}  c_0 ].      (21)

Thus the entries satisfy the relation

  C_ij = c_{(j-i) mod n}.      (22)

The matrix C has eigenvalues

  λ_m(C) = c_0 + ∑_{k=1}^{n-1} c_k e^{-2π√-1 mk/n},  m = 0, ..., n-1.

If C is symmetric with n odd, this reduces to

  λ_m(C) = c_0 + 2 ∑_{k=1}^{(n-1)/2} c_k cos(2πmk/n),  m = 0, ..., n-1,      (23)

and when n is even we have

  λ_m(C) = c_0 + 2 ∑_{k=1}^{n/2-1} c_k cos(2πmk/n) + c_{n/2} cos(mπ),  m = 0, ..., n-1.      (24)

The circulant matrices form a commutative matrix ∗-algebra, as do the symmetric circulant matrices. In particular, all circulant matrices share a set of eigenvectors, given by the columns of the discrete Fourier transform matrix:

  Q_ij := (1/√n) e^{-2π√-1 ij/n},  i, j = 0, ..., n-1.      (25)

One has Q*Q = I, and Q*CQ is a diagonal matrix for any circulant matrix C. Also note that Q*e = √n e_1, where e_1 denotes the first standard basis vector.

We may define a basis C^(0), ..., C^(⌊n/2⌋) for the symmetric circulant matrices as follows: to obtain C^(k) we set c_k = c_{n-k} = 1 in (21) and all other c_j's to zero. (We set C^(0) = 2I and also multiply C^(n/2) by 2 if n is even.)

By (23) and (24), the eigenvalues of these basis matrices are:

  λ_m(C^(k)) = 2 cos(2πmk/n),  m = 0, ..., n-1,  k = 0, ..., ⌊n/2⌋.      (26)

Also note that

  λ_m(C^(k)) = λ_{n-m}(C^(k)),  m = 1, ..., n-1,  k = 0, ..., ⌊n/2⌋,

so that each matrix C^(k) has only 1 + ⌊n/2⌋ distinct eigenvalues.

References

[1] S. Arora. Polynomial Time Approximation Schemes for Euclidean Traveling Salesman and other Geometric Problems.
Journal of the ACM, 45(5):753–782, 1998.

[2] B. Borchers. CSDP, a C library for semidefinite programming. Optimization Methods and Software, 11(1-4):613–623, 1999.

[3] T. Christof, M. Jünger, and G. Reinelt. A Complete Description of the Traveling Salesman Polytope on 8 Nodes. Operations Research Letters, 497–500, 1991.

[4] T. Christof and G. Reinelt. Combinatorial optimization and small polytopes. TOP, (1):1–53, 1996.

[5] D. Cvetković, M. Čangalović, and V. Kovačević-Vujčić. Semidefinite Programming Methods for the Symmetric Traveling Salesman Problem. In Proceedings of the 7th International IPCO Conference on Integer Programming and Combinatorial Optimization, pages 126–136, Springer-Verlag, London, UK, 1999.

[6] G.B. Dantzig, D.R. Fulkerson and S.M. Johnson. Solution of a large-scale traveling salesman problem. Operations Research, 2:393–410, 1954.

[7] E. de Klerk and R. Sotirov. Exploiting Group Symmetry in Semidefinite Programming Relaxations of the Quadratic Assignment Problem. CentER Discussion Paper 2007-44, Tilburg University, The Netherlands, 2007. Available at: http://arno.uvt.nl/show.cgi?fid=60929

[8] K. Gatermann and P.A. Parrilo. Symmetry groups, semidefinite programs, and sums of squares. Journal of Pure and Applied Algebra, 192:95–128, 2004.

[9] N. Christofides. Worst-case analysis of a new heuristic for the travelling salesman problem. Technical Report 388, Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh, 1976.

[10] C. Godsil. Algebraic Combinatorics. Chapman and Hall, 1993.

[11] M.X. Goemans and F. Rendl. Semidefinite Programs and Association Schemes. Computing, 63:331–340, 1999.

[12] M.X. Goemans and F. Rendl. Combinatorial Optimization. In: Handbook of Semidefinite Programming: Theory, Algorithms and Applications, H. Wolkowicz, R. Saigal and L. Vandenberghe, eds., Kluwer, 2000.

[13] R.M. Gray. Toeplitz and Circulant Matrices: A review. Foundations and Trends in Communications and Information Theory, 2(3):155–239, 2006.

[14] R. Grone, C.R. Johnson, E.M. Sá, and H. Wolkowicz. Positive definite completions of partial Hermitian matrices. Linear Algebra and its Applications, 58:109–124, 1984.

[15] M. Held and R.M. Karp. The traveling-salesman problem and minimum spanning trees. Operations Research, 18(6):1138–1162, 1970.

[16] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms, 4th edition. Algorithms and Combinatorics 21, Springer-Verlag, 2008.

[17] J. Löfberg. YALMIP: A Toolbox for Modeling and Optimization in MATLAB. Proceedings of the CACSD Conference, Taipei, Taiwan, 2004. http://control.ee.ethz.ch/~joloef/yalmip.php

[18] D. Naddef. Polyhedral theory and branch-and-cut algorithms for the TSP. In G. Gutin and A.P. Punnen (eds), The Traveling Salesman Problem and Its Variations, Kluwer Academic Publishers, 2002.

[19] A. Nemirovski. Lectures on modern convex optimization. Lecture notes, Georgia Tech, 2005. Available at: ~nemirovs/Lect ModConvOpt.pdf

[20] P. Orponen and H. Mannila. On approximation preserving reductions: Complete problems and robust measures. Technical Report C-1987-28, Department of Computer Science, University of Helsinki, 1987.

[21] C.H. Papadimitriou and S. Vempala. On the approximability of the traveling salesman problem. In Proceedings of the 32nd Annual ACM Symposium on Theory of Computing, 2000.

[22] A. Schrijver. A comparison of the Delsarte and Lovász bounds. IEEE Transactions on Information Theory, 25:425–429, 1979.

[23] A. Schrijver. New code upper bounds from the Terwilliger algebra. IEEE Transactions on Information Theory, 51:2859–2866, 2005.

[24] A. Schrijver. Combinatorial Optimization – Polyhedra and Efficiency, Volume 2. Springer-Verlag, Berlin, 2003.

[25] D.B. Shmoys and D.P. Williamson. Analyzing the Held-Karp TSP bound: a monotonicity property with application. Information Processing Letters, 35(6):281–285, 1990.

[26] W.T. Tutte. Graph Theory. Addison-Wesley, 1984.

[27] L. Wolsey. Heuristic analysis, linear programming, and branch and bound. Mathematical Programming Study, 13:121–134, 1980.

[28] Q. Zhao, S.E. Karisch, F. Rendl, and H. Wolkowicz. Semidefinite Programming Relaxations for the Quadratic Assignment Problem. Journal of Combinatorial Optimization, 2:71–109, 1998.