Balanced Combinations of Solutions in Multi-Objective Optimization
aa r X i v : . [ c s . D S ] J u l Balanced Combinations of Solutionsin Multi-Objective Optimization
Christian Glaßer Christian Reitwießner Maximilian WitekJulius-Maximilians-Universit¨at W¨urzburg, Germany { glasser,reitwiessner,witek } @informatik.uni-wuerzburg.de Abstract
For every list of integers x , . . . , x m there is some j such that x + · · · + x j − x j +1 − · · · − x m ≈ x i are k -dimensional integer vectors ? Using results fromtopological degree theory we show that balancing is still possible, now with k alternations.This result is useful in multi-objective optimization, as it allows a polynomial-time com-putable balance of two alternatives with conflicting costs. The application to two multi-objectiveoptimization problems yields the following results: • A randomized / -approximation for multi-objective maximum asymmetric traveling sales-man, which improves and simplifies the best known approximation for this problem. • A deterministic / -approximation for multi-objective maximum weighted satisfiability. Balancing Sums of Vectors.
Suppose we are given a sequence of goods g , . . . , g m , each of whichhas a value, a weight, and a size. Is it possible to distribute the goods on two trucks such thatthe loads are nearly the same with respect to value, weight, and size? We show that this is alwayspossible by a very easy partition: For suitable indices i, j, k, l , assign g i , g i +1 , . . . , g j , g k , g k +1 , . . . , g l to the first truck and the remaining goods to the second one. In general, if the goods have 2 k criteria (value, weight, size, . . . ), then there exist k intervals of goods such that the goods insideand the goods outside of the intervals are nearly equivalent with respect to all criteria.More formally, let x , . . . , x m ∈ N k be vectors of natural numbers that represent the criteria ofeach good, and let z ∈ N k be an upper bound for these vectors (i.e., x i ≤ z for all i ). Lemma 2.6provides intervals I , . . . , I k ⊆ N such that for I = S ki =1 I i , − kz ≤ X i ∈ I x i − X i/ ∈ I x i ≤ kz, where the ≤ hold with respect to each component. The same is true if x , . . . , x m ∈ Z k are vectorsof integers , where − z ≤ x i ≤ z for all i (Corollary 2.7). The proofs of these balancing results are1ased on the Odd Mapping Theorem, a result from topological degree theory, which we apply in adiscrete setting. The discretization is responsible for the term 4 kz , which is caused by a roundingerror that unavoidably occurs at the boundaries of the intervals I , . . . , I k .The simplicity of the desired partition (i.e., a union of k intervals) is important for the applicationof our balancing results. Algorithmically, it means that for fixed dimension 2 k , the right choice forthe intervals I , . . . , I k can be found by exhaustive search in time polynomial in m . Multi-Objective Optimization.
Many real-life optimization problems have multiple objectivesthat cannot be easily combined into a single value. Thus, one is interested in solutions that aregood with respect to all objectives at the same time. For conflicting objectives we cannot hopefor a single optimal solution, but there will be trade-offs. The
Pareto set captures the notion ofoptimality in this setting. It consists of all solutions that are optimal in the sense that there isno solution that is at least as good in all objectives and better in at least one objective. So thePareto set contains all optimal decisions for a given situation. For a general introduction to multi-objective optimization we refer to the survey by Ehrgott and Gandibleux [EG00] and the textbookby Ehrgott [Ehr05].For many problems, the Pareto set has exponential size and hence cannot be computed in poly-nomial time. Regarding the approximability of Pareto sets, Papadimitriou and Yannakakis [PY00]show that every Pareto set has a (1 − ε )-approximation of size polynomial in the size of the instanceand / ε (for the formal definition of approximation see section 3.1). Hence, even though a Pareto setmight be an exponentially large object, there always exists a polynomial-size approximation. Thisclears the way for a general investigation of the approximability of Pareto sets of multi-objectiveoptimization problems.In general, inapproximability and hardness results directly translate from single-objective opti-mization problems to their multi-objective variants. On the other hand, existing approximationalgorithms for single-objective problems can not always be used for multi-objective approxima-tion. Using our balancing results we demonstrate a translation of single-objective approximationideas to the multi-objective case: We obtain a randomized / -approximation for multi-objectivemaximum asymmetric TSP and a deterministic / -approximation for multi-objective maximumweighted satisfiability. Traveling Salesman Problem.
The (single-objective) maximum asymmetric traveling salesmanproblem (
MaxATSP , for short) is the optimization problem where on input of a complete directedgraph with edge weights from N the goal is to find a Hamiltonian cycle of maximum weight. Enge-bretsen and Karpinski [EK01] show that MaxATSP cannot be (319 /
320 + ε )-approximated (unlessP=NP). In 1979, Fisher, Nemhauser and Wolsey [FNW79] gave a / -approximation algorithm forMaxATSP (remove the lightest edge from each cycle of a maximum cycle cover and connect theremaining parts to a Hamiltonian cycle). Since then, many improvements were achieved and thecurrently best known approximation ratio of / for MaxATSP is given by Kaplan et al. [KLSS05].The k -objective variant k -MaxATSP is defined analogously with edge weights from N k . Thehardness results for MaxATSP directly translate to its multi-objective variant (just set all but one2omponent of the edge weights to a constant), but algorithms have to be newly designed. Bl¨aser etal. [BMP08] show that k -MaxATSP is randomized ( k +1 − ε )-approximable. This was improved byManthey [Man09] to a randomized ( − ε )-approximation for all (fixed) numbers of criteria. Bothalgorithms extend the cycle cover idea to multiple objectives. With a surprisingly simple algorithmwe improve the approximation ratio to / . Satisfiability.
Given a formula in conjunctive normal form and a non-negative weight in N foreach clause, the maximum weighted satisfiability problem ( MaxSAT , for short) aims to find atruth assignment such that the sum of the weights of all satisfied clauses is maximal. The firstapproximation algorithm for MaxSAT is due to Johnson [Joh74]. He proved an approximation ratioof (2 r − / (2 r ) for formulas where each clause has at least r literals. His work showed that the generalMaxSAT problem is / -approximable. Yannakakis [Yan94] improved the approximation ratio ofMaxSAT to / , and Goemans and Williamson [GW94] subsequently gave a simpler algorithm withessentially the same approximation ratio, and later [GW95] improved the approximation ratio to0 . . / , unless P=NP.Only little is known about the multi-objective maximum weighted satisfiability problem( k -MaxSAT , for short), where each clause has a non-negative weight in N k for some fixed k ≥ k -MaxSAT with polynomiallybounded weights. To our knowledge, the approximability of k -MaxSAT has not been investigatedso far.Using our balancing results, we can transfer a simple idea from single-objective optimization tothe multi-objective world: For any truth assignment, the assignment itself or its complementaryassignment satisfies at least one half of all clauses. We obtain a (deterministic) / -approximationfor k -MaxSAT, independent of k . Let a, b ∈ R . We call a function f : [ a, b ] → R integrable , if it is Lebesgue-integrable on [ a, b ].This is especially the case for bounded functions f with only finitely many points of discontinuity.A function g : [ a, b ] → R n is componentwise integrable , if all projections g i are integrable andin this case we write R ba g ( x ) dx as abbreviation for the tuple ( R ba g ( x ) dx, . . . , R ba g n ( x ) dx ). For x = ( x , . . . , x n ), y = ( y , . . . , y n ) ∈ R n we write x ≤ y if x i ≤ y i for all i ∈ { , , . . . , n } . For aset A ⊆ R n , A denotes the (topological) closure of A , and ∂A denotes the boundary of A . The set A ⊆ R n is symmetric if x ∈ A ⇐⇒ − x ∈ A for all x ∈ R n .3or bounded, open sets D ⊆ R n , continuous functions ϕ : D → R n and points p ∈ R n \ ϕ ( ∂D )the integer d ( ϕ, D, p ) is called the Brouwer degree of ϕ and D at the point p . We will not defineit here, but we note that it captures how often p is “covered” by ϕ ( D ), counting “inverse” coversnegatively, and that it generalizes the winding number in complex analysis. We apply the following theorems from topological degree theory to get the analytical version of ourbalancing results.
Theorem 2.1 ([Llo78, Theorem 2.1.1]) . If D ⊆ R n is bounded and open, ϕ : D → R n is continuous, p / ∈ ϕ ( ∂D ) and d ( ϕ, D, p ) = 0 , then p ∈ ϕ ( D ) . Theorem 2.2 (Odd Mapping Theorem, [Llo78, Theorem 3.2.6]) . Let D be a bounded, open, sym-metric subset of R n containing the origin. If ϕ : D → R n is continuous, / ∈ ϕ ( ∂D ) , and for all x ∈ ∂D it holds that ϕ ( x ) | ϕ ( x ) | = ϕ ( − x ) | ϕ ( − x ) | , then d ( ϕ, D, is an odd number (and in particular not zero). Corollary 2.3.
Let D be a bounded, open, symmetric subset of R n containing the origin. If ϕ : D → R n is continuous and for all x ∈ ∂D it holds that ϕ ( − x ) = − ϕ ( x ) , then ∈ ϕ ( D ) .Proof. Assume that 0 / ∈ ϕ ( D ). From ϕ ( − x ) = − ϕ ( x ) for x ∈ ∂D it follows that the inequal-ity condition of Theorem 2.2 is fulfilled (note that 0 / ∈ ϕ ( ∂D )) and thus d ( ϕ, D, = 0 and byTheorem 2.1, 0 ∈ ϕ ( D ). This is a contradiction. Lemma 2.4.
Let n ≥ , a, b ∈ R , and h : [ a, b ] → R n be componentwise integrable. There exist n closed intervals I , . . . , I n ⊆ [ a, b ] such that for I = I ∪ · · · ∪ I n , Z I h ( x ) dx = Z [ a,b ] \ I h ( x ) dx. Proof.
Observe that it suffices to show this for [ a, b ] = [0 , T = { ( t , t , . . . , t n ) ∈ R n | P ni =1 | t i | ≤ } and for every t = ( t , . . . , t n ) ∈ T , let I t = [ ≤ k ≤ nt k > " k − X i =1 | t i | , k X i =1 | t i | and f : T → R n , f ( t ) = Z I t h ( x ) dx − Z [0 , \ I t h ( x ) dx. By the formal definition, I t is a union of (at most) 2 n closed intervals. However, it can always bewritten as a union of at most n closed intervals, by merging adjacent intervals.We now want to show that 0 ∈ f ( T ) by applying Corollary 2.3 to ϕ = f and D being the interiorof T . D is obviously a bounded, open, and symmetric subset of R n containing the origin. The4 t t i > t i ≤ | t | | t | | t | | t | | t | | t | | t | | t | Figure 1:
Illustration of the set I t for some value of t = ( t , . . . , t ), where t , t , t and t are positive and t , t , t and t are negative. function f is continuous because of the fundamental theorem of calculus for the Lebesgue integraland the fact that the endpoints of the intervals in I t depend continuously on t . Furthermore, forany t ∈ ∂D we get that there are only finitely many points in [0 ,
1] which are not in exactly oneof the sets I − t and I t and thus f ( − t ) = − f ( t ) since these finitely many points have no influenceon the values of the integrals. Since all preconditions of the corollary are fulfilled, we get 0 ∈ f ( T )and thus there exists some t ∈ T such that Z I t h ( x ) dx = Z [0 , \ I t h ( x ) dx. As already noted, I t can be written as a union of at most n closed intervals. We obtain a union ofexactly n intervals by adding intervals [ a, a ]. Lemma 2.5.
Let n ≥ , a, b ∈ R , and f, g : [ a, b ] → R n be componentwise integrable. There exist n closed intervals I , . . . , I n ⊆ [ a, b ] such that for I = I ∪ · · · ∪ I n , Z I f ( x ) dx + Z [ a,b ] \ I g ( x ) dx = 12 Z [ a,b ] f ( x ) + g ( x ) dx. Proof.
Applying Lemma 2.4 to h ( x ) = f ( x ) − g ( x ) yields some I ⊆ [ a, b ] that is the union of n closed intervals in [ a, b ] such that Z I h ( x ) dx = Z [ a,b ] \ I h ( x ) dx ⇐⇒ Z I f ( x ) − g ( x ) dx = Z [ a,b ] \ I f ( x ) − g ( x ) dx ⇐⇒ Z I f ( x ) − g ( x ) dx + Z [ a,b ] \ I g ( x ) − f ( x ) dx = 0 ( ∗ ) ⇐⇒ Z I f ( x ) dx + 2 Z [ a,b ] \ I g ( x ) dx = Z [ a,b ] f ( x ) + g ( x ) dx ⇐⇒ Z I f ( x ) dx + Z [ a,b ] \ I g ( x ) dx = 12 Z [ a,b ] f ( x ) + g ( x ) dx. Note that ( ∗ ) is obtained by adding R [ a,b ] f ( x ) + g ( x ) dx to both sides.5 . . . t / / / m f ( t ) 2 x x x x m − . . . t / / / m g ( t ) 2 y y y m − y m − Figure 2:
Graphs of the functions f and g used in the proofs of the Lemmas 2.6 and 2.8. Now we discretize the analytical results which causes a rounding error that cannot be avoided.
Lemma 2.6.
Let n, m ≥ and x , . . . , x m , y , . . . , y m , z ∈ N n such that x i ≤ z and y i ≤ z forall i . There exist natural numbers ≤ a ≤ b ≤ a ≤ b ≤ · · · ≤ a n ≤ b n ≤ m such that for I = S ni =1 { a i , a i + 1 , . . . , b i − } , − nz + 12 m X i =1 ( x i + y i ) ≤ X i ∈ I x i + X i/ ∈ I y i ≤ nz + 12 m X i =1 ( x i + y i ) . Proof.
For the proof it is advantageous to start the indices of x i and y i at 0. We first definetwo functions f and g that distribute the values x , . . . , x m − , y , . . . , y m − ∈ N n equally over theinterval [0 , m ), and then we apply Lemma 2.5. Let f, g : [0 , m ] → R n such that f ( t ) = ( x i if t ∈ [ i, i + / )(0 , . . . ,
0) otherwiseand g ( t ) = ( y i if t ∈ [ i + / , i + 1)(0 , . . . ,
0) otherwise.Figure 2 shows the graph of f and g . Note that both functions are componentwise integrable.Moreover, for i ∈ { , . . . , m − } , Z i +1 i f ( t ) dt = x i and Z i +1 i g ( t ) dt = y i . (1)By Lemma 2.5 there exist closed intervals I i = [ a i , b i ] ⊆ [0 , m ] where 1 ≤ i ≤ n such that for I = S ni =1 [ a i , b i ] it holds that Z I f ( t ) dt + Z [0 ,m ] \ I g ( t ) dt = 12 Z [0 ,m ] f ( t ) + g ( t ) dt. (2)6e may assume 0 ≤ a ≤ b ≤ a ≤ b ≤ · · · ≤ a n ≤ b n ≤ m . For 1 ≤ i ≤ n let a ′ i := ⌊ a i + / ⌋ and b ′ i := ⌊ b i + / ⌋ . Note that the a ′ i , b ′ i are natural numbers such that 0 ≤ a ′ ≤ b ′ ≤ a ′ ≤ b ′ ≤ · · · ≤ a ′ n ≤ b ′ n ≤ m . Bythe definition of f and g , for 1 ≤ i ≤ n it holds that (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)Z a ′ i a i f ( t ) dt (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)Z a ′ i a i g ( t ) dt (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ z and (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)Z b ′ i b i f ( t ) dt (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) + (cid:12)(cid:12)(cid:12)(cid:12)(cid:12)Z b ′ i b i g ( t ) dt (cid:12)(cid:12)(cid:12)(cid:12)(cid:12) ≤ z, where | ( v , . . . , v n ) | := ( | v | , . . . , | v n | ) for v , . . . , v n ∈ R . So if some a i (resp., b i ) is replaced by a ′ i (resp., b ′ i ), then the left-hand side of (2) changes at most by z . Hence, for I ′ = S ni =1 [ a ′ i , b ′ i ] itholds that − nz + 12 Z [0 ,m ] f ( t ) + g ( t ) dt ≤ Z I ′ f ( t ) dt + Z [0 ,m ] \ I ′ g ( t ) dt ≤ nz + 12 Z [0 ,m ] f ( t ) + g ( t ) dt. (3)Let I ′′ = S ni =1 { a ′ i , a ′ i + 1 , . . . , b ′ i − } . From (1) and (3) we obtain − nz + 12 m − X i =0 ( x i + y i ) ≤ X i ∈ I ′′ x i + X i/ ∈ I ′′ y i ≤ nz + 12 m − X i =0 ( x i + y i ) . Next we state the integer variant of Lemma 2.6.
Corollary 2.7.
Let n, m ≥ , x , . . . , x m ∈ Z n , and z ∈ N n such that − z ≤ x i ≤ z for all i . There exist natural numbers ≤ a ≤ b ≤ a ≤ b ≤ · · · ≤ a n ≤ b n ≤ m such that for I = S ni =1 { a i , a i + 1 , . . . , b i − } , − nz ≤ X i ∈ I x i − X i/ ∈ I x i ≤ nz. Proof.
Let x ′ i := z + x i and y ′ i := z − x i . Thus x ′ i , y ′ i ∈ N n and x ′ i , y ′ i ≤ z . Lemma 2.6 appliedto x ′ i and y ′ i provides natural numbers 1 ≤ a ≤ b ≤ a ≤ b ≤ · · · ≤ a n ≤ b n ≤ m such that for I = S ni =1 { a i , a i + 1 , . . . , b i − } , − nz + 12 m X i =1 ( x ′ i + y ′ i ) ≤ X i ∈ I x ′ i + X i/ ∈ I y ′ i ≤ nz + 12 m X i =1 ( x ′ i + y ′ i ) . Therefore, − nz + 2 mz m X i =1 ( x i − x i ) ≤ mz + X i ∈ I x i − X i/ ∈ I x i ≤ nz + 2 mz m X i =1 ( x i − x i ) . Lemma 2.8.
Let n, m ≥ and x , . . . , x m , y , . . . , y m ∈ N n . There exists an n ′ ∈ { , . . . , n } andnatural numbers ≤ a ≤ b < a ≤ b < · · · < a n ′ ≤ b n ′ ≤ m such that for I = S i ∈{ ,...,n ′ } { a i , a i +1 , . . . , b i } , y b + y b + · · · + y b n ′ + X i ∈ I x i + X i/ ∈ I y i ≥ m X i =1 ( x i + y i ) . Proof.
Again let the indices of x i and y i start at 0 and define the componentwise integrable functions f and g as in the proof of Lemma 2.6. So for i ∈ { , . . . , m − } , Z i + / i f ( t ) dt = x i and Z i +1 i + / g ( t ) dt = y i . (4)By Lemma 2.5 there exists an n ′ ∈ { , . . . , n } and closed intervals I i = [ a i , b i ] ⊆ [0 , m ] where1 ≤ i ≤ n ′ such that for I = S n ′ i =1 [ a i , b i ] it holds that Z I f ( t ) dt + Z [0 ,m ] \ I g ( t ) dt ≥ Z [0 ,m ] f ( t ) + g ( t ) dt. (5)Here we only need the inequality even though Lemma 2.5 states an equality. We may assume0 ≤ a ≤ b ≤ a ≤ b ≤ · · · ≤ a n ′ ≤ b n ′ ≤ m. (6)By the definition of f and g , the following holds for every i ∈ { , . . . , m − } : t ∈ [ i, i + / ) = ⇒ g ( t ) = (0 , . . . , t ∈ [ i + / , i + 1) = ⇒ f ( t ) = (0 , . . . , Claim 2.9.
We may assume that { a j , b j } 6⊆ [ i + / , i + 1] and { b j , a j +1 } 6⊆ [ i, i + / ] for all j ∈ { , . . . , n ′ } and i ∈ { , . . . , m − } .Proof. If a j , b j ∈ [ i + / , i + 1], then f is 0 on [ a j , b j ) and hence R b j a j f ( t ) dt = 0. Thus the left-handside of (5) does not decrease if we remove the interval [ a j , b j ] from I . Similarly, if b j , a j +1 ∈ [ i, i + / ],then g is 0 on [ b j , a j +1 ) and hence R a j +1 b j g ( t ) dt = 0. Thus the left-hand side of (5) does not decreaseif we replace the intervals [ a j , b j ] and [ a j +1 , b j +1 ] by the interval [ a j , b j +1 ]. Note that after thesechanges (which include a decrement of n ′ ), (6) still holds. Claim 2.10.
We may assume that a , . . . , a n ′ ∈ N and b + / , . . . , b n ′ + / ∈ N .Proof. Assume a j ∈ [ i + / , i + 1). By Claim 2.9, b j / ∈ [ i + / , i + 1] and hence b j > i + 1. Since f is 0 on [ i + / , i + 1), the left-hand side of (5) does not decrease if we let a j := i + 1. After thischange, (6) still holds. 8ssume a j ∈ [ i, i + / ). By Claim 2.9, b j − / ∈ [ i, i + / ] and hence b j − < i (for j ≥ g is0 on [ i, i + / ), the left-hand side of (5) does not decrease if we let a j := i . After this change, (6)still holds.Assume b j ∈ [ i + / , i + 1). By Claim 2.9, a j / ∈ [ i + / , i + 1] and hence a j < i + / . Since f is 0 on[ i + / , i + 1), the left-hand side of (5) does not decrease if we let b j := i + / . After this change,(6) still holds.Assume b j ∈ [ i, i + / ) and i < m . By Claim 2.9, a j +1 / ∈ [ i, i + / ] and hence a j +1 > i + / (for j < n ′ ). Since g is 0 on [ i, i + / ), the left-hand side of (5) does not decrease if we let b j := i + / .After this change, (6) still holds.It remains to argue for the special case b j = m . By Claim 2.9, a j / ∈ [ m − / , m ] and hence a j < m − / . Since f is 0 on [ m − / , m ), the left-hand side of (5) does not decrease if we let b j := m − / . After this change, (6) still holds.If we split the integrals on the left-hand side of (5) according to I = S n ′ i =1 [ a i , b i ], we obtain a Z g ( t ) dt + n ′ − X i =1 b i Z a i f ( t ) dt + a i +1 Z b i g ( t ) dt + m Z b n ′ g ( t ) dt ≥ Z [0 ,m ] f ( t ) + g ( t ) dt. (7)For i ∈ { , . . . , n ′ } let c i = b i − / . From Claim 2.10 and (6) it follows that0 ≤ a ≤ c < a ≤ c < · · · < a n ′ ≤ c n ′ ≤ m − . Together with (4) we obtain: a Z g ( t ) dt = y + y + · · · + y a − b i Z a i f ( t ) dt = x a i + x a i +1 + · · · + x c i a i +1 Z b i g ( t ) dt = y c i + y c i +1 + · · · + y a i +1 − m Z b n ′ g ( t ) dt = y c n ′ + y c n ′ +1 + · · · + y m − In these sums, each index j ∈ { c , c , . . . , c n ′ } appears exactly twice, once as x j and once as y j . Allremaining indices j ∈ { , . . . , m − } \ { c , c , . . . , c n ′ } appear exactly once, either as x j or as y j .Therefore, with I ′ = S i ∈{ ,...,n ′ } { a i , a i + 1 , . . . , c i } the left-hand side of (7) is equal to y c + y c + · · · + y c n ′ + X i ∈ I ′ x i + X i/ ∈ I ′ y i . (8)9pplying (4) to the right-hand side of (7) yields the desired inequality y c + y c + · · · + y c n ′ + X i ∈ I ′ x i + X i/ ∈ I ′ y i ≥ m − X i =0 ( x i + y i ) . Corollary 2.11.
Let n, m ≥ and x , . . . , x m , y , . . . , y m , z ∈ N n such that y i ≤ z for all i .There exist n ′ ≤ min( n, m ) disjoint, nonempty intervals I , . . . , I n ′ ⊆ { , . . . , m } such that for I = I ∪ · · · ∪ I n ′ , n ′ · z + X i ∈ I x i + X i/ ∈ I y i ≥ m X i =1 ( x i + y i ) . Consider some multi-objective maximization problem O that consists of a set of instances I , aset of solutions S ( x ) for each instance x ∈ I , and a function w assigning a k -dimensional weight w ( x, s ) ∈ N k to each solution s ∈ S ( x ) depending also on the instance x ∈ I . If the instance x isclear from the context, we also write w ( s ) = w ( x, s ). The components of w are written as w i for i ∈ { , , . . . , k } . For weights a = ( a , . . . , a k ), b = ( b , . . . , b k ) ∈ N k we write a ≥ b if a i ≥ b i for all i ∈ { , , . . . , k } .Let x ∈ I . The Pareto set of x , the set of optimal solutions, is the set { s ∈ S ( x ) | ¬∃ s ′ ∈ S ( x ) ( w ( x, s ′ ) ≥ w ( x, s ) and w ( x, s ′ ) = w ( x, s )) } . For solutions s, s ′ ∈ S ( x ) and α < s is α -approximated by s ′ if w i ( s ′ ) ≥ α · w i ( s ) for all i . We call a set of solutions α -approximatePareto set of x if every solution s ∈ S ( x ) (or equivalently, every solution from the Pareto set) is α -approximated by some s ′ contained in the set.We say that some algorithm is an α -approximation algorithm for O if it runs in polynomial timeand returns an α -approximate Pareto set of x for all input instances x ∈ I . We call it randomized ifit is allowed to fail with probability at most / over all of its executions. An algorithm is an FPTAS(fully polynomial-time approximation scheme) for a given optimization problem, if on input x and0 < ε < − ε )-approximate Pareto set of x in time polynomial in | x | + / ε . Ifthe algorithm is randomized it is called FPRAS (fully polynomial-time randomized approximationscheme). k -Objective Maximum Asymmetric Traveling Salesman Problem Definitions.
Let k ≥
1. An N k -labeled directed graph is a tuple G = ( V, E, w ), where V is somefinite set of vertices, E ⊆ V × V is a set of edges, and w : E → N k is a k -dimensional weightfunction. We denote the i -th component of w by w i and extend w to sets of edges by taking thesum over the weights of all edges in the set. A set of edges M ⊆ E is called matching in G if no twoedges in M share a common vertex. A walk in G is an alternating sequence of vertices and edges10 , e , v , . . . e m , v m , where v i ∈ V , e j ∈ E , and e j = ( v j − , v j ) for all 0 ≤ i ≤ m and 1 ≤ j ≤ m .If the sequence of vertices v , v , . . . , v m does not contain any repetitions, the walk is called a path and if v , v , . . . , v m − does not contain any repetitions and v m = v , it is called a cycle . A cyclein G is called Hamiltonian if it visits every vertex in G . For simplicity, we will interpret paths andcycles as sets of edges and can thus (using the above mentioned extension of w to sets of edges)write w ( C ) for the (multidimensional) weight of a Hamiltonian cycle C of G .Given some N k -labeled directed graph as input, our goal is to find a maximum Hamiltonian cy-cle. We will also use the multi-objective version of the maximum matching problem. These twomaximization problems are defined as follows: k -Objective Maximum Asymmetric Traveling Salesman Problem( k -MaxATSP) Instance: N k -labeled directed complete graph ( V, E, w )Solution: Hamiltonian cycle C Weight: w ( C ) k -Objective Maximum Matching ( k -MM) Instance: N k -labeled directed graph ( V, E, w )Solution: Matching M Weight: w ( M )Papadimitriou and Yannakakis [PY00] give an FPRAS for k -MM, which we will denote by k -MM-Approx R and use as a black box in our algorithm. Since k -MM-Approx R will be called multipletimes, we assume that its success probability is amplified in a way such that the probability that all calls to the FPRAS succeed is at least / . High-Level Explanation of the Algorithm.
We apply the balancing results to the multi-objective maximum asymmetric traveling salesman problem and obtain a short algorithm thatprovides a randomized / -approximation. This improves and simplifies the ( / − ε )-approximationthat was given by Manthey [Man09]. Essentially our algorithm contracts a small number of edges,then computes a maximum matching, adds the contracted edges to the matching, and extends theresult in an arbitrary way to a Hamiltonian cycle.The argument for the correctness of the algorithm is as follows: Each Hamiltonian cycle H inducestwo perfect matchings (the edges with odd and the edges with even sequence number in the cycle).For each objective i , the weight of one of the matchings is at least / · w i ( H ). The balancing resultsassure the existence of a matching M such that for all objectives the inequality w i ( M ) ≥ / · w i ( H )holds up to a small error. This matching can be approximated with the known FPRAS for multi-objective maximum matching. Moreover, by guessing and contracting a constant number of heavyedges in H our algorithm can compensate the errors caused by the balancing and by the FPRAS.11 v xe Figure 3:
Contracting the edge e = ( u, v ) deletes all edges incident to v and sets the weights of every edge ( u, x ) tothe weights of the edge ( v, x ) for x ∈ V \ { u, v } . Any Hamiltonian cycle passes through some edge ( u, x ) and hencecan be expanded to a Hamiltonian cycle through e by replacing ( u, x ) with the detour ( u, v ) , ( v, x ). Contraction and Expansion of Paths.
Suppose that for a given N k -labeled complete directedgraph G = ( V, E, w ) we wish to find some Hamiltonian cycle that contains a particular edge e = ( u, v ). This reduces to the problem of finding some Hamiltonian cycle in the N k -labeledcomplete directed graph G ′ = ( V ′ , E ′ , w ′ ) where the edge e is contracted by combining the nodes u and v into a single node while retaining the ingoing edges of u and the outgoing edges of v .More formally, we remove v and all incident edges from G and set w ′ ( u, x ) = w ( v, x ) for every x ∈ V \ { u, v } (Figure 3). Now suppose we find a Hamiltonian cycle C ′ in G ′ . Then there existssome x such that ( u, x ) ∈ C ′ . Note that w ′ ( u, x ) = w ( v, x ). We replace the edge ( u, x ) in C ′ withthe detour ( u, v ) , ( v, x ) and obtain a Hamiltonian cycle C in G passing through e . Moreover, C preserves the weights of C ′ in the sense that w ( C ) = w ′ ( C ′ ) + w ( e ).The notion of edge contractions and expansions can easily be extended to sets of pairwise vertexdisjoint paths (Figure 4), where each path is contracted edge-by-edge starting at the last edge ofthe path, and different paths can be contracted in an arbitrary order. We make this precise withthe following definitions. Definition 3.1.
Let G = ( V, E, w ) be some N k -labeled complete directed graph, let ( u, v ) ∈ E , let P ⊆ E be a path u , e , u , e , u , . . . , e r , u r , and let Q ⊆ E be a set of pairwise vertex disjoint paths P , P , . . . , P r ⊆ E .1. contract ( u,v ) ( G ) = ( V \ { v } , { e ∈ E | v is not incident to e } , w ′ ) , where w ′ ( x, y ) = w ( x, y ) if x = u and w ′ ( u, z ) = w ( v, z ) .2. contract P ( G ) = contract e (contract e ( . . . contract e r ( G ) . . . )) contract Q ( G ) = contract P (contract P ( . . . contract P r ( G ) . . . )) We sometimes identify a graph with its edge set and apply contract directly to sets of edges and notto graphs. In this case, we also interpret the value of contract as an edge set.
Observe that the result of contracting several pairwise vertex disjoint paths does not depend onthe order in which the paths are contracted.We define edge expansion in a similar manner. Note that Definition 3.2 becomes essential if G ′ isobtained from G by a contraction of some set Q of pairwise vertex disjoint paths in G .12 yu v contract ( v,y ) −−−−−−−−→ xu v contract ( u,v ) −−−−−−−−→ u x xyu v
312 7 expand ( v,y ) ←−−−−−−− xu v expand ( u,v ) ←−−−−−−− u x Figure 4:
Example for the contraction of the path { ( u, v ) , ( v, y ) } in the graph G resulting in the graph G ′′ and the following expansion of the tour { ( u, x ) , ( x, u ) } in G ′′ . The final tour in G includes the contractedpath. Definition 3.2.
Let G = ( V, E, w ) and G ′ = ( V ′ , E ′ , w ′ ) be two N k -labeled complete directedgraphs, let T ⊆ E ′ be a Hamiltonian cycle of G ′ , let ( u, v ) ∈ E , let P ⊆ E be a path u , e , u , e , u , . . . , e r , u r and let Q ⊆ E be a set of pairwise vertex disjoint paths P , P , . . . , P r ⊆ E .1. expand ( u,v ) ( T ) = { ( x, y ) ∈ T | x = u } ∪ { ( u, v ) } ∪ { ( v, x ) | ( u, x ) ∈ T } expand P ( T ) = expand e r (expand e r − ( . . . expand e ( T ) . . . )) expand Q ( T ) = expand P r (expand P r − ( . . . expand P ( T ) . . . ))Again observe that the result of expanding several pairwise vertex disjoint paths does not dependon the order in which the paths are expanded. Proposition 3.3.
Let G = ( V, E, w ) be some N k -labeled complete directed graph, Q ⊆ E be aset of pairwise vertex disjoint paths, and G ′ = ( V ′ , E ′ , w ′ ) = contract Q ( G ) . For any Hamiltoniancycle T ′ ⊆ E ′ of G ′ , the edges T = expand Q ( T ′ ) form a Hamiltonian cycle of G with w ( T ) = w ′ ( T ′ ) + w ( Q ) . Approximation Algorithm.
First we prove that the following algorithm computes a ( / − ε )-approximation for k -MaxATSP. Then Theorem 3.6 shows that a modification of the algorithmprovides a / -approximation. 13 lgorithm : k -MaxATSP-Approx R ( V, E, w, ε ) Input : N k -labeled complete directed graph G = ( V, E, w ) and even V Output: set of Hamiltonian cycles of G F ⊆ E with F ≤ k that is a set of vertex disjoint paths do2 G ′ := contract F ( G ); M := 2 k -MM-Approx R ( G ′ , ε ); M ∈ M do5 extend M in an arbitrary way to a Hamiltonian cycle T ′ in G ′ ; output expand F ( T ′ ); Lemma 3.4.
Let G = ( V, E, w ) be an N k -labeled complete directed graph with an even num-ber of vertices, ε > , and T ⊆ E some Hamiltonian cycle in G . With probability at least / , k -MaxATSP-Approx R ( V, E, w, ε ) outputs a ( / − ε ) -approximation of T within time polynomial in | ( V, E, w ) | + / ε .Proof. Let G = ( V, E, w ) be an N k -labeled complete directed graph with even m = V , and let T be some arbitrary Hamiltonian cycle in G . Claim 3.5.
There is a set F of vertex disjoint paths in T with F ≤ k such that there is amatching M ′ in the graph ( V ′ , E ′ , w ′ ) = contract F ( G ) with w ′ ( M ′ ) ≥ w ( T ) − w ( F ) .Proof. We apply Lemma 2.8 to the sequence of edge weights of T . Having an even number of edges,we can write T sequentially as T = u , e , v , f , u , e , v , f , . . . , u p , e p , v p , f p , u where u i , v i ∈ V and e i , f i ∈ T .Since w ( e i ) , w ( f i ) ∈ N k , Lemma 2.8 shows that there exist k ′ ∈ { , . . . , k } and natural numbers1 ≤ a ≤ b < a ≤ b · · · < a k ′ ≤ b k ′ ≤ p such that for I = S i ∈{ ,...,k ′ } { a i , a i + 1 , . . . , b i } , w ( f b ) + w ( f b ) + · · · + w ( f b k ′ ) + X i ∈ I w ( e i ) + X i/ ∈ I w ( f i ) ≥ p X i =1 ( w ( e i ) + w ( f i )) . (9)Let S = { f b , f b , . . . , f b k ′ } ∪ { e i | i ∈ I } ∪ { f i | i / ∈ I } . Observe that it is possible that adjacentedges are contained in S . Figure 5 gives an example.Observe that for 1 ≤ j ≤ p and f = f p the following holds. f j − , e j ∈ S ⇐⇒ ∃ i ∈ { , . . . , k ′ } , j = a i (10) e j , f j ∈ S ⇐⇒ ∃ i ∈ { , . . . , k ′ } , j = b i (11)14 j f j e j +1 f j +1 e j +2 f j +2 e j +3 f j +3 e j +4 f j +4 e j +5 f j +5 . . .. . . b i a i +1 b i +1 a i +2 b i +2 Figure 5:
Some part of the cycle T , where S ⊆ T contains the depicted edges and is partially defined by b i = j , a i +1 = j + 2, b i +1 = j + 3, and a i +2 = b i +2 = j + 4. Let F = { e a , f b , e a , f b , . . . , e a k ′ , f b k ′ } and note that F = 2 k ′ . We argue that contracting F willtransform any path in S into a single edge such that the resulting edge set is a matching:Suppose S contains some path P = { e r , f r , e r +1 , f r +1 , . . . , e s , f s } , where we assume P to be maximal(i.e., f r − , e s +1 / ∈ S ). From (10), (11), and a ≤ b < a ≤ b · · · < a k ′ ≤ b k ′ we can draw thefollowing conclusions: • e r , f r ∈ P yields r = b i for some 1 ≤ i ≤ k ′ • f r , e r +1 , f r +1 ∈ P yields r + 1 = a i +1 = b i +1 • f r +1 , e r +2 , f r +2 ∈ P yields r + 2 = a i +2 = b i +2 ... • f s − , e s , f s ∈ P yields s = a s − r +1 = b s − r +1 Note that e r / ∈ F , since otherwise e b i ∈ F , hence a i = r , and by (10), f r − ∈ S , which contradictsthe maximality of P . Therefore, contracting { f b i , e a i +1 , f b i +1 , . . . , f b j } ⊆ F transforms P into thesingle edge e r . A similar argumentation shows the same result for paths that start with some edge f r or end with some edge e s . Hence, contracting F transforms every path in S into a single edge,and M ′ = contract F ( S ) is a matching in the graph ( V ′ , E ′ , w ′ ) = contract F ( G ). We further obtain w ′ ( M ′ ) = w ( S ) − w ( F )= w ( f b ) + w ( f b ) + · · · + w ( f b k ′ ) + X i ∈ I w ( e i ) + X i/ ∈ I w ( f i ) − w ( F ) ( ) ≥ p X i =1 ( w ( e i ) + w ( f i )) − w ( F )= 12 w ( T ) − w ( F )which proves the claim.We fix the iteration of k -MaxATSP-Approx R where the algorithm chooses F as in the claim. ByClaim 3.5 we know that there is a matching M ′ of G ′ = ( V ′ , E ′ , w ′ ) with w ′ ( M ′ ) ≥ w ( T ) − w ( F ) . (12)15ence, with probability at least / the set M contains some matching M of G ′ such that w ′ ( M ) ≥ (1 − ε ) w ′ ( M ′ ) ( ) ≥ (1 − ε ) (cid:18) w ( T ) − w ( F ) (cid:19) . We extend M to some Hamiltonian cycle T ′ of G ′ in an arbitrary way without losing weights. ByProposition 3.3 we can expand T ′ with F and obtain a Hamiltonian cycle ˜ T in G with w ( ˜ T ) = w ′ ( T ′ ) + w ( F ) ≥ w ′ ( M ) + w ( F ) ≥ (1 − ε ) (cid:18) w ( T ) − w ( F ) (cid:19) + w ( F ) ≥ (1 − ε ) 12 w ( T ) + εw ( F ) ≥ (1 − ε ) 12 w ( T ) . Moreover, the running time of every operation of the algorithm, including the execution of therandomized maximum matching algorithm, and the number of iterations of the loops are polynomialin the length of the input and / ε , which completes the proof of the lemma. Theorem 3.6. k - MaxATSP is randomized / -approximable.Proof. Let G = ( V, E, w ) be an N k -labeled complete directed graph with even m = V , and let T be some arbitrary Hamiltonian cycle in G . The proof can easily be extended to graphs with anodd number of vertices or objectives.For each 1 ≤ i ≤ k we choose an f i ∈ T with w ( f i ) ≥ m w ( T ) (i.e., f i is a heaviest edge of T withrespect to component i ). We let F ⊆ T be a smallest set with even cardinality containing all the f i . We get F ≤ k and w ( F ) ≥ m w ( T ) . (13) F is a set of vertex disjoint paths in T and hence can be used to contract edges in G and T . Let G ′ = ( V ′ , E ′ , w ′ ) = contract F ( G ) and T ′ = contract F ( T ). Clearly, T ′ is a Hamiltonian cycle in G ′ .Moreover, G ′ has an even number of vertices. By Lemma 3.4, we can find in polynomial time aHamiltonian cycle ˜ T in G ′ such that w ′ ( ˜ T ) ≥ (1 − ε ) w ′ ( T ′ ), where ε = / m . Moreover, we canexpand ˜ T with F to obtain a Hamiltonian cycle ˆ T in G such that w ( ˆ T ) = w ′ ( ˜ T ) + w ( F ) ≥ (1 − ε ) 12 w ′ ( T ′ ) + w ( F )= (cid:18) w ′ ( T ′ ) + 12 w ( F ) (cid:19) + 12 w ( F ) − ε w ′ ( T ′ )= 12 w ( T ) + 12 ( w ( F ) − εw ′ ( T ′ )) ( ∗ ) ≥ w ( T ) , ∗ ) follows from w ( F ) − εw ′ ( T ′ ) = w ( F ) − m w ′ ( T ′ ) ≥ w ( F ) − m w ( T ) (13) ≥ m w ( T ) − m w ( T )= 0 . Note that, although we do not know the set F of heaviest edges in the Hamiltonian cycle, we cansimply try all possible sets of heaviest edges, since the number of objectives 2 k is constant. k -Objective Maximum Weighted Satisfiability Definitions.
We consider formulas over a finite set of propositional variables V . A literal is apropositional variable v ∈ V or its negation v , a clause is a finite, nonempty set of literals, and a formula in conjunctive normal form ( CNF , for short) is a finite set of clauses. A truth assignment is a mapping I : V → { , } . For some v ∈ V , we say that I satisfies the literal v if I ( v ) = 1, and I satisfies the literal v if I ( v ) = 0. We further say that I satisfies the clause C and write I ( C ) = 1if there is some literal l ∈ C that is satisfied by I , and I satisfies a formula in CNF if I satisfiesall of its clauses. For a set of clauses ˆ H and a variable v let ˆ H [ v ] = { C ∈ ˆ H | v ∈ C } be the set ofclauses that are satisfied if this variable is set to one, and analogously ˆ H [ v ] = { C ∈ ˆ H | v ∈ C } bethe set of clauses that are satisfied if this variable is set to zero.Given a formula in CNF and a k -objective weight function that maps each clause to a k -objectiveweight, our goal is to find truth assignments that maximize the sum of the weights of all satisfiedclauses. k -Objective Maximum Weighted Satisfiability ( k -MaxSAT) Instance: Formula H in CNF over a set of variables V , weight function w : H → N k Solution: Truth assignment I : V → { , } Weight: Sum of the weights of all clauses satisfied by I , i. e., w ( I ) = P C ∈ HI ( C )=1 w ( C ) High-Level Explanation of the Algorithm.
We apply the balancing results to k -MaxSAT.For a given formula H in CNF over the variables V , the strategy is as follows: Start with a listof the variables V and guess a partition of this list into 2 k consecutive intervals. Assign 1 to thevariables in every second interval and 0 to the remaining variables. The balancing results assurethe existence of a partition that yields an assignment whose weights are approximately one half ofthe total weights of H , up to an error induced by the variables at the boundaries of the partition.The error can be removed by first guessing a satisfying assignment for several influential variables V of the formula. This results in a / -approximation for k -MaxSAT.17 pproximation Algorithm. We show that the following algorithm computes a / -approxima-tion for 2 k -MaxSAT. Algorithm : 2 k -MaxSAT-Approx( H, w ) Input : Formula H in CNF over the variables V = { v , . . . , v m } , 2 k -objective weightfunction w : H → N k Output: Set of truth assignments I : V → { , } V ⊆ V with V ≤ (2 k ) do2 let I ( v ) := 0 for all v ∈ V ; G := { C ∈ H | ¬∃ v ∈ V ( v ∈ C ) } ; V := { v ∈ V \ V | k · w ( G [ v ]) w ( H \ G ) } ; set I ( v ) := 1 for all v ∈ V ; V ′ := V \ ( V ∪ V ); a , b , a , b , . . . , a k , b k ∈ { i | v i ∈ V ′ } do8 foreach v i ∈ V ′ do9 if ∃ j ( a j ≤ i ≤ b j ) then I ( v i ) := 1 else I ( v i ) := 0
10 end11 output I
12 end13 end
Theorem 3.7. k -MaxSAT is / -approximable.Proof. In the following, we assume without loss of generality that the number of objectives 2 k iseven. We show that the approximation is realized by the algorithm 2 k -MaxSAT-Approx . First notethat this algorithm runs in polynomial time since k is constant. For the correctness, let ( H, w )be the input where H is a formula over the variables V = { v , . . . , v m } and w : H → N k is the2 k -objective weight function. Let I o : V → { , } be an optimal truth assignment. We show thatthere is an iteration of the loops of 2 k -MaxSAT-Approx ( H, w ) that outputs a truth assignment I such that w ( I ) ≥ w ( I o ) /
2. First we show that there is an iteration of the first loop that uses asuitable set V . Claim 3.8.
There is some set V ⊆ { v ∈ V | I o ( v ) = 0 } with V ≤ (2 k ) such that for G = { C ∈ H | ¬∃ v ∈ V ( v ∈ C ) } and any v ∈ V \ V it holds that k · w ( G [ v ]) w ( H \ G ) = ⇒ I o ( v ) = 1 . Proof.
As a special case, if { v ∈ V | I o ( v ) = 0 } < (2 k ) , the assertion obviously holds for V = { v ∈ V | I o ( v ) = 0 } , since I o ( v ) = 1 for all v ∈ V \ V .Otherwise, let V = { u kt + r | r = 1 , , . . . , k and t = 0 , , . . . , k − } , where the u kt + r ∈ V aredefined inductively in the following way:(IB) H := H kt + r − → kt + r : – choose v ∈ V \ { u , . . . , u kt + r − } such that I o ( v ) = 0 and w r ( H kt + r − [ v ]) is maximal – u kt + r := v – H kt + r := H kt + r − \ H kt + r − [ v ] – α kt + r := w ( H kt + r − [ v ]).We now show that the stated implication holds, so let v ∈ V \ V and j ∈ { , , . . . , k } such that2 k · w j ( G [ v ]) > w j ( H \ G ). Because the union S k i =1 H i − [ u i ] = H \ G is disjoint, we get w ( H \ G ) = k X r =1 2 k − X t =0 α kt + r ≥ k − X t =0 α kt + j and thus w j ( G [ v ]) > k − X t =0 ( α kt + j ) j k . Hence, by a pigeonhole argument, there must be some t ∈ { , , . . . , k − } such that w j ( G [ v ]) > ( α kt + j ) j . But since G ⊆ H kt + j − and thus even w j ( H kt + j − [ v ]) > w j ( G [ v ]) > ( α kt + j ) j , the onlyreason we did not choose v in the iteration 2 kt + j (or even earlier) is that I o ( v ) = 1.We choose the iteration of the algorithm where V equals the set whose existence is guaranteedby Claim 3.8. Furthermore let G and V be defined as in the algorithm and observe that by theclaim it holds that I o ( v ) = 1 for all v ∈ V . Since I o ( v ) = 0 for all v ∈ V , the truth assignment I defined in the algorithm coincides with I o on V ∪ V .Let further V ′ = V \ ( V ∪ V ) and G ′ = { C ∈ G | ¬∃ v ∈ V ( v ∈ C ) ∧ ¬∃ v ∈ V ( v ∈ C ) ∧ ∃ v ∈ V ′ ( v ∈ C ∨ v ∈ C ) } be the set of clauses that are not yet satisfied by I but that could be satisfiedby further extending I .Now we apply the balancing result. Let L ′ = V ′ ∪ { v | v ∈ V ′ } . For v i ∈ V ′ let x i = X C ∈ G ′ [ v i ] w ( C ) C ∩ L ′ ) and y i = X C ∈ G ′ [ v i ] w ( C ) C ∩ L ′ ) , and for v i ∈ V ∪ V let x i = y i = 0 . It holds that X v i ∈ V x i + y i = X v i ∈ V ′ x i + y i = w ( G ′ ) . Note that for all v i ∈ V ′ , we have the bound y i ≤ w ( G ′ [ v i ]) ≤ w ( G [ v i ]) ≤ k w ( H \ G ) because ofthe definition of V ′ and V . Hence, for all v i ∈ V , y i ≤ k w ( H \ G ) .
19f we scale all values x i and y i to natural numbers, then by Corollary 2.11, there exist k ′ ≤ k disjoint, nonempty intervals J , . . . , J k ′ ⊆ { , . . . , m } such that for J = J ∪ · · · ∪ J k ′ it holds that X i ∈ J x i + X i/ ∈ J y i ≥ w ( G ′ ) − k ′ k w ( H \ G ) ≥
12 ( w ( G ′ ) − w ( H \ G )) . The algorithm tries all combinations of k (possibly empty) intervals J = [ a , b ] , . . . , J k = [ a k , b k ].In particular, it will test the combination of the k ′ nonempty intervals mentioned in Corollary 2.11.For I being the truth assignment generated in this iteration it holds that w ( { C ∈ G ′ | I ( C ) = 1 } ) ≥ X i ∈ J x i + X i/ ∈ J y i ≥
12 ( w ( G ′ ) − w ( H \ G )) . (14)Furthermore, since I and I o coincide on V \ V ′ , we have w ( { C ∈ H \ G ′ | I ( C ) = 1 } ) = w ( { C ∈ H \ G ′ | I o ( C ) = 1 } ) (15) ≥ w ( { C ∈ H \ G | I o ( C ) = 1 } )= w ( { H \ G } ) . (16)Thus we finally obtain w ( I ) = w ( { C ∈ H \ G ′ | I ( C ) = 1 } ) + w ( { C ∈ G ′ | I ( C ) = 1 } ) (14) ≥ w ( { C ∈ H \ G ′ | I ( C ) = 1 } ) + ( w ( G ′ ) − w ( H \ G )) (15) = w ( { C ∈ H \ G ′ | I o ( C ) = 1 } ) + ( w ( G ′ ) − w ( H \ G )) (16) ≥ w ( { C ∈ H \ G ′ | I o ( C ) = 1 } ) + w ( G ′ ) ≥ w ( I o ) . References [AW02] T. Asano and D. P. Williamson. Improved approximation algorithms for MAX SAT.
Journal of Algorithms , 42(1):173–202, 2002.[BMP08] M. Bl¨aser, B. Manthey, and O. Putz. Approximating multi-criteria Max-TSP. In
ESA ,pages 185–197, 2008.[EG00] M. Ehrgott and X. Gandibleux. A survey and annotated bibliography of multiobjectivecombinatorial optimization.
OR Spectrum , 22(4):425–460, 2000.[Ehr05] M. Ehrgott.
Multicriteria Optimization . Springer Verlag, 2005.[EK01] L. Engebretsen and M. Karpinski. Approximation hardness of TSP with bounded met-rics. In
ICALP ’01: Proceedings of the 28th International Colloquium on Automata,Languages and Programming , pages 201–212, London, UK, 2001. Springer-Verlag.20FNW79] M. L. Fisher, G. L. Nemhauser, and L. A. Wolsey. An analysis of approximations forfinding a maximum weight Hamiltonian circuit.
Operations Research , 27(4):799–809,1979.[GW94] M. X. Goemans and D. P. Williamson. New 3/4-approximation algorithms for the maxi-mum satisfiability problem.
SIAM Journal on Discrete Mathematics , 7(4):656–666, 1994.[GW95] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximumcut and satisfiability problems using semidefinite programming.
Journal of the ACM ,42(6):1115–1145, 1995.[H˚as97] J. H˚astad. Some optimal inapproximability results. In
STOC , pages 1–10, 1997.[Joh74] D. S. Johnson. Approximation algorithms for combinatorial problems.
Journal of Com-puter System Sciences , 9(3):256–278, 1974.[KLSS05] H. Kaplan, M. Lewenstein, N. Shafrir, and M. Sviridenko. Approximation algorithmsfor asymmetric TSP by decomposing directed regular multigraphs.
Journal of the ACM ,52(4):602–626, 2005.[Llo78] N. G. Lloyd.
Degree Theory . Cambridge University Press, Cambridge, England, 1978.[Man09] B. Manthey. On approximating multi-criteria TSP. In
Proceedings of 26th Annual Sym-posium on Theoretical Aspects of Computer Science , volume 09001 of
Dagstuhl SeminarProceedings , pages 637–648. Internationales Begegnungs- und Forschungszentrum fuerInformatik (IBFI), Schloss Dagstuhl, Germany, 2009.[PY91] C. H. Papadimitriou and M. Yannakakis. Optimization, approximation, and complexityclasses.
Journal of Computer System Sciences , 43(3), 1991.[PY00] C. H. Papadimitriou and M. Yannakakis. On the approximability of trade-offs andoptimal access of web sources. In
FOCS ’00: Proceedings of the 41st Annual Symposiumon Foundations of Computer Science , pages 86–95, Washington, DC, USA, 2000. IEEEComputer Society.[SBLL09] R. Santana, C. Bielza, J. A. Lozano, and P. Larra˜naga. Mining probabilistic modelslearned by EDAs in the optimization of multi-objective problems. In
GECCO ’09: Pro-ceedings of the 11th Annual conference on Genetic and evolutionary computation , pages445–452, New York, NY, USA, 2009. ACM.[Yan94] M. Yannakakis. On the approximation of maximum satisfiability.