Classical linear vector optimization duality revisited
Radu Ioan Boţ, Sorin-Mihai Grad, and Gert Wanka

Abstract.
With this note we bring back into attention a vector dual problem neglected by the contributions which have recently announced the successful healing of the trouble encountered by the classical duals to the classical linear vector optimization problem. This vector dual problem has, unlike the mentioned works, which are of set-valued nature, a vector objective function. Weak, strong and converse duality for this "new-old" vector dual problem are proven and we also investigate its connections to other vector duals considered in the same framework in the literature. We also show that the efficient solutions of the classical linear vector optimization problem coincide with its properly efficient solutions (in any sense) when the image space is partially ordered by a nontrivial pointed closed convex cone, too.
Keywords. linear vector duality, cones, multiobjective optimization
* Research partially supported by DFG (German Research Foundation), project WA 922/1-3.
† Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, e-mail: [email protected].
‡ Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, e-mail: [email protected].
§ Faculty of Mathematics, Chemnitz University of Technology, D-09107 Chemnitz, Germany, e-mail: [email protected].

All the vectors we use in this note are column vectors, an upper index "T" being used to transpose them into row vectors. Having a set S ⊆ R^k, by int(S) we denote its interior. By R^k_+ we denote the nonnegative orthant in R^k. We say that a set K ⊆ R^k is a cone if λK ⊆ K for all λ ∈ R_+. Then K induces on R^k a partial ordering "≦_K" defined by v ≦_K w if w − v ∈ K. If v ≦_K w and v ≠ w we write v ≤_K w. When K = R^k_+ these cone inequality notations are simplified to "≦" and "≤", respectively. A cone K ⊆ R^k which does not coincide with {0} or R^k is said to be nontrivial. A cone K is called pointed if K ∩ (−K) = {0}. The set K^* = {λ ∈ R^k : λ^T v ≥ 0 for all v ∈ K} is the dual cone of the cone K. The quasi-interior of K^* is the set K^{*0} = {λ ∈ R^k : λ^T v > 0 for all v ∈ K\{0}}. The recession cone of a convex set M ⊆ R^k is 0^+ M = {x ∈ R^k : M + x ⊆ M}. A set is said to be polyhedral if it can be expressed as the intersection of some finite collection of closed half-spaces. For other notions and notations used in this paper we refer to [9].

The vector optimization problems we consider in this note consist of vector-minimizing or vector-maximizing a vector function with respect to the partial ordering induced in the image space of the vector function by a nontrivial pointed closed convex cone. For the vector-minimization problems we use the notation Min, while for the vector-maximization problems we use the notation Max. Let R^k be partially ordered by a nontrivial pointed closed convex cone K ⊆ R^k and let M ⊆ R^k be a nonempty set.
An element v̄ ∈ M is said to be a minimal element of M (regarding the partial ordering induced by K) if there exists no v ∈ M satisfying v ≤_K v̄. The set of all minimal elements of M is denoted by Min(M, K). Even if in the literature there are several concepts of proper minimality for a given set (see [1, Section 2.4] for a review on this subject), we deal here only with the properly minimal elements of a set in the sense of linear scalarization. Actually, all these proper minimality concepts coincide when applied to the primal classical linear vector optimization problem we treat in this note, as we shall see later. An element v̄ ∈ M is said to be a properly minimal element of M (in the sense of linear scalarization) if there exists a λ ∈ K^{*0} such that λ^T v̄ ≤ λ^T v for all v ∈ M. The set of all properly minimal elements of M (in the sense of linear scalarization) is denoted by PMin(M, K). It can be shown that every properly minimal element of M is also minimal, but the reverse assertion fails in general. Corresponding maximality notions are defined by using the definitions from above. The elements of the set Max(M, K) := Min(M, −K) are called maximal elements of M.

The classical linear vector optimization problem is

(P)  Min_{x∈A} Lx,  A = {x ∈ R^n_+ : Ax = b},

where L ∈ R^{k×n}, A ∈ R^{m×n}, b ∈ R^m and the space R^k is partially ordered by the nontrivial pointed closed convex cone K ⊆ R^k. The first relevant contributions to the study of duality for (P) were brought by Isermann in [4–6] for the case K = R^k_+.
The dual he assigned to it,

(D_I)  Max_{U∈B_I} h_I(U),

where

B_I = {U ∈ R^{k×m} : there is no x ∈ R^n_+ such that (L − UA)x ≤ 0} and h_I(U) = Ub,

turned out to work well only when b ≠ 0 (see [7]). The same drawback was noticed in [7, 8] also for the so-called dual abstract optimization problem to (P),

(D_J)  Max_{(λ,U)∈B_J} h_J(λ, U),

where

B_J = {(λ, U) ∈ K^{*0} × R^{k×m} : (L − UA)^T λ ∈ R^n_+} and h_J(λ, U) = Ub.
This issue was solved by particularizing the general vector Lagrange-type dual introduced in [7], obtaining a vector dual to (P) for which duality statements can be given for every choice of b ∈ R^m, namely

(D_L)  Max_{(λ,z,v)∈B_L} h_L(λ, z, v),

where

B_L = {(λ, z, v) ∈ K^{*0} × R^m × R^k : λ^T v − z^T b ≤ 0, L^T λ − A^T z ∈ R^n_+} and h_L(λ, z, v) = v.

Recently, in [3] another vector dual to (P) was proposed, for which the duality assertions were shown via complicated set-valued optimization techniques,

(D_H)  Max_{U∈B_H} h_H(U),

where

B_H = {U ∈ R^{k×m} : there is no x ∈ R^n_+ such that (L − UA)x ≤_K 0} and h_H(U) = Ub + Min((L − UA)(R^n_+), K).

Motivated by the results given in [3], we show in this note that a vector dual to (P) already known in the literature (see [2, 10]) has already "closed the duality gap in linear vector optimization". Its weak, strong and converse duality statements hold under the same hypotheses as those for (D_H) and no trouble appears when b = 0. Moreover, this revisited vector dual to (P) has a vector objective function, unlike (D_H), where the objective function is of set-valued nature, involving additionally solving another vector-minimization problem. So far this vector dual was given only for the case K = R^k_+, but it can be easily extended for an arbitrary nontrivial pointed closed convex cone K ⊆ R^k as follows

(D)  Max_{(λ,U,v)∈B} h(λ, U, v),

where

B = {(λ, U, v) ∈ K^{*0} × R^{k×m} × R^k : λ^T v = 0 and (L − UA)^T λ ∈ R^n_+} and h(λ, U, v) = Ub + v.

We refer the reader to [1, Section 5.5] for a complete analysis of the relations between all the vector dual problems to (P) introduced above in the case K = R^k_+.

The first new result we deliver in this note is the fact that the efficient and properly efficient solutions to (P) coincide in the considered framework, too, extending the classical statement due to Isermann from [6] given for K = R^k_+.
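To see where the b = 0 trouble comes from, here is our short summary of the phenomenon discussed above: when b = 0, the objective functions of (D_I) and (D_J) vanish identically, so their images collapse to {0}, whereas (D) still carries the free term v,

```latex
b = 0 \;\Longrightarrow\; h_I(U) = h_J(\lambda,U) = U\,0 = 0,
\qquad\text{while}\qquad
h(\lambda,U,v) = v,\ \ \lambda^{T}v = 0,\ (L-UA)^{T}\lambda \in \mathbb{R}^{n}_{+},
```

so h(B) can contain nonzero points even though Ub = 0 for every U. This is why strong duality for (D_I) and (D_J) breaks down exactly at b = 0, while (D) is unaffected.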
Then we prove for (D) weak, strong and converse duality statements. Finally, we show that the relations of inclusion involving the images of the feasible sets through the objective functions of the vector duals to (P) from [1, Remark 5.5.3] remain valid in the current framework, too. Examples where the "new-old" vector dual we propose here does not coincide with the other vector duals in discussion are provided, too.

3 Weak, strong and converse duality for (P) and (D)

An element x̄ ∈ A is said to be a properly efficient solution (in the sense of linear scalarization) to (P) if Lx̄ ∈ PMin(L(A), K), i.e. there exists λ ∈ K^{*0} such that λ^T(Lx̄) ≤ λ^T(Lx) for all x ∈ A. An element x̄ ∈ A is said to be an efficient solution to (P) if Lx̄ ∈ Min(L(A), K), i.e. there exists no x ∈ A such that Lx ≤_K Lx̄. Of course each properly efficient solution (in the sense of linear scalarization) x̄ to (P) is also efficient to (P). Let us recall now a separation result from the literature, followed by a statement concerning the solutions of (P).

Lemma 1. (cf. [3, Lemma 2.2(i)])
Let M ⊆ R^k be a polyhedral set such that M ∩ K = {0}. Then there exists γ ∈ R^k\{0} such that

γ^T k < 0 ≤ γ^T m for all k ∈ K\{0} and all m ∈ M.

Theorem 1.
Every efficient solution to (P) is properly efficient (in the sense of linear scalarization) to (P).

Proof.
Let x̄ ∈ A be an efficient solution to (P). Since A is, by construction, a polyhedral set, via [9, Theorem 19.3] we get that L(A) is polyhedral, too. Consequently, also L(A) − Lx̄ is a polyhedral set. The efficiency of x̄ to (P) yields (L(A) − Lx̄) ∩ (−K) = {0}, thus we are allowed to apply Lemma 1 (with M = L(A) − Lx̄ and the cone −K), which yields the existence of γ ∈ R^k\{0} for which

γ^T(−k) < 0 ≤ γ^T(Lx − Lx̄) for all k ∈ K\{0} and all x ∈ A. (1)

Since γ^T k > 0 for all k ∈ K\{0}, it follows that γ ∈ K^{*0}. From (1) we obtain γ^T(Lx̄) ≤ γ^T(Lx) for all x ∈ A, which, taking into account that γ ∈ K^{*0}, means actually that x̄ is a properly efficient solution (in the sense of linear scalarization) to (P). □

Remark 1.
In Theorem 1 we extend the classical result proven in a quite complicated way in [6] for the special case K = R^k_+, showing that the efficient solutions to (P) and its properly efficient solutions (in the sense of linear scalarization) coincide. Actually, this statement remains valid whenever the feasible set of (P) is replaced by any set A for which L(A) is polyhedral.

Remark 2.
Several concepts of properly efficient solutions to a vector optimization problem were proposed in the literature. Taking into account that all these properly efficient solutions are also efficient to the given vector optimization problem and the fact that (see [1, Proposition 2.4.16]) the properly efficient solutions (in the sense of linear scalarization) are properly efficient to the same problem in every other sense, too, Theorem 1 yields that for (P) the properly efficient solutions (in the sense of linear scalarization) coincide also with the properly efficient solutions to (P) in the senses of Geoffrion, Hurwicz, Borwein, Benson, Henig and Lampe and generalized Borwein, respectively (see [1, Section 2.4]). Taking into account Theorem 1, it is obvious that it is enough to deal only with the efficient solutions to (P), since they coincide with all the types of properly efficient solutions considered in the literature.

Let us show now a Farkas-type result which allows us to formulate the feasible sets of the vector dual problems to (P) in a different manner.

Lemma 2.
Let U ∈ R^{k×m}. Then (L − UA)(R^n_+) ∩ (−K) = {0} if and only if there exists λ ∈ K^{*0} such that (L − UA)^T λ ∈ R^n_+.

Proof. "⇒" The set (L − UA)(R^n_+) is polyhedral and has with the nontrivial pointed closed convex cone −K only the origin as a common element. Applying Lemma 1 we obtain a λ ∈ R^k\{0} for which

λ^T(−k) < 0 ≤ λ^T((L − UA)x) for all x ∈ R^n_+ and all k ∈ K\{0}. (2)

Like in the proof of Theorem 1 we obtain that λ ∈ K^{*0} and, by (2), it follows immediately that (L − UA)^T λ ∈ R^n_+.

"⇐" Assuming the existence of an x ∈ R^n_+ for which (L − UA)x ∈ (−K)\{0}, it follows λ^T((L − UA)x) < 0, but λ^T((L − UA)x) = ((L − UA)^T λ)^T x ≥ 0, since (L − UA)^T λ ∈ R^n_+ and x ∈ R^n_+. The so-obtained contradiction yields (L − UA)(R^n_+) ∩ (−K) = {0}. □

Further we prove for the primal-dual pair of vector optimization problems (P) − (D) weak, strong and converse duality statements.

Theorem 2. (weak duality)
There exist no x ∈ A and (λ, U, v) ∈ B such that Lx ≤_K Ub + v.

Proof.
Assume the existence of x ∈ A and (λ, U, v) ∈ B such that Lx ≤_K Ub + v. Then, since λ^T v = 0 and Ax = b, 0 < λ^T(Ub + v − Lx) = λ^T(U(Ax) − Lx) = −((L − UA)^T λ)^T x ≤ 0, since (L − UA)^T λ ∈ R^n_+ and x ∈ R^n_+. But this cannot happen, therefore the assumption we made is false. □

Theorem 3. (strong duality) If x̄ is an efficient solution to (P), there exists (λ̄, Ū, v̄) ∈ B, an efficient solution to (D), such that Lx̄ = Ūb + v̄.

Proof.
The efficiency of x̄ to (P) yields via Theorem 1 that x̄ is also properly efficient to (P). Thus there exists λ̄ ∈ K^{*0} such that λ̄^T(Lx̄) ≤ λ̄^T(Lx) for all x ∈ A. On the other hand, one has strong duality for the scalar optimization problem

inf_{x∈A} λ̄^T(Lx)

and its Lagrange dual

sup {−η^T b : η ∈ R^m, L^T λ̄ + A^T η ∈ R^n_+},

i.e. their optimal objective values coincide and the dual has an optimal solution, say η̄. Consequently, λ̄^T(Lx̄) + η̄^T b = 0 and L^T λ̄ + A^T η̄ ∈ R^n_+.

As λ̄ ∈ K^{*0}, there exists λ̃ ∈ K\{0} such that λ̄^T λ̃ = 1. Let Ū := −λ̃η̄^T and v̄ := Lx̄ − Ūb. It is obvious that Ū ∈ R^{k×m} and v̄ ∈ R^k. Moreover, λ̄^T v̄ = λ̄^T(Lx̄ − Ūb) = λ̄^T(Lx̄) + η̄^T b = 0 and (L − ŪA)^T λ̄ = L^T λ̄ + A^T η̄ ∈ R^n_+. Consequently, (λ̄, Ū, v̄) ∈ B and Ūb + v̄ = Ūb + Lx̄ − Ūb = Lx̄. Assuming that (λ̄, Ū, v̄) is not efficient to (D), i.e. the existence of another feasible solution (λ, U, v) ∈ B satisfying Ūb + v̄ ≤_K Ub + v, it follows Lx̄ ≤_K Ub + v, which contradicts Theorem 2. Consequently, (λ̄, Ū, v̄) is an efficient solution to (D) for which Lx̄ = Ūb + v̄. □

Theorem 4. (converse duality) If (λ̄, Ū, v̄) ∈ B is an efficient solution to (D), there exists x̄ ∈ A, an efficient solution to (P), such that Lx̄ = Ūb + v̄.

Proof.
Let d̄ := Ūb + v̄. Assume that A = ∅. Then b ≠ 0 and, by Farkas' Lemma, there exists z̄ ∈ R^m such that b^T z̄ > 0 and A^T z̄ ∈ −R^n_+. As λ̄ ∈ K^{*0}, there exists λ̃ ∈ K\{0} such that λ̄^T λ̃ = 1. Let Ũ := λ̃z̄^T + Ū ∈ R^{k×m}. We have (L − ŨA)^T λ̄ = (L − ŪA)^T λ̄ − A^T z̄ ∈ R^n_+, thus (λ̄, Ũ, v̄) ∈ B. But h(λ̄, Ũ, v̄) = Ũb + v̄ = λ̃z̄^T b + Ūb + v̄ = λ̃z̄^T b + d̄ ≥_K d̄, since λ̃z̄^T b = (z̄^T b)λ̃ ∈ K\{0}, which contradicts the efficiency of (λ̄, Ū, v̄) to (D). Consequently, A ≠ ∅.

Suppose now that d̄ ∉ L(A). Using Theorem 2 it follows easily that d̄ ∉ L(A) + K, too. Since A = A^{-1}(b) ∩ R^n_+, we have 0^+ A = 0^+(A^{-1}(b)) ∩ 0^+ R^n_+. As 0^+(A^{-1}(b)) = A^{-1}(0) and 0^+ R^n_+ = R^n_+, it follows 0^+ A = A^{-1}(0) ∩ R^n_+. Then 0^+ L(A) = L(0^+ A) = L(A^{-1}(0) ∩ R^n_+) = {Lx : x ∈ R^n_+, Ax = 0} ⊆ (L − ŪA)(R^n_+) and, obviously, 0 ∈ 0^+ L(A).

Using Lemma 2 we obtain (L − ŪA)(R^n_+) ∩ (−K) = {0}, thus, taking into account the inclusions from above, we obtain 0^+ L(A) ∩ (−K) = {0} ⊆ K = 0^+ K. This assertion and the fact that L(A) is polyhedral and K is closed convex yield, via [9, Theorem 20.3], that L(A) + K is a closed convex set. Applying [9, Corollary 11.4.2] we obtain a γ ∈ R^k\{0} and an α ∈ R such that

γ^T d̄ < α < γ^T(Lx + k) for all x ∈ A and all k ∈ K. (3)

Assuming that γ ∉ K^* would yield the existence of some k ∈ K for which γ^T k < 0. Since K is a cone, this implies a contradiction to (3), consequently γ ∈ K^*. Taking k = 0 in (3) it follows

γ^T d̄ < α < γ^T(Lx) for all x ∈ A. (4)

On the other hand, for all x ∈ A one has 0 ≤ λ̄^T((L − ŪA)x) = λ̄^T(Lx − Ūb) = λ̄^T(Lx − Ūb) − λ̄^T v̄ = λ̄^T(Lx − d̄), therefore

λ̄^T d̄ ≤ λ̄^T(Lx) for all x ∈ A. (5)

Now, taking δ := α − γ^T d̄ > 0, one has d̄^T(sλ̄ + (1 − s)γ) = α − δ + s(λ̄^T d̄ − α + δ) for all s ∈ R. Note that there exists an s̄ ∈ (0, 1) such that s̄(λ̄^T d̄ − α + δ) < δ/2 and s̄(λ̄^T d̄ − α) > −δ/2, and let λ := s̄λ̄ + (1 − s̄)γ. It is clear that λ ∈ K^{*0}. By (4) and (5) it follows s̄λ̄^T d̄ + (1 − s̄)α < (s̄λ̄ + (1 − s̄)γ)^T(Lx) for all x ∈ A, while

λ^T d̄ = s̄λ̄^T d̄ + (1 − s̄)γ^T d̄ = α − δ + s̄(λ̄^T d̄ − α + δ) < α − δ/2 < α + s̄(λ̄^T d̄ − α) = s̄λ̄^T d̄ + (1 − s̄)α < λ^T(Lx) for all x ∈ A. (6)

Since there is strong duality for the scalar linear optimization problem

inf_{x∈A} λ^T(Lx)

and its Lagrange dual

sup {−η^T b : η ∈ R^m, L^T λ + A^T η ∈ R^n_+},

the latter has an optimal solution η̄, and inf_{x∈A} λ^T(Lx) + η̄^T b = 0 and L^T λ + A^T η̄ ∈ R^n_+. As λ ∈ K^{*0}, there exists λ̃ ∈ K\{0} such that λ^T λ̃ = 1. Let U := −λ̃η̄^T. It follows that (L − UA)^T λ ∈ R^n_+ and inf_{x∈A} λ^T(Lx) = λ^T(Ub).

Consider now the hyperplane H := {Ub + v : λ^T v = 0}, which is nothing but the set {w ∈ R^k : λ^T w = λ^T(Ub)}. Consequently, H ⊆ h(B). On the other hand, (6) yields λ^T d̄ < λ^T(Ub). Then there exists a k̄ ∈ K\{0} such that λ^T(d̄ + k̄) = λ^T(Ub), which has as consequence that d̄ + k̄ ∈ H ⊆ h(B). Noting that d̄ ≤_K d̄ + k̄, we have just arrived at a contradiction to the maximality of d̄ in the set h(B). Therefore our initial supposition is false, consequently d̄ ∈ L(A). Then there exists x̄ ∈ A such that Lx̄ = d̄ = Ūb + v̄. Employing Theorem 2, it follows that x̄ is an efficient solution to (P). □

Remark 3.
If x̄ ∈ A and (λ̄, Ū, v̄) ∈ B are, like in the results given above, such that Lx̄ = Ūb + v̄, then the complementarity condition x̄^T(L − ŪA)^T λ̄ = 0 is fulfilled.

Analogously to [3, Theorem 3.14], we summarize the results from above in a general duality statement for (P) and (D).

Corollary 1.
One has
Min(L(A), K) = Max(h(B), K).

To complete the investigation on the primal-dual pair of vector optimization problems (P) − (D) we give also the following assertions.

Theorem 5. If A ≠ ∅, the problem (P) has no efficient solutions if and only if B = ∅.

Proof. "⇒" By [3, Lemma 2.1], the lack of efficient solutions to (P) yields 0^+ L(A) ∩ ((−K)\{0}) ≠ ∅. Then (L − UA)(R^n_+) ∩ ((−K)\{0}) ≠ ∅ for all U ∈ R^{k×m} (since Ax = 0 implies (L − UA)x = Lx) and, employing Lemma 2, we see that B cannot contain in this situation any element.

"⇐" Assuming that (P) has efficient solutions, Theorem 3 yields that also (D) has an efficient solution. But this cannot happen since the dual has no feasible elements, consequently (P) has no efficient solutions. □

Theorem 6. If B ≠ ∅, the problem (D) has no efficient solutions if and only if A = ∅.

Proof. "⇒" Assume that A ≠ ∅. If (P) had no efficient solutions, Theorem 5 would yield B = ∅, but this is false, therefore (P) must have at least an efficient solution. Employing Theorem 3 it follows that (D) has an efficient solution, too, contradicting the assumption we made. Therefore A = ∅.

"⇐" Assuming that (D) has an efficient solution, Theorem 4 yields that (P) has an efficient solution, too. But this cannot happen since this problem has no feasible elements, consequently (D) has no efficient solutions. □

4 Comparisons with other vector duals to (P)

As we have already mentioned in the first section (see also [1, Section 4.5 and Section 5.5]), several vector dual problems to (P) were proposed in the literature and for them weak, strong and converse duality statements are valid under different hypotheses. For (D_I) (in case K = R^k_+) and (D_J) weak duality holds in general, but for strong and converse duality one needs to impose additionally the condition b ≠ 0 to the hypotheses of the corresponding theorems given for (D). For (D_H) all three duality statements hold under the hypotheses of the corresponding theorems regarding (D).
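The duality mechanism for (P) − (D) can also be traced numerically. The following sketch is entirely ours: the instance (K = R^2_+, with L, A, b, the efficient point x̄, the scalarizing λ̄ and the scalar dual solution η̄ all chosen by hand for illustration) rebuilds the dual element (λ̄, Ū, v̄) = (λ̄, −λ̃η̄^T, Lx̄ − Ūb) exactly as in the proof of Theorem 3 and checks its feasibility and the equality Lx̄ = Ūb + v̄:

```python
# Hypothetical toy instance (ours, not from the note): n = 2, m = 1, k = 2, K = R^2_+.
L = [[1.0, 0.0], [0.0, 1.0]]      # L in R^{k x n}
A = [[1.0, 1.0]]                  # A in R^{m x n}
b = [2.0]

def mat_vec(M, x):                # M x
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def mat_T_vec(M, y):              # M^T y
    return [sum(M[i][j] * y[i] for i in range(len(M))) for j in range(len(M[0]))]

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

x_bar = [1.0, 1.0]                # feasible (A x = b, x >= 0) and efficient to (P)
lam   = [1.0, 1.0]                # lam in the quasi-interior of K^*, i.e. int R^2_+
eta   = [-1.0]                    # optimal solution of the scalar Lagrange dual (by hand)
lam_t = [1.0, 0.0]                # lam_tilde in K\{0} with lam^T lam_tilde = 1

U = [[-lam_t[i] * eta[j] for j in range(1)] for i in range(2)]   # U = -lam_tilde eta^T
v = [l - u for l, u in zip(mat_vec(L, x_bar), mat_vec(U, b))]    # v = L x_bar - U b

# Feasibility of (lam, U, v) for (D): lam^T v = 0 and (L - U A)^T lam in R^n_+
LUA = [[L[i][j] - sum(U[i][r] * A[r][j] for r in range(1)) for j in range(2)]
       for i in range(2)]
assert abs(dot(lam, v)) < 1e-12
assert all(c >= -1e-12 for c in mat_T_vec(LUA, lam))

# Strong duality: the dual objective value U b + v equals L x_bar
Ub_plus_v = [u + w for u, w in zip(mat_vec(U, b), v)]
print(Ub_plus_v)  # [1.0, 1.0], equal to L x_bar
```

Here every feasible point of this instance is efficient (on the segment x_1 + x_2 = 2 no point dominates another with respect to R^2_+), so x̄ = (1, 1)^T is a legitimate choice.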
Concerning (D_L), weak and strong duality were shown (for instance in [1]), but the converse duality statement does not follow directly, so we prove it here.

Theorem 7. If (λ̄, z̄, v̄) ∈ B_L is an efficient solution to (D_L), there exists x̄ ∈ A, an efficient solution to (P), such that Lx̄ = v̄.

Proof.
Analogously to the proof of Theorem 4 it can be easily shown that
A ≠ ∅. Since λ̄ ∈ K^{*0}, there exists λ̃ ∈ K\{0} such that λ̄^T λ̃ = 1. Let U := (z̄λ̃^T)^T. Then U^T λ̄ = z̄λ̃^T λ̄ = z̄. Thus, (L − UA)^T λ̄ = L^T λ̄ − A^T U^T λ̄ = L^T λ̄ − A^T z̄ ∈ R^n_+. Assuming the existence of some x ∈ R^n_+ for which (L − UA)x ∈ (−K)\{0}, it follows λ̄^T((L − UA)x) < 0, but λ̄^T((L − UA)x) = x^T((L − UA)^T λ̄) ≥ 0, since x ∈ R^n_+ and (L − UA)^T λ̄ ∈ R^n_+. This contradiction yields (L − UA)(R^n_+) ∩ (−K) = {0}. Like in the proof of Theorem 4, this result, together with the facts that L(A) is polyhedral and K is closed convex, implies, via [9, Theorem 20.3], that L(A) + K is a closed convex set. The existence of x̄ ∈ A, a properly efficient, thus also efficient, solution to (P) fulfilling Lx̄ = v̄, follows in the lines of [1, Theorem 4.3.4] (see also [1, Section 4.5.1]). □

Let us see now what inclusions involving the images of the feasible sets through the objective functions of the vector duals to (P) considered in this paper can be established. In [3] it is mentioned that h_J(B_J) ⊆ h_H(B_H), an example where these sets do not coincide being also provided. For (D) and (D_H) we have the following assertion.

Theorem 8.
It holds h_H(B_H) ⊆ h(B).

Proof.
Let d ∈ h_H(B_H). Thus, there exist Ū ∈ B_H and an efficient solution x̄ ∈ R^n_+ to

Min_{x∈R^n_+} (L − ŪA)x, (7)

such that d = h_H(Ū) = Ūb + (L − ŪA)x̄. The efficiency of x̄ to the problem (7) yields, via Theorem 1, that x̄ is a properly efficient solution to this problem, too. Consequently, there exists γ ∈ K^{*0} such that

γ^T((L − ŪA)x̄) ≤ γ^T((L − ŪA)x) for all x ∈ R^n_+. (8)

Taking x = 0, this yields γ^T((L − ŪA)x̄) ≤ 0. On the other hand, taking in (8) x := x' + x̄ with x' ∈ R^n_+, it follows immediately γ^T((L − ŪA)x') ≥ 0 for all x' ∈ R^n_+, in particular γ^T((L − ŪA)x̄) ≥ 0. Therefore γ^T((L − ŪA)x̄) = 0. Taking v̄ = (L − ŪA)x̄, it follows γ^T v̄ = 0 and, since (L − ŪA)^T γ ∈ R^n_+, also (γ, Ū, v̄) ∈ B. As d = h_H(Ū) = Ūb + (L − ŪA)x̄ = Ūb + v̄ = h(γ, Ū, v̄) ∈ h(B), we obtain h_H(B_H) ⊆ h(B). □

Remark 4.
An example showing that the just proven inclusion can sometimes be strict was given in [1, Example 5.5.1].
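To make the mapping in the proof of Theorem 8 concrete, here is a sketch of ours (the instance is purely illustrative) that turns an element Ū ∈ B_H, together with an efficient solution x̄ of the inner problem (7) and its scalarizing multiplier γ, into a triplet (γ, Ū, v̄) feasible to (D) with the same objective value:

```python
# Illustrative instance (ours): n = m = 1, k = 2, K = R^2_+.
L = [1.0, -1.0]        # L in R^{2x1}, written as a vector
A = [1.0]              # A in R^{1x1}
b = [1.0]
U_bar = [0.0, 0.0]     # U_bar in R^{2x1}; one checks directly that U_bar is in B_H

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

# L - U_bar A (again a vector in R^{2x1}):
LUA = [L[i] - U_bar[i] * A[0] for i in range(2)]   # = [1.0, -1.0]

x_bar = 1.0            # efficient to Min_{x >= 0} (L - U_bar A) x, since no two
                       # points of the image {(x, -x) : x >= 0} are comparable
gamma = [1.0, 1.0]     # multiplier in int R^2_+ scalarizing x_bar
v_bar = [c * x_bar for c in LUA]                   # v_bar = (L - U_bar A) x_bar

# Feasibility of (gamma, U_bar, v_bar) for (D):
assert dot(gamma, v_bar) == 0.0                    # gamma^T v_bar = 0
assert dot(LUA, gamma) >= 0.0                      # (L - U_bar A)^T gamma in R^1_+

# Both duals produce the same value d = U_bar b + v_bar = h_H(U_bar):
d = [U_bar[i] * b[0] + v_bar[i] for i in range(2)]
print(d)  # [1.0, -1.0]
```

This mirrors the proof exactly: the complementarity γ^T v̄ = 0 obtained there is what makes the h_H-value reusable as an h-value.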
Theorem 9.
It holds h(B) ⊆ h_L(B_L).

Proof.
Let d ∈ h(B). Thus, there exists (λ, U, v) ∈ B such that d = h(λ, U, v) = Ub + v. Let z := U^T λ. Then λ^T d = λ^T(Ub + v) = b^T(U^T λ) + v^T λ = b^T z (as λ^T v = 0), while L^T λ − A^T z = L^T λ − A^T(U^T λ) = (L − UA)^T λ ∈ R^n_+. Consequently, (λ, z, d) ∈ B_L and, since d = h(λ, U, v) = h_L(λ, z, d) ∈ h_L(B_L), it follows that h(B) ⊆ h_L(B_L). □

Remark 5.
To show that the inclusion proven above does not turn into equality in general, consider the following situation. Let n = 1, k = 2, m = 2, L = (0, 0)^T, A = (1, 1)^T, b = (−1, −1)^T and K = R^2_+. Then, for v = (−1, −1)^T, λ = (1, 1)^T and z = (0, 0)^T we have (λ, z, v) ∈ B_L, since λ^T v = −2 ≤ 0 = z^T b and L^T λ − A^T z = 0 ∈ R_+. Consequently, h_L(λ, z, v) = (−1, −1)^T ∈ h_L(B_L).

On the other hand, assuming that (−1, −1)^T ∈ h(B), there must exist (λ̄, Ū, v̄) ∈ B such that h(λ̄, Ū, v̄) = Ūb + v̄ = (−1, −1)^T. Then λ̄^T(Ūb + v̄) = −λ̄_1 − λ̄_2 < 0, where λ̄ = (λ̄_1, λ̄_2)^T ∈ (R^2_+)^{*0} = int(R^2_+). But, as λ̄^T v̄ = 0 and b = −A, λ̄^T(Ūb + v̄) = λ̄^T Ū(−1, −1)^T = −(ŪA)^T λ̄ = (L − ŪA)^T λ̄ ≥ 0, which contradicts what we obtained above as a consequence of the assumption we made. Therefore (−1, −1)^T ∉ h(B).

Taking into consideration what we have proven in this note, one can conclude that the images of the feasible sets through the objective functions of the vector duals to (P) treated here satisfy the following inclusion chain

h_J(B_J) ⊊ h_H(B_H) ⊊ h(B) ⊊ h_L(B_L),

while the sets of maximal elements of these sets fulfill

Max(h_J(B_J), K) ⊊ Min(L(A), K) = Max(h_H(B_H), K) = Max(h(B), K) = Max(h_L(B_L), K).

Thus the schemes from [1, Remark 5.5.3] remain valid when the vector-minimization/maximization is considered with respect to a nontrivial pointed closed convex cone K ⊆ R^k.

Therefore, we have brought into attention with this note a valuable but neglected vector dual to the classical linear vector optimization problem. The image of the feasible set through the objective function of this "new-old" vector dual lies between the ones of the vector duals introduced in [3] and [7], respectively, all these three sets sharing the same maximal elements. Different from the vector dual introduced in [3], the dual we consider is defined without resorting to complicated constructions belonging to set-valued optimization and the duality statements regarding it follow more directly. Solving our vector dual problem does not involve the determination of the sets of efficient solutions of two vector optimization problems, as is the case for the vector dual from [3].
Different from the Lagrange-type vector dual from [7], the revisited vector dual we consider preserves the way the classical vector dual problem due to Isermann and the dual abstract linear optimization problem were formulated, improving their duality properties by not failing like them when b = 0.

Moreover, we have extended a classical result due to Isermann, showing that the efficient solutions to the classical linear vector optimization problem coincide with its properly efficient solutions in any sense also when the vector-minimization is considered with respect to a nontrivial pointed closed convex cone K ⊆ R^k instead of R^k_+.

References

[1] R.I. Boţ, S.-M. Grad, G. Wanka, Duality in vector optimization, Springer-Verlag, Berlin-Heidelberg, 2009.

[2] R.I. Boţ, G. Wanka,
An analysis of some dual problems in multiobjective optimization (I), Optimization 53(3):281–300, 2004.

[3] A.H. Hamel, F. Heyde, A. Löhne, C. Tammer, K. Winkler, Closing the duality gap in linear vector optimization, Journal of Convex Analysis 11(1):163–178, 2004.

[4] H. Isermann, Duality in multiple objective linear programming, in: S. Zionts (ed.), Multiple criteria problem solving, Lecture Notes in Economics and Mathematical Systems 155, Springer-Verlag, Berlin, 274–285, 1978.

[5] H. Isermann, On some relations between a dual pair of multiple objective linear programs, Zeitschrift für Operations Research 22(1):A33–A41, 1978.

[6] H. Isermann, Proper efficiency and the linear vector maximum problem, Operations Research 22(1):189–191, 1974.

[7] J. Jahn, Duality in vector optimization, Mathematical Programming 25(3):343–353, 1983.

[8] J. Jahn, Vector optimization - theory, applications, and extensions, Springer-Verlag, Berlin, 2004.

[9] R.T. Rockafellar, Convex analysis, Princeton University Press, Princeton, 1970.

[10] G. Wanka, R.I. Boţ,