A feasible adaptive refinement algorithm for linear semi-infinite optimization
Shuxiong Wang
Department of Mathematics, University of California, Irvine, CA, US
Email: [email protected]
ARTICLE HISTORY
Compiled January 26, 2021
ABSTRACT
A numerical method is developed to solve the linear semi-infinite programming problem (LSIP) in which the iterates produced by the algorithm are feasible for the original problem. This is achieved by constructing a sequence of standard linear programming problems with respect to successive discretizations of the index set, such that the approximate feasible regions are included in the original feasible region. Convergence of the approximate solutions to a solution of the original problem is proved, and the associated optimal objective values of the approximate problems are monotonically decreasing and converge to the optimal value of LSIP. An adaptive refinement procedure is designed to discretize the index set and update the constraints of the approximate problems. Numerical experiments demonstrate the performance of the proposed algorithm.
KEYWORDS
Linear semi-infinite optimization, feasible iteration, concavification, adaptive refinement
1. Introduction
Linear semi-infinite programming (LSIP) refers to optimization problems with finitely many decision variables and infinitely many linear constraints indexed by a parameter. Such problems can be formulated as
\[
\begin{array}{rl}
\min\limits_{x \in \mathbb{R}^n} & c^\top x \\
\text{s.t.} & a(y)^\top x + a_0(y) \ge 0, \quad \forall\, y \in Y, \\
 & x_i \ge 0, \quad i = 1, 2, \dots, n,
\end{array}
\tag{LSIP}
\]
where $c \in \mathbb{R}^n$, $a(y) = [a_1(y), \dots, a_n(y)]^\top$, the $a_i : \mathbb{R}^m \to \mathbb{R}$, $i = 0, 1, \dots, n$, are real-valued coefficient functions, and $Y \subseteq \mathbb{R}^m$ is the index set. In this paper, we assume that $Y = [a, b]$ is an interval with $a < b$. Denote by $F$ the feasible set of (LSIP):
\[
F = \{x \in \mathbb{R}^n_+ \mid a(y)^\top x + a_0(y) \ge 0, \ \forall\, y \in Y\},
\]
where $\mathbb{R}^n_+ = \{x \in \mathbb{R}^n \mid x_i \ge 0, \ i = 1, 2, \dots, n\}$.

Linear semi-infinite programming has wide applications in economics, robust optimization, and numerous engineering problems; more details can be found in [1-3] and the references therein.

Numerical methods for solving linear semi-infinite programming problems include discretization methods, local reduction methods, and descent direction methods (see [4-7] for an overview of these methods). The main idea of discretization methods is to solve the linear program
\[
\min_{x \in \mathbb{R}^n_+} f(x) \quad \text{s.t.} \quad a(y)^\top x + a_0(y) \ge 0, \ \forall\, y \in T,
\]
in which the original index set $Y$ in (LSIP) is replaced by a finite subset $T$. The iterates generated by discretization methods converge to a solution of the original problem as the distance between $T$ and $Y$ tends to zero (see [2, 4, 8]). Reduction methods solve nonlinear equations by quasi-Newton methods and require smoothness conditions on the functions defining the constraints [9]. Feasible descent direction methods generate a feasible direction at the current iterate and obtain the next iterate along this direction [10].

The purification methods proposed in [11, 12] generate a finite feasible sequence along which the objective value decreases. The method proposed in [11] requires the feasible set of (LSIP) to be locally polyhedral, and the method proposed in [12] requires the coefficient functions $a_i$, $i = 0, 1, \dots, n$, to be analytic.

Feasible iterative methods for nonlinear semi-infinite optimization problems have been developed via techniques of convexification or concavification [13-15]. These methods might be applicable to (LSIP) directly; however, they are not developed specifically for (LSIP), and the computational time can be reduced if the algorithm is adapted to the linear case.

In this paper, we develop a feasible iterative algorithm to solve (LSIP). The basic idea is to construct a sequence of standard linear optimization problems with respect to discretized subsets of the index set such that the feasible region of each linear optimization problem is included in the feasible region of (LSIP). The proposed method consists of two stages. The first stage is based on a restriction of the semi-infinite constraint. The second stage is based on estimating lower bounds of the coefficient functions using concavification or interval methods.

The rest of the paper is organized as follows. In Section 2, we propose the methods to construct inner approximations of the feasible region of (LSIP). A numerical method for solving the original linear semi-infinite programming problem is proposed in Section 3. In Section 4, we apply our algorithm to numerical examples to show its performance. We conclude the paper in Section 5.
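As a point of reference for the method developed below, the following is a minimal sketch of the classical discretization approach described above (not the feasible method proposed in this paper). The coefficient functions and right-hand side are illustrative assumptions, and the computed solution is in general feasible only on the grid $T$, not on all of $Y$:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (an assumption of this sketch): a_i(y) = y^(i-1) and
# a_0(y) = -tan(y) on Y = [0, 1], i.e. the constraint sum_i y^(i-1) x_i >= tan(y).
n = 4
c = np.array([1.0 / i for i in range(1, n + 1)])
a = lambda y: np.array([y ** (i - 1) for i in range(1, n + 1)])
a0 = lambda y: -np.tan(y)

T = np.linspace(0.0, 1.0, 101)          # finite subset of the index set Y
# a(y)^T x + a_0(y) >= 0 for y in T  <=>  -a(y)^T x <= a_0(y)
A_ub = np.array([-a(y) for y in T])
b_ub = np.array([a0(y) for y in T])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)  # linprog enforces x >= 0 by default
print(res.fun, res.x)
```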
2. Restriction of the lower level problem
The restriction of the lower level problem leads to an inner approximation of the feasible region of (LSIP) and thus to feasible iterates. A two-stage procedure is performed to achieve the restriction for (LSIP). In the first stage, we construct a uniform lower-bound function with respect to the decision variables for the function defining the constraint in (LSIP). This step requires solving a global optimization problem for the coefficient functions over the index set. The second stage estimates lower bounds of the coefficient functions over the index set instead of solving the optimization problems globally, which significantly reduces the computational cost.
The semi-infinite constraint of (LSIP) can be reformulated as
\[ \min_{y \in Y}\{a(y)^\top x + a_0(y)\} \ge 0. \tag{1} \]
Since $a(y)^\top x = \sum_{i=1}^n a_i(y)x_i$, (1) is equivalent to
\[ \min_{y \in Y}\Big\{\sum_{i=1}^n a_i(y)x_i + a_0(y)\Big\} \ge 0. \]
By exchanging the minimization and the summation on the left-hand side of the inequality, we obtain a new linear inequality
\[ \sum_{i=1}^n \Big\{\min_{y \in Y} a_i(y)\Big\}x_i + \min_{y \in Y} a_0(y) \ge 0. \tag{2} \]
Since the decision variables satisfy $x_i \ge 0$, $i = 1, 2, \dots, n$, we have
\[ \sum_{i=1}^n \Big\{\min_{y \in Y} a_i(y)\Big\}x_i + \min_{y \in Y} a_0(y) \le \min_{y \in Y}\Big\{\sum_{i=1}^n a_i(y)x_i + a_0(y)\Big\}. \]
Thus we obtain a uniform lower-bound function for $\min_{y \in Y}\{a(y)^\top x + a_0(y)\}$, and any point $x$ satisfying (2) is a feasible point of (LSIP). Let $\bar{F}$ be the feasible region defined by the inequality (2), i.e.,
\[ \bar{F} = \Big\{x \in \mathbb{R}^n_+ \,\Big|\, \sum_{i=1}^n \Big\{\min_{y \in Y} a_i(y)\Big\}x_i + \min_{y \in Y} a_0(y) \ge 0\Big\}. \]
From the above analysis, we conclude that $\bar{F} \subseteq F$. The main difference between the original constraint (1) and the restricted constraint (2) is that in the latter case the minimization is independent of the decision variable $x$. In order to compute $\bar{F}$, we need to solve a series of problems of the form
\[ \min_y \ a_i(y) \quad \text{s.t.} \quad y \in Y \tag{3} \]
for $i = 0, 1, \dots, n$. Based on $\bar{F}$, we can construct a linear program with one linear inequality constraint that has the same objective function as (LSIP) and whose feasible points are all feasible for (LSIP). Such a problem is defined as
\[ \min_{x \in \mathbb{R}^n} c^\top x \quad \text{s.t.} \quad x \in \bar{F}. \tag{R-LSIP} \]
To characterize how well (R-LSIP) approximates (LSIP), we can estimate the distance between $g(x) = \min_{y \in Y}\{a(y)^\top x + a_0(y)\}$ and $\bar{g}(x) = \sum_{i=1}^n \{\min_{y \in Y} a_i(y)\}x_i + \min_{y \in Y} a_0(y)$, the functions used to define the constraints of (LSIP) and (R-LSIP). Assume that each function $a_i(y)$ is Lipschitz continuous on $Y$, i.e., there exist constants $L_i \ge 0$ such that $|a_i(y) - a_i(z)| \le L_i|y - z|$ for all $y, z \in Y$, $i = 0, 1, 2, \dots, n$. By direct computation, we have
\[ |g(x) - \bar{g}(x)| \le \Big(\sum_{i=1}^n L_i x_i + L_0\Big)(b - a). \]
It turns out that for any fixed $x$, the error between $g(x)$ and $\bar{g}(x)$ is bounded linearly with respect to $(b - a)$. Furthermore, if we assume that the decision variables are bounded above (e.g., $0 \le x_i \le U_i$ for some constants $U_i > 0$, $i = 1, 2, \dots, n$), we have
\[ |g(x) - \bar{g}(x)| \le \Big(\sum_{i=1}^n L_i U_i + L_0\Big)(b - a). \]
This indicates that the error between $g(x)$ and $\bar{g}(x)$ goes to zero uniformly as $|b - a|$ tends to zero. By dividing the index set $Y = [a, b]$ into subintervals, one can construct a sequence of linear programs that approximate (LSIP) arbitrarily well as the size of the subdivision (formally defined in Section 3) tends to zero. Given a subdivision, constructing (R-LSIP) on each subinterval requires solving (3) globally, which becomes computationally expensive due to the increasing number of subintervals and the non-convexity of the coefficient functions in general. In fact, it is not necessary to solve (3) exactly: to guarantee that the feasible region $\bar{F}$ derived from inequality (2) is an inner approximation of the feasible region of (LSIP), it is enough to compute a lower bound for (3) in order to generate a restriction of (LSIP). In the remainder of this section, we present two alternative approaches to approximating problem (3).
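A minimal sketch of this construction follows, under the assumption that the global minima in (3) are approximated on a fine grid (so the resulting region is only approximately inner; the rigorous lower bounds of the two approaches below avoid this caveat). The coefficient callables are assumed to be vectorized:

```python
import numpy as np
from scipy.optimize import linprog

def restriction_lp(c, coeffs, a0, lo, hi, grid=2001):
    """Solve (R-LSIP): min c^T x s.t. sum_i (min_Y a_i) x_i + min_Y a_0 >= 0, x >= 0."""
    ys = np.linspace(lo, hi, grid)
    m = np.array([np.min(f(ys)) for f in coeffs])   # approximate min_Y a_i
    m0 = np.min(a0(ys))                             # approximate min_Y a_0
    # single inequality m^T x + m0 >= 0  <=>  -m^T x <= m0
    return linprog(c, A_ub=-m.reshape(1, -1), b_ub=[m0])
```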
The idea of the first approach comes from the techniques of interval methods [16, 21]. Given an interval $Y = [a, b]$, the range of $a_i(y)$ on $Y$ is defined as $R(a_i, Y) = [R^l_i, R^u_i] = \{a_i(y) \mid y \in Y\}$. An interval function $A_i(Y) = [A^l_i, A^u_i]$ is called an inclusion function for $a_i(y)$ on $Y$ if $R(a_i, Y) \subseteq A_i(Y)$. A natural inclusion function can be obtained by replacing the variable $y$ in $a_i(y)$ with the corresponding interval and computing the resulting expression using the rules of interval arithmetic [21]. In some special cases, the natural inclusion function is tight (i.e., $R(a_i, Y) = A_i(Y)$). In more general cases, however, the natural inclusion function overestimates the range of $a_i(y)$ on $Y$, which implies $A^l_i < \min_{y \in Y} a_i(y)$. In such cases, the tightness of the inclusion can be measured by
\[ \max\{|R^l_i - A^l_i|, |R^u_i - A^u_i|\} \le \gamma|b - a|^p \quad \text{and} \quad |A^l_i - A^u_i| \le \delta|b - a|^p, \tag{4} \]
where $p \ge 1$ and $\gamma, \delta \ge 0$ are constants depending on $a_i(y)$ and the interval $[a, b]$. By replacing $\min_{y \in Y} a_i(y)$ in (2) with $A^l_i$ for $i = 0, 1, \dots, n$, we obtain the new linear inequality
\[ \sum_{i=1}^n A^l_i x_i + A^l_0 \ge 0. \tag{5} \]
It is obvious that any $x$ satisfying (5) is a feasible point of (LSIP).

The second approach to estimating a lower bound for problem (3) is to construct a uniform lower-bound function $\bar{a}_i(y)$ such that $\bar{a}_i(y) \le a_i(y)$ holds for all $y \in Y$. In addition, we require that the optimal solution of
\[ \min_y \ \bar{a}_i(y) \quad \text{s.t.} \quad y \in Y \]
is easy to identify. Here, we construct a concave lower-bound function for $a_i(y)$ by adding a negative quadratic term to it, i.e.,
\[ \bar{a}_i(y) = a_i(y) - \frac{\alpha_i}{2}\Big(y - \frac{a + b}{2}\Big)^2, \]
where $\alpha_i \ge 0$, so that $\bar{a}_i(y) \le a_i(y)$ for all $y \in Y$. Furthermore, $\bar{a}_i(y)$ is twice continuously differentiable if and only if $a_i(y)$ is twice continuously differentiable, and the second derivative of $\bar{a}_i(y)$ is $\bar{a}''_i(y) = a''_i(y) - \alpha_i$. Thus $\bar{a}_i(y)$ is concave on $Y$ if the parameter $\alpha_i$ satisfies $\alpha_i \ge \max_{y \in Y} a''_i(y)$. To sum up, we select the parameter $\alpha_i$ such that
\[ \alpha_i \ge \max\Big\{0, \max_{y \in Y} a''_i(y)\Big\}. \tag{6} \]
This guarantees that $\bar{a}_i(y)$ is a concave lower-bound function of $a_i(y)$ on the index set $Y$. The computation of $\alpha_i$ in (6) involves a global optimization; however, we can use any upper bound of the right-hand side of (6), and such an upper bound can be obtained by the interval method proposed above. On the other hand, the distance between $\bar{a}_i(y)$ and $a_i(y)$ on $[a, b]$ is
\[ \max_{y \in Y}|a_i(y) - \bar{a}_i(y)| = \frac{\alpha_i}{8}(b - a)^2. \]
Since $\bar{a}_i(y)$ is concave on $Y$, the minimizer of $\bar{a}_i(y)$ on $Y$ is attained on the boundary of $Y$ (see [22]), i.e., $\min_{y \in Y}\bar{a}_i(y) = \min\{\bar{a}_i(a), \bar{a}_i(b)\}$. By replacing $\min_{y \in Y} a_i(y)$ in (2) with $\min_{y \in Y}\bar{a}_i(y)$, we get the second type of restricted constraint:
\[ \sum_{i=1}^n \min\{\bar{a}_i(a), \bar{a}_i(b)\}x_i + \min\{\bar{a}_0(a), \bar{a}_0(b)\} \ge 0. \tag{7} \]
The two approaches are distinct in the sense that the interval method requires milder assumptions on the coefficient functions, while the concavification-based method admits a better approximation rate.
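A small sketch of the concavification bound on a single interval: with $\alpha_i \ge \max\{0, \max_Y a''_i\}$ the minorant $\bar{a}_i$ is concave, so its minimum over $[a, b]$ is attained at an endpoint. The value of alpha is assumed to be supplied (e.g., from a closed-form or interval bound on the second derivative):

```python
import math

def concave_lower_bound(a_i, alpha, lo, hi):
    """Lower bound for the min of a_i on [lo, hi] via the concave minorant in (7)."""
    mid = 0.5 * (lo + hi)
    abar = lambda y: a_i(y) - 0.5 * alpha * (y - mid) ** 2
    return min(abar(lo), abar(hi))      # = min of abar on [lo, hi] by concavity

# Example: a_i = sin has a_i'' = -sin <= 0 on [0, 1], so alpha = 0 is valid.
print(concave_lower_bound(math.sin, 0.0, 0.0, 1.0))   # 0.0, the exact minimum
```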
3. Numerical method

Based on the restriction approaches developed in the previous section, we are able to construct a sequence of approximations of (LSIP) by dividing the original index set into subsets successively and constructing linear optimization problems associated with restricted constraints on the subsets.

Definition 3.1.
We call $T = \{\tau_0, \tau_1, \dots, \tau_N\}$ a subdivision of the interval $[a, b]$ if $a = \tau_0 \le \tau_1 \le \dots \le \tau_N = b$. Let $Y_k = [\tau_{k-1}, \tau_k]$ for $k = 1, 2, \dots, N$; the length of $Y_k$ is defined by $|Y_k| = |\tau_k - \tau_{k-1}|$, and the length of the subdivision $T$ is defined by $|T| = \max_{1 \le k \le N}|Y_k|$. It follows that $Y = \cup_{k=1}^N Y_k$.

The intuition behind approximating (LSIP) through a subdivision comes from the observation that the original semi-infinite constraint in (LSIP),
\[ a(y)^\top x + a_0(y) \ge 0, \quad \forall\, y \in Y, \]
can be reformulated equivalently as finitely many semi-infinite constraints
\[ a(y)^\top x + a_0(y) \ge 0, \quad \forall\, y \in Y_k, \ k = 1, 2, \dots, N. \]
Given a subdivision, we can construct an approximate constraint on each subinterval and combine them to formulate an inner approximation of the original feasible region. The corresponding optimization problem provides a restriction of (LSIP), and the solutions of the approximate problems approach the optimal solution of (LSIP) as the size of the subdivision tends to zero.

Two different approaches (the interval method and the concavification method) were introduced in Section 2 to construct approximate regions that lie inside the original feasible region. They induce two different types of approximation problems when applied to a particular subdivision. We only describe the main results for the first type (the interval method) and focus on the convergence and the algorithm for the second one.

We introduce the Slater condition and a lemma derived from it, which will be used in the following parts. We say the Slater condition holds for (LSIP) if there exists a point $\bar{x} \in \mathbb{R}^n_+$ such that
\[ a(y)^\top\bar{x} + a_0(y) > 0, \quad \forall\, y \in Y. \]
Let $F^o = \{x \in F \mid a(y)^\top x + a_0(y) > 0, \ \forall\, y \in Y\}$ be the set of all Slater points in $F$. It is known that the feasible region $F$ is exactly the closure of $F^o$ under the Slater condition [4]. We state this result as a lemma and give a direct proof in the appendix.

Lemma 3.2.
Assume that the Slater condition holds for (LSIP) and the index set $Y$ is compact. Then we have $F = cl(F^o)$, where $cl(F^o)$ denotes the closure of the set $F^o$.

3.1. Restriction based on interval method

Let $A_i(Y_k) = [A^l_{i,k}, A^u_{i,k}]$ be the inclusion function of $a_i(y)$ on $Y_k$. By estimating the lower bound of $\min_{y \in Y_k} a_i(y)$ via the interval method, we can construct the linear constraints
\[ \sum_{i=1}^n A^l_{i,k}x_i + A^l_{0,k} \ge 0, \quad k = 1, 2, \dots, N, \]
corresponding to the original constraints $a(y)^\top x + a_0(y) \ge 0$, $\forall\, y \in Y_k$, $k = 1, 2, \dots, N$. For simplicity, we reformulate the inequalities as
\[ A_T^\top x + b_T \ge 0, \tag{8} \]
where $A_T(i, k) = A^l_{i,k}$ and $b_T(k) = A^l_{0,k}$ for $i = 1, 2, \dots, n$, $k = 1, 2, \dots, N$. The approximation problem for (LSIP) in this case is formulated as
\[ \min_{x \in \mathbb{R}^n_+} c^\top x \quad \text{s.t.} \quad A_T^\top x + b_T \ge 0. \tag{R1-LSIP(T)} \]
Following the analysis in Section 2, we know that $\{x \in \mathbb{R}^n_+ \mid A_T^\top x + b_T \ge 0\} \subseteq F$. Therefore, any feasible point of R1-LSIP(T) is feasible for (LSIP), provided that the feasible region of R1-LSIP(T) is non-empty. By solving R1-LSIP(T), we obtain a feasible approximate solution of (LSIP), and the corresponding optimal value of R1-LSIP(T) provides an upper bound on the optimal value of (LSIP). Let $F(T) = \{x \in \mathbb{R}^n_+ \mid A_T^\top x + b_T \ge 0\}$ be the feasible region of R1-LSIP(T). We say that $F(T)$ is consistent if $F(T) \ne \emptyset$; in this case, the corresponding problem R1-LSIP(T) is called consistent. The assembly of these constraints is sketched below.
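A sketch of assembling R1-LSIP(T). The inclusion-function lower bounds $A^l_{i,k}$ are problem specific; here, purely as an assumption of this sketch, each coefficient function is taken to be monotone on $Y$ (as for $a_i(y) = y^{i-1}$ on $[0, 1]$), so that evaluating the endpoints yields the exact range:

```python
import numpy as np
from scipy.optimize import linprog

def r1_lsip(c, coeffs, a0, knots):
    """knots = [tau_0, ..., tau_N]; one restricted constraint per subinterval Y_k."""
    rows, rhs = [], []
    for lo, hi in zip(knots[:-1], knots[1:]):
        lb = lambda f: min(f(lo), f(hi))       # lower bound of f on [lo, hi] (monotone f)
        rows.append([-lb(f) for f in coeffs])  # row of -A_T^T
        rhs.append(lb(a0))                     # entry of b_T
    return linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs))
```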
The following lemma shows that the approximate problem R1-LSIP(T) is consistent for all $|T|$ small enough if the Slater condition holds for (LSIP).

Lemma 3.3. Assume that the Slater condition holds for (LSIP) and the coefficient functions $a_i(y)$, $i = 0, 1, \dots, n$, are Lipschitz continuous on $Y$. Then $F(T)$ is nonempty for all $|T|$ small enough.

In the following theorem, we show that any accumulation point of the solutions of the approximate problems R1-LSIP(T) is a solution of (LSIP) as the size of the subdivision tends to zero.
Theorem 3.4.
Assume the Slater condition holds for (LSIP) and the level set $L(\bar{x}) = \{x \in F \mid c^\top x \le c^\top\bar{x}\}$ is bounded, where $\bar{x}$ is a Slater point. Let $\{T_k\}$ be a sequence of subdivisions of $Y$ such that $T_0$ is consistent and $\lim_{k \to \infty}|T_k| = 0$ with $T_k \subseteq T_{k+1}$. Let $x^*_k$ be a solution of R1-LSIP($T_k$). Then any accumulation point of the sequence $\{x^*_k\}$ is an optimal solution of (LSIP).

3.2. Restriction based on concavification method

Given a subdivision $T = \{\tau_0, \dots, \tau_N\}$ and $Y_k = [\tau_{k-1}, \tau_k]$, $k = 1, 2, \dots, N$, by applying the concavification method of Section 2 to each of the finitely many semi-infinite constraints $a(y)^\top x + a_0(y) \ge 0$, $\forall\, y \in Y_k$, $k = 1, 2, \dots, N$, we can construct the linear constraints
\[ \sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)\}x_i + \min\{\bar{a}_0(\tau_{k-1}), \bar{a}_0(\tau_k)\} \ge 0, \quad k = 1, 2, \dots, N, \]
where $\bar{a}_i(\cdot)$ is the concavification function defined on $Y_k$ when we evaluate $\bar{a}_i(\tau_{k-1})$ or $\bar{a}_i(\tau_k)$, i.e., $\bar{a}_i(y) = a_i(y) - \frac{\alpha_{i,k}}{2}\big(y - \frac{\tau_{k-1} + \tau_k}{2}\big)^2$. We rewrite the above inequalities as
\[ \bar{A}_T^\top x + \bar{b}_T \ge 0, \]
where $\bar{A}_T(i, k) = \min\{\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)\}$ and $\bar{b}_T(k) = \min\{\bar{a}_0(\tau_{k-1}), \bar{a}_0(\tau_k)\}$. The corresponding approximate problem for (LSIP) is defined by
\[ \min_{x \in \mathbb{R}^n_+} c^\top x \quad \text{s.t.} \quad \bar{A}_T^\top x + \bar{b}_T \ge 0. \tag{R2-LSIP(T)} \]
Let $\bar{F}(T) = \{x \in \mathbb{R}^n_+ \mid \bar{A}_T^\top x + \bar{b}_T \ge 0\}$ be the feasible set of the problem R2-LSIP(T); we conclude that $\bar{F}(T) \subseteq F$.

The approximate problem R2-LSIP(T) is similar to R1-LSIP(T) in the sense that both problems induce restrictions of (LSIP). Therefore, any feasible solution of R2-LSIP(T) is feasible for (LSIP), and the corresponding optimal value provides an upper bound on the optimal value of the problem (LSIP).
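A sketch of the analogous assembly for R2-LSIP(T), assuming a routine alpha_bound(f, lo, hi) that returns $\alpha \ge \max\{0, \max_{[lo,hi]} f''\}$ (a hypothetical helper, e.g. obtained from a closed-form or interval bound on the second derivative):

```python
import numpy as np
from scipy.optimize import linprog

def r2_lsip(c, coeffs, a0, knots, alpha_bound):
    def endpoint_min(f, alpha, lo, hi):
        mid = 0.5 * (lo + hi)
        abar = lambda y: f(y) - 0.5 * alpha * (y - mid) ** 2
        return min(abar(lo), abar(hi))         # min of the concave minorant
    rows, rhs = [], []
    for lo, hi in zip(knots[:-1], knots[1:]):
        rows.append([-endpoint_min(f, alpha_bound(f, lo, hi), lo, hi) for f in coeffs])
        rhs.append(endpoint_min(a0, alpha_bound(a0, lo, hi), lo, hi))
    return linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs))
```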
The following lemma shows that if the Slater condition holds for (LSIP), then R2-LSIP(T) is consistent (i.e., $\bar{F}(T) \ne \emptyset$) for all $|T|$ small enough. The proof can be found in the appendix.

Lemma 3.5. Assume the Slater condition holds for (LSIP) and $a_i(y)$, $i = 0, 1, \dots, n$, are twice continuously differentiable. Then R2-LSIP(T) is consistent for all $|T|$ small enough.

In order to find a good approximate solution of (LSIP), R2-LSIP(T) needs to be solved iteratively while the subdivision is refined. We present a particular refinement strategy such that the approximate regions of R2-LSIP(T) are monotonically enlarging from the inside of the feasible region $F$. Consequently, the corresponding optimal values of the approximation problems are monotonically decreasing and converge to the optimal value of the original linear semi-infinite problem. Note that this refinement procedure cannot guarantee the monotonicity property when applied to R1-LSIP(T).

Let $T = \{\tau_k \mid k = 0, 1, \dots, N\}$ be a subdivision of $Y$. Assume $Y_k = [\tau_{k-1}, \tau_k]$ is the subinterval to be refined. Denote by $\tau_{k,1}$ and $\tau_{k,2}$ the trisection points of $Y_k$:
\[ \tau_{k,1} = \tau_{k-1} + \tfrac{1}{3}(\tau_k - \tau_{k-1}), \qquad \tau_{k,2} = \tau_{k-1} + \tfrac{2}{3}(\tau_k - \tau_{k-1}). \]
The restricted constraint on $Y_k$ is
\[ \sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)\}x_i + \min\{\bar{a}_0(\tau_{k-1}), \bar{a}_0(\tau_k)\} \ge 0, \tag{9} \]
where $\bar{a}_i(y) = a_i(y) - \frac{\alpha_{i,k}}{2}\big(y - \frac{\tau_{k-1} + \tau_k}{2}\big)^2$ and the parameter $\alpha_{i,k}$ is calculated in the manner of (6). The lower-bounding functions on the subintervals after refinement are defined by
\[
\begin{aligned}
\bar{a}^1_i(y) &= a_i(y) - \tfrac{\alpha^1_{i,k}}{2}\Big(y - \tfrac{\tau_{k-1} + \tau_{k,1}}{2}\Big)^2, & y &\in Y_{k,1} = [\tau_{k-1}, \tau_{k,1}], \\
\bar{a}^2_i(y) &= a_i(y) - \tfrac{\alpha^2_{i,k}}{2}\Big(y - \tfrac{\tau_{k,1} + \tau_{k,2}}{2}\Big)^2, & y &\in Y_{k,2} = [\tau_{k,1}, \tau_{k,2}], \\
\bar{a}^3_i(y) &= a_i(y) - \tfrac{\alpha^3_{i,k}}{2}\Big(y - \tfrac{\tau_{k,2} + \tau_k}{2}\Big)^2, & y &\in Y_{k,3} = [\tau_{k,2}, \tau_k],
\end{aligned}
\]
where the parameters $\alpha^j_{i,k}$, $j = 1, 2, 3$, satisfy $\alpha^j_{i,k} \ge \max\{0, \max_{y \in Y_{k,j}} a''_i(y)\}$ and $\alpha^j_{i,k} \le \alpha_{i,k}$ for $j = 1, 2, 3$. The refined approximate region $\bar{F}(T \cup \{\tau_{k,1}, \tau_{k,2}\})$ is obtained by replacing the constraint (9) in $\bar{F}(T)$ with
\[
\begin{aligned}
\sum_{i=1}^n \min\{\bar{a}^1_i(\tau_{k-1}), \bar{a}^1_i(\tau_{k,1})\}x_i + \min\{\bar{a}^1_0(\tau_{k-1}), \bar{a}^1_0(\tau_{k,1})\} &\ge 0, \\
\sum_{i=1}^n \min\{\bar{a}^2_i(\tau_{k,1}), \bar{a}^2_i(\tau_{k,2})\}x_i + \min\{\bar{a}^2_0(\tau_{k,1}), \bar{a}^2_0(\tau_{k,2})\} &\ge 0, \\
\sum_{i=1}^n \min\{\bar{a}^3_i(\tau_{k,2}), \bar{a}^3_i(\tau_k)\}x_i + \min\{\bar{a}^3_0(\tau_{k,2}), \bar{a}^3_0(\tau_k)\} &\ge 0,
\end{aligned}
\]
as sketched below.
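A sketch of the trisection step for one subinterval $[t_0, t_1]$, again assuming the hypothetical helper alpha_bound described above; it returns the three constraint rows that replace the single old one:

```python
def refine_constraints(a_list, t0, t1, alpha_bound):
    """Endpoint minima of the concave minorants on the three equal pieces."""
    s1 = t0 + (t1 - t0) / 3.0
    s2 = t0 + 2.0 * (t1 - t0) / 3.0
    new_rows = []
    for lo, hi in [(t0, s1), (s1, s2), (s2, t1)]:
        mid = 0.5 * (lo + hi)
        row = []
        for f in a_list:                        # a_0, a_1, ..., a_n
            alpha = alpha_bound(f, lo, hi)      # on the smaller interval, alpha <= alpha_k
            row.append(min(f(lo) - 0.5 * alpha * (lo - mid) ** 2,
                           f(hi) - 0.5 * alpha * (hi - mid) ** 2))
        new_rows.append(row)
    return new_rows
```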
Lemma 3.6. Let $T$ be a consistent subdivision of $Y$. Assume that $\bar{F}(T \cup \{\tau_{k,1}, \tau_{k,2}\})$ is obtained by the trisection refinement procedure above. Then we have $\bar{F}(T) \subseteq \bar{F}(T \cup \{\tau_{k,1}, \tau_{k,2}\}) \subseteq F$.

Proof.
Since $x \in \mathbb{R}^n_+$, it suffices to prove that for $i = 0, 1, 2, \dots, n$,
\[ \min\big[\bar{a}^1_i(\tau_{k-1}),\, \bar{a}^1_i(\tau_{k,1}),\, \bar{a}^2_i(\tau_{k,1}),\, \bar{a}^2_i(\tau_{k,2}),\, \bar{a}^3_i(\tau_{k,2}),\, \bar{a}^3_i(\tau_k)\big] \ge \min\big[\bar{a}_i(\tau_{k-1}),\, \bar{a}_i(\tau_k)\big]. \]
By direct computation, we know $\bar{a}^1_i(\tau_{k-1}) \ge \bar{a}_i(\tau_{k-1})$ and $\bar{a}^3_i(\tau_k) \ge \bar{a}_i(\tau_k)$. Since $\bar{a}_i(y)$ is concave on $Y_k = [\tau_{k-1}, \tau_k]$, we have $\bar{a}_i(\tau_{k,j}) \ge \min[\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]$ for $j = 1, 2$. In addition, direct calculation implies
\[ \min\big[\bar{a}^1_i(\tau_{k,1}), \bar{a}^2_i(\tau_{k,1})\big] \ge \bar{a}_i(\tau_{k,1}), \qquad \min\big[\bar{a}^2_i(\tau_{k,2}), \bar{a}^3_i(\tau_{k,2})\big] \ge \bar{a}_i(\tau_{k,2}). \]
The last two statements indicate that $\min[\bar{a}^1_i(\tau_{k,1}), \bar{a}^2_i(\tau_{k,1})] \ge \min[\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]$ and $\min[\bar{a}^2_i(\tau_{k,2}), \bar{a}^3_i(\tau_{k,2})] \ge \min[\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)]$. This proves our statement.

We present in the following theorem the general convergence result for approximating (LSIP) via a sequence of restriction problems.

Theorem 3.7.
Assume that the assumptions in Theorem 3.4 hold. Let $\{T_k\}$ be a sequence of subdivisions of the index set $Y$, obtained by trisection refinement recursively, such that $T_0$ is consistent and $\lim_{k \to \infty}|T_k| = 0$. Denote by $x^*_k$ the optimal solution of R2-LSIP($T_k$). Then we have:
(1) $x^*_k$ is feasible for (LSIP), and any accumulation point of the sequence $\{x^*_k\}$ is an optimal solution of (LSIP);
(2) $\{f(x^*_k)\}$, with $f(x^*_k) = c^\top x^*_k$, is a decreasing sequence, and $v^* = \lim_{k \to \infty} f(x^*_k)$ is the optimal value of (LSIP).

Proof.
The proof of the first statement is similar to the proof of Theorem 3.4. From Lemma 3.6, we know that $\bar{F}(T_{k-1}) \subseteq \bar{F}(T_k)$ holds for all $k \in \mathbb{N}$, which implies that the sequence $\{f(x^*_k)\}$ is decreasing. Since the level set $L(\bar{x})$ is bounded, the sequence $\{f(x^*_k)\}$ is bounded. Therefore, the limit of the sequence exists, and we denote it by $v^*$. From (1), we know that $v^*$ is the optimal value of (LSIP). This completes our proof.

3.3. The adaptive refinement algorithm

In this section, we present a specific algorithm to solve (LSIP). The algorithm is based on solving the approximate linear problems R2-LSIP(T) (or R1-LSIP(T)) for a given subdivision $T$ and then refining the subdivision to improve the solution. The key idea of the algorithm is to select the candidate subsets of $T$ to be refined in an adaptive manner rather than making the refinement exhaustively.

Before presenting the details of the algorithm, we introduce the optimality condition for (LSIP). Given a point $x \in F$, let $A(x) = \{y \in Y \mid a(y)^\top x + a_0(y) = 0\}$ be the active index set for (LSIP) at $x$. If some constraint qualification (e.g., the Slater condition) holds for (LSIP), a feasible point $x^* \in F$ is an optimal solution if and only if $x^*$ satisfies the KKT system ([4]), i.e.,
\[ c - \sum_{y \in A(x^*)} \lambda_y a(y) = 0 \]
for some $\lambda_y \ge 0$, $y \in A(x^*)$.
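One way to test this condition numerically is to recover nonnegative multipliers supported on near-active indices by nonnegative least squares and inspect the stationarity residual. This is a minimal sketch; the index grid, the tolerance handling, and the omission of multipliers for the bound constraints $x \ge 0$ (as in the KKT system above) are assumptions of the sketch. Definition 3.8 below formalizes the test:

```python
import numpy as np
from scipy.optimize import nnls

def kkt_residual(c, a, a0, x_star, ys, delta):
    """|| c - sum_y lambda_y a(y) || over near-active y, with lambda >= 0."""
    active = [y for y in ys if 0.0 <= a(y) @ x_star + a0(y) <= delta]
    if not active:
        return np.inf
    A = np.column_stack([a(y) for y in active])
    lam, resid = nnls(A, c)                 # min ||A lam - c||, lam >= 0
    return resid                            # compare against epsilon
```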
Definition 3.8. We say that $x^* \in F$ is an $(\epsilon, \delta)$ optimal solution to (LSIP) if there exist finitely many indices $y \in A(x^*, \delta)$ and multipliers $\lambda_y \ge 0$ such that
\[ \Big\|c - \sum_{y \in A(x^*, \delta)} \lambda_y a(y)\Big\| \le \epsilon, \]
where $A(x^*, \delta) = \{y \in Y \mid 0 \le a(y)^\top x^* + a_0(y) \le \delta\}$.

To obtain a consistent subdivision in the first step of Algorithm 1, we apply the adaptive refinement algorithm to the problem
\[ \max_{(x, z) \in \mathbb{R}^n_+ \times \mathbb{R}} z \quad \text{s.t.} \quad a(y)^\top x + a_0(y) \ge z, \ \forall\, y \in Y \tag{LSIP$_0$} \]
(equivalently, minimize $-z$) until a feasible solution $(x_0, z_0)$ with $z_0 \ge 0$ of the restricted problem LSIP$_0$(T) is found for some subdivision $T$. The current subdivision $T_0$ is then consistent and is chosen as the initial subdivision of Algorithm 1. In addition, $x_0$ is feasible for the original problem and is selected as the initial point for the algorithm.

————————————————————————————————
Algorithm 1 (Adaptive Refinement Algorithm for LSIP)
————————————————————————————————
S1. Find an initial subdivision $T_0$ such that R2-LSIP($T_0$) is consistent. Choose an initial point $x_0$ and tolerances $\epsilon$ and $\delta$. Set $k = 0$.
S2. Solve R2-LSIP($T_k$) to obtain a solution $x^*_k$ and the active index set $A(x^*_k)$.
S3. Terminate if $x^*_k$ is an $(\epsilon, \delta)$ optimal solution to (LSIP). Otherwise, update $T_{k+1}$ and $\bar{F}(T_{k+1})$ by the trisection refinement procedure applied to the subintervals of $T_k$ that correspond to $A(x^*_k)$.
S4. Set $k = k + 1$ and go to S2.
————————————————————————————————

The refinement procedure in step S3 is carried out as follows. In the $k$-th iteration, each subinterval $[\tau^k_{j-1}, \tau^k_j]$ with $j \in A(x^*_k)$ is divided into three subintervals of equal length. New constraints are constructed on these subintervals and replace the constraint corresponding to $[\tau^k_{j-1}, \tau^k_j]$ for each index $j \in A(x^*_k)$. This yields $\bar{F}(T_{k+1})$ and the associated approximation problem R2-LSIP($T_{k+1}$).
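A high-level sketch of the main loop of Algorithm 1. The helpers build_constraints (returning $(G, g_0)$ with rows $G_k x + g_{0,k} \ge 0$ for the current subdivision, e.g. assembled as in the r2_lsip sketch) and refine (inserting the two trisection points of subinterval $k$) are assumptions of this sketch, as is the simplified stopping rule $|c^\top x^*_k - c^\top x^*_{k-1}| \le \epsilon$ used in the implementation described below:

```python
import numpy as np
from scipy.optimize import linprog

def adaptive_refinement(c, build_constraints, refine, eps, max_iter=200):
    knots = [0.0, 1.0]                       # initial subdivision, assumed consistent
    prev = None
    for _ in range(max_iter):
        G, g0 = build_constraints(knots)
        res = linprog(c, A_ub=-G, b_ub=g0)   # R2-LSIP(T_k); x >= 0 by default
        if prev is not None and abs(prev - res.fun) <= eps:
            return res.x, res.fun, knots
        prev = res.fun
        slack = G @ res.x + g0
        for k in np.where(slack <= 1e-8)[0][::-1]:   # trisect active subintervals;
            knots = refine(knots, k)                 # descending k keeps indices valid
    return res.x, res.fun, knots
```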
Theorem 3.9 (Convergence of Algorithm 1). Assume the Slater condition holds for (LSIP) and the coefficient functions $a_i(y)$, $i = 0, 1, \dots, n$, are twice continuously differentiable. Then Algorithm 1 terminates in finitely many iterations for any positive tolerances $\epsilon$ and $\delta$.

Proof. Let $x^*_k$ be a solution of the approximate subproblem R2-LSIP($T_k$) with $T_k = \{\tau^k_j \mid j = 0, 1, \dots, N_k\}$. Then there exist $\lambda^k_j \ge 0$, $j \in A(x^*_k)$, such that
\[ c - \sum_{j \in A(x^*_k)} \lambda^k_j \min\big[\bar{a}(\tau^k_{j-1}), \bar{a}(\tau^k_j)\big] = 0, \tag{10} \]
where $A(x^*_k) = \{j \mid \min[\bar{a}(\tau^k_{j-1}), \bar{a}(\tau^k_j)]^\top x^*_k + \min[\bar{a}_0(\tau^k_{j-1}), \bar{a}_0(\tau^k_j)] = 0\}$ is the active index set of R2-LSIP($T_k$) at $x^*_k$, and $\min[\bar{a}(\tau^k_{j-1}), \bar{a}(\tau^k_j)]$ denotes the vector in $\mathbb{R}^n$ whose $i$-th component is $\min[\bar{a}_i(\tau^k_{j-1}), \bar{a}_i(\tau^k_j)]$. Since $\bar{a}_i(y) = a_i(y) - \frac{\alpha_{i,k}}{2}\big(y - \frac{\tau^k_{j-1} + \tau^k_j}{2}\big)^2$ for $y \in [\tau^k_{j-1}, \tau^k_j]$, we have
\[ \min\big[\bar{a}(\tau^k_{j-1}), \bar{a}(\tau^k_j)\big] = \min\big[a(\tau^k_{j-1}), a(\tau^k_j)\big] - \tfrac{1}{8}(\tau^k_j - \tau^k_{j-1})^2\alpha^k, \]
where $\alpha^k = (\alpha^k_{1,j}, \alpha^k_{2,j}, \dots, \alpha^k_{n,j})^\top$ is the parameter vector on the subinterval $[\tau^k_{j-1}, \tau^k_j]$, all of whose components are uniformly bounded. On the other hand, since $a_i(y)$ is twice continuously differentiable, there exists $\bar{\tau}^k_j \in [\tau^k_{j-1}, \tau^k_j]$ such that $a_i(\tau^k_j) = a_i(\tau^k_{j-1}) + a'_i(\bar{\tau}^k_j)(\tau^k_j - \tau^k_{j-1})$, $0 \le i \le n$, which implies $\min[a(\tau^k_{j-1}), a(\tau^k_j)] = a(\tau^k_{j-1}) + (\tau^k_j - \tau^k_{j-1})\beta^k$, where $\beta^k \in \mathbb{R}^n$ is a constant vector (namely, $\beta^k_i = a'_i(\bar{\tau}^k_j)$ if $\min[a_i(\tau^k_{j-1}), a_i(\tau^k_j)] = a_i(\tau^k_j)$, and $\beta^k_i = 0$ otherwise). It follows that
\[ \min\big[\bar{a}(\tau^k_{j-1}), \bar{a}(\tau^k_j)\big] = a(\tau^k_{j-1}) + (\tau^k_j - \tau^k_{j-1})\beta^k - \tfrac{1}{8}(\tau^k_j - \tau^k_{j-1})^2\alpha^k. \tag{11} \]
Substituting (11) into (10) and into $A(x^*_k)$, we see that it suffices to prove that the lengths of the subintervals $[\tau^k_{j-1}, \tau^k_j]$, $j \in A(x^*_k)$, converge to zero as the iteration index $k$ tends to infinity. From the algorithm, we know that in each iteration at least one subinterval $[\tau^k_{j-1}, \tau^k_j]$ is divided into three equal subintervals, each of length $\frac{1}{3}(\tau^k_j - \tau^k_{j-1}) \le \frac{1}{3}(b - a)$. For each integer $p \in \mathbb{N}$, at least one interval of length at most $(\frac{1}{3})^p(b - a)$ is eventually generated. Furthermore, the subintervals $[\tau^k_{j-1}, \tau^k_j]$, $j \in A(x^*_k)$, are different for all $k \in \mathbb{N}$, and for each $p \in \mathbb{N}$ only finitely many subintervals of length greater than $(\frac{1}{3})^p(b - a)$ exist. This implies that the lengths of the subintervals $[\tau^k_{j-1}, \tau^k_j]$, $j \in A(x^*_k)$, $k \in \mathbb{N}$, must tend to zero.

We can conclude from Theorem 3.9 that if the tolerances $\epsilon$ and $\delta$ decrease to zero, then any accumulation point of the sequence generated by Algorithm 1 is a solution of the original linear semi-infinite program.

Corollary 3.10.
Let the assumptions in Theorem 3.9 be satisfied, and let the tolerances $(\epsilon_k, \delta_k)$ be chosen such that $(\epsilon_k, \delta_k) \searrow (0, 0)$. If $x^*_k$ is an $(\epsilon_k, \delta_k)$ KKT point for (LSIP) generated by Algorithm 1, then any accumulation point $x^*$ of the sequence $\{x^*_k\}$ is a solution of (LSIP).

It follows from Corollary 3.10 that the sequence $\{c^\top x^*_k\}$ decreases monotonically to the optimal value of (LSIP) as $k$ tends to infinity. In the implementation of our algorithm, the termination criterion is set as $|c^\top x^*_k - c^\top x^*_{k-1}| \le \epsilon$.

The convergence of Algorithm 1 also holds when the approximate problem R1-LSIP($T_k$) is used in the second step; the proof is similar to that of Theorem 3.9, as explained in the appendix. However, in that case we cannot guarantee that the sequence $\{c^\top x^*_k\}$ is monotonically decreasing.

The proposed algorithm can be applied to solve linear semi-infinite optimization problems with finitely many semi-infinite constraints and some extra linear constraints, i.e.,
\[ \min_{x \in X} c^\top x \quad \text{s.t.} \quad a_j(y)^\top x + a_{j,0}(y) \ge 0, \ \forall\, y \in Y, \ j = 1, 2, \dots, m, \]
where $X = \{x \in \mathbb{R}^n \mid Dx \ge d\}$ and $a_j(\cdot) : \mathbb{R} \to \mathbb{R}^n$. In such a case, we split each decision variable $x_i$ into two non-negative variables $u_i \ge 0$ and $v_i \ge 0$ with $x_i = u_i - v_i$, and then substitute $x_i$ into the above problem. The problem is thereby reformulated as a linear semi-infinite programming problem with non-negative decision variables, to which Algorithm 1 can be applied. Such a technique is applied in the numerical experiments; see the sketch below.

In the case that $X = [X_l, X_u]$ is a box in $\mathbb{R}^n$, we can instead use the variable transformation $x = z + X_l$ with $z \ge 0$. The advantage of this reformulation is that the dimension of the new variable is the same as that of the original decision variable.
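A small sketch of both reformulations, with the constraint data stored as $Gx + g_0 \ge 0$ (the names and shapes are assumptions of this sketch):

```python
import numpy as np

def split_free_variables(c, G, g0):
    """x free -> x = u - v with (u, v) >= 0: the dimension doubles."""
    return np.concatenate([c, -c]), np.hstack([G, -G]), g0

def shift_box(c, G, g0, X_l):
    """x in [X_l, X_u] -> x = z + X_l with z >= 0: dimension unchanged.
    The objective value shifts by the constant c^T X_l."""
    return c, G, g0 + G @ X_l
```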
4. Numerical experiments
We present numerical experiments on a collection of optimization problems selected from the literature. The algorithm is implemented in MATLAB, and the linear subproblems are solved by linprog from the Optimization Toolbox. The parameters $\alpha_i$ in the second approach are obtained directly when a closed-form bound exists; otherwise, we use the MATLAB toolbox INTLAB [23] to compute the required bounds.
Problem 1.
\[ \min \ \sum_{i=1}^n \frac{x_i}{i} \quad \text{s.t.} \quad \sum_{i=1}^n y^{i-1}x_i \ge \tan(y), \quad \forall\, y \in [0, 1]. \]
This problem is taken from [4] and was also tested in [8] for $n = 8$. The case $n = 8$ is hard to solve and is thus a good test of the performance of our algorithm.
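Under our reading of Problem 1, its data can be set up for the sketches above as follows (the helper r1_lsip is the sketch from Section 3.1):

```python
import numpy as np

n = 8
c = np.array([1.0 / i for i in range(1, n + 1)])
coeffs = [lambda y, i=i: y ** (i - 1) for i in range(1, n + 1)]
a0 = lambda y: -np.tan(y)    # constraint written as a(y)^T x + a_0(y) >= 0
# e.g. res = r1_lsip(c, coeffs, a0, list(np.linspace(0.0, 1.0, 65)))
```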
Problem 2. This problem has the same formulation as Problem 1 with $n = 9$; it was also tested in [8].

Problem 3.
\[ \min \ \sum_{i=1}^n \frac{x_i}{i} \quad \text{s.t.} \quad \sum_{i=1}^n y^{i-1}x_i \ge \frac{1}{2 - y}, \quad \forall\, y \in [0, 1]. \]
This problem is taken from [19] and was also tested in [8].
Problem 4.
\[ \min \ \sum_{i=1}^n \frac{x_i}{i} \quad \text{s.t.} \quad \sum_{i=1}^n y^{i-1}x_i \ge -\sum_{i=0} y^i, \quad \forall\, y \in [0, 1]. \]

Problem 5.
\[ \min \ \sum_{i=1}^n \frac{x_i}{i} \quad \text{s.t.} \quad \sum_{i=1}^n y^{i-1}x_i \ge \frac{1}{1 + y^2}, \quad \forall\, y \in [0, 1]. \]
Problems 4 and 5 are taken from [20] and were also tested in [8].

The following problems, as noted in [8], arise in the design of finite impulse response (FIR) filters and are more computationally demanding than the previous ones (see, e.g., [7, 8]).
Problem 6.
\[ \min \ -\sum_{i=1}^n r_{i-1}x_i \quad \text{s.t.} \quad \sum_{i=1}^n \cos((2i - 1)\pi y)\,x_i \ge -\tfrac{1}{2}, \quad \forall\, y \in [0, 1], \]
where $r_i = \rho^{\,i}$ with a fixed $\rho \in (0, 1)$.

Problem 7.
This problem is formulated as Problem 6, where $r_i = 2\rho\cos(\theta)\,r_{i-1} - \rho^2 r_{i-2}$ with $r_0 = 1$ and $r_1 = 2\rho\cos(\theta)/(1 + \rho^2)$, for fixed $\rho \in (0, 1)$ and $\theta \in (0, \pi)$.

Problem 8.
This problem is also formulated as Problem 6, where $r_i = \sin(2\pi f_s i)/(2\pi f_s i)$ with a fixed sampling frequency $f_s$.

Table 1 summarizes the numerical results. For each problem, we report the CPU time, the objective value, the number of iterations, and the feasibility violation of the solution $x^*$ obtained by the algorithm, defined as $\min_{y \in \bar{Y}} g(x^*, y)$, where $\bar{Y} = a : h : b$ is a fine uniform grid on $[a, b]$. We also list the numerical results obtained by the MATLAB solver fseminf as a reference. We can see that the algorithm proposed in this paper generates feasible solutions (nonnegative violations) for all the problems tested, which coincides with the theoretical results. Furthermore, Algorithm 1 works well on the computationally demanding Problems 6-8. The solver fseminf is faster than our method; however, feasibility is not guaranteed for that kind of method, as its negative violations show.
5. Conclusion
A new numerical method for solving linear semi-infinite programming problems is proposed which guarantees that each iterate is feasible for the original problem. The approach is based on a two-stage restriction of the original semi-infinite constraint. The first stage allows us to treat the semi-infinite constraint independently of the decision variables on the subsets of the index set. In the second stage, the lower bounds for the optimal values of the optimization problems associated with the coefficient functions are estimated using two different approaches. The approximation error goes to zero as the size of the subdivisions tends to zero. The approximate problems with finitely many linear constraints are constructed such that their feasible regions are included in the feasible region of the original problem, and the iterates converge to a solution of (LSIP) under the adaptive refinement of the subdivision.
Table 1. Summary of numerical results for the proposed algorithm.

Algorithm     CPU Time (sec)   Objective Value   No. of Iterations   Violation

Problem 1.
Approach 1        1.8382           0.6174              169           2.3558e-04
Approach 2        1.8910           0.6174              172           1.5835e-04
fseminf           0.2109           0.6163               33          -1.2710e-04

Problem 2.
Approach 1        5.2691           0.6163              273           4.1441e-04
Approach 2        4.0928           0.6166              266           1.8372e-04
fseminf           0.3188           0.6157               46          -7.6194e-04

Problem 3.
Approach 1        0.1646           0.6988               12           2.7969e-03
Approach 2        0.1538           0.6988               13           2.8014e-03
fseminf           0.2387           0.6932               35          -5.8802e-07

Problem 4.
Approach 1        4.1606          -1.7841              354           1.9689e-05
Approach 2        4.1928          -1.7841              356           1.9646e-05
fseminf           0.4794          -1.7869               70          -3.4649e-09

Problem 5.
Approach 1        4.2124           0.7861              300           1.9829e-05
Approach 2        4.7892           0.7861              302           1.9243e-05
fseminf           0.3642           0.7855               32          -8.5507e-07

Problem 6.
Approach 1        1.7290          -0.4832              137           5.0697e-06
Approach 2        1.5302          -0.4832              132           5.0914e-06
fseminf           1.1476          -0.4754               86          -1.2219e-04

Problem 7.
Approach 1        2.5183          -0.4889              170           2.8510e-04
Approach 2        3.2521          -0.4890              219           2.8861e-04
fseminf           1.0480          -0.4883               86          -1.5211e-03

Problem 8.
Approach 1        4.4262          -0.4972              252           4.5808e-05
Approach 2        4.0216          -0.4972              252           5.0055e-05
fseminf           0.4324          -0.4973               45          -4.3322e-07

* Approach 1 represents Algorithm 1 with R1-LSIP, and Approach 2 represents Algorithm 1 with R2-LSIP.
References

[1] S. Christensen, A method for pricing American options using semi-infinite linear programming, Mathematical Finance, 24 (2014), pp. 156-172.
[2] S. Daum and R. Werner, A novel feasible discretization method for linear semi-infinite programming applied to basket option pricing, Optimization, 60 (2011), pp. 1379-1398.
[3] S. Özöğür-Akyüz and G. W. Weber, Infinite kernel learning via infinite and semi-infinite programming, Optimisation Methods & Software, 25 (2010), pp. 937-970.
[4] M. A. Goberna and M. A. López, Linear Semi-Infinite Optimization, Wiley, New York, 1998.
[5] M. A. Goberna and M. A. López, Linear semi-infinite programming theory: an updated survey, European Journal of Operational Research, 143 (2002), pp. 390-405.
[6] R. Hettich and K. O. Kortanek, Semi-infinite programming: theory, methods, and applications, SIAM Review, 35 (1993), pp. 380-429.
[7] R. Reemtsen and S. Görner, Numerical methods for semi-infinite programming: a survey, in Semi-Infinite Programming, R. Reemtsen and J.-J. Rückmann, eds., Kluwer, Boston, 1998, pp. 195-275.
[8] B. Betrò, An accelerated central cutting plane algorithm for linear semi-infinite programming, Mathematical Programming, 101 (2004), pp. 479-495.
[9] S. Å. Gustafson, On the computational solution of a class of generalized moment problems, SIAM Journal on Numerical Analysis, 7 (1970), pp. 343-357.
[10] T. León, S. Sanmatias and E. Vercher, On the numerical treatment of linearly constrained semi-infinite optimization problems, European Journal of Operational Research, 121 (2000), pp. 78-91.
[11] E. J. Anderson and A. S. Lewis, An extension of the simplex algorithm for semi-infinite linear programming, Mathematical Programming, 44 (1989), pp. 247-269.
[12] E. J. Anderson and M. A. Goberna, Simplex-like trajectories on quasi-polyhedral sets, Mathematics of Operations Research, 26 (2001), pp. 147-162.
[13] C. A. Floudas and O. Stein, The adaptive convexification algorithm: a feasible point method for semi-infinite programming, SIAM Journal on Optimization, 18 (2007), pp. 1187-1208.
[14] S. Wang and Y. Yuan, Feasible method for semi-infinite programs, SIAM Journal on Optimization, 25 (2015), pp. 2537-2560.
[15] A. Mitsos, Global optimization of semi-infinite programs via restriction of the right-hand side, Optimization, 60 (2011), pp. 1291-1308.
[16] G. Alefeld and G. Mayer, Interval analysis: theory and applications, Journal of Computational and Applied Mathematics, 121 (2000), pp. 421-464.
[17] M. A. Goberna, Post-optimal analysis of linear semi-infinite programs, in Optimization and Optimal Control, Springer, New York, 2010, pp. 23-53.
[18] M. A. Goberna, Linear semi-infinite optimization: recent advances, in Continuous Optimization, Springer, US, 2005, pp. 3-22.
[19] K. Glashoff and S. Å. Gustafson, Linear Optimization and Approximation, Springer-Verlag, Berlin, 1983.
[20] T. León and E. Vercher, A purification algorithm for semi-infinite programming, European Journal of Operational Research, 57 (1992), pp. 412-420.
[21] R. Moore, Methods and Applications of Interval Analysis, SIAM Studies in Applied Mathematics 2, Philadelphia, 1979.
[22] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
[23] S. M. Rump, INTLAB - INTerval LABoratory, Institute for Reliable Computing, Hamburg University of Technology.
6. Appendices

Proof of Lemma 3.2
Since the Slater condition holds, there exists a point $\bar{x} \in \mathbb{R}^n_+$ such that
\[ a(y)^\top\bar{x} + a_0(y) > 0, \quad \forall\, y \in Y. \]
It has been shown in [5] that the boundary of $F$ is $\partial F = \{x \in F \mid \min_{y \in Y}\{a(y)^\top x + a_0(y)\} = 0\}$. It follows that $F = F^o \cup \partial F$. The compactness of the index set $Y$ implies that the function $g(x) = \min_{y \in Y}\{a(y)^\top x + a_0(y)\}$ is continuous; thus $F$ is closed, and it suffices to prove that $\partial F \subseteq cl(F^o)$. For any $\tilde{x} \in \partial F$, we have $a(y)^\top\tilde{x} + a_0(y) = 0$ for all $y \in A(\tilde{x})$, where $A(\tilde{x}) = \{y \in Y \mid a(y)^\top\tilde{x} + a_0(y) = 0\}$. Then
\[ a(y)^\top(\bar{x} - \tilde{x}) > 0, \quad \forall\, y \in A(\tilde{x}). \]
This indicates that for any $\tau > 0$, we have
\[ a(y)^\top(\tilde{x} + \tau(\bar{x} - \tilde{x})) + a_0(y) > 0, \quad \forall\, y \in A(\tilde{x}). \]
For a point $y \in Y$ with $y \notin A(\tilde{x})$, there holds $a(y)^\top\tilde{x} + a_0(y) > 0$; therefore $a(y)^\top(\tilde{x} + \tau(\bar{x} - \tilde{x})) + a_0(y) > 0$ for $\tau$ small enough. Since $Y$ is compact, we can choose a uniform $\tau$ such that $a(y)^\top(\tilde{x} + \tau(\bar{x} - \tilde{x})) + a_0(y) > 0$ for all $y \in Y$ and all $\tau$ small enough. It follows that we can choose a sequence $\tau_k > 0$ with $\lim_{k \to \infty}\tau_k = 0$ such that
\[ a(y)^\top(\tilde{x} + \tau_k(\bar{x} - \tilde{x})) + a_0(y) > 0, \quad \forall\, y \in Y, \ k \in \mathbb{N}. \]
Hence $x_k = \tilde{x} + \tau_k(\bar{x} - \tilde{x}) \in F^o$ and $\lim_{k \to \infty} x_k = \tilde{x}$, which implies that $\tilde{x} \in cl(F^o)$. This completes our proof.

Proof of Lemma 3.3

Since the Slater condition holds, there exists a point $\bar{x} \in \mathbb{R}^n_+$ such that $a(y)^\top\bar{x} + a_0(y) > 0$ for all $y \in Y$. Let $T = \{\tau_k \mid k = 0, 1, \dots, N\}$. From (4), we know that for each $Y_k = [\tau_{k-1}, \tau_k]$, $k = 1, 2, \dots, N$, there holds
\[ \min_{y \in Y_k} a_i(y) - A^l_{i,k} \le \gamma_i|Y_k|^p \le \gamma_i|T|^p, \quad i = 0, 1, \dots, n, \ k = 1, 2, \dots, N, \]
with $p \ge 1$. By direct computation, we have
\[ \sum_{i=1}^n \Big[\min_{y \in Y_k} a_i(y)\Big]\bar{x}_i + \min_{y \in Y_k} a_0(y) - \Big[\sum_{i=1}^n A^l_{i,k}\bar{x}_i + A^l_{0,k}\Big] \le \Big[\sum_{i=1}^n \gamma_i\bar{x}_i + \gamma_0\Big]|Y_k|^p. \]
The Lipschitz continuity of $a_i(y)$, $i = 0, 1, \dots, n$, implies that
\[ \min_{y \in Y_k}\Big[\sum_{i=1}^n a_i(y)\bar{x}_i + a_0(y)\Big] - \Big[\sum_{i=1}^n \Big[\min_{y \in Y_k} a_i(y)\Big]\bar{x}_i + \min_{y \in Y_k} a_0(y)\Big] \le \Big[\sum_{i=1}^n L_i\bar{x}_i + L_0\Big]|Y_k|. \]
It follows from the last two inequalities that
\[ \sum_{i=1}^n A^l_{i,k}\bar{x}_i + A^l_{0,k} \ge \min_{y \in Y_k}\Big[\sum_{i=1}^n a_i(y)\bar{x}_i + a_0(y)\Big] - \Big\{\Big[\sum_{i=1}^n \gamma_i\bar{x}_i + \gamma_0\Big]|Y_k|^p + \Big[\sum_{i=1}^n L_i\bar{x}_i + L_0\Big]|Y_k|\Big\}, \]
which implies $\sum_{i=1}^n A^l_{i,k}\bar{x}_i + A^l_{0,k} \ge 0$, $k = 1, 2, \dots, N$, for $|T|$ small enough. Hence $\bar{x}$ is a feasible point of the approximate region $F(T)$. This completes our proof.

Proof of Theorem 3.4
By the construction of the approximate regions, we know that $F(T_k)$ is included in the original feasible set, i.e., $F(T_k) \subseteq F$ for all $k \in \mathbb{N}$; hence $\{x^*_k\} \subseteq F$. Let $\bar{x}$ be any Slater point. We conclude from Lemma 3.3 that $\bar{x}$ is contained in $F(T_k)$ for $k$ large enough. Thus $c^\top x^*_k \le c^\top\bar{x}$, which indicates that $x^*_k \in L(\bar{x})$ for sufficiently large $k$. Since the level set $L(\bar{x})$ is compact, there exists at least one accumulation point $x^*$ of the sequence $\{x^*_k\}$. Assume without loss of generality that the sequence $\{x^*_k\}$ itself converges to $x^*$, i.e., $\lim_{k \to \infty}x^*_k = x^*$. It suffices to prove that $x^*$ is an optimal solution of (LSIP); it is obvious that $x^*$ is feasible for (LSIP).

Let $x_{opt}$ be an optimal solution of (LSIP). If $x_{opt} \in F^o$, then $x_{opt} \in F(T_k)$ for all $k$ large enough. This indicates that $f(x^*_k) \le f(x_{opt})$ for $k$ large enough and thus
\[ f(x^*) = \lim_{k \to \infty} f(x^*_k) \le f(x_{opt}), \]
where $f(x) = c^\top x$. If $x_{opt}$ lies on the boundary of the feasible set $F$, there exists a sequence of Slater points $\{\bar{x}_j \mid \bar{x}_j \in F^o\}$ such that $\lim_{j \to \infty}\bar{x}_j = x_{opt}$. For each $\bar{x}_j \in F^o$ there exists at least one index $k = k(j)$ such that $\bar{x}_j \in F(T_{k(j)})$, which implies $f(x^*_{k(j)}) \le f(\bar{x}_j)$ for $j \in \mathbb{N}$. Since $\{x^*_k\}$ converges to $x^*$ and $\{x^*_{k(j)}\}$ is a subsequence of $\{x^*_k\}$, the sequence $\{x^*_{k(j)}\}$ is convergent and $\lim_{j \to \infty} f(x^*_{k(j)}) = f(x^*)$. By the continuity of $f$, we have
\[ f(x^*) = \lim_{j \to \infty} f(x^*_{k(j)}) \le \lim_{j \to \infty} f(\bar{x}_j) = f(x_{opt}). \]
To sum up, we have $x^* \in F$ and $f(x^*) \le f(x_{opt})$. This completes our proof.

Proof of Lemma 3.5
Let $\bar{x} \in F$ be a Slater point; then we have
\[ a(y)^\top\bar{x} + a_0(y) > 0, \quad \forall\, y \in Y_k, \ k = 1, 2, \dots, N. \]
Since $a_i(\cdot)$, $i = 0, 1, \dots, n$, are twice continuously differentiable, they are Lipschitz continuous on the compact set $Y$, i.e., there exists a constant $L$ such that $|a_i(y) - a_i(z)| \le L|y - z|$ for all $y, z \in Y$. Let $\bar{g}_k(x) = \sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)\}x_i + \min\{\bar{a}_0(\tau_{k-1}), \bar{a}_0(\tau_k)\}$. Then we have
\[
a(y)^\top\bar{x} + a_0(y) - \bar{g}_k(\bar{x}) = \sum_{i=1}^n a_i(y)\bar{x}_i + a_0(y) - \Big[\sum_{i=1}^n \min\{\bar{a}_i(\tau_{k-1}), \bar{a}_i(\tau_k)\}\bar{x}_i + \min\{\bar{a}_0(\tau_{k-1}), \bar{a}_0(\tau_k)\}\Big] \le \Big(L\sum_{i=1}^n (\bar{x}_i + 1)\Big)|Y_k|
\]
for all $y \in Y_k$, $k = 1, 2, \dots, N$. It follows that $\bar{g}_k(\bar{x}) \ge 0$ when $|T|$ is sufficiently small, which implies $\bar{x} \in \bar{F}(T)$. This completes our proof.

Convergence of Algorithm 1 for R1-LSIP
Since $x^*_k$ is a solution of R1-LSIP($T_k$) for a consistent subdivision $T_k = \{\tau^k_j \mid j = 0, 1, \dots, N_k\}$ in the $k$-th iteration, it must satisfy the KKT condition
\[ c - \sum_{j \in A(x^*_k)} \lambda^k_j A_{T_k}(:, j) = 0, \tag{12} \]
where $A(x^*_k) = \{j \mid A_{T_k}(:, j)^\top x^*_k + b_{T_k}(j) = 0\}$, and $A_{T_k}(i, j) = A^l_{i,j}$ and $b_{T_k}(j) = A^l_{0,j}$ are the corresponding lower bounds of $a_i(y)$ and $a_0(y)$ on $[\tau^k_{j-1}, \tau^k_j]$. By (4), we know that for any $\bar{\tau}^k_j \in [\tau^k_{j-1}, \tau^k_j]$ there holds
\[ |a_i(\bar{\tau}^k_j) - A_{T_k}(i, j)| \le \gamma^k_i|\tau^k_j - \tau^k_{j-1}|^p, \quad 0 \le i \le n, \ j \in A(x^*_k), \]
where $\gamma^k_i \ge 0$, $0 \le i \le n$, and $p \ge 1$. Hence there exist $0 \le \beta^k_i \le \gamma^k_i$, $i = 0, 1, 2, \dots, n$, such that $A_{T_k}(i, j) = a_i(\bar{\tau}^k_j) + \beta^k_i|\tau^k_j - \tau^k_{j-1}|^p$. Substituting this into (12) and into $A(x^*_k)$, we have
\[ c - \sum_{j \in A(x^*_k)} \lambda^k_j\big[a(\bar{\tau}^k_j) + (|\tau^k_j - \tau^k_{j-1}|^p)\beta^k\big] = 0, \]
\[ A(x^*_k) = \big\{j \mid \big[a(\bar{\tau}^k_j) + (|\tau^k_j - \tau^k_{j-1}|^p)\beta^k\big]^\top x^*_k + a_0(\bar{\tau}^k_j) + (|\tau^k_j - \tau^k_{j-1}|^p)\beta^k_0 = 0\big\}, \]
where $a(\bar{\tau}^k_j) = (a_1(\bar{\tau}^k_j), a_2(\bar{\tau}^k_j), \dots, a_n(\bar{\tau}^k_j))^\top$. It follows that $x^*_k$ is an $(\epsilon, \delta)$ KKT point of (LSIP) once the lengths of the subintervals $[\tau^k_{j-1}, \tau^k_j]$, $j \in A(x^*_k)$, are small enough; these lengths converge to zero as $k$ tends to infinity by the same argument as in the proof of Theorem 3.9.