Runtime Analysis of RLS and the (1+1) EA for the Chance-constrained Knapsack Problem with Correlated Uniform Weights
Yue Xie, Aneta Neumann, Frank Neumann, Andrew M. Sutton
Optimisation and Logistics, School of Computer Science, The University of Adelaide, Adelaide, Australia
Department of Computer Science, University of Minnesota Duluth, United States of America
February 12, 2021
Abstract
Addressing a complex real-world optimization problem is a challenging task. The chance-constrained knapsack problem with correlated uniform weights plays an important role in the case where dependent stochastic components are considered. We perform a runtime analysis of Randomized Local Search (RLS) and a basic evolutionary algorithm, the (1+1) EA, for the chance-constrained knapsack problem with correlated uniform weights. We prove bounds for both algorithms for producing a feasible solution. Furthermore, we investigate the behavior of the algorithms and carry out analyses in two settings: uniform profit values, and the setting in which every group shares an arbitrary profit profile. We provide insight into the structure of these problems and show how the weight correlations and the different types of profit profiles influence the runtime behavior of both algorithms in the chance-constrained setting.
1 Introduction

Evolutionary algorithms are bio-inspired randomized optimization techniques and have been shown to be very successful when applied to various stochastic combinatorial optimization problems [11, 26, 24]. A significant challenge for real-world applications is that one must often solve large-scale, complex, and uncertain optimization problems where constraint violations have extremely disruptive effects.

In recent years, evolutionary algorithms for solving dynamic and stochastic combinatorial optimization problems have been theoretically analyzed in a number of articles [10, 15, 25, 19]. The techniques used in runtime analysis have significantly increased the theoretical understanding of bio-inspired approaches [6, 2, 12, 21, 13]. When tackling new problems, such studies typically begin with basic algorithms such as Randomized Local Search (RLS) and the (1+1) EA, which we also investigate in this paper.

An important class of stochastic optimization problems is chance-constrained optimization problems [3, 23]. Chance-constrained programming has been carefully studied in the operations research community [8, 9, 4]. In this domain, chance constraints are used to model problems and relax them into equivalent nonlinear optimization problems which can then be solved by nonlinear programming solvers [14, 22, 29]. Despite its attention in operations research, chance-constrained optimization has gained comparatively little attention in the area of evolutionary computation [16].

The chance-constrained knapsack problem is a stochastic version of the classical knapsack problem where the weights of the items are stochastic variables. The goal is to maximize the total profit under the constraint that the knapsack capacity bound is violated with a probability of at most a pre-defined tolerance α. Recent papers [27, 28] study a chance-constrained knapsack problem where the weights of the items are stochastic variables that are independent of each other.
They introduce the use of suitable probabilistic tools such as Chebyshev's inequality and Chernoff bounds to estimate the probability of violating the constraint for a given solution, providing surrogate functions for the chance constraint, and present single- and multi-objective evolutionary algorithms for the problem.

The study of chance-constrained optimization problems by means of evolutionary algorithms is an important new research direction from both a theoretical and a practical perspective. Recently, Doerr et al. [5] analyzed the approximation behavior of greedy algorithms for chance-constrained submodular problems. Assimi et al. [1] conducted an empirical investigation of the performance of evolutionary algorithms solving the dynamic chance-constrained knapsack problem. From a theoretical perspective, Neumann and Neumann [18] worked out the first runtime analysis of evolutionary multi-objective algorithms for chance-constrained submodular functions and proved that the multi-objective evolutionary algorithms outperform greedy algorithms. Neumann and Sutton [20] conducted a runtime analysis of the chance-constrained knapsack problem, but only focused on the case of independent weights.

In this paper, we analyze the expected optimization time of RLS and the (1+1) EA on the chance-constrained knapsack problem with correlated uniform weights. This variant partitions the set of items into groups, and pairs of items within the same group have correlated weights. To the best of our knowledge, this is a new direction in the research area of chance-constrained optimization problems. We prove bounds on both the time to find a feasible solution and the time to obtain an optimal solution, i.e., a solution that has maximal profit and minimal probability of violating the chance constraint. In particular, we first prove that a feasible solution can be found by RLS in time bounded by O(n log n), and by the (1+1) EA in time bounded by O(n log n).
Then, we investigate the optimization time of these algorithms when the profit values are uniform, a setting that has been studied for deterministic constrained optimization problems [7]. However, the items in our case are divided into different groups, and the number of chosen items from each group has to be taken into account; the optimization time bound becomes O(n³) for RLS and O(n³ log n) for the (1+1) EA. After that, we consider the more general and complicated case in which profits may be arbitrary as long as each group has the same set of profit values. We show that an upper bound of O(n³) holds for RLS and O(n³(log n + log p_max)) holds for the (1+1) EA, where p_max denotes the maximal profit among all items.

This paper is structured as follows. We describe the problem, the surrogate function for the chance constraint, and the algorithms in Section 2. Section 3 presents the runtime results for the different algorithms to produce a feasible solution, and the expected optimization times for the different profit settings of the problem are presented in Section 4 and Section 5. Finally, we finish with some conclusions.

2 Problem definition and algorithms

The chance-constrained knapsack problem is a constrained combinatorial optimization problem which aims to maximize a profit function subject to the condition that the probability that the weight exceeds a given bound is no more than an acceptable threshold. In previous research, the weights of the items are stochastic and independent of each other. We investigate the chance-constrained knapsack problem in the context of uniform random correlated weights.

Formally, in the chance-constrained knapsack problem with correlated uniform weights, the input is given by a set of n items partitioned into K groups of m items each. We denote by e_ij the j-th item in group i. Each item has an associated stochastic weight.
The weights of items in different groups are independent, but the weights of items in the same group k are correlated with one another with a shared covariance c > 0, i.e., we have cov(e_kj, e_kl) = c for j ≠ l, and cov(e_kj, e_il) = 0 if k ≠ i. The stochastic non-negative weights of the items are modeled as n = K·m random variables {w_11, w_12, ..., w_1m, ..., w_Km}, where w_ij denotes the weight of the j-th item in group i. Item e_ij has expected weight E[w_ij] = a_ij = a, variance σ²_ij = d, and profit p_ij.

The chance-constrained knapsack problem with correlated uniform weights can be formulated as follows:

maximize p(x) = Σ_{i=1}^{K} Σ_{j=1}^{m} p_ij x_ij  (1)
subject to Pr(W(x) > B) ≤ α.  (2)

The objective of this problem is to select a set of items that maximizes the profit subject to the chance constraint, which requires that the solution violates the constraint bound B only with probability at most α.

A solution is characterized as a vector of binary decision variables x = (x_11, x_12, ..., x_1m, ..., x_Km) ∈ {0,1}^n. When x_ij = 1, the j-th item of the i-th group is selected. The weight of a solution x is the random variable

W(x) = Σ_{i=1}^{K} Σ_{j=1}^{m} w_ij x_ij,  (3)

with expectation

E[W(x)] = a Σ_{i=1}^{K} Σ_{j=1}^{m} x_ij,  (4)

and variance

Var[W(x)] = d Σ_{i=1}^{K} Σ_{j=1}^{m} x_ij + 2c Σ_{i=1}^{K} Σ_{1≤j<l≤m} x_ij x_il.  (5)

Definition 2.1. Among all solutions with exactly ℓ one-bits, we call a search point x with |x| = ℓ a balanced solution, denoted by ℓ_b, if it selects ⌊ℓ/K⌋ items from K − (ℓ − ⌊ℓ/K⌋·K) groups and ⌊ℓ/K⌋ + 1 items from the remaining ℓ − ⌊ℓ/K⌋·K groups. This solution has covariance

s^b_ℓ = c[(K − (ℓ − ⌊ℓ/K⌋·K))·⌊ℓ/K⌋(⌊ℓ/K⌋ − 1) + (ℓ − ⌊ℓ/K⌋·K)(⌊ℓ/K⌋ + 1)⌊ℓ/K⌋].

Solutions with exactly ℓ one-bits that are not balanced are called unbalanced solutions.
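To make the covariance bookkeeping concrete, the following Python sketch (with illustrative helper names of our own, not from the paper) computes the covariance term c·Σ_g r_g(r_g − 1) of a solution from its per-group item counts and checks it against the closed-form expressions for the balanced and most unbalanced solutions:

```python
def covariance_term(counts, c):
    """Covariance part of Var[W(x)]: 2*c*sum_g C(r_g, 2) = c*sum_g r_g*(r_g - 1)."""
    return c * sum(r * (r - 1) for r in counts)

def balanced_counts(l, K):
    """Per-group item counts of a balanced solution with l items over K groups."""
    q, rem = divmod(l, K)
    return [q] * (K - rem) + [q + 1] * rem

def most_unbalanced_counts(l, m):
    """floor(l/m) full groups of m items plus one group with the remainder."""
    full, rem = divmod(l, m)
    return [m] * full + ([rem] if rem else [])

def s_balanced(l, K, c):
    """Closed form s_l^b from Definition 2.1."""
    q, rem = divmod(l, K)
    return c * ((K - rem) * q * (q - 1) + rem * (q + 1) * q)

def s_most_unbalanced(l, m, c):
    """Closed form s_l^ub for the most unbalanced solution."""
    full, rem = divmod(l, m)
    return c * (full * m * (m - 1) + rem * (rem - 1))
```

For example, with K = 3 groups of m = 4 items and ℓ = 7 selected items, both the direct computation and the closed forms give covariance 10c for the balanced solution and 18c for the most unbalanced one.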
Among all unbalanced solutions, we call the following one the most unbalanced solution, denoted by ℓ_ub: it selects exactly m items from ⌊ℓ/m⌋ groups and ℓ − ⌊ℓ/m⌋·m items from one further group. Since m is the maximal number of items in each group, in the most unbalanced solution there are ⌊ℓ/m⌋ full groups and one other group containing the remaining items. This solution has covariance

s^{ub}_ℓ = c[⌊ℓ/m⌋·m(m − 1) + (ℓ − ⌊ℓ/m⌋·m)(ℓ − ⌊ℓ/m⌋·m − 1)].

We calculate the upper bound on the covariance of acceptable solutions according to Chebyshev's inequality for all solutions with ℓ one-bits. Denote by

s_ℓ = 2c Σ_{i=1}^{K} Σ_{1≤j<l≤m} x_ij x_il

the covariance of a solution with ℓ one-bits. Every solution x with |x| = ℓ that satisfies β(x) ≤ α for the surrogate function β defined in (10) below has covariance bounded by

s_ℓ ≤ (α/(1 − α))·(B − aℓ)² − dℓ.  (8)

Theorem 2.2 (One-sided Chebyshev's inequality). Let X be a random variable with expectation E[X] and variance Var[X]. Then, for any k ∈ R⁺,

Pr(X > E[X] + k) ≤ Var[X] / (Var[X] + k²).  (9)

For the chance-constrained knapsack problem with correlated uniform weights, we define the surrogate function β over decision vectors as

β(x) = Var[W(x)] / (Var[W(x)] + (B − E[W(x)])²).  (10)

It is clear by Theorem 2.2 that Pr(W(x) ≥ B) ≤ β(x) whenever E[W(x)] < B, and therefore every x such that β(x) ≤ α is also feasible.

We study the runtime of RLS and the (1+1) EA, defined in Algorithm 1 and Algorithm 2, for the optimization of the chance-constrained knapsack problem with dependent weights. RLS starts with a randomly initialized solution and iteratively improves it by applying a series of mutations. In each mutation step, it applies either a one- or a two-bit mutation with equal probability. Specifically, with probability 1/2, it selects a single index uniformly at random from {1, ..., n} and flips the corresponding bit in the current solution. Otherwise, it selects two distinct indexes uniformly at random to flip.
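The surrogate β(x) and the resulting feasibility check are easy to state in code. The sketch below (assuming the uniform model parameters a, d, c; the helper names are ours, not from the paper) evaluates β(x) from the per-group item counts of a solution and tests feasibility against the tolerance α:

```python
def surrogate_beta(counts, a, d, c, B):
    """One-sided Chebyshev surrogate for Pr(W(x) > B); requires E[W(x)] < B.

    counts[i] is the number of items selected from group i; every item has
    expected weight a and variance d, and item pairs within a group have
    covariance c.
    """
    l = sum(counts)                                          # number of selected items
    expected = a * l                                         # E[W(x)]
    variance = d * l + c * sum(r * (r - 1) for r in counts)  # Var[W(x)]
    slack = B - expected
    if slack <= 0:
        raise ValueError("surrogate only defined for E[W(x)] < B")
    return variance / (variance + slack ** 2)

def chebyshev_feasible(counts, a, d, c, B, alpha):
    """beta(x) <= alpha is sufficient for Pr(W(x) > B) <= alpha."""
    if a * sum(counts) >= B:
        return False
    return surrogate_beta(counts, a, d, c, B) <= alpha
```

Note that for a fixed number of selected items, a smaller covariance term yields a smaller β(x); this is the mechanism exploited by the balancing arguments in the following sections.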
The (1+1) EA also starts with a randomly initialized solution, but generates new candidate solutions by flipping each bit of the current solution with probability 1/n, where n is the length of the bit string. In the selection step, both algorithms accept the offspring if it is at least as good as the parent. We are interested in finding an optimal solution, i.e., a feasible solution with maximal profit. We define the optimization time of RLS and the (1+1) EA as the number of steps until such an optimal solution is constructed. Using the surrogate function obtained from the one-sided Chebyshev inequality, we employ the fitness function

f(x) := (p′(x), β′(x)),  (11)

where p′(x) = −1 if β′(x) > α and p′(x) = p(x) otherwise, and β′(x) = β(x) if E[W(x)] < B and β′(x) = 1 + (E[W(x)] − B) otherwise. We optimize f in lexicographic order, where the goal is to maximize p′(x) and minimize β′(x), i.e., we have

f(x) ⪰ f(y) ⟺ p′(x) > p′(y) ∨ (p′(x) = p′(y) ∧ β′(x) ≤ β′(y)).  (12)

Since selection is monotone, once a feasible solution is located, neither algorithm will subsequently accept an infeasible solution. Therefore, the process of finding an optimal solution can be separated into two parts: in the first part, the algorithm may need to find a feasible solution; in the second part, it must find the highest profit among all feasible solutions.

3 Finding feasible solutions

In this section, we analyze the expected time for RLS and the (1+1) EA to find feasible solutions.

Lemma 3.1. Starting with an arbitrary initial solution, the expected time until RLS has obtained a feasible solution is O(n log n).

Proof. Adding a new item to the selected set will increase both the total expected weight and the probability of violating the chance constraint.
Since all items have the same expected weight a, the total expected weight is proportional to the number of ones in the solution. The fitness function is defined in such a way that the total expected weight of a solution never increases as long as no feasible solution has been obtained. This implies that RLS never accepts mutations that increase the number of ones: it cannot accept single-bit flips that flip a zero to one, or 2-bit flips that flip two zero-bits to ones.

Therefore, for any solution x with |x| = ℓ, there are ℓ one-bits available to flip, and the probability to decrease the number of ones is at least ℓ/(2n). Hence, the expected waiting time until RLS constructs a feasible solution is bounded above by

Σ_{ℓ=1}^{n} 2n/ℓ = O(n log n).

Lemma 3.2. Starting with an arbitrary initial solution, the expected time until the (1+1) EA obtains a feasible solution is O(n log n).

Proof. According to the definition of the fitness function in Equation (11), before finding a solution with expected weight less than B, the (1+1) EA never accepts a solution that increases the number of one-bits. Therefore, before producing such a solution, the algorithm only accepts mutations that reduce the number of one-bits, and thus behaves identically to the (1+1) EA optimizing the classical OneMax problem. The expected time for the (1+1) EA to find a solution x with E[W(x)] < B is thus bounded by O(n log n), i.e., its expected running time on OneMax [17].

After finding a solution with expected weight less than B, the (1+1) EA always accepts a solution with smaller constraint-violation probability according to Chebyshev's inequality. We construct a potential function h : {0,1}^n → R as the sum of the variance and covariance of a solution,

h(x) = dℓ + 2c Σ_{i=1}^{K} Σ_{1≤j<l≤m} x_ij x_il.

4 Uniform profits

Instance 4.1. Given K groups, each group has m items. There are n = K·m items in total, and the capacity of the knapsack is bounded by B.
For 1 ≤ i ≤ K, 1 ≤ j ≤ m, let p_ij = 1, a_ij = a, σ²_ij = d, where d > 0 is a constant. The covariance of items within any group is c, i.e., we have cov(e_ij, e_kl) = c if i = k (and j ≠ l) and cov(e_ij, e_kl) = 0 otherwise.

Definition 4.2. Let r = max{|x| : x ∈ {0,1}^n with β(x) ≤ α} and partition the feasible search space into L_1, L_2, ..., L_r such that

L_i = {x ∈ {0,1}^n : |x| = i ∧ β(x) ≤ α}.  (20)

We further bi-partition each L_i into two sets S_iγ and S_iζ such that S_iγ ∪ S_iζ = L_i and S_iγ ∩ S_iζ = ∅ as follows. The set S_iγ ⊆ L_i contains all feasible solutions to which no extra item can be added without violating the chance constraint, and S_iζ ⊆ L_i is the set of all feasible solutions to which at least one extra item can be added to obtain a feasible solution with at least i + 1 ones.

Lemma 4.3. Starting with an arbitrary initial solution, the expected optimization time of RLS on the chance-constrained knapsack problem with correlated uniform weights is O(n³).

Proof. Due to Lemma 3.1, RLS finds a feasible solution in expected time O(n log n). Also, since all feasible solutions dominate any infeasible solution, the algorithm does not return to the infeasible region.

Let x ∈ L_ℓ. If x ∈ S_ℓζ, then there is at least one additional item that can be feasibly selected; this selection occurs with probability at least 1/(2n). Otherwise, only a 2-bit flip swapping a one and a zero is accepted, provided it reduces the covariance of the solution without changing the profit, until the algorithm produces a balanced solution on the same level. According to Definition 2.1, the balanced solution on each level has the smallest covariance. Let l_i(x), 1 ≤ i ≤ K, be the number of elements chosen by x from group i. Assume, without loss of generality, that the groups are sorted in increasing order with respect to l_i(x).
Furthermore, let

s(x) = Σ_{i=1}^{K−(ℓ−⌊ℓ/K⌋·K)} max{0, ⌊ℓ/K⌋ − l_i(x)} + Σ_{i=K−(ℓ−⌊ℓ/K⌋·K)+1}^{K} max{0, ⌊ℓ/K⌋ + 1 − l_i(x)}  (21)

be the number of items that belong to an arbitrary balanced solution but are not chosen by x, and let

t(x) = Σ_{i=K−(ℓ−⌊ℓ/K⌋·K)+1}^{K} max{0, l_i(x) − (⌊ℓ/K⌋ + 1)} + Σ_{i=1}^{K−(ℓ−⌊ℓ/K⌋·K)} max{0, l_i(x) − ⌊ℓ/K⌋}  (22)

be the number of items chosen by x that do not belong to a balanced solution. Note that s(x) equals t(x) for any feasible solution in L_ℓ. Let g = s(x) = t(x). As there are exactly ℓ one-bits in x, there are g² possible 2-bit flips that swap one of the t(x) surplus items against one of the s(x) missing items, each reducing the covariance of x. Hence, the probability of such a swap is at least g²/(2n²). Since g cannot increase and g ≤ ℓ, it suffices to sum up the expected waiting times, and the expected time until reaching g = 0 is

Σ_{g=1}^{ℓ} 2n²/g² = O(n²).

There are at most n levels L_ℓ, which implies that the expected time until an optimal solution has been achieved is

Σ_{ℓ=1}^{n} O(n²) = O(n³),

which completes the proof.

Lemma 4.4. Let x ∈ S_ℓγ. Then there exists some q ∈ {1, ..., n−1} such that q different accepted 2-bit flips of the (1+1) EA reduce the covariance of the solution. The expected one-step change of the (1+1) EA is at least X^(t)/(en²).

Proof. Let x ∈ S_ℓγ with |x| = ℓ, and let s_x denote the covariance of x. Then, according to inequality (8), s_x is bounded by (α/(1 − α))·(B − aℓ)² − dℓ.
Let x′ ∈ S_ℓζ with |x′| = ℓ be the balanced solution which takes ⌈ℓ/K⌉ elements from the first ℓ − ⌊ℓ/K⌋·K groups and ⌊ℓ/K⌋ elements from the last K − (ℓ − ⌊ℓ/K⌋·K) groups.

We assume that the K groups in solution x are sorted in increasing order with respect to the number of elements chosen by x. Let 2q denote the Hamming distance between x and x′, let I = max{x − x′, 0} denote the set difference of the elements chosen by x but not by x′, and let I′ = max{x′ − x, 0} be the set difference of the elements chosen by x′ but not by x. The number of elements in I and in I′ is the same and equal to q. A 2-bit flip flipping a bit i ∈ I from 1 to 0 and a bit j ∈ I′ from 0 to 1 reduces the covariance of the solution, which leads to a reduction of the probability of violating the constraint. As there are q elements in each of I and I′, they can be matched into q pairs, and performing all q such 2-bit flips transforms x into x′.

The reduction of covariance obtained by flipping a bit i ∈ I to zero is 2c(r_i − 1), where r_i is the number of selected items of the group that i belongs to. There are r_i − ⌈ℓ/K⌉ one-bits to be flipped in this group to attain balance, and r_i > ⌈ℓ/K⌉, so the total contribution of this group is

2c(r_i − 1)(r_i − ⌈ℓ/K⌉) ≥ c(r_i − 1)r_i − c(⌈ℓ/K⌉ − 1)⌈ℓ/K⌉.  (23)

Similarly, the increase of covariance when flipping a bit j ∈ I′ to one is 2c·r′_j, where r′_j is the number of items selected by x in the group that j belongs to. There are ⌊ℓ/K⌋ − r′_j zero-bits to flip in this group to attain balance, and the total contribution of this group is

2c·r′_j(⌊ℓ/K⌋ − r′_j) ≤ c(⌊ℓ/K⌋ − 1)⌊ℓ/K⌋ − c(r′_j − 1)r′_j.  (24)
Summing these contributions over all groups, the total reduction of covariance from x to x′ is bounded via (23) and (24) by

Σ_{i=1}^{q} 2c(r_i − 1) − Σ_{j=1}^{q} 2c·r′_j ≥ Σ_i [c(r_i − 1)r_i − c(⌈ℓ/K⌉ − 1)⌈ℓ/K⌉] − Σ_j [c(⌊ℓ/K⌋ − 1)⌊ℓ/K⌋ − c(r′_j − 1)r′_j] = s_x − s^b_ℓ,  (25)

where the sums on the right-hand side range over the groups with a surplus and with a deficit of items, respectively. Therefore, performing all q 2-bit flips transforms x into x′ and leads to a covariance decrease at least as large as s_x − s^b_ℓ, where s^b_ℓ denotes the covariance of the balanced solution with exactly ℓ one-bits.

For all t ∈ N, let x^(t) ∈ L_ℓ be a fixed, non-empty solution generated at time t by the (1+1) EA, and let X^(t) = s_{x^(t)} − s^b_ℓ. Then

X^(t) − X^(t+1) = s_{x^(t)} − s_{x^(t+1)}.  (26)

Let Y = {y_1, ..., y_q} with q ∈ {1, ..., n} be the set of q different search points on the same level as x that are generated from x by one of the q acceptable 2-bit flips. We have s_{y_i} ≤ s_x for all i ∈ {1, ..., q} and

Σ_{i=1}^{q} (s_x − s_{y_i}) ≥ s_x − s^b_ℓ.  (27)

Since each y_i is generated from x by one specific 2-bit flip, Y is reached by the (1+1) EA with probability

Pr[x^(t+1) ∈ Y | x^(t) = x] = q·(1/n²)·(1 − 1/n)^{n−2} ≥ q/(en²).  (28)

Furthermore,

E[X^(t) − X^(t+1) | x^(t) = x, x^(t+1) ∈ Y] ≥ (s_x − s^b_ℓ)/q = X^(t)/q.  (29)

The algorithm cannot accept an offspring on the same level that increases the covariance, that is, X^(t) − X^(t+1) is non-negative. Thus, we have by (28) and (29) that

E[X^(t) − X^(t+1) | x^(t) = x] ≥ X^(t)/(en²).  (30)

Lemma 4.5. The expected time for the (1+1) EA to transform a solution in S_iγ into a solution in S_iζ ∪ L_j with j > i is bounded by O(n² log n).

Proof. According to Lemma 4.4, the drift on X^(t) is at least X^(t)/(en²) for the (1+1) EA. Since the algorithm starts with X^(t) ≤ s_ℓ = O(n²) and the minimum value of X^(t) before reaching X^(t) = 0 is at least 1, by multiplicative drift analysis the expected time to reach a solution in S_iζ is at most O(n² log n).
Then, if i < r, it is possible for the (1+1) EA to generate a feasible solution in L_{i+1} with probability at least 1/(en). The total expected time of the (1+1) EA until a solution in S_iζ ∪ L_j is generated is thus bounded by O(n² log n).

Theorem 4.6. The expected time until the (1+1) EA working on the fitness function (11) constructs an optimal solution to Instance 4.1 is bounded by O(n³ log n).

Proof. By Lemmas 3.2 and 4.5, for all i < r it is sufficient to investigate the search process after a feasible solution x ∈ S_iζ has been found; from then on, the algorithm can only accept an offspring with a larger number of one-bits. The (1+1) EA can generate a feasible solution in L_{i+1} by mutating exactly one zero-bit to one, which occurs with probability at least 1/(en). Therefore, it takes O(n² log n + en) steps in expectation to produce a feasible solution in level L_{i+1} when starting from a feasible solution in L_i. Altogether, the expected optimization time is bounded by

O(n log n) + Σ_{i=0}^{r−1} O(n² log n + en) = O(n³ log n),  (31)

where r < n.

5 Arbitrary profits mirrored by each group

We now turn our attention to the more complicated case where a single group has arbitrary profits, but this set of profits is the same for each of the K groups. This resembles the case of general linear functions, but the chosen profit function is shared by all groups.

Instance 5.1. Given K groups, each group has m items. There are n = K·m items in total, and the capacity of the knapsack is bounded by B. For 1 ≤ i ≤ K, 1 ≤ j ≤ m, let a_ij = a and σ²_ij = d be constants, let p_i1 ≥ p_i2 ≥ ... ≥ p_im for i ∈ {1, ..., K}, and let p_iℓ = p_kℓ for all i, k ∈ {1, ..., K}, 1 ≤ ℓ ≤ m. The covariance of items within a group is c, i.e., we have cov(e_ij, e_kl) = c if i = k (and j ≠ l) and cov(e_ij, e_kl) = 0 otherwise.

Theorem 5.2.
Starting with an arbitrary initial solution, the expected optimization time of RLS on the chance-constrained knapsack problem with correlated uniform weights is O(n³) on Instance 5.1.

Proof. By Lemma 3.1, RLS finds a feasible solution in expected time O(n log n). Also, since all feasible solutions dominate infeasible solutions, the algorithm does not switch back to the infeasible region. By the definition of Instance 5.1, items in a group have different profits but the same expected weight; hence there can be more than one balanced solution on each level, but, ignoring the order of the groups, only one balanced solution with maximum profit.

We order all items with respect to their profits as

p_11 = p_21 = ... = p_K1 ≥ p_12 = p_22 = ... = p_K2 ≥ ... ≥ p_1m = p_2m = ... = p_Km.

For a given solution x, we call the multi-set P(x) = {p_i | x_i = 1} the profit profile of x, i.e., the multi-set of profit values selected by x. We say that a profit profile P is contained in P(x) if P ⊆ P(x). Let x be a feasible solution whose profit profile contains P_j = {p_1, ..., p_j}, the j largest profits in the order above, but does not contain P_{j+1}. We claim that RLS does not accept a solution whose profit profile does not contain P_j. A 1-bit flip turning a one into a zero is clearly not accepted, as it reduces the profit, and a 1-bit flip turning a zero into a one cannot lead to a solution not containing P_j. A 2-bit flip is only accepted if it does not decrease the profit, and therefore also cannot create a solution whose profit profile does not contain P_j, as P_j contains the j largest profits of the given input.

We now analyze the time to transform a solution x containing profit profile P_j into a solution x′ containing profit profile P_{j+1}. Consider the profit p_{j+1} and a corresponding non-selected bit x_i in the group with the smallest number of selected elements. Flipping x_i adds the profit p_{j+1} to the profile P_j. Assume that bit x_i belongs to group r ∈ {1, ..., K}, i.e., x_i = x_rs.
If there is another item selected in group r (given by x_rs′ = 1) whose profit is less than p_{j+1}, then flipping both x_rs and x_rs′ leads to an accepted solution x′ with P_{j+1} ⊆ P(x′). This happens with probability at least 1/n². Assume now that there is no such item in group r. Then p_{j+1} is the largest non-selected profit in group r.

Let S be the set of groups with the largest number of selected items and let p̂ be the smallest selected profit from these groups. Assume that x_i is not in one of the groups in S. Then flipping x_i to 1 and setting the bit corresponding to p̂ to 0 is accepted and leads to a solution containing profit profile P_{j+1}. If x_i is in one of the groups in S, then there is another item selected in S with a profit smaller than p_{j+1}, or the solution x is already optimal.

Altogether, to produce a solution x′ containing P_{j+1} from a solution containing P_j, RLS needs at most O(n²) steps in expectation, and since there are at most n items in any solution, the expected optimization time of RLS is O(n³).

Let p_max = p_i1, i ∈ {1, ..., K}, be the maximal profit of the given input.

Theorem 5.3. Starting with an arbitrary initial solution, the expected optimization time of the (1+1) EA on the chance-constrained knapsack problem with correlated uniform weights is O(n³(log n + log p_max)) on Instance 5.1.

Proof. According to Lemma 3.2, the expected time to reach a feasible solution is O(n log n). Therefore it suffices to start the analysis with a feasible solution, after which the (1+1) EA never samples an infeasible solution during the remainder of the optimization process.

For our analysis, we consider the set L_j = {x : |x| = j ∧ β(x) ≤ α} of all feasible solutions containing exactly j one-bits. For each j, we show that the expected number of offspring created from an individual in L_j is O(n²(log n + log p_max)).
After this many iterations, either an optimal solution (contained in L_j) has been created, or a feasible solution y with p(y) > max_{x∈L_j} p(x) has been produced, which implies that the algorithm will not accept any solution in L_j afterwards.

We now show that the expected number of offspring created from solutions in L_j is O(n²(log n + log p_max)). Let x ∈ L_j be the current solution, and let x^{j,max} = arg max_{x∈L_j} p(x) be an arbitrary feasible solution in L_j with the largest possible profit. Denote the loss by

l(x) = Σ_{i=1}^{n} p_i x^{j,max}_i (1 − x_i),

that is, the sum of the profits chosen by x^{j,max} but not chosen by x. Denote the surplus by

s(x) = Σ_{i=1}^{n} p_i (1 − x^{j,max}_i) x_i,

that is, the sum of the profits chosen by x but not chosen by x^{j,max}. Define the total increase in profit from x to x^{j,max} as

g(x) = p(x^{j,max}) − p(x) = l(x) − s(x).

Let r = Σ_{i=1}^{n} x^{j,max}_i (1 − x_i) be the number of indexes set to 1 by x^{j,max} and to 0 by x. We give a set of r accepted 2-bit flips whose increases in profit sum to g(x).

We consider the K groups and w.l.o.g. assume that they are sorted in increasing order with respect to the number of elements chosen by x = (x_11, x_12, ..., x_1m, ..., x_Km). Let ℓ_i(x), 1 ≤ i ≤ K, be the number of elements chosen by x in group i. We have ℓ_1(x) ≤ ... ≤ ℓ_K(x). We consider the solution x̂^{j,max} of maximal profit in L_j for which ℓ_1(x̂^{j,max}) ≤ ... ≤ ℓ_K(x̂^{j,max}) and ℓ_K(x̂^{j,max}) ≤ ℓ_1(x̂^{j,max}) + 1 hold. This implies that x̂^{j,max} is a balanced solution having the smallest variance in L_j.
Note that such a solution exists, as we may reorder the groups because each group contains the same (multi-)set of profits. We have

Σ_{i=1}^{k} ℓ_i(x) ≤ Σ_{i=1}^{k} ℓ_i(x̂^{j,max}), 1 ≤ k ≤ K,  (32)

as both solutions contain j elements and the groups are sorted in increasing order of the number of elements chosen by x. This implies

Σ_{i=1}^{k} ℓ_i(x − x̂^{j,max}) ≤ Σ_{i=1}^{k} ℓ_i(x̂^{j,max} − x), 1 ≤ k ≤ K,  (33)

as the intersection of x̂^{j,max} and x contributes the same amount to each summand. Here x − y = max{x − y, 0} denotes the set difference of the elements chosen by x but not by y. Therefore, for each of the first k groups, the number of elements chosen by x̂^{j,max} but not by x is at least the number of elements chosen by x but not by x̂^{j,max}.

We then define our set of r 2-bit flips. The i-th 2-bit flip flips the i-th bit (in the order given by the bit string x = (x_11, x_12, ..., x_1m, ..., x_Km)) that is set to 1 in x̂^{j,max} − x together with the i-th bit that is set to 1 in x − x̂^{j,max}.

Consider operation i, and let p′_i be the profit introduced and p″_i the profit removed. By construction, we have Σ_{i=1}^{r} p′_i = l(x) and Σ_{i=1}^{r} p″_i = s(x), which implies that the total gain of the set of r 2-bit flips is g(x) = l(x) − s(x). It remains to show that each of these r 2-bit flips is accepted by the algorithm. Consider the i-th operation. We show that β(x) does not increase. Let r′ be the group that p′_i belongs to and r″ the group that p″_i belongs to. We have r′ ≤ r″ due to Equation (33), and therefore ℓ_{r′}(x) ≤ ℓ_{r″}(x). This implies that the 2-bit flip leads to a solution y with β(y) ≤ β(x).
We also have p′_i ≥ p″_i, as otherwise we could improve the profit of x̂^{j,max}, which contradicts the fact that x̂^{j,max} is a feasible solution of maximal profit in L_j.

Given the set of r accepted 2-bit flips, the expected increase in profit per iteration is at least

r/(en²) · g(x)/r = g(x)/(en²),

as the probability of the (1+1) EA to produce one of these 2-bit flips is at least r/(en²), and the average gain of such a flip is g(x)/r.

For any non-maximal solution x ∈ L_j, we have 1 ≤ g(x) ≤ j·p_max. Using multiplicative drift analysis, the expected number of offspring created from a solution x ∈ L_j before a feasible solution x* with p(x*) ≥ p(x^{j,max}) is obtained is therefore O(n²(log n + log p_max)). Moreover, x* is at least as good as the best solution in L_j and contains the top j elements with respect to the profits of the items; this implies that x* has the same construction as x^{j,max} and is a balanced solution with the smallest violation probability in L_j.

If x* is not optimal, then there exists a 1-bit flip adding an additional element and strictly improving the profit. There are at most n levels L_j, which implies that the expected time until an optimal solution has been achieved is O(n³(log n + log p_max)).

6 Conclusions

The chance-constrained knapsack problem with correlated uniform weights plays a key role in situations where dependent stochastic components are involved. In this paper, we carried out a theoretical analysis of the expected optimization time of RLS and the (1+1) EA on the chance-constrained knapsack problem with correlated uniform weights, where we are interested in minimizing the probability that a solution violates the constraint. We proved bounds for both algorithms for producing a feasible solution. We then analyzed two settings of the problem: one with uniform profits, and one in which all groups share the same profit profile.
Our proofs are designed to provide insight into the structure of these problems and to reveal new challenges in deriving runtime bounds in the chance-constrained setting with general types of stochastic variables.

Acknowledgements

This research has been supported by the SA Government through the PRIF RCP Industry Consortium.

References

[1] H. Assimi, O. Harper, Y. Xie, A. Neumann, and F. Neumann. Evolutionary bi-objective optimization for the dynamic chance-constrained knapsack problem based on tail bound objectives. In ECAI, volume 325 of Frontiers in Artificial Intelligence and Applications, pages 307–314. IOS Press, 2020.

[2] A. Auger and B. Doerr. Theory of Randomized Search Heuristics. World Scientific, 2011.

[3] A. Charnes and W. W. Cooper. Chance-constrained programming. Management Science, 6(1):73–79, 1959.

[4] J. Custodio, M. A. Lejeune, and A. Zavaleta. Note on "A chance-constrained programming framework to handle uncertainties in radiation therapy treatment planning". Eur. J. Oper. Res., 275(2):793–794, 2019.

[5] B. Doerr, C. Doerr, A. Neumann, F. Neumann, and A. M. Sutton. Optimization of chance-constrained submodular functions. In AAAI, pages 1460–1467. AAAI Press, 2020.

[6] S. Droste, T. Jansen, and I. Wegener. On the analysis of the (1+1) evolutionary algorithm. Theor. Comput. Sci., 276(1-2):51–81, 2002.

[7] T. Friedrich, T. Kötzing, J. A. G. Lagodzinski, F. Neumann, and M. Schirneck. Analysis of the (1+1) EA on subclasses of linear functions under uniform and linear constraints. Theor. Comput. Sci., 832:3–19, 2020.

[8] G. A. Hanasusanto, V. Roitch, D. Kuhn, and W. Wiesemann. A distributionally robust perspective on uncertainty quantification and chance constrained programming. Math. Program., 151(1):35–62, 2015.

[9] G. A. Hanasusanto, V. Roitch, D. Kuhn, and W. Wiesemann. Ambiguous joint chance constraints under mean and dispersion information. Oper. Res., 65(3):751–767, 2017.

[10] J. He, B. Mitavskiy, and Y. Zhou.
A theoretical assessment of solution quality in evolutionary algorithms for the knapsack problem. In Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2014, Beijing, China, July 6-11, 2014, pages 141–148. IEEE, 2014.

[11] S.-C. Horng, S.-S. Lin, and F.-Y. Yang. Evolutionary algorithm for stochastic job shop scheduling with random processing time. Expert Syst. Appl., 39(3):3603–3610, 2012.

[12] T. Jansen. Analyzing Evolutionary Algorithms - The Computer Science Perspective. Natural Computing Series. Springer, 2013.

[13] P. K. Lehre and C. Witt. General drift analysis with tail bounds. CoRR, abs/1307.2559, 2013.

[14] P. Li, H. Arellano-Garcia, and G. Wozny. Chance constrained programming approach to process optimization under uncertainty. Comput. Chem. Eng., 32(1-2):25–45, 2008.

[15] A. Lissovoi and C. Witt. A runtime analysis of parallel evolutionary algorithms in dynamic optimization. Algorithmica, 78(2):641–659, 2017.

[16] B. Liu, Q. Zhang, F. V. Fernández, and G. G. E. Gielen. An efficient evolutionary algorithm for chance-constrained bi-objective stochastic optimization. IEEE Trans. Evol. Comput., 17(6):786–796, 2013.

[17] H. Mühlenbein. How genetic algorithms really work: Mutation and hillclimbing. pages 15–26, 1992.

[18] A. Neumann and F. Neumann. Optimising monotone chance-constrained submodular functions using evolutionary multi-objective algorithms. In PPSN (1), volume 12269 of Lecture Notes in Computer Science, pages 404–417. Springer, 2020.

[19] F. Neumann, M. Pourhassan, and V. Roostapour. Analysis of Evolutionary Algorithms in Dynamic and Stochastic Environments, pages 323–357. Springer International Publishing, Cham, 2020.

[20] F. Neumann and A. M. Sutton. Runtime analysis of the (1 + 1) evolutionary algorithm for the chance-constrained knapsack problem. In FOGA, pages 147–153. ACM, 2019.

[21] F. Neumann and C. Witt. Bioinspired computation in combinatorial optimization: algorithms and their computational complexity.
In GECCO (Companion), pages 1035–1058. ACM, 2012.

[22] B. Odetayo, M. Kazemi, J. MacCormack, W. D. Rosehart, H. Zareipour, and A. R. Seifi. A chance constrained programming approach to the integrated planning of electric power generation, natural gas network and storage. IEEE Transactions on Power Systems, 33(6):6883–6893, 2018.

[23] C. A. Poojari and B. Varghese. Genetic algorithm based technique for solving chance constrained problems. Eur. J. Oper. Res., 185(3):1128–1154, 2008.

[24] P. Rakshit, A. Konar, and S. Das. Noisy evolutionary optimization algorithms - a comprehensive survey. Swarm and Evolutionary Computation, 33:18–45, 2017.

[25] V. Roostapour, A. Neumann, F. Neumann, and T. Friedrich. Pareto optimization for subset selection with dynamic cost constraints. In AAAI, pages 2354–2361. AAAI Press, 2019.

[26] J. Till, G. Sand, M. Urselmann, and S. Engell. A hybrid evolutionary algorithm for solving two-stage stochastic integer programs in chemical batch scheduling. Computers & Chemical Engineering, 31(5-6):630–647, 2007.

[27] Y. Xie, O. Harper, H. Assimi, A. Neumann, and F. Neumann. Evolutionary algorithms for the chance-constrained knapsack problem. In GECCO, pages 338–346. ACM, 2019.

[28] Y. Xie, A. Neumann, and F. Neumann. Specific single- and multi-objective evolutionary algorithms for the chance-constrained knapsack problem. In GECCO, pages 271–279. ACM, 2020.

[29] M. Zaghian, G. J. Lim, and A. Khabazian. A chance-constrained programming framework to handle uncertainties in radiation therapy treatment planning.