Specific Single- and Multi-Objective Evolutionary Algorithms for the Chance-Constrained Knapsack Problem
aa r X i v : . [ c s . N E ] A p r Specific Single- and Multi-ObjectiveEvolutionary Algorithms for theChance-Constrained Knapsack Problem
Yue Xie, Aneta Neumann and Frank NeumannOptimisation and Logistics, School of Computer Science,The University of Adelaide, Adelaide, AustraliaApril 9, 2020
Abstract
The chance-constrained knapsack problem is a variant of the classical knap-sack problem where each item has a weight distribution instead of a deterministicweight. The objective is to maximize the total profit of the selected items underthe condition that the weight of the selected items only exceeds the given weightbound with a small probability of α . In this paper, consider problem-specificsingle-objective and multi-objective approaches for the problem. We examinethe use of heavy-tail mutations and introduce a problem-specific crossover oper-ator to deal with the chance-constrained knapsack problem. Empirical results forsingle-objective evolutionary algorithms show the effectiveness of our operatorscompared to the use of classical operators. Moreover, we introduce a new effec-tive multi-objective model for the chance-constrained knapsack problem. We usethis model in combination with the problem-specific crossover operator in multi-objective evolutionary algorithms to solve the problem. Our experimental resultsshow that this leads to significant performance improvements when using the ap-proach in evolutionary multi-objective algorithms such as GSEMO and NSGA-II. Evolutionary Algorithms (EAs) are bio-inspired randomised optimisation techniquesand have been widely applied to many stochastic combinatorial optimisation problems[21, 36, 33, 31]. In this paper, we study the chance-constrained knapsack problem(CCKP), which is a stochastic version of the classical knapsack problem, where eachitem has a random weight based on a known distribution and independent of the weightsof other items. The goal of the chance-constrained knapsack problem is to select asubset of items with the maximum profit when satisfying the chance constraint. Thechance constraint is satisfied if the probability that the weight of the selected subset1iolates the knapsack capacity if at most α , where α is a given parameter. Chance con-straint optimisation problems have so far received little attention in the evolutionarycomputation literature [26] although they capture many relevant stochastic real-worldsettings. Doerr et al. [13] investigated submodular optimisation problems with chanceconstraints and analysed the approximation behaviour of greedy algorithms. Recently,Xie et al. [37] first applied the evolutionary algorithms in solving the CCKP. So far,the chance-constrained knapsack problem has not received much attention in the evo-lutionary computation literature, and the goal of this paper is to pursue this criticalresearch direction further.Evolutionary algorithms (EAs) have been applied to many combinatorial optimi-sation problems and proven to be very successful in solving complex optimisationproblems [21, 31, 33, 34, 7]. Mutation operators and crossover operators are the corefeatures of evolutionary algorithms that have been studied by many researchers in thelast decades [2, 3]. The operators are used to guide the algorithm towards a solution toa given problem, and they play different roles to improve the solutions produced by thealgorithm. The mutation operators are used to maintaining the diversity of the solutionspace, and the crossover operators combine the current chromosomes of solutions intonew solutions [28]. The operators can succeed in finding a good solution when dealingwith combinatorial optimisation problems.However, previous studies of the stochastic knapsack problems have not dealt withdiscussing the performance of evolutionary algorithms in solving the problems. Thereare several versions of the stochastic knapsack problems that have been studied in theliterature [25, 27]. These studies aim to maximize the expected profit resulting fromthe assignment of items to the knapsack. Some researchers consider approximation al-gorithms for stochastic knapsack problems [4, 10, 15]. Recently, for CCKP, Goyal andRavi [19] presented a fully polynomial-time approximation scheme (FPTAS) for thecase when item sizes are normally distributed while satisfying the chance-constrainedstrictly. Klopfenstein and Nace [24] designed a pseudo-polynomial time resolutionalgorithm for the chance-constrained knapsack problem and provided to obtain feasi-ble solutions. Han et al. [20] proved that the resulting robust knapsack problem withthe polyhedral uncertainty set could be solved by solving ordinary knapsack problemsrepeatedly, which makes it then possible to solve the problem in pseudo-polynomialtime. Assimi et al. [1] studied the dynamic chance-constrained knapsack problem andproposed another objective function to deal with the dynamic capacity of the knapsack.In this work, we consider the same chance-constrained knapsack problem that hasalready been examined by Xie et al. [37]. In their study, the authors have appliedChebyshev’s inequality and Chernoff bound to estimate the probability of constraint vi-olation. They have reformulated the chance-constrained knapsack problem as a multi-objective model concerning the total profit and the probability of chance-constrained.To improve the algorithms presented in this paper, we introduce a problem-specificcrossover operator and examine the use of heavy-tail mutation operator in dealing withthe chance-constrained knapsack problem. Moreover, we apply the operators in single-objective and multi-objective evolutionary algorithms to solve the CCKP. The problem-specific crossover operator is a combination of uniform crossover operator and greedymethod. A uniform crossover operator allows the offspring chromosomes to searchall possibilities of re-combining the different genes in parents [35, 32, 16, 6]. The2eavy-tailed mutation operators have been regarded in many sub-fields of evolutionarycomputation [38, 39], and proved to be effective in solving the combinatorial optimiza-tion problems [14, 17]. Besides, we introduce a new effective multi-objective modelfor the chance-constrained knapsack problem, which improves diversity in the solutionspace according to the problem.The remaining parts of the paper are organized as follows. In the next section,we describe the chance-constrained knapsack problem. In Section 3, we introduce theheavy-tail mutation operator and the new problem-specific crossover operator. Sec-tion 4 presents the single-objective evolutionary approach for the chance-constrainedknapsack problem and describes experimental results. In Section 5, we introduce anew multi-objective model for the chance-constrained knapsack problem and presentempirical results. Section 6 concludes the paper. Let N = { , ..., n } be a set of items with vectors p ∈ R n + and w ∈ R n + assigningpositive profits and weights to the items. In addition, a knapsack capacity C > isgiven. The classical knapsack problem can be defined as: max x ∈{ , } n { P i ∈ N p i x i | P i ∈ N w i x i ≤ C } . Hence, the goal is to find a selection of items that have maximumprofit among all sets of items that do not violate the capacity of knapsack. We considerthe search space is { , } n and a candidate solution x = { x , ..., x n } ∈ { , } n is abit-string of length n , where item i is chosen iff x i = 1 .In the chance-constrained knapsack problem, we assume that the weight vector w is not known with accuracy, w can take on values according to a given probabilitydistribution. In this paper, we assume that the weights of the items are independentof each other. In the chance-constrained knapsack problem, the goal is to maximizethe profit of the selection items under the condition that the probability of violatingthe knapsack constraint is less than a given threshold of α . Formally, the chance-constrained knapsack problem is given as: Maximize P ( x ) = X i ∈ N p i x i (1) P rob ( X i ∈ N w i x i ≥ C ) ≤ α (2) x ∈ { , } n (3)where α ∈ [0 , is a small value. We are looking for a solution x of maximum profitthat violates the capacity bound C with probability at most α . In this section, we introduce some surrogate functions presented in [37] which areconstructed by well-known deviation inequalities, namely Chebyshev’s inequality andChernoff bound, to tackle the chance constraints (2) by evolutionary algorithms.3hebyshev’s inequality has a high utility because it can be applied to any proba-bility distribution with known expectation and standard deviation of design variables.It also gives a tighter bound in comparison to the weaker tails such as Markov’s in-equality [12]. The standard Chebyshev’s inequality is two-side and provides tails forupper and lower bounds [5]. As we are only interested in the probability of violatingthe knapsack capacity C , we use a one-sided Chebyshev’s inequality known as Can-telli’s inequality [9]. For brevity, we refer to this one-sided Chebyshev’s inequality asChebyshev’s inequality in this paper. Theorem 1 (Chebyshev’s inequality) . Let X be a random variable with expectation µ X and standard deviation σ X . Then for any k ∈ R + , P rob ( X ≥ µ X + k ) ≤ σ X σ X + k . We assume that the weights of the items are all independent to each other andthat the weight of items have uniform distributions and take values in a real interval [ a i − δ, a i + δ ] , where a i is the expected weight of item i and δ is a parameter thatdetermines the uncertainty of the weights.We use the surrogate functions given in [37] to transform the constraint (2) byapplying Chebyshev’s inequality are as follow: P rob ( W ≥ C ) ≤ δ P ni =1 x i δ P ni =1 x i + 3( C − P ni =1 a i x i ) . (4)Compared to Chebyshev’s inequality, Chernoff bounds provides a sharper tail withexponential decay behaviour. In Chernoff bounds, the random variable is a summationof independent random variables, which take on values in [0 , [29]. There are severaltypes of Chernoff bounds. In this paper, we use the following one which proposed by[12] (Theorem 10.1). Theorem 2 (Chernoff bound) . Let X , ..., X n be independent random variables takingvalues in [0 , . Let X = P ni =1 X i . Let ǫ ≥ . Then P rob ( X ≥ (1 + ǫ ) E ( X )) ≤ (cid:18) e ǫ (1 + ǫ ) (1+ ǫ ) (cid:19) E ( X ) . (5)Only in the case that the weight of the items is independent to each other andhave an additive uniform distribution, we can apply the Chernoff bound to the chanceconstraint, then we have the surrogate function as follow: P rob ( W ≥ C ) ≤ e C − EW ( X ) δ P ni =1 xi (cid:16) δ P ni =1 x i + C − E W ( X ) δ P ni =1 x i (cid:17) δ P ni =1 xi + C − EW ( X ) δ P si =1 xi P ni =1 x i (6)Here W = P i ∈ I w i x i and E W ( X ) = P i ∈ I a i x i are the summary weight and sum-mary expected weight of selection items. For details on how to obtain this inequality,we refer to Theorem 3.3 in [37]. 4onsidering the surrogate functions of the chance constraint, we distinguish twotypes of infeasible solutions as we consider small values of α , i.e. α = 10 − . The firsttype of infeasible solutions have expected weight that exceeds the capacity of the knap-sack, the probability of violating the constraint is at least / due to our assumptionson the probability distributions. In the second type of infeasible solutions, the expectedweight is less than the capacity, but the probability of violating the capacity is largerthan the given α . In this section, we propose a mutation operator and a crossover operator to work inevolutionary algorithms when solving the chance-constrained knapsack problem. Interms of mutation, we propose to use heavy-tail mutations that have recently gainedsignificant attention, particularly in the area of theory of evolutionary computation [14,17].Compare to the standard mutation operator, the heavy-tail mutation operator canflip more than one bit in each step, and it has shown to be useful in some single-objective combinatorial optimization problems [17]. Therefore, we examine the use ofheavy-tail mutation operator in single-objective evolutionary algorithms to deal withthe chance-constrained knapsack problem. The proposed problem-specific crossoveroperator (PS crossover operator) combines uniform crossover and a greedy method,and generates an offspring to inherit the common ”genes” from the parents selectsmore effective uncommon ”genes”.
Doerr et al. [14] pointed out that when a multi-bit flip is necessary to leave a localoptimum, it needs much time to find the right bits to be flipped if using standard bitmutations. Then higher mutation probabilities may be justified. Neumann and Sut-ton [30] proved that even for the most straightforward cases of CCKP, it is possible tohave local optima in the search space that are difficult to escape when using standard bitmutations. The conclusion of [30] motivates our investigations on applying heavy-tailmutations for CCKP.The heavy-tail mutation operator overcomes the mentioned negative effect whenusing standard bit mutations, and is at the same time structurally close to the traditionalway of performing mutations. There is a general belief that a dynamic choice of themutation rate as done in heavy-tail mutation can be profitable. Theoretical studies showthat the performance of the (1 + 1)
EA using a heavy-tail mutation operator is betterthan the standard (1 + 1)
EA in solving jump functions [14].In the heavy-tail mutation operator, the mutation rate is chosen randomly in eachiteration according to a power-law distribution with (negative) exponent β > . Theheavy-tailed choice of the mutation rate ensures that with probability ⊖ ( k − β ) , andexactly k bits are flipped. The power low distribution is given as follows. Theorem 3 (Discrete power-law distribution: D βn/ ) . Let β > be a constant. If a lgorithm 1 The heavy-tail mutation operator x = { x , .., x n } ∈ { , } n ; Choose θ ∈ [1 , .., n/ randomly according to D βn/ ; for j = 1 to n do if rand ([0 , ≤ θ/n then y i ← − x i else y i ← x i end if end for return y = { y , .., y n } random variable X follows the distribution D βn/ , then P rob ( X = θ ) = (cid:16) C βn/ (cid:17) − θ − β (7) for all θ ∈ [1 , .., n/ , where the normalization constant is C βn/ := P n/ i =1 i − β . In this paper, we use the definition of the heavy-tail mutation operator proposedin [14] as follow: when the parent individual is a bit string x ∈ { , } n , the mutationoperator first chooses a random mutation rate θ/n with θ ∈ [1 , .., n/ chosen accord-ing to the power-law distribution D βn/ and then creates an offspring by flipping eachbit of string independently with probability θ/n . The working principle of this operatoris given in Algorithm 10. The proposed crossover operator is a combination of the uniform crossover and consid-ers the specificity of CCKP and the standard KP. The uniform crossover operator caneasily preserve all parent similarities when generation new offspring. Indeed, for manycombinatorial optimization problems, good solutions being close in the objective spaceare expected to be rather similar in the decision space [23, 22]. Therefore, the uniformcrossover can maintain the so-called good gene combinations which are constructedduring the search process.The problem-specific crossover operator, which we shall call PS crossover operator,adapts the benefit of the uniform crossover operator. For all genes that are different inthe two parents, we evaluate the quality of these genes, specifically to KP. We use the profit/weight ratio to determine the quality of genes. Then, genes (items) are sorted indescending order according to the quality, and we apply a greedy insertion heuristicto iterative inserts a candidate item that has the highest profit/weight ratio. In thisstage, we insert the first k items according to the ordering of the non-common genes,where k is a number that randomly chooses according to the Normal distribution k ∼ N (cid:0) m , m (cid:1) , m denotes the number of genes where the two parents differ.6 lgorithm 2 (1 + 1) EA Choose x ∈ { , } n uniformly at random. while stopping criterion not meet do y ← flip each bit of x independently with probability of p ; if f ( y ) ≥ f ( x ) then x ← y ; end if end while In this section, we first introduce the single-objective evolutionary algorithm that westudy in this paper. Then we examine the impact of the heavy-tail mutation operator inthe algorithm. Furthermore, we test the performance of a population-base evolution-ary algorithm using the problem-specific crossover operator for solving the chance-constrained knapsack problem.
The first single-objective evolutionary algorithm we consider is (1 + 1)
EA, which isthe most simple evolutionary algorithm. (1 + 1)
EA is also known as a baseline single-objective optimization algorithm to solve the chance-constrained knapsack problem inthe previous research [37].The (1 + 1)
EA, given in Algorithm 2, initializes a random solution x ∈ { , } n .In the main optimization loop, one offspring y is generated by flipping each bit of theparent with probability p . The offspring replaces the parent unless it has an inferiorfitness. In this paper, we define the fitness of a solution x same to the one in the studyof Xie et al. [37]: f ( X ) = ( u ( x ) , v ( x ) , P ( x )) (8)where u ( x ) = max { P i ∈ N a i x i − C, } , v ( x ) = max { P rob ( P i ∈ N w i x i ≥ C ) − α, } , P ( x ) = P i ∈ N p i x i . For this fitness function, u ( x ) and v ( x ) need to beminimized and P ( x ) maximized respectively, and it is optimized in the lexicographicorder. Usually, in the mutation step, the probability of flip each bit is /n , where n isthe length of a solution. In this paper, we take this as a standard mutation probabilityand compare it with the heavy-tail mutation operator.We then introduce a population-based single-objective evolutionary algorithm todeal with the chance-constrained knapsack problem instances. This kind of algorithmmaintains a population of binary solutions presented as a bit string. We set the popu-lation size to and examine the performance of this ( µ + 1) EA using the heavy-tailmutation operator and the problem-specific crossover operator separately and its com-bination. Algorithm 9 is the ( µ + 1) EA using the heavy-tail mutation operator and thePS crossover operator. 7 lgorithm 3 ( µ + 1) EA Randomly generate µ initial solutions as the initially population; while stopping criterion not meet do Choose x ∈ { , } n and x ∈ { , } n uniformly at random from the popula-tion X ; x = x . apply the PS crossover operator in x and x , generate an offspring y ; apply the heavy-tail mutation operator to y ; if y is better than the worse solution in X then replace the worst solution with y ; end if end while The benchmarks used in this paper are the same as in [37]. We consider two typesof instances:
Uncorrelated and
Bounded Strong Correlated , for each instance, theweights of items have uniform additive distribution. The values of probability α are [0 . , . , . , and the uncertainty of the weights are δ = [25 , . We report themean profit and the standard deviation of 30 independent runs for all algorithms. Eachrun is using ∗ fitness evaluations. A Kruskal-Wallis test [8] with confi-dence interval integrated with the posterior Bonferroni test is used to compare multiplesolutions.In the following subsection, we consider all combinations of algorithms and op-erators. We set the β in power-law distribution equal to . , which is the recom-mended value of β from Doerr et al. [14]. For each instance, we investigate differ-ent settings together with the difference between the uncertainty of weights and thechance-constrained probability. We report the results obtained by all algorithms withChebyehsv inequality and Chernoff bound separately. To show the differences between the evolutionary algorithms using the standard mu-tation operator and the heavy-tail mutation operator, we investigate the performanceof (1 + 1)
EA using the heavy-tail mutation operator for solving the CCKP instances.Table 1 and Table 2 list the average and standard deviation of profit for 30 indepen-dent runs concerning the probability estimate methods. For clarity, we use
Standard (1 + 1) EA to represent the (1 + 1) EA using standard mutation operator, and show the (1 + 1)
EA using heavy-tail mutation operator as (1 + 1)
EA with HT in the tables.In Table 1 and 2, the stat column shows the rank of each algorithm in the instances.If two algorithms can be compared with each other significantly, X(+) denotes that thecurrent algorithm is outperforming algorithm X. Besides, X(-) signifies that the currentalgorithm is worse than the algorithm X significantly. For example, the numbers 2(-)listed in the first row under the
Standard (1 + 1)
EA(1) mean that the current one issignificantly worse than the solutions obtained by (1 + 1)
EA with HT mutation (2) .The results in Table 1 and 2 indicate that there is a significant difference between8able 1: Statistic results of (1 + 1)
EA with Chernoff bound for instance eil101 with500 itemscapacity delta alpha Standard (1 + 1)
EA (1) (1 + 1)
EA with HT (2)Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d (+) (+) (+)
50 0.001 75625.30 114.08 2(-) 75796.10 124.82 (+) (+) (+) (+) (+) (+)
50 0.001 187930.55 200.91 2(-) 188244.65 87.04 (+) (+) (+) un c o rr e l a t e d (+) (+) (+)
50 0.001 83617.85 175.42 2(-) 83746.00 72.36 (+) (+) (+) (+) (+) (+)
50 0.001 145675.90 73.28 2(-) 145767.40 88.57 (+) (+) (+) using the two mutation operators in the single-objective evolutionary algorithm. The (1 + 1) EA with the heavy-tail mutation operator outperforms the standard (1 + 1)
EAin all cases. Moreover, in the most uncorrelated type of instances, the (1 + 1)
EA withheavy-tail mutations obtains solutions with a lower standard deviation in comparisonwith the other algorithm. In summary, the results show that the heavy-tail mutationoperator leads to better performance when solving CCKP instances, which we haveshown in bold at stat columns. ( µ + 1) EA The purpose of this section is to investigate the effectiveness of the problem-specificcrossover operator associated with single-objective evolutionary algorithms. Here, werun the ( µ + 1) EA with the population size 10. To simplify the name of algorithms,we use the following notations:
Standard ( µ + 1) EA is the ( µ + 1) EA using standardmutation operator, ( µ + 1) EA with HT is the ( µ + 1) EA using heavy-tail mutation9able 2: Statistic results of (1 + 1)
EA with Chebyshev’s inequality for instance eil101with 500 itemscapacity delta alpha Standard (1 + 1)
EA (1) (1 + 1)
EA with HT (2)Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d (+) (+) (+)
50 0.001 69136.30 210.29 2(-) 69394.60 121.15 (+) (+) (+) (+) (+) (+)
50 0.001 179824.75 167.43 2(-) 180107.35 126.69 (+) (+) (+) un c o rr e l a t e d (+) (+) (+)
50 0.001 74345.30 232.35 2(-) 74483.10 152.61 (+) (+) (+) (+) (+) (+)
50 0.001 137111.55 102.85 2(-) 137262.15 75.68 (+) (+) (+) operator and ( µ + 1) EA with HT and PS represents the ( µ + 1) EA using the heavy-tailmutation operator and the problem-specific crossover operator.Table 3 and 4 list the results when using Chernoff bound and Chebyshev’s inequal-ity to estimate the constraint violation probability of a CCKP solution separately. Ascan be seen from the tables, the performance of using the heavy-tail mutation operatoris significantly better than using the standard mutation operator on all instances. There-fore the conclusion is the same as for the (1 + 1)
EA. Another insight of these tablescan be drawn from the values of the columns ( µ + 1) EA with HT (4) and ( µ + 1) EAwith HT and PS (5) . We can clearly see that the results obtained by ( µ + 1) EA with HTand PS (5) are significantly better than ( µ + 1) EA with HT (4) . It shows the effective-ness of the PS crossover operator when solving the CCKP instances in single-objectiveevolutionary algorithms compared to mutation only.Moreover, by comparing the values of the corresponding columns in the Table 1and 3, and in Table 2 and 4 respectively according to the probability tails. It canbe seen that in both estimated methods, the results in ( µ + 1) EA with HT and PS ( µ + 1) EA with Chernoff bound for instance eil101 with500 items capacity delta alpha Standard ( µ + 1) EA (3) ( µ + 1) EA with HT (4) ( µ + 1) EA with HT and PS (5)Mean Std stat Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d (+) , (+) (+) , (+) (+) , (+)
50 0.001 75562.23 201.47 4(-),5(-) 75816.83 141.98 3(+),5(-) 75934.03 102.39 (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+)
50 0.001 188027.17 138.12 4(-),5(-) 188308.87 129.41 3(+),5(-) 188575.15 146.49 (+) , (+) (+) , (+) (+) , (+) un c o rr e l a t e d (+) , (+) (+) , (+) (+) , (+)
50 0.001 83586.80 204.58 4(-),5(-) 83745.63 101.20 3(+),5(-) 83819.10 91.67 (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+)
50 0.001 145648.43 90.76 4(-),5(-) 145770.03 65.78 3(+),5(-) 145838.10 59.74 (+) , (+) (+) , (+) (+) , (+) (5) are better than the other combinations of algorithms and operators. We bold thestatistic results in tables. In summary, the results in this section indicate that the heavy-tail mutation operator and the problem-specific crossover operator are active in single-objective evolutionary algorithms when dealing with the CCKP instances. The nextsection, therefore, moves on to discuss the performance of these operators in multi-objective approaches. In this section, we introduce a new multi-objective model for the chance-constrainedknapsack problem. The model considers both feasible solutions and the second type ofinfeasible solutions than mentioned in Section 2.2. Then we apply the new model toGSEMO previous considered for CCKP [37]. GSEMO can generate a Pareto front withboth feasible solutions and infeasible solutions. For further investigation of our multi-objective optimization, we also apply the Non-dominated Sorting Genetic Algorithm(NSGA-II) [11], which is a state of the art multi-objective EA when dealing with twoobjectives. We run NSGA-II using 20 as a population size with Chebyshev’s inequalityand Chernoff bound, respectively. 11able 4: Statistic results of ( µ + 1) EA with Chebyshev’s inequality for instance eil101with 500 items capacity delta alpha Standard ( µ + 1) EA (3) ( µ + 1) EA with HT (4) ( µ + 1) EA with HT and PS (5)Mean Std stat Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d (+) , (+) (+) , (+) (+) , (+)
50 0.001 69178.33 151.69 4(-),5(-) 69393.30 103.48 3(+),5(-) 69439.10 110.33 (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+)
50 0.001 179826.70 163.99 4(-),5(-) 180083.90 118.26 3(+),5(-) 180218.70 106.71 (+) , (+) (+) , (+) (+) , (+) un c o rr e l a t e d (+) , (+) (+) , (+) (+) , (+)
50 0.001 74378.33 174.08 4(-),5(-) 74498.87 119.64 3(+),5(-) 74597.45 120.22 (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+) (+) , (+)
50 0.001 137174.83 102.32 4(-),5(-) 137303.53 64.42 3(+),5(-) 137317.90 53.63 (+) , (+) (+) , (+) (+) , (+) To keep more diversity in the solution space, the new model makes other solutionsdominate the infeasible solutions that the expected weight of selection items is over-loading the capacity. The difference between the new multi-objective model and theold multi-objective model in [37] is that the old model made all feasible solutions dom-inate all infeasible solutions. The fitness functions of this new model are proposed asfollow. g ( X ) = (cid:26) P rob ( W ≥ C ) E W ( X ) < C E W ( X ) − C ) E W ( X ) ≥ C (9) g ( X ) = (cid:26) P ni =1 p i x i g ( X ) ≤ − g ( X ) > (10)The first function calculates the probability of a solution by overloading the ca-pacity of the knapsack, and it forces the probability of an infeasible solution whoseexpected weight exceeds the capacity larger than . The second fitness function is theobjective of the chance-constrained knapsack problem. It calculates the profit of feasi-ble solutions that the probability less than α and infeasible solutions with probabilitymore than α but less than in . We say solution Y dominates solution X w.r.t. g ,denoted by Y < X , iff g ( Y ) ≤ g ( X ) ∧ g ( Y ) ≥ g ( X ) .12 lgorithm 4 GSEMO Choose x ∈ { , } n uniformly at random S ← { x } ; while stopping criterion not met do choose x ∈ S uniformly at random; y ← flip each bit of x independently with probability of n ; if ( w ∈ S : w < GSEMO y ) then S ← ( S ∪ { y } ) \{ z ∈ S | y < GSEMO z } ; end if end while The objective function g guarantees that the search process is guided towards allconsidering solutions, that trade-offs in terms of confidence level and profit are com-puted for the solutions in the Pareto front. However, even the algorithm can store fea-sible solutions and a bunch of infeasible solutions, we output the best feasible solutionin every iteration. The first multi-objective approach we consider here is a simple multi-objective evolu-tionary algorithm as a baseline multi-objective approach, which is named for GlobalSimple Evolutionary Multi-Objective Optimizer (GSEMO) [18]. The GSEMO, givenin Algorithm 4 works like (1 + 1)
EA, starts with a random search point x ∈ { , } n .In the mutation step, flip each bit with a probability /n , but the algorithm stores a setof solutions in the main optimization loop where any solution does not dominate eachother.We also apply the Non-dominated Sorting Genetic Algorithm (NSGA-II), whichis a state-of-art multi-objective EA. NSGA-II was proposed by Deb et al. [11]. Forthe detail of this algorithm, we refer to read the paper [11]. We run NSGA-II with apopulation size of 20 using Chebyshev and Chernoff inequality tails respectively, andcompare the performance of GSEMO and NSGA-II at the end of this section. To compare the performance of the old model with the new model, we first applythe GSEMO to solve the same instances as those proposed in Section 4.2, but withdifferent fitness functions. Next, to test the effectiveness of the PS crossover operator inthe multi-objective algorithm, we combined the PS crossover operators in GSEMO tosolve the new model. Table 5 and 6 show the results obtained using different probabilitytails separately. To simplify the algorithm names in the tables, we use old denotes theold multi-objective model and new is the new multi-objective model. The uniform , HT and PS denote the uniform crossover operator, heavy-tail mutation operator andproblem-specific crossover operator separately.Table 5 shows that in some of the instances, for example, for the first instance in the bounded-strongly-correlated type, GSEMO with the new model outperforms GSEMO13able 5: Statistic results of GSEMO with Chernoff bound for the instance eil101 with500 items capacity delta alpha GSEMO with old model (6) GSEMO with new model (7) GSEMO with new model and PS (8)Mean Std stat Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d un c o rr e l a t e d with the old model. For the other instances, both algorithms perform as good as eachother. It can be observed more clearly in Table 6. The new model performs signif-icantly better than the old model when dealing with the bounded-strongly-correlated type of instances. Furthermore, as can be seen from the column GSEMO with the newmodel and PS , this algorithm reported significantly better results than the other twoalgorithms. These results indicate that the new multi-objective model is effective forsolving CCKP instance and performs better than the old model in most cases. The PScrossover operator can improve the performance of the multi-objective evolutionaryalgorithm when dealing with the CCKP.
In this section, we investigate the performance of NSGA-II with the combination oftwo crossover operators, the uniform crossover and PS crossover, and the two modelsshown in Section 5.1. We modify NSGA-II to keep the best feasible solution for CCKPin each iteration. Since the NSGA-II generates ten offspring in each step with thepopulation size 20, and GSEMO only generates one offspring, we set the iteration ofNSGA-II to ∗ (instead of ∗ for GSEMO) which results in the same numberof fitness evaluations for both algorithms.Table 7 and 8 show the results obtained when using Chernoff bound and Cheby-14able 6: Statistic results of GSEMO with Chebyshev’s inequality for the instanceeil101 with 500 items capacity delta alpha GSEMO with old model (6) GSEMO with new model (7) GSEMO with new model and PS (8)Mean Std stat Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d un c o rr e l a t e d stat columns of Table 7, it can be seenthat with the uniform crossover operator, the solutions in the new model NSGA-II withnew and uniform (11) are mostly better than the solutions in the old model
NSGA-IIwith old and uniform (9) . However, in some instances with particular δ = 50 , the oldmodel outperforms the new model. The solutions in NSGA-II with old and PS (10) and
NSGA-II with new and PS (12) indicate that in most instances, the new model performsbetter than the old model when using the PS crossover operator.However, an opposite conclusion can be drawn from Table 8. The solutions in the
NSGA-II with new and uniform (11) and
NSGA-II with old and uniform (9) show thatNSGA-II with uniform crossover performs better when dealing with the old model thanthe new model. However, the correlation between the old model and the new model isinteresting when using NSGA-II with PS crossover operator to solve the problem. Therelationship between the results obtained from solving the two models is related to thetype of instances. In other words, for the bounded-strongly-correlated instances, theold model outperforms the new model, while for the uncorrelated instances, the newmodel is better than the old model in most cases.The next insight can be drawn from the values in the
GSEMO with new and PS (8) and
NSGA-II with new and PS (12) columns. It can be observed that the results obtainedfrom GSEMO are significantly better than NGSA-II for all instances. The comparisoncan point out the possible research line to further investigate state-of-art multi-objectiveevolutionary algorithms such as NSGA-II and SPEA2 for solving CCKP. Moreover, wecompare the performance of the best single-objective algorithm: ( µ + 1) EA with HTand PS and the best multi-objective algorithm
GSEMO with new and PS accordingto the estimated methods. It is observed that the performance of the multi-objectivealgorithm is significantly better than the single-objective algorithm for all instances.
In this study, we considered the chance-constrained knapsack problem, which is a vari-ant of the classical knapsack problem. The chance-constrained knapsack problem playsa vital role in various real-world applications, and it allows for constraint violationwith a small probability. We have considered the chance-constrained knapsack prob-lem and proposed a problem-specific crossover operator and the heavy-tail mutationoperator to deal with the CCKP. Our experimental results show that the proposed oper-ators improve the performance of single-objective evolutionary algorithms when solv-ing CCKP instances. Furthermore, we have introduced a new multi-objective modelfor the CCKP. The experimental results show that combining this new model with theproblem-specific crossover operator in GSEMO and NSGA-II leads to significant per-formance improvements for solving the CCKP.16
Acknowledgements
This work has been supported by the Australian Research Council through grant DP160102401and by the South Australian Government through the Research Consortium ”UnlockingComplex Resources through Lean Processing”.
References [1] H. Assimi, O. Harper, Y. Xie, A. Neumann, and F. Neumann. Evolution-ary bi-objective optimization for the dynamic chance-constrained knapsackproblem based on tail bound objectives. In
ECAI 2020 - 24th EuropeanConference on Artificial Intelligence, 2020 , 2020. To appear, available athttps://arxiv.org/abs/2002.06766.[2] D. Beasley, D. R. Bull, and R. R. Martin. An overview of genetic algorithms:Part 1, fundamentals.
University Computing , 15((2)):56–69, 1993.[3] D. Beasley, D. R. Bull, and R. R. Martin. An overview of genetic algorithms:Part 2, research topics.
University Computing , 15((4)):170–181, 1993.[4] A. Bhalgat, A. Goel, and S. Khanna.
Improved approximation results for stochas-tic knapsack problems , pages 1647–1665. 2011.[5] G. Casella and R. L. Berger.
Statistical inference , volume 2. Duxbury ThomsonLearning, 2002.[6] F. Chicano, D. Whitley, and E. Alba. Exact computation of the expectation sur-faces for uniform crossover along with bit-flip mutation.
Theoretical ComputerScience , 545:76 – 93, 2014.[7] R. Chiong, T. Weise, and Z. Michalewicz.
Variants of evolutionary algorithmsfor real-world applications . Springer, 2012.[8] G. W. Corder and D. I. Foreman.
Nonparametric statistics: A step-by-step ap-proach . John Wiley & Sons, 2014.[9] A. DasGupta.
A Collection of Inequalities in Probability, Linear Algebra, andAnalysis , pages 633–687. Springer New York, 2008.[10] B. C. Dean, M. X. Goemans, and J. Vondr´ak. Approximating the stochastic knap-sack problem: the benefit of adaptivity.
Mathematics of Operations Research ,33(4):945–964, 2008.[11] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjectivegenetic algorithm: Nsga-ii.
IEEE Transactions on Evolutionary Computation ,6(2):182–197, 2002.[12] B. Doerr. Probabilistic tools for the analysis of randomized optimization heuris-tics. In
Theory of Evolutionary Computation , Natural Computing Series, pages1–87. Springer, Cham, 2020. 1713] B. Doerr, C. Doerr, A. Neumann, F. Neumann, and A. M. Sutton. Optimizationof chance-constrained submodular functions. In
Proceedings of the Thirty-FourthAAAI Conference on Artificial Intelligence, AAAI 2020 , 2020.[14] B. Doerr, H. P. Le, R. Makhmara, and T. D. Nguyen. Fast genetic algorithms. In
Proceedings of the Genetic and Evolutionary Computation Conference, GECCO2017 , pages 777–784. ACM, 2017.[15] W. C. Driscoll. Robustness of the anova and tukey-kramer statistical tests. In
International Conference on Computers and Industrial Engineering, CIE 1996 ,pages 265–268. Pergamon Press, Inc., 1996.[16] E. Falkenauer. The worth of the uniform [uniform crossover]. In
Proceedingsof the 1999 Congress on Evolutionary Computation CEC 1999 , volume 1, pages776–782. IEEE, 1999.[17] T. Friedrich, A. G¨obel, F. Quinzan, and M. Wagner. Heavy-tailed mutation oper-ators in single-objective combinatorial optimization. In
Parallel Problem Solvingfrom Nature – PPSN XV , pages 134–145, Cham, 2018. Springer InternationalPublishing.[18] O. Giel and I. Wegener. Evolutionary algorithms and the maximum matchingproblem. In
Symposium on Theoretical Aspects of Computer Science, STACS2003 , pages 415–426. Springer, 2003.[19] V. Goyal and R. Ravi. A ptas for the chance-constrained knapsack problem withrandom item sizes.
Operations Research Letters , 38(3):161–164, 2010.[20] J. Han, K. Lee, C. Lee, K.-S. Choi, and S. Park. Robust optimization approachfor a chance-constrained binary knapsack problem.
Mathematical Programming ,157(1):277–296, 2016.[21] S.-C. Horng, S.-S. Lin, and F.-Y. Yang. Evolutionary algorithm for stochastic jobshop scheduling with random processing time.
Expert Systems with Applications ,39(3):3603–3610, 2012.[22] H. Ishibuchi, K. Narukawa, N. Tsukamoto, and Y. Nojima. An empirical study onsimilarity-based mating for evolutionary multiobjective combinatorial optimiza-tion.
European Journal of Operational Research , 188(1):57–75, 2008.[23] A. Jaszkiewicz.
Multiple objective metaheuristic algorithms for combinatorialoptimization . Wydawnictwo Politechniki Pozna´nskiej, 2001.[24] O. Klopfenstein and D. Nace. A robust approach to the chance-constrained knap-sack problem.
Operations Research Letters , 36(5):628–632, 2008.[25] S. Kosuch and A. Lisser. Upper bounds for the 0-1 stochastic knapsack problemand a b&b algorithm.
Annals of Operations Research , 176(1):77–93, 2010.1826] B. Liu, Q. Zhang, F. V. Fern´andez, and G. G. E. Gielen. An efficient evolutionaryalgorithm for chance-constrained bi-objective.
IEEE Trans. Evolutionary Com-putation , 17(6):786–796, 2013.[27] Y. Merzifonluo˘glu, J. Geunes, and H. E. Romeijn. The static stochastic knap-sack problem with normally distributed item sizes.
Mathematical Programming ,134(2):459–489, 2012.[28] M. Mitchell.
An introduction to genetic algorithms . MIT Press, 1998.[29] R. Motwani and P. Raghavan.
Randomized Algorithms . Cambridge UniversityPress, 1995.[30] F. Neumann and A. M. Sutton. Runtime analysis of the (1 + 1) evolutionaryalgorithm for the chance-constrained knapsack problem. In
Proceedings of the15th ACM/SIGEVO Conference on Foundations of Genetic Algorithms , FOGA2019, page 147153. Association for Computing Machinery, 2019.[31] T. T. Nguyen and X. Yao. Continuous dynamic constrained optimization - thechallenges.
IEEE Transactions on Evolutionary Computation , 16(6):769–786,2012.[32] J. Page, R. Poli, and W. B. Langdon. Mutation in genetic programming: A pre-liminary study. In
European Conference on Genetic Programming , pages 39–48.Springer, 1999.[33] P. Rakshit, A. Konar, and S. Das. Noisy evolutionary optimization algorithms - acomprehensive survey.
Swarm and Evolutionary Computation , 33:18–45, 2017.[34] F. Shi, M. Schirneck, T. Friedrich, T. K¨otzing, and F. Neumann. Reoptimiza-tion times of evolutionary algorithms on linear functions under dynamic uniformconstraints. In
Proceedings of the Genetic and Evolutionary Computation Con-ference, GECCO 2017 , pages 1407–1414. ACM, 2017.[35] G. Syswerda. Uniform crossover in genetic algorithms. In
Proceedings of the 3rdInternational Conference on Genetic Algorithms , pages 2–9, 1989.[36] J. Till, G. Sand, M. Urselmann, and S. Engell. A hybrid evolutionary algorithmfor solving two-stage stochastic integer programs in chemical batch scheduling.
Computers & Chemical Engineering , 31(5):630 – 647, 2007. ESCAPE-15.[37] Y. Xie, O. Harper, H. Assimi, A. Neumann, and F. Neumann. Evolutionary al-gorithms for the chance-constrained knapsack problem. In
Proceedings of theGenetic and Evolutionary Computation Conference, GECCO 2019 , page 338346.ACM, 2019.[38] Xin Yao, Yong Liu, and Guangming Lin. Evolutionary programming made faster.
IEEE Transactions on Evolutionary Computation , 3(2):82–102, 1999.[39] X. Yao and Y. Liu. Fast evolution strategies. In
Evolutionary Programming VI ,pages 149–161. Springer Berlin Heidelberg, 1997.19able 7: Statistic results of NSGA-II with Chernoff bound for instance eil101 with 500 items capacity delta alpha NSGA-II with old and uniform (9) NSGA-II with old and PS (10) NSGA-II with new and uniform (11) NSGA-II with new and PS (12)Mean Std stat Mean Std stat Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d un c o rr e l a t e d able 8: Statistic results of NSGA-II with Chebyshev’s inequality for the instance eil101 with 500 items capacity delta alpha NSGA-II with old and uniform (9) NSGA-II with old and PS (10) NSGA-II with new and uniform (11) NSGA-II with new and PS (12)Mean Std stat Mean Std stat Mean Std stat Mean Std stat bound e d - s t r ong l y - c o rr e l a t e d un c o rr e l a t e d37686 25 0.001 81097.67 120.79 10(-),11(+),12(-) 81438.43 29.77 9(+),11(+),12(-) 79826.57 314.75 9(-),10(-),12(-) 81457.00 0.00 9(+),10(-),11(+)0.01 86056.53 59.24 10(-),11(+),12(-) 86172.97 18.39 9(+),11(+),12(-) 85932.27 120.74 9(-),10(-),12(-) 86178.23 26.97 9(+),10(-),11(+)0.1 87692.13 32.70 10(-),11(+),12(-) 87718.70 11.17 9(+),11(+),12(-) 87666.30 59.59 9(-),10(-),12(-) 87725.37 8.55 9(+),10(-),11(+)50 0.001 74116.47 201.70 10(-),11(+),12(-) 74763.10 45.28 9(+),11(+),12(+) 71446.10 514.91 9(-),10(-),12(-) 74759.37 10.02 9(+),10(+),11(+)0.01 83765.50 102.91 10(-),11(+),12(-) 83983.30 23.59 9(+),11(+) 83404.57 514.91 9(-),10(-),12(-) 83987.43 22.76 9(+),11(+)0.1 86993.97 43.19 10(-),11(-),12(-) 87057.27 17.19 9(+),11(+),12(-) 87024.87 35.61 9(+),10(-),12(-) 87062.53 5.07 9(+),10(+),11(+)93559 25 0.001 142887.10 161.18 10(-),11(+),12(-) 143493.53 33.13 9(+),11(+),12(-) 141780.10 281.11 9(-),10(-),12(-) 143521.00 17.66 9(+),10(+),11(+)0.01 147612.27 78.26 10(-),11(-),12(-) 147799.43 16.74 9(+),11(+) 147602.77 89.57 9(+),10(-),12(-) 147802.23 13.51 9(+),11(+)0.1 149125.07 33.53 10(-),11(+),12(-) 149197.40 16.73 9(+),11(+) 149102.27 45.91 9(-),10(-),12(-) 149200.67 14.26 9(+),11(+)50 0.001 136249.50 232.14 10(-),11(+),12(-) 137380.00 53.81 9(+),11(+),12(-) 133715.83 452.04 9(-),10(-),12(-) 137442.47 26.81 9(+),10(+),11(+)0.01 144655.87 33.53 10(-),11(-),12(-) 145808.87 18.50 9(+),11(+),12(-) 145025.17 193.57 9(+),10(-),12(-) 145822.27 19.88 9(+),10(+),11(+)0.1 148484.37 45.83 10(-),11(-),12(-) 148588.10 18.61 9(+),11(+),12(-) 148517.47 38.88 9(+),10(-),12(-) 148609.63 8.25 9(+),10(+),11(+)