A Hybrid ACO Algorithm for the Next Release Problem
He Jiang
School of Software Dalian University of Technology Dalian 116621, China [email protected]
Jingyuan Zhang
School of Software Dalian University of Technology Dalian 116621, China [email protected]
Jifeng Xuan
School of Mathematical Sciences Dalian University of Technology Dalian 116024, China [email protected]
Zhilei Ren
School of Mathematical Sciences Dalian University of Technology Dalian 116024, China [email protected]
Yan Hu
School of Software Dalian University of Technology Dalian 116621, China [email protected]
Abstract—In this paper, we propose a Hybrid Ant Colony Optimization algorithm (HACO) for the Next Release Problem (NRP). NRP, an NP-hard problem in requirements engineering, seeks to balance customer requests, resource constraints, and requirement dependencies through requirement selection. Inspired by the success of Ant Colony Optimization (ACO) algorithms on NP-hard problems, we design HACO to approximately solve NRP. As in traditional ACO algorithms, multiple artificial ants are employed to construct new solutions. During the solution construction phase, both pheromone trails and neighborhood information determine each ant's choices. In addition, a local search (first found hill climbing) is incorporated into HACO to improve solution quality. Extensive experiments on typical NRP test instances show that HACO outperforms the existing algorithms (GRASP and simulated annealing) in terms of both solution quality and running time.
Keywords—next release problem (NRP); ant colony optimization; local search; requirements engineering

I. INTRODUCTION
As a well-known problem arising in software requirements engineering, the Next Release Problem (NRP) seeks to maximize the customer benefits obtained from a set of interdependent requirements, under the constraint of a predefined budget bound. NRP was first formulated as an optimization problem in software evolution [1] by Bagnall et al. in 2001 [2]. To optimize software requirements, NRP and its variants have attracted much attention from the research community in recent years [3-5], including Component Selection and Prioritization [6], Fairness Analysis [7], the Multi-Objective Next Release Problem (MONRP) [7, 8], and Release Planning [1, 9]. Since NRP has been proved NP-hard, no algorithm can find optimal solutions in polynomial time unless P=NP [10]. Therefore, many heuristic algorithms have been proposed for NRP to obtain near-optimal solutions in reasonable time, including greedy algorithms, hill climbing, simulated annealing (SA) [2, 6], and genetic algorithms [3, 8]. Among these algorithms, LMSA (a simulated annealing algorithm using the cooling schedule of Lundy and Mees) works efficiently on some problem instances [2].

Ant Colony Optimization (ACO) has been a notable technique for approximately solving NP-hard problems since 1991 [11]. With a colony of artificial ants, ACO can achieve good solutions for numerous hard discrete optimization problems. To improve the performance of ACO, hybrid ACO algorithms (i.e., ACO with local search) have also been proposed for classical optimization problems, including the Traveling Salesman Problem (TSP) [11], the Quadratic Assignment Problem (QAP) [12], and the 3-Dimensional Assignment Problem (AP3) [13].

Motivated by the great success of ACO in tackling NP-hard problems, we propose a new algorithm named Hybrid ACO (HACO) for solving large NRP instances in this paper. HACO updates solutions under the guidance of artificial ants and pheromone trails.
At every iteration, HACO constructs solutions and updates the pheromone trails for the next iteration. In contrast to traditional ACO algorithms, a first found hill climbing (FHC) operator is incorporated into HACO to improve solution quality. This operator updates the current solution as soon as the first better solution is found. Since it is a challenging and time-consuming task to tune appropriate parameters for ACO algorithms, we employ the CALIBRA procedure (an automatic tuning procedure by Adenso-Diaz and Laguna, 2006) [14] to effectively determine the interrelated parameter settings. Experimental results on typical NRP test instances show that our new algorithm outperforms the existing algorithms in both solution quality and running time.

The remainder of this paper is organized as follows. Section II gives the related definitions of NRP. Section III introduces ACO and HACO for NRP. Section IV describes the experimental results on NRP instances. Section V discusses related work, and Section VI concludes this paper.

(This work is partially supported by the Natural Science Foundation of China under Grant No. 60805024 and the National Research Foundation for the Doctoral Program of Higher Education of China under Grant No. 20070141020.)

II. PRELIMINARIES
This section gives the definitions of NRP and some notation used in the rest of this paper.

In a software system, let R = {r_1, r_2, …, r_n} denote all possible software requirements. Each requirement r_i ∈ R is associated with a cost c_i ∈ Z^+. A directed acyclic graph G = (R, E) denotes the dependencies among requirements, where (r, r') ∈ E indicates that r must be satisfied before r'. The set parent(r) contains all requirements which must be satisfied before r. Let S = {1, 2, …, m} denote all the customers related to the requirements. Each customer i is satisfied if and only if a set of requirements R_i ⊆ R is satisfied. A profit w_i ∈ Z^+ denotes the priority of customer i. To satisfy customer i, both the requirements in R_i and the ones that must be satisfied before them, parent(R_i) = ∪_{r ∈ R_i} parent(r), must be satisfied. Let R_i' = R_i ∪ parent(R_i) denote all the requirements which have to be developed to satisfy customer i, and let cost(R_i') = Σ_{r_j ∈ R_i'} c_j be the cost of satisfying customer i. The cost of a subset S' ⊆ S is defined as cost(S') = cost(∪_{i ∈ S'} R_i'), and the overall profit obtained is defined as ω(S') = Σ_{i ∈ S'} w_i.

Based on the above definitions, the goal of NRP is to find a subset S' ⊆ S that maximizes ω(S'), subject to cost(S') ≤ B, where B is the predefined development budget bound [2]. Given an NRP instance (denoted NRP(S, R, W)), a feasible solution is a subset S' ⊆ S subject to cost(S') ≤ B.

III. A HYBRID ACO ALGORITHM
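Before presenting the algorithm, the definitions of Section II (requirement closure R_i', cost(S'), and ω(S')) can be illustrated with a small Python sketch on a toy instance; all requirement costs, dependencies, customer requests, and profits below are hypothetical illustration values, not data from the paper's benchmarks.

```python
# Requirement costs c_i and the dependency DAG: parent[r] holds the
# direct prerequisites of requirement r (hypothetical toy data).
cost = {1: 3, 2: 2, 3: 5, 4: 1}
parent = {1: set(), 2: {1}, 3: {1}, 4: {2}}

def closure(reqs):
    """R_i' = R_i united with all (transitive) prerequisites."""
    result = set()
    stack = list(reqs)
    while stack:
        r = stack.pop()
        if r not in result:
            result.add(r)
            stack.extend(parent[r])
    return result

# Customer i requests the requirements R_i and yields profit w_i.
requests = {1: {4}, 2: {3}}
profit = {1: 10, 2: 7}

def cost_of(subset):
    """cost(S') = cost of the union of the closures R_i' over i in S'."""
    needed = set()
    for i in subset:
        needed |= closure(requests[i])
    return sum(cost[r] for r in needed)

def omega(subset):
    """omega(S') = sum of the profits of the selected customers."""
    return sum(profit[i] for i in subset)

print(cost_of({1, 2}))  # closure union {1,2,3,4} -> 3+2+5+1 = 11
print(omega({1, 2}))    # 10 + 7 = 17
```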
In this section, we present the HACO algorithm for solving NRP in detail. In contrast to traditional ACO algorithms, a local search operator called FHC is incorporated into HACO to improve the quality of solutions. We present the framework of HACO in Part A and the local search operator in Part B.

A. HACO
Ant Colony Optimization (ACO) is a meta-heuristic based on the communication of a colony of simple agents (artificial ants), which are directed by artificial pheromone trails. The ants use pheromone trails and heuristic information to probabilistically construct solutions, and they update the pheromone trails during the algorithm's execution to reflect their search experience [11, 12].

In this subsection, we propose the HACO algorithm to solve NRP. The main framework of HACO is presented in Algorithm 1. After the initialization, a series of iterations is conducted to find solutions. At every iteration, HACO employs the ants to construct solutions by a probabilistic action choice rule and incorporates a local search operator L to further improve the constructed solutions (see Step (2.1)). Then, HACO updates the pheromone trails for the next iteration, including pheromone evaporation (see Step (2.2)) and deposition (see Step (2.3)). Whenever a better solution is found during the iterations, the best solution is updated. Finally, the best solution is returned.

Algorithm 1: HACO
Input: NRP(S, R, W), local search operator L, iteration count t, ant number h
Output: solution S*
Begin
(1) S* = ∅, ω(S*) = 0;
(2) for i = 1 to t do
    (2.1) for k = 1 to h do
          // solution construction phase
          ant k constructs solution S_k;
          // local search phase
          call L to optimize S_k;
    // pheromone update phase
    (2.2) evaporate pheromone on all customers, by (2);
    (2.3) for k = 1 to h do
          deposit pheromone on customers in S_k, by (4);
          if ω(S*) < ω(S_k) then S* = S_k;
(3) return S*;
End
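The main loop of Algorithm 1 can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: construct_solution and local_search are hypothetical callbacks standing in for Algorithm 2 and the FHC operator, while the pheromone initialization and update follow the formulas given later in this section (τ_i = θ·w_i and Eqs. (2)-(4)).

```python
def haco(customers, profit, construct_solution, local_search,
         t=10, h=10, rho=0.13, gamma=0.020, theta=0.02):
    """Skeleton of Algorithm 1. construct_solution(tau) and
    local_search(s) are assumed callbacks; parameter names follow
    the paper (t iterations, h ants, evaporation rate rho)."""
    omega = lambda s: sum(profit[i] for i in s)
    # Initial pheromone tau_i = theta * w_i (kept in (0, 1) for small theta)
    tau = {i: theta * profit[i] for i in customers}
    best, best_profit = set(), 0
    for _ in range(t):                       # Step (2)
        solutions = []
        for _k in range(h):                  # Step (2.1)
            s = construct_solution(tau)      # solution construction phase
            s = local_search(s)              # local search phase
            solutions.append(s)
        for i in tau:                        # Step (2.2), Eq. (2)
            tau[i] *= (1.0 - rho)
        for s in solutions:                  # Step (2.3), Eqs. (3)-(4)
            w_k = omega(s)
            for i in s:
                tau[i] += gamma * w_k
            if w_k > best_profit:
                best, best_profit = set(s), w_k
    return best

# Tiny demo with stub callbacks (hypothetical; a real run would plug in
# Algorithm 2 and the FHC operator).
profits = {1: 10, 2: 7}
print(haco({1, 2}, profits, lambda tau: {1, 2}, lambda s: s, t=2, h=3))
```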
More details of HACO are discussed as follows.

Pheromone trails and heuristic information: The pheromone trail τ_i for NRP refers to the desirability of adding customer i to the current partial solution. We define τ_i = θ·w_i, where θ is a parameter associated with the customer profits w_i, chosen so that the initial pheromone values vary between 0 and 1. We also define the heuristic information η_i = w_i / cost(R_i'). Obviously, the higher the profit and the lower the requirement cost of a customer, the higher η_i is.

Solution construction:
Initially, an ant's solution contains no customers. At each construction step, ant k applies a probabilistic action choice rule (the random proportional rule [11]) to decide which customer to add next. In particular, the probability with which ant k chooses customer i is given by

p_i^k = ( [τ_i]^α · [η_i]^β ) / ( Σ_{l ∈ N_k} [τ_l]^α · [η_l]^β ),  if i ∈ N_k,   (1)

where α and β are two parameters which determine the relative influence of the pheromone trail and the heuristic information, and N_k is the set of customers that ant k has not chosen yet.

The pseudo-code for the ant solution construction is given in Algorithm 2. The procedure works as follows. First, ant k starts with no customers (see Step (1)). Second, the next customer is chosen probabilistically by the roulette wheel selection procedure of evolutionary computation, proposed by Goldberg in 1989 [15]. The roulette wheel is divided into slices proportional to the weights of the customers that ant k has not chosen yet (see Step (2.3.2)). Finally, the wheel is rotated and the next customer i chosen by ant k is the one the marker points to (see Step (2.3.3)).

Algorithm 2:
SolutionConstruction
Input: NRP(S, R, W), budget B, ant k
Output: solution S*
Begin
(1) S* = ∅, ω(S*) = 0;
(2) while cost(S*) ≤ B do
    (2.1) sumProb = 0;
    (2.2) let r be a value randomly generated in [0, 1];
    (2.3) for each customer i do
        (2.3.1) if i has been added to S* then
                p_i^k = 0.0;
        (2.3.2) else
                p_i^k is calculated by (1);
                sumProb += p_i^k;
        (2.3.3) if sumProb ≥ r then
                add customer i to S*, break;
(3) return S*;
End

Update of pheromone trails:
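Before turning to the pheromone update, the core of Algorithm 2 (the random proportional rule of Eq. (1), realized as Goldberg-style roulette-wheel selection) can be sketched in Python; the tau and eta values below are illustrative, and the function is a sketch under the paper's tuned α and β, not the authors' code.

```python
import random

def choose_customer(candidates, tau, eta, alpha=1.1, beta=1.5, rng=random):
    """Pick one customer with probability proportional to
    tau[i]^alpha * eta[i]^beta, as in Eq. (1)."""
    weights = [(i, (tau[i] ** alpha) * (eta[i] ** beta)) for i in candidates]
    total = sum(w for _, w in weights)
    r = rng.random() * total       # spin the wheel: marker position
    acc = 0.0
    for i, w in weights:
        acc += w                   # each slice is proportional to the weight
        if acc >= r:
            return i
    return weights[-1][0]          # guard against floating-point rounding

tau = {1: 0.5, 2: 0.5, 3: 0.5}
eta = {1: 2.0, 2: 1.0, 3: 0.5}    # customer 1 is the most attractive
print(choose_customer([1, 2, 3], tau, eta))
```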
The pheromone trails are updated after all the ants have constructed their solutions. First, we decrease the pheromone value of every customer by a constant factor; then we add pheromone to the customers that the ants have chosen in their solutions. Pheromone evaporation is implemented by

τ_i ← (1 − ρ)·τ_i,   (2)

where 0 < ρ ≤ 1 is the pheromone evaporation rate, which enables the algorithm to forget bad decisions previously taken and gives other customers more opportunities to be chosen. After evaporation, all ants deposit pheromone on the customers they have chosen in their solutions:

τ_i ← τ_i + Σ_{k=1}^{h} Δτ_i^k,   (3)

where Δτ_i^k is the amount of pheromone that ant k deposits on its chosen customers. It is defined as

Δτ_i^k = γ·W_k if customer i is selected by ant k, and 0 otherwise,   (4)

where W_k, the quality of the solution S_k built by ant k, is computed as the sum of the customer profits in S_k. The parameter γ is used to scale Δτ_i^k.

B. FHC
To enhance the performance of HACO, we incorporate a local search operator L named FHC into HACO. In this subsection, we introduce the framework of FHC (see Algorithm 3).

Algorithm 3: FHC
Input: NRP(S, R, W), budget B, iteration count t
Output: solution S*
Begin
(1) S* = ∅, ω(S*) = 0;
(2) for i = 1 to t do
    (2.1) let S0 be a random feasible solution;
    (2.2) flag = true;
    (2.3) while flag = true do
        (2.3.1) randomly choose j ∈ S \ S0;
        (2.3.2) let S' = S0 ∪ {j};
        // when S' is feasible
        (2.3.3) if cost(S') ≤ B then S0 = S';
            else   // swap a customer in S0 with j
            (2.3.3.1) flag = false;
            (2.3.3.2) S' = S0;
            (2.3.3.3) for every customer l ∈ S0 do
                      if cost(S0 ∪ {j} \ {l}) ≤ B and
                         ω(S0 ∪ {j} \ {l}) > ω(S')
                      then S' = S0 ∪ {j} \ {l}, flag = true;
            (2.3.3.4) S0 = S';
    (2.4) if ω(S0) > ω(S*) then S* = S0;
(3) return S*;
End
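One improvement attempt of the inner loop of Algorithm 3 (Steps (2.3.1)-(2.3.3.4)) can be sketched in Python as follows: try to add a random unselected customer j, and if that exceeds the budget, look for the most profitable feasible swap. The cost and omega arguments are assumed to be the instance's cost and profit functions from Section II; the function is an illustrative approximation, not the authors' code.

```python
import random

def fhc_step(current, all_customers, cost, omega, budget, rng=random):
    """One improvement attempt of FHC; returns (new solution, flag)."""
    unselected = sorted(all_customers - current)
    if not unselected:
        return current, False
    j = rng.choice(unselected)            # Step (2.3.1)
    grown = current | {j}                 # Step (2.3.2)
    if cost(grown) <= budget:             # Step (2.3.3): plain add is feasible
        return grown, True
    best, improved = current, False       # otherwise try to swap j in
    for l in sorted(current):             # Step (2.3.3.3)
        cand = (current | {j}) - {l}
        if cost(cand) <= budget and omega(cand) > omega(best):
            best, improved = cand, True   # keep the most profitable swap
    return best, improved
```

A full FHC run would repeat this step while the flag stays true, restart from random feasible solutions, and keep the best solution seen.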
The hill climbing algorithm is a classic local search technique which finds locally optimal solutions within a solution neighborhood. FHC updates the solution as soon as the first improving solution is found. FHC consists of a series of iterations. At every iteration, a feasible solution S0 is randomly generated as the current solution (see Step (2.1)). This current solution is then improved by local search as follows (see Step (2.3)). First, an unselected customer j ∈ S \ S0 is chosen arbitrarily (see Step (2.3.1)). A new solution is obtained by adding customer j to S0 (see Step (2.3.2)). If this results in a feasible solution, the current solution is updated with the newly generated solution and the local search continues to improve it (see Step (2.3.3)). Otherwise, we try to replace a customer l ∈ S0 with j such that the resulting solution is feasible and its profit is maximized. If this succeeds, the current solution is updated with the resulting solution and the local search continues (see Steps (2.3.3.1)-(2.3.3.4)). After every iteration, if a better solution has been obtained, the best solution is updated (see Step (2.4)). Finally, the best solution is returned (see Step (3)).

TABLE I. THE DETAILS OF INSTANCE GENERATION
Instance  Requirements per level  Cost per level         Max. dependencies  Customers  Requests  Profit
NRP-1     20/40/80                1~5/2~8/5~10           8/2/0              100        1~5       1~30
NRP-2     20/40/80/160/320        1~5/2~7/3~9/4~10/5~15  8/6/4/2/0          500        1~5       1~30
NRP-3     250/500/750             1~5/2~8/5~10           8/2/0              500        1~5       1~30
NRP-4     250/500/750/1000/750    1~5/2~7/3~9/4~10/5~15  8/6/4/2/0          750        1~5       1~30
NRP-5     500/500/500             1~3/2/3~5              4/4/0              1000       1         1~30
IV. EXPERIMENTAL RESULTS
In this section, we describe the experimental results evaluating the performance of HACO and existing algorithms.

A. Instance generation
Since customer requirements are usually private data of a company, no public NRP instances are available. We generate NRP test instances following the method in the classic NRP paper [2]. These instances comprise five randomly generated problems, each of which contains several levels of requirements. Each requirement level is subject to constraints on the number of requirements, their dependencies, and their costs. For each problem, the predefined development budget bound is set to 30%, 50%, and 70% of the total cost of requirements. Since the ranges of customer profits have not been specified before, we assume each customer profit is selected randomly from 1 to 30. Table I gives the details of instance generation: for each instance, it lists the number of requirements in each level, the cost range of requirements per level, the maximum number of requirement dependencies per level, the number of customers, the number of requests per customer, and the range of customer profits.

B. Parameters for HACO
The performance of ACO algorithms usually depends on the configuration of five parameters. We use the CALIBRA procedure [14], proposed by Adenso-Diaz and Laguna in 2006, to determine interrelated HACO parameter settings that improve on the algorithm defaults. CALIBRA is an automated tool for finding performance-optimizing parameter settings, which liberates algorithm designers from the tedious task of manually searching the parameter space. It uses Taguchi's fractional factorial experimental designs coupled with local search to fine-tune algorithm parameters over a wide range of possible values. The benefit of using CALIBRA is most evident when the algorithm being tuned has parameters whose values significantly impact performance.

Using the current version of CALIBRA (from the online supplement at http://or.pubs.informs.org/org/Pages.collect.html), 5 instances are selected randomly for tuning, one for each group size, to ensure proper representation of problem instances of all sizes. We pick lower and upper bounds for the parameters and discretize the intervals uniformly. For example, the continuous pheromone influence parameter α in the range 0.1 to 4.0 with an accuracy of one decimal place is internally handled as an integer variable with a lower bound of 1 and an upper bound of 40. The other four parameters are handled in the same way. Table II shows the parameters, the original ranges considered before discretization, and the values found by CALIBRA. With these parameter values, HACO achieves good solutions on all problem instances.

TABLE II.
HACO PARAMETER RANGES CONSIDERED FOR TUNING AND RESULTS OF RUNNING CALIBRA

Parameter  Range           Value by CALIBRA
α          [0.1, 4.0]      1.1
β          [0.1, 6.0]      1.5
γ          [0.010, 0.050]  0.020
ρ          [0.01, 0.99]    0.13
h          [1, 30]         10

C. Results and analysis
In this section, we show the experimental results of the heuristic algorithms, including GRASP, SA [2], FHC, ACO, and HACO, where ACO refers to the HACO variant without local search. All algorithms are implemented in C++ and run on a personal computer with Microsoft Windows XP, an Intel Core 2.53GHz CPU, and 4GB of memory.

The parameters of the heuristic algorithms are set as follows. For both GRASP and FHC, each reported solution quality is the best found over 100 restarts. For GRASP, the construction phase is implemented according to [2] and the length of the Restricted Candidate List (RCL) is set to 10; its local search phase is similar to FHC. For SA, the Lundy and Mees cooling schedule in [2] is used with the temperature control parameter set to 10^-8. For ACO and HACO, the five parameters are set to 1.1, 1.5, 0.020, 0.13, and 10, as suggested by CALIBRA. For HACO, 10 iterations are executed and 10 ants apply local search at each iteration, so that it can be compared with FHC within similar running time.

Table III shows that ACO and HACO outperform the other algorithms in terms of solution quality and running time. The first column gives the instance names, and the following 5 columns correspond to the 5 algorithms in the experiments; the 2 sub-columns of each algorithm present the solution quality and the running time, respectively.

TABLE III. EXPERIMENTAL RESULTS OF FIVE ALGORITHMS ON NRP INSTANCES
            GRASP             SA                FHC               ACO               HACO
Instance    solution time(s)  solution time(s)  solution time(s)  solution time(s)  solution time(s)
NRP-1-0.3   810      1.406    827      14.453   817      0.968    829      1.703    835      0.734
NRP-1-0.5   1364     1.281    1397     27.312   1353     1.281    1416     1.671    1418     0.89
NRP-1-0.7   1945     1.359    1960     32.921   1939     0.812    1963     1.656    1968     0.859
NRP-2-0.3   4084     144.328  3902     401.031  3802     334.062  4139     42.593   4262     187.218
NRP-2-0.5   6616     333.187  6629     655.687  6352     401.64   6712     43.125   6786     337.484
NRP-2-0.7   9284     351.437  9602     801.859  9436     349.937  9628     42.5     9732     274.593
NRP-3-0.3   6061     140.25   5556     820.812  5423     678.39   6063     48.984   6067     397.531
NRP-3-0.5   9131     300.14   8836     940.687  8647     666.921  9136     48.812   9196     464.312
NRP-3-0.7   11723    102.078  11648    352.312  11572    191.25   11726    47.187   11728    79.812
NRP-4-0.3   9161     1329.59  8498     5763.55  7935     4729.84  9165     137.156  9183     2810.8
NRP-4-0.5   13731    1220.11  13279    8827.31  12822    12208.4  13751    136.968  13810    3059.88
NRP-4-0.7   17999    273.312  17932    1809.71  17771    1433.06  18022    131.156  18029    424.234
NRP-5-0.3   14940    465.14   14808    2535.55  13322    506.296  15592    158.515  15616    282.76
NRP-5-0.5   20030    686.718  19991    2682.19  19608    1461.38  20784    156.531  20946    1036.47
NRP-5-0.7   24508    138.203  24439    375.656  24410    245.781  24568    155.578  24570    163.312
On small instances such as NRP-1, the running time of ACO is nearly the same as that of the existing algorithms, but on large NRP instances, ACO obtains better solutions in much less computing time than the other algorithms. Moreover, HACO achieves better solutions than ACO and needs less running time to obtain a locally optimal solution than FHC. Fig. 1 compares the solution quality of FHC, ACO, and HACO more intuitively. ACO and HACO prove to be much more effective than FHC on all instances, especially on large ones such as NRP-4 and NRP-5. HACO obtains the best solution quality among the 3 algorithms on all instances and can serve as a general approach to NRP.
Figure 1. Solution qualities obtained by FHC, ACO and HACO algorithms on all instances.
Fig. 2 illustrates the running times of FHC, ACO, and HACO. On all NRP instances, ACO uses the shortest computing time. The running time of FHC on larger instances such as NRP-4 is much longer than that of HACO. This implies that when the local search operator FHC is incorporated into HACO, the high-quality solutions constructed by ACO reduce, to some extent, the time spent searching for local optima.
Figure 2. An illustration of the running times of FHC, ACO and HACO algorithms on all instances.

V. RELATED WORK
Several heuristic approaches already exist for NRP. Among them, hill climbing (HC) and simulated annealing (SA) were implemented in [2, 6]. When using the greedy algorithm in [2] as its construction phase, GRASP can find better solutions than SA on some larger instances within only modest amounts of computing time (as shown in Table III in Section IV). Recently, an approximate backbone based multilevel algorithm (ABMA) was proposed in [17] to solve large-scale NRP instances.

ACO is a meta-heuristic approach, and the Ant System (AS) [18] was the first ACO algorithm, using the Traveling Salesman Problem (TSP) as an example application. A number of direct extensions of AS followed [19-21], differing in the way the pheromone is updated and in the management details of the pheromone trails.

When using ACO algorithms, automatic tuning procedures can effectively find better parameter settings, and several approaches for automatic parameter tuning already exist. Boyan and Moore [22] proposed a tuning algorithm based on machine learning techniques, but there was no empirical analysis of this algorithm when applied to parameters with a wide range of possible values. Audet and Orban [23] introduced a mesh adaptive direct search that uses surrogate models for algorithmic tuning; however, this approach has not been used for tuning local search algorithms. Adenso-Diaz and Laguna [14] designed an algorithm called CALIBRA for fine-tuning algorithms; it narrows down the choices for parameter values over a wide range and searches for the best possible parameter settings.

VI. CONCLUSION AND FUTURE WORK
In this paper, we propose HACO, which incorporates FHC into ACO to solve NRP. We compare the performance of standard ACO and HACO with existing heuristic algorithms on typical NRP instances. The experimental results illustrate that ACO outperforms the existing algorithms in solution quality and running time; when coupled with FHC, HACO achieves even better solution quality than ACO. In future work, we will investigate how to further improve the performance of HACO and extend it to the multi-objective NRP.

ACKNOWLEDGMENT

We would like to thank Prof. Belarmino Adenso-Diaz and Prof. Manuel Laguna for providing the latest online version of CALIBRA. Prof. Adenso-Diaz is with the Computer Engineering School and Industrial Engineering School, University of Oviedo, and Prof. Laguna is with the Leeds School of Business, University of Colorado.
REFERENCES

[1] K. H. Bennett and V. T. Rajlich, “Software maintenance and evolution: a roadmap,” Proceedings of the 2000 Conference on The Future of Software Engineering (FOSE’00), pp. 73–87, June 2000.
[2] A. J. Bagnall, V. J. Rayward-Smith, and I. M. Whittley, “The next release problem,” Information and Software Technology, vol. 43, pp. 883–890, November 2001.
[3] G. Ruhe and M. O. Saliu, “The art and science of software release planning,” IEEE Software, vol. 22, pp. 47–53, November 2005.
[4] M. Harman and B. F. Jones, “Search based software engineering,” Information and Software Technology, vol. 43, pp. 833–839, November 2001.
[5] M. Harman, “The current state and future of search based software engineering,” Proceedings of the 2007 Conference on The Future of Software Engineering (FOSE’07), pp. 342–357, May 2007.
[6] P. Baker, M. Harman, K. Steinhofel, and A. Skaliotis, “Search based approaches to component selection and prioritization for the next release problem,” Proceedings of the 22nd International Conference on Software Maintenance (ICSM’06), pp. 176–185, September 2006.
[7] A. Finkelstein, M. Harman, S. A. Mansouri, J. Ren, and Y. Zhang, “Fairness analysis in requirements assignment,” Proceedings of the 16th IEEE International Requirements Engineering Conference (RE’08), pp. 115–124, September 2008.
[8] Y. Zhang, M. Harman, and S. A. Mansouri, “The multi-objective next release problem,” Proceedings of the 9th Genetic and Evolutionary Computation Conference (GECCO’07), pp. 1129–1136, July 2007.
[9] D. Greer and G. Ruhe, “Software release planning: an evolutionary and iterative approach,” Information and Software Technology, vol. 46, pp. 243–253, April 2004.
[10] M. R. Garey and D. S. Johnson, Computers and Intractability. W. H. Freeman and Company, San Francisco, 1979.
[11] M. Dorigo and T. Stützle, Ant Colony Optimization. The MIT Press, London, 2004.
[12] M. Dorigo and C. Blum, “Ant colony optimization theory: a survey,” Theoretical Computer Science, vol. 344, pp. 243–278, November 2005.
[13] H. Jiang, Z. Ren, and Y. Hu, “A sampling based FANT for the 3-dimensional assignment problem,” Proceedings of the IEEE Congress on Evolutionary Computation 2008 (CEC’08), pp. 4118–4124, June 2008.
[14] B. Adenso-Diaz and M. Laguna, “Fine-tuning of algorithms using fractional experimental design and local search,” Operations Research, vol. 54, pp. 99–114, January 2006.
[15] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[16] R. E. Bixby, M. Fenelon, X. Z. Gu, and R. Rothberg, “MIP: theory and practice closing the gap,” Proceedings of the 19th IFIP TC7 Conference on System Modelling and Optimization Methods, Theory and Applications, pp. 19–49, July 2000.
[17] H. Jiang, J. Xuan, and Z. Ren, “Approximate backbone based multilevel algorithm for next release problem,” Proceedings of the Genetic and Evolutionary Computation Conference (GECCO’10), in press.
[18] M. Dorigo, “Optimization, learning and natural algorithms,” PhD thesis, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
[19] M. Dorigo, V. Maniezzo, and A. Colorni, “The Ant System: optimization by a colony of cooperating agents,” IEEE Transactions on Systems, Man, and Cybernetics-Part B, vol. 26, pp. 29–41, January 1996.
[20] B. Bullnheimer, R. F. Hartl, and C. Strauss, “A new rank-based version of the Ant System: a computational study,” Central European Journal for Operations Research and Economics, vol. 7, pp. 25–38, May 1999.
[21] T. Stützle and H. H. Hoos, “The MAX-MIN Ant System and local search for the traveling salesman problem,” Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC’97), pp. 309–314, April 1997.
[22] J. Boyan and A. Moore, “Using prediction to improve combinatorial optimization search,” Sixth International Workshop on Artificial Intelligence and Statistics, January 1997.
[23]