Genetic and Memetic Algorithm with Diversity Equilibrium based on Greedy Diversification
Andrés Herrera-Poyatos and Francisco Herrera
Research group "Soft Computing and Intelligent Information Systems", Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain.
E-mail: [email protected], [email protected]

Abstract.
The lack of diversity in a genetic algorithm's population may lead to a bad performance of the genetic operators, since there is no equilibrium between exploration and exploitation. In those cases, genetic algorithms present a fast and unsuitable convergence.

In this paper we develop a novel hybrid genetic algorithm which attempts to obtain a balance between exploration and exploitation. It confronts the diversity problem using the named greedy diversification operator. Furthermore, the proposed algorithm applies a competition between parents and children so as to exploit the high quality visited solutions. These operators are complemented by a simple selection mechanism designed to preserve and take advantage of the population diversity.

Additionally, we extend our proposal to the field of memetic algorithms, obtaining an improved model with outstanding results in practice.

The experimental study shows the validity of the approach, as well as how important it is to take into account the exploration and exploitation concepts when designing an evolutionary algorithm.
Keywords: genetic algorithms, memetic algorithms, exploration vs exploitation, population diversity, hybridization.

1. Introduction
Optimization problems are a relevant topic of artificial intelligence. In order to solve these problems, computer scientists have found inspiration in nature, developing bio-inspired algorithms [5], [30] and, in particular, evolutionary algorithms [25].

Genetic algorithms [7] are one of the most famous evolutionary algorithms. They are founded on the concepts of evolution and genetics. A solution to an optimization problem is viewed as a chromosome. Genetic algorithms maintain a population of chromosomes which evolves thanks to the selection, crossover and mutation operators. The evolution process ends when a predefined criterion is achieved.

The equilibrium between exploration and exploitation is the key to success when designing an evolutionary algorithm. M. Crepinsek et al. [2] define exploration as "the process of visiting entirely new regions of the search space", whereas exploitation is "the process of visiting those regions of the search space within the neighborhood of previously visited points". If a heuristic is mainly focused on exploration, then it may not find the high quality neighbors of the promising visited solutions. Conversely, if a heuristic is mainly focused on exploitation, then it may not explore the regions of the search space which lead to most of the high quality solutions for the problem. Hence, our purpose is to develop a genetic algorithm which intercalates the exploration and exploitation phases as needed, focusing the attention on the population diversity.

The population diversity is one of the cornerstones of a genetic algorithm's performance. Note that a genetic algorithm's population converges if, and only if, the population diversity converges to zero. If this happens, then the heuristic has entered a never-ending exploitation phase. We say that it has converged to a local optimum due to the lack of capability for increasing the population diversity.
Hence, the diversity problem (maintaining a healthy population diversity) is closely related to achieving a proper equilibrium between exploration and exploitation. There are various proposals in the specialized literature which address this problem [6].

In this proposal we tackle the diversity problem by formulating a diversification operator which introduces diversity into the population when it is needed. The inserted new chromosomes are generated by a randomized greedy algorithm. Afterwards, we use this operator to design a hybrid genetic algorithm, which is shown to maintain a stable population diversity. The hybridization between greedy randomized and genetic algorithms produces great results because the greedy chromosomes allow the heuristic to explore the promising regions of the search space. Hybridization of evolutionary algorithms with other heuristics is a common practice which helps to improve the evolutionary algorithms' performance [18], [21]. Furthermore, the proposed genetic algorithm uses a competition between parents and children, similar to the one used by differential evolution [26], so as to exploit the high quality visited solutions. These operators are complemented by a simple selection mechanism, which we call randomized adjacent selection, designed to preserve and take advantage of the population diversity. We refer to the proposed algorithm as genetic algorithm with diversity equilibrium based on greedy diversification (GADEGD).

In order to obtain an improved model, we also extend the previous algorithm to the field of memetic algorithms [12]. The new algorithm is called memetic algorithm with diversity equilibrium based on greedy diversification (MADEGD).

We have developed an experimental study for each of both models using the traveling salesman problem [16], [9] as the case study.
In GADEGD's study we analyze its parameters and we match it against other state of the art genetic algorithms (CHC [3] and Micro-GA [13]) in terms of the solutions' quality, the convergence to optimal solutions and the population diversity. Furthermore, we show how GADEGD's components contribute to its performance. In MADEGD's study we also analyze its parameters and compare it with GADEGD. Additionally, MADEGD is matched against other state of the art metaheuristics based on local search (GRASP [4] and iterated greedy [23], [10]) from a triple perspective: the solutions' quality, the population diversity and the number of calls to the local search.

The remainder of this article is organized as follows. In Section 2, we shortly introduce genetic and memetic algorithms. In Section 3, we study the diversity problem in genetic algorithms and we also present the greedy diversification operator, the other GADEGD components and the corresponding experimental analysis. In Section 4, we formulate MADEGD and show the associated experimental results. In Section 5, we point out the obtained conclusions.

2. Genetic and memetic algorithms
In this section we briefly introduce genetic and memetic algorithms (Sections 2.1 and 2.2 respectively) and provide the pseudo-codes which are used in the experimental analysis. Lastly, we focus on the application of these algorithms to the traveling salesman problem (Section 2.3), which is employed as the case study.
2.1. Genetic Algorithms.
Let f be the objective function associated to an optimization problem, f : S → R, where S is the set of all the possible solutions. The purpose is minimizing (resp. maximizing) f. Thus, a solution s is better than another if its objective value f(s) is smaller (resp. greater).

Let P_t be a finite subset of S. P_t is called the population of the genetic algorithm. We can define a genetic algorithm as a population based metaheuristic [29], [1], [27] which uses the selection, crossover and mutation operators to obtain a new population P_{t+1} from P_t. The process is repeated until a stopping criterion is achieved. Then, the best solution found, or the best solution in the last population, is returned.

A genetic algorithm with the previous definition does not guarantee that there is a chromosome in the new population as good as the previous populations' chromosomes. However, this property can be achieved by applying the elitism criterion: appending the best solution in P_t, denoted bs(P_t), to P_{t+1}. Afterwards, some models also delete the worst solution from P_{t+1}, denoted ws(P_{t+1}). Elitism has been proved to improve the genetic algorithm's results in most cases, even theoretically [22]. Consequently, genetic algorithms with elitism are a popular model among computer scientists.
Algorithm 1 BuildNewPopulation(P)
Require: A population P.
  P' ← ∅
  Select |P| pairs of chromosomes from P using binary tournament selection.
  Cross each pair with a probability p_c ∈ (0, 1].
  Add the new children and the pairs that have not been crossed to P'.
  Produce a mutation in each solution of P' with a probability p_m ∈ (0, 1).
  Elitism: ws(P') ← bs(P).
  return P'

Algorithm 1 shows how a new population is built in a usual generational genetic algorithm with elitism. The binary tournament selection [8] is a widely used selection scheme in genetic algorithms. The variables p_c and p_m are known as the crossover and mutation probability respectively. We have used fixed values of p_c and p_m in the experiments.
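As an illustration, the generation scheme of Algorithm 1 can be sketched in Python. This is a minimal sketch, not the authors' implementation: `crossover` and `mutate` are problem-dependent placeholders, `crossover` is assumed to return two children, the population size is assumed even, and the probabilities `pc = 0.7` and `pm = 0.1` are illustrative values only.

```python
import random

def binary_tournament(population, fitness):
    """Pick two random chromosomes and return the better one (minimization)."""
    a, b = random.sample(population, 2)
    return min(a, b, key=fitness)

def build_new_population(population, fitness, crossover, mutate, pc=0.7, pm=0.1):
    """One generation of a generational GA with elitism (cf. Algorithm 1).
    `crossover(p1, p2)` is assumed to return a list of two children."""
    n = len(population)
    offspring = []
    for _ in range(n // 2):
        p1 = binary_tournament(population, fitness)
        p2 = binary_tournament(population, fitness)
        if random.random() < pc:
            offspring.extend(crossover(p1, p2))
        else:
            offspring.extend([p1, p2])
    # mutate each chromosome with probability pm
    offspring = [mutate(s) if random.random() < pm else s for s in offspring]
    # elitism: the best parent replaces the worst child
    best_parent = min(population, key=fitness)
    worst_idx = max(range(len(offspring)), key=lambda i: fitness(offspring[i]))
    offspring[worst_idx] = best_parent
    return offspring
```

The elitism step guarantees that the best solution of P survives into P', matching the "ws(P') ← bs(P)" line of the pseudo-code.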
Algorithm 2 Generational genetic algorithm with elitism
Require: The population size, named n.
  Initialize P_0 with n random elements from S.
  t ← 0
  while stopping criterion is not achieved do
    P_{t+1} ← BuildNewPopulation(P_t)
    t ← t + 1
  end while
  return bs(P_t)

2.2. Memetic Algorithms.
Memetic algorithms hybridize evolutionary algorithms and local search procedures in order to obtain a model with a better exploration and exploitation. We will focus our attention on the subset of memetic algorithms in which the evolutionary scheme is carried out by a genetic algorithm.

A usual hybridization consists in applying the local search once per genetic algorithm iteration. The chromosome to which the local search is applied is the one with the best objective value among those population solutions that have not been improved by the local search yet, which is indicated by a Boolean flag. Other approaches apply the local search to each population element. However, this wastes too much time improving low quality solutions. It is better to use the computational resources improving only the promising chromosomes, as the first approach does.

Memetic algorithms with high quality local searches usually outperform genetic algorithms. One of the reasons is that the local search improves the population quality while introducing diversity at the same time. Hence, we could classify local search as an excellent mutation operator, but with a high complexity cost. Furthermore, the evolutionary character of the algorithm implies that the local search is likely applied to better solutions as time passes, obtaining a good synergy.

Algorithm 3 shows a memetic algorithm's pseudo-code. It has two differences from Algorithm 2. First, the population is initialized with a randomized greedy algorithm, explained in Algorithm 5, so as not to apply the local search to random solutions. Otherwise, too much time would be consumed by the local search at the beginning of the algorithm. Secondly, the local search is applied once per iteration, as discussed before.
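The once-per-iteration local search policy described above can be sketched as follows. This is a hedged sketch under assumptions: minimization, an `improved` list of Boolean flags marking the chromosomes already refined, and `local_search` as a placeholder for, e.g., a Lin-Kernighan step.

```python
def apply_local_search_step(population, fitness, local_search, improved):
    """Apply the local search once per iteration: to the best chromosome
    whose `improved` flag is still False. `improved[i]` says whether
    population[i] has already been refined by the local search."""
    candidates = [i for i in range(len(population)) if not improved[i]]
    if not candidates:
        return  # every chromosome has already been improved
    # best not-yet-improved chromosome (minimization)
    best = min(candidates, key=lambda i: fitness(population[i]))
    population[best] = local_search(population[best])
    improved[best] = True
```

Calling this once after each `BuildNewPopulation` step reproduces the scheme of Algorithm 3: the local search effort concentrates on the most promising, still unrefined chromosome.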
Algorithm 3 Memetic algorithm
Require: The population size, named n.
  Initialize P_0 with n solutions obtained by a greedy randomized algorithm.
  t ← 0
  while stopping criterion is not achieved do
    P_{t+1} ← BuildNewPopulation(P_t)
    Apply the local search to the best solution of P_{t+1} not previously improved (if it exists).
    t ← t + 1
  end while
  return bs(P_t)

2.3. Application to the traveling salesman problem.
We have used the traveling salesman problem as the case study for our proposal. Given a complete and weighted graph, this problem consists in obtaining the Hamiltonian cycle which minimizes the sum of its edges' weights. This sum is named the solution cost. Therefore, it is a minimization problem and the objective function provides the cost of each solution.

We have chosen the traveling salesman problem because it is a classical NP-hard problem which has been extensively employed to study heuristics in the specialized literature [28]. Researchers have developed a huge amount of genetic operators for the traveling salesman problem [14]. We use the well known OX crossover and exchange mutation, which have shown a good performance in experimental studies.

One of the best heuristics for the traveling salesman problem is a local search named Lin-Kernighan [17]. We have chosen a modern version [11] as the local search for the experimental study.
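For reference, the OX crossover and the exchange mutation on permutation-encoded tours can be sketched as follows. This is a standard textbook formulation; the authors' exact implementation details may differ.

```python
import random

def ox_crossover(p1, p2):
    """Order crossover (OX): keep a random slice of p1 and fill the
    remaining positions with p2's cities in order, starting after the cut."""
    m = len(p1)
    i, j = sorted(random.sample(range(m), 2))
    child = [None] * m
    child[i:j + 1] = p1[i:j + 1]                  # inherited slice from parent 1
    used = set(p1[i:j + 1])
    # parent 2's cities, in parent 2's order, starting after the second cut
    remaining = [c for c in p2[j + 1:] + p2[:j + 1] if c not in used]
    positions = [k % m for k in range(j + 1, j + 1 + m) if child[k % m] is None]
    for pos, city in zip(positions, remaining):
        child[pos] = city
    return child

def exchange_mutation(tour):
    """Exchange mutation: swap two randomly chosen cities."""
    t = tour[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t
```

Both operators preserve the permutation property, so every produced chromosome is still a valid tour.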
3. GADEGD: Genetic algorithm with diversity equilibrium based on greedy diversification
In this section we propose a novel genetic algorithm with the aim of obtaining a good balance between exploration and exploitation. First, we introduce a measure of the population diversity and we show the diversity problem in genetic algorithms. Secondly, we develop an operator to tackle the diversity problem, called the greedy diversification operator. Thirdly, we introduce the genetic algorithm with diversity equilibrium based on greedy diversification (GADEGD). Finally, we show the experimental results of the proposal from a triple perspective: solutions' quality, convergence to optimal solutions and population diversity.

3.1. Population diversity in genetic algorithms.
The diversity of a population is a measure of how different its chromosomes are. If the diversity is low, then the chromosomes are similar. On the other hand, if the diversity is high, then the chromosomes are quite different. We need a distance measure, d : S × S → R^+_0, in order to quantify the differences between two solutions. Then, we can define the diversity of the population as the mean of the distance between all pairs of chromosomes, which can be written as follows:

  D_t = ( Σ_{s, s' ∈ P_t, s ≠ s'} d(s, s') ) / ( n (n − 1) )

Figure 1 shows the evolution of the diversity in a genetic algorithm's population (Algorithm 2) for the instance berlin52, which consists of 52 cities and can be found in TSPLIB [20]. Each figure's point corresponds to the average population diversity in the last 0.01 seconds. The diversity starts near the maximum possible value since the initial chromosomes are randomly chosen. Afterwards, the diversity quickly decreases because the algorithm focuses the search in a specific region of the search space. However, the diversity diminution is excessive, eventually converging to a number close to zero. This fact indicates that the algorithm has converged to a local optimum, not being able to reach better solutions. Consequently, if the local optimum is not good enough, then the algorithm results will be disappointing. We aim to avoid this fast and unsuitable convergence so as to improve the algorithm's performance.
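The measure D_t can be computed directly from its definition, as sketched below. Here `distance` is a problem-dependent measure d; for the TSP a common choice counts the edges two tours do not share, although the paper does not fix one at this point.

```python
from itertools import combinations

def population_diversity(population, distance):
    """Mean distance over all pairs of chromosomes:
    D_t = sum over s != s' of d(s, s'), divided by n(n - 1)."""
    n = len(population)
    # sum over unordered pairs; the factor 2 accounts for ordered pairs
    total = sum(distance(s, sp) for s, sp in combinations(population, 2))
    return 2 * total / (n * (n - 1))
```

A population of identical chromosomes yields D_t = 0, which is exactly the degenerate convergence the paper aims to avoid.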
Figure 1.
Diversity in a genetic algorithm’s population (Algorithm 2).
In genetic algorithms, the population diversity is maintained by the mutation operator. The diversity depends on the value p_m, which was defined as the probability of mutating a chromosome in an iteration. If p_m is equal to zero, then the diversity will tend to zero after a few iterations. If p_m is increased, then the diversity will converge to a higher value. Nonetheless, the mutation operator introduces diversity at the cost of deteriorating, most of the time, the quality of the solutions to which it is applied. Hence, low values are assigned to the mutation probability in the specific literature, and our choice of p_m follows this practice.

3.2. Greedy diversification operator.
Population diversity is a double-edged sword. It is needed to explore the solution space, but it can imply not finishing the exploration process. If that is the case, then not enough time is dedicated to the exploitation phase, which is essential to get higher quality solutions. Therefore, a diversification operator that only introduces diversity when it is necessary is desired. This operator would be applied to every new population, as shown in Algorithm 4.
Algorithm 4 Genetic algorithm with diversification
Require: The population size, named n.
  Initialize P_0 with n random elements from S.
  t ← 0
  while stopping criterion is not achieved do
    P_{t+1} ← BuildNewPopulation(P_t)
    P_{t+1} ← Diversification(P_{t+1})
    t ← t + 1
  end while
  return bs(P_t)

The diversification operator should delete the population's repeated chromosomes, because they waste the population's slots and reduce the diversity. Furthermore, the chromosomes that are left in the population should have a good objective value and be potentially good for the crossover operator. The diversification operator ought also to have a low computational cost, since the optimization is done by the evolutionary scheme. We propose using a greedy randomized algorithm to obtain chromosomes satisfying these conditions.

Greedy randomized algorithms provide acceptable chromosomes from the objective value perspective that also contain high quality genetic material thanks to the greedy selection function. The randomized aspect of the algorithm supplies the diversity required in the generated solutions. There are some conditions to implement a greedy randomized algorithm for an optimization problem. First, the solution must be represented as a set or list of elements. Secondly, a greedy function is needed which provides the quality of an element according to those that have already been added to the solution. The building process is iterative. In each step a new element is added to the solution until it is fully completed. In order to add a new element, a restricted candidate list (RCL) must be determined. Afterwards, an element randomly chosen from the RCL is added to the solution. This process is presented in Algorithm 5.

The RCL contains the best elements conforming to the greedy function. The list's size can be constant or variable, in which case it depends on the elements' quality. The variable size RCL contains the elements whose greedy value is less than (1 + σ) times the best element's value, where σ is a fixed real value greater than zero.
This model obtains better solutions because it controls the quality of the elements added to the list. It also keeps the diversity in the generated solutions, since the RCL can be very large when multiple elements are good enough. In our experiments we use a fixed value of σ.

Algorithm 5 Greedy Randomized Algorithm
  solution ← ∅
  while solution is not finished do
    Build the RCL.
    x ← randomElement(RCL)
    solution ← solution ∪ {x}
    Adapt the greedy function to the new partial solution.
  end while
  return solution

In order to define the greedy diversification operator in general terms, we consider a characteristic of the solutions; let C be the set of all its possible values. The function g : S → C provides, given a solution s, the value g(s) ∈ C which the solution possesses. For instance, a characteristic could be the solution's objective value, or whether the solution contains a concrete element or not. It could even be the solution itself.
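The greedy randomized construction of Algorithm 5 can be sketched for a Euclidean TSP instance as follows. The nearest-neighbor greedy function and the value σ = 0.1 are illustrative assumptions for this sketch, not the paper's exact settings.

```python
import math
import random

def greedy_randomized_tour(coords, sigma=0.1):
    """Greedy randomized tour construction (cf. Algorithm 5) with a
    variable-size RCL: the candidates whose distance to the current city
    is at most (1 + sigma) times the best candidate's distance.
    `coords` is a list of (x, y) city coordinates."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unvisited = list(range(1, len(coords)))
    tour = [0]                                    # start from an arbitrary city
    while unvisited:
        current = coords[tour[-1]]
        best = min(dist(current, coords[c]) for c in unvisited)
        rcl = [c for c in unvisited
               if dist(current, coords[c]) <= (1 + sigma) * best]
        nxt = random.choice(rcl)                  # randomized greedy choice
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Repeated calls yield different but consistently short tours, which is exactly the mix of quality and diversity the diversification operator relies on.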
Algorithm 6 Greedy diversification operator
Require: The genetic algorithm population, named P, and the characteristic function g : S → C.
  P' ← ∅
  Sort P by the objective function (the better solutions are placed first).
  k ← 0
  for s in P do
    if there exists s' in P' such that g(s) = g(s') then
      k ← k + 1
    else
      P' ← P' ∪ {s}
    end if
  end for
  for i = 1 to k do
    P' ← P' ∪ GreedyRandomizedAlgorithm()
  end for
  return P'

Algorithm 6 uses this terminology to show a general definition of the greedy diversification operator. This operator removes the population's worst solutions that share the characteristic's value with other ones. Then, it fills the new population with greedy randomized solutions. The efficiency in the worst case is Θ(|P| (log |P| + φ + µ)), where φ and µ are the complexities of applying g to a solution and of obtaining a greedy randomized solution, respectively.

The choice of g affects the amount of diversity introduced and the operator's complexity. A first approach is using the identity function (Id : S → S with Id(s) = s) as g. In this case the algorithm just substitutes the repeated solutions in the population. Algorithm 7 provides an efficient implementation for this approach. In the case of the traveling salesman problem, we can implement the identity function and the greedy randomized algorithm with efficiencies θ(m) and θ(m), respectively, where m is the number of nodes in the instance. Consequently, the efficiency in the worst case is Θ(|P| (log |P| + m)). However, the experimental analysis in Section 3.4 shows that Algorithm 7's complexity in practice is O(|P| log |P| + m), since two solutions usually have different objective values and few repeated solutions are found after a genetic algorithm's iteration.
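The general operator of Algorithm 6 can be sketched as below. Assumptions of this sketch: minimization, `g` returns hashable values, and `greedy_randomized` is a callable standing in for Algorithm 5.

```python
def greedy_diversification(population, fitness, g, greedy_randomized):
    """Greedy diversification (cf. Algorithm 6): keep, for each value of
    the characteristic function g, only the best chromosome; refill the
    freed slots with greedy randomized solutions."""
    kept, seen = [], set()
    for s in sorted(population, key=fitness):     # better solutions first
        key = g(s)
        if key in seen:
            continue                              # duplicated characteristic: drop it
        seen.add(key)
        kept.append(s)
    while len(kept) < len(population):            # refill with greedy chromosomes
        kept.append(greedy_randomized())
    return kept
```

With `g` as the identity the operator only replaces exact duplicates, while with `g = fitness` it also removes distinct chromosomes that share an objective value, matching the two approaches discussed in the text.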
Algorithm 7 Greedy diversification operator with g = Id
Require: The genetic algorithm population, named P.
  P' ← {bs(P)}
  Sort P by the objective function (the better solutions are placed first).
  for i = 1 to n − 1 do
    if f(P[i − 1]) = f(P[i]) and P[i − 1] = P[i] then
      P' ← P' ∪ GreedyRandomizedAlgorithm()
    else
      P' ← P' ∪ {P[i]}
    end if
  end for
  return P'

A second approach is using the objective function as g. In this case more diversity is introduced, but some interesting solutions might be lost. The implementation is the same as the one given in Algorithm 7, but without comparing the two solutions in line 4. The practical complexity remains the same too. Both approaches' results are contrasted in Section 3.4.1.

3.3. Genetic algorithm with diversity equilibrium based on greedy diversification.
Algorithm 4 with the greedy diversification operator given in Algorithm 7 presents a much better performance than Algorithm 2, as we show in Section 3.4. However, the synergy among the genetic and diversification operators can be improved. Therefore, we propose a novel genetic algorithm with the following characteristics:

(1) A novel selection mechanism which does not apply pressure and helps to preserve the diversity in the new population. We call it randomized adjacent selection.
(2) The crossover probability is equal to 1.
(3) A competition between parents and children to increase the pressure applied to the population.
(4) The greedy diversification operator is used instead of the mutation operator.

The new algorithm is named genetic algorithm with diversity equilibrium based on greedy diversification, since it gets a healthy diversity thanks to the greedy diversification operator, and it is referred to as GADEGD. The mentioned algorithm's components are explained in the rest of the section.

Selection schemes in genetic algorithms usually ignore the population's worst solutions. Some examples are the tournament or ranking selection [8], which select the worst solutions with a very low probability. If we use these mechanisms, then the greedy solutions introduced by the diversification operator will eventually not be selected. Furthermore, we want every chromosome to be crossed in order to take advantage of the population diversity. As a consequence, we propose randomly sorting the population and crossing the adjacent solutions, considering the first and last solutions also as contiguous. Each pair of adjacent solutions is crossed with probability 1, generating only one child. We call it randomized adjacent selection.
Note that this scheme assures that each solution has exactly two children. Consequently, all the genetic material is used to build the new population, which preserves the diversity.

The randomized adjacent selection conserves the diversity but does not apply any pressure to the population. The competition between parents and children is the mechanism chosen for that purpose. We propose a process similar to the one used by differential evolution algorithms: each child only competes with its left parent, and the best of both solutions is added to P_{t+1}. Consequently, the population P_{t+1} contains a descendant of each solution of P_t or the solution itself. This implies that if the population P_t is diverse, then the population P_{t+1} will likely be diverse too. Furthermore, the population P_{t+1} is always at least as good as P_t in terms of the objective function. The competition between parents and children can be considered a strong elitism that, in our case, preserves the diversity thanks to the randomized adjacent selection.
Algorithm 8 BuildNewPopulationGADEGD(P)
Require: A population P.
  P' ← ∅
  Sort P randomly.
  for i = 0 to n − 1 do
    parent_1 ← P(i)
    parent_2 ← P((i + 1) mod n)
    child ← Crossover(parent_1, parent_2)
    if f(child) is better than f(parent_1) then
      P' ← P' ∪ {child}
    else
      P' ← P' ∪ {parent_1}
    end if
  end for
  return P'

Algorithm 8 shows how a new population is built in GADEGD. Note that the code is very simple, which is an advantage versus more complicated models.

In genetic algorithms, the mutation operator introduces diversity and allows the algorithm to explore the neighborhood of the population's solutions. However, GADEGD does not need it any more, since it is able to keep the population diversity by itself. Consequently, the mutation operator would just decrease the solutions' quality and should not be used. Algorithm 9 contains the pseudo-code of GADEGD.
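Algorithm 8's randomized adjacent selection and parent-children competition fit in a few lines, as in this sketch. Minimization is assumed and `crossover` is a placeholder returning a single child.

```python
import random

def build_new_population_gadegd(population, fitness, crossover):
    """Randomized adjacent selection with parent-children competition
    (cf. Algorithm 8). `crossover(p1, p2)` returns one child."""
    shuffled = population[:]
    random.shuffle(shuffled)                      # random ordering of the population
    n = len(shuffled)
    new_population = []
    for i in range(n):
        parent1 = shuffled[i]
        parent2 = shuffled[(i + 1) % n]           # adjacent chromosomes are crossed
        child = crossover(parent1, parent2)
        # the child replaces its left parent only if it is strictly better
        if fitness(child) < fitness(parent1):
            new_population.append(child)
        else:
            new_population.append(parent1)
    return new_population
```

Because every position of the new population holds either a chromosome of P or a better descendant of it, the population never worsens and its diversity is largely preserved.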
Algorithm 9 Genetic algorithm with diversity equilibrium based on greedy diversification
Require: The population size, named n, and the characteristic function g : S → C.
  Initialize P_0 with n random elements from S.
  t ← 0
  while stopping criterion is not achieved do
    P_{t+1} ← BuildNewPopulationGADEGD(P_t)
    P_{t+1} ← GreedyDiversification(P_{t+1}, g)
    t ← t + 1
  end while
  return bs(P_t)

Figure 2 shows how the population diversity evolves for GADEGD and the implemented genetic algorithm (Algorithms 9 and 2 respectively) in the instance berlin52. Here GADEGD has been executed with g = Id. Note that GADEGD is designed to maintain a diverse population, and so it does. The initial diversity decreases quickly in both algorithms. Afterwards, GADEGD keeps the diversity at a high and stabilized value. Its components allow the algorithm to work with good solutions in multiple zones of the search space. Besides, if the population diversity decreases, then the greedy diversification introduces new chromosomes.
Figure 2.
Diversity: GADEGD vs generational genetic algorithm (Algorithm 2).

3.4. Experimental analysis.
The experiments were done on a computer with 8 GB of RAM and an Intel i5 processor at 2.5 GHz. The 18 instances of the traveling salesman problem can be found in the TSPLIB library. Each result is computed as the average of 30 executions. The experimental analysis contains 3 subsections. First, we provide a study of GADEGD's parameters: the population size and the characteristic function. In the second subsection the algorithm is compared against other state of the art algorithms from a triple perspective: the solutions' quality, the convergence to the instances' optima and the population diversity. Lastly, we analyze how much GADEGD's components contribute to its performance.

3.4.1. GADEGD's parameters analysis.
The population size has a huge impact on a genetic algorithm's behavior. On the one hand, a greater population size contributes to the exploration of the solution space, avoiding a fast and unsuitable convergence. However, a large population needs much more computational time to exploit the most promising solutions. On the other hand, a smaller population size implies a higher exploitation and an earlier convergence. The optimal population size depends on the execution time and the algorithm's ability to maintain a diverse population. If this optimal value is very large, then the algorithm probably has difficulties to explore the solution space and keep the population diversity. If this is the case, then the algorithm is probably improvable.

Genetic algorithms are usually assigned a population size between 30 and 100 in the literature, although this value tends to grow with the improvements in hardware. There are also models which work with small populations [13]. In our case, we want the algorithm to have a medium sized population because we try to achieve an equilibrium between exploration and exploitation.
Table 1 compares the population sizes 32 and 64 in terms of the mean and standard deviation of the obtained solutions' objective values. In these experiments GADEGD's characteristic function is g = Id and the execution time is proportional to the instance's number of nodes m. The experiments show that 64 is a better population size than 32, obtaining the best results most of the time. We have also executed the algorithm with smaller and larger population sizes, and they had a significantly worse performance. Consequently, we use 64 as the standard population size for the GADEGD algorithm.
Table 1. GADEGD with g = Id and population sizes 32 and 64. The execution time is proportional to the instance size. The best results are highlighted in bold. The last row indicates the number of times that each model got the best result in any instance.

Problem   Optimum   Mean objective value (n = 32, n = 64)   Standard deviation (n = 32, n = 64)

The characteristic function is the other GADEGD parameter to analyze. We have considered the functions g = Id and g = f explained in Section 3.2. More complex models did not obtain better results in practice. Table 2 compares both functions' performance. The model g = Id reaches better solutions in most instances. The function g = f introduces too much diversity, and it might substitute non-repeated chromosomes with unique characteristics. Hence, the model g = Id is the one chosen for the rest of the study.

Table 2 also shows the percentage of explored solutions which are generated in the greedy diversification. This value is usually between 2 and 10%. On average, this means that the algorithm introduces between 1 and 7 greedy solutions per iteration for both characteristic functions. Consequently, we can consider the practical complexity of these greedy diversification algorithms to be O(|P| log |P| + m), as mentioned in Section 3.2.
Table 2. GADEGD with population size 64 and the characteristic functions g = Id and g = f. The execution time is proportional to the instance size. The best results are highlighted in bold. The last row indicates the number of times that each model got the best result in any instance.

Problem   Optimum   Mean objective value (g = Id, g = f)   Percent of explored solutions generated in the greedy diversification (g = Id, g = f)

3.4.2. Comparison with other genetic algorithms which use diversity mechanisms instead of mutations.
In this section we compare GADEGD with the genetic algorithm given in Algorithm 2 and other recognized models which do not use the mutation operator: CHC [3] and Micro-GA [13]. We study the quality of the obtained solutions, the convergence to the problems' optima and the population diversity in order to illustrate GADEGD's performance.

CHC was the first genetic algorithm to apply a competition between parents and children. CHC has already been applied to traveling salesman problem variations [24]. Our implementation has the following characteristics:
• Population size = 60.
• Random selection with an incest prevention mechanism that avoids crossing similar solutions.
• Competition between parents and children: the population P_{t+1} contains the best chromosomes among parents and children.
• Reinitialization of the population when it converges (detected by the incest prevention mechanism): the best chromosome is kept and the other ones are replaced by random solutions.

The Micro-GA was proposed as a genetic algorithm with a small population and fast convergence. It was the first genetic algorithm to use a reinitialization of the population when it converges. It has the following characteristics:
• Population size = 5.
• The best solution in P_t is added to P_{t+1}.
• Two pairs of parents are selected by a variation of the tournament selection.
• Both pairs are crossed, generating two children per pair that are added to P_{t+1}.
• Reinitialization of the population when it converges (all the solutions have the same objective value): the best chromosome is kept and the other ones are replaced by random solutions.
Table 3.
CHC and Micro-GA compared against the same models with greedy reinitialization. The execution time is 0 . m seconds. The best results are highlighted in bold. The last row indicates the number of times that each model got a better result than the same algorithm with a different reinitialization.

Problem    Optimum   CHC, classical model (mean objective value)
eil51      426       496.8
berlin52   7542      8041.8
st70       675       889
eil76      538       665.2
pr76       108159    114084
kroA100    21282     24010.4
rd100      7910      9496.23
eil101     629       827.7
lin105     14379     19445.4
ch150      6528      9311.93
rat195     2323      3515.8
d198       15780     21395.6
ts225      126643    214322
a280       2579      5109.53
lin318     42029     82239.7
fl417      11861     32020.3
pcb442     50778     117600
rat575     6773      18170.7
[columns for CHC and Micro-GA with greedy reinitialization not recoverable]

Both algorithms set the crossover probability to 1 and do not use the mutation operator. In this sense, they are similar to our proposal. However, they use a reinitialization of the population, in contrast to GADEGD's greedy diversification.
Table 3 shows the results obtained by these algorithms. They are good in instances with few nodes. However, on harder instances they do not perform well: random solutions are not good enough as a reinitialization mechanism. Consequently, we propose a greedy reinitialization for CHC and Micro-GA, replacing the population by greedy solutions obtained from Algorithm 5 instead of random chromosomes. The results are also presented in Table 3. As we expected, the new models with the greedy reinitialization outperform the older ones in every instance. This fact shows that genetic algorithms hybridize fairly well with greedy algorithms: there is a great synergy between the greedy chromosomes and the crossover operator, as we mentioned in Section 3.2.
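The greedy solutions used in the reinitialization come from a randomized nearest-neighbor constructor following the philosophy of Section 3.2. The sketch below is an assumed general shape of such a constructor (the paper's Algorithm 5 may differ in details); the parameter `k` controls how much randomness is injected into the greedy choice:

```python
import random

def randomized_nearest_neighbor(dist, k=3, rng=random):
    """Build a tour greedily: from the current city, move to one of the
    k nearest unvisited cities chosen at random.

    `dist` is a symmetric matrix of pairwise distances. With k = 1 this is
    the classic deterministic nearest-neighbor heuristic.
    """
    n = len(dist)
    current = rng.randrange(n)
    tour, unvisited = [current], set(range(n)) - {current}
    while unvisited:
        # restrict the choice to the k nearest unvisited cities
        candidates = sorted(unvisited, key=lambda c: dist[current][c])[:k]
        current = rng.choice(candidates)
        tour.append(current)
        unvisited.remove(current)
    return tour
```

Because each run starts from a random city and breaks ties randomly among near neighbors, repeated calls yield distinct yet high-quality tours, which is exactly what a reinitialization (or diversification) step needs.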
Table 4.
Comparison among GADEGD, GA, CHC and Micro-GA with greedy reinitialization in terms of the solutions' quality. The execution time is 0 . m seconds. The best results are highlighted in bold and the worst are underlined. The last row indicates the number of times that each model got the best and worst result in any instance.

Problem   Optimum   GADEGD   GA   CHC with G.R.   Micro-GA with G.R.
eil51     426
[remaining table entries not recoverable]
GADEGD constantly keeps a high-quality and diverse population, which cannot be achieved by the (greedy) reinitialization used in CHC and Micro-GA. In those algorithms the diversity and the quality of the solutions are not stable and they generally vary inversely until the population is reinitialized. When the population is reinitialized, the algorithms expend a great deal of effort to build a high-quality population again and, consequently, computation time is wasted.
Note the poor results offered by the generational genetic algorithm, which are due to its low diversity and fast convergence. It is known that this model cannot reach the performance of CHC and Micro-GA with the classic reinitialization, see Table 3. Consequently, the performance differences compared with those algorithms with greedy reinitialization are huge.
GADEGD not only obtains high-quality solutions but is also able to reach the problems' optimal solutions. We have developed Table 5 in order to study how difficult it is for the algorithms to converge to the instances' optima. Each entry contains the number of times that the corresponding algorithm reached an optimal solution and the average time needed to do so. The results are taken from 30 executions per algorithm and instance, each of which lasts at most 20 seconds. GADEGD presents the fastest convergence. It also reaches the optima more often than the other algorithms. The greedy diversification contributes to this convergence since it introduces new greedy chromosomes progressively, allowing the population's solutions to find the genetic material which they need to generate better descendants. Table 5.
Convergence to the optimal solutions.
Problem    GADEGD      Classical GA   CHC with G.R.   Micro-GA with G.R.
berlin52   24 / 0.35   Not reached    23 / 0.67       (not recoverable)
kroA100    11 / 3.95   Not reached    8 / 12.576      6 / 10.77
rd100      30 / 3.81   Not reached    25 / 5.26       6 / 7.35

Figure 3 shows how the algorithms' best solution evolves in the instance d198.

Figure 3.
Convergence: GADEGD vs CHC with G.R. vs Micro-GA with G.R.

Figure 4 shows how the population diversity evolves for the four algorithms studied: GADEGD, a generational GA and both CHC and Micro-GA with greedy reinitialization. The data corresponds to the execution given in Figure 3. Each value is computed as the mean of the diversity in an interval of time. As we showed before, the generational genetic algorithm cannot maintain a suitable diversity. On the other hand, GADEGD, CHC and Micro-GA present similar diversity on average thanks to the diversification and reinitialization operators. Note that the reinitialization procedure makes radical changes in the population and, as a consequence, the real diversity (not averaged) varies from zero to high values throughout the CHC and Micro-GA executions.
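Plotting diversity curves like these requires a concrete distance between tours. A common choice, sketched below under the assumption that diversity is measured on the tours' edge sets (the paper defines its measure in an earlier section), is the fraction of edges two tours do not share, averaged over all pairs in the population:

```python
from itertools import combinations

def tour_edges(tour):
    """Undirected edge set of a cyclic tour."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def population_diversity(population):
    """Mean pairwise edge distance: fraction of edges two tours do not
    share, averaged over all pairs. 0 means all tours are identical;
    values near 1 mean a highly diverse population."""
    n = len(population[0])
    pairs = list(combinations(population, 2))
    total = sum(len(tour_edges(a) - tour_edges(b)) / n for a, b in pairs)
    return total / len(pairs)
```

A fully converged population yields 0 under this measure, matching the observation above that convergence of the population is equivalent to the diversity converging to zero.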
Figure 4.
Diversity: GADEGD vs GA (Algorithm 2) vs CHC with G.R. vs Micro-GA with G.R.
3.4.3. GADEGD's components analysis.
One could wonder if GADEGD would perform equally well introducing random solutions in the diversification operator instead of greedy ones, which would increase the diversity even more. However, this model does not achieve the same results in practice. As we pointed out in Section 3.2, greedy solutions contain high-quality genetic material that is transferred to their children and, after a few generations, spread to the whole population. The good performance of the hybridization between greedy and genetic algorithms is corroborated in Section 3.4.2, where we compared a reinitialization with greedy solutions against a randomized reinitialization for CHC and Micro-GA.
Another important question is how the greedy diversification actually influences the algorithm's performance. Table 2 showed that this mechanism generates between 2 and 5 percent of the solutions for g = Id, which is a considerable amount of solutions. We introduce Table 6 in order to check whether these solutions are important for the algorithm's results. Table 6.
Comparison among GADEGD, the same model without the greedy diversification, the genetic algorithm given in Algorithm 2 and this model with greedy diversification. The execution time is 0 . m seconds. The best results are highlighted in bold and the worst are underlined. The last row indicates the number of times that each model got the best and worst result in any instance.

Problem   Optimum   Mean objective value
GADEGD   GADEGD without G.D.   GA   GA with G.D.
eil51 (426), st70 (675): [table entries not recoverable]

The last column corresponds to applying the greedy diversification to the generational genetic algorithm given in Algorithm 2. The results prove that the greedy diversification has a strongly positive impact on genetic algorithms' performance. However, as we indicated in Section 3.2, the synergy among the components of this model was improvable in theory. The results also show that this synergy was increased in the GADEGD algorithm, which obtains the best results in 17 out of 18 instances.
The competition between parent and children plays a crucial role in using the diversity efficiently since it allows the algorithm to select and exploit the most promising region of the solution space. If a usual elitism is used instead of the competition scheme in GADEGD, then the diversity is not properly controlled and the algorithm's results are not good enough, as shown in Table 7. This table also includes the results obtained from a GADEGD version in which the binary tournament selection replaces the randomized adjacent selection. In this case the pressure applied to the population is excessive and the population diversity is partially lost, as we explained in Section 3.3. Consequently, it cannot reach the performance of GADEGD. Table 7.
Comparison with other pressure and selection mechanisms. The execution time is 0 . m seconds. The best results are highlighted in bold and the worst are underlined. The last row indicates the number of times that each model got the best and worst result in any instance.

Problem   Mean objective value
GADEGD   GADEGD without competition between parent and children and with elitism   GADEGD with tournament selection
eil51: [table entries not recoverable]
In summary, each of GADEGD's components is relevant for the algorithm's performance. The cooperation among all the introduced components allows the algorithm to achieve a healthy diversity and an equilibrium between exploration and exploitation.

4. Memetic algorithm with diversity equilibrium based on greedy diversification
In this section we extend GADEGD to the field of memetic algorithms. First, we discuss how to define this new metaheuristic, called MADEGD. Secondly, we develop an experimental study in which MADEGD's behaviour is analysed and compared with other state-of-the-art heuristics based on local search.

4.1. Memetic algorithm with diversity equilibrium based on greedy diversification.
MADEGD is obtained when GADEGD is hybridized with a local search procedure, as is done in memetic algorithms. In Section 2.2 we argued that a good hybridization is to apply the local search once per iteration to the best chromosome of the population that has not been improved before. Hence, we use this scheme in MADEGD. However, we must decide whether the greedy diversification operator is applied before or after the local search. We choose to apply the greedy diversification first in order to avoid improving a repeated solution introduced by a crossover.
Another important question is how to initialize the population. If the population were randomly chosen, then the local search would be applied to very low quality solutions in the initial iterations, which consumes too much time. Therefore, we initialize the population with solutions obtained by a greedy randomized algorithm, as we did in Algorithm 3.
Lastly, GADEGD has two parameters, the characteristic function and the population size. GADEGD obtained the best results when the characteristic function was g = Id. Hence, we use this function in MADEGD. The population size is analyzed in Section 4.2.1. Algorithm 10
Memetic algorithm with diversity equilibrium based on greedy diversification
Require: The population size, named n.
  Initialize P_0 with n solutions obtained by a greedy randomized algorithm.
  t ← 0
  while stopping criteria is not achieved do
    P_{t+1} ← BuildNewPopulationGADEGD(P_t)
    P_{t+1} ← GreedyDiversification(P_{t+1}, Id)
    Apply the local search to the best solution of P_{t+1} not previously improved (if it exists).
    t ← t + 1
  end while
  return bs(P_t)

Algorithm 10 shows the pseudo-code of MADEGD. Note that if a greedy solution is added to MADEGD's population, then it will be crossed with the population's solutions (which are presumably better) until it is good enough to be improved by the local search. Consequently, the algorithm is finding potential chromosomes which lie on the path between various greedy and high-quality population solutions. This fact allows the local search to perform the best it is able to.
The application of MADEGD to the traveling salesman problem is straightforward. The greedy randomized algorithm is the same used in the greedy diversification (see Section 3.2). Furthermore, we use Lin-Kernighan as the local search procedure.

4.2. Experimental analysis.
The experimental analysis contains three subsections. First, we study how the population size affects MADEGD. Secondly, we compare it with GADEGD in order to understand how the local search changes the algorithm's behaviour, contrasting its better performance. Thirdly, MADEGD is matched against another memetic algorithm, GRASP and iterated greedy from a triple perspective: solution quality, population diversity and calls to the local search.

4.2.1. Analysis of the population size.
In Section 3.4.1 we mentioned how important the population size is for a genetic algorithm. The same arguments are valid in the field of memetic algorithms. Table 8 contains the results obtained by MADEGD with population sizes 8, 16, 32 and 64. Note that the performance is better when the population is smaller. The reason is that most of the computational time is spent in the local search. Consequently, fewer iterations of the genetic operators are applied and a higher pressure is needed, which is provided by the smaller population size.
Table 8.
MADEGD with different population sizes. The execution time is 0 . m seconds. The best results are highlighted in bold and the worst are underlined. The last row indicates the number of times that each model got the best and worst result in any instance.

Problem   Optimum   n = 8   n = 16   n = 32   n = 64
eil51     426
st70      675       675     675      676      677.767
eil76     538       538     538
kroA100   21282
eil101    629       629     629
ch150     6528
[remaining table entries not recoverable]
Note that population-based heuristics usually need a bigger population to avoid premature convergence. However, MADEGD does not necessarily need a big size thanks to the greedy diversification.

4.2.2. Comparison with the genetic algorithm with diversity equilibrium based on greedy diversification.
Table 9 presents the results obtained by MADEGD and GADEGD with population sizes 16 and 64 respectively. The mean objective value of the solutions obtained by MADEGD is drastically better, which shows how effective the local search is when combined with the genetic operators and the greedy diversification. In fact, MADEGD finds the optimal solutions in most instances.
In Section 3.4.1 it was noticed that GADEGD needs 64 as the population size to keep an equilibrium between exploration and exploitation. However, as pointed out previously, MADEGD requires a smaller population because it performs fewer iterations. This statement is corroborated in Table 9, which provides the average number of solutions computed in each instance by both algorithms. GADEGD generates between 10 and 50 times as many solutions as MADEGD. However, MADEGD's iterations are much more effective thanks to the local search.
Table 9.
MADEGD vs GADEGD with population sizes 16 and 64 respectively. The execution time is 0 . m seconds. The best results are highlighted in bold and the worst are underlined. The last row indicates the number of times that each model got the best result in any instance.

Problem   Optimum   Mean objective value (MADEGD, GADEGD)   Number of generated solutions (MADEGD, GADEGD)
eil51     426
[remaining table entries not recoverable]

4.2.3. Comparison with other local search based multi-start metaheuristics.
Local search based multi-start metaheuristics try to apply the local search to promising solutions placed in different regions of the search space. Consequently, they require an underlying procedure which supplies high-quality and diverse solutions on which the local search will be executed. Hence, local search based metaheuristics can be understood as a hybridization between local search and other heuristics.
Memetic algorithms are local search based multi-start metaheuristics in which the local search is applied to new solutions obtained by the evolutionary operators. This hybridization presents several advantages. First, the evolution scheme guarantees that the local search will be applied to better solutions as time passes. Secondly, the solutions obtained by the local search contain some information which can be used in the evolutionary algorithm's iterations, obtaining a better performance. However, if the evolutionary scheme does not pay enough attention to exploration, then the local search is applied to similar solutions over and over. Consequently, it might always find the same local optima and the computation time is wasted. Hence, the evolutionary scheme should have a good equilibrium between exploration and exploitation in order to obtain a high-performance memetic algorithm.
Other local search based multi-start metaheuristics, such as GRASP [4] and iterated greedy [23], [10], use techniques founded on randomized greedy algorithms. Greedy solutions are placed in promising regions of the solution space and, thus, the local search is highly productive on them.
Algorithm 11 describes how GRASP works: at each iteration a greedy solution is obtained by a randomized greedy algorithm and it is improved by the local search. The best solution found is returned at the end of the algorithm.
Algorithm 11
GRASP
  while stopping criteria is not achieved do
    s ← GreedyRandomizedAlgorithm()
    s′ ← LocalSearch(s)
    if s′ is the best solution found then
      best_solution ← s′
    end if
  end while
  return best_solution

GRASP does not use the information obtained in past computations; its iterations are independent and equally productive on average. Iterated greedy tries to overcome this issue by modifying a previously visited solution with a greedy technique in order to create new elements of the solution space. Algorithm 12 provides a usual implementation of iterated greedy. At each step, a destruction procedure is applied to the best solution found. The destruction procedure removes a subset of the solution's data, obtaining a partial solution. Afterwards, the partial solution is reconstructed by a randomized greedy technique and the obtained solution is improved by the local search.
Particularizing to the traveling salesman problem, the destruction procedure consists in removing a random sublist of the solution's representation. The reconstruction step is carried out by the randomized greedy algorithm based on the nearest neighbor philosophy which was introduced in Section 3.2. We have implemented GRASP and iterated greedy using Lin-Kernighan as the local search procedure.
Note that the destruction-reconstruction step of iterated greedy can be understood as a crossover between the best solution found and a greedy solution. Thus, we can see iterated greedy as a hybridization between greedy and memetic algorithms. From this perspective the
Algorithm 12
Iterated Greedy
  best_solution ← GreedyRandomizedAlgorithm()
  best_solution ← LocalSearch(best_solution)
  while stopping criteria is not achieved do
    s ← Destruction(best_solution)
    s′ ← RandomizedGreedyReconstruction(s)
    s′′ ← LocalSearch(s′)
    if s′′ is the best solution found then
      best_solution ← s′′
    end if
  end while
  return best_solution

model is improvable in terms of exploration and exploitation. The population size is 1 and, thus, it usually operates in the same region of the search space. Furthermore, the local search is applied in each iteration even if the obtained solution is not good enough. Hence, we can conclude that iterated greedy is more focused on exploitation than on exploration. Table 10.
Comparison of MADEGD, MA (Algorithm 3), GRASP and IG. The execution time is 0 . m seconds. The best results are highlighted in bold and the worst are underlined. The last row indicates the number of times that each model got the best and worst result in any instance.

Problem   Optimum   Mean objective value
Problem   Optimum   MADEGD   MA    GRASP     IG
eil51     426
eil76     538                548   551.867   550.733
pr76      108159
[remaining table entries not recoverable]

The greedy diversification and the competition between parent and children control the population's quality and, as a consequence, the local search is likely to be applied to better solutions each iteration. As we mentioned before, if a greedy solution enters the population, then it will be crossed with better chromosomes until it is good enough to be improved by the local search.
This synergy is the reason behind the results presented in Table 10, which compares the performance of MADEGD, the memetic algorithm given in Algorithm 3 (MA), GRASP and iterated greedy (IG). MADEGD clearly obtains the best result in every instance. MA also outperforms GRASP and IG thanks to its evolutionary character. Note that GADEGD's results are better than MA's in instances with fewer than 110 cities (see Table 9), in spite of MA implementing a version of Lin-Kernighan, one of the best heuristics for the travelling salesman problem, as the local search.
Table 11 provides the average number of calls to the local search in Table 10's executions. MA is the heuristic which presents the most calls to the local search. This fact is due to the population convergence: the local search is much faster because the solutions to which it is applied are near local optima. GRASP and IG are the algorithms with the fewest calls to the local search. Each iteration of those algorithms consists in applying the local search to a greedy or partially greedy solution, which is very time consuming since there is a lot of room for optimization. MADEGD mixes the best of both worlds again since it constantly explores the search space but the local search is only applied to the best possible solutions. Figure 5.
Diversity: MADEGD vs memetic algorithm (Algorithm 3).

The number of calls to the local search of GRASP and IG is equal to their number of generated solutions. However, both MA and MADEGD only apply the local search in an iteration if there is a solution not previously improved by the local search. Table 11 also shows the percent of iterations in which the local search was applied to a population's solution for both memetic algorithms. MA always applies the local search since the population is almost fully generated by the crossover operator. However, MADEGD, after a fair number of iterations, only applies the local search if a new solution has entered the population after the competition between parent and children. Nonetheless, the percent is always greater than 85%, which shows that the crossover operator is able to find better chromosomes than the
parents. This fact is essential for the algorithm's good behavior since, if the crossover were not good enough, then no solution would enter the population and the algorithm would converge to a local optimum.
Finally, Figure 5 shows how the diversity evolves in an execution of MADEGD and the memetic algorithm. It is very similar to Figure 2. The memetic algorithm converges too fast to a local optimum, which prevents a proper exploration of the search space. Table 11.
Average number of calls to the local search in the executions of Table 10. Percent of MADEGD's and the memetic algorithm (MA)'s iterations in which the local search is applied to a population's solution.
           Number of calls to the local search         Percent of iterations
Problem    MADEGD    MA        GRASP     IG            MADEGD    MA
eil51      7709.63   7478.47   3309.6    3256.13       100       100
berlin52   2759      4054.87   1863.37   1825.3        100       100
st70       6869.37   6690.73   2269.07   2247.83       100       100
eil76      7682.43   7756.63   2514.67   2479.27       100       100
pr76       3653.83   4940.3    722.633   712.6         100       100
kroA100    3096.7    4161.97   942.133   931.267       100       100
rd100      3268.77   4268.37   854.7     843.933       100       100
eil101     5543.07   5605.2    1390.47   1363.07       100       100
lin105     2085.3    3203.63   392.1     394.8         100       100
ch150      3067.7    3762.7    765.433   760           100       100
rat195     2871.6    4212.27   394.133   393.533       100       100
d198       968.467   1381.4    119.9     119.567       100       100
ts225      3284.07   4003.83   746.9     736.4         99.9918   100
a280       2729.97   3423.83   240.3     239.733       100       100
lin318     663.267   1093.1    56.4      56.0333       99.7588   100
fl417      331.533   586.8     33.0667   33.4333       100       100
pcb442     992.133   1765.13   74.1      74.9333       97.6112   100
rat575     216.3     1059.77   28.9667   27.8333       85.2672   100

5. Conclusion
In this paper we have introduced a novel genetic algorithm, GADEGD, which attempts to achieve a balance between exploration and exploitation. The algorithm's key operator is the greedy diversification, which maintains a diversity equilibrium in the population. Furthermore, the algorithm uses the randomized adjacent selection and a competition between parent and children. These operators have been selected in order to increase the components' synergy.
We have also extended the algorithm to the field of memetic algorithms, obtaining MADEGD, a more competitive metaheuristic which outperforms a generational memetic algorithm, GRASP and iterated greedy in our studies.
The greedy diversification has proved to be a relevant operator for designing population-based metaheuristics and, in particular, genetic and memetic algorithms. A heuristic which uses this operator finds it much easier to constantly keep a high-quality and diverse population, which cannot be achieved by the widely used mutation operator.
The developed work reaffirms our initial assertions: the equilibrium between exploration and exploitation and the diversity problem should be taken into account when designing genetic and memetic algorithms. Hybridization helps to solve both problems, providing exploration and exploitation mechanisms to evolutionary algorithms.
Finally, we believe that the proposed metaheuristics and operators can be fruitfully applied to high dimensional or large scale problems [15], [19], where memetic algorithms are one of the most powerful metaheuristics. These problems require a careful exploration of the search space and an effective exploitation of the best solutions found. Therefore, as future work we will extend the current results to the large scale framework.

References

[1] I. Boussaïd, J. Lepagnot, and P. Siarry. "A survey on optimization metaheuristics." In:
Information Sciences 237 (2013), pp. 82–117.
[2] M. Crepinsek, S. H. Liu, and M. Mernik. "Exploration and exploitation in evolutionary algorithms: a survey". In: ACM Computing Surveys 45 (2013), p. 35.
[3] L. J. Eshelman. "The CHC adaptive search algorithm: How to have safe search when engaging in nontraditional genetic recombination". In: Foundations of Genetic Algorithms (1991), pp. 265–283.
[4] T. A. Feo and M. G. Resende. "Greedy randomized adaptive search procedures". In: Journal of Global Optimization.
[5] Imitation of Life: How Biology Is Inspiring Computing. Cambridge: MIT Press, 2004.
[6] Brian Mc Ginley et al. "Maintaining healthy population diversity using adaptive crossover, mutation, and selection". In: IEEE Transactions on Evolutionary Computation 15 (2011), pp. 692–714.
[7] D. E. Goldberg. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, Reading, MA, 1989.
[8] D. E. Goldberg and K. Deb. "A comparative analysis of selection schemes used in genetic algorithms". In: Foundations of Genetic Algorithms (1991), pp. 69–93.
[9] Gregory Gutin and Abraham P. Punnen, eds. The Traveling Salesman Problem and Its Variations. Springer Science & Business Media, 2006.
[10] K. Karabulut and M. F. Tasgetiren. "A variable iterated greedy algorithm for the traveling salesman problem with time windows". In: Information Sciences 279 (2014), pp. 383–395.
[11] D. Karapetyan and G. Gutin. "Lin-Kernighan heuristic adaptations for the generalized traveling salesman problem". In: European Journal of Operational Research 208 (2011), pp. 221–232.
[12] N. Krasnogor and J. Smith. "A tutorial for competent memetic algorithms: model, taxonomy, and design issues". In: IEEE Transactions on Evolutionary Computation.
[13] In: Advances in Intelligent Robotics Systems Conference, International Society for Optics and Photonics (1990), pp. 289–296.
[14] P. Larrañaga et al. "Genetic algorithms for the travelling salesman problem: A review of representations and operators". In: Artificial Intelligence Review 13 (1999), pp. 129–170.
[15] Miguel Lastra, Daniel Molina, and José Manuel Benítez. "A high performance memetic algorithm for extremely high-dimensional problems". In: Information Sciences 293 (2015), pp. 35–58.
[16] E. L. Lawler et al. The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization. New York: Wiley, 1985.
[17] S. Lin and B. W. Kernighan. "An effective heuristic algorithm for the traveling-salesman problem". In: Operations Research 21 (1973), pp. 498–516.
[18] M. Lozano and C. García-Martínez. "Hybrid metaheuristics with evolutionary algorithms specializing in intensification and diversification: Overview and progress report". In: Computers & Operations Research 37 (2010), pp. 481–497.
[19] Sedigheh Mahdavi, Mohammad Ebrahim Shiri, and Shahryar Rahnamayan. "Metaheuristics in large-scale global continuous optimization: A survey". In: Information Sciences 295 (2015), pp. 407–428.
[20] G. Reinelt. "TSPLIB - A traveling salesman problem library". In: ORSA Journal on Computing.
[21] F. J. Rodriguez, C. Garcia-Martinez, and M. Lozano. "Hybrid metaheuristics based on evolutionary algorithms and simulated annealing: taxonomy, comparison, and synergy test". In: IEEE Transactions on Evolutionary Computation 16 (2012), pp. 787–800.
[22] G. Rudolph. "Convergence analysis of canonical genetic algorithms". In: IEEE Transactions on Neural Networks.
[23] In: European Journal of Operational Research 177 (2007), pp. 2033–2049.
[24] A. Simoes and E. Costa. "CHC-based algorithms for the dynamic traveling salesman problem". In: Applications of Evolutionary Computation, volume 6624 of Lecture Notes in Computer Science (2011), pp. 354–363.
[25] Moshe Sipper. "Notes on the origin of evolutionary computation." In: Complexity.
[26] In: Journal of Global Optimization 11 (1997), pp. 341–359.
[27] E. G. Talbi. Metaheuristics: From Design to Implementation. John Wiley & Sons, 2009.
[28] T. Weise et al. "Benchmarking optimization algorithms: an open source framework for the traveling salesman problem". In: IEEE Computational Intelligence Magazine.
[29] In: International Journal of Computational Intelligence Systems.