Centric selection: a way to tune the exploration/exploitation trade-off
David Simoncini, Sébastien Verel, Philippe Collard, Manuel Clergue
Abstract
In this paper, we study the exploration/exploitation trade-off in cellular genetic algorithms. We define a new selection scheme, the centric selection, which is tunable and allows controlling the selective pressure with a single parameter. The equilibrium model is used to study the influence of the centric selection on the selective pressure, and a new model which takes into account problem-dependent statistics and selective pressure in order to deal with the exploration/exploitation trade-off is proposed: the punctuated equilibria model. Performances on the quadratic assignment problem and on NK-landscapes reveal an optimal exploration/exploitation trade-off on both classes of problems. The punctuated equilibria model is used to explain these results.
The exploration/exploitation trade-off is an important issue in evolutionary computation. By tuning the selective pressure on the population, one can find an optimal (or near-optimal) trade-off between exploitation and exploration. In cellular Evolutionary Algorithms (cEAs), the population is embedded on a bidimensional toroidal grid and each solution interacts with its neighbors through a given neighborhood. The convergence rate of the algorithm then depends on the shape and size of the grid and of the neighborhood. The smallest symmetric neighborhood that can be defined is the well-known Von Neumann neighborhood of radius 1. It guarantees a slow isotropic diffusion of genetic information through the grid. But when solving complex multimodal problems, it is necessary to slow down the propagation speed of the best solution even more, because the algorithm still often converges to a local optimum. Our goal in this paper is to establish a relation between the selective pressure on the population and the effects of recombination and mutation operators, in order to find an optimal exploration/exploitation trade-off. To do so, we propose a new selection scheme able to control the selective pressure, and a theoretical model which takes into account the effects of stochastic variations on an optimization problem. In section 2 we define a selection scheme able to tune the selective pressure and present the algorithm used in the experiments. In section 3, we analyze the selective pressure with respect to the selection method and present a new model which takes into account the stochastic variations. In section 4 we present performances on Quadratic Assignment Problem instances and on NK-landscapes, and we explain the results with the proposed model.
A cellular Evolutionary Algorithm (cEA) [23] is an EA in which the population is embedded on a bidimensional toroidal grid (see figure 1). Each cell of the grid contains a solution. Embedding the solutions on a grid allows defining a neighborhood between the cells. The most commonly used one in cEAs is the Von Neumann neighborhood (shown on figure 1). At each generation, every cell on the grid is updated by selecting parents in its neighborhood and applying stochastic operators such as crossover and mutation. Several strategies exist, synchronous and asynchronous, to update the cells. The small overlapped neighborhoods guarantee the diffusion of solutions through the grid [18]. Such algorithms are especially well suited for complex problems [9] and are advantageous when dealing with dynamic problems [20].

Figure 1: Representation of a cEA and Von Neumann neighborhood in dashed line.
One of the main properties that differ between EAs and cEAs is the rate of convergence (propagation speed of the best solution): it is exponential for EAs and quadratic for cEAs. Therefore, the selective pressure on the population is weaker for a cEA than for an EA. Controlling the selective pressure is critical, since it can avoid premature convergence of the algorithm when solving complex multimodal problems. Several parameters related to the selective pressure can prevent the algorithm from getting stuck in a local optimum: the topology of the grid, the local neighborhood, and the properties of the selection operator. By correctly tuning these parameters for a given problem, one can find a good exploration/exploitation trade-off and minimize the risks of premature convergence. Sarma et al. established a link between the radius of the neighborhood and the radius of the grid: changing this ratio directly affects the selective pressure on the population [16]. Alba et al. analyzed performances of cEAs with a fixed-size neighborhood and different grid shapes. They arrived at the conclusion that thin grids are well suited for complex multimodal problems and large grids are well suited for simple problems. The main explanation is that thinner grids give lower selective pressure [3]. Takeover times and growth curve analysis are useful to measure the selective pressure on a population, but they are not sufficient to decide on a trade-off between exploration and exploitation: it is necessary to include the effects of the stochastic variations due to the operators in the analysis. Janson et al.
proposed a hierarchical cEA which allows achieving different levels of exploration/exploitation trade-off in distinct zones of the population simultaneously [8]. A standard technique to study the induced selective pressure without introducing the perturbing effect of variation operators is to let selection be the only active operator, and then monitor the number of best solution copies in the population [6]. The takeover time is the time needed for one single best solution to colonize the population with selection as the only active operator. Let λ be the size of the population, t the number of generations and N(t) the number of best solution copies at generation t. The population is initialized with one solution of good fitness and λ − 1 solutions of lower fitness; the takeover time is the first generation t such that N(t) = λ. Analyzing the growth of N(t) as a function of t also gives an indication of the selective pressure: it shows the convergence rate of the algorithm when selection is the only active operator. When the slope of the growth curve of N(t) is low, the convergence rate is low and the takeover time is high. On the other hand, a high slope of the growth curve leads to a short takeover time. In the first case, the selective pressure on the population is weaker than in the second case. Characterizing the growth curves and the takeover time is an important issue in the study of the selective pressure [6]. Many models have been proposed to describe the behaviour of structured-population evolutionary algorithms. Sarma and De Jong proposed a logistic model in which the coefficient of the growth curve of the best solution is shown to be an inverse exponential of the ratio between the radii of the neighborhood and the underlying grid [16]. This conclusion was guided by an empirical analysis of the effects of several neighborhood sizes and shapes on the convergence rate and the takeover time.
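As an illustration, the takeover experiment described above can be simulated directly: fitness is binary (1 for copies of the best solution, 0 otherwise), a binary tournament over the Von Neumann neighborhood is the only active operator, and the winner replaces the cell only if it is fitter (elitist replacement, as in the cEA described later). The grid side and the random seed are free parameters of this sketch.

```python
import random

def takeover_time(side=16, seed=0):
    """Takeover time of a single best solution on a side x side toroidal grid,
    with binary tournament selection over the Von Neumann neighbourhood
    (centre + N, S, E, W) as the only active operator, synchronous update
    and elitist replacement.  Returns the first t with N(t) == side * side."""
    rng = random.Random(seed)
    offsets = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    grid = [[0] * side for _ in range(side)]   # 0: sub-optimal, 1: best
    grid[side // 2][side // 2] = 1             # unique initial best copy
    t = 0
    while sum(map(sum, grid)) < side * side:
        nxt = [[0] * side for _ in range(side)]
        for i in range(side):
            for j in range(side):
                # two uniform draws from the 5-cell neighbourhood
                a = rng.choice(offsets)
                b = rng.choice(offsets)
                fa = grid[(i + a[0]) % side][(j + a[1]) % side]
                fb = grid[(i + b[0]) % side][(j + b[1]) % side]
                # tournament winner replaces the cell only if fitter (elitist)
                nxt[i][j] = max(grid[i][j], fa, fb)
        grid = nxt
        t += 1
    return t
```

Since influence travels at most one Von Neumann step per generation, the takeover time is bounded below by the largest toroidal Manhattan distance to the initial best copy.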
Sprave proposed a hypergraph-based model of population structures and a method for the estimation of growth curves and takeover times based on the probabilistic diameter of the population [19]. Gorges-Schleuter proposed a study of takeover time and growth curves for cellular evolution strategies. She obtained a linear model for ring populations and a quadratic model for a torus population structure [7]. Several authors wrote about theoretical or empirical models of growth curves and takeover time. Giacobini et al. proposed a model for cellular evolutionary algorithms with asynchronous update policies [4]. They summarized these results and proposed models for synchronous updates [5] that will be evoked later in this paper. Alba proposed a model for distributed evolutionary algorithms consisting in the sum of logistic definitions of the component takeover regimes [2]. In his paper, he made an interesting review of existing models and compared two of them (the logistic model and the hypergraph model) with his newly proposed one. For a detailed state of the art of cEAs, see [1].
In this section we present a new selection scheme for cEAs that allows accurately tuning the selective pressure.
Algorithm 1 Centric Selection

    CentricSelection(index: int, β: double)
        neighbors ← GetNeighborhood(index)
        candidate1 ← Select(neighbors, β)
        candidate2 ← Select(neighbors, β)
        return Best(candidate1, candidate2)

The centric selection (CS) idea is to change the probability of selecting the center cell of the neighborhood. This scheme allows slowing down the convergence speed while keeping an isotropic diffusion of good solutions through the grid. The CS is a deterministic tournament selection. But unlike the standard deterministic tournament, cells in the neighborhood may have different probabilities of being selected for the competition. The anisotropic selection [17] is another selection scheme which modifies the probability of selecting a cell for a deterministic tournament. With the anisotropic selection, the diffusion of solutions is not isotropic, so we propose the CS, which is easier to study. We have p_c = β, the probability of selecting the center cell, and p_n = p_s = p_e = p_w = (1 − β)/4, the probability of selecting either the north, south, east or west cell. When β = 1/5, all cells have the same probability of being selected for the competition: this particular case of CS is the standard binary tournament selection. When β = 1, only the center cell can be selected for the tournament: in this particular case, where the same solution is selected twice, the crossover operator is not applied in the cEA. Only mutations are applied to the solution, and with an elitist replacement strategy, the algorithm behaves as the parallelisation of as many hill climbers as there are solutions in the population. The CS is described in algorithm 1. The candidates compete in a deterministic tournament returning the best one. For each cell on the grid, two parents are selected per generation, as we can see in algorithm 2. Stochastic variation operators are applied to the parents, generating two children. The replacement strategy is elitist: the best child replaces the current solution on the grid if it has a better fitness.
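A minimal sketch of the centric selection in Python; the toroidal grid representation and the fitness signature are assumptions of this sketch:

```python
import random

# Von Neumann offsets: centre first, then north, south, east, west.
OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, 1), (0, -1)]

def centric_selection(grid, fitness, i, j, beta, rng=random):
    """Deterministic binary tournament whose two candidates are drawn with
    p(centre) = beta and p(each of the four neighbours) = (1 - beta) / 4,
    on a toroidal grid.  beta = 1/5 recovers the standard binary tournament."""
    side = len(grid)

    def draw():
        if rng.random() < beta:
            di, dj = OFFSETS[0]               # centre cell
        else:
            di, dj = rng.choice(OFFSETS[1:])  # one of the four neighbours
        return grid[(i + di) % side][(j + dj) % side]

    a, b = draw(), draw()
    return a if fitness(a) >= fitness(b) else b  # deterministic tournament
```

With beta = 1 both candidates are the centre cell, so the tournament always returns the current solution, matching the hill-climber limit case described above.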
The use of a temporary grid is necessary for a synchronous update of the cells.

Algorithm 2 Description of our cEA

    cEA(population: vector, β: double)
        tempGrid: vector
        while continue() do
            for i = 1 to GridSize do
                parent1 ← CentricSelection(i, β)
                parent2 ← CentricSelection(i, β)
                (child1, child2) ← Crossover(parent1, parent2)
                Mutate(child1)
                Mutate(child2)
                tempGrid[i] ← Best(population[i], child1, child2)
            end for
            Replace(population, tempGrid)
        end while
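Algorithm 2 can be sketched as one synchronous generation in Python (the flat-vector indexing of the pseudo-code is replaced by 2-D coordinates; the operator signatures are assumptions of this sketch):

```python
import random

OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, 1), (0, -1)]

def centric_selection(grid, fitness, i, j, beta, rng):
    """Centric selection: p(centre) = beta, p(each neighbour) = (1 - beta)/4."""
    side = len(grid)

    def draw():
        di, dj = (0, 0) if rng.random() < beta else rng.choice(OFFSETS[1:])
        return grid[(i + di) % side][(j + dj) % side]

    a, b = draw(), draw()
    return a if fitness(a) >= fitness(b) else b

def cea_generation(grid, fitness, beta, crossover, mutate, rng=random):
    """One synchronous generation of the cEA: two parents per cell via centric
    selection, crossover, mutation, and elitist replacement through a
    temporary grid so that every cell is updated from the same population."""
    side = len(grid)
    temp = [row[:] for row in grid]
    for i in range(side):
        for j in range(side):
            p1 = centric_selection(grid, fitness, i, j, beta, rng)
            p2 = centric_selection(grid, fitness, i, j, beta, rng)
            c1, c2 = crossover(p1, p2)
            c1, c2 = mutate(c1), mutate(c2)
            best_child = max(c1, c2, key=fitness)
            if fitness(best_child) > fitness(grid[i][j]):  # elitist replacement
                temp[i][j] = best_child
    return temp
```

Because replacement is elitist, the fitness of each cell is non-decreasing from one generation to the next, whatever the variation operators do.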
In this section, we present two models of the search dynamics in cEAs. In the first one, the Equilibrium Model (EM), we consider that the optimal solution has been found and observe how it colonizes the grid. This model is classical in studies on the selective pressure and on the exploration/exploitation trade-off. The information given by this model consists of takeover times and best solution growth curves. As the stochastic variation operators are not taken into account, the same dynamics occur in experimental runs when the recombination and mutation operators are ineffective: when the system has reached an equilibrium. In the second one, we consider that a better solution can be found with a certain probability and observe the frequency of apparition of this new solution with respect to our algorithm's parameters. It is a model of the transition between two periods of fitness stability. We call this new model the Punctuated Equilibria Model (PEM).
In order to measure the selective pressure induced by the CS, we observe what happens when no more solution improvement is possible. In this case, crossover and mutation are no longer useful and the evolution process has reached an equilibrium. Hence, we observe the time needed for a single best solution to conquer the whole grid, and look at the growth curve obtained and the takeover time. We measure the effects of CS on selective pressure by observing these growth curves and takeover times on a square grid of side 64. Figure 2 shows the takeover time as a function of β. The takeover time is not defined for β = 1. The selective pressure drops when the value of β increases.

Figure 2: Average takeover time as a function of β.

We can see on figure 3 the growth of the number of copies of the best solution in the population (top) and its growth rate (bottom). There are two stages in the shape of the curve. The growth rate is linear in the first part and quadratic in the second part. When using this selection scheme, the diffusion of the best solution is still isotropic, so the best solution roughly propagates describing a diamond shape as long as no side of the grid is reached. This corresponds to the first part of the growth rate curve. Once the sides are reached by some copies of the best solution, the dynamics change, as we can observe on the second part of the growth rate curves.

In this section, we propose a new model which will help in the understanding of the search dynamics of an Evolutionary Algorithm. This model was first designed for a cellular EA but can be easily extended to any kind of evolutionary algorithm. We consider a cEA initialized with random solutions. We make sure that the best solution in the population is unique. Our goal is to simulate an evolutionary run: we simulate recombination and mutation operators with probabilities that the mating is efficient or not (i.e. produces a new best solution).
Figure 3: Growth curves of N(t) (top) and its growth rate (bottom) for different values of β.

We consider three different types of matings: between two copies of the best solution (mating 11), between one copy of the best solution and one sub-optimal solution (mating 01), and between two sub-optimal solutions (mating 00). We introduce the probabilities P11, P01 and P00 that matings of type 11, 01 and 00 produce a new best solution, fitter than the previous best one. Figure 4 is an example of an evolutionary run on some optimization problem (minimisation task). We can see that there are some stagnation periods where the best solution does not improve. Then, an improvement occurs and the population enters another stability period. An evolutionary run is a sum of stagnation periods and punctual improvements. Our punctuated equilibria model computes the probability of improving the best solution in the population according to the variables described above. With this model, the probability of finding a new best solution at a given generation t is:

p(t) = 1 − (1 − P11)^n11(t) (1 − P01)^n01(t) (1 − P00)^n00(t)

where n11(t), n01(t) and n00(t) are the numbers of matings of each type at generation t.
Figure 4: Example of an evolutionary run.

The average time to find a new best solution is given by:

E = Σ_{t≥1} t · p(t)

The performance of an algorithm can be measured by the time E needed to find a new best solution, but also by the probability P of improvement in a preset time T. The probability of improving the best solution in T generations is:

P = 1 − ∏_{t=1}^{T} (1 − p(t))
P = 1 − (1 − P11)^Σ11(T) (1 − P01)^Σ01(T) (1 − P00)^Σ00(T)

with Σij(T) = Σ_{t=1}^{T} nij(t) the sum over T generations of the matings of each type. The parameters Pij are problem dependent, and the values of the Σij are given by the selection scheme used. The selection process is usually controlled by a parameter such as the tournament size or, in the case of the CS, β. This parameter should be used to maximize the probability P. Intuitively, the ideal selection process maximizes the Σij which have the higher Pij. More precisely, assuming that the control parameter of the selection process is β, the parameter β* which maximizes the probability P(T) verifies:

dP/dβ (β*) = 0

which gives (in the following equation, we only denote the dependence on β for P and the Σij, for readability):

log(1 − P11) ∂Σ11/∂β (β*) + log(1 − P01) ∂Σ01/∂β (β*) + log(1 − P00) ∂Σ00/∂β (β*) = 0    (1)

If it is possible to have a model of Σij(β), it is then possible to calculate the optimal β as a function of the Pij. In this model, the exploration/exploitation trade-off is given by the number of each possible mating (00, 01 and 11). The model can be used to explain the probability of and the time to find a new best solution according to the selective pressure, and also to tune the value of parameters which have an impact on the selective pressure, such as β, so as to have the highest probability of evolving toward a new best solution. Equation 1 gives precisely the best exploration/exploitation trade-off and allows computing the optimal value of β (in our case) for this trade-off.
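The two PEM formulas above translate directly into code; here the mating counts and the Pij are inputs (e.g. measured as described in the experimental section):

```python
from math import prod

def p_improve(n, P):
    """p(t) = 1 - (1 - P11)^n11 (1 - P01)^n01 (1 - P00)^n00,
    with n and P dicts keyed by the mating types '11', '01' and '00'."""
    return 1.0 - prod((1.0 - P[k]) ** n[k] for k in ('11', '01', '00'))

def p_improve_over_T(sigma_T, P):
    """P = 1 - prod_ij (1 - Pij)^Sigma_ij(T): probability of improving the
    best solution at least once within T generations, given the cumulated
    mating counts Sigma_ij(T)."""
    return 1.0 - prod((1.0 - P[k]) ** sigma_T[k] for k in ('11', '01', '00'))
```

For instance, if only 11 matings can improve the best solution (P01 = P00 = 0), P reduces to 1 − (1 − P11)^Σ11(T), so maximizing Σ11 maximizes P, as the intuition above suggests.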
In the following, we will show the validity of the PEM on some optimization problems.

In this section, we study the effect of the selective pressure on performances through experiments with a cEA using CS on two well-known classes of problems. The optimal exploration/exploitation trade-off found will be explained thanks to the PEM presented in the previous section.
The problems chosen, the Quadratic Assignment Problem and NK-landscapes, are known to be difficult to optimize. The large number of instances of the Quadratic Assignment Problem and the tunable parameters of the NK-landscapes allow managing the difficulty of the problems.
This section presents the Quadratic Assignment Problem (QAP), which is known to be difficult to optimize. The QAP is an important problem in theory and practice as well. It was introduced by Koopmans and Beckmann in 1957 and is a model for many practical problems [11]. The QAP can be described as the problem of assigning a set of facilities to a set of locations with given distances between the locations and given flows between the facilities. The goal is to place the facilities on locations in such a way that the sum of the products between flows and distances is minimal. Given n facilities and n locations, and two n × n matrices D = [d_kl] and F = [f_ij], where d_kl is the distance between locations k and l and f_ij the flow between facilities i and j, the objective function is:

C(p) = Σ_i Σ_j d_{p(i)p(j)} f_{ij}

where p(i) gives the location of facility i in the current permutation p. Nugent, Vollman and Ruml proposed a set of problem instances of different sizes noted for their difficulty [14]. The instances they proposed are known to have multiple local optima, so they are difficult for an evolutionary algorithm. The best known algorithm is the fast hybrid evolutionary algorithm [13], which combines an evolutionary algorithm with an improvement of the fast tabu search of Taillard.

Set up
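The QAP objective above and the permutation operators of this set-up can be sketched as follows. Each exchange step of the crossover follows the description given below; the number of repetitions is left as a parameter, since the exact fraction of N is not recoverable here:

```python
import random

def qap_cost(p, D, F):
    """QAP objective: sum over i, j of D[p(i)][p(j)] * F[i][j], where p[i]
    is the location assigned to facility i (to be minimised)."""
    n = len(p)
    return sum(D[p[i]][p[j]] * F[i][j] for i in range(n) for j in range(n))

def swap_mutation(p, rng):
    """Exchange two randomly chosen positions of the permutation."""
    q = list(p)
    i, j = rng.randrange(len(q)), rng.randrange(len(q))
    q[i], q[j] = q[j], q[i]
    return q

def upmx_like_crossover(p1, p2, rng, repeats):
    """Permutation-preserving crossover sketch: pick a random position i,
    locate j with c1[j] == c2[i] and k with c2[k] == c1[i], then swap
    positions (i, j) in c1 and (i, k) in c2.  Swapping two positions inside
    a permutation keeps it a permutation."""
    c1, c2 = list(p1), list(p2)
    n = len(c1)
    for _ in range(repeats):
        i = rng.randrange(n)
        j = c1.index(c2[i])
        k = c2.index(c1[i])
        c1[i], c1[j] = c1[j], c1[i]
        c2[i], c2[k] = c2[k], c2[i]
    return c1, c2
```

Both operators map permutations to permutations, so feasibility of the QAP solutions is preserved throughout the run.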
We use a population of 400 solutions placed on a square grid (20 × 20). The algorithm uses a crossover that preserves the permutations:

• Select two solutions p1 and p2 as genitors.
• Choose a random position i.
• Find j and k so that p1(i) = p2(j) and p2(i) = p1(k).
• Exchange positions i and j in p1 and positions i and k in p2.
• Repeat N/… times, where N is the size of a solution.

This crossover is an extended version of the UPMX crossover proposed in [12]. The mutation operator consists in randomly selecting two positions from the solution and exchanging those positions. The crossover rate is 1 and we do one mutation per solution. We perform 200 runs for each tuning of the selection operator. An elitist replacement procedure guarantees that solutions stay on the grid if they are fitter than their offspring.

NK-landscapes

The NK-landscapes were proposed by Kauffman to model boolean networks and are used in optimisation in order to explore how epistasis is linked to the ruggedness of search spaces [10]. Epistasis corresponds to the degree of interaction between the "loci" of a solution, and ruggedness to the number of local optima of the search space. The main characteristic of NK-landscapes is that they allow tuning the epistasis level with a single parameter K. The parameter N determines the length of the solutions. The fitness of solutions for a NK-landscape is given by the function f : {0, 1}^N → [0, 1). Each binary string is a solution with N loci. An atom with fixed epistasis level is represented by a fitness component f_i : {0, 1}^{K+1} → [0, 1). It depends on the value of the bit i and on the values of K other bits of the string (K must fall between 0 and N − 1). The fitness f(x) of x ∈ {0, 1}^N is the average of the values of the N fitness components f_i:

f(x) = (1/N) Σ_{i=1}^{N} f_i(x_i, x_{i_1}, ..., x_{i_K})

where {i_1, ..., i_K} ⊂ {1, ..., i − 1, i + 1, ..., N}. Many ways have been proposed to choose the K other locations from the N loci of the solutions.
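A minimal NK-landscape sketch with components specified by extension (one uniform random table entry per (K+1)-bit pattern) and the two common neighborhood choices; the "adjacent" variant here simply takes the K loci following i with periodic boundaries, and all names are illustrative:

```python
import random

def make_nk(N, K, neighborhood='random', seed=0):
    """Build an NK fitness function f : {0,1}^N -> [0, 1).

    Each component f_i depends on bit i and on K other bits: either the K
    loci following i with periodic boundaries ('adjacent') or K loci drawn
    at random ('random').  f_i is specified by extension: a uniform random
    number in [0, 1) for each of the 2^(K+1) bit patterns."""
    rng = random.Random(seed)
    if neighborhood == 'adjacent':
        neigh = [[(i + d) % N for d in range(1, K + 1)] for i in range(N)]
    else:
        neigh = [rng.sample([j for j in range(N) if j != i], K)
                 for i in range(N)]
    tables = [[rng.random() for _ in range(2 ** (K + 1))] for _ in range(N)]

    def f(x):
        total = 0.0
        for i in range(N):
            idx = x[i]
            for j in neigh[i]:            # pack (x_i, x_{i_1}, ..., x_{i_K})
                idx = (idx << 1) | x[j]   # into an index of f_i's table
            total += tables[i][idx]
        return total / N
    return f
```

With K = 0 every component depends on its own bit only (no epistasis); larger K makes the components interact, which is what ruggedness tracks.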
The mainly used ones are the adjacent and random neighborhoods. With the first one, the K nearest loci of locus i are chosen (the solution is taken to have periodic boundaries). With the random neighborhood, K loci are randomly selected from the solution. Each fitness component f_i is specified by extension, i.e. a random number y_{i,(x_i, x_{i_1}, ..., x_{i_K})} from [0, 1) is associated with each element (x_i, x_{i_1}, ..., x_{i_K}) of {0, 1}^{K+1}. Those numbers are uniformly distributed in the interval [0, 1).

Set up
The size of the population is 400 solutions. The crossover used is a one-point crossover, applied with a probability of 1. The mutation is a bit flip applied with a probability of 1/n, where n is the size of a solution. We perform 200 runs for every parameter set, and each run stops after 1500 generations. Runs are performed on instances of size N = 32, with K ∈ {2, 4, 6, 8, 10, 12}.

Figure 5 and table 1 show performances of a cEA using CS on some QAP instances of various sizes. The instance in figure 5 is a well-known instance of size 30. The first fact that we notice when looking at these results is that there is an optimal setting, different from the extreme values 0 and 1, for the parameter β. This indicates that for a certain setting of the parameters, and thus for a certain selective pressure, the search dynamics lead to optimal results. Curves representing the instances summarized in table 1 have the same shape as figure 5. On each instance, the optimal value of β is around 0.85. The performances increase up to these values and then decrease. Performances of CS are significantly better than those obtained with a cEA using the standard binary tournament selection. The standard cEA is observable on the curve at the point β = 0.2; the best performances are obtained with higher values of β than with a cEA with binary tournament selection.

Figure 5: Performances on nug30 for a cEA using CS.

Table 1: Avg. results and std. dev. on QAP instances (columns: instance, standard cGA, best avg. results, optimal β; the numeric entries were lost in extraction).

Performances on NK-landscapes are shown for N = 32 and K = 10 in figure 6 and are summarized in table 2 for the other instances. We can see that the shape of the performance curve is different from the QAP curve. The performance increases until β reaches its maximum value. The same results are obtained for all the instances in table 2. The parameter K tunes the difficulty of the instance. We can see that for K = 2, there is no optimal value for β: the reason is that the optimum is always found. For K = 4, the standard cEA sometimes gets stuck in a local optimum, and with β = 1 our algorithm always finds the optimum. On every instance, except K = 2, the optimal value for β is 1. However, this value β = 1 is a particular one, since it breaks all communications on the grid. As the value of the parameter increases, the chances of selecting two different solutions for recombination decrease. For β = 1, the algorithm is the parallelisation of as many hill climbers as there are cells on the grid: it constantly selects the center cells of the neighborhoods, so there is no crossover and any improvement is due to a bit flip.

Table 2: Avg. performances and std. dev. on NK instances with N = 32 (columns: K, standard cGA, best avg. results, best β; the numeric entries were lost in extraction; K ranges over 2, 4, 6, 8, 10, 12 and the best β is 1 for every K except K = 2).
Figure 6: Performances on NK with N = 32 and K = 10 for a cEA using CS.

In order to explain the optimal values of β for the QAP and NK-landscapes, and to validate the PEM, we compute P, the probability of discovering a new best solution in the population taken from the PEM, with real data. We calculated it for one instance of the QAP and one instance of NK-landscapes. With this calculation we want to find the value of β that maximizes the probability of discovering a new solution. This probability depends on the values of the Σij, and thus on time: if at a generation t no new solution is discovered, the current best solution spreads in the population according to the selective pressure. If during an interval of time corresponding to the takeover time no new solution is discovered, then the population converges. We can compute the ideal β value for a given number of generations T because Σij(T) depends on β and on time: after T generations, Σij(T) is different according to β, and for the optimal value of β, Σij(T) leads to the best probability P. We estimated the Σij with the same experiments done to compute growth curves and takeover time. We averaged the number of matings of each type at each generation over 10 runs. Then, we needed to know the probabilities P11, P01 and P00. We estimated these probabilities using a Bayesian process during the runs. We averaged the values obtained by generation over 500 runs. Figure 7 shows the result of the estimation of the probabilities on the QAP instance Nug30. The ordinate scale is logarithmic because of the variations of the probabilities. The curves representing the Pij intersect, so the value of β which maximizes P may change during a run. We computed P with estimated values of the Pij taken by steps of 50 generations. The values of the Σij are also generation dependent. For each value of β, we took the Σij value after 100 generations, that is Σij(100).
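The estimation of the Pij can be sketched with a per-mating-type Bayesian estimate with a uniform prior; the paper does not detail its exact Bayesian process, so the (successes + 1) / (trials + 2) posterior-mean form is an assumption:

```python
def estimate_pij(mating_log):
    """Estimate P11, P01 and P00 from logged matings.

    mating_log: iterable of (mating_type, improved) pairs, with mating_type
    in {'11', '01', '00'} and improved a boolean flag saying whether the
    mating produced a new best solution.  Returns the posterior mean of a
    Beta(1, 1)-prior estimate: (successes + 1) / (trials + 2)."""
    counts = {'11': [0, 0], '01': [0, 0], '00': [0, 0]}  # [successes, trials]
    for kind, improved in mating_log:
        counts[kind][1] += 1
        counts[kind][0] += int(improved)
    return {k: (s + 1) / (n + 2) for k, (s, n) in counts.items()}
```

With no observations for a mating type, the estimate falls back to the prior mean 1/2, and it converges to the observed frequency as the log grows.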
During a run, this would correspond to allowing a stagnation period of 100 generations before stopping the run, which is reasonable.
Figure 7: Estimated Pij on the QAP instance nug30.

Figure 8 shows the optimal values of β as a function of generations for the QAP instance Nug30. During the first 700 generations, the optimal value is β = 0.2. Then, there is a transition of approximately 150 generations: during this phase, the ideal value of β grows until it reaches 1. In our experiments, β is constant during the runs, and we observe intermediate optimal values of β. The PEM shows that the selective pressure should be strong at the beginning (low values of β) and then weak (high values of β); if β is constant, intermediate values offer the best compromise.

Figure 8: Optimal values of β on the QAP instance nug30.

Figure 9: Estimated Pij on a NK-landscape with N = 32 and K = 10.

Figure 10 shows the optimal values of β as a function of generations for a NK-landscape with N = 32 and K = 10. We can see that the optimal β value increases fast and reaches 1 in the early generations. The ideal selective pressure is weak, and it is not surprising that the best performances are obtained for β = 1 in our experiments. Figure 9 shows the estimation of the Pij as a function of generations. We can see that the curve representing P11 drops very fast. With a negligible probability of improving the current best solution with mutations, there is no sense in spreading this solution. With β = 1, the current best solution in the population cannot spread.
Figure 10: Optimal values of β on a NK-landscape with N = 32, K = 10.

The PEM has been used in order to explain the exploration/exploitation trade-off on two different classes of problems. Coupled with the centric selection, it showed the ideal selective pressure along the search process. This model can be used to tune any parameter which has some influence on the number of matings of each type defined in the previous section. The computation cost is low, since the estimation of the probabilities by a Bayesian process is precise: we averaged the estimation over 500 runs, but the standard deviation was low. The Σij only need to be computed once, since they are independent of the optimization problem tackled.

Conclusion
The exploration/exploitation trade-off is an important issue in evolutionary algorithms. In this paper, we propose a model that takes into account stochastic variations and the improvement of the quality of the solutions: the punctuated equilibria model. In order to study the exploration/exploitation trade-off we propose a tunable selection operator: the centric selection. By controlling the probability of selecting the center cell of the neighborhood for a tournament selection, this operator allows tuning the selective pressure accurately and continuously with one single parameter (β). The performance results on QAP instances and NK-landscapes showed different optimal settings of the centric selection, and thus different ideal selective pressures. Using the punctuated equilibria model, we explained the optimal values of the centric selection's control parameter observed on QAP instances and NK-landscapes. The punctuated equilibria model also put in evidence that the ideal selective pressure is not constant during the search process in the case of QAP instances. In this paper, we used the PEM in order to explain experimental results. In future works, we will use it in order to predict optimal exploration/exploitation trade-offs and to adapt the selective pressure during the runs. To do so, we will both estimate the Pij and tune the selection operator online during the search process. The centric selection will be used in self-adaptive algorithms, with the advantage of modifying the exploration/exploitation ratio with a single parameter. It will also be applied to real problems and compared to other optimization methods.

References

[1] E. Alba and B. Dorronsoro.
Cellular Genetic Algorithms. Springer-Verlag, 2008.
[2] E. Alba and G. Luque. Growth curves and takeover time in distributed evolutionary algorithms. In K. Deb et al., editor, Genetic and Evolutionary Computation Conference (GECCO-2004), volume 3102 of Lecture Notes in Computer Science, pages 864–876, Seattle, Washington, 2004.
[3] E. Alba and J. M. Troya. Cellular evolutionary algorithms: Evaluating the influence of ratio. In PPSN, pages 29–38, 2000.
[4] M. Giacobini, E. Alba, and M. Tomassini. Selection intensity in asynchronous cellular evolutionary algorithms. In GECCO, pages 955–966, 2003.
[5] M. Giacobini, M. Tomassini, A. Tettamanzi, and E. Alba. Selection intensity in cellular evolutionary algorithms for regular lattices. IEEE Trans. Evolutionary Computation, 9(5):489–505, 2005.
[6] D. E. Goldberg and K. Deb. A comparative analysis of selection schemes used in genetic algorithms. In FOGA, pages 69–93, 1990.
[7] M. Gorges-Schleuter. An analysis of local selection in evolution strategies. In GECCO, pages 847–854, 1999.
[8] S. Janson, E. Alba, B. Dorronsoro, and M. Middendorf. Hierarchical cellular genetic algorithm. In J. Gottlieb and G. R. Raidl, editors, Evolutionary Computation in Combinatorial Optimization (EvoCOP), volume 3906, pages 111–122, 2006.
[9] K. A. De Jong and J. Sarma. On decentralizing selection algorithms. In ICGA, pages 17–23, 1995.
[10] S. A. Kauffman. The Origins of Order. Oxford University Press, New York, 1993.
[11] T. Koopmans and M. Beckmann. Assignment problems and the location of economic activities. Econometrica, 25(1):53–76, 1957.
[12] V. Migkikh, E. Topchy, V. Kureichik, and E. Tetelbaum. Combined genetic and local search algorithm for the quadratic assignment problem. In Proceedings of the Second Asia-Pacific Conference on Genetic Algorithms and Applications (APGA), pages 144–151, 2000.
[13] A. Misevicius. A fast hybrid genetic algorithm for the quadratic assignment problem. In GECCO '06: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pages 1257–1264, 2006.
[14] C. Nugent, T. Vollman, and J. Ruml. An experimental comparison of techniques for the assignment of facilities to locations. Operations Research, 16:150–173, 1968.
[15] G. Ochoa, M. Tomassini, S. Verel, and C. Darabos. A study of NK landscapes' basins and local optima networks. In Genetic and Evolutionary Computation – GECCO-2008, pages 555–562, Atlanta, 12–16 July 2008. ACM.
[16] J. Sarma and K. A. De Jong. An analysis of the effects of neighborhood size and shape on local selection algorithms. In PPSN, pages 236–244, 1996.
[17] D. Simoncini, S. Verel, P. Collard, and M. Clergue. Anisotropic selection in cellular genetic algorithms. In M. Keijzer et al., editor, Genetic and Evolutionary Computation – GECCO-2006, pages 559–566, Seattle, 8–12 July 2006. ACM.
[18] P. Spiessens and B. Manderick. A massively parallel genetic algorithm: Implementation and first analysis. In ICGA, pages 279–287, 1991.
[19] J. Sprave. A unified model of non-panmictic population structures in evolutionary algorithms. Proceedings of the Congress on Evolutionary Computation, 2:1384–1391, 1999.
[20] M. Tomassini. Spatially Structured Evolutionary Algorithms: Artificial Evolution in Space and Time (Natural Computing Series). Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2005.
[21] S. Verel, M. Tomassini, and G. Ochoa. The connectivity of NK landscapes' basins: a network analysis. In Proceedings of Artificial Life XI, pages 648–655, 2008.
[22] E. Weinberger. Correlated and uncorrelated fitness landscapes and how to tell the difference. Biological Cybernetics, 63:325–336, 1990.
[23] D. Whitley. Cellular genetic algorithms. In