A Review of Evolutionary Multi-modal Multi-objective Optimization
Ryoji Tanabe, Member, IEEE, and Hisao Ishibuchi, Fellow, IEEE
Abstract—Multi-modal multi-objective optimization aims to find all Pareto optimal solutions, including overlapping solutions in the objective space. Multi-modal multi-objective optimization has been investigated in the evolutionary computation community since 2005. However, it is difficult to survey existing studies in this field because they have been independently conducted and do not explicitly use the term “multi-modal multi-objective optimization”. To address this issue, this paper reviews existing studies of evolutionary multi-modal multi-objective optimization, including studies published under names that are different from “multi-modal multi-objective optimization”. Our review also clarifies open issues in this research area.
Index Terms—Multi-modal multi-objective optimization, evolutionary algorithms, test problems, performance indicators
I. INTRODUCTION
A multi-objective evolutionary algorithm (MOEA) is an efficient optimizer for a multi-objective optimization problem (MOP) [1]. MOEAs aim to find a non-dominated solution set that approximates the Pareto front in the objective space. The set of non-dominated solutions found by an MOEA is usually used in an “a posteriori” decision-making process [2]: a decision maker selects a final solution from the solution set according to her/his preference.

Since the quality of a solution set is usually evaluated in the objective space, the distribution of solutions in the solution space has not received much attention in the evolutionary multi-objective optimization (EMO) community. However, the decision maker may want to compare the final solution to other dissimilar solutions that have an equivalent quality or a slightly inferior quality [3], [4]. Fig. 1 shows a simple example. In Fig. 1, the four solutions x_a, x_b, x_c, and x_d are far from each other in the solution space but close to each other in the objective space. x_a and x_b have the same objective vector. x_c and x_a are similar in the objective space. x_d is dominated by these solutions. This kind of situation can be found in a number of real-world problems, including functional brain imaging problems [3], diesel engine design problems [5], distillation plant layout problems [6], rocket engine design problems [7], and game map generation problems [8].

If multiple diverse solutions with similar objective vectors like x_a, x_b, x_c, and x_d in Fig. 1 are obtained, the decision maker can select the final solution according to her/his preference in the solution space. For example, if x_a in Fig. 1 becomes unavailable for some reason (e.g., material shortages, mechanical failures, traffic accidents, and law revisions), the decision maker can select a substitute from x_b, x_c, and x_d.

R. Tanabe and H. Ishibuchi are with Shenzhen Key Laboratory of Computational Intelligence, University Key Laboratory of Evolving Intelligent Systems of Guangdong Province, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China. E-mail: ([email protected], [email protected]). (Corresponding author: Hisao Ishibuchi)

Fig. 1: Illustration of a situation where the four solutions are identical or close to each other in the objective space but are far from each other in the solution space (a minimization problem).

A practical example is given in [4], which deals with two-objective space mission design problems. In [4], Schütze et al. considered two dissimilar solutions x_1 = (782, …)^T and x_2 = (1222, …)^T for a minimization problem, whose objective vectors f(x_1) and f(x_2) are almost identical. Although x_1 dominates x_2, the difference between f(x_1) and f(x_2) is small enough. The first design variable is the departure time from the Earth (in days). Thus, the departure times of x_1 and x_2 differ by 440 days (= 1222 − 782). If the decision maker accepts x_2 with a slightly inferior quality in addition to x_1, the two launch plans can be considered. If x_1 is not realizable for some reason, x_2 can be the final solution instead of x_1. As explained here, multiple solutions with almost equivalent quality support a reliable decision-making process. If these solutions have a large diversity in the solution space, they can provide insightful information for engineering design [3], [5].

A multi-modal multi-objective optimization problem (MMOP) involves finding all solutions that are equivalent to Pareto optimal solutions [3], [9], [10]. Below, we explain the difference between MOPs and MMOPs using the two-objective and two-variable Two-On-One problem [11]. Figs. 2 (a) and (b) show the Pareto front F and the Pareto optimal solution set O of Two-On-One, respectively. Two-On-One has two equivalent Pareto optimal solution subsets O_1 and O_2 that are symmetrical with respect to the origin, where O = O_1 ∪ O_2. Figs. 2 (c) and (d) show O_1 and O_2, respectively. In Two-On-One, the three solution sets O, O_1, and O_2 (Figs. 2 (b), (c) and (d)) are mapped to F (Fig. 2 (a)) by the objective functions.
On the one hand, the goal of MOPs is generally to find a solution set that approximates the Pareto front F in the objective space. Since O_1 and O_2 are mapped to the same F in the objective space, it is sufficient for MOPs to find either O_1 or O_2. On the other hand, the goal of MMOPs is to find the entire equivalent Pareto optimal solution set O = O_1 ∪ O_2 in the solution space. In contrast to MOPs, it is necessary to find both O_1 and O_2 in MMOPs. Since most MOEAs (e.g., NSGA-II [12] and SPEA2 [13]) do not have mechanisms to maintain the solution space diversity, it is expected that they do not work well for MMOPs. Thus, multi-modal multi-objective evolutionary algorithms (MMEAs) that handle the solution space diversity are necessary for MMOPs.

Fig. 2: (a) The Pareto front F and (b) the Pareto optimal solution set O of Two-On-One [11]. Figs. (c) and (d) show the two Pareto optimal solution subsets O_1 and O_2, respectively.

This paper presents a review of evolutionary multi-modal multi-objective optimization. This topic is not new and has been studied for more than ten years. Early studies include [3], [5], [11], [14]–[16]. Unfortunately, most existing studies were independently conducted and did not use the term “MMOPs” (i.e., they are not tagged). For this reason, it is difficult to survey existing studies of MMOPs despite their significant contributions. In this paper, we review related studies of MMOPs including those published under names that were different from “multi-modal multi-objective optimization”. We also clarify open issues in this field. Multi-modal single-objective optimization problems (MSOPs) have been well studied in the evolutionary computation community [10]. Thus, useful clues to address some issues in studies of MMOPs may be found in studies of MSOPs. We discuss what can be learned from the existing studies of MSOPs.

This paper is organized as follows.
Section II gives definitions of MMOPs. Section III describes MMEAs. Section IV presents test problems for multi-modal multi-objective optimization. Section V explains performance indicators for benchmarking MMEAs. Section VI concludes this paper.

II. DEFINITIONS OF MMOPS
1) Definition of MOPs:
A continuous MOP involves finding a solution x ∈ S ⊆ R^D that minimizes a given objective function vector f : S → R^M. Here, S is the D-dimensional solution space, and R^M is the M-dimensional objective space. A solution x_1 is said to dominate x_2 iff f_i(x_1) ≤ f_i(x_2) for all i ∈ {1, ..., M} and f_i(x_1) < f_i(x_2) for at least one index i. If x* is not dominated by any other solutions, it is called a Pareto optimal solution. The set of all x* is the Pareto optimal solution set, and the set of all f(x*) is the Pareto front. The goal of MOPs is generally to find a non-dominated solution set that approximates the Pareto front in the objective space.
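For concreteness, the dominance relation above can be written as a small predicate; a minimal sketch (our illustration, not code from any reviewed algorithm):

```python
def dominates(f1, f2):
    """True iff objective vector f1 dominates f2 (minimization)."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def nondominated(objs):
    """Return the non-dominated subset of a list of objective vectors."""
    return [f for f in objs if not any(dominates(g, f) for g in objs)]
```

For example, `nondominated([(1, 2), (2, 1), (2, 2)])` keeps (1, 2) and (2, 1) and discards the dominated vector (2, 2).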
2) Definitions of MMOPs:
The term “MMOP” was first coined in [3], [14] in 2005. However, “MMOP” was not used in most studies from 2007 to 2012, and terms that represent MMOPs were not explicitly defined in those studies. For example, MMOPs were referred to as problems of obtaining a diverse solution set in the solution space in [17]. It seems that “multi-modal multi-objective optimization” has been used again as of 2016. Apart from these instances, MMOPs were denoted as “multi-objective multi-global optimization” and “multi-modal multi-objective wicked problems” in [18] and [19], respectively.

Although MMOPs have been addressed for more than ten years, the definition of an MMOP is still controversial. In this paper, we define an MMOP using a relaxed equivalency introduced by Rudolph and Preuss [17] as follows:
Definition 1.
An MMOP involves finding all solutions thatare equivalent to Pareto optimal solutions.
Definition 2.
Two different solutions x_1 and x_2 are said to be equivalent iff ‖f(x_1) − f(x_2)‖ ≤ δ, where ‖a‖ is an arbitrary norm of a, and δ is a non-negative threshold value given by the decision maker. If δ = 0, the MMOP should find all equivalent Pareto optimal solutions. If δ > 0, the MMOP should find all equivalent Pareto optimal solutions and dominated solutions with acceptable quality. The main advantage of our definition of an MMOP is that the decision maker can adjust the goal of the MMOP by changing the δ value. Most existing studies (e.g., [9], [20], [21]) assume MMOPs with δ = 0. MMOPs with δ > 0 were discussed in [3], [4], [19], [22]. For example, x_a, x_b, and x_c in Fig. 1 should be found for MMOPs with δ = 0. In addition, the non-Pareto optimal solution x_d should be found for MMOPs with δ > 0 if ‖f(x_d) − f(x_a)‖ ≤ δ.

Although there is room for discussion, MMOPs with δ > 0 may be more practical in real-world applications. This is because the set of solutions of an MMOP with δ > 0 can provide more options for the decision maker than that of an MMOP with δ = 0. While it is usually assumed in the EMO community that the final solution is selected from non-dominated solutions, the decision maker may also be interested in some dominated solutions in practice [3], [4]. Below, we use the term “MMOP” regardless of the δ value for simplicity.

III. MMEAS

This section describes 12 dominance-based MMEAs, 3 decomposition-based MMEAs, 2 set-based MMEAs, and a post-processing approach. MMEAs need the following three abilities: (1) the ability to find solutions with high quality, (2) the ability to find diverse solutions in the objective space, and (3) the ability to find diverse solutions in the solution space. MOEAs need the abilities (1) and (2) to find a solution set that approximates the Pareto front in the objective space. Multi-modal single-objective optimizers need the abilities (1) and (3) to find a set of global optimal solutions.
In contrast, MMEAs need all abilities (1)–(3). Here, we mainly describe mechanisms of each type of MMEA to handle (1)–(3).
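Before turning to the individual algorithms, note that the δ-based equivalency of Definition 2 is easy to state programmatically; a minimal sketch, assuming the Euclidean norm:

```python
import math

def equivalent(fx1, fx2, delta):
    """Definition 2: two solutions are equivalent iff ||f(x1) - f(x2)|| <= delta."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(fx1, fx2)))
    return dist <= delta
```

With delta = 0, only solutions whose objective vectors coincide are treated as equivalent; a positive delta also admits dominated solutions of acceptable quality.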
1) Pareto dominance-based MMEAs:
The most representative MMEA is Omni-optimizer [9], [14], which is an NSGA-II-based generic optimizer applicable to various types of problems. The differences between Omni-optimizer and NSGA-II are fourfold: the Latin hypercube sampling-based population initialization, the so-called restricted mating selection, the ε-dominance-based non-dominated sorting, and the alternative crowding distance. In the restricted mating selection, an individual x_a is randomly selected from the population. Then, x_a and its nearest neighbor x_b in the solution space are compared based on their non-domination levels and crowding distance values. The winner among x_a and x_b is selected as a parent.

The crowding distance measure in Omni-optimizer takes into account both the objective and solution spaces. For the i-th individual x_i in each non-dominated front R, the crowding distance in the objective space c_i^obj is calculated in a similar manner to NSGA-II. In contrast, the crowding distance value of x_i in the solution space c_i^sol is calculated in a different manner. First, for each j ∈ {1, ..., D}, a “variable-wise” crowding distance value of x_i in the j-th decision variable c_{i,j}^sol is calculated as follows:

c_{i,j}^sol = 2 (x_{i+1,j} − x_{i,j}) / (x_j^max − x_j^min)   if x_{i,j} = x_j^min,
c_{i,j}^sol = 2 (x_{i,j} − x_{i−1,j}) / (x_j^max − x_j^min)   else if x_{i,j} = x_j^max,
c_{i,j}^sol = (x_{i+1,j} − x_{i−1,j}) / (x_j^max − x_j^min)   otherwise,   (1)

where we assume that all individuals in R are sorted based on their j-th decision variable values in ascending order. In (1), x_j^min = min_{x∈R} {x_j} and x_j^max = max_{x∈R} {x_j}. Unlike the crowding distance in the objective space, an infinitely large value is not given to a boundary individual.

Then, an “individual-wise” crowding distance value c_i^sol is calculated as follows: c_i^sol = (Σ_{j=1}^{D} c_{i,j}^sol) / D.
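A sketch of the solution-space part of this measure, following (1) (our own illustration, not the authors' code; `front` holds the solution vectors of one non-dominated front):

```python
def solution_space_crowding(front):
    """Variable-wise solution-space crowding distance, after Eq. (1).
    Boundary individuals receive a finite value (twice the gap to their
    single neighbor) instead of an infinitely large one."""
    n, D = len(front), len(front[0])
    c = [0.0] * n
    for j in range(D):
        order = sorted(range(n), key=lambda i: front[i][j])  # ascending in x_j
        span = front[order[-1]][j] - front[order[0]][j] or 1.0  # degenerate guard
        for pos, i in enumerate(order):
            if pos == 0:                 # x_ij = x_j^min
                d = 2.0 * (front[order[1]][j] - front[i][j]) / span
            elif pos == n - 1:           # x_ij = x_j^max
                d = 2.0 * (front[i][j] - front[order[-2]][j]) / span
            else:
                d = (front[order[pos + 1]][j] - front[order[pos - 1]][j]) / span
            c[i] += d
    return [ci / D for ci in c]          # individual-wise value c_i^sol
```

For instance, for a one-variable front [0.0, 0.25, 1.0], the boundary individuals get 0.5 and 1.5 while the middle one gets 1.0, so the isolated boundary individual is rewarded.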
The average value c_avg^sol of all individual-wise crowding distance values is also calculated as follows: c_avg^sol = (Σ_{i=1}^{|R|} c_i^sol) / |R|. Finally, the crowding distance value c_i of x_i is obtained as follows:

c_i = max{c_i^obj, c_i^sol}   if c_i^obj > c_avg^obj or c_i^sol > c_avg^sol,
c_i = min{c_i^obj, c_i^sol}   otherwise,   (2)

where c_avg^obj is the average value of all crowding distance values in the objective space. As shown in (2), c_i in Omni-optimizer is the combination of c_i^obj and c_i^sol. Due to its alternative crowding distance, the results presented in [9] showed that Omni-optimizer finds more diverse solutions than NSGA-II.

In addition to Omni-optimizer, two extensions of NSGA-II for MMOPs have been proposed. DNEA [23] is similar to Omni-optimizer but uses two sharing functions in the objective and solution spaces. DNEA requires fine-tuning of two sharing niche parameters for the objective and solution spaces. The secondary criterion of DN-NSGA-II [24] is based on the crowding distance only in the solution space. DN-NSGA-II uses a solution distance-based mating selection.

The following are other dominance-based MMEAs. An MMEA proposed in [25] utilizes DBSCAN [26] and the rake selection [27]. DBSCAN, which is a clustering method, is used for grouping individuals based on the distribution of individuals in the solution space. The rake selection, which is a reference vector-based selection method similar to NSGA-III [28], is applied to individuals belonging to each niche for the environmental selection. SPEA2+ [5], [15] uses two archives A_obj and A_sol to maintain diverse non-dominated individuals in the objective and solution spaces, respectively. While the environmental selection in A_obj is based on the density of individuals in the objective space similar to SPEA2 [13], that in A_sol is based on the density of individuals in the solution space.
For the mating selection in SPEA2+, neighborhood individuals in the objective space are selected only from A_obj.

P_{Q,ε}-MOEA [4], 4D-Miner [3], [29], and MNCA [19] are capable of handling dominated solutions for MMOPs with δ > 0. P_{Q,ε}-MOEA uses the ε-dominance relation [30] so that an unbounded archive can maintain individuals with acceptable quality according to the decision maker. Unlike other MMEAs, P_{Q,ε}-MOEA does not have an explicit mechanism to maintain the solution space diversity. 4D-Miner was specially designed for functional brain imaging problems [3]. The population is initialized by a problem-specific method. 4D-Miner maintains dissimilar individuals in an external archive, whose size is ten times larger than the population size. The environmental selection in 4D-Miner is based on a problem-specific metric. Similar to DIOP [22] (explained later), MNCA simultaneously evolves multiple subpopulations P_1, ..., P_S, where S is the number of subpopulations. In MNCA, the primary subpopulation P_1 aims to find an approximation of the Pareto front that provides a target front for other subpopulations P_2, ..., P_S. While the update of P_1 is based on the same selection mechanism as in NSGA-II, the update of P_2, ..., P_S is performed with a complicated method that takes into account both the objective and solution spaces.

Although the above-mentioned MMEAs use genetic variation operators (e.g., the SBX crossover and the polynomial mutation [12]), the following MMEAs are based on other approaches. Niching-CMA [20] is an extension of CMA-ES [31] for MMOPs obtained by introducing a niching mechanism. The number of niches and the niche radius are adaptively adjusted in Niching-CMA. An aggregate distance metric in the objective and solution spaces is used to group individuals into multiple niches.
For each niche, individuals with better non-domination levels survive to the next iteration. MO_Ring_PSO_SCD [21], a PSO algorithm for MMOPs, uses a diversity measure similar to Omni-optimizer. However, MO_Ring_PSO_SCD handles the boundary individuals in the objective space in an alternative manner. In addition, an index-based ring topology is used to create niches.

Two extensions of artificial immune systems [32] have been proposed for MMOPs: omni-aiNet [18] and cob-aiNet [33]. These two methods use a modified version of the polynomial mutation [12]. The primary and secondary criteria of omni-aiNet are based on ε-nondomination levels [30] and a grid operation, respectively. In addition, omni-aiNet uses suppression and insertion operations. While the suppression operation deletes an inferior individual, the insertion operation adds new individuals to the population. The population size is not constant due to these two operations. The primary and secondary criteria of cob-aiNet are based on the fitness assignment method in SPEA2 [13] and a diversity measure with a sharing function in the solution space, respectively. The maximum population size is introduced in cob-aiNet.
2) Decomposition-based MMEAs:
A three-phase multi-start method is proposed in [16]. First, (1, λ)-ES is carried out on each of the M objective functions K times to obtain M × K best-so-far solutions. Then, an unsupervised clustering method is applied to the M × K solutions to detect the number of equivalent Pareto optimal solution subsets s. Finally, s runs of (1, λ)-ES are performed on each of the N single-objective subproblems decomposed by the Tchebycheff function. The initial individual of each run is determined in a chained manner: the best solution found in the j-th subproblem becomes an initial individual of (1, λ)-ES for the (j+1)-th subproblem (j ∈ {1, ..., N−1}). It is expected that s equivalent solutions are found for each of the N decomposed subproblems.

Two variants of MOEA/D [34] for MMOPs are proposed in [35], [36]. MOEA/D decomposes an M-objective problem into N single-objective subproblems using a set of weight vectors, assigning a single individual to each subproblem. Then, MOEA/D simultaneously evolves the N individuals. Unlike MOEA/D, the following two methods assign one or more individuals to each subproblem to handle the equivalency. The MOEA/D algorithm presented in [35] assigns K individuals to each subproblem. The selection is conducted based on a fitness value combining the PBI function value [34] and two distance values in the solution space. K dissimilar individuals are likely to be assigned to each subproblem.

The main drawback of the above methods [16], [35] is the difficulty in setting a proper value for K, because it is problem dependent. MOEA/D-AD [36] does not need such a parameter but requires a relative neighborhood size L. For each iteration, a child u is assigned to the j-th subproblem whose weight vector is closest to f(u), with respect to the perpendicular distance. Let X be a set of individuals already assigned to the j-th subproblem.
If x in X is within the L nearest individuals from the child u in the solution space, x and u are compared based on their scalarizing function values g(x) and g(u). If g(u) ≤ g(x), x is deleted from the population and u enters the population. u also enters the population when no x in X is in the L-neighborhood of u in the solution space.
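This addition/deletion rule can be sketched as follows (a simplified illustration; we assume the Tchebycheff function as the scalarizing function g, which is our assumption rather than a detail taken from [36]):

```python
import math

def tchebycheff(fx, w, z):
    """Tchebycheff scalarizing function with weight vector w and ideal point z."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, fx, z))

def update_subproblem(X, u, f, w, z, L):
    """X: solutions already assigned to the subproblem; u: the child.
    Deletes every L-nearest neighbor x of u with g(u) <= g(x); u enters
    the population if it beats such a neighbor or has no neighbor in X."""
    g = lambda x: tchebycheff(f(x), w, z)
    neighbors = sorted(X, key=lambda x: math.dist(x, u))[:L]
    beaten = [x for x in neighbors if g(u) <= g(x)]
    if beaten or not neighbors:
        X = [x for x in X if x not in beaten]
        X = X + [u]
    return X
```

Because deletions are restricted to the solution-space neighborhood of the child, dissimilar individuals of similar scalarized quality can coexist in the same subproblem.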
3) Set-based MMEAs:
DIOP [22] is a set-based MMEA that can maintain dominated solutions in the population. In the set-based optimization framework [37], a single solution in the upper level represents a set of solutions in the lower level (i.e., a problem). DIOP simultaneously evolves an archive A and a target population T. While A approximates only the Pareto front and is not shown to the decision maker, T obtains diverse solutions with acceptable quality by maximizing the following G indicator: G(T) = w_obj D_obj(T) + w_sol D_sol(T). Here, w_obj + w_sol = 1. D_obj is a performance indicator in the objective space, and D_sol is a diversity measure in the solution space. In [22], D_obj and D_sol were specified by the hypervolume indicator [38] and the Solow-Polasky diversity measure [39], respectively. Meta-individuals in T that are ε-dominated by any meta-individuals in A are excluded from the calculation of the G metric. At the end of the search, T is likely to contain meta-individuals (i.e., solution sets of a problem) that are ε-nondominated by meta-individuals in A.

Another set-based MMEA is presented in [40]. Unlike DIOP, the proposed method evolves only a single population. Whereas DIOP maximizes the weighted sum of the values of D_obj and D_sol, the proposed method treats D_obj and D_sol as meta two-objective functions. NSGA-II is used to simultaneously maximize D_obj and D_sol in [40].

TABLE I: Properties of 18 MMEAs. µ and n_max denote the population size and the maximum number of evaluations used in each paper, respectively. “δ > 0” indicates whether each method can handle MMOPs with δ > 0. “U” means whether each method has an unbounded population/archive. Initial µ values are reported for omni-aiNet, cob-aiNet, P_{Q,ε}-MOEA, and MOEA/D-AD. µ and n_max used in the post-processing step are shown for the method in [17].

MMEAs | Year | µ | n_max | δ > 0 | U
Dominance-based:
SPEA2+ [5], [15] | 2004 | 100 | 50 000 | |
Omni-optimizer [9], [14] | 2005 | 1 000 | 500 000 | ✓ |
omni-aiNet [18] | 2006 | 400 | 40 000 | | ✓
Niching-CMA [20] | 2009 | 50 | 50 000 | |
A method in [25] | 2010 | Not clearly reported | | |
P_{Q,ε}-MOEA [4] | 2011 | 200 | 5 000 | ✓ | ✓
cob-aiNet [33] | 2011 | 100 | 40 000 | |
MNCA [19] | 2013 | 100 | 100 000 | ✓ |
DN-NSGA-II [24] | 2016 | 800 | 80 000 | |
MO_Ring_PSO_SCD [21] | 2017 | 800 | 80 000 | |
DNEA [23] | 2018 | 210 | 63 000 | |
Decomposition-based:
A method in [16] | 2007 | 10 | 20 000 | |
A method in [35] | 2018 | Not clearly reported | | |
MOEA/D-AD [36] | 2018 | 100 | 30 000 | | ✓
Set-based:
DIOP [22] | 2010 | 50 | 100 000 | ✓ |
A method in [40] | 2012 | 200 | 400 000 | |
Post-processing:
A method in [17] | 2009 | 20 | 2 000 | |
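A sketch of such a set-quality indicator in the spirit of DIOP's G (our illustration: D_obj here is a simple two-objective hypervolume and D_sol the mean pairwise solution-space distance, a stand-in for the Solow-Polasky measure used in [22]):

```python
import itertools, math

def hypervolume_2d(front, ref):
    """Hypervolume of a non-dominated 2-objective front w.r.t. reference point ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):          # ascending in f1, descending in f2
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def mean_pairwise_distance(X):
    """Simple solution-space diversity measure: mean pairwise distance."""
    pairs = list(itertools.combinations(X, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def G(T, f, ref, w_obj=0.5, w_sol=0.5):
    """Weighted set-quality indicator G(T) = w_obj*D_obj(T) + w_sol*D_sol(T)."""
    return (w_obj * hypervolume_2d([f(x) for x in T], ref)
            + w_sol * mean_pairwise_distance(T))
```

Maximizing such a weighted indicator rewards a set both for approximating the Pareto front and for spreading out in the solution space, which is the core idea of the set-based formulation.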
4) A post-processing approach:
As pointed out in [17], it is not always necessary to locate all Pareto optimal solutions. Suppose that a set of non-dominated solutions A has already been obtained by an MOEA (e.g., NSGA-II) but not an MMEA (e.g., Omni-optimizer). After the decision maker has selected the final solution x_final from A according to her/his preference in the objective space, it is sufficient to search for solutions whose objective vectors are equivalent to f(x_final).

A post-processing approach is proposed in [17] to handle this problem. First, the proposed approach formulates a meta constrained two-objective minimization problem where f_1^meta(x) = ‖f(x) − f(x_final)‖, f_2^meta(x) = −‖x − x_final‖, and g^meta(x) = f_1^meta(x) − θ < 0. The meta objective functions f_1^meta and f_2^meta represent the distance between x and x_final in the objective and solution spaces, respectively. Thus, smaller f_1^meta(x) and f_2^meta(x) indicate that x is similar to x_final in the objective space and far from x_final in the solution space, respectively. The constraint g^meta with θ > 0 prevents f_2^meta(x) from becoming an infinitely small value in unbounded problems. NSGA-II is used as a meta-optimizer in [17].
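This meta-problem can be written down directly; a minimal sketch (function names are ours, and the Euclidean norm is assumed):

```python
import math

def make_meta_problem(f, x_final, theta):
    """Build the constrained meta two-objective problem of [17] around x_final."""
    f_final = f(x_final)
    def f1_meta(x):          # objective-space distance to f(x_final)
        return math.dist(f(x), f_final)
    def f2_meta(x):          # negated solution-space distance (to be minimized)
        return -math.dist(x, x_final)
    def g_meta(x):           # feasible iff g_meta(x) < 0, i.e., f1_meta(x) < theta
        return f1_meta(x) - theta
    return f1_meta, f2_meta, g_meta
```

Minimizing both meta objectives under the constraint yields solutions that stay within θ of f(x_final) in the objective space while moving as far from x_final as possible in the solution space.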
5) Open issues:
Table I summarizes the properties of the 18 MMEAs reviewed in this section. While some MMEAs require an extra parameter (e.g., L in MOEA/D-AD), Omni-optimizer does not require such a parameter. This parameter-less property is an advantage of Omni-optimizer. However, Omni-optimizer is a Pareto dominance-based MMEA. Since dominance-based MOEAs perform poorly on most MOPs with more than three objectives [28], Omni-optimizer is unlikely to handle many objectives.

In addition to MMEAs, some MOEAs handling the solution space diversity have been proposed, such as GDEA [41], DEMO [42], DIVA [43], “MMEA” [44], DCMMMOEA [45], and MOEA/D-EVSD [46]. Note that solution space diversity management in these MOEAs aims to efficiently approximate the Pareto front for MOPs. Since these methods were not designed for MMOPs, they are likely to perform poorly for MMOPs. For example, “MMEA”, which stands for a model-based multi-objective evolutionary algorithm, cannot find multiple equivalent Pareto optimal solutions [44]. Nevertheless, helpful clues for designing an efficient MMEA can be found in these MOEAs.

The performance of MMEAs has not been well analyzed. The post-processing method may perform better than MMEAs when the objective functions of a real-world problem are computationally expensive. However, an in-depth investigation is necessary to determine which approach is more practical. Whereas the population size µ and the maximum number of evaluations n_max were set to large values in some studies, they were set to small values in other studies. For example, Table I shows that µ = 1 000 and n_max = 500 000 for Omni-optimizer, while µ = 50 and n_max = 50 000 for Niching-CMA. It is unclear whether an MMEA designed with large µ and n_max values works well with small µ and n_max values. While MMOPs with four or more objectives appear in real-world applications (e.g., five-objective rocket engine design problems [7]), most MMEAs have been applied to only two-objective MMOPs.
A large-scale benchmarking study is necessary to address the above-mentioned issues.

The decision maker may want to examine diverse dominated solutions. As explained in Section I, dominated solutions found by P_{Q,ε}-MOEA support the decision making in space mission design problems [4]. The results presented in [29] showed that diverse solutions found by 4D-Miner help neuroscientists analyze brain imaging data. Although most MMEAs assume MMOPs with δ = 0 as shown in Table I, MMEAs that can handle MMOPs with δ > 0 may be more practical. Since most MMEAs (e.g., Omni-optimizer) remove dominated individuals from the population, they are unlikely to find diverse dominated solutions. Some specific mechanisms are necessary to handle MMOPs with δ > 0 (e.g., the multiple subpopulation scheme in DIOP and MNCA).

As explained at the beginning of this section, MMEAs need the three abilities (1)–(3). While the abilities (1) and (2) are needed to approximate the Pareto front, the ability (3) is needed to find equivalent Pareto optimal solutions. Most existing studies (e.g., [9], [20], [21], [36]) report that the abilities (1) and (2) of MMEAs are worse than those of MOEAs. For example, the results presented in [36] showed that Omni-optimizer, MO_Ring_PSO_SCD, and MOEA/D-AD perform worse than NSGA-II in terms of IGD [47] (explained in Section V). If the decision maker is not interested in the distribution of solutions in the solution space, it would be better to use MOEAs rather than MMEAs. The poor performance of MMEAs for multi-objective optimization is mainly due to the ability (3), which prevents MMEAs from directly approximating the Pareto front. This undesirable performance regarding the abilities (1) and (2) is an issue in MMEAs.

• What to learn from MSOPs:
An online data repository (https://github.com/mikeagn/CEC2013) that provides results of optimizers on the CEC2013 problem suite [48] is available for MSOPs. This repository makes the comparison of optimizers easy, facilitating constructive algorithm development. A similar data repository is needed for studies of MMOPs.

The number of maintainable individuals in the population/archive strongly depends on the population/archive size. However, it is usually impossible to know the number of equivalent Pareto optimal solutions of an MMOP a priori. The same issue can be found in MSOPs. To address this issue, the latest optimizers (e.g., dADE [49] and RS-CMSA [50]) have an unbounded archive that maintains solutions found during the search process. Unlike modern optimizers for MSOPs, Table I shows that only three MMEAs have such a mechanism. The adaptive population sizing mechanisms in omni-aiNet, P_{Q,ε}-MOEA, and MOEA/D-AD are advantageous. A general strategy of using an unbounded (external) archive could improve the performance of MMEAs.

IV. MULTI-MODAL MULTI-OBJECTIVE TEST PROBLEMS
This section describes test problems for benchmarking MMEAs. Unlike multi-objective test problems (e.g., the DTLZ test suite [51]), multi-modal multi-objective test problems are explicitly designed such that they have multiple equivalent Pareto optimal solution subsets. The two-objective and two-variable SYM-PART1 problem [16] is one of the most representative test problems for benchmarking MMEAs: f_1(y) = (y_1 + a)^2 + y_2^2 and f_2(y) = (y_1 − a)^2 + y_2^2. Here, y_1 and y_2 are translated values of x_1 and x_2 as follows: y_1 = x_1 − t_1 (c + 2a) and y_2 = x_2 − t_2 b. In SYM-PART1, a controls the region of Pareto optimal solutions, and b and c specify the positions of the Pareto optimal solution subsets. The so-called tile identifiers t_1 and t_2 take values in {−1, 0, 1}. Fig. 3(a) shows the shape of the Pareto optimal solutions of SYM-PART1 with a = 1, b = 10, and c = 8. As shown in Fig. 3(a), the equivalent Pareto optimal solution subsets are on nine lines in SYM-PART1.

Other test problems include the Two-On-One problem [11], the Omni-test problem [9], the SYM-PART2 and SYM-PART3 problems [16], the Superspheres problem [52], the EBN problem [53], the two SSUF problems [24], and the Polygon problems [54]. Fig. 3 also shows the distribution of their Pareto optimal solutions. Since there are an infinite number of Pareto optimal solutions in the EBN problem, we do not show them. Source codes of the ten problems can be downloaded from the supplementary website (https://sites.google.com/view/emmo/). In Omni-test, equivalent Pareto optimal solution subsets are regularly located. SYM-PART2 is a rotated version of SYM-PART1. SYM-PART3 is a transformed version of SYM-PART2 using a distortion operation. The Superspheres problem with D = 2 has six equivalent Pareto optimal solution subsets. However, the number of its subsets is unknown for D > 2.

Fig. 3: Distribution of the Pareto optimal solutions for the eight problems: (a) SYM-PART1, (b) SYM-PART2, (c) SYM-PART3, (d) Two-On-One, (e) Omni-test, (f) Superspheres, (g) SSUF1, (h) SSUF3, and (i) Polygon. Only x_1 and x_2 are shown on Omni-test.

EBN can be considered as a real-coded version of the so-called binary one-zero max problem. All solutions in the solution space are Pareto optimal solutions. SSUF1 and SSUF3 are extensions of the UF problems [55] to MMOPs. There are two symmetrical Pareto optimal solution subsets in SSUF1 and SSUF3. Polygon is an extension of the distance minimization problems [56] to MMOPs, where P equivalent Pareto optimal solution subsets are inside P regular M-sided polygons.

In addition, the eight MMF problems are presented in [21]. Similar to SSUF1 and SSUF3, the MMF problems are derived from the idea of designing a problem that has multiple equivalent Pareto optimal solution subsets by mirroring the original one. A bottom-up framework for generating scalable test problems with any D is proposed in [57]. P equivalent Pareto optimal solution subsets are in P hyper-rectangles located in the solution space similar to the SYM-PART problems. While the first k variables play the role of “position” parameters in the solution space, the other D − k variables represent “distance” parameters. The six HPS problem instances were constructed using this framework in [57].

If a given problem has a multi-modal fitness landscape, it may have multiple non-Pareto fronts whose shapes are similar to the true Pareto front. Such a problem (e.g., ZDT4 [58]) is referred to as a multi-frontal test problem [59]. If the δ value (defined in Subsection II-2) is sufficiently large, a multi-frontal test problem can be regarded as a multi-modal multi-objective test problem. In fact, ZDT4 was used in [19] as a test problem. The Kursawe problem [60] is a multi-modal and nonseparable test problem with a disconnected Pareto front. The Kursawe
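A sketch of SYM-PART1 with the settings above (a = 1, b = 10, c = 8); here the tile of x is recovered with a simple rounding rule, which is our own convenience and matches the 3x3 tile layout for these parameter values:

```python
def sym_part1(x1, x2, a=1.0, b=10.0, c=8.0):
    """SYM-PART1: nine equivalent Pareto subsets on a 3x3 grid of tiles."""
    tile_w = c + 2.0 * a                      # tile width along x1 (= 10 here)
    t1 = max(-1, min(1, round(x1 / tile_w)))  # tile identifiers t1, t2 in {-1, 0, 1}
    t2 = max(-1, min(1, round(x2 / b)))
    y1 = x1 - t1 * tile_w                     # translate back to the center tile
    y2 = x2 - t2 * b
    return (y1 + a) ** 2 + y2 ** 2, (y1 - a) ** 2 + y2 ** 2
```

Points that differ only by a tile translation, such as (1, 0) and (11, 10), are mapped to the same objective vector, which is exactly the equivalency an MMEA must capture.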
The Kursaweproblem has two fronts in the objective space similar to multi- TABLE II: Properties of multi-modal multi-objective test problems,where M , D , and P denote the number of objectives, designvariables, and equivalent Pareto optimal solution subsets, respectively.If a problem has irregularity, the shapes of its multiple equivalentPareto optimal solution subsets differ from each other. Test problems
M D P
IrregularitySYM-PART problems [16] 2 2 9 (cid:88)
Two-On-One problem [11] 2 2 2Omni-test problem [9] 2 Any D Superspheres problem [52] 2 Any UnknownEBN problem [53] 2 Any ∞ Polygon problems [54] Any 2 AnySSUF problems [24] 2 2 2MMF suite [21] 2 2 2 or 4HPS suite [57] 2 Any Any frontal problems. Thus, the Kursawe problem can be used asa multi-modal multi-objective test problem.
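The "mirroring" idea behind SSUF1, SSUF3, and the MMF problems can be illustrated with a deliberately simple toy problem. The sketch below is our own construction (not one of the benchmarks reviewed above): the Pareto set at x2 = 1 is mirrored at x2 = −1, so the two subsets are far apart in the solution space yet map onto exactly the same Pareto front.

```python
import numpy as np

def toy_mirrored_mmop(x):
    """Toy bi-objective problem with two equivalent Pareto optimal subsets.

    Pareto set: x1 in [0, 1] with x2 = +1 or x2 = -1 (mirrored copies).
    Pareto front: f2 = 1 - f1 with f1 in [0, 1].
    """
    x1, x2 = x
    f1 = x1
    f2 = 1.0 - x1 + (abs(x2) - 1.0) ** 2  # penalty is zero only at x2 = +/-1
    return np.array([f1, f2])

# Two solutions far apart in the solution space, identical in the objective space:
a = toy_mirrored_mmop([0.5, 1.0])   # -> [0.5, 0.5]
b = toy_mirrored_mmop([0.5, -1.0])  # -> [0.5, 0.5]
# A solution lying on neither subset is dominated:
c = toy_mirrored_mmop([0.5, 0.0])   # -> [0.5, 1.5]
```

An MOEA judged only in the objective space has no reason to keep both `a` and `b`; a multi-modal MOEA should retain both.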
1) Open issues:
Table II summarizes the properties of the multi-modal multi-objective test problems reviewed here. In Table II, P of Omni-test adheres to [22]. Table II indicates that no test problem is scalable in all of M, D, and P. Although the SYM-PART problems have some desirable properties (e.g., adjustable and straightforward Pareto optimal solution shapes), M, D, and P are constant in these problems. Only Polygon is scalable in M. While most test problems have only two design variables, Omni-test and HPS are scalable in D. Unfortunately, P increases exponentially with D in Omni-test due to the combinatorial nature of its variables. Although the idea of making the SYM-PART and Polygon problems scalable in D is presented in [61], [62], the resulting problems have issues similar to those of Omni-test. Although the HPS problems do not have such an issue, it is questionable whether there exists a real-world problem whose design variables affect only the distance between the objective vectors and the Pareto front.

Only SYM-PART3 has irregularity. Since the shapes of the Pareto optimal solution subsets may differ from each other in real-world problems, we believe that test problems with irregularity are necessary to evaluate the performance of MMEAs. The performance of an MMEA with an absolutely defined niching radius (e.g., DNEA) is likely to be overestimated on test problems without irregularity.

In addition, the relation between synthetic test problems and real-world problems has not been discussed. The idea of designing a Polygon problem based on a real-world map is presented in [63]. However, this does not mean that such a Polygon problem is an actual real-world problem.

• What to learn from MSOPs:
Some construction methods for multi-modal single-objective test problems are available, such as the software framework proposed in [64], the construction method for various problems [65], and Ahrari and Deb's method [66]. Borrowing ideas from such sophisticated construction methods is a promising way to address the above-mentioned issues of multi-modal multi-objective test problems. In [64], Rönkkönen et al. present eight desirable properties for multi-modal single-objective problem generators, such as scalability in D, control of the number of global and local optima, and regular and irregular distributions of optima. These eight properties can be a useful guideline for designing multi-modal multi-objective problem generators.

V. PERFORMANCE INDICATORS FOR MMEAS

Performance indicators play an important role in quantitatively evaluating the performance of MOEAs as well as MMEAs. Since performance indicators for MOEAs consider only the distribution of objective vectors (e.g., the hypervolume, GD, and IGD indicators [38], [47]), they cannot be used to assess the ability of MMEAs to find multiple equivalent Pareto optimal solutions. For this reason, some indicators have been specially designed for MMEAs. Performance indicators for MMEAs can be classified into two categories: simple extensions of existing performance indicators for MOEAs, and specific indicators based on the distributions of solutions in the solution space. IGDX [4], [44] is a representative example of the first category. The IGD and IGDX indicators are given as follows:
$$\mathrm{IGD}(A) = \frac{1}{|A^{*}|} \sum_{\mathbf{z} \in A^{*}} \min_{\mathbf{x} \in A} \Bigl\{ \mathrm{ED}\bigl(\mathbf{f}(\mathbf{x}), \mathbf{f}(\mathbf{z})\bigr) \Bigr\}, \quad (3)$$

$$\mathrm{IGDX}(A) = \frac{1}{|A^{*}|} \sum_{\mathbf{z} \in A^{*}} \min_{\mathbf{x} \in A} \Bigl\{ \mathrm{ED}(\mathbf{x}, \mathbf{z}) \Bigr\}, \quad (4)$$

where A is a set of solutions obtained by an MMEA and A* is a set of reference solutions in the Pareto optimal solution set. ED(x1, x2) denotes the Euclidean distance between x1 and x2. While A with a small IGD value is a good approximation of the Pareto front, A with a small IGDX value approximates the Pareto optimal solutions well. Other indicators in the first category include GDX [4], the Hausdorff distance indicator [67] in the solution space [4], CR [21], and PSP [21]. GDX is the GD indicator applied in the solution space, similar to IGDX. CR is an alternative version of the maximum spread [38] that measures the spread of A. PSP is a combination of IGDX and CR.

Performance indicators in the second category include the mean of the pairwise distance between two solutions [20], CS [16], SPS [16], the Solow-Polasky diversity measure [39] used in [22], [40], and PSV [57]. CS is the number of Pareto optimal solution subsets covered by at least one individual. SPS is the standard deviation of the number of solutions close to each Pareto optimal solution subset. PSV is the percentage of the volume of A in the volume of A* in the solution space.
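Both definitions in Eqs. (3) and (4) share the same structure: the average, over the reference set A*, of the distance from each reference point to its nearest obtained point; only the space in which the distance is measured differs. The sketch below is a minimal NumPy implementation (function and variable names are ours), followed by a schematic illustration of what IGDX rewards, using nine representative Pareto optimal solutions on a 3×3 grid loosely modeled on the SYM-PART layout (our own simplification, not the actual benchmark).

```python
import numpy as np

def mean_min_dist(ref, pts):
    """Average over rows of `ref` of the distance to the nearest row of `pts`."""
    d = np.linalg.norm(ref[:, None, :] - pts[None, :, :], axis=2)
    return d.min(axis=1).mean()

def igd(obtained_objs, ref_objs):
    # Eq. (3): distances measured between objective vectors f(x) and f(z)
    return mean_min_dist(ref_objs, obtained_objs)

def igdx(obtained_sols, ref_sols):
    # Eq. (4): distances measured between solutions x and z
    return mean_min_dist(ref_sols, obtained_sols)

# Schematic example: nine equivalent Pareto optimal solutions on a 3x3 grid.
ref_sols = np.array([[i, j] for i in (-10.0, 0.0, 10.0) for j in (-10.0, 0.0, 10.0)])

concentrated = np.tile([[0.0, 0.0]], (27, 1))  # 27 solutions on one subset only
spread = np.repeat(ref_sols, 3, axis=0)        # 3 solutions on each of the nine

print(igdx(concentrated, ref_sols))  # about 10.7: eight subsets are missed
print(igdx(spread, ref_sols))        # 0.0: every subset is covered
```

The spread set achieves a far better IGDX even though, in a SYM-PART-like problem where all nine subsets map onto the same Pareto front, the two sets would be hard to distinguish in the objective space.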
1) Open issues:
Table III shows the properties of the performance indicators for MMEAs reviewed in this section, where the properties are assessed based on the description of each indicator. While the properties of performance indicators for MOEAs have been examined (e.g., [38], [67]), those for MMEAs have not been well analyzed.

TABLE III: Properties of performance indicators for MMEAs (convergence to Pareto optimal solution subsets, diversity, uniformity, spread, the use of reference solution sets, and the possibility to compare solution sets with different sizes).

Indicators               Conv.  Div.  Unif.  Spr.  Ref.  Dif.
GDX [4]                    ✓                         ✓
IGDX [4], [44]             ✓     ✓     ✓     ✓     ✓
Hausdorff distance [4]     ✓     ✓     ✓     ✓     ✓
CR [21]                                        ✓     ✓     ✓
PSP [21]                   ✓     ✓     ✓     ✓     ✓
Pairwise distance [20]           ✓                         ✓
CS [16]                    ✓     ✓            ✓     ✓     ✓
SPS [16]                         ✓     ✓            ✓     ✓
Solow-Polasky [39]               ✓     ✓     ✓            ✓
PSV [57]                   ✓     ✓            ✓     ✓     ✓

Performance indicators for MMEAs should be able to evaluate the three abilities (1)–(3) explained in Section III. Although IGDX is frequently used, it should be noted that IGDX does not evaluate the distribution of solutions in the objective space. Fig. 4 shows the distribution of two solution sets A and B for SYM-PART1 in the solution and objective spaces, where |A| and |B| are 27.

Fig. 4: Comparison of solution sets A and B for SYM-PART1: (a) A in the solution space, (b) B in the solution space, (c) A in the objective space, (d) B in the objective space.

While the solutions in A are evenly distributed on only one of the nine Pareto optimal solution subsets, the solutions in B are evenly distributed on all of them. Although A has 27 objective vectors that cover the Pareto front, B has only 3 equivalent objective vectors. We used Pareto optimal solutions for A*. The resulting IGDX(A) was roughly 15, whereas IGDX(B), IGD(A), and IGD(B) were all less than 1. Although B has a worse distribution in the objective space than A, IGDX(B) is significantly better than IGDX(A). As demonstrated here, IGDX can evaluate the abilities (1) and (3) but cannot evaluate the ability (2) to find diverse solutions in the objective space. Since the other indicators in Table III, similar to IGDX, do not take the distribution of objective vectors into account, they are likely to have the same undesirable property. For a fair performance comparison, it is desirable to use indicators for MOEAs (e.g., hypervolume and IGD) in addition to the indicators for MMEAs in Table III.

• What to learn from MSOPs:
It is desirable that indicators for multi-modal single-objective optimizers evaluate a solution set without knowledge of the fitness landscape, such as the positions of the optima and their objective values [68]. The same is true for indicators for MMEAs. Table III shows that most indicators (e.g., IGDX) require A*. Since A* is usually unavailable in real-world problems, it is desirable that indicators for MMEAs evaluate A without A*.

Since the archive size in modern multi-modal single-objective optimizers is unbounded so that a large number of local optima can be stored [10], most indicators in that field can handle solution sets with different sizes (e.g., the peak ratio and the success rate [48]). For the same reason, it is desirable that indicators for MMEAs evaluate solution sets with different sizes in a fair manner. However, it is difficult to directly use indicators for multi-modal single-objective optimizers to evaluate MMEAs.

VI. CONCLUSION
The contributions of this paper are threefold. The first contribution is that we reviewed studies in this field in terms of definitions of MMOPs, MMEAs, test problems, and performance indicators. It was difficult to survey the existing studies of MMOPs for the reasons described in Section I. Our review helps to elucidate the current progress on evolutionary multi-modal multi-objective optimization. The second contribution is that we clarified open issues in this field. In contrast to multi-modal single-objective optimization, multi-modal multi-objective optimization has not received much attention despite its practical importance. Thus, some critical issues remain. The third contribution is that we pointed out an issue associated with performance indicators for MMEAs. Reliable performance indicators are necessary for the advancement of MMEAs. We hope that this paper will encourage researchers to work in this research area, which is not well explored.

ACKNOWLEDGMENT
This work was supported by the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), Shenzhen Peacock Plan (Grant No. KQTD2016112514355531), the Science and Technology Innovation Committee Foundation of Shenzhen (Grant No. ZDSYS201703031748284), the Program for University Key Laboratory of Guangdong Province (Grant No. 2017KSYS008), and National Natural Science Foundation of China (Grant No. 61876075).

REFERENCES

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, 2001.
[2] K. Miettinen, Nonlinear Multiobjective Optimization. Springer, 1998.
[3] M. Sebag, N. Tarrisson, O. Teytaud, J. Lefèvre, and S. Baillet, "A Multi-Objective Multi-Modal Optimization Approach for Mining Stable Spatio-Temporal Patterns," in IJCAI, 2005, pp. 859–864.
[4] O. Schütze, M. Vasile, and C. A. C. Coello, "Computing the Set of Epsilon-Efficient Solutions in Multiobjective Space Mission Design," JACIC, vol. 8, no. 3, pp. 53–70, 2011.
[5] T. Hiroyasu, S. Nakayama, and M. Miki, "Comparison study of SPEA2+, SPEA2, and NSGA-II in diesel engine emissions and fuel economy problem," in IEEE CEC, 2005, pp. 236–242.
[6] M. Preuss, C. Kausch, C. Bouvy, and F. Henrich, "Decision Space Diversity Can Be Essential for Solving Multiobjective Real-World Problems," in MCDM, 2008, pp. 367–377.
[7] F. Kudo, T. Yoshikawa, and T. Furuhashi, "A study on analysis of design variables in Pareto solutions for conceptual design optimization problem of hybrid rocket engine," in IEEE CEC, 2011, pp. 2558–2562.
[8] J. Togelius, M. Preuss, and G. N. Yannakakis, "Towards multiobjective procedural map generation," in PCGames, 2010.
[9] K. Deb and S. Tiwari, "Omni-optimizer: A generic evolutionary algorithm for single and multi-objective optimization," EJOR, vol. 185, no. 3, pp. 1062–1087, 2008.
[10] X. Li, M. G. Epitropakis, K. Deb, and A. P. Engelbrecht, "Seeking Multiple Solutions: An Updated Survey on Niching Methods and Their Applications," IEEE TEVC, vol. 21, no. 4, pp. 518–538, 2017.
[11] M. Preuss, B. Naujoks, and G. Rudolph, "Pareto Set and EMOA Behavior for Simple Multimodal Multiobjective Functions," in PPSN, 2006, pp. 513–522.
[12] K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE TEVC, vol. 6, no. 2, pp. 182–197, 2002.
[13] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the Strength Pareto Evolutionary Algorithm," ETHZ, Tech. Rep., 2001.
[14] K. Deb and S. Tiwari, "Omni-optimizer: A Procedure for Single and Multi-objective Optimization," in EMO, 2005, pp. 47–61.
[15] M. Kim, T. Hiroyasu, M. Miki, and S. Watanabe, "SPEA2+: Improving the Performance of the Strength Pareto Evolutionary Algorithm 2," in PPSN, 2004, pp. 742–751.
[16] G. Rudolph, B. Naujoks, and M. Preuss, "Capabilities of EMOA to Detect and Preserve Equivalent Pareto Subsets," in EMO, 2007, pp. 36–50.
[17] G. Rudolph and M. Preuss, "A multiobjective approach for finding equivalent inverse images of pareto-optimal objective vectors," in MCDM, 2009, pp. 74–79.
[18] G. P. Coelho and F. J. V. Zuben, "omni-aiNet: An Immune-Inspired Approach for Omni Optimization," in ICARIS, 2006, pp. 294–308.
[19] E. M. Zechman, M. H. G., and M. E. Shafiee, "An evolutionary algorithm approach to generate distinct sets of non-dominated solutions for wicked problems," Eng. Appl. of AI, vol. 26, no. 5-6, pp. 1442–1457, 2013.
[20] O. M. Shir, M. Preuss, B. Naujoks, and M. T. M. Emmerich, "Enhancing Decision Space Diversity in Evolutionary Multiobjective Algorithms," in EMO, 2009, pp. 95–109.
[21] C. Yue, B. Qu, and J. Liang, "A Multi-objective Particle Swarm Optimizer Using Ring Topology for Solving Multimodal Multi-objective Problems," IEEE TEVC, 2018 (in press).
[22] T. Ulrich, J. Bader, and L. Thiele, "Defining and Optimizing Indicator-Based Diversity Measures in Multiobjective Search," in PPSN, 2010, pp. 707–717.
[23] Y. Liu, H. Ishibuchi, Y. Nojima, N. Masuyama, and K. Shang, "A Double-Niched Evolutionary Algorithm and Its Behavior on Polygon-Based Problems," in PPSN, 2018, pp. 262–273.
[24] J. J. Liang, C. T. Yue, and B. Y. Qu, "Multimodal multi-objective optimization: A preliminary study," in IEEE CEC, 2016, pp. 2454–2461.
[25] O. Kramer and H. Danielsiek, "DBSCAN-based multi-objective niching to approximate equivalent pareto-subsets," in GECCO, 2010, pp. 503–510.
[26] M. Ester, H. Kriegel, J. Sander, and X. Xu, "A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise," in KDD, 1996, pp. 226–231.
[27] O. Kramer and P. Koch, "Rake Selection: A Novel Evolutionary Multi-Objective Optimization Algorithm," in KI, 2009, pp. 177–184.
[28] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE TEVC, vol. 18, no. 4, pp. 577–601, 2014.
[29] V. Krmicek and M. Sebag, "Functional Brain Imaging with Multi-objective Multi-modal Evolutionary Optimization," in PPSN, 2006, pp. 382–391.
[30] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining Convergence and Diversity in Evolutionary Multiobjective Optimization," Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002.
[31] N. Hansen and A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evol. Comput., vol. 9, no. 2, pp. 159–195, 2001.
[32] D. Dasgupta, S. Yu, and F. Niño, "Recent Advances in Artificial Immune Systems: Models and Applications," Appl. Soft Comput., vol. 11, no. 2, pp. 1574–1587, 2011.
[33] G. P. Coelho and F. J. V. Zuben, "A Concentration-Based Artificial Immune Network for Multi-objective Optimization," in EMO, 2011, pp. 343–357.
[34] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE TEVC, vol. 11, no. 6, pp. 712–731, 2007.
[35] C. Hu and H. Ishibuchi, "Incorporation of a decision space diversity maintenance mechanism into MOEA/D for multi-modal multi-objective optimization," in GECCO (Companion), 2018, pp. 1898–1901.
[36] R. Tanabe and H. Ishibuchi, "A Decomposition-Based Evolutionary Algorithm for Multi-modal Multi-objective Optimization," in PPSN, 2018, pp. 249–261.
[37] E. Zitzler, L. Thiele, and J. Bader, "On Set-Based Multiobjective Optimization," IEEE TEVC, vol. 14, no. 1, pp. 58–79, 2010.
[38] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, "Performance assessment of multiobjective optimizers: an analysis and review," IEEE TEVC, vol. 7, no. 2, pp. 117–132, 2003.
[39] A. R. Solow and S. Polasky, "Measuring biological diversity," Environ. Ecol. Stat., vol. 1, no. 2, pp. 95–103, 1994.
[40] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, "Two-objective solution set optimization to maximize hypervolume and decision space diversity in multiobjective optimization," in SCIS, 2012, pp. 1871–1876.
[41] A. Toffolo and E. Benini, "Genetic Diversity as an Objective in Multi-Objective Evolutionary Algorithms," Evol. Comput., vol. 11, no. 2, pp. 151–167, 2003.
[42] T. Robič and B. Filipič, "DEMO: differential evolution for multiobjective optimization," in EMO, 2005, pp. 520–533.
[43] T. Ulrich, J. Bader, and E. Zitzler, "Integrating decision space diversity into hypervolume-based multiobjective search," in GECCO, 2010, pp. 455–462.
[44] A. Zhou, Q. Zhang, and Y. Jin, "Approximating the Set of Pareto-Optimal Solutions in Both the Decision and Objective Spaces by an Estimation of Distribution Algorithm," IEEE TEVC, vol. 13, no. 5, pp. 1167–1189, 2009.
[45] H. Xia, J. Zhuang, and D. Yu, "Combining Crowding Estimation in Objective and Decision Space With Multiple Selection and Search Strategies for Multi-Objective Evolutionary Optimization," IEEE Trans. Cyber., vol. 44, no. 3, pp. 378–393, 2014.
[46] J. C. Castillo, C. Segura, A. H. Aguirre, G. Miranda, and C. León, "A multi-objective decomposition-based evolutionary algorithm with enhanced variable space diversity control," in GECCO (Companion), 2017, pp. 1565–1571.
[47] C. A. C. Coello and M. R. Sierra, "A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm," in MICAI, 2004, pp. 688–697.
[48] X. Li, A. Engelbrecht, and M. G. Epitropakis, "Benchmark Functions for CEC'2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization," RMIT Univ., Tech. Rep., 2013.
[49] M. G. Epitropakis, X. Li, and E. K. Burke, "A dynamic archive niching differential evolution algorithm for multimodal optimization," in IEEE CEC, 2013, pp. 79–86.
[50] A. Ahrari, K. Deb, and M. Preuss, "Multimodal Optimization by Covariance Matrix Self-Adaptation Evolution Strategy with Repelling Subpopulations," Evol. Comput., vol. 25, no. 3, pp. 439–471, 2017.
[51] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable Test Problems for Evolutionary Multi-Objective Optimization," in Evolutionary Multiobjective Optimization. Theoretical Advances and Applications. Springer, 2005, pp. 105–145.
[52] M. T. M. Emmerich and A. H. Deutz, "Test problems based on Lamé superspheres," in EMO, 2006, pp. 922–936.
[53] N. Beume, B. Naujoks, and M. T. M. Emmerich, "SMS-EMOA: multiobjective selection based on dominated hypervolume," EJOR, vol. 181, no. 3, pp. 1653–1669, 2007.
[54] H. Ishibuchi, Y. Hitotsuyanagi, N. Tsukamoto, and Y. Nojima, "Many-Objective Test Problems to Visually Examine the Behavior of Multiobjective Evolution in a Decision Space," in PPSN, 2010, pp. 91–100.
[55] Q. Zhang, A. Zhou, S. Zhao, P. N. Suganthan, W. Liu, and S. Tiwari, "Multiobjective optimization Test Instances for the CEC 2009 Special Session and Competition," Univ. of Essex, Tech. Rep., 2008.
[56] M. Köppen and K. Yoshida, "Substitute Distance Assignments in NSGA-II for Handling Many-objective Optimization Problems," in EMO, 2007, pp. 727–741.
[57] B. Zhang, K. Shafi, and H. A. Abbass, "On Benchmark Problems and Metrics for Decision Space Performance Analysis in Multi-Objective Optimization," IJCIA, vol. 16, no. 1, pp. 1–18, 2017.
[58] E. Zitzler, K. Deb, and L. Thiele, "Comparison of Multiobjective Evolutionary Algorithms: Empirical Results," Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000. [Online]. Available: http://dx.doi.org/10.1162/106365600568202
[59] S. Huband, P. Hingston, L. Barone, and R. L. While, "A review of multiobjective test problems and a scalable test problem toolkit," IEEE TEVC, vol. 10, no. 5, pp. 477–506, 2006.
[60] F. Kursawe, "A Variant of Evolution Strategies for Vector Optimization," in PPSN, 1990, pp. 193–197.
[61] V. L. Huang, A. K. Qin, K. Deb, E. Zitzler, P. N. Suganthan, J. J. Liang, M. Preuss, and S. Huband, "Problem Definitions for Performance Assessment on Multi-objective Optimization Algorithms," NTU, Tech. Rep., 2007.
[62] H. Ishibuchi, M. Yamane, N. Akedo, and Y. Nojima, "Many-objective and many-variable test problems for visual examination of multiobjective search," in IEEE CEC, 2013, pp. 1491–1498.
[63] H. Ishibuchi, N. Akedo, and Y. Nojima, "A many-objective test problem for visually examining diversity maintenance behavior in a decision space," in GECCO, 2011, pp. 649–656.
[64] J. Rönkkönen, X. Li, V. Kyrki, and J. Lampinen, "A framework for generating tunable test functions for multimodal optimization," Soft Comput., vol. 15, no. 9, pp. 1689–1706, 2011.
[65] B. Y. Qu, J. J. Liang, Z. Y. Wang, Q. Chen, and P. N. Suganthan, "Novel benchmark functions for continuous multimodal optimization with comparative results," SWEVO, vol. 26, pp. 23–34, 2016.
[66] A. Ahrari and K. Deb, "A Novel Class of Test Problems for Performance Evaluation of Niching Methods," IEEE TEVC, vol. 22, no. 6, pp. 909–919, 2018.
[67] O. Schütze, X. Esquivel, A. Lara, and C. A. C. Coello, "Using the Averaged Hausdorff Distance as a Performance Measure in Evolutionary Multiobjective Optimization," IEEE TEVC, vol. 16, no. 4, pp. 504–522, 2012.
[68] J. Mwaura, A. P. Engelbrecht, and F. V. Nepocumeno, "Performance measures for niching algorithms," in