A Decomposition-based Large-scale Multi-modal Multi-objective Optimization Algorithm
Yiming Peng, Hisao Ishibuchi†
Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
[email protected], [email protected]
Abstract—A multi-modal multi-objective optimization problem is a special kind of multi-objective optimization problem with multiple Pareto subsets. In this paper, we propose an efficient multi-modal multi-objective optimization algorithm based on the widely used MOEA/D algorithm. In our proposed algorithm, each weight vector has its own sub-population. With a clearing mechanism and a greedy removal strategy, our proposed algorithm can effectively preserve equivalent Pareto optimal solutions (i.e., different Pareto optimal solutions with the same objective values). Experimental results show that our proposed algorithm can effectively preserve the diversity of solutions in the decision space when handling large-scale multi-modal multi-objective optimization problems.
Keywords—Evolutionary multi-objective optimization; multi-modal multi-objective optimization; diversity maintenance; MOEA/D
I. INTRODUCTION
A multi-objective optimization problem (MOP) is an optimization problem which has multiple objective functions. Usually, these objective functions are conflicting and cannot be optimized simultaneously. For convenience, all objective functions are assumed to be converted into minimization functions. The following equation formulates an MOP without constraints:

min F(x) = (f_1(x), ..., f_M(x))^T,   (1)

where x is a D-dimensional decision vector, and F is a mapping from a D-dimensional domain Ω to an M-dimensional range R^M.
In the past three decades, researchers have developed a variety of multi-objective evolutionary algorithms (MOEAs). For example, the well-known NSGA-II [1] and MOEA/D [2] can efficiently solve various types of MOPs.
In this paper, we mainly focus on a special type of MOPs called multi-modal multi-objective optimization problems (
MMOPs). In MMOPs, the function F in Eq. (1) can be a many-to-one mapping from Ω to R^M. That is, for a Pareto optimal solution in the objective space, there may exist multiple inverse images in the decision space. Formally, solutions x_a and x_b are equivalent iff F(x_a) = F(x_b) and x_a ≠ x_b. As reported in [3], the concept of local optimal solutions is not well-defined for MOPs. Therefore, we only consider global Pareto optimal solutions. Due to the existence of equivalent solutions, an MMOP may have multiple equivalent Pareto subsets, each of which is mapped to the whole Pareto front. For example, in Fig. 1, the SUF3 test problem [4] has two equivalent Pareto subsets A and B, and each of them is mapped to the whole Pareto front in the objective space. When handling MMOPs, all equivalent Pareto subsets should be covered. A wide variety of real-world problems are MMOPs. For instance, the space mission design problems [5] and the multi-objective knapsack problems [6] are MMOPs. Solving MMOPs is very useful since equivalent Pareto optimal solutions offer more alternatives for the decision maker [7].

Fig. 1. The two equivalent Pareto subsets (A and B in (a)) and the Pareto front (the blue curve in (b)) of the SUF3 test problem: (a) decision space; (b) objective space. Solution y has two inverse images in the decision space.

This work was supported by Guangdong Provincial Key Laboratory (Grant No. 2020B121201001), the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (Grant No. 2017ZT07X386), Shenzhen Science and Technology Program (Grant No. KQTD2016112514355531), and the Program for University Key Laboratory of Guangdong Province (Grant No. 2017KSYS008). †Corresponding author: Hisao Ishibuchi, [email protected]

As pointed out in the literature [4], [8], MOEAs cannot efficiently solve MMOPs since they do not consider the diversity of solutions in the decision space.
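The many-to-one mapping behind equivalent solutions can be illustrated with a toy example (our own construction, not one of the paper's test problems):

```python
# A toy many-to-one mapping F: two different decision vectors with identical
# objective values, i.e., equivalent solutions in the sense defined above.

def F(x):
    # Only |x1| matters, so (t, x2) and (-t, x2') map to the same point.
    return (abs(x[0]), 1.0 - abs(x[0]))

xa, xb = (0.25, 0.0), (-0.25, 3.0)
assert xa != xb and F(xa) == F(xb)   # xa and xb are equivalent
```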
Therefore, several multi-modal multi-objective evolutionary algorithms (MMEAs) have been proposed, such as DNEA [9], MO_Ring_PSO_SCD [10] and MOEA/D-AD [8]. As reported in [11], these algorithms perform well on MMOPs with low-dimensional decision spaces. However, only a few of them can obtain a diverse solution set in the decision space when handling MMOPs with high-dimensional decision spaces (i.e., large-scale MMOPs). To alleviate this issue, we propose an efficient algorithm for large-scale MMOPs.
The rest of the paper is organized as follows. We briefly review related studies on decomposition-based MOEAs and MMEAs in Section II. In Section III, the proposed MMEA is outlined. Then we compare the proposed MMEA with representative state-of-the-art algorithms on several MMOPs in Section IV. Finally, we summarize the paper and suggest some future research topics in Section V.

Fig. 2. Explanation of MOEA/D based on the sub-population framework.

II. RELATED WORK
A. Decomposition-based MOEAs
In 2007, Zhang and Li proposed MOEA/D [2], the first decomposition-based MOEA. MOEA/D converts an MOP into a set of scalar optimization problems. For this reason, MOEA/D can maintain a strong search ability when handling MOPs with many objectives (i.e., many-objective optimization problems). In addition, MOEA/D is capable of obtaining a uniformly distributed solution set if the given weight vectors uniformly intersect with the Pareto front. Several efficient methods such as MOEA/D-AWA [12] have been proposed for the adaptive adjustment of weight vectors.
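The decomposition idea can be sketched with the Tchebycheff scalarizing function g(w, x) = max_i w_i (f_i(x) − z_i*), one of the standard choices in MOEA/D. This is a minimal illustration; the weight vectors and objective values below are toy data, not from the paper:

```python
# Minimal sketch of decomposition: each weight vector defines one scalar
# sub-problem via the Tchebycheff function (smaller g is better).

def tchebycheff(w, fx, z):
    """g(w, x) = max_i w_i * (f_i(x) - z_i*), with ideal point z*."""
    return max(wi * (fi - zi) for wi, fi, zi in zip(w, fx, z))

z = (0.0, 0.0)                                   # ideal point z*
weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]   # one sub-problem per weight
fx = (0.2, 0.8)                                  # objective vector F(x)

g_values = [tchebycheff(w, fx, z) for w in weights]
print(g_values)   # [0.2, 0.4, 0.8]
```

The same solution thus scores differently on each sub-problem, which is what lets a set of weight vectors pull the population toward different parts of the Pareto front.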
B. MOEA/D variants based on the sub-population framework
Although MOEA/D shows promising performance on various types of MOPs, it is not suitable for solving MMOPs, as reported in [8]. The main reason is that only one solution is assigned to each weight vector, which guides a search direction to a Pareto optimal solution on the Pareto front. Thus, MOEA/D cannot preserve multiple equivalent solutions, i.e., it is unable to solve MMOPs. A very straightforward remedy is to assign multiple solutions to each weight vector. In this way, each weight vector is assigned a sub-population instead of a single solution. In Fig. 2, each weight vector has a sub-population containing four solutions. Solutions in the same sub-population search toward equivalent Pareto optimal solutions. To the best of our knowledge, three MOEA/D variants use this sub-population framework: MOEA/D-AD [8], MOEA/D-M2M [13], and an algorithm from [14]. MOEA/D-M2M is a standard MOEA while the other two algorithms are MMEAs. MOEA/D-M2M lacks a mechanism to preserve equivalent Pareto optimal solutions, which means that it is not suitable for solving MMOPs. Here we briefly introduce MOEA/D-AD and the algorithm from [14].
In MOEA/D-AD, the sub-population size of each weight vector changes adaptively. The mechanism of MOEA/D-AD can be briefly described as follows. In each iteration,
1) An offspring y is generated and assigned to the closest weight vector w_i in the objective space. The sub-population of w_i is denoted as P_i.
2) The closest L solutions (denoted as Q) to y in the decision space are selected from the whole population.
3) The offspring y is added to P_i if one of the following two conditions is met:
• P_i ∩ Q ≠ ∅, and at least one solution in P_i ∩ Q has a worse scalarizing function value than y. Solutions worse than y are removed from P_i.
• P_i ∩ Q = ∅.
The niching structure in the decision space is maintained by the procedures described in step 3). The offspring y only competes with solutions in the current sub-population that are its neighbors in the decision space. In this way, the sub-population of a weight vector can keep multiple equivalent solutions. Since MOEA/D-AD uses an unbounded population, the size of the whole population may become very large after many iterations.
In the algorithm from [14], each weight vector is assigned k solutions. In this way, a population is separated into k grids, each of which contains one solution from each weight vector. For each grid, the fitness of a solution x which is assigned to a weight vector w is evaluated based on the following equation:

f(x) = w_1 g(w, x) + w_2 d_min + w_3 d_avg,   (2)

where g is an aggregation function, d_min is the minimum Euclidean distance from x to the other solutions assigned to w, and d_avg is the average Euclidean distance from x to the solutions assigned to the other weight vectors. These three terms are composed using the weights w_1, w_2 and w_3.

III. PROPOSED ALGORITHM
A. Basic ideas
Let us consider solving an MMOP with k equivalent Pareto subsets. Suppose that a weight vector w intersects with the Pareto front of the given MMOP at p*. As shown in Eq. (3), we can find at most k different Pareto optimal solutions {x*_1, x*_2, ..., x*_k} in the decision space that correspond to p*. In addition, the scalarizing function values g (with respect to w) of these k solutions are also equal to the minimum value (i.e., 0):

F(x*_1) = F(x*_2) = ... = F(x*_k) = p*,
g(w, x*_1) = g(w, x*_2) = ... = g(w, x*_k) = 0,
s.t. ∀ x*_i, x*_j: x*_i ≠ x*_j if i ≠ j.   (3)

According to the above equations, for each weight vector w, Eq. (4) is a multi-modal single-objective optimization problem with k global optimal solutions. In this way, the original MMOP is decomposed into a set of multi-modal single-objective optimization sub-problems:

min_x g(w, x).   (4)

Any multi-modal single-objective optimization algorithm can be used to optimize Eq. (4). In our proposed algorithm, we apply a greedy iterative algorithm to optimize it. The detailed implementation of the proposed MMEA is discussed in the next section.
Our proposed algorithm uses a sub-population framework similar to the above-mentioned algorithms in Section II-B. In our algorithm, each weight vector has a sub-population containing the same number of solutions. In real-world applications, it is challenging to specify the sub-population size a priori since the number of equivalent Pareto subsets is usually unknown. In Section IV-B3, we will further examine the performance of our proposed MMEA with different specifications of the sub-population size.

B. Implementation
Algorithm 1 shows the framework of our decomposition-based MMEA called MOEA/D-MM (MOEA/D for Multi-modal Multi-objective optimization). At the beginning, λ = ⌊N/µ⌋ weight vectors are generated with the same method as MOEA/D. In line 3, µ solutions are randomly assigned to each weight vector. With this setting, each weight vector can preserve at most µ equivalent solutions. For convenience, the solutions assigned to w_i are denoted as P_i.
In each iteration, every sub-population is updated based on the (µ + 1) scenario. For a sub-population P_i, an offspring solution is generated by the procedures described in Algorithm 2. Firstly, we randomly select a solution from P_i as the first parent. To generate more diverse offspring solutions, the second parent is randomly selected from the union of the neighborhood weight vectors' sub-populations. Then the generated offspring is added to P_i, and a solution is removed from P_i based on the environmental selection procedure described in Algorithm 4.

Algorithm 1: Proposed MOEA/D-MM
Parameters: N: population size; µ: sub-population size; g: scalarizing function
Output: Found solutions
1: λ = ⌊N/µ⌋;  /* the number of weight vectors */
2: Generate λ weight vectors W = {w_1, w_2, ..., w_λ};
3: Randomly generate and assign µ solutions to each weight vector, i.e., P = {P_1, ..., P_λ};
4: Initialize the ideal point z = (z_1, z_2, ..., z_M)^T, where z_i = min_{x∈P} f_i(x);
5: T = ⌊λ/…⌋;
6: repeat
7:   σ = Estimate-Clearing-Radius(P);
8:   foreach w_i ∈ W do
9:     y = Mating(w_i);  /* mating */
10:    for j = 1, ..., M do z_j ← min{f_j(y), z_j};  /* update the ideal point z */
11:    S = P_i ∪ {y};
12:    P_i = Environmental-Selection(w_i, S, σ);  /* environmental selection */
13:  end
14: until the termination criterion is met;
15: P′ ← the non-dominated solutions in P;
16: return P′;

Algorithm 2: Mating
Parameters: w_i: input weight vector
Output: Generated offspring
1: W′ ← the T neighborhood weight vectors of w_i;
2: B ← the union of the sub-populations of the weight vectors in W′;
3: x_1 ← a randomly selected individual from P_i;
4: x_2 ← a randomly selected individual from B;
5: return y ← an offspring generated from {x_1, x_2};

In the environmental selection process, the clearing method [15] is introduced to create a niching structure in the decision space. The main idea of clearing is that within a given clearing radius σ, the best solution takes all resources, i.e., the other solutions are removed. In MOEA/D-MM, the clearing radius is estimated using Algorithm 3. Before updating any sub-population, the clearing radius is set to the average Euclidean distance from each solution in the whole population to its L-th nearest neighbor in the decision space. In this paper, L is set to ⌊N/…⌋.
Unlike the original clearing method, MOEA/D-MM applies the clearing only once in order to keep the sub-population size unchanged. In line 2 of Algorithm 4, the pair of points with the smallest distance between them in the current sub-population is found. If the Euclidean distance between them in the decision space is smaller than the clearing radius, the one with the better scalarizing function value survives. Otherwise, the solution with the worst scalarizing function value in the current sub-population is removed.

Algorithm 3: Estimate-Clearing-Radius
Parameters: P: population
Output: Clearing radius
1: L = ⌊N/…⌋;  /* neighborhood size */
2: D ← {distance from x to its L-th nearest neighbor solution in the decision space | x ∈ P};
3: return σ = mean(D);

Algorithm 4: Environmental-Selection
Parameters: w: weight vector; S: candidate solutions; σ: clearing radius
Output: Surviving solutions
1: G = {g(w, x) | x ∈ S};  /* scalarizing function values */
2: x_i, x_j ← the closest pair of points in S;
3: if d(x_i, x_j) < σ then
4:   x ← the solution with the worse G value in {x_i, x_j};  /* clearing in the decision space */
5: else
6:   x ← the solution with the worst G value in S;  /* greedy strategy */
7: end
8: S ← S \ {x};
9: return S;

C. Effectiveness of MOEA/D-MM
In this section, we give some examples to illustrate the environmental selection mechanism in MOEA/D-MM. Fig. 3 shows a multi-polygon test problem [16] with four equivalent Pareto subsets. In this test problem, any solution inside the four hexagons (including solutions on the boundaries) is Pareto optimal. We assume that the sub-population size of each weight vector for MOEA/D-MM is µ = 4. For clarity, we focus on the sub-population of the weight vector w whose scalarizing function value is minimized at the center of each hexagon. Therefore, solutions in this sub-population are searching toward the four equivalent Pareto optimal solutions located at the centers of the hexagons. In each generation, an offspring is generated and added to the sub-population of w, and a solution in this sub-population is removed. In each figure in Fig. 3, one of the five solutions (denoted by A, B, C, D, E) will be removed by the environmental selection mechanism. The dashed circle(s) in each figure represent the clearing radius.
In Fig. 3 (a), the distance between solutions A and B is smaller than the clearing radius σ. Therefore, A is removed since it is worse than B. In some cases, the clearing does not remove any solution since all solutions in the current sub-population are not close to each other. Then the environmental selection is based on the greedy removal strategy, i.e., removing the worst solution in the current sub-population. For example, in Fig. 3 (b), solution C is removed. However, the greedy removal strategy cannot always make the best decision. In Fig. 3 (c), solutions D and E are searching toward the same Pareto optimal solution in the bottom-right hexagon. Although one of them is expected to be removed, the clearing mechanism does not work because the distance between solutions D and E is larger than σ. As opposed to our expectation, a potentially good solution C is removed.
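The clearing-radius estimation and the environmental selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: solutions are plain tuples, g_values[i] is the scalarizing function value of S[i] (smaller is better), and the neighborhood size L is passed in explicitly since its exact setting is lost in this copy:

```python
# Sketch of Algorithms 3 and 4: average L-th-nearest-neighbor distance as
# the clearing radius, then removal of one solution by clearing or by the
# greedy strategy.
import itertools
import math

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def estimate_clearing_radius(population, L):
    """Average distance from each solution to its L-th nearest neighbor."""
    total = 0.0
    for idx, x in enumerate(population):
        dists = sorted(euclidean(x, y)
                       for k, y in enumerate(population) if k != idx)
        total += dists[L - 1]
    return total / len(population)

def environmental_selection(S, g_values, sigma):
    """Remove one solution: clear the worse of the closest pair if that pair
    is within sigma, otherwise greedily remove the worst solution in S."""
    i, j = min(itertools.combinations(range(len(S)), 2),
               key=lambda p: euclidean(S[p[0]], S[p[1]]))
    if euclidean(S[i], S[j]) < sigma:
        worst = i if g_values[i] > g_values[j] else j          # clearing
    else:
        worst = max(range(len(S)), key=g_values.__getitem__)   # greedy removal
    return [s for k, s in enumerate(S) if k != worst]
```

For example, with S = [(0, 0), (0.1, 0), (5, 5)], g = [1, 2, 10] and σ = 0.5, the pair (0, 0)/(0.1, 0) lies within σ, so the worse of those two is cleared even though (5, 5) has the worst scalarizing value.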
Although the greedy strategy may make some mistakes, the clearing method can recover from the situation where several solutions in the same sub-population converge to the same Pareto optimal solution. After a number of iterations, the distance between solutions D and E will eventually become smaller than σ, and then one of them will be removed. In Fig. 3 (d), solution D will be removed by the clearing mechanism. Notice that solution C in Fig. 3 (d) is a newly generated solution. With the clearing mechanism and the greedy removal strategy, MOEA/D-MM can efficiently preserve equivalent Pareto optimal solutions in the sub-populations.

IV. EXPERIMENTAL STUDIES
A. Experimental settings
1) Test problems:
The following four MMOPs are used to benchmark the performance of MMEAs: the SYM-PART problem [17], the SSUF1 and SUF3 test problems [4], and the multi-polygon test problems [16]. Similar to [11], we use the multi-polygon test problems to test the scalability of MOEA/D-MM regarding the dimension of the decision space. Parameters of the selected test problems are listed in Table I. In this table, x_i denotes the i-th decision variable, and the dimensions of the objective space and the decision space of each test problem are denoted by M and D, respectively. For the SYM-PART test problem, the length of each Pareto subset is a, and the vertical and horizontal distances between the centers of two adjacent Pareto subsets are specified by b and c, respectively.
2) Performance indicators:
In our experiments, we use the modified inverted generational distance (IGD+) [18] and the IGDX [19] indicators for performance comparison. These two indicators are used to assess the quality of the obtained solution set in the objective space and the decision space, respectively. The IGD+ indicator improves the original IGD [20] indicator by using a special distance function. In contrast to the original IGD indicator, the IGD+ indicator is weakly Pareto compliant. Formally, given a reference point set P in the objective space, the IGD+ value of a set A can be calculated using Eq. (5):

d+(z, a) = sqrt( Σ_{i=1}^{m} (max{a_i − z_i, 0})^2 ),
IGD+(A) = (1/|P|) Σ_{p∈P} min{ d+(p, a) | a ∈ A }.   (5)

The IGDX value of a set A for a reference point set S in the decision space is given by Eq. (6):

IGDX(A) = (1/|S|) Σ_{x∈S} min{ d(x, a) | a ∈ A },   (6)

where d is the Euclidean distance function.
For the IGDX indicator, the reference point set S is generated by uniformly sampling 10,000 points on the Pareto sets of each test problem. Then, the image of S in the objective space (i.e., P) is used to calculate the IGD+ indicator.

Fig. 3. Illustration of the effect of the environmental selection mechanism of MOEA/D-MM on the multi-polygon test problem: (a) A is removed; (b) C is removed; (c) C is removed; (d) D is removed.

TABLE I. PARAMETER SETTINGS FOR SELECTED TEST PROBLEMS.
Problem | M | D | Search space | Special parameters | Number of Pareto subsets
SYM-PART | 2 | 2 | x_i ∈ [−…, …] | a = 2, b = 10, c = 10 | …
SSUF1 | 2 | 2 | x_1 ∈ [1, …] and x_2 ∈ [−…, …] | – | 2
SUF3 | 2 | 2 | x_1 ∈ [0, …] and x_2 ∈ [1, …] | – | 2
Multi-polygon | 6 | {2, …} | x_i ∈ [−…, …]^D | centers: (0, …), (0, …), (5, …), (5, …); polygon radius = 1 | 4
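Eqs. (5) and (6) translate directly into code. The following is a minimal sketch of the two indicators; the point sets in any usage would be toy data, not the 10,000 sampled reference points used in the paper:

```python
# Sketch of Eq. (5) (IGD+) and Eq. (6) (IGDX). A_obj/A_dec are the obtained
# solutions in the objective/decision space; P_ref/S_ref are reference sets.
import math

def d_plus(z, a):
    """Modified distance of Eq. (5): only objectives where a is worse than
    the reference point z contribute."""
    return math.sqrt(sum(max(ai - zi, 0.0) ** 2 for ai, zi in zip(a, z)))

def igd_plus(A_obj, P_ref):
    return sum(min(d_plus(p, a) for a in A_obj) for p in P_ref) / len(P_ref)

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def igdx(A_dec, S_ref):
    return sum(min(euclidean(x, a) for a in A_dec) for x in S_ref) / len(S_ref)
```

Note that d+ is zero for any point that weakly dominates the reference point, which is what makes IGD+ weakly Pareto compliant while the plain IGD is not.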
3) Selected algorithms:
To verify the effectiveness of MOEA/D-MM in solving MMOPs, we compare its performance with the original MOEA/D with the Tchebycheff and PBI scalarizing functions as well as three recently proposed MMEAs: MOEA/D-AD [8], DNEA [9] and MO_Ring_PSO_SCD [10]. All algorithms are implemented on the PlatEMO [21] platform. Parameters for each algorithm are set to the values suggested in the original papers. In our experiments, each algorithm is run with a population size of 300 and 100,000 evaluations. All algorithms are examined on each test problem 31 times in order to obtain reliable comparison results. For each algorithm on each test problem, a single run with the median HV value over the 31 runs is selected for visualization.
B. Experimental results
1) Comparison with MOEA/D:
In this section, we compare the performance of MOEA/D-MM and MOEA/D on the SUF3 test problem. Both of them use the Tchebycheff scalarizing function. In Fig. 4, most solutions obtained by MOEA/D are located in one of the two Pareto subsets. However, MOEA/D-MM successfully locates all Pareto subsets. The simulation results clearly show that MOEA/D-MM can effectively preserve equivalent Pareto optimal solutions with the sub-population framework.
2) Benchmark results on MMOPs:
In this section, we specify the sub-population size as 4 in MOEA/D-MM. This specification is further discussed in the next section.
Figs. 5 and 6 visualize the non-dominated solutions in the final population of MOEA/D-MM with the Tchebycheff (MOEA/D-MM-TCH) and PBI (MOEA/D-MM-PBI) scalarizing functions in the decision space on each test problem, respectively. The black lines in Figs. 5 (a)-(c) and Figs. 6 (a)-(c) represent the Pareto set of the corresponding test problem. In Figs. 5 (d) and 6 (d), the four hexagons are equivalent Pareto subsets. From the visualization results, we can see that MOEA/D-MM is able to obtain solution sets with very good coverage of all Pareto subsets.

Fig. 4. Comparison of MOEA/D-MM and MOEA/D on the SUF3 test problem: (a) MOEA/D; (b) MOEA/D-MM. In each figure, the thin lines denote the equivalent Pareto subsets, and the dark circles represent the obtained solutions.

Tables II and III present the numerical comparison results among the seven algorithms. In each table, we use the Wilcoxon rank-sum test to compare each algorithm with MOEA/D-MM-TCH. The symbols +, ≈ and − indicate that the corresponding algorithm is significantly better than, not significantly different from, and significantly worse than MOEA/D-MM-TCH, respectively. The best results in each table are highlighted.
As shown in Table II, MOEA/D-MM-TCH has the best performance among the tested algorithms regarding the IGDX indicator, especially on the multi-polygon test problems with higher-dimensional decision spaces (e.g., D = 4, …). Notice that MOEA/D-MM and MOEA/D with the PBI function perform worse than with the Tchebycheff function on almost all test problems. The numerical results indicate that the Tchebycheff function can obtain more uniform solutions than the PBI function on the selected test problems. The main reason is that these four test problems are all convex, and the PBI scalarizing function does not work well on such problems. In particular, MO_Ring_PSO_SCD performs the best on the SUF3 and SSUF1 test problems, and DNEA outperforms the others on the SYM-PART test problem.
Table III shows the average IGD+ indicator value of each algorithm on each test problem. As observed in the literature [4], [8], the IGD values of MMEAs are usually worse than those of MOEAs. This is because equivalent solutions are the same in the objective space, which means that they do not contribute to the IGD indicator. Since IGD+ is similar to IGD, MMEAs are expected to have worse IGD+ indicator values than MOEAs. Among all algorithms, MOEA/D with the Tchebycheff function achieves the best performance with respect to the IGD+ indicator.
Among the tested MMEAs, MO_Ring_PSO_SCD and DNEA have the best performance on the MMOPs with the 2-dimensional decision space, while MOEA/D-MM-TCH outperforms the other MMEAs on the problems with higher-dimensional decision spaces in terms of the IGD+ indicator.
3) Influence of sub-population size:
In this section, we investigate the influence of the sub-population size on the performance of MOEA/D-MM. In MOEA/D-MM, the sub-population size determines the maximum number of equivalent solutions that a weight vector can preserve. Suppose we are handling an MMOP with k equivalent Pareto subsets, and the sub-population size is µ. Since the population size is N, the number of weight vectors is ⌊N/µ⌋. When µ = k, all equivalent solutions can be covered. When µ < k, each weight vector can only preserve at most µ equivalent solutions. In this case, the decision maker has fewer choices when selecting a final solution. However, MOEA/D-MM may have a better search ability in the objective space since more weight vectors are used. When µ > k, the performance of MOEA/D-MM deteriorates mainly for the following two reasons. Firstly, the maximum number of equivalent solutions needed in each sub-population is not changed, which means that some solutions in the sub-population of each weight vector may have no contribution to the quality of the solution set. Secondly, a larger sub-population size means that fewer weight vectors are used, which also reduces the search ability.
Fig. 7 shows the average IGDX and IGD+ indicator values obtained by MOEA/D-MM-TCH on the multi-polygon test problems with four equivalent Pareto subsets. Five specifications of µ (including µ = 2, 4 and 6) are examined for the multi-polygon test problems in five decision spaces (D-dimensional decision spaces with D = 2, …). In Fig. 8, we show the final populations of MOEA/D-MM-TCH on the polygon test problem with the two-dimensional decision space for two settings of µ: µ = 2 and µ = 6. From Fig. 7 (a), when µ does not exceed the number of equivalent Pareto subsets (here, 4), the IGDX values do not change much, because the IGDX indicator only measures the coverage of the Pareto set instead of the number of equivalent solutions. As shown in Fig. 8 (a), when µ = 2, although a weight vector can only preserve two equivalent solutions, the coverage of the Pareto subsets is almost the same as in Fig. 5 (d). This is because the number of weight vectors is twice as large as in the case of µ = 4. According to Fig. 7 (b), when µ exceeds 4, the performance of the algorithm degrades in terms of the IGDX and IGD+ indicators. In Fig. 8 (b), we can also observe that, compared to µ = 2 and µ = 4, fewer non-dominated solutions are obtained when µ = 6, although all Pareto subsets are covered. The experimental results indicate that a smaller sub-population size is preferred in MOEA/D-MM.

V. CONCLUDING REMARKS
In this paper, we proposed MOEA/D-MM, a simple yet efficient multi-modal multi-objective optimization algorithm based on MOEA/D. We introduced a clearing mechanism and a greedy removal strategy into MOEA/D with the sub-population framework. The proposed algorithm shows promising performance in comparison with recently proposed MMEAs on various MMOPs, especially on large-scale test problems.
Several interesting research topics are left for future work. For example, a dynamic sub-population size adjustment strategy is needed for handling real-world optimization problems without knowledge about the number of Pareto subsets. The development of new test problems and new performance indicators in the decision space is also an interesting research topic in the field of multi-modal multi-objective optimization.

Fig. 5. Visualization of non-dominated solutions in the final population of MOEA/D-MM with the Tchebycheff function in the decision space on each test problem: (a) SYM-PART; (b) SSUF1; (c) SUF3; (d) Multi-polygon (D = 2).

Fig. 6. Visualization of non-dominated solutions in the final population obtained by MOEA/D-MM with the PBI function in the decision space on each test problem: (a) SYM-PART; (b) SSUF1; (c) SUF3; (d) Multi-polygon (D = 2).

Fig. 7. The average IGDX and IGD+ indicator values with respect to the sub-population size µ: (a) IGDX versus µ; (b) IGD+ versus µ.

Fig. 8. The final populations of MOEA/D-MM-TCH with two settings of µ: (a) µ = 2; (b) µ = 6.

TABLE II. AVERAGE IGDX VALUES OVER 31 RUNS. BEST RESULTS ARE HIGHLIGHTED. [Only the MOEA/D-MM columns are legible in this copy.]
Problem | M | D | MOEA/D-MM-TCH | MOEA/D-MM-PBI
SUF3 | 2 | 2 | 1.2359e-2 | 2.0712e-2
SSUF1 | 2 | 2 | 4.2628e-2 | 6.0374e-2
SYM-PART | 2 | 2 | 1.5503e-1 | 1.9154e-1
Multi-polygon | 6 | 2 | 1.0902e-1 | 2.1422e-1

TABLE III. AVERAGE IGD+ VALUES OVER 31 RUNS. BEST RESULTS ARE HIGHLIGHTED. [Only the MOEA/D-MM columns are legible in this copy.]
Problem | M | D | MOEA/D-MM-TCH | MOEA/D-MM-PBI
SUF3 | 2 | 2 | 4.1228e-3 | 6.5426e-3
SSUF1 | 2 | 2 | 2.4583e-3 | 3.2304e-3
SYM-PART | 2 | 2 | 4.2798e-2 | 8.9186e-2
Multi-polygon | 6 | 2 | 9.0390e-2 | 1.8361e-1

REFERENCES
[1] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II,"
IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, April 2002.
[2] Q. Zhang and H. Li, "MOEA/D: a multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, December 2007.
[3] A. Moshaiov, "The paradox of multimodal optimization: concepts vs. species in single and multi-objective problems," in Proc. of 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, July 24–29, 2016, pp. 1743–1748.
[4] J. J. Liang, C. T. Yue, and B. Y. Qu, "Multimodal multi-objective optimization: a preliminary study," in Proc. of 2016 IEEE Congress on Evolutionary Computation, Vancouver, BC, Canada, July 24–29, 2016, pp. 2454–2461.
[5] O. Schütze, M. Vasile, and C. A. Coello Coello, "Computing the set of epsilon-efficient solutions in multiobjective space mission design," Journal of Aerospace Computing, Information, and Communication, vol. 8, no. 3, pp. 53–70, March 2011.
[6] A. Jaszkiewicz, "On the performance of multiple-objective genetic local search on the 0/1 knapsack problem - a comparative experiment," IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, pp. 402–412, August 2002.
[7] K. Deb, Multi-objective Optimization Using Evolutionary Algorithms. USA: John Wiley & Sons, Inc., July 2001.
[8] R. Tanabe and H. Ishibuchi, "A decomposition-based evolutionary algorithm for multi-modal multi-objective optimization," in Proc. of Parallel Problem Solving from Nature - PPSN XV, Coimbra, Portugal, September 8–12, 2018, pp. 249–261.
[9] Y. Liu, H. Ishibuchi, Y. Nojima, N. Masuyama, and K. Shang, "A double-niched evolutionary algorithm and its behavior on polygon-based problems," in Proc. of Parallel Problem Solving from Nature - PPSN XV, Coimbra, Portugal, September 8–12, 2018, pp. 262–273.
[10] C. Yue, B. Qu, and J. Liang, "A multiobjective particle swarm optimizer using ring topology for solving multimodal multiobjective problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 5, pp. 805–817, October 2018.
[11] Y. Peng, H. Ishibuchi, and K. Shang, "Multi-modal multi-objective optimization: problem analysis and case studies," in Proc. of IEEE Symposium Series on Computational Intelligence, Xiamen, China, December 6–9, 2019, pp. 1865–1872.
[12] Y. Qi, X. Ma, F. Liu, L. Jiao, J. Sun, and J. Wu, "MOEA/D with adaptive weight adjustment," Evolutionary Computation, vol. 22, no. 2, pp. 231–264, June 2014.
[13] H. Liu, F. Gu, and Q. Zhang, "Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 450–455, June 2014.
[14] C. Hu and H. Ishibuchi, "Incorporation of a decision space diversity maintenance mechanism into MOEA/D for multi-modal multi-objective optimization," in Proc. of Genetic and Evolutionary Computation Conference Companion, Kyoto, Japan, July 15–19, 2018, pp. 1898–1901.
[15] A. Petrowski, "A clearing procedure as a niching method for genetic algorithms," in Proc. of IEEE International Conference on Evolutionary Computation, Nagoya, Japan, May 20–22, 1996, pp. 798–803.
[16] H. Ishibuchi and Y. Peng, "A scalable multimodal multiobjective test problem," in Proc. of 2019 IEEE Congress on Evolutionary Computation, Wellington, New Zealand, June 10–13, 2019, pp. 302–309.
[17] G. Rudolph, B. Naujoks, and M. Preuss, "Capabilities of EMOA to detect and preserve equivalent Pareto subsets," in Proc. of Evolutionary Multi-Criterion Optimization, Matsushima, Japan, March 5–8, 2007, pp. 36–50.
[18] H. Ishibuchi, H. Masuda, Y. Tanigaki, and Y. Nojima, "Modified distance calculation in generational distance and inverted generational distance," in Proc. of Evolutionary Multi-Criterion Optimization, Guimarães, Portugal, March 29–April 1, 2015, pp. 110–125.
[19] A. Zhou, Q. Zhang, and Y. Jin, "Approximating the set of Pareto-optimal solutions in both the decision and objective spaces by an estimation of distribution algorithm," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 1167–1189, October 2009.
[20] C. A. Coello Coello and M. Reyes Sierra, "A study of the parallelization of a coevolutionary multi-objective evolutionary algorithm," in Proc. of MICAI 2004: Advances in Artificial Intelligence, Mexico City, Mexico, April 26–30, 2004, pp. 688–697.
[21] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, "PlatEMO: A MATLAB platform for evolutionary multi-objective optimization,"