Multiobjective Multitasking Optimization Based on Decomposition with Dual Neighborhoods
arXiv preprint (cs.CE). JOURNAL OF LATEX CLASS FILES, VOL. X, NO. X, XXXX
Xianpeng Wang, Member, IEEE, Zhiming Dong, Lixin Tang, Senior Member, IEEE, and Qingfu Zhang, Fellow, IEEE

Abstract—This paper proposes a multiobjective multitasking optimization evolutionary algorithm based on decomposition with dual neighborhoods. In the proposed algorithm, each subproblem not only maintains a neighborhood based on the Euclidean distance among weight vectors within its own task, but also keeps a neighborhood with subproblems of other tasks. Grey relation analysis is used to define the neighborhood among subproblems of different tasks. In this way, relationships among different subproblems can be effectively exploited to guide the search. Experimental results show that the proposed algorithm outperforms four state-of-the-art multiobjective multitasking evolutionary algorithms and a traditional decomposition-based multiobjective evolutionary algorithm on a set of test problems.
Index Terms—Multiobjective multitasking optimization, evolutionary algorithm, decomposition, grey relation analysis.
I. INTRODUCTION

MULTITASKING evolutionary optimization [1] is a new and growing research area. Borrowing the idea from multitask learning, multitasking optimization (MTO) explores the relationships among different tasks to improve search efficiency, and it can also distinguish and make use of the differences among these tasks.

Evolutionary algorithms (EAs) are widely used for solving optimization problems [2], [3]. Multitasking evolutionary algorithms (MTEAs) [1], [4] help the optimization of different tasks by sharing the same population and mining the implicit information among tasks. On the one hand, such sharing and mining strategies can speed up the optimization process of the different tasks. On the other hand, they can help each individual optimization process escape its local optima through the interaction between tasks. Typically, for bi-level optimization problems [5], when multiple upper-level candidate solutions are analyzed simultaneously, the lower-level optimization problem can be considered as an MTO problem [6]. In the field of expensive optimization [7], knowledge of computationally cheap optimization problems is transferred to the expensive optimization problem through a multitasking evolutionary algorithm to improve the convergence speed on the expensive problem. Yi et al. [8] transformed a problem with interval uncertainty into an MTO problem. Feng et al. [9] used a multitasking evolutionary algorithm to deal with a generalized variant of the vehicle routing problem with occasional drivers, to cope with the requirement that multiple tasks in cloud computing services should be optimized at the same time. In addition, MTEAs have been studied and employed to successfully solve different problems such as the optimization of operational indices in beneficiation processes [10], composition of cloud computing services [11], sparse reconstruction [12], bi-fidelity optimization [13], hyper-heuristics [14], and the multiobjective pollution-routing problem [15].

Based on NSGA-II [16], Gupta et al. [4] proposed an evolutionary algorithm for solving multiobjective multitasking optimization problems, called MO-MFEA, and used it to solve two composites manufacturing problems (two multiobjective optimization problems, i.e., MOPs). The decomposition-based multiobjective evolutionary algorithm (MOEA/D) has been widely used in multiobjective optimization [17], [18]. Yao et al. [19] proposed a decomposition-based algorithm for solving multiobjective multitasking optimization problems; however, their algorithm does not make good use of the relationships among different tasks. In MOEA/D algorithms, a neighborhood structure is used to establish relationships among different subproblems: it is assumed that subproblems whose weight vectors are close in Euclidean distance have similar optimal solutions [20]. However, the relationship between subproblems of different tasks cannot be measured by weight vectors, because the subproblems belong to different tasks. To efficiently mine and use the relationships between subproblems of different tasks, this paper proposes a multiobjective multitasking optimization evolutionary algorithm based on decomposition with dual neighborhoods, denoted as MTEA/D-DN. Each subproblem maintains a neighborhood based on the Euclidean distance between the weight vectors within its own task, denoted as the internal neighborhood, and also an external neighborhood of subproblems of other tasks, defined by grey relation analysis [21]. During the evolution of the population, the transfer of information between different tasks is achieved by exchanging information between these two neighborhoods.

This research was supported by the National Key Research and Development Program of China (2018YFB1700404), the Fund for the National Natural Science Foundation of China (62073067), the Major Program of National Natural Science Foundation of China (71790614), the Major International Joint Research Project of the National Natural Science Foundation of China (71520107004), and the 111 Project (B16009). (Corresponding author: Lixin Tang.) X. Wang is with the Key Laboratory of Data Analytics and Optimization for Smart Industry (Northeastern University), Ministry of Education, Shenyang 110819, China (e-mail: [email protected]). Z. Dong is with the Liaoning Engineering Laboratory of Operation Analytics and Optimization for Smart Industry, Liaoning Key Laboratory of Manufacturing System and Logistics, Shenyang 110819, China (e-mail: [email protected]). L. Tang is with the Institute of Industrial and Systems Engineering, Northeastern University, Shenyang 110819, China (e-mail: [email protected]). Q. Zhang is with the City University of Hong Kong, Shenzhen Research Institute, Shenzhen 518057, China (e-mail: [email protected]).
The remainder of the paper proceeds as follows. Section II introduces some basic concepts and related work. The proposed multiobjective multitasking evolutionary algorithm based on decomposition with dual neighborhoods is described in detail in Section III. Then, computational experiments and discussion are presented in Section IV. Finally, the paper is concluded in Section V.

II. BACKGROUND AND RELATED WORK

In this section, we first describe some basic definitions of multiobjective MTO. Then, the decomposition strategy and grey relation analysis are briefly introduced. Finally, a review of existing evolutionary algorithms for multiobjective MTO is presented.
A. Multiobjective Multitasking Optimization
In general, a multiobjective multitasking optimization problem that minimizes all objectives of each task can be defined as follows [4]:

min F(x_1, x_2, ..., x_K) = min(F_1(x_1), F_2(x_2), ..., F_K(x_K))
s.t. x_i ∈ Ω_i, i ∈ {1, 2, ..., K}    (1)

where F_i is the i-th task to be optimized (an MOP), x_i = (x_i^1, x_i^2, ..., x_i^{D_i}) is the D_i-dimensional decision variable vector, and Ω_i denotes the feasible domain of task i. Here, for each task there are multiple objectives that need to be optimized at the same time, and there are conflicting relationships between these objectives. The goal is to find a representative Pareto front (PF) for each multiobjective optimization task, so as to help decision makers analyze the relationships between different objectives and then make reasonable trade-offs and decisions [22]. Optimizing each task individually is the most straightforward approach. However, in similar environments there may be implicit relationships between these tasks, so it is worthwhile to explore and exploit potentially useful information between them to improve the efficiency of task solving.

B. Decomposition Strategy
A series of weight vectors and a scalarizing function [23] are the two components of the decomposition strategy in decomposition-based multiobjective evolutionary algorithms. Each weight vector, together with the multiple objectives combined through the scalarizing function, constitutes a single-objective optimization subproblem, and a population is then employed to optimize these subproblems simultaneously [17], [18]. The Euclidean distance between the weight vectors defines the neighborhood structure between these subproblems. It is generally assumed that subproblems with closer weight vectors in the same multiobjective optimization problem have similar optimal solutions [20]. Therefore, during population evolution, the exploitation and exploration of the algorithm are balanced by the neighborhood structure (i.e., whether the selection and update of parents comes from the neighborhood). Based on the idea of decomposition, some variants of decomposition-based evolutionary algorithms have been proposed, such as embedded dynamical resource allocation [20], [24], [25], angle-based decomposition [26], decomposing a multiobjective problem into some simple MOPs [27], [28], combining domination-based strategies [29]-[31], etc.
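As an illustration of these two components, the following minimal sketch (the function names are ours, not from the paper) builds Euclidean-distance neighborhoods over a small set of weight vectors and evaluates a candidate with an achievement scalarizing function of the form max_i w_i |f_i − z_i|:

```python
import math

def internal_neighborhoods(W, T):
    """For each weight vector, indices of its T closest weight
    vectors under Euclidean distance (itself included)."""
    return [sorted(range(len(W)), key=lambda j: math.dist(w, W[j]))[:T]
            for w in W]

def asf(F, w, z):
    """Achievement scalarizing function: max_i w_i * |f_i - z_i|."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, F, z))

# five uniformly spread two-dimensional weight vectors
W = [(0.0, 1.0), (0.25, 0.75), (0.5, 0.5), (0.75, 0.25), (1.0, 0.0)]
print(internal_neighborhoods(W, 3)[0])   # neighbors of the first subproblem
print(asf((0.4, 0.6), W[2], (0.0, 0.0)))
```

Each subproblem then minimizes its own scalarized value, and parents are drawn preferentially from the neighborhood index sets.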
C. Grey Relation Analysis
Grey relation analysis quantifies the degree of similarity or dissimilarity between different factors by calculating the numerical relationship between them, i.e., between a reference sequence and a number of compared sequences, in order to evaluate whether the factors are closely related or not. For a given normalized reference sequence Y = {y(k) | k = 1, 2, ..., n} and compared sequences X_i = {x_i(k) | k = 1, 2, ..., n}, i = 1, 2, ..., m, the grey relational degree of the reference sequence to a compared sequence can be calculated in the following form [21]:

r_i = (1/n) Σ_{k=1}^{n} [min_i min_k Δ_i(k) + ρ · max_i max_k Δ_i(k)] / [Δ_i(k) + ρ · max_i max_k Δ_i(k)]    (2)

where Δ_i(k) = |y(k) − x_i(k)|, n is the dimension of each factor, and m is the number of compared sequences. ρ ∈ [0, 1] is the distinguishing coefficient; the smaller the value of ρ, the greater its distinguishing ability, and its value is usually set as ρ = 0.5 [21].

D. Related Work
The basic scheme of an MTEA is to map the decision spaces of all tasks into a unified search space, and to optimize these tasks simultaneously with a single population that is encoded in the unified search space [1], [4]. Furthermore, information transfer between tasks is performed by updating the current population with offspring that are generated by the genetic operator, with a certain probability (i.e., the random mating probability rmp), from individuals associated with different tasks. Such information transfer is also known as implicit information transfer [32]. Feng et al. [32] argued that the method of knowledge transfer through genetic operators in [1], [4] limits the use of other evolutionary search operators, so that some high-performance evolutionary search operators cannot be embedded in existing multitasking evolutionary algorithms. Further, Feng et al. [32] established the mapping relationship of different tasks in the decision space through autoencoder technology to transfer knowledge, which is called explicit transfer. It should be noted that training an autoencoder network is a time-consuming process. Tang et al. [33] applied the principal component analysis method to map the domains of multiple tasks to a low-dimensional aligned subspace, and employed this subspace for information transfer. In addition, Feng et al. [34] tackled the problem of knowledge transfer between tasks by constructing a weighted l1-norm-regularized reconstruction error between different combinatorial optimization problems. In order to harness the distinct strengths of different crossover operators, Zhou et al. [35] proposed an adaptive knowledge transfer strategy based on multiple crossover operators. Yao et al. [19] proposed a multiobjective multitasking evolutionary algorithm based on the decomposition strategy [17], in which the way of information transfer between tasks can be summarized as follows: the offspring generated by the parents associated with the same task can be used to update the individuals associated with other tasks. However, this information transfer method does not explore and mine the relationship between tasks. On the other hand, Lin et al. [36] used an incremental naive Bayes classifier to select several individuals from other tasks for the target task as transferred knowledge to participate in the production of offspring, and then used these selected individuals as training data to update the classification model. Bali et al. [37], [38] pointed out that negative information transfer may impair the solution of a task; they employed data-driven technology to analyze the overlap in the probabilistic search distributions of the different tasks and adjusted the probability of information transfer between tasks to prevent negative transfer. Similarly, Zheng et al. [39] suggested that a fixed probability of information transfer limits the sharing and utilization of useful knowledge between tasks, and proposed a self-regulated strategy based on an ability vector.

III. PROPOSED ALGORITHM
In this section, we first give a detailed description of our proposed algorithm, and then some discussion of the proposed algorithm is presented.
A. Algorithm Framework
The framework of our proposed algorithm is shown in Algorithm 1. At the beginning, items such as the internal and external neighborhood structures B and B̃ and the task index Φ are initialized, as shown in line 1 and explained in detail in Algorithm 2. In the main loop of the algorithm, the joint set U of all sub-populations P^i (each corresponding to a task i), after shuffling, is traversed, as shown in line 3. Then, for each individual x in the joint set U, the following procedure is performed step by step. First, the index of the task to which the current individual x belongs and the index of the subproblem to which x is matched are obtained (lines 5 and 6). Secondly, the candidate set Q, i.e., a set of subproblem indexes used to generate the offspring, is determined (line 7). It should be noted that determining Q requires first determining which task the candidate set Q comes from (please refer to Algorithm 3 for details). The next step is the production of offspring, which is introduced in Algorithm 4. Finally, the generated offspring is used to update the current state of some items, whose detailed steps are shown in Algorithm 5. In the following, we further elaborate on Algorithms 2-5, respectively.
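The control flow described above can be sketched as a toy skeleton (the callables stand in for Algorithms 3-5 and are hypothetical stubs, not the authors' implementation):

```python
import random

def framework_sketch(P, n_iters, candidate_set, reproduce, update):
    """Control flow of Algorithm 1 (sketch). P maps a task index to its
    sub-population; candidate_set/reproduce/update stand in for
    Algorithms 3-5."""
    for _ in range(n_iters):
        # join all sub-populations and shuffle (line 3 of Algorithm 1)
        U = [(k, i) for k in P for i in range(len(P[k]))]
        random.shuffle(U)
        for cur, tau in U:
            tar, Q = candidate_set(cur, tau)            # Algorithm 3
            child = reproduce(P[cur][tau], tar, Q, P)   # Algorithm 4
            update(cur, tar, child, tau, P)             # Algorithm 5
    return P

# trivial stand-ins, just to show the calling convention
P = {1: [0.1, 0.2], 2: [0.3]}
framework_sketch(P, n_iters=2,
                 candidate_set=lambda cur, tau: (cur, [0]),
                 reproduce=lambda x, tar, Q, pop: x,
                 update=lambda cur, tar, child, tau, pop: None)
print(P)
```

The real algorithm threads the ideal points Z, the weight vectors W, and the neighborhood structures through these calls as well.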
1) Initialization:
Algorithm 1: Framework of MTEA/D-DN
  Input: sub-population size N; internal neighborhood size T; neighborhood selection probability β;
  Output: final sub-population set P;
  1: [W, B, B̃, Φ, P, Z] := Initialization(N, T);   // Algorithm 2
  2: while stopping criterion is not met do
  3:     U := ∪_{i=1}^{K} P^i;   // join and shuffle
  4:     foreach x in U do
  5:         cur ← task index where x is located;
  6:         τ ← subproblem index of individual x;
  7:         [tar, Q] := CandidateSetSelection(cur, τ, β);   // Algorithm 3
  8:         x̂ := Reproduction(x, tar, Q, P);   // Algorithm 4
  9:         [Z, P, Φ, B̃] := Update(cur, tar, x̂, τ, Z, W, Q, P, Φ, B̃);   // Algorithm 5
  10: return P;

We use superscripts to denote the index of the task that an item is associated with. For task k, the items to be initialized include: the set of weight vectors W^k, the internal neighborhood structure B^k of each subproblem, the task index Φ^k of the external neighborhood of each subproblem, the external neighborhood structure B̃^k of each subproblem, the sub-population P^k, and the ideal point Z^k. For the weight vectors, we use the method mentioned in [40] to initialize them. The internal neighborhood of each subproblem is defined as the T closest weight vectors based on the Euclidean distance between the weight vectors [17]. Its external neighborhood, at the initialization step, is defined as all the subproblems of a randomly selected task. All individuals are encoded within the unified search space [1]. Note that when an individual is evaluated by a task, e.g., task k, it is necessary to map the individual from the unified search space to the decision space of that task, as shown in formula (3):

y_j^k = L_j^k + (U_j^k − L_j^k) · x_j^k    (3)

where L_j^k and U_j^k represent the lower and upper bounds of the j-th decision variable of task k, respectively, j ∈ {1, 2, ..., D_k}, and x_j^k denotes the value of the j-th dimension of the individual in the unified search space. Finally, the ideal point Z^k is initialized. The main procedure of the initialization process is presented in Algorithm 2.
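The mapping of formula (3) is a simple affine rescaling from the unified space to each task's box constraints; a minimal sketch (the function name is ours):

```python
def to_task_space(x, L, U):
    """Map an individual x from the unified space [0,1]^D to the
    decision space of one task via y_j = L_j + (U_j - L_j) * x_j.
    Only the first len(L) dimensions of x are used when the task
    has a lower dimensionality than the unified space."""
    return [l + (u - l) * xj for xj, l, u in zip(x, L, U)]

# a point in the middle of the unified space maps to the middle of each range
print(to_task_space([0.5, 0.5], L=[-100, -50], U=[100, 50]))
```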
2) Candidate Set Selection:
Here, the neighborhood selection probability β is used to control whether the candidate set comes from the sub-population of the current task or from the neighborhood of the current subproblem. If it comes from the neighborhood, then it is randomly chosen between the internal neighborhood and the external neighborhood of the current subproblem, as shown in lines 3-8 of Algorithm 3.
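The external neighborhood consulted in this step is maintained through grey relation analysis (Section II-C). As a reference, the grey relational degree computation of (2) can be sketched as follows (a minimal illustration assuming already-normalized sequences and the common ρ = 0.5; the function name is ours):

```python
def grey_relational_degrees(Y, X, rho=0.5):
    """Grey relational degree of reference sequence Y to each
    compared sequence in X; sequences are assumed normalized."""
    deltas = [[abs(y - x) for y, x in zip(Y, xs)] for xs in X]
    d_min = min(min(row) for row in deltas)   # global minimum of deltas
    d_max = max(max(row) for row in deltas)   # global maximum of deltas
    n = len(Y)
    return [sum((d_min + rho * d_max) / (d + rho * d_max) for d in row) / n
            for row in deltas]

Y = [0.2, 0.5, 0.8]
X = [[0.2, 0.5, 0.8],   # identical to the reference
     [0.9, 0.1, 0.3]]
print(grey_relational_degrees(Y, X))  # the identical sequence scores highest
```

In MTEA/D-DN, Y would be the mean decision vector of the current subproblem's internal neighborhood and each compared sequence the mean decision vector of a candidate neighborhood in the target task.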
3) Reproduction:
We employ the differential evolution (DE) [41] crossover operator to generate the offspring, as shown in Algorithm 4. Here, P^tar(Q) denotes the set of individuals in the sub-population P^tar (associated with task tar) whose index values belong to Q. Finally, the offspring is obtained by applying the polynomial mutation (PM) [42] operator.
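A DE-rand/1/bin crossover followed by polynomial mutation can be sketched as follows (a simplified illustration in the [0,1]-encoded unified space; the parameter defaults here are ours, not the paper's settings):

```python
import random

def de_rand_1_bin(x, x1, x2, F=0.5, Cr=0.9):
    """DE crossover (sketch): mutant x + F*(x1 - x2), binomial
    crossover with the parent x; all vectors live in [0,1]."""
    D = len(x)
    j_rand = random.randrange(D)  # at least one gene from the mutant
    child = []
    for j in range(D):
        if random.random() < Cr or j == j_rand:
            child.append(min(1.0, max(0.0, x[j] + F * (x1[j] - x2[j]))))
        else:
            child.append(x[j])
    return child

def polynomial_mutation(x, eta=20.0, pm=None):
    """Polynomial mutation on a [0,1]-encoded vector (sketch);
    the perturbation scale equals the range, which is 1 here."""
    pm = 1.0 / len(x) if pm is None else pm
    y = list(x)
    for j in range(len(y)):
        if random.random() < pm:
            u = random.random()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            y[j] = min(1.0, max(0.0, y[j] + delta))
    return y

child = polynomial_mutation(de_rand_1_bin([0.5] * 5, [0.7] * 5, [0.2] * 5))
print(child)
```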
4) Update:
Specifically, the update process is organized into two parts. One is to update the candidate set of the target task, as shown in lines 1-11 of Algorithm 5, and the other is to update the external neighborhood of the current subproblem, as shown in lines 12-20 of Algorithm 5. Here, if the external neighborhood of a subproblem participated in the production of the offspring, the candidate set to be updated is chosen at random from the internal and external neighborhoods of that subproblem, as in lines 1-4 of Algorithm 5; this can be called a bidirectional update. Next, the target task is used to evaluate the offspring and to update the ideal point of the target task, as shown in lines 5-6. Lines 8-11 are the specific steps for updating the candidate set, where the achievement scalarizing function (ASF) [23] is used as the scalarizing function:

u_ASF(F(x); w) = max_{i=1}^{m} w_i |f_i(x) − z_i^*|    (4)

The index set A is used to record which subproblems have been updated by the newly generated offspring. When the current task and the target task are different, the subproblems that bring benefits are worth further mining for the valuable information between them; that is, the subproblems in set A and the current subproblem are more worth exploring, and the corresponding tasks have more investment value. If A is an empty set, i.e., the current subproblem fails to update any of the subproblems within its external neighborhood, then the external neighborhood of the current subproblem is reset, as shown in lines 13-15 of Algorithm 5. Otherwise, we use grey relation analysis to select the internal neighborhood of the subproblem (in set A) in the target task with the largest grey relational degree as the external neighborhood of the current subproblem, as shown in lines 17-20 of Algorithm 5.

Algorithm 2: Initialization
  Input: sub-population size N, internal neighborhood size T;
  Output: weight vectors W, internal neighborhood B, external neighborhood B̃, task index of external neighborhood Φ, population set P, ideal point Z;
  1: for k := 1..K do
  2:     W^k := {w_1^k, w_2^k, ..., w_N^k} ← Generate a set of weight vectors by the method proposed in [40];
  3:     B^k := {B_1^k, B_2^k, ..., B_N^k}, where B_i^k is the index set of the T weight vectors closest to w_i^k in W^k;
  4:     Φ^k := {φ_1^k, φ_2^k, ..., φ_N^k}, where φ_i^k is randomly selected from {1, 2, ..., K} with φ_i^k ≠ k;
  5:     B̃^k := {B̃_1^k, B̃_2^k, ..., B̃_N^k}, where B̃_i^k := {1, 2, ..., N};
  6:     P^k := {x_1^k, x_2^k, ..., x_N^k} ← Randomly generate N individuals to form a sub-population for task k;
  7:     Z^k := (z_1^k, z_2^k, ..., z_{m_k}^k) ← Initialize the ideal point of task k from P^k;
  8: return [W, B, B̃, Φ, P, Z];

Algorithm 3: CandidateSetSelection
  Input: current task index cur, current subproblem index τ, neighborhood selection probability β;
  Output: target task index tar, candidate set Q;
  1: tar := cur;
  2: Q := {1, 2, ..., N};
  3: if rand(0, 1) < β then
  4:     // randomly select a neighborhood
  5:     if rand(0, 1) < 0.5 then
  6:         Q := B_τ^cur;   // internal neighborhood
  7:     else
  8:         tar := φ_τ^cur; Q := B̃_τ^cur;   // external neighborhood
  9: return [tar, Q];

Algorithm 4: Reproduction
  Input: individual x, index of target task tar, candidate set Q, population set P;
  Output: offspring x̂;
  1: x_1 := x;
  2: Randomly select two individuals from P^tar(Q), denoted as x_2 and x_3; generate an offspring u through the differential evolution operator (DE-rand/1/bin) based on x_1, x_2, and x_3;
  3: x̂ ← Apply the polynomial mutation operator to u;
  4: return x̂;

Algorithm 5: Update
  Input: current task index cur, target task index tar, offspring x̂, subproblem index τ, ideal point Z, weight vectors W, candidate set Q, population set P, task index of external neighborhood Φ, external neighborhood B̃;
  Output: updated ideal point Z, population P, task index of external neighborhood Φ, and external neighborhood B̃;
  1: if cur ≠ tar then   // step 1: update candidate set
  2:     // randomly select a neighborhood
  3:     if rand(0, 1) < 0.5 then
  4:         Q := B_τ^cur; tar := cur;
  5: Evaluate x̂ by task tar;
  6: Update the ideal point Z^tar;
  7: A := ∅;   // record the updated subproblems
  8: foreach q in Q do
  9:     if u(F(x̂), w_q^tar) < u(F(x_q^tar), w_q^tar) then
  10:        x_q^tar := x̂;
  11:        A := A ∪ {q};
  12: if cur ≠ tar then   // step 2: update external neighborhood
  13:     if |A| = 0 then
  14:         φ_τ^cur ← Randomly select an element from {1, 2, ..., K} with φ_τ^cur ≠ cur;
  15:         B̃_τ^cur := {1, 2, ..., N};
  16:     else
  17:         Y ← Set the mean value of the decision variables of P^cur(B_τ^cur) as the reference sequence;
  18:         X := {x̄_i^tar | i ∈ A} ← Set the compared sequences, where x̄_i^tar is the mean value of the decision variables of P^tar(B_{A_i}^tar);
  19:         n := argmax_{i=1..|X|} r_i, where r_i is the grey relational degree calculated by (2);
  20:         B̃_τ^cur := B_{A_n}^tar;
  21: return [Z, P, Φ, B̃];

B. Discussion
In our proposed algorithm, the internal neighborhood constructs the relationship between subproblems within the same optimization task, whereas the external neighborhood constructs the relationship between subproblems belonging to different tasks. For a subproblem, the internal neighborhood is determined by the pre-defined weight vectors, so the weight vectors fix the internal neighborhood structure. Its external neighborhood, in contrast, is explored and mined through the exchange of information between tasks, and changes dynamically with the evolutionary process of the population. When the optimization of a subproblem of the current task can promote a subproblem of the target task, the neighborhood of that subproblem in the target task is also more worthy of being explored and mined. Of course, for a stochastic algorithm, it is difficult to guarantee that each exchange of information between tasks will yield a significant reward; that is, individuals in the candidate set (external neighborhood) might not be updated by their offspring. In this case, taking all subproblems of a task as the external neighborhood of the current subproblem makes the communication between tasks more exploratory. Besides, we consider the internal and external neighborhoods of a subproblem to be equally important, which is why the candidate set to be updated is randomly selected (lines 2-4 of Algorithm 5) from the internal and external neighborhoods of the subproblem when there is an exchange of information between different tasks. A more detailed discussion of the internal and external neighborhoods is presented in Section IV-F.

IV. EXPERIMENTAL STUDIES

In this section, we first compare our proposed algorithm with five state-of-the-art algorithms. Then, the effect of the internal and external neighborhoods on our proposed algorithm is further analyzed. Finally, sensitivity analysis experiments are conducted on some parameters of our proposed algorithm.
A. Competing Algorithms
Four state-of-the-art multitasking algorithms, namely MO-MFEA [4], MO-MFEA-II [38], EMTIL [36], and MFEA/D-DE [19], together with a traditional decomposition-based multiobjective evolutionary algorithm, MOEA/D [18], are used as the compared algorithms. Among them, MO-MFEA is the first paradigm of the multiobjective multitasking evolutionary algorithm. MO-MFEA-II is a variant of MO-MFEA which employs data-driven technology to establish a similarity relationship model between different tasks; this model is used to adjust the frequency of information transfer in order to guarantee that useful information can be fully utilized while useless information is abandoned. An incremental learning method is utilized in EMTIL to exploit potentially valuable information between different tasks. The selection pressure in these three algorithms is based on a dominance strategy. MFEA/D-DE is the first attempt to use the decomposition-based strategy in multiobjective MTO (MOMTO). Our algorithm is also based on a decomposition strategy, so the traditional decomposition-based multiobjective evolutionary algorithm MOEA/D [18] is also used as a compared algorithm.
B. Test Instances
We evaluate the MTEA/D-DN algorithm on 9 multiobjective MTO benchmark test instances, each composed of two MOPs. In terms of the similarity between the tasks of a test instance, these test instances can be categorized into three groups: high similarity (HS), medium similarity (MS) and low similarity (LS). From the intersection of the global minima, they can be categorized into complete intersection (CI), partial intersection (PI) and no intersection (NI). Combining these two perspectives, the 9 test instances are denoted as CIHS, CIMS, CILS, PIHS, PIMS, PILS, NIHS, NIMS and NILS. Taking CIHS as an example, the name indicates that the test instance is Complete Intersection with High Similarity. For the detailed settings of these test instances, such as the variable range and dimension of each task, please refer to [43].
C. Performance Metrics
The most straightforward manner of evaluating the performance of a multitasking optimization algorithm is to analyze the quality of the solutions of each task independently. For an MOP, it is difficult to comprehensively measure the performance of an algorithm with a single metric. In general, the diversity and convergence of the approximate PF obtained by an algorithm are two important aspects of its performance. The inverted generational distance (IGD) [44] and hypervolume (HV) [45] are both composite metrics that measure the quality of the approximate PF obtained by the algorithm. The IGD metric can be calculated in the following form:

IGD(S, PF*) = (1/|PF*|) Σ_{x ∈ PF*} min_{y ∈ S} dist(x, y)    (5)

where S represents the approximate PF obtained by the algorithm, PF* represents a subset of the true PF, and dist(x, y) is the Euclidean distance between points x and y. A smaller IGD value means a better performance of the algorithm.

Then, given a reference point z^r = (z_1^r, z_2^r, ..., z_m^r), the hypervolume metric can be computed using formula (6):

HV(S, z^r) = VOL(∪_{y ∈ S} [y_1, z_1^r] × ... × [y_m, z_m^r])    (6)

where VOL(·) represents the Lebesgue measure; the nadir point can be used as the reference point, or it can be defined by the user. The HV metric is the opposite of the IGD metric: a larger value indicates a better performance of the algorithm.

Note that for both metrics, we normalize the points on the approximate PF obtained by the algorithm by using the nadir point and ideal point derived from the true PF. For each test instance, all algorithms are run independently 21 times.

D. Parameter Settings
For the same test instance, the termination condition for all algorithms is the number of evaluations (Eva) of the objective function, which is set to K times a fixed per-task budget, where K is the number of tasks. Here, the number of evaluations of the objective function refers to the sum of the numbers of evaluations of all tasks in a test instance.

TABLE I
OPERATOR PARAMETERS (entries marked "·" are not recoverable from this copy)

Algorithm   | SBX: η_c, p_c | DE: F, Cr | PM: η_m, p_m
MO-MFEA     | 10, 1         | –, –      | 10, 1/D
MO-MFEA-II  | 10, 1         | –, –      | 10, 1/D
EMTIL       | 15, 1         | –, –      | 20, 1/D
MFEA/D-DE   | –, –          | ·, ·      | ·, 1/D
MOEA/D      | –, –          | ·, ·      | ·, 1/D
MTEA/D-DN   | –, –          | ·, ·      | ·, 1/D

For the reproduction of offspring, the MO-MFEA, MO-MFEA-II and EMTIL algorithms apply the simulated binary crossover (SBX) operator [46], while the MFEA/D-DE, MOEA/D and MTEA/D-DN algorithms apply the DE [41] crossover operator. Moreover, the final offspring in all algorithms are obtained by the PM [42] operator. The parameter settings of these operators are shown in Table I. Here, η_c and p_c are the distribution index and crossover probability of the SBX operator, F and Cr are the constant factor and crossover constant of the DE operator, and η_m and p_m are the distribution index and mutation probability of the PM operator. The symbol D represents the dimension of the decision variable in the unified search space (except for MOEA/D, for which it is the dimension of the decision variable of the optimized task).

In addition, the special parameters of some algorithms, such as the random mating probability rmp in the MO-MFEA and MFEA/D-DE algorithms, are set to 0.3 and 0.1, respectively. The number of transferred solutions in EMTIL is set to 10. The settings of the neighborhood selection probability β, the neighborhood size T, and the maximum number of replacements n_r in MFEA/D-DE, MOEA/D and our proposed algorithm are shown in Table II.

TABLE II
DECOMPOSITION STRATEGY PARAMETERS (entries marked "·" are not recoverable from this copy)

Algorithm  | β | T | n_r
MFEA/D-DE  | · | · | ·
MOEA/D     | · | · | ·
MTEA/D-DN  | · | · | –

E. Compared to State-of-the-Art Algorithms
Tables III and IV show the statistical analysis of the means and standard deviations of our proposed algorithm and its rival algorithms with respect to the IGD and HV metrics, respectively. Here, the algorithm that achieves the best performance is marked with dark gray shading and the one that achieves the second best is marked with light gray shading. On the whole, for both the IGD and HV metrics, the statistical results demonstrate that our proposed algorithm performs better than its rival algorithms. Since all the algorithms failed to obtain a solution that dominates the reference point each time they were run, the HV metrics corresponding to PIHS2, PIMS2, NIHS1, and NILS2 are all 0 in Table IV. The multitasking evolutionary algorithm MFEA/D-DE, which is also based on the decomposition strategy, performs second best, with the light gray shading covering the most entries. This suggests that decomposition-based multiobjective evolutionary algorithms are better suited to solving MOMTO problems. Taking the test instances CIHS, CIMS and CILS as examples, we present in Figure 1 the final non-dominated solution sets of all tested algorithms in terms of the median IGD metric. From this figure, it can be found that our proposed MTEA/D-DN always achieves the best or competitive results.
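For reference, the IGD metric of (5) used in this comparison can be sketched as follows (a minimal illustration; the names are ours):

```python
import math

def igd(S, PF):
    """Inverted generational distance: mean distance from each
    reference point in PF to its nearest point in the obtained set S."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(p, s) for s in S) for p in PF) / len(PF)

PF = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(igd(PF, PF))            # a set equal to the reference front scores 0
print(igd([(2.0, 2.0)], PF))  # a distant set scores much worse
```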
F. Discussion of the Internal and External Neighborhoods
To analyze the effect of the proposed dual neighborhoodstrategy, in this experiment we further discuss the performanceof our proposed algorithm under the condition of employingonly internal neighborhood, or only external neighborhood.In the case of employing only the internal neighborhood, wedenote the corresponding algorithm as MTEA/D-IN, whichmeans that the information exchange channel between differ-ent tasks is closed, and that the external neighborhood of thesubproblem is not participated in the candidate set selectionand update. On the other hand, in the case of employing onlythe external neighborhood, the resulting algorithm is denotedas MTEA/D-EN, which means that the generated offspringwill only be used to update its external neighborhood. In thissituation there is some information exchange between tasks,but it is unidirectional. Here, the above 9 test instances are usedas benchmark test problems and the IGD metric is adoptedas a performance metric. Our proposed algorithm is taken asthe compared algorithm and statistical analysis is performedutilizing the Friedman’s test. The results of average rankingobtained by the three algorithms are shown in Table V. Basedon the results, it can be seen that the proposed algorithmperforms much better when it has the dual neighborhood, andthat the algorithm with only the external neighborhood (i.e.,
TABLE III: Performance analysis of MTEA/D-DN and its state-of-the-art comparison algorithms (MO-MFEA, MO-MFEA-II, EMTIL, MFEA/D-DE, MOEA/D) in terms of the mean and standard deviation of the IGD metric on the test instances CIHS1–NILS2.

TABLE IV: Performance analysis of MTEA/D-DN and its state-of-the-art comparison algorithms in terms of the mean and standard deviation of the HV metric on the same test instances.

MTEA/D-EN) can achieve a slightly better average ranking than the algorithm with only the internal neighborhood (i.e., MTEA/D-IN).
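For concreteness, the two neighborhood types under comparison can be sketched as follows. This is a minimal illustration assuming the internal neighborhood is built from Euclidean distances among a task's own weight vectors and the external neighborhood from grey relational grades between the weight vectors of two tasks (with the common distinguishing coefficient rho = 0.5); it is a simplification for exposition, not the authors' exact procedure.

```python
import numpy as np

def internal_neighborhood(weights, T):
    """For each subproblem of one task, the indices of the T weight
    vectors closest in Euclidean distance (the standard MOEA/D
    neighborhood, including the subproblem itself)."""
    d = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)
    return np.argsort(d, axis=1)[:, :T]

def grey_relational_grade(ref, others, rho=0.5):
    """Grey relational grade of each row of `others` with respect to
    `ref`; larger values indicate a stronger relation."""
    delta = np.abs(others - ref)              # absolute difference sequences
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=1)                 # average the coefficients

def external_neighborhood(weights_a, weights_b, size):
    """For each subproblem of task A, the `size` subproblems of task B
    with the highest grey relational grade."""
    return np.array([np.argsort(-grey_relational_grade(w, weights_b))[:size]
                     for w in weights_a])
```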
TABLE V: Average Ranking of the Algorithms

Algorithm      Average Ranking
MTEA/D-EN      2.17
MTEA/D-IN      2.33
MTEA/D-DN      1.50
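The average rankings of this kind follow the standard Friedman procedure: on each test instance the algorithms are ranked by their result (rank 1 is best), and the ranks are then averaged over all instances. A minimal sketch, assuming a score matrix in which lower values are better (as with IGD):

```python
import numpy as np

def friedman_average_ranks(scores):
    """scores: (n_instances, n_algorithms) array, lower is better.
    Returns the average rank of each algorithm over all instances,
    with tied scores sharing the mean of their ranks."""
    n_inst, n_alg = scores.shape
    ranks = np.empty_like(scores, dtype=float)
    for i, row in enumerate(scores):
        order = np.argsort(row)
        r = np.empty(n_alg)
        r[order] = np.arange(1, n_alg + 1)    # rank 1 = best score
        for v in np.unique(row):              # ties share the mean rank
            tie = row == v
            r[tie] = r[tie].mean()
        ranks[i] = r
    return ranks.mean(axis=0)
```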
Further, the convergence curves of the median IGD metric over the search process for these test instances are shown in Figures 2-4. Figure 2, which focuses on the tasks with complete intersections, indicates that the adoption of the external neighborhood can significantly improve the convergence speed. The main reason is that, because the tasks have complete intersections, information exchange between subproblems through the external neighborhoods becomes more efficient. In Figure 3, which focuses on the tasks with partial intersections, the benefit of the external neighborhood tends to diminish, but it remains clear for some tasks. With respect to Figure 4, which focuses on the tasks
Fig. 1. Non-dominated solution sets of the median IGD metric obtained by MTEA/D-DN and its rival algorithms for CIHS, CIMS and CILS
Fig. 2. The convergence curves of the median IGD metric for CIHS, CIMS and CILS

with no intersections, the performance difference between the internal neighborhood and the external neighborhood becomes insignificant. Based on these results, it can be concluded that mining valuable information between tasks (i.e., MTEA/D-DN and MTEA/D-EN) can improve the convergence speed when solving different tasks, especially when the tasks have a high degree of intersection. Moreover, when the internal and external neighborhoods of a subproblem work together, they make the algorithm perform better and more robustly.
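To make this interplay concrete, one way the two neighborhoods can be combined in a single variation/update step is sketched below; the use of a neighborhood selection probability beta follows the sensitivity analysis in the next subsection, while the function and parameter names are illustrative assumptions rather than the authors' implementation.

```python
import random

def select_update_scope(internal_nb, external_nb, beta, rng=random):
    """Choose which neighborhood a subproblem works with in this step:
    with probability beta the external (cross-task) neighborhood is used
    for mating and replacement, enabling information exchange; otherwise
    the internal (within-task) neighborhood is used, as in plain MOEA/D.
    In this sketch, beta = 0 corresponds to MTEA/D-IN-like behavior and
    beta = 1 to MTEA/D-EN-like behavior."""
    if rng.random() < beta:
        return external_nb
    return internal_nb
```

With beta in the range 0.1-0.4, most steps remain within-task while a steady fraction of cross-task updates is preserved, matching the balance observed in the sensitivity analysis.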
G. Sensitivity Analysis
In this experiment, we analyze the sensitivity of two hyperparameters of our proposed algorithm, namely the neighborhood selection probability β and the internal neighborhood size T. The values of the two parameters are set as β ∈ {0.0, 0.1, ..., 1.0} and T ∈ {10, 20, ..., 90}, respectively, and the other parameters are kept the same as in the settings above. Based on the test instances mentioned before, Friedman's test is adopted for the statistical analysis. Figure 5 illustrates the change of the average ranking
Fig. 3. The convergence curves of the median IGD metric for PIHS, PIMS and PILS
Fig. 4. The convergence curves of the median IGD metric for NIHS, NIMS and NILS

obtained by different values of β and T. Here, Figure 5(a) shows that, for a given internal neighborhood size, the performance of the algorithm (in terms of average ranking) first improves and then deteriorates as the neighborhood selection probability increases. This indicates that when β increases from 0, the selection between the internal and the external neighborhood starts to work and thus improves the algorithm's performance. But when β becomes too large, most of the search effort is allocated to information exchange rather than to the search within each task, which in turn deteriorates the performance. Therefore, the setting of β should guarantee a good balance between the search within each task and the information exchange between different tasks. The results indicate that the algorithm is most competitive for neighborhood selection probabilities of 0.1-0.4. In addition, Figure 5(b) shows that, for a given neighborhood selection probability, the average ranking of the algorithm is relatively robust to changes of the internal neighborhood size. Based on the two sub-figures, it can be concluded that the adoption of the external neighborhood has
Fig. 5. Average ranking with different hyperparameters under the IGD metric

a more significant effect on the performance of our proposed algorithm, which is consistent with the analysis and conclusions in Section IV-F.

V. CONCLUSION AND FUTURE WORK
In this paper, we proposed a multiobjective multitasking evolutionary algorithm based on decomposition with a dual neighborhood. Following the traditional decomposition strategy, each optimization task, an MOP, is decomposed into a number of single-objective optimization subproblems using a set of predefined weight vectors. Further, in addition to an internal neighborhood defined by the Euclidean distance between weight vectors, each subproblem is also associated with subproblems of other tasks through grey relational analysis, which forms its external neighborhood. The internal and external neighborhoods of a subproblem are used to exploit the correlations and potentially valuable information between different tasks, further improving the efficiency with which the different tasks are solved. The experimental results demonstrated that our proposed algorithm outperforms four state-of-the-art algorithms and a traditional decomposition-based multiobjective evolutionary algorithm. Our future work will focus on how to prevent negative transfer of information between tasks and how to exploit valuable information among tasks when there are many tasks.