ASBSO: An Improved Brain Storm Optimization With Flexible Search Length and Memory-Based Selection
arXiv preprint [cs.NE] — IEEE ACCESS
Yang Yu, Shangce Gao, Senior Member, IEEE, Yirui Wang, Jiujun Cheng, and Yuki Todo, Member, IEEE
Abstract—Brain storm optimization (BSO) is a newly proposed population-based optimization algorithm which uses a logarithmic sigmoid transfer function to adjust its search range during the convergent process. However, this adjustment varies only with the current iteration number; the resulting lack of flexibility and variety leads to poor search efficiency and robustness. To alleviate this problem, an adaptive step length structure together with a success-memory-based selection strategy is proposed and incorporated into BSO. The proposed method, adaptive step length with memory-based selection BSO (ASBSO), applies multiple step lengths to modify the generation process of new solutions, thus supplying a flexible search adapted to the problem at hand and to the current convergent period. A novel memory mechanism, which evaluates and stores the degree of improvement of solutions, is used to determine the selection probability of the step lengths. A set of 57 benchmark functions is used to test ASBSO's search ability, and four real-world problems are adopted to show its application value. All these test results indicate the remarkable improvement of ASBSO in solution quality, scalability, and robustness.
Index Terms—Brain storm optimization, adaptive step length, memory-based selection, population-based optimization, swarm intelligence
I. INTRODUCTION

Nowadays, many swarm intelligence algorithms have been proposed to solve complex real-world problems [1], [2]. The brain storm optimization algorithm (BSO), one of the swarm intelligence algorithms, is promising in solving complex problems [3]. It is inspired by human brainstorming behavior. Each idea generated by the human brain represents an individual in the search space. In a brainstorming process, humans first generate some rough ideas, then exchange and discuss these ideas with each other. The inferior ideas are sifted out while the superior ones are kept. This operation repeats over and over, which makes the ideas more and more mature. In the meanwhile, new ideas keep being generated and joining the cycle. As the process ends, a feasible and effective idea emerges.
This research was partially supported by the National Natural Science Foundation of China (Grant Nos. 61472284, 61673403) and JSPS KAKENHI Grant Numbers JP17K12751 and JP15K00332. (Corresponding authors: Shangce Gao ([email protected]); Jiujun Cheng ([email protected]).) Yang Yu, Shangce Gao, and Yirui Wang are with the Faculty of Engineering, University of Toyama, Toyama 930-8555, Japan (e-mail: [email protected]). Jiujun Cheng is with the Key Laboratory of Embedded System and Service Computing, Ministry of Education, Department of Computer Science and Technology, Tongji University, Shanghai 200092, China. Yuki Todo is with the School of Electrical and Computer Engineering, Kanazawa University, Kanazawa-shi 920-1192, Japan.
Since the announcement of BSO in 2011, it has received much attention from researchers in the swarm intelligence community due to its novelty and efficiency. It has been successfully applied in different scenarios, such as function optimization, engineering problems, and financial prediction [4], [5], [6], [7], [8]. Moreover, some modifications of BSO have been made to enhance its performance from several perspectives. For example, a multi-objective BSO (MBSO) is proposed in [9] for solving multi-objective optimization problems. Its clustering strategy is applied in the objective space to handle multi-objective optimization problems, whereas clustering is originally performed in the solution space for solving single-objective problems. With its different diverging operation, MBSO becomes a promising algorithm with an outstanding ability to solve multi-objective optimization problems. In [10], BSO in objective space (BSOOS) is proposed to cut down the computation time of the convergent operation. The clustering operation is replaced by taking the top p percent of individuals as elitists, and the updating operation is modified to suit this elitist mechanism in the one-dimensional objective space instead of the solution space. By doing so, BSOOS achieves a better convergence speed and solution quality in comparison with the traditional BSO.

Improving the population diversity is an alternative modification besides the usage of the objective space. The balance between convergence and divergence is very important to swarm intelligence optimization algorithms: premature convergence leads to a low population diversity and bad solution quality, while the opposite brings a very slow search speed. How to find the balance between convergence and divergence of solutions is still very challenging, and it reflects an algorithm's exploration and exploitation ability. In [11], [12], chaotic sequences are used as variables to initialize the population and generate new individuals.
As a universal phenomenon of nonlinear dynamic systems, chaos has an unpredictable random behavior [13]. Thus, its randomicity and ergodicity can effectively help BSO improve its population diversity and solution quality. In [14], Cheng et al. propose a new BSO which uses different kinds of partial reinitialization strategies to increase its population diversity. Duan et al. [15] propose a novel predator-prey model to improve the population diversity of BSO for a DC brushless motor. This model enables the algorithm to explore the search space more evenly. By using the predator-prey strategy, the population can share better global information to improve the search efficiency in the exploitation phase. In [16], quantum-behaved
BSO (QBSO), which aims to improve population diversity and generate new individuals by using global information, is proposed. Moreover, QBSO combines BSO with quantum theory for the first time. It analyzes the quantum behavior and quantum state of each individual by depicting a wave function, to address the drawback that BSO easily gets stuck in local optima on multimodal functions. In addition, Wang et al. [17] discover a power-law distribution in BSO, which opens a new way of thinking to boost the population interaction and improve the population diversity via adjusting the population structure.

Although the above-mentioned modifications have improved the performance of BSO, they are limited, and the performance of BSO is still weak [18]. Most efforts attempt to modify BSO for solving specific problems, and these modifications are not suitable for other applications. There is still a great demand to enhance its search ability and robustness. To achieve this goal, we propose an adaptive step length mechanism based on memory selection combined with BSO (namely, ASBSO), which exhibits a notable performance. This method modifies BSO by providing strategies with various step lengths which are adaptively applied to generate new individuals. As it can supply a specific step length according to the problem at hand and the current convergent period, ASBSO is more likely to avoid or jump out of local optima. In other words, the search efficiency and robustness of BSO can be greatly improved.

Besides the adaptive step length mechanism, a modified selection method based on memory is also proposed. Different from the conventional storage mode in [19], which applies a success memory and a failure memory with 0 and 1 as the information stored in these memories, the modified method only employs the success memory and considers the difference between two compared fitness values instead of simple numbers (i.e., 0 and 1).
This modification directly demonstrates the improvement achieved by each selected strategy and highlights the strategies with better performance. A detailed description is presented in Section III.

The contributions of this paper can be summarized as follows. (1) An adaptive step length mechanism based on a memory selection method is proposed to evidently enhance the robustness of BSO, which makes it more suitable for various applications. (2) For the first time, the difference between two compared fitness values, instead of simple numbers such as 0 and 1, is stored in memory. This modification increases the efficiency of the selection method and thereby observably improves the solution quality. An experimental comparison between the new storage mode and the old one leads to the clear conclusion that the proposed method is significantly better. (3) Sufficient experimental data and statistical analyses of performance comparisons between the traditional BSO and our proposed ASBSO at different dimensions show that ASBSO entirely outperforms BSO. The contrast between ASBSO and other well-known algorithms also indicates the superiority of ASBSO. (4) ASBSO is verified to be a competent and robust algorithm for different optimization problems.

The organization of this paper is as follows. A brief introduction of BSO is given in Section II. Section III introduces the proposed ASBSO in detail. The experimental results are shown in Section IV. Some discussions are given in Section V. We conclude this paper in Section VI.

II. A BRIEF INTRODUCTION ABOUT
BSO

BSO is a swarm intelligence algorithm inspired by human brainstorming behavior; it regards the individuals in the search procedure as the ideas generated by the human brain. In its execution process, three main operations, namely the clustering, selection, and generation of individuals, are implemented to maintain the population diversity and convergence speed [17].

(A) Clustering: The original BSO uses a k-means clustering method to divide the individuals in the current population into several clusters according to the distances among individuals. The clusters are continually updated; in the meantime, the distribution of individuals moves towards a smaller and smaller range over the iterations via the k-means method. Therefore, for a given problem, the clustering results reflect the distribution of individuals in the search space.

(B) Selection: New individuals are generated based on one individual or the combination of two individuals. BSO controls the selection operation by presetting some parameters [3]. If a random value is smaller than a replacement parameter $p_r$ (set as in [3]), one cluster center is replaced by a randomly generated individual. Another parameter $p_g$ controls the number of clusters selected in the generation phase: if a random value is smaller than $p_g$, one cluster is selected; otherwise, two clusters are used. After the comparison with $p_g$, two further parameters, $p_{c1}$ and $p_{c2}$, confirm the individuals selected from one and two clusters, respectively. To be specific, whether the new individual is generated from one cluster center or from one general individual is decided by $p_{c1}$; similarly, $p_{c2}$ determines whether the new individual is generated from two cluster centers or from two general individuals.

(C) Generation: After the selection of individuals, the generation method of BSO is exhibited in Eqs. (1) and (2):

$$X_{new} = X + \xi \cdot N(0, 1) \quad (1)$$

where $X$ and $X_{new}$ are the selected and newly generated individuals, respectively.
The standard normal distribution $N(0,1)$ is used to generate a random variation, and $\xi$ is a step length calculated by Eq. (2):

$$\xi = \mathrm{logsig}\left(\frac{M_i/2 - C_i}{K}\right) \cdot rand \quad (2)$$

where $\mathrm{logsig}()$ is a logarithmic sigmoid transfer function whose output ranges in the interval (0, 1), $M_i$ and $C_i$ refer to the maximum iteration and the current iteration, $K$ is used to change the scale of the $\mathrm{logsig}()$ function, and $rand$ generates a random value in the interval (0, 1). If the fitness value $f(X_{new})$ is better than $f(X)$, $X$ is replaced by $X_{new}$.

III. ASBSO

A. Motivation
In the new-individual generation operation of BSO introduced in Section II, the search step length only varies with the current iteration number and lacks flexibility, which results in poor search efficiency and robustness. BSO only applies an invariable scale parameter K = 20 to make the search range shrink during the iterations; the shrinkage is therefore limited and inflexible. In ASBSO, an adaptive step length mechanism is introduced to alleviate this issue. Various optional scale parameters give BSO adjustable search ranges instead of the traditional step length, which only varies according to the current iteration number. As ASBSO applies multiple step lengths in the search process, the probability of entering a gorge or jumping out of a valley in the search landscape is increased considerably.

As described in Section II, BSO lacks a powerful search ability and robustness, which motivates us to alleviate these drawbacks. An example is given below to further illustrate what search ability and robustness mean here.

A popular approach in the evolutionary computation community to comprehensively observe the search ability and robustness of optimization algorithms is to optimize benchmark functions. Some famous benchmark function suites, such as the 23 standard benchmark functions [20] and the CEC'05 [21], CEC'13 [22], and CEC'17 [23] benchmark functions, have been widely used. These functions have become more and more complicated and difficult in order to emulate real-world problems whose complexity increases at a geometric rate. Therefore, the performance of optimization algorithms on benchmark functions has become an important standard to judge whether they can be implemented in practical applications. For instance, Fig. ?? illustrates the 3D and contour graphs of F8 and F11 in the CEC'13 function suite. F8 is a rotated Ackley's function which has the same properties of being multimodal, non-separable, and asymmetrical as F11 does.
In addition, the number of local optima of F11 is very large. The global optimum of F8 appears to lie in a gorge surrounded by many steep precipices. The entrance of this gorge is so narrow and secluded that it can easily be missed by a search step length beyond the distance between $X(t)$ and $X'(t)$. Once the entrance has been missed, an individual can only find a mass of similar local optima, and it will take a lot of computational time to obtain another chance to exploit the gorge where the global optimum hides. On the contrary, in F11, a step length smaller than the distance between $X(t)$ and $X'(t)$ means that the individual cannot jump out of the valley of a local optimum and hardly knows that the global optimum lies just beside it. These are two representative cases which can happen not only in benchmark functions but also in the real world. Therefore, it has become an urgent task to alleviate and solve them by proposing more suitable optimization algorithms.

To address the above issues, two main modifications, namely multiple step lengths and a new memory mechanism, are proposed in ASBSO. They are interpreted in detail in the following subsections.

B. Multiple Step Lengths
The parameter K in Eq. (2) is used to change the scale of $\mathrm{logsig}()$. In the multiple step length strategy, the different K values listed in Table I are applied to provide different scales for adjusting the search step length.

TABLE I
ILLUSTRATION OF THE FLEXIBLE MULTIPLE SEARCH LENGTH STRATEGY

      Strategy 1   Strategy 2   Strategy 3   ...   Strategy M
K     k            k + H        k + 2H       ...   k + (M - 1)H

The strategies with relatively small K values provide a diffusion of the search radius, which makes BSO effective in exploring the objective space and accelerates convergence. In the early search phase, an optimization algorithm is required to have an efficient exploration competence when facing the unknown search space: if we pay too much attention to exploiting local information before the whole space has been explored, the search cost becomes very expensive and influences the solution quality [24]. Thus, it is necessary to provide large search step lengths to effectively detect regions with promising solutions. In the exploitation phase, by contrast, a local search applying short step lengths is urgently needed to excavate solutions with high accuracy. Therefore, strategies with relatively large K values can improve the solution quality in the exploitation phase, as large K values generally lead to a localized search.

As discussed above, a changeless K value means BSO can only shrink its search range according to the current iteration number and cannot flexibly adjust the step length to fit various search periods and problems; assigning multiple values to K naturally equips BSO with a flexible search ability to respond to different situations.

C. New Memory Mechanism
To adaptively carry out the multiple step lengths, we introduce an improved memory storing mechanism (IMS) which originates from the success-failure-based memory structure (SFMS) [19], [25]. In SFMS, a success memory shown in Table II and a failure memory shown in Table III are applied to store the numbers of successes and failures in generating better solutions, respectively. In the beginning, the M strategies are randomly selected by the roulette wheel selection method to generate new individuals. As shown in Eqs. (3) and (4), if the new individual $X'_{t-1}$ outperforms and replaces the old individual $X_{t-1}$, it is counted as a success and $\alpha_{j,t}$ is set to 1, where $j$ ($j = 1, 2, ..., M$) refers to the used strategy and $t$ is the current iteration. In the opposite case, it becomes a failure trial and $\beta_{j,t}$ is set to 1. If the iteration count exceeds the preset memory length L (L = 50, empirically set according to [19]), the first rows of Tables II and III are removed to make space for the newest ones. The selection of strategies is described as follows:

$$\alpha_{j,t} = \begin{cases} 1, & f(X'_{t-1}) < f(X_{t-1}) \\ 0, & \text{otherwise} \end{cases} \quad (3)$$

$$\beta_{j,t} = \begin{cases} 0, & f(X'_{t-1}) < f(X_{t-1}) \\ 1, & \text{otherwise} \end{cases} \quad (4)$$
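In code, the flag assignment of Eqs. (3) and (4) amounts to a single fitness comparison. The following sketch illustrates it; the function name is ours, and minimization is assumed:

```python
def record_trial(f_old, f_new):
    """Eqs. (3)-(4): return the (alpha, beta) flags for one trial of a strategy.

    alpha = 1 marks a success (the new individual improves on the old one),
    beta = 1 marks a failure; exactly one of the two flags is set per trial.
    """
    if f_new < f_old:   # minimization: a smaller fitness value is better
        return 1, 0     # success: alpha_{j,t} = 1
    return 0, 1         # failure: beta_{j,t} = 1
```

Each returned pair would be appended as one row of the success and failure memories of Tables II and III, in the column of the strategy j that produced the trial.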
TABLE II
TRADITIONAL SUCCESS MEMORY

Index   Strategy 1            Strategy 2            Strategy 3            ...   Strategy M
1       $\alpha_{1,t-L}$      $\alpha_{2,t-L}$      $\alpha_{3,t-L}$      ...   $\alpha_{M,t-L}$
2       $\alpha_{1,t-L+1}$    $\alpha_{2,t-L+1}$    $\alpha_{3,t-L+1}$    ...   $\alpha_{M,t-L+1}$
...     ...                   ...                   ...                   ...   ...
L       $\alpha_{1,t-1}$      $\alpha_{2,t-1}$      $\alpha_{3,t-1}$      ...   $\alpha_{M,t-1}$

TABLE III
TRADITIONAL FAILURE MEMORY

Index   Strategy 1           Strategy 2           Strategy 3           ...   Strategy M
1       $\beta_{1,t-L}$      $\beta_{2,t-L}$      $\beta_{3,t-L}$      ...   $\beta_{M,t-L}$
2       $\beta_{1,t-L+1}$    $\beta_{2,t-L+1}$    $\beta_{3,t-L+1}$    ...   $\beta_{M,t-L+1}$
...     ...                  ...                  ...                  ...   ...
L       $\beta_{1,t-1}$      $\beta_{2,t-1}$      $\beta_{3,t-1}$      ...   $\beta_{M,t-1}$

The chosen probability of each strategy is calculated by Eqs. (5) and (6) after the memories record the results:

$$p_{j,t} = \frac{S_{j,t}}{\sum_{j=1}^{M} S_{j,t}} \quad (5)$$

$$S_{j,t} = \frac{\sum_{t-L}^{t-1} \alpha_{j,t}}{\sum_{t-L}^{t-1} \alpha_{j,t} + \sum_{t-L}^{t-1} \beta_{j,t} + \delta} \quad (6)$$

where $p_{j,t}$ denotes the probability of using the j-th strategy in the current iteration t when t > L. $\sum_{t-L}^{t-1} \alpha_{j,t}$ counts the times the j-th strategy successfully generated a new individual to replace $X_{t-1}$, and $\sum_{t-L}^{t-1} \beta_{j,t}$ is the corresponding count for the failure cases. Eq. (6) calculates the success rate, and the small constant $\delta$ is used to avoid a null value. Obviously, a strategy with a higher success rate has a higher chance of being selected to generate new individuals.

However, the SFMS mechanism has one drawback: no matter how much better the new individual obtained by a strategy is, only a 1 is recorded in the success memory. One case is given to interpret this drawback in detail. Let $D_1$ represent the improvement in fitness obtained by Strategy 1 (if $f(X'_{t-1}) < f(X_{t-1})$, then $D = |f(X'_{t-1}) - f(X_{t-1})|$), let $D_2$ be that obtained by Strategy 2, and so on. Suppose $D_1 = 2 D_2$, which means Strategy 1 is suitable for the current search period and can find a much better solution than Strategy 2 does in one generation. However, the two strategies score the same points (both 1) in the success memory, which leads to the same probability of being selected. This mechanism evidently has a relatively low efficiency, which slows down the convergence speed and further decreases the solution quality. To alleviate this issue, in IMS, the improvement value in fitness $D_j$ (where j indicates the executed strategy) is recorded into the success memory to replace the numbers 0 and 1.
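The two selection rules can be contrasted in a short sketch. Here `sfms_probabilities` follows Eqs. (5) and (6), with the small constant delta placed in the denominator as written above, while `ims_probabilities` is the improvement-based alternative just described, normalizing the recorded improvements D_j instead of 0/1 counts. The function names and the uniform fallback when nothing has been recorded yet are our assumptions, not details from the paper:

```python
def sfms_probabilities(success_mem, failure_mem, delta=0.01):
    """Eqs. (5)-(6): selection probabilities from success/failure counts (SFMS).

    success_mem / failure_mem: the last L rows of 0/1 flags, one column per
    strategy; delta is a small constant avoiding a zero denominator."""
    M = len(success_mem[0])
    S = []
    for j in range(M):
        ns = sum(row[j] for row in success_mem)   # successes of strategy j
        nf = sum(row[j] for row in failure_mem)   # failures of strategy j
        S.append(ns / (ns + nf + delta))          # success rate, Eq. (6)
    total = sum(S)
    if total == 0.0:                              # assumed: uniform before any success
        return [1.0 / M] * M
    return [s / total for s in S]                 # normalization, Eq. (5)

def ims_probabilities(improvement_mem):
    """Improvement-based selection: probability proportional to the recorded
    fitness improvements D_j rather than to 0/1 success counts."""
    M = len(improvement_mem[0])
    D = [sum(row[j] for row in improvement_mem) for j in range(M)]
    total = sum(D)
    if total == 0.0:                              # assumed: uniform before any success
        return [1.0 / M] * M
    return [d / total for d in D]
```

With the D_1 = 2 D_2 example above, `ims_probabilities` gives Strategy 1 twice the selection probability of Strategy 2, whereas the SFMS rule treats the two successes identically.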
Meanwhile, the failure memory is not implemented in the new mechanism, since we focus on the quality rather than the quantity of what each strategy obtains. If the failure memory were applied, a poor search attempt might decrease the quality of solutions and hinder the evolutionary direction of the algorithm. Table IV shows the structure of IMS. Each improvement value $D^j_t$ in fitness obtained by strategy j is stored in it.

TABLE IV
NEW SUCCESS MEMORY (IMS)

Index   Strategy 1       Strategy 2       Strategy 3       ...   Strategy M
1       $D^1_{t-L}$      $D^2_{t-L}$      $D^3_{t-L}$      ...   $D^M_{t-L}$
2       $D^1_{t-L+1}$    $D^2_{t-L+1}$    $D^3_{t-L+1}$    ...   $D^M_{t-L+1}$
...     ...              ...              ...              ...   ...
L       $D^1_{t-1}$      $D^2_{t-1}$      $D^3_{t-1}$      ...   $D^M_{t-1}$

The selection probability of strategy j at iteration t can be calculated by Eq. (7):

$$p^{new}_{j,t} = \frac{D^j_t}{\sum_{j=1}^{M} D^j_t} \quad (7)$$

Algorithm 1 illustrates the main procedure of ASBSO. In each generation of new individuals, a strategy j is selected according to its selection probability $p^{new}_{j,t}$ to produce a search step length. The new individual is generated by adding the step length to the selected X using Eq. (1), and its fitness is calculated. If the new individual is better than the old one, it replaces the old one, and the selected strategy is marked as a successful trial: the improvement in fitness $D^j_t$ is stored in the memory, and the selection probability of each strategy is updated.

IV. EXPERIMENTAL RESULTS
Two groups of comparisons have been carried out, namely internal comparisons and external comparisons, using the CEC'13 and CEC'17 test functions. It should be noted that F2 in CEC'17 has been excluded because it shows unstable behavior, especially at higher dimensions, and significant performance variations for the same algorithm implemented in Matlab or in C [22], [23]. The internal comparison aims to demonstrate that ASBSO achieves better performance than BSO not only at low dimensions but also at high dimensions. These comprehensive comparisons can therefore show the search ability and robustness of ASBSO on problems with different difficulty levels.

After proving the superiority of ASBSO, in the external comparison some meta-heuristic algorithms are taken into account to further evaluate its performance. The artificial bee colony algorithm (ABC) [26] is very popular in the literature, and its influence is next only to particle swarm optimization (PSO) [27] among swarm-based meta-heuristic algorithms [28], [29], [30]. Differential evolution (DE) [31], [32], [33] is a famous optimization algorithm with a very powerful search ability. MABC and CGSA-M [34], [19], two variants based on ABC and the gravitational search algorithm (GSA) [35], [36], [37], implement memory-based selection strategies; thus, they are very suitable to be compared with ASBSO. Furthermore, two newly proposed and effective swarm intelligence algorithms, the whale optimization algorithm (WOA) [38] and the sine cosine algorithm (SCA) [39], are also implemented. The population size of all compared algorithms is 100. All these contrast experiments are run 30 times to reduce the random error, and the maximum number of function evaluations is set to 10000·D (D is the dimension number).

Algorithm 1:
Pseudo code of ASBSO.

Randomly generate a population with N individuals;
Calculate the fitness of each individual;
while termination not satisfied do
    Divide the N individuals into C clusters by the k-means clustering method;
    Choose the best individual in each cluster as its center;
    if random(0,1) < p_r then
        replace one cluster center by a randomly generated individual
    end
    if random(0,1) < p_g then
        select one cluster;
        if random(0,1) < p_{c1} then
            choose the cluster center as X
        else
            choose a randomly selected individual in the cluster as X
        end
    else
        randomly select two clusters;
        if random(0,1) < p_{c2} then
            choose the combination of the two centers as X
        else
            choose the combination of two randomly selected individuals in the two clusters as X
        end
    end
    Choose a strategy to generate a search step length according to Eq. (7);
    Generate a new individual by adding the step length to the selected X using Eqs. (1) and (2);
    if the new individual is better than the old one then
        replace the old individual and update the memory
    end
end

A. Parameter Analysis
The aim of implementing multiple strategies and the memory-based selection method is to provide multiple step lengths in order to suit different search phases. Too few strategies cannot satisfy this demand, while too many strategies are redundant and increase the computational cost. Thus, we adopt M = 4 in this paper, and preliminary experiments prove the validity of this parameter setting. A parameter analysis is executed to find an applicable value for H; three values are tried, namely 10, 20, and 30, and k is set to 10 in this comparison. The contrast experiment is implemented on CEC'13 and CEC'17 to find the most suitable value for the four strategies.

The Friedman test for multiple comparisons is applied to analyze the results [40]. Table V lists the statistical results obtained by the Friedman test, with H = 20 as the control algorithm. The ranking evaluates the performance of each setting, and a lower ranking indicates a better performance. The unadjusted p-value does not consider the probability error in a multiple comparison; thus, two commonly used post-hoc procedures, the Holm and Hochberg procedures [41], are taken into account, and their conservative adjusted p-values are convincing enough to eliminate Type I error [42]. H = 20, which attains the best ranking of 1.4298, is the best value for H. Therefore, k = 10 and H = 20 are chosen for the flexible multiple search length strategy.

B. Internal Comparison
In the first experiment, CEC'13 and CEC'17 are used to compare the performance of the traditional BSO and the proposed ASBSO. The experiments are conducted at dimensions D = 10, 30, 50, and 100, respectively.

The experimental results of CEC'13 are summarized in Tables VI and VII, while Tables IX and X show the results of CEC'17. All the better mean and standard deviation (Std Dev) values are highlighted for convenience. From these tables, we can intuitively find that ASBSO obtains a larger number of better results than BSO. The former obtains better results on F4, F7, F9, F11, F14, F15, F17-F19, F27, and F28 at all tested dimensions, while BSO only obtains a better result on F6 in CEC'13. In CEC'17, ASBSO outperforms on F29, F32, F43, and F46, while BSO cannot obtain a better performance at all dimensions on any function.

The Wilcoxon signed-rank test, a pairwise test used to analyze the significant difference between the performance of two algorithms, is conducted to prove that ASBSO can beat BSO. The R+ and R- values in Tables VIII and XI indicate the degree to which ASBSO outperforms BSO. As we conduct ASBSO versus BSO, R+ represents the sum of ranks for the functions on which ASBSO outperforms BSO, and R- means the opposite. With the null hypothesis $H_0$ of the test assuming that the two compared algorithms have no difference, a better performance of our proposed algorithm is shown by a higher R+ value, and the p-value indicates the probability that the null hypothesis holds. If the p-value is lower than the level of significance α = 0.05, we can accept the hypothesis that ASBSO is significantly better than BSO. Moreover, we set a more rigorous level α = 0.01 to further exhibit the improvement of ASBSO in solution quality.

All the comparisons in Table VIII reach the level of α = 0.01, while in Table XI, ASBSO beats BSO at the level of α = 0.05 at all dimensions but only has a significant difference at D = 100 when α = 0.01.
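The R+ and R- rank sums can be obtained as in the following simplified sketch. The naming is ours; ties in the absolute differences receive sequential rather than averaged ranks, and zero differences are discarded, so this approximates rather than reproduces a full Wilcoxon signed-rank computation:

```python
def wilcoxon_rank_sums(err_asbso, err_bso):
    """R+ / R- rank sums of a paired Wilcoxon signed-rank comparison.

    err_asbso[i], err_bso[i]: errors of the two algorithms on function i
    (lower is better). R+ collects the ranks of the functions on which
    ASBSO outperforms BSO; R- collects those on which BSO is better."""
    diffs = [(abs(a - b), a - b) for a, b in zip(err_asbso, err_bso) if a != b]
    diffs.sort(key=lambda pair: pair[0])    # rank by absolute difference
    r_plus = r_minus = 0.0
    for rank, (_, d) in enumerate(diffs, start=1):
        if d < 0:                           # ASBSO error is smaller: ASBSO wins
            r_plus += rank
        else:
            r_minus += rank
    return r_plus, r_minus
```

An R+ much larger than R-, as reported in Tables VIII and XI, means the first algorithm not only wins on most functions but also wins by larger margins on the functions where the two differ most.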
This is understandable because CEC'17 is a newly proposed benchmark function suite in which all test functions have increased in difficulty and complexity compared with CEC'13.

From these results, it can be concluded that ASBSO has an obvious advantage over BSO in terms of search ability and solution quality.

C. External Comparison
To investigate the performance of ASBSO in comparison with other swarm intelligence optimization algorithms, some well-known meta-heuristic algorithms, involving CGSA-M,
TABLE V
FRIEDMAN TEST RESULT FOR H = 10, 20, AND 30
(Columns: Algorithm, Ranking, unadjusted p, p_Bonf, p_Holm, p_Hochberg. H = 20 is the control setting, with the best ranking of 1.4298; the remaining entries are not reproduced here.)

TABLE VI
EXPERIMENTAL RESULTS OF CEC'13 BENCHMARK FUNCTIONS (F1-F28) USING BSO AND ASBSO AT D = 10 AND D = 30
(Mean and Std Dev of BSO and ASBSO over 30 runs at each dimension; the numerical entries are not reproduced here.)
MABC, ABC, DE, WOA, and SCA, are implemented in numerical tests. The parameter settings follow [19], [34], [26], [38], [39]. In DE, we use an efficient parameter set for F and CR as suggested in [43], [44]. All tests are executed at D = 30 with the maximum number of function evaluations equal to 10000·D, for 30 runs.

The results are listed in Tables XII and XIII, with the best results marked in boldface. It is visible that ASBSO obtains the largest number of best results among all compared algorithms, and we can draw a preliminary conclusion that ASBSO is very competitive in contrast with the others. To analyze the results of the multiple comparisons more precisely, the Friedman test [40], which is widely used in [45], [46], [47], is employed. Table XIV lists the statistical results obtained by the Friedman test with ASBSO as the control algorithm. ASBSO attains the best ranking of 2.5, while the second best, 3.5526, belongs to CGSA-M. Although the adjusted p-values of the Holm and Hochberg procedures are larger than the unadjusted p-values, they still reach the significance level of α = 0.05. Furthermore, in terms of MABC, ABC, WOA, and SCA, the adjusted p-values satisfy the level of α = 0.01. The Wilcoxon test is also conducted to verify the results of the Friedman test, and it obtains similar p-values in Table XV. From all these results, it is obvious that ASBSO is significantly better than the other contrast algorithms in the benchmark function tests.

To visually demonstrate the comparisons among ASBSO and the other contrast algorithms, six functions with different properties (unimodal, simple multimodal, hybrid, and composition), F4, F14, F22, F39, F43, and F48, are selected, since they are representative of the properties of all tested functions. The convergent procedures and the final solutions obtained by these algorithms in all 30 runs are exhibited: Fig. ?? shows the box-and-whisker diagrams, and Fig. ?? shows the convergence graphs.
Five values are shown in the box-and-whisker plots: the median, maximum, minimum, first quartile and third quartile. The range between the first and third quartiles is called the interquartile range (IQR). Points lying more than 1.5×IQR above the third quartile or more than 1.5×IQR below the first quartile are marked as outliers, and points lying more than 3×IQR above the third quartile or below the first quartile are extreme outliers. In these six plots, the median values of ASBSO are the smallest, and its IQRs are lower and shorter than those of most other algorithms. These indicate that the solution quality and stability obtained by ASBSO are much better than those of the other contrast algorithms.
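The outlier rule described above can be made concrete with a short NumPy sketch; the run values below are made-up sample data, not results from the paper.

```python
import numpy as np

# Fitness values over hypothetical independent runs (illustrative only).
runs = np.array([2.1, 2.3, 2.2, 2.4, 2.2, 2.3, 2.5, 2.2, 4.0, 9.0])

# Quartiles and interquartile range, as used in the box-and-whisker plots.
q1, q3 = np.percentile(runs, [25, 75])
iqr = q3 - q1

# Outliers: beyond 1.5*IQR from the quartiles; extreme: beyond 3*IQR.
outliers = runs[(runs < q1 - 1.5 * iqr) | (runs > q3 + 1.5 * iqr)]
extreme = runs[(runs < q1 - 3.0 * iqr) | (runs > q3 + 3.0 * iqr)]
print(iqr, outliers, extreme)
```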
TABLE VII: Experimental results of CEC'13 benchmark functions (F1-F28) using BSO and ASBSO at D = 50 and D = 100, reported as mean (standard deviation). (Table body not recoverable from the extracted text.)
TABLE VIII: Results obtained by the Wilcoxon signed-rank test for ASBSO vs. BSO on CEC'13.

Dimension  R+     R-     p-value   α=0.05  α=0.01
10         330.5  75.5   2.782E-3  YES     YES
30         319.0  59.0   1.132E-3  YES     YES
50         319.0  87.0   7.072E-3  YES     YES
100        321.0  85.0   6.06E-3   YES     YES

The convergence graphs demonstrate not only the precision of the solutions but also the convergence speeds. Fig. ?? shows that ASBSO possesses a fast convergence speed. In detail, the convergence behaviors of all algorithms in Fig. ??(a) further elaborate the search behavior of ASBSO: it keeps converging after the other algorithms have stagnated in the later stage of the search. Although ABC starts from a better initial position, it cannot escape local optima and is ultimately surpassed by ASBSO. The comparison between ASBSO and BSO shows that the former always achieves better solution precision and convergence speed than the latter, and ASBSO also performs excellently against the remaining algorithms. Thus, it can be concluded that the proposed adaptive step length with memory-based selection enhances the search ability and efficiency of ASBSO.

D. Real World Optimization Problems
It has been demonstrated that ASBSO outperforms traditional BSO and other well-known algorithms on benchmark functions. To further verify its application value, four problems introduced in CEC'11 [48] are used: (1) RF1: parameter estimation for frequency-modulated (FM) sound waves, (2) RF2: the Lennard-Jones potential problem, (3) RF4: optimal control of a non-linear stirred tank reactor, and (4) RF7: the transmission network expansion planning (TNEP) problem [48]. All problems are run 30 independent times with a maximum function evaluation budget proportional to D. The experimental results are presented in Table XVI. ASBSO dominates on all tested problems when compared with the other algorithms, which clearly exhibits its application value.

E. ASBSO vs. previous BSO variants
To further examine the competitiveness of ASBSO, we compare it with previous BSO variants. In this part, two variants, BSO in objective space (BSOOS) [10] and global-best BSO (GBSO) [49], are tested on the CEC'13 and CEC'17 benchmark functions. The results are listed in Tables XXI and XXII. From the results, ASBSO shows a great advantage over BSOOS and is competitive with GBSO. Although the p-value for ASBSO vs. GBSO is not less than 0.05, ASBSO still obtains a greater R+ value, which indicates a better overall performance than GBSO over the full set of 57
TABLE IX: Experimental results of CEC'17 benchmark functions (F29-F57) using BSO and ASBSO at D = 10 and D = 30, reported as mean (standard deviation). (Table body not recoverable from the extracted text.)
TABLE X: Experimental results of CEC'17 benchmark functions (F29-F57) using BSO and ASBSO at D = 50 and D = 100, reported as mean (standard deviation). (Table body not recoverable from the extracted text.)
TABLE XI: Results obtained by the Wilcoxon signed-rank test for ASBSO vs. BSO on CEC'17.

Dimension  R+     R-     p-value   α=0.05  α=0.01
10         295.0  111.0  3.576E-2  YES     NO
30         300.5  105.5  2.555E-2  YES     NO
50         302.0  104.0  2.322E-2  YES     NO
100        338.0  97.0   8.008E-3  YES     YES

test functions. Moreover, GBSO adopts multiple modifications, i.e., fitness-based grouping, per-variable updates, a global-best update and a re-initialization step, whereas ASBSO obtains competitive results with fewer modifications and can thus be regarded as a successful variant of BSO.

V. DISCUSSION
As detailed in Section IV, our proposed ASBSO outperforms traditional BSO and other meta-heuristic optimization algorithms. In particular, compared with MABC and CGSA-M, which also implement memory-based selection mechanisms, ASBSO obtains much better solution accuracy. As explained in Section III, ASBSO has two main novelties: first, it adopts several step length update methods to deal with different situations; second, these methods are adaptively selected via a new memory storing mechanism. In this section, we further discuss the effectiveness of these two modifications by comparing them with the classical 1/5 success rule used in evolution strategies (ES) [50] and with the SFMS used in [19], [25], respectively. These tests are executed at D = 30 with the maximum number of function evaluations equal to 10000·D, over 30 runs.

A. Comparison with the 1/5 Success Rule

The 1/5 success rule is a parameter adaptation strategy proposed by Rechenberg [50]. It adjusts the deviation δ so that the mutation step size is dynamically adapted according to the search performance. The offspring generation equation is

X_offspring = X + N(0, δ(t))     (8)

where X is the parent and X_offspring is the offspring, generated by adding Gaussian noise N(0, δ(t)) with mean 0 and deviation δ(t) that changes with the iteration t. The adaptation of δ is

δ(t+1) = δ(t)/r   if s_r > 0.2
δ(t+1) = δ(t)·r   if s_r < 0.2
δ(t+1) = δ(t)     if s_r = 0.2     (9)

where r is a scale factor smaller than 1, and s_r is the success rate, i.e., the fraction of mutations that successfully generate a better offspring within a certain period. If the success rate s_r is larger than 0.2, the deviation δ increases; conversely, if s_r is smaller than 0.2, δ decreases. As an adaptive mechanism, it enables the algorithm to adjust its search radius to suit specific problems and different search periods.
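The adaptation of Eqs. (8) and (9) can be sketched as a minimal (1+1)-ES loop. The sphere objective, the adaptation interval of 10 trials and the value r = 0.85 are illustrative assumptions, not the paper's settings.

```python
import random

random.seed(1)

def sphere(x):
    # Simple test objective (minimization); illustrative only.
    return sum(v * v for v in x)

x = [1.0] * 5                 # parent solution
delta, r = 1.0, 0.85          # mutation deviation and scale factor r < 1
successes, trials = 0, 0

for t in range(200):
    # Eq. (8): offspring = parent + Gaussian noise N(0, delta).
    offspring = [v + random.gauss(0.0, delta) for v in x]
    trials += 1
    if sphere(offspring) < sphere(x):   # greedy acceptance
        x = offspring
        successes += 1
    if trials == 10:                    # Eq. (9): adapt every 10 trials
        s_r = successes / trials
        if s_r > 0.2:
            delta /= r       # success rate high: enlarge the search step
        elif s_r < 0.2:
            delta *= r       # success rate low: shrink the search step
        successes, trials = 0, 0

print(delta, sphere(x))
```

Because acceptance is greedy, the parent's fitness never worsens, and the step size tracks how often mutations succeed.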
Beyond ES, the 1/5 success rule has also exhibited great search ability in some newly proposed algorithms, such as negatively correlated search by Tang et al. [51]. Thus, we combine BSO with the 1/5 success rule as a contrast experiment to assess the effectiveness of ASBSO. Table XVII lists the experimental results of ASBSO and BSO with the 1/5 success rule on the 57 test functions. Although the 1/5 success rule obtains better solutions on a few problems, ASBSO still dominates on most of them. Table XVIII shows the Wilcoxon statistical analysis between ASBSO and BSO with the 1/5 success rule, with ASBSO as the control algorithm. The p-value, smaller than the significance level α = 0.05, demonstrates that the multiple step length update method proposed in ASBSO provides more adaptive and suitable search mechanisms than the 1/5 success rule across various problems.

B. IMS vs. SFMS
The second modification of the proposed method is that a new memory storing mechanism, IMS, replaces the traditional memory mechanism, SFMS. Both mechanisms are introduced in Section III, and it is necessary to discuss whether the former provides better search efficiency than the latter. Hence, a comparison between ASBSO and BSO with adaptive step length based on SFMS is conducted; the results are listed in Table XIX. ASBSO obtains better results on most functions, especially on CEC'13, and Table XX confirms that IMS is significantly better than SFMS.
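The idea behind an improvement-magnitude memory can be sketched as a roulette-wheel selection over per-strategy improvement records. This is a hedged illustration of the concept only: the decay factor, the floor value and the update rule are assumptions for the sketch, not the exact IMS formulas of Section III.

```python
import random

random.seed(0)

# One memory cell per step-length strategy; starts uniform.
improvement = [1.0, 1.0, 1.0, 1.0]

def select_strategy():
    # Roulette-wheel: probability proportional to stored improvement.
    total = sum(improvement)
    pick = random.uniform(0.0, total)
    acc = 0.0
    for i, w in enumerate(improvement):
        acc += w
        if pick <= acc:
            return i
    return len(improvement) - 1

def record(i, gain, floor=0.01):
    # Store the magnitude of the fitness improvement; exponential decay
    # keeps the memory adaptive, and the floor keeps every strategy alive.
    improvement[i] = max(0.5 * improvement[i] + gain, floor)

# Toy usage: strategy 2 consistently yields the largest gains, so its
# selection count should dominate over time.
counts = [0, 0, 0, 0]
for _ in range(2000):
    i = select_strategy()
    counts[i] += 1
    record(i, gain=1.0 if i == 2 else 0.05)

print(counts)
```

Storing the size of the improvement (rather than a bare success count) is what lets a strategy with large gains outcompete one that succeeds equally often but only marginally, which is the distinction the paper draws between IMS and SFMS.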
C. Computational Complexity
ASBSO has shown superior ability on the majority of benchmark functions. In this subsection, we analyze its computational time complexity together with BSO's.

The time complexity of each procedure of BSO is as follows:
(1) Initialization costs O(N), where N is the population size.
(2) Evaluating the fitness of the population costs O(N).
(3) Using K-means to divide the population into c clusters costs O(cN).
(4) Individual selection and step length generation cost O(N).
(5) The generation of new individuals and their fitness calculation cost O(N).
Thus, the overall time complexity of BSO is

O(N) + O(N) + O(cN) + O(N) + O(N) = O(cN) + 4O(N)     (10)

which simplifies to O(N), since the number of clusters c is a constant.

ASBSO is modified from BSO, and its procedure is as follows:
(1) Initialization costs O(N).
(2) Evaluating the fitness of the population costs O(N).
(3) Using K-means to divide the population into c clusters costs O(cN).
(4) Generating the multiple step lengths costs O(4N).
(5) The memory selection costs O(N).
(6) The generation of new individuals and their fitness calculation cost O(N).
Thus, the overall time complexity of ASBSO is

O(N) + O(N) + O(cN) + O(4N) + O(N) + O(N) = O(cN) + O(4N) + 4O(N)     (11)

which can likewise be seen as O(N). The main differences between ASBSO and BSO lie in steps (4) and (5): because ASBSO applies multiple step length strategies, it costs O(4N), which is greater than the O(N) of BSO, and the memory selection needs an additional O(N). Nevertheless, ASBSO and BSO have the same asymptotic time complexity, which indicates that both are competitive in computational efficiency.

TABLE XII: Experimental results of CEC'13 (F1-F28) using ASBSO, CGSA-M, MABC, ABC, DE, WOA and SCA. (Table body not recoverable from the extracted text.)

TABLE XIII: Experimental results of CEC'17 (F29-F57) using ASBSO, CGSA-M, MABC, ABC, DE, WOA and SCA. (Table body not recoverable from the extracted text.)

TABLE XIV: Adjusted p-values (Friedman test), with ASBSO (ranking 2.5) as the control algorithm.

Algorithm  Ranking  unadjusted p  p_Holm    p_Hochberg  α=0.05  α=0.01
CGSA-M     3.5526   0.009286      0.011049  0.009286    YES     NO
MABC       3.9737   0.000271      0.000812  0.000812    YES     YES
ABC        4.3509   0.000005      0.000019  0.000019    YES     YES
DE         3.6228   0.005524      0.011049  0.009286    YES     NO
WOA        4.7982   0             0         0           YES     YES
SCA        5.2018   0             0         0           YES     YES

TABLE XV: Results obtained by the Wilcoxon signed-rank test for ASBSO vs. some other typical algorithms.

ASBSO vs.  R+      R-     p-value    α=0.05  α=0.01
CGSA-M     1046.5  549.5  4.1615E-2  YES     NO
MABC       1208.5  387.5  7.81E-4    YES     YES
ABC        1256.0  340.0  1.73E-4    YES     YES
DE         1040.5  555.5  4.195E-2   YES     NO
WOA        1473.0  123.0  0.00       YES     YES
SCA        1510.0  143.0  0.00       YES     YES

TABLE XVI: Experimental results on real-world problems (BSO, ASBSO, CGSA-M, MABC). (Table body not recoverable from the extracted text.)

VI. CONCLUSION
In this paper, an adaptive step length mechanism based on memory is proposed for BSO, namely ASBSO. It applies multiple step length generation strategies and a new memory mechanism, aiming to generate better individuals for different search periods and problems. The strategies with different step lengths are produced using four different scale parameters and are selected based on a memory structure in each iteration. Different from the conventional memory mechanism, the proposed memory structure records the improvement in fitness obtained by each strategy. In this way, a strategy that substantially increases solution quality has a higher selection probability than one that succeeds similarly often but yields only a small improvement. The performance of ASBSO has been tested on the CEC'13 and CEC'17 benchmark function suites (57 functions in total), which cover diverse characteristics, and several well-known optimization algorithms have been included in the comparison. Experimental and statistical results show that the proposed ASBSO improves the performance of BSO in terms of global search ability, convergence speed, robustness and solution quality. Moreover, several real-world problems from CEC'11 demonstrate the application value of ASBSO. These results encourage our future research into self-adaptive search mechanisms and will broaden our perspective on BSO for dynamic and multiobjective optimization.

TABLE XVII: Experimental results of using ASBSO and BSO with the 1/5 success rule on the CEC'13 and CEC'17 benchmark functions (F1-F57) at D = 30, reported as mean (standard deviation). (Table body not recoverable from the extracted text.)

TABLE XVIII: Results obtained by the Wilcoxon signed-rank test for ASBSO vs. BSO with the 1/5 rule.

vs.                    R+      R-     p-value  α=0.05  α=0.01
BSO with the 1/5 rule  1282.0  371.0  2.22E-4  YES     YES

REFERENCES

[1] G. Yang, S. Wu, Q. Jin, and J. Xu, "A hybrid approach based on stochastic competitive Hopfield neural network and efficient genetic algorithm for frequency assignment problem,"
Applied Soft Computing, vol. 39, pp. 104–116, 2016.
[2] S. Gao, Y. Wang, J. Cheng, Y. Inazumi, and Z. Tang, "Ant colony optimization with clustering for solving the dynamic location routing problem," Applied Mathematics and Computation, vol. 285, pp. 149–173, 2016.
[3] Y. Shi, "Brain storm optimization algorithm," in International Conference in Swarm Intelligence. Springer, 2011, pp. 303–309.
[4] X. Guo, Y. Wu, and L. Xie, "Modified brain storm optimization algorithm for multimodal optimization," in International Conference in Swarm Intelligence. Springer, 2014, pp. 340–351.
[5] X. Guo, Y. Wu, L. Xie, S. Cheng, and J. Xin, "An adaptive brain storm optimization algorithm for multiobjective optimization problems," in International Conference in Swarm Intelligence. Springer, 2015, pp. 365–372.
[6] L. Li and K. Tang, "History-based topological speciation for multimodal optimization," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 136–150, 2015.
[7] H. Qiu and H. Duan, "Receding horizon control for multiple UAV formation flight based on modified brain storm optimization," Nonlinear Dynamics, vol. 78, no. 3, pp. 1973–1988, 2014.
[8] Y. Sun, "A hybrid approach by integrating brain storm optimization algorithm with grey neural network for stock index forecasting," in Abstract and Applied Analysis, vol. 2014. Hindawi Publishing Corporation, 2014.
[9] Y. Shi, J. Xue, and Y. Wu, "Multi-objective optimization based on brain storm optimization algorithm," International Journal of Swarm Intelligence Research (IJSIR), vol. 4, no. 3, pp. 1–21, 2013.
[10] Y. Shi, "Brain storm optimization algorithm in objective space," in Evolutionary Computation (CEC), 2015 IEEE Congress on. IEEE, 2015, pp. 1227–1234.
[11] C. Li and H. Duan, "Information granulation-based fuzzy RBFNN for image fusion based on chaotic brain storm optimization," Optik - International Journal for Light and Electron Optics, vol. 126, no. 15, pp. 1400–1406, 2015.
[12] Y. Yu, S. Gao, S. Cheng, Y. Wang, S. Song, and F. Yuan, "CBSO: a memetic brain storm optimization with chaotic local search," Memetic Computing, in press, DOI: 10.1007/s12293-017-0247-0, 2017.
[13] S. Gao, C. Vairappan, Y. Wang, Q. Cao, and Z. Tang, "Gravitational search algorithm combined with chaos for unconstrained numerical optimization," Applied Mathematics and Computation, vol. 231, pp. 48–62, 2014.
[14] S. Cheng, Y. Shi, Q. Qin, Q. Zhang, and R. Bai, "Population diversity maintenance in brain storm optimization algorithm," Journal of Artificial Intelligence and Soft Computing Research, vol. 4, no. 2, pp. 83–97, 2014.
[15] H. Duan, S. Li, and Y. Shi, "Predator–prey brain storm optimization for DC brushless motor," IEEE Transactions on Magnetics, vol. 49, no. 10, pp. 5336–5340, 2013.
[16] H. Duan and C. Li, "Quantum-behaved brain storm optimization approach to solving Loney's solenoid problem," IEEE Transactions on Magnetics, vol. 51, no. 1, pp. 1–7, 2015.
[17] Y. Wang, S. Gao, Y. Yu, and Z. Xu, "The discovery of population interaction with a power law distribution in brain storm optimization," Memetic Computing, in press, DOI: 10.1007/s12293-017-0248-z, 2017.
[18] S. Cheng, Q. Qin, J. Chen, and Y. Shi, "Brain storm optimization algorithm: a review," Artificial Intelligence Review, vol. 46, no. 4, pp. 445–458, 2016.
TABLE XIX: Experimental results of using ASBSO and SFMS on the CEC'13 and CEC'17 benchmark functions (F1-F57) at D = 30, reported as mean (standard deviation). (Table body not recoverable from the extracted text.)

TABLE XX: Results obtained by the Wilcoxon signed-rank test for IMS vs. SFMS.

vs.    R+      R-     p-value  α=0.05  α=0.01
SFMS   1343.5  309.5  2.0E-5   YES     YES

[19] Z. Song, S. Gao, Y. Yu, J. Sun, and Y. Todo, "Multiple chaos embedded gravitational search algorithm," IEICE Transactions on Information and Systems, vol. 100, no. 4, pp. 888–900, 2017.
[20] X. Yao, Y. Liu, and G. Lin, "Evolutionary programming made faster,"
IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82–102, 1999.
[21] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y. Chen, A. Auger, and S. Tiwari, "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," KanGAL Report, vol. 2005005, p. 2005, 2005.
[22] J. Liang, B. Qu, P. Suganthan, and A. G. Hernández-Díaz, "Problem definitions and evaluation criteria for the CEC 2013 special session on real-parameter optimization," Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore, Technical Report, vol. 201212, 2013.
[23] N. Awad, M. Ali, J. Liang, B. Qu, and P. Suganthan, "Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization," in Technical Report. NTU, Singapore, 2016.
[24] D. Molina, M. Lozano, C. García-Martínez, and F. Herrera, "Memetic algorithms for continuous optimisation based on local search chains," Evolutionary Computation, vol. 18, no. 1, pp. 27–63, 2010.
[25] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009.
[26] V. Tereshko and A. Loengarov, "Collective decision making in honeybee foraging dynamics," Computing and Information Systems, vol. 9, no. 3, p. 1, 2005.
[27] J. Kennedy, "Particle swarm optimization," in Encyclopedia of Machine Learning. Springer, 2011, pp. 760–766.
[28] Y. Zhang, S. Wang, and G. Ji, "A comprehensive survey on particle swarm optimization algorithm and its applications," Mathematical Problems in Engineering, vol. 2015, 2015.
[29] Y. Shi, C.-M. Pun, H. Hu, and H. Gao, "An improved artificial bee colony and its application," Knowledge-Based Systems, vol. 107, pp. 14–31, 2016.
[30] R. Zhang, P.-C. Chang, S. Song, and C. Wu, "A multi-objective artificial bee colony algorithm for parallel batch-processing machine scheduling in fabric dyeing processes," Knowledge-Based Systems, vol. 116, pp. 114–129, 2017.
[31] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[32] S. Gao, Y. Wang, J. Wang, and J. Cheng, "Understanding differential evolution: A Poisson law derived from population interaction network," Journal of Computational Science, vol. 21, pp. 140–149, 2017.
[33] R. P. Parouha and K. N. Das, "A robust memory based hybrid differential evolution for continuous optimization problem," Knowledge-Based Systems, vol. 103, pp. 118–131, 2016.
[34] X. Li and G. Yang, "Artificial bee colony algorithm with memory," Applied Soft Computing, vol. 41, pp. 362–372, 2016.
[35] E. Rashedi, H. Nezamabadi-pour, and S. Saryazdi, "GSA: a gravitational search algorithm," Information Sciences, vol. 179, no. 13, pp. 2232–2248, 2009.
[36] J. Ji, S. Gao, S. Wang, Y. Tang, H. Yu, and Y. Todo, "Self-adaptive gravitational search algorithm with a modified chaotic local search," IEEE Access, vol. 5, pp. 17881–17895, 2017.
[37] G. Sun, P. Ma, J. Ren, A. Zhang, and X. Jia, "A stability constrained adaptive alpha for gravitational search algorithm," Knowledge-Based Systems, vol. 139, pp. 200–213, 2018.
[38] S. Mirjalili and A. Lewis, "The whale optimization algorithm," Advances in Engineering Software, vol. 95, pp. 51–67, 2016.
[39] S. Mirjalili, "SCA: a sine cosine algorithm for solving optimization problems,"
Knowledge-Based Systems, vol. 96, pp. 120–133, 2016.

TABLE XXI: Experimental results of using ASBSO, BSOOS and GBSO on the CEC'13 benchmark functions (F1-F28), reported as mean (standard deviation). (Table body not recoverable from the extracted text.)

TABLE XXII: Experimental results of using ASBSO, BSOOS and GBSO on the CEC'17 benchmark functions (F29-F57), reported as mean (standard deviation). (Table body not recoverable from the extracted text.)

TABLE XXIII: Results obtained by the Wilcoxon test for ASBSO vs. BSOOS and GBSO.

ASBSO vs.  R+      R-     p-value   α=0.05  α=0.01
BSOOS      1292.5  303.5  0.000031  YES     YES
GBSO       961.0   635.0  0.163962  NO      NO

[40] J. Luengo, S. García, and F. Herrera, "A study on the use of statistical tests for experimentation with neural networks: Analysis of parametric test conditions and non-parametric tests," Expert Systems with Applications, vol. 36, no. 4, pp. 7798–7808, 2009.
[41] S. García, D. Molina, M. Lozano, and F. Herrera, "A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization,"
Journal of Heuristics, vol. 15, no. 6, pp. 617–644, 2009.
[42] S. García, A. Fernández, J. Luengo, and F. Herrera, "Advanced non-parametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: Experimental analysis of power," Information Sciences, vol. 180, no. 10, pp. 2044–2064, 2010.
[43] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[44] J. Liu and J. Lampinen, "A fuzzy adaptive differential evolution algorithm," Soft Computing, vol. 9, no. 6, pp. 448–462, 2005.
[45] Y. Cai and J. Wang, "Differential evolution with neighborhood and direction information for numerical optimization," IEEE Transactions on Cybernetics, vol. 43, no. 6, pp. 2202–2215, 2013.
[46] J. Wang, J. Liao, Y. Zhou, and Y. Cai, "Differential evolution enhanced with multiobjective sorting-based mutation operators," IEEE Transactions on Cybernetics, vol. 44, no. 12, pp. 2792–2805, 2014.
[47] J. Wang, W. Zhang, and J. Zhang, "Cooperative differential evolution with multiple populations for multiobjective optimization," IEEE Transactions on Cybernetics, vol. 46, no. 12, pp. 2848–2861, 2016.
[48] S. Das and P. N. Suganthan, "Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems," Jadavpur University, Nanyang Technological University, Kolkata, 2010.
[49] M. El-Abd, "Global-best brain storm optimization algorithm," Swarm and Evolutionary Computation, vol. 37, pp. 27–44, 2017.
[50] I. Rechenberg, "Evolutionsstrategien," in Simulationsmethoden in der Medizin und Biologie. Springer, 1978, pp. 83–114.
[51] K. Tang, P. Yang, and X. Yao, "Negatively correlated search,"