A Novel Meta-Heuristic Optimization Algorithm Inspired by the Spread of Viruses
AA Novel Meta-Heuristic Optimization Algorithm Inspired by theSpread of Viruses
Zhixi Li ∗ , Vincent Tam Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
Abstract
According to the no-free-lunch theorem, there is no single meta-heuristic algorithm that can opti-mally solve all optimization problems. This motivates many researchers to continuously developnew optimization algorithms. In this paper, a novel nature-inspired meta-heuristic optimizationalgorithm called virus spread optimization (VSO) is proposed. VSO loosely mimics the spreadof viruses among hosts, and can be e ff ectively applied to solving many challenging and contin-uous optimization problems. In VSO, the ribonucleic acid (RNA) of the virus represents a solu-tion to the problem at hand. Here, we devise a new representation scheme and viral operationsthat are radically di ff erent from all the previously proposed virus-based optimization algorithms.First, the viral RNA of each host in VSO denotes a potential solution for which di ff erent viraloperations will help to diversify the searching strategies in order to largely enhance the solu-tion quality. In addition, an imported infection mechanism, inheriting the searched optima fromanother colony, is introduced to possibly avoid the prematuration of any potential solution insolving complex problems. VSO has an excellent capability to conduct adaptive neighborhoodsearches around the discovered local and global optima for achieving better solutions. Further-more, with a flexible infection mechanism, VSO is able to quickly escape from local optima soas to look for other globally (sub-)optimal solution(s). To clearly demonstrate both its e ff ective-ness and e ffi ciency, the newly proposed VSO is critically evaluated on a series of well-knownbenchmark functions. Moreover, VSO is validated on its applicability through two real-worldexamples including the financial portfolio optimization and optimization of hyper-parameters ofsupport vector machines for classification problems. The experimental results show that VSO hasattained superior performance in terms of solution fitness, convergence rate, scalability, reliabil-ity, and flexibility when compared to those results of the conventional as well as state-of-the-artmeta-heuristic optimization algorithms. Keywords:
Virus Spread Optimization, Nature-Inspired Algorithms, Meta-HeuristicOptimization, Continuous Optimization
1. Introduction
Optimization techniques have been widely applied in many scientific and engineering ap-plications. For instance, in the field of artificial intelligence, researchers often attempt to op- ∗ Corresponding author.
Email addresses: [email protected] (Zhixi Li), [email protected] (Vincent Tam)
Preprint submitted to Elsevier June 12, 2020 a r X i v : . [ c s . N E ] J un imize various machine learning models, e.g. tuning hyper-parameters of support vector ma-chines (SVMs) [1] and optimizing deep neural network architecture [2, 3], to obtain a betterperformance. In the areas of industrial design and manufacturing, engineers always encounternumerous optimization problems for various products and scenarios, such as the optimizationof aerodynamic shapes for aircraft, cars, bridges, etc. [4] and the optimization of supply chainmanagement [5]. In finance, investors usually pursue an optimal portfolio aiming to maximizethe return while minimizing the risk [6, 7]. There are many optimization problems in our dailylives like finding the shortest vehicle route to a destination [8], resource allocation to satisfyperformance goals [9], and so on.Since many real-world optimization problems are too complex to be solved with a goodsolution by conventional optimization approaches in a reasonable time, meta-heuristic optimiza-tion algorithms have recently captured much attention and achieved some success [10]. In thepast decades, researchers have invented several nature-inspired meta-heuristic optimization algo-rithms to imitate some phenomena or behaviors of the nature. Such algorithms can be classifiedinto five categories: evolution-based, swarm-intelligence-based, physics-based, chemistry-basedand human-based algorithms. Evolutionary algorithms (EAs) are inspired by the biological evo-lutionary process. Genetic algorithm (GA) [11], evolution strategies (ES) [12] and di ff erentialevolution (DE) [13] can be regarded as representative algorithms in EAs. For the second category,swarm intelligence algorithms (SIs) imitate the intelligent behaviors of creatures in nature. Par-ticle swarm optimization (PSO) is the most pioneering work of SIs [14]. Up to now, the researchof SIs has been very active such that new algorithms are being proposed from time to time. Somewell-known examples of SIs include: Ant colony optimization (ACO) [15], artificial bee colony(ABC) [16], social spider algorithm (SSA) [17], whale optimization algorithm (WOA) [18], greywolf optimizer (GWO) [19], etc. For both physics-based and chemistry-based optimization al-gorithms, that are motivated by physical phenomena and chemical reactions, examples includesimulated annealing (SA) [20], chemical reaction optimization (CRO) [21], nuclear reaction opti-mization (NRO) [22] and so on. Lastly, collective decision optimization algorithm (CDOA) [23]and queuing search algorithm (QSA) [24] are examples of the last category.According to the no-free-lunch theorem (NFL), there is no single meta-heuristic algorithmthat can optimally tackle all optimization problems [25]. Undoubtedly, this motivates researchersto continuously develop new algorithms for various applications. In particular, the proposed algo-rithm should be very competitive with the few existing successful optimization approaches suchas PSO for solving the well-known benchmark functions as well as various real-world problemsin terms of the solution quality, rate of convergence, scalability, reliability and flexibility, etc.In this paper, we propose a novel, powerful and nature-inspired meta-heuristic algorithmnamely the virus spread optimization (VSO) for tackling continuous optimization problems.VSO mimics the mighty spread of viruses among hosts. Here, we devise a new representationscheme and operations that are radically di ff erent from all the previously proposed virus-basedoptimization algorithms. First, the viral ribonucleic acid (RNA) of each host in VSO denotesa potential solution to the problem at hand for which di ff erent viral infection, mutation and re-covery operations will help to diversify the searching strategies in order to largely enhance thesolution quality. In addition, an imported infection mechanism, inheriting the searched optimafrom another colony, is introduced to possibly avoid the prematuration of any potential solutionin solving complex problems. The VSO algorithm has an excellent capability to conduct adap-tive neighborhood searches around the discovered local and global optima for achieving bettersolutions. Furthermore, with a flexible infection mechanism, VSO can quickly escape from localoptima in order to look for other globally (sub-)optimal solution(s).2o evaluate the performance of the proposed optimization algorithm, experiments are con-ducted on a series of well-known benchmark functions including 16 classical examples listedin [26] [27] [28] and 30 problems specially designed by the IEEE CEC 2014 for competi-tion [29]. In addition, VSO is applied to two real-world applications such as the financial port-folio optimization and optimization of hyper-parameters of SVMs for classification problems.To investigate the scalability, the algorithm was well-tested on the classical benchmark functionsand portfolio optimization problems with di ff erent ranges of dimensions including: low (30 &100 dimensions), medium (300 & 500 dimensions) and high (1 ,
000 dimensions) for the bench-mark functions, and di ff erent numbers as 30, 100 and 250 of stocks for portfolio optimization.A standardized running environment and settings are used for a fair comparison of the perfor-mance of the VSO algorithm with those of the conventional meta-heuristic algorithms includingGA [11], DE [13], PSO [14], ABC [16], as well as state-of-the-art ones, i.e. SSA [17], WOA [18]and covariance matrix adaptation evolution strategy (CMA-ES) [30] with their outstanding per-formance reported in literature. The experimental results verify that VSO achieves impressiveperformances in terms of solution quality, convergence rate, scalability, reliability and flexibilitywhen compared to those of the above conventional and state-of-the-art meta-heuristic algorithms.To the best of our knowledge, two virus-based algorithms namely the virus colony search(VCS) [31] and virus optimization algorithm (VOA) [32] have been proposed to tackle variousoptimization problems. However, VSO is radically di ff erent from these two existing algorithmsin their analogies, motivation, implementations and search behaviors. We will further reveal thedetails in Section 2.In summary, the major contributions of this paper are as follows. • A new meta-heuristic algorithm as a very competitive and potential approach is proposedto solve challenging and continuous optimization problems; • The proposed optimization algorithm, combining the strengths of EAs and SIs, can achievean excellent trade-o ff between exploitation and exploration by the unique design of thediversification of the search strategies. This makes the algorithm applicable to a widerrange of problems in practice; • The imported infection mechanism, as a novel search strategy cooperating with other meta-heuristic algorithms, helps to significantly enhance the overall optimization algorithm fortackling more complex problems; • The outstanding performance of the proposed algorithm is demonstrated not only on thesolution quality but also the rate of convergence, scalability and reliability through per-forming a series of experiments on 46 well-recognized benchmark functions and two real-world optimization problems.The rest of this paper is organized as follows. Section 2 describes the analogies, operations,implementations and work flow of the VSO algorithm in details. The experimental results andrelated discussion on the benchmark functions are presented in Section 3. The performancesfor two real-world applications including financial portfolio optimization and optimization ofhyper-parameters of SVMs for classification problems are shown and discussed in Section 4and Section 5 respectively. We conclude this work and shed lights on various potential futuredirections in Section 6. 3 . The Virus Spread Optimization Algorithm
Considering the powerful spread of viruses with a great diversity of viral behaviors, VSO isproposed to simulate such process loosely. The analogies of VSO are listed in Table 1. The hostand virus are essential components of the algorithm.
Table 1: Analogy of VSOTerminology Natural Meaning Algorithmic MeaningViral Spread To infect all hosts. To search the solution space and find an optimal one.Virus A virus that contains an RNA which may mutate. The RNA represents a solution to the problem.Host Organism (e.g. animals, humans) that is infectedby the virus. The infected host may show symptomsof various degrees. The symptom intensity generally represents thefitness of a feasible solution. The critical hostdenotes the best fitness.
In VSO, the population is composed of hosts. There are four types of hosts imitating thespread of viruses and the immunological di ff erences in nature: healthy, mild, severe and critical .Each host including the healthy one carries a virus. In fact, many animals including humansmay carry all kinds of non-infectious viruses in nature [33]. For instance, a healthy human maycarry a few viruses like endogenous retrovirus (ERV) that are in fact beneficial to our immunesystem [34]. Besides, bats carry a lot of unknown viruses yet may not get sick from thoseviruses [35]. The main di ff erence between healthy hosts and other hosts is that healthy hosts actas healthy carriers with non-infectious viruses while the infected (also called infectious) ones,i.e. mild, severe, and critical hosts, can infect the healthy hosts.More importantly, there are di ff erent viral infection and mutation operations for each type ofhosts in VSO to diversify the searching strategies so that the optimizing capability and flexibilitycan be largely enhanced. More definitions are provided as follows. • Definition 1: A Viral RNAEach host has a viral RNA that represents a possible solution as shown in (1). X i = [ x i , x i , . . . , x Di ] (1)where X i (vector) is the RNA of the virus denoting a possible solution to the problem athand, i is the iteration number, and D is the dimensionality, i.e. the number of decisionvariables, of the problem. • Definition 2: A Healthy HostA healthy host is a host carrying a non-infectious virus whose RNA is generated randomlyin every iteration. The host conducts a random search in the solution domain as listedin (2). X i = U ( S ) (2)where S is the whole search space while U is a random number generator function basedon the uniform distribution of S . 4 Definition 3: A Mild HostA mild host is carrying an infectious virus. As shown in (3)-(4), the virus of this host canmutate with a mutation intensity intensity Mi and also infect other healthy hosts with a rate R M that is relatively low when compared to other infectious hosts. intensity Mi = α ∗ intensity Mi − + γ ∗ rand (0 , ∗ ( gbest i − − X i ) (3) X i + = X i + intensity Mi (4)where intensity Mi (vector) is the mutation intensity of the mild host at the iteration i , α ∈ [0 ,
1] and γ ∈ [1 ,
2] are the scaling factors, gbest i − is the best solution obtained by thepopulation at the iteration i −
1, and rand (0 ,
1) is a random number between 0 and 1. • Definition 4: A Severe HostAs shown in (5)-(6), a severe host carries an infectious virus that can mutate with a muta-tion intensity intensity Si and also infect other healthy hosts with its own rate R S . Overallspeaking, its infectious ability is medium as compared to that of the critical host. intensity Si = δ s ∗ intensity Si − (5) X i + = X i + Gaussian (0 , intensity Si ) ∗ X i (6)where intensity Si (scalar) is the mutation intensity of the severe host at the iteration i , δ s ∈ (0 ,
1] is the decay rate, and
Gaussian (0 , intensity Si ) is the Gaussian function with themean as 0 and the standard deviation as intensity Si . • Definition 5: A Critical HostIn VSO, there is only one critical host which represents the currently most optimal solutionobtained so far. As shown in (7), its viral mutation is paused yet with the highest infectionrate R C to carry its relatively good solution quality to other healthy hosts. X i + = X i (7) In VSO, the initialization, selection, mutation, infection and recovery are five essential oper-ations while the imported infection serves as an additional operation to enhance the optimizingperformance.
At the starting point with the number of iterations as 0, the whole population is initialized as healthy hosts. The viral RNA of each host is randomly generated in the search space accordingto (8). X i = = bound l + rand (0 , ∗ ( bound u − bound l ) (8)where i is the iteration number, bound l and bound u denote the lower and upper bounds of thecorresponding domain of the variable being considered.The mutation intensities intensity Si and intensity Mi of the mild and severe hosts are initializedin (9) and (10) as below. intensity Mi = = U ( bound l , bound u )10 (9) intensity Si = = rand (0 ,
1) (10)Algorithm 1 shows the detailed initialization process.5 lgorithm 1:
Initialization
Input:
Population size: N pop Searching bound: bound l and bound u Random number generator based on the uniform distritbution: U Output:
Newly created hosts: hosts hosts ← ∅ ; i ← while ( i < N pop ) do Initialize a new host h with a viral RNA according to (8); Initialize the mutation intensities of the host h according to (9) & (10); h .type ← ’healthy’; Insert h into hosts ; i ← i + end return hosts In VSO, the host with the best solution will be selected as the critical host after calculatingfitness for all hosts at each iteration. As presented in Algorithm 2, the host that has achievedthe best solution up to current iteration will be designated as the critical host while the previous critical one will be downgraded to the severe host.In nature, due to the complicated viral mutation, immune response and outside environment,some viruses infecting a healthy host may develop into deadly viruses shortly. Analogously, a healthy host conducting a random search will possibly become the critical one directly as wellin VSO.
Algorithm 2:
Selection
Input:
Hosts: hosts
Current number of iterations: i Output: critialHost Get the host gBestHost with best solution from hosts; gBestHost . type ← ’critical’; if ( gBestHost (cid:44) prev gBestHost ) then prev gBestHost.type ← ’severe’; gBest i ← gBestHost.virus.rna; end The mutation behavior of the searching strategy is one of the key factors to the success ofVSO. Depending on the type of hosts, the mutation operation will work according to (2)-(7).Algorithm 3 clearly shows the pseudo-code of the mutation operation. The viral RNAs of allhosts will be updated accordingly by the mutation operation at each iteration.
The main objective of the infection mechanism is to spread the viral information among allthe hosts so as to empower the search e ff ectiveness of the VSO algorithm. In the real world, thetransmission route, such as direct contact, is necessary for the spread of many viral diseases [36].We hereby design a three-step mechanism for the infection operation in VSO.6 lgorithm 3: Mutation
Input:
All hosts: hosts
Output:
All hosts with updated viral RNAs: hosts (cid:48) for each host in hosts do switch host.type do case healthy do Update host.virus.rna according to (2); end case mild do Update host.virus. intensity M according to (3); Update host.virus.rna according to (4); end case severe do Update host.virus. intensity S according to (5); Update host.virus.rna according to (6); end case critical do Update host.virus.rna according to (7); end end end At first, every infectious host has one or more chances to contact healthy hosts at each itera-tion.Secondly, we have to decide whether that contacted healthy host will be infected or not.Therefore, di ff erent infection rates are assigned to the hosts according to their types as shownin (11). R in f ect = [ R M , R S , R C ] (11)where 0 < R M ≤ R S < R C <
1. They are the infection rates for mild , severe , and critical host,respectively. More specifically, the infection rate is the probability of an infectious host infectinga healthy host when they contact.Lastly, in case of a healthy host infected by an infectious host successfully, it will becomea severe or mild host at di ff erent probabilities. We hereby design a transformation matrix asillustrated in (12). P trans = P CH − > M P CH − > S P SH − > M P SH − > S P MH − > M P MH − > S (12)where P trans is the matrix of transformation probabilities. For instance, P SH − > M is the conditionalprobability of a healthy host becoming the mild host given by being infected by a severe host.As there are only two events here that are mutually exclusive, i.e. becoming a mild or severehost, the summation of each row of probabilities is equal to 1.At each iteration, the specific procedure of the infection is summarized as follows. • As indicated in (11), the healthy host contacting with an infectious host will be infected7ith probabilities R C , R S and R M respectively as dependent on the type of the infectioushost; • During the infection, the healthy host may be infected as the severe or mild host accordingto the transformation probabilities as described in (12). Specially, the host infected by the mild host can become the mild host only so that the transformation probability P MH − > M isalways 1 and P MH − > S is equal to 0; • In addition, two solution sharing mechanisms may be performed during the infection pro-cess. When a healthy host (destination) infected by an infectious host (source) to become a severe host, the viral RNA of the source will be copied to the destination directly as shownin Figure 1. In addition, when a healthy host is infected by a mild host, each assignedvalue of the viral RNA of the destination will be randomly replaced by the source with afixed probability of 0 . Figure 1: The host to be infected as the severe hostFigure 2: The host to be infected as the mild host
The implementation of the above viral infection is described in Algorithm 4. The infectiousand healthy hosts are firstly sorted according to the ascending and descending order of theirfitness values respectively. Since the VSO algorithm is designated for solving minimizationproblems, the smaller the fitness value, the better the solution. Thus, the infectious host with abetter solution quality will be more likely selected to infect a healthy host. Conversely, a healthy host with a worse fitness value will be more likely to be infected. Moreover, an integer parameter H as mentioned above is used to limit the maximum number of healthy hosts to be contacted byeach infectious host. This can help to avoid any premature convergence of the whole populationto any local minimum. The recovery operation is another key mechanism of the VSO algorithm. Due to the powerfulviral spread in the infection, all hosts may be infected very soon so that the searching capacityof the algorithm may still be quickly converged into a local minima even though with the afore-mentioned parameter H to restrain the maximum number of contacted hosts.8 lgorithm 4: Infection
Input:
All hosts: hosts
The maximum number of healthy hosts to be contacted by each infectious host: H Problem dimensionality: D Output:
All hosts with possibly updated viral RNAs: hosts (cid:48) Select in f ectiousHosts from hosts ; Sort in f ectiousHosts by the ascending order of fitness values; for each in f ectiousHost ∈ in f ectiousHosts do Select healthyHosts from hosts ; if ( | healthyHosts | ≥ H ) then Sort healthyHosts by the descending order of fitness values; contactedHosts ← { the first H hosts } ⊆ healthyHosts ; for each healthyHost ∈ contactedHosts do in f ected ← f alse ; T ← in f ectiousHost . type ; Get the infection rate R T from (11); if rand(0,1) ≤ R T then in f ected ← true ; end if ( in f ected ) then Get the transformation probabilities (cid:104) P TH − > M , P TH − > S (cid:105) from (12); if (0 ≤ rand (0 , ≤ P TH − > M ) then to be in f ected type ← M ; else // P TH − > M + P TH − > S = 1 to be in f ected type ← S ; end switch to be in f ected type do case M do healthyHost . type ← ’mild’; for idx = to D − do if rand (0 , ≤ . then healthyHost .virus.rna[ idx ] ← in f ectiousHost .virus.rna[ idx ]; end end end case S do healthyHost .type ← ’severe’; healthyHost .virus.rna ← in f ectiousHost .virus.rna; end end end end end end Thus, in case all the hosts are infected, the recovery operation will be performed to carefullyreset some of the infected hosts to continue with the exploration process. We have not adoptedthe simple random or scheduled restart approaches used by many algorithms such as [37, 38,39]. Instead, an interesting mechanism to gradually downgrade the infected hosts is devised asinspired by the nature in which an infected host has to recover gradually. Likewise, each infectedhost will be downgraded to the less severe host type of the VSO framework. For example, a severe host will be recovered to the mild host while a mild host will become the healthy host. As9he searching restrictions will be relaxed for the “recovered” host types, the searching capacityof the algorithm will be enhanced gradually as well so as to explore the other parts of the searchspace.Furthermore, a parameter recPercent called the recovery rate is used to specify the percent-age of the infected hosts with the worst solution quality to be recovered. This can help to avoidlosing all the search information accumulated so far during the search process. The detailedimplementation is given in Algorithm 5.
Algorithm 5:
Recovery
Input:
Infectious hosts: in f ectiousHosts
Population size: N pop Recovery percentage: revPercent
Output:
Hosts recovered from in f ectiousHosts Sort in f ectiousHosts by the descending order of fitness values; if ( | in f ectiousHosts | = N pop ) then revNum ← N pop ∗ revPercent ; RH ← { the first revNum hosts } ⊆ in f ectiousHosts ; for each host h ∈ RH do Initialize h referring to Algorithm 1 ; switch h.type do case severe do h .type ← ’mild’ ; end case mild do h .type ← ’healthy’ ; end end end end As inspired by the possible migration of hosts from one place to another that may increasethe spread of a viral disease in the real world, the concept of “imported infection” is introducedas an additional operation of the VSO framework to enhance its search performance for solvingcomplex optimization problems.Accordingly, a new colony is developed through the DE algorithm to construct some poten-tially better solution to the whole population of VSO. However, this simple heuristic operationmay break the searching patterns of the concerned VSO algorithm, thus possibly leading to apoorer performance. Therefore, an adaptive probability is predefined to export the DE colony tothe whole population of VSO in a probabilistic manner as illustrated in Algorithm 6.As an additional operation, the imported infection may help to improve the search perfor-mance in some complex cases yet it will also increase the overall computational complexity ofthe VSO algorithm. Hence, we may flexibly skip this additional operation in some cases. Moreimportantly, this novel design provides a useful interface for researchers or users to integratetheir own algorithms for some specific problems.
Figure 3 manifests the algorithmic flow of VSO. Firstly, the concerned parameters of thealgorithm and the features of the problem are provided as the input to start the execution of VSO.10 lgorithm 6:
Imported Infection
Input:
Critial host: criticalHost
DE algorithm: DE with the populaztion size as N im (refer to [40])Infection probability: P im Current number of iterations: i Total number of iterations: j Output:
A critical host with the updated viral RNA: criticalHost (cid:104) bestS olution , bestFitness (cid:105) ← DE ; if ( rand (0 , ≤ ( P im ∗ i / j )) then if ( bestFitness < criticalHost . f itness ) then criticalHost .virus.rna ← bestS olution end end Then, the involved operations as described in Section 2.2 are performed successively.
Exploitation and exploration are the two cornerstones of search techniques in solving opti-mization algorithms. If the exploitation ability is too strong, the algorithm may easily fall intolocal optima. On the other hand, the algorithm may not be able to converge to any possiblesolution of a relatively high fitness value in case it solely relies on a very powerful explorationmechanism.With the novel design of VSO as clearly explained in the previous subsections, it is obviousthat VSO combines both advantages of SIs and EAs in order to achieve an excellent balancebetween exploitation and exploration. The search behavior of the VSO algorithm is summarizedas follows. • As the critical host representing the best solution obtained so far, it has the highest in-fection rate. Thus, it is more likely to infect healthy hosts to become the severe hostsin the next iteration. To perform such infection operation, the viral RNA of the critical host will be directly replicated to the newly infected host according to Section 2.2.4. Thisimplies that an increasing number of healthy hosts may acquire the valuable search infor-mation of the currently best solution to become severe hosts. On the other hand, sincethe mutation intensity of the severe hosts will be decreased rapidly as described in Sec-tion 2.2.3, the severe hosts will conduct neighborhood searches around the locally optimalsolution. Overall speaking, this will surely help to enhance the exploitation ability of VSOto improve its solution quality; • Each time when a better solution is found, the previous critical host will be automaticallydowngraded to a severe host to continue its neighborhood search around the previous bestsolution for a certain duration as seen in Section 2.2.4 before any possible transformationto another host type. Meanwhile, the downgraded host is able to infect other healthy hoststo search this area together. • All healthy hosts of the VSO algorithm perform random exploration to try to find a bettersolution of the whole search space; 11 igure 3: The Algorithmic Flow of VSO • The main role of mild hosts is to improve the exploration capacity of the VSO algorithm.When a healthy host is infected to become a mild host, the viral RNA of the infectiousone will not be replicated directly to the healthy host. Instead, a uni-directional infectionmechanism as presented in Section 2.2.4 is performed, that is di ff erent from the two-sidedcrossover operation used in EAs. Moreover, a mild host can always mutate with a higherdegree of freedom as guided by the computed intensity. This infection scheme empowersthe VSO algorithm with an outstanding exploration ability; • Due to the recovery mechanism, the infected hosts will be recovered and re-initializedfrom time to time. The recovery mechanism helps to escape from any local minimum fora better exploration; • The imported infection mechanism hybridizes the whole population of the VSO algorithm12ith another new colony using a totally di ff erent searching approach. This may possiblyenlarge the search scope of the VSO algorithm for tackling more complex optimizationproblems.As illustrated in Figure 4, the red cross denotes the globally optimal solution of the specificfunction while the only red dot represents the critical host as the best solution obtained so far.Clearly, this critical host infect several severe hosts as denoted by gray dots around the centralcircle to look better solutions whereas the mild hosts as represented by the orange dots willcontinue to search toward the red dot that is very likely to achieve a near optimal solution. Figure 4: The Searching Pattern of VSO
Below is a few basic rules for the parameters setting of the VSO algorithm. • The higher the value of R M , P CH − > M or γ , the better the global search ability of the VSOalgorithm, and vice versa; • On the other hand, the larger the value of R C , R S , P CH − > S , P SH − > S , or α , or the smaller thevalue of δ s , the better the local search capability of the VSO algorithm, and vice versa; • R and revPercent are the conflicting parameters to balance the convergence of the algo-rithm. R should not be very large, and generally depends on the population size N pop of theVSO algorithm. For instance, R can be set to 1 for a specific problem with the populationsize as 50 to be discussed in the subsequent section; • A larger value of P im may sometimes help to get some quick improvement in solvingspecific complex optimization problems. Yet for a relatively large value of P im , it mayalso break the good searching patterns. From the empirical observations, P im ∈ (0 , .
5] istypically a good choice for most benchmark problem sets carefully examined in this work.Because of the diverse searching strategies utilized in the VSO algorithm, the number ofparameters is relatively larger than other popular meta-heurisitc algorithms such as GA, PSO, etc.13et from the preliminary observations, the performance of the VSO algorithm is relatively robustwhen only a few of the aforementioned parameters are changed at the same time. Moreover, itis found that the VSO algorithm can flexibly tackle a variety of optimization problems usingthe same parameter settings without much tuning. As revealed in Section 3, the same parametersettings of the VSO algorithm are consistently used in all the following experiments.
3. Evaluations on Benchmark Functions
To validate both the e ffi ciency and e ff ectiveness of the proposed algorithm, the VSO algo-rithm is utilized to solve two benchmark function groups including the classical and IEEE CEC2014 benchmark functions.For the classical benchmark functions, a total of 16 well-known functions given in Table A.15are used. These functions have been well-tested in all kinds of studies of meta-heuristic al-gorithms in previous research. Among the functions, F − F F − F
16 are multi-modal functions. Besides, all functions can be scalable from 2 to 1 ,
000 di-mensions so that the scalability of the concerned algorithms can be investigated. The motivationfor testing these classical functions is outlined as follows: • To quickly evaluate the searching capability of VSO when compared to those of otherpopular meta-heuristic algorithms, especially in terms of the solution quality; • To evaluate the rate of convergence; • To test the reliability of the algorithm; • To investigate the scalability of the algorithm.On the other hand, the IEEE CEC 2014 benchmark functions, as shown in Table B.16,are specially designed for evaluating the performance of meta-heuristic algorithms in the com-petition of single objective real-parameter numerical optimization problems. The functions(
CEC − CEC
30) contain various novel characteristics such as shiftings and rotations, it is muchmore di ffi cult to solve them than the classical set. Up to our understanding, no algorithm hassolved all functions optimally. More details about these functions can be found in [29]. Despitethe di ffi culty, we evaluate the e ff ectiveness and robustness of the VSO algorithm on this set ofchallenging functions.In the following experiments, all results are collected on the same computer with the IntelCore i9-7900X CPU running at 3 . ∼ . Python
3. Table 2 lists the parameter settings of each concerned algorithm accordingto the recommended values reported in the literature. Except for the population size, there aretotally 11 unique parameters in VSO as listed in Table 2. In fact, for other parameters that are notlisted, they can be derived according to the relationships mentioned in Section 2.2. For instance,since P CH − > S is set to 0 . P CH − > M is 0 .
2. Furthermore, with the imported infection operation, thepopulation size of the main process of the VSO algorithm is consistently set as 30 while thatof the imported infection is 20. It is worth noting that the parameters of each algorithm remainunchanged in all experiments in order to evaluate the adaptability of the underlying algorithmwith the same parameter settings on various problem sets for a fair comparison.14 able 2: Parameters SettingAlgorithm Parameter ValueABC Population size 50Elite bees num 16Onlooker bess num 4Patch size 5Patch factor 0.985Sites num 3Elite sites num 1CMA-ES Population size 4 + σ ff erential weight, 0.5GA Population size 50Probability of mutation 0.001Selection tournsize 3PSO Population size 50Inertial weight 0.8Cognitive constant 0.5Social constant 0.5SSA Population size 50 P a P c P m R C R S R M P CH − > S P SH − > S δ s α γ revPercent P im H The classical benchmark functions with 30 dimensions have been widely used for evaluatingmany meta-heurisitic algorithms like PSO, GA, etc., in many previous studies. In the followingevaluation, each function is tested over 31 runs for each algorithm. The maximum number ofiterations in each run is 10 . Table 3 shows the relevant results with the mean as the averagevalue of the fitness values obtained over all runs. The standard deviation of the fitness values iscalculated to examine the robustness of the algorithms. Furthermore, the best and worst resultsare carefully considered. To investigate the computational complexity, the average computationaltime in CPU seconds is recorded. Finally, two rankings in terms of the averaged fitness valuesand computational times are listed in order to make more precise and objective comparisons on15he di ff erent performance measures of the underlying algorithms. • In respect of the uni-modal functions F F
8, VSO consistently beats other algorithmsin all the rankings. For multi-modal functions F F
16, the VSO algorithm gets the firstplaces for 6 functions as well. More importantly, VSO achieves the exact global optimafor all the 12 functions, i.e. F F F F
13, and F
15. The standard deviations are 0 forall these cases, thus showing the excellent robustness of VSO; • As for other algorithms, we can observe that the performance of CMA-ES and WOA arenot bad for the uni-modal functions. Regarding multi-modal ones ( F F • For both GA and ABC, their performances are not satisfied for multi-modal functionsbecause they may not be good at solving these relatively high dimensional and complexproblems; • Regarding the DE and SSA algorithms, although they acquire very small errors in somefunctions, they cannot find the exact global optima; • Due to the simple and e ffi cient searching strategies, PSO is very fast. It ranks as the firstplace in computational time in 9 cases. Unfortunately, the performance of fitness is worstamong all the algorithms.Table 4 shows the summary of classical function evaluations where the average of the rank-ings in all functions for each algorithm is computed. VSO ranks as the first place with respect tothe fitness values whereas it is ranked as the fourth place in terms of the computational time. Table 3: Results of Classical Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOAF1 Mean 3.18E +
06 0.00E +
00 3.33E-54 2.97E-11 4.40E +
04 6.15E-40 0.00E +
00 0.00E + +
05 0.00E +
00 2.46E-54 1.08E-11 2.57E +
04 4.07E-40 0.00E +
00 0.00E + +
06 0.00E +
00 3.29E-55 6.56E-12 1.18E +
04 1.57E-40 0.00E +
00 0.00E + +
06 0.00E +
00 6.12E-54 5.69E-11 1.30E +
05 2.19E-39 0.00E +
00 0.00E + Time Rank 4 8 7 6 +
00 4.57E-53 5.15E-14 2.84E +
01 9.53E-44 0.00E +
00 0.00E + +
00 4.87E-53 1.26E-29 2.20E +
01 7.65E-44 0.00E +
00 0.00E + +
00 4.88E-55 5.15E-14 6.73E +
00 7.26E-45 0.00E +
00 0.00E + +
00 1.22E-52 5.15E-14 7.42E +
01 3.30E-43 0.00E +
00 0.00E + Time Rank 6 8 5 3 +
05 5.51E-41 0.00E +
00 0.00E + +
00 9.36E-52 0.00E +
00 1.49E +
05 5.80E-41 0.00E +
00 0.00E + +
03 9.96E-42 0.00E +
00 0.00E + +
05 2.17E-40 0.00E +
00 0.00E + Time Rank 4 8 7 2 +
01 0.00E +
00 1.37E-54 2.30E-13 1.50E +
01 1.09E-41 0.00E +
00 5.32E-233Std 4.15E +
00 0.00E +
00 1.40E-54 7.78E-14 2.53E +
00 7.21E-42 0.00E +
00 0.00E + +
01 0.00E +
00 4.14E-55 1.17E-13 9.14E +
00 2.72E-42 0.00E +
00 1.77E-269 able 3: Results of Classical Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOAWorst 6.56E +
01 0.00E +
00 4.15E-54 4.81E-13 2.13E +
01 2.81E-41 0.00E +
00 1.60E-231Time(s) 11.67 229.63 71.92 43.76 5.06 8.95 9.68 30.90Fitness Rank 8 +
00 1.04E-52 6.77E-13 1.27E +
01 4.09E-43 0.00E +
00 0.00E + +
00 1.21E-52 2.02E-28 6.41E +
00 3.27E-43 0.00E +
00 0.00E + +
00 1.22E-53 6.77E-13 5.29E +
00 6.17E-44 0.00E +
00 0.00E + +
00 3.30E-52 6.77E-13 2.97E +
01 1.09E-42 0.00E +
00 0.00E + Time Rank 4 8 7 5 2 +
00 0.00E + +
00 0.00E + +
00 0.00E + +
00 0.00E + Time Rank +
02 1.2e-322 3.33E-31 6.05E +
00 1.31E +
02 5.26E-03 0.00E +
00 1.04E-20Std 3.99E +
01 0.00E +
00 3.36E-31 2.54E +
00 6.93E +
01 2.43E-03 0.00E +
00 5.59E-20Best 1.20E +
02 0.00E +
00 6.92E-33 2.33E +
00 2.71E +
01 1.84E-03 0.00E +
00 1.29E-60Worst 2.74E +
02 5.93e-322 9.46E-31 1.22E +
01 3.12E +
02 1.35E-02 0.00E +
00 3.12E-19Time(s) 60.39 368.70 137.53 63.75 40.19 69.38 115.98 53.24Fitness Rank 8 2 3 6 7 5 +
04 3.14E-128 1.87E-34 1.83E +
01 1.63E +
04 6.07E +
01 0.00E +
00 4.14E-67Std 5.04E +
03 5.31E-128 2.55E-34 8.94E +
00 7.08E +
03 1.65E +
01 0.00E +
00 2.23E-66Best 1.43E +
04 6.57E-258 1.50E-35 3.12E +
00 3.63E +
03 2.65E +
01 0.00E +
00 1.32E-245Worst 3.54E +
04 1.37E-127 6.91E-34 3.95E +
01 3.12E +
04 1.02E +
02 0.00E +
00 1.24E-65Time(s) 378.85 534.32 321.64 298.88 257.02 264.40 271.99 175.60Fitness Rank 8 2 4 5 7 6 F9 Mean 1.24E +
01 4.36E +
01 2.49E +
01 3.80E +
01 8.67E +
01 2.25E-13 0.00E +
00 0.00E + +
00 9.55E +
00 1.89E +
00 9.42E +
00 1.78E +
01 7.95E-13 0.00E +
00 0.00E + +
00 3.18E +
01 2.19E +
01 1.79E +
01 4.21E +
01 0.00E +
00 0.00E +
00 0.00E + +
01 5.97E +
01 2.69E +
01 5.79E +
01 1.30E +
02 4.23E-12 0.00E +
00 0.00E + Time Rank 4 8 7 6 +
01 2.00E +
01 8.97E-15 1.27E-07 9.47E +
00 7.90E-15 4.44E-16 4.44E-16Std 8.01E-01 1.17E-02 2.84E-15 1.26E-08 1.23E +
00 1.41E-15 0.00E +
00 0.00E + +
01 2.00E +
01 7.55E-15 1.19E-07 7.36E +
00 7.55E-15 4.44E-16 4.44E-16Worst 1.84E +
01 2.00E +
01 1.47E-14 1.71E-07 1.20E +
01 1.47E-14 4.44E-16 4.44E-16Time(s) 21.05 112.15 83.13 47.54 15.59 18.52 23.63 40.88Fitness Rank 7 8 4 5 6 3
Time Rank 3 8 7 6 +
00 1.97E-03 0.00E +
00 6.93E-02 1.10E +
00 0.00E +
00 0.00E +
00 0.00E + +
00 1.37E-01 7.31E-02 0.00E +
00 0.00E +
00 0.00E + +
00 0.00E +
00 0.00E +
00 5.77E-15 9.54E-01 0.00E +
00 0.00E +
00 0.00E + +
00 9.86E-03 0.00E +
00 7.73E-01 1.33E +
00 0.00E +
00 0.00E +
00 0.00E + Time Rank 2 8 7 6 +
03 -8.92E +
02 -7.51E +
02 -1.17E +
03 -1.01E +
03 -5.52E +
02 -1.17E +
03 -1.17E + +
01 1.07E +
02 0.00E +
00 2.12E-13 3.76E +
01 4.16E +
01 6.48E +
00 4.49E-05Best -1.09E +
03 -1.01E +
03 -7.51E +
02 -1.17E +
03 -1.09E +
03 -6.32E +
02 -1.17E +
03 -1.17E + able 3: Results of Classical Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOAWorst -9.91E +
02 -7.51E +
02 -7.51E +
02 -1.17E +
03 -9.47E +
02 -4.74E +
02 -1.16E +
03 -1.17E + +
00 0.00E +
00 1.92E-150 1.42E-13 4.05E-06 2.08E-106 0.00E +
00 0.00E + +
00 0.00E +
00 3.77E-150 1.88E-13 4.15E-06 2.45E-106 0.00E +
00 0.00E + +
00 0.00E +
00 7.65E-158 2.22E-22 1.05E-06 3.40E-108 0.00E +
00 0.00E + +
00 0.00E +
00 9.46E-150 4.71E-13 1.23E-05 5.73E-106 0.00E +
00 0.00E + Time Rank 3 8 7 6 F15 Mean 2.24E-14 2.00E-12 3.60E-18 1.75E-06 7.26E +
00 2.24E-03 0.00E +
00 0.00E + +
00 5.30E-04 0.00E +
00 0.00E + +
00 1.22E-03 0.00E +
00 0.00E + +
01 3.48E-03 0.00E +
00 0.00E + Time Rank 6 8 7 4 2 +
01 -1.86E +
01 -2.40E +
01 -2.62E +
01 -1.73E +
01 -4.33E +
00 -2.88E +
01 -1.25E + +
00 1.25E +
00 6.03E-01 2.04E +
00 6.37E-01 1.91E-01 1.86E + +
01 -2.03E +
01 -2.64E +
01 -2.73E +
01 -2.05E +
01 -6.17E +
00 -2.91E +
01 -1.57E + +
01 -1.56E +
01 -2.29E +
01 -2.46E +
01 -1.18E +
01 -3.15E +
00 -2.85E +
01 -7.73E + Table 4: Summary of Classical Benchmarking Function EvaluationsAlgorithm Avg FitnessRank Avg TimeRank Overall FitnessRank Overall TimeRankABC 5.69 4.38 7 4CMA-ES 3.69 8.00 3 8DE 4.06 6.69 4 7GA 5.44 4.75 6 6PSO 7.25 1.50 8 SSA 5.00 2.69 5 2VSO 1.19 4.38 Following the CEC 2014 recommendation [29], the dimension of problems is selected as 30as well. Function
CEC CEC
CEC CEC
16 are sim-ple multi-modal functions but with various shiftings and rotations.
CEC
CEC
22 are hybrid18unctions while
CEC
CEC
30 are the composition functions. As the global optimum of eachfunction is di ff erent (from 100 to 3000), the fitness result is converted to the error as calculatedin (13) to make the comparison more straightforward. In other words, when the result gets closerto 0, it implies that the best solution obtained by the algorithm is closer to the global optimum ofthe corresponding function. f itness = f ( x ) − f ( x ∗ ) (13)where x is the best solution obtained by the algorithm while f ( x ∗ ) is the real global optimum ofthe function.Table 5 reports the results over 31 independent runs on each function for each algorithm. Afew observations are specified as follows. • For the uni-modal functions
CEC CEC
3, no algorithm is dominated. Because of thecomplicated rotation, the errors for
CEC . In thecase of CEC
2, it is similar to
CEC
1, only SSA and VSO are on the smallest order (10 ) ofmagnitude. Meanwhile, the best metric of all runs for VSO is 0, which demonstrates thatonly VSO has once achieved the exact global optima. Interestingly, WOA performs verywell in the classical uni-modal functions but does not work in these complicated uni-modalcases. It is remarkable that only VSO is able to achieve the global optimum exactly (withfitness error 0) in CEC • For multi-modal functions
CEC CEC
16, it is obvious that the VSO algorithm attains abetter performance than those of other algorithms. In terms of the mean of fitness, VSOranks the best for over half of the functions, including the
CEC CEC CEC
CEC
CEC
CEC
15 and
CEC
16. SSA followed by VSO acquires the best performance offitness in 4 functions; • For 6 hybrid functions
CEC
CEC
22, the VSO algorithm achieves the best performancein all functions except for
CEC
18 where it ranks the second place; • For the composition functions of
CEC
CEC
30, VSO outperforms all compared algo-rithms in the first 6 functions, i.e.
CEC
CEC
28. It can also be observed that WOAobtains the best performance in
CEC
23 and
CEC
25. This verifies the outstanding opti-mizing capacity of VSO on such complex functions.Table 6 indicates that VSO generally outperforms all other algorithms in terms of the fitnessvalues.In addition, VSO with the imported infection operation powered by DE works well hereyet the performance of the standalone DE is the worst of all.19 able 5: Results of CEC Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOACEC1 Mean 5.15E +
08 5.83E +
07 2.36E +
09 2.07E +
06 2.05E +
08 6.49E +
06 3.17E +
06 3.67E + +
08 2.90E +
07 4.92E-07 1.35E +
06 2.53E +
08 1.68E +
06 1.28E +
06 1.85E + +
08 9.84E +
06 2.36E +
09 3.56E +
05 3.31E +
07 2.68E +
06 1.21E +
06 1.11E + +
08 1.49E +
08 2.36E +
09 5.79E +
06 1.40E +
09 1.02E +
07 6.67E +
06 8.87E + +
10 1.12E +
04 9.65E +
10 9.99E +
03 1.32E +
10 3.28E +
02 5.97E +
02 2.64E + +
09 8.97E +
03 3.33E-05 1.16E +
04 1.09E +
10 4.64E +
02 3.21E +
03 6.55E + +
10 2.09E +
02 9.65E +
10 2.15E +
01 5.28E +
08 1.29E-01 0.00E +
00 1.48E + +
10 4.04E +
04 9.65E +
10 3.43E +
04 4.98E +
10 1.58E +
03 1.79E +
04 3.95E + +
05 7.61E +
04 2.30E +
07 1.76E +
04 7.29E +
04 2.48E +
02 0.00E +
00 7.97E + +
04 8.26E +
03 7.57E-09 1.40E +
04 3.70E +
04 3.32E +
02 0.00E +
00 6.30E + +
04 5.75E +
04 2.30E +
07 1.80E +
02 1.78E +
04 9.16E +
00 0.00E +
00 6.25E + +
05 8.60E +
04 2.30E +
07 5.46E +
04 2.07E +
05 1.76E +
03 0.00E +
00 8.89E + +
03 7.83E +
01 2.39E +
04 7.34E +
01 1.60E +
03 2.56E +
01 6.99E +
01 3.34E + +
03 3.41E +
01 5.01E-12 4.53E +
01 1.16E +
03 2.12E +
00 3.33E +
01 1.25E + +
03 8.65E +
00 2.39E +
04 5.98E-05 2.31E +
02 1.74E +
01 1.76E +
00 1.50E + +
04 1.45E +
02 2.39E +
04 1.49E +
02 5.80E +
03 3.02E +
01 1.37E +
02 6.53E + +
01 2.08E +
01 2.09E +
01 2.00E +
01 2.01E +
01 2.05E +
01 2.00E +
01 2.08E + +
01 2.00E +
01 2.08E +
01 2.00E +
01 2.00E +
01 2.04E +
01 2.00E +
01 2.05E + +
01 2.10E +
01 2.10E +
01 2.00E +
01 2.05E +
01 2.06E +
01 2.04E +
01 2.09E + +
01 2.84E +
01 4.60E +
01 2.03E +
01 2.88E +
01 1.40E +
01 7.67E +
00 3.91E + +
00 1.05E +
01 8.30E-02 3.18E +
00 3.25E +
00 1.42E +
00 5.76E +
00 3.21E + +
01 1.51E +
00 4.59E +
01 1.40E +
01 2.40E +
01 1.14E +
01 0.00E +
00 2.91E + +
01 4.42E +
01 4.62E +
01 2.59E +
01 3.66E +
01 1.66E +
01 1.87E +
01 4.51E + +
02 1.49E-02 9.89E +
02 3.33E-02 1.83E +
02 1.18E-05 1.39E-03 1.69E + +
01 2.09E-02 3.02E-13 2.82E-02 9.56E +
01 1.14E-05 6.27E-03 5.68E + +
02 1.64E-04 9.89E +
02 1.93E-12 2.89E +
01 3.80E-07 0.00E +
00 8.95E + +
02 8.44E-02 9.89E +
02 1.01E-01 4.01E +
02 5.21E-05 3.44E-02 2.75E + +
02 1.67E +
02 4.02E +
02 3.38E +
01 1.56E +
02 1.03E-10 6.57E +
00 2.47E + +
01 2.82E +
00 1.41E-13 9.00E +
00 3.95E +
01 7.87E-12 2.31E +
00 2.75E + +
02 1.62E +
02 4.02E +
02 1.99E +
01 7.27E +
01 8.95E-11 2.98E +
00 2.03E + +
02 1.73E +
02 4.02E +
02 5.97E +
01 2.68E +
02 1.18E-10 1.29E +
01 3.17E + able 5: Results of CEC Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOATime Rank 4 7 8 6 2 +
02 1.86E +
02 3.81E +
02 1.18E +
02 1.85E +
02 5.60E +
01 5.15E +
01 3.00E + +
01 3.57E +
00 1.35E-01 3.13E +
01 4.15E +
01 6.30E +
00 2.02E +
01 2.81E + +
02 1.84E +
02 3.81E +
02 7.16E +
01 1.15E +
02 4.27E +
01 2.39E +
01 2.48E + +
02 1.96E +
02 3.81E +
02 1.97E +
02 2.53E +
02 6.76E +
01 1.08E +
02 3.47E + +
03 4.25E +
03 8.05E +
03 4.53E +
02 4.17E +
03 8.32E +
01 7.70E +
01 5.58E + +
02 2.28E +
02 1.15E-12 2.65E +
02 8.11E +
02 1.67E +
01 7.60E +
01 5.71E + +
03 3.78E +
03 8.05E +
03 9.76E +
00 2.20E +
03 4.45E +
01 2.51E +
00 4.38E + +
03 4.74E +
03 8.05E +
03 1.03E +
03 5.85E +
03 1.14E +
02 2.59E +
02 6.80E + +
03 4.33E +
03 9.17E +
03 3.08E +
03 4.58E +
03 2.45E +
03 3.85E +
03 6.40E + +
02 2.03E +
02 1.91E-12 6.08E +
02 7.71E +
02 2.52E +
02 7.35E +
02 5.58E + +
03 3.69E +
03 9.17E +
03 1.89E +
03 2.97E +
03 1.79E +
03 1.86E +
03 5.31E + +
03 4.62E +
03 9.17E +
03 4.82E +
03 5.95E +
03 2.87E +
03 5.09E +
03 7.77E + + +
00 6.91E-01 1.41E +
00 2.70E + +
00 4.73E-01 1.06E +
01 5.17E-01 3.63E +
00 2.82E-01 2.20E-01 4.36E + +
00 3.08E-02 5.33E-02 7.00E-01Best 5.16E +
00 3.19E-01 1.06E +
01 2.65E-01 3.78E-01 2.29E-01 9.46E-02 2.92E + +
00 5.63E-01 1.06E +
01 7.41E-01 7.30E +
00 3.52E-01 3.29E-01 5.61E + +
02 2.72E-01 3.76E +
02 3.18E-01 5.76E +
01 2.62E-01 2.40E-01 9.28E + +
01 3.15E-02 1.34E-13 6.34E-02 3.15E +
01 3.03E-02 2.89E-02 2.99E + +
01 2.15E-01 3.76E +
02 2.03E-01 1.23E +
00 1.98E-01 1.44E-01 3.74E + +
02 3.32E-01 3.76E +
02 4.46E-01 1.14E +
02 3.09E-01 2.94E-01 1.45E + +
05 1.89E +
01 5.86E +
05 2.96E +
01 6.99E +
04 7.31E +
00 5.93E +
00 1.16E + +
05 1.05E +
00 5.48E-01 1.13E +
01 1.28E +
05 7.70E-01 1.54E +
00 6.90E + +
04 1.71E +
01 5.86E +
05 1.27E +
01 1.29E +
02 5.35E +
00 3.31E +
00 2.58E + +
06 2.08E +
01 5.86E +
05 6.30E +
01 6.04E +
05 8.72E +
00 9.90E +
00 3.08E + +
01 1.30E +
01 1.38E +
01 1.15E +
01 1.23E +
01 1.10E +
01 1.09E +
01 1.25E + +
01 1.27E +
01 1.37E +
01 1.05E +
01 1.15E +
01 1.01E +
01 8.79E +
00 1.12E + +
01 1.32E +
01 1.38E +
01 1.25E +
01 1.30E +
01 1.15E +
01 1.20E +
01 1.32E + able 5: Results of CEC Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOAFitness Rank 6 7 8 3 4 2 +
06 3.96E +
06 8.83E +
08 1.02E +
06 3.75E +
06 4.59E +
05 1.85E +
05 3.22E + +
06 2.60E +
06 2.16E-01 6.86E +
05 3.17E +
06 1.90E +
05 1.07E +
05 2.98E + +
05 5.28E +
05 8.83E +
08 2.54E +
05 3.43E +
05 4.93E +
04 5.94E +
04 2.37E + +
07 1.08E +
07 8.83E +
08 2.75E +
06 1.27E +
07 8.58E +
05 4.34E +
05 1.07E + +
08 2.75E +
07 1.47E +
10 3.37E +
03 2.20E +
08 1.28E +
02 3.07E +
02 2.27E + +
08 2.68E +
07 2.23E-06 4.25E +
03 3.42E +
08 2.49E +
01 2.45E +
02 3.55E + +
02 2.20E +
06 1.47E +
10 6.90E +
01 6.36E +
02 8.66E +
01 8.54E +
01 3.89E + +
09 1.29E +
08 1.47E +
10 2.10E +
04 1.57E +
09 1.97E +
02 1.06E +
03 1.29E + +
02 2.36E +
01 8.10E +
02 1.64E +
01 1.32E +
02 8.85E +
00 5.66E +
00 2.30E + +
01 8.62E +
00 7.92E-02 1.78E +
01 7.27E +
01 1.61E +
00 1.00E +
00 9.71E + +
02 1.23E +
01 8.10E +
02 7.51E +
00 1.88E +
01 6.01E +
00 3.83E +
00 5.72E + +
02 5.78E +
01 8.11E +
02 7.02E +
01 3.39E +
02 1.18E +
01 8.21E +
00 4.61E + +
04 8.04E +
04 2.50E +
09 3.82E +
04 3.43E +
04 3.18E +
03 1.62E +
02 1.07E + +
04 4.02E +
04 5.18E-01 1.51E +
04 3.78E +
04 2.19E +
03 8.41E +
01 8.41E + +
04 1.57E +
04 2.50E +
09 8.10E +
03 3.49E +
03 6.29E +
02 4.08E +
01 4.92E + +
04 2.05E +
05 2.50E +
09 6.54E +
04 1.95E +
05 8.82E +
03 3.57E +
02 3.80E + +
06 1.14E +
06 2.26E +
09 5.19E +
05 1.61E +
06 7.07E +
04 4.41E +
04 1.09E + +
06 6.04E +
05 7.34E-07 4.31E +
05 3.91E +
06 3.97E +
04 3.22E +
04 8.53E + +
05 1.72E +
05 2.26E +
09 2.22E +
04 1.68E +
04 2.10E +
04 6.85E +
03 2.53E + +
06 2.76E +
06 2.26E +
09 1.59E +
06 2.23E +
07 2.19E +
05 1.24E +
05 3.27E + +
02 6.79E +
02 4.71E +
06 7.85E +
02 8.53E +
02 2.07E +
02 1.86E +
02 1.15E + +
02 1.53E +
02 3.30E-01 2.47E +
02 2.42E +
02 6.01E +
01 8.46E +
01 4.03E + +
02 3.59E +
02 4.71E +
06 1.90E +
02 4.52E +
02 8.41E +
01 3.10E +
01 5.50E + +
03 9.29E +
02 4.71E +
06 1.21E +
03 1.33E +
03 3.48E +
02 3.08E +
02 2.54E + +
02 2.00E +
02 2.00E +
02 3.15E +
02 4.05E +
02 3.14E +
02 2.00E +
02 2.00E + +
01 9.25E-14 0.00E +
00 1.32E-03 5.83E +
01 1.58E-05 0.00E +
00 0.00E + +
02 2.00E +
02 2.00E +
02 3.15E +
02 3.40E +
02 3.14E +
02 2.00E +
02 2.00E + +
02 2.00E +
02 2.00E +
02 3.15E +
02 5.75E +
02 3.14E +
02 2.00E +
02 2.00E + Time Rank 3 8 7 6 +
02 2.02E +
02 2.00E +
02 2.43E +
02 3.05E +
02 2.25E +
02 2.00E +
02 2.00E + +
01 5.15E-01 2.77E-02 7.36E +
00 2.73E +
01 4.38E-01 0.00E +
00 5.11E-03Best 3.35E +
02 2.01E +
02 2.00E +
02 2.29E +
02 2.55E +
02 2.24E +
02 2.00E +
02 2.00E + +
02 2.03E +
02 2.00E +
02 2.59E +
02 3.61E +
02 2.26E +
02 2.00E +
02 2.00E + able 5: Results of CEC Benchmarking FunctionsFunction Metric ABC CMA-ES DE GA PSO SSA VSO WOATime(s) 15.21 70.55 32.01 27.17 6.23 8.63 14.25 17.63Fitness Rank 8 4 3 6 7 5 +
02 2.00E +
02 2.00E +
02 2.14E +
02 2.32E +
02 2.02E +
02 2.00E +
02 2.00E + +
00 8.53E-14 1.83E-13 8.31E +
00 1.46E +
01 3.83E-01 0.00E +
00 0.00E + +
02 2.00E +
02 2.00E +
02 2.04E +
02 2.16E +
02 2.01E +
02 2.00E +
02 2.00E + +
02 2.00E +
02 2.00E +
02 2.31E +
02 2.73E +
02 2.03E +
02 2.00E +
02 2.00E + Time Rank 3 8 7 6 +
02 1.01E +
02 2.00E +
02 1.30E +
02 1.13E +
02 1.00E +
02 1.00E +
02 1.53E + +
00 8.30E-14 4.57E +
01 5.15E +
01 4.38E-02 7.81E-02 4.72E + +
02 1.00E +
02 2.00E +
02 1.00E +
02 1.01E +
02 1.00E +
02 1.00E +
02 1.04E + +
02 1.08E +
02 2.00E +
02 2.00E +
02 3.90E +
02 1.01E +
02 1.01E +
02 2.00E + +
02 2.00E +
02 2.00E +
02 9.10E +
02 1.10E +
03 4.17E +
02 2.00E +
02 3.82E + +
01 6.29E-13 2.57E-13 6.43E +
01 2.54E +
02 4.90E +
00 0.00E +
00 4.36E + +
02 2.00E +
02 2.00E +
02 8.04E +
02 4.24E +
02 4.08E +
02 2.00E +
02 2.00E + +
02 2.00E +
02 2.00E +
02 1.02E +
03 1.37E +
03 4.26E +
02 2.00E +
02 1.65E + +
03 2.00E +
02 2.00E +
02 1.65E +
03 2.11E +
03 4.13E +
02 2.00E +
02 4.12E + +
02 7.13E-13 1.71E-13 4.25E +
02 5.66E +
02 4.85E +
00 0.00E +
00 8.35E + +
03 2.00E +
02 2.00E +
02 1.08E +
03 1.24E +
03 4.02E +
02 2.00E +
02 2.00E + +
03 2.00E +
02 2.00E +
02 2.48E +
03 3.43E +
03 4.24E +
02 2.00E +
02 4.38E + +
07 2.00E +
02 2.00E +
02 2.81E +
05 1.83E +
07 2.09E +
02 3.03E +
02 2.88E + +
07 8.53E-14 2.25E-13 1.51E +
06 1.32E +
07 9.25E-01 3.89E +
02 3.53E + +
03 2.00E +
02 2.00E +
02 7.99E +
02 2.68E +
04 2.07E +
02 2.00E +
02 2.00E + +
08 2.00E +
02 2.00E +
02 8.40E +
06 6.13E +
07 2.11E +
02 1.88E +
03 1.15E + +
05 2.00E +
02 2.00E +
02 3.30E +
03 1.67E +
05 3.33E +
02 2.49E +
02 7.25E + +
05 3.91E-04 2.16E-04 9.40E +
02 1.27E +
05 5.25E +
01 2.63E +
02 6.08E + +
05 2.00E +
02 2.00E +
02 1.66E +
03 1.05E +
04 2.66E +
02 2.00E +
02 2.00E + +
05 2.00E +
02 2.00E +
02 5.20E +
03 4.82E +
05 4.51E +
02 1.66E +
03 2.55E + able 6: Summary of Evaluations for CEC Benchmarking FunctionsAlgorithm Avg FitnessRank Avg TimeRank Overall FitnessRank Overall TimeRankABC 6.07 3.23 7 3CMA-ES 3.93 7.93 4 8DE 6.60 6.73 8 7GA 3.83 5.77 3 6PSO 5.37 1.20 5 SSA 2.63 2.03 2 2VSO 1.67 4.03 In addition to the solution quality, we are also interested in the rate of convergence. Therefore,the convergence test on those classical benchmark functions is conducted.Figure 5 displays the convergence results based on the median fitness of all trials. The resultsare given as below. • VSO generally converges faster than other algorithms and hence possesses superior con-vergence capability for such optimization problems; • For the uni-modal functions, it seems that almost all algorithms can quickly converge.This is because most algorithms can achieve small errors as stated in Table 3. However,only VSO achieves the exact global optima for these functions which has been discussedin Section 3.1; • With the more complicated multi-modal functions F F
16, it is obvious that the VSOperforms very well with respect to the rate of convergence. The convergence rates of someother algorithms decrease, such as GA, DE and SSA on F
9; DE, ABC and SSA on F10;and CMA-ES and DE on F
16. WOA has a fast rate of convergence as well.
Figure 6 plots a series of box plots through all runs for the classical benchmark functions foreach algorithm. From the obtained results, the following observations can be drawn. • For the uni-modal functions, the reliability of VSO is impressive over other algorithms.For example, the performances of both ABC and PSO are quite unstable; • For the multi-modal functions, VSO can constantly generate stable results. The only ex-ception is F
12 where DE achieves the best reliability but it fails to acquire a good solution.The reliability of WOA is followed by VSO. But it becomes much worse in F .5. Scalability Test In addition to the above low dimensional benchmark functions, a series of evaluations areperformed on the medium and high dimensional classical benchmark problems, including 100,300, 500 and 1000 dimensions to test the scalability of VSO. To make a thorough comparison,we also employ other algorithms in this evaluation. As aforementioned, the parameter settingsof each algorithm are the same as the test on the 30- D problems.From the results listed in Table 7 we have the following observations: • VSO achieves the best performance in almost all functions with all dimensions except for F
12 and F
14 with 100 dimensions. In other words, the VSO algorithm ranks first on 59out of the total 64 cases ( ≈ F F F F F • More specifically, VSO shows the advantage of the computational time for some 1000- D high dimensional problems. For instance, the ranking of VSO for the computational timegoes up to the second or even the first place in F F F F F F • As for other algorithms, the solution qualities drop down with increasing dimensions. Forinstance, for CMA-ES, the mean of fitness of 30-
D F . E −
284 as reported in Table 3.Nevertheless, the values are 1 . E +
01, 2 . E +
04, 1 . E +
05, and 6 . E +
05 for 100,300, 500, and 1000- D problems, respectively. Likewise, the mean value of SSA for F D is 2 . E −
13 while the values become 1 . E +
02, 1 . E +
03, 2 . E +
03 and6 . E +
03 for 100, 300, 500, and 1000- D problems, accordingly; • The only competitor is WOA probably due to its sophisticated design of the searchingstrategies as inspired by searching for prey and attacking the prey of the whales [18].Table 8 summaries the results of the scalability test.25 igure 5: Convergence Test of Results of Classic Benchmarking Functions igure 6: Box Plot of Results of Classic Benchmarking Functions able 7: Results of Scalability TestFunction Dimension Metric ABC CMA-ES DE GA PSO SSA VSO WOAF1 100 Mean 1.99E +
07 2.17E-145 2.85E-05 7.78E +
01 7.12E +
05 8.46E-21 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
07 1.59E-61 5.41E +
00 6.08E +
05 3.09E +
06 1.15E-05 0.00E +
00 0.00E + Time Rank +
08 1.16E-36 5.54E +
01 4.89E +
06 5.06E +
06 6.52E-01 0.00E +
00 0.00E + Time Rank +
08 5.65E-16 4.54E +
02 3.14E +
07 1.04E +
07 9.98E +
03 0.00E +
00 0.00E + Time Rank +
45 5.58E-148 2.03E-09 8.44E-04 8.66E +
02 1.22E-24 0.00E +
00 0.00E + Time Rank 7 8 6 +
130 2.26E-62 1.36E +
01 6.32E +
00 2.02E +
04 9.86E-07 0.00E +
00 0.00E + Time Rank 2 8 7 6 3 +
164 2.58E-35 1.44E +
02 5.79E +
01 8.83E +
05 2.84E +
16 0.00E +
00 0.00E + Time Rank 2 8 6 +
15 3.98E-10 1.59E +
03 6.44E +
02 2.39E +
09 1.08E +
18 0.00E +
00 0.00E + Time Rank 7 8 6 +
04 1.74E +
01 7.32E-08 2.24E +
00 2.21E +
05 1.19E-21 0.00E +
00 0.00E + Time Rank +
07 2.34E +
04 1.56E +
03 2.47E +
04 2.78E +
07 1.13E-06 0.00E +
00 0.00E + Time Rank +
07 1.14E +
05 3.84E +
04 3.65E +
05 6.18E +
06 5.65E-02 0.00E +
00 0.00E + Time Rank 5 8 7 6 4 2 3 +
08 6.38E +
05 1.50E +
06 6.30E +
06 1.66E +
07 6.86E +
02 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
01 2.63E-148 9.22E-09 7.68E-01 2.54E +
01 1.11E-22 0.00E +
00 2.34E-234Time(s) 10.82 1086.66 115.86 77.10 6.51 8.06 12.09 69.00Fitness Rank 8 3 5 6 7 4 +
01 2.29E-64 6.85E +
00 6.28E +
03 3.08E +
01 1.16E-07 0.00E +
00 8.95E-226Time(s) 13.83 2197.76 673.23 313.06 7.14 17.55 18.97 193.64Fitness Rank 7 3 5 8 6 4 able 7: Results of Scalability TestFunction Dimension Metric ABC CMA-ES DE GA PSO SSA VSO WOATime Rank 2 8 7 6 +
01 1.84E-39 5.52E +
01 4.89E +
04 3.15E +
01 6.72E-03 0.00E +
00 2.38E-214Time(s) 8.68 2924.74 1086.18 593.71 10.21 24.05 47.52 317.68Fitness Rank 7 3 6 8 5 4 +
01 7.61E-19 4.66E +
02 3.15E +
05 3.54E +
01 9.90E +
01 0.00E +
00 1.16E-220Time(s) 10.37 6599.45 2189.51 784.09 23.12 51.56 79.31 320.46Fitness Rank 5 3 7 8 4 6 +
02 3.03E-128 6.43E-08 8.69E-02 8.16E +
02 1.38E-23 0.00E +
00 0.00E + Time Rank 2 8 7 6 +
05 1.26E-20 3.80E +
02 1.90E +
03 1.17E +
04 3.69E-08 0.00E +
00 0.00E + Time Rank 5 8 7 +
05 8.65E-04 1.03E +
04 2.58E +
04 3.26E +
04 3.52E-03 0.00E +
00 0.00E + Time Rank 5 8 7 6 +
06 1.97E +
01 1.82E +
05 3.62E +
05 1.38E +
05 1.16E +
02 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
09 8.88E-16 2.95E-07 2.41E-106 0.00E +
00 0.00E + Time Rank 5 8 7 4 +
59 8.88E-16 8.82E-06 6.75E-79 0.00E +
00 0.00E + Time Rank 6 8 7 4 +
137 8.88E-16 2.23E-08 1.01E-72 0.00E +
00 0.00E + Time Rank 5 8 7 6 4 3 +
290 inf 8.88E-16 4.97E-08 6.01E-57 0.00E +
00 0.00E + Time Rank 2 8 7 6 3 4 +
03 6.73E-64 3.40E-01 6.64E +
02 2.74E +
03 8.36E +
02 0.00E +
00 6.07E + +
16 3.47E +
04 7.63E +
13 4.19E +
03 3.00E +
04 7.25E +
03 0.00E +
00 4.36E + +
19 5.79E +
04 1.62E +
17 7.33E +
03 8.29E +
04 1.23E +
04 0.00E +
00 7.22E + +
22 1.03E +
05 4.55E +
20 1.51E +
04 2.94E +
05 3.14E +
04 0.00E +
00 1.56E + able 7: Results of Scalability TestFunction Dimension Metric ABC CMA-ES DE GA PSO SSA VSO WOAFitness Rank 8 5 7 2 6 4 +
05 3.00E-66 3.19E-02 1.19E +
04 2.28E +
05 8.36E +
04 0.00E +
00 1.52E-25Time(s) 1930.00 5669.32 1553.35 875.01 1283.42 1367.57 1212.83 1374.31Fitness Rank 8 2 4 5 7 6 +
06 1.67E +
05 2.38E +
05 3.98E +
05 1.76E +
06 1.22E +
06 0.00E +
00 2.80E-05Time(s) 11839.67 17670.98 8880.38 4825.99 4783.50 8349.41 7074.65 6134.58Fitness Rank 8 3 4 5 7 6 +
06 1.75E +
07 4.96E +
06 1.67E +
06 4.73E +
06 3.90E +
06 0.00E +
00 4.32E-07Time(s) 18368.06 20186.46 15586.21 21407.55 21072.82 19527.32 17817.33 14203.77Fitness Rank 7 8 6 3 5 4 +
07 9.30E +
07 1.29E +
08 8.47E +
06 1.77E +
07 1.57E +
07 0.00E +
00 3.77E-12Time(s) 76351.35 50589.75 66361.20 45854.39 51756.09 43272.60 56929.47 45467.26Fitness Rank 6 7 8 3 5 4 +
02 2.64E +
02 2.96E +
02 1.64E +
02 6.02E +
02 1.77E +
02 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
03 1.14E +
03 1.45E +
03 7.53E +
02 2.44E +
03 1.42E +
03 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
03 2.26E +
03 1.48E +
03 1.63E +
03 4.42E +
03 2.95E +
03 0.00E +
00 0.00E + Time Rank +
04 5.43E +
03 3.37E +
03 5.91E +
03 9.38E +
03 6.98E +
03 0.00E +
00 0.00E + Time Rank 2 8 7 6 +
01 2.00E +
01 1.28E +
00 1.43E-01 1.16E +
01 2.87E-13 4.44E-16 4.44E-16Time(s) 28.51 2127.15 228.14 92.33 13.32 21.11 37.82 83.52Fitness Rank 7 8 5 4 6 3
Time Rank 3 8 7 6 +
01 2.00E +
01 3.53E +
00 6.66E +
00 1.22E +
01 1.72E-05 4.44E-16 4.44E-16Time(s) 35.79 1986.00 642.70 355.50 25.24 33.66 29.50 151.63Fitness Rank 7 8 4 5 6 3
Time Rank 4 8 7 6 +
01 2.03E +
01 3.83E +
00 1.10E +
01 1.18E +
01 1.15E +
00 4.44E-16 4.44E-16Time(s) 27.35 6705.36 1111.68 592.61 33.01 46.70 37.97 328.41Fitness Rank 7 8 4 5 6 3
Time Rank +
01 2.09E +
01 4.33E +
00 1.52E +
01 1.19E +
01 3.20E +
00 4.44E-16 4.44E-16Time(s) 59.59 8052.04 2180.74 792.46 52.64 78.86 98.00 319.91Fitness Rank 7 8 4 6 5 3
Time Rank 2 8 7 6 +
01 2.22E-17 8.21E-12 6.70E-02 2.78E +
00 1.12E-13 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
02 4.66E-16 1.56E-02 2.55E +
00 8.73E +
00 2.47E-04 0.00E +
00 0.00E + able 7: Results of Scalability TestFunction Dimension Metric ABC CMA-ES DE GA PSO SSA VSO WOATime(s) 69.70 3351.32 709.85 366.91 47.71 56.63 72.30 230.70Fitness Rank 8 3 5 6 7 4 Time Rank 3 8 7 6 +
02 1.38E-15 1.70E-01 1.32E +
01 1.37E +
01 7.68E-05 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
02 1.45E-08 7.01E-01 7.93E +
01 2.70E +
01 1.69E-01 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
03 -2.82E +
03 -2.50E +
03 -3.92E +
03 -2.70E +
03 -1.55E +
03 -3.81E +
03 -3.92E + Time Rank 6 8 2 +
03 -8.12E +
03 -7.51E +
03 -1.16E +
04 -5.97E +
03 -4.24E +
03 -1.09E +
04 -1.17E + Time Rank 6 8 7 5 +
04 -1.35E +
04 -1.25E +
04 -1.87E +
04 -9.18E +
03 -6.75E +
03 -1.77E +
04 -1.96E + Time Rank 4 8 7 5 2 6 3 +
04 -2.74E +
04 -2.46E +
04 -3.34E +
04 -1.58E +
04 -1.18E +
04 -3.45E +
04 -3.92E + Time Rank 6 8 5 7 4 3 2 F13 100 Mean 2.87E-10 1.18E-293 3.41E-06 1.55E-13 7.17E-02 5.96E-45 0.00E +
00 0.00E + Time Rank +
00 1.16E-123 1.03E +
00 2.24E-04 6.19E +
00 1.55E-07 0.00E +
00 0.00E + Time Rank 3 8 7 6 +
01 1.13E-76 1.02E +
02 4.55E-02 1.55E +
01 3.89E-06 0.00E +
00 0.00E + Time Rank 2 8 7 6 +
01 6.42E-40 2.91E +
03 2.79E +
00 5.49E +
01 3.41E-05 0.00E +
00 0.00E + Time Rank 2 8 7 6 able 7: Results of Scalability TestFunction Dimension Metric ABC CMA-ES DE GA PSO SSA VSO WOA1000 Mean 1.16E-213 8.41E-68 0.00E +
00 0.00E +
00 0.00E +
00 6.75E-130 0.00E +
00 0.00E + Time Rank 6 8 7 3 2 +
01 2.13E-02 0.00E +
00 3.23E-08Time(s) 208.79 1565.64 465.04 217.85 264.19 228.16 364.99 184.12Fitness Rank 7 4 3 6 8 5
300 Mean 2.47E +
02 1.64E +
01 2.04E +
02 5.27E +
01 3.88E +
02 1.07E-02 0.00E +
00 0.00E + Time Rank 6 8 7 5 3 +
02 4.74E +
01 4.40E +
02 1.77E +
02 3.38E +
02 1.81E-02 0.00E +
00 0.00E + Time Rank 5 8 7 6 +
03 1.41E +
02 1.02E +
03 7.08E +
02 6.58E +
02 1.10E +
00 0.00E +
00 0.00E + Time Rank 5 8 7 6 2 +
01 -5.34E +
01 -4.02E +
01 -8.57E +
01 -4.57E +
01 -8.00E +
00 -9.06E +
01 -2.82E +
300 Mean -2.05E +
02 -4.07E +
01 -8.35E +
01 -2.20E +
02 -8.53E +
01 -1.30E +
01 -2.41E +
02 -7.06E + +
02 -5.12E +
01 -1.28E +
02 -2.82E +
02 -1.29E +
02 -1.77E +
01 -3.64E +
02 -1.11E + +
02 -7.14E +
01 -2.23E +
02 -3.93E +
02 -2.85E +
02 -2.47E +
01 -6.07E +
02 -2.01E + SSA 4.81 2.88 5 2VSO 1.13 3.89 .6. Evaluation on VSO without Imported Infection To investigate the performance of VSO without the imported infection operation, which isan additional function, another evaluation was conducted on the same set of classical and CECbenchmark functions. From Table 9 & 10, we can observe that:
Table 9: Results of Benchmarking Functions by VSO without Imported InfectionFunction Dimension Mean Std Best Worst Time(s)F1 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 10.15F2 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 75.92F3 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 52.46F4 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 10.13F5 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 35.65F6 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 44.40F7 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 40.06F8 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 128.28F9 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 11.86F10 30 4.44E-16 0.00E +
00 4.44E-16 4.44E-16 14.87F11 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 14.94F12 30 -1.00E +
03 2.38E +
01 -1.05E +
03 -9.63E +
02 63.79F13 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 13.75F14 30 3.51E-12 4.11E-26 3.51E-12 3.51E-12 58.31F15 30 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 42.11F16 30 -1.93E +
01 2.73E +
00 -2.34E +
01 -1.48E +
01 86.54CEC1 30 1.92E +
06 4.75E +
05 1.13E +
06 2.67E +
06 11.39CEC2 30 1.30E +
04 1.42E +
04 1.45E +
03 3.45E +
04 10.89CEC3 30 5.25E-02 3.27E-02 9.03E-03 1.12E-01 10.93CEC4 30 8.01E +
01 3.71E +
01 4.99E +
00 1.42E +
02 12.02CEC5 30 2.00E +
01 7.14E-02 2.00E +
01 2.02E +
01 11.74CEC6 30 3.05E +
01 4.08E +
00 1.95E +
01 3.45E +
01 45.94CEC7 30 1.52E-02 1.76E-02 1.02E-12 4.92E-02 11.85CEC8 30 1.33E +
02 2.09E +
01 1.00E +
02 1.65E +
02 11.21CEC9 30 1.85E +
02 1.82E +
01 1.64E +
02 2.23E +
02 11.77CEC10 30 3.38E +
03 7.10E +
02 2.40E +
03 4.44E +
03 11.04CEC11 30 4.41E +
03 5.31E +
02 3.85E +
03 5.42E +
03 11.50CEC12 30 1.05E +
00 3.46E-01 4.64E-01 1.67E +
00 20.79CEC13 30 5.18E-01 1.28E-01 2.80E-01 7.47E-01 10.51CEC14 30 2.88E-01 4.45E-02 2.38E-01 3.84E-01 10.51CEC15 30 3.22E +
01 9.39E +
00 1.70E +
01 5.08E +
01 11.10CEC16 30 1.22E +
01 4.07E-01 1.14E +
01 1.29E +
01 11.16CEC17 30 9.86E +
04 6.53E +
04 1.58E +
04 2.45E +
05 12.53CEC18 30 8.46E +
03 1.06E +
04 3.21E +
02 2.62E +
04 12.00CEC19 30 1.68E +
01 1.34E +
00 1.37E +
01 1.91E +
01 18.34CEC20 30 3.48E +
02 7.91E +
01 1.92E +
02 4.86E +
02 11.96CEC21 30 4.98E +
04 3.32E +
04 7.41E +
03 1.06E +
05 11.40CEC22 30 6.05E +
02 1.20E +
02 4.38E +
02 7.86E +
02 12.42CEC23 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 16.59CEC24 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 14.84CEC25 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 15.40CEC26 30 1.00E +
02 1.64E-01 1.00E +
02 1.01E +
02 57.77CEC27 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 58.32CEC28 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 19.64CEC29 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 22.62CEC30 30 2.00E +
02 0.00E +
00 2.00E +
02 2.00E +
02 16.57 Through a comparison of Table 3 and 9, the mean values of fitness by VSO without anyimported infection operation are same as VSO with DE except for F
12 and F • As for the complicated CEC benchmark functions, the performances of two approachesare same for 5 cases, i.e.
CEC
CEC
CEC
CEC
27, and
CEC
28. VSO withoutDE even achieves better in the cases of
CEC CEC
CEC
29, and
CEC
30. For theremaining 21 functions, the VSO algorithm with the imported infection powered by DE isreadily better; • In terms of the computational time, the average time of running all 46 classical and CEC30- D functions for VSO with and without DE are 50 . s and 26 . s , respectively. Thismeans the introduction of such imported infection operation almost doubles the computa-tional time; • Taking 100- D classical benchmark functions as examples, the scalability of VSO withoutthe imported infection mechanism is also impressive.On the other hand, a more thorough investigation should be conducted in the future workon what specific condition(s) and how this additional operator can actually help to enhance thesearch performance of VSO in handling various complex real-world applications. Furthermore,other meta-heuristic algorithms can be studied as the algorithm in the imported infection opera-tion. Table 10: Results of Scalability Test by VSO without Imported InfectionFunction Dimension Mean Std Best Worst Time(s)F01 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 13.04F02 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 240.49F03 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 157.39F04 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 12.95F05 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 101.21F06 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 129.01F07 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 111.41F08 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 634.53F09 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 15.03F10 100 4.44E-16 0.00E +
00 4.44E-16 4.44E-16 18.14F11 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 21.25F12 100 -3.28E +
03 4.98E +
01 -3.39E +
03 -3.22E +
03 196.11F13 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 20.28F14 100 4.66E-42 1.41E-54 4.66E-42 4.66E-42 176.46F15 100 0.00E +
00 0.00E +
00 0.00E +
00 0.00E +
00 119.50F16 100 -4.71E +
01 5.59E +
00 -5.96E +
01 -4.09E +
01 269.93
4. Real-world Application I: Financial Portfolio Optimization
Portfolio optimization is one of the most important problems in finance. Investors usuallywant to maximize returns and minimize risks through allocating a fixed amount of capital into acollection of assets. 34ccording to the mean-variance model, which is a well-known and widely-used portfolio op-timization theory formulated by Markowitz [41], the variance is a risk measure. The optimizationproblem is presented in (14)-(15) as below. max E ( R ( x )) = n (cid:88) i = x i u i (14) min V ( R ( x )) = n (cid:88) i = n (cid:88) j = x i x j σ i j (15) sub ject to : x ∈ X = { x ∈ R | n (cid:88) i = x i = , x i ≥ } where x i is the proportion weight of the initial capital that will be allocated in the i th asset, u i isthe return of the i th asset, σ i j stand for the covariance of returns of the i th and j th assets, E ( R ( x ))and V ( R ( x )) are the expected return and variance of the whole portfolio, respectively.To optimize above two objectives simultaneously, we combine them into one single objectivefunction as shown in (16). max S R = E ( R ( x )) − R f V ( R ( x )) (16)where SR is called the sharpe ratio that represents the return and risk simultaneously of theportfolio, R f is a risk-free rate.Also, the sharpe ratio has been one of the most important measurement tools to evaluate theperformance of investment portfolio in the real-world financial industry.Since the VSO and other comparative algorithms are designated for solving minimizationproblems, the problem should be changed to the minimization problem as given in (17). min f itness = S R (17) sub ject to : S R = − i f S R ≤ − is assigned to the SR for this case.In order to avoid handling the equality constraint, the solution can be converted to the uncon-strained form as shown in 18. x (cid:48) i = x i (cid:80) ni = | x i | (18) Considering that the U.S. stock market is the biggest developed market and the Chinesestock market is the biggest emerging market all over the world, we select these two markets asour experimental targets. For the U.S. market, S & P
500 represents 500 large companies listed onstock exchanges in the U.S. Likewise,
CS I
300 constituent stocks are the top 300 stocks traded onthe Shanghai Stock Exchange and the Shenzhen Stock Exchange. As the lists of both S & P CS I
300 were adjusted from time to time, we selected the maximum number up to 250 stocks35n each group according to the order of their stock symbols to make a fair comparison. The fullstocks list is illustrated in Table C.17.Following the previous practice, the information of mean and covariance is acquired from thehistorical data. In the experiment, we calculated such values through the 5-year historical dailydata of the candidate stocks, i.e. from 1 Jan 2015 to 31 Dec 2019 excluding non-transaction days.More specifically, the average daily return on the historical data is computed as the expectedreturn of each stock.We also tried di ff erent number of stocks, i.e. 10, 30, 100 and 250, to further investigate thescalability of the algorithm on this practical application. More specially, in addition to longingthe stocks, Additionally, we studied a real-world scenario in which short-selling is allowed, i.e. x i can be negative, which enlarges the searching space of the problem. The US 5- year treasuryyield of 2 .
57% and China 3- year fixed deposit interest rate of 4 .
22% are performed as the risk-free rates for S & P
500 and
CS I due to a large amount of the data. Di ff erent from the benchmark functions tests, the portfolio optimization is a maximizationproblem here. From Table 11-12, we can see that: • For the group of
CS I .
802 in the case of
CS I
300 Long / Short 250 stocks, whichtotally beat other algorithms (the second best 1 . . . • For the group of S & P . / Short 250 stocks; • Above phenomenon may imply that VSO is good at optimizing high-dimensional problemswith large searching spaces;
Table 11: Results of Financial Portfolio OptimizationProblem Metric ABC CMA-ES DE GA PSO SSA VSO WOACSI 300 Mean 1.224294 1.154355 1.224411 1.223784 1.181852 1.224364 1.224416 1.182489Long 10 stocks Std 0.000042 0.017363 0.000005 0.000555 0.021924 0.000017 0.000005 0.067856Best 1.224359 1.170395 1.224418 1.224194 1.214714 1.224398 1.224423 1.224423Worst 1.224248 1.123408 1.224403 1.222685 1.149505 1.224348 1.224409 1.047452Time(s) 7.73 3.92 7.14 6.44 5.06 5.32 7.90 6.66Fitness Rank 4 8 2 5 7 3 able 11: Results of Financial Portfolio OptimizationProblem Metric ABC CMA-ES DE GA PSO SSA VSO WOALong 100 stocks Std 0.013223 0.008872 0.023740 0.017892 0.018854 0.001025 0.011146 0.087356Best 1.650535 1.678654 1.447667 1.760895 1.168363 1.793024 1.790115 1.337276Worst 1.609472 1.654061 1.387057 1.710490 1.121737 1.790113 1.758177 1.110095Time(s) 247.62 558.56 406.05 338.75 189.69 213.90 221.46 310.89Fitness Rank 5 4 6 3 8 / Short 10 stocks Std 0.000210 0.004408 0.000006 0.000824 0.017956 0.000048 0.000001 0.024955Best 1.343532 1.342048 1.224418 1.343786 1.315884 1.344052 1.344058 1.323401Worst 1.342999 1.329250 1.224400 1.341641 1.277836 1.343925 1.344056 1.258641Time(s) 4.27 5.12 7.14 6.45 5.04 5.42 7.92 6.62Fitness Rank 3 5 8 4 7 2 / Short 30 stocks Std 0.000273 0.001680 0.000138 0.003954 0.069596 0.000118 0.000104 0.035088Best 1.717570 1.714529 1.344153 1.714949 1.566683 1.716941 1.718096 1.396849Worst 1.716837 1.710087 1.343752 1.706446 1.368270 1.716580 1.717788 1.299481Time(s) 7.74 22.40 10.66 8.57 2.94 5.38 8.04 7.87Fitness Rank 2 4 7 5 6 3 / Short 100 stocks Std 0.010157 0.009911 0.006376 0.007563 0.143766 0.016540 0.013460 0.049987Best 2.466945 2.568735 1.437844 2.547923 1.713999 2.551748 2.564053 1.394569Worst 2.439633 2.538832 1.421629 2.527592 1.359364 2.504176 2.527447 1.285514Time(s) 220.08 661.09 530.60 350.70 160.47 160.98 278.92 336.33Fitness Rank 5 2 7 3 6 4 able 11: Results of Financial Portfolio OptimizationProblem Metric ABC CMA-ES DE GA PSO SSA VSO WOAS&P 500 Mean 1.796196 1.834040 1.597832 1.904823 1.303445 1.945929 1.933929 1.478434Long 100 stocks Std 0.007650 0.009496 0.010696 0.015495 0.066804 0.001441 0.006118 0.079415Best 1.806243 1.841459 1.608517 1.927645 1.409888 1.947760 1.942193 1.629986Worst 1.783113 1.815415 1.580175 1.879508 1.212952 1.943413 1.923150 1.396049Time(s) 293.12 616.08 442.56 270.22 184.70 157.67 277.37 255.48Fitness Rank 5 4 6 3 8 / Short 10 stocks Std 0.000052 0.002013 0.000017 0.007460 0.022674 0.000023 0.000012 0.033570Best 1.484250 1.482771 1.328504 1.483829 1.459236 1.484319 1.484392 1.472233Worst 1.484091 1.477682 1.328459 1.464090 1.390961 1.484256 1.484357 1.373205Time(s) 7.94 5.84 7.08 6.41 2.93 5.33 4.67 6.58Fitness Rank 3 4 8 5 7 2 / Short 30 stocks Std 0.000472 0.000462 0.000208 0.002906 0.038295 0.000151 0.000222 0.089208Best 1.981584 1.981570 1.483868 1.977541 1.816093 1.981377 1.982493 1.668017Worst 1.980143 1.980188 1.483235 1.969869 1.714468 1.980971 1.981798 1.418432Time(s) 4.24 21.08 10.58 8.91 4.97 5.37 7.96 4.40Fitness Rank 3 4 8 5 6 2 / Short 100 stocks Std 0.014769 0.004338 0.022500 0.011072 0.147366 0.004458 0.007368 0.110773Best 2.886660 3.042640 1.610707 2.990547 2.223260 3.010032 3.010575 1.905505Worst 2.848956 3.031767 1.556008 2.961458 1.761334 2.998492 2.990579 1.585622Time(s) 208.25 635.56 443.09 374.14 186.03 196.92 285.41 259.36Fitness Rank 5 / Short 250 stocks Std 0.028481 0.078095 0.004100 0.080018 0.120503 0.058260 0.015140 0.234846Best 2.897627 3.388885 1.229140 3.634716 1.899798 3.344593 3.642228 1.769273Worst 2.811898 3.164605 1.216557 3.403792 1.558034 3.175441 3.596449 1.102636Time(s) 196.71 15869.66 1105.67 481.04 153.82 225.50 326.26 546.53Fitness Rank 5 3 8 2 6 4
5. Real-world Application II: Optimization of Hyper-parameters of Support Vector Ma-chines
SVMs are widely adopted machine learning algorithms particularly useful for some lim-ited sample datasets within the framework of the statistical learning theory. According to theliterature, SVMs have achieved impressive success in various applications, such as image classi-fication [42], natural language processing [43], and financial prediction [44], etc.38 able 12: Summary of Evaluations for Financial Portfolio OptimizationAlgorithm Avg FitnessRank Avg TimeRank Overall FitnessRank Overall TimeRankABC 3.94 4.06 3 3CMA-ES 4.63 6.56 5 7DE 5.75 6.94 6 8GA 4.13 5.31 4 6PSO 7.06 1.44 8 SSA 2.56 2.50 2 2VSO 1.38 4.38 In practice, the performance of SVMs usually depends on its hyper-parameters. There aretwo major types of algorithms in SVMs: classification and regression. In this experiment, weapply SVMs to classify some real-world practical datasets.The mathematical expressions of SVM is shown as in (19).max α (cid:88) j α j − (cid:88) j , k α j , α k y j y k k ( x j , x k ) sub ject to : 0 ≤ α j ≤ C and (cid:88) j α j y j = C is the tunable penalty factor and K is the kernal function. Due to the outstandingperformance of RBF kernel function, it is used in this test as stated in (20). k ( x j , x k ) = exp (cid:32) − (cid:107) x j − x k (cid:107) σ (cid:33) (20)where σ is another tunable parameter. Using this kernel in the SVM classifier, we can get thedecision function as shown in (21). f ( x ) = sign (cid:34) (cid:88) i α i y i exp (cid:32) − (cid:107) x − x i (cid:107) σ (cid:33) + b (cid:35) (21)In this test, we have to optimize two hyper-parameters: the penalty factor C and σ for classi-fication problems. There are five datasets involved: • Australian Credit Approval: A well-known dataset that concerns credit card applicationsapproval in Australia [45]; • HCC Survival: HCC dataset was obtained at a University Hospital in Portugal and con-tais several demographic, risk factors, laboratory and overall survival features of 165 realpatients diagnosed with HCC [46]; • Iris:This is perhaps the best known dataset to be found in machine learning. It is to classifytype of iris plant [45]; 39
Somerville Happiness Survey: A dataset about life survey [47]. • Wine: This dataset is the results of a chemical analysis of wines grown in the same regionin Italy but derived from three di ff erent cultivars [45];All datasets as listed above are publicly available at [45]. As for the searching space, we set C , σ ∈ [10 − , ]. The accuracy of 10- f old cross validation is computed as the fitness in theevaluation. The maximum iteration is set as 500. The detailed results are illustrated into Table 13 where the mean fitness represents the averageclassification accuracy of 30 runs. The findings are stated as follows. • VSO outperforms over all other algorithms on 4 out of 5 datasets. For example, VSOachieves 83 .
83% of mean accuracy for the first dataset; • The performance of CMA-ES is very bad in this test. On the other hand, ABC that per-forms unwell in previous benchmark functions tests becomes not bad here; • Although VSO can get an enhancement of accuracy, it does not show a big advantagecompared with other candidates in this low-dimensional problem optimization.
Table 13: Results of SVMs OptimizationDataset Metric ABC CMA-ES DE GA PSO SSA VSO WOAAustralian Credit Mean 0.8023 0.6058 0.7203 0.7119 0.7470 0.8075 0.8383 0.8203Approval Std 0.0159 0.0000 0.0000 0.0550 0.0628 0.0521 0.0020 0.0062Best 0.8203 0.6058 0.7203 0.8217 0.8246 0.8435 0.8406 0.8261Worst 0.7826 0.6058 0.7203 0.6812 0.6957 0.7043 0.8348 0.8101Time(s) 1136.98 82.23 815.05 789.18 814.23 798.28 989.86 823.65Fitness Rank 4 8 6 7 5 3 able 13: Results of SVMs OptimizationDataset Metric ABC CMA-ES DE GA PSO SSA VSO WOABest 0.9663 0.7157 0.6513 0.9497 0.9663 0.9663 0.9663 0.9663Worst 0.9497 0.7157 0.6513 0.9212 0.9497 0.9497 0.9608 0.9608Time(s) 925.41 108.37 654.10 632.19 649.95 637.86 911.85 626.50Fitness Rank 4 7 8 6 5 2 DE 5.6 5.2 6 6GA 6.6 3.4 7 2PSO 4.0 4.2 5 5SSA 3.2 3.4 3 2VSO 1.2 7.2
6. Conclusion
In summary, a novel and powerful meta-heuristic optimization algorithm called VSO is pro-posed for tackling challenging continuous optimization problems in many real-life applications.Inspired by the spread and behavior of viruses, the algorithm is carefully devised with di ff erentviral operations to diversify the searching strategies in order to highly improve its optimizingcapacity.In this paper, VSO is firstly evaluated on a total of 46 well-known benchmark functionscovering many di ff erent types of optimization problems. The rate of convergence, scalability,and reliability of the algorithm are well-validated on all these benchmark functions. Moreover,VSO is used to solve two real-world applications including the financial portfolio optimizationand optimization of hyper-parameters of SVMs for classification problems. All the obtainedresults are carefully compared and analyzed with those of classical algorithms such as GA, PSO,and DE as well as the state-of-the-art optimization approaches including CMA-ES, WOA, andSSA.The results demonstrate the outstanding performance of our proposed algorithm in terms ofsolution fitness, convergence rate, scalability, reliability, and flexibility. Especially, VSO showsa unique potential for high-dimensional continuous optimization problems. Additionally, thealgorithmic framework is much flexible to provide an interface to hybridization with other algo-rithms.The drawbacks of VSO are summarized as follows. First, the number of algorithmic param-eters is larger than those of the existing popular optimization approaches like GA. Second, theimplementation is a bit complicated. Last but not least, the computational speed is not mostly inthe dominant position due to its multiple searching strategies.41oncerning the future work, how to make the parameters of VSO to be self-adaptive is worthexploring. A more thorough investigation should be conducted on the imported infection op-eration. In addition, the applicability of VSO can be further investigated in various real-worldapplications. Lastly, VSO has a great potential to be extended for solving mixed continuous-discrete as well as multi-objective optimization problems. Appendix A. Classical Benchmarking Functions
Table A.15: Classical Benchmarking FunctionsFunction Name Expression Search Range Global Optimum f ( x ∗ )F1 Sphere f ( x ) = (cid:80) Di = x i [-1000,1000] 0F2 Brown f ( x ) = (cid:80) D − i = ( x i ) ( x i + + + ( x i + ) ( x i + [-1,4] 0F3 Ellipsoid f ( x ) = (cid:80) Di = (cid:20)(cid:18) D − x i (cid:19)(cid:21) [-5.12,5.12] 0F4 Schwefel 2.21 f ( x ) = max i = ,..., D | x i | [-100,100] 0F5 Weighted Sphere f ( x ) = i (cid:80) Di = x i [-5.12,5.12] 0F6 Sum of Di ff erent Powers f ( x ) = (cid:80) Di = | x i | i + [-1,1] 0F7 Zakharov f ( x ) = (cid:80) Di = x i + ( (cid:80) Di = . ix i ) + ( (cid:80) Di = . ix i ) [-5,10] 0F8 Schwefel 1.2 f ( x ) = (cid:80) Di = (cid:16)(cid:80) ij = x j (cid:17) [-100,100] 0F9 Rastrigin f ( x ) = d + (cid:80) Di = ( x i − cos (2 π x i )) [-5.12,5.12] 0F10 Ackley f ( x ) = − exp ( − . (cid:113) D (cid:80) Di = x i ) [-32,32] 0 − exp ( D (cid:80) Di = cos (2 x i )) + + exp (1)F11 Griewank f ( x ) = + (cid:80) Di = x i − (cid:81) Di = cos ( x i √ i ) [-100,100] 0F12 Styblinski-Tank f ( x ) = (cid:80) Di = ( x i − x i + x i ) [-5,5] -39.16599DF13 Csendes f ( x ) = (cid:80) Di = x i (2 + sin x i ) [-1,1] 0F14 Xin-She Yang N.2 f ( x ) = ( (cid:80) Di = | x i | ) exp ( − (cid:80) Di = sin ( x i )) [-2 π ,2 π ] 0F15 Alpine N.1 f ( x ) = (cid:80) i = D | x i sin ( x i ) + . x i | [-10,10] 0F16 Michalewicz f ( x ) = − (cid:80) Di = sin ( x i ) sin (cid:32) ix i π (cid:33) [0, π ] -1.8013 (D = D ∈ { , , , , } ppendix B. CEC Benchmarking Functions Table B.16: CEC Benchmark FunctionsFunction Name Dimension Search Range Global Optimum f ( x ∗ )CEC1 Rotated High Conditioned Elliptic Function 30 [-100,100] 100CEC2 Rotated Bent Cigar Function 30 [-100,100] 200CEC3 Rotated Discus Function 30 [-100,100] 300CEC4 Shifted and Rotated Rosenbrocks Function 30 [-100,100] 400CEC5 Shifted and Rotated Ackleys Function 30 [-100,100] 500CEC6 Shifted and Rotated Weierstrass Function 30 [-100,100] 600CEC7 Shifted and Rotated Griewanks Function 30 [-100,100] 700CEC8 Shifted Rastrigins Function 30 [-100,100] 800CEC9 Shifted and Rotated Rastrigins Function 30 [-100,100] 900CEC10 Shifted Schwefels Function 30 [-100,100] 1000CEC11 Shifted and Rotated Schwefels Function 30 [-100,100] 1100CEC12 Shifted and Rotated Katsuura Function 30 [-100,100] 1200CEC13 Shifted and Rotated HappyCat Function 30 [-100,100] 1300CEC14 Shifted and Rotated HGBat Function 30 [-100,100] 1400CEC15 Shifted and Rotated Expanded Griewanks plus Rosenbrocks Function 30 [-100,100] 1500CEC16 Shifted and Rotated Expanded Sca ff ers F6 Function 30 [-100,100] 1600CEC17 Hybrid Function 1 30 [-100,100] 1700CEC18 Hybrid Function 2 30 [-100,100] 1800CEC19 Hybrid Function 3 30 [-100,100] 1900CEC20 Hybrid Function 4 30 [-100,100] 2000CEC21 Hybrid Function 5 30 [-100,100] 2100CEC22 Hybrid Function 6 30 [-100,100] 2200CEC23 Composition Function 1 30 [-100,100] 2300CEC24 Composition Function 2 30 [-100,100] 2400CEC25 Composition Function 3 30 [-100,100] 2500CEC26 Composition Function 4 30 [-100,100] 2600CEC27 Composition Function 5 30 [-100,100] 2700CEC28 Composition Function 6 30 [-100,100] 2800CEC29 Composition Function 7 30 [-100,100] 2900CEC30 Composition Function 8 30 [-100,100] 3000 ppendix C. Full Stocks List Table C.17: Stocks ListS&P500A ALB ATO C CMG DAL ED FCX GPN HSTAAL ALGN ATVI CAG CMI DD EFX FDX GPS HSYAAP ALK AVB CAH CMS DE EIX FE GRMN HUMAAPL ALL AVGO CAT CNC DFS EL FFIV GS IBMABBV ALLE AVY CB CNP DG EMN FIS GWW ICEABC ALXN AWK CBOE COF DGX EMR FISV HAL IDXXABMD AMAT AXP CBRE COG DHI EOG FITB HAS IEXABT AMCR AZO CCI COO DHR EQIX FLIR HBAN IFFACN AMD BA CCL COP DIS EQR FLS HBI ILMNADBE AME BAC CDNS COST DISCA ES FLT HCA INCYADI AMGN BAX CDW COTY DISCK ESS FMC HD INFOADM AMP BBY CE CPB DISH ETFC FRC HES INTCADP AMT BDX CERN CPRI DLR ETN FRT HFC INTUADS AMZN BEN CF CPRT DLTR ETR FTI HIG IPADSK ANET BIIB CFG CRM DOV EVRG FTNT HII IPGAEE ANSS BK CHD CSCO DRE EW GD HLT IPGPAEP ANTM BKNG CHRW CSX DRI EXC GE HOG IQVAES AON BKR CHTR CTAS DTE EXPD GILD HOLX IRAFL AOS BLK CI CTL DUK EXPE GIS HON IRMAGN APA BLL CINF CTSH DVA EXR GL HP ISRGAIG APD BMY CL CTXS DVN F GLW HPE ITAIV APH BR CLX CVS DXC FANG GM HPQ ITWAIZ APTV BSX CMA CVX EA FAST GOOG HRB IVZAJG ARE BWA CMCSA CXO EBAY FB GOOGL HRL JAKAM ARNC BXP CME D ECL FBHS GPC HSIC JBHTCSI300000001 000651 000983 002352 300144 600066 600352 600637 600900 601377000002 000671 002007 002385 300168 600068 600362 600660 600958 601390000008 000686 002008 002415 300251 600085 600369 600663 600959 601398000009 000709 002024 002424 300315 600089 600372 600674 600999 601555000060 000718 002027 002456 600000 600100 600373 600685 601006 601600000063 000725 002049 002465 600008 600104 600376 600688 601009 601601000069 000728 002065 002466 600009 600109 600383 600690 601018 601607000100 000738 002074 002470 600010 600111 600406 600703 601021 601608000156 000750 002081 002475 600015 600115 600415 600704 601088 601618000157 000768 002131 002500 600016 600118 600436 600705 601099 601628000166 000776 002142 002508 600018 600150 600446 600718 601111 601633000333 000783 002146 002555 600019 600170 600482 600737 601117 601668000338 000792 002152 002594 600021 600177 600489 600739 601118 601669000402 000793 002153 002673 600023 600188 600498 600741 601166 601688000413 000826 002174 002714 600028 600196 600518 600795 601169 601718000423 000839 002183 002736 600029 600208 600519 600804 601186 601766000425 000858 002195 300017 600030 600221 600522 600816 601198 601788000538 000876 002202 300024 600031 600233 600535 600820 601211 601800000555 000895 002230 300027 600036 600256 600547 600827 601216 601818000559 000917 002236 300033 600037 600271 600549 600837 601225 601857000568 000938 002241 300059 600038 600276 600570 600871 601288 601866000623 000959 002292 300070 600048 600297 600583 600886 601318 601872000625 000961 002299 300072 600050 600309 600585 600887 601328 601877000627 000963 002304 300124 600060 600332 600588 600893 601333 601888000630 000977 002310 300133 600061 600340 600606 600895 601336 601899 eferences [1] Steven R Young, Derek C Rose, Thomas P Karnowski, Seung-Hwan Lim, and Robert M Patton. Optimizingdeep learning hyper-parameters through an evolutionary algorithm. In Proceedings of the Workshop on MachineLearning in High-Performance Computing Environments , pages 1–5.[2] Frauke Friedrichs and Christian Igel. Evolutionary tuning of multiple svm parameters.
Neurocomputing , 64:107–117, 2005.[3] Yaz Nalakan and Tolga Ensari. Decision of neural networks hyperparameters with a population-based algorithm.In
International Conference on Machine Learning, Optimization, and Data Science , pages 276–281. Springer.[4] Shaun N Skinner and Hossein Zare-Behtash. State-of-the-art in aerodynamic shape optimisation methods.
AppliedSoft Computing , 62:933–962, 2018.[5] Omid Bozorg-Haddad, Mohammad Solgi, and Hugo A Loiciga.
Meta-heuristic and evolutionary algorithms forengineering optimization . John Wiley & Sons, 2017.[6] Khin T Lwin, Rong Qu, and Bart L MacCarthy. Mean-var portfolio optimization: A nonparametric approach.
European Journal of Operational Research , 260(2):751–766, 2017.[7] Konstantinos Metaxiotis and Konstantinos Liagkouras. Multiobjective evolutionary algorithms for portfolio man-agement: A comprehensive literature review.
Expert Systems with Applications , 39(14):11685–11698, 2012.[8] Victor Pillac, Michel Gendreau, Christelle Guret, and Andrs L Medaglia. A review of dynamic vehicle routingproblems.
European Journal of Operational Research , 225(1):1–11, 2013.[9] Z. Huang, X. Lu, and H. Duan. A task operation model for resource allocation optimization in business processmanagement.
IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans , 42(5):1256–1270, 2012.[10] Xin-She Yang.
Nature-inspired metaheuristic algorithms . Luniver press, 2010.[11] John Henry Holland.
Adaptation in natural and artificial systems: an introductory analysis with applications tobiology, control, and artificial intelligence . MIT press, 1992.[12] Hans-Georg Beyer and Hans-Paul Schwefel. Evolution strategiesa comprehensive introduction.
Natural computing ,1(1):3–52, 2002.[13] Rainer Storn and Kenneth Price. Di ff erential evolutiona simple and e ffi cient heuristic for global optimization overcontinuous spaces. Journal of global optimization , 11(4):341–359, 1997.[14] Russell Eberhart and James Kennedy. A new optimizer using particle swarm theory. In
MHS’95. Proceedings ofthe Sixth International Symposium on Micro Machine and Human Science , pages 39–43. Ieee.[15] Marco Dorigo, Mauro Birattari, and Thomas Stutzle. Ant colony optimization.
IEEE computational intelligencemagazine , 1(4):28–39, 2006.[16] Dervis Karaboga, Beyza Gorkemli, Celal Ozturk, and Nurhan Karaboga. A comprehensive survey: artificial beecolony (abc) algorithm and applications.
Artificial Intelligence Review , 42(1):21–57, 2014.[17] JQ James and Victor OK Li. A social spider algorithm for global optimization.
Applied Soft Computing , 30:614–627, 2015.[18] Seyedali Mirjalili and Andrew Lewis. The whale optimization algorithm.
Advances in engineering software ,95:51–67, 2016.[19] Seyedali Mirjalili, Seyed Mohammad Mirjalili, and Andrew Lewis. Grey wolf optimizer.
Advances in engineeringsoftware , 69:46–61, 2014.[20] Peter JM Van Laarhoven and Emile HL Aarts.
Simulated annealing , pages 7–15. Springer, 1987.[21] Albert YS Lam, Victor OK Li, and JQ James. Real-coded chemical reaction optimization.
IEEE Transactions onEvolutionary Computation , 16(3):339–353, 2011.[22] Zhenglei Wei, Changqiang Huang, Xiaofei Wang, Tong Han, and Yintong Li. Nuclear reaction optimization: Anovel and powerful physics-based algorithm for global optimization.
IEEE Access , 7:66084–66109, 2019.[23] Qingyang Zhang, Ronggui Wang, Juan Yang, Kai Ding, Yongfu Li, and Jiangen Hu. Collective decision optimiza-tion algorithm: a new heuristic optimization method.
Neurocomputing , 221:123–137, 2017.[24] Jinhao Zhang, Mi Xiao, Liang Gao, and Quanke Pan. Queuing search algorithm: A novel metaheuristic algorithmfor solving engineering optimization problems.
Applied Mathematical Modelling , 63:464–490, 2018.[25] David H Wolpert and William G Macready. No free lunch theorems for optimization.
IEEE transactions onevolutionary computation , 1(1):67–82, 1997.[26] Momin Jamil and Xin-She Yang. A literature survey of benchmark functions for global optimization problems.
Int.Journal of Mathematical Modelling and Numerical Optimisation , 4(2):44, 2013.[27] Ali R. Al-Roomi. Unconstrained Single-Objective Benchmark Functions Repository, 2015.[28] Marcin Molga and Czesław Smutnicki. Test functions for optimization needs.
Test functions for optimization needs ,101, 2005.[29] JJ Liang, BY Qu, and PN Suganthan. Problem definitions and evaluation criteria for the cec 2014 special sessionand competition on single objective real-parameter numerical optimization.
Computational Intelligence Labora- ory, Zhengzhou University, Zhengzhou China and Technical Report, Nanyang Technological University, Singa-pore , 635, 2013.[30] Nikolaus Hansen, Sibylle D Mller, and Petros Koumoutsakos. Reducing the time complexity of the derandomizedevolution strategy with covariance matrix adaptation (cma-es). Evolutionary computation , 11(1):1–18, 2003.[31] Mu Dong Li, Hui Zhao, Xing Wei Weng, and Tong Han. A novel nature-inspired algorithm for optimization: Viruscolony search.
Advances in Engineering Software , 92:65–88, 2016.[32] Yun-Chia Liang and Josue Rodolfo Cuevas Juarez. A novel metaheuristic for continuous optimization problems:Virus optimization algorithm.
Engineering Optimization , 48(1):73–93, 2016.[33] Christopher B Brooke. Biological activities of’noninfectious’ influenza a virus particles.
Future virology , 9(1):41–51, 2014.[34] Ming Zeng, Zeping Hu, Xiaolei Shi, Xiaohong Li, Xiaoming Zhan, Xiao-Dong Li, Jianhui Wang, Jin Huk Choi,Kuan-wen Wang, and Tiana Purrington. Mavs, cgas, and endogenous retroviruses in t-independent b cell responses.
Science , 346(6216):1486–1492, 2014.[35] Lin-Fa Wang. Bats and viruses: a brief review.
Virologica Sinica , 24(2):93–99, 2009.[36] Ben Killingley and Jonathan Nguyen-Van-Tam. Routes of influenza transmission.
Influenza and other respiratoryviruses , 7:42–51, 2013.[37] Farzad Ghannadian, Cecil Alford, and Ron Shonkwiler. Application of random restart to genetic algorithms.
Information Sciences , 95(1):81 – 102, 1996.[38] Michal Pluhacek, Adam Viktorin, Roman Senkerik, Tomas Kadavy, and Ivan Zelinka. Pso with partial populationrestart based on complex network analysis. In
International Conference on Hybrid Artificial Intelligence Systems ,pages 183–192. Springer, 2017.[39] G Hossein Hajimirsadeghi, Mahdy Nabaee, and Babak N Araabi. Ant colony optimization with a genetic restartapproach toward global optimization. In
Computer Society of Iran Computer Conference , pages 9–16. Springer,2008.[40] Tarik Eltaeib and Ausif Mahmood. Di ff erential evolution: A survey and analysis. Applied Sciences , 8(10):1945,Oct 2018.[41] Harry Markowitz. Portfolio selection.
Journal of Finance , 7(1):77–91, 1952.[42] Mayank Arya Chandra and SS Bedi. Survey on svm and their application in image classification.
InternationalJournal of Information Technology , pages 1–11, 2018.[43] Z. Li and V. Tam. A comparative study of a recurrent neural network and support vector machine for predictingprice movements of stocks of di ff erent volatilites. In , pages 1–8, 2017.[44] Mohammad Al-Smadi, Omar Qawasmeh, Mahmoud Al-Ayyoub, Yaser Jararweh, and Brij Gupta. Deep recurrentneural network vs. support vector machine for aspect-based sentiment analysis of arabic hotels reviews. Journal ofcomputational science , 27:386–393, 2018.[45] Dheeru Dua and Casey Gra ff . UCI machine learning repository, 2017.[46] Miriam Seoane Santos, Pedro Henriques Abreu, Pedro J. Garca-Laencina, Adlia Sim£o, and Armando Carvalho.A new cluster-based oversampling method for improving survival prediction of hepatocellular carcinoma patients. Journal of Biomedical Informatics , 58:49 – 59, 2015.[47] W. W. Koczkodaj, T. Kakiashvili, A. SzymaÅska, J. Montero-Marin, R. Araya, J. Garcia-Campayo, K. Rutkowski,and D. StrzaÅka. How to reduce the number of rating scale items without predictability loss?
Scientometrics ,111(2):581–593, 2017.,111(2):581–593, 2017.