The Archerfish Hunting Optimizer: a novel metaheuristic algorithm for global optimization
Farouq Zitouni, Saad Harous, Abdelghani Belkeram, Lokman Elhakim Baba Hammou
Appeared in Fundamenta Informaticae 178(1): 1–41 (2021). Available at IOS Press through https://doi.org/10.3233/FI-2021-0001
Farouq Zitouni
Department of Computer Science, Kasdi Merbah University, Ouargla; LIRE Laboratory, Abdelhamid Mehri University, Constantine. [email protected]
Saad Harous
Department of Computer Science and Software Engineering, UAE University, Abu Dhabi, United Arab Emirates. [email protected]
Abdelghani Belkeram
Department of Computer Science, Kasdi Merbah University, Ouargla, Algeria. [email protected]
Lokman Elhakim Baba Hammou
Department of Computer Science, Kasdi Merbah University, Ouargla, Algeria. [email protected]
Abstract.
Global optimization solves real-world problems numerically or analytically by minimizing their objective functions. Most analytical algorithms are greedy and computationally intractable. Metaheuristics are nature-inspired optimization algorithms: they numerically find a near-optimal solution for optimization problems in a reasonable amount of time. We propose a novel metaheuristic algorithm for global optimization, based on the shooting and jumping behaviors of the archerfish when hunting aerial insects. We name it the Archerfish Hunting Optimizer (AHO). We perform two sorts of comparisons to validate the proposed algorithm's performance. First, AHO is compared to 12 recent metaheuristic algorithms (the algorithms accepted at the 2020 competition on single objective bound-constrained numerical optimization) on the ten test functions of the benchmark CEC 2020 for unconstrained optimization. Second, the performance of AHO and 3 recent metaheuristic algorithms is evaluated on five engineering design problems taken from the benchmark CEC 2020 for non-convex constrained optimization. The experimental results are evaluated using the Wilcoxon signed-rank and the Friedman tests. The statistical indicators illustrate that the Archerfish Hunting Optimizer has an excellent ability to accomplish higher performance in competition with the well-established optimizers.

Address for correspondence: [email protected]
Keywords:
Global optimization, Metaheuristic algorithms, Unconstrained optimization, Constrained optimization, Hunting behavior of archerfish, Benchmark CEC 2020.
1. Introduction
Approximation algorithms were proposed for the first time in the 1960s to solve challenging optimization problems [1]. They are called approximation algorithms because they generate near-optimal solutions. They were mainly used to solve optimization problems that could not be solved efficiently with the computational techniques available at that period [2]. The theory of NP-completeness also contributed considerably to the maturing of approximation algorithms, since solving NP-hard optimization problems became the top priority for dealing with computational intractability [3]. Some optimization problems are easy to solve (i.e., generating near-optimal solutions is quick), while for other ones, that task is as hard as finding optimal solutions [4].

Approximation algorithms using probabilistic and randomized techniques saw tremendous advances between the 1980s and 1990s. They were named metaheuristic algorithms [5]. Famous metaheuristic algorithms include, for example, Simulated Annealing [6], Ant Colony Optimization [7], Evolutionary Computation [8], Tabu Search [9], Memetic Algorithms [10], and Particle Swarm Optimization [11], to name but a few. During the past three decades, many metaheuristic algorithms have been proposed in the literature. Most of them have been assessed experimentally and have shown good performance for solving real-world optimization problems [12]. Metaheuristic algorithms aim to find the best possible solutions and guarantee that such solutions satisfy some criteria [13]. The No-Free-Lunch theorem [14] proves that a universal metaheuristic algorithm solving all optimization problems cannot exist, which explains the growing number of proposed state-of-the-art methods. In other words, if a certain metaheuristic algorithm efficiently solves some optimization problems, it will systematically show mediocre performance on other ones. The theorem also states that the performance of all metaheuristic algorithms, averaged over all optimization problems, is the same. Tables 1 and 2 list some well-known metaheuristic algorithms, split into four families: evolutionary-based, swarm-based, physical-based, and human-based algorithms [15].

Mainly, there are two groups of metaheuristic algorithms: population-based and individual-based algorithms [93]. Population-based algorithms use several agents, whereas individual-based algorithms use one agent. In the first group, several individuals swarm in the search space and cooperatively evolve towards the global optimum [94].

Table 1: Popular evolutionary-based and swarm-based metaheuristic algorithms.
Evolutionary-based metaheuristic algorithms:
Genetic algorithm [16]; Evolution strategies [18]; Evolutionary programming [19]; Genetic programming [21]; Differential evolution [23]; Biogeography-based optimization [25]; Covariance matrix adaptation evolution strategy [27]; Quantum-inspired evolutionary algorithm [29].

Swarm-based metaheuristic algorithms:
Particle swarm optimization [17]; Ant colony optimization [7]; Artificial bee colony [20]; Grey wolf optimizer [22]; Bat algorithm [24]; Whale optimization algorithm [26]; Dragonfly algorithm [28]; Dolphin echolocation [30]; Fruit fly optimization [31]; Krill herd [32]; Bird mating optimizer [33]; Hunting search [34]; Firefly algorithm [35]; Dolphin partner optimization [36]; Cuckoo search [37]; Social spider optimization [38]; Bee collecting pollen algorithm [39]; Marriage in honey bees [40]; Monkey search [41]; Termite [42]; Fish swarm algorithm [43]; Grasshopper optimisation algorithm [44]; Seagull optimization algorithm [45]; Salp swarm algorithm [46]; Selfish herd optimizer [47]; Moth-flame optimization algorithm [48]; Ant lion optimizer [49]; Harris hawks optimization [50]; Slime mould algorithm [51]; Moth search algorithm [52]; Elephant herding optimization [53]; Earthworm optimisation algorithm [54]; Monarch butterfly optimization algorithm [55]; Rooted tree optimization algorithm [56]; Tunicate swarm algorithm [57].
Table 2: Popular physical-based and human-based metaheuristic algorithms.
Physical-based metaheuristic algorithms:
Simulated annealing [58]; Thermodynamic laws [60]; Gravitation [62, 63]; Big bang–big crunch [65]; Charged system [67]; Central force [69]; Chemical reaction [71]; Black hole [73]; Ray [75]; Small-world [77]; Galaxy-based [79]; General relativity theory [81]; Sine cosine algorithm [83]; Multi-verse optimizer [85]; Inclined planes system optimization [87]; Firework algorithm [89]; Modified inclined planes system optimization [90]; Simplified inclined planes system optimization [91]; Spherical search [92]; Solar system algorithm [15].

Human-based metaheuristic algorithms:
Teaching learning-based optimization [59]; Harmony search [61]; Taboo search [64]; Group search optimizer [66]; Imperialist competitive algorithm [68]; League championship algorithm [70]; Colliding bodies optimization [72]; Interior search algorithm [74]; Mine blast algorithm [76]; Soccer league competition algorithm [78]; Seeker optimization algorithm [80]; Social-based algorithm [82]; Exchange market algorithm [84]; Nomadic people optimizer [86]; Group counseling optimization algorithm [88].

In the second group, one individual moves in the search space and evolves towards the global optimum [95]. In both groups, individuals' positions are randomly initialized in the search space and are modified over generations until specific criteria are satisfied [93].

Metaheuristic algorithms have two fundamental components: exploration and exploitation [96]. Exploration is also called global optimization or diversification; exploitation is also named local optimization or intensification. Exploration allows metaheuristic algorithms to discover new regions of the search space and to avoid being trapped in local optima [97]. Exploitation permits metaheuristic algorithms to zoom in on a particular area to find the best solution there [97]. Any metaheuristic algorithm should find the best balance between diversification and intensification; otherwise, the quality of the found solutions is compromised [98]. Too many exploration operations may result in a considerable waste of effort: the algorithm jumps from one location to another without concentrating on enhancing the quality of the current solution [99]. Excessive exploitation operations may lead the algorithm to be trapped in local optima and to converge prematurely [100]. The main weakness of metaheuristic algorithms is their sensitivity to the tuning of the controlling parameters; moreover, convergence to the global optimum is not always guaranteed [101].

We propose a novel swarm-based metaheuristic algorithm for global optimization, named the Archerfish Hunting Optimizer (AHO). The algorithm is inspired by the shooting and jumping behaviors of archerfish when catching prey. The prominent features of AHO are:

• AHO has three controlling parameters to set: the population size, the swapping angle between the exploration and exploitation phases, and the attractiveness rate between the archerfish and the prey.
• AHO uses elementary laws of physics (i.e., the equations of the general ballistic trajectory) to determine the positions of new solutions.
• The swapping angle controls the balance between the exploration and exploitation of the search space.

The performance of AHO is assessed using the benchmark CEC 2020 for unconstrained optimization. The considered benchmark contains ten challenging single objective test functions. For further information on this benchmark, the reader is referred to [102].
The obtained results are compared to those of the 12 most recent state-of-the-art metaheuristic algorithms (the algorithms accepted at the 2020 competition on single objective bound-constrained numerical optimization). In addition, the performance of AHO is evaluated on five engineering design problems selected from the benchmark CEC 2020 for non-convex constrained optimization. More details on this benchmark are available in [103]. The collected results are compared against those of 3 recent state-of-the-art metaheuristic algorithms. The experimental outcomes are judged using the Wilcoxon signed-rank and the Friedman tests. The statistical results show that AHO produces very encouraging and, most of the time, competitive results compared to the well-established metaheuristic methods.

The rest of the paper is organized as follows. Section 2 illustrates the hunting behavior of archerfish and provides the source of inspiration for AHO. Section 3 describes the proposed metaheuristic algorithm and its mathematical model. Sections 4 and 5 present and discuss the statistical results and a comparative study on some unconstrained and constrained optimization problems. Section 6 sums up the paper and concludes with some future directions.

Figure 1: The hunting mechanisms of archerfish.
2. Source of inspiration
The archerfish form a small family of fish, of which Toxotes chatareus is a well-known species. They mainly live in mangrove areas of the Indo-Pacific [104]. They possess one of the most complex and exciting feeding behaviors: they prey on aerial insects by shooting them down with water droplets spat from their mouths. Figure 1 shows the shape of an archerfish and its hunting mechanisms. An archerfish uses two ways to capture insects: i) it dislodges the target with a powerful jet of water (left archerfish), or ii) it jumps at the prey if the latter is close enough (right archerfish) [105].

In practice, the shooting technique is less tiring than jumping and permits many consecutive shots. However, retrieving a fallen prey is uncertain because other archerfish might steal it [106, 107]. When jumping, an archerfish locates itself directly below the target [108]. When shooting, an archerfish takes a more lateral position [109]. Archerfish eject water droplets at aerial insects, knocking them onto the water surface to be eaten. Since the archerfish's eyes remain entirely below the water surface during the sighting and spitting, it needs to deal with refraction effects at the air-water interface [110]. In this paper, we adopt the following principles to outline the instructions of AHO.

• Archerfish live in a flock (AHO is a population-based metaheuristic algorithm).
• Archerfish use two different hunting mechanisms: shooting and jumping (exploration and exploitation phases).
• Archerfish can steal captured prey from each other (cooperative search and information sharing).
• Swapping between the jumping and shooting behaviors is controlled by the perceiving angle (balancing between the exploration and exploitation phases).

Figure 2: Shooting behavior of the archerfish.
3. Archerfish hunting optimizer
We illustrate the exploration and exploitation phases of the proposed AHO, inspired by the archerfish's shooting and jumping behaviors when hunting insects. AHO is a gradient-free optimization method that can solve any optimization problem with a proper formulation of the objective function. We assume a search space of dimension $d$ that contains several archerfish. The flock size (i.e., the number of archerfish) is $N$, and the location of archerfish $i$ at iteration $t$ is given as follows:

$$X^{\langle i,t\rangle} = (x_1, x_2, \ldots, x_d)$$

Each entry of $X^{\langle i,t\rangle}$ has a range of allowed values: i.e., $x_j \in [x_j^{\min}, x_j^{\max}]$ (where $i \in \{1, \ldots, N\}$ and $j \in \{1, \ldots, d\}$). At iteration $t = 0$, the location $X^{\langle i,0\rangle}$ is initialized randomly using Equation 1.

$$X^{\langle i,0\rangle} = \big(\alpha_1 \times (x_1^{\max} - x_1^{\min}) + x_1^{\min}, \ldots, \alpha_d \times (x_d^{\max} - x_d^{\min}) + x_d^{\min}\big) \qquad (1)$$

where $\alpha_1, \ldots, \alpha_d$ are uniformly distributed random numbers between 0 and 1.

Figure 2 depicts the shooting behavior of an archerfish (exploration of the search space). The water droplet motion is modeled using the equations of the general ballistic trajectory [111]. It is determined by the acceleration of gravity ($g$), the launch speed ($\nu$), and the perceiving angle ($\theta$), provided that the air friction is negligible. We suppose that the prey (i.e., a dragonfly) is located at the peak of the trajectory diagram. When the insect is shot by an archerfish $k$, it falls vertically onto the water surface. When an archerfish $i$ senses the vibrations initiated by the prey, it moves towards its location using Equation 2.

$$X^{\langle i,t+1\rangle} = X^{\langle i,t\rangle} + e^{-\|X_{prey}^{\langle k,t\rangle} - X^{\langle i,t\rangle}\|}\,\big(X_{prey}^{\langle k,t\rangle} - X^{\langle i,t\rangle}\big) \qquad (2)$$

where
$X^{\langle i,t+1\rangle}$: the next location of archerfish $i$.
$X^{\langle i,t\rangle}$: the current location of archerfish $i$.
$\|\cdot\|$: the Euclidean distance.
$X_{prey}^{\langle k,t\rangle}$: the prey's location, computed using Equation 3.
$\varepsilon$: a vector of random numbers generated by a uniform distribution; it represents refraction effects at the air-water interface.
$X^{\langle k,t\rangle}$: the location of archerfish $k$, which has shot the insect.

$$X_{prey}^{\langle k,t\rangle} = X^{\langle k,t\rangle} + \Big(0, \ldots, \frac{\nu^2}{g}\sin 2\theta, \ldots, 0\Big) + \varepsilon \qquad (3)$$

The position of the entry given by the term $\frac{\nu^2}{g}\sin 2\theta$ is a random number in the range $\{1, \ldots, d\}$. For simplicity, the fraction $\frac{\nu^2}{g}$ is replaced by the variable $\omega$; it defines the attractiveness rate of an archerfish to a specific prey.

Figure 3 describes the jumping behavior of an archerfish (exploitation of the search space). The archerfish jumps at the prey and catches it. Similarly, the motion of the archerfish is defined by the acceleration of gravity ($g$), its launch speed ($\nu$), and its perceiving angle ($\theta$), provided that the air friction is negligible. We suppose that the prey (i.e., a dragonfly) is located at the peak of the trajectory diagram. When an archerfish $i$ decides to capture an insect, it moves towards its location using Equation 4.

$$X^{\langle i,t+1\rangle} = X^{\langle i,t\rangle} + e^{-\|X_{prey}^{\langle i,t\rangle} - X^{\langle i,t\rangle}\|}\,\big(X_{prey}^{\langle i,t\rangle} - X^{\langle i,t\rangle}\big) \qquad (4)$$

where
$X^{\langle i,t+1\rangle}$: the next location of archerfish $i$.
$X^{\langle i,t\rangle}$: the current location of archerfish $i$.
$\|\cdot\|$: the Euclidean distance.
$X_{prey}^{\langle i,t\rangle}$: the prey's location, computed using Equation 5.
$\varepsilon$: a vector of random numbers drawn from a uniform distribution; it represents refraction effects at the air-water interface.

$$X_{prey}^{\langle i,t\rangle} = X^{\langle i,t\rangle} + \Big(0, \ldots, \frac{\nu^2}{g}\sin 2\theta, \ldots, \frac{\nu^2}{2g}\sin^2\theta, \ldots, 0\Big) + \varepsilon \qquad (5)$$

The positions of the entries given by the terms $\frac{\nu^2}{g}\sin 2\theta$ and $\frac{\nu^2}{2g}\sin^2\theta$ are mandatorily distinct random numbers in the range $\{1, \ldots, d\}$. For simplicity, the fraction $\frac{\nu^2}{g}$ is replaced by the variable $\omega$.
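To make the position updates concrete, here is a minimal Python sketch of Equations 2–5. It is an illustration under stated assumptions, not the authors' implementation: the helper names (`move_towards`, `prey_position`) are ours, and the refraction vector ε is assumed to be uniform on [0, 1), since the paper does not give its range.

```python
import numpy as np

def move_towards(x, prey):
    # Eqs. 2 and 4: step towards the sensed prey; the step length decays
    # exponentially with the Euclidean distance to the prey.
    diff = prey - x
    return x + np.exp(-np.linalg.norm(diff)) * diff

def prey_position(x, theta, omega, jumping, rng):
    # Eqs. 3 and 5: project the prey location from the ballistic trajectory.
    # omega stands for nu^2 / g; epsilon models air-water refraction.
    d = x.size
    offset = np.zeros(d)
    j = rng.integers(d)                        # random entry for the range term
    offset[j] = omega * np.sin(2.0 * theta)    # (nu^2 / g) * sin(2 theta)
    if jumping:                                # Eq. 5 adds the apex-height term
        k = (j + 1 + rng.integers(d - 1)) % d          # a distinct random entry
        offset[k] = 0.5 * omega * np.sin(theta) ** 2   # (nu^2 / 2g) * sin^2(theta)
    epsilon = rng.uniform(0.0, 1.0, d)         # assumed U(0, 1); range not stated
    return x + offset + epsilon
```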
It defines the attractiveness rate of an archerfish to a specific prey.

The value of the perceiving angle ($\theta$) governs the swapping between the exploration and exploitation phases. Figure 4 delimits the ranges of perceiving angles where AHO is supposed to explore (green areas) or exploit (orange regions) the search space. Hence, the closer the value of $\theta$ is to $\pi/2$ or $-\pi/2$, the more AHO tends to exploit the search space, and vice versa. The value of $\theta$ is generated randomly using Equation 6.

$$\theta = (-1)^{b} \times \alpha \times \pi \qquad (6)$$

where
$b \sim B(0.5)$: Bernoulli distribution (success probability equal to 0.5) [112].
$\alpha$: uniformly distributed random number between 0 and 1.
$\pi$: Archimedes' constant, approximately 3.14.

Figure 3: Jumping behavior of the archerfish.
Figure 4: Swapping between exploration and exploitation.

To avoid getting trapped in local optima, AHO uses a simple strategy. Suppose a given archerfish location $X^{\langle i,t\rangle}$ at iteration $t$ is not enhanced for a fixed number of iterations (e.g., $d \times N$). In this case, the corresponding archerfish moves to a new place according to a Lévy flight [35]. The new location of $X^{\langle i,t\rangle}$ is generated using Equations 7 and 8.

$$X^{\langle i,t+1\rangle} = X^{\langle i,t\rangle} + \alpha \left[\frac{u_1}{|v_1|^{1/\beta}}, \ldots, \frac{u_d}{|v_d|^{1/\beta}}\right] \qquad (7)$$

$$u_i \sim N(0, \sigma^2),\quad \sigma = \left(\frac{\Gamma(1+\beta)\,\sin(\pi\beta/2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}}\right)^{1/\beta},\quad v_i \sim N(0, \acute{\sigma}^2),\quad \acute{\sigma} = 1,\quad i \in \{1, \ldots, d\} \qquad (8)$$

where
$\Gamma$: Gamma function [113].
$N(\mu, \sigma)$: normal distribution of mean $\mu$ and standard deviation $\sigma$ [112].
$\beta$: the power-law index ($\beta = 1.5$).
$\alpha$: uniformly distributed random number between 0 and 1.

Algorithm 1 shows the pseudo-code of AHO. AHO's computational complexity depends on the following steps: initialization, fitness evaluation, and updating of candidate solutions. The computational complexity of the first step is $O(N)$. The computational complexity of the second step is $O(Iter_{max} \times N \times d)$. The computational complexity of the third step is $O(Iter_{max} \times N \times N)$. Therefore, the computational complexity of AHO is $O(N \times (Iter_{max} \times (N + d) + 1))$.
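The angle sampling of Equation 6 and the Lévy-flight relocation of Equations 7 and 8 can be sketched in the same spirit; the generator below uses Mantegna's method, which matches the σ expression of Equation 8. Again, this is a hedged illustration, not the authors' code.

```python
from math import gamma, pi, sin
import numpy as np

def perceiving_angle(rng):
    # Eq. 6: theta = (-1)^b * alpha * pi, b ~ Bernoulli(0.5), alpha ~ U(0, 1).
    return (-1.0) ** rng.integers(2) * rng.uniform() * pi

def levy_flight(x, rng, beta=1.5):
    # Eqs. 7-8: relocate a stagnating archerfish with a Levy-distributed step.
    d = x.size
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, d)        # u_i ~ N(0, sigma^2)
    v = rng.normal(0.0, 1.0, d)          # v_i ~ N(0, 1)
    return x + rng.uniform() * u / np.abs(v) ** (1 / beta)
```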
4. Experimental results on unconstrained optimization problems
The benchmark CEC 2020 for unconstrained optimization problems [102] is used to investigate the effectiveness of AHO. This benchmark is composed of four groups of test functions: unimodal (UM), basic (BC), hybrid (HD), and composition (CM). The UM functions have one global optimum; they are used to assess the exploitation capacity of AHO. The BC functions are multimodal, with many local optima; they are employed to evaluate, on the one hand, the exploration ability of AHO and, on the other hand, its potential for avoiding local optima. The HD and CM functions are obtained from the hybridization and the composition of several elementary test functions; they are utilized to disclose the balance between the exploration and the exploitation of AHO. The mathematical formulations and characteristics of the UM, BC, HD, and CM functions are available in [102].

All the experiments were run using the Java programming language on a workstation with Windows 10 Family Edition (64-bit). The processor is an Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz, with 16 GB of RAM. The dimension of the search space ($d$) is set to 5, 10, 15, or 20. The population size ($N$) is obtained by truncating a fixed power function of $d$ (the term $\lfloor x \rfloor$ expresses the truncation of the real number $x$). For each dimension, the maximum number of iterations ($Iter_{max}$) is set to a fixed multiple of $N$. The range of allowed values for each decision variable is $[-100, 100]$. All the results are averaged over 30 independent runs. The swapping angle ($\theta$) is tested at five preset values, denoted $\theta_1, \ldots, \theta_5$ in the tables below (each a fixed fraction of $\pi$). The attractiveness rate ($\omega$) is set to 0.01, 0.05, 0.25, 1.25, or 6.25. Hence, in all, we have 25 different configurations for each dimension. Tables 3, 4, 5, and 6 present the standard deviation values (STD) for each dimension. It is worth mentioning that the closer the value of STD is to 0, the closer the result is to the global optimum.

Input: d (the dimension of the search space).
Input: [x_1^min, x_1^max], ..., [x_d^min, x_d^max] (the decision variables' domains).
Input: f (the objective function to be minimized).
Input: θ̂ (the swapping angle between the exploration and exploitation phases).
Input: ω (the attractiveness rate).
for i ← 1 to N do
    Generate a random location X⟨i,0⟩ using Equation 1;
end
for t ← 1 to Iter_max do
    for i ← 1 to N do
        θ ← generate a random perceiving angle using Equation 6;
        if |θ| ∈ ]0, θ̂[ ∪ ]π − θ̂, π[ then
            /* Shooting behavior (exploration of the search space) */
            Compute X⟨i,t⟩_prey using Equation 3;
            for j ← 1 to N do
                if f(X⟨i,t⟩_prey) < f(X⟨j,t⟩) then
                    Update the location X⟨j,t⟩ using Equation 2, and adjust its components;
                else if the location X⟨j,t⟩ has not changed for a given number of iterations then
                    Generate a new location for X⟨j,t⟩ using Equations 7 and 8, and adjust its components;
                end
            end
        else
            /* Jumping behavior (exploitation of the search space) */
            Compute X⟨i,t⟩_prey using Equation 5;
            if f(X⟨i,t⟩_prey) < f(X⟨i,t⟩) then
                Update the location X⟨i,t⟩ using Equation 4, and adjust its components;
            else if the location X⟨i,t⟩ has not changed for a given number of iterations then
                Generate a new location for X⟨i,t⟩ using Equations 7 and 8, and adjust its components;
            end
        end
    end
end

Algorithm 1: The Archerfish Hunting Optimizer (θ̂ denotes the input swapping angle; θ is the angle drawn at each step).
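Putting the pieces together, the following sketch mirrors the control flow of Algorithm 1, assuming the helpers `perceiving_angle`, `prey_position`, `move_towards`, and `levy_flight` defined earlier. The stall counter, the clipping of components to their domains ("adjust its components"), and the parameter names are our reading of the pseudo-code, not a reference implementation.

```python
import numpy as np

def aho(f, bounds, n_fish, iters, theta_hat, omega, stall_limit, seed=0):
    # Minimal AHO loop (Algorithm 1). theta_hat is the swapping angle,
    # omega the attractiveness rate, stall_limit e.g. d * N iterations.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = lo.size
    X = lo + rng.uniform(size=(n_fish, d)) * (hi - lo)          # Eq. 1
    fit = np.apply_along_axis(f, 1, X)
    stall = np.zeros(n_fish, dtype=int)
    for _ in range(iters):
        for i in range(n_fish):
            theta = perceiving_angle(rng)
            shoot = abs(theta) < theta_hat or abs(theta) > np.pi - theta_hat
            if shoot:   # shooting: exploration, every fish may react (Eq. 3)
                prey = prey_position(X[i], theta, omega, False, rng)
                targets = range(n_fish)
            else:       # jumping: exploitation, only fish i reacts (Eq. 5)
                prey = prey_position(X[i], theta, omega, True, rng)
                targets = [i]
            f_prey = f(np.clip(prey, lo, hi))
            for j in targets:
                if f_prey < fit[j]:                             # Eqs. 2 / 4
                    X[j] = np.clip(move_towards(X[j], prey), lo, hi)
                    fit[j], stall[j] = f(X[j]), 0
                else:
                    stall[j] += 1
                    if stall[j] >= stall_limit:                 # Eqs. 7-8
                        X[j] = np.clip(levy_flight(X[j], rng), lo, hi)
                        fit[j], stall[j] = f(X[j]), 0
    best = int(np.argmin(fit))
    return X[best], fit[best]
```

For example, `aho(lambda x: np.sum(x**2), [(-100, 100)] * 10, n_fish=30, iters=1000, theta_hat=np.pi/4, omega=0.01, stall_limit=300)` minimizes a 10-dimensional sphere; θ̂ = π/4 here is an arbitrary placeholder, since the exact values tested are those of Section 4.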
We perform the Friedman test [114] to show whether the tuning of the controlling parameters (i.e., θ and ω) impacts the performance of AHO. We consider each dimension separately (see Tables 3, 4, 5, and 6). When d = 5, we have 25 configurations (i.e., treatments) and 8 test functions (i.e., blocks). When d = 10, 15, or 20, we have 25 configurations (i.e., treatments) and 10 test functions (i.e., blocks). The value of α is set to 0.05, and the number of degrees of freedom (df) is set to 24. We define the null and alternative hypotheses respectively as follows: H0, there is no difference between the 25 configurations for each dimension; and H1, there is a difference between the 25 configurations for each dimension. The critical value for α = 0.05 and df = 24 is 36.4150 [115]. We compute the F_r value using Equation 9 [114], where n is the number of blocks, k is the number of treatments, and T_1, T_2, ..., T_k are the sums of ranks of each treatment taken separately. If F_r is greater than 36.4150, we reject the hypothesis H0.

$$F_r = \frac{12}{nk(k+1)} \left(T_1^2 + T_2^2 + \ldots + T_k^2\right) - 3n(k+1) \qquad (9)$$

• For Table 3, the computed F_r exceeds the critical value 36.4150, so H0 is rejected. The best configuration is (θ = θ1, ω = 0.01), because its rank is the smallest, with the value 57.5.
• For Table 4, the computed F_r exceeds 36.4150, so H0 is rejected. The best configuration is (θ = θ5, ω = 0.01), because its rank is the smallest, with the value 68.
• For Table 5, the computed F_r exceeds 36.4150, so H0 is rejected. The best configuration is (θ = θ4, ω = 0.01), because its rank is the smallest, with the value 58.
• For Table 6, the computed F_r exceeds 36.4150, so H0 is rejected. The best configuration is (θ = θ5, ω = 0.01), because its rank is the smallest, with the value 52.

From Tables 3, 4, 5, and 6, we observe that the problem's dimension does not significantly impact the quality of the obtained results. On the other hand, we notice that the values of the perceiving angle and of the attractiveness rate do not strongly influence the performance of AHO: in all configurations, AHO exposes excellent results, and its performance remains consistent across runs with various numbers of decision variables and multiple values of its controlling parameters. We believe that such behavior is justified by the law of large numbers in probability theory [116]: if the number of generations is large enough, AHO tends to reach an equitable balance between the exploration and exploitation phases regardless of the value of the perceiving angle.

AHO's statistical results on the benchmark CEC 2020 with 5, 10, 15, and 20 dimensions are summarized in Table 7. It reports the best, worst, median, and mean values, as well as the standard deviation of the error from the optimum solution, over 30 independent runs for the ten benchmark functions. The results presented in Table 7 correspond to the best configurations of Tables 3, 4, 5, and 6 identified by the Friedman test.
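As a quick consistency check of Equation 9, the statistic can be computed directly from the STD tables; a small sketch (ranking each block's treatments with SciPy, which averages tied ranks):

```python
import numpy as np
from scipy.stats import rankdata

def friedman_statistic(results):
    # results: (n blocks = test functions) x (k treatments = configurations).
    # Smaller STD is better, so plain ascending ranks are appropriate.
    n, k = results.shape
    ranks = np.apply_along_axis(rankdata, 1, results)   # ranks within blocks
    T = ranks.sum(axis=0)                               # rank sum per treatment
    return 12.0 / (n * k * (k + 1)) * np.sum(T ** 2) - 3.0 * n * (k + 1)

# Reject H0 when friedman_statistic(...) > 36.4150 (alpha = 0.05, df = 24).
```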
For function F1, AHO successfully obtained the optimal solution in dimensions d = 5, d = 10, d = 15, and d = 20. For functions F2, F5, F6, F7, F8, and F10, AHO was very close to the optimum in all dimensions. For functions F3 and F4, AHO successfully reached the optimal solution in all dimensions. For function F9, AHO successfully attained the optimal solution in dimensions d = 10, d = 15, and d = 20, and was very close to the optimum in dimension d = 5.

Table 3: The standard deviation values for dimension d = 5. Columns: F1, F2, F3, F4, F5, F8, F9, F10 (F6 and F7 are not reported for this dimension).
θ = θ1:
  ω = 0.01:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  1.94E-05  5.59E-05  2.04E-08  1.46E-03
  ω = 0.05:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  2.77E-05  5.59E-05  1.33E-07  1.96E-03
  ω = 0.25:  1.78E-07  6.36E-05  0.00E+00  0.00E+00  5.20E-05  5.59E-05  4.28E-07  3.42E-03
  ω = 1.25:  1.52E-06  6.36E-05  3.12E-08  0.00E+00  9.38E-05  5.59E-05  3.66E-06  3.16E-03
  ω = 6.25:  1.16E-05  6.36E-05  1.58E-06  0.00E+00  1.66E-04  5.59E-05  1.37E-05  4.35E-03
θ = θ2:
  ω = 0.01:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  2.14E-05  5.59E-05  1.75E-08  1.80E-03
  ω = 0.05:  1.41E-08  6.36E-05  0.00E+00  0.00E+00  3.27E-05  5.59E-05  9.89E-08  2.51E-03
  ω = 0.25:  1.44E-07  6.36E-05  0.00E+00  0.00E+00  4.83E-05  5.59E-05  3.54E-07  2.57E-03
  ω = 1.25:  1.43E-06  6.36E-05  3.02E-08  0.00E+00  9.55E-05  5.59E-05  3.71E-06  4.11E-03
  ω = 6.25:  1.01E-05  6.36E-05  1.06E-06  0.00E+00  1.61E-04  5.59E-05  5.56E-06  4.22E-03
θ = θ3:
  ω = 0.01:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  1.83E-05  5.59E-05  2.26E-08  1.81E-03
  ω = 0.05:  1.58E-08  6.36E-05  0.00E+00  0.00E+00  2.84E-05  5.59E-05  9.87E-08  2.15E-03
  ω = 0.25:  8.39E-08  6.36E-05  0.00E+00  0.00E+00  4.28E-05  5.59E-05  2.78E-07  3.02E-03
  ω = 1.25:  1.44E-06  6.36E-05  4.33E-08  0.00E+00  1.02E-04  5.59E-05  4.43E-06  3.57E-03
  ω = 6.25:  1.41E-05  6.36E-05  6.71E-07  0.00E+00  1.98E-04  5.59E-05  9.97E-06  4.08E-03
θ = θ4:
  ω = 0.01:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  1.87E-05  5.59E-05  2.62E-08  1.64E-03
  ω = 0.05:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  2.82E-05  5.59E-05  9.44E-08  2.57E-03
  ω = 0.25:  8.98E-08  6.36E-05  0.00E+00  0.00E+00  3.91E-05  5.59E-05  7.26E-07  2.60E-03
  ω = 1.25:  1.27E-06  6.36E-05  4.17E-08  0.00E+00  8.51E-05  5.59E-05  6.87E-07  3.32E-03
  ω = 6.25:  1.64E-05  6.36E-05  8.99E-07  0.00E+00  2.56E-04  5.59E-05  1.35E-05  4.48E-03
θ = θ5:
  ω = 0.01:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  2.00E-05  5.59E-05  2.33E-08  1.74E-03
  ω = 0.05:  0.00E+00  6.36E-05  0.00E+00  0.00E+00  2.39E-05  5.59E-05  9.41E-08  2.22E-03
  ω = 0.25:  8.15E-08  6.36E-05  0.00E+00  0.00E+00  4.69E-05  5.59E-05  4.27E-07  2.84E-03
  ω = 1.25:  4.69E-07  6.36E-05  3.73E-08  0.00E+00  8.75E-05  5.59E-05  7.58E-07  3.14E-03
  ω = 6.25:  2.35E-05  6.36E-05  1.09E-06  0.00E+00  1.84E-04  5.59E-05  1.37E-05  4.39E-03
Table 4: The standard deviation values for dimension d = 10. Columns: F1–F10.

θ = θ1:
  ω = 0.01:  4.55E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.74E-05  4.44E-05  1.09E-04  0.00E+00  2.94E-03
  ω = 0.05:  2.71E-07  1.27E-04  0.00E+00  0.00E+00  3.82E-05  5.23E-05  5.18E-05  1.09E-04  0.00E+00  3.97E-03
  ω = 0.25:  7.83E-06  1.27E-04  5.77E-08  0.00E+00  3.82E-05  6.49E-05  8.36E-05  1.09E-04  0.00E+00  5.60E-03
  ω = 1.25:  2.08E-04  1.27E-04  2.24E-06  0.00E+00  3.87E-05  9.97E-05  1.43E-04  1.09E-04  2.80E-08  6.08E-03
  ω = 6.25:  5.55E-03  1.27E-04  4.68E-05  0.00E+00  4.02E-05  2.70E-04  5.38E-04  1.09E-04  7.02E-07  6.82E-03
θ = θ2:
  ω = 0.01:  4.30E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.51E-05  3.95E-05  1.09E-04  0.00E+00  3.31E-03
  ω = 0.05:  4.96E-07  1.27E-04  0.00E+00  0.00E+00  3.83E-05  5.32E-05  5.49E-05  1.09E-04  0.00E+00  3.81E-03
  ω = 0.25:  7.04E-06  1.27E-04  5.16E-08  0.00E+00  3.84E-05  7.53E-05  9.26E-05  1.09E-04  0.00E+00  5.15E-03
  ω = 1.25:  2.11E-04  1.27E-04  2.19E-06  0.00E+00  3.89E-05  1.30E-04  1.81E-04  1.09E-04  2.76E-08  5.85E-03
  ω = 6.25:  3.79E-03  1.27E-04  4.97E-05  0.00E+00  3.98E-05  2.58E-04  5.94E-04  1.09E-04  7.54E-07  8.15E-03
θ = θ3:
  ω = 0.01:  3.68E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.54E-05  3.63E-05  1.09E-04  0.00E+00  3.47E-03
  ω = 0.05:  5.90E-07  1.27E-04  0.00E+00  0.00E+00  3.82E-05  5.49E-05  5.30E-05  1.09E-04  0.00E+00  3.58E-03
  ω = 0.25:  9.16E-06  1.27E-04  5.93E-08  0.00E+00  3.84E-05  8.31E-05  8.57E-05  1.09E-04  0.00E+00  5.46E-03
  ω = 1.25:  2.01E-04  1.27E-04  1.83E-06  0.00E+00  3.87E-05  1.36E-04  2.09E-04  1.09E-04  3.11E-08  7.60E-03
  ω = 6.25:  4.04E-03  1.27E-04  3.70E-05  0.00E+00  4.06E-05  2.46E-04  5.57E-04  1.09E-04  7.41E-07  7.55E-03
θ = θ4:
  ω = 0.01:  3.53E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.46E-05  4.30E-05  1.09E-04  0.00E+00  3.57E-03
  ω = 0.05:  4.63E-07  1.27E-04  0.00E+00  0.00E+00  3.82E-05  5.68E-05  6.02E-05  1.09E-04  0.00E+00  3.79E-03
  ω = 0.25:  1.04E-05  1.27E-04  8.70E-08  0.00E+00  3.84E-05  7.60E-05  1.04E-04  1.09E-04  0.00E+00  6.02E-03
  ω = 1.25:  2.30E-04  1.27E-04  1.88E-06  0.00E+00  3.94E-05  1.24E-04  1.71E-04  1.09E-04  2.41E-08  6.68E-03
  ω = 6.25:  5.94E-03  1.27E-04  5.31E-05  0.00E+00  3.99E-05  2.49E-04  4.32E-04  1.09E-04  7.23E-07  8.14E-03
θ = θ5:
  ω = 0.01:  1.36E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.58E-05  3.83E-05  1.09E-04  0.00E+00  3.37E-03
  ω = 0.05:  4.78E-07  1.27E-04  0.00E+00  0.00E+00  3.82E-05  5.23E-05  5.80E-05  1.09E-04  0.00E+00  4.61E-03
  ω = 0.25:  1.10E-05  1.27E-04  5.59E-08  0.00E+00  3.84E-05  7.95E-05  9.62E-05  1.09E-04  0.00E+00  4.44E-03
  ω = 1.25:  1.81E-04  1.27E-04  2.10E-06  0.00E+00  3.86E-05  1.40E-04  1.63E-04  1.09E-04  2.93E-08  5.76E-03
  ω = 6.25:  2.59E-03  1.27E-04  5.23E-05  0.00E+00  4.13E-05  2.55E-04  5.91E-04  1.09E-04  7.24E-07  8.28E-03

Table 5: The standard deviation values for dimension d = 15. Columns: F1–F10.
θ = θ1:
  ω = 0.01:  1.75E-07  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.61E-05  5.64E-05  1.52E-04  0.00E+00  8.56E-03
  ω = 0.05:  3.46E-06  1.91E-04  1.64E-08  0.00E+00  6.37E-05  6.47E-05  7.13E-05  1.52E-04  0.00E+00  1.23E-02
  ω = 0.25:  5.06E-05  1.91E-04  4.61E-07  0.00E+00  6.37E-05  8.23E-05  1.34E-04  1.52E-04  1.35E-08  1.90E-02
  ω = 1.25:  1.15E-03  1.91E-04  1.14E-05  0.00E+00  6.43E-05  1.33E-04  2.79E-04  1.52E-04  3.17E-07  2.56E-02
  ω = 6.25:  2.93E-02  1.91E-04  2.31E-04  0.00E+00  8.48E-05  2.96E-04  8.09E-04  1.52E-04  6.83E-06  2.78E-02
θ = θ2:
  ω = 0.01:  1.20E-07  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.48E-05  5.81E-05  1.52E-04  0.00E+00  9.22E-03
  ω = 0.05:  2.77E-06  1.91E-04  1.20E-08  0.00E+00  6.36E-05  6.67E-05  7.26E-05  1.52E-04  0.00E+00  1.37E-02
  ω = 0.25:  6.11E-05  1.91E-04  4.25E-07  0.00E+00  6.38E-05  9.27E-05  1.26E-04  1.52E-04  1.26E-08  1.94E-02
  ω = 1.25:  1.23E-03  1.91E-04  1.34E-05  0.00E+00  6.59E-05  1.28E-04  2.08E-04  1.52E-04  4.28E-07  2.53E-02
  ω = 6.25:  2.50E-02  1.92E-04  2.17E-04  1.14E-08  7.15E-05  2.83E-04  9.32E-04  1.52E-04  6.86E-06  3.11E-02
θ = θ3:
  ω = 0.01:  1.37E-07  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.80E-05  5.64E-05  1.52E-04  0.00E+00  8.59E-03
  ω = 0.05:  2.66E-06  1.91E-04  1.55E-08  0.00E+00  6.37E-05  6.62E-05  7.61E-05  1.52E-04  0.00E+00  1.33E-02
  ω = 0.25:  4.35E-05  1.91E-04  3.65E-07  0.00E+00  6.38E-05  9.05E-05  1.13E-04  1.52E-04  1.24E-08  1.52E-02
  ω = 1.25:  1.01E-03  1.91E-04  1.13E-05  0.00E+00  6.49E-05  1.53E-04  2.20E-04  1.52E-04  3.01E-07  2.14E-02
  ω = 6.25:  2.69E-02  1.92E-04  3.55E-04  0.00E+00  7.49E-05  2.60E-04  7.91E-04  1.52E-04  8.24E-06  2.81E-02
θ = θ4:
  ω = 0.01:  8.59E-08  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.64E-05  5.49E-05  1.52E-04  0.00E+00  9.07E-03
  ω = 0.05:  2.14E-06  1.91E-04  1.53E-08  0.00E+00  6.37E-05  6.79E-05  6.76E-05  1.52E-04  0.00E+00  1.35E-02
  ω = 0.25:  3.09E-05  1.91E-04  3.77E-07  0.00E+00  6.40E-05  8.43E-05  1.18E-04  1.52E-04  1.46E-08  1.67E-02
  ω = 1.25:  9.38E-04  1.91E-04  1.02E-05  0.00E+00  6.64E-05  1.44E-04  2.41E-04  1.52E-04  3.34E-07  2.51E-02
  ω = 6.25:  3.02E-02  1.91E-04  2.54E-04  1.48E-08  7.67E-05  3.16E-04  8.32E-04  1.52E-04  7.65E-06  2.70E-02
θ = θ5:
  ω = 0.01:  1.39E-07  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.60E-05  5.54E-05  1.52E-04  0.00E+00  9.71E-03
  ω = 0.05:  3.84E-06  1.91E-04  1.52E-08  0.00E+00  6.37E-05  6.15E-05  6.85E-05  1.52E-04  0.00E+00  1.35E-02
  ω = 0.25:  6.00E-05  1.91E-04  3.43E-07  0.00E+00  6.39E-05  8.24E-05  1.14E-04  1.52E-04  1.42E-08  1.61E-02
  ω = 1.25:  1.46E-03  1.91E-04  1.13E-05  0.00E+00  6.50E-05  1.58E-04  2.39E-04  1.52E-04  3.94E-07  2.26E-02
  ω = 6.25:  2.12E-02  1.91E-04  2.66E-04  0.00E+00  7.21E-05  3.00E-04  5.61E-04  1.52E-04  1.05E-05  2.76E-02
Table 6: The standard deviation values for dimension d = 20. Columns: F1–F10.

θ = θ1:
  ω = 0.01:  4.99E-07  2.55E-04  0.00E+00  0.00E+00  7.68E-05  1.02E-04  7.70E-05  2.00E-04  0.00E+00  9.84E-03
  ω = 0.05:  5.51E-06  2.55E-04  7.33E-08  0.00E+00  7.76E-05  1.24E-04  9.12E-05  2.00E-04  0.00E+00  1.23E-02
  ω = 0.25:  1.74E-04  2.55E-04  1.94E-06  0.00E+00  8.25E-05  1.68E-04  1.62E-04  2.00E-04  2.39E-08  1.48E-02
  ω = 1.25:  3.85E-03  2.55E-04  4.56E-05  0.00E+00  9.08E-05  2.51E-04  3.23E-04  2.00E-04  7.21E-07  1.61E-02
  ω = 6.25:  9.70E-02  2.56E-04  1.18E-03  5.45E-08  1.06E-04  8.19E-04  1.06E-03  2.00E-04  1.39E-05  2.20E-02
θ = θ2:
  ω = 0.01:  6.77E-07  2.55E-04  0.00E+00  0.00E+00  7.67E-05  1.03E-04  6.94E-05  2.00E-04  0.00E+00  9.34E-03
  ω = 0.05:  7.47E-06  2.55E-04  6.52E-08  0.00E+00  7.82E-05  1.14E-04  9.49E-05  2.00E-04  0.00E+00  1.09E-02
  ω = 0.25:  2.23E-04  2.55E-04  1.50E-06  0.00E+00  8.53E-05  1.94E-04  1.52E-04  2.00E-04  2.63E-08  1.48E-02
  ω = 1.25:  4.83E-03  2.55E-04  5.00E-05  0.00E+00  9.10E-05  3.65E-04  3.27E-04  2.00E-04  6.01E-07  1.64E-02
  ω = 6.25:  1.11E-01  2.56E-04  1.08E-03  3.13E-08  1.10E-04  7.06E-04  1.01E-03  2.00E-04  1.74E-05  2.21E-02
θ = θ3:
  ω = 0.01:  4.42E-07  2.55E-04  0.00E+00  0.00E+00  7.69E-05  1.02E-04  7.23E-05  2.00E-04  0.00E+00  9.39E-03
  ω = 0.05:  1.11E-05  2.55E-04  7.81E-08  0.00E+00  7.92E-05  1.20E-04  1.04E-04  2.00E-04  0.00E+00  1.06E-02
  ω = 0.25:  2.21E-04  2.55E-04  1.50E-06  0.00E+00  8.31E-05  1.73E-04  1.60E-04  2.00E-04  2.89E-08  1.54E-02
  ω = 1.25:  3.39E-03  2.55E-04  4.15E-05  0.00E+00  8.87E-05  3.09E-04  3.79E-04  2.00E-04  7.14E-07  1.56E-02
  ω = 6.25:  1.37E-01  2.55E-04  1.43E-03  3.15E-08  1.17E-04  7.20E-04  1.16E-03  2.00E-04  1.68E-05  2.05E-02
θ = θ4:
  ω = 0.01:  4.87E-07  2.55E-04  0.00E+00  0.00E+00  7.67E-05  1.04E-04  7.21E-05  2.00E-04  0.00E+00  9.62E-03
  ω = 0.05:  1.33E-05  2.55E-04  5.31E-08  0.00E+00  7.86E-05  1.19E-04  9.82E-05  2.00E-04  0.00E+00  1.05E-02
  ω = 0.25:  1.43E-04  2.55E-04  2.07E-06  0.00E+00  8.09E-05  1.85E-04  1.65E-04  2.00E-04  2.46E-08  1.38E-02
  ω = 1.25:  3.63E-03  2.55E-04  4.64E-05  0.00E+00  8.94E-05  3.54E-04  2.94E-04  2.00E-04  7.25E-07  1.70E-02
  ω = 6.25:  1.33E-01  2.56E-04  7.94E-04  2.71E-08  1.14E-04  7.15E-04  1.01E-03  2.00E-04  1.59E-05  2.17E-02
θ = θ5:
  ω = 0.01:  5.09E-07  2.55E-04  0.00E+00  0.00E+00  7.67E-05  1.00E-04  6.74E-05  2.00E-04  0.00E+00  8.83E-03
  ω = 0.05:  1.06E-05  2.55E-04  6.13E-08  0.00E+00  7.91E-05  1.29E-04  1.07E-04  2.00E-04  0.00E+00  1.15E-02
  ω = 0.25:  1.92E-04  2.55E-04  1.77E-06  0.00E+00  8.25E-05  1.81E-04  1.69E-04  2.00E-04  3.08E-08  1.32E-02
  ω = 1.25:  3.39E-03  2.55E-04  5.49E-05  0.00E+00  8.97E-05  3.12E-04  2.61E-04  2.00E-04  7.85E-07  1.53E-02
  ω = 6.25:  1.06E-01  2.56E-04  1.28E-03  8.83E-08  1.24E-04  9.45E-04  1.05E-03  2.00E-04  1.54E-05  1.99E-02
Table 7: Performance of AHO on 5d, 10d, 15d, and 20d functions, averaged over 30 independent runs. Columns: F1–F10 ("–" marks functions not reported for d = 5).

d = 5 (θ = θ1, ω = 0.01):
  Worst   8.54E-08  6.36E-05  0.00E+00  0.00E+00  3.70E-05  –  –  5.59E-05  4.58E-08  2.95E-03
  Best    0.00E+00  6.36E-05  0.00E+00  0.00E+00  1.48E-05  –  –  5.59E-05  0.00E+00  7.61E-04
  Median  0.00E+00  6.36E-05  0.00E+00  0.00E+00  2.27E-05  –  –  5.59E-05  2.62E-08  1.71E-03
  Mean    0.00E+00  6.36E-05  0.00E+00  0.00E+00  1.94E-05  –  –  5.59E-05  2.04E-08  1.46E-03
  Std     0.00E+00  6.36E-05  0.00E+00  0.00E+00  1.94E-05  –  –  5.59E-05  2.04E-08  1.46E-03

d = 10 (θ = θ5, ω = 0.01):
  Worst   7.12E-07  1.27E-04  0.00E+00  0.00E+00  3.85E-05  6.59E-05  6.15E-05  1.09E-04  0.00E+00  5.67E-03
  Best    1.32E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.13E-05  2.95E-05  1.09E-04  0.00E+00  2.04E-03
  Median  1.33E-07  1.27E-04  0.00E+00  0.00E+00  3.82E-05  5.05E-05  4.40E-05  1.09E-04  0.00E+00  3.64E-03
  Mean    1.36E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.58E-05  3.83E-05  1.09E-04  0.00E+00  3.37E-03
  Std     1.36E-08  1.27E-04  0.00E+00  0.00E+00  3.82E-05  4.58E-05  3.83E-05  1.09E-04  0.00E+00  3.37E-03

d = 15 (θ = θ4, ω = 0.01):
  Worst   8.70E-07  1.91E-04  0.00E+00  0.00E+00  6.37E-05  6.62E-05  7.69E-05  1.52E-04  0.00E+00  1.34E-02
  Best    8.20E-08  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.37E-05  5.07E-05  1.52E-04  0.00E+00  7.33E-03
  Median  3.05E-07  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.88E-05  6.12E-05  1.52E-04  0.00E+00  9.74E-03
  Mean    8.59E-08  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.64E-05  5.49E-05  1.52E-04  0.00E+00  9.07E-03
  Std     8.59E-08  1.91E-04  0.00E+00  0.00E+00  6.36E-05  5.64E-05  5.49E-05  1.52E-04  0.00E+00  9.07E-03

d = 20 (θ = θ5, ω = 0.01):
  Worst   4.91E-06  2.55E-04  1.20E-08  0.00E+00  7.81E-05  1.34E-04  1.05E-04  2.00E-04  0.00E+00  1.40E-02
  Best    4.16E-07  2.55E-04  0.00E+00  0.00E+00  7.65E-05  9.20E-05  5.96E-05  2.00E-04  0.00E+00  6.20E-03
  Median  1.30E-06  2.55E-04  0.00E+00  0.00E+00  7.70E-05  1.14E-04  8.00E-05  2.00E-04  0.00E+00  9.90E-03
  Mean    5.09E-07  2.55E-04  0.00E+00  0.00E+00  7.67E-05  1.00E-04  6.74E-05  2.00E-04  0.00E+00  8.83E-03
  Std     5.09E-07  2.55E-04  0.00E+00  0.00E+00  7.67E-05  1.00E-04  6.74E-05  2.00E-04  0.00E+00  8.83E-03
In conclusion, we can say that:

• From function F1, AHO has a good exploitation of the search space.
• From functions F2, F3, and F4, AHO has a good exploration of the search space and avoids getting trapped in local optima.
• From functions F5, F6, F7, F8, F9, and F10, AHO has a good balance between the exploration and exploitation phases.

The performance of the Archerfish Hunting Optimizer is compared to 12 well-established metaheuristic algorithms: CSsin, MP-EEH, RASPSHADE, IMODE, DISH-XX, AGSK, j2020, jDE100e, OLSHADE, mpmLSHADE, SOMA-CL, and the gaining-sharing knowledge based algorithm (GSK) [128]. Each of Tables 8–23 reports, for functions F1–F10, the mean and STD of AHO and, for each compared algorithm, its mean, STD, and the corresponding p-value (P). The numerical results are reported in:
i) Tables 8, 9, 10, and 11 for d = 5; ii) Tables 12, 13, 14, and 15 for d = 10; iii) Tables 16, 17, 18, and 19 for d = 15; and iv) Tables 20, 21, 22, and 23 for d = 20.
Table 8: Statistical results of AHO in comparison to CSsin, MP-EEH, and RASPSHADE (d = 5).

Table 9: Statistical results of AHO in comparison to IMODE, DISH-XX, and AGSK (d = 5).

We perform the Wilcoxon signed-rank test [129] to establish whether AHO is better or worse than CSsin, MP-EEH, RASPSHADE, IMODE, DISH-XX, AGSK, j2020, jDE100e, OLSHADE, mpmLSHADE, SOMA-CL, and GSK in all dimensions. We consider the one-sided test with α = 0.05. For each dimension, the values used for the controlling parameters of AHO are those reported in Table 7. For the algorithms selected for the comparative study, the parameters' values are the same as the recommended settings in their original works. Tables 24, 25, 26, and 27 summarize the results of the Wilcoxon signed-rank test.
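The same test is available off the shelf; a hedged sketch with SciPy, pairing the per-function results of AHO and one competitor (the array names are illustrative, not from the paper):

```python
from scipy.stats import wilcoxon

def aho_is_better(aho_errors, rival_errors, alpha=0.05):
    # One-sided Wilcoxon signed-rank test on paired per-function errors;
    # alternative="less" asks whether AHO's errors are systematically smaller.
    stat, p = wilcoxon(aho_errors, rival_errors, alternative="less")
    return p < alpha
```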
From Tables 24, 25, 26, and 27, AHO successfully outperforms all the metaheuristic algorithms considered for the comparative study in all dimensions over 30 independent runs, except for the dimension d = 5, where IMODE outperforms AHO. We observe that the exploitation of AHO on most test functions is dominant, leading to faster convergence to the global optimum, because the exploitation phase is strengthened by Equations 4 and 5.

Table 10: Statistical results of AHO in comparison to j2020, jDE100e, and OLSHADE (d = 5).

Table 11: Statistical results of AHO in comparison to mpmLSHADE, SOMA-CL, and GSK (d = 5).

Table 12: Statistical results of AHO in comparison to CSsin, MP-EEH, and RASPSHADE (d = 10).
Table 13: Statistical results of AHO in comparison to IMODE, DISH-XX, and AGSK (d = 10).

Table 14: Statistical results of AHO in comparison to j2020, jDE100e, and OLSHADE (d = 10).

Table 15: Statistical results of AHO in comparison to mpmLSHADE, SOMA-CL, and GSK (d = 10).

Table 16: Statistical results of AHO in comparison to CSsin, MP-EEH, and RASPSHADE (d = 15).

Table 17: Statistical results of AHO in comparison to IMODE, DISH-XX, and AGSK (d = 15).

Table 18: Statistical results of AHO in comparison to j2020, jDE100e, and OLSHADE (d = 15).

Table 19: Statistical results of AHO in comparison to mpmLSHADE, SOMA-CL, and GSK (d = 15).

Table 20: Statistical results of AHO in comparison to CSsin, MP-EEH, and RASPSHADE (d = 20).

Table 21: Statistical results of AHO in comparison to IMODE, DISH-XX, and AGSK (d = 20).

Table 22: Statistical results of AHO in comparison to j2020, jDE100e, and OLSHADE (d = 20).

Table 23: Statistical results of AHO in comparison to mpmLSHADE, SOMA-CL, and GSK (d = 20).

Table 24: The Wilcoxon signed-rank test for d = 5.

Comparison          k   W+  W-  min(W+, W-)  Critical value  Result
AHO vs. CSsin       7   0   28  0            4               4 > 0, AHO outperforms CSsin
AHO vs. MP-EEH      7   1   27  1            4               4 > 1, AHO outperforms MP-EEH
AHO vs. RASPSHADE   7   3   25  3            4               4 > 3, AHO outperforms RASPSHADE
AHO vs. IMODE       5   6   9   6            1               1 < 6, IMODE outperforms AHO
AHO vs. DISH-XX     7   0   28  0            4               4 > 0, AHO outperforms DISH-XX
AHO vs. AGSK        7   3   25  3            4               4 > 3, AHO outperforms AGSK
AHO vs. j2020       7   0   28  0            4               4 > 0, AHO outperforms j2020
AHO vs. jDE100e     7   0   28  0            4               4 > 0, AHO outperforms jDE100e
AHO vs. OLSHADE     7   3   25  3            4               4 > 3, AHO outperforms OLSHADE
AHO vs. mpmLSHADE   7   1   27  1            4               4 > 1, AHO outperforms mpmLSHADE
AHO vs. SOMA-CL     8   0   36  0            6               6 > 0, AHO outperforms SOMA-CL
AHO vs. GSK         7   0   28  0            4               4 > 0, AHO outperforms GSK
Table 25: The Wilcoxon signed-rank test for d = 10.

Comparison          k    W+  W-  min(W+, W-)  Critical value  Result
AHO vs. CSsin       9    2   43  2            8               8 > 2, AHO outperforms CSsin
AHO vs. MP-EEH      10   1   54  1            11              11 > 1, AHO outperforms MP-EEH
AHO vs. RASPSHADE   9    3   42  3            8               8 > 3, AHO outperforms RASPSHADE
AHO vs. IMODE       9    4   41  4            8               8 > 4, AHO outperforms IMODE
AHO vs. DISH-XX     10   3   52  3            11              11 > 3, AHO outperforms DISH-XX
AHO vs. AGSK        10   1   54  1            11              11 > 1, AHO outperforms AGSK
AHO vs. j2020       10   1   54  1            11              11 > 1, AHO outperforms j2020
AHO vs. jDE100e     10   1   54  1            11              11 > 1, AHO outperforms jDE100e
AHO vs. OLSHADE     9    6   39  6            8               8 > 6, AHO outperforms OLSHADE
AHO vs. mpmLSHADE   10   1   54  1            11              11 > 1, AHO outperforms mpmLSHADE
AHO vs. SOMA-CL     10   1   54  1            11              11 > 1, AHO outperforms SOMA-CL
AHO vs. GSK         10   1   54  1            11              11 > 1, AHO outperforms GSK
Table 26: The Wilcoxon signed-rank test for d = 15.

Comparison          k    W+  W-  min(W+, W-)  Critical value  Result
AHO vs. CSsin       9    2   43  2            8               8 > 2, AHO outperforms CSsin
AHO vs. MP-EEH      10   2   53  2            11              11 > 2, AHO outperforms MP-EEH
AHO vs. RASPSHADE   10   6   49  6            11              11 > 6, AHO outperforms RASPSHADE
AHO vs. IMODE       9    3   42  3            8               8 > 3, AHO outperforms IMODE
AHO vs. DISH-XX     9    6   39  6            8               8 > 6, AHO outperforms DISH-XX
AHO vs. AGSK        10   3   52  3            11              11 > 3, AHO outperforms AGSK
AHO vs. j2020       10   1   54  1            11              11 > 1, AHO outperforms j2020
AHO vs. jDE100e     10   6   49  6            11              11 > 6, AHO outperforms jDE100e
AHO vs. OLSHADE     9    6   39  6            8               8 > 6, AHO outperforms OLSHADE
AHO vs. mpmLSHADE   10   9   46  9            11              11 > 9, AHO outperforms mpmLSHADE
AHO vs. SOMA-CL     10   3   52  3            11              11 > 3, AHO outperforms SOMA-CL
AHO vs. GSK         10   6   49  6            11              11 > 6, AHO outperforms GSK

Figure 5: Convergence curves of AHO for the test functions of CEC 2020 in 20d, averaged over 30 runs (panels (a)–(j) correspond to F1–F10).
Table 27: The Wilcoxon signed-rank test for d = 20.

Comparison          k    W+  W-  min(W+, W-)  Critical value  Result
AHO vs. CSsin       9    0   45  0            8               8 > 0, AHO outperforms CSsin
AHO vs. MP-EEH      10   1   54  1            11              11 > 1, AHO outperforms MP-EEH
AHO vs. RASPSHADE   10   6   49  6            11              11 > 6, AHO outperforms RASPSHADE
AHO vs. IMODE       9    1   44  1            8               8 > 1, AHO outperforms IMODE
AHO vs. DISH-XX     10   3   52  3            11              11 > 3, AHO outperforms DISH-XX
AHO vs. AGSK        9    2   43  2            8               8 > 2, AHO outperforms AGSK
AHO vs. j2020       10   1   54  1            11              11 > 1, AHO outperforms j2020
AHO vs. jDE100e     10   3   52  3            11              11 > 3, AHO outperforms jDE100e
AHO vs. OLSHADE     10   3   52  3            11              11 > 3, AHO outperforms OLSHADE
AHO vs. mpmLSHADE   10   9   46  9            11              11 > 9, AHO outperforms mpmLSHADE
AHO vs. SOMA-CL     10   1   54  1            11              11 > 1, AHO outperforms SOMA-CL
AHO vs. GSK         10   4   51  4            11              11 > 4, AHO outperforms GSK
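The decision rule applied in Tables 24–27 can also be reproduced directly from the reported rank sums: the test statistic is min(W+, W−), and H0 (equal performance) is rejected when the tabulated critical value exceeds it. A one-line sketch:

```python
def signed_rank_decision(w_plus, w_minus, critical):
    # Reject H0 when the critical value exceeds min(W+, W-).
    return critical > min(w_plus, w_minus)

# Example from Table 24 (AHO vs. CSsin, d = 5): W+ = 0, W- = 28, critical = 4.
assert signed_rank_decision(0, 28, 4)   # significant: AHO outperforms CSsin
```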
Table 28: Characteristics of the considered constrained engineering design problems.

Problem  Name                                         D  g   h  f(x*)
RC15     Weight minimization of a speed reducer       7  11  0  2.9944244658E+03
RC18     Pressure vessel design                       4  4   0  5.8853327736E+03
RC19     Welded beam design                           4  5   0  1.6702177263E+00
RC17     Tension/compression spring design (case 1)   3  3   0  1.2665232788E-02
RC21     Multiple disk clutch brake design problem    5  7   0  2.3524245790E-01

D: the number of decision variables of the problem. g: the number of inequality constraints. h: the number of equality constraints. f(x*): the best known feasible objective function value.
5. Experimental results on constrained optimization problems
We consider five constrained engineering design problems [103] to evaluate AHO's performance. Their characteristics are reported in Table 28, while the complete mathematical formulations of these problems are provided in the following subsections.

All the experiments were run using the Java programming language on a workstation with Windows 10 Family Edition (64-bit). The processor is an Intel(R) Core(TM) i7-9750H CPU @ 2.60 GHz, with 16 GB of RAM. The dimension of the search space (D) of each problem is given in Table 28. The population size (N) is obtained by truncating a fixed power function of D (the term ⌊x⌋ returns the truncation of the real number x). For each problem, the maximum number of iterations (Iter_max) is set to a fixed multiple of N. The range of allowed values for each decision variable is given in the following subsections. All the results are averaged over 25 independent runs. The swapping angle (θ) is set to a fixed fraction of π, and the attractiveness rate (ω) is estimated to 0.01. For comparative purposes, the results of 3 well-established metaheuristic algorithms from the literature are presented to reach a conclusive decision on AHO's performance. The following metaheuristic algorithms are used for the comparative study.

• An improved unified differential evolution algorithm for constrained optimization problems (IUDE) [130].
• A matrix adaptation evolution strategy for constrained real-parameter optimization (εMAgES) [131].
• LSHADE44 with an improved ε constraint-handling method for solving constrained single-objective optimization problems (iLSHADEε) [132].

Table 29 summarizes the performance of AHO on the five challenging engineering problems; in addition, the statistical results of AHO are compared to those of IUDE, εMAgES, and iLSHADEε, three well-established metaheuristic algorithms [103]. The statistical results over 25 independent runs are shown in terms of the best value, the median, the mean, the worst value, and the standard deviation for each optimizer. The following criteria are used to assess the difficulty level of the considered problems.

Table 29: Results of the considered mechanical engineering problems (RC15, RC18, RC19, RC17, and RC21) using IUDE, εMAgES, iLSHADEε, and AHO, in terms of Best, Median, Mean, Worst, STD, FR, MV, and SR.
• Feasibility Rate (FR): the ratio of the number of runs in which at least one feasible solution is found within MaxFEs to the total number of runs.
• Mean Violation (MV): the mean constraint violation of the reported solution, computed using Equation 10, where p is the number of inequality constraints and m the total number of constraints:

$$MV = \frac{\sum_{i=1}^{p} \max(g_i(x), 0) + \sum_{i=p+1}^{m} \max(|h_i(x)| - 10^{-4}, 0)}{m} \qquad (10)$$

• Success Rate (SR): the ratio of the total number of runs in which an algorithm reaches a feasible solution x satisfying f(x) − f(x*) ≤ 10⁻⁸ within MaxFEs to the total number of runs.

To compare the performance of IUDE, εMAgES, iLSHADEε, and AHO on the considered constrained engineering design problems, we used the ranking scheme proposed in [103]. Table 30 gives the ranking of IUDE, εMAgES, iLSHADEε, and AHO based on the performance measure (PM) proposed in [103]. We observe that εMAgES and iLSHADEε share the same rank, which means their performance is equivalent. On the other hand, AHO outperforms only IUDE.

Table 30: PM values of IUDE, εMAgES, iLSHADEε, and AHO on the considered problems.

Algorithm   PM      Rank
IUDE        0.0119  4
εMAgES      0.0116  1.5
iLSHADEε    –       1.5
AHO         –       3

5.1. Weight minimization of a speed reducer (RC15)

This problem requires the design of a speed reducer for a miniature aircraft engine. The optimization problem has the following form.

Minimize:
$$f(x) = 0.7854\,x_1 x_2^2 (3.3333\,x_3^2 + 14.9334\,x_3 - 43.0934) + 0.7854\,(x_4 x_6^2 + x_5 x_7^2) - 1.508\,x_1 (x_6^2 + x_7^2) + 7.4777\,(x_6^3 + x_7^3)$$

Subject to:
g1(x) = −x1 x2² x3 + 27 ≤ 0
g2(x) = −x1 x2² x3² + 397.5 ≤ 0
g3(x) = −x2 x3 x6⁴ x4⁻³ + 1.93 ≤ 0
g4(x) = −x2 x3 x7⁴ x5⁻³ + 1.93 ≤ 0
g5(x) = 10 x6⁻³ √(16.91 × 10⁶ + (745 x4 x2⁻¹ x3⁻¹)²) − 1100 ≤ 0
g6(x) = 10 x7⁻³ √(157.5 × 10⁶ + (745 x5 x2⁻¹ x3⁻¹)²) − 850 ≤ 0
g7(x) = x2 x3 − 40 ≤ 0
g8(x) = −x1 x2⁻¹ + 5 ≤ 0
g9(x) = x1 x2⁻¹ − 12 ≤ 0
g10(x) = 1.5 x6 − x4 + 1.9 ≤ 0
g11(x) = 1.1 x7 − x5 + 1.9 ≤ 0

With bounds: 2.6 ≤ x1 ≤ 3.6, 0.7 ≤ x2 ≤ 0.8, 17 ≤ x3 ≤ 28, 7.3 ≤ x4 ≤ 8.3, 7.3 ≤ x5 ≤ 8.3, 2.9 ≤ x6 ≤ 3.9, 5.0 ≤ x7 ≤ 5.5.

5.2. Pressure vessel design (RC18)

The objective of this problem is to optimize the welding cost, material, and forming of a vessel. This problem has four constraints to be satisfied, and four variables are used to compute the objective function value: the shell thickness (z1), the head thickness (z2), the inner radius (x3), and the length of the vessel without including the head (x4). This problem can be formulated as follows.

Minimize:
$$f(x) = 0.6224\,z_1 x_3 x_4 + 1.7781\,z_2 x_3^2 + 3.1661\,z_1^2 x_4 + 19.84\,z_1^2 x_3$$

Subject to:
g1(x) = 0.0193 x3 ≤ z1
g2(x) = 0.00954 x3 ≤ z2
g3(x) = x4 ≤ 240
g4(x) = −π x3² x4 − (4/3) π x3³ ≤ −1,296,000
Where: z1 = 0.0625 x1 and z2 = 0.0625 x2.

With bounds: x1 ∈ {1, ..., 99}, x2 ∈ {1, ..., 99}, 10 ≤ x3 ≤ 200, 10 ≤ x4 ≤ 200.

5.3. Welded beam design (RC19)

The objective of this problem is to design a welded beam with minimum cost. This problem has five constraints and four variables. The mathematical description of this problem can be defined as follows.

Minimize:
$$f(x) = 0.04811\,x_3 x_4 (x_2 + 14) + 1.10471\,x_1^2 x_2$$

Subject to:
g1(x) = x1 − x4 ≤ 0
g2(x) = δ(x) − δmax ≤ 0
g3(x) = P ≤ Pc(x)
g4(x) = τmax ≥ τ(x)
g5(x) = σ(x) − σmax ≤ 0

where:
τ(x) = √(τ'² + τ''² + τ' τ'' x2 / R)
τ'' = M R / J
τ' = P / (√2 x1 x2)
M = P (L + x2/2)
R = √(x2²/4 + ((x1 + x3)/2)²)
J = 2 (√2 x1 x2 (x2²/12 + ((x1 + x3)/2)²))
σ(x) = 6 P L / (x4 x3²)
δ(x) = 4 P L³ / (E x3³ x4)
Pc(x) = (4.013 E x3 x4³ / (6 L²)) (1 − (x3 / (2L)) √(E / (4G)))
L = 14 in, P = 6000 lb, E = 30 × 10⁶ psi, σmax = 30,000 psi, τmax = 13,600 psi, G = 12 × 10⁶ psi, δmax = 0.25 in

With bounds: 0.125 ≤ x1 ≤ 2, 0.1 ≤ x2 ≤ 10, 0.1 ≤ x3 ≤ 10, 0.1 ≤ x4 ≤ 2.

5.4. Tension/compression spring design (RC17)

The objective of this problem is to optimize the weight of a tension or compression spring. This problem contains four constraints and three variables: the diameter of the wire (x1), the mean diameter of the coil (x2), and the number of active coils (x3). This problem is defined in the following way.

Minimize:
$$f(x) = x_1^2 x_2 (x_3 + 2)$$

Subject to:
g1(x) = 1 − x2³ x3 / (71785 x1⁴) ≤ 0
g2(x) = (4 x2² − x1 x2) / (12566 (x2 x1³ − x1⁴)) + 1 / (5108 x1²) − 1 ≤ 0
g3(x) = 1 − 140.45 x1 / (x2² x3) ≤ 0
g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0

With bounds: 0.05 ≤ x1 ≤ 2, 0.25 ≤ x2 ≤ 1.3, 2 ≤ x3 ≤ 15.

5.5. Multiple disk clutch brake design (RC21)

The objective of this problem is to minimize the mass of a multiple disk clutch brake. This problem has five integer decision variables: the inner radius (x1), the outer radius (x2), the disk thickness (x3), the force of the actuators (x4), and the number of frictional surfaces (x5). This problem contains 8 non-linear constraints. The problem can be defined as follows.

Minimize:
$$f(x) = \pi (x_2^2 - x_1^2)\, x_3 (x_5 + 1)\, \rho$$

Subject to:
g1(x) = −pmax + prz ≤ 0
g2(x) = prz Vsr − Vsr,max pmax ≤ 0
g3(x) = ΔR + x1 − x2 ≤ 0
g4(x) = −Lmax + (x5 + 1)(x3 + δ) ≤ 0
g5(x) = s Ms − Mh ≤ 0
g6(x) = T ≥ 0
g7(x) = −Vsr,max + Vsr ≤ 0
g8(x) = T − Tmax ≤ 0

where:
Mh = (2/3) μ x4 x5 (x2³ − x1³) / (x2² − x1²) N·mm
ω = π n / 30 rad/s
A = π (x2² − x1²) mm²
prz = x4 / A N/mm²
Vsr = π Rsr n / 30 mm/s
Rsr = (2/3) (x2³ − x1³) / (x2² − x1²) mm
T = Iz ω / (Mh + Mf)
ΔR = 20 mm, Lmax = 30 mm, μ = 0.5, Vsr,max = 10 m/s, δ = 0.5 mm, s = 1.5, Tmax = 15 s, n = 250 rpm, Iz = 55 kg·m², Ms = 40 Nm, Mf = 3 Nm, pmax = 1 MPa, ρ = 0.0000078 kg/mm³

With bounds: x1 ∈ {60, ..., 80}, x2 ∈ {90, ..., 110}, x3 ∈ {1, ..., 3}, x4 ∈ {600, ..., 1000}, x5 ∈ {2, ..., 9}.
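As an illustration of how the problems above can be fed to AHO, the following sketch encodes the tension/compression spring design (RC17), computes the mean violation of Equation 10 (the 10⁻⁴ equality tolerance follows the convention of [103]), and wraps the objective with a static penalty. The paper does not state which constraint-handling scheme AHO uses, so the penalty weight `rho` is an assumption of ours, named plainly as one common choice.

```python
import numpy as np

def spring_objective(x):
    # RC17 objective: f(x) = x1^2 * x2 * (x3 + 2).
    x1, x2, x3 = x
    return x1 ** 2 * x2 * (x3 + 2)

def spring_constraints(x):
    # RC17 inequality constraints, each g_i(x) <= 0 when satisfied.
    x1, x2, x3 = x
    return np.array([
        1 - x2 ** 3 * x3 / (71785 * x1 ** 4),
        (4 * x2 ** 2 - x1 * x2) / (12566 * (x2 * x1 ** 3 - x1 ** 4))
            + 1 / (5108 * x1 ** 2) - 1,
        1 - 140.45 * x1 / (x2 ** 2 * x3),
        (x1 + x2) / 1.5 - 1,
    ])

def mean_violation(g_values, h_values=(), tol=1e-4):
    # Eq. 10: mean constraint violation; g_i(x) <= 0 are inequalities,
    # h_i(x) = 0 are equalities relaxed by the tolerance tol.
    g = np.maximum(np.asarray(g_values, dtype=float), 0.0)
    h = np.maximum(np.abs(np.asarray(h_values, dtype=float)) - tol, 0.0)
    m = g.size + h.size
    return (g.sum() + h.sum()) / m

def penalized_spring(x, rho=1e6):
    # Static-penalty wrapper so an unconstrained optimizer can be applied.
    violation = np.maximum(spring_constraints(x), 0.0)
    return spring_objective(x) + rho * np.sum(violation ** 2)
```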
6. Conclusion and future work
In this paper, a new population-based optimization algorithm termed the Archerfish Hunting Optimizer (AHO) is introduced to handle constrained and unconstrained optimization problems. AHO is founded on the shooting and jumping behaviors of archerfish when hunting aerial insects in nature. Several equations are outlined to model the hunting behavior of archerfish for solving optimization problems. Ten unconstrained optimization problems are used to assess the performance of AHO. The exploration, exploitation, and local-optima-avoidance capacities are examined using unimodal, basic, hybrid, and composition functions. The statistical results of the Wilcoxon signed-rank and the Friedman tests confirm that AHO can find superior solutions in comparison with 12 well-acknowledged optimizers. AHO's numerical outcomes on five constrained engineering design problems also show that AHO achieves competitive results compared to 3 well-regarded optimizers.

AHO is straightforward to explain, with uncomplicated exploration and exploitation techniques. It is desirable to introduce other evolutionary schemes, such as mutation, crossover, or multi-swarm, which we plan to do in the future. In addition, we plan to develop binary and multi-objective versions of AHO.
Acknowledgement
This research work is supported by UAEU Grant 31T102-UPAR-1-2017. We appreciate the constructive comments of the anonymous reviewers.