An accelerated CLPSO algorithm
Muhammad Omer Bin Saeed, Muhammad Saqib Sohail, Syed Zeeshan Rizvi, Mobien Shoaib, Asrar Ul Haq Sheikh
The particle swarm approach provides a low-complexity solution to the optimization problem among various existing heuristic algorithms. Recent advances in the algorithm have resulted in improved performance at the cost of increased computational complexity, which is undesirable. The literature shows that the particle swarm optimization algorithm based on comprehensive learning provides the best complexity-performance trade-off. We show how to reduce the complexity of this algorithm further, with a slight but acceptable performance loss. This enhancement allows the application of the algorithm in time-critical applications, such as real-time tracking and equalization.
Introduction:
The particle swarm optimization (PSO) algorithm was introduced in 1995 by Kennedy and Eberhart [1, 2]. It is a population-based optimization algorithm, emulating swarm behavior as observed in a herd of animals, a flock of birds or a school of fish. For function optimization purposes, the swarm consists of particles, hence the name particle swarm optimization. Each particle searches for a potential solution in a multi-dimensional search space. The aim of the swarm is to converge to an optimum solution, global to the whole swarm. The estimate for each particle is tested using a certain fitness value that defines the goodness of the estimate. Every particle combines its own best attained solution with the best solution of the whole swarm to adapt its search pattern.

Several variants have been suggested in the literature to improve the performance of the PSO algorithm. Each variant has its own advantages and disadvantages. Among the most popular variants, the comprehensive learning PSO (CLPSO) and orthogonal learning PSO (OLPSO) algorithms provide the best performance over a wide range of test functions [3, 4]. Results in [3] and [4] show the superiority of these two algorithms over the other variants. However, both of these algorithms are computationally very complex. In general, the CLPSO variant provides the best performance-complexity trade-off among all existing PSO variants. Still, it is too complex for many applications, such as channel equalization and radar detection. It also tends to converge slowly as it attempts to provide highly accurate results. We note that most applications do not require such a high level of accuracy. Here, we propose a fast-converging, low-complexity solution that achieves a reasonably good level of accuracy. The cost is a slight degradation in performance, which is still highly acceptable for the aforementioned applications.
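The update pattern described above can be illustrated with a minimal sketch (hypothetical Python/NumPy; the parameter values, bounds and the sphere objective are our choices for illustration, not taken from the letter):

```python
import numpy as np

def pso_sphere(n_particles=40, n_dims=30, n_iters=1000, c1=2.0, c2=2.0,
               v_max=1.0, x_bound=5.0, seed=0):
    """Minimal global-best PSO minimizing the sphere function sum(x**2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-x_bound, x_bound, (n_particles, n_dims))  # positions
    v = np.zeros((n_particles, n_dims))                        # velocities
    p_best = x.copy()                                          # personal bests
    p_fit = np.sum(p_best**2, axis=1)                          # their fitness
    g_best = p_best[np.argmin(p_fit)].copy()                   # global best
    for _ in range(n_iters):
        r1 = rng.random((n_particles, n_dims))
        r2 = rng.random((n_particles, n_dims))
        # velocity update: cognitive pull to p_best, social pull to g_best
        v = v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        v = np.clip(v, -v_max, v_max)          # bound the velocity
        x = np.clip(x + v, -x_bound, x_bound)  # bound the position
        fit = np.sum(x**2, axis=1)
        improved = fit < p_fit                 # refresh personal bests
        p_best[improved], p_fit[improved] = x[improved], fit[improved]
        g_best = p_best[np.argmin(p_fit)].copy()
    return g_best, float(np.min(p_fit))

best, best_fit = pso_sphere()
print(best_fit)  # far below the fitness of a random initial particle
```

The personal-best and global-best pulls in the velocity line are exactly the "own best" and "swarm best" terms discussed above; everything else is bookkeeping.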
Comprehensive Learning Particle Swarm Optimizer (CLPSO):
Consider a $D$-dimensional hyperspace in which a swarm of $N$ particles is trying to find the optimum solution, given by the position vector $\mathbf{X} = [x^1, x^2, \cdots, x^D]^T$. The individual position of each particle $k$ is denoted by the $D$-dimensional vector $\mathbf{X}_k = [x_k^1, x_k^2, \cdots, x_k^D]^T$. Similarly, the velocity vector of each particle $k$ is given by $\mathbf{V}_k = [v_k^1, v_k^2, \cdots, v_k^D]^T$. Each particle updates its position based on its own best recorded position as well as the best recorded position of the whole swarm. Let the best individual position vector of each particle and the best global position vector be denoted by $\mathbf{P}_k = [p_k^1, p_k^2, \cdots, p_k^D]^T$ and $\mathbf{G} = [g^1, g^2, \cdots, g^D]^T$, respectively. Then the velocity update equation for the PSO algorithm is given by:

$$v_k^d(i+1) = v_k^d(i) + c_1 r_1 \left( p_k^d - x_k^d(i) \right) + c_2 r_2 \left( g^d - x_k^d(i) \right), \qquad (1)$$

where $k = 1, \cdots, N$ is the particle index, $d = 1, \cdots, D$ is the dimension index, $i$ denotes the time index, $c_1$ and $c_2$ are positive constants known as the acceleration coefficients, and $r_1$ and $r_2$ are uniformly distributed random numbers within the range $[0, 1]$. The global best is generated from the neighborhood of each particle. In general, the velocity and the position for each particle are bounded within predefined limits, $[-V_{max}, V_{max}]$ and $[X_{min}, X_{max}]$, respectively. The bounds ensure that the particles do not diverge from the search hyperspace.

The CLPSO algorithm divides the unknown estimate into sub-vectors [3]. For each sub-vector, a particle chooses two random neighbor particles. The neighbor particle that gives the best fitness value for that particular sub-vector is chosen as an exemplar. The combined result of all sub-vectors gives the overall best vector, which is then used to perform the update.
If the particle stops improving for a certain number of iterations, then the neighbor particles for the sub-vectors are changed. The velocity update equation for the CLPSO algorithm is given by [3]:

$$v_k^d(i+1) = w_k(i)\, v_k^d(i) + c_k r_k \left( \hat{p}_k^d - x_k^d(i) \right), \qquad (2)$$

where $w_k(i)$ is a time-varying weighting coefficient, $\hat{p}_k^d$ gives the overall best position value for dimension $d$ of particle $k$, $c_k$ is the acceleration coefficient and $r_k$ is a uniformly distributed random number. The overall CLPSO algorithm is thus defined by the following set of equations [3]:

$$\mathbf{V}_k(i+1) = w_k(i)\, \mathbf{V}_k(i) + c_k r_k \left( \hat{\mathbf{P}}_k - \mathbf{X}_k(i) \right), \qquad (3)$$

$$\mathbf{X}_k(i+1) = \mathbf{X}_k(i) + \mathbf{V}_k(i+1). \qquad (4)$$

The Accelerated CLPSO Algorithm:
An event-triggering approach is applied to the CLPSO algorithm. The acceleration coefficient $c_k$ is set to $0$ if the distance value $\left| \hat{p}_k^d - x_k^d(i) \right| \leq \gamma$, where $\gamma$ is a certain threshold. This ensures that the particle stays close to the best obtained vector. However, each dimension is to be treated separately, so the acceleration coefficient $c_k$ is converted into a vector in which each individual value $c_k^d$ is set based on whether the distance value for that dimension is within the threshold or not. Thus, the acceleration coefficient is governed by the following equation:

$$c_k^d(i) = \begin{cases} 0, & \text{if } \left| \hat{p}_k^d - x_k^d(i) \right| \leq \gamma \\ C, & \text{otherwise} \end{cases} \qquad (5)$$

The above formulation stems from two observations. The first is that not all dimensions require an update at every iteration. The second is that not every iterative step contributes significantly towards an improved update. Nevertheless, the update equations are executed every time, resulting in extra computations. Our proposed step aims to remove the insignificant update steps, resulting in a faster algorithm. We term the proposed algorithm the accelerated CLPSO (ACLPSO) algorithm.

Test Functions:
Several test functions are available in the literature that can be used to test the performance of the different variants of the PSO algorithm [3, 4]. The algorithms aim to minimize the fitness value for these test functions. Due to the paucity of space, we choose only the five functions given in Table 1:
Table 1:
Test functions and their equations.
Sphere: $f_1(x) = \sum_{i=1}^{D} x_i^2$

Rosenbrock: $f_2(x) = \sum_{i=1}^{D-1} \left( 100\left(x_i^2 - x_{i+1}\right)^2 + \left(x_i - 1\right)^2 \right)$

Rastrigin: $f_3(x) = \sum_{i=1}^{D} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right)$

Griewank: $f_4(x) = \sum_{i=1}^{D} \frac{x_i^2}{4000} - \prod_{i=1}^{D} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$

Ackley: $f_5(x) = -20 \exp\left( -0.2 \sqrt{\tfrac{1}{D} \sum_{i=1}^{D} x_i^2} \right) - \exp\left( \tfrac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i) \right) + 20 + e$

Results and Discussion:
In this section, we compare the performance of the standard CLPSO algorithm with the newly proposed ACLPSO algorithm using the test functions defined above. The swarm has 40 particles, with each particle having 30 dimensions. All results are averaged over 200 experiments.

ELECTRONICS LETTERS Vol. 00 No. 00

Table 2 gives the results of the simulations. Two different values of $\gamma$ are chosen for detailed analysis. The table shows the mean fitness values for the CLPSO and ACLPSO algorithms when they run for the complete 5000 iterations. As can be seen from (3), there are a total of 3 multiplications for updating each dimension per particle. The first multiplication occurs every time for both algorithms. The remaining two multiplications occur in the proposed algorithm only if the threshold value is exceeded, whereas they always occur in the CLPSO algorithm. As a result, the number of computations is reduced. The %-age computations column lists the amount of computation the proposed algorithm requires relative to the standard CLPSO algorithm. The effective iterations column gives the equivalent iteration number at which the CLPSO algorithm has the same number of computations as the proposed algorithm. The effective mean is the fitness value of the CLPSO algorithm after the number of effective iterations. As can be seen, the proposed algorithm provides a very fast and low-complexity result for all test functions. The only trade-off is that the proposed algorithm does not achieve the same final value as the CLPSO algorithm. However, this trade-off is acceptable, as the applications that require fast convergence generally do not require accuracy of more than a few decimal places. This is the main focus of our work and, as can be seen clearly, the outcome is more than satisfactory.
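The multiplication-counting argument can be made concrete with a hypothetical sketch (Python/NumPy; the toy exemplar, weight, coefficient and threshold values are our assumptions for illustration, not the authors' code). It applies a per-dimension thresholded update in the style of (5) and reports the fraction of dimension updates that skip the two extra multiplications:

```python
import numpy as np

def aclpso_step(x, v, p_hat, w=0.7, c=1.5, gamma=0.1, rng=None):
    """One ACLPSO-style velocity/position update per eqs. (2) and (5).

    The per-dimension coefficient is zeroed wherever the particle is
    already within gamma of its exemplar p_hat, so those dimensions
    skip the two exemplar-related multiplications.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = np.abs(p_hat - x) > gamma   # eq. (5): coefficient is 0 inside threshold
    r = rng.random(x.shape)
    v = w * v                            # first multiplication: always performed
    v[active] += c * r[active] * (p_hat[active] - x[active])  # extra two, only if active
    x = x + v
    return x, v, active

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, (40, 30))     # 40 particles, 30 dimensions, as in the letter
v = np.zeros((40, 30))
p_hat = np.zeros((40, 30))           # toy exemplar: the origin
saved = []
for _ in range(100):
    x, v, active = aclpso_step(x, v, p_hat, gamma=0.1, rng=rng)
    saved.append(1.0 - active.mean())  # fraction of skipped dimension updates
print(f"avg fraction of skipped updates: {np.mean(saved):.2f}")
```

As the swarm closes in on the exemplar, more dimensions fall inside the threshold and the skipped fraction grows, which is the source of the computational saving reported in the %-age computations column.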
Table 2:
Simulation results.
Func. | $\gamma$ | CLPSO Actual Mean | ACLPSO Mean | ACLPSO %-age Comp. | CLPSO Eff. Iter. | CLPSO Eff. Mean
[Numeric table entries not legible in source.]
The simulation plots are shown in Figs. 1 and 2 for two of the results. These results corroborate the results given in the table and the accompanying discussion.
Fig. 1. Performance comparison for the Rosenbrock function with $\gamma = 1e-$.
Fig. 2. Performance comparison for the Ackley function with $\gamma = 1e-$.

Conclusion:
We have proposed an accelerated comprehensive learning PSO algorithm that provides a fast and low-complexity solution for time-critical applications. The simulation results show that the proposed algorithm converges faster than the standard CLPSO algorithm, with reduced computations and an acceptable degradation in performance.

M. O. Bin Saeed, M. S. Sohail and A. U. H. Sheikh (Department of Electrical Engineering, College of Engineering Sciences, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia)
E-mail: [email protected]
S. Z. Rizvi (College of Engineering, University of Georgia, Athens, GA 30602, USA)
M. Shoaib (Prince Sultan Advanced Technologies Research Institute, College of Engineering, King Saud University, Riyadh 11421, Saudi Arabia)

References
1 Eberhart, R.C. and Kennedy, J.: 'A new optimizer using particle swarm theory', Proc. 6th Int. Symp. Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39-43.
2 Kennedy, J. and Eberhart, R.C.: 'Particle swarm optimization', Proc. IEEE Int. Conf. Neural Networks, 1995, pp. 1942-1948.
3 Liang, J.J., Qin, A.K., Suganthan, P.N. and Baskar, S.: 'Comprehensive learning particle swarm optimizer for global optimization of multimodal functions', IEEE Trans. Evol. Comput., 2006, 10, (3), pp. 281-295.
4 Zhan, Z.-H., Zhang, J., Li, Y. and Shi, Y.-H.: 'Orthogonal learning particle swarm optimization', IEEE Trans. Evol. Comput., 2011, 15, (6), pp. 832-847.