
Grammatical Swarm Based-Adaptable Velocity Update Equations in Particle Swarm Optimizer

Tapas Si 1, Arunava De 2, and Anup Kumar Bhattacharjee 3

1 Department of Computer Science & Engineering, Bankura Unnayani Institute of Engineering, Bankura, W.B., India
[email protected]

2 Department of Information Technology, Dr. B.C. Roy Engineering College, Durgapur, W.B., India
[email protected]

3 Department of Electronics and Communication Engineering, National Institute of Technology, Durgapur, W.B., India
[email protected]

Abstract. In this work, a new method for creating diversity in Particle Swarm Optimization is devised. The key feature of this method is to derive the velocity update equation for each particle in the Particle Swarm Optimizer using the Grammatical Swarm algorithm. Grammatical Swarm is a Grammatical Evolution algorithm based on the Particle Swarm Optimizer. Each particle updates its position by updating its velocity. In the classical Particle Swarm Optimizer, the same velocity update equation for all particles is responsible for creating diversity in the population. The Particle Swarm Optimizer has quick convergence but suffers from premature convergence in local optima due to a lack of diversity. In the proposed method, different velocity update equations are evolved using Grammatical Swarm for each particle to create diversity in the population. The proposed method is applied to 8 well-known benchmark unconstrained optimization problems and compared with the Comprehensive Learning Particle Swarm Optimizer. Experimental results show that the proposed method performed better than the Comprehensive Learning Particle Swarm Optimizer.

Keywords: Particle Swarm Optimizer, Genetic Programming, Grammatical Evolution, Grammatical Swarm, Comprehensive Learning Particle Swarm Optimizer, Velocity update equations, Optimization.

1 Introduction

Particle Swarm Optimization (PSO) [1] was developed by Kennedy and Eberhart in 1995. PSO is a population-based global optimization algorithm of stochastic nature. One advantage of PSO is its fast convergence speed, but it suffers from premature convergence in local optima due to a lack of diversity. A great deal of research has already been done to solve this local optima problem by creating diversity in the population. Different variants of PSO, such as FIPSO [2] and CLPSO [3], have been developed to enhance the performance of PSO. Different mutation strategies such as Gaussian



mutation [4], Cauchy mutation [5], adaptive mutation [6], polynomial mutation [7,8,9], and differential mutation [10] have been employed in PSO to overcome the local optima problem. Here, a description of related works combining PSO with Genetic Programming (GP) [12] is given.

M. Rashid [13] proposed a GP-based adaptable PSO (PSOGP) in which every particle uses a different velocity update equation to modify its position in the swarm space in order to achieve high exploration. Each equation is a GP expression. T. Si [14,15] proposed Grammatical Differential Evolution (GDE) based adaptation in PSO (GDE-APSO), in which each particle adopts a different velocity update equation during the search process, resulting in more exploration of the search space.

In this work, each particle uses its own velocity update equation, evolved by Grammatical Swarm, with the objective of creating diversity in the population so that the local optima problem can be avoided.

The remainder of the paper is structured as follows: In Section 2, the classical PSO algorithm is described. Grammatical Swarm is described in Section 3. A detailed description of the proposed method is given in Section 4. The experimental setup is given in Section 5. Section 6 comprises the results and discussions. Finally, a conclusion with future work is given in Section 7.

2 Particle Swarm Optimization

Particle Swarm Optimization (PSO) [1] is a population-based global optimization algorithm of stochastic nature. Each individual in PSO is called a particle, and the set of particles is called a swarm. A particle has a position X_i and a velocity V_i, where i is the index of the particle. The position X_i is represented as <X_i1, X_i2, X_i3, ..., X_iD>, where D is the dimension of the problem to be optimized by the PSO. Each particle has its own memory to store the personal best X_i^{pbest} found so far. The best of all personal best solutions is called the global best X^{gbest} of the swarm. Each particle is accelerated by its velocity, and the velocity is updated by the following equation:

V_i(t+1) = W × V_i(t) + C_1 R_1 (X_i^{pbest}(t) - X_i(t)) + C_2 R_2 (X^{gbest}(t) - X_i(t))    (1)

and the position is updated by the following equation:

X_i(t+1) = X_i(t) + V_i(t+1)    (2)

In Eq. (1), W is the inertia weight in the range (0,1); C_1 and C_2 are the personal cognizance and social cognizance, respectively. R_1 and R_2 are two uniformly distributed random numbers in (0,1) used for diversification.

Y. Shi and R.C. Eberhart [11] introduced an inertia weight decreasing linearly with time in the range (W_min, W_max) = (0.4, 0.9). The corresponding equation is given below:

W = W_max - (W_max - W_min) × (t / t_max)    (3)
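For concreteness, the following is a minimal sketch of Eqs. (1)-(3) (written in Python for illustration, although the original experiments were run in Matlab; the array shapes and bound handling are assumptions):

```python
import numpy as np

def pso_step(X, V, pbest, gbest, t, t_max, v_max,
             c1=1.49445, c2=1.49445, w_max=0.9, w_min=0.4):
    """One classical PSO iteration following Eqs. (1)-(3).

    X, V, pbest are (NP, D) arrays; gbest is a (D,) array.
    """
    w = w_max - (w_max - w_min) * (t / t_max)   # Eq. (3): decreasing inertia
    r1 = np.random.rand(*X.shape)               # R1, R2 ~ U(0, 1)
    r2 = np.random.rand(*X.shape)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (1)
    V = np.clip(V, -v_max, v_max)               # velocity bound (see Section 4)
    X = X + V                                   # Eq. (2)
    return X, V
```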


3 Grammatical Swarm

The Grammatical Swarm (GS) [16] algorithm is Grammatical Evolution (GE) [17] based on PSO. GE is a variant of grammar-based Genetic Programming that can generate programs in an arbitrary language. GE uses a linear genome structure (a variable-length bit string) instead of the tree data structure of Genetic Programming (GP) [12]. The expressions in GS are evolved using PSO in the swarm space. In Grammatical Swarm, each particle's position represents a set of integer values (codons) in the range [0, 255]. The dimension of a particle is the number of codons used to derive an expression from a Backus-Naur Form (BNF) grammar. A particle's position represents the genotype, which is mapped to the phenotype (the derived expression whose fitness is evaluated).
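As an illustration of this representation (the continuous-to-codon rounding rule below is an assumption, since the paper does not state it explicitly):

```python
import numpy as np

def init_gs_swarm(num_particles=20, genome_length=50):
    """A GS particle's position is a genome of codons in [0, 255]."""
    return np.random.uniform(0, 255, size=(num_particles, genome_length))

def to_codons(position):
    """Round a continuous GS position into integer codons (assumed rule)."""
    return np.clip(np.rint(position), 0, 255).astype(int)
```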

4 Proposed Method

4.1 Algorithm

In the proposed method, "GS-adaptable PSO" refers to the evolution of a velocity update equation for each particle in PSO. Here, the search space of GS is denoted the Grammatical Swarm Space, in which each particle represents a genome containing a number of codons. The other Swarm Space denotes the problem's search space, where particles search for the solution of the given problem. Therefore, the proposed method GSPSO uses a Dual Swarm Space. The Grammatical Swarm Space is mapped to the Swarm Space (i.e., genotype-phenotype mapping). Consequently, the population sizes in the two swarm spaces are equal. The search space range in GS is [0, 255].

Table 1. GSPSO Algorithm

Algorithm: GSPSO

1. Initialize the populations of PSO and GS
2. Calculate the fitness of the particles
3. Calculate the pbest and gbest
4. While termination criteria are not met
5.    For each individual
6.       Perform velocity and position update for GS
7.       If the expression derived from the GS particle is valid
8.          Update the velocity using this new expression and update the position
9.       Else
10.         Update the velocity with the pbest expression and update the position
11.      End
12.      Calculate the new fitness
13.      Update the pbest and gbest of PSO
14.      Update the pbest expression if the new expression is valid, and the gbest velocity update equation in GS
15.   End
16. End


The search space range of the particles in the other swarm space is [X_min, X_max]. The V_max for both GS and PSO is set to 50% of the search space range. The velocities in GS and PSO are strictly bounded in the range [-V_max, V_max].

In the proposed algorithm, individuals in the Grammatical Swarm share PSO's fitness function, i.e., the local fitness, pbest, and gbest of PSO in the solution space.
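A minimal sketch of steps 7-10 of the algorithm in Table 1 for a single particle follows; expr is the velocity-update function derived from the particle's genome (or None if the derivation was invalid) and pbest_expr_i is the particle's last valid expression. These helper names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def adaptive_update(expr, pbest_expr_i, V, X, pbest, gbest, v_max):
    """Steps 7-10 of the GSPSO algorithm: use the newly derived expression
    if it is valid; otherwise fall back to the particle's pbest expression."""
    f = expr if expr is not None else pbest_expr_i
    V_new = np.clip(f(V, X, pbest, gbest), -v_max, v_max)   # bounded velocity
    X_new = X + V_new                                       # position update, Eq. (2)
    return X_new, V_new
```

Here f takes the arguments (V, X, X^{pbest}, X^{gbest}), matching the terminal set a1-a4 defined below.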

The velocity update equation can be rewritten in the following form:

V_i(t+1) = f(a_j(t)),  j = 1, 2, 3, 4    (4)

where a_1 = V_i, a_2 = X_i, a_3 = X_i^{pbest}, a_4 = X^{gbest}. The function set is F = {+, -, *, /} and the terminal set is T = {a_1, a_2, a_3, a_4, r}, where r is a random constant in (0, 1).

Fig. 1. Genotype

The Backus-Naur Form (BNF) grammar is used in GE for genotype-phenotype mapping. BNF is a meta-syntax used to express a Context-Free Grammar (CFG) by specifying production rules in a simple, human- and machine-understandable manner. An example of a BNF grammar is described below:

1. <expr> := (<expr><op><expr>)  (0)
           | <var>               (1)

2. <op>   := +                   (0)
           | -                   (1)
           | *                   (2)
           | /                   (3)

3. <var>  := a1                  (0)
           | a2                  (1)
           | a3                  (2)
           | a4                  (3)
           | r                   (4)

r represents a random number in the range (0,1).

A mapping process is used to map an integer value to a rule number in the derivation of an expression using the BNF grammar, in the following way:

rule = (codon integer value) MOD (number of rules for the current non-terminal)

In the derivation process, if the current non-terminal is <expr>, then the rule number is generated in the following way:

rule number = (170 mod 2) = 0, so <expr> will be replaced by (<expr><op><expr>)

<expr> := (<expr><op><expr>)   (170 mod 2) = 0
       := (<var><op><expr>)    (55 mod 2)  = 1
       := (r<op><expr>)        (149 mod 5) = 4
       := (r/<expr>)           (83 mod 4)  = 3


       := (r/<expr>)           (210 mod 2) = 0
       := (r/<var>)            (175 mod 2) = 1
       := (r/<var>)            (180 mod 2) = 0
       := (r/a1)

The resulting derived expression is (0.9345/a1), where r has been replaced by the random number 0.9345.
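A minimal sketch of this mapping process for the grammar above is given below; the wrapping behaviour (codon reuse) and failure handling are assumptions consistent with standard Grammatical Evolution, not the authors' code.

```python
import re

GRAMMAR = {
    "<expr>": ["(<expr><op><expr>)", "<var>"],
    "<op>":   ["+", "-", "*", "/"],
    "<var>":  ["a1", "a2", "a3", "a4", "r"],
}

def derive(codons, wrappings=2, start="<expr>"):
    """Genotype-to-phenotype mapping: rule = codon MOD (number of rules),
    always expanding the leftmost non-terminal. Returns None (an invalid
    expression) if the codons are exhausted after the allowed wrappings."""
    expr, k = start, 0
    limit = len(codons) * (wrappings + 1)   # codons may be reused by wrapping
    while "<" in expr:
        if k >= limit:
            return None
        nt = re.search(r"<[^>]+>", expr)    # leftmost non-terminal
        rules = GRAMMAR[nt.group(0)]
        choice = codons[k % len(codons)] % len(rules)
        expr = expr[:nt.start()] + rules[choice] + expr[nt.end():]
        k += 1
    return expr

# For instance, the codons [170, 55, 149, 83, 175, 180] derive "(r/a1)",
# the expression obtained in the worked example above.
print(derive([170, 55, 149, 83, 175, 180]))
```

At evaluation time, each occurrence of r would be replaced by a random constant in (0, 1), e.g. 0.9345 as above.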

As in Eq. (4), the evolved velocity update equation takes the form

V_i(t+1) = f(a_j(t)),  j = 1, 2, 3, 4    (5)

with a_1 = V_i, a_2 = X_i, a_3 = X_i^{pbest}, a_4 = X^{gbest}, function set F = {+, -, *, /}, and terminal set T = {a_1, a_2, a_3, a_4, r}, where r is a random constant in (0, 1). The velocity update equations are evolved using the above BNF grammar.

5 Experimental Setup

5.1 Benchmark Problems

Eight global optimization problems, including 4 uni-modal functions (f1-f4) and 4 multi-modal functions (f5-f8), are chosen for this experimental study. These functions are obtained from Ref. [18]. The function f4 has a plateau-like region. The function f5 is highly multi-modal, i.e., it has very many local optima. The functions f6-f8 have few local optima. All functions are to be minimized in this work. The description of these benchmark functions and their global optima is given in Table 2.

Table 2. The benchmark functions

Test Function                                                                 S             fmin
f1(x) = \sum_{i=1}^{D} x_i^2                                                  [-100,100]    0
f2(x) = \sum_{i=1}^{D} (\sum_{j=1}^{i} x_j)^2                                 [-100,100]    0
f3(x) = \sum_{i=1}^{D} (10^6)^{(i-1)/(D-1)} x_i^2                             [-100,100]    0
f4(x) = \sum_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (1 - x_i)^2]               [-100,100]    0
f5(x) = \sum_{i=1}^{D} -x_i \sin(\sqrt{|x_i|})                                [-500,500]    -12569.5
f6(x) = \sum_{i=1}^{D} x_i^2/4000 - \prod_{i=1}^{D} \cos(x_i/\sqrt{i}) + 1    [-600,600]    0
f7(x) = -20 \exp(-0.2 \sqrt{(1/D) \sum_{i=1}^{D} x_i^2}) - \exp((1/D) \sum_{i=1}^{D} \cos(2\pi x_i)) + 20 + e    [-32,32]    0
f8(x) = \sum_{i=1}^{D} [x_i^2 - 10\cos(2\pi x_i) + 10]                        [-5.12,5.12]  0


Table 3. Parameter Settings

Parameters                                    Values
Problem dimension (D)                         30
Dimension in GS (i.e., length of genome)      50
Number of wrappings                           2
Population size (NP)                          20
FEs (maximum number of function
evaluations allowed for each run)             100,000
V_max                                         0.5 × (X_max - X_min)
c1 = c2                                       1.49445
ω_max, ω_min                                  0.9, 0.4
Threshold error (e)                           1e-03
Termination criteria                          Maximum number of function evaluations, or
                                              E = |f(X) - f(X*)| ≤ e, where f(X) is the current best
                                              and f(X*) is the global optimum. E is the best error of
                                              a run of the algorithm; e is the threshold error.
Total number of runs for each problem         50

5.2 Parameter Settings

The parameter settings used in the experiments are given in Table 3.
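The termination criterion in Table 3 can be expressed directly in code; a minimal sketch:

```python
def terminated(fes, max_fes, f_best, f_star, e=1e-3):
    """Stop when the FE budget is exhausted or the best error
    E = |f(X) - f(X*)| reaches the threshold error e (Table 3)."""
    return fes >= max_fes or abs(f_best - f_star) <= e
```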

5.3 PC Configuration

1. System: Fedora 17
2. CPU: AMD FX-8150 Eight-Core 3.6 GHz
3. RAM: 16 GB
4. Software: Matlab 2010b

6 Results and Discussions

The devised method is applied to the well-known benchmark unconstrained functions described in Section 5 for dimension 30. The quality of solutions is measured in terms of the mean and standard deviation of the best-run-errors from 50 runs and is tabulated in Table 4. The best-run-error is the absolute difference between the global optimum and the best solution obtained in a single run. The Success Rate (SR), also shown in Table 4, is calculated as follows:

SR = (Number of achieved threshold errors) / (Total runs)    (6)

A statistical t-test [19] has been carried out with sample size (number of runs) = 50 and degrees of freedom = 98 to compare the performance of the CLPSO and GSPSO algorithms with statistical significance for each problem. The last column of Table 4 signifies


whether the null hypothesis that the means of the two data sets are equal is accepted or rejected. The value "-" indicates that the approach has a lower value with 95% confidence, the value "+" indicates that the approach has a higher value with 95% confidence, and the value "≈" means that there is no statistically significant difference between the approaches.
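As a reference for how such a comparison can be computed, the following is a sketch using SciPy's two-sample t-test (assuming equal variances, which yields the stated 98 = 50 + 50 - 2 degrees of freedom; the data below are placeholders, not the paper's results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gspso_errors = rng.random(50) * 1e-4   # placeholder best-run-errors, 50 runs
clpso_errors = rng.random(50) * 1e-3   # placeholder best-run-errors, 50 runs

# Two-sample t-test with df = 50 + 50 - 2 = 98; at 95% confidence the null
# hypothesis of equal means is rejected when p < 0.05.
t_stat, p_value = stats.ttest_ind(gspso_errors, clpso_errors, equal_var=True)
print(t_stat, p_value, "significant" if p_value < 0.05 else "not significant")
```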

Convergence speed is measured in terms of the mean and standard deviation of the number of function evaluations taken by the algorithms; it is given, together with the average CPU time, in Table 5. The better results in Tables 4 and 5 are marked in bold face. Convergence graphs of GSPSO are given in Fig. 2.

Table 4. Mean and standard deviation of best-run-error, success rate

        GSPSO                                  CLPSO
Test#   Mean       Std. Dev.   SR(%)      Mean       Std. Dev.   SR(%)    Significance
f1      1.01E-04   2.40E-04    100.00     9.34E-04   6.48E-05    100.00   +
f2      3.90E-05   1.35E-04    100.00     6.11E-01   6.71E-01    0.00     +
f3      2.51E-05   1.31E-04    100.00     9.28E-04   6.46E-05    100.00   +
f4      8.59E-05   2.08E-04    100.00     34.33      23.46       0.00     +
f5      3023.498   1631.845    0.00       4721.51    531.35      0.00     +
f6      1.15E-04   2.20E-04    100.00     6.17E-03   7.90E-03    56.00    +
f7      6.84E-05   1.72E-04    100.00     9.61E-04   3.86E-05    100.00   +
f8      4.54E-05   1.57E-04    100.00     13.64      4.4465      0.00     +

Table 5. Mean, standard deviation of FEs and mean CPU time

        GSPSO                                       CLPSO
Test#   Mean      Std. Dev.   Mean Time(Sec)    Mean      Std. Dev.   Mean Time(Sec)
f1      1212.8    847.4765    32.7245           39605     645.5949    1.0964
f2      1516.8    998.7614    41.9142           100000    0.00        3.5592
f3      1246.8    714.2584    35.8272           45020     648.0047    1.6755
f4      5085.20   2749.30     140.4408          100000    0.00        2.7181
f5      100000    0.00        2646.648          100000    0.00        3.763
f6      996       701.718     27.5345           68039     29385       2.4653
f7      1302.80   938.2399    35.7485           43677     832.3986    1.3273
f8      1330      1381        35.946            100000    0.00        2.9752

From Table 4, it is found that GSPSO outperformed CLPSO in a statistically significant way for all functions. GSPSO is more robust (it always produces similar results) than CLPSO. The success rate of GSPSO in finding the target or threshold error is 100% for all functions except f5. GSPSO also has a faster convergence speed than CLPSO in achieving the target error value.

From Table 5, it is seen that the average FEs of GSPSO are better than those of CLPSO. However, GSPSO takes much more CPU time than CLPSO, because extra CPU time is needed, due to the greater length of the genome, to derive the expressions from the genome with


additional multiple wrappings. If obtaining a better solution has higher priority over computational time, then these experimental studies show that GSPSO outperformed CLPSO, and GSPSO is more efficient and effective than CLPSO in the optimization of unconstrained functions.

Figure 2 depicts the convergence behaviour of GSPSO, and it also shows that GSPSO maintains good diversity in the population (in the solution space).

Fig. 2. Convergence graphs of GSPSO for functions f5-f8: (a) f5, (b) f6, (c) f7, (d) f8

As particles update their velocities by different equations during different runs, a set of the best evolved equations from different runs in the optimization of function f5 is given in Table 6.

Table 6. Evolved Equations

Sl. No   Evolved Equation                              Simplified Equation
1        minus(minus(pdivide(x3,x3),x3),x2)            1 - X^{pbest} - X
2        minus(x1,minus(plus(x2,x4),x4))               V - X
3        minus(times(x4,x4),x3)                        (X^{gbest})^2 - X^{pbest}
4        minus(times(x3,minus(x4,0.44359)),x2)         X^{pbest} × (X^{gbest} - 0.44359) - X
5        minus(pdivide(x4,x1),x2)                      X^{gbest}/V - X
6        minus(plus(x3,x3),plus(plus(x4,x3),x2))       -X - X^{gbest} + X^{pbest}
7        minus(minus(x1,x3),x2)                        V - X - X^{pbest}
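In Table 6, pdivide denotes protected division. Assuming the usual GP convention (the paper does not define it explicitly), in which division by zero returns 1, pdivide(x3, x3) is identically 1, which explains the simplification in the first row. A small sketch:

```python
import numpy as np

def pdivide(a, b):
    """Protected division (assumed GP convention): a / b, but 1 where b == 0."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    out = np.ones_like(b)
    np.divide(a, b, out=out, where=(b != 0))
    return out

# Row 1 of Table 6: minus(minus(pdivide(x3,x3),x3),x2) = 1 - X_pbest - X,
# with x2 = X and x3 = X_pbest.
x = np.array([0.5, 2.0])
x_pbest = np.array([0.0, 1.0])
print((pdivide(x_pbest, x_pbest) - x_pbest) - x)   # -> [ 0.5 -2. ]
```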

7 Conclusions

In this paper, Grammatical Swarm based adaptable velocity update equations in a Particle Swarm Optimizer algorithm are devised. Each particle uses a different velocity update


equation to update its position in order to create diversity in the population, whereas the particles in the Grammatical Swarm use the classical velocity update equation. GSPSO is applied to solve well-known benchmark unconstrained optimization problems. Experimental results establish that GSPSO performed better than CLPSO in terms of quality of solutions, robustness, and convergence behaviour. Analytical as well as experimental studies of how different velocity equations at different times create diversity in the population will be carried out as future work. Future work is also directed towards the training of artificial neural networks and the optimization of more complex problems.

References

1. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942-1948 (1995)

2. Mendes, R., Kennedy, J., Neves, J.: The Fully Informed Particle Swarm: Simpler, Maybe Better. IEEE Transactions on Evolutionary Computation 8(3), 204-210 (2004)

3. Liang, J.J., Qin, A.K., Suganthan, P.N., Baskar, S.: Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions. IEEE Transactions on Evolutionary Computation 10(3), 281-295 (2006)

4. Higashi, N., Iba, H.: Particle Swarm Optimization with Gaussian Mutation. In: IEEE Swarm Intelligence Symposium, Indianapolis, pp. 72-79 (2003)

5. Li, C., Liu, Y., Zhou, A., Kang, L., Wang, H.: A Fast Particle Swarm Optimization Algorithm with Cauchy Mutation and Natural Selection Strategy. In: Kang, L., Liu, Y., Zeng, S. (eds.) ISICA 2007. LNCS, vol. 4683, pp. 334-343. Springer, Heidelberg (2007)

6. Tang, J., Zhao, X.: Particle Swarm Optimization with Adaptive Mutation. In: WASE International Conference on Information Engineering (2009)

7. Si, T., Jana, N.D., Sil, J.: Particle Swarm Optimization with Adaptive Polynomial Mutation. In: World Congress on Information and Communication Technologies (WICT 2011), Mumbai, India, pp. 143-147 (2011)

8. Si, T., Jana, N.D., Sil, J.: Constrained Function Optimization Using PSO with Polynomial Mutation. In: Panigrahi, B.K., Suganthan, P.N., Das, S., Satapathy, S.C. (eds.) SEMCCO 2011, Part I. LNCS, vol. 7076, pp. 209-216. Springer, Heidelberg (2011)

9. Jana, N.D., Si, T., Sil, J.: Particle Swarm Optimization with Adaptive Mutation in Local Best of Particles. In: 2012 International Congress on Informatics, Environment, Energy and Applications (IEEA 2012), IPCSIT, vol. 38. IACSIT Press, Singapore (2012)

10. Si, T., Jana, N.D.: Particle swarm optimisation with differential mutation. Int. J. Intelligent Systems Technologies and Applications 11(3/4), 212-251 (2012)

11. Shi, Y., Eberhart, R.C.: A modified particle swarm optimizer. In: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 1998), Piscataway, NJ, pp. 69-73 (1998)

12. Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press (1992)

13. Rashid, M.: Combining PSO Algorithm and Honey Bee Food Foraging Behaviour for Solving Multimodal and Dynamic Optimization Problems. Ph.D. Dissertation, Department of Computer Science, National University of Computer & Emerging Sciences, Islamabad, Pakistan (2010)

14. Si, T.: Grammatical Differential Evolution Adaptable Particle Swarm Optimization Algorithm. International Journal of Electronics Communications and Computer Engineering (IJECCE) 3(6), 1319-1324 (2012)

15. Si, T.: Grammatical Differential Evolution Adaptable Particle Swarm Optimizer for Artificial Neural Network Training. International Journal of Electronics Communications and Computer Engineering (IJECCE) 4(1), 239-243 (2013)

16. O'Neill, M., Brabazon, A.: Grammatical Swarm: The Generation of Programs by Social Programming. Natural Computing 5(4), 443-462 (2006)

17. O'Neill, M., Ryan, C.: Grammatical Evolution. IEEE Transactions on Evolutionary Computation 5(4), 349-358 (2001)

18. Yao, X., Liu, Y., Lin, G.: Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation 3, 82-102 (1999)

19. Das, N.G.: Statistical Methods (Combined Volume). Tata McGraw-Hill Education Private Limited (2008)