
Parallel Swarms Oriented Particle Swarm Optimization

Tad Gonsalves and Akira Egashira Department of Information and Communication Sciences,

Faculty of Science & Technology,

Sophia University, 7-1 Kioicho, Chiyoda-ku, Tokyo, 102-8554 Japan E-mail: [email protected] [email protected]

Abstract – The Particle Swarm Optimization (PSO) is a recently invented evolutionary

computation technique which is gaining popularity owing to its simplicity in implementation

and rapid convergence. In the case of single-peak functions, PSO rapidly converges to the peak;

however, in the case of multi-modal functions, the PSO particles are known to get trapped in the

local optima. In this paper, we propose a variation of the algorithm called Parallel Swarms

Oriented Particle Swarm Optimization (PSO-PSO) which consists of a multi-stage and a single-

stage of evolution. In the multi-stage of evolution, individual sub-swarms evolve independently

in parallel and in the single-stage of evolution the sub-swarms exchange information to search

for the global best. The two interwoven stages of evolution demonstrate better performance on

test functions, especially those of higher dimensions. The attractive feature of the PSO-PSO version

of the algorithm is that it does not introduce any new parameters to improve its convergence

performance. The strategy maintains the simple and intuitive structure as well as the

implementation and computational advantages of the basic PSO.

1. Introduction

Evolutionary Algorithms (EAs) are increasingly being applied to solve problems in diverse

domains. These meta-heuristic algorithms are found to be successful in many domains chiefly

because of their domain-independent evolutionary mechanisms. Evolutionary computation is

inspired by biological processes which are at work in nature. Genetic algorithm (GA) [13]

modeled on the Darwinian evolutionary paradigm is the oldest and the best known Evolutionary

Algorithm. It mimics the natural processes of selection, cross-over and mutation to search for

optimum solutions in massive search spaces.

Particle Swarm Optimization (PSO) is a recently developed algorithm belonging to the class

of biologically inspired methods [9-12], [15-18]. PSO imitates the social behavior of insects,

birds or fish swarming together to hunt for food. PSO is a population-based approach that

maintains a set of candidate solutions, called particles, which move within the search space.

During the exploration of the search space, each particle maintains a memory of two pieces of

information: the best solution (pbest) that it has encountered so far and the best solution (gbest)

encountered by the swarm as a whole. This information is used to direct the search.

Researchers have found that PSO has the following advantages over the other biologically

inspired evolutionary algorithms: (1) Its operational principle is very simple and intuitive; (2) it

relies on very few external control parameters; (3) it can be easily implemented, and (4) it has

rapid convergence. PSO has developed rapidly, has found successful applications in a wide

variety of areas [1], [19], [21] and has become one of the most active research topics in intelligent

computing in recent years [3], [20], [21].

In the case of single-peak functions, PSO rapidly converges to the peak; however, in the case

of multi-modal functions, the PSO particles are known to get trapped in the local optima. A

significant number of variations have been made to the standard PSO to prevent the particles from

getting trapped in the local optima [6], [22], [24-27], [33-34], [37-39]. Other methods which

restart the particles trapped at the local optima have also been proposed [25-26], [34]. In these

methods, when the velocity of a particle falls below a given threshold, it is re-initialized to a

randomly selected large value to avoid stagnation. The risk of stagnation is reduced by

randomly accelerating the flying particles. Some of the more recent methods utilize multi-

swarms to search for the global optimum while avoiding getting trapped in the local optima [4],

[5], [7], [23], [35].

However, in all the studies mentioned above, a number of new parameters are introduced in

the original PSO. This destroys the simplicity of the algorithm and leads to an undesirable

computational overhead. In this study, we introduce another variation to the evolution of the

standard PSO, but without resorting to additional new parameters. The strategy maintains the

simple and intuitive structure as well as the implementation and computational advantages of the

basic PSO. Thus, the contribution of this study is the improvement of the performance of the

basic PSO without increasing the complexity of the algorithm.

This paper is organized as follows: In section 2, we present the original PSO as proposed by

Kennedy and Eberhart in 1995. In section 3, we explain the working of the standard PSO and

some of the variations found in the literature on PSO. In section 4, we present our new Parallel

Swarms Oriented PSO and demonstrate the performance results in section 5. We conclude the

paper in section 6 and propose some ideas for further research.

2. Original Particle Swarm Optimization

The population-based PSO conducts a search using a population of individuals. The individual

in the population is called the particle and the population is called the swarm. The performance

of each particle is measured according to a predefined fitness function. Particles are assumed to

“fly” over the search space in order to find promising regions of the landscape. In the

minimization case, such regions possess lower functional values than other regions visited

previously. Each particle is treated as a point in a d-dimensional space which adjusts its own

“flying” according to its flying experience as well as the flying experience of the other

companion particles. By making adjustments to the flying based on the local best (pbest) and

the global best (gbest) found so far, the swarm as a whole converges to the optimum point, or at

least to a near-optimal point, in the search space.

The notations used in PSO are as follows: The ith particle of the swarm in iteration t is

represented by the d-dimensional vector, xi(t) = (xi1, xi2,…, xid). Each particle also has a position

change known as velocity, which for the ith particle in iteration t is vi(t) = (vi1, vi2,…, vid). The

best previous position (the position with the best fitness value) of the ith particle is pi (t-1) = (pi1,

pi2,…, pid). The best particle in the swarm, i.e., the particle with the smallest function value

found in all the previous iterations, is denoted by the index g. In a given iteration t, the velocity

and position of each particle is updated using the following equations:

vi (t ) = wvi (t-1) + c1r1(pi (t-1) − xi (t-1)) + c2r2(pg (t-1) − xi (t-1)) (1)

and

xi (t) = xi (t-1) + vi (t) (2)

where, i = 1, 2,…, NP; t = 1, 2,…,T. NP is the size of the swarm, and T is the iteration limit; c1

and c2 are positive constants (called “social factors”), and r1 and r2 are random numbers between

0 and 1; w is inertia weight that controls the impact of the previous history of the velocities on

the current velocity, influencing the trade-off between the global and local experiences. A large

inertia weight facilitates global exploration (searching new areas), while a small one tends to

facilitate local exploration (fine-tuning the current search area). Equation (1) is used to compute

a particle’s new velocity, based on its previous velocity and the distances from its current

position to its local best and to the global best positions. The new velocity is then used to

compute the particle’s new position (Equation (2)).
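As an illustration, the two update rules can be put together into a complete minimization loop. The following is a minimal sketch in Python (the paper itself gives no code); the function name pso_minimize, the parameter defaults and the sphere-function example are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=1000,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO loop following Equations (1) and (2)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                     # particle velocities
    pbest = x.copy()                                     # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)             # personal best values
    gbest = pbest[np.argmin(pbest_val)].copy()           # global best position

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        # Equation (1): inertia, cognitive and social components
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Equation (2): move each particle by its new velocity
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimize the 10-dimensional sphere function
best_x, best_f = pso_minimize(lambda z: np.sum(z ** 2), dim=10)
```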

3. Standard PSO and its variations

Any successful meta-heuristic algorithm maintains a delicate balance between exploration

(diversifying the search to wider areas of the search space) and exploitation (intensifying the

search in narrow promising areas). Shi and Eberhart later introduced the inertia weight in the

original PSO to improve the PSO search [32]. A high value of the inertial weight favors

exploration, while a low value favors exploitation. The inertia weight is defined as:

w = wmax − ((wmax − wmin) / itermax) × iter (3)

where, wmax and wmin are, respectively, the initial and the final values of the inertia weight, iter is

the current iteration number and itermax is the maximum number of iterations. Many studies

have shown that the PSO performance is improved by decreasing w linearly from 0.9 to 0.4

using the above equation.
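A one-line helper makes this schedule explicit; this is a sketch only, and the name linear_inertia and the default values 0.9 and 0.4 simply follow the text above.

```python
def linear_inertia(iter_t, iter_max, w_max=0.9, w_min=0.4):
    """Equation (3): inertia weight decreasing linearly from w_max to w_min."""
    return w_max - (w_max - w_min) * iter_t / iter_max

# w starts at 0.9 and reaches 0.4 at the final iteration
weights = [linear_inertia(t, 1000) for t in range(1001)]
```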

Eberhart and Shi have further proposed a random modification for the inertia weight to make

the standard PSO applicable to dynamic problems [11]. The inertia weight is randomly

modified according to the following equation:

w = 0.5 + rand(0, 1) / 2 (4)

As opposed to the above linear decrement, Jie, Wang, et al. [14] have proposed a non-linear

modification of the inertia weight over time given by:

(5)

This inertia weight varies slowly in the initial stages, but more rapidly in the final stages. This

implies that the algorithm makes a wider global search in the early stages and narrower local

search in the final stages.

Generally, the very same velocity and position update formulae are applied to each and every

flying particle. Moreover, the very same inertia weight is applied to each particle in a given

iteration. However, Yang, Yuan, et al. [36] have proposed a modified PSO algorithm with

dynamic adaptation, in which a modified velocity updating formula of the particle is used,

where the randomness in the course of updating the particle velocity is relatively decreased and

each particle has a different inertia weight applied to it in a given iteration. Further, this

algorithm introduces two new parameters describing the evolving state of the algorithm, the

evolution speed factor and the aggregation degree factor. In the new strategy, the inertia weight

is dynamically adjusted according to the evolution speed and the aggregation degree. The

evolution speed factor is given by:

h = | min(F(pbest(t-1)), F(pbest(t))) / max(F(pbest(t-1)), F(pbest(t))) | (6)

where, F(pbest(t)) is the fitness value of the particle's best position in iteration t. The parameter h (0 < h ≤ 1) reflects the

evolutionary speed of each particle. The smaller the value of h is, the faster the speed.

The aggregation degree is given by:

s = | min(Fbest(t), Fmean(t)) / max(Fbest(t), Fmean(t)) | (7)

where, Fmean(t) is the mean fitness of all the particles in the swarm and Fbest(t) is the optimal value

found in the tth iteration.

The inertia weight is updated as:

w = wini − α(1 − h) + βs (8)

where, wini is the initial inertia weight. The choice of α and β is typically in the range [0, 1].
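For concreteness, the per-particle adjustment can be sketched as below, following Equations (6)-(8) as reconstructed above; the helper name dynamic_inertia and the values alpha = beta = 0.5 are illustrative assumptions, not values prescribed by the paper.

```python
def dynamic_inertia(w_ini, f_pbest_prev, f_pbest_curr, f_mean, f_best,
                    alpha=0.5, beta=0.5):
    """Per-particle inertia weight from the evolution speed factor h (Eq. (6))
    and the aggregation degree s (Eq. (7)), combined as in Eq. (8)."""
    h = abs(min(f_pbest_prev, f_pbest_curr) / max(f_pbest_prev, f_pbest_curr))
    s = abs(min(f_best, f_mean) / max(f_best, f_mean))
    return w_ini - alpha * (1.0 - h) + beta * s

# Example: a particle whose pbest improved from 12.0 to 3.0, in a swarm with
# mean fitness 8.0 and best fitness 3.0 in the current iteration
w = dynamic_inertia(0.9, 12.0, 3.0, 8.0, 3.0)   # ~0.71
```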

Another variation is the introduction of a constriction coefficient χ to replace the inertia weight w and ensure the

quick convergence of PSO [8]. The velocity update is given by:

vi (t ) = χ (vi (t-1) + c1r1(pi (t-1) − xi (t-1)) + c2r2(pg (t-1) − xi (t-1))) (9)

χ = 2 / | 2 − φ − √(φ² − 4φ) | (10)

where, χ is the constriction coefficient, φ = c1 + c2 and φ > 4.
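As a small illustration, the constriction coefficient of Equation (10) can be computed directly; this follows the standard Clerc formulation, and the helper name constriction and the default c1 = c2 = 2.05 are illustrative choices.

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Equation (10): constriction coefficient, defined for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction()   # approximately 0.7298 for c1 = c2 = 2.05
# Equation (9) then scales the entire velocity update by chi instead of using w.
```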

4. Parallel Swarms Oriented PSO

In all the PSO variations mentioned in the preceding sections, a number of new parameters are

introduced in the original PSO. This destroys the simplicity of the algorithm and leads to an

undesirable computational overhead. In this section, we describe our novel approach called the

Parallel Swarms Oriented PSO (PSO-PSO). This version of the PSO does not introduce any new

algorithm parameters to improve its convergence performance. The strategy maintains the

simple and intuitive structure as well as the implementation and computational advantages of the

basic PSO.

The algorithm consists of a multi-evolutionary phase and a single-evolutionary phase. In the

multi-evolutionary phase, initially a number of sub-swarms are randomly generated so as to

uniformly cover the decision space. The multiple sub-swarms are then allowed to evolve

independently, each one maintaining its own particle-best (pbest) and swarm-best (sbest). The

latter is a new term we have introduced to represent the best particle in a given swarm in its

history of evolution. After a pre-determined number of evolutionary cycles, the multi-

evolutionary phase ends and the single-evolutionary phase begins. The sub-swarms exchange

information and record the global-best (gbest) of the entire collection of the sub-swarms. In the

single-evolutionary phase all the sub-swarms merge and continue to evolve using the individual

sub-swarm particle-best and swarm-best and the overall global-best. The sub-swarms then

return to the multi-evolutionary phase and continue the search. The algorithm flow chart of the

PSO-PSO algorithm is shown in Fig. 1.

Multi-evolutionary phase: K independent swarms evolve in parallel

Step 1: Randomly generate K independent swarm populations so as to be uniformly

distributed over the entire decision space.

Step 2: Evaluate the fitness of the particles in each individual swarm. In the minimization

problems, a particle with a smaller function value has a higher fitness.

Step 3: Determine the particle best (pbest) and the swarm best (sbest) of each individual swarm.

Step 4: Update the velocity and position of each particle in each swarm according to Equations

(1) and (2).

Step 5: Allow the individual swarms to evolve independently through N iterations (i.e. repeat

steps 2 through 4).

Single-evolutionary phase: The individual swarms exchange information

Step 6: Determine the global-best (gbest) by comparing the swarm-best (sbest) of all the swarms.

For minimization problems, the gbest in a given iteration is given by the following equation:

gbest = min(sbest1, sbest2, sbest3, …, sbestK) (11)

Step 7: The individual swarms start interacting by using the gbest as reference. Update the

velocities of all the particles according to the equation:

vij (t ) = wvij (t-1) + c1r1(pij (t-1) − xij (t-1)) + c2r2(psj (t-1) − xij (t-1)) + c3r3(pg (t-1) − xij (t-1)) (12)

where, xij is the position of the ith particle in the jth swarm, vij is the velocity of the ith particle in

the jth swarm, pij is the pbest of the ith particle in the jth swarm, psj is the sbest of the jth swarm

and pg is the global best of the entire information-exchanging collection of sub-swarms. c1, c2, c3

are the acceleration parameters and r1, r2, r3 are uniform random numbers.

Step 8: Update the positions of all the particles according to the equation:

xij (t) = xij (t-1) + vij (t) (13)

Step 9: Repeat steps 2 through 8 for M iterations and output the global best (see the sketch following the flowchart).

Fig. 1. Parallel Swarms Oriented PSO algorithm flowchart
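To make the two interleaved phases concrete, the sketch below follows Steps 1 through 9 in Python; it is one reading of the procedure above, not the authors' parallel MATLAB implementation, and the function name pso_pso, the parameter defaults and the serial (single-process) evaluation of the sub-swarms are simplifying assumptions.

```python
import numpy as np

def pso_pso(f, dim, n_swarms=4, swarm_size=20, n_inner=50, n_outer=20,
            w=0.7, c1=1.5, c2=1.5, c3=1.5, bounds=(-5.0, 5.0)):
    """Sketch of PSO-PSO: K sub-swarms evolve independently (multi-evolutionary
    phase), then exchange a global best (single-evolutionary phase)."""
    lo, hi = bounds
    rng = np.random.default_rng()
    # Step 1: K sub-swarms spread uniformly over the decision space
    x = rng.uniform(lo, hi, (n_swarms, swarm_size, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([[f(p) for p in swarm] for swarm in x])
    gbest, gbest_val = None, np.inf

    for _ in range(n_outer):                                  # Step 9: repeat M times
        # Multi-evolutionary phase (Steps 2-5): sub-swarms evolve independently
        for _ in range(n_inner):
            sbest_idx = pbest_val.argmin(axis=1)              # Step 3: sbest per swarm
            sbest = pbest[np.arange(n_swarms), sbest_idx]
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = (w * v + c1 * r1 * (pbest - x)                # Step 4: Eq. (1)
                 + c2 * r2 * (sbest[:, None, :] - x))
            x = x + v                                         # Eq. (2)
            vals = np.array([[f(p) for p in swarm] for swarm in x])
            better = vals < pbest_val                         # Step 2: evaluate fitness
            pbest[better], pbest_val[better] = x[better], vals[better]

        # Single-evolutionary phase (Steps 6-8): sub-swarms exchange information
        sbest_idx = pbest_val.argmin(axis=1)
        sbest = pbest[np.arange(n_swarms), sbest_idx]
        k = pbest_val.min(axis=1).argmin()                    # Step 6: Eq. (11)
        if pbest_val[k].min() < gbest_val:
            gbest, gbest_val = sbest[k].copy(), pbest_val[k].min()
        r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
        v = (w * v + c1 * r1 * (pbest - x)                    # Step 7: Eq. (12)
             + c2 * r2 * (sbest[:, None, :] - x)
             + c3 * r3 * (gbest - x))
        x = x + v                                             # Step 8: Eq. (13)
        vals = np.array([[f(p) for p in swarm] for swarm in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]

    return gbest, gbest_val

# Example: the 10-dimensional Rastrigin function used in section 5
rastrigin = lambda z: 10 * len(z) + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))
best_x, best_f = pso_pso(rastrigin, dim=10, bounds=(-5.12, 5.12))
```

In a parallel realization, each sub-swarm's inner loop would be assigned to its own processor core, as described in section 5.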


5. Performance comparison of single-swarm and multi-swarm PSO

Fig. 2a. 10D Rosenbrock   Fig. 2b. 10D Rastrigin

Fig. 3a. 20D Rosenbrock   Fig. 3b. 20D Rastrigin

Fig. 4a. 30D Rosenbrock   Fig. 4b. 30D Rastrigin

(Each panel plots the average function value against the number of swarms (number of particles).)

The PSO-PSO algorithm is implemented in MATLAB on a 24-core server with 256 GB RAM.

Each CPU core is dedicated to the evolution of a single sub-swarm in the multi-stage

computational phase of the algorithm. The results are presented in Fig. 2, Fig. 3 and Fig. 4. For

smaller dimensions, there is no appreciable difference between the performance of the ordinary

(single) PSO and the multi-swarms PSO-PSO. This can be seen in the optimization results of

the 10-dimensional Rosenbrock and Rastrigin functions (Fig. 2a & Fig. 2b). However, the

superior performance of the multi-swarm PSO-PSO approach is evident in higher dimensions.

This can be seen in the optimization of the 20-dimensional (Fig. 3a & Fig. 3b) and 30-dimensional

Rosenbrock and Rastrigin functions (Fig. 4a & Fig. 4b).

6. Conclusion

Particle swarm optimization (PSO) is becoming increasingly widespread in a variety of

applications as a reliable and robust optimization algorithm. The attractive features of this

evolutionary algorithm are that it has very few control parameters, is simple to program and

converges rapidly. The main drawback reported so far is that at times it gets trapped in local

optima. Many researchers have addressed this issue, but by introducing a number of new

parameters in the original PSO. This destroys the simplicity of the algorithm and leads to an

undesirable computational overhead. In this study, we have proposed a variation of the

algorithm called Parallel Swarms Oriented PSO (PSO-PSO) which consists of a multi-stage and

a single-stage of evolution. The two interwoven stages of evolution demonstrate better

performance on test functions, especially those of higher dimensions. The attractive feature of the PSO-

PSO version of the algorithm is that it does not introduce any new parameters to improve its

convergence performance. The PSO-PSO strategy maintains the simple and intuitive structure

as well as the implementation and computational advantages of the basic PSO. Thus, the

contribution of this study is the improvement of the performance of the basic PSO without

increasing the complexity of the algorithm.

7. References

[1] M. R. AlRashidi and M. E. El-Hawary, “A survey of particle swarm optimization applications in power system

operations”, Electric Power Compon. Syst., 34(12):1359 -1357, 2006.

[2] F. van den Bergh, An Analysis of Particle Swarm Optimizers. Ph.D. thesis. Department of Computer Science,

University of Pretoria, Pretoria, South Africa, 2002.

[3] Bergh F, Engelbrecht A P. A cooperative approach to particle swarm optimization. IEEE Transactions on

Evolutionary Computation, 2004, 8(3): 225-239.

[4] T. M. Blackwell and J. Branke, “Multi-Swarm optimization in dynamic environment,” Lecture Notes in Computer

Science, vol. 3005, pp. 489-500, Springer, Berlin, 2004.

[5] T. M. Blackwell and J. Branke, “Multi-Swarm, exclusions and anticonvergence in dynamic environments,” IEEE

Transactions on Evolutionary Computation, 2006, 10(4): 459-472.

[6] G.M. Chen, J.Y. Jia, Q. Han, Study on the strategy of decreasing inertia weight in particle swarm optimization

algorithm, J. Xian Jiaotong Univ. 40 (2006) 1039-1042.

[7] K. Chen, T. Li and T. Cao, “Tribe-PSO: A novel global optimization algorithm and its application in molecular

docking,” Chemometrics and Intelligent Laboratory Systems, vol. 82, pp. 248-259, 2006.

[8] M. Clerc, Particle Swarm Optimization, ISTE Publishing Company, London, 2006.

[9] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proceedings of the Sixth International

Symposium on Micro Machine and Human Science. Piscataway: IEEE Service Center, 1995, 39-43

[10] R.C. Eberhart and Y. Shi, “Comparing inertia weights and constriction factors in particle swarm optimization”,

Proc. CEC, San Diego, CA, pp. 84-88, 2000.

[11] R.C. Eberhart, Y.H. Shi, Tracking and optimizing dynamic systems with particle swarms, in: Proc. of the IEEE

Congress on Evolutionary Computation. San Francisco, USA (2001) 94-100.

[12] R. Eberhart and Y. Shi, "Particle swarm optimization: developments, applications and resources", Proceedings

of the 2001 Congress on Evolutionary Computation, vol. 1: 81-86.

[13] Holland, John, 1975/1992, Adaptation in Natural and Artificial Systems. Cambridge, MA: MIT Press.

[14] Jing Jie; Wanliang Wang; Chunsheng Liu; Beiping Hou, “Multi-swarm particle swarm optimization based on

mixed search behavior”, 5th IEEE Conference on Industrial Electronics and Applications (ICIEA), 2010, pp.605-610.

[15] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of IEEE International Conference on

Neural Networks (ICNN), 1995, vols.1-6: 1942-1948.

[16] J. Kennedy, R.C. Eberhart, A new optimizer using particle swarm theory, in: Proc. of the Sixth Int. Symp. on

Micro Machine and Human Science (MHS’95), Nagoya, Japan (1995) 39-43.

[17] Kennedy, J. and R. C. Eberhart (1997). A discrete binary version of the particle swarm algorithm. Systems, Man,

and Cybernetics.1997 IEEE International Conference on Computational Cybernetics and Simulation.

[18] James Kennedy, Russell C. Eberhart, Yuhui Shi, Swarm intelligence; Morgan Kaufmann Publishers, 2001.

[19] Kulkarni, R.V.; Venayagamoorthy, G.K., “Particle Swarm Optimization in Wireless-Sensor Networks: A Brief

Survey”, IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 41(2):262-267.

[20] Liang J J, Qin A K, Suganthan P N et al. Comprehensive learning particle swarm optimizer for global

optimization of multimodal functions. IEEE Transactions on Evolutionary Computation, 2006, 10(3): 281-295.

[21] Lijuan Wang; Jun Shen; Jianming Yong, “A survey on bio-inspired algorithms for web service composition”,

IEEE 16th International Conference on Computer Supported Cooperative Work in Design (CSCWD), 2012, pp.569-

574.

[22] Z.S. Lu, Z.R. Hou, Particle swarm optimization with adaptive mutation, Acta Electron. Sinica 32 (2004) 416-420,

Beijing, China.

[23] B. Niu, Y. Zhu, X. He and H. Wu, “MCPSO: A multi-swarm cooperative particle swarm optimizer” Proc.

Applied Mathematics and Computation, 2007, 185(2):1050-1062.

[24] F. Pan, X.Y. Tu, J. Chen, J.W. Fu, Harmonious particle swarm optimizer – HPSO, Comput. Eng. 31 (2005) 169-

171, Shanghai, China.

[25] S. Pasupuleti and R. Battiti, “The gregarious particle swarm optimizer– G-PSO,” Proceedings of the 8th Annual

Conference on Genetic and Evolutionary Computation, pp. 67-74, 2000.

[26] A. Ratnaweera, S. Halgamuge and H.Watson, “Self-organizing hierarchical particle swarm optimizer with time-

varying acceleration coefficients,” IEEE Transactions on Evolutionary Computation, Vol. 8, pp. 240–255, 2004.

[27] J. F. Schutte, A.A. Groenwold, “A Study of Global Optimization using Particle Swarms”, Journal of Global

Optimization archive, January 2005, 31(1): 93-108.

[28] Shen Lin-cheng, Huo Xiao-hua, Niu Yi-feng. “Survey of discrete particle swarm optimization algorithm”,

Systems Engineering and Electronics, 2008, 30(10):1986-1994.

[29] Y. Shi and R. Eberhart, "Parameter Selection in Particle Swarm Optimization", Proc. Seventh Annual Conf. on

Evolutionary Programming, March 1998, pp.591-601.

[30] Shi Y and Eberhart R C, “A modified particle swarm optimizer,” IEEE World Congr. Computational Intell.,

Anchorage, 1998, pp. 69-73.

[31] Y.H. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: Proc. of the IEEE Congress on Evolutionary

Computation. IEEE Service Center, USA, 1998, 69-73.

[32] Y. Shi and R. C. Eberhart, “Empirical study of particle swarm optimization”, In: Proc. Congress on

Evolutionary Computation, Piscataway, NJ: IEEE Service Center, 1999, pp. 1945-1950.

[33] Y.H. Shi, R.C. Eberhart, Fuzzy Adaptive particle swarm optimization, in: Proc. of the IEEE Congress on

Evolutionary Computation, vol. 1, Seoul Korea (2001), 101-106.

[34] Keiji Tatsumi, Takashi Yukami and Tetsuzo Tanino, Restarting Multi-type Particle Swarm Optimization Using

an Adaptive Selection of Particle Type, Proceedings of the 2009 IEEE International Conference on Systems, Man,

and Cybernetics, 2009, pp.923-928.

[35] Wu Lie-yang, Sun Hui, Bai Ming-ming. “Particle swarm optimization algorithm of two sub-swarms exchange

based on different evolvement model”, Journal of Nanchang Institute of Technology, 2008, 4:1-4.

[36] Xueming Yang, Jinsha Yuan, Jiangye Yuan, Huina Mao, A modified particle swarm optimizer with dynamic

adaptation, Applied Mathematics and Computation 189 (2007) 1205–1213.

[37] Y.L. Zhang, L.H. Ma, L.Y. Zhang, J.X. Qian, On the Convergence Analysis and Parameter Selection in Particle

Swarm Optimization, in: Proc. Int. Conf. on Machine learning and Cybernetics. Zhejiang University, Hangzhou,

China, (2003) 1802–1807.

[38] L.P. Zhang, H.J. Yu, D.Z. Chen, S.X. Hu, Analysis and improvement of particle swarm optimization algorithm,

Inform. Control 33 (2004) 513–517, Shengyang, China.

[39] X.P. Zhang, Y.P. Du, G.Q. Qin, Adaptive particle swarm algorithm with dynamically changing inertia weight, J.

Xian Jiaotong Univ. 39 (2005) 1039-1042.

[40] Jia Zhao, Li Lü, Hui Sun, Xing-wang Zhang, A Novel Two Sub-swarms Exchange Particle Swarm Optimization

Based on Multi-phases, 2010 IEEE International Conference on Granular Computing, 2010, pp. 626-629.