Research Article
Heterogeneous Differential Evolution for Numerical Optimization
Hui Wang,1 Wenjun Wang,2 Zhihua Cui,3,4 Hui Sun,1 and Shahryar Rahnamayan5
1 School of Information Engineering, Nanchang Institute of Technology, Nanchang 330099, China
2 School of Business Administration, Nanchang Institute of Technology, Nanchang 330099, China
3 Complex System and Computational Intelligent Laboratory, Taiyuan University of Science and Technology, Taiyuan 030024, China
4 State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
5 Department of Electrical, Computer and Software Engineering, University of Ontario Institute of Technology, 2000 Simcoe Street North, Oshawa, ON, Canada L1H 7K4
Correspondence should be addressed to Hui Wang; [email protected]
Received 28 September 2013; Accepted 23 December 2013; Published 5 February 2014
Academic Editors: G. C. Gini and J. Zhang
Copyright © 2014 Hui Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Differential evolution (DE) is a population-based stochastic search algorithm that has shown good performance on many benchmark and real-world optimization problems. Individuals in the standard DE, and in most of its modifications, exhibit the same search characteristics because they use the same DE scheme. This paper proposes a simple and effective heterogeneous DE (HDE) to balance exploration and exploitation. In HDE, individuals are allowed to follow different search behaviors randomly selected from a DE scheme pool. Experiments are conducted on a comprehensive set of benchmark functions, including classical problems and shifted large-scale problems. The results show that heterogeneous DE achieves promising performance on a majority of the test problems.
1. Introduction
Differential evolution (DE) [1] is a well-known algorithm for global optimization over continuous search spaces. Although DE has shown good performance on many optimization problems, its performance is greatly influenced by its mutation scheme and control parameters (population size, scale factor, and crossover rate). To enhance the performance of DE, many improved DE variants have been proposed based on modified mutation strategies [2, 3] or adaptive parameter control [4–6].
The standard DE and most of its modifications [7–10] make use of homogeneous populations, in which all individuals follow exactly the same behavior; that is, individuals implement the same DE scheme, such as DE/rand/1/bin or DE/best/1/bin. The effect is that individuals in the population exhibit the same exploration and/or exploitation characteristics [11]. An ideal optimization algorithm should balance exploration and exploitation during the search process. Initially, the algorithm should concentrate on exploration; as the iterations proceed, it is better to shift toward exploitation to find more accurate solutions. However, it is difficult to determine when the algorithm should switch from explorative to exploitative behavior. To tackle this problem, the concept of heterogeneous swarms [11] was proposed and applied to particle swarm optimization (PSO) [12, 13], where particles in the swarm use different velocity and position update rules. The swarm may therefore consist of explorative particles as well as exploitative particles, which gives heterogeneous PSO the ability to balance exploration and exploitation during the search process.
In this paper, a simple and effective heterogeneous DE (HDE) algorithm is proposed, inspired by the idea of heterogeneous swarms [11]. In HDE, individuals are allocated different search behaviors randomly selected from a DE scheme pool. The concept of heterogeneity is not entirely new to DE. Qin et al. [6] proposed a self-adaptive DE (SaDE), in which individuals are also allowed to implement different mutation schemes, selected according to a probability model. Gong et al. [14] combined a strategy adaptation mechanism with four mutation strategies proposed in JADE [3]. In other self-adaptive DE variants [8], the search characteristics of individuals change dynamically through the adaptation of the control parameters (scale factor and/or crossover rate). The HDE proposed in this paper differs from the above DE variants: in HDE, behaviors are randomly assigned from a DE scheme pool. Following the suggestions for heterogeneous PSO [11], two heterogeneous models are proposed. The first is static HDE (sHDE), where the randomly selected behaviors are fixed during the evolution. The second is dynamic HDE (dHDE), where the behaviors can change randomly during the search process. Experimental studies are conducted on a comprehensive set of benchmark functions, including classical problems and shifted large-scale problems. Simulation results demonstrate the efficiency and effectiveness of the proposed heterogeneous DE.

(Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 318063, 7 pages, http://dx.doi.org/10.1155/2014/318063)
The rest of the paper is organized as follows. In Section 2, the standard DE algorithm is briefly introduced. HDE is proposed in Section 3. Experimental simulations, results, and discussions are presented in Section 4. Finally, the work is concluded in Section 5.
2. Differential Evolution
There are several variants of DE [1], which use different mutation strategies and/or crossover schemes. To distinguish these schemes, the notation "DE/x/y/z" is used, where "DE" indicates the DE algorithm, "x" denotes the vector to be mutated, "y" is the number of difference vectors used in the mutation, and "z" stands for the type of crossover scheme, exponential (exp) or binomial (bin). The following subsections describe the mutation, crossover, and selection operations.
2.1. Mutation. For each vector \(X_{i,G}\) at generation \(G\), this operation creates a mutant vector \(V_{i,G}\) based on the current parent population. The following are five well-known mutation strategies:

(1) DE/rand/1:
\[ V_{i,G} = X_{r_1,G} + F \cdot (X_{r_2,G} - X_{r_3,G}), \tag{1} \]

(2) DE/best/1:
\[ V_{i,G} = X_{\mathrm{best},G} + F \cdot (X_{r_1,G} - X_{r_2,G}), \tag{2} \]

(3) DE/current-to-best/1:
\[ V_{i,G} = X_{i,G} + F \cdot (X_{\mathrm{best},G} - X_{i,G}) + F \cdot (X_{r_1,G} - X_{r_2,G}), \tag{3} \]

(4) DE/rand/2:
\[ V_{i,G} = X_{r_1,G} + F \cdot (X_{r_2,G} - X_{r_3,G}) + F \cdot (X_{r_4,G} - X_{r_5,G}), \tag{4} \]

(5) DE/best/2:
\[ V_{i,G} = X_{\mathrm{best},G} + F \cdot (X_{r_1,G} - X_{r_2,G}) + F \cdot (X_{r_3,G} - X_{r_4,G}), \tag{5} \]

where the indices \(r_1, r_2, r_3, r_4\), and \(r_5\) are mutually different random indices chosen from the set \(\{1, 2, \ldots, N_p\}\), \(N_p\) is the population size, and all are different from the base index \(i\). The scale factor \(F\) is a real number that controls the difference vectors, and \(X_{\mathrm{best},G}\) is the best vector in terms of fitness value at the current generation \(G\).
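As a concrete illustration, the five strategies above can be sketched in Python. This is a hypothetical helper (not the authors' code); `numpy` and a `numpy.random.Generator` are assumed:

```python
import numpy as np

def mutate(pop, i, best, F, strategy, rng):
    """Create a mutant vector for individual i using one of the five
    classical DE mutation strategies (Eqs. (1)-(5))."""
    # r1..r5: mutually different random indices, all different from i
    r = rng.choice([k for k in range(len(pop)) if k != i], size=5, replace=False)
    x = lambda k: pop[r[k]]
    if strategy == "rand/1":
        return x(0) + F * (x(1) - x(2))
    if strategy == "best/1":
        return pop[best] + F * (x(0) - x(1))
    if strategy == "current-to-best/1":
        return pop[i] + F * (pop[best] - pop[i]) + F * (x(0) - x(1))
    if strategy == "rand/2":
        return x(0) + F * (x(1) - x(2)) + F * (x(3) - x(4))
    if strategy == "best/2":
        return pop[best] + F * (x(0) - x(1)) + F * (x(2) - x(3))
    raise ValueError(f"unknown strategy: {strategy}")

# Illustrative population of 10 individuals in 5 dimensions
rng = np.random.default_rng(1)
pop = rng.uniform(-100, 100, size=(10, 5))
v = mutate(pop, i=0, best=3, F=0.5, strategy="rand/1", rng=rng)
```

Each strategy differs only in its base vector (random, best, or current) and in how many difference vectors are added, which is what drives the exploration/exploitation trade-off discussed above.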
2.2. Crossover. Similar to other EAs, DE employs a crossover operator to build trial vectors \(U_{i,G}\) by recombining the current vector \(X_{i,G}\) and the mutant vector \(V_{i,G}\). The trial vector is defined as follows:

\[ u_{j,i,G} = \begin{cases} v_{j,i,G}, & \text{if } \mathrm{rand}_j(0,1) \le CR \ \text{or}\ j = j_{\mathrm{rand}}, \\ x_{j,i,G}, & \text{otherwise}, \end{cases} \tag{6} \]

where \(CR \in (0, 1)\) is the predefined crossover probability, \(\mathrm{rand}_j(0,1)\) is a uniform random number within \([0, 1]\) drawn for the \(j\)th dimension, and \(j_{\mathrm{rand}} \in \{1, 2, \ldots, D\}\) is a random index that guarantees at least one dimension is inherited from the mutant vector.
2.3. Selection. After crossover, a greedy selection mechanism chooses the better of the parent vector \(X_{i,G}\) and the trial vector \(U_{i,G}\) according to their fitness values \(f(\cdot)\). Without loss of generality, this paper considers only minimization problems. If, and only if, the trial vector \(U_{i,G}\) is no worse than the parent vector \(X_{i,G}\), then \(X_{i,G+1}\) is set to \(U_{i,G}\); otherwise, \(X_{i,G+1}\) keeps the value of \(X_{i,G}\):

\[ X_{i,G+1} = \begin{cases} U_{i,G}, & \text{if } f(U_{i,G}) \le f(X_{i,G}), \\ X_{i,G}, & \text{otherwise}. \end{cases} \tag{7} \]
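A minimal sketch of the binomial crossover and greedy selection in Python (illustrative helper names, `numpy` assumed):

```python
import numpy as np

def crossover(x, v, CR, rng):
    """Binomial crossover (Eq. (6)): each dimension takes the mutant value
    with probability CR; dimension j_rand always comes from the mutant."""
    D = len(x)
    mask = rng.random(D) <= CR
    mask[rng.integers(D)] = True   # j_rand: at least one mutant dimension
    return np.where(mask, v, x)

def select(x, u, f):
    """Greedy selection (Eq. (7)): keep the trial vector u if it is
    no worse than the parent x (minimization)."""
    return u if f(u) <= f(x) else x

sphere = lambda z: float(np.sum(z * z))
rng = np.random.default_rng(0)
x, v = np.zeros(4), np.ones(4)
u = crossover(x, v, CR=0.9, rng=rng)
survivor = select(x, u, sphere)    # the all-zero parent survives on Sphere
```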
3. Heterogeneous DE
In the standard DE and most of its modifications, individuals in the population behave with the same search characteristics (exploration and/or exploitation) because they use the same DE scheme. As a result, the algorithms can hardly balance exploration and exploitation during the search process. Inspired by heterogeneous swarms [11], a simple and effective heterogeneous DE (HDE) algorithm is proposed in this paper. Compared to DE and most of its variants, individuals in HDE are allowed to implement different behaviors.
To implement different DE schemes in HDE, we need to address two questions. First, which DE schemes should be chosen to construct the DE scheme pool? Second, how do we assign DE schemes to individuals?
As mentioned before, there are several DE schemes, and different schemes have different search characteristics. In this paper, three DE schemes are chosen to construct the DE scheme pool: (1) DE/rand/1/bin; (2) DE/best/1/bin; and (3) DE/BoR/1/bin [16]. The first two are basic DE strategies proposed in [1]; the last one was recently proposed in [16], where a new mutation strategy called "best-of-random" (BoR) is defined as follows:
\[ V_{i,G} = X_{br,G} + F \cdot (X_{r_1,G} - X_{r_2,G}), \tag{8} \]

where the individuals \(X_{r_1}\), \(X_{r_2}\), and \(X_{br}\) are randomly chosen from the current population, \(i \ne r_1 \ne r_2 \ne br\), and \(X_{br}\) is the best one of the three; that is, \(f(X_{br}) \le \min(f(X_{r_1}), f(X_{r_2}))\).
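The best-of-random base selection can be sketched as follows (hypothetical helper names; `numpy` assumed):

```python
import numpy as np

def mutate_bor(pop, fitness, i, F, rng):
    """DE/BoR/1 mutation (Eq. (8)): draw three mutually different random
    individuals (all != i); the fittest of the three serves as the base
    vector, and the remaining two form the difference pair."""
    trio = list(rng.choice([k for k in range(len(pop)) if k != i],
                           size=3, replace=False))
    trio.sort(key=lambda k: fitness[k])   # minimization: best first
    br, r1, r2 = trio
    return pop[br] + F * (pop[r1] - pop[r2])

rng = np.random.default_rng(2)
pop = rng.uniform(-5, 5, size=(8, 3))
fitness = [float(np.sum(x * x)) for x in pop]
v = mutate_bor(pop, fitness, i=0, F=0.5, rng=rng)
```

Using the fittest of three random picks as the base gives a mild bias toward good regions without the strong pull of the global best, which is why the paper describes BoR as a middle ground between rand/1 and best/1.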
Figure 1: The encoding of individuals in heterogeneous DE. [Each individual stores its decision variables \(X_{i,G}\) together with an identifier \(ID_i\) of its assigned DE scheme, for \(i = 1, \ldots, N_p\).]
There are two reasons for choosing these DE schemes. First, all three are very simple and easy to implement. Second, each has different search characteristics: DE/rand/1/bin maintains higher population diversity, DE/best/1/bin shows faster convergence, and DE/BoR/1/bin provides a middle ground between the other two. Note that this paper chooses only these three DE schemes to construct the scheme pool; other DE schemes could also be used.
Figure 1 presents the encoding method in HDE, where \(ID_i\) denotes the DE scheme employed by the \(i\)th individual. In this paper, \(ID_i \in \{1, 2, 3\}\): \(ID_i = 1\) indicates the DE/rand/1/bin scheme, \(ID_i = 2\) the DE/best/1/bin scheme, and \(ID_i = 3\) the DE/BoR/1/bin scheme.
To address the second question, two different heterogeneous models, namely static HDE (sHDE) and dynamic HDE (dHDE), are used to assign DE schemes to individuals [11].
(i) In the static HDE (sHDE), DE schemes are randomly assigned to individuals at population initialization. The assigned DE schemes do not change during the search process.
(ii) In the dynamic HDE (dHDE), DE schemes are also randomly assigned to individuals during initialization, but the assignment is not fixed: schemes can change randomly during the search process. An individual randomly selects a new DE scheme from the pool whenever it fails to improve its objective fitness value. In the standard DE and its variants, the greedy selection ensures that the fitness of each individual \(X_i\) is nonincreasing over generations, \(f(X_{i,1}) \ge f(X_{i,2}) \ge \cdots \ge f(X_{i,G})\). If the newly generated trial vector \(U_i\) cannot improve its parent \(X_i\), this may indicate early stagnation, which dHDE addresses by assigning a new DE scheme to the individual.
The main steps of dynamic heterogeneous DE (dHDE) are presented in Algorithm 1, where P is the current population, FEs is the number of fitness evaluations, and MAX_FEs is the maximum number of FEs. Static HDE (sHDE) is obtained by omitting the scheme reassignment (lines 20–22) in Algorithm 1.
4. Experimental Verifications
This section provides experimental studies of HDE on 18 well-known benchmark optimization problems. According to the properties of these problems, two series of experiments are conducted: (1) comparisons of HDE on classical optimization problems and (2) comparisons of HDE on shifted large-scale optimization problems.
4.1. Results on Classical Optimization Problems. In this section, 12 classical benchmark problems are used to verify the performance of HDE. These problems were considered in [2, 5, 17–19]. Table 1 presents a brief description of these benchmark problems. All the problems are to be minimized.
Experiments are conducted to compare HDE with six other DE variants. The algorithms involved are listed below:

(i) DE/rand/1/bin;
(ii) DE/best/1/bin;
(iii) DE/BoR/1/bin [16];
(iv) self-adapting DE (jDE) [5];
(v) DE with neighborhood search (NSDE) [20];
(vi) DE using neighborhood-based mutation (DEGL) [2];
(vii) the proposed sHDE and dHDE.
In the experiments, we have two series of comparisons: (1) comparison of sHDE/dHDE with the basic DE schemes and (2) comparison of sHDE/dHDE with state-of-the-art DE variants. The first comparison checks whether the heterogeneous method helps to improve the performance of DE. The second investigates whether the proposed approach is better or worse than some recently proposed DE variants.
For these two comparisons, we use the same parameter settings, as follows.
Table 1: The 12 classical benchmark optimization problems.

Problem | Name               | D  | Properties | Search range
f1      | Sphere             | 25 | Unimodal   | [-100, 100]
f2      | Schwefel 2.22      | 25 | Unimodal   | [-10, 10]
f3      | Schwefel 1.2       | 25 | Unimodal   | [-100, 100]
f4      | Schwefel 2.21      | 25 | Unimodal   | [-100, 100]
f5      | Rosenbrock         | 25 | Multimodal | [-30, 30]
f6      | Step               | 25 | Unimodal   | [-100, 100]
f7      | Quartic with noise | 25 | Unimodal   | [-1.28, 1.28]
f8      | Schwefel 2.26      | 25 | Multimodal | [-500, 500]
f9      | Rastrigin          | 25 | Multimodal | [-5.12, 5.12]
f10     | Ackley             | 25 | Multimodal | [-32, 32]
f11     | Griewank           | 25 | Multimodal | [-600, 600]
f12     | Penalized          | 25 | Multimodal | [-50, 50]
(1)  Randomly initialize the population P, including the variables and scheme IDs;
(2)  Evaluate the objective fitness value of each individual in P;
(3)  FEs = N_p;
(4)  while FEs <= MAX_FEs do
(5)      for i = 1 to N_p do
(6)          if ID_i == 1 then
(7)              Use DE/rand/1/bin to generate a trial vector U_i;
(8)          end
(9)          if ID_i == 2 then
(10)             Use DE/best/1/bin to generate a trial vector U_i;
(11)         end
(12)         if ID_i == 3 then
(13)             Use DE/BoR/1/bin to generate a trial vector U_i;
(14)         end
(15)         Evaluate the objective fitness value of U_i;
(16)         FEs++;
(17)         if f(U_i) <= f(X_i) then
(18)             X_i = U_i;
(19)         end
(20)         else    /* For static HDE (sHDE), omit lines 20-22 */
(21)             Randomly assign a new, different DE scheme to X_i (change the value of ID_i);
(22)         end
(23)     end
(24) end

Algorithm 1: The dynamic heterogeneous DE (dHDE).
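Putting the pieces together, Algorithm 1 can be sketched compactly in Python. This is an illustrative reimplementation, not the authors' code; function and parameter names (and the small budget used in the example) are our own:

```python
import numpy as np

def dhde(f, lo, hi, D, Np=30, F=0.5, CR=0.9, max_fes=3000, seed=0):
    """Sketch of dHDE (Algorithm 1): scheme IDs 1 = rand/1/bin,
    2 = best/1/bin, 3 = BoR/1/bin; a failed trial vector triggers
    random reassignment to a different scheme."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(Np, D))
    fit = np.array([f(x) for x in pop])
    ids = rng.integers(1, 4, size=Np)              # random initial schemes
    fes = Np
    while fes < max_fes:
        for i in range(Np):
            best = int(np.argmin(fit))
            r = rng.choice([k for k in range(Np) if k != i], 3, replace=False)
            if ids[i] == 1:                        # DE/rand/1
                v = pop[r[0]] + F * (pop[r[1]] - pop[r[2]])
            elif ids[i] == 2:                      # DE/best/1
                v = pop[best] + F * (pop[r[0]] - pop[r[1]])
            else:                                  # DE/BoR/1 (best-of-random)
                br = min(r, key=lambda k: fit[k])
                d1, d2 = [k for k in r if k != br]
                v = pop[br] + F * (pop[d1] - pop[d2])
            mask = rng.random(D) <= CR             # binomial crossover
            mask[rng.integers(D)] = True
            u = np.where(mask, v, pop[i])
            fu, fes = f(u), fes + 1
            if fu <= fit[i]:                       # greedy selection
                pop[i], fit[i] = u, fu
            else:                                  # stagnation: new scheme
                ids[i] = rng.choice([s for s in (1, 2, 3) if s != ids[i]])
            if fes >= max_fes:
                break
    b = int(np.argmin(fit))
    return pop[b], float(fit[b])

# Example on the Sphere function (illustrative, small evaluation budget)
best_x, best_f = dhde(lambda x: float(np.sum(x * x)), -100, 100, D=5)
```

The static variant (sHDE) is obtained by simply deleting the `else` branch that reassigns `ids[i]`.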
For the three basic DE schemes (DE/rand/1/bin, DE/best/1/bin, and DE/BoR/1/bin) and for sHDE/dHDE, the control parameters F and CR are set to 0.5 and 0.9, respectively [5]. For jDE, NSDE, and DEGL, the parameters F and CR are self-adaptive. For all algorithms, the population size (N_p) and the maximum number of FEs are set to 10·D and 5.00E+05, respectively [2]. All experiments are run 30 times, and the mean error fitness values are reported.
4.1.1. Comparison of HDE with Basic DE Schemes. The comparison results of HDE against the three basic DE schemes are presented in Table 2, where "Mean" denotes the mean error fitness value. From the results, sHDE and dHDE outperform the three basic DE schemes on a majority of test problems. On the unimodal problems (f1–f4), DE/best/1/bin converges faster than the other algorithms because of the attraction of the global best individual. On f6 and f7, all algorithms obtain similar performance. DE/rand/1/bin provides high population diversity but a slower convergence rate; that is why it performs better than DE/best/1/bin on most multimodal problems but worse on unimodal ones. DE/BoR/1/bin occupies a middle ground between DE/rand/1/bin and DE/best/1/bin and obtains better performance than the other two basic schemes. Through the heterogeneity of these basic schemes, sHDE and dHDE significantly improve the performance of DE. On unimodal problems, DE/best/1/bin obtains the best performance, while sHDE and dHDE perform better than DE/rand/1/bin and DE/BoR/1/bin.
Table 2: Comparison of HDE with basic DE schemes on classical benchmark problems (mean error fitness values).

Problem | DE/rand/1/bin | DE/best/1/bin | DE/BoR/1/bin | sHDE     | dHDE
f1      | 0.00E+00      | 4.38E-27      | 2.37E-46     | 6.58E-79 | 0.00E+00
f2      | 0.00E+00      | 2.64E-13      | 3.02E-22     | 3.32E-39 | 4.96E-103
f3      | 2.68E-125     | 2.41E-04      | 1.51E-11     | 3.10E-21 | 7.21E-57
f4      | 5.34E-64      | 1.77E-07      | 1.89E-11     | 1.42E-07 | 2.49E-40
f5      | 5.75E-29      | 2.91E-04      | 1.45E-16     | 1.11E-29 | 1.73E-29
f6      | 0.00E+00      | 0.00E+00      | 0.00E+00     | 0.00E+00 | 0.00E+00
f7      | 1.38E-03      | 2.50E-03      | 1.52E-03     | 8.31E-04 | 1.84E-03
f8      | 3.36E+03      | 1.68E+03      | 2.78E+03     | 0.00E+00 | 8.56E+02
f9      | 4.38E+01      | 1.32E+02      | 2.62E+01     | 5.22E+01 | 0.00E+00
f10     | 2.81E+00      | 2.55E-14      | 4.14E-15     | 4.14E-15 | 4.14E-15
f11     | 2.95E-02      | 0.00E+00      | 0.00E+00     | 0.00E+00 | 0.00E+00
f12     | 9.96E-01      | 5.00E-28      | 1.88E-32     | 1.88E-32 | 1.88E-32
Table 3: Comparison of HDE with jDE, NSDE, and DEGL on classical benchmark problems (mean error fitness values).

Problem | jDE      | NSDE     | DEGL     | sHDE     | dHDE
f1      | 4.04E-35 | 9.55E-35 | 8.78E-37 | 6.58E-79 | 0.00E+00
f2      | 8.34E-26 | 8.94E-30 | 4.95E-36 | 3.32E-39 | 4.96E-103
f3      | 4.76E-14 | 3.06E-09 | 1.21E-26 | 3.10E-21 | 7.21E-57
f4      | 3.02E-14 | 2.09E-11 | 4.99E-15 | 1.42E-04 | 2.49E-40
f5      | 5.64E-26 | 2.65E-25 | 6.89E-25 | 1.11E-29 | 1.73E-29
f6      | 1.67E-36 | 4.04E-28 | 9.56E-48 | 0.00E+00 | 0.00E+00
f7      | 3.76E-02 | 4.35E-03 | 1.05E-07 | 8.31E-04 | 1.84E-03
f8      | 0.00E+00 | 2.60E+00 | 0.00E+00 | 0.00E+00 | 8.56E+02
f9      | 6.74E-24 | 4.84E-21 | 5.85E-25 | 5.22E+01 | 0.00E+00
f10     | 7.83E-15 | 5.97E-10 | 5.98E-23 | 4.14E-15 | 4.14E-15
f11     | 1.83E-28 | 7.93E-26 | 2.99E-36 | 0.00E+00 | 0.00E+00
f12     | 9.37E-24 | 5.85E-21 | 7.21E-27 | 1.88E-32 | 1.88E-32
On multimodal problems, DE/best/1/bin is the worst algorithm, while sHDE and dHDE obtain better performance than the other algorithms.

Comparing sHDE and dHDE directly, sHDE performs better on 3 problems, while dHDE achieves better results on 5 problems; on the remaining 4 problems, both find the global optimum. Although the dynamic heterogeneous model improves the performance of HDE on many problems, it may lead to premature convergence. In dHDE, if the current individual cannot improve its fitness value, a new DE scheme is assigned to it. If the new scheme is DE/best/1/bin, the individual will be attracted by the global best individual, which carries a risk of premature convergence.
4.1.2. Comparison of HDE with Other State-of-the-Art DE Variants. The comparison results of HDE with three state-of-the-art DE variants are presented in Table 3, where "Mean" denotes the mean error fitness value. From the results, sHDE and dHDE perform better than the other three DE algorithms on the majority of test problems.
Table 4: Average rankings achieved by the Friedman test.

Algorithm | Ranking
dHDE      | 4.17
sHDE      | 3.58
DEGL      | 3.50
jDE       | 2.25
NSDE      | 1.50
dHDE outperforms jDE and NSDE on all test problems except f8. On this problem, dHDE falls into a local minimum, while sHDE solves it successfully. dHDE achieves better results than DEGL on 9 problems, while DEGL performs better than dHDE on the remaining 3.

In order to compare multiple algorithms over the whole test suite, we conducted the Friedman test following the suggestions of [21]. Table 4 shows the average rankings of jDE, NSDE, DEGL, sHDE, and dHDE. As seen, the five algorithms can be sorted by average ranking into the following order: dHDE, sHDE, DEGL,
Table 5: The 6 shifted large-scale benchmark optimization problems proposed in [15].

Problem | Name                  | D   | Properties                    | Search range
F1      | Shifted Sphere        | 500 | Unimodal, separable, scalable | [-100, 100]
F2      | Shifted Schwefel 2.21 | 500 | Unimodal, nonseparable        | [-100, 100]
F3      | Shifted Rosenbrock    | 500 | Multimodal, nonseparable      | [-100, 100]
F4      | Shifted Rastrigin     | 500 | Multimodal, separable         | [-5, 5]
F5      | Shifted Griewank      | 500 | Multimodal, nonseparable      | [-600, 600]
F6      | Shifted Ackley        | 500 | Multimodal, separable         | [-32, 32]
Table 6: Comparison of HDE with ODE and MDE on shifted large-scale benchmark problems (mean error fitness values).

Problem | ODE      | MDE      | sHDE     | dHDE
F1      | 8.02E+01 | 1.95E+01 | 8.45E+00 | 1.14E-10
F2      | 5.78E+00 | 2.70E+01 | 8.54E+01 | 1.01E+02
F3      | 1.54E+05 | 4.67E+05 | 1.52E+05 | 1.29E+03
F4      | 4.22E+03 | 4.14E+03 | 5.27E+03 | 3.42E+03
F5      | 1.77E+00 | 1.52E+00 | 6.94E-01 | 1.83E-01
F6      | 4.51E+00 | 4.02E+00 | 1.26E+00 | 1.58E+01
jDE, and NSDE. The best average ranking was obtained by the dHDE algorithm, which outperforms the other four algorithms.
4.2. Results of HDE on Shifted Large-Scale Optimization Problems. To verify the performance of HDE on complex optimization problems, this section provides experimental studies on 6 shifted large-scale problems. These problems were used in the CEC 2008 special session and competition on large-scale global optimization [15]. Table 5 presents a brief description of these benchmark problems. All the problems are to be minimized.
Experiments are conducted to compare four DE algorithms, including the proposed sHDE and dHDE, on the 6 test problems with D = 500. The algorithms involved are listed below:

(i) opposition-based DE (ODE) [22];
(ii) modified DE using Cauchy mutation (MDE) [23];
(iii) the proposed sHDE and dHDE.
For a fair comparison, all four algorithms use the same parameter settings, following the suggestions of [22]. The population size N_p is set to D. The control parameters F and CR are set to 0.5 and 0.9, respectively [22]. For ODE, the jumping rate J_r is 0.3. The maximum number of FEs is 5000·D. All experiments are run 30 times, and the mean error fitness values are reported.
Table 6 presents the comparison results of HDE with ODE and MDE on the shifted large-scale problems. As seen, sHDE and dHDE outperform ODE and MDE on four problems. The static heterogeneous model slightly improves the final solutions of DE, while the dynamic model significantly improves the results on F1 and F3. However, the two heterogeneous models do not always work: on some problems, sHDE and/or dHDE perform worse than ODE and MDE. A possible reason is that the heterogeneous models may hinder the evolution. In the static model, the entire population is effectively divided into three groups, each using a different basic DE scheme to generate new individuals. Though these groups share the search information of the entire population, the small size of each group may slow its convergence rate. As mentioned before, the dynamic heterogeneous model runs the risk of premature convergence.
5. Conclusion
In the standard DE and most of its modifications, individuals follow the same search behavior because they use the same DE scheme. As a result, the algorithms can hardly balance explorative and exploitative abilities during the evolution. Inspired by heterogeneous swarms, a simple and effective heterogeneous DE (HDE) is proposed in this paper, where individuals can follow different search behaviors randomly selected from a DE scheme pool. To implement HDE, two heterogeneous models are proposed: a static model, where individuals do not change their search behaviors, and a dynamic model, where individuals may change them. Both versions of HDE initialize individual behaviors by randomly selecting DE schemes from the scheme pool. In the dynamic HDE, when an individual fails to improve its fitness value, a new, different DE scheme is randomly selected from the pool.
To verify the performance of HDE, two types of benchmark problems, classical problems and shifted large-scale problems, are used in the experiments. Simulation results show that the proposed sHDE and dHDE outperform the other eight DE variants on a majority of test problems. This demonstrates that the heterogeneous models are helpful for improving the performance of DE.
For the dynamic heterogeneous model, the frequency of switching to new DE schemes may affect the performance of dHDE; more experiments will investigate this. The concept of heterogeneous swarms can also be applied to other evolutionary algorithms to obtain good performance, and future research will investigate different algorithms based on heterogeneous swarms. Moreover, we will try to apply our approach to some real-world problems [24, 25].
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
Acknowledgments
This work is supported by the Humanity and Social Science Foundation of the Ministry of Education of China (no. 13YJCZH174), the National Natural Science Foundation of China (nos. 61305150, 61261039, and 61175127), the Science and Technology Plan Projects of Jiangxi Provincial Education Department (nos. GJJ13744 and GJJ13762), and the Foundation of the State Key Laboratory of Software Engineering (no. SKLSE2012-09-19).
References
[1] R. Storn and K. Price, "Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341–359, 1997.
[2] S. Das, A. Abraham, U. K. Chakraborty, and A. Konar, "Differential evolution using a neighborhood-based mutation operator," IEEE Transactions on Evolutionary Computation, vol. 13, no. 3, pp. 526–553, 2009.
[3] J. Zhang and A. C. Sanderson, "JADE: adaptive differential evolution with optional external archive," IEEE Transactions on Evolutionary Computation, vol. 13, no. 5, pp. 945–958, 2009.
[4] J. Liu and J. Lampinen, "A fuzzy adaptive differential evolution algorithm," Soft Computing, vol. 9, no. 6, pp. 448–462, 2005.
[5] J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, vol. 10, no. 6, pp. 646–657, 2006.
[6] A. K. Qin, V. L. Huang, and P. N. Suganthan, "Differential evolution algorithm with strategy adaptation for global numerical optimization," IEEE Transactions on Evolutionary Computation, vol. 13, no. 2, pp. 398–417, 2009.
[7] F. Neri and V. Tirronen, "Recent advances in differential evolution: a survey and experimental analysis," Artificial Intelligence Review, vol. 33, no. 1-2, pp. 61–106, 2010.
[8] S. Das and P. N. Suganthan, "Differential evolution: a survey of the state-of-the-art," IEEE Transactions on Evolutionary Computation, vol. 15, no. 1, pp. 4–31, 2011.
[9] X. S. Yang and S. Deb, "Two-stage eagle strategy with differential evolution," International Journal of Bio-Inspired Computation, vol. 4, no. 1, pp. 1–5, 2012.
[10] Y. W. Zhong, L. J. Wang, C. Y. Wang, and H. Zhang, "Multi-agent simulated annealing algorithm based on differential evolution algorithm," International Journal of Bio-Inspired Computation, vol. 4, no. 4, pp. 217–228, 2012.
[11] A. P. Engelbrecht, "Heterogeneous particle swarm optimization," in Proceedings of the International Conference on Swarm Intelligence, vol. 6234, pp. 191–202, 2010.
[12] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, December 1995.
[13] E. García-Gonzalo and J. L. Fernández-Martínez, "A brief historical review of particle swarm optimization (PSO)," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 1, pp. 3–16, 2012.
[14] W. Gong, Z. Cai, C. X. Ling, and H. Li, "Enhanced differential evolution with adaptive strategies for numerical optimization," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 2, pp. 397–413, 2011.
[15] K. Tang, X. Yao, P. N. Suganthan et al., "Benchmark functions for the CEC'2008 special session and competition on high-dimensional real-parameter optimization," Nature Inspired Computation and Applications Laboratory, USTC, Hefei, China, 2007.
[16] C. Lin, A. Qing, and Q. Feng, "A new differential mutation base generator for differential evolution," Journal of Global Optimization, vol. 49, no. 1, pp. 69–90, 2011.
[17] L. Ali and S. L. Sabat, "Particle swarm optimization based universal solver for global optimization," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 1, pp. 95–105, 2012.
[18] X. J. Cai, S. J. Fan, and Y. Tan, "Light responsive curve selection for photosynthesis operator of APOA," International Journal of Bio-Inspired Computation, vol. 4, no. 6, pp. 373–379, 2012.
[19] L. P. Xie, J. C. Zeng, and R. A. Formato, "Selection strategies for gravitational constant G in artificial physics optimisation based on analysis of convergence properties," International Journal of Bio-Inspired Computation, vol. 4, no. 6, pp. 380–391, 2012.
[20] Z. Yang, J. He, and X. Yao, "Making a difference to differential evolution," in Advances in Metaheuristics for Hard Optimization, pp. 397–414, 2008.
[21] S. García, A. Fernández, J. Luengo, and F. Herrera, "Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power," Information Sciences, vol. 180, no. 10, pp. 2044–2064, 2010.
[22] R. S. Rahnamayan, H. R. Tizhoosh, and M. M. A. Salama, "Opposition-based differential evolution," IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 64–79, 2008.
[23] M. Ali and M. Pant, "Improving the performance of differential evolution algorithm using Cauchy mutation," Soft Computing, vol. 15, no. 5, pp. 991–1007, 2011.
[24] A. Y. Abdelaziz, R. A. Osama, and S. M. Elkhodary, "Application of ant colony optimization and harmony search algorithms to reconfiguration of radial distribution networks with distributed generations," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 1, pp. 86–94, 2012.
[25] S. Y. Zeng, Z. Liu, C. H. Li, Q. Zhang, and W. Wang, "An evolutionary algorithm and its application in antenna design," Journal of Bioinformatics and Intelligent Control, vol. 1, no. 2, pp. 129–137, 2012.