
Elfeky EZ, Sarker RA, Essam DL. Analyzing the simple ranking and selection process for constrained evolutionary optimization. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 23(1): 19–34 Jan. 2008

Analyzing the Simple Ranking and Selection Process for Constrained Evolutionary Optimization

Ehab Z. Elfeky, Ruhul A. Sarker, and Daryl L. Essam

School of Information Technology and Electrical Engineering, University of New South Wales, ADFA Campus, Canberra 2600, Australia

E-mail: {e.elfeky, r.sarker, d.essam}@adfa.edu.au

Revised November 20, 2007.

Abstract Many optimization problems arising in practical applications have functional constraints, and some of these constraints are active, meaning that they prevent any solution from improving the objective function value beyond the constraint limits. Therefore, the optimal solution usually lies on the boundary of the feasible region. In order to converge faster when solving such problems, a new ranking and selection scheme is introduced which exploits this feature of constrained problems. In conjunction with selection, a new crossover method based on three parents is also presented. When comparing the results of this new algorithm with those of six other evolutionary-based methods, using 12 benchmark problems from the literature, it shows very encouraging performance. T-tests have been applied in this research to show whether there are any statistically significant differences between the algorithms. A study has also been carried out in order to show the effect of each component of the proposed algorithm.

Keywords constrained continuous optimization, evolutionary computation, genetic algorithms, multi-parent crossover

1 Introduction

Many optimization problems are nonlinear and constrained. These problems can be represented as follows:

min f(X),
s.t.  gi(X) ≤ 0,     i = 1, 2, . . . , m,
      hj(X) = 0,     j = 1, 2, . . . , p,
      Li ≤ xi ≤ Ui,  i = 1, 2, . . . , n,    (1)

where X ∈ Rn is the vector of solutions X = [x1, x2, . . . , xn]T. The objective function is f(X), m is the number of inequality constraints, gi(X) is the i-th inequality constraint, p is the number of equality constraints, and hj(X) is the j-th equality constraint. Each decision variable xi has a lower bound Li and an upper bound Ui.

Over the last two decades, Evolutionary Algorithms (EAs) have proved themselves as global optimization techniques. Among the evolutionary algorithms, Genetic Algorithms (GAs) are the most widely used technique for solving optimization problems. Constrained optimization problems have been considered as difficult problems. Many researchers and practitioners (including Mezura and Coello[1], Barbosa and Lemonge[2], Deb[3], Koziel and Michalewicz[4], Farmani and Wright[5], Venkatraman and Yen[6], and others) attempted to solve constrained problems using traditional GAs. Also, Runarsson and Yao[7] developed an Evolution Strategies (ES) based approach for solving constrained optimization problems. They showed that their algorithm outperforms GA-based approaches for 12 well-known test problems. In this paper, we analyze different combinations of operators and mechanisms, in order to design a new GA-based approach for solving certain classes of constrained optimization problems. For this purpose, we have designed a new ranking and selection method, and a new crossover method.

It is a common situation for many constrained optimization problems that some constraints are active at the global optimum point, thus the optimum point lies on the boundary of the feasible space[4]. This is usually true for business applications with limited resources. In such cases, it seems natural to restrict the search to near the boundary of the feasible space[8]. However, this may not be the situation for some other problems, such as engineering design and specially generated problems.

As we know, crossover is a key element of GAs.



Many studies have been carried out to show how this operator affects the evolutionary process. The most widely used crossover methods are k-point, uniform, intermediate, global discrete, order-based, and matrix-based[9]. Most of these crossovers are based on two parents. However, Eiben et al.[10] introduced a multi-parent reproduction process to GAs. They indicated that increasing the number of parents in the crossover leads to more reliable offspring. As they reported, performance increased up to a certain number of parents and then decreased. They concluded that the improvement was at its maximum when changing the number of parents from 2 to 3, so it was not worthwhile going beyond 3 or 4 parents[10]. Note that Eiben et al.[10] applied their genetic algorithm with a binary representation.

As we are dealing with constrained problems, the feasibility ratio (the ratio between the number of feasible individuals and the total population size) plays an important role in the search process and can thus be considered as an indicator of diversity. Hence, a wider range of feasibility ratios would usually help to locate more diverse solutions around the boundary of the feasible space. In this paper, the purpose of designing a three-parent crossover is to generate offspring close to the boundary of the feasible region. For this reason, we choose parents from both the feasible and infeasible regions for crossover. Such a multi-parent crossover should speed up convergence when assisted by the selection mechanism, which will be discussed in a later section.

Without mutation, GAs could become trapped in a local optimum while solving complex optimization problems. Among others, uniform and non-uniform mutations are well known in GA applications. Uniform mutation uses uniform random changes to individuals, which favors diversity but slows down convergence. Alternatively, Michalewicz[11] proposed a dynamic non-uniform mutation to reduce the disadvantage of uniform mutation in real-coded GAs. Also, Zhao et al.[12] reported that a non-uniform mutation operator searches the space uniformly during the early stage and very locally in the later stage. In other words, non-uniform mutation combines a higher probability of making long jumps at early stages with a much better local fine-tuning ability in later stages[12]. We have introduced a mutation that uses both uniform and non-uniform mutation, to exploit the advantages of both.

In GA applications, there are many different ways to select good individuals to survive and reproduce new offspring. The three most commonly used ranking and selection methods are as follows. The first is fitness proportional reproduction, in which individuals are chosen for selection in proportion to their fitness values. In the second method, rank-based selection, the population is sorted from best to worst, and the higher-ranked individuals are given higher probabilities of survival[9]. The third method is tournament selection, where a number of individuals are chosen randomly from the population (with or without replacement) and the best individual from this group is chosen as a parent for the next generation[9]. These processes are repeated until the mating pool is filled. There are a variety of other selection methods, including stochastic methods[7]. In this paper, we introduce an algorithm which uses tournament selection in some stages of the evolution process, and a new ranking method in other stages. This will be discussed later in more detail.

The penalty function is the most widely used method to deal with constraints in constrained optimization. The penalty techniques used are: static, dynamic, annealing, adaptive, and death penalties, superiority of feasible points, and faster adaptive methods. A good comparison and analysis of these methods can be found in Sarker and Newton[13]. Farmani and Wright[5] designed a two-stage dynamic penalty method which applies a small penalty to slightly infeasible solutions with reasonable fitness values. In this way, it permits those infeasible individuals to survive and be promoted to a feasible region near the optimal solution. Venkatraman and Yen[6] proposed a two-stage approach to solve constrained problems using GAs. The first stage treats the problem simply as a constraint satisfaction problem. In the second stage, the algorithm handles exploration and exploitation using non-dominated ranking and elitism respectively. In this paper, we calculate the constraint violation without penalizing the individuals, and we use this information to rank and select the individuals as parents, as detailed later.
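Instead of a penalty, the ranking scheme relies on a raw violation measure. The sketch below shows one common way to compute such a measure; the function and the toy constraint are illustrative assumptions, not code from the paper.

```python
def constraint_violation(x, ineqs, eqs, eps=1e-4):
    """Total constraint violation of a candidate solution x.

    ineqs: callables g_i with g_i(x) <= 0 when satisfied.
    eqs:   callables h_j with h_j(x) = 0 when satisfied (relaxed to
           |h_j(x)| <= eps, a common practice for equality constraints).
    Returns 0.0 for feasible points and a positive amount otherwise.
    """
    v = sum(max(0.0, g(x)) for g in ineqs)
    v += sum(max(0.0, abs(h(x)) - eps) for h in eqs)
    return v

# Toy instance: g(x) = 1 - x1 - x2 <= 0.
g = lambda x: 1.0 - x[0] - x[1]
print(constraint_violation([0.25, 0.25], [g], []))  # 0.5 (violated by 0.5)
print(constraint_violation([0.75, 0.75], [g], []))  # 0.0 (feasible)
```

Individuals with zero violation are feasible; the ranking scheme of Section 2.1 sorts infeasible individuals by exactly this kind of quantity.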

The developed algorithm was tested using twelve benchmark problems from the specialized literature, and was compared with five other GA-based algorithms and one ES-based algorithm. From the comparisons, we can claim that the proposed algorithm performs very well for the twelve constrained optimization problems tested. Since EAs are stochastic algorithms, it is logical to make stochastic comparisons between the algorithms, as the best fitness value could simply be an outlier. For this purpose, we have analyzed the results using means and standard deviations, and performed a


series of t-tests. The results of the t-tests provide useful insight into the stochastic search algorithms. We have also analyzed the contribution of the individual components of the proposed algorithm, which consequently justifies the inclusion of these components.

This paper is organized as follows. Section 2 presents the proposed algorithm, the experimental study is reported in Section 3, the analysis of results is provided in Section 4, and finally, the conclusion is given in Section 5.

2 The Proposed Algorithm

In this section, we present our proposed algorithm. This algorithm uses a floating-point representation, and its main steps are as follows.

1) Create a random initial population.
2) Check the feasibility of all individuals.
3) Evaluate the population.
4) If the stopping criterion has been met, stop; otherwise continue.
5) If the feasibility ratio is between LR (lower limit of ratio) and UR (upper limit of ratio), apply the proposed ranking and selection scheme; otherwise, apply the regular tournament selection.
6) Apply the triangular crossover.
7) Apply the mixed mutation.
8) Apply elitism by replacing the worst current individual with the overall previous generations' best individual.
9) Go to step 4).
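The steps above can be sketched as the following control-flow skeleton (a sketch only, assuming minimization; all operators are passed in as callables standing in for the components detailed in Sections 2.1 to 2.5, and the names here are our placeholders, not the paper's code):

```python
def evolve(pop, evaluate, is_feasible,
           rank_select, tourn_select, crossover, mutate,
           max_gens, LR=0.3, UR=0.8):
    """Skeleton of steps 1)-9); the operator callables are placeholders
    for the components detailed in Sections 2.1-2.5."""
    best = min(pop, key=evaluate)                       # assumes minimization
    for _ in range(max_gens):                           # step 4: stop on budget
        ratio = sum(map(is_feasible, pop)) / len(pop)
        if LR <= ratio <= UR:                           # step 5
            pool = rank_select(pop)                     # proposed scheme
        else:
            pool = tourn_select(pop)                    # regular tournament
        pop = [mutate(c) for c in crossover(pool)]      # steps 6) and 7)
        worst = max(range(len(pop)), key=lambda i: evaluate(pop[i]))
        pop[worst] = best                               # step 8: elitism
        best = min(pop, key=evaluate)
    return best
```

With identity operators and a trivial objective this reduces to keeping the best individual, which makes the elitism step easy to check in isolation.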

The details of the components of our algorithm are discussed below.

2.1 Ranking

To exploit the feature that optimal solutions exist on the boundary of the feasible region, the selection and search process should concentrate on the individuals in that region. Therefore, the ranking scheme is designed as follows.
• The feasible individuals are ordered from the best to the worst based on their objective function values.
• Those solutions are then divided into two groups. Group (a) contains a fixed proportion of the higher-quality solutions (smaller objective function values), and group (b) contains the lower-quality feasible solutions. Since the group (b) individuals are on the worse side of the feasible region, they are not considered in the selection process, as they would slow down the algorithm's convergence.
• The infeasible individuals are arranged in ascending order of their constraint violations. A given proportion of the individuals with the highest violations (of all constraints) are discarded from the selection process (see group (e) in Fig.1), because they are further away from the feasible region and would slow down the algorithm's convergence. We are fully aware that these discarded individuals might diversify the search process, but consider that this diversity requires more computational time.
• The rest of the infeasible individuals are then arranged in ascending order of their objective function values. All infeasible individuals that have worse objective function values than the best feasible individual are then discarded from the selection (see group (d) in Fig.1), because they would guide the search process in the wrong direction, away from the optimal solution.
• The remaining individuals are the target of the selection, because they lie in the right region, near both the optimal solution and the feasible space (see group (c) in Fig.1).
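As a runnable illustration of this ranking scheme, the population can be partitioned as sketched below (the function name and the two proportion parameters are assumptions made for the sketch; the paper fixes its own proportions):

```python
def rank_and_filter(pop, objective, violation,
                    keep_feasible=0.5, drop_infeasible=0.2):
    """Partition a population into group (a) and group (c) of Fig.1.

    Groups (b), (d), and (e) are discarded, as described in Section 2.1.
    Assumes minimization; `violation` returns 0 for feasible individuals.
    """
    feasible = sorted((x for x in pop if violation(x) == 0), key=objective)
    infeasible = sorted((x for x in pop if violation(x) > 0), key=violation)

    # Group (a): the better-quality proportion of the feasible individuals.
    group_a = feasible[:max(1, int(len(feasible) * keep_feasible))] if feasible else []

    # Drop group (e): the given proportion with the highest violations.
    survivors = infeasible[:len(infeasible) - int(len(infeasible) * drop_infeasible)]

    # Drop group (d): infeasible individuals whose objective is worse than
    # the best feasible one; the rest, sorted by objective, form group (c).
    if feasible:
        best_obj = objective(feasible[0])
        group_c = sorted((x for x in survivors if objective(x) < best_obj),
                         key=objective)
    else:
        group_c = sorted(survivors, key=objective)
    return group_a, group_c

# 1-D toy: minimize x, feasible iff x >= 0 (violation = max(0, -x)).
a, c = rank_and_filter([4.0, 1.0, 3.0, 2.0, -1.0, -3.0, -2.0],
                       objective=lambda x: x,
                       violation=lambda x: max(0.0, -x))
print(a, c)  # [1.0, 2.0] [-3.0, -2.0, -1.0]
```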

Fig.1. Ranking scheme in the proposed algorithm. O(x) is the objective value of individuals in group (x), V(x) is the constraint violation of individuals in group (x).


2.2 Selection

At this point, two groups of individuals remain to be considered: group (a), which contains the individuals on the feasible side of the boundary of the feasible region, and group (c), which contains the individuals on the infeasible side. If there are individuals in group (c), then in the selection process two feasible individuals are selected from the first group and one infeasible individual is selected from the second group; these three individuals then undergo the triangular crossover process. Otherwise, all three individuals are chosen from group (a).

1  if (feasibility ratio is inside the range [LR..UR]) do
2      for i := 1 to population size/3 do
3          x ← random individual from group (a)
4          y ← individual from group (a)
5          z ← individual from group (c)
6          copy individuals x, y, and z to the new mating pool
7      od
8  else {apply Deb's method[3]}
9      for i := 1 to population size do
10         a ← random individual
11         b ← random individual
12         if (a is infeasible and b is infeasible) then
13             if (a cnstr violation < b cnstr violation) then
14                 copy individual a to the new mating pool
15             else
16                 copy individual b to the new mating pool
17             fi
18         else if (a is feasible and b is feasible) then
19             if (a objective < b objective) then
20                 copy individual a to the new mating pool
21             else
22                 copy individual b to the new mating pool
23             fi
24         else if (a is feasible and b is infeasible) then
25             copy individual a to the new mating pool
26         else if (a is infeasible and b is feasible) then
27             copy individual b to the new mating pool
28         fi
29         fi
30     od
31 fi

Fig.2. Outline of the selection process.

This ranking and selection scheme needs a reasonable number of both feasible and infeasible individuals to work; therefore this mechanism is applied only when the feasibility ratio is within [LR, UR] = [0.3, 0.8] in this paper. Otherwise, the regular tournament selection is used to select two individuals from the tournament set and a third one randomly. In cases where the optimal solution is not on the boundary, group (b) can be used instead of group (c). The pseudo-code for the selection process is shown in Fig.2.
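A runnable rendering of the logic of Fig.2 follows (a sketch: the grouping arguments are assumed to come from the ranking step, and the tournament branch follows Deb's feasibility-first rules; names are our own):

```python
import random

def select_mating_pool(pop, group_a, group_c, objective, violation,
                       is_feasible, LR=0.3, UR=0.8, rng=random):
    """Sketch of the selection process of Section 2.2 / Fig.2 (minimization)."""
    ratio = sum(map(is_feasible, pop)) / len(pop)
    pool = []
    if LR <= ratio <= UR and group_c:
        for _ in range(len(pop) // 3):          # triples (x, y, z)
            pool.append(rng.choice(group_a))
            pool.append(rng.choice(group_a))
            pool.append(rng.choice(group_c))
    else:
        # Deb's binary tournament: feasibility first, then quality.
        for _ in range(len(pop)):
            a, b = rng.choice(pop), rng.choice(pop)
            if is_feasible(a) != is_feasible(b):
                pool.append(a if is_feasible(a) else b)
            elif is_feasible(a):
                pool.append(a if objective(a) < objective(b) else b)
            else:
                pool.append(a if violation(a) < violation(b) else b)
    return pool

# Half-feasible toy population: the proposed-scheme branch is taken.
pool = select_mating_pool([1.0, 2.0, -1.0, -2.0],
                          group_a=[1.0], group_c=[-1.0],
                          objective=lambda x: x,
                          violation=lambda x: max(0.0, -x),
                          is_feasible=lambda x: x >= 0)
print(pool)  # [1.0, 1.0, -1.0]
```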

2.3 Triangular Crossover

From the new mating pool, each successive group of three individuals is taken as the parents p1, p2, p3. Next, choose three random numbers r1, r2, r3, each in the interval [0, 1], where r1 + r2 + r3 = 1. The resulting offspring for the new generation are constructed as linear combinations of the three parents as follows:

o1 = (r1 × p1) + (r2 × p3) + (r3 × p2),
o2 = (r1 × p2) + (r2 × p1) + (r3 × p3),
o3 = (r1 × p3) + (r2 × p2) + (r3 × p1).

Selecting two feasible parents and one infeasible parent gives a higher probability for the offspring to be feasible. The offspring resulting from this crossover of such selected individuals should ideally be nearer to the boundary of the feasible region than their parents were. The range of feasibility ratios controls the diversity of the solutions, while the proposed crossover speeds up the convergence of the search process.
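A sketch of this operator in Python (the simplex-reflection way of drawing the weights is our assumption; the paper only requires that r1 + r2 + r3 = 1):

```python
import random

def triangular_crossover(p1, p2, p3, rng=random):
    """Three-parent crossover of Section 2.3: each offspring is a convex
    combination of the parents with weights r1, r2, r3 summing to 1."""
    r1, r2 = rng.random(), rng.random()
    if r1 + r2 > 1.0:            # reflect so (r1, r2, r3) lies in the simplex
        r1, r2 = 1.0 - r1, 1.0 - r2
    r3 = 1.0 - r1 - r2
    mix = lambda a, b, c: [r1 * ai + r2 * bi + r3 * ci
                           for ai, bi, ci in zip(a, b, c)]
    return mix(p1, p3, p2), mix(p2, p1, p3), mix(p3, p2, p1)

# With identical parents, every offspring reproduces the parent,
# since the weights sum to 1 (up to floating-point rounding).
o1, o2, o3 = triangular_crossover([2.0, 3.0], [2.0, 3.0], [2.0, 3.0])
print(o1)  # close to [2.0, 3.0]
```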

2.4 Mutation

We have introduced a mixed uniform and non-uniform mutation to exploit the advantages of both mutation methods. The probability of mutation (pm) is fixed during the whole evolution, but the step size is nonlinearly decreased over time, as stated by Michalewicz[11]. There is also a low probability of performing a uniform mutation (pum) that can move an individual to any part of the search space. In this way, the algorithm converges during most of the evolution using the non-uniform mutation, while at the same time there is some chance to explore the rest of the feasible space in later generations using the uniform part of the mutation. This method helps with sophisticated multi-modal problems or problems with complicated feasible fitness landscapes. The pseudo-code for the mutation process is shown in Fig.3.
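A runnable rendering of the mixed mutation is sketched below. Note the assumptions: we read the step-size rule of Fig.3 as δ = r(1 − t/T)^b (consistent with the text's statement that the step length decreases nonlinearly over time), we clamp mutated genes to their domain, and the parameter defaults follow the values reported in Section 3.

```python
import random

def mixed_mutation(ind, bounds, t, T, pm=0.1, pum=0.1, b=4, rng=random):
    """Mixed uniform / non-uniform mutation sketched from Fig.3."""
    out = list(ind)
    for j, (Lj, Uj) in enumerate(bounds):
        if rng.random() <= pm:
            if rng.random() <= pum:
                delta = 1.0                              # uniform mutation
            else:
                delta = rng.random() * (1 - t / T) ** b  # shrinking step
            if rng.random() <= 0.5:
                out[j] -= rng.uniform(0, delta * Lj)
            else:
                out[j] += rng.uniform(0, delta * Uj)
            out[j] = min(max(out[j], Lj), Uj)            # keep gene in domain
    return out

# At t = T the non-uniform step vanishes, so the individual is unchanged.
print(mixed_mutation([1.0, 2.0], [(-5, 5), (-5, 5)], t=100, T=100,
                     pm=1.0, pum=0.0, rng=random.Random(1)))  # [1.0, 2.0]
```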

2.5 Constraints Handling

Deb[3] introduced a method to handle constraints in an efficient way. In that method, all infeasible individuals are penalized, and the best infeasible individual is assigned a worse fitness than the worst feasible individual. We used this method when the feasibility ratio was too low. Apart from that, the constraint handling was


1  for i := 1 to population size do
2      for j := 1 to chromosome length do
3          if (random(0, 1) ≤ pm) then
4              if (random(0, 1) ≤ pum) then {uniform}
5                  δ := 1
6              else {non-uniform}
7                  δ := random(0, 1) × (1 − t/T)^b
8              fi
9              if (random(0, 1) ≤ 0.5) then
10                 xij := xij − random(0, δ × Lj)
11             else
12                 xij := xij + random(0, δ × Uj)
13             fi
14         fi
15     od
16 od

where random(a, b) is a random number within the range [a, b]; δ is the step length within the range [0, 1], hence if it is 0 it will generate the same individual, and if it is 1 it will allow the individual to mutate to the edge of the domain boundary; t is the current generation number; T is the maximum allowed generation number; b is a parameter to control the speed at which the step length decreases, in this paper 4; Lj is the lower domain bound for decision variable j; Uj is the upper domain bound for decision variable j; and xij is the value of decision variable j in individual i.

Fig.3. Outline of the mixed mutation process.

done implicitly during the ranking and selection scheme, where we exclude high-violation individuals; therefore the algorithm converges to the feasible space over time.

3 Experimental Results

In order to measure the performance of our developed algorithm, denoted TC (Triangular Crossover), we have compared it with eight existing algorithms by solving twelve benchmark problems commonly used by other researchers. The characteristics of the test problems can be found in Runarsson and Yao[7]. The algorithms compared are Barbosa and Lemonge's[2] (denoted ACM), Deb's[3] GA (DEB), Koziel and Michalewicz's[4] GA (KM), Farmani and Wright's[5] (denoted SAFF), and Venkatraman and Yen's[6] (denoted VY). While these five methods are GA based, Runarsson and Yao[7] also introduced an evolution strategy method which depends on stochastic ranking (SR). Recently, Runarsson and Yao[14] revised their stochastic ranking algorithm, which is now known as "search biases"; in that work they treated the constraint violation as another objective and used the Pareto concept to handle the constraints. They also improved the search by introducing a new search distribution. They reported two sets of results: the first is the Improved Over-Penalized approach (IOP), and the second is the Improved Stochastic Ranking (ISR). Currently, the results of this method are considered the best among the existing algorithms.

We have solved each test problem 30 times with different random seeds, and we present the best fitness values over the 30 runs in Table 1. In all cases, up to 350 000 objective function evaluations were made before stopping the algorithm, as a similar evaluation budget was used in the other algorithms discussed above. The population size was set to 30. The crossover probability is 0.8, using the whole arithmetic triangular crossover discussed earlier. The mutation probability (pm) is 0.1, and for the proposed mutation we set the probability of using the uniform mutation (pum) to 0.1.

The algorithms SR, IOP, and ISR have solved all of the 12 problems considered in this paper. However, KM, SAFF, and VY solved 11 problems each: KM failed to find any feasible solution for g05, and SAFF and VY did not report g12. ACM and DEB reported results for only 4 and 5 test problems respectively[2,3]. We have not presented the ACM and DEB results in this paper; interested readers can find them in our earlier paper[15]. We present the available results of the six other algorithms along with our results in Table 1.

Considering the best results out of 30 independent runs, we can claim that our proposed algorithm achieved better results than the five GA-based algorithms (KM, DEB, ACM, SAFF, and VY). If we compare the solutions of our algorithm with SR, both of them achieved the optimal solution in 7 test problems: g01, g03, g04, g06, g08, g11, and g12. In g05, the proposed algorithm achieved the optimal solution, while SR achieved better than the reported optimum due to constraint relaxation. Our algorithm achieved better solutions for test problem g02, while SR obtained the optimal solution for g09, where our algorithm achieved a solution very close to the optimum. Moreover, SR was better for g07. Therefore, we can claim that, based on the best solutions obtained, our algorithm is competitive with SR. However, IOP and ISR obtained optimal solutions in most cases.

However, since GA and ES are stochastic algorithms, it is logical to make a stochastic comparison instead of a static comparison using the best fitness value, as such a static comparison could be misleading: the best fitness value could simply be an outlier. For this purpose, we have analyzed the means and standard deviations of 30 independent runs for the seven algorithms discussed earlier. The means and standard deviations are presented in Table 1. Note that the averages for VY are not available, as they reported median data only. Also, the standard deviations for KM are not available.

As shown in Table 1, both TC and SR have the same mean for g01, g03, g08, g11, and g12, but TC has slightly better standard deviations in g11, and SR has slightly better standard deviations in g03 and g08. The proposed algorithm has a better mean and standard deviation in g02 and g06. SR has a better mean and standard deviation in g04, g05, and g07. SR has a better mean but a worse standard deviation in g09. These results emphasize the competitiveness of TC and SR. Notice that SR, IOP, ISR, and TC consistently obtain the optimal solution in 6, 7, 9, and 6 test problems respectively.

Table 1. Results Out of 30 Runs of Each Algorithm (VY = Venkatraman and Yen[6], SAFF = Farmani and Wright[5], KM = Koziel and Michalewicz[4], SR = Stochastic Ranking, IOP = Improved Over-Penalized, ISR = Improved Stochastic Ranking, TC = Proposed Algorithm)

fcn/mk Best Median Mean St. Dev. Worst

g01 −15.000

VY −15.000 −15.000 – 8.51 ×10−1 −12.000

SAFF −15.000 – −15.000 0.00 × 100 −15.000

KM −14.786 – −14.708 – –

SR −15.000 −15.000 −15.000 0.00 ×100 −15.000

IOP −15.000 −15.000 −15.000 1.30 ×10−15 −15.000

ISR −15.000 −15.000 −15.000 5.80 ×10−14 −15.000

TC −15.000 −15.000 −15.000 0.00 ×100 −15.000

g02 −0.803 619

VY −0.803 190 −0.755 332 – 3.27 ×10−2 −0.672 169

SAFF −0.802 970 – −0.790 100 1.20 ×10−2 −0.760 430

KM −0.799 530 – −0.796 710 – –

SR −0.803 515 −0.785 800 −0.781 975 2.00 ×10−2 −0.726 288

IOP −0.803 619 −0.780 843 −0.776 283 2.30 ×10−2 −0.712 818

ISR −0.803 619 −0.793 082 −0.782 715 2.20 ×10−2 −0.723 591

TC −0.803 615 −0.798 174 −0.796 036 8.84 ×10−3 −0.777 435

g03 −1.000

VY −1.000 −0.985 – 4.89 ×10−2 −0.786

SAFF −1.000 – −1.000 7.50 ×10−5 −1.000

KM −1.000 – −1.000 – –

SR −1.000 −1.000 −1.000 1.90 ×10−4 −1.000

IOP −0.747 −0.210 −0.257 1.90 ×10−1 −0.031

ISR −1.001 −1.001 −1.001 8.20 ×10−9 −1.001

TC −1.000 −1.000 −1.000 4.02 ×10−4 −1.000

g04 −30 665.539

VY −30 665.531 −30 663.364 – 3.31 ×100 −30 651.960

SAFF −30 665.500 – −30 665.200 4.85 ×10−1 −30 663.300

KM −30 664.500 – −30 655.300 – –

SR −30 665.539 −30 665.539 −30 665.539 2.00 ×10−5 −30 665.539

IOP −30 665.539 −30 665.539 −30 665.539 1.10 ×10−11 −30 665.539

ISR −30 665.539 −30 665.539 −30 665.539 1.10 ×10−11 −30 665.539

TC −30 665.539 −30 665.302 −30 665.531 9.16 ×10−3 −30 663.829

g05 5 126.498

VY 5 126.510 5 170.529 – 3.41 ×102 6 112.223

SAFF 5 126.989 – 5 432.080 3.89 ×103 6 089.430

KM – – – – –

SR 5 126.497 5 127.372 5 128.881 3.50 ×100 5 142.472

IOP 5 126.497 5 173.967 5 268.610 2.00 ×102 5 826.807


ISR 5 126.497 5 126.497 5 126.497 7.20 ×10−13 5 126.497

TC 5 126.498 5 147.053 5 288.127 1.56 ×102 5 562.850

g06 −6 961.814

VY −6 961.179 −6 959.568 – 1.27 ×100 −6 954.319

SAFF −6 961.800 – −6 961.800 0.00 ×100 −6 961.800

KM −6 952.100 – −6 342.600 – –

SR −6 961.814 −6 961.814 −6 875.940 1.60 ×102 −6 350.262

IOP −6 961.814 −6 961.814 −6 961.814 1.90 ×10−12 −6 961.814

ISR −6 961.814 −6 961.814 −6 961.814 1.90 ×10−12 −6 961.814

TC −6 961.814 −6 961.814 −6 961.814 3.70 ×10−12 −6 961.814

g07 24.306

VY 24.411 26.736 – 2.61 ×100 35.882

SAFF 24.480 – 26.580 1.14 ×100 28.400

KM 24.620 – 24.826 – –

SR 24.307 24.357 24.374 6.60 ×10−2 24.642

IOP 24.306 24.306 24.307 1.30 ×10−3 24.311

ISR 24.306 24.306 24.306 6.30 ×10−5 24.306

TC 24.740 26.092 25.987 6.63 ×10−1 27.659

g08 −0.095 825

VY −0.095 825 −0.095 825 – 0.00 ×100 −0.095 825

SAFF −0.095 825 – −0.095 825 0.00 ×100 −0.095 825

KM −0.095 825 – −0.089 157 – –

SR −0.095 825 −0.095 825 −0.095 825 2.60 ×10−17 −0.095 825

IOP −0.095 825 −0.095 825 −0.095 825 5.10 ×10−17 −0.095 825

ISR −0.095 825 −0.095 825 −0.095 825 2.70 ×10−17 −0.095 825

TC −0.095 825 −0.095 825 −0.095 825 4.23 ×10−17 −0.095 825

g09 680.630

VY 680.762 681.706 – 7.44 ×10−1 684.131

SAFF 680.640 – 680.720 5.92 ×10−2 680.870

KM 680.910 – 681.160 – –

SR 680.630 680.641 680.656 3.40 ×10−2 680.763

IOP 680.630 680.630 680.630 1.70 ×10−7 680.630

ISR 680.630 680.630 680.630 3.20 ×10−13 680.630

TC 680.631 680.660 680.663 2.19 ×10−2 680.707

g10 7 049.248

VY 7 060.553 7 723.167 – 7.97 ×102 12 097.408

SAFF 7 061.340 – 7 627.890 3.73 ×102 8 288.790

KM 7 147.900 – 8 163.600 – –

SR 7 054.316 7 327.613 7 559.192 5.30 ×102 8 835.655

IOP 7 049.248 7 049.248 7 049.248 7.50 ×10−4 7 049.252

ISR 7 049.248 7 049.248 7 049.250 3.20 ×10−3 7 049.270

TC 7 080.265 7 703.490 7 892.470 7.41 ×102 9 847.039

g11 0.750

VY 0.749 0.749 – 9.30 ×10−3 0.809

SAFF 0.750 – 0.750 0.00 ×100 0.750

KM 0.750 0.750 0.750 – 0.750

SR 0.750 0.750 0.750 8.00 ×10−5 0.750

IOP 0.750 0.750 0.750 1.10 ×10−16 0.750

ISR 0.750 0.754 0.756 6.90 ×10−3 0.774

TC 0.750 0.750 0.750 0.00 ×100 0.750

g12 −1.000 000

VY – – – – –

SAFF – – – – –

KM −0.999 900 – – – –


SR −1.000 000 −1.000 000 −1.000 000 0.00 ×100 −1.000 000

IOP −1.000 000 −0.999 954 −0.999 889 1.50 ×10−4 −0.999 385

ISR −1.000 000 −1.000 000 −1.000 000 1.20 ×10−9 −1.000 000

TC −1.000 000 −1.000 000 −1.000 000 0.00 ×100 −1.000 000

4 Analysis of Results

As discussed in the earlier section, the mean and standard deviation measures do not provide precise statistical comparisons; hence we have used t-tests. In addition, we have analyzed the effect of the individual components of the proposed algorithm.

4.1 Statistical Testing

The t-test is a statistical significance test which assesses whether the means of two groups are statistically different from each other. The t-test results are meaningful when the distribution of the sample follows a normal distribution; a chi-square goodness-of-fit test indicates that the samples under study come from normally distributed populations. This decision was made with a confidence level of 95%. The t-test needs only the mean and the standard deviation of both groups. We apply it in this paper with a confidence level of 95%; at this confidence level, the critical t-value is 2 at 60 degrees of freedom.
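The statistic can be computed directly from the reported summary data. The sketch below uses Welch's form (an assumption; the paper does not state which two-sample variant it used), with the g02 means and standard deviations of TC and SR from Table 1:

```python
import math

def t_from_summary(mean1, sd1, n1, mean2, sd2, n2):
    """Two-sample t statistic from means and standard deviations only
    (Welch's form; here |t| > 2 is significant at the 95% level)."""
    return (mean1 - mean2) / math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)

# TC vs. SR on g02 (minimization, so a negative t favors TC):
t = t_from_summary(-0.796036, 8.84e-3, 30, -0.781975, 2.00e-2, 30)
print(round(t, 2))  # -3.52, i.e., a significant difference in TC's favor
```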

Table 2. t-Test Between TC and Each of the Other Algorithms at Significance Level 95% (i.e., Significant When |t-Calculated| Is Greater Than 2)

       SAFF   SR    IOP   ISR
g01    ≈      ≈     ≈     ≈
g02    −      −     −     −
g03    ≈      ≈     −     ∗
g04    −      +     +     +
g05    ≈      +     ≈     +
g06    −      −     ≈     ≈
g07    −      +     +     +
g08    ≈      ≈     ≈     ≈
g09    −      ≈     +     +
g10    ≈      ≈     +     +
g11    ≈      ≈     ≈     −
g12    ≈      ≈     −     ≈

Note: "−" means a significant difference in TC's favor, "+" means a significant difference against TC, "≈" means no significant difference, and "∗" indicates not comparable.

We have performed t-tests between TC and each of SAFF, SR, IOP, and ISR. The t-test between TC and VY cannot be performed due to the absence of mean data. To make the t-test results more readable, we have used a "−" sign if the calculated t is less than −2, a "+" sign if it is greater than +2, and a "≈" sign if it is between −2 and +2. As shown in Table 2, for a given test problem, TC is considered significantly better than a comparable algorithm if there is a "−" sign, significantly worse if there is a "+" sign, and not significantly different if there is a "≈" sign.
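The sign coding just described can be captured in a small helper; this is an illustrative sketch of the convention, not code from the paper.

```python
def t_to_sign(t, critical=2.0):
    """Map a calculated t value to the comparison symbols of Table 2."""
    if t < -critical:
        return "-"   # TC significantly better
    if t > critical:
        return "+"   # TC significantly worse
    return "≈"       # no significant difference
```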

It is clear from these results that TC outperforms SAFF: there is no significant difference in 7 test problems, while TC is significantly better than SAFF for the remaining 5. TC is quite competitive with both SR and IOP, which are significantly better than TC in 3 and 4 test problems respectively, while TC is better than them in 2 and 3 test problems respectively. Overall, ISR is better than TC. However, we have indicated "∗" instead of "+" for g03 under ISR because their reported mean value is better than the optimal, due to the reporting of infeasible solutions; naively applying the t-test there would show a significant difference. Compared to ISR, TC is significantly better in g02 and g11.

In both g04 and g07, SR, IOP, and ISR are significantly better than TC. The t-test result shows that TC is significantly better than all three ES-based algorithms SR, IOP, and ISR for g02. Runarsson and Yao [14] stated about g02 that "This benchmark function is known to have a very rugged fitness landscape and in general the most difficult to solve of these functions".

It is interesting to report that although TC has obtained an optimal solution for g04 and g05 (see Table 1), SR is significantly better than TC under the t-test (see Table 2). On the other hand, g06 is the opposite case, where SR has obtained an optimal solution but TC is significantly better than SR. These cases demonstrate that a statistical comparison such as the t-test is useful when comparing two stochastic algorithms, such as evolutionary algorithms, which are affected by the random seed chosen to start the evolution. We can also safely claim that the performance of the proposed algorithm is substantially better than that of SAFF. To make the comparison more meaningful, we have also excluded the best and the worst of the 30 best solutions as outliers and have then applied the t-test on the new samples; however, we have not observed any significant changes. It is worth noting here that the mean and standard deviation (excluding the best and worst fitness values) for the latter comparison were not available from Runarsson and Yao [8]; however, we have calculated these accurately using a simple analytical method.
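The "simple analytical method" is not spelled out in the paper; one plausible reconstruction, shown here as an assumption, recovers the trimmed mean and standard deviation of the remaining n − 2 values from the published summary statistics alone, via the sum and the sum of squares.

```python
import math

def trimmed_stats(mean, std, n, best, worst):
    """Mean and sample standard deviation after removing the best and worst
    values, recovered from summary statistics alone (requires n > 3).

    Assumes `std` is the sample (n-1 denominator) standard deviation.
    """
    total = n * mean                                   # sum of all values
    sum_sq = (n - 1) * std ** 2 + n * mean ** 2        # sum of squares
    m2 = (total - best - worst) / (n - 2)              # trimmed mean
    ss2 = sum_sq - best ** 2 - worst ** 2              # trimmed sum of squares
    var2 = (ss2 - (n - 2) * m2 ** 2) / (n - 3)         # trimmed sample variance
    return m2, math.sqrt(max(var2, 0.0))
```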

4.2 Proposed Components Effect

In order to see the effect of each component in the proposed algorithm, we have designed a number of experiments as follows.
• Base Run: with regular tournament selection, two-parent crossover, and non-uniform mutation.
• Selection Run: the same parameters as the Base Run, except replacing the tournament selection with the proposed ranking and selection scheme as discussed in Section 2.
• Crossover Run: similar to the Selection Run, but replacing two-parent crossover with the triangular crossover.
• Mutation Run: similar to the Selection Run, except replacing the non-uniform mutation with the mixed mutation (applying both uniform and non-uniform).
• Final Run: all the previously proposed components are included in this run.

The proposed ranking and selection requires a good mix of feasible and infeasible individuals in the population. Hence, we have experimented with different feasibility ratios for when that selection is applied. In particular, we report two sets of experiments in this paper: (i) LR = 0.4 and UR = 0.7, and (ii) LR = 0.3 and UR = 0.8. As previously stated, if the feasibility ratio is outside this range, we use regular tournament selection. Figs. 4 and 5 present the effect of each component on six test problems for the above two sets of experiments. We have considered only the test problems g02, g04, g05, g07, g09, and g10 in this analysis, as we obtain optimal solutions easily for the rest of the test problems in this paper.

For each test problem under investigation in this section, we have performed 30 runs for each of the above configurations. To evaluate the added value of each component, we have performed t-tests (with confidence level 95%). For ease of explanation, we will indicate BETTER if the added component improves the fitness value and WORSE if it deteriorates it. In the following figures, the t-test results are presented using six bars for each problem; these represent:

• t-calculated when comparing the Selection Run with the Base Run (S-B), which is the first column for each test problem,
• t-calculated when comparing the Crossover Run with the Selection Run (C-S), which is the second column,
• t-calculated when comparing the Mutation Run with the Selection Run (M-S) (third),
• t-calculated when comparing the Mutation Run with the Crossover Run (M-C) (fourth),
• t-calculated when comparing the Final Run with the Crossover Run (F-C) (fifth), and
• t-calculated when comparing the Final Run with the Mutation Run (F-M) (sixth).

For example, in Fig.4, the first column represents the result of the t-test between the Selection Run and the Base Run (S-B) for test problem g02; it is negative with magnitude less than 2, which implies that the ranking and selection component has an insignificantly BETTER effect on g02. However, for test problem g04, the second column, which represents the result of the t-test between the Crossover Run and the Selection Run, has a negative value less than −2, which implies that the triangular crossover component has a significantly BETTER effect on g04. Note that the crossover effect is on top of the ranking and selection effect.

Fig.4. Effect of the components when the proposed ranking and selection is applied and the feasibility ratio is in the range [0.4∼0.7].

The Base Run for g10 could not find any feasible solution, so we could not apply the t-test between the Selection Run and the Base Run. At the same time, the Selection Run achieved feasible solutions, so we consider this a significantly BETTER effect. As a result, the first bar is missing in the figures for g10.


In the first experiment, we applied the proposed ranking and selection when the feasibility ratio was between 0.4 and 0.7. The results of this experiment are presented in Fig.4, in which the proposed selection has a BETTER, and usually significantly BETTER, effect (see the S-B column in Fig.4). From the C-S and M-S columns we can say that neither the proposed crossover nor the mixed mutation technique has any significantly WORSE individual effect; moreover, in most cases each has a significantly BETTER effect. The M-C column tells us that the individual effect of the proposed mutation is significantly BETTER than the individual effect of the proposed crossover in problems g02, g04, g07, and g10, but not in g09 and g05; notice that g05 has 3 equality constraints, which makes it more difficult when the step length of mutation is increased. When the F-C and F-M columns are compared, it is apparent that the combined effect of the proposed crossover and mutation is better than the individual effect of either of them in g04 and g05. In g02, the WORSE individual effect of the crossover hampers the combined effect (Final Run) from getting better results than the Mutation Run; hence we can see that the Final Run is BETTER than the Crossover Run (F-C column). g09 has the same situation, but this time the WORSE individual effect comes from the Mutation Run. For g07 and g10, the combined effect of the proposed components is not as good as the individual effects of those components.

From the above analysis, it can be said that none of the proposed components (ranking and selection, crossover, and mutation) has a significantly WORSE individual effect on the test problems; indeed, in most cases they have a significantly BETTER effect.

Fig.5. Effect of the components when the proposed ranking and selection is applied and the feasibility ratio is in the range [0.3∼0.8].

The proposed ranking and selection scheme discards unnecessary individuals from the search space (see Section 2). It may seem that such an approach would reduce the diversity of the search process. In fact, the feasibility ratios, as discussed earlier, control the diversity of the solutions of interest in the search process. The crossover is designed to speed up the convergence. In the next experiment, we increased the probability of using the proposed ranking and selection (by widening the feasibility-ratio range to 0.3∼0.8), which would further increase the diversity. The results of this experiment are presented in Fig.5.

By comparing Figs. 4 and 5, we notice that there are no changes in the effect of the proposed ranking and selection. On the other hand, the crossover effect is generally worse after increasing the diversity, although g02 is insignificantly better affected. At the same time, there are no significant changes in the effect of the proposed mutation. This means that increasing the diversity has a BETTER effect on g02 in the Crossover Run, but a WORSE effect on the Crossover Run in g04, where it moved from being a significantly BETTER effect down to an insignificantly WORSE effect, while it had an insignificant effect on the rest of the problems. We can argue here that increasing the diversity had a good effect on the test problem with the largest number of variables, a bad effect on the test problem with the smallest number of variables, and an insignificant effect on the test problems with numbers of variables in between.

In the experiments carried out so far, the triangular crossover has been applied to all individuals irrespective of the use of the proposed simple ranking and selection or the tournament selection. We know that the proposed selection would increase diversity while, on the other hand, the proposed crossover will decrease diversity. To make a better balance, we propose to apply triangular crossover only when the simple ranking and selection is in effect. In other cases, when the feasibility ratio is outside the prescribed range, we apply two-parent crossover with tournament selection. We call this process "synchronization" in this paper.
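Under stated assumptions about the surrounding GA loop (the operator names below are hypothetical labels, not the authors' identifiers), the synchronization rule can be sketched as:

```python
def choose_operators(feasibility_ratio, lower=0.4, upper=0.7):
    """Pair the selection scheme with a matching crossover ("synchronization").

    Returns (selection, crossover) operator names; the thresholds default to
    the 0.4~0.7 feasibility-ratio range used in the first experiment.
    """
    if lower <= feasibility_ratio <= upper:
        # Enough of a feasible/infeasible mix: use the proposed pair.
        return "simple_ranking_selection", "triangular_crossover"
    # Outside the prescribed range: fall back to the conventional pair.
    return "tournament_selection", "two_parent_crossover"
```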

The experimental results, in terms of calculated t, with synchronization for feasibility ratios of 0.4∼0.7 are presented in Fig.6, and for feasibility ratios of 0.3∼0.8 in Fig.7. In this set of experiments, the t-test results are presented using two bars for each problem, which represent:
• t-calculated when comparing the Selection Run (synchronized with crossover) with the Base Run, which is the first column for every test problem, and
• t-calculated when comparing the Mutation Run with the Crossover Run, which is the second column for every test problem.

Fig.6. Components effect when both the proposed ranking and selection and the triangular crossover are applied and the feasibility ratio is in the range [0.4∼0.7].

As Figs. 6 and 7 show, once the synchronization is applied, the proposed components not only make consistently positive contributions but also make a significant contribution in most cases. We believe such a process provides a better balance of exploration and exploitation for the proposed algorithm.

Fig.7. Components effect when both the proposed ranking and selection and the triangular crossover are applied and the feasibility ratio is in the range [0.3∼0.8].

4.3 Effect of Population Size

Another set of experiments has been carried out in order to explore the effect of the population size. We consider population sizes of 30, 90, and 150. In this experiment, we have used the same four parameter settings as in the previous experiments (the two feasibility-ratio ranges, with and without synchronization).

Table 3. Average Results of the 4 Combination Runs for Different Population Sizes (30, 90, and 150)

fcn/mk   Best          Median        Mean          St. Dev.     Worst
g02      −0.803 619
  30     −0.803 610    −0.795 520    −0.795 653    8.19×10^-3   −0.774 105
  90     −0.803 585    −0.797 112    −0.797 103    6.77×10^-3   −0.779 099
  150    −0.803 563    −0.802 753    −0.799 541    4.90×10^-3   −0.787 396
g04      −30 665.539
  30     −30 665.539   −30 665.536   −30 665.534   6.16×10^-3   −30 665.513
  90     −30 665.492   −30 665.382   −30 665.327   1.81×10^-1   −30 664.691
  150    −30 665.404   −30 665.102   −30 665.030   3.09×10^-1   −30 664.216
g05      5 126.498
  30     5 127.165     5 201.568     5 291.483     2.33×10^2    5 994.270
  90     5 126.723     5 180.743     5 242.558     1.76×10^2    5 864.186
  150    5 127.118     5 163.594     5 210.785     1.17×10^2    5 556.757
g07      24.306
  30     24.586        26.105        26.007        6.73×10^-1   27.441
  90     25.130        26.168        26.137        5.50×10^-1   27.234
  150    25.380        26.445        26.428        5.06×10^-1   27.436
g09      680.630
  30     680.633       680.660       680.663       2.26×10^-2   680.717
  90     680.654       680.737       680.749       6.00×10^-2   680.904
  150    680.673       680.790       680.793       7.62×10^-2   680.973
g10      7 049.331
  30     7 074.452     8 133.826     8 379.120     1.12×10^3    11 602.079
  90     7 079.461     7 455.075     7 673.639     6.07×10^2    9 679.872
  150    7 068.303     7 437.768     7 540.044     4.13×10^2    8 640.422


For each population size and each parameter set, we ran each test problem (g02, g04, g05, g07, g09, and g10) for 30 independent runs. The average results of the 4 combination runs are presented in Table 3.

We have also performed t-tests to see whether there is a significant difference between the results of different population sizes. From the results in Table 3 and the t-tests, we can report that a smaller population size (30) is better for g04, g07, and g09, and a larger population size (150) is better for g10. On the other hand, the effect of population size is insignificant for g02 and g05. Note that the results presented in Table 3 differ from those in Table 1, as they are based on different numbers of runs.

4.4 Relation Between the Feasibility Ratio and the Proposed Ranking and Selection

In order to have a deeper understanding of the new ranking and selection process, the percentage of use of the proposed simple ranking and selection (the number of times the proposed selection was used divided by the total number of times any type of selection was used) was calculated and is stated in Table 4. In general terms, the effects of the components are dependent on the application of the proposed ranking and selection scheme. As can be seen in Table 4, the low percentage in g02 is caused by the high percentage of feasible region in the search space of the problem, which is equal to 99.9973%. Interestingly, it is not more than 1.00% in the other test problems, except g04, for which it is 27.0079%. Very high and very low percentages of feasible space make it harder to obtain feasibility ratios inside the considered range. It is clear from Table 4 that changing the feasibility-ratio range from [0.4∼0.7] to [0.3∼0.8] gives the algorithm more chances to apply the proposed ranking and selection scheme.

To give an overview of the variations of feasibility ratios over different generations, we have plotted them in Figs. 8, 9, and 10 for three arbitrarily chosen test problems in a single run. For a better understanding of the trend of the feasibility ratios, the moving average of every 100 feasibility ratios has also been plotted. From the plots, it is clear that g07 and g09 start with lower feasibility ratios and the ratios increase as the generations progress. For g02, the situation is different. The problem has a large feasible region compared to its defined search space. As a result, it starts with a high feasibility ratio (0.6∼1.0), and as the evolution process advances, the algorithm concentrates on the edge of the feasible space, which helps to create infeasible individuals closer to the feasible space. It is interesting to observe here that the ranges of feasibility ratios maintained by the algorithm in the later generations are similar for all three test problems. We believe this is a favorable point for our algorithm that would help to avoid traps of local optima.
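The 100-generation moving average used in Figs. 8-10 is a plain sliding-window mean; a minimal sketch (assuming the window simply grows until it reaches 100 entries):

```python
from collections import deque

def moving_average(values, window=100):
    """Moving average over the last `window` entries (shorter at the start)."""
    buf, out = deque(maxlen=window), []
    for v in values:
        buf.append(v)               # deque drops the oldest entry beyond maxlen
        out.append(sum(buf) / len(buf))
    return out
```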

Table 4. Usage Percentage of the Proposed Ranking and Selection for Different Runs

                  Asynchronized           Synchronized
Fcn.  Run         0.4∼0.7    0.3∼0.8      0.4∼0.7    0.3∼0.8
g02   Selection    5.62%     24.69%        1.08%     11.01%
      Crossover    3.83%     19.92%        1.08%     11.01%
      Mutation     8.50%     33.75%        6.88%     26.15%
g04   Selection   45.84%     88.95%       43.94%     72.42%
      Crossover   38.04%     71.33%       43.94%     72.42%
      Mutation    36.99%     70.88%       43.38%     72.00%
g07   Selection   47.02%     67.11%       39.62%     54.40%
      Crossover   43.10%     57.25%       39.62%     54.40%
      Mutation    42.74%     56.83%       39.11%     53.96%
g09   Selection   61.43%     90.94%       61.43%     81.58%
      Crossover   54.47%     80.26%       61.43%     81.58%
      Mutation    54.12%     78.52%       57.60%     79.99%
g10   Selection   43.21%     62.91%       43.21%     29.77%
      Crossover   32.56%     44.30%       43.21%     29.77%
      Mutation    31.79%     44.52%       30.67%     43.25%

Note: under synchronization, the proposed selection and the triangular crossover are applied together, so they share a single usage percentage.

Fig.8. Feasibility ratio and the moving average of each 100 generations for g07 in a single run.

Fig.9. Feasibility ratio and the moving average of each 100 generations for g09 in a single run.


Fig.10. Feasibility ratio and the moving average of each 100 generations for g02 in a single run.

In this paper, we have empirically shown that our approach is able to deal with a variety of constrained optimization problems (i.e., with both linear and nonlinear constraints and objective functions, and with both equality and inequality constraints). The adopted benchmark includes test functions with both small and large feasible spaces. We have also argued that our proposed approach is very simple to implement and can solve a variety of optimization problems.

5 Conclusions

In this paper, new ranking, selection, and crossover methods were introduced to solve constrained optimization problems. The idea behind these new methods is the exploitation of some of the features of constrained problems. The performance of the proposed algorithm has been compared with six existing evolutionary algorithms on twelve benchmark test problems. The results of the proposed algorithm are clearly better than those of the five GA-based approaches and are competitive with the best known ES-based approach. In two test problems, we obtained better results than all the other EA-based solutions considered in the comparison. The superiority of our algorithm is due to the combined effect of our proposed ranking, selection, crossover, and mutation methods.

In this paper, a statistical significance test has been carried out in order to make a fairer comparison, which showed that the proposed GA and the best EA-based approach are very competitive. A detailed analysis of the effect of the individual components of the proposed algorithm has also been performed. It is clear from this analysis that all the components had a positive impact on the algorithmic design.

References

[1] Mezura-Montes E, Coello C A C. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Transactions on Evolutionary Computation, 2005, 9(1): 1–17.

[2] Barbosa H J C, Lemonge A C C. A new adaptive penalty scheme for genetic algorithms. Information Sciences, 2003, 156(3): 215–251.

[3] Deb K. An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 2000, 186(2-4): 311–338.

[4] Koziel S, Michalewicz Z. Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolutionary Computation, 1999, 7(1): 19–44.

[5] Farmani R, Wright J A. Self-adaptive fitness formulation for constrained optimization. IEEE Transactions on Evolutionary Computation, 2003, 7(5): 445–455.

[6] Venkatraman S, Yen G G. A generic framework for constrained optimization using genetic algorithms. IEEE Transactions on Evolutionary Computation, 2005, 9(4): 424–435.

[7] Runarsson T P, Yao X. Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 2000, 4(3): 284–294.

[8] Michalewicz Z. Genetic algorithms, numerical optimization, and constraints. In Proc. the 6th International Conference on Genetic Algorithms, San Francisco, CA, 1995, pp.151–158.

[9] Sarker R, Kamruzzaman J, Newton C. Evolutionary optimization (EvOpt): A brief review and analysis. International Journal of Computational Intelligence and Applications, 2003, 3(4): 311–330.

[10] Eiben A E, Raue P E, Ruttkay Z. Genetic algorithms with multi-parent recombination. In Proc. the 3rd Conference on Parallel Problem Solving from Nature, Jerusalem, 1994, pp.78–87.

[11] Michalewicz Z. Genetic Algorithms + Data Structures = Evolution Programs. 3rd Rev. and Extended Ed., Berlin; New York: Springer-Verlag, 1996.

[12] Zhao X, Gao X S, Hu Z. Evolutionary programming based on non-uniform mutation. MMRC, AMSS, Chinese Academy of Sciences, Beijing, China, December 2004, No.23, pp.352–374.

[13] Sarker R, Newton C. A comparative study of different penalty function-based GAs for constrained optimization. In Proc. the 4th Australia-Japan Joint Workshop on Intelligent and Evolutionary Systems, Japan, 2000.

[14] Runarsson T P, Yao X. Search biases in constrained evolutionary optimization. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 2005, 35(2): 233–243.

[15] Elfeky E Z, Sarker R A, Essam D L. A simple ranking and selection for constrained evolutionary optimization. In Proc. the 6th International Conference on Simulated Evolution and Learning, Hefei, China, 2006, pp.537–544.

[16] Floudas C A, Pardalos P M. A Collection of Test Problems for Constrained Global Optimization Algorithms. Lecture Notes in Computer Science, Vol.455, Berlin: Springer-Verlag, 1990.

[17] Michalewicz Z, Nazhiyath G, Michalewicz M. A note on usefulness of geometrical crossover for numerical optimization problems. In Proc. the 5th Annual Conference on Evolutionary Programming, San Diego, CA, 1996, pp.305–312.

[18] Himmelblau D M. Applied Nonlinear Programming. New York: McGraw-Hill, 1972.

[19] Hock W, Schittkowski K. Test Examples for Nonlinear Programming Codes. New York: Springer-Verlag, 1981.

Ehab Z. Elfeky received his B.Sc. and Master's degrees by research in 2000 and 2004 respectively from Cairo University, Egypt. He has been an associate lecturer at Cairo University since 2000. Currently he is a Ph.D. candidate at the Australian Defence Force Academy. His research interests include evolutionary computation, applied mathematical modelling and optimization techniques.

Ruhul A. Sarker obtained his Ph.D. degree in operations research (1991) from DalTech (former TUNS), Dalhousie University, Halifax, Canada. He is currently a senior lecturer in operations research at the School of Information Technology and Electrical Engineering, University of New South Wales (UNSW), ADFA Campus, Canberra, Australia. Before joining UNSW@ADFA in 1998, he worked with Monash University and Bangladesh University of Engineering and Technology. He has published 150+ refereed technical papers in international journals. He has edited six reference books and several conference proceedings, and has served as guest editor and technical reviewer for a number of international journals. He is the lead author of the book "Optimization Modelling: A Practical Approach" published by Taylor and Francis. His edited book "Evolutionary Optimization" was published by Kluwer (now Springer). His research interests include applied mathematical modelling, optimization and evolutionary computation. Dr. Sarker was a technical co-chair of IEEE-CEC2003 and has served many international conferences in the capacity of chair, co-chair or PC member. He is a member of INFORMS, IEEE and ASOR. Dr. Sarker is the editor of ASOR Bulletin, the national publication of the Australian Society for Operations Research.

Daryl Essam received his B.Sc. degree from the University of New England in 1990 and his Ph.D. degree from the University of New South Wales in 2000. He was an associate lecturer at the University of New England from 1991 and has been with the Australian Defence Force Academy campus of the University of New South Wales since 1994, where he is now a senior lecturer. His research interests include genetic algorithms, with a focus on both genetic programming and multi-objective optimisation.

Appendix Test Function Suite

For more information, the original sources are cited with each test function.

A. g01
Minimize [16]

f(x) = 5\sum_{i=1}^{4} x_i - 5\sum_{i=1}^{4} x_i^2 - \sum_{i=5}^{13} x_i

subject to

g_1(x) = 2x_1 + 2x_2 + x_{10} + x_{11} - 10 \le 0,
g_2(x) = 2x_1 + 2x_3 + x_{10} + x_{12} - 10 \le 0,
g_3(x) = 2x_2 + 2x_3 + x_{11} + x_{12} - 10 \le 0,
g_4(x) = -8x_1 + x_{10} \le 0,
g_5(x) = -8x_2 + x_{11} \le 0,
g_6(x) = -8x_3 + x_{12} \le 0,
g_7(x) = -2x_4 - x_5 + x_{10} \le 0,
g_8(x) = -2x_6 - x_7 + x_{11} \le 0,
g_9(x) = -2x_8 - x_9 + x_{12} \le 0,

where the bounds are 0 \le x_i \le 1 (i = 1, ..., 9), 0 \le x_i \le 100 (i = 10, 11, 12), and 0 \le x_{13} \le 1. The global minimum is at x^* = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1), where six constraints are active (g_1, g_2, g_3, g_7, g_8, and g_9) and f(x^*) = -15.
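As a quick check of the g01 definition above, the objective and constraints can be evaluated at the stated optimum (a sketch; the paper's 1-based indices are shifted to Python's 0-based indexing):

```python
def g01(x):
    """Objective and constraint values for test problem g01 (x[0] is x1)."""
    f = 5 * sum(x[:4]) - 5 * sum(v * v for v in x[:4]) - sum(x[4:13])
    g = [
        2 * x[0] + 2 * x[1] + x[9] + x[10] - 10,
        2 * x[0] + 2 * x[2] + x[9] + x[11] - 10,
        2 * x[1] + 2 * x[2] + x[10] + x[11] - 10,
        -8 * x[0] + x[9],
        -8 * x[1] + x[10],
        -8 * x[2] + x[11],
        -2 * x[3] - x[4] + x[9],
        -2 * x[5] - x[6] + x[10],
        -2 * x[7] - x[8] + x[11],
    ]
    return f, g

x_star = [1] * 9 + [3, 3, 3, 1]
f, g = g01(x_star)
# f(x*) = -15, all constraints satisfied, and exactly
# g1, g2, g3, g7, g8, g9 are active (equal to zero).
```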

B. g02
Maximize [4]

f(x) = \left| \frac{\sum_{i=1}^{n} \cos^4(x_i) - 2\prod_{i=1}^{n} \cos^2(x_i)}{\sqrt{\sum_{i=1}^{n} i x_i^2}} \right|

subject to

g_1(x) = 0.75 - \prod_{i=1}^{n} x_i \le 0,
g_2(x) = \sum_{i=1}^{n} x_i - 7.5n \le 0,

where n = 20 and 0 \le x_i \le 10 (i = 1, ..., n). The global maximum is unknown; the best we found is f(x^*) = 0.803619 (which, to the best of our knowledge, is better than any reported value); constraint g_1 is close to being active (g_1 = -10^{-8}).

C. g03
Maximize [17]

f(x) = (\sqrt{n})^n \prod_{i=1}^{n} x_i

subject to

h_1(x) = \sum_{i=1}^{n} x_i^2 - 1 = 0,

where n = 10 and 0 \le x_i \le 1 (i = 1, ..., n). The global maximum is at x_i^* = 1/\sqrt{n} (i = 1, ..., n), where f(x^*) = 1.

D. g04
Minimize [18]

f(x) = 5.3578547x_3^2 + 0.8356891x_1x_5 + 37.293239x_1 - 40792.141

subject to

g_1(x) = 85.334407 + 0.0056858x_2x_5 + 0.0006262x_1x_4 - 0.0022053x_3x_5 - 92 \le 0,
g_2(x) = -85.334407 - 0.0056858x_2x_5 - 0.0006262x_1x_4 + 0.0022053x_3x_5 \le 0,
g_3(x) = 80.51249 + 0.0071317x_2x_5 + 0.0029955x_1x_2 + 0.0021813x_3^2 - 110 \le 0,
g_4(x) = -80.51249 - 0.0071317x_2x_5 - 0.0029955x_1x_2 - 0.0021813x_3^2 + 90 \le 0,
g_5(x) = 9.300961 + 0.0047026x_3x_5 + 0.0012547x_1x_3 + 0.0019085x_3x_4 - 25 \le 0,
g_6(x) = -9.300961 - 0.0047026x_3x_5 - 0.0012547x_1x_3 - 0.0019085x_3x_4 + 20 \le 0,

where 78 \le x_1 \le 102, 33 \le x_2 \le 45, and 27 \le x_i \le 45 (i = 3, 4, 5). The optimum solution is x^* = (78, 33, 29.995256025682, 45, 36.775812905788), where f(x^*) = -30665.539. Two constraints are active (g_1 and g_6).

E. g05
Minimize [19]

f(x) = 3x_1 + 0.000001x_1^3 + 2x_2 + (0.000002/3)x_2^3

subject to

g_1(x) = -x_4 + x_3 - 0.55 \le 0,
g_2(x) = -x_3 + x_4 - 0.55 \le 0,
h_3(x) = 1000\sin(-x_3 - 0.25) + 1000\sin(-x_4 - 0.25) + 894.8 - x_1 = 0,
h_4(x) = 1000\sin(x_3 - 0.25) + 1000\sin(x_3 - x_4 - 0.25) + 894.8 - x_2 = 0,
h_5(x) = 1000\sin(x_4 - 0.25) + 1000\sin(x_4 - x_3 - 0.25) + 1294.8 = 0,

where 0 \le x_1 \le 1200, 0 \le x_2 \le 1200, -0.55 \le x_3 \le 0.55, and -0.55 \le x_4 \le 0.55. The best known solution is x^* = (679.9453, 1026.067, 0.1188764, -0.3962336), where f(x^*) = 5126.4981.

F. g06
Minimize [16]

f(x) = (x_1 - 10)^3 + (x_2 - 20)^3

subject to

g_1(x) = -(x_1 - 5)^2 - (x_2 - 5)^2 + 100 \le 0,
g_2(x) = (x_1 - 6)^2 + (x_2 - 5)^2 - 82.81 \le 0,

where 13 \le x_1 \le 100 and 0 \le x_2 \le 100. The optimum solution is x^* = (14.095, 0.84296), where f(x^*) = -6961.81388. Both constraints are active.

G. g07
Minimize [19]

f(x) = x_1^2 + x_2^2 + x_1x_2 - 14x_1 - 16x_2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2 + 2(x_6 - 1)^2 + 5x_7^2 + 7(x_8 - 11)^2 + 2(x_9 - 10)^2 + (x_{10} - 7)^2 + 45

subject to

g_1(x) = -105 + 4x_1 + 5x_2 - 3x_7 + 9x_8 \le 0,
g_2(x) = 10x_1 - 8x_2 - 17x_7 + 2x_8 \le 0,
g_3(x) = -8x_1 + 2x_2 + 5x_9 - 2x_{10} - 12 \le 0,
g_4(x) = 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2x_3^2 - 7x_4 - 120 \le 0,
g_5(x) = 5x_1^2 + 8x_2 + (x_3 - 6)^2 - 2x_4 - 40 \le 0,
g_6(x) = x_1^2 + 2(x_2 - 2)^2 - 2x_1x_2 + 14x_5 - 6x_6 \le 0,
g_7(x) = 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3x_5^2 - x_6 - 30 \le 0,
g_8(x) = -3x_1 + 6x_2 + 12(x_9 - 8)^2 - 7x_{10} \le 0,

where -10 \le x_i \le 10 (i = 1, ..., 10). The optimum solution is x^* = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927), where f(x^*) = 24.3062091. Six constraints are active (g_1, g_2, g_3, g_4, g_5, and g_6).

H. g08
Maximize [4]

f(x) = \frac{\sin^3(2\pi x_1)\,\sin(2\pi x_2)}{x_1^3 (x_1 + x_2)}

subject to

g_1(x) = x_1^2 - x_2 + 1 \le 0,
g_2(x) = 1 - x_1 + (x_2 - 4)^2 \le 0,

where 0 \le x_1 \le 10 and 0 \le x_2 \le 10. The optimum solution is x^* = (1.2279713, 4.2453733), where f(x^*) = 0.095825. The solution lies within the feasible region.

I. g09
Minimize [19]

f(x) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6x_7 - 10x_6 - 8x_7

subject to

g_1(x) = -127 + 2x_1^2 + 3x_2^4 + x_3 + 4x_4^2 + 5x_5 \le 0,
g_2(x) = -282 + 7x_1 + 3x_2 + 10x_3^2 + x_4 - x_5 \le 0,
g_3(x) = -196 + 23x_1 + x_2^2 + 6x_6^2 - 8x_7 \le 0,
g_4(x) = 4x_1^2 + x_2^2 - 3x_1x_2 + 2x_3^2 + 5x_6 - 11x_7 \le 0,

where -10 \le x_i \le 10 (i = 1, ..., 7). The optimum solution is x^* = (2.330499, 1.951372, 0.4775414, 4.365726, 0.6244870, 1.038131, 1.594227), where f(x^*) = 680.6300573. Two constraints are active (g_1 and g_4).

J. g10
Minimize [19]

f(x) = x_1 + x_2 + x_3

subject to

g_1(x) = -1 + 0.0025(x_4 + x_6) \le 0,
g_2(x) = -1 + 0.0025(x_5 + x_7 - x_4) \le 0,
g_3(x) = -1 + 0.01(x_8 - x_5) \le 0,
g_4(x) = -x_1x_6 + 833.33252x_4 + 100x_1 - 83333.333 \le 0,
g_5(x) = -x_2x_7 + 1250x_5 + x_2x_4 - 1250x_4 \le 0,
g_6(x) = -x_3x_8 + 1250000 + x_3x_5 - 2500x_5 \le 0,

where 100 \le x_1 \le 10000, 1000 \le x_i \le 10000 (i = 2, 3), and 10 \le x_i \le 1000 (i = 4, ..., 8). The optimum solution is x^* = (579.3167, 1359.943, 5110.071, 182.0174, 295.5985, 217.9799, 286.4162, 395.5979), where f(x^*) = 7049.3307. Three constraints are active (g_1, g_2, and g_3).

K. g11
Minimize [4]

f(x) = x_1^2 + (x_2 - 1)^2

subject to

h(x) = x_2 - x_1^2 = 0,

where -1 \le x_1 \le 1 and -1 \le x_2 \le 1. The optimum solution is x^* = (\pm 1/\sqrt{2}, 1/2), where f(x^*) = 0.75.
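The g11 optimum is easy to verify numerically; a minimal sketch:

```python
import math

def g11(x1, x2):
    """Objective and equality constraint for test problem g11."""
    f = x1 ** 2 + (x2 - 1) ** 2
    h = x2 - x1 ** 2
    return f, h

f, h = g11(1 / math.sqrt(2), 0.5)
# f(x*) = 0.75 and h(x*) = 0, up to floating-point rounding.
```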

L. g12
Maximize [4]

f(x) = \frac{100 - (x_1 - 5)^2 - (x_2 - 5)^2 - (x_3 - 5)^2}{100}

subject to

g(x) = (x_1 - p)^2 + (x_2 - q)^2 + (x_3 - r)^2 - 0.0625 \le 0,

where 0 \le x_i \le 10 (i = 1, 2, 3) and p, q, r = 1, 2, ..., 9. The feasible region of the search space consists of 9^3 disjoint spheres. A point (x_1, x_2, x_3) is feasible if and only if there exist (p, q, r) such that the above inequality holds. The optimum is located at x^* = (5, 5, 5), where f(x^*) = 1. The solution lies within the feasible region [2].

∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼∼

Correction

The second author's Chinese name in the paper entitled "Autocorrelation Values of New Generalized Cyclotomic Sequences of Order Two and Length pq", published in No.6, 2007, pp.830–834, should be 陈智雄, not 陈志雄.