
A Preliminary Study of Fitness Inheritance in Evolutionary Constrained Optimization

Efrén Mezura-Montes 1, Lucía Muñoz-Dávila 2, and Carlos A. Coello Coello 3

1 Laboratorio Nacional de Informática Avanzada (LANIA A.C.), Rébsamen 80, Centro, Xalapa, Veracruz, 91000, MÉXICO. [email protected]

2 Instituto Tecnológico de Apizaco, Av. Instituto Tecnológico S/N, Apizaco, Tlaxcala, MÉXICO. [email protected]

3 Departamento de Computación, Av. IPN No. 2508, Col. San Pedro Zacatenco, México, D.F., 07300, MÉXICO. [email protected]

Summary. This document presents a proposal to incorporate a fitness inheritance mechanism into an Evolution Strategy used to solve the general nonlinear programming problem. The aim is to find a trade-off between a lower number of evaluations of each solution and a good performance of the approach. A set of test problems taken from the specialized literature was used to test the capabilities of the proposed approach to save evaluations and to maintain a competitive performance.

1 Introduction

The general nonlinear programming problem (NLP) is formally defined as follows: find x which minimizes f(x) subject to gi(x) ≤ 0, i = 1, . . . , m, and hj(x) = 0, j = 1, . . . , p, where x ∈ IR^n is the vector of solutions x = [x1, x2, . . . , xn]^T, and each xi, i = 1, . . . , n, is bounded by lower and upper limits Li ≤ xi ≤ Ui which define the search space S; F is the feasible region and F ⊆ S; m is the number of inequality constraints and p is the number of equality constraints (in both cases, constraints may be linear or nonlinear).
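To make this formulation concrete, the following minimal Python sketch (our own illustration, not code from the paper) encodes problem g11 from the Appendix, whose single constraint is the nonlinear equality h(x) = x2 − x1^2 = 0. The equality constraint is relaxed here with a small tolerance eps, a common practice that the paper does not discuss explicitly.

import numpy as np

def f(x):
    """Objective of problem g11: minimize x1^2 + (x2 - 1)^2."""
    return x[0] ** 2 + (x[1] - 1.0) ** 2

def h(x):
    """Equality constraint of g11: x2 - x1^2 = 0."""
    return x[1] - x[0] ** 2

LOWER = np.array([-1.0, -1.0])   # lower limits L_i defining S
UPPER = np.array([1.0, 1.0])     # upper limits U_i defining S

def violation(x, eps=1e-4):
    """Sum of constraint violation; the equality is relaxed to |h(x)| - eps <= 0."""
    return max(0.0, abs(h(x)) - eps)

x = np.array([0.7, 0.49])        # satisfies h(x) = 0 exactly
print(f(x), violation(x))        # -> 0.7501 0.0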

Evolutionary Algorithms (EAs) are widely used as alternative techniques (mathematical programming approaches are usually the first choice) to solve the NLP [1].

However, three shortcomings can be identified when applying EAs to solve the NLP:

• A set of parameter values must be defined by the user, and the behavior of the EA in the search process depends on these values.

• In the presence of constraints, a constraint-handling mechanism must be added to the EA in order to incorporate feasibility information in its selection and/or replacement process(es), and this mechanism may involve additional parameters to be fine-tuned by the user.


• Usually, the EA needs to perform many evaluations of the objective function of the problem and its constraints in order to find a "good" solution. Furthermore, for some real-world problems, the evaluation of a single solution may imply a high computational cost.

This work focuses on the last disadvantage. A common way to deal with it is the use of fitness approximation models, which keep the EA from using the original (and possibly costly) model of the problem every time a new solution is evaluated [2]. Polynomial models, Kriging, Neural Networks and Support Vector Machines are the main approaches used for fitness approximation [2], and some applications of them are reported in the specialized literature [3]. However, despite the fact that the use of these models decreases the number of evaluations required by an EA, they do add an extra computational cost related to their generation and updating processes.

In contrast, this work proposes a simpler approximation mechanism, known as fitness inheritance [4], which prevents a new solution from being evaluated. Instead, the solution inherits its fitness value from its parents. This mechanism is added to an Evolution Strategy [5] which is used to solve the NLP.

This document is organized as follows: Section 2 presents a brief summary of approaches for saving evaluations when solving the general NLP with EAs. After that, Section 3 includes a description of our proposed approach. Then, in Section 4, the experimental design, the results obtained and the corresponding discussion are presented. Finally, Section 5 summarizes our findings and presents some possible paths for future work.

2 Related work

Several approaches based on fitness approximation models for EAs have been reported in the specialized literature. However, the main efforts have centered either on unconstrained global optimization problems [2] or on multiobjective optimization problems [6, 7]. We focus here on the approaches proposed specifically to solve the general (constrained) NLP:

• Runarsson [8] used a nearest-neighborhood model to solve a set of NLP problems. The results showed that using just the information of the closest solution in the decision space (defined by the lower and upper limits of the decision variables; see Section 1), drawn from a set of solutions found during the search, provides more competitive performance than using the average value of a set of solutions in the vicinity of the new solution to be evaluated. The overall performance of this approach is highly competitive, but its main disadvantage is that it requires storing a considerably high number of solutions to obtain a better approximation of the new solutions to be generated, and this storage may be prohibitive in some cases (a rough sketch of the nearest-neighbor estimation idea appears after this list).


• Mezura-Montes and Coello Coello [9] proposed an approach based on Differential Evolution (DE) [10] to solve the general NLP while reducing the computational cost measured by the number of evaluations. Instead of using approximation models, the authors proposed to use features of the search engine (DE in this case) in order to avoid the evaluation of some solutions, assigning them a zero fitness value (death penalty); these solutions are then discarded. This mechanism also slowed down the convergence of the approach and, for some problems, better solutions were found. The main disadvantage of the approach is that it only works with DE.

• Won and Ray [3] compared Kriging and Cokriging surrogate models with radial-basis functions using a set of five constrained engineering design problems. They found that the results obtained by using these models were very competitive. However, these models may be more difficult to implement.
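To illustrate the nearest-neighbor idea behind Runarsson's model (first bullet above), the following Python fragment is a rough sketch of the estimation step only; it is our own simplification and omits the approximate-ranking machinery of [8].

import numpy as np

def nearest_neighbor_estimate(x_new, archive_x, archive_f):
    """Approximate f(x_new) with the objective value of the closest
    archived solution in decision space (Euclidean distance here)."""
    dists = np.linalg.norm(archive_x - x_new, axis=1)
    return archive_f[int(np.argmin(dists))]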

3 Our proposed approach

Motivated by Runarsson's findings regarding the use of information from the closest solution in the decision space, but trying to avoid keeping a large set of solutions, and also aiming for an easy implementation, we propose a simple approach which provides competitive results while decreasing the number of evaluations required by the EA.

Fitness inheritance (FI) was originally proposed by Smith [4]. The idea is to approximate the objective function value of an offspring based on the values of its parents. Smith initially proposed to compute the average of the parents' objective function values; he alternatively proposed to use the distance of each parent to its offspring in the decision space to determine how much each parent's objective function value contributes to the corresponding value of the offspring. When using FI, several evaluations may be saved during the evolutionary process.
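The two variants can be summarized by the following sketch (our own rendering with hypothetical names; Smith's GA operates on two parents):

import numpy as np

def inherit_average(f_p1, f_p2):
    """Smith's first variant: the plain average of the parents'
    objective function values."""
    return 0.5 * (f_p1 + f_p2)

def inherit_weighted(x_child, x_p1, x_p2, f_p1, f_p2):
    """Smith's second variant: each parent's contribution is weighted
    by its closeness to the offspring in decision space."""
    d1 = np.linalg.norm(x_child - x_p1)
    d2 = np.linalg.norm(x_child - x_p2)
    if d1 + d2 == 0.0:           # offspring coincides with both parents
        return f_p1
    w1 = d2 / (d1 + d2)          # the closer parent gets the larger weight
    return w1 * f_p1 + (1.0 - w1) * f_p2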

We then propose to use Smith's ideas, originally incorporated into a genetic algorithm (GA) to solve simple unconstrained optimization problems, in an Evolution Strategy designed to solve constrained optimization problems, i.e., the general NLP.

Evolution Strategies (ESs) were developed in Germany in 1964 to solve complex hydrodynamical problems. The researchers involved in this work were Ingo Rechenberg, Hans-Paul Schwefel and Paul Bienert [11].

The ES simulates evolution at an individual level; thus, this approach incorporates a crossover operator, either sexual or panmictic (more than two parents), which, however, acts as a secondary operator. Mutation is the main operator and it uses random numbers generated under a Gaussian distribution. The mutation values vary over time and are self-adaptive. The encoding is at a phenotypic level (i.e., no encoding is required). Parent selection is performed in a purely random process (i.e., it is not based on fitness values) and the replacement process is deterministic and extinctive, based on fitness value (the worst individuals have zero probability of survival).

There are several versions of ESs. The first of them is the (1 + 1)-ES. This version has only one solution, which is mutated to create one child; if the child is better than the parent, it replaces it. The first number between parentheses refers to the size of the parent population (one in this case), the "+" sign refers to the type of replacement (the other possible replacement is the "," replacement) and the last value refers to the number of offspring generated from the parents (also one in this case). There are other types of ESs like the (µ + 1)-ES, (1 + λ)-ES, (µ + λ)-ES and the (µ, λ)-ES.

One feature that clearly distinguishes ESs from other EAs, like GAs, is that an ES performs a self-adaptive process with the stepsize values (σi) of each individual for each dimension i of the search space. Figure 1 presents how a solution is encoded in an ES: both the decision variables of the problem and the stepsizes for each one of them are stored in one single solution. These stepsizes are subject to recombination and mutation because they are evolving as well.

[Figure 1 depicts an example solution vector: decision variables (10.123, 7.034) followed by strategy parameters (σ1 = 0.02, σ2 = 0.1).]

Fig. 1. Encoding of a solution in a typical Evolution Strategy. Decision variables and strategy parameters are both represented in a single solution.
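In code, this representation amounts to a small record holding both vectors. The sketch below is our own bookkeeping illustration; the fields f, viol and inherited anticipate the fitness inheritance mechanism described later and are not part of the classical ES encoding.

from dataclasses import dataclass
import numpy as np

@dataclass
class Individual:
    """One ES solution as in Figure 1: decision variables plus
    their stepsizes, evolved together."""
    x: np.ndarray              # decision variables x1..xn
    sigma: np.ndarray          # strategy parameters sigma1..sigman
    f: float = 0.0             # objective value (exact or inherited)
    viol: float = 0.0          # sum of constraint violation
    inherited: bool = False    # True if f/viol were inherited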

In this work we use a (µ + λ)-ES with panmictic discrete-intermediate recombination (more than two parents are used to generate one offspring), applied to both decision variables and strategy parameters, as shown in Figure 2.

Noncorrelated Gaussian mutation is implemented as follows:

σ′i = σi · exp(τ′ · N(0, 1) + τ · Ni(0, 1))   (1)

x′i = xi + σ′i · Ni(0, 1)   (2)

where σi is the stepsize of the variable xi, and τ and τ′ are interpreted as "learning rates", defined by Schwefel [5] as τ = 1/√(2√n) and τ′ = 1/√(2n), where n is the number of decision variables. Ni(x, y) is a function that returns a real normally-distributed random number with mean x and standard deviation y; the index i indicates that this random number is generated anew for each decision variable or strategy parameter.
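A direct transcription of Equations 1 and 2 into Python might look as follows (a sketch under the stated definitions of τ and τ′; rng is a numpy.random.Generator, e.g. np.random.default_rng(), and handling of variables that leave their bounds is omitted):

import numpy as np

def mutate(x, sigma, rng):
    """Noncorrelated Gaussian mutation, Equations 1 and 2."""
    n = len(x)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))      # "learning rates"
    tau_prime = 1.0 / np.sqrt(2.0 * n)         # after Schwefel [5]
    common = rng.normal()                      # N(0,1), shared by all i
    sigma_new = sigma * np.exp(tau_prime * common + tau * rng.normal(size=n))
    x_new = x + sigma_new * rng.normal(size=n) # N_i(0,1) drawn anew per i
    return x_new, sigma_new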


recombination
  Select randomly Parent 1 from the parent population
  FOR i = 1 TO n DO
    Select randomly Parent 2 from the parent population
    IF flip(0.5) THEN
      IF flip(0.5) THEN
        childi = Parent 1i
      ELSE
        childi = Parent 2i
      END IF
    ELSE
      childi = Parent 1i + (Parent 2i − Parent 1i)/2
    END IF
  END FOR
END

Fig. 2. Pseudocode of the recombination operator used in our approach. Parent 1 is fixed during the whole process, but Parent 2 is chosen anew for each decision variable. flip(P) is a function that returns TRUE with probability P.
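A Python rendering of Figure 2 could be the following (our sketch; for brevity it recombines only the decision variables, although the paper applies the same scheme to the strategy parameters as well):

import numpy as np

def recombine(population, rng):
    """Panmictic discrete-intermediate recombination (Figure 2):
    Parent 1 is fixed; Parent 2 is redrawn for every variable."""
    p1 = population[rng.integers(len(population))]
    n = len(p1.x)
    child = np.empty(n)
    for i in range(n):
        p2 = population[rng.integers(len(population))]
        if rng.random() < 0.5:   # discrete part: copy one parent's gene
            child[i] = p1.x[i] if rng.random() < 0.5 else p2.x[i]
        else:                    # intermediate part: midpoint of both genes
            child[i] = p1.x[i] + (p2.x[i] - p1.x[i]) / 2.0
    return child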

The constraint-handling mechanism chosen is based on Deb's feasibility rules [12], used to rank solutions so as to choose those that will remain in the population for the next generation. The rules are: (1) between two feasible solutions, the one with the best fitness value wins; (2) if one solution is feasible and the other one is infeasible, the feasible solution wins; and (3) if both solutions are infeasible, the one with the lowest sum of constraint violation, ∑_{i=1}^{m} max(0, gi(x)), is preferred.
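A minimal comparator implementing these three rules might be the following sketch (assuming, as in the Individual record above, that each solution carries its objective value f, to be minimized, and its sum of constraint violation viol):

def better(a, b):
    """Deb's feasibility rules: True if solution a is preferred over b."""
    a_feasible, b_feasible = a.viol == 0.0, b.viol == 0.0
    if a_feasible and b_feasible:
        return a.f < b.f            # rule 1: better objective wins
    if a_feasible != b_feasible:
        return a_feasible           # rule 2: feasible beats infeasible
    return a.viol < b.viol          # rule 3: smaller violation wins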

The fitness inheritance mechanism is proposed as follows: in the recombination operator used and detailed in Figure 2, n + 1 parents are used, where n is the number of decision variables. The inherited values of the objective function and the constraints for the only offspring generated in each recombination and mutation process are calculated from this subset of parents taken from the whole population.

The Manhattan distance in the decision space is calculated between the offspring and each one of its parents using the expression ∑_{i=1}^{n} |Xpi − Xhi|, where Xp is the parent solution, Xh is the offspring and n is the number of decision variables of the problem. The offspring takes all its values (objective function and constraints) from the closest parent in the decision space.
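In code, the inheritance step reduces to a nearest-parent lookup under the Manhattan distance (our sketch, reusing the fields introduced earlier):

import numpy as np

def inherit_from_closest(child_x, parents):
    """Copy objective and constraint values from the parent closest to
    the offspring under the Manhattan (L1) distance."""
    dists = [np.sum(np.abs(p.x - child_x)) for p in parents]
    closest = parents[int(np.argmin(dists))]
    return closest.f, closest.viol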

It is important to note that, in this approach, the set of solutions considered to pass their values to the offspring adapts to the dimensionality of the problem. Moreover, there is no need to store a high number of solutions, because only the offspring's own parents are considered as sources of inherited values.

In order for this fitness inheritance approach to perform well, two parameters were considered:

• 0 ≤ IR ≤ 1: Inheritance ratio. The percentage of the set of λ offspring that will use the fitness inheritance mechanism. The remaining 1 − IR solutions will be evaluated with the real model of the problem.


• 0 ≤ RR ≤ 1: Replacement ratio. The percentage of solutions with inherited values that will survive to the next generation.

These parameters allow the user to control the error caused by the solutions with inherited values: if several solutions with approximated values are in the population, the search may be guided by non-exact information more frequently.

In the first and in the last generation all solutions are evaluated with the original model (i.e., IR = 0), so as to start from exact information and also to report the best solution found so far.

The replacement process of the ES with the FI mechanism (i.e., selecting the µ solutions, out of the µ + λ, which will survive to the next generation) is designed in such a way that only a percentage of the solutions with inherited values will be in the next generation. The process thus seeks to decrease the error introduced by the solutions with non-exact values.

The complete pseudocode of our proposed Evolution Strategy with Fitness Inheritance is detailed in Figure 3. The initial population is generated with random values for each decision variable between its lower and upper bounds.

All stepsize values are initialized as follows: σi(0) = (Ui − Li)/√n, where Ui and Li are the upper and lower bounds of the decision variable i, i = 1, . . . , n (see Section 1).

Generate an initial population of size µ
Evaluate each solution in the population with the original model
FOR G = 1 TO Max_Generations DO
  FOR i = 1 TO λ DO
    Generate one offspring by using recombination and mutation
    (Figure 2 and Equations 1 and 2)
→   IF flip(IR) AND G < Max_Generations THEN
→     The offspring inherits its objective function and constraint
→     values from its closest parent
    ELSE
      The offspring is evaluated in the original model
    END IF
  END FOR
  Split the µ + λ solutions in two groups (solutions with inherited
  values and solutions evaluated in the original model) and rank each
  group based on Deb's feasibility rules
  FOR i = 1 TO µ DO
→   IF flip(RR) THEN
→     Select to survive the best individual from the group of
→     solutions with inherited values; delete it from its group
    ELSE
      Select to survive the best individual from the group of
      solutions evaluated with the original model; delete it from
      its group
    END IF
  END FOR
END FOR

Fig. 3. Pseudocode of the (µ + λ)-ES with fitness inheritance. flip(P) returns 1 with probability P. Steps marked with → are those where the fitness inheritance approach is included.
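The FI-specific replacement step of Figure 3 can be sketched in Python as follows (our illustration; rank_key orders each group consistently with Deb's rules, and the fallback covers the corner case in which one group runs empty):

def rank_key(s):
    """Feasible solutions first, ordered by objective value;
    infeasible ones after, ordered by their violation."""
    return (s.viol > 0.0, s.viol if s.viol > 0.0 else s.f)

def replace(pool, mu, RR, rng):
    """(mu + lambda) replacement of Figure 3: rank inherited and
    exactly evaluated solutions separately, then fill the next
    population, drawing from the inherited group with probability RR."""
    inherited = sorted((s for s in pool if s.inherited), key=rank_key)
    exact = sorted((s for s in pool if not s.inherited), key=rank_key)
    survivors = []
    for _ in range(mu):
        group = inherited if (rng.random() < RR and inherited) else exact
        if not group:                    # chosen group exhausted: use the other
            group = inherited or exact
        survivors.append(group.pop(0))   # best remaining of that group
    return survivors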


4 Experiments and results

In the experimental design, we used 13 well-known benchmark problems taken from the specialized literature [13]. A summary of the main features of each test problem is presented in Table 1, and the complete expressions are included in the Appendix at the end of this document. In every experiment, 30 independent runs were performed.

Two experiments were executed: (1) comparing the ES with the fitness inheritance mechanism against the original ES without fitness inheritance, considering that the FI version performs fewer evaluations, and (2) comparing the ES with the fitness inheritance mechanism against the original ES, but now adjusting the latter to perform the same number of evaluations. The goal of the first experiment is to analyze the capability of the FI approach to decrease the number of evaluations without affecting the performance of the original approach. The second experiment aims to analyze the behavior of the FI mechanism under similar conditions with respect to the original version of the algorithm.

The following nomenclature is used in the reported results: IR-RR-FIES, where IR is the inheritance ratio, RR is the replacement ratio, and FIES stands for Fitness Inheritance Evolution Strategy.

For the first version of the ES without fitness inheritance, the parameters were as follows: (100 + 300)-ES with 0-0-FIES, i.e., IR = 0% and RR = 0%, Max_Generations = 800 (240,000 evaluations). For the version of the ES with fitness inheritance, the following parameters were used: (100 + 300)-ES with 30-50-FIES, i.e., IR = 30% and RR = 50%, Max_Generations = 800 (167,000 evaluations). For the second version of the ES without fitness inheritance, the parameters were: (100 + 200)-ES with 0-0-FIES, i.e., IR = 0% and RR = 0%, Max_Generations = 850 (170,000 evaluations). The statistical results obtained from the 30 independent runs performed per ES version per problem are presented in Table 2.
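As a rough sanity check on these evaluation counts (our own back-of-the-envelope estimate, assuming inheritance is active in every generation except the first and the last): the (100 + 300)-ES baseline performs about 100 + 300 · 800 ≈ 240,000 evaluations, while 30-50-FIES skips roughly IR · λ = 0.3 · 300 = 90 evaluations in each of about 798 generations, i.e., 240,000 − 90 · 798 ≈ 168,000 evaluations, which is consistent with the 167,000 reported above.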

Regarding the first experiment, comparing 0-0-FIES (240,000 evaluations) and 30-50-FIES (167,000 evaluations), we observed the following: in functions g01, g04, g06, g08 and g12, both approaches reached the best known solution consistently. On the other hand, in functions g02, g03, g05, g07, g10 and g13, 0-0-FIES obtained an overall better performance than 30-50-FIES. However, the differences in the obtained results are not as high as expected. These results suggest that there is a small performance decrease when fitness inheritance is applied (as expected, because several solutions have an inexact value to guide the search), but a considerable saving in the number of evaluations (about 30%) is achieved.

For the second experiment (both compared approaches performing the same number of evaluations, ≈170,000), the following was observed: in functions g03, g07, g10 and g13, 30-50-FIES obtained a "better" best result with respect to 0-0-FIES. With respect to the worst value found, 30-50-FIES obtained "better" results in problems g03, g05, g07 and g11, 0-0-FIES was better in problem g10, and both approaches provided a "similar" worst result in problem g13.


Table 1. Main features of each benchmark problem used in the experiments. ρ is the estimated size of the feasible region with respect to the whole search space [13], n is the number of decision variables, LI is the number of linear inequality constraints, NI the number of nonlinear inequality constraints, LE the number of linear equality constraints and NE the number of nonlinear equality constraints.

Problem   n    Type of function   ρ           LI   NI   LE   NE
g01       13   quadratic           0.0003%     9    0    0    0
g02       20   nonlinear          99.9973%     0    2    0    0
g03       10   nonlinear           0.0026%     0    0    0    1
g04        5   quadratic          27.0079%     0    6    0    0
g05        4   nonlinear           0.0000%     2    0    0    3
g06        2   nonlinear           0.0057%     0    2    0    0
g07       10   quadratic           0.0000%     3    5    0    0
g08        2   nonlinear           0.8581%     0    2    0    0
g09        7   nonlinear           0.5199%     0    4    0    0
g10        8   linear              0.0020%     3    3    0    0
g11        2   quadratic           0.0973%     0    0    0    1
g12        3   quadratic           4.7697%     0    1    0    0
g13        5   nonlinear           0.0000%     0    0    0    3

Finally, regarding the mean and standard deviation values, 30-50-FIES provided better results in problems g02, g07 and g11. The results of this second experiment suggest that the fitness inheritance mechanism, which indeed introduces some error into the values that guide the search, is able to promote the exploration of other regions of the search space so as to obtain either "better" results or a more consistent behavior in reaching the vicinity of the best known solution. This behavior was observed in problems with a very small feasible region (g03, g05, g07, g10, g11 and g13), some of which have nonlinear equality constraints. In this type of problem it is very common that the search is strongly biased by the first feasible solution found, and premature convergence inside the feasible region may occur. The incorporation of solutions which may be infeasible, but which, based on their closeness to a feasible solution, are considered feasible (due to the inheritance process), seems to allow the evolutionary search to explore the search space in a different way and to approach the feasible region from different directions. These results are far from conclusive and more detailed experiments are necessary to validate the aforementioned discussion. Nonetheless, the results obtained show that fitness inheritance is a valid option to be considered in this type of ES in order to save evaluations without considerably affecting the good performance of the approach.

5 Conclusions and future work

An approach to incorporate a fitness inheritance mechanism into an Evolution Strategy to solve the general NLP was proposed. The approach is based on a panmictic recombination operator, where the parent closest to the offspring in the decision space is chosen to pass all its values to the offspring. Two experiments were designed to evaluate (1) the capability of the proposed inheritance approach to save evaluations without affecting the overall performance of the original algorithm, and (2) the behavior of the fitness inheritance under similar conditions (number of evaluations) with respect to the original algorithm.


Table 2. Statistical results obtained with the three ES versions: one with fitness inheritance, 30-50-FIES (167,000 evaluations), and two without fitness inheritance, 0-0-FIES (240,000 evaluations) and 0-0-FIES (170,000 evaluations). A result in boldface indicates either a better result or that the best known solution was reached.

Problem        Stats      30-50-FIES   0-0-FIES     0-0-FIES
(best known)                           (240,000)    (170,000)
g01            Best       -15.000      -15.000      -15.000
(-15)          Mean       -15.000      -15.000      -15.000
               Worst      -15.000      -15.000      -15.000
               St. Dev.   0            0            0
g02            Best       0.803534     0.803595     0.803569
(0.803619)     Mean       0.792789     0.787545     0.784593
               Worst      0.744716     0.755566     0.754253
               St. Dev.   0.012727     0.01184      0.013125
g03            Best       0.95351      0.98147      0.914961
(1)            Mean       0.799909     0.886853     0.804792
               Worst      0.643692     0.738482     0.605035
               St. Dev.   0.087753     0.06431      0.08445
g04            Best       -30665.539   -30665.539   -30665.539
(-30665.539)   Mean       -30665.539   -30665.539   -30665.539
               Worst      -30665.539   -30665.539   -30665.539
               St. Dev.   0            0            0
g05            Best       5142.870     5126.610     5122.240
(5126.498)     Mean       5177.359     5204.006     5162.288
               Worst      5229.140     5386.690     5391.810
               St. Dev.   30.5066      114.6465     76.25770
g06            Best       -6961.814    -6961.814    -6961.814
(-6961.814)    Mean       -6961.814    -6961.814    -6961.814
               Worst      -6961.814    -6961.814    -6961.814
               St. Dev.   0            0            0
g07            Best       24.347       24.328       24.378
(24.306)       Mean       24.458       24.462       24.484
               Worst      24.707       24.646       24.833
               St. Dev.   0.0878       0.07206      0.10997
g08            Best       0.095826     0.095826     0.095826
(0.095825)     Mean       0.095826     0.095826     0.095826
               Worst      0.095826     0.095826     0.095826
               St. Dev.   0            0            0
g09            Best       680.63       680.63       680.63
(680.63)       Mean       680.642      680.642      680.642
               Worst      680.678      680.678      680.667
               St. Dev.   0.00987      0.00987      0.007427
g10            Best       7058.1       7052.22      7063.72
(7049.25)      Mean       7273.402     7261.021     7252.095
               Worst      7674.44      7488.69      7608.37
               St. Dev.   120.559      102.93005    128.2823
g11            Best       0.75         0.75         0.75
(0.75)         Mean       0.75         0.7504       0.7544
               Worst      0.76         0.76         0.79
               St. Dev.   0.0014       0.0022       0.01
g12            Best       1.000        1.000        1.000
(1)            Mean       1.000        1.000        1.000
               Worst      1.000        1.000        1.000
               St. Dev.   0            0            0
g13            Best       0.497647     0.464606     0.992215
(0.053949)     Mean       0.99617813   0.92083767   0.99611781
               Worst      0.999926     0.998043     0.999926
               St. Dev.   0.2465       0.17461      0.00227

The results obtained showed that the inheritance mechanism is able to decrease the number of evaluations required by the original approach (by about 30%) without considerably affecting its good performance. Furthermore, a first analysis indicated that the fitness inheritance mechanism is able to promote further exploration of the search space in some problems, most of them with a small feasible region and with nonlinear equality constraints, so as to obtain "better" results. However, this last finding requires further experimentation and analysis, which is in fact part of the future work, together with a more careful study of the effect of the IR and RR parameters on the behavior of the evolutionary search.

Acknowledgement. The first and third authors gratefully acknowledge support from CONACyT through projects No. 52048-Y and No. 45683-Y, respectively.

Appendix

The details of the benchmark functions used are the following:

• g01:
Minimize: f(x) = 5∑_{i=1}^{4} xi − 5∑_{i=1}^{4} xi^2 − ∑_{i=5}^{13} xi
subject to:
g1(x) = 2x1 + 2x2 + x10 + x11 − 10 ≤ 0
g2(x) = 2x1 + 2x3 + x10 + x12 − 10 ≤ 0
g3(x) = 2x2 + 2x3 + x11 + x12 − 10 ≤ 0
g4(x) = −8x1 + x10 ≤ 0
g5(x) = −8x2 + x11 ≤ 0
g6(x) = −8x3 + x12 ≤ 0
g7(x) = −2x4 − x5 + x10 ≤ 0
g8(x) = −2x6 − x7 + x11 ≤ 0
g9(x) = −2x8 − x9 + x12 ≤ 0
where the bounds are 0 ≤ xi ≤ 1 (i = 1, . . . , 9), 0 ≤ xi ≤ 100 (i = 10, 11, 12) and 0 ≤ x13 ≤ 1. The global optimum is located at x∗ = (1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 1) where f(x∗) = −15. Constraints g1, g2, g3, g4, g5 and g6 are active.

• g02:
Maximize: f(x) = |(∑_{i=1}^{n} cos^4(xi) − 2∏_{i=1}^{n} cos^2(xi)) / √(∑_{i=1}^{n} i·xi^2)|
subject to:
g1(x) = 0.75 − ∏_{i=1}^{n} xi ≤ 0
g2(x) = ∑_{i=1}^{n} xi − 7.5n ≤ 0
where n = 20 and 0 ≤ xi ≤ 10 (i = 1, . . . , n). The global maximum is unknown; the best reported solution is f(x∗) = 0.803619. Constraint g1 is close to being active (g1 = −10^−8).

• g03:
Maximize: f(x) = (√n)^n ∏_{i=1}^{n} xi
subject to:
h(x) = ∑_{i=1}^{n} xi^2 − 1 = 0
where n = 10 and 0 ≤ xi ≤ 1 (i = 1, . . . , n). The global maximum is located at x∗i = 1/√n (i = 1, . . . , n) where f(x∗) = 1.

• g04:
Minimize: f(x) = 5.3578547x3^2 + 0.8356891x1x5 + 37.293239x1 − 40792.141
subject to:
g1(x) = 85.334407 + 0.0056858x2x5 + 0.0006262x1x4 − 0.0022053x3x5 − 92 ≤ 0
g2(x) = −85.334407 − 0.0056858x2x5 − 0.0006262x1x4 + 0.0022053x3x5 ≤ 0
g3(x) = 80.51249 + 0.0071317x2x5 + 0.0029955x1x2 + 0.0021813x3^2 − 110 ≤ 0
g4(x) = −80.51249 − 0.0071317x2x5 − 0.0029955x1x2 − 0.0021813x3^2 + 90 ≤ 0
g5(x) = 9.300961 + 0.0047026x3x5 + 0.0012547x1x3 + 0.0019085x3x4 − 25 ≤ 0
g6(x) = −9.300961 − 0.0047026x3x5 − 0.0012547x1x3 − 0.0019085x3x4 + 20 ≤ 0
where 78 ≤ x1 ≤ 102, 33 ≤ x2 ≤ 45, 27 ≤ xi ≤ 45 (i = 3, 4, 5). The global optimum is located at x∗ = (78, 33, 29.995256025682, 45, 36.775812905788) where f(x∗) = −30665.539. Constraints g1 and g6 are active.

• g05:
Minimize: f(x) = 3x1 + 0.000001x1^3 + 2x2 + (0.000002/3)x2^3
subject to:
g1(x) = −x4 + x3 − 0.55 ≤ 0
g2(x) = −x3 + x4 − 0.55 ≤ 0
h3(x) = 1000 sin(−x3 − 0.25) + 1000 sin(−x4 − 0.25) + 894.8 − x1 = 0
h4(x) = 1000 sin(x3 − 0.25) + 1000 sin(x3 − x4 − 0.25) + 894.8 − x2 = 0
h5(x) = 1000 sin(x4 − 0.25) + 1000 sin(x4 − x3 − 0.25) + 1294.8 = 0
where 0 ≤ x1 ≤ 1200, 0 ≤ x2 ≤ 1200, −0.55 ≤ x3 ≤ 0.55, and −0.55 ≤ x4 ≤ 0.55. The best known solution is x∗ = (679.9453, 1026.067, 0.1188764, −0.3962336) where f(x∗) = 5126.4981.

• g06:
Minimize: f(x) = (x1 − 10)^3 + (x2 − 20)^3
subject to:
g1(x) = −(x1 − 5)^2 − (x2 − 5)^2 + 100 ≤ 0
g2(x) = (x1 − 6)^2 + (x2 − 5)^2 − 82.81 ≤ 0
where 13 ≤ x1 ≤ 100 and 0 ≤ x2 ≤ 100. The global optimum is located at x∗ = (14.095, 0.84296) where f(x∗) = −6961.81388. Both constraints are active.

• g07:
Minimize: f(x) = x1^2 + x2^2 + x1x2 − 14x1 − 16x2 + (x3 − 10)^2 + 4(x4 − 5)^2 + (x5 − 3)^2 + 2(x6 − 1)^2 + 5x7^2 + 7(x8 − 11)^2 + 2(x9 − 10)^2 + (x10 − 7)^2 + 45
subject to:
g1(x) = −105 + 4x1 + 5x2 − 3x7 + 9x8 ≤ 0
g2(x) = 10x1 − 8x2 − 17x7 + 2x8 ≤ 0
g3(x) = −8x1 + 2x2 + 5x9 − 2x10 − 12 ≤ 0
g4(x) = 3(x1 − 2)^2 + 4(x2 − 3)^2 + 2x3^2 − 7x4 − 120 ≤ 0
g5(x) = 5x1^2 + 8x2 + (x3 − 6)^2 − 2x4 − 40 ≤ 0
g6(x) = x1^2 + 2(x2 − 2)^2 − 2x1x2 + 14x5 − 6x6 ≤ 0
g7(x) = 0.5(x1 − 8)^2 + 2(x2 − 4)^2 + 3x5^2 − x6 − 30 ≤ 0
g8(x) = −3x1 + 6x2 + 12(x9 − 8)^2 − 7x10 ≤ 0
where −10 ≤ xi ≤ 10 (i = 1, . . . , 10). The global optimum is located at x∗ = (2.171996, 2.363683, 8.773926, 5.095984, 0.9906548, 1.430574, 1.321644, 9.828726, 8.280092, 8.375927) where f(x∗) = 24.3062091. Constraints g1, g2, g3, g4, g5 and g6 are active.

• g08:
Maximize: f(x) = sin^3(2πx1) sin(2πx2) / (x1^3 (x1 + x2))
subject to:
g1(x) = x1^2 − x2 + 1 ≤ 0
g2(x) = 1 − x1 + (x2 − 4)^2 ≤ 0
where 0 ≤ x1 ≤ 10 and 0 ≤ x2 ≤ 10. The global optimum is located at x∗ = (1.2279713, 4.2453733) where f(x∗) = 0.095825.

• g09:
Minimize: f(x) = (x1 − 10)^2 + 5(x2 − 12)^2 + x3^4 + 3(x4 − 11)^2 + 10x5^6 + 7x6^2 + x7^4 − 4x6x7 − 10x6 − 8x7
subject to:
g1(x) = −127 + 2x1^2 + 3x2^4 + x3 + 4x4^2 + 5x5 ≤ 0
g2(x) = −282 + 7x1 + 3x2 + 10x3^2 + x4 − x5 ≤ 0
g3(x) = −196 + 23x1 + x2^2 + 6x6^2 − 8x7 ≤ 0
g4(x) = 4x1^2 + x2^2 − 3x1x2 + 2x3^2 + 5x6 − 11x7 ≤ 0
where −10 ≤ xi ≤ 10 (i = 1, . . . , 7). The global optimum is located at x∗ = (2.330499, 1.951372, −0.4775414, 4.365726, −0.6244870, 1.038131, 1.594227) where f(x∗) = 680.6300573. Two constraints are active (g1 and g4).

• g10:
Minimize: f(x) = x1 + x2 + x3
subject to:
g1(x) = −1 + 0.0025(x4 + x6) ≤ 0
g2(x) = −1 + 0.0025(x5 + x7 − x4) ≤ 0
g3(x) = −1 + 0.01(x8 − x5) ≤ 0
g4(x) = −x1x6 + 833.33252x4 + 100x1 − 83333.333 ≤ 0
g5(x) = −x2x7 + 1250x5 + x2x4 − 1250x4 ≤ 0
g6(x) = −x3x8 + 1250000 + x3x5 − 2500x5 ≤ 0
where 100 ≤ x1 ≤ 10000, 1000 ≤ xi ≤ 10000 (i = 2, 3), 10 ≤ xi ≤ 1000 (i = 4, . . . , 8). The global optimum is located at x∗ = (579.19, 1360.13, 5109.92, 182.0174, 295.5985, 217.9799, 286.40, 395.5979), where f(x∗) = 7049.25. Constraints g1, g2 and g3 are active.

• g11:
Minimize: f(x) = x1^2 + (x2 − 1)^2
subject to:
h(x) = x2 − x1^2 = 0
where −1 ≤ x1 ≤ 1, −1 ≤ x2 ≤ 1. The global optimum is located at x∗ = (±1/√2, 1/2) where f(x∗) = 0.75.

• g12:
Maximize: f(x) = (100 − (x1 − 5)^2 − (x2 − 5)^2 − (x3 − 5)^2)/100
subject to:
g1(x) = (x1 − p)^2 + (x2 − q)^2 + (x3 − r)^2 − 0.0625 ≤ 0
where 0 ≤ xi ≤ 10 (i = 1, 2, 3) and p, q, r = 1, 2, . . . , 9. The feasible region of the search space consists of 9^3 disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist p, q, r such that the above inequality holds. The global optimum is located at x∗ = (5, 5, 5) where f(x∗) = 1.

• g13:
Minimize: f(x) = e^(x1x2x3x4x5)
subject to:
h1(x) = x1^2 + x2^2 + x3^2 + x4^2 + x5^2 − 10 = 0
h2(x) = x2x3 − 5x4x5 = 0
h3(x) = x1^3 + x2^3 + 1 = 0
where −2.3 ≤ xi ≤ 2.3 (i = 1, 2) and −3.2 ≤ xi ≤ 3.2 (i = 3, 4, 5). The global optimum is located at x∗ = (−1.717143, 1.595709, 1.827247, −0.7636413, −0.763645) where f(x∗) = 0.0539498.

References

1. Michalewicz, Z. and Fogel, D. B. (2004) How to Solve It: Modern Heuristics, 2nd edition. Springer, Berlin, Germany.
2. Jin, Y. (2005) A comprehensive survey of fitness approximation in evolutionary computation. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 9(1), 3–12.
3. Won, K.-S. and Ray, T. (2004) Performance of Kriging and Cokriging based Surrogate Models within the Unified Framework for Surrogate Assisted Optimization. Proceedings of the IEEE Congress on Evolutionary Computation 2004, Piscataway, New Jersey, June, pp. 1577–1585. IEEE Service Center.
4. Smith, R. E., Dike, B. A., and Stegmann, S. A. (1995) Fitness Inheritance in Genetic Algorithms. SAC '95: Proceedings of the 1995 ACM Symposium on Applied Computing, Nashville, Tennessee, USA, pp. 345–350. ACM Press.
5. Bäck, T. (1996) Evolutionary Algorithms in Theory and Practice. Oxford University Press, New York.
6. Reyes-Sierra, M. and Coello Coello, C. A. (2005) Fitness Inheritance in Multi-Objective Particle Swarm Optimization. 2005 IEEE Swarm Intelligence Symposium (SIS'05), Pasadena, California, USA, June, pp. 116–123. IEEE Press.
7. Voutchkov, I. and Keane, A. (2006) Multiobjective Optimization Using Surrogates. In Parmee, I. (ed.), Proceedings of the Seventh International Conference on Adaptive Computing in Design and Manufacture (ACDM'2006), Bristol, UK, April, pp. 167–175. The Institute for People-centred Computation.
8. Runarsson, T. P. (2004) Constrained Evolutionary Optimization by Approximate Ranking and Surrogate Models. Proceedings of the 8th Parallel Problem Solving From Nature (PPSN VIII), September, pp. 401–410. Springer, UK. LNCS Vol. 3242.
9. Mezura-Montes, E. and Coello Coello, C. A. (2005) Saving Evaluations in Differential Evolution for Constrained Optimization. Sixth Mexican International Conference on Computer Science (ENC'05), September, pp. 274–281. IEEE Computer Society Press.
10. Price, K. V., Storn, R. M., and Lampinen, J. A. (2005) Differential Evolution. A Practical Approach to Global Optimization. Springer, Berlin.
11. Schwefel, H.-P. (1995) Evolution and Optimum Seeking. Wiley, New York.
12. Deb, K. (2000) An Efficient Constraint Handling Method for Genetic Algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2-4), 311–338.
13. Michalewicz, Z. and Schoenauer, M. (1996) Evolutionary Algorithms for Constrained Parameter Optimization Problems. Evolutionary Computation, 4(1), 1–32.