
Applied Mathematics and Computation 183 (2006) 373–389

www.elsevier.com/locate/amc

IT-CEMOP: An iterative co-evolutionary algorithm for multiobjective optimization problem with nonlinear constraints

M.S. Osman a, Mahmoud A. Abo-Sinna b, A.A. Mousa b,*

a High Institute of Technology, 10th Ramadan city, Egypt
b Department of Basic Engineering Science, Faculty of Engineering, Shebin El-Kom, Menoufia University, Egypt

Abstract

Over the past few years, researchers have developed a number of multiobjective evolutionary algorithms (MOEAs). Although most studies concentrate on solving unconstrained optimization problems, there exist a few studies where MOEAs have been extended to solve constrained optimization problems. Most of them were based on penalty functions for handling nonlinear constraints by genetic algorithms. However, the performance of these methods is highly problem-dependent, and many methods require additional tuning of several parameters.

In this paper, we present a new optimization algorithm, which is based on the concepts of co-evolution and a repair algorithm for handling nonlinear constraints. The algorithm maintains a finite-sized archive of nondominated solutions which gets iteratively updated in the presence of new solutions based on the concept of e-dominance. The use of e-dominance also makes the algorithm practical by allowing a decision maker to control the resolution of the Pareto set approximation by choosing an appropriate e value, which guarantees convergence and diversity. The results provided by the proposed algorithm for six benchmark problems are promising when compared with existing well-known algorithms. Also, our results suggest that our algorithm is well suited to solving real-world application problems.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Multiobjective nonlinear programming; Multiobjective evolutionary algorithms; Genetic algorithms; e-Dominance

1. Introduction

When attempting to optimize a decision in industrial and scientific applications, the designer is frequently faced with the problem of achieving several design targets, some of which may be conflicting and noncommensurable, and wherein a gain in one objective is at the expense of another. This problem can generally be reduced to a multiobjective optimization problem (MOP) in operational description, which has been in the spotlight of operations research communities over the years. Usually, there is no unique optimal solution, but rather a set of

0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved.

doi:10.1016/j.amc.2006.05.095

* Corresponding author. E-mail address: [email protected] (A.A. Mousa).


alternative solutions, and these solutions are optimal in the wider sense that no other solutions in the decision space are superior to them when all objectives are considered. They are known as Pareto-optimal solutions, also termed nondominated, noninferior, admissible, or efficient solutions [20].

During the past decade, various multiobjective evolutionary algorithms (MOEAs) have been proposed and applied to MOPs [8]. A representative collection of these algorithms includes the vector evaluated genetic algorithm (VEGA) by Schaffer [25], the niched Pareto genetic algorithm (NPGA) [12], the nondominated sorting genetic algorithm (NSGA) by Srinivas and Deb [26], the nondominated sorting genetic algorithm II (NSGA-II) by Deb et al. [3], the strength Pareto evolutionary algorithm (SPEA) by Zitzler and Thiele [28], the strength Pareto evolutionary algorithm II (SPEA-II) by Zitzler et al. [29], the Pareto archived evolution strategy (PAES) by Knowles and Corne [14] and the memetic PAES (M-PAES) by Knowles and Corne [15]. Although these MOEAs differ from each other in both exploitation and exploration, they share a common purpose: searching for a near-optimal, well-extended and uniformly diversified Pareto-optimal front for a given MOP. However, this ultimate goal is far from being accomplished by the existing MOEAs, as documented in the literature, e.g., [8].

On the other hand, there exist a few studies where an MOEA is specifically designed for handling constraints. The most common among these methods is the penalty function approach [13,18,19], where a penalty proportional to the total constraint violation is added to all objective functions. When applying this procedure, all constraints and objective functions must be normalized.

Deb et al. [3,5] defined a constraint-domination principle, which differentiates infeasible solutions from feasible ones during the nondominated sorting procedure.

Kurpati et al. [16] suggested four constraint handling improvements for MOGA. These improvements are made in the fitness assignment stage of a MOGA and are all based upon a ''Constraint-First-Objective-Next'' model.

Chafekar et al. [6] propose two approaches for solving constrained multiobjective optimization problems using steady state GAs. One method, called objective exchange genetic algorithm for design optimization (OEGADO), runs several GAs concurrently, with each GA optimizing one objective and exchanging information about its objective with the others. The other method, called objective switching genetic algorithm for design optimization (OSGADO), runs each objective sequentially with a common population for all objectives. Despite all these developments, there seem to be few studies concerning procedures for handling constraints.

In this paper, we present a new optimization system (IT-CEMOP), which is based on the concepts of co-evolution and a repair algorithm for handling constraints. It is also based on the e-dominance concept, which maintains a finite-sized archive of nondominated solutions that gets iteratively updated according to the chosen e-vector; this also guarantees convergence and diversity.

The remainder of the paper is organized as follows. In Section 2 we describe some preliminaries on MOPs, and in Section 3 we present constrained multiobjective optimization via genetic algorithm. Experimental results are given and discussed in Section 4. Section 5 indicates our conclusion and notes for future work.

2. Preliminaries

2.1. Problem formulation

A general multiobjective optimization problem (MOP) is expressed as:

Min F(x) = (f_1(x), f_2(x), ..., f_m(x))^T
s.t. x ∈ S,
x = (x_1, x_2, ..., x_n)^T    (1)

where f_1(x), f_2(x), ..., f_m(x) are the m objective functions, (x_1, x_2, ..., x_n) are the n optimization parameters, and S ∈ R^n is the solution or parameter space. Obtainable objective vectors {F(x) | x ∈ S} are denoted by K, so F: S → K, i.e., S is mapped by F onto K. This situation is represented in Fig. 1 for the case n = 2, m = 3.
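As an illustration of formulation (1), a toy MOP with n = 2 and m = 2 can be written in Python; the particular objective functions and bounds below are our own assumptions, chosen only to make the mapping F: S → K concrete.

```python
def F(x):
    # toy vector objective: maps a decision vector x in S to an objective
    # vector in K (both objectives are illustrative, not from the paper)
    x1, x2 = x
    f1 = x1 ** 2 + x2 ** 2
    f2 = (x1 - 1) ** 2 + x2 ** 2
    return (f1, f2)

def in_S(x, lower=(-5.0, -5.0), upper=(5.0, 5.0)):
    # membership test for a box-shaped solution space S
    return all(lo <= xi <= hi for xi, lo, hi in zip(x, lower, upper))
```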

Fig. 1. MOP evaluation mapping.

Fig. 2. The concept of Pareto-optimality.

M.S. Osman et al. / Applied Mathematics and Computation 183 (2006) 373–389 375

Because F(x) is a vector, there is no unique solution to this problem; instead, the concept of noninferiority (also called Pareto-optimality) must be used to characterize the objectives. A noninferior solution is one in which an improvement in one objective requires a degradation of another (Fig. 2).

Definition 1 (Pareto-optimal solution). x* is said to be a Pareto-optimal solution of the MOP if there exists no other feasible x (i.e., x ∈ S) such that f_j(x) ≤ f_j(x*) for all j = 1, 2, ..., m and f_j(x) < f_j(x*) for at least one objective function f_j.
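Definition 1 can be checked mechanically; a minimal sketch under minimization:

```python
def dominates(fa, fb):
    # fa Pareto-dominates fb (minimization): no worse in every objective,
    # strictly better in at least one (cf. Definition 1)
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))
```

A solution x* is Pareto-optimal within a set exactly when no other feasible objective vector dominates F(x*).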

2.2. Structure of an iterative multiobjective search algorithm

The purpose of this section is to informally describe the problem we are dealing with. To this end, let us first give a template for a large class of iterative search procedures which are characterized by the generation of a sequence of search points and a finite memory.

Algorithm 1

Iterative search procedure

1. t ← 0
2. A(0) = ∅
3. while terminate(A(t), t) = false do
4.   t ← t + 1
5.   f(t) ← generate() {generate new search point}
6.   A(t) ← update(A(t−1), f(t)) {update archive}
7. end while

8. Output: A(t)
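Algorithm 1 can be sketched directly; `generate`, `update`, and `terminate` below are caller-supplied placeholders (the names are ours, not functions defined by the paper).

```python
def iterative_search(generate, update, terminate):
    # generic archive-based iterative search (sketch of Algorithm 1)
    t = 0
    archive = []                       # A(0): archive starts empty
    while not terminate(archive, t):
        t += 1
        sample = generate(archive)     # new search point f(t)
        archive = update(archive, sample)
    return archive

# trivial instantiation: 'generate' emits the archive length, 'update'
# appends, and the search stops after three iterations
result = iterative_search(lambda A: len(A), lambda A, s: A + [s],
                          lambda A, t: t >= 3)
```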

Fig. 3. Block diagram of archive/selection strategy.


An abstract description of a generic iterative search algorithm is given in Algorithm 1 [17]. The integer t denotes the iteration count, the n-dimensional vector f(t) ∈ F is the sample generated at iteration t, and the set A(t), called the archive at iteration t, should contain a representative subset of the samples in the objective space F generated so far. To simplify the notation, we represent samples by n-dimensional real vectors f, where each coordinate represents one of the objective values (Fig. 3).

The purpose of the function generate is to generate new solutions in each iteration t, possibly using the contents of the old archive set A(t−1). The function update gets the new solutions f(t) and the old archive set A(t−1) and determines the updated one, namely A(t). In general, the purpose of this sample storage is to gather 'useful' information about the underlying search problem during the run. Its use is usually two-fold: on the one hand it is used to store the 'best' solutions found so far; on the other hand the search operator exploits this information to steer the search to promising regions.

This algorithm could easily be viewed as an evolutionary algorithm when the generate operator is associated with variation (recombination and mutation). However, we would like to point out that all following investigations are equally valid for any kind of iterative process which can be described as Algorithm 1 and used for approximating the Pareto set of multiobjective optimization problems.

There are several reasons why the archive A(t) should be of constant size, independent of the number of iterations t. First, the computation time grows with the number of archived solutions, as for example the function generate may use the archive for guiding the search; moreover, it may simply be impossible to store all solutions, as physical memory is always finite. In addition, the value of presenting such a large set of solutions to a decision maker is doubtful in the context of decision support; instead, one should provide him with a set of the best representative samples. Finally, in limiting the solution set, preference information could be used to steer the process to certain parts of the search space.

2.3. Clustering algorithm

The clustering approach of SPEA [28] forms N clusters (where N is the archive size) from N′ (> N) population members by initially assuming each of the N′ members to be a separate cluster. Thereafter, all (N′ choose 2) Euclidean distances in the objective space are computed. Then, the two clusters with the smallest distance are merged together to form one bigger cluster. This process reduces the number of clusters to N′ − 1. The inter-cluster distances are computed again and another merging is done. This process is repeated until the number of clusters is reduced to N. With multiple population members occupying two clusters, the average of all pair-wise distances between solutions of the two clusters is used. Fig. 4 illustrates this procedure. For the two clusters shown in dashed lines, the average Euclidean distance among the solutions of the two clusters is computed as shown in Fig. 4. The average distance is computed for all pairs of clusters, and the two clusters with the smallest average distance are merged together. If (N′ − N) is of the order of N (the archive size), then the procedure requires O(N³) computations in each iteration. Since this procedure is repeated in every iteration of SPEA, the computational overhead as well as the storage requirements for implementing the clustering concept are large. However, since the clustering is implemented based on the Euclidean distance among solutions, the resulting distribution of clustered solutions is usually good.
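A minimal sketch of this average-linkage reduction (function names are ours; the representative chosen per cluster is simply the first member, a simplification):

```python
import math

def avg_linkage(c1, c2):
    # average of all pairwise Euclidean distances between two clusters
    return sum(math.dist(p, q) for p in c1 for q in c2) / (len(c1) * len(c2))

def reduce_by_clustering(points, N):
    # SPEA-style reduction: repeatedly merge the two clusters with the
    # smallest average distance until only N clusters remain
    clusters = [[p] for p in points]
    while len(clusters) > N:
        i, j = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: avg_linkage(clusters[ab[0]], clusters[ab[1]]),
        )
        clusters[i] += clusters.pop(j)
    # keep one representative per cluster
    return [c[0] for c in clusters]
```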

Fig. 4. The clustering approach used in SPEA is illustrated.


2.4. Concept of Pareto set approximation

In many multiobjective optimization problems, the Pareto set X* is of substantial size. Thus, the numerical determination of X* is prohibitive, and X* as the result of an optimization is questionable. Moreover, it is not clear at all what a decision maker can do with such a large result of an optimization run. What would be more desirable is an approximation of X* which approximately dominates all elements of X and is of reasonable size. This set can then be used by a decision maker to determine interesting regions of the decision and objective space, which can be explored in further optimization runs. Next, we define a generalization of the dominance relation as visualized in Fig. 5.

Without loss of generality, a normalized and positive objective space as well as a maximization problem isassumed for notational convenience.

Definition 2 (e-dominance). Let f: X → R^m and a, b ∈ X. Then a is said to e-dominate b for some e > 0, denoted as a ≻_e b, if and only if for all i ∈ {1, ..., m}

(1 + e) f_i(a) ≥ f_i(b).    (2)
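In code, the relation of Definition 2 (maximization over a positive objective space) reads:

```python
def eps_dominates(fa, fb, eps):
    # a e-dominates b when (1 + e) * f_i(a) >= f_i(b) for every objective i
    # (Definition 2; fa, fb are objective vectors, maximization assumed)
    return all((1.0 + eps) * a >= b for a, b in zip(fa, fb))
```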

Definition 3 (e-approximate Pareto set). Let X be a set of decision alternatives and e > 0. Then a set X_e is called an e-approximate Pareto set of X if any vector a ∈ X is e-dominated by at least one vector b ∈ X_e, i.e.,

∀a ∈ X : ∃b ∈ X_e such that b ≻_e a.    (3)

The set of all e-approximate Pareto sets of X is denoted as P_e(X). The image of an e-approximate Pareto set, F_e = f(X_e), is called an e-approximate Pareto front.

Of course, the set X_e is not unique. Many different concepts for e-efficiency and the corresponding Pareto set approximations exist in the operations research literature; a survey is given by Helbig and Pateva [11]. As most of the concepts deal with infinite sets, they are not practical for our purpose of producing and maintaining a representative subset.

Fig. 5. Graphs visualizing the concepts of dominance (left) and e-dominance (right).


Regarding the size of such Pareto set approximations, Erlebach et al. [7] have pointed out that under very mild assumptions, there is always an approximate Pareto set whose size is polynomial in the length of the encoded input, i.e., in the logarithm of the largest objective value. This can be achieved by placing a hyper-grid in the objective space using the coordinates 1, (1 + e), (1 + e)², ..., for each objective. As it suffices to have one representative solution in each grid cell and to have only nondominated cells occupied, it can be seen that for any finite e and any set X with bounded image in objective space, i.e., 1 ≤ f_i(x) ≤ K for all x ∈ X, i ∈ {1, ..., m}, there exists a set X_e whose size is bounded by

|X_e| ≤ (ln K / ln(1 + e))^(m−1).    (4)
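The bound of Eq. (4) is easy to evaluate numerically; for example, with K = 1000, e = 0.01 and m = 2 objectives it comes to roughly 694 points.

```python
import math

def max_archive_size(K, eps, m):
    # upper bound of Eq. (4) on |X_e| when 1 <= f_i(x) <= K for all i
    return (math.log(K) / math.log(1.0 + eps)) ** (m - 1)
```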

Note that the concept of approximation can also be used if other similar definitions of e-dominance are used, e.g., the following additive approximation:

e_i + f_i(b) ≥ f_i(a)  ∀i ∈ {1, ..., m},    (5)

where the e_i are constants, separately defined for each coordinate. In this case there exist e-approximate Pareto sets whose size can be bounded as follows:

|X_e| ≤ ∏_{j=1}^{m−1} (K − 1)/e_j,    (6)

where 1 ≤ f_i(x) ≤ K and K ≥ e_i for all i ∈ {1, ..., m}. A further refinement of the concept of e-approximate Pareto sets leads to the following definition.

Definition 4 (e-Pareto set). Let X be a set of decision alternatives and e > 0. Then a set X*_e ⊆ X is called an e-Pareto set of X if

1. X*_e is an e-approximate Pareto set of X, i.e., X*_e ∈ P_e(X), and
2. X*_e contains maximal elements of X only, i.e., X*_e ⊆ X*.

The set of all e-Pareto sets of X is denoted as P*_e(X). The image of an e-Pareto set is called an e-Pareto front. The above defined concepts are visualized in Fig. 6.

Since finding the Pareto set of an arbitrary decision space X is usually not practical because of its size, one needs to be less ambitious in general. Therefore, the e-approximate Pareto set is a practical solution concept, as it not only represents all alternatives of X but also consists of a smaller number of elements. Of course, an e-Pareto set is more attractive, as it consists of Pareto-optimal decision alternatives only.

Diversity can be defined in various ways. Here, we associate diversity with approximation quality in the sense that we require a good approximation quality in every region of the Pareto set. Furthermore, the measurement of proximity is performed in the objective space only. The main reason for excluding the decision space from the considerations is that any measurement regarding diversity or proximity relies on the decision

Fig. 6. Graphs visualizing the concepts of e-approximate Pareto front (left) and e-Pareto front (right).


space being a metric space. This is not always the case and therefore hinders the derivation of results which are valid for all multiobjective optimization problems.

According to Definition 2, the e value stands for a relative ''tolerance'' allowed for the objective values. In contrast, using Eq. (5) we would allow a constant additive (absolute) tolerance. The choice of the e value is application specific: a decision maker should choose a type and magnitude that suits the (physical) meaning of the objective values best. The e value further determines the maximal size of the solution set, and therefore of the required memory, according to Eqs. (4) and (6). This is especially important in higher dimensional objective spaces, where the concept of e-dominance can reduce the required number of solutions considerably.

2.4.1. Select operator for e-approximate Pareto set

After the definition of the type of solution set we are aiming at, the next step is to implement this solution concept algorithmically. First, a select operator is presented which leads to the maintenance of an e-approximate Pareto set. The idea is that ''new solutions are only accepted in the archive if they are not e-dominated by any other element of the current archive''. If a solution is accepted, all dominated solutions are removed.

Algorithm 2

Select operator for e-approximate Pareto set

1. Input: A, x
2. if ∃ x′ ∈ A such that x′ ≻_e x then
3.   A′ ← A
4. else
5.   D ← {x′ ∈ A : x ≻ x′}
6.   A′ ← A ∪ {x} \ D
7. end if
8. Output: A′
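Working on objective vectors under maximization, Algorithm 2 can be sketched as follows (helper names are ours):

```python
def dominates(fa, fb):
    # strict Pareto dominance under maximization
    return all(a >= b for a, b in zip(fa, fb)) and any(a > b for a, b in zip(fa, fb))

def eps_dominates(fa, fb, eps):
    # Definition 2: (1 + e) * f_i(a) >= f_i(b) for all i
    return all((1.0 + eps) * a >= b for a, b in zip(fa, fb))

def select_eps_approx(archive, fx, eps):
    # Algorithm 2: reject fx if some archived vector e-dominates it;
    # otherwise insert fx and drop every archived vector it dominates
    if any(eps_dominates(fa, fx, eps) for fa in archive):
        return list(archive)                       # A' = A
    kept = [fa for fa in archive if not dominates(fx, fa)]
    return kept + [fx]                             # A' = A ∪ {x} \ D
```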

Theorem 1. Let F_t = ∪_{j=1}^{t} f(j), with 1 ≤ f_i(j) ≤ K, be the set of all decision alternatives created in Algorithm 1 and given to the select operator as defined in Algorithm 2. Then A(t) is an e-approximate Pareto set of F_t with bounded size, i.e.,

1. A(t) ∈ P_e(F_t),
2. |A(t)| ≤ (ln K / ln(1 + e))^(m−1).

Proof

1. The proof can be found in [17].
2. The objective space is divided into (ln K / ln(1 + e))^m boxes, and from each box at most one point can be in A(t) at the same time. Now consider the (ln K / ln(1 + e))^(m−1) equivalence classes of boxes where (without loss of generality) in each class the boxes have the same coordinates in all but one dimension. There are ln K / ln(1 + e) different boxes in each class, constituting a chain of dominating boxes. Hence, only one point from each of these classes can be a member of A(t) at the same time. □

2.4.2. Algorithm to maintain an e-Pareto set

In the next step we guarantee, in addition to a minimal distance between points, that the points in A(t) are maximal elements of all alternatives generated so far. The following Algorithm 3 has a two-level concept. On the coarse level, the objective space is discretized by a division into boxes (see Algorithm 4), where each


decision alternative uniquely belongs to one box in objective space. Using a generalized dominance relation on these boxes (the box-dominance relation is not equivalent to the e-dominance relation, but is defined as the normal dominance relation on the box index vector; both dominance and box-dominance imply e-dominance), the algorithm always maintains a set of nondominated boxes and thereby guarantees the e-approximation property. On the fine level, at most one element is kept per box. Within a box, each representative vector can only be replaced by a dominating one, thus guaranteeing convergence.

Algorithm 3

Select operator for e-Pareto set

1. Input: A, x
2. D ← {x′ ∈ A : box(x) ≻ box(x′)}
3. if D ≠ ∅ then
4.   A′ ← A ∪ {x} \ D
5. else if ∃ x′ : (box(x′) = box(x) ∧ x ≻ x′) then
6.   A′ ← A ∪ {x} \ {x′}
7. else if ∄ x′ : (box(x′) ≻ box(x)) then
8.   A′ ← A ∪ {x}
9. else
10.  A′ ← A
11. end if
12. Output: A′

Algorithm 4

Function box

1. Input: x
2. for all i ∈ {1, ..., m} do
3.   b_i = ⌊ln f_i(x) / ln(1 + e)⌋
4. end for
5. b = (b_1, b_2, ..., b_m)
6. Output: b (box index vector)
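Algorithms 3 and 4 can be sketched together over objective vectors (maximization, with 1 ≤ f_i as assumed in Section 2.4; helper names are ours):

```python
import math

def box(fx, eps):
    # Algorithm 4: box index vector, b_i = floor(ln f_i / ln(1 + e));
    # assumes f_i >= 1 (normalized, positive objective space)
    return tuple(math.floor(math.log(fi) / math.log(1.0 + eps)) for fi in fx)

def dom(a, b):
    # strict dominance (maximization), used on objective and box vectors
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_eps_pareto(archive, fx, eps):
    # Algorithm 3: coarse level keeps nondominated boxes only; fine level
    # keeps at most one (dominating) representative per box
    bx = box(fx, eps)
    losers = [fa for fa in archive if dom(bx, box(fa, eps))]
    if losers:                                     # new box dominates old ones
        return [fa for fa in archive if fa not in losers] + [fx]
    same = [fa for fa in archive if box(fa, eps) == bx]
    if same:                                       # same box: keep the dominating point
        if dom(fx, same[0]):
            return [fa for fa in archive if fa != same[0]] + [fx]
        return list(archive)
    if not any(dom(box(fa, eps), bx) for fa in archive):
        return list(archive) + [fx]                # new nondominated box: accept
    return list(archive)                           # otherwise reject
```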

3. Constrained multiobjective optimization via genetic algorithm

Multiobjective optimization problems give rise to a set of Pareto-optimal solutions, none of which can be said to be better than the others in all objectives. In any interesting multiobjective optimization problem, there exist a number of such solutions which are of interest to designers and practitioners. Since no one solution is better than any other solution in the Pareto-optimal set, it is also a goal in multiobjective optimization to find as many such Pareto-optimal solutions as possible. Unlike most classical search and optimization methods, GAs work with a population of solutions and thus are likely (and unique) candidates for finding multiple Pareto-optimal solutions simultaneously. There are two tasks that are achieved in a multiobjective GA:

1. Convergence to the Pareto-optimal set, and
2. maintenance of diversity among solutions of the Pareto-optimal set.

GAs with suitable modifications in their operators have worked well on many multiobjective optimization problems with respect to the above two tasks. Most multiobjective GAs work with the concept of domination. Despite all these developments in MOEAs, there seem to be few studies concerning procedures



for handling constraints. Here we present a new optimization system, which is based on the concepts of co-evolutionary and repair algorithms. It is also based on the e-dominance concept. The use of e-dominance also makes the algorithm practical by allowing a decision maker to control the resolution of the Pareto set approximation by choosing an appropriate e value.

3.1. Initialization stage

The algorithm uses two separate populations: the first population P(t) consists of individuals initialized randomly within the search space (the lower and upper bounds), while the second population R(t) consists of reference points which satisfy all constraints (feasible points). In order to ensure convergence to the true Pareto-optimal solutions, we concentrated on how elitism could be introduced into the algorithm. So, we propose an archiving/selection strategy that guarantees at the same time progress towards the Pareto-optimal set and a covering of the whole range of the nondominated solutions. The algorithm maintains an external finite-sized archive A(t) of nondominated solutions which gets iteratively updated in the presence of new solutions based on the concept of e-dominance.

3.2. Repair algorithm

The idea of this technique is to separate any feasible individuals in a population from those that are infeasible by repairing the infeasible individuals.

This approach co-evolves the population of infeasible individuals until they become feasible. The repair process works as follows. Assume there is a search point x ∉ S (where S is the feasible space). In such a case the algorithm selects one of the reference points (a better reference point has better chances of being selected), say r ∈ S, and creates random points z from the segment defined between x and r; the segment may be extended equally on both sides [22,23], as determined by a user-specified parameter l ∈ [0,1]. Thus, a new feasible individual is expressed as:

z_1 = c · x + (1 − c) · r,  z_2 = (1 − c) · x + c · r,

where c = (1 + 2l)d − l and d ∈ [0,1] is a randomly generated number.
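A single repair step can be sketched as follows; the function draws one random blend of the infeasible point x and the feasible reference r (in the full method, candidates are generated until a feasible one is found):

```python
import random

def repair_step(x, r, l=0.3, rng=random):
    # one repair step (Section 3.2): blend infeasible x with a feasible
    # reference r; l in [0, 1] controls how far the segment may be extended
    d = rng.random()                    # d ~ U[0, 1]
    c = (1.0 + 2.0 * l) * d - l         # c may fall outside [0, 1] when l > 0
    z1 = tuple(c * xi + (1.0 - c) * ri for xi, ri in zip(x, r))
    z2 = tuple((1.0 - c) * xi + c * ri for xi, ri in zip(x, r))
    return z1, z2
```

With l = 0 both z1 and z2 are convex combinations of x and r, so for a convex feasible region they are guaranteed feasible.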

3.3. IT-CEMOP algorithm

IT-CEMOP uses two separate populations: the first population P(0) (where t is the iteration counter, starting at t = 0) consists of individuals initialized randomly within the search space (the lower and upper bounds), while the second population R(0) consists of reference points which satisfy all constraints (feasible points). The algorithm also initially stores the Pareto-optimal solutions externally in a finite-sized archive of nondominated solutions A(0). We use a clustering algorithm to create the next population P(t+1): if |P(t)| > |A(t)|, then the new population P(t+1) consists of all individuals from A(t), and the population P(t) is considered for the clustering procedure to complete P(t+1); if |P(t)| < |A(t)|, then |P| solutions are picked at random from A(t) and directly copied to the new population P(t+1).
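The population-construction rule just described can be sketched as follows; the paper completes P(t+1) via the clustering procedure of Section 2.3, which we replace here with plain random filling for brevity:

```python
import random

def next_population(P, A, rng=random):
    # elitist construction of P(t+1) from current population P and archive A
    if len(P) > len(A):
        # keep the whole archive, fill the remainder from P
        # (the paper uses clustering here; random filling is a simplification)
        return list(A) + rng.sample(list(P), len(P) - len(A))
    # archive is large enough: draw |P| members from it at random
    return rng.sample(list(A), len(P))
```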

Since our goal is to find new nondominated solutions, one simple way to combine multiple objective functions into a scalar fitness function [21] is the following weighted sum approach:

f(x) = w_1 f_1(x) + ··· + w_i f_i(x) + ··· + w_m f_m(x) = Σ_{j=1}^{m} w_j f_j(x),

where x is a string (i.e., an individual), f(x) is the combined fitness function, and f_i(x) is the ith objective function. When a pair of strings is selected for a crossover operation, we assign a random number to each weight as follows:

w_i = random_i(·) / Σ_{j=1}^{m} random_j(·),  i = 1, 2, ..., m.

Calculate the fitness value of each string using the random weights w_i. Select a pair of strings from the current population according to the following selection probability b(x) of a string x in the population P(t):

b(x) = (f(x) − f_min(P(t))) / Σ_{x∈P(t)} (f(x) − f_min(P(t))),  where f_min(P(t)) = min{ f(x) | x ∈ P(t) }.

This step is repeated to select |P|/2 pairs of strings from the current population. For each selected pair, apply the crossover operation to generate two new strings; to each string generated by crossover, apply a mutation operator with a prespecified mutation probability. The system also includes the survival of some of the good individuals without crossover or selection. This method seems to be better than the others if applied constantly.

Algorithm 5 shows the IT-CEMOP algorithm. The purpose of the function generate is to generate a new population in each iteration t, possibly using the contents of the old population P(t−1) and the old archive set A(t−1), in association with variation (recombination and mutation). The function update gets the new population P(t) and the old archive set A(t−1) and determines the updated one, namely A(t), as indicated in Algorithm 3.

Algorithm 5

IT-CEMOP algorithm

1. t ← 0
2. Create P(0), R(0)
3. A(0) = nondominated(P(0))
4. while terminate(A(t), t) = false do
5.   t ← t + 1
6.   P(t) ← generate(A(t−1), P(t−1)) {generate new search points}
7.   A(t) ← update(A(t−1), P(t)) {update archive (Algorithm 3)}
8. end while
9. Output: A(t)

The algorithm maintains a finite-sized archive of nondominated solutions which gets iteratively updated in the presence of new solutions based on the concept of e-dominance, such that new solutions are only accepted in the archive if they are not e-dominated by any other element in the current archive (Algorithm 3). The use of e-dominance also makes the algorithm practical by allowing a decision maker to control the resolution of the Pareto set approximation by choosing an appropriate e value.

Table 1 lists the parameter settings used in the algorithm for all runs. A careful observation will reveal the following properties of the proposed optimization system.

1. It emphasizes nondominated solutions.
2. It maintains the diversity in the archive by allowing only one solution to be presented in each pre-assigned hyper-box on the Pareto-optimal front.
3. It is an elitist approach.
4. It is based on the concepts of co-evolution and repair algorithm for handling nonlinear constraints.
5. It can solve constrained optimization problems.

Table 1
GA parameters

Population size (N): 300
No. of generations: 500
Crossover probability: 0.85
Mutation probability: 0.01
Selection operator: Roulette wheel
Crossover operator: BLX-α
Mutation operator: Polynomial mutation
Relative tolerance e: 10e−6
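The random-weight scalarization and roulette-wheel selection probability of Section 3.3 can be sketched as follows (the uniform fallback for an all-equal population is our own guard, not from the paper):

```python
import random

def random_weights(m, rng=random):
    # w_i = random_i / sum_j random_j, re-drawn for each selected pair
    rs = [rng.random() for _ in range(m)]
    s = sum(rs)
    return [r / s for r in rs]

def scalar_fitness(fx, w):
    # weighted-sum scalarization: f(x) = sum_j w_j * f_j(x)
    return sum(wj * fj for wj, fj in zip(w, fx))

def selection_probs(fitnesses):
    # roulette-wheel probabilities b(x) = (f(x) - f_min) / sum(f(x) - f_min)
    fmin = min(fitnesses)
    shifted = [f - fmin for f in fitnesses]
    total = sum(shifted)
    if total == 0.0:                 # all fitnesses equal: fall back to uniform
        return [1.0 / len(fitnesses)] * len(fitnesses)
    return [s / total for s in shifted]
```

Note that, exactly as in the formula, the worst string in the population receives selection probability zero.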


4. Experimental results

In order to validate the proposed algorithm and quantitatively compare its performance with otheradvanced MOEAs, graphical presentation, statistical analysis of the experimental results and associated obser-vations are presented in this section. In our comparison study, four prominent benchmark test functions withdistinct Pareto-optimal front are selected from [1,24,26,27], which brought forward various benchmark func-tions for testing MOEAs. Table 2 shows the variable bounds, objective functions and constraints for all theseproblems. Also, the problems chosen from the engineering domains are Two-Bar Truss Design by Deb [4] andSpeed Reducer design used by Golinski [9].

The problems chosen from the benchmark domain are BNH, used by Binh and Korn [1]; SRN, used by Srinivas and Deb [26]; TNK, suggested by Tanaka [27]; and OSY, used by Osyczka and Kundu [24]. All of these are constrained multiobjective problems.

Table 2
Test problems used in this study

BNH (Pareto front: continuous, convex)
  Variable bounds: x1 ∈ [0, 5], x2 ∈ [0, 3]
  f1(x) = 4x1^2 + 4x2^2
  f2(x) = (x1 - 5)^2 + (x2 - 5)^2
  g1(x) = (x1 - 5)^2 + x2^2 ≤ 25
  g2(x) = (x1 - 8)^2 + (x2 + 3)^2 ≥ 7.7

SRN (Pareto front: continuous, convex)
  Variable bounds: x1 ∈ [-20, 20], x2 ∈ [-20, 20]
  f1(x) = 2 + (x1 - 2)^2 + (x2 - 2)^2
  f2(x) = 9x1 - (x2 - 1)^2
  g1(x) = x1^2 + x2^2 ≤ 225
  g2(x) = x1 - 3x2 + 10 ≤ 0

TNK (Pareto front: discrete)
  Variable bounds: x1 ∈ [0, π], x2 ∈ [0, π]
  f1(x) = x1
  f2(x) = x2
  g1(x) = x1^2 + x2^2 - 1 - 0.1 cos(16 arctan(x1/x2)) ≥ 0
  g2(x) = (x1 - 0.5)^2 + (x2 - 0.5)^2 ≤ 0.5

OSY (Pareto front: continuous, nonconvex)
  Variable bounds: x1, x2, x6 ∈ [0, 10], x3, x5 ∈ [1, 5], x4 ∈ [0, 6]
  f1(x) = -[25(x1 - 2)^2 + (x2 - 2)^2 + (x3 - 1)^2 + (x4 - 4)^2 + (x5 - 1)^2]
  f2(x) = x1^2 + x2^2 + x3^2 + x4^2 + x5^2 + x6^2
  g1(x) = x1 + x2 - 2 ≥ 0
  g2(x) = 6 - x1 - x2 ≥ 0
  g3(x) = 2 - x2 + x1 ≥ 0
  g4(x) = 2 - x1 + 3x2 ≥ 0
  g5(x) = 4 - (x3 - 3)^2 - x4 ≥ 0
  g6(x) = (x5 - 3)^2 + x6 - 4 ≥ 0
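As a concrete illustration of how the benchmarks in Table 2 are evaluated inside an MOEA, the BNH problem can be coded directly, rewriting both constraints in a uniform g(x) ≤ 0 convention (the helper names are ours, not the paper's):

```python
def bnh(x):
    # BNH test problem: two objectives, two constraints (g <= 0 feasible).
    x1, x2 = x
    f1 = 4.0 * x1**2 + 4.0 * x2**2
    f2 = (x1 - 5.0)**2 + (x2 - 5.0)**2
    g1 = (x1 - 5.0)**2 + x2**2 - 25.0          # original form: ... <= 25
    g2 = 7.7 - (x1 - 8.0)**2 - (x2 + 3.0)**2   # original form: ... >= 7.7
    return (f1, f2), (g1, g2)

def is_feasible(g, tol=1e-9):
    # A point is feasible when every rewritten constraint is <= 0.
    return all(v <= tol for v in g)
```

Writing every constraint as g(x) ≤ 0 lets a single repair or penalty routine treat all four benchmarks uniformly.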

Fig. 7. Objective spaces for the SRN (right) and BNH (left).


The BNH and SRN problems (the feasible objective spaces along with the Pareto-optimal solutions are shown in Fig. 7) are fairly simple in that the constraints may not introduce additional difficulty in finding the Pareto-optimal solutions. It was observed that all MOEA methods performed equally well and gave a dense sampling of solutions along the true Pareto-optimal curve.

The TNK problem (Fig. 8) and the OSY problem (Fig. 9) are relatively difficult. The constraints in the TNK problem make the Pareto-optimal set discontinuous. The constraints in the OSY problem divide the Pareto-optimal set into five regions, which can demand that a GA maintain its population at different intersections of the constraint boundaries. We compare our method with a reliable and efficient multiobjective genetic algorithm, NSGA-II. The results show that our method handles constrained MOPs more efficiently than NSGA-II. It is worth mentioning that the number of Pareto-optimal solutions obtained by

Fig. 8. Result for the TNK problem.

Fig. 9. Result for the OSY problem.


NSGA-II is limited by its population size, whereas our optimization system keeps track of all the feasible solutions found during the optimization and therefore does not have any restriction on the number of Pareto-optimal solutions found.

4.1. Welded beam design

A welded beam design problem is used by Deb [2], where a beam needs to be welded onto another beam and must carry a certain load F (Fig. 10).

It is desired to find four design parameters (thickness of the beam b, width of the beam t, length of weld l, and weld thickness h) for which the cost of the beam and the deflection at the open end are minimized. The overhang portion of the beam has a length of 14 in., and a force F = 6000 lb is applied at the end of the beam. It is intuitive that an optimal design for cost will make all four design variables take small values. When the beam dimensions are small, the deflection at the end of the beam is likely to be large. Again, a little thought reveals that a design for minimum deflection at the end (or maximum rigidity of the above beam) will make all four design dimensions take large values. Thus, the design solutions for minimum cost and maximum rigidity (or minimum end deflection) conflict with each other. This kind of conflict between objective functions leads to Pareto-optimal solutions. In the following, we present the mathematical formulation of the two-objective optimization problem of minimizing cost and end deflection:

Min f1(x) = 1.10471 h^2 l + 0.04811 t b (14 + l),
Min f2(x) = 2.1952 / (t^3 b)

Fig. 10. The welded beam design problem.

Fig. 11. A speed reducer.

Fig. 12. Result for the welded beam design.



s.t.
g1(x) = 13600 - τ(x) ≥ 0,    g2(x) = 30000 - σ(x) ≥ 0,
g3(x) = b - h ≥ 0,           g4(x) = Pc(x) - 6000 ≥ 0,
h, b ∈ [0.125, 5],  l, t ∈ [0.1, 10],

where

τ(x) = sqrt( (τ')^2 + (τ'')^2 + l τ' τ'' / sqrt(0.25(l^2 + (h + t)^2)) ),
τ' = 6000 / (sqrt(2) h l),
τ'' = 6000 (14 + 0.5 l) sqrt(0.25(l^2 + (h + t)^2)) / [ 2 sqrt(2) h l (l^2/12 + 0.25(h + t)^2) ],
σ(x) = 504000 / (t^2 b),
Pc(x) = 64746.022 (1 - 0.0282346 t) t b^3.
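A sketch of the welded beam evaluation with x = (h, l, t, b). The (h + t) terms follow Deb's standard formulation of this problem (the extracted text is ambiguous on this point), and feasibility here means every constraint value is nonnegative:

```python
import math

def welded_beam(x):
    # Evaluate the welded beam problem for x = (h, l, t, b).
    # Returns (f1, f2) and the constraint tuple (feasible when every g >= 0).
    h, l, t, b = x
    f1 = 1.10471 * h**2 * l + 0.04811 * t * b * (14.0 + l)   # fabrication cost
    f2 = 2.1952 / (t**3 * b)                                  # end deflection
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)                 # primary shear
    R = math.sqrt(0.25 * (l**2 + (h + t)**2))
    tau_pp = 6000.0 * (14.0 + 0.5 * l) * R / (
        2.0 * math.sqrt(2.0) * h * l * (l**2 / 12.0 + 0.25 * (h + t)**2))
    tau = math.sqrt(tau_p**2 + tau_pp**2 + l * tau_p * tau_pp / R)
    sigma = 504000.0 / (t**2 * b)                             # bending stress
    pc = 64746.022 * (1.0 - 0.0282346 * t) * t * b**3         # buckling load
    g = (13600.0 - tau, 30000.0 - sigma, b - h, pc - 6000.0)
    return (f1, f2), g
```

Evaluating a deliberately oversized design such as (h, l, t, b) = (1, 5, 10, 5) gives a very small deflection at high cost, illustrating the conflict between the two objectives.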

In the welded beam design problem, the nonlinear constraints can cause difficulties in finding the Pareto solutions. As shown in Fig. 12, our algorithm outperformed NSGA-II in both distribution and spread.

4.2. Speed reducer design

The well-known speed reducer test problem represents the design of a simple gear box, such as might be used in a light airplane between the engine and propeller to allow each to rotate at its most efficient speed (Fig. 11). The objective is to minimize the speed reducer weight while satisfying a number of constraints imposed by gear and shaft design practices. This problem was modeled by Golinski [9,10] as a single-level optimization, and since then many others have used it to test a variety of methods. Here, the problem has been converted into a two-objective optimization problem. The mathematical formulation of the problem is now described. There are seven design variables, (x1, x2, x3, x4, x5, x6, x7), described in Table 3.

The first objective of the speed reducer problem is to find the minimum gear box volume f1(·) (and, hence, its minimum weight). The second objective, f2(·), is to minimize the stress in one of the two gear shafts. The design is

Table 3
Design variables

x1  Width of the gear face (cm)
x2  Teeth module (cm)
x3  Number of pinion teeth (integer)
x4  Shaft 1 length between bearings (cm)
x5  Shaft 2 length between bearings (cm)
x6  Diameter of shaft 1 (cm)
x7  Diameter of shaft 2 (cm)

Table 4
The problem constraints

g1        Upper bound on the bending stress of the gear tooth
g2        Upper bound on the contact stress of the gear tooth
g3, g4    Upper bounds on the transverse deflection of shafts 1 and 2
g5-g7     Dimensional restrictions based on space and experience
g8, g9    Design requirements on the shafts based on experience
g10, g11  Constraints on stress in the gear shafts


subject to constraints imposed by gear and shaft design practices. An upper and lower limit is imposed on each of the seven design variables. There are 11 other inequality constraints, as depicted in Table 4.

The optimization formulation is:

Min f1 = f_weight = 0.7854 x1 x2^2 (10 x3^2 / 3 + 14.933 x3 - 43.0934)
                    - 1.508 x1 (x6^2 + x7^2) + 7.477 (x6^3 + x7^3)
                    + 0.7854 (x4 x6^2 + x5 x7^2),

Min f2 = f_stress = sqrt( (745 x4 / (x2 x3))^2 + 1.69 × 10^7 ) / (0.1 x6^3)

s.t.

g1: 1/(x1 x2^2 x3) - 1/27 ≤ 0,          g2: 1/(x1 x2^2 x3^2) - 1/397.5 ≤ 0,
g3: x4^3 / (x2 x3 x6^4) - 1/1.93 ≤ 0,   g4: x5^3 / (x2 x3 x7^4) - 1/1.93 ≤ 0,
g5: x2 x3 - 40 ≤ 0,                     g6: x1/x2 - 12 ≤ 0,
g7: 5 - x1/x2 ≤ 0,                      g8: 1.9 - x4 + 1.5 x6 ≤ 0,
g9: 1.9 - x5 + 1.1 x7 ≤ 0,              g10: f2(x) ≤ 1100,
g11: sqrt( (745 x5 / (x2 x3))^2 + 1.575 × 10^8 ) / (0.1 x7^3) ≤ 850.

The lower and upper limits on the seven variables are:

2.6 ≤ x1 ≤ 3.6,  0.7 ≤ x2 ≤ 0.8,  17 ≤ x3 ≤ 18,  7.3 ≤ x4 ≤ 8.3,
7.3 ≤ x5 ≤ 8.3,  2.9 ≤ x6 ≤ 3.9,  5.0 ≤ x7 ≤ 5.5.                    (7)
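The full speed reducer model can likewise be evaluated directly. In this sketch the eleven constraints are all rewritten so that g ≤ 0 means feasible, and the trial point used below is an arbitrary feasible design chosen for illustration, not a reported optimum:

```python
import math

def speed_reducer(x):
    # Evaluate the speed reducer design for x = (x1, ..., x7).
    # Returns (f1, f2) and the 11 constraint values (g <= 0 is feasible).
    x1, x2, x3, x4, x5, x6, x7 = x
    f1 = (0.7854 * x1 * x2**2 * (10.0 * x3**2 / 3.0 + 14.933 * x3 - 43.0934)
          - 1.508 * x1 * (x6**2 + x7**2)
          + 7.477 * (x6**3 + x7**3)
          + 0.7854 * (x4 * x6**2 + x5 * x7**2))          # gear box weight
    f2 = math.sqrt((745.0 * x4 / (x2 * x3))**2 + 1.69e7) / (0.1 * x6**3)
    g = (
        1.0 / (x1 * x2**2 * x3) - 1.0 / 27.0,            # bending stress
        1.0 / (x1 * x2**2 * x3**2) - 1.0 / 397.5,        # contact stress
        x4**3 / (x2 * x3 * x6**4) - 1.0 / 1.93,          # deflection, shaft 1
        x5**3 / (x2 * x3 * x7**4) - 1.0 / 1.93,          # deflection, shaft 2
        x2 * x3 - 40.0,                                  # dimensional limits
        x1 / x2 - 12.0,
        5.0 - x1 / x2,
        1.9 - x4 + 1.5 * x6,                             # shaft requirements
        1.9 - x5 + 1.1 * x7,
        f2 - 1100.0,                                     # stress, shaft 1
        math.sqrt((745.0 * x5 / (x2 * x3))**2 + 1.575e8)
        / (0.1 * x7**3) - 850.0,                         # stress, shaft 2
    )
    return (f1, f2), g
```

A routine like this is all the co-evolutionary loop needs from the model: objective values for ranking and the signed constraint values for the repair step.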

As shown in Fig. 13, our algorithm performed well in both distribution and spread. Also, our method keeps track of all the feasible solutions found by iteratively updating the archive contents during the optimization.

5. Conclusions

Finding a good distribution of solutions near the Pareto-optimal front in a small computational time is a dream of multiobjective EA researchers and practitioners. Although most past studies concentrate on solving unconstrained optimization problems, there exist a few studies where MOEAs have been extended to solve constrained optimization problems. Most of them were based on penalty functions for handling nonlinear constraints in genetic algorithms. However, the performance of these methods is highly problem-dependent, and many of them require additional tuning of several parameters.

In this paper, we present a new optimization system, which is based on the concepts of co-evolution and a repair algorithm for handling nonlinear constraints. It is also based on the e-dominance concept, maintaining a finite-sized archive of nondominated solutions which is iteratively updated, and it guarantees that the archive remains bounded according to the chosen e value. The concept of e-dominance then allows pre-specified precisions to exist among the preferred Pareto-optimal vectors, which guarantees convergence and diversity. It is worth mentioning that the number of Pareto-optimal solutions obtained by NSGA-II is limited by its population size. Our method keeps track of all the feasible solutions found during the optimization and therefore does not have any restriction on the number of Pareto-optimal solutions found.

Fig. 13. Result for the speed reducer design.

The solutions provided by the proposed algorithm for four prominent test problems and two engineering applications are promising when compared with existing well-known algorithms. Also, our results suggest that our optimization system is well suited to solving real-world application problems.

For future work, we intend to test the algorithm on more problems and to further improve our method. The parameters chosen in this paper were determined experimentally; it would be interesting to study the effect of these parameters on the algorithm. We would also like to apply our methods to more complex real-world applications.

References

[1] T. Binh, U. Korn, MOBES: A multiobjective evolution strategy for constrained optimization problems, in: Proceedings of the 3rd International Conference on Genetic Algorithms MENDEL, Brno, Czech Republic, 1997, pp. 176–182.
[2] K. Deb, Optimal design of a welded beam via genetic algorithms, AIAA Journal 29 (11) (1991) 2013–2015.
[3] K. Deb, S. Agrawal, A. Pratab, T. Meyarivan, A fast elitist non-dominated sorting genetic algorithm for multiobjective optimization: NSGA-II, KanGAL Report 200001, Indian Institute of Technology, Kanpur, India, 2000.
[4] K. Deb, Multi-objective Optimization Using Evolutionary Algorithms, Wiley, NY, USA, 2001.
[5] K. Deb, M. Mohan, S. Mishra, A fast multiobjective evolutionary algorithm for finding well-spread Pareto-optimal solutions, in: C.M. Fonseca et al. (Eds.), Proceedings of the Second International Conference on Evolutionary Multi-Criterion Optimization (EMO 2003), Faro, Portugal, Lecture Notes in Computer Science, vol. 2632, Springer, 2003, pp. 222–236.
[6] D. Chafekar, J. Xuan, K. Rasheed, Constrained multi-objective optimization using steady state genetic algorithms, in: E. Cantu-Paz et al. (Eds.), Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2003), Part I, Lecture Notes in Computer Science, vol. 2723, Springer, 2003, pp. 813–824.
[7] T. Erlebach, H. Kellerer, U. Pferschy, Approximating multiobjective knapsack problems, in: Proceedings of the Seventh International Workshop on Algorithms and Data Structures (WADS 2001), Lecture Notes in Computer Science, vol. 2125, 2001, pp. 210–221.
[8] C.M. Fonseca, P.J. Fleming, An overview of evolutionary algorithms in multiobjective optimization, Evolutionary Computation 3 (1) (1995) 1–16.
[9] J. Golinski, Optimal synthesis problems solved by means of nonlinear programming and random methods, Journal of Mechanisms 5 (1970) 287–309.
[10] J. Golinski, An adaptive optimization system applied to machine synthesis, Mechanism and Machine Theory 8 (1973) 419–436.
[11] S. Helbig, D. Pateva, On several concepts for e-efficiency, OR Spektrum 16 (3) (1994) 179–186.
[12] J. Horn, N. Nafpliotis, D.E. Goldberg, A niched Pareto genetic algorithm for multiobjective optimization, in: Proceedings of the 1st IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, IEEE Press, Piscataway, NJ, 1994, pp. 82–87.
[13] F. Jimenez, J.L. Verdegay, Constrained multiobjective optimization by evolutionary algorithms, in: Proceedings of the International ICSC Symposium on Engineering of Intelligent Systems (EIS'98), 1998, pp. 266–271.
[14] J.D. Knowles, D.W. Corne, The Pareto archived evolution strategy: a new baseline algorithm for multiobjective optimization, in: Proceedings of the 1999 Congress on Evolutionary Computation, IEEE Press, Piscataway, NJ, 1999, pp. 98–105.
[15] J.D. Knowles, D.W. Corne, M-PAES: a memetic algorithm for multiobjective optimization, in: Proceedings of the 2000 Congress on Evolutionary Computation, IEEE Press, Piscataway, NJ, 2000, pp. 325–332.
[16] A. Kurpati, S. Azarm, J. Wu, Constraint handling improvements for multiobjective genetic algorithms, Structural and Multidisciplinary Optimization 23 (2002) 204–213.
[17] M. Laumanns, L. Thiele, K. Deb, E. Zitzler, Archiving with guaranteed convergence and diversity in multi-objective optimization, in: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2002), Morgan Kaufmann Publishers, New York, NY, USA, 2002, pp. 439–447.
[18] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, third ed., Springer-Verlag, 1996.
[19] Z. Michalewicz, M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation 4 (1) (1996) 1–32.
[20] K. Miettinen, Nonlinear Multiobjective Optimization, Kluwer Academic Publishers, Dordrecht, 2002.
[21] T. Murata, H. Ishibuchi, H. Tanaka, Multi-objective genetic algorithm and its application to flowshop scheduling, Computers and Industrial Engineering 30 (4) (1996) 957–968.
[22] M.S. Osman, M.A. Abo-Sinna, A.A. Mousa, A solution to the optimal power flow using genetic algorithm, Applied Mathematics and Computation 155 (2004) 391–405.
[23] M.S. Osman, M.A. Abo-Sinna, A.A. Mousa, An effective genetic algorithm approach to multiobjective resource allocation problems (MORAPs), Applied Mathematics and Computation 163 (2005) 755–768.
[24] A. Osyczka, S. Kundu, A new method to solve generalized multicriteria optimization problems, Structural Optimization 10 (1995) 98–105.
[25] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, in: J.J. Grefenstette (Ed.), Proceedings of the 1st International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum, Mahwah, NJ, 1985, pp. 93–100.
[26] N. Srinivas, K. Deb, Multiobjective optimization using nondominated sorting in genetic algorithms, Evolutionary Computation 2 (3) (1994) 221–248.
[27] M. Tanaka, GA-based decision support system for multi-criteria optimization, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 2, 1995, pp. 1556–1561.
[28] E. Zitzler, L. Thiele, Multiobjective optimization using evolutionary algorithms: a comparative case study, in: A.E. Eiben, T. Back, M. Schoenauer, H.-P. Schwefel (Eds.), Fifth International Conference on Parallel Problem Solving from Nature (PPSN-V), Berlin, Germany, 1998, pp. 292–301.
[29] E. Zitzler, M. Laumanns, L. Thiele, SPEA2: improving the strength Pareto evolutionary algorithm for multiobjective optimization, in: Proceedings of Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems (EUROGEN 2001), Athens, Greece, 2001.