530 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 14, NO. 4, AUGUST 2010

Incorporating the Notion of Relative Importance of Objectives in Evolutionary Multiobjective Optimization

Lily Rachmawati, Member, IEEE, and Dipti Srinivasan, Senior Member, IEEE

Abstract—This paper describes the use of decision maker preferences, expressed as the relative importance of objectives, in evolutionary multiobjective optimization. A mathematical model of the relative importance of objectives and an elicitation algorithm are proposed, and three methods of incorporating explicated preference information are described and applied to standard test problems in an empirical study. The axiomatic model proposed here formalizes the notion of relative importance of objectives as a partial order that supports strict preference, equality of importance, and incomparability between objective pairs. Unlike most approaches, the proposed model does not encode relative importance as a set of real-valued parameters. Instead, the approach provides a functional correspondence between a coherent overall preference and a subset of the Pareto-optimal front. An elicitation algorithm is also provided to assist a human decision maker in constructing a coherent overall preference. Besides elicitation of a priori preference, an interactive facility is also furnished to enable modification of the overall preference while the search progresses. Three techniques for integrating explicated preference information into the well-known Non-dominated Sorting Genetic Algorithm (NSGA)-II are also described and validated in a set of empirical investigations. The approach allows a focus on a subset of the Pareto-front. Validations on test problems demonstrate that the preference-based algorithm gained better convergence as the dimensionality of the problems increased.

Index Terms—Genetic algorithm, multiobjective optimization, preference, relative importance of objectives.

I. Introduction

MULTIOBJECTIVE optimization problems (MOP) involve multiple conflicting, incomparable, and noncommensurable objective functions. The conflict, incomparability, and noncommensurability of the objectives imply that an MOP corresponds to a nonsingular set of Pareto-optimal solutions characterized by trade-offs in the performance in the various objectives [1]. Improvement in one objective is gained only with degradation in another, such that in the objective space of the MOP at hand the corresponding Pareto-optimal set forms a nondominated front, the Pareto-front.

Manuscript received April 3, 2009; revised August 7, 2009 and October 6, 2009. Date of publication April 19, 2010; date of current version July 30, 2010. This work was supported by the National Research Foundation under Grant R-263-000-522-272.

The authors are with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117576 (e-mail: [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TEVC.2009.2036162

A well-distributed approximation of the entire trade-off surface constitutes the candidates from which the user selects a final solution according to his/her preference [2]. The availability of a number of candidates is desirable in MOPs, as human decision making behavior is characterized by anomalies and uncertainties [3] that render an analytical representation and manipulation of a preference-based decision making criterion difficult.

However, in higher dimensional problems with many objectives, a meaningful approximation of the entire trade-off front requires an increasingly intractable number of solutions to be accommodated in the archive. The Pareto-ranking-based fitness function, the major building block of most contemporary multiobjective evolutionary algorithms (MOEA), also loses effectiveness with increasing dimensionality, as more and more solutions are nondominated with respect to each other [4]. Effectively, the ability of the population to progress toward the optima deteriorates quickly with an increasing number of objectives. An a priori incorporation of preference information facilitates a more effective optimization in high-dimensional objective spaces by focusing on a subset of the Pareto-optimal front. The greater resolution proffered in the approximation of the region of interest benefits decision makers, while the increase in the specificity of the search criterion from including preference in the fitness also promises to improve the efficacy of MOEAs in finding the optimal front.

Of the numerous ways multiobjective decision making preference can be formulated, those based on the notions of a goal vector and the relative importance of objectives are the most popular. A considerable body of work incorporating goal vectors has been reported in the evolutionary multiobjective optimization (EMOO) literature (e.g., [5]–[8]). Besides goal-based algorithms, work focusing on knees of the Pareto-front can also be found in the literature (e.g., [9]–[12]). Approaches implementing general ad hoc preference models that allow decision makers to specify preferences by defining mapping functions for each objective (the desirability function in [13]) or a population-based metric (the indicator function in [14]) have also been proposed. The remainder of this paper deals exclusively with the less straightforward formulation, i.e., preference in terms of the relative importance of objectives.

The relative importance of objectives is commonly represented by a set of positive real-valued importance parameters, e.g., in the analytic hierarchy process (AHP) [15], multiattribute utility theory (MAUT) [16], and the outranking approach [17]–[22]. The magnitude of the importance parameters indicates the importance of the associated objectives, and the overall compatibility of a solution with the preference is given by some form of aggregation. This paper presents a new model formalizing the notion of relative importance for use in EMOO. The axiomatic framework proposed represents importance ranking as a partial order among objectives. The partial order supports three types of relations between objective pairs: incomparability, equal importance, and strict preference.

1089-778X/$26.00 © 2009 IEEE

A coherent overall preference is designed to correspond to a specific subset of the Pareto-optimal front. The functional mapping from the preference model to the objective space ensures that all Pareto-optimal solutions are accessible to the decision maker, in the sense that for a given Pareto-optimal solution X∗ there exists a partial ranking in terms of objective importance such that the solution X∗ is a member of the desired subset of the Pareto-front. This feature is absent, for nonconvex Pareto-fronts, from existing major frameworks like MAUT and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE)-II, a class of outranking approaches. The support for incomparability between objectives is another unique feature of the proposed model.

To facilitate the explication of a coherent overall preference in the representation framework proposed, an elicitation algorithm is also given in this paper. The elicitation algorithm allows human decision makers to directly construct a partial importance ranking, or to specify partial preference by asserting totally ordered subsets of the set of objectives. In the latter case, the decision maker needs only to specify any conceived precedence and/or equal importance between objectives. Incomparability constitutes the basic default relation and is applied to any objective pair for which no preference relation has been specified by the decision maker. This is motivated by the fact that incomparability between any two objectives denotes the relevance of the comprehensive trade-off between the two objectives. That is, if objectives fm and fn are incomparable, then the entire trade-off from the individual maxima to the minima of fm and fn within the Pareto-front is important to the decision maker.

A facility for interactive modification of preference is also outlined. At specific intervals during the search, the decision maker is given a summary of the attainments in each objective, and may modify the overall preference in three different ways, i.e., by specifying: 1) a completely ordered chain of L objectives, where L is between 2 and M; 2) desired improvement or degradation of an objective fm; or 3) desired improvement of an objective fm at the expense of another objective fn.

In this paper, explicated preference information is integrated into the Non-dominated Sorting Genetic Algorithm (NSGA)-II in three different ways: as a bias in the crowding distance estimate, as a rank penalty, and as constraints. The preference-based NSGA-II is applied to scalable problems with various difficulties. The comparative performance of the integration methods and preference profiles is studied empirically.

The remainder of the paper is organized as follows. In Section II, a brief description of the multiobjective optimization problem is given. In Section III, a brief discussion of representation frameworks modeling the relative importance of objectives and associated elicitation approaches is presented. The adoption of these models into MOEAs is also discussed. In Section IV, the model of preference in terms of the relative importance of objectives is introduced. In Section V, the a priori elicitation algorithm and the interactive preference modification are described. Section VI furnishes a detailed description of the integration techniques. Section VII presents the empirical investigation and a discussion of the results obtained, and Section VIII concludes the paper.

II. Multiobjective Optimization

In Section II-A, key concepts in EMOO are revisited, while in Section II-B pertinent issues in the use of preference in MOEAs are discussed.

A. Multiobjective Optimization Problem

The MOP can be stated as follows [1]:

Find the vector of decision variables X∗ = [x1, x2, ..., xN] that satisfies:

p inequality constraints: gi(X∗) ≥ 0, i = 1, 2, ..., p

q equality constraints: hi(X∗) = 0, i = 1, 2, ..., q

and minimizes the M conflicting objective functions: F = [f1(X∗), f2(X∗), ..., fM(X∗)]

where fm : R^N → R.
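As a concrete toy instance of this statement (our illustration, not an example from the paper), consider a single decision variable and two conflicting objectives:

```python
def F(x):
    """Toy bi-objective MOP: minimize f1(x) = x^2 and f2(x) = (x - 2)^2.
    f1 is minimized at x = 0 and f2 at x = 2, so the objectives conflict,
    and every x in [0, 2] is Pareto-optimal."""
    return (x ** 2, (x - 2) ** 2)
```

Sweeping x over [0, 2] traces the Pareto-front of this instance; for example, F(1) = (1, 1) is a compromise solution between the two individual optima F(0) = (0, 4) and F(2) = (4, 0).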

The conflict among objectives implies that the objective functions cannot be simultaneously optimized. In the absence of preference information, incomparability applies between the noncommensurable, conflicting objectives. The conflict, incomparability, and noncommensurability among objectives are captured in the irreflexive, transitive, and asymmetric binary relation of Pareto-dominance. For a minimization of the set of objectives F(X) = [f1(X), ..., fM(X)], the definition of Pareto-dominance is given as follows for two candidate solutions Xi and Xj [1]:

Xi ≺ Xj ⇔ ∀m ∈ [1, M], fm(Xi) ≤ fm(Xj) ∧ ∃m ∈ [1, M], fm(Xi) < fm(Xj). (1)

That is, a solution Xi Pareto-dominates a solution Xj if and only if it performs as well as Xj in all objectives and better than Xj in at least one objective. Pareto-dominance defines a partial order on the objective space. A pair of solutions for which no dominance relation applies constitutes a nondominated set. A solution X is optimal in the Pareto sense if no solution that dominates X can be found in the feasible region. This implies that improvement to X in terms of any objective necessarily entails degradation in at least one other objective. The Pareto-optimum of any particular MOP consists of a set of nondominated solutions, which corresponds to the Pareto-optimal front in the objective space.
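Definition (1) translates directly into a small predicate on objective vectors (a minimal sketch for minimization; the function name is ours):

```python
def dominates(fi, fj):
    """Return True if objective vector fi Pareto-dominates fj under minimization:
    fi is no worse than fj in every objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fi, fj)) and \
           any(a < b for a, b in zip(fi, fj))
```

Note that two identical vectors do not dominate each other, and neither does a pair that trades off, e.g., (1, 3) versus (3, 1); both situations yield nondominated pairs.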

Most MOEAs aim to find a well-distributed approximation of the Pareto-optimal set. The population of candidate solutions is evolved by applying mutation and/or recombination to generate new solutions. Solutions are propagated by inclusion into an archive, which is decided by a two-tiered fitness consisting of a Pareto-dominance-based ranking and a diversity-promoting niching. While the Pareto-dominance-based ranking guides the population toward the true Pareto-front, the niching mechanism promotes a uniform distribution of solutions along the retained nondominated fronts. Hence, at any time during the execution of the algorithm, the archive ideally retains the best nondominated solutions discovered. The desired result of a generic MOEA is a set of nondominated solutions close to the true Pareto-optimal front and uniformly distributed in the objective space.
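The nondominated set retained by such an archive can be sketched as a simple filter over objective vectors (an illustrative implementation assuming minimization; practical MOEAs use faster sorting procedures):

```python
def nondominated(points):
    """Return the nondominated subset of a list of objective vectors (minimization)."""
    def dom(a, b):  # True if a Pareto-dominates b
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dom(q, p) for q in points if q is not p)]
```

For example, nondominated([(1, 3), (2, 2), (3, 1), (2, 3)]) drops only (2, 3), which is dominated by (2, 2); the remaining three vectors are mutually nondominated.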

B. Preference in Evolutionary Multiobjective Optimization

Under the assumption of incomparability and noncommensurability of objectives in the Pareto-dominance criterion, no member of a nondominated set of solutions is superior to any other member. Human decision making preference introduces an additional criterion that imposes an order among a set of nondominated solutions.

A scheme for incorporating preference into an optimization algorithm concerns three aspects.

1) Preference representation, i.e., a mathematical model of the internal preference of the decision maker. The modeling of preference can be achieved by a direct comparison of alternative solutions or by induction of preference relations between solutions from their associated attributes [26].

2) Preference elicitation. Elicitation facilitates the explication of the decision maker's internal preference coherently in terms of the selected model. An elicitation algorithm is especially important in high-dimensional problems given the limited span of short-term memory [27].

3) Preference integration, i.e., a method of integrating preference information within the selected model into the optimization algorithm to inform the search or solution generation process. In MOEAs, preference information has been integrated into the fitness evaluation as a modified Pareto-dominance criterion (e.g., [5], [29]–[34]) or as a bias on the distribution of solutions (e.g., [8], [36]). Alternatively, preference has also been encoded into the penalty functions of a coevolutionary framework (e.g., [34]).

In [28], Coello noted that the incorporation of human preference into the fitness function of an MOEA must preserve the Pareto-dominance relation among feasible solutions; that is, if Xi dominates Xj then Xi is preferred to Xj. He also noted that scalability with respect to the number of objectives is an issue to be addressed in preference-based approaches. Other necessary properties of preference incorporation in EMOO are as follows.

1) Accessibility of any Pareto-optimal solution under the preference model. In a parametric preference model, this implies that for any solution Xi in the Pareto-optimal set there is a set of parameter settings such that Xi maximises preference.

2) A facility for the specification of partial preference. The limitations of human perception [27] require that in many-objective problems decision makers be allowed to specify preference for subsets of the objectives F = {f1, ..., fM} to arrive at the overall preference.

3) Noninferior convergence in comparison to the baseline general-purpose MOEA. The preference-based MOEA should obtain solutions at least as close to the Pareto-front as a comparable general-purpose MOEA. Any adverse effects of excluding nondominated solutions from solution generation or archival because of incompatibility with the explicated preference should be compensated for.

4) Functional correspondence between the preference model and the objective space/solution space if the preference model is not defined in the solution space.

5) Control over the extent of focus in the algorithmic implementation of preference-directed search.

These features are present in the preference representation, elicitation, and integration proposed in this paper.

III. Related Work

This section presents the current literature on the incorporation of the notion of relative importance of objectives in MOEAs. Research work concerned with other notions of preference, e.g., goal-directed EMOO, is not discussed here.

In classical multicriteria decision making, the relative importance of objectives is commonly represented by a set of M importance parameters, the magnitudes of which reflect the importance of the respective objectives [37]. Importance parameters take the form of scaling constants in MAUT [16], weights in outranking methods [17], [19]–[25], trade-offs or substitution rates in the Zionts and Wallenius procedures [38], [39], and eigenvectors of pairwise comparison matrices in AHP [15]. The relative importance of objectives has also been represented by a strongly lexicographic ordering. Such an ordering is equivalent to a special case of a MAUT-based weighted sum [40]. MAUT and outranking models constitute the predominant underlying models employed by preference-based MOEAs in the literature [29], [30], [34]–[36], [43]–[45].

MAUT represents the relative importance of objectives as coefficients of an aggregative value function, which is usually an additive or multiplicative function. The additive form is by far the most popular, owing to the simplicity and intuitiveness of its construct as well as its efficacy in converging on the Pareto-front, as is demonstrated in Bentley and Wakefield's investigation [41]. The MAUT model supports preference (P) and indifference (I) relations, where P is irreflexive, asymmetric, and transitive, and I is reflexive, symmetric, and transitive. The relation I is also an equivalence relation.
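The additive form can be sketched in one line (an illustration under the minimization convention used here; the weights play the role of the importance parameters):

```python
def weighted_sum(weights, objectives):
    """Additive aggregation of objective values: under minimization, the solution
    with the smallest aggregate value is the most preferred, and a larger weight
    makes the aggregate more sensitive to the corresponding objective."""
    return sum(w * f for w, f in zip(weights, objectives))
```

With weights (0.7, 0.3), the solution (1.0, 2.0) aggregates to 1.3 while (2.0, 1.0) aggregates to 1.7, so the first is preferred. A well-known limitation is that no weighted sum attains solutions in nonconvex regions of the Pareto-front, which motivates models that do not encode importance as real-valued parameters.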

The basic MAUT-based weighted sum formulation is employed in, among others, the work of Cvetkovic et al. [34] and Branke et al. [36]. Variations of the basic model are proposed and employed in, e.g., [29], [30], and [33].

In [29], Branke et al. define a guided dominance relation, where the maximum and minimum utilities of solutions are considered instead of the actual attainment in the objective functions. Guided dominance preserves Pareto-dominance and accommodates incomparability among members of a nondominated set whose utilities are between the minimum and the maximum. Application of the guided dominance relation in NSGA-II produced an improvement in terms of convergence on the Pareto-front [29]. For application to problems with more than two objectives, the explication of maximum and minimum utility functions is difficult. The use of two weighted sums (i.e., the maximum and minimum utilities) also yields a 1-D subset of the Pareto-front, irrespective of the actual dimensionality of the Pareto-front. On the other hand, employing a pair of maximum and minimum utilities for each pair of objectives impairs scalability. The approach is also unable to focus only on the compromise region of a concave Pareto-front.

Jin and Sendhoff [30] adapted random weighted aggregation and dynamic weighted aggregation (DWA) [42] to include preference by setting upper and lower bounds on the weight perturbations. The algorithms in [30] support incomparability between solutions and provide the user with control over the extent of focus. While DWA facilitates the retention of compromise solutions in a nonconvex nondominated front, the lack of explicit diversity preservation and inferior performance in high-dimensional problems [42] constitute significant drawbacks of the approach. Arriving at the upper and lower bounds of the weights for higher dimensional problems is also difficult.

The biased crowding distance approach proposed in [36] is scalable and facilitates focus on concave parts of the Pareto-front. Instead of inter-solution distance, crowding is computed using the distance of the solutions' projections onto a hyperplane described by the preference-encoding weighted sum. The extent of the focus is controlled by raising the ratio of the actual distance and the projected distance to the power variable α in the computation of the biased crowding measure. Selection of the appropriate power variable α for a desired extent of focus requires trial and error, as the relation between α and coverage changes according to the Pareto-front geometry.

The effect of the biased crowding distance scheme is a focus on the area surrounding the point at which the hyperplane described by the weighted sum is tangent to the nondominated front [36]. However, the approach was shown to be inferior to the guided dominance approach in terms of convergence [36].

Cvetkovic and Parmee [31]–[34] proposed a weighted sum preference model with an AHP-like elicitation algorithm. The axiomatic representation features binary and unary preference relations, each of which is associated with a real-valued parameter as well as a linguistic label. The main weakness of the preference model is the lack of accommodation for incomparability between objectives. The setting of the real-number parameters denoting preference relations is also largely arbitrary, although any chosen values impinge directly on the coefficients in the weighted sum [30].

The scheme represents the importance of an objective by a numerical weight in an additive aggregation. Integration of the weighted sum is performed by weighted Pareto-optimization [31], weighted coevolutionary optimization [33], and weighted scenario and constraint handling [32], [33]. Weighted Pareto-optimization is accomplished with the lexicographic evaluation of the weighted dominance relation, Pareto-ranking, and a weighted sum of objectives. While the weighted dominance relation preserves Pareto-dominance and also supports incomparability for particular settings of the parameter τ, this binary relation promotes the retention of only extreme solutions and degenerates into a random search in certain cases. The weighted coevolutionary optimization approach incorporates preference by modifying the penalty terms of coevolving subpopulations according to the importance of the objective function associated with each subpopulation. The main challenge in the coevolutionary approach is in guaranteeing a workable compromise solution, obtained only if the subpopulations reach convergence. With increasingly complex preference and a larger number of objectives, convergence is even harder to attain.

Greenwood et al. [35] proposed an ISMAUT-based weighted sum approach where the importance ranking of objectives is implicitly derived from a ranking of candidate solutions. The imprecisely specified weight coefficients are characterized by a set of constraints describing preference as revealed in pairwise comparisons of candidate solutions. A minimization of the difference in the weighted sums of a pair of solutions, subject to the predetermined constraints, is performed in the fitness computation. This linear optimization is performed for every solution pair in the archive/population to obtain the fitness of the solutions. Contradictions that may occur in the overall decision making preference are resolved by a computationally expensive procedure.

The basic model of outranking approaches, unlike MAUT, incorporates incomparability between alternative solutions. The actual purpose of modeling preference in an outranking approach varies from choosing the best among alternatives, to sorting alternatives into categories, to ranking alternatives from best to worst [23]. Of the class of PROMETHEE models, PROMETHEE I, II, and IV [24], [25] are designed to produce a partial and/or complete ranking of alternatives, and are suitable for direct adoption into an MOEA.

Researchers have implemented PROMETHEE II as the fitness function of an evolutionary algorithm (EA) [43]–[45]. In the model, each objective fm is associated with a "weight" wm denoting its importance [24]. The outranking score for each pair of solutions, π(Xi, Xj), is an aggregation of the performance differences in each objective function mapped through a monotonically nondecreasing function of the decision maker's choice. The function encodes levels of performance difference as a score between 0 and 1, denoting preference or indifference. For every candidate solution Xi, outgoing and ingoing flows (φ+(Xi) and φ−(Xi)) are computed as the sums of π(Xi, Xj) and π(Xj, Xi), where the Xj are the other candidate solutions in the set. The net flow, denoted by φ(Xi), is maximised by the EAs in [43]–[45]. Effectively defining a preorder in the solution space, PROMETHEE-II supports preference and indifference among solutions on the basis of the net flow.
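The flow computation just described can be sketched as follows (an illustration only; the per-objective preference function pref_fn, and using a single such function for all objectives, are our simplifying assumptions):

```python
def net_flows(population, weights, pref_fn):
    """PROMETHEE II-style net flows over a list of objective vectors (minimization).
    pref_fn maps a performance difference to a score in [0, 1]; a positive
    difference b[m] - a[m] means solution a outperforms b in objective m."""
    def pi(a, b):  # aggregated outranking score of a over b
        return sum(w * pref_fn(bm - am) for w, am, bm in zip(weights, a, b))
    flows = []
    for a in population:
        out_flow = sum(pi(a, b) for b in population if b is not a)  # phi+(a)
        in_flow = sum(pi(b, a) for b in population if b is not a)   # phi-(a)
        flows.append(out_flow - in_flow)                            # net flow phi(a)
    return flows
```

An EA maximizing the net flow then effectively ranks the population from best to worst under the chosen weights and preference function.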

In the following sections, a novel model of the notion of relative importance of objectives is presented. The preference model is unique in that it allows incomparability between objectives, results in a partial ranking of objectives, and provides an explicit functional correspondence between the explicated decision maker preference and the target region in the objective space. Thus, the importance of an objective is not represented by an importance parameter but by the selected section of the Pareto-front projected onto the axis of this objective. The representation is discussed in the following sections, along with the algorithm to assist the decision maker in explicating his/her internal preference to produce a coherent overall preference. Lastly, the implementation of preference information in the fitness function allows the MOEA to find and retain solutions on geometrically ill-behaved Pareto-fronts. The approach is described in more detail in the succeeding sections.

IV. Preference Representation

This section describes the mathematical model of the notionof relative importance of objectives. The proposed model rep-resents decision making preference as a partial order of objec-tives in terms of importance. The partial order supports strictpreference (henceforth replaced by the term “precedence” toavoid confusion and denoted by P), equal importance (I), andincomparability (Q) between objective pairs. The three binaryrelations are defined within an axiomatic framework and areassociated with a subset of a prototype Pareto-optimal front.Section IV-A describes the properties of the binary relationsP , I, and Q.

The set of binary preference assertions for objective pairs in f1 to fM induces a partial order. Overall preference over the set of M objectives F = [f1, f2, . . . , fm, . . . , fM] and the functional correspondence to a subset of the (M−1)-dimensional Pareto-optimal front are explained in Section IV-B.

A. Preference Structure

A preference structure P consists of a group of binary relations R defined on a set A such that for each pair in A, one and only one binary relation from the structure is satisfied [46]. The preference structure PF described here facilitates the expression of the decision maker's perception of the relative importance of objectives. The binary relations supported in PF describe the relative importance over the set of objective functions F = [f1, f2, . . . , fm, . . . , fM] and are given as follows.

1) Precedence (P): Precedence specifies the quality of being more important. The assertion "fmPfn" implies that fm is considered more important than fn. Precedence is irreflexive, transitive, and asymmetrical, that is

¬fmPfm (2)

fmPfn ∩ fnPfo ⇒ fmPfo (3)

fmPfn ⇒ ¬fnPfm. (4)

2) Equal importance (I): Equal importance is defined as a reflexive, transitive, and symmetrical binary relation, that is

fmIfm (5)

fmIfn ∩ fnIfo ⇒ fmIfo (6)

fmIfn ⇔ fnIfm. (7)

3) Incomparability (Q): Incomparability allows the decision maker to express the absence of preference for a particular objective fm over another objective fn. It is distinct from equal importance in that no judgment over the relative importance of the two objectives concerned is possible, implying that all performance trade-offs in the two objectives are relevant to the decision maker. In the absence of human preference information, incomparability is the default binary relation between any objective pair. Incomparability is irreflexive, symmetrical, and intransitive, that is

¬fmQfm (8)

fmQfn ∩ fnQfo ⇏ fmQfo (9)

fmQfn ⇔ fnQfm. (10)

Other properties of the binary relations are as follows:

fmPfn ∩ fnIfo ⇒ fmPfo (11)

fmPfn ∩ fmIfo ⇒ foPfn (12)

¬(fmPfn ∪ fmIfn ∪ fnPfm) ⇔ fmQfn. (13)
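The axioms can be checked mechanically. The sketch below (ours, not the paper's) uses the matrix encoding given later in (18), with 1 for P, 2 for I, and 0 for Q; Q then needs no explicit test, since it is simply the absence of P and I:

```python
def is_coherent(b):
    """Check a relation matrix in the encoding of (18):
    b[m][n] == 1 means fm P fn, 2 means fm I fn, 0 means fm Q fn."""
    M = len(b)
    for m in range(M):
        if b[m][m] == 1:                               # (2): P is irreflexive
            return False
        for n in range(M):
            if b[m][n] == 1 and b[n][m] == 1:          # (4): P is asymmetric
                return False
            if (b[m][n] == 2) != (b[n][m] == 2):       # (7): I is symmetric
                return False
            for o in range(M):
                # (3) and (11): P followed by P or I forces P
                if b[m][n] == 1 and b[n][o] in (1, 2) and b[m][o] != 1:
                    return False
                # (12), via symmetry of I: I followed by P forces P
                if b[m][n] == 2 and b[n][o] == 1 and b[m][o] != 1:
                    return False
                # (6): I is transitive (diagonal skipped)
                if b[m][n] == 2 and b[n][o] == 2 and m != o and b[m][o] != 2:
                    return False
    return True
```

For example, the closed chain f1Pf2, f2Pf3, f1Pf3 passes, while dropping the transitive pair f1Pf3 fails the check.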

The three binary relations defined above are associated with a functional mapping to a 2-D objective space.

A way of consistently characterizing the preferred solutions for a given preference profile irrespective of the geometry of the Pareto-front is desirable. This consistency is instrumental to an effective articulation of preference by a human decision maker. Even if the geometrical attributes of the actual Pareto-front are unknown a priori, a consistent characterization equips the decision maker with some information about the solutions he/she may expect for any given preference profile.

To achieve this consistency, the functional mapping from preference expressed in PF to the Pareto-front is defined in terms of a prototype nondominated front. The prototype front selected is linear, continuous, and defined in [0, 1]. Let the prototype front be described by F = [f1, f2]. Then f1 + f2 = 1. An illustration of the prototype front and a mapping from the actual front is given in Figs. 1 and 2. The choice of the linear front is motivated partly by its simplicity and scalability to M-objective problems. Simplicity helps the decision maker in formulating his/her preference in terms of binary relations in PF. The scaling of the preference model and its functional mapping to M-objective problems is described in the next section.

To accommodate the three binary preference relations defined in PF, the prototype front is divided into three nonoverlapping segments of identical length as depicted in Fig. 1. The linear front in Fig. 1 is the prototype front, while the curve is a normalized concave Pareto-front associated with an actual


Fig. 1. Desired solutions corresponding to f1Pf2 (squares), f1If2 (asterisks),and f2Pf1 (triangles).

MOP. The first section, plotted as squares, is desired when fmPfn is asserted. The second and third sections (marked by asterisks and triangles, respectively) are the desired subsets of the Pareto-front when fmIfn and fnPfm, respectively, are asserted. The preference assertion fmQfn corresponds to the entire span of the prototype Pareto-front.

Mathematically, the desired subsets of the Pareto-front can be characterized by the following inequalities:

f1Pf2 ⇔ 2f1 ≤ f2 (14)

f1If2 ⇔ f1 ≤ 2f2 ≤ 4f1 (15)

where f1 and f2 correspond to the prototype objective space and Pareto-front. The choice of the coefficients 2 and 4 in the above inequalities follows from the equal division of the prototype front into three nonoverlapping subsets. Other values may of course be used if another way of dividing the prototype front is deemed necessary or desirable. Within the scope of this paper, equal and nonoverlapping division is adopted as it is deemed most intuitive for the general case.
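A small sketch of the membership test (ours; it assumes the middle segment is characterized by f1 ≤ 2f2 ≤ 4f1, consistent with dividing the front f1 + f2 = 1 into equal thirds):

```python
def desired(relation, f1, f2):
    """Is the point (f1, f2) on the normalized prototype front inside
    the region selected by the asserted relation?"""
    if relation == "f1Pf2":
        return 2 * f1 <= f2             # best third in f1, per (14)
    if relation == "f1If2":
        return f1 <= 2 * f2 <= 4 * f1   # middle third
    if relation == "f2Pf1":
        return 2 * f2 <= f1             # mirror of (14)
    return True                          # f1Qf2: the whole front
```

At the segment boundary f1 = 1/3, f2 = 2/3, both the precedence and the equal-importance regions touch, as expected from equal thirds.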

For a practical MOP, the target subset of the Pareto-optimal set consists of solutions whose projected normalizations fall within the desired region given in (14) and (15). That is, given an objective vector [f1(X∗i), f2(X∗i)], the vector is normalized, and a projection of the normalized vector onto the prototype front is computed. The solution X∗i is a member of the target subset if and only if this projection is a desired solution for the given preference assertion.
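The normalize-then-project test can be sketched as follows. The paper does not spell out the projection, so this sketch assumes min-max normalization against the extrema of the nondominated set and a radial projection onto the prototype front f1 + f2 = 1:

```python
def normalize(point, mins, maxs):
    # min-max normalization w.r.t. the extrema of the nondominated set
    return [(p - lo) / (hi - lo) for p, lo, hi in zip(point, mins, maxs)]

def project(point):
    # assumed radial projection of the normalized point onto sum(f) = 1
    s = sum(point)
    return [p / s for p in point]
```

The projected point can then be fed to the region inequalities of (14) and (15) to decide membership in the target subset.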

With this scheme, irrespective of the geometrical attributes of the actual Pareto-front, the decision maker knows that if he/she asserts f1Pf2, the best third in terms of performance in f1 (which, because of nondominance, is also the worst third in f2) is to be expected from the implemented optimization. A comparison of the two Pareto-fronts in Figs. 1 and 2 illustrates this. Fig. 2 shows the normalized Pareto-front of problem KUR

Fig. 2. Desired solutions corresponding to f1Pf2 (squares), f1If2 (asterisks),and f2Pf1 (triangles).

[47], which consists of three disjoint segments. Note that although the geometry of this Pareto-front is different from that in Fig. 1, the target regions corresponding to the three preference assertions (f1Pf2, f1If2, and f2Pf1) are related through the prototype front, i.e., the projected normalizations of the three target regions lie in the same part of the prototype front.

B. Overall Preference in M-Objective Problems

This section discusses the general case involving M objectives. The overall decision maker preference consists of a coherent set of binary preference relations asserted for the set of objective functions F, where coherence of the set of binary relations is given by compliance with the properties outlined in (2) to (13).

From the properties of the binary relations P, I, and Q, it is evident that the characteristic binary relation in PF is a partial order. Under the overall preference asserted, F is a partially ordered set. The derivation of the partial ranking of F given a set of asserted binary preference relations is furnished in the next section as part of the description of the elicitation algorithm. The remainder of this section is concerned with the notation used to represent the partial order over F and the functional mapping of the overall partial ranking to the objective space.

Let F be the set of objective functions partially ordered in terms of importance and C = {ci} in the power set of F be a collection of totally ordered sets such that the union of all ci equals F and each chain ci is maximal in size. Thus, in each ci every objective pair is related by precedence (P) or equal importance (I), an objective fm can only appear once in a chain, and for any two distinct chains there is at least one member pair for which the incomparability relation (Q) applies, that is

∀ci, cj ∈ C, i ≠ j, ∃fm ∈ ci, fn ∈ cj, fmQfn. (16)

The finite partially ordered set of objectives F may be described by a listing of the constituent chains. In this paper,


a chain is denoted by an enumeration of its members, ordered in terms of decreasing importance going from left to right. Objectives with equal importance are gathered and delimited by a square bracket. For example, the chain {[f2], [f1, f3]} implies that f2 is preferred to f1 and f3, and that f1 and f3 are of equal importance.
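The chain notation maps naturally onto a nested list, with one inner list per importance level. The helper below (ours, purely illustrative) expands a chain such as {[f2], [f1, f3]} into the pairwise relations it implies:

```python
def chain_relations(chain):
    """Yield (a, rel, b) triples implied by one chain, given as a list
    of levels ordered most to least important, e.g. [["f2"], ["f1", "f3"]]."""
    rels = []
    for i, level in enumerate(chain):
        for a in level:
            for b in level:
                if a != b:
                    rels.append((a, "I", b))   # same level: equal importance
            for later in chain[i + 1:]:
                for b in later:
                    rels.append((a, "P", b))   # earlier level precedes later
    return rels
```

Applied to {[f2], [f1, f3]}, it yields f2Pf1, f2Pf3, and f1If3 (in both orientations), matching the reading in the text.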

Similar to the 2-D case in Section IV-A, a coherent set of binary preference assertions corresponds to a region in the prototype nondominated front in the M-dimensional space. The prototype front is the continuous linear hyperplane satisfying f1 + f2 + · · · + fM = 1, defined for 0 ≤ fm ≤ 1. The subset of the prototype front corresponding to the

overall preference for M objectives is the intersection of all regions delineated by the inequalities given by the constituent binary preference assertions. For a practical MOP, the desired set of solutions is similarly the set of Pareto-optimal solutions whose projected normalization on the prototype front falls within the desired region. This is illustrated in the following examples.

Example 1: Minimize objective functions F = [f1, f2, f3] where the following preference relations are asserted: f1Pf2, f1Pf3.

Given the asserted relations, F is a partially ordered set where {[f1], [f2]} is the first chain and {[f1], [f3]} the second chain. The objectives f2 and f3 are incomparable. The plots in Fig. 3 illustrate the subset of the Pareto-front that corresponds to the objective importance ranking. The shaded area is the desired region of the Pareto-front for the given partial ranking. This target region is given by the following:

2f1 ≤ f2 ∩ 2f1 ≤ f3. (17)

Observe that the 2-D projections of the Pareto-front onto the f1–f2 axes (the second plot in the set, Fig. 3) and the f1–f3 axes (the third plot in the set) demonstrate that trade-off solutions with good performance in f1 are favored at the expense of performance in f2 and f3, respectively. The projection onto the f2–f3 axes shows that the entire trade-off between f2 and f3 is represented in the target subset.

Example 2: Minimize objective functions F = [f1, f2, f3, f4, f5, f6] where the following are asserted: f1Pf2, f2Pf3, f2Pf4, f6If4. The asserted relations impose a partial order on F, where {[f1], [f2], [f3]}, {[f1], [f2], [f4, f6]}, and {[f5]} are the constituent chains.
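The chains in Example 2 can be sanity-checked mechanically. The sketch below (ours) closes the asserted precedence pairs under transitivity and confirms that f1 precedes f3 and f4, while f5 is incomparable to every other objective and therefore forms its own chain:

```python
def transitive_p(pairs):
    """Transitive closure of a set of precedence pairs (a, b) meaning aPb."""
    closed = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            for c, d in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

# assertions of Example 2; I is kept separately since it is symmetric
P = transitive_p({("f1", "f2"), ("f2", "f3"), ("f2", "f4")})
I = {("f6", "f4"), ("f4", "f6")}
```

The closure contains f1Pf3 and f1Pf4, consistent with the first two constituent chains, and no pair involves f5.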

V. Preference Articulation

Articulation of overall preference in terms of the proposed model is equivalent to constructing a comprehensive partial ranking of F. The complexity of constructing a coherent complete ranking increases combinatorially with the number of objectives in the set F. A means of partially specifying the complete ranking is therefore beneficial to human decision makers. Section V-A describes an algorithm that integrates a totally ordered chain into an existing partial ranking. The decision maker may specify a chain of λAt objectives that are totally ordered by the binary relations P and I, where 2 ≤ λAt ≤ M. The algorithm manages conflicts with previously asserted rankings and merges the chain with existing chains in the partial ranking to produce an updated, coherent partial ranking. Once a complete preference profile is explicated and integrated into the search, the search may progress in such a way that the decision maker wishes to modify the preference. An interactive facility that provides the decision maker with strategic information on the search progress and handles modification of preference is described in Section V-B.

A. A Priori Elicitation

Explication of the decision maker's overall preference is assisted by the elicitation algorithm described here. The algorithm works by maintaining a minimal coherent set of totally ordered subsets (chains) of objectives. The partial ranking implied by the set of chains expresses the binary relations articulated by the decision maker. Elicitation is accomplished by the procedure Main and stops at the prompting of the decision maker or when a complete ranking among all objectives has been specified. Let

1) R(t) be the partial ranking of objectives at iteration t.
2) ci be the constituent chains of R(t), 1 ≤ i ≤ K, and λci the length of chain ci.
3) At be the chain specified by the decision maker at iteration t.
4) BR(t) be the M × M matrix that records the binary preference relations between the M objectives implied in the partial ranking R(t). Elements of BR(t), bR(t)mn, are assigned the following values:

              ⎧ 1, if fmPfn
   bR(t)mn =  ⎨ 2, if fmIfn          (18)
              ⎩ 0, if fmQfn

   B∅ is the M × M zero matrix. Bc correspondingly is the M × M matrix that records the binary preference relations in chain c.
5) Index(n, c) be a procedure that returns the index m of the objective function fm placed at position n in the chain c.
6) Grade(fm, c) be a procedure that returns the importance rank of objective function fm in the chain c. If fm is not a member of c, it returns 0.
7) Head(c) be a procedure that returns the most important member of c, i.e., it returns fm where Grade(fm, c) = 1. Tail(c) is the procedure that returns the least important member of c.
8) Post(fm, c) be a procedure that returns the position of objective fm in chain c. If fm is not a member of c, it returns 0. Else, it returns a value between 1 and λc.
9) Insert(c, pos, rel, fm) be a procedure that inserts fm at position pos + 1, noting that fm is related to the objective placed at pos by binary relation rel.

At initialization, the partial ranking R(0) = ∅ and BR(t) = B∅. The following while loop is executed:


Fig. 3. Prototype front: linear front traced and desired subset shaded dark. (a) 3-D view. (b) View from f1–f2 plane. (c) View from f1–f3 plane. (d) View from f2–f3 plane.

Main

While (stop == 0) {
    At = read user input;
    If (IsValid(At) == 1)
        BAt = UpdateMatrix(At, B∅);
        Status = VerifyMatrix(BAt);
        If (Status == −1)
            HandleContradiction(C);
        Else
            R(t + 1) = Merge(R(t), At);
    Else
        Print("Error: one or more objectives occur twice");
    Increment t;
    stop = UserStop OR (IsFull(BR(t)) == 1);
}
CompleteChains(BR(t));

A new chain At is not a valid totally ordered subset of F if: 1) it contains only one objective; or 2) any objective occurs twice or more. The procedure IsValid(At) checks this and informs the decision maker of any objectives that occur twice or more in the asserted chain. If the asserted chain is valid, integration of chain At is performed by first constructing the matrix BAt according to the rule in (18) and verifying the validity of the matrix. Construction of the matrix BAt for chain At is simple. The pseudocode of UpdateMatrix(At, BAt) is given in the following.

UpdateMatrix(At, BAt)

UpdateStatus = 0;
For (k = 1 to λAt − 1)
    For (l = k + 1 to λAt)
        m = Index(k, At);
        n = Index(l, At);
        If (Grade(fm, At) < Grade(fn, At) & bAtmn ≠ 1)
            UpdateStatus = 1;
            bAtmn = 1;
        If (Grade(fm, At) == Grade(fn, At) & bAtmn ≠ 2)
            UpdateStatus = 1;
            bAtmn = 2;
            bAtnm = 2;
return UpdateStatus;
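For a single chain, the effect of UpdateMatrix can be restated compactly. The sketch below (ours, not the paper's code) takes a chain as a list of importance levels of zero-based objective indices and fills the (18) matrix directly:

```python
def chain_matrix(chain, M):
    """Build the M x M matrix of (18) from one chain, e.g. [[1], [0, 2]]
    meaning f1 precedes f0 and f2, with f0 and f2 equally important."""
    b = [[0] * M for _ in range(M)]
    for i, level in enumerate(chain):
        for m in level:
            for n in level:
                if m != n:
                    b[m][n] = 2          # fm I fn within a level
            for later in chain[i + 1:]:
                for n in later:
                    b[m][n] = 1          # fm P fn across levels
    return b
```

Entries left at zero encode incomparability, i.e., pairs the chain says nothing about.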

The procedure VerifyMatrix returns −1 if any of the rankings implied in At contradicts an order already asserted in the existing partial ranking R(t), 0 if the chain At is the first asserted (t = 0), and 1 otherwise.


VerifyMatrix(BAt)

If (t == 0)
    R(t + 1) = At;
    BR(t) = BAt;
    Status = 0;
Else
    Status = 1;
    C = {};
    For (m = 1 to M, n = 1 to M)
        If (bAtmn ≠ 0 & bR(t)mn ≠ 0 & bAtmn ≠ bR(t)mn)
            C = C ∪ {m, n, bR(t)mn, bAtmn};
            Status = −1;
return Status;
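In compact form, the contradiction test scans for entries where both matrices assert a nonzero relation that disagrees. The sketch below (ours) also flags reversed precedence (bAtmn = 1 against bR(t)nm = 1), which an entrywise comparison alone would miss; treat that extra check as our addition, not the paper's:

```python
def conflicts(b_new, b_old):
    """Return (m, n) pairs where the new matrix contradicts the old one,
    both in the encoding of (18)."""
    M = len(b_old)
    out = []
    for m in range(M):
        for n in range(M):
            # same-slot disagreement between two asserted relations
            if b_new[m][n] and b_old[m][n] and b_new[m][n] != b_old[m][n]:
                out.append((m, n))
            # our extra check: precedence asserted in opposite directions
            if b_new[m][n] == 1 and b_old[n][m] == 1:
                out.append((m, n))
    return out
```

An empty result corresponds to Status = 1 in VerifyMatrix; a nonempty result corresponds to Status = −1 with C holding the clashing entries.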

Contradictions (Status == −1) are handled by the procedure HandleContradiction(C). In the second case (Status == 0), the partial ranking R(t) is simply At and the matrix BR(t) is the matrix BAt. The third case requires a merging of R(t) and At, performed by the procedure Merge(R(t), At) described below in pseudocode.

Merge(R(t), At)

BR(t+1) = B∅;
R(t + 1) = ∅;
For (k = 1 to K)
    r = 1;
    flag = 0;
    For (i = 1 to λAt)
        fj = Index(i, At);
        fs = Index(r, ck);
        insertion = 0;
        While (bR(t)sj ≠ 0 & r ≤ λck)
            Insert(At, i, bR(t)sj, fs);
            insertion = bR(t)sj;
            r = r + 1;
            fs = Index(r, ck);
        If (insertion > 0)
            Insert(ck, r − 1, insertion, fj);
        If (bR(t)sj == 0)
            flag = 1;
    If (flag == 1)
        UpdateStatus = UpdateMatrix(ck, BR(t+1));
        If (UpdateStatus == 1)
            R(t + 1) = R(t + 1) ∪ ck;
UpdateStatus = UpdateMatrix(At, BR(t+1));
If (UpdateStatus == 1)
    R(t + 1) = R(t + 1) ∪ At;

The main elicitation procedure terminates when the decision maker asserts a Stop or when the user has specified enough chains to cover the entire set F. The procedure IsFull checks the latter condition, i.e., whether all objectives in the set F have been included in one or more chains specified by the user and integrated into R(t). This can be done simply by checking the matrix BR(t). The objective fm is not in any of the chains in R(t) if and only if the entries in row m and column m of BR(t) are all zeros. If no such row and column pair is found in BR(t), then every objective equals, precedes, or is preceded by at least one other objective in importance.

IsFull(BR(t))

For (m = 1 to M)
    sum1 = ∑n∈[1,M] bR(t)mn;
    sum2 = ∑n∈[1,M] bR(t)nm;
    If (sum1 == 0 & sum2 == 0)
        return 0;
return 1;
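The row-and-column test translates directly; the sketch below (ours) mirrors IsFull on the (18) encoding:

```python
def is_full(b):
    """True iff every objective appears in at least one asserted relation,
    i.e., no row m and column m of the relation matrix are both all zero."""
    M = len(b)
    for m in range(M):
        if all(b[m][n] == 0 for n in range(M)) and \
           all(b[n][m] == 0 for n in range(M)):
            return False        # fm appears in no chain yet
    return True
```

CompleteChains below performs the same scan but, instead of returning, appends a singleton chain for each unranked objective.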

The decision maker specifies completely ordered chains in the elicitation scheme. Objectives which are incomparable to all others do not have to be specified. The procedure CompleteChains(BR(t)) inserts the necessary chains such that R(t) completely describes preference over the M objectives in F. At the conclusion of the elicitation, the objectives are presented in lists, each of which constitutes a completely ordered subset of F.

CompleteChains(BR(t))

For (m = 1 to M)
    sum1 = ∑n∈[1,M] bR(t)mn;
    sum2 = ∑n∈[1,M] bR(t)nm;
    If (sum1 == 0 & sum2 == 0)
        K = K + 1;
        R(t) = R(t) ∪ {[fm]};

The next section deals with the interactive modification ofpreference.

B. Interactive Modification

A facility to interact with the decision maker can be beneficial for inexperienced decision makers and/or in problems with complex and unexpected behavior. It is to be noted, however, that an optimization algorithm should not rely too heavily on interaction with the decision maker. Gardiner and Vanderpooten [60] studied interactive algorithms in actual practice and reported that the median number of iterations is between three and eight. Miettinen [40] pointed out that one may ask whether the rapid convergence of preference results from true satisfaction, from fatigue on the decision maker's part, or possibly because the decision makers did not know how to continue the solution process.

Two essential aspects of optimization with interactive preference specification include the following.

1) Availability of strategic information on the search progress. Ideally, the decision maker is given a complete picture of the global as well as local characteristics of the problem. Thus equipped, the decision maker is able to effectively evaluate the suitability of the specified preference and refine the instantiation of the preference model.


2) An algorithm to modify part of the specified preference without disrupting the integrity of the whole. Suppose an importance ranking is established for a set of M objectives and the decision maker desires to change the relative importance concerning L of the M objectives, 1 ≤ L < M. Without loss of generality, let these be [f1, f2, .., fL]. The relative ranking among the remaining objectives [fL+1, .., fM] should remain intact.

In this approach, the decision maker is given the following information on the search progress.

1) The minimum and maximum attainment in each objective fm in the best nondominated front. This is easily done by maintaining extreme solutions as the optimization progresses.

2) Distribution parameters (mean and standard deviation) for each fm. The minimum and maximum obtained from extreme solutions are excluded in the computation of the distribution parameters.

The information is supplied at decision-maker specified intervals in the optimization. Partial modification of a complete overall preference may be performed in one of the three following ways.

1) Specifying a completely ordered chain Dt of λDt objectives, 2 ≤ λDt < M. Merging of this chain into the existing partial ranking is performed with the same procedure applied in the a priori elicitation described in the previous section. If any contradiction arises, the binary relations asserted in Dt prevail.

2) Specifying improvement or degradation for an objective fm. This facility is useful when the decision maker is dissatisfied with poor performance in objective fm but cannot specify which other objectives are to be sacrificed to improve performance in fm. It may also be used when performance in fm surpasses expectation and the decision maker cannot select the objectives to be improved at the expense of fm. The procedure Promote(fm) improves fm at the expense of all other objectives which precede it in importance, and Demote(fm) degrades fm to improve all other objectives less important than fm. The pseudocodes of the procedures are given in the Appendix.

3) Specifying desired improvement of an objective fm at the expense of another objective fn. There are four possible cases.

a) If fm and fn are present in the same chain, with fm preceding fn, the information is relayed to the decision maker and nothing is done. This is because fm is already prioritized over fn. The decision maker may choose to simply specify improvement for objective fm or degradation for objective fn via alternative 2 above.

b) If fm and fn are incomparable in the initial ranking, then fmPfn is a new chain to be merged with the existing partial ranking. This is handled by the same procedure of chain merging described in the a priori elicitation.

c) If fm and fn are present in the same chain but fm is of equal importance to fn, fn is demoted to the next lower rank in importance.

d) If fm and fn are present in the same chain but fn precedes fm, any objectives preceded by fn and preceding fm in the chain are promoted to the next higher rank. The procedure ensures that performance in fm is favored at the expense only of performance in fn, and not of other objectives.

The pseudocode of alternative 3 (Tradeoff(fm, fn)) is given in the Appendix.
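The Promote and Demote pseudocodes live in the paper's Appendix, which is not reproduced here; the sketch below is one plausible reading (ours) in which a chain is a list of importance levels and the named objective moves one level up or down:

```python
def promote(chain, fm):
    """Move fm one importance level up within a chain given as a list of
    levels, e.g. [["f1"], ["f2", "f3"]]. Emptied levels are dropped."""
    for i, level in enumerate(chain):
        if fm in level and i > 0:
            level.remove(fm)
            chain[i - 1].append(fm)
            break
    return [lvl for lvl in chain if lvl]

def demote(chain, fm):
    """Move fm one importance level down within the chain."""
    for i, level in enumerate(chain):
        if fm in level and i < len(chain) - 1:
            level.remove(fm)
            chain[i + 1].append(fm)
            break
    return [lvl for lvl in chain if lvl]
```

Under this reading, promoting f2 in {[f1], [f2, f3]} makes it equally important to f1, matching case c) above where demotion pushes an objective to the next lower rank.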

VI. Integration into Pareto-Dominance-Based General-Purpose MOEA

Primary features distinguishing an MOEA from its single-objective counterpart include the design of the fitness function and the use of elitism, which was shown to be a theoretical requirement [59]. The fitness function of most current MOEAs, e.g., [49]–[54], consists of a lexicographic evaluation of Pareto-rank followed by a crowding index or distance, as first suggested by Goldberg [48]. Exceptions include the algorithms described in [42], [55], [56], where explicit niching is omitted or combined with Pareto-dominance in a single criterion. In this section, we propose three methods of integrating preference information into the fitness evaluation of state-of-the-art MOEAs, with particular focus on those that employ Pareto-ranking. The strategies are as follows.

1) Inclusion of preference information as constraints. To incorporate preference, the inequalities given in Section IV-A are applied to the current population and/or archive, where normalization is done with respect to the extrema of the best nondominated front. In this particular strategy, the region of interest as defined by the inequalities derived from the partial ranking is considered the feasible region. The following comparison is performed for a pair of solutions in the population and/or archive:

if V(Xi) > 0 and V(Xj) > 0 then
    if V(Xi) < V(Xj) then
        Xi ≺pref Xj
    else if V(Xj) < V(Xi) then
        Xj ≺pref Xi
    end if
else
    if Xi ≺ Xj then
        Xi ≺pref Xj
    else if Xj ≺ Xi then
        Xj ≺pref Xi
    end if
end if

The function V(X) above denotes the constraint violation, which is taken to be the maximum magnitude of violation over all inequalities describing the desired region.

2) Inclusion of preference information as a rank penalty. Pareto-ranking introduces a complete order to the


partially ordered objective space by means of the existing dominance relations between solution pairs in the set. Here, incompatibility with the preference-based inequalities incurs a penalty in the Pareto-rank of a solution. As Pareto-ranks are usually defined as integers, the penalty imposed is one. The strategy works only with MOEAs that implement Pareto-ranking. Pareto-dominance is preserved in NSGA-II by performing the ranking from the best nondominated layer such that nondominated solutions satisfying the preference-based inequalities are assigned rank 1 (subset 1); nondominated solutions not satisfying the preference inequalities (subset 2) are assigned rank 2, along with solutions which satisfy the preference inequalities and are nondominated with respect to subset 2; and so on until the population is filled. In algorithms implementing Strength Pareto-ranking, the penalty is added to the strength figure, maintaining dominance relations. In the Multi-Objective Genetic Algorithm (MOGA), the penalty is added to the rank in a straightforward manner, with the rank of a solution Xi computed not as the number of solutions that dominate Xi but as the sum of the ranks of the solutions that dominate Xi.

3) Inclusion of preference in computing the crowding distance. Satisfaction of the preference inequalities causes a multiple of the actual crowding distance of a solution to be considered as its crowding distance, i.e., if a solution satisfies the inequalities then CrowdingDistance = Factor × ActualCrowdingDistance with Factor larger than one, whereas violation of the inequalities corresponds to Factor equal to one. The multiplication factor is the biasing strength of this approach. This strategy is applicable to any MOEA that implements crowding in the computation of the fitness.
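As an illustration of strategy 1, the constrained comparison can be sketched as follows. This is a sketch under our assumptions: objective vectors are already normalized, minimization throughout, and V is the maximum inequality violation; pref_less returns True or False for the two preferred directions and None for incomparable pairs:

```python
def pareto_dominates(a, b):
    # standard minimization dominance: no worse in all, better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pref_less(a, b, V):
    """True if a is preferred to b, False if b is preferred to a, None otherwise."""
    va, vb = V(a), V(b)
    if va > 0 and vb > 0:            # both outside the desired region
        if va < vb:
            return True
        if vb < va:
            return False
        return None
    # otherwise fall back to plain Pareto-dominance
    if pareto_dominates(a, b):
        return True
    if pareto_dominates(b, a):
        return False
    return None

# example violation function for the region of f1Pf2 in (14): 2*f1 <= f2
V = lambda f: max(0.0, 2 * f[0] - f[1])
```

When both solutions violate the region inequalities, the smaller violation wins; otherwise the ordinary dominance test decides, exactly as in the comparison pseudocode of strategy 1.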

In the following section, an implementation of the above strategies in the framework of NSGA-II is described and applied to MOPs of varying difficulty.

VII. Empirical Investigation

This section presents the application of the preference-based MOEA to standard test problems in the EMOO literature. The preference incorporation scheme described in this paper was implemented in NSGA-II, arguably the most widely used MOEA, which is severely affected by an increase in the dimensionality of MOPs. The Pareto-ranking in NSGA-II ranks solutions into successive nondominated layers, thus providing little selection pressure as the number of objectives increases and more solutions belong to the same nondominated layer.

The empirical study investigates the efficacy of the three integration strategies presented in the previous section in arriving at the desired region, and the contribution of preference-directed search to convergence on the Pareto-front as the dimensionality of the problem increases. The test problems employed exhibit various specific challenges to MOEAs, i.e., the presence of multiple local optima, biased distribution of solutions, nonconvexity, and discontinuity [58], [61].

A. Settings and Parameters

The biobjective problems employed in this paper are ZDT1, ZDT2, and ZDT3, proposed in [58]. Each of the scalable DTLZ1, DTLZ2, DTLZ3, DTLZ4, and DTLZ5 problems, proposed in [61], is also implemented with 3, 4, and 6 objectives. The numbers of decision variables are kept to the original formulations of these problems in [58] and [61].

ZDT1 to ZDT3 are 2-D, continuous problems. While ZDT1 pertains to a convex Pareto-front, ZDT2 and ZDT3 contain nonconvexities. The Pareto-front of ZDT3 in particular consists of disconnected regions. Test problems DTLZ1 to DTLZ5 are scalable MOPs and were implemented as 3, 4, and 6-objective problems in the study. The corresponding numbers of decision variables are N = 7, N = 8, and N = 10 for DTLZ1, and N = 12, N = 13, and N = 15 for the other DTLZ problems (see Table I). DTLZ1 has a linear Pareto-front with numerous local minima in the objective space. DTLZ2 has a nonconvex Pareto-front, identical in shape to the Pareto-fronts of problems DTLZ3 and DTLZ4. DTLZ3 incorporates the g(X) function of DTLZ1 and, therefore, has numerous local minima in the objective space. The Pareto-front of DTLZ4 is associated with a nonuniform distribution of solutions on the front: considerably more solutions are situated on the edges of the front than elsewhere. The Pareto-optima of test problem DTLZ5 correspond to an (M − 2)-dimensional concave front. Thus, some objective importance rankings may be invalid for problem DTLZ5, in the sense that there are no Pareto-optimal solutions that satisfy the preference-based inequalities.

A number of different preference settings are employed in the empirical study to observe the effect of changing preference on the performance of the preference-based algorithm. These are summarized in Table II. The first three (Pref #1 to #3) were applied to the 2-objective problems while the rest were applied to the 3, 4, and 6-objective problems. Note that Pref #4, #5, and #6 correspond to successively smaller subregions of a given Pareto-front.

The entire simulation was implemented in C. Simulated binary crossover and mutation [62] were employed with rates px = 0.9 and pm = 1/N. Ten runs were performed for each test problem and preference setting. The biasing strength was set at 10. In addition, the baseline NSGA-II was also applied to the same problems.

With each of the 2-D problems, 20 000 function evaluations were executed before termination. 100 000, 200 000, and 500 000 evaluations were allowed for the 3, 4, and 6-objective versions of problem DTLZ3. 30 000, 60 000, and 150 000 function evaluations were performed before termination in the case of the 3, 4, and 6-objective versions of the rest of the test problems.

B. Results and Discussion

To investigate the comparative performance of the algorithms, the nondominated ratio metric is applied to the sets of solutions obtained. The nondominated ratio is a K-ary metric that measures the proportion of solution set Sk belonging to the best nondominated front of the union of S1, . . . , SK [63]. The metric highlights the comparative


TABLE I

Nondominance Ratio

Problem         Pref.   Baseline         Constraint       Penalty Rank     Biased Crowding
                        Mean   S.D.      Mean   S.D.      Mean   S.D.      Mean   S.D.
ZDT1            Pref1   0.804  0.067856  0.777  0.15755   0.941  0.03178   0.783  0.11285
N = 30          Pref2   0.769  0.045814  0.846  0.10762   0.867  0.074244  0.803  0.10573
                Pref3   0.794  0.059292  0.845  0.092165  0.908  0.049844  0.801  0.076949
ZDT2            Pref1   0.746  0.049035  0.139  0.08698   0.938  0.057889  0.772  0.14718
N = 30          Pref2   0.801  0.077237  0.209  0.19017   0.909  0.065735  0.867  0.15225
                Pref3   0.741  0.039285  0.837  0.13158   0.832  0.178     0.803  0.095341
ZDT3            Pref1   0.736  0.053375  0.82   0.091894  0.834  0.065862  0.894  0.032728
N = 30          Pref2   0.798  0.042111  0.874  0.075011  0.863  0.11767   0.801  0.10148
                Pref3   0.836  0.047656  0.843  0.062902  0.848  0.070364  0.876  0.051467
DTLZ1           Pref4   0.879  0.061183  0.995  0.010801  0.697  0.33086   0.909  0.09712
M = 3, N = 7    Pref5   0.917  0.0715    0.988  0.017512  0.86   0.1075    0.809  0.17785
                Pref6   0.918  0.0751    0.962  0.11331   0.782  0.15303   0.79   0.19596
DTLZ2           Pref4   0.862  0.044672  0.931  0.026013  0.902  0.026998  0.867  0.027909
M = 3, N = 12   Pref5   0.91   0.03266   0.94   0.024037  0.905  0.029533  0.867  0.028694
                Pref6   0.92   0.032318  0.946  0.02319   0.954  0.017764  0.897  0.03401
DTLZ3           Pref4   0.909  0.052164  0.886  0.089716  0.939  0.024698  0.732  0.15838
M = 3           Pref5   0.919  0.052799  0.713  0.16573   0.969  0.027669  0.644  0.24838
                Pref6   0.957  0.044485  0.909  0.097348  0.965  0.035978  0.944  0.051897
DTLZ4           Pref4   0.869  0.034785  0.914  0.037771  0.898  0.035839  0.866  0.035653
M = 3, N = 12   Pref5   0.909  0.03573   0.978  0.025734  0.929  0.022828  0.873  0.054171
                Pref6   0.927  0.038312  0.954  0.022211  0.924  0.030258  0.907  0.035606
DTLZ5           Pref4   0.909  0.042282  0.995  0.0052705 0.917  0.039172  0.977  0.0094868
M = 3, N = 12   Pref5   0.914  0.031693  0.019  0.029231  0.936  0.035024  0.99   0.008165
                Pref6   0.917  0.029078  0.933  0.023118  0.974  0.014298  0.937  0.022136
DTLZ1           Pref4   0.261  0.14263   0.937  0.051001  0.929  0.067404  0.845  0.089846
M = 4, N = 8    Pref5   0.152  0.059591  0.896  0.20408   0.952  0.084827  0.908  0.083106
                Pref6   0.373  0.1976    0.983  0.046916  0.324  0.22756   0.368  0.23729
DTLZ2           Pref4   0.836  0.074863  0.913  0.049227  0.935  0.02273   0.899  0.050211
M = 4, N = 13   Pref5   0.823  0.078888  0.946  0.028752  0.929  0.031073  0.942  0.021499
                Pref6   0.841  0.045326  0.951  0.024244  0.943  0.038601  0.933  0.03335
DTLZ3           Pref4   0.301  0.13186   0.913  0.076019  0.922  0.057504  0.891  0.1112
M = 4           Pref5   0.377  0.096038  0.954  0.043512  0.906  0.050288  0.906  0.13327
                Pref6   0.47   0.20254   0.95   0.15811   0.5    0.21489   0.421  0.10311
DTLZ4           Pref4   0.819  0.073098  0.924  0.038644  0.94   0.046667  0.89   0.028674
M = 4, N = 13   Pref5   0.805  0.060782  0.938  0.028983  0.936  0.022706  0.92   0.064979
                Pref6   0.855  0.06884   0.97   0.027889  0.952  0.017512  0.932  0.034577
DTLZ5           Pref4   0.919  0.025144  0.907  0.21675   0.844  0.27973   0.884  0.02319
M = 4, N = 13   Pref5   0.894  0.02319   0.602  0.50957   0.235  0.04378   0.895  0.041433
                Pref6   0.971  0.017288  0.448  0.057697  0.299  0.051088  0.683  0.067667
DTLZ1           Pref4   0.331  0.1236    0.834  0.17024   0.62   0.28351   0.356  0.14401
M = 6, N = 10   Pref5   0.305  0.10967   0.869  0.174     0.465  0.16548   0.464  0.22882
                Pref6   0.081  0.087235  0.892  0.16982   0.434  0.33073   0.204  0.21614
DTLZ2           Pref4   0.743  0.089449  0.92   0.11944   0.927  0.041913  0.875  0.076048
M = 6, N = 15   Pref5   0.317  0.10605   0.941  0.068548  0.445  0.16406   0.835  0.10416
                Pref6   0.348  0.091141  0.851  0.19547   0.663  0.25399   0.783  0.12876
DTLZ3           Pref4   0.645  0.21293   0.841  0.10888   0.879  0.10268   0.468  0.16645
M = 6, N = 15   Pref5   0.578  0.22459   0.839  0.14821   0.699  0.19416   0.519  0.23087
                Pref6   0.284  0.11645   0.616  0.35243   0.637  0.27713   0.582  0.33687
DTLZ4           Pref4   0.882  0.044672  0.973  0.023594  0.94   0.036818  0.968  0.027406
M = 6, N = 15   Pref5   0.713  0.074543  0.954  0.037476  0.793  0.074095  0.977  0.02406
                Pref6   0.231  0.2369    0.9    0.15797   0.517  0.092262  0.82   0.16221
DTLZ5           Pref4   0.981  0.013703  0.64   0.31205   0.409  0.12793   0.464  0.099242
M = 6, N = 15   Pref5   0.993  0.012517  0.317  0.23665   0.44   0.22435   0.238  0.05808
                Pref6   0.813  0.056184  0.875  0.049721  0.872  0.054119  0.855  0.066708

performance of the K-solution sets in terms of proximity to the Pareto-front. The results obtained from applying the nondominance ratio to the ten sets of solutions obtained with the three preference strategies are given in Table I.

Observe that while both the penalty rank and constraint methods result in nondominated solutions in the desired region to the exclusion of solutions elsewhere, the biased crowding distance method, depending on the magnitude of the multiplication factor, could admit solutions in other regions. The difference is that undesired nondominated regions are sampled more sparsely. This effect is illustrated in Figs. 4 and 5 for problems ZDT1 and ZDT3.

This behavior benefits the crowding distance approach in problems like DTLZ5, as is evident from Table I and discussed further below.

The numbers in Table I demonstrate that even for 2-objective problems a focus on a subset of the Pareto-front is beneficial to convergence. The baseline algorithm was consistently outperformed by the preference-based algorithm.
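As a concrete illustration, a nondominance-ratio style of assessment can be sketched in Python. The exact formulation used in the paper may differ; `nondominance_ratio` below is a hypothetical helper that counts the fraction of a candidate set left nondominated when pooled with a reference set, assuming minimization throughout.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominance_ratio(candidate_set, reference_set):
    """Fraction of candidate solutions not dominated by any member of the
    union of candidate and reference sets (one common definition; the
    paper's exact formulation may differ)."""
    union = candidate_set + reference_set
    nondom = [a for a in candidate_set
              if not any(dominates(b, a) for b in union if b is not a)]
    return len(nondom) / len(candidate_set)
```

A higher ratio indicates that more of the obtained set survives comparison against the reference, which is why the metric tracks proximity to the Pareto-front.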

542 IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 14, NO. 4, AUGUST 2010

Fig. 4. Solutions obtained for problem ZDT1 (prefs #1–3) by constraint-based, rank penalty, and biased crowding methods. (a), (d), (g) Constraint-based methods. (b), (e), (h) Rank penalty methods. (c), (f), (i) Biased crowding methods.

The constraint-based approach, it is noted, did not fare as well as the other two. Penalty rank yielded the best results for 2-objective problems.

Except for the case of DTLZ5 with an invalid preference profile, the constraint and penalty rank approaches alternately performed best in the higher-objective cases. The constraint-based approach emerged as the best in more cases, while the biased crowding approach degenerated in performance as the number of objectives increased.

The observation could be explained by the different ways in which the three preference incorporation strategies take preference information into account. The constraint-based approach enforces preference satisfaction as the first criterion, ahead of Pareto-ranking and crowding distance, in the computation of fitness. The selection pressure in effect guides a candidate solution toward the feasible region first. The strategy restricts the search by precluding or hampering the generation of a good solution within the desired region from another good solution outside the desired region. Additionally, the comparison operator is guaranteed to preserve the Pareto-dominance relation between a pair of solutions Xi ≺ Xj only if Xi satisfies the preference-based inequalities, or both Xi and Xj do not satisfy the inequalities. The resulting impairment to the search is pronounced in the 2-D ZDT2. The fact that the constraint-based approach outperformed the baseline in other problems (given feasible preferences) suggests that the impact of non-preservation of Pareto-dominance in certain scenarios is minor.
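The preference-first ordering just described can be sketched as a comparator. This is a hypothetical rendering of the strategy, not the authors' exact operator; `satisfies_pref` and `dominates` are assumed callables standing in for the paper's preference-based inequalities and the Pareto-dominance test.

```python
def preference_first_compare(x, y, satisfies_pref, dominates):
    """Constraint-style comparator: preference feasibility is tested
    before Pareto-dominance. Returns -1 if x is preferred, 1 if y is
    preferred, 0 if incomparable (hypothetical sketch of the strategy
    described in the text)."""
    fx, fy = satisfies_pref(x), satisfies_pref(y)
    if fx and not fy:
        return -1            # only x satisfies the preference inequalities
    if fy and not fx:
        return 1             # only y satisfies them
    # both feasible or both infeasible: fall back to Pareto-dominance
    if dominates(x, y):
        return -1
    if dominates(y, x):
        return 1
    return 0
```

Because dominance is consulted only when both solutions fall on the same side of the feasibility test, the operator preserves dominance exactly in the cases the text identifies.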

In higher dimensional cases, the weakness of the constraint-based approach was more than compensated for by the advantage of restricting the search within a subset of the objective space. A trend could in fact be noted in the convergence of the approach as the size of the desired region decreases. Application of the simple distance metric [64] demonstrates consistently improved proximity to the optimal front as the size of the desired subset decreases for the 6-objective problems (see Fig. 6).

Fig. 5. Solutions obtained for problem ZDT3 (prefs #1–3) by constraint-based, rank penalty, and biased crowding methods. (a), (d), (g) Constraint-based methods. (b), (e), (h) Rank penalty methods. (c), (f), (i) Biased crowding methods.

The generational distance metric in [64] measures the average distance between members of the obtained solution set and the nearest Pareto-optimal solution in a reference set. The rank penalty approach modifies the Pareto-rank, effectively placing preference satisfaction second in the lexicographic order in the evaluation of fitness. Like the biased crowding approach, the method guarantees Pareto-dominance preservation. Results demonstrated that penalty rank almost always achieved better convergence than the biased crowding approach.
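The generational distance metric can be computed along the following lines. This is a sketch of Van Veldhuizen's metric as paraphrased in the text (average distance to the nearest reference point), not the authors' implementation; published variants sometimes use a root-mean-square aggregation instead.

```python
import math

def generational_distance(obtained, reference):
    """Average Euclidean distance from each obtained objective vector to
    its nearest neighbour in a Pareto-optimal reference set."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(a, r) for r in reference) for a in obtained) / len(obtained)
```

A smaller value means the obtained set lies closer to the reference front, which is the sense in which the text reports "improved proximity to the optimal."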

While the constraint-based and penalty rank methods effectively excluded nondominated solutions that did not satisfy the preference-based inequalities, the biased crowding approach increased the density of solutions in the desired region relative to the density of solutions elsewhere.

The main advantage of the biased crowding distance approach over the other two is most apparent in problem DTLZ5: invalid preference was simply disregarded. In higher objective cases, however, this advantage vanished. The performance of the approach in DTLZ5 with infeasible preferences deteriorated as the number of objectives increased. Within the larger objective space, the biasing strength (10) more effectively limited the search to suboptimal regions of the space. Decreasing the biasing strength (to 1.5) produced results comparable to the baseline algorithm for the 4- and 6-objective DTLZ5. Such a low biasing strength, however, admits too many solutions outside the desired region for lower objective problems with feasible preferences. In higher objective problems with feasible preferences, the weaker biasing strength impaired convergence significantly.
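The biasing mechanism discussed above can be caricatured as follows. This is a deliberate simplification under stated assumptions (crowding distances outside the desired region are divided by the bias factor), not the paper's exact biased crowding formula; the factor values echo the 10 and 1.5 discussed in the text.

```python
def biased_crowding(raw_crowding, in_desired_region, bias_strength=10.0):
    """Hypothetical sketch of biased crowding: solutions outside the
    desired region keep a (much) smaller crowding distance, so they are
    retained but sampled more sparsely rather than excluded outright."""
    return [d if inside else d / bias_strength
            for d, inside in zip(raw_crowding, in_desired_region)]
```

With a large factor, out-of-region solutions are crowded out of the archive almost entirely; with a factor near 1 the method degenerates toward the unbiased baseline, matching the trade-off the text describes.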

Overall, preference-based focus is beneficial to the convergence of NSGA-II. In higher-objective problems, the constraint-based approach constitutes the best of the three proposed integration methods. Preference formulated a priori may be invalid for problems with a discontinuous Pareto-front or a Pareto-front of dimensionality M − 2 or lower. The condition can easily be detected by noting the absence of nondominated solutions that satisfy the preference-based inequalities, and remedied by terminating the algorithm and requesting the user to refine his/her preference. Alternatively, once the condition is detected, the algorithm could be allowed to degenerate to a general-purpose MOEA.

Fig. 6. Distance: constraint-based approach. (a) DTLZ1. (b) DTLZ2. (c) DTLZ3. (d) DTLZ4.

TABLE II

Preference Settings

Pref. | Asserted Relations | Chains
#1 | f1 P f2 | {[f1]; [f2]}
#2 | f1 I f2 | {[f1, f2]}
#3 | f2 P f1 | {[f2]; [f1]}
#4 | f2 P f1, f2 P f3 | {[f2]; [f1]}, {[f2]; [f3]}
#5 | f3 P f2, f2 P f1 | {[f3]; [f2]; [f1]}
#6 | f2 I f1, f1 I f3 | {[f1, f2, f3]}
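The chain notation used in the preference settings of Table II can be encoded directly, for instance as an ordered list of grades. Both this encoding and the `more_important` helper are illustrative assumptions, not the authors' data structure; earlier grades are taken to be more important, and objectives sharing a grade are equally important.

```python
# Hypothetical encoding of the chains in Table II: a chain is a list of
# grades, each grade a set of equally important objectives.
pref1 = [{"f1"}, {"f2"}]          # f1 P f2        -> {[f1]; [f2]}
pref2 = [{"f1", "f2"}]            # f1 I f2        -> {[f1, f2]}
pref5 = [{"f3"}, {"f2"}, {"f1"}]  # f3 P f2, f2 P f1

def more_important(chain, a, b):
    """True if objective a occupies a strictly earlier (more important)
    grade than objective b in the given chain."""
    grade = {f: i for i, g in enumerate(chain) for f in g}
    return grade[a] < grade[b]
```

Under this encoding, strict preference corresponds to separate grades and indifference to shared membership in one grade, mirroring the ";" versus "," distinction in the table.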

VIII. Conclusion

This paper presented a mathematical model of the notion of relative importance of objectives for inclusion into EMOO. The resulting preference representation framework is better suited than classical models for evolutionary optimization in that it utilizes the ability of EAs to obtain multiple viable solutions in one run. It also includes general-purpose EMOO as a special case where specific preference is absent. An a priori elicitation and interactive modification algorithm assist the human decision maker in explicating coherent preference.

Three methods of integrating preference information into NSGA-II have been described and extensively studied empirically. Results demonstrate the viability of preference-based focus, especially for high-dimensional problems. Incorporation of explicated preference in other state-of-the-art MOEAs has also been briefly touched on. Further work on this and the generalization of the preference model to accommodate varying degrees of preference constitute possible future research directions.

Appendix

Promote(fm)
    For (i = 1 to K)
        If (fm ∈ ci & Head(ci) ≠ fm)
            l = Post(fm, ci);
            For (n = l to λ_ci)
                j = Index(n, ci);
                DecrementGrade(fj, ci);
            UpdateMatrix(ci, B^R(t));

Demote(fm)
    For (i = 1 to K)
        If (fm ∈ ci & Tail(ci) ≠ fm)
            l = Post(fm, ci);
            For (n = l + 1 to λ_ci)
                j = Index(n, ci);
                DecrementGrade(fj, ci);
            UpdateMatrix(ci, B^R(t));

Tradeoff(fm, fn)
    /* case a */
    If (b^R(t)_mn == 1)
        print("fm is already prioritized over fn");
        Exit;
    /* case b */
    If (b^R(t)_mn == 0 & b^R(t)_nm == 0)
        Insert({[fm]; [fn]}, R(t));
        Exit;
    For (i = 1 to K)
        /* case c */
        If (b^R(t)_mn == 2 & Post(fm, ci) ≠ 0)
            IncrementGrade(fn, ci);
            b^R(t)_mn = 1;
        /* case d */
        Else If (b^R(t)_nm == 1 & Post(fm, ci) ≠ 0 & Post(fn, ci) ≠ 0)
            l = Post(fn, ci);
            For (j = l + 1 to λ_ci)
                k = Index(j, ci);
                DecrementGrade(fk, ci);
            If (Grade(fm, ci) == Grade(fn, ci))
                b^R(t)_mn = 2;

References

[1] K. Deb, Multiobjective Optimization Using Evolutionary Algorithms. Chichester, U.K.: Wiley, 2001, ch. 1, pp. 1–12.

[2] C. A. C. Coello, Evolutionary Algorithms for Solving Multiobjective Problems. New York: Kluwer, 2002, ch. 1, p. 13.

[3] J. Braga and C. Starmer, “Preference anomalies, preference elicitation, and the discovered preference hypothesis,” Environ. Res. Econ., vol. 32, no. 1, pp. 55–89, 2005.

[4] C. A. C. Coello, “20 years of evolutionary multiobjective optimization: What has been done and what remains to be done,” in Computational Intelligence: Principles and Practice, G. Y. Yen and D. B. Fogel, Eds. Los Alamitos, CA: IEEE Comput. Soc. Press, 2006, pp. 73–88.

[5] C. M. Fonseca and P. J. Fleming, “Multiobjective optimization and multiple constraint handling with evolutionary algorithms. Part I: A unified formulation,” IEEE Trans. Syst., Man, Cybern. A: Syst. Humans, vol. 28, no. 1, pp. 26–37, Jan. 1998.

[6] K. C. Tan, E. F. Khor, T. H. Lee, and R. Sathikannan, “An evolutionary algorithm with advanced goal and priority specification for multiobjective optimization,” J. Artif. Intell. Res., vol. 18, pp. 183–215, 2003.

[7] K. Deb, “Solving goal programming problems using multiobjective genetic algorithms,” in Proc. IEEE Congr. Evol. Comput., 1999, pp. 77–84.

[8] K. Deb and J. Sundar, “Reference point based multiobjective optimization using evolutionary algorithms,” in Proc. Genetic Evol. Comput. Conf. (GECCO), Seattle, WA, 2006, pp. 635–642.

[9] L. Rachmawati and D. Srinivasan, “A multiobjective evolutionary algorithm with controllable focus on the knees of the Pareto-front,” IEEE Trans. Evol. Comput., vol. 13, no. 4, pp. 810–824, 2009.

[10] O. Schutze, M. Laumanns, and C. A. C. Coello, “Approximating the knee of an MOP with stochastic search algorithms,” in Proc. Parallel Problem Solving Nature, 2008, pp. 795–804.

[11] L. Rachmawati and D. Srinivasan, “A multiobjective genetic algorithm with controllable convergence on knee regions,” in Proc. IEEE Congr. Evol. Comput., 2006, pp. 1916–1923.

[12] J. Branke, K. Deb, H. Dierolf, and M. Osswald, “Finding knees in multiobjective optimization,” in Proc. Parallel Problem Solving Nature, 2004, pp. 722–731.

[13] H. Trautmann and J. Mehnen, “Preference-based Pareto optimization in certain and noisy environments,” Eng. Optimization, vol. 41, no. 1, pp. 23–38, 2009.

[14] E. Zitzler and S. Kunzli, “Indicator-based selection in multiobjective search,” in Proc. 8th Int. Conf. Parallel Problem Solving Nature (PPSN), Lecture Notes in Computer Science, vol. 3242, Sep. 2004, pp. 832–842.

[15] T. L. Saaty, The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. New York: McGraw-Hill, 1980, ch. 1, pp. 3–34.

[16] R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley, 1976, ch. 4, pp. 131–158.

[17] B. Roy, “Classement et choix en presence de points de vue multiples (la methode Electre),” Revue Francaise d’Informatique et de Recherche Operationnelle, vol. 8, pp. 57–75, 1968.

[18] B. Roy, “How outranking relations helps multiple criteria decision making,” in Multicriteria Decision Making, J. Cochrane and M. Zeleny, Eds. Columbia, SC: Univ. South Carolina, 1973, pp. 179–201.

[19] B. Roy and J. M. Skalka, “Electre IS: Aspects methodologiques et guide d’utilisation,” Document du LAMSADE, vol. 30, p. 125, 1984.

[20] B. Roy and P. Bertier, “La methode Electre II: Une application au media planning,” in OR’72, M. Ross, Ed. Amsterdam, The Netherlands: North Holland, 1973, pp. 291–302.

[21] B. Roy, “Electre III: Un algorithme de classement fonde sur une representation floue des preferences en presence des criteres multiples,” Cahiers du CERO, vol. 20, no. 1, pp. 3–24, 1978.

[22] B. Roy and J.-C. Hugonnard, “Ranking of suburban line extension projects on the Paris metro system by a multicriteria method,” Transportation Res., vol. 16A, no. 4, pp. 301–312, 1982.

[23] Ph. Vincke, “Outranking approach,” in Multicriteria Decision Making, Advances in MCDM Models, Algorithms, Theory and Applications, T. Gal, T. Stewart, and T. Hanne, Eds. Dordrecht, The Netherlands: Kluwer Academic, 1999, pp. 11.1–11.29.

[24] J. P. Brans, B. Mareschal, and P. Vincke, “A new family of outranking methods in multicriteria analysis,” in Proc. Oper. Res., 1984, pp. 408–421.

[25] J. P. Brans and B. Mareschal, “PROMETHEE V: MCDM problems with additional segmentation constraints,” INFOR, vol. 30, no. 2, pp. 85–96, 1992.

[26] P. Vincke, “Preferences and numbers,” in A-MCD-A: Aide Multi Critere a la Decision-Multiple Criteria Decision Aiding, A. Colorni, M. Paruccini, and B. Roy, Eds. Luxembourg: European Commission Joint Research Center, 2001, pp. 343–354.

[27] R. L. Solso, Cognitive Psychology. Boston, MA: Allyn & Bacon, 1988, ch. 8, p. 216.

[28] C. A. C. Coello, “Handling preferences in evolutionary multiobjective optimization: A survey,” in Proc. IEEE Congr. Evol. Comput., vol. 1, Jul. 2000, pp. 30–37.

[29] J. Branke, T. Kaußler, and H. Schmeck, “Guidance in evolutionary multiobjective optimization,” Adv. Eng. Softw., vol. 32, no. 6, pp. 499–507, 2001.

[30] Y. Jin and B. Sendhoff, “Incorporation of fuzzy preferences into evolutionary multiobjective optimization,” in Proc. 4th Asia Pacific Conf. Simulated Evol. Learning, vol. 1, Singapore, Nov. 2002, pp. 26–30.

[31] D. Cvetkovic and I. C. Parmee, “Use of preferences for GA-based multiobjective optimization,” in Proc. Genetic Evol. Comput. Conf., San Mateo, CA: Morgan Kaufmann, 1999, pp. 1504–1509.

[32] D. Cvetkovic and I. C. Parmee, “Genetic-algorithm-based multiobjective optimization and conceptual engineering design,” in Proc. IEEE Congr. Evol. Comput., 1999, pp. 29–36.

[33] D. Cvetkovic and I. C. Parmee, “The application of genetic algorithms and preferences in engineering design,” Plymouth Eng. Design Center, Univ. Plymouth, Plymouth, U.K., Tech. Rep. PEDC-01-2000, Feb. 2000.

[34] D. Cvetkovic and I. C. Parmee, “Preferences and their application in evolutionary multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 6, no. 1, pp. 42–57, 2002.

[35] G. W. Greenwood, X. Hu, and J. G. D’Ambrosio, “Fitness functions for multiple objective optimization problems: Combining preferences with Pareto rankings,” in Foundations of Genetic Algorithms. San Mateo, CA: Morgan Kaufmann, 2006, pp. 437–455.

[36] J. Branke and K. Deb, “Integrating user preferences into evolutionary multiobjective optimization,” Kanpur Genetic Algorithms Laboratory, Rep. 2004004, May 2004.

[37] B. Roy and V. Mousseau, “A theoretical framework for analysing the notion of relative importance of criteria,” J. Multi-Criteria Decision Anal., vol. 5, no. 1, pp. 145–159, 1996.

[38] S. Zionts and J. Wallenius, “An interactive programming method for solving the multiple criteria problem,” Manage. Sci., vol. 22, no. 6, pp. 652–663, Feb. 1976.

[39] S. Zionts and J. Wallenius, “An interactive multiple objective linear programming method for a class of underlying nonlinear utility functions,” Manage. Sci., vol. 29, no. 5, pp. 519–529, May 1983.

[40] K. M. Miettinen, Nonlinear Multiobjective Optimization. Norwell, MA: Kluwer, 1999, ch. 4, p. 120.

[41] P. J. Bentley and J. P. Wakefield, “Finding acceptable Pareto-optimal solutions using multiobjective genetic algorithms,” in Soft Computing in Engineering Design and Manufacturing. Berlin, Germany: Springer-Verlag, 1997, part 5, pp. 231–240.

[42] Y. Jin, T. Okabe, and B. Sendhoff, “Adapting weighted aggregation for multiobjective evolution strategies,” in Proc. 1st Int. Conf. Evol. Multi-Criterion Optimization, Lecture Notes in Computer Science, Mar. 2001, pp. 96–110.

[43] B. Rekiek, “Assembly line design: Multiple objective grouping evolutionary algorithm and the balancing of mixed-model hybrid assembly line,” Ph.D. thesis, Department of Applied Mechanics, CAD Unit, Universite Libre de Bruxelles, Brussels, Belgium, 2001.

[44] P. De Lit, P. Latinne, B. Rekiek, and A. Delchambre, “Assembly planning with an ordering evolutionary algorithm,” Int. J. Production Res., vol. 39, no. 16, pp. 3623–3640, 2001.

[45] R. F. Coelho, H. Bersini, and P. Bouillard, “Parametrical mechanical design with constraints and preferences: Application to a purge valve,” Comput. Methods Applicat. Mech. Eng., vol. 192, nos. 39–40, pp. 4355–4378, Sep. 2003.

[46] M. Öztürk, A. Tsoukiàs, and Ph. Vincke, “Preference modelling,” in Multiple Criteria Decision Analysis: State of the Art Surveys, M. Ehrgott, S. Greco, and J. Figueira, Eds. New York: Springer, 2005, pp. 27–73.

[47] F. Kursawe, “A variant of evolution strategies for vector optimization,” in Proc. Parallel Problem Solving Nature, 1990, pp. 193–197.

[48] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989, ch. 5, p. 201.

[49] C. M. Fonseca and P. J. Fleming, “An overview of evolutionary algorithms in multiobjective optimization,” Evol. Comput., vol. 3, no. 1, pp. 1–16, 1995.

[50] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257–271, Nov. 1999.

[51] J. Knowles and D. Corne, “The Pareto archived evolution strategy: A new baseline algorithm for multiobjective optimization,” in Proc. IEEE Congr. Evol. Comput., 1999, pp. 98–105.

[52] D. W. Corne, N. R. Jerram, J. D. Knowles, and M. J. Oates, “PESA-II: Region-based selection in evolutionary multiobjective optimization,” in Proc. Genetic Evol. Comput. Conf. (GECCO), 2001, pp. 283–290.

[53] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.

[54] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization,” in Evolutionary Methods for Design, Optimisation and Control with Application to Industrial Problems (EUROGEN 2001), K. C. Giannakoglou et al., Eds., 2002, pp. 95–100.

[55] K. Deb, M. Mohan, and S. Mishra, “Evaluating the ε-domination based multiobjective evolutionary algorithm for a quick computation of Pareto-optimal solutions,” Evol. Comput., vol. 13, no. 4, pp. 501–525, Dec. 2005.

[56] C. L. Mumford, “Simple population replacement strategies for a steady-state multiobjective evolutionary algorithm,” in Proc. Genetic Evol. Comput. Conf. (GECCO), 2004, pp. 1389–1400.

[57] E. Zitzler, “Evolutionary algorithms for multiobjective optimization: Methods and applications,” Ph.D. dissertation, Dept. Inform. Tech. Elect. Eng., Swiss Federal Inst. Technol. (ETH), Zurich, Switzerland, Nov. 1999.

[58] E. Zitzler, K. Deb, and L. Thiele, “Comparison of multiobjective evolutionary algorithms: Empirical results,” Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000.

[59] G. Rudolph and A. Agapie, “Convergence properties of some multiobjective evolutionary algorithms,” in Proc. IEEE Congr. Evol. Comput., vol. 2, 2000, pp. 1010–1016.

[60] L. R. Gardiner and D. Vanderpooten, “Interactive multiple criteria procedures: Some reflections,” in Multicriteria Analysis, J. Climaco, Ed. Heidelberg, Germany: Springer-Verlag, 1997, pp. 290–301.

[61] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, “Scalable test problems for evolutionary multiobjective optimization,” in Evolutionary Multiobjective Optimization: Theoretical Advances and Applications, A. Abraham, L. Jain, and R. Goldberg, Eds. New York: Springer, 2005, pp. 105–145.

[62] K. Deb and S. Agrawal, “Simulated binary crossover for continuous search space,” Complex Syst., vol. 9, pp. 115–148, Apr. 1995.

[63] C. K. Goh and K. C. Tan, “A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 14, no. 1, pp. 103–127, Feb. 2009.

[64] D. A. Van Veldhuizen, “Multiobjective evolutionary algorithms: Classifications, analyses, and new innovations,” Ph.D. dissertation, Graduate School Eng., Air Force Inst. Technol., Air Univ., Jun. 1999.

Lily Rachmawati (S’09) received the B.E. and Ph.D. degrees in electrical engineering from the National University of Singapore, Singapore, in 2004 and 2009, respectively.

She is currently a Research Fellow with the Department of Electrical and Computer Engineering, National University of Singapore. Her research interests include multiobjective optimization and evolutionary computation.

Dipti Srinivasan (M’91–SM’02) received the B.E. degree in electrical engineering from the National Institute of Technology, India, in 1986, and the M.Eng. and Ph.D. degrees in electrical engineering from the National University of Singapore (NUS), Singapore, in 1991 and 1994, respectively.

From 1994 to 1995, she was with the Computer Science Division, University of California, Berkeley, as a Postdoctoral Researcher. In June 1995, she joined the faculty of the Department of Electrical and Computer Engineering, NUS, where she is currently an Associate Professor. From 1998 to 1999, she was a Visiting Faculty Member with the Department of Electrical and Computer Engineering, Indian Institute of Science, Bangalore, India. She is the author and coauthor of over 170 technical papers in various international refereed journals and conferences. Her research interests include neural networks, evolutionary computation, intelligent multiagent systems, and the application of computational intelligence techniques to engineering optimization, planning and control problems in intelligent transportation systems, and power systems.

Dr. Srinivasan is a Member of the Institution of Engineers, Singapore.