Genetic algorithm based multi-objective reliability optimization in interval environment


Computers & Industrial Engineering 62 (2012) 152–160


Genetic algorithm based multi-objective reliability optimization in interval environment

Laxminarayan Sahoo a,*, Asoke Kumar Bhunia b, Parmad Kumar Kapur c

a Department of Mathematics, Raniganj Girls' College, Raniganj 713 347, India
b Department of Mathematics, The University of Burdwan, Burdwan 713 104, India
c Department of Operational Research, University of Delhi, Delhi 110 007, India

Article info

Article history:
Received 26 September 2010
Received in revised form 15 August 2011
Accepted 8 September 2011
Available online 25 September 2011

Keywords:
Reliability optimization
Genetic algorithm
Multi-objective
Interval number
Ideal objective
Utopian objective

0360-8352/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.cie.2011.09.003

* Corresponding author.
E-mail addresses: [email protected] (L. Sahoo), [email protected] (A.K. Bhunia), [email protected] (P.K. Kapur).

Abstract

In most of the real world design or decision making problems involving reliability optimization, there is simultaneous optimization of multiple objectives such as the maximization of system reliability and the minimization of system cost, weight and volume. In this paper, our goal is to solve the constrained multi-objective reliability optimization problem of a system with interval valued reliability of each component by maximizing the system reliability and minimizing the system cost under several constraints. For this purpose, four different multi-objective optimization problems have been formulated with the help of interval mathematics and our newly proposed order relations of interval valued numbers. Then these optimization problems have been solved by an advanced genetic algorithm and the concept of Pareto optimality. Finally, to illustrate and also to compare the results, a numerical example has been solved.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Most of the real-world design or decision making problems involving reliability optimization require the simultaneous optimization of more than one objective function. Mostly, reliability optimization problems have been formulated by researchers as single objective optimization problems. An early study in this field was reported by Sakawa (2002). For the simultaneous maximization of system reliability and minimization of system cost in reliability allocation, he formulated and solved a multi-objective problem using a surrogate worth trade-off method (Sakawa, 1978). Around the same time, Inagaki, Inoue, and Akashi (1978) solved a different problem by maximizing the system reliability and minimizing the system cost and weight by implementing an interactive optimization method. To develop an overview of the trend of research in this area, one may refer to the works of Park (1987), Dhingra (1992), Rao and Dhingra (1992), Srinivas and Deb (1994), Ravi, Reddy, and Zimmermann (2000), Huang, Tian, and Zuo (2005), Coit and Konak (2006) and others. In recent years, Taboada and Coit (2007) proposed a new method based on the sequential combination of multi-objective evolutionary algorithms and data clustering on the prospective solutions. In addition, Taboada, Baheranwala, Coit, and Wattanapongsakorn (2007) proposed two different approaches to reduce the size of the Pareto optimal set for multi-objective reliability optimization design problems.



Out of those two approaches, the first developed a pseudo-ranking scheme to select solutions for the decision maker according to their objective function priorities, while the second demonstrated the use of data mining clustering techniques to group the data, with the implementation of the k-means algorithm to find clusters of similar solutions. In the same year, Ramirez-Marquez and Coit (2007) proposed a multi-state component criticality analysis for the improvement of reliability in multi-state systems. In the same area, Taboada, Espiritu, and Coit (2008a) presented an extension and applied a previously developed multi-objective evolutionary algorithm for solving the design allocation problems of multi-state series-parallel systems for power systems. Taboada, Espiritu, and Coit (2008b) solved multiple objective multi-state reliability optimization design problems by maximizing system reliability and minimizing both the system cost and weight. In the year 2009, Li, Liao, and Coit (2009) proposed a two-stage approach for multi-objective decision making with applications to system reliability optimization. Ramirez-Marquez and Rocco (2010) developed a new evolutionary optimization technique for multi-state two-terminal reliability allocation in multi-objective problems. With a view to identifying the combination of component failures that provides the maximum reduction of network performance, Rocco, Ramirez-Marquez, Salazar, and Hernandez (2010) studied the vulnerability analysis of a complex network.


Several researchers have solved reliability optimization problems with a single objective (Aggarwal & Gupta, 2005; Coit & Smith, 1996; Gopal, Aggarwal, & Gupta, 1980; Ha & Kuo, 2006; Hikita, Nakagawa, & Narihisa, 1992; Kim & Yum, 1993; Kuo & Prasad, 2000; Kuo, Prasad, Tillman, & Hwang, 2001; Misra & Sharma, 1991; Nakagawa & Nakashima, 1977). Most of the reliability optimization problems with single or multiple objectives are based on the assumption of fixed/constant reliabilities of components which lie between zero and one. However, in real-life situations, the reliability of an individual component may not be fixed. It may vary due to several reasons, such as improper storage facilities, the human factor and other factors relating to the environment. There is no technology by which different components can be produced with exactly identical reliabilities. So, the reliability of each component is imprecise and may be treated as a positive imprecise number instead of a fixed real number. To tackle the problem with such imprecise numbers, generally stochastic, fuzzy and fuzzy-stochastic approaches are applied and the corresponding problems are converted to deterministic problems for the purpose of solving. In the stochastic approach, the parameters are assumed to be random variables with known probability distributions. In the fuzzy approach, the parameters, constraints and goals are considered as fuzzy sets with known membership functions or fuzzy numbers. On the other hand, in the fuzzy-stochastic approach, some parameters are viewed as fuzzy sets/fuzzy numbers and others as random variables. However, it is a formidable task for a decision maker to specify the appropriate membership function for a fuzzy approach, the probability distribution for a stochastic approach, and both for the fuzzy-stochastic approach. So, to avoid these difficulties in handling imprecise numbers by different approaches, one may use an interval number to represent the imprecise number, as this is the most straightforward representation among the alternatives. Studies of system reliability considering the component reliabilities as imprecise have already been initiated by researchers such as Coolen and Newby (1994), Utkin and Gurov (1999, 2001), Gupta, Bhunia, and Roy (2009), Bhunia, Sahoo, and Roy (2010) and Sahoo, Bhunia, and Roy (2010). In single objective optimization, one attempts to obtain the best design or decision, which is usually the global minimum or the global maximum depending on whether the optimization problem is of minimization or maximization type. On the other hand, for multiple objectives, there may not exist one solution which is best (global minimum or maximum) with respect to all the objectives. In multi-objective optimization, there exists a set of solutions which are superior to the rest of the solutions in the search space when all the objectives are considered, but are inferior to other solutions in the space in one or more objectives (not all). These solutions are known as Pareto optimal solutions or nondominated solutions (Srinivas & Deb, 1994), and the rest of the solutions are known as dominated solutions. Since none of the solutions in the nondominated set can be considered absolutely better than another, any one of them is an acceptable solution. As the reliability of each component is interval valued, the system reliability is also interval valued. In this paper, a GA-based approach has been presented for solving the multi-objective reliability optimization problem with interval objectives. The objectives considered here are the maximization of the system reliability and the minimization of the system cost.
Also, we have considered the cost coefficients as interval valued. For this purpose, several multi-objective reliability optimization problems with interval valued objectives have been formulated and solved. In this connection, we have also developed the definition of Pareto optimality in an interval environment. To obtain the optimal solution of the multi-objective optimization problem, we have converted it into a single objective constrained optimization problem. Further, the reduced optimization problem has been converted into an unconstrained optimization problem by using the penalty function technique.

For solving such typical problems, we have developed a real coded elitist GA with tournament selection, uniform crossover and one-neighborhood mutation. Finally, to illustrate the different approaches based on different multi-objective optimization techniques, a numerical example has been solved, and sensitivity analyses have been carried out graphically to investigate the overall performance of the proposed GA based penalty technique for solving multi-objective optimization problems.

The organization of the paper is as follows. In Section 2, the assumptions and notations are given. The details of finite interval mathematics and interval order relations are given in Section 3. Section 4 presents the details of multi-objective optimization and problem formulation in an interval environment. In Section 5, a genetic algorithm based constraint handling approach is discussed. Section 6 presents a numerical example and sensitivity analysis to illustrate the proposed GA based penalty technique. In Section 7, concluding remarks are presented to draw conclusions from this research work.

2. Assumptions and notations

The following assumptions and notations have been used in the entire paper.

2.1. Assumptions

(i) Reliability of each component is imprecise and interval valued.
(ii) Failures of components are mutually statistically independent.
(iii) The system will not be damaged or fail due to failed components.
(iv) All redundancy is active and there is no provision for repair.
(v) The components as well as the system have two different states, viz. operating state and failure state.
(vi) The cost coefficients are imprecise and interval valued.

2.2. Notations

Ai(x) = [fiL(x), fiR(x)] : interval valued objective function
zi* = [ziL*, ziR*] : ith component of the interval valued ideal objective vector
zi** = [ziL**, ziR**] : ith component of the interval valued utopian objective vector
RS = [RSL, RSR] : interval valued system reliability
CS = [CSL, CSR] : interval valued system cost
[CiL, CiR] : interval valued cost of the ith component
ri = [riL, riR] : interval valued reliability of the ith component
n : number of redundant components
m : number of constraints
x : (x1, x2, ..., xn)T
gj(x) : constraint function, j = 1, 2, ..., m
Pi : constant associated with volume
Wi : constant associated with weight
xi : number of redundancies of the ith component (xi ≥ 1)
R : set of real numbers
Rn : n-dimensional Euclidean space
S : feasible region
[RSL*, RSR*] : optimal value of RS = [RSL, RSR]
[CSL*, CSR*] : optimal value of CS = [CSL, CSR]
[RSL**, RSR**] : value of RS (= [RSL, RSR]) for an infeasible solution
[CSL**, CSR**] : value of CS (= [CSL, CSR]) for an infeasible solution
ei = [eiL, eiR] : small positive and computationally significant interval number
[fauxL, fauxR] : interval valued auxiliary objective function
p_size : population size
p_cross : probability of crossover
p_mute : probability of mutation
m_gen : maximum number of generations

3. Finite interval mathematics and interval order relations

An interval number A is a closed interval denoted by A = [aL, aR] and defined by A = [aL, aR] = {x : aL ≤ x ≤ aR, x ∈ R}, where R is the set of all real numbers and aL and aR are the left and right limits, respectively. An interval A can also be denoted by A = ⟨ac, aw⟩ and defined as A = ⟨ac, aw⟩ = {x : ac − aw ≤ x ≤ ac + aw, x ∈ R}, where ac and aw are, respectively, the center and radius of the interval A, that is, ac = (aL + aR)/2 and aw = (aR − aL)/2. Actually, every real number x ∈ R can be expressed as an interval [x, x] with zero radius.

Here, we shall give some basic formulas of interval mathematics. Let A = [aL, aR] and B = [bL, bR] be two intervals.

Addition: A + B = [aL, aR] + [bL, bR] = [aL + bL, aR + bR]
Subtraction: A − B = [aL, aR] − [bL, bR] = [aL − bR, aR − bL]
Scalar multiplication: αA = α[aL, aR] = [αaL, αaR] if α ≥ 0, and [αaR, αaL] if α < 0
Multiplication: A × B = [aL, aR] × [bL, bR] = [min(aLbL, aLbR, aRbL, aRbR), max(aLbL, aLbR, aRbL, aRbR)]
Division: A/B = A × (1/B) = [aL, aR] × [1/bR, 1/bL], provided 0 ∉ [bL, bR]
Exponential: exp(A) = [exp(aL), exp(aR)]
Logarithm: log(A) = [log(aL), log(aR)]
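For illustration, these elementary operations translate directly into code. The following is a minimal Python sketch under the assumption that an interval is stored by its left and right limits; the class name Interval and its methods are ours, not part of the paper.

```python
import math

# Minimal sketch of basic interval arithmetic; class and method names are
# illustrative only.
class Interval:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def __add__(self, other):                      # [aL+bL, aR+bR]
        return Interval(self.left + other.left, self.right + other.right)

    def __sub__(self, other):                      # [aL-bR, aR-bL]
        return Interval(self.left - other.right, self.right - other.left)

    def scale(self, alpha):                        # scalar multiplication
        if alpha >= 0:
            return Interval(alpha * self.left, alpha * self.right)
        return Interval(alpha * self.right, alpha * self.left)

    def __mul__(self, other):                      # min/max of the four products
        p = (self.left * other.left, self.left * other.right,
             self.right * other.left, self.right * other.right)
        return Interval(min(p), max(p))

    def __truediv__(self, other):                  # A x [1/bR, 1/bL], 0 not in B
        if other.left <= 0 <= other.right:
            raise ZeroDivisionError("0 lies in the divisor interval")
        return self * Interval(1.0 / other.right, 1.0 / other.left)

    def exp(self):                                 # [exp(aL), exp(aR)]
        return Interval(math.exp(self.left), math.exp(self.right))

    def log(self):                                 # [log(aL), log(aR)], aL > 0
        return Interval(math.log(self.left), math.log(self.right))

    def __repr__(self):
        return f"[{self.left}, {self.right}]"

# Example: [1, 2] * [3, 4] = [3, 8]
print(Interval(1, 2) * Interval(3, 4))
```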

3.1. Integral power of an interval

According to Hansen and Walster (2004), the integral power of an interval is defined by
$$A^n = [a_L, a_R]^n = \begin{cases} [1, 1] & \text{if } n = 0 \\ [a_L^n, a_R^n] & \text{if } a_L \ge 0 \text{ or if } n \text{ is odd} \\ [a_R^n, a_L^n] & \text{if } a_R \le 0 \text{ and } n \text{ is even} \\ [0, \max(a_L^n, a_R^n)] & \text{if } a_L \le 0 \le a_R \text{ and } n > 0 \text{ is even} \end{cases}$$
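A direct coding of this case analysis might look as follows; this is only a sketch, and the function name ipow is ours.

```python
# Sketch of the integral power of an interval [aL, aR], following the case
# analysis of Hansen and Walster (2004) quoted above.
def ipow(a_l, a_r, n):
    if n == 0:
        return (1.0, 1.0)
    if a_l >= 0 or n % 2 == 1:        # aL >= 0, or n odd
        return (a_l ** n, a_r ** n)
    if a_r <= 0:                       # aR <= 0 and n even
        return (a_r ** n, a_l ** n)
    # aL <= 0 <= aR and n > 0 even
    return (0.0, max(a_l ** n, a_r ** n))

# Example: [-2, 3]^2 = [0, 9]
print(ipow(-2.0, 3.0, 2))
```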

3.2. nth root of an interval

Karmakar, Mahato, and Bhunia (2009) defined the nth root of an interval A = [aL, aR] as follows:
$$\sqrt[n]{[a_L, a_R]} = \begin{cases} [\sqrt[n]{a_L}, \sqrt[n]{a_R}] & \text{if } a_L \ge 0 \text{ or if } n \text{ is odd} \\ [0, \sqrt[n]{a_R}] & \text{if } a_L \le 0,\; a_R \ge 0 \text{ and } n \text{ is even} \\ \phi & \text{if } a_R < 0 \text{ and } n \text{ is even} \end{cases}$$
where φ denotes the empty interval.

The rational power of an interval A = [aL, aR] is defined as follows:
$$A^{p/q} = \left(A^p\right)^{1/q}, \quad \text{or equivalently,} \quad A^{p/q} = \exp\left(\frac{p}{q}\log A\right)$$

3.3. Interval power of an interval

Let A = [aL, aR] and B = [bL, bR] be two intervals. Then
$$A^B = [a_L, a_R]^{[b_L, b_R]} = \begin{cases} \left[e^{\min(b_L \log a_L,\, b_L \log a_R,\, b_R \log a_L,\, b_R \log a_R)},\; e^{\max(b_L \log a_L,\, b_L \log a_R,\, b_R \log a_L,\, b_R \log a_R)}\right] & \text{if } a_L \ge 0 \\ \text{a complex interval} & \text{if } a_L < 0 \end{cases}$$
In particular,
$$\exp(B) = [e, e]^{[b_L, b_R]} = \left[e^{\min(b_L \log e,\, b_R \log e)},\; e^{\max(b_L \log e,\, b_R \log e)}\right] = [e^{b_L}, e^{b_R}] = [\exp(b_L), \exp(b_R)]$$

3.4. Mean of interval numbers

According to Bhunia et al. (2010), the mean of the n interval numbers xi = [xiL, xiR], i = 1, 2, ..., n, is given by
$$\bar{x} = [\bar{x}_L, \bar{x}_R] = \left[\frac{1}{n}\sum_{i=1}^{n} x_{iL},\; \frac{1}{n}\sum_{i=1}^{n} x_{iR}\right]$$

3.5. Order relation of interval numbers

According to assumption (i), the objective function of the redundancy allocation problem would be interval valued. So, to find the optimal solution of the said problem, the order relations of interval numbers play an important role in decision making. For this purpose, we have proposed a new definition of order relations of interval numbers.

Let A = [aL, aR] and B = [bL, bR] be two intervals. Then these two intervals may be of the following three types:

Type-1: The two intervals are disjoint.
Type-2: The two intervals are partially overlapping.
Type-3: One of the intervals contains the other one.

It is to be noted that the intervals A = [aL, aR] and B = [bL, bR] are equal in the case of fully overlapping intervals, that is, A = B if aL = bL and aR = bR.

Mahato and Bhunia (2006) proposed definitions of order relations for maximization problems in the context of the optimistic and pessimistic decision maker's point of view. However, in their definitions for pessimistic decision making, for Type-3 intervals, optimistic decisions sometimes have to be considered. To overcome this situation, we have proposed a general definition of order relations irrespective of the optimistic as well as the pessimistic decision maker's point of view.

Definition 3.1. Let A = [aL, aR] = ⟨ac, aw⟩ and B = [bL, bR] = ⟨bc, bw⟩ be two intervals. Then, for maximization problems, the order relation >max is defined as

(i) A >max B ⇔ ac > bc for Type-1 and Type-2 intervals,
(ii) A >max B ⇔ either (ac ≥ bc and aw < bw) or (ac ≥ bc and aL > bL) for Type-3 intervals.

According to this definition, the interval A is accepted for the maximization case. Clearly, the order relation >max is reflexive and transitive but not symmetric.


Definition 3.2. Let A = [aL, aR] = ⟨ac, aw⟩ and B = [bL, bR] = ⟨bc, bw⟩ be two intervals. Then, for minimization problems, the order relation <min is defined as

(i) A <min B ⇔ ac < bc for Type-1 and Type-2 intervals,
(ii) A <min B ⇔ either (ac ≤ bc and aw < bw) or (ac ≤ bc and aL < bL) for Type-3 intervals.

According to this definition, the interval A is accepted for the minimization case. Clearly, the order relation <min is reflexive and transitive but not symmetric.
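For illustration, the two order relations can be sketched in code as below. This is a minimal sketch assuming intervals are given by their left and right limits; the function names gt_max and lt_min are ours.

```python
# Sketch of the order relations >max (Definition 3.1) and <min (Definition 3.2)
# for intervals A = [aL, aR] and B = [bL, bR]; names are illustrative only.

def center(a_l, a_r):
    return (a_l + a_r) / 2.0

def radius(a_l, a_r):
    return (a_r - a_l) / 2.0

def is_type3(a_l, a_r, b_l, b_r):
    # Type-3: one interval is contained in the other.
    return (a_l <= b_l and b_r <= a_r) or (b_l <= a_l and a_r <= b_r)

def gt_max(a_l, a_r, b_l, b_r):
    """A >max B: A is preferred for maximization."""
    ac, aw = center(a_l, a_r), radius(a_l, a_r)
    bc, bw = center(b_l, b_r), radius(b_l, b_r)
    if not is_type3(a_l, a_r, b_l, b_r):          # Type-1 or Type-2
        return ac > bc
    return (ac >= bc and aw < bw) or (ac >= bc and a_l > b_l)

def lt_min(a_l, a_r, b_l, b_r):
    """A <min B: A is preferred for minimization."""
    ac, aw = center(a_l, a_r), radius(a_l, a_r)
    bc, bw = center(b_l, b_r), radius(b_l, b_r)
    if not is_type3(a_l, a_r, b_l, b_r):          # Type-1 or Type-2
        return ac < bc
    return (ac <= bc and aw < bw) or (ac <= bc and a_l < b_l)

# Example: for maximization, [0.60, 0.80] is preferred over [0.50, 0.90]
# (Type-3 pair with equal centers but smaller width).
print(gt_max(0.60, 0.80, 0.50, 0.90))  # True
```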

4. Multi-objective optimization and problem formulation in interval environment

According to the existing literature, several methods have been developed for solving multi-objective optimization problems with non-interval valued objectives. However, to the best of our knowledge and belief, no one has developed techniques/methods for solving multi-objective optimization problems with interval valued objectives. In this section, we shall discuss solution methodologies/techniques for solving multi-objective optimization problems with interval valued objectives in several decision variables.

These types of multi-objective optimization problems can be written as
$$\begin{aligned} \text{Minimize} \quad & \{A_1(x), A_2(x), \ldots, A_k(x)\} \\ \text{subject to} \quad & x \in S \end{aligned}$$
where Ai(x) = [fiL(x), fiR(x)], i = 1, 2, ..., k, and S = {x : gj(x) ≤ 0, j = 1, 2, ..., m}.

Before discussing the solution methodologies for this optimization problem, we shall define Pareto optimality (with respect to a general decision maker's point of view), ideal objectives and different types of ideal objective vectors for the above problem.

Definition 4.1. A decision vector x* ∈ S is Pareto optimal if there does not exist another decision vector x ∈ S such that Ai(x) <min Ai(x*) for at least one index i, i.e., for Type-1 and Type-2 intervals, fiL(x) + fiR(x) < fiL(x*) + fiR(x*), and for Type-3 intervals, either (fiL(x) + fiR(x) ≤ fiL(x*) + fiR(x*)) and (fiR(x) − fiL(x) < fiR(x*) − fiL(x*)), or (fiL(x) + fiR(x) ≤ fiL(x*) + fiR(x*)) and fiL(x) < fiL(x*).

Definition 4.2. The (open) ball of radius δ > 0 centered at a point x* in the metric space Rn is defined as B(x*, δ) = {x ∈ Rn : d(x, x*) < δ}, where d is the distance function or metric.

Definition 4.3. A decision vector x* ∈ S is locally Pareto optimal if there exists δ > 0 such that x* is Pareto optimal in S ∩ B(x*, δ), where B(x*, δ) is an open ball of radius δ > 0 centered at the point x*.

Definition 4.4. A decision vector x* ∈ S is weakly Pareto optimal if there does not exist another decision vector x ∈ S such that Ai(x) <min Ai(x*) for all i = 1, 2, ..., k.

Definition 4.5. An objective vector minimizing each of the objective functions is called an ideal (or perfect) objective vector.

Definition 4.6. A utopian objective vector z** ∈ Rk is an infeasible objective vector whose components are formed by zi** = zi* − ei for all i = 1, 2, ..., k, where zi* is the ith component of the ideal objective vector and ei > 0 is a relatively small but computationally significant scalar.

Definition 4.7. Let ξ = {ξ1, ξ2, ..., ξn} and η = {η1, η2, ..., ηn} be any two vectors in Rn. Define the mappings dp : Rn × Rn → R and d∞ : Rn × Rn → R as follows:
$$d_p(\xi, \eta) = \left\{\sum_{i=1}^{n} |\xi_i - \eta_i|^p\right\}^{1/p}, \quad 1 \le p < \infty, \qquad d_\infty(\xi, \eta) = \max_{1 \le i \le n} |\xi_i - \eta_i|$$
Here, dp for each 1 ≤ p < ∞ and d∞ are metrics on Rn.

Definition 4.8. Let lp, 1 ≤ p < ∞, be the set of all sequences ξ = {ξi} of real scalars such that $\sum_{i=1}^{\infty} |\xi_i|^p < \infty$. Define the mapping d : lp × lp → R by
$$d(\xi, \eta) = \left\{\sum_{i=1}^{\infty} |\xi_i - \eta_i|^p\right\}^{1/p}$$
where ξ = {ξi} and η = {ηi} are in lp. It can easily be proved that lp is a metric space.

According to the existing literature, there are several techniques for solving multi-objective optimization problems with non-interval valued objectives. In these techniques, the multi-objective optimization problems have been formulated as different types of problems. Some of these problems are as follows:

(i) Tchebycheff problem.
(ii) Weighted Tchebycheff problem.
(iii) Lexicographic problem.
(iv) Lexicographic weighted Tchebycheff problem.

Now, we shall formulate all these problems with interval valued objectives.

4.1. Tchebycheff problem

When p → ∞, the dp metric reduces to the Tchebycheff metric. The corresponding problem (called the Tchebycheff problem) with interval objectives is of the form:
$$\text{Minimize} \; \max_{i=1,2,\ldots,k} \left(|A_i(x) - z_i^*|\right) \quad \text{subject to } x \in S \qquad (1)$$

4.2. Weighted Tchebycheff problem

When p → ∞ and wi ≥ 0, the corresponding metric is called the weighted Tchebycheff metric, and the corresponding problem (called the weighted Tchebycheff problem) with interval objectives is of the form:
$$\begin{aligned} \text{Minimize} \quad & \max_{i=1,2,\ldots,k} \left(w_i |A_i(x) - z_i^*|\right) \\ \text{subject to} \quad & \sum_{i=1}^{k} w_i = 1 \quad \text{and} \quad x \in S \end{aligned} \qquad (2)$$

4.3. Lexicographic problem

In lexicographic ordering, the decision maker sorts the objective functions according to their absolute importance; this means that a more important objective is infinitely more important than a less important one. After ordering, the most important objective function is optimized subject to the given constraints. If this problem has a unique solution, it is the solution of the whole multi-objective optimization problem. Otherwise, the second most important objective function is optimized. If this problem has a unique solution, it is the solution of the original problem, and so on.

[Fig. 4.1. An n-stage series system, with stages 1, 2, ..., i, ..., n and redundancy levels x1, x2, ..., xi, ..., xn.]

Let the objective functions be arranged according to the lexicographic order from the most important A1(x) to the least important Ak(x). Here, we write the lexicographic problem as

$$\text{Lex Minimize} \quad \{A_1(x), A_2(x), A_3(x), \ldots, A_k(x)\} \quad \text{subject to } x \in S \qquad (3)$$

Here the term 'Lex Minimize' means first arranging the objective functions according to their importance and then minimizing.

4.4. Lexicographic weighted Tchebycheff problem

If p → ∞ and wi ≥ 0, the metric dp is called the weighted Tchebycheff metric and the corresponding lexicographic weighted Tchebycheff problem is as follows:
$$\begin{aligned} \text{Lex Minimize} \quad & \left\{\max_{i=1,2,\ldots,k}\left(w_i |A_i(x) - z_i^*|\right),\; \sum_{i=1}^{k}\left(A_i(x) - z_i^{**}\right)\right\} \\ \text{subject to} \quad & \sum_{i=1}^{k} w_i = 1 \quad \text{and} \quad x \in S \end{aligned} \qquad (4)$$
where zi** = zi* − ei is the ith component of the utopian objective vector, which is an infeasible objective vector, and ei, i = 1, 2, ..., k, are relatively small positive interval numbers that are computationally significant.

4.5. Problem formulation

Let us consider a system consisting of n subsystems in series in which the ith (1 ≤ i ≤ n) subsystem consists of xi components in parallel (see Fig. 4.1), and the reliability of each component as well as the cost of resources are interval valued. Such a system is called a series-parallel system or n-stage series system. In this system, the system reliability and also the system cost are interval valued. Assuming all the components in the ith subsystem to be identical, the system reliability RS is given by

$$R_S(x) = [R_{SL}(x), R_{SR}(x)] = \prod_{i=1}^{n}\left[1 - (1 - r_{iL})^{x_i},\; 1 - (1 - r_{iR})^{x_i}\right]$$
where $R_{SL}(x) = \prod_{i=1}^{n}\left[1 - (1 - r_{iL})^{x_i}\right]$ and $R_{SR}(x) = \prod_{i=1}^{n}\left[1 - (1 - r_{iR})^{x_i}\right]$.

Hence, the problem is to determine the number of redundant components xi, i = 1, 2, ..., n, by maximizing the system reliability [RSL, RSR] and minimizing the system cost [CSL, CSR], subject to the given constraints. The problem can be written as

$$\begin{aligned} \text{Maximize} \quad & \prod_{i=1}^{n}\left[1 - (1 - r_{iL})^{x_i},\; 1 - (1 - r_{iR})^{x_i}\right] \\ \text{Minimize} \quad & [C_{SL}(x), C_{SR}(x)] \\ \text{subject to the constraints} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, m. \end{aligned} \qquad (5)$$

The above problem is a multi-objective optimization problem with interval valued objectives.
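As a small illustration of how the interval valued system reliability in problem (5) can be evaluated for a given redundancy vector x, consider the sketch below; the function name and data are illustrative assumptions, not the authors' code.

```python
# Sketch: interval valued reliability of an n-stage series-parallel system,
# RS(x) = prod_i [1 - (1 - riL)^xi, 1 - (1 - riR)^xi].
def system_reliability(r, x):
    """r: list of (riL, riR) component reliabilities; x: redundancy levels."""
    rs_l, rs_r = 1.0, 1.0
    for (r_il, r_ir), x_i in zip(r, x):
        rs_l *= 1.0 - (1.0 - r_il) ** x_i      # lower limit uses riL
        rs_r *= 1.0 - (1.0 - r_ir) ** x_i      # upper limit uses riR
    return rs_l, rs_r

# Hypothetical data: two subsystems with interval component reliabilities.
print(system_reliability([(0.78, 0.82), (0.84, 0.85)], [2, 1]))
```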

The Tchebycheff problem with interval objectives corresponding to (5) is of the form:

$$\begin{aligned} \text{Minimize} \quad & \max\left(\left|\left[-R_{SR} - R_{SR}^*,\; -R_{SL} - R_{SL}^*\right]\right|,\; \left|\left[C_{SL} - C_{SR}^*,\; C_{SR} - C_{SL}^*\right]\right|\right) \\ \text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, m \end{aligned} \qquad (6)$$

The weighted Tchebycheff problem with interval objectives is of the form:

$$\begin{aligned} \text{Minimize} \quad & \max\left(w\left|\left[-R_{SR} - R_{SR}^*,\; -R_{SL} - R_{SL}^*\right]\right|,\; (1-w)\left|\left[C_{SL} - C_{SR}^*,\; C_{SR} - C_{SL}^*\right]\right|\right) \\ \text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, m \end{aligned} \qquad (7)$$

Let the objective functions be arranged according to the lexicographic order from the most important [−RSR, −RSL] to the less important [CSL, CSR]. In this technique, the multi-objective optimization problem (5) reduces to

$$\begin{aligned} \text{Lex Minimize} \quad & \left([-R_{SR}, -R_{SL}],\; [C_{SL}, C_{SR}]\right) \\ \text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, m \end{aligned} \qquad (8)$$

The lexicographic weighted Tchebycheff problem is of the form:

$$\begin{aligned} \text{Lex Minimize} \quad & \Big\{\max\left(w\left|\left[-R_{SR} - R_{SR}^*,\; -R_{SL} - R_{SL}^*\right]\right|,\; (1-w)\left|\left[C_{SL} - C_{SR}^*,\; C_{SR} - C_{SL}^*\right]\right|\right), \\ & \quad \left[-R_{SR} - R_{SR}^{**},\; -R_{SL} - R_{SL}^{**}\right] + \left[C_{SL} - C_{SR}^{**},\; C_{SR} - C_{SL}^{**}\right]\Big\} \\ \text{subject to} \quad & g_j(x) \le 0, \quad j = 1, 2, \ldots, m \end{aligned} \qquad (9)$$
where ([RSL**, RSR**], [CSL**, CSR**]) is the utopian objective vector, which is an infeasible objective vector. This vector is equivalent to ([RSL* − e1R, RSR* − e1L], [CSL* − e2R, CSR* − e2L]), where [e1L, e1R] and [e2L, e2R] are relatively small positive interval numbers that are computationally significant.

5. Genetic algorithm based constraints handling approach

Clearly, the optimization problems (6)-(9) are constrained optimization problems with interval valued objectives. To solve these problems, the question of how to handle the constraints becomes highly relevant. Over the last few years, several techniques have been proposed to handle the constraints in genetic algorithms for solving optimization problems with non-interval/fixed valued objectives (Deb, 2000). Recently, Gupta et al. (2009) and Bhunia et al. (2010) solved such optimization problems using the Big-M penalty method. In this method, the given constrained optimization problem with an interval valued fitness function is converted to an interval valued unconstrained optimization problem by penalizing with a large positive number M, which can be written in interval form as [M, M]; the penalty is called the Big-M penalty. In this work, we have used the Big-M penalty technique.


Let us consider the constrained optimization problem
$$\text{Maximize} \left(-[f_{auxL}, f_{auxR}]\right) \quad \text{subject to the constraints} \quad g_j(x) \le 0, \quad j = 1, 2, \ldots, m.$$
The form of the Big-M penalty is as follows:
$$\text{Maximize } [\hat{f}_{auxL}, \hat{f}_{auxR}] = -[f_{auxL}, f_{auxR}] + h(x) \qquad (10)$$
where
$$h(x) = \begin{cases} [0, 0] & \text{if } x \in S \\ [f_{auxL}, f_{auxR}] + [-M, -M] & \text{if } x \notin S \end{cases}$$
and S = {x : gj(x) ≤ 0, j = 1, 2, ..., m} is the feasible space. Here (−[fauxL, fauxR]) is the interval valued auxiliary objective function. Problem (10) is an integer non-linear unconstrained optimization problem with an interval objective in the n integer variables x1, x2, ..., xn. For solving this problem, we have developed a real coded genetic algorithm (GA) with advanced operators for integer variables.
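A minimal sketch of evaluating the Big-M penalized fitness of Eq. (10) is given below; following the intent of the method, an infeasible solution is simply assigned the fitness [-M, -M]. The function names, the constraint representation and the value of M are illustrative assumptions.

```python
# Sketch: Big-M penalized interval fitness for the maximization problem (10).
# 'constraints' is a list of callables g_j with g_j(x) <= 0 meaning feasible.
BIG_M = 1.0e6  # illustrative choice; the paper leaves the value of M open

def penalized_fitness(f_aux_l, f_aux_r, constraints, x):
    """Return the penalized interval fitness; -[fauxL, fauxR] = [-fauxR, -fauxL]."""
    feasible = all(g(x) <= 0 for g in constraints)
    if feasible:
        return (-f_aux_r, -f_aux_l)   # h(x) = [0, 0]
    return (-BIG_M, -BIG_M)           # infeasible: fitness dominated by -M

# Example with a single constraint g(x) = x - 10 <= 0:
# penalized_fitness(2.0, 3.0, [lambda x: x - 10], 4)  ->  (-3.0, -2.0)
```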

Genetic algorithm is a well-known computerized stochastic search method based on Charles Darwin's evolutionary theory of 'survival of the fittest' and natural genetics (Goldberg, 1989). GA has successfully been applied to optimization problems in different fields, like engineering design, optimal control, transportation and assignment problems, job scheduling, inventory control and other real-life decision-making problems. The most fundamental idea of a genetic algorithm is to imitate the natural evolution process artificially, in which populations undergo continuous changes through genetic operators, like crossover, mutation and selection. A genetic algorithm can easily be implemented with the help of computer programming. In particular, it is very useful for solving complicated optimization problems which cannot be solved easily by direct or gradient based mathematical techniques. It is very effective for handling large-scale, real-life, discrete and continuous optimization problems without making unrealistic assumptions and approximations. Keeping the imitation of natural evolution as the foundation, a genetic algorithm can be designed appropriately and modified to exploit special features of the problem to be solved. This algorithm starts with an initial population of possible solutions (called individuals) to a given problem, where each individual is represented using some form of encoding as a chromosome. These chromosomes are evaluated for their fitness. Based on their fitness, chromosomes in the population are selected for reproduction, and the selected individuals are manipulated by two known genetic operations, crossover and mutation. The crossover operation is applied to create offspring from a pair of selected chromosomes. The mutation operation is used for a little modification/change to reproduce offspring. The repeated application of genetic operators to the relatively fit chromosomes results in an increase in the average fitness of the population over generations and the identification of improved solutions to the problem under investigation. This process is applied iteratively until the termination criterion is satisfied. The procedural algorithm of the working principle of GA is as follows:

Algorithm
Step-1: Set the population size (p_size), probability of crossover (p_cross), probability of mutation (p_mute), maximum number of generations (m_gen) and bounds of the variables.
Step-2: t = 0 [t represents the number of the current generation].
Step-3: Initialize the chromosomes of the population P(t) [P(t) represents the population at the tth generation].
Step-4: Evaluate the fitness function of each chromosome of P(t), considering the objective function as the fitness function.
Step-5: Find the best chromosome from the population P(t).
Step-6: t is increased by unity.
Step-7: If the termination criterion is satisfied, go to Step-14; otherwise, go to the next step.
Step-8: Select the population P(t) from the population P(t − 1) of the earlier generation by the tournament selection process.
Step-9: Alter the population P(t) by the crossover, mutation and elitism operators.
Step-10: Evaluate the fitness function value of each chromosome of P(t).
Step-11: Find the best chromosome from P(t).
Step-12: Compare the best chromosomes of P(t) and P(t − 1) and store the better one.
Step-13: Go to Step-6.
Step-14: Print the best chromosome (which is the solution of the optimization problem).
Step-15: End.
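The steps above can be collected into a driver loop such as the schematic Python sketch below. The helper routines named here (init_population, evaluate, constraint_violation, tournament_select, uniform_crossover, one_neighborhood_mutation, apply_elitism) are our assumptions, sketched roughly in the remainder of this section; evaluate is assumed to return a fitness value that can be compared (for interval fitness, the comparison would use the >max relation of Definition 3.1).

```python
# Schematic outline of the GA in Steps 1-15; helper functions are assumed.
def run_ga(p_size, p_cross, p_mute, m_gen, bounds):
    population = init_population(p_size, bounds)                         # Step-3
    fitness = [evaluate(x) for x in population]                          # Step-4
    violation = [constraint_violation(x) for x in population]
    best_x, best_f = max(zip(population, fitness), key=lambda t: t[1])   # Step-5
    for _ in range(m_gen):                                               # Steps 6-7
        population = tournament_select(population, fitness, violation)   # Step-8
        population = uniform_crossover(population, p_cross)              # Step-9
        population = one_neighborhood_mutation(population, p_mute, bounds)
        fitness = [evaluate(x) for x in population]                      # Step-10
        violation = [constraint_violation(x) for x in population]
        population, fitness = apply_elitism(population, fitness, best_x, best_f)
        gen_x, gen_f = max(zip(population, fitness), key=lambda t: t[1]) # Step-11
        if gen_f > best_f:                                               # Step-12
            best_x, best_f = gen_x, gen_f
    return best_x, best_f                                                # Step-14
```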

To implement the above GA for the proposed model, the following basic components are to be considered: (i) GA parameters (population size, probability of crossover and probability of mutation), (ii) chromosome representation, (iii) initialization of the population, (iv) evaluation of the fitness function, (v) selection process and (vi) genetic operators (crossover, mutation and elitism).

There are several GA parameters, viz. population size, maximum number of generations, crossover rate (the probability of crossover) and mutation rate (the probability of mutation). There is no hard and fast rule for selecting the population size for a GA (i.e. for determining how large it should be). If the population size is very large, then storing the data in intermediate steps of the GA may give rise to some difficulties at the time of execution. When the population size is very small, then some genetic operators do not work properly. There is no clear indication regarding the selection of the most appropriate value for the maximum number of generations. It varies from problem to problem and depends upon the number of genes (variables) of a chromosome. From natural genetics, it is obvious that the rate of crossover is always greater than the rate of mutation. Generally, the crossover rate varies from 0.6 to 0.95 whereas the mutation rate varies from 0.05 to 0.2. Sometimes the mutation rate is taken as 1/n, where n is the number of genes (variables) of the chromosome.

Appropriate representation of a chromosome is an important issue in the application of GA for solving optimization problems. The decision regarding the appropriate representation of a chromosome (individual) imposes a tough situation on the users of GA. There are different types of representations, like binary, real, octal and hexadecimal coding, available in the existing literature. The use of traditional binary representations is not effective in many real-world non-linear problems. Since the proposed multi-objective optimization problem is non-linear, to overcome this difficulty, real coding representation is used. In this representation, for a given problem with n decision variables, an n-component vector x = (x1, x2, ..., xn) is used as a chromosome to represent a solution to the problem. A chromosome, denoted as vk (k = 1, 2, ..., p_size), is an ordered list of n genes, vk = {vk1, vk2, ..., vki, ..., vkn}.

After representation of the chromosomes, the next step is to initialize the chromosomes that will take part in the artificial genetics. To initialize the population, first of all we have to find the independent variables and their bounds for the given problem. Then the initialization process produces p_size chromosomes in which every gene, which represents a decision variable, is initialized by generating a random number between the bounds of that decision variable.
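A minimal sketch of this initialization for integer decision variables with given bounds (names are illustrative):

```python
import random

# Sketch: generate p_size chromosomes, each gene drawn uniformly at random
# between the lower and upper bound of the corresponding decision variable.
def init_population(p_size, bounds):
    """bounds: list of (lower, upper) integer bounds, one pair per gene."""
    return [[random.randint(lo, hi) for (lo, hi) in bounds]
            for _ in range(p_size)]

# Example: 4 chromosomes for a 5-variable problem with 1 <= xi <= 4.
print(init_population(4, [(1, 4)] * 5))
```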

Evaluation function plays the same role in a GA as that which the biological and physical environment plays in the natural evolution process. After initialization of the chromosomes of potential solutions, we need to see how good they are relative to one another. Therefore, we have to calculate the fitness value of each chromosome. In our work, the value of the objective function of the reduced unconstrained optimization problem corresponding to the chromosome is considered as the fitness value of that chromosome.

The selection operator, which is the first operator in artificial genetics, plays an important role in GA. This selection process is based on Darwin's principle of natural evolution, 'survival of the fittest'. The primary objective of this process is to select the above-average individuals/chromosomes from the population according to the fitness value of each chromosome and eliminate the rest of the individuals/chromosomes. There are several methods for implementing the selection process. In this work, we have used tournament selection with replacement as the selection operator, in which two individuals are drawn from the population and compared on the basis of the magnitude of their fitness. The following assumptions for this selection procedure are to be considered:

(i) when both individuals/chromosomes are feasible, the one with the better fitness value is selected,
(ii) when one individual/chromosome is feasible and the other is infeasible, the feasible one is selected,
(iii) when both individuals/chromosomes are infeasible with unequal constraint violations, the chromosome with the smaller constraint violation is selected,
(iv) when both individuals/chromosomes are infeasible with equal constraint violations, either individual/chromosome is selected.
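These rules can be sketched as the comparison routine below, assuming each individual has an associated fitness value and a total constraint-violation measure; the names and data layout are illustrative, not the authors' code.

```python
import random

# Sketch of binary tournament selection (with replacement) using rules (i)-(iv);
# 'fitness' and 'violation' are parallel lists, violation[i] == 0 meaning the
# ith individual is feasible.
def tournament_winner(i, j, fitness, violation):
    fi, fj = violation[i] == 0, violation[j] == 0
    if fi and fj:                                  # (i) both feasible
        return i if fitness[i] >= fitness[j] else j
    if fi != fj:                                   # (ii) only one feasible
        return i if fi else j
    if violation[i] != violation[j]:               # (iii) smaller violation wins
        return i if violation[i] < violation[j] else j
    return random.choice([i, j])                   # (iv) tie: pick either

def tournament_select(population, fitness, violation):
    idx = range(len(population))
    chosen = [tournament_winner(random.choice(idx), random.choice(idx),
                                fitness, violation)
              for _ in range(len(population))]
    return [population[i] for i in chosen]
```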

The exploration and exploitation of the solution space is made possible by exchanging genetic information of the current chromosomes. After the selection process, the other genetic operators, crossover and mutation, are applied to the resulting chromosomes, i.e., those which have survived. Crossover is an operator that creates new individuals/chromosomes (offspring) by recombining the features of both parent solutions. It operates on two or more parent solutions at a time and produces offspring for the next generation. In this operation, an expected [p_cross × p_size] number of chromosomes will take part (× and [·] denote the product and the integral value, respectively). Here, the uniform crossover operation has been used. The different steps of this operator are as follows:

Step-1: Find the integral value of p_cross × p_size and store it in N.
Step-2: Select two chromosomes vk and vi randomly from the population.
Step-3: Compute the components v̄kj and v̄ij (j = 1, 2, ..., n) of the two offspring by either v̄kj = vkj − η and v̄ij = vij + η if vkj > vij, or v̄ij = vij − η and v̄kj = vkj + η otherwise, where η is a random integer between 0 and |vkj − vij|, j = 1, 2, ..., n.
Step-4: Repeat Step-2 and Step-3 N/2 times.
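A sketch of this crossover for integer-coded chromosomes (names illustrative; not the authors' code):

```python
import random

# Sketch of the uniform crossover of Steps 1-4: for each gene j a random
# integer eta in [0, |vkj - vij|] pulls the two parents towards each other.
def uniform_crossover(population, p_cross):
    n_pairs = int(p_cross * len(population)) // 2       # Steps 1 and 4
    offspring = [list(ch) for ch in population]
    for _ in range(n_pairs):
        k, i = random.sample(range(len(population)), 2)  # Step-2
        vk, vi = offspring[k], offspring[i]
        for j in range(len(vk)):                         # Step-3
            eta = random.randint(0, abs(vk[j] - vi[j]))
            if vk[j] > vi[j]:
                vk[j], vi[j] = vk[j] - eta, vi[j] + eta
            else:
                vi[j], vk[j] = vi[j] - eta, vk[j] + eta
    return offspring
```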

The aim of the mutation operator is to introduce random variations into the population, and it is used to prevent the search process from converging to local optima. This operator helps to regain the information lost in earlier generations, is responsible for the fine tuning capabilities of the system, and is applied to a single individual only. Usually, its rate is very low; otherwise it would defeat the order building generated through the selection and crossover operations. In this work, we have used one-neighborhood mutation. The different steps of this operator are as follows:

Step-1: Find the integral value of p_mute × p_size and store it in N.
Step-2: Select a chromosome vi randomly from the population.
Step-3: Select a particular gene vik (k = 1, 2, ..., n) of the chromosome vi for mutation; the domain of vik is [lik, uik].
Step-4: Create the new gene v'ik corresponding to the selected gene vik by the mutation process as follows:
$$v'_{ik} = \begin{cases} v_{ik} + 1 & \text{if } v_{ik} = l_{ik} \\ v_{ik} - 1 & \text{if } v_{ik} = u_{ik} \\ v_{ik} + 1 & \text{if a random digit is } 0 \\ v_{ik} - 1 & \text{if a random digit is } 1 \end{cases}$$
Step-5: Repeat Step-2 to Step-4 N times.
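A sketch of this one-neighborhood mutation (names illustrative):

```python
import random

# Sketch of one-neighborhood mutation: a selected gene is moved one step up
# or down, respecting its bounds [lik, uik].
def one_neighborhood_mutation(population, p_mute, bounds):
    n_mut = int(p_mute * len(population))            # Step-1
    for _ in range(n_mut):                            # Step-5
        chrom = random.choice(population)             # Step-2
        k = random.randrange(len(chrom))              # Step-3
        lo, hi = bounds[k]
        if chrom[k] == lo:                            # Step-4
            chrom[k] += 1
        elif chrom[k] == hi:
            chrom[k] -= 1
        else:
            chrom[k] += 1 if random.random() < 0.5 else -1
    return population
```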

In any generation of a GA, a situation sometimes arises in which the best chromosome is lost from the population when a new population is created by the crossover and mutation operations. To overcome this situation, the worst individual/chromosome of the current generation is replaced by the best individual/chromosome of the previous generation. Instead of a single chromosome, more than one chromosome may take part in this operation. This process is called elitism.
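A sketch of this elitism step (names illustrative):

```python
# Sketch of elitism: the worst chromosome of the current generation is
# replaced by the best chromosome preserved from the previous generation.
def apply_elitism(population, fitness, prev_best, prev_best_fitness):
    worst = min(range(len(population)), key=lambda idx: fitness[idx])
    population[worst] = prev_best
    fitness[worst] = prev_best_fitness
    return population, fitness
```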

6. Numerical example and sensitivity analysis

To illustrate the proposed techniques for solving the constrained multi-objective optimization problem with interval valued reliabilities of components by genetic algorithm, the following numerical example has been considered.

$$\begin{aligned} \text{Maximize} \quad & [R_{SL}, R_{SR}] = \prod_{i=1}^{5}\left[1 - [1 - r_{iR},\; 1 - r_{iL}]^{x_i}\right] \\ \text{Minimize} \quad & [C_{SL}, C_{SR}] = \sum_{i=1}^{5} [C_{iL}, C_{iR}]\left[x_i + \exp(x_i/4)\right] \end{aligned}$$
subject to the constraints
$$g_1(x) = \sum_{i=1}^{5} P_i x_i^2 - b_1 \le 0, \qquad g_2(x) = \sum_{i=1}^{5} W_i\left[x_i \exp(x_i/4)\right] - b_2 \le 0,$$
and xi being a nonnegative integer for i = 1, 2, 3, 4, 5, where the values of Pi, Wi, b1 and b2 are given in Table 6.1.

Table 6.1. Data for the example.

i     1             2             3             4             5
ri    [0.78, 0.82]  [0.84, 0.85]  [0.87, 0.91]  [0.63, 0.66]  [0.74, 0.76]
Ci    [6, 8]        [5, 8]        [3, 6]        [6, 9]        [3, 6]
Pi    1             2             3             4             2
Wi    7             8             8             6             9

b1 = 110, b2 = 200.
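For illustration, the objectives and constraints of this example, with the data of Table 6.1, might be coded as in the sketch below; the function names are ours. For the lexicographic solution (3, 2, 2, 3, 3) of Table 6.2, it reproduces approximately the reported interval reliability [0.8839, 0.9132] and cost [105.9448, 168.7731].

```python
import math

# Data of Table 6.1 (interval reliabilities and costs, volume/weight constants).
r = [(0.78, 0.82), (0.84, 0.85), (0.87, 0.91), (0.63, 0.66), (0.74, 0.76)]
C = [(6, 8), (5, 8), (3, 6), (6, 9), (3, 6)]
P = [1, 2, 3, 4, 2]
W = [7, 8, 8, 6, 9]
b1, b2 = 110, 200

def objectives(x):
    """Interval system reliability [RSL, RSR] and cost [CSL, CSR]."""
    rs_l = rs_r = 1.0
    cs_l = cs_r = 0.0
    for (r_l, r_r), (c_l, c_r), x_i in zip(r, C, x):
        rs_l *= 1.0 - (1.0 - r_l) ** x_i
        rs_r *= 1.0 - (1.0 - r_r) ** x_i
        factor = x_i + math.exp(x_i / 4.0)
        cs_l += c_l * factor
        cs_r += c_r * factor
    return (rs_l, rs_r), (cs_l, cs_r)

def feasible(x):
    g1 = sum(p * xi ** 2 for p, xi in zip(P, x)) - b1
    g2 = sum(w * xi * math.exp(xi / 4.0) for w, xi in zip(W, x)) - b2
    return g1 <= 0 and g2 <= 0

# The lexicographic solution reported in Table 6.2.
x = (3, 2, 2, 3, 3)
print(feasible(x), objectives(x))
```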

The proposed method/technique has been coded in the C programming language. The computational work has been done on a PC with an Intel Core 2 Duo 2.5 GHz processor in a Linux environment. For each case, 20 independent runs have been performed to calculate the best found system reliability, which is nothing but the optimal value of the system reliability. In this computation, the values of the genetic parameters, namely population size, mutation rate, crossover rate and maximum number of generations, have been taken as 100, 0.15, 0.85 and 150, respectively. The computational results are shown in Table 6.2.

Table 6.2. Comparative results obtained from the different proposed problems.

Problem                              x            Best R*           Mean value of R*   Best C*
Tchebycheff                          (2,1,2,1,3)  [0.4864, 0.5310]  [0.4864, 0.5310]   [73.3138, 120.6125]
Weighted Tchebycheff                 (1,2,2,1,3)  [0.4625, 0.5175]  [0.4625, 0.5175]   [71.9491, 120.6125]
Lexicographic                        (3,2,2,3,3)  [0.8839, 0.9132]  [0.8839, 0.9132]   [105.9448, 168.7731]
Lexicographic weighted Tchebycheff   (1,2,2,1,3)  [0.4625, 0.5175]  [0.4625, 0.5175]   [71.9491, 120.6125]

[Fig. 6.1. Population size vs. interval valued system reliability (lower and upper limits).]

[Fig. 6.2. Probability of crossover vs. interval valued system reliability (lower and upper limits).]

[Fig. 6.3. Probability of mutation vs. interval valued system reliability (lower and upper limits).]


From Table 6.2, the following observations can be made:

(i) The results obtained from the weighted Tchebycheff and lexicographic weighted Tchebycheff problems are the same.
(ii) The best found value and mean value of the system reliability R*, and the corresponding system cost C*, for the lexicographic problem are higher than those of the other problems.
(iii) All the results of the Tchebycheff problem are greater than those of the weighted Tchebycheff and lexicographic weighted Tchebycheff problems.

Hence, from the above observations, it can be concluded that the solution of the lexicographic problem is the best solution. In this case, the best found value as well as the mean value of the system reliability R* are far away from the corresponding results obtained from the other problems. If a decision maker is interested in a system with minimum system cost, then he/she may take the solution of either the weighted Tchebycheff or the lexicographic weighted Tchebycheff problem, as both provide the same solution.

To investigate the overall performance of the proposed GA based penalty technique for solving the lexicographic problem corresponding to the multi-objective optimization problem, sensitivity analyses have been carried out graphically on the system reliability with respect to the different GA parameters separately, taking the other parameters at their original values. These are shown in Figs. 6.1-6.3. From Fig. 6.1, it is observed that both bounds of the system reliability are the same for all values of the population size (p_size) greater than or equal to 60. This means that our proposed GA is stable when the population size exceeds 60. In Figs. 6.2 and 6.3, the values of the system reliability have been computed with respect to the probability of crossover (p_cross) within the range from 0.55 to 0.95 and the probability of mutation (p_mute) within the range from 0.05 to 0.25, respectively. From these figures, it is clear that the proposed GA is stable with respect to the probability of crossover as well as the probability of mutation.

7. Concluding remarks

In this paper, for the first time, we have formulated four different problems for solving constrained multi-objective optimization problems with interval objectives. Then we have solved these problems corresponding to the constrained multi-objective reliability optimization problem with the assumption that the reliability of each component as well as the cost coefficients are interval valued. These representations are more appropriate than other existing representations, like a random variable representation with known probability distribution, or a fuzzy set with known membership function or fuzzy number. The reduced problem has been converted to a single objective optimization problem using the Big-M penalty technique and solved by an advanced genetic algorithm.

In the interval approach, one may have questions regarding the selection of the interval values of the reliabilities of the different components and also regarding the advantages of using this approach.

This approach also has a drawback. As the objective functions (system reliability, system cost) are interval valued, the best found solutions obtained from the proposed method are also interval valued. In that case there is an uncertainty, which is nothing but the width of the interval. In other approaches the values of the objective functions are fixed.

In the stochastic approach, an imprecise parameter in any problem is represented by a random variable with a known probability distribution obtained by studying its past records. On the other hand, in the fuzzy approach, imprecise parameters are represented by fuzzy numbers (triangular, trapezoidal, parabolic, etc.) or fuzzy sets with membership functions. In these methods, the selection of the appropriate distribution function, type of fuzzy number or membership function is a formidable task. However, in the interval approach, by careful observation of past records (without any sophisticated mathematical/statistical treatment), one can easily select an appropriate interval to represent the imprecise parameter. As a result, it can be concluded that the interval approach is the best approach among the others, like the probabilistic, fuzzy and fuzzy-stochastic approaches.

There are several ways to increase the system reliability. Some of these are (i) to increase the reliability of each component of a system, (ii) to use parallel redundancy for the less reliable components, and (iii) to use standby redundancy. As a result, the system cost, volume and weight may increase. So, to optimize the system in the context of system reliability, system cost, volume and weight, the corresponding problem is formulated as a multi-objective optimization problem. In this type of problem, system reliability is maximized whereas the other measures, like system cost, volume and weight, are minimized. In the proposed work, we have considered only the maximization of system reliability and the minimization of system cost.

For solving the optimization problem, we have used the GA based Big-M penalty approach. In this approach, the value of the fitness function is not computed for an infeasible solution; instead, a value based on M is assigned as the fitness function value. Gupta et al. (2009) and Bhunia et al. (2010) applied this approach for solving constrained redundancy allocation problems. In their works, there is no indication regarding the value of M. However, for an infeasible solution, the value of M may be chosen depending on the fitness function values. A small value (in the case of a maximization problem) or a large value (in the case of a minimization problem) may be considered for the fitness of infeasible solutions when solving a constrained optimization problem.

For further research, one may apply these techniques for solving interval valued multi-objective optimization problems in various areas of the engineering disciplines and management science.

Acknowledgments

For this research, the second author would like to acknowledge the financial support provided by the Council of Scientific and Industrial Research (CSIR), New Delhi, India. The authors are also grateful to the anonymous referees for their constructive and helpful suggestions and comments to revise the paper in the present form.

References

Aggarwal, K. K., & Gupta, J. S. (2005). Penalty function approach in heuristic algorithms for constrained. IEEE Transactions on Reliability, 54(3), 549–558.

Bhunia, A. K., Sahoo, L., & Roy, D. (2010). Reliability stochastic optimization for a series system with interval component reliability via genetic algorithm. Applied Mathematics and Computation, 216, 929–939.

Coit, W., & Konak, A. (2006). Multiple weighted objectives heuristic for the redundancy allocation problem. IEEE Transactions on Reliability, 55(3), 551–558.

Coit, D. W., & Smith, A. E. (1996). Reliability optimization of series-parallel systems using a genetic algorithm. IEEE Transactions on Reliability, R-52(2), 254–260.

Coolen, F. P. A., & Newby, M. J. (1994). Bayesian reliability analysis with imprecise prior probabilities. Reliability Engineering and System Safety, 43, 75–85.

Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186, 311–338.

Dhingra, A. K. (1992). Optimal apportionment of reliability and redundancy in series systems under multiple objectives. IEEE Transactions on Reliability, 41(4), 576–582.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison Wesley.

Gopal, K., Aggarwal, K. K., & Gupta, J. S. (1980). A new method for solving reliability optimization problem. IEEE Transactions on Reliability, 29, 36–38.

Gupta, R. K., Bhunia, A. K., & Roy, D. (2009). A GA based penalty function technique for solving constrained redundancy allocation problem of series system with interval valued reliabilities of components. Journal of Computational and Applied Mathematics, 232, 275–284.

Ha, C., & Kuo, W. (2006). Reliability redundancy allocation: An improved realization for nonconvex nonlinear programming problems. European Journal of Operational Research, 171, 124–138.

Hansen, E., & Walster, G. W. (2004). Global optimization using interval analysis. New York: Marcel Dekker Inc.

Hikita, M., Nakagawa, K. K., & Narihisa, H. (1992). Reliability optimization of systems by a surrogate-constraints algorithm. IEEE Transactions on Reliability, R-41(3), 473–480.

Huang, H., Tian, Z., & Zuo, M. J. (2005). Intelligent interactive multiobjective optimization method and its application to reliability optimization. IIE Transactions, 37, 983–993.

Inagaki, T., Inoue, K., & Akashi, H. (1978). Interactive optimization of system reliability under multiple objectives. IEEE Transactions on Reliability, 27, 264–267.

Karmakar, S., Mahato, S., & Bhunia, A. K. (2009). Interval oriented multi-section techniques for global optimization. Journal of Computational and Applied Mathematics, 224, 476–491.

Kim, J. H., & Yum, B. J. (1993). A heuristic method for solving redundancy optimization problems in complex systems. IEEE Transactions on Reliability, 42(4), 572–578.

Kuo, W., & Prasad, V. R. (2000). An annotated overview of system reliability optimization. IEEE Transactions on Reliability, 49(2), 487–493.

Kuo, W., Prasad, V. R., Tillman, F. A., & Hwang, C. L. (2001). Optimal reliability design: Fundamentals and applications. Cambridge University Press.

Li, Z., Liao, H., & Coit, D. W. (2009). A two-stage approach for multi-objective decision making with applications to system reliability optimization. Reliability Engineering & System Safety, 94(10), 1585–1592.

Mahato, S. K., & Bhunia, A. K. (2006). Interval computing technique for global optimization. Applied Mathematics Research Express, 2006, 1–19.

Misra, K. B., & Sharma, U. (1991). An efficient algorithm to solve integer programming problems arising in system reliability design. IEEE Transactions on Reliability, R-40, 81–91.

Nakagawa, Y., & Nakashima, K. (1977). A heuristic method for determining optimal reliability allocation. IEEE Transactions on Reliability, R-26(3), 156–161.

Park, K. S. (1987). Fuzzy apportionment of system reliability. IEEE Transactions on Reliability, 36, 129–132.

Ramirez-Marquez, J. E., & Coit, D. (2007). Multi-state component criticality analysis for reliability improvement in multi-state systems. Reliability Engineering & System Safety, 92(12), 1608–1619.

Ramirez-Marquez, J. E., & Rocco, C. (2010). Evolutionary optimization technique for multi-state two-terminal reliability allocation in multi-objective problems. IIE Transactions, 42(8), 539–552.

Rao, S. S., & Dhingra, A. K. (1992). Reliability and redundancy apportionment using crisp and fuzzy multiobjective optimization approaches. Reliability Engineering and System Safety, 37, 253–261.

Ravi, V., Reddy, P. J., & Zimmermann, H. J. (2000). Fuzzy global optimization of complex system reliability. IEEE Transactions on Fuzzy Systems, 8, 241–248.

Rocco, C., Ramirez-Marquez, J. E., Salazar, D., & Hernandez, I. (2010). Implementation of multi-objective optimization for vulnerability analysis of complex networks. Journal of Risk and Reliability, 224(2), 87–95.

Sahoo, L., Bhunia, A. K., & Roy, D. (2010). A genetic algorithm based reliability redundancy optimization for interval valued reliabilities of components. Journal of Applied Quantitative Methods, 5(2), 270–287.

Sakawa, M. (1978). Multiobjective optimization by the surrogate worth trade-off method. IEEE Transactions on Reliability, 27(5), 311–314.

Sakawa, M. (2002). Genetic algorithms and fuzzy multiobjective optimization. Kluwer Academic Publishers.

Srinivas, N., & Deb, K. (1994). Multiobjective optimization using nondominated sorting in genetic algorithms. Journal of Evolutionary Computation, 2(3), 221–248.

Taboada, H., Baheranwala, F., Coit, D., & Wattanapongsakorn, N. (2007). Practical solutions for multi-objective optimization: An application to system reliability design problems. Reliability Engineering & System Safety, 92(3), 314–322.

Taboada, H., & Coit, D. (2007). Data clustering of solutions for multiple objective system reliability optimization problems. Quality Technology & Quantitative Management Journal, 4(2), 35–54.

Taboada, H., Espiritu, J., & Coit, D. (2008a). MOMS-GA: A multi-objective multi-state genetic algorithm for system reliability optimization design problems. IEEE Transactions on Reliability, 57(1), 182–191.

Taboada, H., Espiritu, J., & Coit, D. (2008b). Design allocation of multi-state series-parallel systems for power systems planning: A multiple objective evolutionary approach. Journal of Risk and Reliability, 222(3).

Utkin, L. V., & Gurov, S. V. (1999). Imprecise reliability of general structures. Knowledge and Information Systems, 1(4), 459–480.

Utkin, L. V., & Gurov, S. V. (2001). New reliability models based on imprecise probabilities. In C. Hsu (Ed.), Advanced signal processing technology (pp. 110–139). World Scientific.