
Probabilistic Constraints for Reliability Problems

Elsa Carvalho, Centro de Inteligência Artificial, Universidade Nova de Lisboa, Portugal

[email protected]

Jorge Cruz, Centro de Inteligência Artificial, Universidade Nova de Lisboa, Portugal

[email protected]

Pedro Barahona, Centro de Inteligência Artificial, Universidade Nova de Lisboa, Portugal

[email protected]

ABSTRACT
Reliability quantifies the ability of a system to perform its required function under stated conditions. The reliability of a decision is usually represented as the probability of an adequate functioning of the system, where both the decision and uncontrollable variables are subject to uncertainty. In this paper we extend previous work on probabilistic constraint programming to compute such reliability, assuming probability distributions for the uncertain values. Usually this computation is very hard and requires a number of approximations, thus the computed value may be far from the exact one. Traditional methods do not provide any guarantees with respect to the correctness of the results provided. We guarantee the computation of safe bounds for the reliability of a decision, which is of major relevance for problems dealing with non-linear constraints.

Categories and Subject Descriptors
I.2.m [Artificial Intelligence]: Miscellaneous

Keywords
Continuous constraints, uncertainty, reliability

1. INTRODUCTION
Reliability studies the ability of a system to perform its required function under stated conditions. For instance, civil engineering structures must operate under uncertain forces caused by environmental conditions, and materials display some variability due to manufacturing conditions. When modeling a design problem there is often a distinction between controllable (or decision) variables, representing alternative actions available to the decision maker, and uncontrollable variables (or states of nature) corresponding to external factors outside her reach. Uncertainty affects both types of variables. There can be variability in the actual values of the decision variables (e.g. the exact intended values of material properties may not be obtained due to limitations of the manufacturing process), or there can be uncertainty due to external factors that represent states of nature (e.g. earthquakes, wind). In both cases, it is important to quantify the reliability of a chosen design.


Reliability is often reported in terms of a probability (of the adequate functioning of the system), and its exact quantification requires the calculation of a multi-dimensional integral with a non-linear integration boundary. Because there is rarely a closed-form solution, this calculation is one of the major concerns of the classical approaches to solving reliability problems, which adopt approximation methods that rely on several simplifications of the original problem to compute a reliability estimate, often leading to inaccurate results (especially in highly non-linear problems).

Since decisions with a high reliability estimate but with high uncertainty on that estimate are not credible solutions, it is important to obtain bounds on such estimates. These safe bounds are not available with classical approaches, but they can be provided by constraint programming. Constraint programming is an adequate technology to study many types of systems that arise in engineering, biomedical [6] and other types of applications [15], namely those dealing with continuous domains. It has been focused on finding values for decision variables that satisfy the constraints of a problem, since these are the variables over which the problem solver has some degree of choice. However, decisions have to be made taking uncertainty into account. Since not all scenarios are equally likely, a purely safe approach is often inadequate, as it does not provide solutions with a high likelihood of success.

A probabilistic constraint framework that assumes likelihood distributions for uncontrollable variables was proposed in [3] and applied to inverse problems in [4], where decision variables are not present. In this paper we extend that framework to reliability problems, considering uncertainty on both decision and uncontrollable variables, and computing safe bounds on reliability estimates. Moreover, we show that the proposed framework is able to address reliability-based optimization, where other criteria (e.g. cost minimization) are considered in addition to reliability estimates.

2. RELIABILITY PROBLEMS
Consider a general reliability problem with uncontrollable and decision variables. Assuming that uncontrollable variables are modeled by random variables, the reliability of a decision is the probability of success of the modeled system given the choices committed in the decision variables.

Let X = {X1, . . . , Xn} be a set of n random variables, bounded by domains IX = IX1 × · · · × IXn and with a joint probability density function (p.d.f.) fX(X). Let D = {D1, . . . , Dm} be a set of m decision variables, bounded by domains ID = ID1 × · · · × IDm. The feasible region F of a reliability problem is described by a set of k constraints G on the decision and random variables:

F = {〈x, d〉 ∈ IX × ID : ∀ 1≤j≤k, Gj(x, d) ≥ 0}    (1)

Given a decision d and the region ∆ = IX × d, its reliability is the probability that a point in ∆ is feasible, computed as the multi-dimensional integral of fX on the region F ∩ ∆:

R(d) = ∫_{F∩∆} fX(x) dx    (2)

The reliability of a decision is 0 if there are no value combinations for the random variables (with d) that satisfy the constraints (F ∩ ∆ = {}). Conversely, the reliability of a decision is 1 if all such value combinations satisfy the constraints (F ∩ ∆ = ∆).

In reliability problems dealing with continuous decision variables, the decision space may be discretized into a set of hyperboxes (the Cartesian product of m intervals), with step sizes specified by the decision maker. This allows the selection of meaningful decisions δ as hyperboxes (δ ⊆ ID) in which the points are considered indifferent among each other, i.e., equally likely.

Since decision and random variables are probabilistically independent, the reliability of δ is computed as the multi-dimensional integral on the region F ∩ ∆, where ∆ = IX × δ and fD is a multivariate uniform distribution over δ:

R(δ) = ∫_{F∩∆} fX(x) fD(d) dx dd    (3)

Consider the reliability problem (figure 1) with random variable X1, bounded by IX1 = [0, 18], with a normalized Gaussian p.d.f. fX(X1); decision variable D1, bounded by ID1 = [0, 18]; and feasible region F (dark grey area) defined by the constraint G1(X1, D1) : 18 − D1 − X1 ≥ 0.

Figure 1: A reliability problem. a) d1 = 8; b) d1 ∈ [8, 9].

One possible decision (figure 1a) is δ = {8}, with ∆ = [0, 18] × {8} (complete vertical line) and F ∩ ∆ = [0, 10] × {8} (thick vertical line). Its reliability (light grey area) is:

R({8}) = ∫_0^10 fX(x1) dx1    (4)

Another possible decision (figure 1b) is δ = [8, 9], with ∆ = [0, 18] × [8, 9] (complete rectangle) and F ∩ ∆ the black area. With a uniform p.d.f. fD(D) on [8, 9], its reliability is:

R([8, 9]) = ∫_8^9 ∫_0^{18−d1} fX(x1) fD(d1) dx1 dd1    (5)
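As a sanity check of equations (4) and (5), the following Python sketch evaluates them numerically. This is not the safe-bounds method developed later, and the Gaussian parameters (mean 9, std 3) are an assumption for illustration only, since the text merely states that fX is a normalized Gaussian on [0, 18]:

# Nominal (non-guaranteed) evaluation of equations (4) and (5).
# Assumption: fX is a Gaussian with mean 9 and std 3, renormalized
# over [0, 18]; these parameters are illustrative, not from the paper.
from scipy import integrate
from scipy.stats import norm

g = norm(loc=9.0, scale=3.0)
Z = g.cdf(18.0) - g.cdf(0.0)            # renormalization constant
fX = lambda x: g.pdf(x) / Z             # truncated p.d.f. on [0, 18]

# Equation (4): the point decision d1 = 8 is feasible while x1 <= 10.
R_point, _ = integrate.quad(fX, 0.0, 10.0)

# Equation (5): box decision [8, 9], with uniform fD = 1 over [8, 9];
# dblquad integrates x1 (inner) over [0, 18 - d1] and d1 over [8, 9].
R_box, _ = integrate.dblquad(lambda x1, d1: fX(x1) * 1.0,
                             8.0, 9.0,
                             lambda d1: 0.0, lambda d1: 18.0 - d1)

print(R_point, R_box)   # roughly 0.63 and 0.57 with these assumptions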

When a hypergrid on the decision variables is considered, it is possible to evaluate how reliable each decision is and to characterize the entire decision space in terms of its reliability. This information is useful to decision makers, as it allows them to choose between different alternatives, based on the given reliability and on their expertise.

In practice many reliability problems include optimization criteria and are called reliability-based optimization problems [7]. Besides the information about the failure mode of a system (modeled by the constraints), they include information about its desired behavior, modeled by objective functions Hi(D) over the decision variables. The aim is to obtain reliable optimal decisions.

2.1 Classical Techniques
Reliability estimation involves the calculation of a multi-dimensional integral over a possibly highly non-linear integration boundary. Analytical computation of that integral is usually impossible. Classical techniques devised a variety of approximation methods to compute this integral, including sampling techniques based on Monte Carlo simulation (MCS) [10]. This approach works well for a small reliability requirement but, as the desired reliability increases, the number of samples must also increase to find at least one infeasible solution. As the number of variables increases, especially for non-linear problems, the MCS approach becomes inadequate for practical use, due to its prohibitively high computation cost. In [12], Hasofer and Lind introduced the reliability index technique for calculating approximations of the desired integral with reduced computation costs. The reliability index has been extensively used in the first- and second-order reliability methods (FORM [14] and SORM [9]). However, the accuracy of the computed approximation is sacrificed due to several assumptions made to implement those methods.
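Before turning to those assumptions, here is a minimal Python sketch of the MCS baseline mentioned above, applied to the example of section 2 with d1 = 8; the truncated Gaussian parameters (mean 9, std 3) are assumed for illustration. The estimate is unbiased, but it carries only statistical, not guaranteed, error bounds:

# Plain Monte Carlo reliability estimation (MCS) for d1 = 8.
# Assumption: x1 follows a Gaussian (mean 9, std 3) truncated to [0, 18].
import random

def mcs_reliability(n=100000, d1=8.0, mu=9.0, sig=3.0):
    hits = 0
    drawn = 0
    while drawn < n:
        x1 = random.gauss(mu, sig)
        if not (0.0 <= x1 <= 18.0):   # rejection sampling for truncation
            continue
        drawn += 1
        if 18.0 - d1 - x1 >= 0.0:     # constraint G1 satisfied
            hits += 1
    return hits / n

print(mcs_reliability())  # about 0.63 with these assumed parameters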

A first assumption is that the joint p.d.f. in (2) can be approximated by a multivariate Gaussian. Various normal transformation techniques can be applied [13], which may lead to major errors when the original space includes non-normal random variables.

A second assumption is that the feasible space determined by a single constraint can be reasonably approximated based on a particular point, the most probable point (MPP), on the constraint boundary. Instead of the original constraint, a tangent plane (FORM) or a quadratic surface (SORM), fitted at the MPP, is used to approximate the feasible region. However, the non-linearity of the constraint may lead to unreasonable approximation errors. Firstly, the local optimization methods [12] used to search for the MPP are not guaranteed to converge to a global minimum. Secondly, an approximation based only on a single MPP does not account for the possibly significant contributions of other points [16]. Finally, the linear or quadratic approximation of the constraint may be unrealistic for highly non-linear constraints.
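For reference, the textbook FORM estimate (standard material, not taken from this paper) can be stated in one line: in standard normal space u, with the mean point feasible, the boundary G(u) = 0 is replaced by its tangent plane at the MPP u*, so

β = min_{G(u)=0} ‖u‖ = ‖u*‖,    R ≈ Φ(β),

where Φ is the standard normal c.d.f. The error of this estimate grows with the curvature of the boundary at u*, which is exactly the non-linearity issue discussed above.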

A third assumption is that the overall reliability can be reasonably approximated from the individual contributions of each constraint. In its simplest form, only the most critical constraint is used to delimit the infeasible region. This may obviously lead to overestimations of the overall reliability. More accurate approaches [8] take into account the contribution of all the constraints but, to avoid overlapping between the contributions of each pair of constraints, have to rely on approximations of the corresponding joint bivariate normal distributions.

Given the simplifications adopted, none of the above methods provides guarantees on the computed reliability values, especially for non-linear problems. In contrast, the technique presented in this paper does not suffer from this limitation, guaranteeing safe bounds for the reliability of a decision, thanks to the probabilistic continuous constraint satisfaction paradigm that supports it.

3. CONTINUOUS CONSTRAINTS
A Continuous Constraint Satisfaction Problem (CCSP) [17, 2, 20] is defined by a set of real-valued variables and a set of constraints on subsets of the variables. A solution is a value assignment to all variables satisfying all the constraints.

Constraint reasoning aims at eliminating value combinations from the initial domains (the initial search space) that do not satisfy the constraints. Usually, during constraint reasoning in continuous domains, the search space is maintained as a set of boxes (Cartesian products of intervals), which are pruned and subdivided until a stopping criterion is satisfied (e.g. all boxes are small enough).

The pruning of the variable domains is based on constraint propagation. For this purpose, narrowing functions (mappings between boxes) are associated with constraints, often implementing efficient methods from interval analysis (e.g. the interval Newton [18]) to guarantee correctness (no solutions are eliminated) and contractness (the box obtained is smaller than or equal to the original one).
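To make the idea concrete, here is a minimal Python sketch of a narrowing function for the constraint of the figure 1 example, G1 : 18 − D1 − X1 ≥ 0. It is a sketch only; the prototype of section 6 uses Gaol in C++ with outward rounding, which plain floats do not provide:

def narrow(x, d):
    # Contract the domains of x1 and d1 using x1 + d1 <= 18
    # (i.e. G1 = 18 - d1 - x1 >= 0): correct (no solution is lost)
    # and contracting (the output box is contained in the input box).
    (xlo, xhi), (dlo, dhi) = x, d
    xhi = min(xhi, 18.0 - dlo)           # x1 <= 18 - min(d1)
    dhi = min(dhi, 18.0 - xlo)           # d1 <= 18 - min(x1)
    if xlo > xhi or dlo > dhi:
        return None                      # box eliminated: no solutions inside
    return (xlo, xhi), (dlo, dhi)

print(narrow((0.0, 18.0), (8.0, 9.0)))   # ((0.0, 10.0), (8.0, 9.0))

On the box of figure 1b this contracts the domain of X1 from [0, 18] to [0, 10] without losing any solution.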

In classical CCSPs uncertainty is modeled by the intervals that represent the domains of the variables. Constraint reasoning reduces uncertainty, providing a safe method for computing a set of boxes enclosing the feasible space. This paradigm cannot distinguish between different scenarios, and thus all combinations of values within such an enclosure are considered equally plausible.

4. PROBABILISTIC CONSTRAINTS
Probability provides a classical model for dealing with uncertainty [11]. The basic element of probability theory is the random variable, with an associated domain where it can assume values. In particular, continuous random variables assume real values. A possible world, or atomic event, is an assignment of values to all the variables of the model. An event is a set of possible worlds. The set of all possible worlds in the model is the sample space. If all the random variables are continuous, the sample space is the Cartesian product of their domains, and the possible worlds and events are, respectively, points and regions of such a hyperspace.

Probability measures may be associated with events. A probabilistic model is an encoding of probabilistic information, allowing the computation of the probability of any event, in accordance with the axioms of probability. In the continuous case, the usual method for specifying a probabilistic model assumes, either explicitly or implicitly, a full joint probability density function (p.d.f.), which assigns a probability measure to each point of the sample space. The probability of any event E, given a p.d.f. f, is its multi-dimensional integral on the region defined by the event:

P(E) = ∫_E f(x) dx    (6)

In accordance with the axioms of probability, f must be a non-negative function and, when the event E is the complete sample space, the above integral must be 1.

To complement the interval-bounded representation of uncertainty with a probabilistic characterization of the value distributions, a probabilistic extension of a CCSP (PCCSP) was proposed in [3].

In a PCCSP (X, D, C, f), X is a tuple of n real-valued variables 〈x1, . . . , xn〉, D is the Cartesian product of their domains I1 × · · · × In, with each variable xi ranging over the real interval Ii, C is a finite set of numerical constraints on subsets of the variables in X, and f is a non-negative point function defined on D such that:

∫_{I1} · · · ∫_{In} f(x1, . . . , xn) dxn · · · dx1 = 1    (7)

The framework associates a probabilistic model with its initial search space D, characterized by a joint p.d.f. f. The probability of any event E may be theoretically computed as in (6). In particular, the feasible space F is the event containing all possible points that satisfy the constraints. See [3] for further details and related work.

The ability of this framework to combine feasibility, through constraint reasoning, with probability, providing a safe method to reason with a probabilistic model, makes it appealing for decision support in reliability problems.

5. CONSTRAINTS FOR RELIABILITY
In order to handle reliability problems, three functionalities were included in the PCCSP framework. The first calculates a safe enclosure of the probability of the feasible space and is generic for constraint problems characterized by sets of inequalities (algorithm 1). The others use this functionality for decision support in reliability problems (algorithm 2) and reliability-based optimization (algorithm 3). This section describes these functionalities.

The aim of reliability assessment is to characterize each decision δ ⊆ ID in terms of its reliability value R(δ), calculated by equation (3). Such reliability corresponds to the probability of the feasible space F of a PCCSP (X, D, C, f), where X contains the decision and random variables of the reliability problem, the sample space is D = ∆, the constraints C are the inequality constraints Gj ≥ 0 of the reliability problem, and the p.d.f. is f = fX × fD.

To compute a multi-dimensional integral over a non-linear integration area, this area is safely approximated by discretizing it into a set of boxes. Then, the integrals over all boxes are computed and summed up to obtain an approximation of the original integral. A safe lower bound for the probability value is obtained by summing the contributions of all boxes completely included in the original area (an inner approximation), whereas a safe upper bound is obtained by summing the contributions of all boxes that intersect the original area (an outer approximation). The inner approximation is thus the set of boxes where all the points are proved to be in the solution space (solution boxes). The outer approximation is the set of boxes not eliminated by constraint reasoning. The boxes that are in the outer, but not in the inner, approximation contribute to the difference between the lower and upper reliability bounds and will be referred to as uncertain boxes. Constraint reasoning progressively reduces this difference, either by transforming an uncertain box into a solution box, or by eliminating or reducing an uncertain box.
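In symbols, if S is the set of solution boxes and U the set of uncertain boxes (so the outer approximation is S ∪ U), the maintained enclosure is

Σ_{B∈S} ∫_B f(x) dx  ≤  P(F)  ≤  Σ_{B∈S} ∫_B f(x) dx + Σ_{B∈U} ∫_B f(x) dx.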

Figure 2 shows the inner (white rectangles) and outer (white plus grey rectangles) approximations for a decision δ and a solution space given by the area inside the curved lines. The uncertain box marked with a circle in figure 2a is transformed into two smaller uncertain boxes and one solution box (figure 2b), providing a better approximation of the solution space than the previous one.

Figure 2: Inner and outer approximations of the non-linear area inside the curved lines.

Given a PCCSP (X, D, C, f) and a precision requirement α, algorithm 1 computes an enclosure (of width smaller than or equal to α) of the correct probability value for the solution space. It maintains an interval p with the current probability enclosure and a list L of uncertain boxes, sorted in decreasing order of their probability. A solution box contributes to the lower and upper bounds of the overall probability p, while an uncertain box contributes only to the upper bound, since it may be inconsistent. In the subsequent split-and-prune process, applied to the boxes in L, p is updated according to the type of the boxes obtained. This process stops either when precision α is achieved or when there are no more boxes to process. Nevertheless, values of α close to the machine limit may be impossible to compute due to rounding errors or unreasonable computation times. In such cases an alternative stopping criterion can be enforced.

Algorithm 1 Solution space probability

function ssProbability((X, D, C, f), α)
1: L ← ∅
2: p ← [0, 0]
3: updateProb(D, C, f, p, L)
4: return ssProb(L, C, f, p, α)

function ssProb(L, C, f, p, α)
1: while (L ≠ ∅ ∧ width(p) > α) do
2:   B ← remove(L)
3:   (LB, RB) ← split(B)
4:   pB ← boxProb(B, C, f)
5:   p ← [left(p) − left(pB), right(p) − right(pB)]
6:   updateProb(LB, C, f, p, L)
7:   updateProb(RB, C, f, p, L)
8: end while
9: return p

procedure updateProb(B, C, f, &p, &L)
1: B ← CPA(B, C)
2: if B ≠ ∅ then
3:   if ¬allSolutions(B, C) then
4:     L ← orderedInsert(B, L)
5:   end if
6:   p ← p + boxProb(B, C, f)
7: end if

function boxProb(B, C, f)
1: p ← probability(B, f)
2: if allSolutions(B, C) then return p
3: else return [0, right(p)]

Function ssProbability is the starting point of algorithm 1, returning the intended enclosure of the correct probability value. It calls the auxiliary function ssProb (line 4) with the values of p and L adequately initialized (lines 1-3).

Given a box B, procedure updateProb updates the values of p and L. First, B is pruned by the constraint propagation algorithm CPA, according to the constraints C (line 1). Then, if B is not eliminated (line 2), the probability enclosure p is updated with the contribution of B (line 6). Moreover, if B is still an uncertain box (line 3), it is inserted into list L according to its probability (line 4). To verify satisfiability, function allSolutions relies on interval evaluations of each constraint over the box domains.

Function boxProb returns an interval with the lower and upper contributions of box B, depending on its type. The probability of B is the integral of the p.d.f. f over the box. Function probability (see [5] for an implementation) computes an enclosure of this integral (line 1). If B is a solution box, this interval is returned (line 2). Otherwise, it is an uncertain box, and only the upper bound keeps the upper value of the calculated contribution (line 3).

Function ssProb iteratively processes all the uncertain boxes in L, progressively tightening the probability enclosure p, until precision α is achieved or there are no more boxes (line 1). Each box B in L is split into a left and a right box by dividing the largest interval of B; function split provides this functionality (line 3). The contribution of box B is replaced by the contributions of both sub-boxes LB and RB (lines 4-7). When the process stops, p is returned (line 9). Since L is ordered, the next uncertain box B to be processed is always the one with the greatest probability value (line 1).
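To make the algorithm concrete, the following self-contained Python sketch mirrors ssProbability on the running example of section 2 (constraint 18 − d1 − x1 ≥ 0, with d1 uniform on [8, 9]). The truncated Gaussian parameters (mean 9, std 3) are assumed for illustration, CPA is reduced to an interval test of the single constraint, and floating-point rounding is ignored; unlike the Gaol-based prototype of section 6, the bounds here are safe only up to round-off:

# Miniature ssProbability for G = 18 - d1 - x1 >= 0, d1 ~ U[8, 9],
# x1 ~ Gaussian (assumed mean 9, std 3) truncated to [0, 18].
import heapq, math

MU, SIG = 9.0, 3.0
Z = 0.5 * (math.erf((18.0 - MU) / (SIG * math.sqrt(2.0)))
           - math.erf((0.0 - MU) / (SIG * math.sqrt(2.0))))

def fX(x):                               # truncated Gaussian p.d.f.
    return math.exp(-0.5 * ((x - MU) / SIG) ** 2) / (SIG * math.sqrt(2.0 * math.pi) * Z)

def fX_bounds(lo, hi):                   # enclosure of fX on [lo, hi] (unimodal)
    vals = (fX(lo), fX(hi))
    top = fX(MU) if lo <= MU <= hi else max(vals)
    return min(vals), top

def classify(xlo, xhi, dlo, dhi):        # interval test of G >= 0
    if 18.0 - dhi - xhi >= 0.0:
        return 'solution'                # G >= 0 holds everywhere in the box
    if 18.0 - dlo - xlo < 0.0:
        return 'infeasible'              # G >= 0 fails everywhere in the box
    return 'uncertain'

def box_prob(xlo, xhi, dlo, dhi):        # enclosure of the integral over the box
    flo, fhi = fX_bounds(xlo, xhi)
    vol = (xhi - xlo) * (dhi - dlo)      # fD = 1 over [8, 9]
    return flo * vol, fhi * vol

def ss_probability(alpha=0.01):
    lower = upper = 0.0                  # current enclosure [lower, upper]
    queue = []                           # uncertain boxes, largest contribution first

    def push(box):
        nonlocal lower, upper
        kind = classify(*box)
        if kind == 'infeasible':
            return                       # eliminated: contributes nothing
        plo, phi = box_prob(*box)
        if kind == 'solution':
            lower += plo                 # solution boxes raise both bounds
            upper += phi
        else:
            upper += phi                 # uncertain boxes raise only the upper bound
            heapq.heappush(queue, (-phi, box))

    push((0.0, 18.0, 8.0, 9.0))
    while queue and upper - lower > alpha:
        _, box = heapq.heappop(queue)
        upper -= box_prob(*box)[1]       # replace the box by its two halves
        xlo, xhi, dlo, dhi = box
        if xhi - xlo >= dhi - dlo:       # split the widest dimension
            m = 0.5 * (xlo + xhi)
            push((xlo, m, dlo, dhi)); push((m, xhi, dlo, dhi))
        else:
            m = 0.5 * (dlo + dhi)
            push((xlo, xhi, dlo, m)); push((xlo, xhi, m, dhi))
    return lower, upper

print(ss_probability())                  # an enclosure of width <= 0.01, near 0.57 here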

5.1 Decision Space Reliability
Once a hypergrid is established over the decision space of a reliability problem, it is possible to characterize the entire space in terms of its reliability. Algorithm 2 is included in the framework to provide this functionality. It assumes a hypergrid of step size ε in all decision dimensions.

Given a reliability problem (〈X, D〉, IX × ID, G, fX), algorithm 2 finds each hyperbox (a grid decision) that might satisfy all the constraints in G (using constraint reasoning), calculates its reliability, and includes this information in a list of decisions L. This list is the output of the algorithm and can be used for information retrieval, useful to the decision maker. An additional parameter can be a threshold for the reliability value: decisions with reliability below that value are not included in the returned list L.

Algorithm 2 Decision space reliability characterization

function dsReliability((〈X, D〉, IX × ID, G, fX), ε, α)
1: L ← ∅
2: f ← (1/ε^m) fX
3: dsRel(〈X, D〉, IX × ID, G, f, L, ε, α)
4: return L

procedure dsRel(〈X, D〉, IX × ID, G, f, &L, ε, α)
1: IX × ID ← CPA(IX × ID, G)
2: if IX × ID ≠ ∅ then
3:   if gridDecision(ID, gridIdx, ε) then
4:     R ← ssProbability((〈X, D〉, IX × ID, G, f), α)
5:     insert(〈gridIdx, R〉, L)
6:   else
7:     (IL, IR) ← split(ID)
8:     dsRel(〈X, D〉, IX × IL, G, f, L, ε, α)
9:     dsRel(〈X, D〉, IX × IR, G, f, L, ε, α)
10: end if
11: end if

function gridDecision(I1 × · · · × Im, &〈j1, . . . , jm〉, ε)
1: for i = 1 to m do
2:   ji ← floor(left(Ii)/ε)
3:   if ceil(right(Ii)/ε) > ji + 1 then return false
4: end for
5: return true

Function dsReliability is the starting point of algorithm 2 and returns a list of pairs L. Considering m decision variables, the first element of each pair is an m-tuple with the ε-hypergrid coordinates (referred to as grid coordinates) of a decision; the second is its reliability enclosure. It calls the auxiliary function dsRel (line 3) with the empty list L (line 1) and the joint p.d.f. f. This p.d.f. is the product of the random variables' joint p.d.f. and the uniform p.d.f. of each of the m decision variables (line 2). Since a decision corresponds to an m-dimensional ε-hypergrid box (referred to as a grid decision), the product of all the uniform p.d.f.s is the constant 1/ε^m.

Function dsRel recursively finds and processes each feasible grid decision of the reliability problem, updating list L with the corresponding information. The constraint propagation algorithm (CPA) prunes the box IX × ID (line 1). If it is not eliminated (line 2) and is already a grid decision (line 3), a pair with its grid coordinates and its reliability value is inserted in L. Otherwise it is split into a left and a right box (only decision variables are splittable) (line 7), which are recursively processed.

Function gridDecision verifies whether the box I1 × · · · × Im is a grid decision and, if so, calculates its grid coordinates. The condition holds if, for each box dimension (line 1), the corresponding interval does not span more than one grid interval (lines 2, 3).
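A direct Python transcription of gridDecision (a sketch: a real implementation must evaluate the divisions with outward rounding for the test to remain safe):

import math

# A box is a grid decision iff each of its intervals fits inside a
# single cell of the eps-hypergrid; the cell indices are the grid
# coordinates returned to the caller.
def grid_decision(box, eps):
    coords = []
    for lo, hi in box:
        j = math.floor(lo / eps)
        if math.ceil(hi / eps) > j + 1:
            return None              # spans more than one grid cell
        coords.append(j)
    return tuple(coords)

print(grid_decision([(1.0, 1.5)], 0.5))   # (2,): a grid decision
print(grid_decision([(1.2, 1.8)], 0.5))   # None: spans two cells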

5.2 Reliability-Based Optimization
Using the newly proposed functionalities, the probabilistic framework is further enhanced to deal with reliability-based optimization problems. The approach is to choose, among the computed decisions, those that are Pareto decisions, according to the criteria of maximizing reliability and optimizing the given objective functions Hi. Since this is a multi-objective optimization, a Pareto-optimal frontier is computed, using algorithm 3.

Consider l objective functions Hi. Similarly to what was done to calculate the reliability enclosure of a decision, based on the feasible space, it is also possible to calculate enclosures for the possible values of the optimization functions over the feasible space (Hi for maximization functions or −Hi for minimization functions). Thus, an (l + 1)-tuple of enclosing intervals, Oδ, can be associated with each decision δ, where the first element represents the reliability and the others the objective functions. For any two decisions δ1 and δ2, with corresponding tuples Oδ1 and Oδ2, δ1 strictly dominates δ2 if it satisfies the Pareto criterion: ∀i Oδ1[i] ≥ Oδ2[i] ∧ ∃i Oδ1[i] > Oδ2[i]. The Pareto-optimal frontier is the set of decisions that are not strictly dominated by another decision. Since the compared elements are intervals, the ≥ and > interval operators [18] must be used (e.g. [a, b] > [c, d] iff a > d).

Algorithm 3 Reliability-based optimization

function dsOptimize(〈X, D〉, IX × ID, G, H, fX, ε, α)
1: L ← ∅
2: f ← (1/ε^m) fX
3: dsOptim(〈X, D〉, IX × ID, G, H, f, L, ε, α)
4: return L

procedure updateFrontier(〈gridIdx, O〉, &L)
1: for i = 1 to size(L) do
2:   if dominates(O, second(Li)) then remove(Li, L)
3:   elsif dominates(second(Li), O) then return
4: end for
5: insert(〈gridIdx, O〉, L)

Function dsOptimize is the starting point of algorithm 3 and is very similar to dsReliability of algorithm 2. However, instead of returning all grid decisions (grid coordinates and reliability value), it returns only the Pareto-optimal frontier decisions, based on the objective functions H and the reliability value. The information for each decision is augmented with its corresponding Hi values.

Since function dsOptim is an adaptation of its non-optimization counterpart, dsRel, its pseudo-code is replaced by a description of the adaptation. In function dsOptim, instead of inserting every grid decision in L, this list maintains only the non-dominated grid decisions. So, when a grid decision is found, its corresponding information is used to update L, substituting the call to insert (line 5 of dsRel) with a call to updateFrontier. For a grid decision, besides the reliability value, additional information on the corresponding Hi values must be supplied to updateFrontier to inspect dominance. Thus, algorithm 1 must be adapted to maintain such additional information, given the corresponding Hi functions.

Function updateFrontier updates the current Pareto frontier L by comparing its elements with the new decision information O (line 1), using function dominates, which verifies the Pareto criterion. If the new decision dominates an existing one, the latter is removed from L (line 2). The new decision is inserted in L if it is not dominated by any decision in that list (lines 3, 5).
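As a concrete illustration, here is a Python sketch of the dominance test and of the frontier update over tuples of intervals, each interval a (lo, hi) pair; it follows the interval orders given above, with [a, b] ≥ [c, d] iff a ≥ d taken as the non-strict analogue:

# Interval comparisons: [a, b] > [c, d] iff a > d (and >= analogously).
def igt(p, q): return p[0] > q[1]
def ige(p, q): return p[0] >= q[1]

# Strict Pareto dominance between tuples of intervals
# (reliability enclosure first, then the objective enclosures).
def dominates(o1, o2):
    return all(ige(a, b) for a, b in zip(o1, o2)) and \
           any(igt(a, b) for a, b in zip(o1, o2))

# updateFrontier: drop dominated entries, insert o if not dominated.
def update_frontier(frontier, grid_idx, o):
    frontier[:] = [(g, q) for g, q in frontier if not dominates(o, q)]
    if not any(dominates(q, o) for _, q in frontier):
        frontier.append((grid_idx, o))

front = []
update_frontier(front, (1, 1), [(0.80, 0.85), (6.0, 6.5)])
update_frontier(front, (2, 3), [(0.90, 0.92), (7.0, 7.1)])
print(front)   # only the second decision survives: it dominates the first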

6. PRELIMINARY RESULTS
In this section the proposed functionalities are tested on a commonly used mathematical example (e.g. [7]). For this purpose a prototype of the framework was implemented in C++, using the Gaol [1] interval arithmetic library and a constraint propagation algorithm (CPA) that enforces box consistency [2].

The problem has decision variables D1 and D2, and random variables X1 and X2, which represent variability around the decision values. To simplify, the formulas use the auxiliary variables y = D1 + X1 and z = D2 + X2.

Decision variables assume values in ID = [1, 10] × [1, 10] and random variables assume values in IX = [−0.9, 0.9] × [−0.9, 0.9], with triangular distributions over their domains and mode 0. The constraints are:

{G1, G2, G3} = { (1/20) y²z − 1 ≥ 0,  −y² − 8z + 75 ≥ 0,  5y² + 5z² + 6yz − 64y − 16z + 124 ≥ 0 }

Figure 3a presents the feasible space of this problem (projected over the decision space D), characterized by its reliability. The step size for each decision is ε = 0.1 and the required precision for the reliability is α = 1%.

Figure 3: a) Decision space reliability over D1 and D2, with the boundaries of constraints G1, G2 and G3; b) Trade-off between H11 (minimize D1 + D2) and the reliability (%).

Figure 3a is drawn from a matrix of reliability values, created using the output of algorithm 2, i.e. a list of pairs 〈grid coordinates, reliability interval〉. The grid coordinates are the line and column indexes, and the midpoint of the reliability interval is assigned to the respective matrix cell. The less reliable a decision is, the darker its color (white is 100% reliable whereas black is 0%).

The calculated information gives the decision maker a global view of the problem. Based on this information and on his expertise, he can choose to further explore one or more regions of interest, with increased accuracy.

To illustrate the reliability-based optimization functionality, we tested this example with two different minimization functions (separately): H11(D) = D1 + D2 and H12(D) = D1 + D2 + sin(3D1²) + sin(3D2²). Only decisions with a reliability upper bound above 84.134% (one Gaussian standard deviation) are considered.

With objective function H11 and the considered uncertainty, the non-dominated decisions were obtained near the feasible region above the intersection of constraints G1 and G3. Figure 3b shows the relation between the reliability values and the corresponding H11 values for the obtained decisions. This constitutes important knowledge for the decision maker, as it provides information on the trade-off between the system's reliability and its desired behavior. The objective function H12 has several local optima (figure 4b), and our method is able to identify and characterize them in terms of their different reliability values, producing an overview of the Pareto-optimal frontier (figure 4a).

Figure 4: a) Pareto-optimal frontier for H12; b) the H12 optimization function over D1 and D2.

It is noteworthy that while the output of our optimization method is a safe Pareto-optimal frontier, given the optimization criteria and the maximization of the reliability value, for classical RBDO (reliability-based design optimization) techniques it is a single decision point (the optimum value of the objective function) for a given target reliability, with no guarantees of global optimality. Moreover, in certain classes of reliability problems the classical techniques are unable to provide sufficiently accurate results, due to their approximation methods, while we guarantee safe bounds for the exact results. Nevertheless, our framework can easily produce the same type of output, i.e., the best decision (or set of decisions) given the objective function, for a target reliability. For instance, for target reliability β = 84.134%, the global optimum for objective function H11 is proved to be enclosed in [6.00, 6.02]. It was proved that there are no decisions with reliability β and H11 values less than 6.00, and a particular decision was found with D1 = 3.11 and D2 = 2.91, within the desired reliability β (in fact proved to be above 84.147%) and with an H11 value of 6.02. Similarly, for objective function H12 and the same β, the optimal value is enclosed by [4.250, 4.351].

Since in the literature computational efficiency is usually reported in terms of the number of function evaluations, we can only say that our method takes several minutes to achieve the desired result, while classical techniques take several seconds. Although our method is slower than the classical ones, it provides more information and offers a good trade-off between efficiency and guaranteed safe results.

7. CONCLUSIONS AND FUTURE WORK
In this paper we extend previous work on the PCCSP framework to deal with reliability problems. For this purpose we assume uncertainty not only on the uncontrollable variables but also on the decision variables, modeling both kinds of uncertainty with probability distributions. Given its grounding in continuous constraint propagation techniques, the PCCSP paradigm allows the computation of safe bounds for the reliability measure of any decision, accommodating any probability distribution, contrary to classical approaches that use various kinds of approximations but do not provide such guarantees. Finally, we extended our approach to address reliability-based optimization problems, by filtering the obtained solutions so as to keep exclusively those that are not Pareto-dominated.

In this paper we did not focus on the most computationally efficient methods for all the components of our framework. Nevertheless, although the present implementation is computationally expensive, we argue that the technique is worthwhile, as it provides safety guarantees and may avoid costly wrong decisions, especially in problems with non-linear constraints, where the approximations of the classical approaches may be too inaccurate.

As future work we consider including state-of-the-art techniques for the sake of efficiency and generalization. Techniques for integration using interval analysis [5] could be used to compute interval bounds for the integral of any p.d.f. over a box (function probability). Efficiency could be improved by methods that use constraint propagation on the negation of the constraints [19] (function allSolutions) and by more efficient algorithms for continuous constraint propagation (function CPA). We also intend to carry out a complexity study of the framework to address its scalability.

8. REFERENCES

[1] Gaol. www.sourceforge.net/projects/gaol.

[2] F. Benhamou, D. McAllester, and P. van Hentenryck. CLP(intervals) revisited. In ISLP, pages 124–138. MIT Press, 1994.

[3] E. Carvalho, J. Cruz, and P. Barahona. Probabilistic continuous constraint satisfaction problems. In ICTAI (2), pages 155–162, 2008.

[4] E. Carvalho, J. Cruz, and P. Barahona. Probabilistic reasoning for inverse problems. In Advances in Soft Computing, volume 46, pages 115–128. Springer, 2008.

[5] C. Chen. Computing interval enclosures for definite integrals by application of triple adaptive strategies. Computing, 78(1):81–99, 2006.

[6] J. Cruz and P. Barahona. Constraint reasoning in deep biomedical models. Artificial Intelligence in Medicine, 34:77–88, 2005.

[7] K. Deb, D. P. S. Gupta, and A. K. Mall. Handling uncertainties through reliability-based optimization using evolutionary algorithms, 2006.

[8] O. Ditlevsen. Narrow reliability bounds for structural systems. J. Struct. Mech., 4:431–439, 1979.

[9] B. Fiessler, H.-J. Neumann, and R. Rackwitz. Quadratic limit states in structural reliability. J. Engrg. Mech. Div., 105:661–676, 1979.

[10] A. Haldar and S. Mahadevan. Probability, Reliability and Statistical Methods in Engineering Design. Wiley, 1999.

[11] J. Y. Halpern. Reasoning about Uncertainty. MIT Press, 2003.

[12] A. M. Hasofer and N. C. Lind. Exact and invariant second-moment code format. J. Engrg. Mech. Div., 1974.

[13] M. Hohenbichler and R. Rackwitz. Non-normal dependent vectors in structural safety. J. Engrg. Mech. Div., 107:1227–1238, 1981.

[14] M. Hohenbichler and R. Rackwitz. First-order concepts in system reliability. Struct. Safety, 1:177–188, 1983.

[15] L. Jaulin, M. Kieffer, O. Didrit, and E. Walter. Applied Interval Analysis. Springer, 2001.

[16] A. Kiureghian and T. Dakessian. Multiple design points in first and second-order reliability. Struct. Safety, 20(1):37–49, 1998.

[17] O. Lhomme. Consistency techniques for numeric CSPs. In Proc. of the 13th IJCAI, pages 232–238, 1993.

[18] R. Moore. Interval Analysis. Prentice-Hall, Englewood Cliffs, 1966.

[19] S. Ratschan. Efficient solving of quantified inequality constraints over the real numbers. ACM Trans. Comput. Logic, 7(4):723–748, 2006.

[20] D. Sam-Haroud and B. Faltings. Consistency techniques for continuous constraints. Constraints, 1(1/2):85–118, 1996.