
Risk Analysis, Vol. 25, No. 6, 2005 DOI: 10.1111/j.1539-6924.2005.00704.x

Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis of Computer Models Involving Experimental Measurements with Monte Carlo Methods

Victor R. Vasquez,1,∗ and Wallace B. Whiting

A Monte Carlo method is presented to study the effect of systematic and random errors on computer models mainly dealing with experimental data. It is a common assumption in this type of model (linear and nonlinear regression, and nonregression computer models) involving experimental measurements that the error sources are mainly random and independent, with no constant background errors (systematic errors). However, from comparisons of different experimental data sources, evidence is often found of significant bias or calibration errors. The uncertainty analysis approach presented in this work is based on the analysis of cumulative probability distributions for output variables of the models involved, taking into account the effect of both types of errors. The probability distributions are obtained by performing Monte Carlo simulation coupled with appropriate definitions for the random and systematic errors. The main objectives are to detect the error source with stochastic dominance on the uncertainty propagation and the combined effect on output variables of the models. The results from the case studies analyzed show that the approach is able to distinguish which error type has a more significant effect on the performance of the model. Also, it was found that systematic or calibration errors, if present, cannot be neglected in uncertainty analysis of models dependent on experimental measurements such as chemical and physical properties. The approach can be used to facilitate decision making in fields related to safety factors selection, modeling, experimental data measurement, and experimental design.

KEY WORDS: Bias error; Monte Carlo; random error; statistical methods; stochastic processes; systematic error; uncertainty; uncertainty propagation

1. INTRODUCTION

1 Department of Chemical Engineering, University of Nevada, Reno, Reno, NV 89557, USA.

∗ Address correspondence to Victor R. Vasquez, Chemical Engineering Department, University of Nevada, Reno, NV 89557, USA; [email protected].

Accurate estimation of chemical and physical properties is an important use of computer models in science and engineering. Because of limitations on modeling and knowledge of reality, risk assessments are necessary to quantitatively evaluate the uncertainty associated with the predictions and model behavior. In fields like economics, or complex processes such as radioactive waste disposal, risk evaluations become more difficult than in other fields because of the large number of stochastic variables involved in the models used. For instance, according to Stix (1998), billions of U.S. dollars in losses are attributable to model risk in the financial market. Risk assessments in technological systems are used in safety evaluations, environmental hazard estimations, estimation of safety factors for process design and simulation, and risk evaluation of nuclear facilities,



among others (e.g., Morgan & Henrion, 1990; Helton, 1994).

The sources of uncertainty in computer models and processes associated with these fields can be numerous. Good literature reviews about these issues can be found in references such as Morgan and Henrion (1990), Dietrich (1991), Taylor (1982), Rowe (1994), Pate-Cornell (1996), Helton (1997), and Hoffman and Hammonds (1994). Among the most common error sources investigated from an experimental standpoint are random, systematic or bias, and model formulation errors. The random or experimental errors are a function of the apparatus and technique used to perform the measurements. It is usually assumed that the laboratory adequately controls the external factors that can significantly affect the measurements, such as stochastic variations in the ambient temperature and pressure, vibrations, and so forth. In other fields this type of variation is usually referred to as stochastic uncertainty, and the uncertainty from experimental technique errors is referred to as subjective or epistemic uncertainty. It is worth pointing out that the context in which these definitions are used sometimes differs substantially. For instance, in cases such as uncertainty analysis of nuclear reactors, external environmental factors can play a significant role while the analytical laboratory errors are usually under control. On the other hand, systematic errors are associated with calibration bias in the methods and equipment used to obtain the properties.

Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected. Important pioneering work describing the role of uncertainty in complex processes has been reported by Helton (1994, 1997) and references therein, Pate-Cornell (1996), and Hoffman and Hammonds (1994). Additionally, as pointed out by Shlyakhter (1994), the presence of this type of error violates the assumptions necessary for the use of the central limit theorem, making the use of normal distributions for characterizing errors inappropriate. Usually, it is assumed that the scientist has reduced the systematic error to a minimum, but there are always irreducible residual systematic errors. On the other hand, there is a psychological perception that reporting estimates of systematic errors decreases the quality and credibility of the experimental measurements, which explains why bias error estimates are hardly ever found in literature data sources. For convincing evidence of the presence of systematic errors, the reader may look at the history of reported fundamental physical constants such as the velocity of light, electron mass, electron charge, and so forth (Henrion & Fischhoff, 1986; Morgan & Henrion, 1990). We know that significant effort has been made to obtain high accuracy and precision for these physical measurements, and they are probably the most reliably known parameters (Shlyakhter & Kammen, 1992). However, the presence of systematic errors seems unavoidable. This problem is aggravated in fields such as applied sciences, engineering, and economics.

We believe that it is very important to know, from a probabilistic standpoint, what effects background errors could have on the output of computational procedures or models involving chemical and physical properties. Knowing the sensitivity of output variables to systematic errors and separating their effects from random error sources could substantially improve the state of information of the problem for decision-making purposes, such as the accuracy and precision required of input data for reliable prediction.

In this work, we propose a method for analyzing the effect of random and systematic errors on uncertainty propagation in computer models that are directly or indirectly related to experimental measurements. The approach is based upon the definition of systematic or bias limits, which can depend on the variable or variables being measured. Random sampling inside these limits using an appropriate probability distribution allows us to evaluate the computer model under systematically uncertain conditions. Of particular interest are the effects of possible calibration errors in experimental measurements. The results are analyzed through the use of cumulative probability distributions (cdf) for the output variables of the model. A similar procedure is used to study the effect of random errors and the combined effect of both error types. When dealing with regression models (linear and nonlinear), the uncertainty analysis is carried out by incorporating the random variations directly in the data used to regress the parameters of the model. This permits us to obtain a more realistic evaluation of the uncertainty propagation caused by experimental or measurement errors than trying to perform the evaluation using estimated uncertainty levels in the model parameters. Another advantage of this approach is the incorporation of both subjective and random uncertainty before obtaining the parameters of the models by regression. This allows us to estimate which type of error exhibits the dominant contribution to the total error.

With illustrative case studies, it is shown that the approach presented is able to distinguish when one type of error has a more significant effect on performance evaluations of selected output variables. Additionally, the method can be used to facilitate the decision-making process in fields like experimental design, modeling, and safety factors selection, among others. In Section 2, a basic description of random and systematic uncertainty as usually defined in the experimental community is given, including the basic definitions used in error propagation analysis. Section 3 briefly describes the main strategies for performing uncertainty and sensitivity analysis under the influence of random and systematic errors, followed by a methodological description of the approach presented in this work, which is illustrated using two case studies in the fields of nonlinear regression and cancer risk assessment.

2. RANDOM AND SYSTEMATIC UNCERTAINTY

Quantitative knowledge of random and systematic uncertainty represents a measure of the state of information available for a given set of experimental data or observable parameters. Uncertainties that are detected by repeating measuring procedures under the same conditions are called random errors. On the other hand, the errors that are not revealed in this way are called systematic errors. The incorporation of the latter in uncertainty analysis is one of the main objectives of this work. There are many other situations in statistical and mathematical procedures where systematic effects are found. We refer to them as bias effects or simply bias. An example of systematic error in experimental measurements is calibration error. For instance, if during the calibration of a thermometer there is a calibration error of half a degree, all the measurements performed with that instrument will be off by that amount. This is usually called a constant bias or calibration error, and it can be positive or negative.

In other fields related to risk assessment it is common to find the classification of stochastic versus subjective uncertainty. Following the presentation given by Helton (1994, 1997), stochastic uncertainty includes the totality of occurrences that take place in the system under consideration, characterized by the probability of such events or occurrences. For example, in this context a stochastic probability space can be defined as the likelihood of an n-day sequence of weather conditions at a specific geographical location. In our context, the random or experimental error will also be a consequence of a series of stochastic events, but restricted to the conditions of a measurement laboratory or equipment. Note that some random error is inherent in the apparatus. Thus, if we measure the temperature of a system that is at thermal equilibrium n times, we will obtain a distribution of temperature values, which is usually characterized using a normal probability distribution. This lack of knowledge about the "true" value of the system temperature is usually called subjective uncertainty in the context presented by Helton (1994, 1997). Within this framework, the systematic errors are not analyzed as independent events. Analyzing these, coupled with random errors, is one of the main objectives of this work. Of course, the relevance of systematic errors depends on the nature and purpose of the system under consideration.

The difference between random and systematic uncertainty is not always clear because of the high degree of subjective judgment involved in defining systematic errors. A good general definition of systematic uncertainty is the difference between the observed mean and the true value. However, the procedures used to quantify this type of uncertainty are highly controversial. Traditional practice involves reporting only the sample population average (x̄) with a variability measure based mainly on the instrument statistics. However, at least a qualitative idea of systematic and other external errors should be reported. The standard ANSI/ASME PTC 19.1 (1998) and the Guide to the Expression of Uncertainty in Measurement (1995) of the International Organization for Standardization (ISO) strongly emphasize that a good uncertainty analysis should be able to identify several sources of systematic or bias errors. Some authors indicate that an uncertainty analysis might reveal dozens of primary measurement errors (e.g., Hayward, 1977).

In this work, we are interested in comparing the effects of random errors with those of potential systematic ones in order to define a mechanism or methodology to quantify this difference. Evidence and mathematical treatment of random errors have been extensively discussed in the technical literature. On the other hand, evidence and mathematical analysis of systematic errors are much less common in the literature. However, ample evidence of the impact of systematic errors is found when comparing measurements of an invariant property over time or from different laboratories. For instance, as mentioned in the Introduction, Henrion and Fischhoff (1986) made an interesting analysis of the values of fundamental physical constants over time, showing clear evidence of the systematic trends present in their values. Vasquez and Whiting (1998) reported evidence of systematic errors in thermodynamic data for ternary liquid-liquid equilibria (LLE) systems.

Systematic trends can also be present in many other situations. For instance, Bazant and Kaxiras (1996a, 1996b) discuss the problem of having different pair potential curves obtained from the inversion of multiple cohesive energy curves for the same material (silicon in this particular case). Pair potentials are used for property predictions using statistical mechanics and thermodynamic methods. With different curves, obvious systematic trends are introduced into computer models. Additionally, several intermolecular potential functions can be used in statistical mechanical approaches for property models (Rowley, 1994; Tester & Modell, 1997), increasing the complexity of analyzing the bias introduced by the differences among the functions. Additional cases and situations where systematic trends or bias are introduced into computer models are briefly discussed in the following section.

It is important to summarize the basic definitions used in the literature to report and describe random and bias errors present in experimental data. A common expression for decomposing the total error for an observable or measurable event k of the variable u is given by

u_k = β + ε_k,  (1)

where β is a fixed bias error (as described above in the thermometer calibration example), and ε_k is a random precision error. Note that β and ε_k can be positive or negative. The bias error is assumed to be a constant background for all the observable events. In this definition, the term representing systematic error, β, is not a function of k, an approach suggested by ANSI/ASME (1998) and also found in the Guide to the Expression of Uncertainty in Measurement (1995) of ISO, where correction factors are added for various sources of systematic error. In other words, β is defined independently of the values of the design variable of the kth experiment.

When several sources of systematic errors are identified, β is suggested to be calculated as a mean of bias limits or additive correction factors as follows:

β ≈ [∑_{i=1}^{m} ϕ_{S,i}²]^{1/2},  (2)

where i defines the sources of bias errors, and ϕ_{S,i} is the bias range within error source i. Similarly, the same approach is used to define a total random error based on individual standard deviation estimates,

ε_k = [∑_{i=1}^{n} σ_{R,i}²]^{1/2}.  (3)

A similar approach for including both random and bias errors in one term is presented by Dietrich (1991), with minor conceptual variations from the one presented by ANSI/ASME (1998). The main difference lies in the use of a Gaussian tolerance probability κ multiplying a quadrature sum of both types of errors,

u_k = κ[β² + ε_k²]^{1/2},  (4)

where κ is the value used to define uncertainty intervals for means of large samples of Gaussian populations, defined as x̄ ± κσ. Additional formulae are presented for small samples, replacing the Gaussian probability term κ by a tolerance probability from the Student's t distribution, which is equivalently used to define uncertainty intervals for means of small samples as x̄ ± t·s, where s is the estimate of the standard deviation σ. Also, formulations for effective degrees of freedom and effective t values are given when both types of errors are present. Most literature references suggest a separate treatment for random and systematic uncertainties, emphasizing statistical analysis for random errors. We note that there is still substantial uncertainty in defining the role of random and systematic errors in uncertainty and sensitivity analysis. Also, when dealing with systematic errors, we found from experimental evidence that in most cases it is not practical to define constant bias backgrounds. As noted by Vasquez and Whiting (1998) in the analysis of thermodynamic data, the systematic errors detected are not constant and tend to be a function of the magnitude of the variables measured. Similar evidence was reported by Henrion and Fischhoff (1986) for fundamental physical constants. This suggests that the methodologies presented by ANSI/ASME (1998), ISO (1995), and authors like Dietrich (1991) and Steele et al. (1997) might not be appropriate for a wide variety of problems when dealing with bias errors.
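As a simple illustration of Equations (2)-(4) (our sketch, not part of the original paper), the quadrature combinations can be computed directly; the function names and the example numbers below are hypothetical:

```python
import math

def combined_bias(bias_limits):
    """Root-sum-square combination of individual bias limits, Equation (2)."""
    return math.sqrt(sum(b ** 2 for b in bias_limits))

def combined_random(stdevs):
    """Root-sum-square combination of standard deviations, Equation (3)."""
    return math.sqrt(sum(s ** 2 for s in stdevs))

def total_uncertainty(bias_limits, stdevs, kappa=1.96):
    """Dietrich-style quadrature sum of both error types, Equation (4);
    kappa = 1.96 gives an approximate 95% interval for Gaussian populations."""
    beta = combined_bias(bias_limits)
    eps = combined_random(stdevs)
    return kappa * math.sqrt(beta ** 2 + eps ** 2)

# Hypothetical example: two bias sources and three random error sources.
print(total_uncertainty([0.5, 0.2], [0.10, 0.30, 0.15]))
```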


3. UNCERTAINTY PROPAGATION IN COMPUTER MODELS

Uncertainty propagation in computer models from an analytical point of view is very difficult because of the usual nonlinear complexity in practical modeling situations. Most of the theoretical approaches for the analysis of error propagation are based on a Taylor series expansion (Taylor, 1982; ISO, 1995; ANSI/ASME, 1998) of the model y = f(x_1, x_2, ..., x_N) and the standard uncertainties for the input variables x_1, x_2, ..., x_N. Using the notation from ISO (1995), the combined uncertainty u_c(y) on the estimate of y is defined as

u_c²(y) = ∑_{i=1}^{N} [∂f/∂x_i]² u²(x_i).  (5)

When the nonlinearity of f increases, the use of higher-order terms in the Taylor series expansion is recommended. When x_i has a symmetric distribution, ISO (1995) specifically suggests including the following terms in Equation (5):

∑_{i=1}^{N} ∑_{j=1}^{N} [(1/2)(∂²f/∂x_i∂x_j)² + (∂f/∂x_i)(∂³f/∂x_i∂x_j²)] u²(x_i) u²(x_j).

For the case of correlation among the input variables, the combined uncertainty is computed as

u_c²(y) = ∑_{i=1}^{N} c_i² u²(x_i) + 2 ∑_{i=1}^{N−1} ∑_{j=i+1}^{N} c_i c_j u(x_i) u(x_j) r(x_i, x_j),  (6)

where

c_i ≡ ∂f/∂x_i,  r(x_i, x_j) = u(x_i, x_j) / [u(x_i) u(x_j)].  (7)

For relatively complex nonlinear models, this approach cannot be implemented successfully from an analytical standpoint. In this case, one approach is to use numerical techniques to approximate partial derivatives and other mathematical operations, but the numerical errors can be significant, leading to poor resolution in the uncertainty propagation computations. On the other hand, Monte Carlo methods have been shown to be a reasonable tool for uncertainty propagation analysis when the complexity of the model is significant. Helton (1993) presents a good description of the application of Monte Carlo methods in uncertainty and sensitivity analysis. It is also important to cite the pioneering work on these issues by Iman and Conover (1980, 1982) and Iman and Helton (1988), who proposed the use of stratified sampling techniques such as Latin hypercube sampling (LHS) over the parameter distributions in the model to facilitate the convergence of cumulative probability distributions for output variables. However, important limitations are found in the application of LHS when the parameter space of the model is highly nonlinear.

Additionally, random errors can cause other types of bias effects on output variables of computer models. For example, Faber et al. (1995a, 1995b) pointed out that random errors produce skewed distributions of estimated quantities in nonlinear models. Only for linear transformations of the data will the random errors cancel out. Faber et al. (1995a, 1995b) present theoretical derivations to predict random error bias in principal component analysis. As a last example of bias situations, Gatland and Thompson (1993) developed theory-based procedures for eliminating bias in linear fits to log data, in particular for data from exponential decay experiments. In this case, the log transformation introduces a bias when fitting the parameters of the model y_i = exp(a + bx_i) + ε_i Δy_i. Usually, the objective function in this case is defined by min χ²(a, b) = ∑_i ε_i² = ∑_i [y_i − exp(a + bx_i)]²/Δy_i². The latter equation cannot be solved analytically, so a linear transformation of the form a + bx_i = ln(y_i − ε_i Δy_i) is carried out to obtain the optimum values of a and b. If no correction is made in the error terms ε_i and Δy_i after the transformation, a bias is introduced. Many other cases and applications of particular bias error situations can be found easily in the literature, which we do not pretend to cover exhaustively in this work.
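The log-transformation bias discussed by Gatland and Thompson (1993) is easy to reproduce numerically. The following sketch (our illustration, with arbitrary parameter values) fits a straight line to ln(y) without any error correction and shows that the mean fitted slope is systematically displaced from the true value:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 1.0, -0.5
x = np.linspace(0.0, 5.0, 50)

slopes = []
for _ in range(2000):
    # Exponential decay data with additive Gaussian noise.
    y = np.exp(a_true + b_true * x) + rng.normal(0.0, 0.02, x.size)
    mask = y > 0                     # the log is undefined for noisy y <= 0
    # Naive approach: unweighted linear fit to ln(y), no bias correction.
    b_fit, _ = np.polyfit(x[mask], np.log(y[mask]), 1)
    slopes.append(b_fit)

print(np.mean(slopes), "vs. true slope", b_true)  # mean is biased away from -0.5
```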

4. RANDOM AND/OR SYSTEMATIC UNCERTAINTY ANALYSIS WITH MONTE CARLO METHODS

The approach proposed for incorporating the effect of random and/or systematic errors in uncertainty propagation analysis consists, as a first stage, of defining appropriate probability distributions for the random and systematic errors based on different data sources, historical data, expert opinions, and any other information source available. Then we define any bias or systematic error (including any variable behavior) of the model variables or parameters.


One has to have an idea of the range of the systematic errors to start the analysis, but it is important to explore the consequences of having uncertainty in defining such limits (this is shown later with an example). In general, we distinguish two types of systematic errors in this work, depending upon their nature. The first type consists of those cases where the bias is constant. For instance, in the thermometer calibration example, if we assume that the reference thermometer is off by a constant quantity independent of the temperature measurement, this represents such a case. For uncertainty analysis purposes, if one is uncertain about the bias present in the reference thermometer, one way of studying the effect of this error would be to first define a maximum and a minimum value for the bias and then apply a Monte Carlo technique, sampling values between the two limits. The second type of systematic error is the case where the bias in the reference thermometer is a function of the actual measurement. Note that this situation creates a new and different problem in the sense that the theoretical background previously described does not apply. Therefore, it is necessary to know a priori the behavior of the bias with respect to any of the variables upon which it depends. It turns out that this type of situation is fairly common in practice. Therefore, any approach to study the effect of the bias error has to consider these effects.

Once the statistical information regarding the errors has been collected, the uncertainty propagation analysis is performed by analyzing separately the effects of random and systematic errors on output variables through the use of cumulative probability distributions. The last step consists of the analysis of the combined effect of both types of errors. Also, a distinction is made depending on whether or not the parameters in the model come from regression procedures. For regression models, the approach suggested involves sampling for uncertainty analysis based on the data used for regressing the parameters and not on the parameters after regression. This has the advantage of not dealing with the estimation of experimental error propagation to the parameter estimates, which is needed to define appropriate probability distributions of the parameters if the uncertainty analysis is carried out starting from the parameter space, as usually done (Iman & Conover, 1980; Iman & Helton, 1988). Additionally, incorporating systematic trends in the uncertainty analysis becomes easier if done directly in the data used for regression. Thus, the proposed method is more direct and requires fewer approximations. For nonregression models, the random error effects are estimated by performing sampling over the variable distributions with random errors. (In the context of the approach presented by Helton (1994, 1997), this corresponds to sampling over the distributions used to characterize the degree of belief in the variables of the model, also described as the lack of knowledge about the model variables.) Then, systematic error effects are taken into account by performing Monte Carlo simulations over and within the bias range defined for the probabilistic means of the variables involved, as long as their values are obtained by experimental means or a clear possible systematic error source is identified.

4.1. Random Effects

For methodological purposes, we define a pseudo-data set as one that is obtained by random perturbation of the original data set. For regression models, each input datum of the data set used for parameter regression is randomly perturbed according to a probability distribution describing the random errors. In practice, N(x̄, s_x²) is very common; however, any error interactions should be accounted for through joint error probability distributions. This procedure is repeated until we generate n pseudo-data sets, where n is also the number of Monte Carlo simulations, defined according to the desired precision of the estimated cumulative probability distributions for the output variables (for details, see Morgan and Henrion, 1990). The goal is to produce a collection of n data sets that are each equally likely to have been observed in the experiment. Then, the model parameters are regressed using the pseudo-data sets generated, so n parameter sets are obtained. These are used to evaluate a given condition or value of a design variable in the model in order to evaluate its response under different, but equally likely, parameter sets. This procedure allows the construction of cumulative probability distributions for the output variables based on the random errors. The statistical properties of these distributions quantify the effect of the random errors on the output variables of the computer model. For the case of nonregression models (or models that have the parameters already defined), LHS-type sampling over the variable distributions is suggested as long as there are no significant nonlinearities in the parameter space. If nonlinearities are significant, equal probability sampling (Vasquez & Whiting, 2000) is strongly suggested.
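A minimal sketch of this random-effects procedure for a generic regression model follows (our illustration; the helper name, the use of SciPy's curve_fit, and the purely normal perturbations are assumptions of the sketch):

```python
import numpy as np
from scipy.optimize import curve_fit

def random_effects_cdf(model, p0, x, y, sx, sy, x_eval, n=100, seed=0):
    """Generate n pseudo-data sets by normal perturbation of each datum,
    refit the model to each, and collect the output at x_eval."""
    rng = np.random.default_rng(seed)
    out = np.empty(n)
    for k in range(n):
        xp = rng.normal(x, sx)               # perturb the design variable
        yp = rng.normal(y, sy)               # perturb the response
        params, _ = curve_fit(model, xp, yp, p0=p0, maxfev=20000)
        out[k] = model(x_eval, *params)
    return np.sort(out)                      # plot vs. (1..n)/n for the cdf
```

Sorting the n outputs and plotting them against their empirical quantiles yields the cdf curves used throughout this work.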


4.2. Systematic Effects

For regression models, n pseudo-data sets are generated by randomly shifting the original complete data set inside the defined bias range. The random shift is performed using a uniform probability distribution unless knowledge of the behavior of the systematic errors from a probabilistic standpoint is available a priori, which is rare. The possibility of having dynamic bias ranges has to be taken into account. By dynamic bias ranges, we mean systematic error limits that can be a function of the measured quantities or variables and are, therefore, not constant as suggested by ANSI/ASME (1998) and ISO (1995). Usually, when dealing with systematic errors within given bias ranges, it is expected that the likelihood of the data set is the same everywhere within the range, justifying the use of rectangular distributions. However, if there is uncertainty in defining the bias errors, these effects can be analyzed by assigning an appropriate probability distribution to take into account the uncertainty in the definition of the ranges for the systematic errors and then using additional Monte Carlo simulations to determine the effect of such uncertainty. This issue is illustrated and explored further in the following section with a nonlinear regression case. The next step is to regress n parameter sets using the n pseudo-data sets generated. Then, as explained above, the parameters are evaluated in the computer model to generate the cumulative probability distributions for the output variables of interest.
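A sketch of the systematic-effects perturbation for the constant-bias case (our illustration; a dynamic bias would make the shift a function of the data themselves):

```python
import numpy as np

def systematic_pseudo_sets(y, bias_lo, bias_hi, n=100, seed=1):
    """Each pseudo-data set is the whole original set shifted by one draw
    from the rectangular distribution U[bias_lo, bias_hi]."""
    rng = np.random.default_rng(seed)
    return [y + rng.uniform(bias_lo, bias_hi) for _ in range(n)]
```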

4.3. Combined Effects

The approach for estimating the combined effects of random and systematic errors assumes that the two error types are independent of each other, which means that the perturbations can be introduced in any order (i.e., first random, then bias, or vice versa). Defining the data set as the set of values to be perturbed (for regression models these are the values used to regress the parameters, and for nonregression models the mean values of the input variables), first the whole set is perturbed using the systematic error analysis described above, and then each datum within each of the n pseudo-data sets is again perturbed using the random error approach described before. This procedure produces n pseudo-data sets that have both types of error incorporated. Then the parameters obtained via regression (or the randomly perturbed data themselves) are evaluated in the computer model to generate the respective cumulative probability distributions of the output variables, whose statistical properties are produced by both types of errors.
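Combining the two perturbation steps (our sketch, under the independence assumption just stated):

```python
import numpy as np

def combined_pseudo_sets(y, bias_lo, bias_hi, sy, n=100, seed=2):
    """Shift the whole set within the bias range first, then perturb each
    datum randomly; independence means the order does not matter."""
    rng = np.random.default_rng(seed)
    sets = []
    for _ in range(n):
        shifted = y + rng.uniform(bias_lo, bias_hi)   # systematic part
        sets.append(rng.normal(shifted, sy))          # random part
    return sets
```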

5. EXAMPLE: A BASIC NONLINEAR REGRESSION MODEL

An exponential function reported by Dolby and Lipton (1972) is used as an example to illustrate the approach proposed for uncertainty propagation analysis in nonlinear regression computer models. The model has the form

y = α + βγ^x  (0 < γ < 1).  (8)

We assume in this particular example that the parameters α, β, and γ are regressed from a given set of experimental data. For illustrative purposes only, the experimental data were obtained by evaluating Equation (9) at x_i = i − 1 for i = 1, 2, ..., 7:

y_i = 1.0 + 6.0(0.55)^{x_i}.  (9)

Fig. 1 shows the behavior of Equation (9) together with hypothetical experimental data with typical random errors built in. Additionally, hypothetical bias limits (upper and lower) are indicated. The case of random error effects is analyzed by adding random normal deviates to the values of x for each datum and then adding random normal deviates to the values of y obtained from Equation (9) for each datum. The normal distributions used for each datum of the experimental set are defined as N(x_i, σ_x) and N(y(x_i), σ_y) for i = 1, ..., 7, where σ_x is computed as 10% of x_i and σ_y as 10% of y(x_i). One hundred pseudo-data sets were generated from these probability distributions using the methodology proposed for the analysis of random effects. One hundred parameter sets (α, β, and γ of Equation (8)) were regressed using a direct search optimization method (Powell's method) (Press et al., 1994). Then Equation (8) was evaluated at x = 1.0. The results are presented in Fig. 2, curve (b), as a cumulative probability distribution, which shows the influence of the simulated random errors on that specific value of the model.

Fig. 1. Upper and lower synthetic systematic error ranges defined for the model y(x) = α + β(γ)^x. The points shown represent a 10% random normal variation on hypothetical experimental data.

Fig. 2. Comparison of the systematic and random effects on the cumulative probability curve estimation for the function value at x = 1.0 in the equation y(x) = α + β(γ)^x.
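A compact Python sketch of this random-effects simulation follows (our illustration; where the paper used Powell's direct search, the sketch substitutes SciPy's least-squares curve_fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, g):
    return a + b * g**x

rng = np.random.default_rng(0)
x = np.arange(7, dtype=float)        # x_i = i - 1, i = 1, ..., 7
y = 1.0 + 6.0 * 0.55**x              # Equation (9)

vals = []
for _ in range(100):                 # 100 pseudo-data sets
    xp = rng.normal(x, 0.10 * x)     # sigma_x = 10% of x_i (zero for x_1 = 0)
    yp = rng.normal(y, 0.10 * y)     # sigma_y = 10% of y(x_i)
    p, _ = curve_fit(model, xp, yp, p0=[1.0, 6.0, 0.55], maxfev=20000)
    vals.append(model(1.0, *p))

cdf_values = np.sort(vals)           # empirical cdf, cf. curve (b) of Fig. 2
```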

The bias error effects are simulated by randomly choosing β values from a uniform distribution given by U[4.0, 8.0] when simulating the y_i values. Thus, Equation (8) with α = 1.0, β = 8.0, γ = 0.55 and with α = 1.0, β = 4.0, γ = 0.55 defined the synthetic upper and lower bias limits, respectively. One hundred parameter sets (α, β, γ) were regressed from the pseudo-data sets of (x_i, y_i), i = 1, ..., 7, generated by randomly choosing the β values. Fig. 2, curve (a), presents the cumulative probability distribution of Equation (8) evaluated at x = 1.0. Comparing curves (a) and (b) in Fig. 2, it is clear that the effect of the systematic error is more significant than the effect produced by the random errors (as expected in this case), showing that the methodology is able to distinguish when these two types of errors are of significantly different magnitudes. Another important issue is the characterization of the cumulative probability distributions (cdf). Although the mean of the cdf for the random errors is a good estimate for the unknown true value of the output variable from the probabilistic standpoint, this is not the case for the cdf obtained for the systematic effects, where any value on that distribution can be the unknown true value. Knowledge of the cdf width in the case of systematic errors becomes very important for decision making (even more so than for the case of random error effects) because of the difficulty in estimating which is the unknown true output value. For instance, in fields like design safety factors for technological systems, this kind of result is extremely important. In general, the cdf width is a measure of how much uncertainty exists and how sensitive the output variables are to systematic and random effects. Also note that, because this is a simple model, the effect of using systematic input deviates from a uniform distribution produces a uniform effect on the distribution of the output variable. For more complex models, this effect is no longer observed because of nonlinear effects on the uncertainty propagation (Faber et al., 1995a, 1995b). Additional examples and discussions of the importance of the cdf width are presented by Helton and Breeding (1993) and Helton et al. (1997), in which the calculation of nuclear reactor accident safety goals and the performance assessment of radioactive waste disposal are presented.
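The corresponding sketch for the bias-effects simulation (again our illustration, with the same optimizer substitution):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, g):
    return a + b * g**x

rng = np.random.default_rng(1)
x = np.arange(7, dtype=float)

vals = []
for _ in range(100):
    beta = rng.uniform(4.0, 8.0)     # one bias draw shifts the whole data set
    yp = 1.0 + beta * 0.55**x        # pseudo-data set under that bias
    p, _ = curve_fit(model, x, yp, p0=[1.0, 6.0, 0.55], maxfev=20000)
    vals.append(model(1.0, *p))

cdf_values = np.sort(vals)           # cf. curve (a) of Fig. 2
```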

Curve (c) of Fig. 2 shows the cumulative probability obtained from combining both error types in the uncertainty analysis. It is observed that, in this particular case, the cdf width is mainly defined by the systematic error effects. In other words, the contribution to the total error is mainly caused by the systematic errors. It is important to note that when dealing with nonlinear models, equations such as Equation (2) will not appropriately estimate the effect of combined errors because of the nonlinear transformations performed by the model.

During the design, running, and analysis of experiments, replicates of the measurements are always included. However, information about them is not usually reported, even though it could be very useful for characterizing potential systematic errors in the data. On the other hand, this is understandable if one acknowledges the (artificial) perception that reporting potential sources of systematic errors reflects poorly on the quality of the data. In principle, in well-designed experiments with appropriate measurement techniques, one can expect that the mean reported for a given experimental condition corresponds truly to the physical mean of that condition, but unfortunately this is not the case in the presence of unaccounted systematic errors. This type of situation can affect the resolution of the proposed method because of the inherent uncertainty present when choosing the bias ranges and when using the reported mean values to incorporate the random error effects. The consequences of this might be reflected as an overestimation of the uncertainty propagated from random errors and/or of the uncertainty caused by systematic errors. In order to evaluate the effects of this problem, and to gain insight into the resolution and sensitivity of the proposed approach, additional uncertainty analysis has to be performed. This can be set up by assigning uncertainty levels to the bias ranges chosen and to the estimation of the second statistical moment (i.e., µ₂ = ∫_X (x − µ)² p(x) dx, where p(x) is the probability density function), which is commonly used to characterize experimental measurements.

In the present example, in order to see the effect of having uncertain bias ranges on the cumulative probability distribution of the function y(x) evaluated at x = 1.0, a normal distribution with mean µ = 6.0 and standard deviation σ = 1.0 was used to randomly assign the lower and upper bias range of the parameter β in Equation (8). One hundred random pairs of β values were drawn from this distribution to define bias ranges, and then the approach proposed was used to generate a cdf curve for each bias range obtained during the sampling. For each pair of β values drawn, the largest is used to define the upper bias limit and the lowest defines the lower bias limit. Note that the bias ranges are not necessarily symmetric because of the way the β values are generated, but it is straightforward to generate them symmetrically as well. All the cumulative probability distribution curves generated are plotted in Fig. 3. The main points to observe in this figure are the general broadness of all the cdf curves together and the general shape of the distributions. Comparing these characteristics with those of curve (a) in Fig. 2, we can see that the latter has similar broadness and shape. This means that the use of approximate extreme bias ranges (in this example, a hypothetical worst-case scenario) covers reasonably well, from a practical standpoint, the bias ranges obtained from the normal distribution.
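A sketch of this double-loop experiment (our illustration; each of the 100 uncertain bias ranges drives its own 100-sample systematic analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, g):
    return a + b * g**x

rng = np.random.default_rng(2)
x = np.arange(7, dtype=float)

cdf_curves = []
for _ in range(100):                      # 100 uncertain bias ranges
    lo, hi = np.sort(rng.normal(6.0, 1.0, size=2))
    vals = []
    for _ in range(100):                  # systematic analysis for U[lo, hi]
        yp = 1.0 + rng.uniform(lo, hi) * 0.55**x
        p, _ = curve_fit(model, x, yp, p0=[1.0, 6.0, 0.55], maxfev=20000)
        vals.append(model(1.0, *p))
    cdf_curves.append(np.sort(vals))      # one cdf curve per bias range (Fig. 3)
```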

Fig. 3. Systematic error effects on the cumulative probability curve estimation for the function value at x = 1.0 in the equation y(x) = α + β(γ)^x for 100 bias limits, simulating the effect of uncertainty in the definition of the limits.

The case of having uncertainty in the estimation of the mean due to random error effects is studied by assuming a coefficient of variation (CV = σ/µ) one order of magnitude lower than the one present in the actual measurement (real or synthetic). In other words, we studied the sensitivity of the method to uncertainty in the mean estimation. It is important to mention that, given a reasonable sample size, the variance of the mean estimation should be very small when compared to the uncertainty of individual measurements. In this example, the data set is perturbed first to introduce the normal random error, and then a second perturbation step is carried out to simulate the effect of having uncertain mean-value estimation. In other words, the first step introduces normally distributed perturbations (σ = 1%) to each datum, and the second step does the same using a 0.1% standard deviation in order to take into account the uncertainty in evaluating experimental mean values for measurements. The results are presented in Fig. 4, which consists of 100 cdf curves. Note that the uncertainty in the mean estimation is not very significant in this case, and the cdf curves are similar to curve (b) in Fig. 2. This suggests that when the uncertainty of the mean estimation is small, it is reasonable to expect good resolution from the approach proposed. This type of situation was also studied by Shlyakhter and Kammen (1992) and Shlyakhter (1994), assuming that the uncertainty in the estimation of the standard deviation of normally distributed data also has a normal distribution with unknown variance. That approach is restricted to normal distributions in the data; however, with the approach presented here it is possible to analyze any kind of probability distribution.

Fig. 4. Random effects on the cumulative probability curve estimation for the function value at x = 1.0 in the equation y(x) = α + β(γ)^x for 100 pseudo-experimental data sets, simulating the effect of having uncertainty in the experimental mean estimation.

The combined effect of both types of uncertainty was simulated by combining the two cases described above. The bias range uncertainty was introduced first, followed by the uncertainty in the mean estimation. The resulting cdf curves are presented in Fig. 5. As pointed out in the discussion of the results presented in Fig. 2, the error propagation is mainly defined by systematic errors in this case, and the broadness of the cdf shown in Fig. 2, curve (c), is similar to the general broadness of the cdf curves presented in Fig. 5.

In general, if there is evidence of significant uncertainty in the bias range definition and data mean estimation, we suggest using the methodology described above to evaluate the impact of these uncertainties on the estimated cumulative probability distributions and, in that way, to determine whether more precision in the input data is required to improve the uncertainty analysis.

Fig. 5. Combined effect of uncertainty in the bias limits definition and in the pseudo-experimental mean estimation on the cumulative probability curve estimation for the function value at x = 1.0 in the equation y(x) = α + β(γ)^x.

6. EXAMPLE: MONTE CARLO SIMULATIONS OF INCREMENTAL LIFETIME CANCER RISK

An example adapted from Shlyakhter (1994) is used to illustrate the proposed approach for uncertainty propagation analysis of random and systematic errors on a nonregression model. The case consists of incremental lifetime cancer risk (ILCR) estimation for children caused by ingestion of benzene in soil. This problem was first addressed by Thompson et al. (1992) in public health risk assessments. The model for ILCR estimation is of the form

ILCR = [C_s × S_r × RBA × E_w × E_y × EL_f × cf] / [bw × dy × yl_f] × CPF,  (10)

where C_s is the benzene concentration in soil (mg/kg), which can be represented by a log-normal distribution with µ = 0.84 and σ = 0.77, where µ and σ are the mean and standard deviation of X = ln C_s (this definition also applies to the rest of the log-normal distributions involved in this example); S_r is the soil ingestion rate (mg/day), also described by a log-normal distribution, with µ = 3.44 and σ = 0.80; RBA = 1.0 is the relative bio-availability; E_w = 1.0 is the number of exposure days per week; E_y = 20 is the number of exposure weeks per year; EL_f = 10 is the number of exposure years per life; cf = 10⁻⁶ kg/mg is a conversion factor; bw is the body weight (kg), described by a normal distribution with µ = 47.0 and σ = 8.3; dy = 365 days per year; yl_f = 70 years per lifetime; and CPF is the cancer potency factor (kg·day/mg), described by a log-normal distribution with statistical parameters µ = −4.33 and σ = 0.67. The distributions given characterize the experimental random variation in the values of these variables, as described in Section 2.
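A direct Monte Carlo sketch of the random-error case of Equation (10) (our illustration; the lognormal parameters below are the (µ, σ) of ln X quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

Cs  = rng.lognormal(0.84, 0.77, n)       # soil benzene concentration, mg/kg
Sr  = rng.lognormal(3.44, 0.80, n)       # soil ingestion rate, mg/day
bw  = rng.normal(47.0, 8.3, n)           # body weight, kg
CPF = rng.lognormal(-4.33, 0.67, n)      # cancer potency factor, kg.day/mg
RBA, Ew, Ey, ELf = 1.0, 1.0, 20.0, 10.0  # deterministic inputs
cf, dy, ylf = 1e-6, 365.0, 70.0

ilcr = (Cs * Sr * RBA * Ew * Ey * ELf * cf) / (bw * dy * ylf) * CPF
cdf = np.sort(ilcr)                      # "Random error only" curve of Fig. 6
```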

We first applied the methodology to determine the effect of random errors on Equation (10) by performing 1,000 Monte Carlo simulations using the provided probability distributions for the input variables. The results are presented in Figs. 6 and 7 as a cumulative probability distribution curve labeled "Random." These results are in very good agreement with those obtained by Shlyakhter (1994), without the effect of systematic trends over time (Fig. 5, u = 0, in Shlyakhter (1994)). Then, the effect of having bias error on the model was studied by adding systematic changes to the statistical means of the stochastic input variables involved. The bias errors were assigned by drawing pseudo-mean values for the input variables from a uniform distribution defined as U[µ(1 − p), µ(1 + p)], where p is the maximum fractional change allowed in µ because of bias or systematic errors. Using this criterion, the suggested methodology for the analysis of systematic effects was applied for 10%, 30%, 50%, 70%, 85%, and 90% change in µ. The results are presented as cumulative probability curves in Fig. 6, together with the curve obtained for random effects for comparison purposes. It can be seen that the cumulative probability curves approximate the uncertainty caused by random errors only for high percentage values (approximately 80%) of systematic deviation, which is very unlikely from a practical standpoint. This means that the dominant error contribution in the ILCR model comes from the random errors. Additionally, the combined effect of random and systematic errors was studied according to the approach suggested above for such cases. The results are presented in Fig. 7, and again, the same conclusion is drawn regarding the dominant contribution of the random errors. The results presented by Shlyakhter (1994) also verify the sensitivity of this model to changes in the random uncertainty definition (changes in standard deviations over time).

Fig. 6. Monte Carlo simulations of the incremental lifetime cancer risk (ILCR) for children caused by ingestion of benzene in soil (Thompson et al., 1992). Cumulative probability curves are presented for different bias error situations on the stochastic variables of Equation (10). The percentage values indicated correspond to the systematic error only (SEO) applied to the statistical mean characterizing the input variables. For instance, for a given stochastic variable with mean µ, its new mean after a bias effect of 10% SEO is introduced is defined by a randomly chosen value from the distribution U[0.9µ, 1.1µ]. For comparison purposes, the Monte Carlo simulation produced from having only random error effects is included ("Random error only").
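A hedged sketch of the systematic perturbation (our illustration; realizing the U[µ(1 − p), µ(1 + p)] mean shift as a multiplicative factor on each sampled variable is an assumption of the sketch, not a prescription of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 0.30                        # e.g., the 30% curves of Figs. 6 and 7

def with_bias(samples):
    """Scale each realization by a factor drawn from U[1 - p, 1 + p], which
    shifts the variable's mean inside U[mu(1 - p), mu(1 + p)]."""
    return samples * rng.uniform(1.0 - p, 1.0 + p, samples.size)

Cs  = with_bias(rng.lognormal(0.84, 0.77, n))
Sr  = with_bias(rng.lognormal(3.44, 0.80, n))
bw  = with_bias(rng.normal(47.0, 8.3, n))
CPF = with_bias(rng.lognormal(-4.33, 0.67, n))
RBA, Ew, Ey, ELf = 1.0, 1.0, 20.0, 10.0
cf, dy, ylf = 1e-6, 365.0, 70.0

# With the random sampling retained, this corresponds to the combined "SR"
# curves of Fig. 7; replacing each sampler by its mean gives the "SEO" curves.
ilcr = np.sort((Cs * Sr * RBA * Ew * Ey * ELf * cf) / (bw * dy * ylf) * CPF)
```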

In order to simplify the results and discussion of this example, the case of having uncertain bias ranges and uncertainty around the mean estimation caused by random errors is not included. However, Shlyakhter (1994) presents an excellent illustration of the effects produced by past errors (and uncertainty in variance estimation) and how they can be characterized statistically.

Fig. 7. Monte Carlo simulations of the incremental lifetime cancer risk (ILCR) for children caused by ingestion of benzene in soil (Thompson et al., 1992). Cumulative probability curves are presented for different combined error effects (random and systematic) of the stochastic variables in Equation (10). The percentage values indicated correspond to the systematic error applied to the statistical mean characterizing the input variables. For instance, for a given stochastic variable with mean µ, its new mean after a bias effect of 10% is introduced is defined by a randomly chosen value from the distribution U[0.9µ, 1.1µ]. Then the effect of random errors is added to give the combined effect described as 10% SR (systematic and random error). For comparison purposes, the Monte Carlo simulation produced from having only random error effects is included ("Random error only").

7. CONCLUSIONS AND RECOMMENDATIONS

An approach based on Monte Carlo simulation was developed to analyze the effect of systematic and random errors on computer models involving experimental measurements. The results from the case studies presented show that the approach is able to distinguish which error type has the dominant contribution to the total error propagated through the computer model. It was shown that systematic errors present in experimental measurements can play a very significant role in uncertainty propagation, an effect that is traditionally ignored when estimating and using chemical and physical properties. Additionally, it was observed that the effect of random and systematic errors on the uncertainty propagation is not additive, meaning that the error type with the largest effect clearly defines the uncertainty propagated to the output variables of the model. In general, experimentalists have not been concerned enough about systematic errors when reporting experimental or observable data. We believe that these systematic errors are significant sources of uncertainty and that more attention should be given to estimating them and using them in uncertainty and sensitivity analyses. In that way, awareness of this problem in the chemical and physical property estimation community will be increased. Uncertainty and sensitivity analyses in other fields involving highly hazardous processes, such as radioactive waste disposal and nuclear reactor design and operation, have shown the effect of having different types of uncertainties on performance assessments. Valuable examples where different types of errors are studied can be found in Helton et al. (1996, 1997) and Breeding et al. (1992).

The approach presented can be used to facilitate decision making in fields related to safety factors selection, modeling, experimental data measurement, and experimental design.

ACKNOWLEDGMENTS

This work was supported, in part, by National Science Foundation Grant CTS-96-96192. The suggestions and comments made by Mark Meerschaert from the Department of Mathematics at the University of Nevada, Reno, regarding the analysis of having uncertain bias ranges and uncertainty around mean estimation due to random errors are kindly acknowledged. Additionally, we want to acknowledge the insightful comments made by the reviewers of this work, in particular one reviewer who pointed out important references for the treatment of stochastic and subjective uncertainty.

REFERENCES

American Society of Mechanical Engineers (ASME). (1998). ANSI/ASME PTC 19.1-1998: Measurement Uncertainty, Part 1. New York: ASME.

Bazant, M. Z., & Kaxiras, E. (1996a). Derivation of interatomic potentials by inversion of ab initio cohesive energy curves. In E. Kaxiras, J. Joannopoulos, P. Vashishta, & R. Kalia (Eds.), Materials Theory, Simulations and Parallel Algorithms, Volume 408 of Materials Research Society Symposia Proceedings (pp. 79-84). Pittsburgh: Materials Research Society.

Bazant, M. Z., & Kaxiras, E. (1996b). Modeling of covalent bonding in solids by inversion of cohesive energy curves. Physical Review Letters, 77, 4370-4373.

Breeding, R. J., Helton, J. C., Gorham, E. D., & Harper, F. T. (1992). Summary description of the methods used in the probabilistic risk assessments for NUREG-1150. Nuclear Engineering and Design, 135, 1-27.

Dietrich, C. F. (1991). Uncertainty, Calibration and Probability, 2nd ed. New York: Adam Hilger.

Dolby, G. R., & Lipton, S. (1972). Maximum likelihood estimation of the general nonlinear functional relationship with replicated observations and correlated errors. Biometrika, 59(1), 121-129.

Faber, N. M., Meinders, M. J., Geladi, P., Sjostrom, M., Buydens, L. M. C., & Kateman, G. (1995a). Random error bias in principal component analysis. Part I. Derivation of theoretical predictions. Analytica Chimica Acta, 304, 257-271.

Faber, N. M., Meinders, M. J., Geladi, P., Sjostrom, M., Buydens, L. M. C., & Kateman, G. (1995b). Random error bias in principal component analysis. Part II. Application of theoretical predictions to multivariate problems. Analytica Chimica Acta, 304, 273-283.

Gatland, I. R., & Thompson, W. J. (1993). Parameter bias elimination for log-transformed data with arbitrary error characteristics. American Journal of Physics, 61(3), 269-272.

Hayward, A. T. J. (1977). Repeatability and Accuracy. New York: Mechanical Engineering Publications Ltd.

Helton, J. C. (1993). Uncertainty and sensitivity analysis techniques for use in performance assessment for radioactive waste disposal. Reliability Engineering and System Safety, 42, 327-367.

Helton, J. C. (1994). Treatment of uncertainty in performance assessments for complex systems. Risk Analysis, 14(4), 483-511.

Helton, J. C. (1997). Uncertainty and sensitivity analysis in the presence of stochastic and subjective uncertainty. Journal of Statistical Computation and Simulation, 57, 3-76.

Helton, J. C., Anderson, D. R., Baker, B. L., Bean, J. E., Berglund, J. W., Beyeler, W., Economy, K., Garner, J. W., Hora, S. C., Iuzzolino, H. J., Knupp, P., Marietta, M. G., Rath, J., Rechard, R. P., Roache, P. J., Rudeen, D. K., Salari, K., Schreiber, J. D., Swift, P. N., Tierney, M. S., & Vaughn, P. (1996). Uncertainty and sensitivity analysis results obtained in the 1992 performance assessment for the waste isolation pilot plant. Reliability Engineering and System Safety, 51, 53-100.

Helton, J. C., Anderson, D. R., Marietta, M. G., & Rechard, R. P. (1997). Performance assessment for the waste isolation pilot plant: From regulation to calculation for 40 CFR 191.13. Operations Research, 45, 157-177.

Helton, J. C., & Breeding, R. J. (1993). Calculation of reactor accident safety goals. Reliability Engineering and System Safety, 39, 129-158.

Henrion, M., & Fischhoff, B. (1986). Assessing uncertainty in physical constants. American Journal of Physics, 54(9), 791-798.

Hoffman, F. O., & Hammonds, J. S. (1994). Propagation of uncertainty in risk assessments: The need to distinguish between uncertainty due to lack of knowledge and uncertainty due to variability. Risk Analysis, 14(5), 707-712.

Iman, R. L., & Conover, W. J. (1980). Small sample sensitivity analysis techniques for computer models, with an application to risk assessment. Communications in Statistics - Theory and Methods, A9(17), 1749-1874.

Iman, R. L., & Conover, W. J. (1982). A distribution-free approach to inducing rank correlation among input variables. Communications in Statistics - Simulation and Computation, 11(3), 311-334.

Iman, R. L., & Helton, J. C. (1988). An investigation of uncertainty and sensitivity analysis techniques for computer models. Risk Analysis, 8(1), 71-90.

International Organization for Standardization (ISO). (1995). Guide to the Expression of Uncertainty in Measurement. Geneva, Switzerland: ISO.

Morgan, M. G., & Henrion, M. (1990). Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge, UK: Cambridge University Press.

Pate-Cornell, M. E. (1996). Uncertainties in risk analysis: Six levels of treatment. Reliability Engineering and System Safety, 54, 95-111.

Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (1994). Numerical Recipes, 2nd ed. New York: Cambridge University Press.

Rowe, W. D. (1994). Understanding uncertainty. Risk Analysis, 14(5), 743-750.

Rowley, R. (1994). Statistical Mechanics for Thermophysical Property Calculations. Englewood Cliffs, NJ: Prentice Hall International.

Shlyakhter, A. I. (1994). An improved framework for uncertainty analysis: Accounting for unsuspected errors. Risk Analysis, 14(4), 441-447.

Shlyakhter, A., & Kammen, D. M. (1992). Sea-level rise or fall? Nature, 357, 25.

Steele, W. G., Ferguson, R. A., & Coleman, H. W. (1997). Computer-assisted uncertainty analysis. Computer Applications in Engineering Education, 5(3), 169-179.

Stix, G. (1998). A calculus of risk. Scientific American, 278(5), 92-97.

Taylor, J. R. (1982). An Introduction to Error Analysis. California: Oxford University Press.

Tester, J., & Modell, M. (1997). Thermodynamics and Its Applications, 3rd ed. Upper Saddle River, NJ: Prentice Hall International.

Thompson, K. M., Burmaster, D. E., & Crouch, E. A. C. (1992). Monte Carlo techniques for quantitative uncertainty analysis in public health risk assessments. Risk Analysis, 12(1), 53-63.

Vasquez, V. R., & Whiting, W. B. (1998). Uncertainty of predicted process performance due to variations in thermodynamics model parameter estimation from different experimental data sets. Fluid Phase Equilibria, 142, 115-130.

Vasquez, V. R., & Whiting, W. B. (2000). Uncertainty and sensitivity analysis of thermodynamic models using equal probability sampling (EPS). Computers and Chemical Engineering, 23, 1825-1838.