Applications of Probabilistic Risk Assessments: The Selection of Appropriate Tools

Joanne Linnerooth-Bayer and Björn Wahlström

International Institute for Applied Systems Analysis, A-2361 Laxenburg, Austria

An early version of this paper was presented at the Second Conference of the European Section of the Society for Risk Analysis, IIASA, Laxenburg, Austria, April 2-3, 1990.

ABSTRACT

Probabilistic risk assessment (PRA) is an important methodology for assessing the risks of complex

technologies. This paper discusses the strengths and weaknesses of PRA. Its application is explored in three

different settings: adversarial policy processes, regulatory/licensing procedures, and plant safety audits. It

is concluded that PRA is a valuable tool for auditing safety precautions of existing or planned technologies,

especially when it is carried out as an interactive process involving designers and plant personnel who are

familiar with actual, everyday operations. PRA has not proven to be as well-suited in providing absolute

risk estimates in public policy debates concerning the acceptability of a technology, or for the licensing and

regulatory procedures. The reasons for this are discussed.

Key words: Probabilistic risk assessment, risk and policy making, regulatory processes, safety

improvements, public acceptance.


1. Introduction

In the United States, and to a more limited extent in European countries, risk analysis emerged during the

1980s as a major methodology for regulatory policy making.(1) As a part of risk analysis, quantitative risk assessment

includes both the identification of risks to public health and safety and to the environment, and an estimation of the

probability and severity of harm associated with them. Quantitative risk assessment has been applied extensively in

such areas as work place hazards, the transport of hazardous substances, public exposure to toxic chemicals, and the

development of technologies with a low probability of a potentially very high-consequence accident.

Risk estimates for low-probability events are problematic since historical data is usually inadequate to

calculate meaningful accident frequencies. Probabilistic risk assessment (PRA) circumvents this problem by

providing estimates of overall accident probabilities synthesized from the design and performance characteristics

of individual components of the technology, such as pipes, pumps, valves, pressure vessels, control equipment and

human operators. In effect, PRA is a simplified model of the system and its operation.

It has been viewed as advantageous to separate risk assessment from risk management,

that is, to separate the scientific findings and policy judgments embodied in risk assessments from the political,

economic, and technical considerations that influence the choice of policy.(2) This separation is steadily becoming

more difficult as risk issues and technologies become more complex and consequently require increasingly difficult

judgments regarding the definition of issues, the selection of analytical structure, the designation of assumptions,

the interpretation of data, and the use of results. Large uncertainties, and even ignorance, dominate many areas of

risk to the extent that the very lack of knowledge is unsuspected.(3)

Ravetz suggests that information provided by such applied methodologies as PRA (as distinct from the

methodology itself) be viewed as a `tool' which must be shaped so as to be robust in the performance of its intended

purpose.(4) This tool metaphor implies that probabilistic risk assessment may be more appropriate for some

applications than for others. Since their inception in the late 1960s, three distinct uses of PRAs have emerged:

(1) to provide overall risk estimates of a technology for policy making purposes, public inquiries, and

environmental impact statements;

(2) to aid regulatory decisions and licensing procedures by providing quantified estimates of risk and

risk reductions; and


(3) to identify potential safety improvements in plant design and operation, quality control, backfitting

measures, and maintenance.

The first, and to some extent the second, of these applications are meant to aid political policy processes

by providing comparative data on the risks of the contested or regulated technology, whereas the third application

mainly provides information to the industry for its internal decisions on safety improvements. Crucial as this

distinction between public policy making and decision making within the industry is, it is often not fully appreciated

when considering the applicability and value of PRAs. Similarly, whether a PRA is employed to assess the risk of

a technology before its deployment, or to improve an existing plant, is a related and equally important consideration

in judging its usefulness.

In this paper we review the PRA methodology and examine its use in three different contexts: adversarial

policy settings, regulatory procedures, and industry design and safety audits. We conclude that one of its more

valuable uses may be as an interactive tool that brings together safety analysts, plant designers and operators for the

purpose of improving a plant design or for continuously monitoring and improving the safety of plant equipment and

procedures. The use of PRA for establishing an absolute level of risk that can both inform and guide pluralistic

policy processes is more limited. This is due to the nature of political procedures which make tacit demands for

objective risk numbers, as well as to the nature of the PRA methodology which can only provide estimates from

existing data and expert judgment. Institutional reforms are needed to better integrate the judgment of experts with

the `common wisdom' of those affected by the policies they influence.

2. Probabilistic Risk Assessment (PRA)

Probabilistic risk assessment is a methodology for assessing and improving the safety of a technology. The

methodology entails the construction of possible chains of events called `event trees' which lead to unwanted

consequences or, working backwards, constructing chains of faults called `fault trees' in the search for accident

precursors. The risks are quantified by calculating an estimate of the probability of these event or fault sequences


and combining this with an estimate of the damages. The damages are usually expressed in terms of lives lost,

injuries, plant and property loss, and sometimes ecological harm.

In the nuclear industry, probabilistic methods for assessing plant safety were introduced as an alternative

to deterministic methods which have been the basis of most safety criteria in the past, e.g., the single-failure criterion

and the fail-safe principle. Designing redundant safety systems and applying `defence-in-depth' principles is

facilitated by the use of probabilistic methods for assessing the risks. A deterministic approach adopts conservative

assumptions, and consequently focuses on worst-case accident scenarios which provide an unrealistic picture of the

safety systems and give little guidance on the relative rankings of safety improvements.(5) The PRA methodology

consists of three general steps: (1) identification of the events that initiate and lead to unwanted consequences; (2)

modeling of the identified event sequences with respect to probabilities and consequences; and (3) quantification

of the risks involved.
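To make these three steps concrete, the following minimal sketch quantifies a miniature two-gate fault tree in Python. Everything in it is assumed for illustration: the component names, the numbers, and the independence of the events; it is not drawn from any plant study.

```python
# Minimal fault-tree quantification sketch. All component names and
# probabilities are hypothetical, chosen only to illustrate the arithmetic.

p_main_pump = 1e-3       # assumed failure probability per demand
p_backup_pump = 5e-3     # assumed
p_operator_error = 1e-2  # assumed probability the operator fails to start backup

# OR gate over independent events: P(A or B) = 1 - (1 - P(A)) * (1 - P(B))
p_backup_path_fails = 1 - (1 - p_backup_pump) * (1 - p_operator_error)

# AND gate over independent events: P(A and B) = P(A) * P(B)
# Top event (loss of cooling): main pump fails AND the backup path fails.
p_top_event = p_main_pump * p_backup_path_fails
print(f"top-event probability per demand: {p_top_event:.2e}")  # about 1.5e-5

# Step 3 combines the sequence probability with a consequence estimate.
consequence = 10.0  # assumed damage measure, arbitrary units
print(f"risk contribution of this sequence: {p_top_event * consequence:.2e}")
```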

A PRA can be carried out at different levels of detail. In the nuclear field there are three levels of PRA.

Level 1 considers only the probability of a reactor core melt; Level 2 considers releases of radioactivity to the

environment; and Level 3 considers the resulting individual and societal risks. The latter is often referred to as a

comprehensive or large-scale risk assessment.

Following the first comprehensive application of PRA, the U.S. Reactor Safety Study in 1975,(6) more than

15 large-scale (Level 3) PRAs have been carried out for nuclear power plants in the U.S.(7) In Sweden, as of 1986,

seven PRAs had been completed.(8) A major nuclear risk study in Germany(9) assessed the risks of 19 sites with a

total of 25 reactors. PRAs have found applications in many other areas, including production of chemicals, LNG

transport and storage, the transport of dangerous materials, oil drilling rigs, aircraft safety, and the space industry.

The methodology has also been used for comparing the risks of different technologies.(10)

There is little doubt that probabilistic risk assessments have contributed significantly to our understanding

of technological risks. In the nuclear field, for instance, PRAs have led to retrofitting actions that have decreased

the estimated frequency of core melts. The studies have also demonstrated that accidents beyond the `design base'

are major contributors to risk and that human interactions are extremely important.(11) Moreover, comparative risk

studies have provided important insights as to how the risks of different technologies are related. However, despite

these and other contributions there are acknowledged limitations of the methodology which affect its applicability.(12)

2.1. Completeness and failure data


The main limitations of the PRA methodology include problems of ensuring the completeness of the

analysis and the adequacy of the data. No analysis can be guaranteed complete, which in the case of PRA means

considering everything of significance that might go wrong. This is the most crucial problem of the

methodology, since omissions with serious consequences can always remain. The problems of completeness typically

include human interactions and modeling of common-cause failures.

A cut-off level for small risks has to be assigned for practical reasons. This means that the analyst must

determine which event sequences are of such low probability that they can be ignored. In cases where these

sequences are very unlikely, no significant error will be made. However, unexpected common-cause failures can

lead to estimates that understate the probability by several orders of magnitude, thereby rendering the

simplification unjustified. As new information emerges, earlier estimates can appear to be too optimistic, as was

demonstrated by intergranular stress corrosion cracking in boiling water reactor piping.(13)
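The size of this effect can be illustrated with the beta-factor model, one common device for representing common-cause failures; the failure probability and the beta value below are assumed, not taken from any study:

```python
# How an unmodeled common-cause failure can understate risk (assumed numbers).

p_single = 1e-3  # assumed per-demand failure probability of one redundant train

# Independence assumption: the two trains fail together only by coincidence.
p_independent = p_single ** 2  # 1.0e-6

# Beta-factor model: a fraction beta of all failures disables both trains at once.
beta = 0.1  # assumed common-cause fraction
p_both = beta * p_single + ((1 - beta) * p_single) ** 2  # about 1.0e-4

print(f"independence assumption: {p_independent:.1e}")
print(f"with common-cause term:  {p_both:.1e}")
# The independence assumption understates the probability by roughly two
# orders of magnitude, which is the kind of error described in the text.
```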

External events such as earthquakes, aircraft crashes, floods and extreme weather conditions can also be

major contributors to accident risks. Since the external environment involves greater complexity and less structure

than do internal plant operations, these estimates are often very uncertain. There is also a tacit assumption in many

risk assessments that the plant is built as designed and is adequately maintained. Violations of safety technical

specifications and sabotage are rarely included in the studies.

The treatment of common-cause failures, i.e., multiple failures resulting from the same cause, can be

problematic in a PRA. The difficulty of modeling these failures has been evidenced by large differences in their

inclusion in recent PRAs. These differences have been attributed not only to the dissimilarity of plants but also to

the analysis team, the purpose of the study, and the analytical methods used.(14)

The significance of human interactions with technological systems, another limitation of the methodology, was

demonstrated by the accidents at Three Mile Island, Bhopal and Chernobyl. However, in spite of two decades of

research in the human factors area there are still no practical methods, other than expert judgement, available for

identifying human malfunction and for assigning error probabilities.(15)

The accuracy of the data base may also be a limiting factor. The empirical experience on component failure

is often good enough for reasonable statistical extrapolations in terms of failure rates, but the question always remains

as to whether these failure rates can be transplanted from one context to another.

2.2. Assumptions and the use of expert judgment


The results of the analyses depend on a series of analytical assumptions and judgements which must be

made when modeling the systems and combining the models with data obtained from historical experience. A

considerable amount of expert judgement will be used in this process. This judgement can be valid and useful when

recognized as such.(16) In fact, engineering and analytical judgements enter a PRA analysis in many diverse ways:

characterizing the risks; choosing how to fill gaps in the data and indicating which contingencies should simply be

left out; modeling complex physical phenomena; portraying the degree of confidence in the results; and choosing

which formats to use for presentation.(17) Throughout the analysis, assumptions must be made and all assumptions

require judgments as to their appropriateness. In addition, experts themselves are immersed in their own analytical

frameworks and work under scientific norms that encourage a precision that can lead to difficulties in appreciating

variances within the socio-technological systems.(18)

Considering the inherent subjectivity of the studies, it is not surprising that the results often vary

significantly, even when different analyses are carried out for the same plant. Experience from reliability benchmark

calculations, for instance, has shown that probability estimates can range over two orders of magnitude.(19) This

variation was also demonstrated in an early review of the use of PRAs to estimate the risks of liquid-natural-gas

storage facilities.(20) The results of the PRAs for one planned site, when expressed in terms of the individual risk to

people living in the vicinity of the facility, differed by orders of magnitude. This was not due to deficiencies in the

analyses but rather to subtle differences in the specification of the problem, assumptions, and the models employed.

This variability in the results of PRAs has also been documented in the nuclear field. Current core damage

frequency estimates for US reactors range from about 10⁻⁵ to 10⁻³ per year.(21) This variance is not due solely to

different plant designs and site characteristics; as Fullwood and Hall have noted, the differences also stem from the

scope of the studies, the PRA methods employed, and the analytical assumptions.(22) Large variations in PRAs due

to differences in modeling approaches have also been documented in Sweden.(23) In a study on the state-of-the-art

in PRA, the U.S. General Accounting Office concluded that inconsistencies in PRA results limit their

comparability and represent a fundamental problem of the methodology.(24)

2.3. Expressing uncertainties

The variability of different PRAs underscores the importance of estimating and reporting the underlying

assumptions and uncertainties. End results should be documented not as single numbers reflecting whatever measures

for risk are chosen, but rather as probability distributions reflecting the uncertainty in these numbers. Indeed, the


expression of uncertainties is one of the basic ideas of the PRA methodology. As the IAEA points out, a main

advantage of PRA over more deterministic estimates is that a substantial part of the uncertainties can be identified

and quantified.(25)

For the most part, PRA uncertainties stem from the completeness of the analysis, the modeling accuracy,

and the adequacy of the parameter estimates. As to the latter, the uncertainties can be calculated by propagating the

probability distributions of the data through the analysis, assuming, however, that the data is appropriate for the plant

in question. The uncertainties arising from the inherent incompleteness of the analysis and the model accuracy are

more elusive. These are generally dealt with by the use of sensitivity analysis, although Bayesian methods have also

been proposed.(26)
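As a sketch of how the parameter uncertainties can be propagated, the following Monte Carlo pass samples assumed lognormal distributions for two component failure probabilities (the medians and error factors are invented for illustration) and reports percentiles of the top-event estimate rather than a single number:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo samples

def lognormal(median, error_factor, size):
    """Sample a lognormal given its median and error factor.

    The error factor is taken here as the ratio of the 95th percentile to
    the median, so sigma = ln(EF) / 1.645.
    """
    sigma = np.log(error_factor) / 1.645
    return rng.lognormal(mean=np.log(median), sigma=sigma, size=size)

# Assumed uncertainty distributions for two component failure probabilities.
p_a = lognormal(1e-3, 3.0, n)
p_b = lognormal(5e-3, 10.0, n)

# Propagate through a simple AND gate and report percentiles, not a point value.
p_top = p_a * p_b
p5, p50, p95 = np.percentile(p_top, [5, 50, 95])
print(f"5th percentile: {p5:.1e}, median: {p50:.1e}, 95th percentile: {p95:.1e}")
```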

The practice of reflecting uncertainties in the analyses is somewhat different from the theory. Hirschberg,

for instance, points to examples of careless treatment of uncertainties in nuclear PRAs. "No clear distinctions are

usually made between the parametric, modeling and incompleteness uncertainties and their relative importance. The

analysts are not always aware of which type of framework (frequentist or subjectivist, i.e., Bayesian) they actually

apply...".(27)

Similar problems were reported much earlier with respect to the PRAs carried out for LNG storage

facilities. The analysts invariably overstated the confidence of the results by failing to acknowledge the existence

of experts and models with conflicting results; by eliminating some accidents from consideration based on the

judgement of the analysts, but failing to acknowledge the possibility of this judgement being erroneous; and by

choosing parameters that sometimes differed by several orders of magnitude without qualifying the results with a

discussion of the uncertainties.(28) A recent OECD study in which a number of non-nuclear risk assessments were

examined lamented that the problem of treating uncertainties is often raised but rarely tackled in any depth.(29)

The failure to be explicit about uncertainties is also an issue with respect to risk assessments of toxic

chemicals. Finkel,(30) who advocates that uncertainty analysis be incorporated in risk assessments and risk

management decisions, warns that artificially precise but very fragile numbers may mask the fact that analysts are

ignorant of or ignoring uncertainties. Even the expression of uncertainty, however, requires "judgment calls" since,

as Finkel recognizes, "uncertainty assessment is both a science and an art, and it does not always aspire to scientific

`truth' or rigorous scientific standards".(31) Judgments must be made in documenting the confidence of the results

without being so inclusive as to engender paralysis.


2.4. Complexity

The increasing complexity and interdependence of technological systems raises a number of issues. Large-scale

nuclear risk assessments, for example, encompass a myriad of different systems requiring the involvement of

expertise from widely different areas. As Wynne(32) has noted, the description of possible chains of events is so

complicated and open to interpretation that there is often room for dispute after an accident as to whether this was

described in the PRA or not. The sheer magnitude of the calculations can be daunting. A PRA for a nuclear power

plant requires thousands of parameter estimates and the report may contain several thousand pages. This hinders

meaningful communication of the results to the final users of the study.

Yet nuclear power generation is a relatively simple and well-understood technology; many types of

chemical plants are far more complex and less well understood.(33) While PRA, with its `divide-and-conquer' approach,

is arguably suited for assessing the risks of complex systems, it may be appropriate only for those complex systems

that are, at the same time, structured and well defined. This may lead the assessment process to frame the risk issues

in exactly these terms. The danger is that the risk issue is narrowed or bounded to match the limitations of the

assessment methodology.

There is a natural tendency to define a problem in such a way that the analytical results are valid and

credible. This tendency has been apparent in the nuclear field. As an official at the U.K. Atomic Energy Authority

observed, hazards which can be evaluated with confidence have been given comparatively more attention than other

hazards, even those which were considered salient in the Reactor Safety Study but for which the analytical tools have

not greatly advanced since that time.(34) Early PRAs focused on a single plant and mainly on hardware malfunctions

leading to accidents. It was natural to focus on hardware malfunctions, according to Hirschberg, as estimates could be

supported by reliable data and were therefore more credible and less controversial.(35) Moreover, accident

consequences have usually been defined in relatively narrow terms, that is by plant and property damage and health

effects to the immediately surrounding population. Far less attention has been given to the possibility of more diffuse

consequences such as psychological effects, social disruption, and threats to the ecological sustainability of a region.

As assessment methodologies have become more mature, technical risk issues have expanded in scope to

include factors more difficult to quantify. This broadening of the analytical scope has been positive in the sense that

it has simultaneously broadened the whole concept of technology as it relates to social and ecological systems. Quite

naturally, however, this has led to increasingly `soft' and less precise analytical results, but results which give a more

sensitive picture of the social and environmental ramifications of complex, technological systems.


With global risk problems emerging on political agendas, the question of analytical scope has become more

urgent. In many risk areas, analysts can no longer justifiably consider only the consequences of an accident at a

single or several similar plants and exclude consideration of perhaps far-reaching consequences in the event that the

technology proliferates in many different forms over the globe. Completeness in the PRA methodology might thus

be accompanied by an equally important concept of completeness with respect to the scope of the risk issue: the risk

to whom and over what time scale? Although this issue does not challenge the appropriateness of PRA, per se, with

its focus at the plant level, it is important to keep in mind the broader issues when considering the uses, and

limitations, of the PRA methodology.

3. PRA in Public Policy Debates

PRA is used in public policy procedures in three main ways: The first, which has been most visible in the

nuclear field, is to provide a basis for debate on the social acceptability of a technology. One motive for these studies

has been to clarify the risks and, in so doing, to narrow the debate to the areas of real disagreement. In the energy

area, this has often entailed carrying out comparative risk studies of different ways of generating electricity.(36) A

second use concerns the licensing of hazardous technologies, where the public authorities are called upon to give

approval for a site. PRAs are produced as quantitative evidence of the public risks involved. The third major use is

to satisfy regulatory requirements on existing or planned facilities. This application has intensified in Europe

following the implementation of the European Community's Seveso Directive.

An early comparative study on the use of PRA in siting procedures for liquid natural gas terminals in four

countries concluded that technical risk analyses contributed little to building a consensus concerning the safety of

competing sites.(37) The expert disagreements even appeared to be reinforced by the quantitative studies. In a more

recent comparative study of nuclear risk analysis carried out in five countries, this finding was repeated: The results

showed that the risk studies "enlarged, not narrowed, the debate. Instead of shaping an accepted discourse for the

discussion of risk, they intensified the conflict ... Instead of greater acceptance of the bases upon which numerical

calculations rest, they heightened suspicions over the motivations for the studies and those who conduct them".(38)


There appears to be a mismatch between the proclaimed purposes of PRA in promoting consensus through

greater understanding of the risks and the actual uses made of PRAs in pluralistic policy debates. One reason is

undoubtedly the complexity of the problems and the PRA methodology itself, which makes it difficult even for

specialists to acquire a balanced view of the evidence provided in a study. Another reason may stem from the rather

fundamental differences between individual decision making and public policy making. Policies, unlike individual

decisions, need to gain a consensus in order to be viable. A consensus within and/or beyond an organization can only

be reached with convincing and institutionally appropriate arguments.(39) In the Swedish nuclear debate, for example,

Hansson observed that the energy risk studies "were predominantly used in argumentative contexts, in most cases

as support for particular views, but also as objects of criticism".(40)

The role of argumentation in the policy arena is crucial for appreciating the use of PRA studies and their

frequent misfit in policy debates. In a policy context, the interested parties often compete for credibility and influence

by appealing to scientific evidence which can be interpreted in such a way as to support and justify their arguments.

The best justification is viewed as `solid' factual evidence, a demand which cannot be fulfilled by PRAs where the

estimates are informed to an important extent by expert judgment. This has led stakeholders to underplay the

uncertainties of the analyses, or to ignore them altogether.(41) Rather than focusing the debate on the range of judgments

and the uncertainties of the analyses, this has pitted experts against experts. As Fiorino has observed, "Science and

expert knowledge have not taken the politics out of technically-based policy issues, as many observers expected only

decades ago. Instead, the increasing involvement of technical experts in policy disputes has politicized expertise".(42)

In addition to the difficulties of bringing subjective estimates as evidence in pluralistic policy processes,

there are other important reasons why PRAs have not led to more `rational' debate. The risk perception research

demonstrates that lay judgements of risk are influenced by a number of factors not acknowledged in the technical

risk concept embedded in PRAs. Voluntary and familiar risks are of less public concern than those that are assumed

involuntarily, dreaded, or viewed as potentially catastrophic.(43) These differential risk perceptions cannot, however,

fully explain the intense conflicts witnessed in relation to many risk issues. In fact, the notion that differential risk

perceptions are fundamental to technological risk conflicts, or even that probability of harm plays an important role

in public concern, has been challenged. If technically defined risk, or risk as probability times harm, is not fully(44)

relevant to public concern, then the rationality implicit in risk calculations may not be shared by many individuals

of the public. The idea of `plural rationalities', which has been put forward by cultural anthropologists,(45) suggests

that individuals have different views of nature and that these views are rooted more in social contexts than in


individual perceptions of threat. The acceptability of risky technologies is colored, if not determined, by these social

contexts. For example, according to Schwarz and Thompson, the controversy over nuclear waste is viewed

differently by "those who are convinced that the wastes we already have are already pointing the finger of death at

thousands of children not yet born" and "those who would, as the saying goes, happily sprinkle that same waste on

their cornflakes".(46)

The fact that individuals have different ways of viewing the issues and different logics for their resolution,

or different concepts of rationality and reasonableness, points to the difficulty of arriving at a social consensus only

through a technical definition of risk. This consensus requires democratic procedures for extended debate and

deliberation that allow the full participation of all stakeholders in the process of policy making. But this participation,

as good as it is in theory, is problematic in practice, especially in view of the increasing complexity of technological

systems. The gap may be widening between the expectations of people regarding their capacity to influence decisions

and the realities of policy making.

Few groups outside the technical community, for example, have the requisite expertise and resources to

review a large-scale PRA, which can easily exceed ten volumes with thousands of pages of technical appendices.(47)

Institutional reforms that assure financial and other resources to intervenor groups are one (much discussed)

improvement. However, a more serious impediment to participation may be presented by the difficulties in framing

debates so that diverse public concerns each receive a hearing. Yet if, as Fiorino argues, risk analysis is viewed as

a political process informed by expert judgment rather than an expert process with token citizen representation,(48)

refining procedures for communication and participation may then be as crucial to risk management as refining the

assessment methodologies.

4. PRA in Regulatory Procedures

The purposes of probabilistic risk assessment in the regulatory process are varied. Some are required, as

discussed above, for addressing issues of social acceptability, others for checking whether the plant meets safety

requirements or whether the licensee has a good understanding of the hazards, and some are used to develop

emergency plans. Risk is almost always defined as the probability of harm to the public, although environmental


damage is also entering the assessments. Risk assessments, while not always quantitative, have become practically

a routine requirement for many licensing procedures. Most new industrial facilities in Western Europe are subject

to planning or licensing procedures which require some sort of risk assessment. The so-called Seveso Directive of

the European Community(49) requires that probabilistic risk assessments be carried out for some 1800 hazardous

facilities now in operation. While there is no federal legislation in the U.S. and Canada requiring operating licenses

(except for the nuclear industry), federal projects in the U.S. are subject to Environmental Impact Statements which

are increasingly concerned with accidental risks.

The Netherlands is a forerunner in requiring quantitative, probabilistic risk assessments. Safety targets in

terms of acceptable and unacceptable risks to individuals (i.e., annual probability of death to a person) and to society

(i.e., the cumulative accident frequency) have also been proposed.(50) The acceptable risk criteria inherent in these

safety targets also underlie the `bottom line' approach proposed in the Clean Air Act amendments being considered

by the U.S. Congress. According to this approach, a point estimate separates reasonable from unreasonable risk. The

Netherlands and the U.S. are the only countries proposing such explicit, quantitative regulatory criteria, but the

OECD reports that quantitative safety targets are in use, if mostly on an informal level, in practically all its member

countries.(51) Nuclear legislation, for example, makes frequent reference to them, and there is a great deal of interest

in safety targets at the industrial level.(52)
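For illustration, a societal-risk target can be written as a frequency-fatality (F-N) criterion line against which computed accident sequences are checked. The sketch below is hypothetical throughout: the criterion constants and the accident sequences are assumed, although the Dutch proposals used quantitative criteria of roughly this shape:

```python
# Sketch of checking accident sequences against a societal-risk criterion line
# F(N) <= C / N**a. The constants and the accident sequences are hypothetical.

C, a = 1e-3, 2.0  # assumed anchor constant and slope of the criterion line

# Hypothetical accident sequences: (frequency per year, number of fatalities).
sequences = [(1e-4, 5), (2e-6, 50), (1e-7, 500)]

for n_fatalities in (10, 100, 1000):
    # Cumulative frequency of accidents causing N or more fatalities.
    F = sum(freq for freq, n in sequences if n >= n_fatalities)
    limit = C / n_fatalities ** a
    verdict = "within target" if F <= limit else "exceeds target"
    print(f"N >= {n_fatalities:4d}: F = {F:.1e}, limit = {limit:.1e} -> {verdict}")
```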

While government officials in France and Germany are familiar with these criteria and even employ them

on an informal basis, they consider them to be too rigid and inappropriately removed from the social context to be

incorporated into formal criteria. This reservation was also voiced in a recent report by Bengtsson, who concluded

that PRAs are not mature enough to permit stringent comparisons of quantitative `bottom-line' estimates with

prescribed safety goals.(53) The same conclusion could be drawn with respect to comparing PRA results with other

types of risks, for example, with risks in nature, with risks that people knowingly accept in their everyday lives, or

with risks from alternative technologies. Such comparisons require an absolute measure of risk, e.g., the number of

expected deaths per year in a certain population or an expected frequency of deaths for the exposed population.

Where uncertainty prevails, as is typical in PRAs, ranking different risks simply by comparing their point estimates

is problematic. If the uncertainty of the estimates is taken into account, this might well change the rank order.(54) As

Finkel has pointed out, a risk assessment is not always suited to prioritizing within a spectrum of risks if it has

difficulties assigning absolute values to them.(55) Moreover, assigning conservative estimates to the parameters as a

way of circumventing the uncertainty can only aggravate the problem of comparison and prioritization.
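A small numerical sketch (all figures assumed) shows how taking uncertainty into account can upset a point-estimate ranking: risk B has the higher median, but risk A, with its much wider uncertainty, has the higher mean, so the rank order depends on which summary of the distributions is compared:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two hypothetical risks as lognormal annual-frequency distributions:
# A has the lower median but much larger uncertainty; B is well characterized.
risk_a = rng.lognormal(mean=np.log(1e-5), sigma=2.0, size=n)
risk_b = rng.lognormal(mean=np.log(3e-5), sigma=0.3, size=n)

print(f"medians: A = {np.median(risk_a):.1e}, B = {np.median(risk_b):.1e}")
print(f"means:   A = {np.mean(risk_a):.1e}, B = {np.mean(risk_b):.1e}")
print(f"P(A > B) = {np.mean(risk_a > risk_b):.2f}")
# By the medians B looks riskier; by the means A does (its mean is pulled up
# by the heavy upper tail), so the ranking flips with the choice of summary.
```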


This does not mean that PRAs cannot be useful for regulatory decision making, but it does suggest extreme

caution in making risk comparisons. It also suggests that a good understanding of the uncertainties, assumptions and

subjective judgements underlying the estimates may be as informative as the estimates themselves. If these judgments

are exposed and made scrutable, then PRA can be a far more valuable tool for policy makers than more deterministic

methods that do not lend themselves to such scrutiny.

This potential, however, tends to be limited by the political and institutional settings in which PRAs are

carried out. Especially in the U.S., where the system of judicial oversight requires supportive and preferably

quantitative evidence to defend agency decisions, the need for ex post justification may reduce the potential for

candid assessment. The same tendencies can be seen in European government policies which are increasingly

challenged by environmental groups.(56)

As expertise becomes more politicized, institutional reforms are needed to accommodate the results of

PRAs and other types of analyses that rely on expert judgment. But just as analytical expertise formed by judgment

is valuable to political decisions, so is the wisdom of those who are affected by or merely concerned about these

decisions. Funtowicz and Ravetz(57) argue for a new type of science to accommodate problems that do not have neat

solutions, when the phenomena themselves are ambiguous and the mathematical techniques open to methodological

criticism. This science should include local wisdom, since those whose lives depend on the solution of these

problems will have a keen, `back yard' awareness of the general principles involved.

5. PRAs for Safety Audits

Given the justifiable skepticism regarding the use of PRAs for providing absolute estimates of overall safety

levels,(58) a seemingly more modest, but perhaps more valuable, use of the tool is to improve the understanding of

safety systems within a plant and to create an improved awareness of safety in its design and operation. This

application is sometimes referred to as reliability analysis, which can be carried out as a compulsory part of licensing

or regulatory procedures, or as an internal safety audit initiated by the plant management. Its main purposes are: to

identify weak points in overall safety precautions; to improve safety systems; to optimize safety technical

specifications; to select schedules for tests and maintenance; and to determine the need for operational


improvements. These analyses usually rely on fault trees which, even without quantification, can yield significant

insights into the structure of the system.

A recent development is the so-called `living PRA', which provides a continuously updated representation

of the plant that allows the integration of new experience and plant modifications. Such a continuously evolving

assessment can improve plant safety management by promoting the effective use of new information. The evaluation

of operational experience also depends on such a context to be beneficial.(59) These approaches can help to overcome

the usual split between safety assurance and the reliability segments in the organizational structures of large

utilities.(60) The `living PRA' can, in principle, begin at the design stage before the plant is actually constructed. At

this stage, the PRA is naturally crude since it requires information about how the plant will eventually be built, but

it is valuable in that it draws attention to the safety of the overall system. The PRA can then be refined as the design

and construction process progresses.
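One simple way to realize the updating idea behind a `living PRA' is conjugate Bayesian revision of component failure rates as operating experience accumulates. In the sketch below, the Gamma prior and the yearly failure counts are assumed for illustration:

```python
# Gamma-Poisson updating of a component failure rate, the kind of revision a
# `living PRA' performs as operating experience accumulates. The prior and
# the yearly records below are hypothetical.

alpha, beta = 0.5, 1.0e5  # assumed Gamma prior; mean alpha/beta = 5e-6 per hour

yearly_records = [
    (2, 8760.0),  # (failures observed, operating hours), year 1
    (0, 8760.0),  # year 2
    (1, 8760.0),  # year 3
]

for year, (failures, hours) in enumerate(yearly_records, start=1):
    alpha += failures  # conjugate update of the Gamma shape parameter
    beta += hours      # and of the rate parameter (accumulated exposure time)
    print(f"after year {year}: estimated failure rate = {alpha / beta:.2e} per hour")

# New experience shifts the rate estimates that feed the fault trees; plant
# modifications would instead prompt re-building the affected tree structure.
```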

The value of the PRA methodology for improving the understanding of plant safety may be greatly

enhanced if the tool is developed by those persons, the system experts and plant operators, who are intimately involved in plant

operations. Carried out as an internal and interactive exercise, the value of the PRA may lie more in the process than

in the final risk quantification. Since the methodology builds on a knowledge of the system behavior under normal

and adverse conditions, it can obviously benefit from informed knowledge of the everyday, routine conditions of

the system and, most importantly, how the plant and equipment are actually being operated. This knowledge can be

enhanced with the involvement of the operators and other staff in the PRA effort. This involvement not only has the

potential of improving and enlightening the analysis, but it can be useful as a training exercise as well.

As promising as a PRA may be in a process of internal and interactive assessment, this application appears

more the exception than the rule. PRAs are rarely carried out for internal safety purposes except in the anticipation

of public acceptance problems. Moreover, plant personnel are not generally involved in the assessment process.(61)

While big chemical and petroleum companies usually have specialized risk assessment departments, most PRAs are

carried out by consulting firms and their subcontractors.(62) The risk assessment activities of these departments and

firms aim mainly at demonstrating compliance with regulations, which means that the feedback to actual day-to-day

operation can be superficial.

In some cases, however, there are positive `spin-offs' in the form of genuine safety improvements during

the PRA process. In fact, the regulatory relationship is sometimes such that improvements in the plant are negotiated

continuously during the regulatory assessment procedures. This is not always the case, however, and there is a


concern that many analyses carried out for regulatory purposes by outside consultants have little relationship to or

feedback for the actual operation of the plant. Increasing standardization of the methodology with the use of

computerized packages, which has considerably decreased the costs of analysis and made tailored analyses possible,

has, at the same time, led to concern that there is too little focus on the nuances of different plants.

6. Concluding Remarks

Probabilistic risk assessment has contributed to our understanding of many complex technological systems.

This contribution has relied on the strengths of the PRA tool, namely, decomposing complex systems and

extrapolating failure rates derived from historical operating data on the component parts. This is borne out by

experience, which shows that the PRA methodology is reasonably well suited for identifying safety improvements

in plant design and operation. In this application even relative estimates of the effects of proposed safety measures

can be informative, although it should be kept in mind that a rank ordering can be misleading when the uncertainties

are large. This application can also be based on relatively consistent assumptions since the analysis is not comparing

risks of entirely different technologies. Most importantly, the methodology can reveal large oversights in the safety

design, although smaller omissions become increasingly difficult to detect.

Although experience is more limited, the methodology can also provide a basis for safety management by

continuously integrating operational experience. The use of PRAs for improving plant safety might be enhanced if

carried out as an internal exercise, either at the initiative of the plant management for improving safety or for

fulfilling regulatory requirements. An interactive study that brings together personnel involved in plant operations

on a daily basis to identify event and fault sequences, and even to estimate their probabilities, can enhance safety

awareness and give important insights on the safety of actual operations without requiring precise, absolute risk

figures. In this use, the process of carrying out a PRA may be as important as the results.

A more problematic use of the tool is in public policy procedures, especially the generation of point

estimates of overall risk to compare with established safety targets or with other risk sources. This assumes a

precision in the results that cannot be produced in practice. This does not mean that an application of the

methodology cannot provide useful and important insights for comparative analyses. Even without precise risk


estimates, such studies can identify important risk contributors and their sometimes major uncertainties in the life

cycle of a technology. However, it cannot be assumed that the technical definition of risk embodied in PRAs is

central to public concern regarding hazardous technologies.

The PRA methodology relies to an important extent on expert judgment in characterizing the risks, deciding

on the appropriateness of the data, incorporating human interactions, modeling complex physical phenomena, and

even in interpreting the results. To the extent that analytical judgments and assumptions are made transparent and

scrutable, the tool can be valuable in exposing the source of the probability estimates and the uncertainty surrounding

these estimates. In theory, then, a PRA can provide public policy debates with clearer and more informative evidence

on low-probability risks than more deterministic assessments. That this potential has not been fully realized in

actual policy procedures can be partly attributed to the mismatch between multi-stakeholder processes, where tacit

demands are often made on analysts to provide `objective' risk numbers, and the PRA methodology, where the

estimates are informed by analytical judgment. This mismatch suggests that institutional reforms that can better

accommodate subjective scientific evidence (as well as `local wisdom') are as important for effective and credible

public policy as refinements of the PRA methodology. Any strategy for change will call for greater knowledge as

to how political and institutional forces affect the conduct and use of PRAs.(63)

Notes

1. R. Zimmerman, "The Management of Risk," Risk Evaluation and Management (V. Covello, J. Menkes, and J. Mumpower, eds.) 435-460 (Plenum, 1986).

2. U.S. National Academy of Sciences, Risk Assessment in the Federal Government: Managing the Process (Washington, D.C.: NAS, 1983).

3. J. Ravetz, "Uncertainty, Ignorance and Policy," Science for Public Policy (H. Brooks and C. Cooper, eds.) 77-95 (Pergamon, 1987).

4. J. Ravetz, ibid.

5. S. Hirschberg, Dependencies, Human Interactions and Uncertainties in Probabilistic Safety Assessment (Nordic Liaison Committee for Atomic Energy, 1990).

6. U.S. Nuclear Regulatory Commission, Reactor Safety Study (WASH-1400, NUREG-75/014, Washington, D.C., 1975).

7. U.S. Nuclear Regulatory Commission, Probabilistic Risk Assessment (PRA) Reference Document (Washington, D.C.: The Commission, 1984).

8. B. Hansson, "Major Energy Risk Assessments in Sweden: Information Flow and Impacts," Nuclear Risk Analysis in Comparative Perspective (R. Kasperson and J. Kasperson, eds.) 50-84 (Allen and Unwin, London, 1987).

9. Gesellschaft für Reaktorsicherheit, "Deutsche Risikostudie Kernkraftwerke: Eine Untersuchung zu dem durch Störfälle in Kernkraftwerken verursachten Risiko" (Cologne: Verlag TÜV Rheinland, 1980).

10. H. Inhaber, Energy Risk Assessment (Gordon and Breach Science Publishers, 1982).

11. U.S. Nuclear Regulatory Commission, op.cit. 6.

12. R. Fullwood and R. Hall, Probabilistic Risk Assessment in the Nuclear Power Industry: Fundamentals and Applications (Pergamon, New York, 1988).

13. G. S. Holman, "Application of Reliability Techniques to Prioritize BWR Recirculation Loop Welds for In-Service Inspection," p. 4 (U.S. Nuclear Regulatory Commission, NUREG/CR-5486, 1989).

14. S. Hirschberg, op.cit. 5.

15. E. M. Dougherty, "Human reliability analysis - Where shouldst thou turn?" Reliability Engineering & System Safety, 29:3 (1990).

16. R. Keeney and D. von Winterfeldt, "On The Uses of Expert Judgement on Complex Technical Problems," IEEE Transactions on Engineering Management, 36:83-86 (1989).

17. J. Lathrop and J. Linnerooth, "The Use of Risk Assessments in a Political Decision Process" (IIASA Working Paper WP-81-119, International Institute for Applied Systems Analysis, Laxenburg, Austria, 1981).

18. B. Wynne, "Risk Assessment of Technological Systems: Dimensions of Uncertainty," Risk Management and Hazardous Waste, 269-310 (Springer, Berlin, 1987).

19. S. Hirschberg, op.cit. 5, pp. 4.1-4.59.


20. C. Mandl and J. Lathrop, "LEG Risk Assessments: Experts Disagree," Risk Analysis and Decision Processes (H. Kunreuther and J. Linnerooth, eds.) 148-177 (Springer, Berlin, 1983).

21. R. Fullwood and R. Hall, op.cit. 12, p. 279.

22. Ibid., p. 279.

23. S. Carlsson, S. Hirschberg, and G. Johanson, "Qualitative Review of Probabilistic Safety Assessment Characteristics," PSA '87 - International SNS/ENS/ANS Topical Meeting on Probabilistic Safety Assessment and Risk Management (Zurich, Switzerland, August 30-Sept. 4, 1987).

24. U.S. General Accounting Office (GAO), "Probabilistic Risk Assessment: An Emerging Aid to Nuclear Power Plant Safety Regulation," GAO/RCED-85-11 (Washington, D.C., 1985).

25. IAEA, Draft Guidelines for Conducting Probabilistic Safety Assessment of Nuclear Power Plants, Safety Series Report, p. 141 (1989).

26. C. Clarotti, "PSA, Subjective Probability and Decision Making," PSA '89 - International Topical Meeting on Probability, Reliability and Safety Assessment (Pittsburgh, Pennsylvania, April 2-7, 1989), and U. Pulkkinen, "Bayesian Uncertainty Analysis of Probabilistic Risk Models," PSA '89 - International Topical Meeting on Probability, Reliability and Safety Assessment (Pittsburgh, Pennsylvania, April 2-7, 1989).

27. S. Hirschberg, op.cit. 5, p. 4-61.

28. J. Lathrop and J. Linnerooth, op.cit. 17.

29. OECD, "Risk Assessment and Risk Management for Accidents Connected with Industrial Activities," Environment Monographs, 19:57 (Paris, 1989).

30. A. Finkel, Confronting Uncertainty in Risk Management: A Guide for Decision Makers (Center for Risk Management, Resources for the Future, Washington, D.C., 1990).

31. A. Finkel, ibid., p. xiv.

32. B. Wynne, op.cit. 18, p. 278.

33. OECD, op.cit. 29.

34. T. O'Riordan, "Nuclear Risk in the United Kingdom," Nuclear Risk Analysis in Comparative Perspective (R. Kasperson and J. Kasperson, eds.) p. 207 (1987).

35. S. Hirschberg, op.cit. 5, p. 3-43.

36. A. Fritzsche, "The health risks of energy production," Risk Analysis, 9:565-577 (1989).

37. H. Kunreuther and J. Linnerooth, Risk Analysis and Decision Processes: The Siting of LEG Facilities in Four Countries (Springer Verlag, Berlin, 1983).

38. R. E. Kasperson, J. Dooley, B. Hansson, J. Kasperson, T. O'Riordan, and H. Paschen, "Large-scale Nuclear Risk Analysis: Its Impacts and Future," Nuclear Risk Analysis in Comparative Perspective (R. Kasperson and J. Kasperson, eds.) p. 226 (Allen and Unwin, 1987).

39. G. Majone, The Uses of Policy Analysis (Yale University Press, New Haven, Conn., 1984).

40. B. Hansson, "Major Energy Risk Assessments in Sweden: Information Flow and Impacts," Nuclear Risk Analysis in Comparative Perspective (R. Kasperson and J. Kasperson, eds.) p. 82 (Allen and Unwin, Winchester, Mass., 1987).

41. J. Linnerooth, "The Political Processing of Uncertainty," Acta Psychologica 56:219-231 (1984).

42. D. Fiorino, "Environmental Risk and Democratic Process: A Critical Review," Columbia Journal of Environmental Law, 14:501-547 (1989).

43. P. Slovic, "Perception of Risk," Science 236:280-285 (1987).

44. H. Otway and D. von Winterfeldt, "Beyond Acceptable Risk: On the Social Acceptability of Technologies," Policy Sciences 14:247-256 (1982), and S. Rayner and R. Cantor, "How Fair is Safe Enough? The Cultural Approach to Societal Technology Choice," Risk Analysis 7:3-9 (1987).

45. M. Douglas, Cultural Bias, Occasional Paper no. 35 (London: Royal Anthropological Institute of Great Britain and Ireland, 1978), M. Douglas, Essays in the Sociology of Perception (Routledge and Kegan Paul, London, 1982), and M. Thompson, "Postscript: A Cultural Basis for Comparison," Risk Analysis and Decision Processes (H. Kunreuther and J. Linnerooth, eds.) (Springer, Berlin, 1983).

46. M. Schwarz and M. Thompson, Divided We Stand: Redefining Politics, Technology and Social Choice, p. 57 (University of Pennsylvania Press, 1990).

47. L. Lederman and F. Niehaus, "Probabilistic Safety Assessment (PSA) as a Tool for Risk Assessment and Management," Industrial Risk Management and Clean Technology (S. Maltrezow, A. McTry, and W. Irwin, eds.) 82-89 (1990).

48. D. Fiorino, "Technical and Democratic Values in Risk Analysis," Risk Analysis 9:293-299 (1989).

49. Council Directive of 24 June 1982 (82/501/EEC).

50. For a discussion, see OECD, op.cit. 29.

51. OECD, ibid.

52. L. Lederman and F. Niehaus, op.cit. 47, p. 87, and IAEA, "Status, experience and future prospects for the development of probabilistic safety criteria," IAEA-TECDOC-524 (1989).

53. G. Bengtsson, Risk Analysis and Safety Rationale: Final Report of a Joint Nordic Research Program in Nuclear Safety, p. ix (Nordic Liaison Committee for Atomic Energy, 1990).

54. A. Finkel, op.cit. 30, p. xvi.

55. A. Finkel, ibid., p. xvi.

56. R. Brickman, "Science and the Politics of Toxic Chemicals Regulations: US and European Contrasts," Science, Technology and Human Values 9:107-11 (1984).

57. S. Funtowicz and J. Ravetz, "Post-normal science: A New Science for New Times," Scientific European, 20-22 (1990).

58. U.S. General Accounting Office (GAO), op.cit. 24.

59. J. W. Minarick, "The USNRC Accident Sequence Precursor Program: Present Methods and Findings," Reliability Engineering and System Safety, 27:1 (1990).