
Integrated assessment and energy analysis: Quality assurance

in multi-criteria analysis of sustainability

Mario Giampietro a,*, Kozo Mayumi b, Giuseppe Munda c

a Istituto Nazionale di Ricerca per gli Alimenti e la Nutrizione, Via Ardeatina 546, 00178 Rome, Italy
b University of Tokushima, Faculty of Integrated Arts and Sciences, Tokushima City 770-8502, Japan
c Universitat Autonoma de Barcelona, Department of Economics and Economic History, Edifici B, 08193 Bellaterra, Spain

Abstract

Science for sustainability policy requires the handling of multi-dimensional and multi-scale analyses. Integrated assessment is about generating information relevant for decision-making. This generates a divide between two scientific paradigms: (1) 'Post-Normal Science' acknowledges the unavoidable existence of non-equivalent perceptions and representations of reality, legitimate but contrasting perspectives among social actors, and heavy levels of uncertainty. (2) 'Normal Science' believes that these challenges can be handled in a rigorous and rational way and that it is therefore possible to define in substantive terms 'the best course of action' for society. This paper explains the reasons for, and the tools developed by, scientists working within the Post-Normal Science paradigm.

© 2005 Published by Elsevier Ltd.

1. Introduction

1.1. The context of this paper

The conclusions of the second Biennial International Workshop Advances in Energy Studies include,

among other recommendations, a call for the scientific community to reframe quantitative analyses

within an ecological economics perspective and to develop more effective tools for decision makers [1].

In relation to this point, one of the technical sessions of that workshop—entitled ‘Energy and

Governance’—was dedicated to the development of policy and management tools for energy analysis to


deal with the need to consider different dimensions (economic, ecological, social) simultaneously in decision making.

This article is an attempt to blend together the content of two papers presented in that session. Both papers were dedicated to Integrated Assessment* [terms written in bold and with a star are defined in the glossary]. The original two papers focused on two issues: (i) the technical problems associated with handling integrated packages of indicators referring to different scales and dimensions of analysis; (ii) an overview of policy challenges and available tools for dealing with the obvious fact that different stakeholders* will hold different legitimate definitions of what should be considered an 'improvement' or a 'worsening' of an existing situation.

In the rest of this paper, we deal with these two issues and their relevance for energy analysis. In

particular, in this introduction we provide a general discussion of science for governance and

implications for the use of quantitative analyses in this field. We raise several issues related to the

operationalization of basic ideas of Post-Normal Science in relation to science for governance. In

Section 2 we provide an example of the problematic quantification of energy flows when perceived and

represented within a system organized in hierarchical levels on multiple scales (a critical appraisal of the

energetics of human labor). In Section 3 we provide typologies of difficulties faced when attempting a

characterization of the performance of electricity generation in a country on a multi-criteria space*.

Finally, in Section 4, we deal with the obvious but often neglected fact that any process generating scientific input used for governance should include an explicit task of 'quality assurance'. Such a task is required to guarantee transparency and accountability in relation to the integrity and competence adopted in the process. This is a crucial prerequisite for obtaining, later on, legitimization and social acceptance of the consequent process of decision making.

1.2. The epistemological challenge implied by ‘science for governance’

In the year 2000, a group of economics students in France established a web site directed against autism* in academic economics [http://www.paecon.net/]. They were against: (i) the 'uncontrolled use' of mathematics in economics and its treatment as 'an end in itself', which results in an 'autistic science'; (ii) the repressive domination of neoclassical theory and derivative approaches in the curriculum; (iii) the dogmatic teaching style, which leaves no place for critical and reflective thought. An excessive hegemonization of a given scientific paradigm carries the risk of determining a strong 'normalization' and 'lock-in' of the scientific characterization of any problem structuring in that field.

Around the same period, the same set of issues surfaced in other scientific fields. For

example, several discussions can be found in the field of conservation ecology about the risk of using

excessive formalization in analytical models used to assist decision making [2]. In relation to this topic,

Anderson [3] lists three main related points: (1) quantitative analysis is ‘essentially worthless if it is not

translated into effective policy’ [4]; (2) very complicated models are much more difficult to

communicate and this can imply the loss of important information in the interaction between scientists

and decision makers [5,6]; (3) quantitative analyses must be relevant to decision makers. This requires a

pre-analytical agreement between scientists and decision-makers about an appropriate definition of the

problem structuring [7,8].

Within the field of epistemology* such a discussion is a very old one and has been carried out

in relation to ‘science for governance’ for decades [9]. The debate in this field is about how to

define and guarantee ‘quality’ for science. It should be noted that this discussion deals with one


of the oldest topics in philosophy. Here we can recall Socrates' line that 'scientists are those who know about their own ignorance'. In epistemological jargon we can say that the problem

of science for governance is about how to obtain a semiotic closure (a term introduced by Pattee,

[10]) in science. By semiotic closure* we mean the ability to adopt a problem structuring* which

has two elements:

(i) it must be relevant in relation to goals and stakes. It has to pass a semantic check in relation to the WHY/WHAT questions. That is, the problem structuring adopted in the analysis must be declared by some legitimizing source as socially and politically relevant; and

(ii) it must be useful in relation to the operation of a system of control used to guide action. It has to pass a syntactic check in relation to HOW/WHAT questions. That is, the problem structuring adopted in the analysis must be declared as scientifically useful by recognized experts and consistent with the codified knowledge preserved by the academic establishment.

The evolution of this debate led first to the definition of Normal Science by Kuhn [11]: a validated

scientific paradigm implies the fulfillment of a set of given characteristics that are used to judge that a

scientific field has reached semiotic closure. That is, Normal Science can be characterized using a set

of requisites defined in the social discourse, which are able to provide social legitimization to the

scientific endeavour.

Later on Funtowicz and Ravetz introduced the concept of Post-Normal Science to indicate the

existence of fields of application related to the issue of science for governance in which it is not

possible to reach a substantive agreement about whether or not a given scientific field has achieved the

semiotic closure [12]. That is, it is impossible to obtain an uncontested legitimization of a substantive

problem structuring.

The concept of Post-Normal Science will be discussed again later on in Section 4. For the

moment an important lesson implied by epistemological analysis is that a semiotic closure is

about a successful process. A process is a ‘transient’ in the characterization suggested by Ashby

in his An Introduction to Cybernetics [13]. A process has to be considered as something clearly

distinct from the formalization of that process, which is selected by the analyst. In other words,

the formalization of a process generates an image of that process within a given information

space, and therefore it has nothing to do with the complex series of events to which it refers

[14–16]. In technical jargon this means that by using a given model scientists can simulate within

a set of variables a causal entailment perceived in reality—for more on this concept see

Chapter 3 on modeling relation theory in Rosen [14]. Such a simulation can be obtained in

different ways (e.g. by using differential equations or by using transition probabilities). In any

case, it requires substituting the real observed system which is generating patterns and changes in

observable qualities, with an image of the system based on a selection of variables and an

inferential system which is generating patterns of change in the value taken by the proxy

variables [14]. This implies that any formalization requires and reflects a pre-analytical definition

of a finite set of relevant variables which are associated with the identity of the observed system

at a given scale*—e.g. model duration and spatial domain [extent] and differentials adopted in

the equations [grain]; [14,16,17]. Because of this, the resulting formal information space used to

represent the modeled system and its behaviour will reflect not only the characteristics of the

observed system, but also the choices made by the scientists on how to observe reality [18–22].


1.3. The difference between the 'Post-Normal Science' and 'Normal Science' approaches

The issues discussed so far generated a clear split in the academic communities of scientists using

quantitative analyses in the field of sustainability. The divide is between the Post-Normal Science

approach and the Normal Science approach.

The Post-Normal Science approach—There are scientists who, notwithstanding their hard-science background, have no problem acknowledging two crucial facts: (i) a preliminary semantic check is always necessary when defining a problem structuring related to the analysis of sustainability. Concepts such as 'health', 'quality of life' and 'sustainability' cannot be defined in a substantive formal way once and for all. Each application requires a specific semantic check within the given context and in relation to the meanings assigned to these terms by the actual actors; and (ii) quantitative analyses of future scenarios will always be affected by important doses of ignorance. Nobody can predict the future, no matter how sophisticated the models, modelers and computers used to do so. Ignorance about the future is unavoidable.

The Normal Science approach—There are scientists who, on the contrary, believe that it is possible to define in substantive terms what is good and bad for 'consumers', 'citizens' and 'society' (one definition fits all). Moreover, they assume that: (a) such a definition can be known by them; (b) such a definition will not change later on; and (c) the issue of the unavoidable presence of uncertainty and ignorance can be dealt with by using more sophisticated analyses, larger computers, more rigorous tests and better expertise. This implies assuming that it is possible to deal with the issue of sustainability in terms of optimization of utility functions, optimization of production functions, maximization of the efficiency of resource use, and engineering of ecosystems to improve their sustainability. Within this paradigm, the quality of the process used to identify and implement all these optimizing processes can be guaranteed by the diligent application of scientific standards based on the 'state of the art' of the 'know-how' available in the different scientific disciplines involved.

These two approaches generate an important difference.

Scientists working within the paradigm of Normal Science seem to believe that: (i) the quantitative analyses they are handling represent a substantive definition of what is good and bad for the system; and (ii) within these quantitative analyses different costs and benefits can be reduced to each other within a common numeraire (= they assume that different indicators developed in different scientific fields are commensurable). Therefore, they are handling a process in which the descriptive and normative aspects are merged together. That is, what is calculated to be 'the optimum choice' is supposed to be at the same time: (1) the best possible representation of the system (this implies the existence of an uncontested agreement among all the actors involved on the series of choices made on the descriptive side); and (2) the best possible solution to the problem, chosen among those included in the best possible option space. This second assumption implies the existence of an uncontested agreement on the normative side among all the actors involved on: (i) the set of options to consider; (ii) the set of actors to be considered as relevant; (iii) the set of goals to consider; (iv) the reliability of the information coming from the descriptive side; and (v) the priorities to adopt when making a choice.


This long list of required agreements among the stakeholders explains why the Normal Science

approach tends to be preferred in decision making. When it is difficult to obtain an agreement among

actors carrying legitimate contrasting views and dealing with issues in which uncertainty is important,

it is much easier to assume that such an agreement does exist rather than verify its existence by asking

them. On the other hand, in recent decades, conflicts over the use of resources and conflicts associated

with the innate tension among different dimensions of sustainability are becoming so important that

they can no longer be ignored. This increasing relevance and visibility of conflicts is forcing scientists

and decision makers to include in their agenda the necessity of dealing explicitly with the evident lack

of agreement expressed by various stakeholders over choices made in the process of decision making

in relation to both the descriptive and normative side. Within this new context, the objective of

scientific investigation must become that of enhancing the process of social resolution of sustainability

issues, rather than individuating a definite technological ‘solution’ or policy implementation.

Moreover, such a process should be based on participation and mutual learning among stakeholders.

Therefore, scientists working within the paradigm of Post-Normal Science have to face two

different typologies of problems. Not only do they have to check the quality of their problem structuring

in relation to the special context (‘food quality’ in Burkina Faso is different from ‘food quality’ in the

USA) and in relation to the specificity of the actors (when operating within the USA, ‘food quality’ for

a macrobiotic yuppie is different from 'food quality' for a poor single mother of six), but they also have to perform such a check in relation to two distinct sides of the process of decision making: (1) on

the descriptive side (the WHAT/HOW side)—this implies deciding which is the most useful

representation of the problem—after issue identification. That is, what are the key indicators useful to

characterize the system in relation to the attributes considered as relevant, what are the key

mechanisms and controls associated to the outcomes of interest. This translates into a series of

decisions about the variables to be used, the models to be used, the data and measurement scheme to be

used to characterize and represent possible solutions and scenarios; (2) on the normative side (the

WHY/HOW side)—this implies deciding which is the most relevant issue definition, which is the most

desirable decision to be made within the selected option space. This translates into a series of answers

to questions like: what are the goals and values that should be considered in the choice and

implementation of a given policy? What are the options that should be evaluated? Who are the actors

that have to be invited to be part of the process of discussion? To answer these questions we have to

adopt a series of decisions about how to handle the legitimate but contrasting perspectives found

among the social actors, how to select the targets to be adopted in the process, how to decide the

weighting factors to be adopted when considering contrasting indications and incommensurable trade-

offs. To make the life of Post-Normal scientists harder, these two series of decisions to be taken on the descriptive and normative sides: (i) depend on each other (they are linked in a sort of chicken–egg relation); and (ii) have to be made keeping clearly in mind that the information used in this process is likely to be affected by high levels of uncertainty or, even worse, by genuine ignorance.

The present authors have been working for quite a while now with quantitative analyses applied to

the issues of sustainability within the Post-Normal Paradigm. As done in the special session of the

Porto Venere workshop, we want to illustrate the existence of possible alternative approaches to the

dominant paradigm of Normal Science in the field of science for governance of sustainability. Our

selection of topics reflects our different background (the first and second authors started their career as

Engineers and then entered into interdisciplinary and trans-disciplinary studies, whereas the third

author started as an Econometrician that later on entered into economic theory, multi-criteria analysis


and urban ecology). All of us followed a trajectory across disciplines while continuing to work on the

generation of practical tools for analysis. This led us to work on the two different sides of the process

of decision making. In particular:

(1) On the descriptive side—Mario Giampietro and Kozo Mayumi have dealt for many years with the challenge implied by the use of integrated packages of indicators referring to non-equivalent descriptive domains*. Whenever an integrated assessment requires the use of indicators and models that cannot be reduced to each other, we are facing technical incommensurability*. This means that when comparing different indicators referring to different typologies of costs and benefits defined at different scales (e.g. improvements and worsening in my back-yard versus improvements and worsening at the global level) and/or defined within different scientific disciplines (e.g. economic losses measured in US$ 1998 versus losses of biodiversity occurring over centuries), it is impossible to develop a system of accounting able to reduce these different typologies of costs and benefits to a common numeraire in a substantive way.

What if we accept the existence of technical incommensurability in integrated assessment? This is a class of problems associated with Multi-Scale Integrated Analysis—for an overview of this concept see Giampietro and Ramos [23]; Giampietro [21]; Giampietro and Mayumi [24,25].

(2) On the normative side—Giuseppe Munda has dealt for many years with the challenge implied by the handling of the following questions. When discussing sustainability, we should be able, first of all, to answer questions like: 'sustainability of what? Sustainability for whom? Sustainability for how long? Sustainability at what cost?' [2, p. 26]. This is a series of questions about which it is impossible to reach a substantive agreement among all relevant actors when coming to the individuation of a unique vector of answers. Actually, when dealing with the issue of sustainability and related conflicts it is not even easy to define: (i) who should be considered a relevant actor, and (ii) who is legitimized to decide about it. For example, when dealing with the choice, made at the national level, of energy sources, how does one deal with the fact that such a choice may generate effects on other countries? What about possible effects on future generations? Whenever an integrated assessment requires the use of normative values (cultural identities, goals, taboos) that cannot be reduced to each other, we are facing social incommensurability*. This implies that in a given social conflict we should expect that the 'terrorists' of one side are the 'freedom fighters' of the other side. Therefore, when dealing with an existing conflict, it is ludicrous to expect that, after adopting a formal identity for characterizing them in a model (either in the category of terrorist or in that of freedom fighter), it will be possible to obtain, later on, an agreement on such a substantive characterization from the actors operating on the two sides.

What if we accept the existence of social incommensurability in integrated assessment? This is a class of problems associated with the implementation of procedures aimed at Social Multi-criteria Evaluation—for an overview of this concept see Munda [26].

An integrated assessment related to sustainability almost always implies the handling of technical

incommensurability and social incommensurability. Scientists willing to use quantitative analysis in

this field have to learn how to better listen to both: (i) other scientists working in other disciplinary

fields (no matter if hard or soft); and (ii) regular people representing the clients of such an assessment.

Acknowledging this fact implies changing the scope of quantitative analysis. Quantitative analyses


should no longer be used for the individuation of the ‘best course of action’ (a task justified by

substantive reasoning). Rather, they should be used to foster social learning (e.g. looking for better

issue definitions, better understanding of existing trends, individuation of areas of uncertainty, helping

the sharing of meaning among the actors about a useful problem structuring).

1.4. Integrated assessment and engineers

The paradigm of Normal Science so far has been characterized as a scientific paradigm in which

quantitative analyses are used to seek and find the solution that produces more and better than the current one. This paradigm, therefore, seems to imply that it is always possible to have: (a) a win-win solution in the first place (= more and better can be obtained without negative effects); and (b) a

substantive formal definition, agreed upon by all social actors, of how to define and measure better. If

these two assumptions were true, the only problem for hard scientists would be that of generating a

formal output (i.e. a number) that indicates the maximum improvement for the system. Such a final

result is then expected to be an uncontested input for policy-making.

It is fair to say, however, that in reality very few scientists (even those who are proud to call themselves reductionists) are so fundamentalist as to really believe these assumptions (even though very few of them would write it down). Everybody knows that in reality indications given by models and data

used in any assessment are always mediated by political negotiation and common sense. The real issue

is how to handle this mediation.

Fortunately, among the various categories of hard scientists, engineers are, in general, those better

acquainted with the idea that life is complex. Engineers, like physicians, are a special kind of scientist who can openly challenge the validity of the two assumptions above. Any good engineer, for example,

knows that optimization in reality means looking for some reasonable compromise and that technical,

legal, political and economic issues are always deeply connected in real situations. As George E. Box

reminds us: ‘all models are wrong, some are useful’ [18]. Indeed, when dealing with science for

governance the real issue is not that of relying on the indications given by models, but rather how to

understand and decide which models can be useful for policy-making, and how to define what should

be considered an acceptable compromise among legitimate but contrasting definitions of

improvements.

Regarding the need for a multi-criteria approach, it should be noted that in 1824, well

before the introduction of the concept of Integrated Assessment, Carnot stated in the closing

paragraph of his Reflections on the motive power of fire, and on machines fitted to develop that

power: ‘We should not expect ever to utilize in practice all the motive power of combustibles.

The attempts made to attain this result would be far more harmful than useful if they caused

other important considerations to be neglected. The economy of the combustible [efficiency] is

only one of the conditions to be fulfilled in heat-engines. In many cases it is only secondary. It

should often give precedence to safety, to strength, to the durability of the engine, to the small

space, which it must occupy, to small cost of installation, etc. To know how to appreciate in each

case, at their true value, the considerations of convenience and economy which may present

themselves; to know how to discern the more important of those which are only secondary; to

balance them properly against each other; in order to attain the best results by the simplest

means; such should be the leading characteristics of the man called to direct, to co-ordinate the

labours of his fellow men, to make them co-operate towards a useful end, whatsoever it may


be (italics added)' [27, p. 59]. In this concluding passage of his seminal work, Carnot decided to explicitly warn his readers against the potential pitfalls of analyses that pursue a single optimizing goal (e.g. maximizing efficiency) at a time, without considering the context and the input coming from the normative side (WHY/WHAT) associated with the social aspects.

Regarding the development and use of analytical tools able to handle several indicators for

characterizing performance in relation to multiple attributes, there are rigorous methods for

decision support analysis developed in the field of engineering under the name of Multi-Objective

Decision Making or Multi-Attribute Decision Making (e.g. Scott and Antonsson [28]). It should

be noted, however, that the big advantage of industrial design over analyses of sustainability is

that: (1) all the relevant information for defining the performance of the designed system is

supposed to be available to the designer; and (2) the validity check about the relevance of the

problem structuring (the quality check on the normative side) is supposed to be given.
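As a minimal illustration of the kind of formal screening such engineering methods rely on, the sketch below filters a set of hypothetical design options down to the non-dominated (Pareto-optimal) ones; all option names and attribute values are invented for the example, and the snippet is only meant to show why such tools presuppose that all relevant attributes are already known and quantified.

```python
# A minimal sketch (hypothetical data, not from the paper): Pareto screening of design
# options, a basic ingredient of Multi-Objective Decision Making in engineering.
# Assume higher values are better for every attribute.

def pareto_front(options):
    """Return the names of options that no other option dominates on all attributes."""
    front = []
    for name, attrs in options.items():
        dominated = any(
            other != attrs and all(o >= a for o, a in zip(other, attrs))
            for other_name, other in options.items()
            if other_name != name
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical designs scored on three attributes (e.g. efficiency, durability, safety margin)
designs = {"A": (3, 9, 4), "B": (5, 5, 5), "C": (3, 8, 4), "D": (6, 2, 7)}
print(pareto_front(designs))  # ['A', 'B', 'D'] -- C is dominated by A on every attribute
```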

Finally, one of the most interesting attempts to develop a procedure for dealing with the challenge discussed so far in a participatory way (the use of quantitative analyses within a Post-Normal Paradigm) has been developed within Systems Engineering by Checkland and others under

the name of soft systems methodology (Checkland [29], Checkland and Scholes [30], Roling and

Wagemakers [31]).

2. Dealing with the predicament of multiple scales

2.1. Exploring one of the largest 'fiascos' of energy analysis

In order to illustrate the implications of the existence of multiple scales for conventional energy analysis, we use here an example discussed in more detail in Giampietro and Mayumi [32]. The example is related to one of the most well-known case studies of this field: the attempt to develop a standardized tool kit for dealing with the energetics of human labor. Probably, this attempt represents one of the largest 'fiascos' of energy analysis (for an overview of issues, attempts and critical appraisals of results see: Fluck [33,34]; Giampietro and Pimentel [35,36]; Giampietro et al. [37,38]).

In order to characterize in useful terms indices of 'efficiency' or 'efficacy' in relation to the energetics of human labor, three pieces of information are required:

(1) the requirement—and/or the availability—of an adequate energy input needed to sustain the conversion of interest (an inflow of energy carriers). In the case of human labor this is the flow of nutrients contained in food.

(2) the ability of the considered converter to transform the energy input into a flow of useful energy to fulfil a given set of tasks (in this case a system made up of humans has to be able to convert the available food energy input into useful energy at a certain rate, depending on the assigned task).

(3) the achievement obtained by the work done—the results associated with the application of useful energy to a given set of tasks (in this case this has to do with the usefulness of the work done by human labor in the interaction with the context).

Fig. 1. Relevant qualities for characterizing the behavior of an energy converter across hierarchical levels.


At this point, if we want to use indices based on energy analysis to formalize the concept of

performance, then we have to link these three pieces of information to four non-equivalent numerical

assessments—Fig. 1:

(1) Energy input required and consumed by the converter. In the case of human labor, this is food (energy carriers for humans). However, if the converter were a diesel engine, food would no longer be considered as an energy input. This implies that the energy input can only be characterized and defined in relation to the chosen/given identity of the converter using it.

(2) The power level at which useful energy is generated by the converter. This is a more elusive 'observable quality' of the converter. Still, this information is crucial. When dealing with the characterization of energy converters we always have to consider both the pace of the throughput (the power) and the output/input ratio. A higher power level tends to be associated with a lower output/input ratio (e.g. the faster you drive, the lower the mileage of your car). However, it is very difficult to find a standard definition of 'power level' applicable to the generic category of 'energy converters' (e.g. how to compare, in terms of power levels, human workers to light bulbs and/or crop fields to economic sectors?). This is especially true when dealing with multiple converters operating simultaneously at different scales on multiple different tasks (e.g. ecosystems versus economies). Moreover, a particular assessment of power level (e.g. 1000 HP of a tractor versus 0.1 HP of a human worker) maps neither onto the energy input flow [= how much energy is consumed by the converter over a given period of time used as reference—e.g. a year] nor onto the applied power delivered [= how much useful energy has been generated by a converter over that same reference period]. These two different pieces of information depend on how many hours the tractor or the human worker has worked during the reference period and how wisely it has been operated.

(3) The flow of applied power generated by the conversion. The numerical mapping of this quality clearly depends heavily on the previous choices about how to define and measure power levels and power supply. In fact, an assessment of the flow of applied power represents a possible formalization of the semantic concept of 'useful energy'. Therefore, this is a 'measured flow of energy' generated by a given converter which must be able to fulfil a specified task (e.g. water pumped out of the well, hectares of tilled soil). To make things more difficult, the definition of the usefulness of such a task can only be given when considering the hierarchical level of the whole system to which the converter belongs (e.g. whether the water is needed to do something by the owner of the pump, whether the hectares are under cultivation). Put another way, such a task must be defined as 'useful' by an observer operating at a hierarchical level higher than the level at which the converter is transforming energy input into useful energy. What is produced by the work of a tractor has a value which is defined by the interaction of the farm with a larger context (e.g. the selling of products on the market). That is, the definition of 'usefulness' refers to the interaction of the whole system—the farm, seen as a black box (to which the converter within the tractor belongs as a component)—with its context. Any assessment of the quality 'usefulness' requires a descriptive domain different from that used to represent the conversion at the level of the converter [21].

(4) The work done by the flow of applied power (what is achieved by the physical effort generated by the converter). Work is another very elusive quality that requires a lot of assumptions to be measured and quantified in biophysical terms. This represents a big problem for energy analysis. In fact, even if the two assessments #3 and #4 use the same measurement unit (e.g. MJ), they are different in terms of what the relevant observable qualities of the system are. That is, assessment #3 (the flow of applied power as seen from the converter—e.g. MJ of power delivered by a 100 HP pump working for 25 h) and assessment #4 (the work done—e.g. MJ equivalent to the lifting of a certain number of m3 of water up to 3 m) neither coincide in numerical terms nor map one-to-one in a substantive way. In fact, the same amount of applied power can imply differences in achievement, because of differences in the design of the technology (i.e. the pump) and in the know-how applied when using it.
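To make the difference between assessments #3 and #4 concrete, the short sketch below uses the pump example from the text with invented numbers (the volume of water lifted is a hypothetical value chosen only for illustration): both quantities come out in MJ, yet they measure different qualities and their ratio depends on the design of the pump and on how it is used.

```python
# Illustrative sketch with hypothetical numbers (following the pump example in the text):
# assessment #3 (energy delivered by the converter) and assessment #4 (work done on the
# task) share the same unit (MJ) but do not map onto each other one-to-one.

HP_TO_W = 745.7          # watts per horsepower
G = 9.81                 # m/s^2
WATER_DENSITY = 1000.0   # kg/m^3

# Assessment #3: applied power delivered by a 100 HP pump running for 25 h
delivered_mj = 100 * HP_TO_W * 25 * 3600 / 1e6          # about 6700 MJ

# Assessment #4: work done, i.e. lifting a (hypothetical) 100,000 m^3 of water by 3 m
work_mj = 100_000 * WATER_DENSITY * G * 3 / 1e6         # about 2900 MJ

# The gap between the two depends on pump design and on the know-how of the operator.
print(delivered_mj, work_mj, work_mj / delivered_mj)
```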

2.2. Interpretation of the energy analysis of human labour using hierarchy theory

The overview provided in Fig. 1 should already make clear to those familiar with epistemological

implications of hierarchy theory* (for an overview of the literature see: Allen and Starr [19]; Salthe

[39]; Ahl and Allen [20]) that practical procedures used to generate numerical assessments within a

linear input/output framework cannot escape the unavoidable ambiguity and arbitrariness implied by

the hierarchical nature of complex systems. A linear characterization of input/output using the four

assessments discussed so far requires the simultaneous use of at least two non-equivalent and non-

reducible descriptive domains. This opens the door to technical incommensurability and therefore to

an unavoidable degree of arbitrariness in the problem structuring. Any definition or assessment of

energy flows (both as input and/or output) will in fact depend on an arbitrary choice made by the

analyst about what should be considered as the focal level n. That is: what should be considered as a converter, what should be considered as an energy carrier, what should be considered as the whole system to which the converter belongs, and what has to be included and excluded in the characterization of the environment when checking: (i) the admissibility of boundary conditions; and (ii) the usefulness of the work done. In terms of hierarchy theory we can describe this fact as follows.

Table 1
Non-equivalent assessments of the energy requirement for 1 h of human labor

Method no. (a)   Worker system boundaries (b)   Food energy input (c) (MJ/h)   Exosomatic energy input (d) (MJ/h)   Other energy forms across scales (e)
1                Man                            0.5 (f)                        Ignored                              Ignored
1                Woman                          0.3 (g)                        Ignored                              Ignored
1                Adult                          0.4 (h)                        Ignored                              Ignored
2                Man                            0.8 (f)                        Ignored                              Ignored
2                Woman                          0.5 (g)                        Ignored                              Ignored
2                Adult                          0.6 (h)                        Ignored                              Ignored
3                Man                            1.6 (f)                        Ignored                              Ignored
3                Woman                          1.2 (g)                        Ignored                              Ignored
3                Adult                          1.3 (h)                        Ignored                              Ignored
4                Man                            2.5 (f)                        Ignored                              Ignored
4                Woman                          1.8 (g)                        Ignored                              Ignored
4                Adult                          2.1 (h)                        Ignored                              Ignored
5                Household                      3.9 (i)                        Ignored                              Ignored
6                Society                        4.2 (j)                        Ignored                              Ignored
7a               Household                      3.9 (i)                        39 (food system) (k)                 Ignored
7b               Society                        4.2 (j)                        42 (food system) (k)                 Ignored
8                Society                        Ignored                        400 (society) (l)                    Ignored
9                Society                        Ignored                        400 (society) (l)                    2 × 10^10 EMjoules (m)

After Giampietro [21].
(a) The nine methods considered are: (1) only the extra metabolic energy due to the actual work (total energy consumption minus metabolic rate) in an hour; (2) total metabolic energy spent during actual work (including metabolic rate) in an hour; (3) metabolic energy spent in a typical work day divided by the hours worked in that day; (4) metabolic energy spent in a year divided by the hours worked in that year; (5) as Method 4 but applied at the hierarchical level of the household; (6) as Method 4 but applied at the level of society; (7) besides food energy, also including the exosomatic energy spent in the food system per year divided by the hours of work in that year (7a at household level; 7b at society level); (8) total exosomatic energy consumed by society in a year divided by the hours of work delivered in that society in that year; (9) assessing the EMjoules of solar energy equivalent of the amount of fossil energy assessed with Method 8.
(b) The 'systems delivering work' considered are: typical adult man (Man), typical adult woman (Woman), average adult (Adult), typical household, and an entire society. Definitions of typical are arbitrary and only serve to exemplify methods of calculation.
(c) Food energy input is approximated by the metabolic energy requirement. Given the nature of the diet and food losses during consumption, this flow can be translated into food energy consumption. Considering also post-harvest food losses and pre-harvest crop losses, it can then be translated into different requirements of food (energy) production.
(d) We report here an assessment referring only to fossil energy input.
(e) Other energy forms, acting now or in the past, which are (were) relevant for the current stabilization of the described system even if operating on space-time scales not detected by the actual definition of identity for the system.
(f) Based on a basal metabolic rate for adult men of BMR = 0.0485 W + 3.67 MJ/day (W = weight = 70 kg) = 7.065 MJ/day = 0.294 MJ/h. Physical activity factor for moderate occupational work: 2.7 × BMR. Average daily physical activity factor (moderate occupational work): 1.78 × BMR. Source: Anonymous, 1985. Occupational work load: 8 h/day considering work days only, and 5 h/day average work load over the entire year including weekends, holidays and absence.
(g) Based on a basal metabolic rate for adult women of BMR = 0.0364 W + 3.47 MJ/day (W = weight = 55 kg) = 5.472 MJ/day = 0.228 MJ/h. Physical activity factor for moderate occupational work: 2.2 × BMR; average daily physical activity factor (based on moderate occupational work): 1.64 × BMR. Source: Anonymous, 1985. Occupational work load: 8 h/day (considering work days only) or 5 h/day (average work load over the entire year including weekends, holidays and absence).
(h) Assuming a 50% gender ratio.
(i) A typical household is arbitrarily assumed to consist of one adult male (70 kg, moderate occupational activity), one adult female (55 kg, moderate occupational activity), and two children (a male of 12 years and a female of 9 years).
(j) This assessment refers to the USA. In 1993, the food energy requirement was 910,000 TJ/year and the work supply 215 billion hours.
(k) Assuming 10 MJ of fossil energy spent in the food system per 1 MJ of food consumed (Giampietro et al., 1994).
(l) Assuming a total primary energy supply in the USA in 1993 (including the energy sector) of 85,000,000 TJ, divided by a work supply of 215 billion hours.
(m) Assuming a transformity ratio for fossil energy of 50,000,000 EMjoules/Joule.

The unavoidable preliminary triadic filtering* needed to obtain a meaningful representation of reality implies selecting: (i) the interface between the focal level n and the lower level n−1 to represent the structural organization of the system (WHAT/HOW); and (ii) the interface between the focal level n and the higher level n+1 to represent the relational functions of the system (WHAT/WHY). The arbitrariness of this choice is at the basis of the impasse faced by energy analysis. Ignoring this fact simply leads to a list of non-equivalent and non-reducible assessments of the same concepts. A self-explanatory example of this standard impasse in the field of energy analysis is given in Table 1, which reports several non-equivalent rigorous assessments of the energetic equivalent of 1 h of labor found in the literature.

That is, every time we choose a particular hierarchical level of analysis for assessing an

energy flow (e.g. an individual worker over a day) we are also selecting a space-time scale at

which we will describe the process of energy conversion. This, in turn, implies a non-equivalent

definition of what is the context (environment) and what is the lower level (where structural

components are defined). Human workers can be seen as individuals operating over a 1 h time

horizon (muscles are the converters in this case), or as citizens of developed countries (machines

are the converters in this case). The definitions of the identities of these elements must then be compatible with the identity of the energy carriers (individuals eat food, developed countries eat

fossil energy).
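As an illustration of how the first four non-equivalent assessments in Table 1 arise, the sketch below recomputes Methods 1–4 from the BMR formulas, activity factors and work-hour assumptions stated in the table footnotes; it is only a reading aid, not part of the original analysis, and small rounding differences with respect to the published figures are to be expected.

```python
# Illustrative sketch: reproducing the order of magnitude of Methods 1-4 in Table 1
# from the parameters given in the table footnotes (all values are the assumptions
# stated there, not new data).

def energy_per_hour_of_labor(bmr_mj_day, paf_work, paf_daily,
                             work_h_day=8.0, avg_work_h_day=5.0):
    """Return the four non-equivalent 'MJ per hour of labor' assessments."""
    bmr_mj_h = bmr_mj_day / 24.0
    method1 = (paf_work - 1.0) * bmr_mj_h              # extra metabolic energy of work only
    method2 = paf_work * bmr_mj_h                      # total metabolic energy while working
    method3 = paf_daily * bmr_mj_day / work_h_day      # work-day energy / hours worked that day
    method4 = paf_daily * bmr_mj_day / avg_work_h_day  # yearly energy / yearly work hours
    return method1, method2, method3, method4

# Adult man: BMR = 0.0485*70 + 3.67 = 7.065 MJ/day; activity factors 2.7 (work), 1.78 (daily)
print([round(x, 1) for x in energy_per_hour_of_labor(7.065, 2.7, 1.78)])  # ~[0.5, 0.8, 1.6, 2.5]
# Adult woman: BMR = 0.0364*55 + 3.47 = 5.472 MJ/day; activity factors 2.2 (work), 1.64 (daily)
print([round(x, 1) for x in energy_per_hour_of_labor(5.472, 2.2, 1.64)])  # ~[0.3, 0.5, 1.1, 1.8]
```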

A discussion of an innovative approach to handle technical incommensurability when dealing with

multi-scale integrated analysis of metabolic systems is available in three chapters (Chapters 6, 7 and 8, co-authored with Kozo Mayumi) of the book by Giampietro [21]. These chapters present an analysis of

such a problem and possible ways out (e.g. looking for mosaic effects across levels, impredicative loop

analysis, narratives useful for surfing in complex time). A brief overview of these concepts is now

available in Giampietro and Ramos-Martin [23].

3. Facing the predicament of multi-criteria energy analysis

3.1. The unavoidable presence of both technical and social incommensurability

In the previous section, we discussed the problem associated with the emergence of technical incommensurability, even when adopting a single criterion of analysis (the energetics of human labor), when multiple scales are considered. In this section, we want to discuss the fact that as soon as energy analysis deals with problems that are multi-dimensional and multi-scale (the quotation from Carnot can be recalled here), it is unavoidable to face not only technical but also social incommensurability.

To introduce an example of multi-criteria energy analysis we will use a typical ‘problem’ that can

only be characterized in relation to non-equivalent attributes. The example is: ‘assessing the

performance of electric generation within a society’ in order to decide about technical innovations or

new regulations. To characterize the performance of the energy sector one has to consider a set of non-

reducible criteria, which are related to non-equivalent objectives (Multi-Dimensional Analysis). In

our example we may select: (i) self-reliance for the country; (ii) low economic cost for the supply of


energy carriers; (iii) good health of ecosystems; (iv) high quality of life for citizens; (v) minimization

of risk of accidents; (vi) preservation of landscapes. On the descriptive side, an integrated analysis of

potential changes in the energy sector then requires gathering different indicators reflecting changes in

relation to these criteria. This requires an interdisciplinary team of scientists. Several distinct

disciplines, in fact, are required to study in quantitative terms (with numerical indicators and models)

the expected effects of a given policy in relation to the given set of relevant attributes. Obviously, the

more we enlarge the set of dimensions and scales to be included in the characterization and assessment of scenarios of change in the energy sector, the harder we make the task of the scientists. The more the effects of technical and social incommensurability enhance each other, the more difficult it becomes to guarantee the quality of the process of Integrated Assessment. On the other hand, concepts

like welfare and sustainability are multi-dimensional concepts in their very essence.

3.2. An example of multi-criteria evaluation of the electric sector of a country

Let us imagine, for example, discussing a research protocol for a multi-criteria analysis of different options for generating electricity. This would require:

(i) Using parallel descriptions of processes referring to the various distinct hierarchical levels at which the various energy conversions can be analysed (e.g. an individual piece of machinery, the whole plant, the level of the related economic activity, the macro-economic level, and the biosphere).

(ii) Using several sets of indicators of 'good' and 'bad' referring to distinct hierarchical levels and selected in order to include the relevant perspectives of various actors (e.g. the owner of the plant, investors, workers, consumers, future generations, national governments, local communities, marginal social groups, ecosystems, endangered species).

(iii) Establishing links, in the integrated analysis, among processes occurring on the various hierarchical levels considered in the analysis. This is a step required to enable an informed discussion of possible future scenarios following technological and policy changes (e.g. what happens when pulling a blanket which is too short in different directions).

(iv) Involving stakeholders in the process of definition of the terms of reference for the analysis, in the selection of indicators, in a critical appraisal of the data and models generating scenarios and, finally, in a discussion of the pros and cons of the various options considered.

Remaining with the example of a multi-criteria analysis of the performance of energy sectors, let us now imagine that one wishes either to: (A) evaluate ex-post three strategies adopted in different countries for producing electricity in the second half of the twentieth century—e.g. France, Italy and Spain; or (B) evaluate now the pros and cons of three possible options for producing electricity—e.g. using the same three options chosen in the past by these countries: (i) nuclear; (ii) oil plus imports; (iii) use coal as much as possible. In order to do such an evaluation we should, first of all, select a set of indicators able to cover the various relevant dimensions required to characterize the performance of a system producing electricity. Then we should be able to: (a) assign numerical values to the resulting set of indicators; and (b) judge the reliability of these assessments.

For the sake of simplicity, we select in our hypothetical analysis four main objectives for the

evaluation of electricity production: (1) economic performance; (2) environmental impact; (3) political relevance of self-reliance and military strength; (4) safety and quality of life for consumers. After having done that, a characterization on such a multi-objective space would look like what is shown in Fig. 2.

[Fig. 2 (radar diagram) is not reproduced here. Quadrants and indicators legible in the original: Economic characteristics (requirement of subsidies, return on investment, cost in US$/kWh for a given year); Ecological impact (GHG emissions, radioactive wastes, habitat destruction); Safety/Quality of life (probability of an accident, consequences of an accident, aesthetic effects on the landscape); Self-reliance/Power (dependency on imports, central control on the supply, internal supply of plutonium). Options compared: France (nuclear), Italy (oil/import), Spain (coal).]

Fig. 2. Multi-objective integrated representation of the performance of an electric sector.

We apologize in advance for the over-simplification in the characterization of the options given in

Fig. 2, but getting into details or examples of real studies is beside the point here. As will become immediately evident from the following discussion, the specific quality of such an initial

characterization is not particularly relevant. Its role is just that of providing a narrative to our

discussion. Our main point is exactly that it is not possible to do an integrated analysis of the type

shown in Fig. 2 in ‘the right way’ (this would imply operating within the paradigm of substantive

rationality). No matter how good the protocol specified for such an analysis is, it is unavoidable that, according to the perspective, data and personal opinions of some other analyst, such a characterization could have been done in a better way. In fact, the chain of choices that leads to the representation given in Fig. 2 is necessarily associated with a chain of location-specific events and with a series of value

calls. These value calls start with the selection of the four objectives.
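A minimal numerical sketch of this point follows: with a fixed (entirely made-up) impact matrix for the three options, a simple weighted aggregation produces a different ranking depending on the value call embedded in the weights; nothing here reproduces real data or the actual multi-criteria procedures discussed in this paper.

```python
# Hypothetical example (invented scores, not data from the paper): even when the impact
# matrix is fixed, the ranking produced by a naive weighted sum is decided by the weights,
# i.e. by a normative choice that cannot be settled on the descriptive side.

options = ["nuclear", "oil/import", "coal"]
# Scores on a 0-10 scale, higher = better performance on that criterion
# (criteria: economic cost, ecological impact, self-reliance, safety)
impact = {
    "nuclear":    [7, 5, 9, 3],
    "oil/import": [5, 6, 2, 7],
    "coal":       [8, 2, 6, 6],
}

def rank(weights):
    scores = {o: sum(w * s for w, s in zip(weights, impact[o])) for o in options}
    return sorted(options, key=scores.get, reverse=True)

print(rank([0.4, 0.1, 0.4, 0.1]))  # weights privileging cost and self-reliance
print(rank([0.1, 0.4, 0.1, 0.4]))  # weights privileging ecology and safety -> different ranking
```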

Probably, each informed reader of Energy, if asked to do so, would have proposed a different

selection of objectives and/or indicators for characterising the performance of electricity generation.

But each one of these different characterizations of an energy sector based on the approach given in

Fig. 2 could have been easily criticised by other readers. The point we want to make with this example

is exactly that any characterization of the performance of an energy sector on a multi-criteria space

will always face the following systemic problems:


Unavoidable 'openness' of the information space. Actually, the number of objectives (and relevant criteria for each objective) that could be used to characterize the performance of electricity generation is virtually infinite, depending on the variety of locations in space, locations in time and the characteristics of the cultural contexts in which the analysis can be performed. This means that there is an open and expanding universe of possible criteria (and related indicators), which can be and actually are used by human actors—and educated readers—to define such a performance.

Incommensurability of trade-offs. Some of the criteria (and related indicators) measuring relevant characteristics of the system will turn out to be incommensurable (price of a kWh in dollars, degree of self-reliance, aesthetic preferences, negative impact on biodiversity) and conflicting in nature (e.g. within a conversion process at a given level of technology, the lower the economic cost of production, the higher the environmental impact).

Indeterminacy. Not all the assessments referring to different indicators have the same reliability

(e.g. assessment of the emission of CO2 per kWh versus assessment of the effect on biodiversity).

The various indicators of performance can imply the adoption of different time horizons for

representing changes in the energy sector. That is, when considering an economic criterion such as

the cost of electricity, we have to consider fluctuations in prices with a ‘time differential’ of 1 year.

Whereas when considering changes in the level of ecological impact we can deal with gradients

with a life-expectancy of centuries or even more—e.g. the expected decay time of radioactive

wastes. When one deals with long-term scenarios about activities that have never been done before

(e.g. the handling of radioactive waste during the decommissioning of a nuclear plant) it becomes

very difficult to get reliable indications on the actualised cost for the year 2020—expressed in US

dollars 1998—for 1 kWh of electricity produced now.

Genuine ignorance. The existence of various dynamics operating in parallel on different scales is at

the root of uncertainty in both its forms: indeterminacy (the impossibility of performing accurate predictions—the butterfly effect) and genuine ignorance (the possible emergence of new relevant issues

to be dealt with in the future).

Quality of the problem structuring (on the normative side). This reflects the agreement of all

stakeholders on what is the right ‘problem structuring’ (issue identification, selection of indicators,

quality of data) to be adopted in the analysis. For example, when dealing with the objective

‘self-reliance and military power’ (lower-left quadrant in Fig. 2) we assumed in this radar-diagram

two relevant criteria: (1) ‘internal supply of plutonium’ (which can be used for making nuclear

weapons); and (2) ‘central control on the supply’. In this representation (based on the Flag Model)

these criteria have been considered to be associated with ‘improvements’ when the value taken by

the indicator increases. In Fig. 2 this is reflected by the position of the value of the indicator: the dark grey area close to the origin indicates 'bad' (red in the original diagram), whereas the very light grey area away from the origin indicates 'good' (green in the original diagram). An overview of the flag model and, more generally, of graphic tools for Multi-Objective Integrated Representation is given in Gomiero and Giampietro [40]. Many people may agree with this choice of defining improvement

in relation to this criterion (e.g. this was the case of decision makers in France in the 1960s). For

these people having the capability of making nuclear weapons and a strong control on the supply of

power is 'good'. Many others, however, would resent this choice. According to those who oppose nuclear energy and the associated possibility of making nuclear weapons, the characterization of the nuclear option for generating electricity should adopt an opposite graphic representation of these

M. Giampietro et al. / Energy xx (2005) 1–2816

DTD 5 ARTICLE IN PRESS

gradients (inverting the position of the areas indicating ‘good’ and ‘bad’). In this view, nuclear

capability and central control should be identified as bad characteristics of this option. Obviously,

dealing with such a contrast of opinions is well outside the realm of scientific analysis. Still, such a

decision is required by the analyst at the moment of generating an integrated representation of

performance as done in Fig. 2.
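A minimal sketch of how this purely normative choice enters the representation is given below; the threshold, the score and the function are invented for illustration and are not the actual Flag Model implementation behind Fig. 2. The same indicator value receives a 'green' or a 'red' flag depending only on the orientation of the gradient adopted by the analyst.

```python
# Hypothetical sketch (invented values): the flag assigned to the same indicator
# score flips when the analyst inverts the orientation of 'good' and 'bad'.

def flag(value, threshold, higher_is_better=True):
    """Return 'green' (good) or 'red' (bad) for a single indicator."""
    good = value >= threshold if higher_is_better else value <= threshold
    return "green" if good else "red"

internal_plutonium_supply = 0.8   # invented normalized score for the nuclear option

# Orientation adopted by actors valuing self-reliance and military power:
print(flag(internal_plutonium_supply, 0.5, higher_is_better=True))   # -> green

# Orientation adopted by actors opposing nuclear weapons:
print(flag(internal_plutonium_supply, 0.5, higher_is_better=False))  # -> red
```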

Quality of data (on the descriptive side). As noted earlier, the set of assessments used in the various quadrants is subject to different doses of arbitrariness. But even remaining within the same

quadrant and within the same indicator, how accurate is the assessment—for example—of the

economic cost of a kWh of electricity? Any numerical assessment of such a cost will depend on

what the analyst decides to include in the calculation (e.g. the time horizon and the discount

rate adopted, the procedure adopted for Life Cycle Assessment, the risk of environmental damages). This is analogous to the truncation problem in energy analysis (what is accounted as

embodied in the assessment of a given energy input). A paper by Andrew Stirling, focusing exactly on the range of values found in the assessment of economic costs for producing a kWh of electricity, is enlightening in this regard. Stirling [41] monitored a large number of actual studies,

sponsored by industries or governments in industrialized countries. They were all aiming exactly at

the assessment of ‘the economic cost’ of electricity generation (the same indicator referring to the

same criterion). When assessing external costs (e.g. related to environmental impact) of electricity

generation in constant US currency terms (US$ 1995) in a new coal power station, the range of

values that Stirling found in the literature goes from 'less than 0.05 cents/kWh' to 'more than 1000 cents/kWh'! The spread of the values taken by these assessments is so large as to show a difference of more than a factor of 20,000. Stirling's analysis is very important for our discussion since

each of the assessments considered in his study was performed by reputable scholars operating in

reputable institutions (normal science at work for individual risk assessment studies). These studies

were all providing highly precise estimates, reported with three or four digits! The resulting

estimates were then used to support strongly prescriptive conclusions (normative input) about how

to select the ‘best option’ considered in the analysis. The mechanism justifying the existence of

large differences in non-equivalent rigorous assessments has been discussed in Table 1.
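To make the mechanism tangible, the sketch below (all figures invented, not taken from Stirling's survey) computes a highly simplified cost per kWh for the same hypothetical plant under two different, equally defensible analyst choices of discount rate and time horizon; the two estimates differ by roughly a factor of two before any externality is even considered.

```python
# Hypothetical sketch: the same plant assessed by two analysts who differ only
# in discount rate and time horizon. All numbers are invented for illustration.

def cost_per_kwh(capital, annual_om, annual_kwh, rate, years):
    """Simplified levelized cost per kWh (capital recovery + O&M; no fuel, no externalities)."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)  # capital recovery factor
    return (capital * crf + annual_om) / annual_kwh

capital = 2_000_000_000      # up-front investment ($)
annual_om = 50_000_000       # yearly operation and maintenance ($)
annual_kwh = 7_000_000_000   # yearly electricity output (kWh)

print(round(cost_per_kwh(capital, annual_om, annual_kwh, 0.05, 40), 4))  # analyst 1: 5%, 40 years
print(round(cost_per_kwh(capital, annual_om, annual_kwh, 0.12, 20), 4))  # analyst 2: 12%, 20 years
```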

Quality of the process of decision making (on the normative side). Whenever one is in the unlikely

situation of having reached a general agreement on: (a) a satisfying selection of objectives; (b) a

satisfying selection of indicators; (c) the usefulness of the underlying problem structuring; and

(d) the reliability of the set of data used in the representation of performance on the multi-criteria

space; the story is still not over. Then one has to start the process required for deciding what should

be selected as the best profile of different types of ‘costs’ and ‘benefits’ within a set of alternatives

for a particular country when producing electricity. This second process does not coincide with

scientific analysis. For example, the development of nuclear energy in France or the reliance on coal

plants in Spain, even if fuelled by low-quality local coal, can be explained by a clear priority given to

the objective of self-reliance over the others. The two generals in power at the moment the decision

was made, De Gaulle and Franco, probably played a key role in determining this priority. By contrast, for historical and political reasons, such an objective was never a key issue in Italy.

Quality of the handling of uncertainty throughout the whole process. Finally, any selection of an

option based on the discussion of future scenarios is unavoidably affected by heavy doses of

uncertainty. When choosing between the three options represented in Fig. 2, how to deal with the


different degree of uncertainty affecting the characterizations of these three options associated with

the chosen multi-criteria space? What if some crucial criterion is missing for the moment?

Whenever it is impossible to establish exactly the future state of the problem faced, one can decide

to deal with such a problem either in terms of stochastic uncertainty, thoroughly studied in

probability theory and statistics, or in terms of fuzzy uncertainty, focusing on the ambiguity of the

description of the event itself [42]. However, one should always be aware that genuine ignorance is

always there too. One cannot calculate the risk of ‘something’ happening in the future, without

knowing in advance what this 'something' will be.
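The difference between the two treatments can be sketched with a toy example (invented numbers, not the formalization used in [42]): the statement 'the cost is about 5 cents/kWh' can be encoded either as a random variable, from which probabilities are computed, or as a fuzzy number, whose membership function expresses how well a given value fits the ambiguous description.

```python
# Toy sketch of two encodings of "the cost is about 5 cents/kWh" (invented numbers).
import random

# (a) Stochastic uncertainty: cost as a random variable.
samples = [random.gauss(5.0, 1.0) for _ in range(10_000)]
print(sum(s > 6.0 for s in samples) / len(samples))   # estimated P(cost > 6 cents/kWh)

# (b) Fuzzy uncertainty: triangular membership function for "about 5".
def about_5(x, low=3.0, peak=5.0, high=7.0):
    if x <= low or x >= high:
        return 0.0
    return (x - low) / (peak - low) if x <= peak else (high - x) / (high - peak)

print(about_5(6.0))   # degree to which the value 6 fits "about 5" -> 0.5
```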

In all cases in which we can expect that the information used in the problem structuring is affected

by subjectivity, incompleteness and imprecision, the great advantage of multi-criteria evaluation is the

possibility of taking these different factors into account.

4. Quality assurance in integrated assessment of sustainability

4.1. The task of quality assurance implied by Post-Normal Science

Let us now imagine dealing with a controversial topic such as: 'Is nuclear energy a desirable alternative to oil?' Let us imagine, to make things more difficult, framing such a choice within the

rationale provided by the precautionary principle (which has recently been re-stated as a key guiding

concept for policy in Europe in a Communication from the European Commission [43]). In this case,

we should be able to define, first of all, what should be considered as ‘enough’ scientific evidence

about possible future hazards linked to this option. The crucial dilemma about ‘how to apply’ such a

principle is therefore related to a prior choice of paradigm between substantive rationality and

procedural rationality [44]. Substantive rationality means assuming that it is possible to define in

absolute terms what should be considered 'enough' scientific evidence, whereas procedural rationality means acknowledging that it is not possible to define in absolute terms what should be

considered as ‘enough’ scientific evidence. Therefore, such a decision must be ‘the outcome of

appropriate deliberation', and 'procedural rationality depends on the process that generated it' [44, p. 131]. In this regard it should be noted that on some occasions we will never know, not even

ex post, which course of action should have been considered 'the best one'. In fact, in real life, evolving systems imply a long list of situations in which only one experiment is possible.

The paradigm of substantive rationality (the hidden choice of conventional reductionism) implies

that a committee of experts can decide what should be considered 'enough' and, therefore, what is in the best interest of the citizen. But what if the perception of 'the best interest of the citizen' adopted by the

‘committee of experts’ does not coincide with the set of criteria considered relevant by the citizens

themselves? What if the assessment of ‘better efficiency and negligible risk’ provided by the

'committee of experts' turns out to be wrong? [45].

According to what has been discussed so far, a Post-Normal approach to the analysis of the performance of an electricity sector should:

1. Keep the descriptive side separate from the normative side. Scientists should not claim to provide 'the

correct’ analysis/description of the system. Rather, what scientists can do is simply to generate


several sets of ‘view dependent’ representations of an energy system (reflecting the interests of

stakeholders) that can be used to discuss pros and cons associated with possible policies. The novelty

of this approach is that, when generating these various models, scientists should make such view-dependence clear from the beginning;

2. Generate analyses that can learn in time. Any analysis should be organized in order to remain open

to additional alternative ‘view-dependent’ representations. In fact, for enhancing the ongoing

negotiation among groups expressing different views and interests about the performance of the

same energy system, it can become useful during the process to add alternative ‘view-dependent’

representations (new variables, indicators, models, sources of data) to the original set.

3. Acknowledge the unavoidable presence of ignorance and uncertainty. The goals related to the

concept of sustainability cannot all be achieved at the same time just by adopting a single 'silver bullet' technical solution. Rather, dealing with the issue of sustainability implies a wise

compromise between contrasting goals. The goals of flexibility and adaptability very often clash

with the goals of efficiency and economies of scale. This type of dialectics is associated with a large

degree of uncertainty;

4. Do not put all the epistemological eggs in the same basket. Look as much as possible for mosaics of

complementing information (epistemological plurality), using analyses which have been generated

in different scientific fields (physics, engineering, economics, sociology, political science, applied

and theoretical ecology, etc.). If we admit that it is not possible to compress into a single description the scientific representations of the same system obtained by adopting different space-time scales (technical incommensurability), then we have to look for a mix of complementary non-equivalent views. The reader can recall, here,

the integrated use of medical tests (blood tests, X-rays, ultrasound scan, etc.) to get a better picture

of the health of a given person;

5. Avoid a dramatic hegemonization in the choice of relevant objectives and criteria. In normative

terms, this can be considered analogous to advice #4 (avoiding putting all the eggs in the same basket on the descriptive side). In any process of decision-making, it is always useful to enlarge the

number of alternative perspectives that can be used to define tasks, indicators, models and their

relative relevance. In fact, the issue of sustainability implies considering in the process of problem

structuring the view of actors that in the past were not included in ‘traditional’ cost-benefit

analyses. Therefore, within the Post-Normal Science paradigm the tools developed by neo-classical

economics for dealing with sustainability are considered an example of excessive hegemonization

in normative terms. That is, the interests of future generations, the health of local ecosystems

(which is associated with the preservation of biodiversity), and the preservation of values

associated with non-dominant cultures tend to be systematically neglected by standard economic

accounting.

6. Increase the transparency of the process of integrated assessment by making scientific analyses

more ‘stakeholder-friendly’. Scientists should try to make an extra effort to make their

assumptions, analyses, data collection and measurement processes easy to understand.

4.2. Procedures and tool kits to be developed

After acknowledging that it is not possible to define in substantive terms the ‘right problem

structuring’ for an Integrated Assessment of a sustainability problem, we are left with the only viable


option of a procedural definition of it. This opens the problem of how to guarantee the quality of such a

procedural definition. The solution suggested here is an iterative use of two different tool kits for

performing a quality check both on the descriptive and the normative side. In particular we can

imagine a process of iteration between two distinct activities:

(A) Discussion and development of a tool kit for 'discussion support'. In this activity scientists are

the main actors and social actors the consultants: the goal is the development of integrated

packages of analytical tools required to do a good job on the descriptive side. This information

space has to be constructed according to the EXTERNAL input received from the social actors

about what is relevant in relation to the definition of good and bad. The social actors, as

consultants, have to provide a package of questions to be answered. In this activity, the scientists

are those in charge of processing such an input according to the best available knowledge of the

issue.

This side of the process requires the ability of scientists from different disciplines to

interact on a given problem structuring provided by the society. This is what we introduced before

under the label of multiple scale integrated analysis.

(B) Discussion and development of a tool kit for 'decision support'. In this activity stakeholders and

other relevant agents are the main actors and scientists are the consultants: the goal is the

development of procedures required to do a good job on the normative side. The resulting process

should make it possible to decide, through negotiations: (1) what is relevant and what should be

considered as good and bad in the decision process; (2) what is an acceptable quality in the

process generating the information produced by the scientists (e.g. definition of quality criteria:

relevance, fairness in respecting legitimate contrasting views, no cheating with the collection of

data or choice of models), and finally (3) the final outcome of the integrated assessment (e.g.

policy to be implemented).

This side of the process requires an EXTERNAL input (given by scientists) consisting of a

qualitative and quantitative evaluation of the situation on different scales and dimensions. In their

input, scientists also have to include information about the expected effects of changes induced by the decision under analysis (discussion of scenarios and of their reliability). But the social actors are those in charge of processing such an input. This is what we introduced before as social multi-criteria

evaluation.

Since the scientific process associated with activity A affects the social process associated with

activity B and vice versa, the only reasonable option to handle this chicken–egg situation is to

establish some form of organized iteration between the two, by keeping in mind that process A is a

scientific activity referring to the descriptive side (that requires an input from social actors) and

process B is a social activity referring to the normative side (that requires an input from scientists).

Both of them depend on each other. This is where the need for a new type of 'expertise' enters into play.

In order to assure the quality of such an iterative process, it is necessary to implement adequate

procedures. The implementation of these procedures requires developing an expertise in relation to

three tasks of quality assurance: (a) the process on the descriptive side determining the quality of

Multi-Scale Integrated Analysis; (b) the process on the normative side determining the quality of the

Societal Multi-Criteria Evaluation; and (c) the handling of iteration between these two processes (about

the fairness and competence used to handle the interaction among the actors in the two processes). For


an example of how it is possible to integrate soft system methodologies (e.g. the approach developed

by Checkland) with quantitative analyses in an iterative process see Chapter 5 of Giampietro [21].

4.3. The limitations of formal multi-criteria characterization

A typical multi-criteria problem (with a discrete number of alternatives) may be described in the

following way: A is a finite set of n feasible actions (or alternatives); m is the number of evaluation

criteria which are considered relevant in a decision problem. In this way a decision problem may be

represented in a tabular or matrix form. The performance of any given alternative, according to the set

of relevant criteria, can be characterized through a multi-criteria impact profile in a matrix form, which

is an alphanumeric view of the information organized in Fig. 2.
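In practice the impact matrix is simply an n by m table of scores, one row per alternative and one column per criterion. The fragment below is a hypothetical sketch (invented names and values, showing only three of the twelve criteria), not the actual data behind Fig. 2.

```python
# Hypothetical sketch of a multi-criteria impact matrix (alternatives x criteria).
# All names and scores are invented for illustration.

alternatives = ["nuclear", "coal", "natural gas"]            # n = 3
criteria = ["cost of kWh ($)", "GHG emissions (g CO2/kWh)",
            "internal supply of plutonium (0-1)"]             # 3 of the m = 12 criteria

impact_matrix = {
    "nuclear":     {"cost of kWh ($)": 0.06, "GHG emissions (g CO2/kWh)": 15,  "internal supply of plutonium (0-1)": 0.9},
    "coal":        {"cost of kWh ($)": 0.04, "GHG emissions (g CO2/kWh)": 950, "internal supply of plutonium (0-1)": 0.0},
    "natural gas": {"cost of kWh ($)": 0.05, "GHG emissions (g CO2/kWh)": 450, "internal supply of plutonium (0-1)": 0.0},
}

for a in alternatives:
    print(a, [impact_matrix[a][c] for c in criteria])
```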

These multi-criteria impact profiles can be based on quantitative, qualitative or both types of

information. In the example of multi-criteria impact profile given in Fig. 2 we have n = 3 (possible choices of electricity generation) and m = 12 (criteria used to evaluate the performance), indicated by the 12 axes on the radar diagram: e.g. cost of kWh, internal supply of plutonium, greenhouse gas

emission (GHG), etc. These criteria are grouped into four main dimensions (economic analysis,

ecological impact, self-reliance/power, and safety/quality of life). That is, the concept of dimensions is

hierarchically higher than the concept of criteria. Different objectives may require the adoption of non-

equivalent descriptive domains (different dimensions of analysis). This implies that several criteria

can be used to characterize one dimension (e.g. the economic evaluation of a power plant requires the

assessment of several economic indices). Goals reflect the need to formulate the general objectives

(selected in the profile). This is obtained by locating target values over the set of selected indicators. In

the graphic representation given in Fig. 2, goals are indicated by the various bullets on the various

indicators in the radar diagram (in the middle of the mid-grey area). In this way, we are bridging two

hierarchical levels of analysis: (1) definition of performance in general terms, obtained by selecting a

set of different relevant dimensions (i.e. formulation of general objectives such as maximizing the

economic efficiency, maximizing the usefulness for users, etc.); (2) translation of these general

principles into a numerical mapping of performance over a set of indicators, which are necessarily

context specific (location-specific description). In the example of Fig. 2, after indicating the goals and the scores of each alternative for each criterion, the 12 criteria are transformed into

attributes of performance. They reflect, in this way, the level of achievement of the goals. Finally, we

can imagine that special threshold values (e.g. a limited budget of money for producing electricity, a

given constraint on local availability of natural resources, a strong will of the people about the level of

hazard they are willing to accept, given regulations about the maximum acceptable level of pollutants) will

imply the existence of constraints on the value that can be taken by the selected set of indicators. In

this way it is possible to define the viability domain for alternatives. In the example given in Fig. 2, the

viability domain (or feasibility region) would be the area on the radar diagram obtained by excluding

the white circle at the origin of the axes (= too bad to be acceptable).
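A minimal sketch of this filtering step follows (scores and thresholds invented for illustration): an alternative is retained in the viability domain only if none of its scores violates the corresponding constraint.

```python
# Hypothetical sketch: excluding alternatives whose scores fall in the
# "too bad to be acceptable" area. Scores and thresholds are invented.

impact_matrix = {
    "nuclear":     {"cost of kWh ($)": 0.06, "GHG emissions (g CO2/kWh)": 15},
    "coal":        {"cost of kWh ($)": 0.04, "GHG emissions (g CO2/kWh)": 950},
    "natural gas": {"cost of kWh ($)": 0.05, "GHG emissions (g CO2/kWh)": 450},
}

# Upper bounds implied by a budget constraint and an emission regulation (invented values).
upper_bounds = {"cost of kWh ($)": 0.07, "GHG emissions (g CO2/kWh)": 800}

def is_viable(profile, bounds):
    return all(profile[criterion] <= limit for criterion, limit in bounds.items())

viable = [a for a, profile in impact_matrix.items() if is_viable(profile, upper_bounds)]
print(viable)   # coal is excluded here because it exceeds the GHG bound
```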

After having framed a multi-criteria analysis in this way, it is possible to adopt various methods

related to the ranking of alternatives. In relation to this task there are two definitions that carry a lot of

epistemological implications [42]:

(1) Dominance: an action a dominates an action b if a is at least as good as b for all the criteria taken into consideration, and strictly better than b for at least one criterion.

Fig. 3. Example of ranking of solutions: alternatives A, B, C, D and E plotted on a two-criteria space (Criterion 1 vs. Criterion 2).


(2) Efficient solution: an action a belonging to A is efficient if there is no action b in A which

dominates a.
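These two definitions translate directly into a few lines of code. The sketch below uses invented two-criteria scores for the five alternatives of Fig. 3 (both criteria oriented so that higher is better); it reproduces the situation described in the text, in which C dominates B and D dominates A, so that A and B are not efficient.

```python
# Hypothetical two-criteria scores for alternatives A-E (higher is better on both).
profiles = {
    "A": (2, 5),
    "B": (4, 2),
    "C": (5, 3),   # C is at least as good as B on both criteria and strictly better
    "D": (3, 6),   # D dominates A
    "E": (6, 1),
}

def dominates(a, b):
    """True if profile a is at least as good as b everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

efficient = [name for name, p in profiles.items()
             if not any(dominates(q, p) for other, q in profiles.items() if other != name)]
print(efficient)   # -> ['C', 'D', 'E']: A and B are dominated, hence not efficient
```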

Therefore, the concept of multi-criterial efficiency (a formal definition of ‘better’) can easily be

illustrated graphically—see Fig. 3 which refers to a two-criteria state space. Alternative C performs

better than B in all respects and hence C should be preferred to B. The same can be said for D

compared with A. Thus, among these, only C and D are efficient (i.e. rational) alternatives, since A and B are dominated.

The implications of these definitions can be critically appraised against the basic epistemological

problems discussed in the previous three sections. As observed in Section 1, the concepts of

‘dominance’ and ‘efficiency’, which can be used to rank different options, apply not to the reality, but

rather, to a given representation of the reality, determined by the choice made by the analyst when

selecting a given problem structuring. That is, what is ‘seen’ as the existing set of constraints and

opportunities in the formalization of the problem structuring used in the multi-criteria characteriz-

ation, does not necessarily reflect the real set of constraints and opportunities. This distinction between

‘reality’ and ‘representation of the reality’ becomes crucial in all cases in which uncertainty

and genuine ignorance can be assumed to play an important role in the process. This is always the case

when dealing with future scenarios, reflexive systems and multiple relevant scales. In this situation the

analysts should be very aware of the heavy implications of confusing the reality with their

representation of it. This means that the concept of ‘efficient solution’ may be useful, but only in the

short term, whereas it is very likely to become dangerous in the long term. For a detailed analysis of

the mechanism generating Jevons' paradox (see Glossary), see Giampietro [21].

Any algorithmic definition of ‘an ideal option’ (an option dominating all the others on all the

selected criteria) must be based on a static and finite information space. This implies the unavoidable

existence of: (1) other goals/objectives (reflecting the identity of other social groups) not considered in

the existing analysis; and (2) other relevant dynamics and constraints which could have been detected

only by adopting a different scale and a different set of observable qualities in the observation space.

This is the reason why it is important to look for several complementary ways of structuring the problem (plurality of representations). Whenever we describe a system, even when


adopting a multi-dimensional representation, we are using a number of dimensions smaller than the

one required to capture all its relevant qualities. The metaphor of the Flatlanders [46] can be a good

example of this. This is what explains the danger of going for the ‘best solution’ as determined by a

computation to define a political decision. The difference between computation and decision making lies exactly in the fact that decision making is based on both: (1) context-dependent goals and

(2) insufficient information (Fesce, personal communication). In this frame we recall here the

suggestion of Simon that when dealing with complexity the concept of ‘procedural rationality’ should

replace that of 'substantive rationality' (the one adopted by default by reductionism). The amplification of the

best performing activities (according to a selected set of goals and a given problem structuring) and the

elimination of less performing activities (considered as obsolete at a given point in space and time), is

a wise strategy only under the assumption that the adopted (1) goals and (2) problem structuring will remain relevant and useful in the future and wherever such a strategy is applied. As soon as either of these two assumptions loses validity, the strategy will backfire.

4.4. Addressing explicitly the unavoidable existence of conflicts

In the previous section we saw that technical and social incommensurability associated with large

doses of uncertainty imply a serious limitation on the use of algorithms and formal protocols for

handling in an appropriate way the conflicts on the normative side.

Therefore, given the importance of value conflicts for the class of problems we are dealing with, it

becomes essential also to develop analytical tools showing clearly the impacts of different potential

choices on each of the different stakeholders (or social actors) considered in the analysis. This means

that together with an integrated representation of changes on different levels and different descriptive

domains (something similar to what is represented in Fig. 2), we also have to generate a conflict analysis

procedure, able to indicate groups whose interests seem to cluster together or diverge.

An example of an analytical tool developed to handle this task is the Novel Approach to Imprecise

Assessment and Decision Environments (NAIADE), which was created for sustainability policy

problems specifically [42]. NAIADE is a discrete multi-criteria method whose impact (or evaluation)

matrix may include either crisp, stochastic or fuzzy measurements of the performance of an alternative

with respect to an evaluation criterion, which makes it very flexible for real-world applications.

A peculiarity of NAIADE is the use of conflict analysis procedures to be integrated with the multi-

criteria results. This allows policy-makers to seek decisions that could reduce the degree of conflict (in order to reach a certain degree of consensus) or that could have a higher degree of equity across

different income groups. NAIADE uses a fuzzy conflict analysis procedure. Starting with a matrix

showing the impacts of different courses of action on each different interest group we can build a

different type of impact matrix, which can be called a social impact matrix. A fuzzy clustering

procedure indicating the groups whose interests are closer in comparison with the other ones can be

used. In this way, by applying the NAIADE software, one can arrive at a dendrogram of a coalition

formation process (related to the same problem considered in the multi-criteria impact matrix

referring to the information graphed in Fig. 2). An example of such an analysis (but applied to a

participatory integrated assessment of policy options about water management) is illustrated in Fig. 4.
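A rough sketch of the mechanism behind such a clustering step is given below. It starts from an invented social impact matrix (hypothetical scores of how each alternative would affect each interest group) and agglomerates the groups with the most similar impact profiles using SciPy's standard hierarchical clustering; this is only an illustration of the general idea of a coalition-formation dendrogram, not the fuzzy clustering procedure actually implemented in NAIADE.

```python
# Illustrative sketch only: NOT the NAIADE procedure. Groups with similar
# (invented) impact profiles are agglomerated into a dendrogram.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage

groups = ["nuclear industry", "coal miners", "environmental NGO",
          "local residents", "electricity consumers"]

# Rows: interest groups; columns: impact of (nuclear, coal, gas) on that group,
# scored on an invented scale from -1 (very negative) to +1 (very positive).
social_impact_matrix = np.array([
    [ 0.9, -0.6, -0.2],
    [-0.8,  0.9, -0.1],
    [-0.7, -0.9,  0.1],
    [-0.4, -0.5,  0.2],
    [ 0.2,  0.1,  0.3],
])

tree = linkage(social_impact_matrix, method="average", metric="euclidean")
leaf_order = dendrogram(tree, labels=groups, no_plot=True)["ivl"]
print(leaf_order)   # groups listed in the order they appear along the dendrogram
```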

This different perspective of analysis, obviously, carries the risk of generating serious divergence

between the ranking obtained when using a multi-criteria selection process based on the information

provided by a multi-criteria impact matrix (the type of information given in Fig. 2) and the equity ranking based on social conflicts represented in the social impact matrix (the type of information given in Fig. 4).

Fig. 4. Example of conflict analysis.

Multi-criteria impact matrices (characterizing different options in relation to the score of different

attributes on a multi-criteria space) tend to be considered more ‘technical’. In fact, the structure of this

information reflects the choices made in the step performed by the scientists who selected relevant criteria,

useful indicators and reliable data. However, as observed before, the ‘neutral’ role of this technical

information is far from being above confrontations. For example, getting back to the integrated

analysis given in Fig. 2, interest groups (e.g. a lobby in favour of nuclear energy) can fight about the

relevance of the various indicators or about the reliability of the relative data (the reader can recall

here the amazing findings of Stirling). Moreover, ranking various options requires considering all

the criteria simultaneously in search of the most satisfying compromise solution. So at times the fight

for the protection of individual interests can turn into a deliberate effort of including or excluding those

objectives or criteria that are viewed as favourable or dangerous for certain interests.

By contrast, social impact matrices based on the impact score of each of the considered

alternatives in relation to each interest group are much more direct. In fact, such a score is determined

by the group itself. Irreducible conflicts may exist between different coalitions or even between single

groups. This policy analysis can be conditioned by heavy value judgments such as: do all actors have the same importance (i.e. weight)? Should a socially desirable ranking be obtained on the grounds of the

majority principle? Should some veto power be conceded to minorities? Are income distribution

effects important? And so on. That is, this combined analysis makes it possible to address, in a more structured way, some of the topics raised in the list of quality checks presented before. Examples

of applications are available in the literature [47–51].


4.5. Back to the basic implications of Post-Normal Science

In general terms, we can say that the epistemological concerns discussed so far have not been

considered very relevant by scientific research in the past. On the other hand, the new nature of the

problems faced in this third millennium (e.g. climate change, mad cow disease and avian flu, genetically modified

organisms), implies that very often scientists cannot provide any useful input to the social debate

without interacting with the rest of society, just as the rest of society cannot perform any

sound decision making without interacting with the scientists [21,26]. That is, the question of ‘how to

improve the quality of a policy process’ must be put, quite quickly, on the agenda of scientists,

decision makers and indeed of the whole society. This extension of the ‘peer community’ is essential

for maintaining the quality of the process of decision making when dealing with reflexive complex

systems together with uncertainty and value conflicts. The new epistemological framework called

‘Post-Normal Science’ was proposed exactly to better deal with two crucial aspects of science in the

policy domain: uncertainty and value conflict.

Post-Normal Science can be characterized in relation to other, complementary scientific strategies,

according to the diagram given in Fig. 5, which is based on two axes: systems uncertainties (on the

descriptive side) and decision stakes (on the normative side).

When both uncertainty and stakes are small, we are in the realm of ‘normal’ academic science,

where it is safe to rely on ‘codified expertise’ without much discussion. When the task is to design and

build a standard elevator, any good practitioner can do it safely, as long as the codified know-how is

applied properly.

Fig. 5. The PNS graph in relation to semiotic closure (after Giampietro [21]).


When either uncertainty or stakes are in the medium range, then the application of routine

techniques and standardized and generalized knowledge is no longer enough. In these cases, skill,

judgment, sometimes even courage are required to adjust the ‘general knowledge’ available to ‘special

situations’. Funtowicz and Ravetz call this ‘professional consultancy’, with the examples of the

surgeon or the senior engineer facing a critical situation. In this situation, the client must have a say in

the choice of the surgeon or senior engineer whose choices will determine the final

outcome.

Finally, we arrive at cases in which the possible outcomes are not completely determined by

scientific facts; inferences will (naturally and legitimately) be conditioned by the values held by the

agent. When stakes are very high (as when an institution is seriously threatened by a policy) then

partisan discussion and defensive tactics will involve challenging every step of a scientific argument and taking sides. An example of this strategy could be the firm denial of the existence of a problem of

global warming by those actors that do not want to implement precautionary policies. We are now in

the realm of Post-Normal Science.

In this situation, the tactic of fighting over the definition of relevant facts or the reliability of

proposed data (e.g. again the study of Stirling and the data of Table 1) should not be considered wrong.

On the contrary, within the realm of Post-Normal Science legitimate contrasting views, even when these views are held by scientists, have to be used openly to challenge scientific arguments. Taking sides is wrong only when it is done covertly, as by scientists who present themselves as impartial judges when, in reality, they are committed advocates of a particular view.

4.6. Conclusions

According to the paradigm of Post-Normal Science, scientific inputs developed in relation to the

topic of science for governance should no longer be based on mono-criteria analyses generated by

closed committees of experts. Rather, multi-criteria analyses of sustainability should be obtained

through participatory procedures of integrated assessment. This requires the adoption of innovative

analytical approaches.

We do not claim that the tools mentioned in this paper for checking the quality of the information

used on the normative and the descriptive side are the best tools available for this task. The tools

mentioned here are those that were presented in the session of Porto Venere to which this paper refers. In any case, we believe that they make the point that it is possible to adopt innovative approaches to

do things in a different way.

If we accept the idea that the process of decision making is intended as a search for ‘satisfying

solutions’, the new role of scientists should be that of facilitating the negotiation among stakeholders

by clarifying the nature and possible consequences of trade-offs in relation to non-equivalent criteria

of quality and in the face of uncertainty about predictions. The alternative is hampering such a process by taking sides on a substantive basis, that is, by indicating the 'best' solution among the considered options following a process that will, in any case, reflect either vested interests or personal views.

The implications of complexity in relation to science for governance entail a two-way dialogue between science and society: scientists have to teach and learn at the same time. This is in contrast with

the concept of substantive rationality that entails a one-way flow of information. The concept of Post-

Normal Science puts scientists back within the continuous process of social learning, rather than holding science as something external to and above it.


References

[1] Brown MT. Final document of the International Workshop held in Porto Venere. In: Ulgiati S, Brown MT,

Giampietro M, Herendeen R, Mayumi K, editors. Advances in energy studies. Exploring supplies, constraints, and

strategies. Padova, Italy: Modesti Publisher; 2001. p. 305–18.

[2] Allen TFH, Tainter J, Pires C, Hoekstra TW. Dragnet ecology—"just the facts ma'am": the privilege of science in a

postmodern world. BioScience 2001;51:475–85.

[3] Anderson JL. Embracing uncertainty: the interface of Bayesian statistics and cognitive psychology. Conservation

Ecology [online] 1998;2(1):2. Available on internet URL: http://www.consecol.org/vol2/iss1/art2

[4] Meffe GK, Viederman S. Combining science and policy conservation in biology. Wildlife Soc Bull 1995;23(3):327–32.

[5] Walters CJ. Adaptive management of renewable resources. New York: MacMillan; 1986.

[6] Clark TW. Creating and using knowledge for species and ecosystem conservation: science, organizations, and policy.

Perspect Biol Med 1993;36(3):497–525.

[7] Brunner RD, Clark TW. A practice-based approach to ecosystem management. Conservation Biol 1997;11(1):48–56.

[8] Weeks P, Packard JM. Acceptance of scientific management by natural resource-dependent communities. Conservation

Biol 1997;11(1):236–45.

[9] Funtowicz SO, Ravetz JR. A new scientific methodology for global environmental issues. In: Costanza R, editor.

Ecological economics. New York: Columbia; 1991. p. 137–52.

[10] Pattee HH. Evolving self-reference: matter, symbols, and semantic closure. Commun Cognit Artif Intell 1995;12:9–28.

[11] Kuhn TS. The structure of scientific revolutions. Chicago: University of Chicago Press; 1962.

[12] Ravetz JR, Funtowicz SO, guest editors. Special issue of Futures dedicated to Post-Normal Science. Futures 1999;31.

[13] Ashby WR. An introduction to cybernetics. London: Chapman & Hall Ltd; 1957.

[14] Rosen R. Anticipatory systems: philosophical, mathematical and methodological foundations. New York: Pergamon

Press; 1985.

[15] Rosen R. Essays on life itself. New York: Columbia University Press; 2000.

[16] Kampis G. Self-modifying systems in biology and cognitive science: a new framework for dynamics, information and

complexity. Oxford: Pergamon Press; 1991.

[17] Georgescu-Roegen N. The entropy law and the economic process. Cambridge: Harvard University Press; 1971.

[18] Box GEP. Robustness is the strategy of scientific model building. In: Launer RL, Wilkinson GN, editors. Robustness in

statistics. New York: Academic Press; 1979. p. 201–36.

[19] Allen TFH, Starr TB. Hierarchy—perspectives for ecological complexity. Chicago: The University of Chicago Press;

1982.

[20] Ahl V, Allen TFH. Hierarchy theory. New York: Columbia University Press; 1996.

[21] Giampietro M. Multi-scale integrated analysis of agroecosystems. Boca Raton: CRC Press; 2003.

[22] Giampietro M. Complexity and scales: the challenge for integrated assessment. In: Rotmans J, Rothman DS, editors.

Scaling issues in integrated assessment. The Netherlands: Swets & Zeitlinger B.V. Lissen; 2003.

[23] Giampietro M, Ramos-Martin J. Multi-scale integrated analysis of sustainability: a methodological tool to improve the

quality of narratives, Int J Global Environ Issues [in press].

[24] Giampietro M, Mayumi K, guest editors. Societal metabolism and multiple-scales integrated assessment. Popul Environ

2000;22(2):95–254.

[25] Giampietro M, Mayumi K, guest editors. Societal metabolism and multiple-scales integrated assessment. Popul Environ

2001;22(3):255–352.

[26] Munda G. Social multi-criteria evaluation (SMCE): methodological foundations and operational consequences. Eur

J Oper Res 2004;158(3):662–77.

[27] Carnot S. Reflexions sur la puissance motrice du feu sur les machines propres a developper cette puissance. Paris:

Bachelier, Libraire; 1824.

[28] Scott MJ, Antonsson EK. Aggregation functions for engineering design trade-offs. Fuzzy Sets Syst 1998;99(3):253–64.

[29] Checkland P. Systems thinking, systems practice. Chichester: Wiley; 1981.

[30] Checkland P, Scholes J. Soft-systems methodology in action. Chichester: Wiley; 1990.


[31] Roling N, Wagemakers A, editors. Facilitating sustainable agriculture participatory learning and adaptive management

in times of environmental uncertainty. Cambridge: Cambridge University Press; 1998.

[32] Giampietro M, Mayumi K. Complex systems and energy. In: Cleveland C, editor. Encyclopedia of energy. San Diego:

Elsevier; 2004.

[33] Fluck RC. Net energy sequestered in agricultural labor. Trans Am Soc Agric Eng 1981;24:1449–55.

[34] Fluck RC. Energy of human labor. In: Fluck RC, editor. Energy in farm production. Energy in world agriculture, vol. 6. Amsterdam: Elsevier; 1992.

[35] Giampietro M, Pimentel D. Assessment of the energetics of human labor. Agric Ecosyst Environ 1990;32:257–72.

[36] Giampietro M, Pimentel D. Energy efficiency: assessing the interaction between humans and their environment. Ecol

Econ 1991;4:117–44.

[37] Giampietro M, Pimentel D, Cerretelli G. Energy analysis of agricultural ecosystem management: human return and

sustainability. Agric Ecosyst Environ 1992;38:219–44.

[38] Giampietro M, Bukkens SGF, Pimentel D. Labor productivity: a biophysical definition and assessment. Human Ecol

1993;21(3):229–60.

[39] Salthe S. Evolving hierarchical systems: their structure and representation. New York: Columbia University Press;

1985.

[40] Gomiero T, Giampietro M. Overview of graphic tools for data representation in integrated analysis of farming systems.

Int J Global Environ Issues [in press].

[41] Stirling A. Multicriteria mapping: mitigating the problems of environmental evaluation?. In: Foster J, editor. Valuating

nature: economics, ethics and environment. London: Routledge; 1997.

[42] Munda G. Multicriteria evaluation in a fuzzy environment. Theory and applications in ecological economics.

Heidelberg: Physica-Verlag; 1995.

[43] Commission of the European Communities. Communication from the Commission on the Precautionary Principle,

02.02.2000. COM(2000)1. Brussels: The Commission, 2000. See also: http://europa.eu.int/comm/off/com/health_

consumer/precaution.htm

[44] Simon HA. From substantive to procedural rationality. In: Latsis SJ, editor. Method and appraisal in economics.

Cambridge: Cambridge University Press; 1976.

[45] Giampietro M. The precautionary principle and ecological hazards of genetically modified organisms. Ambio 2002;

31(6):466–70.

[46] Abbott EA. Flatland, a romance of many dimensions. Boston: Little, Brown & Co.; 1935.

[47] Munda G, Nijkamp P. Policy analysis for sustainable development: an operational approach to natural

park management. In: Coccossis H, Nijkamp P, editors. Planning for our cultural heritage. Aldershot: Avebury;

1995. p. 69–88.

[48] Munda G, Nijkamp P, Rietveld P. Qualitative multicriteria methods for fuzzy evaluation problems: an illustration of

economic-ecological evaluation. Eur J Oper Res 1995;82:79–97.

[49] Munda G. Multicriteria evaluation as a multidimensional approach to welfare measurement. In: van den Bergh J, van

der Straaten J, editors. Economy and ecosystems in change: analytical and historical approaches. Cheltenham, UK:

Edward Elgar; 1997. p. 96–115.

[50] Munda G, Paruccini M, Rossi G. Multicriteria evaluation methods in renewable resource management: the case of

integrated water management under drought conditions. In: Beinat E, Nijkamp P, editors. Multicriteria evaluation in

land-use management: methodologies and case studies. Dordrecht: Kluwer; 1998. p. 79–94.

[51] De Marchi B, Funtowicz S, Lo Cascio S, Munda G. Combining participative and institutional approaches with

multicriteria evaluation. An empirical study for water issues in Troina, Sicily. Ecol Econ 2000;34(2):267–82.

Glossary

Assessment: a critical evaluation and analysis of information relevant for decision making.

Integrated Assessment: the simultaneous appraisal of attributes of performance referring either to different


dimensions of analysis and/or different scales. It requires the simultaneous use of indicators developed in

different disciplinary fields.

Multi-criteria space: an integrated set of logically independent criteria used to characterize the performance of a

system in relation to a selected set of policy options.

Autism: a variable developmental disorder that appears by age three and is characterized by impairment of the

ability to form normal social relationships, by impairment of the ability to communicate with others, and by

stereotyped behavior patterns [Merriam-Webster Dictionary].

Epistemology: the study or a theory of the nature and grounds of knowledge especially with reference to its

limits and validity [Merriam-Webster Dictionary].

Problem structuring: the result of a process of formalization of a perceived problem in scientific terms. It

includes several steps: issue identification, choice of criteria, choice of indicators, choice of models, choice of

appropriate protocols for data collection.

Semiotic closure: a successful action in reality which reflects a successful analysis and planning.

This expression, coined by Pattee (1995), is related to the concept of the Semiotic Triad introduced by Peirce

[/APPLY/REPRESENT/TRANSDUCE/APPLY/] (Peirce, 1935).

Scale: the relation between a given perception and representation of the reality, which is determined by the

choice made by the observer about the relevant attributes associated with the identity of the observed system.

The finite ability to process information of any observer/agent implies that after having defined the identity of

the observed system both the relative perception and the representation will be characterized by a given grain

(minimum detectable gradients, differentials) and a given extent (size of the domain of observation).

Non-equivalent descriptive domains: representations of the reality based on definitions of space and time that

are not reducible to each other. An example of two non-equivalent descriptive domains is given by two

pictures of the same person taken by: (1) a microscope and (2) a regular camera. With a microscope it is

impossible to perceive the quality ‘face’ observable only when using a regular camera.

Technical incommensurability: it is impossible to reduce to a single model and data set heterogeneous

information referring to representations belonging to non-equivalent descriptive domains.

Social incommensurability: in policy problems social actors always bring into play a set of contrasting and legitimate

values, perceptions and interests. This implies that any decision is always associated with the generation of

winners and losers.

Hierarchy Theory: a theory of the observer’s role in any formal study of complex systems (Ahl and Allen, 1996,

p. 29).

Triadic Filtering: the operation required to define an identity for the observed system. It implies determining an

expected relation among the types used to represent the system on three contiguous hierarchical levels: (1) the whole; (2) its

parts; (3) the associative context (admissible environment).

Stakeholders: those affecting and affected by the policy (or changes) under analysis.

Jevons' paradox: an increase in efficiency in using a resource leads, in the medium to long term, to an

increased consumption of that resource (rather than a decrease). For complex adaptive systems ‘ceteris’ are

never ‘paribus’.