SALVAGING THE SPIRIT OF THE METER-MODELS TRADITION: A MODEL OF BELIEF REVISION BY WAY OF AN ABSTRACT IDEALIZATION OF RESPONSE TO INCOMING EVIDENCE DELIVERY DURING THE CONSTRUCTION OF PROOF IN COURT

ALDO FRANCO DRAGONI
Istituto di Informatica, University of Ancona, Ancona, Italy

EPHRAIM NISSAN
School of Computing and Mathematical Sciences, University of Greenwich, Greenwich, London, England, United Kingdom

Inside the Juror (Hastie 1994) was, in a sense, a point of arrival for research developing formalisms that describe judicial decision making. Meter-based models of various kinds were mature, and even ready to give way to models concerned with the narrative content of the cases at hand, which a court is called to decide upon. Moreover, excessive emphasis was placed on lay factfinders, i.e., on jurors. It is noticeable that, as "AI & Law" has become increasingly concerned with evidence in recent years (with efforts coordinated by Nissan & Martino, Zeleznikow, and others), the baggage of the meter-based models from jury research does not appear to be exploited. In this article, we try to combine their tradition with a technique of belief revision from artificial intelligence, in an attempt to provide an architectural component that would be complementary to models that apply representation or reasoning to legal narrative content.

Address correspondence to Dr. Ephraim Nissan, 282 Gipsy Road, Welling, Kent DA16 1JJ, England.

E-mail: [email protected]

Applied Artificial Intelligence, 18:277–303, 2004
Copyright © Taylor & Francis Inc.
ISSN: 0883-9514 print/1087-6545 online
DOI: 10.1080/08839510490279889

BACKGROUND IN JURY RESEARCH

What is the proper role for artificial intelligence tools, architectures, or formal models in the service of treating legal evidence during investigations or in the courtroom? It seems unquestionable that such a role would be welcome if it is to support the legal professionals involved, in whatever role, provided that such support takes place in any way other than affecting what the verdict on the defendant's guilt is going to be.

If, instead, the role of the AI tool is one of suggesting to the factfinders a truth value for a proposition concerning the guilty or not guilty status of the suspect or the defendant, other than as independently supported by other means, that is a terrain rife with controversy; see, for example, views from the opposing camps in Allen and Redmayne (1997) and also Tillers and Green (1998).¹ This is less so if such a model is relegated to the status of an object with which scholarship for its own sake could safely be left to play, also to the satisfaction of the skeptics. This paper does not enter into the controversy, because the contribution of the formalism it is going to propose is to jury research; namely, it augments the taxonomy of approaches that found their place in Reid Hastie's Inside the Juror (Hastie 1994). As far as we know, the earliest implementation of this kind of model in artificial intelligence (with either symbolic or connectionist computation) is the one described in Gaines et al. (1996). That model, which adopts a (by then) current approach from jury research and represents it by means of artificial neural networks, had earlier been the subject of Gaines (1994).

As presented in Hastie (1994), there are four main current approaches to the formal modeling of the process of juror decision making, by which, through exposure to the evidence being presented in court, a juror's attitude to the accused being or not being guilty is shaped:

- Probability theory approaches (Bayesian posterior probability of guilt).
- Algebraic formulation (sequential averaging model): perceived strength of evidence for guilt.
- Stochastic process models (state transitions over time are probabilistic).
- Cognitive information processing and the story model.

Hastie (1994, Figure 1.1 on p. 7) describes trial events "in terms of the types of information presented to the juror." These include: indictment, defendant's plea, prosecution opening statement, defense opening statement, witnesses (comprising the sequence: statements of witness and judge, observations of witnesses, observations of judge), defense closing arguments, prosecution closing arguments, judge's instructions on procedures (the procedures being: presumption of innocence, determination of facts, admissibility, credibility, reasonable inference, and standard of proof), and judge's instructions on verdicts (where verdict categories have these features: identity, mental state, actions, and circumstances).

For the juror's task, Hastie proposes a flowchart of its tentative structure (Hastie 1994, p. 8, Figure 1.2), notwithstanding the differences of opinion that admittedly exist in the literature about how this takes place in the juror's cognition. Given inputs from the trial (witnesses, exhibits, and so forth), the juror has to encode meaning, the next step being (A) "Select admissible evidence." Later on in the trial events, given the judge's procedural instructions, the juror has to encode the meaning of the procedures (presumption of innocence, and so forth, as listed earlier), and this in turn has three outgoing arcs, to: (B) "Evaluate for credibility" (into which an arc comes from A as well), (C) "Evaluate for implications," and (Z), on which see below. There is a loop by which (B) "Evaluate for credibility" leads into (C) "Evaluate for implications," and then into (D) "Construct sequence of events," which in turn provides feedback affecting B. Besides, D leads to a test: (T) "More evidence?" If there is, one goes to A; otherwise, one goes to Z. Given the judge's instructions on verdicts, the juror has to learn the verdict categories, and this in turn leads to (Z) "Predeliberation judgment." The flowchart from Hastie is redrawn here, with some simplification, in Figure 1.

The Bayesian and the algebraic approaches are meter-models, in the sense that a hypothetical meter measures the propensity of the juror to consider the defendant guilty as charged, starting from the presumption of innocence, when gradually faced with new evidence. Unlike in the Bayesian model, in the algebraic model the fundamental dimension of judgment is not assumed to be a subjective probability. Accordingly, the rules of coherence of judgment are required in the Bayesian model, but are relaxed in the algebraic model. The Bayesian model requires successive judgments to be coherent: extreme judgments (0 or 1) are final. In contrast, in the algebraic model, extreme judgments are not necessarily final, and can be adjusted when subsequent evidence is received.

The belief updating is "additive" in the algebraic model: weigh each evidence item and add the weighted value to the current degree of belief. In contrast, in the Bayesian model the belief updating is multiplicative: each new item of evidence induces a multiplicative adjustment calculation.
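As a minimal sketch of this contrast (our own illustration, not a formalism from Hastie's overview), the following compares the two update rules on one hypothetical sequence of evidence items; the weights, scale values, and likelihood ratios are invented for the example.

```python
# Contrast of the two meter-model update rules: additive weighted
# averaging vs. multiplicative (Bayesian) adjustment of the odds.

def algebraic_update(belief, scale_value, weight):
    """Sequential averaging: move the meter toward the item's implied
    guilt value by a fraction equal to the item's weight."""
    return belief + weight * (scale_value - belief)

def bayesian_update(prob_guilt, likelihood_ratio):
    """Multiply the odds of guilt by the item's likelihood ratio."""
    prior_odds = prob_guilt / (1.0 - prob_guilt)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Presumption of innocence: the Bayesian meter cannot start at exactly 0,
# since 0 (like 1) is absorbing -- extreme judgments are final.
meter_alg, meter_bay = 0.0, 0.01
for scale_value, weight, lr in [(0.9, 0.3, 4.0), (0.2, 0.2, 0.5), (1.0, 0.4, 8.0)]:
    meter_alg = algebraic_update(meter_alg, scale_value, weight)
    meter_bay = bayesian_update(meter_bay, lr)
    print(f"algebraic meter: {meter_alg:.3f}   Bayesian meter: {meter_bay:.3f}")
```

The absorbing-state comment in the sketch restates the difference noted above: once the Bayesian meter reaches 0 or 1 it can never move again, whereas the averaging meter can always be pulled back by later evidence.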

The difference between the Bayesian probability updating model and the stochastic Poisson process model is that, in the latter, what is probabilistic is the state transitions over time. Figure 3 (taken from Nissan [2001a]) compares these two approaches, by modifying and coalescing two flowcharts from Hastie's overview (Hastie 1994, p. 21, Figure 1.6, and p. 13, Figure 1.4).

Next, the fourth approach in that overview is the cognitive story model within the cognitive information processing model, which is within the conceptual universe of cognitive science (at its interface with computational models of cognition). The cognitive story model is about constructing stories and evaluating their plausibility and fitness as explanations, then learning the verdict categories, and, next, matching the accepted story to verdict categories. This fourth approach is associated in jury research with the names of Pennington and Hastie (1983) (see also Hastie et al. 1983).

Twining (1997) has warned about it being unfortunate that contested jury trials be treated, in "much American evidentiary discourse" and "satellite fields, such as the agenda for psychological research into evidentiary problems," "as the only or the main or even the paradigm arena in which important decisions about questions of fact are taken" (ibid., p. 444), and while acknowledging the importance of the cognitive story model (ibid., n. 16), he has signaled in this connection the excessive emphasis on the jury. Twining (1999, Sec. 2.3; 1994, Ch. 7) has warned as well about the danger that a "good story" poses in court, of pushing out the true story. Jackson (1996) provides a semiotic perspective on narrative in the context of the criminal trial. See also Jackson (1990), and his other works (Jackson 1985; 1988a; 1988b; 1994).

FIGURE 1. A flowchart for the juror’s task.


FIGURE 2. Taken from Nissan (2001a), Figure 2 is a redrawn, coalesced flowchart of the ones Hastie gives for the Bayesian probability updating model and for the algebraic sequential averaging model (Hastie 1994, Figure 1.4 on p. 13 and Figure 1.5 on p. 18).


FIGURE 3. A comparison of the Bayesian probability updating model and the stochastic Poisson process model.


Nissan has argued elsewhere for the crucial role, for research applying AI to legal evidence, of an AI model of making sense of a narrative and of its plausibility. For example, on narrative stereotypes and narrative improbability, see the papers on the Jama story (Geiger et al. 2001; Nissan 2001b; Nissan and Dragoni 2000). Refer as well to the papers on the ALIBI project (Kuflik et al. 1991; Fakher-Eldeen et al. 1993; Nissan and Rousseau 1997), as well as to Nissan's papers in the companion special issue in Cybernetics & Systems (2003), Nissan's paper on an amnesia case (2001c), and his paper on the maze of identities in Pirandello's Henry IV (Nissan 2002a). Nissan has also developed a goal-driven formal analysis (the COLUMBUS model) for a passage in a complex literary work whose narratives are not in the realistic tradition (Nissan 2002b).

"BELIEF REVISION": AN EMERGENT DISCIPLINE FROM ARTIFICIAL INTELLIGENCE

Belief Revision (BR) is an emergent discipline from artificial intelligence. It studies the impact of acquiring new information. The ability to revise opinions and beliefs is imperative for intelligent systems exchanging information in a dynamic world. BR first came into focus in the work of the philosophers of cognition William Harper (1976; 1977) and Isaac Levi (1977; 1980; 1991), but almost immediately broke into computer science and artificial intelligence through a seminal paper to which we are going to refer as the AGM approach (Alchourrón et al. 1985). Interestingly, that article was actually inspired by an application to law: the promulgation of a new law, added to an extant body of law. After about two decades of intensive research, BR has established itself as a mature field of study (see Williams and Rott [2001] for a recent state-of-the-art collection). In the following, we try to sketch the evolutionary line of this complex field, focusing on the reasons why we needed to depart considerably from the initial start-up concepts in order to model some of the inquirer's cognitive processes.

The AGM approach formalized the set of beliefs as a logic theory K described in a formal language L. The problem of revision arises when we get a new formula p (belonging to L): the result has to be a new, revised theory, K*p. More generally, if T_L is the set of theories in language L, a "revision" is a function *: T_L × L → T_L. So, "revision" is an operator (K, p) → K*p (i.e., it takes a theory K and a formula p, and returns a revised theory K*p). All the research aims at finding how to change the theory K in order to take into account the new formula p.

A problem arises especially when the new formula p is in contrast with the extant theory K. How is contradiction to be handled, in case the new information contradicts the current set of beliefs? Then the new theory cannot merely be the old theory extended with the new formula (as this would exhibit inconsistency).

It is necessary to find how to change the given theory (i.e., the current set of beliefs) in order to incorporate the incoming information, i.e., the new formula. To get an idea of how complex this is, consider that a (logic) theory is an infinite set of formulae: namely, all those formulae obtainable from the basic formulae by taking the deductive closure. AGM set forth three rationality principles that must govern the change:

AGM1. Consistency: K*p must be consistent (i.e., with no conflicting propositions, as possibly introduced by p).

AGM2. Minimal change: K*p should alter K as little as possible (while trying to satisfy the consistency principle).

AGM3. Priority to the incoming information: p must belong to K*p (thus not being relegated to the status of a rarely consulted appendix to the old theory).

We will focus our attention on the second and the third principles. The former says that the new theory must be as similar as possible to the old one. It is a rehashed Occam's razor: "Simpler explanations should be preferred until new evidence proves otherwise." Put otherwise: "If, to explain a phenomenon, it is unnecessary to make given hypotheses, then don't make them."

As to the third principle, it says that the new theory must incorporate the new formula. This requires that priority be given to the incoming information: if the older formulae were consulted first, the new formula's impact would be "neglected" (it would not "bother" the old theory).

From these rationality principles, AGM derived eight postulates governing theory revision:

1. AGM1: For any p ∈ L and K ∈ T_L, K*p ∈ T_L.
The new theory must be a theory.

2. AGM2: p ∈ K*p.
The new formula must belong to the new theory. Known as the "postulate of success," this is the most controversial AGM postulate.

3. AGM3: K*p ⊆ K+p.
The new theory must be a subset of the expanded theory we would get if it had been allowable to merely augment the old theory with the new formula. Such an expanded theory would be inconsistent if p contradicts the old theory; the expanded, inconsistent theory then includes all the formulae of language L, so it necessarily satisfies the postulate.

4. AGM4: If ¬p ∉ K then K+p ⊆ K*p.
If the negation of the new information is not derivable from the old theory, then the new theory must contain all those formulae that can be derived by merely adding the new formula to the old theory.

5. AGM5: K*p is inconsistent if and only if p is inconsistent.
The new theory is inconsistent only if the new formula is, and vice versa.

6. AGM6: If p ≡ q then K*p = K*q.
Two logically equivalent formulae produce the same effects of change. (We will come back to that, questioning this postulate.)

7. AGM7: K*(p ∧ q) ⊆ (K*p)+q.
If the new formula is a logical conjunction p ∧ q, then the new theory must be a subset of the final result of this sequence of steps: (a) revise the old theory with p; then (b) expand the intermediate theory by merely adding q.

8. AGM8: If ¬q ∉ K*p then (K*p)+q ⊆ K*(p ∧ q).
If the new formula is a logical conjunction p ∧ q, and, moreover, the negation of q is not derivable from the theory as revised with p, then the new theory as revised with the new formula must contain all those formulae obtainable by first revising the old theory with p, and then expanding the intermediate theory by merely adding q.

These axioms describe the rational properties that revision should obey, but they do not suggest how to perform it. A first operational definition came from Levi's identity:

K*p = (K − ¬p) + p

which defines revision in terms of contraction. Contracting a theory K by a formula p means making it impossible to derive p from K. So Levi's identity simply means that, to revise K by p, we must first make it impossible to derive ¬p (possibly deleting some axioms from K), and then add p.
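As a toy rendering of Levi's identity (our illustration, under heavy simplifying assumptions), the sketch below works on a finite base of propositional literals, where contraction can be performed by just deleting the clashing literal; real contraction must choose among alternative deletions, which is exactly the open problem discussed below.

```python
# Levi's identity on a belief base of literals: revise(K, p) first
# contracts by the negation of p, then expands with p.

def negate(lit: str) -> str:
    """Map p to ~p and ~p back to p."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def contract(base: set, formula: str) -> set:
    """Make `formula` underivable; with bare literals, just drop it."""
    return base - {formula}

def expand(base: set, formula: str) -> set:
    return base | {formula}

def revise(base: set, formula: str) -> set:
    """K*p = (K - ~p) + p."""
    return expand(contract(base, negate(formula)), formula)

K = {"alibi_confirmed", "~at_scene"}
print(revise(K, "at_scene"))  # {'alibi_confirmed', 'at_scene'}
```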

Being so closely related, it is intuitive that there exist eight postulates for contraction too. One of them deserves to be mentioned since, probably, it has been the most debated issue in the belief change literature (Fermé 1998; Hansson 1999; Lindström and Rabinowicz 1991; Makinson 1997; Nayak 1994): the so-called postulate of recovery:

K ⊆ (K − p) + p

Behind these problematic issues, the main problem is the fact that the eight postulates for revision do not univocally define revision. The open question is: which theories, out of the infinitely many new ones satisfying the eight postulates, are we to choose for our present purposes? Levi's identity leaves the problem open, since there are different ways to perform contraction. The key ideas here should be those of minimality and similarity between epistemic states. Unfortunately, as the editors of Williams and Rott (2001) pointed out: "formalising this intuition has proved illusive and a definition of minimal change, or indeed similarity, has not been developed."


Computer scientists have recently pointed out the similarities between belief revision and database merging: "Fusion and revision are closely related because revision can be viewed as a fusion process where the input information has priority over a priori beliefs, while fusion is basically considered as a symmetric operation. Both fusion and revision involve inconsistency handling" (Benferhat et al. 2001).

An important follow-up of this line of research has been the sharp distinction made between "revision" and "updating." If the new information reports some modification in the current state of a dynamic world, then the consequent change in the representation of the world is called "updating." If the new information reports new evidence regarding a static world whose representation was approximate, incomplete, or erroneous, then the corresponding change is called "revision." With revision, the items of information which gradually arrive all refer to the same situation, which is fixed in time: such is the case of a criminal event whose narrative circumstances have to be reconstructed by an investigative team or by factfinders (a judge or a jury). In contrast, with updating, the items of information which gradually arrive refer to situations which keep changing dynamically: the use of such items of information is to make the current representation correspond as closely as possible to the current state of the situation represented.

Updating applies, for example, to a flow of information on a serial killer still on the loose. For example, Cabras (1996) considers the impact of the construal of a criminal case in the Italian mass media, on the investigation itself, and on the "giudici popolari" to whom the case could have gone (this is the "domesticated" version of a jury at criminal trials at a "Corte d'Assise" in Italy, where trained judges are in control anyway). The case she selected is that of the so-called "Monster of Foligno," a serial killer who used to leave messages between crimes. A man eventually implicated himself by claiming that he was the killer. He was released in due course, and later on the real culprit was found. We did not try our formalism on this case, yet arguably investigations on serial killers are a good example of updating instead of revision, vis-à-vis recoverability in the sense explained below.

As Katsuno and Mendelzon (1991) pointed out, AGM3 and AGM4, in defining expansion as a particular case of revision, do not apply to updating. Information coming from the real world normally implies both kinds of cognitive operations. In the remainder of this paper, we focus on the case in which incoming information brings some refinement or correction to a previously incomplete and/or erroneous description of a static situation, and we need to perform pure belief revision.

Even if the AGM approach cannot be set aside when speaking of belief revision, much effort in bridging the gap between theory and practice came from a parallel conception of belief revision, which originated (almost at the same time) in research into the so-called Truth Maintenance Systems (Doyle 1979). See, for instance, the modeling in Martins and Shapiro (1988). De Kleer's Assumption-Based Truth Maintenance System (ATMS) paradigm (De Kleer 1986) overcame some limitations of Doyle's TMS, and was immediately regarded as a powerful reasoning tool for revising beliefs (and for performing the dual cognitive operation of diagnosis). Crucial to the ATMS architecture is the notion of assumption, which designates a decision to believe something without any commitment as to what is assumed. An assumed datum is a problem-solver output that has been assumed to hold after an assumption was utilized to derive it. Assumptions are connected to assumed data via justifications, and form the foundation to which every datum's support can ultimately be traced back. The same assumption may justify multiple data, and one datum may be justified by multiple assumptions.
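The following is a minimal data-structure sketch of these ATMS notions (assumption, assumed datum, justification); the class, the example data, and the context test are our own illustrative assumptions, not De Kleer's implementation.

```python
# An assumed datum records the justifications (sets of assumptions) from
# which it was derived, so its support can be traced back to assumptions.

from dataclasses import dataclass, field
from typing import FrozenSet, List

@dataclass
class Datum:
    name: str
    # Each justification is one set of assumptions sufficient to derive
    # the datum; a datum may have several independent justifications.
    justifications: List[FrozenSet[str]] = field(default_factory=list)

    def holds_in(self, context: FrozenSet[str]) -> bool:
        """The datum holds in a context (a set of believed assumptions)
        that includes all the assumptions of some justification."""
        return any(j <= context for j in self.justifications)

# The same assumption (A2) justifies two data; one datum is justified in
# two independent ways.
armed = Datum("suspect_was_armed", [frozenset({"A1", "A2"}), frozenset({"A3"})])
fled = Datum("suspect_fled", [frozenset({"A2"})])

context = frozenset({"A2", "A3"})
print(armed.holds_in(context), fled.holds_in(context))  # True True
```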

In our view, to be fruitfully applied in modeling the cognitive state of an inquirer or a juror receiving information from many sources about the same static situation, a BR framework should possess some special requisites. These are:

- The ability to reject an item of incoming information.
A belief revision system for a multi-source environment should drop the rationality principle of "priority to the incoming information," since there is no direct correlation between the chronology of the informative acts and the credibility of their contents; it seems more reasonable to treat all the available pieces of information as if they had been collected at the same time.²

- The ability to recover previously discarded beliefs.
Cognitive agents should be able to recover previously discarded pieces of knowledge after new evidence redeems them (see Figure 4). The point is that this should be done not only when the new information directly "supports" a previously rejected belief, but also when the incoming information indirectly supports it, by disclaiming those beliefs whose contradicting it had caused its ostracism. Elsewhere we called this rule the "Principle of Recoverability": any previously held piece of knowledge should belong to the current knowledge space if consistent with it (Dragoni et al. 1995; Dragoni 1997). But in this paper, we rename it the "Principle of Persistence." The rationale for this principle is that if someone gave us a piece of information (sometime in the past) and currently there is no reason to reject it, then we should accept it. Of course, this principle does not hold for updating, where the represented world does change. Here is an example of information which updates a situation. If I see an object in position B, I can no longer believe it is in position A. (The miracle of ubiquity is considered a miracle for the very reason that it does not belong in our commonsense experience of the world.) If somebody tells me that the object is not in B, this does not amount to having me believe that the object is now back in A or never moved from A, as it may be that the object moved from B to C. In general, if observation b has determined the removal of information a, this does not imply that some further notification of change, c, which provokes the removal of observation b, should necessarily restore observation a.

FIGURE 4. If q "rejects" p and subsequently ¬q "rejects" q, then p should be restored, even if it is not the case that ¬q implies p.

- The ability to combine contradictory and concomitant evidence.
The notion of fusion should blend with that of revision (Dragoni and Giorgini 1997). Every item of incoming information changes the cognitive state. Rejecting the incoming information does not mean leaving beliefs unchanged since, in general, incoming information alters the distribution of the credibility weights. Surely the latest incoming item decreases the credibility of the beliefs with which it came into contradiction, even in the case that it has itself been rejected. The same is true when receiving a piece of information of which we were already aware; it is not the case that nothing happened (as AGM4 states), since we are now, in general, more sure about that belief. Furthermore, if it is true that incoming information affects the old, it is likewise true that the old affects the incoming. In fact, an autonomous agent (where "autonomous" means that its cognitive state is not determined by other agents) judges the credibility of new information on the basis of its previous cognitive state. In conclusion, "revising beliefs" should simply mean "dealing with a new, broader set of pieces of information."

- The ability to deal with pairs <source, information>.
The way the credibility ordering is generated and revised must reflect the fact that beliefs come from different sources of information, since the reliability and the number of independent informants³ affect the credibility of the information, and vice versa (Dragoni 1992). A juror cannot disregard where his beliefs come from, because the same information, if coming from different sources, may deserve different weights in terms of credibility, or even different interpretations of what it means.⁴

- The ability to maintain and compare multiple candidate cognitive states.
This ability is the part of human intelligence that does not limit its action to comparing single pieces of information, but goes on trying to reconstruct alternative cognitive scenarios as far as possible.


- Sensitivity to the syntax.
Although the AGM approach axiomatizes belief revision at the semantic level, we recognize that syntax plays an important role in everyday life. The way we syntactically pack (and unpack) pieces of information reflects the way we organize thinking and judge credibility, importance, relevance, and even truthfulness. A testimony of the form p ∧ … ∧ t ∧ ¬p from a defendant A in a trial has the same semantic truth value as the testimony q ∧ ¬q from a defendant B, but normally B will be condemned while A could be absolved, by having her testimony regarded as "partially true," whereas B's testimony will be regarded as "totally contradictory." Yet this is unwarranted: e.g., local inconsistencies should not necessarily be fatal to the credibility of a witness statement. (The semiotician of law Bernard Jackson showed⁵ that the pragmatics of delivery in court is paramount, and not mere legal narrative semantics.) A set of sentences seems not to be cognitively equivalent to their logical conjunction, and we could change a cognitive state by simply clustering the same beliefs in a different way. (A toy illustration follows this list.)
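The following sketch, under our own hypothetical encoding, makes the syntax-sensitivity point concrete: revising the belief base {p, q} by ¬q sacrifices only q, whereas revising the single packed conjunction {p ∧ q} by ¬q ditches the whole testimony, although the two bases are logically equivalent.

```python
# Revision here keeps the new formula plus a maximal subset of the old
# base consistent with it (brute force over truth assignments).

from itertools import combinations, product

ATOMS = ("p", "q")

def consistent(formulas):
    """True if some truth assignment satisfies every formula."""
    return any(all(f(dict(zip(ATOMS, bits))) for f in formulas)
               for bits in product([True, False], repeat=len(ATOMS)))

def revise(base, new):
    """Return the names of a maximal subset of `base` consistent with `new`."""
    for size in range(len(base), -1, -1):
        for names in combinations(base, size):
            if consistent([base[n] for n in names] + [new]):
                return set(names)
    return set()

not_q = lambda m: not m["q"]
split = {"p": lambda m: m["p"], "q": lambda m: m["q"]}
packed = {"p_and_q": lambda m: m["p"] and m["q"]}

print(revise(split, not_q))   # {'p'}: only q is sacrificed
print(revise(packed, not_q))  # set(): the whole conjunction is lost
```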

SOLUTION PROPOSED

From this discussion it should be evident that we cannot rely on the AGM framework to model the belief revision process of a juror or an investigator. We rely instead on the abstract ATMS conception. An implemented and tested computational architecture that does so is shown in Figure 5.

Let us now zoom in on the initial part of the flowchart. For the purpose of exemplifying the flow of information for a given set of beliefs and a given item of new information, refer to the detail of the architecture as shown in Figure 6.

The overall schema of the multi-agent belief revision system we propose (see Figure 5) incorporates the basic ideas of:

- An Assumption-based Truth Maintenance System (ATMS), to keep different scenarios.
- Bayesian probability, to recalculate the a posteriori reliability of the sources of information.
- The Dempster-Shafer Theory of Evidence, to calculate the credibility of the various pieces of information.

In Figure 6, on the left, one can see an incoming item of information, b (whose source, U, is identified), in addition to the set of beliefs already found in the knowledge base, namely, items of information a and v, which both come from source W, and moreover an item of information that is a rule ("If a, then not b"), which comes from source T. The latter could, for example, be an expert witness, or else a fictitious character such as common sense. In the parlance of Anglo-American legal evidence theory, common sense is called "background generalizations," "common-sense generalizations," or "general experience" (see Twining 1999).

FIGURE 5. Our way to belief revision.


Once past the knowledge base in the flowchart of Figure 6, in order to revise the set of beliefs with the new information b coming from source U, two steps are undertaken. Refer to Figure 7.

The ATMS-like mechanism is triggered; it executes steps S1 and S2. These are dual operations, respectively, as follows:

- S1: Find all minimally inconsistent subsets (NOGOODs).
- S2: Find all maximally consistent subsets (GOODs).

In the notation of set theory, the Venn diagram on the right side of Figure 7 is intended to capture the following concept. Three GOODs have been generated: the one labeled 1 includes a, b, and v; the one labeled 2 includes b, v, and the rule "If a, then not b"; whereas yet another GOOD, labeled 3, includes a, v, and the same rule "If a, then not b."
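A brute-force sketch of steps S1 and S2 on this running example follows; the propositional encoding of a, v, b and of source T's rule is our own assumption for the purpose of illustration.

```python
# Enumerate the minimally inconsistent subsets (NOGOODs) and the
# maximally consistent subsets (GOODs) of the knowledge base plus the
# incoming item b. Formulas are encoded as functions of a truth
# assignment over the atoms.

from itertools import combinations, product

ATOMS = ("a", "b", "v")
FORMULAS = {
    "a": lambda m: m["a"],
    "v": lambda m: m["v"],
    "b": lambda m: m["b"],
    "rule": lambda m: (not m["a"]) or (not m["b"]),  # "if a, then not b"
}

def consistent(names):
    """True if some truth assignment satisfies every named formula."""
    return any(all(FORMULAS[n](dict(zip(ATOMS, bits))) for n in names)
               for bits in product([True, False], repeat=len(ATOMS)))

subsets = [set(c) for r in range(1, len(FORMULAS) + 1)
           for c in combinations(FORMULAS, r)]
nogoods = [s for s in subsets if not consistent(s)
           and all(consistent(s - {x}) for x in s)]
goods = [s for s in subsets if consistent(s)
         and all(not consistent(s | {x}) for x in FORMULAS if x not in s)]

print("NOGOODs:", nogoods)  # [{'a', 'b', 'rule'}] (set order may vary)
print("GOODs:", goods)      # the three candidates of Figure 7
```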

Each one of these three GOODs is a candidate for being the preferred new cognitive state (rather than the only new cognitive state). The decision as to which cognitive state to select is taken based on Dempster-Shafer (see Figure 9). Refer to Figure 8.

FIGURE 6. The first step: The arrival of a new information item.

FIGURE 7. The second step: The generation of all the maximally consistent subsets of KB (i.e., the knowledge base) plus the incoming information.


Dempster-Shafer is resorted to in order to select the new preferred cognitive state; this consists in assigning degrees of credibility to the three competing GOODs. Dempster-Shafer takes as input values of a priori source reliability (we will come back to this: the fact that this degree is set a priori is possibly a limitation) and translates them into a ranking, in terms of credibility, of the items of information given by those sources. Yet, Dempster-Shafer could instead directly weigh the three GOODs, whereas (as said) we make it weigh the formulae instead. This choice stems from Dragoni's feeling that the behavior of Dempster-Shafer is unsatisfactory when evaluating a GOOD in its entirety. (In fact, as the GOOD is a formula, Dempster-Shafer could conceivably assign a weight to it directly.)

FIGURE 8. The complete process of belief revision.

FIGURE 9. The role of the Dempster-Shafer Theory of Evidence.

Next, from the ranking of credibility on the individual formulae, we can obtain (by means of algorithms not discussed here) a ranking of preferences on the GOODs themselves. In the example, the highest ranking is for the GOOD with a, b, and v (thus provisionally discarding the contribution of source T, which here was said to be "common sense"). Nevertheless, our system generates a different output. The output actually generated by the system obtains downstream of a recalculation of source reliability, achieved by trivially applying Bayes' theorem. In our example, it can be seen that it is source T ("common sense") which is most penalized by the contradiction that occurred. Thus, in output B′, the rule which precludes b was replaced with b itself. Note that the selection of B′ is merely a suggestion: the user of the system could make different choices, by suitably activating search functions, or by modifying the reliability values of the sources. When the next item of information arrives, everything will be triggered anew from the start, but with a new knowledge base, which will be the old knowledge base revised with that item. It is important to realize that the new knowledge base is not to be confused with B′. Therefore, any information provisionally discarded is recoverable later on; and if it is indeed recovered, that will be owing to the maximal consistency of the GOODs.
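As a hedged sketch of that recalculation step (the likelihoods below are hypothetical stand-ins, not values prescribed by the architecture), one may treat "source T is reliable" as a hypothesis and the observed contradiction as evidence for Bayes' theorem:

```python
# Posterior probability that a source is reliable, given that one of its
# items ended up inside a NOGOOD (i.e., a contradiction occurred).

def recalibrate(prior_reliable,
                p_contradiction_if_reliable=0.1,
                p_contradiction_if_unreliable=0.6):
    num = p_contradiction_if_reliable * prior_reliable
    den = num + p_contradiction_if_unreliable * (1.0 - prior_reliable)
    return num / den

print(recalibrate(0.7))  # 0.28: the contradiction penalizes the source
```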

The Dempster-Shafer Theory of Evidence is a simple and intuitive way to transfer the sources' reliability to the credibility of the information, and to combine the evidence of multiple sources.
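As a minimal sketch of this transfer (the frame, the reliability figures, and the mapping of each source to a simple support function are our illustrative assumptions), each source's reliability r becomes mass r on the reported proposition and 1 − r on ignorance, and Dempster's rule combines the two resulting mass functions:

```python
# Dempster's rule of combination over a frame of discernment; focal
# elements are frozensets, masses are floats summing to 1 per source.

def combine(m1, m2):
    joint, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                joint[inter] = joint.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # mass on the empty set is renormalized away
    return {s: v / (1.0 - conflict) for s, v in joint.items()}

THETA = frozenset({"b", "not_b"})  # the frame: b versus not-b
source_U = {frozenset({"b"}): 0.8, THETA: 0.2}      # U reports b, reliability 0.8
source_T = {frozenset({"not_b"}): 0.6, THETA: 0.4}  # T's rule precludes b, reliability 0.6

print(combine(source_U, source_T))
# b gets ~0.615, not_b ~0.231, and ~0.154 stays on ignorance
```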

Notwithstanding these advantages, there are shortcomings, including the requirement that the degrees of reliability of the sources be established a priori, as well as computational complexity, and also disadvantages stemming from epistemological considerations from legal theory. At any rate, the adoption of Dempster-Shafer in the present framework is a choice that could perhaps be called into question. A refinement is called for because, as it stands, the system requires (as said) associating an a priori degree of reliability with the sources; moreover, applying Dempster-Shafer other than in approximated form is computationally very complex.

CONCLUDING REMARKS

This paper introduced what is, relatively speaking, a fairly powerful meter-based formalism for capturing the process of juror decision making. It compares favorably with other approaches described in the literature of jury research, and it is especially in this regard (namely, in salvaging from the tradition of the meter-based models an ongoing contribution to the field) that the present work is intended. This does not amount, however, to saying that the present approach is without problems. (Much less that it is more than a building block: the ambition of greater completeness would call for an inference engine operating on a narrative representation.)

A few evaluative remarks follow, first about consistency. Of course, we want to enforce or restore consistency: judiciary acts cannot stem from an inconsistent set of hypotheses. Yet we want to avoid unduly dismissing any possibility altogether. Therefore we contrast all such GOODs as obtain from the (globally inconsistent) set of information items provided by the various sources involved. Sometimes the same source may be found in contradiction, or may provide inconsistent information (self-inconsistency). In 1981, Marvin Minsky stated: "I do not believe that consistency is necessary or even desirable in developing an intelligent system." "What is important is how one handles paradoxes or conflicts." Enforcing consistency produces limitations: "I doubt the feasibility of representing ordinary knowledge in the form of many small independently true propositions (context-free truths)." In our own approach, we have a single, global, never-forgetting, inconsistent knowledge background, upon which many specific, competitive, ever-changing, consistent cognitive contexts act.


Epistemologist Laurence BonJour (1998, originally 1985), while introducing the first of his five conditions for coherence,⁶ namely "A system of beliefs is coherent only if it is logically consistent" (p. 217), remarked in note 7: "It may be questioned whether it is not an oversimplification to make logical consistency in this way an absolutely necessary condition for coherence. In particular, some proponents of relevance logics may want to argue that in some cases a system of beliefs which was sufficiently rich and complex but which contained some trivial inconsistency might be preferable to a much less rich system which was totally consistent..." (BonJour, ibid., p. 230). Relevance logics in this context are not necessarily very relevant to the concept of "relevance" in the parlance of legal evidence scholarship (in which sense the term has to do with the admissibility of evidence as a matter of policy).

Among the requirements or desiderata for a distributed belief revision framework, we listed the desirability of the mechanism also displaying sensitivity to the syntax. Consider Figure 10: Is it really necessary to consider the set of propositions in the circle on the left side equivalent to what is found in the circle on the right? Are redundancies of no value at all? Moreover, is even a local, peripheral inconsistency enough to invariably ditch a witness statement?

The discovery of a pair of items of information in contradiction inside a rich-textured, articulate witness statement should not necessarily invalidate the informational content of the entire deposition. That is to say: a set of propositions is not equivalent to their logical conjunction. This is a critique of the so-called "Dalal's Principle" (satisfied by AGM, as per Williams), by which two logically equivalent items of information should produce exactly the same revision. Arguably, Dalal's Principle is unworkable in practice, because delimiting the set involves cognitive arbitrariness. How fine-grained are the details to be? What about cross-examination tactics?

Desiderata include criteria for deciding about the containedness of inconsistency. How local is it to be? When can we just cut off the NOGOODs and retain the rest? Within the architecture described earlier in this paper, this would belong in the phases of recalculation of source reliability and information credibility. One more problem is confabulation in depositions. Whereas our present framework is too abstract to take narrative aspects into account, arguably our system could be a building block in an architecture with a complementary component to deal with narratives. In particular, a witness who reconstructs by inference, instead of just describing what was witnessed, is confabulating; this is precisely what traditional AI systems from the 1980s, whose function was to answer questions about an input narrative text, do when information does not explicitly appear in the text they analyze. Within the compass of this paper, we cannot address these issues. Nevertheless, we claim that the formal framework described is as good as other meter-based formal approaches to modeling juror decision making, to the extent that such models do not explicitly handle narrative structure.

FIGURE 10.

Yet a major problem stemming from the adoption of Dempster-Shafer is that it is apparently tilted towards verificationism instead of falsificationism. Take the case of a terrorist or organized-crime "supergrass" informing on accomplices and testifying in court. In Italy, such "pentiti" or "superpentiti" are not considered reliable until further proof is obtained; the supergrass's reliability is taken to be greater, the greater the extent to which the deposition matches further evidence. A shortcoming of this is that part of the deposition may be false, yet unaffected by such further proof. Dempster-Shafer, as described in the framework of the architecture introduced in this paper, does not avoid being tricked into unduly increasing the reliability of such an untruthful witness. Dempster-Shafer also tends to believe information from a source until contrary evidence is obtained. Such epistemological considerations affect not only formal representations; they also affect the way, for example, the mass media may convey a criminal case or the proceedings in court. They may also affect what justice itself makes of witness statements made by children (i.e., child testimony). None of these issues is addressed in the formalism presented.

The multi-agent approach described is appropriate when a flow of new items of information arrives from several sources, and each information/source pair has an unknown degree of credibility. This befits the gradual delivery of the evidence in court, as a juror's opinion (or the opinion of the judges in a bench trial) about evidentiary strength is shaped. A formalism to deal with evidentiary strength has been presented in Shimony and Nissan (2001).

The results of a previous stage in this research were presented by us in Dragoni et al. (2001). The general approach, not specifically concerned with legal matters, was developed as a formalism for belief revision. Previous stages are represented by Dragoni (1992; 1997) and Dragoni and Giorgini (1997a; 1997b).


NOTES

1. There is a more general consideration to be made about attitudes toward Bayesianism. In the literature of epistemology, objections and counter-objections have been expressed concerning the adequacy of Bayesianism. One well-known critic is Alvin Plantinga (1993a, Chap. 7; 1993b, Chap. 8). In a textbook, philosopher Adam Morton (2003, Chap. 10) gave these headings to the main objections generally made by some epistemologists: "Beliefs cannot be measured in numbers," "Conditionalization gives the wrong answers," "Bayesianism does not define the strength of evidence," and, most seriously, "Bayesianism needs a fixed body of propositions" (ibid., pp. 158–159). One of the Bayesian responses to the latter objection, about "the difficulty of knowing what probabilities to give novel propositions" (ibid., p. 160), "is to argue that we can rationally give a completely novel proposition any probability we like. Some probabilities may be more convenient or more normal, but if the proposition is really novel, then no probability is forbidden. Then we can consider evidence and use it, via Bayes' theorem, to change these probabilities. Given enough evidence, many differences in the probabilities that are first assigned will disappear, as the evidence forces them to a common value" (ibid.). For specific objections to Bayesian models of judicial decision making, the reader is urged to see the ones made in Ron Allen's lead article in Allen and Redmayne (1997).

2. Consider parties A and B at a bench trial. A's witnesses have finished giving evidence, and now a witness for B is being examined or cross-examined, and a witness for A (e.g., the plaintiff himself) clings to the hope that the judge will remember to refer to the notes he had been scribbling when A was giving evidence, as an item said by A contradicts what B is saying now. That there is such a contradiction will affect credibility: what A said before will be less likely to be accepted than if there had been no adverse evidence from B. Yet, our dropping the principle of "priority to the incoming information" in a multi-source environment should correspond to the practical rule that, just because B is giving evidence after A, this by itself is not supposed to make B more credible than A.

3. The Anglo-American adversarial judicial system, in which the two parties in a trial are rather symmetrical, provides a convenient example for our purposes, as the witnesses for the two parties constitute two sets of "sources" from which items of information flow. Yet, sources in a trial need not be so "similar." Consider ancient canon law. Rumors (a source considered public) may have tainted a cleric, who wished to clear himself. He then brought witnesses in his defense. Each such witness was called a compurgator. These compurgatores and the rumors are sources of a different nature. The rumors themselves are items of information, and their source was not embodied in an individual (or in, say, a newspaper), but generically in a subset of the public.

4. Here is a commonsensical example of the interplay of source and context in making sense (in good faith or as a posture) of an item of information. Bona fide misinterpretation in context is exemplified by a situation in which I (Nissan), from a bus, was watching a poster behind a glass window; it showed the photograph of the head of a man, his face very contracted, and his mouth as wide open as he apparently could make it. Something round and white, which I eventually realized was the head of a pin keeping the poster in place, happened to be positioned on the man's tongue. I may be forgiven for thinking, on the spur of the moment, that some pill was being advertised, which was not the case. Besides, interpretation activity is not always done in a truth-seeking mood. On the flank of the bus by which I was commuting this morning, there was the following ad. (Because of pragmatic knowledge about posters on the flank of a bus being ads, it must have been an ad.) The poster showed the face of a celebrity and read: "Jennifer Lopez is in..." It included no other inscription, apparently to build up expectation for a sequel poster. Moreover, the palms of the hands of the woman in the picture were raised in the forefront, for example as though she was leaning on glass (which was not the case). I only noticed that poster once the bus had stopped in front of a funeral parlor, whose display window was exhibiting an array of tombstones and similar niceties for people to order. The image (and inscription) from the ad on the bus flank was mirrored in the display window. My expectations, as a pragmatically competent viewer, of what the advertisers would or wouldn't expect, prevent good faith (and afford this rather being a posture) if I were to interpret the juxtaposed visual information (the facial image of the celebrity along with the textual assertion from the poster, and the sample tombstones) as though it meant: "Celebrity So-and-So is inside the shop," or even "inside one of these lovely displays." Surely the marketing people didn't foresee that this would be the situation of viewing the poster, and I am supposed to know this was unexpected. If I am to adopt a hermeneutic option that (mis)construes the advertiser's given document (the exemplar of the ad), as in the given, singular, quite unflattering circumstances, in such a way that the advertisement's message backfires (i.e., grossly thwarts the intended value of the image), this is an appropriation of the "text" (broadly meant) for communicative purposes that, according to a situational context, may or may not be legitimate. Appropriation-cum-"bending," however, is common in human communication. Some textual genres practice appropriation quite overtly, prominently, and organically; see, for example, Nissan et al. (1997). More generally, on quotations (ironic and otherwise), see Kotthoff (1998). On intertextuality, see Genette (1979). Nissan (2002b) applies it in the COLUMBUS model.


5. Bernard Jackson (1988a, p. 88) distinguishes the "story of the trial" as against the "story in the trial." As he worded it in Jackson (1998, p. 263), he argues "that the 'story in the trial' (e.g., the murder of which the defendant is accused) is mediated through the 'story of the trial' (that collection of narrative encounters manifest in the courtroom process itself)." See on this Jackson (1988a, pp. 8ff.; 1995, p. 160 and Chaps. 10–12 passim). Jackson refers to the narrative about which the court is called to decide by the term semantics of the legal narrative. By pragmatics he refers to the delivery about it in court. Moreover, the semantics and the pragmatics must be distinguished as well, e.g., in the process of collective deliberation among jurors. "Each juror must make sense not only of what has been perceived in court, but also what is perceived in the behaviour of fellow jurors: not only what they say (the story in jury speech: the semantic level) but also how they behave in saying it (the story of jury speech: the pragmatic level)" (Jackson 1996, p. 43). By that same author, also see Jackson (1985, 1988b, 1990, 1994).

6. Distinguish between factual truth and legal truth, the latter being the one relevant in judicial contexts; this is because of the conventions of how proof is constructed and which kinds of evidence are admissible or inadmissible. As to philosophy, of course coherence theory is just one out of various views of truth current in epistemology (for all of Alcoff's remark [1998, p. 309]: "It may seem odd, but for most epistemologists today truth is a non-issue"). Paul Horwich, who advocates a deflationary "minimal theory" of truth (itself evolved from the correspondence theory of truth), writes, having briefly introduced coherence theory: "What has seemed wrong with this point of view is its refusal to endorse an apparently central feature of our conception of truth, namely the possibility of there being some discrepancy between what really is true and what we will (or should, given all possible evidence) believe to be true" (Horwich 1998, pp. 316–317). Moreover, as pointed out above, factual truth and believable truth are themselves distinct from the legal truth, which is what the court will find once the evidence is presented; the evidence is itself subjected to the rules of evidence, as well as to the contingencies of how (if it is a criminal case) the police inquiry was carried out and what it found.

Or, if it is, e.g., a case brought before an employment tribunal: to the contingencies of how the solicitors for the two parties selected the documentary evidence, how the barristers directly examine and cross-examine, and how the witnesses cope with this, and of whether the applicant (i.e., the plaintiff) and his witnesses have said enough to counteract beforehand whatever the witnesses for the respondent (the employer) may say orally at a time when the applicant can no longer speak, so that his barrister (possibly unaware of some facts, but made aware of what the appropriate response would have been, after the given witness for the respondent has already finished and can no longer be engaged in questions) may hope that the information new to him may be useful while cross-examining the next witness(es) of the employer (if any are left), in order to induce contradictions in the defense.

Legal debate can cope with different philosophical approaches to knowledge. William Twining, the London-based legal theorist, in a paper (Twining 1999) which explores issues in Anglo-American evidence theory, has remarked: "The tradition is both tough and flexible in that it has accommodated a variety of perspectives and values and has usually avoided making extravagant claims: in the legal context one is concerned with probabilities not certainties; with 'soft' rationality or informal logic rather than closed system logic; with rational support rather than demonstration; and with reasonably warranted judgments rather than perfect knowledge. It is generally recognized that the pursuit of truth in adjudication is an important, but not an absolute social value, which may be overridden by competing values such as 'preponderant vexation, expense or delay'.... Some premises of the Rationalist Tradition have been subject to sceptical attack from the outside. But it has been a sufficiently broad church to assimilate or co-opt most apparent external sceptics. Similarly, while most Anglo-American evidence scholars have espoused or assumed what looks like a correspondence theory of truth, there is no reason why a coherence theory of truth cannot be accommodated, if indeed there is any distinction of substance [p. 71] between the theories" (ibid., pp. 70–71). An example of a coherentist among legal evidence theorists is Bernard Jackson, also in England (Twining actually points out that much, ibid., p. 71, note 9). See Jackson (1988a).

REFERENCES

Alchourrón, C. E., P. Gärdenfors, and D. Makinson. 1985. On the logic of theory change: Partial meet contraction and revision functions. The Journal of Symbolic Logic 50:510–530.
Alcoff, L. M. 1998. Introduction to part five: What is truth? In Epistemology: The Big Questions, ed. L. M. Alcoff, 309–310. Oxford: Blackwell.
Benferhat, S., D. Dubois, and H. Prade. 2001. A computational model for belief change. In Frontiers in Belief Revision (Applied Logic Series, 22), eds. M. A. Williams and H. Rott, 109–134. Dordrecht: Kluwer.
BonJour, L. 1998. The elements of coherentism. In Epistemology: The Big Questions, ed. L. M. Alcoff, 210–231. Oxford: Blackwell. (Page numbers are referred to as in Alcoff.) Originally in The Structure of Empirical Knowledge, 87–110. Cambridge, MA: Harvard University Press.
Cabras, C. 1996. Un mostro di carta. In Psicologia della prova, ed. C. Cabras, 233–258. Milano: Giuffrè.
De Kleer, J. 1986. An assumption-based truth maintenance system. Artificial Intelligence 28:127–162.
Doyle, J. 1979. A truth maintenance system. Artificial Intelligence 12(3):231–272.
Dragoni, A. F. 1992. A model for belief revision in a multi-agent environment. In Decentralized A.I. 3, eds. E. Werner and Y. Demazeau, 103–112. Amsterdam: North-Holland Elsevier Science.
Dragoni, A. F. 1997. Belief revision: From theory to practice. The Knowledge Engineering Review 12(2):147–179.
Dragoni, A. F., and P. Giorgini. 1997a. Distributed knowledge revision-integration. In Proceedings of the Sixth ACM International Conference on Information Technology and Management, 121–127. New York: ACM Press.
Dragoni, A. F., and P. Giorgini. 1997b. Belief revision through the belief function formalism in a multi-agent environment. In Intelligent Agents III (Lecture Notes in Computer Science, 1193), eds. M. Wooldridge, N. R. Jennings, and J. Müller. Heidelberg: Springer-Verlag.
Dragoni, A. F., P. Giorgini, and E. Nissan. 2000. Distributed belief revision as applied within a descriptive model of jury deliberations. In Preproceedings of the AISB 2000 Symposium on AI and Legal Reasoning, Birmingham, April 17, 2000, 55–63. Reprinted in Information and Communications Technology Law 10(1):53–65, 2001.
Dragoni, A. F., P. Mascaretti, and P. Puliti. 1995. A generalized approach to consistency-based belief revision. In Topics in Artificial Intelligence: Proceedings of the 4th Conference of the Italian Association for Artificial Intelligence (LNAI 992), eds. M. Gori and G. Soda, 231–236. Berlin: Springer-Verlag.
Fakher-Eldeen, F., T. Kuflik, E. Nissan, G. Puni, R. Salfati, Y. Shaul, and A. Spanioli. 1993. Interpretation of imputed behavior in ALIBI (1 to 3) and SKILL. Informatica e Diritto, 2nd series, 2(1/2):213–242.
Fermé, E. 1998. On the logic of theory change: Contraction without recovery. Journal of Logic, Language and Information 7:127–137.
Gaines, D. M. 1994. Juror Simulation. BSc Project Report CS-DCB-9320, Computer Science Dept., Worcester Polytechnic Institute.
Gaines, D. M., D. C. Brown, and J. K. Doyle. 1996. A computer simulation model of juror decision making. Expert Systems With Applications 11:13–28.
Geiger, A., E. Nissan, and A. Stollman. 2001. The Jama legal narrative. Part I: The JAMA model and narrative interpretation patterns. Information and Communications Technology Law 10(1):21–37.
Genette, G. 1979. Introduction à l'architexte. Paris: Seuil. Trans. J. E. Lewin as The Architext: An Introduction. Berkeley: University of California Press, 1992.
Hansson, S. O. 1999. Recovery and epistemic residue. Journal of Logic, Language and Information 8(4):421–428.
Harper, W. L. 1976. Ramsey test conditionals and iterated belief change. In Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Sciences, vol. 1, eds. W. L. Harper and C. A. Hooker, 117–135. Norwell, MA: D. Reidel.
Harper, W. L. 1977. Rational conceptual change. In PSA 1976, vol. 2. East Lansing, MI.
Hastie, R., ed. 1993. Inside the Juror: The Psychology of Juror Decision Making (Cambridge Series on Judgment and Decision Making). Cambridge, UK: Cambridge University Press.
Hastie, R. 1994. Introduction. In Inside the Juror: The Psychology of Juror Decision Making, ed. R. Hastie. Cambridge, UK: Cambridge University Press.
Hastie, R., S. D. Penrod, and N. Pennington. 1983. Inside the Jury. Cambridge, MA: Harvard University Press.
Horwich, P. 1998. The minimal theory. In Epistemology: The Big Questions, ed. L. M. Alcoff, 311–321. Oxford: Blackwell. (Page numbers are referred to as in Alcoff.) Originally in Truth, by P. Horwich, 1–14. Oxford: Blackwell, 1990.
Jackson, B. S. 1985. Semiotics and Legal Theory. London: Routledge & Kegan Paul.
Jackson, B. S. 1988a. Law, Fact and Narrative Coherence. Merseyside: Deborah Charles Publications.
Jackson, B. S. 1988b. Narrative models in legal proof. International Journal for the Semiotics of Law 1:225–246.
Jackson, B. S. 1990. Narrative theories and legal discourse. In Narrative in Culture: The Uses of Storytelling in the Sciences, Philosophy and Literature, ed. C. Nash, 23–50. London: Routledge.
Jackson, B. S. 1994. Towards a semiotic model of professional practice, with some narrative reflections on the criminal process. International Journal of the Legal Profession 1:55–79.
Jackson, B. S. 1995. Making Sense in Law. Liverpool: Deborah Charles Publications.
Jackson, B. S. 1996. "Anchored narratives" and the interface of law, psychology and semiotics. Legal and Criminological Psychology 1(1):17–45.
Jackson, B. S. 1998. Bentham, truth and the semiotics of law. In Legal Theory at the End of the Millennium (Current Legal Problems 1998, vol. 51), ed. M. D. A. Freeman, 493–531. Oxford: Oxford University Press.
Katsuno, H., and A. O. Mendelzon. 1991. On the difference between updating a knowledge base and revising it. In Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning, eds. J. Allen, R. Fikes, and E. Sandewall, 389–394. Morgan Kaufmann.
Kotthoff, H. 1998. Irony, quotation, and other forms of staged intertextuality: Double or contrastive perspectivation in conversation. In Perspectivity in Discourse, eds. C. F. Graumann and W. Kallmeyer. Amsterdam: Benjamins. Also: http://ling.uni-konstanz.de/pp/home/kotthoff/Seiten/ironyframe.html
Kuflik, T., E. Nissan, and G. Puni. 1991. Finding excuses with ALIBI: Alternative plans that are deontically more defensible. Computers and Artificial Intelligence 10(4):297–325.
Levi, I. 1977. Subjunctives, dispositions and chances. Synthese 34:423–455.
Levi, I. 1980. The Enterprise of Knowledge. Cambridge, MA: The MIT Press.
Levi, I. 1991. The Fixation of Belief and its Undoing. Cambridge, UK: Cambridge University Press.
Lindström, S., and W. Rabinowicz. 1991. Epistemic entrenchment with incomparabilities and relational belief revision. In The Logic of Theory Change, eds. A. Fuhrmann and M. Morreau, 93–126. Berlin: Springer-Verlag.
Makinson, D. 1997. On the force of some apparent counterexamples to recovery. In Normative Systems in Legal and Moral Theory: Festschrift for Carlos Alchourrón and Eugenio Bulygin, eds. E. Garzón Valdés et al., 475–481. Berlin: Duncker & Humblot.
Martins, J. P., and S. C. Shapiro. 1988. A model for belief revision. Artificial Intelligence 35:25–97.
Morton, A. 2003. A Guide through the Theory of Knowledge, 3rd ed. Oxford: Blackwell.
Nayak, A. 1994. Foundational belief change. Journal of Philosophical Logic 23:495–533.
Nissan, E. 2001a. Can you measure circumstantial evidence? The background of probative formalisms for law. Information and Communications Technology Law 10(2):231–245.
Nissan, E. 2001b. The Jama legal narrative. Part II: A foray into concepts of improbability. Information and Communications Technology Law 10(1):39–52.
Nissan, E. 2001c. An AI formalism for competing claims of identity: Capturing the "Smemorato di Collegno" amnesia case. Computing and Informatics 20(6):625–656.
Nissan, E. 2002a. A formalism for misapprehended identities: Taking a leaf out of Pirandello. In Proceedings of the Twentieth Twente Workshop on Language Technology (Trento, Italy, April 15–16, 2002), eds. O. Stock, C. Strapparava, and A. Nijholt, 113–123. Enschede, The Netherlands: University of Twente.
Nissan, E. 2002b. The COLUMBUS model (2 parts). International Journal of Computing Anticipatory Systems 12:105–120 and 121–136.
Nissan, E. 2003. Identification and doing without it, Parts I to IV. Cybernetics & Systems 34(4–5):317–380 and 34(6–7):467–530.
Nissan, E., and A. F. Dragoni. 2000. Exoneration, and reasoning about it: A quick overview of three perspectives. In Proceedings of the International ICSC Congress "Intelligent Systems Applications" (ISA'2000), Wollongong, Australia, December 2000, 94–100.
Nissan, E., and D. Rousseau. 1997. Towards AI formalisms for legal evidence. In Foundations of Intelligent Systems: Proceedings of the 10th International Symposium, ISMIS'97, eds. Z. W. Ras and A. Skowron, 328–337. Springer-Verlag.
Nissan, E., I. Rossler, and H. Weiss. 1997. Hermeneutics, accreting receptions, hypermedia. Journal of Educational Computing Research 17:297–318.
Pennington, N., and R. Hastie. 1983. Juror decision making models: The generalization gap. Psychological Bulletin 89:246–287.
Plantinga, A. 1993a. Warrant: The Current Debate. Oxford: Oxford University Press.
Plantinga, A. 1993b. Warrant and Proper Function. Oxford: Oxford University Press.
Shimony, S. E., and E. Nissan. 2001. Kappa calculus and evidentiary strength: A note on Åqvist's logical theory of legal evidence. Artificial Intelligence and Law 9(2–3):153–163.
Tillers, P., and E. Green, eds. 1998. Probability and Inference in the Law of Evidence: The Uses and Limits of Bayesianism. Boston and Dordrecht: Kluwer.
Twining, W. 1997. Freedom of proof and the reform of criminal evidence. Israel Law Review 31(1–3):439–463.
Twining, W. 1999. Necessary but dangerous? Generalizations and narrative in argumentation about "facts" in criminal process. In Complex Cases: Perspectives on the Netherlands Criminal Justice System, eds. M. Malsch and J. F. Nijboer, 69–98. Amsterdam: Thela Thesis.
Williams, M. A., and H. Rott, eds. 2001. Frontiers in Belief Revision (Applied Logic Series, 22). Dordrecht, The Netherlands: Kluwer Academic Publishers.