Probabilistic logic under coherence: complexity and algorithms



2nd International Symposium on Imprecise Probabilities and Their Applications, Ithaca, New York, 2001

Probabilistic Logic under Coherence: Complexity and Algorithms

Veronica Biazzo
Dipartimento di Matematica e Informatica,
Università degli Studi di Catania, Catania, Italy
[email protected]

Angelo Gilio
Dipartimento di Metodi e Modelli Matematici,
Università “La Sapienza”, Roma, Italy
[email protected]

Thomas Lukasiewicz
Institut und Ludwig Wittgenstein Labor für Informationssysteme,
Technische Universität Wien, Vienna, Austria
[email protected]

Giuseppe Sanfilippo
Dipartimento di Matematica e Informatica,
Università degli Studi di Catania, Catania, Italy
[email protected]

Abstract

We study probabilistic logic under the viewpoint of the coherence principle of de Finetti. In detail, we explore the relationship between coherence-based and classical model-theoretic probabilistic logic. Interestingly, we show that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from default reasoning. Using these results, we analyze the computational complexity of probabilistic reasoning under coherence. Moreover, we present new algorithms for deciding g-coherence and for computing tight g-coherent intervals, which reduce these tasks to standard reasoning tasks in model-theoretic probabilistic logic. Thus, efficient techniques for model-theoretic probabilistic reasoning can immediately be applied for probabilistic reasoning under coherence, for example, column generation techniques. We then describe two other interesting techniques for efficient model-theoretic probabilistic reasoning in the conjunctive case.

Keywords. Conditional probability assessments, probabilistic logic, g-coherence, g-coherent entailment, complexity and algorithms.

1 Introduction

The probabilistic treatment of uncertainty plays an important role in many applications of knowledge representation and reasoning. Often, we need to reason with uncertain information under partial knowledge, and then the use of precise probabilistic assessments seems unrealistic. Moreover, the family of uncertain quantities at hand often has no particular algebraic structure.

In such cases, a general approach is obtained by using (conditional and/or unconditional) probabilistic constraints, based on the coherence principle of de Finetti and suitable generalizations of it (Biazzo and Gilio [2], Coletti [10], Coletti and Scozzafava [11, 14, 13], Gilio [22, 23], Gilio and Scozzafava [24], and Scozzafava [34]), or on similar principles that have been adopted for lower and upper probabilities (Pelessoni and Vicig [33], Vicig [35], and Walley [36]). Two important aspects in dealing with uncertainty are: (i) checking the consistency of a probabilistic assessment; and (ii) the propagation of a given assessment to further uncertain quantities. The problem of reducing or eliminating the computational difficulties of (i) and (ii) has recently been investigated by Biazzo et al. [5, 6], Capotorti and Vantaggi [9], Capotorti et al. [8], and Coletti and Scozzafava [12].

Another approach to handling constraints for probabilities is model-theoretic probabilistic logic, whose roots go back to Boole’s book of 1854, “The Laws of Thought” [7]. There is a wide spectrum of formal languages that have been explored in probabilistic logic, which ranges from constraints for unconditional and conditional events (see especially the work by Nilsson [32], Dubois et al. [15], Amarger et al. [1], and Frisch and Haddawy [18]) to rich languages that specify linear inequalities over events (Fagin et al. [17]). The reasoning methods in probabilistic logic can be roughly divided into local approaches based on local inference rules and global ones using linear optimization techniques (see especially [31, 30] on the issue of local versus global approaches). As shown by Georgakopoulos et al. [21], deciding satisfiability and logical consequence in probabilistic logic is NP- and co-NP-complete, respectively, and thus intractable. Moreover, as recently shown in [28], deciding and computing tight logical consequences is complete for the classes co-NP and FP^NP, respectively. Substantial research efforts were directed towards efficient techniques for reasoning in probabilistic logic. In particular, column generation techniques from the area of linear optimization have been successfully used to solve large problem instances (see the work by Jaumard et al. [27] and Hansen et al. [26]). Other techniques, which may be described as problem transformations on the language level, have been successfully applied in probabilistic logic programming [28]. Moreover, a global approach for the conjunctive case, which characterizes a reduced set of variables, has been presented in [29].

We point out that in model-theoretic probabilistic logic, for every conditional constraint (ψ|φ)[l, u] in the given probabilistic knowledge base, the conditional probability Pr(ψ|φ) is looked at as the ratio of Pr(ψ ∧ φ) and Pr(φ), so that it is defined only if Pr(φ) > 0. On the contrary, as is well known, within the approach of de Finetti: (i) one can directly assess conditional probabilities, with no need of defining them as ratios; (ii) no theoretical problem arises when the probabilities of some (or possibly all) conditioning events are (judged equal to) zero; and (iii) by exploiting the zero probabilities, the computational difficulties may be reduced (or even eliminated).

Coherence-based and model-theoretic probabilistic reasoning have been explored quite independently from each other by two different research communities. For this reason, the relationship between the two areas has not been studied in depth so far. Our work in [3] and the current paper aim at filling this gap. More precisely, our research is guided by the following two questions:

• What is the semantic relationship between coherence-based and model-theoretic probabilistic reasoning?

• Can algorithms that have been developed for efficient reasoning in one area also be used in the other area?

Interestingly, it turns out that the answers to these two questions are closely related to the area of default reasoning from conditional knowledge bases [19]. Roughly speaking, coherence-based probabilistic reasoning can be understood as a combination of model-theoretic probabilistic logic with concepts from default reasoning.

That is, deciding coherence and computing tight intervals under coherence can be reduced to standard reasoning tasks in model-theoretic probabilistic logic. Thus, efficient techniques for model-theoretic probabilistic reasoning can be applied for probabilistic reasoning under coherence.

It is important to point out that the existence of such reductions to model-theoretic probabilistic reasoning does not imply that we actually do not need probabilistic reasoning under coherence. Model-theoretic probabilistic reasoning and probabilistic reasoning under coherence are two distinct concepts, and they both clearly have their own areas of application. Our reductions now reveal the semantic relationships between the two formalisms, and thus help to delineate these areas of application.

Our work in [3] explores the semantic aspects of this relationship. In this paper, we focus on its computational implications for coherence-based probabilistic reasoning.

The main contributions can be summarized as follows.

• We define a coherence-based probabilistic logic. In detail, we define a formal language of logical and conditional constraints, which are defined on arbitrary families of conditional events. We then define the notions of generalized coherence (g-coherence), g-coherent consequence, and tight g-coherent consequence for this language.

• We recall from [3] the relationship between g-coherence and g-coherent entailment, on the one hand, and satisfiability and logical entailment, on the other hand.

• We analyze the computational complexity of deciding g-coherence and g-coherent consequence. To our knowledge, no such results have been derived so far.

• We present new algorithms for deciding g-coherence and for computing tight g-coherent consequences. Based on concepts from default reasoning, they reduce checking g-coherence and computing tight g-coherent intervals to standard tasks in model-theoretic probabilistic logic, which can be reduced to linear optimization problems.

• As a consequence of the previous result, all the techniques that have been developed for efficient model-theoretic probabilistic reasoning can now also be applied in probabilistic reasoning under g-coherence, for example, column generation techniques [21, 27, 26].

• We describe two other techniques for the conjunctive case, namely (i) removing inactive logical and conditional constraints, and (ii) producing a reduced set of variables for the linear optimization problems. The former is inspired by [28, 16], while the latter is taken from [29].

• Interestingly, it turns out that the technique of generating a reduced set of variables taken from [29] can be characterized using the notion of random gain.

The rest of this paper is organized as follows. Section 2 introduces the formal background of this work, and recalls the semantic relationship between coherence-based and model-theoretic probabilistic reasoning from [3]. In Sections 3 and 4, we present our complexity results and new algorithms for coherence-based probabilistic reasoning. Section 5 describes two techniques for an increased efficiency of reasoning in model-theoretic probabilistic logic in the conjunctive case. In Section 6, we summarize the main results and give an outlook on future research.

Note that detailed proofs of all results are given in the extended paper [4].

2 Formal Background

In this section, we first introduce some technical preliminaries. We then briefly describe precise and imprecise probability assessments under coherence, and a model-theoretic probabilistic logic of conditional constraints. We finally define our coherence-based probabilistic logic.

2.1 Preliminaries

We assume a nonempty set of basic events Φ. We use ⊥ and ⊤ to denote false and true, respectively. The set of events is the closure of Φ ∪ {⊥, ⊤} under the Boolean operations ∧ and ¬. That is, each element of Φ ∪ {⊥, ⊤} is an event, and if φ and ψ are events, then also (φ ∧ ψ) and ¬φ. We use (φ ∨ ψ) and (ψ ⇐ φ) to abbreviate ¬(¬φ ∧ ¬ψ) and ¬(¬ψ ∧ φ), respectively, and adopt the usual conventions to eliminate parentheses. We often denote by φ̄ the negation ¬φ, and by φψ the conjunction φ ∧ ψ. A logical constraint is an event of the form ψ ⇐ φ.

A world I is a truth assignment to the basic events in Φ (that is, a mapping I : Φ → {true, false}), which is extended to all events as usual (that is, φ ∧ ψ is true in I iff φ and ψ are true in I, and ¬φ is true in I iff φ is not true in I). We use I_Φ to denote the set of all worlds for Φ. We often identify the truth values false and true with the real numbers 0 and 1, respectively. A world I satisfies an event φ, or I is a model of φ, denoted I ⊨ φ, iff I(φ) = true. I satisfies a set of events L, or I is a model of L, denoted I ⊨ L, iff I is a model of all φ ∈ L. An event φ (resp., a set of events L) is satisfiable iff a model of φ (resp., L) exists. An event ψ is a logical consequence of φ (resp., L), denoted φ ⊨ ψ (resp., L ⊨ ψ), iff each model of φ (resp., L) is also a model of ψ. We use φ ⊭ ψ (resp., L ⊭ ψ) to denote that φ ⊨ ψ (resp., L ⊨ ψ) does not hold.
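To make these preliminaries concrete, the following is a minimal sketch (not from the paper; the tuple-based event representation and all names are illustrative) of worlds as truth assignments and of logical consequence decided by enumeration:

```python
from itertools import product

def evaluate(event, world):
    """Truth value of an event in a world (a dict: basic event -> bool)."""
    op = event[0]
    if op == "atom":
        return world[event[1]]
    if op == "not":
        return not evaluate(event[1], world)
    if op == "and":
        return evaluate(event[1], world) and evaluate(event[2], world)
    if op == "or":
        return evaluate(event[1], world) or evaluate(event[2], world)
    raise ValueError(op)

def worlds(atoms):
    """All truth assignments to the given basic events."""
    for bits in product([False, True], repeat=len(atoms)):
        yield dict(zip(atoms, bits))

def entails(premises, conclusion, atoms):
    """premises |= conclusion iff every model of the premises models the conclusion."""
    return all(
        evaluate(conclusion, w)
        for w in worlds(atoms)
        if all(evaluate(e, w) for e in premises)
    )

# p together with the material implication (not p) or q entails q:
p, q = ("atom", "p"), ("atom", "q")
impl = ("or", ("not", p), q)
print(entails([p, impl], q, ["p", "q"]))  # True
```

The exponential enumeration over worlds already hints at the NP-hardness results of Section 3.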

2.2 Probability Assessments

A conditional event is an expression of the form ψ|φ with events ψ and φ ≠ ⊥. It can be looked at as a three-valued logical entity, with values true, false, or indeterminate, according to whether ψ and φ are true, or ¬ψ and φ are true, or φ is false, respectively. Notice that ψ|φ coincides with ψ∧φ|φ. More generally, ψ_1|φ_1 and ψ_2|φ_2 coincide iff ψ_1 ∧ φ_1 ≡ ψ_2 ∧ φ_2 and φ_1 ≡ φ_2.

A probability assessment (L, A) on a set of conditional events E consists of a set of logical constraints L, and a mapping A that assigns each ψ|φ ∈ E a real number in [0, 1]. Informally, L describes logical relationships, while A represents probabilistic knowledge.

For {ψ_1|φ_1, …, ψ_n|φ_n} ⊆ E with n ≥ 1 and real numbers s_1, …, s_n, let the mapping G : I_Φ → ℝ be defined as follows. For every I ∈ I_Φ:

G(I) = Σ_{i=1}^{n} s_i · I(φ_i) · (I(ψ_i) − A(ψ_i|φ_i)).

Intuitively, G can be interpreted as the random gain corresponding to a combination of n bets of amounts s_1 · A(ψ_1|φ_1), …, s_n · A(ψ_n|φ_n) on ψ_1|φ_1, …, ψ_n|φ_n with stakes s_1, …, s_n. In detail, to bet on ψ_i|φ_i, one pays an amount of s_i · A(ψ_i|φ_i), and one gets back the amount of s_i, 0, and s_i · A(ψ_i|φ_i), when ψ_i ∧ φ_i, ¬ψ_i ∧ φ_i, and ¬φ_i, respectively, turns out to be true.
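As a concrete (hypothetical) illustration of this betting scheme, the following Python sketch computes the random gain G for a single bet; the representation of events as predicates on worlds, and all names, are ours rather than the paper's:

```python
def gain(bets, world):
    """G(world) = sum_i s_i * I(phi_i) * (I(psi_i) - A(psi_i|phi_i)).

    bets: list of (psi, phi, assessed_probability, stake),
    where psi and phi are functions world -> bool.
    """
    total = 0.0
    for psi, phi, prob, stake in bets:
        if phi(world):  # the bet is called off unless phi turns out true
            total += stake * ((1.0 if psi(world) else 0.0) - prob)
    return total

# One bet of stake 1 on psi|phi with A(psi|phi) = 0.3:
bets = [(lambda w: w["psi"], lambda w: w["phi"], 0.3, 1.0)]
print(gain(bets, {"psi": True, "phi": True}))    # 0.7  (paid 0.3, got back 1)
print(gain(bets, {"psi": False, "phi": True}))   # -0.3 (paid 0.3, got back 0)
print(gain(bets, {"psi": False, "phi": False}))  # 0.0  (bet called off, money back)
```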

The following notion of coherence now assures that it is impossible (for both the gambler and the bookmaker) to have uniform loss under this betting scheme. A probability assessment (L, A) on a set of conditional events E is coherent iff for every {ψ_1|φ_1, …, ψ_n|φ_n} ⊆ E with n ≥ 1 and for all real numbers s_1, …, s_n, the following holds:

max { Σ_{i=1}^{n} s_i · I(φ_i) · (I(ψ_i) − A(ψ_i|φ_i)) : I ∈ I_Φ, I ⊨ L, I ⊨ φ_1 ∨ ⋯ ∨ φ_n } ≥ 0.
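For fixed stakes, the condition above can be evaluated by brute force (a full coherence check must quantify over all stakes, which is a linear-programming problem). The following sketch, with illustrative names and events as predicates on worlds, shows one choice of stakes already witnessing the incoherence of an assessment whose probabilities for ψ and ¬ψ sum to less than 1:

```python
from itertools import product

def max_gain(assessment, stakes, atoms):
    """Max of the random gain over the worlds where some conditioning event holds.

    assessment: list of (psi, phi, assessed_probability)."""
    best = None
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        if not any(phi(world) for _, phi, _ in assessment):
            continue  # only worlds satisfying some conditioning event count
        g = sum(
            s * ((1.0 if psi(world) else 0.0) - p)
            for (psi, phi, p), s in zip(assessment, stakes)
            if phi(world)
        )
        best = g if best is None else max(best, g)
    return best

psi = lambda w: w["a"]
neg_psi = lambda w: not w["a"]
top = lambda w: True

# A(psi|T) = 0.2 and A(not psi|T) = 0.2: incoherent, and the stakes (-1, -1)
# witness it -- the gain is -0.6 in every world, so its maximum is negative.
bad = [(psi, top, 0.2), (neg_psi, top, 0.2)]
print(max_gain(bad, [-1.0, -1.0], ["a"]))  # -0.6

# A(psi|T) = 0.2 and A(not psi|T) = 0.8: coherent; here the max gain is ~0.
good = [(psi, top, 0.2), (neg_psi, top, 0.8)]
print(max_gain(good, [-1.0, -1.0], ["a"]))
```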

2.3 Imprecise Probability Assessments

An imprecise probability assessment (L, A) on a set of conditional events E consists of a set of logical constraints L and a mapping A that assigns each ψ|φ ∈ E an interval [l, u] ⊆ [0, 1] with l ≤ u. We say (L, A) is g-coherent iff there exists a coherent precise probability assessment (L, P) on E such that P(ψ|φ) ∈ A(ψ|φ) for all ψ|φ ∈ E.

Given a set of logical constraints L and a set of conditional events E = {ψ_1|φ_1, …, ψ_n|φ_n}, denote by I_E^L the set of all mappings I that assign each ψ_i|φ_i ∈ E a member of {ψ_i ∧ φ_i, ¬ψ_i ∧ φ_i, ¬φ_i} such that {I(ψ_1|φ_1), …, I(ψ_n|φ_n)} ∪ L is satisfiable, and I(ψ_i|φ_i) ≠ ¬φ_i for some i ∈ {1, …, n}. For such mappings I and events φ, we use I ⊨ φ to abbreviate I(ψ_1|φ_1) ∧ ⋯ ∧ I(ψ_n|φ_n) ⊨ φ.

Theorem 2.1 (Gilio [22]) An imprecise probability assessment (L, A) on a set of conditional events E is g-coherent iff for every E′ = {ψ_1|φ_1, …, ψ_n|φ_n} ⊆ E with n ≥ 1, the following system of linear constraints over the variables x_I (I ∈ I_{E′}^L) is solvable:

Σ_{I ∈ I_{E′}^L} a_{I,i} · x_I ≥ 0   (for all i ∈ {1, …, n}),
Σ_{I ∈ I_{E′}^L} b_{I,i} · x_I ≤ 0   (for all i ∈ {1, …, n}),
Σ_{I ∈ I_{E′}^L} x_I = 1,
x_I ≥ 0   (for all I ∈ I_{E′}^L),      (1)

where l_i and u_i are defined by A(ψ_i|φ_i) = [l_i, u_i] for all i ∈ {1, …, n}, and a_{I,i} and b_{I,i} are defined as follows for all I ∈ I_{E′}^L and i ∈ {1, …, n}:

a_{I,i} (resp., b_{I,i}) =
  1 − l_i (resp., 1 − u_i)   if I ⊨ ψ_i ∧ φ_i,
  −l_i (resp., −u_i)          if I ⊨ ¬ψ_i ∧ φ_i,
  0                           if I ⊨ ¬φ_i.      (2)

Equivalent results have been obtained by Coletti [10].
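As a hedged sketch of this characterization, the code below builds the coefficients a_{I,i}, b_{I,i} of equation (2) — determined by which of ψ_i ∧ φ_i, ¬ψ_i ∧ φ_i, ¬φ_i a mapping I makes true — and verifies that a candidate solution x satisfies system (1). For simplicity it enumerates worlds over the atoms (assuming L = ∅), and all names and the tiny one-event assessment are illustrative:

```python
from itertools import product

def coefficients(world, psi, phi, lo, hi):
    """a_{I,i} and b_{I,i} from equation (2) for one conditional event."""
    if phi(world):
        if psi(world):
            return 1.0 - lo, 1.0 - hi  # I |= psi /\ phi
        return -lo, -hi                # I |= (not psi) /\ phi
    return 0.0, 0.0                    # I |= not phi

def satisfies_system(assessment, atoms, x):
    """Check sum_I a_{I,i} x_I >= 0, sum_I b_{I,i} x_I <= 0, sum x = 1, x >= 0.

    assessment: list of (psi, phi, lo, hi)."""
    ws = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=len(atoms))]
    ws = [w for w in ws if any(phi(w) for _, phi, _, _ in assessment)]
    if abs(sum(x) - 1.0) > 1e-9 or any(v < 0 for v in x):
        return False
    for psi, phi, lo, hi in assessment:
        a = sum(coefficients(w, psi, phi, lo, hi)[0] * xv for w, xv in zip(ws, x))
        b = sum(coefficients(w, psi, phi, lo, hi)[1] * xv for w, xv in zip(ws, x))
        if a < -1e-9 or b > 1e-9:
            return False
    return True

# A(psi|T) = [0.4, 0.6]: the uniform solution solves the system,
# witnessing g-coherence of this one-event assessment.
psi = lambda w: w["a"]
top = lambda w: True
assessment = [(psi, top, 0.4, 0.6)]
print(satisfies_system(assessment, ["a"], [0.5, 0.5]))  # True
print(satisfies_system(assessment, ["a"], [0.0, 1.0]))  # False (upper bound violated)
```

A real solver would decide solvability of (1) by linear programming rather than checking candidate solutions.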

Let (L, A) be a g-coherent imprecise probability assessment on a set of conditional events E. The imprecise probability assessment [l, u] on a conditional event ψ|φ is called a g-coherent consequence of (L, A) iff P(ψ|φ) ∈ [l, u] for every g-coherent precise probability assessment P on E ∪ {ψ|φ} such that P(ψ′|φ′) ∈ A(ψ′|φ′) for all ψ′|φ′ ∈ E. It is a tight g-coherent consequence of (L, A) iff l (resp., u) is the infimum (resp., supremum) of P(ψ|φ) subject to all g-coherent precise probability assessments P on E ∪ {ψ|φ} such that P(ψ′|φ′) ∈ A(ψ′|φ′) for all ψ′|φ′ ∈ E.

2.4 Probabilistic Logic

In the rest of this paper, we assume that Φ is finite. A conditional constraint is an expression (ψ|φ)[l, u] with real numbers l, u ∈ [0, 1] and events ψ and φ. We call φ its antecedent and ψ its consequent. A probabilistic knowledge base KB = (L, P) consists of a finite set of logical constraints L, and a finite set of conditional constraints P such that (i) l ≤ u for all (ψ|φ)[l, u] ∈ P, and (ii) ψ_1|φ_1 ≠ ψ_2|φ_2 for all distinct (ψ_1|φ_1)[l_1, u_1], (ψ_2|φ_2)[l_2, u_2] ∈ P.

A probabilistic interpretation Pr is a probability function on I_Φ (that is, a mapping Pr : I_Φ → [0, 1] such that all Pr(I) with I ∈ I_Φ sum up to 1). The probability of an event φ in the probabilistic interpretation Pr, denoted Pr(φ), is defined as follows:

Pr(φ) = Σ_{I ∈ I_Φ : I ⊨ φ} Pr(I).

For events φ and ψ with Pr(φ) > 0, we use Pr(ψ|φ) to abbreviate Pr(ψ ∧ φ) / Pr(φ). The truth of logical and conditional constraints F in a probabilistic interpretation Pr, denoted Pr ⊨ F, is defined as follows:

• Pr ⊨ ψ ⇐ φ iff Pr(ψ ∧ φ) = Pr(φ).
• Pr ⊨ (ψ|φ)[l, u] iff Pr(φ) = 0 or Pr(ψ|φ) ∈ [l, u].

We say Pr satisfies a logical or conditional constraint F, or Pr is a model of F, iff Pr ⊨ F. We say Pr satisfies a set of logical and conditional constraints 𝓕, or Pr is a model of 𝓕, denoted Pr ⊨ 𝓕, iff Pr is a model of all F ∈ 𝓕. We say that 𝓕 is satisfiable iff a model of 𝓕 exists.
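These semantics can be sketched directly: a probabilistic interpretation assigns mass to worlds, Pr(φ) sums the mass of the worlds where φ is true, and (ψ|φ)[l, u] holds iff Pr(φ) = 0 or the ratio Pr(ψ ∧ φ)/Pr(φ) lies in [l, u]. The representation below (events as predicates, a hand-picked distribution) is illustrative, not the paper's:

```python
from itertools import product

def prob(event, interpretation):
    """Pr(event): total mass of the worlds satisfying it."""
    return sum(mass for world, mass in interpretation if event(world))

def satisfies(interpretation, psi, phi, lo, hi):
    """Truth of the conditional constraint (psi|phi)[lo, hi]."""
    p_phi = prob(phi, interpretation)
    if p_phi == 0:
        return True  # zero-probability antecedent: the constraint holds trivially
    ratio = prob(lambda w: psi(w) and phi(w), interpretation) / p_phi
    return lo - 1e-9 <= ratio <= hi + 1e-9

# Worlds over atoms {b, f} with an example mass assignment summing to 1.
atoms = ["b", "f"]
ws = [dict(zip(atoms, bits)) for bits in product([False, True], repeat=2)]
pr = list(zip(ws, [0.125, 0.125, 0.25, 0.5]))
b = lambda w: w["b"]
f = lambda w: w["f"]
print(prob(b, pr))                    # 0.75
print(satisfies(pr, f, b, 0.6, 0.7))  # True: Pr(f|b) = 0.5/0.75 = 0.666...
```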

We next define the notion of logical entailment. A conditional constraint F = (ψ|φ)[l, u] is a logical consequence of a set of logical and conditional constraints 𝓕, denoted 𝓕 ⊨ F, iff each model of 𝓕 is also a model of F. It is a tight logical consequence of 𝓕, denoted 𝓕 ⊨_tight F, iff l (resp., u) is the infimum (resp., supremum) of Pr(ψ|φ) subject to all models Pr of 𝓕 with Pr(φ) > 0. Note that we define l = 1 and u = 0 when 𝓕 ⊨ (φ|⊤)[0, 0].

A probabilistic knowledge base KB = (L, P) is satisfiable iff L ∪ P is satisfiable. A conditional constraint (ψ|φ)[l, u] is a logical consequence of KB, denoted KB ⊨ (ψ|φ)[l, u], iff L ∪ P ⊨ (ψ|φ)[l, u]. It is a tight logical consequence of KB, denoted KB ⊨_tight (ψ|φ)[l, u], iff L ∪ P ⊨_tight (ψ|φ)[l, u].

2.5 Probabilistic Logic under Coherence

Every imprecise probability assessment IP = (L, A) with finite L on a finite set of conditional events E can be represented by the following probabilistic knowledge base:

KB_IP = (L, {(ψ|φ)[l, u] : ψ|φ ∈ E, A(ψ|φ) = [l, u]}).

Conversely, every probabilistic knowledge base KB = (L, P) can be expressed by the following imprecise probability assessment IP_KB = (L, A_P) on {ψ|φ : (ψ|φ)[l, u] ∈ P}:

A_P(ψ|φ) = [l, u] for all (ψ|φ)[l, u] ∈ P.

A probabilistic knowledge base KB is said to be g-coherent iff IP_KB is g-coherent. Given a g-coherent probabilistic knowledge base KB and a conditional constraint (ψ|φ)[l, u], we say (ψ|φ)[l, u] is a g-coherent consequence of KB, denoted KB ⊩ (ψ|φ)[l, u], iff the imprecise probability assessment [l, u] on ψ|φ is a g-coherent consequence of IP_KB. It is a tight g-coherent consequence of KB, denoted KB ⊩_tight (ψ|φ)[l, u], iff the imprecise probability assessment [l, u] on ψ|φ is a tight g-coherent consequence of IP_KB.

The following is an immediate implication.

Theorem 2.2 Let KB = (L, P) be a g-coherent probabilistic knowledge base, and let (ψ|φ)[l, u] be a conditional constraint. Then, KB ⊩ (ψ|φ)[l, u] iff (L, P ∪ {(ψ|φ)[p, p]}) is not g-coherent for all p ∈ [0, 1] \ [l, u].

2.6 Relationship to Probabilistic Logic

We now recall some of our results obtained in [3], which concern characterizations of the notions of g-coherence and of g-coherent entailment in terms of the notions of satisfiability and of logical entailment.

We adopt the following terminology from the area of default reasoning [19]. A probabilistic interpretation Pr verifies a conditional constraint (ψ|φ)[l, u] iff Pr(φ) = 1 and Pr ⊨ (ψ|φ)[l, u]. A set of conditional constraints P tolerates a conditional constraint C under a set of logical constraints L iff there exists a model of L ∪ P that verifies C.
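In the special case where every conditional constraint has bounds [1, 1], a model that puts all its mass on a single world suffices, and tolerance reduces to a propositional check: P tolerates C = (ψ|φ)[1, 1] under L iff some world satisfies L, ψ ∧ φ, and the material counterpart ¬φ_i ∨ ψ_i of every constraint in P. The sketch below (our names, not the paper's) implements this reduced test:

```python
from itertools import product

def tolerates(P, C, L, atoms):
    """P: list of (psi, phi) read as (psi|phi)[1,1]; C: one such pair;
    L: list of events; all events are functions world -> bool."""
    psi_c, phi_c = C
    for bits in product([False, True], repeat=len(atoms)):
        w = dict(zip(atoms, bits))
        if (all(ev(w) for ev in L)
                and psi_c(w) and phi_c(w)             # the mass-1 world verifies C
                and all((not phi(w)) or psi(w) for psi, phi in P)):
            return True
    return False

# Penguin-style example (illustrative): birds fly, penguins are birds,
# penguins don't fly -- all with bounds [1, 1].
b, p, f = (lambda w: w["b"]), (lambda w: w["p"]), (lambda w: w["f"])
not_f = lambda w: not w["f"]
P = [(f, b), (b, p), (not_f, p)]
atoms = ["b", "p", "f"]
print(tolerates(P, (f, b), [], atoms))      # True: a flying non-penguin bird
print(tolerates(P, (not_f, p), [], atoms))  # False: no world both verifies it and satisfies P
```

With general interval bounds, tolerance is instead a linear-programming feasibility problem.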

The following theorem shows that g-coherence has a characterization similar to ε-consistency in default reasoning [25]. This result follows from Theorem 2.1.

Theorem 2.3 Let KB = (L, P) be a probabilistic knowledge base. Then, KB is g-coherent iff there is an ordered partition (P_0, …, P_k) of P such that each P_i is the set of all elements in P \ (P_0 ∪ ⋯ ∪ P_{i−1}) tolerated under L by P \ (P_0 ∪ ⋯ ∪ P_{i−1}).

We next express the notion of g-coherent entailment in terms of the notion of logical entailment.

For a probabilistic knowledge base KB = (L, P) and an event φ, we use P(L, φ) to denote the set of all subsets P′ = {(ψ_1|φ_1)[l_1, u_1], …, (ψ_k|φ_k)[l_k, u_k]} of P such that every model Pr of L ∪ P′ with Pr(φ_1 ∨ ⋯ ∨ φ_k ∨ φ) > 0 satisfies Pr(φ) > 0.

The following lemma shows that P(L, φ) has a unique greatest element with respect to set inclusion.

Lemma 2.4 Let KB = (L, P) be a g-coherent probabilistic knowledge base, and let φ be an event. Then, P(L, φ) contains a unique greatest element.

The next theorem shows the crucial result that g-coherent entailment from KB can be reduced to logical entailment from the greatest element in P(L, φ).

Theorem 2.5 Let KB = (L, P) be a g-coherent probabilistic knowledge base, and let F = (ψ|φ)[l, u] be a conditional constraint. Let KB* = (L, P*), where P* is the greatest element in P(L, φ). Then,

(a) KB ⊩ F iff KB* ⊨ F.

(b) KB ⊩_tight F iff KB* ⊨_tight F.

Thus, computing tight g-coherent consequences can be reduced to computing tight logical consequences from the greatest element P* in P(L, φ). The following theorem shows how P* can be characterized and thus computed.

Theorem 2.6 Let KB = (L, P) be a g-coherent probabilistic knowledge base and φ be an event. Let P* ⊆ P and let (P_0, …, P_k) be an ordered partition of P \ P* such that (i) every P_i is the set of all elements in P \ (P_0 ∪ ⋯ ∪ P_{i−1}) tolerated under L ∪ {¬φ} by P \ (P_0 ∪ ⋯ ∪ P_{i−1}), and (ii) no member of P* is tolerated under L ∪ {¬φ} by P*. Then, P* is the greatest element in P(L, φ).

3 Complexity

In this section, we analyze the computational complexity of deciding g-coherence and g-coherent consequence.

It turns out that deciding g-coherence and g-coherent consequence are NP- and co-NP-complete, respectively, and thus intractable, even in very restricted cases. Hence, we cannot hope for algorithms that efficiently solve every problem instance. But we can still hope for efficient special-case, average-case, and approximation algorithms.

This knowledge about the precise complexity of probabilistic reasoning under coherence is very useful for developing efficient algorithms. In particular, the above results show that g-coherence and g-coherent consequence can be polynomially translated into SAT instances, and thus be solved with existing sophisticated SAT packages.

In the sequel, let KB = (L, P) be a probabilistic knowledge base. We assume that all bounds in P are rational numbers. A conditional constraint (ψ|φ)[l, u] is atomic iff ψ is a basic event, and φ is either ⊤ or a basic event. It is 1-conjunctive iff ψ is a basic event, and φ is either ⊤ or a conjunction of basic events. A set of conditional constraints P is atomic (resp., 1-conjunctive) iff all members of P are atomic (resp., 1-conjunctive).

The following theorem shows that the problem of deciding g-coherence is NP-complete. Membership in NP follows from a characterization of g-coherence similar to Theorem 2.3 and a small-model theorem in probabilistic logic. Hardness for NP is obtained by a polynomial reduction from the NP-complete graph 3-colorability problem.

Theorem 3.1 Given KB = (L, P), deciding whether KB is g-coherent is NP-complete. Hardness holds even if L is empty and P is atomic.

The next theorem shows that deciding g-coherent consequence is co-NP-complete. Here, co-NP-membership follows from Theorems 2.2, 2.5, and 3.1 and a small-model theorem in probabilistic logic. Hardness for co-NP can be proved by a polynomial reduction from the complement of the NP-complete graph 3-colorability problem.

Theorem 3.2 Given a g-coherent KB = (L, P) and a conditional constraint F, deciding whether KB ⊩ F is co-NP-complete. Hardness holds even if L is empty, P is 1-conjunctive, and F = (ψ|φ)[l, u] with l = u.

4 Algorithms

Our results on the relationship between coherence-based and model-theoretic probabilistic logic in Section 2.6 open a new perspective on algorithms for deciding g-coherence and for computing tight g-coherent consequences. They show how these problems can be reduced to standard reasoning tasks in model-theoretic probabilistic logic.

That is, we get new algorithms for deciding g-coherence and computing tight g-coherent consequences, which are based on subprocedures for standard reasoning tasks in model-theoretic probabilistic logic. In the rest of this section, we first describe the new algorithms, and then discuss the tasks in model-theoretic probabilistic reasoning.

As for g-coherence, a previous algorithm by Gilio [22] is reformulated using terminology from the area of default reasoning. Interestingly, the resulting algorithm for deciding g-coherence is closely related to an algorithm for checking ε-consistency in default reasoning [25].¹

As for tight g-coherent consequence, we have the important result that a tight interval under g-coherent entailment can be computed by first checking g-coherence, and then computing a tight interval under logical entailment.

The key features of these new algorithms can be briefly described as follows. First, they are surprisingly simple compared to existing algorithms. This helps to more deeply understand the semantic properties of g-coherence and g-coherent entailment (see [3] for more details). Second, they show that efficient techniques for model-theoretic probabilistic reasoning can be applied for probabilistic reasoning under coherence. We will illustrate this aspect in Section 5 by describing two such recent techniques.

¹Note that a relationship between the algorithms in [22] and [25] was suggested first by Didier Dubois (personal communication).

Algorithm g-coherence

Input: Probabilistic knowledge base KB = (L, P).
Output: “Yes”, if KB is g-coherent, otherwise “No”.

1. P′ := P;
2. repeat
3.   H := {C ∈ P′ | C is tolerated under L by P′};
4.   P′ := P′ \ H
5. until H = ∅;
6. if P′ = ∅ then return “Yes”
7. else return “No”.

Figure 1: Algorithm g-coherence.

Algorithm tight-g-coherent-consequence

Input: Probabilistic knowledge base KB = (L, P) and two events ψ and φ.
Output: Unique pair of real numbers l, u ∈ [0, 1] such that KB ⊩_tight (ψ|φ)[l, u].

1. P′ := P;
2. repeat
3.   H := {C ∈ P′ | C is tolerated under L ∪ {¬φ} by P′};
4.   P′ := P′ \ H
5. until H = ∅;
6. compute l, u ∈ [0, 1] such that L ∪ P′ ⊨_tight (ψ|φ)[l, u];
7. return [l, u].

Figure 2: Algorithm tight-g-coherent-consequence.

4.1 New Algorithms

In the sequel, let KB = (L, P) be a probabilistic knowledge base, and let ψ and φ be two events.

By Theorem 2.3, deciding whether KB is g-coherent can be done by Algorithm g-coherence (see Fig. 1): We first compute the set P_0 of all elements in P tolerated by P under L. We then compute the set P_1 of all elements in P \ P_0 tolerated by P \ P_0 under L. We iterate this procedure until no element in the remaining set R = P \ (P_0 ∪ P_1 ∪ ⋯) is tolerated by R under L. Then, KB is g-coherent iff R is empty.

By Theorems 2.5 and 2.6, the real numbers l, u ∈ [0, 1] such that KB ⊩_tight (ψ|φ)[l, u] can be computed in a similar way by Algorithm tight-g-coherent-consequence (see Fig. 2): We first compute the set P_0 of all elements in P tolerated by P under L ∪ {¬φ}. We then compute the set P_1 of all elements in P \ P_0 tolerated by P \ P_0 under L ∪ {¬φ}. We iterate this procedure until no element in the remaining set R = P \ (P_0 ∪ P_1 ∪ ⋯) is tolerated by R under L ∪ {¬φ}. Then, we compute the real numbers l, u ∈ [0, 1] such that L ∪ R ⊨_tight (ψ|φ)[l, u]. They are the requested l, u ∈ [0, 1] such that KB ⊩_tight (ψ|φ)[l, u].
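In the special case where every conditional constraint has bounds [1, 1], tolerance reduces to a propositional check over single worlds, and the loop of Algorithm g-coherence can then be sketched in a few lines of Python (illustrative names; a general implementation would decide tolerance via linear programming):

```python
from itertools import product

def tolerates(P, C, L, atoms):
    """[1,1]-bounds special case: some world satisfies L, psi_C /\ phi_C,
    and the material counterpart of every constraint in P."""
    psi_c, phi_c = C
    for bits in product([False, True], repeat=len(atoms)):
        w = dict(zip(atoms, bits))
        if (all(ev(w) for ev in L)
                and psi_c(w) and phi_c(w)
                and all((not phi(w)) or psi(w) for psi, phi in P)):
            return True
    return False

def g_coherent(P, L, atoms):
    """Iteratively remove the constraints tolerated by the remainder;
    g-coherent iff everything is eventually removed."""
    rest = list(P)
    while rest:
        removed = [C for C in rest if tolerates(rest, C, L, atoms)]
        if not removed:
            return False
        rest = [C for C in rest if C not in removed]
    return True

b, p, f = (lambda w: w["b"]), (lambda w: w["p"]), (lambda w: w["f"])
not_f = lambda w: not w["f"]
atoms = ["b", "p", "f"]
print(g_coherent([(f, b), (b, p), (not_f, p)], [], atoms))  # True
print(g_coherent([(f, p), (not_f, p)], [], atoms))          # False
```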

We now illustrate these algorithms by some examples.

Example 4.1 Let the probabilistic knowledge base �� ��� � � over � ��� ����� be given by:

��� � � ��� �� ������ ���� ���� ���� ���� �������� ����� �

Assume that we ask whether �� is g-coherent. Using Al-gorithm �-���������, we then compute the set of all mem-bers of � tolerated by � under �. As this set already co-incides with � , it follows that �� is g-coherent.

Assume now we ask for the real numbers l, u ∈ [0, 1] such that KB ⇒tight (β|α)[l, u]. Using Algorithm tight-g-coherent-consequence, we then first compute the set P1 of all members of P tolerated by P under L ∪ {α}. That is, we ask the following questions:

(1.1) Has (L ∪ {α}, P) a model Pr with Pr(φ1) > 0?
(1.2) Has (L ∪ {α}, P) a model Pr with Pr(φ2) > 0?
(1.3) Has (L ∪ {α}, P) a model Pr with Pr(φ3) > 0?

The answers to (1.1) and (1.2) are (trivially) "No", while the answer to (1.3) is "Yes". Hence, we continue by computing the set of all members of P − P1 = {(ψ1|φ1)[l1, u1], (ψ2|φ2)[l2, u2]} tolerated by P − P1 under L ∪ {α}, by asking:

(2.1) Has (L ∪ {α}, P − P1) a model Pr with Pr(φ1) > 0?
(2.2) Has (L ∪ {α}, P − P1) a model Pr with Pr(φ2) > 0?

As the answers to (2.1) and (2.2) are both (trivially) "No", the interval [l, u] ⊆ [0, 1] with KB ⇒tight (β|α)[l, u] is determined by (L, P − P1) ⇒tight (β|α)[l, u]. □

−l · Σ { y_r : r ∈ R, r ⊨ ¬ψ∧φ } + (1−l) · Σ { y_r : r ∈ R, r ⊨ ψ∧φ } ≥ 0    (for all (ψ|φ)[l, u] ∈ P with l > 0)

 u · Σ { y_r : r ∈ R, r ⊨ ¬ψ∧φ } + (u−1) · Σ { y_r : r ∈ R, r ⊨ ψ∧φ } ≥ 0    (for all (ψ|φ)[l, u] ∈ P with u < 1)

 Σ { y_r : r ∈ R, r ⊨ α } = 1

 y_r ≥ 0    (for all r ∈ R)                                                    (3)

Figure 3: System of linear constraints (3) for Theorems 4.3 and 4.4.

The following is an example in which a tight g-coherent consequence coincides with a tight logical consequence.

Example 4.2 Let the probabilistic knowledge base KB = (L, P) over a set Φ of basic events be given by a finite set L of Horn logical constraints and a finite set P of conjunctive conditional constraints (ψ1|φ1)[l1, u1], ..., (ψn|φn)[ln, un].

It is now easy to verify that every member of P is tolerated by P under L. Hence, KB is g-coherent.

Assume next we ask for the real numbers l, u ∈ [0, 1] such that (β|α)[l, u] is a tight g-coherent consequence of KB. Since no member of P is tolerated by P under L ∪ {α}, the interval [l, u] ⊆ [0, 1] such that KB ⇒tight (β|α)[l, u] is determined by (L, P) ⇒tight (β|α)[l, u], that is, by tight logical consequence. □

4.2 Tasks in Probabilistic Logic

The new algorithms in Section 4.1 reduce the problems of deciding g-coherence and of computing tight g-coherent consequences to the two standard tasks in model-theoretic probabilistic logic of deciding the existence of models that associate with a given event a positive probability (PP) and of computing tight logical consequences (TLC).

Task PP can be reduced to deciding whether a system of linear constraints is solvable. This is more formally expressed by the following well-known result.

Theorem 4.3 Let KB = (L, P) be a probabilistic knowledge base, and let α be an event. Let R denote the set of all worlds over Φ that satisfy L. Then, KB has a model Pr such that Pr(α) > 0 iff the system of linear constraints (3) over the variables y_r (r ∈ R) is solvable (see Fig. 3).

Task TLC is reducible to computing the optimal values of two linear programs, as the next well-known result shows.

Theorem 4.4 Let KB = (L, P) be a probabilistic knowledge base, and let β and α be events. Let R denote the set of all worlds over Φ that satisfy L. Suppose that KB has a model Pr such that Pr(α) > 0. Then, l (resp., u) such that KB ⇒tight (β|α)[l, u] is given by the optimal value of the following linear program over the variables y_r (r ∈ R):

minimize (resp., maximize)  Σ { y_r : r ∈ R, r ⊨ β∧α }  subject to (3).
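Both reductions can be sketched with an off-the-shelf LP solver. The following brute-force sketch enumerates worlds explicitly and uses scipy.optimize.linprog; the encoding of events as Python predicates over sets of basic events is an assumption made for illustration. The constraint rows express l·Pr(φ) ≤ Pr(ψ∧φ) ≤ u·Pr(φ), and the α-mass is normalized to 1 as one standard way of expressing Pr(α) > 0:

```python
from itertools import product
import numpy as np
from scipy.optimize import linprog

def worlds(basic_events, L):
    """All truth assignments over basic_events that satisfy the predicate L."""
    out = []
    for bits in product([False, True], repeat=len(basic_events)):
        w = frozenset(b for b, on in zip(basic_events, bits) if on)
        if L(w):
            out.append(w)
    return out

def tight_interval(basic_events, L, P, alpha, beta=None):
    """Build the linear system over the L-worlds and solve it.

    P is a list of (psi, phi, l, u); psi, phi, alpha, beta are predicates
    on worlds.  Returns (feasible, [l, u]); with beta=None only the
    feasibility question (task PP for the event alpha) is answered.
    """
    R = worlds(basic_events, L)
    A_ub = []
    for psi, phi, l, u in P:
        v = np.array([float(psi(r) and phi(r)) for r in R])  # r |= psi AND phi
        f = np.array([float(phi(r)) for r in R])             # r |= phi
        if l > 0:
            A_ub.append(l * f - v)   # l * Pr(phi) - Pr(psi AND phi) <= 0
        if u < 1:
            A_ub.append(v - u * f)   # Pr(psi AND phi) - u * Pr(phi) <= 0
    kw = dict(A_ub=np.array(A_ub) if A_ub else None,
              b_ub=np.zeros(len(A_ub)) if A_ub else None,
              A_eq=np.array([[float(alpha(r)) for r in R]]),  # alpha-mass = 1
              b_eq=[1.0], bounds=[(0, None)] * len(R), method="highs")
    feas = linprog(np.zeros(len(R)), **kw)   # zero objective: feasibility only
    if beta is None or not feas.success:
        return feas.success, None
    c = np.array([float(beta(r) and alpha(r)) for r in R])   # Pr(beta AND alpha)
    lo, hi = linprog(c, **kw), linprog(-c, **kw)
    return True, [lo.fun, -hi.fun]
```

For instance, with the single constraint Pr(fly|bird) ∈ [0.9, 1] and no logical constraints, the tight bounds for (fly|bird) come back as [0.9, 1].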

We give an illustrative example.

Example 4.5 Consider again the probabilistic knowledge base KB = (L, P) given in Example 4.1.

Assume we want to reply question (1.3). That is, has (L ∪ {α}, P) a model Pr with Pr(φ3) > 0? We now apply Theorem 4.3: we compute the set R of all worlds that satisfy L ∪ {α}, together with, for every conditional constraint in P, the sets of worlds in R satisfying its antecedent and its antecedent-consequent conjunction, and we instantiate the system of linear constraints (3) accordingly, which yields a small system (4).

As (4) is solvable, the reply to question (1.3) is "Yes".

Assume next we want to compute the interval [l, u] ⊆ [0, 1] such that (L, H) ⇒tight (β|α)[l, u], where H = P − P1 = {(ψ1|φ1)[l1, u1], (ψ2|φ2)[l2, u2]}. We then compute the set R of all worlds that satisfy L and instantiate the system of linear constraints (3) for (L, H), which yields a system (5).

Observe that (5) is solvable. By Theorem 4.3, it thus follows that (L, H) has a model Pr with Pr(α) > 0. Hence, by Theorem 4.4, the requested l (resp., u) is given by the optimal value of the following linear program:

minimize (resp., maximize)  Σ { y_r : r ∈ R, r ⊨ β∧α }  subject to (5). □

5 Techniques for Conjunctive Case

As we reduce the problems of deciding g-coherence and of computing tight g-coherent consequences to tasks in probabilistic logic, we can now apply all the techniques that have been developed there for an increased efficiency.

These techniques aim at reducing the exponential number of variables in the system of linear constraints (3).

In this section, we describe two such techniques for an increased efficiency that apply to the conjunctive case, namely (i) removing inactive logical and conditional constraints, and (ii) producing a reduced set of variables for the linear optimization problems in Theorems 4.3 and 4.4.

The key idea behind (i) is that some basic events can be assigned the probability zero, when deciding whether a given event has a positive probability, and when computing tight logical consequences. These basic events and all constraints in which they appear are then marked as inactive, and they are simply removed. The main idea behind (ii) is that we actually do not need the fine-grained structure of the set of worlds underlying the linear systems of Theorems 4.3 and 4.4, and thus some variables can be removed.

5.1 Preliminaries

An event is conjunctive iff it is either ⊤ or a conjunction of basic events. A conditional constraint (ψ|φ)[l, u] (resp., conditional event ψ|φ) is conjunctive iff ψ is a conjunction of basic events, and φ is either ⊤ or a conjunction of basic events. A logical constraint ψ ⇐ φ is Horn (resp., definite-Horn) iff ψ is either ⊥ or a basic event (resp., ψ is a basic event), and φ is either ⊤ or a conjunction of basic events. A probabilistic knowledge base KB = (L, P) is conjunctive iff L is a finite set of Horn logical constraints, and P is a finite set of conjunctive conditional constraints.
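Under a hypothetical encoding, where a conjunctive event is either a sentinel TOP or a nonempty frozenset of basic-event names, these definitions become simple checks (all names here are illustrative, not the paper's):

```python
TOP, BOT = "TOP", "BOT"  # illustrative sentinels for the events "true" and "false"

def is_conjunctive_event(e):
    """TOP or a nonempty conjunction of basic events."""
    return e is TOP or (isinstance(e, frozenset) and len(e) > 0)

def is_conjunctive_constraint(psi, phi):
    """(psi|phi)[l,u]: psi a conjunction of basic events, phi TOP or one."""
    return psi is not TOP and is_conjunctive_event(psi) and is_conjunctive_event(phi)

def is_horn(head, body, definite=False):
    """Logical constraint head <= body: head a single basic event, or BOT
    in the non-definite case; body TOP or a conjunction of basic events."""
    basic = isinstance(head, frozenset) and len(head) == 1
    return (basic or (head is BOT and not definite)) and is_conjunctive_event(body)
```

A conjunctive knowledge base is then simply one whose logical part passes is_horn and whose conditional part passes is_conjunctive_constraint throughout.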

5.2 Removing Inactive Constraints

We now describe a method in which some logical and conditional constraints are characterized as inactive and then removed. It is inspired by ideas from [28, 16]. Note that removing inactive constraints can be done in linear time.

For sets of Horn logical constraints L, we use L↓ to denote the set of all definite members of L. For sets of conjunctive conditional constraints P, denote by P↓ the set of all definite-Horn logical constraints b ⇐ φ such that the basic event b occurs in the consequent of some (ψ|φ)[l, u] ∈ P with l > 0.

Let KB = (L, P) be a conjunctive probabilistic knowledge base, and let α be a conjunctive event. A basic event b ∈ Φ is active w.r.t. KB and α iff b is a logical consequence of L↓ ∪ P↓ and the basic events occurring in α. An event β (resp., a logical or conditional constraint F) is active w.r.t. KB and α iff all basic events in β (resp., F) are active w.r.t. KB and α. An event (resp., a logical or conditional constraint) is inactive w.r.t. KB and α iff it is not active w.r.t. KB and α. We define KB_α as the probabilistic knowledge base (L_α, P_α) such that L_α (resp., P_α) denotes the set of all members of L (resp., P) that are active w.r.t. KB and α.

The following two theorems show that, as far as the tasks PP and TLC in Section 4.2 are concerned, we can remove all inactive logical and conditional constraints from KB. Roughly speaking, the main idea behind these results is that all inactive basic events can be assigned the probability zero, when solving PP and TLC. As a consequence, all inactive constraints then have some inactive basic events in their antecedent, and can thus simply be removed.
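One way to realize the marking, assuming the definite-Horn rules of L and of the consequents of P have already been extracted into (body, head) pairs of basic events, is naive forward chaining from the basic events of α (a sketch; a queue-indexed variant achieves the linear time mentioned above):

```python
def active_basics(alpha_basics, rules):
    """Forward-chain definite-Horn rules (body, head), where body is a set
    of basic events and head a single one; returns the active basic events
    reachable from the basic events of alpha."""
    active = set(alpha_basics)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in active and body <= active:
                active.add(head)
                changed = True
    return active

def prune_inactive(constraints, basics_of, active):
    """Keep only the constraints all of whose basic events are active."""
    return [c for c in constraints if basics_of(c) <= active]
```

The rule-extraction step and the (body, head) encoding are assumptions for illustration; the pruning itself mirrors the definition of KB restricted to its active members.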

Theorem 5.1 Let KB = (L, P) be a conjunctive probabilistic knowledge base, and let α be a conjunctive event. Then, KB has a model Pr with Pr(α) > 0 iff KB_α has a model Pr with Pr(α) > 0.

Theorem 5.2 Let KB = (L, P) be a conjunctive probabilistic knowledge base, and let F = (ψ|φ)[l, u] be a conjunctive conditional constraint. Then,

KB ⇒tight F iff KB_{ψ∧φ} ⇒tight F.

5.3 Reduced Set of Variables

In the conjunctive case, the number of variables in the systems of linear constraints for Theorems 4.3 and 4.4 can be directly reduced by a technique proposed in [29].

In the sequel, let Γ be a finite set of conjunctive conditional events. Let E be the following set of conjunctive events:

E = {ψ∧φ | ψ|φ ∈ Γ} ∪ {φ | ψ|φ ∈ Γ}.

The operator B associates with each conjunctive event ω the set of all basic events that occur in ω, and is extended to sets of events D by B(D) = ∪ { B(ω) : ω ∈ D }. The operator B⁻¹ associates with each set of basic events S the set of all ω ∈ E such that B(ω) ⊆ S. An element ω ∈ E is maximal in E iff B(ω) ⊂ B(ω′) for no ω′ ∈ E − {ω}. We use max(E) to denote the set of all maximal elements in E. Denote by D the least set of subsets of E such that:

(i) if ω ∈ E, then B⁻¹(B(ω)) ∈ D;

(ii) if ω ∈ max(E), D1, D2 ∈ D, and D1 ∪ D2 ⊆ B⁻¹(B(ω)), then B⁻¹(B(D1) ∪ B(D2)) ∈ D.

For each D ∈ D, let the mapping σ_D associate with every ψ|φ ∈ Γ an element of {ψ∧φ, ¬ψ∧φ, ⊤} as follows:

σ_D(ψ|φ) = ψ∧φ,   if ψ∧φ ∈ D and φ ∈ D;
σ_D(ψ|φ) = ¬ψ∧φ,  if ψ∧φ ∉ D and φ ∈ D;
σ_D(ψ|φ) = ⊤,     if φ ∉ D.

We define R* as the set of all σ_D such that (i) D ∈ D and (ii) the set of events { σ_D(ψ|φ) : ψ|φ ∈ Γ } is satisfiable.

The next two theorems show that we can use R* instead of R for generating the variables in the systems of linear constraints for Theorems 4.3 and 4.4, respectively.

Theorem 5.3 Let KB = (L, P) be a conjunctive probabilistic knowledge base, and let α be a conjunctive event. Let R* be obtained as above from Γ = {ψ|φ | (ψ|φ)[l, u] ∈ P} ∪ {α|⊤}. Then, KB has a model Pr such that Pr(α) > 0 iff the system of linear constraints (3) over the variables y_r (r ∈ R*) is solvable (see Fig. 3).

Theorem 5.4 Let KB = (L, P) be a conjunctive probabilistic knowledge base, and let β and α be conjunctive events. Let R* be obtained as above from Γ = {ψ|φ | (ψ|φ)[l, u] ∈ P} ∪ {β|α}. Suppose that KB has a model Pr such that Pr(α) > 0. Then, l (resp., u) such that KB ⇒tight (β|α)[l, u] is given by the optimal value of the following linear program over the variables y_r (r ∈ R*):

minimize (resp., maximize)  Σ { y_r : r ∈ R*, r ⊨ β∧α }  subject to (3).

We give an illustrative example.

Example 5.5 Consider again the probabilistic knowledge base KB = (L, P) given in Example 4.2. Assume that we ask whether KB has a model Pr such that Pr(α) > 0.

The set R of all worlds that satisfy L ∪ {α} contains 162 elements. The set D, in contrast, contains only 10 elements, and hence also R* contains only 10 elements.

Thus, the number of variables reduces from 162 to 10. □
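The effect of such a reduction can be mimicked generically: two worlds that agree on every event occurring in the linear system (each ψ∧φ, each φ, α, and β∧α) index identical columns, so one variable per equivalence class suffices. The following sketch is not the construction of [29] itself, only an illustration of why merging indistinguishable worlds is sound:

```python
def reduced_columns(R, events):
    """Group worlds by their signature w.r.t. the given event predicates;
    worlds with identical signatures contribute identical columns to the
    linear system, so one representative variable per class suffices."""
    seen = {}
    for r in R:
        sig = tuple(e(r) for e in events)   # truth values of all relevant events
        seen.setdefault(sig, r)             # keep one representative per class
    return list(seen.values())
```

For the merge to be exact, the list of event predicates must cover every row and the objective of the linear program; the representative set then plays the role that R* plays above.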

5.4 Random Gain

Interestingly, the technique of removing variables in Section 5.3 can be expressed in terms of random gain.

Let KB = (L, P) be a probabilistic knowledge base with P = {(ψ1|φ1)[l1, u1], ..., (ψn|φn)[ln, un]} and n ≥ 1. Let R denote the set of all worlds that satisfy L. Then, for every r ∈ R, the corresponding random gain G_r can be represented as follows:

G_r = Σ_{i=1,...,n} ( s_i · (a_ir − l_i) + t_i · (u_i − a_ir) ) · b_ir,   (6)

where s_i ≥ 0 and t_i ≥ 0 are real numbers, and a_ir and b_ir are defined by (2), for all r ∈ R and i ∈ {1, ..., n}.
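Taking the representation (6) at face value, with per-constraint terms of the assumed form s_i·(a_ir − l_i) + t_i·(u_i − a_ir) weighted by b_ir, the gain in a fixed world r is plain arithmetic; the tuple layout below is an illustrative assumption:

```python
def random_gain(terms):
    """Gain of one world: sum over constraints i of
    (s_i*(a_ir - l_i) + t_i*(u_i - a_ir)) * b_ir, with each term given as
    a tuple (s_i, t_i, a_ir, b_ir, l_i, u_i); the exact shape of the gain
    follows an assumed reading of (6)."""
    return sum((s * (a - l) + t * (u - a)) * b
               for s, t, a, b, l, u in terms)
```

Comparing such gains across worlds, as Theorem 5.6 does, is then a matter of evaluating this sum once per world for a fixed choice of stakes.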

The following theorem now shows that R* can be characterized using the random gain.

Theorem 5.6 Let KB = (L, P) be a conjunctive probabilistic knowledge base, where P = {(ψ1|φ1)[l1, u1], ..., (ψn|φn)[ln, un]} and n ≥ 1. Let Γ = {ψ1|φ1, ..., ψn|φn}, let R* be obtained as above from Γ, and let r ∈ R. Then, the following holds:

(a) If r ∈ R*, then no r′ ∈ R exists such that r′ ≠ r and G_{r′} ≥ G_r.

(b) If r ∈ R − R*, then G_{r′} ≥ G_r for some r′ ∈ R* with r′ ≠ r.

6 Summary and Outlook

We showed that the notions of g-coherence and of g-coherent entailment can be expressed by combining notions in model-theoretic probabilistic logic with concepts from default reasoning. Using these results, we analyzed the complexity of probabilistic reasoning under coherence. Moreover, we gave new algorithms for deciding g-coherence and for computing tight g-coherent intervals, which reduce these tasks to standard reasoning tasks in model-theoretic probabilistic logic. Thus, efficient methods for model-theoretic probabilistic reasoning can immediately be applied for probabilistic reasoning under coherence. We then described two such techniques for the conjunctive case.

We remark that removing inactive constraints as described in Section 5.2 can be applied in combination with column generation techniques. We expect that the same applies to the reduced set of variables described in Section 5.3. Exploring this aspect in depth is a subject of future research.

Another interesting topic of future research is to further explore the reduced set of variables described in Section 5.3. In particular, it has been shown in [5] that further variables can be removed in coherence-based probabilistic reasoning by a refined characterization based on random gain. We expect that this result carries over to model-theoretic probabilistic reasoning.

Acknowledgments

This work has been partially supported by a DFG grant and the Austrian Science Fund under project N Z29-INF. We want to thank the reviewers for their useful comments.

References

[1] S. Amarger, D. Dubois, and H. Prade. Constraint propagation with imprecise conditional probabilities. In Proceedings UAI-91, pages 26–34. Morgan Kaufmann, 1991.

[2] V. Biazzo and A. Gilio. A generalization of the fundamental theorem of de Finetti for imprecise conditional probability assessments. Int. J. Approx. Reasoning, 24(2–3):251–272, 2000.

[3] V. Biazzo, A. Gilio, T. Lukasiewicz, and G. Sanfilippo. Probabilistic logic under coherence, model-theoretic probabilistic logic, and default reasoning. Technical Report INFSYS RR-1843-01-03, Institut für Informationssysteme, TU Wien, 2001. Available at ftp://ftp.kr.tuwien.ac.at/pub/tr/rr0103.ps.gz.

[4] V. Biazzo, A. Gilio, T. Lukasiewicz, and G. Sanfilippo. Probabilistic logic under coherence: Complexity and algorithms. Technical Report INFSYS RR-1843-01-04, Institut für Informationssysteme, TU Wien, 2001. Available at ftp://ftp.kr.tuwien.ac.at/pub/tr/rr0104.ps.gz.

[5] V. Biazzo, A. Gilio, and G. Sanfilippo. Computational aspects in checking of coherence and propagation of conditional probability bounds. In Proceedings WUPES-2000, pages 1–13, Jindrichuv Hradec, Czech Republic, 2000.

[6] V. Biazzo, A. Gilio, and G. Sanfilippo. Efficient checking of coherence and propagation of imprecise probability assessments. In Proceedings IPMU-2000, pages 1973–1976, Madrid, Spain, 2000.

[7] G. Boole. An Investigation of the Laws of Thought, on which are Founded the Mathematical Theories of Logic and Probabilities. Walton and Maberley, London, 1854. (Reprint: Dover Publications, New York, 1958.)

[8] A. Capotorti, L. Galli, and B. Vantaggi. How to use strong local coherence in an inferential process based on upper-lower probabilities. In Proceedings WUPES-2000, pages 14–28, Jindrichuv Hradec, Czech Republic, 2000.

[9] A. Capotorti and B. Vantaggi. An algorithm for coherent conditional probability assessments. In Proceedings IV Congresso Nazionale SIMAI, volume 2, pages 144–148, Giardini Naxos (Messina), Italy, 1998.

[10] G. Coletti. Coherent numerical and ordinal probabilistic assessments. IEEE Transactions on Systems, Man, and Cybernetics, 24(12):1747–1754, 1994.

[11] G. Coletti and R. Scozzafava. Characterization of coherent conditional probabilities as a tool for their assessment and extension. Journal of Uncertainty, Fuzziness and Knowledge-based Systems, 4(2):103–127, 1996.

[12] G. Coletti and R. Scozzafava. Exploiting zero probabilities. In Proceedings EUFIT-97, pages 1499–1503, Aachen, Germany, 1997.

[13] G. Coletti and R. Scozzafava. Coherent upper and lower Bayesian updating. In Proceedings ISIPTA-99, pages 101–110, Ghent, Belgium, 1999.

[14] G. Coletti and R. Scozzafava. Conditioning and inference in intelligent systems. Soft Computing, 3(3):118–130, 1999.

[15] D. Dubois, H. Prade, and J.-M. Touscas. Inference with imprecise numerical quantifiers. In Z. W. Ras and M. Zemankova, editors, Intelligent Systems, chapter 3, pages 53–72. Ellis Horwood, 1990.

[16] T. Eiter and T. Lukasiewicz. Default reasoning from conditional knowledge bases: Complexity and tractable cases. Artif. Intell., 124(2):169–241, 2000.

[17] R. Fagin, J. Y. Halpern, and N. Megiddo. A logic for reasoning about probabilities. Inf. Comput., 87:78–128, 1990.

[18] A. M. Frisch and P. Haddawy. Anytime deduction for probabilistic logic. Artif. Intell., 69:93–122, 1994.

[19] D. M. Gabbay and P. Smets, editors. Handbook of Defeasible Reasoning and Uncertainty Management Systems. Kluwer Academic, Dordrecht, Netherlands, 1998.

[20] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, New York, 1979.

[21] G. Georgakopoulos, D. Kavvadias, and C. Papadimitriou. Probabilistic satisfiability. J. Complexity, 4(1):1–11, 1988.

[22] A. Gilio. Probabilistic consistency of conditional probability bounds. In Advances in Intelligent Computing, LNCS 945, pages 200–209. Springer, 1995.

[23] A. Gilio. Precise propagation of upper and lower probability bounds in system P. In Proceedings of the 8th International Workshop on Non-monotonic Reasoning, Breckenridge, Colorado, USA, 2000.

[24] A. Gilio and R. Scozzafava. Conditional events in probability assessment and revision. IEEE Transactions on Systems, Man, and Cybernetics, 24(12):1741–1746, 1994.

[25] M. Goldszmidt and J. Pearl. On the consistency of defeasible databases. Artif. Intell., 52(2):121–149, 1991.

[26] P. Hansen, B. Jaumard, G.-B. D. Nguetse, and M. P. de Aragao. Models and algorithms for probabilistic and Bayesian logic. In Proceedings IJCAI-95, pages 1862–1868, 1995.

[27] B. Jaumard, P. Hansen, and M. P. de Aragao. Column generation methods for probabilistic logic. ORSA J. Comput., 3:135–147, 1991.

[28] T. Lukasiewicz. Probabilistic logic programming with conditional constraints. ACM Transactions on Computational Logic, 2001. To appear.

[29] T. Lukasiewicz. Efficient global probabilistic deduction from taxonomic and probabilistic knowledge-bases over conjunctive events. In Proceedings CIKM-97, pages 75–82. ACM Press, 1997.

[30] T. Lukasiewicz. Local probabilistic deduction from taxonomic and probabilistic knowledge-bases over conjunctive events. Int. J. Approx. Reasoning, 21(1):23–61, 1999.

[31] T. Lukasiewicz. Probabilistic deduction with conditional constraints over basic events. Journal of Artificial Intelligence Research, 10:199–241, 1999.

[32] N. J. Nilsson. Probabilistic logic. Artif. Intell., 28:71–88, 1986.

[33] R. Pelessoni and P. Vicig. A consistency problem for imprecise conditional probability assessments. In Proceedings IPMU-98, pages 1478–1485, Paris, France, 1998.

[34] R. Scozzafava. Subjective conditional probability and coherence principles for handling partial information. Mathware & Soft Computing, 3(1):183–192, 1996.

[35] P. Vicig. An algorithm for imprecise conditional probability assessments in expert systems. In Proceedings IPMU-96, pages 61–66, Granada, Spain, 1996.

[36] P. Walley. Statistical Reasoning with Imprecise Probabilities. Chapman and Hall, London, 1991.