Every normal logic program has a 2-valued Minimal Hypotheses semantics
Alexandre Miguel Pinto and Luís Moniz Pereira
Faculdade de Ciências e Tecnologia
Universidade Nova de Lisboa
October 11th, 2011
EPIA 2011, Lisbon
Outline
1 Introduction
2 Building a semantics
3 Minimal Hypotheses semantics
4 Conclusions and Future Work
Background (1/2)
Knowledge can be written in sentences.
Sentences can be translated into logic formalisms.
Logic Programs are one such formalism for KR.
LP = Normal Logic Rules + Integrity Constraints
Background (2/2)
Logic Rule (LR)
An LR is of the form h ← b1, ..., bn, not c1, ..., not cm, with m, n ≥ 0 and finite, and where the bi, cj are ground atoms.
Notation: head(r) = h and body(r) = {b1, ..., bn, not c1, ..., not cm}.
Normal Logic Program (NLP)
An NLP is a (countable) set of Normal Logic Rules (NLRs), where an NLR r is such that head(r) is a ground atom.
Integrity Constraint (IC)
An IC is an LR r such that head(r) = ⊥.
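The definitions above can be transcribed into a small data structure. The following Python sketch is ours, not the authors' (the `Rule` class and the `FALSUM` marker are assumed names): a rule is a head plus positive and negative body atoms, and an IC is just a rule whose head is the bottom symbol.

```python
from dataclasses import dataclass

FALSUM = "_|_"  # stands in for the bottom symbol in the IC definition

@dataclass(frozen=True)
class Rule:
    head: str                      # a ground atom, or FALSUM for an IC
    pos: frozenset = frozenset()   # the b_i: positive body atoms
    neg: frozenset = frozenset()   # the c_j: atoms under default negation

    def is_constraint(self):
        return self.head == FALSUM

# Example: the NLP {p <- q, not r.  q.} plus the IC <- p, not s.
program = [
    Rule("p", frozenset({"q"}), frozenset({"r"})),
    Rule("q"),                                  # a fact: empty body
    Rule(FALSUM, frozenset({"p"}), frozenset({"s"})),
]

assert not program[0].is_constraint()
assert program[2].is_constraint()
```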
The Problem (1/2)
Ideally, KR declarativity with LPs would allow full separation of concerns of rules, i.e., the KR role of NLRs ≠ the KR role of ICs:
Normal Logic Rules ⇔ define the space of candidate solutions
Integrity Constraints ⇔ prune out undesired candidates
Semantics is what determines which sets of beliefs (models) are candidates.
The Problem (2/2)
According to this ideal KR declarativity view...
...since only ICs are allowed to prune out candidate models...
...LPs with no ICs (i.e., NLPs) should always allow at least one model...
...otherwise NLRs would be allowed to play the role of ICs too.
This means: a semantics for NLPs should guarantee model existence.
Common semantics for NLPs
2-valued: Stable Models
Classically supported, but...
Lacks guarantee of model existence, undermining liveness: cannot provide semantics to arbitrarily updated/merged programs
Lacks relevance: does not allow for top-down query-answering proof procedures
Lacks cumulativity: does not allow use of tabling methods to speed up computations
3-valued: Well-Founded Semantics
Classically supported
Existence of model
Relevance
Cumulativity, but...
it is 3-valued
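The lack of model existence can be checked directly on small ground programs. Below is our own brute-force sketch, not the paper's code, that enumerates stable models via the Gelfond-Lifschitz reduct: the odd loop p ← not p gets no stable model at all, while a stratified rule gets exactly one.

```python
from itertools import chain, combinations

def stable_models(atoms, rules):
    """Enumerate stable models; rules are (head, pos_body, neg_body) triples."""
    def least_model(definite):
        # least model of a definite (negation-free) program, by fixpoint
        m, changed = set(), True
        while changed:
            changed = False
            for head, pos in definite:
                if pos <= m and head not in m:
                    m.add(head)
                    changed = True
        return m
    for cand in chain.from_iterable(combinations(atoms, k) for k in range(len(atoms) + 1)):
        cand = set(cand)
        # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
        # the candidate, then delete the remaining negative literals
        reduct = [(h, set(p)) for h, p, n in rules if not (set(n) & cand)]
        if least_model(reduct) == cand:
            yield cand

# p <- not p: no stable model, so Stable Models gives this NLP no meaning
print(list(stable_models({"p"}, [("p", set(), {"p"})])))       # -> []
# Stratified negation: q <- not r. has exactly one stable model
print(list(stable_models({"q", "r"}, [("q", set(), {"r"})])))  # -> [{'q'}]
```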
Goal: Desired Semantics (1/2)
2-valued semantics
Guarantee of model existence (e.g., for liveness in the face of updates)
Relevance: truth of a literal is determined solely by the literals in its call-graph
Cumulativity: atoms true in all models can be safely added to the program as facts
Semantics intuitions
Closed World Assumption: programs with stratified negation have only one model...
...which hints that non-determinism (more than one model) comes from "non-stratification" of negation.
I.e., negated literals in loops play an important semantic role.
Need for a different kind of assumption for non-stratified negation.
A model can be seen as a set of assumed hypotheses + their conclusions.
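The idea that a model is a set of assumed hypotheses plus their conclusions can be made concrete. In this toy sketch of ours (the function name is an assumption), fixing a set of atoms assumed false and forward-chaining the consequences yields one model per choice of assumption for the non-stratified loop a ← not b, b ← not a, which is exactly the advertised non-determinism.

```python
def conclusions(rules, assumed_false):
    """Forward-chain the conclusions of a set of negative hypotheses.
    rules are (head, pos_body, neg_body) triples over ground atoms."""
    derived, changed = set(), True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if pos <= derived and neg <= assumed_false and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Non-stratified loop: a <- not b.  b <- not a.
rules = [("a", set(), {"b"}), ("b", set(), {"a"})]

print(conclusions(rules, assumed_false={"b"}))  # -> {'a'}: one model
print(conclusions(rules, assumed_false={"a"}))  # -> {'b'}: another model
```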
Semantics principles
Maximal skepticism → atomic beliefs must have support (the CWA is a particular case).
Support:
Classical: belief in an atom requires a rule for it (head) with all body literals true
OR (new) Layered (Stratified): belief in an atom requires a rule for it (head) with its non-loop body literals true
Classical Support ⇒ Layered Support
Maximal skepticism → minimal atomic beliefs. Minimality:
of all atoms true in the model, OR
of hypotheses assumed true in the model
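The two support notions can be sketched as predicates over a 2-valued model. This is our own formulation, not the authors' code; in particular, the set of body atoms "in a loop" with the head is passed in explicitly rather than computed from the call-graph.

```python
def classically_supported(atom, rules, model):
    """Some rule for `atom` has its whole body true in `model`.
    rules are (head, pos_body, neg_body); model is the set of true atoms."""
    return any(
        head == atom and pos <= model and not (neg & model)
        for head, pos, neg in rules
    )

def layered_supported(atom, rules, model, loop_atoms):
    """As above, but body literals in a loop with the head are exempted."""
    return any(
        head == atom
        and (pos - loop_atoms) <= model
        and not ((neg - loop_atoms) & model)
        for head, pos, neg in rules
    )

# a <- not a: in the model {a}, `a` is not classically supported...
rules = [("a", set(), {"a"})]
print(classically_supported("a", rules, {"a"}))                # -> False
# ...but it is layered-supported, since `a` is in a loop with itself
print(layered_supported("a", rules, {"a"}, loop_atoms={"a"}))  # -> True
```

This also illustrates the implication stated above: any classically supported atom is trivially layered-supported, since exempting loop literals only weakens the test.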
Semantic choices
Positive hypotheses or negative hypotheses? Traditionally, maximum negative hypotheses, but...
Problem with negative hypotheses
a ← not a
Assuming the negative hypothesis not a leads to a contradiction: a.
Assuming the positive hypothesis a does not lead to a contradiction!
No explicit ¬a can be derived, since we are using NLPs.
Minimum positive hypotheses it is, then! In the example above a has no Classical Support... but it has Layered Support!
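The contrast between the two choices on a ← not a can be checked with a plain 2-valued model test. This is our own sketch (the `is_model` helper is an assumed name): the empty interpretation, i.e. the negative hypothesis not a, violates the rule, while assuming the positive hypothesis a satisfies it.

```python
def is_model(rules, interp):
    """interp = set of true atoms; every rule whose body is true
    must have a true head. rules are (head, pos_body, neg_body)."""
    return all(
        head in interp
        for head, pos, neg in rules
        if pos <= interp and not (neg & interp)
    )

rules = [("a", set(), {"a"})]  # a <- not a

# Negative hypothesis not a: the body fires, forcing a. Contradiction.
print(is_model(rules, set()))  # -> False
# Positive hypothesis a: the body is false, the rule is vacuously satisfied.
print(is_model(rules, {"a"}))  # -> True
```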
Outline
1 Introduction
2 Building a semantics
3 Minimal Hypotheses semantics
4 Conclusions and Future Work
Minimal Hypotheses semantics — Intuitive definition

Assumable hypotheses of a program: Hyps = the (positive) atoms of negative literals in loops.
Select a minimal subset H of Hyps, assumed true, such that H is enough to propagate 2-valuedness to all literals.
If so, an MH-model is M = the consequences of H.
Propagation of truth-values is via the deterministic, polynomial-time Remainder operator (a generalization of the TP operator).
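One way to read "atoms of negative literals in loops" operationally is via the dependency graph: find its strongly connected components and collect the atoms that are negated on edges internal to a component. This is a sketch of that reading, not the paper's formal definition; the rule encoding and the vacation program (used later in the talk) are borrowed as the test input.

```python
# Rules as (head, positive body, negative body); vacation program from the talk.
PROGRAM = [
    ("beach", [], ["mountain"]),
    ("mountain", [], ["travel"]),
    ("travel", ["passport_ok"], ["beach"]),
    ("passport_ok", [], ["expired_passport"]),
    ("expired_passport", [], ["passport_ok"]),
]

def sccs(nodes, edges):
    """Strongly connected components via transitive reachability
    (quadratic, but fine for tiny programs)."""
    reach = {n: {n} for n in nodes}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    comps = []
    for n in nodes:
        comp = {m for m in nodes if n in reach[m] and m in reach[n]}
        if comp not in comps:
            comps.append(comp)
    return comps

def assumable_hypotheses(program):
    """Atoms negated on an edge that lies inside a loop (an SCC)."""
    nodes = {h for h, _, _ in program} | {a for _, p, n in program for a in p + n}
    pos_edges = {(h, a) for h, p, _ in program for a in p}
    neg_edges = {(h, a) for h, _, n in program for a in n}
    hyps = set()
    for comp in sccs(nodes, pos_edges | neg_edges):
        for u, v in neg_edges:
            if u in comp and v in comp:  # negative edge inside a loop
                hyps.add(v)              # the negated atom is assumable
    return hyps

print(sorted(assumable_hypotheses(PROGRAM)))
```

For the vacation program every atom lies in some loop through a negative literal (the beach/mountain/travel cycle and the passport_ok/expired_passport cycle), so all five atoms are assumable hypotheses.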
MH semantics — Example

Vacation example

beach ← not mountain
mountain ← not travel
travel ← not beach, passport_ok
passport_ok ← not expired_passport
expired_passport ← not passport_ok
Assume the hypothesis expired_passport:

beach ← not mountain
mountain ← not travel
travel ← not beach, passport_ok
passport_ok ← not expired_passport
expired_passport ← not passport_ok
expired_passport

The rule for passport_ok is deleted (its body not expired_passport is now false):

beach ← not mountain
mountain ← not travel
travel ← not beach, passport_ok
expired_passport

With passport_ok false, the rule for travel is deleted:

beach ← not mountain
mountain ← not travel
expired_passport

With travel false, mountain becomes a fact:

beach ← not mountain
mountain
expired_passport

With mountain true, the rule for beach is deleted; the resulting model is {mountain, expired_passport}:

mountain
expired_passport
Alternatively, assume the hypothesis passport_ok:

beach ← not mountain
mountain ← not travel
travel ← not beach, passport_ok
passport_ok ← not expired_passport
expired_passport ← not passport_ok
passport_ok

The rule for expired_passport is deleted, and passport_ok is removed from travel's body:

beach ← not mountain
mountain ← not travel
travel ← not beach
passport_ok

The loop beach/mountain/travel remains; additionally assume the hypothesis travel:

beach ← not mountain
mountain ← not travel
travel ← not beach
passport_ok
travel

With travel true, the rule for mountain is deleted:

beach ← not mountain
passport_ok
travel

With mountain false, beach becomes a fact; the resulting model is {beach, travel, passport_ok}:

beach
passport_ok
travel
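Both walkthroughs can be reproduced by brute force. A hedged sketch: hypotheses are added as facts, 2-valued propagation is approximated by the alternating (well-founded) fixpoint of the Gelfond-Lifschitz Γ operator as a stand-in for the paper's Remainder operator, and minimality of H is enforced by enumerating hypothesis sets in increasing size.

```python
from itertools import combinations

# Vacation program: (head, positive body, negative body).
PROGRAM = [
    ("beach", [], ["mountain"]),
    ("mountain", [], ["travel"]),
    ("travel", ["passport_ok"], ["beach"]),
    ("passport_ok", [], ["expired_passport"]),
    ("expired_passport", [], ["passport_ok"]),
]
ATOMS = sorted({h for h, _, _ in PROGRAM})

def least_model(definite):
    """TP iteration to the least fixed point of a definite program."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite:
            if head not in m and all(a in m for a in pos):
                m.add(head)
                changed = True
    return m

def gamma(program, interp):
    """Least model of the Gelfond-Lifschitz reduct of program w.r.t. interp."""
    return least_model(
        [(h, pos) for h, pos, neg in program if not any(a in interp for a in neg)]
    )

def wf_true(program):
    """True atoms of the well-founded model (alternating fixpoint of gamma^2)."""
    t = set()
    while True:
        nxt = gamma(program, gamma(program, t))
        if nxt == t:
            return t
        t = nxt

def mh_models(program, hyps):
    """Minimal H such that adding H as facts makes propagation total (2-valued)."""
    minimal, models = [], []
    for r in range(len(hyps) + 1):
        for combo in combinations(hyps, r):
            h = set(combo)
            if any(m <= h for m in minimal):
                continue  # a smaller hypothesis set already succeeded
            extended = program + [(a, [], []) for a in sorted(h)]
            true = wf_true(extended)
            if gamma(extended, true) == true:  # total: nothing left undefined
                minimal.append(h)
                models.append(true)
    return models

models = mh_models(PROGRAM, ATOMS)
assert {"mountain", "expired_passport"} in models
assert {"beach", "travel", "passport_ok"} in models
```

The enumeration recovers both models derived on the slides: H = {expired_passport} yields {mountain, expired_passport}, and H = {passport_ok, travel} yields {beach, travel, passport_ok} (other minimal hypothesis sets for the odd loop yield further models).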
Reasoning as Query Answering
In the MH 2-v semantics, models are the deductiveconsequences of a specialized abduction for NLPsExistential query answering (a.k.a. brave reasoning)“is there a model (min. hyps+conseqs.) where query is true?”
More efficient with Relevant semantics, e.g. MH (not SM)1 Get relevant part of P for query; get Hyps for relevant part2 Get MH sub-model satisfying query
Relevance guarantees sub-model is extendible to a full one; no need to compute
full models
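Step 1 of the procedure can be sketched as a dependency closure over the query atom (a simplified illustration; the tuple encoding of rules is an assumption of this sketch, and the vacation program is reused as input).

```python
# Vacation program: (head, positive body, negative body).
PROGRAM = [
    ("beach", [], ["mountain"]),
    ("mountain", [], ["travel"]),
    ("travel", ["passport_ok"], ["beach"]),
    ("passport_ok", [], ["expired_passport"]),
    ("expired_passport", [], ["passport_ok"]),
]

def relevant_part(program, query_atom):
    """Collect the rules relevant to a query: those for the query atom and,
    transitively, for every atom their bodies depend on (pos. or neg.)."""
    needed, frontier, relevant = set(), {query_atom}, []
    while frontier:
        a = frontier.pop()
        needed.add(a)
        for head, pos, neg in program:
            if head == a:
                relevant.append((head, pos, neg))
                frontier |= (set(pos) | set(neg)) - needed
    return relevant

# Querying passport_ok only touches the passport_ok/expired_passport loop,
# so an MH sub-model can be computed from just those two rules.
print(relevant_part(PROGRAM, "passport_ok"))
```

A query on travel, by contrast, pulls in the whole program, since travel depends (through its loop) on every atom.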
Outline
1 Introduction
2 Building a semantics
3 Minimal Hypotheses semantics
4 Conclusions and Future Work
Conclusions

NLR ≠ ICs
NLPs must have a model
Minimum positive hypotheses: a generalization of maximum negative hypotheses
Minimum positive hypotheses → Layered Support (a generalization of Classical Support)
Minimal Hypotheses semantics:
2-valued semantics
relevance, cumulativity, existence (liveness in the face of updates)
generalizes SMs (all SMs are MH models)
Future Work

Implementation of an MH semantics-based system (ongoing)
Further comparisons: MH vs RSMs, MH vs PStable, others
Generalize the MH semantics to Extended Logic Programs
Applications: Semantic Web, KR (adding regular abduction), combination of 2-valued with 3-valued
Thank you
Disjunctive Logic Programs (1/2)

Disjunctive LPs (DisjLPs): rules are of the form

h1 ∨ ... ∨ hq ← b1, ..., bn, not c1, ..., not cm

with q ≥ 1, m, n ≥ 0 and finite, and where the hk, bi, cj are atoms.
Can be transformed into NLPs via the Shifting Rule:

a ∨ b ← Body   becomes   a ← not b, Body
                         b ← not a, Body

The Shifting Rule produces loops over default negation.
These can be problematic for SMs.
MH solves any loops by assuming minimal positive hypotheses.
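The Shifting Rule is mechanical enough to write down directly (a sketch; the tuple encoding of rules is an assumption of this illustration):

```python
def shift(heads, pos_body, neg_body):
    """Shifting Rule: h1 v ... v hq <- Body becomes q normal rules,
    each with the other disjuncts added as default-negated body literals."""
    return [
        (h, list(pos_body), list(neg_body) + [x for x in heads if x != h])
        for h in heads
    ]

# a v b <- Body (empty Body) becomes: a <- not b  and  b <- not a
print(shift(["a", "b"], [], []))
```

Applied to mutually overlapping disjunctions (as on the next slide), the shifted rules immediately form loops over default negation.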
Disjunctive Logic Programs (2/2)

Disjunctions / Shifting Rule / Loops

a ∨ b   a ← not b   b ← not a
b ∨ c   b ← not c   c ← not b
c ∨ a   c ← not a   a ← not c

Basis of the MH intuition: conceptually reverse the Shifting Rule.
Loops (of any kind) encode a disjunction of hypotheses.
Motivation for the Layered Negative Reduction

Variation of the vacation example

beach ← not mountain
mountain ← not travel
travel ← not beach

Three MH models: {beach, travel, not mountain}, {beach, not travel, mountain}, {not beach, travel, mountain}.

Variation of the vacation example with a fourth friend

beach ← not mountain
mountain ← not travel
travel ← not beach
beach

Two MH models: {beach, travel, not mountain}, {beach, not travel, mountain}.