
Applied Ontology 3 (2006) 1–3, IOS Press

A Common Ontology of Agent Communication Languages: Modelling Mental Attitudes and Social Commitments using Roles

Guido Boella a,∗, Rossana Damiano a, Joris Hulstijn b and Leendert van der Torre c

a Università di Torino, Italy - email: {guido,rossana}@di.unito.it
b Vrije Universiteit, Amsterdam, The Netherlands - email: [email protected]
c University of Luxembourg, Luxembourg - email: [email protected]

Abstract. There are two main traditions in defining a semantics for agent communication languages, based either on mental attitudes or on social commitments. These traditions, even though they share the idea of speech acts as operators with preconditions and effects, and of agents playing roles like speaker and hearer, rely on completely distinct ontologies. Not only does the mental attitudes approach refer to concepts like belief, desire, goal or intention while the social commitment approach is based on the notion of commitment, but the two approaches also refer to distinct speech acts, distinct types of dialogue games, and so on. In this paper, we propose a common ontology for both approaches based on public mental attitudes attributed to roles. Public mental attitudes avoid the problems of mentalistic semantics, such as the unverifiability of private mental states, and they allow the reuse of the logics and implementations developed for FIPA compliant approaches. Moreover, a common ontology of communication primitives allows for the construction of agents participating in dialogues with both FIPA and social commitment compliant agents. The challenge for agent communication languages whose semantics is based on public mental attitudes, such as our role-based semantics, is to define mappings from existing languages to the new one. In this paper we show how to map the two existing traditions to our role-based ontology of public mental attitudes and we show how it allows for a comparison. Moreover, to test the generality of our new ontology for agent communication languages, we show how to extend the social commitment approach to cope with persuasion dialogue too.

Keywords: Dialogue, speech acts, communication primitives, roles

*Corresponding author: Guido Boella, Università di Torino, Italy - email: [email protected]

1570-5838/06/$17.00 © 2006 – IOS Press and the authors. All rights reserved

2 G. Boella et al. / Common Ontology of ACL

1. Introduction

In this paper we consider the ontology of agent communication languages. Standardization plays an important role in agent technology, because agents developed by distinct companies have to be able to interact. The lack of standardization endangers the value of agent systems, in particular when considering the importance of open systems in which heterogeneous agents may enter and leave, and will have to communicate and interact together. One of the fundamental problems in these standardization efforts is the development of a common ontology. For example, the semantics of many communication languages is based on the BDI model, but the concepts of desire, goal and intention are not used uniformly, and desires and goals are often used interchangeably. Even if the same terminology is used, the meaning of some concepts may vary. For example, the notion of commitment means something different when requesting an action than when challenging a proposition, as we discuss later in Section 5. As another example, various approaches use preconditions, but the feasibility precondition of an inform action in FIPA states that an agent has to believe something before he can inform another agent, thus referring to the semantics, whereas the precondition of an accept action in a social commitment approach may state that there must have been a request preceding it, referring to the previously uttered speech acts (and, therefore, to the communication protocol).

Apart from distinctions disappearing on closer analysis, such as the distinction between desire and goal, there are also more fundamental distinctions between ontologies for agent communication. In particular, there are two main traditions in defining a semantics for communication languages. The first is based on mental attitudes and has been popularized by the standardization efforts of the Foundation for Intelligent Physical Agents (FIPA, 2002a). The second is based on the notion of social commitments (Singh, 2000; Fornara and Colombetti, 2004). These traditions share the idea that agents communicate by performing speech acts (Austin, 1962; Searle, 1969), and that these actions can be modeled as a kind of planning operators with preconditions and effects (Cohen and Perrault, 1979; Allen and Perrault, 1980). Longer stretches of dialogue can then be explained by plans that structure the attempts of participants to reach their joint goals, e.g. (Lochbaum, 1998). However, the two approaches do not share their communication primitives. In particular, the planning operators' preconditions and effects operate on distinct communication primitives: mental attitudes like beliefs, desires and goals on the one hand, and social commitments on the other.

Recently various approaches have been proposed to bridge the two main traditions in the semantics of agent communication languages, as visualized in Figure 1. On the left side of the agent are his beliefs and goals, which are used in approaches like FIPA to give feasibility preconditions and rational effects for speech acts such as inform and request. On the right hand side of the agent are his commitments, both as creditor and as debtor. A distinction has been made between propositional commitment, used for example by Hamblin (1970) and Walton and Krabbe (1995) for speech acts like assert and challenge, and action commitment, used to give a semantics to speech acts like request and propose (Singh, 2000; Fornara and Colombetti, 2004). The comparison given in Table 1, further discussed in Section 7 on related research, illustrates that the two approaches differ in most aspects, and consequently that building a bridge between them is a challenging problem.

[Figure 1: an agent with, on the left, his beliefs and goals, linked by feasibility preconditions and rational effects to speech acts such as inform and request (the mental attitudes side), and, on the right, his propositional commitments (assert, challenge) and action commitments (request, propose), held as debtor or creditor and governed by pre- and postconditions (the commitments side).]

Fig. 1. Comparing the FIPA and social commitments ontologies.


Table 1
Differences between FIPA and social commitments approaches.

                   FIPA                        Social Commitments
  Attitudes        beliefs, goals              propositional commitments, action commitments
  Speech acts      inform, request             assert, challenge
  Semantics        model theoretic semantics   operational semantics
  Dialogue types   information seeking         negotiation, persuasion
  Attitude         cooperative                 competitive
  Assumptions      sincerity, cooperativity    fear of social disapproval or sanction

The approaches that bridge the two traditions are based on a reinterpretation of the beliefs and goals in Figure 1. In contrast to FIPA, these mental attitudes are not interpreted as private mental attitudes, but as a kind of public mental attitudes, for example as grounded beliefs (Gaudou et al., 2006a) or as ostensible, or public, opinions (Nickles et al., 2006). The essential idea that meaning cannot be private, but is inter-subjective or based on a common ground, has been accepted for a long time in the philosophy of language. Witness, for example, Lewis (1969) or Stalnaker (2002) from the analytic tradition, who stress the importance of the common ground. The crucial point is that a private semantics does not make it possible for a language user to be objectively wrong about an interpretation (e.g., from a third person point of view). If meaning has to do with individual mental states, how could I dispute your mistakes? This is similar to the argument used by Pitt and Mamdani (1999) and Wooldridge (2000) many years later to criticize a semantics of agent communication languages based on private mental states. Because private mental states are inaccessible, such a semantics makes it impossible to prove objectively that a certain protocol does or does not have desirable properties. Strong cooperative assumptions have to be made about the communicating agents, which is implausible for many agent applications. To counter these objections, Singh (2000) and Fornara and Colombetti (2004), to name a few, have proposed a social semantics of agent communication languages. Essentially, the meaning of the speech acts is not based on mental attitudes, but on social commitments: the obligations and expectations participants have towards other language users. The notion of a social semantics originates with Habermas (1984, 1987) and Brandom (1994). The idea is that meaning is essentially constructed in a community of language users, and has social implications. For example, an agent is committed to a belief or opinion stated in public, in the sense that when he is questioned, he has to explain or even defend his views. None of these earlier approaches to bridging the gap between the two traditions explicitly attempts to solve the problem at the more abstract, ontological level; they look for a solution at a more detailed level.

In this paper we build an ontology for public mental attitudes in agent communication on the basis of the concept of role. This is a natural concept for agent communication, since communication in multi-agent systems is often associated with the roles agents play in the social structure of the systems (Zambonelli et al., 2003; Ferber et al., 2003), but existing approaches to the semantics of agent communication languages do not take roles into account, and thus far roles have not been associated with public mental attitudes. Roughly, the idea behind building the ontology on roles is that we associate beliefs and goals with a role instance played by an agent, and that these role-based mental attitudes have a public character. Thus, we attribute beliefs and goals to the roles played by the participants in the dialogue, rather than referring to the participants' private mental states, which are kept separate from the dialogue model. This idea raises several questions, such as what the advantages of a role-based ontology of public mental attitudes are over alternative foundations. The main problems of existing semantics based on public mental attitudes are that it seems difficult to map all existing agent communication approaches to the new proposals, and that they do not explain how agents can have multiple conversations at the same time with different users using different dialogue games or protocols. We propose that the various alternatives for public mental attitudes can be compared only when it is shown how existing approaches can be mapped onto them, and in this paper we show how this can be done for our own role-based semantics. In particular, in this paper we answer the following three questions:


1. What is a role-based ontology for agent communication?
2. How can the agent communication language of FIPA be mapped on the role-based ontology?
3. How can languages based on social commitments be mapped on the role-based ontology?

(a) How can action commitment be mapped to the role-based ontology?

(b) How can propositional commitment in persuasion be mapped to the role-based ontology?

We explain the main challenges in these four questions below. The ontology of agent communication languages answering the first question is a list of concept definitions and relations among them to describe the semantics of agent communication. As an illustrative example, in Figure 2 we visualize the speech act inform. This figure should be read as follows. The ovals x and y represent two agents playing respectively roles r1 and r2 in the dialogue game. We use indices i, j to range over role instances, so in the figure we have i = x : r1, j = y : r2. We associate beliefs (B) and goals (G) with the role instances, which are the public mental attitudes. The roles' beliefs and goals are public and are constructed by the introduction or removal of beliefs and goals by the speech acts. Agent x in role r1 informs agent y in role r2 that φ, and therefore φ is part of his public beliefs (the feasibility precondition of FIPA). Moreover, the reason that he informs the other agent is that he wants him to believe φ, and therefore B(j, φ) is part of the public goals of agent x in role r1. If agent y believes that agent x is reliable, then his own public beliefs will contain φ too. The public beliefs and goals of a dialogue participant in a particular role may differ in interesting ways from his private beliefs and goals, but if the agents are sincere, then they will believe φ too. The figure is explained in more detail in Section 2.1. The ontology has to clearly define the notion of a role, given the many interpretations of roles in computer science and in other sciences, and it has to distinguish a role from a role instance, a role playing agent, and the agent itself. The challenge in defining role is to explain how mental attitudes can be attributed to roles, and how these mental attitudes can be made public. As part of this problem, it has to be clarified which mental attitudes should be assigned to the role and which to the agent. Finally, the set of concepts has to be extended to a semantic language, with all the constructs and variables needed to describe a semantics of speech acts.
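The role-based reading of inform just described can be rendered as a small executable sketch. The names below (RoleInstance, inform, the trust flag) are our own illustrative choices, not an implementation given in the paper:

```python
from dataclasses import dataclass, field

@dataclass
class RoleInstance:
    """A role instance i = x : r1 holding public mental attitudes."""
    agent: str                                   # agent playing the role, e.g. "x"
    role: str                                    # role name, e.g. "r1"
    beliefs: set = field(default_factory=set)    # public beliefs of the instance
    goals: set = field(default_factory=set)      # public goals of the instance

def inform(speaker: RoleInstance, hearer: RoleInstance, phi: str,
           hearer_trusts_speaker: bool) -> None:
    """Uttering inform(i, j, phi): phi becomes a public belief of the speaker's
    role instance, and the goal that the hearer believe phi becomes public."""
    speaker.beliefs.add(phi)                     # presupposed public belief
    speaker.goals.add(("B", hearer.agent, phi))  # public goal G(i, B(j, phi))
    if hearer_trusts_speaker:                    # reliability, as in Figure 2
        hearer.beliefs.add(phi)

i = RoleInstance("x", "r1")
j = RoleInstance("y", "r2")
inform(i, j, "phi", hearer_trusts_speaker=True)
```

After the call, "phi" is among i's public beliefs and, since j trusts x, among j's public beliefs as well; the agents' private beliefs play no part in the update.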

A challenge in defining the mapping from FIPA to our role-based semantic language, to answer the second question, is to deal with the notion of precondition. For example, if an agent x informs another agent that φ, for example with inform(x, y, φ) in Figure 2, then in the original FIPA semantics agent x has a so-called feasibility precondition that he should already believe the proposition φ. If not, x would be insincere. With public mental attitudes, however, it is not the case that an agent must already publicly believe φ in his role i. To see why, assume that this precondition would hold. When two agents meet, there are no public attitudes besides the common knowledge associated with the roles the agents are playing. So in that case, an agent would not be able to make an inform! Moreover, if the information were already public, then there would be no need for an inform in the first place. Clearly, the notion of precondition has to be reinterpreted in the context of a public, role-based semantics.

[Figure 2: agents x and y (ovals) play role instances i and j, each with beliefs (B) and goals (G). The speech act inform(i, j, φ) places φ among i's public beliefs and B(j, φ) among i's public goals; a reliability link lets φ enter j's public beliefs, while sincerity links the role instances' attitudes to the agents' own beliefs and goals.]

Fig. 2. Agents' and roles' mental attitudes.

Another challenge of this mapping is that in FIPA notions like reliability and sincerity are built into the formal system. We have to relax these properties if we want to integrate the social commitment approach, and deal for example with non-cooperative types of dialogue like argumentation or negotiation. Therefore, we have to make properties like sincerity and cooperativity explicit in the formal language, to distinguish between different contexts of use. The following formula explains the use of a reliability concept. A detailed explanation is given in Section 3.

B(j, (G(i, B(j, p)) ∧ reliable(i, p)) → B(j, p)) (RL10)
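Read operationally, RL10 says: if j believes that i has the goal that j believe p, and j believes i is reliable about p, then j comes to believe p. A minimal forward-chaining sketch of this reading, in our own tuple notation (the paper gives no implementation), is:

```python
# Beliefs of agent j are tuples: ("G", i, ("B", j, p)) for "i has the goal
# that j believe p", ("reliable", i, p) for "i is reliable about p", and
# ("B", j, p) for "j believes p".

def apply_rl10(j_beliefs: set) -> set:
    """One application of rule RL10 over j's belief set."""
    derived = set(j_beliefs)
    for b in j_beliefs:
        if b[0] == "G":                          # b = ("G", i, inner)
            i, inner = b[1], b[2]
            if inner[0] == "B":                  # inner = ("B", j, p)
                p = inner[2]
                if ("reliable", i, p) in j_beliefs:
                    derived.add(("B", inner[1], p))  # conclude B(j, p)
    return derived

beliefs_j = {("G", "i", ("B", "j", "p")), ("reliable", "i", "p")}
```

Applying the rule to beliefs_j derives ("B", "j", "p"); dropping the reliability premise blocks the inference, which is exactly the point of making reliability explicit rather than built in.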

An initial problem of the mapping from the social commitment approach, to answer the third question, is that there is not a single social commitment approach, and the available approaches are not worked out in detail. As we stated above, there are approaches based on action commitment, and approaches based on propositional commitment. Moreover, the social commitment approaches focusing on action commitments (Singh, 2000; Fornara and Colombetti, 2004) are usually given an operational semantics, which may be considered weaker than the usual model theoretic or modal semantics used for the semantic languages of mental attitudes approaches. An additional challenge in using the proposed bridges between the two traditions to compare them is that we need either to translate one approach into the other, for all possible pairs of alternatives, or to have a common framework of reference. Moreover, it is necessary to be able to compare the single speech acts, which are not always defined for precisely the same purpose, and to understand which are more general or specific in one approach or the other. Finally, we need not only to define commitments but also their evolution and the rules describing their evolution.

As ontology language we take a logical language that is reminiscent of the FIPA semantic language (FIPA, 2002a), which we extend with roles and the other concepts we need. Regarding roles, we isolate a minimal notion of role, needed for agent communication. This minimal notion of role possesses what we call the characteristic property of roles: the fact that we can attribute mental attitudes to it. All other properties can be added to it, such as the use of roles in expectations or to define powers and responsibilities in organizations. The essential property remains the attribution of mental attitudes. In other words, instead of asking ourselves what a role is, we ask ourselves what the mental attitudes of a role are. We take the mental attitudes of a role as our ontological primitive, and derive role from it. The ontological question is then no longer "what is a role?", but rather "what are role-based mental attitudes?" To answer this question we consider why multiple sets of mental attitudes are needed, one for each role the agent plays, instead of a single set for the agent itself. We define role-based mental attitudes, and explain why they are public. We discuss how this builds upon the notion of role we developed in our earlier work on normative multiagent systems (Boella and van der Torre, 2004). For the mapping from FIPA to the role-based ontology we take an axiomatic approach, defining translation schemes from FIPA speech acts to role-based speech acts. For the mapping of the social commitment based approach to the role-based ontology we take a similar axiomatic approach, translating the states of the operational semantics to sets of public mental attitudes in the FIPA-like ontology language.

The paper is structured as follows. In Section 2 we introduce our role-based ontology for agent communication, and we extend it to a semantic language. In Sections 3 and 4 we map respectively FIPA-ACL and social semantics (Fornara and Colombetti, 2004) into the role-based semantics. In Section 5 we deal in particular with the difference between action commitments and propositional commitments, and we show how we can account for persuasion dialogues in our approach. In Section 6 we translate the Propose interaction protocol from FIPA and from social semantics to our role model, to compare the two different approaches. The paper ends with a discussion of related research and conclusions.


2. Role-based ontology for agent communication

In Section 2.1 we define the ontology we use in this paper, that is, the set of concepts and the relations among them. In Section 2.2 the set of concepts is extended to a semantic language, with the constructs and variables needed to describe a semantics of speech acts.

2.1. The ontology

With a role instance (R) we associate public mental attitudes, i.e., beliefs and goals which are known to everybody according to specified rules. So our role-based ontology for agent communication can be used to define the semantics of speech acts in terms of public mental attitudes. By the public mental attitudes of a role we do not mean those mental attitudes of the players of roles which refer to secrets that may be associated with a role instance, like knowing the value of a password. The roles' attitudes represent what the agent is publicly held responsible for: if the agent does not adhere to his role, he can be sanctioned or blamed. As long as an agent plays a game, he cannot deny what he has said, and it will be considered a public display of his position in the game, according to his role. Consider the example of a liar, who, once he starts lying, has to continue the dialogue consistently with what he said before, independently of his real beliefs. We consider beliefs (B) and goals (G), because we need to distinguish between the information of an agent and his motivations. We do not refer to knowledge, since beliefs are sufficient, and we do not refer to desires, since for our purposes they can be used interchangeably with goals. Finally, we do not refer to intentions, because goals in our use already have an intentional character.

We distinguish the mental attitudes of the role instance from the mental attitudes of the agent, of which we also consider only beliefs and goals, because we consider agents which can have multiple communication sessions at the same time. Thus, in the visualization of our model of a dialogue between two agents in Figure 2, there can be multiple instances of possibly different roles associated with each agent.

For example, consider an agent who informs another agent that his web services can be used by all other agents, informs a third agent that his services can only be used by agents with the appropriate certificates, requests a document from a fourth agent, and informs a fifth agent that he does not have a goal to obtain this document. However, though an agent may tell two agents incompatible stories, it seems much less useful to allow an agent to tell incompatible stories to the same agent. Intuitively, we can make another 'copy' of the agent for each session, and we may refer to each 'copy' as a role instance. There are also various other good reasons why one might want to distinguish the mental attitudes of a role instance from the mental attitudes of the agents themselves, to account for divergences between the private mental attitudes and the public ones, for example:

– If we can inspect the knowledge base of an agent, then we should attribute the knowledge not to one of his roles, but to the agent itself.

– We may wish to model that the agent is bluffing (telling something he is not sure of) or lying (telling something he believes is false).

– In a negotiation situation the agents have only the common goal of reaching an agreement, but there is no need for them to communicate truthful information or their real preferences to each other. In particular, in a negotiation a bid does not need to correspond to the actual reservation price of the agent.

– In persuasion, if the aim is to win a dispute, an agent can adopt a point of view "for the sake of the argument", in order to show its inconsistency or bad consequences, without actually endorsing the view himself.

– It is necessary to keep the motivations for playing a role apart from the rules of the game which affect the state of the roles. For example, an agent may be sincere (i.e., he really acts as expected from his role) out of pure cooperativity, or for fear of a sanction.
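The separation between per-session public attitudes and private attitudes can be sketched very simply; the representation below (a dictionary keyed by role instance) is our own illustration, not the paper's formalism:

```python
# Each role instance (agent, role, session) carries its own set of public
# beliefs; the agent's private beliefs live outside the dialogue model.

public = {}  # role instance -> set of public beliefs

def utter(instance: tuple, phi: str) -> None:
    """Asserting phi in a session commits that role instance to phi publicly."""
    public.setdefault(instance, set()).add(phi)

# Agent x tells two different agents incompatible stories in two sessions:
utter(("x", "provider", "s1"), "services open to all")
utter(("x", "provider", "s2"), "services need a certificate")

private_x = {"services need a certificate"}  # x's actual, private view
```

The two sessions hold mutually incompatible public stories, each internally consistent, and only the second happens to coincide with x's private beliefs; nothing in the dialogue model forces the public and private sets to agree.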

Since these mental attitudes of role instances are the foundation of our ontology, they have to be precisely defined. In particular, we have to clearly distinguish the mental attitudes attributed to a role instance from the mental attitudes attributed to the agent playing the role. The mental attitudes attributed to a role instance consist of two parts. First, they contain those public beliefs and goals following from uttering speech acts. Second, the mental attitudes of a role consist of commonly held beliefs about the attitudes of roles. For example, a buyer in a negotiation is expected to prefer lower prices. No other mental attitudes are attributed to role instances.

In particular, we consider two kinds of mental attitudes attributed to role instances as the result of speech acts. For example, consider an agent informing another one that "it is raining". In FIPA there is a feasibility precondition that the agent believes that it is raining, and the rational effect that the other agent believes it is raining, implying that the agent has the goal that the other agent will believe that it is raining. Our case is completely different, since we do not have as a precondition that the agent publicly believes that it is raining; if we did, then given the public nature of the mental attitudes attributed to role instances, he would not have to inform the other agent anymore. However, we still do have that from the uttering of the sentence "it is raining" we can derive that the agent publicly believes that it is raining, and that he has the goal that the other agent believes it is raining. The reason why we can infer that the agent publicly believes so, however, is not that it is a precondition of the action, but that it is presupposed when making such an action. Summarizing, we hold that the same two kinds of inferences made in FIPA can be made in our role-based model, but for completely different reasons. Moreover, the same holds for the other speech acts. As we illustrate in Section 3, this property will also enable us to reuse a variant of the FIPA language for our role-based semantic language.

Definition 1 Mental attitudes of role instances (R) consist of beliefs (B) and goals (G), and are precisely:

1. Consequences of speech acts (SA) on the mental attitudes of agents overhearing the uttering of the speech act, representing what the agent is publicly held responsible for, distinguishing between:

(a) Presuppositions of the speech act; these are public mental attitudes of which we can infer that they already held before the speech act was uttered;

(b) Other public mental attitudes following from the speech acts; of these we cannot infer that they already held before the speech act was uttered.

2. Common knowledge about the roles being played by the agents.

The constituent of our model that ties mental attitudes and speech acts together is the set of rules defining the effects of speech acts on the mental attitudes of role instances, which we call the constitutive rules (CR) of the dialogue game played by the agents. Moreover, we assume that the rules may depend on the dialogue type (DT), such as information seeking, negotiation and persuasion. Constitutive rules explain how roles can be autonomous, in the literal sense of the term, even if they are not agents acting in the world: (beliefs and goals of) roles cannot be changed except by using the constitutive rules associated with the role.

Note that the constitutive rules can act on the beliefs and goals associated with role descriptions because they are public mental attitudes. They exist only because roles, as descriptions, are collectively accepted by the real agents. Moreover, since the constitutive rules which change the beliefs and goals publicly attributed to roles are accepted by all agents, the beliefs and goals resulting from the application of a constitutive rule after a speech act are public too. This issue of the construction of social reality is discussed in (Boella and van der Torre, 2006a).
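A constitutive rule set CR, indexed by speech act and dialogue type, can be sketched as a lookup table of update functions. The rule shown for inform in information seeking follows the treatment above; all names are our own illustrative choices:

```python
# CR maps (speech act, dialogue type) to an update on a role instance's
# public attitudes, here stored as {"B": set_of_beliefs, "G": set_of_goals}.

def cr_inform_info_seeking(speaker_att: dict, hearer: str, phi: str) -> None:
    """Constitutive rule for inform in information seeking: the speaker's role
    instance publicly believes phi and publicly pursues B(hearer, phi)."""
    speaker_att["B"].add(phi)
    speaker_att["G"].add(("B", hearer, phi))

CR = {
    ("inform", "information seeking"): cr_inform_info_seeking,
    # rules for request, assert, challenge, etc. would be added analogously,
    # possibly with different effects per dialogue type
}

att_i = {"B": set(), "G": set()}
CR[("inform", "information seeking")](att_i, "j", "phi")
```

Because the table is keyed by dialogue type as well as speech act, the same utterance can have different public effects in, say, negotiation than in information seeking, which is what the dependence of CR on DT expresses.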

The constitutive norms may also describe the rules of the protocol the role is defined for, or the roles agents play in the social structure of the systems. The GAIA methodology (Zambonelli et al., 2003) proposes interaction rules to specify communication among roles, the ROADMAP methodology (Juan and Sterling, 2004) specifies the relations among roles in the social model, and in AALAADIN (Ferber et al., 2003) interaction is defined only between the roles of a group: "The communication model within a group can be more easily described by an abstracted interaction scheme between roles like the 'bidder' and the 'manager' roles rather than between individual, actual agents". Role names, like 'speaker' and 'addressee' or 'buyer' and 'seller', are often mentioned in the definition of agent communication languages. However, these terms only serve the function of binding individual agents to the speech acts in the protocol; they are not associated with a state which changes during the conversation as a result of the performed speech acts. If such constitutive norms are added, then the function played by roles in dialogue is similar to the function they play in an organization, where they define the powers of agents in the organization (Boella and van der Torre, 2007). As organizations are composed of constitutive rules defining the powers of the different roles, in dialogue, constitutive rules define the power of roles, i.e., the effects of the speech acts, according to the role played and the type of dialogue in which they are involved. As in organizations, it is possible that different roles are filled by the same agent, thus giving rise to ambiguities and conflicts (e.g., a command issued by a friend may not be effective, unless the friend is also the addressee's boss).

We build our ontology on the concept "mental attitudes of a role instance", rather than on the concept of role instance itself. However, the ontology also has to clarify what we mean by a role instance or by a role, given the many interpretations of roles in computer science and in other sciences. This is a non-trivial problem, since there are numerous incompatible notions of role available in the literature (Boella et al., 2005b; Loebe, 2005; Masolo et al., 2005; Herrmann, 2005). In other words, given that we have defined what the mental attitudes of a role are, our definition of role follows from it. Our "solution" to the ontological definition of role is to define a minimal notion of role for agent communication, which does not take the notion of role as primitive, but the notion of the mental attitudes of a role. The fact that in agent communication we are interested only in public information marks the crucial distinction with many existing definitions of roles, because some information concerning a role may not be public at all, but may have to be private. For example, we may associate a password with a role, which has to be known by the agent playing the role, but this password should not be public. Thus, in our approach the notion of role is not interpreted as role instances in the sense of what are sometimes called Role Enacting Agents (Dastani et al., 2003).

In this paper we are interested in role instances only, where a role is a generic class and a role instance is associated with an agent playing the role. However, a role instance is not the same as an agent playing a role, since it considers the role only, not the agent. It is important to distinguish the role from the role instance, the role playing agent, and the agent itself.

Definition 2 The characteristic property of a role instance is that it is the channel end of communication and contains mental attitudes representing the state of the session of interaction. It may or may not refer to other properties of roles, such as the expertise needed to play the role, powers such as rights associated with the role, expectations associated with the role, and so on.

Thus, the elements composing our ontology are the following:

Definition 3 A role-based ontology for agent communication 〈A, R, RN, PL, B, G, AD, SA, DT, CR〉 is a tuple where:

– A is a set of agents in the interaction, e.g., x, y.
– R is a set of role instances.
– RN is a set of roles, like r1, r2, ...
– PL : A × RN → R is a role playing function, such that i = PL(x, r1) is a role instance.
– B and G are sets of beliefs and goals respectively, either of individual agents or of role instances.
– AD : A ∪ R → B ∪ G is a function associating mental attitudes with agents and role instances.
– SA is a set of speech acts: assert, request, etc.
– DT is a set of dialogue types, such as information seeking, negotiation and persuasion.
– CR : SA × DT → L(B, G) are the constitutive rules of dialogue games specifying how speech acts relative to a dialogue type affect the attitudes of the role instances, and create institutional facts. The effects are formalized in a semantic language L(B, G) (for an example, see Definition 5).
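The tuple of Definition 3 can be sketched in code. The following is a minimal illustration of the structural part of the ontology (agents, roles, role instances, the role playing function PL and the attitude assignment AD); all class and variable names are our own and not part of the formalization.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Agent:
    name: str                    # an element of A, e.g. "x"

@dataclass(frozen=True)
class RoleName:
    name: str                    # an element of RN, e.g. "r1"

@dataclass(frozen=True)
class RoleInstance:
    player: Agent                # the agent filling the role
    role: RoleName               # i = PL(x, r1) pairs these two

def PL(x: Agent, r: RoleName) -> RoleInstance:
    """The role playing function PL : A x RN -> R."""
    return RoleInstance(player=x, role=r)

@dataclass
class Attitudes:
    beliefs: set = field(default_factory=set)   # a subset of B
    goals: set = field(default_factory=set)     # a subset of G

# AD associates mental attitudes with both agents and role instances.
AD: dict = {}

x = Agent("x")
r1 = RoleName("r1")
i = PL(x, r1)
AD[i] = Attitudes()   # the public state of the session (role instance)
AD[x] = Attitudes()   # the agent's own, possibly private, attitudes
```

Keeping AD[i] and AD[x] as separate entries mirrors the paper's distinction between the public attitudes of the role instance and the private attitudes of its player.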

Finally, we explain in which sense the mental attitudes of role instances of the kind defined in Definition 3 are public. This is based on our notion of public communication, defined as follows. Communication is public when every agent overhears the speech acts and has the same prior knowledge of roles. Consequently, for public communication, the mental attitudes of roles are public.

G. Boella et al. / Common Ontology of ACL 9

[Figure 3 here: a diagram with nodes for the System, Agent, Agent Description, Role, Speech Act, Dialogue Type, Constitutive Rules, and the beliefs and goals of agents and roles, connected by the relations member, enacts, has, determines, apply to, inheritance relation and make public.]

Fig. 3. The role-based ontology of communication.

Definition 4 Communication is public, if and only if:

1. every agent overhears speech acts, and has the same knowledge of roles;
2. every agent knows the beliefs typically attributed to roles.

2.2. The semantic language

We define a simple formal language, inspired by FIPA's (2002a) specification language and its predecessors (Sadek, 1990; Sadek et al., 1997). In this way we keep the language simple and understandable for a large community, even if we inherit its limitations.

Definition 5 (Language) Given a set of propositions L and basic actions ACT:

q := p | ¬q | q ∨ q | q ∧ q | q → q | B(m, q) | G(n, q) | done(act) | done(m, act) | send(x, y, SA(i, j, q)) | SA(i, j, q) | i = x : r1
act := action | act; act

where p ∈ L, x, y ∈ A, i, j ∈ R, m, n ∈ A ∪ R, r1 ∈ RN and action ∈ ACT. B and G represent the beliefs and goals.

Note that according to this definition also roles can have beliefs and goals.

The done(act) and done(m, act) expressions denote the execution of an action, specifying or not the agent of the action. We will use done(act) later when act is a joint action, for example, sell(i, j) = give(i); pay(j). Here we use a minimal definition of actions which allows us to cover the examples.

We use B(i, p) when i ∈ R as shorthand for B(x, B(i, p)) ∧ B(y, B(i, p)), where x and y play a role in the dialogue.
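The formulas of Definition 5 can be represented as nested tuples, which we will also use in later sketches. The following illustrates the shorthand just introduced: a belief attributed to a role expands into beliefs of both dialogue participants. The encoding and all names are our own illustration.

```python
# Constructors for a fragment of the language of Definition 5,
# represented as nested tuples.
def B(m, q): return ("B", m, q)          # belief operator
def G(n, q): return ("G", n, q)          # goal operator
def conj(a, b): return ("and", a, b)     # conjunction

ROLES = {"i", "j"}                       # role instances in the dialogue
PLAYERS = {"i": "x", "j": "y"}           # the agents playing them

def expand_role_belief(formula):
    """B(i, p) with i a role is shorthand for
    B(x, B(i, p)) & B(y, B(i, p)), where x and y play the roles."""
    op, m, q = formula
    if op == "B" and m in ROLES:
        x, y = PLAYERS["i"], PLAYERS["j"]
        return conj(B(x, formula), B(y, formula))
    return formula
```

For instance, expand_role_belief(B("i", "p")) yields the conjunction of the two agent-level beliefs, while a belief already attributed to an agent is returned unchanged.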

We add the following axioms RL in a dialogue game, representing rationality constraints; they are mostly inspired by FIPA, apart from those concerning public mental states and the distribution of goals in joint plans. For all roles i, j:

– Each role has correct knowledge about its own mental states; in particular, its beliefs about its goals are correct. This axiom corresponds to FIPA's (2002) schema φ ↔ Biφ, where φ is governed by an operator formalising a mental attitude of agent i; for all role instances i ∈ R:

(B(i, G(i, p)) → G(i, p)) ∧ (B(i,¬G(i, p)) → ¬G(i, p)) (RL1)

(B(i, B(i, p)) → B(i, p)) ∧ (B(i,¬B(i, p)) → ¬B(i, p)) (RL2)

10 G. Boella et al. / Common Ontology of ACL

– Since the beliefs of the roles are public, each role has complete knowledge about the other roles' beliefs and goals (for i ≠ j and i, j ∈ R):

(B(j, p) ↔ B(i, B(j, p))) ∧ (¬B(j, p) ↔ B(i,¬B(j, p))) (RL3)

(G(j, p) ↔ B(i, G(j, p))) ∧ (¬G(j, p) ↔ B(i,¬G(j, p))) (RL4)
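The left-to-right direction of (RL3) and (RL4) can be read operationally: whenever an attitude of a role is in the public state, every other role believes it. A minimal sketch, using the tuple encoding of formulas; the function name and encoding are ours.

```python
# A sketch of the public-knowledge axioms (RL3)-(RL4), left to right:
# every role's beliefs and goals are mirrored as beliefs of every
# other role. Formulas are nested tuples ("B", m, q) / ("G", n, q).

def close_public(state, roles):
    """Add B(i, f) for every attitude f = B(j, p) or G(j, p) of
    another role j, as sanctioned by (RL3) and (RL4)."""
    state = set(state)
    for f in list(state):
        if f[0] in ("B", "G") and f[1] in roles:
            for i in roles:
                if i != f[1]:
                    state.add(("B", i, f))
    return state

# If role j publicly has the goal p, role i believes that j has it.
s = close_public({("G", "j", "p")}, {"i", "j"})
```

This closure is what makes the role-level state a genuinely shared record of the dialogue.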

– Forwarding a message is a way to perform a speech act:

send(x, y, SA(i, j, p)) → SA(i, j, p) (RL5)

where SA ∈ {inform, request, propose, . . .}, for all x, y ∈ A. Note that the sender of the message x is an agent playing the role i in the speech act and the receiver agent y plays the role j.

– Each agent is aware of which speech acts have been performed, where i = x : r1, j = y : r2 and m ∈ {i, j}:

send(x, y, SA(i, j, p)) → B(m, send(x, y, SA(i, j, p))) (RL6)

– A rationality constraint concerning the goal that other agents perform an action: if agent i wants action act to be done by agent j, then agent i has the goal that agent j has the goal to do act:

G(i, done(j, act)) → G(i, G(j, done(j, act))) (RL7)

– Some FIPA axioms concern the execution of complex actions, for example Property 1: G(i, p) → G(i, a1|...|an), where a1|...|an are feasible alternatives to achieve p. In a similar vein, we add two axioms concerning the distribution of tasks. Taking inspiration from Cohen and Levesque (1990), we add an axiom to express that if an agent intends a joint action, then it intends that each part is done at the right moment. If act = act1; ...; actn, where ";" is the sequence operator, then act is a joint action:

G(i, done(act)) → (done(act1(i1); . . . ; actk(ik)) → G(i, done(actk+1(i)))) (RL8)

If the next step is actk+1(j) with i ≠ j, then by axiom RL7:

G(i, done(act)) → (done(act1(i1); . . . ; actk(ik)) → G(i, G(j, done(actk+1(j))))) (RL9)

Each agent in the group wants that the others do their part.
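The distribution axioms (RL8) and (RL9) can be sketched as a function that, given a joint action and the number of steps already performed, returns the goal the intending role adopts next. The representation of joint actions as lists of (actor, act) pairs is our own simplification.

```python
# A sketch of (RL8)/(RL9): once the first k steps of a joint action
# are done, a role intending the joint action adopts the goal that
# the next step be performed; by (RL7), when the next actor is
# someone else, this is a goal about that actor's goal.

def next_goal(joint_action, done_steps, i):
    """joint_action: list of (actor, act) pairs in sequence order.
    done_steps: number of initial steps already performed.
    Returns the goal adopted by role i for the next step."""
    if done_steps >= len(joint_action):
        return None                              # nothing left to intend
    actor, act = joint_action[done_steps]
    if actor == i:                               # (RL8): own step
        return ("G", i, ("done", actor, act))
    # (RL9): a goal that the other actor adopts the goal to act
    return ("G", i, ("G", actor, ("done", actor, act)))

# The paper's example: sell(i, j) = give(i); pay(j)
sell = [("i", "give"), ("j", "pay")]
```

After give(i) is done, next_goal(sell, 1, "i") yields i's goal that j wants to pay, matching (RL9).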

Note that in the role model it is not assumed that the role's mental attitudes correspond to the mental attitudes of their players. This assumption can be made only when an agent is sincere, see below.

These axioms deserve some discussion. Axiom (RL5) is used to connect individual agents, which can access resources and send messages, with the public roles that they play. Based on the speech act SA, a number of pre- and postconditions can be inferred, as detailed below. Since messages on a certain channel are public, and the constitutive rules CR are public, all inferences on the basis of speech acts are public too. This motivates axioms (RL3) and (RL4). From left to right: if one can infer, for example, a belief B(i, ¬p) of role i by some rule, then all agents in their roles j can infer that B(j, B(i, ¬p)). From right to left: if some agent j believes something about i's beliefs, then i actually has that belief.

Only those mental attitudes are publicly attributed to the role which follow directly from the agent's communication or from commonly held beliefs about the attitudes of particular roles; e.g., a buyer in a negotiation is expected to prefer a lower price. All other mental attitudes are attributed to the agent itself. As a consequence, professional secrets like passwords, which are related to playing a role, should under this interpretation not be modeled as beliefs or goals of the role, but as beliefs and goals of the individual agent. This marks a difference with other approaches to roles, such as Dastani et al. (2003).

We explain the distinction in Figure 2. The constitutive rules of the communication game operate on the mental states of the role instances. Because of the feasibility precondition, an inform adds a proposition p to the beliefs of the speaker i, together with a goal that j will come to believe p. If the hearer j believes that the speaker is reliable, rule (RL10) transfers the belief to the hearer. In a similar way, a successful request adds an intention to the goals of the hearer. This happens only when we have an axiom such as (RL10), stating that the hearer is cooperative. Cooperativity and reliability are role-role relationships that depend on the social context in which an interaction takes place. For some interaction types, like information seeking or deliberation, these assumptions make sense. For other interaction types, like persuasion or negotiation, these properties will need to be altered or dropped.

B(j, G(i, B(j, φ))) ∧ B(j, reliable(i, φ)) → B(j, φ)
B(j, G(i, φ)) ∧ B(j, cooperative(j, i)) → G(j, φ) (RL10)

In addition to role-role properties, there are also role-agent properties that regulate the transfer of beliefs between the role instance and the agent. In particular, an external assumption like sincerity, as in rules (RL11) and (RL12), attributes the role's beliefs or goals also to the agents themselves. Such assumptions are not part of the game.

B(x : r, φ) ∧ sincere(x, r) → B(x, φ) (RL11)

G(x : r, φ) ∧ sincere(x, r) → G(x, φ) (RL12)

The beliefs of the role are also attributed to the agent if he is considered sincere. Thus, the agent can be considered sincere in one role and insincere in another.

Sincerity is part of a larger class of agent-role properties that regulate the possible transfer of beliefs and goals between role instances and agents. An interesting example has to do with privacy and secrecy (the example is due to Huib Aldewereld, personal communication). Consider an intelligent agent application that makes inferences on the basis of police records. To this end, it automatically connects to various police databases throughout the country, using some ACL. However, a police officer is not allowed to see personal characteristics of suspects, unless he or she is assigned to that specific case. Therefore, when displaying the results of the inference in a public ACL message, the agent should filter out any personal characteristics. By contrast, if the same software is used by a police officer assigned to the case, no such privacy filter has to be installed. Thus, a filter rule may block the transfer from the individual agent's beliefs to the role's beliefs.
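The two transfer directions just discussed can be sketched as two small functions: role-to-agent transfer conditioned on sincerity, as in (RL11), and agent-to-role transfer subject to a privacy filter, as in the police example. The function names and the beliefs used in the example are our own illustration.

```python
# A sketch of agent-role transfer. Beliefs are kept as opaque labels.

def transfer_role_to_agent(role_beliefs, sincere):
    """(RL11): only if the player is sincere in this role are the
    role's beliefs also attributed to the agent."""
    return set(role_beliefs) if sincere else set()

def transfer_agent_to_role(agent_beliefs, blocked):
    """Filter rule: beliefs matching the filter (e.g. personal data
    of suspects, passwords) never reach the role's public beliefs."""
    return {b for b in agent_beliefs if b not in blocked}

# The police example: the suspect record stays private to the agent.
private = {"password", "suspect_record"}
public = transfer_agent_to_role({"case_open", "suspect_record"}, private)
```

An agent can thus be sincere in one role (full transfer) and insincere or filtered in another, using the same private belief base.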

In the next two sections, we show how the semantics of speech acts defined by FIPA and by social semantics can be expressed in terms of roles.

3. From FIPA to roles

The semantics of agent communication languages provided by FIPA (2002a) is paradigmatic of the models based on mental attitudes. In FIPA, communicative acts are defined in terms of the mental state of the BDI agent who issues them. The bridge between the communicative acts and the behavior of agents is provided by the notions of rational effect and feasibility preconditions. The rational effect is the mental state that the speaker intends to bring about in the hearer by issuing a communicative act, and the feasibility preconditions encode the appropriate mental states for issuing a communicative act. To guarantee communication, the framework relies on intention recognition on the part of the hearer.

The main drawback of FIPA resides in the fact that mentalistic constructs cannot be verified (Maudet and Chaib-draa, 2002; Wooldridge, 2000). So, they are not appropriate in situations in which agents may be insincere or non-cooperative, as in argumentation or negotiation. In contrast, meaning should be public, as Singh (2000), Walton and Krabbe (1995), and Fornara and Colombetti (2004) claim.

In the following, we provide a role-based semantics for FIPA communicative acts by proposing a translation using the constitutive rules CR of our dialogue game. Note, however, that the beliefs and goals of roles resulting from the translation have different properties than the beliefs and goals of the agents referred to by FIPA semantics.

The structure of the constitutive rules of dialogue reflects the structure of FIPA operators: the speech act is mapped to the antecedent, while feasibility preconditions (FP) and rational effects (RE) are mapped to the consequent. This methodology relies on FIPA principles 4 and 5 (p. 34), according to which, when a speech act is executed, its feasibility preconditions and its rational effects must be or become true. So, after a speech act, its preconditions and its rational effect are added to the roles' beliefs and goals:

B(i, done(act) ∧ agent(j, act) → G(j, RE(act))) (Property 4 of FIPA) (RL13)

B(i, done(act) → FP (act)) (Property 5 of FIPA) (RL14)

where FP stands for feasibility preconditions, RE for rational effects, and i and j are the role enacting agents in the conversation. In the role-based semantics these axioms apply to roles, so the belief that the preconditions of the speech act hold is publicly attributed to the role of the agent who is speaking, abstracting from the actual beliefs of the agent. For the sake of simplicity, we do not report in this paper preconditions concerning uncertain beliefs (the modal operator Uif in FIPA), but the extension to them is straightforward.

Given these preliminaries, we are now in a position to give a number of translation axioms for the various speech acts that are important in the FIPA approach. As usual, we start with inform.

3.1. Inform

The purpose of an inform act for the speaker is to get the hearer to believe some information. The original FIPA definition reads (almost) as follows:

< i, inform(j, p) >
FP: B(i, p) ∧ ¬B(i, B(j, p) ∨ B(j, ¬p))
RE: B(j, p)

Using our interpretation, the first precondition is modeled by the rule:

B(i, inform(i, j, p) → B(i, p)) (CR15)

The second precondition is modeled by:

B(i, inform(i, j, p) → ¬B(i, B(j, p) ∨ B(j, ¬p))) (CR16)

Remember that in our interpretation B(j, p) → B(i, B(j, p)), so these preconditions can now be used to infer parts of the public state of the dialogue. The rational effect is accounted for by

B(i, inform(i, j, p) → G(i, B(j, p))) (CR17)

To model FIPA's remark that "Whether or not the receiver does, indeed, adopt belief in the proposition will be a function of the receiver's trust in the sincerity and reliability of the sender" (p. 10), we need the following rule:

B(j, G(i, B(j, p)) ∧ reliable(i, p)) → B(j, p) (CR18)

As illustrated in the previous section, we keep reliability and sincerity separate. Sincerity is not assumed, but refers to a relationship between the role's beliefs and the player's private beliefs, which in some circumstances may, or may not, hold. Reliability is a property which is, or is not, attributed by the hearer to the public role of the speaker. We do not want to comment further on this issue here. See e.g. Dastani et al. (2004); Demolombe (2001); Liau (2003) for more about reliability and trust.
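The constitutive rules (CR15)-(CR18) can be sketched as updates to the public dialogue state. The paper states the rules declaratively; the forward-chaining style and tuple encoding below are our own illustration.

```python
# A sketch of how an observed inform updates the public state of the
# roles, per (CR15)-(CR17), and how reliability licenses belief
# adoption, per (CR18). Formulas are nested tuples.

def apply_inform(state, i, j, p):
    """state: set of formulas publicly attributed to the roles."""
    state = set(state)
    state.add(("B", i, p))                                    # CR15
    state.add(("not", ("B", i, ("or", ("B", j, p),
                                      ("B", j, ("not", p))))))  # CR16
    state.add(("G", i, ("B", j, p)))                          # CR17
    return state

def apply_reliability(state, i, j, p, reliable):
    """CR18: a hearer who takes the speaker to be reliable on p
    adopts the belief p."""
    if reliable and ("G", i, ("B", j, p)) in state:
        state = state | {("B", j, p)}
    return state
```

Since the rules and the channel are public, any overhearer can run the same updates and reconstruct the same role-level state.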

3.2. Request and Propose

To allow easy comparison with the social commitment approach, we now present the axiomatizations of directive speech acts. We start with request.


The purpose of a request is to get the hearer to do some act. This requires that the speaker does not already believe that the hearer has such an intention1. Regarding notation, we have tried to stay close to the FIPA conventions, but we deviate from their practice of listing the agent who performs an action separately. Instead, in the following we will always write the 'agent' of an action as part of the done predicate: done(j, act) ≡ done(act) ∧ agent(j, act). Where we need to keep the agent of the action unspecified, this will be indicated specifically.

< i, request(j, act) >
FP: ¬B(i, G(j, done(j, act)))
RE: done(j, act)

In our interpretation, the precondition is then modeled as follows2:

B(j, request(i, j, done(j, act)) → ¬B(i, G(j, done(j, act)))) (CR19)

The intended effect is modeled by

B(j, request(i, j, done(j, act)) → G(i, done(j, act))) (CR20)

Analogously to reliability for inform, there is a separate rule to express that only a cooperative hearerwill automatically adopt the apparent goal of the speaker:

B(j, G(i, done(j, act)) ∧ cooperative(j, i) → G(j, done(j, act))) (CR21)

Note that G(i, done(j, act)) here is not derived from a reconstruction of i's goals by j. It is part of the public state of the dialogue, triggered for example by a request. In the FIPA approach, requests are combined into larger dialogues using the Request Interaction Protocol (FIPA, 2002b), which contains the speech acts agree and refuse as possible responses. It also contains speech acts for informing the requester about possible success or failure of the requested action, which we leave out on this occasion.

For the sake of simplicity, we also drop the conditions under which requests or proposals are agreed to be carried out; these are better dealt with in Section 4 below, where we explain conditional proposals, which are easier to understand by starting from the social commitments approach. But do note that we consistently use 'done(i, act)' instead of 'act' to refer to the content of an action commitment. Such a propositional representation of the effect of an action makes it easier to extend the approach to conditional proposals later, using material implication.

< i, agree(j, act) > ≡ < i, inform(j, G(i, done(i, act))) >
FP: B(i, G(i, done(i, act))) ∧ ¬B(i, B(j, G(i, done(i, act))) ∨ B(j, ¬G(i, done(i, act))))
RE: B(j, G(i, done(i, act)))

Here is the translation of the feasibility preconditions. Note that in almost all of these cases in which the definition is based on inform, the feasibility preconditions are superfluous, when we assume that agents are reliable with respect to their own mental states. Moreover, the second precondition is usually canceled by the intended effect.

B(j, agree(i, j, done(i, act)) → B(i, G(i, done(i, act)))) (CR22)

B(j, agree(i, j, done(i, act)) → ¬B(i, B(j, G(i, done(i, act))) ∨ B(j, ¬G(i, done(i, act))))) (CR23)

And this is our interpretation of the intended effect.

B(j, agree(i, j, done(i, act)) → G(i, done(i, act))) (CR24)

1Note that we drop the precondition from FIPA that i thinks it feasible for j to perform act (p. 29). One reason is that we do not have a proper semantics for the feasibility construct.

2Note that unlike the speech act definitions based on inform below, this one does not have a sincerity precondition: B(i, G(i, done(j, act))). This does not matter, because agents are considered to be reliable about their own goals, so we can derive this condition whenever it is needed.


This interpretation of the RE needs some discussion. For our interpretation of the RE of inform above in (CR17), we translated the RE to a goal of the speaker. In that case, we would end up with something like: B(j, agree(i, j, done(i, act)) → G(i, B(j, G(i, done(i, act))))). So it is the communicative intention of i that j will believe that i wants to do the action. Following Sadek et al. (1997), we can simplify this kind of G(i, B(j, G(i, ..)))-formula into G(i, ..), if we assume that the hearer takes the speaker to be reliable concerning reports about its own mental states. By rule (CR18) we derive (CR24) above. In the rest of the section, we will continue this practice of putting the effect on the hearer directly.

In the definition of refuse we also leave out some additional idiosyncrasies of FIPA, like the fact that a refusal is originally expressed as a disconfirm, which is a special kind of inform3, the fact that FIPA requires reasons to be provided for a refusal, and the fact that a refusal is supposed to indicate that it is infeasible for the requested agent to perform the requested act. This last move makes sense, because we also dropped this feasibility requirement from requests. So in fact we have greatly simplified and reinterpreted refuse, as the mere opposite of an agree.

< i, refuse(j, act) > ≡ < i, inform(j, ¬G(i, done(i, act))) >
FP: B(i, ¬G(i, done(i, act))) ∧ ¬B(i, B(j, ¬G(i, done(i, act))) ∨ B(j, G(i, done(i, act))))
RE: B(j, ¬G(i, done(i, act)))

These are the preconditions.

B(j, refuse(i, j, done(i, act)) → B(i,¬G(i, done(i, act)))) (CR25)

B(j, refuse(i, j, done(i, act)) → ¬B(i, B(j, ¬G(i, done(i, act))) ∨ B(j, G(i, done(i, act))))) (CR26)

Here is the intended effect.

B(j, refuse(i, j, done(i, act)) → ¬G(i, done(i, act))) (CR27)

Since it is of particular importance for our examples, we also illustrate the propose speech act. A proposal can be combined with accept-proposal or reject-proposal in the so-called Propose Interaction Protocol (FIPA, 2002b). Since propose is modeled by means of an inform act in FIPA, its definition in role-based semantics derives from the definition of the inform provided above:

< i, propose(j, act) > ≡ < i, inform(j, G(j, done(i, act)) → G(i, done(i, act))) >
FP: B(i, G(j, done(i, act)) → G(i, done(i, act))) ∧ ¬B(i, B(j, G(j, done(i, act)) → G(i, done(i, act))) ∨ B(j, ¬(G(j, done(i, act)) → G(i, done(i, act)))))
RE: B(j, G(j, done(i, act)) → G(i, done(i, act)))

Note that act must be carried out by i, otherwise we would call it a request and not a propose. These are the preconditions.

B(j, propose(i, j, done(i, act)) → B(i, G(j, done(i, act)) → G(i, done(i, act)))) (CR28)

B(j, propose(i, j, done(i, act)) → ¬B(i, B(j, G(j, done(i, act)) → G(i, done(i, act))) ∨ B(j, ¬(G(j, done(i, act)) → G(i, done(i, act)))))) (CR29)

Here is the intended effect. Remember that for our interpretation of the RE of speech acts derived from inform, we use the simplifying assumption that the hearer takes the speaker to be reliable concerning reports about its own mental states.

B(j, propose(i, j, done(i, act)) → (G(j, done(i, act)) → G(i, done(i, act)))) (CR30)

We also illustrate how the accept-proposal and reject-proposal speech acts are modeled, even though they are defined in terms of inform acts in FIPA:

3The difference depends on the precondition of the hearer being uncertain about the proposition or its negation.


< i, accept proposal(j, act) > ≡ < i, inform(j, G(i, done(j, act))) >
FP: B(i, G(i, done(j, act))) ∧ ¬B(i, B(j, G(i, done(j, act))) ∨ B(j, ¬G(i, done(j, act))))
RE: B(j, G(i, done(j, act)))

These are the preconditions.

B(j, accept proposal(i, j, done(j, act)) → B(i, G(i, done(j, act)))) (CR31)

B(j, accept proposal(i, j, done(j, act)) → ¬B(i, B(j, G(i, done(j, act))) ∨ B(j, ¬G(i, done(j, act))))) (CR32)

Here is the intended effect.

B(j, accept proposal(i, j, done(j, act)) → G(i, done(j, act))) (CR33)

< i, reject proposal(j, act) > ≡ < i, inform(j, ¬G(i, done(j, act))) >
FP: B(i, ¬G(i, done(j, act))) ∧ ¬B(i, B(j, ¬G(i, done(j, act))) ∨ B(j, G(i, done(j, act))))
RE: B(j, ¬G(i, done(j, act)))

These are the preconditions.

B(j, reject proposal(i, j, done(j, act)) → B(i,¬G(i, done(j, act)))) (CR34)

B(j, reject proposal(i, j, done(j, act)) → ¬B(i, B(j, ¬G(i, done(j, act))) ∨ B(j, G(i, done(j, act))))) (CR35)

Here is the intended effect.

B(j, reject proposal(i, j, done(j, act)) → ¬G(i, done(j, act))) (CR36)

3.3. Example

To illustrate the axioms, we will present an example dialogue. It is concerned with a simple negotiation about household tasks.

A: Could you do the dishes?
B: Only if you put the garbage out.
A: OK.

The dialogue can be interpreted as follows. Initially, A makes a request. Then B makes a counter proposal, but because we do not have speech acts to represent counter proposals, we model this as an implicit refusal, followed by a conditional proposal from B. For conditional proposals, that φ only if ψ, we use a regular material implication: ψ → φ. Conditional proposals are explained in more detail in Section 6.

Example 1
A1: Could you do the dishes? request(a, b, done(b, dishes))
B1: Only if you put the garbage out. refuse(b, a, done(b, dishes)); propose(b, a, done(a, garbage) → done(b, dishes))
A2: OK. accept proposal(a, b, done(a, garbage) → done(b, dishes))

We start with an empty set of facts about the mental attitudes of A and B: {}. After A1 we apply rules (CR19) and (CR20), which produces:

{¬B(a, G(b, done(b, dishes))), B(b, G(a, done(b, dishes)))}


If we assume persistence for the sincerity precondition, and for the parts of the set which are not immediately contradicted by later updates4, rules (CR25), (CR26) and (CR27) produce the following state of affairs after the implicit refusal of B1. The second precondition from (CR26) is canceled by the effect.

{G(a, done(b, dishes)), B(b, ¬G(b, done(b, dishes))), B(a, ¬G(b, done(b, dishes)))}

The implicit proposal of B1 can be represented as follows, using rules (CR28)–(CR30):

{B(b, G(a, done(b, dishes))), B(b, ¬G(b, done(b, dishes))), B(a, ¬G(b, done(b, dishes))),
B(b, (G(a, done(a, garbage) → done(b, dishes)) → G(b, done(a, garbage) → done(b, dishes)))),
B(a, (G(a, done(a, garbage) → done(b, dishes)) → G(b, done(a, garbage) → done(b, dishes))))}

Note the embedded implications here. The inner implication relates to the conditional proposal: b will do the dishes only if a puts the garbage out. The outer implication derives from the fact that the proposal is still pending; it still needs to be accepted by a, before b will really adopt the conditional goals.

Finally, after acceptance A2 we obtain the following, by rules (CR31)–(CR33):

{B(b, G(a, done(b, dishes))), B(b, ¬G(b, done(b, dishes))), B(a, ¬G(b, done(b, dishes))),
B(b, (G(a, done(a, garbage) → done(b, dishes)) → G(b, done(a, garbage) → done(b, dishes)))),
B(a, (G(a, done(a, garbage) → done(b, dishes)) → G(b, done(a, garbage) → done(b, dishes)))),
B(a, G(a, done(a, garbage) → done(b, dishes))), B(b, G(a, done(a, garbage) → done(b, dishes)))}

By application of Modus Ponens inside both the B(a..) and B(b..) environments, we get:

{B(b, G(a, done(b, dishes))), B(b, ¬G(b, done(b, dishes))), B(a, ¬G(b, done(b, dishes))),
B(b, (G(a, done(a, garbage) → done(b, dishes)) → G(b, done(a, garbage) → done(b, dishes)))),
B(a, (G(a, done(a, garbage) → done(b, dishes)) → G(b, done(a, garbage) → done(b, dishes)))),
B(a, G(a, done(a, garbage) → done(b, dishes))), B(b, G(a, done(a, garbage) → done(b, dishes))),
B(a, G(b, done(a, garbage) → done(b, dishes))), B(b, G(b, done(a, garbage) → done(b, dishes)))}

So at the end of this dialogue we can infer that A wants B to do the dishes, but B does not want that by itself. And A and B have a shared conditional goal in which A puts out the garbage and B does the dishes.
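The final inference step of Example 1, Modus Ponens inside the B(a, ..) and B(b, ..) environments, can be sketched as a small closure operation. Formulas are nested tuples; the encoding and function name are our own.

```python
# A sketch of the last step of Example 1: for every B(m, p -> q) and
# B(m, p) in the public state, add B(m, q).

def close_under_mp(state):
    state = set(state)
    changed = True
    while changed:
        changed = False
        new = set()
        for f in state:
            if f[0] == "B" and isinstance(f[2], tuple) and f[2][0] == "imp":
                m, (_, p, q) = f[1], f[2]
                if ("B", m, p) in state and ("B", m, q) not in state:
                    new.add(("B", m, q))
        if new:
            state |= new
            changed = True
    return state

# X is the proposal content: done(a, garbage) -> done(b, dishes)
X = ("imp", ("done", "a", "garbage"), ("done", "b", "dishes"))
after_A2 = {
    ("B", "b", ("imp", ("G", "a", X), ("G", "b", X))),
    ("B", "a", ("imp", ("G", "a", X), ("G", "b", X))),
    ("B", "a", ("G", "a", X)),      # added by accept proposal
    ("B", "b", ("G", "a", X)),
}
final = close_under_mp(after_A2)
```

Running the closure adds the publicly believed goal G(b, X) for both a and b, matching the shared conditional goal derived in the text.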

We make the following initial observations, regarding our translation of the FIPA approach:

– One of the nice aspects of this approach (provided that persistence and change are handled well) is that we can keep information derived from previous parts of the dialogue. The approach has a 'memory'. As we will see in the following section, such a memory is lost in the operational semantics of the Social Commitment approach.

– A drawback of the approach is that many of the inferences that can be made are superfluous. Thus, the feasibility preconditions become redundant in our public version of the FIPA approach, where all agents are reliable concerning each other's public mental states. By its nature, the inference of the second precondition, that the hearer does not already believe the information, is usually canceled if the speech act succeeds.

– To make the approach practical, simplifying assumptions are necessary. For example, for all speech acts based on inform, the nested effect G(i, B(j, G(i, ...))) can be simplified to G(i, ..), using the assumption that the speaker is reliable concerning reports about its own mental states. The approach is unworkable without such assumptions, which are unwarranted in open environments.

4A proper representation of this kind of reasoning requires an explicit notion of time and persistence, and rules about changes as a result of actions and events. See Boella et al. (2007b) for an initial attempt based on default logic with temporal operators.


[Figure 4 here: a finite state automaton over the commitment states empty, unset, pending, active, fulfilled, violated and cancelled, with transitions make, set, cancel, done, not done, cond, not cond, jump cond, jump done and time out.]

Fig. 4. Commitment State Automata (Fornara and Colombetti, 2004).

Action  Change
make    empty → C(unset, a, b, φ|ψ)
set     C(unset, a, b, φ|ψ) → C(pending, a, b, φ|ψ)
cancel  C(−, a, b, φ|ψ) → C(cancelled, a, b, φ|ψ)

Table 2. Commitment Change Rules, solid lines in Figure 4 (Fornara and Colombetti, 2004).

4. From commitments to roles

Agent communication languages based on social commitments constitute an attempt to overcome the mentalistic assumptions of FIPA by restricting the analysis to the public level of communication (Singh, 2000; Colombetti et al., 2004). Communicative acts are defined in terms of the social commitments they publicly determine for the speaker and the hearer. According to Singh (1991), describing communication using social commitments has the practical consequence that "[...] one can design agents independently of each other and just ensure that their S-commitment would mesh in properly when combined".

Here, we show how to define a particular social semantics in the role-based model. We selected the version of Fornara and Colombetti (2004) because, in our opinion, it is worked out in most detail.

4.1. Social Commitment Semantics

Generally, the social commitment approach uses an operational semantics. That means that the meaning of the speech acts of an agent communication language is expressed as status changes in a data structure that models the different states a commitment can be in. A social commitment is seen as a kind of relationship that is established between two agents: the debtor, i.e., the agent who makes the commitment, and the creditor, i.e., the agent to which the commitment is made. Usually, the commitment is to achieve or maintain some state of affairs, expressed by a proposition. Because in many applications commitments are made conditional on other commitments, the conditions under which the commitments are made are also expressed. So the representation C(i, j, φ|ψ) expresses a social commitment between debtor i and creditor j, to accomplish or maintain φ, under the condition that ψ.

A commitment can have different states: unset (i.e., to be confirmed), pending (i.e., confirmed, but its condition is not true), active (i.e., confirmed and its condition is true), fulfilled (i.e., its content is true), violated (i.e., the content is false even though the commitment was active), or cancelled (e.g., the debtor does not want to be committed to the action). This state can be modified by speech acts of the participants, or by external events, like the execution of an action fulfilling the commitment.

The state changes can also be depicted in the form of a finite state automaton (Figure 4). The effects of speech acts and of external events on the status of a commitment are represented in Table 2 and Table 3, respectively.


Rule Label         Event      Status Change
1. done            v(φ) := 1  C(active, a, b, φ|⊤) → C(fulfilled, a, b, φ|⊤)
2. not done        v(φ) := 0  C(active, a, b, φ|⊤) → C(violated, a, b, φ|⊤)
3. condition       v(ψ) := 1  C(pending, a, b, φ|ψ) → C(active, a, b, φ|⊤)
4. not condition   v(ψ) := 0  C(pending, a, b, φ|ψ) → C(cancelled, a, b, φ|ψ)
5. jump condition  v(ψ) := 1  C(unset, a, b, φ|⊤) → C(active, a, b, φ|⊤)
6. jump done       v(φ) := 1  C(pending, a, b, φ|ψ) → C(fulfilled, a, b, φ|ψ)
7. time out                   C(unset, a, b, φ|ψ) → C(cancelled, a, b, φ|ψ)

Table 3. Update Rules, dashed lines in Figure 4 (Fornara and Colombetti, 2004).
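The commitment life-cycle of Figure 4 and Tables 2-3 can be sketched as a small state machine. The state and transition names follow the tables; the class design and method names are our own illustration, not Fornara and Colombetti's formalization.

```python
class Commitment:
    """C(state, debtor, creditor, content | condition)."""
    def __init__(self, debtor, creditor, content, condition):
        self.debtor, self.creditor = debtor, creditor
        self.content, self.condition = content, condition
        self.state = "empty"                        # not yet made

    # Table 2: changes caused by speech acts (solid lines in Fig. 4)
    def make(self):
        self.state = "unset"
    def set(self):
        if self.state == "unset":
            self.state = "pending"
    def cancel(self):
        self.state = "cancelled"

    # Table 3: changes caused by events (dashed lines in Fig. 4)
    def condition_becomes(self, holds):
        if self.state == "pending":
            self.state = "active" if holds else "cancelled"   # rules 3, 4
        elif self.state == "unset" and holds:
            self.state = "active"                   # rule 5: jump condition
    def content_becomes(self, holds):
        if self.state == "active":
            self.state = "fulfilled" if holds else "violated" # rules 1, 2
        elif self.state == "pending" and holds:
            self.state = "fulfilled"                # rule 6: jump done
    def time_out(self):
        if self.state == "unset":
            self.state = "cancelled"                # rule 7

# The dishes example as a conditional commitment of b towards a.
c = Commitment("b", "a", "done(b, dishes)", "done(a, garbage)")
c.make(); c.set()             # proposed, then confirmed: pending
c.condition_becomes(True)     # a puts the garbage out: active
c.content_becomes(True)       # b does the dishes: fulfilled
```

This operational reading is exactly what the next subsection maps onto beliefs and goals of roles.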

4.2. Translating Commitment States

In order to perform the translation, we adopt the following methodology: First, we map each commitment state to certain beliefs and goals of the roles. Second, according to the constitutive rules (CR) of the role model, a speech act directly changes those beliefs and goals in such a way as to reflect the commitment introduction or change of state.

In Fornara and Colombetti (2004), the difference between propositional and action commitments lies only in their content. As a result, the difference between an inform and a promise is reduced to the fact that the content of the commitment is a proposition or an action, respectively. By contrast, according to Walton and Krabbe (1995), a propositional commitment can be seen as a kind of action commitment to defend the proposition. In the mapping between social commitments and the role model a new distinction emerges: rather than having commitment stores, we model propositional commitments as beliefs of the role and action commitments as goals. How roles' beliefs capture the idea of a commitment to defend one's position is discussed in Section 5, where we deal with persuasion dialogue. In this section we focus on action commitments only.

Here, we represent a conditional commitment C(i, j, p | q) in a simplified way, as a conditional goal p of role i in case q is true: B(i, q → G(i, p)). We believe that conditional attitudes are better accounted for in a conditional logic, like the Input/Output logic of Makinson and van der Torre (2000). Here, as in Section 3, we stick to FIPA's solution for the sake of clarity, while being aware of its limitations. Moreover, as in Section 3, we use the simplified notation done(i, act) to indicate that agent i is supposed to perform the action act. The done() construct turns an action into a proposition, expressing a state of affairs.

Given those preliminaries, the state translations are as follows:

– An unset commitment corresponds to a goal of the creditor. We translate this in the constitutive rules of a dialogue game in this way:

C(unset, i, j, done(i, act) | q) ≡ q → G(j, G(i, done(i, act))) (CR37)

In the antecedent of this rule, the commitment condition q becomes a condition on the goal assumed by the creditor of the commitment. At this stage of the commitment life-cycle, no mental attitude is attributed to the debtor: it has not publicly assumed any actual goal, but has only been publicly requested to.

– A commitment is pending when it is a goal of the creditor, the debtor of the commitment conditionally wants to perform the action if the associated condition q is true, and the creditor has this as a belief:

C(pending, i, j, done(i, act) | q) ≡ q → G(j, G(i, done(i, act))) ∧ B(i, q → G(i, done(i, act))) ∧ B(j, q → G(i, done(i, act))) (CR38)

– A commitment is active when it is both a goal of the debtor and of the creditor, and the pending condition is true:

C(active, i, j, done(i, act) | >) ≡ G(i, done(i, act)) ∧ G(j, done(i, act)) (CR39)


Note that to make a pending commitment active, it is sufficient that the condition q is believed to be true, since from

B(i, q ∧ (q → G(i, done(i, act))))

we can derive G(i, done(i, act)) with axiom (RL1).

– Commitments are fulfilled or violated when they are goals of the creditor and the content of the commitment is respectively true or false according to the beliefs of the creditor (abstracting here from temporal issues):

C(fulfilled , i, j, done(i, act) | >) ≡ B(j, done(i, act)) ∧G(j, done(i, act)) (CR40)

C(violated , i, j, done(i, act) | >) ≡ B(j,¬done(i, act)) ∧G(j, done(i, act)) (CR41)

Since roles are public, fulfilment and violation do not depend on what the agents subjectively believe about the truth value of the content of the commitment, but on the roles' public beliefs.

– Finally, a commitment is cancelled if the creditor no longer wants the goal to be achieved, regardless of whether the debtor still wants it:

C(cancelled , i, j, done(i, act) | q) ≡ ¬G(j, done(i, act)) (CR42)
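The state translations (CR37)-(CR42) can be read as a mapping from a commitment to a set of formulas attributed to the two roles. The following Python sketch is our own illustration; formulas are plain strings, and the string encoding is ours:

```python
# Sketch of rules (CR37)-(CR42): mapping an action commitment
# C(state, i, j, done(i, act) | q) to public beliefs and goals of the
# roles i (debtor) and j (creditor). The encoding is ours, for
# illustration only.
def translate(state: str, i: str, j: str, act: str, q: str = "T") -> set[str]:
    did = f"done({i},{act})"
    creditor_goal = f"{q} -> G({j}, G({i}, {did}))"
    debtor_cond_goal = f"{q} -> G({i}, {did})"
    if state == "unset":                                   # (CR37)
        return {creditor_goal}
    if state == "pending":                                 # (CR38)
        return {creditor_goal,
                f"B({i}, {debtor_cond_goal})",
                f"B({j}, {debtor_cond_goal})"}
    if state == "active":                                  # (CR39)
        return {f"G({i}, {did})", f"G({j}, {did})"}
    if state == "fulfilled":                               # (CR40)
        return {f"B({j}, {did})", f"G({j}, {did})"}
    if state == "violated":                                # (CR41)
        return {f"B({j}, not {did})", f"G({j}, {did})"}
    if state == "cancelled":                               # (CR42)
        return {f"not G({j}, {did})"}
    raise ValueError(state)
```

Note how, as in the rules above, the unset state attributes attitudes only to the creditor, while the cancelled state is characterized purely by the absence of the creditor's goal.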

4.3. Translating Speech Acts

Given the definition of the commitment states in terms of the mental states of the roles, we can provide the following translation of the speech act semantics defined by Fornara and Colombetti (2004). In this section, we only deal with speech acts introducing action commitments, like request, propose, promise, accept and reject, while speech acts introducing propositional commitments, such as assertions, are discussed in Section 5.

Note that in contrast with the FIPA translation, speech acts affect the beliefs and goals of both speaker and hearer, and not only of the speaker. This reflects the fact that a commitment is seen as a relationship which is publicly established.

A promise introduces a pending commitment of the speaker (Rule CR38):

promise(i, j, done(i, act), q) → (q → G(j, G(i, done(i, act))) ∧ B(i, q → G(i, done(i, act))) ∧ B(j, q → G(i, done(i, act)))) (CR43)

This interpretation of promise and accept (defined below) needs some discussion. According to Guerini and Castelfranchi (2006) the definition above is too weak: a promise needs to be explicitly adopted, and then kept. It is not enough that j has indicated to prefer that i do the action, and that i and j believe that i has a goal to do the action; what is missing is j's belief, as a result of the acceptance, that i will actually do it. Goals are not good enough; some expectation of success is needed. Nevertheless, we think that the conditional nature of a promise, i.e., that it requires explicit acceptance, is well covered by the combination of rules (CR43) and (CR45). Rule (CR43) only covers an initiative to make a promise; the promise is only complete when accepted, and fulfilled when successful.

A request introduces an unset commitment with the receiver as debtor, i.e., the agent of the requested action (Rule CR37):

request(i, j, done(j, act), q)→(q→G(i, G(j, done(j, act)))) (CR44)

Accept and reject change the state of an existing unset commitment to pending and cancelled, respectively. In order to account for this fact, we insert in the antecedent of the rules for accept and reject a reference to the configuration of beliefs and goals that represents an existing commitment.

(q → G(j, G(i, done(i, act)))) ∧ accept(i, j, done(i, act), q) → B(i, q → G(i, done(i, act))) ∧ B(j, q → G(i, done(i, act))) (CR45)


(q → G(j, G(i, done(i, act)))) ∧ reject(i, j, done(i, act), q) → B(i, ¬G(i, done(i, act))) ∧ B(j, ¬G(i, done(i, act))) (CR46)

A propose is a complex speech act composed of a request and a conditional promise; it introduces an unset commitment with the receiver as debtor and a pending commitment with the speaker as debtor. Since a propose is used in a negotiation, q and p refer respectively to an action of the speaker and an action of the receiver.

propose(i, j, done(j, p), done(i, q)) ≡ request(i, j, done(j, p), done(i, q)) ∧ promise(i, j, done(i, q), s) (CR47)

where s ≡ B(i, done(i, q) → G(j, done(j, p))) ∧ B(j, done(i, q) → G(j, done(j, p))), i.e., p is a pending commitment of the receiver. So propose is expressed by the following constitutive rule:

propose(i, j, done(j, p), done(i, q)) → (done(i, q) → G(i, G(j, done(j, p)))) ∧ B(i, s → G(i, done(i, q))) ∧ B(j, s → G(i, done(i, q))) (CR48)
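The speech-act rules above can be pictured as updates to a store of public formulas attributed to the roles. The following Python sketch replays (CR43)-(CR45) under our string encoding (the flat-set store and function names are ours, for illustration only):

```python
# Sketch of the speech-act rules (CR43)-(CR45): each act adds formulas to
# the public attitudes attributed to the roles. Formulas are strings and
# the store is a flat set; this is our illustration, not the paper's
# implementation.
def promise(store: set[str], i: str, j: str, act: str, q: str) -> None:
    did = f"done({i},{act})"
    # (CR43): a promise introduces a pending commitment of the speaker.
    store |= {f"{q} -> G({j}, G({i}, {did}))",
              f"B({i}, {q} -> G({i}, {did}))",
              f"B({j}, {q} -> G({i}, {did}))"}

def request(store: set[str], i: str, j: str, act: str, q: str) -> None:
    # (CR44): a request introduces an unset commitment of the receiver.
    store.add(f"{q} -> G({i}, G({j}, done({j},{act})))")

def accept(store: set[str], i: str, j: str, act: str, q: str) -> None:
    did = f"done({i},{act})"
    # (CR45): acceptance by the debtor turns an unset commitment into a
    # pending one, provided the creditor's goal is already in the store.
    if f"{q} -> G({j}, G({i}, {did}))" in store:
        store |= {f"B({i}, {q} -> G({i}, {did}))",
                  f"B({j}, {q} -> G({i}, {did}))"}
```

For instance, a request by a followed by b's acceptance leaves in the store exactly the three formulas that (CR38) associates with a pending commitment of b.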

4.4. Example

We now return to the running example, to illustrate some of the details of our interpretation of the social commitments approach. The example is repeated below, now using the notation φ|ψ for conditional commitments.

A1: Could you do the dishes?
B1: Only if you put the garbage out?
A2: OK.

We start again from the empty state. According to Fornara and Colombetti (2004), the request from A creates an unset commitment waiting for acceptance. Now B responds with a conditional acceptance, which could be seen as a kind of counter proposal. However, there are two issues for discussion.

First, a request can only be accepted or rejected, not adjusted by a counter proposal. So, just as in Section 3, we must interpret B's response here as a rejection, followed by a conditional proposal. Moreover, we assume that a cancelled commitment is equivalent to an empty state. In other words: the system does not remember that something was rejected.

Second, in Fornara and Colombetti's account, rendered in rule (CR48) above, conditional proposals for φ|ψ are seen as a combination of a request for φ with a conditional promise to make sure that ψ. The promise is again conditional on a statement σ which implies the conditional commitment φ|ψ under discussion. A promise is in turn interpreted as a combination of a make and a set action, resulting in the pending state. So under this interpretation a promise does not need acceptance.5

So our interpretation of the dialogue above would be as follows.

5We believe this interpretation of proposals is conceptually mistaken. First, proposals are concerned with actions to be done by the speaker; requests deal with actions to be done by the hearer. Second, implicit requests are better made explicit in the agent communication protocol. Third, promises do need acceptance. Nevertheless, we continue with Fornara and Colombetti's interpretation here for the sake of the argument.


Example 2
A1: Could you do the dishes?           request(a, b, done(b, dishes)|>)
                                       C(unset, b, a, done(b, dishes)|>)
B1: Only if you put the garbage out?   reject(b, a, done(b, dishes)|>)
                                       C(cancelled, b, a, done(b, dishes)|>)
                                       empty
                                       propose(b, a, done(a, garbage)|done(b, dishes))
                                       C(unset, a, b, done(a, garbage)|done(b, dishes))
                                       C(pending, b, a, done(b, dishes)|σ)
                                       σ ≡ C(pending, a, b, done(a, garbage)|done(b, dishes))
A2: OK.                                accept(a, b, done(a, garbage)|done(b, dishes))
                                       C(pending, a, b, done(a, garbage)|done(b, dishes))
                                       C(pending, b, a, done(b, dishes)|>)   (resolving σ)

So we end up with the following set of commitments.

{C(pending, a, b, done(a, garbage)|done(b, dishes)), C(pending, b, a, done(b, dishes)|>)}

This is a conditional commitment of A to put the garbage out, provided B does the dishes, and a commitment of B to indeed do the dishes, given the context of this other commitment. Note that once we have resolved σ, the conditional nature of the commitment of B to do the dishes is lost.

Using the translation axioms, in particular (CR38), we get

{done(b, dishes) → G(b, G(a, done(a, garbage))),
B(a, done(b, dishes) → G(a, done(a, garbage))),
B(b, done(b, dishes) → G(a, done(a, garbage))),
G(a, G(b, done(b, dishes))),
B(a, G(b, done(b, dishes))),
B(b, G(b, done(b, dishes)))}

This can be compared with the outcome of the same example at the end of Section 3. Doing so, we make the following observations.

– The social commitments approach uses an operational semantics. This simplifies the structure of the possible protocols that can be defined with such an agent communication language, but it also means that information is lost. In particular, the 'memory' of the dialogue is reduced to a single state in a state space. This makes it likely that 'circular' dialogues will occur.

– The social commitment approach is quite clever at handling conditional commitments. By interpreting a proposal as an implicit request followed by a promise – which is debatable – a mutual commitment is established.


5. Persuasion dialogues in the social commitments approach

The purpose of this section is to extend social commitments approaches to persuasion dialogues, to show that our role-based semantics is general enough. A similar extension could be made for FIPA, but this extension is more interesting, due to the ambiguity of the term commitment.

We distinguish action commitments from propositional commitments, because an action commitment is fulfilled when the hearer believes that the action is performed, whereas a propositional commitment is fulfilled only when the hearer concedes the proposition. Using the role-based semantics, we will show that propositional commitments are related to public beliefs, and action commitments to public goals.

Consider for example the following two sentences:

(1) A promises B to deliver the goods before Friday.
(2) A informs B that Al Gore would have been a better president than Bush.

The first sentence commits agent A to the delivery of the goods before Friday, and the second sentence commits agent A to defend the claim that Al Gore would have been a better president than Bush. We say that the first sentence leads to an action commitment and the second sentence leads to a propositional commitment. Researchers in the social commitment approach to agent communication (Castelfranchi, 1995; Singh, 2000; Fornara and Colombetti, 2004) focus on the former, because they are interested in task-oriented dialogue and negotiation. Researchers in the argumentation tradition, on the other hand (Hamblin, 1970; Walton and Krabbe, 1995), focus on the latter: "to assert a proposition may amount to becoming committed to subsequently defending the proposition, if one is challenged to do so by another speaker" (Walton and Krabbe, 1995). Despite these differences, in a sense, one could argue that a promise and an assertion have the same effect: they create a commitment of the speaker, respectively an action commitment or a propositional commitment.

This is coherent with the idea that the meaning of a speech act is defined in terms of the attitudes of the speaker, without any effects on the hearer. But according to Kibble (2005), the meaning of a speech act must also be defined in terms of the effects on the hearer. In particular, an assertion that goes unchallenged may count as a concession for the hearer. This corresponds to the 'silence means consent' principle, already studied by Mackenzie (1979). Walton and Krabbe argue that in case of a concession, the hearer becomes weakly committed to the proposition: the hearer can no longer make the speaker defend the proposition by challenging him, albeit the hearer does not have to defend the proposition himself if challenged.

After an assertion the hearer can make a concession explicitly, or implicitly by not challenging the assertion. So we reinforce Kibble's claim that speech acts also have an effect on the attitudes of the hearer. To show the feasibility of the semantics, we model persuasion dialogues inspired by the PPD0 protocol of Walton and Krabbe (1995). We illustrate the approach by a dialogue that involves a mixture of propositional and action commitments.

Despite the differences, action commitments and propositional commitments have more than accidental similarities too. This needs some argument, because ". . . the word commitment and the corresponding verb commit are used in many different ways, and there is no reason to suspect that there is a common core meaning in all of them" (Walton and Krabbe, 1995, p. 13).

Our general intuition is that through making a commitment, the number of future options becomes restricted. For action commitments, this can be explained by referring to the well-known slogan that "intention is choice with commitment" (Cohen and Levesque, 1990). Having the stability provided by commitments makes sense when re-planning is costly, and when certain resources must be reserved in advance, e.g., time slots in an agenda (Bratman, 1987). The same holds for commitments made to other agents. For example, by agreeing to meet on Friday at noon, future options to do other things on Friday become restricted. Analogously, conceding to an action of another agent means agreeing not to prevent him from doing the action by making other conflicting commitments. For example, a concession to do an action which needs a car implies that the conceder will not use the car for other purposes. Thus, the relation between commitment and concession is similar to the one between obligation and permission.


Fig. 5. A protocol for persuasion dialogues. [Diagram: transitions between proponent (pr) and opponent (op) moves, labelled assert, why, concede, agree, retract, and assert; assert*.]

If we view argumentation or persuasion as a kind of game in which players make moves, a propositional commitment can also be said to limit the future possible moves a player can make. In particular, whenever an agent, the proponent, makes an assertion, he or she is committed to uphold that proposition. This means that all moves which would enable the other player, the opponent, to force the proponent to retract the proposition, must be avoided. Thus, here too commitments restrict the set of future options.

Just as one needs a specific logic of practical reasoning to explain action commitment, we need a specific persuasion protocol to explain propositional commitment. Here we take a persuasion protocol inspired by (Walton and Krabbe, 1995, pp. 150-151), and elaborated by (Gaudou et al., 2006a, Figure 2). Although this protocol is simplified, we believe it is sufficient to illustrate the notion of propositional commitment. The protocol is depicted in Figure 5.

The idea behind the protocol corresponds to the following quote, which is often cited to relate propositional commitment to a kind of action commitment: "Suppose X asserts that P. Depending on context, X may then become committed to a number of things, for example, holding that P, defending that P (if challenged), not denying that P, giving evidence that P, arguing that P, proving or establishing that P and so on" (Walton and Krabbe, 1995, p. 23).

The protocol is defined as follows. For each instantiation of the protocol, there are participants in two roles: the proponent (pr) and the opponent (op). By definition, the proponent is the agent who makes the initial assertion. Proponent and opponent have different burdens of proof. Note that the agent who is the opponent of some proposition p may very well become the proponent of another proposition q later in the dialogue. For this reason, it is crucial that we have an explicit representation of roles.

Following an assertion by the proponent, the opponent can respond with a challenge like a why-question, which essentially requests the proponent to come up with some argument to support the assertion. The opponent may also agree with the asserted proposition, or concede the asserted proposition. Agreeing means not only relieving the proponent of the burden of proof, but also adopting the burden of proof towards third parties. Conceding means only that the opponent has given up the right to challenge the proposition, and has relieved the proponent of the burden of proof. Thus agreement implies concession. Alternatively, we can say that not challenging means a concession. In this way we capture Kibble's (2005) idea that entitlement to a commitment comes by default after an assertion; see rule (CR53).

In response to a challenge, the proponent must either give some argument in support of the proposition, which itself consists of one or more assertions, or retract the original assertion. Following the assertion in response to a challenge, the opponent may either concede the proposition, or challenge or concede any of the assertions made by the proponent during the argument in support.

A why-challenge does not add any new material. But an opponent can also challenge propositions by so-called rebutting arguments, which provide an independent argument for the opposite assertion. Both these cases can be handled as assertions by the opponent, which trigger another instantiation of the same protocol, with a role reversal. So for rebutting arguments, the burden of proof lies with the rebutter.

The dialogue ends either when the proponent has run out of arguments to support her assertion, in which case she is forced to retract the proposition, or when the opponent has run out of challenges, in which case he is forced to concede. An agreement of the opponent is essentially an assertion by the opponent. This end condition corresponds to Walton and Krabbe's win and loss rules of PPD0 persuasion dialogues (Walton and Krabbe, 1995, p. 152).

Before                 Speech act            After
--                     assert(pr, op, p)     B(pr, p)
B(pr, p)               why(op, pr, p)        ¬B(op, p)
B(pr, p)               concede(op, pr, p)    ¬B(op, ¬p)
B(pr, p)               agree(op, pr, p)      B(op, p)
B(pr, p), ¬B(op, p)    retract(pr, op, p)    ¬B(pr, p)

Table 4. Update rules in the persuasion protocol; pr = proponent, op = opponent.

In our formalization of the persuasion protocol, propositional commitments of the proponent are modelled as public beliefs. The open challenges are modelled as the public absence of belief of the opponent. Note that in the following definition, the Before and After fields indicate what is added to the belief bases as a result of the speech act. Thus they are update rules. They should not be confused with the FP and RE conditions of FIPA.

In the translation, we use the principle that a concession PP is represented by the fact that the debtor does not believe that ¬p. This makes it impossible for the debtor to raise further challenges.

PP (i, j, p) ≡ ¬B(i,¬p)

A propositional commitment PC is active when the debtor believes the proposition, while nothing is required of the creditor:

PC(active, i, j, p) ≡ B(i, p) (CR49)

A propositional commitment PC is fulfilled when the creditor concedes the proposition, and therefore cannot challenge it anymore:

PC(fulfilled, i, j, p) ≡ ¬B(j,¬p) (CR50)

A propositional commitment PC is violated when the debtor's beliefs are in contradiction, due to the failure to defend some previous commitment:

PC(violated, i, j, p) ≡ ¬B(i, p) ∧B(i, p) (CR51)

Note that a proper treatment of this issue requires a detailed mechanism for dealing with temporal issues, which is missing in the FIPA formal language we use.

It is not clear whether a conditional propositional commitment is different from a propositional commitment about a conditional, or what it means for a propositional commitment to be unset or pending (note that in Fornara and Colombetti (2004) there is no way to create an unset or pending propositional commitment). Thus we do not define these states here, nor cancellation, which would also require the introduction of time.

5.1. Speech acts

Given the definition of the commitment states in terms of the mental states of the roles, we can provide the following translation of the speech act semantics. Speech acts affect the beliefs and goals of both speaker and hearer, and not only of the speaker. This reflects the fact that in social commitments agents are publicly committed to the mental attitudes attributed by the constitutive rules to the roles they play. No cooperativity or sincerity assumptions are necessary, in contrast to FIPA.

An assertion introduces an active propositional commitment of the speaker and, if it is not challenged, it also introduces a concession of the hearer:

assert(i, j, p) → B(i, p) (CR52)

assert(i, j, p) ∧ ¬(why(j, i, p) ∨ rebut-challenge(j, i, p)) → ¬B(j, ¬p) (CR53)


Fig. 6. The interpretation of the dialogue in Example 3. [For each turn A1-B4, the figure tabulates the beliefs (B) and goals (G) of proponent a and opponent b, as created by the speech acts and persisting from previous turns.]

Both implicit and explicit concessions are modelled as the absence of the contrary belief. This is similar to weak commitment (Gaudou et al., 2006a, eq. 17, p. 128).

B(j, p) ∧ concede(i, j, p) → ¬B(i,¬p) (CR54)

Agreement simply means that the hearer becomes committed too. So agreement implies concession.

B(j, p) ∧ agree(i, j, p) → B(i, p) (CR55)

Asserting an argument against p counts as a rebut-challenge. For space reasons, we simplify the notion of argument here:

B(j, p) ∧ assert(i, j, (q → ¬p) ∧ q) → rebut-challenge(i, j, p) (CR56)

A why-challenge asks for arguments to support the assertion. It indicates that the opponent is not yet convinced:

B(j, p) ∧ why(i, j, p) → ¬B(i, p) (CR57)

Putting forward an argument in support of the original assertion is a way to reply to a why challenge:

¬B(j, p) ∧ assert(i, j, (q → p) ∧ q) → support(i, j, p) (CR58)

Not replying to a why-challenge with a supporting argument counts as a retraction:

why(i, j, p) ∧ ¬support(j, i, p) → ¬B(j, p) (CR59)

Replying to a rebut-challenge with a counter argument is also compulsory, but because the rebut-challenge is performed by means of a set of assertions, this is already accounted for by rule (CR53).

Once a concession has been introduced, it prevents the agent from committing itself to the opposite proposition, since this would lead to a contradiction: ¬B(i, ¬p) ∧ B(i, ¬p). Thus he can neither assert ¬p nor challenge p, since a challenge is performed by informing about an argument for ¬p. Avoiding a contradiction also explains why an agent is led to challenge an assertion if he previously committed himself to the contrary.
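The persuasion update rules above can be sketched as operations on the public belief bases of the two roles. The following Python sketch is our illustration of Table 4 and rules (CR52)-(CR55), including the 'silence means consent' default of (CR53); the class and function names are ours:

```python
# Sketch of the persuasion update rules (Table 4, (CR52)-(CR55)): each
# role has a public belief base of literal strings, plus a record of the
# public absence of belief. Our encoding, for illustration only.
class Role:
    def __init__(self, name: str):
        self.name = name
        self.beliefs: set[str] = set()      # e.g. "p", "not p"
        self.non_beliefs: set[str] = set()  # recorded absence of belief

def assert_(speaker: Role, hearer: Role, p: str, challenged: bool = False) -> None:
    speaker.beliefs.add(p)                  # (CR52): active commitment
    if not challenged:                      # (CR53): default concession
        hearer.non_beliefs.add(f"not {p}")  # hearer does not believe not-p

def why(speaker: Role, hearer: Role, p: str) -> None:
    if p in hearer.beliefs:                 # (CR57): challenge, speaker
        speaker.non_beliefs.add(p)          # publicly not yet convinced

def concede(speaker: Role, hearer: Role, p: str) -> None:
    if p in hearer.beliefs:                 # (CR54): weak commitment
        speaker.non_beliefs.add(f"not {p}")

def agree(speaker: Role, hearer: Role, p: str) -> None:
    if p in hearer.beliefs:                 # (CR55): full adoption
        speaker.beliefs.add(p)
```

In this sketch agreement adds the proposition to the opponent's beliefs, which (given the reading of concession as absence of the contrary belief) makes the claim that agreement implies concession directly checkable.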

In Figure 6 we show the interpretation of the following dialogue. For each turn, we report the beliefs and goals which are created and those which persist from the previous turn.


Roles B and G       --FIPA speech act-->  Roles B and G
      ↑                                         ↑
    trans                                     trans
      |                                         |
Social Commitments  --SC speech act-->    Social Commitments
     s1                                        s2

Fig. 7. Comparison diagram.

Example 3
A1: Tomorrow the University is open.          assert(a, b, open)
A2: Can you give the exams for me?            request(a, b, give-exam)
B1: Isn't it closed for the Olympic games?    assert(b, a, (games → ¬open) ∧ games)
                                              rebut-challenge(b, a, open)
A3: Not for exams.                            assert(a, b, (exam ∧ games → open) ∧ exam)
                                              rebut-challenge(a, b, (games → ¬open) ∧ games)
B2: I see.                                    agree(b, a, open)
B3: OK.                                       accept(b, a, give-exam)
B4: [B gives the exam]                        give-exam

6. Comparisons

In this section we propose two examples of comparison between FIPA and social commitments using our role semantics, as a means to assess the feasibility of the role semantics as an intermediate language. We will illustrate that the translation from the social commitment approach to the beliefs and goals of roles does not lose information.

6.1. Comparing Inform

We first consider the case of inform. We show that by translating a state described in terms of commitments into public beliefs and goals and applying speech acts as defined in FIPA extended with roles, we get the same results as by applying social commitments (SC) speech acts and then translating the resulting state into public beliefs and goals. The general idea is shown in Figure 7.

Suppose we have a state s1 with no commitments. So in the social commitments automaton we are in the state empty. According to Fornara and Colombetti (2004), the meaning of an inform is the combination of a make and a set of an unconditional commitment. The result is:

C(pending, a, b, φ|>)

Because it concerns an unconditional commitment, there is no difference between active and pending commitments. So we use the 'jump condition' rule, number 5 (see Table 3 in Section 4). We also leave the condition '|>' out of the notation. That produces:

C(active, a, b, φ)

Now we translate according to rule (CR49), which produces

B(a, φ)

So the translation of s2 becomes {B(a, φ)}. That completes the first half of the diagram.

For the role translation, the empty state corresponds to the empty set of beliefs and goals: {}. Now, the meaning of the inform speech act is captured by rules (CR15)-(CR17). So, in the role version of FIPA, s2 becomes

{B(a, φ), ¬B(a, B(b, φ) ∨ B(b, ¬φ)), G(a, B(b, φ))}

Fig. 8. The example with FIPA; for space reasons done(i, act(i)) is abbreviated as act(i). [The figure tabulates the beliefs and goals of sender a and receiver b after each communicative act: propose, accept, give(a), pay(b), and refuse.]

Clearly, this is more detailed than the translation of the social commitments approach. The FIPA version contains presupposed information about what the speaker does not know beforehand, and about the addressee. However, we can say that going from social commitments to roles, no information is lost.

The difference in the amount of detail between the approaches becomes even clearer when we compare the outcomes of the garbage example, discussed at the end of Section 3 and Section 4. But in addition to the different level of detail (feasibility preconditions vs. operational semantics), there are also important conceptual differences in the way proposals are modelled.
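The commutation argument for inform can be checked mechanically: the SC path (apply the SC speech act, then translate) should yield a subset of the attitudes produced by the role-based FIPA path. A minimal sketch under our string encoding (the function names are ours, and the FIPA rule content is simplified from (CR15)-(CR17)):

```python
# Sketch of the commutation check of Figure 7 for inform (our encoding).
# Path 1: SC inform yields C(active, a, b, phi), translated to {B(a, phi)}.
# Path 2: role-based FIPA inform yields a richer set including the
# ignorance feasibility precondition and the rational effect as a goal.
def sc_then_translate(a: str, b: str, phi: str) -> set[str]:
    # inform = make + set of an unconditional commitment -> active state,
    # which the propositional-commitment translation maps to one belief.
    return {f"B({a}, {phi})"}

def fipa_inform(a: str, b: str, phi: str) -> set[str]:
    # Simplified reading of rules (CR15)-(CR17): belief of the speaker,
    # ignorance precondition about the hearer, and the goal that the
    # hearer comes to believe the content.
    return {f"B({a}, {phi})",
            f"not B({a}, B({b}, {phi}) or B({b}, not {phi}))",
            f"G({a}, B({b}, {phi}))"}

# Going from social commitments to roles, no information is lost:
assert sc_then_translate("a", "b", "phi") <= fipa_inform("a", "b", "phi")
```

The subset relation, rather than equality, reflects the observation above: the FIPA side is strictly more detailed.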

6.2. Propose Interaction Protocol

As a second example, we choose the Propose interaction protocol of FIPA (2002a). This simple protocol consists of a propose followed by an acceptance or a refusal, and does not refer to group coordination or group action.

In the following, we illustrate how the speech acts in the two approaches introduce and modify the beliefs and goals of the roles a and b. Eventually, we compare the sets of beliefs and goals produced by the translation of FIPA and social commitments into the role-based semantics, to assess whether the goals concerning executable actions are the same in the two approaches (i.e., whether the agents would act at the same moment), and whether it is possible to find in FIPA the same commitments as in the social commitments approach.

The main difficulty in mapping FIPA onto social commitments concerns the FIPA propose communicative act. In social commitments approaches, it is viewed as a way to negotiate a joint plan: "If I do q, then you do p". This models, for example, auctions (Fornara and Colombetti, 2004). Instead, the FIPA definition of propose refers to one action only. Here, we are inspired by the example reported in the FIPA documentation (FIPA, 2002a), which concerns the action of selling an item for a given amount of money: we make explicit the fact that the action of selling is a joint plan composed of the proponent's action of giving the item and the receiver's subsequent action of giving the money: sell(i, j) = give(i); pay(j). In this way, the FIPA propose act becomes an act of proposing a plan to be performed by both agents. Once the goals of both agents to perform the plan have been formed, the plan is distributed between the agents according to axioms RL8 and RL9, and the goals concerning the steps of the plan are formed.

Apart from the mapping of propose, the translation of the FIPA and social commitments semantics to the role-based semantics is straightforward: at each turn, the constitutive rules for translating the semantics of FIPA and social commitments into the role-based semantics are applied (see the definition of the rules in Sections 3 and 4). Then, modus ponens and the axioms are applied. Beliefs and goals which are not affected by subsequent speech acts persist.

In FIPA (see Figure 8, where a simplified notation is used), the propose to sell, propose(a, b, done(sell(a, b))), is an inform that introduces in the role a the belief that the precondition G(b, done(sell(a, b))) → G(a, done(sell(a, b))) is true. We skip for space reasons the other feasibility precondition, but the reader can easily check that it is true and consistent with the state of the dialogue. The rational effect is a goal of the speaker, but since the speaker is reliable (it has correct beliefs about its own mental states), after the proposal the receiver believes G(b, done(sell(a, b))) → G(a, done(sell(a, b))) too, by rule CR10.

The acceptance of the proposal by b in FIPA is an inform that b has the goal G(b, done(sell(a, b))): accept-proposal(a, b, done(sell(a, b))).

Again, the receiver believes the content of the accept-proposal speech act because an agent in a role is reliable about its own mental states. Since the speaker believes G(b, done(sell(a, b))) → G(a, done(sell(a, b))) and G(b, done(sell(a, b))), it believes also to have done(sell(a, b)) as a goal (by modus ponens) and, by axiom RL1, it actually has the goal to sell. Most importantly, if an agent in a role has the goal to perform the joint plan, then, by axiom RL8, it has the goal to do its part at the right moment (and the other knows this) and the goal that the other does its part. The result of the distribution is: B(a, done(give(a)) → G(a, done(pay(b)))) ∧ B(b, done(give(a)) → G(b, done(pay(b)))). Thus, when done(give(a)) is true, from done(give(a)) → G(a, done(pay(b))), we derive G(a, done(pay(b))), i.e., a wants that b does its part.

The translation from the social commitments ACL semantics to the role-based semantics is accomplished by applying the rules defined in the previous section (see Figure 9). Given the FIPA propose CA, the corresponding speech act in the social commitments ACL is propose(a, b, done(give(a)), done(pay(b))). By applying the rule that translates this speech act into the role-based ACL semantics, we get to the state in which both a and b have the belief that done(give(a)) → G(a, G(b, done(pay(b)))), representing an unset commitment of b. Moreover, a pending commitment by a is represented by (done(give(a)) → G(b, done(pay(b)))) → G(a, done(give(a))).

The accept proposal is modelled by accept(b, a, Ci(unset, b, a, pay(b) | give(a))) in the social commitments semantics. This speech act, whose precondition is true, results in b’s act of creating the belief of a that b believes done(give(a)) → G(b, done(pay(b))). The application of modus ponens to the belief (done(give(a)) → G(b, done(pay(b)))) → G(a, done(give(a))) and this new belief results in the introduction of an active commitment whose debtor is role a: G(a, done(give(a))) ∧ G(b, done(give(a))).

When give(a) is executed, the commitment of a to do give(a) is fulfilled and the commitment of b to do pay(b) becomes active: its condition done(give(a)) is satisfied.
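The commitment life cycle traversed in this example (unset, pending, active, fulfilled, or cancelled on reject) can be summarised as a small state machine. This is a hedged sketch of the operational view in Fornara and Colombetti's approach, not their actual formalisation; class and method names are our own.

```python
from enum import Enum

class CommitmentState(Enum):
    UNSET = "unset"
    PENDING = "pending"
    ACTIVE = "active"
    FULFILLED = "fulfilled"
    CANCELLED = "cancelled"

class Commitment:
    """Simplified commitment life cycle, illustrating the example above."""
    def __init__(self, debtor, creditor, action, condition=None):
        self.debtor, self.creditor = debtor, creditor
        self.action, self.condition = action, condition
        self.state = CommitmentState.UNSET

    def accept(self):              # accept-proposal: unset -> pending
        self.state = CommitmentState.PENDING

    def reject(self):              # reject-proposal: unset -> cancelled
        self.state = CommitmentState.CANCELLED

    def condition_satisfied(self): # give(a) executed: pending -> active
        self.state = CommitmentState.ACTIVE

    def performed(self):           # pay(b) executed: active -> fulfilled
        self.state = CommitmentState.FULFILLED

# b's commitment to pay, conditional on a's giving the item:
c = Commitment("b", "a", "pay(b)", condition="give(a)")
c.accept()
c.condition_satisfied()
c.performed()
assert c.state is CommitmentState.FULFILLED
```

In the role-based translation these states are not primitive: each of them corresponds to a configuration of beliefs and goals of the two roles, as the worked example shows.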

G. Boella et al. / Common Ontology of ACL 29

[Figure 9 (table): for each turn of the example (propose, accept, give(a), pay(b), reject), the beliefs and goals publicly attributed to the roles of sender a and receiver b.]

Fig. 9. The example with commitments. Note that for space reasons done(i, act) is abbreviated in act(i).

The reject proposal communicative act in FIPA corresponds, in the social commitments semantics, to the speech act reject(b, a, Ci(unset, b, a, pay(b) | give(a))). The reject speech act attributes to both a and b the belief that b does not have the goal done(pay(b)), thus preventing a’s pending commitment from becoming active and cancelling the unset commitment of role b.

What are the main differences between these approaches? By comparing the two tables in Figures 8 and 9, it is possible to observe that, once translated into the role-based semantics, the actual commitments coincide in the two approaches, with a significant exception. The difference can be observed in the first row: after the propose speech act there is no equivalent in FIPA of the belief, publicly attributed to the proponent, that it has the goal that the addressee forms a conditional goal to pay the requested amount of money for the sold item, where the condition consists of the proponent giving the item.

This difference is to be ascribed to the definition of the act of proposing in FIPA. In practice, FIPA does not express the advantage of the proponent in proposing the plan. For example, in the selling case, there is no indication that the reason why the proponent proposes the joint plan is that it wants to receive the specified amount of money. However, this is implicit in the definition of the selling plan. In social commitments approaches, reciprocity is expressed by the fact that a propose is composed of a conditional promise together with a request (see also the model in Yolum and Singh (2002)), thus providing a way to express any kind of arrangement, even non-conventional ones. In social commitments approaches, the subsequent accept speech act presupposes the existence of an unset commitment having as debtor the role to which the proposal has been addressed. However, the accept proposal act in the second turn fills the gap in FIPA: when the addressee displays the goal to take part in the joint plan, the distribution of the tasks of giving and paying takes place, generating the appropriate goals in the two roles.


7. Related work

We discuss related work on agent communication languages, on roles, as well as our own related work.

7.1. Agent communication languages

The linguistic and philosophical theory of speech acts has inspired a number of models of agent communication languages. Most models of communicative acts can be classified into two types, depending on whether they rely on mental attitudes or on social commitment (both action commitment and propositional commitment).

7.1.1. Mental attitudes

The semantics of agent communication languages provided by the Foundation for Intelligent Physical Agents (FIPA, 2002a) is paradigmatic of the models based on mental attitudes. The FIPA standards, and the work at France Telecom on which they are based (Bretier and Sadek, 1997; Sadek et al., 1997), derive from earlier models in natural language interpretation (Cohen and Perrault, 1979; Allen and Perrault, 1980). FIPA has concentrated mostly on collaborative dialogue types, like information seeking. The sincerity condition assumed by FIPA derives from the cooperative dialogues of the France Telecom setting, because in such dialogues agents are cooperative. However, the generalization of the FIPA language to non-cooperative settings has proven to be problematic. The combination of speech act theory (Searle, 1969) with a general theory of planning and action, e.g. (Pollack, 1990), provides a general model of rational interaction. Essentially, speech acts are treated as any other action, which can be combined into plans by means of their preconditions and intended effects. In FIPA, communicative acts are defined in terms of the mental states of the agent who issues them. The mental states of agents are represented according to the Belief-Desire-Intention (BDI) framework (Bratman, 1987; Cohen and Levesque, 1990). The bridge between the communicative acts and the behavior of agents is provided by the notions of rational effect (RE) and feasibility preconditions (FP). The rational effect is the mental state that the speaker intends to bring about in the hearer by issuing a communicative act, and the feasibility preconditions encode the appropriate mental states for issuing a communicative act. To guarantee communication, the framework relies on intention recognition on the part of the hearer.

The main drawback of FIPA is that it is counterintuitive to generalize it to non-cooperative settings, which is one reason why it is not yet widely used. The private mental states of agents cannot be inspected, and therefore cannot be verified (Pitt and Mamdani, 1999; Wooldridge, 2000). So, such an approach is not appropriate in situations in which agents may be insincere or non-cooperative, like in argumentation or negotiation. In contrast, meaning should be seen as a public notion (Singh, 2000; Walton and Krabbe, 1995; Fornara and Colombetti, 2004).

7.1.2. Social commitments: action commitment in negotiation

Agent communication languages based on social commitment constitute an attempt to overcome the mentalistic assumption of FIPA by restricting the analysis to the public level of communication (Castelfranchi, 1995; Singh, 2000; Colombetti et al., 2004; Fornara and Colombetti, 2004; Bentahar et al., 2004). Communicative acts are defined in terms of the social commitments they publicly determine for the speaker and the hearer. According to Fornara and Colombetti (2004), commitment is “a social relationship between the speaker and the hearer”. The notion of social commitment does not refer to intentionality; it is a primitive social notion, developed for modeling mutual commitments in contracts and electronic commerce. A commitment in Fornara and Colombetti (2004) has a debtor and a creditor, i.e., respectively, the agent who has the commitment, and the agent to which the commitment is made. If we interpret the creditor as the beneficiary, we have that the creditor is interested in the content of the commitment. The social commitments approach has focussed on competitive types of dialogue, such as negotiation, but not persuasion. Researchers working in the tradition of Winograd and Flores (1986), such as Singh (2000), are interested in cooperative dialogue and negotiation. Hence they tend to investigate the action commitments that typically result from commissives and directives like ‘request’ and ‘propose’. Consider the assumptions which have to be made about the agents participating in a dialogue. FIPA focuses on the private mental states of the agents, and it effectively requires agents to be sincere and cooperative. Social commitments approaches instead highlight the public character of meaning, and instead of requiring sincerity or other properties from the participants in the dialogue, they impose that agents keep their social commitments to other agents, and motivate them in case they are not sincere or cooperative.

The social commitment approach also has a number of drawbacks. A social commitment describes the status of an agreement about future action. Unlike intentions, as usually conceived, social commitments do not have a causal relation to action. So the use of social commitments to model the effects of communication relies on the notion of obligation to explain how commitments affect the behavior of the individual agents. As it appears, commitments require enforcement mechanisms of obligations, like sanctions (Pasquier et al., 2004). This makes sense for competitive environments, like argumentation dialogue or negotiation, but it does not make sense in cooperative environments, like information seeking or inquiry dialogues, where a commitment can simply be interpreted as an expectation. Even though social commitments may be useful in the analysis and design of communication protocols for applications like electronic commerce, this does not mean that they provide an adequate semantics of the speech acts involved.6 The social semantics is an operational semantics, which translates speech acts to transitions in a data structure, the commitment automata.

7.1.3. Social commitments: propositional commitment in persuasion

The argumentation tradition originating with Hamblin (1970) and Walton and Krabbe (1995) focuses on propositional commitments: “to assert a proposition may amount to becoming committed to subsequently defending the proposition, if one is challenged to do so by another speaker” (Walton and Krabbe, 1995). However, this kind of propositional commitment is biased towards argumentation dialogue, with assertive speech acts like ‘assert’ or ‘challenge’, thus failing to be general enough for cooperative dialogues.

In many social semantics the distinction between the commitment to an action and the propositional commitments of assertives becomes blurred. This is a problem when dealing with argumentation and persuasion dialogues, where the notion of propositional commitment is basic. According to Bentahar et al. (2004), the difference between an action commitment and a propositional commitment lies only in the type of content, and both kinds of commitment are fulfilled (or violated) if the content is true (false) in the world, albeit the debtor cannot do anything to make a propositional commitment true, whereas he can perform the action object of a commitment. At a closer analysis, this definition of fulfillment is not correct for action commitment. According to these authors, a commitment is fulfilled if (at the deadline time) its content is true in the world. This objectivistic solution is too weak: fulfillment does not only depend on what is true in the world, but also on what is believed by the creditor. Thus the creditor can still claim to be entitled to the commitment, until he is convinced and the evidence is shared by both agents. Moreover, this view of fulfillment is not realistic for propositional commitment. For propositional commitments the problem is made worse by the fact that their content is not restricted to actions whose execution can be monitored in the world. Consider, e.g., a commitment towards the fact that Al Gore would have been a better president than Bush. There is no way to fulfill such a hypothetical commitment, unless one of the agents concedes on the basis of arguments. This is the general case in persuasion and argumentation, e.g., in a political debate or in a trial. An alternative solution is to define that a propositional commitment is fulfilled when the creditor also becomes committed to the proposition. But this is too strong, since not all assertions and informs aim to make the hearer believe them, and vice versa, not all informs and assertions aim to satisfy a goal of the creditor to know information. For example, in information seeking, the creditor wants to have reliable information but, in a dispute, he wants to win. In general, and in contrast with action commitment, there is not always a creditor who has the goal to have some information. In our approach, a propositional commitment is fulfilled when the hearer concedes, and, thus, cannot challenge the proposition anymore. Other possible goals, like that the hearer must come to believe the proposition, depend on particular types of dialogues and have their own fulfillment conditions. Our role-based semantics allows us to associate different fulfillment conditions to the roles that belong to different dialogue games.

6 We are aware of the difference between semantics in computer science and in linguistics. In linguistics, a semantics of an agent communication language would rather be called a pragmatics, because it mainly concerns the correct use of these acts. See the insightful discussion in Coleman et al. (2006).

The social semantics approach – with the noticeable exception of Kibble (2005) – generally makes only the speaker committed to the propositional content of an assertive. It does not express the notion of communicative intention: an inform makes mutually believed the intention of the speaker that the addressee believes the proposition. Social semantics delegates the coherence of a dialogue to the protocol level, by dropping communicative intentions from the speech act level. Concerning concessions, inspired by Kibble (2005), we argue that an assertion creates by default a concession of the hearer, which can be in contradiction with his beliefs, unless he challenges the assertion. When the hearer challenges the assertion, the contradiction is passed to the speaker, and forces a retraction by the speaker, if not challenged in turn; and so on, until one of them does not have any arguments left to put forward, and concedes or retracts explicitly. Finally, considering the creation of a commitment as the result of an assertion, as social commitments semantics does, is different from the traditional interpretation, which sees propositional commitments as a kind of action commitment to defend the proposition. Instead, a commitment to defend is typically created by the challenge.
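The default-concession dynamics can be sketched as updates on a public commitment board. The data structures and function names below are illustrative assumptions, not the paper's formal rules; they only capture the two moves discussed: an assertion creating a default concession of the hearer, and a challenge withdrawing it.

```python
def assert_prop(board, speaker, hearer, p):
    """An assertion commits the speaker to p and, by default, makes the
    hearer concede p ('silence means consent')."""
    board[speaker].add(("committed", p))
    board[hearer].add(("conceded", p))

def challenge(board, challenger, p):
    """A challenge withdraws the challenger's default concession of p,
    passing the burden back to the speaker."""
    board[challenger].discard(("conceded", p))

board = {"a": set(), "b": set()}
assert_prop(board, "a", "b", "p")
assert ("conceded", "p") in board["b"]   # default concession by silence
challenge(board, "b", "p")
assert ("conceded", "p") not in board["b"]
```

A fuller model would also record the speaker's resulting commitment to defend p after the challenge; the sketch stops at the two moves the text describes.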

7.1.4. Bridges

Recently, some other papers have gone in the same direction of redefining the FIPA semantics: e.g., (Nickles et al., 2006; Verdicchio and Colombetti, 2006; Gaudou et al., 2006a). Like us, most of them distinguish between public and private mental attitudes. There are various differences. Other solutions need to add to dialogue new theoretical concepts which are not always completely clear or diverge from existing work. In particular, Gaudou et al. (2006a) use an explicit grounding operator, which only partially overlaps with the tradition of grounding in theories of natural language dialogue. Opinions (Nickles et al., 2006) are introduced specifically for modeling dialogue, but with no relation to persuasion and argumentation. Finally, commitments in Verdicchio and Colombetti (2006) overlap with obligations. Moreover, the approaches relate to the well-known FIPA semantics to different degrees: Gaudou et al. (2006a) and Nickles et al. (2006) try to stay close to the original semantics, as we do, while Verdicchio and Colombetti (2006) substitute it entirely with a new semantics, which, among other things, does not consider preconditions of actions.

7.1.5. Other work in agent communication

In this paper we do not consider the construction of communicative protocols out of individual speech acts. There exist first-principle methodologies, in which the semantics of a protocol is defined completely in terms of the semantics of the individual speech acts. Alternatively, larger protocols may be constructed from basic protocols consisting of speech acts that naturally belong together. For example, propose, accept and reject form such a basic interaction protocol; see Section 4.

7.2. Roles

The distinguishing feature of our approach is that the public mental attitudes attributed to agents during the dialogue are associated with roles. The importance of roles is recognized in multiagent systems, where their function ranges from attributing responsibilities to assigning powers to agents in organizations. In this paper we exploit the notion of roles as introduced in agent-oriented software engineering, e.g., GAIA (Wooldridge et al., 2000) and TROPOS (Bresciani et al., 2004). We distinguish interactive roles, such as speaker, (over)hearer and addressee. Clearly, different constitutive rules apply to speaker and hearer. Further, we could add rules so that the effects of implicit acknowledgement differ between the addressee of a message and a mere overhearer (Gaudou et al., 2006b). We also distinguish social roles, which belong to a particular type of dialogue. E.g., in argumentation, the ‘burden of proof’ is different for the defender and the opponent of an opinion. Moreover, if there is some authority relation, a request cannot be refused, since it is an order. Note that one agent can play different roles in different interactions with the same or different interlocutors: the buyer of a good may become a seller in a second transaction. Because social roles are associated with dialogue types, with specific sets of dialogue rules, roles allow us to reason about assumptions in different kinds of dialogues. E.g., sincerity could be assumed in cooperative dialogues, such as information exchange, but not in non-cooperative dialogues, such as persuasion or negotiation. Ostensible beliefs and the grounding operator distinguish only interactive roles or different groups of agents.

7.2.1. Roles as prescriptions

Roles in a social institution are traditionally used to determine the obligations, permissions and institutional powers of an agent. Roles prescribe which possible speech acts are allowed for an agent, and possibly which acts must be used to respond. This idea is most prominent in the ISLANDER system (Esteva et al., 2002), in which the roles of agents determine the interface with the environment. In our role model (Boella and van der Torre, 2004) roles are always associated with some kind of institution, see also e.g. Kagal and Finin (2005), which is described by constitutive rules. In the case of dialogue, the institution is represented by the current dialogue game played by the participants.

In the role-based semantics introduced in this paper, the moves available to the dialogue participants in the same dialogue game are based on the mental attitudes of the roles only, not on the mental attitudes of the agents. Since an agent enters a dialogue only in a certain role, the communicative actions at his disposal depend on the role. Thus, agents participating in a dialogue in different roles can perform different kinds of actions. This is due to the fact that the communicative action performed depends on the constitutive rules of the dialogue game: if an agent utters a sentence which is not recognized as a communicative action because the agent is not playing the right role, the communicative action is considered not to be performed.

A typical situation is represented by the Contract Net Protocol (Smith, 1980), where the initiator role and the participant role can perform different actions, e.g., call for proposal and proposal respectively. The initiator and participant roles are already present in the Contract Net Protocol. The distinguishing property of our approach is that roles are not simply labels, but are associated with instances representing the state of the participant in the interaction. The state is represented as a set of beliefs and goals attributed to the role-enacting agent. This state is modified by the communicative actions performed during the dialogue according to the constitutive rules of the dialogue game.

Each instance of a dialogue game is associated with instances of the roles played by the agents; so each agent is associated with a different state of the interaction in each dialogue he is participating in. For example, an agent who is playing the role of participant in a negotiation can at the same time participate as initiator in another negotiation to subcontract part of the task. In each of his roles (one as participant and many as initiator) the agent is associated with a set of beliefs and goals representing the situation of the conversation thus far. Moreover, the price the agent as initiator can pay to a subcontractor depends on the price for which it undertook the task as participant. So all the roles must be related to a common agent, who has to direct the different negotiations according to his private reservation price and to the outcome of the other interactions.
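The role-instance data structure described above can be sketched as follows. This is an illustrative encoding under our own naming assumptions (`RoleInstance`, `Agent`, `enact` are not from the paper): one agent owns several role instances, each carrying the public beliefs and goals of one dialogue.

```python
from dataclasses import dataclass, field

@dataclass
class RoleInstance:
    """Public state of one role in one dialogue game instance."""
    role: str            # e.g. "initiator" or "participant"
    dialogue_id: str
    beliefs: set = field(default_factory=set)
    goals: set = field(default_factory=set)

@dataclass
class Agent:
    name: str
    roles: list = field(default_factory=list)

    def enact(self, role, dialogue_id):
        """Create a fresh role instance for this agent in a dialogue."""
        inst = RoleInstance(role, dialogue_id)
        self.roles.append(inst)
        return inst

# One agent, participant in one negotiation and initiator in another:
ag = Agent("x")
r1 = ag.enact("participant", "negotiation-1")
r2 = ag.enact("initiator", "negotiation-2")
r1.goals.add("do(task)")
assert r2.goals == set()  # each role instance keeps its own state
```

The private reservation price the text mentions would live on the `Agent` object, not on any role instance, which is exactly the public/private split the model requires.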

7.3. Roles as expectations

Agents can make predictions, and use these predictions to coordinate their behavior with an agent, due to the fact that the agent enacts a particular role. This predictive aspect is common in the social sciences, made famous by the restaurant script (Schank and Abelson, 1977) and emphasized in agent theory by Castelfranchi (1998). An example from human life is the fact that the car of someone taking driving lessons is clearly marked with a sign, like ‘L’ or ‘E’. This sign does not change the prescriptive status – the traffic code applies just as much – but it signals to other drivers to be careful and more considerate.

In our role-based semantics, expectations are based both on the mental attitudes ascribed to the agent and on those ascribed to the role. To play a role, an agent is expected to act as if the beliefs and goals of the role were his own, and to keep them coherent, as he does for his own mental attitudes. He should adopt his role’s goals and carry them out according to his role’s beliefs. This holds despite the fact that the model remains neutral with respect to the motivations that agents have when they play a role, since the agents can adopt the mental attitudes attributed to roles as a form of cooperation, or they can be publicly committed to their roles. The roles’ attitudes represent what the agent is publicly held responsible for: if the agent does not adhere to his role, he can be sanctioned or blamed.

Expectations also follow from the objectives of the agents. For agent communication languages, the most convincing example has to do with bargaining. In protocols like the Contract Net, discussed above, there are no constraints on the content of the proposals. However, purely based on the apparent objectives of agents in entering into a conversation in a particular role of this type in the first place, we can infer a number of preferences:

– the initiator wants to have some task achieved, otherwise he would not send the call for proposals;
– the initiator wants to give up as little as possible in return;
– a participant may either want to achieve the task, or not. In the first case, he will send some offer. In the last case he will send a reject, or fail to reply;
– if a participant is interested, he will want as much as possible in return for doing the task.

From such preferences, we can infer some coherence conditions on the content of proposals. For example, it is very unlikely that a participant will first offer to do the task for 40, and later to do the task for 50, because the participant does not expect the initiator to accept the higher offer after the lower offer was declined.
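One such coherence condition can be made concrete as a simple check: a participant's new asking price should not exceed any price already declined. The function name and encoding are our own illustrative assumptions, not part of the Contract Net Protocol.

```python
def coherent_next_offer(declined_offers, new_offer):
    """Return True if new_offer is coherent: a participant should not ask
    more for the same task after a lower request was already declined."""
    return all(new_offer <= prev for prev in declined_offers)

assert coherent_next_offer([40], 35)       # lowering the price is coherent
assert not coherent_next_offer([40], 50)   # raising after a refusal is not
```

Such checks could be attached to the role instance of the participant, so that the expectation is evaluated against the public state of the negotiation rather than the agent's private preferences.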

7.4. Our own earlier work

This journal paper is a revised and extended version of various workshop and conference papers. Boella et al. (2005a, 2006b) discuss a role-based semantics which can deal with both mentalistic approaches and social commitments. Of this work, Boella et al. (2005a) is based on our normative multiagent systems framework (Boella and van der Torre, 2006a,b), which describes roles by means of the agent metaphor, and is formalized in Input/Output logic (Makinson and van der Torre, 2000). This work only addresses part of the problem; it focuses on persuasion dialogues. Boella et al. (2006a) provide a common role-based semantics for both approaches, using a representation format based on the FIPA semantic language. We model additional categories of speech acts, like commissives and directives, which can be applied in negotiation dialogues between cooperative agents.

8. Conclusions

Public mental attitudes have been proposed to bridge the gap between the two main traditions in defining a semantics for agent communication languages, based either on mental attitudes or on social commitments. These traditions, even if they share the idea of speech acts as operators with preconditions and effects, and of agents playing roles like speaker and hearer, rely on completely distinct ontologies. Not only does the mental attitudes approach refer to concepts like belief, desire, goal or intention while the social commitment approach is based on the notion of commitment, but the two approaches also refer to distinct speech acts, distinct types of dialogue games, and so on. Public mental attitudes avoid the problems of mentalistic semantics, such as the unverifiability of private mental states, and they allow the reuse of the logics and implementations developed for FIPA-compliant approaches. The distinction between private and public mental attitudes allows us to model multiple communication sessions, as well as bluffing and lying agents in persuasion and negotiation.

We associate public beliefs and goals, representing what the agent is publicly held responsible for, with a role instance. If the agent does not adhere to his role, then he can be sanctioned or blamed. These mental attitudes contain precisely those public beliefs and goals following from uttering speech acts – requirements as well as consequences – together with commonly held beliefs about the attitudes of roles. The existence of these public mental attitudes is the only property of role instances we require, and this property therefore characterizes our notion of role in agent communication. It is also the distinction with most other role-based models, which typically also allow private mental attitudes, like secrets, in role-enacting agents or agents-in-a-role. The final constituents of our model are the set of constitutive rules defining the effects of speech acts on the mental attitudes of role instances, and the dialogue type, such as information seeking, negotiation and persuasion. Communication is public for every agent who overhears the speech acts and has the same prior knowledge of roles. Consequently, for public communication, the mental attitudes of roles are public. The challenge for agent communication languages whose semantics is based on public mental attitudes, such as our role-based semantics, is to define mappings from existing languages to the new one. In this paper we have shown how to map the two existing traditions to our role-based ontology of public mental attitudes, and how it allows for a comparison.

The mapping from the FIPA semantics to our role-based semantic language interprets the feasibility precondition as a kind of presupposition. For example, if an agent i informs another agent that φ with inform(i, j, φ), then in the original FIPA semantics agent i should already believe the proposition φ; otherwise, i would be insincere. With public mental attitudes, however, it is not the case that an agent must already publicly believe φ: rather, it is presupposed that the agent believes φ, and therefore the agent is publicly held responsible for the fact that φ holds. If φ does not hold, then the agent can be sanctioned or blamed. Moreover, FIPA notions like reliability and sincerity are no longer built into the formal system, in order to integrate the social commitment approach and deal, for example, with non-cooperative types of dialogue like argumentation or negotiation. We make properties like sincerity and cooperativity explicit in the formal language, to distinguish between different contexts of use.

The first mapping from the social commitment semantics focuses on action commitments (Singh, 2000; Fornara and Colombetti, 2004) with an operational semantics. An agent is committed to his public beliefs and goals, in the sense that he has to defend or explain them in case he is challenged. We introduce six update rules mirroring the commitment update rules of Fornara and Colombetti: an unset commitment corresponds to a goal of the creditor; a commitment is pending when it is a goal of the creditor and the debtor of the commitment conditionally wants to perform the action if the associated condition q is true, and the creditor has this as a belief; and so on. Then:

A promise introduces a pending commitment of the speaker.
A request introduces an unset commitment with the receiver as debtor, i.e., the agent of the requested action.
Accept and reject change the state of an existing unset commitment to pending and cancelled, respectively. In order to account for this fact, we insert in the antecedent of the rules for accept and reject the reference to the configuration of beliefs and goals that represent an existing commitment.
A propose is a complex speech act composed of a request and a conditional promise; it introduces an unset commitment with the receiver as debtor and a pending commitment with the speaker as debtor. Since a propose is used in a negotiation, q and p refer respectively to an action of the speaker and of the receiver.

The second mapping from the social commitment semantics focuses on propositional commitments. We distinguish propositional and action commitment by introducing the notion of concession, inspired by Walton and Krabbe (1995) and Kibble (2005). We map propositional commitments to the roles’ beliefs, and action commitments to their goals. The constitutive rules of a dialogue game represent the effects that speech acts have on the roles’ mental attitudes, and, thus, indirectly on commitments. Concessions are introduced as the absence of a belief to the contrary, and prevent further challenges. This is analogous to action commitments, which prevent future actions that require the same resources. We illustrate propositional commitments by modelling a particular persuasion protocol, inspired by Walton and Krabbe. This protocol allows the ‘silence means consent’ principle. Since under this principle concessions of the hearer can be made by default, i.e., by not challenging an assertion, we demonstrate that the semantics of speech acts should be expressed not only in terms of the effects on the attitudes of the speaker, but also on those of the hearer.

36 G. Boella et al. / Common Ontology of ACL

The first issue for further research is the use of our role-based ontology for descriptions of expected behavior. To play a role, an agent is expected to act as if the beliefs and goals of the role were his own, and to keep them coherent, as he does for his own mental attitudes; see also Goffman (1959). He should adopt his role's goals and carry them out according to his role's beliefs. The only thing agents have to do when communicating is to respect the rules of the dialogue, or they are out of the game. The agents thus have to respect the cognitive coherence of the roles they play, or they end up in a contradictory position. We may adopt Pasquier and Chaib-draa's (2003) view that dialogue arises from the need to maintain coherence of mental states: "two agents communicate if an incoherence forces them to do so. [...] Conversation might be seen [...] as a generic procedure for attempting to reduce incoherence". An agent engaged in the dialogue tries to avoid contradictions, not with his private mental states, but with the public image which his role constitutes. In contrast with Pasquier and Chaib-draa (2003), in our model it is not the coherence of the private mental states of the agents that matters, but the coherence of the mental attitudes of the roles. Referring only to the agents' beliefs would not be realistic, since there is no way to ensure that the addressee, as an autonomous agent, accepts to believe a proposition. This is one of the limitations of the traditional mentalistic approach. We overcome it using rules that refer to mental attitudes, but to those of the roles composing the game rather than to the agents' private ones. According to Goffman (1959), the pretense that roles and individual agents are identical is very common in everyday practice. Although we suspect that others have beliefs and strategic goals different from the ones they publicly express, we pretend not to see this. If we did not, we would have to accuse the other openly of insincerity, often without proper evidence, and the resulting conflict would be unwelcome to all participants.

The second issue for further research is a formalization of the notion of responsibility, and of the way in which the obligation to play a role consistently can be enforced. The introduction of obligations requires reference to an explicit multi-agent normative system, as described in (Boella and van der Torre, 2004). Formalizing responsibility and enforcement starts with formalizing violations, which should be based on explicit evidence. Consider again the example of a liar who, once he starts lying, has to continue the dialogue consistently with what he said before, independently of his real beliefs. To formalize the consistent liar, we also have to define what it means that he is no longer able to lie, for example due to explicit evidence to the contrary.

References

Allen, J. F. and Perrault, C. R. (1980). Analyzing intention in utterances. Artificial Intelligence, 15:143–178.
Austin, J. (1962). How to Do Things with Words. Harvard University Press, Cambridge, Mass.
Bentahar, J., Moulin, B., Meyer, J., and Chaib-draa, B. (2004). A modal semantics for an argumentation-based pragmatics for agent communication. In Argumentation in Multi-Agent Systems (ArgMAS'04), LNCS 3366, pages 44–63. Springer, Berlin.
Boella, G., Damiano, R., Hulstijn, J., and van der Torre, L. (2006a). ACL semantics between social commitments and mental attitudes. In Proceedings of the AAMAS 2006 Workshop on Agent Communication (AC'06), LNCS to appear. Springer Verlag, Berlin.
Boella, G., Damiano, R., Hulstijn, J., and van der Torre, L. (2006b). Role-based semantics for agent communication: Embedding of the 'mental attitudes' and 'social commitments' semantics. In Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'06), pages 688–690. ACM Press, New York.
Boella, G., Damiano, R., Hulstijn, J., and van der Torre, L. (2007a). Distinguishing propositional and action commitment in agent communication. In Proceedings of the Workshop on Computational Models of Natural Argument (CMNA'07).
Boella, G., Hulstijn, J., Governatori, G., Riveret, R., Rotolo, A., and van der Torre, L. (2007b). FIPA communicative acts in defeasible logic. In Proceedings of the Workshop on Nonmonotonic Reasoning, Action and Change (NRAC'07).
Boella, G., Hulstijn, J., and van der Torre, L. (2005a). A synthesis between mental attitudes and social commitments in agent communication languages. In Proceedings of the 2005 IEEE/WIC/ACM International Conference on Intelligent Agent Technology (IAT'05), pages 358–364. IEEE Computer Society.
Boella, G., Odell, J., van der Torre, L. W. N., and Verhagen, H., editors (2005b). AAAI 2005 Fall Symposium on Roles, an Interdisciplinary Perspective, Arlington, VA, 03/11/05–06/11/05, volume FS-05-08 of AAAI Technical Report.
Boella, G. and van der Torre, L. (2004). Organizations as socially constructed agents in the agent oriented paradigm. In Proceedings of the Workshop on Engineering Societies in the Agents World (ESAW'04), LNAI 3451, pages 1–13. Springer Verlag, Berlin.
Boella, G. and van der Torre, L. (2006a). A game theoretic approach to contracts in multiagent systems. IEEE Transactions on Systems, Man and Cybernetics - Part C: Applications and Reviews, 36(1):68–79.


Boella, G. and van der Torre, L. (2006b). Security policies for sharing knowledge in virtual communities. IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans, 36(3):439–450.
Boella, G. and van der Torre, L. W. N. (2007). The ontological properties of social roles in multi-agent systems: Definitional dependence, powers and roles playing roles. Artificial Intelligence and Law Journal (AILaw).
Brandom, R. (1994). Making It Explicit. Harvard University Press, Cambridge, MA.
Bratman, M. (1987). Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, Mass.
Bresciani, P., Giorgini, P., Giunchiglia, F., Mylopoulos, J., and Perini, A. (2004). TROPOS: An agent-oriented software development methodology. Autonomous Agents and Multi-Agent Systems, 8(3):203–236.
Bretier, P. and Sadek, D. (1997). A rational agent as the kernel of a cooperative spoken dialogue system: implementing a logical theory of interaction. In Muller, J. P., Wooldridge, M. J., and Jennings, N. R., editors, Intelligent Agents III, Lecture Notes in Artificial Intelligence 1193, pages 189–203. Springer.
Castelfranchi, C. (1995). Commitment: from intentions to groups and organizations. In Proceedings of ICMAS'95, pages 41–48, Cambridge, MA. AAAI/MIT Press.
Castelfranchi, C. (1998). Modeling social action for AI agents. Artificial Intelligence, 103(1-2):157–182.
Cohen, P. R. and Levesque, H. J. (1990). Intention is choice with commitment. Artificial Intelligence, 42:213–261.
Cohen, P. R. and Perrault, C. R. (1979). Elements of a plan-based theory of speech acts. Cognitive Science, 3:177–212.
Coleman, J., O'Rourke, M., and Edwards, D. B. (2006). Pragmatics and agent communication languages. In Ferrario, R., Guarino, N., and L. P., editors, Proceedings of the ESSLLI Workshop on Formal Ontologies for Communicating Agents. ESSLLI.
Colombetti, M., Fornara, N., and Verdicchio, M. (2004). A social approach to communication in multiagent systems. In Leite, J. A., Omicini, A., Sterling, L., and Torroni, P., editors, Declarative Agent Languages and Technologies (DALT'03), LNCS 2990, pages 191–220. Springer Verlag.
Dastani, M., Dignum, V., and Dignum, F. (2003). Role-assignment in open agent societies. In Proceedings of the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'03), pages 489–496. ACM Press, New York.
Dastani, M., Herzig, A., Hulstijn, J., and van der Torre, L. (2004). Inferring trust. In Proceedings of the Fifth Workshop on Computational Logic in Multi-Agent Systems (CLIMA V), LNAI 3487, pages 144–160. Springer.
Demolombe, R. (2001). To trust information sources: a proposal for a modal logical framework. In Castelfranchi, C. and Tan, Y.-H., editors, Trust and Deception in Virtual Societies, pages 111–124. Kluwer.
Esteva, M., de la Cruz, D., and Sierra, C. (2002). ISLANDER: an electronic institutions editor. In Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'02), pages 1045–1052. ACM Press.
Ferber, J., Gutknecht, O., and Michel, F. (2003). From agents to organizations: an organizational view of multiagent systems. In Agent-Oriented Software Engineering IV (AOSE'03), LNCS 2935, pages 214–230. Springer-Verlag, Berlin.
FIPA (2002a). FIPA ACL communicative act library specification. Technical Report SC00037J, Foundation for Intelligent Physical Agents.
FIPA (2002b). FIPA propose interaction protocol specification. Technical Report SC00036H, Foundation for Intelligent Physical Agents.
Fornara, N. and Colombetti, M. (2004). A commitment-based approach to agent communication. Applied Artificial Intelligence, 18(9-10):853–866.
Gaudou, B., Herzig, A., and Longin, D. (2006a). A logical framework for grounding-based dialogue analysis. Electronic Notes in Theoretical Computer Science, 157(4):117–137.
Gaudou, B., Herzig, A., Longin, D., and Nickles, M. (2006b). A new semantics for the FIPA agent communication language based on social attitudes. In Proceedings of ECAI'06.
Goffman, E. (1959). The Presentation of Self in Everyday Life. Doubleday, Garden City, New York.
Guerini, M. and Castelfranchi, C. (2006). Promises and threats in persuasion. In Proceedings of the ECAI Workshop on Computational Models of Natural Argument (CMNA'06).
Habermas, J. (1984). Theory of Communicative Action, volume I. Beacon, Boston, MA. Translated by T. McCarthy.
Habermas, J. (1987). Theory of Communicative Action, volume II. Beacon, Boston, MA. Translated by T. McCarthy.
Hamblin, C. L. (1970). Fallacies. Methuen & Co, London.
Herrmann, S. (2005). Roles in a context. In Proceedings of the AAAI Fall Symposium Roles'05. AAAI Press.
Juan, T. and Sterling, L. (2004). Achieving dynamic interfaces with agents concepts. In Proceedings of AAMAS'04.
Kagal, L. and Finin, T. (2005). Modeling conversation policies using permissions and obligations. LNCS 3396, pages 120–133. Springer Verlag, Berlin.
Kibble, R. (2005). Speech acts, commitment and multi-agent communication. Computational & Mathematical Organization Theory, 12(2-3):127–145.
Lewis, D. (1969). Convention: A Philosophical Study. Harvard University Press, Cambridge.
Liau, C.-J. (2003). Belief, information acquisition, and trust in multi-agent systems - a modal formulation. Artificial Intelligence, 149:31–60.
Lochbaum, K. E. (1998). A collaborative planning model of intentional structure. Computational Linguistics, 24(4):525–572.
Loebe, F. (2005). Abstract vs. social roles: A refined top-level ontological analysis. In Boella, G., Odell, J., van der Torre, L., and Verhagen, H., editors, Proceedings of the 2005 AAAI Fall Symposium on 'Roles, an Interdisciplinary Perspective', pages 93–100. AAAI Press, Menlo Park.
Mackenzie, J. (1979). Question begging in non-cumulative systems. Journal of Philosophical Logic, 8:117–133.
Makinson, D. and van der Torre, L. (2000). Input-output logics. Journal of Philosophical Logic, 29(4):383–408.
Masolo, C., Guizzardi, G., Vieu, L., Bottazzi, E., and Ferrario, R. (2005). Relational roles and qua-individuals. In Boella, G., Odell, J., van der Torre, L., and Verhagen, H., editors, Proceedings of the 2005 AAAI Fall Symposium on 'Roles, an Interdisciplinary Perspective', paper 103. AAAI Press, Menlo Park.


Maudet, N. and Chaib-draa, B. (2002). Commitment-based and dialogue-game based protocols: new trends in agent communication language. The Knowledge Engineering Review, 17(2):157–179.
Nickles, M., Fischer, F., and Weiss, G. (2006). Communication attitudes: A formal approach to ostensible intentions, and individual and group opinions. Electronic Notes in Theoretical Computer Science, 157(4):95–115.
Pasquier, P. and Chaib-draa, B. (2003). The cognitive coherence approach for agent communication pragmatics. In Proceedings of the 2nd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'03), pages 544–551. ACM Press, New York.
Pasquier, P., Flores, R., and Chaib-draa, B. (2004). Modelling flexible social commitments and their enforcement. In Engineering Societies in the Agent World (ESAW'04), LNCS 3451, pages 139–151. Springer-Verlag, Berlin.
Pitt, J. and Mamdani, A. (1999). Some remarks on the semantics of FIPA's agent communication language. Autonomous Agents and Multi-Agent Systems, 2(4):333–356.
Pollack, M. E. (1990). Plans as complex mental attitudes. In Cohen, P. R., Morgan, J., and Pollack, M. E., editors, Intentions in Communication, pages 77–103. MIT Press, Cambridge, Mass.
Sadek, D. M. (1990). Logical task modelling for man-machine dialogue. In Proceedings of the 8th AAAI Conference, pages 970–975. Morgan Kaufmann.
Sadek, M. D., Bretier, P., and Panaget, E. (1997). ARTIMIS: Natural dialogue meets rational agency. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (IJCAI'97), pages 1030–1035. Morgan Kaufmann.
Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals and Understanding: An Inquiry into Human Knowledge Structures. Erlbaum, New York.
Searle, J. (1969). Speech Acts: An Essay in the Philosophy of Language. Cambridge University Press, Cambridge, UK.
Singh, M. P. (1991). A conceptual analysis of commitments in multiagent systems. Technical Report, Department of Computer Science, North Carolina State University, pages 69–74.
Singh, M. P. (2000). A social semantics for agent communication languages. In Issues in Agent Communication (AC'2000), LNCS 1916, pages 31–45. Springer-Verlag, Berlin.
Smith, R. G. (1980). The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, 29(12):1104–1113.
Stalnaker, R. C. (2002). Common ground. Linguistics and Philosophy, 25:701–721.
Verdicchio, M. and Colombetti, M. (2006). From message exchanges to communicative acts to commitments. In Proceedings of LCMAS'05, pages 75–94. ENTCS 157(4).
Walton, D. N. and Krabbe, E. C. (1995). Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning. State University of New York Press.
Winograd, T. and Flores, F. (1986). Understanding Computers and Cognition. Addison-Wesley, Reading, MA.
Wooldridge, M., Jennings, N., and Kinny, D. (2000). The Gaia methodology for agent-oriented analysis and design. Autonomous Agents and Multi-Agent Systems, 3(3):285–312.
Wooldridge, M. J. (2000). Semantic issues in the verification of agent communication languages. Journal of Autonomous Agents and Multi-Agent Systems, 3(1):9–31.
Yolum, P. and Singh, M. P. (2002). Flexible protocol specification and execution: Applying event calculus planning using commitments. In Proceedings of the 1st International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS'02), pages 527–534. ACM Press.
Zambonelli, F., Jennings, N., and Wooldridge, M. (2003). Developing multiagent systems: The Gaia methodology. ACM Transactions on Software Engineering and Methodology, 12(3):317–370.