A THEORETICAL FRAMEWORK FOR THE CONCEPTION OF AGENCY

Francesco Amigoni, Marco Somalvico, Damiano Zanisi

Politecnico di Milano Artificial Intelligence and Robotics Project, Dipartimento di Elettronica e Informazione, Politecnico di Milano, Italy.

ABSTRACT

In the field of distributed artificial intelligence, cooperation among intelligent agents is a matter of growing importance. We propose a new machine, called agency, which is devoted to solving complex problems by means of cooperation among agents, where each agent is able to perform inferential activities. The aim of the paper is to give rigorous and formal descriptions of the agency and, using these descriptions, to define and prove some interesting properties. The descriptions are based on three formalisms: multilanguage systems, directed hypergraphs, and ER Petri nets. The work is a step toward a methodology for the design and development of systems operating in real-world applications. We provide a theoretical background on which new techniques can be built for testing the requirements of distributed artificial intelligence systems such as agencies. The fundamental formalism for describing agencies is the multilanguage system; starting from it, we capture particular issues (namely, the structure and evolution of an agency) by means of hypergraphs and ER Petri nets. The formalisms support the definition and proof of properties (such as fairness of cooperation among agents).

1. INTRODUCTION

In the field of distributed artificial intelligence, cooperation among intelligent agents is a matter of growing importance. We propose a new machine, called agency, which is devoted to solving complex problems by means of cooperation among agents, where each agent is able to perform inferential activities. The agency belongs to the subfield of DAI concerning cooperation among intelligent entities to reach the solution of a complex problem. Much work has been done in this area, as reported in the survey of Section 2. The distinguishing feature of the agency is that its component elements, or agents, are inferential intelligent systems, that is, systems able to reason on data, exchange information, and so on. The agency, like any DAI system, can be formally described in many ways (that is, using many different formalisms); each description is suited to highlight particular aspects of the agency. The aim of the paper is to give rigorous and formal descriptions of the agency, to define some properties, and to prove some theorems about agencies. The work is theoretical: we present models and theoretical results, but no real system (that is, a system built to operate on real-world jobs) is illustrated. The informal concept of agency is first presented; then we show how the study and analysis of agencies can be simplified by a division into three levels, called the syntactic, semantic, and pragmatic levels. Each level is described by means of



formalisms, and each formalism is suited to put in evidence particular aspects involved in the study and analysis of agencies. The privileged formalism for describing agencies is the multilanguage system, a formal system devised by Giunchiglia et al. (see [22]), which allows one to represent an agent as a theory (that is, each agent has its own language to represent facts and its own set of rules of inference to derive new facts from older facts) and the links among agents as special rules of inference called bridge rules, rules whose premises and conclusions are formulae of different agents. A topological description based on hypergraphs (an extension of ordinary graphs [1]) works well for defining some properties and for proving some theorems about agencies. The third formalism used for describing agencies is based on ER Petri nets [21], an extension of Petri nets. ER Petri nets are an executable formalism, so we can follow the evolution of an agency (represented by an ER Petri net) and simulate the behavior of that agency in critical situations or conditions. By means of the formal descriptions of agencies we state some properties and prove some theorems; thus an important role is played by the background theory of the formalisms: we apply the consolidated results about the formalisms (i.e. multilanguage systems, hypergraphs, and ER Petri nets) to agencies in order to make it easier to discover new results. The paper is structured as follows: Section 2 is a survey of the major results presented in the literature in the field of DAI. Section 3 is an informal introduction to the agency and to the division of the study of agencies into three levels; in addition, it points out some issues and some critical problems whose solutions are presented in Section 7. In Section 4 we formalize the concept of agency by means of multilanguage systems; in particular, we describe the three levels of the agency incrementally. In Section 5 we describe the three levels from a topological point of view using the formalism of hypergraphs. In Section 6 we give an evolutional description of the three levels of an agency by means of ER Petri nets. Section 7 is a review of the main results obtained by taking advantage of the descriptions given in Sections 4, 5 and 6. Finally, Section 8 summarizes the content of the paper.

2. DAI SYSTEMS

The terms ‘agent’ and ‘agency’ were first used by Marvin Minsky in [28] in order to explain the human mind. In fact, Minsky conceives the mind as the result of cooperation among agents organized in a hierarchy of agencies (sets of agents), each one specialized to work out a particular task. Minsky argues that the phenomena of mind performance are of such complexity that it is quite unlikely that each of these phenomena can be appropriately captured by one given model in a satisfactory and complete way. Thus it is necessary to associate several coexisting and competing models with each phenomenon. In conclusion, the totality of the various agents required to provide a global set of models of all the mind phenomena constitutes a particular agency called by Minsky “the society of mind”, where each agent is associated with a model. Subsequently, the word ‘agent’ was used to name the entities forming the systems of Distributed Artificial Intelligence (DAI, [7]). In this Section we refer to the agent as a “classical” agent, whereas we give our definition of the agent of the agency in the next Sections. DAI can be conceived as arranged in various branches which depend on the kind of agents making up the systems: Parallel Artificial Intelligence is the field concerning very simple agents, that is, agents able to perform only elementary actions such as the instructions of a microprocessor. When the grain of the agents is less fine, DAI becomes the area of Distributed Expert Systems (DES). In this case, agents are artificial intelligence systems, because each one has a database representing its knowledge as well as the capability to produce new facts by operating on it. DES, in turn, is further divided into two subareas. The first subarea is represented by Distributed Knowledge Systems (DKS), which deal with systems in which all the agents are different, in the sense that they have specific abilities. The second subarea is represented by Distributed Problem Solving (DPS), which concerns homogeneous systems, that is, systems where every agent may perform the


task of any other agent. A subclass of DPS is Cooperative Distributed Problem Solving (CDPS, see [13] for a survey of CDPS systems); a CDPS system is a network of problem solvers (i.e. artificial intelligence systems) that work together. Among the many models of agents that have been developed, we can distinguish models utilized to describe agents at an implementational level and models utilized to describe agents at a logical level. The former are those used in DAI for solving distributed problems in particular application domains; the latter are logical descriptions used to study properties or to depict peculiar circumstances. The logical model of an agent could, then, be implemented with one or more implementational models. Our work is devoted to describing DPS systems from a logical point of view; in particular, CDPS systems are suitable to be formalized by the formalisms we propose. In the following we give a survey of the two classes of models: implementational models and logical models.

2.1 Implementational Models of Agents for DPS Systems

Implementational approaches to DPS are systems created for specific application domains. Thus, in their development it is necessary to resolve problems that arise when attempting to use a system in a real application; examples of such problems are: limited resources for computation and communication, errors in the elaboration of data or in the measurement of signals, and so on. We present a list (in no particular order) of the major implementational models that can be found in the literature.

a) Blackboard
The system is composed (see [32], [33] for further details) of independent and distinct Knowledge Sources, KSs for short, communicating by means of a global database called the blackboard. The database is organized in levels, and a KS can read or write information only in specific levels. The information read or written by a KS is used to derive new knowledge; this process continues until a global goal is reached.
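The read/write cycle above can be sketched as follows; this is a minimal illustration, not the architecture of [32], [33], and all class and function names are ours. Each KS is registered on a read level and a write level, derives new facts from what it reads, and posts them until the goal entry appears or no KS can produce anything new.

```python
class Blackboard:
    """Global database organized in levels; each level holds a set of facts."""
    def __init__(self, levels):
        self.data = {level: set() for level in levels}

    def read(self, level):
        return set(self.data[level])

    def write(self, level, fact):
        self.data[level].add(fact)

class KnowledgeSource:
    """A KS reads one level, derives facts with `rule`, and writes another level."""
    def __init__(self, read_level, write_level, rule):
        self.read_level, self.write_level, self.rule = read_level, write_level, rule

    def step(self, bb):
        # Derive facts from the readable level; post only what is genuinely new.
        new = self.rule(bb.read(self.read_level)) - bb.read(self.write_level)
        for fact in new:
            bb.write(self.write_level, fact)
        return bool(new)

def run(bb, sources, goal_level, goal):
    """Fire knowledge sources until the goal is posted or nothing new appears."""
    while goal not in bb.read(goal_level):
        if not any(ks.step(bb) for ks in sources):
            return False  # quiescence without reaching the global goal
    return True
```

For instance, a single KS that squares the numbers on a "raw" level will post 9 on the "derived" level and satisfy the goal.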

b) Negotiation
This model is based on two assumptions: each agent has only a partial view of the other agents; the agents cooperate to solve a problem. The agent that is solving a complex problem, the manager, decomposes it into simpler problems, then broadcasts a task announcement message to all the other agents. The agents, in turn, depending on their abilities, return bid messages. The manager uses some parameters to select the best offer, and gives one of the tasks to the agent that made the winning offer (this agent is called the contractor). The negotiation model has been proposed in many formulations in the literature: the original model (called Contract-Net) was developed by Smith and Davis (see [38], [39], [40], [41], [37]); other uses of negotiation can be found in Conry et al. [5] and Cammarata et al. [4].
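One round of the announce–bid–award exchange can be sketched as below. This is an illustrative simplification, not Smith and Davis's message formats: agents are modelled as bid functions (returning None when unable to take the task), and the manager's selection parameters are abstracted into a single `score` function.

```python
def contract_net(task, agents, score):
    """One Contract-Net round: broadcast `task`, collect bids, award to the best.

    agents: dict mapping agent name -> bid function (task -> bid or None)
    score:  the manager's criterion for comparing bids (higher is better)
    Returns the name of the contractor, or None if nobody bids.
    """
    # Task announcement message, broadcast to every agent.
    bids = {name: bid(task) for name, bid in agents.items()}
    # Agents unable (or unwilling) to perform the task return no bid.
    bids = {name: b for name, b in bids.items() if b is not None}
    if not bids:
        return None
    # The manager selects the best offer; that agent becomes the contractor.
    return max(bids, key=lambda name: score(bids[name]))
```

With three agents bidding 5, 8, and nothing, the manager awards the task to the agent offering 8.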

c) Centralized Multiagent Planning
The model (see [4]) has been developed in order to solve conflicts among agents. This approach is based on the identification of a privileged agent elected by all the agents of the system using negotiation. The elected agent forms a multiagent plan that specifies all agents' future actions and interactions. The criteria used for the election can be of three different types: i) a priori agreements, ii) exchange of information about the knowledge each agent has about other agents, iii) exchange of parameters about the state of the agents.

d) Distributed Multiagent Planning


The model removes the limitation of the previous model of having only one planner. The approach (see [6], [20], [26]) works by providing each agent with a model of the other agents. In a particular application in this field, called the Deductive Belief Model (developed by Konolige and Pollack [27]), there are two classes of agents: observers and actors. The former class identifies those agents which give intentions and plans to the agents of the latter class. The model takes advantage of a logical formalism used to represent the reasonings of the agents. Additional communication is required when the observers need more information about the actors.

e) Functionally-Accurate Cooperation
In this model, developed by Lesser, Corkill and Erman [29], agents cooperate by exchanging partial and tentative results to converge on a unique and consistent problem solution. An evolution of Functionally-Accurate Cooperation is the Open System approach elaborated by Hewitt [25].

f) Organizational Structuring
The approach achieves cooperation among agents by defining an organizational structure, that is, the layout of the information and control relationships existing among agents. Some systems created following these directive ideas are due to the works of Galbraith [15], [16], Gasser [18], and Durfee, Lesser and Corkill [12].

g) Partial Global Planning
In this model, developed by Durfee and Lesser [10], [11], each agent can reason about the implications of its actions on other agents' states (i.e. on their goals, plans, and beliefs). This reasoning ability is the basis for deciding how to coordinate with others.

2.2 Logical Models of Agents for DPS Systems

Logical approaches to DPS are systems that attempt to use rigorous (mathematical or logical) models of agents' reasonings and interactions to investigate properties of coordination and cooperation that are independent of any application domain. In other words, logical models are devoted to understanding the theoretical capabilities of DPS systems. We present a list (in no particular order) of the major logical models in the literature.

a) Modal Logic
Mazer [30] suggests using modal logic for representing knowledge in DAI. This is useful for implementing systems based on negotiation and blackboards; in addition, with the introduction of the concept of execution, inferential reasoning can be formalized.

b) Knowledge-level Agent
The model, developed by Nilsson and Genesereth [19], is based upon the idea that "intelligence appears to be a phenomenon that transcends implementation technology, such as biology or electronics". Consequently, the level of description of intelligent agents should be abstract. The model uses a set of sentences of the predicate calculus representing the knowledge of the agent and a function called database that modifies this set when specific conditions in the world are met.

c) Game Theory
In this approach, proposed by Rosenschein and Genesereth [36], agents work toward individual goals while choosing the best actions for the whole system. This is done by maximizing a set of parameters known by all the agents and, in this way, a kind of cooperation is accomplished. Some forms of


communication can be necessary in situations where complete information about the parameters to maximize is not available.
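The mechanism can be illustrated in a few lines (our encoding, not [36]'s formulation): because every agent knows the same payoff table, each can independently compute the joint action that maximizes the commonly known value, and coordination emerges without any messages.

```python
def best_joint_action(joint_actions, value):
    """Pick the joint action maximizing the commonly known payoff `value`.

    Every agent runs this same computation on the same inputs, so all of
    them select (their part of) the same joint action.
    """
    return max(joint_actions, key=value)
```

For example, two agents choosing between actions "a" and "b" with a shared payoff table both settle on the joint action with the highest value.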

d) Multientity
The agents of this model (see [31]) are nondeterministic finite state automata. Each agent has to solve an individual problem. The set of all the states of the agents is called the configuration, and there are actions that modify the configuration. The final goal is reached when all the individual goals are reached (i.e. all individual problems are solved).
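A deterministic slice of this model can be sketched as follows (the encoding is illustrative, not [31]'s): each agent is an automaton with a transition table, the configuration is the tuple of current states, and the final goal holds when every agent sits in its goal state.

```python
class Agent:
    """A finite state automaton with a start state, a goal state, and
    a transition table mapping (state, action) -> next state."""
    def __init__(self, start, goal, transitions):
        self.state, self.goal = start, goal
        self.transitions = transitions

    def act(self, action):
        # Actions with no matching transition leave the state unchanged.
        self.state = self.transitions.get((self.state, action), self.state)

def configuration(agents):
    """The configuration is the tuple of all the agents' current states."""
    return tuple(a.state for a in agents)

def goal_reached(agents):
    """The final goal is reached when every individual goal is reached."""
    return all(a.state == a.goal for a in agents)
```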

3. INFORMAL DEFINITION OF AGENCY AND SOME ISSUES

In this Section we introduce the idea of the agency (in Subsection 3.1) and the division of the study of agencies into three levels. At the end of the Section, some critical problems (namely predictions, properties, hypotheses, ...) involved in the study of agencies are presented; the solutions to these problems (namely confirmation of predictions, investigation of properties, ...) are presented in Section 7, where we exploit the power of the formalisms introduced in Sections 4, 5, and 6.

3.1 The Idea of Agency

The agency is a new kind of machine conceived within artificial intelligence that is intended to be used for solving problems which present the following characteristics: i) they are of very complex nature or structure; ii) they lack appropriate and established paradigms able to provide their solutions; iii) they present dimensions of very large size, although they are scalable in these dimensions. Its main feature is the way a problem is approached: the problem is decomposed into simpler sub-problems that are given to the agents, the components of the agency. In addition, the agents may cooperate, namely one agent's support, in any operation, can be required by another agent. Thus, the use of agencies is justified in some situations, for example when no single agent is able to solve a problem. The first idea of the agency, intended as a set of agents cooperating in order to solve problems, was given by Marvin Minsky. Minsky conceives the human mind as a society of agents: an agent has peculiar abilities that may be used by other agents, so that their organization (which are the agents one agent can refer to) produces the mind (memory, learning, reasoning, intelligence, ...). The explicit goal of this theory is to explain the mind as an organized aggregation of elements, called agents, each one providing a partial and specific performance of intelligence. In fact, each agent can always be decomposed into other agents until elementary agents are reached. An important characteristic of the agents is that they are specialized, because they can perform only a specific task. Our definition of agency differs from that given by Minsky in these last two aspects: we consider agents with universal abilities and with a minimum degree of intelligence (whereas Minsky's agents have specific abilities and can be non-intelligent). More precisely:

Definition 3.1.1: the agency (or cooperation machine) is the machine achieving cooperation. It is made up of intelligent components (called agents) interacting with each other in order to offer or receive collaboration for reaching a global goal.

By intelligent agents we mean that each component of the agency has inferential abilities (the ability to derive conclusions from premises); so the agents are not elementary and they are not task-oriented. The inference is carried out by means of algorithms developed in artificial intelligence in order to emulate reasoning. In addition, each agent has the means to cooperate with any other agent, if necessary. The following example shows a real problem that can be solved using an agency.

Example 3.1.1: the goal is the automatic control of the behavior of people moving in a subway station where several lines cross, thus a big and often overcrowded station. The agents are robots monitoring specific rooms using cameras. When a dangerous situation is detected (sudden illness, brawl, pickpocketing, panic, ...) the images of the areas involved are shown on monitors checked by operators. In addition, an algorithm managing the situation begins to be executed. This algorithm communicates with other agents (ubiquitous computers, namely crowd wardens) which direct the movement of the crowd in the correct direction in order to restore normality.

Agencies may be studied from two different points of view. First: we may be interested in analyzing and describing the physical structure of an agency (what agents compose it, what inferential algorithms are used by the agents, how the agents are connected, what protocols are used to communicate, ...). We call this level the implementational level, because the objects of this study are real systems working in particular applications of the real world. Second: a logical description and logical analysis of an agency could be required for modeling the inferential activities, for studying properties and, thus, for supporting the design of the agency (at an implementational level). In our work, we focus on the logical level to investigate some properties of agencies. Sections 4, 5 and 6 show different formalisms that can be used for a logical description of an agency.

3.2 The Role of Agency in Artificial Intelligence

Agencies, as we defined them previously, are new entities of artificial intelligence and, more precisely, they belong to distributed artificial intelligence, the field that uses the technologies and the methodologies of artificial intelligence for solving problems that are spatially or temporally distributed. In particular, recalling the taxonomy of DAI presented in Section 2, we can say that agencies belong to CDPS; in fact, the inferential capabilities of the component agents give them the ability to perform, in general, every task performed by any other agent; in addition, the global goal of an agency is reached by means of cooperation among agents, as Definition 3.1.1 suggests.

3.3 The Three Levels Involved in Agency Studying

Both the implementational description and the logical description of an agency can be studied on three different levels, each one stressing particular aspects. The three levels of description are the following, and the situation is represented in Figure 3.1.

- Syntactic level: this level depicts the structure of an agency and answers the question: "How is the agency made up?". If we are analyzing the syntactic level of the implementational description of the agency, we are dealing with the computers and robots corresponding to agents, their physical locations, and all the aspects concerning communication (channels, messages, protocols, ...). The syntactic level of the logical description gives information about the logical connections


among agents (which agent requires the cooperation of another agent) and the inferential capability of each agent. The syntactic level describes the potential of the agency.

- Semantic level: this level adds to the description of the structure of an agency a description of the world where the agency is working. Thus it answers the question "Where does the agency carry out its job?". At the implementational level, the semantic level deals with all the data structures used to model the world. At the logical level, the semantic level introduces a set of assumptions about the world, which are used to infer knowledge. The semantic level describes the potential of the syntactic level and illustrates the application domain.

- Pragmatic level: this level, added on top of the semantic level, specifies the goals an agency has to reach and the way to reach them. Thus it answers the questions "What are the goals? And how does the agency reach them?". From an implementational point of view, this level studies the inferential algorithms used to solve a problem, whereas from a logical one, it specifies the inferential engine (the logical procedure used for deriving conclusions from premises) and the goals (which are represented in a given formalism). The pragmatic level satisfies the application given by the semantic level.

We note that each level is a more precise specification of the previous one; thus we can conceive several different semantic levels based on the same syntactic level, and several pragmatic levels based on the same semantic level.

3.4 Critical Problems About Agency Studying

The formalisms introduced in Sections 4, 5 and 6 are the bases for rigorous descriptions of the agency. The descriptions are useful in the study and analysis of agencies if they support the demonstration and confirmation of hypotheses and predictions. Other ways to use formal descriptions are related to assessing performance and/or coverage, comparing issues between different agencies, probing and presenting results (even if they are negative, unexpected, ...), and so on. In the following we list a preliminary set of critical problems involved in the study of agencies:
- how can we distinguish agencies given their structure (topology)?
- how can we classify agencies with regard to their structure?
- what kind of properties can we establish to characterize the cooperation among agents?
- how can these properties be detected?
- is the detection of a property a tractable problem?

[Figure 3.1 depicts the three levels of analysis (syntactic, semantic, pragmatic) for both the implementational and the logical description.]

Figure 3.1 Levels of description of agencies.


Other critical problems can be raised, but the purpose here is to illustrate the power of formal descriptions, not to exhaust all the directions of research in the field of agencies. Using the formalisms presented in Sections 4, 5 and 6, we try to give solutions to some of the listed problems in Section 7.

4. INFERENTIAL DESCRIPTION OF AGENCY USING MLS

A MultiLanguage System (MLS) is a formal system composed of many distinct theories, where a theory is defined as a triple: a language, a set of axioms, and a set of rules of inference. The theories are connected to one another by means of special rules of inference called bridge rules, whose premises and conclusions can belong to different languages. We give a description of each level of an agency, i.e. the syntactic, semantic, and pragmatic levels. These descriptions are based on MLS, so that the attention in describing an agency is focused on the inferential aspect. In Subsection 4.1 we give the definitions and summarize the major results about MLS. In Subsection 4.2 we apply these ideas to the goal of describing an agency from an inferential point of view.

4.1 MultiLanguage System

Multilanguage systems were devised by Giunchiglia (see [22], [23], [24] for details and applications of MLS). A MLS is a way to link reasonings in different theories through a framework in which the theories are independent but connected.

Definition 4.1.1: let I be a set of indexes, {Li}i∈I a family of languages and {Ωi}i∈I a family of sets of well-formed formulae (wffs), where Ωi ⊆ Li for each i∈I. A MultiLanguage formal System (MLS) M is a triple <{Li}i∈I, {Ωi}i∈I, ∆> where {Li}i∈I is the family of languages of M, {Ωi}i∈I is the family of axioms of M, and ∆ is the deduction machinery of M.

A family is a set with repetitions. <α, i> denotes the wff α together with the fact that α is a wff of Li (α is an Li-wff). The deduction machinery ∆ is a set of rules of inference, informally:

(1)    <α0, i0>  ...  <αn-1, in-1>                              premises
       ----------------------------------------------------- ζ
                            <α, j>                              conclusion

(2)    [<β0, k0>]  ...  [<βm-1, km-1>]                          discharged assumptions
       <α0, i0> ... <αn-1, in-1>   <γ0, k0> ... <γm-1, km-1>    premises
       ----------------------------------------------------- ψ
                            <α, j>                              conclusion

Rules of inference can be classified into disjoint classes:
- rules whose premises and conclusions belong to the same language Li (called Li-rules, or ∆i);
- rules whose premises and conclusions belong to distinct languages (called bridge rules).

Definition 4.1.2: a theory i is the triple <Li, Ωi, Li-rules>.•


Li-rules allow one to draw conclusions inside a theory; bridge rules allow one to export results from one theory to another. Inference in a MLS is specified by the following definitions.

Definition 4.1.3: given a MLS M, a formula-tree is a deduction in M of <α, i> depending on a set of formulae according to the following rules:
(i) if <α, i> is an axiom of M (i.e. α is an element of Ωi), then <α, i> is a deduction in M of <α, i> depending on the empty set;
(ii) if <α, i> is not an axiom of M, then <α, i> is a deduction of <α, i> depending on {<α, i>};
(iii) if Dk is a deduction of <αk, ik> depending on Γk for each k (1≤k≤h), then:

       D1  ...  Dh
       ----------- ξ
         <α, i>

is a deduction of <α, i> depending on Γ, where Γ is computed in one of the following ways:
-a- if the rule of inference ξ is of the form (1), then Γ is the union of all the Γk;
-b- if the rule of inference ξ is of the form (2), then:

(3)    Γ = ( ∪0≤k≤n-1 Γk ) ∪ ( ∪0≤k≤m-1 (Γ'k − {<βk, kk>}) )

where Γk (0≤k≤n-1) are the sets on which the deductions of the premises <αk, ik> depend, and Γ'k (0≤k≤m-1) are the sets on which the deductions of the premises <γk, kk> depend, each deprived of the corresponding discharged assumption <βk, kk>.

Definition 4.1.4: given a MLS M:
- a deduction D is a deduction (in M) of <α, i> from Γ if and only if D is a deduction in M of <α, i> depending on Γ or on any subset of Γ;
- <α, i> is derivable from Γ in M, denoted by Γ ⊢M <α, i>, if and only if there is a deduction (in M) of <α, i> from Γ;
- a deduction (in M) of <α, i> from the empty set is a proof of <α, i> in M;
- <α, i> is provable in M (is a theorem of M), in symbols ⊢M <α, i>, if and only if there exists a proof of <α, i> in M.

A deduction can be seen as composed of sub-deductions in distinct theories (obtained using Li-rules); the sub-deductions are connected by applications of the bridge rules.
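Derivability in the propositional, assumption-free fragment of these definitions can be checked mechanically. The sketch below is our encoding, not Giunchiglia's: indexed formulae <α, i> become pairs `(formula, index)`, and both Li-rules and bridge rules are pairs `(premises, conclusion)`; whether a rule is local or a bridge rule depends only on whether its indices coincide. Forward chaining to a fixpoint then decides theoremhood.

```python
def closure(axioms, rules):
    """All indexed formulae derivable from the axioms.

    axioms: set of (formula, index) pairs, i.e. the union of the families Omega_i
    rules:  list of (premises, conclusion); premises is a list of indexed
            formulae. Li-rules and bridge rules share this shape: a bridge
            rule is simply one whose premises and conclusion carry
            different indices.
    """
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

def is_theorem(wff, axioms, rules):
    """True iff the indexed formula `wff` is provable in this (finite) MLS."""
    return wff in closure(axioms, rules)
```

For instance, with axiom <p, 1>, a bridge rule exporting <p, 1> to <q, 2>, and an L2-rule deriving <r, 2> from <q, 2>, the formula <r, 2> is a theorem: the proof is a sub-deduction in theory 1 connected to a sub-deduction in theory 2 by the bridge rule.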

4.2 Inferential Description of Agency

Referring to the definitions of the previous Subsection, we are able to formalize the idea of the agency from an inferential (logical) point of view. To do so, we give a description for each of the three levels involved in the study of agencies. Thus we present: the syntactic level logical description, the semantic level logical description, and the pragmatic level logical description. We start with the definition of the syntactic level logical description.

Definition 4.2.1: given an agency A, we call syntactic level logical description (syntactic description) of A a MLS where:
- a theory <Li, Ωi, ∆i> represents an agent i of A; the correspondence between theories and agents is one-to-one;


- the set (called Π) of bridge rules represents the links among agents.•

We can state an equivalence between MLSs and agencies: each agency is described (syntactically) by a MLS, and each MLS is the syntactic description of an agency. Given a MLS, the agency it describes could have non-inferential agents, that is, agents with no reasoning capabilities, because some theories of the MLS could have an empty set of local rules of inference (Li-rules). We still call this degenerate case of cooperation among agents, where some or all agents are non-inferential, an agency. The syntactic level logical description of an agency reflects the potentiality of the agency: the whole inferential power of the agency is expressed by its syntactic description. The semantic and pragmatic level descriptions will represent a more limited inferential power than the syntactic description. We extend the syntactic description to give the semantic description of an agency, as illustrated in the following definition.

Definition 4.2.2: given an agency A, we call semantic level logical description (semantic description) of A a MLS where:
- <Li, Ωi, ∆i, Γi> represents (the correspondence is one-to-one) an agent i of A, where Γi is a set of Li-wffs called the set of assumptions of agent i;
- the set Π of bridge rules represents the links among agents;
- any deduction depends on Γ=Γ1∪Γ2∪...∪Γn, where n is the number of agents in A.

The item Γi added to the theory describing agent i is the set of wffs that represent the state of the world, i.e. the set Γi represents the context in which the agent is involved; thus Γ is a representation of the world in which the agency works.
We can say that, in the same way the syntactic level reflects the potentiality of an agency, the semantic level reflects the application of the agency.
The introduction of sets of assumptions limits the applicability of the rules of inference: some premises may be impossible to derive from the given assumptions, and the rules of inference having one of those premises are not applicable.

Example 4.2.1: consider the semantic description of an agency A; if Π has the rule

<α, i>
------ ξ
<β, j>

and <α, i> is not derivable from Γ, then ξ is not applicable (ξ cannot be used in a deduction depending on Γ).
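Example 4.2.1 can be sketched in a few lines of Python. We represent a labelled wff <formula, agent> as a plain tuple and a rule (local or bridge) as a pair (premise set, conclusion); all names here (`derive`, `xi`) are illustrative, not from the paper.

```python
# Minimal sketch of Example 4.2.1: a bridge rule whose premise is not
# derivable from the assumptions can never be used in a deduction.
def derive(assumptions, rules):
    """Forward-chain to the set of wffs derivable from the assumptions."""
    derived = set(assumptions)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Bridge rule xi: from <alpha, 1> conclude <beta, 2>.
xi = (frozenset({("alpha", 1)}), ("beta", 2))

print(derive({("gamma0", 1)}, [xi]))  # <alpha,1> underivable: xi never used
print(derive({("alpha", 1)}, [xi]))   # now xi applies and yields <beta,2>
```

With assumptions that do not yield <alpha, 1>, the closure is just the assumptions themselves, mirroring the non-applicability of ξ in the example.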

We note that it is possible to give many semantic descriptions based on the same syntactic description, according to Subsection 3.3.
The next definition concerns the pragmatic level.

Definition 4.2.3: given an agency A, we call pragmatic level logical description (pragmatic description) of A a MLS where:


- <Li, Ωi, ∆i, Γi, mi, Oi> represents (the correspondence is one-to-one) an agent i of A, where mi is an inferential engine of agent i (it specifies the order in which the rules of inference are used in a deduction) and Oi is a set of Li-wffs called the set of goals of agent i;
- the set Π of bridge rules represents the links among agents;
- an inferential engine m is specified for the bridge rules;
- any deduction depends on Γ=Γ1∪Γ2∪...∪Γn, where n is the number of agents in A;
- deductions must derive <γ, i>∈Oi, ∀i; i.e. the goals must be reached;
- any deduction follows m and mi, ∀i.

mi provides an indication of the inferential strategy of agent i: mi specifies the order in which Li-rules must be applied in a deduction, and m does the same for the bridge rules. The sets Oi are defined starting from the global goal of the agency; the set O=O1∪O2∪...∪On is a representation of the global goal of the agency, which is spread over the agents by the definition of the Oi. The pragmatic level reflects the goals of the agency and the way to reach them.
The introduction of the pragmatic level description limits the applicability of the rules of inference: the rules of inference not useful to derive goals are not applicable; in addition, the use of a rule of inference may be forbidden by the inferential engines.
To conclude the Section, we note that the three definitions are incremental: Definition 4.2.2 adds a condition to those of Definition 4.2.1, providing a better specification of the agency. The same applies to Definition 4.2.3 when compared to Definition 4.2.1 and Definition 4.2.2. The use of MLS allows us to view an agency as a unitary machine, as far as inferential aspects are concerned.
The next Sections present other aspects of the study of agencies.

5. TOPOLOGICAL DESCRIPTION OF AGENCY USING DIRECTED HYPERGRAPHS

Hypergraphs are a generalization of graphs: an edge of a graph connects just two nodes, whereas an edge of a hypergraph (called hyperedge) can connect more than two nodes. Hypergraphs are a suitable formalism for describing the three levels, i.e. syntactic, semantic and pragmatic, of an agency.
In Subsection 5.1 we introduce the basic definitions of hypergraphs, in particular of directed hypergraphs; in Subsection 5.2 we use these concepts to give a topological description of the three levels of an agency.

5.1 Directed Hypergraph

Hypergraphs were studied by Berge [1], [2], [3]; some of their applications are presented in [17]. Here we give only the definitions useful for representing, from a topological point of view, the levels of an agency.

Definition 5.1.1: a hypergraph is a pair H=(N, E), where N={n1, n2, ..., nn} is the set of nodes and E={E1, E2, ..., Em} is the set of hyperedges, with Ei⊆N for i=1, ..., m. If the cardinality of each hyperedge is 2, that is |Ei|=2 for all i=1, ..., m, then the hypergraph is a standard graph.


It is clear from the above definition that hypergraphs are a generalization of graphs: hyperedges can connect any subset of nodes (with no restriction on cardinality), whereas the edges of a graph connect just two nodes. For applications in the field of agencies, a directed hypergraph is defined as follows:

Definition 5.1.2: a directed hypergraph is a pair DH=(V, A), where V={v1, v2, ..., vn} is the set of vertices and A={E1, E2, ..., Em} is the set of hyperarcs; each Ei, i=1, ..., m, is an ordered pair Ei=(Xi, Yi) where Xi, Yi⊆V. Xi is called the tail of Ei and Yi is called the head of Ei. The tail and the head of a hyperarc E will be denoted by T(E) and H(E), respectively.

We note that this definition, compared with the classical definition of directed hypergraphs, lacks the condition Xi∩Yi=∅. Thus there can be “loop-hyperarcs” in our directed hypergraphs; loop-hyperarcs are necessary to model some aspects of agencies.
In the following, directed hypergraphs will simply be called hypergraphs. A special subclass of hypergraphs is composed of B-hypergraphs, whose hyperarcs (called B-arcs) have a head containing just one vertex. The next definition is a formalization of this concept:

Definition 5.1.3: a B-hypergraph (also called B-graph) is a directed hypergraph whose hyperarcs are backward hyperarcs (or simply B-arcs), where a B-arc is a hyperarc E=(T(E), H(E)) with |H(E)|=1.

We allow the presence of B-arcs connecting the same vertices, provided that the B-arcs are distinguished in some way, e.g. by labels.
The last definition concerns paths on hypergraphs (and thus paths on B-graphs):

Definition 5.1.4: given a directed hypergraph DH=(V, A), a path Pod of length q is a sequence of vertices and hyperarcs Pod=<v1=o, Ei1, v2, Ei2, ..., Eiq, vq+1=d>, where o∈T(Ei1), d∈H(Eiq) and vj∈H(Eij-1)∩T(Eij) with j=2, ..., q; vertices o and d are the origin and the destination of Pod, respectively; we say that o is connected to d.•
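On a B-graph, the "connected to" relation of Definition 5.1.4 reduces to a reachability check: from a vertex v we may traverse any B-arc E with v∈T(E) to its single head vertex. A minimal sketch (the names `b_arcs` and `connected` are illustrative):

```python
from collections import deque

# A B-graph is given as a list of B-arcs (tail_set, head_vertex).
def connected(b_arcs, origin, destination):
    """True iff there is a path P_od in the sense of Definition 5.1.4."""
    frontier, seen = deque([origin]), {origin}
    while frontier:
        v = frontier.popleft()
        for tail, head in b_arcs:
            if v in tail and head not in seen:
                if head == destination:
                    return True
                seen.add(head)
                frontier.append(head)
    return False

arcs = [({"a", "b"}, "c"), ({"c"}, "d")]   # two B-arcs
print(connected(arcs, "a", "d"))           # True: a -> c -> d
print(connected(arcs, "d", "a"))           # False
```

Note that a path requires at least one hyperarc, so a vertex is not automatically connected to itself unless a loop-hyperarc provides the step.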

In the next Subsection we use B-graphs to model the syntactic, semantic and pragmatic levels of an agency.

5.2 Topological Description of Agency

A B-graph based topological description of an agency is now presented. Each level (i.e. syntactic, semantic, pragmatic) can be represented by a B-graph constructed starting from the inferential descriptions. As in the previous Section, the definitions are incremental.

Definition 5.2.1: given the syntactic description of an agency A, we call syntactic level topological description (syntactic hypergraph) of A a B-graph HSi=(V, ASi), where:
- each vertex v∈V corresponds to a theory (agent) in the syntactic description; the correspondence is one-to-one;


- each B-arc E∈ASi corresponds to a bridge rule π∈Π in the syntactic description; the correspondence is one-to-one; moreover, T(E) is the set of vertices corresponding to the agents giving premises to π, and H(E) is the vertex (since E is a B-arc) corresponding to the agent in which the conclusion of π is drawn.

The syntactic hypergraph describes the topology of an agency, i.e. the connections among agents with no regard to the inferential abilities of the agents.
It is worth noting that the above definition allows the presence of loop-hyperarcs in HSi, since a vertex can correspond to an agent that both gives premises to a bridge rule and is the agent in which the conclusion is drawn. For simplicity, we informally label each vertex with the name of the corresponding agent and each B-arc with the name of the corresponding bridge rule.
The second level involved in the study of agencies is the semantic level:
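The construction of Definition 5.2.1 is mechanical: one vertex per agent, one B-arc per bridge rule. A sketch, with a bridge rule modelled simply as (premise agents, conclusion agent, label); the function name is illustrative:

```python
# Build the syntactic hypergraph: T(E) is the set of premise agents of a
# bridge rule, H(E) the agent drawing the conclusion.
def syntactic_hypergraph(agents, bridge_rules):
    vertices = set(agents)
    b_arcs = {label: (frozenset(premise_agents), conclusion_agent)
              for premise_agents, conclusion_agent, label in bridge_rules}
    return vertices, b_arcs

# Two bridge rules; pi2 yields a loop-hyperarc, since agent 2 both
# supplies a premise and draws the conclusion.
rules = [({1, 2}, 3, "pi1"), ({2, 3}, 2, "pi2")]
v, arcs = syntactic_hypergraph({1, 2, 3}, rules)
print(arcs["pi2"])   # (frozenset({2, 3}), 2)
```

Labelling arcs by bridge-rule name keeps parallel B-arcs between the same vertices distinguishable, as required in Subsection 5.1.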

Definition 5.2.2: given the semantic description of an agency A, we call semantic level topological description (semantic hypergraph) of A a B-graph HSe=(V, ASe), where:
- each vertex v∈V corresponds to a theory (agent) in the semantic description; the correspondence is one-to-one;
- each B-arc E∈ASe corresponds to a bridge rule π∈Π applicable in the semantic description; the correspondence is one-to-one; moreover, T(E) is the set of vertices corresponding to the agents giving premises to π, and H(E) is the vertex (since E is a B-arc) corresponding to the agent in which the conclusion of π is drawn.

The syntactic hypergraph of an agency A gives information about the links among agents in a way independent of the specific application in which the agency is involved. The semantic hypergraph takes into account the application and its constraints; thus, the links (i.e. hyperarcs) at the semantic level are no more numerous than the links (i.e. hyperarcs) at the syntactic level.
Finally, we present the definition of the pragmatic hypergraph:

Definition 5.2.3: given the pragmatic description of an agency A, we call pragmatic level topological description (pragmatic hypergraph) of A a B-graph HPr=(V, APr), where:
- each vertex v∈V corresponds to a theory (agent) in the pragmatic description; the correspondence is one-to-one;
- each B-arc E∈APr corresponds to a bridge rule π∈Π applicable in the pragmatic description; the correspondence is one-to-one; moreover, T(E) is the set of vertices corresponding to the agents giving premises to π, and H(E) is the vertex (since E is a B-arc) corresponding to the agent in which the conclusion of π is drawn.

The pragmatic description specifies the goals and the inferential engines of an agency; they act as a “filter” for the rules of inference, limiting the number of rules of inference that are applicable. This situation is reflected by the pragmatic hypergraph, whose hyperarcs are no more numerous than the hyperarcs of the semantic hypergraph.
In Section 7 we use the three hypergraphs (syntactic, semantic, pragmatic) to classify agencies and to prove some properties about them.

6. EVOLUTIONAL DESCRIPTION OF AGENCY USING ER PETRI NETS


Petri nets [35], [34] are a formalism used to specify and analyze concurrent systems. Since an agency is a particular concurrent system, we can describe each level of an agency by means of special Petri nets called ER Petri nets. The description is focused on evolutional aspects, since Petri nets are an executable formalism.
In Subsection 6.1 we give the definition of ER Petri nets. In Subsection 6.2 we describe the three levels of an agency using the introduced formalism.

6.1 ER Petri Net

ER Petri nets (ER nets) are a special kind of Petri nets where tokens are functions and transitions are relationships; ER Petri nets were introduced in [21], and some of their applications can be found in [14].

Definition 6.1.1: an ER Petri net (ER net) is a Petri net where:
- tokens are environments on ID and V, i.e., possibly partial, functions ID→V, where ID is a set of identifiers and V a set of values. Let ENV=V^ID be the set of all environments. In what follows we use the terms token and environment interchangeably;
- each transition t is associated with an action. An action is a relationship r(t)⊆ENV^k(t)×ENV^h(t), where k(t) and h(t) denote the cardinalities of the preset and the postset of transition t, respectively (we consider only arcs with weight 1). Without loss of generality, we assume h(t)>0 for all t. It is intended that r(t) refers to each input and output place of transition t. The projection of r(t) on ENV^k(t) is denoted by η(t) and is called the predicate of transition t;
- a marking m is an assignment of multisets of environments to places;
- a transition t is enabled in a marking m if and only if, for every input place pi of t, there exists at least one token envi such that <env1, ..., envk(t)>∈η(t). <env1, ..., envk(t)> is called an enabling tuple for transition t. Note that there can be more than one enabling tuple for the same transition, and the same token can belong to several enabling tuples;
- a firing is a triple x=<enab, t, prod> such that <enab, prod>∈r(t); enab is called the input tuple, while prod is called the output tuple;
- the occurrence of a firing <enab, t, prod> in a marking m consists of producing a new marking m' for the net, obtained from the marking m by removing the enabling tuple enab from the input places of transition t and storing the tuple prod in the output places of transition t;
- given an initial marking m0, a firing sequence s is a finite sequence <<enab1, t1, prod1>, ..., <enabn, tn, prodn>> such that transition t1 is enabled in the marking m0 by the tuple enab1, and each transition ti, 2≤i≤n, is enabled in the marking mi-1 produced by the firing <enabi-1, ti-1, prodi-1> and its firing produces the marking mi.

There are two levels of nondeterminism in ER nets: the choice of which transition fires, when more transitions are enabled in the same marking, is nondeterministic; in addition, the tokens produced by the firing of a transition are not uniquely determined, since they are tied to the input tuple by a relationship (if they were tied to the input tuple by a function, they would be uniquely determined).
In the following we state two properties of ER nets.
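A single firing step of Definition 6.1.1 can be sketched as follows, with tokens as environments (dicts ID→V) and the action of a transition split into a predicate on the input tuple and a function producing the output tuple. All names (`Transition`, `fire`) are illustrative; for brevity the sketch picks the first enabling tuple it finds rather than exploring the nondeterministic choices.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    preset: list          # input place names
    postset: list         # output place names
    predicate: object     # eta(t): enabling condition on the input tuple
    produce: object       # maps an enabling tuple to the output tuple

def fire(marking, t):
    """Fire t on the first candidate tuple; return the new marking, or None."""
    # pick one token from each non-empty input place (first found)
    enab = tuple(marking[p][0] for p in t.preset if marking[p])
    if len(enab) != len(t.preset) or not t.predicate(enab):
        return None                       # t is not enabled in this marking
    new = {p: toks[:] for p, toks in marking.items()}
    for p, env in zip(t.preset, enab):    # remove the enabling tuple
        new[p].remove(env)
    for p, env in zip(t.postset, t.produce(enab)):
        new[p].append(env)                # store the output tuple
    return new

m0 = {"p1": [{"x": 1}], "p2": []}
t = Transition(["p1"], ["p2"],
               predicate=lambda enab: enab[0]["x"] > 0,
               produce=lambda enab: ({"x": enab[0]["x"] + 1},))
print(fire(m0, t))   # {'p1': [], 'p2': [{'x': 2}]}
```

A full ER-net simulator would enumerate all enabling tuples and all pairs in r(t), which is exactly where the two levels of nondeterminism noted above appear.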

Definition 6.1.2: a marking mn is reachable from a marking m0 if and only if n=0 or there exists at least one firing sequence <<enab1, t1, prod1>, ..., <enabn, tn, prodn>>, n>0, such that each transition ti is enabled in the marking mi-1 by the tuple enabi and its firing produces the marking mi, 1≤i≤n.


Theorem 6.1.1: the property of reachability is undecidable.•

Definition 6.1.3: an ER net is transition live if and only if, for each marking m reachable from the initial marking m0 and for each transition t, there exists a marking m' reachable from m such that t is enabled in m'.

Theorem 6.1.2: the property of transition liveness is undecidable.•

These results will be helpful to prove some properties and to solve some critical problems about agencies.

6.2 Evolutional Description of Agency

Using the ideas of the previous Subsection, we can describe an agency from an evolutional point of view. Each level of an agency is defined in terms of an ER net starting from the inferential descriptions of Section 4.

Definition 6.2.1: given the syntactic description of an agency A, we call syntactic level evolutional description (syntactic ER net) of A an ER net T where:
- each place pi corresponds to one (and only one) agent i of A;
- each rule of inference σ (Li-rule or bridge rule) is represented by a transition tσ; the preset of tσ consists of the places corresponding to the agents that give premises and discharged assumptions to σ, and the postset of tσ consists of the places corresponding to the agents that give premises and the agent in which the conclusion is drawn;
- each token represents a wff <α, i>, so a token is an environment where ID={formula, agent} and V=L1∪...∪Ln∪{1, ..., n}; the first n sets are values for the identifier ‘formula’, whereas the last set is built up of values for the identifier ‘agent’;
- the action r(tσ) associated to a transition tσ eliminates the tokens corresponding to the discharged assumptions and premises of σ, and creates the tokens corresponding to the conclusion and premises of σ;
- a transition tσ is enabled in a marking m (i.e., the rule of inference σ can produce its conclusion because the premises are available) if and only if, for each token e=(formula=α, agent=i) of the enabling tuple, α unifies with the corresponding wff in the predicate η(tσ); see [19] for the concept of unification; we suppose, for simplicity, that the languages Li (for all i) of the inferential description of A are first-order predicate languages;
- the set of axioms of agent i is modeled by a new place pai and a new transition tai that allow place pi (corresponding to agent i) to have tokens representing axiom wffs by means of the firing of tai;
- assumptions, which are discharged using (3) of Subsection 4.1, are made accessible to an agent in a way similar to axioms.

The syntactic ER net of an agency A is a formalism describing what A can do; this is natural, since the syntactic description is devoted to expressing the whole power of an agency.
It is easy to state the equivalence between the occurrence of a firing <enab, t, prod> in an ER net and the application of the rule of inference associated to t in the corresponding MLS. The equivalence is stated in the following sense: starting from a marking m, corresponding to a particular “logical state”


S (that is, a particular state of presence of the wffs derived by the agents), the occurrence of a firing <enab, tσ, prod> produces a new marking m’ corresponding to a logical state S’ obtained from S by the application of the rule of inference σ. The proof can be trivially obtained from the previous definitions.
We note that this correspondence allows us to simulate the behavior of an agency by means of ER nets; in other words, we can study the evolution of an agency.

Definition 6.2.2: given the semantic description of an agency A, and named T the syntactic ER net of A, we call semantic level evolutional description (semantic ER net) of A the ER net obtained from T by specifying an initial marking m such that m assigns a token e=(formula=α, agent=i) to the place corresponding to agent i if and only if <α, i>∈Γi.

An initial marking m in the evolutional description of an agency is equivalent to the sets of assumptions in the inferential description. In fact, if an initial marking puts a token e in some place p, then, since a token represents a wff and a place represents an agent, the wff associated to e is available (like an assumption) to the agent associated to p.
We note that Definition 6.2.2 increments (specifies) Definition 6.2.1; the next step is to increment Definition 6.2.2 to obtain:

Definition 6.2.3: given the pragmatic description of an agency A, and named T' the semantic ER net of A, we call pragmatic level evolutional description (pragmatic ER net) of A the ER net obtained from T’ by specifying a set of markings called final markings and rules for finding out which transition fires when more than one is enabled. A final marking m is such that, for each <γ, i>∈Oi, it assigns a token e=(formula=γ, agent=i) to the place corresponding to agent i. The rules specify the firing of a transition t, among a set {t, t’, t’’, ...} of transitions enabled in some marking m, if and only if the inferential engines of the pragmatic description specify the application of the rule of inference σ among the set {σ, σ’, σ’’, ...} of applicable rules of inference, where σ, σ’, σ’’, ... are the rules of inference associated to t, t’, t’’, ..., respectively.

A set of final markings is equivalent to a set of goals in the inferential description; in addition, the rules play the same role as the inferential engines in the inferential description. The rules do not eliminate the nondeterminism of the ER net, because they may not uniquely determine the transition to fire when more than one can.
In the evolutional description of an agency A, the introduction of the semantic level limits the power of the syntactic level: some transitions can be unable to fire starting from an initial marking. In a similar way, a transition t can be forbidden to fire at the pragmatic level, whereas t can fire at the semantic level.
Starting from the equivalence between the occurrence of a firing and the application of the corresponding rule of inference, we are able to state the equivalence between a firing sequence in an ER net and a deduction in a MLS. The equivalence has different characteristics according to the level of the agency we consider: for the syntactic ER net, a firing sequence is equivalent to a deduction from any set of assumptions (that is, a deduction in the MLS called syntactic description); for the semantic ER net, a firing sequence starting from the initial marking is equivalent to a deduction from the assumptions specified in the semantic description; finally, for the pragmatic ER net, a firing sequence starting from the initial marking and reaching a final marking is equivalent to a deduction of the goals from the specified assumptions respecting the constraints of the inferential engines. The proof of these equivalences is trivial and follows from the previous definitions.


To close this Section, we recall the basic concept introduced: the inferential description of an agency (by means of a MLS) can be made executable using ER nets that model each level of description of the agency.

7. SOME PROPERTIES OF AGENCIES

This Section is devoted to the presentation of solutions to the critical problems listed in Section 3. In particular, we apply the power of the formalisms introduced in the previous Sections to derive some results and properties about the classification and the fairness of agencies.

7.1 Classification of Agencies

In this Subsection we classify agencies according to their topology; thus the formalism elected to accomplish this goal is the topological description of agencies, which is founded on hypergraphs (see Section 5).

Definition 7.1.1: given the syntactic hypergraph HSi=(V, ASi) of an agency A (semantic hypergraph HSe=(V, ASe), pragmatic hypergraph HPr=(V, APr)), we call the agency A syntactically (semantically, pragmatically) not connected if there is a set V’⊂V, V’≠∅, such that no v’∈V’ is connected to some v∈V-V’ and no v∈V-V’ is connected to some v’∈V’. If no such V’ exists, the agency A is called syntactically (semantically, pragmatically) connected.
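The connectivity of Definition 7.1.1 can be checked mechanically on a level hypergraph. Since every path step follows a tail-to-head pair of some B-arc, a nonempty V’ cut off in both directions exists exactly when the undirected graph with an edge {t, h} for every B-arc (T, h) and every t∈T is disconnected. A sketch (the name `is_connected` is illustrative):

```python
# Connectivity check for an agency at one level, given its B-graph
# as (vertex set, list of B-arcs (tail_set, head_vertex)).
def is_connected(vertices, b_arcs):
    if not vertices:
        return True
    adjacency = {v: set() for v in vertices}
    for tail, head in b_arcs:
        for t in tail:
            adjacency[t].add(head)
            adjacency[head].add(t)
    seen, stack = set(), [next(iter(vertices))]
    while stack:                      # depth-first search on the
        v = stack.pop()               # symmetrized tail-head edges
        if v not in seen:
            seen.add(v)
            stack.extend(adjacency[v] - seen)
    return seen == set(vertices)

arcs = [({1, 2}, 3)]
print(is_connected({1, 2, 3}, arcs))     # True
print(is_connected({1, 2, 3, 4}, arcs))  # False: agent 4 is isolated
```

Running the same check on ASi, ASe and APr decides syntactic, semantic and pragmatic connectivity, respectively.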

The first result is presented by the following theorem:

Theorem 7.1.1: given an agency A, the following relations hold (‘P ⇒ Q’ stands for ‘if P then Q’):
(i) A is pragmatically connected ⇒ A is semantically connected;
(ii) A is semantically connected ⇒ A is syntactically connected.
Proof:
(i) if A is pragmatically connected, then there is no V’⊂V, V’≠∅, satisfying the conditions of Definition 7.1.1 for HPr. Since APr is a subset of ASe, the vertices connected in HPr are connected also in HSe; thus no such V’ exists for HSe either;
(ii) the proof is the same as that of (i), considering that ASi includes ASe.

The agencies that are connected at a certain level can be further classified by means of the following definitions:

Definition 7.1.2: given the syntactic hypergraph HSi=(V, ASi) of a syntactically connected agency A, we call A syntactically weak connected if for each u, v∈V, u≠v, there exists a path Puv or a path Pvu. A is called syntactically strong connected if for each u, v∈V, u≠v, there exists a path Puv. A is called syntactically full connected (or syntactically monolithic) if for each u, v∈V, u≠v, there exists a B-arc E∈ASi such that u∈T(E) and v∈H(E).

Definition 7.1.3: given the semantic hypergraph HSe=(V, ASe) of a semantically connected agency A, we call A semantically weak connected if for each u, v∈V, u≠v, there exists a path Puv or a path Pvu. A


is called semantically strong connected if for each u, v∈V, u≠v, there exists a path Puv. A is called semantically full connected (or semantically monolithic) if for each u, v∈V, u≠v, there exists a B-arc E∈ASe such that u∈T(E) and v∈H(E).

Definition 7.1.4: given the pragmatic hypergraph HPr=(V, APr) of a pragmatically connected agency A, we call A pragmatically weak connected if for each u, v∈V, u≠v, there exists a path Puv or a path Pvu. A is called pragmatically strong connected if for each u, v∈V, u≠v, there exists a path Puv. A is called pragmatically full connected (or pragmatically monolithic) if for each u, v∈V, u≠v, there exists a B-arc E∈APr such that u∈T(E) and v∈H(E).
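The three grades of Definitions 7.1.2-7.1.4 can be decided on any level B-graph by quantifying over ordered pairs of vertices. A sketch (the names `reaches` and `classify` are illustrative; `reaches` implements the path relation of Definition 5.1.4):

```python
from itertools import permutations

def reaches(b_arcs, o, d):
    """True iff vertex o is connected to vertex d (Definition 5.1.4)."""
    seen, stack = {o}, [o]
    while stack:
        v = stack.pop()
        for tail, head in b_arcs:
            if v in tail:
                if head == d:
                    return True
                if head not in seen:
                    seen.add(head)
                    stack.append(head)
    return False

def classify(vertices, b_arcs):
    """Grade a connected agency at one level: monolithic > strong > weak."""
    pairs = list(permutations(vertices, 2))
    if all(any(u in t and v == h for t, h in b_arcs) for u, v in pairs):
        return "monolithic"
    if all(reaches(b_arcs, u, v) for u, v in pairs):
        return "strong"
    if all(reaches(b_arcs, u, v) or reaches(b_arcs, v, u) for u, v in pairs):
        return "weak"
    return "connected only"

ring = [({1}, 2), ({2}, 3), ({3}, 1)]     # a cycle of three agents
print(classify({1, 2, 3}, ring))          # strong (but not monolithic)
```

Testing the grades in decreasing order of strength mirrors Theorem 7.1.3: each grade implies the next.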

Other properties of agencies are presented by the next two theorems. The first one, given an agency with a particular property, concerns the relations among the levels of the agency. The second one, given a level of an agency, illustrates the relations among the properties of that level.

Theorem 7.1.2: given an agency A, the following implications hold:
(i) A is pragmatically weak connected (strong connected, monolithic) ⇒ A is semantically weak connected (strong connected, monolithic);
(ii) A is semantically weak connected (strong connected, monolithic) ⇒ A is syntactically weak connected (strong connected, monolithic).
Proof: the proof is analogous to the proof of (i) and (ii) of Theorem 7.1.1.

Theorem 7.1.3: given an agency A, the following implications hold:
(i) A is syntactically (semantically, pragmatically) monolithic ⇒ A is syntactically (semantically, pragmatically) strong connected;
(ii) A is syntactically (semantically, pragmatically) strong connected ⇒ A is syntactically (semantically, pragmatically) weak connected;
(iii) A is syntactically (semantically, pragmatically) weak connected ⇒ A is syntactically (semantically, pragmatically) connected.
Proof:
(i) if A is syntactically monolithic, then for each u, v∈V, u≠v, there exists a B-arc E∈ASi such that u∈T(E) and v∈H(E); thus for each u, v∈V, u≠v, there is a path Puv=<u, E, v>, and, by Definition 7.1.2, the agency A is syntactically strong connected;
(ii) if A is syntactically strong connected, then A is also syntactically weak connected, because one of the two paths requested by Definition 7.1.2 exists;
(iii) if A is syntactically weak connected, then for each u, v∈V, u≠v, u is connected to v or v is connected to u; thus we cannot find the set V’≠∅ of Definition 7.1.1.
The proofs for the semantic and pragmatic levels are similar.

The definitions presented here allow a classification of agencies on the basis of the topological structure of the links among agents.

7.2 Property of Fairness


A property which is important to ensure in distributed systems is the fairness of the cooperation among the distributed items (see [8], [9] for a formal definition of this property in the field of distributed computation). In the field of agencies it is fundamental to provide a definition of the property of fairness and some techniques to investigate whether the property holds, because cooperation is what characterizes agencies.
We first define fairness for an agency at each level of description:

Definition 7.2.1: given the syntactic description of an agency A, we say that A has the property of syntactic fairness if for each agent i of A there exists a bridge rule of inference π∈Π such that there is at least one <..., i> in the premises or in the conclusion of π (where ... stands for any wff). We say that A is syntactically fair if A has the property of syntactic fairness.

Definition 7.2.2: given the semantic description of an agency A, we say that A has the property of semantic fairness if for each agent i of A there exists an applicable bridge rule of inference π∈Π such that there is at least one <..., i> in the premises or in the conclusion of π (where ... stands for any wff). We say that A is semantically fair if A has the property of semantic fairness.

Definition 7.2.3: given the pragmatic description of an agency A, we say that A has the property of pragmatic fairness if for each agent i of A there exists an applicable bridge rule of inference π∈Π such that there is at least one <..., i> in the premises or in the conclusion of π (where ... stands for any wff). We say that A is pragmatically fair if A has the property of pragmatic fairness.
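The fairness of Definitions 7.2.1-7.2.3 asks that every agent take part in at least one (applicable) bridge rule, either as a premise supplier or as the agent drawing the conclusion. With a bridge rule modelled as (premise agents, conclusion agent), the check is immediate; the name `is_fair` is illustrative:

```python
# Fairness check at one level: pass the full set of bridge rules for the
# syntactic level, or only the applicable ones for the semantic and
# pragmatic levels.
def is_fair(agents, bridge_rules):
    return all(any(i in premises or i == conclusion
                   for premises, conclusion in bridge_rules)
               for i in agents)

rules = [({1, 2}, 3)]
print(is_fair({1, 2, 3}, rules))     # True: each agent participates
print(is_fair({1, 2, 3, 4}, rules))  # False: agent 4 is left out
```

Note that deciding fairness this way presupposes knowing which bridge rules are applicable; by Theorem 7.2.3 below, that set is not computable in general at the semantic and pragmatic levels.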

The following result about fairness relates the three properties of the above definitions:

Theorem 7.2.1: given an agency A, the following implications hold:
(i) A is not syntactically fair ⇒ A is not semantically fair;
(ii) A is not semantically fair ⇒ A is not pragmatically fair;
(iii) A is pragmatically fair ⇒ A is semantically fair;
(iv) A is semantically fair ⇒ A is syntactically fair.
Proof:
(i) if A does not have the property of syntactic fairness, then there is at least one agent a’ of A that neither furnishes premises nor accepts conclusions for any bridge rule in Π. Since the bridge rules which are applicable at the semantic level are a subset of Π, we conclude that a’ does not satisfy the condition of Definition 7.2.2; thus A is not semantically fair;
(ii) the proof is similar to (i), once we recognize that the bridge rules that are applicable at the pragmatic level are a subset of those that are applicable at the semantic level;
(iii), (iv): by contraposition from (i) and (ii).

The next theorem suggests an interesting use of hypergraphs in investigating the property of fairness:

Theorem 7.2.2: given an agency A, if A is syntactically (semantically, pragmatically) connected then A is syntactically (semantically, pragmatically) fair.
Proof: if A is syntactically connected, then each vertex v of HSi is connected to some u, or some u is connected to v; thus, for each vertex v of HSi there is a B-arc E such that v∈T(E)∪H(E).


By the definition of HSi (in particular, by the equivalence between B-arcs and bridge rules) and by Definition 7.2.1, we conclude that A is syntactically fair.
The proof for the semantic and pragmatic levels is similar.

An immediate (see Theorem 7.1.3) corollary is:

Corollary 7.2.1: given an agency A, if A is L-ally weak connected (L-ally strong connected, L-ally monolithic) then A is L-ally fair, where L is one of syntactic, semantic, pragmatic.

We note the utility of describing the topology of agencies by means of hypergraphs: they permit an easy investigation of one of the most important properties involved in the study of distributed systems, and of agencies in particular, namely fairness.
Another useful tool for the analysis of the fairness of agencies is the evolutional description. By means of the formalism of ER nets we can prove the undecidability of the property of fairness at the semantic and pragmatic levels.

Theorem 7.2.3: the problem of deciding whether an agency A is semantically or pragmatically fair is undecidable.
Proof: given the semantic level evolutional description of A (i.e. an ER net with an initial marking m’), the property of semantic fairness holds if each agent participates in at least one applicable bridge rule. Thus, to verify the semantic fairness of A, the set of applicable bridge rules is required. Recalling the equivalence between bridge rules and a subset of the transitions of the ER net, we can reduce the problem of deciding if a set of bridge rules is composed of applicable rules to the problem of deciding if an ER net is transition live (see Definition 6.1.3). We conclude that the property of semantic fairness for A involves the property of transition liveness for the semantic ER net of A with initial marking m’. The theorem is proved by noting that the property of transition liveness for an ER net is undecidable (Theorem 6.1.2).
The proof for the pragmatic level is similar.

We conclude this Section by illustrating an example of another use of the ER net formalism in analyzing the properties of agencies. We state that the problem of deciding whether an agency reaches its goals is undecidable. The proof of this statement is easy if we note that the problem can be reduced, as in the proof of Theorem 7.2.3, to the problem of deciding whether a marking is reachable in an ER net (see Definition 6.1.2), recalling that the latter property is undecidable (Theorem 6.1.1).
To close this Section, we point out the relevant use of formalisms in studying the properties of agencies in a rigorous framework.

8. CONCLUSIONS

The aim of this paper has been to define and formally describe the entity of distributed artificial intelligence called agency. The informal definition of agency is based on the works of Minsky about mind and intelligence. The formal descriptions of agency (the models of agency) are based on MLSs, hypergraphs and ER Petri nets, which are powerful formalisms for establishing properties and proving theorems about agencies.


More work is needed to define and investigate other relevant properties of agencies. In addition, other theorems must be proved in order to set out a methodology for the design and development of agencies.
In this paper the attention is focused only on models and theoretical issues. Future work will explore the field of practical systems used in real-world applications. The models presented here can be the cornerstones on which the study and analysis of real systems are founded. Moreover, one example presented in the paper, namely Example 3.1.1, is based on a real research project called CROMATICA, which is sponsored by the European Union. That example shows how to take advantage of the models we have developed and described in the paper: we can investigate some properties of the system, find errors in the design by discovering critical situations that lead to a wrong response of the system, and test the model under particular conditions of interest so as to certify that the system works in those conditions.
The formalisms presented in the paper can constitute the first step in developing a methodology for the design and analysis of the particular items in the field of DAI called agencies.

REFERENCES

1. C. Berge, “Graphs and Hypergraphs”, North-Holland, Amsterdam, 1973
2. C. Berge, “Minimax theorems for normal hypergraphs and balanced hypergraphs - a survey”, Annals of Discrete Mathematics, 21, North-Holland, Amsterdam, 1984, p. 3-19
3. C. Berge, “Hypergraphs: Combinatorics of Finite Sets”, North-Holland, Amsterdam, 1989
4. S. Cammarata, D. McArthur, R. Steeb, “Strategies of cooperation in distributed problem solving”, Proceedings of the International Joint Conference on Artificial Intelligence, Karlsruhe, Federal Republic of Germany, August 1983, p. 767-770; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988
5. S. E. Conry, R. A. Meyer, V. R. Lesser, “Multistage negotiation in distributed planning”, in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988, p. 367-384
6. D. D. Corkill, “Hierarchical planning in a distributed environment”, Proceedings of the 6th International Joint Conference on Artificial Intelligence, Cambridge, MA, August 1979, p. 168-179
7. K. S. Decker, “Distributed Problem-Solving Techniques: A Survey”, IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-17, no. 5, September/October 1987, p. 729-740
8. P. Degano, U. Montanari, “Concurrent Histories: A Basis for Observing Distributed Systems”, Journal of Computer and System Sciences, vol. 34, no. 2/3, April/June 1987, p. 422-461
9. P. Degano, U. Montanari, “Liveness Properties as Convergence in Metric Spaces”, Proceedings of the 16th ACM Annual Symposium on Theory of Computing, Washington, DC, 1984, p. 31-38
10. E. H. Durfee, “Coordination of Distributed Problem Solvers”, Boston, Kluwer Academic, 1988
11. E. H. Durfee, V. R. Lesser, “Using partial global plans to coordinate distributed problem solvers”, Proceedings of the 10th International Joint Conference on Artificial Intelligence, Milan, Italy, August 1987, p. 875-883; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988
12. E. H. Durfee, V. R. Lesser, D. D. Corkill, “Coherent cooperation among communicating problem solvers”, IEEE Transactions on Computers, vol. C-11, p. 1275-1291; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988, p. 268-284
13. E. H. Durfee, V. R. Lesser, D. D. Corkill, “Trends in Cooperative Distributed Problem Solving”, IEEE Transactions on Knowledge and Data Engineering, vol. 1, no. 1, March 1989, p. 63-83
14. M. Felder, C. Ghezzi, M. Pezzè, “High Level Timed Petri Nets as a Kernel for Executable Specifications”, Internal Report, DEI Politecnico di Milano, Milan, Italy


15. J. Galbraith, “Designing Complex Organizations”, Reading, MA, Addison-Wesley, 1973
16. J. Galbraith, “Organization Design”, Reading, MA, Addison-Wesley, 1977
17. G. Gallo, G. Longo, S. Pallottino, S. Nguyen, “Directed hypergraphs and applications”, Discrete Applied Mathematics, 42, 1993, p. 177-201
18. L. Gasser, “The integration of computing and routine work”, ACM Transactions on Office Information Systems, July 1986
19. M. Genesereth, N. J. Nilsson, “Logical Foundations of Artificial Intelligence”, San Mateo, CA, Morgan Kaufmann, 1987
20. M. Georgeff, “A theory of action for multiagent planning”, Proceedings of the National Conference on Artificial Intelligence, Austin, TX, August 1984, p. 121-125; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988, p. 205-209
21. C. Ghezzi, D. Mandrioli, S. Morasca, M. Pezzè, “A Unified High-Level Petri Net Formalism for Time-Critical Systems”, IEEE Transactions on Software Engineering, vol. 17, no. 2, February 1991, p. 160-172
22. F. Giunchiglia, “Multilanguage Systems”, Proceedings of the AAAI-91 Spring Symposium on Logical Formalization of Commonsense Reasoning, Stanford University, March 26-28, 1991
23. F. Giunchiglia, “Contextual Reasoning”, Technical Report 9211-20, IRST, Trento, Italy, November 1992
24. F. Giunchiglia, L. Serafini, “Multilanguage hierarchical logics, or: how we can do without modal logics”, Artificial Intelligence, 65 (1994), p. 29-70
25. C. Hewitt, “Offices are open systems”, ACM Transactions on Office Information Systems, vol. 4, July 1986, p. 271-287; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988, p. 321-330
26. K. Konolige, “A deductive model of belief”, Proceedings of the 8th International Joint Conference on Artificial Intelligence, Karlsruhe, Federal Republic of Germany, August 1983, p. 377-381
27. K. Konolige, M. E. Pollack, “Ascribing Plans to Agents”, Proceedings of the International Joint Conference on Artificial Intelligence, Palo Alto, 1989
28. M. L. Minsky, “The Society of Mind”, Simon & Schuster, New York, 1985
29. V. R. Lesser, D. D. Corkill, “Functionally accurate, cooperative distributed systems”, IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-11, January 1981, p. 81-96
30. M. S. Mazer, “Reasoning About Knowledge to Understand Distributed AI Systems”, IEEE Transactions on Systems, Man, and Cybernetics, vol. 21, no. 6, November/December 1991, p. 1333-1346
31. Y. Moses, M. Tennenholz, “On Cooperation in a Multi-Entity Model”, Proceedings of the International Joint Conference on Artificial Intelligence, Palo Alto, 1989
32. H. P. Nii, “Blackboard systems: The blackboard model of problem solving and the evolution of blackboard architectures”, The AI Magazine, vol. VII, no. 2, Summer 1986, p. 38-53
33. H. P. Nii, “Blackboard systems - Part Two: Blackboard application systems”, The AI Magazine, vol. VII, no. 3, August 1986, p. 82-106
34. J. L. Peterson, “Petri Net Theory and the Modeling of Systems”, Englewood Cliffs, NJ, Prentice-Hall, 1981
35. W. Reisig, “Petri Nets: An Introduction”, EATCS Monographs on Theoretical Computer Science, New York, Springer-Verlag, 1985
36. J. S. Rosenschein, M. R. Genesereth, “Deals among rational agents”, Proceedings of the 9th International Joint Conference on Artificial Intelligence, Los Angeles, CA, August 1985, p. 91-99; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988, p. 227-234
37. R. G. Smith, “A framework for problem solving in a distributed processing environment”, Ph.D. thesis, Stanford University, Stanford, CA, December 1978


38. R. G. Smith, “A framework for distributed problem solving”, Proceedings of the 6th International Joint Conference on Artificial Intelligence, Cambridge, MA, August 1979, p. 836-841
39. R. G. Smith, “The contract net protocol: high-level communication and control in a distributed problem solver”, IEEE Transactions on Computers, vol. C-29, December 1980, p. 1104-1113
40. R. G. Smith, R. Davis, “Frameworks for cooperation in distributed problem solving”, IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-11, January 1981; also in A. H. Bond, L. Gasser, Readings in Distributed Artificial Intelligence, San Mateo, CA, Morgan Kaufmann, 1988, p. 61-70
41. R. G. Smith, R. Davis, “Negotiation as a metaphor for distributed problem solving”, Artificial Intelligence, vol. 20, 1983, p. 63-109