Extending Multi-agent Cooperation by Overhearing

CENTRO PER LA RICERCA SCIENTIFICA E TECNOLOGICA
38050 Povo (Trento), Italy
Tel.: +39 0461 314312  Fax: +39 0461 302040
e-mail: [email protected]  url: http://www.itc.it

EXTENDING MULTI-AGENT COOPERATION BY OVERHEARING

Busetta P., Serafini L., Singh D., Zini F.

January 2001

Technical Report # 0101-01

Istituto Trentino di Cultura, 2001

LIMITED DISTRIBUTION NOTICE

This report has been submitted for publication outside of ITC and will probably be copyrighted if accepted for publication. It has been issued as a Technical Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of ITC prior to publication should be limited to peer communications and specific requests. After outside publication, material will be available only in the form authorized by the copyright owner.


Extending Multi-Agent Cooperation by Overhearing

Paolo Busetta(1), Luciano Serafini(1), Dhirendra Singh(2), Floriano Zini(1)

(1) ITC-IRST, via Sommarive 18, 38050 Povo, Trento, Italy
(2) Department of Computer Science, University of Trento, via Inama 5, 38100 Trento, Italy
email: {busetta, serafini, singh, zini}@itc.it

Abstract. Much cooperation among humans happens following a common pattern: by chance or deliberately, a person overhears a conversation between two or more parties and steps in to help, for instance by suggesting answers to questions, by volunteering to perform actions, by making observations or adding information. We describe an abstract architecture to support a similar pattern in societies of artificial agents. Our architecture involves pairs of so-called service agents (or services) engaged in some tasks, and an unlimited number of suggestive agents (or suggesters). The latter have an understanding of the work behaviours of the former through a publicly available model, and are able to observe the messages they exchange. Depending on their own objectives, the understanding they have available, and the observed communication, the suggesters try to cooperate with the services by initiating assisting actions and by sending suggestions to the services. These in effect may induce a change in the services' behaviour. Our architecture has been applied in a few industrial and research projects; a simple demonstrator, implemented by means of a BDI toolkit, JACK Intelligent Agents, is discussed in detail.

Keywords: Agent technologies, systems and architectures, Web information systems and services.

1 Introduction

Humans work well in teams, provided the environment is one of communication and collaboration. Often people can produce better results than their normal capabilities permit by constructively associating with their colleagues. Whenever they are faced with tasks that they cannot manage, or know that can be managed better by other associates, people seek assistance.
This observation is not new, and has inspired much research in cooperative agents, for example [6, 9, 8].

We choose to analyze this association through a slight shift in perspective. While association between agents can readily be achieved by requesting help when required, equal or even improved results can be achieved when associates observe the need for help, and initiate actions or offer suggestions with the aim of improving the plight of their colleague. In fact, these associates may communicate not only when a colleague needs assistance, but also when they feel that they can help improve the productivity of their friend (and hence of their community as a whole).

In this paper, we introduce an abstract architecture for cooperative agents based on a principle that we call overhearing. The intuition behind overhearing comes from the modeling of such human interaction as aforementioned, in a collaborative observable environment.

The overhearing architecture has been developed while working on the support of implicit culture [3], which offers a very broad conceptual framework for collaboration among agents. Implicit culture is defined as a relation between groups of agents that behave according to a cultural schema and groups that contribute to the production of the same cultural schema. The overhearing architecture described here has a narrower scope than implicit culture, and mostly focuses on agent engineering issues.

One of our goals is supporting a flexible development methodology that prescribes, for its initial phases, the design of only those agents (services) required to achieve the basic functionality of a system. The behaviour of the agents, however, can be affected by external observers via suggestions, which are special messages carrying information as well as commands. While the functionality of the services required by an application is assumed to be immutable, suggesters may be added and removed dynamically without hampering the ability of the system to reach its main objectives.

This has various advantages. Firstly, it is possible to enhance the functionality of a running system.
As an example, state-of-the-art machine-learning or tunable components can be plugged into the system as and when they become available, without the need to bring down, rebuild, and then restart the system. Secondly, the output of a system can be enhanced either by suggesting additional related information, or by requesting the deletion of outdated or unrelated results. This flexibility comes at the cost of additional sophistication, necessary to perform agent state recognition, to make communication observable, to handle suggestions and to change behaviour accordingly.

This paper is organized as follows. Section 2 presents a pattern of human collaboration that inspired the work presented here. In Section 3 we give some underlying assumptions for our architecture. Section 4 presents the overhearing abstract architecture and identifies what is needed to support it. In particular, it focuses on modelling services' behaviour, on how we realize the overhearing mechanism, and on how suggesters and services can be engineered in order to make and accept suggestions. Section 5 presents a simple system that we developed to demonstrate our architecture. Finally, Section 6 concludes the paper.

2 A Collaboration Pattern

There are several real world examples where our approach to collaboration is practical and appropriate.

Example 1. Consider the scenario where Alice (a for short) asks Bob (b for short) for information on the movies screening at the local cinema. Suppose also that Oscar (o for short) happens to overhear this dialogue. In this scenario it is likely that b's reply will be augmented with contributions from o. These contributions (which we call suggestions) may either add (or update) information to b's reply or complement it with personal opinion. This is an open communication environment and, while a posed the question to b, she may be willing to hear from other friends as well. In general, a may receive an indefinite number of replies to her question, and she is free to deal with them as she pleases.

Example 2. Consider a second scenario with the same actors and the same question posed by a. This time we will assume that b is not up-to-date with the movies currently screening at the cinema. So b decides to ring the reception and find out the details. But as he does, o suggests he had better get all the details on the actors as well, because a is very likely to ask about them immediately after.

Similar occurrences are common. They are instances of the same pattern, where suggesters are familiar with, and make suggestions to, either or both of the primary conversers. We can relate our real world cases, with their actors a, b, and o, to a range of commonly occurring scenarios in the software context. For example:

- a is a search engine building the result page for the user query, b is the enterprise information server specific to, say, fashion, and o is a suggester of such volatile information as fashion shows, exhibitions, and sales.
- a is the operating system, b is the hard disk controller, and o is the cache manager.

Another example, involving more than one suggester, is the following. This case is similar to what we wish to deal with using suggester agents.

Example 3.
There are delegations from two separate companies, who have come together for a meeting. Each delegation has a speaker. Other members of the delegation may not speak directly, but may make suggestions to the speaker. In this case, one speaker only talks with the speaker from the other delegation, and makes no attempt to repeat the message for each person in the room. In fact, the speaker makes no assumption on the number of listeners either, since hearing is assumed as long as the listeners are in the room.

3 Underlying Assumptions

For suggestions to be effective, a necessary condition is that a suggester understands the behaviour of the suggestee, in order to suggest, in a timely fashion, something

relevant to its current intentions or future goals. In general, this requires the construction of a behaviour model of the suggestee and the adoption of mental attitude recognition techniques based on the observation of its activity, as discussed, for example, in [7, 10].

However, in our approach we assume that the communicating agents are willing to receive suggestions, and therefore they make available a public behavior model for use by suggesters. These assumptions both reduce the load on the suggesters and improve the quality of their suggestions.

A complementary underlying assumption we make is that services are aware that their environment is populated by benevolent suggesters that will not use the services' public models to make malicious suggestions. Nevertheless, services are autonomous entities and are thus entitled to either accept or refuse suggestions. This can prevent, at least to some extent, misuse of services' public models as well as problems related to timing of delivery, contradictory suggestions from different suggesters, and so on.

The assumptions above dramatically simplify the complex problem of mental state recognition, and therefore make feasible the implementation of real systems, as shown in Section 5.

4 Architecture

Figure 1 summarizes the main components of our architecture. The primary conversers are referred to as Services collaborating for their individual needs. The environment could be constituted by a public communication channel between agents, or even by a secure communication mechanism where observation is explicitly authorized by the communicating services. Other agents (Suggesters) are free to participate by listening and, if need be, to suggest ways in which the goals of the services are better met.
The Overhearing Agent facilitates conversation overhearing for an indeterminate number of suggesters, and also manages different levels of observation for the various suggesters.

In a normal occurrence, domain specific suggesters would join or leave the system through a subscription service provided by the overhearing agent. Thereon, whenever the overhearing agent hears conversation concerning these domains, this information is forwarded to the respective suggesters. The suggesters are then free to contact the conversing services with suggestions on their operation. In general, the services can expect an indeterminate number of suggestions on their current activity. Most likely, these suggestions will arrive only when suggesters can determine what a service is doing, which is possible whenever the service performs an observable action like conversing with another service.

In order to have its behavior affected, a service needs to make some of its dispositions public. This is realized in terms of a public specification of its behaviors (the "behavior models" in Figure 1). This model can be realized as a finite state machine. The task of the suggesters, then, can be formulated as making suggestions to affect the service agent in such a way that it follows a path in its state space that will satisfy the suggesters' own goals.
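The subscription and notification mechanism just described can be sketched as follows. This is a minimal, hypothetical illustration in Python; the actual system is built with JACK, and all class, method and suggester names below are invented for the example.

```python
# Hypothetical sketch of the overhearing agent's subscription service.
# All names are illustrative, not part of the original system.

class Message:
    def __init__(self, sender, receiver, performative, content):
        self.sender = sender
        self.receiver = receiver
        self.performative = performative
        self.content = content

class OverhearingAgent:
    def __init__(self):
        self.subscriptions = {}   # suggester name -> selection criterion
        self.delivered = []       # (suggester, message) pairs forwarded

    def subscribe(self, suggester, criterion):
        """A suggester registers a criterion (e.g. a pattern) selecting
        which overheard messages should be forwarded to it."""
        self.subscriptions[suggester] = criterion

    def unsubscribe(self, suggester):
        self.subscriptions.pop(suggester, None)

    def overhear(self, msg):
        """Called with a copy of every message exchanged by the services;
        forwards it only to the suggesters whose criterion matches."""
        for suggester, criterion in self.subscriptions.items():
            if criterion(msg):
                self.delivered.append((suggester, msg))

oa = OverhearingAgent()
oa.subscribe("topic-suggester", lambda m: "language" in m.content)
oa.overhear(Message("assistant", "referencer", "request", "search: language"))
oa.overhear(Message("assistant", "referencer", "request", "search: fashion"))
# only the first message matches the subscription and is forwarded
```

Note how the services address only the overhearing agent: they need not know how many suggesters are present, which is the decoupling argued for in Section 4.2.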

[Figure 1 shows Service A and Service B exchanging messages over a communication channel, each publishing a behavior model; the Overhearing Agent observes the channel, suggesters subscribe to it and are notified of relevant messages, and the suggesters, equipped with behavior maps, send suggestions to the services.]
Fig. 1. System Architectural View.

From an engineering perspective, the ability to take suggestions enables the design of agents whose behavior can be changed from the outside without touching existing code or restarting a running system.

Once a model is in place, the next issue is: how does a suggester use this knowledge about another agent's behavior? One way is to engineer the model into the suggester. This, however, makes for very restricted operation in an environment where numerous agents with numerous models may exist. Alternatively, the suggester may be engineered without any information about particular models, but with the ability to understand other models in terms of its own goals (the "behavior maps" in Figure 1). This is clearly very useful for forward-compatibility, as suggesters can be built for models which are still to be invented.

4.1 Modeling Service Agents' Behavior

The following considerations are important in our context to define an agent behaviour model.

- The public model need not be an accurate model of an agent's behavior. We do not require it to completely match the behavior pattern of the agent.

  We simply wish to make available some representation that can be used by other agents to make suggestions on our agent's operation. How this agent interprets this public description of its own behavior is its own decision, and suggesters cannot make any assumptions on this relation. This ensures the decision making autonomy of the agent.
- The communication language between services is strongly related to their individual public models. Indeed, communication between the services is the only means for suggesters to recognize the current state of the services. For this reason, their public model must be expressed with reference to observable communication.
- Since we adopt BDI (Beliefs, Desires, Intentions) as our agent architecture, public models also express agent behavior in terms of public beliefs, intentions, and actions, all or some of which may be affected by suggestions.

In general, the behavior of a BDI agent can be described in terms of transitions between agent mental states. We define the public model (model, for short) as a state machine M = <S, T>, where S is a set of mental states and T is a set of labeled transitions on S, namely T ⊆ S × S × L, where L is a set of labels. A state s ∈ S is a triple <B, D, I>, where B, D, and I denote, respectively, the beliefs, desires (goals), and intentions of the agent. Here we do not investigate the internal structure of the sets B, D and I; we consider them as primitive sets. The set T contains transitions s -l-> t, where the label l ∈ L denotes an action that the agent can perform in state s, or an event that can happen in the environment. More than one action or event can be undertaken or can happen in a state s. Actually, each label l ∈ L denotes an action or event type rather than a single action or event.
The set of types should be rich enough to give sufficient information about what the service can perform or what can happen in its environment, without going into too many details, so as to remain at the proper abstraction level.

A state does not necessarily contain all the mental attitudes of the service, or the sets of mental attitudes included in a state may not correspond to the "real" ones. In other words, a state contains the representation of the service's mental attitudes that it wants to be visible to external observers. Analogously, the set T does not necessarily contain all the actions or events the service can perform or perceive, but only the representation of them that the service wants to publicize.

The model can contain transitions which correspond to actions or events that can or cannot be observed by another agent. In our framework, communication is the only observable activity of services. The model, however, can refer to other actions that can be undertaken by the service. For instance, the model can refer to the action of showing certain information to the user, or of revising the beliefs, or of retrieving a certain file from a server, etc. These actions do not involve communication with the other agents, and therefore they are not observable by the suggesters. On the other hand, suggesters can have some knowledge about these actions, and can give suggestions to the service on the opportunity of taking one of them. The same statement applies to events. The set of labels L is therefore composed of two subsets, Lo and Lno, for observable and non-observable actions/events, respectively.
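For concreteness, the public model M = <S, T> with its partition of labels into Lo and Lno can be encoded as a small data structure. The sketch below is a hypothetical Python rendering (all names invented); for the transitions it uses the machine of Figure 2, which is discussed in Example 4.

```python
# Minimal sketch of a public model M = <S, T>: states are opaque mental
# states <B, D, I>; transitions carry action/event *types*, partitioned
# into observable (Lo) and non-observable (Lno) labels.
class PublicModel:
    def __init__(self, states, transitions, observable):
        self.states = set(states)            # S
        self.transitions = set(transitions)  # T, triples (s, l, t)
        self.observable = set(observable)    # Lo; Lno is the complement

    def successors(self, state, label):
        """States reachable from `state` via an action/event of type `label`."""
        return {t for (s, l, t) in self.transitions
                if s == state and l == label}

    def is_observable(self, label):
        return label in self.observable

# The machine of Figure 2 (the search-engine interface service):
M = PublicModel(
    states={"s1", "s2", "s3", "s4", "s5"},
    transitions={("s1", "Q", "s2"),    # receive query (observable)
                 ("s2", "Q'", "s5"),   # submit query (non-observable)
                 ("s2", "S", "s3"),    # keyword suggestion (observable)
                 ("s2", "S'", "s4"),   # answer suggestion (observable)
                 ("s3", "S", "s3"), ("s3", "S'", "s4"),
                 ("s3", "Q''", "s5"),  # submit composed query (non-obs.)
                 ("s5", "R", "s4"),    # search-engine answer (non-obs.)
                 ("s4", "R'", "s1")},  # reply to assistant (observable)
    observable={"Q", "S", "S'", "R'"})
```

A suggester reasoning over such a structure can, for instance, compute which public states are compatible with the last observable action it overheard, and restrict its suggestions accordingly.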

[Figure 2 depicts a state machine with states s1 through s5 and transitions labeled Q, Q', Q'', R, R', S and S', as described in Example 4.]
Fig. 2. An example of public model.

Example 4. A public model for a service acting as an "intelligent" interface to a search engine (described in Section 5) is given by the machine M represented in Figure 2. State s1 is the initial state, where the service is ready to accept a query (consisting of a keyword to be searched for by the search engine) from a user assistant. Receiving the query (observable action Q) leads the service to state s2 (ready to submit the query), from where it can move to three different states:

- if it does not receive any suggestions, it submits the query just received to the search engine (non-observable action Q') and moves to state s5;
- if it receives a suggestion related to additional keywords to be searched for (observable event S), it moves to state s3;
- if it is suggested that an answer to the query is already available (observable event S'), it moves to state s4.

While in state s3, the service can receive further suggestions on available answers (event S') or additional keywords (event S); eventually, the service submits a query to the search engine which is the composition of Q and S (non-observable action Q''). When in state s5 (query submitted), the service waits for the answer from the search engine (non-observable event R) and moves to state s4 (answer available). From there, the observable action R', consisting of sending a reply to the user assistant with the answer, leads the service back to the initial state s1.

Suppose that a suggester observes an action Q (a query); it then knows that the service has moved to state s2 and is ready to submit the user's query to the search engine. Suppose that such a query concerns a subject (say "African Music") which the suggester is expert in; thus, the suggester can provide (additional) information about how to query the search engine for this subject. Technically

this means that the suggester sends a message of the form S (a suggestion). In turn, the service can decide to accept or to refuse the suggestion, and thus to submit or not a composed query to the search engine. Note that the refusal case is not shown in Figure 2, since a public model does not need to fully correspond to the actual behaviour of a service.

Another possible reaction of a suggester after observing a query Q is to advise the service on the existence of an alternative (more suitable or faster) information source; for instance, a cache of previous answers. The model above enables a suggester to directly send an answer to the query (suggestion S'), so causing the service to change its state into s4 and eventually to reply to the user assistant with what has been suggested.

4.2 Overhearing

Consider again the example of the meeting between delegations that we described in Section 2. The question is, how do we carry this analogy into computer networks with conversing services and listening suggesters?

A solution, consistent with the human scenario, is that each suggester retrieves the conversation from the underlying network. We will not consider this solution, for two main reasons. Firstly, it puts a physical restriction on the presence of the suggesters, because they need to be resident on the same network as the conversing services in order to retrieve messages from it. Secondly, accessing network messages involves network security issues, and does not offer a reliable solution. At the other extreme, each service could keep a record of the suggesters and forward each message of the conversation to each of them as well. This, however, is a very expensive exercise, as services need to keep track of suggesters who may join or leave the system as they please. Surely, this is not the human way either.

Our solution is an intermediate one. We propose the use of an overhearing agent which acts as an ear for all the suggesters in the system.
A simple solution, then, is for services to always send a duplicate of the original conversation message to the overhearing agent. The overhearing agent knows about the suggesters within the system and is responsible for sending the message to them.

The advantage is that services can converse without worrying about the number of suggesters in the system. Additionally, services and suggesters are not bound by physical proximity. Since the overhearer knows about the suggesters, it may also know about their interests. This means that it is not just a repository of conversation, but sends to the suggesters only messages that are relevant to them. Furthermore, the overhearer may reduce re-analysis on the part of the suggesters, by providing services that perform common analysis once, and distributing the results to the suggesters in the system.

The functionality made available by an overhearer to the suggesters has a substantial impact on the design of the latter and on the types of observations possible on a system. Potential functionality for an overhearer includes the following:

Notification of messages. This is the most important functionality required of an overhearer. Suggesters are generally required to specify a criterion (such as a pattern) to be used to select which messages should be forwarded to them. The more sophisticated a selection criterion is, the smaller the number of messages being forwarded and the less further filtering is needed on the service side. On the other hand, the implementation of a sophisticated selection criterion increases the computational cost for the overhearer itself.

Efficient dialogue logging and searching. The dialogue between two service agents could be data intensive. In order to maintain logs of a tractable size, the overhearer should not log every single byte that is exchanged in the dialogue between the two agents. Consider for instance the case where one of the services is a stream provider of movies or music; the overhearer should ignore the data stream itself, and log only meta-data about it.

Dialogue query language. If an overhearer offers a dialogue logging system, it should also provide a query language for the suggesters. This query language cannot be a simple relational query language since, unlike a static database, the data collected by the overhearer is a stream, a historical collection. Examples of queries could be selective queries, which provide the suggester with some data for each message satisfying a certain property exchanged between a pair of services, or dynamic queries, which provide the suggester with all the messages satisfying a property P until a message satisfying the property Q is exchanged.

Simple analysis. This may be required by suggesters as well as by a human operator controlling a system. Information to be analyzed may be simply statistical (e.g., number of messages per type and per unit of time), or content-based (e.g., number of messages matching a given pattern).
Some of the analysis, as well as parts of the logs, could be sent periodically to agents performing further elaboration (e.g., collecting databases for collaborative filtering or user profiling).

4.3 Suggesting and Being Suggested

Designing agents able to send or to accept suggestions is a non-trivial task. Indeed, suggesters need to recognize the state of a service, and to decide how to influence the service in order to achieve their own goals. In turn, services that receive suggestions may have to perform reflective reasoning concerning mentalistic aspects (beliefs, goals, intentions), as well as handling interruptions, performing recovery and doing whatever else is necessary when a committed course of actions has to change because of external stimuli. An exhaustive treatment of the issues involved and of the applicable solutions would far exceed the scope of this paper. However, we highlight some of the points that will be addressed by future work.

Provided the public model of a service is viewed as a finite state machine, suggestions can be divided into two general categories, depending on the effects they are meant to achieve on a service:

- given that the service is in a public state s, suggestions for the choice of a particular action Ai from the list of possible actions A1, A2, ..., An for that public state; and,
- given that the service is in a public state sj, suggestions for the change in state to sk, without performing any actions, but through changes in public beliefs, desires and intentions.

Multi-agent model checking [2] and planning as model checking [5] have been adopted as our starting points for the research concerning the reasoning on the current and intended state of a service and on the suggestions to be sent by a suggester.

From the perspective of a service whose internal state corresponds to some public state s, suggestions can be accepted only if it is safe to do so, and must be refused otherwise. Indeed, there are a number of constraints that have to be taken care of; most importantly, consistency in mental attitudes (e.g., avoiding contradictory beliefs or goals, possibly caused by suggestions from different sources) and timing (a suggestion concerning s may arrive when the service has already moved to a different state t).

Particular care is required for the treatment of suggestions that affect a committed course of actions (intention). For instance, a suggestion of adopting an intention ip (that is, using a plan p on a standard BDI computational model [11]) can arrive too late, after the agent has already committed to a different intention ij. Another example is a suggestion that makes some intentions pointless (for instance, intentions of computing or searching for the same information being suggested). In these cases, the service should revise its intentions to take advantage of the suggestions. Computationally, the issues with intention revision are eased by the adoption of guard conditions (called maintenance conditions in the standard BDI model), which cause a running intention to stop or abort on their failure, that is, when some necessary preconditions are no longer met.
However, the effects of actions already performed when an intention is stopped should be properly taken into account, or otherwise reversed or compensated. This issue is, in the general case, hard to deal with, but it is manageable in the case of anytime algorithms [12]. An extension of the standard BDI model has also been proposed [4], which exploits the automatic recovery capabilities of distributed nested ACID transactions to handle the same problem for agents operating on databases or other recoverable resources.

5 A Simple Application

We have applied the overhearing architecture to a few research and industrial applications, either to support an implicit culture-based design [3], or as an architectural pattern in its own right. For the sake of illustration, we give an overview of a simple demonstrator, developed by means of the JACK Intelligent Agents(TM) platform by Agent Oriented Software. None of its functionality is particularly innovative, or could not be engineered otherwise; rather, our aim is to show

at some level of detail how to put into practice some of the principles described earlier.

The demonstrator acts as an "intelligent" interface to an open source search engine, HTDIG [1]. HTDIG is a world wide web indexing and searching system for an Internet or intranet domain; that is, it works on the contents of one or more web sites.

The interaction between a user and our demonstrator consists of two main phases. In the first phase, the user is shown a list of topics and is asked to pick one or more of them, depending on her interests. In the second phase, the user browses through the list of words extracted by HTDIG during its indexing operations. The user can choose a word from this list; this causes the system to call HTDIG in order to obtain the pointers to the pages containing either the word itself (which is the usual behaviour of HTDIG) or other words related to the chosen one in the context of the topics selected during the first phase, for instance synonyms or specializations.

The list of topics shown to the user in the first phase is edited off-line by the system administrator. For each topic, the administrator also edits a simple taxonomy of keywords. These taxonomies are used to enrich the user's requests during the second phase described above. For instance, if the user selects "Software Engineering" and "Literature" as her topics of interest, and then chooses "language" from the list of words, the search is automatically extended to whatever happens to be subsumed by "language" in the Software Engineering and Literature taxonomies, which may include things as diverse as Java, Prolog, English, and Italian.

The demonstrator is implemented as a multi-agent system, including an overhearer able to observe the communication. Four main roles are played in the system: the "user assistant", the "referencer", the "topic suggester" and the "cache suggester". A user assistant is in charge of the interaction with the user, while the actual interfacing with HTDIG is the task of a referencer agent.
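The referencer's handling of a search request, detailed in the remainder of this section, can be sketched as follows. This is an illustrative Python sketch, not the actual JACK implementation; all names are invented, and the suggestion-collection window of the real system is abstracted away.

```python
# Illustrative sketch of a referencer-style agent: suggestions received
# before the request is processed determine which of three courses of
# action is taken. Names are invented; not the actual JACK code.
class Referencer:
    def __init__(self):
        self.extra_keywords = []   # filled by topic-suggester messages
        self.cached_url = None     # filled by cache-suggester messages

    def suggest_keywords(self, words):
        self.extra_keywords.extend(words)

    def suggest_answer(self, url):
        self.cached_url = url

    def get_reference_page(self, word, search_engine):
        # Three alternatives, mirroring the three plans described below:
        if self.cached_url:                 # an answer is already known
            return self.cached_url
        if self.extra_keywords:             # composed query
            return search_engine([word] + self.extra_keywords)
        return search_engine([word])        # plain query

# A stand-in for the HTDIG search facility, returning a results URL:
fake_engine = lambda words: "results?q=" + "+".join(words)
r = Referencer()
r.suggest_keywords(["java", "prolog"])
page = r.get_reference_page("language", fake_engine)
```

The key design point, echoed in the text, is that the referencer holds no logic for producing the keyword or URL associations itself; it only accepts them.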
Once a user has selected a word, her user assistant requests a referencer to search for it; the referencer calls the search facility of HTDIG, then replies to the user assistant with the URL of an HTML page containing the results of the search.

As a matter of fact, the processing of a search request by a referencer is something more than simply calling HTDIG; a simplified model of this activity, sufficient to support the design of (or reasoning by) a suggester agent, is shown in Figure 2. The referencer posts itself a subgoal, GetReferencePage, which can be satisfied in three different ways (i.e., by three alternative JACK plans). In the first case, nothing but the word being searched is known; HTDIG is then called for this word. In the second case, the referencer believes that additional keywords are associated to the word; HTDIG is then called to search for the original word and any of the associated keywords. In the third case, the referencer believes that there is already a page with the results of the search; thus, it simply replies with the URL of that page.

Notably, the referencer itself does not contain any logic for associating additional keywords or URLs to a word being searched. However, it is able to accept

messages (suggestions) concerning those associations. Between receiving a request and submitting the GetReferencePage subgoal, the referencer waits for a short time, to allow suggesters to react; if suggestions arrive after the subgoal has been processed, they are ignored.

A topic suggester specializes in the taxonomy of a given topic. It subscribes with the overhearer to be notified of all request messages containing one of the words in its taxonomy, with the exclusion of the leaf nodes. Whenever an assistant sends a request to a referencer, all suggesters for the topics selected by the user react by simply suggesting all the subsumed words.

A cache suggester keeps track of the URLs sent back by referencers, in order to suggest the appropriate URL when a word is searched for a second time by a user (or by a user who selected the same topics). To this end, a cache suggester asks to be notified of all requests and replies between user assistants and referencers.

A simple multicasting mechanism has been implemented, which extends the "send" and "reply" primitives of JACK by forwarding a copy of the messages to the overhearer; the latter uses the Java reflection package to decompose messages and to match them against subscriptions from suggesters. Subscriptions are expressed as exact matches on contents and attributes of messages, such as sender, receiver, performative, message type and type-specific fields.

This simple demonstrator could easily be enhanced by plugging in additional types of suggesters, or by enhancing the existing ones, without any change either to the user assistants or to the referencers; consider, for instance, a user profiling module. Building more flexibility, in terms of possible options (e.g.
breaking up tasks into subgoals with strategies selected by means of rich belief structures), into the user assistants would potentially enable novel functionality, such as the selection of referencers depending on criteria outside of the control of the system (network or computing loads, collaborative filtering strategies, and so on) that could be tuned or even defined case by case.

6 Conclusions

The overhearing architecture is intended to ease the design of flexible, collaborative multi-agent systems in which the main functionality of the system (provided by service agents) is immutable, while additional functions (provided by suggestive agents) may be added and removed dynamically. This is a promising approach when services are willing to accept suggestions from benevolent suggesters in their environment, and make a public model of their behaviour available.

In the future, we expect to investigate in depth some of the major aspects of the architecture: communication infrastructures to support the public communication required for overhearing; modelling of, recognition of, and reasoning about the behaviour of service agents; and computational models for dealing with suggestions.
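As an illustration, the overhearer's subscription matching described above (exact matches on message fields, checked via the Java reflection package) can be sketched in plain Java. The class, field, and method names below are hypothetical and ignore JACK's actual messaging API; this is a minimal sketch of the matching idea, not the demonstrator's implementation.

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Hypothetical message class; its fields mirror the attributes named in the
// text: sender, receiver, performative, message type, and a type-specific field.
class Message {
    final String sender, receiver, performative, type, word;
    Message(String sender, String receiver, String performative, String type, String word) {
        this.sender = sender; this.receiver = receiver;
        this.performative = performative; this.type = type; this.word = word;
    }
}

// The overhearer decomposes each forwarded message with reflection and checks
// it against a subscription expressed as exact field/value pairs.
class Overhearer {
    static boolean matches(Message m, Map<String, Object> subscription) {
        try {
            for (Map.Entry<String, Object> e : subscription.entrySet()) {
                Field f = m.getClass().getDeclaredField(e.getKey());
                f.setAccessible(true);
                if (!e.getValue().equals(f.get(m))) return false;
            }
            return true; // every constrained field matched exactly
        } catch (NoSuchFieldException | IllegalAccessException ex) {
            return false; // unknown field: this subscription cannot match
        }
    }

    public static void main(String[] args) {
        // A topic suggester watching requests for one word of its taxonomy.
        Map<String, Object> sub = new HashMap<>();
        sub.put("performative", "request");
        sub.put("word", "agent");
        Message req = new Message("assistant1", "referencer1", "request", "Search", "agent");
        Message reply = new Message("referencer1", "assistant1", "reply", "Search", "agent");
        System.out.println(Overhearer.matches(req, sub));   // true
        System.out.println(Overhearer.matches(reply, sub)); // false
    }
}
```

A cache suggester would instead subscribe to both requests and replies (e.g., by leaving the performative unconstrained), and a richer matcher could replace exact equality with predicates over field values.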

Acknowledgments

We thank Antonia Donà and Srinath Anantharaju for their contribution to the implementation of the demonstrator, and for reviewing this paper.

References

1. ht://Dig, WWW Search Engine Software, 2001. http://www.htdig.org/.
2. M. Benerecetti, F. Giunchiglia, and L. Serafini. Model Checking Multiagent Systems. Journal of Logic and Computation, 8(3), 1998.
3. E. Blanzieri and P. Giorgini. From Collaborative Filtering to Implicit Culture: a General Agent-Based Framework. In Proc. of the Workshop on Agent-Based Recommender Systems (WARS), Barcelona, Spain, June 2000. ACM Press.
4. P. Busetta and R. Kotagiri. An architecture for mobile BDI agents. In Proc. of the 1998 ACM Symposium on Applied Computing (SAC'98), Atlanta, Georgia, USA, 1998. ACM Press.
5. A. Cimatti, M. Roveri, and P. Traverso. Automatic OBDD-based Generation of Universal Plans in Non-Deterministic Domains. In Proc. of the 15th Nat. Conf. on Artificial Intelligence (AAAI-98), Madison, Wisconsin, 1998. AAAI Press.
6. J. Doran, S. Franklin, N. Jennings, and T. Norman. On Cooperation in Multi-Agent Systems. The Knowledge Engineering Review, 12(3), 1997.
7. A. F. Dragoni, P. Giorgini, and L. Serafini. Updating Mental States from Communication. In Intelligent Agents VII: Agent Theories, Architectures, and Languages, 7th Int. Workshop, LNAI, Boston, MA, USA, July 2000. Springer-Verlag.
8. M. Klusch, editor. Intelligent Information Systems. Springer-Verlag, 1999.
9. T. Oates, M. Prasad, and V. Lesser. Cooperative Information Gathering: A Distributed Problem Solving Approach. IEE Proc. on Software Engineering, 144(1), 1997.
10. A. S. Rao. Means-End Plan Recognition: Towards a Theory of Reactive Recognition. In Proc. of the 4th Int. Conf. on Principles of Knowledge Representation and Reasoning (KRR-94), Bonn, Germany, 1994.
11. A. S. Rao and M. P. Georgeff. An Abstract Architecture for Rational Agents. In Proc. of the 3rd Int. Conf. on Principles of Knowledge Representation and Reasoning (KR'92), San Mateo, CA, 1992. Morgan Kaufmann Publishers.
12. S. Zilberstein. Using Anytime Algorithms in Intelligent Systems. AI Magazine, 17(3), Fall 1996.