User-Tailored Planning of Mixed Initiative Information-Seeking Dialogues

In: User Modeling and User-Adapted Interaction (Special Issue on Computational Models for Mixed-Initiative Interaction), 1999, 8 (1-2): 133-166.

Adelheit Stein, GMD-IPSI, German National Research Center for Information Technology, Integrated Publication and Information Systems Institute, Darmstadt, Germany. Email: [email protected]

Jon Atle Gulla, Norwegian University of Science and Technology, Department of Computer Science, Trondheim, Norway

Ulrich Thiel, GMD-IPSI, German National Research Center for Information Technology, Integrated Publication and Information Systems Institute, Darmstadt, Germany

Abstract. Intelligent dialogue systems usually concentrate on user support at the level of the domain of discourse, following a plan-based approach. Whereas this is appropriate for collaborative planning tasks, the situation in interactive information retrieval systems is quite different: there is no inherent plan–goal hierarchy, and users are known to often opportunistically change their goals and strategies during and through interaction. We need to allow for mixed-initiative retrieval dialogues, where the system evaluates the user's individual dialogue behavior and performs situation-dependent interpretation of user goals, to determine when to take the initiative and to change the control of the dialogue, e.g., to propose (new) problem-solving strategies to the user. In this article, we present the dialogue planning component of a concept-oriented, logic-based retrieval system (MIRACLE). Users are guided through the global stages of the retrieval interaction but may depart, at any time, from this guidance and change the direction of the dialogue. When users submit ambiguous queries or enter unexpected dialogue control acts, abductive reasoning is used to generate interpretations of these user inputs in light of the dialogue history and other internal knowledge sources. Based on these interpretations, the system initiates a short dialogue offering the user suitable options and strategies for proceeding with the retrieval dialogue. Depending on the user's choice and constraints resulting from the history, the system adapts its strategy accordingly.

Key words: conversational retrieval, mixed initiative, dialogue planning, dialogue act interpretation, abductive reasoning

1. Mixed Initiative in Retrieval Systems: Why, When, and How?

Artificial intelligence systems which support users of information systems often interpret human–computer interaction as a collaborative dialogue in which both information seeker and provider may take the initiative and control the dialogue. Most current approaches in this area presuppose well-defined tasks, e.g., looking up information for travel planning, where the task and domain levels are closely intertwined. Bunt (1989) coined the term "information dialogue" for the type of dialogue occurring in simple factual information systems, such as electronic services which provide access to phone directories, train schedules and the like.

By contrast, classical information retrieval (IR) systems accessing large textual or multimedia databases have to deal with a different kind of user behavior. In such settings, many users initially have vague information needs: they know they need some information but often cannot specify it. Belkin and Vickery (1985) call this an "anomalous state of knowledge". The usual solution in this case is to consult a human intermediary. To obtain a satisfactory result it is crucial that the dialogue partners establish – in addition to a shared notion of what information is desired – a mutual understanding of the criteria to be used to decide whether retrieved items constitute a solution to the information problem.

Humans have, of course, a certain repertoire of behavioral patterns for handling problems of this type, and a whole set of retrieval tactics and strategies known to skilled searchers has been identified in IR research (see, e.g., Saracevic et al., 1997). Cooperative systems that aim to support the user in finding and pursuing appropriate retrieval strategies have to internally represent and use knowledge about such behavioral patterns. They must also take into account that users may at any time change their understanding of the information problem and adopt other strategies than those suggested by the system. Hence, we need context-dependent user guidance without presupposing a strict hierarchy of plans and task goals. At the same time, sufficient dialogue control options should be available, for example, to force the system to react to changed user strategies.

In order to offer effective dialogue support, any system must rely on an implicit or explicit model of when initiative shifts (or should shift) among the dialogue participants. Engaging in cooperative mixed-initiative dialogues requires the system to apply mechanisms for deciding when to take or relinquish initiative depending on the current dialogue situation, and how to adjust its behavior accordingly. In mixed-initiative dialogues, the roles of the two participating agents are not predetermined (cf. Allen, 1994). Agents should not only collaborate to solve domain problems, but also to perform "interaction management" (Bunt, 1996). That is, they should have options for negotiating both dialogue control ("about" dialogue) and problem-solving strategy ("about" task or strategy).

The type of information-seeking interaction we are considering in this paper requires an inference process which reasons about both the information problem and the dialogue situation. Typically, the information problem tends to be "open-ended" in the sense that the user's information need is underspecified at the beginning, and both dialogue partners contribute to a constructive solution based on additional assumptions or hypotheses about the information need, which then may be negotiated and clarified. Hence, it is crucial that the system is capable of showing initiative at the task level, i.e., by attempting to transform the originally vague information problem into a solvable one. This is accomplished by an abductive reasoning system analyzing the user's queries (cf. Muller and Thiel, 1994).

The focus of this paper, however, is at the level of dialogue, where the system needs to be highly flexible in detecting and interpreting the user's initiative – be it a rejection of the system's hypotheses or a change of interest. As dialogue management imposes logical requirements different from those encountered in the retrieval task, we employ two different abductive inference engines for the system's retrieval and dialogue components (cf. Thiel et al., 1996, and Stein et al., 1997a).

Starting with a general model of information-seeking dialogue, our approach identifies dialogue situations in which the system should seize initiative to act cooperatively. Given a repertoire of retrieval strategies (represented as abstract dialogue plans, called "scripts"), a set of dialogue control rules, and the abductive inference mechanism, the system is able to initiate negotiations about the current strategy based on an evaluation of the dialogue history. Although certainly superior to an automated system, even a human intermediary has only a limited understanding of the user's actual information need. Thus, the intermediary must rely on obvious, observable features of the dialogue situation, for example, when a query yields no or too many results, is too ambiguous, etc. The skilled searcher – or a system endowed with an equivalent behavioral repertoire – can then suggest helpful strategies to circumvent the problem. Users may follow the suggested strategies or perform specific dialogue control acts to change the direction of the dialogue whenever they want, e.g., if a certain aspect of their information need turns out to be ambiguous and they decide to follow a newly detected thread.

In the remainder of this article, we outline our approach to mixed-initiative dialogue in the framework of an intelligent IR system, discussing how the system uses this model to actively engage in the dialogue and to adapt its behavior to the current situation. Our experimental prototype MIRACLE and some interaction examples are described in Section 4. We employ a comprehensive dialogue model (see Section 5) to represent various facets of the user's behavior and the system's reactions in a structured dialogue history. The dialogue component described in Section 6 uses abduction to make sense of ambiguous user inputs, evaluating the history. It generates situation-dependent interpretations of these inputs, infers possible follow-up actions, and – taking individual user preferences into account – offers the user suitable options for proceeding in the dialogue.


2. Related Research

Recently, issues of mixed initiative and dialogue control have been increasingly addressed in research on intelligent human–computer collaboration (see Terveen, 1995, for an overview). Relevant AI-oriented approaches concentrate on collaborative activity, focusing on the agents' beliefs, goals, and plans. Most computational models of collaborative discourse in AI follow a "plan-based approach" (Taylor et al., 1996), where the agents are seen to pursue goals and plans at the level of the domain of discourse. Typical application areas are transportation or travel planning (e.g., Rich and Sidner, 1998). Some existing discourse models (e.g., Traum and Allen, 1994; Jameson and Weis, 1995; McRoy and Hirst, 1995) also address social factors, such as conversational conventions, expectations, and obligations.

Most of the numerous research prototypes concerned with discourse planning and adaptive user modeling concentrate on natural language applications (see Wahlster and Kobsa, 1989, for an early survey, and Schuster et al., 1988, for a discussion of the relationship between user models and discourse models). As typical examples we refer to explanation dialogue systems (e.g., Moore and Paris, 1993; Chu-Carroll and Carberry, 1995; Stock et al., 1997; McRoy et al., 1997), intelligent presentation systems (see a selection in Maybury and Wahlster, 1998), and task-oriented spoken dialogue systems (see, e.g., Maier et al., 1997, for a recent collection). User modeling components are generally construed as a means to enhance a system's reactiveness to user needs. Whereas most approaches concentrate on user stereotypes and characteristics such as background knowledge, other valuable approaches rely on "short-term individual user models" built up incrementally (Rich, 1989). For the latter one needs to decide what entities are to be represented in the dialogue history (e.g., only the topics/concepts addressed during the discourse or, additionally, intentional structures as described by Grosz and Sidner, 1986, 1990). Also, the system must employ an active component which exploits the history or user model in order to generate cooperative user support.

Research relevant to our application context stems from the field of information retrieval (IR) and, more specifically, intelligent multimedia IR (see Ruthven, 1996, and Maybury, 1997, for recent collections). Information retrieval involves a variety of reasoning tasks, ranging from problem definition to relevance assessment. Some researchers in IR established a retrieval-as-inference approach following a proposal put forward by van Rijsbergen (1989). Here, a document is assumed to be relevant if its contents (as a set of logical propositions) allow the query to be inferred. Several logics have been proposed in order to formalize this relationship. A complementary view of the retrieval process relies on cognitive models, trying to capture the interactive nature of IR (e.g., Logan et al., 1994). Cognitive approaches attempt to represent the mental state of users, e.g., their knowledge, intention, and beliefs. The dialogue contributions are modeled as dialogue acts which exert certain well-defined effects on these representations.

Early online information systems like online library catalogues and bibliographical databases offered only very few interaction facilities and little user support. They simply processed submitted queries. The shortcomings of this "matching" paradigm (cf. Bates, 1986) were soon recognized, and information retrieval was more and more regarded as a process involving cycles of query refinements following inspection of results. Simple document-oriented relevance feedback facilities were employed in early experimental systems (cf. Salton and McGill, 1983), allowing the users to accept or reject individual documents retrieved. Although this kind of interaction was more reminiscent of a dialogue (cf. Oddy, 1977) than a programming activity (as suggested by the matching paradigm), the effectiveness of the interaction was impeded by the insufficient means of dialogue control available to the user.

Relevance is usually decided upon in terms of meaning and use. Hence, more advanced IR systems not only refer to retrieved objects/documents but also to the concepts the user has in mind when formulating a query. Using tools like domain models, thesauri, and certain inference mechanisms, concept-based IR systems can enhance the conceptual coherence of the retrieval process decisively. However, even if a system can determine the intended meaning of a user's conceptual query, its relevance to the user's information need can only be inferred from the user's reactions during interaction. Contemporary IR systems that provide additional user support, such as automatic query expansion and relevance feedback facilities, treat such facilities as extra-dialogic functions – usually as additional operations on the query or result set, which are regarded as data objects but not as dialogue contributions. In most cases, these systems do not incorporate any elaborate dialogue models or dialogue planning components. We believe, however, that methods derived from dialogue modeling, intelligent information retrieval, and user modeling should be combined to improve a system's capability to actively participate in a dialogue and deal with vague and changing user goals.

Following a retrieval-as-interaction point of view (see, e.g., Belkin and Vickery, 1985, and Ingwersen, 1992), the notion of "conversational IR" was introduced, and it was suggested to model the entire interaction as a complex web of mode-independent "conversational acts" based on a multi-layered model of conversational tactics and information-seeking strategies (see Stein and Thiel, 1993, and Belkin et al., 1995). In this article, we refer to the theoretical framework proposed there, describe some recent extensions and the formal specifications of the dialogue model we developed for integration in MIRACLE, and finally discuss a dialogue planning approach which enables the system to detect problematic dialogue situations and suggest solutions. Similar to the underlying retrieval procedure, the dialogue planning component employs abductive reasoning as its inferential framework.


3. “Initiative” and “Control”

In our dialogue system, we use the COnversational Roles (COR) model (Sitter and Stein, 1992, 1996) as a general model of information-seeking dialogue and combine it with a number of abstract dialogue plans (called "scripts") in order to allow both dialogue flexibility and goal-directed user guidance. Whereas COR describes all of the possible interchanges of dialogue moves and acts that may occur at the various states of a dialogue, scripts are used to guide the user through the global stages of retrieval interaction. Both the COR model and the scripts are domain and application independent, COR covering the illocutionary aspects and flexible conversational tactics of the dialogue and the scripts representing useful strategies and "recommended" actions for solving different types of retrieval tasks.

The integrated dialogue model (see Section 5) is made explicit in a declarative notation, which is sufficient for purposes of dialogue analysis and the construction of a hierarchically structured dialogue history. For dialogue planning purposes, however, we need an additional, active mechanism which enables the system to generate cooperative dialogue contributions and hence to engage in a mixed-initiative interaction. To achieve this goal, our dialogue planning component (see Section 6) exploits its internal knowledge sources, i.e., the dialogue model, a number of dialogue control rules, and the dynamically created dialogue history, in order to manage the user–system interaction and be able to actively participate in the dialogue. The system reacts immediately to unexpected situations (user actions not covered in scripts or plans); it uses abductive inference in order to compute suitable follow-up actions which are not pre-specified in the current script for this specific situation or dialogue state and offers the user the resulting set of options for continuing the dialogue.

In the context of our view of mixed initiative, we introduce below some specific definitions of "taking initiative" and related terms, and then discuss the distinctions between initiative and control, and between task initiative and dialogue initiative, in our model (see Section 5.1 for a more concrete discussion of how these definitions are used and interpreted in the COR dialogue model).

Turn taking (choice/change of speaker) may be an important indicator of shifts in initiative or control, but surely is not identical with taking the initiative. Agents are seen to take the initiative when they take the turn to start a new dialogue segment (COR dialogue cycle or embedded subdialogue), determining the dialogue focus and the "conditions of action" for the subsequent moves. Typical examples of moves/acts which initiate a new dialogue cycle or subdialogue are requests for information, such as queries or help requests, and voluntary offers to do some specific action (e.g., to change the presentation mode or to provide a detailed explanation).


Referring to interpersonal aspects, the term "condition of action" is used to denote one of several possible functions of a dialogue act. We regard the dialogue as a "cooperative negotiation" where both agents have discourse obligations and expectations. When performing an act, the speaker enters upon a commitment or tries to fulfill a pending commitment, or, if this is impossible, withdraws or rejects it in a way that can be understood by the addressee. As will be discussed in more detail in Section 5.1, initiating acts like "request" and "offer" usually do introduce new conditions of action, specifying which agent is to perform what action. Responding acts like "accept", "promise", and "inform", on the other hand, adopt or fulfill the conditions from a preceding turn. An inform act, for example, does not define any new conditions of action (although it might be uttered with the general expectation that the addressee will/should give some feedback). From what has been said above it follows that the agent that initiates a new dialogue cycle holds the initiative until the requested/offered information is presented, or the request/offer is withdrawn, rejected, or countered by either agent.

Initiative and control are not equivalent from our point of view. Agents are seen to take control of the dialogue when they attempt to "change the direction" of the dialogue. This may happen either when an unexpected act is performed (e.g., an act not included in the current plan/script, such as rejecting an offer or refusing to answer a question) or when the current dialogue course is suspended (e.g., by initiating a clarification subdialogue). In the latter case the agent controls the direction but also takes over the initiative, at least until the subdialogue is finished. We use the term "dialogue control act" when an unexpected/non-recommended act is performed without taking over the initiative, i.e., without specifying which agent should do what action next. Typical examples are withdrawals (of previously made commitments), rejections (of a commitment or suggestion of the other agent), and negative evaluations (of information provided by the dialogue partner).

Some of the existing models of mixed initiative in AI-based dialogue systems (cf. Cohen et al., 1998, for an extended review) address different types of initiative/control, on different levels of the discourse. Chu-Carroll and Brown (1998), for example, make an explicit distinction between "task initiative" and "dialogue initiative", claiming that both must be modeled in order to account for complex behavior and interaction patterns. As mentioned above, most existing IR systems do not provide sufficient user support at the dialogue level, and retrieval tasks are mainly supported by providing extra-dialogic functions or tools to the user (e.g., thesauri or other terminological aids). In the MIRACLE system, we try to couple the retrieval and dialogue functionalities more closely in an integrated framework, distinguishing, as do Chu-Carroll and Brown, between dialogue and task initiative.

In our view, taking the dialogue initiative means to start a dialogue cycle, determining a new dialogue focus and condition of action, as described above. We speak of task initiative when the agent not only initiates a new cycle but also proposes some problem-solving action(s) for a given task. In principle, our model allows each dialogue partner to take the dialogue initiative, the task initiative, and the control of the dialogue (there exist some examples of each form in MIRACLE). However, taking into account specific differences between user and system, the implementation in MIRACLE clearly favors or restricts some of these options. For example, the user takes the dialogue initiative whenever submitting a query or clarification request and is completely free to take control and change the current direction of the dialogue, whereas the system should only do so if it can, at the same time, propose ways of how to best proceed or at least explain why the current strategy or direction failed.

Knowing what kind of dialogue acts have actually been performed, the system is able to track initiative shifts and then decide whether it needs to seize initiative itself, and in which way. When the user directly responds to some system initiative, e.g., accepts an offer, the system keeps the initiative and continues as expected. When users take the dialogue initiative or control and their input (request/query or control act) is unambiguous, the system responds by providing the retrieved results or simply executing what the user wanted – while the user is still holding the initiative. If the user's request or control act is ambiguous, however, the system needs to take the initiative back in order to cooperate, for instance, in the task of finding a good/better query or next step, rather than leaving this to the user alone. Using abduction, the system is usually able to generate a number of different interpretations of the ambiguous user act and to actively suggest suitable problem-solving actions (although the system does not propose domain actions or plans).
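This case analysis can be pictured as a small rule set. The following Prolog sketch is purely illustrative – all predicate names (react/2, interpretation/2, expected_by_current_script/1) are our own invention, not MIRACLE internals – but it captures the three cases just described: an expected response, an unambiguous user initiative, and an ambiguous act.

    % Illustrative sketch only -- predicate names are invented, not MIRACLE code.

    % Hypothetical example knowledge: what the current script expects, and
    % which interpretations exist for a given act.
    expected_by_current_script(accept(offer_menu)).
    interpretation(button(reject), reject_offer).
    interpretation(button(reject), negative_evaluate).

    % An act is ambiguous if more than one interpretation can be generated.
    ambiguous(Act) :-
        findall(I, interpretation(Act, I), Is),
        length(Is, N),
        N > 1.

    % Case 1: the user responds as expected -- the system keeps the
    % initiative and continues with the current script.
    react(Act, continue_script) :-
        expected_by_current_script(Act).

    % Case 2: the user takes initiative/control with an unambiguous act --
    % the system simply executes it; the user holds the initiative.
    react(Act, execute(Act)) :-
        \+ expected_by_current_script(Act),
        \+ ambiguous(Act).

    % Case 3: the act is ambiguous -- the system takes the initiative back
    % and offers its interpretations for negotiation.
    react(Act, offer_interpretations(Is)) :-
        \+ expected_by_current_script(Act),
        findall(I, interpretation(Act, I), Is),
        Is = [_, _|_].

    % ?- react(button(reject), R).
    % R = offer_interpretations([reject_offer, negative_evaluate]).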

According to our definitions above, the system is taking the task initiative (which "includes" dialogue initiative) whenever it presents the generated query and control act interpretations to the user. In both cases, the system does not just initiate a simple clarification dialogue, e.g., asking users to specify the input themselves, but offers its own interpretations and suggestions of how the user could proceed. This may be regarded as initiating the establishment of mutual beliefs about what the user actually intended – not only concerning the user's current information need or dialogue control act (trying to identify the act type), but also inferring the user's future intention, e.g., to change the current strategy. This model is discussed in detail in Section 6.

4. Retrieval Interaction in MIRACLE

Most contemporary IR systems concentrate on the efficient processing of queries, whereas the user–system interaction is not in the focus of the system design. Our rationale (cf. Thiel et al., 1996) takes a different point of view and regards both aspects as equally important. This is reflected in the baseline architecture of our system prototype (see Figure 1). MIRACLE (MultImedia concept Retrieval bAsed on logiCaL query Expansion) allows concept-based retrieval in large multimedia databases and is implemented in C, Smalltalk, and Prolog. The system integrates three active components: a probabilistic indexer for texts and images (Muller and Kutschekmanesch, 1996) (or, alternatively, the full text retrieval system INQUERY, cf. Callan et al., 1992); an abductive retrieval engine (Muller and Thiel, 1994); and the abductive dialogue component discussed in this article. Currently, the system interfaces two applications, a subset of Macmillan's Dictionary of Art in electronic form and a relational database with information about European-funded research projects (ProCORDIS). The arts databases consist of SGML-structured full-text documents (biographies of artists, reference articles, etc.), factual knowledge about the artists and their works of art, and a sample of photographs of the art objects. All examples discussed in the following are taken from the arts application domain.

[Figure omitted: architecture diagram. A WWW user interface exchanges input and responses with the Abductive Information Retrieval engine (AIR) and the Abductive Dialogue Component (ADC); an Indexing Component performs database access for expanded queries. All components draw on a stratified knowledge base comprising a document structure model, a concept index, a semantic domain model, a syntactic index, the dialogue model & rules, and the dialogue history.]

Figure 1. Architecture of MIRACLE.

Both the retrieval engine and the dialogue manager employ abductive reasoning to make sense of ambiguous user inputs – the retrieval engine when dealing with the interpretation of user queries, and the dialogue component with other ambiguous user acts, i.e., dialogue control acts, such as withdraw, reject, and request for clarification. The dialogue component will be described in later sections; below we briefly outline MIRACLE's retrieval functionality so as to illustrate the wider application context of the dialogue planning.

4.1. QUERY EXPANSION USING ABDUCTION

Queries are entered via a query form, which allows the specification of restrictions on the attributes of objects to be retrieved. The system retrieves relevant parts of artists' biographies or other documents related to the query concepts, together with factual information from a database (see Stein et al., 1997a, for a number of illustrative examples). However, the retrieval process is not confined to keyword matching but employs a concept model to establish the relationship between query concepts and access paths to the database. If a user query is ambiguous with respect to the semantic domain model, the abductive inference mechanism (see Figure 2) generates query reformulations considering the available information structures. A query like "Search for artists with style Impressionism in country France" might be interpreted in several ways, depending on the actual domain model. The country attribute, for instance, can be mapped onto the artists' nationality, their place of birth or death, the location of one/most of their works of art, etc.

Deduction

    from:  a → b   All artists have created a work of art. (rule)
           a       Christo Javacheff is an artist. (fact)
    infer: b       Christo Javacheff has created a work of art. (conclusion)

Abduction

    from:  a → b   All works of art have a creation date. (rule)
           b       Michelangelo's Pietà has a creation date. (observation)
    infer: a       Probably, Michelangelo's Pietà is a work of art. (hypothesis)

Figure 2. Two kinds of inference.

Abduction can be roughly characterized as a process which generates explanations for a given observation. Unlike deductive inference (see Figure 2), abduction allows not only truth-preserving but also hypothetical reasoning. In our context, the observation to be explained is the user's query (or any other ambiguous dialogue act of the user, as discussed in Section 6 below). In general, abduction will find all possible "explanations" with respect to a given set of data and a query formulation (for more detailed discussions of abductive retrieval see Muller and Thiel, 1994, and Thiel et al., 1996). As not all of the explanations need to be valid altogether, we refer to each explanation as a feasible hypothesis. The system presents these hypotheses to the user as query interpretations, and in the next step the user may select the appropriate interpretation(s) for searching the database. Thus, the interpretations are negotiated interactively, and the user's choices are stored in the dialogue history and used as constraints to prevent the system from producing inappropriate query interpretations in subsequent steps.
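As an illustration of this "explanations for an observation" view, here is a minimal abductive interpreter in the classic Prolog meta-interpreter style; the rule fragment below is a toy domain model of our own devising, not the AIR engine or the actual MIRACLE knowledge base.

    % abduce(+Goal, +HypsIn, -HypsOut): prove Goal, collecting the
    % abducible assumptions (hypotheses) needed to explain it.
    abduce(true, H, H) :- !.
    abduce((A, B), H0, H) :- !,
        abduce(A, H0, H1),
        abduce(B, H1, H).
    abduce(A, H, H) :-
        member(A, H).                    % hypothesis already assumed
    abduce(A, H0, H) :-
        rule(A, Body),                   % backward chaining over rules
        abduce(Body, H0, H).
    abduce(A, H, [A|H]) :-
        abducible(A),                    % assumable atom: add hypothesis
        \+ member(A, H).

    % Toy fragment: ways an artist can be "related to" a country.
    rule(related_to_country(Artist, C), nationality(Artist, C)).
    rule(related_to_country(Artist, C), born_in(Artist, C)).
    rule(related_to_country(Artist, C), (created(Artist, W), located_in(W, C))).

    abducible(nationality(_, _)).
    abducible(born_in(_, _)).
    abducible(created(_, _)).
    abducible(located_in(_, _)).

    % ?- findall(H, abduce(related_to_country(monet, france), [], H), Hs).
    % Each H is one feasible hypothesis, i.e., one query interpretation
    % (in the third one the work of art W is left open).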

4.2. INTERACTION EXAMPLES

Consider the example dialogue displayed in Figure 3. To improve readability it is given in natural language, although in MIRACLE the user performs a dialogue act by direct graphical manipulation, for example, by filling in query forms and manipulating graphical interface objects.

After an introductory sequence, where the global task and strategy are negotiated, the user submits a query which can be paraphrased as: "Search for painters in Paris who were concerned with Madonna as a subject of work of art". Since this query is ambiguous, the retrieval engine generates four query interpretations (Figure 4 shows the second, # 2 of 4). These interpretations differ from one another mainly with respect to the interpretation of the query attributes "town" and "subject", whereas "profession" is unambiguous in MIRACLE. Concerning the town attribute, for example, the system offers to either look for artists who were born in Paris or died there; as to subject (a free text field), the interpretations differ depending on the search method applied, e.g., partial string matching versus probabilistic fulltext retrieval.

Concerning the structure of the example dialogue, there are four global phases (an analysis showing more fine-grained structures and the types of dialogue moves and acts will be given in Figure 7 in the next section). In the introductory phase the user is offered several options, and she chooses to search the domain database rather than looking at the available meta-information sources. The system instantiates a script for retrieval interaction (see Section 5.2), whereas it would use different scripts for browsing in the meta-information resources, e.g., a thesaurus.

According to the retrieval script the user is expected to submit a query as a first step. In our example the user fills in three slots of the query form (profession, subject, and town) and then hits the search DB button. This initial query (Q1) is refined in the following two phases: the user enters the operator OR and an additional search term (Maria) in the subject field (Q1a) and later replaces one of the search terms in her last query (Q1b). Note that all three queries use the same three attributes and differ from one another only in the search term specifications. Hence, the structure of the interpretations generated by the retrieval engine is identical for all queries in our example.

S: You may (1) search DB, (2) check DB structure, (3) check thesaurus.
U: chooses (1) to search the database
S: displays the query form
---------------------------------------------------------------------
U: enters query: profession "painter", subject "Madonna", town "Paris" [Q1]
S: Which query interpretation do you want:
   (1) The artist is related to Paris because ... and the subject Madonna is ...,
   (2) ..., (3) ..., (4) ...
U: selects query interpretation (4)
U: clicks on "withdraw"
S: Do you want to
   (1) choose another query interpretation
   (2) modify your previous query
   (3) enter a completely new query?
U: chooses (2)
S: OK.
S: displays the last query (filled-in query form)
---------------------------------------------------------------------
U: adds in the subject field "OR Maria" [Q1a]
S: Which query interpretation do you want: (1) ..., (2) ..., (3) ..., (4) ...
U: chooses (2)
S: Here are the retrieved hits: ... (displays table)
U: clicks on "reject"
S: Do you want to
   (1) save the results (yes - no) and then
   (2) choose another query interpretation
   (3) modify your previous query
   (4) enter a completely new query
   (5) forget your query and restart the session?
U: selects (1) to save and (3) to modify previous query
S: displays the previous query
---------------------------------------------------------------------
U: replaces in the subject field "Maria" by "Mother and child" [Q1b]
S: informs user that query interpretation (2) is used to search the DB and shows results
U: . . .

Figure 3. Example dialogue.

As shown in Figure 4, a query interpretation is presented to the user in the form of a list of the internal rules relevant to each of the interpretations, highlighting the currently active rules (black bullets) but also showing the non-active ones (the light-colored bullets 7 and 10 in Figure 4). Both after Q1 and Q1a the user is asked/gets the option to choose among four query interpretations: the system presents the first interpretation, a "slider" for inspection of the other interpretations, and a button (search DB) for selecting the one query interpretation to be executed. It is only in the last dialogue cycle, after Q1b, that the system does not "ask" the user again to select a query interpretation but immediately shows the retrieved results, additionally informing the user that query interpretation # 2 has been used again (i.e., the same interpretation the user selected for the previous query). This change of the system's strategy is based on the dynamic user model built up as the dialogue develops. We will show in Section 6 how this is accomplished by using constraints stored in the dialogue history.

Figure 4. MIRACLE screen (query interpretation state).

Dialogue control options are represented in the upper part of the MIRACLE screen (see Figure 4). The buttons request, withdraw, and reject are generally available but may invoke different functions, depending on the current dialogue situation and the previous history. Although the button labels are similar to the names of the generic dialogue acts defined by the COR model (Section 5.1), they do not exactly match the act definitions. When the user, for instance, clicks on reject, this is internally represented as a reject offer or a negative evaluate inform act, i.e., as an immediate reaction to the system's preceding offer/inform. Even if the user by mistake presses the "wrong" button (e.g., reject when she wants to revise one of her own decisions), the dialogue component – consulting the dialogue model and history – is able to either identify the correct act or to generate a number of plausible interpretations, then asking the user which one she actually intended.

A closer look reveals that both user and system take the initiative and change the direction of the dialogue at several points: (1) after any ambiguous user query the system initiates a subdialogue asking the user to choose among the generated query interpretations; (2) the user takes the initiative and changes the direction of the dialogue any time she does not follow the recommended dialogue course, e.g., clicking the withdraw and reject buttons in our example; (3) if such a dialogue control act is ambiguous, the system takes the initiative back and offers the user reasonable continuations of the dialogue (the latter will be explained in detail throughout the next sections). In some cases the subdialogues initiated by the system may be quite short and have the character of simple clarifications. However, depending on the number and "difficulty" of the system-generated interpretations or suggestions, the dialogue may become quite complex. Experiences with test users of the retrieval component of MIRACLE showed, for example, that for certain types of (non-trivial) queries a rather large number of query interpretations were generated. In such cases the users took their time to inspect and compare the interpretations, asking for additional explanations to be able to make their choice or to learn more about the retrieval method.

5. Modeling Information-Seeking Dialogue

To allow for mixed-initiative interaction, where both user and system may actively engage in the negotiation of dialogue goals, a good dialogue system must provide both the necessary flexibility and dialogue guidance. In our system we deal with these issues by employing a dialogue model which consists of two interrelated tiers. The COnversational Roles (COR) model describes all dialogue acts and interaction options available at any state of a dialogue, allowing for symmetric changes of initiative between the agents. Scripts represent prototype classes of dialogues for particular goal-directed information-seeking strategies. Modeling the recommended dialogue steps/acts (a subset of the available COR acts) for a given strategy, scripts are used to guide the user through the various stages of a retrieval session. Whereas the scripts decide who should take the turn in which situation and what the useful actions are, the user may at any time depart from the current script by performing an act not included there, i.e., a COR "dialogue control act". Thus, users may decide on their own whether (and for how long) they want to follow the guided mode or suspend it and take the initiative themselves.

We will show in the next sections how the two tiers of our dialogue model represent mixed initiative and dialogue control at the descriptive level. Section 6 then describes in detail how the dialogue component uses this model to monitor the interaction, construct the dialogue history, and control changes in initiative depending on the actual dialogue course.

5.1. CONVERSATIONAL ROLES (COR) MODEL

We developed the COR model (Sitter and Stein, 1992) as a general model of cooperative information-seeking dialogue between two agents that alternately take the roles of information seeker (A) and information provider (B). The contributions of A and B are categorized as generic dialogue acts based on their main illocutionary point, similar to the "Conversation for Action" model proposed by Winograd and Flores (1986). As such, COR abstracts away from both the specific semantic content and the modality (linguistic vs. non-linguistic) and concentrates on the interpersonal function of dialogue acts and the role assignments. COR covers all kinds of information-seeking dialogue, e.g., simple factual information dialogues as well as more complex negotiations. The model has later been refined and used in various research prototypes developed at our institute (MIRACLE being the only one using abductive dialogue planning) and elsewhere (e.g., Hagen, 1999).

Comparable dialogue models or grammars (e.g., Fawcett et al., 1988, and Traum and Hinkelman, 1992) are mostly concerned with natural language and specific kinds of dialogue, e.g., the exchange of factual information within well-defined task settings, such as most of the approaches presented in this issue. To account for the requirements of natural language they cover a variety of linguistic aspects, e.g., rhetorical and textual structures. The COR model itself does not rely on such linguistic knowledge, but it was used in natural language dialogue systems (e.g., "SPEAK!", see Grote et al., 1997, and Stein, 1997) for monitoring the dialogue, whereas the linguistic resources were constituted by other strata representing the semantics and grammar.

COR uses recursive state–transition networks as the representation formalism. Figure 5 displays the main network for representing dialogues (the top-level dialogue as well as embedded subdialogues and meta-dialogues). Circles and squares depict the dialogue states; arcs/transitions between them, generic dialogue moves. A dialogue consists of dialogue cycles, i.e., sequences of moves starting in state 1 and leading back to it. Note that in the simplest case, a move (Figure 6) consists of a single atomic dialogue act (e.g., a request act in the request move). Moves may also include additional elements (e.g., subdialogues) or may even be entirely omitted (empty). A COR analysis of our example dialogue is given in Figure 7 in the form of a complex history tree. It shows the COR acts assigned to the single contributions and illustrates the resulting dialogue structure consisting of dialogue acts, moves, and cycles.

[Figure omitted: the recursive transition network DIALOGUE(A,B,T), with numbered dialogue states 1–8 plus 2' and 7' (squares marking terminal states) and generic dialogue moves as arcs (bold arcs marking expected moves). The moves include request, offer, promise/accept, inform, evaluate, embedded dialogue(_,_,m), and the control moves withdraw_request, withdraw_offer, withdraw_promise, withdraw_accept, withdraw_inform, reject_request, and reject_offer. Legend: A = information seeker; B = information provider; T = type of dialogue or move (m = meta-level; r = retrieval).]

Figure 5. COR net for DIALOGUES.
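To give a flavor of how such a network can be made operational, the fragment below encodes a simplified subset of COR-style transitions in Prolog; the state names and the topology are our own sketch and do not reproduce Figure 5 in full.

    % A simplified, illustrative subset of a COR-style net as Prolog facts.
    % cor_move(FromState, Move, ToState): the expected course of a cycle.
    cor_move(s1, request(a, b),  s2).   % A requests information
    cor_move(s2, promise(b, a),  s3).   % B promises to provide it
    cor_move(s3, inform(b, a),   s4).   % B presents the information
    cor_move(s4, evaluate(a, b), s1).   % A assesses relevance; cycle closes

    % B may also open a cycle with an offer, which A can accept.
    cor_move(s1,  offer(b, a),   s2p).
    cor_move(s2p, accept(a, b),  s3).

    % Control moves lead back to state 1 instead of following the
    % expected course.
    cor_move(s2, reject_request(b, a),   s1).
    cor_move(s2, withdraw_request(a, b), s1).
    cor_move(s3, withdraw_promise(b, a), s1).

    % A dialogue fragment is well-formed if the net accepts its moves.
    accepts(_, []).
    accepts(S0, [M|Ms]) :-
        cor_move(S0, M, S1),
        accepts(S1, Ms).

    % ?- accepts(s1, [request(a,b), promise(b,a), inform(b,a), evaluate(a,b)]).
    % true.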

The dialogue network (Figure 5) visualizes different types and functions of moves by graphical means. The arcs pointing to the right-hand side, for example, denote a goal-directed (ideal) course of action consisting of so-called "expected moves", i.e., a move complies with the expectation expressed by the preceding move (e.g., acceptance of an offer). Arcs that point back to state 1 or a terminal state represent deviations from the expected course (these moves being, however, a legitimate and useful means of controlling the dialogue).

Each dialogue cycle may either be initiated by A with a request for information or by B with an offer to look up some specific piece of information. These initiating moves/acts of a cycle or whole (sub-)dialogue have a special status in terms of our definitions of initiative and dialogue control given in Section 3 above. They are instrumental in determining the dialogue focus and at the same time may have a pronounced dialogue control function. The respective speaker takes the dialogue initiative, determining both the global topic of the following dialogue cycle and new ("forward-directed") conditions of action. By contrast, inform (giving requested information) and other statements (volunteered information, such as explanations) do not define new conditions of action.

Aside from inform, responding moves/acts have mainly dialogue control functions. They are "backward-directed" in that they refer to previous acts and the conditions of action expressed there, whether in the affirmative or negative (for instance, adopting or rejecting/withdrawing these conditions). Some of them may contribute to the topical progression, for example, when a topic is specified or selected from a list of options offered (accept). However, their primary function is not giving information or introducing new discourse topics, but to either keep the dialogue going or change its direction.

Responding moves that follow the expected dialogue course (promise and accept) neither introduce new conditions of action nor aim to change the direction of the dialogue. An exception is the evaluate move: it is certainly responding, as some information presented is evaluated (relevance assessment); on the other hand, if the evaluation expressed is negative, the move may have a similar function as a rejection. The speaker may not only mean to tell that the information given was irrelevant but may intend to change the direction of the dialogue, possibly in order to shift the global topic. The multi-functionality of acts is quite obvious in this case.

As discussed in Section 3 above, there are basically two ways of changing the direction and taking control of the dialogue: withdrawing/rejecting some previous act or initiating an embedded subdialogue. In the withdraw/reject case this does not mean a change in initiative, since these are responding "control acts". They may be performed, however, precisely with the aim of taking the initiative right after, e.g., rejecting an offer first and then coming up with a counteroffer or a request for information/action. Initiating a subdialogue, on the other hand, certainly means a change in initiative, since a new dialogue topic is being raised, be it a request for help on topic X or an offer to provide an explanation of Y.

Figure 6 shows that an atomic dialogue act (e.g., A: request) may be preceded or followed by optional transitions, i.e., an additional statement (assert) of the speaker (A) or a subdialogue initiated by the addressee (B). The atomic act determines the main illocutionary function of the whole move and is called its "nucleus", whereas the subdialogues and assert moves are optional "satellites" (cf. Mann and Thompson, 1987).

Analyses of transcripts of real dialogues between human information seekers and information brokers showed (cf. Sitter and Stein, 1992, Fischer et al., 1994) that in many cases certain moves/acts were explicitly made/uttered but were often skipped in quite similar situations in other states of the dialogue. Usually, information requests and offers were explicit, but even these were sometimes missing, for instance, when the specification of the requested/offered information could be inferred from the context or was anticipated.


[Figure omitted: the transition network for a dialogue MOVE(A,B,T). The atomic ACT forming the nucleus of the move (request, offer, accept, evaluate, reject_request, reject_offer, withdraw, withdraw_request, withdraw_offer, withdraw_promise, or withdraw_accept) may be preceded or followed by optional assert moves of the speaker (supplying context information) and subdialogues initiated by the addressee (soliciting context information or identifying the act); ε marks empty (jump) transitions.]

Figure 6. COR net for MOVES.

Only inform acts providing requested information/retrieved results (if available) were hardly ever skipped, and may hence be used as fix-points for analyzing the dialogues. (These analyses also showed that it was often the human intermediary who took the conversational lead and suggested new ways of how to interpret and solve the information problem.)

Therefore, COR allows in theory any move/act in a retrieval dialogue – except inform – to be omitted, i.e., be a tacit transition (empty acts (ε) are also stored in the dialogue history at their respective position in the tree, see Figure 7). For a given application, however, one might need to simplify the COR model somewhat in order to facilitate the generation of the history and reduce its complexity, e.g., avoiding large numbers of empty transitions in a row. In MIRACLE, for example, we have allowed at most two moves per dialogue cycle to be entirely jumped (typically, these are promise and evaluate, but not request and inform), and the scripts described below decide which single acts must not be jumped within a move. For other (types of) applications one might disallow the omission of certain moves, globally or for particular dialogue situations. This can be done by specifying variants of networks for different types of moves and dialogues, e.g., considering the domain of discourse (object-level vs. meta-level) and the general function of moves for a dialogue cycle (task-oriented vs. dialogue control function).

Using the COR model we can analyze and represent the complex interchange of dialogue acts – including their propositional content – in a hierarchical dialogue history (see Figure 7). By representing the act type of each individual input action and locating it in the larger structure of the dialogue history (e.g., within moves and subdialogues), the system is in a position to know in which specific situations the user took the initiative and/or tried to change the direction of the dialogue. This provides an important knowledge source for the dialogue component when planning the subsequent interaction.
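To make the data structure concrete, the following Prolog term sketches how one cycle of the example dialogue might be stored, in the spirit of Figure 7; the term structure and all names are our own assumption, not MIRACLE's internal format.

    % Illustrative history fragment: the Q1 cycle with an embedded
    % meta-dialogue (u = user, s = system, r = retrieval, m = meta).
    example_history(
        dialogue(u, s, r,
            [ cycle(
                [ move(request, act(u, request, query(q1))),
                  move(request, act(s, request, which_interpretation)),
                  move(promise, empty),                 % tacit transition
                  move(inform,  act(u, inform, interpretation(4))),
                  subdialogue(u, s, m,
                      [ move(offer,  act(s, offer, options([1, 2, 3]))),
                        move(accept, act(u, accept, option(modify_query))),
                        move(inform, act(s, inform, shows_last_query))
                      ])
                ])
            ])).

    % Constraints recorded in the history (cf. C1, C2 in Figure 7) can
    % be consulted by the planner in later cycles, e.g.:
    constraint(c1, chosen_interpretation(q1a, 2)).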


[Figure omitted: the hierarchical history tree of the example dialogue. Nested dialogue(u,s,r) and dialogue(u,s,m) nodes contain the dialogue cycles and moves with their acts: the user queries Q1, Q1a, and Q1b, the system's "Which interpretation?" requests, the user's withdraw and reject control acts, the embedded meta-dialogues offering options 1–3 and 1–5, and empty acts (ε) at their respective positions; the constraints C1 and C2 are attached to the tree. Legend: u = user; s = system; ε = empty; r = retrieval dialogue; m = meta-dialogue.]

Figure 7. History tree of the example dialogue.

5.2. SCRIPTS FOR INFORMATION-SEEKING STRATEGIES

Whereas the COR model addresses mixed initiative on the "dialogue level", scripts address the "task level" (see also Chu-Carroll and Brown, 1998). COR accounts for possible continuations in any dialogue state but does not place restrictions on the selection of moves, e.g., to determine which of the possible moves should be preferred for a particular task or strategy. The latter is achieved by scripts, which give us structured guidelines for tasks like querying a database or browsing in a meta-information resource.


Based on a multi-dimensional classification of information-seeking strategies, the scripts proposed by Belkin et al. (1995) implement prototypical interaction patterns corresponding to the various strategies. In MIRACLE they are used as "tentative" global plans for guiding the user through a session. Here, a script is an executable representation of those types of acts that are useful to fulfill a given task and related goals. That is, a script contains forward-directed, recommended user actions and possible system reactions at the various stages of a retrieval dialogue. A script may call other sub-scripts to deal with smaller tasks.

Compared to plan operators in traditional planning systems, the scripts are more like sequences of connected plan operators that implement typical information-seeking strategies. Although plan operators could give us the same functionality as scripts, the scripts make it easier to relate the dialogue to real information-seeking strategies, and these can be inspected and selected by the end-users without problems.

A recursive transition network (RTN) formalism is used to represent scripts, the transitions containing references to COR dialogue acts (see Figure 8). An additional and unique parameter is added to the COR acts so that the dialogue component can distinguish between the different instances of the acts in the reasoning system. Preconditions decide when an act is available, and postconditions ensure that the system executes the necessary actions.
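The sketch below illustrates how a few S3-style transitions might be written down as clauses with pre- and postconditions; all predicate names and the stubbed actions are our own assumptions, not the actual MIRACLE scripts.

    % Stub actions and data, so the sketch runs (illustrative only):
    generate_interpretations(_).
    present(_).
    search_db(_).
    current_interpretations([i1, i2, i3, i4]).

    % script_transition(Script, From, To, CorAct, Precondition, Postcondition)
    script_transition(s3, 1, 2, request(u, s, r, query(Q)),
                      true, generate_interpretations(Q)).
    script_transition(s3, 2, 3, request(s, u, r, interpretation(many, Is)),
                      (current_interpretations(Is), Is = [_, _|_]),
                      present(Is)).
    script_transition(s3, 3, 4, inform(u, s, r, choice(I)),
                      true, search_db(I)).

    % Executing one step: the precondition decides whether the act is
    % available; the postcondition triggers the necessary system action.
    step(Script, S0, S1, Act) :-
        script_transition(Script, S0, S1, Act, Pre, Post),
        call(Pre),
        call(Post).

    % ?- step(s3, 1, S, request(u, s, r, query(my_query))).
    % S = 2.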

Note that the system invoked all of the three scripts displayed in Figure 8 in the example dialogue given above. S3 is the standard script of a retrieval dialogue in MIRACLE for the strategy "searching, with the goal of selection, by specification, in the target domain database" (Belkin et al., 1995). This script differs from classical retrieval processes in that it is not restricted to flat query–result sequences but also includes (like all scripts in our system) branching points and loops as well as some embedded dialogue cycles. In state 2 of S3, for instance, the system may use one of the transitions 2, 3, or 10, depending on the number of query interpretations it generated; in state 3 the user may either select the one query interpretation to be executed (transition 4) or "tell" the system that she wants to stop inspecting the interpretations, go back to the old query and modify it (transition 8).

S1 and S2 are scripts for two different kinds of meta-dialogue. S1 models the introductory phase of any session and is also called in some later situations in a session, for example, after the user has decided to restart and user and system need to negotiate a new strategy and script. S2 implements the system's strategy for dealing with unexpected user acts, e.g., when the user hits the withdraw and reject buttons in the example dialogue, trying to change the direction of the dialogue. In our example the user first chooses to search the database (which triggers retrieval script S3). If the user had, for instance, decided to check the structure of the database first, a different script for browsing the structure would have been instantiated.

Figure 8. Structure of scripts S1, S2, and S3 (u = user, s = system, m = meta-dialogue, r = retrieval dialogue; nodes are script states, arcs are dialogue acts). The transition labels are:

S1: Meta-Dialogue (Strategy)
1 offer(s,u,m,menu_strategy): s offers various strategies
2 accept(u,s,m,menu_strategy): u selects a strategy
3 inform(s,u,m,script_exe): s informs that script for this strategy is being executed
4 inform(s,u,m,script_exe): s informs that the one script available is being executed

S2: Meta-Dialogue (Act Interpretation)
1 offer(s,u,m,act_int): s presents interpretations
2 accept(u,s,m,act_int): u selects correct interpretation
3 inform(s,u,m,act_exe): s informs that act is executed
4 reject_offer(u,s,m,act_int): u rejects all interpretations
5 inform(s,u,m,act_exe): s informs that unambiguous act is being executed

S3: Retrieval Dialogue
1 request(u,s,r,query(_)): u posts query
2 request(s,u,r,interpretation(one,_)): s presents one interpretation
3 request(s,u,r,interpretation(many,_)): s presents many interpretations
4 inform(u,s,r,choice): u selects query interpretation
5 inform(s,u,r,result(_)): s shows list of data
6 offer(s,u,r,menu): s offers list of things to do
7 accept(u,s,r,new_query): u selects to formulate new query
8 request(u,s,r,modify_query): u wants to modify query
9 accept(u,s,r,modify_query): u wants to modify query
10 reject(s,u,r,interpretation(no,_)): s found no interpretation
11 inform(s,u,r,form(_)): s shows new query form [re-initialize COR model]
12 inform(s,u,r,old_form): s shows old query form

Then the user selects query interpretation 4 and clicks on the withdraw button. The dialogue manager now instantiates script S2 in order to communicate the generated interpretations of this ambiguous withdraw act to the user, as will be explained in the next section.

It is important to note how the COR model and the scripts together form the dialogue component's behavior. As a descriptive model of dialogue acts and roles, the COR model allows a mixed-initiative strategy to be adopted without deciding on how the strategy should be realized. Within a script describing a particular task, there is no room for changing the direction of the dialogue. However, by letting the user select a dialogue control act (e.g., reject or withdraw), which is a COR act that forces her out of the active script, we take advantage of a change of initiative at the dialogue level in order to reconsider


the whole task. When this dialogue control act is then interpreted by the system, the user gets the chance to choose another task, choose another strategy for doing the task, choose other constraints or search terms for the task, etc. Even though the task-oriented scripts are not designed for mixed-initiative problem solving, the change of initiative at the dialogue level, combined with comprehensive reasoning about the user's actions and intentions, also has profound effects at the task level.

6. Abductive Dialogue Planning

The Abductive Dialogue Component (ADC) forms a separate component in MIRACLE and provides an interface to the retrieval component. The ADC is implemented in SWI-Prolog using Flach's (1994) abduction engine, and it combines the COR model, the scripts, and other internal knowledge sources into an executable system with reasoning capabilities. With reference to the architecture shown in Figure 9, we first explain the functional aspects of the dialogue component. After going through the use of abduction to interpret ambiguous user acts, we discuss how dynamic and static user models fit into the logical framework and are used to tailor the dialogues to certain characteristics of the user.

Figure 9. Architecture of the Abductive Dialogue Component (ADC). The figure shows the ADC's modules (Dialogue Guidance, Dialogue Monitoring, and Act Interpretation), its knowledge sources (the scripts, the COR model, the dialogue history, the dialogue control rules, and the user profile), its outputs (the set of recommended acts, the set of dialogue control acts, and the set of act interpretations), and its interface to the retrieval engine (constraints, presentation data).

6.1. FLEXIBILITY AND GUIDANCE IN RETRIEVAL DIALOGUES

A dialogue starts out with the COR model in its initial state, an empty dialogue history, and a script invoked by the system after the user has selected a task and/or information-seeking strategy from a list or menu offered in the introductory phase of the dialogue. As the dialogue develops, ADC executes


the COR model and the script in parallel. For every act in the dialogue, the corresponding transition in the script is fired, and the state of the retrieval session changes. The acts recommended to the user at a certain stage of the dialogue are all the dialogue acts leading out from the active state of the script. By firing script transitions and presenting these recommended acts, the Dialogue Guidance module shown in Figure 9 helps the user to select the actions that are appropriate for satisfying her information needs. The script also includes the system acts that are linked to the transitions and call routines in the underlying retrieval engine.

Since every transition in the script is assigned a generic COR act, like request or inform, the Dialogue Monitoring module can fire transitions in the COR model and build up a structured dialogue history while the script is being executed. COR acts missing in the current script (e.g., jumps and substructures such as subdialogues) are identified by searching the COR networks. The search yields a list of possible COR transitions leading to the script transition just fired, and a ranking based on complexity is used to pick out the most appropriate one. The resulting tree-like history of primitive dialogue acts and more abstract moves reveals the dialogue structure and tells us what has happened earlier in the dialogue. The history of our example dialogue is sketched in Figure 7.
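As a rough sketch of this bookkeeping, reusing the hypothetical transition/5 facts from Section 5 and ignoring both the search for missing COR acts and the complexity-based ranking, firing a transition can advance the script state and record the performed act in a history list:

    % fire(+Script, +State, +Act, -NextState, +Hist0, -Hist):
    % advance the script on a performed act and append it to the history.
    fire(Script, State, Act, Next, Hist0, [Act|Hist0]) :-
        transition(Script, State, Next, _Id, Act).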

Now, as long as the user performs one of the recommended acts in the active script, the dialogue history is not used by the system. The recommended acts currently available from the script pop up on the MIRACLE screen as interactive objects (e.g., in Figure 4 the slider for the four query interpretations and the "search DB" button for executing the currently selected query interpretation). This shows the user what she is supposed to do at the various stages of the retrieval session. Even though these acts are intended to help the user, there might be situations in which the user would like to deviate from the recommendations. She might, for example, change her mind about her query or suddenly realize that she should have chosen another script in the first place. Since we cannot anticipate these deviations, the user needs a set of generic dialogue control acts that at least indicate in what direction she might like to change the course of the dialogue. These dialogue control acts are found as backward-directed acts leading out from the active state of the COR model (see the dialogue net in Figure 5). They are represented on the MIRACLE screen as generally available dialogue control buttons (as shown in the upper part of Figure 4). By presenting these acts to the user as alternatives to the recommendations from the script, the system becomes more flexible and the user can take control of the dialogue when needed.

If the user selects one of the dialogue control acts from the COR model, the Act Interpretation module uses abduction to interpret the act in light of the current dialogue context. In the remaining sections, we have a look at the abduction system and its use in dealing with vague or ambiguous user acts.


We first explain the principles of abductive reasoning and then show how the dialogue component works for our example dialogue.

6.2. THE PRINCIPLE OF ABDUCTIVE DIALOGUE PLANNING

As shown in Figure 2, abductive reasoning assumes that there is already a conclusion or observation available. The task of the abduction engine is to find potential facts that together with a rule base would logically imply the conclusion at hand. In MIRACLE the conclusion comes in the form of an observation of user behavior (e.g., a user's query or a dialogue control act), whereas the potential facts are referred to as a hypothesis explaining this behavior. The rule base defines the concepts found in the observation and the logical relationships between these and other concepts. Some concepts are not defined at all in the rule base – these are called abducibles and are the concepts that can be included in the hypotheses explaining the observation. If there are several hypotheses that imply the observed phenomenon, each hypothesis forms an interpretation of the observation. The user can then choose the interpretation she finds closest to her intentions.
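The following meta-interpreter, a minimal sketch in the style of Flach (1994) rather than the actual engine, illustrates the principle: abduce/3 proves an observation from facts (the dialogue context) and rules, assuming declared abducibles where no proof exists; each solution for the collected assumptions is one hypothesis.

    % abduce(+Goal, +Hyp0, -Hyp): prove Goal, extending the assumption
    % set Hyp0 to Hyp with abducibles where needed.
    abduce(true, H, H) :- !.
    abduce((A, B), H0, H) :- !,
        abduce(A, H0, H1),
        abduce(B, H1, H).
    abduce(A, H, H) :-
        fact(A).                    % supported by the dialogue context
    abduce(A, H, H) :-
        member(A, H).               % already assumed
    abduce(A, H, [A|H]) :-
        abducible(A),
        \+ member(A, H).            % assume a new abducible
    abduce(A, H0, H) :-
        rule(A, Body),              % apply a rule in the reverse direction
        abduce(Body, H0, H).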

There are a number of recent approaches that also deal with the disambiguation of dialogue acts or misconceptions using abductive inference (e.g., Hobbs et al., 1993 and McRoy and Hirst, 1995). The fundamental difference to our approach is that they focus on linguistic ambiguities in natural language discourse, whereas we deal with contextual (dialogue-dependent) ambiguity.

In ADC the dialogue control act chosen from the COR model is the observation, and the rule base is made up of certain dialogue control rules (see Figure 10). The rules map from concrete actions and properties of the dialogue history to these backward-directed control acts from COR. When a dialogue control act is chosen by the user, a part of the dialogue history together with the dialogue control rules provides the logical basis for interpreting this ambiguous act. We look for a hypothesis that allows the dialogue control act to be inferred according to the following logical relationship:

Dialogue context ∪ Dialogue control rules ∪ Hypothesis ⊨ Dialogue control act

The dialogue context is a subset of the complete dialogue history. It contains the list of atomic system acts and atomic user acts that are found relevant on the basis of the history tree. An act α in the history tree is deemed relevant to a dialogue control act β if the two acts do not belong to different (high-level) dialogue cycles at the same level in the dialogue tree and if there is no other dialogue control act γ between α and β. The content of these dialogue contexts will become clearer when we explain the reasoning behind the example dialogue.
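A much simplified, list-based approximation of this relevance test, which keeps only the atomic acts performed since the most recent dialogue control act and ignores the tree structure of cycles, could look as follows (the predicate names are our own illustration):

    % context(+History, -Context): History is the list of atomic acts,
    % oldest first; Context keeps the acts after the last control act.
    context(History, Context) :-
        reverse(History, Recent),            % most recent act first
        up_to_control(Recent, RecentCtx),
        reverse(RecentCtx, Context).

    up_to_control([], []).
    up_to_control([A|_], []) :- control_act(A), !.
    up_to_control([A|As], [A|Cs]) :- up_to_control(As, Cs).

    control_act(withdraw(_)).
    control_act(evaluate(_, neg)).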


Some of the dialogue control rules are shown in Figure 10. Note that these rules contain atomic acts (but not moves), and the logical representation used here corresponds to the prefix notation for atomic acts used in the COR nets and the dialogue history, e.g., request(A, _, _, _) would correspond to A: request. The concrete actions included in the hypothesis explain what the user intended to do when she chose the dialogue control act in that particular context. When there are many hypotheses that imply the dialogue control act of the user, each of them is referred to as an interpretation of the act. The rules are set up on the basis of some initial system testing and are to be refined and extended as the system is evaluated with real users.

1. resume(X) → change_act(X)
   If the user intends to completely reformulate the propositional content of an act, she changes the original act.

2. modify(request(X, Y, Z, query(Q))) → change_act(request(X, Y, Z, query(Q)))
   If the user wants to modify a previous query, she changes the original query.

3. request(X, Y, Z, W) ∧ change_act(request(X, Y, Z, W)) → change_input(X, Y, Z)
   If the user intends to change her previous request, she changes her inputs.

4. inform(X, Y, Z, W) ∧ change_act(inform(X, Y, Z, W)) → change_input(X, Y, Z)
   If the user intends to change her previous inform act, she changes her inputs.

5. suppress(X) ∧ change_session → evaluate(X, neg)
   If the user wants to suppress the system act and change to another session, she hits the reject button.

6. suppress(inform(X, Y, Z, W)) ∧ change_input(Y, X, Z) → evaluate(inform(X, Y, Z, W), neg)
   If the user wants to suppress the data presented and change her inputs to the system, she hits the reject button.

7. inform(X, Y, Z, W) ∧ change_input(X, Y, Z) → withdraw(inform(X, Y, Z, W))
   If the user has given some information and wants to change the inputs, she withdraws her inform act.

8. inform(X, Y, Z, W) ∧ delete_or_store(inform(X, Y, Z, W)) → suppress(inform(X, Y, Z, W))
   If the user wants to delete or store some presented data for later use, she suppresses them in the current context.

Figure 10. Some dialogue control rules.
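Under the abductive sketch above, these rules can be encoded directly as rule(Head, Body) clauses, with the abducibles declared separately. This is again our own encoding, using modify/1 and resume/1 as in the running text rather than a four-argument notation:

    % The dialogue control rules of Figure 10 as rule(Head, Body) clauses.
    rule(change_act(X), resume(X)).                                   % rule 1
    rule(change_act(request(X, Y, Z, query(Q))),
         modify(request(X, Y, Z, query(Q)))).                         % rule 2
    rule(change_input(X, Y, Z),
         (request(X, Y, Z, W), change_act(request(X, Y, Z, W)))).     % rule 3
    rule(change_input(X, Y, Z),
         (inform(X, Y, Z, W), change_act(inform(X, Y, Z, W)))).       % rule 4
    rule(evaluate(X, neg), (suppress(X), change_session)).            % rule 5
    rule(evaluate(inform(X, Y, Z, W), neg),
         (suppress(inform(X, Y, Z, W)), change_input(Y, X, Z))).      % rule 6
    rule(withdraw(inform(X, Y, Z, W)),
         (inform(X, Y, Z, W), change_input(X, Y, Z))).                % rule 7
    rule(suppress(inform(X, Y, Z, W)),
         (inform(X, Y, Z, W), delete_or_store(inform(X, Y, Z, W)))).  % rule 8

    % The abducibles offered to the user as possible interpretations.
    abducible(resume(_)).
    abducible(modify(_)).
    abducible(change_session).
    abducible(delete_or_store(_)).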

Looking at the dialogue control rules in Figure 10, we see that a negative evaluate act can be interpreted either as a user's intention to change her previous input (Rule 6) or as dissatisfaction with the current script (Rule 5). A withdraw act is only interpreted as the intention to change the previous input (Rule 7). The predicates resume/1, modify/1, change_session/0, and delete_or_store/1 are abducibles and are offered to the user as possible interpretations of her ambiguous dialogue act. Each of them is linked to a function


in the IR system that the user might be interested in running. The atomic dialogue acts referred to in the rules, like inform/4, count as facts if they are found in the dialogue context.

6.3. INTERPRETING DIALOGUE CONTROL ACTS

Consider the example dialogue presented in Figure 3, in which the user is searching for some information about painters in the database. She enters a query using the fields profession, subject, and town, and the retrieval engine presents four interpretations of the query. So far, the dialogue is a simple execution of script S3. After choosing one of the query interpretations generated and displayed on the screen as shown in Figure 4, however, the user suddenly hits the withdraw button. This act is not anticipated in script S3, but is one of those dialogue control acts made available from the COR model. Since it is not clear what the user wants to do next, the abduction process is triggered, and ADC tries to figure out how the act should be understood and what dialogue continuations are possible and can be recommended in this situation.

At this moment, there is a dialogue context H1 containing the following three acts (see also the COR history tree in Figure 7):

H1 = {request(u, s, r, query(1)), request(s, u, r, interpretations(1, many)), inform(u, s, r, choice(1, 4))}

The first predicate in H1 is the user's query (request), the second is the system's presentation of query interpretations (request, asking the user which one she prefers), and the last is the user's answer (inform), i.e., her choice of the one interpretation to be used for searching in the database. The observation to be explained is the subsequent withdrawal of the user, which is stored in ADC as withdraw(inform(u, s, r, choice(1, 4))). The abduction engine uses the dialogue control rules in the reverse direction to find possible proofs for the observation (see the proof trees in Figure 11).

A proof tree is valid when all non-proven predicates are either abducibles or come from the dialogue trace. In our example dialogue three interpretations of the withdraw act are inferred. The starting point of the proof trees shown in Figure 11 is the dialogue control act withdraw(inform(u, s, r, choice(1, 4))). In the first proof (a) rule 7 is used to explain this act. Using rules 4 and 1 to infer one of the premises of rule 7, we get a complete proof tree with one reference to the dialogue history and one abducible that serves as the interpretation of the control act.
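With the dialogue context H1 added as fact/1 clauses, the sketch above reproduces exactly these three interpretations (the solution order reflects our clause order, not the labeling in Figure 11):

    % Dialogue context H1 as facts.
    fact(request(u, s, r, query(1))).
    fact(request(s, u, r, interpretations(1, many))).
    fact(inform(u, s, r, choice(1, 4))).

    ?- findall(H, abduce(withdraw(inform(u, s, r, choice(1, 4))), [], H), Hs).
    Hs = [[resume(request(u, s, r, query(1)))],     % (c) enter a new query
          [modify(request(u, s, r, query(1)))],     % (b) modify previous query
          [resume(inform(u, s, r, choice(1, 4)))]]. % (a) choose another interpretation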

The resume/1 predicate means that the intention is to completely reformulate the propositions of a previous act in the script, whereas the modify/1 predicate means that only parts of the previous act are to be modified and we should keep the old input (query) as a basis for the modifications.

Figure 11. Interpretations of the withdraw act (proof trees). The figure shows the three proof trees rooted in the control act withdraw(inform(u, s, r, choice(1, 4))): (a) choose another query interpretation, with the abducible resume(inform(u, s, r, choice(1, 4))); (b) modify previous query, with the abducible modify(request(u, s, r, query(1))); (c) enter a completely new query, with the abducible resume(request(u, s, r, query(1))).

The system presents the three interpretations using script S2, and the user selects to modify her previous query. When the user adds "OR Maria" in the subject field of the old query form, script S3 is being executed again.

Now, a new cycle in the COR model, at the same level as the old one, is created and the dialogue context is defined to be empty. The system presents the query interpretations for the modified query, and the user now decides to use interpretation 2 to search the database.

As soon as the system shows the retrieved data, the user hits the reject button. This means that she is rejecting the data presented to her, but it is still unclear what she wants to do instead. The abduction process is triggered again, trying to explain why the user hit the reject button in this particular situation. This time, the script-based dialogue is interrupted after the database has been accessed, and the dialogue context now contains four elements:


H2 = {request(u, s, r, query(2)), request(s, u, r, interpretations(2, many)), inform(u, s, r, choice(2, 2)), inform(s, u, r, result(2))}

The user's click on the reject button is identified as the dialogue control act evaluate(inform(s, u, r, result(2)), neg), and ADC is now able to construct four possible proof trees for this act. Figure 12 lists the four interpretations that are presented as alternative actions to the user. Since the system has presented some data to the user, the user is given a chance to save the data before she starts doing something else (the delete_or_store predicate). She is then given the choice between choosing another interpretation, creating a new or modified query, or starting a new retrieval session.

In this way the dialogue component is able to deal with ambiguous user acts that take the initiative in the dialogue and change its direction. The dialogue component tries to explain ambiguous acts on the basis of the context and maps them to possible concrete actions that are presented to the user. At the same time, executing the script makes it possible for the ADC to guide the user and help her work efficiently with the system. The logical foundation ensures the extensibility and reliability of the component, and – as we will see in the next section – it enables the system to use two very powerful user-tailoring mechanisms.

6.4. USER-TAILORING OF DIALOGUES

User-tailoring in ADC is possible both at the retrieval level and at the dialogue act level. As far as retrieval is concerned, the component dynamically adapts the retrieval engine's behavior according to the user's previous choices. User-tailoring at the dialogue level is more static, but implies that user profiles are taken into consideration when ambiguous acts are encountered. From a more logical point of view, user-tailoring in ADC means interpreting

– ambiguous queries in light of previously interpreted queries, and
– ambiguous dialogue control acts on the basis of user profiles.

The first type of user-tailoring depends on the use of constraints in the abductive reasoning process. When an interpretation of a query is chosen, the corresponding proof tree is added to the dialogue history together with the act itself. In Figure 7, the proof trees C1 and C2 are added to the dialogue history as attributes of the user's selection acts. C1 is the proof tree for interpretation 4 of the first query, and C2 is the proof tree for interpretation 2 of the second, modified query. These proof trees are referred to as constraints, since they may be used to constrain the interpretation of later queries.

Logically, queries are interpreted by the retrieval engine on the basis of a domain theory and a set of relevant constraints:

Domain theory ∪ Constraints ∪ Hypothesis ⊨ Query

Figure 12. Interpretations of the evaluate act (proof trees). The figure shows the four proof trees rooted in the control act evaluate(inform(s, u, r, result(2)), neg); each tree contains the abducible delete_or_store(inform(s, u, r, result(2))) below the suppress node. The distinguishing abducibles are: (a) choose another query interpretation: resume(inform(u, s, r, choice(2, 2))); (b) modify previous query: modify(request(u, s, r, query(2))); (c) enter new query: resume(request(u, s, r, query(2))); (d) start new session: change_session.

Hypothesis is here an interpretation of the user's ambiguous query in terms of actual database structures. The set of relevant constraints contains exactly those constraints that are linked to acts included in the dialogue context at this stage of the dialogue.
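In terms of the abductive sketch above, one simple way to realize this is to seed the abduction with the relevant constraints, so that the commitments of earlier cycles are reused and any new assumptions must extend them consistently. This is an illustration only; the actual engine abduces over a domain theory of database structures, which we do not model here:

    % interpret_query(+Query, +Constraints, -Interpretation):
    % start abduction from the constraint set instead of the empty set,
    % so hypotheses of earlier cycles (e.g., C2) are reused.
    interpret_query(Query, Constraints, Interpretation) :-
        abduce(Query, Constraints, Interpretation).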


Let us now go to the last part of our example dialogue in Figure 3. After hitting the reject button, the user decides to post a modified query to the retrieval system. The query is submitted to the retrieval engine together with the relevant constraints, as shown in Figure 9. In this case C1 is not considered relevant any more, since it belongs to a separate dialogue cycle (where the user had withdrawn her choice of query interpretation). In the new cycle, only C2 is sent to the retrieval engine for the interpretation of the query. Including C2 in the abduction process gives us only one interpretation, i.e., the interpretation chosen for the previous query, and the system does not need to start further negotiations with the user. The ambiguous query is understood correctly, because it is analyzed as a modification of the previous query and the user has told the system how to interpret that one.

The dialogue history with constraints serves as a dynamic user model (see also Stein et al., 1997b) that keeps track of the user's choice of query interpretations. Using the constraints, the retrieval engine gradually builds up an understanding of how the user wants her queries to be interpreted. The COR analysis of the dialogue (see Figure 7) indicates which of the user's previous choices are relevant to the new queries. If the COR analysis of the dialogue structure is wrong, important constraints may be lost and the user would be forced to specify the same choices several times.

Now, the second form of user modeling deals with the user's choice of dialogue acts. Without user-tailoring, the dialogue context decides which dialogue control rules can be used to construct a proof tree. It is, however, possible to specify a user profile that further restricts the abductive reasoning process. Such a profile is a set of facts that determines the desirability of certain dialogue acts. For taking user profiles into consideration, the abduction process has to be modified as follows:

Dialogue context ∪ Dialogue control rules ∪ User profile ∪ Hypothesis ⊨ Dialogue control act

The system now looks for an interpretation that takes the dialogue context as well as the specific preferences of the user into account. If the user as a rule prefers not to modify old queries, she would simply specify a user profile P like this: P = {¬ modify(_, _, _, _)}. This profile would rule out all act interpretations that assume the user to modify an existing query. The interpretations involving the posting of completely new queries would be allowed, as would interpretations that do not directly address any previous queries. If this user profile had been specified before the withdraw act in the example dialogue above, for instance, only the interpretations (a) and (c) would have been presented: "Do you want to (1) choose another query interpretation or (2) enter a completely new query?".
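A profile of this kind is easy to add to the sketch above: treat the profile as a set of vetoed assumptions and keep only the hypotheses that assume nothing the profile forbids. In our unary encoding the vetoed pattern is modify(_) rather than the four-argument form:

    % The profile P as vetoed abducibles.
    profile(not(modify(_))).

    % A hypothesis is admissible if none of its assumptions is vetoed.
    admissible(Hyp) :-
        \+ ( member(A, Hyp), profile(not(A)) ).

    % Only interpretations (a) and (c) survive the filter:
    ?- findall(H, ( abduce(withdraw(inform(u, s, r, choice(1, 4))), [], H),
                    admissible(H) ), Hs).
    Hs = [[resume(request(u, s, r, query(1)))],
          [resume(inform(u, s, r, choice(1, 4)))]].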

The user profile helps us set up a personalized user interface. Only actions preferred by the user are made available to the act interpretation process, and


the user can change the profile at any time during the dialogue. In theory, the user profile can also include dialogue control rules that are relevant only to one person or a group of persons. Such rules have to be formulated with care, though, since they have to be consistent with the other dialogue control rules available. Automatic update of the profile on the basis of the ongoing dialogue is feasible, though it is hard to say how useful that is.

The kernel of ADC is the abduction engine, which constitutes a platform for combining different kinds of dialogue information. User profiles, constraints, dialogue histories, and rules for interpreting dialogue acts all fit into the same logical framework. As a whole, this system allows us to infer observable dialogue control acts from properties of the user and the dialogue context by making assumptions about the user's real intentions. The assumptions come in the form of system functions that may change the direction of the dialogue and the whole purpose of the retrieval session. Logically speaking, thus, mixed-initiative dialogues at the task level arise when

– the user triggers the abduction process with a dialogue control act,
– the user's real intentions cannot unambiguously be inferred, and
– the relevant dialogue control rules address task-oriented concepts.

As such, mixed-initiative dialogues come into play when the user suspends the current dialogue course, e.g., rejecting a system's suggestion, or when she does not precisely formulate her information need, e.g., entering an ambiguous query. In both kinds of situations the system is able to engage in a cooperative dialogue with the user. Making assumptions about the user's actual information need, the system takes the initiative and offers the user interpretations of her query based on the domain model of the database and the dialogue context (i.e., constraints stored in the dialogue history). The dialogue manager, on the other hand, analyzes the intentional structure of the dialogue history and is thus able to identify situations where the user tries to change the current dialogue strategy. It acts cooperatively in such situations, suggesting suitable continuations of the retrieval dialogue in such a way that the user can easily switch to the newly suggested dialogue course and does not need to activate complicated sequences of interface functions.

7. Conclusions

The model of mixed-initiative retrieval dialogues presented in this article relies on knowledge about users which is incrementally acquired from their interactive behavior during a session with an IR system. The Abductive Dialogue Component (ADC) of the MIRACLE system uses a comprehensive


dialogue model to construct a complex dialogue history, part of which is used as a dynamic user model. The dialogue history explicitly represents both the intentional structure of the discourse and knowledge about the user's preferences for particular retrieval strategies and query interpretations.

Whereas the retrieval interaction is roughly guided by dialogue scripts, the

user may change the direction of the dialogue at any time. The dialogue model allows her to perform certain "dialogue control acts", such as reject and withdraw, in order to depart from the initial script. If such a dialogue control act is ambiguous in a given situation, the dialogue component uses abduction to generate plausible "interpretations" of the act based on a number of dialogue control rules and the dialogue history/context. The interpretations show the user which concrete retrieval functions and strategies are available in this situation. Aside from just informing the user about these options, the system takes the initiative and recommends suitable dialogue continuations to the user, i.e., in this case, actions that are not pre-specified by scripts. Based on the user's choice of action, the system adapts its subsequent behavior, updating the initial script or instantiating a new one. A specific interpretation may also include an abducible that assumes certain characteristics of the user, and if this interpretation is selected, the assumption is found valid and used to filter the interpretations of later user acts.

Given the dialogue model and the abductive reasoning mechanism, the next task for the dialogue component is to extend the set of dialogue control rules, i.e., formulate adequate rules that have the necessary generality. The rules must refer to the functionality of the given retrieval system but also take the short-term conditions of the dialogue into account. A larger number of dialogue control rules will allow us to gradually dispense with the dialogue scripts and to achieve more flexibility in the handling of mixed-initiative retrieval dialogues.

Acknowledgements

We would like to thank Adrian Muller for the design and implementation of the abductive retrieval component of the MIRACLE system, and the three anonymous reviewers for their comments on earlier drafts of this paper.

References

Allen, J. F.: 1994, Mixed Initiative Planning: Position Paper. Paper presented at ARPA/Rome Labs Planning Initiative Workshop. Also available from http://www.cs.rochester.edu/research/trains/mip/home.html.

Bates, M.: 1986, An Exploratory Paradigm for Online Information Retrieval. In: B. Brookes (ed.): Intelligent Information Systems for the Information Society. Amsterdam: North-Holland, pp. 91–99.


Belkin, N. J., C. Cool, A. Stein, and U. Thiel: 1995, Cases, Scripts, and Information Seeking Strategies: On the Design of Interactive Information Retrieval Systems. Expert Systems with Applications 9(3), 379–395.

Belkin, N. J. and A. Vickery: 1985, Interaction in Information Systems: A Review of Research from Document Retrieval to Knowledge-Based Systems. London: The British Library.

Bunt, H. C.: 1989, Information Dialogues as Communicative Action in Relation to Partner Modeling and Information Processing. In: M. M. Taylor, F. Neel, and D. G. Bouwhuis (eds.): The Structure of Multimodal Dialogue. Amsterdam: North-Holland, pp. 47–73.

Bunt, H. C.: 1996, Interaction Management Functions and Context Representation Requirements. In: S. LuperFoy, A. Nijholt, and G. van Zanten (eds.): Dialogue Management in Natural Language Systems. Proceedings of the Eleventh Twente Workshop on Language Technology. Enschede, NL, pp. 187–198.

Callan, J. P., W. B. Croft, and S. M. Harding: 1992, The INQUERY Retrieval System. In: Proceedings of the 3rd International Conference on Database and Expert Systems Applications. Berlin and New York: Springer, pp. 78–83.

Chu-Carroll, J. and M. K. Brown: 1998, An Evidential Model for Tracking Initiative in Collaborative Dialogue Interactions. User Modeling and User-Adapted Interaction 7(3+4).

Chu-Carroll, J. and S. Carberry: 1995, Generating Information-Sharing Subdialogues in Expert–User Consultation. In: C. S. Mellish (ed.): Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI '95). San Mateo, CA: Morgan Kaufmann, pp. 1243–1250.

Cohen, R. et al.: 1998, What is Initiative? User Modeling and User-Adapted Interaction 7(3+4).

Fawcett, R. P., A. van der Mije, and C. van Wissen: 1988, Towards a Systemic Flowchart Model for Discourse. In: New Developments in Systemic Linguistics. London: Pinter, pp. 116–143.

Fischer, M., E. A. Maier, and A. Stein: 1994, Generating Cooperative System Responses in Information Retrieval Dialogues. In: Proceedings of the Seventh International Workshop on Natural Language Generation (INLG '94), Kennebunkport, ME, pp. 207–216.

Flach, P.: 1994, Simply Logical: Intelligent Reasoning by Example. Chichester: John Wiley.

Grosz, B. J. and C. L. Sidner: 1986, Attention, Intentions and the Structure of Discourse. Computational Linguistics 12(3), 175–204.

Grosz, B. J. and C. L. Sidner: 1990, Plans for Discourse. In: P. R. Cohen, J. Morgan, and M. E. Pollack (eds.): Intentions in Communication. Cambridge, MA: MIT Press, pp. 417–444.

Grote, B., E. Hagen, A. Stein, and E. Teich: 1997, Speech Production in Human-Machine Dialogue: A Natural Language Generation Perspective. In: E. Maier, M. Mast, and S. LuperFoy (eds.): Dialogue Processing in Spoken Language Systems. ECAI'96 Workshop, Budapest, Hungary. Berlin, Heidelberg and New York: Springer, pp. 70–85.

Hagen, E.: 1999, An Approach to Mixed Initiative Spoken Dialogue. In this issue.

Hobbs, J. R., M. E. Stickel, D. E. Appelt, and P. Martin: 1993, Interpretation as Abduction. Artificial Intelligence 63(1-2), 69–142.

Ingwersen, P.: 1992, Information Retrieval Interaction. London: Taylor Graham.

Jameson, A. and T. Weis: 1995, How to Juggle Discourse Obligations. In: Proceedings of the Symposium on Conceptual and Semantic Knowledge in Language Generation. Heidelberg. (Also available as Report No. 133, SFB 378 (READY), University of the Saarland, Oct. 1996).

Logan, B., S. Reece, and K. Sparck Jones: 1994, Modelling Information Retrieval Agents with Belief Revision. In: W. Croft and C. van Rijsbergen (eds.): Proceedings of the 17th Annual International Conference on Research and Development in Information Retrieval (SIGIR '94). Berlin: Springer, pp. 91–100.

Maier, E., M. Mast, and S. LuperFoy (eds.): 1997, Dialogue Processing in Spoken Language Systems. ECAI'96 Workshop, Budapest, Hungary. Berlin and New York: Springer.

Mann, W. C. and S. A. Thompson: 1987, Rhetorical Structure Theory: A Theory of Text Organization. In: L. Polanyi (ed.): The Structure of Discourse. Norwood, NJ: Ablex, pp. 85–96.


Maybury, M. T. (ed.): 1997, Intelligent Multimedia Information Retrieval. Menlo Park, CA: AAAI Press/MIT Press.

Maybury, M. T. and W. Wahlster (eds.): 1998, Readings in Intelligent User Interfaces. San Mateo, CA: Morgan Kaufmann.

McRoy, S. W., S. Haller, and S. S. Ali: 1997, Uniform Knowledge Representation for Language Processing in the B2 System. Journal of Natural Language Engineering 3(2–3), 123–145.

McRoy, S. W. and G. Hirst: 1995, The Repair of Speech Act Misunderstandings by Abductive Inference. Computational Linguistics 21(4), 435–478.

Moore, J. D. and C. L. Paris: 1993, Planning Text for Advisory Dialogues: Capturing Intentional and Rhetorical Information. Computational Linguistics 19(4), 651–694.

Muller, A. and S. Kutschekmanesch: 1996, Using Abductive Inference and Dynamic Indexing to Retrieve Multimedia SGML Documents. In: I. Ruthven (ed.): MIRO 95. Proceedings of the Final Workshop on Multimedia Information Retrieval. Berlin and New York: Springer (eWiC, electronic Workshops in Computing series).

Muller, A. and U. Thiel: 1994, Query Expansion in an Abductive Information Retrieval System. In: Proceedings of the Conference on Intelligent Multimedia Information Retrieval Systems and Management (RIAO '94), Vol. 1. New York, pp. 461–480.

Oddy, R.: 1977, Information Retrieval through Man-Machine-Dialogue. Journal of Documentation 33(1), 1–14.

Rich, C. and C. L. Sidner: 1998, COLLAGEN: A Collaboration Manager for Software Interface Agents. User Modeling and User-Adapted Interaction 7(3+4).

Rich, E.: 1989, Stereotypes and User Modeling. In: A. Kobsa and W. Wahlster (eds.): User Models in Dialogue Systems. Berlin and New York: Springer, pp. 35–51.

Ruthven, I. (ed.): 1996, MIRO 95. Proceedings of the Final Workshop on Multimedia Information Retrieval. Berlin and New York: Springer (eWiC, electronic Workshops in Computing series).

Salton, G. and M. J. McGill: 1983, Introduction to Modern Information Retrieval. New York: McGraw-Hill.

Saracevic, T., A. Spink, and M.-M. Wu: 1997, Users and Intermediaries in Information Retrieval: What Are They Talking About? In: A. Jameson, C. Paris, and C. Tasso (eds.): User Modeling: Proceedings of the Sixth International Conference, UM '97. Vienna and New York: Springer Wien New York, pp. 43–54.

Schuster, E., D. Chin, R. Cohen, A. Kobsa, K. Morik, K. Sparck Jones, and W. Wahlster: 1988, Discussion Section on the Relationship between User Models and Dialogue Models. Computational Linguistics 14(3), 79–103.

Sitter, S. and A. Stein: 1992, Modeling the Illocutionary Aspects of Information-Seeking Dialogues. Information Processing & Management 28(2), 165–180.

Sitter, S. and A. Stein: 1996, Modeling Information-Seeking Dialogues: The Conversational Roles (COR) Model. RIS: Review of Information Science (online journal) 1(1). Available from http://www.inf-wiss.uni-konstanz.de/RIS/...

Stein, A.: 1997, Usability and Assessments of Multimodal Interaction in the SPEAK! System: An Experimental Case Study. The New Review of Hypermedia and Multimedia (NRHM), Special Issue on Evaluation 3, 159–180.

Stein, A., J. A. Gulla, A. Muller, and U. Thiel: 1997a, Conversational Interaction for Semantic Access to Multimedia Information. In: M. T. Maybury (ed.): Intelligent Multimedia Information Retrieval. Menlo Park, CA: AAAI Press/The MIT Press, pp. 399–421.

Stein, A., J. A. Gulla, and U. Thiel: 1997b, Making Sense of User Mouse Clicks: Abductive Reasoning and Conversational Dialogue Modeling. In: A. Jameson, C. Paris, and C. Tasso (eds.): User Modeling: Proceedings of the Sixth International Conference, UM '97. Vienna and New York: Springer Wien New York, pp. 89–100.

Stein, A. and U. Thiel: 1993, A Conversational Model of Multimodal Interaction in Information Systems. In: Proceedings of the 11th National Conference on Artificial Intelligence (AAAI '93), Washington, D.C. Menlo Park, CA: AAAI Press/MIT Press, pp. 283–288.


Stock, O., C. Strapparava, and M. Zancanaro: 1997, Explorations in an Environment for Natural Language Multi-Modal Information Access. In: M. T. Maybury (ed.): Intelligent Multimedia Information Retrieval. Menlo Park, CA: AAAI Press/The MIT Press, pp. 381–398.

Taylor, J. A., J. Carletta, and C. Mellish: 1996, Requirements for Belief Models in Cooperative Dialogue. User Modeling and User-Adapted Interaction 6(1), 23–68.

Terveen, L. G.: 1995, Overview of Human-Computer Collaboration. Knowledge-Based Systems, Special Issue on Human-Computer Collaboration 8(2-3), 67–81.

Thiel, U., J. A. Gulla, A. Muller, and A. Stein: 1996, Dialogue Strategies for Multimedia Retrieval: Intertwining Abductive Reasoning and Dialogue Planning. In: I. Ruthven (ed.): MIRO 95. Proceedings of the Final Workshop on Multimedia Information Retrieval. Berlin and New York: Springer (eWiC, electronic Workshops in Computing series).

Traum, D. R. and J. F. Allen: 1994, Discourse Obligations in Dialogue Processing. In: Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL '94), pp. 1–8.

Traum, D. R. and E. Hinkelman: 1992, Conversation Acts in Task-Oriented Spoken Dialogue. Computational Intelligence 8(3), 575–599.

van Rijsbergen, C. J.: 1989, Towards an Information Logic. In: N. Belkin and C. van Rijsbergen (eds.): Proceedings of the SIGIR '89. New York: ACM Press, pp. 77–86.

Wahlster, W. and A. Kobsa: 1989, User Models in Dialogue Systems. In: A. Kobsa and W. Wahlster (eds.): User Models in Dialogue Systems. Berlin and New York: Springer, Chapt. 1, pp. 4–34.

Winograd, T. and F. Flores: 1986, Understanding Computers and Cognition. Norwood, NJ: Ablex.

Authors' vitae

Dr. Adelheit Stein
GMD-IPSI, German National Research Center for Information Technology, Integrated Publication and Information Systems Institute, D-64293 Darmstadt

Dr. A. Stein has been a senior researcher at GMD-IPSI since 1988. She received her M.S. degree in Philosophy, Sociology, and Psychology and her Ph.D. in Social Sciences from the University of Constance in 1980 and 1983, respectively. She worked as a research assistant at the University of Constance until 1984 in the fields of social interaction theory, human development, and family studies. Until 1987 she was deputy manager of a German association of online information providers in Frankfurt. Her current research interests include human–computer collaboration, dialogue modeling and planning, and intelligent interfaces for information retrieval. The present article describes joint research initiated during Dr. Gulla's research stay at GMD-IPSI.

Dr. Jon Atle Gulla
Norsk Hydro, Av. de Marcel Thiry 83, 1200 Brussels, Belgium. Also: Norwegian University of Science and Technology, Department of Computer Science, N-7034 Trondheim, Norway

Dr. J. A. Gulla is a senior consultant at Norsk Hydro and also a part-time associate professor at the Norwegian University of Science and Technology.


He received his M.S. and Ph.D. degrees in Computer Science from the Norwegian Institute of Technology in 1988 and 1993, respectively. He received his M.S. degree in Linguistics from the University of Trondheim in 1995. From 1995 to 1996 he was a guest researcher at GMD-IPSI. Dr. Gulla has worked in several areas of software engineering and artificial intelligence, including conceptual modeling, knowledge representation, text generation, and dialogue systems. He is now focusing on change management issues in software engineering projects.

Dr. Ulrich Thiel
GMD-IPSI, German National Research Center for Information Technology, Integrated Publication and Information Systems Institute, D-64293 Darmstadt

Dr. U. Thiel is head of the research division "Advanced Retrieval Support for Digital Libraries" at GMD-IPSI. He received his M.S. (diploma) in Computer Science from the University of Dortmund in 1983 and his Ph.D. in Information Science from the University of Constance in 1990. Until 1988 he was a researcher and lecturer of Information Science at the University of Constance. Since 1990 he has been manager of several projects and research groups in GMD-IPSI. His primary research interests are in multimedia information retrieval, intelligent interfaces, and discourse and user modeling. Current projects of the division focus on digital library systems, information filtering, recommender systems, multimedia retrieval, intelligent retrieval and dialogue management.