
SUSAN U. STUCKY

THE SITUATED PROCESSING OF SITUATED LANGUAGE

1. INTRODUCTION

People often characterize the semantic component of a natural-language processing system as the implementation of a semantical theory. As system designers they adopt or perhaps even develop a theory of natural-language semantics (frame semantics or Montague semantics, say). Then they provide a computational mechanism that derives the semantic representation used in that theory from the syntactic representation produced by a parser. That mechanism may in turn connect to the rest of a system in such a way as to ensure appropriate action by the agent. This characterization, though simple, illustrates an important point: the search is for a single semantical account that can serve simultaneously as an account of the semantics of the utterances of the language and as the semantic representation of those utterances to be used by a computer. 1 There is growing evidence, however, that suggests the need for a re-evaluation of the assumption that there is, or could be, any such single account.

On the face of it, one might imagine that a serious challenge could come quite simply from fundamental differences between humans (and hence, human language) and computers. After all, we might imagine that because computers and humans are different, computers could not in any robust way be endowed with anything like the human language ability. From this point of view, no matter what your account of human language is like, if it were to adequately describe full-blooded language use, it couldn't be embodied in a computer. Arguments of this type have been made of course, Searle's (1980) Chinese box argument being an especially familiar one. One can also imagine arguments against the direct embedding of a linguist's account in a computer coming from differences in the way linguists theorize and what seems best from an implementer's point of view. And there have been cogent arguments of this latter sort too, at least for syntactic representations (see, for example, Shieber, 1984, 1986). Finally, one can imagine limits arising from more standard practical exigencies of computers themselves, i.e., their limits on time and memory.

But - and this is perhaps unexpected - there is also an argument against a single semantical account that comes from a profound similarity between how language works and how computers are used. Namely, both language and computational state are contextually dependent: much of natural language and much computational state cannot be understood independent of the context in which each is used. However, because the two kinds of context typically differ, the same meaning and content cannot be assigned to both expressions of a language and the corresponding computational states. In consequence, contrary to much of current practice, the design of natural-language processing systems needs to include separate, though coordinated, accounts of the semantics both of expressions and of internal state. And moreover, because the theorist's representation of the meaning of an expression is in most cases necessarily more independent of context than the corresponding internal state, the semantic representation the theorist gives to the language cannot be what is implemented in any very simple sense.

This fact - that two accounts are needed - should cause theorists to think once again not only about the relation between theories of language and computational implementation, but also about the relation between theories of language and agents (including human ones) more generally. Of course the general issue involved here isn't new (see, for instance, Partee's discussion (1979) of whether semantics is better understood as mathematics or psychology). By focusing on the semantical relations holding among language, agents, and their respective embedding circumstances, we will develop an understanding of the nature of the relationship between expressions of language and the state of the agent that results from its processing an expression of language, an understanding that differs in some fundamental ways from current accounts.

2. SOME BACKGROUND ASSUMPTIONS

Before making the argument, however, it will be useful to lay out several working assumptions and to clarify the use of some familiar terms along the way.

I begin with the - uncontroversial, I trust - assumption that the content of semantic structures can be context-dependent. Natural language is perhaps the first example of something having semantic structure that comes to mind, and its dependence on context for interpretation is unquestionable and pervasive. There are the indexicals and demonstratives, of course. Even the content of common nouns such as 'lemon' depends on which lemon (or pie, or car) you are talking about. Just as contextual dependence is a prominent feature of natural language, it is also a prominent feature of computational state. 2 That is, a computational state can correspond to states of the world that are determined not by the state alone, but by the state together with the environment it is embedded in. Consider defaults, as when, for example, you send mail to someone on your own time-sharing machine (i.e., the machine you are writing from) and, crucially, you don't include your own machine's address in what you type. (Crucial to the example, of course, is the fact that the corresponding internal computational state does not represent the machine you are typing from either.) The content of the state induced by your typing the user name in a 'to' field depends on a context, namely, the very machine you are typing from. Or consider a case in which a default is adopted such that the content of some internal structure is taken to be whatever time it is when some particular bit of program is evaluated; the content of the corresponding state will then be context-dependent much in the way that the English word 'now' is.
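
To make the default example concrete, here is a minimal sketch (in Python, with all names invented for illustration; it models no particular mail system) of two computational states whose content is fixed only by the machine and the moment they are embedded in:

    import socket
    from datetime import datetime

    class Message:
        def __init__(self, to_user):
            # The state records only the bare user name; which machine
            # completes the address is left to the embedding context.
            self.to_user = to_user

        def content(self):
            # The content is fixed by the state *plus* its environment:
            # the host this program happens to be running on, and the
            # moment at which this bit of program is evaluated ('now').
            local_host = socket.gethostname()
            return (f"{self.to_user}@{local_host}", datetime.now())

    msg = Message("samantha")
    print(msg.content())   # differs across machines and across moments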

Given the central role that the contextual dependence of content is going to play in what follows, it will be useful to locate the discussion within a particular analysis. Here I wish to draw on the perspective offered by situation semantics (Barwise and Perry, e.g., 1983, pp. 9-26). Two claims are relevant: (1) that meaning is distinct from content, and (2) that meaning is relational. (I should point out that the original accounts of situation semantics used the term 'interpretation' instead of 'content'; but the word 'interpretation' has so often been used in other contexts to refer to the result of the activity of interpreting, i.e., something that people do, that it seems wise to choose another term.) The distinction between meaning and content will allow us to talk about what is constant across contexts for a semantic structure, namely meaning, and what is not, namely content. In other words, that which is actually described, that which is referred to, that which an expression is about on a particular occasion of use will be called the content of an expression. The second claim, that meaning is relational, is central too, because it allows us to specify the systematicity in the way a given semantic structure depends on context, i.e., by specifying the invariant constraints that hold among semantic structures, the context they are used in, and the content. For example, the meaning of the word 'here' is stated as a relation among the facts about the situation the word is used in, the word itself, and what it is being used to describe. Its content, on an occasion of use, is the very place itself. Similarly, the meaning of a chunk of computational state can also be seen as relational, only in this case the relation holds among various states internal to a machine as well as things external to it. The meaning of the chunk of internal state resulting from the internalization of what you typed in includes, for instance, the very machine from which a message was sent. 3
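
As a rough illustration only (this is not a formalization of situation semantics, and the data structure below is invented for the sketch), the relational view can be pictured as a fixed mapping from circumstances of use to content:

    from dataclasses import dataclass

    @dataclass
    class UtteranceSituation:
        speaker: str
        place: str
        time: str

    # The meaning of 'here', pictured as an invariant relation taking the
    # situation an utterance occurs in to what the word is about then.
    def meaning_of_here(u: UtteranceSituation) -> str:
        return u.place            # the content is the very place itself

    print(meaning_of_here(UtteranceSituation("Susan", "Ventura Hall", "noon")))
    print(meaning_of_here(UtteranceSituation("Sam", "Xerox PARC", "3 p.m.")))
    # Same meaning (the relation); different contents (the places).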

So far we have assumed that the content of semantic structures can be dependent on context and that both natural-language expressions and computational state can be viewed as semantic structures. Thus, the content of both expressions and state can depend on context. However, for natural-language processing to work with any significant degree of reliability over time, there must be a systematic pairing of expressions and states. Given the contextual dependence of both, we will need to ask about the nature of that correspondence. For instance, are the contexts that determine the content the same or different? Are the contents the same or different? How about meaning? And, importantly, just what is the correspondence between an expression and the corresponding computational state going to be like on this view?

3. THE COORDINATION OF EXPRESSIONS AND INTERNAL STATE

Just how an expression of language that is contextually dependent and the possibly contextually-dependent computational state corresponding to it are coordinated is a matter of some complexity. To begin with, let's call the states that result from the successful processing of natural-language expressions successful states, or s-states for short. We don't yet know a lot about these s-states: all we expect so far is that they are something that can have both meaning and content. What else can we ascertain about the relation between an expression and its corresponding s-state?

First there is the matter of the content of the expression and its corresponding s-state. Are they the same or different? More precisely, is the s-state about the same state of affairs as the input expression? Surely that is a requirement we would want to put on the design of a natural-language processing system. Here's why: suppose you notice CoffeeBot, your new coffee-delivering robot, scuttling down the hall making its rounds and about to run into someone coming around the corner. So you say something like "Careful, now, there's somebody coming around the corner!", intending for the robot to avoid that person. Obviously, for things to work out for the best, we want some part of the robot's internal state, the part that results from its having processed the word 'somebody', to be about the very same person you are referring to, so that the robot can swerve before hitting him. Now just as there is a semantic relation between the expression 'somebody' and some person rounding the corner, there is a semantic relation between the relevant part of the s-state and that very same person. (Note that simply having a more explicit representation of the person won't, on its own, ensure this result.) Similarly, you want the robot to stop then, not five days later or even five minutes later. Of course, reference may not succeed, and the robot may not arrive at an internal state with the same content. But that, I would argue, is not an unnatural result.

Meaning, on the other hand, would seem to place different constraints on the coordination. Take our rattled CoffeeBot hearing the utterance, "Careful, now, there's somebody coming around the corner." Even though the contexts that determine the content of an utterance, call it u, and the contexts that determine the content of an s-state, call it s, overlap (minimally, we hope they are about the same state of affairs), those factors that determine the content of u (e.g., u itself, the discourse context u is said in, and the state of the world it is about) and those that determine the content of the corresponding s (e.g., s itself, the surrounding computational states, and the state of the world s is about) are not necessarily identical. For example, take u' to be the English word 'somebody', as in 'somebody's coming around the corner', which an agent might take in as a kind of indexical s' (which I will represent as $X), whose meaning is a relation among (at least) $X, the surrounding configuration inside the machine, and the individual to whom the original use of 'somebody' referred. The point is that the internal context that (in part) determines the content of $X is not the same as the external context involved in the determination of the content of 'somebody'. So it looks as if we might wish to design a system in such a way that the contents of the expressions and of the resultant s-states were the same, but that the exigencies of the differences in meaning were respected.
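
A small sketch of what such an indexical internal state might look like, assuming a hypothetical design in which markers such as $X and $HERE get their content only relative to the machine-internal context they sit in:

    class InternalContext:
        """The surrounding configuration inside the machine."""
        def __init__(self, perceived_person, current_place):
            self.perceived_person = perceived_person   # e.g., from a sensor
            self.current_place = current_place

    class IndexicalState:
        """An 'efficient' s-state: a bare marker such as $X or $HERE."""
        def __init__(self, marker):
            self.marker = marker

        def content(self, ctx):
            # The same marker picks out different things in different
            # internal contexts, much as 'somebody' and 'here' do externally.
            if self.marker == "$X":
                return ctx.perceived_person
            if self.marker == "$HERE":
                return ctx.current_place
            raise ValueError(self.marker)

    ctx = InternalContext(perceived_person="person rounding the corner",
                          current_place="hallway outside Room 23")
    print(IndexicalState("$X").content(ctx))      # the very person referred to
    print(IndexicalState("$HERE").content(ctx))   # the very place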

Other questions arise when we think about the representation of the context. Does an agent, in the course of understanding, need to represent the relevant part of the context that determines the content as part of an s-state; and if so, how much? For instance, does our robot need to represent explicitly the time that it is in order to take effective action? (No doubt the theorist/designer should account for how all this works, but how about the robot itself?) In addition, does the agent's s-state correspond to the content any more, or less, explicitly than the input expression does? In other words, does the robot need an internal representation that represents the content of 'somebody' any more explicitly than the English word does? Recall that to satisfy the equivalence-of-content requirement, an agent need only develop an s-state that has the same content, so there is no reason in principle why the internal state of an agent couldn't itself be contextually dependent in just the same way that the English word 'somebody' is. On this view, our agent not only uses situated language, its internal states are situated.

Let's flesh out this example a little further. We might expect CoffeeBot, upon discovering a note on someone's, say Samantha's, door saying "I'm at lunch", to infer directly that she was not there then and so (knowing not to deliver coffee to empty offices) not deliver the cup of coffee she had ordered. It is certainly conceivable that this could be achieved without using an s-state that explicitly represents what theorists think of as either the note's content or the relevant context in ways we are more accustomed to, i.e., by deriving a sort of logical form or semantic representation that is explicit in the way "Samantha is not in Ventura Hall, Room 23 at 12:00 p.m. October 15, 1988" is. Instead, it would reason more in this fashion: oh, this person's at lunch; that means this person isn't here; so I shouldn't leave the coffee because it will get cold. On the other hand, though we designers/theoreticians must understand (and specify, if we are building the machine) what states mean what in what contexts, the agent doesn't need to. In other words, our agent can do situated inference making use of an s-state that merely has the same content as "I'm at lunch" does. Now why should we expect that this might work? Because of the overlap in the temporal and spatial circumstances of the situation of inference and the situation being reasoned about; the agent's being in the stuff it reasons about (i.e., the right office at the right time) allows for effective action without any more explicit representation of the content than the input expression used, or without any explicit representation of the context at all. 4
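
Purely as an illustration of the idea (the predicates and the decision rule are invented for this sketch, not a claim about how CoffeeBot would actually be built), such situated inference might look like reasoning that consults the agent's own circumstances directly rather than an explicit representation of times and places:

    def decide_delivery(note_says_at_lunch, i_am_at_the_addressee_door_now):
        # The agent is embedded in the situation it reasons about: being at
        # the right door at the right time stands in for any explicit
        # representation of "Samantha is not in Room 23 at 12:00 p.m. ...".
        if note_says_at_lunch and i_am_at_the_addressee_door_now:
            # this person is at lunch -> this person isn't here ->
            # the coffee would get cold -> don't leave it
            return "keep the coffee; try again later"
        return "deliver the coffee"

    print(decide_delivery(note_says_at_lunch=True,
                          i_am_at_the_addressee_door_now=True))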

In contrast, current accounts of natural-language processing would have us believe that some representation of the syntax of u, i.e., Rsu, is computed from u (think of Rsu as a parse tree as generated by a chart parser, for example, or as a representation of some facts about u, as in a Prolog implementation) and that a representation of the meaning and/or content of u is then computed from Rsu, e.g., an expression of first-order logic. It's the last bit that seems questionable. On the view subscribed to in the present paper, a representation of the meaning of u (i.e., Rmu) would include a representation of the relevant context along with the situation described. Such a representation would hardly qualify as an s-state, which, I have been arguing, may well be more efficient than is useful for the theorist. For instance, if we're walking CoffeeBot down the hall and say to it "Let's stop here for a minute," we don't want it to spin its wheels and sputter, "But I don't know where here is." Not only does deriving a full representation of the meaning of the utterance seem to be out of the spirit of the enterprise, but getting from Rmu to s may actually be non-trivial, because we would have to get from more explicit s-states (e.g., $VENTURA-HALL:26) to less explicit ones (e.g., $HERE). Even going from a representation of the content of the sort a theorist or designer offers won't help, because the theorist's representation of the content of an utterance will be contextually more independent (and, arguably, necessarily so) than an s-state will be. Thus, to have only an account of the external significance of language would fail to provide what we need for the external significance of internal state.
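
The contrast can be caricatured schematically as below (every function is a stand-in invented for the sketch; no claim is made that any existing system is organized exactly this way):

    def parse(u):
        # Stand-in for Rsu, e.g. a chart parse or a set of facts about u.
        return ("PARSE", u)

    def explicit_semantics(parse_tree, context):
        # Stand-in for Rmu: an explicit, context-independent rendering in
        # which the place and time are filled in by the theorist/designer.
        return f"stop at {context['place']} at {context['time']}"

    def situated_state(u):
        # The more direct route argued for here: an indexical s-state whose
        # content is fixed later by the agent's circumstances, not by this step.
        return {"here": "$HERE", "somebody": "$X"}.get(u, u)

    ctx = {"place": "Ventura Hall, Room 26", "time": "12:00"}
    print(explicit_semantics(parse("stop here"), ctx))   # fully explicit rendering
    print(situated_state("here"))                        # '$HERE', still efficient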

What we want, ideally, is a system that could go directly from u to some more 'indexical' s, all the while preserving the equivalence of content in a theoretically principled fashion, but without invoking an intermediate representation in which what place it is is unambiguously represented. And this seems right. After all, we can understand what 'here' in English means without knowing where we are.

Now an efficient representation of the sort just described will suffice, so long as the agent is in (i.e., has direct perceptual access to) the circumstances its internal states are about. It can behave effectively and even, on the face of it, rationally. But such efficient states won't suffice if Samantha later complains that CoffeeBot failed to deliver the coffee she had ordered, and we inquire of the robot what happened. Then CoffeeBot will need to have the capability for rendering explicit more information about what happened in order to be able to answer the question. (Aha, the skeptic is thinking, of course this efficiency tack won't work. And so it won't, on its own.) But the ability to render explicit more information can come in a variety of ways other than through explicit representation of the context or through the development of more explicit s-states as part of the internalization process. For instance, there might well be information left lying about internally from the original order for coffee, which would allow certain inferences to be drawn. Or there may be indexing of the relevant s-state that facilitates look-up (e.g., a time stamp). There may be both. The point is that this extra ability is viewed in the theory as being derivative on the simple case, not as prior.
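
One cheap version of such indexing, sketched below under the assumption of a hypothetical internalization routine, is to stamp each efficient s-state with enough of its circumstances to support later questioning, while leaving the state itself indexical:

    from datetime import datetime

    log = []   # an indexed record of otherwise efficient s-states

    def internalize(s_state, place):
        # The state itself stays indexical; the index merely records when
        # and where it arose, so more can be made explicit later if asked.
        log.append({"state": s_state, "time": datetime.now(), "place": place})

    internalize("$AT-LUNCH", place="outside Samantha's office")

    # Later, when Samantha asks what happened, the index lets the agent
    # reconstruct a more explicit answer than the original state carried.
    for entry in log:
        print(f"At {entry['time']} ({entry['place']}): state {entry['state']}")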

Of course there may already be systems that embody something like this sort of semantic efficiency in the odd case. What's needed, I claim, is the development of the theoretical underpinnings that would help us understand how the coordination between expressions and the corresponding computational states is to be effected quite generally and what the correspondences between efficient expressions of language and efficient s-states are like. All that is provided in the present paper are two requirements and a suggestion. The requirements are: (1) that the content of the expressions and their corresponding s-states be the same, (2) that the meaning differences between external language and internal state be respected. And the suggestion is: how we get from an utterance to an s-state may not be via the theorist's representation of the meaning and content of that utterance in any very literal sense.

4. CONSEQUENCES FOR NATURAL-LANGUAGE PROCESSING SYSTEMS

The consequences for the theory and design of natural-language processing (by machine) are somewhat complex. You can't just take your favorite grammar formalism, code it up, implement a parser, expect to derive the semantic representation that your theory of natural-language semantics assigns to the utterances processed, and be done with it (as if that were an easy thing to do). You can't, on this view, design a theoretically motivated language front-end for a system unless you have a theory of the relevant level of description of the computation together with the contextual features that determine its content. And we've demonstrated that that structure can't profitably be analyzed as being equivalent to the theorist's representation of either the meaning or the content of the expressions of the language being processed.

On the other hand it may well be possible to get from utterances to s-states in much more direct ways than theorists have so far imagined. When the content of u and s are determined by the same external circumstances (i.e., as with the case of the English word 'here'), we needn't invoke explicit representations of the sort a theorist might require in the first instance. Effective action can be taken without it. Thus, having theories of the various kinds of contextual dependence and how they interrelate should allow for more realistic systems.

What does all this have to do with the human case? Well, perhaps quite a lot. Perry, elaborating on the arguments he and Barwise began in Situations and Attitudes (1983), has argued for the circumstantial dependence of belief states (e.g., Perry, 1986a). Perry has made the further very important point (Perry, 1986b) that it is the more efficient representations that are the ones that lead us to act. More generally, to the extent that we can motivate semantical, indexical, cognitive states, we will require our theories to do justice to this more complicated picture: we will have to give an account not only of the relation between expressions of language and the world they are about, but also of the corresponding internal representations and what they are about, and then of how the two are coordinated. 5

5. CONCLUSION

Because I have relied on a situated perspective that is compatible with situation semantics and its companion logical theory, we do well to ask whether we expect situation semantics to help in the design of systems. The answer is yes, I think, but not in the way some theoreticians would have expected. Situation semantics and situation theory are good vehicles in which to develop accounts of the circumstantial dependencies in the full interplay of language, inference, and computation on the view sketched here. Similarly, using situation semantics isn't a bad way to go about giving an account of the external significance of language and internal state, as surely we must. On the other hand, coding up the situation semantics representation of the external significance of language or replacing the semantic representation in a current system with representations based on situation theory won't do justice either to situation semantics or to the machine. 6

All of this leads to a commitment, not simply to the processing of situated language, but to situated natural-language processing. Viewing computation, language, and inference through this 'situated' perspective suggests a conception of natural-language processing that is both more powerful and more realistic than that underlying much current practice. In the end, it may well be that less needs to be represented explicitly by the agent.

ACKNOWLEDGEMENTS

Thanks are due to the Situated Inference Engine project members at CSLI for clarification of many of the ideas discussed here, and to Jon Barwise, Adrian Cussins, Barbara Grosz, Jim des Rivières, Geoffrey Nunberg, Brian Smith, and Tom Wasow for comments. A much earlier version of this paper appeared as notes in the proceedings of TINLAP3 (which was held at New Mexico State University in January 1987). Thanks also to audiences at Simon Fraser University, the University of Michigan, Bethel College, and the Linguistics and AI Departments at AT&T Bell Laboratories for commentary and discussion of the ideas discussed herein. And to an anonymous referee for careful comments.

The usual disclaimers apply. This research was conducted at the Center for the Study of Language and Information, at both Stanford University and at Xerox PARC, and supported by an award from the System Development Foundation.

NOTES

1 Note that the caricature is of the description of natural-language systems. That is, it is a characterization of how people theorize or talk about the systems they build, not necessarily how they are implemented. I have in mind examples such as the description of the use of a representation of first-order logic in the TEAM natural-language interface system built at SRI International (as reported on in Grosz et al., 1985), in an early implementation of Generalized Phrase-Structure Grammar at Hewlett-Packard Laboratories (see, for example, Gawron, 1982) in which an approximation of a Montagovian intensional logic representation was transformed into a first-order logic representation, and various systems involving semantic formalisms that are variants of Kamp's Discourse Representation Theory (e.g., Johnson and Klein, 1986). Whether the 'situated' approach argued for in the present paper will provide explanation for existing systems such as these, or whether it will turn out to be useful only for systems designed under it explicitly, is unknown at present.

2 As in the natural-language case, the information carried (by the execution of a program, for instance) is complex; dependencies arise from both the internal machine environment and the state of the external world. Delimiting the kinds of context and finding appropriate ways to characterize the complex of relations has only just begun. For work relevant to computation, see in particular Smith, 1986a, 1986b.

3 So far what I have said is mostly consistent with the 'situated' view as expressed in situation semantics; however, I will be rejecting an assumption made early on by Barwise and Perry in Situations and Attitudes (1983), namely, that an explanation of language use can be forthcoming solely on the basis of observable behavior (even broadly interpreted) and without giving an account of the internal architecture. The rejection turns out to be a consequence of the above-mentioned observations: given that both language and computation are dependent on context for content, and that understanding rests on overlapping but non-identical contexts, description of the internal architecture and its external significance (in contrast to language's external significance) will have to be part of the theoretical account of how agents use language. Barwise's current position, as I understand it, entails that he reject the earlier position too. See his 1987 comments in 'Unburdening the Language of Thought'.

4 This is the insight underlying Smith's notion of embedded computation. Systems work in part, he argues, because they are embedded in the situations they are about.

5 The Situated Inference Engine (SIE) project at CSLI is a project to design and build a computational system that engages in situated inference. However, in accord with the ideas being developed in the present paper, the point is not just that the language the SIE uses will be situated (that much is also true of current natural-language processing systems); or even that the content of internal structure depends likewise on circumstance (that much is already true of current systems); rather, the interest lies in the SIE's being designed with two additional purposes in mind: (i) all three - inference, internal structures, and language - will be situated in compatible ways, and (ii) there is a commitment to develop a common theoretical framework in terms of which to understand the full interplay among language, content, and the internal structures.

6 Robin Cooper, in a recent paper (forthcoming), addresses the same general issue for human agents and arrives at much the same conclusion, but for different reasons.


REFERENCES

Barwise, K. Jon: 1987, 'Unburdening the Language of Thought', in Two Replies, Report No. CSLI-87-74, Center for the Study of Language and Information, Stanford University.

Barwise, K. Jon and John Perry: 1983, Situations and Attitudes, Bradford Books/The MIT Press, Cambridge, Massachusetts.

Cooper, Robin: forthcoming, 'Fact Algebras: Representation, Psychology or Reality?', in Ruth Kempson (ed.), Mental Representation and Properties of Logical Form, Cambridge University Press, Cambridge, England.

Gawron, Jean Mark: 1982, 'The GPSG Linguistics System', in Proceedings of the 20th Annual Meeting of the ACL in Toronto, Ontario.

Grosz, Barbara J., et al.: 1985, 'The TEAM Natural-Language Interface System', Final Report: Project 4865, Artificial Intelligence Center, SRI International, Menlo Park, California.

Johnson, Mark and Ewan Klein: 1986, 'Discourse, Anaphora and Parsing', in Proceedings of the 11th International Conference on Computational Linguistics - 24th Annual Meeting of the Association for Computational Linguistics, August, 1986, Institut für Kommunikationsforschung und Phonetik, Bonn University, Bonn.

Partee, Barbara: 1979, 'Semantics - Mathematics or Psychology', in R. Bäuerle, U. Egli, and A. von Stechow (eds.), Semantics from Different Points of View, Springer-Verlag, Berlin, pp. 1-14.

Perry, John R.: 1986a, 'Circumstantial Attitudes and Benevolent Cognition', in J. Butterfield (ed.), Language, Mind, and Logic, Cambridge University Press, Cambridge, England. Also available as Report No. CSLI-85-53, Center for the Study of Language and Information, Stanford University, 1986.

Perry, John R.: 1986b, 'Perception, Action, and the Structure of Believing', in R. Grandy and R. Warner (eds.), Philosophical Grounds of Rationality, Oxford University Press, Oxford, England.

Searle, John R.: 1980, 'Minds, Brains, and Programs', The Behavioral and Brain Sciences 3, 417-457.

Shieber, Stuart M.: 1984, 'The Design of a Computer Language for Linguistic Information', in Proceedings of the 10th International Conference on Computational Linguistics - 22nd Annual Meeting of the Association for Computational Linguistics, Stanford, California, pp. 362-366.

Shieber, Stuart M.: 1986, 'Separating Linguistic Analyses from Linguistic Theories', Alvey/ICL Workshop on Linguistic Theory and Computer Applications, University of Manchester, September, 1985.

Smith, Brian Cantwell: 1986a, 'The Correspondence Continuum', Proceedings of the Sixth Canadian AI Conference, Montreal. Also available as Report No. CSLI-87-71, Center for the Study of Language and Information, Stanford University, 1987, and to appear in Artificial Intelligence.

Smith, Brian Cantwell: 1986b, 'Varieties of Self-Reference', in Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge, Morgan Kaufmann, Los Altos, California, pp. 19-43, revised version to appear in Artificial Intelligence.

IRL
2550 Hanover Street
Palo Alto, CA 94304
U.S.A.