
Morphological Cues for Lexical Semantics

Marc Noel Light

Technical Report 624 and Ph.D. Thesis
June 1996


Approved for public release; distribution unlimited

UNIVERSITY OF ROCHESTER
COMPUTER SCIENCE

Morphological Cues for Lexical Semantics

by

Marc Noel Light

Submitted in Partial Fulfillment

of the

Requirements for the Degree

Doctor of Philosophy

Supervised by

Professor Lenhart K. Schubert

Department of Computer Science
The College

Arts and Sciences

University of Rochester
Rochester, New York

1996

REPORT DOCUMENTATION PAGE (Form Approved, OMB No. 0704-0188)

1. AGENCY USE ONLY: (leave blank)
2. REPORT DATE: June 1996
3. REPORT TYPE AND DATES COVERED: technical report
4. TITLE AND SUBTITLE: Morphological Cues for Lexical Semantics
5. FUNDING NUMBERS: ONR N00014-92-J-1512
6. AUTHOR(S): Marc Noel Light
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Computer Science Dept., 734 Computer Studies Bldg., University of Rochester, Rochester NY 14627-0226
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES): Office of Naval Research, Information Systems, Arlington VA 22217; ARPA, 3701 N. Fairfax Drive, Arlington VA 22203
10. SPONSORING/MONITORING AGENCY REPORT NUMBER: TR 624 and Ph.D. Thesis
11. SUPPLEMENTARY NOTES: (none)
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Distribution of this document is unlimited.
13. ABSTRACT (Maximum 200 words): (see title page)
14. SUBJECT TERMS: lexical acquisition; lexical semantics; morphology
15. NUMBER OF PAGES: 145
16. PRICE CODE: free to sponsors; else $6.50
17-19. SECURITY CLASSIFICATION OF REPORT / OF THIS PAGE / OF ABSTRACT: unclassified
20. LIMITATION OF ABSTRACT: UL

NSN 7540-01-280-5500; Standard Form 298 (Rev. 2-89), prescribed by ANSI Std. Z39-18

Curriculum Vitae

Marc Light received his B.S. in ... He worked at the University of Zurich under the supervision of Dr. ... on lexical databases.

He entered the Ph.D. program at the University of Rochester in 1989 and received an M.S. in Computer Science in 1991. At Rochester, he has worked primarily with Prof. Lenhart Schubert in natural language semantics and processing.



Acknowledgments

It is a rainy Sunday morning in Tübingen, Germany; my daughter is sleeping, my wife is reading, and my graduate career is almost over. I started the University of Rochester Computer Science graduate program in the Fall of 1989 at the age of twenty-three. Since that time I have lived in three different cities, married, become a father, and turned thirty; graduate school has been a long, varied haul and I could never have made it alone. Thus it is a good time to thank all the people that helped make my completion of the program possible.

I will start by thanking the city of Rochester, the community in which I spent the majority of my graduate career. Rochester cannot claim to be one of the world's great cities. In fact, it cannot even claim to be one of the great American cities. Rochester does not offer the sites, restaurants, music, or theater that cities like New York, Boston, Chicago, or San Francisco do. However, Rochester has enough of these things for me. In addition, they all come with neither an elitist nor a defeatist attitude: Rochester doesn't think it is better or worse than the rest, it just thinks it is pretty good. I believe that this attitude was, in part, responsible for my enjoyment of the city. More specifically, I would like to thank the following Rochester organizations: Wegmans, the Amerks, Country Sweet, the Genesee Brewing Company, the Rochester Players Ultimate League, and most importantly Dog's Life.

While residing in Rochester, I lived in a house at 152 Shelbourne Road. I would like to thank Shinta Cheng, David Kumasaka, Peter Kumasaka, Beth Murphy, Mike Burek, and Jeff Thomas for making that house a great place to come home to.

One factor in my coming to Rochester was that the University of Rochester Computer Science department seemed, during the application process, to be a small community-oriented place. It is in fact such a place and I enjoyed my time there. I would like to thank Jill Forster, Peggy Franz, Marty Guenther, Pat Marshall, and Peg Meeker for providing a well-oiled administrative machine and Liudvikas Bukys, Jim Roche, Tim Becker, and Brad Miller for providing a well-oiled computer system. I would also like to thank Tom LeBlanc for the hockey. Finally, thanks for all the good times, Chris Brown, Corinna Cortes,


Mark Crovella, Virginia Desa, Cezary Dubnicki, Derek Gross, Yenjo Han, Peter Heeman, Janet Hitzeman, Chung Hee Hwang, Jonas Karlsson, Graham Katz, Leonidas Kontothanassis, Andrea Li, Nat Martin, Massimo Poesio, Polly Pook, Rajeev Raman, Eric Ringger, Jeff Schneider, Dave Traum, and Lambert Wixson. A special thanks goes out to George Ferguson and Pia Cseri-Briones for their friendship and hospitality.

A number of people provided more direct support of my graduate work. First, I would like to thank my advisor, Len Schubert. He pushed me to understand my topic more completely than I would have on my own. Perhaps more importantly, I always felt that I could trust him and that he truly wanted me to succeed. I would also like to thank the other members of my committee, James Allen, Greg Carlson, and Peter Lasersohn, for their support and advice. I regret that I was not able to make greater use of their wisdom. This was due in part to the fact that for the last four years of graduate school my presence in Rochester was only sporadic.

This brings me to another set of people that I want to thank. Due to my wife's and my instance of the two-body problem, I lived the majority of 1993 and 1994 in Ithaca. I would like to thank the Cornell Computer Science Department and more specifically Dan Huttenlocher for letting me use their computer system during this time. I would also like to thank Sally McConnell-Ginet and Alessandro Zucchi of the Cornell Linguistics Department for a number of helpful discussions. Finally, I would like to thank John Cordo, Jackson Galloway, and Nick Straley for showing me many a good time and making me feel at home in Ithaca.

After Ithaca, we moved to Tübingen, where I have been working at the University of Tübingen in the Computational Linguistics department. I spent many weekends and holidays working on my thesis and I often was anything but well-rested when I showed up for work on Monday morning. I would like to thank Steve Abney and Erhard Hinrichs for their understanding.

Since I began working on my dissertation, my wife, Jill Gaulding, has supported my every step. This has been particularly true in the last 6 months, during which my job and my thesis have left little time for anything else. She has also provided useful outsider advice at a number of critical moments. Thanks. And thank you, Ellen, for keeping us up a minimal number of nights.

Finally, and most importantly, I would like to thank my parents for always being there for me. My successful completion would not have been possible without the security and support they provided and still provide.

(This dissertation is based on work supported by ONR/ARPA research grant number N00014-92-J-1512.)


Abstract

Most natural language processing tasks require lexical semantic information such as verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner. This dissertation addresses two primary questions related to such information: how should one represent it and how can one acquire it?

It is argued that, in order to support inferencing, a representation with well-understood semantics should be used. Standard first order logic has well-understood semantics and a multitude of inferencing systems have been implemented for it. However, standard first order logic, although a good starting point, needs to be extended before it can efficiently and concisely support all the lexically-based inferences needed. Using data primarily from the TRAINS dialogues, the following extensions are argued for: modal operators, predicate modification, restricted quantification, and non-standard quantifiers. These representational tools are present in many systems for sentence-level semantics but have not been discussed in the context of lexical semantics.

A number of approaches to automatic acquisition are considered and it is argued that a "surface cueing" approach is currently the most promising. Morphological cueing, a type of surface cueing, is introduced. It makes use of fixed correspondences between derivational affixes and lexical semantic information. The semantics of a number of affixes are discussed and data resulting from the application of the method to the Brown corpus is presented.

Finally, even if lexical semantics could be acquired on a large scale, natural language processing systems would continue to encounter unknown words. Derivational morphology can also be used at run-time to help natural language understanding systems deal with unknown words. A system is presented that provides lexical semantic information for such derived unknown words.


Table of Contents

Curriculum Vitae ii

Acknowledgments iii

Abstract v

List of Tables ix

1 Introduction 1

2 The Need for Fine-grained Lexical Semantic Information 6

2.1 Verbal argument structure 7

2.2 Aspect 11

2.3 Other verbal semantic information 13

2.4 Nominal information 14

2.5 Relations between words 17

2.6 General characteristics of the use of lexical semantic information 18

3 Representing Lexical Semantic Information 20

3.1 Extensions to FOL needed to support lexically-based inferences 23

3.1.1 Non-standard quantifiers and restricted quantification 23

3.1.2 Modal operators (or modal predicates) 25

3.1.3 Predicate modification 27

3.1.4 Predicate nominalization 29

3.2 Conclusion 31


4 Morphological Cues for Lexical Semantic Information 33

4.1 Three approaches to the acquisition of lexical semantic information 33

4.2 A computational linguistics perspective 37

4.3 Morphological cueing 39

4.3.1 Results 42

4.3.2 Evaluation 44

4.3.3 Morphological cueing in human language acquisition 46

5 Handling New Words in the Input Stream 49

5.1 Introduction 49

5.2 Previous work 50

5.3 Kenilex 52

6 Semantic Analyses of Various Derivational Affixes 55

6.1 Event structure 55

6.1.1 re- 59

6.1.2 un- and de- 64

6.1.3 -ize, -ify, -en, and -ate 66

6.1.4 -le 67

6.2 Event participants 68

6.2.1 -ee 70

6.2.2 -er and -ant 71

6.3 Referring to events 72

6.3.1 -ment 74

6.3.2 -age 75

6.4 Other verbal affixes 75

6.4.1 mis- 75

6.4.2 -able 76

6.5 Abstractions 76

6.5.1 -ness 76

6.5.2 -ful 76

6.6 Antonyms 77

6.6.1 -ful/-less 77

7 Conclusion 78


Bibliography 81

A A Lexicon Produced by Applying Morphological Cueing to the Brown Corpus 93

A.1 Axioms and tables 93

A.2 Word lists 98

A.2.1 Words derived using re- 99

A.2.2 Words serving as bases for re- 102

A.2.3 Words derived using un- 105

A.2.4 Words serving as bases for un- 105

A.2.5 Words derived using de- 106

A.2.6 Words serving as bases for de- 106

A.2.7 Words derived using -Aize 107

A.2.8 Words serving as bases for -Aize 108

A.2.9 Words derived using -Nize 109

A.2.10 Words derived using -Aify

A.2.11 Words serving as bases for -Aify

A.2.12 Words derived using -Nify 112

A.2.13 Words derived using -en 112

A.2.14 Words serving as bases for -en 113

A.2.15 Words derived using -ate 113

A.2.16 Words derived using -le 120

A.2.17 Words derived using -ee 121

A.2.18 Words derived using -er 121

A.2.19 Words derived using -ant 131

A.2.20 Words derived using -ment 132

A.2.21 Words derived using -age 134

A.2.22 Words derived using mis- 135

A.2.23 Words derived using -able 136

A.2.24 Words derived using -ness 138

A.2.25 Words serving as bases for -ful 143

A.2.26 Words derived using -ful 144

A.2.27 Words derived using -less 145


List of Tables

4.1 Derived words 45

4.2 Base words 45

4.3 The number and percentage of morphologically defined word types in the sample of 434 Main Entries (adapted from [Anglin, 1993], p. 47) 47

4.4 The mean proportion of children's main entry vocabulary knowledge accounted for by each morphologically defined type at each grade (adapted from [Anglin, 1993], p. 70) 48

5.1 Breakdown of the Unknown Tokens 50


1 Introduction

Information about individual words is crucial for every natural language processing (NLP) task; it is hard to imagine an NLP system without a lexicon of some form. Lexicons can contain phonological, morphological, syntactic, semantic, and/or pragmatic information. The form of this information can range from simple atomic features to designer non-monotonic intensional logics. Increasingly, such discrete representations are being augmented, and in some cases replaced, by statistical information. Thus, the builder of an NLP lexicon has many types of information and representations from which to choose. This thesis will concentrate on lexical semantic information.

Automatic methods for acquiring lexical semantic information are less plentiful and thus currently the semantic component of most NLP lexicons is built by hand. However, the situation is rapidly improving for some types of lexical semantic information. The appearance of large online corpora has made the collection of large amounts of co-occurrence data possible. Statistical methods can then be applied to this data to cluster words that appear in similar contexts (e.g., [Brown et al., 1992; Schuetze, 1993]). These methods are particularly good at inducing coarse-grained semantic information such as semantic relatedness and subject area. Such information is very useful for some NLP tasks: examples include information retrieval [Hearst and Schuetze, 1994] and word sense disambiguation [Schuetze, 1993; Brown et al., 1991]. Semantic relatedness can also be obtained by the automatic processing of machine readable dictionaries (MRDs) (i.e., dictionaries for humans) [Pentheroudakis and Vanderwende, 1993].

However, many NLP tasks require at least a partial understanding of every sentence or utterance in the input and thus have a much greater need for lexical semantics. Natural language generation, providing a natural language front end to a database, information extraction, machine translation, and task-oriented dialogue understanding all require fine-grained lexical semantic information, e.g., verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner. Such fine-grained information is more difficult to acquire. Statistical methods based on distributional data can provide some of this information, such as selectional restrictions (e.g., [Grishman and Sterling, 1992]): nouns are clustered based on the verbs they appear with and verbs are clustered based on the nouns they appear with. However, the link between distribution and information such as aspectual class or antonymy is more tenuous and thus it will be more difficult for statistical methods to extract this information. MRDs would seem more promising since they are specifically designed to convey the meaning of a word. Unfortunately, in most cases MRDs only contain such information implicitly, and making it explicit often presupposes a general solution to NLP, which requires lexical semantic information, the very information we are trying to acquire. However, some success has been achieved by keying off set phrases such as any of a and one that, which correspond to the semantic relationships hyponymy and humanness [Markowitz et al., 1986; Chodorow et al., 1985], but this method is restricted to particular semantic relations that correspond to easily identifiable patterns. [Hearst, 1992] describes a similar technique which utilizes robust tagging and parsing systems to enable the search of open text corpora for more complex syntactic contexts. A pattern utilized in [Hearst, 1992] to extract hyponyms is NP0 such as {NP1, NP2, ..., {and|or} NPn}. From the sentence mammals such as cats are covered with fur, such a system would extract that mammal is a hypernym of cat. Again, the problem for such an approach is finding easily identifiable contexts which correspond to the desired semantic property. Another technique that has achieved partial success in acquiring fine-grained lexical semantics utilizes a partial processing of the text containing an unknown word, i.e., the semantics of surrounding words constrain the meaning of an unknown word [Granger, 1977; Berwick, 1983; Hastings, 1994]. However, as with MRDs, this approach is hindered by the need for a large amount of initial lexical semantic information and the need for an initial robust natural language understanding system.

As a consequence of this situation, current NLP systems which require fine-grained lexical semantics have only small hand-built lexicons and thus can only operate in restricted domains. Improved methods for automatic acquisition of lexical semantics could increase both the robustness and the portability of such systems. In addition, such methods might provide insight into human language acquisition.

This thesis provides a new method for acquiring fine-grained lexical semantic information. The method is based on derivational morphology, which is used as a "cue" for fine-grained lexical semantic information. The method is analogous to the work described in [Hearst, 1992] and mentioned above: it also uses a surface "pattern" as a cue for a particular semantic characteristic. The advantage of using morphological cues is that they are often easy to identify and correspond to very fine-grained semantic information. The disadvantages are that not all the needed information is encoded morphologically and that not every word that has a semantic property is morphologically marked for it. However, it will be shown that even for a morphologically impoverished language like English a great deal of semantic information is encoded morphologically. In addition, it appears that the majority of English words are either morphologically complex themselves or serve as a base of a morphologically complex word.1

An example of a morphological cue is the verbal prefix un-, which applies to change-of-state verbs and produces change-of-state derived forms. A change-of-state verb is one that presupposes the negation of its result state, e.g., in order to fasten something it must be at least partially unattached to begin with. Thus, it is possible to use un- as a cue for the change-of-state feature. By searching a sufficiently large corpus we are able to identify a number of change-of-state verbs. Examples from the Brown corpus include clasp, coil, fasten, lace, screw, and load. In addition, if an NLP system encounters, at run-time, an unknown verb such as unfluggle which appears to have un- as a prefix, it can infer that both this new verb and its base (i.e., fluggle) are likely to be change-of-state verbs.
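As a rough illustration of how such a cue could be applied off-line, the sketch below scans a set of verb lemmas for un-/base pairs. The word list and the simple string test are hypothetical stand-ins: the thesis's actual procedure works over the Brown corpus and, as Chapter 6 discusses, needs checks that naive prefix-stripping does not provide.

```python
# Toy stand-in for a corpus-derived set of verb lemmas.
verb_lemmas = {"fasten", "unfasten", "coil", "uncoil", "load", "unload",
               "understand", "mingle"}

def change_of_state_candidates(verbs):
    """Flag verbs as change-of-state because both un-V and V occur as verbs."""
    candidates = set()
    for verb in verbs:
        base = verb[2:]
        # un- attaches to change-of-state verbs and yields change-of-state
        # verbs, so both the base and the derived form get the COS feature.
        # Requiring the putative base to be an attested verb filters out
        # words like "understand" ("derstand" is not a verb).
        if verb.startswith("un") and base in verbs:
            candidates.update({base, verb})
    return candidates

print(sorted(change_of_state_candidates(verb_lemmas)))
# ['coil', 'fasten', 'load', 'uncoil', 'unfasten', 'unload']
```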

This information is crucial for some NLP tasks. For example, consider the following dialogue segment, which was taken from the TRAINS dialogues.2

(1) M: bring it all the way down to Bath
to pick up a boxcar and bring it all the way back to Avon
load it with bananas and then take it off to Corning

S: okay
if we did that
it would take ...

In order to verify the plausibility of the manager's plan (M) and to carry it out, the system (S) needs to find a boxcar that is empty, since it will be filled with bananas. Thus, to pick an appropriate boxcar, the system would have to know that to load a boxcar, it has to be empty first. One possible place for this information to reside is as the lexical semantics of the verb load: the entry for the verb load should contain the information, or pointers to the information, that load is a change-of-state verb, i.e., that before one can load a boxcar it needs to be empty or at least not full.3

1The following experiment supports this claim. Just over 400 open class words were picked randomly from the Brown corpus and the derived forms were marked by hand. Based on this data, a random open class word in the Brown corpus has a 17% chance of being derived, a 56% chance of being a stem of a derived form from the corpus, and an 8% chance of being both.

2In a TRAINS dialogue, two people, one as the system and the other as the user, together try to perform a planning task. The tasks involve trains, cities, factories, products, and deadlines. Spoken discourse was recorded by microphone and tape recorder. For an overview of the TRAINS project see [Allen et al., 1994] and for a discussion and listing of the dialogues see [Gross et al., 1993].

3One might argue that this information is really world knowledge, not linguistic knowledge, and


In addition to describing a method based on derivational morphology for acquiring fine-grained lexical semantics (both off-line and at run-time), this thesis will also discuss what sort of representational system is required for encoding such information. It will be argued that a representation with a denotation should be used and that it should include extensions to standard first order logic (FOL) such as non-standard quantifiers, restricted quantification, modal operators, predicate modification, and predicate nominalization. One representation system that meets these needs is Episodic Logic [Hwang and Schubert, 1993a].

An example of the representation which will be used in this thesis is the axiom below, which partially defines the change-of-state (COS) feature.

For all predicates P with features COS and DYADIC4:

∀x, y, e [[P(x, y) ** e] ⊃
  [∃e1: [at-end-of(e1, e) ∧ cause(e, e1)] [rstate(P)(y) ** e1]
   ∧ ∃e2: [at-beginning-of(e2, e)] [¬rstate(P)(y) ** e2]]]

The operator ** is analogous to ⊨ in situation semantics; it indicates that a formula describes an event. The axiom states that if a change-of-state predicate describes an event, then the result state of this predicate holds at the end of this event and that it did not hold at the beginning, e.g., if one wants to formalize something it must be non-formal to begin with and will be formal afterwards.
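For concreteness, here is the axiom instantiated for the verb load, rendered in LaTeX, reading rstate(load) as the state of being loaded. This instantiation is my own illustration of how the schema applies, under the assumption that load carries the COS and DYADIC features; it is not an additional axiom from the thesis.

```latex
\[
\forall x,\, y,\, e\; \bigl[\, [\textit{load}(x,y) \mathbin{**} e] \supset
  \bigl[\, \exists e_1 \colon [\textit{at-end-of}(e_1,e) \land \textit{cause}(e,e_1)]\;
      [\textit{rstate}(\textit{load})(y) \mathbin{**} e_1]
    \;\land\; \exists e_2 \colon [\textit{at-beginning-of}(e_2,e)]\;
      [\lnot \textit{rstate}(\textit{load})(y) \mathbin{**} e_2] \,\bigr] \,\bigr]
\]
```

That is, a loading event e ends in a state e1, caused by e, in which y is loaded, and begins in a state e2 in which y is not loaded, which is exactly the empty-boxcar inference needed in dialogue (1) above.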

In sum, the main contributions of this thesis are i) a method of acquiring lexical semantic information which utilizes the morphological structure of words, ii) a method for processing some of the new5 morphologically complex words as they appear in the input stream of an NLP system, and iii) an argument that a language more expressive than FOL is needed to represent fine-grained lexical semantic information.

This thesis is structured as follows. A review of the lexical semantic information utilized by various NLP systems is presented in Chapter 2. The following chapter, Chapter 3, examines the issue of representation and argues for the representation briefly illustrated above. It also reviews previous work on representing lexical semantic information. Chapter 4 reviews some acquisition methods and presents a new method: morphological cueing. Chapter 5 deals with the related problem of unknown words in the input stream of an NLP system. Many claims

has no place in the lexicon but instead should be in a world knowledge module. The dispute over the distinction between world and linguistic knowledge has a long history in philosophy. Whether the distinction exists and, if so, how it should be drawn are hotly contested. Regardless of the categorization of the information, it is clear that it is crucial for NLP tasks and that there must be some mechanism for tying a word in an utterance to this information.

5Here and throughout this thesis, new will have a relative meaning: new to whatever system or human we are discussing. Whether or not a word is in some dictionary somewhere will most often be irrelevant to the discussion.


of this dissertation are based on semantic analyses of different derivational affixes. However, in order to allow for an uninterrupted discussion of representation and acquisition, the details of the analyses are presented in Chapter 6. Chapter 7 summarizes the dissertation and concludes. In the appendix, the results of applying morphological cueing to the Brown corpus [Kucera and Francis, 1967] are listed.


2 The Need for Fine-grained Lexical Semantic Information

The previous chapter only contained one concrete example of fine-grained lexical semantic information and only one NLP task to which this information was relevant: change-of-state aspectual information and task-oriented dialogue understanding. In this chapter, examples and tasks will be provided for many types of lexical semantic information, namely, verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner. The purpose of these examples is to make the meaning of the term fine-grained lexical semantic information clearer and to show why fine-grained information is crucial for NLP. The list is in no way complete.

2.1 Verbal argument structure

In argument structure, we include information about subcategorization, thematic arguments, and selectional restrictions. Argument structure is also referred to as verb valence and case frames. Although exactly what information should be part of argument structure and how it should be defined is disputed, the core phenomenon is clear: the argument structure of a verb is the information needed to relate its subcategorized (syntactic) complements to its thematic (semantic) arguments and vice versa. In other words, the argument structure of a verb specifies the main players in the situation described by the verb and their possible syntactic realization, if any. For example, in (2a), the verb of the sentence is send and its argument structure specifies that the subject noun phrase is filled by the thing that does the action and the direct object noun phrase is filled by the thing that is affected by the action. The rest of (2) lists predicates that have various argument structures.

(2) a. Mary sent the book.

b. John received a book from Mary.

c. Mary ate.

d. The glass broke.

e. John laughed.

A number of generalizations can be made about how the meaning of a verb partially determines its argument structure. A simple example is that a verb that describes events involving two participants tends to be transitive. Levin [1993] documents a large set of such generalizations for English verbs.

Many of these generalizations involve semantic characteristics of the participants, e.g., the participant that is the agent is usually realized as the subject. However, what is an agent? Following [Dowty, 1991], specifying that an argument is the agent of an action amounts to saying that it has some number of "agentive" properties such as volition, sentience, causing a change in the patient, and/or existing independently of the event. What is crucial is that the "agentive" participant have more of these properties than any of the other arguments. Other thematic arguments can be defined similarly.

For a particular verb, many characteristics required of a participant are called selectional restrictions. The difference between selectional restrictions and thematic arguments is that, whereas the property sets defining thematic arguments are general and apply to many different verbs, selectional restrictions are often particular to a single verb. A clear example of a selectional restriction manifests itself in the contrast between the German verbs essen and fressen (see (3)). Essen restricts its agent to be human and fressen restricts it to be animate but not human.

(3) a. Die Frau ißt. (The woman ate.)

b. Der Hund frißt. (The dog ate.)

Selectional restrictions often only hold in limited domains. For example, if a dialogue understanding system only has to process dialogues about reserving seats on a plane, the verb reserve can restrict its direct object to be a seat on a flight. However, in a more general setting, it is possible to reserve things such as a particular car from a car rental office. Thus, selectional restrictions may be properly seen as a shortcut to doing more general domain reasoning.

Why is argument structure crucial for NLP tasks? Consider natural language understanding (NLU) systems which translate natural language into some internal language, such as a database query language for database front ends or information templates for extraction systems. One of the main subtasks is mapping the verb to a semantic predicate and mapping the verb's syntactic complements to the predicate's arguments. There are two complicating aspects to this mapping that make it a problem for NLU tasks:


" many-to-many verb/predicate mapping: there is a many-to-many mappingbetween Verbs and internal language predicates,

" many-to-many complement/argument mapping: a verb's arguments config-ure themselves syntactically in many different ways.

Consider, as an example, the verb load. For many tasks each of the sentences below would map to a different predicate.

(4) a. the woman loaded the gun with bullets

b. the man loaded the truck with hay

c. the child loaded up on chips

d. the courses loaded down the student

e. the occupants load the circuit with heaters in the winter

In addition, the predicate load-container could correspond to each of the verbs in the sentences below.

(5) a. the woman loaded the truck with hay

b. the woman filled the truck with hay

c. the woman put the hay into the truck

Thus, we potentially have a many-to-many mapping. The corresponding problem for an NLU system is to map each token of a verb to the correct predicate: verb sense disambiguation. Verbal argument structure helps by specifying the context that particular pairings occur in. One possible argument structure for load might specify that it has a human as the performer in its subject complement, a weapon as one of the patients of the action in its object complement, and ammunition as the other patient of the action in a prepositional phrase complement. This information would enable a system to pick out the load-firearm predicate for (4a) and rule it out for (4b) and (4c).

As an example of the second aspect listed above consider the sentences below.

(6) a. The coach loaded the players onto the bus and left for Hershey

b. The coach loaded the bus with players and left for Hershey

c. The players loaded the bus and left for Hershey

Even if we assume that each sentence involves the same verb/predicate pair, there is a problem mapping the complements of the verb to the arguments of the predicate. Again, argument structure information can help. It gives specific information about the characteristics of a predicate's arguments, e.g., for load-container(x,y,z), x has to be the initiator of the action, y has to be the container, and z the filler. This information can be used along with more general principles to match the complements to the arguments.
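The following sketch shows one way such argument-structure entries could drive both verb sense disambiguation and complement/argument mapping. The class names and the predicates load-firearm and load-container follow the examples above, but the encoding itself is a hypothetical illustration, not the thesis's lexicon format.

```python
# Hypothetical argument-structure entries for two senses of "load".
ARG_STRUCTURES = [
    {"predicate": "load-firearm",
     "subject": "human", "object": "weapon", "with_pp": "ammunition"},
    {"predicate": "load-container",
     "subject": "human", "object": "container", "with_pp": "commodity"},
]

def resolve_load(subject_class, object_class, with_pp_class):
    """Pick the predicate whose selectional restrictions fit the complements,
    and map each complement to its argument position."""
    for entry in ARG_STRUCTURES:
        if (entry["subject"] == subject_class and
                entry["object"] == object_class and
                entry["with_pp"] == with_pp_class):
            # subject -> x (initiator), object -> y, with-PP -> z
            return f"{entry['predicate']}(x={subject_class}, y={object_class}, z={with_pp_class})"
    return None

# "the woman loaded the gun with bullets" / "the man loaded the truck with hay"
print(resolve_load("human", "weapon", "ammunition"))    # load-firearm(...)
print(resolve_load("human", "container", "commodity"))  # load-container(...)
```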

Verb sense disambiguation and multiple argument realizations are central problems of the MUC-3 task [Chinchor et al., 1993]: separate a large number of messages into those that are relevant and those that are not; then fill set templates with information from the relevant messages. Relevant messages are those that have to do with terrorist acts in Latin America. The templates consist of slots such as type of incident, perpetrator, human target, physical target, instrument, and effect. Verb sense disambiguation helps decide if a message is about terrorism. And dealing with multiple argument realizations helps fill the templates consisting of slots. Similarly, the project in [Pugeault, 1994] uses argument structure extensively to extract information from short research project descriptions. The extracted information is used for a variety of tasks, among them listing who does what and what kind of results are available and identification of relations between projects. Other extraction projects that utilize argument structure are [Palmer, 1990; Palmer et al., 1986; Herzog and Rollinger, 1991; Gomez, 1994].

Natural language front ends for database systems also have to deal with verb sense disambiguation and multiple argument realizations. Early systems such as that described in [Bobrow and Webber, 1980] used argument structure to help solve these problems. Current systems continue this practice [Bates et al., 1994; Alshawi, 1992; Dowding et al., 1993; Backofen and Nerbonne, 1992].

For natural language generation (NLG), verb/predicate mapping and complement/argument mapping are equally important, but in the opposite direction: the task is to pick an appropriate verb and syntactic complements based on a predicate, arguments, and other semantic/pragmatic information. This task is referred to as lexical choice or as lexicalization in the literature. A simple approach to lexical choice is to assume that these mappings are one-to-one. Then each predicate maps to one verb and that verb's argument structure specifies the syntactic complement to which each semantic argument maps. This simplistic approach often produces acceptable results. But if predicates map to different verbs depending on the situation or if discourse considerations require alternative syntactic complements, then the two problems mentioned above, many-to-many verb/predicate mapping and many-to-many complement/argument mapping, will cause problems. Again, for many systems, verbal argument structure provides basic information for the choice of verb and its complements [Cumming, 1986; Matthiessen, 1991; Iordanskaja et al., 1991]. For example, consider generating a sentence to express an event where Mary transferred a car to John in exchange for money. If Mary is currently in focus, for discourse structure reasons it might be desirable to have Mary in the subject position of the sentence. Thus, the verb sold should be used instead of bought, as in Mary sold the car to John. The argument structures of bought and sold are crucial to this decision. (This example was adapted from [Pustejovsky and Nirenburg, 1987].)

Machine translation (MT) also makes extensive use of argument structure. Interlingua systems must grapple with the problems outlined for NLU and NLG above and thus argument structure is crucial for all the same reasons (e.g., [Dorr, 1993; Nirenburg and Raskin, 1987]). In transfer systems, corresponding verbs must specify their argument structures so that the linking preserves thematic relations (e.g., [Nagao, 1987]). In addition, lexical gaps in one language may force a correspondence between two verbs where the syntactic realization of semantic arguments is rearranged, as in (7) (taken from [Dorr, 1994]).

(7) E: I like Mary # S: María me gusta a mí ('Mary pleases me')

As with MT tasks, dialogue understanding tasks involve both NLU and NLG and thus inherit their problems and needs. In addition, for the TRAINS project, the argument structure in the lexicon is crucial for mapping to internal representations used for plan recognition and reasoning. Event types have argument role slots. For example, the unload event has an argument role for the container being unloaded and a role for the stuff being taken out. As we have seen, this information can be used in conjunction with aspectual information to infer that the object filling the container role will be empty at the end of the unload event.

2.2 Aspect

The aspect of a verb specifies the structure of the event described by the verb. The aspectual classes that will be utilized in this thesis are those of [Vendler, 1967]: telic, non-telic, accomplishments, achievements, statives, and activities (see the examples in (8)).

(8) accomplishment: build a house, climb a mountain

achievement: awake, reach the top

stative: like, understand

activity: mingle, walk

For natural language tasks, aspect helps specify when the events described occur. For example, aspectual class plays a part in the temporal ordering of the events in a dialogue.


(9) M: bring it all the way down to Bath
to pick up a boxcar and bring it all the way back to Avon
load it with bananas and then take it off to Corning

M: the tanker will carry the OJ and the entire train will go to Avon

The often-noted (see [Kamp, 1979; Hinrichs, 1981; Partee, 1984; Eberle, 1991]) default generalization is that telics move the temporal reference forward and non-telics do not. Thus in the example, since the first three underlined verbs are telic, the picking up is followed by the bringing, which is followed by the loading. In contrast, carry is an activity verb and thus the carrying takes place at the same time as the going to Avon.
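A minimal sketch of this default generalization: telic clauses advance a reference-time counter, non-telic clauses overlap the current one. The aspect labels are toy annotations of the dialogue's verbs; note that the sketch only captures the forward-movement default, and getting carry to overlap the subsequent going, as in the dialogue, needs the fuller, defeasible treatments in the cited work.

```python
# Toy aspectual classes for the verbs in dialogue (9).
ASPECT = {"pick up": "telic", "bring": "telic", "load": "telic",
          "carry": "activity", "go": "telic"}

def order_events(verbs):
    """Assign coarse time indices: telics advance the reference time,
    non-telics overlap it."""
    now, timeline = 0, []
    for verb in verbs:
        if ASPECT[verb] == "telic":
            now += 1                   # telic: move the narration forward
        timeline.append((verb, now))  # activities share the current index
    return timeline

print(order_events(["pick up", "bring", "load", "carry", "go"]))
# [('pick up', 1), ('bring', 2), ('load', 3), ('carry', 3), ('go', 4)]
```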

In addition, the meaning of temporal modifiers is dependent on the aspect of the predicate they modify. For example, in (10a) the temporal modifier in an hour specifies the amount of time leading up to the bouncing (activity) event. In contrast, in an hour specifies the length of the loading event itself in (10b) (see [Dowty, 1979], p. 56).

(10) a. Mary will bounce the ball in an hour

b. Mary will load the truck in an hour

The examples given above show that aspect is important for NLU tasks, both those that involve multiple sentence inputs and those that might be constrained to single sentence inputs (e.g., database front ends) [Eberle, 1991; Passonneau, 1988; Dahl et al., 1987; Webber, 1988; Hwang and Schubert, 1993a].

Aspect is also clearly important for generating a discourse describing a set of temporally-related events. An example of the role of aspect in producing single sentences is given in [McDonald, 1991]. He describes the information and processing needed to choose between the two alternative ways in (11) of expressing the idea that the addressee needs to be in transition from being at a certain location to not being there.

(11) a. you need to leave by ...

b. you can stay until ...

The telic leave stresses the result of the transition whereas the activity stay stresses the activity before the transition. In the situation described in [McDonald, 1991] it is desirable to stress the result of the transition and thus (11a) is preferred.

An example of the need for aspectual information in MT is the translation of the Japanese verb kaburu into English [Hutchins and Somers, 1992]. If it is used to describe the process of getting a hat onto the agent then it should be translated as put on, but if it describes the state of the hat being in place then it should be translated as wear: Japanese conflates the telic and activity events concerning the wearing of a hat whereas English has two separate verbs. To translate from Japanese to English, the aspect of the event needs to be derived from context in Japanese and then matched to the aspectual information in the English lexicon.
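A tiny sketch of that matching step: once the aspect of the kaburu event has been derived from context, the English verb is chosen by looking for a lexical entry with the same concept and aspectual class. The entry format, the concept label, and the aspect labels are illustrative assumptions.

```python
# Hypothetical English lexical entries for the "hat on head" concept.
ENGLISH_LEXICON = [
    {"verb": "put on", "concept": "hat-on-head", "aspect": "telic"},
    {"verb": "wear",   "concept": "hat-on-head", "aspect": "activity"},
]

def translate_kaburu(aspect_from_context):
    """kaburu conflates the change and the state; English does not."""
    for entry in ENGLISH_LEXICON:
        if entry["concept"] == "hat-on-head" and entry["aspect"] == aspect_from_context:
            return entry["verb"]
    return None

print(translate_kaburu("telic"))     # put on
print(translate_kaburu("activity"))  # wear
```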

2.3 Other verbal semantic information

In this section we will discuss some of the other semantic characteristics of verbs that can be utilized for NLP tasks. Almost any piece of information about a verb's meaning could be used in service of an NLP system. But there are a number of characteristics that seem to have extensive syntactic and semantic ramifications. Often they seem to underlie other semantic information such as verbal argument structure and aspect, as many linguists have noted (e.g., [Jackendoff, 1990; Pinker, 1989; Levin, 1993]). Some of these semantic characteristics are listed in (12) along with exemplary verbs.

(12) a. incorporated theme: butter (the toast)

b. causation: (John) melted (the butter), kill

c. creation: build, bake, sew, cook, make [Pinker, 1989]

d. magnitude: toss vs. hurl

e. incorporated instrument: brush, comb, hose [Pinker, 1989]

f. effect: clean, cleanse, empty, strip [Pinker, 1989]

g. manner: spill vs. ladle, pour vs. spray [Pinker, 1989]

[Wu and Palmer, 1994] shows how classes based on such information can be used in a lexical choice procedure for MT and NLG systems. And [Palmer et al., 1986; Dahl et al., 1987; Palmer, 1991] show how to use such information to solve the many-to-many complement/argument mapping problem for NLU. Dorr [1993; 1994] shows how such information can be used to handle translation mismatches.

Given that there is an infinite number of semantic distinctions that could be drawn, an important question is how to decide what verbal semantic information should be part of the linguistic knowledge base. From an engineering standpoint, the answer is whatever is needed to solve the problem at hand. For example, in the TRAINS system, semantic distinctions are made that are needed to discuss and reason about plans in the TRAINS domain. Similarly, the system described in [Palmer, 1990], which uses verbal semantics to link syntactic complements to semantic arguments, uses the following principle: "the decompositions only need to go far enough to pick up any arguments to the verb that can be mentioned syntactically" ([Palmer, 1990], p. 87). Here and throughout this dissertation, I will assume the same pragmatic stance.

2.4 Nominal information

In comparison to verbs, the study of nominal semantics has received little attention. In fact, much of the nominal semantic information in NLP systems is present to make applying verbal selectional restrictions possible (e.g., [Palmer et al., 1986; Gomez, 1994; Rickheit, 1991; Backofen and Nerbonne, 1992; Wu and Palmer, 1994; Pustejovsky and Nirenburg, 1987]). For example, we could classify knives as tools and men as humans and use this information to distinguish between the linking of the subject noun phrase with the instrument versus the agentive semantic argument in (13a). A similar mapping ambiguity in (13b) could be solved if trucks were classified as containers and hay as a commodity. The sentence in (13c) contains an ambiguity between two senses of start; or, at least, if there is only one sense of start, it has different entailments depending on what sort of thing its patient argument is. For example, if it is a book we can assume Mary can read and if it is a car that she can drive.1

(13) a. the knife/the man cut the bread

b. Mary loaded the truck/the hay

c. Mary started the book/the car

Of course, the nouns themselves may be ambiguous and thus the process of using selectional restrictions to disambiguate verbs, nouns, and complement/argument mappings might involve some sort of constraint propagation.

In task-oriented dialogues, information about what the referent of a noun is used for or how it is created is needed to interpret an utterance. For example, in the dialogue fragment below (this dialogue is handled by the TRAINS system), the information that OJ is something that is made by processing oranges and that an OJ factory can do this processing is needed to recognize the second and third utterances as suggestions about which elements in the domain should be utilized to achieve the goal of making OJ and what role they should play.

(14) M: We have to make OJ.
There are oranges at I
and an OJ factory at B.

1This example is based on a discussion of start in [Pustejovsky, 1991].


Engine E3 is scheduled to arrive at I at 3PM.
Shall we ship the oranges?

S: Yes,
shall I start loading the oranges in the empty car at I?

Another use of nominal semantics is for the lexical choice subtask of NLG. As with selectional restrictions, pretty much any piece of semantic information could be used. For example, in [Pustejovsky and Nirenburg, 1987] the problem of expressing the concept of a black, small, average-height, made-of-steel, 'location-of eat' table is addressed. They utilize the information about where it is usually located ('location-of eat') to pick out dining table from a set of words that includes desk, table, coffee table, and utility table. Another example of information used for lexical choice is the characteristic of basic-level class. Following [Reiter, 1991], if a concept of a particular pit bull terrier, Spot, is to be expressed, and we don't want to specify anything special about Spot, then we should use the word dog. If we use the word mammal or pit bull then we would be saying something special about Spot. Thus, for NLG, information about which nouns correspond to basic-level classes is needed. Finally, almost any piece of information might be useful. For example, the lexical choice system described in [Sondheimer et al., 1989] utilizes a classifier that takes a set of characteristics of an entity and attempts to find the most general concept in the hierarchy that has these characteristics. Any characteristic could be helpful for the classification task.

The primary tool used for representing nominal semantic information is a sortal hierarchy. Concepts lower in a hierarchy are more specific than those higher up. In addition, concepts inherit properties from their supertypes. Such hierarchies are also called is-a hierarchies or taxonomies.

One problem with much of the work on nominal semantics is that there does not seem to be a theory behind decisions about what information should be stored. It appears that researchers have not been able to find constraints on what nominal semantic information is important to language. (This is in contrast to verbal semantics, where information such as aspect or argument structure has been found to be important to all verbs and which is tightly constrained by language; e.g., one does not find verbs that refer to a process that includes a change from ¬X to X and back to ¬X. In fact, a number of studies make strong claims about what verb meanings are possible, e.g., [Pinker, 1989; Tenny, 1992].) One exception to this is the work of Pustejovsky (e.g., [Pustejovsky and Anick, 1988; Pustejovsky, 1991]). He has tried to remedy this situation by introducing the concept of qualia structure. Qualia structure consists of four slots that each noun must fill:

Constitutive role: what is it made of?

Formal role: what picks it out from the other objects in the domain? (e.g., orientation, magnitude, shape, color, dimensionality)

Telic role: what is it used for? what is its purpose?

Agentive role: what brought this thing about? how was it made?

Each noun would have these four roles and each role would point to a concept in a sortal hierarchy. Thus, one can think of qualia structure as a way of linking a lexical item to the sortal hierarchy. Instead of simply having an is-a link, we have is-constitutive, is-formal, is-telic, and is-agentive links [Pustejovsky and Boguraev, 1993].

One use of such information would be to reduce the number of senses of verbs needed to get the right inferences. Consider the sentence in (15) from [Pustejovsky, 1991].

(15) Kim began a book

This sentence is ambiguous between meaning that Kim began to write a book and that Kim began reading a book. If we assume that begin can combine with either book's telic role or its agentive role, both of these readings fall out.

As with selectional restrictions, it seems that qualia structure provides a way to shortcut a more general domain knowledge inference chain. And as with selectional restrictions, such shortcuts are often specific to a single domain. For example, if the Kim in (15) were a book binder, then the sentence would have another reading. The fact that books are bound could be incorporated into the Agentive role; however, it seems unlikely that this reading should be present in other domains.
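A sketch of how a qualia entry could yield the two readings of (15): begin coerces its complement to an event, supplied by either the telic or the agentive quale. The slot fillers for book are illustrative, not a fixed inventory from the literature.

```python
# Illustrative qualia structure for "book".
QUALIA = {
    "book": {"constitutive": "text", "formal": "physical-object",
             "telic": "read", "agentive": "write"},
}

def begin_readings(noun, subject="Kim"):
    """'begin <noun>': coerce the noun to an event via its event-denoting
    qualia roles (telic = its purpose, agentive = how it comes about)."""
    q = QUALIA[noun]
    return [f"{subject} begins to {q[role]} the {noun}"
            for role in ("telic", "agentive")]

print(begin_readings("book"))
# ['Kim begins to read the book', 'Kim begins to write the book']
```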

Up until this point we have been discussing nouns that pick out one thing independent of anything else in the domain: the noun dog will pick out the dog in the domain regardless of what else is there. However, there are other nouns, known as relational nouns in the linguistic literature, that pick out different elements in the domain depending on other elements in the domain. In [Barker and Dowty, 1993] they are defined as follows.

In general, a relational noun is one such that an entity qualifies for membership in the extension of the noun only by virtue of there being a specific second entity which stands in a particular relation to the first, and where that relation is determined solely by the noun's lexical meaning.

One such noun is father; in some domains (where there is more than one father) you need to know who his children are. Some other nouns that are relational are scratch and cut (which are meaningless without the scratched or cut thing) and window and door (see [Pustejovsky and Anick, 1988; de Bruin and Scha, 1988; Barker and Dowty, 1993]). Such relational nouns can be important in some domains. For example, databases that have information about workers and managers or commanders and military units need to handle relational nouns (e.g., [de Bruin and Scha, 1988]). Another sort of relational noun is the eventive noun. Eventive nouns refer to events or situations; some examples are the attack, the hit, the speech. Eventive nouns relate an event to its participants in much the same way as a verb does: eventive nouns often take syntactic complements which map to semantic arguments (e.g., the attack on the town by the enemy). In fact, many eventive nouns are nominalizations: nouns derived from verbs.

Eventive nouns are important to NLU for the same reason verbal argument structure is: to find out who did what to whom (see [Dahl et al., 1987; Chinchor et al., 1993]). Although there has been little work on eventive nouns in NLG, it seems likely that a system that could use nominalizations, to refer back to previously discussed events for example, would be able to produce better text. The next section contains an example (16) where knowledge of a nominalization and its relation to its verbal base makes generation possible. Similarly, the MT task could benefit from such knowledge. For example, if an eventive noun in a source language does not have a corresponding noun in the target language, it would be important to be able to relate an eventive noun and its arguments to the appropriate verb.

2.5 Relations between words

The two most common semantic relationships between words are antonymy and synonymy. Another might be the relation between a verb and the name of someone who performs the action described by the verb habitually, e.g., sell/vendor, write/author. Or the relation between a verb and a corresponding eventive noun, e.g., kiss/kiss, perform/performance, move/movement.

Semantic relations such as antonymy, synonymy, and nominalization can be used to provide multiple paths to verbalization [Iordanskaja et al., 1991] within the NLG task.

The ability of a generation model to provide paraphrase alternatives is not just a measure of its grammatical and lexical coverage. It is an integral part of its ability to produce compact, unambiguous and stylistically satisfying text. Texts in typically complex domains must meet a large number of constraints or optimize over a large set of preference functions. A linguistically rich model can provide a basis for "paraphrasing around" problems which arise when the most direct lexical and syntactic realization path runs into conflicting constraints, or scores poorly on the aggregate preference rating. [Iordanskaja et al., 1991], p. 294


They give the following example (p. 307): Suppose that the verb attack has been chosen and the system attempts to generate (16a). It will find that there is a lexical gap in English: there is no adverb that magnifies the verb attack.

(16) a. A would-be penetrator attacked the system full-scale-ly(?)

b. A would-be penetrator mounted a full-scale attack on the system

However, if the system has access to the eventive noun corresponding to attackV, namely attackN, then it could produce the sentence in (16b). Presumably MT systems would also gain from such an ability.
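A sketch of the paraphrase move itself: when no adverb realizes the intensification of a verb, fall back to a support verb plus the corresponding eventive noun modified by an adjective. The two small tables, including the support verb mount, are illustrative assumptions in the spirit of lexical functions, not [Iordanskaja et al., 1991]'s actual machinery.

```python
# Illustrative lexical resources.
INTENSIFYING_ADVERB = {}                         # no adverb magnifies "attack"
EVENTIVE_NOUN = {"attack": ("mount", "attack")}  # verb -> (support verb, noun)

def intensified_vp(verb, obj):
    """Realize 'verb obj, intensified', paraphrasing around lexical gaps."""
    if verb in INTENSIFYING_ADVERB:              # direct path: verb + adverb
        return f"{verb} {obj} {INTENSIFYING_ADVERB[verb]}"
    support, noun = EVENTIVE_NOUN[verb]          # nominalization path
    return f"{support} a full-scale {noun} on {obj}"

print(intensified_vp("attack", "the system"))
# 'mount a full-scale attack on the system'
```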

For many insights on semantic relations between words, see the literature on lexical functions within Meaning Text Theory [Mel'čuk and Polguère, 1987; Haenelt and Wanner, 1992; Mel'čuk and Polguère, 1995].

2.6 General characteristics of the use of lexical semantic information

Consider the following observations about lexical semantic information.

" Lexical semantic information is often used in combination with other in-formation. For example, aspectual information is only one of the pieces ofinformation needed to determine the temporal ordering of events. Similarly,selectional restrictions must be combined with nominal lexical semantics andoften other information to do word sense disambiguation. In addition, lexicalchoice in NLG often involves many different types of information.

" Lexical semantic information is used for multiple things by the same system.For example, nominal information is used for word sense disambiguation andfor plan recognition in the TRAINS system and if the system had a full blownNLG system the information might also be used for lexical selection.

" Lexical semantic information often needs to "change its shape" to serve thegiven purpose. For example, the noun plumber might be classified as a humanbut a verb such as eat might restrict its subjects to animate objects. Theremust be a way to get from the fact that an object is human to the fact thatit is animate. A more complicated example is that from the aspectual classof make, the sentence the factory made the OJ a system might need to knowwhen the OJ came into existence; since make is telic, the OJ would comecompletely into existence at the end of the making event.

These observations support an important claim of this thesis: it is appropriate and beneficial to view the uses of lexical semantic information discussed in this chapter as applications of some form of deductive inference. The observations support this claim because (i) combining semantic information is what deductive inference is all about, (ii) deductive inferencing is a flexible, general mechanism and can be used by many systems, and (iii) deductive inferencing "changes the shape" of information.

It follows from this claim that the form of the information should lend itself to inference. This corollary is also supported by the fact that the systems with which NLP systems interface often perform inferencing themselves. For example, the output of natural language extraction systems is often used by inferencing systems [Palmer et al., 1986; Herzog and Rollinger, 1991; Gomez, 1994]. Another example is NLG systems that generate text from expert systems. In order to support deductive inference, it will be argued in the next chapter that an expressive denotational logic should be used to represent lexical semantic information.


3 Representing Lexical Semantic Information

It was argued in the previous chapter that natural language processing applications require extensive inferencing. Many of these inferences are based on the meaning of particular open class¹ words. Providing a representation that can support such lexically-based inferences is a primary concern of this chapter.

The representation language of first order logic (FOL) has well-understood semantics, and a multitude of inferencing systems have been implemented for it. Thus it is a prime candidate to serve as a lexical semantics representation. However, it will be argued that FOL, although a good starting point, needs to be extended before it can efficiently and concisely support all the lexically-based inferences needed.

Most lexical semantics representation systems utilize either KL-ONE-inspired terminological logics [Bobrow and Webber, 1980; Alshawi, 1987; Herzog and Rollinger, 1991; Kuhlen, 1983] or typed feature structure (TFS) logics [Copestake et al., 1993; Copestake and Briscoe, 1992]. Representationally, terminological logics are subsets of FOL [Nebel and Smolka, 1991; Schmolze and Israel, 1983; Brachman and Levesque, 1984; Schubert, 1990; Hayes, 1979], as are TFS logics [Nebel and Smolka, 1991; Kasper and Rounds, 1986; Johnson, 1987; Smolka, 1991].² Thus, it is suggested here that lexical semanticists interested in supporting lexically-based inferences need to look for ways to enrich their representational systems. This suggestion has been made elsewhere (see [Burkert and Forster, 1992; Pletat, 1991; Mercer, 1992]). However, the specific extensions suggested here are novel to the lexical semantics literature.

Most of the examples on which the discussion below is based come from dialogues collected for the TRAINS project. The dialogues were collected by having the role of the system played by a human (see [Gross, 1993]). As mentioned previously, the goal of the TRAINS project is to build a system that can assist a human manager who is attempting to solve a planning problem. A typical planning problem would be to deliver 1000 gallons of orange juice to a specific city by a certain time. To solve this problem the manager would be assisted by the system in scheduling the delivery of the oranges to the orange juice factory and the subsequent shipping of juice to the designated city. By relying mostly on examples taken from these dialogues, the relevance of the issues addressed to mundane, naturally occurring discourse is illustrated. Moreover, the dialogues provide a task-oriented context in which it is generally clear what inferences are required for understanding a given utterance; thus they provide a more constrained framework for semantic theorizing and experimentation than unrestricted texts or dialogues.

¹Open class words are those that are either adjectives, adverbs, nouns, or verbs. Closed class words are prepositions, determiners, conjunctions, etc.

²It should be noted that many of the systems that use TFS logics view their TFS representations as descriptions/shorthand for a more expressive semantic representation, not as the representation itself [Nerbonne, 1993; Copestake and Briscoe, 1992; Rupp et al., 1992]. The argument presented here is compatible with this position.

In addition to examples from the TRAINS dialogues, the extensions will also be supported by examples of lexical semantic information required to represent morphologically derived words.

Before particular representational extensions for lexically-based inferences can be introduced, a clearer idea of what are and what are not lexically-based inferences is needed. A lexically-based inference is one that depends on a lexical axiom. A lexical axiom is one that involves a semantic atom that is the translation of an open class word (assuming a meaning postulate approach). The following axiom is a lexical axiom; it links the verb enter with its result state (using the predicate modifier rstate).

(17) ∀x∀y[rstate(enter(x, y)) ⊃ contained-in(x, y)]

Using it, we could make a lexically-based inference from the boxcar entered the factory to the conclusion that the boxcar is in the factory. Furthermore, this inference is based on the word enter and not on the word boxcar. Such inferences can be contrasted with "structural" inferences such as in (18):

(18) there are at least three cities with orange juice factories and large train stations → there are at least three cities with orange juice factories

This inference depends on properties of certain classes of logical operators, specifically the class of upward monotone quantifiers [Barwise and Cooper, 1981] (e.g., at least three) and the conjunction operator, rather than on the lexical semantics of specific open-class words. Note that if one substitutes the downward monotone quantifier fewer than three for at least three, the inference no longer follows.


3.1 Extensions to FOL needed to support lexically-based inferences

This section introduces a number of extensions to FOL and provides examples that motivate them. More specifically, one should add

• restricted quantification and non-standard quantifiers,

• modal operators,

• predicate modification,

• and predicate nominalization.

These representational tools are available in some systems for sentence-level semantics [Hwang and Schubert, 1993a; Moore, 1981; Alshawi, 1989].

It should be noted at the outset that each example used to motivate an extension can be handled in FOL. However, the use of FOL leads to complex and unnatural paraphrases of intuitively simple facts, makes the encoded knowledge harder for system developers to comprehend and modify, and complicates inference. By adding a small amount of expressive power, concise and comprehensible representations can be given which facilitate efficient inferencing.

In our examples, direct and indirect motivation of specific extensions to FOL are distinguished; i.e., a lexical item may directly correspond to a type of operator (such as a predicate modifier) unavailable in FOL, or it may indirectly involve nonstandard operators through its axiomatization.

3.1.1 Non-standard quantifiers and restricted quantification

For this extension there seems to be little evidence from the TRAINS dialogues or derivational morphology. However, non-standard quantifiers and restricted quantification can be motivated from a more general perspective and are commonly used for representing sentence-level semantics.

Previously, when circumscribing lexically-based inferences, upward monotone quantifiers were mentioned. These include at least three (see (18)), all, a few, most, etc. Such examples motivate the augmentation of FOL with corresponding non-standard quantifiers; and the nominals with which they combine (as in at least three cities with orange juice factories) motivate the inclusion of formulas restricting the domains of the quantified variables.³ By utilizing these extensions, the following axiom enables inferences like the one in (18) to be made efficiently (i.e., in one step).

³A number of terminological logics are sorted logics (e.g., [Pletat, 1991]). In sorted logics, the domain of a variable is restricted by the variable's sort. Representationally, this is much like restricted quantification, though more limited.

(19) For all upward monotone quantifiers Q and all predicates P1, P2, and P3:
     Q x: P1(x) [P2(x) ∧ P3(x)] ⊃ Q x: P1(x) [P2(x)]

Note that no reasoning about cardinality is required. This is in contrast to FOL: assuming a finite domain, something like the logical form in (20) for there are at least three cities with orange juice factories and large train stations and the axioms in (21) for cardinality would be required to support the information in FOL. In addition, multiple reasoning steps would be required.

(20) ∃x[card(x) ≥ 3 ∧ ∀y[in(y, x) ⊃ [city(y) ∧
         ∃z[have(y, z) ∧ ojfactory(z)] ∧
         ∃z1[have(y, z1) ∧ trainstation(z1) ∧ large(z1)]]]]

(21) a. ∀x [(card(x) = 0) ↔ ¬(∃y in(y, x))]

     b. ∀x [(card(x) = 1) ↔ [∃y [in(y, x) ∧ ∀z[in(z, x) ⊃ z = y]]]]

     c. ∀x, y [(card(x) = y ∧ y > 0) ↔ ∃z[card(z) = 1 ∧ card(x − z) + 1 = card(x)]]
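To make the gain concrete, here is a minimal sketch (not from the thesis) of axiom (19) as a one-step symbolic rewrite, with formulas encoded as nested tuples; the quantifier names and predicate labels are invented for illustration.

    # Sketch: axiom (19) as a one-step rewrite on restricted-quantifier
    # formulas, represented as tuples (quantifier, variable, restrictor, body).
    UPWARD_MONOTONE = {"at-least-three", "all", "a-few", "most"}

    def weaken_conjunct(formula):
        """One-step rewrite: Q x: P1 [P2 and P3]  =>  Q x: P1 [P2]."""
        quant, var, restrictor, body = formula
        if quant in UPWARD_MONOTONE and body[0] == "and":
            return (quant, var, restrictor, body[1])  # drop the second conjunct
        return formula

    premise = ("at-least-three", "x", ("city", "x"),
               ("and", ("has-ojfactory", "x"), ("has-large-station", "x")))
    print(weaken_conjunct(premise))
    # ('at-least-three', 'x', ('city', 'x'), ('has-ojfactory', 'x'))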

As a direct motivation for a nonstandard quantifier syntax, the above argument pertains only to the closed category of determiners (in combination with certain adverbs and numeral adjectives). However, this syntax will also simplify the axiomatization of many open-class words. Consider the constructed example (22) and assume that the system has in its knowledge base that of ten cars, seven are tankers.

(22) M: Are the majority of the cars tankers?

It should be able to answer affirmatively. By using the axiom below and a suitable treatment of conjunction, the system could make use of the axioms for upward monotone quantifiers.

(23) ∀a, b [majority(a, b) ⊃ Most z : [z ∈ a] [z ∈ b]]

In the above axiom we assume that a and b denote collections and that ∈ has been appropriately axiomatized.

Some other words that would benefit from non-standard quantifiers and restricted quantification are scarce, rare, minority, scant, and predominate. It also seems likely that degree adjectives such as expensive, difficult, or intelligent will require axiomatizations involving nonstandard, restricted quantifiers. For example, a difficult problem in the TRAINS domain is one that exacts more time and effort from the problem solver(s) than most problems in this domain. Similarly, dispositional adjectives such as perishable or fragile and frequency adverbs such as usually also call for restricted non-standard quantification in their axiomatization, since they require a quantification over events of a certain type that is neither existential nor universal.

3.1.2 Modal operators (or modal predicates)

Standard FOL has difficulty representing necessity, possibility, and propositional attitudes. For example, it is not possible, directly, to differentiate a closed formula that could be true from one that could never be true. There is no built-in conception of a possible world, nor is it possible to apply a predicate to a closed formula, since a closed formula does not denote an individual in the ontology (it denotes a truth value). One can make worlds and propositions individuals of the ontology which correspond to terms in the logic, and then axiomatize the relations between worlds, between propositions, and between closed formulas and propositions. However, this approach increases the complexity of both the ontology and the axiom set, and thus makes it harder to maintain the system and to deduce things with it.

Modal logics were introduced to remedy this situation. They include operators whose interpretation involves possible worlds, and thus move the complexity out of the logic and syntactic proof system and into the model theory and interpretation function. A model structure with either possible worlds (e.g., [Montague, 1974]) or situations (e.g., [Barwise and Perry, 1983]) is required. Since the model theory for modal logics is well understood, this move is a win. For example, possibility can be encoded as an operator that applies to closed formulas and is interpreted as true if there exists a world accessible from the current one in which the formula is true.
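As an illustration only, the following toy sketch interprets such a possibility operator over an explicit set of worlds and an accessibility relation; the worlds, relation, and facts are all invented.

    # Toy Kripke-style evaluation of "possibly"; not a real reasoner.
    accessible = {"w0": {"w1", "w2"}, "w1": set(), "w2": set()}
    facts = {"w0": set(), "w1": {"oj-already-produced"}, "w2": set()}

    def possibly(prop, w):
        """True iff prop holds in some world accessible from w."""
        return any(prop in facts[v] for v in accessible[w])

    print(possibly("oj-already-produced", "w0"))  # True: "maybe we'll get lucky"
    print(possibly("oj-already-produced", "w1"))  # False: no accessible world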

Examples like (24) and (25) involve adverbs that are most naturally viewed as modal operators:

(24) M: That will probably work

(25) M: Maybe we'll get lucky again

In its context of occurrence, the second sentence refers to the possibility that all the orange juice needed for certain deliveries already exists, obviating the need for orange juice production. It would clearly be hazardous for the system to ignore the adverbs, turning wishful thinking into fact.

Such examples provide direct motivation for allowing modal operators in lexical semantics. The argument is weakened by the fact that modal adverbs are somewhat marginal as an open class of lexical items; but we can also argue from adjectives such as reasonable, reliable, correct, and right, and verbs such as found out that ..., said that ..., would like to ..., make sure ..., trying to ..., wonder if ..., believe, and assume. Further comments are restricted here to some observed uses of correct and reliable. For instance, in the following request for confirmation, the system should interpret correct as applying to the proposition that the time is 2 pm:

(26) M: The time is two pm - is that correct?

Now if the system believes that the time is indeed 2 pm, it should infer an affirmative answer to the question, i.e., that it is correct that the time is 2 pm. Thus, for the relevant sense of correct, the lexical semantics should tell the system that for any closed formula φ,

(27) correct(φ) ≡ φ

If we adopt such a schema for the meaning of correct, we are treating it as a modal operator, since it is being applied to a closed formula, not a term. An alternative is to assume that correct is a predicate, but one that applies to reified propositions. In turn, such an approach calls for the introduction of a reifying operator (such as that) for converting sentence contents (propositions) into individuals, allowing their use as predicate arguments. In either case, we are introducing a modal extension to FOL. The case of reliable is similar but more subtle. In actually occurring examples this property is often ascribed to items of information:

(28) M: That's reliable information

Intuitively, reliable information is not necessarily correct, though it is necessarily well-founded (i.e., there are good reasons for the presumption of truth). So the axiomatization is less trivial than (27), but it still calls for use of a modal operator or modal predicate in the same way.

Concerning indirect motivation for modals, an interesting example is compatible (with), as used below.

(29) M: So that sounds like a good temporary plan - let's see if it's compatible with our next objective here, which is to deliver a boxcar of bananas to Corning

In order for the plan in (29) to be compatible with the additional banana delivery, it must be possible to realize both action types (within the given temporal and other constraints). In general,


(30) ∀x, y[[action-type(x) ∧ action-type(y) ∧ compatible-with(x, y)] ⊃
         ◇∃x', y'[realize(x', x) ∧ realize(y', y)]]

Here ◇φ entails that there exists a situation or possible world sufficiently connected to the current one where φ is true.

The suffix -able also provides indirect evidence for modal operators. An adjective derived using -able from a verb entails that it is possible to perform the action denoted by the verb, e.g., something is enforceable if it is possible that something can enforce it [Dowty, 1979]. Dowty uses the ◇ operator in his treatment of the semantics of the suffix -able, as shown below.

(31) For adjectival predicates Padj derived from verbs Pv using -able:
     ∀x[Padj(x) ⊃ ◇∃y[Pv(y, x)]]

3.1.3 Predicate modification

By predicate modification is meant the transformation of one predicate into another. Within a general setting for language understanding, the case for allowing predicate-modifying operators is most easily made by pointing to non-intersective attributive adjectives such as former, cancelled, fake, supposed, simulated, or fictitious. For instance, applying cancelled to an event nominal such as trip yields a new predicate which is not true of actual trips, and so should not be analyzed as cancelled(x) ∧ trip(x), the straightforward FOL encoding. An FOL treatment that fares better is to reify predicates corresponding to nouns and then use a predicate such as hasprop to relate such properties to objects that have them: hasprop(Trip, x). Adjectives such as cancelled would then be functions from terms to terms: hasprop(Cancelled(Trip), x). Axioms such as (32) would partially define such functions.

(32) ∀x, e [hasprop(Cancelled(x), e) ⊃ ¬hasprop(Actual(x), e)]

However, this treatment also has problems. Consider the pair of sentences below.

(33) a. Kim wrestled skillfully with Pat

b. Pat wrestled awkwardly with Kim

Using this treatment, the wrestling event would be both skillful and awkward at the same time. A workaround also exists for this problem but introduces further complexity. In contrast, treating modifiers like skillfully as predicate modifiers produces a more straightforward solution: Skillfully(wrestled-with)(Pat)(Kim) vs. Awkwardly(wrestled-with)(Kim)(Pat).
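To see how this works computationally, here is a small illustrative sketch (not the thesis's EL machinery) that treats modifiers as higher-order functions from predicates to predicates, mirroring the curried notation above; the fact sets are invented.

    # A modifier maps a predicate to a new predicate; the curried shape
    # mirrors Skillfully(wrestled-with)(Pat)(Kim) from the text.
    def skillfully(pred):
        return lambda obj: lambda subj: (subj, pred, obj) in skillful_facts

    def awkwardly(pred):
        return lambda obj: lambda subj: (subj, pred, obj) in awkward_facts

    skillful_facts = {("Kim", "wrestled-with", "Pat")}
    awkward_facts = {("Pat", "wrestled-with", "Kim")}

    print(skillfully("wrestled-with")("Pat")("Kim"))  # True
    print(awkwardly("wrestled-with")("Kim")("Pat"))   # True: no contradiction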


Such adjectives and adverbs do not occur in the TRAINS dialogues collected so far. However, verbs such as make, get, look, sound, seem, begin, and construct do occur and also provide direct motivation for predicate modifiers.

For instance, the dialogues contain instances where the manager asks

(34) M: Does that sound reasonable?

(referring to a plan), or comments

(35) M: Problem number two looks difficult.

A plan can sound reasonable even if more careful analysis reveals it to be unreasonable. So the system should realize that an affirmative response to the query merely requires the absence of obvious flaws in the plan (detectable with limited inferential effort), rather than an actual proof of correctness.

One could attempt to handle such locutions by decomposing them into more complex modal patterns; e.g., x sounds P, for P a predicate, might be decomposed into something like "when one considers x, one (initially) feels that x is P". This is precisely the strategy that has often been suggested for intensional verbs such as seeks. But while plausible definitions (decompositions) exist for some intensional verbs, they are very difficult to contrive for ones like resemble (as in the one-horned goat resembled a unicorn) or imagine. A more general, straightforward approach is to add predicate modifiers to FOL. Thus the translation of sounds (when it takes an adjectival complement) would be a predicate modifier, whose meaning is constrained by axioms like the following:

(36) For all monadic predicates P:
     ∀x[sounds(P)(x) ⊃ ∀s, t[person(s) ∧ consider(s, x, t) ⊃ feel-that(s, P(x), end-of(t))]]

where various subtleties are being neglected for the sake of exposition.⁴ Thus, to answer (34), the system would make use of (36) to infer that it need only "consider" the plan in question until it "feels-that" (i.e., tentatively concludes that) the plan is reasonable or otherwise.

A third class of examples also directly motivates predicate modifiers, namely certain VP adverbs such as almost, nearly, and apparently. Again, these do not appear (as yet) in the TRAINS corpus, but are common in other corpora. For example, the "pear stories" of Chafe [Chafe, 1980] contain examples such as "[the boy on the bicycle] almost ran into a girl", where the desired inference is that he did not run into her, but came very close to her.

⁴This is, of course, the Montagovian approach, though Montague's intension operator is being dispensed with (writing sounds(P)(x) rather than sounds(^P)(x)) by relying on a slight departure from standard intensional semantics that treats the world (or situation) argument as the last, rather than first, argument of the semantic value of a predicate [Hwang and Schubert, 1993b].

The prefixes co-, pre-, post-, and mid- can also be handled straightforwardly by representing them as predicate modifiers. Some examples are co-captain, co-lead, pre-primary, pre-payment, mid-April, and mid-continent. In addition, many verbal affixes have meanings that are much easier to axiomatize when predicate modifiers are available. For example, the suffix -ize produces verbs whose result state is the adjectival base, e.g., the result of formalizing something is that it is formal. The axiom for this information is listed in (37).

(37) For predicates P produced by -ize:
     ∀y[rstate(P)(y) ⊃ base(P)(y)]

The predicate modifier rstate produces the result state that holds at the end of events described by the verb, and the predicate modifier base produces the adjectival base predicate from the derived verb predicate.

3.1.4 Predicate nominalization

In natural language, many types of expressions can be turned into noun-like entities. For example, in (38b), that turns the statement the car was blue into something that can play the same role as car in (38a).

(38) a. the car made Mary happy

b. that the car was blue made Mary happy

This section will argue that FOL should be extended with operators that nominalize predicates. Such operators form terms (denoting individuals in the domain of discourse) from predicates (denoting classes of objects, actions, or events). In other words, predicate nominalization involves the reification of properties (including kinds/species, action types, and event types) by application of nominalizing (reifying) operators.

The line of argument for utilizing such operators for representing lexical semantics is less direct than for the previous extensions. It is claimed that (i) many words denote predicates with one or more arguments ranging over kinds of things, properties, and actions/events (this already came up incidentally in (29)); and (ii) the lexical axioms for these predicates will either explicitly involve nominalized predicates or require the substitution of nominalized predicates when used for inference.

As examples of a variety of lexical predicates over (reified) action types, consider the italicized words in the following TRAINS excerpts. Corresponding action-type arguments, where these are explicitly present, have been underlined.


(39) M: What is the best way for me to accomplish my task ...

(40) S: That's a little beyond my abilities

(41) S: The way it's going to work is, engine E2 is going to go to city E ...

(42) S: Our current plan is to fill ... tankers T3 and T4 with beer ...

S: One other suggestion would be that you take the other tanker which isn't being used ...

(43) M: That's not gonna work

(44) S: Well that will delay departure

(45) S: Right, we can begin production ...

(46) M: ... send it off on a particular route and do it several times

Way, task, plan, and suggestion as used in (39)-(42) are predicates over types of actions or events, as the underlined arguments confirm. For instance, the action descriptions underlined in (41) and (42) do not refer to particular future actions at particular times, but to types of actions whose eventual realization is hoped to solve the problem at hand. (And the ability deictically referred to in (40) is the ability to specify the best way for the manager to accomplish the current task in (39), again an action type.) Similarly, (43)-(46) illustrate verbs whose subject or object ranges over action types. Note for instance in (44) that a particular departure event cannot be delayed: particular events have fixed times of occurrence, but event types in general do not. Likewise in (46), only an action type, not a particular action, can be done "several times".

Similarly, the following excerpts contain predicates over kinds/species, again with corresponding arguments underlined:

(47) M: One boxcar of oranges is enough to produce the required amount of orange juice

(48) M: And fill two tankers with beer


(49) M: There's [an] unlimited source of malt and hops ...

Note that in (47) the phrase predicated by enough refers to a kind of load or quantity, not to any particular load. Similarly, the underlined objects of fill with and source of in (48) and (49) are kinds of stuff, not particular realizations of them. (In fact, no particular batch of malt and hops could be "unlimited".)

Turning to the second step of the argument, concerning the explicit occurrence of nominalization operators in argument positions of predicates like those above, consider the sense of do with an action type as object, as in (46). To understand (46), the system will have to substitute a term for the action type, "send [the train] off on a particular route", for the pronoun. To infer any further consequences, it will need a meaning postulate something like the following:

(50) For all monadic action predicates P:
     ∀x[do(Ka(P))(x) ⊃ P(x)]

where Ka reifies an action predicate (in this case, "send [the train] off on a particular route"). It can then apply semantic and world knowledge about P to draw conclusions about the effects of P(x) (in this case, that the train will follow the route in question and reach its destination).

3.2 Conclusion

Many NLP tasks require lexicons that can support extensive lexically-based inferencing. In order to support these inferences, representations for lexical semantics will have to be richer than they are now. This chapter has motivated four particular extensions to FOL using examples drawn from actual dialogues in the TRAINS domain. Although these extensions are not new, their motivation based on the semantics of open class words is.

These extensions are a subset of the extensions that are available in Episodic Logic (EL) [Hwang and Schubert, 1993a; Hwang and Schubert, 1993b], a logic designed to be expressively and inferentially adequate both as a logical form for natural language and as a general representation for commonsense knowledge. Similar formalisms are those used in the Core Language Engine [Alshawi, 1987; Alshawi, 1989] and Nerbonne and Laubsch's NLL [Laubsch and Nerbonne, 1991; Nerbonne, 1992]. EL is an intensional, situational extension of FOL that provides a systematic syntax and formal semantics for sentence and predicate nominalization (reification), sentence and predicate modification, nonstandard restricted quantifiers, λ-abstraction, and pairing of arbitrary sentences with situation- (episode-) denoting terms (where those sentences are interpreted as describing or characterizing those situations).


Inference in EL has been shown to be practical through the EPILOG implementation [Schaeffer et al., 1993], with examples ranging from fairy tale fragments and aircraft maintenance reports [Hwang and Schubert, 1993a; Hwang and Schubert, 1993b] to the Steamroller theorem-proving problem. In addition, EL has been used as the front-end logical form in the TRAINS system [Allen et al., 1994]. A conclusion from the text understanding experiments is that increased expressiveness often simplifies inference, allowing conclusions to be drawn in one or two steps that would require numerous steps in an FOL "reduction" of the same information.


4 Morphological Cues for Lexical Semantic Information

The first two chapters of this thesis discussed what fine-grained lexical semantic information is and why it is needed. The representation of lexical semantic information was discussed in the previous chapter. This chapter discusses the acquisition of such information and describes a method based on morphological cues. The chapter starts by discussing three approaches to acquisition that have been debated in the human language acquisition literature: perceptual cueing, surface cueing, and language semantics cueing. It then discusses these approaches from a computational linguistics perspective. Finally, morphological cueing, a type of surface cueing, is introduced. Results derived from an implementation are presented and the method is evaluated on the basis of these results.

4.1 Three approaches to the acquisition of lexical semantic information

In recent years there has been a fruitful dialectic between the supporters of perceptual cueing of verb semantics and those supporting surface cueing of verb semantics. [Grimshaw, 1981; Pinker, 1989; Pinker, 1994] advocate perceptual cueing and [Gleitman, 1990; Fisher et al., 1991; Naigles et al., 1989] surface cueing.¹ This dialectic will be used as a vehicle to introduce the different approaches.

Perceptual cueing is based on the intuitively appealing idea that humans acquire the meanings of words by relating them to semantic representations resulting from perceptual or cognitive processing. For example, in a situation where the father says Kim is throwing the ball and points at Kim, who is throwing the ball, a child might be able to learn what throw and ball mean.

However, [Gleitman, 1990] argues that a language learner whose sole learning mechanism is perceptual cueing will have difficulty learning the meaning of verbs because of the following:

¹Many of the articles relevant to this debate are collected in [Gleitman and Landau, 1994].


1. The event being referred to by the verb is often not present when the verb is uttered. The results of experiments suggest that it is often the case that a verb is uttered when the participants of the event described by the verb are not physically present.

2. There is an overabundance of salient features of the world that could be part of the semantics of an uttered verb.

"For instance, over the discourse as a whole, probably the motherhas different aims in mind when she tells the child to 'look at' someobject than when she tells her to 'hold' or 'give' it. The child couldcode the observed world for these perceived aims and enter theseproperties as aspects of the words' meanings. But also the mothermay be angry or distant or lying down or eating lunch and the objectin motion may be furry or alive or large or slimy or hot, an the childmay code for these properties of the situation as well, entering themtoo, as facets of the words' meanings." ([Gleitman, 1990], p. 10)

3. There is an overabundance of events in any given situation. Using an example from [Gleitman, 1990], whenever someone pushes a toy truck, that truck is also moving. Thus there is the pushing event and the moving event, and cross-situational observation will be difficult to use. This problem has two especially bad cases.

(a) There are events that always occur together and depend on the perspective of the observer, such as buying and selling, fleeing and chasing, and winning and beating (these examples were taken from [Gleitman, 1990]). Cross-situational observation will not help, since in every situation where there is buying, there is selling.

(b) Some events are more specific than others: whenever someone taps something she also touches it. If the learner chooses "touch" as the meaning of tap, then cross-situational observation will not help correct this mistake, since in every situation where there is tapping, there is touching.

4. Some verbs, such as think and doubt, refer to events that are not directly observable, and thus a learner basing its meanings on the observed alone cannot acquire these verbs. In Gleitman's words, "If the child is to learn the meanings from perceptual discrimination in the real world, the primitive vocabulary of infant perception has to be pretty narrow to bring the number and variety of data storing and manipulative procedures under control. But no such narrow vocabulary of perception could possibly select the thinkingness properties from events. I conclude that an unaided observation-based verb-learning theory is untenable because it could not acquire think" (Gleitman's emphasis) ([Gleitman, 1990] p. 18).


To help the perceptual semantic cueing mechanism produce better results, Gleitman suggests that a form of surface cueing be used to restrict the events and features to which the learner attends. In her own words, "children's sophisticated perceptual conceptual capacities yield a good many possibilities for interpreting a scene, but the syntax acts as a kind of mental zoom lens for fixing on the interpretation, among these possible ones, that the speaker is expressing" ([Gleitman, 1990], p. 21). More specifically, she suggests that the subcategorization frames of verbs correspond to "... certain semantic elements such as 'cause,' 'transfer,' 'symmetry,' and 'cognition' ..." ([Gleitman, 1990] p. 29) and that this correspondence can be used to acquire lexical semantic information about verbs. This is just the information needed to reduce the search space for the perceptual cueing mechanism.

She refers to this approach as "syntactic bootstrapping," but in this thesis it will be referred to as syntactic cueing, a form of surface cueing. Surface cueing does not require any semantic information related to perceptual input or the input utterance. Instead it makes use of fixed correspondences between surface characteristics of language input and lexical semantic information: surface characteristics serve as cues for lexical semantic information. For example, if a verb is seen with a noun phrase subject and a sentential complement, it often has semantics involving spatial perception and cognition, e.g., believe, think, worry, and see [Fisher et al., 1991; Gleitman, 1990].

[Pinker, 1994] replies that although syntactic cueing could be used to zoom in on relevant aspects of a situation to which an utterance corresponds, the perceptual semantic cueing mechanism does not require this information. In addition, he claims that the semantic information provided by syntactic cueing is peripheral and only applies to the use of the verb in the particular frame from which the information was derived.

"I conclude that attention to a verb's syntactic frame can help nar-row down the child's interpretation of the perspective meaning of theverb in that frame, but disagree with the claim that there is some in-principle limitation in learning a verb's content from its situations of usethat could only be resolved by using the verb's set of subcategorizationframes." ([Pinker, 1994], p. 379)

[Pinker, 1994] specifically addresses each of [Gleitman, 1990]'s arguments listed above. Only some of his major points are presented here. One point he makes is that perceptual cueing involves using semantic representations produced by both perceptual processing and cognitive processing. Thus, one would not expect the learner to attend only to events that are perceivable at the time of the utterance. Instead, all of the past, present, and possible events conceivable by the learner for both the physical and the mental "world" could be used as a basis for deriving lexical semantic information for an uttered verb. This point addresses Gleitman's concerns about the fact that a verb is often uttered when the participants of the event described by the verb are not physically present. In addition, it addresses the problems [Gleitman, 1990] claims perceptual cueing has with words like believe.

Another point Pinker makes is that comparing and contrasting the situations in which a verb has been heard does provide, in principle, enough information to tease apart the meanings of verbs like push/move, buy/sell, and touch/tap. This point is based on two claims for which there is a fair amount of supporting data: language learners have a very rich representational system, and different verbs never have exactly the same semantics (no true synonyms). Thus, since the learner is always trying to give different verbs different meanings, the learner will attend to different usages. To use one of Pinker's examples, one can buy a soda from a coke machine even though the machine did not sell it. Using these sorts of situations and the assumption that buy and sell mean different things, the learner could tease apart their meanings.

While arguing against a particular set of experiments designed to support the argument for surface cueing, [Pinker, 1994] discusses the third approach to the acquisition of lexical semantics that will be discussed in this section. This approach is based on the idea that the semantics of an unknown word can be derived from the elements of the utterance that are known by the learner. Consider the following example.

"... if someone were to hear I glipped the paper to shreds or I filped thedelicious sandwich and now I'm full, presumably he or she could figureout that glip means something like 'tear' and filp means something like'eat'. But although these inferences are highly specific and accurate, nothanks are due to the verbs' syntactic frames (in this case, transitive).Rather, we know what those verbs mean because of the semantics ofpaper, shreds, sandwich, delicious, full, and the partial syntactic analysisthat links them together (partial, because it can proceed in the absenceof knowledge of the specific subcategorization requirements of the verb,which is the data source appealed to by Gleitman)." ([Pinker, 1994] p.382)

This approach will be referred to here as language semantics cueing. Pinker notes that in some cases what looks like syntactic cueing may in fact be language semantics cueing.

To summarize, three approaches have been introduced: perceptual cueing, surface cueing, and language semantics cueing. These approaches differ in their input: surface cueing takes non-semantic representations derived by language processing, perceptual cueing takes semantic representations derived by perceptual and/or cognitive processing along with the corresponding uttered verb, and language semantics cueing takes semantic representations derived directly from the utterance.

4.2 A computational linguistics perspective

It seems likely that perceptual cueing is the central method human children use to acquire lexical semantics. However, key modules that would be needed for a large-scale computer implementation are currently lacking, namely perceptual and conceptual processing modules complex enough to produce the semantic information perceptual cueing requires as input. There does not yet exist a robot that can interact with and reason about the world with the complexity required. Accordingly, the research projects that fall into this class [Pustejovsky, 1988; Siskind, 1990; Webster and Marcus, 1989] do not claim to provide a method for acquiring a large NLP lexicon. Instead they are meant as proof-of-concept studies with an aim at explaining human language acquisition.

Language semantics cueing is more promising from a computational perspective and consequently has a long (for computational linguistics) history. The acquisition systems described in [Granger, 1977; Selfridge, 1980] use scripts [Schank and Abelson, 1977] to represent the semantics of a passage, use various heuristics to fit an unknown word into a slot of the script, and then use the semantic restrictions corresponding to the slot to infer information about the word. For example, if the word garcon was used in a passage that could be matched to a restaurant script, and garcon could be fit into the slot for the taker of the order, then one could infer that a garcon is a person. [Berwick, 1983] presents a system that is similar but uses a terminological logic representation system. More recently, [Hastings, 1994] presents a system that uses verb selectional restrictions to assign unknown nouns to noun classes, and noun semantic classes to assign unknown verbs selectional restrictions. For example, to bomb restricts its direct object to be a Human-or-Place, and thus if an unknown word is encountered as the direct object of to bomb, then the system can deduce that it is a member of the Human-or-Place class and possibly a subclass. The constraints would work in the opposite direction if the noun were known and the verb unknown. Such systems are able to acquire very fine-grained lexical semantic information. However, a central problem for these approaches is that they rely on having lexical semantic information to begin with. If they do not have enough seed information, then they do not have enough constraints to effectively infer information for unknown words.

The most attractive feature of surface cueing is that it does not, at least directly, rely on any lexical semantic information. Often the surface cues can be identified relatively easily. For example, [Hearst, 1992] describes a system that searches for syntactic patterns to find semantic relations. One pattern the system uses is NP {, NP}* {,} and other NP, and this pattern cues a hyponym relationship between the initial noun phrases and the final one: temples, treasuries, and other important civic buildings. Note that the lexical semantic information is inferred from a single token. [Pustejovsky et al.; Jacobs and Zernik, 1988; Dorr and Lee, 1992; Velardi et al., 1991; Andry et al., 1995] describe similar systems.
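For concreteness, here is a minimal sketch of how such a pattern might be matched; the NP-bracketing convention and the chunked input string are invented, since Hearst's actual system is not reproduced here.

    import re

    # Sketch: finding "NP {, NP}* {,} and other NP" hyponym cues in text
    # whose noun phrases a chunker has already bracketed as [NP ...].
    chunked = "[NP temples] , [NP treasuries] , and other [NP important civic buildings]"

    np = r"\[NP ([^]]+)\]"
    pattern = re.compile(np + r"(?: , " + np + r")* ,? and other " + np)

    m = pattern.search(chunked)
    if m:
        # Note: this simple regex keeps only the last of the middle NPs.
        hyponyms = [g for g in m.groups()[:-1] if g]
        hypernym = m.groups()[-1]
        print(hyponyms, "are kinds of", hypernym)
    # ['temples', 'treasuries'] are kinds of important civic buildings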

Another type of surface cueing does not base its inferences on any single token. Instead it makes inferences based on statistical measures of distributional similarity. The methods differ in the information on which they base their measures of distributional similarity and in the statistical measure they employ. One method, described in [Hindle, 1990], bases its measure of distributional similarity on syntactic predicate-argument relations and employs mutual information as a statistical measure. Thus two nouns are judged similar if they are likely to occur (or not occur) as direct objects of the same verbs. More accurately, the comparison of nouns is based on the mutual information between verbs and nouns, where mutual information corresponds roughly to the amount that the presence of one word affects the probability of the other. Thus, drink and beer have high mutual information in most corpora. One set of nouns that had a high similarity in Hindle's tests was boat, ship, plane, bus, jet, vessel, truck, helicopter, ferry, and man. Some of the verbs that had high mutual information measures with these nouns were hijack, charter, intercept, and park. Some methods that use predicate-argument structure but different statistical measures are [Rooth, 1995], which uses a latent class language model and Baum-Welch reestimation to set the parameters, [Pereira et al., 1993], which uses deterministic annealing to cluster words, and [Grishman and Sterling, 1993], which uses co-occurrence smoothing. [Schuetze, 1992a] describes a method that bases its measure of distributional similarity on co-occurrence within a sentence, applies singular value decomposition to the co-occurrence matrix, and measures the similarity of the resulting vectors.
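The following toy sketch computes a Hindle-style pointwise mutual information score from verb/direct-object pairs; the counts are invented, and a real system would extract the pairs with a parser.

    import math
    from collections import Counter

    pairs = ([("drink", "beer")] * 8 + [("drink", "water")] * 4 +
             [("park", "truck")] * 5 + [("drink", "truck")] * 1)

    pair_n = Counter(pairs)
    verb_n = Counter(v for v, _ in pairs)
    noun_n = Counter(n for _, n in pairs)
    total = len(pairs)

    def pmi(verb, noun):
        """log of P(verb, noun) / (P(verb) * P(noun)), estimated from counts."""
        return math.log((pair_n[(verb, noun)] / total) /
                        ((verb_n[verb] / total) * (noun_n[noun] / total)))

    print(pmi("drink", "beer") > pmi("drink", "truck"))   # True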

Whether statistical or non-statistical, the main advantage of the surface cueing approach is that the required input is currently available: there is an ever-increasing supply of online text, which can be automatically part-of-speech tagged, assigned shallow syntactic structure by robust partial parsing systems, and morphologically analyzed, all without any prior lexical semantics. It is this method that will be pursued further here.

A possible disadvantage of surface cueing is that surface cues for a particular piece of lexical semantics might be difficult to uncover, or they might not exist at all. In addition, the cues might not be present for the words of interest. Thus, it is an empirical question what subset of the needed lexical semantic information corresponds to easily identifiable, abundant surface cues. The rest of this chapter explores the possibility of using derivational affixes as surface cues for lexical semantics.

39

4.3 Morphological cueing

Many derivational affixes apply only to bases with certain semantic characteristics and produce only derived forms with certain semantic characteristics. For example, the verbal prefix un- applies to telic verbs and produces telic derived forms. Thus, it is possible to use un- as a cue for telicity. By searching a sufficiently large corpus we should be able to identify a number of telic verbs. Examples from the Brown corpus include clasp, coil, fasten, lace, and screw.

A more implementation-oriented description of the process is the following: (i) collect a large corpus of text, (ii) tag it with part-of-speech tags, (iii) morphologically analyze its words, (iv) assign word senses to the base and the derived forms of these analyses, and (v) use this morphological structure to assign semantics to both the base senses and the derived form senses. Tagging the corpus is necessary to make word sense disambiguation and morphological analysis easier. Word sense disambiguation is necessary because one needs to know which sense of the base is involved in a particular derived form, more specifically, to which sense one should assign the feature cued by the affix. For example, stress can be either a noun (the stress on the third syllable) or a verb (the advisor stressed the importance of finishing quickly). Since the suffix -ful applies to nominal bases, only a noun reading is possible as the stem of stressful, and thus one would attach the lexical semantics cued by -ful to the noun sense. However, stress has multiple readings even as a noun: it also has the reading exemplified by the new parent was under a lot of stress. Only this reading is possible for stressful.
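A runnable toy version of these steps might look as follows; it collapses morphological analysis into naive suffix stripping, uses an invented one-entry affix table, and ignores word sense disambiguation, so it is a sketch of the control flow rather than the actual implementation.

    # suffix -> (feature cued for the base, feature cued for the derived form)
    AFFIX_CUES = {"ful": ("ABST", "LESSANT")}

    def analyze(word):
        """Toy morphological analysis: strip a known suffix if one fits."""
        for suffix in AFFIX_CUES:
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                yield word[: -len(suffix)], suffix

    def acquire(tagged_tokens):
        lexicon = {}
        for word, tag in tagged_tokens:          # corpus is already POS-tagged
            for base, suffix in analyze(word):
                base_feat, derived_feat = AFFIX_CUES[suffix]
                lexicon.setdefault(base, set()).add(base_feat)
                lexicon.setdefault(word, set()).add(derived_feat)
        return lexicon

    print(acquire([("stressful", "JJ"), ("train", "NN")]))
    # {'stress': {'ABST'}, 'stressful': {'LESSANT'}}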

In order to produce the results presented in the next section, the above steps were performed as follows. The Penn Treebank version of the Brown corpus [Marcus et al., 1993] served as the corpus. Only its words and part-of-speech tags were utilized. Although these tags were corrected by hand, part-of-speech tagging can be performed automatically with an error rate of 3 to 4 percent [Merialdo, 1994; Brill, 1994]. The Alvey morphological analyzer [Ritchie et al., 1992b] was used to assign morphological structure. It uses a lexicon with just over 62,000 entries. This lexicon was derived from a machine-readable dictionary but contains no semantic information. No word sense disambiguation was performed for the bases and derived forms. However, there exist systems for word sense disambiguation which do not require explicit lexical semantic information [Yarowsky, 1993; Schuetze, 1992b]. Because of the lack of word senses, the semantics corresponding to the morphological structure of the derivation was assigned to each normalized (i.e., uninflected) word involved in the derivation, and the semantics assigned to a particular word was considered correct if it held for all senses present in the corpus. Such scoring requires that the semantics be expressed in a precise way. The cued lexical semantic information is axiomatized using Episodic Logic [Hwang and Schubert, 1993a], which was mentioned in the previous chapter.
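As an illustration of what can be done with just these words and tags, the sketch below filters candidate tokens with the same kind of surface patterns used for -ize in the next paragraph; the sample tokens are invented.

    import re

    WORD_RE = re.compile(r".*IZ(E|ING|ES|ED)$", re.IGNORECASE)
    TAG_RE = re.compile(r"^V.*")    # Penn verb tags: VB, VBD, VBG, VBN, VBP, VBZ

    tokens = [("centralized", "VBN"), ("prize", "NN"), ("formalize", "VB")]
    candidates = sorted((w, t) for w, t in tokens
                        if WORD_RE.match(w) and TAG_RE.match(t))
    print(candidates)   # [('centralized', 'VBN'), ('formalize', 'VB')]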

Let us consider an example. The suffix -ize applies to adjectival bases (e.g., centralize). There is a semantic distinction that can be made between this affix and the -ize that applies to nouns (e.g., glamorize). The first affix will be referred to as -Aize and the second as -Nize. First, the regular expressions ".*IZ(E|ING|ES|ED)$" and "^V.*" are used to collect tokens from the corpus that were likely to be derived using -ize. The Alvey morphological analyzer is then applied to each type, and each of the analyses it produces is considered. It strips off -Aize from a word form if it can find an entry with a reference form of the appropriate orthographic shape that has the features "uninflected," "latinate," and "adjective." Failing that, it may also build an appropriate base using other affixes, e.g., [[tradition -al] -Aize]. If this also fails, it will attempt to construct a base on its own. Finally, the derived forms are assigned the lexical semantic feature COS (change-of-state), defined by the axiom below.

For all predicates P with features COS and DYADIC:

    (∀x,y,e [[x P y]**e] ->
       [(∃e1: [[e1 at-end-of e] ∧ [e cause e1]]
            [[y (rstate P)]**e1]) ∧
        (∃e2: [e2 at-beginning-of e]
            [(¬[y (rstate P)])**e2])])

In this axiom, and throughout the remainder of this dissertation, lisp-like bracketing conventions are used, with the addition of square brackets to indicate that the operator is infixed, as is the convention in Episodic Logic. As mentioned earlier, the operator ** is analogous to the "supports" relation of situation semantics; it indicates that a formula describes an event. The axiom states that if a change-of-state predicate describes an event, then the result state of this predicate holds at the end of this event and did not hold at the beginning, e.g., if one wants to formalize something it must be non-formal to begin with and will be formal afterwards. The result state of an -Aize predicate is the predicate corresponding to its base; this is stated in another axiom (see Section 4.3.1). P is a place holder for the semantic predicate corresponding to the word sense which has the feature. It is assumed that each word sense corresponds to a single semantic predicate. In addition, note that the axiom only uses a one-way entailment and thus only constrains the meaning of the predicate; it does not define it.² As mentioned above, in this study a word is considered to have a feature if its axiom is true of all the word senses present in the corpus.

Precision figures for the method were collected as follows. The method returns a set of normalized (i.e., uninflected) word/feature pairs. A human then determines which pairs are "correct," where correct means that the axiom defining the feature holds for the instances (tokens) of the word (type). Because of the lack of word senses, the semantics assigned to a particular word is only considered correct if it holds for all senses occurring in the relevant derived word tokens.³ For example, the axiom above must hold for all senses of centralize occurring in the corpus in order for the centralize/COS pair to be correct. The axiom for IZE-DEP must hold only for those senses of central that occur in the tokens of centralize for the central/IZE-DEP pair to be correct. This definition of correct was constructed, in part, to make relatively quick human judgements possible. It should also be noted that the semantic judgements require that the semantics be expressed in a precise way. This discipline is enforced in part by requiring that the features be axiomatized in a denotational logic. Another argument for such an axiomatization is that many NLP systems utilize a denotational logic for representing semantic information, and thus the axioms provide a straightforward interface to the lexicon.

To return to our example, as shown in Table 4.1, there were 63 -Aize derived words (types), of which 78 percent conform to the COS axiom. Of the bases, 80 percent conform to the IZE-DEP axiom, which will be discussed in the next section. Among the conforming words were equalize, stabilize, and federalize. Two words that seem to be derived using the -ize suffix but do not conform to the COS axiom are penalize and socialize (with the guests). A different sort of non-conformity is produced when the morphological analyzer finds a spurious parse. For example, it analyzed subsidize as [sub- [side -ize]] and thus produced the sidize/COS pair, which for the relevant tokens was incorrect. In the first sort, the non-conformity arises because the cue does not always correspond to the relevant lexical semantic information. In the second sort, the non-conformity arises because a cue has been found where one does not exist. A system that utilizes a lexicon so constructed is interested primarily in the overall precision of the information contained within, and thus the results presented in the next section conflate these two types of false positives.

²This method of assigning every word a unique predicate and then constraining the meaning of the predicates with one-way meaning postulates was suggested in [Chierchia and McConnell-Ginet, 1990]. This scheme avoids some of the problems that a decompositional approach has. For example, a decompositional approach might have formalize denote λx,y [x (cause (become formal)) y]. There are two problems with this approach. First, it can introduce spurious scoping ambiguities, since there are now a number of operators in the logical form and modifiers might be scoped to apply to only part of the word. It appears that such internal word scopings do not exist. Second, unwanted inferences could occur, since the meanings of the words are logically equivalent to the decompositions. For example, formalize is being defined as λx,y [x (cause (become formal)) y], which might also be the logical form for X caused Y to become formal. Although the meanings are similar, it is unlikely that they are logically equivalent (see [Fodor, 1970]).

³Although this definition is required for many cases, in the vast majority of the cases the derived form and its base have only one possible sense (e.g., stressful).


4.3.1 Results

This section starts by discussing the semantics of 18 derivational affixes: re-, un-, de-, -Aize, -Nize, -en, -Aify, -Nify, -le, -ate, -ee, -er, -ant, -age, -ment, mis-, -able, -ful, -less, and -ness. (These affixes were chosen primarily because they provide a range of lexical semantic information. In addition, the author has a less-than-complete background in semantics, and thus many affixes were excluded from the list simply because he was not familiar with the area of semantics with which the affixes dealt. The important point is that these affixes were not chosen on the basis of their precision scores, and thus the precision numbers presented here should not be seen as an upper bound.) Following the discussion of these affixes, a table of precision statistics for the performance of these surface cues is presented.

The word types on which these statistics are based are listed in the appendix, along with other information about the actual implementation. The lexical semantics cued by these affixes are loosely specified here but are axiomatized in Chapter 6 in a fashion exemplified by the COS axiom above. In addition, the semantic analyses represented by these axioms are supported in Chapter 6.

The verbal prefixes un-, de-, and re- cue aspectual information for their base and derived forms. Some examples from the Brown corpus are unfasten, unwind, decompose, defocus, reactivate, and readapt. Above, it was noted that un- is a cue for telicity. In fact, both un- and de- cue the change-of-state (COS) feature for their base and derived forms; the COS feature entails the TELIC feature. In addition, for un- and de-, the result state of the derived form is the negation of the result state of the base (BASE-NEG-RSTATE), e.g., the result of unfastening something is the opposite of the result of fastening it. As shown by examples like reswim the last lap, re- only cues the TELIC feature for its base and derived forms: the lap might have been swum previously, and thus the negation of the result state does not have to have held previously [Dowty, 1979]. For re-, the result state of the derived form is the same as that of the base (BASE-RSTATE), e.g., the result of reactivating something is the same as that of activating it. In fact, if one reactivates something then it is also being activated: the derived form entails the base (ENTAIL-BASE). Finally, for re-, the derived form entails that its result state held previously, e.g., if one recentralizes something then it must have been central at some point previous to the event of recentralization (REPRESUP) [Light, 1992].

The suffixes -Aize, -Nize, -en, -Aify, and -Nify all cue the COS feature for their derived forms, as was discussed for -Aize above. Some exemplars are centralize, formalize, categorize, colonize, brighten, stiffen, falsify, intensify, mummify, and glorify. For -Aize, -en, and -Aify we can say a bit more about the result state: it is the base predicate (RSTATE-EQ-BASE), e.g., the result of formalizing something is that it is formal. Finally, -Aize, -en, and -Aify cue the following feature for their bases: if a state holds of some individual, then either an event described by the derived form predicate occurred previously or the predicate was always true of the individual (IZE-DEP), e.g., if something is central then either it was centralized or it was always central.

The "suffixes" -le and -ate should really be called verbal endings since theyare not suffixes in English, i.e., if one strips them off one is seldom left witha word. (Consequently, only regular expressions were used to collect types; themorphological analyzer was not used.) Nonetheless, they cue lexical semantics andare easily identified. Some examples are chuckle, dangle, alleviate, and assimilate.The ending -ate cues a COS verb and -le an ACTIVITY verb.

The derived forms produced by -ee, -er, and -ant all refer to participants of an event described by their base (PART-IN-E). Some examples are appointee, deportee, blower, campaigner, assailant, and claimant. In addition, the referent of an -ee derived form is sentient of this event and non-volitional with respect to it [Barker, 1995].

The nominalizing suffixes -age and -ment both produce derived forms that refer to something resulting from an event described by the verbal base predicate. Some examples are blockage, seepage, marriage, payment, restatement, shipment, and treatment. The derived forms of -age entail that an event occurred and refer to something resulting from it (E-AND-R), e.g., seepage entails that seeping took place and that the seepage resulted from this seeping. Similarly, the derived forms of -ment entail that an event took place and refer either to this event, the proposition that the event occurred, or something resulting from the event (E-OR-P-OR-R), e.g., a restatement entails that a restating occurred and refers either to this event, the proposition that the event occurred, or to the actual utterance or written document resulting from the restating event. (This analysis is based on the work of Zucchi [1989].)

The verbal prefix mis-, e.g., miscalculate and misquote, cues the feature that an action is performed in an incorrect manner (INCOR). The suffix -able cues a feature that it is possible to perform some action (ABLE), e.g., something is enforceable if it is possible to enforce it [Dowty, 1979]. The words derived using -ness refer to a property of a kind of state where instances of such a kind have the property of the base (NESS), e.g., in Kim's fierceness at the meeting yesterday was unusual the word fierceness refers to a kind of state where, for instances of this kind, Kim is fierce. The suffix -ful marks its base as abstract (ABST): careful, peaceful, powerful, etc. In addition, it marks its derived form as the antonym of a form derived by -less, if such a form exists (LESSANT). The suffix -less marks its derived forms with the analogous feature (FULANT). Some examples are colorful/colorless, fearful/fearless, harmful/harmless, and tasteful/tasteless.
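The cue-to-feature mappings just described can be summarized in a simple table. The following dictionary is one hypothetical way to encode them; the feature names are from the text, but the data structure itself is illustrative and not part of the system described in this thesis:

    # Affix cues and the lexical semantic features they assign, as described above.
    AFFIX_FEATURES = {
        "re-":   {"derived": ["TELIC", "BASE-RSTATE", "ENTAIL-BASE", "REPRESUP"],
                  "base": ["TELIC"]},
        "un-":   {"derived": ["COS", "BASE-NEG-RSTATE"], "base": ["COS"]},
        "de-":   {"derived": ["COS", "BASE-NEG-RSTATE"], "base": ["COS"]},
        "-Aize": {"derived": ["COS", "RSTATE-EQ-BASE"], "base": ["IZE-DEP"]},
        "-en":   {"derived": ["COS", "RSTATE-EQ-BASE"], "base": ["IZE-DEP"]},
        "-Aify": {"derived": ["COS", "RSTATE-EQ-BASE"], "base": ["IZE-DEP"]},
        "-Nize": {"derived": ["COS"]},
        "-Nify": {"derived": ["COS"]},
        "-ate":  {"derived": ["COS"]},
        "-le":   {"derived": ["ACTIVITY"]},
        "-ee":   {"derived": ["PART-IN-E", "SENTIENT", "NON-VOL"]},
        "-er":   {"derived": ["PART-IN-E"]},
        "-ant":  {"derived": ["PART-IN-E"]},
        "-age":  {"derived": ["E-AND-R"]},
        "-ment": {"derived": ["E-OR-P-OR-R"]},
        "mis-":  {"derived": ["INCOR"]},
        "-able": {"derived": ["ABLE"]},
        "-ness": {"derived": ["NESS"]},
        "-ful":  {"derived": ["LESSANT"], "base": ["ABST"]},
        "-less": {"derived": ["FULANT"]},
    }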

The precision statistics for the individual lexical semantics features discussed above are presented in Table 4.1 and Table 4.2. Lexical semantic information was collected for 2535 words (bases and derived forms). One way to summarize these tables is to calculate a single precision number for all the features in a table, i.e., estimate the number of correct types for each affix (its type count times its precision, averaged over the features it cues), sum these estimates, and then divide the sum by the total number of types. The result: if a random word is derived, its features have a 76 percent chance of being true. If it is a stem of a derived form, its features have an 82 percent chance of being true.
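As a concrete sketch of this summary calculation (the figures below are two rows from Table 4.1; the function is illustrative, not part of the thesis software):

    def weighted_precision(rows):
        """rows: (number of types, precision) pairs, one per feature/affix entry.
        Returns the type-weighted average precision."""
        correct = sum(n * p for n, p in rows)  # estimated number of correct types
        total = sum(n for n, _ in rows)
        return correct / total

    # e.g. TELIC/re- (164 types, 91%) and COS/un- (23 types, 100%):
    print(weighted_precision([(164, 0.91), (23, 1.00)]))  # ~0.92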

Data for recall was only collected for the verbal prefix re-, and it was found to be 85 percent. The majority of the missed re- verbs were due to the fact that the system only looked at verbs starting with re and not at other categories, even though many nominalizations, such as reaccommodation, contain a morphological cue token. However, increasing recall by looking at all open class categories would probably decrease precision. Another cause of reduced recall is that some stems were not in the Alvey lexicon or could not be properly extracted by the morphological analyzer. For example, -Nize could not be stripped from hypothesize because Alvey failed to reconstruct hypothesis from hypothes. However, for the affixes discussed here, 89 percent of the bases were present in the Alvey lexicon.4

4.3.2 Evaluation

Good surface cues are easy to identify, abundant, and correspond to the needed lexical semantic information (Hearst (1992) identifies a similar set of desiderata). With respect to these desiderata, derivational morphology is both a good cue and a bad cue.

Let us start with why it is a bad cue: there may be no derivational cues for the lexical semantics of a particular word. This is not the case for other surface cues, e.g., distributional cues exist for every word in a corpus. In addition, even if a derivational cue does exist, the reliability (on average approximately 76 percent) of the lexical semantic information is too low for many NLP tasks. This unreliability is due in part to the inherent exceptionality of lexical generalization and thus can be improved only partially.

However, derivational morphology is a good cue in the following ways. It provides exactly the type of lexical semantics needed for many NLP tasks: the affixes discussed in the previous section cued nominal semantic class, verbal aspectual class, antonym relationships between words, sentience, etc. In addition, working with the Brown corpus (1.1 million words) and 18 affixes provided such information for over 2500 words. Since corpora with over 40 million words are common and English has over 40 common derivational affixes, one would expect to be able to increase this number by an order of magnitude. In addition, most English words are either derived themselves or serve as bases of at least one derivational affix.5

4 Note that collecting a relatively complete list of English stems is a task that can be performed in a couple of weeks by one person. Primitive stemmers can be used to get a large initial list of types from a corpus and these types can be corrected by hand quickly. Alternatively, the head words of an MRD can be used, as is the case in this study.


Feature            Affix    Types   Precision
TELIC              re-      164     91%
BASE-RSTATE        re-      164     65%
ENTAIL-BASE        re-      164     65%
REPRESUP           re-      164     65%
COS                un-      23      100%
BASE-NEG-RSTATE    un-      23      91%
COS                de-      35      34%
BASE-NEG-RSTATE    de-      35      20%
COS                -Aize    63      78%
RSTATE-EQ-BASE     -Aize    63      75%
COS                -Nize    86      56%
ACTIVITY           -le      71      55%
COS                -en      36      100%
RSTATE-EQ-BASE     -en      36      97%
COS                -Aify    17      94%
RSTATE-EQ-BASE     -Aify    17      58%
COS                -Nify    21      67%
COS                -ate     365     48%
PART-IN-E          -ee      22      91%
SENTIENT           -ee      22      82%
NON-VOL            -ee      22      68%
PART-IN-E          -er      471     85%
PART-IN-E          -ant     21      81%
E-AND-R            -age     43      58%
E-OR-P-OR-R        -ment    166     88%
INCOR              mis-     21      86%
ABLE               -able    148     84%
NESS               -ness    37      97%
FULANT             -less    22      77%
LESSANT            -ful     22      7M

Table 4.1: Derived words

Feature    Affix    Types   Precision
TELIC      re-      164     91%
COS        un-      23      91%
COS        de-      33      36%
IZE-DEP    -Aize    64      80%
IZE-DEP    -en      36      72%
IZE-DEP    -Aify    15      40%
ABST       -ful     76      93%

Table 4.2: Base words



Finally, for some NLP tasks, 76 percent reliability may be adequate. If higher reliability is required then only the affixes with high precision might be used. In addition, the precision could be increased by ordering the analyses provided by the morphological analyzer and only considering the top ones. Thus analyses such as [[re- ache] -ed] would be ignored in favor of a more plausible parse such as [reach -ed]. A list of monomorphemic words would be helpful here. In addition, stochastic methods might also be used to order the parses (see [Heemskerk, 1993]).

The above discussion makes it clear that morphological cueing provides only a partial solution to the problem of acquiring lexical semantic information. However, as mentioned previously, there are many types of surface cues which correspond to a variety of lexical semantic information. A combination of cues should produce better precision where the same information is indicated by multiple cues. For example, the morphological cue re- indicates telicity and, as mentioned above, the syntactic cue of the progressive indicates non-stativity [Dorr and Lee, 1992]. Since telicity is a type of non-stativity, the information is mutually supportive. Even some morphological cues are mutually supportive, e.g., the verbs activate, decorate, and calculate end in -ate, which cues telicity, and they also occur prefixed with re-, which also cues telicity. In addition, using many different types of cues should provide a greater variety of information in general. Thus morphological cueing is best seen as one type of surface cueing that can be used in combination with others to provide lexical semantic information.

It should be mentioned that there is semantic information about any given word that surface cueing is unlikely to be able to provide. For example, from the token careful, one can deduce that care refers to an abstract concept. However, when one does something with care, what does this action look like? How does it differ from a careless version? Such information differentiates care from other words. (Pinker [1994] refers to this sort of information as the core meaning of the word and argues that perceptual cueing can provide it and surface cueing cannot.) To uncover lexical semantic information, surface cueing utilizes a mapping from a surface cue to a specific semantic feature. The idea is that this cue can be used for many different words. Thus, information specific to one word will not be cued.

4.3.3 Morphological cueing in human language acquisition

As a final note, this section returns to human language acquisition and considers evidence that children make extensive use of derivational morphology cues for learning new words.

5 The following experiment supports this claim. Just over 400 open class words were picked randomly from the Brown corpus and the derived forms were marked by hand. Based on this data, a random open class word in the Brown corpus has a 17 percent chance of being derived, a 56 percent chance of being a stem of a derived form, and an 8 percent chance of being both.


[Anglin, 1993] studied the English vocabulary knowledge of 6, 8, and 10 year olds. More specifically, he tested the semantic knowledge of these children on 196 words (chosen randomly from a large dictionary). By multiplying the number of words in the dictionary by the proportion of the random sample the children knew, he estimated the size of their vocabularies. Thus, using a hypothetical example, if a test group of children knew half of the words in the sample and the dictionary had, say, 200,000 words, then he would estimate that the children knew 100,000 words. Using the same technique, he also estimated the makeup of their vocabularies with respect to the morphological structure of the words. Again using a hypothetical example, if derived words make up a third of the sample, the children know half of these derived words, and they know a third of the sample overall, then he estimates that the children know 66,667 words, of which half (33,333) are derived. This study supports the view that children between 6 and 10 have an extensive knowledge of derivational morphology and that this knowledge accounts for a significant portion of their learning. The tables below (adapted from [Anglin, 1993]) summarize the results.6 First consider Table 4.3, which illustrates the makeup of the main entries of Webster's Third New International Dictionary of the English Language. Note that derived words make up the largest segment of words. Next consider Table 4.4, where the words the children knew are partitioned by their morphological category. Again notice that, for children in Grade 3 and Grade 5, derived words make up the single largest portion of the words they could understand and use.
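The hypothetical arithmetic above can be made explicit as follows; these numbers are the text's illustrative ones, not Anglin's actual data:

    dictionary_size = 200_000
    known_overall = 1 / 3    # fraction of the sample the children knew
    derived_share = 1 / 3    # fraction of the sample made up of derived words
    derived_known = 1 / 2    # fraction of those derived words the children knew

    vocab_estimate = dictionary_size * known_overall                    # ~66,667 words
    derived_estimate = dictionary_size * derived_share * derived_known  # ~33,333 words
    print(round(vocab_estimate), round(derived_estimate))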

Word Type            Number   Percentage
Root words           124      29
Inflected words      20       5
Derived words        140      32
Literal compounds    68       16
Idioms               82       19

Table 4.3: The number and percentage of morphologically defined word types in the sample of 434 main entries (adapted from [Anglin, 1993] p. 47)

6 [Gordon, 1989; Tyler, 1989; Bowerman, 1982; Clark and Cohen, 1984; Clark, 1982], as referenced in [Anglin, 1993], also report on the knowledge of derivational morphology children have and use.


Word Type            Grade 1   Grade 3   Grade 5
Root words           .31       .24       .20
Inflected words      .27       .22       .15
Derived words        .16       .28       .39
Literal compounds    .25       .23       .21
Idioms               .01       .03       .05

Table 4.4: The mean proportion of children's main entry vocabulary knowledge accounted for by each morphologically defined type at each grade (adapted from [Anglin, 1993] p. 70)


5 Handling New Words in the Input Stream

5.1 Introduction

The previous chapter discussed the acquisition of lexical semantic information for NLP systems. However, even if lexical semantics could be acquired on a large scale, and thus NLP systems all had large lexicons filled with lexical semantic information, it is likely that such systems would still encounter words that are not included in this lexicon. For example, Sproat [1992] describes an experiment where 300,000 different words were collected from the Associated Press newswire in 10 and a half months (44 million words of text). On the day after the last day of collection the following new words were encountered: armhole, groveled, fuzzier, oxidized, over-emphasized, outclassing, antiprejudice, and refinancings. A quick search through a current news magazine produced the following words: uncontaminated, titleless, mini-series, underretailed, megamall, and unfussy. None of these words is listed in the 200,000-definition American Heritage dictionary [Berube et al., 1982].

This chapter considers the problem of dealing with such unknown words. More specifically, it presents a system, Kenilex, that uses morphological cues to help process unknown derived words. Upon encountering an unknown derived word, Kenilex uses a modified version of the Alvey morphological system [Ritchie et al., 1992a] to analyze the word and then, based on the analysis, constructs a new entry for the word. In addition, it creates a new Episodic Logic predicate and assigns it lexical semantic features corresponding to the affix. (Such features were introduced in the previous chapter and are discussed in detail in Chapter 6.) One can view the Kenilex system as a run-time version of the acquisition process described in the previous chapter.

Before Kenilex is discussed further, consider the following study of the makeup of unknown words. The Alvey morphological analyzer (with its 60,000-entry lexicon) was run on the first 100,000 tokens of the Brown corpus. Words that did not correspond directly to an entry citation or an inflected form of a citation were considered unknown. Thirteen percent of the tokens were unknown. These tokens were classified by hand into the groups listed in Table 5.1. Derivations are tokens that involve a derivational affix; a token "involves" a derivational affix if it contains the affix orthographically and its semantics are built compositionally from the base and the affix. Names are tokens that involve a proper name. Number tokens are those that involve a number (e.g., 1987, 10-year-old, 256). Finally, the "others" category is filled with all the tokens that do not fit into the groups defined above.

derivations   11%
names         58%
numbers       15%
others        16%

Table 5.1: Breakdown of the Unknown Tokens

Although derived forms make up a smaller percentage of unknown tokens than names or numbers, they are often an important predicate of the sentence (e.g. a matrix verb, predicate adjective, or central nominalization). And as such, they are central to understanding what the sentence predicates of its participants. In the sentences below, the emphasized tokens were unknown.

(51) a. Note, too, that the Kennedy textile plan looks toward modernization or shrinkage of the U.S. textile industry.

b. The resolution urged the reconvention of the Congolese parliament and the reorganization of the army.

c. Future clouded Barnett, as the titular head of the Democratic Party, apparently must make the move to reestablish relations with the National Democratic Party or see a movement come from the loyalist ranks to completely bypass him as a party functionary.

d. The Congolese were clamoring for their independence, even though most were unsure what it meant.

As a final note: since sentences in the Brown corpus are, on average, 18 tokens long, every fourth sentence will, on average, contain an unknown token that is a derivation (18 tokens per sentence, a 13 percent unknown rate, and an 11 percent derivation share among unknowns come to roughly 0.26 such tokens per sentence), and it is likely that it will be central to that sentence's meaning.

5.2 Previous work

Most of the work in computational linguistics has focused on the syntactic and phonological aspects of derivational affixation and ignored its semantics. Systems have been developed that can take a complex word as input, break it into its constituent parts, and use the information stored for these parts to derive the syntactic and phonological features of the word. This task is usually broken into two parts: segmentation and parsing. These subtasks can be interleaved or done one after the other. The segmentation process undoes the phonological rules of a language and produces a normalized form of the constituent parts that can then be looked up in the lexicon. The parsing task amounts to using a rule set to assign an internal structure to a word. Example (52) contains a complex word, its segmentation, and its parse.

(52) a. undecidability

b. un-decide-able-ity

c. [[un- [decide -able]] -ity]
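To make the two subtasks concrete, here is a toy affix-stripping sketch that recovers the segmentation in (52b); the affix lists and the crude e-restoration are ad hoc illustrations, nothing like the two-level rule sets of real segmenters:

    # Toy segmentation: strip a prefix, then suffixes (outermost first), and
    # normalize the remaining stem so it can be looked up in a lexicon.
    SUFFIX_ALTERNANTS = [("-ity", ["ity"]), ("-able", ["able", "abil"])]
    PREFIXES = ["un-"]

    def segment(word):
        """undecidability -> ['un-', 'decide', '-able', '-ity'] (toy version)."""
        prefixes, suffixes = [], []
        for p in PREFIXES:
            if word.startswith(p[:-1]):
                prefixes.append(p)
                word = word[len(p) - 1:]
        changed = True
        while changed:
            changed = False
            for norm, forms in SUFFIX_ALTERNANTS:
                for form in forms:
                    if word.endswith(form):
                        suffixes.insert(0, norm)
                        word = word[:-len(form)]
                        changed = True
        if not word.endswith("e"):
            word += "e"   # crude stand-in for undoing e-deletion (decid -> decide)
        return prefixes + [word] + suffixes

    print(segment("undecidability"))  # ['un-', 'decide', '-able', '-ity']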

The most influential segmentation system is the KIMMO system, originally developed by Kimmo Koskenniemi [Koskenniemi, 1984] and expanded upon by Karttunen [1983]. Most of the systems in use today use a formalism related to that used by KIMMO for segmentation [Black et al., 1986; Domenig, 1988; Bear, 1988]. These systems are quite successful at the segmentation task. Large subsets of morphologically complicated languages such as Finnish can be successfully segmented by such systems.

The systems that attempt word parsing use different subsets of the techniques developed for sentence-level parsing (e.g. unification, the concept of head, complex features, chart parsing, etc.). The Alvey morphological parser uses a chart parser and a GPSG-inspired set of percolation constraints to perform the word parsing task [Ritchie et al., 1992b]. Byrd's work [Byrd, 1983] combines parsing and segmentation into a single process based on Aronoff's [1976] conception of word formation rules. [Trost, 1993] describes a system that uses an HPSG-inspired word grammar and parser. For an insightful review of the literature on segmentation and parsing see [Sproat, 1992].

Two works that do address the semantics of derivational affixation are [Schubert, 1982] and [Alshawi, 1992]. They describe systems that build a semantics for a derived word from its parse compositionally, i.e., each affix denotes a logical operator and this operator is applied to the denotation of the base to produce the denotation of the derived form. The Kenilex system also builds the semantics of a derived form from the affix and the base but, in contrast to the systems mentioned above, the denotation of the derived form is a unique semantic atom whose meaning is explicated by axioms. As mentioned in the previous chapter, some advantages of this approach are that no spurious scoping ambiguities are introduced and each derived form maintains its own individual meaning which is only constrained by the axioms, not defined.


5.3 Kenilex

The Kenilex system is an extension of the Alvey morphological analyzer. Thus the Alvey system will be briefly described first.

The Alvey morphological analyzer has three modules: a dictionary, a segmenter, and a word parser. The segmenter is a KIMMO-like system. It breaks the word into pieces that are consistent with its two-level rules, which encode orthographic information. The pieces (or segments) of the word may or may not turn out to correspond to affixes or stems. The parser, as mentioned above, is a chart parser augmented with feature percolation principles. The dictionary consists of entries for each word which contain syntactic features and a semantic atom. In addition, at the time when the dictionary is compiled into a run-time system, a set of feature cooccurrence rules are applied to these entries to augment the features of the entries and to produce new entries.

The modules interact as follows. At run-time, the input string is given to the word parser. It calls the segmenter to get the first segment of the word. The segmenter uses the dictionary and its own spelling rules to propose a number of segments, where each segment must correspond to an (expanded) entry in the dictionary. The word parser uses the features of these entries to drive a bottom-up chart-parsing process. This process uses the word grammar rules to license possible structures. When it has constructed the possible structures for the segments that have been discovered so far, the word parser asks the segmenter for the next segment. The modules constrain each other as follows: the word parser only uses entries that correspond to segments proposed by the segmenter, the segmenter only uses segments that can correspond to entries in the dictionary, the word parser uses features from the lexicon, and the word parser rejects segmentations for which it cannot find an acceptable parse.
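The way dictionary entries for segments and word-grammar rules jointly license structures can be illustrated with a radically simplified, self-contained sketch; the mini-lexicon, the binary rules, and the chart below are invented for exposition, and the real Alvey modules are far richer and interleave segmentation with parsing:

    # Mini word grammar: each rule maps a (left, right) category pair to the
    # category of the combined structure.
    LEXICON = {"un-": "PFX", "decide": "V", "-able": "VSUF", "-ity": "ASUF"}
    RULES = {("PFX", "A"): "A", ("V", "VSUF"): "A", ("A", "ASUF"): "N"}

    def word_categories(segments):
        """Bottom-up chart over a segment sequence; returns the categories
        derivable for the whole word."""
        n = len(segments)
        chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
        for i, seg in enumerate(segments):
            if seg in LEXICON:                  # only dictionary-backed segments
                chart[i][i + 1].add(LEXICON[seg])
        for width in range(2, n + 1):
            for i in range(n - width + 1):
                k = i + width
                for j in range(i + 1, k):
                    for a in chart[i][j]:
                        for b in chart[j][k]:
                            if (a, b) in RULES:
                                chart[i][k].add(RULES[(a, b)])
        return chart[0][n]

    print(word_categories(["un-", "decide", "-able", "-ity"]))  # {'N'}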

The input to the Kenilex system is a parse tree from the word parser. (Ambiguity is discussed shortly.) If the parse tree does not contain any derivational affixes then it is returned by Kenilex unchanged. If it does contain derivational affixes, Kenilex creates dictionary entries for the derived words. The syntactic features of the entries are taken from the parse tree. A unique semantic atom is produced for the entry and it is assigned the features that correspond to the affix. In addition, the feature cooccurrence rules are applied to expand the entry.

Consider the sentences in (53) and assume that reload is new to the system.

(53) Get the empty boxcar along with a boxcar of oranges from Elmira. Use the oranges to make OJ in Bath. Reload the boxcar with the bananas and take it back to Elmira. Leave the other boxcar in Bath.

Notice that the use of reload instead of load is crucial to the disambiguation of the boxcar in the emphasized sentence in (53). Thus a relatively complete understanding of reload is required. Intuitively, the system should be able to make the inferences that the boxcar will be in the state of "is loaded" if the command is carried out and that the boxcar was previously in the state of "is loaded."

When the sentence-level parsing system encounters the string "reload" it queries the lexicon system for information concerning the string. On the basis of the Alvey system's parse of the string and the semantics of the prefix re-, Kenilex creates an entry for the word and augments Alvey's dictionary. It returns this lexical entry to the sentence-level parser, which proceeds with its parse. The symbol reload will show up in the logical form produced by the NLU system for the sentence reload the boxcar, and it will have the features TELIC, BASE-RSTATE, ENTAIL-BASE, and REPRESUP, whose corresponding axioms could be used by the other modules of the NLU system to make the inferences mentioned above.
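For concreteness, the REPRESUP template (given as (67) in Chapter 6), instantiated for reload, yields the following; this is a schematic rendering, not actual output of the system:

    (∀x,y,e [[[x reload y]**e] ->
              (∃e1: [e1 before e]
                [[y (rstate reload)]*e1])])

Together with the BASE-RSTATE axiom, which identifies (rstate reload) with (rstate load), this licenses the second of the inferences mentioned above: the boxcar was previously in the state of "is loaded."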

There are two complications to this process that should be mentioned. First, in most cases there are multiple parses for any given input string. Kenilex orders these analyses using the following rules and takes the first analysis as correct1 (a sketch of this ordering as a sort key follows the list).

" Analyses that do not involve affixes go before those that do involve affixes.

" Analyses whose outermost affix is inflectional go before those that have aderivational affix as their outer-most affix.

" Analyses that involve fewer derivational affixes are preferred.

The second complication is that for some derived words, the base will not be in the Alvey dictionary. Consider an input string like rewugable. Even if people do not have any prior knowledge about what wug means, they can infer some things about what rewug, rewugable, and even wug itself mean when they encounter rewugable. Kenilex can also make some inferences based on the internal structure of a possible word like rewugable. Of course, such inferences are inherently less reliable than the analogous ones that would be made for reloadable, but they are more useful than simply giving up when presented with such a word. A number of minor modifications had to be made to the Alvey parser to deal with new bases. In addition, the dictionary had to be modified so that it contained new stem entries constructed so that they match any surface string of equal length.

When Kenilex, running in normal mode, fails to produce a parse for a word, it switches to new stem mode and starts again. Kenilex processes the parses produced by the word parser as described above, producing new entries when derivational affixes are used. In addition, when a new stem entry is encountered in the parse tree structure, the surface input string covered by the node, its features unified with those provided by the feature percolation conventions, and a new semantic atom are used to construct a raw entry for the new stem, and the features corresponding to the affix are assigned to it. It is expanded and added to the dictionary with the other new entries.

1 Stochastic methods might also be used to order the parses (see [Heemskerk, 1993]).

The Kenilex system has been integrated with the TRAINS-93 parsing and logical form producing systems. Thus, when a word is not in the TRAINS lexicon, the Kenilex system attempts to analyze it. If an analysis is found, Kenilex makes a new entry for the word and returns syntactic information about the word to the TRAINS parser. It also constructs axioms for a new semantic atom. These axioms are important for both the language understanding tasks and the planning tasks in the TRAINS scenario. For example, (53) illustrates their usefulness for anaphora resolution. In (53), the axioms would also enable the plan recognition system to at least partially incorporate the "reload" action into the plan. Although these uses are possible in principle, the TRAINS-93 system did not make use of the axioms produced by Kenilex. This is primarily because the system was built to work for pre-selected dialogues and thus handling new words was not a focus.


6 Semantic Analyses of Various Derivational Affixes

This chapter presents the semantic analyses on which the dissertation depends. They have been arranged into the following categories: event structure, event participants, referring to events, other verbal semantics, abstractions, and antonyms. These groupings follow, in part, from the types of information needed for NLP tasks and also from the types of information that English derivational affixes provide. The examples and statistics that are presented for each affix were taken from the study of the Brown corpus using the Alvey lexicon presented in Chapter 4.

6.1 Event structure

Events are ubiquitous in natural language: verbs and sentences describe them, nouns and pronouns refer to them, and noun phrases often refer to their participants. Thus, most NLP systems have found it useful to have some notion of an event in their representation system. Many treat events as first-class members of their ontology. In addition, many NLP tasks require classification of events and even information about their internal structure (i.e., their internal temporal structure). The aspectual classes described in [Vendler, 1967] specify some of this information. For example, statives, like love in Mary loves John, provide a description of an event that is true at all times within the event; the event is homogeneous. Activities do the same down to a particular granularity of description. Telics combine an activity and a state in such a way that the state results from the activity. These pieces of an event may either have duration or be punctual. These distinctions only scratch the surface of the topic of event structure but they will suffice for the semantic analyses presented here.

Before I can formalize some of the observations mentioned above in Episodic Logic, I need to discuss what events are in Episodic Logic. In Episodic Logic, events are elements of the ontology and play a central role. A crucial feature of Episodic Logic is that the truth of a proposition is evaluated with respect to an event (as in Situation Semantics). Thus propositions pick out sets of events, and sets of events pick out propositions. There are two ways for a sentence to be true of an event: it can either characterize it or it can merely describe it. A sentence characterizes an event when no proper part of it can be described by the same sentence. P**e specifies that P is true in e and furthermore that it characterizes e.1 P*e specifies that P is true in e and nothing more. The * operator is roughly equivalent to the support relation of situation semantics. In Section 6.3 below, I will discuss the distinction between ** and * in more detail.

Event structure, in Episodic Logic, is represented by relations between events, most often between subevents and the main event, making use of predicates such as at-end-of and during. In addition, a subepisode relation, subep-of, over events forms a join-semilattice of events from the same world. These representational tools can be used to partially axiomatize the aspectual classes as follows.

(54) a. For all predicates P with features TELIC and DYADIC:
        (∀x,y,e [[[x P y]**e] ->
                  (∃e1: [[e1 at-end-of e] ∧ [e cause e1]]
                    [[y (rstate P)]**e1])])

     b. For all predicates P with features ACTIVITY and DYADIC:
        (∀x,y,e [[[x P y]**e] ->
                  (∀t: [[t time] ∧ [t during e] ∧ ((appr-grain P) t)]
                    [[x P y]*t])])

The axioms above and throughout this chapter work as follows. P is a place holder for the semantic predicate corresponding to the word sense which has the named features. There is a one-to-one correspondence between word senses and semantic atoms. Thus an axiom can be seen as a template that produces formulae for all the predicates with the appropriate feature. Many features are assigned to the word sense by the lexical rule for the affix. In addition to axioms particular to the affix, most affixes also assign the feature COMPLEX to their derived forms. This feature is defined in (55) (for dyadic predicates) and links a derived form to its base using the predicate modifier base.

(55) For all predicates D with features COMPLEX and DYADIC and for their corresponding base predicate B with feature DYADIC:
        (∀x,y [[x (base D) y] -> [x B y]])

1 The characterizing relation φ**e does not rule out homogeneity. Although e cannot have proper parts described by φ - it is in that sense indivisible - it can perfectly well coexist with "smaller" events that occur during it and are described by φ. In fact, if φ is a homogeneous predication, the existence of such smaller events is an entailment: (∀t,t' [[[t' during t] ∧ φ*t] -> φ*t']) where t,t' range over time intervals. (In Episodic Logic, time intervals are special cases of events, with "exhaustive" information content.)

It is assumed that predicates are typed by their arity and that features such as DYADIC refer to these types. Most of the axioms will only be written for dyadic predicates; however, the axioms for other arities follow straightforwardly.

Returning to our discussion of (54a): the predicate modifier rstate is applied to the telic predicate P, and the resulting predicate to the second argument of P. This sentence-intension is true of an event whose beginning is temporally located at the end of the central event. When rstate is applied to a telic predicate, it produces its result state. Axioms specific to P provide semantic content. One example would be the axiom for capture given below.

(56) (∀y [[y (rstate capture)] -> (∃x [x control y])])

The result state of telic verbs is crucial for the semantics of a number of affixes, and thus what a result state is and how it should be represented needs to be explored here in some detail. Quoting Wechsler: "A result state is a specific state built into the predicate which serves as a criterion for the action to be perfected" [Wechsler, 1989] (p. 421).

There are numerous states that hold at the end of an event described by a telic verb. The question that will now be considered is: given a verb, what is its result state? Notice that for many telics there exists a pair of states, one presupposed and one that holds true at the end of the event described by the verb; see (57). The first is a negation of the latter.

(57) open, enter, capture, create, awake

Consider enter: one cannot enter a room if he or she is already inside that room, and after one has entered the room he or she is inside. The presupposition is stated formally in (58).2

(58) (∀x,y,e [[[x enter y]**e] ->
               (∃e1: [e1 at-beginning-of e]
                 [(¬[x contained-in y])**e1])])

The result state for such verbs is the state in the pair that holds at the end of the event. For enter this state is contained-in. Axioms can be used to encode information about the result state of specific verbs. For example, (59) encodes the result state for enter.

(59) (∀y [[y (rstate enter)] -> (∃a [a contained-in y])])

2 Throughout this dissertation, presupposition is treated as entailment for expository reasons.


There is another subset of telic verbs that do not have such a pair of states (see (60)). For example, a book does not have to be "not read" to be read.

(60) play (a game), perform (an opera), run (a race), read (a book), write (a memo), type (a letter), design (a deck)

The class exemplified by the first four verbs in (60) is called the 'creation of a performance object' class by Dowty [1979] and the 'path accomplishments' class by Wechsler [1989]. Neither Dowty nor Wechsler groups words like type and write in this class. However, these verbs also lack any sort of negated presupposed state and, for the analysis presented here, this is the crucial distinction between the verbs in (60) and those in (57).

What are the result states for the verbs in (60)? One possibility is that they do not have any result state. This is the position taken by Wechsler. However, intuitively something has changed about the patient: the game is played, the race is run, the letter is typed. I claim that a stative predicate, intuitively similar to their adjectival passive, is the result state of the verbs in (60). Thus, axioms like that in (61) will specify the result state of such verbs.

(61) (∀y [[y (rstate play)] -> [y (adj-pas play)]])

The predicate modifier adj-pas represents the portion of the meaning of the adjectival passive form that is of concern here, namely (62), which keys off adj-pas and states that an episode of the action described by the base verb occurred and involved an agent.3

(62) For all predicates P with features VERBAL and DYADIC:
        (∀y,e [[[y (adj-pas P)]*e] ->
                 (∃e1: [[e1 before e] ∧ [e1 at-beginning-of e]]
                   [(∃x [x P y])**e1])])

Thus, the rule for assigning result states for telic verbs is as stated in (63). The axioms in (59) and (61) are products of this rule.

(63) a. If a verb presupposes the negation of a state and this state holds true at the end of events described by the verb, this state is its result state. (change-of-state verbs)

     b. Otherwise, a verb's result state is its adjectival passive.

In summary, it is assumed here, unlike in [Dowty, 1979] and [Wechsler, 1989], that all telic verbs have a result state, and a principled way of discovering what the result state of a verb is has been provided. This analysis localizes the difference in types of result states in the way the result state is chosen. The difference does not force a systematic ambiguity in the system. Once the result state is set, the basic analysis of telic verbs proposed here can handle both kinds of verbs with the same mechanism. The axiom for the change-of-state (COS) feature is listed below.

3 Note that this information is not necessarily implicit in the type of the direct object: an opera may never have been performed or a book never read.

(64) For all predicates P with features COS and DYADIC:
        (∀x,y,e [[[x P y]**e] ->
                  [(∃e1: [[e1 at-end-of e] ∧ [e cause e1]]
                     [[y (rstate P)]**e1]) ∧
                   (∃e2: [e2 at-beginning-of e]
                     [(¬[y (rstate P)])**e2])]])

As a final note on result states, the two different kinds of verbs have result states with different properties: the result states corresponding to performance verbs and verbs like type are intrinsically dependent on the action denoted by the verb; the state cannot come about if the action has not been performed. Dowty makes this observation for verbs like perform but does not discuss verbs like type. Wechsler also makes this observation but considers it evidence that these verbs do not have a result state. In contrast, the result states corresponding to change-of-state verbs can often come about for a variety of other reasons. An opera can only become 'performed' if someone performs it. But a tract of land can become controlled by someone if it is bought, inherited, received as a gift, captured, etc. [Smith, 1970] (as referenced in [Levin and Rappaport Hovav, 1994]) calls this the independence of the state.

6.1.1 re-

The discussion above has set the stage for an analysis of the semantics of the verbal prefix re-. First note that re- applies to telic verbs and derives a new verb that is itself telic. (Of the 164 types encountered in the Brown corpus, 91 percent had telic bases and were themselves telic.) Thus, this derived verb has a result state. The first semantic generalization about re- is that re- verbs have the same result states as their base verbs (e.g. the result state of reenter is the same as that for enter). This is stated in (65).

(65) For all predicates P with feature BASE-RSTATE:
        (∀y [[y (rstate P)] -> [y (rstate (base P))]])

Of 164 types encountered, 65 percent conform to this axiom.

60

The second generalization is that if a derived verb holds true of a set of arguments then so does the base verb (e.g. if you reenter a room then you also enter a room). This is stated in (66).4

(66) For all predicates P with features ENTAIL-BASE and DYADIC:
        (∀x,y [[x P y] -> [x (base P) y]])

Again, 65 percent of the 164 types conform to this axiom.

Finally, not surprisingly, the re- form of a verb presupposes that its result state held previously. Dowty [1979], Marchand [1969], and Wechsler [1989] make this observation. This presupposition is represented in (67).

(67) For all predicates P with features REPRESUP and DYADIC:

        (∀x,y,e [[[x P y]**e] ->
                  (∃e1: [e1 before e]
                    [[y (rstate P)]*e1])])

(As before, 65 percent of the 164 types conform to this axiom.)

What is surprising is that this is all that needs to be presupposed. For example, re- forms do not presuppose that the entire event described by the telic verb occurred previously. Consider the examples in (68). (68a) would be true in a situation where the door was hung open when it was created and was closed only once before Mary came along and opened it. (68b) would be true even if the satellite had never entered the atmosphere before. (68c) would be true even if the Druids never captured their homeland before.

(68) a. Mary reopened the door.

b. The satellite reentered the atmosphere.

c. The Druids recaptured their homeland.5

As mentioned above, for verbs such as capture and open there are multiple ways to achieve their result states (e.g. the state of being possessed by someone can come about as a result of the thing being stolen, received, bought, etc.). In contrast, for verbs such as play (the game) there is only one way to achieve the result state: do the action described by the verb. Thus the re- form of such verbs indirectly entails that the action denoted by the telic verb occurred previously. This entailment falls out of the fact that I am representing the result state of such verbs as their adjectival passive.

4 The reader may have noticed that (66) in conjunction with the axiom for the TELIC feature (54) entails that the result state of the base verb holds at the end of episodes described by the re- verb. Thus, it seems that the axiom in (65) is unnecessary. However, re- verbs are telic and should, therefore, denote telic predicates. To be consistent, these telic predicates should have a result state. In addition, remember that the general axiom for telic verbs in (54) applies to all telic predicates and involves the rstate predicate modifier.

5 These three examples are from [Dowty, 1979].


As noted above, the adjectival passive stative predicate entails that the relevant episode is also described by the base predicate. For a verb such as retype, (65) and (67) along with (62) and (69) chain to produce the inference that a typing event occurred previously.

(69) (∀y [[y (rstate type)] <-> [y (adj-pas type)]])

In addition, re- verbs do not presuppose, by virtue of being re- verbs, the negation of their result state. Dowty assumes that re- makes this stronger presupposition.6 He claims that not only does the re- form of a verb presuppose that the result state held previously but, in addition, it presupposes that it did not hold at some point between when it previously held and the current time (see (70) where re- denotes again').

(70) a. ∀p □[again'(p) <-> [ˇp ∧ H[¬ˇp ∧ Hˇp]]]

     b. "That is, again'(p) is true just in case p is now true, there was an earlier time at which p was false, and a still earlier time at which p was true." ([Dowty, 1979], p. 261)

For verbs like reenter and recapture, this additional presupposition is unnecessary; for verbs like retype and replay the game, it is wrong. To see this, consider enter. As mentioned above, enter presupposes that the space being entered does not already contain the object entering (see (58)). Since reenter entails enter (see (66)), all the presuppositions for enter apply for reenter. Therefore, they do not need to be stated explicitly as presuppositions for reenter.

For verbs such as retype and reread, there is no presupposition that the negation of the result state held previously (e.g. once a book is read it can never be "unread"). And as mentioned above, the base forms of these words do not presuppose the negation of their result state. Thus, the weaker entailment for re- verbs proposed here (i.e. (67)) performs better than the stronger form (70) proposed by Dowty.

Re- provides us with an opportunity to explore the nature of result states further; more specifically, which participants in an event are involved in the result state. With respect to re-, Wechsler poses the question as follows: "In what ways must the presupposed earlier situation resemble the denoted one? Specifically, which participants must be common to both?" [Wechsler, 1989] (p. 425). Wechsler calls these common participants the nuclear arguments. He gives the following heuristic for deciding which participants of a telic event are nuclear arguments: "a simple test for nuclear argumenthood is that criteria for the completion of an accomplishment are given in terms of them" [Wechsler, 1989] (p. 424). For example, in (71a), John and the English Channel are nuclear arguments since one has to watch both in order to determine whether the swimming event is complete. However, the flippers are not nuclear since they are not crucial to the completion. Consequently, they are not required to be a part of the presupposed event.

6 Wechsler lists the axiom in (70) but does not say anything for or against it in his final analysis.

(71) a. John reswam the English Channel with flippers.

b. Mary reread Ulysses in one day.

c. Kim reran the last lap of the race for the TV cameras.

d. The train recrossed the border.7

When the heuristic is applied to the other sentences in (71), it gives the same result: both the agent and the theme are nuclear arguments and must be common to both the current situation and the presupposed one. The heuristic also works for the sentence in (72).

(72) John reawoke at the sound of his cat scratching the door.

Thus far the heuristic works well. However, as Wechsler notes, there are a number of problematic sentences. Some of his examples are in (73).

(73) a. John swam the Channel yesterday and found nothing; but I don't believe he looked carefully enough. So I'm going to reswim it today.

b. After Smith's famous crossing in 1930, the Channel remained unconquered for over 50 years. Finally, in 1986, a young swimmer named Jones reswam the Channel.

c. On September 17th, Dr. Jones reentered the crypt of the pharaoh.

d. Mary read the poem aloud and then John reread it.

All of these sentences have a reading where the agent of the current event is not the same as the agent of the presupposed event. Wechsler points out that the themes in these examples are 'affected' in a way that is not present in a reading where the agent must be the same in both events. Thus, one need only pay attention to the theme to tell if the telic event is over. If affected themes can be effectively delineated then his heuristic would remain untarnished. In his own words:

"Notice that these shifts in scope are not the result of any extraprinciples of a pragmatic nature or any other kind. The verbs we havelooked at (even in unprefixed form) exhibit a certain plasticity with

7The examples in (71) and (72) are from Wechsler.

63

respect to interpretation. But once that interpretation is fixed, theinterpretation of re- simply follows from the meaning we needed to assignto the morpheme anyway on the basis of the simple cases." [Wechsler,1989] (p.427)

However, the concept of affectedness is notoriously slippery. In addition, the following examples cannot be handled by the heuristic.

(74) a. Mary swam the last lap of the relay race but the judge did not see it. Since he was fresh, John immediately jumped in and reswam the last lap. Amazingly their team still placed.

b. The US recaptured Kuwait for the Kuwaitis.

Since the theme of reswim in (74a) is not affected in any way and the observation of both the agent and the theme is required to tell if the telic event is complete, Wechsler's heuristic would predict that both arguments are nuclear. However, only the theme is common to both events. In (74b), the result state of recapture involves both the capturer and the capturee and thus both should be common. However, (74b) seems true in a situation that involves the following sequence of events: the Kuwaitis control Kuwait, the Iraqis capture it, the US captures Kuwait, the US gives Kuwait back to the Kuwaitis. Again, the heuristic makes the agent nuclear when it should not.

Although Wechsler's analysis provides many insights and his heuristic seems to be on the right track, this aspect of result states is an area where further research is needed. For the time being, however, the following generalization will be incorporated into the analysis presented here: the theme is always a nuclear argument. This is encoded in my analysis by giving wide scope to the quantifier for the theme argument variable y in (67).

The agent of the presupposed event is left indeterminate and it is assumed that context will decide the most salient element to fill this argument. This approach is encoded as follows. If the result state involves a dyadic stative predicate, the agent position is treated as being existentially bound (see the variable a in (59)). Thus, for examples like the platoon captured the hill, some process will have to equate the platoon with the thing which is in possession of the hill.8 Notice that such a process is needed for other existentials. Consider the example in (75).9

8 Some verbs seem to focus more on the change of state of the agent. Consider the contrast between the US (re-)invaded Kuwait and the US (re-)captured Kuwait. One could not call the US invasion a re-invasion because of the previous Iraqi invasion, whereas one could call the US capture a recapturing because of the previous control of Kuwait by the Kuwaitis. Thus it seems that for such verbs the result state should predicate the agent.

9 This example and general approach were pointed out to me by Lenhart Schubert, personal communication.


(75) As the parents of the murder victim looked on, the murderer was executed.

A meaning postulate can represent the entailment that if someone is a murderer, they killed someone. There must be some way to equate that someone who was killed with the murdered child of the parents. Episodic Logic is particularly well-suited to handle such cases since the truth conditions for expressions involving existentially bound variables state that if the variable is bound elsewhere that binding should be used (see [Schubert and Hwang, 1990], page 19, footnote 13).

There is one last feature of the arguments of re- verbs that should be noted before leaving this topic. The arguments of re- have an extensional quality.10

Contrast the sentences in (76a,c) with those in (76b,d). Those in (76a,c) allow a reading where the agents and themes are intensional (e.g. it could be that it was a different sixth grade class that evaluated a different teacher previously). The sentences in (76b,d) do not allow such a reading.

(76) a. The sixth grade class evaluated their teacher again.

b. The sixth grade class reevaluated their teacher.

c. The Queen inaugurated the Prime minister again.

d. The Queen reinaugurated the Prime minister.

Note that the wide scope of the quantifiers in (67) captures this observation. However, no explanation for this phenomenon is offered here.11

6.1.2 un- and de-

In [Marchand, 1969], three senses are assigned to these negative prefixes:

privative: 'clear - of': defrost, dehusk, debone, unnerve

ablative: 'remove from -': unsaddle, dethrone

reversative: 'reverse the effect of -ing': unbutton, desegregate, desynchronize

The reversative sense is dominant in the Brown corpus. In addition, it only applies to verbal bases, whereas the other two senses tend to apply to nouns. Thus here the concentration will be on the reversative sense of de- and un-.

"0 Wechsler also makes this observation.1 It should be noted that there might be a further problem with this analysis of re- related

to intensionality. For example, when one redesigns a house, there does not seem to be a singlehouse that has been in the same state twice. One way to fix this would be make re- apply tothe entire verb phrase, i.e., re would apply to the monadic predicate (design house).


The first thing to notice is that the reversative only applies to change-of-state verbs. In addition, the derived verbs are themselves change-of-state. Remember from our discussion of result states above that change-of-state verbs presuppose the negation of their result state. Of the 23 un- types encountered, all were change-of-state themselves; 91 percent of them had change-of-state bases. The counterexamples were unnerve and undo. Unnerve seems to be an example of the privative sense and thus is derived from the noun nerve, not the verb. Do is a light verb and, on its own, seems only to entail that something is done to the participant correspondent of the direct object. This action may be a change-of-state or it may be simply telic. It would seem plausible that only the actions that are change-of-state can be "undone" and therefore undo can be seen as regular in a more complicated way. The sentences that contain undo tokens in the Brown corpus are he undid the bow and there was a divine justice in one wrong thus undoing another. The first clearly conforms to the hypothesis. The second uses a metaphorical meaning of undo and it is unclear what should be made of it.

Moving on to de-: of the 35 de- types encountered, 43 percent had change-of-state bases and 34 percent were change-of-state themselves. The numbers for de- are less than encouraging. However, most of the counterexamples (e.g., debut, decry, defend, delay, delight, depart) seem to have little to do with the prefix, i.e., it seems unlikely that they were derived from the putative stem.

For both un- and de-, there is more to the relation between the base and the derived form than the fact that they are both change-of-state verbs: the presupposed state and the result state of the base are swapped in the derived form. For example, in order to lace a pair of shoes they must not be already laced up and the result is that they are laced; unlacing has the reversed pre- and post-conditions.

This observation is formalized in axiom (77).

(77) For all predicates P with feature BASE-NEG-RSTATE:
        (∀y [((rstate P) y) -> ¬((rstate (base P)) y)])

Note that the idea that the result state is monadic has been retained. This claim was argued for in Section 6.1.1 based on data concerning the prefix re- that showed that only the affected participant was necessarily involved in the presupposed state (see examples (71-75)). The presupposed states for un- and de- derived forms reinforce this claim. For example, to decentralize something, it is only crucial that the affected participant is central prior to the event of decentralizing. This is true of all the un-/de- change-of-state examples in the Brown corpus.


6.1.3 -ize, -ify, -en, and -ate

The central meaning of verbs derived using -ize, -en, -ify, or -ate12 is a description of a process of obtaining a result state that is related in some way to the base. The process is usually left largely underspecified. What is important is that the result state is obtained for one of the participants; the function of these verbs in English seems to be to provide very general verbs for referring to the acquisition of a result state. Some examples are equalize, stabilize, awaken, darken, glorify, intensify, alienate, and contaminate. In contrast, some telics that do incorporate specific restrictions on the process are fax (a letter), quaff (a beer), and paint (a picture). The verb fax specifies how the letter is sent, quaff specifies how the beer was drunk, and paint specifies how the picture was created. Because the derived verbs are so general, it is hard to imagine the result state coming about in a way that could not be described by the derived verb. For example, it is hard to conceive of a process by which something becomes legal which could not be described with the verb legalize. The only way something could be legal and not have been legalized is if it started its existence legal, e.g., drinking water was never legalized in New York State; it has always been legal. Not all states are like this. For example, the result state of capture is 'control' and one can control something by buying it, inheriting it, or claiming it; just because something is controlled doesn't mean it was captured. In sum, when a base adjective holds of an object, this presupposes that either an event described by the deadjectival verb occurred previously or the adjective has always been true of the object. This is encoded with the following axiom.

(78) For all predicates P with features IZE-DEP and DYADIC:
        (∀y,e [[((base P) y)*e] ->
                 [(∃e1 [[(∃x [x P y])**e1] ∧ [e1 before e]])
                  ∨ (∀e2: [e2 before e] [((base P) y)*e2])]])

80 percent of the 64 adjectival bases of -ize, 72 percent of the 36 bases of -en, and 93 percent of the 15 adjectival bases of -ify conform to this axiom.

The majority of the derived forms are change-of-state verbs: 78 percent of the 63 -Aize types, 56 percent of the 86 -Nize types, 100 percent of the 36 -en types, 94 percent of the 17 -Aify types, 67 percent of the 21 -Nify types, and 48 percent of the 365 -ate types.

In addition, for -Aize, -Aify, and -en, the result state of a derived verb is often the base itself.

(79) For all predicates P with features RSTATE-EQ-BASE and DYADIC:
        (∀x [((rstate P) x) -> ((base P) x)])

12 The "suffix" -ate should really be called a verbal ending since it is not a suffix in English, i.e., if one strips it off one is seldom left with a word. Nonetheless, it corresponds to lexical semantic information.

For 75 percent of the 63 -Aize types, for 58 percent of the 17 -Aify types, and for 100 percent of the 36 -en types this axiom holds. Some of the counterexamples are materialize, penalize, and specialize.

A subclass of the denominal -ize forms also conforms to this template, namely those that can be glossed as 'to make into a -'. Some examples are novelize, colonize, deputize, fetishize, idolize, and hypothesize. In Episodic Logic nouns, adjectives, and verbs are all of the same type: predicates. Thus (79) could be used for these nouns also. However, denominal forms are much less homogeneous than deadjectival forms. [Marchand, 1969] gives four principal semantic types in addition to the one just mentioned. The types are, quoting from him, 'convert into, put into the form of, give the character or shape of -', e.g., itemize and dramatize; 'subject to the action, treatment, or process of -', e.g., propagandize and hospitalize; 'subject to a special (technical) process connected with -', e.g., winterize and weatherize; and 'impregnate, treat, combine with -', e.g., alcoholize and alkalize. These types cannot be distinguished from each other on purely morpho-syntactic grounds and thus the axiom cannot be automatically assigned to any of the derived forms.

Finally, notice that many of these derived verbs seem to have readings that are contrary to the change-of-state axiom: their patient changes its state but not in a binary fashion. For example, one can blacken a pot but not make it completely black, and one can blacken it again. Some other examples are listed in (80).

(80) he brightened the room, she further centralized the power

Such verbs move their patient in a particular direction along a scale, and thus the binary terms of negation and statement which we have been using to refer to change of state need to be defined for verbs that involve such scalar adjectives. This problem will not be addressed here, however.

6.1.4 -le

All of the attention in this section has been on telic verbs, and this is because most derivational affixes in English seem to be concerned with telic verbs. However, there is a verb ending, in the sense that -ate is a verb ending, in English that tends to mark activities: -le. Some examples are chuckle, crumble, dazzle, grumble, guzzle, mingle, ramble, rustle, and struggle. 55 percent of the 71 types ending in -le are activities (as defined by the axiom for the ACTIVITY feature given above). If one removes by hand the denominals, i.e., any type with a corresponding homonym noun that is not eventive, 71 percent of the remaining 49 types are activities.


6.2 Event participants

Some verbs specify extremely particular properties for their participants. For example, the German verb fressen specifies that its agent cannot be a human. However, there are properties that many verbs specify for their participants. For example, many verbs specify that the doer of the verb must be volitional: must make a conscious choice to perform the verbal action. Some examples are tell, pay, write, and sell. Such properties are the basis for Dowty's [1991] theory of argument selection. Argument selection is the mapping from participants in an event to syntactic complements of a verb that describes the event. Dowty posits that there are two sets of properties: proto-agent properties and proto-patient properties. The syntactic realization of participants is determined by the number of such properties that they have. For example, if three participants are to be syntactically realized, then the one with the most proto-agent properties will be realized as the subject; of the remaining two, the one with the most proto-patient properties will be realized as the direct object; and the remaining one will be realized as the oblique object.13 Dowty's features are listed in (81) and (82) (taken verbatim from [Dowty, 1991] p. 572).

(81) Contributing properties for the Agent Proto-Role:

a. volitional involvement in the event or state

b. sentience (and/or perception)

c. causing an event or change of state in another participant

d. movement (relative to the position of another participant)

(e. exists independently of the event named by the verb)

(82) Contributing properties for the Patient Proto-Role:

a. undergoes change of state

b. incremental theme

c. causally affected by another participant

d. stationary relative to movement of another participant

(e. does not exist independently of the event, or not at all)
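Read procedurally, Dowty's selection principle is a counting procedure over these two property lists. The following sketch (in Python; the function name select_arguments and the property labels are invented for illustration, and the sketch is not code from this thesis) makes the counting concrete. As footnote 3 notes, the system does not predict how many participants are realized; the sketch simply assumes all of its inputs are.

# A sketch of Dowty's argument selection principle: the participant with
# the most proto-agent properties is realized as the subject; of the
# rest, the one with the most proto-patient properties becomes the
# direct object, and any third participant becomes the oblique object.

PROTO_AGENT = {"volitional", "sentient", "causer", "moves", "independent"}
PROTO_PATIENT = {"change-of-state", "incremental-theme",
                 "causally-affected", "stationary", "dependent"}

def select_arguments(participants):
    # participants: dict mapping a participant name to its property set.
    ranked = sorted(participants,
                    key=lambda p: len(participants[p] & PROTO_AGENT),
                    reverse=True)
    subject, rest = ranked[0], ranked[1:]
    rest.sort(key=lambda p: len(participants[p] & PROTO_PATIENT),
              reverse=True)
    direct_object = rest[0] if rest else None
    oblique_object = rest[1] if len(rest) > 1 else None
    return subject, direct_object, oblique_object

# For a verb like "sell": the seller is volitional, sentient, and a
# causer; the goods undergo the change of possession; the buyer falls
# in between.
print(select_arguments({
    "seller": {"volitional", "sentient", "causer", "independent"},
    "goods":  {"change-of-state", "incremental-theme", "causally-affected"},
    "buyer":  {"sentient", "causally-affected"},
}))  # -> ('seller', 'goods', 'buyer')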

If one knows which participants have which properties for a verb then one is a long way towards knowing the verb's argument structure. The following quote taken from [Barker and Dowty, 1993] explores this point further.

"3Note that Dowty's system does not predict how many or which participants will be realized.


"This view of roles suggests there many (sic) not be a need for 'argumentstructure" as a distinct level of linguistic representation from d-structure(or other syntactic structure) or for 'linking rules' to connect one to theother, as these exist in the currently more common views of Grimshaw[1990] and other works cited there. For if semantic arguments of verbsdo not fall into disjoint categories like traditional Agent, Theme, Goal,etc. but are fine-grained to an indefinitely specific degree, and if thedistribution of a verb's semantic arguments among its subcategorizedNPs is really only partially predictable on such semantic grounds at all(i.e. is to an extent idiosyncratic to the individual verb), then the most apriori straightforward theoretical architecture is one in which the associ-ation of semantic arguments with syntactic ones is trivial and a separateargument structure level is not invoked: semantic arguments can just aswell be indexed solely by the grammatical role the verb's lexical entryassociates with them, not in terms of Agent, Goal, etc. Generalizationsof linking or argument selection principles can be simply static partialgeneralizations in the lexicon about meaning and subcategorized NPs;this is the view assumed by Dowty [1991] ... To be sure, the proto-rolehypothesis is not really incompatible with the more complex view of ar-gument structure; it only suggests it may need more motivation and/orrethinking."

Can these ideas about argument selection be formalized? One can start by creating dyadic predicates that take a participant and an event as arguments, e.g., [x volitional-involved-in e] and [x sentient-of e]. Then a number of entailment relationships could be based on these predicates (see (83)).

(83) a. (∀x (∃e [x volitional-involved-in e]) -> [x sentient-of e])

b. (∀x (∃e [x sentient-of e]) -> (animate x))

These ideas have been introduced here because they are useful for specifying the semantics of the nominalizing suffix -ee [Barker and Dowty, 1993]. In addition, they set the stage for two other nominalizing affixes: -er and -ant. Some examples are listed in (84).

(84) a. appointee, assignee, draftee, honoree, interviewee

b. adapter, admirer, advertiser, adviser, analyzer, autoloader

c. accountant, acceptant, applicant, assailant, assistant

Before looking at each affix individually, consider a lexical inference that is supported by all of the derived forms: the existence of an event describable by their base, e.g., if one encounters the noun assailant in a dialogue then an event of assailing must have occurred or be occurring. This can be axiomatized as follows.


(85) For all predicates P with features PART-IN-E and DYADIC-BASE:
     (∀x (P x) -> (∃e (∃y,z [y (base P) z]*e) ∧ [x part e]))

6.2.1 -ee

At first glance it might seem that -ee nouns refer to the direct object of the verb as in (86).

(86) employee, nominee, draftee

However a central point of [Barker and Dowty, 1993] is that a syntactic characterization of the referent of -ee nouns is "either descriptively inadequate or severely disjunctive" [Barker, 1995]. He goes on to say that "the main difficulty for these syntactic theories is that from a syntactic point of view, the set of possible referents for -ee nouns does not seem to be a natural class" [Barker and Dowty, 1993] and gives the examples below.

(87) indirect object: donee, payee, addressee

object of governed prep: experimentee, laughee, gazee

subject: attendee, standee, escapee, retiree

As a final conclusive argument against a syntactic approach, he points out that many -ee nouns have no corresponding verbal syntactic complement, e.g., amputee, jokee, complainee, and twistee.

In lieu of a syntactic analysis, he provides a semantic one. He posits that the referent has the properties listed in (88).

(88) a. episodically linked

b. sentient of the event

c. non-volitional with respect to some aspect of the event

In the previous section property (88a) was discussed: it is marked with the feature PART-IN-E and was axiomatized in (85). Properties (88b) and (88c) are, in part, familiar from our above discussion of verbal argument selection. The additional phrase "with respect to some aspect of the event" of (88c) was added by Barker to handle examples like retiree, resignee, and escapee, which refer to volitional participants. He argues that often the referent of the -ee noun will have no volitional control of the duration of the event or of the occurrence of the event: draftee, honoree, licensee. However, if the event is punctual then it is often the result state that the participant does not have control of: retiree, resignee. Finally, Barker argues that escapee "emphasizes the dire consequences resulting from the decision to escape, since escapees are fugitives" [Barker, 1995]. These arguments seem less than convincing since they apply only to a small group of counterexamples and are heterogeneous with respect to them. It seems preferable to rephrase (88c) as (89c') and treat words like escapee as exceptions.

(89) c'. non-volitional with respect to the event

(88b) and (89c') are axiomatized in (90) and (91) respectively.

(90) For all predicates P with features SENTIENT and DYADIC-BASE:
     (∀x (P x) -> (∃e (∃y,z [y (base P) z]*e) ∧ [x sentient-of e]))

(91) For all predicates P with features NON-VOLITIONAL and DYADIC-BASE:
     (∀x (P x) -> (∃e (∃y,z [y (base P) z]*e) ∧
                  ¬[x volitional-involved-in e]))

91 percent of the 22 -ee derived forms conform to the PART-IN-E axiom; 82 percent conform to the SENTIENT axiom; and 68 percent conform to the NON-VOLITIONAL axiom.

6.2.2 -er and -ant

Since the -ee data can be explained using a semantic generalization, it would seem likely that the corresponding deverbal affixes -er and -ant might be as well. However, unlike -ee nouns, -er/-ant nouns refer to a semantically heterogeneous set of participants; see (92).

(92) non-volitional and non-sentient ('instrument'):
     -er: adapter, baffler, binder, boiler, feeder, hanger,
          heater, poker, sander, tranquilizer, transformer, viewer
     -ant: deterrent, coolant, depressant, lubricant, stimulant

     causer but non-volitional and non-sentient:
     -er: reminder, clencher, eye-opener, shocker, thriller,
          contender(?), disturber(?)
     -ant: descendant, communicant 'one who communicates information',
           participant

     habitual:
     -er: smoker, announcer, achiever, baker, commuter
     -ant: accountant, assistant, attendant, informant, servant

     episodic:
     -er: borrower, backer, bather
     -ant: celebrant, combatant, assailant, discussant

     undergoes change-of-state with respect to the base V (patient-like):
     -er: bouncer 'this ball is a bouncer',
          melter 'that ice cream flavor is a melter',
          keeper 'this fish is a keeper', sampler 'I ordered a sampler'

     location:
     -er: kneeler, scribbler

This situation is similar to that of semantic generalizations about participants that correspond to the subject position of a verb. Thus, in contrast to -ee, the primary generalization about the participant deverbal affixes -er and -ant seems to be a syntactic one: -er/-ant nouns can refer to a participant that can appear as the subject of the base verb. There appear to be very few exceptions to this generalization; however, see (93) for a few.

(93) kneeler 'stool to kneel on' [Marchand, 1969], scribbler 'writing pad' [Marchand, 1969], confidant 'someone you confide in'

Thus, upon encountering an -er/-ant derived form token, one can only conclude that an event of the base has occurred or is occurring and that the referent of the noun is a participant in this event (i.e., PART-IN-E). Of the 471 -er types, 85 percent conform to this generalization. Of the 21 -ant types, 81 percent conform.

6.3 Referring to events

Until now events have been something that an expression describes: Mary kissed John has the following logical form (ignoring tense):

(94) (∃e [Mary kiss John]**e).

However nominalizations can refer to an event directly: the performance was fabulous has the following logical form (ignoring tense):

(95) (the e: (performance e) (fabulous e)).

There are a number of suffixes in English that produce such nouns from verbs, e.g., -ment, -ance, -ion, and -al.

All of the derived forms presuppose an event of the base verb. For example, the use of the word performance presupposes that something was performed. In fact, the individual referred to by the word performance is often this presupposed event. However, performance can also refer to the proposition that something was performed, not to the performance itself. Consider (96).

(96) Mary's performance of the cello suites upset John


This sentence could either mean (i) that the particular performance upset John or (ii) that the fact that Mary performed the suites upset John. In reading (i), performance refers to the event and in (ii) to the proposition. This distinction is discussed at length in [Zucchi, 1989].

For many nominalizations, there is a third possible referent type and that is to an object resulting from the event: encampment, restatement, approximation, characterization, rental. Actually, a rental car does not result from the renting event; it would still exist if no event of renting occurred. More accurately, if no renting occurred it would not have the property of being a rental. The object that results from the event is one of the participants of the event.

The axiom encoding the presupposed event and the three possible referents is listed below.

(97) For all predicates P with features E-OR-P-OR-R and DYADIC-BASE:
     (∀x (P x) -> (∃e [(∃y,z [y (base P) z]**e) ∧
                       [[x = e] ∨
                        [x = (that [y P z]*e)] ∨
                        [[x part e] ∧
                         [(∀y1,z1,e1 (¬([y1 (base P) z1]**e1))) ->
                          ¬(P x)]]]]))

Notice that the three reference possibilities are expressed disjunctively in (97). This axiom also contains the presupposition of an event described by the base verb.

Notice also that in the sentence the performance was fabulous, information is being "added" to the event, i.e., not only was it a performance, it was fabulous. There is a problem with this addition of information. If the ** operator is used as is done in the first line of (97) then the event is a minimal one: the characterizing expression, in this case [y (base P) z], constitutes the full information content of the event. Thus the "fabulous performance" will be an event that is coextensive with but informationally larger than the "performance". However, intuitively there is only one event. If * is used instead of ** then the event which is referred to could have too much or too little information in it. For example, exactly what upset John in (98a) is ambiguous: it could be the whole of the performance itself or it could be some particular aspect such as when it happened or the manner in which it happened. However, it probably does not have anything to do with the snow storm that was taking place in New Zealand.

(98) a. Mary performed the 1st cello suite and it upset John

b. The performance took place at 7pm

c. The performance was too slow.


Episodic Logic includes ** as a tool for controlling the information content of an event and thus excluding unrelated pieces of information that could be spuriously added to the information content of an event. It says that its first argument, and the things following from it via axioms, constitute the whole of the information in the event. What nominalizations make clear is that it is too crude a tool for the intricate work of circumscribing the information content of an event.

One might think that one could introduce an event with slightly more information than the first when another piece of information is given. However, the fact that one can refer to events with pronouns makes this approach less appealing.

Consider (99).

(99) Mary performed the 1st cello suite and it upset John. It took place at 7pm and it was too slow.

If a new event is introduced for each piece of information, each pronoun would refer to a different event, which is undesirable. Thus, ** will continue to be used in the axioms presented here, in the hope that a better definition can be devised.

Nominalizations present a number of other interesting semantic problems including the relation between the arguments of the nominalization and those of the base verb (e.g., (100a)), the relation between the count/mass characteristics of the nominalization and the aspect of the base verb (e.g., (100b)), and the relation of adjectival modification of the nominalization to adverbial modification of the base verb (e.g., (100c)). However, I will not discuss these problems here. See [Zucchi, 1989] and [Hoeksema, 1985] for formal approaches to some of these problems.

(100) a. a depreciation in the value of the dollar ↔ the value of the dollar depreciated

b. *Preventions of diseases are good but Developments in the Middle East are encouraging

c. The skillful performance of the song by Mary ↔ Mary performed the song skillfully

6.3.1 -ment

Nouns derived by -ment conformed to the E-OR-P-OR-R axiom 88 percent of the time. There were 166 types. Some non-conforming examples are commencement, compliment, basement, and department.

A way to distinguish the semantics of -ment from that of the other nominalizing suffixes would be of interest, and a number of minimal pairs exist (e.g., excitation/excitement and installation/installment). However, there does not seem to be any coherent semantic distinction.


6.3.2 -age

Like -ment, -ance, -ion, and -al, -age produces deverbal nouns which presuppose an event described by their base. Some examples are blockage, seepage, marriage, drainage, and wreckage. However, these derived forms can only refer to some object resulting from their presupposed event. They do not refer to the event itself or to a corresponding proposition, e.g., seepage entails that seeping took place and that the seepage resulted from this seeping. The feature is called E-AND-R and the axiom is listed below. It is a reduced version of (97).

(101) For all predicates P with features E-AND-R and DYADIC-BASE:
      (∀x (P x) -> (∃e [(∃y,z [y (base P) z]**e) ∧
                        [x part e] ∧
                        [(∀y1,z1,e1 (¬([y1 (base P) z1]**e1))) ->
                         ¬(P x)]]))

Of the 43 -age derived types, 58 percent conformed to this axiom. The counterexamples include massage, salvage, sewage, frontage, carriage, and average.

6.4 Other verbal affixes

In addition to aspect, there is a variety of verbal lexical semantic information to which derivational affixes refer. Some of these affixes include -able, mis-, pre-, post-, and out-. The affixes mis- and -able are discussed below.

6.4.1 mis-

The verbal prefix mis-, e.g., miscalculate and misquote, produces verbs where an action is performed in an incorrect manner. The axiom is presented below. (Of the 21 types, 86 percent conformed to this axiom.)

(102) For all predicates P with features INCOR and DYADIC and DYADIC-BASE:
      (∀x,y [x P y] -> [x ((adv-m incorrect) ((base P) y))])

This axiom makes use of the operator adv-m, which takes a predicate and produces a predicate modifier that modifies a verb's manner. Exactly what incorrect entails would depend on the meaning of the base verb. However, one generalization that can be made is that most of the ramifications of a verb would be put into doubt by an incorrect performance.


6.4.2 -able

The words derived using -able state that it is possible to perform the action denoted by the base verb on the predicated object, e.g., something is enforceable if it is possible that something can enforce it. This analysis is taken from [Dowty, 1979] and his axiomatization translated into Episodic Logic is presented below.

(103) For all predicates P with features ABLE and DYADIC-BASE:
      (∀y (P y) -> ◇(∃x,e [x (base P) y]**e))

84 percent of the 148 types conform to this axiom.

6.5 Abstractions

English provides a number of affixes that produce nouns for referring to abstract entities. For example, nouns that refer to the state of having some property are produced by -ancy, -ness, and -ity as in dormancy, happiness, and fluidity. Nouns that refer to some abstract place or theory are produced by -ship and -ism as in chomskyism and friendship. Finally, -ful only applies to abstract nouns: careful, fearful, and graceful. Two of these affixes, -ness and -ful, are discussed below.

6.5.1 -ness

The base predicates for -ness are adjectives and thus are stative predicates. The derived forms refer to a kind of event where instances of this kind are events of something having this state (as formalized below).

(104) For all predicates P with the feature NESS:
      (∀x (P x) -> (∀e: [e instance-of x] (∃y ((base P) y)**e)))

Thus, "happiness" predicates a kind of event, instances of which are described bysomeone being happy. 97 percent of the 307 types conform to this axiom.

6.5.2 -ful

As mentioned above -ful and -less only apply to abstract nouns. The word abstract has an intuitively clear meaning: having no physical substance. However, it is very difficult to axiomatize this intuition satisfactorily. (105) represents an attempt.

(105) For all predicates P with the feature ABST:
      (∀x (P x) -> (no-physical-extent x))


What exactly would follow from no-physical-extent is unclear. However, of the 76 -ful bases 93 percent conformed to this intuition.

6.6 Antonyms

The prefixes dis-, un-, de-, and non- produce antonyms of their bases. The prefixes un- and de- have already been looked at. Below we will look at a pair of suffixes, -ful/-less, that produce derived forms that are antonyms. Another pair of affixes with this behavior is anti-/pro-.

6.6.1 -ful/-less

Of the 22 -ful/-less pairs, 77 percent stand in an antonym relationship. Some examples are art, care, color, fruit, harm, meaning, and purpose.

In order to state the antonym relationship it is necessary to be able to refer to the -less and -ful forms in the representational logic. Towards this end all the derived forms of -ful and -less have the features FUL and LESS respectively. The axioms corresponding to these features introduce predicate modifiers that relate the base predicate to the derived predicate (see (106)).

(106) For all predicates P with the feature LESS:
      (∀x (P x) <-> ((less (base P)) x))

      For all predicates P with the feature FUL:
      (∀x (P x) <-> ((ful (base P)) x))

These links back from the base predicate to the derived form predicate can be used to specify a relationship between two forms derived from different affixes but using the same base, as in (107).

(107) For all predicates P with the feature LESSANT:
      (∀x (P x) -> ¬((less (base P)) x))

      For all predicates P with the feature FULANT:
      (∀x (P x) -> ¬((ful (base P)) x))

Note that both of these axioms could be derived from the axiom below.

(108) For all predicates P with the feature LESSFULANT:
      (∀x [x (less P)] -> ¬[x (ful P)])



7 Conclusion

Some natural language processing (NLP) tasks can be performed with only coarse-grained semantic information about individual words. For example, a system could utilize word frequency and a word co-occurrence matrix in order to perform information retrieval. However, many NLP tasks require at least a partial understanding of every sentence or utterance in the input and thus have a much greater need for lexical semantics. Natural language generation, providing a natural language front end to a database, information extraction, machine translation, and task-oriented dialogue understanding all require lexical semantics. The lexical semantic information commonly utilized includes verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner. The previous chapters have addressed two primary concerns related to such lexical semantic information: how to represent it and how to acquire it.

With respect to representation, the focus has been on supporting lexically-based inferences. The representation language of First Order Logic (FOL) has well-understood semantics and a multitude of inferencing systems have been implemented for it. Thus it is a prime candidate to serve as a lexical semantics representation. However, it has been argued here that FOL, although a good starting point, needs to be extended before it can efficiently and concisely support all the lexically-based inferences needed. Using data primarily from the TRAINS dialogues, the following extensions were argued for: modal operators, predicate modification, restricted quantification, and non-standard quantifiers. These representational tools are present in many systems for sentence-level semantics but have not previously been discussed in the context of lexical semantics.

A number of approaches to acquisition were considered and the approach of surface cueing was found particularly attractive given the current state of technology. Morphological cueing, a type of surface cueing, was introduced. It makes use of fixed correspondences between derivational affixes and lexical semantic information. The semantics of 18 affixes were discussed briefly and data resulting from the application of the method to the Brown corpus were presented. To summarize, lexical semantic information for over 2500 words was collected and the information had a precision of 76 percent on average. These words are listed along with their semantic features in the appendix. The axioms defining these features and the affixes that cue them were discussed in detail in Chapter 6.

In addition to this off-line process of sifting through corpora looking for morphological cues, these cues can also be used at run-time to help a natural language understanding system deal with unknown words. On average, every fourth sentence in the Brown corpus contains a derived word not in the 60,000 word Alvey dictionary. The Kenilex system provides lexical semantic information for such derived unknown words.
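As an illustration of this run-time use, the sketch below (in Python; the abbreviated suffix table and the name cue_features are invented for exposition and do not reproduce the Kenilex implementation) maps an unknown word's ending to the semantic features its affix cues:

# A sketch of run-time morphological cueing: an unknown word is matched
# against known derivational endings and tagged with the semantic
# features that the affix cues (feature names as in Chapter 6).

SUFFIX_FEATURES = [
    ("ness", ["NESS"]),
    ("ment", ["E-OR-P-OR-R"]),
    ("able", ["ABLE"]),
    ("ee",   ["PART-IN-E", "SENTIENT", "NON-VOLITIONAL"]),
    ("er",   ["PART-IN-E"]),
]

def cue_features(word):
    # Returns (stem, features) for the first matching ending, or None.
    # A real analyzer would also check that the stem is a plausible
    # base; this sketch only strips the ending.
    for suffix, features in SUFFIX_FEATURES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)], features
    return None

print(cue_features("employee"))
# -> ('employ', ['PART-IN-E', 'SENTIENT', 'NON-VOLITIONAL'])
print(cue_features("shirker"))
# -> ('shirk', ['PART-IN-E'])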

In sum, this dissertation has considered the representation and acquisition of lexical semantic information and presented a method for acquiring such information based on derivational morphology.

Of course there are a great many extensions of the work presented here that should be considered. Only a small number are mentioned below.

First, more data is needed. Only 18 derivational affixes have been considered and English has over 40. In addition, there is certainly more semantic information that could be gleaned from the 18 affixes than has been presented here. Furthermore, corpora larger than the Brown corpus should be used.

Next, as mentioned in Chapter 4, word sense disambiguation needs to be performed for the base of the derived forms. As was also mentioned in Chapter 4, instead of considering all of the analyses provided by the morphological analyzer, only the most plausible ones should be considered. In addition, the mechanism for dealing with unknown bases needs to be improved.

Finally, only one aspect of representing lexical semantic information has been considered here: supporting lexically-based inferences. Other pressing issues include representing the default nature of lexical semantic generalizations and representing the hierarchical structure of this information (see [Copestake, 1992]).


Bibliography

[Allen et al., 1994] J. Allen, L. Schubert, G. Ferguson, P. Heeman, C. H. Hwang, T. Kato, M. Light, N. Martin, B. Miller, M. Poesio, and D. Traum, "The TRAINS Project: a Case Study in Building a Conversational Planning Agent," Journal of Experimental and Theoretical Artificial Intelligence, 7, 1994.

[Alshawi, 1987] H. Alshawi, Memory and Context for Language Interpretation, Cambridge University Press, 1987.

[Alshawi, 1989] H. Alshawi, "Analysing the Dictionary Definitions," In B. Boguraev and T. Briscoe, editors, Computational Lexicography for Natural Language Processing, pages 153-169. Longman, 1989.

[Alshawi, 1992] H. Alshawi, The Core Language Engine, MIT Press, 1992.

[Andry et al., 1995] F. Andry, J.M. Gawron, J. Dowding, and R. Moore, "A Tool for Collecting Domain Dependent Sortal Constraints from Corpora," In Proceedings of the 33rd Meeting of the Association for Computational Linguistics, 1995.

[Anglin, 1993] J. M. Anglin, "Vocabulary Development: a Morphological Analysis," Monographs of the Society for Research in Child Development, 58(10), 1993.

[Aronoff, 1976] M. Aronoff, Word Formation in Generative Grammar, MIT Press, 1976.

[Backofen and Nerbonne, 1992] R. Backofen and J. Nerbonne, "Realization and Directed Parsing," In Proceedings of the German Workshops on Artificial Intelligence, 1992.

[Barker, 1995] C. Barker, "Episodic -ee in English: Thematic Relations and New Word Formation," In Proceedings of the Semantics and Linguistic Theory (SALT 5) Conference. DMLL, Cornell University, 1995.

[Barker and Dowty, 1993] C. Barker and D. Dowty, "Non-verbal Thematic Proto-roles," In A. J. Schafer, editor, Proceedings of the North East Linguistic Society 23, pages 49-62. GLSA, Amherst, 1993.


[Barwise and Cooper, 1981] J. Barwise and R. Cooper, "Generalized Quantifiers and Natural Language," Linguistics and Philosophy, 4:159-219, 1981.

[Barwise and Perry, 1983] J. Barwise and J. Perry, Situations and Attitudes, MIT Press, 1983.

[Bates et al., 1994] M. Bates, R. Bobrow, R. Ingria, and D. Stallard, "The DELPHI Natural Language Understanding System," In Proceedings of the 4th Conference on Applied Natural Language Processing (ANLP), pages 132-137, 1994.

[Bear, 1988] J. Bear, "Morphology with Two-Level Rules and Negative Rule Features," In COLING, pages 272-276, 1988.

[Berube et al., 1982] M. Berube, D. Neely, and P. DeVinne, American Heritage Dictionary Second College Edition, Houghton Mifflin, 1982.

[Berwick, 1983] R. Berwick, "Learning Word Meanings from Examples," In Proceedings of the 8th International Joint Conference on Artificial Intelligence (IJCAI-83), 1983.

[Black et al., 1986] A.W. Black, S.G. Pulman, G.D. Ritchie, and G.J. Russell, "A Dictionary and Morphological Analyser for English," In COLING, pages 277-279, 1986.

[Bobrow and Webber, 1980] R. Bobrow and L. Webber, "Knowledge Representation for Syntactic/Semantic Processing," In Proceedings of the First National Conference of the American Association for Artificial Intelligence (AAAI-80), pages 316-323, 1980.

[Bowerman, 1982] M. Bowerman, "Reorganizational Processes in Lexical and Syntactic Development," In E. Wanner and L. R. Gleitman, editors, Language Acquisition: the State of the Art. 1982.

[Brachman and Levesque, 1984] R. J. Brachman and H. J. Levesque, "The Tractability of Subsumption in Frame-based Description Languages," In Proceedings of the Fourth National Conference of the American Association for Artificial Intelligence (AAAI-84), pages 34-37, 1984.

[Brill, 1994] E. Brill, "Some Advances in Transformation-Based Part of Speech Tagging," In Proceedings of AAAI-94, 1994.

[Brown et al., 1991] P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer, "Word-Sense Disambiguation Using Statistical Methods," In Proceedings of ACL 29, 1991.

[Brown et al., 1992] P.F. Brown, P.V. deSouza, R.L. Mercer, V.J. Della Pietra, and J.C. Lai, "Class-Based n-gram Models of Natural Language," Computational Linguistics, 18(4), 1992.


[Burkert and Forster, 1992] G. Burkert and P. Forster, "Representation of Semantic Knowledge with Term Subsumption Languages," In J. Pustejovsky and S. Bergler, editors, Lexical Semantics and Knowledge Representation. Springer-Verlag, 1992.

[Byrd, 1983] R.J. Byrd, "Word Formation in Natural Language Processing Systems," In COLING, pages 704-706, 1983.

[Chafe, 1980] W. Chafe, The Pear Stories: Cognitive, Cultural and Linguistic Aspects of Narrative Production, Ablex Publishing Corp., Norwood, NJ, 1980.

[Chierchia and McConnell-Ginet, 1990] Gennaro Chierchia and Sally McConnell-Ginet, Meaning and Grammar: an Introduction to Semantics, MIT Press, Cambridge, 1990.

[Chinchor et al., 1993] N. Chinchor, L. Hirschman, and D.D. Lewis, "Evaluating Message Understanding Systems: An Analysis of the Third Message Understanding Conference (MUC-3)," Computational Linguistics, 19(3):409-449, 1993.

[Chodorow et al., 1985] M. S. Chodorow, R. J. Byrd, and G. E. Heidorn, "Extracting Semantic Hierarchies from a Large On-line Dictionary," In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, 1985.

[Clark, 1982] E. V. Clark, "The Young Word Maker: a Case Study of Innovation in the Child's Lexicon," In E. Wanner and L. R. Gleitman, editors, Language Acquisition: the State of the Art. 1982.

[Clark and Cohen, 1984] E. V. Clark and S. R. Cohen, "Productivity and Memory for Newly Formed Words," Journal of Child Language, 2:611-625, 1984.

[Copestake, 1992] A. Copestake, The Representation of Lexical Semantic Information, PhD thesis, University of Sussex, Brighton, UK, 1992.

[Copestake and Briscoe, 1992] A. Copestake and T. Briscoe, "Lexical Operations in a Unification-Based Framework," In J. Pustejovsky and S. Bergler, editors, Lexical Semantics and Knowledge Representation. Springer-Verlag, 1992.

[Copestake et al., 1993] A. Copestake, Antonio Sanfilippo, T. Briscoe, and V. de Paiva, "The ACQUILEX LKB," In T. Briscoe, V. de Paiva, and A. Copestake, editors, Inheritance, Defaults, and the Lexicon, pages 148-163. Cambridge University Press, 1993.

[Cumming, 1986] S. Cumming, "The Lexicon in Text Generation," Technical Report ISI/RR-86-168, Information Sciences Institute, University of Southern California, 1986.


[Dahl et al., 1987] D.A. Dahl, M. Palmer, and R.J. Passonneau, "Nominalizations in PUNDIT," In Proceedings of the 25th Meeting of the Association for Computational Linguistics, 1987.

[de Bruin and Scha, 1988] J. de Bruin and R. Scha, "The Interpretation of Relational Nouns," In Proceedings of the 26th Meeting of the Association for Computational Linguistics, 1988.

[Domenig, 1988] M. Domenig, "Word Manager: A System for the Definition, Access and Maintenance of Lexical Databases," In COLING, pages 154-159, 1988.

[Dorr and Lee, 1992] B. J. Dorr and K. Lee, "Building a Lexicon for Machine Translation: Use of Corpora for Aspectual Classification of Verbs," Technical Report CS-TR-2876, University of Maryland, 1992.

[Dorr, 1993] B.J. Dorr, Machine Translation: a View from the Lexicon, MIT Press, 1993.

[Dorr, 1994] B.J. Dorr, "Machine Translation Divergences: a Formal Description and a Proposed Solution," Computational Linguistics, 20(4):597-633, 1994.

[Dowding et al., 1993] J. Dowding, J.M. Gawron, D. Appelt, J. Bear, L. Cherny, R. Moore, and D. Moran, "Gemini: a Natural Language System for Spoken-Language Understanding," In Proceedings of the 31st Meeting of the Association for Computational Linguistics, 1993.

[Dowty, 1979] David Dowty, Word Meaning and Montague Grammar, D. Reidel Publishing, 1979.

[Dowty, 1991] David Dowty, "Thematic Proto-Roles and Argument Selection," Language, 67(3), 1991.

[Eberle, 1991] K. Eberle, "On Representing the Temporal Structure of Texts," In O. Herzog and C.-R. Rollinger, editors, Text Understanding in LILOG: Integrating Computational Linguistics and Artificial Intelligence, Final Report on the IBM Germany LILOG-Project. Springer-Verlag, 1991.

[Fisher et al., 1991] C. Fisher, H. Gleitman, and L. R. Gleitman, "On the Semantic Content of Subcategorization Frames," Cognitive Psychology, 23:331-392, 1991.

[Fodor, 1970] J. A. Fodor, "Three Reasons for Not Deriving 'Kill' from 'Cause to Die'," Linguistic Inquiry, 1:429-438, 1970.

[Gleitman, 1990] L. Gleitman, "The Structural Sources of Verb Meanings," Language Acquisition, 1:1-55, 1990.


[Gleitman and Landau, 1994] L. Gleitman and B. Landau, The Acquisition of the Lexicon, MIT Press, 1994.

[Gomez, 1994] F. Gomez, "Acquiring Knowledge from Encyclopedic Texts," In Proceedings of the 4th Conference on Applied Natural Language Processing (ANLP), pages 84-90, 1994.

[Gordon, 1989] P. Gordon, "Levels of Affixation in the Acquisition of English Morphology," Journal of Memory and Language, 28:519-530, 1989.

[Granger, 1977] R. Granger, "Foulup: a Program that Figures out Meanings of Words from Context," In Proceedings of the 5th International Joint Conference on Artificial Intelligence, 1977.

[Grimshaw, 1981] J. Grimshaw, "Form, Function, and the Language Acquisition Device," In C. L. Baker and J. J. McCarthy, editors, The Logical Problem of Language Acquisition. MIT Press, 1981.

[Grimshaw, 1990] J. Grimshaw, Argument Structure, MIT Press, Cambridge, 1990.

[Grishman and Sterling, 1992] R. Grishman and J. Sterling, "Acquisition of Selectional Patterns," In Proceedings of COLING-92, 1992.

[Grishman and Sterling, 1993] R. Grishman and J. Sterling, "Smoothing of Automatically Generated Selectional Constraints," In Proceedings of the Human Language Technology Workshop, ARPA, 1993.

[Gross, 1993] D. Gross, "The TRAINS 91 Dialogues," Technical Report 92-1, Department of Computer Science, University of Rochester, 1993.

[Gross et al., 1993] D. Gross, J.F. Allen, and D.R. Traum, "The TRAINS 91 Dialogues," Technical Report 92-1, Department of Computer Science, University of Rochester, 1993.

[Haenelt and Wanner, 1992] Karin Haenelt and Leo Wanner, International Workshop on The Meaning-Text Theory, Gesellschaft für Mathematik und Datenverarbeitung mbH, 1992.

[Hastings, 1994] P. Hastings, Automatic Acquisition of Word Meaning from Context, PhD thesis, University of Michigan, 1994.

[Hayes, 1979] P.J. Hayes, "The Logic of Frames," In D. Metzing, editor, Frame Conceptions and Text Understanding. de Gruyter, 1979.

[Hearst, 1992] M. Hearst, "Automatic Acquisition of Hyponyms from Large Text Corpora," In COLING 14 (Nantes), 1992.


[Hearst and Schuetze, 1994] M. A. Hearst and H. Schuetze, "Customizing a Lexicon to Better Suit a Computational Task," ms., 1994.

[Heemskerk, 1993] J. S. Heemskerk, "A Probabilistic Context-free Grammar for Disambiguation in Morphological Parsing," In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics, pages 183-192, 1993.

[Herzog and Rollinger, 1991] O. Herzog and C. R. Rollinger, Text Understanding in LILOG: Integrating Computational Linguistics and Artificial Intelligence, Final Report on the IBM Germany LILOG-Project, Springer-Verlag, 1991.

[Hindle, 1990] D. Hindle, "Noun Classification from Predicate-Argument Structures," In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, 1990.

[Hinrichs, 1981] E. W. Hinrichs, "Temporale Anaphora im Englischen," Staatsexamen thesis, Universität Tübingen, 1981.

[Hoeksema, 1985] J. Hoeksema, Categorial Morphology, Garland Publishing, Inc., 1985.

[Hutchins and Somers, 1992] W.J. Hutchins and H.L. Somers, An Introduction to Machine Translation, Academic Press, 1992.

[Hwang and Schubert, 1993a] C.H. Hwang and L. Schubert, "Episodic Logic: a Comprehensive Natural Representation for Language Understanding," Minds and Machines, 3(4):381-419, 1993.

[Hwang and Schubert, 1993b] C.H. Hwang and L. Schubert, "Episodic Logic: a Situational Logic for Natural Language Processing," In P. Aczel, D. Israel, Y. Katagiri, and S. Peters, editors, Situation Theory and its Applications, volume 3, pages 303-338. CSLI, Stanford University, 1993.

[Iordanskaja et al., 1991] L. Iordanskaja, R. Kittredge, and A. Polguère, "Lexical Selection and Paraphrase in a Meaning-Text Generation Model," In C.L. Paris, W.R. Swartout, and W.C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers, 1991.

[Jackendoff, 1990] Ray Jackendoff, Semantic Structures, MIT Press, Cambridge, 1990.

[Jacobs and Zernik, 1988] P. Jacobs and U. Zernik, "Acquiring Lexical Knowledge from Text: a Case Study," In Proceedings of the Eighth National Conference of the American Association for Artificial Intelligence (AAAI-88), pages 739-744, 1988.


[Johnson, 1987] M. Johnson, Attribute-Value Logic and the Theory of Grammar, Center for the Study of Language and Information, 1987.

[Kamp, 1979] H. Kamp, "Events, Instants and Temporal Reference," In R. Bäuerle, U. Egli, and A. von Stechow, editors, Semantics from Different Points of View, pages 376-417. Springer-Verlag, 1979.

[Karttunen, 1983] L. Karttunen, "KIMMO: A General Morphological Processor," Texas Linguistic Forum, 22:165-278, 1983.

[Kasper and Rounds, 1986] R. T. Kasper and W. C. Rounds, "A Logical Semantics for Feature Structures," In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, 1986.

[Koskenniemi, 1984] K. Koskenniemi, "A General Computational Model for Word-Form Recognition and Production," In COLING, pages 178-181, 1984.

[Kucera and Francis, 1967] H. Kucera and W. Francis, Computational Analysis of Present-day American English, Brown University Press, Providence, R.I., 1967.

[Kuhlen, 1983] R. Kuhlen, "Topic: a System for Automatic Text Condensation," ACM SIGART Newsletter, 83:20-21, 1983.

[Laubsch and Nerbonne, 1991] J. Laubsch and J. Nerbonne, "An Overview of NLL," Technical report, DFKI, Saarbrücken, Germany, 1991.

[Levin, 1993] B. Levin, English Verb Classes and Alternations: a Preliminary Investigation, The University of Chicago Press, 1993.

[Levin and Rappaport Hovav, 1994] B. Levin and M. Rappaport Hovav, "A Preliminary Analysis of Causative Verbs in English," In L. Gleitman and B. Landau, editors, The Acquisition of the Lexicon. MIT Press, 1994.

[Light, 1992] M. Light, "Rehashing Re-," In Proceedings of the Eastern States Conference on Linguistics. Cornell University Linguistics Department Working Papers, 1992.

[Marchand, 1969] Hans Marchand, The Categories and Types of Present-Day English Word-Formation, 2nd Ed., Beck, München, 1969.

[Marcus et al., 1993] M. Marcus, B. Santorini, and M. A. Marcinkiewicz, "Building a Large Annotated Corpus of English: The Penn Treebank," Computational Linguistics, 19(2):313-330, 1993.

[Markowitz et al., 1986] J. Markowitz, T. Ahlswede, and M. Evens, "Semantically Significant Patterns in Dictionary Definitions," In Proceedings of the 24th Annual Meeting of the Association for Computational Linguistics, 1986.


[Matthiessen, 1991] C. Matthiessen, "Lexico(grammatical) Choice in Text Generation," In C.L. Paris, W.R. Swartout, and W.C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers, 1991.

[McDonald, 1991] D.D. McDonald, "On the Place of Words in the Generation Process," In C.L. Paris, W.R. Swartout, and W.C. Mann, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers, 1991.

[Mel'čuk and Polguère, 1987] I. Mel'čuk and A. Polguère, "A Formal Lexicon in the Meaning-Text Theory (or How to Do Lexica with Words)," Computational Linguistics, 13(3-4):261-275, 1987.

[Mel'čuk and Polguère, 1995] I. Mel'čuk and A. Polguère, "Lexical Functions: a Tool for the Description of Lexical Relations in a Lexicon," unpublished manuscript, 1995.

[Mercer, 1992] R. Mercer, "Presuppositions and Default Reasoning: A Study in Lexical Pragmatics," In J. Pustejovsky and S. Bergler, editors, Lexical Semantics and Knowledge Representation. Springer-Verlag, 1992.

[Merialdo, 1994] B. Merialdo, "Tagging English Text with a Probabilistic Model," Computational Linguistics, 20(2):155-172, 1994.

[Montague, 1974] R. Montague, "The Proper Treatment of Quantification in Ordinary English," In R. H. Thomason, editor, Formal Philosophy. Selected Papers of Richard Montague. Yale University Press, 1974.

[Moore, 1981] R.C. Moore, "Problems in Logical Form," In Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, 1981.

[Nagao, 1987] M. Nagao, "Role of Structural Transformation in a Machine Translation System," In S. Nirenburg, editor, Machine Translation: Theoretical and Methodological Issues. Cambridge University Press, 1987.

[Naigles et al., 1989] L. Naigles, H. Gleitman, and L.R. Gleitman, "Children Acquire Word Meaning Components from Syntactic Evidence," In E. Dromi, editor, Linguistic and Conceptual Development. 1989.

[Nebel and Smolka, 1991] B. Nebel and G. Smolka, "Attributive Description Formalisms ... and the Rest of the World," In O. Herzog and C. Rollinger, editors, Text Understanding in LILOG. Springer-Verlag, 1991.

[Nerbonne, 1992] J. Nerbonne, "NLL Models," Technical report, DFKI, Saarbrücken, Germany, 1992.


[Nerbonne, 1993] John Nerbonne, "A Feature-based Syntax/Semantics Interface," Annals of Mathematics and Artificial Intelligence, 8:107-132, 1993.

[Nirenburg and Raskin, 1987] S. Nirenburg and V. Raskin, "The Subworld Concept Lexicon and the Lexicon Management System," Computational Linguistics, 13(3-4):276-289, 1987.

[Palmer, 1991] M. Palmer, "General Lexical Representation for an Effect Predicate," In J. Pustejovsky and S. Bergler, editors, Proceedings of the Lexical Semantics and Knowledge Representation Workshop Sponsored by the SIGLex, 1991.

[Palmer et al., 1986] M. Palmer, D.A. Dahl, R.J. Schiffman, L. Hirschman, M. Linebarger, and J. Dowding, "Recovering Implicit Information," In Proceedings of the Strategic Computing Natural Language Workshop sponsored by DARPA, 1986.

[Palmer, 1990] M.S. Palmer, Semantic Processing for Finite Domains, Cambridge University Press, 1990.

[Partee, 1984] B. H. Partee, "Nominal and Temporal Anaphora," Linguistics and Philosophy, 7:243-286, 1984.

[Passonneau, 1988] R.J. Passonneau, "A Computational Model of the Semantics of Tense and Aspect," Computational Linguistics, 14(2):44-60, 1988.

[Pentheroudakis and Vanderwende, 1993] J. Pentheroudakis and L. Vanderwende, "Automatically Identifying Morphological Relations in Machine-Readable Dictionaries," In Proceedings of the Ninth Annual Conference of the UW Centre for the New OED and Text Research, 1993.

[Pereira et al., 1993] F. Pereira, N. Tishby, and L. Lee, "Distributional Clustering of English Words," In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 1993.

[Pinker, 1989] S. Pinker, Learnability and Cognition: The Acquisition of Argument Structure, MIT Press, Cambridge, 1989.

[Pinker, 1994] S. Pinker, "How Could a Child Use Verb Syntax to Learn Verb Semantics?," In L. Gleitman and B. Landau, editors, The Acquisition of the Lexicon. MIT Press, 1994.

[Pletat, 1991] U. Pletat, "The Knowledge Representation Language LLILOG," In O. Herzog and C. Rollinger, editors, Text Understanding in LILOG. Springer-Verlag, 1991.

[Pugeault, 1994] F. Pugeault, "Handling Thematic Role Distributions in Argument Structures," In H. Trost, editor, Proceedings of KONVENS-94, 1994.


[Pustejovsky, 1988] J. Pustejovsky, "Constraints on the Acquisition of Semantic Knowledge," International Journal of Intelligent Systems, 3:247-268, 1988.

[Pustejovsky and Anick, 1988] J. Pustejovsky and P. Anick, "On the Semantics of Nominals," In COLING, 1988.

[Pustejovsky et al.] J. Pustejovsky, S. Bergler, and P. Anick, "Lexical Semantic Techniques for Corpus Analysis," unpublished manuscript.

[Pustejovsky and Boguraev, 1993] J. Pustejovsky and B. Boguraev, "Lexical Knowledge Representation and Natural Language Processing," Artificial Intelligence, 63:193-223, 1993.

[Pustejovsky and Nirenburg, 1987] J. Pustejovsky and S. Nirenburg, "Lexical Selection in the Process of Language Generation," In Proceedings of the 25th Meeting of the Association for Computational Linguistics, 1987.

[Pustejovsky, 1991] James Pustejovsky, "The Generative Lexicon," Computational Linguistics, 17:409-441, 1991.

[Reiter, 1991] E. Reiter, "A New Model of Lexical Choice for Nouns," Computational Intelligence, 7:240-251, 1991.

[Rickheit, 1991] M. Rickheit, "Sortal Information in Lexical Concepts," In O. Herzog and C.-R. Rollinger, editors, Text Understanding in LILOG: Integrating Computational Linguistics and Artificial Intelligence, Final Report on the IBM Germany LILOG-Project. Springer-Verlag, 1991.

[Ritchie et al., 1992a] G.D. Ritchie, G.J. Russell, A.W. Black, and S.G. Pulman, Computational Morphology: Practical Mechanisms for the English Lexicon, MIT Press, 1992.

[Ritchie et al., 1992b] Graeme D. Ritchie, Graham J. Russell, Alan W. Black, and Steve G. Pulman, Computational Morphology: Practical Mechanisms for the English Lexicon, MIT Press, 1992.

[Rooth, 1995] M. Rooth, "Latent Class Models for Grammatical Relations," unpublished manuscript, 1995.

[Rupp et al., 1992] C.J. Rupp, R. Johnson, and M. Rosner, "Situation Schemata and Linguistic Representation," In M. Rosner and R. Johnson, editors, Computational Linguistics and Formal Semantics. Cambridge University Press, 1992.


[Schank and Abelson, 1977] R. Schank and R. Abelson, Scripts, plans, goals, andunderstanding, Lawrence Erlbaum Associates, 1977.

[Schmolze and Israel, 1983] J. G. Schmolze and D. J. Israel, "KL-ONE: Seman-tics and classification," In Research in Knowledge Representation and NaturalLanguage Understanding, pages 27-39. Bolt, Beranek, and Newman Inc., 1983,BBN Technical Report No. 5421.

[Schubert, 1982] L. Schubert, "An Approach to the Syntax and Semantics ofAffixes in Conventionalized Phrase Structure Grammar," In Proceedings of theFourth Bienn. Conference of the CSCSI/SCEIO, pages 189-195, 1982.

[Schubert, 1990] L. K. Schubert, "Semantic Nets are in the Eye of the Beholder,"Technical Report 346, University of Rochester, 1990.

[Schubert and Hwang, 1990] L. K. Schubert and C. H. Hwang, "An EpisodicKnowledge Representation for Narrative Texts," Technical Report 345, De-partment of Computer Science, University of Rochester, 1990.

[Schuetze, 1992a] H. Schuetze, "Dimensions of Meaning," In Proceedings of Su-percomputing 92, 1992.

[Schuetze, 1992b] H. Schuetze, "Word Sense Disambiguation With SublexicalRepresentations," In Statistically-Based NLP Techniques (AAAI Workshop,July 12-16, 1992, San Jose, CA.), pages 109-113, 1992.

[Schuetze, 1993] H. Schuetze, "Word Space," In S.J. Hanson, J.D. Cowan, andC.L. Giles, editors, Advances in Neural Information Processing Systems 5. Mor-gan Kaufmann, San Mateo CA, 1993.

[Selfridge, 1980] M. G. R. Selfridge, A Process Model of Language Acquisition,PhD thesis, Yale, 1980.

[Siskind, 1990] J. M. Siskind, "Acquiring Core Meanings of Words, Represented asJackendoff-style Conceptual Structures, from Correlated Streams of Linguisticand Non-linguistic Input," In Proceedings of the 28th Meeting of the Associationfor Computational Linguistics, 1990.

[Smith, 1970] C. S. Smith, "Jespersen's 'move' and 'change' class and causativeverbs in English," In M. A. Jazayery, E. C. Palome, and W. Winter, editors,Linguistic and literary studies in honor of Archibald A. Hill, Vol. 2: Descriptivelinguistics. The Hague: Mouton, 1970.

[Smolka, 1991] G. Smolka, "A Feature Logic with Subsorts," In J. Wedekind andC. Rohrer, editors, Unification in Grammar. MIT Press, 1991.

[Sondheimer et al., 1989] N.K. Sondheimer, S. Cumming, and R. Albano, "Howto Realize a Concept: Lexical Selection and the Conceptual Network in Text

92

Generation," Technical Report ISI/RR-89-248, Information Sciences Institute,University of Southern California, 1989.

[Sproat, 1992] R. Sproat, Morphology and Computation, MIT Press, 1992.

[Tenny, 1992] C. Tenny, "The Aspectual Interface Hypothesis," In I. A. Sag and A. Szabolcsi, editors, Lexical Matters. CSLI, 1992.

[Trost, 1993] H. Trost, "Coping with Derivation in a Morphological Component," In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics, pages 368-376, 1993.

[Tyler, 1989] A. Tyler, "The Acquisition of English Derivational Morphology," Journal of Memory and Language, 28:649-667, 1989.

[Velardi et al., 1991] P. Velardi, M. T. Pazienza, and M. Fasolo, "How to Encode Semantic Knowledge: a Method for Meaning Representation and Computer-aided Acquisition," Computational Linguistics, 17(2), 1991.

[Vendler, 1967] Zeno Vendler, Linguistics in Philosophy, Cornell University Press, Ithaca, New York, 1967.

[Webber, 1988] B.L. Webber, "Tense as Discourse Anaphora," Computational Linguistics, 14(2):61-73, 1988.

[Webster and Marcus, 1989] M. Webster and M. Marcus, "Automatic Acquisition of the Lexical Semantics of Verbs from Sentence Frames," In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 1989.

[Wechsler, 1989] S. Wechsler, "Accomplishments and the Prefix re-," In Proceedings of the North East Linguistic Society 19, 1989.

[Wu and Palmer, 1994] Z. Wu and M. Palmer, "Verb Semantics and Lexical Selection," In Proceedings of the 32nd Meeting of the Association for Computational Linguistics, 1994.

[Yarowsky, 1993] D. Yarowsky, "One Sense per Collocation," In Proceedings of the ARPA Workshop on Human Language Technology. Morgan Kaufmann, 1993.

[Zucchi, 1989] Alessandro Zucchi, The Language of Propositions and Events: Issues in the Syntax and the Semantics of Nominalization, PhD thesis, University of Massachusetts, Amherst, MA, 1989.


A A Lexicon Produced by Applying Morphological Cueing to the Brown Corpus

This appendix contains the results of applying the process of morphological cueing described in Chapter 4 to the Brown corpus: lists of words marked with semantic features. In addition, the appendix contains information relevant to the process: lists of axioms, tables of results, etc. This information will be presented first followed by the lists of words.

A.1 Axioms and tables

Below is a list of the axioms collected from the previous chapters.

COMPLEX
For all predicates D with features COMPLEX and DYADIC, and for their
corresponding base predicate B with feature DYADIC:
(∀x,y [[x (base D) y] -> [x B y]])

TELIC
For all predicates P with features TELIC and DYADIC:
(∀x,y,e [[[x P y]**e] -> (∃e1: [[e1 at-end-of e] ∧
                                [e cause e1]]
                          [[y (rstate P)]**e1])])

ACTIVITY
For all predicates P with features ACTIVITY and DYADIC:
(∀x,y,e [[[x P y]**e] -> (∀t: [[t time] ∧ [t during e] ∧
                               ((appr-grain P) t)]
                          [[x P y]*t])])

BASE-RSTATE
For all predicates P with feature BASE-RSTATE:
(∀y [[y (rstate P)] -> [y (rstate (base P))]])

ENTAIL-BASE
For all predicates P with features ENTAIL-BASE and DYADIC:
(∀x,y [[x P y] -> [x (base P) y]])

REPRESUP
For all predicates P with features REPRESUP and DYADIC:
(∀x,y,e [[[x P y]**e] ->
         (∃e1: [e1 before e] [[y (rstate P)]*e1])])

COS
For all predicates P with features COS and DYADIC:
(∀x,y,e [[[x P y]**e] -> [(∃e1: [[e1 at-end-of e] ∧
                                 [e cause e1]]
                           [[y (rstate P)]**e1]) ∧
                          (∃e2: [e2 at-beginning-of e]
                           [(¬[y (rstate P)])**e2])]])

BASE-NEG-RSTATE
For all predicates P with feature BASE-NEG-RSTATE:
(∀y [((rstate P) y) -> ¬((rstate (base P)) y)])

IZE-DEP
For all predicates P with features IZE-DEP and DYADIC:
(∀y [((base P) y)*e -> (∃e1 [[(∃x [x P y])**e1 ∧ [e1 before e]]
                             ∨ (∀e2: [e2 before e] ((base P) y)*e2)])])

RSTATE-EQ-BASE
For all predicates P with features RSTATE-EQ-BASE and DYADIC:
(∀x ((rstate P) x) -> ((base P) x))

PART-IN-E
For all predicates P with features PART-IN-E and DYADIC-BASE:
(∀x (P x) -> (∃e (∃y,z [y (base P) z]*e) ∧ [x part e]))

SENTIENT
For all predicates P with features SENTIENT and DYADIC-BASE:
(∀x (P x) -> (∃e (∃y,z [y (base P) z]*e) ∧ [x sentient-of e]))

NON-VOLITIONAL
For all predicates P with features NON-VOLITIONAL and DYADIC-BASE:
(∀x (P x) -> (∃e (∃y,z [y (base P) z]*e) ∧
             ¬[x volitional-involved-in e]))

E-AND-R
For all predicates P with features E-AND-R and DYADIC-BASE:
(∀x (P x) -> (∃e [(∃y,z [y (base P) z]**e) ∧
                  [x part e] ∧
                  [(∀y1,z1,e1 (¬([y1 (base P) z1]**e1))) ->
                   ¬(P x)]]))

E-OR-P-OR-R
For all predicates P with features E-OR-P-OR-R and DYADIC-BASE:
(∀x (P x) -> (∃e [(∃y,z [y (base P) z]**e) ∧
                  [[x = e] ∨
                   [x = (that [y P z]*e)] ∨
                   [[x part e] ∧
                    [(∀y1,z1,e1 (¬([y1 (base P) z1]**e1))) ->
                     ¬(P x)]]]]))

INCOR
For all predicates P with features INCOR and DYADIC and DYADIC-BASE:
(∀x,y [x P y] -> [x ((adv-m incorrect) ((base P) y))])

ABLE
For all predicates P with features ABLE and DYADIC-BASE:
(∀y (P y) -> ◇(∃x,e [x (base P) y]**e))

NESS
For all predicates P with the feature NESS:
(∀x (P x) -> (∀e: [e instance-of x] (∃y ((base P) y)**e)))

ABST
For all predicates P with the feature ABST:
(∀x (P x) -> (no-physical-extent x))

LESSANT
For all predicates P with the feature LESSANT:
(∀x (P x) -> ¬((less (base P)) x))

FULANT
For all predicates P with the feature FULANT:
(∀x (P x) -> ¬((ful (base P)) x))

LESS
For all predicates P with the feature LESS:
(∀x (P x) <-> ((less (base P)) x))

FUL
For all predicates P with the feature FUL:
(∀x (P x) <-> ((ful (base P)) x))

Next, a number of tables are presented that are relevant to the extraction of prospective tokens for the morphological cueing process. The first table lists the tags that were used in the version of the Brown corpus used here [Marcus et al., 1993]. The second table lists the regular expressions used to extract prospective tokens from the corpus. The token regular expressions were matched against the words of the corpus and the tag regular expressions were matched against the tags. The next table lists, for each affix, the number of types found and the number of types that contain stems that were not in the Alvey dictionary. The final two tables were presented in Chapter 4 and contain the precision statistics for the semantic features defined by the axioms above. These statistics are derived from the words listed below, ignoring types where the base was not in the Alvey dictionary (see the discussion of collection features at the beginning of the next section).

Brown Corpus Tags

1.  CC    Coordinating conjunction
2.  CD    Cardinal number
3.  DT    Determiner
4.  EX    Existential there
5.  FW    Foreign word
6.  IN    Preposition or subordinating conjunction
7.  JJ    Adjective
8.  JJR   Adjective, comparative
9.  JJS   Adjective, superlative
10. LS    List item marker
11. MD    Modal
12. NN    Noun, singular or mass
13. NNS   Noun, plural
14. NP    Proper noun, singular
15. NPS   Proper noun, plural
16. PDT   Predeterminer
17. POS   Possessive ending
18. PP    Personal pronoun
19. PP$   Possessive pronoun
20. RB    Adverb
21. RBR   Adverb, comparative
22. RBS   Adverb, superlative
23. RP    Particle
24. SYM   Symbol
25. TO    to
26. UH    Interjection
27. VB    Verb, base form
28. VBD   Verb, past tense
29. VBG   Verb, gerund or present participle
30. VBN   Verb, past participle
31. VBP   Verb, non-3rd person singular present
32. VBZ   Verb, 3rd person singular present
33. WDT   Wh-determiner
34. WP    Wh-pronoun
35. WP$   Possessive wh-pronoun
36. WRB   Wh-adverb


Regular expressions used to extract prospective tokens

Affix    Token Regular Exp.        Tag Regular Exp.
Vun-     ^UN.+                     ^VB[^N]
Vde-     ^DE.+                     ^VB[^N]
re-      ^RE.+                     ^V.*
-Aize    .*IZ(E|ING|ES|ED)$        ^V.*
-Nize    .*IZ(E|ING|ES|ED)$        ^V.*
-Aify    .*IF(Y|YING|IES|IED)$     ^V.*
-Nify    .*IF(Y|YING|IES|IED)$     ^V.*
-ate     .*AT(E|ING|ES|ED)$        ^V.*
-AenV    .*EN(ING|S|ED)$           ^V.*
-le      ^.+[^AEIOU]LE$            ^V.*
-ee      .+EE(S|)$                 ^NN.*
-VerN    .+(E|O)R(S|)$             ^NN.*
-ant     .+(E|A)NT(S|)$            ^NN.*
-ment    .+MENT(S|)$               ^NN.*
-age     .+AGE(S|)$                ^NN.*
mis-     ^MIS.+                    ^V.*
-able    .+ABLE$                   ^JJ.*
-ness    .+NESS$                   ^NN.*
-ful     .*FUL$                    ^JJ.*
-less    .*LESS$                   ^JJ.*
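For concreteness, the extraction can be pictured as follows (a Python sketch; the three-pattern table and the toy word/tag list are stand-ins for the full table above and the tagged Brown corpus, and none of the names come from the original extraction code):

import re

# A word/tag pair is collected as a prospective token for an affix when
# the word matches the token pattern and the tag matches the tag pattern.
PATTERNS = {
    "re-":   (r"^RE.+",       r"^V.*"),
    "-ment": (r".+MENT(S|)$", r"^NN.*"),
    "-ness": (r".+NESS$",     r"^NN.*"),
}

def extract(tagged_words):
    hits = {affix: [] for affix in PATTERNS}
    for word, tag in tagged_words:
        for affix, (tok_re, tag_re) in PATTERNS.items():
            if re.match(tok_re, word.upper()) and re.match(tag_re, tag):
                hits[affix].append(word)
    return hits

corpus = [("rebuild", "VB"), ("restatements", "NNS"),
          ("happiness", "NN"), ("red", "JJ")]
print(extract(corpus))
# -> {'re-': ['rebuild'], '-ment': ['restatements'], '-ness': ['happiness']}

Note that red is not collected for re-: its token pattern matches, but its adjective tag fails the tag pattern, which is exactly the filtering the tag regular expressions provide.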

The number of types and new stems per affix

Affix    Types Total    New Stems
Vun-     24             1
Vde-     38             3
re-      179            15
-Aize    75             7
-Nize    91             5
-Aify    19             1
-Nify    22             1
-AenV    40             3
-ee      31             9
-VerN    634            147
-ant     30             5
-ment    182            11
-age     44             0
mis-     26             3
-able    236            16
-ness    318            9
-ful     79             2
-less    112            0


Precision for derived words

Feature            Affix    Types    Precision
TELIC              re-      164      91%
BASE-RSTATE        re-      164      65%
ENTAIL-BASE        re-      164      65%
REPRESUP           re-      164      65%
COS                un-      23       100%
BASE-NEG-RSTATE    un-      23       91%
COS                de-      35       34%
BASE-NEG-RSTATE    de-      35       20%
COS                -Aize    63       78%
RSTATE-EQ-BASE     -Aize    63       75%
COS                -Nize    86       56%
ACTIVITY           -le      71       55%
COS                -en      36       100%
RSTATE-EQ-BASE     -en      36       97%
COS                -Aify    17       94%
RSTATE-EQ-BASE     -Aify    17       58%
COS                -Nify    21       67%
COS                -ate     365      48%
PART-IN-E          -ee      22       91%
SENTIENT           -ee      22       82%
NON-VOL            -ee      22       68%
PART-IN-E          -er      471      85%
PART-IN-E          -ant     21       81%
E-AND-R            -age     43       58%
E-OR-P-OR-R        -ment    166      88%
INCOR              mis-     21       86%
ABLE               -able    148      84%
NESS               -ness    307      97%
FULANT             -less    22       77%
LESSANT            -ful     22       77%

Precision for base words

Feature   Affix   Types   Precision
TELIC     re-     164     91%
COS       Vun-    23      91%
COS       Vde-    33      36%
IZE-DEP   -Aize   64      80%
IZE-DEP   -AenV   36      72%
IZE-DEP   -Aify   15      40%
ABST      -ful    76      93%

A.2 Word lists

The word lists have been organized by affix: for each affix all of its derived forms and/or bases are listed. (Some affixes do not provide any semantic information about one or the other of their bases or derived forms and these words are not listed.) For each word, its citation form is listed followed by a collection feature and a set of semantic features. The collection features give information about the word in relation to the Alvey dictionary, e.g., was it in the lexicon, was its stem in the lexicon, etc. Below are the possible collection features for derived forms.

NEWDER: The word is a derived form and was not in the dictionary but its base was.

NEWSTEMDER: The word is a derived form and was constructed from a stem that was not in the dictionary.

INLEX: The word and its base, if there is one, were in the dictionary.

A number of additional collection features were needed to mark words which involve particular affixes. The DENOM feature was used to mark words ending in -le with a corresponding homonym noun that is not eventive. The NOTINLEX feature was used to mark -ate words that were not in the dictionary. The collection features for bases are listed below.


NEWDERSTEM: The word was in the dictionary but the corresponding derived form was not.

INLEX: The word and its derived form were in the dictionary.

NEWSTEM: The word was not in the dictionary.
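The assignment of these collection features can be pictured as in the following illustrative sketch (Python); in_dict is a hypothetical stand-in for a lookup in the Alvey dictionary, not the code actually used.

    # Sketch of the collection-feature assignment described above.
    def collection_feature(word, related, in_dict, role="derived"):
        if role == "derived":                # related = the word's base
            if in_dict(word):
                return "INLEX"
            return "NEWDER" if in_dict(related) else "NEWSTEMDER"
        # role == "base"; related = the corresponding derived form
        if not in_dict(word):
            return "NEWSTEM"
        return "INLEX" if in_dict(related) else "NEWDERSTEM"

    lexicon = {"adjust", "readjust", "affirm"}
    print(collection_feature("readjust", "adjust", lexicon.__contains__))          # INLEX
    print(collection_feature("reaffirm", "affirm", lexicon.__contains__))          # NEWDER
    print(collection_feature("affirm", "reaffirm", lexicon.__contains__, "base"))  # NEWDERSTEM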

The semantic features are defined by the axioms listed in the previous section. They are suffixed by the affix which cued them. A semantic feature may be prefixed by a "*" or a "?". A star signifies that the feature was posited by the morphological cueing process but in fact the word does not conform to the feature's axiom. A question mark signifies that it was indeterminate whether the word conformed. A feature holds if and only if it is true of the particular sense used in the majority of the tokens.
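Under these conventions, the precision figures in the tables above can be recomputed directly from the word lists. The following sketch shows the calculation; treating "?"-marked entries as excluded from the count is an assumption made here for illustration, since the text does not say how indeterminate cases were counted.

    # Sketch: recompute one feature's precision over a word list below.
    # Entry shape: citation form, collection feature(s), semantic features.
    def precision(entries, feature):
        conform = total = 0
        for line in entries:
            for tok in line.split()[1:]:
                if tok.lstrip("*?").startswith(feature):
                    if not tok.startswith("?"):   # assumed: drop indeterminates
                        total += 1
                        conform += not tok.startswith("*")
                    break
        return conform / total if total else 0.0

    sample = [
        "reacquaint NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-",
        "reache INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-",
    ]
    print(precision(sample, "TELIC"))  # one of the two types conforms -> 0.5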

A.2.1 Words derived using re-

reache INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reacquaint NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
react INLEX *TELICre- *BASE-RSTATEre- BASEre- *REPRESUPre-
reactivate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
readapt NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
readjust INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reaffirm NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
realign INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
realize INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reape INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reappear INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reapportion NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rearm INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rearrange INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reassemble NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reassert NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reassign NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reassure INLEX TELICre- BASE-RSTATEre- BASEre- *REPRESUPre-
reawaken NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rebell INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
rebound INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
rebuff INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
rebuild INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rebutt NEWDER TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recalculate NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
recall INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recant INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recapitulate INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recapture INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
receave NEWSTEMDER TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recede INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
recheck NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
recite INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reclaim INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reclassify NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
recoil INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recollect INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recommence NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
recommend INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recond NEWSTEMDER *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-


recondition INLEX TELICre- *BASE-RSTATEre- *BASEre- REPRESUPre-
reconsider INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reconstruct INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reconvene NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reconvert NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
recoon NEWSTEMDER *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recooned NEWSTEMDER *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recopy NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
record INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recount INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recover INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
recreate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redecorate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rededicate NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redeem INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
redefine NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redirect INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rediscover NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redistribute INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redistrict NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redistricting NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redo INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redouble INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
redress INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reek INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reelect NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reemerge NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reenact NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reenter NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reestablish NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reexamine NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refashion INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refill INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refinance NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refine INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
refine NEWDER TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
refold NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reform INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reformulate NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refuel INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refund INLEX TELICre- BASE-RSTATEre- *BASEre- REPRESUPre-
refurbish INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
refuse INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
regain INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
regenerate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reground NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
regroup INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rehash INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rehear INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reincarnate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reinstall NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reinterpret NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reinterpret NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reinterpreted NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reintroduce NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reiterate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rejoin INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
relay INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
relearn NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
release INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
relie INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
relive INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reload INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rely INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-


relyric NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
relyriced NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
remake INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
remark INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
remarry INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
remind INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
remodel INLEX TELICre- *BASE-RSTATEre- *BASEre- REPRESUPre-
remold INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
remount INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
remove INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rename INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reopen INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reorder NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reorganize INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reorient NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reoriented NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
repaint NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
repair INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
repay INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
repeal INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
rephrase NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
replace INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
replant NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reply INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
report INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
repose INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
represent INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
repress INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reprint INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reprobate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reproduce INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
repulse INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
request INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reread NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reseal NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
research INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reserve INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
resettle INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reshape NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reside INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
resift NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
resign INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
resolve INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
resort INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
resound INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
restate INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
resting INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
restock INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
restore INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
restrain INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
restructure INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
restudy NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
resublime NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
resuspend NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
retail INLEX *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
retell INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
rethink INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
retie NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
retire INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
retrace INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
retrain NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
retranslate NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
retreat INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
retrench INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-


return INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reunite INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
reupholster NEWDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
revamp INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
review INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
revisit NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
revisited NEWSTEMDER TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
revitalize INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-
revved NEWSTEMDER *TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
reward INLEX TELICre- *BASE-RSTATEre- *BASEre- *REPRESUPre-
rewrite INLEX TELICre- BASE-RSTATEre- BASEre- REPRESUPre-

A.2.2 Words serving as bases for re-

ache INLEX *TELICre-
acquaint NEWDERSTEM TELICre-
act INLEX TELICre-
activate INLEX TELICre-
adapt NEWDERSTEM TELICre-
adjust INLEX TELICre-
affirm NEWDERSTEM TELICre-
align INLEX TELICre-
alize INLEX TELICre-
ape INLEX *TELICre-
appear INLEX *TELICre-
apportion NEWDERSTEM TELICre-
arm INLEX TELICre-
arrange INLEX TELICre-
assemble NEWDERSTEM TELICre-
assert NEWDERSTEM TELICre-
assign NEWDERSTEM TELICre-
assure INLEX TELICre-
awaken NEWDERSTEM TELICre-
bell INLEX TELICre-
bound INLEX *TELICre-
buff INLEX *TELICre-
build INLEX TELICre-
butt NEWDERSTEM *TELICre-
calculate NEWDERSTEM TELICre-
call INLEX TELICre-
cant INLEX TELICre-
capitulate INLEX TELICre-
capture INLEX TELICre-
ceave NEWSTEM TELICre-
cede INLEX TELICre-
check NEWDERSTEM TELICre-
cite INLEX TELICre-
claim INLEX TELICre-
classify NEWDERSTEM TELICre-
coil INLEX TELICre-
collect INLEX TELICre-
commence NEWDERSTEM TELICre-
commend INLEX TELICre-
cond NEWSTEM TELICre-
condition INLEX TELICre-
consider INLEX TELICre-
construct INLEX TELICre-
convene NEWDERSTEM TELICre-
convert NEWDERSTEM TELICre-
coon NEWSTEM TELICre-
cooned NEWSTEM TELICre-
copy NEWDERSTEM TELICre-


cord INLEX TELICre-
count INLEX TELICre-
cover INLEX TELICre-
create INLEX TELICre-
decorate INLEX TELICre-
dedicate NEWDERSTEM TELICre-
deem INLEX *TELICre-
define NEWDERSTEM TELICre-
direct INLEX TELICre-
discover NEWDERSTEM TELICre-
distribute INLEX TELICre-
district NEWSTEM TELICre-
districting NEWSTEM TELICre-
do INLEX TELICre-
double INLEX TELICre-
dress INLEX TELICre-
eke INLEX TELICre-
elect NEWDERSTEM TELICre-
emerge NEWDERSTEM TELICre-
enact NEWDERSTEM TELICre-
enter NEWDERSTEM TELICre-
establish NEWDERSTEM TELICre-
examine NEWDERSTEM TELICre-
fashion INLEX TELICre-
fill INLEX TELICre-
fine INLEX TELICre-
fine NEWDERSTEM TELICre-
finance NEWDERSTEM TELICre-
fold NEWDERSTEM TELICre-
form INLEX TELICre-
formulate NEWDERSTEM TELICre-
fuel INLEX TELICre-
fund INLEX TELICre-
furbish INLEX TELICre-
fuse INLEX TELICre-
gain INLEX TELICre-
generate INLEX TELICre-
ground NEWDERSTEM TELICre-
group INLEX TELICre-
hash INLEX TELICre-
hear INLEX TELICre-
incarnate INLEX TELICre-
install NEWDERSTEM TELICre-
interpret NEWDERSTEM TELICre-
interpret NEWSTEM TELICre-
interpreted NEWSTEM TELICre-
introduce NEWDERSTEM TELICre-
iterate INLEX TELICre-
join INLEX TELICre-
lay INLEX TELICre-
learn NEWDERSTEM TELICre-
lease INLEX TELICre-
lie INLEX *TELICre-
live INLEX TELICre-
load INLEX TELICre-
ly INLEX TELICre-
lyric NEWSTEM TELICre-
lyriced NEWSTEM TELICre-
make INLEX TELICre-
mark INLEX TELICre-
marry INLEX TELICre-
mind INLEX *TELICre-
model INLEX *TELICre-
mold INLEX TELICre-


mount INLEX TELICre-
move INLEX TELICre-
name INLEX TELICre-
open INLEX TELICre-
order NEWDERSTEM TELICre-
organize INLEX TELICre-
orient NEWSTEM TELICre-
oriented NEWSTEM TELICre-
paint NEWDERSTEM TELICre-
pair INLEX TELICre-
pay INLEX TELICre-
peal INLEX TELICre-
phrase NEWDERSTEM TELICre-
place INLEX TELICre-
plant NEWDERSTEM TELICre-
ply INLEX TELICre-
port INLEX TELICre-
pose INLEX TELICre-
present INLEX TELICre-
press INLEX TELICre-
print INLEX TELICre-
probate INLEX TELICre-
produce INLEX TELICre-
pulse INLEX *TELICre-
quest INLEX *TELICre-
read NEWDERSTEM TELICre-
seal NEWDERSTEM TELICre-
search INLEX *TELICre-
serve INLEX TELICre-
settle INLEX TELICre-
shape NEWDERSTEM TELICre-
side INLEX TELICre-
sift NEWDERSTEM TELICre-
sign INLEX TELICre-
solve INLEX TELICre-
sort INLEX TELICre-
sound INLEX TELICre-
state INLEX TELICre-
sting INLEX TELICre-
stock INLEX TELICre-
store INLEX *TELICre-
strain INLEX *TELICre-
structure INLEX TELICre-
study NEWDERSTEM TELICre-
sublime NEWDERSTEM TELICre-
suspend NEWDERSTEM TELICre-
tail INLEX TELICre-
tell INLEX TELICre-
think INLEX TELICre-
tie NEWDERSTEM TELICre-
tire INLEX TELICre-
trace INLEX TELICre-
train NEWDERSTEM TELICre-
translate NEWDERSTEM TELICre-
treat INLEX TELICre-
trench INLEX TELICre-
turn INLEX TELICre-
unite INLEX TELICre-
upholster NEWDERSTEM TELICre-
vamp INLEX TELICre-
view INLEX TELICre-
visit NEWSTEM TELICre-
visited NEWSTEM TELICre-
vitalize INLEX TELICre-


vved NEWSTEM TELICre-
ward INLEX TELICre-
write INLEX TELICre-

A.2.3 Words derived using un-

unclasp NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
uncoil NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
uncork INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
uncurl NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-

undo INLEX COS Vun- *BASE-NEG-RSTATE Vun- DYADIC Vun-
undress INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unearth INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unfasten NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unfold INLEX COS Vun- BASE-NEG-RSTATE Vun- *DYADIC Vun-
unhitch NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unlace NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unlash NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unload INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unlock INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unnerve INLEX COS Vun- *BASE-NEG-RSTATE Vun- DYADIC Vun-
unpack INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unreel NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unscrew INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unsettle INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unsheath NEWSTEMDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
untie INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unveil INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-
unwind INLEX COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-

unwire NEWDER COS Vun- BASE-NEG-RSTATE Vun- DYADIC Vun-

A.2.4 Words serving as bases for un-

clasp NEWDERSTEM COS Vun- DYADIC Vun-
coil NEWDERSTEM COS Vun- DYADIC Vun-
cork INLEX COS Vun- DYADIC Vun-
curl NEWDERSTEM COS Vun- *DYADIC Vun-
do INLEX *COS Vun- DYADIC Vun-
dress INLEX COS Vun- DYADIC Vun-
earth INLEX COS Vun- DYADIC Vun-
fasten NEWDERSTEM COS Vun- DYADIC Vun-
fold INLEX COS Vun- DYADIC Vun-
hitch NEWDERSTEM COS Vun- DYADIC Vun-
lace NEWDERSTEM COS Vun- DYADIC Vun-
lash NEWDERSTEM COS Vun- DYADIC Vun-
load INLEX COS Vun- DYADIC Vun-
lock INLEX COS Vun- DYADIC Vun-
nerve INLEX *COS Vun- DYADIC Vun-
pack INLEX COS Vun- DYADIC Vun-
reel NEWDERSTEM COS Vun- DYADIC Vun-
screw INLEX COS Vun- DYADIC Vun-
settle INLEX COS Vun- *DYADIC Vun-
sheath NEWSTEM COS Vun- DYADIC Vun-

tie INLEX COS Vun- DYADIC Vun-
veil INLEX COS Vun- DYADIC Vun-
wind INLEX COS Vun- DYADIC Vun-
wire NEWDERSTEM COS Vun- DYADIC Vun-


A.2.5 Words derived using de-

debunk INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
debut NEWSTEMDER *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
decant INLEX COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
decentralize INLEX COS Vde- BASE-NEG-RSTATE Vde- *DYADIC Vde-
declaim INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
decompose INLEX COS Vde- BASE-NEG-RSTATE Vde- *DYADIC Vde-
decrease INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
decry INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
defend INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
define INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
defocus NEWSTEMDER COS Vde- BASE-NEG-RSTATE Vde- DYADIC Vde-
degenerate INLEX COS Vde- BASE-NEG-RSTATE Vde- *DYADIC Vde-
degrade INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
delay INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
delight INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
delimit INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
delive INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
demean INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
demoralize NEWDER COS Vde- BASE-NEG-RSTATE Vde- DYADIC Vde-
demythologize NEWDER COS Vde- BASE-NEG-RSTATE Vde- DYADIC Vde-
denote INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
depart INLEX COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
depose INLEX COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
depress INLEX COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
derive INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
describe INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
desegregate INLEX COS Vde- BASE-NEG-RSTATE Vde- DYADIC Vde-
deserve INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
design INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
desire INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
despise INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
despoil INLEX COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-
desynchronize NEWDER COS Vde- BASE-NEG-RSTATE Vde- *DYADIC Vde-
determ (typo) NEWDER *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
detest INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
detour INLEX *COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
develop NEWSTEMDER COS Vde- *BASE-NEG-RSTATE Vde- *DYADIC Vde-
devote INLEX *COS Vde- *BASE-NEG-RSTATE Vde- DYADIC Vde-

A.2.6 Words serving as bases for de-

bunk INLEX *COS Vde- *DYADIC Vde-
but NEWSTEM *COS Vde- *DYADIC Vde-
cant INLEX COS Vde- *DYADIC Vde-
centralize INLEX COS Vde- DYADIC Vde-
claim INLEX *COS Vde- *DYADIC Vde-
compose INLEX COS Vde- DYADIC Vde-
crease INLEX *COS Vde- *DYADIC Vde-
cry INLEX *COS Vde- *DYADIC Vde-
fend INLEX *COS Vde- *DYADIC Vde-
fine INLEX COS Vde- ?DYADIC Vde-
focus NEWSTEM COS Vde- DYADIC Vde-
generate INLEX COS Vde- DYADIC Vde-
grade INLEX COS Vde- DYADIC Vde-
lay INLEX *COS Vde- *DYADIC Vde-
light INLEX COS Vde- DYADIC Vde-
limit INLEX *COS Vde- *DYADIC Vde-
live INLEX *COS Vde- *DYADIC Vde-
mean INLEX *COS Vde- *DYADIC Vde-
moralize NEWDERSTEM COS Vde- DYADIC Vde-


mythologize NEWDERSTEM COS Vde- DYADIC Vde-
note INLEX *COS Vde- *DYADIC Vde-
part INLEX *COS Vde- *DYADIC Vde-
pose INLEX *COS Vde- *DYADIC Vde-

press INLEX *COS Vde- *DYADIC Vde-
rive INLEX COS Vde- ?DYADIC Vde-
scribe INLEX *COS Vde- *DYADIC Vde-
segregate INLEX COS Vde- DYADIC Vde-
serve INLEX *COS Vde- *DYADIC Vde-
sign INLEX *COS Vde- *DYADIC Vde-
sire INLEX COS Vde- DYADIC Vde-
spise (spy ize) INLEX *COS Vde- *DYADIC Vde-
spoil INLEX COS Vde- *DYADIC Vde-
synchronize NEWDERSTEM COS Vde- *DYADIC Vde-
term (typo) NEWDERSTEM *COS Vde- *DYADIC Vde-
test INLEX *COS Vde- *DYADIC Vde-
tour INLEX *COS Vde- *DYADIC Vde-
velop NEWSTEM *COS Vde- *DYADIC Vde-
vote INLEX *COS Vde- *DYADIC Vde-

A.2.7 Words derived using -Aize

balkanize NEWDER COS-Aize *RSTATE-EQ-BASE-Aize
brutalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
capitalize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
centralize INLEX COS-Aize RSTATE-EQ-BASE-Aize
characterize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
civilize INLEX COS-Aize RSTATE-EQ-BASE-Aize
communize NEWSTEMDER COS-Aize *RSTATE-EQ-BASE-Aize
conventionalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
demythologize NEWSTEMDER COS-Aize *RSTATE-EQ-BASE-Aize
economize INLEX ?COS-Aize *RSTATE-EQ-BASE-Aize
equalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
federalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
fertilize INLEX *COS-Aize ?RSTATE-EQ-BASE-Aize
fetishize NEWSTEMDER ?COS-Aize *RSTATE-EQ-BASE-Aize
formalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
fossilize INLEX COS-Aize RSTATE-EQ-BASE-Aize
generalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
germanize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
humanize INLEX COS-Aize RSTATE-EQ-BASE-Aize
hypothesize NEWSTEMDER *COS-Aize RSTATE-EQ-BASE-Aize
idealize INLEX COS-Aize RSTATE-EQ-BASE-Aize
immortalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
impersonalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
individualize INLEX COS-Aize RSTATE-EQ-BASE-Aize
industrialize INLEX COS-Aize RSTATE-EQ-BASE-Aize
internalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
internationalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
italicize INLEX COS-Aize RSTATE-EQ-BASE-Aize
legalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
liberalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
localize INLEX COS-Aize RSTATE-EQ-BASE-Aize
materialize INLEX COS-Aize *RSTATE-EQ-BASE-Aize
memorize NEWSTEMDER *COS-Aize *RSTATE-EQ-BASE-Aize
mineralize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
mobilize INLEX COS-Aize RSTATE-EQ-BASE-Aize
modernize INLEX COS-Aize RSTATE-EQ-BASE-Aize
moralize NEWDER COS-Aize *RSTATE-EQ-BASE-Aize
mythologize NEWSTEMDER *COS-Aize RSTATE-EQ-BASE-Aize
narcotize NEWSTEMDER COS-Aize *RSTATE-EQ-BASE-Aize
nationalize INLEX COS-Aize RSTATE-EQ-BASE-Aize


naturalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
neutralize INLEX COS-Aize RSTATE-EQ-BASE-Aize
normalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
novelize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
penalize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
personalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
polarize INLEX COS-Aize RSTATE-EQ-BASE-Aize
publicize INLEX *COS-Aize RSTATE-EQ-BASE-Aize
rationalize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
realize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
ritualize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
romanticize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
rubberize INLEX COS-Aize RSTATE-EQ-BASE-Aize
sectionalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
secularize INLEX COS-Aize RSTATE-EQ-BASE-Aize
sentimentalize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
sexualize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
sidize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
signalize INLEX ?COS-Aize RSTATE-EQ-BASE-Aize
socialize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
specialize INLEX ?COS-Aize *RSTATE-EQ-BASE-Aize
stabilize INLEX COS-Aize RSTATE-EQ-BASE-Aize
standardize INLEX COS-Aize RSTATE-EQ-BASE-Aize
sterilize INLEX COS-Aize RSTATE-EQ-BASE-Aize
subsidize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
suburbanize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
summarize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
traditionalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
tribalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
universalize NEWDER COS-Aize RSTATE-EQ-BASE-Aize
urbanize INLEX COS-Aize RSTATE-EQ-BASE-Aize
visualize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize
vitalize INLEX COS-Aize RSTATE-EQ-BASE-Aize
vocalize INLEX *COS-Aize *RSTATE-EQ-BASE-Aize

A.2.8 Words serving as bases for -Aize

balkan (balk an) NEWDERSTEM IZE-DEP-Aize
brutal INLEX *IZE-DEP-Aize
capital INLEX *IZE-DEP-Aize
central INLEX IZE-DEP-Aize
character INLEX *IZE-DEP-Aize
civil INLEX IZE-DEP-Aize
commun NEWSTEM IZE-DEP-Aize
conventional NEWDERSTEM IZE-DEP-Aize
demytholog NEWSTEM ?IZE-DEP-Aize
economy INLEX IZE-DEP-Aize
equal INLEX IZE-DEP-Aize
federal NEWDERSTEM IZE-DEP-Aize
fertil INLEX ?IZE-DEP-Aize
fetish NEWSTEM IZE-DEP-Aize
formal INLEX IZE-DEP-Aize
fossil INLEX IZE-DEP-Aize
general INLEX IZE-DEP-Aize
german NEWDERSTEM IZE-DEP-Aize
human INLEX IZE-DEP-Aize
hypothes NEWSTEM IZE-DEP-Aize
ideal INLEX IZE-DEP-Aize
immortal INLEX IZE-DEP-Aize
impersonal NEWDERSTEM IZE-DEP-Aize
individual INLEX IZE-DEP-Aize
industrial INLEX IZE-DEP-Aize


internal INLEX IZE-DEP-Aize
international INLEX IZE-DEP-Aize
italic INLEX IZE-DEP-Aize
legal INLEX IZE-DEP-Aize
liberal INLEX IZE-DEP-Aize
local INLEX IZE-DEP-Aize
material INLEX IZE-DEP-Aize
memor NEWSTEM IZE-DEP-Aize
mineral NEWDERSTEM IZE-DEP-Aize
mobil INLEX IZE-DEP-Aize
modern INLEX IZE-DEP-Aize
moral NEWDERSTEM IZE-DEP-Aize
mytholog NEWSTEM IZE-DEP-Aize
narcot NEWSTEM IZE-DEP-Aize
national INLEX IZE-DEP-Aize
natural INLEX IZE-DEP-Aize
neutral INLEX IZE-DEP-Aize
normal INLEX IZE-DEP-Aize
novel NEWDERSTEM IZE-DEP-Aize
penal INLEX *IZE-DEP-Aize
personal INLEX IZE-DEP-Aize
polar INLEX IZE-DEP-Aize
public INLEX *IZE-DEP-Aize
rational INLEX *IZE-DEP-Aize
real INLEX *IZE-DEP-Aize
ritual NEWDERSTEM IZE-DEP-Aize
romantic INLEX *IZE-DEP-Aize
rubbery INLEX IZE-DEP-Aize
sectional NEWDERSTEM IZE-DEP-Aize
secular INLEX IZE-DEP-Aize
sentimental INLEX *IZE-DEP-Aize
sexual NEWDERSTEM IZE-DEP-Aize
side INLEX *IZE-DEP-Aize
signal INLEX IZE-DEP-Aize
social INLEX *IZE-DEP-Aize
special INLEX IZE-DEP-Aize
stabile INLEX *IZE-DEP-Aize
standard INLEX *IZE-DEP-Aize
steril INLEX *IZE-DEP-Aize
subside INLEX *IZE-DEP-Aize
suburban NEWDERSTEM IZE-DEP-Aize
summary INLEX *IZE-DEP-Aize
traditional NEWDERSTEM IZE-DEP-Aize
tribal NEWDERSTEM IZE-DEP-Aize
universal NEWDERSTEM IZE-DEP-Aize
urban INLEX IZE-DEP-Aize
visual INLEX *IZE-DEP-Aize
vital INLEX IZE-DEP-Aize
vocal INLEX IZE-DEP-Aize

A.2.9 Words derived using -Nize

aerosolize NEWDER COS-Nize
agonize INLEX *COS-Nize
alize INLEX *COS-Nize
apologize INLEX *COS-Nize
authorize INLEX COS-Nize
balkanize NEWDER COS-Nize
canonize INLEX COS-Nize
capitalize INLEX *COS-Nize
categorize INLEX COS-Nize
centralize INLEX COS-Nize


characterize INLEX *COS-Nize
colonize INLEX COS-Nize
communize NEWDER COS-Nize
criticize INLEX *COS-Nize
deputize INLEX COS-Nize
economize INLEX COS-Nize
energize INLEX COS-Nize
epitomize INLEX *COS-Nize
equalize INLEX COS-Nize
eulogize INLEX *COS-Nize
federalize NEWSTEMDER COS-Nize
fetishize NEWDER COS-Nize
fossilize INLEX COS-Nize
generalize INLEX *COS-Nize
germanize NEWDER COS-Nize
glamorize INLEX *COS-Nize
hospitalize INLEX COS-Nize
hypothesize NEWSTEMDER *COS-Nize
idealize INLEX *COS-Nize
idolize INLEX *COS-Nize
individualize INLEX COS-Nize
internationalize INLEX COS-Nize
ionize INLEX COS-Nize
italicize INLEX COS-Nize
itize NEWDER *COS-Nize
iversalize NEWSTEMDER *COS-Nize
jeopardize INLEX *COS-Nize
liberalize INLEX COS-Nize
lionize INLEX *COS-Nize
localize INLEX COS-Nize
materialize INLEX COS-Nize
maximize INLEX COS-Nize
memorialize NEWDER *COS-Nize
memorize NEWDER *COS-Nize
mineralize NEWDER COS-Nize
mobilize INLEX COS-Nize
modernize INLEX COS-Nize
monopolize INLEX COS-Nize
moralize NEWDER COS-Nize
mythologize NEWDER COS-Nize
narcotize NEWSTEMDER COS-Nize
nationalize INLEX COS-Nize
naturalize INLEX COS-Nize
neutralize INLEX COS-Nize
notarize INLEX COS-Nize
novelize NEWDER COS-Nize
organize INLEX *COS-Nize
panelize NEWDER COS-Nize
patronize INLEX *COS-Nize
philosophize INLEX *COS-Nize
poetize NEWDER COS-Nize
proselytize INLEX COS-Nize
publicize INLEX COS-Nize
rationalize INLEX *COS-Nize
realize INLEX *COS-Nize
revolutionize INLEX COS-Nize
ritualize NEWDER COS-Nize
romanticize INLEX *COS-Nize
rubberize INLEX COS-Nize
satirize INLEX *COS-Nize
scandalize INLEX *COS-Nize
scrutinize INLEX *COS-Nize
sidize INLEX *COS-Nize
signalize INLEX *COS-Nize


socialize INLEX *COS-Nize
specialize INLEX *COS-Nize
stabilize INLEX COS-Nize
stayaize INLEX COS-Nize

subsidize INLEX *COS-Nize
suburbanize NEWDER COS-Nize
summarize INLEX *COS-Nize
symbolize INLEX *COS-Nize
sympathize INLEX *COS-Nize
terrorize INLEX *COS-Nize
theorize INLEX *COS-Nize
tribalize NEWDER COS-Nize
tyrannize INLEX *COS-Nize
unitize NEWDER COS-Nize
universalize NEWSTEMDER COS-Nize
vocalize INLEX *COS-Nize

A.2.10 Words derived using -Aify

amplify INLEX COS-Aify *RSTATE-EQ-BASE-Aify
beautify INLEX COS-Aify RSTATE-EQ-BASE-Aify
classify INLEX COS-Aify *RSTATE-EQ-BASE-Aify
diversify INLEX COS-Aify RSTATE-EQ-BASE-Aify
falsify INLEX COS-Aify RSTATE-EQ-BASE-Aify
fortify INLEX COS-Aify *RSTATE-EQ-BASE-Aify
intensify INLEX COS-Aify RSTATE-EQ-BASE-Aify
justify INLEX COS-Aify RSTATE-EQ-BASE-Aify
nullify INLEX COS-Aify RSTATE-EQ-BASE-Aify
oversimplify INLEX COS-Aify RSTATE-EQ-BASE-Aify
purify INLEX COS-Aify RSTATE-EQ-BASE-Aify
rarify NEWDER COS-Aify RSTATE-EQ-BASE-Aify
scarify INLEX COS-Aify *RSTATE-EQ-BASE-Aify
simplify INLEX COS-Aify RSTATE-EQ-BASE-Aify
testify INLEX *COS-Aify *RSTATE-EQ-BASE-Aify
verify INLEX COS-Aify *RSTATE-EQ-BASE-Aify
vilify INLEX COS-Aify *RSTATE-EQ-BASE-Aify
vivify NEWSTEMDER COS-Aify *RSTATE-EQ-BASE-Aify

A.2.11 Words serving as bases for -Aify

ample INLEX *IZE-DEP-Aify
beaut INLEX ?IZE-DEP-Aify
classy INLEX *IZE-DEP-Aify
diverse INLEX IZE-DEP-Aify
false INLEX *IZE-DEP-Aify
forte INLEX *IZE-DEP-Aify
intense INLEX IZE-DEP-Aify
just INLEX *IZE-DEP-Aify
null INLEX IZE-DEP-Aify
oversimple INLEX IZE-DEP-Aify
pure INLEX IZE-DEP-Aify
rare NEWDERSTEM *IZE-DEP-Aify
scary INLEX *IZE-DEP-Aify
simple INLEX IZE-DEP-Aify
testy INLEX *IZE-DEP-Aify
very INLEX *IZE-DEP-Aify
vile INLEX *IZE-DEP-Aify
viv NEWSTEM *IZE-DEP-Aify


A.2.12 Words derived using -Nify

beautify INLEX COS-Nify
boobify NEWDER COS-Nify
certify INLEX COS-Nify
classify INLEX *COS-Nify
codify INLEX *COS-Nify
fortify INLEX COS-Nify
glorify INLEX *COS-Nify
gratify INLEX COS-Nify
minify NEWDER COS-Nify
modify INLEX *COS-Nify
mollify INLEX COS-Nify
mummify INLEX COS-Nify
notify INLEX *COS-Nify
pacify INLEX COS-Nify
personify INLEX *COS-Nify
ratify INLEX COS-Nify
scarify INLEX COS-Nify
signify INLEX *COS-Nify
simplify INLEX COS-Nify
testify INLEX COS-Nify
typify INLEX COS-Nify
vivify NEWSTEMDER COS-Nify

A.2.13 Words derived using -en

awaken INLEX COS-AenV RSTATE-EQ-BASE-AenV
bemadden NEWSTEMDER COS-AenV *RSTATE-EQ-BASE-AenV
blacken INLEX COS-AenV RSTATE-EQ-BASE-AenV
brighten INLEX COS-AenV RSTATE-EQ-BASE-AenV
broaden INLEX COS-AenV RSTATE-EQ-BASE-AenV
coarsen INLEX COS-AenV RSTATE-EQ-BASE-AenV
dampen INLEX COS-AenV RSTATE-EQ-BASE-AenV
darken INLEX COS-AenV RSTATE-EQ-BASE-AenV
deaden INLEX COS-AenV RSTATE-EQ-BASE-AenV
deafen INLEX COS-AenV RSTATE-EQ-BASE-AenV
deepen INLEX COS-AenV RSTATE-EQ-BASE-AenV
fasten INLEX COS-AenV *RSTATE-EQ-BASE-AenV
freshen INLEX COS-AenV RSTATE-EQ-BASE-AenV
green NEWSTEMDER COS-AenV *RSTATE-EQ-BASE-AenV
harden INLEX COS-AenV RSTATE-EQ-BASE-AenV
harshen INLEX COS-AenV RSTATE-EQ-BASE-AenV
lessen INLEX COS-AenV RSTATE-EQ-BASE-AenV
lighten INLEX COS-AenV RSTATE-EQ-BASE-AenV
liken INLEX COS-AenV RSTATE-EQ-BASE-AenV
loosen INLEX COS-AenV RSTATE-EQ-BASE-AenV
moisten INLEX COS-AenV RSTATE-EQ-BASE-AenV
quicken INLEX COS-AenV RSTATE-EQ-BASE-AenV
ripen INLEX COS-AenV RSTATE-EQ-BASE-AenV
roughen INLEX COS-AenV RSTATE-EQ-BASE-AenV
sharpen INLEX COS-AenV RSTATE-EQ-BASE-AenV
shorten INLEX COS-AenV RSTATE-EQ-BASE-AenV
sicken INLEX COS-AenV RSTATE-EQ-BASE-AenV
slacken INLEX COS-AenV RSTATE-EQ-BASE-AenV
soften INLEX COS-AenV RSTATE-EQ-BASE-AenV
stiffen INLEX COS-AenV RSTATE-EQ-BASE-AenV
straighten INLEX COS-AenV RSTATE-EQ-BASE-AenV
strengten NEWSTEMDER *COS-AenV *RSTATE-EQ-BASE-AenV
thicken INLEX COS-AenV RSTATE-EQ-BASE-AenV
tighten INLEX COS-AenV RSTATE-EQ-BASE-AenV
unfasten NEWDER COS-AenV RSTATE-EQ-BASE-AenV


weaken INLEX COS-AenV RSTATE-EQ-BASE-AenV
whiten INLEX COS-AenV RSTATE-EQ-BASE-AenV
widen INLEX COS-AenV RSTATE-EQ-BASE-AenV
worsen INLEX COS-AenV RSTATE-EQ-BASE-AenV

A.2.14 Words serving as bases for -en

awake INLEX *IZE-DEP-AenV
bemadd NEWSTEM *IZE-DEP-AenV
black INLEX *IZE-DEP-AenV
bright INLEX IZE-DEP-AenV
broad INLEX IZE-DEP-AenV
coarse INLEX IZE-DEP-AenV
damp INLEX IZE-DEP-AenV
dark INLEX IZE-DEP-AenV
dead INLEX *IZE-DEP-AenV
deaf INLEX *IZE-DEP-AenV
deep INLEX IZE-DEP-AenV
fast INLEX *IZE-DEP-AenV
fresh INLEX IZE-DEP-AenV
gre NEWSTEM *IZE-DEP-AenV
hard INLEX IZE-DEP-AenV
harsh INLEX IZE-DEP-AenV
less INLEX *IZE-DEP-AenV
light INLEX IZE-DEP-AenV
like INLEX *IZE-DEP-AenV
loose INLEX IZE-DEP-AenV
moist INLEX IZE-DEP-AenV
quick INLEX IZE-DEP-AenV
ripe INLEX IZE-DEP-AenV
rough INLEX IZE-DEP-AenV
sharp INLEX IZE-DEP-AenV
short INLEX IZE-DEP-AenV
sick INLEX *IZE-DEP-AenV
slack INLEX IZE-DEP-AenV
soft INLEX IZE-DEP-AenV
stiff INLEX IZE-DEP-AenV
straight INLEX IZE-DEP-AenV
strength NEWSTEM *IZE-DEP-AenV
thick INLEX IZE-DEP-AenV
tight INLEX IZE-DEP-AenV
unfast NEWDERSTEM *IZE-DEP-AenV
weak INLEX IZE-DEP-AenV
white INLEX IZE-DEP-AenV
wide INLEX IZE-DEP-AenV
worse INLEX *IZE-DEP-AenV

A.2.15 Words derived using -ate

abate INLEX COS-ate
abbreviate INLEX COS-ate
ablate NOTINLEX COS-ate
abrogate INLEX COS-ate
accelerate INLEX COS-ate
accentuate INLEX COS-ate
accommodate INLEX *COS-ate
acculturate NOTINLEX COS-ate
accumulate INLEX *COS-ate
activate INLEX COS-ate


actuate INLEX COS-ate
adjudicate INLEX *COS-ate
adulterate INLEX COS-ate
advocate INLEX *COS-ate
aerate INLEX *COS-ate
affiliate INLEX *COS-ate
agglomerate INLEX COS-ate
agglutinate NOTINLEX COS-ate
aggravate INLEX *COS-ate
agitate INLEX *COS-ate
alienate INLEX *COS-ate
alleviate INLEX COS-ate
allocate INLEX COS-ate
alternate INLEX *COS-ate
amalgamate INLEX COS-ate
amputate INLEX COS-ate
animate INLEX COS-ate
annihilate INLEX COS-ate
annunciate NOTINLEX *COS-ate
anticipate INLEX *COS-ate
antiquate NOTINLEX COS-ate
appreciate INLEX *COS-ate
appropriate INLEX COS-ate
approximate INLEX *COS-ate
arbitrate INLEX *COS-ate
arrogate INLEX COS-ate
articulate INLEX *COS-ate
assassinate INLEX COS-ate
assimilate INLEX COS-ate
associate INLEX *COS-ate
ate INLEX *COS-ate
authenticate INLEX *COS-ate
automate INLEX COS-ate
beat INLEX COS-ate
berate INLEX *COS-ate
bleat INLEX COS-ate
bloat INLEX *COS-ate
boat INLEX *COS-ate
calculate INLEX *COS-ate
calibrate INLEX COS-ate
calumniate INLEX *COS-ate
capitulate INLEX COS-ate
captivate INLEX COS-ate
castigate INLEX *COS-ate
celebrate INLEX *COS-ate
cheat INLEX *COS-ate
circulate INLEX *COS-ate
co-operate NOTINLEX *COS-ate
co-ordinate NOTINLEX *COS-ate
coagulate INLEX COS-ate
coat INLEX *COS-ate
collaborate INLEX *COS-ate
collate INLEX COS-ate
collimate NOTINLEX COS-ate
combat INLEX *COS-ate
commemorate INLEX *COS-ate
commiserate INLEX *COS-ate
communicate INLEX *COS-ate
compensate INLEX *COS-ate
compleat NOTINLEX COS-ate
complicate INLEX *COS-ate
concentrate INLEX *COS-ate
conciliate INLEX COS-ate
confabulate INLEX *COS-ate


confiscate INLEX COS-ate
congratulate INLEX *COS-ate
congregate INLEX COS-ate
conjugate INLEX COS-ate
consolidate INLEX COS-ate
consummate INLEX COS-ate
contaminate INLEX COS-ate
contemplate INLEX *COS-ate
cooperate INLEX *COS-ate
coordinate INLEX *COS-ate
correlate INLEX COS-ate
corroborate INLEX *COS-ate
corrugate INLEX COS-ate
create INLEX COS-ate
cremate INLEX COS-ate
culminate INLEX COS-ate
cultivate INLEX *COS-ate
cumulate NOTINLEX COS-ate
date INLEX *COS-ate
de-iodinate NOTINLEX COS-ate
deactivate NEWDER COS-ate
debate INLEX *COS-ate
debilitate INLEX COS-ate
decelerate INLEX COS-ate
decorate INLEX *COS-ate
decorticate NOTINLEX COS-ate
dedicate INLEX *COS-ate
dedifferentiate NEWDER COS-ate
defeat INLEX *COS-ate
defecate INLEX *COS-ate
deflate INLEX COS-ate
degenerate INLEX COS-ate
dehydrate INLEX COS-ate
delegate INLEX *COS-ate
delineate INLEX *COS-ate
demarcate INLEX *COS-ate
demonstrate INLEX *COS-ate
denominate INLEX *COS-ate
derogate INLEX COS-ate
desecrate INLEX *COS-ate
desegregate INLEX COS-ate
designate INLEX COS-ate
deteriorate INLEX COS-ate
detonate INLEX COS-ate
deuterate NOTINLEX COS-ate
devastate INLEX COS-ate
deviate INLEX *COS-ate
dictate INLEX *COS-ate
differentiate INLEX *COS-ate
dilapidate INLEX COS-ate
dilate INLEX COS-ate
disaffiliate INLEX COS-ate
discorporate NOTINLEX COS-ate
discriminate INLEX *COS-ate
disintegrate INLEX COS-ate
dislocate INLEX COS-ate
disseminate INLEX COS-ate
dissipate INLEX COS-ate
dissociate INLEX COS-ate
dominate INLEX *COS-ate
donate INLEX COS-ate
duplicate INLEX *COS-ate
eat INLEX *COS-ate
educate INLEX COS-ate


effectuate INLEX COS-ate
ejaculate INLEX COS-ate
elaborate INLEX *COS-ate
elate INLEX COS-ate
elevate INLEX COS-ate
eliminate INLEX COS-ate
elongate INLEX COS-ate
elucidate INLEX COS-ate
emaciate INLEX COS-ate
emanate INLEX *COS-ate
emancipate INLEX COS-ate
emasculate INLEX COS-ate
emigrate INLEX COS-ate
emulate INLEX *COS-ate
enervate INLEX COS-ate
entreat INLEX *COS-ate
enumerate INLEX *COS-ate
enunciate INLEX *COS-ate
equate INLEX *COS-ate
equilibrate NOTINLEX COS-ate
eradicate INLEX COS-ate
estimate INLEX *COS-ate
evacuate INLEX COS-ate
evaporate INLEX COS-ate
eventuate INLEX COS-ate
exacerbate INLEX COS-ate
exaggerate INLEX *COS-ate
exasperate INLEX COS-ate
excommunicate INLEX COS-ate
excoriate INLEX *COS-ate
exhilarate INLEX COS-ate
exonerate INLEX COS-ate
expiate INLEX *COS-ate
expropriate INLEX COS-ate
extenuate INLEX COS-ate
exterminate INLEX COS-ate
extirpate INLEX COS-ate
extrapolate INLEX *COS-ate
extricate INLEX COS-ate
fabricate INLEX COS-ate
facilitate INLEX *COS-ate
fascinate INLEX *COS-ate
flagellate INLEX *COS-ate
float INLEX *COS-ate
flocculate NOTINLEX COS-ate
fluctuate INLEX *COS-ate
fluorinate NOTINLEX COS-ate
formulate INLEX *COS-ate
fractionate NOTINLEX COS-ate
frustrate INLEX *COS-ate
fulminate INLEX *COS-ate
generate INLEX COS-ate
germinate INLEX COS-ate
gesticulate INLEX COS-ate
gloat INLEX *COS-ate
glycerinate NOTINLEX COS-ate
graduate INLEX COS-ate
grate INLEX COS-ate
hallucinate INLEX *COS-ate
hate INLEX *COS-ate
heat INLEX COS-ate
hesitate INLEX *COS-ate
hibernate INLEX *COS-ate
humiliate INLEX *COS-ate


hydrate NOTINLEX COS-ate
hyphenate INLEX COS-ate
illuminate INLEX COS-ate
illustrate INLEX *COS-ate
imitate INLEX *COS-ate
impersonate INLEX *COS-ate
implicate INLEX *COS-ate
imprecate NOTINLEX *COS-ate
inactivate NOTINLEX COS-ate
inaugurate INLEX COS-ate
incapacitate INLEX COS-ate
incarcerate INLEX COS-ate
incarnate INLEX COS-ate
incorporate INLEX COS-ate
incriminate INLEX *COS-ate
incubate INLEX COS-ate
inculcate INLEX COS-ate
indicate INLEX *COS-ate
indoctrinate INLEX COS-ate
infiltrate INLEX COS-ate
inflate INLEX COS-ate
infuriate INLEX COS-ate
initiate INLEX COS-ate
innovate INLEX COS-ate
insinuate INLEX *COS-ate
instigate INLEX COS-ate
insulate INLEX COS-ate
integrate INLEX COS-ate
interpenetrate NEWDER COS-ate
interpolate INLEX COS-ate
interrelate INLEX *COS-ate
intimate INLEX *COS-ate
intoxicate INLEX COS-ate
inundate INLEX COS-ate
invalidate INLEX COS-ate
investigate INLEX *COS-ate
invigorate INLEX COS-ate
iodinate NOTINLEX COS-ate
irradiate INLEX *COS-ate
irrigate INLEX COS-ate
irritate INLEX *COS-ate
isolate INLEX COS-ate
lacerate INLEX COS-ate
lactate NOTINLEX *COS-ate
laminated INLEX COS-ate
legislate INLEX COS-ate
liberate INLEX COS-ate
liquidate INLEX COS-ate
locate INLEX *COS-ate
lubricate INLEX COS-ate
mandate INLEX COS-ate
manipulate INLEX *COS-ate
marinate INLEX COS-ate
mate INLEX *COS-ate
matriculate INLEX COS-ate
mediate INLEX *COS-ate
meditate INLEX *COS-ate
migrate INLEX COS-ate
militate INLEX *COS-ate
miscalculate INLEX *COS-ate
misrelate NEWDER *COS-ate
mitigate INLEX COS-ate
moderate INLEX COS-ate
modulate INLEX *COS-ate


motivate INLEX COS-ate
mutilate INLEX *COS-ate
narrate INLEX COS-ate
nauseate INLEX COS-ate
navigate INLEX *COS-ate
necessitate INLEX *COS-ate
negotiate INLEX *COS-ate
nominate INLEX *COS-ate
nucleate NOTINLEX COS-ate
obligate INLEX *COS-ate
obliterate INLEX COS-ate
officiate INLEX *COS-ate
operate INLEX *COS-ate
orate NOTINLEX *COS-ate
originate INLEX *COS-ate
oscillate INLEX *COS-ate
outdate NEWDER *COS-ate
overeat NEWDER *COS-ate
overestimate INLEX *COS-ate
overheat INLEX COS-ate
overpopulate NEWDER COS-ate
overrate INLEX *COS-ate
paginate NOTINLEX COS-ate
participate INLEX *COS-ate
penetrate INLEX COS-ate
perforate INLEX COS-ate
permeate INLEX COS-ate
perpetrate INLEX *COS-ate
perpetuate INLEX COS-ate
placate INLEX COS-ate
plate INLEX COS-ate
pontificate INLEX *COS-ate
populate INLEX COS-ate
postulate INLEX *COS-ate
precipitate INLEX *COS-ate
predominate INLEX *COS-ate
prefabricate INLEX COS-ate
preisolate NEWDER COS-ate
preponderate INLEX *COS-ate
procrastinate INLEX *COS-ate
proliferate INLEX *COS-ate
promulgate INLEX COS-ate
propagate INLEX COS-ate
propitiate INLEX *COS-ate
prorate NOTINLEX *COS-ate
pulsate INLEX *COS-ate
punctuate INLEX COS-ate
pupate INLEX COS-ate
radiate INLEX *COS-ate
rate INLEX *COS-ate
ratiocinate NOTINLEX COS-ate
re-activate NEWDER COS-ate
re-create NEWDER *COS-ate
re-evaluate NEWDER *COS-ate
re-incorporate NEWDER COS-ate
reactivate INLEX COS-ate
recalculate NEWDER *COS-ate
recapitulate INLEX *COS-ate
reciprocate INLEX ?COS-ate
recreate INLEX *COS-ate
recuperate INLEX COS-ate
redecorate INLEX *COS-ate
rededicate NEWDER *COS-ate


reformulate NEWDER *COS-ate
refrigerate INLEX COS-ate
regenerate INLEX *COS-ate
regulate INLEX *COS-ate
rehabilitate INLEX *COS-ate
reincarnate INLEX *COS-ate
reinstate INLEX *COS-ate
reiterate INLEX *COS-ate
relate INLEX *COS-ate
relegate INLEX COS-ate
remonstrate INLEX *COS-ate
renovate INLEX *COS-ate
repeat INLEX *COS-ate
reprobate INLEX *COS-ate
repudiate INLEX *COS-ate
restate INLEX *COS-ate
retaliate INLEX *COS-ate
retranslate NEWDER *COS-ate
retreat INLEX *COS-ate
reverberate INLEX *COS-ate
rotate INLEX *COS-ate
salivate INLEX *COS-ate
satiate INLEX COS-ate
saturate INLEX COS-ate
scintillate INLEX *COS-ate
seat INLEX COS-ate
segregate INLEX COS-ate
separate INLEX COS-ate
simulate INLEX *COS-ate
situate NOTINLEX *COS-ate
skate INLEX *COS-ate
slate INLEX *COS-ate
solvate NOTINLEX COS-ate
speculate INLEX *COS-ate
state INLEX *COS-ate
stimulate INLEX COS-ate
stipulate INLEX *COS-ate
subjugate INLEX COS-ate
sublimate INLEX COS-ate
subordinate INLEX COS-ate
substantiate INLEX *COS-ate
suffocate INLEX COS-ate
summate NOTINLEX COS-ate
supplicate INLEX *COS-ate
sweat INLEX *COS-ate
syndicate INLEX COS-ate
tabulate INLEX *COS-ate
terminate INLEX COS-ate
thoriate NOTINLEX COS-ate
tinplate NOTINLEX COS-ate
titillate INLEX COS-ate
tolerate INLEX *COS-ate
translate INLEX *COS-ate
transpirate NOTINLEX COS-ate
treat INLEX *COS-ate
truncate INLEX COS-ate
ulcerate INLEX COS-ate
underestimate INLEX *COS-ate
underrate INLEX *COS-ate
understate INLEX *COS-ate
undulate INLEX *COS-ate
update INLEX COS-ate
vacate INLEX COS-ate
vaccinate INLEX COS-ate


vacuolate NOTINLEX COS-ate
validate INLEX COS-ate
variegate NOTINLEX COS-ate
venerate INLEX *COS-ate
ventilate INLEX *COS-ate
vibrate INLEX *COS-ate
vindicate INLEX *COS-ate
violate INLEX *COS-ate
vitiate INLEX COS-ate

A.2.16 Words derived using -le

angle INLEX DENOM ACTIVITY-leV
assemble INLEX *ACTIVITY-leV
battle INLEX DENOM ACTIVITY-leV
buckle INLEX DENOM *ACTIVITY-leV
bundle INLEX DENOM *ACTIVITY-leV
cable INLEX DENOM *ACTIVITY-leV
chandelle *ACTIVITY-leV
chuckle INLEX DENOM ACTIVITY-leV
circle INLEX DENOM ACTIVITY-leV
crumble INLEX DENOM *ACTIVITY-leV
dangle INLEX ACTIVITY-leV
dazzle INLEX DENOM ACTIVITY-leV
disable INLEX *ACTIVITY-leV
disassemble INLEX *ACTIVITY-leV
disentangle INLEX *ACTIVITY-leV
double INLEX DENOM *ACTIVITY-leV
dwindle INLEX ACTIVITY-leV
embezzle INLEX ACTIVITY-leV
enable INLEX *ACTIVITY-leV
encircle INLEX ACTIVITY-leV
entitle INLEX ACTIVITY-leV
gamble INLEX DENOM ACTIVITY-leV
grapple INLEX ACTIVITY-leV
grumble INLEX DENOM ACTIVITY-leV
guzzle INLEX *ACTIVITY-leV
handle INLEX DENOM ACTIVITY-leV
hobble INLEX ACTIVITY-leV
humble INLEX *ACTIVITY-leV
hurdle INLEX DENOM *ACTIVITY-leV
hustle INLEX DENOM ACTIVITY-leV
jostle INLEX ACTIVITY-leV
knuckle INLEX DENOM ACTIVITY-leV
meddle INLEX ACTIVITY-leV
mingle INLEX ACTIVITY-leV
nolle INLEX *ACTIVITY-leV
paddle INLEX DENOM ACTIVITY-leV
peddle INLEX ACTIVITY-leV
people INLEX DENOM *ACTIVITY-leV
prevayle *ACTIVITY-leV
puzzle INLEX DENOM ACTIVITY-leV
quadruple INLEX *ACTIVITY-leV
quibble INLEX DENOM ACTIVITY-leV
ramble INLEX DENOM ACTIVITY-leV
reassemble INLEX DENOM *ACTIVITY-leV
resemble INLEX *ACTIVITY-leV
riffle INLEX ACTIVITY-leV
ripple INLEX DENOM ACTIVITY-leV
rustle INLEX DENOM ACTIVITY-leV
saddle INLEX DENOM *ACTIVITY-leV
sample INLEX DENOM *ACTIVITY-leV


settle INLEX DENOM *ACTIVITY-leV
shuffle INLEX DENOM ACTIVITY-leV
smuggle INLEX *ACTIVITY-leV
sprinkle INLEX DENOM ACTIVITY-leV
startle INLEX *ACTIVITY-leV
stifle INLEX *ACTIVITY-leV
straggle INLEX *ACTIVITY-leV
struggle INLEX DENOM ACTIVITY-leV
stumble INLEX DENOM ACTIVITY-leV
table INLEX DENOM *ACTIVITY-leV
tackle INLEX DENOM *ACTIVITY-leV
tangle INLEX DENOM ACTIVITY-leV
topple INLEX *ACTIVITY-leV
trample INLEX *ACTIVITY-leV
tremble INLEX DENOM ACTIVITY-leV
trouble INLEX DENOM ACTIVITY-leV
tumble INLEX DENOM ACTIVITY-leV
unscramble INLEX *ACTIVITY-leV
whistle INLEX DENOM ACTIVITY-leV
wobble INLEX DENOM ACTIVITY-leV
wrestle INLEX ACTIVITY-leV

A.2.17 Words derived using -ee

absentee INLEX PART-IN-E-ee SENTIENT-ee *NON-VOL-ee
addressee INLEX PART-IN-E-ee *SENTIENT-ee NON-VOL-ee
appointee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
assignee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
conferee NEWSTEMDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
deportee INLEX PART-IN-E-ee SENTIENT-ee NON-VOL-ee
devisee NEWDER PART-IN-E-ee *SENTIENT-ee NON-VOL-ee
devotee INLEX PART-IN-E-ee SENTIENT-ee *NON-VOL-ee
divorcee INLEX PART-IN-E-ee SENTIENT-ee *NON-VOL-ee
draftee INLEX PART-IN-E-ee SENTIENT-ee NON-VOL-ee
emcee NEWSTEMDER *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
employee INLEX PART-IN-E-ee SENTIENT-ee NON-VOL-ee
enrollee NEWDER PART-IN-E-ee SENTIENT-ee *NON-VOL-ee
escapee INLEX PART-IN-E-ee SENTIENT-ee *NON-VOL-ee
filagree NEWSTEMDER *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
honoree NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
inductee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
interviewee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
invitee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
licensee INLEX PART-IN-E-ee SENTIENT-ee NON-VOL-ee
millidegree NEWSTEMDER *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
nferee NEWSTEMDER *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
parolee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
patentee INLEX PART-IN-E-ee SENTIENT-ee NON-VOL-ee
repartee INLEX *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
sangaree NEWSTEMDER *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
three NEWSTEMDER *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
transferee NEWSTEMDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee
trustee INLEX *PART-IN-E-ee *SENTIENT-ee *NON-VOL-ee
viewee NEWDER PART-IN-E-ee SENTIENT-ee NON-VOL-ee

A.2.18 Words derived using -er

absorber NEWDER PART-IN-E-VerN
accelerometer NEWSTEMDER *PART-IN-E-VerN


achiever NEWDER PART-IN-E-VerN
actinometer NEWSTEMDER *PART-IN-E-VerN
adapter INLEX PART-IN-E-VerN
admirer INLEX PART-IN-E-VerN
advertiser NEWDER PART-IN-E-VerN
adviser INLEX PART-IN-E-VerN
amplifier INLEX PART-IN-E-VerN
analyzer NEWSTEMDER PART-IN-E-VerN
announcer INLEX PART-IN-E-VerN
appraiser NEWDER PART-IN-E-VerN
arranger NEWDER PART-IN-E-VerN
attacker NEWDER PART-IN-E-VerN
autoloader NEWSTEMDER PART-IN-E-VerN
backer INLEX PART-IN-E-VerN
baffler NEWDER PART-IN-E-VerN
baker INLEX PART-IN-E-VerN
ballplayer NEWSTEMDER *PART-IN-E-VerN
banker INLEX PART-IN-E-VerN
barnstormer NEWDER PART-IN-E-VerN
bather NEWDER PART-IN-E-VerN
bearer INLEX PART-IN-E-VerN
believer INLEX PART-IN-E-VerN
bellwether NEWSTEMDER *PART-IN-E-VerN
besieger NEWDER PART-IN-E-VerN
betrayer NEWDER PART-IN-E-VerN
binder INLEX PART-IN-E-VerN
biter NEWDER PART-IN-E-VerN
blackmailer NEWDER PART-IN-E-VerN
blazer INLEX *PART-IN-E-VerN
bleacher INLEX *PART-IN-E-VerN
blinker INLEX PART-IN-E-VerN
blower INLEX PART-IN-E-VerN
boarder INLEX PART-IN-E-VerN
boater INLEX PART-IN-E-VerN
bodybuilder NEWSTEMDER PART-IN-E-VerN
boiler INLEX PART-IN-E-VerN
bomber INLEX PART-IN-E-VerN
booker NEWDER PART-IN-E-VerN
booster INLEX PART-IN-E-VerN
borer INLEX PART-IN-E-VerN
borrower NEWDER PART-IN-E-VerN
bower INLEX *PART-IN-E-VerN
boxer INLEX PART-IN-E-VerN
breaker INLEX PART-IN-E-VerN
brewer INLEX PART-IN-E-VerN
briber NEWDER PART-IN-E-VerN
broadcaster NEWDER PART-IN-E-VerN
broiler INLEX PART-IN-E-VerN
buffer INLEX *PART-IN-E-VerN
bugler NEWSTEMDER PART-IN-E-VerN
builder INLEX PART-IN-E-VerN
bullwhacker NEWSTEMDER ?PART-IN-E-VerN
bumper INLEX PART-IN-E-VerN
bunker INLEX *PART-IN-E-VerN
bunter NEWSTEMDER PART-IN-E-VerN
burner INLEX PART-IN-E-VerN
butter INLEX *PART-IN-E-VerN
buyer INLEX PART-IN-E-VerN
caller INLEX PART-IN-E-VerN
calligrapher NEWSTEMDER *PART-IN-E-VerN
calorimeter NEWSTEMDER *PART-IN-E-VerN
campaigner NEWDER PART-IN-E-VerN
camper INLEX PART-IN-E-VerN


canter INLEX *PART-IN-E-VerN
canvasser NEWDER PART-IN-E-VerN
carreer NEWSTEMDER *PART-IN-E-VerN
carrier INLEX PART-IN-E-VerN
carver INLEX PART-IN-E-VerN
caseworker NEWSTEMDER *PART-IN-E-VerN
caster INLEX PART-IN-E-VerN
catcher INLEX PART-IN-E-VerN
challenger NEWDER PART-IN-E-VerN
chamfer NEWSTEMDER *PART-IN-E-VerN
chanter NEWDER PART-IN-E-VerN
charmer INLEX PART-IN-E-VerN
charter INLEX *PART-IN-E-VerN
checker INLEX PART-IN-E-VerN
chowder NEWSTEMDER *PART-IN-E-VerN
chronicler NEWDER PART-IN-E-VerN
classifier NEWDER PART-IN-E-VerN
cleaner INLEX PART-IN-E-VerN
clincher INLEX PART-IN-E-VerN
co-signer NEWDER PART-IN-E-VerN
co-worker INLEX PART-IN-E-VerN
comer INLEX *PART-IN-E-VerN
commander INLEX PART-IN-E-VerN
commissioner INLEX *PART-IN-E-VerN
commuter INLEX PART-IN-E-VerN
compiler NEWDER PART-IN-E-VerN
composer INLEX PART-IN-E-VerN
computer INLEX PART-IN-E-VerN
condenser INLEX PART-IN-E-VerN
conditioner NEWDER PART-IN-E-VerN
conniver NEWDER PART-IN-E-VerN
consumer INLEX PART-IN-E-VerN
container INLEX PART-IN-E-VerN
contender INLEX PART-IN-E-VerN
cooler INLEX PART-IN-E-VerN
corker INLEX PART-IN-E-VerN
corner INLEX *PART-IN-E-VerN
corrupter INLEX PART-IN-E-VerN
counter INLEX *PART-IN-E-VerN
coupler NEWDER PART-IN-E-VerN
coworker INLEX PART-IN-E-VerN
cowpuncher NEWSTEMDER *PART-IN-E-VerN
courtier NEWSTEMDER *PART-IN-E-VerN
cracker INLEX *PART-IN-E-VerN
crafter NEWDER PART-IN-E-VerN
crasher INLEX PART-IN-E-VerN
crater INLEX *PART-IN-E-VerN
creamer INLEX *PART-IN-E-VerN
creeper INLEX PART-IN-E-VerN
cruiser INLEX PART-IN-E-VerN
crusader NEWDER PART-IN-E-VerN
crusher NEWDER PART-IN-E-VerN
crystallographer NEWSTEMDER *PART-IN-E-VerN
cuirassier NEWSTEMDER *PART-IN-E-VerN
dabbler NEWDER PART-IN-E-VerN
dager NEWSTEMDER *PART-IN-E-VerN
dancer NEWDER PART-IN-E-VerN
dazzler NEWDER PART-IN-E-VerN
dealer INLEX PART-IN-E-VerN
defender NEWDER PART-IN-E-VerN
deluxer NEWSTEMDER *PART-IN-E-VerN
demander NEWDER PART-IN-E-VerN
designer INLEX PART-IN-E-VerN
despoiler NEWDER PART-IN-E-VerN


destroyer INLEX PART-IN-E-VerN
diagnometer NEWSTEMDER *PART-IN-E-VerN
dieter NEWDER PART-IN-E-VerN
diffuser NEWDER PART-IN-E-VerN
dimer NEWSTEMDER *PART-IN-E-VerN
disclaimer INLEX PART-IN-E-VerN
discoverer NEWDER PART-IN-E-VerN
dispenser INLEX PART-IN-E-VerN
dissenter INLEX PART-IN-E-VerN
distiller INLEX PART-IN-E-VerN
disturber NEWDER PART-IN-E-VerN
ditcher NEWDER *PART-IN-E-VerN
diver INLEX PART-IN-E-VerN
divider NEWDER PART-IN-E-VerN
docter NEWSTEMDER *PART-IN-E-VerN
doer INLEX PART-IN-E-VerN
double-crosser NEWDER PART-IN-E-VerN
doubleheader NEWSTEMDER ?PART-IN-E-VerN
drafter NEWDER PART-IN-E-VerN
draper INLEX *PART-IN-E-VerN
drawer INLEX *PART-IN-E-VerN
dreamer INLEX PART-IN-E-VerN
dresser INLEX PART-IN-E-VerN
drier INLEX PART-IN-E-VerN
drinker INLEX PART-IN-E-VerN
driver INLEX PART-IN-E-VerN
ducer NEWSTEMDER PART-IN-E-VerN
dweller NEWDER PART-IN-E-VerN
easterner NEWSTEMDER *PART-IN-E-VerN
eater INLEX PART-IN-E-VerN
employer INLEX PART-IN-E-VerN
enforcer NEWDER PART-IN-E-VerN
engraver NEWDER PART-IN-E-VerN
enjoinder NEWSTEMDER *PART-IN-E-VerN
entertainer INLEX PART-IN-E-VerN
equalizer NEWDER PART-IN-E-VerN
eraser INLEX PART-IN-E-VerN
ester NEWSTEMDER *PART-IN-E-VerN
ether NEWSTEMDER *PART-IN-E-VerN
eulogizer NEWDER PART-IN-E-VerN
ex-schoolteacher NEWSTEMDER *PART-IN-E-VerN
examiner NEWDER PART-IN-E-VerN
experimenter NEWDER PART-IN-E-VerN
exploiter NEWDER PART-IN-E-VerN
explorer NEWDER PART-IN-E-VerN
exporter INLEX PART-IN-E-VerN
extruder NEWDER PART-IN-E-VerN
fairgoer NEWSTEMDER *PART-IN-E-VerN
faker INLEX PART-IN-E-VerN
farmer INLEX PART-IN-E-VerN
favorer NEWDER PART-IN-E-VerN
feeder INLEX PART-IN-E-VerN
feeler INLEX PART-IN-E-VerN
feler NEWSTEMDER *PART-IN-E-VerN
feller INLEX *PART-IN-E-VerN
fender INLEX *PART-IN-E-VerN
ferometer NEWSTEMDER *PART-IN-E-VerN
fertilizer INLEX PART-IN-E-VerN
fielder INLEX PART-IN-E-VerN
fighter INLEX PART-IN-E-VerN
filler INLEX PART-IN-E-VerN
finder INLEX PART-IN-E-VerN
finisher NEWDER PART-IN-E-VerN
fisher NEWDER PART-IN-E-VerN


fixer NEWDER PART-IN-E-VerN
flier INLEX PART-IN-E-VerN
floater NEWDER PART-IN-E-VerN
flower INLEX *PART-IN-E-VerN
folder INLEX *PART-IN-E-VerN
follower INLEX PART-IN-E-VerN
forecaster NEWDER PART-IN-E-VerN
founder INLEX PART-IN-E-VerN
framer NEWDER PART-IN-E-VerN
freewheeler NEWDER PART-IN-E-VerN
freezer INLEX PART-IN-E-VerN
freighter INLEX PART-IN-E-VerN
gagwriter NEWSTEMDER *PART-IN-E-VerN
gainer NEWDER PART-IN-E-VerN
gallbladder NEWSTEMDER *PART-IN-E-VerN
gambler NEWDER PART-IN-E-VerN
gardener NEWDER PART-IN-E-VerN
gazer NEWDER PART-IN-E-VerN
girder INLEX *PART-IN-E-VerN
giver NEWDER PART-IN-E-VerN
glander NEWSTEMDER *PART-IN-E-VerN
glider INLEX PART-IN-E-VerN
glover NEWSTEMDER *PART-IN-E-VerN
gobbler INLEX PART-IN-E-VerN
golfer NEWDER PART-IN-E-VerN
grader NEWDER PART-IN-E-VerN
grasser NEWDER *PART-IN-E-VerN
grazer NEWDER PART-IN-E-VerN
grinder INLEX PART-IN-E-VerN
grounder NEWDER *PART-IN-E-VerN
grower INLEX PART-IN-E-VerN
gunfighter NEWSTEMDER *PART-IN-E-VerN
gunslinger NEWSTEMDER *PART-IN-E-VerN
gusher INLEX PART-IN-E-VerN
hacker NEWDER PART-IN-E-VerN
halter INLEX *PART-IN-E-VerN
hander NEWDER *PART-IN-E-VerN
handler INLEX PART-IN-E-VerN
hanger INLEX PART-IN-E-VerN
hardener NEWDER PART-IN-E-VerN
hasher NEWDER *PART-IN-E-VerN
hawker INLEX PART-IN-E-VerN
header INLEX *PART-IN-E-VerN
headquarter NEWSTEMDER *PART-IN-E-VerN
headwater NEWSTEMDER *PART-IN-E-VerN
healer NEWDER PART-IN-E-VerN
hearer INLEX PART-IN-E-VerN
heater INLEX PART-IN-E-VerN
heaver NEWDER ?PART-IN-E-VerN
heeler NEWDER PART-IN-E-VerN
helper NEWDER PART-IN-E-VerN
hesiometer NEWSTEMDER *PART-IN-E-VerN
hijacker NEWDER PART-IN-E-VerN
holder INLEX PART-IN-E-VerN
homebuilder NEWSTEMDER *PART-IN-E-VerN
homemaker NEWSTEMDER *PART-IN-E-VerN
homeowner NEWSTEMDER *PART-IN-E-VerN
homer INLEX *PART-IN-E-VerN
homesteader NEWSTEMDER PART-IN-E-VerN
homopolymer NEWSTEMDER *PART-IN-E-VerN
honeymooner NEWDER PART-IN-E-VerN
hunter INLEX PART-IN-E-VerN
hurler NEWDER PART-IN-E-VerN
hustler INLEX PART-IN-E-VerN
idler INLEX PART-IN-E- VerN
impresser NEWDER PART-IN-E- VerN
improviser NEWDER PART-IN-E- VerN
intensifier INLEX PART-IN-E- VerN
interferometer NEWSTEMDER *PART-IN-E- VerN
interviewer NEWDER PART-IN-E- VerN
intruder INLEX PART-IN-E- VerN
invader NEWDER PART-IN-E- VerN
isomer NEWSTEMDER *PART-IN-E- VerN
jetliner NEWSTEMDER *PART-IN-E- VerN
joiner INLEX PART-IN-E- VerN
joker INLEX PART-IN-E- VerN
jumper INLEX PART-IN-E- VerN
keeper INLEX PART-IN-E- VerN
kidnaper NEWSTEMDER PART-IN-E- VerN
killer INLEX PART-IN-E- VerN
laborer INLEX PART-IN-E- VerN
landowner NEWSTEMDER *PART-IN-E- VerN
larder INLEX *PART-IN-E- VerN
launcher NEWDER PART-IN-E- VerN
lawmaker NEWSTEMDER *PART-IN-E- VerN
layer INLEX *PART-IN-E- VerN
leader INLEX PART-IN-E- VerN
leafhopper NEWSTEMDER *PART-IN-E- VerN
leaguer NEWDER *PART-IN-E- VerN
learner INLEX PART-IN-E- VerN
lecher INLEX *PART-IN-E- VerN
lecturer INLEX PART-IN-E- VerN
lifter NEWDER PART-IN-E- VerN
lighter INLEX PART-IN-E- VerN
linebacker NEWSTEMDER *PART-IN-E- VerN
liner INLEX *PART-IN-E- VerN
listener INLEX PART-IN-E- VerN
liver INLEX *PART-IN-E- VerN
loader NEWDER PART-IN-E- VerN
locker INLEX PART-IN-E- VerN
loser INLEX PART-IN-E- VerN
lover INLEX PART-IN-E- VerN
lower INLEX *PART-IN-E- VerN
luster INLEX *PART-IN-E- VerN
maker INLEX PART-IN-E- VerN
manager INLEX PART-IN-E- VerN
manufacturer INLEX PART-IN-E- VerN
marauder NEWDER PART-IN-E- VerN
marker INLEX PART-IN-E- VerN
mauler NEWDER PART-IN-E- VerN
merger INLEX PART-IN-E- VerN
midwesterner NEWSTEMDER *PART-IN-E- VerN
millivoltmeter NEWSTEMDER *PART-IN-E- VerN
miner INLEX PART-IN-E- VerN
minter NEWDER PART-IN-E- VerN
misinterpreter NEWSTEMDER PART-IN-E- VerN
misunderstander NEWDER PART-IN-E- VerN
mixer INLEX PART-IN-E- VerN
modifier INLEX PART-IN-E- VerN
monomer NEWSTEMDER *PART-IN-E- VerN
mooncurser NEWSTEMDER *PART-IN-E- VerN
motorscooter NEWSTEMDER *PART-IN-E- VerN
mourner INLEX PART-IN-E- VerN
mover INLEX PART-IN-E- VerN
mucker NEWDER *PART-IN-E- VerN
muffler INLEX PART-IN-E- VerN
murderer NEWDER PART-IN-E- VerN
mutterer NEWDER PART-IN-E- VerN
nester NEWDER PART-IN-E- VerN
nibbler NEWDER PART-IN-E- VerN
nighter NEWSTEMDER *PART-IN-E- VerN
noisemaker NEWSTEMDER *PART-IN-E- VerN
northener NEWSTEMDER *PART-IN-E- VerN
norther NEWSTEMDER *PART-IN-E- VerN
northerner NEWSTEMDER *PART-IN-E- VerN
nullifier NEWDER PART-IN-E- VerN
number INLEX *PART-IN-E- VerN
observer INLEX PART-IN-E- VerN
offender INLEX PART-IN-E- VerN
opener INLEX PART-IN-E- VerN
operagoer NEWSTEMDER *PART-IN-E- VerN
organizer NEWDER PART-IN-E- VerN
other NEWSTEMDER *PART-IN-E- VerN
ouster NEWDER PART-IN-E- VerN
outfielder NEWDER *PART-IN-E- VerN
outlander NEWDER *PART-IN-E- VerN
outsider INLEX *PART-IN-E- VerN
over-achiever NEWDER PART-IN-E- VerN
overnighter NEWSTEMDER *PART-IN-E- VerN
overseer INLEX PART-IN-E- VerN
owner INLEX PART-IN-E- VerN
oystcher NEWSTEMDER *PART-IN-E- VerN
pacer NEWDER PART-IN-E- VerN
pacifier INLEX PART-IN-E- VerN
painter INLEX PART-IN-E- VerN
parameter NEWSTEMDER *PART-IN-E- VerN
parameter NEWSTEMDER *PART-IN-E- VerN
partaker NEWDER PART-IN-E- VerN
pateroller NEWSTEMDER *PART-IN-E- VerN
peddler INLEX PART-IN-E- VerN
peer INLEX *PART-IN-E- VerN
pensioner INLEX *PART-IN-E- VerN
performer INLEX PART-IN-E- VerN
persuader NEWDER PART-IN-E- VerN
petitioner INLEX PART-IN-E- VerN
photographer INLEX PART-IN-E- VerN
picker INLEX PART-IN-E- VerN
piper INLEX PART-IN-E- VerN
pistoleer NEWSTEMDER *PART-IN-E- VerN
pitcher INLEX PART-IN-E- VerN
planer INLEX PART-IN-E- VerN
planter INLEX PART-IN-E- VerN
plasterer INLEX PART-IN-E- VerN
player INLEX PART-IN-E- VerN
pleader NEWDER PART-IN-E- VerN
plier INLEX *PART-IN-E- VerN
plumber INLEX *PART-IN-E- VerN
plunderer NEWDER PART-IN-E- VerN
plunker NEWDER ?PART-IN-E- VerN
pointer INLEX PART-IN-E- VerN
poker INLEX PART-IN-E- VerN
polyester NEWSTEMDER *PART-IN-E- VerN
polyether NEWSTEMDER *PART-IN-E- VerN
polyether NEWSTEMDER *PART-IN-E- VerN
pornographer NEWSTEMDER *PART-IN-E- VerN
porter INLEX *PART-IN-E- VerN
portwatcher NEWSTEMDER *PART-IN-E- VerN
poster INLEX PART-IN-E- VerN
potentiometer NEWSTEMDER *PART-IN-E- VerN
prayer INLEX PART-IN-E- VerN
preacher NEWDER PART-IN-E- VerN
prepolymer NEWSTEMDER *PART-IN-E- VerN
presenter INLEX PART-IN-E- VerN
presser NEWDER PART-IN-E- VerN
pretender INLEX PART-IN-E- VerN
primer INLEX PART-IN-E- VerN
printer INLEX PART-IN-E- VerN
procurer INLEX PART-IN-E- VerN
producer INLEX PART-IN-E- VerN
programmer INLEX PART-IN-E- VerN
promoter INLEX PART-IN-E- VerN
proprieter NEWSTEMDER *PART-IN-E- VerN
prowler INLEX PART-IN-E- VerN
publisher INLEX PART-IN-E- VerN
puncher NEWDER PART-IN-E- VerN
purchaser INLEX PART-IN-E- VerN
pursuer INLEX PART-IN-E- VerN
pusher INLEX PART-IN-E- VerN
putter INLEX PART-IN-E- VerN
puzzler INLEX PART-IN-E- VerN
pyrometer NEWSTEMDER *PART-IN-E- VerN
pyrometer NEWSTEMDER *PART-IN-E- VerN
questioner INLEX PART-IN-E- VerN
racer INLEX PART-IN-E- VerN
rafter INLEX PART-IN-E- VerN
raider INLEX PART-IN-E- VerN
railroader NEWDER PART-IN-E- VerN
raiser NEWDER PART-IN-E- VerN
ranger INLEX PART-IN-E- VerN
rattler NEWDER PART-IN-E- VerN
reader INLEX PART-IN-E- VerN
receiver INLEX PART-IN-E- VerN
recorder INLEX PART-IN-E- VerN
recruiter NEWDER PART-IN-E- VerN
rectifier INLEX PART-IN-E- VerN
redeveloper NEWSTEMDER PART-IN-E- VerN
redheader NEWSTEMDER *PART-IN-E- VerN
reducer NEWDER PART-IN-E- VerN
reformer NEWDER PART-IN-E- VerN
refresher INLEX PART-IN-E- VerN
reminder INLEX PART-IN-E- VerN
repeater INLEX PART-IN-E- VerN
reporter INLEX PART-IN-E- VerN
requester NEWDER PART-IN-E- VerN
researcher NEWDER PART-IN-E- VerN
restorer INLEX PART-IN-E- VerN
retailer INLEX PART-IN-E- VerN
retainer INLEX PART-IN-E- VerN
retriever INLEX PART-IN-E- VerN
revenuer NEWSTEMDER *PART-IN-E- VerN
reviewer INLEX PART-IN-E- VerN
revolver INLEX *PART-IN-E- VerN
rider INLEX PART-IN-E- VerN
ringer INLEX *PART-IN-E- VerN
ringsider NEWSTEMDER *PART-IN-E- VerN
rioter NEWDER PART-IN-E- VerN
river INLEX *PART-IN-E- VerN
rocker INLEX PART-IN-E- VerN
rodder NEWSTEMDER *PART-IN-E- VerN
roemer NEWSTEMDER ?PART-IN-E- VerN
roller INLEX PART-IN-E- VerN
romancer NEWDER PART-IN-E- VerN
roofer NEWDER PART-IN-E- VerN
rooster INLEX *PART-IN-E- VerN
roper NEWDER PART-IN-E- VerN
rover INLEX *PART-IN-E- VerN
ruler INLEX PART-IN-E- VerN
rustler INLEX PART-IN-E- VerN
sacker NEWDER PART-IN-E- VerN
saloonkeeper NEWSTEMDER *PART-IN-E- VerN
sampler INLEX PART-IN-E- VerN
sander INLEX PART-IN-E- VerN
saucer INLEX *PART-IN-E- VerN
saver INLEX PART-IN-E- VerN
sawtimber NEWSTEMDER *PART-IN-E- VerN
scavenger INLEX PART-IN-E- VerN
schooler NEWDER PART-IN-E- VerN
schoolteacher NEWSTEMDER PART-IN-E- VerN
scorcher INLEX PART-IN-E- VerN
scriber INLEX PART-IN-E- VerN
seafarer NEWSTEMDER *PART-IN-E- VerN
seducer NEWDER PART-IN-E- VerN
seeker NEWDER PART-IN-E- VerN
seer INLEX *PART-IN-E- VerN
seller INLEX PART-IN-E- VerN
sender INLEX PART-IN-E- VerN
settler INLEX PART-IN-E- VerN
sewer INLEX *PART-IN-E- VerN
shaker INLEX PART-IN-E- VerN
sharer NEWDER PART-IN-E- VerN
shifter NEWDER PART-IN-E- VerN
shocker INLEX PART-IN-E- VerN
shooter NEWDER PART-IN-E- VerN
shouder NEWSTEMDER *PART-IN-E- VerN
shower INLEX *PART-IN-E- VerN
sidewinder NEWSTEMDER *PART-IN-E- VerN
sightseer INLEX PART-IN-E- VerN
signer INLEX PART-IN-E- VerN
singer NEWDER PART-IN-E- VerN
skirmisher NEWDER PART-IN-E- VerN
skyjacker NEWDER PART-IN-E- VerN
slanderer NEWDER PART-IN-E- VerN
sleeper INLEX PART-IN-E- VerN
slicker INLEX *PART-IN-E- VerN
smoker INLEX PART-IN-E- VerN
smuggler NEWDER PART-IN-E- VerN
sneaker INLEX *PART-IN-E- VerN
sniper NEWDER *PART-IN-E- VerN
snuffer INLEX PART-IN-E- VerN
softener NEWDER PART-IN-E- VerN
sojourner NEWDER PART-IN-E- VerN
sommelier NEWSTEMDER *PART-IN-E- VerN
southerner NEWSTEMDER *PART-IN-E- VerN
soyaburger NEWSTEMDER *PART-IN-E- VerN
spacer NEWDER PART-IN-E- VerN
speaker INLEX PART-IN-E- VerN
spectrometer NEWSTEMDER *PART-IN-E- VerN
spectrophotometer NEWSTEMDER *PART-IN-E- VerN
spender INLEX PART-IN-E- VerN
spoiler NEWDER PART-IN-E- VerN
sportswriter NEWSTEMDER *PART-IN-E- VerN
spreader NEWDER PART-IN-E- VerN
stabilizer INLEX PART-IN-E- VerN
stager INLEX PART-IN-E- VerN
starter INLEX PART-IN-E- VerN
stealer NEWDER PART-IN-E- VerN
steamer INLEX PART-IN-E- VerN
steelmaker NEWSTEMDER *PART-IN-E- VerN
stepmother NEWSTEMDER *PART-IN-E- VerN
stinkpotter NEWSTEMDER *PART-IN-E- VerN
stoker NEWDER PART-IN-E- VerN
straggler NEWDER PART-IN-E- VerN
streamer INLEX *PART-IN-E- VerN
streamliner NEWDER *PART-IN-E- VerN
stretcher INLEX PART-IN-E- VerN
subscriber INLEX PART-IN-E- VerN
sucker INLEX *PART-IN-E- VerN
sufferer INLEX PART-IN-E- VerN
supplier INLEX PART-IN-E- VerN
supporter INLEX PART-IN-E- VerN
suspender INLEX PART-IN-E- VerN
sweater INLEX *PART-IN-E- VerN
synchronizer NEWDER PART-IN-E- VerN
taker INLEX PART-IN-E- VerN
talker INLEX PART-IN-E- VerN
taper INLEX *PART-IN-E- VerN
taxpayer NEWSTEMDER *PART-IN-E- VerN
teacher INLEX PART-IN-E- VerN
telegrapher INLEX PART-IN-E- VerN
teller INLEX PART-IN-E- VerN
tempter NEWDER PART-IN-E- VerN
theatergoer NEWSTEMDER *PART-IN-E- VerN
thickener INLEX PART-IN-E- VerN
thinker NEWDER PART-IN-E- VerN
thriller INLEX PART-IN-E- VerN
thrower NEWDER PART-IN-E- VerN
tier INLEX *PART-IN-E- VerN
tiller INLEX PART-IN-E- VerN
timer INLEX PART-IN-E- VerN
titer NEWSTEMDER *PART-IN-E- VerN
toddler INLEX PART-IN-E- VerN
toner NEWDER PART-IN-E- VerN
toolmaker NEWSTEMDER *PART-IN-E- VerN
tormenter NEWDER PART-IN-E- VerN
torquer NEWSTEMDER PART-IN-E- VerN
tower INLEX *PART-IN-E- VerN
tracer INLEX PART-IN-E- VerN
trader INLEX PART-IN-E- VerN
trailer INLEX PART-IN-E- VerN
tranquilizer NEWDER PART-IN-E- VerN
transducer NEWSTEMDER PART-IN-E- VerN
transformer INLEX PART-IN-E- VerN
traveler INLEX PART-IN-E- VerN
trawler INLEX PART-IN-E- VerN
treasurer INLEX *PART-IN-E- VerN
trooper INLEX *PART-IN-E- VerN
truckdriver NEWSTEMDER *PART-IN-E- VerN
trucker NEWSTEMDER PART-IN-E- VerN
trumpeter NEWSTEMDER PART-IN-E- VerN
tumbler INLEX PART-IN-E- VerN
twirler NEWDER PART-IN-E- VerN
twister INLEX PART-IN-E- VerN
under-achiever NEWDER PART-IN-E- VerN
underachiever NEWDER PART-IN-E- VerN
undertaker INLEX *PART-IN-E- VerN
underwriter INLEX PART-IN-E- VerN
upholder NEWDER PART-IN-E- VerN
user INLEX PART-IN-E- VerN
vacationer NEWDER PART-IN-E- VerN
vernier NEWSTEMDER *PART-IN-E- VerN
viewer INLEX PART-IN-E- VerN
viscometer NEWSTEMDER *PART-IN-E- VerN
voltmeter NEWSTEMDER *PART-IN-E- VerN
voter INLEX PART-IN-E- VerN
voucher INLEX PART-IN-E- VerN
voyager INLEX PART-IN-E- VerN
wager INLEX *PART-IN-E- VerN
waiter INLEX PART-IN-E- VerN
walker INLEX PART-IN-E- VerN
wanderer INLEX PART-IN-E- VerN
washer INLEX PART-IN-E- VerN
wastewater NEWSTEMDER *PART-IN-E- VerN
watcher NEWDER PART-IN-E- VerN
waver INLEX PART-IN-E- VerN
westerner NEWSTEMDER *PART-IN-E- VerN
westerner NEWSTEMDER *PART-IN-E- VerN
whisker INLEX ?PART-IN-E- VerN
wielder NEWDER PART-IN-E- VerN
wiener NEWSTEMDER ?PART-IN-E- VerN
wigmaker NEWSTEMDER *PART-IN-E- VerN
winder NEWDER PART-IN-E- VerN
wisenheimner NEWSTEMDER *PART-IN-E- VerN
woodcarver NEWSTEMDER ?PART-IN-E- VerN
woolworker NEWSTEMDER *PART-IN-E- VerN
worker INLEX PART-IN-E- VerN
wonlder NEWSTEMDER *PART-IN-E- VerN
wpuncher NEWSTEMDER *PART-IN-E- VerN
wrangler INLEX PART-IN-E- VerN
writer INLEX PART-IN-E- VerN
wrongdoer NEWSTEMDER *PART-IN-E- VerN
wrtier NEWSTEMDER *PART-IN-E- VerN
yachter NEWSTEMDER PART-IN-E- VerN

A.2.19 Words derived using -ant

accountant INLEX PART-IN-E-ant
aspirant INLEX PART-IN-E-ant
assailant INLEX PART-IN-E-ant
assistant INLEX PART-IN-E-ant
attendant INLEX PART-IN-E-ant
claimant INLEX PART-IN-E-ant
commandant INLEX PART-IN-E-ant
consultant INLEX PART-IN-E-ant
contestant INLEX PART-IN-E-ant
coolant INLEX PART-IN-E-ant
defendant INLEX PART-IN-E-ant
depressant NEWDER PART-IN-E-ant
descendant INLEX PART-IN-E-ant
determinant INLEX ?PART-IN-E-ant
discussant NEWDER PART-IN-E-ant
habitant NEWSTEMDER *PART-IN-E-ant
informant INLEX PART-IN-E-ant
intendant NEWDER *PART-IN-E-ant
ironpant NEWSTEMDER *PART-IN-E-ant
malfeasant NEWSTEMDER ?PART-IN-E-ant
misdemeanant NEWDER *PART-IN-E-ant
pageant INLEX *PART-IN-E-ant
pleasant INLEX *PART-IN-E-ant
powerplant NEWSTEMDER *PART-IN-E-ant
resultant INLEX PART-IN-E-ant
servant INLEX PART-IN-E-ant
significant NEWSTEMDER *PART-IN-E-ant
variant INLEX ?PART-IN-E-ant

A.2.20 Words derived using -ment

abandonment NEWDER E-OR-P-OR-R-ment
abasement NEWDER E-OR-P-OR-R-ment
abutment INLEX E-OR-P-OR-R-ment
accompaniment INLEX E-OR-P-OR-R-ment
accomplishment INLEX E-OR-P-OR-R-ment
accouterment NEWSTEMDER E-OR-P-OR-R-ment
achievement INLEX E-OR-P-OR-R-ment
acknowledgement NEWDER E-OR-P-OR-R-ment
adjournment NEWDER E-OR-P-OR-R-ment
adjustment NEWDER E-OR-P-OR-R-ment
admonishment NEWDER E-OR-P-OR-R-ment
advancement INLEX E-OR-P-OR-R-ment
advertisement INLEX E-OR-P-OR-R-ment
advisement NEWDER E-OR-P-OR-R-ment
agreement INLEX E-OR-P-OR-R-ment
ailment INLEX E-OR-P-OR-R-ment
alignment INLEX E-OR-P-OR-R-ment
allotment INLEX E-OR-P-OR-R-ment
allurement INLEX E-OR-P-OR-R-ment
amazement NEWDER E-OR-P-OR-R-ment
amendment INLEX E-OR-P-OR-R-ment
amusement INLEX E-OR-P-OR-R-ment
announcement INLEX E-OR-P-OR-R-ment
appeasement INLEX E-OR-P-OR-R-ment
appointment INLEX E-OR-P-OR-R-ment
apportionment NEWDER E-OR-P-OR-R-ment
arrangement INLEX E-OR-P-OR-R-ment
assesment NEWSTEMDER E-OR-P-OR-R-ment
assessment INLEX E-OR-P-OR-R-ment
assignment INLEX E-OR-P-OR-R-ment
assortment INLEX E-OR-P-OR-R-ment
astonishment INLEX E-OR-P-OR-R-ment
atonement NEWDER E-OR-P-OR-R-ment
attachment INLEX E-OR-P-OR-R-ment
attainment INLEX E-OR-P-OR-R-ment
banishment NEWDER E-OR-P-OR-R-ment
basement INLEX *E-OR-P-OR-R-ment
battlement INLEX *E-OR-P-OR-R-ment
bedazzlement NEWSTEMDER E-OR-P-OR-R-ment
bereavement INLEX E-OR-P-OR-R-ment
betterment INLEX E-OR-P-OR-R-ment
bewilderment NEWDER E-OR-P-OR-R-ment
bombardment INLEX E-OR-P-OR-R-ment
chastisement INLEX E-OR-P-OR-R-ment
citement INLEX *E-OR-P-OR-R-ment
commandment INLEX E-OR-P-OR-R-ment
commencement INLEX *E-OR-P-OR-R-ment
commitment INLEX E-OR-P-OR-R-ment
committment NEWSTEMDER E-OR-P-OR-R-ment
compliment INLEX *E-OR-P-OR-R-ment
comportment INLEX E-OR-P-OR-R-ment
concealment INLEX E-OR-P-OR-R-ment
confinement INLEX E-OR-P-OR-R-ment
containment INLEX E-OR-P-OR-R-ment
contentment INLEX E-OR-P-OR-R-ment
curement NEWDER *E-OR-P-OR-R-ment
decrement NEWSTEMDER *E-OR-P-OR-R-ment
deferment NEWDER E-OR-P-OR-R-ment
delineament NEWSTEMDER E-OR-P-OR-R-ment
department INLEX *E-OR-P-OR-R-ment
deployment NEWDER E-OR-P-OR-R-ment
derangement NEWDER E-OR-P-OR-R-ment
detachment INLEX E-OR-P-OR-R-ment
detriment INLEX *E-OR-P-OR-R-ment
development INLEX E-OR-P-OR-R-ment
disagreement INLEX E-OR-P-OR-R-ment
disappointment INLEX E-OR-P-OR-R-ment
disbursement INLEX E-OR-P-OR-R-ment
discernment INLEX E-OR-P-OR-R-ment
discouragement INLEX E-OR-P-OR-R-ment
disenfranchisement NEWDER E-OR-P-OR-R-ment
disengagement NEWDER E-OR-P-OR-R-ment
disillusionment INLEX E-OR-P-OR-R-ment
dismemberment NEWDER E-OR-P-OR-R-ment
disparagement NEWDER E-OR-P-OR-R-ment
dispersement NEWDER E-OR-P-OR-R-ment
displacement INLEX E-OR-P-OR-R-ment
downpayment NEWSTEMDER E-OR-P-OR-R-ment
easement NEWDER *E-OR-P-OR-R-ment
embarrassment INLEX E-OR-P-OR-R-ment
embezzlement NEWDER E-OR-P-OR-R-ment
embodiment INLEX E-OR-P-OR-R-ment
employment INLEX E-OR-P-OR-R-ment
enactment INLEX E-OR-P-OR-R-ment
encampment INLEX E-OR-P-OR-R-ment
enchantment INLEX E-OR-P-OR-R-ment
encouragement INLEX E-OR-P-OR-R-ment
encroachment INLEX E-OR-P-OR-R-ment
endearment INLEX E-OR-P-OR-R-ment
endorsement NEWDER E-OR-P-OR-R-ment
endowment INLEX E-OR-P-OR-R-ment
enforcement NEWDER E-OR-P-OR-R-ment
engagement INLEX *E-OR-P-OR-R-ment
enjoyment INLEX E-OR-P-OR-R-ment
enlargement INLEX E-OR-P-OR-R-ment
enlightenment INLEX E-OR-P-OR-R-ment
enlistment NEWDER E-OR-P-OR-R-ment
enrichment NEWDER E-OR-P-OR-R-ment
enrollment NEWDER E-OR-P-OR-R-ment
enslavement NEWDER E-OR-P-OR-R-ment
entanglement INLEX E-OR-P-OR-R-ment
entertainment INLEX E-OR-P-OR-R-ment
enticement INLEX E-OR-P-OR-R-ment
equipment INLEX E-OR-P-OR-R-ment
establishment INLEX E-OR-P-OR-R-ment
estrangement INLEX E-OR-P-OR-R-ment
excitement INLEX E-OR-P-OR-R-ment
fulfillment NEWDER E-OR-P-OR-R-ment
government INLEX *E-OR-P-OR-R-ment
harrassment NEWSTEMDER E-OR-P-OR-R-ment
impairment NEWDER E-OR-P-OR-R-ment
impoundment NEWDER E-OR-P-OR-R-ment
imprisonment NEWDER E-OR-P-OR-R-ment
improvement INLEX E-OR-P-OR-R-ment
incitement NEWDER E-OR-P-OR-R-ment
indictment NEWDER E-OR-P-OR-R-ment
inducement INLEX E-OR-P-OR-R-ment
infringement NEWDER E-OR-P-OR-R-ment
installment NEWDER *E-OR-P-OR-R-ment
interment INLEX E-OR-P-OR-R-ment
investment INLEX E-OR-P-OR-R-ment
involvement NEWDER E-OR-P-OR-R-ment
judgement INLEX E-OR-P-OR-R-ment
maladjustment NEWSTEMDER E-OR-P-OR-R-ment
maladjustment NEWSTEMDER E-OR-P-OR-R-ment
management INLEX E-OR-P-OR-R-ment
management NEWDER E-OR-P-OR-R-ment
measurement INLEX E-OR-P-OR-R-ment
misalignment NEWDER E-OR-P-OR-R-ment
misplacement NEWDER E-OR-P-OR-R-ment
mmittment NEWSTEMDER *E-OR-P-OR-R-ment
movement INLEX E-OR-P-OR-R-ment
nourishment INLEX E-OR-P-OR-R-ment
over-achievement NEWDER E-OR-P-OR-R-ment
overpayment NEWDER E-OR-P-OR-R-ment
parchment INLEX *E-OR-P-OR-R-ment
pavement INLEX E-OR-P-OR-R-ment
payment INLEX E-OR-P-OR-R-ment
pigment INLEX *E-OR-P-OR-R-ment
placement INLEX E-OR-P-OR-R-ment
postponement NEWDER E-OR-P-OR-R-ment
preferment INLEX *E-OR-P-OR-R-ment
preordainment NEWDER E-OR-P-OR-R-ment
prepayment NEWDER E-OR-P-OR-R-ment
presentment NEWDER *E-OR-P-OR-R-ment
procurement NEWDER E-OR-P-OR-R-ment
pronouncement INLEX E-OR-P-OR-R-ment
punishment INLEX E-OR-P-OR-R-ment
puzzlement INLEX E-OR-P-OR-R-ment
re-enactment NEWDER E-OR-P-OR-R-ment
readjustment NEWDER E-OR-P-OR-R-ment
reapportionment NEWDER E-OR-P-OR-R-ment
recruitment NEWDER E-OR-P-OR-R-ment
redevelopment NEWDER E-OR-P-OR-R-ment
refinement INLEX E-OR-P-OR-R-ment
refreshment INLEX *E-OR-P-OR-R-ment
reimbursement INLEX E-OR-P-OR-R-ment
reinforcement INLEX E-OR-P-OR-R-ment
repayment INLEX E-OR-P-OR-R-ment
replacement INLEX E-OR-P-OR-R-ment
replenishment NEWDER E-OR-P-OR-R-ment
requirement INLEX E-OR-P-OR-R-ment
resentment INLEX E-OR-P-OR-R-ment
resettlement NEWDER E-OR-P-OR-R-ment
restatement NEWDER E-OR-P-OR-R-ment
retirement INLEX E-OR-P-OR-R-ment
revetment INLEX *E-OR-P-OR-R-ment
settlement INLEX E-OR-P-OR-R-ment
shipment INLEX E-OR-P-OR-R-ment
statement INLEX E-OR-P-OR-R-ment
transshipment NEWDER E-OR-P-OR-R-ment
treatment INLEX E-OR-P-OR-R-ment
under-achievement NEWDER E-OR-P-OR-R-ment
understatement INLEX E-OR-P-OR-R-ment
unemployment INLEX *E-OR-P-OR-R-ment
unfoldment NEWDER E-OR-P-OR-R-ment
vestment INLEX *E-OR-P-OR-R-ment

A.2.21 Words derived using -age

anchorage INLEX *E-AND-R-age
appendage INLEX E-AND-R-age
assemblage INLEX E-AND-R-age
average INLEX *E-AND-R-age
bandage INLEX *E-AND-R-age
blockage INLEX E-AND-R-age
bondage INLEX E-AND-R-age
breakage INLEX E-AND-R-age
carriage INLEX *E-AND-R-age
cleavage INLEX E-AND-R-age
coverage INLEX E-AND-R-age
dosage INLEX *E-AND-R-age
drainage INLEX E-AND-R-age
footage INLEX *E-AND-R-age
frontage INLEX *E-AND-R-age
garbage INLEX *E-AND-R-age
haulage INLEX E-AND-R-age
homage INLEX *E-AND-R-age
hostage INLEX *E-AND-R-age
intermarriage INLEX E-AND-R-age
leakage INLEX E-AND-R-age
leverage INLEX E-AND-R-age
linkage INLEX E-AND-R-age
marriage INLEX E-AND-R-age
massage INLEX *E-AND-R-age
message INLEX *E-AND-R-age
orphanage INLEX *E-AND-R-age
package INLEX E-AND-R-age
passage INLEX E-AND-R-age
rampage INLEX *E-AND-R-age
ravage INLEX *E-AND-R-age
reportage INLEX E-AND-R-age
salvage INLEX *E-AND-R-age
savage INLEX *E-AND-R-age
seepage INLEX E-AND-R-age
sewage INLEX *E-AND-R-age
shrinkage INLEX E-AND-R-age
spoilage INLEX E-AND-R-age
storage INLEX E-AND-R-age
stumpage NEWDER E-AND-R-age
usage INLEX E-AND-R-age
wastage INLEX E-AND-R-age
wreckage INLEX E-AND-R-age

A.2.22 Words derived using mis-

misbrand NEWDER INCOR-mis-
miscalculate INLEX INCOR-mis-
miscarry INLEX ?INCOR-mis-
misconstrue INLEX INCOR-mis-
misfire INLEX *INCOR-mis-
misgauge NEWDER INCOR-mis-
misguide INLEX INCOR-mis-
misinterpret INLEX INCOR-mis-
misjudge INLEX INCOR-mis-
mislead INLEX INCOR-mis-
mismanage INLEX INCOR-mis-
misname INLEX INCOR-mis-
misperceive NEWDER INCOR-mis-
misplace INLEX INCOR-mis-
misquote INLEX INCOR-mis-
misrelate NEWDER INCOR-mis-
misrepresent INLEX INCOR-mis-
missing INLEX *INCOR-mis-
mistake INLEX INCOR-mis-
mistrust INLEX *INCOR-mis-
misunderstand INLEX INCOR-mis-
misuse INLEX INCOR-mis-
miswritten NEWSTEMDER *INCOR-mis-

A.2.23 Words derived using -able

acceptable INLEX ABLE-able
accountable INLEX ABLE-able
achievable NEWDER ABLE-able
adaptable INLEX ABLE-able
adjustable NEWDER ABLE-able
admirable INLEX ABLE-able
adorable INLEX ABLE-able
advisable INLEX ABLE-able
agreeable INLEX ABLE-able
alienable INLEX ABLE-able
allocable NEWSTEMDER *ABLE-able
allowable INLEX ABLE-able
alterable NEWDER ABLE-able
analyzable NEWSTEMDER ABLE-able
answerable INLEX *ABLE-able
appeasable NEWDER ABLE-able
approachable INLEX ABLE-able
ascertainable NEWDER ABLE-able
attainable NEWDER ABLE-able
attributable INLEX ABLE-able
available INLEX *ABLE-able
avaliable NEWSTEMDER *ABLE-able
avoidable NEWDER ABLE-able
bearable INLEX ABLE-able
believable INLEX ABLE-able
breakable NEWDER ABLE-able
callable NEWDER ABLE-able
changeable INLEX ABLE-able
chargeable INLEX ABLE-able
combable NEWDER ABLE-able
combinable NEWDER ABLE-able
comfortable INLEX *ABLE-able
commendable INLEX ABLE-able
comparable INLEX ABLE-able
conceivable INLEX ABLE-able
conquerable NEWDER ABLE-able
conscionable NEWSTEMDER *ABLE-able
considerable INLEX *ABLE-able
contestable INLEX ABLE-able
curable INLEX ABLE-able
debatable INLEX ABLE-able
decipherable INLEX ABLE-able
deductable NEWDER ABLE-able
definable INLEX ABLE-able
deniable INLEX ABLE-able
dependable INLEX ABLE-able
deplorable INLEX ABLE-able
describable INLEX ABLE-able
desirable INLEX ABLE-able
detachable NEWDER ABLE-able
detectable NEWDER ABLE-able
determinable NEWDER ABLE-able
detestable NEWDER ABLE-able
diagnosable NEWDER ABLE-able
diagonalizable NEWDER ABLE-able
differentiable NEWSTEMDER *ABLE-able
disagreeable INLEX ABLE-able
discernable NEWDER ABLE-able
dispensable INLEX ABLE-able
disputable INLEX ABLE-able
distinguishable INLEX ABLE-able
distortable NEWDER ABLE-able
drinkable NEWDER ABLE-able
duplicable NEWSTEMDER *ABLE-able
eatable INLEX ABLE-able
endurable NEWDER ABLE-able
enforceable NEWDER ABLE-able
enjoyable INLEX ABLE-able
enviable INLEX ABLE-able
escapable INLEX ABLE-able
excusable INLEX ABLE-able
expandable NEWDER ABLE-able
expectable NEWDER ABLE-able
expendable INLEX ABLE-able
explainable NEWDER ABLE-able
fashionable INLEX *ABLE-able
favorable INLEX *ABLE-able
foreseeable INLEX ABLE-able
forgivable NEWDER ABLE-able
friable INLEX *ABLE-able
honorable INLEX *ABLE-able
identifiable NEWDER ABLE-able
imaginable NEWDER ABLE-able
imcomparable NEWSTEMDER *ABLE-able
impeachable INLEX ABLE-able
injectable NEWDER ABLE-able
interchangeable INLEX ABLE-able
interminable INLEX *ABLE-able
irresolvable NEWSTEMDER *ABLE-able
justifiable INLEX ABLE-able
killable NEWDER ABLE-able
livable INLEX ABLE-able
lovable INLEX ABLE-able
manageable NEWDER ABLE-able
marketable NEWSTEMDER ABLE-able
measurable INLEX ABLE-able
minable INLEX *ABLE-able
mistakable INLEX ABLE-able
movable INLEX ABLE-able
nameable NEWSTEMDER ABLE-able
notable INLEX ABLE-able
noticeable INLEX ABLE-able
observable INLEX ABLE-able
obtainable INLEX ABLE-able
paintable NEWDER ABLE-able
pardonable INLEX ABLE-able
passable INLEX ABLE-able
payable INLEX ABLE-able
perishable INLEX *ABLE-able
pitiable INLEX ABLE-able
plantable NEWDER ABLE-able
playable INLEX ABLE-able
pliable INLEX *ABLE-able
portable INLEX *ABLE-able
predictable INLEX ABLE-able
preferable INLEX ABLE-able
presentable INLEX ABLE-able
printable INLEX ABLE-able
probable INLEX *ABLE-able
procurable NEWDER ABLE-able
punishable INLEX ABLE-able
questionable INLEX ABLE-able
ratable INLEX ABLE-able
readable INLEX ABLE-able
reasonable INLEX *ABLE-able
recognizable NEWDER ABLE-able
recoverable NEWDER ABLE-able
reimburseable NEWSTEMDER ABLE-able
reliable INLEX ABLE-able
remarkable INLEX *ABLE-able
removable NEWDER ABLE-able
renewable INLEX ABLE-able
repayable INLEX ABLE-able
researchable NEWDER ABLE-able
resonable NEWSTEMDER *ABLE-able
respectable INLEX ABLE-able
seasonable NEWDER *ABLE-able
serviceable INLEX *ABLE-able
shakable INLEX ABLE-able
sinkable NEWDER ABLE-able
sizable INLEX *ABLE-able
speakable INLEX ABLE-able
standable NEWDER *ABLE-able
suable NEWDER ABLE-able
suitable INLEX *ABLE-able
superable INLEX *ABLE-able
supportable NEWDER ABLE-able
surmountable INLEX ABLE-able
taxable NEWDER ABLE-able
tellable NEWDER ABLE-able
thinkable INLEX ABLE-able
tintable NEWDER ABLE-able
traceable NEWDER ABLE-able
transplantable NEWDER ABLE-able
understandable NEWDER ABLE-able
usable NEWDER ABLE-able
useable NEWSTEMDER ABLE-able
valuable INLEX ABLE-able
variable INLEX ABLE-able
warrantable NEWDER ABLE-able
workable INLEX *ABLE-able

A.2.24 Words derived using -ness

abruptness NEWDER NESS-ness
absoluteness NEWDER NESS-ness
abstractedness NEWDER NESS-ness
acquisitiveness NEWDER NESS-ness
adroitness NEWDER NESS-ness
aggressiveness NEWDER NESS-ness
agreeableness NEWDER NESS-ness
alertness NEWDER NESS-ness
allusiveness NEWSTEMDER NESS-ness
aloneness NEWDER NESS-ness
aloofness NEWDER NESS-ness
amateurishness NEWDER NESS-ness
appropriateness NEWDER NESS-ness
aptness NEWDER NESS-ness
artfulness NEWDER NESS-ness
assertiveness NEWDER NESS-ness
astuteness NEWDER NESS-ness
awareness NEWDER NESS-ness
awfulness NEWDER NESS-ness
awkwardness NEWDER NESS-ness
badness NEWDER NESS-ness
baldness NEWDER NESS-ness
balkiness NEWDER NESS-ness
bitterness NEWDER NESS-ness
blackness NEWDER NESS-ness
blandness NEWDER NESS-ness
blindness NEWDER NESS-ness
bluntness NEWDER NESS-ness
boldness NEWDER NESS-ness
brashness NEWDER NESS-ness
brazenness NEWDER NESS-ness
brightness NEWDER NESS-ness
briskness NEWDER NESS-ness
business INLEX *NESS-ness
busyness NEWSTEMDER NESS-ness
callousness NEWDER NESS-ness
calmness NEWDER NESS-ness
carefulness NEWDER NESS-ness
carelessness NEWDER NESS-ness
cheerfulness NEWDER NESS-ness
childishness NEWDER NESS-ness
chumminess NEWDER NESS-ness
clannishness NEWDER NESS-ness
clearness NEWDER NESS-ness
cleverness NEWDER NESS-ness
cloddishness NEWDER NESS-ness
closeness NEWDER NESS-ness
coarseness NEWDER NESS-ness
cohesiveness NEWDER NESS-ness
coldness NEWDER NESS-ness
commonness NEWDER NESS-ness
completeness NEWDER NESS-ness
conciseness NEWDER NESS-ness
connectedness NEWDER NESS-ness
consciousness INLEX NESS-ness
coolness NEWDER NESS-ness
correctness NEWDER NESS-ness
courtliness NEWDER NESS-ness
covetousness NEWDER NESS-ness
coyness NEWDER NESS-ness
crassness NEWDER NESS-ness
creativeness NEWDER NESS-ness
credulousness NEWDER NESS-ness
crispness NEWDER NESS-ness
curtness NEWDER NESS-ness
dampness NEWDER NESS-ness
darkness NEWDER NESS-ness
deadliness NEWDER NESS-ness
deadness NEWDER NESS-ness
decisiveness NEWDER NESS-ness
decorativeness NEWDER NESS-ness
defensiveness NEWDER NESS-ness
deftness NEWDER NESS-ness
directness NEWDER NESS-ness
discursiveness NEWDER NESS-ness
disorderliness NEWDER NESS-ness
dizziness NEWDER NESS-ness
dreariness NEWDER NESS-ness
drunkenness NEWDER *NESS-ness
dryness NEWSTEMDER NESS-ness
dullness NEWDER NESS-ness
eagerness NEWDER NESS-ness
earnestness NEWDER NESS-ness
effectiveness INLEX NESS-ness
elusiveness NEWDER NESS-ness
emptiness NEWDER NESS-ness
exclusiveness NEWDER NESS-ness
expansiveness NEWDER NESS-ness
explicitness NEWDER NESS-ness
expressiveness NEWDER NESS-ness
expressivness NEWSTEMDER NESS-ness
exquisiteness NEWDER NESS-ness
extraneousness NEWDER NESS-ness
fairness NEWDER NESS-ness
familiarness NEWDER NESS-ness
fierceness NEWDER NESS-ness
fineness NEWDER NESS-ness
firmness NEWDER NESS-ness
fitness INLEX NESS-ness
flatness NEWDER NESS-ness
fondness NEWDER NESS-ness
foolishness NEWDER NESS-ness
forcefulness NEWDER NESS-ness
forgetfulness NEWDER NESS-ness
forthrightness NEWDER NESS-ness
frankness NEWDER NESS-ness
freshness NEWDER NESS-ness
friendliness NEWDER NESS-ness
fruitfulness NEWDER NESS-ness
fullness INLEX NESS-ness
garishness NEWDER NESS-ness
gentleness NEWDER NESS-ness
giddiness NEWDER NESS-ness
givenness NEWDER NESS-ness
gladness NEWDER NESS-ness
godliness NEWDER NESS-ness
goodness INLEX NESS-ness
greatness NEWDER NESS-ness
greenness NEWDER NESS-ness
grimness NEWDER NESS-ness
guardedness NEWDER NESS-ness
guiltiness NEWDER NESS-ness
happiness INLEX NESS-ness
hardness INLEX NESS-ness
harshness NEWDER NESS-ness
haughtiness NEWDER NESS-ness
heaviness NEWDER NESS-ness
helpfulness NEWDER NESS-ness
helplessness NEWDER NESS-ness
highness NEWDER *NESS-ness
hoarseness NEWDER NESS-ness
holiness INLEX NESS-ness
hollowness NEWDER NESS-ness
homesickness NEWDER NESS-ness
hopelessness NEWDER NESS-ness
humanness NEWDER NESS-ness
huskiness NEWDER NESS-ness
idleness NEWDER NESS-ness
illness INLEX NESS-ness
inappropriateness NEWDER NESS-ness
incisiveness NEWDER NESS-ness
inclusiveness NEWDER NESS-ness
incompleteness NEWDER NESS-ness
indecisiveness NEWDER NESS-ness
indefiniteness NEWDER NESS-ness
ineffectiveness NEWDER NESS-ness
ineptness NEWDER NESS-ness
interconnectedness NEWDER NESS-ness
inwardness INLEX *NESS-ness
jewishness NEWSTEMDER NESS-ness
joblessness NEWDER NESS-ness
justness NEWDER NESS-ness
kindliness NEWDER NESS-ness
kindness INLEX NESS-ness
laxness NEWDER NESS-ness
light-headedness NEWDER NESS-ness
light-mindedness NEWDER NESS-ness
lightness INLEX NESS-ness
likeness INLEX *NESS-ness
literalness NEWDER NESS-ness
liveliness NEWDER NESS-ness
loneliness NEWDER NESS-ness
looseness NEWDER NESS-ness
lousiness NEWDER NESS-ness
loveliness NEWDER NESS-ness
ludicrousness NEWDER NESS-ness
madness INLEX NESS-ness
maleness NEWDER NESS-ness
manliness NEWDER NESS-ness
matter-of-factness NEWDER NESS-ness
meaningfulness NEWDER NESS-ness
meanness NEWDER NESS-ness
mustiness NEWDER NESS-ness
nakedness NEWDER NESS-ness
narrowness NEWDER NESS-ness
naturalness INLEX NESS-ness
nearness NEWSTEMDER NESS-ness
neatness NEWDER NESS-ness
neighborliness NEWDER NESS-ness
nervousness NEWDER NESS-ness
numbness NEWDER NESS-ness
objectiveness NEWDER NESS-ness
obtrusiveness NEWDER NESS-ness
obviousness NEWDER NESS-ness
oneness NEWDER NESS-ness
orderliness NEWDER NESS-ness
oversoftness NEWDER NESS-ness
paleness NEWDER NESS-ness
passiveness NEWDER NESS-ness
pastness NEWDER NESS-ness
pettiness NEWDER NESS-ness
physicalness NEWDER NESS-ness
pleasantness NEWDER NESS-ness
plumpness NEWDER NESS-ness
politeness NEWDER NESS-ness
pompousness NEWDER NESS-ness
powerfulness NEWDER NESS-ness
preparedness INLEX NESS-ness
presentness NEWDER NESS-ness
pressivness NEWSTEMDER *NESS-ness
prettiness NEWDER NESS-ness
proneness NEWDER NESS-ness
pseudo-happiness NEWDER NESS-ness
queasiness NEWDER NESS-ness
quickness NEWDER NESS-ness
quietness NEWDER NESS-ness
raggedness NEWDER NESS-ness
readiness INLEX NESS-ness
realness NEWDER NESS-ness
recklessness NEWDER NESS-ness
relatedness NEWDER NESS-ness
relentlessness NEWDER NESS-ness
religiousness NEWDER NESS-ness
remoteness NEWDER NESS-ness
resourcefulness NEWDER NESS-ness
responsiveness NEWDER NESS-ness
restlessness NEWDER NESS-ness
retentiveness NEWDER NESS-ness
richness INLEX NESS-ness
righteousness NEWDER NESS-ness
rightness NEWDER NESS-ness
robustness NEWDER NESS-ness
roughness INLEX NESS-ness
roundness NEWDER NESS-ness
ruddiness NEWDER NESS-ness
rudeness NEWDER NESS-ness
ruefulness NEWDER NESS-ness
ruthlessness NEWDER NESS-ness
sacredness NEWDER NESS-ness
sadness NEWDER NESS-ness
saintliness NEWDER NESS-ness
sameness INLEX *NESS-ness
scratchiness NEWDER NESS-ness
self-consciousness NEWDER NESS-ness
self-righteousness NEWDER NESS-ness
selfishness NEWDER NESS-ness
selflessness NEWDER NESS-ness
separateness NEWDER ?NESS-ness
seriousness NEWDER NESS-ness
shallowness NEWDER NESS-ness
sharpness NEWDER NESS-ness
shortness NEWDER NESS-ness
shortsightedness NEWDER NESS-ness
shrillness NEWDER NESS-ness
sickness INLEX *NESS-ness
sinfulness NEWDER NESS-ness
singleness INLEX NESS-ness
sinuousness NEWDER NESS-ness
skillfulness NEWDER NESS-ness
slovenliness NEWDER NESS-ness
slowness NEWDER NESS-ness
slyness NEWSTEMDER NESS-ness
smallness NEWDER NESS-ness
smoothness NEWDER NESS-ness
softness NEWDER NESS-ness
solicitousness NEWDER NESS-ness
soreness NEWDER NESS-ness
soundness NEWDER NESS-ness
spaciousness NEWDER NESS-ness
speechlessness NEWDER NESS-ness
squeamishness NEWDER NESS-ness
staginess NEWDER NESS-ness
steadiness NEWDER NESS-ness
stiffness NEWDER NESS-ness
stillness NEWDER NESS-ness
strangeness NEWDER NESS-ness
stubbornness NEWDER NESS-ness
suddenness NEWDER NESS-ness
suppleness NEWDER NESS-ness
surfaceness NEWDER *NESS-ness
sweetness NEWDER NESS-ness
swiftness NEWDER NESS-ness
tactlessness NEWDER NESS-ness
tardiness NEWDER NESS-ness
tenderness NEWDER NESS-ness
thankfulness NEWDER NESS-ness
thickness INLEX *NESS-ness
thinness NEWDER NESS-ness
thoroughness NEWDER NESS-ness
thoughtfulness NEWDER NESS-ness
tidiness NEWDER NESS-ness
timeliness NEWDER NESS-ness
tiredness NEWDER NESS-ness
toughness NEWDER NESS-ness
truthfulness NEWDER NESS-ness
tunefulness NEWDER NESS-ness
ugliness NEWDER NESS-ness
unawareness NEWDER NESS-ness
underhandedness NEWDER NESS-ness
uneasiness NEWDER NESS-ness
unhappiness NEWDER NESS-ness
uniqueness NEWDER NESS-ness
unnaturalness NEWDER NESS-ness
unpleasantness NEWDER NESS-ness
unselfconsciousness NEWDER NESS-ness
untidiness NEWDER NESS-ness
untrustworthiness NEWSTEMDER NESS-ness
unwillingness NEWDER NESS-ness
usefulness INLEX NESS-ness
uselessness NEWDER NESS-ness
vagueness NEWDER NESS-ness
viciousness NEWDER NESS-ness
vividness NEWDER NESS-ness
vociferousness NEWDER NESS-ness
wakefulness NEWDER NESS-ness
weakness INLEX NESS-ness
weariness NEWDER NESS-ness
weightlessness NEWDER NESS-ness
wetness NEWDER NESS-ness
whiteness NEWDER NESS-ness
wholeness NEWDER NESS-ness
wickedness NEWDER NESS-ness
wildness NEWDER NESS-ness
willingness NEWDER NESS-ness
wonderfulness NEWDER NESS-ness
worthlessness NEWDER NESS-ness
wretchedness NEWDER NESS-ness

A.2.25 Words serving as bases for -ful

art INLEX ABST-ful
bane INLEX ABST-ful
bash INLEX *ABST-ful
beauty INLEX ABST-ful
bliss NEWDERSTEM ABST-ful
brim INLEX *ABST-ful
care INLEX ABST-ful
cheer INLEX ABST-ful
color INLEX ABST-ful
deceit INLEX ABST-ful
delight INLEX ABST-ful
disgrace INLEX ABST-ful
distaste INLEX ABST-ful
dole INLEX *ABST-ful
doubt INLEX ABST-ful
dread INLEX ABST-ful
faith INLEX ABST-ful
fancy INLEX ABST-ful
fate INLEX ABST-ful
fear INLEX ABST-ful
fit INLEX *ABST-ful
force INLEX ABST-ful
forgit NEWSTEM *ABST-ful
fright INLEX ABST-ful
fruit INLEX ABST-ful
gain INLEX ABST-ful
grace INLEX ABST-ful
grate INLEX ABST-ful
harm NEWDERSTEM ABST-ful
hate INLEX ABST-ful
health INLEX ABST-ful
help INLEX ABST-ful
hope INLEX ABST-ful
joy INLEX ABST-ful
law INLEX ABST-ful
light INLEX ABST-ful
lust INLEX ABST-ful
master INLEX *ABST-ful
meaning INLEX ABST-ful
mercy INLEX ABST-ful
mind INLEX ABST-ful
pain INLEX ABST-ful
peace INLEX ABST-ful
pity INLEX ABST-ful
play INLEX ABST-ful
plenty INLEX ABST-ful
power INLEX ABST-ful
purpose INLEX ABST-ful
remorse INLEX ABST-ful
resent NEWSTEM *ABST-ful
resource INLEX ABST-ful
respect INLEX ABST-ful
rest INLEX ABST-ful
right INLEX ABST-ful
scorn NEWDERSTEM ABST-ful
shame INLEX ABST-ful
sin INLEX ABST-ful
skill INLEX ABST-ful
sloth INLEX ABST-ful
song NEWDERSTEM ABST-ful
soul INLEX ABST-ful
stress NEWDERSTEM ABST-ful
success INLEX ABST-ful
success NEWDERSTEM ABST-ful
taste INLEX ABST-ful
thought INLEX ABST-ful
truth INLEX ABST-ful
tune INLEX ABST-ful
use INLEX ABST-ful
wake INLEX ABST-ful
waste INLEX ABST-ful
watch INLEX ABST-ful
wish NEWDERSTEM ABST-ful
woe INLEX ABST-ful
wonder INLEX ABST-ful
worship INLEX ABST-ful
wrath NEWDERSTEM ABST-ful
wrong INLEX ABST-ful
youth INLEX ABST-ful

A.2.26 Words derived using -ful

artful INLEX LESSANT-ful
careful INLEX LESSANT-ful
colorful INLEX LESSANT-ful
fearful INLEX LESSANT-ful
fruitful INLEX LESSANT-ful
harmful NEWDER LESSANT-ful
helpful INLEX *LESSANT-ful
hopeful INLEX *LESSANT-ful
lawful INLEX LESSANT-ful
meaningful INLEX LESSANT-ful
merciful INLEX LESSANT-ful
mindful INLEX *LESSANT-ful
painful INLEX LESSANT-ful
pitiful INLEX *LESSANT-ful
powerful INLEX LESSANT-ful
purposeful INLEX LESSANT-ful
remorseful INLEX LESSANT-ful
restful INLEX *LESSANT-ful
sinful INLEX LESSANT-ful
tasteful INLEX LESSANT-ful
thoughtful INLEX LESSANT-ful
useful INLEX LESSANT-ful

A.2.27 Words derived using -less

artless INLEX FULANT-less
careless INLEX FULANT-less
colorless INLEX FULANT-less
fearless INLEX FULANT-less
fruitless INLEX FULANT-less
harmless INLEX FULANT-less
helpless INLEX *FULANT-less
hopeless INLEX *FULANT-less
lawless INLEX FULANT-less
meaningless INLEX FULANT-less
merciless INLEX FULANT-less
mindless INLEX *FULANT-less
painless INLEX FULANT-less
pitiless INLEX *FULANT-less
powerless INLEX FULANT-less
purposeless INLEX FULANT-less
remorseless NEWDER FULANT-less
restless INLEX *FULANT-less
sinless INLEX FULANT-less
tasteless INLEX FULANT-less
thoughtless INLEX FULANT-less
useless INLEX FULANT-less
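Each entry in the appendix lists above has the same layout: the word form, a source tag (INLEX, NEWDER, NEWSTEMDER, NEWSTEM, or NEWDERSTEM), and the semantic class for the affix, optionally prefixed with a judgment marker (* for entries where the class does not apply, ? for unclear cases); the -er lists carry a fourth field naming the rule (VerN). The snippet below is not part of the original appendix: it is a minimal illustrative sketch of how such entries might be parsed, and the type and field names are invented for the example.

# Minimal sketch (illustrative only): parse an appendix entry of the form
#   word SOURCE [*|?]CLASS [RULE]
# e.g. "diagnometer NEWSTEMDER *PART-IN-E- VerN" or "artless INLEX FULANT-less".
from typing import NamedTuple, Optional

class Entry(NamedTuple):          # hypothetical record type, not from the thesis
    word: str                     # surface form as listed
    source: str                   # INLEX, NEWDER, NEWSTEMDER, NEWSTEM, or NEWDERSTEM
    judgment: str                 # "ok", "incorrect" (*), or "unclear" (?)
    sem_class: str                # e.g. PART-IN-E-, NESS-ness, ABLE-able
    rule: Optional[str]           # e.g. VerN in the -er lists; None elsewhere

JUDGMENTS = {"*": "incorrect", "?": "unclear"}

def parse_entry(line: str) -> Entry:
    fields = line.split()
    word, source, marked = fields[0], fields[1], fields[2]
    rule = fields[3] if len(fields) > 3 else None
    judgment = JUDGMENTS.get(marked[0], "ok")
    return Entry(word, source, judgment, marked.lstrip("*?"), rule)

print(parse_entry("diagnometer NEWSTEMDER *PART-IN-E- VerN"))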