
Lingua 55 (1981) 277-300

North-Holland Publishing Company 277

THE TRANSPARENCY PRINCIPLE: WHAT IT IS AND WHY IT DOESN'T WORK*

Suzanne ROMAINE

Received December 1980

Lightfoot has recently attempted to clarify the relationship between a theory of grammar and a theory of diachronic (syntactic) change. The theory of grammar is to be seen within the context of a theory of markedness and a so-called Transparency Principle, which define the level of tolerable opacity and degree of exceptionality or derivational complexity. When a grammar approaches the limit, some kind of therapeutic re-analysis will be necessary to eliminate the offending complexity. Lightfoot explains such re-analyses in terms of the Transparency Principle, which manifests itself in diachronic change to predict the point at which radical restructuring takes place.

The ontological status of the Transparency Principle within the theory of grammar and its role in diachronic change are crucial in evaluating the nature and kinds of claims made for it. Lightfoot has described the Transparency Principle as an independent metagrammatical principle restricting the grammatical component, which is entirely consistent with the autonomy thesis. The latter stipulates that syntactic rules operate independently of considerations of meaning and use and do not have access to any form of semantic information. It is ultimately the irresolute conflict between the status of the Transparency Principle as an independent metagrammatical principle and its therapeutic role as a functionalist mechanism making languages more transparent, which renders the Transparency Principle vacuous as an explanation of syntactic change.

* Early versions of this paper were presented at the University of Manchester and the University of Birmingham, and also at the Copenhagen Linguistics Circle. I am grateful to the following for their comments on the issues raised by the Transparency Principle: Oscar Collinge, Elizabeth Traugott, Bob Le Page and Nancy Wiegand. I have incorporated some of their helpful suggestions in this revised version. I would particularly like to acknowledge a lengthy and informative discussion I had with Martin Harris about syntactic change and explanation.

0024-3841/81/0000-0000/$02.75 © 1981 North-Holland


David Lightfoot’s book, Principles of Diachronic Syntax (1979), attempts to clarify the relationship between a theory of grammar and a theory of diachronic (syntactic) change. His claim is this: assuming that diachronic change informs us about the limits of possible grammars, a theory of grammar should be responsive to such changes by placing restrictions on the class of available grammars. A theory of grammar which is restricted in this way will provide limits on possible diachronic changes and the extent to which two consecutive grammars of a given language may differ. The theory of grammar is to be seen within the context of a theory of markedness and a so-called Transparency Principle (hereafter TP), which define the level of tolerable opacity and degree of exceptionality or derivational complexity. When a grammar approaches the limit, some kind of therapeutic re-analysis will be necessary to eliminate the offending complexity. Lightfoot explains such re-analyses in terms of the TP, which manifests itself in diachronic change to predict the point at which radical restructuring takes place.

Even though Lightfoot’s ideas are not entirely novel, his book must still be considered a pioneering attempt to provide us with a full-fledged and comprehensive theory of syntactic change within a particular model of grammar. Although there are a number of substantive claims put forward about the nature of syntactic change and the kind of theory of grammar able to account for them, I will be concerned here primarily with the TP. Since the latter forms the backbone of Lightfoot’s argument, if I can demonstrate that the TP does not work, i.e. account for syntactic change in the way described by Lightfoot, then this will constitute serious counter-evidence against many of his other claims. What I propose to do then is show why the TP cannot predict or explain syntactic (or indeed other kinds of) change.

What is the Transparency Principle and what does it do?

Since the ontological status of the Transparency Principle within the theory of grammar and its role in diachronic change are crucial in evaluating the nature and kinds of claims made for it, I will begin my discussion by reviewing Lightfoot’s characterization of what the TP is and what it is supposed to do.

The scope and power of the TP is actually rather astonishing if one compiles a list of its attributes and capabilities. At various points in the book Lightfoot ascribes all of the following traits to the TP:

(1) It is an independent principle or component of the theory of grammar (pp. 114, 239).1

(2) It manifests itself primarily through historical change (p. 123).
(3) It predicts the point at which syntactic change/re-analysis takes place (p. 122).
(4) It explains re-analysis/syntactic change (p. 114).
(5) It characterizes the limits to the permitted degree of exceptionality or derivational complexity and permits only shallow derivations (pp. 115, 122).
(6) It applies only when necessary and not randomly (p. 134).
(7) It predicts only when change is necessary and not its form (p. 343).
(8) It applies blindly, i.e. re-analyses solve essentially local problems (p. 123).
(9) It serves as a basis for choosing between competing analyses/synchronic grammars by selecting a description consistent with its application (p. 125).
(10) It may apply to all change, and not just syntax.

It is perhaps worth pointing out that the TP is not really so novel. The idea that highly marked, complex, or messy areas of the grammar are liable to therapeutic change and that languages restructure themselves to repair ambiguity and dysfunction is rather an old one. In one form or another many schools of linguistic thought have embodied the notion that change is functional or purposeful, i.e. teleological. The Neogrammarians, for example, believed that analogy operated to restore the patterned symmetry wrecked by phonological change. The Prague School (cf. in particular, Martinet 1955) of course is more recently associated with the concept of functionalism. Even early generative views on change involved notions like optimization, minimization of allomorphy and simplification (cf. for example, King 1969). Langacker (1977) has also explicitly discussed the notion of transparency. In his view the ideal, optimal linguistic code will be one in which every surface unit will have associated with it clear and reasonably consistent meaning or function; and every semantic element in a sentence will be associated with a distinct and recognizable surface form. Simply speaking then, languages tend towards a one-to-one correspondence between units of form and meaning.

1 Since the time I first wrote this article, Lightfoot has revised his ideas on syntactic change; therefore, some of my points should be understood to have reference only to the views espoused by Lightfoot in his book. He (Lightfoot 1981) now maintains that Transparency was never intended to be an independent principle of grammar, but is to be understood instead as a limit on the child’s ability to abduce historical derivations. However, I think my general arguments in this paper will demonstrate that this formulation is also problematic.

In arguing for an enhanced role of surface structures and restrictions on the complexity of derivations in terms of distance between deep and surface structure, I take it that Lightfoot is giving substance to many of these functionalist notions by attempting to characterize them formally within a particular theoretical framework, namely, Extended Standard Theory (hereafter EST). Otherwise, the TP in essence is really an old friend in a new (but, dare I say it, still transparent?), even if more omnipotent, guise.

With respect to the first claim about the nature of the TP, namely, that it is an independent principle within the theory of grammar, I am somewhat dubious; and Lightfoot at best is ambiguous. He (1979: 239) describes the TP as a ‘metagrammatical principle’ restricting the possible syntactic components, which is entirely consistent with the autonomy thesis. The autonomy thesis stipulates that syntactic rules operate independently of considerations of meaning and use and do not have access to any form of semantic information. This restriction on the set of available grammars limits the class of syntactic rules and is nothing very new in the history of transformational generative grammar.

In other places Lightfoot (1979: 121) argues that the TP requires derivations to be minimally complex and initial, underlying structures to be ‘close’ to their respective surface structures. In this way the TP helps to define what constitutes a possible grammar of a particular natural language. While claiming independent status for the TP, Lightfoot does not deny the possibility that it might turn out to be subsumed under more general perceptual strategies since they too may define possible grammars. Later, Lightfoot (1979: 150) specifically ties the theory of grammar to a set of general perceptual mechanisms which together define “less highly valued” grammars in accordance with the logic of markedness. Thus, while Lightfoot appears to defend a strict autonomy hypothesis, he also acknowledges that the TP must - at some unspecified level and in some mysterious way - interact with perceptual processes.

It is ultimately this irresolute conflict between the status of the TP as an independent metagrammatical principle and its therapeutic role as a functionalist mechanism making languages more transparent, and therefore more easily learnable by their speakers by eliminating opacity and ambiguity, which renders the TP vacuous as an explanation of change. Thus my argument is this: If syntactic change is autonomous and unaffected by meaning and use, then why should it be conceived as therapeutic and adaptive to speakers’ needs? Is there any reason why languages should have to change to get rid of complexity if grammar is an autonomous entity? If not, then the TP is basically nothing more than an esthetic (and therefore essentially unfalsifiable) criterion imposed on a theory of grammatical descriptions. Such a TP could not have explanatory force.

Why the Transparency Principle does not work

In order to explain why the TP does not work, I will need to consider first whether Lightfoot’s hypotheses are in principle falsifiable, and if so, what kinds of evidence constitute counter-arguments. Lightfoot (1979: 73-76, 125) himself is ostensibly concerned with this issue of falsifiability and discusses it in several places. Thus, for example, Lightfoot (1979: 125):

The principle, whose formulation is based crucially on the assumption that languages undertake radical re-analyses only when forced to by some principle of grammar, is falsifiable in essence, but hitherto not actually falsified.

Nevertheless, Lightfoot sets up various arbitrary blocking devices, which serve to defuse virtually all possible attacks and potentially damaging evidence, as we shall see. He (1979: 67) maintains that a restricted grammatical theory of the EST type yields the best account of and good predictions about the nature of historical change. Obviously the way in which syntactic (or indeed any kind of) change may be characterized will vary according to the theory of grammar in accordance with which the individual grammars of two historical stages are written. It follows then that Lightfoot’s views about the possible types of change must be compatible with the machinery permissible within EST. Thus, in order to be convinced entirely by Lightfoot’s views on change, one has also to be convinced that EST is a reasonable model of grammar within which something like the TP can operate and that it provides acceptable ways of eliminating opacity. Assuming a model of grammar has rules and that these rules may change in various ways over time, one wants to know what the possibilities for change are within EST (or any model of grammar). Basically EST permits four classes of rules which are relevant to the discussion: phrase structure rules, transformational rules, lexical redundancy rules and rules of semantic interpretation. Only the last category of rules is excluded from Lightfoot’s discussion of the possible types amenable to change; this, of course, would follow from a strict interpretation of the autonomy thesis. Thus Lightfoot’s claim is that change takes place in the base component of the grammar and that there are changes which may affect only the syntactic component.

The following is a compendium of the types of changes and examples of them, which Lightfoot discusses.

(1) Changes in phrase structure rules
(a) Introduction of a new category, e.g. English modals and quantifiers.
(b) Reassignment of an inflectional class to a different category, e.g. loss of NP status of the to infinitive in English and development of verbal character of infinitival complements.
(c) Rearrangement or re-distribution of existing categories (i.e. change in phrase structure rules without introducing a new category or re-assigning lexical material), e.g. introduction of the rule NP → S.
(2) Changes in the lexicon
(a) Lexical redundancy rules, e.g. serial verbs in Kwa.
(b) New sub-categorization requirements, e.g. the demise of English impersonal verbs.
(c) Lexical rule becomes non-lexical, e.g. the introduction of the transformational passive in English.
(d) Loss of a morphological category and replacement by new inflectional system, e.g. Greek moods.
(3) Changes in the transformational component
(a) Introduction of a new transformational rule, e.g. NP pre-posing.
(b) Reformulation of transformational rules, e.g. Wh-movement.
(c) Rule re-ordering.
(d) Rule loss.

We can already note at least one important difference between the nature

of change within the EST and earlier transformational generative models; namely, the locus of diachronic change is shifted from the transformational rule sub-component to elsewhere in the base component. This follows from the theoretical assumption made within EST that the grammar of English has only two cyclic transformations, NP pre-posing and Wh-movement. Earlier views placed the burden of work specifically on the transformational rule component (cf. for example, Traugott 1972 and King 1969), whose scope then was rather less limited. It is also interesting that many of the changes permissible within EST can be subsumed under the more general heading of ‘category change’. In fact, all those occurring in the phrase structure rule component and the lexicon could be classified as involving some kind of category change. The notion of change taking place in a language via periods in which it displays fuzzy or squishy category membership is one I am very sympathetic to, but I will not say anything more about that here (cf. Romaine, forthcoming a).

Whether or not one accepts EST and the view of syntactic change which is compatible with it depends of course on where one’s theoretical allegiances lie. The assumption that English has only these two cyclic transformations is consistent with the EST view that syntax operates independently of considerations of meaning and use. Since these rules play a part in the generation of many constructions, one assumes they are not associated with particular semantic properties. Thus, EST and the kind of change permitted within it, i.e. purely syntactic, are incompatible with the strongest views of semantically-based grammars (e.g. ones which admit no distinction between syntax and semantics). I do not happen to be in agreement with this aspect of EST or in sympathy with any theory of grammar which argues for the autonomy of syntax. I think approaches which attempt to locate sources outside the purely linguistic system which motivate the bulk of grammatical constraints are more insightful (particularly with regard to understanding syntactic variation, cf. Romaine, forthcoming b). But this is a matter of personal preference and does not constitute a counter-claim against Lightfoot’s case. It is not too difficult to argue on the basis of data from English (and notably, 15-16th-century English) that syntax is independent of semantics and pragmatics. It is less easily done in other languages such as Japanese, where pragmatic distinctions have apparently been grammaticalized.

Lightfoot (1979: 385), however, explicitly rejects alternative approaches (i.e. those not based on purely grammatical principles), such as Greenberg’s (1978), which is based on independent diachronic universals and typology. He furthermore (1979: 343) says there are no formally definable limits to the ways in which two consecutive grammars may differ; therefore, there are no reasons to expect a theory to delimit the notion of a possible syntactic change on formal grounds. Now this is either an admission of defeat or a blocking device against claims that the changes described as being consistent with the application of the TP may be given equally plausible and viable accounts in grammars of a different form.

In fact, Lightfoot (1979: 109) himself admits that his analysis of changes in the English modals “has not proved anything”; merely that the description of the changes can be accounted for by assuming that re-analysis is a genuine syntactic change affecting the base rules of the grammar of English. Thus, the TP cannot really serve as an evaluation metric which permits a choice between two competing grammars in the way Lightfoot suggests if there are no formal limits to possible change; all the TP does, given any two competing analyses, one of which is consistent with it and another one which is not, is dictate the selection of the one that is as the correct one. But what happens if there are two competing analyses neither of which is compatible with the TP?

Even if we do confront Lightfoot with analyses which are incompatible with the TP, he would probably reject them. His claim is that it is the fact of change or point at which change is necessary, which has to be predicted on the grounds of some general grammatical theory, rather than the particular change. But I would maintain that in order to ‘explain’ syntactic change to the fullest extent, we would have to predict both the change and its mechanism (cf. also Lass 1980: 33). In fact, Lightfoot’s TP can do neither, which is surely damaging evidence (regardless of whether one thinks that a particular change and its mechanism are in principle amenable to prediction and explanation; I do not think they are).

For the record, however, I will point out that Lieber (1979) has offered an alternative account of the changes in the English passive; Bennett (1979) of changes in the English modals, quantifiers, passive and impersonal verbs; and Romaine (1980a) of changes in the relativizer system of English. Bennett (1979) specifically argues that EST cannot adequately capture syntactic change. He says (1979: 850) that the evidence from the history of English quantifiers and modals should be formulated in terms of the elimination of exception features, minor rules and rule-specific conditions, rather than in terms of reducing the distance between the deep and surface structures. Restructuring does not keep the deep structure close to the surface structure but merely eliminates the use of exception features and has the effect of making class membership more regular.2 I will discuss some details of this counter-evidence later, after I have considered more generally the kinds of evidence which can be used to refute the TP and thus would constitute falsification.

Lightfoot (1979: 76) puts somewhat of a global blocking device on falsification in general. He says that if Popper’s criteria for falsification of hypotheses and constraints on empirical research are too rigorous for physics, then it is unreasonable to apply them to less mature research programmes like linguistics and the psychological sciences. The crucial factor, in Lightfoot’s opinion, is “depth of explanation, not data-coverage”. I find this strange; for if one is not accountable to data, then what is one accountable to? Since Lightfoot makes no claims about the form or shape of syntactic change (other than those imposed by the formal machinery available in EST, which are actually rather loose), and he rejects the possibility of falsification by data which do not fit the EST model (or rather by the fact that the same data can be treated differently in alternative models of grammar), there is really only one manner of falsification open to us: namely, one must demonstrate the existence of modes of syntactic change which are inconsistent with EST and the application of the TP.

2 This in fact sounds a lot like analogy. Lightfoot (1979: 373), however, maintains that the fact that many re-analyses may be interpreted as analogical extensions does not make analogy a principle of change or anything more than a pre-theoretical concept.

Although Lightfoot also mounts a variety of blocking devices against counter-evidence in this domain, I nevertheless take it that the strongest possible counter-claims to Lightfoot’s hypotheses are produced by:

(1) differential failure - e.g. the TP does not apply and change does not occur where predicted; or random change occurs which is not and/or cannot be predicted by the TP;

(2) gradualness of syntactic change - e.g. syntactic change is not radical restructuring at one stroke.

Let us first review Lightfoot’s ideas about how syntactic change takes place. We can make a distinction between the change itself and the constraints governing its implementation. (This distinction is in fact my own.) The scenario for syntactic change is one in which complexity builds up gradually until a sudden cataclysmic and wholesale restructuring of the grammar takes place, whereby derivational complexity is eliminated at one stroke (i.e. the Transparency Principle applies). A set of simultaneous surface changes (which may be inter-related) takes place, which is a manifestation of a single basic change at some point in the abstract grammar. At various points in his argument Lightfoot places the following constraints on change (some of which we have already mentioned in connection with the characterization of the TP):

(1) There are no formal constraints on change beyond those imposed by the theory of grammar.
(2) Only necessary change takes place.
(3) Grammars practice therapy, not prophylaxis.
(4) Syntactic change is autonomous, i.e. governed by purely syntactic principles. Change affects only the syntactic component with no reference to either semantic or phonological factors. Syntactic rules operate independently of considerations of meaning and use.

(5) Communicability must be preserved between generations of speakers.


Let us now look briefly at two synopses of paradigm cases which Lightfoot claims are consistent with the application of the TP and the predictions made by his theory. The first case concerns changes in the Middle English impersonal verbs (cf. Lightfoot 1979: 229-239). In Old and Middle English there was a large class of so-called impersonal verbs which could occur without surface subjects in normal position, e.g. hine hungred ‘him hungers’. These are now obsolete. We can identify the following stages in the change, following Lightfoot’s discussion.

Case I: English impersonal verbs

Stage I. Many of these impersonal verbs fall into disuse before or during the ME period (though some new verbs also enter the class, which in OE numbered over 40).

Stage II. Many impersonal verbs developed a dummy it subject.
Stage III. Catastrophic re-analysis occurs sometime between the end of the 15th and mid-16th centuries. The pre-verbal NP became re-analyzed as a subject taking on nominative form.

E.g.:

þam cynge licodon peran.
the king liceden pears.
the king liked pears.
he liked pears.

Lightfoot’s explanation of the change rests on the view that the loss of inflectional endings on nouns and verbs together with the rigidification of SVO word order made the subjectless impersonals structurally ambiguous. That is, a construction like the king liked pears could be analyzed syntactically in two ways, either as SVO or OVS. The establishment of SVO word order forced the re-analysis, and the Transparency Principle eliminated ambiguity. Thus, the demise of the subjectless impersonals is a change in the abstract grammar resulting from the fixing of word order and loss of inflections. The locus of the change is the lexical component. In the case of a verb like think, recategorization took place so that such verbs had to occur in the frame NP __ S, where NP could not be empty. Old English, however, had two verbs, the personal þencan ‘think’ and the impersonal þyncan ‘seem’; so this change might be construed as loss of the impersonal rather than as sub-categorization change since it always occurred in an NP __ NP slot; but the semantic specification changed from ‘give pleasure to’ to ‘receive pleasure from’ (Lightfoot 1979: 238). The precise form of the change does not matter, since both are consistent with EST and Lightfoot says we do not have to predict the form of the change anyway.

There is, however, one further difficulty. Given the fact that there were three ways in which the subjectless impersonal verbs could have changed to make the grammar less opaque, why were certain verbs lost, e.g. þyncan, and others personalized, e.g. like, instead of developing a dummy it construction such as modern German (es hungert mich)? All of these would make for a transparent derivation and thus satisfy the SVO requirement.

If this syntactic change is a consequence of the rigidification of SVO word order, as Lightfoot (1979: 239) maintains, and opacity is both a necessary and sufficient cause for re-analysis/change, then, given similar conditions in other languages, we would predict that a similar (but formally unspecified) change would take place in these cases too. What we are asking then is whether there are differential failures. The answer is yes. Rumanian has OVS structures entirely parallel to those of OE, although it is SVO. The Scandinavian languages are all SVO but have not lost all impersonal constructions. Retention of impersonal verbs correlates with retention of a case system. Re-analysis is not therefore simply a function of SVO order, but depends on loss of a case system (cf. Bennett 1979: 856). What is more damaging, however, is the fact that OVS constructions survived with these impersonal verbs 200 years after the establishment of underlying SVO order. The impersonal OVS forms survive through Middle English and in some cases exist alongside personal uses of the same verb (e.g. think occurred personally and impersonally). Lightfoot chooses to ignore both these facts (presumably because depth of explanation outweighs the concern for data-coverage).3

Although Lightfoot ascribes the change to the TP, he clearly makes appeal to perceptual strategies. That is, opacity is seen from the point of view of the language learner, specifically, the child learning its native language. Thus, re-analysis is a consequence of imperfect learning. Historical innovations arise when children re-analyze strings by assigning them a different structure. One could say then that opacity is the cause and re-analysis by children is the mechanism of syntactic change. It is worth pointing out that the role ascribed to children in initiating linguistic change is more or less the standard generative one. Recall, for example, Chomsky and Halle’s (1968) notion that only children (and not adults) restructure the grammar. The Neogrammarians also referred to ‘imperfect learning’

3 One could also use these facts to argue that this change took place gradually and variably.

and misinterpretation as a primary factor in linguistic change (cf. Paul’s 1920 Einübungstheorie). I see no reason why children should bear the burden of changing language by actuating re-analyses. In fact, there is a great deal of sociolinguistic evidence that adults play an important role in many kinds of linguistic change which may or may not involve re-structuring (cf. Labov et al. 1972).4 However, there is good psycholinguistic evidence from a variety of languages (among them, for example, Russian, French and English) that children in general tend to interpret NP-V-NP sequences as SVO even if case-marking clearly shows them to be OVS. Children apparently ignore morphological clues in assigning surface grammatical relations and place greater reliance on word order (cf. for example, Ervin-Tripp 1978). The evidence comes from children’s comprehension of passives and relative clauses. In fact, Slobin (1973: 198) has proposed the following developmental universal:

Sentences deviating from standard word order [i.e. SVO - SR] will be interpreted at early stages of development as if they were examples of standard word order.

This principle is also operative in other developmental continua, e.g. L2 acquisition and pidginization/creolization.

This might at first blush seem to be additional support for Lightfoot's claim. But if most children and L2 learners typically re-analyze these structures in accordance with a developmental universal, why did English-speaking children exert sufficient force to bring about a permanent re-analysis despite morphological and semantic constraints, while child-speakers of other languages did not? In other words, why does change in apparent time get converted into change in real time in some cases but not others?

Let us consider another problematic case which invokes differential failure too. The specific change in question concerns the English relative system, which Lightfoot ascribes more generally to the reformulation of the Wh-movement rule (cf. Lightfoot 1979: 313-342). The scenario is rather

4 One case where adults were found to play a major role in syntactic change is described by Sankoff and Brown (1976), who observed that adult Tok Pisin speakers were the first in the pidgin-speaking community to use a new strategy of relativization. I am grateful to Elizabeth Traugott for reminding me of this example and also for the comment that word order re-analysis needs to be considered separately from other kinds of syntactic change involving re-analysis. Word order in language acquisition appears to be more heavily influenced by input than other syntactic phenomena.

more complex than in the case of the impersonal verbs. The relevant facts to be explained are as follows. Old English relative clauses could be introduced by the complementizer þe, or by a demonstrative pronoun (se, sēo, þæt). In Middle English relative clauses were introduced by the complementizer þat (þe is gradually replaced), by Wh-forms, which now began to be used as relatives, and by Wh + þat. In early modern English, relatives were introduced by either the complementizer or a relative pronoun (i.e. a Wh-form), but not both.

Again we can conveniently divide the change into a number of stages.

(i) The demonstrative pronouns underwent loss of inflection, which affected the language generally as the case system was lost.
(ii) The demonstrative pronouns suffered analogical levelling in that the sibilant forms se, sēo assumed fricatives like the rest of the paradigm, becoming þe, þēo, by analogy to þæt.
(iii) Invariant þe was increasingly common as a marker of subordination, perhaps as a consequence of the fact that SVO word order began to develop in subordinate clauses and the latter were no longer distinguished by OV word order.
(iv) The new nominative singular of the demonstrative was homophonous with the complementizer þe and was at the same time being extended to serve a new function as a definite article.

Stage II. Catastrophic re-analysis in the 15th-16th century occurs. The Wh relative pronouns were used as new relatives, occurring first where the co-referential NP was in oblique case or the object of a preposition. Lightfoot's explanation is that in many relative clauses it would be unclear whether þe was a nominative demonstrative, article or complementizer. This created parsing difficulties and lack of transparency. The changes in the relative system represent a surface manifestation of a single change in the abstract grammar, namely, reformulation of the Wh-movement rule. From Old English to modern English then there was a change in the possible (surface) constituent membership of the node COMP. In modern English, for, that, and Wh-elements are mutually exclusive in the COMP position in any given clause. Until the end of the early modern English period COMP could contain two items: only a conjunction or demonstrative pronoun in Old English; conjunction or Wh-word in Middle English; or conjunction in early modern English. (There was some variation in the type of item which could occur in first position.)

We can see then two (to some extent, competing) changes at work in the history of the relativization system in English. One is the adoption of the Wh-strategy of relative clause formation and the other is the introduction of constraints on deletion of relative pronouns. Lightfoot claims that the differential introduction of the Wh relative pronouns into various syntactic

positions is consistent with the Transparency Principle. Specifically, he maintains that the lateness of who can be explained by the fact that its environment (i.e. a subject NP) was the least ambiguous. A relative clause introduced by which and a deleted subject NP would present no parsing difficulties and permit a less opaque analysis.

One could also view the second change, i.e. deletion, as following from the Transparency Principle, although Lightfoot does not pursue this. For example, Bever and Langendoen’s claim that modern English no longer permits deletion of subject relatives because relative clauses without relatives in subject position became perceptually complex would fit in nicely with the TP interpretation. Bever and Langendoen argue that the constraints governing deletion of relatives have changed by dint of the demands of perceptual or cognitive strategies used by language learners to decode sentence-internal relations. Lightfoot of course would maintain that the change was governed solely by pure syntactic principles and was induced by an independent principle, i.e. the TP. However, I have already indicated that the autonomous status of the TP is suspect and Lightfoot himself is ambiguous about the location of the boundary between purely grammatical constraints and perceptual mechanisms. I see no real difference in the type of explanation offered by Lightfoot and Bever & Langendoen in so far as both see opacity from the point of view of the language learner.

Now I would not dispute the fact that the entrance of the Wh relative pronouns is governed by syntactic principles. In other words, since relative pronouns like English who, etc. do not require stranding, the adoption of these interrogative pronouns as relativizers must be seen as motivated in part by purely syntactic principles. In fact, my own work on relative clauses in Middle Scots demonstrates that the Wh-pronouns were introduced in a differentially sensitive manner with respect to syntactic positions ordered in a strict implicational sequence. The latter is in accordance with the Keenan-Comrie hierarchy of NP accessibility (cf. Keenan and Comrie 1977); Wh-pronouns enter from right (i.e. genitive) to left (i.e. subject), as shown in fig. 1. The direction of spread of deletion is, however, opposite to that


Fig. 1. Changes in the relative system plotted against syntactic position in the case hierarchy (Subject - Direct object - Oblique - Genitive): Change 1, entrance of Wh-relatives (right to left); Change 2, spread of deletion (left to right).

of the Wh-relativization strategy, i.e. it goes from left to right through the syntactic positions in the case hierarchy. Both these changes in relation to the case hierarchy are sketched out in fig. 1 (cf. also Romaine 1980a and DeKeyser 1981).

I have argued that both these changes are also governed by social and stylistic factors in addition to purely syntactic ones. Using a measure of syntactic complexity based on the frequency with which NPs in certain syntactic positions are relativized, I found that the Wh-relativization strategy entered one variety of English (Middle Scots) in the most complex styles (e.g. legal registers) and least frequently relativized syntactic positions, until it eventually spread throughout the system. The point of origin and direction of change are depicted in the diagram in fig. 2.

Fig. 2. Entrance of Wh-relatives: point of origin and direction of change plotted by syntactic position in the case hierarchy and by style.

The most crucial point about these changes in the relative system is not the fact that they can be accounted for in terms of the TP (or purely syntactic principles), as Lightfoot claims, or perceptual mechanisms, as Bever and Langendoen claim, or in terms of social and stylistic (in addition to syntactic)

factors, as I claim. That is, Lightfoot would reject falsification based on a claim that the same data in evidence of a change may be analyzed in different ways. These data however constitute falsification in a different manner, since the very nature of the mechanism of change, i.e. its gradualness, argues against Lightfoot's interpretation based on the TP. For not only is syntactic change gradual in this case (and indeed others), the precise mode of implementation accords well with current models of phonological change. That is, certain phonological changes are manifested initially in patterned fluctuation in certain relevant environments (which may be both linguistic and social) until certain quantitative changes become cumulative and the resulting system becomes qualitatively different over a sufficient period of time. This model of change is essentially in agreement with the one presented in Labov et al. (1972), where the direction and location of certain sound changes in progress were plotted through various phonetic environments, social groups and styles, which were found either to inhibit or to accelerate linguistic change.

Thus, I would claim that instances in which syntactic change may take place by gradual and variable diffusion challenge Lightfoot's view that syntactic change consists of radical re-analysis or cataclysmic restructuring at one stroke. There is additional support for the gradualness of syntactic change in creole studies (cf. for example, Bickerton 1977) and in natural languages (cf. for example, the papers in Li 1977). This case, however, also questions the autonomy principle as well as Lightfoot's hypothesis that phonological change works in accordance with the TP. In other words, the implementation of syntactic change does not appear fundamentally different from that of phonological change in that both may be gradual and variable. I do not doubt that there may be changes which affect only the syntactic component, but the fact that there are changes which do not, like this one, damages Lightfoot's case. Lightfoot (1979: 153) protects himself against this kind of charge by saying that to deny the possibility of changes affecting only the syntactic component is to "confuse the characterization of the change with its so-called cause". This does not hold water, as we will soon see. He furthermore (1979: 377) dismisses the gradualness issue as uninteresting; and with regard to changes in the English passive he (1979: 280) notes that the apparent gradual spread of some of the English passives does not follow from EST, but he considers the apparent gradualness to be an artifact of the restricted data.

Lightfoot is ambiguous about the boundaries or differences between many things (not the least of which is the difference between explanation and


prediction); among these are the distinction between implementation and mechanism, and cause and actuation. He is not the only one who is confused; Weinreich et al. (1968: 102) equate the explanation of a change with the analysis of its mechanism. This amounts of course to saying that description is to be equated with explanation. Lightfoot's argument is circular. He says that a single change in the abstract grammar accounts for the simultaneity of a set of surface changes, which eliminates much of the accumulated complexity; and the latter, in turn, is taken as evidence for the singularity of change in the abstract grammar. If the ultimate locus of change is in the abstract grammar (as transformational generative grammar has always maintained), then we have no way of observing it, i.e. bringing empirical facts to bear on the gradualness vs. discreteness issue. The simultaneity of a set of concomitant surface changes may be an artifact of a method of analysis which allows us to state/describe a series of changes as a single rule addition or category change etc. (cf. also Aitchison 1980: 139). Just because we may perceive connections among a set of changes in retrospect does not mean that the connections are in fact there. Virtually any set of changes when viewed post hoc may be accommodated within Lightfoot's schema. It is also not clear in Lightfoot's model how one distinguishes between the changes which allow the build-up of complexity and those which eliminate it. At what point does the restructuring take place and the build-up of complexity cease? This is, of course, equivalent to asking where the precise point of spatio-temporal actuation is located; while it is in principle possible that we could record this point quite by accident by monitoring the outputs of all the speakers of a language over a long enough period of time, it is only in retrospect that we could identify the point at which a change was introduced.

The changes in the relative system in English also illustrate differential failure of the TP. There are dialects of modern English, like Scots, which have never really integrated the Wh-strategy of relativization. Some varieties of modern spoken Scots allow the relativization of NPs in all syntactic positions by the complementizer that. One cannot really regard the change as completed if certain laggard dialects like Scots are still undergoing it or have yet to implement it. Now, if opacity were a sufficient and necessary condition for change, why did it not happen all at once in all varieties of English, instead of variably and gradually?

There were differential failures elsewhere in Germanic. For example, Old Dutch, like Old English and Old High German, used demonstrative pronouns as relativizers. In Middle Dutch, however, the interrogative pronouns, i.e. wiens and wier, began to be used along with the demonstratives. The former are, however, still limited to the genitive and oblique positions. At this time the inflections of the Dutch demonstratives were exactly parallel to those of the interrogatives; thus loss of inflections does not seem to have been a motivating factor in the introduction and spread of interrogatives as relativizers in Dutch. Clearly, a decline in the inflectional marking of the determiner system is not a sufficient condition for change (cf. Romaine, forthcoming c). Actually, what neither Lightfoot nor Bever and Langendoen have considered is that parsing difficulties and opacity, which are responsible for re-analysis, are really problems only in the written language anyway.

The TP is also in trouble with regard to the issue of deletion of relative pronouns in subject position, for there are varieties of English (pace Bever and Langendoen) which do allow it. Scots is one of these (cf. Romaine 1981b). And even if subject pronoun deletion is prohibited now in some dialects of English (mainly standard written English), it was not in Shakespeare's time, when largely the same constraints on word order operated as do today. If this is true, must we posit different perceptual strategies or mechanisms for those speakers who can and do delete subject relatives? Or to put this question in another way: are some speakers less offended by opacity than others, or, less personally speaking, do different languages and varieties of them tolerate higher levels of opacity?5

Lightfoot himself seems puzzled by some of his Greek data, which suggest that the level of tolerated opacity may be a language-specific variable and therefore not predictable by an independent metric like the TP (cf. also Aitchison 1980: 143). He (1979: 293) questions why Greek, having lost its subjunctive category as a result of phonological change, should then develop a new morphological class with almost exactly the same distribution and semantic interpretation as the old subjunctive. Even if Latin could apparently tolerate such a radical redistribution in its subjunctive from Proto-Indo-European, why did Greek then, when it lost its subjunctive, replace it? Lack of evidence, Lightfoot says, prevents us from drawing reliable conclusions.6

5 Jane McBrearty has pointed out a seeming paradox in Lightfoot's views on children and restructuring which I failed to note here. If restructuring has a psychological basis and children are the main agents of change, then it follows that children (or language learners in general) have a lower tolerance for opacity than adults. This may well be true of course, but I know of no evidence which bears on this issue.

6 I am grateful to Oscar Collinge for drawing my attention to some anomalies in Lightfoot's interpretation of the Greek and Latin data.

Lightfoot (1979: 334) blocks the differential failure issue by saying that the fact that change does not occur wherever the causal factor, X, is present or, conversely, that the change may occur even when the causal factor X

is not present, does not indicate that X is not a causal factor. Now this is surely a strange interpretation of causality and one which certainly won't hold water elsewhere. The more usual one, I take it, is in accordance with the deductive-nomological schema discussed by philosophers of science like Popper. This type of explanation is based on deductive inference, i.e. a well-formed explanation has the form of a deduction and is in principle equivalent to a prediction. In other words, X happens because Y caused it, with the stringent assumption that Y be a nomically sufficient, necessary and antecedent condition to X. The most important characteristic of this type of explanation is that if it adheres to all the appropriate conditions and has empirical content, it cannot be denied, i.e. it is necessarily the case that X. As far as many philosophers of science are concerned (e.g. the logical positivists like Carnap and Hempel), if a discipline cannot establish true D-N connections, then it is neither truly explanatory nor genuinely scientific (cf. for example, Popper 1977).
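The deductive-nomological pattern at issue can be displayed schematically. This is the standard Hempelian covering-law formulation, not Romaine's own notation:

```latex
% Deductive-nomological (covering-law) explanation: the explanandum E
% is deduced from general laws L_1, ..., L_n together with antecedent
% conditions C_1, ..., C_k.
\[
  \frac{L_1, \ldots, L_n \qquad C_1, \ldots, C_k}{\therefore\; E}
\]
% Because E follows deductively, a well-formed D-N explanation is
% equivalent in form to a prediction: given the laws and the antecedent
% conditions, E must occur.
```

On this schema, citing a "function" such as transparency as a cause requires that it figure in the premises as a nomically sufficient antecedent condition, which is precisely what differential failure shows it is not.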

What Lightfoot seems to be saying is this: sometimes the TP works/applies and sometimes it does not, but it still explains and predicts change. He (1979: 345) also maintains that exact predictions will be derivable only when one provides an exact characterization of the tolerance level for opacity and complexity. This is just another blocking device. His final conclusion is (1979: 344):

I shall make no attempt to formalize the Transparency Principle or to give a precise account of the permitted degree of derivational opacity. At this stage I do not have enough evidence to make an exact proposal. Although several re-analyses have been examined here, one needs to look at many more and, of course, re-analyses of this type are not easy to find and must be argued for and justified. Therefore, while precision in this respect is now premature, a reasonable goal for work in diachronic syntax is a characterization of tolerable opacity. In fact, an exact quantification may not be possible, but this does not gainsay the value of the goal.

I think I have seriously damaged a number of Lightfoot's claims about the TP, in particular, its explanatory and predictive power. As I suggested in my introduction, the ontological status of the TP was crucial in evaluating the claims made for it. In its present state, Lightfoot's characterization of it as an independent principle in the theory of grammar is inaccurate, but is there any reason to expect that it could achieve such a status?


Even if we try to define precisely and independently such notions as derivational complexity and opacity, I assume that some reference must be made to speakers, i.e. in terms of perceptual mechanisms for de-coding/processing sentences and acquisition strategies. Lightfoot (1979: 129) explicitly makes this connection for us; he asks how smart language learners are and what the limits are to the abstractions that they can postulate. In other words, we are in some sense talking about speakers and how they use the language, and not just autonomous grammars/languages, when we say things like 'grammars eliminate complexity' and 'syntactic change is therapeutic'. What we are really claiming is that speakers (or their brains) have the ability to recognize when their grammars have become too complex and implement the TP to bring about the desired reduction in opacity, which makes the language easy to learn. And what makes a language hard to learn may also make it easy to understand, e.g. case-coding of the syntactic position of relatives by unequivocal marking of the pronoun. That is, inflectional endings, allomorphy, etc. are among the linguistic devices which are 'left out' in child and second language acquisition, and in pidginization, but are 'put back in' later, e.g. in creolization. There does not seem to be any a priori reason for assuming that languages or grammars need things like the TP, unless we accept that languages need speakers in order to change. The TP seems rather to be an esthetic criterion relating to linguistic description.

There are a lot of epistemological worries about functionalism and teleological explanation in general, even discounting the inability of functions to be nomically sufficient conditions for the implementation of change (cf. the discussion in Lass 1980: ch. 3). The possibility of dysfunction, i.e. a language which has exceeded its limit of opacity, brought about by a series of changes, raises at least two tricky questions:

(1) Why do languages allow opacity to build up anyway? The implication of this is that some changes must be non-therapeutic.

(2) If language change is to be explained as teleological, i.e. if changes eliminate complexity so that children can learn a language, then the history of language is adaptive.

Lightfoot (1979: 123) would dismiss the first objection by saying that the TP applies blindly. In other words, re-analyses solve essentially local problems and grammars practice therapy and not prophylaxis. Thus change may take place which has a therapeutic effect in one part of the grammar but which contributes to derivational opacity elsewhere. This seems to be just another blocking device, presumably in anticipation of arguments to the

effect that not all language change is simplification, a criticism levelled against earlier generative views of change.7

With regard to the second point, however, if anything, a better case can be made for the maladaptive function of linguistic change and diversity (cf. Romaine 1980b: 55). A strict interpretation of functionalism and therapeutic change would require us to conclude that Old English became Middle English because Old English was in some respects unfit or no longer adapted to the needs of its speakers (i.e. speakers need transparent grammars). Lass (1980: 85) claims, and I think rightly, that no one has ever produced a rational basis for a theory of therapeutic change by identifying independently the unfitness of a language except post hoc and ad hoc. As far as we know, there are no real cases of maladaptation, e.g. no cases of language death have resulted from a failure to implement a functional change. Alternatively, I have not seen any evidence cited that children have failed to learn a particular language or acquire a particular construction in their language because it was too opaque. Children are apparently able to switch over from developmental or discourse universals, such as the one requiring NP-V-NP structures to be interpreted as SVO, to the specific morphological and syntactic constraints applying in their native language (cf. Romaine, forthcoming c, for a discussion of similar shifts from discourse-pragmatic to grammatical-syntactic constraints in other developmental continua).

The functionalist interpretation is also incompatible with both the gradualness and variability of some changes. If a change is functional, then functions must somehow be propagated along with change. Now if X → Y via a stage where X alternates with Y, and there is a causal connection in the form of a function being implemented, why does the change not occur all at once, i.e. discretely and categorically? How can we account for the fact that for some people who speak a language, a function is a motivating force for change at time t, but for others it is not? Now, all things being equal, speakers/speech communities have a choice whether to capitalize on an available simplification strategy. It is generally accepted that learners may use idiosyncratic acquisition strategies at different stages (cf. for example, the papers in Hatch 1978).

I think one must conclude that the explanatory force of functional tendencies when invoked as causes of language change is empty. If such

7 Perhaps I am being unfair to Lightfoot here. Martin Harris has noted that it is unrealistic to expect to be able to judge the relative opacity of a language as a whole (or for that matter, the extent of adaptiveness of a language as a whole); and therefore, Lightfoot's restriction of the TP to local problems is simply an unavoidable consequence.

tendencies are causes, then they are neither necessary nor sufficient. Thus, functions such as transparency, minimization of allomorphy, etc. are no good as explanations if they can be implemented or not at will. Unless one can develop a reasonably rigorous and non-particularistic theory with some predictive power, i.e. one not based on post hoc identifications of functions, then there is no way out of the dilemma. I submit that Lightfoot's claim of independent status for the TP as both an explanation and cause of syntactic change does not stand up under critical scrutiny. It is thus vacuous. Judging from the use of the term vacuous elsewhere in the transformational generative literature, I assume this is the worst charge that can be laid against it.

To conclude, I would like to consider just briefly whether an autonomous theory can provide the kind of explanation of language change that Lightfoot would wish. If languages are autonomous systems and not affected by considerations of use, then there is no reason for them to change. If we exclude speakers, functional explanations fall by the wayside.8 If the speaker is implicated, then language and its history is essentially a non-nomic domain, i.e. causal explanations are not valid. If one assumes the existence of free choice, human behavior is non-nomic. (I have condensed this argument to a very great extent; cf. Lass 1980 for a full exposition, and also Romaine 1981a.)

Given the nature of Lightfoot's subject, i.e. diachronic syntax, it is somewhat odd to find that he neglects to mention any of the work on pidgins and creoles. Creolists are in fact producing some of the most interesting findings in the area of syntactic change and are also greatly concerned with functionalism. Aitchison (1980: 144) censures Lightfoot for his silence on syntactic change in progress as well as for the failure to mention pidgins and creoles. It is, however, generally the case that synchronic and diachronic syntacticians often work in ignorance of each others' findings, and both also unfortunately in ignorance of those of sociolinguists (and vice versa).

8 It is, of course, possible to envisage functionalism which is seemingly independent of speakers' needs. Harris (forthcoming), for example, refers to the need of the system itself to remain in conformity with the principles of universal grammar. I find this dubious. I am quite willing to accept that 'order' should be a property of grammars though not necessarily a property of languages; that is, orderliness may be a property of a theory of grammar without being a property of the real world. Why should orderly change be preferred to non-orderly change in view of the fact that we have no evidence for the real world orderliness of languages? This seems to be another attempt to elevate an esthetic criterion to the status of candidate for reality.


Surely one of the most important questions to be raised is whether the changes which occur under creolization involve functions. For example, when a language is used to manipulate complex (particularly written) discourse, are there certain structures it must have/develop in order to do certain things, e.g. relativization, marking of subordination etc.? In fact the two avenues of enquiry which Lightfoot rules out (i.e. independent diachronic universals, or external approaches) are really, in my opinion, the most promising ones for research on syntactic (and other kinds of) change. A more illuminating (but not necessarily explanatory) account of certain changes will be provided not by an autonomous theory, but by one which allows for the interaction of syntactic processes with both discourse-pragmatic constraints and perceptual strategies.

References

Aitchison, J., 1980. Review of: David Lightfoot, Principles of diachronic syntax (London: Cambridge Univ. Press, 1979). Linguistics 18, 137-146.
Bennett, P. A., 1979. Observations on the transparency principle. Linguistics 17, 843-863.
Bever, T. G., D. T. Langendoen, 1972. The interaction of speech perception and grammatical structure in the evolution of language. In: R. P. Stockwell, R. K. S. Macaulay (eds.), Linguistic change and generative theory, 32-95. Bloomington: Indiana Univ. Press.
Bickerton, D., 1977. Change and variation in Hawaiian English, vol. II: Creole syntax. University of Hawaii (Social Sciences and Linguistics Institute).
Chomsky, N., M. Halle, 1968. The sound pattern of English. New York: Harper and Row.
DeKeyser, X., 1981. Relativizers in early modern English. Paper to be presented at the International Conference on Historical Syntax, Poznań, Poland.
Ervin-Tripp, S., 1978. Is second language learning like the first? In: E. Hatch (ed.), Second language acquisition. Rowley, Mass.: Newbury House.
Greenberg, J., 1978. Diachrony, synchrony and language universals. In: J. Greenberg (ed.), Universals of human language, vol. I: Method and theory, 61-93. Stanford: Stanford Univ. Press.
Harris, M., forthcoming. Explaining language change. To appear in: A. Ahlqvist (ed.), Current issues in linguistic theory. Proceedings of the Fifth International Conference on Historical Linguistics. Amsterdam: Benjamins.
Hatch, E. (ed.), 1978. Second language acquisition. Rowley, Mass.: Newbury House.
Keenan, E., B. Comrie, 1977. Noun phrase accessibility and universal grammar. Linguistic Inquiry 8, 63-99.
King, R. D., 1969. Historical linguistics and generative grammar. Englewood Cliffs, N.J.: Prentice-Hall.
Labov, W., M. Yaeger, R. Steiner, 1972. A quantitative study of sound change in progress. Final Report on NSF Contract 3287. 2 vols. Philadelphia: U.S. Regional Survey.
Langacker, R., 1977. Syntactic re-analysis. In: C. Li (ed.), Mechanisms of syntactic change, 57-141. Austin: Univ. of Texas Press.
Lass, R., 1980. On explaining language change. London: Cambridge Univ. Press.
Li, C. (ed.), 1977. Mechanisms of syntactic change. Austin: Univ. of Texas Press.
Lieber, R., 1979. The English passive: An argument for historical rule stability. Linguistic Inquiry 10, 667-688.
Lightfoot, D., 1979. Principles of diachronic syntax. London: Cambridge Univ. Press.
Lightfoot, D., 1981. Transparency and historical explanations. Paper presented at the Fifth International Conference on Historical Linguistics, Galway.
Martinet, A., 1955. Économie des changements phonétiques. Bern: Francke.
Paul, H., 1920. Prinzipien der Sprachgeschichte. Halle: Niemeyer.
Popper, K., 1977. The logic of scientific discovery. London: Hutchinson.
Romaine, S., 1980a. The relative clause marker in Scots English: Diffusion, complexity and style as dimensions of syntactic change. Language in Society 9, 221-249.
Romaine, S., 1980b. What is a speech community? Belfast Working Papers in Language and Linguistics 4, 41-60.
Romaine, S., 1981a. The status of variable rules in sociolinguistic theory. Journal of Linguistics 17, 93-119.
Romaine, S., 1981b. Syntactic complexity, relativization and stylistic levels in Middle Scots. Folia Linguistica Historica 2, 56-77.
Romaine, S., forthcoming a. Syntactic change as category change by re-analysis and diffusion: Some evidence from the history of English. Paper given at the Second International Conference on English Historical Linguistics, Odense, Denmark, April 1981.
Romaine, S., forthcoming b. On the problem of syntactic variation. To appear in: Language in Society.
Romaine, S., forthcoming c. Towards a typology of relative clause formation in Germanic. Paper given at the International Conference on Historical Syntax, Poznań, Poland, March 1981. To appear in: J. Fisiak (ed.), Historical syntax. The Hague: Mouton.
Sankoff, G., P. Brown, 1976. The origins of syntax in discourse: The case of Tok Pisin relatives. Language 52, 631-666.
Slobin, D., 1973. Cognitive prerequisites for the development of grammar. In: C. Ferguson, D. Slobin (eds.), Studies of child language development. New York: Holt, Rinehart and Winston.
Traugott, E. C., 1972. The history of English syntax. New York: Holt, Rinehart and Winston.
Weinreich, U., W. Labov, M. Herzog, 1968. Empirical foundations for a theory of language change. In: W. P. Lehmann, Y. Malkiel (eds.), Directions for historical linguistics, 95-189. Austin: Univ. of Texas Press.