From the Knowledge of Language to the Knowledge of the Brain

Reti, saperi, linguaggi – 1/2014 – pp. 131-162 ISSN 2279-7777 © Società editrice il Mulino

FROM THE KNOWLEDGE OF LANGUAGE TO THE KNOWLEDGE OF THE BRAIN

Edoardo Lombardi Vallauri

Abstract: One of the most interesting aspects of linguistic facts is what they can tell us about the organism that supports them, and in particular about the brain. The article tries to sketch a brief overview of some phenomena that linguistics “delivers” to the knowledge of the mind/brain: facts that we know about thanks to linguistic studies, which at the same time suggest their being strongly determined by the brain, by the way it works, by its efficiency limits, by the path it may have followed during its evolution under the pressure of an environment made of human communities engaged in communication tasks. Such linguistic facts, both the best known and those that have not been considered in this perspective so far, are presented in the light of the specific hints they provide on the working of the brain.

Keywords: language and brain, memory, language evolution, implicit meaning, language processing.

1. THE NON-NEUTRALITY OF LANGUAGE WITH RESPECT TO THE BRAIN

This paper reflects the point of view of a linguist. That is to say, it starts from language, and it points out some features of language as it is described by linguists. These features more or less recognizably «point» to the brain, in that their explanation may reside in how the brain is made and works. Actually, there are some properties of language that seem to depend on the brain rather than on other external constraints (pragmatic or historical in nature¹). Since language is presently better known than the brain, these features potentially provide us with hypotheses on how the brain is made and works. Such hypotheses are obviously to be verified (and several of them are presently being verified) through experimental work on the brain itself.

The «Logical Problem of Language Evolution» (Christiansen, Chater 2008) essentially admits three solutions:

(A) Language may have arisen in the human (and only the human) brain d’emblée, in the shape of the so-called Universal Grammar (UG), thanks to an extraordinary event, not by evolution. This has long been and still is Noam Chomsky’s claim, which is shared for instance by Derek Bickerton, Lyle Jenkins and some others.

(B) Language (UG) may have arisen in the brain through evolution, thanks to progressive adaptation of the grammar-in-the-brain to its environment, crucially including the use of systems of verbal communication (languages) in human groups. This is the position held by Stephen Pinker, Ray Jackendoff, James Hurford, and others.

(C) Things may have developed the other way round, namely language may have been shaped by its use in human communities under the pressure to fit the limits and properties of the human brain (among other environmental pressures), and not vice versa. This is the opinion of Christiansen and Chater, and many others.

All these hypotheses share the view that language reflects the structure and working of the brain. In perspectives (A) and (B), linguistic features are seen mainly as revealing the language-in-the-brain, i.e. a specifically linguistic (grammatical) module by which the brain runs language. In perspective (C), virtually any fact that characterizes language may reveal general features of the brain, because the brain evolved to operate language by means of structures and procedures that are also devoted to other tasks and functions. We are convinced that (C) is more plausible than (A) or (B). It is impossible to justify this endorsement explicitly here, so we refer to the arguments directly or indirectly contributing to the issue proposed by Putnam (1967), Piaget (in Piattelli Palmarini 1979), Pullum, Scholz (2002), Deacon (2003), Tomasello (2003), Lombardi Vallauri (2004), Sampson (2005), Gallese, Lakoff (2005), Christiansen, Chater (2008), Evans, Levinson (2009), Corballis (2011), Baronchelli, Loreto, Puglisi (in press).

What counts for us here is what we may call the non-neutrality of language with respect to the brain, i.e. the fact that the properties of language can be regarded as not independent from the structure and working of the brain, and as «saying something» about them. If language presents a certain feature, we cannot consider this as neutral with respect to what we are allowed to think about how the brain is made. Rather, it means that such a feature is – as a minimum – allowed by the brain and – as a maximum – directly caused by it. In this perspective, we will propose here a small list of facts that seem to instantiate a high level of influence of the brain on language. Linguistics «delivers» such notions to the study of the brain, in the sense that they are facts we know about thanks to linguistic studies, but at the same time they suggest their being strongly determined by the brain, the way it works, its limits of efficiency, the evolutionary path it may have covered under the selective pressures of the environment.

We will mention some phenomena that are already well known and studied from the point of view and with the methods of neuro-science, but also some properties of language that – from this perspec-tive – have drawn less attention so far. The instruments of linguistics cannot take us beyond the linguistic side, i.e. the description of each linguistic feature and the mere suggestion that it concerns the brain. The other side, the one specifically related to the brain, cannot be done here, and not by a linguist: we will just hint at it.

Since we have limited space and our aim is a survey that may give an idea of the kinds of language features that can be studied from the brain perspective, each phenomenon will be treated synthetically, refer-ring to relevant literature (when present) for more in-depth discussion.

2. FREQUENCY (INCLUDING ANALOGY) VS. REGULARITY

It is well known (Bybee 2007) that frequency of occurrence strongly influences the possibility for linguistic items to have more or less irregular morphological paradigms. More frequent words are allowed higher degrees of irregularity. This can be observed in the verbal paradigms of all languages with verbal morphology. For example, verbs meaning «be», «have», «see», «know», «take» etc. are much more likely to have irregular paradigms than verbs like «operate», «obey», «postpone» or «result».

The explanation for this is – in principle – quite obvious: the fact that some words trigger more frequent processing experiences makes it easier for speakers to remember their forms. Repetition of an experience eases the process by which it can get entrenched in the brain². This is confirmed by the fact that infrequent verbs can show irregular patterns when these are shared by a sufficient number of other verbs, leading to a relatively high frequency of the pattern itself. For example, in English the pattern weep – wept is «supported» by its belonging to a wider correlation with keep – kept, sleep – slept, whose overall frequency is higher than that of each single verb.

These facts seem to say not only that the positive effect of frequency on remembering, which can be observed throughout the working of the brain, is obviously valid for language as well, but – perhaps less obviously – that analogy plays a role in the effectiveness of frequency. Slightly different experiences sharing certain features count as the same experience as far as that feature is concerned.
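The direction of this correlation can be caricatured in a few lines, using exactly the verbs cited above. The frequency ranks below are rough illustrative assumptions (orders of magnitude, not corpus counts):

```python
# Toy illustration of the frequency/irregularity link, using the verbs
# mentioned in the text. Ranks are illustrative assumptions
# (smaller rank = more frequent), not real corpus data.
freq_rank = {
    "be": 2, "have": 9, "know": 50, "see": 60, "take": 70,            # irregular
    "result": 1200, "operate": 2500, "obey": 6000, "postpone": 9000,  # regular
}
irregular = {"be", "have", "know", "see", "take"}

def mean_rank(verbs):
    """Average frequency rank of a set of verbs."""
    return sum(freq_rank[v] for v in verbs) / len(verbs)

regular = set(freq_rank) - irregular
# Irregular verbs cluster at the frequent (low-rank) end of the lexicon.
print(mean_rank(irregular), mean_rank(regular))
```

Whatever the exact numbers, the point is only that the irregular set sits far toward the high-frequency end, which is what the entrenchment account predicts.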

3. FREQUENCY VS. LENGTH

One of the language features most often explicitly related to brain processing is the set of so-called Zipf’s laws (Zipf 1935). The one that has been most discussed and further investigated is the law according to which (to paraphrase George A. Miller’s words in his Introduction to Zipf’s book) there is a stable relation (a) between the frequency of occurrence of a word and the number of different words occurring with that frequency, or (b) between the frequency of occurrence of a word and its rank when words are ordered with respect to frequency of occurrence. This law constantly characterizes not only human languages, but «appears to be a universal property of complex communicating systems» (Solé et al. 2011).

Even better known to linguists is Zipf’s «Law of Abbreviation», stating that «the length of a word tends to bear an inverse relationship to its frequency» (Zipf 1935, 38). Benoit Mandelbrot was the first to show that this law arises automatically for statistical reasons: even if words are generated randomly, short words will be more frequent than long words. Moreover, as pointed out by G.A. Miller, identical short sequences will always be more frequent than identical long ones. As a result, «a few short words will be used an enormous number of times while a vast number of longer words will occur infrequently»³.
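Mandelbrot's statistical point can be checked with a quick simulation of «random typing». The alphabet size and text length below are arbitrary choices for the sketch:

```python
import random
from collections import Counter

# "Random typing": four letters and the space bar pressed uniformly at
# random. Even here, shorter word types end up more frequent on average,
# which is Mandelbrot's statistical point about the Law of Abbreviation.
random.seed(0)
text = "".join(random.choice("abcd ") for _ in range(200_000))
freq = Counter(w for w in text.split() if w)

by_len = {}  # word length -> list of frequencies of the types of that length
for word, f in freq.items():
    by_len.setdefault(len(word), []).append(f)
avg = {k: sum(v) / len(v) for k, v in by_len.items()}

for k in sorted(avg)[:4]:
    print(f"length {k}: average type frequency {avg[k]:.1f}")
```

The average frequency per word type falls steeply with length, even though no «speaker» here is selecting anything: this is the purely statistical baseline against which Zipf's diachronic argument is set.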

But it remains true that a language made of shorter words is more ergonomic, less effort-consuming, than one made of longer words. Zipf explicitly wonders whether it is brevity that causes frequency, or vice versa. Are short words (born by chance) subsequently selected to be used more frequently because of their shortness, or do frequently used words undergo processes of shortening, in order to become more ergonomic, whereas the same does not happen to less frequent words?

Zipf’s answer is that since «a speaker selects his words not according to their lengths, but solely according to the meanings of the words and ideas he wishes to convey» (Zipf 1935, 29), frequency must cause brevity. In fact, he concludes, the diachronic study of languages shows that frequent words typically undergo various phenomena of «truncation» or «substitution» that lead them to become shorter.

Fig. 1. Zipf’s frontispiece.

As a result, in English conversations the 200 most frequent words are almost all monosyllables (plus a handful of bisyllables). Such facts, since they equally characterize all languages, can be regarded as a confirmation that human language has evolved in strict obedience to the general needs of the brain; or else, if one takes the opposite perspective, that the brain’s need for effort saving is met by the possibility to process things that are shorter / smaller / made of fewer elements, rather than things that are longer / bigger / made of more elements.

4. EXPLICIT MARKING IS BETTER THAN LINEAR ORDER. CONSISTENCY IS EASIER THAN VARIATION

In a study based on children’s reactions to sequences that violate the canonical sentence form of their language, Slobin, Bever (1982) showed that the acquisition of grammatical patterns proceeds at different speeds according to the processing ease of different structures. In particular, children acquiring Turkish, Italian, English and Serbo-Croatian are faced with different means to express the participants’ roles in transitive sentences. While Turkish marks syntactic roles such as Subject and Object by means of specific case inflection morphemes agglutinated to the words, Italian and English use SVO word order (Subjects precede, Objects follow the Verb). Serbo-Croatian basically uses inflection, but in some instances the Subject and Object cases are phonetically identical. This means that some sentences in Serbo-Croatian do not indicate morphologically which noun is the Subject and which the Object. Speakers must make use of some convention to deal with those cases, and the convention is to follow SVO order in inflectionally ambiguous sentences (Slobin, Bever 1982, 235).

Now, children acquiring these languages come to master the structure of transitive sentences at different ages. The completely explicit strategy of Turkish (case marking) allows them to reach good performances at the age of 2. Italian and English children reach the same level at about 3. Serbo-Croatian children perform worse and arrive later at complete mastery of transitive sentences. This – along with the results of similar experiments – suggests that explicit marking directly added to an element is the most processable way to encode information about its function. Indirect encoding through no explicit mark and conventional assignment of structural meaning to sequential order is more difficult to process and to learn. In other words, it seems that the one-to-one association of a specific linguistic element with a specific function is the most natural way (easiest for both processing and acquisition) to do the job. When the same function is assigned to a disembodied pattern such as word order, the result is less ergonomic.

However, entrusting one function to more than one strategy seems to be the worst option. Serbo-Croatian children are delayed by a lack of consistency: since their language uses two strategies, mastering them becomes more difficult.

Of course, many other linguistic facts point to the same conclusion. Just to cite one of the most studied examples, the so-called universals of basic word order can be regarded as general implications driven by the preference for consistent over contradictory patterns. The preference of VO languages for Prepositions and for such orders as Noun-Adjective, Noun-Genitive and Noun-Relative Clause is explained in terms of the preferability of a consistent basic Head-Modifier order both in the Verb Phrase and in the Noun Phrase. Conversely, the preference of OV languages for Postpositions, Adjective-Noun, Genitive-Noun and Relative Clause-Noun can be described as the preference for the Modifier-Head order. When processing linguistic material, it seems that the brain finds it easier to establish what the relevant pattern is, and to apply it consistently. Variation, especially if not easily predictable, is costly.

5. THINGS THAT ARE NEAR INTERFERE

Lombardi Vallauri (2004, 2012) provides a more economic explanation for a well-known Chomskyan argument in favour of the innateness of the Structure Dependency Principle. The argument was first proposed by Chomsky (1980), then developed by Crain and Nakayama (1987), and adopted by very authoritative syntheses of the theory such as Akmajian et al. (1984) and Cook and Newson (1996). We must summarize it in a few words here, referring to the cited works for more details.

Crain and Nakayama (following Chomsky) point out that the interrogative version of the English utterance in (1) is (2):

(1) the man is tall
(2) is the man tall?

Children acquiring their mother tongue could – theoretically – infer that the rule for interrogative sentences is something like: «take the verb of the assertive sentence, and move it to the beginning». The consequence would be that from sentences like (3) children would build questions like (4):

(3) the man who is tall is in the room
(4) *is the man who tall is in the room?

Now, crucially,

children do not produce questions like the ill-formed (4). Therefore, it appears that children know that structure, and not just the more salient linear order property of sentences, is relevant in the formation of yes/no questions. (Akmajian et al. 1984, 470)

This knowledge is attributed to an innate endowment. In order to build the correct interrogative:

(5) is the man who is tall in the room?

speakers must know that (3) is not just a list of words, but a structure containing hierarchical relations where, in particular, the relative clause who is tall is a dependent unit, in some way separate from the rest. Otherwise, according to the cited authors, they would be tempted to front the verb of the relative clause as in (4) instead of the verb of the main clause as in (5):

(4) * is the man who Ø tall is in the room?

(5) is the man who is tall Ø in the room?

Why this notion need not be innate is explained extensively in Lombardi Vallauri (2004, 2012). The example constantly chosen by the innatist tradition to elucidate this problem is really misleading, because it contains the same verb in both clauses, thus entrusting to syntax alone the role of providing speakers with the relevant information about the difference between the two verbs. But in the majority of sentences, semantics intervenes. And, of course, the pattern can be acquired from the majority of sentences. For example, in order to understand that the interrogative version of (6) is (8) and not (7), speakers do not need the syntactic/structural knowledge we mentioned, because they can rely on a semantic criterion:

(6) the man who lives with Mary owns a Porsche
(7) *does the man who live with Mary owns a Porsche?
(8) does the man who lives with Mary own a Porsche?

With verbs different from the copula, interrogatives are obtained by auxiliarization, not fronting. Speakers understand that the verb they must auxiliarize is the one whose meaning they want to interrogate, not the other. So, if they want to ask whether the man owns a Porsche, why should they auxiliarize The man... lives in (6), resulting in does the man... live? They obviously auxiliarize The man... owns, resulting in Does the man... own. In the same way, in (3), if they want to ask whether the man is in the room, why should they front The man... is tall, resulting in Is the man... tall? They obviously interrogate (by fronting) the verb expressing the content to be interrogated, namely The man... is in the room, resulting in Is the man... in the room.
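The semantic criterion at work here can be sketched as a toy procedure. The bracketing below stands in for the semantic identification of the relative clause; it is an assumption for the sketch, not a real parser:

```python
# Toy sketch: form a yes/no question by fronting the copula of the MAIN
# clause, i.e. the first "is" found outside the bracketed relative clause.
# The pre-marked brackets are a stand-in for semantic knowledge.
def yes_no_question(sentence: str) -> str:
    words = sentence.split()
    depth = 0
    for i, w in enumerate(words):
        depth += w.count("[") - w.count("]")
        if w.strip("[]") == "is" and depth == 0:   # matrix-clause copula
            rest = words[:i] + words[i + 1:]
            plain = " ".join(rest).replace("[", "").replace("]", "")
            return "is " + plain + "?"
    raise ValueError("no matrix copula found")

print(yes_no_question("the man [who is tall] is in the room"))
```

Nothing here is specifically grammatical knowledge: the procedure only needs to know which chunk is «about» the content being questioned, and to act on that chunk.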

One could (perhaps somewhat pedantically) object that cases like (6) do not contain a verb allowing fronting, so no analogy could be drawn from them to handle cases like (3). But the same pattern obviously applies also in sentences where one verb undergoes interrogation by fronting and the other by auxiliarization, from which speakers can draw an analogy in order to rule out (4) without having recourse to any innate grammatical knowledge:

(9) the man who is tall drinks Fanta Grape
(10) *is the man who Ø tall drinks Fanta Grape?
(11) does the man who is tall drink Fanta Grape?

(12) the girl who cooks your raamen is from Papeete
(13) *does the girl who cook your raamen is from Papeete?
(14) is the girl who cooks your raamen Ø from Papeete?

In other words, the only knowledge (prior to the direct experience of the linguistic stimulus) the child needs in order to build correct interrogatives from sentences with an embedded relative clause is the following, quite general and not specifically linguistic rule:

«Act-on-the-target» Rule
«If you want to modify something, act on it (not on other things)»

General knowledge can act very effectively in grammatical processes, so that specific grammatical knowledge need not necessarily precede it. Then, the specific form taken by this not-specifically-linguistic rule in the case of English interrogatives is:

«If you want to interrogate a verb, front/auxiliarize that verb (not another)»

Exactly in the same way, when a verb has to take different tenses or to become negative, or when a noun has to become plural, such grammatical actions are applied to that word, not to another, and the consequent meaning applies just to it:

(15a) I am buying the theater they opened near the sea shore
(15b) I will buy the theater they opened near the sea shore
(15c) I bought the theater they will open near the sea shore

(16a) the man who isn’t tall is in the room
(16b) the man who is tall isn’t in the room

(17a) the boy broke the vase
(17b) the boys broke the vase
(17c) the boy broke the vases

Now, this state of affairs is contiguous to a generalization known as the «First Law of Behaghel», because Otto Behaghel formulated it no later than the final volume of his Deutsche Syntax, in 1932: «What belongs together in a mentalist sense is placed together»⁴. That is to say, in language, things that are conceptually near will be placed close to each other. This is obviously ergonomic: a language that builds sentences like the cat of my neighbours doesn’t chase mice will always lead to easier processing for a human brain than one that generates sentences expressing the same meaning by something like the chase of doesn’t neighbours cat mice my.

Fig. 2. Behaghel’s frontispiece.

This confirms on the side of language a fact well documented by cognitive psychology studies: our brain has a predisposition to hypothesize that things that are near to each other interact more than things that are distant. Such a predisposition is by no means specifically linguistic, and must have developed under the pressure of physical reality. If you want to move a stone, you must touch that stone. The spear kills the individual it is thrown at, not another.

6. ACHIEVING INFINITY WITH FINITE MEANS

Deixis, or indexicality, is a universal feature shared by all human languages. It is hard to imagine how a language could work without it. The essence of deixis is contextualization, i.e. the fact that the meaning of certain words depends on the context in which they are uttered. The persons meant by I, you, we depend on who speaks. The portion of time selected by now, yesterday, tomorrow and by the tense of verbs such as did or will do depends on when the utterance is produced. The reference of here, there, this, that varies according to where they are uttered. Even the blackboard and George point to different things in different contexts.
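What contextualization buys can be sketched in a few lines: the same deictic word resolves to different referents in different contexts. The tiny lexicon, the context keys and the names are illustrative assumptions:

```python
# Toy deictic resolution: deictic words get their referents from the
# utterance context; all other words keep their coded meaning.
DEICTICS = {"I": "speaker", "you": "addressee", "here": "place", "now": "time"}

def resolve(utterance: str, context: dict) -> list:
    """Replace each deictic word with its referent in the given context."""
    return [context[DEICTICS[w]] if w in DEICTICS else w
            for w in utterance.split()]

ctx1 = {"speaker": "Anna", "addressee": "Marco", "place": "Rome", "time": "2014"}
ctx2 = {"speaker": "Marco", "addressee": "Anna", "place": "Pisa", "time": "1998"}
print(resolve("I met you here", ctx1))
print(resolve("I met you here", ctx2))
```

One and the same coded sentence yields two different propositions; without this mechanism, a distinct expression would be needed for each.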

If words were not able to work like this, a single different name would be necessary for every single entity in reality, a single different verbal tense would be needed for every single moment in time, and so on. With such a holistic lexicon, instead of entrusting the interpretation of this cup, these spoons, that glass or this blade of grass to context, we would be obliged to pronounce – and remember! – as many different names: a different name for every blade of grass in the world. Remembering them all would be impossible. Building a shared agreement on all such conventional names with all other human individuals would be even more impossible.

It is clear that deixis is deeply intertwined with another feature of language, namely the fact that language categorizes, i.e. it encodes categories, not individuals. We have a noun for the category «glass», not for every glass; for the category «grass», not for every turf or blade of grass. We have a verb form for the past and the future, not one for every different moment. We would not be able to manage a boundless, holistic lexicon, but we are able to manage a much smaller, categorizing lexicon, plus deictic mechanisms of contextualization⁵. We could not manage a boundless morphology (with, e.g., as many tenses as the different moments in time, or as many moods as the different epistemic nuances), but we can use the past, present and future tenses, complemented by other means to locate our predications in time.

More generally, between a communication code made of countless units endowed with perfectly univocal meanings and a code made of a finite number of units whose meaning is vague and can become more precise only in context, we are obliged to choose the latter. Vagueness (Machetti 2006) of each item is the condition that allows the inventory to be finite without constraining the meanings expressed in actual use to be finite as well. Deixis, contextualization, metaphorical meaning and similar phenomena are essential to language (and to our cognition of reality through language).

This is not only true for the lexicon (the inventory of words) and morphology (the inventory of forms), but also for syntax (the inventory of constructions): in principle, very many «transitive» constructions would be needed, because the transitivity of eating is different from that of breaking, or seeing, or knowing etc. Each of these «actions» builds a different relation between the Agent and the Patient/Theme. Still, we group all those different kinds of relations under the same construction, essentially keeping them in a metaphorical relation to each other. Of course, different languages elaborate slightly different solutions: for instance, not all languages have just one transitive construction and one intransitive construction.

To give another example, the same holds for the ways to express questions and requests. The Italian construction gli chiederò se parte in macchina («I will ask him if he leaves by car») can describe either a question or a request. Japanese would employ two different constructions.

Pragmatics is – even more than morphology, syntax and semantics – the realm of vagueness. An utterance like:

(18) Rice is ready!

can mean quite different things according to different contexts and situations, in the sense that it can convey – through Gricean implicatures (Grice 1965) – completely different messages: «switch off the gas» / «grate the Parmigiano» / «why don’t you stay for lunch?» / «then the time it needed to cook was less than you expected!», etc.

A finite code can express infinite meanings provided that it is vague and has means to exploit contextualization. This is the solution which, as a speaking species, we have adopted. Now, evidently, contextualization has its costs. Processing a straight message, where the meaning of each unit is rigidly pre-determined, is probably less costly than employing complex interpretive mechanisms in order to extract the final meaning from the context. But managing too large an inventory of linguistic forms is also costly. The use of language is a compromise between these two pressures. This means that it gives us information on what the ideal balance between them is for the human brain.

It has been shown that an average college student understands about 60,000 lexical entries of her/his language, although active use is much more restricted: James Joyce’s Ulysses uses 29,899 different words, but for an average novel about 10,000 are sufficient; and in ordinary conversations only a few thousand (2,000 to 5,000) actually occur.

Similar counts are more difficult for other components of language, such as grammar. But in principle the actual extension of the lexicon and grammar of known languages tells us that the effort needed to acquire and keep such an amount of information in long-term memory is more or less in balance with the effort needed to apply contextualization processes to utterances made of not completely univocal units and constructions.

6.1. METAPHORS AND THE CENTRAL ROLE OF ANALOGY

Metaphors have a place of honour, in the interest of linguists and other cognitive scientists, among the strategies by which languages «bend» word meanings in order to obtain all the actual meanings they need without indefinitely multiplying the inventory of words⁶. It has been shown that in ordinary conversations we use an average of 6 metaphors per minute. About two of them are «creative» metaphors such as «that man is a lion» (meaning that he fights heroically) or «I go back to my personal hell» (meaning one’s workplace). The rest are «frozen» expressions, originally metaphorical in nature, that have become the standard linguistic means to express a given concept, such as «the table’s legs», «the solution of a problem», «shares rise/fall» etc. Many words are as frequent in their metaphorical sense as in the «basic» one. For instance, in Italian newspapers 60% of the occurrences of the verb vedere «to see» have a non-literal, metaphorical meaning.

Metaphors are analogies, by which a word with a certain meaning can be used to mean something similar. But even without considering metaphors, the very fact that language categorizes, i.e. that river stands for virtually all distinct but similar rivers, and tree for all trees, and the past tense can stand for all the different moments in time which are similar in that they precede the utterance time, is obviously due to our ability to manage analogy, and to our pervasively analogical attitude towards reality.

All this reveals that analogy is one of the pillars of the interaction between mind and reality. Without analogy, not only would language be much different, but in general our interaction with entities in the world would be different. Our attitude towards things is guided by analogy, in the sense that we extend understanding and behaviours that have proven apt to certain things to all «similar» things. What would be our destiny if we were not able to apply this elementary process? What would have happened to our species, if we faced each new object, individual or situation as if it were completely new and not at least in part understandable by analogy with another previously encountered object, individual or situation?

The importance of analogy for language is thus to be seen as the secondary adaptation (to language) of a way of working that the brain must have evolved primarily, and much earlier, to cope with reality in general.

7. WORKING MEMORY EFFECTS: COPY

Repetition in speech and writing (see Figure 3) is not a merely accidental failure which schoolteachers correct with blue pencils when pupils fail to notice that it has happened⁷. It is the expected consequence of the better accessibility of words (and not only words) that are currently loaded in working memory, as compared to those that are not. Once a linguistic element (a word, an inflection, a syntactic construction) has been used, using it again is more economical than using some element that has to be retrieved from long-term memory. As a consequence, speakers tend to repeat much more than would be statistically predictable. The way this priming influences actual linguistic productions can be seen as giving information about the underlying brain processes.
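The tendency can be quantified with a simple measure, sketched below: the share of tokens that repeat a word seen within the last few tokens, a rough stand-in for what is still «loaded» in working memory. The window size and the sample sentence are arbitrary assumptions:

```python
from collections import deque

# Toy measure of the "copy" effect: how many tokens repeat a word that
# occurred within the previous `window` tokens (a crude proxy for words
# still active in working memory).
def repetition_rate(tokens, window=10):
    recent, hits = deque(maxlen=window), 0
    for t in tokens:
        if t in recent:
            hits += 1
        recent.append(t)
    return hits / len(tokens)

sample = ("the solution was simple and the solution was cheap "
          "so we chose the solution").split()
print(round(repetition_rate(sample), 2))
```

Applied to real productions, such a measure could be compared against a shuffled baseline to estimate how much speakers repeat beyond chance.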


Fig. 3. Copy effect in an amateur writer's text8.

On the other hand, repetition disturbs; not only the repetition of words, but repetition in general. For example, the noise of a single falling drop of water is nothing, but its continuous repetition can lead to insanity. In any case, an interesting question whose answer must probably take the physical working of the brain directly into account is: why is the same thing (repetition) preferred on economic grounds when we produce utterances, but so disturbing when it comes to processing the linguistic productions of someone else?

8. WORKING MEMORY EFFECTS: ONE NEW IDEA AT A TIME

The capacity of working memory has been intensively investigated, also from the point of view of language. Linguists are usually familiar with Miller (1956), but often also with subsequent research, such as Cowan (2001). Probably both the organization of sentences into a certain average number of words, and that of written (and not only written) paragraphs into a certain number of sentences, depend on working memory constraints.

Recent research in linguistics has further specified the limits imposed on utterances by brain/processing constraints, leading to the so-called «One New Idea at a Time» constraint, where «New» is opposed to «Given» and means «not active in short-term memory»9. Given information is typically something recently mentioned. For example, the same sentence (coming second) is acceptable in (19) but not in (20), because encoding through an anaphoric pronoun will only be understood if its referent has been previously and recently activated in short-term memory:

(19) I saw your ex-wife yesterday. She was wearing a red skirt.
(20) I slept until noon yesterday. *She was wearing a red skirt.

When the second sentence is uttered, the idea of the addressee’s ex-wife is Given (active) and consequently recoverable in (19), while it is New (inactive) in (20).
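The contrast between (19) and (20) can be mimicked by a toy model in which mentioning a referent activates it in a small bounded store, and a pronoun resolves only against currently active referents. This is a sketch of the Given/New logic only, with assumed names and an arbitrary capacity, not a model of actual processing:

```python
from collections import deque

class Discourse:
    """Toy model: mention activates a referent; pronouns need an active antecedent."""
    def __init__(self, capacity=4):
        # Bounded store, loosely mimicking working-memory limits (capacity is arbitrary).
        self.active = deque(maxlen=capacity)

    def mention(self, referent, gender):
        self.active.append((referent, gender))

    def resolve_pronoun(self, gender):
        # Most recent matching referent wins; None means the pronoun fails, as in (20).
        for referent, g in reversed(self.active):
            if g == gender:
                return referent
        return None

d1 = Discourse()
d1.mention("your ex-wife", "fem")           # (19): "your ex-wife" becomes Given
print(d1.resolve_pronoun("fem"))            # → your ex-wife

d2 = Discourse()
d2.mention("noon", "neut")                  # (20): no feminine referent is active
print(d2.resolve_pronoun("fem"))            # → None
```

The bounded `deque` also captures, very crudely, why recently activated referents are easier to retrieve than older ones.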

Now, as regards information which is not yet active and must be activated by the utterance, according to Givón (1975, 202):

There exists a strategy of information processing in language such that the amount of new information per a certain unit of message-transaction is restricted in a fashion – say «one unit per proposition».

Chafe (1987), to whose extensive exemplification on actual spontaneous conversations we refer, more precisely points out that the constraint works with one new idea per intonation unit. Chafe (1994, 109) concludes that:

Thought, or at least language, proceeds in terms of one such activation at a time, and each activation applies to a single reference, event or state, but not to more than one. If this is a limitation on what the speaker can do, it may also be a limitation assumed for the listener as well. It may be that neither the speaker nor the listener is able to handle more than one new idea at a time.

This has been connected more explicitly with the processing limits of the brain (cf. e.g. Marois, Ivanoff 2005, 298):

It is generally accepted that our brain cannot process all the information with which it is bombarded, and that attention is the process that selects which stimuli/actions get access to these capacity-limited processes. In this view, attention can selectively act at multiple stages of information processing and operate differently in each stage according to the processing characteristics of that stage.

In particular, while the processing of Given information, already active in working memory, does not require much attention, the processing of New information, not yet active and needing to be newly activated, requires specific attention and effort. A promising line of research is the one that tries to relate the treatment of linguistically Given and New information to the difference between automatic and controlled processing (Schiffrin, Schneider 1984, 269):

Automatic processing is generally a fast, parallel, fairly effortless process that is not limited by short-term memory capacity, is not under direct subject control, and performs well-developed skilled behaviors. [...] Controlled processing is often slow, generally serial, effortful, capacity limited, subject regulated, and is used to deal with novel or inconsistent information.

Some first experimental data on brain activity during the processing of Topic/Focus structures became available a few years ago, and they already seem to confirm the hypotheses presented here (Baumann, Schumacher 2011; Benatar, Clifton 2014).

8.1. CONVERGENCE WITH DATA ON AUDITORY DISCRIMINATION ACUITY

The limits just mentioned, imposed by the brain on the pace at which new information can be introduced in discourse, receive indirect confirmation from the domain of auditory acuity. As a matter of fact, humans are able to discriminate many more sounds than those usually forming the set of phonemes employed by each language. This is true even if we take into account the safety margins imposed by the disturbed conditions under which oral communication often takes place. Evidence for this is the simple fact that where a single language doesn't discriminate, another does. Physiologically, nothing would prevent a language from having all the vocalic phonemes of Italian (i, e, E, a, O, o, u), plus all those possessed by English (adding I, {, Æ, A, Á, U etc.), plus all the nasal vowels of French, and so on and so forth, with even more abundant effects for more exotic languages. Since English speakers can discriminate E from {, this is simply possible for the human ear. German and Japanese «choose» not to exploit this possibility, but in principle they could add the distinction (and many others) to their phonological inventories. The same holds for hundreds of consonants.

Now, a language with more phonemes could afford a lower average word length, which would make it theoretically more efficient, since it would be able to convey information in less time. But evidently not in practice, as is shown by the fact that all languages content themselves with far fewer phonemes than auditory acuity would allow.

The reason may well be that the speed of information encoding is already at its maximum in natural languages, and cannot be increased. The bottleneck may reside in the processes of comprehension, memory allocation etc. on the part of the brain, which cannot be further sped up. The communication pace of natural languages is already near the maximum allowed, and there is no reason to sacrifice the surplus of safety ensured by clear phonetic differences (a high average phonetic distance between phonemes) in order to obtain shorter words, which would be of no use other than conveying information at too rapid a pace for the brain to manage.
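The efficiency claim above can be put in numbers: with P phonemes there are P^L possible words of length L, so keeping a vocabulary of V words distinct requires words of length at least log V / log P, which shrinks as the inventory grows. A quick sketch; the vocabulary size of 50,000 is an arbitrary assumption for illustration:

```python
import math

def min_word_length(vocabulary_size, phoneme_count):
    """Shortest uniform word length able to keep `vocabulary_size` words distinct."""
    return math.ceil(math.log(vocabulary_size) / math.log(phoneme_count))

V = 50_000  # assumed vocabulary size, for illustration only
for p in (10, 30, 100):
    print(f"{p} phonemes -> words of length {min_word_length(V, p)}")
```

With 10 phonemes the minimum length is 5, with 100 it drops to 3: larger inventories buy shorter words, which is precisely the theoretical gain that, as argued above, languages decline to cash in.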

9. PROCESSING EFFORT AND PRESUPPOSITIONS

An information status similar, though not identical, to Given information is that of Presupposition. Languages have specific constructions devoted to presenting content as presupposed, i.e. as already belonging to shared knowledge10. Among them, the best known are definite descriptions, but factive predicates, adverbial clauses etc. also belong here11. While Given information is defined as active in working memory at utterance time, for presupposition it is sufficient that some content is contained in the addressee's long-term memory. One similarity between the two information statuses is that when something is activated by mention it becomes Given, but it also enters the knowledge shared by the participants, and from then on it can be encoded by presupposition.

Probably, the reason why languages distinguish between asserting some content and presenting it as presupposed is economy of effort (Lombardi Vallauri, Masia 2014, 162-165). When some content is already in the knowledge of the addressee, the speaker must present it as presupposed. Otherwise, the addressee is instructed to process that piece of information as unknown, to focus her/his attention on it and build it as a new piece of knowledge in her/his mind:

(21) In this world there is a country called Italy, where once flourished the so-called Roman Empire. The capital city of that Empire was called Rome. Rome still exists. From that period, there remains in Rome an ancient monumental stadium called the «Coliseum». I have a family. Well, I recently visited the Coliseum with my family, in a period conventionally called «February».


In (21), assertive constructions tell the addressee to focus on each introduced referent and build new mental slots for things to be called Italy, Rome, the Coliseum, the speaker's family, February etc. But one instant later (s)he will realize that (s)he already has such slots: Italy, Rome, the Coliseum, the speaker's family and February are contents already known to her/him. What we usually do to avoid this waste of processing effort is use presupposing expressions, which «authorize» the addressee not to process their content in deep detail:

(22) In February we visited Rome and the Coliseum.

In (22), the existence and identifiability of Rome, the Coliseum, the speaker's family and February, all known to the addressee, are presented as presupposed by means of definite descriptions, so the addressee will not make any unnecessary effort. (S)he will pay much less attention to those contents, because they come with the instruction that they do not need to be examined thoroughly: a resumptive, «mentally opaque» recollection of the already known (the Coliseum, etc.) is enough for the purpose of understanding the part of the message which is really new («we visited it»). Full examination of already-known content would be the superfluous repetition of effort already made in the past. Linguistic presupposition is the instruction to the addressee that (s)he can devote less attention to such content, because more is not needed for full understanding of the message.

Linguistic devices for presupposing content probably arose in order to fulfil this economic purpose. But the same devices can also be used when some content is not already shared, provided the message is understandable even without full examination of that content. Thus, information can be presented as presupposed even if it does not exist in the memory of the addressee. This is what happens if the speaker's having visited the Coliseum is not previously shared, but is nevertheless encoded by a presupposing temporal clause in (23):

(23) Last February, in Rome, when I visited the Coliseum, I found a wallet with $3,000 in it.

This possibility is indispensable for efficient communication. For example, a change-of-state verb such as switch off always conveys the presupposition that something was previously on:


(24) Please, go to my room and switch off my laptop: I want to spare the battery for this evening.

If the addressee doesn’t know about it, the speaker might have said:

(24a) My computer is presently turned on. Please, go and switch it off, so the battery will be spared.

But focusing on the state of the laptop is superfluous effort: the information that it is presently turned on can be conveyed as presupposed (exactly as if the addressee already knew about it), together with the request to switch it off. In this way, the addressee only devotes to it the small amount of attention which is necessary for understanding the request. The fact that it saves processing effort makes (24) a more natural utterance than (24a).

From the two functions of presupposing we have considered, a third arises. Presenting information as not to be processed thoroughly although it is actually unknown to the addressee may be aimed at avoiding full understanding of that information. When certain content is questionable, the addressee will typically refuse to believe it. But (s)he may accept it if, paying less attention, (s)he remains partially unaware of its most doubtful components. Some information may be unacceptable when it is stated, but what is wrong with it may remain unperceived if it is processed in a less attentive way. We can illustrate this through a commercial Philips ran in Italy, whose headline presupposes that the addressees are living with «closed eyes»:

Fig. 4. «Let Philips open your eyes».


A statement such as (25) would sound offensive to anyone:

(25) You are living with closed eyes

but the same content raises no reaction if it is encoded as a presupposition, precisely because it is processed in a vague, less scrupulous way. In the ad, the idea of opening people's eyes is asserted and given strong prominence, while the presupposed idea that they were previously closed passes into the addressees' knowledge without receiving true focusing: otherwise they would probably reach full awareness of it, and reject it. As a result, the message is accepted in spite of its offensive content, which, one second later, can become part of the beliefs about the world held by the addressees. The same happens with any questionable content. This is why commercials, political propaganda and persuasive communication in general very frequently use this strategy (Lombardi Vallauri 1995; 2009; Lombardi Vallauri, Masia 2014). Here is another example:

Fig. 5. «...and I felt grown up with my first Alfa».

The adjective first implies, by way of presupposition, that the person in the ad has owned another Alfa after the first one. This content is particularly persuasive because it implies, as its most probable inference, that whoever buys an Alfa is usually so satisfied that (s)he goes on buying more Alfas. Presupposing this is much more effective than directly asserting it. In Figure 5, the addressee is invited to focus on the asserted idea that someone felt grown up with the car of his youth, and not to carefully consider the presupposed content. This avoids critical challenging of such questionable content.

To sum up: presupposition instructs the addressee to pay less attention to certain content. This may have the following functions, logically and perhaps diachronically derived from each other:

– saving the addressee superfluous effort, because that content is already known to her/him;

– saving the addressee superfluous effort, because that content can be processed with less attention without any damage to the comprehension of the message;

– preventing the addressee from becoming completely aware of (all the parts of) that content, lest (s)he challenge and reject it.

If such things continuously happen in linguistic productions, this means that the brain is very sensitive to linguistic devices for the modulation of attention. Presupposition is among the linguistic phenomena still least studied with the methods of neuroscience. As pointed out by Euro-XPrag, the European Science Foundation program for Experimental Pragmatics: «Presupposition is one of the central phenomena of pragmatics, but except for some work on definite descriptions, there has been no formal experimental work on presuppositions. This research shows exciting promise» (www.euro-xprag.org/presentation/). However, recent psycholinguistic work on attention in processing presuppositions (Tiemann et al. 2011; Chemla, Bott 2013) seems to confirm our picture. The path is wide open for new research.

10. ON THE DEVELOPMENT OF LANGUAGE IN THE BRAIN: KINDS OF CONCEPTS

Prandi (2013) points out that linguistic meanings can be divided into two fundamental types: Endocentric and Exocentric Concepts. Of course there are intermediate situations, but they may be regarded as blending these two defining properties in different proportions. As Prandi puts it, endocentric concepts are «rooted in the system of correlations of the lexicon»: as already noticed by Saussure, the limits of each concept are set by the limits of those that are contiguous to it. The French verbs peler, éplucher, écosser, décortiquer all mean «to peel»: respectively generic, for potatoes, for legumes and for chestnuts. The meanings of these verbs are quite arbitrarily fixed by the language, exactly as fleuve and rivière subdivide the meaning of Italian fiume, or German essen and fressen both correspond to «eat» (for humans and animals respectively); or as Italian has only one verb andare «to go» where German has gehen and fahren, respectively for walking and driving. If éplucher disappeared from use, the meanings of the other verbs would change, extending their use to the previous domain of éplucher. Without fressen, essen would acquire the same semantic extension as English eat or Italian mangiare.

Languages contain very many words which express endocentric concepts. The meanings of spleen or common law, Spanish desengaño, German Stimmung, etc. are not rooted in reality as such; rather, they are consequences of a choice on the part of the language to divide reality into certain subsections and not others.

On the other hand, exocentric concepts are not at the language's mercy. They are rooted in reality, which imposes them by means of some strong break in its structure. As Prandi puts it, they are «anchored to the structure of some experience independent from language». For instance, rose, tulip and periwinkle have exocentric meanings. If the name of another flower dropped out of use, periwinkle would go on meaning exactly the same.

Now, Prandi’s suggestion is that exocentric concepts, since they are directly requested by experience, may have developed earlier, at the origins of language. Endocentric concepts, on the contrary, could only arise once language had acquired a certain degree of complexity.

It may be observed that, in this perspective, exocentric concepts can be regarded as compatible with earlier stages of development not only of language, but also of the brain. In order to understand and express them, our ancestors’ brains only needed to «mirror» the elements of reality as they imposed themselves on their experience. Endocentric concepts, instead, had to wait for the brain to be able to manage a more complex cognitive system, where reality was not only perceived, but «re-built» creatively as a highly complex and structured whole.


11. INDIRECT MEANING AND THE THEORY OF MIND: IMPLICATURES AND NON-MECHANICAL WORKING

That human communication as we know it involves attributing intentions to the interlocutor is a well known fact (Grice 1975; Sperber, Wilson 1986). The hypothesis that we constantly build a Theory of Mind by which we interpret other people’s acts and utterances has already been quite accurately tested, with experimental methods we cannot summarize here (Happé 1993; Fletcher et al. 1995; Williams, Happé 2010; Csibra 2010). This has been done both on «normal» individuals and on persons exhibiting a specific deficit in this domain, such as, typically, subjects with autism.

Now, when an utterance conveys some implicature, the Theory of Mind which is necessary for the indirect meaning to be transmitted can be described as follows. In (26), Ted must think that Mia knows about John’s possessing a red bike and John’s girlfriend’s working at the florist shop (we will call this info: X), otherwise it would make no sense for him to answer her question like that:

(26) Mia: Is John back from Paris?
Ted: There’s a red bike in front of the florist shop.

Additionally, Mia must know that Ted knows the same about John’s possessing a red bike etc., otherwise she would not be able to give his allusion about the bike the meaning that John is back. And she must think that he thinks that she knows as well, otherwise he would not consider his answer sufficient for her. Ted must obviously think that Mia thinks that he thinks that she knows, otherwise she would not be allowed to interpret his answer as exploiting X; and so on and so forth. It can be shown that we face a regressio in infinitum: on a strictly logical level, the chain of crossed beliefs required by a simple implicature never ends.

Communication is possible only if the required beliefs are checked. Without some kind of control over such beliefs, the intended meaning cannot be conveyed. For instance, if Mia thinks that Ted thinks that not John but his love rival owns a red bike and is presently with John’s girlfriend in the florist shop, she will interpret the answer in (26) as meaning exactly the opposite, namely that John is definitely not back yet.

Now, how can the examination of an infinite sequence of beliefs come to an end? If a logical machine had to do it, no matter how fast the machine, the process would never yield a result. However, humans continuously (and quite quickly) process utterances containing implicatures. This can be regarded as evidence that the processes involved in the interpretation of linguistic utterances are not strictly logical or mechanical in nature, but rather synthetic and, in the commonest and least technical sense, pragmatic.
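The regress just described can be made tangible: each checking step wraps the previous belief in one more layer of «X thinks that...», so a mechanical verifier enumerating the chain never halts; it can only be cut short by a non-logical decision to stop. A toy generator of the chain, with names and phrasing that are purely illustrative:

```python
from itertools import islice

def belief_chain(fact="Mia knows X", agents=("Ted", "Mia")):
    """Yield the endless chain of crossed beliefs behind the implicature in (26)."""
    statement = fact
    i = 0
    while True:
        yield statement
        # Alternate perspectives: Ted about Mia, Mia about Ted, and so on forever.
        statement = f"{agents[i % 2]} thinks that {statement}"
        i += 1

# A mechanical checker could only ever inspect a finite prefix of the chain:
for s in islice(belief_chain(), 4):
    print(s)
```

The generator is infinite by construction; `islice` is what imposes the arbitrary cut-off that, in actual communication, the brain apparently performs without running the enumeration at all.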

12. THE POOR RELEVANCE OF SYNTAX, AND NON-MECHANICAL WORKING

Contrary to what is believed in generative linguistics, syntax is very useful but by no means indispensable for linguistic communication. This is evident from the facts. For example, Dan Slobin pointed out that in his monitoring of adult speech to children between the ages of two and five (in Turkish and English) «there were almost no instances of an adult utterance which could possibly be misinterpreted due to the child’s lacking basic syntactic knowledge, such as the roles of word order and inflections. That is, the overwhelming majority of utterances were clearly interpretable in context, requiring only knowledge of word meanings and the normal relations between actors, actions, and objects in the world» (Slobin 1975, 30. Italics ours).

In an innatist perspective, observance of those syntactic rules that are considered universal is due to the presence of a grammar in the brain. But it is easy to verify that grammar constrains humans in a different way than biologically determined behaviours do. Utterances like the following, if produced in context, are understandable. And, even more significantly, they can be uttered (and written):

(27) essen ich nicht, Danke
with car your daughter must gone
tempo è eu partir
demain dormir Jacques avec
sempre mangiare lui è

Although we recognize them as ungrammatical, our brains have no difficulty in producing them, nor in attributing possible meanings to them. The same holds for utterances like (4) above (along with (7), (10) and (13)), which Chomsky presents as violations of one of the pillars of UG, the Structure Dependency Principle.
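Slobin’s observation that word meanings and world knowledge often suffice can be caricatured in code: a toy «interpreter» that assigns agent and action roles from a small lexicon while ignoring word order and inflection entirely. Everything here, the lexicon and the role heuristic alike, is an illustrative assumption, not a claim about actual processing:

```python
# Hypothetical toy lexicon: animate nouns and action words (assumptions for illustration).
ANIMATE = {"ich", "daughter", "Jacques", "lui", "eu"}
ACTIONS = {"essen", "gone", "partir", "dormir", "mangiare"}

def interpret(words):
    """Guess roles from word meanings alone, ignoring word order and inflection."""
    agent = next((w for w in words if w in ANIMATE), None)
    action = next((w for w in words if w in ACTIONS), None)
    other = [w for w in words if w != agent and w != action]
    return {"agent": agent, "action": action, "other": other}

# The ungrammatical strings in (27) still receive a plausible interpretation:
print(interpret("essen ich nicht".split()))
print(interpret("demain dormir Jacques avec".split()))
```

Crude as it is, the sketch mirrors the point of (27): role assignment can succeed on lexical knowledge alone, with syntax adding reliability rather than being a precondition for interpretation.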


Differently from obedience to UG, behaviours guided by innate biological structures happen unbreakably. Our nervous system cannot avoid causing the secretion of adrenaline in case of danger, or inverting the images on the retina. Grammar rules, instead, resemble criminal law or the rules of soccer: we usually obey them in order for things to work better, but we are perfectly able to violate them. The reason why we comply with such rules is that we prefer to remain members of a community where they are observed: by violating them we would more or less seriously damage our ability to peacefully coexist in a country, to play soccer, or to communicate by means of linguistic messages.

In other words, grammar rules constrain us as interacting members of a collectivity, not as biological individuals. If (ever) grammar is unbreakable, it is so only for the community where it is adopted. Linguistic innatism confuses the individual and the collective levels.

These facts, pointed out e.g. in Lombardi Vallauri (2004, 2008), meet the results of a thread of neurological research which started with studies such as those by Kolk and colleagues (2003) or Kuperberg and colleagues (2003), showing that utterances containing semantic anomalies (e.g. depicting implausible events) trigger ERP responses (namely, the P600) similar to those caused by syntactically disturbed utterances (e.g. with subject-verb disagreement). Such results can be interpreted as showing that there is no specifically syntactic module constraining utterance interpretation in accordance with mechanical, unbreakable rules, different in nature from the patterns of semantic/contextual interpretation. Rather, the violation of syntactic rules seems to cause the same kind of understanding readjustments as semantic anomalies do.

13. CONCLUSION

Many more facts about language could be discussed whose nature tells us something about the brain. The aim of a paper of this length was only to point out or recall some of them, in order to draw a picture of the kinds of conceptual paths that can be followed from linguistic phenomena to brain predispositions.

Edoardo Lombardi Vallauri
Università di Roma Tre
Dipartimento di Lingue, Letterature e Culture
[email protected]


NOTES

1 For an extensive survey of such constraints, cf. Lombardi Vallauri, Simone (2010; 2011).

2 Cf. Tomasello (2003) about entrenchment of linguistic structures.

3 This is George A. Miller’s caveat in his Introduction to Zipf (1935, vii).

4 Behaghel (1932, 4): «Das oberste Gesetz ist dieses, daß das geistig eng Zusammengehörige auch eng zusammengestellt wird» («The supreme law is this: what belongs closely together mentally is also placed closely together»).

5 Contextualization is meant here in its broadest sense, including compositional meaning, i.e. the fact that word meaning is sensitive not only to extralinguistic contexts, but also to its linguistic environment. Board can mean very different things in utterances like She pinned the notice on the board and She became a member of the board.

6 Cf. Lakoff, Johnson (1980); for experimental work on the brain correlates of metaphors, Bambini and colleagues (2011), Bambini, Bara (2012), Bambini, Resta (2012).

7 Raffaele Simone (1990) has called «Copy effect» the tendency to repeat one or more linguistic elements at short distance after having uttered them.

8 Text by an amateur novelist. Some repetitions of words, morphological patterns or even just sound sequences are highlighted.

9 We cannot summarize here what linguists mean by Given and New information, nor the relation of these concepts with those of Topic and Focus, or Theme and Rheme. For a survey of what is usually called Information Structure, cf. e.g. Prince (1981), Lambrecht (1994), Lombardi Vallauri (2009).

10 Cf. Frege (1892), Strawson (1964) for the most influential definition of the phenomenon, Garner (1971) for a survey of different definitions, Ducrot (1972) for an exhaustive description of presuppositions in actual use.

11 Cf. Lombardi Vallauri (2009) for a survey of linguistic constructions effecting presupposition.

ReFeReNceS

Akmajian A., Demers R.A., Farmer A.K., Harnish R.M. (1984)(19791), Linguis-tics. An Introduction to Language and Communication, Cambridge, Mass., Cambridge University Press.

Bambini V., Bara B.G. (2012), Neuropragmatics, in J. Ola Östman, J. Verschueren, (eds.), Handbook of Pragmatics, Amsterdam-Philadelphia, Benjamins, 1-21.

Bambini V., Gentili C., Ricciardi E., Bertinetto P.M., Pietrini P. (2011), Decom-posing metaphor processing at the cognitive and neural level through func-tionalmagnetic resonance imaging, in «Brain Research Bulletin», 86, 203-216.

Bambini V., Resta D. (2012), Metaphor and experimental pragmatics: when theory meets empirical investigation, in «Humana.Mente Journal of Philosophical Studies», 23, 37-60.

edoaRdo loMbaRdI VallauRI

158

Baronchelli, A., Loreto V., Puglisi A. (in press), Cognitive biases and language universals, arXiv:1310.7782v1 [physics.soc-ph].

Baumann S., Schumacher P. (2011), (De-)Accentuation and the Processing of In-formation Status: Evidence from Event-Related Brain Potentials, in «Lan-guage and Speech», 55(3), 361-381.

Behaghel O. (1932), Deutsche Syntax, vol. IV, Heidelberg, Carl Winter.Benatar A., Clifton, Ch. Jr. (2014), Newness, givenness and discourse updating:

Evidence from eye movements, in «Journal of Memory and Language», 71, 1-16.

Bickerton D. (1984), The language bio-program hypothesis, in «Behavioral and Brain Sciences», 7, 173-212.

Bickerton D. (2003), Symbol and structure: A comprehensive framework for lan-guage evolution, in M. H. Christiansen, S. Kirby (eds.), Language Evolu-tion, Oxford University Press, 77-93.

Bybee, J. (2007), Frequency of Use and the Organization of Language, Oxford, Oxford University Press.

Chafe W. (1987), Cognitive Constraints on Information Flow, in R. Tomlin (ed.), Coherence and Grounding in Discourse, Amsterdam-Philadelphia, Benja-mins, 21-51.

Chafe W. (1992), Information Flow in Speaking and Writing, in P. Downing, S.D. Lima, M. Noonan (eds.), The Linguistics of Literacy, Amsterdam-Philadel-phia, Benjamins, 17-29.

Chafe W. (1994), Discourse, Consciousness, and Time. The Flow and Displacement of Conscious Experience in Speaking and Writing, Chicago, University of Chicago Press.

Chemla E., Bott L. (2013), Processing presuppositions: Dynamic semantics vs prag-matic enrichment, in «Language and Cognitive Processes», 28(3), 241-260.

Chomsky N. (1965), Aspects of the Theory of Syntax, Cambridge (Mass.), MIT Press.

Chomsky N. (1980), Rules and Representations, Oxford, Blackwell. Chomsky N. (1986), Knowledge of Language: its Nature, Origin and Use, New

York, Praeger.Chomsky N. (1988), Language and Problems of Knowledge, Cambridge (Mass.),

MIT Press.Christiansen M.H., Chater N. (2008), Language as shaped by the brain, in «Behav-

ioral and Brain Sciences», 31, 489-558.Cook V.J., Newson M. (1996), Chomsky’s Universal Grammar, Oxford, Blackwell

(2nd edition). Corballis M. (2011), The Recursive Mind: The Origins of Human Language,

Thought, and Civilization, Princeton, Princeton University Press.Cowan N. (2001), The magical number 4 in short-term memory: a reconsideration

of mental storage capacity, in «Behavioural and Brain Sciences», 24(1), 87-114.

FRoM the kNowledge oF laNguage to the kNowledge oF the bRaIN

159

Crain S., Nakayama M. (1987), Structure dependence in children’s language, in «Language», 63, 522-543.

Csibra G. (2010), Recognizing communicative intentions in infancy, in «Mind & Language», 25,2, 141-168.

Deacon T.W. (2003), Universal grammar and semiotic constraints, in M. H. Chris-tiansen, S. Kirby (eds.), Language evolution, Oxford University Press, 111-139.

Ducrot O. (1972), Dire et ne pas dire, Paris, Hermann.Evans N., Levinson S.C. (2009), The myth of language universals: Language di-

versity and its importance for cognitive science, in «Behavioral and Brain Sciences», 32(5), 429-492.

Fletcher P.C., Happé F., Frith U., Baker S.C., Dolan R.J., Frackowiak R.S.J., Frith C.D. (1995), Other minds in the brain: a functional imaging study in story comprehension, in «Cognition», 57, 109-128.

Frege G. (1892), Über Sinn und Bedeutung, in «Zeitschrift für Philosophie und philosophische Kritik», 100, 25-50.

Gallese V., Lakoff G. (2005), The brain’s concepts: The role of the sensory-motor system in reason and language, in «Cognitive Neuropsychology», 22, 455-479.

Garner R. (1971), Presupposition in Philosophy and Linguistics, in Ch.J. Fillmore Charles J., T.D. Langendoen (eds.), Studies in Linguistic Semantics, New York, Holt, Rinehart and Winston, 22-42.

Givón T. (1975), Focus and the Scope of Assertion: Some Bantu Evidence, in «Stud-ies in African Linguistics», 6, 185-205.

Grice H.P. (1975), Logic and Conversation, in P. Cole, J.L. Morgan (eds.), Syntax and Semantics, vol. 3, Speech Acts, New York, Academic Press, 41-58.

Happé F. (1993), Communicative competence and theory of mind in autism: A test of relevance theory, in «Cognition», 48, 101-119.

Hurford J.R. (2003), The language mosaic and its evolution, in M.H. Christiansen, S. Kirby (eds.), Language evolution, Oxford, Oxford University Press, 38-57.

Jackendoff R. (2002), Foundations of language: Brain, meaning, grammar, evolution, Oxford, Oxford University Press.

Jenkins L. (2000), Biolinguistics: Exploring the biology of language, Cambridge, Cambridge University Press.

Kolk H.H.J., Chwilla D.J., van Herten M., Oor P.J. (2003), Structure and limited capacity in verbal working memory: a study with event-related potentials, in «Brain and Language», 85, 1-36.

Kuperberg G.R., Sitnikova T., Caplan D., Holcomb Ph.J. (2003), Electrophysiological distinctions in processing conceptual relationships within simple sentences, in «Cognitive Brain Research», 17, 117-129.

Lakoff G., Johnson M. (1980), Metaphors We Live By, Chicago, University of Chicago Press.

Lambrecht K. (1994), Information Structure and Sentence Form, Cambridge, Cambridge University Press.


Lombardi Vallauri E. (1995), Tratti linguistici della persuasione in pubblicità, in «Lingua Nostra», 2/3, 41-51.

Lombardi Vallauri E. (2004), The relation between mind and language: The Innateness Hypothesis and the Poverty of the Stimulus, in «The Linguistic Review», 21, 345-387.

Lombardi Vallauri E. (2008), Alguns argumentos contra o inatismo lingüistico, in «Revista de estudos da linguagem», 16(1), 9-47.

Lombardi Vallauri E. (2009), La struttura informativa. Forma e funzione negli enunciati linguistici, Roma, Carocci.

Lombardi Vallauri E. (2012), In che modo il linguaggio non è nel cervello, in V. Bambini, I. Ricci, P.M. Bertinetto (eds.), Linguaggio e cervello – Semantica / Language and the brain – Semantics. Atti del XLII Congresso Internazionale di Studi della Società di Linguistica Italiana (Pisa, SNS, 2008), Roma, Bulzoni, Vol. 2, I.D.8.

Lombardi Vallauri E., Masia V. (2014), Implicitness Impact: measuring texts, in «Journal of Pragmatics», 61, 161-184.

Lombardi Vallauri E., Simone R. (2010), Natural constraints on language: methodological remarks, and the physical determinism, in «Cahiers Ferdinand de Saussure», 63, 205-224.

Lombardi Vallauri E., Simone R. (2011), Natural constraints on language: the ergonomics of the software, in «Cahiers Ferdinand de Saussure», 64, 119-141.

Machetti S. (2006), Uscire dal vago. Analisi linguistica della vaghezza nel linguaggio, Roma-Bari, Laterza.

Marois R., Ivanoff J. (2005), Capacity limits of information processing in the brain, in «Trends in Cognitive Sciences», 9(6), 296-305.

Miller G.A. (1956), The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information, in «The Psychological Review», 63, 81-97.

Piattelli Palmarini M. (1979) (ed.), Théories du langage, théories de l’apprentissage: le débat entre Jean Piaget et Noam Chomsky, Paris, Editions du Seuil.

Pinker S. (1994), The Language Instinct: How the Mind Creates Language, New York, Morrow.

Pinker S. (2003), Language as an adaptation to the cognitive niche, in M.H. Christiansen, S. Kirby (eds.), Language evolution, Oxford, Oxford University Press, 16-37.

Pinker S., Bloom P. (1990), Natural language and natural selection, in «Behavioral and Brain Sciences», 13, 707-727.

Pinker S., Jackendoff R. (2005), The faculty of language: What’s special about it?, in «Cognition», 95(2), 201-236.

Prandi M. (2013), Le parole tra forma e sostanza, in N. Grandi (ed.), Nuovi dialoghi sulle lingue e il linguaggio, Bologna, Pàtron, 61-67.

Prince E.F. (1981), Toward a Typology of Given-New Information, in P. Cole (ed.), Radical Pragmatics, New York, Academic Press, 223-255.


Pullum G.K., Scholz B.C. (2002), Empirical assessment of stimulus poverty arguments, in «The Linguistic Review», 19(1/2), 9-50.

Putnam H. (1967), The «innateness hypothesis» and explanatory models in linguistics, in «Synthese», 17, 12-22.

Sampson G.R. (2005), The «Language Instinct» Debate, London-New York, Continuum.

Shiffrin R.M., Schneider W. (1984), Theoretical Note: Automatic and Controlled Processing Revisited, in «Psychological Review», 91(2), 269-276.

Simone R. (1990), Effetto copia e effetto quasi-copia, in «AION – Annali dell’Istituto Universitario orientale di Napoli», 12, 69-84.

Slobin D.I. (1975), On the Nature of Talk to Children, in E.H. Lenneberg, E. Lenneberg (eds.), Foundations of Language Development, New York, Academic Press, 283-297.

Slobin D.I., Bever Th.G. (1982), Children use canonical sentence schemas: A cross-linguistic study of word order and inflections, in «Cognition», 12, 229-265.

Solé R.V., Corominas-Murtra B., Fortuny J. (2011), Emergence of Zipf’s law in the evolution of communication, in «Physical Review E», 83, 036115.

Sperber D., Wilson D. (1986), Relevance: Communication and Cognition, Oxford, Blackwell.

Strawson P.F. (1964), Identifying Reference and Truth-Values, in «Theoria», 30 (2), 96-118.

Tiemann S., Schmid M., Bade N., Rolke B., Hertrich I., Ackermann H., Knapp J., Beck S. (2011), Psycholinguistic Evidence for Presuppositions: On-line and Off-line Data, in I. Reich et al. (eds.), Proceedings of Sinn & Bedeutung 15, Saarbrücken, Universaar – Saarland University Press, 581-595.

Tomasello M. (2003), Constructing a Language: a Usage-based Theory of Language Acquisition, Cambridge (Mass.), Harvard University Press.

Williams D., Happé F. (2010), Representing intentions in self and other: studies of autism and typical development, in «Developmental Science», 13(2), 307-319.

Zipf G.K. (1935), The Psycho-Biology of Language, New York, Harcourt & Brace.