Deictic word and gesture production: Their interaction

Behavioural Brain Research 203 (2009) 200–206
doi:10.1016/j.bbr.2009.05.003

Research report

Sergio Chieffi a,b,∗, Claudio Secchi a, Maurizio Gentilucci a

a Department of Neuroscience, Section of Physiology, University of Parma, Via Volturno 39, 43100 Parma, Italy
b Department of Experimental Medicine, Section of Physiology, Second University of Naples, Via Costantinopoli 16, 80138 Napoli, Italy

∗ Corresponding author at: Department of Experimental Medicine, Section of Physiology, Second University of Naples, Via Costantinopoli 16, 80138 Napoli, Italy. Tel.: +39 81 5665820; fax: +39 81 5667500. E-mail address: sergio.chieffi@unina2.it (S. Chieffi).

Article history: Received 22 December 2008; received in revised form 27 April 2009; accepted 3 May 2009; available online 9 May 2009.

Keywords: Deictic gesture; Deictic word; Gesture production; Word production; Gesture kinematics; Voice spectra

Abstract

We examined whether and how deictic gestures and words influence each other when the content of the gesture was congruent or incongruent with that of the simultaneously produced word. Two experiments were carried out. In Experiment 1, the participants read aloud the deictic word ‘QUA’ (‘here’) or ‘LÀ’ (‘there’), printed on a token placed near to or far from their body. Simultaneously, they pointed towards their own body, when the token was placed near, or at a remote position, when the token was placed far. In this way, participants read ‘QUA’ (‘here’) and pointed towards themselves (congruent condition) or a remote position (incongruent condition); or they read ‘LÀ’ (‘there’) and pointed towards a remote position (congruent condition) or themselves (incongruent condition). In a control condition, in which a string of ‘X’ letters was printed on the token, the participants were silent and only pointed towards themselves (token placed near) or a remote position (token placed far). In Experiment 2, the participants read aloud the deictic word placed in the near or far position without gesturing. The results showed that the congruence/incongruence between the content of the deictic word and that of the gesture affected gesture kinematics and voice spectra: the movement was faster in the congruent than in the control and incongruent conditions, and slower in the incongruent than in the control condition. As concerns voice spectra, formant 2 (F2) decreased in the incongruent conditions. The results suggest the existence of a bidirectional interaction between the speech and gesture production systems.

© 2009 Elsevier B.V. All rights reserved.



1. Experiment 1

1.1. Introduction

Discourse production, in many cases, involves not only the production of speech sounds but also the performance of hand/arm gestures. Traditionally, gestures have been assumed to share a computational stage with speech and to have mainly communicative and informative functions [34–38,49–51]. Several lines of evidence support this view. First, gestures and speech show parallel semantic and pragmatic functions [11,49]. This is the case of referential gestures, which bear a formal relation to the semantic content of the concurrent linguistic item; that content may be concrete objects and events (iconic gestures) or abstract concepts (metaphoric gestures) [53]. Other gestures, termed either beats [53] or batons [17], demonstrate parallels of pragmatic function: they emphasize discourse-oriented functions in which the importance of a linguistic item arises not from its own propositional content but from its relation to other linguistic items [48].



Further, in children’s development, gestures and speech seem to develop together through the same stages of increasing symbolization [3,25,49,64]; and gestures and speech may be simultaneously affected by neurological damage [7,8,12,15]. Along similar lines, McNeill and Duncan [52] proposed that gestures and speech express the same underlying idea unit, though not necessarily identical aspects of it. The confluence of speech and gesture suggests that the speaker is thinking in terms of a combination of imagery and linguistic categorial content [52]. The growth point is the name that McNeill and Duncan [52] give to the minimal psychological unit combining imagery and linguistic categorial content.

However, other authors have suggested that gestures also have a role in the speech production process. The majority of researchers who support this view have followed the model of speech production proposed by Levelt [46,47], which divides this process into three broad stages: conceptualization, formulation and articulation. Some authors placed the influence of gesture on speech at the conceptualization stage, i.e. gestures would have internal functions, helping speakers to organize their own thinking [1,28,39]. Such a view of gesture is referred to as the Information Packing Hypothesis [39]. According to this hypothesis, gestures help speakers to organize and translate spatio-motoric knowledge into linguistic output. Evidence for the Information Packing Hypothesis comes from studies showing a greater production of


gestures, both in children [1] and in adults [28,29], when the task requires a more complex conceptualization. Other authors placed the influence of gesture on speech at the formulation stage, i.e. gestures help speakers to access specific items in their mental lexicon [6,9,27,43,55,59]. Such a view of gesture is referred to as the Lexical Access Hypothesis. Two types of evidence support this view. First, gesture rates increase when lexical access is difficult [56] or when names have not been rehearsed [9]. Second, prohibiting gestures makes speech less fluent [59].

In investigating the processes involved in gesture and speech production, several models have been proposed that follow the account of Levelt [46,47]. The main difference between the proposed models lies in the level of computation at which the production of gestures and speech occurs. Krauss’s model [43,44] assumes that gestures are generated from non-propositional (spatio-dynamic) representations in working memory. A spatial/dynamic feature selector transforms these representations into abstract specifications that are then translated into a motor program for gesture production. Simultaneously, the conceptualizer retrieves propositional representations and elaborates a preverbal message that is transformed by the formulator into overt speech. De Ruiter’s model [16] assumes that the conceptualizer has access, in working memory, both to imagistic (or spatio-temporal) information, for the generation of gestures, and to propositional information, for the generation of preverbal messages. The output of the conceptualizer is then, besides a preverbal message, a representation called the sketch, which contains the information that will be sent to the gesture planner. Finally, Kita and Özyürek [40] proposed the Interface Model for speech and gesture production. They suggested [40,41,58] that the message generation process for speech (the Conceptualizer in Levelt [46]) interacts online with the process that determines the content of gestures (the ‘Action Generator’). The Action Generator takes into account both the information in spatio-motoric working memory and the message representation for speech in the Conceptualizer. Unlike the preceding models, according to which gestures are generated before and without access to linguistic formulation processes, in Kita and Özyürek’s model [40] speech and gesture production processes interact online at the conceptual level.

Traditionally, the relationship between gesture and word production has been studied through the observation of spontaneous activity during conversations or narratives. Levelt et al. [48] were among the first to propose an experimental approach to this question. They asked participants to indicate which of an array of referent lights was momentarily illuminated. There were four LEDs, two in each field: one LED was near to and the other far from the midline. The participants pointed to the light (deictic gesture) and/or used a deictic expression, “this light” to indicate the near LED or “that light” to indicate the far LED. By analyzing the timing of gesture and speech onset, Levelt et al. [48] found that their synchronization was largely established in the planning phase and that, once the pointing movement was initiated, gesture and speech operated in an almost modular fashion. Subsequently, Gentilucci and co-workers [2,5] required participants to simultaneously produce communicative words and symbolic gestures of the same [5] or different [2] meaning, e.g. they pronounced “ciao” while performing the “ciao” gesture, or pronounced “no” while performing the “ciao” gesture. The authors found that voice parameters were amplified, whereas arm kinematics was slowed down, as compared to the sole production of either the words or the gestures, but only in the condition of congruence between gesture and word [2,5]. They proposed that spoken words and symbolic gestures are coded as a single signal by a unique communication system [5,19,22–24]. The system governing the interactions between gesture and word is probably located in Broca’s area, as shown by a repetitive Transcranial


Magnetic Stimulation (rTMS) study [20]. Krahmer and Swerts [42] examined whether the occurrence of a beat on a particular word had a noticeable impact on speech itself. In their study, speakers were instructed to produce a target sentence containing two proper names that might be marked for prominence with a pitch accent and/or with a beat gesture. Krahmer and Swerts [42] found that beat gestures have a significant effect on the spoken realization of the target words: when a speaker produced a beat, the word uttered while making the beat was produced with relatively more spoken emphasis, irrespective of the position of the acoustic accent. Krahmer and Swerts [42] suggested that, at least for manual beat gestures, there is a very close connection between speech and gesture.

In the present experiment, we examined whether and how gesture and speech influence each other when the content of the deictic gesture was congruent or incongruent with that of the simultaneously produced deictic word. Typically, deictic gestures are pointing movements with the index finger extended and the remaining fingers closed. They are used to indicate an object or a person, a direction, a location, or more abstract referents such as “past time”. The “meaning” of a deictic gesture is the act of indicating the things pointed to [43].

Declarative and request pointing appear in infants at the age of approximately 10 months and are frequently accompanied by vocalizations [65]. Bernardis and co-workers [4] found that in infants the voice spectra of vocalizations produced during request pointing are influenced by the dimensions of the object targeted by the pointing. In other words, gesture and vocalization specify the location and properties of the object, showing a strict interaction between gesture and the emergent lexicon. On the basis of these data [4], we were interested in verifying the existence of an interaction between “simpler” signals (i.e. deictic words and gestures), which we hypothesized to occur at the level of signal parameterisation as well as at that of temporal coordination [48]. In fact, previous studies (see for example [2,5,42]) analysed gestures that are involved in more complex functions, whose interaction with speech is necessary because they usually add information to spoken language.

In the present experiment, participants read aloud a deictic word, ‘QUA’ (‘here’) or ‘LÀ’ (‘there’), printed on a token that could be placed in two positions, near to or far from their own body. Simultaneously, they performed a deictic (or pointing) gesture directed at their own body, when the token was placed near, or at a remote position, when the token was placed far. In this way, the participants read aloud ‘QUA’ (‘here’) and pointed towards themselves (congruent condition) or a remote position (incongruent condition). Similarly, they read ‘LÀ’ (‘there’) and pointed towards a remote position (congruent condition) or themselves (incongruent condition). There was also a further condition in which the string ‘XXX’ or ‘XX’ was printed on the token. In this case, the participants were silent and only pointed towards themselves (token placed near) or a remote position (token placed far). This was a control condition, used to examine the performance of the participants when the sole gesture was produced, in comparison to when both gesture and word were produced.

We examined whether the congruence/incongruence between the content of the word and that (i.e. the direction) of the to-be-performed gesture influenced verbal production, gestural production, or both. Our predictions were as follows: (a) the presence of an effect on both verbal and gestural production would support the hypothesis of a bidirectional interaction between the two production systems; (b) the presence of an effect on verbal or gestural production alone would support the hypothesis of a unidirectional interaction between the two systems; (c) the absence of any effect on either verbal or gestural production would support the hypothesis of independence between the two systems.




1.2. Materials and methods

1.2.1. Participants
Twelve right-handed (according to the Edinburgh Inventory [57]) women participated in the study (ages 19–32 years). All participants were naïve as to the purpose of the study. The study was approved by the Ethics Committee of the Medical Faculty of the University of Parma.

1.2.2. Apparatus
Participants sat in front of a black table, in a dark and soundproof room. They fixated a LED (fixation point, FP) placed 38 cm from the table edge. Each participant placed her index finger on a switch located on the table plane (starting position, SP). SP was 20 cm distant from the table edge. A circular white token, 5.5 cm in diameter, was placed either in a ‘near’ (between the participant’s trunk and SP) or a ‘far’ (beyond SP) position. The near position was 8 cm distant from the table edge (and 12 cm distant from SP); the far position was 68 cm distant from the table edge (and 48 cm distant from SP). FP, SP and both token positions lay along the participant’s midsagittal axis.

A microphone (Studio Electret Microphone, 20–20,000 Hz, 500 Ω, 5 mV/Pa/1 kHz) was placed on the table by means of a support. The centre of the support was 20 cm distant from the table edge and 18 cm distant from the participant’s sagittal axis, on the left.

Either a deictic word (‘QUA’, i.e. ‘here’, or ‘LÀ’, i.e. ‘there’) or a string of X letters (‘XXX’ or ‘XX’) was printed in black on the token. The height of both the words and the X-strings was 1.5 cm. Further, ‘QUA’ and ‘XXX’ were 4.5 cm wide, ‘LÀ’ and ‘XX’ 3.5 cm wide.

1.2.3. Procedure
The trial started with the illumination of the room, which was commanded by a PC. As soon as the room was illuminated, the participants were required to read aloud the deictic word and to simultaneously perform a pointing movement directed either towards themselves (the forearm was flexed), if the token was near, or at a remote position (both the arm and forearm were extended), if the token was placed far. If a string of X letters was printed on the token, the participants only performed a pointing movement directed either towards themselves (token placed near) or at a remote position (token placed far). The participants were required to move at maximal velocity.

There were six experimental conditions: token with the word ‘QUA’ (‘here’) placed in the (1) near or (2) far position; token with the word ‘LÀ’ (‘there’) placed in the (3) near or (4) far position; token with the X-string (‘XXX’ or ‘XX’) placed in the (5) near or (6) far position. For each experimental condition there were eight trials. In total, 48 trials were run in pseudo-random order.

1.2.4. Movement and voice recording
Pointing movements of the index finger were recorded using the three-dimensional (3D) optoelectronic ELITE system (B.T.S., Milan, Italy). It consists of two TV cameras detecting infrared reflecting markers at a sampling rate of 50 Hz. Movement reconstruction in 3D coordinates and computation of the kinematic parameters are described in a previous work [21].

One marker was placed on the participant’s index finger and was used to analyze the kinematics of the deictic gesture (index displacement, peak acceleration, peak velocity). Index displacement was calculated as the distance in 3D space between the final and initial positions of the index finger.

The voice emitted by the participants during word pronunciation was recorded by a microphone connected to a PC for sound recording by means of a card device (16 PCI Sound Blaster, CREATIVE Technology Ltd., Singapore). The spectrogram of each pronounced deictic word was computed for each participant using the PRAAT software (University of Amsterdam, the Netherlands). The time courses of formant 1 (F1) and formant 2 (F2) were analysed. The central part of the formant time course was analysed, excluding both the formant transition (consonant/vowel) and the final part of the vowel, during which echo could add to the emitted sound. The mean values of F1 and F2 of the vowel /a/ were analysed.
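For illustration, an equivalent measurement can be scripted with Parselmouth, a Python interface to the PRAAT algorithms; this is a sketch under assumed conventions (one WAV file per trial, vowel boundaries from manual segmentation), not the authors' procedure, and the "middle 50% of the vowel" window is a hypothetical stand-in for their exclusion of the consonant/vowel transition and the echo-prone final part:

```python
import numpy as np
import parselmouth  # Python interface to the PRAAT analysis algorithms

def mean_f1_f2(wav_path, vowel_start, vowel_end):
    """Mean F1 and F2 (Hz) over the central part of the vowel /a/.
    vowel_start and vowel_end are the vowel boundaries in seconds."""
    sound = parselmouth.Sound(wav_path)
    formant = sound.to_formant_burg()        # Burg-method formant tracks
    dur = vowel_end - vowel_start
    # Hypothetical central window: middle 50% of the vowel.
    times = np.linspace(vowel_start + 0.25 * dur, vowel_end - 0.25 * dur, 20)
    f1 = np.mean([formant.get_value_at_time(1, t) for t in times])
    f2 = np.mean([formant.get_value_at_time(2, t) for t in times])
    return f1, f2
```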

1.2.5. Data analysis
Deictic gesture: separate ANOVAs were conducted on the mean values of the analyzed index finger kinematic parameters, with deictic word (‘QUA’ (‘here’) vs. ‘LÀ’ (‘there’) vs. ‘X’) and token position (near vs. far) as within-participant factors.

Voice: separate ANOVAs were also conducted on the mean F1 and F2 values of the vowel /a/ of both ‘QUA’ (‘here’) and ‘LÀ’ (‘there’), with deictic word (‘QUA’ (‘here’) vs. ‘LÀ’ (‘there’)) and token position (near vs. far) as within-participant factors.

In all analyses, paired comparisons were performed using the Newman–Keuls procedure. The significance level was fixed at p < 0.05.
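The two-way within-participant design maps onto standard statistical tooling. The sketch below uses statsmodels' AnovaRM on a hypothetical long-format table (one row per participant × condition mean; the file name and column names are assumptions); the Newman–Keuls post-hoc procedure has no widely used Python implementation and is therefore not shown:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: columns participant (1-12),
# word ('QUA', 'LA', 'X'), position ('near', 'far'), peak_velocity.
df = pd.read_csv("exp1_kinematics.csv")  # assumed file

# Repeated-measures ANOVA with two within-participant factors,
# mirroring the design described above.
result = AnovaRM(df, depvar="peak_velocity", subject="participant",
                 within=["word", "position"]).fit()
print(result)  # F and p values for word, position, and their interaction
```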

1.3. Results

1.3.1. Deictic gesture
No significant main effects were found on index peak acceleration (deictic word: F(2,22) = 0.21, n.s.; token position: F(1,11) = 4.14, n.s.). There was a significant interaction between the two factors (F(2,22) = 5.80, p < 0.01). Post-hoc comparisons showed that when the participants pointed towards a remote position (i.e. token placed far), index peak acceleration was lower when they simultaneously read ‘QUA’ (‘here’) than when they read ‘LÀ’ (‘there’) or were silent. Further, when the participants read ‘QUA’ (‘here’), peak acceleration was lower when they pointed towards the remote position than when they pointed towards themselves (i.e. token placed near) (Fig. 1).

Fig. 1. Mean values of index peak acceleration measured in Experiment 1. Bars are S.E.

No significant main effects were found on index peak velocity (deictic word: F(2,22) = 0.02, n.s.; token position: F(1,11) = 3.13, n.s.). Again, there was a significant interaction between the two factors (F(2,22) = 8.27, p < 0.005). Post-hoc comparisons showed that when the participants pointed towards a remote position, index peak velocity was lower when they simultaneously read ‘QUA’ (‘here’) than when they read ‘LÀ’ (‘there’) or were silent. Further, when the participants pointed towards themselves, index peak velocity was greater when they simultaneously read ‘QUA’ (‘here’) than when they read ‘LÀ’ (‘there’) or were silent (Fig. 2). Post-hoc comparisons also showed that, whether the participants read ‘QUA’ (‘here’), read ‘LÀ’ (‘there’) or were silent, index peak velocity was greater when they simultaneously pointed towards themselves than when they pointed towards a remote position (Fig. 2). These differences in peak velocity might depend on differences in index displacement, considering that movement velocity increases with increasing movement amplitude [20]. However, the statistical analysis did not support this hypothesis: index displacement of gestures directed towards themselves did not differ from that of gestures directed towards a remote position (F(1,11) = 3.79, n.s.; near position = 38.9 cm; far position = 47.0 cm).

Fig. 2. Mean values of index peak velocity measured in Experiment 1. Bars are S.E.



1.3.2. Voice
The analysis of the voice spectrograms showed, as regards mean F1 values, a significant effect of deictic word (F(1,11) = 66.07; p < 0.00001; ‘QUA’ (‘here’) = 859.0 Hz; ‘LÀ’ (‘there’) = 943.6 Hz). There was no significant effect of token position (F(1,11) = 1.06, n.s.) and no interaction (F(1,11) = 0.70; n.s.).

Regarding mean F2 values, there was a significant effect of deictic word (F(1,11) = 16.30; p < 0.002; ‘QUA’ (‘here’) = 1351.2 Hz; ‘LÀ’ (‘there’) = 1540.5 Hz), but not of token position (F(1,11) = 4.26, n.s.). A significant interaction between the two factors was present (F(1,11) = 5.01, p < 0.05). For ‘QUA’ (‘here’) pronunciation, post-hoc comparisons showed that F2 was lower when the participants pointed towards a remote position (i.e. token placed far) than when they pointed towards themselves (i.e. token placed near). For ‘LÀ’ (‘there’), no significant effect of token position was found (Fig. 3).

Fig. 3. Mean formant 2 (F2) values of vowel /a/ for ‘QUA’ (‘here’) and ‘LÀ’ (‘there’) measured in Experiment 1. Bars are S.E.

1.4. Discussion
The main finding of the present experiment was that the congruence/incongruence between the content of the deictic word and that of the gesture (i.e. its direction) influenced both gesture kinematics and voice spectra.

Indeed, when the token was placed far and the participants pointed towards a remote position, both index peak acceleration and peak velocity were lower when they read the word ‘QUA’ (‘here’) than when they read the word ‘LÀ’ (‘there’), or were silent. Conversely, when the token was placed near and the participants pointed towards themselves, index peak velocity was greater when they read the word ‘QUA’ (‘here’) than when they read the word ‘LÀ’ (‘there’), or were silent. Further, as regards voice spectra, when the participants read the word ‘QUA’ (‘here’), F2 was lower when they simultaneously pointed towards a remote position than when they pointed towards themselves.

The data also showed that, overall, peak velocity was greater when the participants pointed towards themselves than when they pointed towards a remote position. These differences cannot be ascribed to differences in index displacement between the two spatial conditions. A possible explanation of this observation is that movements in the far conditions slowed down because of increased arm inertia. Indeed, when the participants pointed towards themselves they moved only the forearm (elbow flexion), whereas when they pointed towards a remote position they moved both arm and forearm (elbow and shoulder extension).

However, in our experimental design another factor might have influenced both the kinematics of the deictic gesture and the voice spectra, namely the presence and the physical location of the token on which the deictic word was printed.


The token was not the target of the pointing gesture; it signalled the direction of the deictic gesture. Previous studies have shown that contextual stimuli may influence movement kinematics: they may produce an increase in movement duration [60,61], a decrease in peak wrist velocity and an increase in the deceleration phase [31], or a deviation in movement trajectory [10,18,66]. However, if we hypothesize that, in our experiment, the presence of the token influenced movement kinematics, such an influence would have been similar in both token positions, since the direction of the gesture was always congruent with the token position: the participants pointed towards themselves when the token was near, and at a remote position when the token was far. Nevertheless, we observed different effects on pointing kinematics between the two token positions. When the participants pointed towards themselves, peak velocity was greater when they simultaneously read the word ‘QUA’ (‘here’) than when they read the word ‘LÀ’ (‘there’) or were silent. When the participants pointed towards a remote position, both peak acceleration and velocity were lower when they read the word ‘QUA’ (‘here’) than when they read the word ‘LÀ’ (‘there’) or were silent. Therefore, the specific effects we observed on pointing kinematics in the two token position conditions cannot be ascribed to the presence and physical location of the token, but rather to the congruence vs. incongruence between the content of the gesture and that of the word.

The same cannot be said for the effects observed on voice spectra. In this case, one needs to consider not only the congruence vs. incongruence between the content of the word and that of the to-be-performed gesture, but also that between the content of the word and the physical location of the token. In other words, the decrease in F2 observed when the participants read the word ‘QUA’ (‘here’) and simultaneously pointed towards a remote position, in comparison to when they pointed towards themselves, might depend not on the incongruence between the content of the word and that of the gesture, but on the incongruence between the content of the word and the spatial position of the token.

In order to assess the possible influence of the spatial position of the token on voice spectra, we carried out Experiment 2.

2. Experiment 2

2.1. Introduction

Experiment 2 was performed to examine whether the congruence/incongruence between the content of the deictic term and the position of the token influenced voice spectra. The participants read the deictic term, ‘QUA’ (‘here’) or ‘LÀ’ (‘there’), printed on the token without gesturing. The token was placed near or far. Consequently, they read the word ‘QUA’ (‘here’) printed on the token placed in the near (congruent condition) or far (incongruent condition) position; or they read the word ‘LÀ’ (‘there’) printed on the token placed in the far (congruent condition) or near (incongruent condition) position. In this way we could examine whether the position of the token influenced voice spectra.

2.2. Materials and methods

2.2.1. Participants
Twelve right-handed (according to the Edinburgh Inventory [57]) women participated in the experiment (ages 21–30 years). All participants were naïve as to the purpose of the study. They differed from those who participated in Experiment 1, in order to avoid covert activation of pointing gestures.

2.2.2. Apparatus
The apparatus was the same as in Experiment 1. The tokens with the deictic words were used. The token was placed either in the near or far position, as in Experiment 1.

2.2.3. Procedure
The participants were required to read aloud the deictic word printed on the token, which was placed in the near or far position. There were four experimental conditions: token with the word ‘QUA’ (‘here’) placed in the (1) near or (2) far position; token with the word ‘LÀ’ (‘there’) placed in the (3) near or (4) far position. For each experimental condition there were eight trials. In total, 32 trials were run in pseudo-random order.


2.2.4. Voice recordings and data analyses
Voice recording and the analyses performed on F1 and F2 were the same as in Experiment 1.

2.3. Results

The analysis of the voice spectrograms showed, as regards mean F1 values, a significant effect of deictic word (F(1,11) = 27.42; p < 0.0005; ‘QUA’ (‘here’) = 875.7 Hz; ‘LÀ’ (‘there’) = 981.0 Hz). There was no significant effect of token position (F(1,11) = 1.71, n.s.) and no interaction (F(1,11) = 0.02; n.s.).

Regarding mean F2 values, there was a significant effect of both deictic word (F(1,11) = 32.69, p < 0.0002; ‘QUA’ (‘here’) = 1364.8 Hz; ‘LÀ’ (‘there’) = 1567.4 Hz) and token position (F(1,11) = 6.41, p < 0.05; near position = 1478.4 Hz; far position = 1453.7 Hz). Further, there was a significant interaction between the two factors (F(1,11) = 8.21, p < 0.02). For ‘LÀ’ (‘there’) pronunciation, post-hoc comparisons showed that F2 was greater when the token was placed in the near than in the far position. For ‘QUA’ (‘here’), no significant effect of token position was found (see Fig. 4).

Fig. 4. Mean formant 2 (F2) values of vowel /a/ for ‘QUA’ (‘here’) and ‘LÀ’ (‘there’) measured in Experiment 2. Bars are S.E.

2.4. Discussion

The results of the present experiment, compared with those of Experiment 1, suggest that the position of the token on which the deictic word was printed did not influence voice spectra.

In Experiment 2, as regards ‘QUA’ (‘here’), the value of F2 measured when the token was placed far was not significantly different from that measured when the token was placed near. Conversely, in Experiment 1, the value of F2 measured when the token was placed far was lower than that measured when the token was placed near. It should be remembered that in Experiment 1 the participants, besides reading the word, simultaneously performed a pointing gesture, and when the token was placed far they pointed towards a remote position. Thus, the reduction of F2 observed in Experiment 1 was likely due to the production of a gesture whose direction was incongruent with the content of the word simultaneously read.

As regards ‘LÀ’ (‘there’), the value of F2 measured in Experiment 2 when the token was placed near was greater than that measured when the token was placed far. Conversely, in Experiment 1, the value of F2 measured when the token was placed near was not significantly different from that measured when the token was placed far. Thus, it is possible that in Experiment 1 there was a reduction of F2 when the token was placed near and the participants, besides reading the word, simultaneously pointed towards themselves. Also in this case, then, the reduction of F2 was likely due to the production of a gesture whose direction was incongruent with the content of the word simultaneously read.

3. General discussion

In the present study, the participants read aloud a deictic word and simultaneously performed a deictic gesture. The main finding was that the congruence/incongruence between the contents of the two signals influenced their production. This favours the hypothesis that the speech and gesture production systems interact with each other.

According to the dual-route model, two types of mechanism support reading aloud [13,14]. The non-lexical route allows readers to derive the sounds of written words by means of mechanisms that convert letters or letter clusters into their corresponding sounds. This route is functionally limited in that it does not provide information about word meaning. Conversely, the lexical route is implicated in the retrieval of stored information about the orthography, semantics, and phonology of familiar words [13,14]. Access to the meaning of a lexical item should activate its conceptual representation, which incorporates a set of both propositional and non-propositional (e.g. visual, spatial and dynamic) properties [43,44]. Non-propositional specifications, e.g. visual [51], visuo-spatial [26], spatio-dynamic [43], spatio-temporal [16] or spatio-motoric [40], should be translated by a motor planner into a motor program that provides the motor system with a set of instructions for executing the gesture.

If we consider the deictic words used in our study, one may expect that reading ‘QUA’ (‘here’), which means “in, at or to this place or position”, activates spatio-dynamic (or -motoric) specifications that, in turn, trigger a motor plan for a pointing movement directed toward a near position; and that reading ‘LÀ’ (‘there’), which means “in, at or to that place or position”, activates a motor plan for a pointing movement directed toward a far position. Besides reading and accessing the meaning of the printed word, the participants simultaneously processed another kind of information, namely the token position, which indicated the direction (content) of the to-be-performed deictic gesture. When the token was placed near, the participants had to plan a pointing movement directed towards themselves; when the token was placed far, they had to plan a pointing movement directed towards a remote position. Thus, it is possible that the congruence/incongruence between the content of the deictic word and that of the to-be-performed gesture could affect both the kinematics of the pointing movement and the voice spectra. The results of the present study suggest that this actually occurred.

Indeed, for gesture production, when the participants pointed towards themselves, index peak velocity was greater when they simultaneously read the word ‘QUA’ (‘here’) (congruent condition) than when they read the word ‘LÀ’ (‘there’) (incongruent condition) or were silent (control condition). This might depend on an amplification of gesture parameterization due to a synergic (or resonance) effect between: (a) the spatio-dynamic specifications related to the content of the deictic word and those related to the content of the to-be-performed gesture, or (b) the motor program triggered by the spatio-dynamic specifications related to the content of the deictic word and that triggered by the spatio-dynamic specifications related to the content of the gesture.

Further, when the participants pointed towards a remote position, both index peak acceleration and peak velocity were lower when they simultaneously read the word ‘QUA’ (‘here’) (incongruent condition) than when they read the word ‘LÀ’ (‘there’) (congruent condition) or were silent (control condition). This might depend on a partial inhibition of gesture parameterisation due to a conflict between: (a) the spatio-dynamic specifications related to the content of the deictic word and those related to the content of the to-be-performed gesture, or (b) the motor program triggered by the spatio-dynamic specifications related to the content of the deictic word and that triggered by the spatio-dynamic specifications related to the content of the gesture.



As concerns voice spectra, the examination and comparison of the results of Experiments 1 and 2 show that there was a reduction of F2 when the content of the deictic word was incongruent with that of the to-be-performed gesture. This occurred both when the participants read the word ‘QUA’ (‘here’) and when they read the word ‘LÀ’ (‘there’). Thus, it is possible to hypothesize that the conflict between the content of the deictic word and that of the to-be-performed gesture interfered with the phonetic planning that serves as input to the articulatory system.

It is interesting to note that the effects on gesture kinematics were evident only when the participants read the word ‘QUA’ (‘here’) and simultaneously performed a pointing gesture. For the generation of pointing gestures, de Ruiter [16] proposed that some parameters are fixed and stored in memory, e.g. the shape of the hand, while others are free and constitute the degrees of freedom of the gesture, e.g. arm orientation. However, it is possible that arm orientation is also stored in memory if a particular pointing gesture is usually oriented towards a narrow region of space. This might be the case for the deictic gestures performed in association with the word ‘here’, considering that the region of space indicated by this kind of gesture is less wide than that indicated by gestures performed in association with the word ‘there’. Indeed, ‘here’ refers especially to the speaker’s peripersonal space, whereas ‘there’ refers to all the space beyond the peripersonal space. Thus, when the participants read the word ‘here’ and pointed towards themselves, the gesture might have been facilitated by the use of spatio-dynamic parameters already stored in memory. Conversely, when the participants read the word ‘here’ and pointed towards a remote position, the gesture might have been partially inhibited because the spatio-dynamic parameters retrieved from memory, related to the word content, conflicted with those related to the to-be-performed gesture.

The results of the present study differ from those found by Gentilucci and co-workers [2,5] in that an increase rather than a decrease [5] in the arm kinematic parameters was observed when congruent gesture and word were simultaneously produced, and a decrease rather than no effect [2] was observed when gesture and word were incongruent. These contrasting results can be explained by considering that the deictic gesture and word code the same information about a spatial location. The localization is more precise for the gesture than for the corresponding deictic word. Consequently, their simultaneous execution could induce resonance and, in turn, amplification of arm movement parameters. In contrast, communicative words and symbolic gestures can code different aspects of the same meaning. For example, the gestures studied by Gentilucci and co-workers [2,5] (i.e. CIAO, NO and STOP) can code the intention to interact directly with the interlocutor. This aspect may be absent in the corresponding word. Consequently, the gesture can transfer this aspect to the word, which in turn, when it is of the same meaning, partially inhibits gesture execution. Indeed, in this case the gesture becomes somewhat redundant [2,19].

In the studies by Gentilucci and co-workers [2,4] an increase in F2 was found in both the congruent and incongruent gesture conditions, whereas in the present study a decrease in F2 was found in the incongruent condition. Placing the tongue forward/backward induces an increase/decrease in F2 [45]. Previously, Gentilucci and co-workers [2,5] suggested that the increase in F2 induced by the gesture was due to the intention to interact directly with the interlocutor, because in non-humans both mouth aperture and tongue protrusion accompany gestures typical of approaching relationships (for example, the tongue is protruded during the lip-smacking and protruded-face displays that precede grooming actions among monkeys [62,63]). A similar explanation can be offered for the results of the present study. However, the gesture affected the word differently here. In fact, the incongruent deictic gesture reduced the possibility of a communicative intention of the word: consequently, the tongue was retracted and F2 decreased. No increase in F2 was observed in the case of congruence of the gesture with the word, probably because the symbolic gestures studied by Gentilucci and co-workers [2,5] always contain a communicative intention, which is automatically transferred to the word. In contrast, only the context can make the deictic gesture communicative [33,54], and only in this case can the simultaneous production of the two signals be associated with an increase in F2.

In conclusion, the results of the present study suggest the existence of a tight interaction between the systems involved in producing deictic words and gestures, as suggested in previous studies for emblems [2,5], iconic gestures [40,41,58] and beat gestures [42]. A tight interaction between the two systems has also been reported in the comprehension domain. A number of experimental studies have investigated how the brain integrates the comprehension of hand gestures with co-occurring speech, providing evidence that the semantic processing evoked by gestures is qualitatively similar to that of words [67]. In ERP studies, when subjects were presented with co-speech gestures, a semantically anomalous gesture, like an anomalous word, elicited a stronger negative deflection in the signal around 400 ms after stimulus onset (the N400 effect) [32,33,58], and fMRI studies showed increased activation in an overlapping region of the left frontal cortex [68]. Recently, Hubbard et al. [30] scanned subjects with fMRI while they listened to spontaneously produced speech accompanied by rhythmic (beat) gestures and found greater activity in the left superior temporal gyrus and sulcus (STG/S), areas well known for their role in speech processing, suggesting the existence of a common neural substrate for processing speech and gesture.

References

[1] Alibali MW, Kita S, Young A. Gesture and the process of speech production: we think, therefore we gesture. Lang Cogn Process 2000;15:593–613.

[2] Barbieri F, Buonocore A, Dalla Volta R, Gentilucci M. How symbolic gestures and words interact with each other. Brain Lang 2009 [online].

[3] Bates E, Dick F. Language, gesture, and the developing brain. Dev Psychobiol 2002;40:293–310.

[4] Bernardis P, Bello A, Pettenati P, Stefanini S, Gentilucci M. Manual actions affect vocalizations of infants. Exp Brain Res 2008;184:599–603.

[5] Bernardis P, Gentilucci M. Speech and gesture share the same communication system. Neuropsychologia 2006;44:178–90.

[6] Butterworth B, Hadar U. Gesture, speech and computational stages: a reply to McNeill. Psychol Rev 1989;96:168–74.

[7] Carlomagno S, Pandolfi M, Marini A, Di Iasi G, Cristilli C. Coverbal gestures in Alzheimer’s type dementia. Cortex 2005;41:535–46.

[8] Carlomagno S, Santoro A, Menditti A, Pandolfi M, Marini A. Referential communication in Alzheimer’s type dementia. Cortex 2005;41:520–34.

[9] Chawla P, Krauss RM. Gesture and speech in spontaneous and rehearsed narratives. J Exp Soc Psychol 1994;30:580–601.

[10] Chieffi S, Ricci M, Carlomagno S. Influence of visual distractors on movement trajectory. Cortex 2001;37:389–405.

[11] Chieffi S, Ricci M. Gesture production and text structure. Percept Mot Skills 2005;101:435–9.

[12] Cicone M, Wapner W, Foldi N, Zurif E, Gardner H. The relation between gesture and language in aphasic communication. Brain Lang 1979;8:324–49.

[13] Coltheart M, Curtis B, Atkins P, Haller M. Models of reading aloud: dual-route and parallel-distributed-processing approaches. Psychol Rev 1993;100:589–608.

[14] Coltheart M, Rastle K, Perry C, Langdon R, Ziegler J. DRC: a dual route cascaded model of visual word recognition and reading aloud. Psychol Rev 2001;108:204–56.

[15] Delis D, Foldi NS, Hamby S, Gardner H, Zurif E. A note on temporal relations between language and gestures. Brain Lang 1979;8:350–4.

[16] de Ruiter JP. The production of gesture and speech. In: McNeill D, editor. Language and gesture. Cambridge: Cambridge University Press; 2000. p. 284–311.

[17] Ekman P, Friesen W. The repertoire of nonverbal behaviour: categories, origins, usage and coding. Semiotica 1969;11:49–98.

[18] Gangitano M, Daprati E, Gentilucci M. Visual distractors differentially interfere with the reaching and grasping components of prehension movements. Exp Brain Res 1998;122:441–52.


[19] Gentilucci M, Benuzzi F, Gangitano M, Grimaldi S. Grasp with hand and mouth: a kinematic study on healthy subjects. J Neurophysiol 2001;86:1685–99.

[20] Gentilucci M, Bernardis P, Crisi G, Dalla Volta R. Repetitive transcranial magnetic stimulation of Broca’s area affects verbal responses to gesture observation. J Cogn Neurosci 2006;18:1059–74.

[21] Gentilucci M, Chieffi S, Scarpa M, Castiello U. Temporal coupling between transport and grasp components during prehension movements: effects of visual perturbation. Behav Brain Res 1992;15:71–82.

[22] Gentilucci M, Corballis MC. From manual gesture to speech: a gradual transition. Neurosci Biobehav Rev 2006;30:949–60.

[23] Gentilucci M, Dalla Volta R, Gianelli C. When the hands speak. J Physiol Paris 2008;102:21–30.

[24] Gentilucci M, Stefanini S, Roy AC, Santunione P. Action observation and speech production: study on children and adults. Neuropsychologia 2004;42:1554–67.

[25] Goldin-Meadow S, Butcher C. Pointing toward two-word speech in young children. In: Kita S, editor. Pointing: where language, culture, and cognition meet. Mahwah, NJ: Erlbaum; 2003. p. 85–107.

[26] Hadar U, Burstein A, Krauss R, Soroker N. Ideational gestures and speech in brain-damaged subjects. Lang Cognit Process 1998;13:59–76.

[27] Hadar U, Yadlin-Gedassy S. Conceptual and lexical aspects of gesture: evidence from aphasia. J Neurolinguistics 1994;8:57–65.

[28] Hostetter AB, Alibali MW. On the tip of the mind: gesture as key to conceptualization. In: Forbus K, Gentner D, Regier T, editors. Proceedings of the 26th Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Erlbaum; 2004. p. 589–94.

[29] Hostetter AB, Alibali MW, Kita S. I see it in my hand’s eye: representational gestures are sensitive to conceptual demands. Lang Cogn Process 2007;22:313–36.

[30] Hubbard AL, Wilson SM, Callan DE, Dapretto M. Giving speech a hand: gesture modulates activity in auditory cortex during speech perception. Hum Brain Mapp 2009;30:1028–37.

[31] Jackson SR, Jackson GM, Rosicky J. Are non-relevant objects represented in working memory? The effect of non-target objects on reach and grasp kinematics. Exp Brain Res 1995;102:519–30.

[32] Kelly SD, Kravitz C, Hopkins M. Neural correlates of bimodal speech and gesture comprehension. Brain Lang 2004;89:253–60.

[33] Kelly SD, Ward S, Creigh P, Bartolotti J. An intentional stance modulates the integration of gesture and speech during comprehension. Brain Lang 2007;101:222–33.

[34] Kendon A. Some relationships between body motion and speech. An analysis of an example. In: Siegman A, Pope B, editors. Studies in dyadic communication. Elmsford, NY: Pergamon; 1972. p. 177–210.

[35] Kendon A. Gesticulation and speech: two aspects of the process of utterance. In: Key MR, editor. The relationship of verbal and nonverbal communication. The Hague: Mouton; 1980. p. 207–27.

[36] Kendon A. Gesture and speech: how they interact. In: Wiemann JM, Harrison RP, editors. Nonverbal interaction. Beverly Hills, CA: Sage Publications; 1983. p. 13–45.

[37] Kendon A. Do gestures communicate? A review. Res Lang Soc Interact 1994;27:175–200.

[38] Kendon A. Gesture: visible action as utterance. Cambridge: Cambridge University Press; 2004. p. 412.

[39] Kita S. How representational gestures help speaking. In: McNeill D, editor. Language and gesture. Cambridge, UK: Cambridge University Press; 2000. p. 162–85.

[40] Kita S, Özyürek A. What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. J Mem Lang 2003;48:16–32.

[41] Kita S, Özyürek A, Allen S, Brown A, Furman R, Ishizuka T. Relations between syntactic encoding and co-speech gestures: implications for a model of speech and gesture production. Lang Cognit Process 2007;22:1212–36.

[42] Krahmer E, Swerts M. The effects of visual beats on prosodic prominence: acoustic analyses, auditory perception and visual perception. J Mem Lang 2007;57:396–414.

[43] Krauss RM, Chen Y, Gottesman RF. Lexical gestures and lexical access: a process model. In: McNeill D, editor. Language and gesture. New York: Cambridge University Press; 2000. p. 261–83.

[44] Krauss RM, Hadar U. The role of speech-related arm/hand gestures in word retrieval. In: Campbell R, Messing L, editors. Gesture, speech, and sign. Oxford: Oxford University Press; 1999. p. 93–116.

[45] Leoni FA, Maturi P. Manuale di Fonetica. Roma: Carocci; 2002. p. 172.

[46] Levelt WJM. Speaking: from intention to articulation. Cambridge, MA: MIT Press; 1989. p. 566.

[47] Levelt WJM. The skill of speaking. In: Bertelson P, Eelen P, d’Ydewalle G, editors. International perspectives on psychological science (Vol. I: leading themes). Hillsdale: Lawrence Erlbaum Associates; 1994. p. 89–104.

[48] Levelt WJM, Richardson G, La Heij W. Pointing and voicing in deictic expressions. J Mem Lang 1985;24:133–64.

[49] McNeill D. So you think gestures are nonverbal? Psychol Rev 1985;92:350–71.

[50] McNeill D. Psycholinguistics: a new approach. New York: Harper & Row; 1987. p. 290.

[51] McNeill D. Hand and mind: what gestures reveal about thought. Chicago: University of Chicago Press; 1992. p. 416.

[52] McNeill D, Duncan SD. Growth points in thinking-for-speaking. In: McNeill D, editor. Language and gesture. New York: Cambridge University Press; 2000. p. 141–61.

[53] McNeill D, Levy E. Conceptual representations in language activity and gesture. In: Jarvella R, Klein W, editors. Speech, place, and action: studies in deixis and related topics. Chichester, England: Wiley; 1982. p. 271–95.

[54] Melinger A, Levelt WJM. Gesture and the communicative intention of the speaker. Gesture 2004;4:119–41.

[55] Morrel-Samuels P, Krauss RM. Word familiarity predicts temporal asynchrony of hand gestures and speech. J Exp Psychol Learn Mem Cogn 1992;18:615–22.

[56] Morsella E, Krauss RM. The role of gestures in spatial working memory and speech. Am J Psychol 2004;117:411–24.

[57] Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 1971;9:97–113.

[58] Özyürek A, Kita S, Allen S, Furman R, Brown A. How does linguistic framing of events influence co-speech gestures? Insights from cross-linguistic variations and similarities. Gesture 2005;5:215–37.

[59] Rauscher FB, Krauss RM, Chen Y. Gesture, speech and lexical access: the role of lexical movements in speech production. Psychol Sci 1996;7:226–31.

[60] Tipper SP, Howard LA, Jackson SR. Selective reaching to grasp: evidence for distractor interference effects. Vis Cogn 1997;4:1–38.

[61] Tipper SP, Lortie C, Baylis GC. Selective reaching: evidence for action-centered attention. J Exp Psychol Hum Percept Perform 1992;18:891–905.

[62] Van Hooff JARAM. Facial expressions in higher primates. Symp Zool Soc Lond 1962;8:97–125.

[63] Van Hooff JARAM. The facial displays of the catarrhine monkeys and apes. In: Morris D, editor. Primate ethology. London: Weidenfeld and Nicolson; 1967. p. 7–68.

[64] Volterra V, Bates E, Benigni L, Bretherton I, Camaioni L. First words in language and action: a qualitative look. In: Bates E, Benigni L, Bretherton I, Camaioni L, Volterra V, editors. The emergence of symbols: cognition and communication in infancy. New York: Academic Press; 1979. p. 141–222.

[65] Volterra V, Caselli MC, Capirci O, Pizzuto E. Gesture and the emergence and development of language. In: Tomasello M, Slobin D, editors. Beyond nature–nurture: essays in honor of Elizabeth Bates. NJ: Lawrence Erlbaum Associates; 2005. p. 3–40.

[66] Welsh TN, Elliott D, Weeks DJ. Hand deviations toward distractors. Evidence for response competition. Exp Brain Res 1999;127:207–12.

[67] Willems RM, Hagoort P. Neural evidence for the interplay between language, gesture, and action: a review. Brain Lang 2007;101:278–89.

[68] Willems RM, Özyürek A, Hagoort P. When language meets action: the neural integration of gesture and speech. Cereb Cortex 2007;17:2322–33.