Gesture, thought, and spatial language


Gesture, thought and spatial language*

Gesture 1:1 (2001), 35–50. issn 1568–1475

© 2001 John Benjamins Publishing Company


Karen Emmorey and Shannon Casey
The Salk Institute for Biological Studies / University of California, San Diego

This study explores the conceptual and communicative roles of gesture by examining the consequences of gesture prevention for the type of spatial language used to solve a spatial problem. English speakers were asked to describe where to place a group of blocks so that the blocks completely filled a puzzle grid. Half the subjects were allowed to gesture and half were prevented from gesturing. In addition, half the subjects could see their addressee and half could not. Addressee visibility affected how reliant subjects were on specifying puzzle grid co-ordinates, regardless of gesture condition. When describing block locations, subjects who were allowed to gesture were more likely to describe block orientation and rotation, but only when they could see the addressee. Further, gesture and speech complemented each other such that subjects were less likely to lexically specify rotation direction when this information was expressed by gesture; however, this was not a deliberate communicative choice because subjects who were not visible to their addressee also tended to leave rotation direction unspecified when they gestured. Finally, speakers produced deictic anaphoric constructions (e.g., “turn it this way”) which referred to their own gestures only when they could see the addressee. Together, these findings support the hypothesis that gesture is both an act of communication and an act of thought, and the results fail to support the hypothesis that gesture functions primarily to facilitate lexical retrieval.

Keywords: spatial language, effects of gesture prevention, sign language

Spontaneous gestures, particularly representational (iconic) gestures, have been found to be more prevalent when speakers use language with spatial content (e.g., accompanying spatial prepositional phrases) compared to when they do not (Rauscher, Krauss & Chen, 1996). Furthermore, Rauscher et al. (1996) found that when people were prevented from gesturing, their speech was less fluent, but only when using spatial language; speech with nonspatial content was unaffected by the ability to gesture. Rauscher et al. (1996, p. 229) hypothesize that representational gestures “derive from spatially coded knowledge, and they facilitate lexical retrieval by cross-modally priming the semantic features that enter into lexical search during grammatical encoding.” According to this hypothesis, gestures serve primarily to facilitate access to the mental lexicon. Rauscher et al. suggest, but argue against, a second hypothesis that “gesturing helps the speaker conceptualize the spatial relations that will be expressed in speech” (p. 229) and that difficulties at the conceptual level result in slow and dysfluent speech when gesture is not permitted. Rauscher et al. prefer the lexical access hypothesis because they found that subjects who had been prevented from gesturing while speaking produced speech dysfluencies similar to those observed when subjects were forced to use constrained speech (e.g., subjects were not allowed to produce words containing a certain letter); thus, preventing gesture and creating word-finding difficulties led to the same types of speech dysfluencies (e.g., non-juncture filled pauses). However, the possibility that gesture played a role in conceptualizing spatial relations could not be ruled out, and furthermore, speech dysfluencies can be caused by factors not associated with word-finding difficulties (e.g., fear of public speaking, Lewin, McNeil & Lipson, 1996). The experiment reported here explores both the conceptual and communicative roles of gesture by examining the consequences of gesture prevention on the type of spatial language used to solve a spatial problem.

The current study stems from our previous study comparing spatial language in American Sign Language (ASL) and English. Emmorey and Casey (1995) asked ASL signers and English speakers to solve a set of spatial puzzles. Subjects had to decide where to place a group of blocks so that the blocks completely filled a puzzle grid (see Figure 1), but they were not allowed to manipulate the blocks themselves. Instead, subjects instructed an experimenter where to place the blocks on the puzzle grid.

We found that English speakers relied heavily on grid coordinate labels marked on the puzzle mat, but ASL signers were able to use signing space to refer to puzzle positions. ASL signers also made significantly more overt references to orientation and rotation, and we hypothesized that this difference in language use was due to the direct representation of orientation in ASL by hand orientation and by tracing orientation in space.

However, there were critical differences in the experimental situation for the English speakers and the ASL signers. The English data was collected by Mark St. John (1992) to be used as input for training a computer model to learn language in the service of a specific task. St. John was keen to avoid deictic references and gesture as input to the model, and he therefore forced subjects to sit on their hands. St. John (personal communication) found that subjects were still gesturing, but with their heads! He therefore erected a screen between the experimenter and the subject. Of course, ASL signers were able to both see the experimenter and use their hands to communicate.

Figure 1. The “holding” boards and one of the puzzle mats on a table top. The figure shows a puzzle in progress. In fact, this configuration does not lead to a correct solution: the blocks on the puzzle mat must be re-oriented and moved in order for all of the blocks to fit within the grid.

Thus, the question arises: is the variation in spatial language used by ASL signers and English speakers due to language modality or to differences in the ability to gesture and/or see their interlocutor? To answer this question, we conducted another experiment using the same puzzle task, but English speakers were allowed to gesture and could see their interlocutor. To anticipate, we found that our previous results (i.e., increased reliance on grid references and fewer mentions of orientation by English speakers) were due to the experimental situation, and not to modality differences between English and ASL. This result led us to ask whether it was the ability to gesture or the ability to see the addressee that was responsible for the changes in spatial language. Therefore, we collected data from the remaining two possible conditions: (1) speakers were prevented from gesturing, but could see their addressee and (2) speakers could gesture but could not see their addressee.

We predicted that whether or not a subject mentioned grid coordinates was dependent upon whether the subject could see the experimenter. This prediction is based on the hypothesis that when speakers cannot rely on eye contact or other facial expressions of the addressee to confirm that their instructions are being understood, they will produce more explicit descriptions by specifying the grid co-ordinates. We further predicted that whether or not a subject tended to specify the orientation of a block was dependent upon whether or not the subject was allowed to gesture. This prediction is based on the hypothesis that the gestural expression of orientation (e.g., a twisting motion to indicate rotation) facilitates the verbal expression of orientation because the gesture helps the subject conceptualize the spatial rotation required to position an object at a desired location.

Method

Subjects. Thirty native English speakers participated in the study (15 males, 15 females). All were students at the University of California, San Diego and received either course credit or payment for their participation.

Materials and Procedure. Subjects were asked to solve three puzzles. The first puzzle was a 4×5 rectangle. The third puzzle is shown in Figure 1, and the second puzzle was a variant of the third in which the vertical “bar” appeared at the H coordinate of the grid, rather than at the E coordinate. Each puzzle was solved using the same wooden blocks: three blue, two red, and one green block, all shown in Figure 1. Each puzzle grid was labeled with horizontal letter and vertical number coordinates. The subjects were tested individually. One of the puzzle grids and all of the pieces were laid out on a table in front of the subject and the experimenter. The subjects’ task was to fill in the puzzle grid with all of the pieces, but they were not allowed to touch the pieces. The subjects had to give commands to the experimenter specifying where they wanted each piece to be put. The subject and experimenter sat side-by-side, so that each had a similar visual perspective on the puzzle mat. The experimenter did not pick up the block until the subject had finished describing how and where the block should be placed on the puzzle mat. The experimenter said very little during the task, and only occasionally asked for clarification if a subject’s instruction was particularly unclear. Subjects participated in one of the first three experimental conditions listed below (the experimenter was the same person in all three conditions):

1. Gesture permitted, addressee visible (code for examples: +G, +A). Subjects were allowed to gesture, but they were restricted from crossing an invisible line extending between their chair and the experimenter’s chair. This prevented subjects from pointing directly to grid locations on the puzzle mat. The puzzle and blocks were placed on a table in front of the experimenter.
2. Gesture permitted, addressee not visible (+G, -A). Subjects were allowed to gesture, and there was a screen erected between the subject and the experimenter. The screen was positioned so that the subject could see the experimenter’s hands place the blocks on the puzzle mat, but could not otherwise see the experimenter. Subjects were not permitted to lean forward past the screen (which would have made their gestures visible to the experimenter).
3. Gesture not permitted, addressee visible (-G, +A). Subjects were asked to sit on their hands with their back pressed against the back of the chair (this helped to prevent subjects from leaning forward and gesturing with their heads).
4. Gesture not permitted, addressee not visible (-G, -A). Subjects were asked to sit on their hands, and a screen was placed between the subject and the experimenter. The data from this condition were from Emmorey and Casey (1995).

Results and discussion

The design of the analysis was 2 (gesture permitted, not permitted) × 2 (addressee visible, not visible). For each condition, we determined (1) the percent of moves in which subjects referred to grid coordinates and (2) the percent of moves in which a subject referred to the orientation of a puzzle piece. A move consisted of instructing the experimenter either to place a block(s) on the puzzle mat or to move a block(s) to a new position on the mat.1 The following are examples of references to grid coordinates (the code associated with each example indicates which condition the example was taken from; see Methods):

(1) take the blue L piece and put it on H1, H2, G2 (-G, -A)
(2) the large red one in A2, 3, and ABC 3 position (+G, -A)
(3) ok we move the first blue one here into H and I 3 (+G, +A)

Examples of references to orientation include the following:

(4) turn the red one counter-clockwise (-G, -A)
(5) can we rotate that another 90 degrees (+G, +A)
(6) place the green block lengthwise in E 2,3,4 (-G, -A)

Many moves combined grid specifications and orientation instructions, as in example (6) and other examples below. Other types of instructions specified puzzle mat locations without reference to grid positions (e.g., “put it in the top left hand corner”) or referred to a relation between blocks (e.g., “put it to the left of the green piece”). A separate ANOVA was conducted for the percentage of moves containing a grid reference and for the percentage of moves in which block orientation was specified. We then conducted an analysis of subjects’ gestures and spatial language.
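The two per-subject measures described above can be sketched in code. This is an illustrative reconstruction only: the original coding was done by hand from transcripts, and the move records and field names below are our own invention.

```python
# Sketch of the two dependent measures: the percent of a subject's moves
# containing a grid-coordinate reference and the percent containing an
# orientation reference. The coded moves below are hypothetical.

def percent_of_moves(moves, feature):
    """Percent of a subject's moves coded positive for `feature`."""
    flagged = sum(1 for move in moves if move[feature])
    return 100.0 * flagged / len(moves)

# A single move may be coded for both features, as in example (6):
# "place the green block lengthwise in E 2,3,4".
subject_moves = [
    {"grid": True,  "orientation": False},  # "put it on H1, H2, G2"
    {"grid": True,  "orientation": True},   # "lengthwise in E 2,3,4"
    {"grid": False, "orientation": True},   # "turn the red one ..."
    {"grid": False, "orientation": False},  # "put it in the top left corner"
]

print(percent_of_moves(subject_moves, "grid"))         # 50.0
print(percent_of_moves(subject_moves, "orientation"))  # 50.0
```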

Grid coordinate references

When subjects could not see their addressee, they produced significantly more references to grid coordinates than when their addressee was visible (F(1,36) = 9.67, p < .005), regardless of whether they could gesture. There was no significant effect of the ability to gesture on the number of grid references (F(1,36) = 2.50, ns), and no interaction between addressee visibility and gesture (F < 1). The data are shown in Figure 2.

Figure 2. Mean percent of moves containing a reference to grid co-ordinates on the puzzle mat. Bars indicate standard error. [Bar chart: mean percent of moves (0–100) by addressee visibility, with separate bars for the gesture and no-gesture conditions.]

If we compare the English speakers who were allowed to gesture and see their addressee to the data from ASL signers (Emmorey & Casey, 1995), we find that the English speakers produced a similar percentage of grid references as the ASL signers (t(18) < 1): 35% and 33% respectively; compare these percentages to 76% and 66% for the two groups of English speakers who could not see their addressee.

These results support our hypothesis that references to grid coordinates are tied to addressee visibility such that subjects rely less on grid specifications when they can see their addressee (regardless of gesture condition). Like the ASL signers, English speakers became less dependent upon grid references by using more “natural” location descriptions which divided the puzzle mat into a left-right axis and a top-bottom axis and referred to local areas within the puzzle. For example:

(7) and could you put it in the lower left hand corner (+G, +A)
(8) stick the red one in the upper right (-G, +A)
(9) move the green piece up and over one (+G, +A)

Thus, the decrease in references to grid coordinates found by Emmorey and Casey (1995) for ASL signers does not appear to be due to their ability to use space to express location, but to the fact that the ASL signers could see their addressee.

Orientation references

As predicted, there was no effect of the visibility of the addressee on the percentage of moves containing a reference to block orientation (F(1,36) = 1.48, ns). However, we did not find that the ability to gesture alone increased references to block orientation (F(1,36) = 1.60, ns). Rather, the ability to gesture interacted with the visibility of the addressee, although the interaction just missed significance (F(1,36) = 3.53, p = .06). The data are shown in Figure 3.

Inspection of the data reveals that gesture only increases the percentage of references to orientation when the addressee is visible to the subject (t(38) = 2.58, p < .02). This finding suggests that many of the gestures accompanying speech in this study were communicative and did not just facilitate lexical access. The result also suggests that gestures did not simply assist in the conceptualization of spatial relations while subjects solved the spatial puzzles. If gesture served primarily to either facilitate lexical access or to enhance conceptualization of object orientation, then subjects should have produced more references to orientation regardless of whether their addressee was visible, and this pattern was not observed.

Again, when we compare the English speakers who were allowed to gesture and see their addressee to the data from ASL signers (Emmorey & Casey, 1995), we find that the English speakers and ASL signers produced roughly the same percentage of orientation references per move (t(18) < 1): 41% and 51% respectively; compare these percentages to the mean of 25% for English speakers who could not gesture or who could not see their addressee.

Figure 3. Mean percent of moves containing a reference to block orientation. Bars indicate standard error. [Bar chart: mean percent of moves (0–50) by addressee visibility, with separate bars for the gesture and no-gesture conditions.]

Gesture analysis

First, we examined whether subjects did in fact gesture when talking about orientation. We found that 59% of orientation instructions were accompanied by gesture when the subject could see their addressee, compared to 28% for subjects who could not see their addressee. This difference was significant (t(18) = 2.46, p < .05), and reflects the general finding that subjects gestured much more when they could see their addressee (t(18) = 2.30, p < .05). When the experimenter was visible, subjects produced an average of 45 gestures while solving the puzzles compared to an average of 18 gestures when the experimenter was behind a screen. This result is consistent with other studies which have shown that speakers gesture more in face-to-face conversations than they do when their addressee cannot see them (Cohen, 1977; Cohen & Harrison, 1973).
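The group comparisons of gesture counts reported above rest on an independent-samples t test, which can be sketched as follows. The per-subject counts are invented for illustration (the study reports only the condition means of 45 and 18), and the function name is our own.

```python
import math

# Pooled-variance independent-samples t test, of the kind used to compare
# gesture counts across visibility conditions. The per-subject counts
# below are hypothetical.

def pooled_t(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled variance with n1 + n2 - 2 degrees of freedom.
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

visible = [52.0, 38.0, 45.0, 45.0]  # hypothetical counts, addressee visible
hidden = [20.0, 16.0, 18.0, 18.0]   # hypothetical counts, behind screen
t = pooled_t(visible, hidden)
```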

Some evidence that gestures were used communicatively when the subject could see their addressee is found in their use of deictic anaphoric constructions in which speakers actually refer to their own gesture. The following are examples of this type of construction:2

(10) [and turn the green] [this way] (+G, +A)
(a) Points toward the green block on the puzzle mat
(b) Both hands rotate counter-clockwise

(11) actually can you rotate it so [that it’s like this] (+G, +A)
The hand has a loose L handshape (thumb and index finger extended, slightly curved), and it rotates counter-clockwise while the finger and thumb straighten to form a clear “L” shape (refers to an L-shaped block)

(12) [and the blue one that way] (+G, +A)
The hand has an L-shape and rotates clockwise toward the speaker

When subjects could not see the experimenter, there were no examples of such constructions, whereas when subjects could see their addressee, 13% of orientation instructions contained a deictic reference to the accompanying gesture.

However, the scarcity of these constructions suggests that gesture rarely carried the entire communicative burden. Most often, gestures indicated the direction of rotation left unspecified in the speech, as the following examples illustrate (the relevant aspect of the gesture is in bold):

(13) ok now [if we rotate] that 90 degrees (+G, +A)
The two hands, index fingers extended, circle clockwise

(14) or actually uh [turn it twice] (+G, +A)
The index finger arcs clockwise

(15) and [the green one] [can you flip it over] and then put it on the [thing that goes down] (+G, +A)
(a) Left hand points to green block
(b) Right hand rotates clockwise with thumb and index finger extended
(c) Right hand index finger jabs downward repeatedly

Although speakers may have used gestures communicatively when they could see their addressee, this study does not provide evidence that the addressee was sensitive to the information contained in the gestures. The addressee in this study was a confederate who knew the puzzle and participated in all conditions. To investigate whether the addressee takes up the information conveyed by gestures, pairs of naive subjects must be studied.

Next, we explored whether there was any evidence that gestures helped subjects to conceptualize the rotation necessary to place blocks in the desired position on the puzzle mat. We found several examples which support this hypothesis. In some cases, the gesture preceded the linguistic expression of rotation, suggesting that the gesture itself contributed to the speaker’s image of block rotation and placement. Examples (16) and (17) are from a subject who could not see the experimenter (the relevant gesture is in bold):

(16) then stick the [red one] [ ] 180 degrees [on the left side] (+G, -A)
(a) Index finger points toward the red block on the “holding” board
(b) Index finger circles counter-clockwise
(c) Index finger points toward puzzle mat

(17) put the um blue one on the right hand side [ ] twisted around so the [flat part] is down and put it on A4 (+G, -A)
(a) Right hand, fingers in a loose 5 handshape, twists clockwise
(b) Right hand, palm down, fingers together (a “B” handshape) moves back-and-forth horizontally (indicates the long L segment of the thin flat block, as opposed to the other thicker blocks)

Both of these examples show the subject producing a gesture without any accompanying speech. These gestures were produced prior to speech that specified rotation or orientation change. Example (18) illustrates a similar type of gesture produced when the subject could see the addressee (again, the relevant gesture is in bold).

(18) and [the red one] [ ] like can you [flip it over] [and then stick it in] [D, E] and then 3 (+G, +A)
(a) Left hand points toward red block on “holding” board
(b) Right hand with thumb and index finger extended, twists clockwise toward the speaker
(c) Right hand, same handshape, makes quick twisting motion
(d) Right index finger traces short line: represents line on puzzle mat
(e) Right index jabs downward with each letter spoken


Because these gestures were produced prior to the linguistic expression of rotation, the speaker may have been using the gesture to help conceptualize the required spatial rotation. Further evidence for this hypothesis comes from other examples in which subjects produced a rotation-type gesture, but there was no reference to orientation change in the speech:

(19) [OK the big blue] [ ] [put it near E] (+G, +A)
(a) Left hand points toward blue block on “holding” board
(b) Left hand twists counter-clockwise with index and thumb extended, hand opens palm out
(c) Both hands point toward the E position on the puzzle mat

After this instruction, the experimenter did in fact rotate the block counter-clockwise to fit at the E coordinate, and the subject confirmed this move as correct by nodding and saying “yeah”. Example (20) provides another example in which a subject produced rotation and orientation gestures with no reference to orientation in her speech.

(20) um [ ] the uh the red [the red one] um [place it] in the D column [ ] (+G, +A)
(a) Right hand points toward “holding” board
(b) Right hand again points toward “holding” board, slightly more emphatic and higher in space (more toward the red block on the far side of the board)
(c) Right index finger circles counter-clockwise
(d) Right index finger traces an L shape (short vertical line sweeping to a long horizontal line, the desired orientation of the block)

For example (20), the experimenter did rotate the block counter-clockwise so that the block had the orientation indicated by the gesture.

The existence of examples like (16)–(20) suggests that the gestures subjects produced while solving these spatial puzzles did not merely reflect the content of speech. Rather, gestures often expressed information not present in speech, and gestures were occasionally produced in a way that indicated that the gesture itself was helping subjects solve the puzzle.


Analysis of spatial language

As discussed above, when subjects could gesture and see their addressee, they produced more references to object orientation when solving the puzzle. As found by Emmorey and Casey (1995), the majority of the orientation instructions (78%) specified some type of rotation using verbs like turn, rotate, and flip. Only twenty-two percent specified the static orientation of a block; for example, “the red one to fit exactly how it is” or “the green one sideways with the edge of it on 2, and 3 in I.” The ability to gesture or see the addressee did not seem to affect the proportion of rotation instructions vs. static orientation descriptions.

If gesture tended to convey the direction of rotation as described above, we reasoned that when subjects could not gesture, they would be more likely to lexically specify rotation direction with terms such as “clockwise” or “to the left.” This prediction turned out to be correct (see Figure 4).
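The coding distinction at stake here, whether a rotation instruction lexically names a direction, can be sketched as a simple term matcher. The term list is our illustrative guess at such a coding scheme, not the study’s actual coding manual.

```python
import re

# Sketch of coding whether a rotation instruction lexically specifies
# the direction of rotation. The direction-term list is hypothetical.

DIRECTION_TERMS = re.compile(
    r"\b(counter-?clockwise|clockwise|to the (left|right)"
    r"|leftward|rightward)\b",
    re.IGNORECASE)

def specifies_direction(instruction):
    """True if the rotation instruction names a rotation direction."""
    return bool(DIRECTION_TERMS.search(instruction))

# Utterances taken from the transcribed examples:
print(specifies_direction("turn the red one counter-clockwise"))     # True
print(specifies_direction("can we rotate that another 90 degrees"))  # False
print(specifies_direction("or actually uh turn it twice"))           # False
```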

When subjects could be seen by the experimenter, those prevented from gesturing were significantly more likely to lexically specify rotation direction compared to subjects who could gesture: 43% vs. 21% of rotation requests specified direction, respectively (χ2 = 7.44, p < .01).3 Furthermore, subjects were about twice as likely to gesture when the rotation instruction did not specify direction: 67% of unspecified rotation instructions were accompanied by gesture, compared to 35% when rotation direction was specified. This finding indicates that gesture is not merely a semantically redundant accompaniment to speech; rather, gesture expressed additional non-redundant information about rotation direction.

Figure 4. Percent of rotation instructions that lexically specified the direction of rotation. [Bar chart: percent of rotation instructions (0–50) by addressee visibility, with separate bars for the gesture and no-gesture conditions.]

Surprisingly, the effect of gesture on direction specification for rotation held even when subjects could not see the experimenter. In this condition, 49% of rotation instructions specified direction for subjects who could not gesture, compared to 27% for those who were allowed to gesture, and this difference was significant (χ2 = 5.46, p < .05). Again, subjects were about twice as likely to gesture when the rotation instruction did not specify direction, even though the experimenter could not see the gesture: gesture accompanied 41% of rotation instructions when direction was unspecified vs. 21% of rotation instructions when direction was specified.
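The χ2 statistics above compare counts of direction-specified vs. unspecified instructions across gesture conditions (note 3 explains that totals, not per-subject means, were used). The test on a 2 × 2 count table can be sketched as follows; the counts in the example are invented, not the study’s data.

```python
# 2x2 chi-square test of independence, of the kind used to compare how
# often rotation instructions specified direction across gesture
# conditions. The counts below are hypothetical.

def chi_square_2x2(table):
    """table: [[a, b], [c, d]] observed counts. Returns the chi-square
    statistic (no continuity correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Rows: no-gesture vs. gesture condition;
# columns: direction specified vs. not specified.
counts = [[20, 10],
          [10, 20]]
chi2 = chi_square_2x2(counts)
```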

This result suggests that the function of such rotation gestures was not necessarily to communicate the direction of rotation to an addressee. Rather, they may have reflected the subjects’ visualization of the block rotation necessary to position it in a desired location. Such gestures affected the nature of rotation descriptions encoded by speech, such that subjects tended to omit the direction of rotation which is, of necessity, encoded within the gesture (i.e., the hand must rotate in a particular direction). The lexical retrieval hypothesis predicts the opposite result: an increase in rotation descriptions in the speech should be accompanied by an increase in gesture because gesture is predicted to facilitate speech about spatial concepts. However, we found the opposite pattern. Thus, we argue that gesture and speech express thought in an interdependent and integrated manner (McNeill, 1992) and that gesture contributes to the expression of thought even when the addressee cannot observe the speaker’s gestures.

Conclusions

Speakers tend to gesture when they talk about spatial relationships. This is not surprising given that the hands can easily represent concrete objects, hand motion can directly represent object movement, and the space occupied by the hands can correspond to described space. In addition, language is not particularly adept at expressing complex spatial relations; hence the cliché “a picture is worth a thousand words.” Speakers may gesture to enhance both communication and their conceptualization of the state or event that they are describing. The results of the current study support McNeill’s (1992) hypothesis that gesture is both an act of communication and an act of thought.

Gestures may have helped subjects visualize the rotation necessary for the placement of blocks within the puzzle. Subjects occasionally produced rotation-type gestures prior to speech about orientation change, and produced such gestures in the absence of speech about rotation. Gestures may be particularly helpful for our experimental task because subjects would much prefer to solve the spatial puzzles by manipulating the blocks themselves, rather than by telling another person what to do. Manual gestures provide a mechanism through which subjects can imagine themselves moving the blocks while solving the puzzle. However, it is important to note that subjects’ gestures were not mimetic depictions of how they would rotate the blocks if they were holding them. Rather, subjects’ gestures schematically represented block orientation and motion; for example, subjects used the index finger with a tracing motion to indicate rotation or an L handshape to refer to an L-shaped block.

When solving spatial puzzles, we found that gesturing made it more likely that subjects would describe orientation and rotation verbally, but less likely that they would additionally specify the direction of orientation change. One could hypothesize that gesture merely facilitated access to words such as rotate or lengthwise, leading to an increase in the use of these words, but such a hypothesis leaves unexplained why this increase only occurred when the addressee was visible, or why access to direction-specific words such as counter-clockwise was not also facilitated. We suggest that the ability to use gesture made it easier for speakers to express orientation verbally because gesture could convey all of the orientation information (as in deictic anaphoric references) or some of the information (e.g., direction of rotation). Furthermore, gesture and speech complemented each other such that speakers were less likely to lexically specify rotation direction when this information could be expressed by gesture. However, since speakers who could not be seen by their addressee also tended to leave rotation direction unspecified when they gestured, we suggest that speakers’ gestures reflect the way in which they formulate their thoughts, in addition to the communicative role that gesture may also play.


Notes

* This work was supported by the National Science Foundation (Linguistics Program, SBR-9510963 and SBR-9809002) and by the National Institute on Child Health and Human Development (R01 HD13249). We thank Amy Lakin for help in gesture coding and speech transcription, and Mark St. John for the use of his spatial puzzles. We also thank Randi Engle, Edward Klima, and an anonymous reviewer for valuable comments on a draft of the paper. An earlier version of this paper appeared in Coventry and Olivier (2001).

1. Subjects sometimes instructed the experimenter to remove a block(s) from the puzzle mat, but instructions within such a move rarely contained reference to grid coordinates or to block orientation. Therefore, we did not include these moves in our analysis.

2. Following McNeill (1992), the phase of the hand movement deemed to be expressing the “meaning” (the stroke) is shown by enclosing the concurrent speech in square brackets. The gesture itself is described in italics.

3. This analysis is based on the total orientation references for each group, rather than the mean for each subject, given the relatively small number of orientation references for subjects in the no-gesture condition.

References

Cohen, Akiba A. (1977). The communicative function of hand illustrators. Journal of Communication, 27, 54–63.

Cohen, Akiba, & Harrison, Randall P. (1973). Intentionality in the use of hand illustrators in face-to-face communication situations. Journal of Personality and Social Psychology, 28, 276–279.

Coventry, Kenneth R., & Olivier, Patrick (2001). Spatial language: Cognitive and computational perspectives. Kluwer Academic Publishers.

Emmorey, Karen, & Casey, Shannon (1995). A comparison of spatial language in English and American Sign Language. Sign Language Studies, 88, 255–288.

Lewin, Michael R., McNeil, Daniel R., & Lipson, Jonathon M. (1996). Enduring without avoiding: Pauses and verbal dysfluencies in public speaking fear. Journal of Psychopathology & Behavioral Assessment, 18(4), 387–402.

McNeill, David (1992). Hand and mind: What gestures reveal about thought. Chicago, IL: University of Chicago Press.

Rauscher, Francis H., Krauss, Robert M., & Chen, Yihsiu (1996). Gesture, speech, and lexical access: The role of lexical movements in speech production. Psychological Science, 7(4), 226–231.

St. John, Mark (1992). Learning language in the service of a task. Proceedings of the 14th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates.


Authors’ addresses

Karen Emmorey
Laboratory for Cognitive Neuroscience
The Salk Institute for Biological Studies
10010 North Torrey Pines Rd., La Jolla, CA 92037
e-mail: [email protected]

About the authors

Karen Emmorey received her Ph.D. in Linguistics from the University of California, Los Angeles in 1987. She is currently a senior staff scientist at the Salk Institute for Biological Studies.

Shannon Casey is currently a graduate student in linguistics at the University of California, San Diego.
