Categorical perception in American Sign Language

Karen Emmorey and Stephen McCullough
The Salk Institute for Biological Studies, La Jolla, CA, USA

Diane Brentari
Purdue University, West Lafayette, IN, USA

Categorical perception (CP) refers to the finding that certain stimuli (particularly speech) are perceived categorically rather than continuously, despite a continuous variation in form. Two experiments investigated whether Deaf signers or hearing nonsigners exhibit CP for hand configuration or for place of articulation (the location of articulation on the body) in American Sign Language (ASL). CP performance was measured using discrimination (ABX) and categorisation paradigms with computer-generated images of signs. In the categorisation task, signers and nonsigners exhibited sigmoidal performance and categorised non-identical stimuli together at each end of the perceptual continuum for both hand configuration and place of articulation, regardless of phonological distinctiveness in ASL. The finding that signers and nonsigners performed similarly suggests that these categories in ASL have a perceptual as well as a linguistic basis. Results from the discrimination task, however, showed that only ASL signers demonstrated categorical perception, and only for phonologically contrastive hand configuration. Neither group exhibited CP for place of articulation. Lack of a CP effect for place of articulation may be due to more variable category boundaries. A CP effect for contrastive hand configuration suggests that deaf signers develop special abilities for perceiving distinctions that are relevant to American Sign Language.

Requests for reprints should be sent to Karen Emmorey, Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, 10010 North Torrey Pines Road, La Jolla, CA 92037. Email: [email protected]

This research was supported by grants from the National Science Foundation (Linguistics Program; SBR 9809002) and from the National Institute for Child Health and Human Development (R01 HD13249) awarded to Karen Emmorey at the Salk Institute for Biological Studies. We would like to thank Sam Hawk, Melissa Herzig, and Amy Hoshina for help testing deaf subjects, and Sharen Kwan and Jamie Park for help testing the hearing subjects.

© 2003 Psychology Press Ltd

http://www.tandf.co.uk/journals/pp/01690965.html DOI: 10.1080/01690960143000416

LANGUAGE AND COGNITIVE PROCESSES, 2003, 18 (1), 21–45

22 EMMOREY, McCULLOUGH AND BRENTARI

Categorical perception (henceforth CP) is a psychophysical phenomenon in which certain stimuli (particularly speech sounds) are perceived categorically rather than continuously, despite a continuous variation in form (Liberman, Cooper, Shankweiler, & Studdert-Kennedy, 1967). The experimental paradigm for demonstrating CP is psychophysical: discrimination and identification performance are compared for a set of stimuli that vary along a physical continuum, and regions of that continuum can be assigned labels. For example, speech stimuli can be created that vary along a voicing continuum (voice onset time or VOT), with [ba] and [pa] as the endpoints. When English speakers are presented with these continuously varying stimuli in random order and asked to identify them, their performance is discontinuous. That is, they uniformly identify one end of the continuum as ‘ba’ and the other end as ‘pa’. Performance on this labelling task is compared with a discrimination task in which subjects simply decide which one of two stimuli matches a target stimulus (the ABX paradigm), and no overt categorisation is involved. Crucially, speakers are able to easily discriminate between stimuli that fall across a category boundary, but perform more poorly when discriminating within a category. Thus, a CP effect occurs when (1) a set of stimuli ranging along a physical continuum are identified as belonging to distinct, bounded categories and (2) subjects are better able to discriminate between pairs of stimuli that straddle this boundary than pairs that fall within one category or the other. CP effects are not found for all acoustic properties used for speech; neither the affricate/fricative distinction (Ferrero, Pelamatti, & Vagges, 1982; Rosen & Howell, 1987), nor vowel distinctions (Abramson, 1961; Fry, Abramson, Eimas, & Liberman, 1962) are perceived categorically.
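To make the discontinuous labelling pattern concrete, it can be sketched as a logistic identification function. This is our own illustration, not data from any cited study; the boundary location (25 ms) and slope are arbitrary values chosen only to show the sigmoidal shape.

```python
import math

def prob_pa(vot_ms, boundary_ms=25.0, slope=0.5):
    """Hypothetical logistic identification function: the probability of
    labelling a stimulus 'pa' rises sharply as voice onset time (in ms)
    crosses the category boundary."""
    return 1.0 / (1.0 + math.exp(-slope * (vot_ms - boundary_ms)))

# Labelling is near-uniform at the continuum endpoints and flips abruptly
# near the boundary -- the discontinuous pattern described in the text.
for vot in (0, 10, 20, 30, 40, 50):
    print(f"{vot:2d} ms -> {prob_pa(vot):.2f}")
```

A CP effect requires, in addition, that discrimination accuracy peak where this function crosses 50% — the labelling curve alone is only criterion (1).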

Three of the factors shown to be important in CP experiments are perceptual predisposition, language experience, and development/maturation. Although CP effects for speech were originally taken as evidence for mechanisms specially evolved for speech perception, further research revealed CP effects for speech stimuli in non-human animals (e.g., Kuhl, 1981), for non-speech auditory stimuli (e.g., Burns & Ward, 1978), and for some visual stimuli as well (see Bornstein, 1987, for properties of colour; see Beale & Keil, 1995, and Etcoff & Magee, 1992, for face recognition). Thus, some CP effects may be accounted for by natural sensitivities of the auditory system to specific types of stimuli, rather than by specially evolved mechanisms for speech.

However, many studies also indicate that humans develop specific perceptual abilities for listening to speech. For example, studies have shown that adult speakers of Japanese have difficulty distinguishing /l/ and /r/ (e.g., Eimas, 1975), whereas adult English speakers have difficulty distinguishing between Hindi dental and retroflex stops, which are not distinctive in English (Werker & Tees, 1983). In contrast, 6-month-old infants in Japan, India, and America can all distinguish these sound contrasts. By about 1 year of age, however, infants are performing like their adult counterparts, reliably distinguishing only those speech sounds relevant to their language. In addition, language experience plays an important role in reported CP effects by establishing where category boundaries occur. CP effects are exhibited at a very young age (Eimas, Siqueland, Jusczyk, & Vigorito, 1971), and boundaries for properties such as VOT shift in one direction or another based on the language environment. Spanish and English both have a [+/− voice] contrast, but the category boundaries are about 30 ms apart (Williams, 1977). Kuhl (1991, 1998) proposes that babies don’t simply lose their ability to perceive non-native sounds; rather, they develop representations of prototypical sounds in their language (see also Werker, 1994). These prototypical representations serve to ‘filter’ sounds in ways unique to a particular language, making it difficult to hear some of the distinctions of other languages. Aslin, Pisoni, and Jusczyk (1983) propose that language experience plays an ‘attunement’ role, refining an innate ability, in the development of CP performance.

In the two experiments reported here, we investigated the effects of perceptual predisposition and language experience on CP effects for visual stimuli from American Sign Language (ASL). We explored whether Deaf¹ signers develop unique abilities for perceiving distinctions that are relevant to ASL, similar to those that speakers develop for spoken languages. Specifically, we investigated whether signers exhibit CP effects for distinctions in hand configuration (HC) or place of articulation (POA). Like spoken languages, signed languages exhibit a linguistically significant, yet meaningless, level of structure that can be analysed as phonology (Stokoe, 1960; for reviews see Brentari, 1998; Corina & Sandler, 1993). Signs can be minimally distinguished by hand configuration (e.g., PLEASE,² SORRY), by place of articulation (APPLE, ONION), and by movement (e.g., TRAIN, CHAIR). Orientation with respect to the body is another component that minimally distinguishes among signs (e.g., WHERE, SCOLD; Battison, 1978).

Not all hand configurations nor all places of articulation are distinctive in ASL. Phonological elements are contrastive (phonemic, distinctive) in a given language if: (a) they occur in identical environments, (b) they create minimal pairs, and (c) their distribution cannot be captured by a phonological constraint or rule. As single consonants in syllable onset position, /t/ and /d/ are contrastive in English for voicing, e.g., ‘[t]ime’ vs. ‘[d]ime’. This contrast involves a certain portion of the full range of voice onset time. The criteria for contrastive phonological elements can be used in sign languages as well; they are not sound-specific. In monomorphemic ASL words, the property that refers to the number of fingers that are selected is contrastive, e.g., PLEASE and SORRY (Figure 1A). Whether fingers are selected or not is based on a set of criteria that includes whether or not the fingers are extended (Mandel, 1981; Sandler, 1996). In PLEASE, the thumb and all of the fingers of the hand are selected (represented by the feature [all]); in SORRY, only the thumb is selected.

¹ By convention, uppercase Deaf is used when the use of sign language and/or membership in the Deaf community is at issue, and lowercase deaf is used to refer to audiological status.
² Signs are notated as English glosses in uppercase.

A phonological element is allophonic (not contrastive) in a given language if its distribution can be captured by a phonological constraint or rule. In American English, a dental flap is an allophone of /t/, since dental flaps occur only in the context where a stressed vowel precedes and an unstressed vowel follows an underlying /t/, e.g., bites vs. biting. In monomorphemic words in ASL that contain a change in handshape, the aperture values (i.e., whether a handshape is open or closed) are predictable. This finding has been expressed in different ways in the literature, but the generalisation is uncontroversial (Brentari, 1990, 1998; Corina, 1990; Sandler, 1987) – within the set of selected fingers and joint specifications for the underlying handshape, one handshape will be open and the other closed in a monomorphemic word. In the allophonic pair in Figure 1B, the open and closed handshapes in SAY-NO-TO are predictable given that the thumb, index, and middle fingers are selected and that the metacarpal joint is specified. These two handshapes are allophonic in ASL.

We hypothesised that adult Deaf ASL signers, unlike hearing nonsigners, would exhibit categorical perception, but only for those hand configurations or places of articulation that are distinctive in ASL. It is important to highlight here a point that is implicit in CP studies for speech sounds; that is, only phonemic distinctions produce CP effects. Not only must a sound appear in the set of phones of a language (i.e., the set of all sounds of a given language), it must be part of the phonemic inventory (i.e., the set of sounds used for lexical contrast). Even acoustic properties that have been shown to produce CP effects may not do so if the extremes are not phonemic in the speaker’s language. In other words, the extremes in CP stimuli are built on phonetic correlates of a possible phonemic distinction; a CP effect of discrimination is not found unless a phonemic/contrastive distinction for a pair of sounds exists. For example, Japanese speakers do not exhibit a CP effect for the r/l distinction (Eimas, 1975); as in Korean, these sounds are not used contrastively. Speakers produce [l] and [r] in Japanese and Korean, but they are allophonic – in Korean, [l] is produced syllable-finally while [r] is produced syllable-initially (Demers & Farmer, 1991; Kenstowicz, 1994).

To test whether linguistic status influences categorical perception in ASL, we compared the performance of Deaf ASL signers and hearing nonsigners on discrimination and identification tasks that involved continua in which the endpoints were either phonologically contrastive or noncontrastive. An earlier study of categorical perception for contrastive hand configuration and place of articulation was conducted by Newport and Supalla with Deaf ASL signers, but the experiments failed to find any CP effects (reported in Newport, 1982). However, this early study may have been hampered by a lack of statistical power (only four or fewer subjects were tested) or by a lack of technology for creating the sign continua. In the experiments reported here, we used a 3-D animation computer program to ‘morph’ one endpoint of a sign continuum to another, thus creating equally spaced steps along each continuum and changing only the relevant variable. Because filmed productions by a live signer were used to create the stimuli in the early Newport and Supalla study, it is possible that the steps were not truly equal along each continuum and that other aspects of the stimuli were altered as well as the target variable (e.g., slight changes in wrist angle or speed of movement). Further investigation of categorical perception for a signed language taking advantage of new computer techniques is clearly warranted.

If we find categorical perception effects for Deaf ASL signers, but not for hearing nonsigners, it will indicate that CP is a basic aspect of language perception and processing and is independent of language modality. Differential CP effects for the contrastive vs. the noncontrastive linguistic continua will indicate how language processing gives rise to categorical perception; e.g., we hypothesise that CP effects will only occur for linguistically contrastive stimuli. If we find CP effects for both Deaf signers and hearing nonsigners, it will suggest that (1) ASL category boundaries fall along natural visual categories and (2) categorical perception for visual-gestural stimuli does not emerge from language processing experience. If we find no evidence of CP for either group, it will help delineate what types of visual stimuli can be perceived categorically. Categorical perception effects have been previously observed within the visual domain for famous faces (Beale & Keil, 1995) and for emotional facial expressions (Etcoff & Magee, 1992). If we find no evidence of CP for either hand configuration or body locations (places of articulation), it will suggest that CP may only occur for human faces, perhaps because each human face is a unique individual category (unlike other body parts) and/or because emotional facial expressions are universally categorised by humans (Ekman, 1980).


In sum, the existence of categorical perception effects for the visual-gestural phonological components of ASL for either Deaf signers, hearing nonsigners, or both, will indicate (1) whether categorical perception arises naturally as part of language processing, regardless of modality, and (2) whether categorical perception in the visual domain can occur for gestures as well as for faces.

EXPERIMENT 1

Our first experiment investigated categorical perception for two continua for hand configuration (HC) and two for place of articulation (POA), comparing phonemic and allophonic HCs and POAs. The continuum for phonemic HC was anchored by the ‘B-bar’ and ‘A-bar’ hand configurations, exemplified by the minimal pair PLEASE and SORRY. The continuum for allophonic HC was anchored by the initial and final hand configuration of the sign SAY-NO-TO (‘open N’ to ‘closed N’). The continuum for phonemic POA was anchored by the locations at the upper cheek and chin, exemplified by the minimal pair ONION and APPLE. The continuum for allophonic POA was anchored by the initial and final place of articulation of the sign DEAF (the chin and back jaw). Figures 1 and 2 illustrate all four continua. The stimuli in each continuum were static rather than dynamic because (1) transitional movement towards the body is not the same for the place of articulation endpoints and (2) the phonologically specified ‘closing’ movement of SAY-NO-TO (the allophonic HC endpoints) alters the hand configurations of interest.

Method

Subjects. Seventeen hearing nonsigners and fifteen Deaf ASL signers participated in Experiment 1. All hearing subjects reported no knowledge of a signed language. All Deaf subjects were prelingually deaf and native or near-native ASL signers. Fourteen of the Deaf subjects had Deaf families and were exposed to ASL from birth, and one subject had hearing parents and was first exposed to signed communication (Pidgin Signed English) at the age of five. All Deaf subjects reported ASL as their primary and preferred language.

Materials. The stimuli were created with Poser (3-D animation software) from MetaCreations and are illustrated in Figures 1 and 2. The keyframe animation program included in this software package can produce a linear continuum of equal steps between two hand configurations or between two spatial positions through linear interpolation. The linear interpolation technique incorporates all parameter information on joint/body positions from the starting and ending poses and calculates the poses in between these endpoints in equal increments. For example, if a shoulder joint is set in such a way that the arm position is at 90 degrees in frame 1 and 0 degrees in frame 7, linear interpolation automatically creates five in-between frames showing the arm at angles of 75, 60, 45, 30, and 15 degrees. In our study, the initial and end poses were created to match the sign articulations for the endpoints of each continuum, and we then applied the linear interpolation technique to create a continuum of equal steps between poses. Using this technique, we created four continua of eleven still images depicting equally spaced steps between two hand configurations or two places of articulation.
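The interpolation step can be sketched in a few lines. This is our own illustration of the technique, not Poser’s implementation; the function name and pose representation (a list of joint parameters) are ours.

```python
def interpolate_poses(start_pose, end_pose, n_frames):
    """Linearly interpolate every joint/body parameter between two poses,
    returning n_frames poses in equal steps, including both endpoints."""
    steps = n_frames - 1
    return [
        [s + (e - s) * k / steps for s, e in zip(start_pose, end_pose)]
        for k in range(n_frames)
    ]

# The paper's example: a shoulder angle of 90 degrees in frame 1 and
# 0 degrees in frame 7 yields five in-between frames at 75, 60, 45, 30, 15.
frames = interpolate_poses([90.0], [0.0], 7)
print([f[0] for f in frames])  # [90.0, 75.0, 60.0, 45.0, 30.0, 15.0, 0.0]
```

Calling `interpolate_poses(start, end, 11)` on full endpoint poses gives the eleven equally spaced still images of one continuum.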

Figure 1. Illustration of stimulus continua varying in (A) phonemic hand configuration (from the sign PLEASE to the sign SORRY) and (B) allophonic hand configuration (from ‘open N’ to ‘closed N’ in the sign SAY-NO-TO). The continua were created with Poser software from MetaCreations.

Figure 2. Illustration of stimulus continua varying in (A) phonemic place of articulation (from the sign ONION to the sign APPLE) and (B) allophonic place of articulation (from the chin to the back jaw in the sign DEAF). The continua were created with Poser software from MetaCreations.

Procedure. Two different tasks, discrimination and categorisation, were used to determine whether subjects demonstrated CP effects for a specific continuum, and the task design follows that of Beale and Keil (1995), who investigated categorical perception for human faces. Stimuli were presented on a Macintosh colour monitor using PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993), and the order of presentation for the four different continua was counterbalanced across subjects. The discrimination task followed an ‘ABX’ matching-to-sample paradigm, and it was always presented prior to the categorisation task. On each trial, subjects were shown three images successively. The first two images (A and B) were always two steps apart along the linear continuum (e.g., 1–3, 5–7) and were displayed for 750 ms each.³ The third image (X) was always identical to the first or second image and was displayed for 1 second. A 1 second inter-stimulus interval (ISI) consisting of a blank white screen separated consecutive stimuli. Subjects pressed a key on the computer keyboard to indicate whether the third image was the same as the first or second image. All 9 two-step pairings of the 11 images were presented in each of four orders (ABA, ABB, BAA, BAB), resulting in 36 combinations. Each combination was presented twice to each subject, and the resulting 72 trials were fully randomised within each continuum. These particular methods (i.e., a relatively long ISI and predictable step sizes) were chosen to maximise the probability of obtaining categorical perception effects for the discrimination task. We reasoned that if CP could be shown for a signed language under optimal conditions, then we could manipulate these conditions in future studies.

³ Images that were more than two steps apart were not compared because these images were relatively easy to discriminate and would likely produce a ceiling effect in response accuracy.
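The ABX trial structure (9 two-step pairs × 4 orders × 2 repetitions = 72 trials per continuum) can be sketched as follows. The function name and the numeric representation of images are our own, not PsyScope’s.

```python
import random

def make_abx_trials(n_images=11, step=2, reps=2, seed=0):
    """Build one continuum's ABX trial list: every two-step pairing of
    the images, each in the four orders ABA, ABB, BAA, BAB, with each
    combination repeated `reps` times, fully randomised."""
    trials = []
    for a in range(1, n_images - step + 1):   # pairs 1-3, 2-4, ..., 9-11
        b = a + step
        for order in ("ABA", "ABB", "BAA", "BAB"):
            image = {"A": a, "B": b}
            trials.extend([tuple(image[c] for c in order)] * reps)
    random.Random(seed).shuffle(trials)
    return trials

trials = make_abx_trials()
print(len(trials))  # 9 pairs x 4 orders x 2 repetitions = 72
```

Each trial is an (A, B, X) triple of image numbers; a response is scored correct when the subject picks the position (first or second) whose image matches X.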

The categorisation task consisted of a binary forced-choice categorisation. Before subjects performed the categorisation task for each continuum, they were first shown the two endpoint images labelled with the number 1 or 2. On each trial, subjects were presented with a single image randomly selected from the 11-image continuum and were asked to decide which one of the pair (image number 1 or 2) that image most closely resembled. For example, subjects pressed the number ‘1’ key if the image most resembled the B-bar hand configuration and the ‘2’ key if it most resembled the A-bar hand configuration. Stimuli were presented for 750 ms followed by a white blank screen, and each image was presented eight times, resulting in 88 randomly ordered trials for each continuum. Subjects could refer to the endpoint images between trials (although they rarely did so), but not during a trial. For signers, the endpoint images had linguistic significance because the still images were recognisable as the signs PLEASE and SORRY without movement, and the endpoints of the noncontrastive pairs were recognisable as components of the signs DEAF and SAY-NO-TO. For hearing nonsigners, however, the endpoint images were merely labelled with the arbitrary numbers 1 and 2.

Results

For all four continua tested, both hearing and Deaf subjects judged stimuli as belonging to distinct categories with a sharp boundary between them (see Figures 3 and 4). The data from the categorisation task showed a sigmoidal shift in identity judgements for each continuum, and the boundaries between categories were the same for both subject groups. The categorisation task data were then used to predict performance on the discrimination task. Following Beale and Keil (1995), we assessed whether the stimuli within a continuum were perceived categorically by first defining the category boundary as those images which yielded labelling percentages between 33% and 66% on the categorisation task. We chose this method to define the category boundary because (1) the shift in categorisation judgements seen in Figures 3 and 4 is not definitive, as it may be an artifact of binary forced-choice judgements, and (2) we wanted to be able to compare our results with other studies of visual categorical perception. If the stimuli along a continuum are perceived categorically, a peak in accuracy would be expected in the discrimination task for the two-step pair that straddles the boundary. Thus, planned comparisons were performed on the accuracy scores at the predicted peaks. That is, for each continuum, accuracy for the pair that straddled the boundary was contrasted with the mean accuracy on all the other pairs combined.

Figure 3. Data from the hand configuration continua of Experiment 1 for Deaf signers and hearing nonsigners. For each subject group, the upper graphs show results from the categorisation tasks for the phonemic and allophonic HC continua. The lower graphs show results from the discrimination tasks. The vertical lines indicate the predicted peaks in accuracy. Only Deaf signers exhibited peak discrimination accuracy at the category boundary, and only for phonemic hand configurations.

Figure 4. Data from the place of articulation continua of Experiment 1 for Deaf signers and hearing nonsigners. For each subject group, the upper graphs show results from the categorisation tasks for the phonemic and allophonic POA continua. The lower graphs show results from the discrimination tasks. The vertical lines indicate the predicted peaks in accuracy. Neither subject group exhibited better discrimination accuracy at the predicted category boundary, thus failing to show CP effects for place of articulation.
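The analysis logic — locate the boundary from the labelling data, then contrast the straddling pair’s discrimination accuracy with the rest — can be sketched as below. This is our own illustration with made-up labelling percentages; the paper’s actual planned comparisons were F-tests, which this simple difference score does not reproduce.

```python
def boundary_images(labelling_pct):
    """Return 1-based indices of images whose labelling percentage (how
    often the image was labelled as category 2) falls between 33% and
    66% -- the category-boundary definition used in the text."""
    return [i + 1 for i, p in enumerate(labelling_pct) if 33 <= p <= 66]

def peak_vs_rest(pair_accuracy, boundary_pair):
    """Contrast accuracy on the two-step pair straddling the boundary
    with the mean accuracy over all other two-step pairs."""
    peak = pair_accuracy[boundary_pair]
    rest = [acc for pair, acc in pair_accuracy.items() if pair != boundary_pair]
    return peak - sum(rest) / len(rest)

# Hypothetical sigmoidal labelling percentages for an 11-image continuum:
labelling = [1, 2, 4, 9, 24, 52, 87, 95, 97, 98, 99]
print(boundary_images(labelling))      # only image 6 sits at the boundary

accuracy = {(i, i + 2): 75.0 for i in range(1, 10)}
accuracy[(5, 7)] = 90.0                # the pair straddling image 6
print(peak_vs_rest(accuracy, (5, 7)))  # positive difference -> CP-like peak
```

A categorical-perception pattern corresponds to a reliably positive peak-versus-rest contrast; a flat discrimination function yields a difference near zero.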

For the phonemic hand configuration continuum between B-bar (PLEASE) and A-bar (SORRY), Deaf signers exhibited a categorical perception effect [F(1, 126) = 20.35, p < .001], but the hearing nonsigning subjects did not [F(1, 144) = 1.3, n.s.]. For the allophonic HC continuum between ‘open N’ and ‘closed N’, neither subject group exhibited a CP effect [Deaf subjects: F(1, 126) = 0.9, n.s.; hearing subjects: F(1, 144) = 1.05, n.s.].

For the phonemic place of articulation continuum between the upper cheek (ONION) and chin (APPLE), neither the Deaf subjects [F(1, 126) = 0.11, n.s.] nor the hearing subjects [F(1, 144) = 1.68, n.s.] exhibited categorical perception. Similarly, neither group exhibited a CP effect for the allophonic place of articulation continuum between the chin and back jaw [Deaf subjects: F(1, 126) = 3.04, n.s.; hearing subjects: F(1, 144) = 3.8, n.s.].

Discussion

When asked to categorise hand configurations and places of articulation on the face, both signers and nonsigners exhibited discontinuous (sigmoidal) performance and categorised non-identical stimuli together at each end of the perceptual continuum (see Figures 3 and 4). Given that we already know that place of articulation and hand configuration categories exist for signers based on linguistic data, this result is not surprising for signers, at least not for the hand configuration and place of articulation stimuli that are contrastive. The finding that hearing nonsigners performed similarly suggests that these categories may have a perceptual as well as a linguistic basis. The category boundaries observed for the hand configuration and the place of articulation continua may be perceptually driven, and thus Deaf signers (like hearing controls) exhibit sigmoidal performance even when identifying allophonic stimuli within hand configuration and place of articulation continua.

The results from the categorisation tasks are consistent with earlier results from Lane, Boyes-Braem, and Bellugi (1976) and from Poizner and Lane (1978), who found similar perceptual groupings by Deaf signers and hearing controls for hand configuration and for place of articulation. Lane et al. (1976) found that both Deaf signers (native and non-native) and hearing controls made the same types of visual confusions among hand configurations, suggesting that linguistic experience does not affect the saliency of the visual features critical to the identification of hand configurations. Similarly, Poizner and Lane (1978) found that Deaf and hearing subjects exhibit similar patterns of performance when asked to identify locations on the body under conditions of visual noise. Although the categorisation of hand configuration and place of articulation appears to be unaffected by linguistic experience, the categorisation of movement may be influenced by knowledge and use of ASL. Poizner (1981, 1983) found that signers and nonsigners provide different similarity judgements for point-light motion displays. In addition, experience with ASL affects the perception of apparent motion. Apparent motion is the perception of a single moving object when a static object occurs at one location, followed rapidly by a static object at another location. Wilson (2001) found that ASL signers perceive an arc motion of the hand if the corresponding sign has an arc movement (e.g., IMPROVE), whereas hearing nonsigners perceive the (expected) shortest linear path motion for the same stimuli. Thus, although handshape and place of articulation may be categorised similarly by signers and nonsigners, movement may be processed differently by the two groups.

Crucially, for hand configuration, the discrimination task revealed that only Deaf signers exhibited better discrimination across the category boundary compared to within categories, thus demonstrating categorical perception. Furthermore, categorical perception was only observed for phonemic hand configurations, and not for hand configurations that are allophonic in ASL. To our knowledge, this is the first experiment to demonstrate a CP effect specific to users of a signed language. The fact that hearing nonsigners did not exhibit a CP effect for hand configuration indicates that the enhanced discrimination at the category boundary exhibited by Deaf signers is a result of linguistic knowledge and not due to general properties of visual discrimination and perception. Furthermore, the finding that CP only occurred with phonemic, and not allophonic, hand configurations suggests that Deaf signers, like speakers, develop unique abilities for perceiving distinctions that are relevant to their language.

Although the discrimination function for CP shown in Figure 3 does not resemble the ‘ideal’ discrimination function sometimes described for speech (i.e., at-chance performance within a category and perfect performance across the boundary), the data do resemble the discrimination functions observed for CP in other visual domains, specifically for famous faces (Beale & Keil, 1995) and emotional facial expressions (de Gelder, Teunisse, & Benson, 1997; Etcoff & Magee, 1992). Discrimination accuracy within visual categories tends to be relatively high; generally, participants perform with about 70–80% mean accuracy rates within categories. In addition, even for speech, within-category discrimination abilities are generally not at chance, and most studies of categorical perception for speech report reasonably good within-category discrimination performance, but much better between-category discrimination performance (see Macmillan, Kaplan, & Creelman, 1977; Massaro, 1987). The above-chance performance within speech categories led Massaro (1987) to argue that the term categorical perception is inappropriate and to suggest the term categorical partition as an alternative. However, this term has not been widely accepted, and most researchers retain the term categorical perception to refer to the general phenomenon of better discrimination across a category boundary than within a category (e.g., Harnad, 1987). Nonetheless, comparing the CP effects with sign language to previous results with speech suggests that CP effects in the visual domain are weaker than those found for speech. Specifically, discrimination ability within hand configuration categories is better than discrimination ability reported within stop consonant categories for speech (e.g., Liberman, Harris, Hoffman, & Griffith, 1957). Thus, the enhanced discrimination ability at the hand configuration boundary exhibited by Deaf signers may be less dramatic than that observed for English speakers at the VOT category boundary for voiced stops.

In contrast to our findings with hand configuration, the discrimination results for place of articulation showed that both signers and nonsigners exhibited similar discrimination abilities both across and within categories. Lack of a CP effect for place of articulation may be due to several factors. Locations on the body may be more variable and continuous, compared to hand configurations, and thus may pattern more like vowels in speech perception, at least with respect to categorical perception. Another possibility is that the visual system is particularly adept at discriminating spatial locations, and linguistic categories of location may simply have no impact on spatial discrimination ability. However, another possible explanation for the lack of a CP effect for place of articulation may lie in the nature of phonological rules that affect place of articulation in ASL. Specifically, ASL has a rule of displacement which allows signs made at eye level (e.g., ONION) to be articulated at the cheek or chin level when signing in a casual register. Thus, the contrastive POAs we selected for Experiment 1 could have been misinterpreted as allophonic place alternations because of the phonological displacement rule. As stated earlier, allophonic pairs do not produce CP effects. The pattern of distribution of forehead and chin as POAs in ASL is known as 'partial overlap' (Bloch, 1941) – that is, a pair of phones that are contrastive in one environment are allophonic in another. A well-known case of partial overlap occurs for voicing in Polish and many other Slavic languages. In Polish, voicing is contrastive word-initially, but not when the pair appears word-finally (Kenstowicz, 1994; Rubach, 1984); thus, this contrast is neutralised word-finally. Experiment 2 attempts to find evidence for a CP effect with place of articulation using POAs that cannot be misconstrued as the output of a phonological rule, and this experiment also attempts to replicate CP effects for hand configuration.

EXPERIMENT 2

For Experiment 2, we selected a new pair of signs to test CP effects for place of articulation: the chin (initial location of the sign YESTERDAY) and the neck (the initial position of the sign HANG-OVER); see Figure 5B. First, and most importantly, this new pair does not run the risk of interference from the effects of a phonological rule. Second, the new pair differs not only in its terminal features, but also in its major body region. In Brentari (1998), the POA class node 'body' is divided into four major regions – head, torso, nondominant arm, and nondominant hand – which are then further assigned one of eight possible subregions. The cheek is a subregion dominated by the head, while the neck is a subregion dominated by the torso in this model. Our new stimulus pair thus eliminates one possible type of interference and also enhances the phonological distinction between the two members of the pair.

Experiment 2 also attempts to replicate the CP effects observed in Experiment 1 for hand configuration, using a different set of contrastive handshapes for the endpoints of the continuum. The early study by Newport and Supalla constructed a continuum from the 1 handshape (fist with index finger extended, exemplified by CANDY) to the X handshape (fist with index finger extended and bent, exemplified by APPLE). However, this particular contrast also involves a difference in contact and arm position – it does not just involve handshape. Therefore, we chose two signs in which the handshapes did not differ in contact or in arm position to form the endpoints of the HC continuum: MOTHER (an open 5 handshape) and POSH (a 3 handshape), as illustrated in Figure 5A.

Method

Subjects. Twenty hearing nonsigners and twenty-two Deaf ASL signers participated in Experiment 2. All hearing subjects reported no knowledge of a signed language. All Deaf subjects were prelingually deaf and native ASL signers. All Deaf subjects had Deaf families and were exposed to ASL from birth. All Deaf subjects reported ASL as their primary and preferred language.


Materials. The stimuli were developed as in Experiment 1 and are illustrated in Figure 5.

Procedure. The procedure for the categorisation and discrimination tasks was the same as in Experiment 1, and order of presentation of the hand configuration and place of articulation continua was counterbalanced across subjects.

Results

As in Experiment 1, both subject groups exhibited sigmoidal performance on the categorisation task for both the HC and POA continua. The category boundaries, defined as those images yielding between 33% and 66% accuracy, were slightly sharper for the Deaf subjects because only one image pair fell within this range, whereas two image pairs fell within this range for the hearing subjects for both the HC and POA continua (see Figure 6). To determine which one of the two image pairs should be considered as straddling the category boundary, we inspected the standard deviation for each image in the HC and POA continua and chose the image with the largest standard deviation. Response variability is expected to be largest at the category boundary, where categorisation is most difficult (see Figure 7A). However, the results remain unaltered if the other image is considered as the category boundary for either the HC or POA analyses. Interestingly, inspection of the standard deviation data graphed in Figure 7A for hand configuration reveals a sharp peak in variability at the category boundary for the Deaf subjects, but response variability is more dispersed for the hearing subjects. This pattern suggests the Deaf subjects were more sensitive to the category boundary during the identification task than the hearing subjects. For place of articulation, the pattern of variability was very similar for both subject groups.

Figure 5. Illustration of stimulus continua from Experiment 2. (A) Hand configuration continuum (from the sign MOTHER to the sign POSH) and (B) place of articulation continuum (from the initial location of the sign YESTERDAY to the initial location of the sign HANG-OVER). The continua were created with Poser software from MetaCreations.

Figure 6. Data from the hand configuration and place of articulation continua of Experiment 2 for Deaf signers and hearing nonsigners. For each subject group, the upper graphs show results from the categorisation tasks for the HC and POA continua. The lower graphs show results from the discrimination tasks. The vertical lines indicate the predicted peaks in accuracy. Only Deaf signers exhibited peak discrimination accuracy at the category boundary, and only for hand configuration.
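The boundary-selection procedure described above (a 33–66% identification criterion, with ties broken by response variability) can be sketched as follows. This is an illustrative reconstruction, not code from the study; the data and function name are hypothetical:

```python
def locate_boundary(pct_endpoint_a, response_sds):
    """Locate the image pair straddling the category boundary.

    pct_endpoint_a: per-image proportion of trials categorised as the first
    endpoint of the continuum (sigmoidal across the continuum).
    response_sds: per-image standard deviation of categorisation responses.
    Candidates are images identified between 33% and 66%; if more than one
    image qualifies, take the one with the largest response variability,
    since variability should peak where categorisation is hardest.
    """
    candidates = [i for i, p in enumerate(pct_endpoint_a) if 0.33 < p < 0.66]
    return max(candidates, key=lambda i: response_sds[i])

# Hypothetical 9-step continuum: two images fall in the 33-66% band.
pct = [0.98, 0.95, 0.90, 0.60, 0.40, 0.08, 0.05, 0.03, 0.02]
sds = [0.05, 0.08, 0.15, 0.30, 0.45, 0.12, 0.08, 0.05, 0.04]
print(locate_boundary(pct, sds))  # -> 4 (0-indexed image with the larger SD)
```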

To assess categorical perception, planned comparisons contrasted discrimination accuracy for the image pair that straddled the boundary with the mean accuracy on all the other pairs combined. The results from the HC continuum replicated Experiment 1: Deaf signers were significantly more accurate when discriminating pairs that straddled the HC category boundary [F(1, 189) = 7.75, p < .002], but hearing nonsigners showed no increase in accuracy across the HC category boundary [F(1, 171) = 0.47, n.s.]. However, as in Experiment 1, neither group exhibited evidence of categorical perception for place of articulation [Deaf signers: F(1, 189) = 0.16, n.s.; hearing controls: F(1, 171) = 0.45, n.s.].
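The logic of this planned comparison (boundary pair vs. the mean of all other pairs) can be sketched with a per-subject contrast score. Note that this is a simplified subject-level version under synthetic data, not the paper's exact error term (the reported degrees of freedom suggest a pooled trial-level analysis); the variable names and data are hypothetical:

```python
import numpy as np
from scipy import stats

def boundary_contrast(acc, boundary):
    """Planned comparison: accuracy on the boundary-straddling pair vs.
    the mean accuracy on all other pairs, computed per subject and tested
    against zero (a single-df contrast, so F = t**2)."""
    acc = np.asarray(acc, dtype=float)            # shape: (subjects, image pairs)
    others = np.delete(acc, boundary, axis=1).mean(axis=1)
    diff = acc[:, boundary] - others              # per-subject contrast score
    t, p = stats.ttest_1samp(diff, 0.0)
    return t**2, p

# Hypothetical ABX accuracies: 22 subjects x 9 image pairs, with a CP-like
# accuracy peak at the pair straddling the boundary (pair 4).
rng = np.random.default_rng(0)
acc = np.clip(0.75 + 0.05 * rng.standard_normal((22, 9)), 0, 1)
acc[:, 4] = np.clip(acc[:, 4] + 0.15, 0, 1)
F, p = boundary_contrast(acc, boundary=4)
print(F > 1 and p < .05)  # a reliable boundary advantage -> True
```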

In addition, we conducted an analysis of response variability for discrimination of hand configuration for the two subject groups. A CP effect would predict greater response variability within a category where discrimination is most difficult and the least response variability at the category boundary where discrimination is easiest. Levene's test of homogeneity of variance was performed, and the results are shown in Figure 7B, C. The hearing subjects showed no significant difference in response variability across the image pairs: F(8, 171) = 1.22, n.s. In contrast, there was a significant difference across the image pairs for the Deaf subjects [F(8, 189) = 3.54, p < .001], and the smallest standard deviation occurred for responses to the image pair that straddled the category boundary (see Figure 7B).
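A Levene's test of this kind can be sketched with scipy. The data below are synthetic (the real per-pair accuracies are only shown graphically in Figure 7); they merely illustrate the predicted pattern of least variability at the boundary pair:

```python
import numpy as np
from scipy.stats import levene

# Hypothetical discrimination accuracies for one group: 22 subjects x 9 image
# pairs, with response variability smallest at the boundary pair (pair 4),
# as a CP effect predicts.
rng = np.random.default_rng(1)
spread = np.array([0.12] * 9)
spread[4] = 0.03                      # least variability at the boundary
acc = 0.7 + rng.standard_normal((22, 9)) * spread

# Levene's test for homogeneity of variance across the nine image pairs,
# the analysis reported for the discrimination data.
W, p = levene(*[acc[:, j] for j in range(9)])
print(f"W = {W:.2f}, p = {p:.4f}")
```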

GENERAL DISCUSSION

The results of both experiments provide evidence for categorical perception for some of the visual-gestural phonological components of American Sign Language. It appears that categorical perception may arise naturally as a part of language processing, whether that language is signed or spoken. Deaf ASL signers, unlike hearing nonsigners, exhibited increased discrimination ability for contrastive hand configurations at the relevant boundary, but no increase in discrimination ability was observed for hand configurations that were non-contrastive. Thus, the CP effect for hand configuration is based on linguistic categorisation, rather than on a purely visual categorisation of hand configurations.

Figure 7. (A) Standard deviations for each subject group for identification of hand configuration (image no. 4 is the category boundary). (B) Standard deviations for responses of the Deaf subjects in the hand configuration discrimination task (note that the lowest standard deviation is at the category boundary, which is indicated by the darkened bar). (C) Standard deviations for responses of the hearing subjects for the hand configuration discrimination task (standard deviations did not differ significantly across the image pairs).

Two other recent studies of categorical perception in signed languages have found significant CP effects in the domain of linguistic facial expressions, but these effects have not been clearly linked to linguistic experience. First, several studies using computer morphing techniques have demonstrated CP effects for emotional facial expressions (Calder, Young, Perrett, Etcoff, & Rowland, 1996; Etcoff & Magee, 1992; Young, Rowland, Calder, Etcoff, Seth, & Perrett, 1997). Systematic morphing between images of endpoint emotions (e.g., happy and sad expressions) yielded discontinuous discrimination and identification performance, with better discrimination across category boundaries than within categories. McCullough and Emmorey (1999) and Campbell, Woll, Benson, and Wallace (1999) investigated whether linguistic facial expressions are perceived categorically for American Sign Language or for British Sign Language, respectively. Both studies examined a continuum between the facial expressions that mark yes/no and WH questions (for both ASL and BSL, yes/no questions are marked with raised eyebrows and WH questions are marked with furrowed brows). McCullough and Emmorey (1999) also examined a continuum between the mm and th facial adverbials in ASL (mm is produced with the lips pressed together and roughly indicates 'without effort'; th is produced with the tongue protruding slightly and indicates 'carelessly'). Both the British and American studies found evidence for categorical perception of linguistic facial expressions for both Deaf signers and hearing nonsigners for all continua investigated. Both groups of subjects showed better discrimination for stimuli that straddled the category boundaries (although the effects were weaker and less consistent for the Campbell et al. (1999) study).

The finding that hearing nonsigners demonstrated CP for linguistic facial expressions suggests that the CP effects observed for Deaf signers in these experiments were not due to linguistic experience. The results also indicate that CP effects are not limited to emotional facial expressions. That is, hearing and Deaf people perceive facial expressions that do not convey basic emotions (i.e., linguistic facial expressions) as belonging to distinct categories. It may be that humans have evolved a perceptual mechanism for classifying facial displays that allows for efficient discrimination and recognition of communicative expressions, even when these expressions are unfamiliar (as linguistic facial expressions would be for nonsigners). A fair amount of data indicate that the perception of human faces has many unique properties and may engage special neural mechanisms (e.g., Farah, Wilson, Drain, & Tanaka, 1998). The fact that only Deaf signers exhibited CP effects for contrastive ASL hand configurations suggests that although humans may have special perceptual mechanisms for recognising the human hand which allow for categorisation, language experience plays an important role in the discrimination of hand configurations.

Experiment 2 investigated whether the failure to find a categorical perception effect for place of articulation in ASL was due to interference from a phonological rule of displacement. However, no CP effects were found even when we presented stimuli that controlled for the effects of allophonic variation. A failure to find categorical perception for a contrastive linguistic category is not unique to sign language. As noted, for speech the affricate/fricative continuum has failed to produce CP effects (Ferrero et al., 1982; Rosen & Howell, 1987), and vowels and tones are not always perceived categorically (Abramson, 1961; Fry, Abramson, Eimas, & Liberman, 1962). CP effects within the speech domain are argued to be modulated by the nature of the articulation of speech sounds. Vowels, tones, and fricatives exhibit more acoustic variability compared with stop consonants due to their more continuous, less discrete articulation. Similarly, the production of place of articulation is much more variable than the production of hand configuration. Thus, a possible explanation for the lack of a CP effect for place of articulation in ASL may lie in the more variable and continuous nature of its articulation.

As we have discussed, a phonological displacement rule may alter the place of articulation of a sign. In addition, whispering in ASL can displace signs normally articulated on the face and body to a location to the side, below the chest, or to a location not easily observed. The articulation of hand configuration is less dramatically altered during whispering (see illustrations in Emmorey, 2002). Furthermore, locations in signing space that express spatial relationships are treated as analogue representations of physical space, whereas hand configurations that specify object size are treated more categorically (Emmorey & Herzig, in press). The analogue properties of locations in signing space and the variability in the precise location of articulation with respect to the body for ASL signs may result in category boundaries that are less discrete than the category boundaries between hand configurations. Thus, enhanced perceptual sensitivity to hand configuration boundaries for Deaf ASL signers may arise from the stability of the hand configuration category boundary.

This pattern of results helps to illuminate what type of perceptual stimuli give rise to categorical perception effects. Specifically, linguistic categorisation of stimuli may be necessary, but not sufficient, for CP effects to arise for either speech or for sign perception. In addition, it may be that the category boundary between two linguistic categories must be relatively stable. For sign, phonological rules and phonetic effects of register can alter place of articulation boundaries, and this may be why Deaf signers fail to exhibit enhanced discrimination at place of articulation category boundaries. For speech, the perception of vowels is strongly affected by context; for example, listeners may establish a reference system of vowel qualities for individual speakers (e.g., Ladefoged & Broadbent, 1957). Perception of stop consonants is less affected by speaker identity or by other context effects, and several researchers have hypothesised that such differences explain why CP effects are found for stop consonants, but not for vowels (e.g., Fry et al., 1962). For both sign and speech, categorical perception of linguistic categories may arise only when the category boundary is relatively stable, and perhaps only when articulation of category members is relatively discrete, rather than continuous.

To conclude, Deaf signers appear to develop special abilities for perceiving aspects of signed language that are similar to the abilities that speakers develop for perceiving speech. An open question is whether these perceptual abilities develop early in infancy, as has been found for speech. All (but one) of the Deaf subjects in our experiments were exposed to ASL from birth by Deaf parents or other Deaf relatives. A next step is to study signers (either Deaf or hearing) who acquired ASL in adulthood to investigate whether categorical perception for ASL hand configurations is dependent upon early exposure to sign language and/or length of experience with ASL. For example, there is quite a bit of evidence that late learners of sign language exhibit a 'phonological bottleneck' when processing ASL on-line and that this bottleneck is related to age of language acquisition, rather than to the number of years signing (Mayberry & Fischer, 1989; see Emmorey, 2002, for a review). Late learners of ASL devote much more attention to the phonological structure of signs, which interferes with their ability to quickly access lexical semantic information. It is possible that late learners have not developed the categorical perception abilities demonstrated by the native signers in our study, and thus they are less efficient at recognising and processing the phonological components of ASL.

In sum, these experiments are the first to document categorical perception effects in sign language that arise from linguistic experience. The results suggest that categorical perception emerges naturally as part of language processing, regardless of language modality. CP effects were weaker for sign compared to what has been reported for speech, and this difference may reflect psychophysical differences between audition and vision. The results of these experiments also indicate that categorical perception in the visual domain is not limited to human faces (see also Livingston, Andrews, & Harnad, 1998). Studies of sign language perception provide an unusual window into the interplay between linguistic and perceptual systems, and further studies may illuminate how the acquisition of a signed language can affect visual discrimination of human body actions recruited for linguistic expression.


REFERENCES

Abramson, A.S. (1961). Identification and discrimination for phonemic tones. Journal of the Acoustical Society of America, 33, 842.

Aslin, R.N., Pisoni, D.B., & Jusczyk, P.W. (1983). Auditory development and speech perception in infancy. In M.M. Haith & J.J. Campos (Eds.), Infancy and the biology of development. New York: John Wiley & Sons.

Battison, R. (1978). Lexical borrowing in American Sign Language. Silver Spring, MD: Linstok Press.

Beale, J.M., & Keil, F.C. (1995). Categorical effects in the perception of faces. Cognition, 57, 217–239.

Bloch, B. (1941). Phonemic overlapping. American Speech, 16, 272–284.

Bornstein, M. (1987). Perceptual categories in vision and audition. In S. Harnad (Ed.), Categorical perception (pp. 287–300). Cambridge: Cambridge University Press.

Brentari, D. (1990). Theoretical foundations of American Sign Language phonology. Doctoral dissertation, University of Chicago. Published 1993, University of Chicago Occasional Papers in Linguistics, Chicago, Illinois.

Brentari, D. (1998). A prosodic model of sign language phonology. Cambridge, MA: MIT Press.

Burns, E.M., & Ward, W.D. (1978). Categorical perception—phenomenon or epiphenomenon: Evidence from experiments in the perception of melodic musical intervals. Journal of the Acoustical Society of America, 63, 456–468.

Calder, A.J., Young, A.W., Perrett, D.I., Etcoff, N.L., & Rowland, D. (1996). Categorical perception of morphed facial expressions. Visual Cognition, 3, 81–117.

Campbell, R., Woll, B., Benson, P.J., & Wallace, S.B. (1999). Categorical processing of faces in sign. Quarterly Journal of Experimental Psychology, 52A, 62–95.

Cohen, J.D., MacWhinney, B., Flatt, M., & Provost, J. (1993). PsyScope: A new graphic interactive environment for designing psychology experiments. Behavior Research Methods, Instruments, and Computers, 25(2), 257–271.

Corina, D. (1990). Reassessing the role of sonority in syllable structure: Evidence from a visual-gestural language. In Papers from the 26th Annual Meeting of the Chicago Linguistic Society: Vol. 2: Parasession on the syllable in phonetics and phonology. Chicago, IL: Chicago Linguistic Society, University of Chicago.

Corina, D.P., & Sandler, W. (1993). On the nature of phonological structure in sign language. Phonology, 10, 165–207.

de Gelder, B., Teunisse, J.-P., & Benson, P.J. (1997). Categorical perception of facial expressions: Categories and their internal structure. Cognition and Emotion, 11, 1–22.

Demers, R., & Farmer, A. (1991). A linguistics workbook. Cambridge, MA: MIT Press.

Eimas, P. (1975). Auditory and phonetic coding of cues for speech: Discrimination of the [r-l] distinction by young infants. Perception and Psychophysics, 18, 341–347.

Eimas, P.D., Siqueland, E.R., Jusczyk, P., & Vigorito, J. (1971). Speech perception in infants. Science, 171, 303–306.

Ekman, P. (1980). The face of man: Expressions of universal emotions in a New Guinea village. New York: Garland Press.

Emmorey, K. (2002). Language, cognition, and the brain: Insights from sign language research. Mahwah, NJ: Lawrence Erlbaum Associates Inc.

Emmorey, K., & Herzig, M. (in press). Categorical versus gradient properties of classifier constructions in ASL. In K. Emmorey (Ed.), Perspectives on classifier constructions in signed languages. Mahwah, NJ: Lawrence Erlbaum Associates Inc.

Etcoff, N.L., & Magee, J.J. (1992). Categorical perception of facial expressions. Cognition, 44, 227–240.

Farah, M.J., Wilson, K.D., Drain, M., & Tanaka, J.N. (1998). What is 'special' about face perception? Psychological Review, 105(3), 482–498.

Ferrero, F.E., Pelamatti, G.M., & Vagges, K. (1982). Continuous and categorical perception of a fricative-affricate continuum. Journal of Phonetics, 10, 231–244.

Fry, D.B., Abramson, A.S., Eimas, P.D., & Liberman, A.M. (1962). The identification and discrimination of synthetic vowels. Language and Speech, 5, 171–189.

Harnad, S. (Ed.). (1987). Categorical perception: The groundwork of cognition. Cambridge: Cambridge University Press.

Kenstowicz, M. (1994). Phonology in generative grammar. Oxford: Basil Blackwell.

Kuhl, P. (1981). Discrimination of speech by nonhuman animals: Basic auditory sensitivities conducive to the perception of speech-sound categories. Journal of the Acoustical Society of America, 70, 340–349.

Kuhl, P. (1991). Human adults and human infants show a 'perceptual magnet effect' for the prototypes of speech categories, monkeys do not. Perception and Psychophysics, 50, 93–107.

Kuhl, P. (1998). The development of speech and language. In T.J. Carew, R. Menzel, & C.J. Shatz (Eds.), Mechanistic relationships between development and learning (pp. 53–73). New York: John Wiley & Sons.

Ladefoged, P., & Broadbent, D.E. (1957). Information conveyed by vowels. Journal of the Acoustical Society of America, 29, 98–104.

Lane, H., Boyes-Braem, P., & Bellugi, U. (1976). Preliminaries to a distinctive feature analysis of American Sign Language. Cognitive Psychology, 8, 263–289.

Liberman, A.M., Cooper, F.S., Shankweiler, D.S., & Studdert-Kennedy, M. (1967). Perception of the speech code. Psychological Review, 74, 431–461.

Liberman, A.M., Harris, K.S., Hoffman, H.S., & Griffith, B.C. (1957). The discrimination of speech sounds within and across phoneme boundaries. Journal of Experimental Psychology, 54, 358–368.

Livingston, K.R., Andrews, J.K., & Harnad, S. (1998). Categorical perception effects induced by category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(3), 732–753.

Macmillan, N.A., Kaplan, H.L., & Creelman, C.D. (1977). The psychophysics of categorical perception. Psychological Review, 84, 452–471.

Mandel, M.A. (1981). Phonotactics and morphophonology in American Sign Language. Doctoral dissertation, University of California, Berkeley, California.

Massaro, D. (1987). Speech perception by ear and eye: A paradigm for psychological inquiry. Hillsdale, NJ: Lawrence Erlbaum Associates Inc.

Mayberry, R., & Fischer, S. (1989). Looking through phonological shape to sentence meaning: The bottleneck of non-native sign language processing. Memory and Cognition, 17, 740–754.

McCullough, S., & Emmorey, K. (1999). Perception of emotional and linguistic facial expressions: A categorical perception study with deaf and hearing subjects. Poster presented at the Psychonomic Society Meeting, November, Los Angeles, California.

Newport, E.L. (1982). Task specificity in language learning? Evidence from American Sign Language. In E. Wanner & L.A. Gleitman (Eds.), Language acquisition: The state of the art (pp. 450–486). Cambridge: Cambridge University Press.

Poizner, H. (1981). Visual and 'phonetic' coding of movement: Evidence from American Sign Language. Science, 212, 691–693.

Poizner, H. (1983). Perception of movement in American Sign Language: Effects of linguistic structure and linguistic experience. Perception and Psychophysics, 33, 215–231.

Poizner, H., & Lane, H. (1978). Discrimination of location in American Sign Language. In P. Siple (Ed.), Understanding language through sign language research (pp. 271–287). New York: Academic Press.

Rosen, S., & Howell, P. (1987). Auditory, articulatory, and learning explanations of categorical perception in speech. In S. Harnad (Ed.), Categorical perception: The groundwork of cognition (pp. 113–195). Cambridge: Cambridge University Press.

Rubach, J. (1984). Cyclic and lexical phonology: The structure of Polish. Dordrecht: Foris.

Sandler, W. (1987). Sequentiality and simultaneity in American Sign Language phonology. Doctoral dissertation, University of Texas, Austin, Texas. Published 1989 as Phonological representation of the sign. Dordrecht: Foris.

Sandler, W. (1996). Representing handshapes. International Review of Sign Language Linguistics, 1, 115–158.

Stokoe, W. (1960). Sign language structure: An outline of the visual communication systems of the American Deaf. Studies in Linguistics, Occasional Papers 8. Silver Spring, MD: Linstok Press.

Werker, J. (1994). Cross-language speech perception: Developmental change does not involve loss. In J. Goodman & H. Nusbaum (Eds.), The development of speech perception: The transition from speech sounds to spoken words (pp. 95–120). Cambridge, MA: MIT Press.

Werker, J., & Tees, R.C. (1983). Developmental changes across childhood in the perception of non-native speech sounds. Canadian Journal of Psychology, 37, 278–286.

Williams, L. (1977). The perception of stop-consonant voicing by Spanish–English bilinguals. Perception and Psychophysics, 21, 289–297.

Wilson, M. (2001). The impact of sign language expertise on perceived path of apparent motion. In M.D. Clark & M. Marschark (Eds.), Context, cognition, and deafness. Washington, DC: Gallaudet University Press.

Young, A.W., Rowland, D., Calder, A.J., Etcoff, N.L., Seth, A., & Perrett, D.I. (1997). Facial expression megamix: Tests of dimensional and category accounts of emotion recognition. Cognition, 63, 271–313.