
Sign Language & Linguistics 11:1 (2008), 69–101. doi 10.1075/sl&l.11.1.11ecc. issn 1387–9316 / e-issn 1569–996x. © John Benjamins Publishing Company

Researcher’s Resources

Handshape coding made easier: A theoretically based notation for phonological transcription

Petra Eccarius and Diane Brentari, Purdue University

This paper describes a notation system for the handshapes of sign languages that is theoretically motivated, grounded in empirical data, and economical in design. The system was constructed using the Prosodic Model of Sign Language Phonology. Handshapes from three lexical components — core, fingerspelling, and classifiers — were sampled from ten different sign languages, resulting in a system that is relatively comprehensive and cross-linguistic. The system was designed to use only characters on a standard keyboard, which makes it compatible with any database program. The notation is made relatively easy to learn and implement because the handshapes, along with their notations, are provided in convenient charts of photographs from which the notation can be copied. This makes the notation system quickly learnable by even inexperienced transcribers.

Keywords: transcription, research tools, sign language phonology, handshape

1. Introduction

Methods for transcribing sign language data are needed at all levels of linguistic analysis. Currently, pictures and glosses are still the most common ways of representing signs, neither of which adequately captures their phonology at a featural level. Notation systems that capture this sort of linguistic information are needed not only in research to depict phonological commonalities and contrasts (thus enabling more efficient analyses), but also in academic publications so that other researchers can more fully understand (and continue to question) new advances in the field. Adequate phonological transcription is especially important for researchers because it is phonological information that expresses distinctions throughout the grammar of a language. Consequently, knowing which kinds of


properties to look for, the best way to transcribe them, and how to search for individual properties in the transcriptions is crucial. Because it is in the data transcription that observable patterns emerge, transcribing too much, too little, or the incorrect properties can make the difference between formulating useful analyses and either drawing incorrect conclusions or finding no systematicity at all.

Notation systems in general (regardless of language modality) vary greatly depending on the requirements of the research project involved. In the case of sign language research, however, the problem of finding a way to transcribe data is doubly difficult — not only do the needs of specific projects vary, but also the visual, more simultaneous nature of these languages is difficult to capture by the usual "spoken language" means (i.e. mostly linear arrangements of single symbols based on familiar written alphabets). Furthermore, researchers are still determining which aspects of these languages are linguistically relevant to their phonology, morphology, syntax, etc. Thus, knowing what to transcribe in these languages becomes as difficult a task as knowing how to transcribe it.

In this paper, we describe a notation system developed for use in studying one phonological aspect of sign languages, namely handshape. This system was created to aid in the phonological analysis of handshape in a cross-linguistic research project studying classifier forms, but ultimately it was expanded to include handshapes found throughout the lexicons of ten sign languages — Hong Kong Sign Language, Japanese Sign Language, British Sign Language, Swedish Sign Language, Israeli Sign Language, Danish Sign Language, German Sign Language, Swiss German Sign Language, and American Sign Language. A detailed analysis of handshape was needed to aid us in identifying phonological and morphological patterns because this parameter can vary in its morphophonemic properties depending on context (e.g. potentially morphological features in classifier handshapes can be purely phonological features in core lexical signs; see Eccarius 2008 and Brentari & Eccarius, in press). The required details included aspects like the number of finger groups in a handshape, the fingers involved in each group, their joint configurations, and the position of the thumb. To perform this kind of analysis, however, our data first had to be transcribed in a manner that carried this kind of detail in a consistent and easily searchable format. The system presented here was developed with these specific needs in mind and was based on the following goals:

– It should have sound theoretical grounding so that natural classes will be apparent.

– It should be searchable within commonly used database systems (i.e. it should contain no characters other than those on a standard keyboard).

– It should be economical (i.e. the representation should be as compact as possible while continuing to convey important linguistic information).


– It should be relatively easy to use, even for inexperienced signers or inexperienced transcribers, so that training is as fast as possible and the transcriptions are as reliable as possible.
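The database-searchability goal above can also be checked mechanically. As a small illustration (the function name and the sample strings are ours, not part of the published system), a transcription field can be validated as containing only standard-keyboard characters:

```python
def is_database_safe(notation: str) -> bool:
    """Return True if a notation string uses only printable ASCII
    characters, i.e. symbols typable on a standard US keyboard,
    so it can live in ordinary searchable database text fields."""
    return notation.isascii() and notation.isprintable()

# A plain-ASCII notation passes; a special-font glyph (of the kind
# more iconic systems require) would fail such a check.
print(is_database_safe("BT-"))   # True
print(is_database_safe("ʘ"))     # False
```

A validation step like this would catch stray special-font characters before they silently break later database queries.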

In this work, we describe the main characteristics of the notation system ultimately developed for our project, explaining how it meets the goals stated above. We begin by providing some background about why our research project required a new system, followed by an explanation of the system's theoretical grounding. We then describe the design of the system, showing how it facilitates detailed database searches without causing the transcriptions to become too unwieldy. Finally, we briefly discuss the practical application of our system with regard to use by transcribers in hopes that it will prove beneficial to other researchers with similar notational requirements.

2. Background

2.1 Terminology

To keep the aims of this paper clear, we maintain a distinction between the terms 'notation system', 'transcription' and 'theoretical model'. Notation systems and transcriptions are both written abstract renderings of a linguistic signal linked to specific properties (or values) in production (van der Hulst & Channon, in press). In this paper, we use the former term to refer to the symbolic system itself, and the latter to refer to the end result of that system's use (i.e. written representations of actual data). Theoretical models, on the other hand, are representations based on abstract principles of linguistic organization. For example, a feature geometry (e.g. Clements 1985) used to represent the hierarchical organization of phonological features of a spoken or signed language would be considered a theoretical model. Notation systems may or may not be based on theoretical models (i.e. they can represent visual or physiological characteristics of an utterance whether or not they have linguistic significance). Our handshape notation system is linked to such a model, namely, the Prosodic Model of Sign Language Phonology (Brentari 1998). Debating the merits of various theoretical models for sign languages is not one of the aims of this paper, and consequently, we restrict our discussion of models to two points: (1) we have made the decision to base our notation system on a theoretical model, which we present as an advantage of the system; and (2) since our notation system is based on a particular phonological model, we describe it here only to the extent needed to understand the notation and how to use it. This issue will be taken up again in the next section.


2.2 Existing notation systems

Before describing the specifics of our notation system, we must first explain why we felt it necessary to develop a new one. Many notation systems exist for use with sign languages, but their representations of handshape were not adequate to address the particular research needs of our project. In this section, we will provide three examples of existing systems frequently used by sign language researchers and then briefly explain why they were not sufficient for our purposes.1

One of the oldest notation systems developed for a sign language is the system developed by Stokoe and used in the Dictionary of American Sign Language (Stokoe, Casterline & Croneberg 1965). At the time it was developed, this system (and the analysis it represented) was far beyond anything else available in terms of the linguistic complexity it could represent; it represented the handshape, movement, location and orientation of a sign instead of merely describing or depicting the sign as a whole as most others did. However, when compared to more recent research in sign language phonology, it is found to be incomplete in some important respects. For example, the 19 handshape symbols used in Stokoe's system represent only a subset of the more current versions of ASL's contrastive handshape inventory. Also, because of its original purpose (distinguishing between minimal pairs of lexical items), it offers very little linguistic detail in its handshape symbols. Most symbols are named based on their visual similarity to ASL fingerspelling handshapes (e.g. 'B' for all handshapes visually resembling the fingerspelled letter w) rather than more linguistically descriptive categories like the number of selected fingers in the handshape (i.e. those that are active or important in a handshape; see Mandel 1981) or their joint specifications. Furthermore, what few diacritics are used to represent variants of the "basic" symbols are not always consistently applied.2 Consequently, the system fails to capture many of the important relationships between the handshapes being examined by our project.

Another attempt at transcribing (and writing) sign languages, SignWriting (e.g. Sutton 2002), has its own disadvantages for this sort of linguistic research. It is highly iconic, and, while it does represent in its symbols some (but not all) of the phonological features important to a study of handshape, the features themselves

1. Other notations may soon exist that might also be compatible with this type of research. For example, Liddell & Johnson's notation system (1989) is currently in the process of evolving from earlier versions (see Johnson & Liddell, in prep.). However, these revisions were not available at the time we were developing our system.

2. For example, three dots over a character can represent a spread version of an already bent (or 'clawed') base handshape (0 vs. )), a curved version of an extended, unspread base handshape (w vs. =), or a bent version of an extended and spread base handshape (Y vs. b).


are not always depicted consistently and cannot be teased apart for use in searches. The resulting number of separate handshape symbols (110 are listed on their webpage) can also make it cumbersome for transcribers to learn and use. In addition, special fonts and keyboards are needed to use the system, making it incompatible with many searchable databases.

HamNoSys (Prillwitz et al. 1989) is another notation system used by researchers. This system, although developed more specifically for use in sign language research than SignWriting, shares many of the same disadvantages in terms of our needs. It is largely iconic and requires special fonts and keyboards, again making detailed searches more difficult. Also, while it is much more detailed and more consistent in its featural representations than SignWriting (its symbols sometimes contain more phonetic information than even our project requires), because it was not based on any particular theoretical model, it still misses some of the basic generalities between handshapes that we were looking for. For example, many theoretical models of sign language phonology distinguish between 'selected' and 'non-selected' fingers in handshapes, differentiating between those that are active or foregrounded in a handshape and those that remain (Mandel 1981). HamNoSys does not utilize this distinction in its representation of handshape, preferring to stay with a more atheoretic approach; therefore, handshapes with the same selected fingers (e.g. the beginning and ending handshapes in the ASL sign send, 6 and >) have unrelated base symbols.3 For these reasons, this system, like the others, did not meet the needs of our project.

In our research of these and other notation systems, we were unable to find any for handshape which focused exclusively on that parameter in great enough detail to be a useful tool in our phonological or morphological analysis. One reason for this lack of detail is that most notation systems currently available to the research community strive to represent the whole sign — the handshape representation is only a small part of the overall transcription. Because of this, they understandably make their handshape notation as compact as possible (usually one character representing the whole handshape) so that the entire transcription is more space efficient. Unfortunately, this efficiency can only be achieved at the expense of detail, and it is exactly that detail that we needed for our project. Therefore, our project needed something new.

3. See Takkinen (2005) for a more detailed discussion of these sorts of limitations.


3. Theoretical grounding

The first of our goals in the development of this notation system, theoretical grounding, was of paramount importance for the purposes of our project; we needed a notation that would convey very specific phonological information about each handshape in the data. To this end, we used the Prosodic Model of Sign Language Phonology (Brentari 1998) and expansions of that model's handshape branch for cross-linguistic use by Eccarius (2002), as well as more recent research pertaining to handshape contrast (Eccarius 2008). The Prosodic Model (hereafter abbreviated as PM) represents handshapes (among other things) by means of a binary branching feature hierarchy combined with a set of distinctive features. (See Figure 1 in the next section for an illustration of this structure; the features of the model, their placement, and their definitions can be found in Brentari 1998.) This is an area of sign language phonological representation that has a reasonably wide degree of consensus — van der Hulst (1995), Sandler (1996), Channon (2002) and van der Kooij (2002) have very similar handshape representations — however, the capabilities of the PM were best suited to our notation's requirements. Minimally, our notation needed to represent the number of separate finger groups present in a handshape, the digits that belonged to each group, and the joint specifications of each one. Unlike other models, the handshape portion of the expanded PM includes branches for three groups of fingers per handshape — primary selected fingers (PSF), secondary selected fingers (SSF), and nonselected fingers (NSF) — as well as combinations of branching structures and features to represent which digits and joint configurations are involved in each group.
The SSF group, not in the original PM, is a branch of structure added by Eccarius (2002) to account for the most complex handshapes found across the languages in the project, in which the selected fingers must be divided into two groups because they assume separate joint postures (e.g. } from Hong Kong Sign Language).

3.1 Attested vs. possible forms

But why base the notation system on a theoretical model in the first place? First, for our project we wanted a notation system that would allow us to transcribe all attested forms in the languages in question. This is different from the set of all possible forms. Allowing the system to generate every physiologically possible form results in a list that is much larger than what actually occurs. For example, not all finger combinations are attested as selected fingers (e.g. index+ring), and not all joint configurations are attested in every finger group (e.g. [spread] and [stacked] are not found in the SSF). We wanted a notation system that (as much


as possible) included only handshapes that were known to occur, while still being flexible enough to expand easily if/when more are found.

3.2 Markedness and complexity

We also wanted a system that (at least in part) could reflect linguistic markedness through the complexity of its transcriptions. A phonologically-based notation system has built into it factors such as ease of articulation, ease of perception, order of acquisition, and frequency of occurrence, while a purely phonetic notation system does not. As a result, the transcription of a complex, or 'marked', form with regard to the factors just mentioned will actually appear more complex than that of an 'unmarked' form if the notation system used to transcribe it is based on phonological principles. Conversely, transcriptions of marked and unmarked forms using a phonetic system would be roughly equivalent in terms of their complexity. Since there is now sufficient evidence across a wide number of sign languages in the Americas, Asia, and Europe to show that factors such as these exert common pressures on their respective handshape systems (e.g. Mandel 1981; Ann 2006; van der Kooij 2002; Greftegreff 1993; Eccarius 2008), relative markedness was important for our analysis. We needed a notation system that could reflect these differences.
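One practical consequence of this design is that transcription length itself can serve as a rough markedness proxy. A toy sketch of that idea (the function and the example strings are invented for illustration; the real symbol inventory is introduced in Section 4):

```python
def complexity_score(notation: str) -> int:
    """Crude markedness proxy: count the symbols in a transcription.
    In a phonologically based notation, unmarked handshapes need few
    symbols, while marked forms accrue extra joint/finger-group symbols."""
    return len(notation)

# Invented transcriptions: a short string standing in for an unmarked
# handshape versus a longer one carrying additional joint symbols.
print(complexity_score("B"))     # 1
print(complexity_score("B<T-"))  # 4
```

Sorting or filtering a transcribed corpus by such a score would let relative markedness fall out of the notation directly, with no extra annotation.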

Our notation system is phonologically-based rather than phonetic, but each of the whole handshapes represented and transcribed in our charts (see Appendix) is not a phoneme in and of itself. These whole handshapes are not phonemic because handshape systems from many sign languages are represented here, and the concept of phonemic opposition holds for just one language at a time. Moreover, the term 'phonemic' has come to mean 'lexically contrastive' (i.e. it creates a minimal pair in the core lexicon). While each handshape in our set has a meaningful contrast within at least one language — either in the lexicon, the fingerspelling system, or classifier system — they are not necessarily contrastive in the core lexicons of all ten languages of the project.

3.3 Contrast

Here, we must clarify what we mean by 'contrast', since the ability to represent contrasts is another important requirement of the notation system. The notion of 'contrast' itself is in the process of evolving. Starting with the theory of Structuralism, only properties involved in minimal pairs were considered candidates for contrastiveness; this practice continued through the 1950s and 1960s and still persists today (Jakobson, Fant & Halle 1951; Jakobson & Halle 1956; Chomsky & Halle 1968). In the 1970s, theories of phonological representation began to change, and with that work, categories of features emerged besides phonemic (contrastive) and


phonetic (redundant). For example, with the advent of Autosegmental Phonology (Goldsmith 1976) and Feature Geometry (Clements 1985), some features (e.g. tone) were shown to have special abilities, and as a result, all features no longer had the same status or type of representation. In addition, the observation that features may be contrastive in one position in a word (e.g. word-initial position) while redundant in another (e.g. word-final position) became more important. To make sense of these new discoveries, Clements (2001) suggested that three conceptual distinctions be made among phonological features: distinctive, active and prominent.4 Anytime a feature is used to establish a minimal pair it is distinctive (i.e. when the feature distinguishes two unrelated meanings in the core lexicon). For instance, y and < in ASL are found in the minimal pair kiss-on-the-cheek vs. thick (consistency). In this case, we could call the feature for the contact between the thumb and fingers something akin to [closed] or [loop]. In contrast, anytime a feature is involved in a phonological operation (i.e. a rule or constraint), it is active. For example, the feature [stacked] operates in the phonological rule changing a 'plain' handshape Y to d in signs like see and verb (ASL) when the middle finger contacts the face (Eccarius 2008; Brentari & Eccarius, in press). Finally, a property is prominent if it participates in certain types of phonological operations, one of which is morphological status.5 Considering [stacked] again in ASL, this feature represents specific differences in meaning in related classifier forms (e.g. in the body part classifiers for 'legs stand' Y vs. 'legs leap' d).

Moreover, the concepts of distinctive, active and prominent can hold for an entire language, or only for a particular part of the lexicon. This is particularly true for languages with multiple origins, as is the case with many sign languages. These languages have a foreign component of the lexicon based on the manual alphabet and/or written characters, a core vocabulary, and a spatial lexicon containing spatial verbs and classifier forms (Brentari & Padden 2001). Let us return again to the feature [stacked] as an example. This feature is distinctive in the manual alphabet of ASL ('V' Y vs. 'K' d), prominent in classifier forms ('legs stand' Y vs. 'legs leap' d), and active in the phonological rules of the whole lexicon (Y becomes d in see (core) and verb (foreign), and Z becomes stacked in the classifier form 'car on its side' (spatial) due to its orientation). In our project we wanted to include data from all parts of the lexicon, utilizing all different kinds of contrastive relationships. If a feature is distinctive, active, or prominent in any part of the lexicon, it is a part of the PM. Consequently, handshapes utilizing all of these

4. These concepts were used for sign language even before spoken language (Brentari 1998), though not labeled as such.

5. Prominent status is granted to a feature if it meets the criteria to be an autosegmental tier, established in Goldsmith (1976).


contrast types can be represented by our notation system and are included in our appendix charts.
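The three statuses of [stacked] across the ASL lexical components, as just described, can be restated as a small lookup table. This is a toy summary of the text, not a component of the notation system itself:

```python
# Status of the ASL feature [stacked] in each lexical component,
# summarizing the examples above. (It is also active in rules that
# apply across the whole lexicon; this table records the per-component
# statuses named in the text.)
STACKED_STATUS = {
    "foreign": "distinctive",  # manual alphabet: 'V' vs. 'K'
    "spatial": "prominent",    # classifiers: 'legs stand' vs. 'legs leap'
    "core": "active",          # rule-governed, e.g. in SEE
}

def status_of_stacked(component: str) -> str:
    """Look up the contrastive status of [stacked] by lexical component."""
    return STACKED_STATUS[component]

print(status_of_stacked("foreign"))  # distinctive
```

Tagging features this way in a database would let a query pull out, say, every feature that is prominent in the spatial lexicon but merely active elsewhere.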

3.4 Empirical basis

While our notation system, like the major research questions of our project, was informed by an existing phonological theory (a 'top-down' approach to issues of contrast), we depended upon actual data (a 'bottom-up' approach) to test the system. After all, a theoretical approach without empirical grounding is worth very little.

The data that helped serve as a basis for this notation came from the ten languages in the cross-linguistic classifier project mentioned in the introduction. Handshapes were taken from the three parts of each sign language lexicon following Brentari & Padden (2001) — the core lexicon (from dictionary vocabulary), the foreign lexicon (from the manual alphabet or forms based on Chinese characters), and the spatial lexicon (from classifier predicates). First, for the core and foreign parts of the lexicon, a native signer of each language articulated handshapes from the standard dictionary and was photographed by Brentari while in each country. Native signers were then interviewed about the use of each handshape, and a list was made indicating in which of the three components of the lexicon the handshape was used. Our initial handshape chart was created from these handshapes. We then examined elicited classifier data for each language, specifically, data from a picture description task developed by Zwitserlood (2003). This was extremely important because while most of the dictionaries for these sign languages were very good (it was one of the criteria for inclusion in the project), very few researchers had elaborated on the foreign or core handshapes by looking carefully at the classifier system. We then used this expanded set of data to test our theoretical assumptions. When additional handshapes were found that appeared to be contrastive in some way, we added them to our original handshape charts and made alterations to the notation system as necessary.

This section has described the theoretical and empirical bases for our notation system. To maximize data coverage, we used as wide a range of handshapes as possible from the ten sign languages of our study. An expanded notion of 'contrast' was also employed, and a variety of sampling methods used so that we would have access to all three lexical components of each language. At the same time, we did our best to ensure that the system did not over-generate, and that it represented the relative markedness of handshapes as much as possible. In the next section we describe the design of the system itself in more detail.


4. Searchable and economical design

4.1 Searchable characters

The second of our goals for the system, choosing the symbols that would represent each theoretical feature/feature group, required a little more creativity. First, each needed to be represented by characters available on a standard keyboard. Meeting this requirement ensured that the system could be used in the text fields of almost any database program without additional fonts or scripts, unlike more iconic notation systems such as SignWriting and HamNoSys. Second, the linguistic aspects of each handshape needed to be independently represented in the notation to facilitate searches for specific phonological feature bundles. In other words, each of the three groups of fingers (PSF, SSF, and NSF) and their joint specifications needed to be represented by separate characters. Figure 1 shows the cross-linguistic version of the PM tree structure for handshape (Eccarius 2002) as well as the relationships between parts of the model and various aspects of the notation system.6

[Figure 1 tree diagram: the HAND node branches into SELECTED FINGERS (divided into PRIMARY and SECONDARY SELECTED FINGERS) and NONSELECTED FINGERS. Finger groups carry FINGERS structure (QUANTITY: [all], [one]; POINT OF REFERENCE: [mid], [ulnar]), a THUMB node ([opposed], [unopposed]), and JOINTS (base/nonbase, with features such as [extended], [flexed], [loop], [spread], [stacked], [crossed]). Ovals in the figure mark the parts of the tree notated by base symbols; rectangles mark the parts notated by joint symbols.]

Figure 1. The Prosodic Model's Hand branch and its relationship to the notation.

Twenty-six characters were chosen from the standard US keyboard and mapped onto the possible finger combinations and joint configurations predicted by PM or based on contrasts found in subsequent work (Eccarius 2008). These characters fall into two groups: (1) base symbols (which represent the areas of the tree surrounded by ovals), and (2) joint symbols (which represent the areas surrounded by rectangles). The divisions between the finger groups themselves are represented by the organization of these characters as described in Section 4.2.
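Because every finger group and joint specification gets its own character, feature-bundle searches reduce to simple per-character queries over a text field. A sketch of the idea (the glosses and notation strings here are invented for illustration, not actual transcriptions from the project):

```python
# Hypothetical mini-database of (gloss, handshape notation) records.
# The notation strings are invented; the point is only that each base
# or joint symbol is an independent, individually searchable character.
records = [
    ("SEE", "U<"),
    ("SEND", "U"),
    ("THICK", "B^"),
]

def find_by_symbol(db, symbol):
    """Return glosses whose notation contains the given character,
    e.g. every handshape whose base symbol is 'U' (index + middle)."""
    return [gloss for gloss, notation in db if symbol in notation]

print(find_by_symbol(records, "U"))  # ['SEE', 'SEND']
```

In a real database the same query is just a substring (LIKE) search over the notation column, which is exactly what single-character iconic symbols in special fonts make difficult.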

4.1.1 Base symbols

The base symbols of this system indicate which digits are included in particular finger groups. They do this by representing the Fingers node (a combination of the Quantity and Point of Reference branches, used to indicate the number and location of fingers) and the Thumb node of both the PSF and SSF structures in the model. (NSF groups do not require base symbols since the group is comprised of all fingers not in the other groups.) Table 1 lists all 13 base symbols, the digits included in the combination they represent, and the theoretical features used in PM to indicate those combinations.7

Table 1. Base symbols with the specific selected fingers and PM features they represent.

Base symbol   Selected fingers   PM features (Quantity; Point of Reference)
B             IMRP               [all]
M             IMR                [all]/[one]
D             MRP                [all]/[one]; [ulnar]
U             IM                 [one]/[all]
H             IP                 [one]/[all]; [ulnar]
A             MR                 [one]/[all]; [mid]
P             MP                 [one]/[all]; [mid]/[ulnar]
2             RP                 [one]/[all]; [ulnar]/[mid]
1             I                  [one]
8             M                  [one]; [mid]
7             R                  [one]; [ulnar]/[mid]
J             P                  [one]; [ulnar]
T             T (thumb)          Thumb node
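The symbol-to-digits mapping in Table 1 translates directly into a lookup table. The sketch below is ours, purely illustrative of how a transcription tool might use the inventory:

```python
# Base symbols from Table 1, mapped to the digits they select
# (I = index, M = middle, R = ring, P = pinky, T = thumb).
BASE_SYMBOLS = {
    "B": "IMRP", "M": "IMR", "D": "MRP",
    "U": "IM", "H": "IP", "A": "MR", "P": "MP", "2": "RP",
    "1": "I", "8": "M", "7": "R", "J": "P",
    "T": "T",
}

def selected_digits(base_symbol: str) -> str:
    """Look up which digits a base symbol picks out as selected fingers."""
    return BASE_SYMBOLS[base_symbol]

print(selected_digits("U"))   # IM
print(len(BASE_SYMBOLS))      # 13
```

Inverting this dictionary also gives a quick consistency check: each finger combination should map back to exactly one base symbol.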

6. This section only presents a very basic description of the Prosodic Model’s capabilities. For more information about the model as a whole, see Brentari (1998), and for more information about the cross-linguistic expansion of the model’s Hand branch, see Eccarius (2002).

7. I=index, M=middle, R=ring, P=pinky, and T=thumb.


To help our transcribers more easily learn the system, the twelve symbols in Table 1 representing finger combinations were chosen primarily because of their relationship to ASL fingerspelling and number handshapes with the same combinations of fingers in their PSF group. In instances where a combination of fingers did not occur as selected fingers in an ASL handshape, mnemonics from ASL classifiers or from other languages were used when possible.8 The thumb base symbol ('T') was chosen for obvious reasons, and is set apart because it is used in conjunction with the other base symbols, whereas the symbols used to represent the finger combinations themselves are typically not used in combination with each other.9

4.1.2 Joint symbols

Joint symbols represent the joint specifications of the various finger combinations. The number of symbols possible for each finger group varies depending on the joint features available in PM's tree structure; accordingly, there are ten joint symbols possible for use with the fingers in the PSF group, two possibilities for use with the SSF group, and two possibilities for the NSF group. The thumb, which can also use the aforementioned joint symbols, can additionally be accompanied by the 'unopposed' symbol. Table 2 lists all thirteen joint symbols (organized by finger group possibilities), as well as the joint configurations they signify and the theoretical features used by PM to represent those configurations.10 (See Brentari (1998) for definitions and illustrations of these joint configurations.)

As with the base symbols, the decision to use the specific characters for the joint symbols was largely mnemonic; in this case, the symbols were chosen because of their relative iconicity, i.e. they look as much like fingers in the particular joint configurations as the standard keyboard allows. The only exceptions are the ‘unopposed’ symbol (-) (which we felt was a fairly obvious choice), and the ‘closed’ symbol for NSF (#). The latter symbol, as well as its extended counterpart (/), was added because in our research, we often need to search the transcribed data for a joint value in a selected vs. nonselected finger group. Using different symbols for NSF joints facilitates these kinds of searches.
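The benefit can be seen in a small worked example. The Python sketch below (variable names ours) searches a handful of notations taken from Figure 4; because the NSF joint symbols differ from their selected-finger counterparts, a plain character search already separates the two groups.

```python
# A few transcriptions drawn from Figure 4 of this paper.
notations = ['1T@;#', '1To;#', '1To;/', 'U^[;#', 'BT', 'BT-^']

# '#' marks closed nonselected fingers; '@' marks closed selected
# fingers, so no structural parsing is needed to tell them apart.
closed_nsf = [n for n in notations if '#' in n]
closed_selected = [n for n in notations if '@' in n]
```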

8. Mnemonics not from ASL fingerspelling include ‘H’ for ‘horns’, ‘A’ for ‘animal face’, and ‘P’ for a Hong Kong Sign Language variant of ‘airplane’. ‘2’ was chosen somewhat arbitrarily since the RP combination is very rare, hence no good mnemonic could be found.

9. Possible exceptions include IMP or IRP selected finger combinations, which occur very rarely cross-linguistically. If they do appear in a language, they can be represented by using combinations of the base symbols provided (‘UJ’ and ‘12’, respectively).

10. The distinction between curved-open ‘narrow’ and ‘wide’ is an addition made to the notation based on subsequent research on joint contrasts (Eccarius 2008). Differences in theoretical structure are as yet unclear.


In addition to the symbols in the table, the absence of joint symbols can also carry meaning about the positioning of the joints. In the PSF and SSF groups, not using a joint symbol implies an extended joint configuration (indicated in Table 2 by the cells marked empty), and the absence of the ‘unopposed’ symbol next to the thumb’s base symbol implies an opposed thumb. In the NSF group, no joint symbol means that there are no digits included in that group. These meaningful omissions in the notation mimic PM’s omission of features and/or branches of the tree structure in certain cases (such as extended selected fingers) indicating less marked forms. Additionally, this characteristic helps the notation fulfill its aforementioned economy requirement.

4.1.3 Status of the thumb

Before discussing how these characters are organized according to finger group, we must first say a few words about how we determine the finger-group membership of the thumb. Very little research currently exists regarding the phonological status of various thumb positions. In fact, thumb positions seem to vary so much from situation to situation that the topic is actively avoided by many researchers. Despite this high degree of variability, thumb positions are not completely unpredictable — some systematicity does exist (Brentari 1998).

Table 2. Joint symbols with the configurations and PM features for each finger group.

PSF     SSF     NSF     Joint configuration        PM features
empty   empty   /       extended                   empty (PSF, SSF); [extended] (NSF)
c                       curved-open (narrow)       nonbase + base
(                       curved-open (wide)         nonbase + base
o       o               curved-closed              [flex] + nonbase + base (PSF); [loop] (SSF)
<                       flat-open                  base
>                       flat-closed                [flex] + base
[                       bent                       [flex] + nonbase
@       @       #       closed                     [flex]
x                       crossed                    [cross]
k                       stacked                    [stack]
^                       spread                     [spread]
-       -               unopposed (thumb only)     [unopposed]


As part of the cross-linguistic expansion of PM, Eccarius (2002) looked at the frequency of various thumb positions in the published handshape inventories of 12 sign languages. Although these conclusions still need to be tested with natural data, we used them as a starting point for the notation of thumb positions in our system. Based on these conclusions, as well as the original predictions of PM, we make two overarching assumptions in our system concerning the notation of the thumb:

1. If the thumb is a member of a selected finger group, it will behave like the other members of the group in terms of the basic joint configurations (i.e. the nonbase, or interphalangeal, joint of the thumb will approximate the nonbase joints of the selected fingers, the thumb will spread away from the hand if the fingers are spread, etc.).

2. In cases where the nonbase joint of the thumb is extended (and thus, group membership is not immediately apparent), the thumb is assigned to a finger group according to the information in Table 3.11 (Shading indicates combinations that are unattested and/or not expected to occur. See Eccarius (2002) for further explanation of these group assignments.)

11. Examination of our data so far indicates that the distinction between the curved-open configurations ‘c’ and ‘(’ is only relevant when there is an opposed, selected thumb. If this turns out not to be the case, we would expect the thumb positions of ‘(’ to pattern with ‘c’.

Table 3. Finger group assignments for the thumb in various extended positions.

(Rows: joint configuration of the fingers in the PSF group. Columns: thumb position (opposed thumb (T): spread from palm, palm adjacent; unopposed thumb (T-): spread from radial side, radial side adjacent). Assignments are given in column order for the attested combinations.)

open, spread (^): PSF, SSF, PSF
open, unspread: PSF, NSF, PSF
curved-open (c): NSF, SSF
curved-closed (o): SSF, NSF
bent, spread (^ [): SSF, NSF
bent, unspread ([): SSF, NSF
flat-open, spread (^ <): PSF, PSF, SSF
flat-open, unspread (<): PSF, NSF, SSF
flat-closed (>): PSF, NSF, PSF
closed (@): SSF, SSF, NSF


4.2 Economical organization

The final task in the development of this notation system was to ensure that the resulting representations would be relatively compact without losing any important linguistic information. This task, although seemingly more trivial than the first two, is important for the human users of the system. If the string of characters is too long, it cannot be easily interpreted (or easily typed out) by a researcher or transcriber, and has therefore lost a great deal of its usefulness. In addition, the longer a notation string is, the greater the opportunity for error. To fulfill this final requirement, we allowed the spatial organization of the symbols to convey meaning about finger group assignment. We also allowed the base and joint symbols to represent the same linguistic information regardless of selected finger group, thus reducing the need for additional symbols and more efficiently utilizing space.

The organization of the system can be understood as being two-tier: first, there is the basic organization of characters within each finger group, and second, there is the organization of the finger groups themselves into the notation for the whole handshape. The basic organization within each finger group is composed of five symbol slots, written in a particular order. This order is illustrated in Figure 2.

At the beginning of the string, there is a slot for the base joint indicating the finger combination involved in that group. Immediately following the finger base symbol is a slot for the thumb’s base symbol. The three remaining slots are reserved for various joint symbols. The first of these, adjacent to the symbol for the thumb, is available for the ‘unopposed’ symbol. The second joint slot houses the ‘spread’, ‘stacked’ and/or ‘crossed’ symbols, which receive their own slot because they are the only joint configurations that can occur in conjunction with other configurations (e.g. [bent] + [spread]). The final slot in the string is reserved for the remaining joint symbols (i.e. those representing degrees of flexion). It is important to emphasize that not all of the five possible slots will be filled while

B   T   -   ^   [
(slot 1: finger base symbol; slot 2: thumb base symbol; slot 3: ‘unopposed’ joint symbol; slot 4: ‘spread’, ‘stacked’ &/or ‘crossed’ joint symbol(s); slot 5: remaining joint symbol)

Figure 2. Basic organization of notation (order of base and joint symbols).


notating a given handshape, nor will they all be available for every finger group. When present, however, the symbols will occur in the order illustrated above.

Once the possible symbols for each finger group have been organized into strings, those strings must also be placed in a specific order. Not surprisingly, in the final arrangement the PSF group comes first, followed by the SSF group, and ending with the NSF group. These groups are divided by a semicolon (;) as illustrated in Figure 3. (For illustration’s sake, the symbols used in this example serve only as placeholders for specific types of symbols — e.g. ‘1’ for base symbols, ‘@’ for joints, etc.)

In the figure, there appears to be a possibility of twelve symbols (including the semicolons) for any given handshape notation — five slots for the PSF group (all those discussed above), four slots for the SSF group ([spread], [stacked] and [crossed] have not been attested in SSF), and one slot for the NSF joint specification. However, an actual notation could never contain all twelve symbols, since there is overlap between the groups. For example, if the thumb base symbol appears in the PSF group, neither of the thumb symbols (base or joint) could be included in the SSF group (and vice versa) since a digit can only belong to one finger group at a time. If this overlap is taken into consideration, the maximum string length for any handshape notation is ten characters, although, in practice, the average notation is only four or five characters long. This relatively short string length, while not optimal for transcribing discourse, is quite manageable when a project focuses on handshape, once again aiding in the fulfillment of this system’s economy requirement.
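This organization is simple enough to parse mechanically. The Python sketch below splits a notation into its finger groups under two assumptions drawn from the description above: groups are separated by ‘;’, and a non-initial group consisting of a lone NSF joint symbol (‘/’ or ‘#’) is the NSF group. The function name and return format are illustrative choices of ours.

```python
def split_groups(notation):
    """Divide a handshape notation into PSF, SSF, and NSF parts.

    The PSF group always comes first; the NSF group, when present,
    is a lone '/' (extended) or '#' (closed); any other non-initial
    group is the SSF group. A leading '~' (the lax marker introduced
    in Section 5) is noted and stripped.
    """
    lax = notation.startswith('~')
    fields = notation.lstrip('~').split(';')
    groups = {'lax': lax, 'PSF': fields[0], 'SSF': None, 'NSF': None}
    for field in fields[1:]:
        if field in ('/', '#'):
            groups['NSF'] = field
        else:
            groups['SSF'] = field
    return groups
```

For example, split_groups('1[;8;#') yields PSF ‘1[’, SSF ‘8’ and NSF ‘#’, matching the three-group example in Figure 4.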

4.3 Examples

To illustrate the symbols and organizational principles discussed in this section, we have provided some examples of handshapes and their notations in Figure 4.

These examples were chosen because of their theoretical similarities across a number of parameters (digits involved, joint configurations, etc.) to demonstrate

1 T - ^ @   ;   1 T - @   ;   #
(Primary Selected Fingers ; Secondary Selected Fingers ; Nonselected Fingers)

Figure 3. Overall organization of notation (division into finger groups).


how the notation can capture these relationships regardless of the physical appearance of the handshapes themselves. For example, despite differences in appearance due to their NSF joint configurations, the second and third handshapes in this example set have the same number of finger groups (two), the same finger combinations in each group (index and thumb, or ‘1T’, for PSF; middle, ring and pinky for NSF) and the same PSF joint configurations (curved-closed, or ‘o’). In other notation systems, these two handshapes often have little (if anything) in common — iconic systems tend to focus primarily on the visually prominent aspects of the handshape (in this case, the NSF joints — the only difference between the two), while those systems that base their symbolism on ASL fingerspelling assign them labels such as ‘babyO’ and ‘F’ respectively. In the current notation, however, the notations for these two handshapes (‘1To;#’ and ‘1To;/’) share all but one of their five symbols, as well as the organization of those symbols; thus, the similarities in the notations reflect the commonalities in the handshapes across multiple parameters. Furthermore, because these parameters are represented by individual characters (instead of, for example, elements of a single iconic symbol), they can be easily searched for in a database program, as discussed above.

5. Practical application of the notation

In this section we will address how transcribers can be quickly trained to use this notation system. It is always difficult for transcribers to reach consensus, whether they are native or inexperienced signers. Native signers’ transcriptions may be biased by their inherent phonological knowledge, similar to Sapir’s native transcribers of Paiute (Sapir 1933). In contrast, inexperienced signers may look at

1T@;# 1To;# 1To;/ 1Tc;/ 1T>;/ 1T<;/ D;1To D^[;1To

U;# U^[;# Ux;# 1[;8;# BT BT-^ BT< BT-k

Figure 4. Example handshapes and notations.


handshapes too phonetically, because they tend to copy what they see as faithfully as possible, without the filter of a phonological system.12

To combat these practical problems and enhance transcriber agreement, we have constructed charts containing photographs and notations for all of the phonological handshapes we have observed in our corpus so far, with gaps filled in from other sources where possible. These charts are provided as a downloadable Appendix to this article.13 With this additional tool, transcribers need only match the handshape that they see in the data with the most similar-looking handshape photograph from our charts, then copy the notation beneath the matching photo into whatever kind of overall transcription tool is being used.

NOT LAX: BT-^    LAX: ~BT-^

Figure 5. Example of a lax vs. a non-lax handshape.

Finally, one additional symbol may be added to the notation system as needed to help transcribers reach consensus. The handshapes in our photo charts were crisply articulated, i.e. they were pronounced as clearly as possible. Handshapes found ‘in the wild’ may be somewhat less crisply articulated, but they are still well-articulated forms. In some cases, however, the handshapes, while recognizable, are noticeably ‘lax’ or ‘sloppy’. Following work on transcribing co-speech gesture (Kita, van Gijn & van der Hulst 1998), we have provided a symbol for ‘lax’ handshapes (i.e. ‘~’), to be used when their criteria for sloppiness are met — i.e. the selected or nonselected fingers are not in their canonical position

12. This has been shown experimentally: when using a phonetically constructed, 3-dimensional, rotatable avatar as a handshape model, non-signers produced less accurate handshapes of the manual alphabet than when using 2-dimensional handshape drawings as models (Geitz & Brentari 1997).

13. These handshape charts are still a work in progress, and we welcome suggestions for expansion.


(see Figure 5).14,15 When required, the lax symbol occurs at the beginning of the entire character string.

6. Conclusion

In summary, we have developed a notation system for handshape that meets the needs of our cross-linguistic classifier project. This system is grounded in the theoretical model we are using for the project, it uses standard characters to individually represent features so that they can be searched for easily in a database format, and, because it utilizes organization as a means of conveying information, it is relatively economical without losing important phonological detail.

The system has numerous advantages over other handshape notations (at least when used for similar research projects), but it also has a few disadvantages. Some of the system’s advantages have already been mentioned in this paper — its relative economy, its ability to show phonological relationships instead of only visual similarities, its use of standard characters for easier searches, etc. In addition, because it is based in phonological theory and not on the handshape inventory of a particular language, this system can be used (and expanded) to represent handshapes not yet included in inventories while not vastly over-generating forms. This will be important as the amount of research performed on understudied sign languages increases and as the phonological distinctions within well-studied languages become better understood.

The disadvantages of the system are minor, but do exist. First, the base symbols have an ASL bias because they were chosen based mostly on ASL handshapes. This bias was intentional (these symbols serve as mnemonics for the members of our project who are most familiar with ASL), but it might make learning the system more difficult for users of different sign languages. Secondly, searches on particular symbols in handshape notations currently require some human decision-making. For example, a search for the symbol ‘1’ will yield handshapes which select the index finger in both the PSF and the SSF groups. However, this problem should be easy to fix with a little extra scripting, and even as is, because SSF groups are relatively rare, filtering through search results manually should not be very difficult.
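The scripting involved is indeed minimal. As a sketch, the following Python function (its name is our own) restricts a base-symbol search to the PSF group by looking only at the text before the first semicolon, after any lax marker.

```python
def has_psf_base(notation, symbol):
    """True if `symbol` occurs in the PSF group of `notation`.

    Searching only the first ';'-delimited field (after stripping a
    leading '~' lax marker) avoids false hits where the same base
    symbol appears in an SSF group.
    """
    psf_group = notation.lstrip('~').split(';')[0]
    return symbol in psf_group
```

A search for ‘1’ then matches ‘1To;#’ (index selected as PSF) but not ‘D;1To’ (index selected as SSF).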

14. Because there are so many varieties of surface forms among the four-finger handshapes, this set of handshapes may require particular attention in training.

15. Liddell & Johnson’s model (1989) also used ‘~’ for lax handshapes, although its use had somewhat different implications for joint configurations than it does here.


Despite these minor disadvantages, this notation system remains a useful tool for transcribing handshape. It is not meant to replace other notation systems — it contains much more detail than most sign language linguists need when recording handshape — but it could be useful for research projects with goals similar to ours, projects that need to transcribe the phonological features of handshapes instead of just the handshapes themselves.

Acknowledgements

This notation system was developed for a project funded by NSF grant 0112391-BCS; P.I. Diane Brentari. We would like to thank Gladys Tang for use of her handshape font for this article (available at: http://www.cuhk.edu.hk/lin/Faculty_gladystang/handshape2002-dec.TTF).

References

Ann, Jean. 2006. Frequency of occurrence and ease of articulation of sign language handshapes: The Taiwanese example. Washington, DC: Gallaudet University Press.

Brentari, Diane. 1998. A prosodic model of sign language phonology. Cambridge, MA: MIT Press.

Brentari, Diane & Carol Padden. 2001. Native and foreign vocabulary in American Sign Language: A lexicon with multiple origins. In D. Brentari (ed.), Foreign vocabulary in sign languages, 87–119. Mahwah, NJ: Lawrence Erlbaum Associates.

Brentari, Diane & Petra Eccarius. In press. Phonological contrast in sign language handshapes. In D. Brentari (ed.), Sign languages. Cambridge, UK: Cambridge University Press.

Channon, Rachel. 2002. Signs are single segments: Phonological representations and temporal sequencing in ASL and other sign languages. University of Maryland dissertation.

Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper and Row.

Clements, George N. 1985. The geometry of phonological features. Phonology Yearbook 2, 225–252.

Clements, George N. 2001. Representational economy in constraint-based phonology. In T. A. Hall (ed.), Distinctive feature theory, 71–146. Berlin: Mouton de Gruyter.

Eccarius, Petra. 2002. Finding common ground: A comparison of handshape across multiple sign languages. West Lafayette, IN: Purdue University Masters thesis.

Eccarius, Petra. 2008. A constraint-based account of handshape contrast in sign languages. West Lafayette, IN: Purdue University dissertation.

Geitz, Sarah & Diane Brentari. 1997. Delivering an ASL lexicon via the World Wide Web using Virtual Reality Models. Paper presented at the International Conference on Computers in Education, Kuching, Sarawak, Malaysia.

Goldsmith, John. 1976. Autosegmental phonology. Cambridge, MA: MIT dissertation. (Published 1979, New York: Garland Press.)


Greftegreff, Irene. 1993. A few notes on anatomy and distinctive features in NTS handshapes. University of Trondheim, Working Papers in Linguistics 17, 48–68.

Hulst, Harry van der. 1995. The composition of handshapes. University of Trondheim, Working Papers in Linguistics 23, 1–17.

Hulst, Harry van der & Rachel Channon. In press. Notation systems for spoken and signed languages. In D. Brentari (ed.), Sign languages: A Cambridge language survey. Cambridge, UK: Cambridge University Press.

Jakobson, Roman & Morris Halle. 1956, reprinted 1971. Fundamentals of language. The Hague: Mouton.

Jakobson, Roman, Gunnar Fant & Morris Halle. 1951, reprinted 1972. Preliminaries of speech analysis. Cambridge, MA: MIT Press.

Johnson, Robert E. & Scott Liddell. In preparation. Sign Language phonetics: Architecture and description.

Kita, Sotaro, Ingeborg van Gijn & Harry van der Hulst. 1998. Movement phases in signs and co-speech gestures, and their transcription by human coders. In I. Wachsmuth & M. Fröhlich (eds.), Gesture and sign language in human-computer interaction, International Gesture Workshop Bielefeld, Germany, September 17–19, 1997, Proceedings, 23–35. Berlin: Springer-Verlag.

Kooij, Els van der. 2002. Phonological categories in Sign Language of the Netherlands: The role of phonetic implementation and iconicity. Leiden University dissertation. Utrecht, The Netherlands: LOT.

Liddell, Scott & Robert E. Johnson. 1989. American Sign Language: The phonological base. Sign Language Studies 64, 195–278.

Mandel, Mark. 1981. Phonotactics and morphophonology in American Sign Language. Berkeley, CA: University of California dissertation.

Prillwitz, Siegmund, Regina Leven, Heiko Zienert, Thomas Hanke & Jan Henning. 1989. HamNoSys. Version 2.0; Hamburg Notation System for Sign Languages. An introductory guide. Hamburg: Signum Verlag.

Sandler, Wendy. 1996. Representing handshapes. International Review of Sign Language Linguistics 1, 115–158.

Sapir, Edward. 1933. The psychological reality of phonemes. In V. B. Makkai (ed.), Phonological theory: Evolution and current practice, 22–31. New York: Holt, Rinehart and Winston.

Stokoe, William, Dorothy Casterline & Carl Croneberg. 1965. A dictionary of American Sign Language on linguistic principles. Silver Spring, MD: Linstok Press.

Sutton, Valerie. 2002. Lessons in SignWriting. La Jolla, CA: The Deaf Action Committee for Sign Writing. Available: http://www.SignWriting.org/lessons/lessons.html

Takkinen, Ritva. 2005. Some observations on the use of HamNoSys (Hamburg Notation System for Sign Languages) in the context of the phonetic transcription of children’s signing. Sign Language & Linguistics 8(1/2), 97–116.

Zwitserlood, Inge. 2003. Classifying hand configurations in Nederlandse Gebarentaal. University of Utrecht dissertation. Utrecht: LOT.


Appendix: Handshape Charts

To find a handshape in these charts, proceed as follows:

1. Determine the set of selected fingers. (Selected fingers involve contact or movement and/or carry meaning by themselves, and are typically the finger group with the most complex joint configuration(s). Note: The same surface handshape may have different selected fingers depending on context (e.g. ‘T-;#’ vs. ‘B@;T-’).)

2. Determine the joint configuration. (Find the joint configuration that best resembles that of the handshape in question. NSF groups are considered either extended or closed, but the phonetic position of these fingers varies considerably — pick the closest of the two options.)

3. Determine the thumb position. (Thumb positions are also highly variable — find the closest variant.)

ONE FINGER HANDSHAPES

THUMB

INDEX FINGER

Authors’ addresses

Petra Eccarius
c/o Brentari, Purdue University
Heavilon Hall
500 Oval Drive
West Lafayette, IN 47907-2038

[email protected]

Diane Brentari
Purdue University
Heavilon Hall
500 Oval Drive
West Lafayette, IN 47907-2038

[email protected]



MIDDLE FINGER


RING FINGER

PINKY FINGER


TWO FINGER HANDSHAPES

INDEX AND MIDDLE FINGERS



INDEX AND PINKY FINGERS

MIDDLE AND RING FINGERS

MIDDLE AND PINKY FINGERS


RING AND PINKY FINGERS

THREE FINGER HANDSHAPES

INDEX, MIDDLE, AND RING FINGERS

MIDDLE, RING, AND PINKY FINGERS

INDEX, MIDDLE, AND PINKY FINGERS


FOUR FINGER HANDSHAPES




COMPLEX HANDSHAPES