American Sign Language phonology


Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although sign-language phonemes differ qualitatively from those of oral languages in that they are not based on sound and are spatial as well as temporal, they fulfill the same role as phonemes in oral languages.

Sign types

Six types of signs have been suggested: one-handed signs made without contact, one-handed signs made with contact (excluding on the other hand), symmetric two-handed signs (i.e. signs in which both hands are active and perform the same action), asymmetric two-handed signs (i.e. signs in which one hand is active and one hand is passive) where both hands have the same handshape, asymmetric two-handed signs where the hands have differing handshapes, and compound signs (that combine two or more of the above types).[1] The non-dominant hand in asymmetric signs often functions as the location of the sign. Monosyllabic signs are the most common type of signs in ASL and other sign languages.[2]
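As an illustration only, this typology can be sketched as a small classification routine; the type names, fields, and the simplified decision logic below are invented for the sketch and are not standard notation.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional


    class SignType(Enum):
        """The six sign types summarized above (labels are illustrative)."""
        ONE_HANDED_NO_CONTACT = auto()
        ONE_HANDED_CONTACT = auto()            # contact, but not with the other hand
        TWO_HANDED_SYMMETRIC = auto()          # both hands active, same action
        TWO_HANDED_ASYMMETRIC_SAME_SHAPE = auto()
        TWO_HANDED_ASYMMETRIC_DIFF_SHAPE = auto()
        COMPOUND = auto()                      # combines two or more of the above


    @dataclass
    class Hand:
        handshape: str   # e.g. "B", "G"
        active: bool     # does this hand move?


    def classify(dominant: Hand, nondominant: Optional[Hand], contact: bool) -> SignType:
        """Very rough classification following the typology described above."""
        if nondominant is None:
            return SignType.ONE_HANDED_CONTACT if contact else SignType.ONE_HANDED_NO_CONTACT
        if nondominant.active:
            return SignType.TWO_HANDED_SYMMETRIC
        if dominant.handshape == nondominant.handshape:
            return SignType.TWO_HANDED_ASYMMETRIC_SAME_SHAPE
        return SignType.TWO_HANDED_ASYMMETRIC_DIFF_SHAPE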

Phonemes and features

Signs consist of units smaller than the sign. These are often subdivided into parameters: handshapes with a particular orientation, which may perform some type of movement, in a particular location on the body or in the "signing space", and non-manual signals. The latter may include movement of the eyebrows, the cheeks, the nose, the head, the torso, and the eyes. Parameter values are often equated with spoken-language phonemes, although sign language phonemes allow more simultaneity in their realization than phonemes in spoken languages. Phonemes in signed languages, as in oral languages, consist of features. For instance, the /B/ and /G/ handshapes are distinguished by the number of selected fingers: [all] versus [one].
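A minimal sketch of these parameters as a data structure, with handshapes decomposed into features such as the number of selected fingers; the field names and values are illustrative, not an established feature system.

    from dataclasses import dataclass, field
    from typing import List, Optional


    @dataclass
    class Handshape:
        selected_fingers: str        # "all" vs. "one", as in /B/ vs. /G/
        flexion: str = "straight"    # "straight" or "bent"


    @dataclass
    class Sign:
        handshape: Handshape
        orientation: str                     # e.g. "palm-in"
        location: str                        # place on the body or in signing space
        movement: Optional[str] = None       # path or internal movement, if any
        nonmanual: List[str] = field(default_factory=list)   # e.g. ["raised-brows"]


    # /B/ and /G/ differ only in the selected-fingers feature: [all] vs. [one].
    B = Handshape(selected_fingers="all")
    G = Handshape(selected_fingers="one")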

Most phonological research focuses on the handshape. A problem in most studies of handshape is that elements of a manual alphabet are often borrowed into signs, although not all of these elements are part of the sign language's phoneme inventory.[3] Also, allophones are sometimes considered separate phonemes. The first inventory of ASL handshapes contained 19 phonemes (or cheremes[4]).

In some phonological models, movement is a phonological prime.[5][6] Other models consider movement as redundant, as it is predictable from the locations, hand orientations and handshape features at the start and end of a sign.[7][8] Models in which movement is a prime usually distinguish path movement (i.e. movement of the hand[s] through space) and internal movement (i.e. an opening or closing movement of the hand, a hand rotation, or finger wiggling).
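The "movement is predictable" view can be illustrated with a toy function that reads movements off the difference between a sign's initial and final hand states; the state fields and labels below are hypothetical.

    from dataclasses import dataclass
    from typing import List


    @dataclass
    class HandState:
        location: str      # e.g. "forehead", "chin", "neutral-space"
        orientation: str   # e.g. "palm-in"
        handshape: str     # e.g. "B"


    def derived_movements(start: HandState, end: HandState) -> List[str]:
        """List the movements implied by the change between the two states."""
        moves = []
        if start.location != end.location:
            moves.append(f"path movement: {start.location} -> {end.location}")
        if start.handshape != end.handshape:
            moves.append(f"internal movement: handshape {start.handshape} -> {end.handshape}")
        if start.orientation != end.orientation:
            moves.append(f"internal movement: rotation {start.orientation} -> {end.orientation}")
        return moves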

Allophony and assimilation

Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme. For example, in the /B/ handshape, the bending of the selected fingers may vary from straight to bent at the lowest joint, and the position of the thumb may vary from stretched at the side of the hand to folded into the palm of the hand. Allophony may be free, but is also often conditioned by the context of the phoneme. Thus, the /B/ handshape will be flexed in a sign in which the fingertips touch the body, and the thumb will be folded into the palm in signs where the radial side of the hand touches the body or the other hand.
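The conditioning described for /B/ can be sketched as a rule that selects a surface realization from the sign's contact type; the condition labels and feature values are illustrative only.

    from typing import Dict, Optional


    def realize_B(contact: Optional[str]) -> Dict[str, str]:
        """Pick an allophone of the /B/ handshape from the sign's contact type.

        contact: None (no conditioning context), "fingertips" (fingertips touch
        the body), or "radial" (radial side touches the body or the other hand).
        """
        allophone = {"selected_fingers": "all", "flexion": "straight", "thumb": "side"}
        if contact == "fingertips":
            allophone["flexion"] = "bent"      # fingers flex at the lowest joint
        elif contact == "radial":
            allophone["thumb"] = "in-palm"     # thumb folds into the palm
        return allophone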

Assimilation of sign phonemes to signs in the context is a common process in ASL. For example, the point of contact for signs like THINK, normally at the forehead, may be articulated at a lower location if the location of the following sign is below the cheek. Other assimilation processes concern the number of selected fingers in a sign, which may adapt to that of the previous or following sign. It has also been observed that one-handed signs are articulated with two hands when followed by two-handed signs.
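A hedged sketch of the location assimilation just described, treated as a rule over a ranked set of locations; the location inventory and ranking are invented for illustration.

    # Illustrative height ranking of locations, from highest to lowest.
    HEIGHT = {"forehead": 3, "cheek": 2, "chin": 1, "chest": 0}


    def assimilate_location(current: str, following: str) -> str:
        """Optionally lower a forehead-located sign (e.g. THINK) when the
        following sign's location is below the cheek."""
        if current == "forehead" and HEIGHT.get(following, HEIGHT["forehead"]) < HEIGHT["cheek"]:
            return "lowered (below forehead)"
        return current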

Phonotactics

As yet, little is known about ASL phonotactic constraints (or those in other signed languages). The Symmetry and Dominance Conditions[9] are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign moves if the hands do not have the same handshape specifications, and that the non-dominant hand has an unmarked handshape. However, since these conditions seem to hold in more and more signed languages as cross-linguistic research expands, it is doubtful whether they should be considered specific to ASL phonotactics.
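A minimal sketch of the two conditions as checks over a simple two-handed sign record; the field names are invented for the sketch, and the unmarked-handshape set is one commonly cited inventory rather than a definitive list.

    from dataclasses import dataclass

    # One commonly cited inventory of unmarked handshapes; analyses differ.
    UNMARKED_HANDSHAPES = {"B", "A", "S", "O", "C", "1", "5"}


    @dataclass
    class TwoHandedSign:
        dom_handshape: str
        nondom_handshape: str
        dom_moves: bool
        nondom_moves: bool
        mirrored: bool    # same or mirrored configuration, orientation, movement


    def satisfies_symmetry(sign: TwoHandedSign) -> bool:
        """Symmetry Condition: if both hands move, they must act identically
        or as mirror images of each other."""
        if sign.dom_moves and sign.nondom_moves:
            return sign.dom_handshape == sign.nondom_handshape and sign.mirrored
        return True


    def satisfies_dominance(sign: TwoHandedSign) -> bool:
        """Dominance Condition: with different handshapes, only the dominant
        hand moves and the non-dominant hand takes an unmarked handshape."""
        if sign.dom_handshape != sign.nondom_handshape:
            return (not sign.nondom_moves) and sign.nondom_handshape in UNMARKED_HANDSHAPES
        return True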

Prosody

ASL conveys prosody through facial expression and upper-body position. Head position, eyebrows, eye gaze, blinks, and mouth positions all convey important linguistic information in sign languages.

Some signs have required facial components that distinguish them from other signs. An example of this sort of lexical distinction is the sign translated 'not yet', which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features, it would be interpreted as 'late'.[10]

Although some non-manual signals are used for a number of functions, proficient signers have no more difficulty decoding what raised eyebrows mean in a specific context than speakers of English have in interpreting the pitch contour of a sentence in context. The use of similar facial changes, such as eyebrow height, to convey both prosody and grammatical distinctions is comparable to the overlap of prosodic pitch and lexical or grammatical tone in a tone language.[11]

Like most signed languages, ASL has an analogue to speaking loudly and whispering in oral language. "Loud" signs are larger and more separated, sometimes even with one-handed signs being produced with both hands. "Whispered" signs are smaller, off-center, and sometimes (partially) blocked from the view of unintended onlookers by the signer's body or a piece of clothing. In fast signing, in particular in context, sign movements are smaller and there may be less repetition. Signs occurring at the end of a phrase may show repetition or may be held ("phrase-final lengthening").

Phonological processing in the brain

The brain processes language phonologically by first identifying the smallest units in an utterance and then combining them to make meaning. In spoken language, these smallest units are often referred to as phonemes: the smallest sounds we identify in a spoken word. In sign language, the smallest units are often referred to as the parameters of a sign (i.e. handshape, location, movement and palm orientation), and these smallest parts can be identified within a produced sign. This cognitive process can be described as segmentation and categorization: the brain recognizes the individual parts within the sign and combines them to form meaning.[12] This is similar to how spoken language combines sounds to form syllables and then words. Even though the modalities of these languages differ (spoken vs. signed), the brain still processes them similarly, through segmentation and categorization.
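As a purely illustrative analogy for segmentation and categorization, the sketch below breaks a produced sign into its parameters and checks each against a small inventory; the inventories and example values are invented.

    # Tiny, invented parameter inventories used only for illustration.
    INVENTORIES = {
        "handshape": {"B", "G", "5"},
        "location": {"forehead", "chin", "neutral-space"},
        "movement": {"straight", "circular", "none"},
        "orientation": {"palm-in", "palm-out", "palm-down"},
    }


    def segment_and_categorize(produced_sign: dict) -> dict:
        """Segment a sign into its parameters and categorize each value."""
        result = {}
        for parameter, value in produced_sign.items():
            known = value in INVENTORIES.get(parameter, set())
            result[parameter] = (value, "known" if known else "unrecognized")
        return result


    # A hypothetical sign specified by its four manual parameters.
    print(segment_and_categorize({
        "handshape": "B",
        "location": "forehead",
        "movement": "straight",
        "orientation": "palm-in",
    }))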

Measuring brain activity while a person produces or perceives sign language reveals that the brain processes signs differently from ordinary hand movements, much as it differentiates spoken words from non-linguistic sounds. More specifically, the brain is able to distinguish actual signs from the transitional movements between signs, similarly to how words in spoken language can be identified separately from the sounds or breaths between them, which carry no linguistic meaning. Multiple studies have revealed enhanced brain activity during the processing of sign language compared to the processing of mere hand movements. For example, during awake brain surgery on a deaf patient, neural activity was recorded and analyzed while the patient was shown videos in American Sign Language. The results showed greater brain activity during the moments when the patient was perceiving actual signs than during the transitions between signs.[13] This means the brain is segmenting the units of the sign and identifying which units combine to form actual meaning.

One observed difference in the location of phonological processing between spoken language and sign language is the activation of brain areas specific to auditory versus visual stimuli. Because of the modality differences, the cortical regions are stimulated differently depending on the type of language. Spoken language produces sounds, which engage the auditory cortices in the superior temporal lobes. Sign language produces visual stimuli, which engage the occipitotemporal regions. Yet both modes of language still activate many of the same regions known for language processing in the brain.[14] For example, the left superior temporal gyrus is stimulated by language in both spoken and signed forms, even though it was once assumed to respond only to auditory stimuli.[15] No matter the mode of language being used, whether spoken or signed, the brain processes language by segmenting the smallest phonological units and combining them to make meaning.

Related Research Articles

American Sign Language: Sign language used predominantly in the United States

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

In phonology and linguistics, a phoneme is a set of phones that can distinguish one word from another in a particular language.

Phonetics is a branch of linguistics that studies how humans produce and perceive sounds or, in the case of sign languages, the equivalent aspects of sign. Linguists who specialize in studying the physical properties of speech are phoneticians. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved: how humans plan and execute movements to produce speech, how various movements affect the properties of the resulting sound, and how humans convert sound waves to linguistic information. Traditionally, the minimal linguistic unit of phonetics is the phone, a speech sound in a language, which differs from the phonological unit of the phoneme; the phoneme is an abstract categorization of phones, and it is also defined as the smallest unit that discerns meaning between sounds in any given language.

Phonology is the branch of linguistics that studies how languages systematically organize their phones or, for sign languages, their constituent parts of signs. The term can also refer specifically to the sound or sign system of a particular language variety. At one time, the study of phonology related only to the study of the systems of phonemes in spoken languages, but may now relate to any linguistic analysis either:

Sign language: Language that uses manual communication and body language to convey meaning

Sign languages are languages that use the visual-manual modality to convey meaning, instead of spoken words. Sign languages are expressed through manual articulation in combination with non-manual markers. Sign languages are full-fledged natural languages with their own grammar and lexicon. Sign languages are not universal and are usually not mutually intelligible, although there are also similarities among different sign languages.

William Stokoe: Scholar of American Sign Language

William Clarence “Bill” Stokoe Jr. was an American linguist and a long-time professor at Gallaudet University. His research on American Sign Language (ASL) revolutionized the understanding of ASL in the United States and sign languages throughout the world. Stokoe's work led to a widespread recognition that sign languages are true languages, exhibiting syntax and morphology, and are not only systems of gesture.

Phonotactics is a branch of phonology that deals with restrictions in a language on the permissible combinations of phonemes. Phonotactics defines permissible syllable structure, consonant clusters and vowel sequences by means of phonotactic constraints.

Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues, in different locations near the mouth to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips. This allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.

The American Manual Alphabet (AMA) is a manual alphabet that augments the vocabulary of American Sign Language.

German Sign Language: Sign language predominantly used in Germany

German Sign Language, or Deutsche Gebärdensprache (DGS), is the sign language of the deaf community in Germany, Luxembourg and the German-speaking community of Belgium. It is unclear how many people use German Sign Language as their main language; Gallaudet University estimated 50,000 as of 1986. The language has evolved through use in deaf communities over hundreds of years.

Home sign is a gestural communication system, often invented spontaneously by a deaf child who lacks accessible linguistic input. Home sign systems often arise in families where a deaf child is raised by hearing parents and is isolated from the Deaf community. Because such children do not receive signed or spoken language input, they are referred to as linguistically isolated.

Segment (linguistics): Distinct unit of speech

In linguistics, a segment is "any discrete unit that can be identified, either physically or auditorily, in the stream of speech". The term is most used in phonetics and phonology to refer to the smallest elements in a language, and this usage can be synonymous with the term phone.

Stokoe notation: Phonemic script for sign languages

Stokoe notation is the first phonemic script used for sign languages. It was created by William Stokoe for American Sign Language (ASL), with Latin letters and numerals used for the shapes they have in fingerspelling, and iconic glyphs to transcribe the position, movement, and orientation of the hands. It was first published as the organizing principle of Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf (1960), and later also used in A Dictionary of American Sign Language on Linguistic Principles, by Stokoe, Casterline, and Croneberg (1965). In the 1965 dictionary, signs are themselves arranged alphabetically, according to their Stokoe transcription, rather than being ordered by their English glosses as in other sign-language dictionaries. This made it the only ASL dictionary where the reader could look up a sign without first knowing how to translate it into English. The Stokoe notation was later adapted to British Sign Language (BSL) in Kyle et al. (1985) and to Australian Aboriginal sign languages in Kendon (1988). In each case the researchers modified the alphabet to accommodate phonemes not found in ASL.

Plains Indian Sign Language: Endangered language of the Plains peoples

Plains Indian Sign Language (PISL), also known as Hand Talk or Plains Sign Language, is an endangered language common to various Plains Nations across what is now central Canada, the central and western United States and northern Mexico. This sign language was used historically as a lingua franca, notably for trading among tribes; it is still used for story-telling, oratory, various ceremonies, and by deaf people for ordinary daily use.

Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language consists of a vocal-aural modality versus a signed language which consists of a visual-spatial modality. A substantial number of bimodal bilinguals are children of deaf adults (CODA) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture that is referred to as Deaf, but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered. This is in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s. However, bimodal bilinguals share many of the same traits as traditional bilinguals, as well as differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience similar neurological benefits as do unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases, such as Alzheimer's and dementia.

The grammar of American Sign Language (ASL) has rules just like any other sign language or spoken language. ASL grammar studies date back to William Stokoe in the 1960s. This sign language consists of parameters that determine many other grammar rules. Typical word structure in ASL conforms to the SVO/OSV and topic-comment forms, supplemented by a noun-adjective order and time-sequenced ordering of clauses. ASL has large CP and DP syntax systems, and does not use as many conjunctions as some other languages do.

Nepalese Sign Language or Nepali Sign Language is the main sign language of Nepal. It is a partially standardized language based informally on the variety used in Kathmandu, with some input from varieties from Pokhara and elsewhere. As an indigenous sign language, it is not related to oral Nepali. The Nepali Constitution of 2015 specifically mentions the right to have education in Sign Language for the deaf. Likewise, the newly passed Disability Rights Act of 2072 BS defined language to include "spoken and sign languages and other forms of speechless language." In practice, it is recognized by the Ministry of Education and the Ministry of Women, Children and Social Welfare, and is used in all schools for the deaf. In addition, there is legislation underway in Nepal which, in line with the UN Convention on the Rights of Persons with Disabilities (UNCRPD), which Nepal has ratified, should give Nepalese Sign Language equal status with the oral languages of the country.

In sign languages, the term classifier construction refers to a morphological system that can express events and states. They use handshape classifiers to represent movement, location, and shape. Classifiers differ from signs in their morphology, namely in that signs consist of a single morpheme. Signs are composed of three meaningless phonological features: handshape, location, and movement. Classifiers, on the other hand, consist of many morphemes. Specifically, the handshape, location, and movement are all meaningful on their own. The handshape represents an entity and the hand's movement iconically represents the movement of that entity. The relative location of multiple entities can be represented iconically in two-handed constructions.

Sign language in the brain

Sign language refers to any natural language which uses visual gestures produced by the hands and body language to express meaning. The brain's left side is the dominant side utilized for producing and understanding sign language, just as it is for speech. In 1861, Paul Broca studied patients with the ability to understand spoken languages but the inability to produce them. The damaged area was named Broca's area, and located in the left hemisphere’s inferior frontal gyrus. Soon after, in 1874, Carl Wernicke studied patients with the reverse deficits: patients could produce spoken language, but could not comprehend it. The damaged area was named Wernicke's area, and is located in the left hemisphere’s posterior superior temporal gyrus.

Nonmanual feature: Sign language syntax

A nonmanual feature, also sometimes called a nonmanual signal or sign language expression, is a feature of signed languages that does not use the hands. Nonmanual features are grammaticised and are a necessary component of many signs, in the same way that manual features are. Nonmanual features serve a similar function to intonation in spoken languages.

References

  1. Battison, Robbin (2011). "Analyzing Signs". Linguistics of American Sign Language (5th ed.). Washington, DC: Gallaudet University Press. pp. 209–210. ISBN 978-1-56368-508-8.
  2. Sandler, Wendy (2008). "The Syllable in Sign Language: Considering the Other Natural Language Modality". Ontogeny and Phylogeny of Syllable Organization: Festschrift in Honor of Peter MacNeilage. New York: Taylor & Francis. p. 384.
  3. Battison, Robbin (1974). "Phonological Deletion in American Sign Language". Sign Language Studies. 1005 (1): 1–19. doi:10.1353/sls.1974.0005. ISSN 1533-6263. S2CID 143890757.
  4. Landar, Herbert; Stokoe, William C. (April 1961). "Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf". Language. 37 (2): 269. doi:10.2307/410856. ISSN 0097-8507. JSTOR 410856.
  5. Perlmutter, David M. (1993). "Sonority and Syllable Structure in American Sign Language". Current Issues in ASL Phonology. Elsevier. pp. 227–261. doi:10.1016/b978-0-12-193270-1.50016-9. ISBN 9780121932701. (A slightly different version appeared in Linguistic Inquiry 23 (3): 407–442, 1992.)
  6. Sandler, Wendy (December 1999). Review of Brentari, Diane (1999), A Prosodic Model of Sign Language Phonology, Cambridge, MA: MIT Press. Phonology. 16 (3): 443–447. doi:10.1017/s0952675799003802. ISSN 0952-6757. S2CID 60944874.
  7. van der Hulst, Harry (August 1993). "Units in the analysis of signs". Phonology. 10 (2): 209–241. doi:10.1017/s095267570000004x. ISSN 0952-6757. S2CID 16629421.
  8. Demey, Eline (2003). "Review of Van der Kooij (2002): Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic Implementation and Iconicity". Sign Language & Linguistics. 6 (2): 277–284. doi:10.1075/sll.6.2.11dem. ISSN 1387-9316.
  9. Battison, Robbin (1974). "Phonological Deletion in American Sign Language". Sign Language Studies. 1005 (1): 1–19. doi:10.1353/sls.1974.0005. ISSN 1533-6263. S2CID 143890757.
  10. Liddell (2003)
  11. Weast, Traci (2008). Questions in American Sign Language: A Quantitative Analysis of Raised and Lowered Eyebrows (PhD dissertation).
  12. Petitto, L. A.; Langdon, C.; Stone, A.; Andriola, D.; Kartheiser, G.; Cochran, C. (November 2016). "Visual sign phonology: insights into human reading and language from a natural soundless phonology". WIREs Cognitive Science. 7 (6): 366–381. doi:10.1002/wcs.1404. ISSN 1939-5078. PMID 27425650.
  13. Leonard, Matthew K.; Lucas, Ben; Blau, Shane; Corina, David P.; Chang, Edward F. (November 2020). "Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language". Current Biology. 30 (22): 4342–4351.e3. doi:10.1016/j.cub.2020.08.048. PMC 7674262. PMID 32888480.
  14. MacSweeney, M. (2002). "Neural systems underlying British Sign Language and audio-visual English processing in native users". Brain. 125 (7): 1583–1593. doi:10.1093/brain/awf153. ISSN 1460-2156. PMID 12077007.
  15. Petitto, Laura Ann; Zatorre, Robert J.; Gauna, Kristine; Nikelski, E. J.; Dostie, Deanna; Evans, Alan C. (2000). "Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language". Proceedings of the National Academy of Sciences. 97 (25): 13961–13966. doi:10.1073/pnas.97.25.13961. ISSN 0027-8424. PMC 17683. PMID 11106400.

Bibliography