American Sign Language phonology

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to those of oral languages. Phonemes serve the same role in signed languages as in oral languages; the main difference is that oral languages are based on sound, whereas signed languages are organized spatially and temporally. [1] The phonotactics of ASL are still debated, but the literature has largely agreed upon the Symmetry and Dominance Conditions as phonotactic constraints. Allophones behave in ASL as they do in spoken languages, with realizations of a phoneme showing free variation or complementary and contrastive distributions. Signs also assimilate to the context in which they are produced. The brain processes spoken and signed language in the same way with respect to their linguistic properties; however, perception differs in the activation of the auditory versus the visual cortex.

Phonemes

Sign phonemes consist of units smaller than the sign. These are subdivided into parameters: handshapes with a particular orientation, which may perform some type of movement, in a particular location on the body or in the "signing space", and non-manual signals. The last of these may include movement of the eyebrows, the cheeks, the nose, the head, the torso, and the eyes. Parameter values are often compared to spoken language phonemes; however, sign language phonemes are unusual in that they can occur simultaneously, [2] like suprasegmental elements in speech.
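
The parameter bundle described above can be pictured as a small data structure. The sketch below is a minimal illustration in Python; the class names, field names, and example values (Hand, Sign, THINK, and the labels used for handshape, orientation, and location) are hypothetical and do not reflect any standard ASL transcription system.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Hand:
        handshape: str                  # e.g. "B", "S", "5" (illustrative labels)
        orientation: str                # e.g. "palm-in", "palm-down"
        location: str                   # e.g. "forehead", "neutral space"
        movement: Optional[str] = None  # path or internal movement, if any

    @dataclass
    class Sign:
        dominant: Hand
        nondominant: Optional[Hand] = None             # absent in one-handed signs
        nonmanual: list = field(default_factory=list)  # e.g. ["raised eyebrows"]

    # Unlike the largely sequential phonemes of speech, these parameter
    # values are realized simultaneously within a single sign.
    THINK = Sign(dominant=Hand(handshape="1", orientation="palm-in",
                               location="forehead", movement="contact"))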

Most phonological research focuses on the handshape. A problem in most studies of handshape is the fact that often elements of a manual alphabet are borrowed into signs, although not all of these elements are part of the sign language's phoneme inventory. [3] Also, allophones are sometimes considered separate phonemes. The first inventory of ASL handshapes contained 19 phonemes (or cheremes [4] ).

In some phonological models, movement is a phonological prime. [5] [6] Other models consider movement as redundant, as it is predictable from the locations, hand orientations and handshape features at the start and end of a sign. [7] [8] Models in which movement is a prime usually distinguish path movement (i.e. movement of the hand[s] through space) and internal movement (i.e. an opening or closing movement of the hand, a hand rotation, or finger wiggling).

Phonotactics

As yet, little is known about ASL phonotactic constraints (or those in other signed languages). The Symmetry and Dominance Conditions [3] are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign moves if the hands do not have the same handshape specifications, and that the non-dominant hand has an unmarked handshape. [9] Since these conditions have been found to hold in more and more signed languages as cross-linguistic research expands, they may not be specific to ASL phonotactics.
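
As an illustration only, the two conditions can be stated as simple checks over the two hands of a sign. The Python sketch below uses assumed parameter names; the set of unmarked handshapes roughly follows Battison's often-cited list (B, A, S, C, O, 1, 5), and the mirroring of orientation is simplified to equality.

    # Unmarked handshapes per Battison's proposal (illustrative set).
    UNMARKED = {"B", "A", "S", "C", "O", "1", "5"}

    def symmetry_ok(shape_d, shape_nd, orient_d, orient_nd, move_d, move_nd):
        """If both hands move, they must share (or mirror) handshape,
        orientation, and movement; mirroring is simplified to equality here."""
        if move_d and move_nd:
            return (shape_d == shape_nd and orient_d == orient_nd
                    and move_d == move_nd)
        return True

    def dominance_ok(shape_d, shape_nd, move_nd):
        """If the handshapes differ, only the dominant hand may move and the
        non-dominant hand must bear an unmarked handshape."""
        if shape_d != shape_nd:
            return not move_nd and shape_nd in UNMARKED
        return True

    # An asymmetric sign with a static, unmarked base hand passes the check:
    print(dominance_ok("V", "B", move_nd=None))  # True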

Six types of signs have been suggested: one-handed signs made without contact, one-handed signs made with contact (excluding contact on the other hand), symmetric two-handed signs (i.e. signs in which both hands are active and perform the same action), asymmetric two-handed signs (i.e. signs in which one hand is active and one hand is passive) where both hands have the same handshape, asymmetric two-handed signs where the hands have differing handshapes, and compound signs (which combine two or more of the above types). [10] The non-dominant hand in asymmetric signs often functions as the location of the sign. Monosyllabic signs are the most common type of sign in ASL and other sign languages. [11]

Allophony and assimilation

The topmost hand is the ASL sign for the letter B, with different realizations of the same phoneme below it: the bottom left hand shows the fingers bent at the lowest joint, and the bottom right hand shows the thumb lying along the side of the hand.

Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme. For example, in the /B/ handshape, the bending of the selected fingers may vary from straight to bent at the lowest joint, and the position of the thumb may vary from extended at the side of the hand to folded into the palm. Allophony may be free, but it is also often conditioned by the context of the phoneme. Thus, the /B/ handshape will be flexed in a sign in which the fingertips touch the body, and the thumb will be folded into the palm in signs where the radial side of the hand touches the body or the other hand.
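
A context-conditioned allophone of this kind can be sketched as a simple lookup. The Python fragment below is only illustrative; the context labels and variant names are hypothetical, not an established feature system for ASL.

    from typing import Optional

    def realize_B(contact: Optional[str]) -> str:
        """Choose a surface realization of the /B/ handshape from its context."""
        if contact == "fingertips on body":
            return "B-bent"           # fingers flexed at the lowest joint
        if contact == "radial side on body or other hand":
            return "B-thumb-in-palm"  # thumb folded into the palm
        return "B-plain"              # elsewhere, free variation is possible

    print(realize_B("fingertips on body"))  # B-bent
    print(realize_B(None))                  # B-plain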

Assimilation of sign phonemes to signs in the surrounding context is a common process in ASL. For example, the point of contact for signs like THINK, normally at the forehead, may be articulated at a lower location if the location of the following sign is below the cheek. Other assimilation processes concern the number of selected fingers in a sign, which may adapt to that of the previous or following sign. It has also been observed that one-handed signs are articulated with two hands when followed by two-handed signs. [12]
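
Location assimilation of the kind described for THINK can likewise be sketched as a rule over adjacent signs. The mapping and the location labels below are hypothetical placeholders chosen for the example, not measured articulatory categories.

    # Hypothetical lowered variants for high points of contact.
    LOWERED = {"forehead": "cheek"}
    LOW_LOCATIONS = {"chin", "chest", "abdomen", "neutral space"}

    def assimilate_location(current_loc, next_loc):
        """Lower a high point of contact when the following sign is
        articulated below the cheek."""
        if current_loc in LOWERED and next_loc in LOW_LOCATIONS:
            return LOWERED[current_loc]
        return current_loc

    print(assimilate_location("forehead", "chest"))  # cheek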

Phonological processing in the brain

The brain processes language phonologically by first identifying the smallest units in an utterance, then combining them to make meaning. In spoken language, these smallest units are often referred to as phonemes, and they are the smallest sounds we identify in a spoken word. In sign language, the smallest units are often referred to as the parameters of a sign (i.e. handshape, location, movement and palm orientation), and we can identify these smallest parts within a produced sign. The cognitive method of phonological processing can be described as segmentation and categorization, where the brain recognizes the individual parts within the sign and combines them to form meaning. [13] This is similar to how spoken language combines sounds to form syllables and then words. Even though the modalities of these languages differ (spoken vs. signed), the brain still processes them similarly through segmentation and categorization.

Measuring brain activity while a person produces or perceives sign language reveals that the brain processes signs differently from ordinary hand movements, much as it differentiates spoken words from sounds that lack semantic content. More specifically, the brain is able to distinguish actual signs from the transitional movements between signs, similar to the way words in spoken language can be identified separately from the sounds or breaths between words that carry no linguistic meaning. Multiple studies have revealed enhanced brain activity during the processing of sign language compared to the processing of hand movements alone. For example, during a brain surgery performed on a deaf patient who was still awake, neural activity was observed and analyzed while the patient was shown videos in American Sign Language. The results showed greater brain activity at the moments when the person was perceiving actual signs than during the transitions into the next sign. [14] This suggests that the brain segments the units of the sign and identifies which units combine to form actual meaning.

An observed difference in the location of phonological processing between spoken language and sign language is the activation of brain areas specific to auditory versus visual stimuli. Because of the difference in modality, cortical regions are stimulated differently depending on the type of language. Spoken language creates sounds, which engage the auditory cortices in the superior temporal lobes; sign language creates visual stimuli, which engage the occipitotemporal regions. Yet both modes of language activate many of the same regions known for language processing in the brain. [15] For example, the left superior temporal gyrus is stimulated by language in both spoken and signed forms, even though it was once assumed to be affected only by auditory stimuli. [16] Whatever the mode of language used, spoken or signed, the brain processes language by segmenting the smallest phonological units and combining them to make meaning.

Related Research Articles

American Sign Language

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

A phoneme is any set of similar speech sounds that is perceptually regarded by the speakers of a language as a single basic sound, the smallest possible phonetic unit, which helps distinguish one word from another. All languages contain phonemes, and all spoken languages include both consonant and vowel phonemes. Phonemes are primarily studied under the branch of linguistics known as phonology.

Phonetics is a branch of linguistics that studies how humans produce and perceive sounds or, in the case of sign languages, the equivalent aspects of sign. Linguists who specialize in studying the physical properties of speech are phoneticians. The field of phonetics is traditionally divided into three sub-disciplines according to the questions involved: how humans plan and execute movements to produce speech, how various movements affect the properties of the resulting sound, and how humans convert sound waves to linguistic information. Traditionally, the minimal linguistic unit of phonetics is the phone, a speech sound in a language, which differs from the phonological unit of the phoneme; the phoneme is an abstract categorization of phones and is also defined as the smallest unit that discerns meaning between sounds in any given language.

Phonology is the branch of linguistics that studies how languages systematically organize their phonemes or, for sign languages, the constituent parts of signs. The term can also refer specifically to the sound or sign system of a particular language variety. At one time the study of phonology related only to the systems of phonemes in spoken languages, but it may now relate to linguistic analysis at other levels as well.

Sign language

Sign languages are languages that use the visual-manual modality to convey meaning, instead of spoken words. Sign languages are expressed through manual articulation in combination with non-manual markers. Sign languages are full-fledged natural languages with their own grammar and lexicon. Sign languages are not universal and are usually not mutually intelligible, although there are similarities among different sign languages.

William Stokoe (1919–2000)

William Clarence “Bill” Stokoe Jr. was an American linguist and a long-time professor at Gallaudet University. His research on American Sign Language (ASL) revolutionized the understanding of ASL in the United States and sign languages throughout the world. Stokoe's work led to a widespread recognition that sign languages are true languages, exhibiting syntax and morphology, and are not only systems of gesture.

Phonotactics is a branch of phonology that deals with restrictions in a language on the permissible combinations of phonemes. Phonotactics defines permissible syllable structure, consonant clusters and vowel sequences by means of phonotactic constraints.

Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of the range of lip reading vary, with some figures as low as 30% because lip reading relies on context, language knowledge, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.

Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues, in different locations near the mouth to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips. This allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.

The American Manual Alphabet (AMA) is a manual alphabet that augments the vocabulary of American Sign Language.

Manually Coded English (MCE) is an umbrella term referring to a number of invented manual codes intended to visually represent the exact grammar and morphology of spoken English. Different codes of MCE vary in the levels of adherence to spoken English grammar, morphology, and syntax. MCE is typically used in conjunction with direct spoken English.

Segment (linguistics)

In linguistics, a segment is "any discrete unit that can be identified, either physically or auditorily, in the stream of speech". The term is most used in phonetics and phonology to refer to the smallest elements in a language, and this usage can be synonymous with the term phone.

Stokoe notation

Stokoe notation is the first phonemic script used for sign languages. It was created by William Stokoe for American Sign Language (ASL), with Latin letters and numerals used for the shapes they have in fingerspelling, and iconic glyphs to transcribe the position, movement, and orientation of the hands. It was first published as the organizing principle of Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf (1960), and later also used in A Dictionary of American Sign Language on Linguistic Principles, by Stokoe, Casterline, and Croneberg (1965). In the 1965 dictionary, signs are themselves arranged alphabetically, according to their Stokoe transcription, rather than being ordered by their English glosses as in other sign-language dictionaries. This made it the only ASL dictionary where the reader could look up a sign without first knowing how to translate it into English. The Stokoe notation was later adapted to British Sign Language (BSL) in Kyle et al. (1985) and to Australian Aboriginal sign languages in Kendon (1988). In each case the researchers modified the alphabet to accommodate phonemes not found in ASL.

Plains Indian Sign Language

Plains Indian Sign Language (PISL), also known as Hand Talk or Plains Sign Language, is an endangered language common to various Plains Nations across what is now central Canada, the central and western United States and northern Mexico. This sign language was used historically as a lingua franca, notably for trading among tribes; it is still used for story-telling, oratory, various ceremonies, and by deaf people for ordinary daily use.

Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech at a delay from the onset of hearing the phrase. The time between hearing the speech and responding is how long the brain takes to process and produce speech. The task instructs participants to shadow speech, which generates the intent to reproduce the phrase while motor regions in the brain unconsciously process the syntax and semantics of the words spoken. Words repeated during the shadowing task also tend to imitate the parlance of the shadowed speech.

In sign languages, the term classifier construction refers to a morphological system that can express events and states. They use handshape classifiers to represent movement, location, and shape. Classifiers differ from signs in their morphology, namely that signs consist of a single morpheme. Signs are composed of three meaningless phonological features: handshape, location, and movement. Classifiers, on the other hand, consist of many morphemes. Specifically, the handshape, location, and movement are all meaningful on their own. The handshape represents an entity and the hand's movement iconically represents the movement of that entity. The relative location of multiple entities can be represented iconically in two-handed constructions.

In sign languages, location, or tab, refers to specific places that the hands occupy as they are used to form signs. In Stokoe terminology it is known as the TAB, an abbreviation of tabula. Location is one of five components, or parameters, of a sign, along with handshape, orientation, movement, and nonmanual features. A particular specification of a location, such as the chest or the temple of the head, can be considered a phoneme. Different sign languages can make use of different locations. In other words, different sign languages can have different inventories of location phonemes.

Sign language in the brain

Sign language refers to any natural language which uses visual gestures produced by the hands and body language to express meaning. The brain's left side is the dominant side utilized for producing and understanding sign language, just as it is for speech. In 1861, Paul Broca studied patients with the ability to understand spoken languages but the inability to produce them. The damaged area was named Broca's area, and located in the left hemisphere’s inferior frontal gyrus. Soon after, in 1874, Carl Wernicke studied patients with the reverse deficits: patients could produce spoken language, but could not comprehend it. The damaged area was named Wernicke's area, and is located in the left hemisphere’s posterior superior temporal gyrus.

Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth, and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes. However, profoundly deaf children who receive cochlear implants and auditory habilitation early in life often achieve expressive and receptive language skills within the norms of their hearing peers; age at implantation is strongly and positively correlated with speech recognition ability. Early access to language through signed language or technology has been shown to prepare children who are deaf to achieve fluency in literacy skills.

Protactile is a language used by deafblind people using tactile channels. Unlike other sign languages, which are heavily reliant on visual information, protactile is oriented towards touch and is practiced on the body. Protactile communication originated out of communications by DeafBlind people in Seattle in 2007 and incorporates signs from American Sign Language. Protactile is an emerging system of communication in the United States, with users relying on shared principles such as contact space, tactile imagery, and reciprocity.

References

1. Fenlon, Jordan; Cormier, Kearsy; Brentari, Diane (2017-12-14). "The phonology of sign languages". In Hannahs, S. J.; Bosch, Anna R. K. (eds.). The Routledge Handbook of Phonological Theory (1st ed.). Routledge. pp. 453–475. doi:10.4324/9781315675428-16. ISBN 978-1-315-67542-8. Retrieved 2024-10-17.
2. Emmorey, Karen; Corina, David (December 1990). "Lexical Recognition in Sign Language: Effects of Phonetic Structure and Morphology". Perceptual and Motor Skills. 71 (3_suppl): 1227–1252. doi:10.2466/pms.1990.71.3f.1227. ISSN 0031-5125. PMID 2087376.
3. Battison, Robbin (1974). "Phonological Deletion in American Sign Language". Sign Language Studies. 1005 (1): 1–19. doi:10.1353/sls.1974.0005. ISSN 1533-6263. S2CID 143890757.
4. Landar, Herbert; Stokoe, William C. (April 1961). "Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf". Language. 37 (2): 269. doi:10.2307/410856. ISSN 0097-8507. JSTOR 410856.
5. Perlmutter, David M. (1993). "Sonority and Syllable Structure in American Sign Language". Current Issues in ASL Phonology. Elsevier. pp. 227–261. doi:10.1016/b978-0-12-193270-1.50016-9. ISBN 9780121932701. Retrieved 2022-04-14. (A slightly different version appeared in Linguistic Inquiry 23 (3): 407–442, 1992.)
6. Sandler, Wendy (December 1999). "Diane Brentari (1999). A prosodic model of sign language phonology. Cambridge, Mass.: MIT Press. Pp. xviii+376". Phonology. 16 (3): 443–447. doi:10.1017/s0952675799003802. ISSN 0952-6757. S2CID 60944874.
7. van der Hulst, Harry (August 1993). "Units in the analysis of signs". Phonology. 10 (2): 209–241. doi:10.1017/s095267570000004x. ISSN 0952-6757. S2CID 16629421.
8. Demey, Eline (2003-12-31). "Review of Van der Kooij (2002): Phonological Categories in Sign Language of the Netherlands. The Role of Phonetic Implementation and Iconicity". Sign Language & Linguistics. 6 (2): 277–284. doi:10.1075/sll.6.2.11dem. ISSN 1387-9316.
9. Napoli, Donna Jo; Wu, Jeff (2003-12-31). "Morpheme structure constraints on two-handed signs in American Sign Language". Sign Language & Linguistics. 6 (2): 123–205. doi:10.1075/sll.6.2.03nap. ISSN 1387-9316.
10. Battison, Robbin (2011). "Analyzing Signs". Linguistics of American Sign Language (5th ed.). Washington, DC: Gallaudet University Press. pp. 209–210. ISBN 978-1-56368-508-8.
11. Sandler, Wendy (2008). "The Syllable in Sign Language: Considering the Other Natural Language Modality". Ontogeny and Phylogeny of Syllable Organization, Festschrift in Honor of Peter MacNeilage. New York: Taylor Francis. p. 384.
12. Liddell, Scott K.; Johnson, Robert E. (1989). "American Sign Language: The Phonological Base". Sign Language Studies. 64 (1): 195–277. doi:10.1353/sls.1989.0027. ISSN 1533-6263.
13. Petitto, L. A.; Langdon, C.; Stone, A.; Andriola, D.; Kartheiser, G.; Cochran, C. (November 2016). "Visual sign phonology: insights into human reading and language from a natural soundless phonology". WIREs Cognitive Science. 7 (6): 366–381. doi:10.1002/wcs.1404. ISSN 1939-5078. PMID 27425650.
14. Leonard, Matthew K.; Lucas, Ben; Blau, Shane; Corina, David P.; Chang, Edward F. (November 2020). "Cortical Encoding of Manual Articulatory and Linguistic Features in American Sign Language". Current Biology. 30 (22): 4342–4351.e3. doi:10.1016/j.cub.2020.08.048. PMC 7674262. PMID 32888480.
15. MacSweeney, M. (2002-07-01). "Neural systems underlying British Sign Language and audio-visual English processing in native users". Brain. 125 (7): 1583–1593. doi:10.1093/brain/awf153. ISSN 1460-2156. PMID 12077007.
16. Petitto, Laura Ann; Zatorre, Robert J.; Gauna, Kristine; Nikelski, E. J.; Dostie, Deanna; Evans, Alan C. (2000-12-05). "Speech-like cerebral activity in profoundly deaf people processing signed languages: Implications for the neural basis of human language". Proceedings of the National Academy of Sciences. 97 (25): 13961–13966. doi:10.1073/pnas.97.25.13961. ISSN 0027-8424. PMC 17683. PMID 11106400.
