Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to those of oral languages. Phonemes serve the same role in oral and signed languages; the main difference is that oral languages are based on sound, whereas signed languages are spatial and temporal. [1] There is debate about the phonotactics of ASL, but the literature has largely agreed upon the Symmetry and Dominance Conditions as phonotactic constraints. Allophones function in ASL as they do in spoken languages, with variants of a phoneme occurring in free variation or in complementary and contrastive distribution. Phonemes assimilate to one another depending on the context surrounding a sign as it is produced. The brain processes spoken and signed language in the same way with respect to their linguistic properties; however, there are differences in activation between the auditory and visual cortices during language perception.
Sign phonemes consist of units smaller than the sign. These are subdivided into parameters: handshapes with a particular orientation, which may perform some type of movement, in a particular location on the body or in the "signing space", together with non-manual signals. Non-manual signals may include movement of the eyebrows, the cheeks, the nose, the head, the torso, and the eyes. Parameter values are often compared to spoken language phonemes; however, sign language phonemes are unusual in that they can occur simultaneously, [2] like suprasegmental elements in speech.
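To make the simultaneity of parameters concrete, the sketch below models a sign as a bundle of parameter values realized at the same time rather than in sequence. The class and field names (Hand, NonManual, Sign) and the sample values are illustrative assumptions, not an established notation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative sketch only: parameter names and values are simplified
# assumptions, not an established phonological formalism.

@dataclass
class Hand:
    handshape: str                   # e.g. "B", "5", "S"
    orientation: str                 # e.g. "palm-in", "palm-down"
    location: str                    # e.g. "forehead", "chest", "neutral space"
    movement: Optional[str] = None   # e.g. "path:downward", "internal:closing"

@dataclass
class NonManual:
    eyebrows: Optional[str] = None   # e.g. "raised"
    cheeks: Optional[str] = None
    head: Optional[str] = None       # e.g. "tilt-forward"

@dataclass
class Sign:
    """A sign as a bundle of simultaneously realized parameters."""
    dominant: Hand
    nondominant: Optional[Hand] = None
    nonmanual: NonManual = field(default_factory=NonManual)

# A hypothetical one-handed sign specified at the forehead
THINK = Sign(dominant=Hand(handshape="1", orientation="palm-in",
                           location="forehead", movement="path:to-contact"))
```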
Most phonological research focuses on the handshape. A problem in most studies of handshape is that elements of a manual alphabet are often borrowed into signs, although not all of these elements are part of the sign language's phoneme inventory. [3] In addition, allophones are sometimes treated as separate phonemes. The first inventory of ASL handshapes contained 19 phonemes (or cheremes [4] ).
In some phonological models, movement is a phonological prime. [5] [6] Other models treat movement as redundant, since it is predictable from the locations, hand orientations, and handshape features at the start and end of a sign. [7] [8] Models in which movement is a prime usually distinguish path movement (i.e. movement of the hand[s] through space) and internal movement (i.e. an opening or closing movement of the hand, a hand rotation, or finger wiggling).
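The difference between the two families of models can be pictured as a difference in representation: one stores movement as an explicit prime, the other stores only the start and end states of the hand and derives the movement from them. The sketch below is a hedged illustration under that assumption, not a rendering of any particular model.

```python
from dataclasses import dataclass

@dataclass
class MovementAsPrime:
    # Movement stored explicitly as its own phonological prime.
    handshape: str
    location: str
    path_movement: str               # e.g. "downward"

@dataclass
class HoldsOnly:
    # Only the start and end configurations are stored; movement is
    # treated as redundant and read off the change between them.
    handshape_start: str
    handshape_end: str
    location_start: str
    location_end: str

    def derived_path(self) -> str:
        return f"{self.location_start} -> {self.location_end}"
```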
As yet, little is known about ASL phonotactic constraints (or those of other signed languages). The Symmetry and Dominance Conditions [3] are sometimes assumed to be phonotactic constraints. The Symmetry Condition requires both hands in a symmetric two-handed sign to have the same or a mirrored configuration, orientation, and movement. The Dominance Condition requires that only one hand in a two-handed sign move if the hands do not have the same handshape specifications, and that the non-dominant hand have an unmarked handshape. [9] As cross-linguistic research expands, these conditions have been found to hold in more and more signed languages, so they may not be constraints specific to ASL phonotactics.
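As a rough illustration, the two conditions can be phrased as checks on the hands of a two-handed sign. The set of unmarked handshapes used here (B, A, S, 1, C, 5, O) is a commonly cited approximation and, like the function signatures, is an assumption made for the sketch.

```python
# Hedged sketch of the Symmetry and Dominance Conditions as simple checks.
UNMARKED_HANDSHAPES = {"B", "A", "S", "1", "C", "5", "O"}  # approximation

def satisfies_symmetry(dom_shape: str, nondom_shape: str,
                       dom_movement: str, nondom_movement: str) -> bool:
    """Both hands of a symmetric two-handed sign share handshape and
    movement (same or mirrored orientation/location is not checked here)."""
    return dom_shape == nondom_shape and dom_movement == nondom_movement

def satisfies_dominance(dom_shape: str, nondom_shape: str,
                        nondom_hand_moves: bool) -> bool:
    """If the handshapes differ, the non-dominant hand must stay static
    and carry an unmarked handshape."""
    if dom_shape == nondom_shape:
        return True
    return (not nondom_hand_moves) and nondom_shape in UNMARKED_HANDSHAPES
```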
Six types of signs have been suggested: one-handed signs made without contact, one-handed signs made with contact (excluding contact with the other hand), symmetric two-handed signs (i.e. signs in which both hands are active and perform the same action), asymmetric two-handed signs (i.e. signs in which one hand is active and one hand is passive) where both hands have the same handshape, asymmetric two-handed signs where the hands have differing handshapes, and compound signs (which combine two or more of the above types). [10] The non-dominant hand in asymmetric signs often functions as the location of the sign. Monosyllabic signs are the most common type of sign in ASL and other sign languages. [11]
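The typology can be summarized as a small decision procedure; the sketch below is a hypothetical simplification that leaves compound signs out.

```python
def sign_type(two_handed: bool, contacts_something: bool,
              both_hands_active: bool = False,
              same_handshape: bool = True) -> str:
    """Hedged sketch of the six-way typology (compounds omitted)."""
    if not two_handed:
        return ("one-handed, with contact" if contacts_something
                else "one-handed, no contact")
    if both_hands_active:
        return "symmetric two-handed"
    if same_handshape:
        return "asymmetric two-handed, same handshape"
    return "asymmetric two-handed, different handshapes"
```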
Each phoneme may have multiple allophones, i.e. different realizations of the same phoneme. For example, in the /B/ handshape, the bending of the selected fingers may vary from straight to bent at the lowest joint, and the position of the thumb may vary from extended at the side of the hand to folded in the palm of the hand. Allophony may be free, but is also often conditioned by the context of the phoneme. Thus, the /B/ handshape will be flexed in a sign in which the fingertips touch the body, and the thumb will be folded in the palm in signs where the radial side of the hand touches the body or the other hand.
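Context-conditioned allophony of the kind described for /B/ can be pictured as a realization rule that selects a surface variant from the point of contact. The contact labels and variant names below are illustrative assumptions.

```python
from typing import Optional

def realize_B(contact: Optional[str]) -> str:
    """Hedged sketch: choose a surface variant of the /B/ handshape
    from the sign's point of contact (None = no contact)."""
    if contact == "fingertips-on-body":
        return "B-bent"              # fingers flexed at the lowest joint
    if contact == "radial-side-on-body-or-hand":
        return "B-thumb-in-palm"     # thumb folded into the palm
    return "B-straight"              # elsewhere, variation may be free
```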
Assimilation of sign phonemes to neighboring signs is a common process in ASL. For example, the point of contact for signs like THINK, normally at the forehead, may be articulated at a lower location if the location of the following sign is below the cheek. Other assimilation processes concern the number of selected fingers in a sign, which may adapt to that of the previous or following sign. It has also been observed that one-handed signs are articulated with two hands when followed by two-handed signs. [12]
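These assimilation patterns can likewise be sketched as context-sensitive rules; the location labels and the set of "low" locations below are assumptions made for illustration.

```python
LOW_LOCATIONS = {"chest", "abdomen", "neutral space"}   # assumed set

def assimilate_location(current: str, following: str) -> str:
    """Hedged sketch: a forehead-located sign such as THINK may be
    produced lower when the next sign sits below the cheek."""
    if current == "forehead" and following in LOW_LOCATIONS:
        return "below-cheek"
    return current

def assimilate_handedness(sign_is_one_handed: bool,
                          next_sign_is_two_handed: bool) -> int:
    """One-handed signs are often produced with two hands before a
    two-handed sign; returns the number of hands used."""
    return 2 if (sign_is_one_handed and next_sign_is_two_handed) else 1
```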
The brain processes language phonologically by first identifying the smallest units in an utterance, then combining them to make meaning. In spoken language, these smallest units are often referred to as phonemes, and they are the smallest sounds we identify in a spoken word. In sign language, the smallest units are often referred to as the parameters of a sign (i.e. handshape, location, movement and palm orientation), and we can identify these smallest parts within a produced sign. The cognitive method of phonological processing can be described as segmentation and categorization, where the brain recognizes the individual parts within the sign and combines them to form meaning. [13] This is similar to how spoken language combines sounds to form syllables and then words. Even though the modalities of these languages differ (spoken vs. signed), the brain still processes them similarly through segmentation and categorization.
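Segmentation and categorization can be pictured as a two-step lookup: first pull out the parameter values of a produced sign, then match the bundle against a stored lexicon. The toy lexicon and parameter labels below are hypothetical.

```python
from typing import Optional, Tuple

# Toy lexicon keyed by (handshape, location, movement); entries are hypothetical.
TOY_LEXICON = {
    ("1", "forehead", "to-contact"): "THINK",
    ("B", "chest", "circular"): "PLEASE",
}

def segment(produced: dict) -> Tuple[str, str, str]:
    """Segmentation: extract the parameter values from the input."""
    return (produced["handshape"], produced["location"], produced["movement"])

def categorize(parameters: Tuple[str, str, str]) -> Optional[str]:
    """Categorization: map the parameter bundle onto a lexical meaning."""
    return TOY_LEXICON.get(parameters)

meaning = categorize(segment({"handshape": "1", "location": "forehead",
                              "movement": "to-contact"}))  # -> "THINK"
```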
Measuring brain activity while a person produces or perceives sign language reveals that the brain processes signs differently from ordinary hand movements, much as it differentiates spoken words from non-linguistic sounds. More specifically, the brain is able to distinguish actual signs from the transitional movements between signs, just as words in spoken language can be identified separately from the sounds or breaths between words that carry no linguistic meaning. Multiple studies have revealed enhanced brain activity during the processing of sign language compared to the processing of mere hand movements. For example, during a brain surgery performed on a deaf patient who was awake, the patient's neural activity was recorded and analyzed while they were shown videos in American Sign Language. The results showed greater brain activity at the moments when the person was perceiving actual signs than during the transitions into the next sign. [14] This suggests that the brain segments the units of the sign and identifies which units combine to form actual meaning.
An observed difference in the location of phonological processing between spoken language and sign language is the activation of brain areas specific to auditory versus visual stimuli. Because of the difference in modality, cortical regions are stimulated differently depending on the type of language being used. Spoken language produces sounds, which engage the auditory cortices in the superior temporal lobes; sign language produces visual stimuli, which engage the occipitotemporal regions. Yet both modes of language activate many of the same regions known to be involved in language processing in the brain. [15] For example, the left superior temporal gyrus is stimulated by language in both spoken and signed forms, even though it was once assumed to respond only to auditory stimuli. [16] Whatever the mode of language, spoken or signed, the brain processes language by segmenting the smallest phonological units and combining them to make meaning.