| Cued speech | |
|---|---|
| Created by | R. Orin Cornett |
| Date | 1966 |
| Setting and usage | Deaf or hard-of-hearing people |
| Purpose | Adds information about the phonology of the word that is not visible on the lips |
| Language codes | |
| ISO 639-3 | – |
Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phoneme-based system that makes traditionally spoken languages accessible by using a small number of handshapes, known as cues (representing consonants), in different locations near the mouth (representing vowels) to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips, which allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.
Cued speech was invented in 1966 by R. Orin Cornett at Gallaudet College, Washington, D.C. [1] After discovering that children with prelingual and profound hearing impairments typically have poor reading comprehension, he developed the system with the aim of improving the reading abilities of such children through better comprehension of the phonemes of English. At the time, some argued that deaf children earned these lower marks because they had to learn two different systems: American Sign Language (ASL) for person-to-person communication and English for reading and writing. [2] Because many sounds look identical on the lips (such as /p/ and /b/), the hand cues introduce a visual contrast in place of the acoustic contrast. Cued speech may also help people who hear incomplete or distorted sound; according to the National Cued Speech Association at cuedspeech.org, "cochlear implants and Cued Speech are perfect partners". [3]
Since cued speech is based on making the sounds of speech visible, it is not limited to use in English-speaking nations. Originally designed to represent American English, the system was adapted to French in 1977, and in response to demand from other languages and countries, Cornett had adapted cueing to 25 other languages and dialects by 1994. [1] As of 2005, cued speech has been adapted to approximately 60 languages and dialects, including six dialects of English. For tonal languages such as Thai, tone is indicated by the inclination and movement of the hand. For English, cued speech uses eight different handshapes and four different positions around the mouth.[citation needed]
Though to a hearing person cued speech may look similar to signing, it is not a sign language; nor is it a manually coded sign system for a spoken language. Rather, it is a manual modality of communication for representing any language at the phonological level.
A manual cue in cued speech consists of two components: hand shape and hand position relative to the face. Hand shapes distinguish consonants and hand positions distinguish vowels. A hand shape and a hand position (a "cue"), together with the accompanying mouth shape, make up a CV unit, a basic syllable. [4]
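To make the pairing concrete, the following Python sketch models a cue inventory and encodes CV syllables as (handshape, position) pairs. The phoneme-to-handshape and vowel-to-position assignments here are hypothetical placeholders for illustration, not the official American Cued Speech chart; the point is only that consonants which look alike on the lips receive distinct handshapes.

```python
# Illustrative sketch of cued speech's CV structure.
# NOTE: the assignments below are hypothetical placeholders,
# not the official American Cued Speech chart.

CONSONANT_HANDSHAPE = {
    # /p/, /b/, /m/ look identical on the lips, so each must
    # receive a different handshape for the cue to disambiguate.
    "p": 1, "b": 4, "m": 5,
    "t": 5, "d": 1, "n": 4,
    "k": 2, "g": 7,
}

VOWEL_POSITION = {
    # Four positions around the mouth distinguish vowels.
    "a": "side", "i": "throat", "u": "chin", "e": "mouth",
}

def cue_syllable(consonant: str, vowel: str) -> tuple[int, str]:
    """Return the (handshape, position) cue for one CV syllable."""
    return (CONSONANT_HANDSHAPE[consonant], VOWEL_POSITION[vowel])

# /pa/ and /ba/ are indistinguishable by lipreading alone,
# but their cues differ in handshape:
print(cue_syllable("p", "a"))  # (1, 'side')
print(cue_syllable("b", "a"))  # (4, 'side')
```

Note how the same handshape can safely serve several consonants (here both /t/ and /m/ map to 5), as long as no two consonants that share a lip shape share a handshape; the mouth supplies the remaining information.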
Cuedspeech.org lists 64 different dialects to which CS has been adapted. [5] A language takes on CS by surveying the catalog of the language's phonemes, identifying which phonemes appear similar when pronounced, and assigning hand cues to differentiate them, as sketched below.[citation needed]
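This adaptation step can be viewed as a small constraint problem: group the phonemes by how they look on the lips (their viseme), then assign handshapes so that no two members of a group share one. The viseme groups and the greedy assignment in the Python sketch below are assumptions for illustration, not a published adaptation procedure.

```python
# Sketch of adapting cued speech to a new language: phonemes that
# share a viseme (the same lip shape) must get distinct handshapes.
# The viseme groups below are illustrative, not a published chart.

VISEME_GROUPS = [
    ["p", "b", "m"],   # bilabials look alike on the lips
    ["t", "d", "n"],   # alveolars
    ["k", "g", "ng"],  # velars
    ["f", "v"],        # labiodentals
]

NUM_HANDSHAPES = 8  # English cued speech uses eight handshapes

def assign_handshapes(groups: list[list[str]]) -> dict[str, int]:
    """Greedily assign handshapes so look-alike phonemes never share one."""
    assignment: dict[str, int] = {}
    load = [0] * NUM_HANDSHAPES  # phonemes already carried per handshape
    for group in groups:
        if len(group) > NUM_HANDSHAPES:
            raise ValueError("more look-alike phonemes than handshapes")
        used: set[int] = set()
        for phoneme in group:
            # pick the least-loaded handshape not yet used in this group
            shape = min(
                (s for s in range(NUM_HANDSHAPES) if s not in used),
                key=lambda s: load[s],
            )
            assignment[phoneme] = shape + 1  # handshapes numbered 1-8
            used.add(shape)
            load[shape] += 1
    return assignment

chart = assign_handshapes(VISEME_GROUPS)
# Every look-alike pair lands on different handshapes:
assert chart["p"] != chart["b"] and chart["b"] != chart["m"]
print(chart)
```

Balancing the load across handshapes mirrors the practical goal of keeping each handshape's consonant set small; the real charts were of course designed by hand, with additional phonetic considerations.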
Cued speech is based on the hypothesis that if all the sounds in the spoken language looked clearly different from each other on the lips of the speaker, people with a hearing loss would learn a language in much the same way as a hearing person, but through vision rather than audition. [6] [7]
Literacy is the ability to read and write proficiently, which allows one to understand and communicate ideas so as to participate in a literate society.
Cued speech was designed to help eliminate the difficulties of English language acquisition and literacy development in children who are deaf or hard-of-hearing. Research shows that accurate and consistent cueing with a child can help in the development of language, communication and literacy, but its importance and use are debated. Studies address the issues behind literacy development, [8] traditional deaf education, and how using cued speech affects the lives of deaf and hard-of-hearing children.
Cued speech does achieve its goal of distinguishing phonemes for the learner, but there is some question whether it is as helpful to expression as it is to reception. An article by Jacqueline Leybaert and Jesús Alegría reports that children who are introduced to cued speech before the age of one keep pace with their hearing peers in receptive vocabulary, though expressive vocabulary lags behind. [9] The writers suggest additional, separate training to teach oral expression if it is desired. More importantly, this reflects the nature of cued speech, which adapts children who are deaf or hard-of-hearing to a hearing world; such discontinuities between expression and reception are not as commonly found in children with a hearing loss who are learning sign language. [9]
In her paper "The Relationship Between Phonological Coding And Reading Achievement In Deaf Children: Is Cued Speech A Special Case?" (1998), Ostrander notes, "Research has consistently shown a link between lack of phonological awareness and reading disorders (Jenkins & Bowen, 1994)" and discusses the research basis for teaching cued speech as an aid to phonological awareness and literacy. [10] Ostrander concludes that further research into these areas is needed and well justified. [11]
The editor of the Cued Speech Journal reports that "Research indicating that Cued Speech does greatly improve the reception of spoken language by profoundly deaf children was reported in 1979 by Gaye Nicholls, and in 1982 by Nicholls and Ling." [12]
In the book Choices in Deafness: A Parents' Guide to Communication Options, Sue Schwartz writes on how cued speech helps a deaf child recognize pronunciation. The child can learn how to pronounce words such as "hors d'oeuvre" or "tamale" or "Hermione" that have pronunciations different from how they are spelled. A child can learn about accents and dialects. In New York, coffee may be pronounced "caw fee"; in the South, the word friend ("fray-end") can be a two-syllable word. [13]
The topic of deaf education has long been filled with controversy. Two strategies for teaching the deaf exist: an aural/oral approach and a manual approach. Those who use aural-oralism believe that children who are deaf or hard of hearing should be taught through the use of residual hearing, speech and speechreading. Those promoting a manual approach believe the deaf should be taught through the use of signed languages, such as American Sign Language (ASL). [14]
Within the United States, proponents of cued speech often discuss the system as an alternative to ASL and similar sign languages, although others note that it can be learned in addition to such languages. [15] For the ASL-using community, cued speech is a unique potential component for learning English as a second language. Within bilingual-bicultural models, cued speech does not borrow or invent signs from ASL, nor does CS attempt to change ASL syntax or grammar. Rather, CS provides an unambiguous model for language learning that leaves ASL intact. [16]
Cued speech has been adapted to more than 50 languages and dialects. However, it is not clear in how many of them it is actually in use. [17]
Similar systems have been used for other languages, such as the Assisted Kinemes Alphabet in Belgium and the Baghcheban phonetic hand alphabet for Persian. [19]
American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.
A spoken language is a language produced by articulate sounds or manual gestures, as opposed to a written language. An oral language or vocal language is a language produced with the vocal tract in contrast with a sign language, which is produced with the body and hands.
Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of the range of lip reading vary, with some figures as low as 30% because lip reading relies on context, language knowledge, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.
Signing Exact English (SEE-II) is a system of manual communication that strives to be an exact representation of English language vocabulary and grammar. It is one of a number of such systems in use in English-speaking countries. It is related to Seeing Essential English (SEE-I), a manual sign system created in 1945, based on the morphemes of English words. SEE-II models much of its sign vocabulary on American Sign Language (ASL), but modifies the handshapes used in ASL in order to use the handshape of the first letter of the corresponding English word.
R. Orin Cornett was an American physicist, university professor and administrator, and the inventor of a literacy system for the deaf, known as Cued Speech.
The American Manual Alphabet (AMA) is a manual alphabet that augments the vocabulary of American Sign Language.
Oralism is the education of deaf students through oral language by using lip reading, speech, and mimicking the mouth shapes and breathing patterns of speech. Oralism came into popular use in the United States around the late 1860s. In 1867, the Clarke School for the Deaf in Northampton, Massachusetts, was the first school to start teaching in this manner. Oralism and its contrast, manualism, manifest differently in deaf education and are a source of controversy for the communities involved. Oralism continues today as Listening and Spoken Language, a technique for teaching deaf children that emphasizes the child's perception of auditory signals from hearing aids or cochlear implants.
Manually Coded English (MCE) is an umbrella term referring to a number of invented manual codes intended to visually represent the exact grammar and morphology of spoken English. Different codes of MCE vary in the levels of adherence to spoken English grammar, morphology, and syntax. MCE is typically used in conjunction with direct spoken English.
Simultaneous communication, SimCom, or sign supported speech (SSS) is a technique sometimes used by deaf, hard-of-hearing or hearing sign language users in which both a spoken language and a manual variant of that language are used simultaneously. While the idea of communicating in two modes at once seems ideal in a hearing/deaf setting, in practice the two languages are rarely relayed perfectly. Often the user's native language remains strongest, while the non-native language degrades in clarity. In an educational environment this is particularly difficult for deaf children, as the majority of teachers of the deaf are hearing. Survey results indicate that students do communicate in sign, and that their signing leans more toward English than toward ASL.
Icelandic Sign Language is the sign language of the deaf community in Iceland. It is based on Danish Sign Language; until 1910, deaf Icelandic people were sent to school in Denmark, but the languages have diverged since then. It is officially recognized by the state and regulated by a national committee.
A contact sign language, or contact sign, is a variety or style of language that arises from contact between deaf individuals using a sign language and hearing individuals using an oral language. Contact languages also arise between different sign languages, although the term pidgin rather than contact sign is used to describe such phenomena.
Metalinguistics is the branch of linguistics that studies language and its relationship to other cultural behaviors. It is the study of how different parts of speech and communication interact with each other and reflect the way people live and communicate together. Jacob L. Mey in his book, Trends in Linguistics, describes Mikhail Bakhtin's interpretation of metalinguistics as "encompassing the life history of a speech community, with an orientation toward a study of large events in the speech life of people and embody changes in various cultures and ages."
Manually coded languages (MCLs) are a family of gestural communication methods which include gestural spelling as well as constructed languages which directly interpolate the grammar and syntax of oral languages in a gestural-visual form—that is, signed versions of oral languages. Unlike the sign languages that have evolved naturally in deaf communities, these manual codes are the conscious invention of deaf and hearing educators, and as such lack the distinct spatial structures present in native deaf sign languages. MCLs mostly follow the grammar of the oral language—or, more precisely, of the written form of the oral language that they interpolate. They have been mainly used in deaf education in an effort to "represent English on the hands" and by sign language interpreters in K-12 schools, although they have had some influence on deaf sign languages where their implementation was widespread.
Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language consists of a vocal-aural modality versus a signed language which consists of a visual-spatial modality. A substantial number of bimodal bilinguals are children of deaf adults (CODA) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture that is referred to as Deaf, but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered. This is in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s. However, bimodal bilinguals share many of the same traits as traditional bilinguals, as well as differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience similar neurological benefits as do unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases, such as Alzheimer's and dementia.
Singapore Sign Language, or SgSL, is the native sign language used by the deaf and hard of hearing in Singapore, developed over six decades since the setting up of the first school for the Deaf in 1954. Since Singapore's independence in 1965, the Singapore deaf community has had to adapt to many linguistic changes. Today, the local deaf community recognises Singapore Sign Language (SgSL) as a reflection of Singapore's diverse culture. SgSL is influenced by Shanghainese Sign Language (SSL), American Sign Language (ASL), Signing Exact English (SEE-II) and locally developed signs.
American Sign Language literature is one of the most important shared cultural experiences in the American deaf community. Literary genres initially developed in residential Deaf institutes, such as the American School for the Deaf in Hartford, Connecticut, which is where American Sign Language developed as a language in the early 19th century. There are many genres of ASL literature, such as narratives of personal experience, poetry, cinematographic stories, folktales, translated works, original fiction and stories with handshape constraints. Authors of ASL literature use their body as the text of their work, which is visually read and comprehended by their audience. In the early development of ASL literary genres, the works were generally not analyzed as written texts are, but the increased dissemination of ASL literature on video has led to greater analysis of these genres.
Robert J. Hoffmeister is associate professor emeritus and former director of the Center for the Study of Communication & Deafness at Boston University. He is most known for his book, Journey into the Deaf World. He is also known for supporting the American deaf community and deaf education.
Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to those of oral languages. Phonemes serve the same role in oral and signed languages; the main difference is that oral languages are based on sound while signed languages are spatial and temporal. There is debate about phonotactics in ASL, but the literature has largely agreed upon the Symmetry and Dominance Conditions as phonotactic constraints. Allophones behave in ASL as they do in spoken languages, showing free variation as well as complementary and contrastive distribution. There is assimilation between phonemes depending on the context around the sign as it is produced. The brain processes spoken and signed language the same in terms of linguistic properties, although there are differences in activation between the auditory and visual cortices in language perception.
Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth, and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until they are exposed to a sign language or begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes. However, profoundly deaf children who receive cochlear implants and auditory habilitation early in life often achieve expressive and receptive language skills within the norms of their hearing peers; earlier implantation is strongly associated with better speech recognition ability. Early access to language, whether through signed language or technology, has been shown to prepare children who are deaf to achieve fluency in literacy skills.