Cued speech

Kinemes used in Cued Speech.
Created by: R. Orin Cornett
Date: 1966
Setting and usage: Deaf or hard-of-hearing people
Purpose: Adds information about the phonology of the word that is not visible on the lips

Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phoneme-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues (representing consonants), in different locations near the mouth (representing vowels) to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips, allowing people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.

History

Cued speech was invented in 1966 by R. Orin Cornett at Gallaudet College, Washington, D.C. [1] After discovering that children with prelingual and profound hearing impairments typically have poor reading comprehension, he developed the system with the aim of improving the reading abilities of such children through better comprehension of the phonemes of English. At the time, some argued that deaf children earned these lower marks because they had to learn two different systems: American Sign Language (ASL) for person-to-person communication and English for reading and writing. [2] Since many sounds look identical on the lips (such as /p/ and /b/), the hand signals introduce a visual contrast in place of the formerly acoustic one. Cued speech may also help people who hear incomplete or distorted sound; according to the National Cued Speech Association at cuedspeech.org, "cochlear implants and Cued Speech are perfect partners". [3]

Since cued speech is based on making the sounds of speech visible, it is not limited to use in English-speaking nations. Because of demand from other languages and countries, by 1994 Cornett had adapted cueing to 25 other languages and dialects. [1] Originally designed to represent American English, the system was adapted to French in 1977. As of 2005, cued speech had been adapted to approximately 60 languages and dialects, including six dialects of English. For tonal languages such as Thai, tone is indicated by the inclination and movement of the hand. For English, cued speech uses eight different hand shapes and four different positions around the mouth.[citation needed]

Nature and use

Though to a hearing person cued speech may look similar to signing, it is not a sign language, nor is it a manually coded system for a spoken language. Rather, it is a manual modality of communication for representing any language at the phonological level.

A manual cue in cued speech consists of two components: hand shape and hand position relative to the face. Hand shapes distinguish consonants and hand positions distinguish vowels. A hand shape and a hand position (a "cue"), together with the accompanying mouth shape, make up a CV unit - a basic syllable. [4]
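The disambiguation logic can be made concrete with a short sketch. The Python below models a cue as a (handshape, position) pair whose information is intersected with what the lips show; the phoneme groupings are invented placeholders for illustration and do not reproduce the actual American English cue chart.

```python
# Minimal sketch: a cue is a (handshape, position) pair. Handshapes group
# consonants; positions group vowels. All groupings here are hypothetical
# placeholders, not the real American English cue chart.

# Lips alone are ambiguous: each viseme is a set of consonants that look
# identical on the mouth.
VISEMES = {
    "bilabial": {"p", "b", "m"},  # classic lookalikes on the lips
}

# Each handshape groups consonants that look DIFFERENT on the lips, so the
# hand and the mouth together identify exactly one consonant.
HANDSHAPES = {
    1: {"p", "t", "s"},   # hypothetical grouping
    4: {"b", "n", "g"},   # hypothetical grouping
}

# Hand positions near the face group the vowels (four positions in English).
POSITIONS = {
    "side": {"a", "o"},   # hypothetical grouping
    "chin": {"e", "u"},   # hypothetical grouping
}

def decode(handshape, position, mouth_viseme, mouth_vowel_candidates):
    """Intersect hand information with lip information to recover one
    consonant-vowel (CV) syllable."""
    consonants = HANDSHAPES[handshape] & VISEMES[mouth_viseme]
    vowels = POSITIONS[position] & mouth_vowel_candidates
    assert len(consonants) == 1 and len(vowels) == 1, "cue should disambiguate"
    return consonants.pop() + vowels.pop()

# /p/ and /b/ share a viseme, but handshapes 1 and 4 tell them apart:
print(decode(1, "side", "bilabial", {"a"}))  # -> "pa"
print(decode(4, "side", "bilabial", {"a"}))  # -> "ba"
```

The design constraint visible in the sketch is the one described above: consonants grouped under one handshape must look different on the lips, so hand and mouth together always identify a unique phoneme.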

Cuedspeech.org lists 64 different languages and dialects to which cued speech has been adapted. [5] Each adaptation starts from the language's phoneme inventory, identifying which phonemes look alike when pronounced and therefore need distinct hand cues to differentiate them.[citation needed]
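That adaptation step can be pictured as a constraint problem: phonemes sharing a viseme (lip shape) must receive different handshapes, while phonemes that already look different on the lips may share one. The following sketch illustrates the idea with a greedy assignment over an invented mini-inventory; it is not the procedure Cornett actually followed.

```python
# Simplified illustration of adapting cueing to a new language: phonemes that
# look alike on the lips (same viseme group) must get different handshapes.
# The inventory below is invented for illustration.

viseme_groups = [
    {"p", "b", "m"},   # bilabials look identical on the mouth
    {"f", "v"},        # labiodentals look identical on the mouth
    {"t", "d", "n"},   # alveolars look similar on the mouth
]

def assign_handshapes(groups, num_handshapes=8):
    """Greedily give each phoneme a handshape such that no two phonemes in
    the same viseme group share one. Returns {phoneme: handshape}."""
    assignment = {}
    for group in groups:
        used = set()
        for phoneme in sorted(group):
            # pick the lowest-numbered handshape not already used in this group
            shape = next(s for s in range(1, num_handshapes + 1) if s not in used)
            assignment[phoneme] = shape
            used.add(shape)
    return assignment

print(assign_handshapes(viseme_groups))
# e.g. {'b': 1, 'm': 2, 'p': 3, 'f': 1, 'v': 2, 'd': 1, 'n': 2, 't': 3}
```

Note that handshapes are reused across viseme groups: in this toy output, /b/, /f/, and /d/ can share a handshape precisely because they already look different on the lips, which is the design principle behind the real cue charts.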

Literacy

Cued speech is based on the hypothesis that if all the sounds in the spoken language looked clearly different from each other on the lips of the speaker, people with a hearing loss would learn a language in much the same way as a hearing person, but through vision rather than audition. [6] [7]

Literacy is the ability to read and write proficiently, which allows one to understand and communicate ideas so as to participate in a literate society.

Cued speech was designed to help eliminate the difficulties of English language acquisition and literacy development in children who are deaf or hard-of-hearing. Research shows that accurate and consistent cueing with a child can help in the development of language, communication, and literacy, but its importance and use are debated. Studies address the issues behind literacy development, [8] traditional deaf education, and how using cued speech affects the lives of deaf and hard-of-hearing children.

Cued speech does achieve its goal of distinguishing the phonemes received by the learner, but there is some question of whether it is as helpful to expression as it is to reception. An article by Jacqueline Leybaert and Jesús Alegría discusses how children who are introduced to cued speech before the age of one keep pace with their hearing peers in receptive vocabulary, though their expressive vocabulary lags behind. [9] The authors suggest additional, separate training in oral expression where it is desired. More importantly, this gap reflects the nature of cued speech, which adapts children who are deaf or hard-of-hearing to a hearing world; such discontinuities between expression and reception are less common among children with a hearing loss who are learning sign language. [9]

In her paper "The Relationship Between Phonological Coding And Reading Achievement In Deaf Children: Is Cued Speech A Special Case?" (1998), Ostrander notes, "Research has consistently shown a link between lack of phonological awareness and reading disorders (Jenkins & Bowen, 1994)" and discusses the research basis for teaching cued speech as an aid to phonological awareness and literacy. [10] Ostrander concludes that further research into these areas is needed and well justified. [11]

The editor of the Cued Speech Journal reports that "Research indicating that Cued Speech does greatly improve the reception of spoken language by profoundly deaf children was reported in 1979 by Gaye Nicholls, and in 1982 by Nicholls and Ling." [12]

In the book Choices in Deafness: A Parents' Guide to Communication Options, Sue Schwartz writes on how cued speech helps a deaf child learn pronunciation. The child can learn how to pronounce words such as "hors d'oeuvre" or "tamale" or "Hermione" whose pronunciations differ from their spellings. A child can also learn about accents and dialects: in New York, coffee may be pronounced "caw fee"; in the South, the word friend ("fray-end") can be a two-syllable word. [13]

Debate over cued speech vs. sign language

The topic of deaf education has long been filled with controversy. Two strategies for teaching deaf students exist: an aural/oral approach and a manual approach. Those who use aural-oralism believe that children who are deaf or hard of hearing should be taught through the use of residual hearing, speech, and speechreading. Those promoting a manual approach believe the deaf should be taught through the use of signed languages, such as American Sign Language (ASL). [14]

Within the United States, proponents of cued speech often discuss the system as an alternative to ASL and similar sign languages, although others note that it can be learned in addition to such languages. [15] For the ASL-using community, cued speech can serve as a component of learning English as a second language. Within bilingual-bicultural models, cued speech does not borrow or invent signs from ASL, nor does it attempt to change ASL syntax or grammar. Rather, it provides an unambiguous model for language learning that leaves ASL intact. [16]

Languages

Cued speech has been adapted to more than 50 languages and dialects. However, it is not clear how many of them are actually in use. [17]

Similar systems have been used for other languages, such as the Assisted Kinemes Alphabet in Belgium and the Baghcheban phonetic hand alphabet for Persian. [19]

Related Research Articles

American Sign Language

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States of America and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

A spoken language is a language produced by articulate sounds or manual gestures, as opposed to a written language. An oral language or vocal language is a language produced with the vocal tract in contrast with a sign language, which is produced with the body and hands.

Written language

A written language is the representation of a language by means of writing. This involves the use of visual symbols, known as graphemes, to represent linguistic units such as phonemes, syllables, morphemes, or words. However, written language is not merely spoken or signed language written down, though it can approximate that. Instead, it is a separate system with its own norms, structures, and stylistic conventions, and it often evolves differently than its corresponding spoken or signed language.

Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of the range of lip reading vary, with some figures as low as 30%, because lip reading relies on context, language knowledge, and any residual hearing. Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.

Signing Exact English is a system of manual communication that strives to be an exact representation of English language vocabulary and grammar. It is one of a number of such systems in use in English-speaking countries. It is related to Seeing Essential English (SEE-I), a manual sign system created in 1945, based on the morphemes of English words. SEE-II models much of its sign vocabulary from American Sign Language (ASL), but modifies the handshapes used in ASL in order to use the handshape of the first letter of the corresponding English word.

R. Orin Cornett was an American physicist, university professor and administrator, and the inventor of a literacy system for the deaf, known as Cued Speech.

The American Manual Alphabet (AMA) is a manual alphabet that augments the vocabulary of American Sign Language.

Oralism is the education of deaf students through oral language by using lip reading, speech, and mimicking the mouth shapes and breathing patterns of speech. Oralism came into popular use in the United States around the late 1860s. In 1867, the Clarke School for the Deaf in Northampton, Massachusetts, was the first school to start teaching in this manner. Oralism and its contrast, manualism, manifest differently in deaf education and are a source of controversy for involved communities. Oralism should not be confused with Listening and Spoken Language, a technique for teaching deaf children that emphasizes the child's perception of auditory signals from hearing aids or cochlear implants.

Manually Coded English (MCE) is a type of sign system that follows direct spoken English. The different codes of MCE vary in the levels of directness in following spoken English grammar. There may also be a combination with other visual clues, such as body language. MCE is typically used in conjunction with direct spoken English.

Simultaneous communication, SimCom, or sign supported speech (SSS) is a technique sometimes used by deaf, hard-of-hearing or hearing sign language users in which both a spoken language and a manual variant of that language are used simultaneously. While the idea of communicating using two modes of language seems ideal in a hearing/deaf setting, in practice the two languages are rarely relayed perfectly. Often the user's native language is the strongest, while the non-native language degrades in clarity. In an educational environment this is particularly difficult for deaf children, as a majority of teachers who teach the deaf are hearing. Survey results indicate that students' communication is indeed signing, and that the signing leans more toward English than ASL.

Icelandic Sign Language is the sign language of the deaf community in Iceland. It is based on Danish Sign Language; until 1910, deaf Icelandic people were sent to school in Denmark, but the languages have diverged since then. It is officially recognized by the state and regulated by a national committee.

A contact sign language, or contact sign, is a variety or style of language that arises from contact between deaf individuals using a sign language and hearing individuals using an oral language. Contact languages also arise between different sign languages, although the term pidgin rather than contact sign is used to describe such phenomena.

Manually coded languages (MCLs) are a family of gestural communication methods which include gestural spelling as well as constructed languages which directly interpolate the grammar and syntax of oral languages in a gestural-visual form—that is, signed versions of oral languages. Unlike the sign languages that have evolved naturally in deaf communities, these manual codes are the conscious invention of deaf and hearing educators, and as such lack the distinct spatial structures present in native deaf sign languages. MCLs mostly follow the grammar of the oral language—or, more precisely, of the written form of the oral language that they interpolate. They have been mainly used in deaf education in an effort to "represent English on the hands" and by sign language interpreters in K-12 schools, although they have had some influence on deaf sign languages where their implementation was widespread.

Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language consists of a vocal-aural modality versus a signed language which consists of a visual-spatial modality. A substantial number of bimodal bilinguals are children of deaf adults (CODA) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture that is referred to as Deaf, but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered. This is in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s. However, bimodal bilinguals share many of the same traits as traditional bilinguals, as well as differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience similar neurological benefits as do unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases, such as Alzheimer's and dementia.

Singapore Sign Language, or SgSL, is the native sign language used by the deaf and hard of hearing in Singapore, developed over six decades since the setting up of the first school for the Deaf in 1954. Since Singapore's independence in 1965, the Singapore deaf community has had to adapt to many linguistic changes. Today, the local deaf community recognises Singapore Sign Language (SgSL) as a reflection of Singapore's diverse linguistic culture. SgSL is influenced by Shanghainese Sign Language (SSL), British Sign Language (BSL), Australian Sign Language (Auslan), American Sign Language (ASL), Signing Exact English (SEE-II) and locally developed signs.

American Sign Language literature is one of the most important shared cultural experiences in the American deaf community. Literary genres initially developed in residential Deaf institutes, such as American School for the Deaf in Hartford, Connecticut, which is where American Sign Language developed as a language in the early 19th century. There are many genres of ASL literature, such as narratives of personal experience, poetry, cinematographic stories, folktales, translated works, original fiction and stories with handshape constraints. Authors of ASL literature use their body as the text of their work, which is visually read and comprehended by their audience viewers. In the early development of ASL literary genres, the works were generally not analyzed as written texts are, but the increased dissemination of ASL literature on video has led to greater analysis of these genres.

Robert J. Hoffmeister is associate professor emeritus and former director of the Center for the Study of Communication & Deafness at Boston University. He is most known for his book, Journey into the Deaf World. He is also known for supporting the American deaf community and deaf education.

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference from oral languages in that sign-language phonemes are not based on sound, and are spatial in addition to being temporal, they fulfill the same role as phonemes in oral languages.

Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes. However, profoundly deaf children who receive cochlear implants and auditory habilitation early in life often achieve expressive and receptive language skills within the norms of their hearing peers; age at implantation is strongly and positively correlated with speech recognition ability. Early access to language, whether through signed language or through technology, has been shown to prepare children who are deaf to achieve fluency in literacy skills.

References

  1. "All Good Things...Gallaudet closes Cued Speech Team", Cued Speech News Vol. XXVII No. 4 (Final Issue), Winter 1994: p. 1.
  2. Tamura, Leslie (September 27, 2010). "Cued speech offers deaf children links to spoken English". The Washington Post . Retrieved 2022-07-01.
  3. Jane Smith (2020). "Cued Speech and Cochlear Implantation: A view from two decades" (PDF).
  4. Heracleous, P., Beautemps, D., & Aboutabit, N. (2010). Cued speech automatic recognition in normal-hearing and deaf subjects. Speech Communication, 52, 504–512.
  5. "Cued Speech in Different Languages | National Cued Speech Association". www.cuedspeech.org. Archived from the original on 2012-07-28.
  6. Cued Speech: What and Why?, R. Orin Cornett, Ph.D., undated white paper.
  7. Proceedings of the International Congress on Education of the Deaf, Stockholm, Sweden 1970, Vol. 1, pp. 97-99
  8. Schwartz, Sue, ed. (2007). Choices in Deafness: A Parents' Guide to Communication Options. ISBN 9781890627737. Retrieved 2023-03-01.
  9. Leybaert, Jacqueline; LaSasso, Carol; Crain, Kelly Lamar (2010). Cued Speech and Cued Language for Deaf and Hard of Hearing Children. San Diego, CA: Plural Publishing. ISBN 978-1-59756-334-5.
  10. http://web.syr.edu/~clostran/literacy.html Archived 2006-05-03 at the Wayback Machine "The Relationship Between Phonological Coding And Reading Achievement In Deaf Children: Is Cued Speech A Special Case?" Carolyn Ostrander, 1998 (accessed August 23, 2006)
  11. Nielsen, Diane Corcoran; Luetke-Stahlman, Barbara (2002). "Phonological Awareness: One Key to the Reading Proficiency of Deaf Children". American Annals of the Deaf. 147 (3): 11–19. ISSN   0002-726X. JSTOR   44390352.
  12. Boggs, Carol J., ed. (1990). "Editor's Notes", Cued Speech Journal, Vol. 4, p. ii.
  13. Sue Schwartz, Ph.D, Choices in Deafness: A Parents' Guide to Communication Options
  14. National Cued Speech Association (2006). "Cued Speech and Literacy: History, Research, and Background Information" (PDF). Archived from the original (PDF) on 2013-10-20. Retrieved 2013-10-20.
  15. Cued Speech FAQ
  16. Giese, Karla (2018). "Cued Speech: An Opportunity Worth Recognizing". Odyssey: New Directions in Deaf Education. Retrieved 2022-03-05.
  17. Cued Languages - list of languages and dialects to which Cued Speech has been adapted
  18. "Etusivu - Vinkkipuheyhdistys ry". Vinkkipuhe.fi. Retrieved 2022-07-01.
  19. "Jabbar Baghcheban, Iran's sign language pioneer, remembered". Archived from the original on 2022-05-16. Retrieved 2014-01-10.
