Simultaneous communication, SimCom, or sign supported speech (SSS) is a technique sometimes used by deaf, hard-of-hearing or hearing sign language users in which both a spoken language and a manual variant of that language (such as English and manually coded English) are used simultaneously. While the idea of communicating in two modes of language at once seems ideal in a hearing/deaf setting, in practice the two languages are rarely relayed perfectly. Often the user's native language (usually spoken language for a hearing person and sign language for a deaf person) remains the stronger of the two, while the non-native language degrades in clarity. In an educational environment this is particularly difficult for deaf children, as the majority of teachers of the deaf are hearing.[1] Survey results indicate that about two-thirds of students do communicate through signing, and that this signing leans more toward English than toward ASL.[2]
Manual communication, including simultaneous communication, has existed for some time in the United States, but it gained traction in the 1970s.[3] The history of using signing with children has been tumultuous, with repeated swings between discouraging signed languages in favor of oralism and the current push for bilingualism in Deaf schools. Ultimately, the majority of schools pushed the signed languages they used toward English, giving rise to systems that combined a spoken language (English) with a manual language. The historical use of simultaneous communication (SC) in schools has been contentious, with professionals (researchers and teachers alike) on both sides of the debate over whether the approach is useful.
A 1984 study found that, compared with haphazard instruction involving no consistent language approach, Total Communication was beneficial when combined with the correct approach.[4]
In one study, "Intelligibility of speech produced during simultaneous communication",[5] 12 hearing-impaired individuals were asked to audit audio samples from 4 hearing sign language experts who had each recorded a Simultaneous Communication (SC) sample and a Speech Alone (SA) sample. The 12 listeners were then asked to determine which speech was clearer. After listening to both sets of samples, the hearing-impaired listeners judged both SC and SA to be intelligible, which is consistent with previous research. Since speech intelligibility was maintained along with English grammar, the study's results indicate that SC is a useful language model for deaf and hard-of-hearing children and a tool that deaf and hard-of-hearing adults can continue to use.
Another study compared two groups of families using Total Communication (TC): one group participated in an intervention program offering services such as classes on Total Communication, private teachers for the child, and home visits from a deaf adult, while the other group used TC but received less intervention. The results showed that the intervention worked, correlating positively with the communication skills shown by the children in the intervention group. These children showed advanced cognitive skills, including comprehension and expression, particularly related to time.[4]
A 1990 study by Dennis Cokely, "The Effectiveness of Three Means of Communication in the College Classroom", reviewed earlier research that supported the use of Total Communication (SimCom) in the classroom. However, the study pointed out several restrictive factors that earlier research had not addressed. One earlier test had compared only SimCom, the Rochester Method, and speech reading with voice (lip reading), omitting ASL as a means of communication. The 1990 study addressed this issue by comparing SimCom, Sign Alone, and Interpretation to see which was most effective. The results showed that signing alone was the most effective way for students to understand the information presented, while SimCom was the least effective. Overall, Sign Alone and Interpretation were the most effective in all areas of the test, indicating that SimCom was a struggle for teachers and students alike.[6] When working with two separate modes of communication, the mode that comes naturally to the user will be the more prominent one. A study conducted in 1998 showed that signing and speaking at the same time results in slower instruction than using just one modality to express language.
Listed below are forms of signed communication that are used within SimCom. Since SimCom can pair any spoken language, most commonly English, with any signed mode, all of the communication forms listed below are available for use.
International Sign (IS) is a pidgin sign language which is used in a variety of different contexts, particularly at international meetings such as the World Federation of the Deaf (WFD) congress, events such as the Deaflympics and the Miss & Mister Deaf World, and informally when travelling and socialising.
Auslan is the majority sign language of the Australian Deaf community. The term Auslan is a portmanteau of "Australian Sign Language", coined by Trevor Johnston in the 1980s, although the language itself is much older. Auslan is related to British Sign Language (BSL) and New Zealand Sign Language (NZSL); the three have descended from the same parent language, and together comprise the BANZSL language family. Auslan has also been influenced by Irish Sign Language (ISL) and more recently has borrowed signs from American Sign Language (ASL).
Signing Exact English (SEE-II) is a system of manual communication that strives to be an exact representation of English vocabulary and grammar. It is one of a number of such systems in use in English-speaking countries. It is related to Seeing Essential English (SEE-I), a manual sign system created in 1945, based on the morphemes of English words. SEE-II models much of its sign vocabulary on American Sign Language (ASL), but modifies the handshapes used in ASL in order to use the handshape of the first letter of the corresponding English word. The four components of signs are handshape, orientation, location, and movement.
Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues, in different locations near the mouth to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips. This allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is different from American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.
Oralism is the education of deaf students through oral language by using lip reading, speech, and mimicking the mouth shapes and breathing patterns of speech. Oralism came into popular use in the United States around the late 1860s. In 1867, the Clarke School for the Deaf in Northampton, Massachusetts was the first school to start teaching in this manner. Oralism and its contrast, manualism, manifest differently in deaf education and are a source of controversy for involved communities. Oralism should not be confused with Listening and Spoken Language, a technique for teaching deaf children that emphasizes the child's perception of auditory signals from hearing aids or cochlear implants.
Manually Coded English (MCE) is a type of sign language that follows direct spoken English. The different codes of MCE vary in how directly they follow spoken English grammar. They may also be combined with other visual clues, such as body language. MCE is typically used in conjunction with direct spoken English.
Icelandic Sign Language is the sign language of the deaf community in Iceland. It is based on Danish Sign Language; until 1910, deaf Icelandic people were sent to school in Denmark, but the languages have diverged since then. It is officially recognized by the state and regulated by a national committee.
Audism is a form of discrimination aimed at people who are deaf, including toward the actions deaf people take to communicate with others. Tom L. Humphries coined the term in his doctoral dissertation in 1975, but it did not start to catch on until Harlan Lane used it in his own writings. Humphries originally applied audism to individual attitudes and practices, whereas Lane broadened the term to include the oppression of deaf people.
A contact sign language, or contact sign, is a variety or style of language that arises from contact between a deaf sign language and an oral language. Contact languages also arise between different sign languages, although the term pidgin rather than contact sign is used to describe such phenomena.
Manually coded languages (MCLs) are a family of gestural communication methods which include gestural spelling as well as constructed languages which directly interpolate the grammar and syntax of oral languages in a gestural-visual form - that is, signed versions of oral languages. Unlike the sign languages that have evolved naturally in deaf communities, these manual codes are the conscious invention of deaf and hearing educators. MCLs mostly follow the grammar of the oral language—or, more precisely, of the written form of the oral language that they interpolate. They have been mainly used in deaf education in an effort to "represent English on the hands" and by sign language interpreters in K-12 schools, although they have had some influence on deaf sign languages where their implementation was widespread.
Total Communication (TC) is an approach to communicating that aims to make use of a number of modes of communication such as signed, oral, auditory, written and visual aids, depending on the particular needs and abilities of the child. This approach can be useful in Deaf education, for young children who are pre-verbal, and for children with language disorders such as apraxia or Autism Spectrum Disorder.
Catalan Sign Language is a sign language used by around 18,000 people in different areas of Spain including Barcelona and Catalonia. As of 2012, the Catalan Federation for the Deaf estimates 25,000 LSC signers and roughly 12,000 deaf people around the Catalan lands. It has about 50% intelligibility with Spanish Sign Language (LSE). On the basis of mutual intelligibility, lexicon, and social attitudes, linguists have argued that LSC and LSE are distinct languages.
Singapore Sign Language, or SgSL, is the native sign language used by the deaf and hard of hearing in Singapore, developed over six decades since the setting up of the first school for the Deaf in 1954. Since Singapore's independence in 1965, the Singapore deaf community has had to adapt to many linguistic changes. Today, the local deaf community recognises Singapore Sign Language (SgSL) as a reflection of Singapore's diverse linguistic culture. SgSL is influenced by Shanghainese Sign Language (SSL), American Sign Language (ASL), Signing Exact English (SEE-II) and locally developed signs. The total number of deaf clients registered with The Singapore Association For The Deaf (SADeaf), an organisation that advocates equal opportunity for the deaf, was 5,756 as of 2014. Of these, only about one-third stated that they know sign language.
Deaf education is the education of students with any degree of hearing loss or deafness which addresses their differences and individual needs. This process involves individually-planned, systematically-monitored teaching methods, adaptive materials, accessible settings, and other interventions designed to help students achieve a higher level of self-sufficiency and success in the school and community than they would achieve with a typical classroom education. A number of countries focus on training teachers to teach deaf students with a variety of approaches and have organizations to aid deaf students.
John D. Bonvillian was a psychologist and associate professor emeritus in the Department of Psychology and Interdepartmental Program in Linguistics at the University of Virginia in Charlottesville, Virginia. He was known for his contributions to the study of sign language, childhood development, psycholinguistics, and language acquisition. Much of his research involved typically developing children, deaf children, or children with disabilities.
Manual babbling is a linguistic phenomenon that has been observed in deaf children and hearing children born to deaf parents who have been exposed to sign language. Manual babbles are characterized by repetitive movements that are confined to a limited area in front of the body, similar to the sign-phonetic space used in sign languages. In their 1991 paper, Petitto and Marentette concluded that between 40% and 70% of deaf children's manual activity can be classified as manual babbling, whereas manual babbling accounts for less than 10% of hearing children's manual activity. Manual babbling appears in both deaf and hearing children learning American Sign Language from 6 to 14 months of age.
Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth, and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes.
Constructed action and constructed dialogue are pragmatic features of languages where the speaker performs the role of someone else during a conversation or narrative. Metzger defines them as the way people "use their body, head, and eye gaze to report the actions, thoughts, words, and expressions of characters within a discourse". Constructed action is when a speaker performs the actions of someone else in the narrative, while constructed dialogue is when a speaker acts as the other person in a reported dialogue. The difference between constructed action and constructed dialogue in sign language users is an important distinction to make, since signing can be considered an action. Recounting a past dialogue through sign is the communication of that occurrence and is therefore part of the dialogue, whereas the facial expressions and depictions of actions that took place are constructed actions. Constructed action is very common cross-linguistically.
Language deprivation in deaf and hard of hearing children occurs when children do not receive accessible language exposure during the critical period of language development. Language development may be severely delayed from the lack of language exposure during this period. This was observed in well-known clinical case studies such as Genie, Kaspar Hauser, Anna, and Isabelle, as well as cases analyzing feral children such as Victor. All of these children had typical hearing, yet did not develop language typically due to language deprivation. Similarly, language deprivation in deaf and hard of hearing children often occurs when sufficient language exposure is not provided in the first few years of life. However, deaf and hard of hearing children who are exposed to sufficient language as children are able to develop typical language. Language can be provided in a variety of ways and helps children learn about and understand the world around them. Early intervention, parental involvement, and legislation work to prevent language deprivation. Deaf children who experience limited access to language—spoken or signed—may not develop the necessary skills to successfully assimilate into the academic learning milieu. There are varied educational approaches for teaching deaf and hard of hearing individuals.