Mouthing

In sign language, mouthing is the production of visual syllables with the mouth while signing. That is, signers sometimes say or mouth a word in a spoken language at the same time as producing the sign for it. Mouthing is one of the many ways in which the face and mouth are used while signing. Although it is not present in all sign languages, nor in all signers, where it does occur it may be an essential (that is, phonemic) element of a sign, distinguishing signs that would otherwise be homophones; in other cases a sign may seem flat and incomplete without mouthing even if it is unambiguous. Still other signs combine mouth movements with hand movements; for example, the ASL sign NOT-YET includes a mouth gesture in which the mouth is slightly open.[1]

Mouthing often originates from oralist education, where sign and speech are used together. Thus mouthing may preserve an often abbreviated rendition of the spoken translation of a sign. In educated Ugandan Sign Language (USL), for example, where both English and Ganda are influential, the sign VERY, Av", is accompanied by the mouthed syllable nyo, from Ganda nnyo 'very', and ABUSE, jO*[5]v", is accompanied by vu, from Ganda onvuma. Similarly, the USL sign FINISH, t55bf, is mouthed fsh, an abbreviation of English finish, and DEAF, }HxU, is mouthed df.
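The sign-to-mouthing pairings above can be sketched as a small lexicon. This is an illustrative data-structure sketch, not an established linguistic tool; the entries are the USL examples from this article, and the function name is hypothetical.

```python
# Sketch of a sign lexicon pairing glosses with their abbreviated mouthings.
# Data comes from the USL examples above; the structure itself is illustrative.
usl_mouthings = {
    "VERY":   {"mouthing": "nyo", "source": ("Ganda", "nnyo")},
    "ABUSE":  {"mouthing": "vu",  "source": ("Ganda", "onvuma")},
    "FINISH": {"mouthing": "fsh", "source": ("English", "finish")},
    "DEAF":   {"mouthing": "df",  "source": ("English", "deaf")},
}

def mouthing_for(gloss):
    """Return the abbreviated mouthed syllable recorded for a sign gloss, if any."""
    entry = usl_mouthings.get(gloss)
    return entry["mouthing"] if entry else None

print(mouthing_for("FINISH"))  # fsh
```

The point of the sketch is that each mouthing is lexically tied to a particular sign and traceable to a word in a surrounding spoken language.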

However, mouthing may also be iconic, as in the word for HOT (of food or drink) in ASL, UtCbf", where the mouthing suggests something hot in the mouth and does not correspond to the English word "hot".

Mouthing is an essential element of cued speech and of simultaneous sign and speech, both for the direct instruction of oral language and for disambiguating cases where there is not a one-to-one correspondence between sign and speech. However, mouthing does not always reflect the corresponding spoken word; when signing 'thick' in Auslan (Australian Sign Language), for example, the mouthing is equivalent to spoken fahth.

A 2008 study in Sign Language & Linguistics compares mouthing in three European sign languages, distinguishing mouthings, adverbial mouth gestures, semantically empty mouth gestures, enacting mouth gestures, and whole-face gestures.[2]

Linguists do not agree on how best to analyze mouthings. It is an open question whether they form part of the phonological system or are a product of simultaneous code-blending. Another open question is whether mouthings are an inherent part of the lexicon.[3]

Related Research Articles

American Sign Language

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States of America and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by employing both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

Language

Language is a structured system of communication that consists of grammar and vocabulary. It is the primary means by which humans convey meaning, both in spoken and written forms, and may also be conveyed through sign languages. The vast majority of human languages have developed writing systems that allow for the recording and preservation of the sounds or signs of language. Human language is characterized by its cultural and historical diversity, with significant variations observed between cultures and across time. Human languages possess the properties of productivity and displacement, which enable the creation of an infinite number of sentences, and the ability to refer to objects, events, and ideas that are not immediately present in the discourse. The use of human language relies on social convention and is acquired through learning.

In phonology and linguistics, a phoneme is a unit of sound that can distinguish one word from another in a particular language.

Phonetics is a branch of linguistics that studies how humans produce and perceive sounds, or in the case of sign languages, the equivalent aspects of sign. Linguists who specialize in studying the physical properties of speech are phoneticians. The field of phonetics is traditionally divided into three sub-disciplines based on the research questions involved: how humans plan and execute movements to produce speech, how various movements affect the properties of the resulting sound, and how humans convert sound waves to linguistic information. Traditionally, the minimal linguistic unit of phonetics is the phone, a speech sound in a language, which differs from the phonological unit of phoneme: the phoneme is an abstract categorization of phones, defined as the smallest unit that distinguishes meaning between sounds in a given language.

Sign language

Sign languages are languages that use the visual-manual modality to convey meaning, instead of spoken words. Sign languages are expressed through manual articulation in combination with non-manual markers. Sign languages are full-fledged natural languages with their own grammar and lexicon. Sign languages are not universal and are usually not mutually intelligible, although there are also similarities among different sign languages.

Auslan is the majority sign language of the Australian Deaf community. The term Auslan is a portmanteau of "Australian Sign Language", coined by Trevor Johnston in the 1980s, although the language itself is much older. Auslan is related to British Sign Language (BSL) and New Zealand Sign Language (NZSL); the three have descended from the same parent language, and together comprise the BANZSL language family. Auslan has also been influenced by Irish Sign Language (ISL) and more recently has borrowed signs from American Sign Language (ASL).

Signing Exact English (SEE-II) is a system of manual communication that strives to be an exact representation of English vocabulary and grammar. It is one of a number of such systems in use in English-speaking countries. It is related to Seeing Essential English (SEE-I), a manual sign system created in 1945, based on the morphemes of English words. SEE-II models much of its sign vocabulary on American Sign Language (ASL), but modifies the handshapes used in ASL in order to use the handshape of the first letter of the corresponding English word.

Cued speech is a visual system of communication used with and among deaf or hard-of-hearing people. It is a phonemic-based system which makes traditionally spoken languages accessible by using a small number of handshapes, known as cues, in different locations near the mouth to convey spoken language in a visual format. The National Cued Speech Association defines cued speech as "a visual mode of communication that uses hand shapes and placements in combination with the mouth movements and speech to make the phonemes of spoken language look different from each other." It adds information about the phonology of the word that is not visible on the lips. This allows people with hearing or language difficulties to visually access the fundamental properties of language. It is now used with people with a variety of language, speech, communication, and learning needs. It is not a sign language such as American Sign Language (ASL), which is a separate language from English. Cued speech is considered a communication modality but can be used as a strategy to support auditory rehabilitation, speech articulation, and literacy development.

Research into great ape language has involved teaching chimpanzees, bonobos, gorillas and orangutans to communicate with humans and with each other using sign language, physical tokens, lexigrams, and mimicking human speech. Some primatologists argue that these primates' use of the communication tools indicates their ability to use "language", although this is not consistent with some definitions of that term.

Manually Coded English (MCE) is a type of sign system that directly follows spoken English. The different codes of MCE vary in how closely they follow spoken English grammar. They may also be combined with other visual cues, such as body language. MCE is typically used in conjunction with direct spoken English.

German Sign Language

German Sign Language, or Deutsche Gebärdensprache (DGS), is the sign language of the deaf community in Germany, in Luxembourg, and in the German-speaking community of Belgium. It is unclear how many use German Sign Language as their main language; Gallaudet University estimated 50,000 as of 1986. The language has evolved through use in deaf communities over hundreds of years.

Home sign is a gestural communication system, often invented spontaneously by a deaf child who lacks accessible linguistic input. Home sign systems often arise in families where a deaf child is raised by hearing parents and is isolated from the Deaf community. Because such a child does not receive signed or spoken language input, they are referred to as linguistically isolated.

The Japanese Sign Language syllabary is a system of manual kana used as part of Japanese Sign Language (JSL). It is a signary of 45 signs and 4 diacritics representing the phonetic syllables of the Japanese language. Signs are distinguished both in the direction they point, and in whether the palm faces the viewer or the signer. For example, the manual syllables na, ni, ha are all made with the first two fingers of the hand extended straight, but for na the fingers point down, for ni across the body, and for ha toward the viewer. The signs for te and ho are both an open flat hand, but in te the palm faces the viewer, and in ho it faces away.
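The feature contrasts described above can be modeled as small feature bundles. This is a hypothetical sketch, not an established transcription scheme; the feature values come from the examples in the paragraph, and features the paragraph does not specify are left as None.

```python
# Illustrative model of JSL manual kana as feature bundles.
# Only the features mentioned in the description above are filled in.
from collections import namedtuple

ManualKana = namedtuple("ManualKana", ["handshape", "direction", "palm"])

jsl = {
    # na, ni, ha share a handshape and are distinguished by pointing direction
    "na": ManualKana("two-fingers-extended", "down", None),
    "ni": ManualKana("two-fingers-extended", "across-body", None),
    "ha": ManualKana("two-fingers-extended", "toward-viewer", None),
    # te and ho share an open flat hand and are distinguished by palm orientation
    "te": ManualKana("open-flat", None, "toward-viewer"),
    "ho": ManualKana("open-flat", None, "away"),
}

# Minimal pairs: identical handshape, contrast carried by a single feature
assert jsl["na"].handshape == jsl["ni"].handshape == jsl["ha"].handshape
assert jsl["te"].palm != jsl["ho"].palm
```

The sketch makes explicit that direction and palm orientation function as distinctive features within a shared handshape, much as phonological features do in spoken languages.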

Manually coded languages (MCLs) are a family of gestural communication methods which include gestural spelling as well as constructed languages which directly interpolate the grammar and syntax of oral languages in a gestural-visual form—that is, signed versions of oral languages. Unlike the sign languages that have evolved naturally in deaf communities, these manual codes are the conscious invention of deaf and hearing educators, and as such lack the distinct spatial structures present in native deaf sign languages. MCLs mostly follow the grammar of the oral language—or, more precisely, of the written form of the oral language that they interpolate. They have been mainly used in deaf education in an effort to "represent English on the hands" and by sign language interpreters in K-12 schools, although they have had some influence on deaf sign languages where their implementation was widespread.

Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language uses a vocal-aural modality, whereas a signed language uses a visual-spatial modality. A substantial number of bimodal bilinguals are children of deaf adults (CODA) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture, referred to as Deaf, but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered, in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s. However, bimodal bilinguals share many of the same traits as traditional bilinguals, as well as differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience similar neurological benefits as do unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity, as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases such as Alzheimer's and dementia.

Articulatory gestures are the actions necessary to enunciate language. Examples of articulatory gestures are the hand movements necessary to enunciate sign language and the mouth movements of speech. In semiotic terms, these are the physical embodiment (signifiers) of speech signs, which are gestural by nature.

The grammar of American Sign Language (ASL) is the best studied of any sign language, though research is still in its infancy, dating back only to William Stokoe in the 1960s.

In sign languages, the term classifier construction refers to a morphological system that can express events and states. They use handshape classifiers to represent movement, location, and shape. Classifiers differ from signs in their morphology, namely in that signs consist of a single morpheme. Signs are composed of three meaningless phonological features: handshape, location, and movement. Classifiers, on the other hand, consist of many morphemes. Specifically, the handshape, location, and movement are all meaningful on their own. The handshape represents an entity and the hand's movement iconically represents the movement of that entity. The relative location of multiple entities can be represented iconically in two-handed constructions.
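The morpheme-counting contrast described above can be sketched in code. This is an illustrative sketch only; the sign gloss, feature values, and function name are hypothetical stand-ins, not attested analyses.

```python
# Contrast from the paragraph above: in a lexical sign the three phonological
# features are meaningless on their own (one morpheme total), while in a
# classifier construction each feature is itself a meaningful morpheme.
lexical_sign = {
    "gloss": "CAT",  # a single morpheme; the features below carry no meaning alone
    "features": {"handshape": "F", "location": "cheek", "movement": "brush"},
}

classifier_construction = {
    # each feature is a morpheme with its own meaning
    "handshape": ("vehicle-classifier", "an entity: a vehicle"),
    "location":  ("high-in-space", "the entity's location"),
    "movement":  ("upward-path", "the entity's iconic movement"),
}

def morpheme_count(sign):
    # a plain sign contributes one morpheme; a classifier contributes
    # one morpheme per meaningful feature
    return 1 if "gloss" in sign else len(sign)

print(morpheme_count(lexical_sign), morpheme_count(classifier_construction))  # 1 3
```

The design choice mirrors the paragraph: the same three formational slots exist in both cases, but only in the classifier construction does each slot map to independent meaning.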

Sign languages such as American Sign Language (ASL) are characterized by phonological processes analogous to, yet dissimilar from, those of oral languages. Although there is a qualitative difference from oral languages in that sign-language phonemes are not based on sound, and are spatial in addition to being temporal, they fulfill the same role as phonemes in oral languages.

Nonmanual feature

Nonmanual features, also sometimes called nonmanual signals or sign language expressions, are the features of signed languages that do not use the hands. Nonmanual features are grammaticised and are a necessary component of many signs, in the same way that manual features are. Nonmanual features serve a similar function to intonation in spoken languages.

References

  1. "Mouthing in American Sign Language (ASL)".
  2. "Frequency distribution and spreading behavior of different types of mouth actions in three sign languages". Sign Language & Linguistics. 2008.
  3. Racz-Engelhardt, Szilard. Morphological Properties of Mouthings in Hungarian Sign Language (MJNY): A Corpus-based Study. Diss. Staats- und Universitätsbibliothek Hamburg Carl von Ossietzky, 2016.
