Nonmanual feature

A nonmanual feature, also sometimes called a nonmanual signal or sign language expression, is a feature of signed languages that does not use the hands. Nonmanual features are grammaticised and a necessary component in many signs, in the same way that manual features are. Nonmanual features serve a similar function to intonation in spoken languages. [1]

ASL sign for "angry" - note the furrowed eyebrows

Purpose

Nonmanual features in signed languages do not function the same way that general body language and facial expressions do in spoken ones. In spoken languages, they can give extra information but are not necessary for the receiver to understand the meaning of the utterance (for example, an autistic person may not use any facial expressions but still get their meaning across clearly, and people with visual impairments may understand spoken utterances without the need for visual aids). Conversely, nonmanual features are needed to understand the full meaning of many signs, and they can drastically change the meaning of individual signs. For example, in ASL the signs HERE and NOT HERE have the same manual sign, and are distinguished only by nonmanual features. [2]

Nonmanual features also do not function the same way as gestures (which exist in both spoken and signed languages), as nonmanual features are grammaticised. [3] For this reason, nonmanual features need to be included in signwriting systems.

Form

In sign languages, the hands do the majority of the work, forming phonemes and carrying denotational meaning; extra meaning, however, is created through the use of nonmanual features. Despite the literal meaning of manual, not every sign that uses a body part other than the hands counts as a nonmanual feature; the term generally refers to information expressed in the upper half of the body, such as the head, eyebrows, eyes, cheeks, and mouth, in various postures or movements. [4]

Nonmanual features have two main aspects: place and setting. These are the nonmanual equivalents of HOLM (handshape, orientation, location, and movement) in manual sign components. Place refers to the part of the body used, while setting refers to the state it is in. [5] For example, the Auslan sign for WHY has nonmanual features necessary to distinguish it from the sign BECAUSE. One of these nonmanual features can be described as having the place of [eyebrows] and the setting of [furrowed]. [6]
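As a rough illustration (not a standard linguistic formalism), the place/setting pairing described above can be modelled as a small data structure alongside the manual HOLM parameters. The HOLM values below are placeholders rather than real Auslan notation, and the class and field names are invented for this sketch:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class NonmanualFeature:
    place: str    # body part used, e.g. "eyebrows"
    setting: str  # the state that body part is in, e.g. "furrowed"

@dataclass
class Sign:
    gloss: str
    manual: dict                       # HOLM parameters of the manual component
    nonmanual: frozenset = field(default_factory=frozenset)

# Placeholder HOLM values; the point is that both signs share them exactly.
holm = {"handshape": "...", "orientation": "...", "location": "...", "movement": "..."}

because = Sign("BECAUSE", holm)
why = Sign("WHY", holm, frozenset({NonmanualFeature("eyebrows", "furrowed")}))

assert because.manual == why.manual        # manually identical signs
assert because.nonmanual != why.nonmanual  # the contrast is carried nonmanually
```

In this model, a minimal pair like WHY/BECAUSE differs only in the `nonmanual` set, mirroring the description that the two signs share a manual form and are distinguished by the [eyebrows]/[furrowed] feature alone.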

Although it is done using the face, mouthing is not always considered a nonmanual feature, as it is not a natural feature of signed languages, being taken from the local spoken language(s). [5] Because of this, there is debate as to whether mouthing is a sign language feature or a form of code-switching. [7]

Types

Lexical

Many lexical signs use nonmanual features in addition to the manual articulation. For instance, facial expressions may accompany verbs of emotion, as in the sign for angry in Czech Sign Language.

Nonmanual elements can be lexically contrastive. An example is the ASL sign for NOT YET, which requires that the tongue touch the lower lip and that the head rotate from side to side, in addition to the manual part of the sign. Without these features the sign would be interpreted as LATE. [8] Mouthings can also be contrastive, as in the manually identical signs for DOCTOR and BATTERY in Sign Language of the Netherlands. [9]

In some languages, there are a small number of words that are formed entirely by nonmanual features. For example, in Polish Sign Language, a sign is used to express that the signer wishes to self-correct or rephrase an utterance, perhaps best translated as I MEAN. The sign is made by closing the eyes and shaking the head. [5] Because it does not use the hands, this sign can be produced simultaneously as the signer rephrases their statement.

Intensifiers can be expressed through nonmanual features, as they have the benefit of being expressed at the same time as manual signs. In Auslan, puffed cheeks can be used simultaneously with the manual sign LARGE to translate the sign better as GIGANTIC.

Nonmanual features are also a part of many sign names. [2]

Phrasal

Many grammatical functions are produced nonmanually, [10] including interrogation, negation, relative clauses and topicalisation, and conditional clauses. [11] ASL and BSL use similar nonmanual marking for yes–no questions: both mark them with raised eyebrows and a forward head tilt, [12] [1] which functions similarly to the pitch rise English uses in such questions. [1]

Nonmanual features are frequently used to grammatically signify role shift, which is when the signer switches between two or more individuals they are quoting. [13] For example, in German Sign Language this can be done by the signer using signing space to tie quoted speech to pronouns. [14] It can also be expressed by gaze-shifting or head-shifting. [15]

Adjective phrases can be formed using nonmanual features. For instance, in ASL a slightly open mouth with the tongue relaxed and visible in the corner of the mouth means 'carelessly', but a similar nonmanual in BSL means 'boring' or 'unpleasant'. [16]

Discourse

Discourse functions such as turn taking are largely regulated through head movement and eye gaze. Since the addressee in a signed conversation must be watching the signer, a signer can avoid letting the other person have a turn by not looking at them, or can indicate that the other person may have a turn by making eye contact. [17]

Recognition in academia

In early studies of signed languages done by hearing researchers, nonmanual features were largely ignored. [18] In the 1960s, William Stokoe established a system of sign language phonology for American Sign Language and was one of the first researchers to discuss nonmanual features, using diacritics in his writings to signify six different facial expressions based on their meanings in English. [19]

From Stokoe's writings until the 1990s, facial expressions were discussed in some studies on signed languages, and awareness of them as a grammaticised aspect of signed languages began to grow. [3] In the 21st century, discussion of nonmanual features in both research on individual languages and sign language education has become more common, partly due to the increased awareness of minimal pairs in automatic sign language recognition technology. [20]

References

  1. Rudge, Luke A. (2018-08-03). "Analysing British sign language through the lens of systemic functional linguistics".
  2. Aran, Oya; Burger, Thomas; Caplier, Alice; Akarun, Lale (2008). "A belief-based sequential fusion approach for fusing manual and non-manual signs". S2CID 2971052.
  3. Reilly, Judy Snitzer; McIntire, Marina; Bellugi, Ursula (1990). "The acquisition of conditionals in American Sign Language: Grammaticized facial expressions". Applied Psycholinguistics. 11 (4): 369–392. doi:10.1017/S0142716400009632. ISSN 1469-1817. S2CID 146327328.
  4. Herrmann, Annika (2013). "Nonmanuals in sign languages". Modal and Focus Particles in Sign Languages: A Cross-Linguistic Study (1st ed.). De Gruyter. pp. 33–52. JSTOR j.ctvbkk221.10. Retrieved 2022-04-02.
  5. Tomaszewski, Piotr (2010). Not by the Hands Alone: Functions of Non-manual Features in Polish Sign Language. Matrix. pp. 289–320. ISBN 978-83-932212-0-2. Retrieved 2022-04-04.
  6. "Signbank". auslan.org.au. Retrieved 2022-04-02.
  7. Bogliotti, Caroline; Isel, Frederic (2021). "Manual and Spoken Cues in French Sign Language's Lexical Access: Evidence From Mouthing in a Sign-Picture Priming Paradigm". Frontiers in Psychology. 12: 655168. doi:10.3389/fpsyg.2021.655168. ISSN 1664-1078. PMC 8185165. PMID 34113290.
  8. Liddell, Scott K. (2003). Grammar, Gesture, and Meaning in American Sign Language. Cambridge: Cambridge University Press.
  9. Quer i Carbonell, Josep; Cecchetto, Carlo; Sverrisdóttir, Rannveig, eds. (2017). SignGram Blueprint: A Guide to Sign Language Grammar Writing. De Gruyter Mouton. ISBN 9781501511806. OCLC 1012688117.
  10. Bross, Fabian; Hole, Daniel. "Scope-taking strategies in German Sign Language". Glossa. 2 (1): 1–30. doi:10.5334/gjgl.106.
  11. Boudreault, Patrick; Mayberry, Rachel I. (2006). "Grammatical processing in American Sign Language: Age of first-language acquisition effects in relation to syntactic structure". Language and Cognitive Processes. 21 (5): 608–635. doi:10.1080/01690960500139363. S2CID 13572435.
  12. Baker, Charlotte; Cokely, Dennis (1980). American Sign Language: A Teacher's Resource Text on Grammar and Culture. Silver Spring, MD: T.J. Publishers.
  13. Quer, Josep (2018-10-01). "On categorizing types of role shift in Sign languages". Theoretical Linguistics. 44 (3–4): 277–282. doi:10.1515/tl-2018-0020. hdl:10230/36020. ISSN 1613-4060. S2CID 69448938.
  14. Buchstaller, Isabelle; van Alphen, Ingrid (2012). Quotatives: Cross-linguistic and Cross-disciplinary Perspectives. John Benjamins Publishing. ISBN 978-90-272-7479-3.
  15. "How to use role shifting in American Sign Language". www.handspeak.com. Retrieved 2022-04-14.
  16. Sutton-Spence, Rachel; Woll, Bencie (1998). The Linguistics of British Sign Language. Cambridge: Cambridge University Press.
  17. Baker, Charlotte (1977). "Regulators and turn-taking in American Sign Language discourse". In Friedman, Lynn (ed.), On the Other Hand: New Perspectives on American Sign Language. New York: Academic Press. ISBN 9780122678509.
  18. Filhol, Michael; Choisier, Annick; Hadjadj, Mohamed. "Non-manual features: the right to indifference".
  19. Stokoe, William C. Jr. (2005). "Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf". The Journal of Deaf Studies and Deaf Education. 10 (1): 3–37. doi:10.1093/deafed/eni001. ISSN 1081-4159. PMID 15585746.
  20. Mukushev, Medet; Sabyrov, Arman; Imashev, Alfarabi; Koishybay, Kenessary; Kimmelman, Vadim; Sandygulova, Anara (2020). "Evaluation of Manual and Non-manual Components for Sign Language Recognition". Proceedings of the 12th Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association: 6073–6078. ISBN 979-10-95546-34-4.