Sign language glove

A sign language glove is an electronic device which attempts to convert the motions of a sign language into written or spoken words. Some critics of such technologies have argued that the potential of sensor-enabled gloves to do this is commonly overstated or misunderstood, because many sign languages have a complex grammar that includes use of the sign space and facial expressions (non-manual elements).[1]

The wearable device contains sensors that run along the four fingers and thumb to identify each word, phrase, or letter as it is signed. Those signals are then sent wirelessly to a smartphone, which translates them into spoken words at a rate of one word per second.
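
A minimal sketch of the matching step such a glove might perform, assuming hypothetical flex-sensor templates (the letters, sensor values, and the nearest-neighbour rule below are illustrative; real devices learn their models from per-user training data):

    import math

    # Hypothetical calibrated templates: one flex-sensor reading per digit,
    # thumb first, recorded while the wearer holds a fingerspelled letter.
    # Higher values mean a more bent digit; all numbers are invented.
    TEMPLATES = {
        "A": [0.2, 0.9, 0.9, 0.9, 0.9],  # thumb extended, four fingers curled
        "B": [0.9, 0.1, 0.1, 0.1, 0.1],  # thumb tucked, four fingers extended
        "L": [0.1, 0.1, 0.9, 0.9, 0.9],  # thumb and index finger extended
    }

    def classify(reading):
        """Return the letter whose template is nearest to the reading."""
        return min(TEMPLATES, key=lambda t: math.dist(reading, TEMPLATES[t]))

    # One noisy sample from the five sensors is matched to "L".
    print(classify([0.15, 0.12, 0.85, 0.92, 0.88]))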

Scientists at UCLA, where one of the many such projects was developed, believe the innovation could allow for easier communication for deaf people. "Our hope is that this opens up an easy way for people who use sign language to communicate directly with non-signers without needing someone else to translate for them," said lead researcher Jun Chen.

The researchers also added adhesive sensors to the faces of the people who tested the device, between their eyebrows and on one side of their mouths, to capture the nonmanual features of the language.

Related Research Articles

American Sign Language: Sign language used predominantly in the United States

American Sign Language (ASL) is a natural language that serves as the predominant sign language of Deaf communities in the United States and most of Anglophone Canada. ASL is a complete and organized visual language that is expressed by both manual and nonmanual features. Besides North America, dialects of ASL and ASL-based creoles are used in many countries around the world, including much of West Africa and parts of Southeast Asia. ASL is also widely learned as a second language, serving as a lingua franca. ASL is most closely related to French Sign Language (LSF). It has been proposed that ASL is a creole language of LSF, although ASL shows features atypical of creole languages, such as agglutinative morphology.

In phonology, minimal pairs are pairs of words or phrases in a particular language, spoken or signed, that differ in only one phonological element, such as a phoneme, toneme or chroneme, and have distinct meanings. They are used to demonstrate that two phones are two separate phonemes in the language.
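
The signed case can be made concrete with a toy three-parameter representation of signs; the sketch below models the ASL signs FATHER and MOTHER, a commonly cited minimal pair differing only in location (the parameter inventory is simplified for illustration):

    def is_minimal_pair(sign_a, sign_b):
        """True if two signs differ in exactly one phonological parameter."""
        parameters = ("handshape", "location", "movement")
        return sum(sign_a[p] != sign_b[p] for p in parameters) == 1

    # Simplified representations of two ASL signs.
    FATHER = {"handshape": "5", "location": "forehead", "movement": "tap"}
    MOTHER = {"handshape": "5", "location": "chin", "movement": "tap"}

    print(is_minimal_pair(FATHER, MOTHER))  # True: only the location differs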

Sign language: Language which uses manual communication and body language to convey meaning

Sign languages are languages that use the visual-manual modality to convey meaning. Sign languages are expressed through manual articulations in combination with non-manual elements. Sign languages are full-fledged natural languages with their own grammar and lexicon. Sign languages are not universal and are usually not mutually intelligible with each other, although there are also similarities among different sign languages.

International Sign (IS) is a pidgin sign language which is used in a variety of different contexts, particularly at international meetings such as the World Federation of the Deaf (WFD) congress, in some European Union settings, and at some UN conferences, at events such as the Deaflympics, the Miss & Mister Deaf World, and Eurovision, and informally when travelling and socialising.

British Sign Language: Sign language used in the United Kingdom (UK)

British Sign Language (BSL) is a sign language used in the United Kingdom (UK), and is the first or preferred language among the Deaf community in the UK. Based on the percentage of people who reported 'using British Sign Language at home' on the 2011 Scottish Census, the British Deaf Association estimates there are 151,000 BSL users in the UK, of which 87,000 are Deaf. By contrast, in the 2011 England and Wales Census 15,000 people living in England and Wales reported themselves using BSL as their main language. People who are not deaf may also use BSL: hearing relatives of deaf people, sign language interpreters, and others who have contact with the British Deaf community. The language makes use of space and involves movement of the hands, body, face, and head.

Auslan is the majority sign language of the Australian Deaf community. The term Auslan is a portmanteau of "Australian Sign Language", coined by Trevor Johnston in the 1980s, although the language itself is much older. Auslan is related to British Sign Language (BSL) and New Zealand Sign Language (NZSL); the three have descended from the same parent language, and together comprise the BANZSL language family. Auslan has also been influenced by Irish Sign Language (ISL) and more recently has borrowed signs from American Sign Language (ASL).

Signing Exact English (SEE-II) is a system of manual communication that strives to be an exact representation of English vocabulary and grammar. It is one of a number of such systems in use in English-speaking countries. It is related to Seeing Essential English (SEE-I), a manual sign system created in 1945, based on the morphemes of English words. SEE-II models much of its sign vocabulary from American Sign Language (ASL), but modifies the handshapes used in ASL in order to use the handshape of the first letter of the corresponding English word.

Oralism is the education of deaf students through oral language by using lip reading, speech, and mimicking the mouth shapes and breathing patterns of speech. Oralism came into popular use in the United States around the late 1860s. In 1867, the Clarke School for the Deaf in Northampton, Massachusetts was the first school to start teaching in this manner. Oralism and its contrast, manualism, manifest differently in deaf education and are a source of controversy for involved communities. Oralism should not be confused with Listening and Spoken Language, a technique for teaching deaf children that emphasizes the child's perception of auditory signals from hearing aids or cochlear implants.

Manually Coded English (MCE) is a type of sign system that directly follows spoken English. The different codes of MCE vary in how closely they follow spoken English grammar. There may also be a combination with other visual cues, such as body language. MCE is typically used in conjunction with direct spoken English.

Japanese Sign Language, also known by the acronym JSL, is the dominant sign language in Japan and is a complete natural language, distinct from but influenced by the spoken Japanese language.

Al-Sayyid Bedouin Sign Language (ABSL) is a village sign language used by about 150 deaf and many hearing members of the al-Sayyid Bedouin tribe in the Negev desert of southern Israel.

Manually coded languages (MCLs) are a family of gestural communication methods which include gestural spelling as well as constructed languages which directly interpolate the grammar and syntax of oral languages in a gestural-visual form—that is, signed versions of oral languages. Unlike the sign languages that have evolved naturally in deaf communities, these manual codes are the conscious invention of deaf and hearing educators, and as such lack the distinct spatial structures present in native deaf sign languages. MCLs mostly follow the grammar of the oral language—or, more precisely, of the written form of the oral language that they interpolate. They have been mainly used in deaf education in an effort to "represent English on the hands" and by sign language interpreters in K-12 schools, although they have had some influence on deaf sign languages where their implementation was widespread.

Subtitles: Textual representation of events and speech in motion imagery

Subtitles are text derived from either a transcript or screenplay of the dialogue or commentary in films, television programs, video games, and the like, usually displayed at the bottom of the screen, or at the top of the screen if text already appears at the bottom. They can either be a form of written translation of a dialogue in a foreign language, or a written rendering of the dialogue in the same language, with or without added information to help viewers who are deaf or hard-of-hearing, who cannot understand the spoken language, or who have accent recognition problems, to follow the dialogue.

A speech-to-text reporter (STTR), also known as a captioner, is a person who listens to what is being said and inputs it, word for word, using an electronic shorthand keyboard, speech recognition software, or a CAT software system. Their keyboard or speech recognition software is linked to a computer, which converts this information to properly spelled words. The reproduced text can then be read by deaf or hard-of-hearing people, English language learners, or persons with auditory processing disabilities.

American Sign Language literature is one of the most important shared cultural experiences in the American Deaf community. Literary genres initially developed in residential Deaf institutes, such as American School for the Deaf in Hartford, Connecticut, which is where American Sign Language developed as a language in the early 19th century. There are many genres of ASL literature, such as narratives of personal experience, poetry, cinematographic stories, folktales, translated works, original fiction and stories with handshape constraints. Authors of ASL literature use their body as the text of their work, which is visually read and comprehended by their audience. In the early development of ASL literary genres, the works were generally not analyzed as written texts are, but the increased dissemination of ASL literature on video has led to greater analysis of these genres.

Prelingual deafness refers to deafness that occurs before learning speech or language. Speech and language typically begin to develop very early, with infants saying their first words by age one. Therefore, prelingual deafness is considered to occur before the age of one: the baby is either born deaf or loses hearing within the first year of life. This hearing loss may occur for a variety of reasons and impacts cognitive, social, and language development.

Nepalese Sign Language or Nepali Sign Language is the main sign language of Nepal. It is a partially standardized language based informally on the variety of Kathmandu, with some input from varieties of Pokhara and elsewhere. As an indigenous sign language, it is not related to oral Nepali. The Nepali constitution of 2015 specifically mentions the right to have education in sign language for the deaf. Likewise, the newly passed Disability Rights Act 2072 (2017), in its definition of language, states that "'Language' means spoken and sign languages and other forms of speechless language." In practice, it is recognized by the Ministry of Education and the Ministry of Women, Children and Social Welfare, and is used in all schools for the deaf. In addition, there is legislation underway in Nepal which, in line with the UN Convention on the Rights of Persons with Disabilities (UNCRPD) that Nepal has ratified, should give Nepalese Sign Language equal status with the oral languages of the country.

The Arab sign-language family is a family of sign languages spread across the Arab Middle East. Its extent is not yet known, because only some of the sign languages in the region have been compared.

Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth, and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes.

The machine translation of sign languages has been possible, albeit in a limited fashion, since 1977, when a research project successfully matched English letters from a keyboard to ASL manual alphabet letters simulated on a robotic hand. These technologies translate signed languages into written or spoken language, and written or spoken language to sign language, without the use of a human interpreter. Sign languages possess different phonological features than spoken languages, which has created obstacles for developers. Developers use computer vision and machine learning to recognize specific phonological parameters and epentheses unique to sign languages, and speech recognition and natural language processing allow interactive communication between hearing and deaf people.
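
One such obstacle, movement epenthesis (the transitional motion between two signs), can be illustrated with a simple segmentation rule applied to recognizer output. The frame labels, confidence scores, and fixed threshold below are assumptions made for the sketch; real systems use trained statistical models for this step:

    # Hypothetical recognizer output: one (gloss, confidence) pair per video
    # frame. Transitional movement between signs tends to score low against
    # every sign model, so low-confidence frames are dropped as epenthesis.
    def segment_glosses(frames, threshold=0.6):
        """Collapse frame-level labels into a sequence of sign glosses."""
        glosses = []
        for gloss, confidence in frames:
            if confidence < threshold:
                continue                  # likely epenthesis between signs
            if not glosses or glosses[-1] != gloss:
                glosses.append(gloss)     # a new sign begins
        return glosses

    frames = [("STORE", 0.9), ("STORE", 0.8), ("move", 0.2),
              ("I", 0.85), ("move", 0.3), ("GO", 0.9)]
    print(segment_glosses(frames))  # ['STORE', 'I', 'GO']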

References

  1. Erard, Michael (November 9, 2017). "Why Sign-Language Gloves Don't Help Deaf People". The Atlantic.