Lip reading, also known as speechreading, is a technique of understanding a limited range of speech by visually interpreting the movements of the lips, face and tongue without sound. Estimates of how much speech can be understood by lip reading alone vary, with some figures as low as 30% of words, because lip reading also relies on context, knowledge of the language, and any residual hearing. [1] Although lip reading is used most extensively by deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth. [2]
Although speech perception is considered to be an auditory skill, it is intrinsically multimodal, since producing speech requires the speaker to make movements of the lips, teeth and tongue which are often visible in face-to-face communication. Information from the lips and face supports aural comprehension [3] and most fluent listeners of a language are sensitive to seen speech actions (see McGurk effect). The extent to which people make use of seen speech actions varies with the visibility of the speech action and the knowledge and skill of the perceiver.
The phoneme is the smallest detectable unit of sound in a language that serves to distinguish words from one another; /pit/ and /pik/ differ by one phoneme and refer to different concepts. Spoken English has about 44 phonemes. For lip reading, the number of visually distinctive units - visemes - is much smaller, so each viseme typically corresponds to several phonemes. This is because many phonemes are produced within the mouth and throat and are hard to see; these include glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). Homophenes are words that look similar when lip read, but which contain different phonemes. Because there are about three times as many phonemes as visemes in English, it is often claimed that only 30% of speech can be lip read. Homophenes are a crucial source of mis-lip reading.
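The many-to-one collapse from phonemes to visemes, and the homophenes it produces, can be illustrated with a short sketch in Python; the viseme groupings and the simplified transcriptions used here are illustrative assumptions, not a standard viseme inventory.

```python
# Illustrative sketch: a toy phoneme-to-viseme mapping (the groupings and the
# simplified transcriptions are assumptions, not a standard viseme inventory).
PHONEME_TO_VISEME = {
    "p": "BILABIAL", "b": "BILABIAL", "m": "BILABIAL",
    "f": "LABIODENTAL", "v": "LABIODENTAL",
    "t": "ALVEOLAR", "d": "ALVEOLAR", "s": "ALVEOLAR", "z": "ALVEOLAR", "n": "ALVEOLAR",
    "k": "VELAR", "g": "VELAR",
    "ae": "OPEN_VOWEL",
}

def viseme_string(phonemes):
    """Collapse a phonemic transcription into the pattern visible on the lips."""
    return tuple(PHONEME_TO_VISEME[p] for p in phonemes)

# 'pat', 'bat' and 'mat' contain different phonemes...
words = {"pat": ["p", "ae", "t"], "bat": ["b", "ae", "t"], "mat": ["m", "ae", "t"]}

# ...but all three collapse to the same viseme pattern, so they are homophenes.
for word, phones in words.items():
    print(word, "->", viseme_string(phones))
```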
Visemes can be captured as still images, but speech unfolds in time. The smooth articulation of speech sounds in sequence can mean that mouth patterns may be 'shaped' by an adjacent phoneme: the 'th' sound in 'tooth' and in 'teeth' appears very different because of the vocalic context. This feature of dynamic speech-reading affects lip-reading 'beyond the viseme'. [5]
While visemes offer a useful starting point for understanding lipreading, some spoken distinctions within a viseme class can nevertheless be detected, and these can help support identification. [6] Moreover, the statistical distribution of phonemes within the lexicon of a language is uneven. While there are clusters of words which are phonemically similar to each other ('lexical neighbors', such as spit/sip/sit/stick), others are unlike all other words: they are 'unique' in terms of the distribution of their phonemes ('umbrella' may be an example). Skilled users of the language bring this knowledge to bear when interpreting speech: it is generally harder to identify a heard word with many lexical neighbors than one with few. Applying this insight to seen speech, some words in the language can be unambiguously lip-read even when they contain few visemes, simply because no other words could possibly 'fit'. [7]
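A minimal sketch of this lexical-neighborhood idea follows, using a toy lexicon and treating any word one edit away as a neighbor; published studies count phoneme edits within a phonemically transcribed lexicon, so this is only an approximation.

```python
# Minimal sketch of lexical-neighborhood counting over a toy lexicon. Here a
# 'neighbor' is any other word reachable by one letter substitution, insertion
# or deletion; published studies count phoneme edits, so this is approximate.
LEXICON = {"spit", "sip", "sit", "stick", "slit", "split", "umbrella"}

def edit_distance(a, b):
    """Levenshtein distance computed with a single-row dynamic programme."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def neighbors(word):
    return {other for other in LEXICON if other != word and edit_distance(word, other) == 1}

for word in sorted(LEXICON):
    print(f"{word!r}: {len(neighbors(word))} neighbor(s)")
# 'umbrella' has no neighbors: even a partial percept can only 'fit' one word.
```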
Many factors affect the visibility of a speaking face, including illumination, movement of the head/camera, frame-rate of the moving image and distance from the viewer (see e.g. [8] ). Head movement that accompanies normal speech can also improve lip-reading, independently of oral actions. [9] However, when lip-reading connected speech, the viewer's knowledge of the spoken language, familiarity with the speaker and style of speech, and the context of the lip-read material [10] are as important as the visibility of the speaker. While most hearing people are sensitive to seen speech, there is great variability in individual speechreading skill. Good lipreaders are often more accurate than poor lipreaders at identifying phonemes from visual speech.
A simple visemic measure of 'lipreadability' has been questioned by some researchers. [11] The 'phoneme equivalence class' measure takes into account the statistical structure of the lexicon and can also accommodate individual differences in lip-reading ability. [12] [13] In line with this, excellent lipreading is often associated with more broad-based cognitive skills including general language proficiency, executive function and working memory. [14] [15]
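The equivalence-class idea can be made concrete by grouping a lexicon according to the viseme pattern each word presents to the eye; the size of a word's class is then a rough index of how ambiguous that word is for a lipreader. The sketch below illustrates this kind of grouping with toy viseme groups and transcriptions; it is not the published measure.

```python
# Sketch of a lexical equivalence-class measure: words whose phonemes map to the
# same viseme pattern are indistinguishable by eye, so the size of a word's class
# is a rough index of its ambiguity for a lipreader. The viseme groups and the
# transcriptions are toy assumptions for illustration, not a published metric.
from collections import defaultdict

VISEME_OF = {"p": "LIPS", "b": "LIPS", "m": "LIPS",
             "t": "TIP", "d": "TIP", "n": "TIP", "s": "TIP",
             "f": "TEETH", "v": "TEETH",
             "ae": "OPEN", "I": "SPREAD"}

TRANSCRIPTIONS = {
    "pat": ["p", "ae", "t"], "bat": ["b", "ae", "t"], "mad": ["m", "ae", "d"],
    "fan": ["f", "ae", "n"], "van": ["v", "ae", "n"], "sit": ["s", "I", "t"],
}

# Group the lexicon by the viseme pattern each word presents.
classes = defaultdict(set)
for word, phones in TRANSCRIPTIONS.items():
    classes[tuple(VISEME_OF[p] for p in phones)].add(word)

for word, phones in TRANSCRIPTIONS.items():
    members = classes[tuple(VISEME_OF[p] for p in phones)]
    print(f"{word}: class size {len(members)} -> {sorted(members)}")
```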
Seeing the mouth plays a role in the very young infant's early sensitivity to speech, and prepares them to become speakers at 1–2 years. In order to imitate, a baby must learn to shape their lips in accordance with the sounds they hear; seeing the speaker may help them to do this. [16] Newborns imitate adult mouth movements such as sticking out the tongue or opening the mouth, which could be a precursor to further imitation and later language learning. [17] Infants are disturbed when audiovisual speech of a familiar speaker is desynchronized [18] and tend to show different looking patterns for familiar than for unfamiliar faces when matched to (recorded) voices. [19] Infants are sensitive to McGurk illusions months before they have learned to speak. [20] [21] These studies and many more point to a role for vision in the development of sensitivity to (auditory) speech in the first half-year of life.
Until around six months of age, most hearing infants are sensitive to a wide range of speech gestures - including ones that can be seen on the mouth - which may or may not later be part of the phonology of their native language. But in the second six months of life, the hearing infant shows perceptual narrowing for the phonetic structure of their own language - and may lose the early sensitivity to mouth patterns that are not useful. The speech sounds /v/ and /b/, which are visemically distinctive in English but not in Castilian Spanish, are accurately distinguished by both Spanish-exposed and English-exposed babies up to the age of around six months. However, older Spanish-exposed infants lose the ability to 'see' this distinction, while English-exposed infants retain it. [22] Such studies suggest that rather than hearing and vision developing in independent ways in infancy, multimodal processing is the rule, not the exception, in (language) development of the infant brain. [23]
Given the many studies indicating a role for vision in the development of language in the pre-lingual infant, the effects of congenital blindness on language development are surprisingly small. 18-month-olds learn new words more readily when they hear them, and do not learn them when they are shown the speech movements without hearing. [24] However, children blind from birth can confuse /m/ and /n/ in their own early production of English words – a confusion rarely seen in sighted hearing children, since /m/ and /n/ are visibly distinctive but auditorily confusable. [25] The role of vision in children aged 1–2 years may be less critical to the production of their native language, since, by that age, they have attained the skills they need to identify and imitate speech sounds. However, hearing a non-native language can shift the child's attention to visual and auditory engagement by way of lipreading and listening, in order to process, understand and produce speech. [26]
Studies with pre-lingual infants and children use indirect, non-verbal measures to indicate sensitivity to seen speech. Explicit lip-reading can be reliably tested in hearing preschoolers by asking them to 'say aloud what I say silently'. [27] In school-age children, lipreading of familiar closed-set words such as number words can be readily elicited. [28] Individual differences in lip-reading skill, as tested by asking the child to 'speak the word that you lip-read', or by matching a lip-read utterance to a picture, [29] show a relationship between lip-reading skill and age. [30] [31]
While lip-reading silent speech poses a challenge for most hearing people, adding sight of the speaker to heard speech improves speech processing under many conditions. The mechanisms for this, and the precise ways in which lip-reading helps, are topics of current research. [32] Seeing the speaker helps at all levels of speech processing, from phonetic feature discrimination to the interpretation of pragmatic utterances. [33] The positive effects of adding vision to heard speech are greater in noisy than in quiet environments; [34] by making speech perception easier, seeing the speaker can free up cognitive resources, enabling deeper processing of speech content.
As hearing becomes less reliable in old age, people may tend to rely more on lip-reading, and are encouraged to do so. However, greater reliance on lip-reading may not always offset the effects of age-related hearing loss. Cognitive decline in aging may be preceded by and/or associated with measurable hearing loss. [35] [36] Thus lipreading may not always be able to fully compensate for the combined age-related hearing and cognitive decrements.
A number of studies report anomalies of lipreading in populations with distinctive developmental disorders. People with autism may show reduced lipreading abilities and reduced reliance on vision in audiovisual speech perception, [37] [38] which may be associated with gaze-to-the-face anomalies. [39] People with Williams syndrome show some deficits in speechreading which may be independent of their visuo-spatial difficulties. [40] Children with specific language impairment (SLI) are also reported to show reduced lipreading sensitivity, [41] as are people with dyslexia. [42]
Debate has raged for hundreds of years over the role of lip-reading ('oralism') compared with other communication methods (most recently, total communication) in the education of deaf people. The extent to which one or other approach is beneficial depends on a range of factors, including level of hearing loss of the deaf person, age of hearing loss, parental involvement and parental language(s). Then there is a question concerning the aims of the deaf person and their community and carers. Is the aim of education to enhance communication generally, to develop sign language as a first language, or to develop skills in the spoken language of the hearing community? Researchers now focus on which aspects of language and communication may be best delivered by what means and in which contexts, given the hearing status of the child and her family, and their educational plans. [43] Bimodal bilingualism (proficiency in both speech and sign language) is one dominant current approach in language education for the deaf child. [44]
Deaf people are often better lip-readers than people with normal hearing. [45] Some deaf people practice as professional lipreaders, [46] for instance in forensic lipreading. In deaf people who have a cochlear implant, pre-implant lip-reading skill can predict post-implant (auditory or audiovisual) speech processing. [47] In adults, the later the age of implantation, the better the deaf person's visual speechreading abilities. [48] For many deaf people, access to spoken communication can be helped when a spoken message is relayed via a trained, professional lip-speaker. [49] [50]
In connection with lipreading and literacy development, children born deaf typically show delayed development of literacy skills, [51] which can reflect difficulties in acquiring elements of the spoken language. [52] In particular, reliable phoneme-grapheme mapping may be more difficult for deaf children, who need to be skilled speech-readers in order to master this necessary step in literacy acquisition. Lip-reading skill is associated with literacy abilities in deaf adults and children, [53] [54] and training in lipreading may help to develop literacy skills. [55]
Cued Speech uses lipreading with accompanying hand shapes that disambiguate the visemic (consonant) lipshape. Cued speech is said to be easier for hearing parents to learn than a sign language, and studies, primarily from Belgium, show that a deaf child exposed to cued speech in infancy can make more efficient progress in learning a spoken language than from lipreading alone. [56] The use of cued speech in combination with cochlear implantation is also likely to be beneficial. [57] A similar approach, involving the use of handshapes accompanying seen speech, is Visual Phonics, which is used by some educators to support the learning of written and spoken language.
The aim of teaching and training in lipreading is to develop awareness of the nature of lipreading, and to practice ways of improving the ability to perceive speech 'by eye'. [58] While the value of lipreading training in improving 'hearing by eye' was not always clear, especially for people with acquired hearing loss, there is evidence that systematic training in alerting students to attend to seen speech actions can be beneficial. [59] Lipreading classes, often called lipreading and managing hearing loss classes, are mainly aimed at adults who have hearing loss. The majority of adults with hearing loss have an age-related or noise-related loss; with both of these forms of hearing loss, the high-frequency sounds are lost first. Since many of the consonants in speech are high-frequency sounds, speech becomes distorted. Hearing aids help but may not fully restore clarity. Lipreading classes have been shown to be of benefit in UK studies commissioned in 2012 by the Action on Hearing Loss charity. [60]
Trainers recognise that lipreading is an inexact art. Students are taught to watch the lips, tongue and jaw movements, to follow the stress and rhythm of language, to use their residual hearing (with or without hearing aids), to watch expression and body language, and to use their powers of reasoning and deduction. They are taught the lipreaders' alphabet - groups of sounds that look alike on the lips (visemes) - such as p, b, m, or f, v. The aim is to get the gist, so as to have the confidence to join in conversation and avoid the damaging social isolation that often accompanies hearing loss. Lipreading classes are recommended for anyone who struggles to hear in noise, and they help with adjusting to hearing loss.
Most tests of lipreading were devised to measure individual differences in performing specific speech-processing tasks and to detect changes in performance following training. Lipreading tests have been used with relatively small groups in experimental settings, or as clinical indicators with individual patients and clients. Consequently, most lipreading tests to date have limited validity as markers of lipreading skill in the general population. [61]
Automated lip-reading has been a topic of interest in computational engineering, as well as in science fiction movies. The computational engineer Steve Omohundro, among others, pioneered its development. In facial animation, the aim is to generate realistic facial actions, especially mouth movements, that simulate human speech actions. Computer algorithms to deform or manipulate images of faces can be driven by heard or written language. Systems may be based on detailed models derived from facial movements (motion capture); on anatomical modelling of actions of the jaw, mouth and tongue; or on mapping of known viseme-phoneme properties. [62] [63] Facial animation has been used in speechreading training (demonstrating how different sounds 'look'). [64] These systems are a subset of speech synthesis modelling, which aims to deliver reliable 'text-to-(seen)-speech' outputs.

A complementary aim, the reverse of making faces move in speech, is to develop computer algorithms that can deliver realistic interpretations of speech (i.e. a written transcript or audio record) from natural video data of a face in action: this is facial speech recognition. These models too can be sourced from a variety of data. [65] Automatic visual speech recognition from video has been quite successful in distinguishing different languages (from a corpus of spoken language data). [66] Demonstration models, using machine-learning algorithms, have had some success in lipreading speech elements, such as specific words, from video [67] and in identifying hard-to-lipread phonemes from visemically similar seen mouth actions. [68] Machine-based speechreading now makes successful use of neural-network-based algorithms which use large databases of speakers and speech material, following the successful model of auditory automatic speech recognition. [69]
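As a concrete illustration of the word-level demonstration systems mentioned above, the following is a minimal sketch of a neural-network lipreader using PyTorch: a small spatio-temporal convolutional front end over a clip of mouth-region frames, a recurrent layer over time, and a classifier over a closed set of words. The architecture, layer sizes and the assumption of 64x64 greyscale mouth crops are illustrative choices, not a reference implementation of any published system.

```python
# A minimal sketch (not any published system): a word-level visual speech
# recogniser with a 3D-convolutional front end over mouth-region frames,
# a recurrent layer over time, and a classifier over a small word vocabulary.
import torch
import torch.nn as nn

class WordLipReader(nn.Module):
    def __init__(self, num_words=10):
        super().__init__()
        # Spatio-temporal features from greyscale 64x64 mouth crops.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2)),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),
        )
        self.gru = nn.GRU(input_size=32 * 16 * 16, hidden_size=128, batch_first=True)
        self.classifier = nn.Linear(128, num_words)

    def forward(self, clips):               # clips: (batch, 1, frames, 64, 64)
        feats = self.frontend(clips)        # (batch, 32, frames, 16, 16)
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b, t, c * h * w)
        _, hidden = self.gru(feats)         # hidden: (1, batch, 128)
        return self.classifier(hidden[-1])  # word logits: (batch, num_words)

model = WordLipReader(num_words=10)
dummy_clips = torch.randn(2, 1, 25, 64, 64)  # two 25-frame clips of mouth crops
print(model(dummy_clips).shape)              # torch.Size([2, 10])
```

Published systems are trained on large audiovisual corpora and typically predict character or phoneme sequences (for example with connectionist temporal classification) rather than a closed word set.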
Uses for machine lipreading could include automated lipreading of video-only records, automated lipreading of speakers with damaged vocal tracts, and speech processing in face-to-face video (i.e. from videophone data). Automated lipreading may help in processing noisy or unfamiliar speech. [70] Automated lipreading may contribute to biometric person identification, replacing password-based identification. [71] [72]
Following the discovery that auditory brain regions, including Heschl's gyrus, were activated by seen speech, [73] the neural circuitry for speechreading was shown to include supra-modal processing regions, especially superior temporal sulcus (all parts) as well as posterior inferior occipital-temporal regions including regions specialised for the processing of faces and biological motion. [74] In some but not all studies, activation of Broca's area is reported for speechreading, [75] [76] suggesting that articulatory mechanisms can be activated in speechreading. [77] Studies of the time course of audiovisual speech processing showed that sight of speech can prime auditory processing regions in advance of the acoustic signal. [78] [79] Better lipreading skill is associated with greater activation in (left) superior temporal sulcus and adjacent inferior temporal (visual) regions in hearing people. [80] [81] In deaf people, the circuitry devoted to speechreading appears to be very similar to that in hearing people, with similar associations of (left) superior temporal activation and lipreading skill. [82]
Tadoma is a method of communication utilized by deafblind individuals, in which the listener places their thumb on the speaker's lips and their fingers along the jawline. The middle three fingers often fall along the speaker's cheeks, with the little finger picking up the vibrations of the speaker's throat. It is sometimes referred to as tactile lipreading, as the listener feels the movement of the lips, the vibrations of the vocal cords, the expansion of the cheeks and the warm air produced by nasal phonemes such as 'N' and 'M'. Hand positioning can vary, and the method is sometimes also used by hard-of-hearing people to supplement their remaining hearing.
The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor-quality auditory information but good-quality visual information, they may be more likely to experience the McGurk effect.
The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.
Oralism is the education of deaf students through oral language by using lip reading, speech, and mimicking the mouth shapes and breathing patterns of speech. Oralism came into popular use in the United States around the late 1860s. In 1867, the Clarke School for the Deaf in Northampton, Massachusetts, was the first school to start teaching in this manner. Oralism and its contrast, manualism, manifest differently in deaf education and are a source of controversy for involved communities. Listening and Spoken Language, a technique for teaching deaf children that emphasizes the child's perception of auditory signals from hearing aids or cochlear implants, is how oralism continues in the present day.
A viseme is any of several speech sounds that look the same on the lips, for example when lip reading.
The two-streams hypothesis is a model of the neural processing of vision as well as hearing. The hypothesis, given its initial characterisation in a paper by David Milner and Melvyn A. Goodale in 1992, argues that humans possess two distinct visual systems. Recently there seems to be evidence of two distinct auditory systems as well. As visual information exits the occipital lobe, and as sound leaves the phonological network, it follows two main pathways, or "streams". The ventral stream leads to the temporal lobe, which is involved with object and visual identification and recognition. The dorsal stream leads to the parietal lobe, which is involved with processing the object's spatial location relative to the viewer and with speech repetition.
Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.
Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language consists of a vocal-aural modality versus a signed language which consists of a visual-spatial modality. A substantial number of bimodal bilinguals are children of deaf adults (CODA) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture that is referred to as Deaf, but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form". In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered. This is in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s. However, bimodal bilinguals share many of the same traits as traditional bilinguals, as well as differing in some interesting ways, due to the unique characteristics of the Deaf community. Bimodal bilinguals also experience similar neurological benefits as do unimodal bilinguals, with significantly increased grey matter in various brain areas and evidence of increased plasticity as well as neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases, such as Alzheimer's and dementia.
Auditory processing disorder (APD), rarely known as King-Kopetzky syndrome or auditory disability with normal hearing (ADN), is a neurodevelopmental disorder affecting the way the brain processes sounds. Individuals with APD usually have normal structure and function of the ear, but cannot process the information they hear in the same way as others do, which leads to difficulties in recognizing and interpreting sounds, especially the sounds composing speech. It is thought that these difficulties arise from dysfunction in the central nervous system.
Aural rehabilitation is the process of identifying and diagnosing a hearing loss, providing different types of therapies to clients who are hard of hearing, and implementing different amplification devices to aid the client's hearing abilities. Aural rehab includes specific procedures in which each therapy and amplification device has as its goal the habilitation or rehabilitation of persons to overcome the handicap (disability) caused by a hearing impairment or deafness.
Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech at a delay after the onset of hearing the phrase. The time between hearing the speech and responding reflects how long the brain takes to process and produce speech. The task instructs participants to shadow speech, which generates an intent to reproduce the phrase while motor regions of the brain unconsciously process the syntax and semantics of the words spoken. Words repeated during the shadowing task also tend to imitate the parlance of the shadowed speech.
In the human brain, the superior temporal sulcus (STS) is the sulcus separating the superior temporal gyrus from the middle temporal gyrus in the temporal lobe of the brain. A sulcus is a deep groove that curves into the largest part of the brain, the cerebrum, and a gyrus is a ridge that curves outward of the cerebrum.
Prelingual deafness refers to deafness that occurs before learning speech or language. Speech and language typically begin to develop very early, with infants saying their first words by age one. Therefore, prelingual deafness is considered to occur before the age of one, whether the baby is born deaf or loses hearing during the first year of life. This hearing loss may occur for a variety of reasons and impacts cognitive, social, and language development.
Auditory feedback (AF) is an aid used by humans to control speech production and singing by helping the individual verify whether the current production of speech or singing is in accordance with their acoustic-auditory intention. This process is possible through what is known as the auditory feedback loop, a three-part cycle that allows individuals to first speak, then listen to what they have said, and lastly, correct it when necessary. From the viewpoint of movement sciences and neurosciences, the acoustic-auditory speech signal can be interpreted as the result of movements of speech articulators. Auditory feedback can hence be inferred as a feedback mechanism controlling skilled actions in the same way that visual feedback controls limb movements.
Speech acquisition focuses on the development of vocal, acoustic and oral language by a child. This includes motor planning and execution, pronunciation, phonological and articulation patterns.
Forensic speechreading is the use of speechreading for information or evidential purposes. Forensic speechreading can be considered a branch of forensic linguistics. In contrast to speaker recognition, which is often the focus of voice analysis from an audio record, forensic speechreading usually aims to establish the content of speech, since the identity of the speaker is usually apparent. Often, it involves the production of a transcript of lip-read video-recordings of talk that lack a usable audiotrack, for example CCTV material. Occasionally, 'live' lipreading is involved, for example in the Casey Anthony case. Forensic speechreaders are usually deaf or from deaf families (CODA), and use speechreading in their daily lives to a greater extent than people with normal hearing outside the deaf community. Some speechreading tests suggest deaf people can be better lipreaders than most hearing people.
Language acquisition is a natural process in which infants and children develop proficiency in the first language or languages that they are exposed to. The process of language acquisition is varied among deaf children. Deaf children born to deaf parents are typically exposed to a sign language at birth and their language acquisition follows a typical developmental timeline. However, at least 90% of deaf children are born to hearing parents who use a spoken language at home. Hearing loss prevents many deaf children from hearing spoken language to the degree necessary for language acquisition. For many deaf children, language acquisition is delayed until the time that they are exposed to a sign language or until they begin using amplification devices such as hearing aids or cochlear implants. Deaf children who experience delayed language acquisition, sometimes called language deprivation, are at risk for lower language and cognitive outcomes. However, profoundly deaf children who receive cochlear implants and auditory habilitation early in life often achieve expressive and receptive language skills within the norms of their hearing peers; earlier age at implantation is strongly associated with better speech recognition ability. Early access to language, whether through signed language or technology, has been shown to prepare children who are deaf to achieve fluency in literacy skills.
Language deprivation in deaf and hard-of-hearing children is a delay in language development that occurs when sufficient exposure to language, spoken or signed, is not provided in the first few years of a deaf or hard of hearing child's life, often called the critical or sensitive period. Early intervention, parental involvement, and other resources all work to prevent language deprivation. Children who experience limited access to language, spoken or signed, may not develop the necessary skills to successfully assimilate into the academic learning environment. There are various educational approaches for teaching deaf and hard of hearing individuals. Decisions about language instruction depend upon a number of factors, including extent of hearing loss, availability of programs, and family dynamics.
Quentin Summerfield is a British psychologist specialising in hearing. He joined the Medical Research Council Institute of Hearing Research in 1977 and served as its deputy director from 1993 to 2004, before moving to a chair in psychology at the University of York. He served as head of the Psychology department from 2011 to 2017 and retired in 2018, becoming an emeritus professor. From 2013 to 2018, he was a member of the University of York's Finance & Policy Committee, and from 2015 to 2018 he was a member of the university's governing body, the Council.
Deaf and hard of hearing individuals with additional disabilities are referred to as "Deaf Plus" or "Deaf+". Deaf children with one or more co-occurring disabilities may also be described as having hearing loss plus additional disabilities, or Deafness and Diversity (D.A.D.). About 40–50% of deaf children experience one or more additional disabilities, with learning disabilities, intellectual disabilities, autism spectrum disorder (ASD), and visual impairments being the four most common co-occurring disabilities. Approximately 7–8% of deaf children have a learning disability. Deaf plus individuals utilize various language modalities to best fit their communication needs.