The Lombard effect or Lombard reflex is the involuntary tendency of speakers to increase their vocal effort when speaking in loud noise to enhance the audibility of their voice. [5] This change includes not only loudness but also other acoustic features such as pitch, rate, and duration of syllables. [6] [7] This compensation effect maintains the auditory signal-to-noise ratio of the speaker's spoken words.
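The idea of maintaining signal-to-noise ratio can be illustrated with a short sketch. This is not from the article: the 0.5 dB-per-dB compensation slope and the 40 dB "quiet" baseline are hypothetical example values chosen for illustration, since the article does not state a specific figure.

```python
def snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """In decibels, the signal-to-noise ratio is the level difference."""
    return speech_level_db - noise_level_db


def lombard_speech_level(base_level_db: float, noise_level_db: float,
                         quiet_noise_db: float = 40.0,
                         slope: float = 0.5) -> float:
    """Raise vocal effort by `slope` dB for each dB of noise above quiet.

    Both `quiet_noise_db` and `slope` are illustrative assumptions,
    not values given in the article.
    """
    excess_noise = max(0.0, noise_level_db - quiet_noise_db)
    return base_level_db + slope * excess_noise


# In quiet (40 dB noise), a 65 dB speaker has a 25 dB SNR.
snr_quiet = snr_db(lombard_speech_level(65.0, 40.0), 40.0)   # 25.0

# In 70 dB noise, the speaker involuntarily raises their voice to 80 dB,
# partially restoring the SNR (10 dB, instead of -5 dB with no compensation).
snr_noisy = snr_db(lombard_speech_level(65.0, 70.0), 70.0)   # 10.0
```

The sketch shows why the effect only partially preserves audibility: unless the compensation slope is 1.0, the SNR still falls as noise rises, just more slowly than it would without the reflex.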
The effect is linked to the needs of effective communication, as it is reduced when words are repeated or lists are read, situations in which intelligibility is not important. [5] Since the effect is involuntary, it is used as a means to detect malingering in those simulating hearing loss. Research on birds [8] [9] and monkeys [10] finds that the effect also occurs in the vocalizations of animals.
The effect was discovered in 1909 by Étienne Lombard, a French otolaryngologist. [5] [11]
Listeners hear speech recorded in background noise better than speech recorded in quiet with masking noise applied afterwards. This is because Lombard speech differs from normal speech not only in loudness but also in pitch and in the rate and duration of syllables. [6] [7]
A person in noise cannot suppress these changes simply by being instructed to speak as they would in silence, though control can be learnt with feedback. [15]
The Lombard effect also occurs after laryngectomy, when people who have undergone speech therapy talk with esophageal speech. [16]
The intelligibility of an individual's own vocalization can be adjusted with audio-vocal reflexes using their own hearing (private loop), or it can be adjusted indirectly in terms of how well listeners can hear the vocalization (public loop). [5] Both processes are involved in the Lombard effect.
A speaker can regulate their vocalizations, particularly their amplitude relative to background noise, through reflexive auditory feedback. Such auditory feedback is known to maintain the production of vocalization, since deafness affects the vocal acoustics of both humans [17] and songbirds. [18] Changing the auditory feedback also changes vocalization in human speech [19] and bird song. [20] Neural circuits that enable such reflex adjustment have been found in the brainstem. [21]
A speaker can also regulate their vocalizations at a higher cognitive level, by observing the consequences for their audience's ability to hear them. [5] In this process, auditory self-monitoring adjusts vocalizations according to learnt associations about which features of a vocalization, when made in noise, create effective and efficient communication. The Lombard effect has been found to be greatest for the words that are most important for the listener to understand, suggesting that such cognitive effects are important. [12]
Both private and public loop processes exist in children. There is, however, a developmental shift from the Lombard effect being linked to acoustic self-monitoring in young children to the adjustment of vocalizations to aid intelligibility for others in adults. [22]
The Lombard effect depends upon audio-vocal neurons in the periolivary region of the superior olivary complex and the adjacent pontine reticular formation. [21] It has been suggested that the Lombard effect might also involve the higher cortical areas [5] that control these lower brainstem areas. [23]
Choral singers experience reduced auditory feedback of their own voice due to the sound of the other singers. [24] This results in a tendency for people in choruses to sing at a louder level if it is not controlled by a conductor. Trained soloists can control this effect, but it has been suggested that after a concert they might speak more loudly in noisy surroundings, such as after-concert parties. [24]
The Lombard effect also occurs in those playing instruments such as the guitar. [25]
Noise has been found to affect the vocalizations of animals that vocalize against a background of human noise pollution. [26] Experimentally, the Lombard effect has also been found in the vocalizations of a range of animal species.