| Spatial hearing loss | |
| --- | --- |
| Specialty | Audiology |
Spatial hearing loss is a form of hearing impairment characterized by an inability to use spatial cues to determine where a sound originates in space. Poor sound localization in turn affects the ability to understand speech in the presence of background noise. [1]
People with spatial hearing loss have difficulty processing speech that arrives from one direction while simultaneously filtering out 'noise' arriving from other directions. Research has shown spatial hearing loss to be a leading cause of central auditory processing disorder (CAPD) in children; children with spatial hearing loss commonly present with difficulties understanding speech in the classroom. [1] Spatial hearing loss is found in most people over 70 years of age, and can sometimes be independent of other types of age-related hearing loss. [2] As with presbycusis, spatial hearing ability varies with age: through childhood and into adulthood there is a spatial hearing gain (it becomes easier to hear speech in noise), and then from middle age onwards a spatial hearing loss begins (it becomes harder again to hear speech in noise).
Sound streams arriving from the left or right (the horizontal plane) are localised primarily by the small time differences of the same sound arriving at the two ears. A sound straight in front of the head is heard at the same time by both ears. A sound to the side of the head is heard approximately 0.0005 seconds later by the ear furthest away. A sound halfway to one side is heard approximately 0.0003 seconds later. This is the interaural time difference (ITD) cue and is measured by signal processing in the two central auditory pathways that begin after the cochlea and pass through the brainstem and mid-brain. [3] Some of those with spatial hearing loss are unable to process ITD (low frequency) cues.
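The ITD cue described above can be approximated with the classic Woodworth spherical-head model. This is an illustrative sketch, not a description of how the auditory pathways compute ITD; the head radius and speed of sound are typical textbook values, and the model yields slightly larger figures than the approximate times quoted above.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate interaural time difference (seconds) for a source at the
    given azimuth (0 deg = straight ahead, 90 deg = directly to one side),
    using the Woodworth rigid-sphere head model."""
    theta = math.radians(azimuth_deg)
    # Extra path length around a rigid sphere: r * (sin(theta) + theta)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

for azimuth in (0, 45, 90):
    print(f"{azimuth:3d} deg -> ITD ~ {woodworth_itd(azimuth) * 1e6:.0f} microseconds")
```

The formula captures the key behaviour in the text: the ITD is zero straight ahead and grows monotonically towards the side, reaching a few hundred microseconds at 90 degrees.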
Sound streams arriving from below, above, or behind the head (the vertical plane) are localised, again by signal processing in the central auditory pathways. The cues in this case are the notches and peaks added to the sound arriving at the ears by the complex shape of the pinna. Different notches and peaks are added to sounds coming from below than to sounds coming from above or from behind. The most significant notches are added to sounds in the 4 kHz to 10 kHz range. [4] Some of those with spatial hearing loss are unable to process pinna-related (high frequency) cues.
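A pinna-style spectral notch can be simulated with a standard second-order (biquad) notch filter; the sketch below uses the well-known RBJ "Audio EQ Cookbook" coefficients. The 8 kHz centre frequency and Q value are illustrative choices within the 4–10 kHz range mentioned above, not measured pinna parameters.

```python
import math

def notch_biquad(fc_hz, fs_hz, q=5.0):
    """RBJ-cookbook biquad notch coefficients centred on fc_hz."""
    w0 = 2 * math.pi * fc_hz / fs_hz
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    # Normalise so that a[0] == 1
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_db(b, a, f_hz, fs_hz):
    """Filter magnitude response (dB) at frequency f_hz."""
    w = 2 * math.pi * f_hz / fs_hz
    z = complex(math.cos(w), math.sin(w))
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = 1 + a[1] / z + a[2] / z ** 2
    # Floor avoids log(0) exactly at the notch centre
    return 20 * math.log10(max(abs(num / den), 1e-12))

b, a = notch_biquad(8000, 44100)
print(magnitude_db(b, a, 8000, 44100))  # deep attenuation at the notch
print(magnitude_db(b, a, 1000, 44100))  # essentially unchanged away from it
```

A localization system comparing the spectrum at the eardrum against the original source spectrum can, in principle, recover elevation from where such notches fall.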
By the time sound stream representations reach the end of the auditory pathways, brainstem inhibition processing ensures that the right pathway is solely responsible for left-ear sounds and the left pathway solely responsible for right-ear sounds. [5] It is then the responsibility of the auditory cortex (AC) of the right hemisphere, on its own, to map the whole auditory scene. Information about the right auditory hemifield joins the information about the left hemifield once it has passed through the corpus callosum (CC), the brain white matter that connects homologous regions of the left and right hemispheres. [6] Some of those with spatial hearing loss are unable to integrate the auditory representations of the left and right hemifields, and consequently are unable to maintain any representation of auditory space.
An auditory space representation enables attention to be given (conscious, top-down driven) to a single auditory stream. A gain mechanism can be employed, involving the enhancement of the speech stream and the suppression of any other speech streams and any noise streams. [7] An inhibition mechanism can be employed, involving the variable suppression of the outputs from the two cochleae. [8] Some of those with spatial hearing loss are unable to suppress unwanted cochlear output.
Individuals with spatial hearing loss are not able to accurately perceive the directions different sound streams come from, and their hearing is no longer 3-dimensional (3D). Sound streams from the rear may appear to come from the front instead. Sound streams from the left or right may appear to come from the front. The gain mechanism cannot be used to enhance the speech stream of interest relative to all other sound streams. Those with spatial hearing loss typically need target speech to be raised by more than 10 dB when listening to speech in background noise, compared to those with no spatial hearing loss. [9]
Spatial hearing ability normally begins to develop in early childhood, and then continues to develop into early adulthood. After the age of 50 years spatial hearing ability begins to decline. [10] Both peripheral hearing and central auditory pathway problems can interfere with early development. In some individuals, for a range of different reasons, maturation of two-ear spatial hearing may simply never happen. For example, prolonged episodes of ear infections such as “glue ear” are likely to significantly hinder its development. [11]
Many neuroscience studies have facilitated the development and refinement of a speech processing model. This model shows cooperation between the two hemispheres of the brain, with asymmetric inter-hemispheric and intrahemispheric connectivity consistent with the left hemisphere specialization for phonological processing. [12] The right hemisphere is more specialized for sound localization, [13] while auditory space representation in the brain requires the integration of information from both hemispheres. [14]
The corpus callosum (CC) is the major route of communication between the two hemispheres. At maturity it is a large mass of white matter consisting of bundles of fibres linking the white matter of the two cerebral hemispheres. Its caudal portion, the splenium, contains fibres that originate from the primary and secondary auditory cortices, and from other auditory responsive areas. [15] Transcallosal interhemispheric transfer of auditory information plays a significant role in spatial hearing functions that depend on binaural cues. [16] Various studies have shown that despite normal audiograms, children with known auditory interhemispheric transfer deficits have particular difficulty localizing sound and understanding speech in noise. [17]
The CC of the human brain is relatively slow to mature, with its size continuing to increase until the fourth decade of life. From this point it slowly begins to shrink. [18] LiSN-S SRT scores show that the ability to understand speech in noisy environments develops with age, becoming adult-like by 18 years, and starts to decline between 40 and 50 years of age. [19]
The medial olivocochlear bundle (MOC) is part of a collection of brainstem nuclei known as the superior olivary complex (SOC). The MOC innervates the outer hair cells of the cochlea and its activity is able to reduce basilar-membrane responses to sound by reducing the gain of cochlear amplification. [20]
In a quiet environment, when speech from a single talker is being listened to, the MOC efferent pathways are essentially inactive. In this case the single speech stream enters both ears and its representation ascends the two auditory pathways. [5] The stream arrives at both the right and left auditory cortices for eventual speech processing by the left hemisphere.
In a noisy environment the MOC efferent pathways are required to be active in two distinct ways. The first is an automatic response to the multiple sound streams arriving at the two ears, while the second is a top-down corticofugal attention driven response. The purpose of both is an attempt to enhance the signal to noise ratio between the speech stream being listened to and all other sound streams. [21]
The automatic response involves the MOC efferents inhibiting the output of the cochlea of the left ear. The output of the right ear is therefore dominant, and only the right-hemispace streams (with their direct connection to the speech processing areas of the left hemisphere) travel up the auditory pathway. [22] In children the underdeveloped corpus callosum (CC) is unable, in any case, to transfer auditory streams arriving (from the left ear) at the right hemisphere to the left hemisphere. [23]
In adults with a mature CC, an attention-driven (conscious) decision to attend to one particular sound stream is the trigger for further MOC activity. [24] The 3D spatial representation of the multiple streams of the noisy environment (a function of the right hemisphere) enables a choice of the ear to be attended to. As a consequence, instruction may be given to the MOC efferents to inhibit the output of the right cochlea rather than the left. [8] If the speech stream being attended to is from the left hemispace it will arrive at the right hemisphere and access speech processing via the CC.
Spatial hearing loss can be diagnosed using the Listening in Spatialized Noise – Sentences test (LiSN-S), [25] which was designed to assess the ability of children with central auditory processing disorder (CAPD) to understand speech in background noise. The LiSN-S allows audiologists to measure how well a person uses spatial (and pitch) information to understand speech in noise. Inability to use spatial information has been found to be a leading cause of CAPD in children. [1]
Test participants repeat a series of target sentences which are presented simultaneously with competing speech. The listener's speech reception threshold (SRT) for target sentences is calculated using an adaptive procedure. The targets are perceived as coming from in front of the listener, whereas the distracters vary according to where they are perceived spatially (either directly in front of, or to either side of, the listener). The vocal identity of the distracters also varies (either the same as, or different from, the speaker of the target sentences). [25]
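The adaptive procedure can be sketched as a simple 1-down/1-up staircase, which converges on the level where the listener is correct about half the time. This is a generic illustration of adaptive SRT estimation, not the actual LiSN-S algorithm; the simulated listener and its -8 dB threshold are invented for the example.

```python
def estimate_srt(trial, start_db=0.0, step_db=2.0, reversals_needed=6):
    """Simple 1-down/1-up adaptive staircase: lower the target level after a
    correct response, raise it after an incorrect one, and average the levels
    at which the response direction reversed. `trial(level_db)` returns True
    when the listener repeats the sentence correctly."""
    level = start_db
    last_correct = None
    reversal_levels = []
    while len(reversal_levels) < reversals_needed:
        correct = trial(level)
        if last_correct is not None and correct != last_correct:
            reversal_levels.append(level)  # response direction reversed here
        last_correct = correct
        level += -step_db if correct else step_db
    return sum(reversal_levels) / len(reversal_levels)

# Hypothetical deterministic listener: correct whenever SNR exceeds -8 dB.
def listener(level_db):
    return level_db > -8.0

print(f"Estimated SRT ~ {estimate_srt(listener):.1f} dB SNR")
```

With this idealised listener the staircase oscillates between -8 and -6 dB, so the averaged reversals land at -7 dB, just above the simulated threshold.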
Performance on the LiSN-S is evaluated by comparing listeners' performance across four listening conditions, generating two SRT measures and three "advantage" measures. The advantage measures represent the benefit in dB gained when talker cues, spatial cues, or both are available to the listener. The use of advantage measures minimizes the influence of higher-order skills on test performance. [1] This serves to control for the inevitable differences that exist between individuals in functions such as language or memory.
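The advantage measures are simple differences between condition SRTs: each one subtracts the SRT of a cue-present condition from the SRT of the baseline condition (same voice, same direction). The condition labels and the example SRT values below are illustrative, not scores from the published test.

```python
def lisn_advantage_measures(srt_db):
    """Derive the three 'advantage' measures (dB) from four condition SRTs.
    A lower (more negative) SRT means better performance, so each advantage
    is the baseline SRT minus the SRT with the extra cue available."""
    baseline = srt_db["same_voice_0deg"]  # no talker cue, no spatial cue
    return {
        "talker_advantage": baseline - srt_db["diff_voice_0deg"],
        "spatial_advantage": baseline - srt_db["same_voice_90deg"],
        "total_advantage": baseline - srt_db["diff_voice_90deg"],
    }

# Hypothetical SRTs (dB SNR) for the four listening conditions.
example = {
    "same_voice_0deg": -1.0,
    "diff_voice_0deg": -6.0,
    "same_voice_90deg": -12.0,
    "diff_voice_90deg": -15.0,
}
print(lisn_advantage_measures(example))
```

In this example the spatial advantage (11 dB) dwarfs the talker advantage (5 dB); a markedly reduced spatial advantage, relative to norms, is the pattern associated with spatial hearing loss.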
Dichotic listening tests can be used to measure the efficacy of the attentional control of cochlear inhibition and the inter-hemispheric transfer of auditory information. Dichotic listening performance typically increases (and the right-ear advantage decreases) with the development of the corpus callosum (CC), peaking before the fourth decade. From middle age onwards, as the auditory system ages and the CC reduces in size, dichotic listening performance worsens, primarily in the left ear. [26] Dichotic listening tests typically involve two different auditory stimuli (usually speech) presented simultaneously, one to each ear, using a set of headphones. Participants are asked to attend to one or (in a divided-attention test) both of the messages. [27]
The activity of the medial olivocochlear bundle (MOC) and its inhibition of cochlear gain can be measured using a distortion product otoacoustic emission (DPOAE) recording method. This involves the contralateral presentation of broadband noise and the measurement of both DPOAE amplitudes and the latency of onset of DPOAE suppression. DPOAE suppression is significantly affected by age and becomes difficult to detect by approximately 50 years of age. [28]
Research has shown that PC-based spatial hearing training software can help some of the children identified as failing to develop their spatial hearing skills (perhaps because of frequent bouts of otitis media with effusion). [29] Further research is needed to discover whether a similar approach would help those over 60 to recover their spatial hearing. One such study showed that dichotic test scores for the left ear improved with daily training. [30] Related research into the plasticity of white matter (see Lövdén et al. for example) [31] suggests some recovery may be possible.
Music training leads to superior understanding of speech in noise across age groups, and musical experience protects against age-related degradation in neural timing. [32] Unlike speech (fast temporal information), music (pitch information) is primarily processed by areas of the brain in the right hemisphere. [33] Given that it seems likely that the right-ear advantage (REA) for speech is present from birth, [22] it would follow that a left-ear advantage for music is also present from birth, and that MOC efferent inhibition (of the right ear) plays a similar role in creating this advantage. Does greater exposure to music increase conscious control of cochlear gain and inhibition? Further research is needed to explore the apparent ability of music to promote an enhanced capability for speech-in-noise recognition.
Bilateral digital hearing aids do not preserve localization cues (see, for example, Van den Bogaert et al., 2006). [34] This means that audiologists fitting hearing aids to patients with a mild to moderate age-related loss risk negatively impacting their spatial hearing capability. For those patients who feel that their lack of understanding of speech in background noise is their primary hearing difficulty, hearing aids may simply make the problem even worse: their spatial hearing gain will be reduced by around 10 dB. Although further research is needed, a growing number of studies have shown that open-fit hearing aids are better able to preserve localisation cues (see, for example, Alworth 2011). [35]
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs and the auditory parts of the sensory system.
The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as possible relations to language switching. It is located bilaterally, roughly at the upper sides of the temporal lobes – in humans, curving down and onto the medial surface, on the superior temporal plane, within the lateral sulcus and comprising parts of the transverse temporal gyri, and the superior temporal gyrus, including the planum polare and planum temporale.
Sound localization is a listener's ability to identify the location or origin of a detected sound in direction and distance.
Unilateral hearing loss (UHL) is a type of hearing impairment where there is normal hearing in one ear and impaired hearing in the other ear.
The cocktail party effect is the phenomenon of the brain's ability to focus one's auditory attention on a particular stimulus while filtering out a range of other stimuli, such as when a partygoer can focus on a single conversation in a noisy room. Listeners have the ability to both segregate different stimuli into different streams, and subsequently decide which streams are most pertinent to them.
The cochlear nuclear (CN) complex comprises two cranial nerve nuclei in the human brainstem, the ventral cochlear nucleus (VCN) and the dorsal cochlear nucleus (DCN). The ventral cochlear nucleus is unlayered whereas the dorsal cochlear nucleus is layered. Auditory nerve fibers, fibers that travel through the auditory nerve carry information from the inner ear, the cochlea, on the same side of the head, to the nerve root in the ventral cochlear nucleus. At the nerve root the fibers branch to innervate the ventral cochlear nucleus and the deep layer of the dorsal cochlear nucleus. All acoustic information thus enters the brain through the cochlear nuclei, where the processing of acoustic information begins. The outputs from the cochlear nuclei are received in higher regions of the auditory brainstem.
The interaural time difference when concerning humans or animals, is the difference in arrival time of a sound between two ears. It is important in the localization of sounds, as it provides a cue to the direction or angle of the sound source from the head. If a signal arrives at the head from one side, the signal has further to travel to reach the far ear than the near ear. This pathlength difference results in a time difference between the sound's arrivals at the ears, which is detected and aids the process of identifying the direction of sound source.
Binaural fusion or binaural integration is a cognitive process that involves the combination of different auditory information presented binaurally, or to each ear. In humans, this process is essential in understanding speech as one ear may pick up more information about the speech stimuli than the other.
Auditory processing disorder (APD), rarely known as King-Kopetzky syndrome or auditory disability with normal hearing (ADN), is a neurodevelopmental disorder affecting the way the brain processes sounds. Individuals with APD usually have normal structure and function of the outer, middle, and inner ear. However, they cannot process the information they hear in the same way as others do, which leads to difficulties in recognizing and interpreting sounds, especially the sounds composing speech. It is thought that these difficulties arise from dysfunction in the central nervous system. It is highly prevalent in individuals with other neurodevelopmental disorders, such as attention deficit hyperactivity disorder, autism spectrum disorders, dyslexia, and sensory processing disorder.
Cortical deafness is a rare form of sensorineural hearing loss caused by damage to the primary auditory cortex. Cortical deafness is an auditory disorder where the patient is unable to hear sounds but has no apparent damage to the structures of the ear. It has been argued to be the combination of auditory verbal agnosia and auditory agnosia. Patients with cortical deafness cannot hear any sounds, that is, they are not aware of sounds including non-speech, voices, and speech sounds. Although patients appear and feel completely deaf, they can still exhibit some reflex responses such as turning their head towards a loud sound.
The olivocochlear system is a component of the auditory system involved with the descending control of the cochlea. Its nerve fibres, the olivocochlear bundle (OCB), form part of the vestibulocochlear nerve, and project from the superior olivary complex in the brainstem (pons) to the cochlea.
Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as an ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. The academic field concerned with hearing is auditory science.
The neural encoding of sound is the representation of auditory sensation and perception in the nervous system. The complexities of contemporary neuroscience are continually redefined. Thus what is known of the auditory system has been continually changing. The encoding of sounds includes the transduction of sound waves into electrical impulses along auditory nerve fibers, and further processing in the brain.
Prelingual deafness refers to deafness that occurs before learning speech or language. Speech and language typically begin to develop very early with infants saying their first words by age one. Therefore, prelingual deafness is considered to occur before the age of one, where a baby is either born deaf or loses hearing before the age of one. This hearing loss may occur for a variety of reasons and impacts cognitive, social, and language development.
Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.
Amblyaudia is a term coined by Dr. Deborah Moncrieff to characterize a specific pattern of performance from dichotic listening tests. Dichotic listening tests are widely used to assess individuals for binaural integration, a type of auditory processing skill. During the tests, individuals are asked to identify different words presented simultaneously to the two ears. Normal listeners can identify the words fairly well and show a small difference between the two ears, with one ear slightly dominant over the other. For the majority of listeners, this small difference is referred to as a "right-ear advantage" because their right ear performs slightly better than their left ear. But some normal individuals produce a "left-ear advantage" during dichotic tests and others perform at equal levels in the two ears. Amblyaudia is diagnosed when the scores from the two ears are significantly different, with the individual's dominant ear score much higher than the score in the non-dominant ear. Researchers interested in understanding the neurophysiological underpinnings of amblyaudia consider it to be a brain-based hearing disorder that may be inherited or that may result from auditory deprivation during critical periods of brain development. Individuals with amblyaudia have normal hearing sensitivity but have difficulty hearing in noisy environments like restaurants or classrooms. Even in quiet environments, individuals with amblyaudia may fail to understand what they are hearing, especially if the information is new or complicated. Amblyaudia can be conceptualized as the auditory analog of the better known central visual disorder amblyopia. The term “lazy ear” has been used to describe amblyaudia, although it is currently not known whether it stems from deficits in the auditory periphery, from other parts of the auditory system in the brain, or both.
A characteristic of amblyaudia is suppression of activity in the non-dominant auditory pathway by activity in the dominant pathway which may be genetically determined and which could also be exacerbated by conditions throughout early development.
Selective auditory attention or selective hearing is a type of selective attention and involves the auditory system. Selective hearing is characterized as the action in which people focus their attention intentionally on a specific source of a sound or spoken words. When people use selective hearing, noise from the surrounding environment is heard by the auditory system but only certain parts of the auditory information are chosen to be processed by the brain.
Phonemic restoration effect is a perceptual phenomenon where under certain conditions, sounds actually missing from a speech signal can be restored by the brain and may appear to be heard. The effect occurs when missing phonemes in an auditory signal are replaced with a noise that would have the physical properties to mask those phonemes, creating an ambiguity. In such ambiguity, the brain tends towards filling in absent phonemes. The effect can be so strong that some listeners may not even notice that there are phonemes missing. This effect is commonly observed in a conversation with heavy background noise, making it difficult to properly hear every phoneme being spoken. Different factors can change the strength of the effect, including how rich the context or linguistic cues are in speech, as well as the listener's state, such as their hearing status or age.
Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.
Christian Lorenzi is Professor of Experimental Psychology at École Normale Supérieure in Paris, France, where he has served as Director of the Department of Cognitive Studies and Director of Scientific Studies. Lorenzi works on auditory perception.