The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA), brainstem auditory evoked potentials (BAEPs), or brainstem auditory evoked responses (BAERs), [1] [2] is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The recording is a series of six to seven vertex-positive waves, of which waves I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after the onset of an auditory stimulus. The ABR is termed an exogenous response because it is dependent upon external factors. [3] [4] [5]
The auditory structures believed to generate the auditory brainstem response are as follows: waves I and II originate from the distal and proximal auditory nerve fibers, wave III from the cochlear nucleus, wave IV from the superior olivary complex, and wave V from the lateral lemniscus. [4] [6] [7]
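Because the ABR is far smaller than the ongoing EEG (on the order of a microvolt against tens of microvolts of background activity), it is extracted in practice by averaging many stimulus-locked epochs: random background activity cancels while the time-locked response remains. A minimal sketch of this averaging on synthetic data (all parameter values here are illustrative assumptions, not clinical settings):

```python
import numpy as np

fs = 16_000                        # sampling rate, Hz (illustrative)
epoch_ms = 10                      # ABR window: first 10 ms after stimulus onset
n_samples = fs * epoch_ms // 1000
n_sweeps = 2000                    # clinical averages often use ~1000-2000 sweeps

t = np.arange(n_samples) / fs * 1000.0          # time axis in ms

# Toy "wave V": a ~0.5 microvolt deflection near 5.5 ms, buried in ~10 uV noise.
true_abr = 0.5 * np.exp(-((t - 5.5) ** 2) / 0.1)
rng = np.random.default_rng(0)
sweeps = true_abr + rng.normal(0.0, 10.0, (n_sweeps, n_samples))

# Averaging attenuates uncorrelated noise by a factor of sqrt(n_sweeps).
average = sweeps.mean(axis=0)
peak_latency_ms = t[np.argmax(average)]
print(f"estimated wave V latency: {peak_latency_ms:.2f} ms")
```

Averaging n sweeps reduces uncorrelated noise by a factor of the square root of n, which is why a single sweep shows only noise while the average reveals the response.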
In 1967, Sohmer and Feinmesser were the first to publish human ABRs recorded with surface electrodes, showing that cochlear potentials could be obtained non-invasively. In 1971, Jewett and Williston gave a clear description of the human ABR and correctly interpreted the later waves as arising from the brainstem. In 1974, Hecox and Galambos showed that the ABR could be used for threshold estimation in adults and infants. In 1975, Starr and Achor were the first to report the effects on the ABR of CNS pathology in the brainstem. In 1977, Selters and Brackmann reported prolonged inter-peak latencies in patients with tumors greater than 1 cm. [4]
Long and Allen were the first to report abnormal brainstem auditory evoked potentials (BAEPs) in an alcoholic woman who recovered from acquired central hypoventilation syndrome. These investigators hypothesized that their patient's brainstem was poisoned, but not destroyed, by her chronic alcoholism. [8]
The ABR is used for newborn hearing screening, auditory threshold estimation, intraoperative monitoring, determining the type and degree of hearing loss, detecting auditory nerve and brainstem lesions, and in the development of cochlear implants.
ABR site-of-lesion testing is sensitive to large acoustic tumors.
Stacked ABR is the sum of the synchronous neural activity generated from five frequency regions across the cochlea in response to click stimulation and high-pass pink noise masking. [9] This technique was based on the 8th cranial nerve compound action potential work of Teas, Eldredge, and Davis in 1962. [10] In 2005, Don defined the Stacked ABR as "...an attempt to record the sum of the neural activity across the entire frequency region of the cochlea in response to a click stimuli." [6]
Traditional ABR has poor sensitivity to sub-centimeter tumors. In the 1990s, studies recommended that the use of ABRs to detect acoustic tumors be abandoned, and as a result many practitioners switched to MRI for this purpose. [9]
Standard ABR can fail to identify small tumors because it relies on changes in the latency of wave V, which is primarily influenced by high-frequency fibers; tumors that leave those fibers unaffected are missed. Although the click stimulates a wide frequency region of the cochlea, phase cancellation of the lower-frequency responses occurs as a result of time delays along the basilar membrane. [11]
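The phase cancellation effect can be illustrated numerically: if two cochlear regions contribute the same low-frequency waveform, but the more apical contribution arrives half a cycle later because of basilar-membrane travel time, the two largely cancel at the recording electrode. A toy demonstration (the frequency and delay are arbitrary illustrative values):

```python
import numpy as np

fs = 40_000
t = np.arange(0, 0.01, 1 / fs)           # a 10 ms window
f = 500.0                                 # low-frequency component, Hz

basal = np.sin(2 * np.pi * f * t)         # response from one cochlear region
delay = 1 / (2 * f)                       # half a period (1 ms) of travel-time delay
apical = np.sin(2 * np.pi * f * (t - delay))

summed = basal + apical                   # what a far-field electrode would sum
print(f"peak of one component:   {np.abs(basal).max():.3f}")
print(f"peak of the summed pair: {np.abs(summed).max():.3f}")  # ~0: cancellation
```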
However, obtaining an MRI for every patient is not practical given its high cost, impact on patient comfort, and limited availability in many areas. In 1997, Don and colleagues introduced the Stacked ABR as a way to enhance sensitivity to smaller tumors. Their hypothesis was that the stacked derived-band ABR amplitude could detect tumors missed by standard ABRs. [12] In 2005, Don stated that it would be clinically valuable to have an ABR test available to screen for small tumors. [9] The Stacked ABR is sensitive, specific, widely available, comfortable, and cost-effective.
The Stacked ABR is a composite of activity from all frequency regions of the cochlea, not just the high frequencies. [6]
Because the derived waveforms represent activity from progressively more apical regions along the basilar membrane, their wave V latencies are prolonged owing to the nature of the traveling wave. To compensate for these latency shifts, the wave V component of each derived waveform is stacked (aligned), the waveforms are added together, and the resulting amplitude is measured. [11] In 2005, Don explained that in a normal ear the Stacked ABR has the same amplitude as the click-evoked ABR, but the presence of even a small tumor reduces the amplitude of the Stacked ABR relative to the click-evoked ABR.
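The align-and-sum step can be sketched in a few lines. The following is a simplified illustration on synthetic derived-band waveforms; real systems identify wave V explicitly rather than taking a global maximum, and the latencies used here are invented for the example:

```python
import numpy as np

def stacked_abr_amplitude(derived_bands):
    """Align each derived-band waveform on its wave V peak, sum the aligned
    waveforms, and return the peak-to-trough amplitude of the stack."""
    reference = int(np.argmax(derived_bands[0]))     # peak of the first band
    stacked = np.zeros_like(derived_bands[0])
    for band in derived_bands:
        shift = reference - int(np.argmax(band))     # samples needed to align peaks
        stacked += np.roll(band, shift)
    return stacked.max() - stacked.min()

# Synthetic derived bands: identical wave shapes with progressively later
# wave V peaks, mimicking responses from more apical cochlear regions.
fs = 16_000
t = np.arange(0, 0.010, 1 / fs) * 1000.0             # time axis in ms
latencies_ms = [5.5, 6.0, 6.7, 7.5, 8.5]
bands = [np.exp(-((t - L) ** 2) / 0.1) for L in latencies_ms]

print(f"stacked amplitude: {stacked_abr_amplitude(bands):.2f}")  # ~5x a single band
```

Without alignment, the latency-shifted peaks would partially cancel when summed, which is exactly the loss of low-frequency information the stacking procedure is designed to recover.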
For screening and detecting sub-centimeter acoustic tumors, the Stacked ABR offers high sensitivity: in the reported data, 100% sensitivity was obtained at 50% specificity. [12]
In a 2007 comparative study of ABR abnormalities in acoustic tumor patients, Montaguti et al. described the Stacked ABR as having the potential to identify small acoustic neuromas. [13]
Tone-burst ABR is used to obtain thresholds for children who are too young to respond reliably to behavioral testing with frequency-specific acoustic stimuli. The most common frequencies tested are 500, 1000, 2000, and 4000 Hz, as these frequencies are generally necessary for hearing aid programming.
Auditory steady-state response (ASSR) is an auditory evoked potential, elicited with modulated tones, that can be used to predict hearing sensitivity in patients of all ages. It is an electrophysiologic response to rapid auditory stimuli and creates a statistically valid estimated audiogram (the evoked potential predicts hearing thresholds). Because ASSR uses statistical measures to identify thresholds, it serves as a "cross-check" for verification purposes prior to arriving at a differential diagnosis.
In 1981, Galambos and colleagues reported on the "40 Hz auditory potential", a response to a continuous 400 Hz tone sinusoidally amplitude-modulated at 40 Hz and presented at 70 dB SPL. This produced a frequency-specific response, but the response was influenced by state of arousal. In 1991, Cohen and colleagues found that presenting at modulation rates above 70 Hz produced a smaller response that was less affected by sleep. In 1994, Rickards and colleagues showed that it was possible to obtain responses in newborns. In 1995, Lins and Picton found that simultaneous stimuli presented at modulation rates in the 80 to 100 Hz range made it possible to obtain auditory thresholds. [3]
ASSR uses the same or similar recording montages as the ABR. Two active electrodes are placed at or near the vertex and at the ipsilateral earlobe or mastoid, with the ground at the low forehead. Collecting from both ears simultaneously requires a two-channel pre-amplifier; a single-channel recording can detect activity from a binaural presentation, in which case a common reference electrode may be located at the nape of the neck. Transducers can be earphones, headphones, a bone oscillator, or a sound field, and it is preferable for the patient to be asleep. The high-pass filter might be set at approximately 40 to 90 Hz and the low-pass filter between 320 and 720 Hz, with typical filter slopes of 6 dB per octave. Gain settings of 10,000 are common, artifact rejection is enabled, and a manual override allows the clinician to make decisions during the test and correct as appropriate. [14]
Analysis depends on the fact that the related bioelectric events coincide with the stimulus repetition rate. The specific analysis method is based on the manufacturer's detection algorithm; it operates in the spectral domain, where the response is composed of frequency components that are harmonics of the stimulus repetition rate, and ASSR systems incorporate higher harmonics in their detection algorithms. [14] Most equipment provides correction tables for converting ASSR thresholds to estimated HL audiograms, which are generally found to be within 10 to 15 dB of audiometric thresholds, although studies vary. Correction data depend on variables such as equipment, frequencies, collection time, subject age, sleep state, and stimulus parameters. [15]
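A common, openly described form of such detection compares the spectral power at the modulation rate and its harmonics with the power in neighboring noise bins. The sketch below illustrates that idea on synthetic data; it is a simplified stand-in, not any manufacturer's algorithm, and the threshold ratio and window sizes are arbitrary assumptions:

```python
import numpy as np

def assr_detect(eeg, fs, mod_rate, n_harmonics=2, half_window=30, ratio=2.0):
    """Flag a response as present when the summed power at the modulation
    rate and its harmonics clearly exceeds the local noise floor.
    Simplified sketch; commercial systems apply formal statistical tests."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    signal_power, noise_power = 0.0, 0.0
    for h in range(1, n_harmonics + 1):
        k = int(np.argmin(np.abs(freqs - h * mod_rate)))   # response bin
        neighbours = np.r_[spectrum[k - half_window:k],
                           spectrum[k + 1:k + 1 + half_window]]
        signal_power += spectrum[k]
        noise_power += neighbours.mean()
    return signal_power > ratio * noise_power

# Synthetic example: a weak 40 Hz steady-state response buried in EEG noise.
fs, duration, mod_rate = 1000, 8.0, 40.0
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(1)
eeg = 0.3 * np.sin(2 * np.pi * mod_rate * t) + rng.normal(0.0, 1.0, t.size)
print(assr_detect(eeg, fs, mod_rate))   # True: the 40 Hz component stands out
```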
In certain cases where behavioral thresholds cannot be obtained, ABR thresholds can be used for hearing aid fittings. Fitting formulas such as DSL v5.0 allow the hearing aid settings to be based on the ABR thresholds. Correction factors exist for converting ABR thresholds to behavioral thresholds, but they vary greatly. For example, one set involves lowering ABR thresholds from 1000 to 4000 Hz by 10 dB and lowering the ABR threshold at 500 Hz by 15 to 20 dB. [16] Previously, brainstem audiometry was used for hearing aid selection by using normal and pathological intensity-amplitude functions to determine appropriate amplification. [17] The principal idea was that the amplitudes of the brainstem potentials are directly related to loudness perception; under this assumption, the amplitudes of brainstem potentials evoked through the hearing devices should exhibit close-to-normal values. ABR thresholds do not necessarily improve in the aided condition, [18] and ABR can be an inaccurate indicator of hearing aid benefit because hearing aids cannot faithfully process the brief transient stimuli used to evoke a response. Bone-conduction ABR thresholds can be used if other limitations are present, but they are not as accurate as ABR thresholds recorded through air conduction. [19]
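Applying the example correction set above is simple arithmetic. The sketch below uses the midpoint of the 15 to 20 dB range at 500 Hz, and the input thresholds are invented for illustration:

```python
# Example corrections from the text: subtract 10 dB at 1000-4000 Hz and
# 15-20 dB at 500 Hz (the midpoint, 17.5 dB, is used here for illustration).
CORRECTION_DB = {500: 17.5, 1000: 10, 2000: 10, 4000: 10}

def estimate_behavioral_thresholds(abr_thresholds_db):
    """Convert tone-burst ABR thresholds (dB) to estimated behavioral
    thresholds by applying frequency-specific corrections."""
    return {f: level - CORRECTION_DB[f] for f, level in abr_thresholds_db.items()}

abr = {500: 45, 1000: 40, 2000: 35, 4000: 40}   # hypothetical ABR thresholds
print(estimate_behavioral_thresholds(abr))
# {500: 27.5, 1000: 30, 2000: 25, 4000: 30}
```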
According to the U.S. Food and Drug Administration, about 188,000 people worldwide had received cochlear implants as of April 2009; in the United States, 30,000 adults and over 30,000 children have them. [20] In 1961, the otologist William House began work on the predecessor of the cochlear implant. The first implant, approved by the FDA in 1984, [21] was a single-channel device, and multi-channel cochlear implants followed. A cochlear implant transforms sound received by the implant's microphone into radio waves using the external sound processor. The external transmitting coil transmits the (frequency-modulated) radio waves through the skin; the signal is not turned back into sound. Instead, the internal receiver-stimulator delivers the appropriate electrical stimulation to the internal electrodes to represent the sounds, and the electrode array stimulates auditory nerve fibers in the cochlea, which carry the signal to the brain.
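The processor's mapping from sound to per-electrode stimulation can be pictured as a bank of band-pass filters whose band energies drive individual electrodes. The following toy sketch uses short-time Fourier magnitudes as a stand-in for such a filterbank; the electrode count, band edges, and frame length are illustrative assumptions, not any manufacturer's coding strategy:

```python
import numpy as np

def envelopes_for_electrodes(audio, fs, n_electrodes=12, frame_ms=8):
    """Split the signal into short frames, measure energy in n_electrodes
    logarithmically spaced frequency bands, and return one envelope per
    electrode. A toy stand-in for a sound processor's filterbank."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(audio) // frame
    edges = np.logspace(np.log10(200), np.log10(7000), n_electrodes + 1)
    env = np.zeros((n_electrodes, n_frames))
    for i in range(n_frames):
        seg = audio[i * frame:(i + 1) * frame]
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(frame, 1 / fs)
        for e in range(n_electrodes):
            band = (freqs >= edges[e]) & (freqs < edges[e + 1])
            env[e, i] = spec[band].sum()      # band energy -> stimulation level
    return env

fs = 16_000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 1000 * t)           # a 1 kHz tone...
env = envelopes_for_electrodes(tone, fs)
print(np.argmax(env.mean(axis=1)))            # ...drives mainly one electrode
```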
One way to measure the status of the auditory cortical pathways is to study the latency of cortical auditory evoked potentials (CAEP). In particular, the latency of the first positive peak (P1) of the CAEP is of interest. P1 is a robust positive wave occurring at around 100 to 300 ms in children. P1 latency represents the synaptic delays throughout the peripheral and central auditory pathways. [22] P1 in children is considered a marker for maturation of the auditory cortical areas. [23] [24] [25]
P1 latency changes as a function of age and is considered an index of cortical auditory maturation. [26] P1 latency and age have a strong negative correlation: P1 latency decreases with increasing age, most likely because synaptic transmission becomes more efficient over time. The P1 waveform also broadens with age. The P1 neural generators are thought to originate from the thalamo-cortical portion of the auditory cortex, and P1 may reflect the first recurrent activity in the auditory cortex. [27] The negative component following P1 is called N1; N1 is not consistently seen in children until 12 years of age.
A 2006 study measured the P1 response in deaf children who received cochlear implants at different ages, to examine the limits of plasticity in the central auditory system. [24] Children who received cochlear implant stimulation while younger than 3.5 years had normal P1 latencies, children implanted after seven years of age had abnormal latencies, and children implanted between 3.5 and 7 years showed variable latencies. Studies in 2005 [28] and 2007 [25] reported that in children implanted early, P1 had normal waveform morphology, whereas children implanted later showed abnormal waveforms with lower amplitudes. A 2008 study used source reconstruction and dipole source analysis derived from high-density EEG recordings to estimate the P1 generators in three groups of children: normal-hearing children, children implanted before age four, and children implanted after age seven. It concluded that the waveform morphology of normal-hearing children and early-implanted children was similar. [29]
An evoked potential or evoked response is an electrical potential in a specific pattern recorded from a specific part of the nervous system, especially the brain, of a human or other animal following presentation of a stimulus such as a light flash or a pure tone. Different types of potentials result from stimuli of different modalities and types. An evoked potential is distinct from the spontaneous potentials detected by electroencephalography (EEG), electromyography (EMG), or other electrophysiologic recording methods. Such potentials are useful for electrodiagnosis and monitoring, including the detection of disease and drug-related sensory dysfunction and intraoperative monitoring of sensory pathway integrity.
The saccule is a bed of sensory cells in the inner ear that detects linear acceleration and head tilting in the vertical plane, and converts these movements into electrical impulses to be interpreted by the brain. When the head moves vertically, the sensory cells of the saccule are displaced by a combination of inertia and gravity. In response, the neurons connected to the saccule transmit electrical impulses representing this movement to the brain. These impulses travel along the vestibular portion of the eighth cranial nerve to the vestibular nuclei in the brainstem.
The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs and the auditory parts of the sensory system.
The acoustic reflex is an involuntary muscle contraction that occurs in the middle ear in response to loud sound stimuli or when the person starts to vocalize.
An otoacoustic emission (OAE) is a sound that is generated from within the inner ear. Having been predicted by Austrian astrophysicist Thomas Gold in 1948, its existence was first demonstrated experimentally by British physicist David Kemp in 1978, and otoacoustic emissions have since been shown to arise through a number of different cellular and mechanical causes within the inner ear. Studies have shown that OAEs disappear after the inner ear has been damaged, so OAEs are often used in the laboratory and the clinic as a measure of inner ear health.
Audiometry is a branch of audiology and the science of measuring hearing acuity for variations in sound intensity and pitch and for tonal purity, involving thresholds and differing frequencies. Typically, audiometric tests determine a subject's hearing levels with the help of an audiometer, but may also measure ability to discriminate between different sound intensities, recognize pitch, or distinguish speech from background noise. Acoustic reflex and otoacoustic emissions may also be measured. Results of audiometric tests are used to diagnose hearing loss or diseases of the ear, and often make use of an audiogram.
Auditory neuropathy (AN) is a hearing disorder in which the outer hair cells of the cochlea are present and functional, but sound information is not transmitted sufficiently by the auditory nerve to the brain. The cause may be dysfunction at the level of the inner hair cells of the cochlea or of the spiral ganglion neurons. Hearing sensitivity with AN can range from normal to profound hearing loss.
Presbycusis, or age-related hearing loss, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult.
In human neuroanatomy, brainstem auditory evoked potentials (BAEPs), also called brainstem auditory evoked responses (BAERs), are very small auditory evoked potentials in response to an auditory stimulus, which are recorded by electrodes placed on the scalp. They reflect neuronal activity in the auditory nerve, cochlear nucleus, superior olive, and inferior colliculus of the brainstem. They typically have a response latency of no more than six milliseconds with an amplitude of approximately one microvolt.
Electric acoustic stimulation (EAS) is the use of a hearing aid and a cochlear implant technology together in the same ear. EAS is intended for people with high-frequency hearing loss, who can hear low-pitched sounds but not high-pitched ones. The hearing aid acoustically amplifies low-frequency sounds, while the cochlear implant electrically stimulates the middle- and high-frequency sounds. The inner ear then processes the acoustic and electric stimuli simultaneously, to give the patient the perception of sound.
Cortical deafness is a rare form of sensorineural hearing loss caused by damage to the primary auditory cortex. It is an auditory disorder in which the patient is unable to hear sounds despite having no apparent damage to the structures of the ear, and it has been argued to be the combination of auditory verbal agnosia and auditory agnosia. Patients with cortical deafness cannot hear any sounds, that is, they are not aware of sounds, including non-speech sounds, voices, and speech. Although patients appear and feel completely deaf, they can still exhibit some reflex responses such as turning their head towards a loud sound.
The cerebellopontine angle syndrome is a distinct neurological syndrome of deficits that can arise due to the closeness of the cerebellopontine angle to specific cranial nerves. Signs and symptoms include unilateral hearing loss (85%), speech impediments, disequilibrium, tremors, or other loss of motor control. The cerebellopontine angle cistern is a subarachnoid cistern formed by the cerebellopontine angle that lies between the cerebellum and the pons. It is filled with cerebrospinal fluid and is a common site for the growth of acoustic neuromas or schwannomas.
An auditory brainstem implant (ABI) is a surgically implanted electronic device that provides a sense of sound to a person who is profoundly deaf due to retrocochlear hearing impairment. In Europe, ABIs have been used in children and adults, and in patients with neurofibromatosis type II.
Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as an ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. The academic field concerned with hearing is auditory science.
The neural encoding of sound is the representation of auditory sensation and perception in the nervous system. As contemporary neuroscience advances, what is known of the auditory system continues to change. The encoding of sounds includes the transduction of sound waves into electrical impulses along auditory nerve fibers, and further processing in the brain.
Electrocochleography is a technique of recording electrical potentials generated in the inner ear and auditory nerve in response to sound stimulation, using an electrode placed in the ear canal or tympanic membrane. The test is performed by an otologist or audiologist with specialized training, and is used for detection of elevated inner ear pressure or for the testing and monitoring of inner ear and auditory nerve function during surgery.
Bone-conduction auditory brainstem response (BCABR) is a type of auditory evoked response that records the neural response in the EEG to a stimulus transmitted through bone conduction.
The frequency following response (FFR), also referred to as frequency following potential (FFP) or envelope following response (EFR), is an evoked potential generated by periodic or nearly-periodic auditory stimuli. Part of the auditory brainstem response (ABR), the FFR reflects sustained neural activity integrated over a population of neural elements: "the brainstem response...can be divided into transient and sustained portions, namely the onset response and the frequency-following response (FFR)". It is often phase-locked to the individual cycles of the stimulus waveform and/or the envelope of the periodic stimuli. It has not been well studied with respect to its clinical utility, although it can be used as part of a test battery for helping to diagnose auditory neuropathy. This may be in conjunction with, or as a replacement for, otoacoustic emissions.
Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.