Auditory brainstem response

The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA), is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex-positive waves, of which waves I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors. [1] [2] [3]

The auditory structures that generate the auditory brainstem response are believed to be as follows: wave I from the distal portion of cranial nerve VIII (the auditory nerve), wave II from the proximal portion of the auditory nerve, wave III from the cochlear nucleus, wave IV from the superior olivary complex, and wave V from the lateral lemniscus and inferior colliculus. [2] [4]

History of research

In 1967, Sohmer and Feinmesser were the first to publish ABRs recorded with surface electrodes in humans, showing that cochlear potentials could be obtained non-invasively. In 1971, Jewett and Williston gave a clear description of the human ABR and correctly interpreted the later waves as arising from the brainstem. In 1974, Hecox and Galambos showed that the ABR could be used for threshold estimation in adults and infants. In 1975, Starr and Achor were the first to report the effects on the ABR of CNS pathology in the brainstem. In 1977, Selters and Brackmann published landmark findings on prolonged inter-peak latencies in tumor cases (tumors greater than 1 cm). [2]

Long and Allen were the first to report abnormal brainstem auditory evoked potentials (BAEPs) in an alcoholic woman who recovered from acquired central hypoventilation syndrome. These investigators hypothesized that their patient's brainstem had been poisoned, but not destroyed, by her chronic alcoholism. [6]

Measurement techniques

Interpretation of results

When interpreting the ABR, clinicians look at amplitude (the number of neurons firing), latency (the speed of transmission), interpeak latency (the time between peaks), and interaural latency (the difference in wave V latency between ears). The ABR represents activity initiated at the base of the cochlea that moves toward the apex over a 4 ms period. The peaks largely reflect activity from the most basal regions of the cochlea because the disturbance hits the basal end first, and by the time it reaches the apex, a significant amount of phase cancellation has occurred. [citation needed]
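
As a minimal illustration of these measures, the Python sketch below computes the I–V interpeak latency for each ear and the interaural wave V difference from hypothetical peak picks; the latency values and the roughly 4 ms adult norm mentioned in the comment are assumptions for demonstration, not normative data.

```python
# Hypothetical peak latencies (ms) picked from ABR traces; values are
# illustrative assumptions, not normative data.
left_peaks = {"I": 1.6, "III": 3.7, "V": 5.6}
right_peaks = {"I": 1.7, "III": 3.9, "V": 6.1}

def interpeak_latency(peaks, a, b):
    """Time between two labeled peaks, e.g. the I-V conduction time."""
    return peaks[b] - peaks[a]

# I-V interpeak latency for each ear (roughly 4 ms in typical adults)
ipl_left = interpeak_latency(left_peaks, "I", "V")
ipl_right = interpeak_latency(right_peaks, "I", "V")

# Interaural latency difference: wave V latency, right ear minus left
ild_v = right_peaks["V"] - left_peaks["V"]

print(f"I-V IPL left: {ipl_left:.1f} ms, right: {ipl_right:.1f} ms")
print(f"Interaural wave V difference: {ild_v:.1f} ms")
```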

Use

The ABR is used for newborn hearing screening, auditory threshold estimation, intraoperative monitoring, determining the type and degree of hearing loss, detecting auditory nerve and brainstem lesions, and the development of cochlear implants.

Advanced techniques

Stacked ABR

History

One use of the traditional ABR is site-of-lesion testing, where it has been shown to be sensitive to large acoustic tumors. However, it has poor sensitivity to tumors smaller than 1 centimeter in diameter. In the 1990s, several studies concluded that the use of ABRs to detect acoustic tumors should be abandoned. As a result, many practitioners now use only MRI for this purpose. [7]

The ABR's failure to identify small tumors stems from its reliance on latency changes of peak V. Peak V is primarily influenced by high-frequency fibers, and tumors will be missed if those fibers are not affected. Although the click stimulates a wide frequency region of the cochlea, phase cancellation of the lower-frequency responses occurs as a result of time delays along the basilar membrane. [8] If a tumor is small, it is possible those fibers will not be sufficiently affected for the tumor to be detected by the traditional ABR measure.

The primary reasons it is not practical simply to send every patient for an MRI are the high cost of an MRI, its impact on patient comfort, and its limited availability in rural areas and developing countries. In 1997, Dr. Manuel Don and colleagues published the Stacked ABR as a way to enhance the sensitivity of the ABR in detecting smaller tumors. Their hypothesis was that the stacked derived-band ABR amplitude could detect small acoustic tumors missed by standard ABR measures. [9] In 2005, he stated that it would be clinically valuable to have an ABR test available to screen for small tumors. [7] In a 2005 interview in Audiology Online, Dr. Don of the House Ear Institute defined the Stacked ABR as "...an attempt to record the sum of the neural activity across the entire frequency region of the cochlea in response to a click stimuli." [4]

Stacked ABR defined

The stacked ABR is the sum of the synchronous neural activity generated from five frequency regions across the cochlea in response to click stimulation and high-pass pink noise masking. [7] The development of this technique was based on the 8th cranial nerve compound action potential work done by Teas, Eldredge, and Davis in 1962. [10]

Methodology

The stacked ABR is a composite of activity from all frequency regions of the cochlea – not just the high frequencies. [4]

  • Step 1: obtain click-evoked ABR responses using clicks with ipsilateral high-pass pink masking noise
  • Step 2: obtain the derived-band ABRs (DBRs)
  • Step 3: shift and align the wave V peaks of the DBRs – thus "stacking" the waveforms with wave V lined up
  • Step 4: add the waveforms together
  • Step 5: compare the amplitude of the Stacked ABR with that of the click-evoked ABR from the same ear

Because the derived waveforms represent activity from progressively more apical regions along the basilar membrane, their wave V latencies are prolonged owing to the nature of the traveling wave. To compensate for these latency shifts, the wave V component of each derived waveform is stacked (aligned), the waveforms are added together, and the resulting amplitude is measured. [8] In 2005, Don explained that in a normal ear, the Stacked ABR will have the same amplitude as the click-evoked ABR, but the presence of even a small tumor results in a reduction in the amplitude of the Stacked ABR in comparison with the click-evoked ABR.
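
A minimal sketch of this alignment-and-summation step follows, assuming five derived-band waveforms have already been recorded; the synthetic Gaussian "wave V" shapes, sampling rate, and latencies are illustrative assumptions, not the published procedure's actual signal processing.

```python
import numpy as np

fs = 20_000                      # samples per second (assumed)
t = np.arange(0, 0.010, 1 / fs)  # 10 ms analysis window

def synthetic_dbr(wave_v_latency_s):
    """Toy derived-band response: a Gaussian 'wave V' at a given latency."""
    return np.exp(-((t - wave_v_latency_s) ** 2) / (2 * 0.0004 ** 2))

# Wave V latency lengthens for more apical (lower-frequency) bands
latencies = [0.0056, 0.0060, 0.0065, 0.0071, 0.0078]
dbrs = [synthetic_dbr(lat) for lat in latencies]

# Align each derived band so its wave V peak lands at a common sample,
# then sum ("stack") the aligned waveforms.
target = max(np.argmax(w) for w in dbrs)
stacked = sum(np.roll(w, target - np.argmax(w)) for w in dbrs)

# The measure of interest is the stacked wave V amplitude, which would
# then be compared with the click-evoked ABR amplitude from the same ear.
print(f"Stacked wave V amplitude: {stacked.max():.2f}")
```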

Application and effectiveness

With the intent of screening for and detecting the presence of small (less than or equal to 1 cm) acoustic tumors, the Stacked ABR has shown: [9]

  • 95% sensitivity
  • 83% specificity

(Note: 100% sensitivity was obtained at 50% specificity.)

In a 2007 comparative study of ABR abnormalities in acoustic tumor patients, Montaguti and colleagues noted the promise of, and the great scientific interest in, the Stacked ABR. The article suggests that the Stacked ABR could make it possible to identify small acoustic neuromas missed by traditional ABRs. [11]

The Stacked ABR is a valuable screening tool for the detection of small acoustic tumors because it is sensitive, specific, widely available, comfortable, and cost-effective.

Tone-burst ABR

Tone-burst ABR is used to obtain thresholds for children who are too young to respond reliably to behavioral testing with frequency-specific sound stimuli. The most commonly tested frequencies are 500, 1000, 2000, and 4000 Hz, as these frequencies are generally thought to be necessary for hearing aid programming.
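
For illustration, the sketch below generates one common form of frequency-specific stimulus, a Blackman-gated tone burst; the gating window and burst duration are assumptions drawn from common practice rather than specified in the text.

```python
import numpy as np

fs = 44_100                          # audio sampling rate (assumed)
freq_hz = 2000                       # one of the commonly tested frequencies
cycles = 4                           # total duration in carrier cycles (assumed)

n = int(fs * cycles / freq_hz)       # samples in the burst
t = np.arange(n) / fs

# Blackman window gates the tone on and off smoothly, keeping the
# stimulus energy concentrated near the nominal frequency.
burst = np.blackman(n) * np.sin(2 * np.pi * freq_hz * t)
```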

Auditory steady-state response (ASSR)

ASSR defined

The auditory steady-state response is an auditory evoked potential, elicited with modulated tones, that can be used to predict hearing sensitivity in patients of all ages. It is an electrophysiologic response to rapid auditory stimuli that yields a statistically valid estimated audiogram (an evoked potential used to predict hearing thresholds for normal-hearing individuals and those with hearing loss). The ASSR uses statistical measures to determine whether and when a threshold is present and serves as a "cross-check" for verification purposes prior to arriving at a differential diagnosis.

History

In 1981, Galambos and colleagues reported on the "40 Hz auditory potential", elicited by a continuous 400 Hz tone sinusoidally amplitude modulated at 40 Hz and presented at 70 dB SPL. This produced a very frequency-specific response, but the response was highly susceptible to state of arousal. In 1991, Cohen and colleagues found that stimulation rates higher than 40 Hz (>70 Hz) produced a smaller response that was less affected by sleep. In 1994, Rickards and colleagues showed that it was possible to obtain responses in newborns. In 1995, Lins and Picton found that simultaneous stimuli presented at rates in the 80 to 100 Hz range made it possible to obtain auditory thresholds. [1]
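
For illustration, the stimulus described above can be sketched as follows; the duration and modulation depth are assumptions, and calibration to 70 dB SPL is not shown.

```python
import numpy as np

# A 400 Hz carrier sinusoidally amplitude modulated at 40 Hz.
fs = 44_100                     # audio sampling rate (assumed)
t = np.arange(0, 1.0, 1 / fs)   # 1 s of signal (assumed duration)
carrier_hz, mod_hz, depth = 400, 40, 1.0

# Modulation envelope varies between 0 and 1 at the 40 Hz rate
envelope = (1 + depth * np.sin(2 * np.pi * mod_hz * t)) / 2
stimulus = envelope * np.sin(2 * np.pi * carrier_hz * t)
```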

Methodology

Recording montages for the ASSR are the same as, or similar to, those used for ABR recordings. Two active electrodes are placed at or near the vertex and at the ipsilateral earlobe/mastoid, with the ground at the low forehead. If collecting from both ears simultaneously, a two-channel pre-amplifier is used. When a single-channel recording system is used to detect activity from a binaural presentation, a common reference electrode may be located at the nape of the neck. Transducers can be insert earphones, headphones, a bone oscillator, or a sound field, and it is preferable for the patient to be asleep. Unlike ABR settings, the high-pass filter might be approximately 40 to 90 Hz and the low-pass filter between 320 and 720 Hz, with typical filter slopes of 6 dB per octave. Gain settings of 10,000 are common, artifact rejection is left "on", and it is thought advantageous to have a manual "override" that allows the clinician to make decisions during the test and apply course corrections as needed. [12]
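
The recording parameters above can be summarized as a configuration sketch; the field names are hypothetical and do not correspond to any particular vendor's software.

```python
# Illustrative ASSR acquisition settings drawn from the ranges above;
# keys and values are assumptions for demonstration, not a vendor API.
assr_settings = {
    "electrodes": {
        "active": "vertex",
        "reference": "ipsilateral earlobe/mastoid",
        "ground": "low forehead",
    },
    "high_pass_hz": 65,                 # roughly 40-90 Hz per the text
    "low_pass_hz": 500,                 # roughly 320-720 Hz per the text
    "filter_slope_db_per_octave": 6,
    "gain": 10_000,
    "artifact_reject": True,            # with manual override available
}
```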

ABR vs. ASSR

Similarities:

Differences:

Analysis is mathematically based and depends on the fact that related bioelectric events coincide with the stimulus repetition rate. The specific method of analysis is based on the manufacturer's statistical detection algorithm. It occurs in the spectral domain and is composed of specific frequency components that are harmonics of the stimulus repetition rate. Early ASSR systems considered only the first harmonic, but newer systems also incorporate higher harmonics in their detection algorithms. [12] Most equipment provides correction tables for converting ASSR thresholds to estimated HL audiograms, which are found to be within 10 to 15 dB of audiometric thresholds, although there is variance across studies. Correction data depend on variables such as the equipment used, frequencies collected, collection time, age of the subject, sleep state of the subject, and stimulus parameters. [13]
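
A minimal sketch of this spectral-domain detection idea follows: it tests whether power at the stimulus repetition rate and its second harmonic stands out from neighboring noise bins. The simple F-ratio criterion and the synthetic data are stand-ins for a manufacturer's proprietary statistical algorithm.

```python
import numpy as np

fs = 1_000                       # EEG sampling rate (assumed)
mod_hz = 80                      # stimulus repetition/modulation rate
t = np.arange(0, 4.0, 1 / fs)    # 4 s of averaged recording (synthetic)
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * mod_hz * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def f_ratio(target_hz, n_neighbors=60):
    """Power at the target bin vs. mean power of surrounding noise bins."""
    k = np.argmin(np.abs(freqs - target_hz))
    neighbors = np.r_[spectrum[k - n_neighbors:k],
                      spectrum[k + 1:k + 1 + n_neighbors]]
    return spectrum[k] / neighbors.mean()

# Check the first harmonic and, as newer systems do, a higher harmonic
for h in (1, 2):
    print(f"harmonic {h}: F-ratio = {f_ratio(h * mod_hz):.1f}")
```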

Hearing aid fittings

In certain cases where behavioral thresholds cannot be attained, ABR thresholds can be used for hearing aid fittings. Newer fitting formulas such as DSL v5.0 allow the user to base the hearing aid settings on ABR thresholds. Correction factors exist for converting ABR thresholds to behavioral thresholds, but they vary greatly. For example, one set of correction factors involves lowering ABR thresholds from 1000 to 4000 Hz by 10 dB and lowering the ABR threshold at 500 Hz by 15 to 20 dB. [14] Previously, brainstem audiometry was used for hearing aid selection by using normal and pathological intensity-amplitude functions to determine appropriate amplification. [15] The principal idea behind this selection and fitting of the hearing instrument was that the amplitudes of the brainstem potentials are directly related to loudness perception; under this assumption, the amplitudes of brainstem potentials evoked through the hearing devices should exhibit close-to-normal values. ABR thresholds do not necessarily improve in the aided condition. [16] ABR can be an inaccurate indicator of hearing aid benefit because the hearing aid may not reproduce the transient stimuli used to evoke a response with sufficient fidelity. Bone-conduction ABR thresholds can be used if other limitations are present, but they are not as accurate as ABR thresholds recorded through air conduction. [17]
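
Applying the example correction factors just cited is simple arithmetic, sketched below with hypothetical ABR thresholds (15 dB is used here for the 500 Hz correction).

```python
# Example correction factors from the text [14]: lower ABR thresholds
# at 1000-4000 Hz by 10 dB and at 500 Hz by 15-20 dB (15 dB assumed).
# The ABR threshold values themselves are hypothetical.
abr_thresholds_db = {500: 60, 1000: 55, 2000: 50, 4000: 65}
correction_db = {500: 15, 1000: 10, 2000: 10, 4000: 10}

estimated_behavioral = {
    freq: level - correction_db[freq]
    for freq, level in abr_thresholds_db.items()
}
print(estimated_behavioral)  # {500: 45, 1000: 45, 2000: 40, 4000: 55}
```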

Advantages of hearing aid selection by brainstem audiometry include the following:

Disadvantages of hearing aid selection by brainstem audiometry include the following:

Cochlear implantation and central auditory development

There are about 188,000 people around the world who have received cochlear implants. In the United States alone, there are about 30,000 adults and over 30,000 children who are recipients of cochlear implants. [18] This number continues to grow as cochlear implantation becomes more and more accepted. In 1961, Dr. William House, an otologist and the founder of the House Ear Institute in Los Angeles, California, began work on the predecessor of today's cochlear implant. This groundbreaking device, manufactured by the 3M Company, was approved by the FDA in 1984. [19] Although it was a single-channel device, it paved the way for future multi-channel cochlear implants. As of 2007, the three cochlear implant devices approved for use in the U.S. were manufactured by Cochlear, Med-El, and Advanced Bionics.

A cochlear implant works as follows: sound is received by the implant's microphone, which picks up input that needs to be processed to determine how the electrodes will receive the signal. This processing is done on the external component of the cochlear implant, called the sound processor. The transmitting coil, also an external component, transmits the information from the sound processor through the skin using frequency-modulated radio waves. Unlike with a hearing aid, the signal is never turned back into an acoustic stimulus. The information is instead received by the cochlear implant's internal components: the receiver-stimulator delivers the appropriate amount of electrical stimulation to the appropriate electrodes on the array to represent the sound signal that was detected. The electrode array stimulates the remaining auditory nerve fibers in the cochlea, which carry the signal on to the brain, where it is processed.
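
As a rough sketch of the sound-processor stage described above, the code below splits a microphone signal into frequency bands and derives a per-electrode stimulation level from each band's envelope; real processing strategies are far more sophisticated, and the band edges, channel count, and envelope measure are assumptions for illustration.

```python
import numpy as np

fs = 16_000
t = np.arange(0, 0.05, 1 / fs)
# Synthetic "microphone" input: a low- and a high-frequency component
mic_signal = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

band_edges_hz = [200, 500, 1000, 2000, 4000]  # 4 hypothetical channels

def band_envelope(signal, lo, hi):
    """Crude band-pass via FFT masking, then a rectified mean envelope."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    spectrum[(freqs < lo) | (freqs >= hi)] = 0
    return np.abs(np.fft.irfft(spectrum, signal.size)).mean()

# One stimulation level per electrode channel
levels = [band_envelope(mic_signal, lo, hi)
          for lo, hi in zip(band_edges_hz, band_edges_hz[1:])]
print([f"{v:.3f}" for v in levels])
```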

One way to measure the developmental status and limits of plasticity of the auditory cortical pathways is to study the latency of cortical auditory evoked potentials (CAEP). In particular, the latency of the first positive peak (P1) of the CAEP is of interest to researchers. P1 in children is considered a marker for maturation of the auditory cortical areas (Eggermont & Ponton, 2003; Sharma & Dorman, 2006; Sharma, Gilley, Dorman, & Baldwin, 2007). [20] [21] [22] The P1 is a robust positive wave occurring at around 100 to 300 ms in children. P1 latency represents the synaptic delays throughout the peripheral and central auditory pathways (Eggermont, Ponton, Don, Waring, & Kwong, 1997). [23]

P1 latency changes as a function of age and is considered an index of cortical auditory maturation (Ceponiene, Cheour, & Naatanen, 1998). [24] P1 latency and age have a strong negative correlation, with P1 latency decreasing as age increases, most likely because synaptic transmission becomes more efficient over time. The P1 waveform also becomes broader with age. The P1 neural generators are thought to originate in the thalamo-cortical portion of the auditory cortex. Researchers believe that P1 may reflect the first recurrent activity in the auditory cortex (Kral & Eggermont, 2007). [25] The negative component following P1 is called N1. N1 is not consistently seen in children until 12 years of age.

In 2006, Sharma and Dorman measured the P1 response in deaf children who received cochlear implants at different ages to examine the limits of plasticity in the central auditory system. [21] Those who received cochlear implant stimulation early in childhood (younger than 3.5 years) had normal P1 latencies. Children who received cochlear implant stimulation late in childhood (older than seven years) had abnormal cortical response latencies. Children who received cochlear implant stimulation between the ages of 3.5 and 7 years showed variable P1 latencies. Sharma also studied the waveform morphology of the P1 response in 2005 [26] and 2007. [22] She found that in early-implanted children the P1 waveform morphology was normal, whereas in late-implanted children the P1 waveforms were abnormal and had lower amplitudes compared with normal waveform morphology. In 2008, Gilley and colleagues used source reconstruction and dipole source analysis derived from high-density EEG recordings to estimate generators for the P1 in three groups of children: normal-hearing children, children who received a cochlear implant before the age of four, and children who received a cochlear implant after the age of seven. They concluded that the waveform morphology of normal-hearing children and early-implanted children was very similar. [27]

Sedation protocols

Common sedative used

To achieve the highest-quality recordings of any evoked potential, good patient relaxation is generally necessary; recordings can otherwise be contaminated with myogenic and movement artifacts. Patient restlessness and movement contribute to threshold overestimation and inaccurate test results. In most cases, an adult is usually more than capable of providing a good extratympanic recording without sedation. In transtympanic recordings, a sedative can be used when time-consuming procedures need to take place, and most patients (especially infants) are given light anesthesia when tested transtympanically.

Chloral hydrate is a commonly prescribed sedative and the one most commonly used to induce sleep in young children and infants for AEP recordings. Its active metabolite, an alcohol (trichloroethanol), depresses the central nervous system, specifically the cerebral cortex. Side effects of chloral hydrate include vomiting, nausea, gastric irritation, delirium, disorientation, allergic reactions, and occasionally excitement – a high level of activity rather than drowsiness and sleep. Chloral hydrate is readily available in three forms: syrup, capsule, and suppository. Syrup is most successful for those 4 months and older; the proper dosage is poured into an oral syringe or cup, the syringe is used to squirt the syrup into the back of the mouth, and the child is encouraged to swallow. To induce sleep, dosages range from 500 mg to 2 g; the recommended pediatric dose is 50 mg per kg of body weight. If the child does not fall asleep after the first dose, a second dose no greater than the first can be given, with the overall dose not exceeding 100 mg per kg of body weight. Sedation personnel should include a physician and a registered or practical nurse, and physiologic parameters must be documented and monitored throughout the entire process. Sedatives should only be administered in the presence of those who are knowledgeable and skilled in airway management and cardiopulmonary resuscitation (CPR).
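
A minimal sketch of the dosing arithmetic described above follows, purely for illustration; it is not clinical guidance, and dosing decisions belong to qualified medical personnel.

```python
# Illustrative only: encodes the figures quoted above (50 mg/kg first
# dose, 2 g upper end of the range, second dose no greater than the
# first, total capped at 100 mg/kg). Not clinical guidance.
def chloral_hydrate_doses_mg(weight_kg):
    first = min(50 * weight_kg, 2000)
    second_max = max(min(first, 100 * weight_kg - first), 0)
    return first, second_max

print(chloral_hydrate_doses_mg(12))  # e.g. a 12 kg child -> (600, 600)
```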

Increasingly, propofol is administered intravenously via an infusion pump for sedation.

Procedures

A consent form indicating the conscious sedation and the procedure being performed must be signed by the patient or guardian. A documented pre-sedation medical evaluation, including a focused airway examination, must be completed either on the same day as the sedation or within recent days, and will include, but is not limited to:

All orders for conscious sedation must be written; prescriptions or orders received from areas outside of the conscious sedation area are not acceptable. A single individual must be assigned to monitor the sedated patient's cardiorespiratory status before, during, and after sedation.

If the patient is deeply sedated, that individual's only job should be to verify and record vital signs at least every five minutes. All age- and size-appropriate equipment and medications used to sustain life should be verified before sedation and should be readily available at any time during and after sedation.

The medication should be administered by a physician or nurse and documented (dosage, name, time, etc.). Children should not receive the sedative without the supervision of skilled and knowledgeable medical personnel (for example, at home or from a technician alone). Emergency equipment, including a crash cart, must be readily available, and respiration should be monitored visually or with a stethoscope. A family member needs to remain in the room with the patient, especially if the tester steps out; in this scenario, respiration can be monitored acoustically with a talk-back system microphone placed near the patient's head, and medical personnel should be notified of any slowing of respiration.

After the procedure is over, the patient must be continuously observed in a facility that is appropriately equipped and staffed, because patients are typically "floppy" and have poor motor control; they should not stand on their own for the first few hours. No other medications containing alcohol should be administered until the patient has returned to a normal state. Drinking fluids is encouraged to reduce stomach irritation. Each facility should create and use its own discharge criteria, and verbal and written instructions should be provided on limitations of activity and anticipated changes in behavior. All discharge criteria must be met and documented before the patient leaves the facility.

Some criteria prior to discharge should include:

References

  1. Eggermont, Jos J.; Burkard, Robert F.; Don, Manuel (2007). Auditory evoked potentials: basic principles and clinical application. Hagerstown, MD: Lippincott Williams & Wilkins. ISBN 978-0-7817-5756-0. OCLC 70051359.
  2. Hall, James W. (2007). New handbook of auditory evoked responses. Boston: Pearson. ISBN 978-0-205-36104-5. OCLC 71369649.
  3. Moore, Ernest J (1983). Bases of auditory brain stem evoked responses. New York: Grune & Stratton. ISBN 978-0-8089-1465-5. OCLC 8451561.
  4. DeBonis, David A.; Donohue, Constance L. (2007). Survey of Audiology: Fundamentals for Audiologists and Health Professionals (2nd ed.). Boston, MA: Allyn & Bacon. ISBN 978-0-205-53195-0. OCLC 123962954.
  5. Møller, Aage R.; Jannetta, Peter J.; Møller, Margareta B. (November 1981). "Neural Generators of Brainstem Evoked Potentials: Results from Human Intracranial Recordings". Annals of Otology, Rhinology & Laryngology. 90 (6): 591–596. doi:10.1177/000348948109000616. ISSN 0003-4894. PMID 7316383. S2CID 11652964.
  6. Long, K.J.; Allen, N. (October 1984). "Abnormal brain-stem auditory evoked potentials following Ondine's curse". Arch. Neurol. 41 (10): 1109–10. doi:10.1001/archneur.1984.04050210111028. PMID 6477223.
  7. Don M, Kwong B, Tanaka C, Brackmann D, Nelson R (2005). "The stacked ABR: a sensitive and specific screening tool for detecting small acoustic tumors". Audiol. Neurootol. 10 (5): 274–90. doi:10.1159/000086001. PMID 15925862. S2CID 43009634.
  8. Prout, T (2007). "Asymmetrical low frequency hearing loss and acoustic neuroma". AudiologyOnline.
  9. Don M, Masuda A, Nelson R, Brackmann D (September 1997). "Successful detection of small acoustic tumors using the stacked derived-band auditory brain stem response amplitude". Am J Otol. 18 (5): 608–21, discussion 682–5. PMID 9303158.
  10. Teas, Donald C. (1962). "Cochlear Responses to Acoustic Transients: An Interpretation of Whole-Nerve Action Potentials". The Journal of the Acoustical Society of America. 34 (9B): 1438–1489. Bibcode:1962ASAJ...34.1438T. doi:10.1121/1.1918366. ISSN 0001-4966.
  11. Montaguti M, Bergonzoni C, Zanetti MA, Rinaldi Ceroni A (April 2007). "Comparative evaluation of ABR abnormalities in patients with and without neurinoma of VIII cranial nerve". Acta Otorhinolaryngol Ital. 27 (2): 68–72. PMC 2640003. PMID 17608133.
  12. Beck DL, Speidel DP, Petrak M (2007). "Auditory Steady-State Response (ASSR): A Beginner's Guide". The Hearing Review. 14 (12): 34–37.
  13. Picton TW, Dimitrijevic A, Perez-Abalo MC, Van Roon P (March 2005). "Estimating audiometric thresholds using auditory steady-state responses". Journal of the American Academy of Audiology. 16 (3): 140–56. doi:10.3766/jaaa.16.3.3. PMID 15844740.
  14. Hall JW, Swanepoel DW (2010). Objective Assessment of Hearing. San Diego: Plural Publishing Inc.
  15. Kiebling J (1982). "Hearing Aid Selection by Brainstem Audiometry". Scandinavian Audiology. 11 (4): 269–275. doi:10.3109/01050398209087478. PMID 7163771.
  16. Billings CJ, Tremblay K, Souza PE, Binns MA (2007). "Stimulus Intensity and Amplification Effects on Cortical Evoked Potentials". Audiol Neurotol. 12 (4): 234–246. doi:10.1159/000101331. PMID 17389790. S2CID 2120101.
  17. Rahne T, Ehelebe T, Rasinski C, Gotze G (2010). "Auditory Brainstem and Cortical Potentials Following Bone-Anchored Hearing Aid Stimulation". Journal of Neuroscience Methods. 193 (2): 300–306. doi:10.1016/j.jneumeth.2010.09.013. PMID 20875458. S2CID 42869487.
  18. Davis, Jennifer (2009-10-29). Peoria Journal Star. "According to the U.S. Food and Drug Administration, about 188,000 people worldwide have received implants as of April 2009."
  19. House, W.F. (1976). "Cochlear implants". Annals of Otology, Rhinology, and Laryngology. 85: 1–93.
  20. Eggermont, J.J.; Ponton, C.W. (2003). "Auditory-evoked potential studies of cortical maturation in normal hearing and implanted children: Correlations with changes in structure and speech perception". Acta Oto-Laryngologica. 123: 249–252.
  21. Sharma, A.; Dorman, M.F. (2006). "Central auditory development in children with cochlear implants: Clinical implications". Advances in Oto-Rhino-Laryngology.
  22. Sharma, A.; Gilley, P.M.; Dorman, M.F.; Baldwin, R. (2007). "Deprivation-induced cortical reorganization in children with cochlear implants". International Journal of Audiology. 46: 494–499.
  23. Eggermont, J.J.; Ponton, C.W.; Don, M.; Waring, M.D.; Kwong, B. (1997). "Maturational delays in cortical evoked potentials in cochlear implant users". Acta Oto-Laryngologica. 117: 161–163.
  24. Ceponiene, R.; Cheour, M.; Naatanen, R. (1998). "Interstimulus interval and auditory event-related potentials in children: Evidence for multiple generators". Electroencephalography and Clinical Neurophysiology. 108: 345–354.
  25. Kral, A.; Eggermont, J.J. (2007). "What's to lose and what's to learn: development under auditory deprivation, cochlear implants and limits of cortical plasticity". Brain Research Reviews. 56: 259–269.
  26. Sharma, A. (2005). "P1 latency as a biomarker for central auditory development in children with hearing impairment". Journal of the American Academy of Audiology. 16 (8): 564–573. doi:10.3766/jaaa.16.8.5. PMID 16295243.
  27. Gilley, P.M.; Sharma, A.; Dorman, M.F. (2008). "Cortical reorganization in children with cochlear implants". Brain Research.
