Frequency following response

The frequency following response (FFR), also referred to as the frequency following potential (FFP), is an evoked potential generated by periodic or nearly periodic auditory stimuli. [1] [2] Part of the auditory brainstem response (ABR), the FFR reflects sustained neural activity integrated over a population of neural elements: "the brainstem response...can be divided into transient and sustained portions, namely the onset response and the frequency-following response (FFR)". [3] It is often phase-locked to the individual cycles of the stimulus waveform and/or the envelope of the periodic stimuli. [4] Its clinical utility has not been well studied, although it can be used as part of a test battery to help diagnose auditory neuropathy, either in conjunction with, or as a replacement for, otoacoustic emissions. [5]
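Because the FFR is phase-locked to the stimulus, it survives averaging across trials while non-phase-locked background EEG averages toward zero. The following sketch illustrates that principle on synthetic data; the 220 Hz "stimulus" frequency, sampling rate, trial count, and noise level are all illustrative assumptions, not values from the FFR literature.

```python
import numpy as np

# A component phase-locked to the stimulus survives trial averaging,
# while non-phase-locked noise averages out. All parameters here
# (220 Hz tone, 16 kHz sampling, 100 trials) are illustrative.
fs = 16000          # sampling rate (Hz)
f0 = 220.0          # stimulus frequency (Hz)
dur = 0.1           # 100 ms epoch
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(0)
trials = [np.sin(2 * np.pi * f0 * t) + rng.normal(0, 5, t.size)
          for _ in range(100)]
avg = np.mean(trials, axis=0)   # phase-locked component remains

spectrum = np.abs(np.fft.rfft(avg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum)]
print(peak)   # dominant spectral component sits at the stimulus frequency
```

In practice the averaged response would be a recorded EEG epoch rather than a simulated sinusoid, but the same spectral-peak logic is how phase-locked energy at the stimulus frequency is typically identified.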

History

In 1930, Wever and Bray discovered a potential called the "Wever-Bray effect". [6] [7] They originally believed that the potential originated from the cochlear nerve, but it was later discovered that the response is non-neural and cochlear in origin, specifically from the outer hair cells. [8] [9] This phenomenon came to be known as the cochlear microphonic (CM). The FFR may thus have been accidentally discovered in 1930; however, renewed interest in defining the FFR did not occur until the mid-1960s. While several researchers raced to publish the first detailed account of the FFR, the term "FFR" was originally coined by Worden and Marsh in 1968 to describe the CM-like neural components recorded directly from several brainstem nuclei (research based on Jewett and Williston's work on click ABRs). [2]

Stimulus parameters

The recording procedures for the scalp-recorded FFR are essentially the same as for the ABR. A montage of three electrodes is typically used: an active electrode located at the top of the head or top of the forehead, a reference electrode located on an earlobe, mastoid, or high vertebra, and a ground electrode located either on the other earlobe or in the middle of the forehead. [10] [11] The FFR can be evoked by sinusoids, complex tones, steady-state vowels, tonal sweeps, or consonant-vowel syllables. The duration of these stimuli is generally between 15 and 150 milliseconds, with a rise time of 5 milliseconds.
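A stimulus meeting the parameters above can be sketched as a gated tone burst. This is a minimal, hedged example: the 500 Hz carrier and 50 ms duration are illustrative choices within the stated ranges, and a raised-cosine ramp is assumed as one common way to realize the 5 ms rise time.

```python
import numpy as np

# Sketch of an FFR-style stimulus: a 500 Hz tone burst of 50 ms with a
# 5 ms raised-cosine onset/offset ramp. Carrier frequency and duration
# are illustrative values within the ranges given in the text.
fs = 48000            # sampling rate (Hz)
f0 = 500.0            # carrier frequency (Hz), illustrative
dur = 0.050           # 50 ms, within the 15-150 ms range
rise = 0.005          # 5 ms rise time

t = np.arange(int(fs * dur)) / fs
tone = np.sin(2 * np.pi * f0 * t)

n_ramp = int(fs * rise)
ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
env = np.ones(t.size)
env[:n_ramp] = ramp              # gradual onset over 5 ms
env[-n_ramp:] = ramp[::-1]       # matching gradual offset
stimulus = tone * env            # gated tone burst
```

The gradual onset avoids the spectral splatter of an abrupt gate, which would otherwise excite a broad range of frequencies beyond the intended carrier.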

The polarity of successive stimuli can be either fixed or alternating. There are several reasons for, and effects of, alternating polarity. When the stimulus delivery equipment is not properly shielded, the electromagnetic acoustic transducer may induce the stimulus directly into the electrodes. This is known as stimulus artifact, and researchers and clinicians seek to avoid it because it contaminates the true recorded response of the nervous system. If stimulus polarities alternate, and responses are averaged over both polarities, the stimulus artifact is effectively eliminated: the artifact changes polarity with the physical stimulus and thus averages to nearly zero. Direct physiological responses that follow stimulus polarity, such as the CM, also cancel in this average. Subtracting the responses to the two polarities instead yields the portions of the signal that are canceled out in the average. Such decomposition of the responses is not readily possible if the stimuli have constant polarity. [12] [13]
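The add/subtract logic above can be sketched with synthetic signals. In this toy model the neural response is treated as entirely polarity-invariant and the CM/artifact as entirely polarity-following, which is a simplification for illustration; the component waveforms are made up, not real recordings.

```python
import numpy as np

# Toy model of alternating-polarity averaging. Each recorded epoch is
# modeled as: a polarity-invariant neural component (stands in for the
# FFR, simplified) plus a component that flips with stimulus polarity
# (stands in for the CM and stimulus artifact).
fs = 16000
t = np.arange(int(fs * 0.05)) / fs
ffr = np.sin(2 * np.pi * 200 * t)              # polarity-invariant part
artifact = 0.5 * np.sin(2 * np.pi * 500 * t)   # polarity-following part

resp_pos = ffr + artifact     # response to one stimulus polarity
resp_neg = ffr - artifact     # response to the inverted polarity

added = (resp_pos + resp_neg) / 2       # artifact/CM cancel; FFR remains
subtracted = (resp_pos - resp_neg) / 2  # FFR cancels; artifact/CM remain
```

With constant-polarity stimuli only `resp_pos` would exist, so neither cancellation is available, which is the decomposition limitation the text describes.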

Clinical applicability

Because it lacks specificity at low stimulus levels, the FFR has yet to make its way into clinical settings. Only recently has the FFR been evaluated for the encoding of complex sounds and binaural processing. [14] [15] [16] The information the FFR provides about steady-state, time-variant, and speech signals may be useful for better understanding hearing loss and its effects, and for studying people with psychopathology. [17] [18] FFR distortion products (FFR DPs) could supplement low-frequency (< 1000 Hz) DPOAEs. [1] FFRs could also be used to evaluate the neural representation of speech sounds processed by the different strategies employed in cochlear implants, primarily for the identification and discrimination of speech. Finally, phase-locked neural activity reflected in the FFR has been successfully used to predict auditory thresholds. [14]
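The distortion product referred to above is the cubic difference tone at 2F1 − F2, where F1 and F2 are the two primary frequencies. As a worked example (the primary frequencies below are assumed values chosen only to keep the product under 1000 Hz):

```python
# Cubic distortion product at 2*F1 - F2. The primaries here
# (F1 = 500 Hz, F2 = 600 Hz) are illustrative, not from the source.
f1, f2 = 500.0, 600.0
dp = 2 * f1 - f2
print(dp)   # 400.0 (Hz), below the 1000 Hz region mentioned in the text
```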

Research directions

There is currently renewed interest in using the FFR to evaluate the role of neural phase-locking in the encoding of complex sounds in normal-hearing and hearing-impaired subjects, the encoding of voice pitch, binaural hearing, and the characteristics of the neural counterpart of cochlear nonlinearity. [1] Furthermore, it has been demonstrated that the temporal pattern of phase-locked brainstem neural activity generating the FFR may contain information relevant to the binaural processes underlying spatial release from masking (SRM) in challenging listening environments. [19]


References

  1. Burkard, R., Don, M., & Eggermont, J. J. Auditory evoked potentials: Basic principles and clinical application. Philadelphia: Lippincott Williams & Wilkins.
  2. Worden, F.G.; Marsh, J.T. (July 1968). "Frequency-following (microphonic-like) neural responses evoked by sound". Electroencephalography and Clinical Neurophysiology. 25 (1): 42–52. doi:10.1016/0013-4694(68)90085-0. PMID   4174782.
  3. Russo, N.; Nicol, T.; Musacchia, G.; Kraus, N. (September 2004). "Brainstem responses to speech syllables". Clinical Neurophysiology. 115 (9): 2021–2030. doi:10.1016/j.clinph.2004.04.003. PMC   2529166 . PMID   15294204.
  4. Moushegian, G.; Rupert, A. L. (1970). "Response diversity of neurons in ventral cochlear nucleus of kangaroo rat to low-frequency tones". Journal of Neurophysiology. 33 (3): 351–364. doi:10.1152/jn.1970.33.3.351. PMID   5439342.
  5. Pandya, PK; Krishnan, A (March 2004). "Human frequency-following response correlates of the distortion product at 2F1-F2" (PDF). Journal of the American Academy of Audiology. 15 (3): 184–97. doi:10.3766/jaaa.15.3.2. PMID   15119460.
  6. Wever, E. G.; Bray, C. W. (1930a). Proc. Natl. Acad. Sci. 16: 344.
  7. Wever, E. G.; Bray, C. W. (1930b). J. Exp. Psychol. 13: 373.
  8. Hallpike, C. S.; Rawdon-Smith, A. F. (9 June 1934). "The 'Wever and Bray phenomenon.' A study of the electrical response in the cochlea with especial reference to its origin". The Journal of Physiology. 81 (3): 395–408. doi:10.1113/jphysiol.1934.sp003143. PMC   1394151 . PMID   16994551.
  9. Moore EJ (1983). Bases of auditory brain-stem evoked responses. Grune & Stratton, Inc.
  10. Skoe, E; Kraus, N (June 2010). "Auditory brain stem response to complex sounds: a tutorial" (PDF). Ear and Hearing. 31 (3): 302–24. doi:10.1097/aud.0b013e3181cdb272. PMC   2868335 . PMID   20084007.
  11. Gockel, Hedwig E.; Carlyon, Robert P.; Mehta, Anahita; Plack, Christopher J. (9 August 2011). "The Frequency Following Response (FFR) May Reflect Pitch-Bearing Information But is Not a Direct Representation of Pitch". Journal of the Association for Research in Otolaryngology. 12 (6): 767–782. doi:10.1007/s10162-011-0284-1. PMC   3214239 . PMID   21826534.
  12. Chertoff, ME; Hecox, KE (March 1990). "Auditory nonlinearities measured with auditory-evoked potentials". The Journal of the Acoustical Society of America. 87 (3): 1248–54. Bibcode:1990ASAJ...87.1248C. doi:10.1121/1.398800. PMID   2324391.
  13. Lerud, KD; Almonte, FV; Kim, JC; Large, EW (February 2014). "Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals". Hearing Research. 308: 41–9. doi:10.1016/j.heares.2013.09.010. PMID   24091182. S2CID   25879339.
  14. Krishnan, A. (2002). Human frequency-following responses: Representation of steady-state synthetic vowels. Hearing Research, 166, 192–201.
  15. Krishnan, A., Parkinson, J. (2000). Human frequency-following response: Representation of tonal sweeps. Audiology and Neurootology, 5, 312-321.
  16. Krishnan, A., Xu, Y., Gandour, J. T., Cariani, P. A. (2004). Human frequency-following response: Representation of pitch contours in Chinese tones. Hearing Research, 189, 1-12.
  17. Clayson, Peter E.; Molina, Juan L.; Joshi, Yash B.; Thomas, Michael L.; Sprock, Joyce; Nungaray, John; Swerdlow, Neal R.; Light, Gregory A. (2021-11-01). "Evaluation of the frequency following response as a predictive biomarker of response to cognitive training in schizophrenia". Psychiatry Research. 305: 114239. doi:10.1016/j.psychres.2021.114239. ISSN   0165-1781.
  18. Clayson, Peter E.; Joshi, Yash B.; Thomas, Michael L.; Tarasenko, Melissa; Bismark, Andrew; Sprock, Joyce; Nungaray, John; Cardoso, Lauren; Wynn, Jonathan K.; Swerdlow, Neal R.; Light, Gregory A. (2022-05-01). "The viability of the frequency following response characteristics for use as biomarkers of cognitive therapeutics in schizophrenia". Schizophrenia Research. 243: 372–382. doi:10.1016/j.schres.2021.06.022. ISSN   0920-9964.
  19. Rouhbakhsh, N. (2016). Investigating the effect of spatial separation on the detection of sounds in competition, by examining electrophysiological responses from the brainstem and auditory cortex.