Temporal theory (hearing)

The temporal theory of hearing, also called frequency theory or timing theory, states that human perception of sound depends on temporal patterns with which neurons respond to sound in the cochlea. Therefore, in this theory, the pitch of a pure tone is determined by the period of neuron firing patterns—either of single neurons, or groups as described by the volley theory. Temporal theory competes with the place theory of hearing, which instead states that pitch is signaled according to the locations of vibrations along the basilar membrane.

Temporal theory was first suggested by August Seebeck.

Description

As the basilar membrane vibrates, each clump of hair cells along its length is deflected in time with the sound components as filtered by basilar membrane tuning for its position. The more intense this vibration is, the more the hair cells are deflected and the more likely they are to cause cochlear nerve firings. Temporal theory supposes that the consistent timing patterns, whether at high or low average firing rate, code for a consistent pitch percept.
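
As a rough illustration of this timing code (a minimal sketch, not taken from the cited sources; the half-wave-rectified firing model, the 250 Hz tone, and all rates are assumptions made for the example), one can simulate a fibre whose firing probability follows the deflection of the basilar membrane and check that the spike times stay locked to the stimulus cycle:

```python
import numpy as np

rng = np.random.default_rng(42)

freq = 250.0                     # pure-tone frequency in Hz (illustrative choice)
fs = 100_000                     # simulation sampling rate in samples per second
t = np.arange(0.0, 1.0, 1.0 / fs)

# Crude stand-in for hair-cell drive: firing is more likely while the basilar
# membrane is deflected in the excitatory direction (half-wave rectified sine).
drive = np.clip(np.sin(2 * np.pi * freq * t), 0.0, None)
spike_prob = 0.005 * drive       # gives roughly 160 spikes per second on average

spikes = rng.random(t.size) < spike_prob
spike_times = t[spikes]

# Vector strength measures phase locking: 1.0 means every spike occurs at the
# same phase of the stimulus cycle, 0.0 means spike times ignore the cycle.
phases = 2.0 * np.pi * freq * spike_times
vector_strength = np.abs(np.mean(np.exp(1j * phases)))
print(f"{spike_times.size} spikes, vector strength ≈ {vector_strength:.2f}")
```

In this toy model the vector strength comes out around 0.8, meaning the spikes remain tightly locked to the stimulus cycle even though the average firing rate is lower than the tone frequency.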

High amplitudes

At high sound levels, nerve fibers whose characteristic frequencies do not exactly match the stimulus still respond, because loud sounds set larger regions of the basilar membrane in motion. Temporal theory can help explain how frequency discrimination is maintained under these conditions: even when a large group of nerve fibers is firing, there is a periodicity to the firing that corresponds to the periodicity of the stimulus.
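
This persistence of periodicity in the pooled response can be illustrated with a toy population model (a sketch under assumed parameters, not a model from the cited literature): many fibres with mismatched sensitivities all fire in time with the same loud tone, and the autocorrelation of their summed spike train still peaks at the stimulus period.

```python
import numpy as np

rng = np.random.default_rng(0)

stim_freq = 200.0                  # loud pure tone in Hz (illustrative choice)
fs = 20_000
t = np.arange(0.0, 0.5, 1.0 / fs)
drive = np.clip(np.sin(2 * np.pi * stim_freq * t), 0.0, None)

# A loud tone recruits many fibres, including ones whose characteristic
# frequencies do not match the stimulus; here each fibre fires sparsely,
# with its own sensitivity, but always in time with the same waveform.
n_fibres = 50
pooled = np.zeros(t.size)
for _ in range(n_fibres):
    gain = rng.uniform(0.002, 0.01)            # weaker responses from off-frequency fibres
    pooled += rng.random(t.size) < gain * drive

# Autocorrelation of the pooled spike train peaks at the stimulus period,
# so the population firing as a whole still carries the periodicity.
x = pooled - pooled.mean()
lags = np.arange(int(0.002 * fs), int(0.008 * fs))   # search 2-8 ms (illustrative window)
ac = np.array([np.dot(x[:-lag], x[lag:]) for lag in lags])
best_lag = lags[np.argmax(ac)]
print(f"dominant period ≈ {1000 * best_lag / fs:.2f} ms "
      f"(stimulus period = {1000 / stim_freq:.2f} ms)")
```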

High frequencies

Neurons tend to have a maximum firing frequency that lies within the range of frequencies we can hear. To be complete, temporal theory must somehow explain how we distinguish pitches above this maximum firing rate. The volley theory, in which groups of neurons cooperate to code the temporal pattern, is an attempt to make the temporal theory more complete, but some frequencies are too high for any synchrony to be observed in the cochlear nerve firings.
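
A sketch of the volley idea (illustrative parameters only; the 3 ms refractory period, firing probability, and fibre count are assumptions): no single simulated fibre can fire anywhere near 2500 times per second, yet the pooled train of a small group marks almost every stimulus cycle at 0.4 ms intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

freq = 2500.0                 # stimulus frequency in Hz, above any single fibre's firing rate
period = 1.0 / freq           # 0.4 ms
refractory = 0.003            # assume a fibre cannot fire again within about 3 ms
n_cycles = 5000
cycle_times = np.arange(n_cycles) * period

# Each fibre phase-locks to the stimulus but, because of its refractory period,
# responds to only a small fraction of the cycles; different fibres respond to
# different cycles, so the pooled "volley" still marks almost every cycle.
n_fibres = 20
all_spikes = []
for _ in range(n_fibres):
    last = -np.inf
    for ct in cycle_times:
        if ct - last >= refractory and rng.random() < 0.3:
            all_spikes.append(ct)
            last = ct

all_spikes = np.sort(np.array(all_spikes))
per_fibre_rate = (all_spikes.size / n_fibres) / (n_cycles * period)
pooled_cycles = np.unique(all_spikes)

print(f"single-fibre rate ≈ {per_fibre_rate:.0f} spikes/s (stimulus is {freq:.0f} Hz)")
print(f"fraction of cycles marked by the volley: {pooled_cycles.size / n_cycles:.2f}")
print(f"typical interval in the pooled train ≈ "
      f"{np.median(np.diff(pooled_cycles)) * 1e3:.2f} ms (period = {period * 1e3:.2f} ms)")
```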

The random firing solution

Beament [1] outlined a potential solution. He noted that in two classic studies, [2] [3] individual auditory nerve fibers did not always fire at the first moment they were able to. Although they fired in time with the vibrations, the neurons did not fire on every vibration; the number of skipped vibrations was seemingly random. The gaps in the resulting train of neural impulses are then all integer multiples of the period of vibration. For example, a pure tone of 100 Hz has a period of 10 ms, so the corresponding train of impulses would contain gaps of 10 ms, 20 ms, 30 ms, 40 ms, and so on. Such a set of gaps can only be generated by a 100 Hz tone. The set of gaps for a sound above the maximum neural firing rate would be similar, except that it would be missing some of the shortest gaps; it would nevertheless still correspond uniquely to the frequency. The pitch of a pure tone could then be seen as corresponding to the difference between adjacent gaps.
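
The worked example above can be reproduced numerically (a minimal sketch; the uniform cycle-skipping distribution is an assumption made for the illustration, not Beament's model): a fibre that fires only at one moment per cycle but skips a random number of cycles produces gaps that are all integer multiples of 10 ms, and the difference between adjacent distinct gap lengths recovers the 100 Hz pitch.

```python
import numpy as np

rng = np.random.default_rng(7)

freq = 100.0                # the 100 Hz pure tone from the example above
period = 1.0 / freq         # 10 ms

# Phase-locked firing with random cycle skipping: the fibre fires only at one
# moment per cycle, but skips a random number of cycles between spikes.
spike_times = np.cumsum(period * rng.integers(1, 6, size=500))

gaps = np.diff(spike_times)                     # each gap is 10, 20, 30, 40 or 50 ms
distinct = np.unique(np.round(gaps * 1000))     # distinct gap lengths in ms
print("distinct gaps (ms):", distinct)

# Even if the shortest gaps were missing (a tone above the maximum firing
# rate), the difference between adjacent gap lengths still gives the period.
estimated_period_ms = np.min(np.diff(distinct)) if distinct.size > 1 else distinct[0]
print(f"estimated pitch ≈ {1000 / estimated_period_ms:.0f} Hz")
```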

Another solution

Research suggests that the perception of pitch depends on both the places and the temporal patterns of neuron firing. Place theory may be dominant for higher frequencies. [4] However, it has also been suggested that place theory may be dominant for low-frequency, resolved harmonics, and that temporal theory may be dominant for high-frequency, unresolved harmonics. [5]

Experiments to distinguish rate and place effects on pitch perception

Experiments to distinguish between place theory and rate theory using subjects with normal hearing are difficult to devise, because of the strong correlation between rate and place: large vibrations at a low rate are produced at the apical end of the basilar membrane, while large vibrations at a high rate are produced at the basal end. The two stimulus parameters can, however, be controlled independently using cochlear implants: pulses with a range of rates can be applied via different pairs of electrodes distributed along the cochlea, and subjects can be asked to rate a stimulus on a pitch scale.

Experiments using implant recipients (who had previously had normal hearing) showed that, at stimulation rates below about 500 Hz, ratings on a pitch scale were proportional to the log of stimulation rate, but also decreased with distance from the round window. At higher rates, the effect of rate became weaker, but the effect of place was still strong. [6]
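
The logic of such an experiment can be sketched with synthetic numbers (the data below are invented purely for illustration and are not the measurements of reference [6]): if ratings are generated as a weighted sum of log pulse rate and electrode distance, independently varied as a cochlear implant allows, a simple least-squares fit separates the rate and place contributions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ratings, invented for illustration only (not the data of [6]):
# assume rating = a * log2(pulse rate) - b * distance from the round window + noise.
rates = np.repeat([100.0, 200.0, 300.0, 400.0, 500.0], 4)   # pulses per second
distance = np.tile([5.0, 10.0, 15.0, 20.0], 5)              # electrode position in mm
a_true, b_true = 10.0, 2.0
ratings = a_true * np.log2(rates) - b_true * distance + rng.normal(0.0, 1.0, rates.size)

# Because a cochlear implant lets rate and place be varied independently,
# an ordinary least-squares fit can separate the two contributions.
X = np.column_stack([np.log2(rates), distance, np.ones_like(distance)])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print(f"fitted rate coefficient ≈ {coef[0]:.1f}, fitted place coefficient ≈ {coef[1]:.1f}")
```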

Related Research Articles

Inner ear

The inner ear is the innermost part of the vertebrate ear. In vertebrates, the inner ear is mainly responsible for sound detection and balance. In mammals, it consists of the bony labyrinth, a hollow cavity in the temporal bone of the skull with a system of passages comprising two main functional parts: the cochlea, dedicated to hearing, and the vestibular system, dedicated to balance.

Pitch (music)

Pitch is a perceptual property that allows sounds to be ordered on a frequency-related scale, or more commonly, pitch is the quality that makes it possible to judge sounds as "higher" and "lower" in the sense associated with musical melodies. Pitch is a major auditory attribute of musical tones, along with duration, loudness, and timbre.

Cochlea

The cochlea is the part of the inner ear involved in hearing. It is a spiral-shaped cavity in the bony labyrinth, in humans making 2.75 turns around its axis, the modiolus. A core component of the cochlea is the organ of Corti, the sensory organ of hearing, which is distributed along the partition separating the fluid chambers in the coiled tapered tube of the cochlea.

Vestibulocochlear nerve

The vestibulocochlear nerve or auditory vestibular nerve, also known as the eighth cranial nerve, cranial nerve VIII, or simply CN VIII, is a cranial nerve that transmits sound and equilibrium (balance) information from the inner ear to the brain. Through olivocochlear fibers, it also transmits motor and modulatory information from the superior olivary complex in the brainstem to the cochlea.

Basilar membrane

The basilar membrane is a stiff structural element within the cochlea of the inner ear which separates two liquid-filled tubes that run along the coil of the cochlea, the scala media and the scala tympani. The basilar membrane moves up and down in response to incoming sound waves, which are converted to traveling waves on the basilar membrane.

Place theory is a theory of hearing that states that our perception of sound depends on where each component frequency produces vibrations along the basilar membrane. By this theory, the pitch of a sound, such as a human voice or a musical tone, is determined by the places where the membrane vibrates, based on frequencies corresponding to the tonotopic organization of the primary auditory neurons.

Stimulus modality, also called sensory modality, is one aspect of a stimulus or what is perceived after a stimulus. For example, the temperature modality is registered after heat or cold stimulate a receptor. Some sensory modalities include: light, sound, temperature, taste, pressure, and smell. The type and location of the sensory receptor activated by the stimulus plays the primary role in coding the sensation. All sensory modalities work together to heighten stimuli sensation when necessary.

Auditory system

The auditory system is the sensory system for the sense of hearing. It includes both the sensory organs and the auditory parts of the sensory system.

Acoustic reflex

The acoustic reflex is an involuntary muscle contraction that occurs in the middle ear in response to loud sound stimuli or when the person starts to vocalize.

Sensorineural hearing loss

Sensorineural hearing loss (SNHL) is a type of hearing loss in which the root cause lies in the inner ear, sensory organ, or the vestibulocochlear nerve. SNHL accounts for about 90% of reported hearing loss. SNHL is usually permanent and can be mild, moderate, severe, profound, or total. Various other descriptors can be used depending on the shape of the audiogram, such as high frequency, low frequency, U-shaped, notched, peaked, or flat.

In physiology, tonotopy is the spatial arrangement of where sounds of different frequency are processed in the brain. Tones close to each other in terms of frequency are represented in topologically neighbouring regions in the brain. Tonotopic maps are a particular case of topographic organization, similar to retinotopy in the visual system.

Volley theory

Volley theory states that groups of neurons of the auditory system respond to a sound by firing action potentials slightly out of phase with one another so that when combined, a greater frequency of sound can be encoded and sent to the brain to be analyzed. The theory was proposed by Ernest Wever and Charles Bray in 1930 as a supplement to the frequency theory of hearing. It was later discovered that this only occurs in response to sounds that are about 500 Hz to 5000 Hz.

Cochlear nucleus

The cochlear nuclear (CN) complex comprises two cranial nerve nuclei in the human brainstem, the ventral cochlear nucleus (VCN) and the dorsal cochlear nucleus (DCN). The ventral cochlear nucleus is unlayered, whereas the dorsal cochlear nucleus is layered. Auditory nerve fibers, which travel through the auditory nerve, carry information from the inner ear (the cochlea) on the same side of the head to the nerve root in the ventral cochlear nucleus. At the nerve root the fibers branch to innervate the ventral cochlear nucleus and the deep layer of the dorsal cochlear nucleus. All acoustic information thus enters the brain through the cochlear nuclei, where the processing of acoustic information begins. The outputs from the cochlear nuclei are received in higher regions of the auditory brainstem.

The auditory brainstem response (ABR), also called brainstem evoked response audiometry (BERA), brainstem auditory evoked potentials (BAEPs), or brainstem auditory evoked responses (BAERs), is an auditory evoked potential extracted from ongoing electrical activity in the brain and recorded via electrodes placed on the scalp. The measured recording is a series of six to seven vertex positive waves, of which I through V are evaluated. These waves, labeled with Roman numerals in the Jewett and Williston convention, occur in the first 10 milliseconds after the onset of an auditory stimulus. The ABR is considered an exogenous response because it is dependent upon external factors.

Computational auditory scene analysis (CASA) is the study of auditory scene analysis by computational means. In essence, CASA systems are "machine listening" systems that aim to separate mixtures of sound sources in the same way that human listeners do. CASA differs from the field of blind signal separation in that it is based on the mechanisms of the human auditory system, and thus uses no more than two microphone recordings of an acoustic environment. It is related to the cocktail party problem.

In audio signal processing, auditory masking occurs when the perception of one sound is affected by the presence of another sound.

Hearing

Hearing, or auditory perception, is the ability to perceive sounds through an organ, such as an ear, by detecting vibrations as periodic changes in the pressure of a surrounding medium. The academic field concerned with hearing is auditory science.

Electrocochleography is a technique of recording electrical potentials generated in the inner ear and auditory nerve in response to sound stimulation, using an electrode placed in the ear canal or tympanic membrane. The test is performed by an otologist or audiologist with specialized training, and is used for detection of elevated inner ear pressure or for the testing and monitoring of inner ear and auditory nerve function during surgery.

Infrasound is sound at frequencies lower than the low frequency end of human hearing threshold at 20 Hz. It is known, however, that humans can perceive sounds below this frequency at very high pressure levels. Infrasound can come from many natural as well as man-made sources, including weather patterns, topographic features, ocean wave activity, thunderstorms, geomagnetic storms, earthquakes, jet streams, mountain ranges, and rocket launchings. Infrasounds are also present in the vocalizations of some animals. Low frequency sounds can travel for long distances with very little attenuation and can be detected hundreds of miles away from their sources.

Temporal envelope (ENV) and temporal fine structure (TFS) are changes in the amplitude and frequency of sound perceived by humans over time. These temporal changes are responsible for several aspects of auditory perception, including loudness, pitch and timbre perception and spatial hearing.

References

  1. James Beament (2001). How We Hear Music. The Boydell Press. ISBN 0-85115-813-7.
  2. Nelson Y. S. Kiang (1969). Discharge Patterns of Single Auditory Fibers. MIT Research Monograph 35.
  3. J. J. Rose; J. Hind; D. Anderson & J. Brugge (1967). "Response of Auditory Fibers in the Squirrel Monkey". J. Neurophysiol. 30 (4): 769–793. doi:10.1152/jn.1967.30.4.769. PMID 4962851.
  4. Alain de Cheveigné (2005). "Pitch Perception Models". In Christopher J. Plack; Andrew J. Oxenham; Richard R. Fay; Arthur N. Popper (eds.). Pitch. Birkhäuser. ISBN 0-387-23472-1.
  5. Trevor M. Shackleton; Robert Carlyon (1994). "The role of resolved and unresolved harmonics in pitch perception and frequency modulation discrimination". The Journal of the Acoustical Society of America. 95 (6): 3529. doi:10.1121/1.409970.
  6. R. Fearn; P. Carter; J. Wolfe (1999). "The perception of pitch by users of cochlear implants: possible significance for rate and place theories of pitch". Acoustics Australia. 27 (2): 41–43.