Dichotic pitch (or the dichotic pitch phenomenon) is a pitch heard due to binaural processing, when the brain combines two noises presented simultaneously to the ears. [1] In other words, it cannot be heard when the sound stimulus is presented monaurally (to one ear), but when it is presented binaurally (simultaneously, to both ears) a sensation of pitch can be heard. [2] The binaural stimulus is presented to both ears through headphones simultaneously and is identical except for a narrow frequency band that is manipulated. [3] The most common variation is the Huggins pitch, in which the white noise presented to the two ears differs only in the interaural phase relation over a narrow range of frequencies. [3] For humans, this phenomenon is restricted to fundamental frequencies lower than 330 Hz and extremely low sound pressure levels. [4] Researchers have investigated the effects of the dichotic pitch on the brain; [4] for instance, studies have suggested that it evokes activation at the lateral end of Heschl's gyrus. [5]
When continuous white noise (with a frequency content below about 2000 Hz) is presented binaurally by headphones to the left and right ear of a listener, and given a particular interaural phase relationship between the left and right ear signals, a sensation of pitch may be observed. [6] Thus, stimulation of either ear alone gives rise to the sensation of white noise only, but stimulation of both ears together produces pitch. As a special case of dichotic listening, such a pitch is therefore called a dichotic pitch or binaural pitch. Generally, a dichotic pitch is perceived somewhere in the head amidst the noisy sound filling the binaural space. More specifically, the dichotic pitch is characterized by three perceptual properties: pitch value, timbre, and in-head position (lateralization). Experiments on the dichotic pitch were motivated by the study of pitch in general, and of the binaural system in particular, which is relevant for sound localization and the separation of competing sound sources (see cocktail party effect). Various configurations of the dichotic pitch have been studied and several auditory models have been developed. However, no single model accounts for all aspects of the dichotic pitch, from how it is formed to how it is lateralized. [7] The great challenge for psychophysical and physiological acoustics is to predict both the pitch value and the pitch-image position in one model.
The Huggins pitch (HP) and the binaural edge pitch (BEP) elicit a pure-tone-like sound at a single frequency and are generated by creating an interaural phase shift over a narrow frequency band. [8] Within that band, the sound wave of the white-noise stimulus reaches the two ears at different points in its cycle; in other words, the noise is decorrelated at those frequencies. [9]
For HP to occur, white noise that is identical at the two ears at all frequencies except for a narrow frequency band must be presented simultaneously to the ears. [2] Within this narrow band, an all-pass filter is used to create an interaural phase shift that progresses from 0 to 2π radians (sometimes referred to as a 360-degree phase shift). [2] [10]
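This manipulation can be illustrated with a short signal-processing sketch. The Python example below is a minimal, illustrative implementation under assumed parameter values (a 600 Hz boundary frequency, a roughly 100 Hz transition band, and a 44.1 kHz sample rate), none of which are taken from the sources cited above; it rotates the phase of one channel from 0 to 2π across the narrow band while leaving the magnitude spectrum unchanged, as an all-pass filter would.

```python
# Minimal sketch of a Huggins-pitch-like stimulus. The boundary frequency,
# transition bandwidth and sample rate are illustrative assumptions.
import numpy as np

fs = 44100          # sample rate in Hz (assumed)
dur = 1.0           # duration of the noise in seconds
f0 = 600.0          # boundary (pitch) frequency in Hz (assumed)
bw = 96.0           # width of the phase-transition band in Hz (assumed)

noise = np.random.randn(int(fs * dur))          # broadband white noise
spec = np.fft.rfft(noise)                       # frequency-domain copy
freqs = np.fft.rfftfreq(len(noise), 1.0 / fs)

# Interaural phase shift: 0 below the band, 2*pi above it, and a linear
# progression inside the band. Only the phase is changed, so the magnitude
# spectrum of each ear signal stays identical (an all-pass manipulation).
phase = np.clip((freqs - (f0 - bw / 2)) / bw, 0.0, 1.0) * 2 * np.pi
right = np.fft.irfft(spec * np.exp(1j * phase), n=len(noise))

left = noise
stereo = np.stack([left, right], axis=1)        # present over headphones
```

Either channel on its own is plain white noise; presented together over headphones, the pair is the kind of stimulus described above, with the pitch expected near the boundary frequency.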
BEP is created by introducing an interaural phase shift from 0 to π radians (a 180-degree phase shift). It is best heard within the frequency range of 350–800 Hz. [10]
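A BEP-style stimulus can be sketched in the same way, except that one channel's phase is stepped by π on one side of an edge frequency rather than rotated through 2π within a band. The 500 Hz edge below is an illustrative assumption chosen from within the 350–800 Hz range mentioned above.

```python
# Minimal sketch of a binaural-edge-pitch-like stimulus. The 500 Hz edge
# frequency is an illustrative assumption.
import numpy as np

fs, dur, f_edge = 44100, 1.0, 500.0
noise = np.random.randn(int(fs * dur))
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1.0 / fs)

phase = np.where(freqs >= f_edge, np.pi, 0.0)   # 0 -> pi step at the edge
right = np.fft.irfft(spec * np.exp(1j * phase), n=len(noise))
stereo = np.stack([noise, right], axis=1)       # present over headphones
```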
Both the Fourcin pitch (FP) and the dichotic repetition pitch (DRP) are complex tones. They are generated by creating large interaural delays in the binaural stimulus but differ in where these large interaural delays are applied. [8]
The FP is similar to the pure-tone-like pitches in that an interaural phase shift is needed; however, it also presents different stimuli, differing in their interaural delays, to each ear at the same time. [8]
The DRP presents the same stimulus to both ears simultaneously, with a single large interaural delay. [8]
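As a rough illustration, a DRP-style stimulus can be built by sending the same noise to both ears with one copy delayed by a single large interaural delay. The 10 ms delay below is an arbitrary illustrative choice, and the comment relating the perceived pitch to the reciprocal of the delay is an assumption of this sketch rather than a figure from the cited sources.

```python
# Minimal sketch of a dichotic-repetition-pitch-like stimulus: identical
# noise to both ears, with one ear delayed by a single large interaural
# delay (10 ms here, an illustrative assumption). The perceived pitch is
# often described as lying near 1/delay, i.e. around 100 Hz in this case.
import numpy as np

fs, dur = 44100, 1.0
delay_s = 0.010                                  # interaural delay (assumed)
delay_n = int(round(delay_s * fs))

noise = np.random.randn(int(fs * dur))
left = noise
right = np.concatenate([np.zeros(delay_n), noise[:-delay_n]])  # delayed copy
stereo = np.stack([left, right], axis=1)         # present over headphones
```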
The equalization-cancellation (E-C) model explains how the dichotic pitch is created, specifically the binaural edge pitch and the Huggins pitch, [10] and is related to binaural unmasking. [11]
The dichotic pitch stimulus is processed in two steps: equalization followed by cancellation. [11] Equalization is the process in which the binaural system compensates for the interaural differences in time delay, level and phase; [12] [8] that is, the differences in when the stimulus reaches each ear, in how loud it is at each ear, and in the phase of the wave when it arrives at each ear, respectively. This allows the binaural system to subtract out whatever was perfectly correlated in the broadband noise. What remains is the interaural phase shift created at the narrow frequency band, [10] the only part of the broadband noise that was decorrelated. [12] The result of the E-C process is what is heard as the HP and BEP. [8]
More specifically, the BEP arises when the E-C process creates a central spectrum with a sharp edge, producing a high-pass or low-pass sound at whose edge the BEP is heard. [8] [10]
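The cancellation step can be demonstrated on the Huggins-type stimulus sketched earlier. In the toy example below there is no overall interaural time or level difference to equalize, so cancellation reduces to subtracting one ear signal from the other; the residual energy is confined to the decorrelated band and peaks near the assumed 600 Hz boundary.

```python
# Toy illustration of the cancellation step of the E-C model, applied to a
# Huggins-type stimulus with an assumed 600 Hz boundary. The correlated
# broadband noise cancels; only the decorrelated narrow band survives.
import numpy as np

fs, dur, f0, bw = 44100, 1.0, 600.0, 96.0
noise = np.random.randn(int(fs * dur))
spec = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(len(noise), 1.0 / fs)
phase = np.clip((freqs - (f0 - bw / 2)) / bw, 0.0, 1.0) * 2 * np.pi

left = noise
right = np.fft.irfft(spec * np.exp(1j * phase), n=len(noise))

# Cancellation: with no interaural time or level difference in this toy
# stimulus, equalization is trivial and cancellation is a subtraction.
residual = left - right
residual_spectrum = np.abs(np.fft.rfft(residual))
peak_hz = freqs[np.argmax(residual_spectrum)]
print(f"residual energy peaks near {peak_hz:.0f} Hz")   # close to 600 Hz
```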
It has been found that characteristics of the white noise stimulus, including its centre frequency and interaural time delay, influence where the Huggins pitch is lateralized.
The half-period rule theorizes that the lateralization of the Huggins pitch depends on the difference in the time it takes for the noise to reach each ear, otherwise known as the interaural time delay. However, this model does not accurately account for the lateralization of the dichotic pitch under all circumstances. [7]
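As a worked example, if the rule is read as mapping the interaural phase transition to an equivalent interaural time delay of half the period at the boundary frequency (an interpretation assumed here purely for illustration), a 600 Hz Huggins pitch would correspond to a delay of roughly 0.8 ms.

```python
# Worked example of the half-period rule under the assumption stated above.
f0 = 600.0                                  # boundary frequency in Hz (assumed)
half_period_ms = 1000.0 / (2 * f0)          # half of one period, in ms
print(f"predicted equivalent ITD: {half_period_ms:.2f} ms")   # about 0.83 ms
```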
Dichotic pitch has been used to study pitch processing in relation to Heschl's gyrus in the brain. Using various pitch-evoking stimuli, including the Huggins pitch, fMRI scans in Hall and Plack's study showed that multiple areas were activated by the pitch-evoking (binaural) stimuli, including Heschl's gyrus and, primarily, the planum temporale, which lies posterior to Heschl's gyrus. [9]
The activation of Heschl's gyrus and the planum temporale was replicated by another study that used two dichotic pitches (the Huggins pitch and the binaural band pitch) together with pure tones that sound the same as the dichotic pitches but have different physical characteristics, in order to test whether the activation depended on the characteristics of the stimulus. The fMRI scans showed that each dichotic pitch and its corresponding pure tone activated the same areas: the lateral end of Heschl's gyrus and the lateral border of the planum temporale. This suggests that Heschl's gyrus activation may depend not on the characteristics of the stimulus but on the pitch itself. The Huggins pitch was also found to activate the region bilaterally. [13] The planum temporale was additionally found to be more responsive to changes in pitch, such as those found in melodies. [13]
There have been many findings on the subject of dichotic pitch, showing that individuals with different disorders experience it in different ways. Individuals with dyslexia seem to experience dichotic pitch in a way similar to how they experience trying to distinguish words and letters. Robert F. Dougherty and colleagues ran an experiment with both dyslexic and non-dyslexic children. The participants were given a melody to listen to, and different tones were then played within the melody. The dyslexic children were able to identify the higher-pitched tones but were unable to distinguish the lower notes from the background melody. It became apparent that the lower notes caused some sort of auditory and sensory problem for the dyslexic children that made it harder for their brains to sort out the information being sent to them. [1]
Santurette and Dau compared the ability of hearing-impaired individuals to hear the dichotic pitch with that of non-hearing-impaired listeners. Most hearing-impaired individuals were able to hear the dichotic pitch, but had more difficulty hearing it than non-hearing-impaired listeners. However, not all hearing-impaired participants, such as those with central auditory processing deficits, were able to hear the dichotic pitch. While this is only preliminary research, the researchers suggested that because hearing-impaired individuals differ in their ability to perceive the dichotic pitch, it may be a useful tool for diagnosing hearing-impaired individuals. [14]
A study by Bianca Pinheiro Lanzetta-Valdo and colleagues looked at dichotic pitch in children diagnosed with attention deficit hyperactivity disorder (ADHD). [15] At the beginning of the experiment, all of the children were on a baseline dose of methylphenidate, a stimulant used to manage ADHD symptoms. Over a six-month period, the children were given auditory stimulation consisting of white noise, and during this stimulation they were given physical, neurological, visual and auditory examinations, as well as biochemical tests, to see whether any improvement was made. Lanzetta-Valdo and collaborators did find improvements in the participants over the six months across the different evaluations, although results on this topic remain controversial.
Frequency shift detectors (FSDs) are hypothesized to play a role in linking sounds together so that one can perceive words and melodies. They detect when the pitch in noise increases and decreases. [12]
Carcagno and colleagues studied whether FSDs could detect frequency changes both in dichotic pitches (binaural stimuli) and in a monaural stimulus. They used an up/down task that asked participants to discriminate the direction of the frequency change. Participants performed the up/down task equally well with the dichotic pitch and with the monaural stimulus. The similar results obtained in the two conditions led to the conclusion that FSDs are equally sensitive to changes in frequency in monaural and binaural stimuli. This also led to the conclusion that FSDs are located somewhere after binaural convergence, the point where the auditory processing system combines the noise stimuli that have arrived at the two ears. [12]
Dougherty, R. F., Cynader, M. S., Bjornson, B. H., Edgell, D., & Giaschi, D. E. (1998). Dichotic pitch: A new stimulus distinguishes normal and dyslexic auditory function. NeuroReport, 9(13). Retrieved from https://www.researchgate.net/profile/Robert_Dougherty/publication/13482828_Dichotic_pitch_A_new_stimulus_distinguishes_normal_and_dyslexic_auditory_function/links/00b4952dafbd0e7c3d000000/Dichotic-pitch-A-new-stimulus-distinguishes-normal-and-dyslexic-auditory-function.pdf
Lanzetta-Valdo, B. P., de Oliveira, G. A., Ferreira, J. C., & Palacios, E. N. (2017). Auditory processing assessment in children with attention deficit hyperactivity disorder: An open study examining methylphenidate effects. International Archives of Otorhinolaryngology, 21(1), 72–78. doi:10.1055/s-0036-1572526