Dichotic listening

Synonyms: Dichotic listening test
Purpose: investigate auditory laterality and selective attention

Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.


In a standard dichotic listening test, a participant is presented with two different auditory stimuli simultaneously (usually speech), directed into different ears over headphones. [1] In one type of test, participants are asked to pay attention to one or both of the stimuli; later, they are asked about the content of either the stimulus they were instructed to attend to or the stimulus they were instructed to ignore. [1] [2]

History

Donald Broadbent is credited with being the first scientist to systematically use dichotic listening tests in his work. [3] [4] In the 1950s, Broadbent employed dichotic listening tests in his studies of attention, asking participants to focus attention on either a left- or right-ear sequence of digits. [5] [6] He suggested that due to limited capacity, the human information processing system needs to select which channel of stimuli to attend to, deriving his filter model of attention. [6]

In the early 1960s, Doreen Kimura used dichotic listening tests to draw conclusions about lateral asymmetry of auditory processing in the brain. [7] [8] She demonstrated, for example, that healthy participants have a right-ear superiority for the reception of verbal stimuli and a left-ear superiority for the perception of melodies. [9] From that study, and from other studies of neurological patients with brain lesions, she concluded that the left hemisphere is predominant for speech perception and the right hemisphere for melodic perception. [10] [11]

In the late 1960s and early 1970s, Donald Shankweiler [12] and Michael Studdert-Kennedy [13] of Haskins Laboratories used a dichotic listening technique (presenting different nonsense syllables) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception, finding that phonetic structure devoid of meaning is an integral part of language and is typically processed in the left cerebral hemisphere. [14] [15] [16] A dichotic listening performance advantage for one ear is interpreted as indicating a processing advantage in the contralateral hemisphere. In another example, Sidtis (1981) [17] found that healthy adults have a left-ear advantage in a dichotic pitch-recognition experiment. He interpreted this result as indicating right-hemisphere dominance for pitch discrimination.
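The ear-advantage logic above is commonly quantified with a percent laterality index, (R − L)/(R + L) × 100, where R and L are the numbers of correctly reported right- and left-ear items; positive values indicate a right-ear advantage. The function name and example scores below are illustrative assumptions, not data from any cited study:

```python
def laterality_index(right_correct: int, left_correct: int) -> float:
    """Percent laterality index for a dichotic listening session.

    Positive = right-ear advantage (REA), suggesting left-hemisphere
    dominance; negative = left-ear advantage (LEA).
    """
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports in either ear")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical scores from a 40-item verbal task
print(laterality_index(24, 16))  # → 20.0 (right-ear advantage)
print(laterality_index(14, 21))  # → -20.0 (left-ear advantage)
```

A symmetric, ratio-based index like this is preferred over the raw difference R − L because it controls for overall accuracy differences between participants.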

During the early 1970s, Tim Rand demonstrated dichotic perception at Haskins Laboratories. [18] [19] In his study, the first formant (F1) was presented to one ear while the second and third formants (F2 and F3) were presented to the opposite ear, with F2 and F3 varied between low and high intensity. Compared with the binaural condition, "peripheral masking is avoided when speech is heard dichotically." [19] This demonstration was originally known as "the Rand effect", was later renamed "dichotic release from masking", and eventually came to be called "dichotic perception" or "dichotic listening". Around the same time, Jim Cutting (1976), [20] an investigator at Haskins Laboratories, researched how listeners could correctly identify syllables when different components of the syllable were presented to different ears. The formants of vowel sounds and the relations among them are crucial in differentiating vowel sounds. Even though the listeners heard two separate signals, with neither ear receiving a 'complete' vowel sound, they could still identify the syllables.

Dichotic listening test designs

Dichotic fused words test (DFWT)

The "dichotic fused words test" (DFWT) is a modified version of the basic dichotic listening test. It was originally explored by Johnson et al. (1977), [21] and in the early 1980s Wexler and Halwes (1983) [22] modified it to obtain more accurate data on hemispheric specialization of language function. In the DFWT, each participant listens to pairs of monosyllabic rhyming consonant-vowel-consonant (CVC) words that differ only in the initial consonant. The significant difference in this test is that "the stimuli are constructed and aligned in such a way that partial interaural fusion occurs: subjects generally experience and report only one stimulus per trial." [23] According to Zatorre (1989), major advantages of this method include "minimizing attentional factors, since the percept is unitary and localized to the midline" and that "stimulus dominance effects may be explicitly calculated, and their influence on ear asymmetries assessed and eliminated." [23] Wexler and Halwes obtained high test-retest reliability (r = 0.85), [22] indicating that the measure yields consistent results across repeated administrations.
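Test-retest reliability such as the r = 0.85 reported for the fused rhymed words test is simply the Pearson correlation between participants' scores on two administrations of the same test. A minimal sketch, using invented ear-advantage scores for illustration (none of these numbers come from the cited study):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between paired scores, e.g. ear-advantage
    indices from two administrations of the same dichotic test."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented ear-advantage scores for six participants, sessions 1 and 2
session1 = [12.0, 5.0, -3.0, 20.0, 8.0, 15.0]
session2 = [10.0, 7.0, -1.0, 18.0, 9.0, 14.0]
print(round(pearson_r(session1, session2), 2))  # → 0.99
```

Values near 1.0 mean participants keep roughly the same rank order across sessions, which is what makes a laterality measure usable for individual assessment.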

Testing with emotional factors

An emotional version of the dichotic listening task has also been developed. In this version, individuals hear the same word in each ear, spoken in a surprised, happy, sad, angry, or neutral tone, and press a button to indicate which tone they heard. Dichotic listening tests usually show a right-ear advantage for speech sounds; a right-ear/left-hemisphere advantage is expected given the evidence from Broca's area and Wernicke's area, both located in the left hemisphere. In contrast, the left ear (and therefore the right hemisphere) is often better at processing nonlinguistic material. [24] The data from the emotional dichotic listening task are consistent with these studies, as participants tend to give more correct responses for the left ear than for the right. [25] Notably, the emotional dichotic listening task appears harder for participants than the phonemic task, producing more incorrect responses.

Manipulation of voice onset time (VOT)

The manipulation of voice onset time (VOT) during dichotic listening tests has yielded many insights into brain function. [26] To date, the most common design uses four VOT conditions: short-long pairs (SL), in which a consonant-vowel (CV) syllable with a short VOT is presented to the left ear and a CV syllable with a long VOT is presented to the right ear, as well as long-short (LS), short-short (SS) and long-long (LL) pairs. In 2006, Rimol, Eichele, and Hugdahl [27] first reported that, in healthy adults, SL pairs elicit the largest REA while LS pairs elicit a significant left-ear advantage (LEA). A study of children 5–8 years old showed a developmental trajectory whereby long VOTs gradually come to dominate over short VOTs when LS pairs are presented under dichotic conditions. [28] Converging evidence from studies of attentional modulation of the VOT effect shows that, around age 9, children lack the adult-like cognitive flexibility required to exert top-down control over stimulus-driven bottom-up processes. [29] [30] Arciuli et al. (2010) further demonstrated that this kind of cognitive flexibility predicts proficiency with complex tasks such as reading. [26] [31]

Neuroscience

Dichotic listening tests can also be used as a lateralized speech assessment task. Neuropsychologists have used the test to explore the role of particular neuroanatomical structures in speech perception and language asymmetry. For example, Hugdahl et al. (2003) investigated dichotic listening performance and frontal lobe function [32] in nonaphasic patients with left or right frontal lobe lesions compared to healthy controls. All groups were exposed to 36 dichotic trials with pairs of CV syllables, and each patient was asked to state which syllable he or she heard best. As expected, the right-lesioned patients showed a right ear advantage like the healthy control group, but the left-lesioned patients were impaired relative to both the right-lesioned patients and the controls. From this study, the researchers concluded that "dichotic listening taps into a neuronal circuitry which also involves the frontal lobes, and that this may be a critical aspect of speech perception." [32] Similarly, Westerhausen and Hugdahl (2008) [33] analyzed the role of the corpus callosum in dichotic listening and speech perception. After reviewing many studies, they concluded that "...dichotic listening should be considered a test of functional inter-hemispheric interaction and connectivity, besides being a test of lateralized temporal lobe language function" and that "the corpus callosum is critically involved in the top-down attentional control of dichotic listening performance, thus having a critical role in auditory laterality." [33]

Language processing

Dichotic listening can also be used to test the hemispheric asymmetry of language processing. In the early 1960s, Doreen Kimura reported that dichotic verbal stimuli (specifically spoken numerals) produced a right ear advantage (REA). [34] She attributed the right-ear advantage "to the localization of speech and language processing in the so-called dominant left hemisphere of the cerebral cortex." [35]:115 According to her study, this phenomenon reflects the structure of the auditory pathways and the left-sided dominance for language processing. [36] Notably, the REA does not apply to non-speech sounds. In "Hemispheric Specialization for Speech Perception," Studdert-Kennedy and Shankweiler (1970) [14] examined dichotic listening of CVC syllable pairs, pairing the six stop consonants (b, d, g, p, t, k) with six vowels and analyzing variation in the initial and final consonants. The REA is strongest when the initial and final consonants both differ and weakest when only the vowel changes. Asbjornsen and Bryden (1996) state that "many researchers have chosen to use CV syllable pairs, usually consisting of the six stop consonants paired with the vowel /a/. Over the years, a large amount of data has been generated using such material." [37]

Selective attention

In selective attention experiments, participants may be asked to repeat aloud the content of the message they are listening to, a task known as shadowing. As Colin Cherry (1953) [38] found, people do not recall the shadowed message well, suggesting that most of the processing necessary to shadow the attended message occurs in working memory and is not preserved in the long-term store. Performance on the unattended message is worse: participants are generally able to report almost nothing about its content. In fact, a change from English to German in the unattended channel frequently goes unnoticed. However, participants can report that the unattended message is speech rather than non-verbal content. In addition, if the unattended message contains certain information, such as the listener's name, it is more likely to be noticed and remembered. [39] Conway, Cowan, and Bunting (2001) demonstrated this by having subjects shadow words in one ear while ignoring words in the other ear. At some point, the subject's name was spoken in the ignored ear, and the question was whether the subject would report hearing it. Subjects with a high working memory (WM) span were more capable of blocking out the distracting information. [40] Similarly, if the unattended message contains sexual words, people usually notice them immediately. [41] This suggests that the unattended information also undergoes analysis and that keywords can divert attention to it.

Sex differences

Some data gathered from dichotic listening experiments suggest a small sex difference in perceptual and auditory asymmetries and language laterality. According to Voyer (2011), [42] "Dichotic listening tasks produced homogenous effect sizes regardless of task type (verbal, non-verbal), reflecting a significant sex difference in the magnitude of laterality effects, with men obtaining larger laterality effects than women." [42]:245–246 However, the authors discuss numerous limiting factors, ranging from publication bias to small effect sizes. Furthermore, as discussed in "Attention, reliability, and validity of perceptual asymmetries in the fused dichotic words test," [43] women reported more "intrusions" (words presented to the uncued ear) than men when presented with exogenous cues in the fused dichotic words task, which suggests two possibilities: 1) women experience more difficulty attending to the cued word than men, and/or 2) regardless of the cue, women spread their attention evenly, whereas men may focus more intently on exogenous cues. [42]

Effect of schizophrenia

A study using the dichotic listening test, with emphasis on subtypes of schizophrenia (particularly paranoid and undifferentiated), demonstrated that people with paranoid schizophrenia have the largest left hemisphere advantage, whereas people with undifferentiated schizophrenia (where psychotic symptoms are present but the criteria for the paranoid, disorganized, or catatonic types have not been met) have the smallest. [44] The dichotic listening test thus supported the view that preserved left hemisphere processing characterizes paranoid schizophrenia and, in contrast, that reduced left hemisphere activity characterizes undifferentiated schizophrenia. In 1994, M.F. Green and colleagues used a dichotic listening study to examine "the functional integration of the left hemisphere in hallucinating and nonhallucinating psychotic patients". The study indicated that auditory hallucinations are connected to a malfunction in the left hemisphere of the brain. [45]

Emotions

Dichotic listening has also been used to study the emotion-oriented parts of the brain. Phil Bryden's dichotic listening research focused on emotionally loaded stimuli (Hugdahl, 2015). [46] Later work on lateralization and the identification of cortical regions asked which brain areas are engaged when two dichotic listening tasks are given. Jancke et al. (2001) used functional magnetic resonance imaging (fMRI) to determine which attention-related regions were activated by phonetic versus emotional auditory stimuli. The results showed that the type of stimulus attended to (phonetic or emotional) had a significant effect on activation of the regions associated with that stimulus type, although no overall difference in cortical activation was found. [47]

See also

Related Research Articles

<span class="mw-page-title-main">McGurk effect</span> Perceptual illusion

The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor-quality auditory information but good-quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect. People who are better at sensory integration have been shown to be more susceptible to the effect. Many people are affected differently by the McGurk effect based on many factors, including brain damage and other disorders.

<span class="mw-page-title-main">Temporal lobe</span> One of the four lobes of the mammalian brain

The temporal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The temporal lobe is located beneath the lateral fissure on both cerebral hemispheres of the mammalian brain.

Split-brain or callosal syndrome is a type of disconnection syndrome when the corpus callosum connecting the two hemispheres of the brain is severed to some degree. It is an association of symptoms produced by disruption of, or interference with, the connection between the hemispheres of the brain. The surgical operation to produce this condition involves transection of the corpus callosum, and is usually a last resort to treat refractory epilepsy. Initially, partial callosotomies are performed; if this operation does not succeed, a complete callosotomy is performed to mitigate the risk of accidental physical injury by reducing the severity and violence of epileptic seizures. Before using callosotomies, epilepsy is instead treated through pharmaceutical means. After surgery, neuropsychological assessments are often performed.

The term laterality refers to the preference most humans show for one side of their body over the other. Examples include left-handedness/right-handedness and left/right-footedness; it may also refer to the primary use of the left or right hemisphere in the brain. It may also apply to animals or plants. The majority of tests have been conducted on humans, specifically to determine the effects on language.

<span class="mw-page-title-main">Auditory cortex</span> Part of the temporal lobe of the brain

The auditory cortex is the part of the temporal lobe that processes auditory information in humans and many other vertebrates. It is a part of the auditory system, performing basic and higher functions in hearing, such as possible relations to language switching. It is located bilaterally, roughly at the upper sides of the temporal lobes – in humans, curving down and onto the medial surface, on the superior temporal plane, within the lateral sulcus and comprising parts of the transverse temporal gyri, and the superior temporal gyrus, including the planum polare and planum temporale.

<span class="mw-page-title-main">Language processing in the brain</span> How humans use words to communicate

In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity in even human's closest primate relatives.

<span class="mw-page-title-main">Cocktail party effect</span> Ability of the brain to focus on a single auditory stimulus by filtering out background noise

The cocktail party effect is the phenomenon of the brain's ability to focus one's auditory attention on a particular stimulus while filtering out a range of other stimuli, such as when a partygoer can focus on a single conversation in a noisy room. Listeners have the ability to both segregate different stimuli into different streams, and subsequently decide which streams are most pertinent to them.

Doreen Kimura was a Canadian psychologist who was professor at the University of Western Ontario and professor emeritus at Simon Fraser University. Kimura was recognized for her contributions to the field of neuropsychology and later, her advocacy for academic freedom. She was the founding president of the Society for Academic Freedom and Scholarship.

Speech perception is the process by which the sounds of language are heard, interpreted, and understood. The study of speech perception is closely linked to the fields of phonology and phonetics in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech perception research has applications in building computer systems that can recognize speech, in improving speech recognition for hearing- and language-impaired listeners, and in foreign-language teaching.

Donald P. ShankweilerArchived 2006-06-26 at the Wayback Machine is an eminent psychologist and cognitive scientist who has done pioneering work on the representation and processing of language in the brain. He is a Professor Emeritus of Psychology at the University of Connecticut, a Senior Scientist at Haskins Laboratories in New Haven, Connecticut, and a member of the Board of Directors Archived 2021-01-26 at the Wayback Machine at Haskins. He is married to well-known American philosopher of biology, psychology, and language Ruth Millikan.

<span class="mw-page-title-main">Brain asymmetry</span> Term in human neuroanatomy referring to several things

In human neuroanatomy, brain asymmetry can refer to at least two quite distinct findings:

Auditory agnosia is a form of agnosia that manifests itself primarily in the inability to recognize or differentiate between sounds. It is not a defect of the ear or "hearing", but rather a neurological inability of the brain to process sound meaning. While auditory agnosia impairs the understanding of sounds, other abilities such as reading, writing, and speaking are not hindered. It is caused by bilateral damage to the anterior superior temporal gyrus, which is part of the auditory pathway responsible for sound recognition, the auditory "what" pathway.

Speech shadowing is a psycholinguistic experimental technique in which subjects repeat speech at a delay to the onset of hearing the phrase. The time between hearing the speech and responding, is how long the brain takes to process and produce speech. The task instructs participants to shadow speech, which generates intent to reproduce the phrase while motor regions in the brain unconsciously process the syntax and semantics of the words spoken. Words repeated during the shadowing task would also imitate the parlance of the shadowed speech.

In neuroscience, the N100 or N1 is a large, negative-going evoked potential measured by electroencephalography ; it peaks in adults between 80 and 120 milliseconds after the onset of a stimulus, and is distributed mostly over the fronto-central region of the scalp. It is elicited by any unpredictable stimulus in the absence of task demands. It is often referred to with the following P200 evoked potential as the "N100-P200" or "N1-P2" complex. While most research focuses on auditory stimuli, the N100 also occurs for visual, olfactory, heat, pain, balance, respiration blocking, and somatosensory stimuli.

Extinction is a neurological disorder that impairs the ability to perceive multiple stimuli of the same type simultaneously. Extinction is usually caused by damage resulting in lesions on one side of the brain. Those who are affected by extinction have a lack of awareness in the contralesional side of space and a loss of exploratory search and other actions normally directed toward that side.

The neuroscience of music is the scientific study of brain-based mechanisms involved in the cognitive processes underlying music. These behaviours include music listening, performing, composing, reading, writing, and ancillary activities. It also is increasingly concerned with the brain basis for musical aesthetics and musical emotion. Scientists working in this field may have training in cognitive neuroscience, neurology, neuroanatomy, psychology, music theory, computer science, and other relevant fields.

Spatial hearing loss refers to a form of deafness that is an inability to use spatial cues about where a sound originates from in space. Poor sound localization in turn affects the ability to understand speech in the presence of background noise.

Amblyaudia is a term coined by Dr. Deborah Moncrieff to characterize a specific pattern of performance from dichotic listening tests. Dichotic listening tests are widely used to assess individuals for binaural integration, a type of auditory processing skill. During the tests, individuals are asked to identify different words presented simultaneously to the two ears. Normal listeners can identify the words fairly well and show a small difference between the two ears with one ear slightly dominant over the other. For the majority of listeners, this small difference is referred to as a "right-ear advantage" because their right ear performs slightly better than their left ear. But some normal individuals produce a "left-ear advantage" during dichotic tests and others perform at equal levels in the two ears. Amblyaudia is diagnosed when the scores from the two ears are significantly different with the individual's dominant ear score much higher than the score in the non-dominant ear Researchers interested in understanding the neurophysiological underpinnings of amblyaudia consider it to be a brain based hearing disorder that may be inherited or that may result from auditory deprivation during critical periods of brain development. Individuals with amblyaudia have normal hearing sensitivity but have difficulty hearing in noisy environments like restaurants or classrooms. Even in quiet environments, individuals with amblyaudia may fail to understand what they are hearing, especially if the information is new or complicated. Amblyaudia can be conceptualized as the auditory analog of the better known central visual disorder amblyopia. The term “lazy ear” has been used to describe amblyaudia although it is currently not known whether it stems from deficits in the auditory periphery or from other parts of the auditory system in the brain, or both. 
A characteristic of amblyaudia is suppression of activity in the non-dominant auditory pathway by activity in the dominant pathway which may be genetically determined and which could also be exacerbated by conditions throughout early development.

Broadbent's filter model is an early selection theory of attention.

Selective auditory attention or selective hearing is a type of selective attention and involves the auditory system. Selective hearing is characterized as the action in which people focus their attention intentionally on a specific source of a sound or spoken words. When people use selective hearing, noise from the surrounding environment is heard by the auditory system but only certain parts of the auditory information are chosen to be processed by the brain.

References

  1. Westerhausen, René; Kompus, Kristiina (2018). "How to get a left-ear advantage: A technical review of assessing brain asymmetry with dichotic listening". Scandinavian Journal of Psychology. 59 (1): 66–73. doi:10.1111/sjop.12408. PMID 29356005.
  2. Daniel L. Schacter; Daniel Todd Gilbert; Daniel M. Wegner (2011). Psychology (1. publ., 3. print. ed.). Cambridge: Worth Publishers. p. 180. ISBN   978-1-429-24107-6.
  3. Hugdahl, Kenneth (2015), "Dichotic Listening and Language: Overview", International Encyclopedia of the Social & Behavioral Sciences, Elsevier, pp. 357–367, doi:10.1016/b978-0-08-097086-8.54030-6, ISBN   978-0-08-097087-5
  4. Kimura, Doreen (2011). "From ear to brain". Brain and Cognition. 76 (2): 214–217. doi:10.1016/j.bandc.2010.11.009. PMID   21236541. S2CID   43450851.
  5. Broadbent, D. E. (1956). "Successive Responses to Simultaneous Stimuli". Quarterly Journal of Experimental Psychology. 8 (4): 145–152. doi: 10.1080/17470215608416814 . ISSN   0033-555X. S2CID   144045935.
  6. Broadbent, Donald E. (1987). Perception and Communication. Oxford: Oxford University Press. ISBN 0-19-852171-5. OCLC 14067709.
  7. "Canadian Society for Brain, Behaviour & Cognitive Science: Dr. Doreen Kimura". www.csbbcs.org. Retrieved 2019-12-05.
  8. Kimura, Doreen (1961). "Cerebral dominance and the perception of verbal stimuli". Canadian Journal of Psychology. 15 (3): 166–171. doi:10.1037/h0083219. ISSN   0008-4255.
  9. Kimura, Doreen (1964). "Left-right differences in the perception of melodies". Quarterly Journal of Experimental Psychology. 16 (4): 355–358. doi:10.1080/17470216408416391. ISSN   0033-555X. S2CID   145633913.
  10. Kimura, Doreen (1961). "Some effects of temporal-lobe damage on auditory perception". Canadian Journal of Psychology. 15 (3): 156–165. doi:10.1037/h0083218. ISSN   0008-4255. PMID   13756014.
  11. Kimura, Doreen (1967). "Functional Asymmetry of the Brain in Dichotic Listening". Cortex. 3 (2): 163–178. doi: 10.1016/S0010-9452(67)80010-8 .
  12. "Donald P. Shankweiler".
  13. "Michael Studdert-Kennedy".
  14. Studdert-Kennedy, Michael; Shankweiler, Donald (19 August 1970). "Hemispheric specialization for speech perception". Journal of the Acoustical Society of America. 48 (2): 579–594. Bibcode:1970ASAJ...48..579S. doi:10.1121/1.1912174. PMID 5470503.
  15. Studdert-Kennedy M.; Shankweiler D.; Schulman S. (1970). "Opposed effects of a delayed channel on perception of dichotically and monotically presented CV syllables". Journal of the Acoustical Society of America. 48 (2B): 599–602. Bibcode:1970ASAJ...48..599S. doi:10.1121/1.1912179.
  16. Studdert-Kennedy M.; Shankweiler D.; Pisoni D. (1972). "Auditory and phonetic processes in speech perception: Evidence from a dichotic study". Journal of Cognitive Psychology. 2 (3): 455–466. doi:10.1016/0010-0285(72)90017-5. PMC   3523680 . PMID   23255833.
  17. Sidtis J. J. (1981). "The complex tone test: Implications for the assessment of auditory laterality effects". Neuropsychologia. 19 (1): 103–112. doi:10.1016/0028-3932(81)90050-6. PMID   7231655. S2CID   42655052.
  18. "Rand, T. C. (1974)". Haskins Laboratories Publications-R.
  19. Rand, Timothy C. (1974). "Dichotic release from masking for speech". Journal of the Acoustical Society of America. 55 (3): 678–680. Bibcode:1974ASAJ...55..678R. doi:10.1121/1.1914584. PMID 4819869.
  20. Cutting J. E. (1976). "Auditory and linguistic processes in speech perception: inferences from six fusions in dichotic listening". Psychological Review. 83 (2): 114–140. CiteSeerX   10.1.1.587.9878 . doi:10.1037/0033-295x.83.2.114. PMID   769016.
  21. Johnson; et al. (1977). "Dichotic ear preference in aphasia". Journal of Speech and Hearing Research. 20 (1): 116–129. doi:10.1044/jshr.2001.116. PMID   846195.
  22. Wexler, Bruce; Halwes, Terry (1983). "Increasing the power of dichotic methods: the fused rhymed words test". Neuropsychologia. 21 (1): 59–66. doi:10.1016/0028-3932(83)90100-8. PMID 6843817. S2CID 6717817.
  23. Zatorre, Robert (1989). "Perceptual asymmetry on the dichotic fused words test and cerebral speech lateralization determined by the carotid sodium amytal test". Neuropsychologia. 27 (10): 1207–1219. doi:10.1016/0028-3932(89)90033-x. PMID 2480551. S2CID 26052363.
  24. Grimshaw; et al. (2003). "The dynamic nature of language lateralization: effects of lexical and prosodic factors". Neuropsychologia. 41 (8): 1008–1019. doi:10.1016/s0028-3932(02)00315-9. PMID   12667536. S2CID   13251643.
  25. Hahn, Constanze (Jul 2011). "Smoking reduces language lateralization: A dichotic listening study with control participants and schizophrenia patients". Brain and Cognition. 76 (2): 300–309. doi:10.1016/j.bandc.2011.03.015. PMID   21524559. S2CID   16181999.
  26. Arciuli J (July 2011). "Manipulation of voice onset time during dichotic listening". Brain and Cognition. 76 (2): 233–8. doi:10.1016/j.bandc.2011.01.007. PMID 21320740. S2CID 40737054.
  27. Rimol, L.M.; Eichele, T.; Hugdahl, K. (2006). "The effect of voice-onset-time on dichotic listening with consonant-vowel syllables". Neuropsychologia. 44 (2): 191–196. doi:10.1016/j.neuropsychologia.2005.05.006. PMID   16023155. S2CID   2131160.
  28. Westerhausen, R.; Helland, T.; Ofte, S.; Hugdahl, K. (2010). "A longitudinal study of the effect of voicing on the dichotic listening ear advantage in boys and girls at age 5 to 8". Developmental Neuropsychology. 35 (6): 752–761. doi:10.1080/87565641.2010.508551. PMID   21038164. S2CID   12980025.
  29. Andersson, M.; Llera, J.E.; Rimol, L.M.; Hugdahl, K. (2008). "Using dichotic listening to study bottom-up and top-down processing in children and adults". Child Neuropsychol. 14 (5): 470–479. doi:10.1080/09297040701756925. PMID   18608228. S2CID   20770018.
  30. Arciuli, J.; Rankine, T.; Monaghan, P. (2010). "Auditory discrimination of voice-onset time and its relationship with reading ability". Laterality. 15 (3): 343–360. doi:10.1080/13576500902799671. PMID   19343572. S2CID   23776770.
  31. Arciuli, J.; Rankine, T.; Monaghan, P. (May 2010). "Auditory discrimination of voice-onset time and its relationship with reading ability". Laterality. 15 (3): 343–360. doi:10.1080/13576500902799671. PMID 19343572. S2CID 23776770.
  32. Hugdahl, Kenneth (2003). "Dichotic Listening Performance and Frontal Lobe Function". Cognitive Brain Research. 16 (1): 58–65. doi:10.1016/s0926-6410(02)00210-0. PMID 12589889.
  33. Westerhausen, Rene; Kenneth Hugdahl (2008). "The corpus callosum in dichotic listening studies of hemispheric asymmetry: A review of clinical and experimental evidence". Neuroscience and Biobehavioral Reviews. 32 (5): 1044–1054. doi:10.1016/j.neubiorev.2008.04.005. PMID 18499255. S2CID 23137612.
  34. Kimura D (1961). "Cerebral dominance and the perception of verbal stimuli". Canadian Journal of Psychology. 15 (3): 166–171. doi:10.1037/h0083219.
  35. Ingram, John C.L. (2007). Neurolinguistics: an introduction to spoken language processing and its disorders (1. publ., 3. print. ed.). Cambridge: Cambridge University Press. ISBN   978-0-521-79640-8.
  36. Kimura D (1967). "Functional asymmetry of the brain in dichotic listening". Cortex. 3 (2): 163–178. doi:10.1016/s0010-9452(67)80010-8.
  37. Asbjornsen, Arve; M.P. Bryden (1996). "Biased attention and the fused dichotic words test". Neuropsychologia. 34 (5): 407–11. doi:10.1016/0028-3932(95)00127-1. PMID   9148197. S2CID   43071799.
  38. Cherry E. C. (1953). "Some experiments on the recognition of speech, with one and two ears". Journal of the Acoustical Society of America. 25 (5): 975–979. Bibcode:1953ASAJ...25..975C. doi:10.1121/1.1907229. hdl:11858/00-001M-0000-002A-F750-3.
  39. Moray N (1959). "Attention in dichotic listening: Affective cues and the influence of instructions". Quarterly Journal of Experimental Psychology. 11: 56–60. doi:10.1080/17470215908416289. S2CID   144324766.
  40. Engle Randall W (2002). "Working Memory Capacity as Executive Attention". Current Directions in Psychological Science. 11: 19–23. doi:10.1111/1467-8721.00160. S2CID   116230.
  41. Nielson L. L.; Sarason I. G. (1981). "Emotion, personality, and selective attention". Journal of Personality and Social Psychology. 41 (5): 945–960. doi:10.1037/0022-3514.41.5.945.
  42. Voyer, Daniel (2011). "Sex differences in dichotic listening". Brain and Cognition. 76 (2): 245–255. doi:10.1016/j.bandc.2011.02.001. PMID 21354684. S2CID 43323875.
  43. Voyer, Daniel; Jennifer Ingram (2005). "Attention, reliability, and validity of perceptual asymmetries in the fused dichotic word test". Laterality: Asymmetries of Body, Brain and Cognition. 10 (6): 545–561. doi:10.1080/13576500442000292. PMID   16298885. S2CID   33137060.
  44. Friedman, Michelle S.; Bruder, Gerard E.; Nestor, Paul G.; Stuart, Barbara K.; Amador, Xavier F.; Gorman, Jack M. (September 2001). "Perceptual Asymmetries in Schizophrenia: Subtype Differences in Left Hemisphere Dominance for Dichotic Fused Words" (PDF). American Journal of Psychiatry. 158 (9): 1437–1440. doi:10.1176/appi.ajp.158.9.1437. PMID   11532728.
  45. Green, MF; Hugdahl, K; Mitchell, S (March 1994). "Dichotic listening during auditory hallucinations in patients with schizophrenia". American Journal of Psychiatry. 151 (3): 357–362. doi:10.1176/ajp.151.3.357. PMID   8109643.
  46. Hugdahl, Kenneth (2016). "Dichotic Listening and attention: the legacy of Phil Bryden". Laterality: Asymmetries of Body, Brain and Cognition. 21 (4–6): 433–454. doi:10.1080/1357650X.2015.1066382. PMID   26299422. S2CID   40077399.
  47. Jäncke, L.; Buchanan, T.W.; Lutz, K.; Shah, N.J. (September 2001). "Focused and Nonfocused Attention in Verbal and Emotional Dichotic Listening: An FMRI Study". Brain and Language. 78 (3): 349–363. doi:10.1006/brln.2000.2476. PMID   11703062. S2CID   42698136.

Further reading