Cocktail party effect

[Image: A crowded cocktail bar]

The cocktail party effect refers to a phenomenon wherein the brain focuses a person's attention on a particular stimulus, usually auditory. This focus excludes a range of other stimuli from conscious awareness, as when a partygoer follows a single conversation in a noisy room. [1] [2] The ability is widely shared among humans: most listeners can, more or less easily, partition the totality of sound detected by the ears into distinct streams and then decide which streams are most pertinent, excluding all or most others. [3]


It has been proposed that a person's sensory memory subconsciously parses all stimuli and identifies discrete portions of these sensations according to their salience. [4] This allows most people to tune effortlessly into a single voice while tuning out all others. The phenomenon is often described as "selective attention" or "selective hearing". It may also describe a similar phenomenon in which words of importance are immediately detected among unattended stimuli, for instance hearing one's name among a wide range of auditory input. [5] [6]

A person who lacks the ability to segregate stimuli in this way is often said to display the cocktail party problem [7] or cocktail party deafness. [8] This may also be described as auditory processing disorder or King-Kopetzky syndrome.

Neurological basis (and binaural processing)

Auditory attention with regard to the cocktail party effect primarily occurs in the left hemisphere of the superior temporal gyrus, a non-primary region of auditory cortex; a fronto-parietal network involving the inferior frontal gyrus, superior parietal sulcus, and intraparietal sulcus also accounts for attention-shifting, speech processing, and attention control. [9] [10] Both the target stream (the more important information being attended to) and competing/interfering streams are processed in the same pathway within the left hemisphere, but fMRI scans show that target streams are treated with more attention than competing streams. [11]

Furthermore, activity in the superior temporal gyrus (STG) toward the target stream decreases when competing stimulus streams (typically those holding significant value) arise. The "cocktail party effect" – the ability to detect significant stimuli in multi-talker situations – has also been labeled the "cocktail party problem", because the ability to attend selectively simultaneously interferes with the effectiveness of attention at a neurological level. [11]

The cocktail party effect works best as a binaural effect, which requires hearing with both ears. People with only one functioning ear seem much more distracted by interfering noise than people with two functioning ears. [12] The benefit of using two ears may be partially related to the localization of sound sources. The auditory system is able to localize at least two sound sources and assign the correct characteristics to these sources simultaneously. As soon as the auditory system has localized a sound source, it can extract the signals of this sound source out of a mixture of interfering sound sources. [13] However, much of this binaural benefit can be attributed to two other processes, better-ear listening and binaural unmasking. [12] Better-ear listening is the process of exploiting the better of the two signal-to-noise ratios available at the ears. Binaural unmasking is a process that involves a combination of information from the two ears in order to extract signals from noise.
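Better-ear listening can be expressed numerically: whichever ear enjoys the higher signal-to-noise ratio sets the effective listening condition. A minimal Python sketch (the power values below are made-up illustrations, not measured data):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """Signal-to-noise ratio in decibels."""
    return 10 * math.log10(signal_power / noise_power)

def better_ear_snr(left_sig: float, left_noise: float,
                   right_sig: float, right_noise: float) -> float:
    """Better-ear listening: exploit whichever ear has the higher SNR."""
    return max(snr_db(left_sig, left_noise), snr_db(right_sig, right_noise))

# A talker to the listener's right: the head shadows the noise at the right
# ear, so the right ear's SNR is higher and the auditory system favours it.
print(better_ear_snr(1.0, 2.0, 1.0, 0.5))  # right ear: 10*log10(2) ≈ 3.01 dB
```

Binaural unmasking, by contrast, cannot be captured by picking one ear; it depends on combining interaural phase and level differences across both ears.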

Early work

Much of the early attention research of the 1950s can be traced to problems faced by air traffic controllers. At that time, controllers received messages from pilots over loudspeakers in the control tower, and hearing the intermixed voices of many pilots over a single loudspeaker made the controller's task very difficult. [14] The effect was first defined and named "the cocktail party problem" by Colin Cherry in 1953. [7] Cherry conducted attention experiments in which participants listened to two different messages from a single loudspeaker at the same time and tried to separate them; this was later termed a dichotic listening task. [15] His work revealed that the ability to separate sounds from background noise is affected by many variables, such as the sex of the speaker, the direction from which the sound is coming, the pitch, and the rate of speech. [7]

Cherry developed the shadowing task in order to further study how people selectively attend to one message amid other voices and noises. In a shadowing task participants wear a special headset that presents a different message to each ear. The participant is asked to repeat aloud the message (called shadowing) heard in a specified ear (called a channel). [15] Later research using Cherry's shadowing task was done by Neville Moray in 1959, who found that participants were able to detect their name in the unattended channel, the channel they were not shadowing. [16] He concluded that almost none of the rejected message is able to penetrate the block set up, except subjectively "important" messages. [16]

More recent work

Selective attention appears across all ages. Beginning in infancy, babies turn their heads toward sounds that are familiar to them, such as their parents' voices, [17] showing that infants selectively attend to specific stimuli in their environment. Furthermore, reviews of selective attention indicate that infants favor "baby" talk over speech with an adult tone, [15] [17] a preference indicating that infants can recognize physical changes in the tone of speech. However, the accuracy in noticing these physical differences, like tone, amid background noise improves over time. [17] Infants may simply ignore stimuli because something like their name, while familiar, holds no higher meaning to them at such a young age; however, research suggests that the more likely scenario is that infants do not understand that the noise presented to them amidst distracting noise is their own name, and thus do not respond. [18] The ability to filter out unattended stimuli reaches its prime in young adulthood. In reference to the cocktail party phenomenon, older adults have a harder time than younger adults focusing on one conversation if competing stimuli, like "subjectively" important messages, make up the background noise. [17]

Some examples of messages that catch people's attention include personal names and taboo words. The ability to selectively attend to one's own name has been found in infants as young as 5 months of age and appears to be fully developed by 13 months. [18] Along with multiple experts in the field, Anne Treisman states that people are permanently primed to detect personally significant words, like names, and theorizes that they may require less perceptual information than other words to trigger identification. [19] Another stimulus that reaches some level of semantic processing while in the unattended channel is taboo words. [20] These words often contain sexually explicit material that triggers an alerting response and leads to decreased performance in shadowing tasks. [21] Taboo words do not affect children's selective attention until they develop a strong vocabulary and an understanding of language.

Selective attention begins to waver as we get older. Older adults have longer latency periods in discriminating between conversation streams. This is typically attributed to the fact that general cognitive ability begins to decay with old age (as exemplified by memory, visual perception, higher-order functioning, etc.). [9] [22]

Even more recently, modern neuroscience techniques are being applied to study the cocktail party problem. Some notable examples of researchers doing such work include Edward Chang, Nima Mesgarani, and Charles Schroeder using electrocorticography; Jonathan Simon, Mounya Elhilali, Adrian KC Lee, Shihab Shamma, Barbara Shinn-Cunningham, Daniel Baldauf, and Jyrki Ahveninen using magnetoencephalography; Jyrki Ahveninen, Edmund Lalor, and Barbara Shinn-Cunningham using electroencephalography; and Jyrki Ahveninen and Lee M. Miller using functional magnetic resonance imaging.

Models of attention

Not all the information presented to us can be processed. In theory, the selection of what to pay attention to can be random or nonrandom. [23] For example, when driving, drivers are able to focus on the traffic lights rather than on other stimuli present in the scene. In such cases it is necessary to select which portion of presented stimuli is important. A basic question in psychology is when this selection occurs. [15] This issue has developed into the early versus late selection controversy. The basis for this controversy can be found in Cherry's dichotic listening experiments: participants were able to notice physical changes, like pitch or a change in the gender of the speaker, and certain stimuli, like their own name, in the unattended channel. This brought about the question of whether the meaning (semantics) of the unattended message was processed before selection. [15] In early selection attention models very little information is processed before selection occurs; in late selection attention models more information, like semantics, is processed before selection occurs. [23]

Broadbent

The earliest work in exploring mechanisms of early selective attention was performed by Donald Broadbent, who proposed a theory that came to be known as the filter model. [24] This model was established using the dichotic listening task. His research showed that most participants were accurate in recalling information that they actively attended to, but were far less accurate in recalling information that they had not attended to. This led Broadbent to the conclusion that there must be a "filter" mechanism in the brain that could block out information that was not selectively attended to. The filter model was hypothesized to work in the following way: as information enters the brain through sensory organs (in this case, the ears) it is stored in sensory memory, a buffer memory system that holds an incoming stream of information long enough for us to pay attention to it. [15] Before information is processed further, the filter mechanism allows only attended information to pass through. The selected information is then passed into working memory, the set of mechanisms that underlies short-term memory and communicates with long-term memory. [15] In this model, auditory information can be selectively attended to on the basis of its physical characteristics, such as location and volume. [24] [25] [26] Others suggest that information can be attended to on the basis of Gestalt features, including continuity and closure. [27] For Broadbent, this explained the mechanism by which people can choose to attend to only one source of information at a time while excluding others. However, Broadbent's model failed to account for the observation that words of semantic importance, for example the individual's own name, can be instantly attended to despite having been in an unattended channel.
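Broadbent's filter can be caricatured as a gate keyed to a physical channel: everything is briefly buffered in sensory memory, but only the attended channel reaches working memory. A toy sketch under that simplification (the channel labels are illustrative, and sensory memory is of course not literally a Python list):

```python
# Toy early-selection filter in the spirit of Broadbent's model: selection
# happens on physical characteristics (here, which ear), before any
# semantic analysis of the unattended stream.

def broadbent_filter(streams, attended_channel):
    """streams: list of (channel, message) pairs.
    Returns the messages that reach working memory."""
    sensory_memory = list(streams)  # everything is briefly buffered
    working_memory = [msg for ch, msg in sensory_memory
                      if ch == attended_channel]
    return working_memory

streams = [("left ear", "dear aunt jane"),
           ("right ear", "three six nine")]
print(broadbent_filter(streams, "left ear"))  # ['dear aunt jane']
```

The model's weakness is visible in the sketch: a semantically important word on the unattended channel (such as one's own name) is discarded just like any other message, which the own-name findings contradict.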

Shortly after Broadbent's experiments, Oxford undergraduates Gray and Wedderburn repeated his dichotic listening tasks, altered with monosyllabic words that could form meaningful phrases, except that the words were divided across ears. [28] For example, the words, "Dear, one, Jane," were sometimes presented in sequence to the right ear, while the words, "three, Aunt, six," were presented in a simultaneous, competing sequence to the left ear. Participants were more likely to remember, "Dear Aunt Jane," than to remember the numbers; they were also more likely to remember the words in the phrase order than to remember the numbers in the order they were presented. This finding goes against Broadbent's theory of complete filtration because the filter mechanism would not have time to switch between channels. This suggests that meaning may be processed first.

Treisman

In a later addition to this existing theory of selective attention, Anne Treisman developed the attenuation model. [29] In this model, information, when processed through a filter mechanism, is not completely blocked out as Broadbent might suggest. Instead, the information is weakened (attenuated), allowing it to pass through all stages of processing at an unconscious level. Treisman also suggested a threshold mechanism whereby some words, on the basis of semantic importance, may grab one's attention from the unattended stream. One's own name, according to Treisman, has a low threshold value (i.e. it has a high level of meaning) and thus is recognized more easily. The same principle applies to words like fire, directing our attention to situations that may immediately require it. The only way this can happen, Treisman argued, is if information was being processed continuously in the unattended stream.
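Treisman's attenuation-plus-threshold idea can be sketched as follows; the attenuation factor and the threshold values here are invented for illustration, not taken from her work:

```python
# Hedged sketch of Treisman's attenuation model: unattended input is
# weakened rather than blocked, and a word still reaches awareness if its
# attenuated strength clears that word's recognition threshold. Words of
# high personal significance are modeled with low thresholds.

ATTENUATION = 0.3        # assumed weakening factor for unattended input
THRESHOLDS = {
    "own name": 0.2,     # low threshold: high personal significance
    "fire": 0.25,        # urgent word: also a low threshold
    "chair": 0.8,        # ordinary word: high threshold
}

def reaches_awareness(word: str, attended: bool, strength: float = 1.0) -> bool:
    effective = strength if attended else strength * ATTENUATION
    return effective >= THRESHOLDS.get(word, 0.8)

print(reaches_awareness("own name", attended=False))  # True: 0.3 >= 0.2
print(reaches_awareness("chair", attended=False))     # False: 0.3 < 0.8
```

The key contrast with Broadbent's filter is that the unattended stream is still evaluated here: "own name" breaks through from the unattended channel while an ordinary word does not.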

Deutsch and Deutsch

Diana Deutsch, best known for her work in music perception and auditory illusions, has also made important contributions to models of attention. In order to explain in more detail how words can be attended to on the basis of semantic importance, Deutsch & Deutsch [30] and Norman [31] proposed a model of attention which includes a second selection mechanism based on meaning. In what came to be known as the Deutsch-Norman model, information in the unattended stream is not processed all the way into working memory, as Treisman's model would imply. Instead, information on the unattended stream is passed through a secondary filter after pattern recognition. If the unattended information is recognized and deemed unimportant by the secondary filter, it is prevented from entering working memory. In this way, only immediately important information from the unattended channel can come to awareness.

Kahneman

Daniel Kahneman also proposed a model of attention, but it differs from previous models in that he describes attention not in terms of selection, but in terms of capacity. For Kahneman, attention is a resource to be distributed among various stimuli, [32] a proposition which has received some support. [6] [4] [33] This model describes not when attention is focused, but how it is focused. According to Kahneman, attention is generally determined by arousal, a general state of physiological activity. The Yerkes-Dodson law predicts that arousal will be optimal at moderate levels: performance will be poor when one is over- or under-aroused. Of particular relevance, Narayan et al. discovered a sharp decline in the ability to discriminate between auditory stimuli when background noises were too numerous and complex, evidence of the negative effect of overarousal on attention. [4] Thus, arousal determines our available capacity for attention. An allocation policy then acts to distribute our available attention among a variety of possible activities. Those deemed most important by the allocation policy will have the most attention given to them. The allocation policy is affected by enduring dispositions (automatic influences on attention) and momentary intentions (a conscious decision to attend to something). Momentary intentions requiring a focused direction of attention rely on substantially more attention resources than enduring dispositions. [34] Additionally, there is an ongoing evaluation of the particular demands of certain activities on attention capacity. [32] That is to say, activities that are particularly taxing on attention resources will lower attention capacity and will influence the allocation policy: if an activity is too draining on capacity, the allocation policy will likely cease directing resources to it and instead focus on less taxing tasks.
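The inverted-U relation predicted by the Yerkes-Dodson law can be illustrated with a simple quadratic; the functional form and the constants are assumptions chosen only to show the shape, not anything Kahneman or Yerkes and Dodson specified:

```python
# Illustrative inverted-U: attention capacity is poor at low and high
# arousal and peaks at a moderate level (here arbitrarily 0.5).

def attention_capacity(arousal: float) -> float:
    """arousal in [0, 1]; returns available capacity in [0, 1]."""
    return max(0.0, 1.0 - 4.0 * (arousal - 0.5) ** 2)

# Under-aroused, moderately aroused, over-aroused:
for a in (0.1, 0.5, 0.9):
    print(round(attention_capacity(a), 2))  # 0.36, 1.0, 0.36
```

In this reading, the overly numerous and complex background noise in the Narayan et al. result corresponds to the right-hand, over-aroused side of the curve, where capacity falls off.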
Kahneman's model explains the cocktail party phenomenon in that momentary intentions might allow one to expressly focus on a particular auditory stimulus, but that enduring dispositions (which can include new events, and perhaps words of particular semantic importance) can capture our attention. It is important to note that Kahneman's model doesn't necessarily contradict selection models, and thus can be used to supplement them.

Visual correlates

Some research has demonstrated that the cocktail party effect may not be simply an auditory phenomenon, and that relevant effects can be obtained when testing visual information as well. For example, Shapiro et al. were able to demonstrate an "own name effect" with visual tasks, where subjects were able to easily recognize their own names when presented as unattended stimuli. [35] They adopted a position in line with late selection models of attention such as the Treisman or Deutsch-Norman models, suggesting that early selection would not account for such a phenomenon. The mechanisms by which this effect might occur were left unexplained.

Effect in animals

Animals that communicate in choruses, such as frogs, insects, songbirds and other animals that communicate acoustically, can experience the cocktail party effect as multiple signals or calls occur concurrently. As with humans, acoustic mediation allows animals to listen for what they need within their environments. For bank swallows, cliff swallows, and king penguins, acoustic mediation allows for parent/offspring recognition in noisy environments. Amphibians also demonstrate this effect, as evidenced in frogs: female frogs can listen for and differentiate male mating calls, while males can mediate other males' aggression calls. [36] There are two leading theories as to why acoustic signaling evolved among different species. Receiver psychology holds that the development of acoustic signaling can be traced back to the nervous system and the processing strategies it uses – specifically, how the physiology of auditory scene analysis affects how a species interprets and gains meaning from sound. Communication network theory states that animals can gain information by eavesdropping on signals exchanged between others of their species; this is especially true among songbirds. [36]

Hearables for the cocktail party effect

Hearable devices like noise-canceling headphones have been designed to address the cocktail party problem. [37] [38] These types of devices could provide wearers with a degree of control over the sound sources around them. [39] [40]

Deep learning headphone systems like target speech hearing have been proposed to give wearers the ability to hear a target person in a crowded room with multiple speakers and background noise. [37] This technology uses real-time neural networks to learn the voice characteristics of an enrolled target speaker, which are later used to focus on their speech while suppressing other speakers and noise. [39] [41] Semantic hearing headsets also use neural networks to enable wearers to hear specific sounds, such as birds tweeting or alarms ringing, based on their semantic description, while suppressing other ambient sounds in the environment. [38]
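The enrol-then-filter control flow of such systems can be sketched schematically. Real target speech hearing systems use trained neural networks operating on audio; in this hedged sketch a cosine similarity against an enrolled "voice embedding" stands in for the network, and every name and number is an illustrative assumption:

```python
import math

# Schematic only: enrol a target voice once, then keep only the incoming
# frames whose (hypothetical) speaker embedding matches the enrolled one.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def target_speech_filter(frames, enrolled_embedding, threshold=0.9):
    """frames: list of (embedding, audio) pairs.
    Pass through only frames whose embedding matches the enrolled voice."""
    return [audio for embedding, audio in frames
            if cosine(embedding, enrolled_embedding) >= threshold]

enrolled = [1.0, 0.0]                        # embedding captured at enrollment
frames = [([0.99, 0.05], "target speech"),   # close to the enrolled voice
          ([0.10, 0.95], "interfering talker")]
print(target_speech_filter(frames, enrolled))  # ['target speech']
```

Semantic hearing follows the same gate-per-frame structure, but the gate is keyed to a sound category (e.g. "alarm") rather than to an enrolled speaker.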

These devices could benefit individuals with hearing loss, sensory processing disorders, and misophonia, as well as people who require focused listening for their jobs in health care or the military, or factory and construction workers.


Related Research Articles

Attention

Attention, or focus, is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is the selective concentration on discrete information, either subjectively or objectively. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck in terms of the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data stream of 1 MByte/s can enter the bottleneck, leading to inattentional blindness.

Donald Broadbent

Donald Eric Broadbent CBE, FRS was an influential experimental psychologist from the United Kingdom. His career and research bridged the gap between the pre-World War II approach of Sir Frederic Bartlett and what became known as cognitive psychology in the late 1960s. A Review of General Psychology survey, published in 2002, ranked Broadbent as the 54th most cited psychologist of the 20th century.

Anne Treisman

Anne Marie Treisman was an English psychologist who specialised in cognitive psychology.

The Levels of Processing model, created by Fergus I. M. Craik and Robert S. Lockhart in 1972, describes memory recall of stimuli as a function of the depth of mental processing. More analysis produces a more elaborate and stronger memory than lower levels of processing. Depth of processing falls on a shallow-to-deep continuum. Shallow processing leads to a fragile memory trace that is susceptible to rapid decay; conversely, deep processing results in a more durable memory trace. There are three levels of processing in this model. Structural (visual) processing is when we remember only the physical quality of the word. Phonemic processing includes remembering the word by the way it sounds. Lastly, semantic processing encodes the meaning of the word with another word that is similar or has similar meaning. Once the word is perceived, the brain allows for deeper processing.

Attentional blink (AB) is a phenomenon that reflects temporal limitations in the ability to deploy visual attention. When people must identify two visual stimuli in quick succession, accuracy for the second stimulus is poor if it occurs within 200 to 500 ms of the first.

The dot-probe paradigm is a test used by cognitive psychologists to assess selective attention.

Echoic memory is the sensory memory register specific to auditory information (sounds). Once an auditory stimulus is heard, it is stored in memory so that it can be processed and understood. Unlike most visual memory, where a person can choose how long to view the stimulus and can reassess it repeatedly, auditory stimuli are usually transient and cannot be reassessed. Since echoic memories are heard once, they are stored for slightly longer periods of time than iconic memories. Auditory stimuli are received by the ear one at a time before they can be processed and understood.

Auditory processing disorder (APD), rarely known as King-Kopetzky syndrome or auditory disability with normal hearing (ADN), is a neurodevelopmental disorder affecting the way the brain processes sounds. Individuals with APD usually have normal structure and function of the ear, but cannot process the information they hear in the same way as others do, which leads to difficulties in recognizing and interpreting sounds, especially the sounds composing speech. It is thought that these difficulties arise from dysfunction in the central nervous system.

Dichotic pitch is a pitch heard due to binaural processing, when the brain combines two noises presented simultaneously to the ears. In other words, it cannot be heard when the sound stimulus is presented monaurally, but when it is presented binaurally a sensation of pitch can be heard. The binaural stimulus is presented to both ears through headphones simultaneously and is the same in several respects except for a narrow frequency band that is manipulated. The most common variation is the Huggins pitch, which presents white noise that differs only in the interaural phase relation over a narrow range of frequencies. For humans, this phenomenon is restricted to fundamental frequencies lower than 330 Hz and extremely low sound pressure levels. Studies of its effects on the brain suggest that it evokes activation at the lateral end of Heschl's gyrus.

Nelson Cowan is the Curators' Distinguished Professor of Psychological Sciences at the University of Missouri. He specializes in working memory, the small amount of information held in mind and used for language processing and various kinds of problem solving. To overcome conceptual difficulties that arise for models of information processing in which different functions occur in separate boxes, Cowan proposed a more organically organized "embedded processes" model. Within it, representations held in working memory comprise an activated subset of the representations held in long-term memory, with a smaller subset held in a more integrated form in the current focus of attention. Other work has been on the developmental growth of working memory capacity and the scientific method. His work, funded by the National Institutes of Health since 1984, has been cited over 41,000 times according to Google Scholar. The work has resulted in over 250 peer-reviewed articles, over 60 book chapters, 2 sole-authored books, and 4 edited volumes.

Illusory conjunctions

Illusory conjunctions are psychological effects in which participants combine features of two objects into one object. There are visual illusory conjunctions, auditory illusory conjunctions, and illusory conjunctions produced by combinations of visual and tactile stimuli. Visual illusory conjunctions are thought to occur due to a lack of visual spatial attention, which depends on fixation and the amount of time allotted to focus on an object. With a short span of time to interpret an object, blending of different aspects within a region of the visual field – like shapes and colors – can occasionally be skewed, which results in visual illusory conjunctions. For example, in a study designed by Anne Treisman and Schmidt, participants were required to view a visual presentation of numbers and shapes in different colors. Some shapes were larger than others but all shapes and numbers were evenly spaced and shown for just 200 ms. When the participants were asked to recall the shapes they reported answers such as a small green triangle instead of a small green circle. If the space between the objects is smaller, illusory conjunctions occur more often.

Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.

Spatial hearing loss refers to a form of deafness involving an inability to use spatial cues to determine where a sound originates. Poor sound localization in turn affects the ability to understand speech in the presence of background noise.

Amblyaudia is a term coined by Dr. Deborah Moncrieff to characterize a specific pattern of performance from dichotic listening tests. Dichotic listening tests are widely used to assess individuals for binaural integration, a type of auditory processing skill. During the tests, individuals are asked to identify different words presented simultaneously to the two ears. Normal listeners can identify the words fairly well and show a small difference between the two ears, with one ear slightly dominant over the other. For the majority of listeners, this small difference is referred to as a "right-ear advantage" because their right ear performs slightly better than their left ear. But some normal individuals produce a "left-ear advantage" during dichotic tests and others perform at equal levels in the two ears. Amblyaudia is diagnosed when the scores from the two ears are significantly different, with the individual's dominant ear score much higher than the score in the non-dominant ear. Researchers interested in understanding the neurophysiological underpinnings of amblyaudia consider it to be a brain-based hearing disorder that may be inherited or that may result from auditory deprivation during critical periods of brain development. Individuals with amblyaudia have normal hearing sensitivity but have difficulty hearing in noisy environments like restaurants or classrooms. Even in quiet environments, individuals with amblyaudia may fail to understand what they are hearing, especially if the information is new or complicated. Amblyaudia can be conceptualized as the auditory analog of the better-known central visual disorder amblyopia. The term "lazy ear" has been used to describe amblyaudia, although it is currently not known whether it stems from deficits in the auditory periphery, from other parts of the auditory system in the brain, or both.
A characteristic of amblyaudia is suppression of activity in the non-dominant auditory pathway by activity in the dominant pathway which may be genetically determined and which could also be exacerbated by conditions throughout early development.

Attenuation theory, also known as Treisman's attenuation model, is a model of selective attention proposed by Anne Treisman, and can be seen as a revision of Donald Broadbent's filter model. Treisman proposed attenuation theory as a means to explain how unattended stimuli sometimes came to be processed in a more rigorous manner than what Broadbent's filter model could account for. As a result, attenuation theory added layers of sophistication to Broadbent's original idea of how selective attention might operate: claiming that instead of a filter which barred unattended inputs from ever entering awareness, it was a process of attenuation. Thus, the attenuation of unattended stimuli would make it difficult, but not impossible to extract meaningful content from irrelevant inputs, so long as stimuli still possessed sufficient "strength" after attenuation to make it through a hierarchical analysis process.

Broadbent's filter model is an early selection theory of attention.

Selective auditory attention, or selective hearing, is a process of the auditory system where an individual selects or focuses on certain stimuli for auditory information processing while other stimuli are disregarded. This selection is very important as the processing and memory capabilities for humans have a limited capacity. When people use selective hearing, noise from the surrounding environment is heard by the auditory system but only certain parts of the auditory information are chosen to be processed by the brain.

Crossmodal attention refers to the distribution of attention to different senses. Attention is the cognitive process of selectively emphasizing and ignoring sensory stimuli. According to the crossmodal attention perspective, attention often occurs simultaneously through multiple sensory modalities. These modalities process information from the different sensory fields, such as: visual, auditory, spatial, and tactile. While each of these is designed to process a specific type of sensory information, there is considerable overlap between them which has led researchers to question whether attention is modality-specific or the result of shared "cross-modal" resources. Cross-modal attention is considered to be the overlap between modalities that can both enhance and limit attentional processing. The most common example given of crossmodal attention is the Cocktail Party Effect, which is when a person is able to focus and attend to one important stimulus instead of other less important stimuli. This phenomenon allows deeper levels of processing to occur for one stimulus while others are then ignored.

Neville Moray was a British-born Canadian psychologist. He was a professor in the Department of Psychology at the University of Surrey, and is best known for his 1959 research on the cocktail party effect.

Perceptual load theory is a psychological theory of attention. It was presented by Nilli Lavie in the mid-1990s as a potential resolution to the early/late selection debate.

References

  1. Bronkhorst, Adelbert W. (2000). "The Cocktail Party Phenomenon: A Review on Speech Intelligibility in Multiple-Talker Conditions". Acta Acustica United with Acustica. 86: 117–128. Retrieved 2020-11-16.
  2. Shinn-Cunningham BG (May 2008). "Object-based auditory and visual attention" (PDF). Trends in Cognitive Sciences. 12 (5): 182–6. doi:10.1016/j.tics.2008.02.003. PMC 2699558. PMID 18396091. Archived from the original (PDF) on 2015-09-23. Retrieved 2014-06-20.
  3. Marinato G, Baldauf D (February 2019). "Object-based attention in complex, naturalistic auditory streams". Scientific Reports. 9 (1): 2854. Bibcode:2019NatSR...9.2854M. doi:10.1038/s41598-019-39166-6. PMC 6393668. PMID 30814547.
  4. Narayan R, Best V, Ozmeral E, McClaine E, Dent M, Shinn-Cunningham B, Sen K (December 2007). "Cortical interference effects in the cocktail party problem". Nature Neuroscience. 10 (12): 1601–7. doi:10.1038/nn2009. PMID 17994016. S2CID 7857806.
  5. Wood N, Cowan N (January 1995). "The cocktail party phenomenon revisited: how frequent are attention shifts to one's name in an irrelevant auditory channel?". Journal of Experimental Psychology: Learning, Memory, and Cognition. 21 (1): 255–60. doi:10.1037/0278-7393.21.1.255. PMID 7876773.
  6. Conway AR, Cowan N, Bunting MF (June 2001). "The cocktail party phenomenon revisited: the importance of working memory capacity". Psychonomic Bulletin & Review. 8 (2): 331–5. doi:10.3758/BF03196169. PMID 11495122.
  7. Cherry EC (1953). "Some Experiments on the Recognition of Speech, with One and with Two Ears" (PDF). The Journal of the Acoustical Society of America. 25 (5): 975–79. Bibcode:1953ASAJ...25..975C. doi:10.1121/1.1907229. hdl:11858/00-001M-0000-002A-F750-3. ISSN 0001-4966.
  8. Pryse-Phillips W (2003). Companion to Clinical Neurology (2nd ed.). Oxford: Oxford University Press. p. 206. ISBN 0-19-515938-1.
  9. Getzmann S, Jasny J, Falkenstein M (February 2017). "Switching of auditory attention in "cocktail-party" listening: ERP evidence of cueing effects in younger and older adults". Brain and Cognition. 111: 1–12. doi:10.1016/j.bandc.2016.09.006. PMID 27814564. S2CID 26052069.
  10. de Vries IE, Marinato G, Baldauf D (August 2021). "Decoding object-based auditory attention from source-reconstructed MEG alpha oscillations". The Journal of Neuroscience. 41 (41): 8603–8617. doi:10.1523/JNEUROSCI.0583-21.2021. PMC 8513695. PMID 34429378.
  11. Evans S, McGettigan C, Agnew ZK, Rosen S, Scott SK (March 2016). "Getting the Cocktail Party Started: Masking Effects in Speech Perception". Journal of Cognitive Neuroscience. 28 (3): 483–500. doi:10.1162/jocn_a_00913. PMC 4905511. PMID 26696297.
  12. Hawley ML, Litovsky RY, Culling JF (February 2004). "The benefit of binaural hearing in a cocktail party: effect of location and type of interferer" (PDF). The Journal of the Acoustical Society of America. 115 (2): 833–43. Bibcode:2004ASAJ..115..833H. doi:10.1121/1.1639908. PMID 15000195. Archived from the original (PDF) on 2016-10-20. Retrieved 2013-07-21.
  13. Fritz JB, Elhilali M, David SV, Shamma SA (August 2007). "Auditory attention--focusing the searchlight on sound". Current Opinion in Neurobiology. 17 (4): 437–55. doi:10.1016/j.conb.2007.07.011. PMID 17714933. S2CID 11641395.
  14. Sorkin, Robert D.; Kantowitz, Barry H. (1983). Human factors: understanding people-system relationships. New York: Wiley. ISBN 978-0-471-09594-1. OCLC 8866672.
  15. Revlin R (2007). Human Cognition: Theory and Practice. New York, NY: Worth Pub. p. 59. ISBN 9780716756675. OCLC 779665820.
  16. Moray N (1959). "Attention in dichotic listening: Affective cues and the influence of instructions" (PDF). Quarterly Journal of Experimental Psychology. 11 (1): 56–60. doi:10.1080/17470215908416289. ISSN 0033-555X. S2CID 144324766.
  17. Plude DJ, Enns JT, Brodeur D (August 1994). "The development of selective attention: a life-span overview". Acta Psychologica. 86 (2–3): 227–72. doi:10.1016/0001-6918(94)90004-3. PMID 7976468.
  18. Newman RS (March 2005). "The cocktail party effect in infants revisited: listening to one's name in noise". Developmental Psychology. 41 (2): 352–62. doi:10.1037/0012-1649.41.2.352. PMID 15769191.
  19. Driver J (February 2001). "A selective review of selective attention research from the past century" (PDF). British Journal of Psychology. 92 Part 1: 53–78. doi:10.1348/000712601162103. PMID 11802865. Archived from the original (PDF) on 2014-05-21. Retrieved 2013-07-21.
  20. Straube ER, Germer CK (August 1979). "Dichotic shadowing and selective attention to word meaning in schizophrenia". Journal of Abnormal Psychology. 88 (4): 346–53. doi:10.1037/0021-843X.88.4.346. PMID 479456.
  21. Nielsen SL, Sarason IG (1981). "Emotion, personality, and selective attention". Journal of Personality and Social Psychology. 41 (5): 945–960. doi:10.1037/0022-3514.41.5.945. ISSN 0022-3514.
  22. Getzmann S, Näätänen R (November 2015). "The mismatch negativity as a measure of auditory stream segregation in a simulated "cocktail-party" scenario: effect of age". Neurobiology of Aging. 36 (11): 3029–3037. doi:10.1016/j.neurobiolaging.2015.07.017. PMID 26254109. S2CID 25443567.
  23. Cohen A (2006). "Selective Attention". Encyclopedia of Cognitive Science. doi:10.1002/0470018860.s00612. ISBN 978-0470016190.
  24. Broadbent DE (March 1954). "The role of auditory localization in attention and memory span". Journal of Experimental Psychology. 47 (3): 191–6. doi:10.1037/h0054182. PMID 13152294.
  25. Scharf B (1990). "On hearing what you listen for: The effects of attention and expectancy". Canadian Psychology. 31 (4): 386–387. doi:10.1037/h0084409.
  26. Brungart DS, Simpson BD (January 2007). "Cocktail party listening in a dynamic multitalker environment". Perception & Psychophysics. 69 (1): 79–91. doi:10.3758/BF03194455. PMID 17515218.
  27. Haykin S, Chen Z (September 2005). "The cocktail party problem". Neural Computation. 17 (9): 1875–902. doi:10.1162/0899766054322964. PMID 15992485. S2CID 207575815.
  28. Gray JA, Wedderburn AA (1960). "Grouping strategies with simultaneous stimuli". Quarterly Journal of Experimental Psychology. 12 (3): 180–184. doi:10.1080/17470216008416722. S2CID 143819583. Archived from the original on 2015-01-08. Retrieved 2013-07-21.
  29. Treisman AM (May 1969). "Strategies and models of selective attention". Psychological Review. 76 (3): 282–99. doi:10.1037/h0027242. PMID 4893203.
  30. Deutsch JA, Deutsch D (January 1963). "Some theoretical considerations". Psychological Review. 70 (1): 80–90. doi:10.1037/h0039515. PMID 14027390.
  31. Norman DA (1968). "Toward a theory of memory and attention". Psychological Review. 75 (6): 522–536. doi:10.1037/h0026699.
  32. Kahneman, D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice-Hall.
  33. Dalton P, Santangelo V, Spence C (November 2009). "The role of working memory in auditory selective attention". Quarterly Journal of Experimental Psychology. 62 (11): 2126–32. doi:10.1080/17470210903023646. PMID 19557667. S2CID 17704836.
  34. Koch I, Lawo V, Fels J, Vorländer M (August 2011). "Switching in the cocktail party: exploring intentional control of auditory selective attention". Journal of Experimental Psychology. Human Perception and Performance. 37 (4): 1140–7. doi:10.1037/a0022189. PMID 21553997.
  35. Shapiro KL, Caldwell J, Sorensen RE (April 1997). "Personal names and the attentional blink: a visual "cocktail party" effect". Journal of Experimental Psychology. Human Perception and Performance. 23 (2): 504–14. doi:10.1037/0096-1523.23.2.504. PMID 9104007.
  36. Bee MA, Micheyl C (August 2008). "The cocktail party problem: what is it? How can it be solved? And why should animal behaviorists study it?". Journal of Comparative Psychology. 122 (3): 235–51. doi:10.1037/0735-7036.122.3.235. PMC 2692487. PMID 18729652.
  37. "Noise-canceling headphones use AI to let a single voice through". MIT Technology Review. Retrieved 2024-05-26.
  38. Veluri, Bandhav; Itani, Malek; Chan, Justin; Yoshioka, Takuya; Gollakota, Shyamnath (2023-10-29). "Semantic Hearing: Programming Acoustic Scenes with Binaural Hearables". Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology. ACM. pp. 1–15. arXiv:2311.00320. doi:10.1145/3586183.3606779. ISBN 979-8-4007-0132-0.
  39. Zmolikova, Katerina; Delcroix, Marc; Ochiai, Tsubasa; Kinoshita, Keisuke; Černocký, Jan; Yu, Dong (May 2023). "Neural Target Speech Extraction: An overview". IEEE Signal Processing Magazine. 40 (3): 8–29. arXiv:2301.13341. Bibcode:2023ISPM...40c...8Z. doi:10.1109/MSP.2023.3240008. ISSN 1053-5888.
  40. "Noise-canceling headphones could let you pick and choose the sounds you want to hear". MIT Technology Review. Retrieved 2024-05-26.
  41. Veluri, Bandhav; Itani, Malek; Chen, Tuochao; Yoshioka, Takuya; Gollakota, Shyamnath (2024-05-11). "Look Once to Hear: Target Speech Hearing with Noisy Examples". Proceedings of the CHI Conference on Human Factors in Computing Systems. ACM. pp. 1–16. arXiv:2405.06289. doi:10.1145/3613904.3642057. ISBN 979-8-4007-0330-0.