Pre-attentive processing

Pre-attentive processing is the subconscious accumulation of information from the environment. [1] [2] All available information is pre-attentively processed; the brain then filters out and further processes what is important. [2] Information with the highest salience (the stimulus that stands out most) or the greatest relevance to what a person is currently thinking about is selected for further, more complete analysis by conscious (attentive) processing. [1] [2] Understanding how pre-attentive processing works is useful in advertising and education, and for the prediction of cognitive ability.

Pure-capture and contingent-capture

It remains unclear why certain information proceeds from pre-attentive to attentive processing while other information does not. It is generally accepted that the selection involves an interaction between the salience of a stimulus and the person's current intentions and/or goals. [3] Two models of pre-attentive processing are pure-capture and contingent-capture. [4]

The "pure-capture" model focuses on stimulus salience. [5] If certain properties of a stimulus stand out from its background, the stimulus has a higher chance of being selected for attentive processing. [4] This is sometimes referred to as "bottom-up" processing, as it is the properties of the stimuli which affect selection. Since things that affect pre-attentive processing do not necessarily correlate with things that affect attention, stimulus salience may be more important than conscious goals. For example, pre-attentive processing is slowed by sleep deprivation while attention, although less focused, is not slowed. [6] Furthermore, when searching for a particular visual stimulus among a variety of visual distractions, people often have more trouble finding what they are looking for if one or more of the distractions is particularly salient. [4] For example, it is easier to locate a bright, green circle (which is salient) among distractor circles if they are all grey (a bland color) than it is to locate a green circle among distractor circles if some are red (also salient colour). This is thought to occur because the salient red circles attract our attention away from the target green circle. However, this is difficult to prove because when given a target (like the green circle) to search for in a laboratory experiment, participants may generalize the task to searching for anything that stands out, rather than solely searching for the target. [4] If this happens, the conscious goal becomes finding anything that stands out, which would direct the person's attention towards red distractor circles as well as the green target. This means that a person's goal, rather than the salience of the stimuli, could be causing the delayed ability to find the target.

The "contingent-capture" model emphasizes the idea that a person's current intentions and/or goals affect the speed and efficiency of pre-attentive processing. [4] The brain directs an individual's attention towards stimuli with features that fit in with their goals. Consequently, these stimuli will be processed faster at the pre-attentive stage and will be more likely to be selected for attentive processing. [5] Since this model focuses on the importance of conscious processes (rather than properties of the stimulus itself) in selecting information for attentive processing, it is sometimes called "top-down" selection. [4] In support of this model, it has been shown that a target stimulus can be located faster if it is preceded by the presentation of a similar, priming stimulus. [4] For example, if an individual is shown the color green and then required to find a green circle among distractors, the initial exposure to the color will make it easier to find the green circle. This is because they are already thinking about and envisioning the color green, so when it shows up again as the green circle, their brain readily directs its attention towards it. This suggests that processing an initial stimulus speeds up a person's ability to select a similar target from pre-attentive processing. However, it could be that the speed of pre-attentive processing itself is not affected by the first stimulus, but rather that people are simply able to quickly abandon dissimilar stimuli, enabling them to re-engage to the correct target more quickly. [4] This would mean that the difference in reaction time occurs at the attentive level, after pre-attentive processing and stimulus selection has already taken place.

Vision

Information for pre-attentive processing is detected through the senses. In the visual system, receptor cells in the retina, at the back of the eye, relay the image via axons to the thalamus, specifically to the lateral geniculate nuclei. [7] The image then travels to the primary visual cortex and continues on to be processed by the visual association cortex. At each stage, the image is processed with increasing complexity. Pre-attentive processing starts with the retinal image, which is magnified as it moves from the retina to the cortex of the brain. [7] Shades of light and dark are processed in the lateral geniculate nuclei of the thalamus. [7] Simple and complex cells in the visual cortex process boundary and surface information by deciphering the image's contrast, orientation, and edges. [7] The part of the image that falls on the fovea is highly magnified in the cortex, facilitating object recognition. Images in the periphery are less clear but help to create the complete image used for scene perception. [7] [8] [9]
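
The magnification of central vision is often summarized by the inverse-linear cortical magnification law. The expression below is a standard textbook approximation for human primary visual cortex, included only for illustration; it is not drawn from the sources cited in this section.

```latex
% Inverse-linear cortical magnification law (standard approximation for
% human V1): M is mm of cortex per degree of visual angle at
% eccentricity E in degrees.
M(E) \approx \frac{M_0}{E + E_2},
\qquad M_0 \approx 17.3\ \text{mm}, \quad E_2 \approx 0.75^\circ .
```

Under these typical estimates, input near the fovea (E close to 0) receives roughly 23 mm of cortex per degree of visual field, falling below 2 mm per degree by 10 degrees of eccentricity, which is the magnification referred to above.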

Visual scene segmentation is a pre-attentive process in which stimuli are grouped together into specific objects against a background. [10] Figure and background regions of an image activate different processing centres: figures engage the lateral occipital areas (involved in object processing), while the background engages dorso-medial areas. [10] [11]

Visual pre-attentive processing uses a distinct memory mechanism. [12] When the same stimulus is presented on consecutive trials, it is perceived faster than when different stimuli are presented consecutively. [12] The theory behind this, the dimension-weighting account (DWA), holds that each presentation of a stimulus in a specific dimension (e.g., color) adds to that dimension's weight. [12] More presentations increase the weight and therefore decrease the reaction time to the stimulus. [12] The dimensional-weighting system, which governs pre-attentive processing in the visual system, codes the stimuli and directs attention to the stimulus with the most weight. [12]
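
The core of the DWA can be made concrete with a small simulation: a weight per feature dimension grows with repetition and shrinks when another dimension defines the target, and simulated reaction time falls as the relevant weight rises. The update rule and all parameter values below are illustrative assumptions rather than the published model.

```python
# Minimal sketch of the dimension-weighting account (DWA).
# Hypothetical model: each feature dimension carries a weight that grows
# when that dimension defines the target, and simulated reaction time
# falls as the relevant weight rises. All parameters are illustrative.

BASE_RT_MS = 600.0   # assumed baseline reaction time
GAIN_MS = 150.0      # assumed maximum speed-up from weighting
LEARN_RATE = 0.3     # assumed weight-update rate

def run_trials(trial_dimensions):
    """Simulate reaction times across a sequence of trials,
    where each trial's target is defined by one dimension."""
    weights = {}  # dimension -> accumulated weight in [0, 1]
    reaction_times = []
    for dim in trial_dimensions:
        w = weights.get(dim, 0.0)
        # Higher weight on the current dimension -> faster response.
        reaction_times.append(BASE_RT_MS - GAIN_MS * w)
        # Repetition increases the weight of the repeated dimension...
        weights[dim] = w + LEARN_RATE * (1.0 - w)
        # ...at the expense of the others (weights are re-balanced).
        for other in weights:
            if other != dim:
                weights[other] *= 1.0 - LEARN_RATE
    return reaction_times

# RTs fall with repetition, then rise again after the dimension switch.
print(run_trials(["color", "color", "color", "orientation", "color"]))
```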

Visual pre-attentive processing is also involved in the perception of emotion. [13] Human beings are social creatures and are very adept at reading facial expressions. We have the ability to unconsciously process emotional stimuli and to associate a stimulus, such as a face, with meaning. [13]

Audition

The auditory system is also very important in accumulating information for pre-attentive processing. When incoming sound waves strike a person's eardrum, it vibrates, sending signals via the auditory nerve to the brain for pre-attentive processing. The ability to adequately filter information from pre-attentive processing to attentive processing is necessary for the normal development of social skills. [14] For acoustic pre-attentive processing, the temporal cortex was long believed to be the main site of activation; however, recent evidence has indicated involvement of the frontal cortex as well. [15] [16] The frontal cortex is predominantly associated with attentional processing, but it may also be involved in pre-attentive processing of complex or salient acoustic stimuli. [10] [15] For example, detecting slight variations in complex musical patterns has been shown to activate the right ventromedial prefrontal cortex. [15]
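
Deviance detection of this kind is typically studied with oddball sequences, in which a rare deviant tone among repeated standards elicits a mismatch response even without attention (see mismatch negativity below). The sketch models a detector that keeps a running estimate of the "standard" tone and flags tones that stray from it; the threshold, update rate, and function name are arbitrary illustrative choices, not a published algorithm.

```python
# Illustrative sketch of pre-attentive auditory deviance detection:
# keep a running "standard" estimate of tone frequency and flag tones
# that deviate from it, roughly as in an oddball paradigm.
# Threshold and update rate are arbitrary illustrative choices.

UPDATE_RATE = 0.5        # how quickly the standard estimate adapts
DEVIANCE_THRESHOLD = 50  # Hz difference treated as deviant

def detect_deviants(frequencies_hz):
    """Flag tones that stray from a running 'standard' estimate."""
    standard = frequencies_hz[0]  # first tone initializes the standard
    deviants = []
    for i, freq in enumerate(frequencies_hz[1:], start=1):
        if abs(freq - standard) > DEVIANCE_THRESHOLD:
            deviants.append(i)  # would elicit a mismatch-like response
        else:
            # only standards update the running estimate
            standard += UPDATE_RATE * (freq - standard)
    return deviants

# 1000 Hz standards with a rare 1200 Hz deviant in position 3.
print(detect_deviants([1000, 1000, 1000, 1200, 1000, 1000]))  # -> [3]
```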

It has been shown that there is some degree of lateralization in acoustic pre-attentive processing. [17] The left hemisphere responds more to temporal acoustic information, whereas the right hemisphere responds more to the frequency content of auditory information. [17] There is also lateralization in the perception of speech, which is left-hemisphere dominant in pre-attentive processing. [18]

Multisensory integration

Vision, sound, smell, touch, and taste are processed together pre-attentively when more than one sensory stimulus is present. [19] This multisensory integration increases activity in the superior temporal sulcus (STS), thalamus, and superior colliculus. [19] Specifically, the pre-attentive process of multisensory integration works jointly with attention to activate brain regions such as the STS. [19] Multisensory integration seems to give a person the advantage of greater comprehension when auditory and visual stimuli are processed together. [19] However, multisensory integration is affected by what a person pays attention to and by their current goals. [19]
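
A standard way to formalize the advantage of combining senses, common in the wider literature though not taken from the studies cited here, is reliability-weighted (inverse-variance) averaging: each modality's estimate is weighted by its reliability, and the combined estimate is more reliable than either sense alone. The function name and numbers below are made up for illustration.

```python
# Illustrative reliability-weighted (inverse-variance) cue combination,
# a standard model of multisensory integration. The numbers are made up.

def integrate(estimate_a, var_a, estimate_b, var_b):
    """Combine two noisy estimates of the same quantity by weighting
    each with its reliability (1 / variance)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)
    w_b = 1 - w_a
    combined = w_a * estimate_a + w_b * estimate_b
    combined_var = 1 / (1 / var_a + 1 / var_b)  # < min(var_a, var_b)
    return combined, combined_var

# A precise visual location estimate and a noisy auditory one:
location, variance = integrate(10.0, 1.0, 14.0, 4.0)
print(location, variance)  # -> 10.8 0.8: closer to the reliable visual
# cue, and more reliable than either sense alone.
```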

Plasticity

Training can lead to changes in the activity and brain structures involved in pre-attentive processing. [15] Professional musicians, in particular, show larger ERP (event-related potential) responses to deviations in auditory stimuli and show possibly related structural differences in their brains (in Heschl's gyrus, the corpus callosum, and the pyramidal tracts). [15] This plasticity of pre-attentive processing has also been shown in perception. Using EEG (electroencephalography) methods to study pre-attentive colour perception, a study observed how readily bilinguals adapted to the linguistic constructs of a different culture. [20] This suggests that pre-attentive processes are not hard-wired but malleable. [20]

Deficits

Deficits in the transition from pre-attentive processing to attentive processing are associated with disorders such as schizophrenia, Alzheimer's disease, and autism. [14] [16] [21] Abnormal prefrontal cortex function in individuals with schizophrenia results in an inability to use pre-attentive processing to recognize familiar auditory stimuli as non-threatening. [16] Individuals with schizophrenia who have positive symptoms show a greater capability of pre-attentively processing emotionally negative odors. [22] This heightened ability to distinguish odors seems to be involved in their hypersensitivity to threatening situations. [22] Alzheimer's disease is typically thought to affect high-level brain functioning (like memory) but can also negatively affect visual pre-attentive processing. [21] Some of the difficulties with social interaction seen in autistic individuals may be due to an impairment in the filtering of pre-attentive auditory information. [14] For example, they often have difficulty following a conversation because they cannot distinguish which parts are important and are easily distracted by other sounds.

Related Research Articles

<span class="mw-page-title-main">Attention</span> Psychological focus, perception and prioritising discrete information

Attention, or focus, is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is the selective concentration on discrete information, either subjectively or objectively. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck, in terms of the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data stream of about 1 megabyte per second can enter the bottleneck, leading to inattentional blindness.

Auditory imagery is a form of mental imagery that is used to organize and analyze sounds when no external auditory stimulus is present. This form of imagery is divided into modalities such as verbal imagery and musical imagery, and it differs from other sensory imagery such as motor imagery or visual imagery. The vividness and detail of auditory imagery can vary from person to person depending on their background and the condition of their brain. Through the research developed to understand auditory imagery, behavioral neuroscientists have found that the auditory images developed in subjects' minds are generated in real time and consist of fairly precise information about quantifiable auditory properties as well as melodic and harmonic relationships. These findings have gained confirmation and recognition with the arrival of positron emission tomography and fMRI scans, which can confirm a physiological and psychological correlation.

Multisensory integration, also known as multimodal integration, is the study of how information from the different sensory modalities may be integrated by the nervous system. A coherent representation of objects combining modalities enables animals to have meaningful perceptual experiences. Indeed, multisensory integration is central to adaptive behavior because it allows animals to perceive a world of coherent perceptual entities. Multisensory integration also deals with how different sensory modalities interact with one another and alter each other's processing.

Sensory processing is the process that organizes and distinguishes sensation from one's own body and the environment, thus making it possible to use the body effectively within the environment. Specifically, it deals with how the brain processes inputs from multiple sensory modalities, such as proprioception, vision, audition, touch, olfaction, the vestibular sense, interoception, and taste, into usable functional outputs.

Sensory gating describes neural processes of filtering out redundant or irrelevant stimuli from all possible environmental stimuli reaching the brain. Also referred to as gating or filtering, sensory gating prevents an overload of information in the higher cortical centers of the brain. Sensory gating can also occur in different forms through changes in both perception and sensation, affected by various factors such as "arousal, recent stimulus exposure, and selective attention."

Salience is the property by which something stands out. Salient events are an attentional mechanism by which organisms learn and survive; those organisms can focus their limited perceptual and cognitive resources on the pertinent subset of the sensory data available to them.

Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature among other objects or features. Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of using visual search can be seen in everyday life, such as when one is picking out a product on a supermarket shelf, when animals are searching for food among piles of leaves, when trying to find a friend in a large crowd of people, or simply when playing visual search games such as Where's Wally?

The mismatch negativity (MMN) or mismatch field (MMF) is a component of the event-related potential (ERP) to an odd stimulus in a sequence of stimuli. It arises from electrical activity in the brain and is studied within the field of cognitive neuroscience and psychology. It can occur in any sensory system, but has most frequently been studied for hearing and for vision, in which case it is abbreviated to vMMN. The (v)MMN occurs after an infrequent change in a repetitive sequence of stimuli. For example, a rare deviant (d) stimulus can be interspersed among a series of frequent standard (s) stimuli. In hearing, a deviant sound can differ from the standards in one or more perceptual features such as pitch, duration, loudness, or location. The MMN can be elicited regardless of whether someone is paying attention to the sequence. During auditory sequences, a person can be reading or watching a silent subtitled movie, yet still show a clear MMN. In the case of visual stimuli, the MMN occurs after an infrequent change in a repetitive sequence of images.

Echoic memory is the sensory memory register specific to auditory information (sounds). Once an auditory stimulus is heard, it is stored in memory so that it can be processed and understood. Unlike most visual memory, where a person can choose how long to view the stimulus and can reassess it repeatedly, auditory stimuli are usually transient and cannot be reassessed. Since echoic memories are heard only once, they are stored for slightly longer periods of time than iconic memories. Auditory stimuli are received by the ear one at a time before they can be processed and understood.

<span class="mw-page-title-main">Erich Schröger</span> German neuroscientist (born 1958)

Erich Schröger is a German psychologist and neuroscientist.

In neuroscience, the N100 or N1 is a large, negative-going evoked potential measured by electroencephalography; it peaks in adults between 80 and 120 milliseconds after the onset of a stimulus, and is distributed mostly over the fronto-central region of the scalp. It is elicited by any unpredictable stimulus in the absence of task demands. It is often referred to with the following P200 evoked potential as the "N100-P200" or "N1-P2" complex. While most research focuses on auditory stimuli, the N100 also occurs for visual, olfactory, heat, pain, balance, respiration-blocking, and somatosensory stimuli.

In neuroscience, the visual P200 or P2 is a waveform component or feature of the event-related potential (ERP) measured at the human scalp. Like other potential changes measurable from the scalp, this effect is believed to reflect the post-synaptic activity of a specific neural process. The P2 component, also known as the P200, is so named because it is a positive-going electrical potential that peaks at about 200 milliseconds after the onset of some external stimulus. This component is often distributed around the centro-frontal and the parieto-occipital areas of the scalp. It is generally found to be maximal around the vertex of the scalp; however, some topographical differences have been noted in ERP studies of the P2 under different experimental conditions.

<span class="mw-page-title-main">Visual N1</span>

The visual N1 is a visual evoked potential, a type of event-related electrical potential (ERP), that is produced in the brain and recorded on the scalp. The N1 is so named to reflect the polarity and typical timing of the component. The "N" indicates that the polarity of the component is negative with respect to an average mastoid reference. The "1" originally indicated that it was the first negative-going component, but it now better indexes the typical peak of this component, which is around 150 to 200 milliseconds post-stimulus. The N1 deflection may be detected at most recording sites, including the occipital, parietal, central, and frontal electrode sites. Although the visual N1 is widely distributed over the entire scalp, it peaks earlier over frontal than posterior regions of the scalp, suggestive of distinct neural and/or cognitive correlates. The N1 is elicited by visual stimuli, and is part of the visual evoked potential – a series of voltage deflections observed in response to visual onsets, offsets, and changes. Both the right and left hemispheres generate an N1, but the laterality of the N1 depends on whether a stimulus is presented centrally, laterally, or bilaterally. When a stimulus is presented centrally, the N1 is bilateral. When presented laterally, the N1 is larger, earlier, and contralateral to the visual field of the stimulus. When two visual stimuli are presented, one in each visual field, the N1 is bilateral. In the latter case, the asymmetry of the N1 is modulated by attention. Additionally, its amplitude is influenced by selective attention, and thus it has been used to study a variety of attentional processes.

<span class="mw-page-title-main">P3b</span>

The P3b is a subcomponent of the P300, an event-related potential (ERP) component that can be observed in human scalp recordings of brain electrical activity. The P3b is a positive-going amplitude peaking at around 300 ms, though the peak will vary in latency from 250 to 500 ms or later depending upon the task and on the individual subject response. Amplitudes are typically highest on the scalp over parietal brain areas.

<span class="mw-page-title-main">Oddball paradigm</span> Psychology research paradigm

The oddball paradigm is an experimental design used within psychology research. The oddball paradigm relies on the brain's sensitivity to rare deviant stimuli presented pseudo-randomly in a series of repeated standard stimuli. The oddball paradigm has a wide selection of stimulus types, including stimuli such as sound duration, frequency, intensity, phonetic features, complex music, or speech sequences. The reaction of the participant to this "oddball" stimulus is recorded.

<span class="mw-page-title-main">Visual capture</span>

In psychology, visual capture is the dominance of vision over other sense modalities in creating a percept. In this process, the visual senses influence the other parts of the somatosensory system, to result in a perceived environment that is not congruent with the actual stimuli. Through this phenomenon, the visual system is able to disregard what other information a different sensory system is conveying, and provide a logical explanation for whatever output the environment provides. Visual capture allows one to interpret the location of sound as well as the sensation of touch without actually relying on those stimuli but rather creating an output that allows the individual to perceive a coherent environment.

Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.

The Colavita visual dominance effect refers to the phenomenon in which study participants respond more often to the visual component of an audiovisual stimulus, when presented with bimodal stimuli.

Biased competition theory advocates the idea that each object in the visual field competes for cortical representation and cognitive processing. This theory suggests that the process of visual processing can be biased by other mental processes such as bottom-up and top-down systems which prioritize certain features of an object or whole items for attention and further processing. Biased competition theory is, simply stated, the competition of objects for processing. This competition can be biased, often toward the object that is currently attended in the visual field, or alternatively toward the object most relevant to behavior.

<span class="mw-page-title-main">Salience network</span> Large-scale brain network involved in detecting and attending to relevant stimuli

The salience network (SN), also known anatomically as the midcingulo-insular network (M-CIN) or ventral attention network, is a large scale network of the human brain that is primarily composed of the anterior insula (AI) and dorsal anterior cingulate cortex (dACC). It is involved in detecting and filtering salient stimuli, as well as in recruiting relevant functional networks. Together with its interconnected brain networks, the SN contributes to a variety of complex functions, including communication, social behavior, and self-awareness through the integration of sensory, emotional, and cognitive information.

References

  1. Atienza, M., Cantero, J. L., & Escera, C. (2001). Auditory information processing during human sleep as revealed by event-related brain potentials. Clinical Neurophysiology, 112(11), 2031-2045.
  2. Van der Heijden, A. H. C. (1996). Perception for selection, selection for action, and action for perception. Visual Cognition, 3(4), 357-361.
  3. Egeth, H. E., & Yantis, S. (1997). Visual attention: Control, representation, and time course. Annual Review of Psychology, 48, 269-297.
  4. Folk, C. L., & Remington, R. (2006). Top-down modulation of preattentive processing: Testing the recovery account of contingent capture. Visual Cognition, 14, 445-465.
  5. Tollner, T., Zehetleitner, M., Gramann, K., & Muller, H. J. (2010). Top-down weighting of visual dimensions: Behavioral and electrophysiological evidence. Vision Research, 50(14), 1372-1381.
  6. Raz, A., Deouell, L. Y., & Bentin, S. (2001). Is pre-attentive processing compromised by prolonged wakefulness? Effects of total sleep deprivation on the mismatch negativity. Psychophysiology, 38, 787-795.
  7. Meng, X., & Wang, Z. (2009). A pre-attentive model of biological vision. IEEE International Conference on Intelligent Computing and Intelligent Systems, 3, 154-158.
  8. Klein, S. A., Carney, T., Barghout-Stein, L., & Tyler, C. W. (1997). Seven models of masking. In Electronic Imaging '97 (pp. 13-24). International Society for Optics and Photonics.
  9. Barghout-Stein, L. (1999). On differences between peripheral and foveal pattern masking (Doctoral dissertation). University of California, Berkeley.
  10. Appelbaum, L. G., & Norcia, A. M. (2009). Attentive and pre-attentive aspects of figural processing. Journal of Vision, 9(11), 1-12. doi:10.1167/9.11.18
  11. Kourtzi, Z., & Kanwisher, N. (2000). Cortical regions involved in perceiving object shape. Journal of Neuroscience, 20, 3310-3318.
  12. Krummenacher, J., Grubert, A., & Müller, H. J. (2010). Inter-trial and redundant-signals effects in visual search and discrimination tasks: Separable pre-attentive and post-selective effects. Vision Research, 50(14), 1382-1395. doi:10.1016/j.visres.2010.04.006
  13. Balconi, M., & Mazza, G. (2009). Consciousness and emotion: ERP modulation and attentive vs. pre-attentive elaboration of emotional facial expressions by backward masking. Motivation and Emotion, 33, 113-124.
  14. Seri, S., Pisani, F., Thai, J. N., & Cerquiglini, A. (2007). Pre-attentive auditory sensory processing in autistic spectrum disorder: Are electromagnetic measurements telling us a coherent story? International Journal of Psychophysiology, 63(2), 159-163.
  15. Habermeyer, B., Herdener, M., Esposito, F., Hilti, C. C., Klarhofer, M., di Salle, F., Wetzel, S., et al. (2009). Neural correlates of pre-attentive processing of pattern deviance in professional musicians. Human Brain Mapping, 30, 3736-3747.
  16. Klamer, D., Svensson, L., Fejgin, K., & Palson, E. (2011). Prefrontal NMDA receptor antagonism reduces impairments in pre-attentive information processing. European Neuropsychopharmacology, 21(3), 248-253.
  17. Zaehle, T., Jancke, L., Herrmann, C. S., & Meyer, M. (2009). Pre-attentive spectro-temporal feature processing in the human auditory system. Brain Topography, 22, 97-108.
  18. Sorokin, A., Alku, P., & Kujala, T. (2010). Change and novelty detection in speech and non-speech sound streams. Brain Research, 1327, 77-90. doi:10.1016/j.brainres.2010.02.052
  19. Fairhall, S. L., & Macaluso, E. (2009). Spatial attention can modulate audiovisual integration at multiple cortical and subcortical sites. European Journal of Neuroscience, 29, 1247-1257.
  20. Athanasopoulos, P., Dering, B., Wiggett, A., Kuipers, J., & Thierry, G. (2010). Perceptual shift in bilingualism: Brain potentials reveal plasticity in pre-attentive colour perception. Cognition, 116(3), 437-443. doi:10.1016/j.cognition.2010.05.016
  21. Tales, A., Haworth, J., Wilcock, G., Newton, P., & Butler, S. (2008). Visual mismatch negativity highlights abnormal pre-attentive visual processing in mild cognitive impairment and Alzheimer's disease. Neuropsychologia, 46(5), 1224-1232.
  22. Pause, B. M., Hellman, G., Goder, R., Aldenhoff, J. B., & Ferstl, R. (2008). Increased processing speed for emotionally negative odors in schizophrenia. International Journal of Psychophysiology, 70, 16-22.