Pre-attentive processing

Pre-attentive processing is the subconscious accumulation of information from the environment. [1] [2] All available information is pre-attentively processed; the brain then filters and processes what is important. [2] Information with the highest salience (the stimulus that stands out most) or the greatest relevance to what a person is currently thinking about is selected for fuller analysis by conscious (attentive) processing. [1] [2] Understanding how pre-attentive processing works is useful in advertising, in education, and in predicting cognitive ability.

Pure-capture and contingent-capture

Why certain information proceeds from pre-attentive to attentive processing while other information does not remains unclear. It is generally accepted that selection involves an interaction between the salience of a stimulus and a person's current intentions and/or goals. [3] Two models of pre-attentive processing are pure-capture and contingent-capture. [4]

The "pure-capture" model focuses on stimulus salience. [5] If certain properties of a stimulus stand out from its background, the stimulus has a higher chance of being selected for attentive processing. [4] This is sometimes referred to as "bottom-up" processing, since it is the properties of the stimuli themselves that drive selection. Because the factors that affect pre-attentive processing do not necessarily correlate with those that affect attention, stimulus salience may matter more than conscious goals. For example, pre-attentive processing is slowed by sleep deprivation, while attention, although less focused, is not slowed. [6] Furthermore, when searching for a particular visual stimulus among a variety of visual distractions, people often have more trouble finding the target if one or more of the distractors is particularly salient. [4] For example, it is easier to locate a bright green circle (which is salient) among uniformly grey distractor circles (a bland colour) than among distractors that include red circles (also a salient colour). This is thought to occur because the salient red circles draw attention away from the target green circle. However, this is difficult to prove: when given a target (like the green circle) to search for in a laboratory experiment, participants may generalize the task to searching for anything that stands out, rather than solely for the target. [4] If this happens, the conscious goal becomes finding anything salient, which would direct attention towards the red distractor circles as well as the green target. The delay in finding the target could then be caused by the person's goal rather than by the salience of the stimuli.
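
The pure-capture idea, that selection is driven by how much an item's features differ from its neighbours, can be illustrated with a toy saliency computation. This is a hypothetical sketch, not a model from the cited work; the single feature value per item and the distance-from-the-mean score are assumptions made for illustration.

```python
# Toy bottom-up saliency: an item's salience is how far its feature
# value lies from the average of the other items. Under pure capture,
# the most salient item wins attention regardless of the observer's goal.

def salience(items):
    """Return a salience score per item: distance from the mean of the rest."""
    scores = []
    for i, v in enumerate(items):
        rest = [x for j, x in enumerate(items) if j != i]
        scores.append(abs(v - sum(rest) / len(rest)))
    return scores

def captured_index(items):
    """Index of the item that wins the pure-capture competition."""
    scores = salience(items)
    return scores.index(max(scores))

# Encode colour as a single feature value: 0 = grey, 1 = green, 2 = red.
homogeneous = [0, 0, 0, 1, 0]  # green target among grey distractors
with_red = [0, 0, 2, 1, 0]     # a red distractor is now the most salient item
```

With the homogeneous display the green target wins the competition, but adding a red distractor captures selection away from it, mirroring the slowed search described above.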

The "contingent-capture" model emphasizes the idea that a person's current intentions and/or goals affect the speed and efficiency of pre-attentive processing. [4] The brain directs an individual's attention towards stimuli whose features fit their goals. Consequently, these stimuli are processed faster at the pre-attentive stage and are more likely to be selected for attentive processing. [5] Since this model stresses the role of conscious processes (rather than properties of the stimulus itself) in selecting information for attentive processing, it is sometimes called "top-down" selection. [4] In support of this model, it has been shown that a target stimulus can be located faster if it is preceded by the presentation of a similar, priming stimulus. [4] For example, if an individual is shown the colour green and then required to find a green circle among distractors, the initial exposure to the colour makes the green circle easier to find. Because the person is already thinking about and envisioning green, the brain readily directs attention towards the green circle when it appears. This suggests that processing an initial stimulus speeds up a person's ability to select a similar target during pre-attentive processing. However, it could be that the speed of pre-attentive processing itself is unaffected by the first stimulus, and that people are simply able to abandon dissimilar stimuli quickly, allowing them to re-engage with the correct target sooner. [4] This would mean that the difference in reaction time arises at the attentive level, after pre-attentive processing and stimulus selection have already taken place.
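
Contingent capture can be sketched in the same toy style by letting the current goal (or a prime) boost the pre-attentive weight of matching features. Again, this is a hypothetical illustration; the multiplicative boost and the numeric salience values are assumptions, not parameters from the cited studies.

```python
# Toy contingent capture: each item's priority is its bottom-up salience
# multiplied by a top-down boost when its feature matches the current goal.
# All numbers are illustrative only.

def priority(items, goal_feature, boost=3.0):
    """Top-down-weighted priority: salience * (boost if feature matches goal)."""
    scores = []
    for feature, salience in items:
        weight = boost if feature == goal_feature else 1.0
        scores.append(salience * weight)
    return scores

def selected(items, goal_feature):
    """Feature of the item selected for attentive processing."""
    scores = priority(items, goal_feature)
    return items[scores.index(max(scores))][0]

# A highly salient red distractor loses to a goal-matching green target.
display = [("red", 2.0), ("green", 1.0), ("grey", 0.2)]
```

When the goal is "green", the less salient green item outcompetes the red distractor; with an unrelated goal, raw salience decides, which is the contrast between the two models described above.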

Vision

Information for pre-attentive processing is gathered through the five senses. In the visual system, light falling on the retina at the back of the eye is converted into neural signals that travel via retinal ganglion-cell axons to the thalamus, specifically the lateral geniculate nuclei. [7] The image then travels to the primary visual cortex and on to the visual association cortex, being processed with increasing complexity at each stage. Pre-attentive processing starts with the retinal image, which is magnified as it passes from the retina to the cortex of the brain. [7] Shades of light and dark are processed in the lateral geniculate nuclei of the thalamus. [7] Simple and complex cells in the visual cortex extract boundary and surface information by deciphering the image's contrast, orientation, and edges. [7] The part of the image falling on the fovea is represented at high magnification, facilitating object recognition; images in the periphery are less clear but help to build the complete image used for scene perception. [7] [8] [9]

Visual scene segmentation is a pre-attentive process in which stimuli are grouped into specific objects set against a background. [10] Figure and background regions of an image activate different processing centres: figures engage the lateral occipital areas (which are involved in object processing), while the background engages dorso-medial areas. [10] [11]

Visual pre-attentive processing uses a distinct memory mechanism. [12] When the same stimulus is presented consecutively, it is perceived faster than when different stimuli are presented consecutively. [12] The theory behind this is the dimension-weighting account (DWA): each presentation of a stimulus in a specific dimension (e.g. colour) adds to that dimension's weight. [12] More presentations increase the weight and therefore decrease the reaction time to the stimulus. [12] This dimensional-weighting system, which governs pre-attentive processing in the visual system, codes stimuli and directs attention to the stimulus with the greatest weight. [12]
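
The weight-accumulation idea behind the DWA can be illustrated with a short simulation. This is a toy sketch only: the baseline reaction time, speed-up per unit of weight, and saturation point are invented constants, not values fitted to the data in the cited papers.

```python
# Toy sketch of the dimension-weighting account (DWA): repeating the
# same stimulus dimension accumulates weight, which lowers the simulated
# reaction time on later trials; switching dimensions forfeits the benefit.
# All constants are illustrative, not fitted to experimental data.

BASE_RT_MS = 600   # hypothetical baseline reaction time
GAIN_MS = 40       # hypothetical speed-up per unit of accumulated weight
MAX_WEIGHT = 5     # saturation point for accumulated weight

def simulate_reaction_times(trial_dimensions):
    """Return a simulated RT (ms) for each trial in a sequence.

    trial_dimensions: list of dimension labels, e.g. ["colour", "colour", ...]
    """
    weights = {}
    rts = []
    for dim in trial_dimensions:
        w = weights.get(dim, 0)
        rts.append(BASE_RT_MS - GAIN_MS * w)       # more weight, faster response
        weights[dim] = min(w + 1, MAX_WEIGHT)      # repetition adds weight, capped
    return rts

# Three colour repetitions speed up responses; the orientation trial starts
# from baseline because that dimension carries no accumulated weight yet.
rts = simulate_reaction_times(["colour", "colour", "colour", "orientation"])
```

Running the sequence above yields progressively faster responses on the repeated colour trials and a return to baseline when the dimension switches, which is the pattern the DWA is meant to capture.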

Visual pre-attentive processing is also involved in the perception of emotion. [13] Human beings are social creatures and are very adept at reading facial expressions. We can unconsciously process emotional stimuli and associate a stimulus, such as a face, with meaning. [13]

Audition

The auditory system is also important in accumulating information for pre-attentive processing. When incoming sound waves strike the eardrum, it vibrates, sending signals via the auditory nerve to the brain for pre-attentive processing. The ability to adequately filter information from pre-attentive to attentive processing is necessary for the normal development of social skills. [14] The temporal cortex was long believed to be the main site of activation for acoustic pre-attentive processing; however, recent evidence indicates involvement of the frontal cortex as well. [15] [16] The frontal cortex is predominantly associated with attentional processing, but it may also be involved in pre-attentive processing of complex and/or salient acoustic stimuli. [10] [15] For example, detecting slight variations in complex musical patterns has been shown to activate the right ventromedial prefrontal cortex. [15]

Acoustic pre-attentive processing also shows some degree of lateralization. [17] The left hemisphere responds more to temporal acoustic information, whereas the right hemisphere responds more to the frequency content of auditory information. [17] Speech perception is likewise lateralized, with left-hemisphere dominance for pre-attentive processing. [18]

Multisensory integration

Vision, sound, smell, touch, and taste are processed together pre-attentively when stimuli from more than one sense are present. [19] This multisensory integration increases activity in the superior temporal sulcus (STS), thalamus, and superior colliculus. [19] Specifically, the pre-attentive process of multisensory integration works jointly with attention to activate brain regions such as the STS. [19] Multisensory integration seems to give a person the advantage of greater comprehension when auditory and visual stimuli are processed together. [19] It is important to note, however, that multisensory integration is affected by what a person pays attention to and by their current goals. [19]

Plasticity

Training can change the activity and structure of brain regions involved in pre-attentive processing. [15] Professional musicians, in particular, show larger ERP (event-related potential) responses to deviations in auditory stimuli and have possibly related structural differences in their brains (Heschl's gyrus, corpus callosum, and pyramidal tracts). [15] This plasticity of pre-attentive processing has also been shown in perception: using EEG (electroencephalography) to study pre-attentive colour perception, one study observed how readily bilinguals adapt to the linguistic constructs of a different culture. [20] Pre-attentive processes are thus not hard-wired but malleable. [20]

Deficits

Deficits in the transition from pre-attentive to attentive processing are associated with disorders such as schizophrenia, Alzheimer's disease, and autism. [14] [16] [21] Abnormal prefrontal cortex function in schizophrenia results in an inability to use pre-attentive processing to recognize familiar auditory stimuli as non-threatening. [16] Schizophrenics with positive symptoms are better at pre-attentively processing emotionally negative odors. [22] This heightened ability to distinguish odors seems to be involved in their hypersensitivity to threatening situations. [22] Alzheimer's disease is typically thought of as affecting high-level brain functions (such as memory) but can also impair visual pre-attentive processing. [21] Some of the difficulties with social interaction seen in autism may be due to impaired filtering of pre-attentive auditory information. [14] For example, autistic people often have difficulty following a conversation because they cannot distinguish which parts are important and are easily distracted by other sounds.

References

  1. Atienza, M., Cantero, J. L., & Escera, C. (2001). Auditory information processing during human sleep as revealed by event-related brain potentials. Clinical Neurophysiology, 112(11), 2031-2045.
  2. Van der Heijden, A. H. C. (1996). Perception for selection, selection for action, and action for perception. Visual Cognition, 3(4), 357-361.
  3. Egeth, H. E., & Yantis, S. (1997). Visual attention: Control, representation, and time course. Annual Review of Psychology, 48, 269-297.
  4. Folk, C. L., & Remington, R. (2006). Top-down modulation of preattentive processing: Testing the recovery account of contingent capture. Visual Cognition, 14, 445-465.
  5. Tollner, T., Zehetleitner, M., Gramann, K., & Muller, H. J. (2010). Top-down weighting of visual dimensions: Behavioral and electrophysiological evidence. Vision Research, 50(14), 1372-1381.
  6. Raz, A., Deouell, L. Y., & Bentin, S. (2001). Is pre-attentive processing compromised by prolonged wakefulness? Effects of total sleep deprivation on the mismatch negativity. Psychophysiology, 38, 787-795.
  7. Meng, X., & Wang, Z. (2009). A pre-attentive model of biological vision. IEEE International Conference on Intelligent Computing and Intelligent Systems, 3, 154-158.
  8. Klein, S. A., Carney, T., Barghout-Stein, L., & Tyler, C. W. (1997). Seven models of masking. In Electronic Imaging '97 (pp. 13-24). International Society for Optics and Photonics.
  9. Barghout-Stein, L. (1999). On differences between peripheral and foveal pattern masking (doctoral dissertation). University of California, Berkeley.
  10. Appelbaum, L. G., & Norcia, A. M. (2009). Attentive and pre-attentive aspects of figural processing. Journal of Vision, 9(11), 1-12. doi:10.1167/9.11.18
  11. Kourtzi, Z., & Kanwisher, N. (2000). Cortical regions involved in perceiving object shape. Journal of Neuroscience, 20, 3310-3318.
  12. Krummenacher, J., Grubert, A., & Müller, H. J. (2010). Inter-trial and redundant-signals effects in visual search and discrimination tasks: Separable pre-attentive and post-selective effects. Vision Research, 50(14), 1382-1395. doi:10.1016/j.visres.2010.04.006
  13. Balconi, M., & Mazza, G. (2009). Consciousness and emotion: ERP modulation and attentive vs. pre-attentive elaboration of emotional facial expressions by backward masking. Springer Science, 33, 113-124.
  14. Seri, S., Pisani, F., Thai, J. N., & Cerquiglini, A. (2007). Pre-attentive auditory sensory processing in autistic spectrum disorder: Are electromagnetic measurements telling us a coherent story? International Journal of Psychophysiology, 63(2), 159-163.
  15. Habermeyer, B., Herdener, M., Esposito, F., Hilti, C. C., Klarhofer, M., di Salle, F., Wetzel, S., et al. (2009). Neural correlates of pre-attentive processing of pattern deviance in professional musicians. Human Brain Mapping, 30, 3736-3747.
  16. Klamer, D., Svensson, L., Fejgin, K., & Palson, E. (2011). Prefrontal NMDA receptor antagonism reduces impairments in pre-attentive information processing. European Neuropsychopharmacology, 21(3), 248-253.
  17. Zaehle, T., Jancke, L., Herrmann, C. S., & Meyer, M. (2009). Pre-attentive spectro-temporal feature processing in the human auditory system. Brain Topography, 22, 97-108.
  18. Sorokin, A., Alku, P., & Kujala, T. (2010). Change and novelty detection in speech and non-speech sound streams. Brain Research, 1327, 77-90. doi:10.1016/j.brainres.2010.02.052
  19. Fairhall, S. L., & Macaluso, E. (2009). Spatial attention can modulate audiovisual integration at multiple cortical and subcortical sites. European Journal of Neuroscience, 29, 1247-1257.
  20. Athanasopoulos, P., Dering, B., Wiggett, A., Kuipers, J., & Thierry, G. (2010). Perceptual shift in bilingualism: Brain potentials reveal plasticity in pre-attentive colour perception. Cognition, 116(3), 437-443. doi:10.1016/j.cognition.2010.05.016
  21. Tales, A., Haworth, J., Wilcock, G., Newton, P., & Butler, S. (2008). Visual mismatch negativity highlights abnormal pre-attentive visual processing in mild cognitive impairment and Alzheimer's disease. Neuropsychologia, 46(5), 1224-1232.
  22. Pause, B. M., Hellman, G., Goder, R., Aldenhoff, J. B., & Ferstl, R. (2008). Increased processing speed for emotionally negative odors in schizophrenia. International Journal of Psychophysiology, 70, 16-22.