Visual search is a type of perceptual task requiring attention that typically involves an active scan of the visual environment for a particular object or feature (the target) among other objects or features (the distractors). [1] Visual search can take place with or without eye movements. The ability to consciously locate an object or target amongst a complex array of stimuli has been extensively studied over the past 40 years. Practical examples of visual search occur throughout everyday life: picking out a product on a supermarket shelf, animals searching for food among piles of leaves, trying to find a friend in a large crowd of people, or simply playing visual search games such as Where's Wally?
Much of the literature on visual search has used reaction time to measure the time it takes to detect the target amongst its distractors. An example of this could be a green square (the target) amongst a set of red circles (the distractors). However, reaction time measurements do not always distinguish the role of attention from other factors: a long reaction time might result from difficulty directing attention to the target, or from slowed decision-making or motor responses once attention has already been directed to the target and the target has been detected. Many visual search paradigms have therefore used eye movement as a means to measure the degree of attention given to stimuli. [2] [3] However, the eyes can move independently of attention, and therefore eye movement measures do not completely capture the role of attention. [4] [5]
Feature search (also known as "disjunctive" or "efficient" search) [6] is a visual search process that focuses on identifying a previously requested target amongst distractors that differ from the target by a unique visual feature such as color, shape, orientation, or size. [7] An example of a feature search task is asking a participant to identify a white square (target) surrounded by black squares (distractors). [6] In this type of visual search, the distractors all share the same visual features. [7] The efficiency of feature search with regard to reaction time (RT) and accuracy depends on the "pop out" effect, [8] bottom-up processing, [8] and parallel processing. [7] However, the efficiency of feature search is unaffected by the number of distractors present. [7]
The "pop out" effect is an element of feature search that characterizes the target's ability to stand out from surrounding distractors due to its unique feature. [8] Bottom-up processing, which is the processing of information that depends on input from the environment, [8] explains how one utilizes feature detectors to process characteristics of the stimuli and differentiate a target from its distractors. [7] This draw of visual attention towards the target due to bottom-up processes is known as "saliency." [9] Lastly, parallel processing is the mechanism that then allows one's feature detectors to work simultaneously in identifying the target. [7]
Conjunction search (also known as inefficient or serial search) [6] is a visual search process that focuses on identifying a previously requested target surrounded by distractors possessing no distinct features from the target itself. [10] An example of a conjunction search task is having a person identify a green X (target) amongst distractors composed of purple Xs (same shape) and green Os (same color). [10] Unlike feature search, conjunction search involves distractors (or groups of distractors) that may differ from each other but exhibit at least one common feature with the target. [10] The efficiency of conjunction search with regard to reaction time (RT) and accuracy depends on the distractor-ratio [10] and the number of distractors present. [7] As the distractors come to represent the target's individual features more equally amongst themselves (the distractor-ratio effect), reaction time (RT) increases and accuracy decreases. [10] As the number of distractors present increases, reaction time (RT) increases and accuracy decreases. [6] However, with practice, the reaction time (RT) costs of conjunction search tend to diminish. [11] In the early stages of processing, conjunction search utilizes bottom-up processes to identify pre-specified features amongst the stimuli. [7] These processes are then overtaken by a more serial process of consciously evaluating the indicated features of the stimuli [7] in order to properly allocate one's focal spatial attention towards the stimulus that most accurately represents the target. [12]
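The contrast between the two search modes is often summarized by plotting reaction time against the number of items in the display: roughly flat for feature search, roughly linear for conjunction search. The sketch below illustrates that qualitative pattern with a toy simulation; the baseline RT, per-item cost, and noise level are hypothetical values chosen only to reproduce the shape of the curves described above.

```python
# Toy simulation of set-size effects in visual search (illustrative only).
# Feature search: per-item cost of ~0 ms, so mean RT stays flat as displays grow.
# Conjunction search: a hypothetical serial cost is paid for every added item.
import random

def simulate_mean_rt(set_size, search_type, trials=500):
    """Return the mean simulated reaction time (ms) for one display size."""
    base_rt = 450.0                                           # hypothetical baseline (ms)
    per_item = 30.0 if search_type == "conjunction" else 0.0  # hypothetical ms/item
    rts = [base_rt + per_item * set_size + random.gauss(0, 25)
           for _ in range(trials)]
    return sum(rts) / len(rts)

for n in (4, 8, 16, 32):
    print(f"set size {n:2d}: "
          f"feature ~{simulate_mean_rt(n, 'feature'):4.0f} ms, "
          f"conjunction ~{simulate_mean_rt(n, 'conjunction'):4.0f} ms")
```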
In many cases, top-down processing affects conjunction search by eliminating stimuli that are incongruent with one's previous knowledge of the target-description, which in the end allows for more efficient identification of the target. [8] [9] An example of the effect of top-down processes on a conjunction search task is when searching for a red 'K' among red 'Cs' and black 'Ks', individuals ignore the black letters and focus on the remaining red letters in order to decrease the set size of possible targets and, therefore, more efficiently identify their target. [13]
In everyday situations, people are most commonly searching their visual fields for targets that are familiar to them. When it comes to searching for familiar stimuli, top-down processing allows one to more efficiently identify targets with greater complexity than can be represented in a feature or conjunction search task. [8] In a study done to analyze the reverse-letter effect, which is the idea that identifying the asymmetric letter among symmetric letters is more efficient than the reverse, researchers concluded that individuals more efficiently recognize an asymmetric letter among symmetric letters due to top-down processes. [9] Top-down processes allowed study participants to access prior knowledge regarding shape recognition of the letter N and quickly eliminate the stimuli that matched their knowledge. [9] In the real world, one must use prior knowledge every day in order to accurately and efficiently locate objects such as phones, keys, etc. among a much more complex array of distractors. [8] Despite this complexity, visual search with complex objects (and search for categories of objects, such as "phone", based on prior knowledge) appears to rely on the same active scanning processes as conjunction search with less complex, contrived laboratory stimuli, [14] [15] although global statistical information available in real-world scenes can also help people locate target objects. [16] [17] [18] While bottom-up processes may come into play when identifying objects that are not as familiar to a person, overall top-down processing highly influences visual searches that occur in everyday life. [8] [19] [20] Familiarity can play an especially critical role when parts of objects are not visible (as when objects are partly hidden from view because they are behind other objects). Visual information about the hidden parts can be recalled from long-term memory and used to facilitate search for familiar objects. [21] [22]
It is also possible to measure the role of attention within visual search experiments by calculating the slope of reaction time over the number of distractors present. [23] Generally, when a complex array of stimuli demands high levels of attention (conjunction search), reaction time grows with each added distractor and the slope is steep. For simple visual search tasks (feature search), reaction times remain fast and demand little attention regardless of display size, so the slope is close to flat. [24] However, the use of a reaction time slope to measure attention is controversial, because non-attentional factors can also affect the slope. [25] [26] [27]
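As a concrete illustration of this measure, a least-squares line can be fitted to mean reaction times across set sizes and the slope read off in milliseconds per item; the data values below are invented for the example.

```python
# A minimal sketch of the reaction-time slope measure (hypothetical data).
import numpy as np

set_sizes = np.array([4, 8, 16, 32])          # number of items in the display
mean_rts = np.array([570, 690, 930, 1410])    # invented conjunction-search RTs (ms)

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)  # least-squares line fit
print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")

# A steep slope (tens of ms per item) is conventionally read as effortful,
# serial search; a slope near zero as efficient, parallel 'pop out' search.
```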
One obvious way to select visual information is to turn towards it, also known as visual orienting. This may be a movement of the head and/or of the eyes towards the visual stimulus; a rapid eye movement of this kind is called a saccade. Through a process called foveation, the eyes fixate on the object of interest, making the image of the visual stimulus fall on the fovea of the eye, the central part of the retina with the sharpest visual acuity.
There are two types of orienting: exogenous orienting, in which attention is drawn reflexively by a salient stimulus in the environment, and endogenous orienting, in which attention is directed voluntarily in accordance with the observer's goals.
Visual search relies primarily on endogenous orienting because participants have the goal to detect the presence or absence of a specific target object in an array of other distracting objects.
Early research suggested that attention could be covertly (without eye movement) shifted to peripheral stimuli, [29] but later studies found that small saccades (microsaccades) occur during these tasks, and that these eye movements are frequently directed towards the attended locations (whether or not there are visible stimuli). [30] [31] [32] These findings indicate that attention plays a critical role in understanding visual search.
Subsequently, competing theories of attention have come to dominate visual search discourse. [33] The environment contains a vast amount of information. We are limited in the amount of information we are able to process at any one time, so it is therefore necessary that we have mechanisms by which extraneous stimuli can be filtered and only relevant information attended to. In the study of attention, psychologists distinguish between pre-attentive and attentional processes. [34] Pre-attentive processes are evenly distributed across all input signals, forming a kind of "low-level" attention. Attentional processes are more selective and can only be applied to specific preattentive input. A large part of the current debate in visual search theory centres on selective attention and what the visual system is capable of achieving without focal attention. [33]
A popular explanation for the different reaction times of feature and conjunction searches is the feature integration theory (FIT), introduced by Treisman and Gelade in 1980. This theory proposes that certain visual features are registered early, automatically, and are coded rapidly in parallel across the visual field using pre-attentive processes. [35] Experiments show that these features include luminance, colour, orientation, motion direction, and velocity, as well as some simple aspects of form. [36] For example, a red X can be quickly found among any number of black Xs and Os because the red X has the discriminative feature of colour and will "pop out." In contrast, this theory also suggests that in order to integrate two or more visual features belonging to the same object, a later process involving integration of information from different brain areas is needed and is coded serially using focal attention. For example, when locating an orange square among blue squares and orange triangles, neither the colour feature "orange" nor the shape feature "square" is sufficient to locate the search target. Instead, one must integrate information of both colour and shape to locate the target.
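The logic of the FIT distinction can be made concrete with a minimal sketch, assuming a toy encoding of display items as (colour, shape) pairs: a target "pops out" if it is unique on at least one single feature map, while a conjunction target is unique on no individual map and so requires serial binding.

```python
# Illustrative sketch of the FIT distinction (hypothetical encoding).
# Each display item is a (colour, shape) pair. A target "pops out" if it is
# unique on a single feature map; otherwise features must be conjoined.

def pops_out(items, target):
    """True if the target is unique on at least one single feature map."""
    colours = [c for c, _ in items]
    shapes = [s for _, s in items]
    t_colour, t_shape = target
    return colours.count(t_colour) == 1 or shapes.count(t_shape) == 1

# Red X among black Xs and Os: unique on the colour map -> parallel 'pop out'.
display1 = [("red", "X"), ("black", "X"), ("black", "O"), ("black", "X")]
print(pops_out(display1, ("red", "X")))          # True

# Orange square among blue squares and orange triangles: unique on neither
# map alone, so serial focal attention is needed to bind colour and shape.
display2 = [("orange", "square"), ("blue", "square"), ("orange", "triangle")]
print(pops_out(display2, ("orange", "square")))  # False
```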
Evidence that attention, and thus later visual processing, is needed to integrate two or more features of the same object is shown by the occurrence of illusory conjunctions, in which features do not combine correctly. For example, if a display of a green X and a red O is flashed on a screen so briefly that the later visual process of a serial search with focal attention cannot occur, the observer may report seeing a red X and a green O.
The FIT is a dichotomy because of the distinction between its two stages: the preattentive and attentive stages. [37] Preattentive processes are those performed in the first stage of the FIT model, in which the simplest features of the object are analyzed, such as color, size, and arrangement. The second, attentive stage of the model incorporates cross-dimensional processing, [38] in which the actual identification of an object occurs and information about the target object is put together. The theory has not remained static: disagreements and problems with its proposals have led it to be amended and altered over time, and this criticism and revision has made its description of visual search more accurate. [38] There have been disagreements over whether or not there is a clear distinction between feature detection and other searches that use a master map accounting for multiple dimensions in order to search for an object. Some psychologists support the idea that feature integration is completely separate from this type of master map search, whereas many others have concluded that feature integration incorporates the use of a master map in order to locate an object in multiple dimensions. [37]
The FIT also holds that there is a distinction between the brain processes used in a parallel task versus a focal attention task. Chan and Hayward [37] have conducted multiple experiments supporting this idea by demonstrating the role of dimensions in visual search. While exploring whether or not focal attention can reduce the costs caused by dimension-switching in visual search, they found that their results supported the mechanisms of the feature integration theory over other search-based approaches. They discovered that searches within a single dimension are much more efficient regardless of the size of the area being searched, but that once more dimensions are added, search becomes much less efficient, and the larger the area being searched, the longer it takes to find the target. [37]
A second main function of preattentive processes is to direct focal attention to the most "promising" information in the visual field. [33] These processes can direct attention in two ways: bottom-up activation (which is stimulus-driven) and top-down activation (which is user-driven). In the guided search model by Jeremy Wolfe, [39] information from top-down and bottom-up processing of the stimulus is used to create a ranking of items in order of their attentional priority. In a visual search, attention is directed to the item with the highest priority. If that item is rejected, attention moves on to the next item, and the next, and so forth. In this respect, the guided search theory builds on parallel search processing.
An activation map is a representation of visual space in which the level of activation at a location reflects the likelihood that the location contains a target. This likelihood is based on the preattentive, featural information available to the perceiver. According to the guided search model, the initial processing of basic features produces an activation map, with every item in the visual display having its own level of activation. Attention is then allocated to peaks in the activation map during the search for the target. [39] Visual search can proceed efficiently or inefficiently. During efficient search, performance is unaffected by the number of distractor items: the reaction time functions are flat, and the search is assumed to be a parallel search. Thus, in the guided search model, a search is efficient if the target generates the highest, or one of the highest, activation peaks. For example, suppose someone is searching for red, horizontal targets. Feature processing would activate all red objects and all horizontal objects. Attention is then directed to items depending on their level of activation, starting with those most activated. This explains why search times are longer when distractors share one or more features with the target stimuli. In contrast, during inefficient search, the reaction time to identify the target increases linearly with the number of distractor items present. According to the guided search model, this is because the peak generated by the target is not one of the highest. [39]
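A minimal sketch of the activation-map idea, under simplified assumptions: each item is coded on two binary feature maps, top-down weights reflect the target template (red and horizontal), bottom-up activation is local feature contrast, and attention visits items in descending order of summed activation. The feature codings, weights, and noise level are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each display item codes two binary features: colour (1 = red) and
# orientation (1 = horizontal). Only item 7 is red AND horizontal (the target).
colour      = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])
orientation = np.array([0, 1, 0, 1, 1, 0, 1, 1, 0, 1])

# Top-down activation: feature maps weighted by the target template
# ("look for red" and "look for horizontal"); weights are hypothetical.
top_down = 1.0 * colour + 1.0 * orientation

# Bottom-up activation: local feature contrast (how much each item differs
# from the display average on each map).
bottom_up = np.abs(colour - colour.mean()) + np.abs(orientation - orientation.mean())

# Activation map: summed signals plus a little neural noise.
activation = top_down + bottom_up + rng.normal(0, 0.1, size=colour.size)

# Attention is deployed to items in descending order of activation until
# the attended item matches the full target description.
for rank, item in enumerate(np.argsort(activation)[::-1], start=1):
    if colour[item] == 1 and orientation[item] == 1:
        print(f"target found at item {item} after {rank} deployment(s)")
        break
```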
During visual search experiments, the posterior parietal cortex has shown strong activation in functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) experiments during inefficient conjunction search, a role that has also been confirmed through lesion studies. Patients with lesions to the posterior parietal cortex show low accuracy and very slow reaction times during a conjunction search task, but retain intact feature search on the ipsilesional (the same side of the body as the lesion) side of space. [40] [41] [42] [43] Ashbridge, Walsh, and Cowey (1997) [44] demonstrated that conjunction search was impaired when transcranial magnetic stimulation (TMS) was applied to the right parietal cortex 100 milliseconds after stimulus onset. This was not found during feature search. Nobre, Coull, Walsh and Frith (2003) [45] used functional magnetic resonance imaging (fMRI) to show that the intraparietal sulcus, located in the superior parietal cortex, was activated specifically during feature search and the binding of individual perceptual features, as opposed to conjunction search. Conversely, the authors further identified that for conjunction search, the superior parietal lobe and the right angular gyrus were activated bilaterally in fMRI experiments.
In contrast, Leonards, Sunaert, Van Hecke and Orban (2000) [46] identified significant activation in fMRI experiments in the superior frontal sulcus primarily for conjunction search. This research hypothesises that activation in this region may in fact reflect working memory for holding and maintaining stimulus information in mind in order to identify the target. Furthermore, significant frontal activation, including the ventrolateral prefrontal cortex bilaterally and the right dorsolateral prefrontal cortex, was seen during positron emission tomography for attentional spatial representations during visual search. [47] The regions associated with spatial attention in the parietal cortex coincide with the regions associated with feature search. Furthermore, the frontal eye field (FEF), located bilaterally in the prefrontal cortex, plays a critical role in saccadic eye movements and the control of visual attention. [48] [49] [50]
Moreover, single-cell recording research in monkeys has found that the superior colliculus is involved in the selection of the target during visual search as well as in the initiation of movements. [51] Conversely, this work also suggested that activation in the superior colliculus results from disengaging attention, ensuring that the next stimulus can be internally represented. The ability to directly attend to a particular stimulus during visual search experiments, while inhibiting attention to unattended stimuli, has been linked to the pulvinar nucleus of the thalamus. [52] Conversely, Bender and Butter (1987) [53] found, in testing on monkeys, no involvement of the pulvinar nucleus in visual search tasks.
There is evidence for the V1 Saliency Hypothesis that the primary visual cortex (V1) creates a bottom-up saliency map to guide attention exogenously, [54] [55] and this V1 saliency map is read out by the superior colliculus, which receives monosynaptic inputs from V1.
There is a variety of speculation about the origin and evolution of visual search in humans. It has been shown that during visual exploration of complex natural scenes, both humans and nonhuman primates make highly stereotyped eye movements. [56] Furthermore, chimpanzees have demonstrated improved performance in visual searches for upright human or dog faces, [57] suggesting that visual search (particularly where the target is a face) is not peculiar to humans and that it may be a primal trait. Research has suggested that effective visual search may have developed as a necessary skill for survival, where being adept at detecting threats and identifying food was essential. [58] [59]
The importance of evolutionarily relevant threat stimuli was demonstrated in a study by LoBue and DeLoache (2008) in which children (and adults) were able to detect snakes more rapidly than other targets amongst distractor stimuli. [60] However, some researchers question whether evolutionarily relevant threat stimuli are detected automatically. [61]
Over the past few decades there has been a vast amount of research into face recognition, indicating that faces undergo specialized processing within a region called the fusiform face area (FFA), located in the mid fusiform gyrus in the temporal lobe. [62] Debate is ongoing over whether faces and objects are detected and processed in different systems and whether both have category-specific regions for recognition and identification. [63] [64] Much research to date has focused on the accuracy of detection and the time taken to detect a face in a complex visual search array. When faces are displayed in isolation, upright faces are processed faster and more accurately than inverted faces, [65] [66] [67] [68] but this effect has been observed in non-face objects as well. [69] When faces are to be detected among inverted or jumbled faces, reaction times for intact and upright faces increase as the number of distractors within the array increases. [70] [71] [72] Hence, it is argued that the 'pop out' effect seen in feature search does not apply to the recognition of faces in such visual search paradigms. Conversely, the opposite effect has also been argued: within a natural environmental scene, faces do show a significant 'pop out' effect. [73] This could be due to evolutionary developments, as the ability to identify faces that appear threatening to the individual or group is deemed critical to survival. [74] More recently, it was found that faces can be efficiently detected in a visual search paradigm if the distractors are non-face objects, [75] [76] [77] although it is debated whether this apparent 'pop out' effect is driven by a high-level mechanism or by low-level confounding features. [78] [79] Furthermore, patients with developmental prosopagnosia, who have impaired face identification, generally detect faces normally, suggesting that visual search for faces is facilitated by mechanisms other than the face-identification circuits of the fusiform face area. [80]
Patients with forms of dementia can also have deficits in facial recognition and the ability to recognize human emotions in the face. In a meta-analysis of nineteen different studies comparing normal adults with dementia patients in their abilities to recognize facial emotions, [81] the patients with frontotemporal dementia were seen to have a lower ability to recognize many different emotions. These patients were much less accurate than the control participants (and even in comparison with Alzheimer's patients) in recognizing negative emotions, but were not significantly impaired in recognizing happiness. Anger and disgust in particular were the most difficult for the dementia patients to recognize. [81]
Face recognition is a complex process that is affected by many factors, both environmental and individually internal. Other aspects to be considered include race and culture and their effects on one's ability to recognize faces. [82] Some factors such as the cross-race effect can influence one's ability to recognize and remember faces.
Research indicates that performance in conjunctive visual search tasks significantly improves during childhood and declines in later life. [83] More specifically, young adults have been shown to have faster reaction times on conjunctive visual search tasks than both children and older adults, but their reaction times were similar for feature visual search tasks. [52] This suggests that there is something about the process of integrating visual features or serial searching that is difficult for children and older adults, but not for young adults. Studies have suggested numerous mechanisms involved in this difficulty in children, including peripheral visual acuity, [84] eye movement ability, [85] ability of attentional focal movement, [86] and the ability to divide visual attention among multiple objects. [87]
Studies have suggested similar mechanisms in the difficulty for older adults, such as age related optical changes that influence peripheral acuity, [88] the ability to move attention over the visual field, [89] the ability to disengage attention, [90] and the ability to ignore distractors. [91]
A study by Lorenzo-López et al. (2008) provides neurological evidence that older adults have slower reaction times during conjunctive searches than young adults. Event-related potentials (ERPs) showed longer latencies and lower amplitudes in older subjects than in young adults at the P3 component, which is related to activity of the parietal lobes. This suggests that an age-related decline in parietal lobe function contributes to the slowing of visual search. Results also showed that older adults, when compared to young adults, had significantly less activity in the anterior cingulate cortex and in many limbic and occipitotemporal regions that are involved in performing visual search tasks. [92]
Research has found that people with Alzheimer's disease (AD) are significantly impaired overall in visual search tasks. [93] People with AD show enhanced effects of spatial cueing, but this benefit is only obtained for cues with high spatial precision. [94] Abnormal visual attention may underlie certain visuospatial difficulties in patients with AD. People with AD have hypometabolism and neuropathology in the parietal cortex, and given the role of parietal function in visual attention, patients with AD may have hemispatial neglect, which may result in difficulty with disengaging attention in visual search. [95]
An experiment conducted by Tales et al. (2000) [93] investigated the ability of patients with AD to perform various types of visual search tasks. Their results showed that search rates on "pop-out" tasks were similar for the AD and control groups; however, people with AD searched significantly more slowly than the control group on a conjunction task. One interpretation of these results is that the visual system of AD patients has a problem with feature binding, such that it is unable to communicate the different feature descriptions of the stimulus efficiently. [93] Binding of features is thought to be mediated by areas in the temporal and parietal cortex, and these areas are known to be affected by AD-related pathology.
Another possibility for the impairment of people with AD on conjunction searches is that there may be some damage to general attentional mechanisms in AD, and therefore any attention-related task will be affected, including visual search. [93]
Tales et al. (2000) detected a double dissociation between their experimental results on AD and earlier findings on visual search in Parkinson's disease (PD). [96] [97] Those earlier studies found evidence of impairment in PD patients on the "pop-out" task, but no evidence of impairment on the conjunction task. As discussed, AD patients show the exact opposite pattern: normal performance on the "pop-out" task, but impairment on the conjunction task. This double dissociation provides evidence that PD and AD affect the visual pathway in different ways, and that the pop-out task and the conjunction task are differentially processed within that pathway.
Studies have consistently shown that autistic individuals perform better, with lower reaction times, in feature and conjunctive visual search tasks than matched controls without autism. [98] [99] Several explanations for these observations have been suggested. One possibility is that people with autism have enhanced perceptual capacity. [99] This means that autistic individuals are able to process larger amounts of perceptual information, allowing for superior parallel processing and hence faster target location. [100] Second, autistic individuals show superior performance in discrimination tasks between similar stimuli and may therefore have an enhanced ability to differentiate between items in the visual search display. [101] A third suggestion is that autistic individuals may have stronger top-down target excitation processing and stronger distractor inhibition processing than controls. [98] Keehn et al. (2008) used an event-related functional magnetic resonance imaging design to study the neurofunctional correlates of visual search in autistic children and matched controls of typically developing children. [102] Autistic children showed superior search efficiency and increased neural activation patterns in the frontal, parietal, and occipital lobes when compared to the typically developing children. Thus, autistic individuals' superior performance on visual search tasks may be due to enhanced discrimination of items in the display, which is associated with occipital activity, and increased top-down shifts of visual attention, which are associated with the frontal and parietal areas.
In the past decade, there has been extensive research into how companies can maximise sales using psychological techniques derived from visual search to determine how products should be positioned on shelves. Pieters and Warlop (1999) [103] used eye-tracking devices to assess the saccades and fixations of consumers while they visually scanned/searched an array of products on a supermarket shelf. Their research suggests that consumers specifically direct their attention to products with eye-catching properties such as shape, colour or brand name. This effect is attributed to a pressured visual search in which eye movements accelerate and saccades are minimised, with the result that the consumer quickly chooses a product with a 'pop out' effect. This study suggests that efficient search is primarily used, concluding that consumers do not focus on items that share very similar features. The more distinct or maximally visually different a product is from surrounding products, the more likely the consumer is to notice it. Janiszewski (1998) [104] discussed two types of consumer search. One type is goal-directed search, which takes place when somebody uses stored knowledge of the product in order to make a purchase choice. The second is exploratory search, which occurs when the consumer has minimal previous knowledge about how to choose a product. It was found that in exploratory search, individuals paid less attention to products placed in visually competitive areas, such as the middle of the shelf at an optimal viewing height, primarily because the competition for attention meant that less information was maintained in visual working memory for these products.
Attention, or focus, is the concentration of awareness on some phenomenon to the exclusion of other stimuli. It is the selective concentration on discrete information, either subjectively or objectively. William James (1890) wrote that "Attention is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence." Attention has also been described as the allocation of limited cognitive processing resources. Attention is manifested by an attentional bottleneck in the amount of data the brain can process each second; for example, in human vision, less than 1% of the visual input data stream of 1 MB/s can enter the bottleneck, leading to inattentional blindness.
Wishful thinking is the formation of beliefs based on what might be pleasing to imagine, rather than on evidence, rationality, or reality. It is a product of resolving conflicts between belief and desire. Methodologies to examine wishful thinking are diverse. Various disciplines and schools of thought examine related mechanisms such as neural circuitry, human cognition and emotion, types of bias, procrastination, motivation, optimism, attention and environment. This concept has been examined as a fallacy. It is related to the concept of wishful seeing.
Neuroesthetics is a recent sub-discipline of applied aesthetics. Empirical aesthetics takes a scientific approach to the study of the aesthetic experience of art, music, or any object that can give rise to aesthetic judgments. Neuroesthetics is a term coined by Semir Zeki in 1999; it received its formal definition in 2002 as the scientific study of the neural bases for the contemplation and creation of a work of art. Anthropologists and evolutionary biologists alike have accumulated evidence suggesting that human interest in, and creation of, art evolved as an evolutionarily necessary mechanism for survival, as early as the 9th and 10th centuries among Gregorian monks and Native Americans. Neuroesthetics uses neuroscience to explain and understand aesthetic experiences at the neurological level. The topic attracts scholars from many disciplines, including neuroscientists, art historians, artists, art therapists and psychologists.
Anne Marie Treisman was an English psychologist who specialised in cognitive psychology.
Change blindness is a perceptual phenomenon that occurs when a change in a visual stimulus is introduced and the observer does not notice it. For example, observers often fail to notice major differences introduced into an image while it flickers off and on again. People's poor ability to detect changes has been argued to reflect fundamental limitations of human attention. Change blindness has become a highly researched topic and some have argued that it may have important practical implications in areas such as eyewitness testimony and distractions while driving.
Inhibition of return (IOR) refers to an orienting mechanism that briefly enhances the speed and accuracy with which an object is detected shortly after the object is attended, but which then impairs detection speed and accuracy at longer delays. IOR is usually measured with a cue-response paradigm, in which a person presses a button when they detect a target stimulus following the presentation of a cue indicating the location in which the target will appear. The cue can be exogenous or endogenous. Inhibition of return results from oculomotor activation, regardless of whether that activation is produced exogenously or endogenously. Although IOR occurs for both visual and auditory stimuli, it is greater for visual stimuli and is studied more often with them.
Attentional shift is the redirection of attention from one point to another: directing attention to a point increases the efficiency of processing at that point, and includes inhibition that decreases the attentional resources devoted to unwanted or irrelevant inputs. Shifting of attention is needed to allocate attentional resources so as to process information from a stimulus more efficiently. Research has shown that when an object or area is attended, processing operates more efficiently. Task-switching costs occur when performance on a task suffers due to the increased effort involved in shifting attention. There are competing theories that attempt to explain why and how attention is shifted, as well as how attention is moved through space in attentional control.
The posterior parietal cortex plays an important role in planned movements, spatial reasoning, and attention.
Recognition memory, a subcategory of explicit memory, is the ability to recognize previously encountered events, objects, or people. When the previously experienced event is reexperienced, this environmental content is matched to stored memory representations, eliciting matching signals. As first established by psychology experiments in the 1970s, recognition memory for pictures is quite remarkable: humans can remember thousands of images at high accuracy after seeing each only once and only for a few seconds.
The effects of sleep deprivation on cognitive performance are a broad range of impairments resulting from inadequate sleep, impacting attention, executive function and memory. An estimated 20% of adults or more have some form of sleep deprivation. It may come with insomnia or major depressive disorder, or indicate other mental disorders. The consequences can negatively affect the health, cognition, energy level and mood of a person and anyone around. It increases the risk of human error, especially with technology.
N2pc refers to an ERP component linked to selective attention. The N2pc appears over visual cortex contralateral to the location in space to which subjects are attending; if subjects pay attention to the left side of the visual field, the N2pc appears in the right hemisphere of the brain, and vice versa. This characteristic makes it a useful tool for directly measuring the general direction of a person's attention with fine-grained temporal resolution.
Chronostasis is a type of temporal illusion in which the first impression following the introduction of a new event or task-demand to the brain can appear to be extended in time. For example, chronostasis temporarily occurs when fixating on a target stimulus, immediately following a saccade. This elicits an overestimation in the temporal duration for which that target stimulus was perceived. This effect can extend apparent durations by up to half a second and is consistent with the idea that the visual system models events prior to perception.
Haptic memory is the form of sensory memory specific to touch stimuli. Haptic memory is used regularly when assessing the necessary forces for gripping and interacting with familiar objects. It may also influence one's interactions with novel objects of an apparently similar size and density. Similar to visual iconic memory, traces of haptically acquired information are short lived and prone to decay after approximately two seconds. Haptic memory is best for stimuli applied to areas of the skin that are more sensitive to touch. Haptics involves at least two subsystems; cutaneous, or everything skin related, and kinesthetic, or joint angle and the relative location of body. Haptics generally involves active, manual examination and is quite capable of processing physical traits of objects and surfaces.
Images and other stimuli contain both local features and global features. Precedence refers to the level of processing to which attention is first directed. Global precedence occurs when an individual more readily identifies the global feature when presented with a stimulus containing both global and local features. The global aspect of an object embodies the larger, overall image as a whole, whereas the local aspect consists of the individual features that make up this larger whole. Global processing is the act of processing a visual stimulus holistically. Although global precedence is generally more prevalent than local precedence, local precedence also occurs under certain circumstances and for certain individuals. Global precedence is closely related to the Gestalt principles of grouping in that the global whole is a grouping of proximal and similar objects. Within global precedence, there is also the global interference effect, which occurs when an individual is directed to identify the local characteristic, and the global characteristic subsequently interferes by slowing the reaction time.
Object-based attention refers to the relationship between an ‘object’ representation and a person’s visually stimulated, selective attention, as opposed to a relationship involving either a spatial or a feature representation; although these types of selective attention are not necessarily mutually exclusive. Research into object-based attention suggests that attention improves the quality of the sensory representation of a selected object, and results in the enhanced processing of that object’s features.
The Posner cueing task, also known as the Posner paradigm, is a neuropsychological test often used to assess attention. Formulated by Michael Posner, it assesses a person's ability to perform an attentional shift. It has been used and modified to assess disorders, focal brain injury, and the effects of both on spatial attention.
Emotion perception refers to the capacities and abilities of recognizing and identifying emotions in others, in addition to biological and physiological processes involved. Emotions are typically viewed as having three components: subjective experience, physical changes, and cognitive appraisal; emotion perception is the ability to make accurate decisions about another's subjective experience by interpreting their physical changes through sensory systems responsible for converting these observed changes into mental representations. The ability to perceive emotion is believed to be both innate and subject to environmental influence and is also a critical component in social interactions. How emotion is experienced and interpreted depends on how it is perceived. Likewise, how emotion is perceived is dependent on past experiences and interpretations. Emotion can be accurately perceived in humans. Emotions can be perceived visually, audibly, through smell and also through bodily sensations and this process is believed to be different from the perception of non-emotional material.
In cognitive psychology, intertrial priming is an accumulation of the priming effect over multiple trials, where "priming" is the effect of the exposure to one stimulus on subsequently presented stimuli. Intertrial priming occurs when a target feature is repeated from one trial to the next, and typically results in speeded response times to the target. A target is the stimulus participants are required to search for. For example, intertrial priming occurs when the task is to respond to either a red or a green target, and the response time to a red target is faster if the preceding trial also has a red target.
Visual spatial attention is a form of visual attention that involves directing attention to a location in space. Similar to its temporal counterpart visual temporal attention, these attention modules have been widely implemented in video analytics in computer vision to provide enhanced performance and human interpretable explanation of deep learning models.
Michael E. Goldberg, also known as Mickey Goldberg, is an American neuroscientist and David Mahoney Professor at Columbia University. He is known for his work on the mechanisms of the mammalian eye in relation to brain activity. He served as president of the Society for Neuroscience from 2009 to 2010.