Perceptual learning

Perceptual learning is the learning of improved perceptual skills, such as differentiating two musical tones from one another or categorizing spatial and temporal patterns relevant to real-world expertise. Examples include reading, seeing relations among chess pieces, and judging whether an X-ray image shows a tumor.

Perceptual learning occurs across sensory modalities, including vision, audition, touch, olfaction, and taste. It forms important foundations of complex cognitive processes (e.g., language) and interacts with other kinds of learning to produce perceptual expertise. [1] [2] Underlying perceptual learning are changes in neural circuitry. The capacity for perceptual learning is retained throughout life. [3]

Category learning vs. perceptual learning

It can be fairly easy to confuse category learning and perceptual learning. Category learning typically assumes "a fixed, pre-established perceptual representation to describe the objects to be categorized." [4] Category learning builds on perceptual learning because it requires distinguishing what the objects are. Perceptual learning, by contrast, is a "change in perception as a product of experience": with practice, observers come to discriminate stimuli they previously could not tell apart, such as speech sounds similar to those of their native language, whereas in category learning the task is to assign already-distinguishable items to separate categories.

Examples

Basic sensory discrimination

Laboratory studies have reported many examples of dramatic improvements in sensitivity resulting from appropriately structured perceptual learning tasks. In visual Vernier acuity tasks, observers judge whether one line is displaced above or below a second line. Untrained observers are often already very good at this task, but after training, observers' thresholds have been shown to improve as much as six-fold. [5] [6] [7] Similar improvements have been found for visual motion discrimination [8] and orientation sensitivity. [9] [10] In visual search tasks, observers are asked to find a target object hidden among distractors or in noise. Studies of perceptual learning with visual search show that experience leads to great gains in sensitivity and speed. In one study by Karni and Sagi, [3] the time it took for subjects to search for an oblique line among a field of horizontal lines improved dramatically, from about 200 ms in one session to about 50 ms in a later session. With appropriate practice, visual search can become automatic and very efficient, such that observers do not need more time to search when there are more items present in the search field. [11] Tactile perceptual learning has been demonstrated on spatial acuity tasks such as tactile grating orientation discrimination, and on vibrotactile perceptual tasks such as frequency discrimination; tactile learning on these tasks has been found to transfer from trained to untrained fingers. [12] [13] [14] [15] Practice with Braille reading and daily reliance on the sense of touch may underlie the enhancement in tactile spatial acuity of blind compared to sighted individuals. [16]
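
The thresholds reported in these studies are typically estimated with adaptive psychophysical procedures that track the smallest stimulus difference an observer can reliably judge. The following Python sketch is a generic illustration of one such procedure, a 2-down/1-up staircase, run on a hypothetical simulated observer; the psychometric function, threshold values, and step size are assumptions chosen for illustration and are not taken from the cited experiments. A lower tracked offset after training corresponds to the kind of threshold improvement described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def observer_correct(offset, threshold, slope=2.0):
    """Hypothetical observer: probability of a correct judgment grows with the offset."""
    p_correct = 0.5 + 0.5 / (1.0 + (threshold / offset) ** slope)  # 75% correct at offset == threshold
    return rng.random() < p_correct

def staircase(threshold, start=10.0, step=0.8, trials=200):
    """2-down/1-up staircase: the tracked offset converges near the ~71%-correct level."""
    offset, correct_in_a_row, history = start, 0, []
    for _ in range(trials):
        if observer_correct(offset, threshold):
            correct_in_a_row += 1
            if correct_in_a_row == 2:      # two correct in a row -> make the task harder
                offset *= step
                correct_in_a_row = 0
        else:                              # one error -> make the task easier
            offset /= step
            correct_in_a_row = 0
        history.append(offset)
    return np.mean(history[-50:])          # average of the late trials estimates the threshold

print(staircase(threshold=5.0))   # hypothetical untrained observer (arbitrary units)
print(staircase(threshold=1.0))   # after training, the tracked offset is much smaller
```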

Neuropsychology of perceptual category learning

Multiple different category learning systems may mediate the learning of different category structures. "Two systems that have received support are a frontal-based explicit system that uses logical reasoning, depends on working memory and executive attention, and is mediated primarily by the anterior cingulate, the prefrontal cortex and the associative striatum, including the head of the caudate. The second is a basal ganglia-mediated implicit system that uses procedural learning, requires a dopamine reward signal and is mediated primarily by the sensorimotor striatum." [17] These studies showed significant involvement of the striatum and less involvement of the medial temporal lobes in category learning. In people with striatal damage, the need to ignore irrelevant information is more predictive of a rule-based category learning deficit, whereas the complexity of the rule is predictive of an information-integration category learning deficit.

In the natural world

Perceptual learning is prevalent and occurs continuously in everyday life. "Experience shapes the way people see and hear." [18] Experience provides the sensory input to our perceptions as well as knowledge about identities. When people are less knowledgeable about different races and cultures, they tend to develop stereotypes precisely because of that lack of knowledge. Perceptual learning reflects a more in-depth relationship between experience and perception: different perceptions of the same sensory input may arise in individuals with different experiences or training. This raises important issues about the ontology of sensory experience and the relationship between cognition and perception.

An example of this is money. We look at money every day and recognize it immediately, but when asked to pick out the correct coin from among similar coins with slight differences, we may have trouble spotting the difference. This is because, although we see coins every day, we are not usually trying to tell them apart. Learning to perceive differences and similarities among stimuli depends on exposure to those stimuli; a study by Gibson and Gibson in 1955 illustrates how exposure to stimuli affects how well we learn the details of different stimuli.

As our perceptual system adapts to the natural world, we become better at discriminating between different stimuli when they belong to different categories than when they belong to the same category. We also tend to become less sensitive to the differences between two instances of the same category. [19] These effects are described as the result of categorical perception. Categorical perception effects do not transfer across domains.

Infants tend to lose sensitivity to differences between speech sounds that belong to the same phonetic category in their native language by 10 months of age. [20] They learn to pay attention to salient differences between native phonetic categories and to ignore the less language-relevant ones. In chess, expert players encode larger chunks of positions and relations on the board and require fewer exposures to fully recreate a chess board. This is not due to their possessing superior visual skill, but rather to their advanced extraction of structural patterns specific to chess. [21] [22]

Shortly after giving birth, a mother becomes able to decipher differences in her baby's cries. As she becomes more sensitive to these differences, she can tell from the cry whether the baby is hungry, needs to be changed, and so on.

Extensive practice reading in English leads to extraction and rapid processing of the structural regularities of English spelling patterns. The word superiority effect demonstrates this—people are often much faster at recognizing words than individual letters. [23] [24]

With speech phonemes, observers who listen to a continuum of equally spaced consonant-vowel syllables going from /be/ to /de/ are much quicker to indicate that two syllables are different when they belong to different phonemic categories than when they are two variants of the same phoneme, even when the physical differences are equated between each pair of syllables. [25]

Other examples of perceptual learning in the natural world include the ability to distinguish between relative pitches in music, [26] identify tumors in x-rays, [27] sort day-old chicks by gender, [28] taste the subtle differences between beers or wines, [29] identify faces as belonging to different races, [30] detect the features that distinguish familiar faces, [31] discriminate between two bird species ("great blue crown heron" and "chipping sparrow"), [32] and attend selectively to the hue, saturation and brightness values that comprise a color definition. [33]

Brief history

The prevalent idiom that "practice makes perfect" captures the essence of the ability to reach impressive perceptual expertise. Such expertise has been demonstrated for centuries through extensive practice in skills such as wine tasting, fabric evaluation, or musical preference. The first documented report, dating to the mid-19th century, describes tactile training aimed at decreasing the minimal distance at which individuals can discriminate whether one or two points on their skin have been touched. It was found that this distance (the Just Noticeable Difference, JND) decreases dramatically with practice, and that this improvement is at least partially retained on subsequent days. Moreover, the improvement is at least partially specific to the trained skin area. A particularly dramatic improvement was found for skin positions at which initial discrimination was very crude (e.g. on the back), though training could not bring the JND of initially crude areas down to that of initially accurate ones (e.g. fingertips). [34] William James devoted a section in his Principles of Psychology (1890/1950) to "the improvement in discrimination by practice". [35] He noted examples and emphasized the importance of perceptual learning for expertise. In 1918, Clark L. Hull, a noted learning theorist, trained human participants to categorize deformed Chinese characters. For each category, he used 6 instances that shared some invariant structural property. People learned to associate a sound as the name of each category, and, more importantly, they were able to classify novel characters accurately. [36] This ability to extract invariances from instances and apply them to classify new instances marked this study as a perceptual learning experiment. It was not until 1969, however, that Eleanor Gibson published her seminal book Principles of Perceptual Learning and Development and defined the modern field of perceptual learning. She established the study of perceptual learning as an inquiry into the behavior and mechanism of perceptual change. By the mid-1970s, however, this area was in a state of dormancy due to a shift in focus to perceptual and cognitive development in infancy. Much of the scientific community tended to underestimate the impact of learning compared with innate mechanisms. Thus, most of this research focused on characterizing basic perceptual capacities of young infants rather than on perceptual learning processes.

Since the mid-1980s, there has been a new wave of interest in perceptual learning due to findings of cortical plasticity at the lowest levels of sensory systems. Our increased understanding of the physiology and anatomy of cortical systems has been used to connect behavioral improvement to the underlying cortical areas. This trend began with the earlier findings of Hubel and Wiesel that perceptual representations in sensory areas of the cortex are substantially modified during a short ("critical") period immediately following birth. Merzenich, Kaas and colleagues showed that though neuroplasticity is diminished, it is not eliminated when the critical period ends. [37] Thus, when the external pattern of stimulation is substantially modified, neuronal representations in lower-level (e.g. primary) sensory areas are also modified. Research in this period centered on basic sensory discriminations, where remarkable improvements were found on almost any sensory task through discrimination practice. Following training, subjects were tested with novel conditions and learning transfer was assessed. This work departed from earlier work on perceptual learning, which spanned different tasks and levels.

A question still debated today is to what extent improvements from perceptual learning stem from peripheral modifications compared with improvement in higher-level readout stages. Early interpretations, such as that suggested by William James, attributed it to higher-level categorization mechanisms whereby initially blurred differences are gradually associated with distinctively different labels. The work focused on basic sensory discrimination, however, suggests that the effects of perceptual learning are specific to changes in low levels of the sensory nervous system (i.e., primary sensory cortices). [38] More recently, research suggests that perceptual learning processes are multilevel and flexible. [39] This cycles back to the earlier Gibsonian view that low-level learning effects are modulated by high-level factors, and suggests that improvement in information extraction may not involve only low-level sensory coding but also apprehension of relatively abstract structure and relations in time and space.

Within the past decade, researchers have sought a more unified understanding of perceptual learning and worked to apply these principles to improve perceptual learning in applied domains.

Characteristics

Discovery and fluency effects

Perceptual learning effects can be organized into two broad categories: discovery effects and fluency effects. [1] Discovery effects involve some change in the basis of response, such as selecting new information relevant to the task, amplifying relevant information, or suppressing irrelevant information. Experts extract larger "chunks" of information and discover high-order relations and structures in their domains of expertise that are invisible to novices. Fluency effects involve changes in the ease of extraction. Not only can experts process high-order information, they do so with great speed and low attentional load. Discovery and fluency effects work together, so that as the discovered structures become more automatic, attentional resources are conserved for discovery of new relations and for high-level thinking and problem-solving.

The role of attention

William James (Principles of Psychology, 1890) asserted that "My experience is what I agree to attend to. Only those items which I notice shape my mind - without selective interest, experience is an utter chaos." [35] His view was extreme, yet its gist was largely supported by subsequent behavioral and physiological studies. Mere exposure does not seem to suffice for acquiring expertise.

Indeed, a relevant signal in a given behavioral condition may be considered noise in another. For example, when presented with two similar stimuli, one might endeavor to study the differences between their representations in order to improve one's ability to discriminate between them, or one may instead concentrate on the similarities to improve one's ability to identify both as belonging to the same category. A specific difference between them could be considered 'signal' in the first case and 'noise' in the second case. Thus, as we adapt to tasks and environments, we pay increasingly more attention to the perceptual features that are relevant and important for the task at hand, and at the same time, less attention to the irrelevant features. This mechanism is called attentional weighting. [39]
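
As a rough illustration of attentional weighting, the sketch below computes an attention-weighted similarity between two stimuli, in the spirit of attention-weighted exemplar models of categorization rather than any specific model from the cited work; the feature values, weights, and sensitivity parameter are assumptions chosen for illustration. Shifting weight onto the task-relevant feature makes the two stimuli less similar and thus easier to discriminate, while the same shift would make stimuli differing only on irrelevant features appear more alike.

```python
import numpy as np

def weighted_distance(x, y, w):
    """Attention-weighted city-block distance between two feature vectors."""
    return np.sum(w * np.abs(np.asarray(x) - np.asarray(y)))

def similarity(x, y, w, sensitivity=2.0):
    """Exemplar-style similarity: exponential decay with weighted distance."""
    return np.exp(-sensitivity * weighted_distance(x, y, w))

# Two stimuli that differ only on feature 0, the task-relevant dimension
a = [0.2, 0.5, 0.5]
b = [0.8, 0.5, 0.5]

uniform = np.array([1/3, 1/3, 1/3])  # before learning: attention spread evenly
shifted = np.array([0.8, 0.1, 0.1])  # after learning: attention on the relevant feature

print(similarity(a, b, uniform))  # ~0.67: the pair is treated as quite similar
print(similarity(a, b, shifted))  # ~0.38: the relevant difference is amplified
```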

However, recent studies suggest that perceptual learning occurs without selective attention. [40] Studies of such task-irrelevant perceptual learning (TIPL) show that the degree of TIPL is similar to that found through direct training procedures. [41] TIPL for a stimulus depends on the relationship between that stimulus and important task events [42] or upon stimulus reward contingencies. [43] It has thus been suggested that learning (of task-irrelevant stimuli) is contingent upon spatially diffusive learning signals. [44] Similar effects, but on a shorter time scale, have been found for memory processes and are in some cases called the attentional boost effect. [45] Thus, when an important (alerting) event occurs, learning may also affect concurrent, non-attended and non-salient stimuli. [46]

Time course of perceptual learning

The time course of perceptual learning varies from one participant to another. [12] Perceptual learning occurs not only within the first training session but also between sessions. [47] Fast learning (i.e., within-first-session learning) and slow learning (i.e., between-session learning) involve different changes in the human adult brain. While the fast learning effects can be retained only for a short term of several days, the slow learning effects can be preserved for a long term over several months. [48]

Explanations and models

Receptive field modification

Research on basic sensory discriminations often shows that perceptual learning effects are specific to the trained task or stimulus. [49] Many researchers take this to suggest that perceptual learning may work by modifying the receptive fields of the cells (e.g., V1 and V2 cells) that initially encode the stimulus. For example, individual cells could adapt to become more sensitive to important features, effectively recruiting more cells for a particular purpose, making some cells more specifically tuned for the task at hand. [50] Evidence for receptive field change has been found using single-cell recording techniques in primates in both tactile and auditory domains. [51]
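
A toy example of how receptive-field sharpening could improve discrimination is sketched below; it is not a model of the cited single-cell data, and the Gaussian tuning curve, orientations, widths, and noise level are all assumed for illustration. For a cell whose preferred orientation lies on the flank of the trained stimuli, narrowing its tuning width increases the difference between its responses to two nearby orientations relative to its response noise.

```python
import numpy as np

def tuning(theta, preferred, width, gain=10.0):
    """Gaussian orientation tuning curve: mean firing rate to a stimulus at `theta` degrees."""
    return gain * np.exp(-0.5 * ((theta - preferred) / width) ** 2)

def discriminability(theta1, theta2, preferred, width, noise_sd=1.0):
    """Crude d'-style index: one cell's response difference divided by its response noise."""
    return abs(tuning(theta1, preferred, width) - tuning(theta2, preferred, width)) / noise_sd

preferred = 25.0              # a cell tuned near, but not exactly at, the trained orientations
theta1, theta2 = 20.0, 22.0   # two nearby orientations to be discriminated

print(discriminability(theta1, theta2, preferred, width=20.0))  # broad tuning: ~0.2
print(discriminability(theta1, theta2, preferred, width=8.0))   # sharpened tuning: ~1.1
```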

However, not all perceptual learning tasks are specific to the trained stimuli or tasks. Sireteanu and Rettenback [52] discussed discrimination learning effects that generalize across eyes, retinal locations and tasks. Ahissar and Hochstein [53] used visual search to show that learning to detect a single line element hidden in an array of differently-oriented line segments could generalize to positions at which the target was never presented. In human vision, not enough receptive field modification has been found in early visual areas to explain perceptual learning. [54] Training that produces large behavioral changes such as improvements in discrimination does not produce changes in receptive fields. In studies where changes have been found, the changes are too small to explain changes in behavior. [55]

Reverse hierarchy theory

The Reverse Hierarchy Theory (RHT), proposed by Ahissar & Hochstein, aims to link learning dynamics and specificity to the underlying neuronal sites. [56] RHT proposes that naïve performance is based on responses at high-level cortical areas, which hold crude, categorical-level representations of the environment. Hence initial learning stages involve understanding global aspects of the task. Subsequent practice may yield better perceptual resolution as a consequence of accessing lower-level information via the feedback connections going from high to low levels. Accessing the relevant low-level representations requires a backward search during which informative input populations of neurons in the low level are allocated. Hence, subsequent learning and its specificity reflect the resolution of lower levels. RHT thus proposes that initial performance is limited by the high-level resolution whereas post-training performance is limited by the resolution at low levels. Since high-level representations of different individuals differ due to their prior experience, their initial learning patterns may differ. Several imaging studies are in line with this interpretation, finding that initial performance is correlated with average (BOLD) responses at higher-level areas whereas subsequent performance is more correlated with activity at lower-level areas. [citation needed] RHT proposes that modifications at low levels will occur only when the backward search (from high to low levels of processing) is successful. Such success requires that the backward search "know" which neurons in the lower level are informative. This "knowledge" is gained by training repeatedly on a limited set of stimuli, such that the same lower-level neuronal populations are informative during several trials. Recent studies have found that mixing a broad range of stimuli may also yield effective learning if these stimuli are clearly perceived as different, or are explicitly tagged as different. These findings further support the requirement for top-down guidance in order to obtain effective learning.

Enrichment versus differentiation

In some complex perceptual tasks, all humans are experts. We are all very sophisticated, though not infallible, at scene identification, face identification and speech perception. Traditional explanations attribute this expertise to some holistic, somewhat specialized, mechanisms. Perhaps such quick identifications are achieved by more specific and complex perceptual detectors which gradually "chunk" (i.e., unitize) features that tend to co-occur, making it easier to retrieve a whole set of information at once. Whether any co-occurrence of features can gradually be chunked with practice or chunking can only be obtained with some pre-disposition (e.g. faces, phonological categories) is an open question. Current findings suggest that such expertise is correlated with a significant increase in the cortical volume involved in these processes. Thus, we all have somewhat specialized face areas, which may reveal an innate property, but we also develop somewhat specialized areas for written words as opposed to single letters or strings of letter-like symbols. Moreover, experts in a given domain have larger cortical areas involved in that domain. Thus, expert musicians have larger auditory areas. [57] These observations are in line with traditional theories of enrichment proposing that improved performance involves an increase in cortical representation. For this expertise, basic categorical identification may be based on enriched and detailed representations, located to some extent in specialized brain areas. Physiological evidence suggests that training for refined discrimination along basic dimensions (e.g. frequency in the auditory modality) also increases the representation of the trained parameters, though in these cases the increase may mainly involve lower-level sensory areas. [58]

Selective reweighting

In 2005, Petrov, Dosher and Lu pointed out that perceptual learning may be explained in terms of the selection of which analyzers best perform the classification, even in simple discrimination tasks. They explain that the parts of the neural system responsible for particular decisions show specificity, while low-level perceptual units do not. [39] In their model, encodings at the lowest level do not change. Rather, the changes that occur in perceptual learning arise from changes in higher-level, abstract representations of the relevant stimuli. Because specificity can come from differentially selecting information, this "selective reweighting theory" allows for learning of complex, abstract representations. This corresponds to Gibson's earlier account of perceptual learning as the selection and learning of distinguishing features. Selection may be the unifying principle of perceptual learning at all levels. [59]
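
A minimal sketch of the reweighting idea, assuming a fixed bank of noisy orientation-tuned channels and a single readout unit trained with a delta rule, is shown below. It is a simplification for illustration, not the Petrov, Dosher and Lu model itself; the channel tuning, noise level, learning rate, and stimulus tilts are all assumed values. The channel encodings never change during learning; only the readout weights do.

```python
import numpy as np

rng = np.random.default_rng(0)
preferred = np.linspace(-40, 40, 9)   # preferred orientations (deg) of a fixed channel bank

def channel_responses(orientation, width=15.0):
    """Low-level, orientation-tuned channels; their tuning is never modified."""
    r = np.exp(-0.5 * ((orientation - preferred) / width) ** 2)
    return r + rng.normal(0.0, 0.05, size=r.shape)   # internal noise

weights = np.zeros(len(preferred))   # higher-level readout weights: the only thing that learns
learning_rate = 0.1

for trial in range(500):
    label = rng.choice([-1.0, 1.0])                    # category A = -10 deg tilt, B = +10 deg tilt
    r = channel_responses(10.0 * label)
    output = weights @ r                               # decision variable
    weights += learning_rate * (label - output) * r    # delta rule: reweight the readout

# Channels whose responses separate the two tilts end up carrying the largest |weights|
print(np.round(weights, 2))
```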

The impact of training protocol and the dynamics of learning

Ivan Pavlov discovered classical conditioning. He found that when a stimulus (e.g. a sound) is immediately followed by food several times, the mere presentation of this stimulus subsequently elicits salivation in a dog. He further found that when he used a differential protocol, consistently presenting food after one stimulus while not presenting food after another stimulus, dogs were quickly conditioned to salivate selectively in response to the rewarded one. He then asked whether this protocol could be used to increase perceptual discrimination, by differentially rewarding two very similar stimuli (e.g. tones with similar frequency). However, he found that differential conditioning was not effective.

Pavlov's studies were followed by many training studies which found that an effective way to increase perceptual resolution is to begin with a large difference along the required dimension and gradually proceed to small differences along this dimension. This easy-to-difficult transfer was termed "transfer along a continuum".

These studies showed that the dynamics of learning depend on the training protocol, rather than on the total amount of practice. Moreover, it seems that the strategy implicitly chosen for learning is highly sensitive to the choice of the first few trials during which the system tries to identify the relevant cues.

Consolidation and sleep

Several studies have asked whether learning takes place during practice sessions or in between, for example, during subsequent sleep. The dynamics of learning are hard to evaluate since the directly measured parameter is performance, which is affected both by learning, which induces improvement, and by fatigue, which hampers performance. Current studies suggest that sleep contributes to improved and durable learning effects, by further strengthening connections in the absence of continued practice. [47] [60] [61] Both slow-wave and REM (rapid eye movement) stages of sleep may contribute to this process, via not-yet-understood mechanisms.

Comparison and contrast

Practice with comparison and contrast of instances that belong to the same or different categories allows for the pickup of the distinguishing features (those important for the classification task) and the filtering out of irrelevant features. [62]

Task difficulty

Learning easy examples first may lead to better transfer and better learning of more difficult cases. [63] By recording ERPs from human adults, Ding and colleagues investigated the influence of task difficulty on the brain mechanisms of visual perceptual learning. Results showed that difficult task training affected earlier visual processing stages and broader visual cortical regions than easy task training. [64]

Active classification and attention

Active classification effort and attention are often necessary to produce perceptual learning effects. [61] However, in some cases, mere exposure to certain stimulus variations can produce improved discriminations.

Feedback

In many cases, perceptual learning does not require feedback (whether or not the classification is correct). [58] Other studies suggest that block feedback (feedback only after a block of trials) produces more learning effects than no feedback at all. [65]

Limits

Despite the marked perceptual learning demonstrated in different sensory systems and under varied training paradigms, it is clear that perceptual learning must face certain insurmountable limits imposed by the physical characteristics of the sensory system. For instance, in tactile spatial acuity tasks, experiments suggest that the extent of learning is limited by fingertip surface area, which is thought to reflect the underlying density of mechanoreceptors. [12]

Relations to other forms of learning

Declarative & procedural learning

In many domains of expertise in the real world, perceptual learning interacts with other forms of learning. Declarative knowledge tends to develop alongside perceptual learning: as we learn to distinguish among an array of wine flavors, we also develop a vocabulary to describe the intricacies of each flavor.

Similarly, perceptual learning also interacts flexibly with procedural knowledge. For example, the perceptual expertise of a baseball player at bat allows detection, early in the ball's flight, of whether the pitcher has thrown a curveball. However, the perceptual differentiation of the feel of swinging the bat in various ways may also have been involved in learning the motor commands that produce the required swing. [1]

Implicit learning

Perceptual learning is often said to be implicit, such that learning occurs without awareness. It is not at all clear whether perceptual learning is always implicit. Changes in sensitivity that arise are often not conscious and do not involve conscious procedures, but perceptual information can be mapped onto various responses. [1]

In complex perceptual learning tasks (e.g., sorting of newborn chicks by sex, playing chess), experts are often unable to explain what stimulus relationships they are using in classification. However, in less complex perceptual learning tasks, people can point out what information they're using to make classifications.

Applications

Improving perceptual skills

An important potential application of perceptual learning is the acquisition of skill for practical purposes. Thus it is important to understand whether training for increased resolution in lab conditions induces a general upgrade which transfers to other environmental contexts, or results from mechanisms which are context specific. Improvement in complex skills is typically gained by training under complex simulation conditions rather than on one component at a time. Recent lab-based training protocols with complex action computer games have shown that such practice indeed modifies visual skills in a general way, which transfers to new visual contexts. In 2010, Achtman, Green, and Bavelier reviewed the research on video games to train visual skills. [66] They cite a previous review by Green & Bavelier (2006) [67] on using video games to enhance perceptual and cognitive abilities. A variety of skills were upgraded in video game players, including "improved hand-eye coordination, [68] increased processing in the periphery, [69] enhanced mental rotation skills, [70] greater divided attention abilities, [71] and faster reaction times, [72] to name a few". An important characteristic is the functional increase in the size of the effective visual field (within which viewers can identify objects), which is trained in action games and transfers to new settings. Whether learning of simple discriminations, which are trained in isolation, transfers to new stimulus contexts (e.g. complex stimulus conditions) is still an open question.

Like experimental procedures, other attempts to apply perceptual learning methods to basic and complex skills use training situations in which the learner receives many short classification trials. Tallal, Merzenich and their colleagues have successfully adapted auditory discrimination paradigms to address speech and language difficulties. [73] [74] They reported improvements in language learning-impaired children using specially enhanced and extended speech signals. The results applied not only to auditory discrimination performance but also to speech and language comprehension.

Technologies for perceptual learning

In educational domains, recent efforts by Philip Kellman and colleagues showed that perceptual learning can be systematically produced and accelerated using specific, computer-based technology. Their approach to perceptual learning methods takes the form of perceptual learning modules (PLMs): sets of short, interactive trials that develop, in a particular domain, learners' pattern recognition, classification abilities, and their abilities to map across multiple representations. As a result of practice with mapping across transformations (e.g., algebra, fractions) and across multiple representations (e.g., graphs, equations, and word problems), students show dramatic gains in structure recognition in fraction learning and algebra. They also demonstrated that when students practice classifying algebraic transformations using PLMs, the results show remarkable improvements in fluency at algebra problem solving. [59] [75] [76] These results suggest that perceptual learning can offer a needed complement to conceptual and procedural instruction in the classroom.

Similar results have also been replicated in other domains with PLMs, including anatomic recognition in medical and surgical training, [77] reading instrumental flight displays, [78] and apprehending molecular structures in chemistry. [79]

References

  1. Kellman, P. J. (2002). "Perceptual learning". In Pashler, H.; Gallistel, R. (eds.). Stevens' Handbook of Experimental Psychology. Vol. 3: Learning, Motivation, and Emotion (3rd ed.). New York: Wiley. doi:10.1002/0471214426.pas0307. ISBN 0471214426.
  2. Goldstone, R. L., Steyvers, M., Spencer-Smith, J. & Kersten, A. (2000). Interaction between perceptual and conceptual learning. In E. Diettrich & A. B. Markman (Eds). Cognitive Dynamics: Conceptual Change in Humans and Machines (pp. 191-228). Lawrence Erlbaum and Associates
  3. Karni, A; Sagi, D (1993). "The time course of learning a visual skill". Nature. 365 (6443): 250–252. Bibcode:1993Natur.365..250K. doi:10.1038/365250a0. PMID 8371779. S2CID 473406.
  4. Carvalho, Paulo (2017). Human Perceptual Learning and Categorization. Wiley handbooks in cognitive neuroscience. p. 1. ISBN   978-1-118-65094-3.
  5. Westheimer, G; McKee, SP (1978). "Stereoscopic acuity for moving retinal images". Journal of the Optical Society of America. 68 (4): 450–455. Bibcode:1978JOSA...68..450W. doi:10.1364/JOSA.68.000450. PMID   671135.
  6. Saarinen, J; Levi, DM (1995). "Perceptual learning in vernier acuity: What is learned?". Vision Research. 35 (4): 519–527. doi: 10.1016/0042-6989(94)00141-8 . PMID   7900292. S2CID   14458537.
  7. Poggio, T.; Fahle, M.; Edelman, S (1992). "Fast perceptual learning in visual hyperacuity". Science. 256 (5059): 1018–21. Bibcode:1992Sci...256.1018P. doi:10.1126/science.1589770. hdl: 1721.1/6585 . PMID   1589770.
  8. Ball, K; Sekuler, R (1982). "A specific and enduring improvement in visual motion discrimination". Science. 218 (4573): 697–698. Bibcode:1982Sci...218..697B. doi:10.1126/science.7134968. PMID   7134968.
  9. Shiu, LP; Pashler, H (1992). "Improvement in line orientation discrimination is retinally local but dependent on cognitive set". Percept Psychophys. 52 (5): 582–588. doi: 10.3758/BF03206720 . PMID   1437491. S2CID   9245872.
  10. Vogels, R; Orban, GA (1985). "The effect of practice on the oblique effect in line orientation judgments". Vision Res. 25 (11): 1679–1687. doi:10.1016/0042-6989(85)90140-3. PMID   3832592. S2CID   37198864.
  11. Schneider, W.; Shiffrin, R.M. (1977). "Controlled and automatic human information processing: I. Detection, search, and attention". Psychological Review. 84 (1): 1–66. doi:10.1037/0033-295X.84.1.1. S2CID   26558224.
  12. Wong, M.; Peters, R. M.; Goldreich, D. (2013). "A Physical Constraint on Perceptual Learning: Tactile Spatial Acuity Improves with Training to a Limit Set by Finger Size". Journal of Neuroscience. 33 (22): 9345–9352. doi:10.1523/JNEUROSCI.0514-13.2013. PMC 6618562. PMID 23719803.
  13. Sathian, K; Zangaladze, A (1997). "Tactile learning is task specific but transfers between fingers". Perception & Psychophysics. 59 (1): 119–28. doi: 10.3758/bf03206854 . PMID   9038414. S2CID   43267891.
  14. Imai, T; Kamping, S; Breitenstein, C; Pantev, C; Lütkenhöner, B; Knecht, S (2003). "Learning of tactile frequency discrimination in humans". Human Brain Mapping. 18 (4): 260–71. doi:10.1002/hbm.10083. PMC   6871959 . PMID   12632464.
  15. Harris, JA; Harris, IM; Diamond, ME (2001). "The topography of tactile learning in humans". Journal of Neuroscience. 21 (3): 1056–61. doi:10.1523/JNEUROSCI.21-03-01056.2001. PMC   6762328 . PMID   11157091.
  16. Wong, M; Gnanakumaran, V; Goldreich, D (11 May 2011). "Tactile spatial acuity enhancement in blindness: evidence for experience-dependent mechanisms". Journal of Neuroscience. 31 (19): 7028–37. doi:10.1523/jneurosci.6461-10.2011. PMC   6703211 . PMID   21562264.
  17. Roeder, Jessica (2017). The Neuropsychology of Perceptual Category Learning. Handbook of categorization in cognitive science. p. 1. ISBN   978-0-08-101107-2.
  18. Goldstone, Robert (2015). Perceptual Learning. The Oxford handbook of philosophy of perception. p. 1. ISBN   978-0-19-960047-2.
  19. Goldstone, R.L.; Hendrickson, A. (2010). "Categorical perception". Wiley Interdisciplinary Reviews: Cognitive Science. 1 (1): 69–78. doi:10.1002/wcs.26. PMID   26272840. S2CID   7830566.
  20. Werker, J.F.; Lalonde, C.E. (1988). "Cross-language speech perception: initial capabilities and developmental change". Developmental Psychology. 24 (5): 672–83. doi:10.1037/0012-1649.24.5.672.
  21. De Groot, A.D. (1965). Thought and choice in chess. The Hague, Netherlands: Mouton.
  22. Chase, W.G.; Simon, H.A. (1973). "Perception in Chess". Cognitive Psychology. 4 (1): 55–81. doi:10.1016/0010-0285(73)90004-2.
  23. Reicher, G.M. (1969). "Perceptual recognition as a function of meaningfulness of stimulus material". Journal of Experimental Psychology. 81 (2): 275–280. doi:10.1037/h0027768. PMID   5811803.
  24. Wheeler, D. D. (1970). Processes in the visual recognition of words (Doctoral dissertation, University of Michigan, 1970). Dissertation Abstracts International, 31(2), 940B.
  25. Liberman, A.M.; Harris, K.S.; Hoffman, H.S.; Griffith, B.C. (1957). "The discrimination of speech sounds within and across phonemes boundaries". Journal of Experimental Psychology. 54 (5): 358–368. doi:10.1037/h0044417. PMID   13481283. S2CID   10117886.
  26. Burns, E.M.; Ward, W.D. (1978). "Categorical perception - phenomenon or epiphenomenon: Evidence from experiments in the perception of melodic musical intervals". J. Acoust. Soc. Am. 63 (2): 456–68. Bibcode:1978ASAJ...63..456B. doi:10.1121/1.381737. PMID 670543.
  27. Myles-Worsley, M.; Johnston, W.A.; Simon, M.A. (1988). "The influence of expertise on X-ray image processing". Journal of Experimental Psychology: Learning, Memory, and Cognition. 14 (3): 553–57. doi:10.1037/0278-7393.14.3.553. PMID   2969946.
  28. Biederman, I.; Shiffrar, M. M. (1987). "Sexing day-old chicks: a case study and expert systems analysis of a difficult perceptual-learning task". Journal of Experimental Psychology: Learning, Memory, and Cognition. 13 (4): 640–45. doi:10.1037/0278-7393.13.4.640.
  29. Peron, R.M.; Allen, G.L. (1988). "Attempts to train novices for beer flavor discrimination: a matter of taste". Journal of General Psychology. 115 (4): 403–418. doi:10.1080/00221309.1988.9710577. PMID   3210002.
  30. Shapiro, P.N.; Penrod, S.D. (1986). "Meta-analysis of face identification studies". Psychological Bulletin. 100 (2): 139–56. doi:10.1037/0033-2909.100.2.139.
  31. O'Toole, A. J.; Peterson, J.; Deffenbacher, K.A. (1996). "An "other-race effect" for categorizing faces by sex". Perception. 25 (6): 669–76. doi:10.1068/p250669. PMID   8888300. S2CID   7191979.
  32. Tanaka, J.; Taylor, M. (1991). "Object categories and expertise: Is the basic level in the eye of the beholder?". Cognitive Psychology. 23 (3): 457–82. doi:10.1016/0010-0285(91)90016-H. S2CID   2259482.
  33. Burns, B.; Shepp, B.E. (1988). "Dimensional interactions and the structure of psychological space: the representation of hue, saturation, and brightness". Perception and Psychophysics. 43 (5): 494–507. doi: 10.3758/BF03207885 . PMID   3380640. S2CID   20843793.
  34. Volkman, A. W. (1858). "Über den Einfluss der Übung". Leipzig Ber Math-Phys Classe. 10: 38–69.
  35. James, W (1890). The principles of psychology. Vol. I. New York: Dover Publications Inc. p. 509. doi:10.1037/11059-000.
  36. Hull, C.L. (1920). "Quantitative aspects of evolution of concepts". Psychological Monographs. Vol. XXVIII. pp. 1–86. doi:10.2105/ajph.10.7.583. PMC 1362740. PMID 18010338.
  37. Merzenich MM, Kaas JH, Wall JT, Sur M, Nelson RJ, Felleman DJ (1983). "Progression of change following median nerve section in the cortical representation of the hand in areas 3b and 1 in adult owl and squirrel monkeys". Neuroscience. 10 (3): 639–65. doi:10.1016/0306-4522(83)90208-7. PMID   6646426. S2CID   4930.
  38. Fahle, M.; Poggio, T., eds. (2002). Perceptual Learning. Cambridge, MA: MIT Press. p. xiv. ISBN 0262062216.
  39. Petrov, A. A.; Dosher, B. A.; Lu, Z.-L. (2005). "The Dynamics of Perceptual Learning: An Incremental Reweighting Model". Psychological Review. 112 (4): 715–743. doi:10.1037/0033-295X.112.4.715. PMID 16262466. S2CID 18320512.
  40. Watanabe, T.; Nanez, J.E.; Sasaki, Y. (2001). "Perceptual learning without perception". Nature. 413 (6858): 844–848. Bibcode:2001Natur.413..844W. doi:10.1038/35101601. PMID   11677607. S2CID   4381577.
  41. Seitz; Watanabe (2009). "The Phenomenon of Task-Irrelevant Perceptual Learning". Vision Research. 49 (21): 2604–2610. doi:10.1016/j.visres.2009.08.003. PMC   2764800 . PMID   19665471.
  42. Seitz; Watanabe (2003). "Is subliminal learning really passive?". Nature. 422 (6927): 36. doi: 10.1038/422036a . PMID   12621425. S2CID   4429167.
  43. Seitz, Kim; Watanabe (2009). "Rewards Evoke Learning of Unconsciously Processed Visual Stimuli in Adult Humans". Neuron. 61 (5): 700–7. doi:10.1016/j.neuron.2009.01.016. PMC   2683263 . PMID   19285467.
  44. Seitz; Watanabe (2005). "A unified model of task-irrelevant and task-relevant perceptual learning". Trends in Cognitive Sciences. 9 (7): 329–334. doi:10.1016/j.tics.2005.05.010. PMID   15955722. S2CID   11648415.
  45. Swallow, KM; Jiang, YV (April 2010). "The Attentional Boost Effect: Transient increases in attention to one task enhance performance in a second task". Cognition. 115 (1): 118–32. doi:10.1016/j.cognition.2009.12.003. PMC   2830300 . PMID   20080232.
  46. Y Jiang; MM Chun (2001). "Selective attention modulates implicit learning". The Quarterly Journal of Experimental Psychology A. 54 (4): 1105–24. CiteSeerX   10.1.1.24.8668 . doi:10.1080/713756001. PMID   11765735. S2CID   6839092.
  47. Karni, A.; Sagi, D. (1993). "The time course of learning a visual skill". Nature. 365 (6443): 250–252. Bibcode:1993Natur.365..250K. doi:10.1038/365250a0. PMID 8371779. S2CID 473406.
  48. Qu, Z. Song; Ding, Y. (2010). "ERP evidence for distinct mechanisms of fast and slow visual perceptual learning". Neuropsychologia. 48 (6): 1869–1874. doi:10.1016/j.neuropsychologia.2010.01.008. PMID   20080117. S2CID   17617635.
  49. Fahle, M., Poggio, T. (2002) Perceptual learning. Cambridge, MA: MIT Press.
  50. Fahle, M.; Edelman, S. (1993). "Long-term learning in vernier acuity: Effects of stimulus orientation, range and of feedback". Vision Research. 33 (3): 397–412. doi:10.1016/0042-6989(93)90094-d. PMID   8447110. S2CID   2141339.
  51. Recanzone, G.H.; Schreiner, C.E.; Merzenich, M.M. (1993). "Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys". Journal of Neuroscience. 13 (1): 87–103. doi:10.1523/JNEUROSCI.13-01-00087.1993. PMC   6576321 . PMID   8423485.
  52. Leonards, U.; Rettenback, R.; Nase, G.; Sireteanu, R. (2002). "Perceptual learning of highly demanding visual search tasks". Vision Research. 42 (18): 2193–204. doi: 10.1016/s0042-6989(02)00134-7 . PMID   12207979. S2CID   506154.
  53. Ahissar, M.; Hochstein, S. (2000). "Learning pop-out detection: The spread of attention and learning in feature search: effects of target distribution and task difficulty". Vision Research. 40 (10–12): 1349–64. doi: 10.1016/s0042-6989(00)00002-x . PMID   10788645. S2CID   13172627.
  54. Schoups, A.; Vogels, R.; Qian, N.; Orban, G. (2001). "Practising orientation identification improves orientation coding in V1 neurons". Nature. 412 (6846): 549–53. Bibcode:2001Natur.412..549S. doi:10.1038/35087601. PMID   11484056. S2CID   4419839.
  55. Ghose, G.M.; Yang, T.; Maunsell, J.H. (2002). "Physiological correlates of perceptual learning in monkey v1 and v2". Journal of Neurophysiology. 87 (4): 1867–88. doi:10.1152/jn.00690.2001. PMID   11929908.
  56. M. Ahissar; S. Hochstein (2004). "The reverse hierarchy theory of visual perceptual learning". Trends in Cognitive Sciences. 8 (10): 457–64. doi:10.1016/j.tics.2004.08.011. PMID   15450510. S2CID   16274816.
  57. Draganski, B; May, A (2008). "Training-induced structural changes in the adult human brain". Behav Brain Res. 192 (1): 137–142. doi:10.1016/j.bbr.2008.02.015. PMID   18378330. S2CID   2199886.
  58. Gibson, J.J.; Gibson, E.J. (1955). "Perceptual learning: Differentiation or enrichment?". Psychological Review. 62 (1): 32–41. doi:10.1037/h0048826. PMID 14357525.
  59. Kellman, P. J.; Garrigan, P. (2009). "Perceptual learning and human expertise". Physics of Life Reviews. 6 (2): 53–84. Bibcode:2009PhLRv...6...53K. doi:10.1016/j.plrev.2008.12.001. PMC 6198797. PMID 20416846.
  60. Stickgold, R.; LaTanya, J.; Hobson, J.A. (2000). "Visual discrimination learning requires sleep after training". Nature Neuroscience. 3 (12): 1237–1238. doi: 10.1038/81756 . PMID   11100141. S2CID   11807197.
  61. Shiu, L.; Pashler, H. (1992). "Improvement in line orientation discrimination is retinally local but dependent on cognitive set". Perception & Psychophysics. 52 (5): 582–588. doi:10.3758/bf03206720. PMID 1437491. S2CID 9245872.
  62. Gibson, Eleanor (1969) Principles of Perceptual Learning and Development. New York: Appleton-Century-Crofts
  63. Ahissar, M.; Hochstein, S. (1997). "Task difficulty and learning specificity". Nature. 387 (6631): 401–406. doi:10.1038/387401a0. PMID   9163425. S2CID   4343062.
  64. Wang, Y.; Song, Y.; Qu, Z.; Ding, Y.L. (2010). "Task difficulty modulates electrophysiological correlates of perceptual learning". International Journal of Psychophysiology. 75 (3): 234–240. doi:10.1016/j.ijpsycho.2009.11.006. PMID   19969030.
  65. Herzog, M.H.; Fahle, M. (1998). "Modeling perceptual learning: Difficulties and how they can be overcome". Biological Cybernetics. 78 (2): 107–117. doi:10.1007/s004220050418. PMID   9525037. S2CID   12351107.
  66. R.L. Achtman; C.S. Green & D. Bavelier (2008). "Video games as a tool to train visual skills". Restor Neurol Neuroscience. 26 (4–5): 4–5. PMC   2884279 . PMID   18997318.
  67. Green, C.S., & Bavelier, D.. (2006) The Cognitive Neuroscience of Video Games. In: Messaris P, Humphrey L, editors. Digital Media: Transformations in Human Communication. New York: Peter Lang
  68. Griffith, J.L.; Voloschin, P.; Gibb, G.D.; Bailey, J.R. (1983). "Differences in eye-hand motor coordination of video-game users and non-users". Perceptual and Motor Skills. 57 (1): 155–158. doi:10.2466/pms.1983.57.1.155. PMID   6622153. S2CID   24182135.
  69. Green, C.S.; Bavelier, D. (2006). "Effect of action video games on the spatial distribution of visuospatial attention". Journal of Experimental Psychology: Human Perception and Performance. 32 (6): 1465–1478. doi:10.1037/0096-1523.32.6.1465. PMC   2896828 . PMID   17154785.
  70. Sims, V.K.; Mayer, R.E. (2000). "Domain specificity of spatial expertise: The case of video game players". Applied Cognitive Psychology. 16: 97–115. doi: 10.1002/acp.759 .
  71. Greenfield, P.M.; DeWinstanley, P.; Kilpatrick, H.; Kaye, D. (1994). "Action video games and informal education: effects on strategies for dividing visual attention". Journal of Applied Developmental Psychology. 15: 105–123. doi:10.1016/0193-3973(94)90008-6.
  72. Castel, A.D.; Pratt, J.; Drummond, E. (2005). "The effects of action video game experience on the time course of inhibition of return and the efficiency of visual search". Acta Psychologica. 119 (2): 217–230. doi:10.1016/j.actpsy.2005.02.004. PMID   15877981.
  73. Merzenich; Jenkins, W.M.; Johnstone, P.; Schreiner, C.; Miller, S.; Tallal, P. (1996). "Temporal processing deficits of language impaired children ameliorated by training". Science. 271 (5245): 77–81. doi:10.1126/science.271.5245.77. PMID   8539603. S2CID   24141773.
  74. Tallal, P.; Merzenich, M.; Miller, S.; Jenkins, W. (1998). "Language learning impairment: Integrating research and remediation". Scandinavian Journal of Psychology . 39 (3): 197–199. doi:10.1111/1467-9450.393079. PMID   9800537.
  75. Kellman, P. J.; Massey, C. M.; Son, J. Y. (2009). "Perceptual Learning Modules in Mathematics: Enhancing Students' Pattern Recognition, Structure Extraction, and Fluency". Topics in Cognitive Science. 2 (2): 1–21. doi:10.1111/j.1756-8765.2009.01053.x. PMC   6124488 . PMID   25163790.
  76. Kellman, P.J.; Massey, C.M.; Roth, Z.; Burke, T.; Zucker, J.; Saw, A.; et al. (2008). "Perceptual learning and the technology of expertise: Studies in fraction learning and algebra". Pragmatics & Cognition. 16 (2): 356–405. doi:10.1075/p&c.16.2.07kel.
  77. Guerlain, S.; La Follette, M.; Mersch, T.C.; Mitchell, B.A.; Poole, G.R.; Calland, J.F.; et al. (2004). "Improving surgical pattern recognition through repetitive viewing of video clips". IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans. 34 (6): 699–707. doi:10.1109/tsmca.2004.836793. S2CID   7058984.
  78. Kellman, P.J.; Kaiser, M.K. (1994). "Extracting object motion during observer motion: Combining constraints from optic flow and binocular disparity". Journal of the Optical Society of America A. 12 (3): 623–5. doi:10.1364/JOSAA.12.000623. PMID   7891215. S2CID   9306696.
  79. Wise, J., Kubose, T., Chang, N., Russell, A., & Kellman, P. (1999). Perceptual learning modules in mathematics and science instruction. Artificial intelligence in education: open learning environments: new computational technologies to support learning, exploration and collaboration, 169.