The Colavita visual dominance effect refers to the phenomenon in which study participants presented with a bimodal (audiovisual) stimulus respond more often to its visual component. [1]
Research has shown that vision is the dominant sense in human beings who do not suffer from sensory impairments (e.g. blindness, cataracts). [2] Theorists have proposed that the Colavita visual dominance effect demonstrates a bias toward visual sensory information, because the presence of auditory stimuli is commonly neglected during audiovisual events. [3]
Francis B. Colavita, after whom the Colavita visual dominance effect is named, was the first to demonstrate this phenomenon, in 1974. Colavita's original experiments found that visual dominance for audiovisual events persists under a number of conditions, and other researchers have since established it as a robust effect.
In 1974, Colavita conducted an experiment that provided evidence for visual dominance in humans performing an audiovisual discrimination task. [4]
In his experiment, Colavita (1974) presented participants with an auditory (tone) or visual (light) stimulus, to which they were instructed to respond by pressing the ‘tone key’ or ‘light key’ respectively. [4] Throughout the experiment, unimodal auditory trials, unimodal visual trials and a small number of audiovisual bimodal trials were randomly presented. [4]
Colavita deceived the participants by informing them that the bimodal trials in the experiment occurred "accidentally". [4] During practice trials, Colavita would "accidentally" present audiovisual stimuli, and would then draw the participants’ attention to what had just happened and apologize for the ‘accident’. [4] In addition, the participants were not instructed on how to respond on such trials, nor told whether trials of this type would occur again. [5]
The results showed that participants had almost equivalent response times for auditory and visual stimuli in unimodal trials. [4] Additionally, Colavita found that participants pressed the ‘light key’ in the majority of the bimodal trials. This was seen as evidence of visual dominance because participants failed to acknowledge the presence of the auditory stimulus in most bimodal trials. [4] [5] However, because Colavita deceptively presented the bimodal trials as "accidental", researchers have proposed that experimenter expectancy effects, task demands or methodological problems may have contributed to the visual dominance effect reported in his original study. [6] Nevertheless, subsequent experiments that discontinued the use of deception continue to show a robust Colavita visual dominance effect. [5]
For example, Sinnett and his colleagues conducted an experiment in which they presented participants with three response keys, one for each type of response (unimodal visual, unimodal auditory and bimodal audiovisual), instead of just two, and instructed participants to press the bimodal key when responding to audiovisual stimuli. [6] This manipulation resulted in a significant reduction of the Colavita effect, with errors committed in only a small number of bimodal trials. [6] In another experiment, Sinnett and his colleagues conducted a pre-specified target detection task in which auditory targets were more frequent than visual or bimodal targets. This eliminated the Colavita effect, which the authors attributed to the introduction of a bias toward auditory stimuli. [7] Ngo and her colleagues replicated these results in a similar study, showing that under the appropriate conditions and task demands, the Colavita effect can be reversed. [8] Sinnett and his colleagues also note that animals and humans increase their reliance on auditory stimuli in high-arousal situations and when facing potential threats, [7] which could imply that the Colavita effect is situation and context dependent.
Colavita also varied the intensity of the visual and the auditory stimuli to determine whether matching the intensities of the two stimuli, or increasing the intensity of the auditory stimulus alone, would decrease the occurrence of the Colavita effect. [4] However, none of the experimental manipulations of stimulus intensity decreased the occurrence of the Colavita effect. [4] Further research has been conducted to replicate Colavita's experiment and to extend the Colavita effect to more complex stimuli. For example, Sinnett, Spence and Soto-Faraco conducted an experiment in 2007 in which pictures and sounds were used as stimuli instead of the light and tone. [6] The rationale for using more complex stimuli was that such stimuli would increase perceptual load, requiring more attentional resources. [9] The findings from this study showed that the Colavita effect continues to occur when stimuli become more complex.
According to Hartcher-O’Brien and colleagues, the Colavita effect and visual dominance more generally can be attributed to an imbalance between vision and the other sensory modalities in access to processing resources. Failure of a stimulus to reach awareness when multiple stimuli are presented at the same time may result in sensory dominance. [1] For decades, there has been a continuous debate regarding whether the Colavita effect occurs at a sensory level or at the level of attention, involving exogenous (involuntary or reflexive) or endogenous (voluntary) attention. Research on this debate has produced inconclusive results.
Posner and colleagues conducted a study to look at the origin and significance of visual dominance. [2] They proposed that attentional resources are endogenously (voluntarily) biased towards vision in order to compensate for the low alerting capability of visual signals. Posner and colleagues proposed that visual stimuli do not alert attention automatically, whereas stimuli in other sensory modalities do. In order for a visual stimulus to serve as an effective alerting mechanism, the person must process it through active attention. [2] Consequently, when attention is directed towards vision, attention to other sensory modalities is reduced. Posner and colleagues (1976) viewed this attentional bias towards vision as an endogenous (voluntary) strategy for responding to one's environment. [2]
Conversely, Koppen and Spence suggest that attentional bias towards the visual modality may be exogenously (reflexively) mediated. [5] That is to say, visual stimuli may in fact capture a person's attention more effectively than stimuli from other sensory modalities.
Koppen and Spence (2007a) investigated the role that exogenous (reflexive) attention plays in sensory dominance, and found that the Colavita effect can be modulated (but not eliminated) depending on which sensory modality (audition or vision) individuals focus their exogenous attention on. [5] Visual stimuli were more effective at capturing attention than auditory stimuli. Thus, Koppen and Spence propose that the Colavita effect may reflect differences in the exogenous attention-capturing qualities of visual versus auditory stimuli. [5]
Sinnett, Spence, and Soto-Faraco (2007) conducted an experiment in which they noted that, when attention was manipulated in the direction of the auditory stimulus, the Colavita effect could be reduced, but not reversed to create auditory dominance. [6] As a result, they proposed that visual dominance cannot be based entirely on attentional mechanisms, but must occur at a sensory level. This inability to reverse the Colavita effect when attention is directed to an auditory stimulus suggests that visual dominance may involve innate biases towards visual modalities. In one of their experiments, in order to reduce the magnitude of visual dominance, Sinnett and his colleagues (2007) created a strong bias towards the auditory modality. [6] To do this, they increased the proportion of auditory targets, which resulted in faster reaction times to unimodal auditory targets than to unimodal visual targets. [6] Participants showed a small (nonsignificant) bias towards making more erroneous unimodal auditory responses, and no reversal of the Colavita effect was observed. These results support the claim that visual dominance occurs at a sensory level, before the engagement of attention. [6]
Another theory that has been used in the explanation of the Colavita effect is the ‘Failure of Binding’. [8] This theory suggests that participants bind together the visual and auditory components of an audiovisual stimulus, which can hinder the processing of the auditory component of the audiovisual stimulus. [8] This occurs because the visual component alone provides enough information about the audiovisual stimulus. [8] This theory only applies when the visual and auditory stimuli presented are congruent. When they are incongruent, the visual component is not an accurate representation of the auditory component. In this case, incongruency can act as a cue to inform participants that a bimodal target occurred. [8]
The Colavita effect has been shown to be affected by factors that contribute to the intermodal binding of auditory and visual stimuli during perception. These factors of interest are spatial and temporal coincidence between the auditory and visual stimuli, which modulate the Colavita effect through temporal separation and temporal order.
For example, results from an experiment conducted by Koppen and Spence (2007b) showed a larger Colavita effect when auditory and visual stimuli were presented closer together in time. [7] When the stimuli were presented further apart in time, the Colavita effect was reduced. [7] Their results also showed that the Colavita effect was largest when the visual stimulus was presented before the auditory stimulus during the bimodal trials. [7] Conversely, the Colavita effect was reversed or reduced when the auditory stimulus preceded the visual stimulus. In addition, Koppen and Spence conducted an experiment in which participants showed a significantly larger Colavita effect when the auditory and visual stimuli were presented from the same spatial location, rather than from different locations. [10] Based on these results, Koppen and colleagues proposed that the ‘unity effect’ can adequately explain the role of spatial and temporal coincidence between stimuli in modulating the Colavita effect. [11] According to the unity effect, intersensory bias is greater when participants unconsciously bind the two sensory events and believe that a single unimodal object is being perceived, rather than two separate events.
Research has shown that multisensory cues from an object may share certain semantic features, which may contribute to cross-modal binding of sensory information. Sinnett and his colleagues conducted an experiment using meaningful stimuli, and their findings showed that the Colavita effect continued to occur when complex and meaningful stimuli were used. [6]
In addition, a study conducted by Laurienti and colleagues showed that, under certain conditions, responses to audiovisual stimuli can be affected by semantic congruence or incongruence. More specifically, their findings showed that participants responded faster to congruent auditory and visual stimuli than to incongruent stimuli. [12] In addition, Koppen, Alsius and Spence conducted a study which investigated whether the Colavita effect would be modulated by the semantic congruency between the visual and auditory stimulus, using stimuli of similar semantic meaning and complexity. [11] The findings from this study showed that semantic congruency had no effect on the magnitude of the Colavita effect in the experiments, yet it had a significant effect on participants’ performance in the speeded discrimination task. Participants showed a pattern that reflected difficulties with separating the auditory stimulus from the visual stimulus when these stimuli had congruent semantic meaning and were presented simultaneously. [11] For incongruent stimuli, participants had faster response times, which could also be explained by the previously mentioned theory of ‘Failure of Binding’. [11]
Previous research has shown that people with one eye have enhanced spatial vision, implying that vision in the remaining working eye compensates for the loss of the simultaneous use of both eyes. [13] Furthermore, individuals who have lost the ability to use one sensory system develop an enhanced ability in the use of the remaining senses. It is thought that intact sensory systems may adapt and compensate for the loss of one of the senses. However, little is known about cross-sensory adaptation in cases of developmental partial sensory deprivation, such as monocular enucleation, where individuals have one eye surgically removed early in life. [13]
In an experiment, Moro and Steeves tested whether participants with one eye showed the Colavita visual dominance effect, and compared their performance to binocular viewers (using both eyes) and monocular (eye-patched) control participants. [13] In their experiment, Moro and Steeves used a stimulus detection and discrimination task, which had three conditions: unimodal visual targets, unimodal auditory targets, and bimodal (visual and auditory presented together) targets. The binocular and monocular control participants both displayed the Colavita visual dominance effect; however, the monocular enucleation group did not. [13] Moro and Steeves demonstrated that people with one eye show equivalent auditory and visual processing, compared with binocular and monocular viewing controls, when asked to discriminate between auditory, visual, and bimodal stimuli. [13]
The lack of visual dominance in the enucleated participants cannot be due to an overall reduction in visual input, as the monocular control group wearing an eye patch performed the same as the binocular control group. [13] Moro and Steeves concluded that people with one eye develop an unbiased allocation of sensory resources, which places less emphasis on vision when bimodal stimuli are presented. [13] The lack of a Colavita effect in people with one eye raises the possibility that a decrease in visual dominance allows for the adaptation of other senses, such as audition.
Research has shown that vision is the most dominant of the five senses that human beings possess. Vision can dominate over audition in localization judgement, over touch for shape judgement, and over proprioception when trying to determine the position of one's limb in space. [1] Individuals’ perception of auditory stimuli is often influenced by visual stimuli. Visual dominance has been demonstrated in a multisensory illusion called the McGurk effect, where a visual stimulus paired with an incongruent auditory stimulus leads to the misperception of auditory information, resulting in individuals hearing a sound different from the real auditory input. [4] According to Posner and colleagues, individuals’ visual system lacks the capacity to properly alert them of possible threats. [2] Therefore, it is possible that visual dominance results from the attention system's attempt to compensate for the visual system's poor alerting capabilities. [2]
Research has shown that vision is also the dominant modality in a number of other animal species; this is thought to be due to the majority of biologically important information being received visually. Visual dominance effects over audition have been reported in cows, [14] rats, [15] and pigeons. [16] For example, Miller conducted an experiment whose findings showed a visual dominance effect in rats. [15] In this experiment, rats were trained to press a lever in response to a visual (light) and to an auditory (tone) stimulus, which could be presented individually or simultaneously. The findings from this experiment showed that, when both stimuli were presented simultaneously, rats responded more frequently on the 'light' lever than on the 'tone' lever. [15]
Vision has also been shown to be a dominant modality in pigeons, according to a study by Randich, Klein and LoLordo. [16] Pigeons were trained to perform an auditory-visual discrimination task by depressing two different foot treadles, one when an auditory tone was presented, and another in the presence of a red light. The results from this experiment showed that pigeons demonstrated the Colavita visual dominance effect. When presented with a bimodal (auditory and visual) stimulus, the pigeons always responded on the visual treadle, implying visual dominance. Furthermore, in a subsequent task, Randich and his colleagues [16] delayed the presentation of the visual stimulus relative to the auditory stimulus. Visual treadle responses by pigeons still occurred with a delay interval of less than 500 ms, showing that visual dominance prevailed even when visual stimulus onset was delayed.
The developmental trajectory for sensory dominance and multisensory interactions remains to be characterised. There have been many experiments exploring sensory dominance in adult humans, animal models and even infants, but there is a dearth of information covering the age range of later childhood and adolescence. While visual dominance prevails in adults, it has been shown that infants and young children demonstrate auditory dominance.
Lewkowicz (1988a, 1988b) presented 6- and 10-month-olds with audiovisual compounds differing in the temporal characteristics (i.e., rate or duration of stimulus presentation) of either the visual or auditory component. [17] Results showed that infants (particularly those aged 6 months) detected temporal changes in the auditory, but not visual, modality, indicating auditory dominance in infants. Lewkowicz (1988a) suggests that auditory dominance in early development might be an indication of the ontogenetically asynchronous development of the sensory systems. [17] Furthermore, it is important to note that the auditory system starts being responsive to external input well before birth. The visual system is the least stimulated sense in utero throughout gestation, as it receives only very low light intensities. This suggests that the visual receptors only start being fully stimulated after birth. [18]
Further behavioural studies have shown that this auditory dominance persists up to 4 years of age. [19] Nava and Pavani (2013) investigated the development of multisensory interactions in three school-aged groups of children (6-7, 9-10, and 11-12 years respectively) using the Colavita paradigm, with the aim of directly assessing whether auditory dominance persists beyond 4 years of age and examining when adult-like visual dominance begins to emerge. [20] They found that auditory dominance persists until 6 years of age, and that the transition toward visual dominance starts at school age. In particular, Experiment 1 showed that children aged 6 to 7 years do not exhibit a Colavita effect, implying auditory dominance. 9- to 10-year-old and 11- to 12-year-old children exhibited adult-like visual dominance in the Colavita effect, suggesting that sensory dominance undergoes a developmental change in late childhood. Nava and Pavani (2013) suggest that visual dominance begins to emerge at the ages of 9 to 10 and is consolidated by 11 to 12 years of age. [20]
This pattern of sensory dominance suggests a gradual change in multisensory perception during development, with the consolidation of adult-like processing of multisensory inputs starting in late childhood, as auditory dominance gives way to visual dominance. [20]
The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. The illusion occurs when the auditory component of one sound is paired with the visual component of another sound, leading to the perception of a third sound. The visual information a person gets from seeing a person speak changes the way they hear the sound. If a person is getting poor quality auditory information but good quality visual information, they may be more likely to experience the McGurk effect. Integration abilities for audio and visual information may also influence whether a person will experience the effect. People who are better at sensory integration have been shown to be more susceptible to the effect. Many people are affected differently by the McGurk effect based on many factors, including brain damage and other disorders.
The Atkinson–Shiffrin model is a model of memory proposed in 1968 by Richard Atkinson and Richard Shiffrin. The model asserts that human memory has three separate components: a sensory register, a short-term store, and a long-term store.
Stimulus modality, also called sensory modality, is one aspect of a stimulus or what is perceived after a stimulus. For example, the temperature modality is registered after heat or cold stimulate a receptor. Some sensory modalities include: light, sound, temperature, taste, pressure, and smell. The type and location of the sensory receptor activated by the stimulus plays the primary role in coding the sensation. All sensory modalities work together to heighten stimuli sensation when necessary.
Multisensory integration, also known as multimodal integration, is the study of how information from the different sensory modalities may be integrated by the nervous system. A coherent representation of objects combining modalities enables animals to have meaningful perceptual experiences. Indeed, multisensory integration is central to adaptive behavior because it allows animals to perceive a world of coherent perceptual entities. Multisensory integration also deals with how different sensory modalities interact with one another and alter each other's processing.
The Levels of Processing model, created by Fergus I. M. Craik and Robert S. Lockhart in 1972, describes memory recall of stimuli as a function of the depth of mental processing. Deeper levels of analysis produce more elaborate, longer-lasting, and stronger memory traces than shallow levels of analysis. Depth of processing falls on a shallow to deep continuum. Shallow processing leads to a fragile memory trace that is susceptible to rapid decay. Conversely, deep processing results in a more durable memory trace.
The cocktail party effect is the phenomenon of the brain's ability to focus one's auditory attention on a particular stimulus while filtering out a range of other stimuli, as when a partygoer can focus on a single conversation in a noisy room. Listeners have the ability to both segregate different stimuli into different streams, and subsequently decide which streams are most pertinent to them. Thus, it has been proposed that one's sensory memory subconsciously parses all stimuli and identifies discrete pieces of information by classifying them by salience. This effect is what allows most people to "tune into" a single voice and "tune out" all others. This phenomenon is often described in terms of "selective attention" or "selective hearing". It may also describe a similar phenomenon that occurs when one may immediately detect words of importance originating from unattended stimuli, for instance hearing one's name among a wide range of auditory input.
Charles Spence is an experimental psychologist at the University of Oxford. He is the head of the Crossmodal Research group, which specializes in research on the integration of information across different sensory modalities. He also teaches Experimental Psychology to undergraduates at Somerville College. He is currently a consultant for a number of multinational companies, advising on various aspects of multisensory design. He has also conducted research on human-computer interaction issues on the Crew Work Station on the European Space Shuttle, and currently works on problems associated with the design of foods that maximally stimulate the senses, and with the effect of the indoor environment on mood, well-being, and performance. Spence has published more than 500 articles in top-flight scientific journals over the last decade. He has been awarded the 10th Experimental Psychology Society Prize, the British Psychology Society: Cognitive Section Award, the Paul Bertelson Award, recognizing him as the young European Cognitive Psychologist of the Year, and, most recently, the prestigious Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation in Germany.
Sensory processing is the process that organizes sensation from one's own body and the environment, thus making it possible to use the body effectively within the environment. Specifically, it deals with how the brain processes multiple sensory modality inputs, such as proprioception, vision, auditory system, tactile, olfactory, vestibular system, interoception, and taste into usable functional outputs.
The cutaneous rabbit illusion is a tactile illusion evoked by tapping two or more separate regions of the skin in rapid succession. The illusion is most readily evoked on regions of the body surface that have relatively poor spatial acuity, such as the forearm. A rapid sequence of taps delivered first near the wrist and then near the elbow creates the sensation of sequential taps hopping up the arm from the wrist towards the elbow, although no physical stimulus was applied between the two actual stimulus locations. Similarly, stimuli delivered first near the elbow then near the wrist evoke the illusory perception of taps hopping from elbow towards wrist. The illusion was discovered by Frank Geldard and Carl Sherrick of Princeton University, in the early 1970s, and further characterized by Geldard (1982) and in many subsequent studies. Geldard and Sherrick likened the perception to that of a rabbit hopping along the skin, giving the phenomenon its name. While the rabbit illusion has been most extensively studied in the tactile domain, analogous sensory saltation illusions have been observed in audition and vision. The word "saltation" refers to the leaping or jumping nature of the percept.
The bouba/kiki effect is a non-arbitrary mapping between speech sounds and the visual shape of objects. It was first documented by Wolfgang Köhler in 1929 using nonsense words. The effect has been observed in American university students, Tamil speakers in India, young children, and infants, and has also been shown to occur with familiar names. It is absent in individuals who are congenitally blind and reduced in autistic individuals. The effect has recently been investigated using fMRI.
Extinction is a neurological disorder that impairs the ability to perceive multiple stimuli of the same type simultaneously. Extinction is usually caused by damage resulting in lesions on one side of the brain. Those who are affected by extinction have a lack of awareness in the contralesional side of space and a loss of exploratory search and other actions normally directed toward that side.
Auditory spatial attention is a specific form of attention, involving the focusing of auditory perception to a location in space.
In neuroscience, the visual P200 or P2 is a waveform component or feature of the event-related potential (ERP) measured at the human scalp. Like other potential changes measurable from the scalp, this effect is believed to reflect the post-synaptic activity of a specific neural process. The P2 component, also known as the P200, is so named because it is a positive-going electrical potential that peaks at about 200 milliseconds after the onset of some external stimulus. This component is often distributed around the centro-frontal and the parieto-occipital areas of the scalp. It is generally found to be maximal around the vertex of the scalp; however, some topographical differences have been noted in ERP studies of the P2 in different experimental conditions.
The visual N1 is a visual evoked potential, a type of event-related electrical potential (ERP), that is produced in the brain and recorded on the scalp. The N1 is so named to reflect the polarity and typical timing of the component. The "N" indicates that the polarity of the component is negative with respect to an average mastoid reference. The "1" originally indicated that it was the first negative-going component, but it now better indexes the typical peak of this component, which is around 150 to 200 milliseconds post-stimulus. The N1 deflection may be detected at most recording sites, including the occipital, parietal, central, and frontal electrode sites. Although the visual N1 is widely distributed over the entire scalp, it peaks earlier over frontal than posterior regions of the scalp, suggestive of distinct neural and/or cognitive correlates. The N1 is elicited by visual stimuli, and is part of the visual evoked potential – a series of voltage deflections observed in response to visual onsets, offsets, and changes. Both the right and left hemispheres generate an N1, but the laterality of the N1 depends on whether a stimulus is presented centrally, laterally, or bilaterally. When a stimulus is presented centrally, the N1 is bilateral. When presented laterally, the N1 is larger, earlier, and contralateral to the visual field of the stimulus. When two visual stimuli are presented, one in each visual field, the N1 is bilateral. In the latter case, the N1's asymmetrical skewedness is modulated by attention. Additionally, its amplitude is influenced by selective attention, and thus it has been used to study a variety of attentional processes.
The N200, or N2, is an event-related potential (ERP) component. An ERP can be monitored using a non-invasive electroencephalography (EEG) cap that is fitted over the scalp on human subjects. An EEG cap allows researchers and clinicians to monitor the minute electrical activity that reaches the surface of the scalp from post-synaptic potentials in neurons, which fluctuate in relation to cognitive processing. EEG provides millisecond-level temporal resolution and is therefore known as one of the most direct measures of covert mental operations in the brain. The N200 in particular is a negative-going wave that peaks 200–350 ms post-stimulus and is found primarily over anterior scalp sites. Past research focused on the N200 as a mismatch detector, but it has also been found to reflect executive cognitive control functions, and has recently been used in the study of language.
In psychology, visual capture is the dominance of vision over other sense modalities in creating a percept. In this process, the visual senses influence the other parts of the somatosensory system, to result in a perceived environment that is not congruent with the actual stimuli. Through this phenomenon, the visual system is able to disregard what other information a different sensory system is conveying, and provide a logical explanation for whatever output the environment provides. Visual capture allows one to interpret the location of sound as well as the sensation of touch without actually relying on those stimuli but rather creating an output that allows the individual to perceive a coherent environment.
Pre-attentive processing is the subconscious accumulation of information from the environment. All available information is pre-attentively processed. Then, the brain filters and processes what is important. Information that has the highest salience or relevance to what a person is thinking about is selected for further and more complete analysis by conscious (attentive) processing. Understanding how pre-attentive processing works is useful in advertising, in education, and for prediction of cognitive ability.
Crossmodal attention refers to the distribution of attention to different senses. Attention is the cognitive process of selectively emphasizing and ignoring sensory stimuli. According to the crossmodal attention perspective, attention often occurs simultaneously through multiple sensory modalities. These modalities process information from the different sensory fields, such as: visual, auditory, spatial, and tactile. While each of these is designed to process a specific type of sensory information, there is considerable overlap between them which has led researchers to question whether attention is modality-specific or the result of shared "cross-modal" resources. Cross-modal attention is considered to be the overlap between modalities that can both enhance and limit attentional processing. The most common example given of crossmodal attention is the Cocktail Party Effect, which is when a person is able to focus and attend to one important stimulus instead of other less important stimuli. This phenomenon allows deeper levels of processing to occur for one stimulus while others are then ignored.
Interindividual differences in perception describes the effect that differences in brain structure or factors such as culture, upbringing and environment have on the perception of humans. Interindividual variability is usually regarded as a source of noise for research. However, in recent years, it has become an interesting source to study sensory mechanisms and understand human behavior. With the help of modern neuroimaging methods such as fMRI and EEG, individual differences in perception could be related to the underlying brain mechanisms. This has helped to explain differences in behavior and cognition across the population. Common methods include studying the perception of illusions, as they can effectively demonstrate how different aspects such as culture, genetics and the environment can influence human behavior.
Laura Busse is a German neuroscientist and professor of Systemic Neuroscience within the Division of Neurobiology at the Ludwig Maximilian University of Munich. Busse's lab studies context-dependent visual processing in mouse models by performing large scale in vivo electrophysiological recordings in the thalamic and cortical circuits of awake and behaving mice.