Laura Busse (born c. 1977) [1] is a German neuroscientist and professor of Systemic Neuroscience within the Division of Neurobiology at the Ludwig Maximilian University of Munich. Busse's lab studies context-dependent visual processing in mouse models by performing large-scale in vivo electrophysiological recordings in the thalamic and cortical circuits of awake, behaving mice.
Busse was born in Germany in 1977. She had an early interest in brain studies and received a scholarship from the State of Bavaria that supported her studies in basic psychology at the University of Leipzig, Germany, from 1997 to 1999. [1] Busse then pursued further studies at the Max Planck Research School at the University of Tübingen in Germany, where she focused on Neural and Behavioral Sciences from 1999 to 2001. [1]
During her time at Tübingen, Busse pursued research abroad for her Master's in Neuroscience. [1] She moved to the United States for six months, where she studied under the mentorship of Marty Woldorff at Duke University. [2] Busse explored the cognitive underpinnings of attention in the human brain in the Center for Cognitive Neuroscience at Duke University. [3] After completing her Master's in 2001, Busse stayed at Duke University for another year to work as a research technician, continuing to explore the neurobiological underpinnings of cognition using imaging and electrophysiological techniques such as fMRI, EEG, and ERPs. [1]
In late 2002, Busse returned to Germany for her doctoral work at the German Primate Center in Göttingen and the Bernstein Center for Computational Neuroscience. [1] Working under the mentorship of Stefan Treue, she entered the field of visual processing, exploring the neural basis of visual perception using non-human primates as a model organism. [4] Busse completed her PhD in neuroscience in 2006 and then moved back to the United States for one year, funded by a Leopoldina Postdoctoral Scholarship. [1] She completed her postdoctoral work at the Smith-Kettlewell Eye Research Institute in San Francisco in 2007 and then moved to the Institute of Ophthalmology at University College London in the United Kingdom, where she worked as a Research Associate under the mentorship of Matteo Carandini from 2008 to 2010. [1] [5] Under Carandini, Busse explored visual processing in cat V1 and visual behavior in mice. [2]
In the Woldorff Lab, Busse explored caveats of trial structure in human fMRI experiments. [6] Since fMRI experiments often suffer from extensive overlap of brain signals from adjacent trials, experimenters had begun to include “null” or “no-stim” trials in order to provide time for extracting stimulus-generated signals. [6] Busse tested the hypothesis that these “null-event” trials actually evoke a distinct brain activity pattern of their own, called the omitted stimulus response (OSR). [6] In an auditory task, Busse found significant OSRs, characterized by an early posterior negative wave followed by a larger anterior positive wave, across a variety of stimulus rates and omitted-stimulus percentages. [6] Her work provided insight not only into the brain's OSR but also into the caveats of treating omitted stimuli as true “null” trials. [6]
Busse then published a paper in the Proceedings of the National Academy of Sciences exploring the phenomenon of cross-modal attentional spreading. [7] Busse found that when a subject pays attention to a stimulus in one sensory modality, attention spreads to a task-irrelevant but simultaneous stimulus in a different sensory modality. [7] This finding supported the idea that simultaneous yet distinct stimuli can be grouped into one multisensory object, enhancing the cognitive processing allocated to both stimuli. [7]
For her graduate work, Busse explored how cognition influences sensory information processing. In particular, Busse became interested in top-down processing of sensory information in the case of visual attention, the brain's ability to focus on one aspect of the visual environment even though it takes in multitudes of visual information at once. [8] Busse first showed both spatial and feature-based influences of exogenous cueing on motion processing. [8] Automatic shifts in attention, driven by exogenous cueing, were accompanied by characteristic modulations of sensory processing. [9] Busse then explored how attention in macaques changes the neural representation of motion information. [9] Busse found that visual attention enhances the spatio-temporal structure of receptive fields for moving objects. [10] Busse completed her dissertation in 2008, showing that cognitive factors have strong modulatory effects on the processing of visual motion. [9]
In her postdoctoral studies, Busse first explored visual processing in the primary visual cortex of cats. [11] Busse found that when populations of neurons encode multiple stimuli simultaneously, a model of contrast normalization best explains how neurons represent the stimuli in V1. [11] Essentially, the population response can be described as a weighted sum of the individual responses to the components of the visual stimulus. [11] Their normalization model not only held in cats but also extended to recordings from human primary visual cortex. [11]
Busse was then ready to move her experiments into mice, a common model organism in systems neuroscience for dissecting neural circuits, but she first had to pioneer a new approach to relate visual circuits to perception in mice. [12] Busse extensively trained mice to detect visual contrast using trial-based operant conditioning. [12] After extensive training, they found that the choices mice made in this operant task were based not only on the learned contrast association but also on factors such as reward value and recent failures. [12] When they used a generalized linear model to decode the neural data and predict behavioral output, they found that the decoder performed better than the mouse, suggesting that the mouse might not be using the V1 information in the most optimal way. [12]
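The decoding approach described above can be illustrated with a minimal sketch: a Bernoulli generalized linear model (logistic regression) fit to trial-by-trial neural activity to predict a binary behavioral choice. The data, neuron counts, and gain structure below are entirely hypothetical stand-ins, not the published dataset or analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for trial-by-neuron V1 spike counts: 200 trials, 30 cells.
# A latent "contrast" variable drives both the neural responses and the choice.
contrast = rng.uniform(0.0, 1.0, 200)
gain = np.abs(rng.normal(0.0, 1.0, 30))                  # per-neuron contrast gain
X = contrast[:, None] * gain + rng.normal(0.0, 0.3, (200, 30))
y = (contrast + rng.normal(0.0, 0.2, 200) > 0.5).astype(float)  # simulated choice

def fit_logistic_glm(X, y, lr=0.1, n_iter=2000):
    """Fit a Bernoulli GLM (logistic regression) by gradient ascent on the log-likelihood."""
    Xb = np.column_stack([np.ones(len(X)), X])           # prepend an intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))                # predicted P(choice = 1)
        w += lr * Xb.T @ (y - p) / len(y)                # average log-likelihood gradient
    return w

w = fit_logistic_glm(X, y)
Xb = np.column_stack([np.ones(len(X)), X])
decoder_accuracy = np.mean(((1.0 / (1.0 + np.exp(-Xb @ w))) > 0.5) == y)
```

Because the simulated choice includes trial-to-trial noise that the neural data do not, the decoder can outperform the simulated "subject", mirroring the logic of the comparison described above.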
In 2010, Busse became a Junior Research Group Leader in the Werner Reichardt Centre for Integrative Neuroscience at the University of Tübingen, Germany. [13] She led a team of researchers studying visual processing in an ethologically relevant way. [2] Since visual systems are adapted to an organism's environment, Busse shaped her research program around probing the neural circuits underlying visual processing with stimuli similar to those experienced in the organism's natural environment. [13]
In 2016, Busse was recruited to the Ludwig Maximilian University of Munich in Germany to hold a professorship within the Munich Center for Neurosciences. [14] Busse currently leads the Vision Circuits Lab along with co-principal investigator Steffen Katzner within the Department of Biology, Neurobiology Division. [15] [16]
As a Junior Research Group Leader, Busse began to explore the neural circuits underlying visual processing in mouse models. She began by asking whether surround suppression, a computation known to underlie visual salience, could be observed in mouse V1. [17] Busse and her team found that in awake mice, parvalbumin-positive (PV+) interneurons in the primary visual cortex mediate surround suppression; anesthesia, however, profoundly affects surround suppression and thus spatial integration. [17] Using optogenetics in awake mice, Busse and her team showed that activation of PV+ interneurons increases receptive field size and decreases the suppression of neural populations, underscoring the role these cells play in spatial integration and highlighting the utility of mice in circuit-level analyses of visual processing. [17]
Continuing to use mice as a model for studying visual processing, Busse and her team explored how behavioral context impacts neural activity in V1. [18] They found that locomotion decorrelates V1 population responses; in the dorsolateral geniculate nucleus, by contrast, locomotion seemed to control the tuning of population responses. [18] Overall, their findings provided novel insight into the effects of locomotion on information processing in the early visual system. [18]
As a new faculty member at the Ludwig Maximilian University of Munich, Busse explored whether and how individual cortical layers perform surround suppression and how this computation is coordinated across layers. [19] Using in vivo recordings, Busse and her group found that layers 3 and 4 exhibited the strongest surround suppression and that intermediate stimulus sizes produced the strongest functional connections between layers. [19]
In their 2019 publication in Neuron, Busse and her colleagues at Tübingen shed light on the mechanisms by which the large amount of visual information coming in from the retina is processed and transferred in a manageable way to the visual cortex. [20] In the feedforward visual processing pathway, the retina extracts visual information from light inputs and passes this information on via its output layer of retinal ganglion cells (RGCs), which project axons to the dorsolateral geniculate nucleus (dLGN) of the thalamus, which in turn routes the information to the primary visual cortex (V1). Whereas the dLGN has traditionally been thought of as a passive relay in visual signal processing, [21] Busse and her colleagues investigated the hypothesis that it instead actively shapes visual signals through several factors, including recombination of incoming RGC inputs, processing of cortico-thalamic feedback, and local inhibitory interneuron computations (e.g. by switching thalamic firing between burst and tonic modes). [20] To test the contribution of recombination of retinal inputs, Busse and her colleagues recorded responses of RGCs and thalamic cells to the same set of visual stimuli and then used computational modelling to determine which retinal cells contribute to the responses of thalamic cells. [22] They found that the output of one thalamic cell relies on no more than five retinal cells and that, although these retinal inputs are combined to generate an output, they are not given equal weights. [20] Their work highlighted an active role for the thalamus in signal processing, beyond the passive relaying canonically attributed to it. [22]
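The core idea of the recombination analysis, that a thalamic cell's output can be modeled as an unequally weighted combination of a handful of retinal inputs, can be sketched in a few lines. The firing rates, number of time bins, and weight values below are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical firing-rate traces (spikes per bin) of five retinal ganglion
# cells responding to the same visual stimulus, over 100 time bins.
rgc_rates = rng.poisson(lam=20, size=(5, 100)).astype(float)

# Unequal connection weights: one dominant "driver" input plus weaker inputs,
# reflecting the finding that the few retinal inputs are not weighted equally.
weights = np.array([0.55, 0.20, 0.12, 0.08, 0.05])

# Linear sketch of the dLGN cell's output: a weighted sum of its RGC inputs.
dlgn_rate = weights @ rgc_rates
```

In the actual study the contributing RGCs and their weights were inferred by fitting a computational model to paired retinal and thalamic recordings; the point of the sketch is only that a few inputs with unequal weights suffice to shape the thalamic response.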
The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalamus and then reaches the visual cortex. The area of the visual cortex that receives the sensory input from the lateral geniculate nucleus is the primary visual cortex, also known as visual area 1 (V1), Brodmann area 17, or the striate cortex. The extrastriate areas consist of visual areas 2, 3, 4, and 5.
The thalamus is a large mass of gray matter located in the dorsal part of the diencephalon. Nerve fibers project out of the thalamus to the cerebral cortex in all directions, allowing hub-like exchanges of information. It has several functions, such as the relaying of sensory signals, including motor signals to the cerebral cortex and the regulation of consciousness, sleep, and alertness.
The visual system comprises the sensory organ and parts of the central nervous system which gives organisms the sense of sight as well as enabling the formation of several non-image photo response functions. It detects and interprets information from the optical spectrum perceptible to that species to "build a representation" of the surrounding environment. The visual system carries out a number of complex tasks, including the reception of light and the formation of monocular neural representations, colour vision, the neural mechanisms underlying stereopsis and assessment of distances to and between objects, the identification of a particular object of interest, motion perception, the analysis and integration of visual information, pattern recognition, accurate motor coordination under visual guidance, and more. The neuropsychological side of visual information processing is known as visual perception, an abnormality of which is called visual impairment, and a complete absence of which is called blindness. Non-image forming visual functions, independent of visual perception, include the pupillary light reflex and circadian photoentrainment.
The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons, neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.
Multisensory integration, also known as multimodal integration, is the study of how information from the different sensory modalities may be integrated by the nervous system. A coherent representation of objects combining modalities enables animals to have meaningful perceptual experiences. Indeed, multisensory integration is central to adaptive behavior because it allows animals to perceive a world of coherent perceptual entities. Multisensory integration also deals with how different sensory modalities interact with one another and alter each other's processing.
Retinotopy is the mapping of visual input from the retina to neurons, particularly those neurons within the visual stream. For clarity, 'retinotopy' can be replaced with 'retinal mapping', and 'retinotopic' with 'retinally mapped'.
Neuronal tuning refers to the hypothesized property of brain cells by which they selectively represent a particular type of sensory, association, motor, or cognitive information. Some neuronal responses have been hypothesized to be optimally tuned to specific patterns through experience. Neuronal tuning can be strong and sharp, as observed in primary visual cortex, or weak and broad, as observed in neural ensembles. Single neurons are hypothesized to be simultaneously tuned to several modalities, such as visual, auditory, and olfactory. Neurons hypothesized to be tuned to different signals are often hypothesized to integrate information from the different sources. In computational models called neural networks, such integration is the major principle of operation. The best examples of neuronal tuning can be seen in the visual, auditory, olfactory, somatosensory, and memory systems, although due to the small number of stimuli tested the generality of neuronal tuning claims is still an open question.
Matteo Carandini is a neuroscientist who studies the visual system. He is currently a professor at University College London, where he co-directs the Cortical Processing Laboratory with Kenneth D Harris.
The normalization model is an influential model of responses of neurons in primary visual cortex. David Heeger developed the model in the early 1990s, and later refined it together with Matteo Carandini and J. Anthony Movshon. The model involves a divisive stage: the numerator contains the output of the classical receptive field, while the denominator contains a constant plus a measure of local stimulus contrast. Although the normalization model was initially developed to explain responses in the primary visual cortex, normalization is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions, including the representation of odors in the olfactory bulb, the modulatory effects of visual attention, the encoding of value, and the integration of multisensory information. It has also been observed at subthreshold potentials in the hippocampus. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that normalization serves as a canonical neural computation. Divisive normalization reduces the redundancy in natural stimulus statistics and is sometimes viewed as an implementation of the efficient coding principle. Formally, divisive normalization is an information-maximizing code for stimuli following a multivariate Pareto distribution.
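In one common textbook form (exact exponents, constants, and pooling terms vary between papers, so this is a sketch rather than the definitive formulation), the divisive stage described above can be written as:

```latex
R_i \;=\; R_{\max}\,\frac{(L_i)^{n}}{\sigma^{n} + \sum_{j}(L_j)^{n}}
```

Here $L_i$ is the linear (classical receptive field) output of neuron $i$, the sum over $j$ pools activity across the local population as a measure of stimulus contrast, $\sigma$ is the semisaturation constant that prevents division by zero at low contrast, and the exponent $n$ controls the steepness of the contrast-response function.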
Sensory neuroscience is a subfield of neuroscience which explores the anatomy and physiology of neurons that are part of sensory systems such as vision, hearing, and olfaction. Neurons in sensory regions of the brain respond to stimuli by firing one or more nerve impulses following stimulus presentation. How is information about the outside world encoded by the rate, timing, and pattern of action potentials? This so-called neural code is currently poorly understood and sensory neuroscience plays an important role in the attempt to decipher it. Looking at early sensory processing is advantageous since brain regions that are "higher up" contain neurons which encode more abstract representations. However, the hope is that there are unifying principles which govern how the brain encodes and processes information. Studying sensory systems is an important stepping stone in our understanding of brain function in general.
The efficient coding hypothesis was proposed by Horace Barlow in 1961 as a theoretical model of sensory coding in the brain. Within the brain, neurons communicate with one another by sending electrical impulses referred to as action potentials or spikes. One goal of sensory neuroscience is to decipher the meaning of these spikes in order to understand how the brain represents and processes information about the outside world. Barlow hypothesized that the spikes in the sensory system formed a neural code for efficiently representing sensory information. By efficient Barlow meant that the code minimized the number of spikes needed to transmit a given signal. This is somewhat analogous to transmitting information across the internet, where different file formats can be used to transmit a given image. Different file formats require different numbers of bits to represent the same image at a given distortion level, and some are better suited to representing certain classes of images than others. According to this model, the brain is thought to use a code which is suited for representing visual and audio information representative of an organism's natural environment.
Neural coding is a neuroscience field concerned with characterising the hypothetical relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among the electrical activity of the neurons in the ensemble. Based on the theory that sensory and other information is represented in the brain by networks of neurons, it is thought that neurons can encode both digital and analog information.
The mismatch negativity (MMN) or mismatch field (MMF) is a component of the event-related potential (ERP) to an odd stimulus in a sequence of stimuli. It arises from electrical activity in the brain and is studied within the field of cognitive neuroscience and psychology. It can occur in any sensory system, but has most frequently been studied for hearing and for vision, in which case it is abbreviated to vMMN. The (v)MMN occurs after an infrequent change in a repetitive sequence of stimuli. For example, a rare deviant (d) stimulus can be interspersed among a series of frequent standard (s) stimuli. In hearing, a deviant sound can differ from the standards in one or more perceptual features such as pitch, duration, loudness, or location. The MMN can be elicited regardless of whether someone is paying attention to the sequence. During auditory sequences, a person can be reading or watching a silent subtitled movie, yet still show a clear MMN. In the case of visual stimuli, the MMN occurs after an infrequent change in a repetitive sequence of images.
Recurrent thalamo-cortical resonance is an observed phenomenon of oscillatory neural activity between the thalamus and various cortical regions of the brain. It is proposed by Rodolfo Llinas and others as a theory for the integration of sensory information into the whole of perception in the brain. Thalamocortical oscillation is proposed to be a mechanism of synchronization between different cortical regions of the brain, a process known as temporal binding. This is possible through the existence of thalamocortical networks, groupings of thalamic and cortical cells that exhibit oscillatory properties.
Feature detection is a process by which the nervous system sorts or filters complex natural stimuli in order to extract behaviorally relevant cues that have a high probability of being associated with important objects or organisms in their environment, as opposed to irrelevant background or noise.
Binocular neurons are neurons in the visual system that assist in the creation of stereopsis from binocular disparity. They have been found in the primary visual cortex where the initial stage of binocular convergence begins. Binocular neurons receive inputs from both the right and left eyes and integrate the signals together to create a perception of depth.
Surround suppression is a phenomenon in which the firing rate of a neuron may, under certain conditions, decrease when a particular stimulus is enlarged. It has been observed in electrophysiology studies of the brain and has been noted in many sensory neurons, most notably in the early visual system. Surround suppression is defined as a reduction in the activity of a neuron in response to a stimulus outside its classical receptive field.
Nadine Gogolla is a Research Group Leader at the Max Planck Institute of Neurobiology in Martinsried, Germany as well as an Associate Faculty of the Graduate School for Systemic Neuroscience. Gogolla investigates the neural circuits underlying emotion to understand how the brain integrates external cues, feeling states, and emotions to make calculated behavioral decisions. Gogolla is known for her discovery using machine learning and two-photon microscopy to classify mouse facial expressions into emotion-like categories and correlate these facial expressions with neural activity in the insular cortex.
Carsen Stringer is an American computational neuroscientist and Group Leader at the Howard Hughes Medical Institute Janelia Research Campus. Stringer uses machine learning and deep neural networks to visualize large scale neural recordings and then probe the neural computations that give rise to visual processing in mice. Stringer has also developed several novel software packages that enable cell segmentation and robust analyses of neural recordings and mouse behavior.
Sonja Hofer is a German neuroscientist studying the neural basis of sensory perception and sensory-guided decision-making at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour. Her research focuses on how the brain processes visual information, how neural networks are shaped by experience and learning, and how they integrate visual signals with other information in order to interpret the outside world and guide behaviour. She received her undergraduate degree from the Technical University of Munich, earned her PhD at the Max Planck Institute of Neurobiology in Martinsried, Germany, and completed postdoctoral work at University College London. After holding an assistant professorship at the Biozentrum of the University of Basel in Switzerland for five years, she has been a group leader and professor at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour since 2018.