| J. Anthony Movshon | |
| --- | --- |
| Born | Joseph Anthony Movshon, December 10, 1950 (age 73), New York City, New York, U.S. |
| Nationality | American |
| Awards | António Champalimaud Vision Award (2010); National Academy of Sciences (2008); American Academy of Arts and Sciences (2009); Karl Spencer Lashley Award (2013)[citation needed] |
| Scientific career | |
| Fields | Neuroscience (visual neuroscience, computational neuroscience, systems neuroscience) |
| Institutions | New York University (professor) |
Joseph Anthony Movshon ForMemRS (born December 10, 1950, in New York City)[1] is an American neuroscientist. He has made contributions to the understanding of the brain mechanisms that represent the form[2][3] and motion[4] of objects, and the way these mechanisms contribute to perceptual judgments[5] and visually guided movement.[6] He is a founding co-editor of the Annual Review of Vision Science.[7][8]
Movshon studied at Cambridge University, obtaining his B.A. in 1972 and his Ph.D., under the supervision of Colin Blakemore, in 1975. Since 1975 he has been a faculty member at New York University, where he is University Professor and Silver Professor and Director of the University's Center for Neural Science, which he founded in 1987.[citation needed] He also served on the Life Sciences jury for the Infosys Prize in 2016.
Movshon and collaborators pioneered the application of detection theory to the output of neurons in visual cortex to obtain a neurometric function.[9] This work led to the suggestion that a visual percept could arise from the activity of a handful of neurons. The suggestion found later support in studies in which he collaborated with William Newsome to measure the neurometric function in the brain of the behaving observer.[10]
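To make the idea concrete, here is a minimal Python sketch of how detection theory yields a neurometric function from spike counts: the area under the ROC curve comparing signal-trial and null-trial responses gives the proportion of correct choices an ideal observer could achieve from that neuron alone. The Poisson rates, coherence levels, and gain below are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

def roc_auc(noise_counts, signal_counts):
    """Area under the ROC curve: P(signal count > noise count),
    counting ties as 0.5 (equivalent to the Mann-Whitney statistic)."""
    n = np.asarray(noise_counts)
    s = np.asarray(signal_counts)
    greater = (s[:, None] > n[None, :]).mean()
    ties = (s[:, None] == n[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(0)
coherences = np.array([0.02, 0.05, 0.1, 0.2, 0.4])
baseline = 10.0  # assumed baseline firing rate (spikes per trial)

# Simulated Poisson spike counts: the "signal" rate grows with stimulus strength
neurometric = []
for c in coherences:
    noise = rng.poisson(baseline, size=500)                  # null-direction trials
    signal = rng.poisson(baseline * (1 + 8 * c), size=500)   # preferred-direction trials
    neurometric.append(roc_auc(noise, signal))

for c, p in zip(coherences, neurometric):
    print(f"coherence {c:5.2f}: ideal-observer proportion correct = {p:.3f}")
```

Plotting proportion correct against stimulus strength traces out the neurometric function, which can then be compared with the animal's psychometric function on the same trials.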
Movshon has contributed to understanding how visual information is processed in visual cortex, including computations for visual motion, [11] [12] and visual texture. [13] Movshon has also contributed to understanding visual cortical development, [14] its modification by visual experience, [15] and its relation to the development of visual behavior, including the clinical visual disorder of amblyopia. [16]
Movshon received the António Champalimaud Vision Award in 2010. He was elected to the National Academy of Sciences in 2008, [17] and to the American Academy of Arts and Sciences in 2009. [18] Movshon was elected a Fellow of the Royal Society as a Foreign Member (ForMemRS) in May 2024. [19]
The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalamus and then reaches the visual cortex. The area of the visual cortex that receives the sensory input from the lateral geniculate nucleus is the primary visual cortex, also known as visual area 1 (V1), Brodmann area 17, or the striate cortex. The extrastriate areas consist of visual areas 2, 3, 4, and 5.
The consciousness and binding problem concerns how objects, background, and abstract or emotional features are combined into a single experience.
In neuroanatomy, the superior colliculus is a structure lying on the roof of the mammalian midbrain. In non-mammalian vertebrates, the homologous structure is known as the optic tectum or optic lobe. The adjective form tectal is commonly used for both structures.
Retinotopy is the mapping of visual input from the retina to neurons, particularly those neurons within the visual stream. For clarity, 'retinotopy' can be replaced with 'retinal mapping', and 'retinotopic' with 'retinally mapped'.
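One common way to make retinotopy concrete is the log-polar ("monopole") mapping, in which cortical position is approximately the complex logarithm of visual-field position. The sketch below is illustrative only; the constants a and k are assumptions, loosely in the range reported for primate V1, and are not taken from this article.

```python
import numpy as np

def retina_to_cortex(ecc, angle_deg, a=0.75, k=15.0):
    """Illustrative log-polar retinotopic map: w = k * log(z + a), where z is
    visual-field position as a complex number (eccentricity in degrees of
    visual angle, polar angle in degrees). Returns cortical (x, y) in mm."""
    z = ecc * np.exp(1j * np.deg2rad(angle_deg))
    w = k * np.log(z + a)
    return w.real, w.imag

# Equal steps in eccentricity map onto ever-less cortex, reproducing
# the qualitative pattern of cortical magnification near the fovea.
for ecc in [0.5, 1, 2, 4, 8, 16]:
    x, _ = retina_to_cortex(ecc, 0.0)
    print(f"{ecc:4.1f} deg -> {x:6.2f} mm from the foveal representation")
```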
Matteo Carandini is a neuroscientist who studies the visual system. He is currently a professor at University College London, where he co-directs the Cortical Processing Laboratory with Kenneth D Harris.
David J. Heeger is an American neuroscientist, psychologist, computer scientist, data scientist, and entrepreneur. He is a professor at New York University, Chief Scientific Officer of Statespace Labs, and Chief Scientific Officer and co-founder of Epistemic AI.
The normalization model is an influential model of responses of neurons in primary visual cortex. David Heeger developed the model in the early 1990s, and later refined it together with Matteo Carandini and J. Anthony Movshon. The model involves a divisive stage: the numerator is the output of the classical receptive field, and the denominator is a constant plus a measure of local stimulus contrast. Although the normalization model was initially developed to explain responses in the primary visual cortex, normalization is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions, including the representation of odors in the olfactory bulb, the modulatory effects of visual attention, the encoding of value, and the integration of multisensory information. It has also been observed at subthreshold potentials in the hippocampus. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that normalization serves as a canonical neural computation. Divisive normalization reduces redundancy in natural stimulus statistics and is sometimes viewed as an implementation of the efficient coding principle. Formally, divisive normalization is an information-maximizing code for stimuli following a multivariate Pareto distribution.
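A minimal sketch of the divisive stage, assuming the commonly used form R_i = γ d_i^n / (σ^n + Σ_j d_j^n), where d_i is the classical-receptive-field drive to unit i; the parameter names and values here are illustrative, not taken from the model's papers.

```python
import numpy as np

def normalize(drive, sigma=1.0, n=2.0, gamma=1.0):
    """Divisive normalization: each unit's driven response is divided by
    a semi-saturation constant plus the pooled activity of the local population."""
    d = np.asarray(drive, dtype=float) ** n   # classical receptive-field output
    pool = sigma ** n + d.sum()               # constant + local contrast measure
    return gamma * d / pool

# Unit 0 gets the same direct drive in both cases, but its response is
# suppressed when the rest of the pool is also driven (e.g., by a mask).
print(normalize([4.0, 0.0, 0.0]))   # unit 0 alone -> ~0.94
print(normalize([4.0, 4.0, 4.0]))   # with pooled suppression -> ~0.33
```

The division makes responses saturate with overall contrast while preserving the relative pattern of activity across the population, which is why the same operation reappears as a candidate canonical computation elsewhere.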
Ocular dominance columns are stripes of neurons in the visual cortex of certain mammals that respond preferentially to input from one eye or the other. The columns span multiple cortical layers, and are laid out in a striped pattern across the surface of the striate cortex (V1). The stripes lie perpendicular to the orientation columns.
The inferior temporal gyrus is one of three gyri of the temporal lobe and is located below the middle temporal gyrus, connected behind with the inferior occipital gyrus; it also extends around the infero-lateral border on to the inferior surface of the temporal lobe, where it is limited by the inferior sulcus. This region is one of the higher levels of the ventral stream of visual processing, associated with the representation of objects, places, faces, and colors. It may also be involved in face perception, and in the recognition of numbers and words.
The efficient coding hypothesis was proposed by Horace Barlow in 1961 as a theoretical model of sensory coding in the brain. Within the brain, neurons communicate with one another by sending electrical impulses referred to as action potentials or spikes. One goal of sensory neuroscience is to decipher the meaning of these spikes in order to understand how the brain represents and processes information about the outside world. Barlow hypothesized that the spikes in the sensory system formed a neural code for efficiently representing sensory information: an efficient code minimizes the number of spikes needed to transmit a given signal. This is somewhat analogous to transmitting information across the internet, where different file formats can be used to transmit a given image. Different file formats require different numbers of bits to represent the same image at a given distortion level, and some are better suited to representing certain classes of images than others. According to this model, the brain is thought to use a code suited to representing the visual and auditory information characteristic of an organism's natural environment.
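One way to see the efficiency idea in a few lines: a correlated signal re-expressed as successive differences (a simple decorrelating, predictive code) needs fewer bits per sample, as an empirical entropy estimate shows. The random-walk signal and bin count below are arbitrary choices for illustration, not part of Barlow's proposal.

```python
import numpy as np

def entropy_bits(samples, bins=64):
    """Empirical Shannon entropy (bits per sample) from a histogram."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
# A slowly varying, highly correlated signal: a crude stand-in for natural input
x = np.round(np.cumsum(rng.normal(0, 1, 100_000)))

raw = entropy_bits(x)
diff = entropy_bits(np.diff(x))  # transmit only the changes between samples
print(f"raw signal:  {raw:.2f} bits/sample")
print(f"differences: {diff:.2f} bits/sample")  # markedly fewer bits
```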
Complex cells can be found in the primary visual cortex (V1), the secondary visual cortex (V2), and Brodmann area 19 (V3).
Repetition priming refers to improvements in a behavioural response when stimuli are repeatedly presented. The improvements can be measured in terms of accuracy or reaction time, and can occur when the repeated stimuli are either identical or similar to previous stimuli. These improvements have been shown to be cumulative, so as the number of repetitions increases the responses get continually faster, up to a maximum of around seven repetitions. These improvements are also found when the repeated items are changed slightly in terms of orientation, size, and position. The size of the effect is also modulated by the length of time the item is presented for and by the length of time between the first and subsequent presentations of the repeated items.
In cognitive neuroscience, visual modularity is an organizational concept concerning how vision works. The way in which the primate visual system operates is currently under intense scientific scrutiny. One dominant thesis is that different properties of the visual world require different computational solutions which are implemented in anatomically/functionally distinct regions that operate independently – that is, in a modular fashion.
Surround suppression is the phenomenon in which the firing rate of a neuron may, under certain conditions, decrease when a particular stimulus is enlarged. It has been observed in electrophysiological studies of the brain and has been noted in many sensory neurons, most notably in the early visual system. Surround suppression is defined as a reduction in the activity of a neuron in response to a stimulus outside its classical receptive field.
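A sketch of how surround suppression can produce nonmonotonic size tuning, assuming a ratio-of-Gaussians style model in which drive from a small excitatory center pool is divided by a constant plus drive from a larger suppressive surround pool; all widths and gains below are made-up illustrative values.

```python
from math import erf

def gaussian_drive(diameter, width):
    """Summed drive from a Gaussian pooling field to a centered stimulus of
    the given diameter (one-dimensional error-function approximation)."""
    return erf(diameter / (2 * width))

def size_tuning(diameter, w_center=0.5, w_surround=2.0,
                k_center=1.0, k_surround=0.8, sigma=0.1):
    """Response = center drive / (constant + surround drive): the response
    grows while the stimulus fills the center pool, then falls as the
    stimulus invades the larger surround pool."""
    center = k_center * gaussian_drive(diameter, w_center)
    surround = k_surround * gaussian_drive(diameter, w_surround)
    return center / (sigma + surround)

# Response peaks at an intermediate size, then is suppressed for larger stimuli
for d in [0.25, 0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"diameter {d:4.2f}: response = {size_tuning(d):.2f}")
```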
The Karl Spencer Lashley Award is awarded by The American Philosophical Society as a recognition of research on the integrative neuroscience of behavior. The award was established in 1957 by a gift from Dr. Karl Spencer Lashley.
DeepDream is a computer vision program created by Google engineer Alexander Mordvintsev that uses a convolutional neural network to find and enhance patterns in images via algorithmic pareidolia, thus creating a dream-like appearance reminiscent of a psychedelic experience in the deliberately overprocessed images.
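In outline, DeepDream performs gradient ascent on image pixels to amplify whatever features a chosen network layer responds to. The sketch below is a simplified, assumed reconstruction of that loop, not the original program: it uses VGG16 rather than GoogLeNet, an arbitrary layer, and omits the ImageNet preprocessing, octaves, and jitter that the original employs.

```python
import torch
import torchvision.models as models

# Truncate a pretrained network at an (arbitrarily chosen) intermediate layer
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:21].eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for step in range(100):
    opt.zero_grad()
    act = model(img)
    loss = -act.norm()   # ascend: strengthen whatever the layer "sees"
    loss.backward()
    opt.step()
    with torch.no_grad():
        img.clamp_(0, 1)  # keep pixel values in a displayable range
```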
Binocular switch suppression (BSS) is a technique to suppress usually salient images from an individual's awareness, a type of experimental manipulation used in visual perception and cognitive neuroscience. In BSS, two images of differing signal strengths are repetitively switched between the left and right eye at a constant rate of 1 Hertz. During this process of switching, the image of lower contrast and signal strength is perceptually suppressed for a period of time.
In neuroscience, a neurometric function is a mathematical formula relating the activity of brain cells to aspects of an animal's sensory experience or motor behavior. Neurometric functions provide a quantitative summary of the neural code of a particular brain region.
Alexander C. Huk is an American neuroscientist. Prior to moving to UCLA in 2022, he was the Raymond Dickson Centennial Professor #2 of Neuroscience and Psychology, and the Director of the Center for Perceptual Systems at The University of Texas at Austin. His laboratory studies how the brain integrates information over space and time and how these neural signals guide behavior in the natural world. He has made contributions towards understanding how the brain represents 3D visual motion and how those representations are used to make perceptual judgments.
Nicole C. Rust is an American neuroscientist, psychologist, and a Professor of Psychology at the University of Pennsylvania. She studies visual perception and visual recognition memory. She is recognized for significant advancements in experimental psychology and neuroscience.