In cognitive neuroscience, visual modularity is an organizational concept concerning how vision works. The way in which the primate visual system operates is currently under intense scientific scrutiny. One dominant thesis is that different properties of the visual world (color, motion, form, and so forth) require different computational solutions, which are implemented in anatomically and functionally distinct regions that operate independently, that is, in a modular fashion. [1]
Akinetopsia, a term coined by Semir Zeki, [2] refers to a striking condition, brought about by damage to extrastriate area MT+ (also known as area V5), that renders humans and monkeys unable to perceive motion, so that the world is seen as a series of static "frames". [3] [4] [5] [6] This suggests that there may be a "motion centre" in the brain. Of course, such data can show only that this area is necessary for motion perception, not that it is sufficient; however, other evidence underscores the importance of this area to primate motion perception. Specifically, physiological, neuroimaging, perceptual, electrical-stimulation and transcranial magnetic stimulation evidence (Table 1) all converge on area V5/hMT+. Converging evidence of this kind supports a module for motion processing. However, this view is likely to be incomplete: other areas are involved in motion perception, including V1, [7] [8] [9] V2 and V3a [10] and areas surrounding V5/hMT+ (Table 2). A recent fMRI study put the number of motion areas at twenty-one. [11] Clearly, this constitutes a stream of diverse anatomical areas. The extent to which this stream is 'pure' is in question: akinetopsia brings severe difficulties in obtaining structure from motion, [12] and V5/hMT+ has since been implicated in this function [13] as well as in determining depth. [14] Thus the current evidence suggests that motion processing occurs in a modular stream, albeit one with a role in form and depth perception at higher levels.
Table 1

| Methodology | Finding | Source |
| --- | --- | --- |
| Physiology (single-cell recording) | Direction- and speed-selective cells in MT/V5 | [15] [16] [17] [18] |
| Neuroimaging | Greater activation for moving than for static stimuli in V5/MT | [11] [19] |
| Electrical stimulation and perception | Following electrical stimulation of V5/MT cells, perceptual decisions are biased towards the stimulated neurons' direction preference | [20] |
| Magnetic stimulation | Motion perception is briefly impaired in humans by a strong magnetic pulse over the scalp region corresponding to hMT+ | [21] [22] [23] |
| Psychophysics | Perceptual asynchrony among motion, color and orientation | [24] [25] |
Table 2

| Methodology | Finding | Source |
| --- | --- | --- |
| Physiology (single-cell recording) | Complex motion involving contraction/expansion and rotation found to activate neurons in the medial superior temporal area (MST) | [26] |
| Neuroimaging | Biological motion activated the superior temporal sulcus | [27] |
| Neuroimaging | Tool use activated the middle temporal gyrus and inferior temporal sulcus | [28] |
| Neuropsychology | Damage to visual area V5 results in akinetopsia | [3] [4] [5] [6] |
Similar converging evidence suggests modularity for color. Beginning with Gowers' finding [29] that damage to the fusiform/lingual gyri in occipitotemporal cortex correlates with a loss of color perception (achromatopsia), the notion of a "color centre" in the primate brain has had growing support. [30] [31] [32] Again, such clinical evidence implies only that this region is critical to color perception, and nothing more. Other evidence, however, including neuroimaging [11] [33] [34] and physiology, [35] [36] converges on V4 as necessary for color perception. A recent meta-analysis has also shown a specific lesion common to achromats, corresponding to V4. [37] From another direction altogether, it has been found that when synaesthetes experience color in response to a non-visual stimulus, V4 is active. [38] [39] On the basis of this evidence it would seem that color processing is modular. However, as with motion processing, this conclusion is likely inaccurate. Other evidence, shown in Table 3, implies that different areas are involved with color. It may thus be more instructive to consider a multistage color-processing stream running from the retina through cortical areas including at least V1, V2, V4, PITd and TEO. Consonant with motion perception, there appears to be a constellation of areas drawn upon for color perception. In addition, V4 may have a special, but not exclusive, role. For example, single-cell recording has shown that V4 cells respond to the perceived color of a stimulus rather than its waveband, whereas cells in other areas involved with color do not. [35] [36]
Table 3

| Other areas involved with color / other functions of V4 | Source |
| --- | --- |
| Wavelength-sensitive cells in V1 and V2 | [40] [41] |
| Anterior parts of the inferior temporal cortex | [42] [43] |
| Posterior parts of the superior temporal sulcus (PITd) | [44] |
| Area in or near TEO | [45] |
| Shape detection | [46] [47] |
| Link between vision, attention and cognition | [48] |
Another clinical case that would a priori suggest modularity in visual processing is visual agnosia. The well-studied patient DF is unable to recognize or discriminate objects [49] owing to damage in areas of the lateral occipital cortex, although she can see scenes without difficulty: she can, as it were, see the forest but not the trees. [50] Neuroimaging of intact individuals reveals strong occipito-temporal activation during object presentation, and greater activation still for object recognition. [51] Of course, such activation could be due to other processes, such as visual attention. However, other evidence showing a tight coupling of perceptual and physiological changes [52] suggests that activation in this area does underpin object recognition. Within these regions are more specialized areas for face or fine-grained analysis, [53] place perception [54] and human body perception. [55] Perhaps some of the strongest evidence for the modular nature of these processing systems is the double dissociation between object agnosia and face (prosop-) agnosia. However, as with color and motion, early areas (see [46] for a comprehensive review) are implicated too, lending support to the idea of a multistage stream terminating in the inferotemporal cortex rather than an isolated module.
One of the first uses of the term "module" or "modularity" occurs in the influential book "Modularity of Mind" by philosopher Jerry Fodor. [56] A detailed application of this idea to the case of vision was published by Pylyshyn (1999), who argued that there is a significant part of vision that is not responsive to beliefs and is "cognitively impenetrable". [57]
Much of the confusion concerning modularity in neuroscience arises because there is evidence for specific areas (e.g. V4 or V5/hMT+) and for concomitant behavioral deficits following brain insult (taken as evidence for modularity), while other evidence shows that further areas are involved and that these areas subserve the processing of multiple properties (e.g. V1 [58] ) (taken as evidence against modularity). That these streams share an implementation in early visual areas, like V1, is not inconsistent with a modular viewpoint: to adopt the canonical analogy in cognition, different software can run on the same hardware. A consideration of psychophysical and neuropsychological data supports this. For example, psychophysics has shown that percepts for different properties are realized asynchronously. [24] [25] In addition, although achromats experience other cognitive deficits, [59] they have neither motion deficits (when their lesion is restricted to V4) nor total loss of form perception. [60] Relatedly, Zihl and colleagues' akinetopsia patient shows no deficit in color or object perception (although deriving depth and structure from motion is problematic, see above), and object agnosics do not have impaired motion or color perception, making the three disorders triply dissociable. [4] Taken together, this evidence suggests that even though distinct properties may employ the same early visual areas, they are functionally independent. Furthermore, the facts that the intensity of subjective perceptual experience (e.g. of color) correlates with activity in the corresponding area (e.g. V4), [33] that synaesthetes show V4 activation during the perceptual experience of color, and that damage to these areas results in concomitant behavioral deficits (the processing may be occurring, but perceivers do not have access to the information) all constitute evidence for visual modularity.
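The software-on-shared-hardware analogy can be made concrete with a toy sketch. This is purely illustrative (all function names and the `lesioned` flag are invented for this example, not part of any model in the literature): two functionally independent "streams" consume the output of one shared early stage, so disabling one stream leaves the other intact, mirroring the dissociations described above.

```python
# Toy illustration of the software/hardware analogy: two functionally
# independent streams (motion, color) run on the same early-stage
# "hardware" (a shared V1-like front end). All names are hypothetical.

def v1_front_end(image):
    """Shared early stage: both streams consume its output."""
    return [px / 255.0 for px in image]  # trivial normalization

def motion_stream(frames, lesioned=False):
    """'V5-like' stage: summed frame-to-frame change. A lesion here
    (cf. akinetopsia) does not touch the color stream."""
    if lesioned:
        return None
    a, b = (v1_front_end(f) for f in frames)
    return sum(abs(x - y) for x, y in zip(a, b))

def color_stream(image, lesioned=False):
    """'V4-like' stage: mean intensity as a crude stand-in for hue. A
    lesion here (cf. achromatopsia) does not touch the motion stream."""
    if lesioned:
        return None
    v = v1_front_end(image)
    return sum(v) / len(v)

frame0, frame1 = [10, 20, 30], [30, 20, 10]
# "Lesioning" the motion stage abolishes motion output but spares color:
print(motion_stream([frame0, frame1], lesioned=True))  # None
print(color_stream(frame1))                            # still returns a value
```

The point of the sketch is only that a shared front end does not force the downstream computations to fail together, which is the sense in which shared early implementation is compatible with modularity.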
The visual cortex of the brain is the area of the cerebral cortex that processes visual information. It is located in the occipital lobe. Sensory input originating from the eyes travels through the lateral geniculate nucleus in the thalamus and then reaches the visual cortex. The area of the visual cortex that receives the sensory input from the lateral geniculate nucleus is the primary visual cortex, also known as visual area 1 (V1), Brodmann area 17, or the striate cortex. The extrastriate areas consist of visual areas 2, 3, 4, and 5.
Blindsight is the ability of people who are cortically blind to respond to visual stimuli that they do not consciously see due to lesions in the primary visual cortex, also known as the striate cortex or Brodmann Area 17. The term was coined by Lawrence Weiskrantz and his colleagues in a paper published in a 1974 issue of Brain. A previous paper studying the discriminatory capacity of a cortically blind patient was published in Nature in 1973. The assumed existence of blindsight is controversial, with some arguing that it is merely degraded conscious vision.
Color constancy is an example of subjective constancy and a feature of the human color perception system which ensures that the perceived color of objects remains relatively constant under varying illumination conditions. A green apple for instance looks green to us at midday, when the main illumination is white sunlight, and also at sunset, when the main illumination is red. This helps us identify objects.
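One simple computational account of this behavior is the "gray-world" heuristic: assume the scene averages to gray, and divide each color channel by its scene-wide mean to discount the illuminant. The sketch below is a textbook heuristic rather than a model of cortical processing, and the RGB values and illuminant factors are made up; it shows that the same surface rendered under white and reddish light maps to the same corrected color.

```python
# Gray-world color constancy: discount the illuminant by normalizing
# each channel by its mean over the scene. Pixel values are illustrative.

def gray_world(pixels):
    """pixels: list of (r, g, b) tuples. Returns illuminant-corrected
    pixels, with each channel divided by its mean over the scene."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    return [tuple(p[c] / means[c] for c in range(3)) for p in pixels]

# The same surfaces viewed under white light and under reddish sunset
# light (every channel scaled by the illuminant):
scene_white  = [(50, 120, 40), (200, 200, 200), (90, 60, 150)]
illum_red    = (1.6, 0.9, 0.7)
scene_sunset = [tuple(p[c] * illum_red[c] for c in range(3)) for p in scene_white]

# After correction the two renderings agree, mimicking perceived color
# remaining stable across illuminants:
print(gray_world(scene_white)[0])
print(gray_world(scene_sunset)[0])
```

Because a global illuminant multiplies every channel and its mean by the same factor, the ratio is unchanged, which is why the two corrected scenes coincide.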
The visual system is the physiological basis of visual perception. The system detects, transduces and interprets information concerning light within the visible range to construct an image and build a mental model of the surrounding environment. The visual system is associated with the eye and functionally divided into the optical system and the neural system.
The parietal lobe is one of the four major lobes of the cerebral cortex in the brain of mammals. The parietal lobe is positioned above the temporal lobe and behind the frontal lobe and central sulcus.
The consciousness and binding problem is the problem of how objects, background, and abstract or emotional features are combined into a single experience. The binding problem refers to the overall encoding of our brain circuits for the combination of decisions, actions, and perception. It is considered a "problem" because no complete model exists.
A cortical column is a group of neurons forming a cylindrical structure through the cerebral cortex of the brain perpendicular to the cortical surface. The structure was first identified by Vernon Benjamin Mountcastle in 1957. He later identified minicolumns as the basic units of the neocortex which were arranged into columns. Each contains the same types of neurons, connectivity, and firing properties. Columns are also called hypercolumn, macrocolumn, functional column or sometimes cortical module. Neurons within a minicolumn (microcolumn) encode similar features, whereas a hypercolumn "denotes a unit containing a full set of values for any given set of receptive field parameters". A cortical module is defined as either synonymous with a hypercolumn (Mountcastle) or as a tissue block of multiple overlapping hypercolumns.
Motion perception is the process of inferring the speed and direction of elements in a scene based on visual, vestibular and proprioceptive inputs. Although this process appears straightforward to most observers, it has proven to be a difficult problem from a computational perspective, and difficult to explain in terms of neural processing.
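A flavor of why this is computationally nontrivial can be given with a minimal 1-D sketch (illustrative only; the signals are made up): even recovering the displacement of a pattern between two frames requires testing every candidate shift, and real scenes add noise, occlusion, and the aperture problem on top of this search.

```python
# Minimal 1-D motion estimation: find the integer shift that best
# aligns frame t with frame t+1 by minimizing squared error over the
# overlapping region. Signals are illustrative.

def estimate_shift(frame_a, frame_b, max_shift=3):
    """Return the displacement of the pattern in frame_b relative to
    frame_a (positive = rightward) that minimizes mean squared error."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        # Compare the overlapping samples under candidate shift s
        pairs = [(frame_a[i], frame_b[i + s])
                 for i in range(len(frame_a))
                 if 0 <= i + s < len(frame_b)]
        err = sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

pattern = [0, 0, 5, 9, 5, 0, 0, 0]
moved   = [0, 0, 0, 0, 5, 9, 5, 0]   # same pattern shifted right by 2
print(estimate_shift(pattern, moved))  # 2
```

Even this toy version scales with the number of candidate shifts per location; dense 2-D motion fields over noisy images make the problem far harder, which is part of why motion processing recruits so much specialized machinery.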
In psycholinguistics, language processing refers to the way humans use words to communicate ideas and feelings, and how such communications are processed and understood. Language processing is considered to be a uniquely human ability that is not produced with the same grammatical understanding or systematicity even in humans' closest primate relatives.
Supplementary eye field (SEF) is the name for the anatomical area of the dorsal medial frontal lobe of the primate cerebral cortex that is indirectly involved in the control of saccadic eye movements. Evidence for a supplementary eye field was first shown by Schlag, and Schlag-Rey. Current research strives to explore the SEF's contribution to visual search and its role in visual salience. The SEF constitutes together with the frontal eye fields (FEF), the intraparietal sulcus (IPS), and the superior colliculus (SC) one of the most important brain areas involved in the generation and control of eye movements, particularly in the direction contralateral to their location. Its precise function is not yet fully known. Neural recordings in the SEF show signals related to both vision and saccades somewhat like the frontal eye fields and superior colliculus, but currently most investigators think that the SEF has a special role in high level aspects of saccade control, like complex spatial transformations, learned transformations, and executive cognitive functions.
The inferior temporal gyrus is one of three gyri of the temporal lobe and is located below the middle temporal gyrus, connected behind with the inferior occipital gyrus; it also extends around the infero-lateral border on to the inferior surface of the temporal lobe, where it is limited by the inferior sulcus. This region is one of the higher levels of the ventral stream of visual processing, associated with the representation of objects, places, faces, and colors. It may also be involved in face perception, and in the recognition of numbers and words.
Akinetopsia, also known as cerebral akinetopsia or motion blindness, is a term introduced by Semir Zeki to describe an extremely rare neuropsychological disorder, having only been documented in a handful of medical cases, in which a patient cannot perceive motion in their visual field, despite being able to see stationary objects without issue. The syndrome is the result of damage to visual area V5, whose cells are specialized to detect directional visual motion. There are varying degrees of akinetopsia: from seeing motion as frames of a cinema reel to an inability to discriminate any motion. There is currently no effective treatment or cure for akinetopsia.
Cerebral achromatopsia is a type of color blindness caused by damage to the cerebral cortex of the brain, rather than abnormalities in the cells of the eye's retina. It is often confused with congenital achromatopsia but underlying physiological deficits of the disorders are completely distinct. A similar, but distinct, deficit called color agnosia exists in which a person has intact color perception but has deficits in color recognition, such as knowing which color they are looking at.
The colour centre is a region in the brain primarily responsible for visual perception and cortical processing of colour signals received by the eye, which ultimately results in colour vision. The colour centre in humans is thought to be located in the ventral occipital lobe as part of the visual system, in addition to other areas responsible for recognizing and processing specific visual stimuli, such as faces, words, and objects. Many functional magnetic resonance imaging (fMRI) studies in both humans and macaque monkeys have shown colour stimuli to activate multiple areas in the brain, including the fusiform gyrus and the lingual gyrus. These areas, as well as others identified as having a role in colour vision processing, are collectively labelled visual area 4 (V4). The exact mechanisms, location, and function of V4 are still being investigated.
Globs are millimeter-sized color modules found beyond the visual area V2 in the brain's color processing ventral pathway. They are scattered throughout the posterior inferior temporal cortex in an area called the V4 complex. They are clustered by color preference, and organized as color columns. They are the first part of the brain in which color is processed in terms of the full range of hues found in color space.
In neuroscience, functional specialization is a theory which suggests that different areas in the brain are specialized for different functions. It is opposed to the anti-localizationist theories of brain holism and equipotentialism.
A parasol cell, sometimes called an M cell or M ganglion cell, is one type of retinal ganglion cell (RGC) located in the ganglion cell layer of the retina. These cells project to magnocellular cells in the lateral geniculate nucleus (LGN) as part of the magnocellular pathway in the visual system. They have large cell bodies as well as extensive branching dendrite networks and as such have large receptive fields. Relative to other RGCs, they have fast conduction velocities. While they do show clear center-surround antagonism, they receive no information about color. Parasol ganglion cells contribute information about the motion and depth of objects to the visual system.
Ralph Mitchell Siegel, a researcher who studied the neurological underpinnings of vision, was a professor of neuroscience at Rutgers University, Newark, in the Center for Molecular and Behavioral Neuroscience. He died September 2, 2011, at his home following a long illness.
Binocular neurons are neurons in the visual system that assist in the creation of stereopsis from binocular disparity. They have been found in the primary visual cortex where the initial stage of binocular convergence begins. Binocular neurons receive inputs from both the right and left eyes and integrate the signals together to create a perception of depth.
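The geometry underlying depth from disparity can be stated compactly: for two horizontally separated viewpoints, similar triangles give Z = f·B/d, where f is the focal length, B the interocular baseline, and d the disparity. The sketch below uses made-up (though roughly eye-scale) numbers purely for illustration.

```python
# Depth from binocular disparity via similar triangles: Z = f * B / d.
# Parameter values are illustrative, not physiological measurements.

def depth_from_disparity(focal_length, baseline, disparity):
    """Distance to a point whose images are offset by `disparity`
    between the two views (f, B and d in the same length units)."""
    return focal_length * baseline / disparity

f_mm, baseline_mm = 17.0, 65.0   # rough eye focal length and interocular distance
near = depth_from_disparity(f_mm, baseline_mm, disparity=2.0)
far  = depth_from_disparity(f_mm, baseline_mm, disparity=0.5)
# Smaller disparity corresponds to a farther point:
print(near < far)  # True
```

The inverse relationship between disparity and depth is why binocular neurons tuned to different disparities can jointly encode a range of distances.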
Surround suppression is the phenomenon in which the firing rate of a neuron may, under certain conditions, decrease when a particular stimulus is enlarged. It has been observed in electrophysiology studies of the brain and has been noted in many sensory neurons, most notably in the early visual system. Surround suppression is defined as a reduction in the activity of a neuron in response to a stimulus outside its classical receptive field.
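A standard way to capture size-dependent suppression is a difference-of-Gaussians model: the response to a centered stimulus grows with stimulus size until the broad inhibitory surround is recruited, then declines. The 1-D sketch below uses arbitrary parameter values chosen only to produce the characteristic non-monotonic size-tuning curve.

```python
import math

# Difference-of-Gaussians (DoG) sketch of surround suppression:
# integrate a narrow excitatory center minus a broad inhibitory
# surround over a stimulus of growing extent. Parameters are arbitrary.

def dog_response(radius, sigma_c=1.0, sigma_s=3.0, k_s=0.3, step=0.01):
    """Response to a 1-D stimulus covering [-radius, radius]."""
    total, x = 0.0, -radius
    while x <= radius:
        center   = math.exp(-x**2 / (2 * sigma_c**2))
        surround = k_s * math.exp(-x**2 / (2 * sigma_s**2))
        total += (center - surround) * step
        x += step
    return total

responses = {r: dog_response(r) for r in (1, 2, 4, 8)}
# Response rises with size, then is suppressed once the stimulus
# extends into the inhibitory surround:
print(responses[2] > responses[1])  # True: growth
print(responses[8] < responses[2])  # True: suppression
```

The crossover happens because the narrow center saturates quickly while the wide surround keeps accumulating inhibition as the stimulus grows, so the net response peaks at an intermediate size.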