Visual perception

Visual perception is the ability to interpret the surrounding environment through photopic vision (daytime vision), color vision, scotopic vision (night vision), and mesopic vision (twilight vision), using light in the visible spectrum reflected by objects in the environment. This is different from visual acuity, which refers to how clearly a person sees (for example "20/20 vision"). A person can have problems with visual perceptual processing even if they have 20/20 vision.

The resulting perception is also known as vision, sight, or eyesight (adjectives visual, optical, and ocular, respectively). The various physiological components involved in vision are referred to collectively as the visual system, and are the focus of much research in linguistics, psychology, cognitive science, neuroscience, and molecular biology, collectively referred to as vision science.

Visual system

In humans and a number of other mammals, light enters the eye through the cornea and is focused by the lens onto the retina, a light-sensitive membrane at the back of the eye. The retina serves as a transducer, converting light into neuronal signals. This transduction is achieved by the specialized photoreceptive cells of the retina, the rods and cones, which detect the photons of light and respond by producing neural impulses. These signals are carried by the optic nerve from the retina upstream to central ganglia in the brain, such as the lateral geniculate nucleus, which transmits the information to the visual cortex. Signals from the retina also travel directly to the superior colliculus. [1]

The lateral geniculate nucleus sends signals to the primary visual cortex, also called the striate cortex. The extrastriate cortex, also called the visual association cortex, is a set of cortical structures that receive information from the striate cortex, as well as from each other. [2] Recent descriptions of the visual association cortex describe a division into two functional pathways, a ventral and a dorsal pathway. This conjecture is known as the two-streams hypothesis.

The human visual system is generally believed to be sensitive to visible light in the range of wavelengths between 370 and 730 nanometers of the electromagnetic spectrum. [3] However, some research suggests that humans can perceive light in wavelengths down to 340 nanometers (UV-A), especially the young. [4] Under optimal conditions these limits of human perception can extend to 310 nm (UV) to 1100 nm (NIR). [5] [6]

Study

The major problem in visual perception is that what people see is not simply a translation of retinal stimuli (i.e., the image on the retina); the brain alters the basic information taken in. Thus people interested in perception have long struggled to explain what visual processing does to create what is actually seen.

Early studies

The visual dorsal stream (green) and ventral stream (purple) are shown. Much of the human cerebral cortex is involved in vision.

Two major ancient Greek schools of thought provided early explanations of how vision works.

The first was the "emission theory" of vision, which maintained that vision occurs when rays emanate from the eyes and are intercepted by visual objects. An object seen directly was seen by means of rays coming out of the eyes and falling on the object. A refracted image, however, was also seen by means of rays that came out of the eyes, traversed the air, and, after refraction, fell on the visible object, which was sighted as the result of the movement of the rays from the eye. This theory was championed by scholars who were followers of Euclid's Optics and Ptolemy's Optics.

The second school advocated the so-called 'intromission' approach, which sees vision as coming from something entering the eyes that is representative of the object. With its main proponent Aristotle (De Sensu) [7] and his followers, [7] this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation, lacking any experimental foundation. (In eighteenth-century England, Isaac Newton, John Locke, and others carried the intromission theory of vision forward by insisting that vision involved a process in which rays—composed of actual corporeal matter—emanated from seen objects and entered the seer's mind/sensorium through the eye's aperture.) [8]

Both schools of thought relied upon the principle that "like is only known by like", and thus upon the notion that the eye was composed of some "internal fire" that interacted with the "external fire" of visible light and made vision possible. Plato makes this assertion in his dialogue Timaeus (45b and 46b), as does Empedocles (as reported by Aristotle in his De Sensu, DK frag. B17). [7]

Leonardo da Vinci: The eye has a central line and everything that reaches the eye through this central line can be seen distinctly.

Alhazen (965 – c. 1040) carried out many investigations and experiments on visual perception, extended the work of Ptolemy on binocular vision, and commented on the anatomical works of Galen. [9] [10] He was the first person to explain that vision occurs when light bounces off an object and is then directed into one's eyes. [11]

Leonardo da Vinci (1452–1519) is believed to be the first to recognize the special optical qualities of the eye. He wrote "The function of the human eye ... was described by a large number of authors in a certain way. But I found it to be completely different." His main experimental finding was that distinct and clear vision occurs only along the line of sight, the optical line that ends at the fovea. Although he did not use these words literally, he is in effect the father of the modern distinction between foveal and peripheral vision. [12]

Isaac Newton (1642–1726/27) was the first to discover through experimentation, by isolating individual colors of the spectrum of light passing through a prism, that the visually perceived color of objects arises from the character of the light the objects reflect, and that these divided colors could not be changed into any other color, contrary to the scientific expectation of the day. [3]

Unconscious inference

Hermann von Helmholtz is often credited with the first modern study of visual perception. Helmholtz examined the human eye and concluded that it was incapable of producing a high-quality image. Insufficient information seemed to make vision impossible. He, therefore, concluded that vision could only be the result of some form of "unconscious inference", coining that term in 1867. He proposed the brain was making assumptions and conclusions from incomplete data, based on previous experiences. [13]

Inference requires prior experience of the world.

Well-known assumptions based on visual experience include, for example, that light comes from above and that faces are seen (and recognized) upright.

The study of visual illusions (cases when the inference process goes wrong) has yielded much insight into what sort of assumptions the visual system makes.

Another type of unconscious inference hypothesis (based on probabilities) has recently been revived in so-called Bayesian studies of visual perception. [15] Proponents of this approach consider that the visual system performs some form of Bayesian inference to derive a perception from sensory data. However, it is not clear how proponents of this view derive, in principle, the relevant probabilities required by the Bayesian equation. Models based on this idea have been used to describe various visual perceptual functions, such as the perception of motion, the perception of depth, and figure-ground perception. [16] [17] The "wholly empirical theory of perception" is a related and newer approach that rationalizes visual perception without explicitly invoking Bayesian formalisms.
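To make the Bayesian idea concrete, here is a minimal sketch (with made-up numbers, not drawn from the cited studies) of how a prior belief and a noisy sensory measurement can be combined. For Gaussian distributions the posterior has a simple closed form: precisions (inverse variances) add, and the posterior mean is the precision-weighted average of the prior mean and the measurement.

```python
def gaussian_posterior(prior_mean, prior_var, meas_mean, meas_var):
    """Combine a Gaussian prior with a Gaussian likelihood (conjugate update)."""
    prior_prec = 1.0 / prior_var
    meas_prec = 1.0 / meas_var
    post_var = 1.0 / (prior_prec + meas_prec)
    post_mean = post_var * (prior_prec * prior_mean + meas_prec * meas_mean)
    return post_mean, post_var

# Hypothetical depth estimate: prior says objects tend to be ~2 m away
# (variance 1.0); a noisy cue reports 3 m (variance 0.25).
mean, var = gaussian_posterior(2.0, 1.0, 3.0, 0.25)
print(mean, var)  # → 2.8 0.2
```

The posterior mean lands between the prior and the measurement, pulled toward the more reliable (lower-variance) cue, which is the qualitative behavior these models attribute to perception.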

Gestalt theory

Gestalt psychologists working primarily in the 1930s and 1940s raised many of the research questions that are studied by vision scientists today. [18]

The Gestalt Laws of Organization have guided the study of how people perceive visual components as organized patterns or wholes, instead of many different parts. "Gestalt" is a German word that partially translates to "configuration or pattern" along with "whole or emergent structure". According to this theory, eight main factors determine how the visual system automatically groups elements into patterns: proximity, similarity, closure, symmetry, common fate (i.e., common motion), continuity, good Gestalt (a pattern that is regular, simple, and orderly), and past experience.
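As an illustration of one of these factors, the proximity principle can be sketched computationally: elements separated by small gaps are perceived as belonging together. The function and distance threshold below are hypothetical, chosen only for illustration, and implement a simple single-linkage grouping.

```python
import math

def group_by_proximity(points, threshold):
    """Group 2D points connected by chains of distances under threshold."""
    groups = []
    for p in points:
        # Find existing groups that contain a point near p.
        near = [g for g in groups
                if any(math.dist(p, q) < threshold for q in g)]
        merged = [p]
        for g in near:
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

# Two clusters of dots separated by a gap are perceived as two groups.
dots = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
print(len(group_by_proximity(dots, 2.0)))  # → 2
```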

Analysis of eye movement

Eye movement during the first 2 seconds of viewing (Yarbus, 1967)

During the 1960s, technical development permitted the continuous registration of eye movement during reading, [19] in picture viewing, [20] and later, in visual problem solving, [21] and when headset-cameras became available, also during driving. [22]

The picture to the right shows what may happen during the first two seconds of visual inspection. While the background is out of focus, representing the peripheral vision, the first eye movement goes to the boots of the man (just because they are very near the starting fixation and have a reasonable contrast). Eye movements serve the function of attentional selection, i.e., to select a fraction of all visual inputs for deeper processing by the brain.

The following fixations jump from face to face. They might even permit comparisons between faces.

It may be concluded that faces are a very attractive search target within the peripheral field of vision. Foveal vision then adds detailed information to the peripheral first impression.

It can also be noted that there are different types of eye movements: fixational eye movements (microsaccades, ocular drift, and tremor), vergence movements, saccadic movements, and pursuit movements. Fixations are comparatively static points where the eye rests. However, the eye is never completely still, and gaze position drifts. These drifts are in turn corrected by microsaccades, very small fixational eye movements. Vergence movements involve the cooperation of both eyes to allow an image to fall on the same area of both retinas, resulting in a single focused image. Saccadic movements are jumps from one position to another and are used to rapidly scan a particular scene or image. Lastly, pursuit movements are smooth eye movements used to follow objects in motion. [23]
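The distinction between fixations and saccades is commonly operationalized in eye-tracking analysis by a velocity threshold (the "I-VT" idea): fast gaze changes are labelled saccades, slow ones fixations. A minimal one-dimensional sketch, with made-up sample values and a commonly cited but here arbitrary 30°/s threshold:

```python
def classify_ivt(positions, dt, threshold_deg_per_s=30.0):
    """Label each inter-sample interval as 'saccade' or 'fixation'."""
    labels = []
    for a, b in zip(positions, positions[1:]):
        velocity = abs(b - a) / dt  # degrees per second (1D for simplicity)
        labels.append("saccade" if velocity > threshold_deg_per_s else "fixation")
    return labels

# Hypothetical 1000 Hz samples: slow drift, a 5-degree jump, drift again.
gaze = [0.00, 0.01, 0.02, 5.02, 5.03]
print(classify_ivt(gaze, dt=0.001))
# → ['fixation', 'fixation', 'saccade', 'fixation']
```

Real analysis pipelines add smoothing, 2D velocity, and separate handling of microsaccades and pursuit, but the thresholding step is the core of the classification.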

Face and object recognition

There is considerable evidence that face and object recognition are accomplished by distinct systems. For example, prosopagnosic patients show deficits in face, but not object processing, while object agnosic patients (most notably, patient C.K.) show deficits in object processing with spared face processing. [24] Behaviorally, it has been shown that faces, but not objects, are subject to inversion effects, leading to the claim that faces are "special". [24] [25] Further, face and object processing recruit distinct neural systems. [26] Notably, some have argued that the apparent specialization of the human brain for face processing does not reflect true domain specificity, but rather a more general process of expert-level discrimination within a given class of stimulus, [27] though this latter claim is the subject of substantial debate. Using fMRI and electrophysiology Doris Tsao and colleagues described brain regions and a mechanism for face recognition in macaque monkeys. [28]

The inferotemporal (IT) cortex has a key role in the recognition and differentiation of objects. A study at MIT showed that subset regions of the IT cortex handle different objects. [29] When neural activity in many small areas of this cortex was selectively silenced, the animal became unable to distinguish between certain particular pairings of objects. This shows that the IT cortex is divided into regions that respond to different, particular visual features. In a similar way, certain patches and regions of the cortex are more involved in face recognition than in other object recognition.

Some studies suggest that, rather than the uniform global image, particular features and regions of interest of objects are the key elements when the brain needs to recognize an object in an image. [30] [31] In this way, human vision is vulnerable to small, particular changes to the image, such as disruption of the object's edges, modification of texture, or any small change in a crucial region of the image. [32]

Studies of people whose sight has been restored after a long blindness reveal that they cannot necessarily recognize objects and faces (as opposed to color, motion, and simple geometric shapes). Some hypothesize that being blind during childhood prevents some part of the visual system necessary for these higher-level tasks from developing properly. [33] The general belief that a critical period lasts until age 5 or 6 was challenged by a 2007 study that found that older patients could improve these abilities with years of exposure. [34]

Cognitive and computational approaches

In the 1970s, David Marr developed a multi-level theory of vision, which analyzed the process of vision at different levels of abstraction. In order to focus on the understanding of specific problems in vision, he identified three levels of analysis: the computational, algorithmic and implementational levels. Many vision scientists, including Tomaso Poggio, have embraced these levels of analysis and employed them to further characterize vision from a computational perspective. [35]

The computational level addresses, at a high level of abstraction, the problems that the visual system must overcome. The algorithmic level attempts to identify the strategy that may be used to solve these problems. Finally, the implementational level attempts to explain how solutions to these problems are realized in neural circuitry.

Marr suggested that it is possible to investigate vision at any of these levels independently. Marr described vision as proceeding from a two-dimensional visual array (on the retina) to a three-dimensional description of the world as output. His stages of vision include a primal sketch of the scene, based on features such as edges and regions; a 2½D sketch, representing the depth and orientation of visible surfaces relative to the viewer; and a 3D model, in which the scene is described in object-centred coordinates.

Marr's 2½D sketch assumes that a depth map is constructed, and that this map is the basis of 3D shape perception. However, both stereoscopic and pictorial perception, as well as monocular viewing, make clear that the perception of 3D shape precedes, and does not rely on, the perception of the depth of points. It is not clear how a preliminary depth map could, in principle, be constructed, nor how this would address the question of figure-ground organization, or grouping. The role of perceptual organizing constraints, overlooked by Marr, in the production of 3D shape percepts from binocularly-viewed 3D objects has been demonstrated empirically for the case of 3D wire objects, e.g. [37] [38] For a more detailed discussion, see Pizlo (2008). [39]

A more recent, alternative framework proposes that vision is composed instead of the following three stages: encoding, selection, and decoding. [40] Encoding is to sample and represent visual inputs (e.g., to represent visual inputs as neural activities in the retina). Selection, or attentional selection, is to select a tiny fraction of input information for further processing, e.g., by shifting gaze to an object or visual location to better process the visual signals at that location. Decoding is to infer or recognize the selected input signals, e.g., to recognize the object at the center of gaze as somebody's face. In this framework, [41] attentional selection starts at the primary visual cortex along the visual pathway, and the attentional constraints impose a dichotomy between the central and peripheral visual fields for visual recognition or decoding.

Transduction

Transduction is the process through which energy from environmental stimuli is converted into neural activity. The retina contains three different cell layers: the photoreceptor layer, the bipolar cell layer, and the ganglion cell layer. The photoreceptor layer, where transduction occurs, is farthest from the lens. It contains photoreceptors with different sensitivities, called rods and cones. The cones are responsible for color perception and are of three distinct types, labelled red, green, and blue. Rods are responsible for the perception of objects in low light. [42] Photoreceptors contain a special chemical called a photopigment, which is embedded in the membrane of the lamellae; a single human rod contains approximately 10 million of them. The photopigment molecules consist of two parts: an opsin (a protein) and retinal (a lipid). [43] There are three specific photopigments (each with its own wavelength sensitivity) that respond across the spectrum of visible light. When the appropriate wavelengths (those to which the specific photopigment is sensitive) hit the photoreceptor, the photopigment splits in two, which sends a signal to the bipolar cell layer, which in turn signals the ganglion cells, whose axons form the optic nerve and transmit the information to the brain. If a particular cone type is missing or abnormal due to a genetic anomaly, a color vision deficiency, sometimes called color blindness, will occur. [44]

Opponent process

Transduction involves chemical messages sent from the photoreceptors to the bipolar cells to the ganglion cells. Several photoreceptors may send their information to one ganglion cell. There are two types of color-opponent ganglion cells: red/green and yellow/blue. These neurons fire constantly, even when not stimulated. The brain interprets different colors (and, with a lot of information, an image) when the rate of firing of these neurons alters. Red light stimulates the red cone, which in turn excites the red/green ganglion cell; green light stimulates the green cone, which inhibits the red/green ganglion cell; and blue light stimulates the blue cone, which excites the blue/yellow ganglion cell. The rate of firing of a ganglion cell is increased when it is signaled by one cone and decreased (inhibited) when it is signaled by the other. The first color in the name of the ganglion cell is the color that excites it, and the second is the color that inhibits it: a red cone excites the red/green ganglion cell, and a green cone inhibits it. This is an opponent process. If the rate of firing of a red/green ganglion cell is increased, the brain knows that the light was red; if the rate is decreased, the brain knows that the light was green. [44]
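A toy numerical sketch of this opponent coding (not a physiological model; the baseline firing rate and units are invented for illustration) shows how a baseline rate is pushed up by one input and pulled down by its opponent:

```python
BASELINE = 50  # hypothetical spontaneous firing rate (arbitrary units)

def opponent_channels(red, green, blue):
    """Return toy (red/green, blue/yellow) opponent-channel firing rates."""
    red_green = BASELINE + red - green      # red excites, green inhibits
    yellow = (red + green) / 2              # "yellow" taken as red+green activity
    blue_yellow = BASELINE + blue - yellow  # blue excites, yellow inhibits
    return red_green, blue_yellow

# Pure red light: the red/green channel fires above baseline, while the
# blue/yellow channel dips below it (red contributes to "yellow").
print(opponent_channels(20, 0, 0))  # → (70, 40.0)
```

Firing above baseline on a channel signals one pole (e.g. red), firing below baseline signals the other (green), matching the description above.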

Artificial visual perception

Theories and observations of visual perception have been the main source of inspiration for computer vision (also called machine vision, or computational vision). Special hardware structures and software algorithms provide machines with the capability to interpret the images coming from a camera or a sensor.

For instance, the 2022 Toyota 86 uses the Subaru EyeSight system for driver-assist technology. [45]
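One classic algorithmic building block of such machine-vision systems is gradient-based edge detection. The following is a minimal, dependency-free sketch of the Sobel operator on a grayscale image stored as a list of lists; production systems use optimized libraries, but the idea is the same: convolve the image with horizontal and vertical gradient kernels and combine the responses.

```python
def sobel_magnitude(img):
    """Return gradient magnitudes for interior pixels of a 2D grayscale image."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical dark/bright boundary produces strong responses along it.
img = [[0, 0, 9, 9]] * 4
edges = sobel_magnitude(img)
```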


References

  1. Sadun, Alfredo A.; Johnson, Betty M.; Smith, Lois E. H. (1986). "Neuroanatomy of the human visual system: Part II Retinal projections to the superior colliculus and pulvinar". Neuro-Ophthalmology. 6 (6): 363–370. doi:10.3109/01658108609016476. ISSN   0165-8107.
  2. Carlson, Neil R. (2013). "6". Physiology of Behaviour (11th ed.). Upper Saddle River, New Jersey, US: Pearson Education Inc. pp. 187–189. ISBN   978-0-205-23939-9.
  3. Livingstone, Margaret (2008). Vision and art: the biology of seeing. Hubel, David H. New York: Abrams. ISBN 978-0-8109-9554-3. OCLC 192082768.
  4. Brainard, George C.; Beacham, Sabrina; Sanford, Britt E.; Hanifin, John P.; Streletz, Leopold; Sliney, David (March 1, 1999). "Near ultraviolet radiation elicits visual evoked potentials in children". Clinical Neurophysiology. 110 (3): 379–383. doi:10.1016/S1388-2457(98)00022-4. ISSN   1388-2457. PMID   10363758. S2CID   8509975.
  5. D. H. Sliney (February 2016). "What is light? The visible spectrum and beyond". Eye. 30 (2): 222–229. doi:10.1038/eye.2015.252. ISSN   1476-5454. PMC   4763133 . PMID   26768917.
  6. W. C. Livingston (2001). Color and light in nature (2nd ed.). Cambridge, UK: Cambridge University Press. ISBN   0-521-77284-2.
  7. Finger, Stanley (1994). Origins of neuroscience: a history of explorations into brain function. Oxford [Oxfordshire]: Oxford University Press. pp. 67–69. ISBN   978-0-19-506503-9. OCLC   27151391.
  8. Swenson Rivka (2010). "Optics, Gender, and the Eighteenth-Century Gaze: Looking at Eliza Haywood's Anti-Pamela". The Eighteenth Century: Theory and Interpretation. 51 (1–2): 27–43. doi:10.1353/ecy.2010.0006. S2CID   145149737.
  9. Howard, I (1996). "Alhazen's neglected discoveries of visual phenomena". Perception. 25 (10): 1203–1217. doi:10.1068/p251203. PMID   9027923. S2CID   20880413.
  10. Khaleefa, Omar (1999). "Who Is the Founder of Psychophysics and Experimental Psychology?". American Journal of Islamic Social Sciences. 16 (2): 1–26. doi:10.35632/ajis.v16i2.2126.
  11. Adamson, Peter (July 7, 2016). Philosophy in the Islamic World: A History of Philosophy Without Any Gaps. Oxford University Press. p. 77. ISBN   978-0-19-957749-1.
  12. Keele, Kd (1955). "Leonardo da Vinci on vision". Proceedings of the Royal Society of Medicine. 48 (5): 384–390. doi:10.1177/003591575504800512. ISSN   0035-9157. PMC   1918888 . PMID   14395232.
  13. von Helmholtz, Hermann (1925). Handbuch der physiologischen Optik. Vol. 3. Leipzig: Voss. Archived from the original on September 27, 2018. Retrieved December 14, 2016.
  14. Hunziker, Hans-Werner (2006). Im Auge des Lesers: foveale und periphere Wahrnehmung – vom Buchstabieren zur Lesefreude [In the eye of the reader: foveal and peripheral perception – from letter recognition to the joy of reading]. Zürich: Transmedia Stäubli Verlag. ISBN   978-3-7266-0068-6.[ page needed ]
  15. Stone, JV (2011). "Footprints sticking out of the sand. Part 2: children's Bayesian priors for shape and lighting direction" (PDF). Perception. 40 (2): 175–90. doi:10.1068/p6776. PMID   21650091. S2CID   32868278.
  16. Mamassian, Pascal; Landy, Michael; Maloney, Laurence T. (2002). "Bayesian Modelling of Visual Perception". In Rao, Rajesh P. N.; Olshausen, Bruno A.; Lewicki, Michael S. (eds.). Probabilistic Models of the Brain: Perception and Neural Function. Neural Information Processing. MIT Press. pp. 13–36. ISBN   978-0-262-26432-7.
  17. "A Primer on Probabilistic Approaches to Visual Perception". Archived from the original on July 10, 2006. Retrieved October 14, 2010.
  18. Wagemans, Johan (November 2012). "A Century of Gestalt Psychology in Visual Perception". Psychological Bulletin. 138 (6): 1172–1217. CiteSeerX   10.1.1.452.8394 . doi:10.1037/a0029333. PMC   3482144 . PMID   22845751.
  19. Taylor, Stanford E. (November 1965). "Eye Movements in Reading: Facts and Fallacies". American Educational Research Journal. 2 (4): 187–202. doi:10.2307/1161646. JSTOR   1161646.
  20. Yarbus, A. L. (1967). Eye Movements and Vision. New York: Plenum Press.[ page needed ]
  21. Hunziker, H. W. (1970). "Visuelle Informationsaufnahme und Intelligenz: Eine Untersuchung über die Augenfixationen beim Problemlösen" [Visual information acquisition and intelligence: A study of the eye fixations in problem solving]. Schweizerische Zeitschrift für Psychologie und Ihre Anwendungen (in German). 29 (1/2).[ page needed ]
  22. Cohen, A. S. (1983). "Informationsaufnahme beim Befahren von Kurven, Psychologie für die Praxis 2/83" [Information recording when driving on curves, psychology in practice 2/83]. Bulletin der Schweizerischen Stiftung für Angewandte Psychologie.[ page needed ]
  23. Carlson, Neil R.; Heth, C. Donald; Miller, Harold; Donahoe, John W.; Buskist, William; Martin, G. Neil; Schmaltz, Rodney M. (2009). Psychology the Science of Behaviour . Toronto Ontario: Pearson Canada. pp.  140–1. ISBN   978-0-205-70286-2.
  24. Moscovitch, Morris; Winocur, Gordon; Behrmann, Marlene (1997). "What Is Special about Face Recognition? Nineteen Experiments on a Person with Visual Object Agnosia and Dyslexia but Normal Face Recognition". Journal of Cognitive Neuroscience. 9 (5): 555–604. doi:10.1162/jocn.1997.9.5.555. PMID   23965118. S2CID   207550378.
  25. Yin, Robert K. (1969). "Looking at upside-down faces". Journal of Experimental Psychology. 81 (1): 141–5. doi:10.1037/h0027474.
  26. Kanwisher, Nancy; McDermott, Josh; Chun, Marvin M. (June 1997). "The fusiform face area: a module in human extrastriate cortex specialized for face perception". The Journal of Neuroscience. 17 (11): 4302–11. doi:10.1523/JNEUROSCI.17-11-04302.1997. PMC   6573547 . PMID   9151747.
  27. Gauthier, Isabel; Skudlarski, Pawel; Gore, John C.; Anderson, Adam W. (February 2000). "Expertise for cars and birds recruits brain areas involved in face recognition". Nature Neuroscience . 3 (2): 191–7. doi:10.1038/72140. PMID   10649576. S2CID   15752722.
  28. Chang, Le; Tsao, Doris Y. (June 1, 2017). "The Code for Facial Identity in the Primate Brain". Cell. 169 (6): 1013–1028.e14. doi: 10.1016/j.cell.2017.05.011 . ISSN   0092-8674. PMC   8088389 . PMID   28575666.
  29. "How the brain distinguishes between objects". MIT News. March 13, 2019. Retrieved October 10, 2019.
  30. Srivastava, Sanjana; Ben-Yosef, Guy; Boix, Xavier (February 8, 2019). Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images. arXiv: 1902.03227 . OCLC   1106329907.
  31. Ben-Yosef, Guy; Assif, Liav; Ullman, Shimon (February 2018). "Full interpretation of minimal images". Cognition. 171: 65–84. doi:10.1016/j.cognition.2017.10.006. hdl: 1721.1/106887 . ISSN   0010-0277. PMID   29107889. S2CID   3372558.
  32. Elsayed, Gamaleldin F.; Shankar, Shreya; Cheung, Brian; Papernot, Nicolas; Kurakin, Alex; Goodfellow, Ian; Sohl-Dickstein, Jascha (February 22, 2018). "Adversarial Examples that Fool both Computer Vision and Time-Limited Humans" (PDF). Advances in Neural Information Processing Systems 31 (NeurIPS 2018). arXiv: 1802.08195 . OCLC   1106289156.
  33. Man with restored sight provides new insight into how vision develops
  34. Out Of Darkness, Sight: Rare Cases Of Restored Vision Reveal How The Brain Learns To See
  35. Poggio, Tomaso (1981). "Marr's Computational Approach to Vision". Trends in Neurosciences. 4: 258–262. doi:10.1016/0166-2236(81)90081-3. S2CID   53163190.
  36. Marr, D (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. MIT Press.[ page needed ]
  37. Rock, Irvin; DiVita, Joseph (1987). "A case of viewer-centered object perception". Cognitive Psychology. 19 (2): 280–293. doi:10.1016/0010-0285(87)90013-2. PMID   3581759. S2CID   40154873.
  38. Pizlo, Zygmunt; Stevenson, Adam K. (1999). "Shape constancy from novel views". Perception & Psychophysics. 61 (7): 1299–1307. doi: 10.3758/BF03206181 . ISSN   0031-5117. PMID   10572459. S2CID   8041318.
  39. Pizlo, Z. (2008). 3D Shape. MIT Press.
  40. Zhaoping, Li (2014). Understanding vision: theory, models, and data. United Kingdom: Oxford University Press. ISBN   978-0199564668.
  41. Zhaoping, L (2019). "A new framework for understanding vision from the perspective of the primary visual cortex". Current Opinion in Neurobiology. 58: 1–10. doi:10.1016/j.conb.2019.06.001. PMID   31271931. S2CID   195806018.
  42. Hecht, Selig (April 1, 1937). "Rods, Cones, and the Chemical Basis of Vision". Physiological Reviews. 17 (2): 239–290. doi:10.1152/physrev.1937.17.2.239. ISSN   0031-9333.
  43. Carlson, Neil R. (2013). "6". Physiology of Behaviour (11th ed.). Upper Saddle River, New Jersey, US: Pearson Education Inc. p. 170. ISBN   978-0-205-23939-9.
  44. Carlson, Neil R.; Heth, C. Donald (2010). "5". Psychology the science of behaviour (2nd ed.). Upper Saddle River, New Jersey, US: Pearson Education Inc. pp.  138–145. ISBN   978-0-205-64524-4.
  45. "2022 Toyota GR 86 embraces sports car evolution with fresh looks, more power".

Further reading