The field of view (FOV) is the angular extent of the observable world that is seen at any given moment. In the case of optical instruments or sensors, it is the solid angle through which a detector is sensitive to electromagnetic radiation. The term is also used in photography.
In the context of human and primate vision, the term "field of view" is typically only used in the sense of a restriction to what is visible by external apparatus, like when wearing spectacles [1] or virtual reality goggles. Note that eye movements are allowed in the definition but do not change the field of view when understood this way.
If the analogy of the eye's retina working as a sensor is drawn upon, the corresponding concept in human (and much of animal) vision is the visual field. [2] It is defined as "the number of degrees of visual angle during stable fixation of the eyes". [3] Note that eye movements are excluded from the visual field's definition. Humans have a forward-facing horizontal visual field of slightly over 210 degrees (i.e. without eye movements), [4] [5] [6] (with eye movements included it is slightly larger, as one can verify by wiggling a finger at the side of one's visual field), while some birds have a complete or nearly complete 360-degree visual field. The vertical range of the visual field in humans is around 150 degrees. [4]
The range of visual abilities is not uniform across the visual field, and by implication the FoV, and varies between species. For example, binocular vision, which is the basis for stereopsis and is important for depth perception, covers 114 degrees (horizontally) of the visual field in humans; [7] the remaining peripheral ~50 degrees on each side [6] have no binocular vision (because only one eye can see those parts of the visual field). Some birds have a scant 10 to 20 degrees of binocular vision.
Similarly, color vision and the ability to perceive shape and motion vary across the visual field; in humans, color vision and form perception are concentrated in the center of the visual field, while motion perception is only slightly reduced in the periphery and thus has a relative advantage there. The physiological basis for this is the much higher concentration of color-sensitive cone cells and color-sensitive parvocellular retinal ganglion cells in the fovea, the central region of the retina, together with its larger representation in the visual cortex, as compared with the higher concentration of color-insensitive rod cells and motion-sensitive magnocellular retinal ganglion cells in the visual periphery and its smaller cortical representation. Since rod cells require considerably less light to be activated, a further result of this distribution is that peripheral vision is much more sensitive at night than foveal vision (sensitivity is highest at around 20 degrees eccentricity). [2]
Many optical instruments, particularly binoculars or spotting scopes, are advertised with their field of view specified in one of two ways: angular field of view and linear field of view. Angular field of view is typically specified in degrees, while linear field of view is a ratio of lengths. For example, binoculars with a 5.8-degree (angular) field of view might be advertised as having a (linear) field of view of 102 mm per meter. As long as the FOV is less than about 10 degrees or so, the following approximation formula allows one to convert between linear and angular field of view. Let A be the angular field of view in degrees, and let M be the linear field of view in millimeters per meter. Then, using the small-angle approximation:

M ≈ 17.45 · A
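To illustrate the approximation, the following Python sketch (the function names are purely illustrative) converts between the two specifications; the factor of roughly 17.45 mm per metre per degree follows directly from the small-angle approximation above.

```python
import math

def angular_to_linear(angle_deg: float) -> float:
    """Approximate linear FOV (mm at 1 m distance) from angular FOV in degrees.

    Small-angle approximation: 1 degree subtends about
    1000 mm * pi / 180 ≈ 17.45 mm at a distance of 1 m.
    """
    return 1000.0 * math.radians(angle_deg)

def linear_to_angular(linear_mm_per_m: float) -> float:
    """Approximate angular FOV in degrees from linear FOV in mm per metre."""
    return math.degrees(linear_mm_per_m / 1000.0)

# The 5.8-degree binoculars from the example above:
print(round(angular_to_linear(5.8)))  # ≈ 101 mm per metre (advertised as about 102)
```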
In machine vision, the lens focal length and image sensor size set up a fixed relationship between the field of view and the working distance. Field of view is the area of the inspected object that is captured on the camera's imager. The size of the field of view and the size of the camera's imager directly affect the image resolution (one determining factor in accuracy). Working distance is the distance between the back of the lens and the target object.
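A minimal sketch of that relationship, assuming an idealized pinhole/thin-lens model in which magnification is approximately focal length divided by working distance (real machine-vision lens calculators also account for distortion and the exact principal-plane positions):

```python
def field_of_view_size(sensor_size_mm: float,
                       focal_length_mm: float,
                       working_distance_mm: float) -> float:
    """Approximate size of the imaged area along the same axis as sensor_size_mm.

    Pinhole-camera approximation: magnification ≈ focal length / working distance,
    so field of view ≈ sensor size / magnification.
    """
    magnification = focal_length_mm / working_distance_mm
    return sensor_size_mm / magnification

# Example (illustrative values): 8.8 mm wide sensor, 25 mm lens, 300 mm working distance
print(field_of_view_size(8.8, 25.0, 300.0))  # ≈ 105.6 mm wide field of view
```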
In tomography, the field of view is the area of each tomogram. In computed tomography, for example, a volume of voxels can be created from such tomograms by merging multiple slices along the scan range.
In remote sensing, the solid angle through which a detector element (a pixel sensor) is sensitive to electromagnetic radiation at any one time is called the instantaneous field of view (IFOV). A measure of the spatial resolution of a remote sensing imaging system, it is often expressed as dimensions of visible ground area for some known sensor altitude. [8] [9] Single-pixel IFOV is closely related to the concepts of resolved pixel size, ground resolved distance, ground sample distance and modulation transfer function.
In astronomy, the field of view is usually expressed as an angular area viewed by the instrument, in square degrees, or, for higher-magnification instruments, in square arc-minutes. For reference, the Wide Field Channel on the Advanced Camera for Surveys on the Hubble Space Telescope has a field of view of 10 sq. arc-minutes, and the High Resolution Channel of the same instrument has a field of view of 0.15 sq. arc-minutes. Ground-based survey telescopes have much wider fields of view. The photographic plates used by the UK Schmidt Telescope had a field of view of 30 sq. degrees. The 1.8 m (71 in) Pan-STARRS telescope, with the most advanced digital camera to date, has a field of view of 7 sq. degrees. In the near infra-red, WFCAM on UKIRT has a field of view of 0.2 sq. degrees, and the VISTA telescope has a field of view of 0.6 sq. degrees. Until recently, digital cameras could only cover a small field of view compared to photographic plates, although they beat photographic plates in quantum efficiency, linearity and dynamic range, as well as being much easier to process.
In photography, the field of view is that part of the world that is visible through the camera at a particular position and orientation in space; objects outside the FOV when the picture is taken are not recorded in the photograph. It is most often expressed as the angular size of the view cone, as an angle of view. For a normal lens focused at infinity, the diagonal (or horizontal or vertical) field of view can be calculated as:
FOV = 2 arctan(sensor size / (2f))

where f is the focal length; the sensor size and f are in the same unit of length, and the FOV is in radians.
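The formula can be evaluated directly, for example in Python; the full-frame sensor dimensions below are just a common illustrative case.

```python
import math

def angle_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Angle of view in degrees for a lens focused at infinity.

    FOV = 2 * arctan(sensor size / (2 * focal length)), converted to degrees.
    """
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

# Full-frame sensor: 36 mm x 24 mm, diagonal ≈ 43.3 mm
print(round(angle_of_view_deg(43.3, 50.0), 1))  # ≈ 46.8° diagonal FOV for a 50 mm "normal" lens
print(round(angle_of_view_deg(36.0, 50.0), 1))  # ≈ 39.6° horizontal FOV
```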
In microscopy, the field of view in high power (usually a 400-fold magnification when referenced in scientific papers) is called a high-power field, and is used as a reference point for various classification schemes.
For an objective with magnification m, the FOV is related to the Field Number (FN) by

FOV = FN / m

If other magnifying lenses are used in the system (in addition to the objective), the total magnification of the projection is used.
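A short sketch of this relation; the 22 mm field number and the 40x objective below are typical values chosen for illustration rather than fixed standards.

```python
def microscope_fov_mm(field_number_mm: float,
                      objective_magnification: float,
                      extra_magnification: float = 1.0) -> float:
    """Diameter of the visible specimen area: FOV = FN / total magnification.

    extra_magnification covers any additional magnifying lenses between the
    objective and the eyepiece (e.g. a camera adapter); it defaults to 1.
    """
    return field_number_mm / (objective_magnification * extra_magnification)

# Typical 22 mm field number with a 40x objective:
print(microscope_fov_mm(22.0, 40.0))  # 0.55 mm visible field diameter
```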
The field of view in video games refers to the field of view of the camera looking at the game world, which is dependent on the scaling method used.
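One widely used scaling convention (often called "Hor+") keeps the vertical FOV fixed and derives the horizontal FOV from the display's aspect ratio; the sketch below assumes that convention, since the paragraph does not name a specific method.

```python
import math

def horizontal_fov_deg(vertical_fov_deg: float, aspect_ratio: float) -> float:
    """Horizontal FOV from a fixed vertical FOV ("Hor+" scaling).

    aspect_ratio is width / height (e.g. 16/9). The conversion operates on the
    half-angle tangents, not on the angles themselves.
    """
    half_v = math.radians(vertical_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half_v) * aspect_ratio))

print(round(horizontal_fov_deg(60.0, 16 / 9), 1))  # ≈ 91.5° on a 16:9 display
print(round(horizontal_fov_deg(60.0, 4 / 3), 1))   # ≈ 75.2° on a 4:3 display
```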
Optics is the branch of physics that studies the behaviour and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behaviour of visible, ultraviolet, and infrared light. Light is a type of electromagnetic radiation, and other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties.
In photography, angle of view (AOV) describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view.
A monocular is a compact refracting telescope used to magnify images of distant objects, typically using an optical prism to ensure an erect image, instead of using relay lenses like most telescopic sights. The volume and weight of a monocular are typically less than half of a pair of binoculars with similar optical properties, making it more portable and also less expensive. This is because binoculars are essentially a pair of monoculars packed together — one for each eye. As a result, monoculars only produce two-dimensional images, while binoculars can use two parallaxed images to produce binocular vision, which allows stereopsis and depth perception.
Angular resolution describes the ability of any image-forming device such as an optical or radio telescope, a microscope, a camera, or an eye, to distinguish small details of an object, thereby making it a major determinant of image resolution. It is used in optics applied to light waves, in antenna theory applied to radio waves, and in acoustics applied to sound waves. The colloquial use of the term "resolution" sometimes causes confusion; when an optical system is said to have a high resolution or high angular resolution, it means that the perceived distance, or actual angular distance, between resolved neighboring objects is small. The value that quantifies this property, θ, which is given by the Rayleigh criterion, is low for a system with a high resolution. The closely related term spatial resolution refers to the precision of a measurement with respect to space, which is directly connected to angular resolution in imaging instruments. The Rayleigh criterion shows that the minimum angular spread that can be resolved by an image-forming system is limited by diffraction to the ratio of the wavelength of the waves to the aperture width. For this reason, high-resolution imaging systems such as astronomical telescopes, long distance telephoto camera lenses and radio telescopes have large apertures.
An optical telescope is a telescope that gathers and focuses light mainly from the visible part of the electromagnetic spectrum, to create a magnified image for direct visual inspection, to make a photograph, or to collect data through electronic image sensors.
Depth perception is the ability to perceive distance to objects in the world using the visual system and visual perception. It is a major factor in perceiving the world in three dimensions. Depth perception happens primarily due to stereopsis and accommodation of the eye.
Peripheral vision, or indirect vision, is vision as it occurs outside the point of fixation, i.e. away from the center of gaze or, when viewed at large angles, in the "corner of one's eye". The vast majority of the area in the visual field is included in the notion of peripheral vision. "Far peripheral" vision refers to the area at the edges of the visual field, "mid-peripheral" vision refers to medium eccentricities, and "near-peripheral", sometimes referred to as "para-central" vision, exists adjacent to the center of gaze.
Magnification is the process of enlarging the apparent size, not physical size, of something. This enlargement is quantified by a size ratio called optical magnification. When this number is less than one, it refers to a reduction in size, sometimes called de-magnification.
Visual acuity (VA) commonly refers to the clarity of vision, but technically rates an animal's ability to recognize small details with precision. Visual acuity depends on optical and neural factors. Optical factors of the eye influence the sharpness of an image on its retina. Neural factors include the health and functioning of the retina, of the neural pathways to the brain, and of the interpretative faculty of the brain.
An eyepiece, or ocular lens, is a type of lens that is attached to a variety of optical devices such as telescopes and microscopes. It is so named because it is usually the lens that is closest to the eye when someone looks through an optical device to observe an object or sample. The objective lens or mirror collects light from an object or sample and brings it to focus, creating an image of the object. The eyepiece is placed near the focal point of the objective to magnify this image for the eye. The amount of magnification depends on the focal length of the eyepiece.
The angular diameter, angular size, apparent diameter, or apparent size is an angular distance describing how large a sphere or circle appears from a given point of view. In the vision sciences, it is called the visual angle, and in optics, it is the angular aperture. The angular diameter can alternatively be thought of as the angular displacement through which an eye or camera must rotate to look from one side of an apparent circle to the opposite side. Humans can resolve with their naked eyes diameters down to about 1 arcminute. This corresponds to 0.3 m at a 1 km distance, or to perceiving Venus as a disk under optimal conditions.
The visual field is "that portion of space in which objects are visible at the same moment during steady fixation of the gaze in one direction"; in ophthalmology and neurology the emphasis is mostly on the structure inside the visual field and it is then considered “the field of functional capacity obtained and recorded by means of perimetry”.
Visual angle is the angle a viewed object subtends at the eye, usually stated in degrees of arc. It also is called the object's angular size.
In neuroscience, cortical magnification describes how many neurons in an area of the visual cortex are 'responsible' for processing a stimulus of a given size, as a function of visual field location. In the center of the visual field, corresponding to the center of the fovea of the retina, a very large number of neurons process information from a small region of the visual field. If the same stimulus is seen in the periphery of the visual field, it would be processed by a much smaller number of neurons. The reduction of the number of neurons per visual field area from foveal to peripheral representations is achieved in several steps along the visual pathway, starting already in the retina.
The following are common definitions related to the machine vision field.
In optics, defocus is the aberration in which an image is simply out of focus. This aberration is familiar to anyone who has used a camera, videocamera, microscope, telescope, or binoculars. Optically, defocus refers to a translation of the focus along the optical axis away from the detection surface. In general, defocus reduces the sharpness and contrast of the image. What should be sharp, high-contrast edges in a scene become gradual transitions. Fine detail in the scene is blurred or even becomes invisible. Nearly all image-forming optical devices incorporate some form of focus adjustment to minimize defocus and maximize image quality.
In human visual perception, the visual angle, denoted θ, subtended by a viewed object sometimes looks larger or smaller than its actual value. One approach to this phenomenon posits a subjective correlate to the visual angle: the perceived visual angle or perceived angular size. An optical illusion where the physical and subjective angles differ is then called a visual angle illusion or angular size illusion.
The globe effect, also known as rolling ball effect, is an optical illusion which can occur with optical instruments used visually, in particular binoculars or telescopes. If such an instrument is rectilinear, or free of rectilinear distortion, some observers get the impression of an image rolling on a convex surface when the instrument is panned.
Vernier acuity is a type of visual acuity – more precisely of hyperacuity – that measures the ability to discern a disalignment between two line segments or gratings. A subject's vernier acuity is the smallest visible offset between the stimuli that can be detected. Because the disalignments are often much smaller than the diameter and spacing of retinal receptors, vernier acuity requires neural processing and "pooling" to detect it. Because vernier acuity exceeds standard visual acuity by far, the phenomenon has been termed hyperacuity. Vernier acuity develops rapidly during infancy and continues to develop slowly throughout childhood. At approximately three to twelve months of age, it surpasses grating acuity in foveal vision in humans. However, vernier acuity decreases more quickly than grating acuity in peripheral vision. Vernier acuity was first explained by Ewald Hering in 1899, based on earlier data by Alfred Volkmann in 1863 and results by Ernst Anton Wülfing in 1892.
A peripheral head-mounted display (PHMD) is a visual display mounted to the user's head that sits in the periphery of the user's field of view (FOV), i.e. in peripheral vision. The actual position of the mounting is considered irrelevant as long as it does not cover the entire FOV. While a PHMD provides an additional, always-available visual output channel, it does not hinder the user in performing real-world tasks.