Image formation

The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog-to-digital conversion and sampling.

Imaging

The imaging process is a mapping of an object to an image plane. Each point on the image corresponds to a point on the object. An illuminated object scatters light toward a lens, and the lens collects and focuses the light to form the image. The ratio of the height of the image to the height of the object is the magnification. The spatial extent of the image surface and the focal length of the lens together determine the field of view of the lens. Curved mirrors also form images; a mirror has a center of curvature, and its focal length is half of its radius of curvature.
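
As a concrete illustration of these relationships, the following minimal sketch applies the thin-lens equation, 1/f = 1/s_o + 1/s_i, and the mirror relation f = R/2. The equation and relation are standard results; all numeric values are illustrative assumptions, not taken from the text above.

    def thin_lens_image_distance(f, s_o):
        """Image distance s_i from the thin-lens equation 1/f = 1/s_o + 1/s_i."""
        return 1.0 / (1.0 / f - 1.0 / s_o)

    f = 0.050        # focal length: 50 mm lens (illustrative value)
    s_o = 2.0        # object distance: 2 m (illustrative value)
    s_i = thin_lens_image_distance(f, s_o)
    m = -s_i / s_o   # transverse magnification (negative sign: inverted image)
    print(f"image distance = {s_i * 1000:.1f} mm, magnification = {m:.4f}")

    # For a curved mirror, the focal length is half the radius of curvature.
    R = 0.5          # mirror radius of curvature, m (illustrative value)
    print(f"mirror focal length = {R / 2 * 1000:.0f} mm")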

Illumination

An object may be illuminated by light from an emitting source such as the sun, a light bulb, or a light-emitting diode (LED). The light incident on the object is reflected in a manner dependent on the surface properties of the object. For rough surfaces, the reflected light is scattered in a manner described by the bidirectional reflectance distribution function (BRDF) of the surface. The BRDF of a surface is the ratio of the exiting power per square meter per steradian (radiance) to the incident power per square meter (irradiance). [1] The BRDF typically varies with angle and may vary with wavelength, but an important special case is a surface whose BRDF is constant. This surface type is referred to as Lambertian, and the magnitude of the BRDF is R/π, where R is the reflectivity of the surface. The portion of scattered light that propagates toward the lens is collected by the entrance pupil of the imaging lens over the field of view.
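
The Lambertian case lends itself to a one-line calculation: the exitant radiance is the reflectivity times the irradiance divided by π. A minimal sketch, with illustrative values:

    import math

    def lambertian_radiance(irradiance, reflectivity):
        """Radiance (W/m^2/sr) leaving a Lambertian surface: L = R * E / pi."""
        return reflectivity * irradiance / math.pi

    E = 1000.0  # incident irradiance, W/m^2 (illustrative, roughly full sunlight)
    R = 0.3     # surface reflectivity (illustrative)
    print(f"radiance = {lambertian_radiance(E, R):.1f} W/m^2/sr")  # ~95.5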

Field of view and imagery

The field of view of a lens is limited by the size of the image plane and the focal length of the lens. The relationship between a location on the image and a location on the object is y = f tan(θ), where y is the radial position on the image plane, f is the focal length of the lens, and θ is the corresponding field angle. If y is the maximum radial extent of the image, then θ is the half field of view of the lens. While the image created by a lens is continuous, it can be modeled as a set of discrete field points, each representing a point on the object. The quality of the image is limited by the aberrations in the lens and the diffraction created by the finite aperture stop.
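
Inverting y = f tan(θ) gives the field angle for a given image height. The sketch below computes the diagonal field of view for a hypothetical 50 mm lens and a full-frame sensor (43.3 mm diagonal); both values are assumptions for illustration.

    import math

    def half_fov_deg(y_max, f):
        """Half field of view in degrees, inverting y = f * tan(theta)."""
        return math.degrees(math.atan(y_max / f))

    f = 50.0            # focal length, mm (illustrative)
    y_max = 43.3 / 2.0  # half the diagonal of a full-frame sensor, mm
    print(f"diagonal field of view = {2 * half_fov_deg(y_max, f):.1f} deg")  # ~46.8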

Pupils and stops

The aperture stop of a lens is a mechanical aperture that limits the light collection for each field point. The entrance pupil is the image of the aperture stop created by the optical elements on the object side of the stop. The light scattered by an object is collected by the entrance pupil and focused onto the image plane via a series of refractive elements. The cone of focused light at the image plane is set by the size of the entrance pupil and the focal length of the lens, and is often characterized by the f-stop or f-number of the lens: f/# = f/D, where D is the diameter of the entrance pupil.
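
A minimal sketch of the f-number relation, with illustrative values:

    f = 50.0  # focal length, mm (illustrative)
    D = 25.0  # entrance-pupil diameter, mm (illustrative)
    print(f"f-number = f/{f / D:.0f}")  # f/2; halving D would give f/4
    # A larger f-number means a narrower cone of focused light at the image plane.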

Pixelation and color vs. monochrome

In typical digital imaging systems, a sensor is placed at the image plane. The light is focused onto the sensor, and the continuous image is pixelated. The light incident on each pixel of the sensor is integrated over the pixel area, and a proportional electronic signal is generated. [2] The angular geometric resolution of a pixel, also called the pixel field of view, is given by atan(p/f), where p is the pitch of the pixel. The sensor may be monochrome or color. In the case of a monochrome sensor, the light incident on each pixel is integrated and the resulting image is a grayscale picture. For color images, a mosaic color filter, such as a Bayer filter, is typically placed over the pixels. The signal from each pixel is then digitized to a bit stream.
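
For example, the pixel field of view follows directly from the pitch and focal length; the pitch and focal length below are illustrative assumptions.

    import math

    def pixel_fov_urad(pitch_um, focal_mm):
        """Pixel field of view, in microradians: atan(p / f)."""
        return math.atan(pitch_um * 1e-6 / (focal_mm * 1e-3)) * 1e6

    # Illustrative values: 3.45 um pixel pitch behind a 50 mm lens.
    print(f"pixel FOV = {pixel_fov_urad(3.45, 50.0):.1f} urad")  # ~69 urad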

Image quality

The quality of an image depends on both geometric and physical factors. Geometrically, a higher density of pixels across an image gives less blocky pixelation and thus better geometric image quality. Lens aberrations also degrade the image. Physically, diffraction at the aperture stop limits the resolvable spatial frequencies as a function of f-number.
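
One common rule of thumb compares the diffraction spot with the pixel pitch: for a circular aperture, the Airy-disk diameter is about 2.44 λ (f/#). A sketch with illustrative values:

    wavelength = 0.55   # green light, um
    f_number = 8.0      # illustrative aperture setting
    pixel_pitch = 3.45  # um (illustrative)
    airy_diameter = 2.44 * wavelength * f_number  # first-null Airy diameter, um
    print(f"Airy disk {airy_diameter:.1f} um vs pixel pitch {pixel_pitch} um")
    # A ~10.7 um spot over 3.45 um pixels: diffraction, not pixelation, limits detail.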

In the frequency domain, the modulation transfer function (MTF) is a measure of the quality of the imaging system. The MTF is a measure of the visibility of a sinusoidal variation in irradiance on the image plane as a function of the frequency of the sinusoid. It includes the effects of diffraction, aberrations, and pixelation. For the lens, the MTF is the autocorrelation of the pupil function, [3] so it accounts for the finite pupil extent and the lens aberrations. The sensor MTF is the Fourier transform of the pixel geometry. For a square pixel, MTF(ξ) = sin(πξp)/(πξp), where p is the pixel width and ξ is the spatial frequency. The MTF of the combined lens and detector is the product of the two component MTFs.
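
A minimal sketch of the combined MTF, assuming a diffraction-limited lens with a circular pupil (for which the autocorrelation of the pupil function has a well-known closed form) and a square pixel; the wavelength, f-number, and pixel pitch are illustrative assumptions.

    import numpy as np

    def lens_mtf(xi, wavelength_mm, f_number):
        """Diffraction-limited incoherent MTF of a circular pupil.
        xi: spatial frequency, cycles/mm."""
        cutoff = 1.0 / (wavelength_mm * f_number)  # incoherent cutoff frequency
        s = np.clip(xi / cutoff, 0.0, 1.0)
        return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s**2))

    def pixel_mtf(xi, pitch_mm):
        """Square-pixel MTF; note np.sinc(x) = sin(pi x)/(pi x)."""
        return np.abs(np.sinc(xi * pitch_mm))

    xi = np.linspace(0.0, 300.0, 4)  # cycles/mm
    system = lens_mtf(xi, 0.55e-3, 4.0) * pixel_mtf(xi, 3.45e-3)
    for nu, m in zip(xi, system):
        print(f"{nu:5.0f} cyc/mm: system MTF = {m:.3f}")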

Perception

Color images can be perceived via two means. In the case of computer vision, the light incident on the sensor comprises the image. In the case of human visual perception, the eye has a color-dependent response to light, so this response must be accounted for. This is an important consideration when converting a color image to grayscale.
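
A standard way to account for this when converting to grayscale is a luminance-weighted sum of the color channels. The sketch below uses the ITU-R BT.709 weights, which emphasize green, where the eye is most sensitive; the pixel value is illustrative.

    import numpy as np

    def rgb_to_grayscale(rgb):
        """Luminance-weighted grayscale, ITU-R BT.709 weights (linear RGB in)."""
        return rgb @ np.array([0.2126, 0.7152, 0.0722])

    pixel = np.array([0.8, 0.4, 0.1])  # an orange-ish pixel (illustrative)
    print(f"gray level = {rgb_to_grayscale(pixel):.3f}")  # ~0.463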

Image formation in eye

The principal difference between the lens of the eye and an ordinary optical lens is that the former is flexible. The radius of curvature of the anterior surface of the lens is greater than the radius of its posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body. To focus on distant objects, the controlling muscles cause the lens to be relatively flattened; conversely, these muscles allow the lens to become thicker in order to focus on objects near the eye.

The distance between the center of the lens and the retina (focal length) varies from approximately 17 mm to about 14 mm, as the refractive power of the lens increases from its minimum to its maximum. When the eye focuses on an object farther away than about 3 m, the lens exhibits its lowest refractive power. When the eye focuses on a close object, the lens is most strongly refractive.
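
Treating the eye as a simple thin lens in air (a common simplification; the interior of the real eye has a refractive index of about 1.34), the quoted focal lengths correspond to these refractive powers:

    def power_diopters(focal_length_mm):
        """Refractive power of a thin lens in air: P = 1/f, with f in meters."""
        return 1000.0 / focal_length_mm

    print(f"relaxed eye (f = 17 mm): {power_diopters(17.0):.0f} D")       # ~59 D
    print(f"accommodated eye (f = 14 mm): {power_diopters(14.0):.0f} D")  # ~71 D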

References

  1. McCluney, Ross (1994). Introduction to Radiometry and Photometry. Boston: Artech House. ISBN 0890066787. OCLC 30031974.
  2. Umbaugh, Scott E. (2017). Digital Image Processing and Analysis with MATLAB and CVIPtools (3rd ed.). ISBN 9781498766029. OCLC 1016899766.
  3. Goodman, Joseph W. (1996). Introduction to Fourier Optics (2nd ed.). New York: McGraw-Hill. ISBN 0070242542. OCLC 35242460.