Glossary of machine vision

The following are common definitions related to the machine vision field.

General related fields

0-9

[Image: 3D rendering example]
[Image: 3D laser scanner mounted on a tripod]

A

B

"Wikipedia" encoded in Code 128-B Wikipedia-barcode-128B.png
"Wikipedia" encoded in Code 128-B

C

[Image: Relation between computer vision and various other fields]
[Image: The CIE 1931 color space chromaticity diagram. The outer curved boundary is the spectral (or monochromatic) locus, with wavelengths shown in nanometers. Note that the colors depicted depend on the color space of the device on which the image is viewed, and no device has a gamut large enough to present an accurate representation of the chromaticity at every position.]

D

"Wikipedia, the free encyclopedia" encoded in the DataMatrix 2D barcode Datamatrix.svg
"Wikipedia, the free encyclopedia" encoded in the DataMatrix 2D barcode

E

F

G

[Image: A typical CRT gamut. The grayed-out horseshoe shape is the entire range of possible chromaticities. The colored triangle is the gamut available to a typical computer monitor; it does not cover the entire space.]

H

[Image: A photograph with its luminosity histogram beneath it]
[Image: HSV color space as a color wheel]

I

[Image: A dog photographed in mid-infrared ("thermal") light, shown in false color]

J

K

L

M

N

[Image: Simplified view of an artificial neural network]

O

P

[Image: Prime lens with a maximum aperture of f/2]

Q

Q = ω_r E / P,

where ω_r is the resonant frequency, E is the stored energy in the cavity, and P is the power dissipated. The optical Q is equal to the ratio of the resonant frequency to the bandwidth of the cavity resonance. The average lifetime of a resonant photon in the cavity is proportional to the cavity's Q. If the Q factor of a laser's cavity is abruptly changed from a low value to a high one, the laser will emit a pulse of light that is much more intense than the laser's normal continuous output. This technique is known as Q-switching.
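
As a rough numerical illustration of these relations (not taken from the glossary), the sketch below computes Q from assumed cavity values and shows how the photon lifetime scales with Q; the frequency, energy, and power figures are made up.

```python
import math

# Q = omega_r * E / P for a resonant cavity, with omega_r taken here as the
# angular resonant frequency (2*pi times the frequency in hertz).
def q_factor(omega_r: float, stored_energy: float, power_dissipated: float) -> float:
    return omega_r * stored_energy / power_dissipated

def photon_lifetime(q: float, omega_r: float) -> float:
    # The average lifetime of a resonant photon scales as Q / omega_r.
    return q / omega_r

omega = 2 * math.pi * 2.8e14           # roughly 1064 nm light, illustrative value
print(q_factor(omega, 1e-9, 1e-3))     # Q for 1 nJ stored, 1 mW dissipated
print(photon_lifetime(1e3, omega))     # short lifetime at low ("spoiled") Q
print(photon_lifetime(1e8, omega))     # much longer lifetime once Q is restored
```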

R

[Image: A representation of RGB additive color mixing]

S

T

U

V

W

[Image: Wide-angle lens, 17-40mm f/4 L]

X

[Image: An X-ray picture (radiograph), taken by Wilhelm Röntgen, of his wife's hand]

Y

Z

[Image: A 70-200mm zoom lens]
[Image: Zoom principle]

Related Research Articles

<span class="mw-page-title-main">Pixel</span> Physical point in a raster image

In digital imaging, a pixel, pel, or picture element is the smallest addressable element in a raster image, or the smallest point in an all points addressable display device. In most digital display devices, pixels are the smallest element that can be manipulated through software.
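
As a small illustration of "smallest addressable element", the sketch below builds a grayscale raster as a NumPy array and reads and writes a single pixel; the array shape and (row, column) coordinate order are assumptions for the example, not part of the definition.

```python
import numpy as np

# A 4x6 grayscale raster image: each entry is one pixel (0 = black, 255 = white).
image = np.zeros((4, 6), dtype=np.uint8)

image[2, 3] = 255            # address a single pixel by (row, column) and set it
print(image[2, 3])           # read the same pixel back -> 255
print(image.shape)           # (rows, columns) of the raster
```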

<span class="mw-page-title-main">RGB color model</span> Additive color model based on combining red, green, and blue

The RGB color model is an additive color model in which the red, green and blue primary colors of light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
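
A minimal sketch of additive mixing, assuming 8-bit channels and simple clipped addition of two light sources; the helper name is ours.

```python
# Additive mixing: each channel value is the amount of that primary (0-255).
red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

def add_rgb(a, b):
    # Add two light sources channel-wise, clipping at the maximum intensity.
    return tuple(min(x + y, 255) for x, y in zip(a, b))

print(add_rgb(red, green))                  # (255, 255, 0) -> yellow
print(add_rgb(add_rgb(red, green), blue))   # (255, 255, 255) -> white
```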

<span class="mw-page-title-main">Camera</span> Optical device for recording images

A camera is an optical instrument that can capture an image. Most cameras can capture 2D images, with some more advanced models being able to capture 3D images. At a basic level, most cameras consist of sealed boxes, with a small hole that allows light to pass through in order to capture an image on a light-sensitive surface. Cameras have various mechanisms to control how the light falls onto the light-sensitive surface. Lenses focus the light entering the camera, and the aperture can be narrowed or widened. A shutter mechanism determines the amount of time the photosensitive surface is exposed to the light.
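
As a rough sketch of how the aperture and shutter together control the light reaching the photosensitive surface, the snippet below uses the common proportionality exposure ∝ shutter time / f-number²; it ignores ISO, lens transmission, and scene brightness, and the numbers are illustrative.

```python
# Relative exposure ~ shutter time / f-number^2: widening the aperture
# (smaller f-number) or lengthening the shutter lets more light through.
def relative_exposure(shutter_s: float, f_number: float) -> float:
    return shutter_s / (f_number ** 2)

base  = relative_exposure(1 / 125, 8.0)
wider = relative_exposure(1 / 125, 5.6)
print(wider / base)   # ~2: opening up by one stop roughly doubles the light
```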

<span class="mw-page-title-main">Digital camera</span> Camera that captures photographs or video in digital format

A digital camera is a camera that captures photographs in digital memory. Most cameras produced today are digital, largely replacing those that capture images on photographic film. Digital cameras are now widely incorporated into mobile devices like smartphones with the same or more capabilities and features of dedicated cameras. High-end, high-definition dedicated cameras are still commonly used by professionals and those who desire to take higher-quality photographs.

<span class="mw-page-title-main">Astrophotography</span> Imaging of astronomical objects

Astrophotography, also known as astronomical imaging, is the photography or imaging of astronomical objects, celestial events, or areas of the night sky. The first photograph of an astronomical object was taken in 1840, but it was not until the late 19th century that advances in technology allowed for detailed stellar photography. Besides being able to record the details of extended objects such as the Moon, Sun, and planets, modern astrophotography has the ability to image objects invisible to the human eye such as dim stars, nebulae, and galaxies. This is done by long time exposure since both film and digital cameras can accumulate and sum photons over these long periods of time.
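
The "accumulate and sum photons" idea can be mimicked digitally by stacking many short exposures. The sketch below averages synthetic noisy frames so a faint constant signal rises above the per-frame noise; all values are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
faint_star = 0.2                            # signal well below the per-frame noise

# 100 short "exposures": constant signal plus zero-mean read noise
frames = faint_star + rng.normal(0.0, 1.0, size=(100, 64, 64))

single = frames[0]
stacked = frames.mean(axis=0)               # averaging N frames cuts noise by ~sqrt(N)

print(single.std(), stacked.std())          # noise drops from ~1.0 to ~0.1
print(stacked.mean())                       # the faint signal (~0.2) becomes measurable
```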

<span class="mw-page-title-main">Camera lens</span> Optical lens or assembly of lenses used with a camera to create images

A camera lens is an optical lens or assembly of lenses used in conjunction with a camera body and mechanism to make images of objects either on photographic film or on other media capable of storing an image chemically or electronically.
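
For an idealized thin lens, image formation can be summarized by 1/f = 1/d_object + 1/d_image. The helper below is a generic sketch of that relation, not a property of any particular lens; the focal length and subject distance are illustrative.

```python
# Thin-lens relation: 1/f = 1/d_object + 1/d_image (all distances in metres).
def image_distance(focal_length_m: float, object_distance_m: float) -> float:
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

# A 50 mm lens focused on a subject 2 m away forms the image slightly
# farther from the lens than the infinity-focus position:
print(image_distance(0.050, 2.0))   # ~0.0513 m behind the lens
```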

<span class="mw-page-title-main">Cinematography</span> Art of motion picture photography

Cinematography is the art of motion picture photography.

<span class="mw-page-title-main">Autofocus</span> Optical system to focus on an automatically or manually selected point or area

An autofocus optical system uses a sensor, a control system and a motor to focus on an automatically or manually selected point or area. An electronic rangefinder has a display instead of the motor; the adjustment of the optical system has to be done manually until indication. Autofocus methods are distinguished as active, passive or hybrid types.
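
A minimal sketch of the passive, contrast-detection idea: step the focus position, score image sharpness, and keep the position with the highest score. The sharpness metric, the toy blur model, and the function names are simplified stand-ins, not how any real camera implements it.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    # Variance of horizontal differences: higher when edges are crisp.
    return float(np.var(np.diff(img.astype(float), axis=1)))

def contrast_detect_af(capture_at, focus_positions):
    # capture_at(pos) returns the image captured with the lens at `pos`.
    scores = {pos: sharpness(capture_at(pos)) for pos in focus_positions}
    return max(scores, key=scores.get)

# Toy "camera": blur grows with distance from the true focus at 0.37.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))

def capture_at(pos):
    blur = int(1 + 20 * abs(pos - 0.37))
    kernel = np.ones(blur) / blur
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, scene)

best = contrast_detect_af(capture_at, [i / 20 for i in range(21)])
print(best)   # close to 0.37
```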

<span class="mw-page-title-main">Digital single-lens reflex camera</span> Digital cameras combining the parts of a single-lens reflex camera and a digital camera back

A digital single-lens reflex camera is a digital camera that combines the optics and the mechanisms of a single-lens reflex camera with a digital imaging sensor.

<span class="mw-page-title-main">Vignetting</span> Reduction of an images brightness or saturation toward the periphery compared to the image center

In photography and optics, vignetting is a reduction of an image's brightness or saturation toward the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait that is clear at the center and fades off toward the edges. A similar effect is visible in photographs of projected images or videos off a projection screen, resulting in a so-called "hotspot" effect.
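
A toy model of vignetting: attenuate pixel values with a radial falloff from the image centre, then divide by the same falloff as a flat-field-style correction. The quadratic falloff profile and its strength are assumptions chosen only for illustration.

```python
import numpy as np

def radial_falloff(h: int, w: int, strength: float = 0.5) -> np.ndarray:
    # 1.0 at the centre, smoothly decreasing toward the corners.
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)   # 0 at centre, 1 at corner
    return 1.0 - strength * r ** 2

flat = np.full((100, 150), 200.0)            # uniformly lit test image
vignetted = flat * radial_falloff(100, 150)
corrected = vignetted / radial_falloff(100, 150)

print(vignetted[50, 75], vignetted[0, 0])    # bright centre, darker corner
print(corrected[0, 0])                       # back to ~200 after correction
```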

Optical resolution describes the ability of an imaging system to resolve detail in the object that is being imaged.
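
One common way to put a number on resolving power is the Rayleigh criterion for a circular aperture, θ ≈ 1.22 λ / D. The sketch below is generic, and the wavelength and aperture values are illustrative.

```python
# Rayleigh criterion: smallest angular separation two points can have
# and still be resolved by an aperture of diameter D at wavelength lambda.
def rayleigh_angle_rad(wavelength_m: float, aperture_diameter_m: float) -> float:
    return 1.22 * wavelength_m / aperture_diameter_m

theta = rayleigh_angle_rad(550e-9, 0.025)   # green light, 25 mm aperture
print(theta)                                # ~2.7e-5 rad
print(theta * 1.0)                          # resolvable spacing at 1 m: ~27 micrometres
```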

<span class="mw-page-title-main">Light field camera</span> Type of camera that can also capture the direction of travel of light rays

A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths.
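
One common way to represent such data is a 4D array L[u, v, s, t], where (u, v) index the ray direction (sub-aperture) and (s, t) the spatial position. The sketch below only illustrates that indexing; the array contents are random placeholders and the dimensions are arbitrary.

```python
import numpy as np

# Synthetic 4D light field: 9x9 directions (u, v), 64x64 spatial samples (s, t).
rng = np.random.default_rng(0)
L = rng.random((9, 9, 64, 64))

sub_aperture_view = L[4, 4]                # one viewing direction -> an ordinary 2D image
conventional_image = L.sum(axis=(0, 1))    # integrating over direction discards ray angles

print(sub_aperture_view.shape)             # (64, 64)
print(conventional_image.shape)            # (64, 64)
```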

<span class="mw-page-title-main">Telecentric lens</span> Optical lens

A telecentric lens is a special optical lens that has its entrance or exit pupil, or both, at infinity. Telecentric lenses are often used for precision optical two-dimensional measurements or reproduction and other applications that are sensitive to the image magnification or the angle of incidence of light.
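
The practical consequence for measurement is that an object-space telecentric lens keeps magnification constant as the object distance changes, unlike an ordinary perspective projection. The toy comparison below assumes a simple pinhole model for the non-telecentric case and an arbitrary fixed magnification for the telecentric one.

```python
# Apparent image size of a 10 mm object at different working distances.
object_size_mm = 10.0
focal_length_mm = 50.0

def pinhole_image_size(distance_mm: float) -> float:
    # Ordinary perspective projection: size shrinks as the object moves away.
    return object_size_mm * focal_length_mm / distance_mm

def telecentric_image_size(distance_mm: float, magnification: float = 0.5) -> float:
    # Object-space telecentric: constant magnification regardless of distance.
    return object_size_mm * magnification

for d in (200.0, 210.0):
    print(pinhole_image_size(d), telecentric_image_size(d))
# Perspective size changes (2.5 -> ~2.38 mm); telecentric size stays 5.0 mm.
```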

<span class="mw-page-title-main">Image sensor</span> Device that converts an optical image into an electronic signal

An image sensor or imager is a sensor that detects and conveys information used to make an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.
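
A rough sketch of the light-in, signal-out chain: photons produce electrons (scaled by quantum efficiency), and the resulting charge is digitized by an ADC. The quantum efficiency, gain, and bit depth below are made-up illustrative numbers, and the model omits dark current and read noise.

```python
import numpy as np

def sensor_readout(photons: np.ndarray,
                   quantum_efficiency: float = 0.6,
                   gain_dn_per_electron: float = 0.25,
                   bit_depth: int = 12) -> np.ndarray:
    electrons = photons * quantum_efficiency               # photoelectric conversion
    digital = np.round(electrons * gain_dn_per_electron)   # analog-to-digital step
    return np.clip(digital, 0, 2 ** bit_depth - 1).astype(np.uint16)

rng = np.random.default_rng(0)
photon_image = rng.poisson(lam=5000, size=(4, 4))          # incident light with shot noise
print(sensor_readout(photon_image))                        # digital numbers, 0..4095
```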

<span class="mw-page-title-main">Optical transfer function</span> Function that specifies how different spatial frequencies are captured by an optical system

The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are captured or transmitted. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations.
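
The MTF can be obtained as the magnitude of the Fourier transform of the system's point spread function, normalized to 1 at zero spatial frequency. The Gaussian PSF below is only a stand-in for a real measured PSF.

```python
import numpy as np

# Stand-in PSF: an isotropic Gaussian blur kernel on a 64x64 grid.
n = 64
y, x = np.mgrid[0:n, 0:n] - n // 2
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()

otf = np.fft.fft2(psf)      # optical transfer function (complex-valued)
mtf = np.abs(otf)           # modulation transfer function: magnitude only
mtf /= mtf[0, 0]            # normalize so MTF = 1 at zero spatial frequency

print(mtf[0, 0])            # 1.0
print(mtf[0, 8])            # contrast retained at a higher spatial frequency (< 1)
```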

The following outline is provided as an overview of and topical guide to computer vision.

The following outline is provided as an overview of and topical guide to photography.

Document cameras, also known as visual presenters, visualizers, digital overheads, or docucams, are real-time image capture devices for displaying an object to a large audience. Like an opaque projector, a document camera can magnify and project the images of actual, three-dimensional objects, as well as transparencies. They are, in essence, high-resolution webcams mounted on arms so as to facilitate their placement over a page. This allows a teacher, lecturer, or presenter to write on a sheet of paper or to display a two- or three-dimensional object while the audience watches. In principle, any object can be displayed by a document camera: most objects are simply placed under the camera, which produces a live picture on a projector or monitor. Different types of document camera/visualizer allow great flexibility in the placement of objects; larger objects, for example, can simply be placed in front of the camera and the camera rotated as necessary, or a ceiling-mounted document camera can be used to allow a larger working area.

The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog-to-digital conversion and sampling.
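
The geometric part is often summarized by the pinhole model, x = f X/Z and y = f Y/Z, followed by sampling onto a discrete pixel grid. The sketch below projects one 3D point under that model; the focal length, pixel pitch, and principal point are illustrative values.

```python
# Pinhole projection of a 3D point (X, Y, Z) in metres onto the image plane,
# followed by sampling onto a discrete pixel grid.
focal_length = 0.035        # 35 mm lens
pixel_pitch = 5e-6          # 5 micrometre pixels
cx, cy = 320, 240           # principal point in pixels (image centre)

def project(X: float, Y: float, Z: float) -> tuple[int, int]:
    x = focal_length * X / Z            # image-plane coordinates (metres)
    y = focal_length * Y / Z
    u = round(x / pixel_pitch) + cx     # sampling: continuous position -> pixel index
    v = round(y / pixel_pitch) + cy
    return u, v

print(project(0.10, 0.05, 2.0))         # point 2 m away -> pixel (670, 415)
```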

This glossary defines terms that are used in the document "Defining Video Quality Requirements: A Guide for Public Safety", developed by the Video Quality in Public Safety (VQIPS) Working Group. It contains terminology and explanations of concepts relevant to the video industry. The purpose of the glossary is to inform the reader of commonly used vocabulary terms in the video domain. This glossary was compiled from various industry sources.
