Glossary of machine vision


The following are definitions of terms commonly used in the field of machine vision.


General related fields

0-9

3D rendering example
3D laser scanner mounted on a tripod

A

B

"Wikipedia" encoded in Code 128-B Wikipedia-barcode-128B.png
"Wikipedia" encoded in Code 128-B

C

Relation between computer vision and various other fields
The CIE 1931 color space chromaticity diagram. The outer curved boundary is the spectral (or monochromatic) locus, with wavelengths shown in nanometers. Note that the colors depicted depend on the color space of the device on which you are viewing the image, and no device has a gamut large enough to present an accurate representation of the chromaticity at every position.

D

"Wikipedia, the free encyclopedia" encoded in the DataMatrix 2D barcode Datamatrix.svg
"Wikipedia, the free encyclopedia" encoded in the DataMatrix 2D barcode

E

F

G

A typical CRT gamut. The grayed-out horseshoe shape is the entire range of possible chromaticities. The colored triangle is the gamut available to a typical computer monitor; it does not cover the entire space.

H

A photograph with its luminosity histogram beneath it
HSV color space as a color wheel

I

Image of a dog taken in mid-infrared ("thermal") light (false color)

J

K

L

M

N

Simplified view of an artificial neural network

O

P

Prime lens with a maximum aperture of f/2

Q

In a resonant cavity, the quality factor is

$$Q = \frac{2\pi f_0 E}{P},$$

where $f_0$ is the resonant frequency, $E$ is the stored energy in the cavity, and $P$ is the power dissipated. The optical Q is equal to the ratio of the resonant frequency to the bandwidth of the cavity resonance. The average lifetime of a resonant photon in the cavity is proportional to the cavity's Q. If the Q factor of a laser's cavity is abruptly changed from a low value to a high one, the laser will emit a pulse of light that is much more intense than the laser's normal continuous output. This technique is known as Q-switching.
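As a quick numerical illustration of these two relations, here is a minimal Python sketch; the frequency, energy, and power values are invented for the example, not measurements of any real cavity.

    import math

    # Toy numbers chosen only for illustration.
    f0 = 2.0e14              # resonant frequency in hertz (optical range)
    energy_stored = 1e-9     # energy stored in the cavity, in joules
    power_dissipated = 1e-4  # power dissipated, in watts

    # Q from stored energy and dissipated power: Q = 2*pi*f0*E/P
    q = 2 * math.pi * f0 * energy_stored / power_dissipated

    # Equivalently, Q is the ratio of the resonant frequency to the resonance bandwidth.
    bandwidth = f0 / q
    print(f"Q = {q:.3e}")
    print(f"resonance bandwidth = {bandwidth:.3e} Hz")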

R

A representation of RGB additive color mixing

S

T

U

V

W

Wide-angle lens, 17-40mm f/4 L

X

An X-ray picture (radiograph), taken by Wilhelm Röntgen, of his wife's hand

Y

Z

A 70-200mm zoom lens
Zoom principle

See also

Related Research Articles

<span class="mw-page-title-main">Pixel</span> Physical point in a raster image

In digital imaging, a pixel, pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest element that can be manipulated through software.
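As a small illustration of pixel addressing, here is a minimal Python/NumPy sketch; the image is a synthetic array, not data from any real sensor.

    import numpy as np

    # A hypothetical 4x6 grayscale raster image: one 8-bit intensity value per pixel.
    image = np.zeros((4, 6), dtype=np.uint8)   # rows (height) x columns (width)

    image[2, 3] = 255      # address a single pixel at row 2, column 3 and set it to white
    print(image[2, 3])     # read that pixel back -> 255
    print(image.shape)     # image size in pixels -> (4, 6)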

<span class="mw-page-title-main">Camera</span> Optical device for recording images

A camera is an instrument used to capture and store images and videos, either digitally via an electronic image sensor, or chemically via a light-sensitive material such as photographic film. As a pivotal technology in the fields of photography and videography, cameras have played a significant role in the progression of visual arts, media, entertainment, surveillance, and scientific research. The invention of the camera dates back to the 19th century and has since evolved with advancements in technology, leading to a vast array of types and models in the 21st century.

<span class="mw-page-title-main">Digital camera</span> Camera that captures photographs or video in digital format

A digital camera, also called a digicam, is a camera that captures photographs in digital memory. Most cameras produced today are digital, largely replacing those that capture images on photographic film or film stock. Digital cameras are now widely incorporated into mobile devices like smartphones, with capabilities and features matching or exceeding those of dedicated cameras. High-end, high-definition dedicated cameras are still commonly used by professionals and those who desire to take higher-quality photographs.

<span class="mw-page-title-main">Astrophotography</span> Imaging of astronomical objects

Astrophotography, also known as astronomical imaging, is the photography or imaging of astronomical objects, celestial events, or areas of the night sky. The first photograph of an astronomical object was taken in 1840, but it was not until the late 19th century that advances in technology allowed for detailed stellar photography. Besides being able to record the details of extended objects such as the Moon, Sun, and planets, modern astrophotography has the ability to image objects outside of the visible spectrum of the human eye such as dim stars, nebulae, and galaxies. This is accomplished through long exposures, since both film and digital cameras can accumulate and sum photons over long periods of time, or by using specialized optical filters that limit the photons to a certain wavelength.
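One way to see the effect of accumulating photons over time is frame stacking. The Python/NumPy sketch below uses entirely synthetic data: it averages many noisy short exposures of the same field so that a source far below the single-frame noise level becomes visible.

    import numpy as np

    rng = np.random.default_rng(0)
    height, width, n_frames = 64, 64, 100

    # A synthetic field containing one dim "star" well below the per-frame noise level.
    faint_signal = np.zeros((height, width))
    faint_signal[32, 32] = 0.5

    # Each short exposure is dominated by noise.
    frames = [faint_signal + rng.normal(0.0, 1.0, (height, width)) for _ in range(n_frames)]

    # Stacking (averaging) the frames reduces the noise roughly as 1/sqrt(N).
    stacked = np.mean(frames, axis=0)
    print("star pixel in a single frame:", frames[0][32, 32])
    print("star pixel after stacking:   ", stacked[32, 32])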

<span class="mw-page-title-main">Camera lens</span> Optical lens or assembly of lenses used with a camera to create images

A camera lens is an optical lens or assembly of lenses used in conjunction with a camera body and mechanism to make images of objects either on photographic film or on other media capable of storing an image chemically or electronically.

<span class="mw-page-title-main">Camcorder</span> Video camera with built-in video recorder

A camcorder is a self-contained portable electronic device whose primary function is video capture and recording. It is typically equipped with an articulating screen mounted on the left side, a belt to facilitate holding on the right side, a hot-swappable battery facing towards the user, hot-swappable recording media, and an internally contained quiet optical zoom lens.

<span class="mw-page-title-main">3D display</span> Display device

A 3D display is a display device capable of conveying depth to the viewer. Many 3D displays are stereoscopic displays, which produce a basic 3D effect by means of stereopsis, but can cause eye strain and visual fatigue. Newer 3D displays such as holographic and light field displays produce a more realistic 3D effect by combining stereopsis and accurate focal length for the displayed content, and cause less visual fatigue than classical stereoscopic displays.

<span class="mw-page-title-main">Computational photography</span> Set of digital image capture and processing techniques

Computational photography refers to digital image capture and processing techniques that use digital computation instead of optical processes. Computational photography can improve the capabilities of a camera, or introduce features that were not possible at all with film-based photography, or reduce the cost or size of camera elements. Examples of computational photography include in-camera computation of digital panoramas, high-dynamic-range images, and light field cameras. Light field cameras use novel optical elements to capture three dimensional scene information which can then be used to produce 3D images, enhanced depth-of-field, and selective de-focusing. Enhanced depth-of-field reduces the need for mechanical focusing systems. All of these features use computational imaging techniques.
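As a toy illustration of one of these techniques, the following Python/NumPy sketch merges bracketed exposures into a rough high-dynamic-range estimate. The scene, exposure times, and clipping thresholds are invented; real pipelines also handle frame alignment, camera response curves, and tone mapping.

    import numpy as np

    rng = np.random.default_rng(1)
    radiance = rng.uniform(0.01, 50.0, (32, 32))     # made-up "true" scene radiance
    exposure_times = [0.01, 0.1, 1.0]                # bracketed exposures, in seconds

    # Each simulated shot saturates (clips to the 0..1 sensor range) at a different brightness.
    shots = [np.clip(radiance * t, 0.0, 1.0) for t in exposure_times]

    # Merge: average the well-exposed pixels of each shot, scaled back by its exposure time.
    weights = [((s > 0.05) & (s < 0.95)).astype(float) for s in shots]
    merged = sum(w * s / t for w, s, t in zip(weights, shots, exposure_times))
    counts = sum(weights)
    hdr = merged / np.maximum(counts, 1.0)           # pixels with no valid sample stay at 0
    print("recovered dynamic range:", hdr.max() / hdr[hdr > 0].min())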

<span class="mw-page-title-main">Digital single-lens reflex camera</span> Digital cameras combining the parts of a single-lens reflex camera and a digital camera back

A digital single-lens reflex camera is a digital camera that combines the optics and mechanisms of a single-lens reflex camera with a solid-state image sensor and digitally records the images from the sensor.

<span class="mw-page-title-main">Vignetting</span> Reduction of an images brightness or saturation toward the periphery compared to the image center

In photography and optics, vignetting is a reduction of an image's brightness or saturation toward the periphery compared to the image center. The word vignette, from the same root as vine, originally referred to a decorative border in a book. Later, the word came to be used for a photographic portrait that is clear at the center and fades off toward the edges. A similar effect is visible in photographs of projected images or videos off a projection screen, resulting in a so-called "hotspot" effect.
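A minimal Python/NumPy sketch of the effect follows, using an idealized cos⁴ radial falloff (a common approximation, not a property of any particular lens), together with the usual flat-field style correction of dividing by a known falloff map.

    import numpy as np

    height, width = 100, 150
    y, x = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0

    # Normalized distance from the image center: 0 at the center, 1 in the corners.
    r = np.hypot(y - cy, x - cx) / np.hypot(cy, cx)

    # Idealized cos^4 falloff: brightness drops toward the periphery.
    falloff = np.cos(np.arctan(0.8 * r)) ** 4

    flat_scene = np.full((height, width), 200.0)   # a uniformly lit, featureless scene
    vignetted = flat_scene * falloff               # the corners come out noticeably darker
    corrected = vignetted / falloff                # flat-field correction with the known map

    print("center:", vignetted[50, 75],
          "corner:", round(vignetted[0, 0], 1),
          "corrected corner:", corrected[0, 0])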

<span class="mw-page-title-main">Image noise</span> Visible interference in an image

Image noise is random variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the image sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that obscures the desired information. Typically the term “image noise” is used to refer to noise in 2D images, not 3D images.
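The Python/NumPy sketch below simulates two of the noise sources mentioned above, Poisson shot noise and additive sensor read noise, on a synthetic image; the photon count and noise level are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(42)
    clean = np.full((64, 64), 50.0)      # synthetic image: mean of 50 photons per pixel

    # Shot noise: photon arrivals follow Poisson statistics.
    shot_noisy = rng.poisson(clean).astype(float)

    # Sensor/readout noise: modeled here as additive Gaussian noise.
    noisy = shot_noisy + rng.normal(0.0, 5.0, clean.shape)

    print("shot-noise std dev:", round(shot_noisy.std(), 2))   # roughly sqrt(50) ~ 7.1
    print("total std dev:     ", round(noisy.std(), 2))        # roughly sqrt(50 + 25) ~ 8.7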

<span class="mw-page-title-main">Light field camera</span> Type of camera that can also capture the direction of travel of light rays

A light field camera, also known as a plenoptic camera, is a camera that captures information about the light field emanating from a scene; that is, the intensity of light in a scene, and also the precise direction that the light rays are traveling in space. This contrasts with conventional cameras, which record only light intensity at various wavelengths.

<span class="mw-page-title-main">Telecentric lens</span> Optical lens

A telecentric lens is a special optical lens that has its entrance or exit pupil, or both, at infinity. The size of images produced by a telecentric lens is insensitive to either the distance between an object being imaged and the lens, or the distance between the image plane and the lens, or both, and such an optical property is called telecentricity. Telecentric lenses are used for precision optical two-dimensional measurements, reproduction, and other applications that are sensitive to the image magnification or the angle of incidence of light.

<span class="mw-page-title-main">Image sensor</span> Device that converts images into electronic signals

An image sensor or imager is a sensor that detects and conveys information used to form an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, camera phones, optical mouse devices, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar, and others. As technology changes, electronic and digital imaging tends to replace chemical and analog imaging.

The following outline is provided as an overview of and topical guide to computer vision.

Range imaging is the name for a collection of techniques that are used to produce a 2D image showing the distance to points in a scene from a specific point, normally associated with some type of sensor device.
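One common range-imaging technique (among several) is stereo triangulation. The Python/NumPy sketch below converts a tiny synthetic disparity map to depth, assuming a calibrated camera pair; the focal length and baseline are made-up values.

    import numpy as np

    focal_px = 700.0      # assumed focal length, in pixels
    baseline_m = 0.12     # assumed distance between the two cameras, in meters

    # A tiny synthetic disparity map (pixels); larger disparity means a closer point.
    disparity = np.array([[20.0, 10.0],
                          [ 5.0,  2.0]])

    # Depth from stereo triangulation: Z = f * B / d, giving a range image in meters.
    depth_m = focal_px * baseline_m / disparity
    print(depth_m)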

Document cameras, also known as visual presenters, visualizers, digital overheads, or docucams, are real-time image capture devices used to display an object to a large audience, such as in a classroom. They can also serve as replacements for image scanners. Similar to opaque projectors, document cameras can magnify and project the images of actual, three-dimensional objects, as well as transparencies. In essence, they are high-resolution web cams, mounted on arms, allowing them to be positioned over a page. The camera connects to a projector or similar video streaming system, enabling a teacher, lecturer, or presenter to write on a sheet of paper or display a two- or three-dimensional object while the audience watches. Different types of document cameras and visualizers offer flexibility in object placement. Larger objects, for instance, can be positioned in front of the camera, which can then be rotated as needed. Alternatively, a ceiling-mounted document camera can be used to create a larger working area.

<span class="mw-page-title-main">Digital microscope</span>

A digital microscope is a variation of a traditional optical microscope that uses optics and a digital camera to output an image to a monitor, sometimes by means of software running on a computer. A digital microscope often has its own in-built LED light source, and differs from an optical microscope in that there is no provision to observe the sample directly through an eyepiece. Since the image is focused on the digital circuit, the entire system is designed for the monitor image. The optics for the human eye are omitted.

The study of image formation encompasses the radiometric and geometric processes by which 2D images of 3D objects are formed. In the case of digital images, the image formation process also includes analog to digital conversion and sampling.
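The geometric part of this process can be sketched with an ideal pinhole-camera projection. In the Python/NumPy example below, the intrinsic parameters and 3D points are invented for illustration.

    import numpy as np

    fx = fy = 800.0            # assumed focal lengths, in pixels
    cx, cy = 320.0, 240.0      # principal point of a nominal 640x480 image

    # 3D points in camera coordinates (X, Y, Z), in meters.
    points_3d = np.array([[0.1, -0.05, 2.0],
                          [0.4,  0.20, 5.0]])

    X, Y, Z = points_3d.T
    u = fx * X / Z + cx        # perspective projection (geometric image formation)
    v = fy * Y / Z + cy
    pixels = np.round(np.stack([u, v], axis=1)).astype(int)   # sampling to the pixel grid
    print(pixels)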

This glossary defines terms that are used in the document "Defining Video Quality Requirements: A Guide for Public Safety", developed by the Video Quality in Public Safety (VQIPS) Working Group. It contains terminology and explanations of concepts relevant to the video industry. The purpose of the glossary is to inform the reader of commonly used vocabulary terms in the video domain. This glossary was compiled from various industry sources.
