Flat-field correction (FFC) is a digital imaging technique that compensates for pixel-to-pixel sensitivity variations in the image detector and for distortions in the optical path. It is a standard calibration procedure in everything from personal digital cameras to large telescopes.
Flat fielding refers to the process of compensating for different gains and dark currents in a detector. Once a detector has been appropriately flat-fielded, a uniform signal will create a uniform output (hence flat-field). This then means any further signal is due to the phenomenon being detected and not a systematic error.
A flat-field image is acquired by imaging a uniformly illuminated screen, producing an image of uniform color and brightness across the frame. For a handheld camera, the screen could be a piece of paper at arm's length; a telescope will frequently image a clear patch of sky at twilight, when the illumination is uniform and few, if any, stars are visible.[1] Once the images are acquired, processing can begin.
A flat-field consists of two numbers for each pixel: the pixel's gain and its dark current (or dark frame). The pixel's gain describes how the signal given by the detector varies with the amount of incident light (or equivalent). The gain is almost always linear, so it is given simply as the ratio of output to input signal. The dark current is the amount of signal the detector gives out when there is no incident light (hence dark frame). In many detectors the dark current is also a function of time; in astronomical telescopes, for example, it is common to take a dark frame with the same exposure time as the planned light exposure. The gain and dark frame of an optical system can also be established by using a series of neutral-density filters to give input/output signal pairs and applying a least-squares fit to obtain values for the dark current and gain. The correction is then applied as:

C = \frac{(R - D)\, m}{F - D}

where:
C = corrected image
R = raw image
F = flat-field image
D = dark-field or dark-frame image
m = image-averaged value of (F − D)
In this equation, capital letters are 2D matrices, and lowercase letters are scalars. All matrix operations are performed element-by-element.
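As a minimal NumPy sketch of this element-by-element correction (the function name and the synthetic 4×4 detector values are my own, not from the source):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Conventional flat-field correction: C = (R - D) * m / (F - D)."""
    raw, flat, dark = (a.astype(np.float64) for a in (raw, flat, dark))
    gain = flat - dark                 # per-pixel sensitivity map (F - D)
    m = gain.mean()                    # image-averaged value of (F - D)
    gain[gain == 0] = 1.0              # guard dead pixels against divide-by-zero
    return (raw - dark) * m / gain

# Synthetic demo: a uniform scene seen through uneven pixel gains.
rng = np.random.default_rng(0)
true_gain = rng.uniform(0.8, 1.2, size=(4, 4))
dark = np.full((4, 4), 5.0)
flat = true_gain * 100.0 + dark        # flat-field exposure
raw = true_gain * 50.0 + dark          # raw exposure of a uniform scene
corrected = flat_field_correct(raw, flat, dark)
```

After correction, the uniform scene comes out uniform: the pixel-to-pixel gain pattern divides out, leaving only the (scaled) true signal.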
To capture a flat frame, the astrophotographer places a light source over the imaging instrument's objective lens so that the light emanates evenly through the optics. The photographer then adjusts the exposure of the imaging device (a charge-coupled device (CCD) or digital single-lens reflex camera (DSLR)) so that the histogram of the image shows a peak reaching about 40–70% of the dynamic range (maximum range of pixel values) of the device. The photographer typically takes 15–20 such flat frames and median-stacks them. Once the desired flat frames are acquired, the objective lens is covered so that no light is allowed in, and 15–20 dark frames are taken, each with the same exposure time as a flat frame; these are called dark-flat frames.
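The median-stacking step above can be sketched as follows (a NumPy illustration with simulated 8×8 frames; the function name, frame counts, and noise levels are invented for the demo):

```python
import numpy as np

def master_flat(flat_frames, dark_flat_frames):
    """Median-stack flats and dark-flats, subtract, and normalize to mean 1."""
    flats = np.median(np.stack(flat_frames).astype(np.float64), axis=0)
    darks = np.median(np.stack(dark_flat_frames).astype(np.float64), axis=0)
    master = flats - darks             # dark-subtracted master flat
    return master / master.mean()      # normalize so correction preserves scale

# Simulate 15 flat frames (uneven gain, shot noise, dark offset) and
# 15 matching dark-flat frames of equal exposure.
rng = np.random.default_rng(1)
gain = rng.uniform(0.9, 1.1, size=(8, 8))
flats = [gain * 1000 + rng.normal(0, 5, (8, 8)) + 20 for _ in range(15)]
dark_flats = [np.full((8, 8), 20.0) + rng.normal(0, 2, (8, 8)) for _ in range(15)]
mf = master_flat(flats, dark_flats)
```

The median is used rather than the mean because it rejects outliers (cosmic-ray hits, passing satellites) that would bias an average.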
In X-ray imaging, the acquired projection images generally suffer from fixed-pattern noise, which is one of the limiting factors of image quality. It may stem from beam inhomogeneity, gain variations of the detector response due to inhomogeneities in the photon conversion yield, losses in charge transport, charge trapping, or variations in the performance of the readout. Also, the scintillator screen may accumulate dust and/or scratches on its surface, resulting in systematic patterns in every acquired X-ray projection image. In X-ray computed tomography (CT), fixed-pattern noise is known to significantly degrade the achievable spatial resolution and generally leads to ring or band artifacts in the reconstructed images. Fixed-pattern noise can be easily removed using flat-field correction. In conventional flat-field correction, projection images without the sample are acquired with and without the X-ray beam turned on, referred to as flat fields (F) and dark fields (D) respectively. Based on the acquired flat and dark fields, the measured projection images (P) with the sample are then normalized to new images (N) according to:[3]

N = \frac{P - D}{F - D}
While conventional flat-field correction is an elegant and easy procedure that largely reduces fixed-pattern noise, it heavily relies on the stationarity of the X-ray beam, the scintillator response, and the CCD sensitivity. In practice, however, this assumption is only approximately met. Detector elements have intensity-dependent, nonlinear response functions, and the incident beam often shows time-dependent non-uniformities, which render conventional FFC inadequate. In synchrotron X-ray tomography, many factors can cause flat-field variations: instability of the bending magnets of the synchrotron, temperature variations due to the water cooling in mirrors and the monochromator, or vibrations of the scintillator and other beamline components. The latter is responsible for the biggest variations in the flat fields. To deal with such variations, a dynamic flat-field correction procedure can be employed that estimates a flat field for each individual projection. Through principal component analysis of a set of flat fields, acquired before and/or after the actual scan, eigen flat fields can be computed. A linear combination of the most important eigen flat fields can then be used to individually normalize each X-ray projection:[3]

N_j = \frac{P_j - D}{\bar{F} + \sum_{k=1}^{K} w_{jk} U_k - D}

where \bar{F} is the mean of the acquired flat fields, U_k is the k-th eigen flat field, and w_{jk} is the weight of eigen flat field k for projection j.
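The eigen-flat-field idea can be sketched in NumPy as below. This is a simplified illustration, not the published algorithm: in particular, the weights here are estimated by a plain least-squares fit of the residual onto the eigen flat fields, whereas the published method chooses weights that minimize the total variation of the normalized image. All function names and the synthetic drifting-beam data are my own.

```python
import numpy as np

def eigen_flat_fields(flats, n_components=1):
    """PCA of a stack of flat fields: mean flat plus the top eigen flat fields."""
    X = np.stack([f.ravel() for f in flats]).astype(np.float64)
    mean_flat = X.mean(axis=0)
    # SVD of the mean-centred stack; rows of vt are the principal components.
    _, _, vt = np.linalg.svd(X - mean_flat, full_matrices=False)
    return mean_flat, vt[:n_components]

def dynamic_ffc(proj, dark, mean_flat, effs):
    """Normalize one projection against a per-projection flat-field estimate."""
    p = proj.ravel().astype(np.float64) - dark.ravel()
    f = mean_flat - dark.ravel()
    # Simplified weight estimate: least-squares fit of the residual onto the
    # eigen flat fields (the published method minimizes total variation instead).
    w, *_ = np.linalg.lstsq(effs.T, p - f, rcond=None)
    f_dyn = f + effs.T @ w
    return (p / f_dyn).reshape(proj.shape)

# Synthetic beam whose overall intensity drifts between flat-field exposures.
rng = np.random.default_rng(2)
base = rng.uniform(0.8, 1.2, size=(6, 6))
flats = [base * s for s in (0.9, 1.0, 1.1, 1.2)]
dark = np.zeros((6, 6))
mean_flat, effs = eigen_flat_fields(flats)
# A "projection" with no sample, taken at an intensity not in the flat set:
proj = base * 1.17
normalized = dynamic_ffc(proj, dark, mean_flat, effs)
```

Because the intensity drift lies in the span of the eigen flat fields, the per-projection flat field matches the beam at acquisition time and the sample-free projection normalizes to a uniform image.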
A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.
The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works.
In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
In optics, the Airy disk and Airy pattern are descriptions of the best-focused spot of light that a perfect lens with a circular aperture can make, limited by the diffraction of light. The Airy disk is of importance in physics, optics, and astronomy.
Optical resolution describes the ability of an imaging system to resolve detail, in the object that is being imaged. An imaging system may have many individual components, including one or more lenses, and/or recording and display components. Each of these contributes to the optical resolution of the system; the environment in which the imaging is done often is a further important factor.
Fixed-pattern noise (FPN) is the term given to a particular noise pattern on digital imaging sensors, often noticeable during longer-exposure shots, in which particular pixels consistently give intensities brighter than the average.
Image noise is random variation of brightness or color information in images, and is usually an aspect of electronic noise. It can be produced by the image sensor and circuitry of a scanner or digital camera. Image noise can also originate in film grain and in the unavoidable shot noise of an ideal photon detector. Image noise is an undesirable by-product of image capture that obscures the desired information. Typically the term “image noise” is used to refer to noise in 2D images, not 3D images.
Phase-contrast imaging is a method of imaging that has a range of different applications. It measures differences in the refractive index of different materials to differentiate between structures under analysis. In conventional light microscopy, phase contrast can be employed to distinguish between structures of similar transparency, and to examine crystals on the basis of their double refraction. This has uses in biological, medical and geological science. In X-ray tomography, the same physical principles can be used to increase image contrast by highlighting small details of differing refractive index within structures that are otherwise uniform. In transmission electron microscopy (TEM), phase contrast enables very high resolution (HR) imaging, making it possible to distinguish features a few Angstrom apart.
The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are captured or transmitted. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations.
The following are common definitions related to the machine vision field.
An active-pixel sensor (APS) is an image sensor, which was invented by Peter J.W. Noble in 1968, where each pixel sensor unit cell has a photodetector and one or more active transistors. In a metal–oxide–semiconductor (MOS) active-pixel sensor, MOS field-effect transistors (MOSFETs) are used as amplifiers. There are different types of APS, including the early NMOS APS and the now much more common complementary MOS (CMOS) APS, also known as the CMOS sensor. CMOS sensors are used in digital camera technologies such as cell phone cameras, web cameras, most modern digital pocket cameras, most digital single-lens reflex cameras (DSLRs), mirrorless interchangeable-lens cameras (MILCs), and lensless imaging for cells.
In digital photography, dark-frame subtraction is a way to reduce image noise in photographs shot with long exposure times, at high ISO sensitivity, or at high temperatures. It takes advantage of two components of image noise that remain the same from one shot to the next: dark current and fixed-pattern noise. Noise from the image sensor includes hot pixels, which light up more brightly than surrounding pixels. The technique works by taking a picture with the shutter closed and subtracting it electronically from the original photo exhibiting the noise.
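A minimal NumPy illustration of the subtraction (the 8-bit values and hot-pixel location are invented for the demo):

```python
import numpy as np

def subtract_dark_frame(photo, dark_frame):
    """Remove dark current and hot pixels by subtracting a matched dark frame."""
    result = photo.astype(np.int32) - dark_frame.astype(np.int32)
    return np.clip(result, 0, 255).astype(np.uint8)   # assuming 8-bit output

# A hot pixel at (1, 1) lights up in both the photo and the dark frame.
photo = np.full((3, 3), 80, dtype=np.uint8)
photo[1, 1] = 250
dark = np.full((3, 3), 10, dtype=np.uint8)
dark[1, 1] = 180
clean = subtract_dark_frame(photo, dark)
```

The widened integer type before subtraction matters: subtracting directly in `uint8` would wrap around wherever the dark frame exceeds the photo.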
In digital photography, the image sensor format is the shape and size of the image sensor.
Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, which is applied through the restricted isometry property and is sufficient for sparse signals. Compressed sensing has applications in, for example, MRI, where the incoherence condition is typically satisfied.
Flat-panel detectors are a class of solid-state x-ray digital radiography devices similar in principle to the image sensors used in digital photography and video. They are used in both projectional radiography and as an alternative to x-ray image intensifiers (IIs) in fluoroscopy equipment.
Phase-contrast X-ray imaging or phase-sensitive X-ray imaging is a general term for different technical methods that use information concerning changes in the phase of an X-ray beam that passes through an object in order to create its images. Standard X-ray imaging techniques like radiography or computed tomography (CT) rely on a decrease of the X-ray beam's intensity (attenuation) when traversing the sample, which can be measured directly with the assistance of an X-ray detector. However, in phase contrast X-ray imaging, the beam's phase shift caused by the sample is not measured directly, but is transformed into variations in intensity, which then can be recorded by the detector.
This glossary defines terms that are used in the document "Defining Video Quality Requirements: A Guide for Public Safety", developed by the Video Quality in Public Safety (VQIPS) Working Group. It contains terminology and explanations of concepts relevant to the video industry. The purpose of the glossary is to inform the reader of commonly used vocabulary terms in the video domain. This glossary was compiled from various industry sources.
Kinetic imaging is an imaging technology developed by Szabolcs Osváth and Krisztián Szigeti in the Department of Biophysics and Radiation Biology at Semmelweis University. The technology allows the visualization of motion based on an altered data acquisition and image processing algorithm combined with imaging techniques that use penetrating radiation. Kinetic imaging has the potential for use in a wide variety of areas including medicine, engineering, and surveillance. For example, physiological movements, such as the circulation of blood or motion of organs (e.g., palpitations, arrhythmia), can be visualized using kinetic imaging. Because of the reduced noise and the motion-based image contrast, kinetic imaging can be used to reduce X-ray dose and/or the amount of required contrast agent in medical imaging. In fact, clinical trials are underway in the fields of vascular surgery and interventional radiology. Non-medical applications include non-destructive testing of products and port security scanning for stowaway pests.
An MRI artifact is a visual artifact in magnetic resonance imaging (MRI). It is a feature appearing in an image that is not present in the original object. Many different artifacts can occur during MRI, some affecting the diagnostic quality, while others may be confused with pathology. Artifacts can be classified as patient-related, signal processing-dependent and hardware (machine)-related.
Single-pixel imaging is a computational imaging technique for producing spatially-resolved images using a single detector instead of an array of detectors. A device that implements such an imaging scheme is called a single-pixel camera. Combined with compressed sensing, the single-pixel camera can recover images from fewer measurements than the number of reconstructed pixels.