Color mapping is a function that maps (transforms) the colors of one (source) image to the colors of another (target) image. The term may also refer to the algorithm that computes the mapping function, or to the algorithm that applies the function to transform the image colors. Color mapping is also sometimes called color transfer or, when grayscale images are involved, a brightness transfer function (BTF).
In mathematics, a function was originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable. The concept of function was formalized at the end of the 19th century in terms of set theory, and this greatly enlarged the domains of application of the concept.
Color, or colour, is the characteristic of human visual perception described through color categories, with names such as red, orange, yellow, green, blue, or purple. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the visible spectrum. Color categories and physical specifications of color are associated with objects through the wavelength of the light that is reflected from them. This reflection is governed by the object's physical properties such as light absorption, emission spectra, etc.
A digital image is a numeric representation, normally binary, of a two-dimensional image. Depending on whether the image resolution is fixed, it may be of vector or raster type. By itself, the term "digital image" usually refers to raster images or bitmapped images.
There are two types of color mapping algorithms: those that employ the statistics of the colors of two images, and those that rely on a given pixel correspondence between the images.
In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the smallest addressable element in an all-points-addressable display device; it is therefore the smallest controllable element of a picture represented on the screen.
An example of an algorithm that employs the statistical properties of the images is histogram matching. This is a classic algorithm for color mapping, but it suffers from sensitivity to differences in image content. Newer statistics-based algorithms deal with this problem. An example of such an algorithm is adjusting the mean and the standard deviation of the Lab channels of the two images.
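As an illustration, the following is a minimal sketch of that mean/standard-deviation adjustment (in the spirit of Reinhard et al.), assuming NumPy and scikit-image are available for the RGB-to-Lab conversion; the function name and the small epsilon guard are illustrative choices, not a standard API:

```python
import numpy as np
from skimage import color  # assumed dependency for RGB <-> Lab conversion

def transfer_lab_stats(source_rgb, target_rgb):
    """Shift each Lab channel of the source image so its mean and
    standard deviation match those of the target image."""
    src = color.rgb2lab(source_rgb)
    tgt = color.rgb2lab(target_rgb)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # scale the deviation from the source mean, then re-center on the target mean
        out[..., c] = (src[..., c] - s_mean) * (t_std / (s_std + 1e-8)) + t_mean
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```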
In image processing, histogram matching or histogram specification is the transformation of an image so that its histogram matches a specified histogram. The well-known histogram equalization method is a special case in which the specified histogram is uniformly distributed.
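A minimal sketch of grayscale histogram matching, assuming 8-bit single-channel NumPy arrays; it maps each source level to the reference level whose cumulative distribution value is closest:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap source gray levels so the source histogram approximates
    the reference histogram."""
    s_values, s_counts = np.unique(source, return_counts=True)
    r_values, r_counts = np.unique(reference, return_counts=True)
    # empirical CDFs of both images
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # for each source level, pick the reference level with the closest CDF value
    matched_values = np.interp(s_cdf, r_cdf, r_values)
    # remap every pixel through the resulting lookup
    out = np.interp(source.ravel(), s_values, matched_values)
    return out.reshape(source.shape).round().astype(source.dtype)
```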
There are several kinds of mean in various branches of mathematics; the most familiar is the arithmetic mean, obtained by summing a set of values and dividing by their count.
In statistics, the standard deviation is a measure that is used to quantify the amount of variation or dispersion of a set of data values. A low standard deviation indicates that the data points tend to be close to the mean of the set, while a high standard deviation indicates that the data points are spread out over a wider range of values.
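For a finite set of N values x_1, …, x_N, this measure can be written as

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - \mu)^2}, \qquad \mu = \frac{1}{N}\sum_{i=1}^{N} x_i,$$

where μ is the arithmetic mean of the set (this is the population form; the sample form divides by N − 1).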
A common algorithm for computing the color mapping when a pixel correspondence is given builds the joint histogram (see also co-occurrence matrix) of the two images and then finds the mapping by dynamic programming over the joint-histogram values.
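One possible sketch of this approach, assuming two aligned 8-bit grayscale NumPy arrays; the monotonicity constraint on the mapping and the maximization of joint-histogram mass are illustrative modeling choices rather than the only formulation:

```python
import numpy as np

def mapping_from_correspondence(src_gray, tgt_gray, levels=256):
    """Build the joint histogram of two aligned grayscale images and
    extract a monotonic mapping f by dynamic programming: choose, for
    each source level s, a target level f(s) >= f(s-1) that maximizes
    the total joint-histogram mass sum of H[s, f(s)]."""
    H, _, _ = np.histogram2d(src_gray.ravel(), tgt_gray.ravel(),
                             bins=levels, range=[[0, levels], [0, levels]])
    best = np.zeros_like(H)              # best[s, t]: best score with f(s) = t
    choice = np.zeros((levels, levels), dtype=int)
    best[0] = H[0]
    for s in range(1, levels):
        run_max = np.maximum.accumulate(best[s - 1])   # max over t' <= t
        run_arg = np.zeros(levels, dtype=int)
        for t in range(1, levels):       # track the argmax for backtracking
            run_arg[t] = t if best[s - 1, t] > best[s - 1, run_arg[t - 1]] else run_arg[t - 1]
        best[s] = H[s] + run_max
        choice[s] = run_arg
    # backtrack from the best final target level
    f = np.zeros(levels, dtype=int)
    f[-1] = int(np.argmax(best[-1]))
    for s in range(levels - 1, 0, -1):
        f[s - 1] = choice[s, f[s]]
    return f  # lookup table: mapped = f[src_gray]
```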
A co-occurrence matrix or co-occurrence distribution is a matrix that is defined over an image to be the distribution of co-occurring pixel values at a given offset.
Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. In both contexts it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have optimal substructure.
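As a minimal, self-contained illustration of optimal substructure, the classic edit-distance problem breaks into sub-problems over string prefixes, which memoization then solves only once each:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: the optimal answer for (a, b) is built
    from optimal answers for shorter prefixes of a and b."""
    if not a:
        return len(b)
    if not b:
        return len(a)
    cost = 0 if a[-1] == b[-1] else 1
    return min(min_edit_distance(a[:-1], b) + 1,          # delete from a
               min_edit_distance(a, b[:-1]) + 1,          # insert into a
               min_edit_distance(a[:-1], b[:-1]) + cost)  # substitute
```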
When the pixel correspondence is not given and the image contents differ (due to a different point of view), the statistics of corresponding image regions can be used as input to statistics-based algorithms, such as histogram matching. The corresponding regions can be found by detecting corresponding features.
In computer vision and image processing, feature detection includes methods for computing abstractions of image information and making local decisions at every image point about whether there is an image feature of a given type at that point. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves, or connected regions.
Color mapping can serve two different purposes: one is to calibrate the colors of two cameras for further processing, using two or more sample images; the other is to adjust the colors of two images for perceptual visual compatibility.
Color calibration is an important pre-processing task in computer vision applications. Many applications simultaneously process two or more images and therefore need their colors to be calibrated. Examples of such applications are image differencing, registration, object recognition, multi-camera tracking, co-segmentation, and stereo reconstruction.
The aim of color calibration is to measure and/or adjust the color response of a device to a known state. In International Color Consortium (ICC) terms, this is the basis for an additional color characterization of the device and later profiling. In non-ICC workflows, calibration sometimes refers to establishing a known relationship to a standard color space in one go. The device to be calibrated is sometimes known as a calibration source; the color space that serves as a standard is sometimes known as a calibration target. Color calibration is a requirement for all devices taking an active part in a color-managed workflow, and is used by many industries, such as television production, gaming, photography, engineering, chemistry, and medicine.
Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.
Image differencing is an image processing technique used to determine changes between images. The difference between two images is calculated by finding the difference between each pixel in each image, and generating an image based on the result. For this technique to work, the two images must first be aligned so that corresponding points coincide, and their photometric values must be made compatible, either by careful calibration, or by post-processing. The complexity of the pre-processing needed before differencing varies with the type of image.
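A minimal sketch of the per-pixel difference, assuming two aligned, photometrically compatible images as NumPy arrays of the same shape; the signed intermediate type avoids unsigned-integer wraparound:

```python
import numpy as np

def difference_image(a, b):
    """Absolute per-pixel difference of two aligned 8-bit images."""
    a = a.astype(np.int32)  # widen to a signed type before subtracting
    b = b.astype(np.int32)
    return np.abs(a - b).astype(np.uint8)
```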
Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, military automatic target recognition, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements.
In image processing and photography, a color histogram is a representation of the distribution of colors in an image. For digital images, a color histogram represents the number of pixels that have colors in each of a fixed list of color ranges, that span the image's color space, the set of all possible colors.
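A minimal sketch of such a histogram for an 8-bit RGB image, assuming NumPy; the number of bins per channel is an illustrative parameter:

```python
import numpy as np

def color_histogram(rgb, bins_per_channel=8):
    """Count pixels falling into each (R, G, B) bin; the bins tile the
    whole 8-bit RGB color space."""
    pixels = rgb.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=bins_per_channel,
                             range=[(0, 256)] * 3)
    return hist  # shape (bins, bins, bins)
```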
Fractal flames are a member of the iterated function system class of fractals, created by Scott Draves in 1992. Draves' open-source code was later ported into Adobe After Effects graphics software and translated into the Apophysis fractal flame editor.
Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. It is an ill-posed problem as the motion is in three dimensions but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.
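A minimal sketch of exhaustive block matching under the translational model, assuming two grayscale NumPy frames; the block size, search radius, and sum-of-absolute-differences (SAD) cost are illustrative choices:

```python
import numpy as np

def block_motion_vector(prev, curr, y, x, block=16, search=8):
    """Estimate the translation of the block at (y, x) in the current
    frame by minimizing SAD over all displacements within +/- search."""
    ref = curr[y:y + block, x:x + block].astype(np.int32)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue  # candidate block falls outside the previous frame
            sad = np.abs(prev[yy:yy + block, xx:xx + block].astype(np.int32) - ref).sum()
            if sad < best:
                best, best_dy, best_dx = sad, dy, dx
    return best_dy, best_dx
```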
Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range images in a medium that has a more limited dynamic range. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range that is inadequate to reproduce the full range of light intensities present in natural scenes. Tone mapping addresses the problem of strong contrast reduction from the scene radiance to the displayable range while preserving the image details and color appearance important to appreciate the original scene content.
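As a minimal sketch, one classic global operator is Reinhard's L/(1+L) curve, shown here applied to a luminance channel with NumPy; real tone mappers also handle color reproduction and local contrast:

```python
import numpy as np

def tonemap_reinhard(luminance):
    """Compress HDR luminance L into [0, 1) via L / (1 + L), keeping
    detail in dark regions while rolling off highlights."""
    L = np.asarray(luminance, dtype=np.float64)
    return L / (1.0 + L)
```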
An image histogram is a type of histogram that acts as a graphical representation of the tonal distribution in a digital image. It plots the number of pixels for each tonal value. By looking at the histogram for a specific image, a viewer will be able to judge the entire tonal distribution at a glance.
Image stitching or photo stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Commonly performed through the use of computer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results, although some stitching algorithms actually benefit from differently exposed images by doing high dynamic range imaging in regions of overlap. Some digital cameras can stitch their photos internally.
The following are common definitions related to the machine vision field.
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram.
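A minimal sketch for an 8-bit grayscale image, assuming NumPy; each level is remapped through the normalized cumulative histogram:

```python
import numpy as np

def equalize(gray):
    """Spread an 8-bit grayscale histogram over the full tonal range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    lut = np.round(255 * cdf).astype(np.uint8)         # per-level lookup table
    return lut[gray]
```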
In computer graphics, color quantization or color image quantization is quantization applied to color spaces; it is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image. Computer algorithms to perform color quantization on bitmaps have been studied since the 1970s. Color quantization is critical for displaying images with many colors on devices that can only display a limited number of colors, usually due to memory limitations, and enables efficient compression of certain types of images.
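A minimal sketch of one common approach, clustering pixel colors with a small k-means loop, assuming NumPy; the number of colors, iteration count, and initialization are illustrative choices (production code would typically use an optimized clustering library):

```python
import numpy as np

def quantize_colors(rgb, k=16, iters=10, seed=0):
    """Reduce an 8-bit RGB image to k representative colors via k-means."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign every pixel to its nearest center (squared distance)
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):  # move each center to the mean of its members
            members = pixels[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return centers[labels].round().reshape(rgb.shape).astype(np.uint8)
```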
Linux color management has the same goal as the color management systems (CMS) for other operating systems: to achieve the best possible color reproduction throughout an imaging workflow from its source, through imaging software, and finally onto an output medium. In particular, color management attempts to enable color consistency across media and throughout a color-managed workflow.
The stereo cameras approach is a method of distilling a noisy video signal into a coherent data set that a computer can begin to process into actionable symbolic objects, or abstractions. It is one of many approaches used in the broader fields of computer vision and machine vision.
Windows Color System (WCS) is a platform for color management, first included with Windows Vista, that aims to achieve color consistency across various software and hardware, including cameras, monitors, printers and scanners. Different devices interpret the same colors differently, according to their software and hardware configurations. As a result, they must be properly calibrated to reproduce colors consistently across different devices. WCS aims to make this process of color calibration automatic and transparent, as an evolution of ICC profiles.
Image rectification is a transformation process used to project images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane.
A color chart or color reference card is a flat, physical object that has many different color samples present. They can be available as a one-page chart, or in the form of swatchbooks or color-matching fans.
Camera auto-calibration is the process of determining internal camera parameters directly from multiple uncalibrated images of unstructured scenes. In contrast to classic camera calibration, auto-calibration does not require any special calibration objects in the scene. In the visual effects industry, camera auto-calibration is often part of the "Match Moving" process where a synthetic camera trajectory and intrinsic projection model are solved to reproject synthetic content into video.
Computer stereo vision is the extraction of 3D information from digital images, such as those obtained by a CCD camera. By comparing information about a scene from two vantage points, 3D information can be extracted by examining the relative positions of objects in the two panels. This is similar to the biological process of stereopsis. Stereoscopic images are often stored as MPO files. Researchers have recently developed methods to reduce the storage these files require while maintaining the high quality of the stereo image.
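For a rectified stereo pair, the standard relation between the disparity d of a point in the two panels and its depth Z is

$$Z = \frac{f\,B}{d},$$

where f is the focal length (in pixels) and B is the baseline distance between the two cameras; larger disparities correspond to nearer objects.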
Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photo-chemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs, or editing illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate, enhance, and transform images. Many image editing programs are also used to render or create computer art from scratch.
Chessboards arise frequently in computer vision theory and practice because their highly structured geometry is well-suited for algorithmic detection and processing. The appearance of chessboards in computer vision can be divided into two main areas: camera calibration and feature extraction. This article provides a unified discussion of the role that chessboards play in the canonical methods from these two areas, including references to the seminal literature, examples, and pointers to software implementations.