Image color transfer is a function that maps (transforms) the colors of one (source) image to the colors of another (target) image. The term color mapping may refer either to the algorithm that computes the mapping function or to the algorithm that applies it to transform the image colors. The image modification process is sometimes called color transfer or, when grayscale images are involved, a brightness transfer function (BTF); it may also be called photometric camera calibration or radiometric camera calibration.
The term image color transfer is something of a misnomer, since most common algorithms transfer both color and shading. (Indeed, the example shown on this page predominantly transfers shading; only a small orange region within the image is adjusted to yellow.)
Image color transfer algorithms fall into two broad types: those that employ the statistics of the colors of the two images, and those that rely on a given pixel correspondence between the images. In a wide-ranging review, Faridul and others [1] identify a third broad category, namely user-assisted methods.
An example of an algorithm that employs the statistical properties of the images is histogram matching. This is a classic algorithm for color transfer, but it can be too exact: it copies idiosyncratic color quirks of the target image rather than its general color characteristics, giving rise to color artifacts. Newer statistics-based algorithms address this problem. One such algorithm adjusts the mean and the standard deviation of each source image channel to match those of the corresponding reference image channels, an adjustment typically performed in the Lαβ or Lab color space. [2]
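The following is a minimal sketch of that mean/standard-deviation adjustment, assuming floating-point RGB input for simplicity rather than a conversion to Lαβ or Lab; the function and variable names are illustrative.

```python
import numpy as np

def match_mean_std(source, reference):
    """Adjust each channel of `source` so that its mean and standard
    deviation match those of the corresponding `reference` channel.
    Both arguments are float arrays of shape (H, W, 3) in [0, 1]."""
    result = np.empty_like(source)
    for c in range(source.shape[2]):
        src = source[..., c]
        ref = reference[..., c]
        # Scale by the ratio of standard deviations, then shift the mean.
        scale = ref.std() / (src.std() + 1e-8)
        result[..., c] = (src - src.mean()) * scale + ref.mean()
    return np.clip(result, 0.0, 1.0)
```

Reinhard et al. favour the Lαβ space because its channels are largely decorrelated, so adjusting each channel independently introduces fewer cross-channel artifacts than doing the same in RGB.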
When a pixel correspondence between the two images is given, a common approach to computing the color mapping is to build their joint histogram (see also co-occurrence matrix) and to find the mapping by dynamic programming over the joint-histogram values. [3]
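The sketch below is a simplified stand-in for the cited approach: it builds the joint histogram of two aligned grayscale (uint8) images and maps each source level to the conditional mean of the co-located reference levels. The published method instead derives a monotonic mapping by dynamic programming over the same histogram.

```python
import numpy as np

def joint_histogram_mapping(src_gray, ref_gray, levels=256):
    """Build the joint histogram of two aligned uint8 grayscale images
    and derive a lookup table mapping source levels to reference levels."""
    # joint[i, j] counts pixels where the source has level i and the
    # co-located reference pixel has level j.
    joint = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(joint, (src_gray.ravel(), ref_gray.ravel()), 1)
    counts = joint.sum(axis=1)
    expected = joint @ np.arange(levels)
    # Map each source level to the conditional mean reference level;
    # levels never seen in the source are left unchanged.
    lut = np.where(counts > 0, expected / np.maximum(counts, 1), np.arange(levels))
    return lut.astype(np.uint8)
```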
When the pixel correspondence is not given and the image contents differ (due to a different point of view), the statistics of corresponding image regions can be used as input to statistics-based algorithms such as histogram matching. The corresponding regions can be found by detecting corresponding features. [4]
Liu [5] provides a review of image color transfer methods. The review extends to video color transfer and to deep-learning methods, including neural style transfer.
Color transfer processing can serve two different purposes: calibrating the colors of two cameras for further processing, using two or more sample images, and adjusting the colors of two images for perceptual visual compatibility.
Color calibration is an important pre-processing task in computer vision applications. Many applications process two or more images simultaneously and therefore need their colors to be calibrated. Examples of such applications are image differencing, registration, object recognition, multi-camera tracking, co-segmentation and stereo reconstruction.
Other applications of image color transfer have been suggested. These include the co-option of color palettes from recognised sources such as famous paintings, and use as a further alternative to the color modification methods commonly found in commercial image processing applications, such as ‘posterise’, ‘solarise’ and ‘gradient’. [6] A web application has been made available to explore these possibilities.
The use of the terms source and target in this article reflects the usage in the seminal paper by Reinhard et al. [2] However, others, such as Xiao and Ma [7], reverse that usage, and indeed it seems more natural to consider that the colors from a source image are directed at a target image. Adobe use the term source for the color reference image in the Photoshop Match Color function. Because of confusion over this terminology, some software has been released into the public domain with incorrect functionality. To minimise further confusion, it may be good practice henceforth to use terms such as input image or base image, and color source image or color palette image, respectively.
Rendering is the process of generating a photorealistic or non-photorealistic image from input data such as 3D models. The word "rendering" originally meant the task performed by an artist when depicting a real or imaginary thing. Today, to "render" commonly means to generate an image or video from a precise description using a computer program.
Gamma correction or gamma is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the power-law expression V_out = A·V_in^γ, where the non-negative real input value V_in is raised to the power γ and multiplied by the constant A to obtain the output value V_out; in the common case of A = 1, inputs and outputs are typically in the range 0 to 1.
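A minimal sketch of gamma encoding and decoding under this power law, assuming A = 1, values normalised to [0, 1], and a representative γ of 2.2:

```python
import numpy as np

def gamma_encode(linear, gamma=2.2):
    """Encode linear intensities for display: V_out = V_in ** (1 / gamma)."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

def gamma_decode(encoded, gamma=2.2):
    """Invert the encoding to recover linear intensities: V_out = V_in ** gamma."""
    return np.clip(encoded, 0.0, 1.0) ** gamma
```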
In image processing and photography, a color histogram is a representation of the distribution of colors in an image. For digital images, a color histogram represents the number of pixels that have colors in each of a fixed list of color ranges, that span the image's color space, the set of all possible colors.
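For example, a color histogram over a coarse partition of RGB space can be computed as in the sketch below; the choice of eight bins per channel is illustrative.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Count pixels in each cell of a bins x bins x bins partition of
    RGB space. `image` is a uint8 array of shape (H, W, 3)."""
    # Map each 0-255 channel value to a bin index in 0..bins-1.
    idx = (image.astype(np.int64) * bins) // 256
    # Flatten the three bin indices into a single cell index.
    flat = idx[..., 0] * bins * bins + idx[..., 1] * bins + idx[..., 2]
    hist = np.bincount(flat.ravel(), minlength=bins ** 3)
    return hist.reshape(bins, bins, bins)
```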
Fractal flames are members of the iterated function system class of fractals, created by Scott Draves in 1992. Draves' open-source code was later ported into Adobe After Effects graphics software and translated into the Apophysis fractal flame editor.
In computer graphics, a shader is a computer program that calculates the appropriate levels of light, darkness, and color during the rendering of a 3D scene—a process known as shading. Shaders have evolved to perform a variety of specialized functions in computer graphics special effects and video post-processing, as well as general-purpose computing on graphics processing units.
Tone mapping is a technique used in image processing and computer graphics to map one set of colors to another to approximate the appearance of high-dynamic-range (HDR) images in a medium that has a more limited dynamic range. Print-outs, CRT or LCD monitors, and projectors all have a limited dynamic range that is inadequate to reproduce the full range of light intensities present in natural scenes. Tone mapping addresses the problem of strong contrast reduction from the scene radiance to the displayable range while preserving the image details and color appearance important to appreciate the original scene content.
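One simple global operator is the Reinhard curve L/(1+L), sketched below under the assumption of a float RGB input with unbounded positive values; the luminance weights are the Rec. 709 coefficients and the key value is a conventional default.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Global Reinhard operator: scale luminance to a target key value,
    then compress it with L / (1 + L). Returns values in [0, 1]."""
    lum = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + 1e-6)))  # geometric mean luminance
    scaled = (key / log_avg) * lum
    compressed = scaled / (1.0 + scaled)
    # Rescale each color channel by the luminance compression ratio.
    ratio = compressed / np.maximum(lum, 1e-6)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)
```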
Image stitching or photo stitching is the process of combining multiple photographic images with overlapping fields of view to produce a segmented panorama or high-resolution image. Commonly performed through the use of computer software, most approaches to image stitching require nearly exact overlaps between images and identical exposures to produce seamless results, although some stitching algorithms actually benefit from differently exposed images by doing high-dynamic-range imaging in regions of overlap. Some digital cameras can stitch their photos internally.
In the fields of computing and computer vision, pose represents the position and the orientation of an object, each usually in three dimensions. Poses are often stored internally as transformation matrices. The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale, whereas pose does not.
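A minimal sketch of this representation, packing a rotation and translation into a 4×4 homogeneous matrix and applying it to points; the helper names are illustrative.

```python
import numpy as np

def make_pose(rotation, translation):
    """Pack a 3x3 rotation matrix and a 3-vector translation into a
    4x4 homogeneous transformation matrix (no scale, as a pose)."""
    pose = np.eye(4)
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    return pose

def apply_pose(pose, points):
    """Transform an (N, 3) array of points by the pose."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (pose @ homog.T).T[:, :3]
```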
The following are common definitions related to the machine vision field.
Histogram equalization is a method in image processing of contrast adjustment using the image's histogram.
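A minimal sketch for a uint8 grayscale image: the cumulative histogram (CDF) is normalised and used as a lookup table, which spreads the levels so that the output histogram is approximately uniform.

```python
import numpy as np

def equalize(gray):
    """Histogram-equalize a uint8 grayscale image by mapping each
    level through the normalized cumulative histogram."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                      # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[gray]
```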
The aim of color calibration is to measure and/or adjust the color response of a device to a known state. In International Color Consortium (ICC) terms, this is the basis for an additional color characterization of the device and later profiling. In non-ICC workflows, calibration sometimes refers to establishing a known relationship to a standard color space in one go. The device that is to be calibrated is sometimes known as a calibration source; the color space that serves as a standard is sometimes known as a calibration target. Color calibration is a requirement for all devices taking an active part in a color-managed workflow and is used by many industries, such as television production, gaming, photography, engineering, chemistry, medicine, and more.
In computer graphics, color quantization or color image quantization is quantization applied to color spaces; it is a process that reduces the number of distinct colors used in an image, usually with the intention that the new image should be as visually similar as possible to the original image. Computer algorithms to perform color quantization on bitmaps have been studied since the 1970s. Color quantization is critical for displaying images with many colors on devices that can only display a limited number of colors, usually due to memory limitations, and enables efficient compression of certain types of images.
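For instance, Pillow's quantize() method reduces an image to a fixed-size palette (median cut by default); the filename and palette size below are illustrative.

```python
from PIL import Image

# Reduce an image to a 16-color palette using Pillow's median-cut
# quantizer, then save the paletted result.
img = Image.open("photo.png").convert("RGB")
quantized = img.quantize(colors=16)
quantized.save("photo_16colors.png")
```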
The Advanced Very-High-Resolution Radiometer (AVHRR) instrument is a space-borne sensor that measures the reflectance of the Earth in five spectral bands that are relatively wide by today's standards. AVHRR instruments are or have been carried by the National Oceanic and Atmospheric Administration (NOAA) family of polar-orbiting platforms (POES) and European MetOp satellites. The instrument scans several channels; two are centered on the red (0.6 micrometres) and near-infrared (0.9 micrometres) regions, a third is located around 3.5 micrometres, and another two measure the thermal radiation emitted by the planet, around 11 and 12 micrometres.
Image rectification is a transformation process used to project images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane. Image rectification is used in computer stereo vision to simplify the problem of finding matching points between images, and in geographic information systems (GIS) to merge images taken from multiple perspectives into a common map coordinate system.
In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be accomplished either by active or passive methods. If the model is allowed to change its shape in time, this is referred to as non-rigid or spatio-temporal reconstruction.
Computer stereo vision is the extraction of 3D information from digital images, such as those obtained by a CCD camera. By comparing information about a scene from two vantage points, 3D information can be extracted by examining the relative positions of objects in the two views. This is similar to the biological process of stereopsis.
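For a rectified stereo pair, the relative position of a point in the two views (its disparity d) gives its depth directly as Z = f·B/d, where f is the focal length in pixels and B is the camera baseline. A minimal sketch, with illustrative numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth is inversely proportional to disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, B = 0.12 m, d = 35 px  ->  Z = 2.4 m
print(depth_from_disparity(35.0, 700.0, 0.12))
```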
In image processing, histogram matching or histogram specification is the transformation of an image so that its histogram matches a specified histogram. The well-known histogram equalization method is a special case in which the specified histogram is uniformly distributed.
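A minimal sketch for uint8 grayscale images: the source CDF is composed with the (approximate) inverse of the reference CDF to build a lookup table.

```python
import numpy as np

def match_histogram(src, ref):
    """Map the gray levels of `src` so that its histogram approximates
    that of `ref`. Both are uint8 grayscale images."""
    src_cdf = np.bincount(src.ravel(), minlength=256).cumsum()
    ref_cdf = np.bincount(ref.ravel(), minlength=256).cumsum()
    src_cdf = src_cdf / src_cdf[-1]
    ref_cdf = ref_cdf / ref_cdf[-1]
    # For each source level, pick the reference level whose CDF value
    # first reaches the source CDF value.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[src]
```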
Chessboards arise frequently in computer vision theory and practice because their highly structured geometry is well-suited for algorithmic detection and processing. The appearance of chessboards in computer vision can be divided into two main areas: camera calibration and feature extraction. This article provides a unified discussion of the role that chessboards play in the canonical methods from these two areas, including references to the seminal literature, examples, and pointers to software implementations.
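As an illustration of the calibration use, the sketch below detects chessboard corners with OpenCV and feeds them to cv2.calibrateCamera; the board size and filenames are assumptions.

```python
import cv2
import numpy as np

# Inner-corner grid of the chessboard (columns x rows) - illustrative.
pattern = (9, 6)
# 3D coordinates of the corners on the (planar) board, in board units.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for name in ["view1.png", "view2.png", "view3.png"]:  # illustrative files
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Recover the intrinsic matrix and distortion coefficients.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```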
Underwater computer vision is a subfield of computer vision. In recent years, with the development of underwater vehicles, the need to record and process huge amounts of information has become increasingly important. Applications range from inspection of underwater structures for the offshore industry to the identification and counting of fishes for biological research. However great the impact of this technology on industry and research may be, it is still at a very early stage of development compared to traditional computer vision. One reason for this is that, the moment the camera goes into the water, a whole new set of challenges appears. On one hand, cameras have to be made waterproof, marine corrosion deteriorates materials quickly, and access to and modification of experimental setups are costly in both time and resources. On the other hand, the physical properties of water make light behave differently, changing the appearance of the same object with variations in depth, organic material, currents, temperature and so on.