Image color transfer

Color mapping example: a source image, a reference image, and the source image color mapped to the reference using histogram matching.

Image color transfer is a function that maps (transforms) the colors of one (source) image to the colors of another (target) image. The term color mapping may refer either to the mapping function itself or to the algorithm that computes it and transforms the image colors. The image modification process is sometimes called color transfer or, when grayscale images are involved, brightness transfer, in which case the mapping is a brightness transfer function (BTF); it may also be called photometric camera calibration or radiometric camera calibration.


The term image color transfer is something of a misnomer, since most common algorithms transfer both color and shading. (Indeed, the example shown on this page predominantly transfers shading, apart from a small orange region within the image that is adjusted to yellow.)

Algorithms

There are two types of image color transfer algorithms: those that employ the statistics of the colors of two images, and those that rely on a given pixel correspondence between the images. In a wide-ranging review, Faridul and others [1] identify a third broad category of implementation, namely user-assisted methods.

An example of an algorithm that employs the statistical properties of the images is histogram matching. This is a classic algorithm for color transfer, but it can be too literal: it reproduces particular color quirks of the target image rather than its general color characteristics, giving rise to color artifacts. Newer statistics-based algorithms address this problem. An example of such an algorithm is one that adjusts the mean and the standard deviation of each source image channel to match those of the corresponding reference image channel. This adjustment is typically performed in the Lαβ or Lab color space. [2]
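The mean/standard-deviation adjustment is compact enough to show in full. The following is a minimal NumPy sketch; the function name is illustrative, and it operates on whatever channels it is given, whereas Reinhard et al. [2] first convert both images to the decorrelated Lαβ space, transfer the statistics there, and convert back.

```python
import numpy as np

def match_mean_std(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-channel mean/standard-deviation matching.
    Both inputs are float arrays of shape (H, W, C) in the same color space."""
    out = np.empty_like(source)
    for c in range(source.shape[2]):
        src, ref = source[..., c], reference[..., c]
        scale = ref.std() / (src.std() + 1e-8)  # guard against flat channels
        out[..., c] = (src - src.mean()) * scale + ref.mean()
    return out
```

In practice the result is clipped back to the valid range of the working color space before display.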

A common algorithm for computing the color mapping when the pixel correspondence is given builds the joint histogram (see also co-occurrence matrix) of the two images and finds the mapping by dynamic programming over the joint-histogram values. [3]
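A sketch of this idea for 8-bit grayscale images is given below. It assumes the images are pixel-aligned and restricts the search to monotonically non-decreasing mappings so that dynamic programming applies; the details of the cited method differ.

```python
import numpy as np

def dp_level_mapping(source: np.ndarray, target: np.ndarray, levels: int = 256) -> np.ndarray:
    """Find the monotonically non-decreasing mapping of source levels to
    target levels that passes through the most joint-histogram mass."""
    # Joint histogram: H[i, j] counts pixels with source level i and target level j.
    H, _, _ = np.histogram2d(source.ravel(), target.ravel(),
                             bins=levels, range=[[0, levels], [0, levels]])
    best = np.zeros_like(H)  # best[i, j]: top score if level i maps to level j
    best[0] = H[0]
    choice = np.zeros((levels, levels), dtype=int)
    for i in range(1, levels):
        # Running maximum (and its argument) of best[i-1, 0..j] enforces monotonicity.
        run_max = best[i - 1].copy()
        run_arg = np.arange(levels)
        for j in range(1, levels):
            if run_max[j - 1] >= run_max[j]:
                run_max[j] = run_max[j - 1]
                run_arg[j] = run_arg[j - 1]
        best[i] = H[i] + run_max
        choice[i] = run_arg
    # Backtrack from the best final assignment to recover the whole mapping.
    mapping = np.empty(levels, dtype=np.uint8)
    j = int(np.argmax(best[-1]))
    for i in range(levels - 1, -1, -1):
        mapping[i] = j
        if i > 0:
            j = int(choice[i, j])
    return mapping

# The mapping is applied as a lookup table: mapped = dp_level_mapping(src, tgt)[src]
```

For color images the same procedure can be run per channel.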

When the pixel correspondence is not given and the image contents differ (for example, due to a different point of view), the statistics of corresponding image regions can be used as input to statistics-based algorithms such as histogram matching. The corresponding regions can be found by detecting corresponding features. [4]
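For reference, classic histogram matching of a single 8-bit channel reduces to a lookup table built from the two cumulative distributions; a minimal sketch follows (the function name is illustrative). To use only corresponding regions, as described above, the histograms would be computed from the masked region pixels rather than from the whole images.

```python
import numpy as np

def histogram_match(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap a uint8 channel so its empirical CDF follows the reference CDF."""
    src_cdf = np.cumsum(np.bincount(source.ravel(), minlength=256)) / source.size
    ref_cdf = np.cumsum(np.bincount(reference.ravel(), minlength=256)) / reference.size
    # For each source level, pick the first reference level whose CDF reaches it.
    lut = np.searchsorted(ref_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[source]
```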

Liu [5] provides a review of image color transfer methods. The review extends to video color transfer and to deep learning methods, including neural style transfer.

Applications

Color transfer processing can serve two different purposes: one is calibrating the colors of two cameras, using two or more sample images, for further processing; the other is adjusting the colors of two images for perceptual visual compatibility.

Color calibration is an important pre-processing task in computer vision applications. Many applications simultaneously process two or more images and therefore need their colors to be calibrated. Examples of such applications are image differencing, registration, object recognition, multi-camera tracking, co-segmentation and stereo reconstruction.

A photograph of 21st-century London recolored to match an 18th-century painting by Canaletto.

Other applications of image color transfer have been suggested. These include the co-option of color palettes from recognised sources, such as famous paintings, and use as a further alternative to the color modification methods commonly found in commercial image processing applications, such as 'posterise', 'solarise' and 'gradient'. [6] A web application has been made available to explore these possibilities.

Nomenclature

The use of the terms source and target in this article reflects the usage in the seminal paper by Reinhard et al. [2] However, others, such as Xiao and Ma, [7] reverse that usage, and indeed it seems more natural to consider that the colors from a source image are directed at a target image. Adobe uses the term source for the color reference image in the Photoshop Match Color function. Because of confusion over this terminology, some software has been released into the public domain with incorrect functionality. To minimise further confusion, it may be good practice henceforth to use terms such as input image or base image and color source image or color palette image, respectively.


References

  1. Faridul, H. Sheikh; Pouli, T.; Chamaret, C.; Stauder, J.; Reinhard, E.; Kuzovkin, D.; Tremeau, A. (February 2016). "Colour Mapping: A Review of Recent Methods, Extensions and Applications". Computer Graphics Forum. 35 (1): 59–88. doi:10.1111/cgf.12671. S2CID 13038481.
  2. Reinhard, E.; Ashikhmin, M.; Gooch, B.; Shirley, P. (2001). "Color Transfer between Images". IEEE Computer Graphics and Applications. 21 (5): 34–41.
  3. Inter-Camera Color Calibration using Cross-Correlation Model Function
  4. Piecewise-consistent Color Mappings of Images Acquired Under Various Conditions Archived 2011-07-21 at the Wayback Machine
  5. Liu, Shiguang (2022). "An Overview of Color Transfer and Style Transfer for Images and Videos". arXiv:2204.13339 [cs.CV].
  6. Johnson, Terry (28 May 2022). "A Free-to-Use Web App for Image Colour Transfer Processing". Medium.
  7. Xiao, X.; Ma, L. (2006). "Color transfer in correlated color space". ACM: 305–309.