ICTCP, ICtCp, or ITP is a color representation format specified in the Rec. ITU-R BT.2100 standard that is used as a part of the color image pipeline in video and digital photography systems for high dynamic range (HDR) and wide color gamut (WCG) imagery. [1] It was developed by Dolby Laboratories [2] from the IPT color space by Ebner and Fairchild. [3] [4] The format is derived from an associated RGB color space by a coordinate transformation that includes two matrix transformations and an intermediate nonlinear transfer function that is informally known as gamma pre-correction. The transformation produces three signals called I, CT, and CP. The ICTCP transformation can be used with RGB signals derived from either the perceptual quantizer (PQ) or hybrid log–gamma (HLG) nonlinearity functions, but is most commonly associated with the PQ function (which was also developed by Dolby).
The I ("intensity") component is a luma component that represents the brightness of the video, and CT and CP are blue-yellow (named from tritanopia) and red-green (named from protanopia) chroma components. [2] Ebner also used IPT as short for "Image Processing Transform". [3]
The ICTCP color representation scheme is conceptually related to the LMS color space, as the color transformation from RGB to ICTCP is defined by first converting RGB to LMS with a 3×3 matrix transformation, then applying the nonlinearity function, and then converting the nonlinear signals to ICTCP using another 3×3 matrix transformation. [5] ICTCP was defined as a YCC digital format with support for 4:4:4, 4:2:2 and 4:2:0 chroma subsampling in CTA-861-H, which means that in limited-range 10-bit mode the code values 0–3 and 1020–1023 are reserved. [6]
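As a sketch of what limited-range coding implies, the following assumes that ICtCp, coded as a YCC format, uses the conventional luma/chroma quantization scalings (219/224 with an offset); the exact constants for I, CT and CP are an assumption here, not taken from the source:

```python
def quantize_limited_10bit(i, ct, cp):
    """Hedged sketch: limited-range 10-bit integer coding of ICtCp as a
    YCC-style signal. I is assumed to be coded like luma, Ct/Cp like
    chroma; codes 0-3 and 1020-1023 then remain reserved."""
    d_i = round((219 * i + 16) * 4)       # nominal range 64..940
    d_ct = round((224 * ct + 128) * 4)    # nominal range 64..960, 512 = zero chroma
    d_cp = round((224 * cp + 128) * 4)
    return d_i, d_ct, d_cp
```

For example, a neutral gray (zero chroma) maps to code 512 on both chroma axes, and full-scale I maps to 940, leaving the reserved codes untouched.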
ICTCP is defined by Rec. 2100 as being derived from linear RGB as follows: [1] RGB is first converted to LMS with
L = (1688 R + 2146 G + 262 B) / 4096
M = (683 R + 2951 G + 462 B) / 4096
S = (99 R + 309 G + 3688 B) / 4096
each LMS component is then encoded with the PQ or HLG nonlinearity to give L′M′S′, and finally
I = 0.5 L′ + 0.5 M′
CT = (6610 L′ - 13613 M′ + 7003 S′) / 4096
CP = (17933 L′ - 17390 M′ - 543 S′) / 4096
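The Rec. 2100 PQ path can be sketched in Python as follows. This is a minimal sketch: the matrix coefficients are the integer-ratio values from Rec. 2100, the PQ constants are those of SMPTE ST 2084, and the input is assumed to be linear-light BT.2020 RGB normalized so that 1.0 corresponds to the PQ peak of 10000 cd/m2:

```python
def pq_inverse_eotf(y):
    """SMPTE ST 2084 inverse EOTF: normalized linear light [0, 1]
    (1.0 = 10000 cd/m2) -> PQ-encoded signal [0, 1]."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    yp = y ** m1
    return ((c1 + c2 * yp) / (1 + c3 * yp)) ** m2

def rgb_to_ictcp_pq(r, g, b):
    """Linear BT.2020 RGB -> ICtCp using the PQ nonlinearity."""
    # 1) Linear RGB -> LMS (Rec. 2100 matrix, coefficients over 4096)
    l = (1688 * r + 2146 * g + 262 * b) / 4096
    m = (683 * r + 2951 * g + 462 * b) / 4096
    s = (99 * r + 309 * g + 3688 * b) / 4096
    # 2) PQ-encode each LMS component to get L'M'S'
    lp, mp, sp = (pq_inverse_eotf(c) for c in (l, m, s))
    # 3) L'M'S' -> ICtCp (PQ variant of the second matrix)
    i = 0.5 * lp + 0.5 * mp
    ct = (6610 * lp - 13613 * mp + 7003 * sp) / 4096
    cp = (17933 * lp - 17390 * mp - 543 * sp) / 4096
    return i, ct, cp
```

Because each row of the RGB-to-LMS matrix sums to 4096 and the chroma rows of the second matrix sum to zero, any achromatic input (R = G = B) yields zero CT and CP, with I equal to the PQ-encoded input value.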
All three above-mentioned matrices were derived from the matrices in IPT (only the first two derivations are documented [2] ). The HLG matrix can be derived in the same way as the PQ matrix, the only difference being the scaling of the chroma rows. The inverse (decoding) ICTCP matrices are specified in ITU-T Series H Supplement 18. [7]
ICTCP is defined such that the entire BT.2020 gamut fits into the range [0, 1] for I and [-0.5, +0.5] for the two chroma components. The related uniform color space ITP used in ΔEITP (Rec. 2124) scales CT by 0.5 to restore uniformity. [8] ICtCp is supported, for both HLG and PQ, in zimg (including zimg as used within FFmpeg) and in the colour-science library.
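The ΔEITP metric over ITP triples (with T = 0.5 CT, P = CP) can be written as a minimal sketch, using the BT.2124 scaling factor of 720, under which a difference of about 1.0 corresponds to one just-noticeable difference:

```python
import math

def itp_from_ictcp(i, ct, cp):
    """ICtCp -> ITP: BT.2124 halves the Ct axis to restore uniformity."""
    return i, 0.5 * ct, cp

def delta_e_itp(itp1, itp2):
    """BT.2124 color difference between two (I, T, P) triples."""
    di, dt, dp = (a - b for a, b in zip(itp1, itp2))
    return 720 * math.sqrt(di * di + dt * dt + dp * dp)
```

For example, two identical colors give a difference of 0, and a pure T offset of 0.1 gives 720 × 0.1 = 72.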
The predecessor of ICTCP, the Ebner & Fairchild IPT color appearance model (1998), has a largely similar transformation pipeline of input → LMS → nonlinearity → IPT. [3] [9] The differences are that IPT defines its input in the more general CIE XYZ tristimulus color space and consequently uses the more conventional Hunt–Pointer–Estévez matrix (normalized for D65) for the conversion to LMS. The nonlinearity is a fixed gamma of 0.43, quite close to the one used by RLAB. The second matrix is slightly different from the ICTCP matrix, mainly in that it also considers S (the blue cone) for intensity, but ICTCP additionally has a rotation matrix (to align skin tones) and a scaling matrix (to fit the full BT.2020 gamut inside the -0.5 to 0.5 region) multiplied with this matrix: [2] [10]
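The 1998 IPT pipeline can be sketched as follows; the coefficients are the values published in the Ebner & Fairchild paper (reproduced here from memory of that paper, so treat them as a sketch rather than a normative reference):

```python
def ipt_nonlinearity(v, gamma=0.43):
    """Signed power function from the 1998 IPT model; the sign handling
    allows the slightly negative LMS values that the matrix can produce."""
    return v ** gamma if v >= 0 else -((-v) ** gamma)

def xyz_to_ipt(x, y, z):
    """CIE XYZ (D65-relative) -> IPT per Ebner & Fairchild (1998)."""
    # XYZ -> LMS: Hunt-Pointer-Estevez matrix normalized for D65
    l = 0.4002 * x + 0.7075 * y - 0.0807 * z
    m = -0.2280 * x + 1.1500 * y + 0.0612 * z
    s = 0.9184 * z
    lp, mp, sp = (ipt_nonlinearity(c) for c in (l, m, s))
    # L'M'S' -> IPT; note that, unlike ICtCp, intensity also weights S
    i = 0.4000 * lp + 0.4000 * mp + 0.2000 * sp
    p = 4.4550 * lp - 4.8510 * mp + 0.3960 * sp
    t = 0.8056 * lp + 0.3572 * mp - 1.1628 * sp
    return i, p, t
```

A quick sanity check: the D65 white point maps to LMS ≈ (1, 1, 1) and therefore to IPT ≈ (1, 0, 0), since the P and T rows each sum to zero.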
IPTPQc2 is another related color space, used by Dolby Vision profile 5 BL+RPU (without EL). [11] The "c2" in the name means that a crosstalk matrix with c = 2% is used. It uses full-range quantization (0–1023 for 10-bit video; no values are reserved). It is also often referred to as IPTPQc2/IPT, as the matrix is in fact the same as in the 1998 IPT paper, just in inverse representation. [12] Documentation on this format is scarce due to its proprietary nature, but a patent [13] on the "IPT-PQ" (perceptually quantized IPT) color space appears to describe how Dolby moved the domain to PQ by replacing the traditional power function of the 1998 IPT paper with the PQ function for each of the LMS components.[ speculation? ] The matrix is as follows:
Note the matrix inversion used; the patent contains an error in the value 1091 of the matrix[ clarification needed ] (the matrix after inversion is correct in the patent). In addition, this format has no nonlinearity and is assumed to be BT.2020-based. [14]
The second step, dynamic range adjustment modeling (reshaping [15] ), is also defined in the patent.
It is used by Disney+, Apple TV+ and Netflix.[ citation needed ]
A decoder for IPTPQc2 with reshaping and MMR (but without NLQ and dynamic metadata) is available in libplacebo. [16]
Support for decoding all stages was added in mpv.
ICTCP has near-constant luminance. [17] The correlation coefficient between the encoded I component and true luminance is 0.998, much higher than the 0.819 for YCBCR. Improved constant luminance over YCBCR is an advantage for color processing operations such as chroma subsampling and gamut mapping, where only the color-difference information is changed. [2]
ICTCP also improves hue linearity compared with YCBCR, which helps with compression performance and color volume mapping. [18] [19] Adaptive reshaping can further provide a 10% improvement on compression performance. [20]
Improvements in luminance and hue uniformity make scaled ICTCP a practical color space for calculating color differences (ΔEITP), as introduced by ITU-R Rec. BT.2124. [8]
In terms of CIEDE2000 color quantization error, 10-bit ICTCP would be equivalent to 11.5 bit YCBCR. [2]
ICTCP is supported in the HEVC video coding standard. [21] It is also a digital YCC format and can be signaled in the EDID Colorimetry Data Block as part of CTA-861-H.