ICtCp

ICTCP, ICtCp, or ITP is a color representation format specified in the Rec. ITU-R BT.2100 standard that is used as a part of the color image pipeline in video and digital photography systems for high dynamic range (HDR) and wide color gamut (WCG) imagery. [1] It was developed by Dolby Laboratories [2] from the IPT color space by Ebner and Fairchild. [3] [4] The format is derived from an associated RGB color space by a coordinate transformation that includes two matrix transformations and an intermediate nonlinear transfer function that is informally known as gamma pre-correction. The transformation produces three signals called I, CT, and CP. The ICTCP transformation can be used with RGB signals derived from either the perceptual quantizer (PQ) or hybrid log–gamma (HLG) nonlinearity functions, but is most commonly associated with the PQ function (which was also developed by Dolby).

The I ("intensity") component is a luma component that represents the brightness of the video, and CT and CP are blue-yellow (named from tritanopia) and red-green (named from protanopia) chroma components. [2] Ebner also used IPT as short for "Image Processing Transform". [3]

The ICTCP color representation scheme is conceptually related to the LMS color space: the transformation from RGB to ICTCP is defined by first converting RGB to LMS with a 3×3 matrix, then applying the nonlinearity function, and finally converting the nonlinear signals to ICTCP with another 3×3 matrix. [5] ICTCP was defined as a YCC digital format with support for 4:4:4, 4:2:2 and 4:2:0 chroma subsampling in CTA-861-H (which means that in limited-range 10-bit mode the code values 0, 1, 2, 3, 1020, 1021, 1022 and 1023 are reserved). [6]

Derivation

ICTCP is defined by Rec. 2100 as being derived from linear RGB as follows: [1]

  1. Calculate LMS from BT.2100 RGB:
     L = (1688 R + 2146 G + 262 B) / 4096
     M = (683 R + 2951 G + 462 B) / 4096
     S = (99 R + 309 G + 3688 B) / 4096
  2. Normalize the LMS by a non-linearity:
    • If the PQ transfer function is used: L′, M′, S′ are obtained by applying the PQ inverse EOTF to each of L, M, S
    • If the HLG transfer function is used: L′, M′, S′ are obtained by applying the HLG OETF to each of L, M, S
  3. Calculate ICTCP:
    • for PQ:
      I = 0.5 L′ + 0.5 M′
      CT = (6610 L′ − 13613 M′ + 7003 S′) / 4096
      CP = (17933 L′ − 17390 M′ − 543 S′) / 4096
    • for HLG:
      I = 0.5 L′ + 0.5 M′
      CT = (3625 L′ − 7465 M′ + 3840 S′) / 4096
      CP = (9500 L′ − 9212 M′ − 288 S′) / 4096
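The PQ path of the steps above can be sketched in Python. This is a minimal illustration of the BT.2100 matrices and the PQ constants from SMPTE ST 2084, not a reference implementation; the function names are illustrative:

```python
# Sketch: linear BT.2100 RGB -> ICtCp via the PQ path (Rec. 2100 constants).

# PQ (ST 2084) constants
m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_oetf(x):
    """PQ inverse EOTF: linear light (1.0 = 10000 cd/m^2) -> signal in [0, 1]."""
    y = max(x, 0.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2

def rgb_to_ictcp(r, g, b):
    # 1. RGB -> LMS (BT.2100 matrix, coefficients in units of 1/4096)
    l = (1688 * r + 2146 * g + 262 * b) / 4096
    m = (683 * r + 2951 * g + 462 * b) / 4096
    s = (99 * r + 309 * g + 3688 * b) / 4096
    # 2. Nonlinearity: PQ applied to each of L, M, S
    lp, mp, sp = pq_oetf(l), pq_oetf(m), pq_oetf(s)
    # 3. L'M'S' -> ICtCp (BT.2100 matrix for the PQ case)
    i = 0.5 * lp + 0.5 * mp
    ct = (6610 * lp - 13613 * mp + 7003 * sp) / 4096
    cp = (17933 * lp - 17390 * mp - 543 * sp) / 4096
    return i, ct, cp
```

Because each matrix row sums to 4096/4096 = 1 (and the chroma rows of the second matrix sum to 0), a neutral input with R = G = B yields CT = CP = 0 and I equal to the PQ-encoded luminance.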

All three of the above matrices were derived from the matrices in IPT (only the first two derivations are documented [2] ). The HLG matrix can be derived the same way as the PQ matrix, the only difference being the scaling of the chroma rows. The inverse (decoding) ICTCP matrices are specified in ITU-T Series H Supplement 18. [7]

ICTCP is defined such that the entire BT.2020 gamut fits into the range [0, 1] for I and [−0.5, +0.5] for the two chroma components. The related uniform color space ITP, used in ΔEITP (Rec. 2124), scales CT by 0.5 to restore uniformity. [8] ICTCP is supported, for both PQ and HLG, in zimg (including zimg as used within FFmpeg) and in colour-science.
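The ITP rescaling and the resulting ΔEITP color-difference metric of BT.2124 can be sketched as follows (a minimal illustration; the function name is illustrative, and the inputs are assumed to be PQ-encoded ICTCP triplets):

```python
import math

def delta_e_itp(ictcp_a, ictcp_b):
    """ΔEITP (Rec. BT.2124) between two PQ-encoded ICtCp triplets."""
    i1, ct1, cp1 = ictcp_a
    i2, ct2, cp2 = ictcp_b
    # ITP rescales CT by 0.5 to restore perceptual uniformity;
    # I and CP carry over unchanged.
    di = i1 - i2
    dt = 0.5 * (ct1 - ct2)
    dp = cp1 - cp2
    return 720.0 * math.sqrt(di * di + dt * dt + dp * dp)
```

In BT.2124, a ΔEITP value of 1 corresponds approximately to a just-noticeable color difference.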

Relation to IPT

The predecessor of ICTCP, the Ebner & Fairchild IPT color appearance model (1998), uses a largely similar transformation pipeline: input → LMS → nonlinearity → IPT. [3] [9] The differences are that IPT defines its input as the more general CIEXYZ tristimulus color space and, as a result, uses the more conventional Hunt–Pointer–Estevez matrix (for D65) for LMS. The nonlinearity is a fixed gamma of 0.43, quite close to the one used by RLAB. The second matrix is slightly different from the ICTCP matrix, mainly in that it also weights S (the blue cone response) into intensity; in ICTCP, a rotation matrix (to align skin tones) and a scaling matrix (to fit the full BT.2020 gamut inside the −0.5 to 0.5 region) are additionally multiplied into this matrix: [2] [10]

  1. Calculate LMS (see LMS color space § Hunt, RLAB for D65; slightly different [3] ):
     L = 0.4002 X + 0.7075 Y − 0.0807 Z
     M = −0.2280 X + 1.1500 Y + 0.0612 Z
     S = 0.9184 Z
  2. Nonlinearity (L′M′S′): apply a power function to each of the L, M, S components, preserving sign:
     L′ = L^0.43 if L ≥ 0, otherwise L′ = −(−L)^0.43 (likewise for M′ and S′)
  3. Calculate IPT:
     I = 0.4000 L′ + 0.4000 M′ + 0.2000 S′
     P = 4.4550 L′ − 4.8510 M′ + 0.3960 S′
     T = 0.8056 L′ + 0.3572 M′ − 1.1628 S′
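The IPT pipeline above can be sketched in the same way, using the matrices and the 0.43 gamma published by Ebner & Fairchild (the function name is illustrative):

```python
def xyz_to_ipt(x, y, z):
    """Ebner & Fairchild (1998) IPT forward transform from D65 CIEXYZ."""
    # 1. XYZ -> LMS (Hunt-Pointer-Estevez matrix, D65-normalized)
    l = 0.4002 * x + 0.7075 * y - 0.0807 * z
    m = -0.2280 * x + 1.1500 * y + 0.0612 * z
    s = 0.9184 * z

    # 2. Signed power-function nonlinearity with gamma 0.43
    def f(v):
        return v ** 0.43 if v >= 0 else -((-v) ** 0.43)

    lp, mp, sp = f(l), f(m), f(s)
    # 3. L'M'S' -> IPT (note that intensity also weights the S channel,
    #    unlike the intensity row of the ICtCp matrix)
    i = 0.4000 * lp + 0.4000 * mp + 0.2000 * sp
    p = 4.4550 * lp - 4.8510 * mp + 0.3960 * sp
    t = 0.8056 * lp + 0.3572 * mp - 1.1628 * sp
    return i, p, t
```

For the D65 white point (X, Y, Z ≈ 0.9505, 1.0, 1.0888) the LMS values all come out near 1, so I ≈ 1 and P and T are near zero, as expected for a neutral stimulus.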

IPTPQc2


IPTPQc2 is another related color space, used by Dolby Vision profile 5 BL+RPU (without EL). [11] The "c2" in the name means that a crosstalk matrix with c = 2% is used. It uses full-range quantization (0–1023 for 10-bit video; no values are reserved). It is also often referred to as IPTPQc2/IPT, as the matrix is in fact the same as in the 1998 IPT paper, just in inverse representation. [12] Documentation on this format is scarce due to its proprietary nature, but a patent [13] on the "IPT-PQ" (perceptually quantized IPT) color space seems to describe how Dolby changed the domain to PQ by replacing the traditional power function of the 1998 IPT paper with the PQ function for each of the LMS components.[speculation?] The matrix is as follows:

Note the matrix inversion used; an error was made in the patent in the 1091 entry[clarification needed] of the matrix (the matrix after inversion is correct in the patent). In addition, this format applies no additional nonlinearity and is assumed to be BT.2020-based. [14]
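The 2% crosstalk stage can be illustrated with a short sketch. The matrix form below, with (1 − 2c) on the diagonal and c elsewhere, is the usual crosstalk construction implied by the "c2" naming; the helper names are illustrative, not from any Dolby specification:

```python
def crosstalk_matrix(c=0.02):
    """3x3 crosstalk matrix: mixes a fraction c of each other channel in."""
    return [[1 - 2 * c if row == col else c for col in range(3)]
            for row in range(3)]

def apply_crosstalk(lms, c=0.02):
    """Apply the crosstalk matrix to an LMS triplet."""
    mat = crosstalk_matrix(c)
    return [sum(mat[r][k] * lms[k] for k in range(3)) for r in range(3)]
```

Each row sums to 1, so the neutral axis (L = M = S) is preserved; the mixing slightly desaturates the cone responses before further encoding.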

The second step, dynamic range adjustment modeling ("reshaping" [15] ), is also defined in the patent.

It is used by Disney+, Apple TV+ and Netflix.[citation needed]

A decoder for IPTPQc2 with reshaping and MMR (but without NLQ or dynamic metadata) is available in libplacebo. [16]

Support for decoding all stages was added in mpv.

Characteristics

ICTCP has near-constant luminance, which improves chroma subsampling versus YCBCR. [17] ICTCP also improves hue linearity compared with YCBCR, which helps with compression performance and color volume mapping. [18] [19] When combined with adaptive reshaping, ICTCP can improve compression performance by 10%. [20] In terms of CIEDE2000 color quantization errors, 10-bit ICTCP is equivalent to 11.5-bit YCBCR, [2] which motivated the introduction of the ΔEITP metric as ITU-R Rec. BT.2124; [21] it is already used in Calman. Luminance constancy is also improved: ICTCP has a luminance relationship of 0.998 between luma and encoded brightness, while YCBCR has a luminance relationship of 0.819. [2] Improved constant luminance is an advantage for color processing operations such as chroma subsampling and gamut mapping, where only the color difference information is changed. [2]

Uses

ICTCP is supported in the HEVC video coding standard. [22] It is also a digital YCC format and can be signaled in the EDID Colorimetry data block as part of CTA-861-H.

References

  1. "BT.2100-2: Image parameter values for high dynamic range television for use in production and international programme exchange". ITU-R. July 2018.
  2. "What Is ICtCp – Introduction?" (PDF). Dolby. Retrieved 2016-04-20.
  3. Ebner, Fritz (1998-07-01). "Derivation and modelling hue uniformity and development of the IPT color space". Theses.
  4. Ebner, F.; Fairchild, M. D. "Development and testing of a color space (IPT) with improved hue uniformity". In: Proceedings of the Sixth Color Imaging Conference, 8–13, 1998.
  5. "ST 2084:2014". Society of Motion Picture and Television Engineers.
  6. "A DTV Profile for Uncompressed High Speed Digital Interfaces (ANSI/CTA-861-H)". Consumer Technology Association. Retrieved 2021-03-11.
  7. "ITU-T Recommendation database". ITU. hdl:11.1002/1000/13441. Retrieved 2020-11-14.
  8. "Recommendation ITU-R BT.2124-0: Objective metric for the assessment of the potential visibility of colour differences in television" (PDF). January 2019.
  9. Ebner, Fritz; Fairchild, Mark D. (1998-01-01). "Development and Testing of a Color Space (IPT) with Improved Hue Uniformity". Color and Imaging Conference. 1998 (1): 8–13.
  10. Xue, Yang (1 November 2008). "Uniform color spaces based on CIECAM02 and IPT color difference equations". RIT Theses: 7.
  11. Dolby. "Dolby Vision Profiles and Levels Version 1.3.2 – Specification" (PDF). Archived from the original (PDF) on 29 September 2020. Retrieved 27 April 2021.
  12. "Dolby Vision with wrong colors · Issue #7326 · mpv-player/mpv". GitHub.
  13. US patent 20180131938A1, Lu, Taoran; Pu, Fangjun; Yin, Peng et al., "Signal reshaping and coding in the IPT-PQ color space", published 2018-05-10, issued 2019-11-19, assigned to Dolby Laboratories Licensing Corp.
  14. "testing-av/testing-video: IPTPQc2.java". GitHub.
  15. "Description of the reshaper parameters derivation process in ETM reference software". phenix.it-sudparis.eu. Retrieved 2020-11-14.
  16. "colorspace: add support for Dolby Vision (!207) · Merge requests · VideoLAN / libplacebo". GitLab. Retrieved 2021-12-11.
  17. "Subsampling in ICtCp vs YCbCr" (PDF). Dolby Laboratories, Inc. Archived from the original (PDF) on 20 September 2020.
  18. "ITP Colour Space and Its Compression Performance for High Dynamic Range and Wide Colour Gamut Video Distribution". ZTE.
  19. Cotton, Andrew; Thompson, Simon (2018). "Scene-light conversions: the key to enabling live HDR production". SMPTE 2018. pp. 10–11. doi:10.5594/M001822. ISBN 978-1-61482-960-7. S2CID 188363770.
  20. "Evaluation of ICtCp color space and an Adaptive Reshaper for HDR and WCG". IEEE. 2018. doi:10.1109/MCE.2017.2714696. S2CID 4800923.
  21. "BT.2124: Objective metric for the assessment of the potential visibility of colour differences in television". www.itu.int. Retrieved 24 June 2020.
  22. Peng Yin; Chad Fogg; Gary J. Sullivan; Alexis Michael Tourapis (2016-03-19). "Draft text for ICtCp support in HEVC (Draft 1)". JCT-VC. Retrieved 2016-04-20.