Color depth or colour depth (see spelling differences), also known as bit depth, is either the number of bits used to indicate the color of a single pixel, in a bitmapped image or video framebuffer, or the number of bits used for each color component of a single pixel. For consumer video standards, the bit depth specifies the number of bits used for each color component. When referring to a pixel, the concept can be defined as bits per pixel (bpp). When referring to a color component, the concept can be defined as bits per component, bits per channel, bits per color (all three abbreviated bpc), and also bits per pixel component, bits per color channel or bits per sample (bps).
Color depth is only one aspect of color representation, expressing the precision with which the amount of each primary can be expressed; the other aspect is how broad a range of colors can be expressed (the gamut). The definition of both color precision and gamut is accomplished with a color encoding specification which assigns a digital code value to a location in a color space.
The number of bits of resolved intensity in a color channel is also known as radiometric resolution, especially in the context of satellite images.
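The relationship between bits per pixel and the number of representable values is a simple power of two; a short sketch (the bit depths listed are common examples, not an exhaustive set):

```python
# Number of distinct values a direct-color pixel of a given depth can encode: 2 ** bpp.
colors = {bpp: 2 ** bpp for bpp in (1, 4, 8, 16, 24, 30)}
for bpp, n in colors.items():
    print(f"{bpp:2d} bpp -> {n:,} colors")
```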
With relatively low color depths, the stored value is typically a number representing the index into a color map or palette (a form of vector quantization). The colors available in the palette itself may be fixed by the hardware or modifiable by software. Modifiable palettes are sometimes referred to as pseudocolor palettes.
Old graphics chips, particularly those used in home computers and video game consoles, often have the ability to use a different palette per sprite or tile in order to increase the maximum number of simultaneously displayed colors, while minimizing use of then-expensive memory (and bandwidth). For example, in the ZX Spectrum the picture is stored in a two-color format, but these two colors can be separately defined for each rectangular block of 8×8 pixels.
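The memory saving of the ZX Spectrum's per-block scheme can be sketched with a little arithmetic; the screen dimensions are the real Spectrum values, while the comparison format (4 bits of direct color per pixel) is chosen purely for illustration:

```python
# Memory cost of the ZX Spectrum's 8x8-block attribute scheme versus
# storing 4 bits (16 colors) for every pixel directly.

WIDTH, HEIGHT = 256, 192      # ZX Spectrum pixel resolution
BLOCK = 8                     # attribute block size (8x8 pixels)

# 1 bit per pixel for the two-color bitmap
bitmap_bytes = WIDTH * HEIGHT // 8                   # 6144 bytes

# 1 attribute byte per 8x8 block (defines the block's two colors)
attr_bytes = (WIDTH // BLOCK) * (HEIGHT // BLOCK)    # 768 bytes

spectrum_total = bitmap_bytes + attr_bytes           # 6912 bytes

# For comparison: 4 bits of direct color per pixel
direct_4bpp = WIDTH * HEIGHT * 4 // 8                # 24576 bytes

print(spectrum_total, direct_4bpp)
```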
The palette itself has a color depth (number of bits per entry). While the best VGA systems only offered an 18-bit (262,144 color) palette from which colors could be chosen, all color Macintosh video hardware offered a 24-bit (16 million color) palette. A 24-bit palette is all but universal on recent hardware and in file formats that use palettes.
If instead the color can be directly determined from the pixel values, it is "direct color". Palettes were rarely used for depths greater than 12 bits per pixel, as the memory consumed by the palette would exceed the memory needed for direct color on every pixel.
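The storage tradeoff behind indexed color can be sketched as follows; a 24-bit palette entry and a 640×480 image are assumptions chosen for illustration:

```python
# Storage for an indexed image (per-pixel indices plus the palette itself)
# versus direct color. A 24-bit palette entry is assumed; real hardware
# palettes varied (e.g. VGA stored 18 bits per entry).

def indexed_bits(pixels, depth, entry_bits=24):
    """Index bits per pixel, plus one palette entry per possible index."""
    return pixels * depth + (2 ** depth) * entry_bits

def direct_bits(pixels, bits_per_pixel=24):
    return pixels * bits_per_pixel

pixels = 640 * 480
for depth in (4, 8, 12, 16):
    # The palette term grows exponentially with depth, eroding the savings.
    print(depth, indexed_bits(pixels, depth), direct_bits(pixels))
```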
2 colors, often black and white (or whatever color the CRT phosphor was), as direct color. Sometimes 1 meant black and 0 meant white, the inverse of modern standards. Most of the first graphics displays were of this type; the X Window System was developed for such displays, and this was assumed for a 3M computer. Used by the first Macintoshes and the Atari ST high-resolution mode. In the late 1980s there were professional displays with resolutions up to 300 dpi (the same as a contemporary laser printer), but color proved more popular.
4 colors, usually from a selection of fixed palettes. The CGA, gray-scale early NeXTstation, color Macintoshes, Atari ST medium resolution.
8 colors, almost always all combinations of full-intensity red, green, and blue. Many early home computers with TV displays, including the ZX Spectrum and BBC Micro.
16 colors, usually from a selection of fixed palettes. Used by the EGA and by the least common denominator VGA standard at higher resolution, color Macintoshes, Atari ST low resolution, Commodore 64, Amstrad CPC.
32 colors from a programmable palette, used by the Original Amiga chipset.
256 colors, usually from a fully-programmable palette. Most early color Unix workstations, VGA at low resolution, Super VGA, color Macintoshes, Atari TT, Amiga AGA chipset, Falcon030, Acorn Archimedes. Both X and Windows provided elaborate systems to try to allow each program to select its own palette, often resulting in incorrect colors in any window other than the one with focus.
Some systems placed a color cube in the palette for a direct-color system (and so all programs would use the same palette). Usually fewer levels of blue were provided than of the others, as the normal human eye is less sensitive to the blue component than to the red or green (two thirds of the eye's receptors process the longer wavelengths). Popular sizes were:
4096 colors, usually from a fully-programmable palette (though it was often set to a 16×16×16 color cube). Some Silicon Graphics systems, Color NeXTstation systems, and Amiga systems in HAM mode.
In high-color systems, two bytes (16 bits) are stored for each pixel. Most often, each component (R, G, and B) is assigned 5 bits, plus one unused bit (or one used for a mask channel or to switch to indexed color); this allows 32,768 colors to be represented. An alternative assignment that gives the spare bit to the green channel allows 65,536 colors to be represented, but without transparency. These color depths are sometimes used in small devices with a color display, such as mobile phones, and are sometimes considered sufficient to display photographic images. Occasionally 4 bits per color are used, plus 4 bits for alpha, giving 4096 colors.
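The two assignments above correspond to the common RGB555 and RGB565 layouts. A minimal sketch of the 5-6-5 packing (the bit ordering shown is the conventional one, with red in the high bits; some hardware differs):

```python
# Packing 8-bit components into a 16-bit RGB565 "high color" value.
# Red and blue get 5 bits each; the extra bit goes to green.

def pack_rgb565(r, g, b):
    """Truncate 8-bit components and pack them into 16 bits."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(v):
    """Expand back to 8-bit components by replicating the high bits."""
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

v = pack_rgb565(255, 128, 0)
print(hex(v), unpack_rgb565(v))
```

Note that the round trip is lossy: the low bits discarded during packing can only be approximated on unpacking, which is the quantization cost of 16-bit color.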
The term "high color" has recently been used to mean color depths greater than 24 bits.
Almost all of the least expensive LCDs (such as typical twisted nematic types) provide 18-bit color (64×64×64 = 262,144 combinations) to achieve faster color transition times, and use either dithering or frame rate control to approximate 24-bit-per-pixel true color, or throw away 6 bits of color information entirely. More expensive LCDs (typically IPS) can display 24-bit color depth or greater.
24 bits almost always use 8 bits each of R, G, and B. As of 2018, 24-bit color depth is used by virtually every computer and phone display and the vast majority of image storage formats. Almost all cases of 32 bits per pixel assign 24 bits to the color, with the remaining 8 serving as the alpha channel or left unused.
2^24 gives 16,777,216 color variations. The human eye can discriminate up to ten million colors, and since the gamut of a display is smaller than the range of human vision, this should cover that range in more detail than can be perceived. However, displays do not evenly distribute the colors in human perception space, so humans can see the changes between some adjacent colors as color banding. Monochromatic images set all three channels to the same value, resulting in only 256 different levels and thus, potentially, more visible banding, as the average human eye can only distinguish about 30 shades of gray. Some software attempts to dither the gray level into the color channels to increase this, although in modern software this is more often used for subpixel rendering to increase the spatial resolution on LCD screens, where the subpixel colors have slightly different positions.
The DVD-Video and Blu-ray Disc standards support a bit depth of 8 bits per color in YCbCr with 4:2:0 chroma subsampling. YCbCr can be losslessly converted to RGB.
Macintosh systems refer to 24-bit color as "millions of colors". The term true color is sometimes used to mean what this article is calling direct color. It is also often used to refer to all color depths greater than or equal to 24.
Deep color consists of a billion or more colors. 2^30 is approximately 1.073 billion. Usually this is 10 bits each of red, green, and blue. If an alpha channel of the same size is added, then each pixel takes 40 bits.
Some earlier systems placed three 10-bit channels in a 32-bit word, with 2 bits unused (or used as a 4-level alpha channel); the Cineon file format, for example, used this. Some SGI systems had 10-bit (or more) digital-to-analog converters for the video signal and could be set up to interpret data stored this way for display. The BMP file format defines this as one of its variants, and it is called "HiColor" by Microsoft.
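A sketch of the three-channels-in-a-32-bit-word layout described above; the exact bit ordering varies between formats (Cineon, BMP, and modern 10-bit framebuffer formats all differ), so the arrangement here is illustrative only:

```python
# Packing three 10-bit components and a 2-bit alpha into one 32-bit word.
# Bit layout (alpha in the top 2 bits, then R, G, B) is an assumption
# for illustration; real formats order the fields differently.

def pack_rgb10(r, g, b, a2=0):
    assert 0 <= r < 1024 and 0 <= g < 1024 and 0 <= b < 1024 and 0 <= a2 < 4
    return (a2 << 30) | (r << 20) | (g << 10) | b

def unpack_rgb10(v):
    return ((v >> 20) & 0x3FF, (v >> 10) & 0x3FF, v & 0x3FF, (v >> 30) & 0x3)

v = pack_rgb10(1023, 512, 0)
print(hex(v), unpack_rgb10(v))
```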
Video cards with 10 bits per component started coming to market in the late 1990s. An early example was the Radius ThunderPower card for the Macintosh, which included extensions for QuickDraw and Adobe Photoshop plugins to support editing 30-bit images. Some vendors market 24-bit panels with FRC as 30-bit panels; however, true deep-color displays have 10-bit or greater color depth without FRC.
The HDMI 1.3 specification defines a bit depth of 30 bits (as well as 36- and 48-bit depths). In that regard, Nvidia Quadro graphics cards manufactured after 2006 support 30-bit deep color, as do Pascal or later GeForce and Titan cards when paired with the Studio Driver, and some models of the Radeon HD 5900 series such as the HD 5970. The ATI FireGL V7350 graphics card supports 40- and 64-bit pixels (30- and 48-bit color depth with an alpha channel).
The DisplayPort specification also supports color depths greater than 24 bpp in version 1.3 through "VESA Display Stream Compression, which uses a visually lossless low-latency algorithm based on predictive DPCM and YCoCg-R color space and allows increased resolutions and color depths and reduced power consumption."
At WinHEC 2008, Microsoft announced that color depths of 30 bits and 48 bits would be supported in Windows 7, along with the wide color gamut scRGB.
High Efficiency Video Coding (HEVC or H.265) defines the Main 10 profile, which allows for 8 or 10 bits per sample with 4:2:0 chroma subsampling. The Main 10 profile was added at the October 2012 HEVC meeting based on proposal JCTVC-K0109, which proposed that a 10-bit profile be added to HEVC for consumer applications. The proposal stated that this was to allow for improved video quality and to support the Rec. 2020 color space that will be used by UHDTV. The second version of HEVC has five profiles that allow for a bit depth of 8 to 16 bits per sample.
As of 2020, some smartphones have started using 30-bit color depth, such as the OnePlus 8 Pro, Oppo Find X2 & Find X2 Pro, Sony Xperia 1 II, Xiaomi Mi 10 Ultra, Motorola Edge+, ROG Phone 3 and Sharp Aquos Zero 2.
Using 12 bits per color channel produces 36 bits, approximately 68.71 billion colors. If an alpha channel of the same size is added then there are 48 bits per pixel.
Using 16 bits per color channel produces 48 bits, approximately 281.5 trillion colors. If an alpha channel of the same size is added then there are 64 bits per pixel.
Image editing software such as Photoshop started using 16 bits per channel fairly early in order to reduce quantization in intermediate results (i.e. if a value is divided by 4 and then multiplied by 4, the bottom 2 bits of 8-bit data are lost, but if 16 bits are used none of the 8-bit data is lost). In addition, digital cameras were able to produce 10 or 12 bits per channel in their raw data; as 16 bits is the smallest addressable unit larger than that, using it allows the raw data to be manipulated.
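The divide-and-multiply example can be worked through directly. A minimal sketch, using an arbitrary 8-bit sample value and the conventional ×256 scaling to promote it to 16 bits:

```python
# Integer-divide by 4 and multiply back by 4: at 8 bits the bottom
# 2 bits are lost; at 16-bit precision the original 8 bits survive.

value8 = 157                          # an 8-bit sample (arbitrary)
lossy = (value8 // 4) * 4             # bottom 2 bits gone: 156

value16 = value8 * 256                # same sample held at 16-bit precision
kept = ((value16 // 4) * 4) // 256    # back to 8 bits: 157, nothing lost

print(lossy, kept)
```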
Some systems started using those bits for numbers outside the 0–1 range rather than for increasing the resolution. Numbers greater than 1 represent colors brighter than the display can show, as in high-dynamic-range imaging (HDRI). Negative numbers can extend the gamut to cover all possible colors, and can store the results of filtering operations with negative filter coefficients. The Pixar Image Computer used 12 bits to store numbers in the range [−1.5, 2.5), with 2 bits for the integer portion and 10 for the fraction. The Cineon imaging system used 10-bit professional video displays with the video hardware adjusted so that a value of 95 was black and 685 was white. The amplified signal tended to reduce the lifetime of the CRT.
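A 12-bit channel covering [−1.5, 2.5) in the spirit of the Pixar Image Computer format can be sketched as offset fixed point: 10 fraction bits give a step of 1/1024, and a bias of 1.5 maps the range onto [0, 4). The exact bias and rounding used by the real hardware are assumptions here:

```python
# 12-bit fixed-point channel for [-1.5, 2.5): 2 integer bits, 10 fraction bits.

STEP = 1.0 / 1024     # 10 fraction bits
OFFSET = 1.5          # shifts [-1.5, 2.5) onto [0, 4)

def encode(x):
    code = int((x + OFFSET) / STEP)   # truncating quantizer (an assumption)
    assert 0 <= code < 4096           # must fit in 12 bits
    return code

def decode(code):
    return code * STEP - OFFSET

# Values of 0, -1.5 and 2.0 are exactly representable and round-trip cleanly.
print(decode(encode(0.0)), decode(encode(-1.5)), decode(encode(2.0)))
```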
More bits also encouraged the storage of light as linear values, where the number directly corresponds to the amount of light emitted. Linear levels make calculation of light (in the context of computer graphics) much easier. However, linear encoding spends disproportionately many samples near white and few near black, so the quality of 16-bit linear is about equal to 12-bit sRGB.
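The uneven distribution can be quantified with the standard sRGB transfer function: the darkest 1% of linear light, where the eye is most sensitive to steps, occupies roughly a tenth of the 8-bit sRGB code range, whereas a linear encoding would give it only 1% of its codes. A sketch:

```python
# How many 8-bit sRGB codes cover the darkest 1% of linear light.

def linear_to_srgb(x):
    """IEC 61966-2-1 sRGB transfer function (0..1 in, 0..1 out)."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

dark_codes = round(linear_to_srgb(0.01) * 255)
print(dark_codes, "of 256 sRGB codes cover the darkest 1% of linear light")
```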
Floating-point numbers can represent linear light levels by spacing the samples semi-logarithmically. Floating-point representations also allow for drastically larger dynamic ranges as well as negative values. Most systems first supported 32-bit-per-channel single precision, which far exceeded the accuracy required for most applications. In 1999, Industrial Light & Magic released the open standard image file format OpenEXR, which supports 16-bit-per-channel half-precision floating-point numbers. At values near 1.0, half-precision floating-point values have only the precision of an 11-bit integer, leading some graphics professionals to reject half precision in situations where the extended dynamic range is not needed.
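The precision claim can be checked directly: half-precision values in [1.0, 2.0) carry a 10-bit mantissa, so their spacing there is 2^-10, comparable to an 11-bit integer scale over that interval. A sketch using the `struct` module's half-precision format code:

```python
# Spacing of IEEE 754 half-precision floats just above 1.0.
import struct

def to_half_and_back(x):
    """Round-trip a Python float through half precision ('e' format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

ulp_at_1 = 2.0 ** -10                               # half-float spacing in [1.0, 2.0)
print(to_half_and_back(1.0 + ulp_at_1) - 1.0)       # one ulp above 1.0 survives
print(to_half_and_back(1.0 + ulp_at_1 / 4) - 1.0)   # smaller steps round back to 1.0
```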
Virtually all television displays and computer displays form images by varying the strength of just three primary colors: red, green, and blue. For example, bright yellow is formed by roughly equal red and green contributions, with no blue contribution.
Additional color primaries can widen the color gamut of a display, since it is no longer limited to the shape of a triangle in the CIE 1931 color space. Recent technologies such as Texas Instruments's BrilliantColor augment the typical red, green, and blue channels with up to three other primaries: cyan, magenta and yellow. The Sharp Aquos line of televisions has introduced Quattron technology, which augments the usual RGB pixel components with a yellow subpixel. However, formats and media supporting these extended color primaries are extremely uncommon. Mitsubishi and Samsung, among others, use this technology in some TV sets to extend the range of displayable colors.
For storing and working on images, it is possible to use "imaginary" primary colors that are not physically realizable, so that the triangle of primaries encloses a much larger gamut. Whether more than three physical primaries makes a visible difference to the human eye is not proven, since humans are primarily trichromats, though tetrachromats exist.
Portable Network Graphics is a raster-graphics file format that supports lossless data compression. PNG was developed as an improved, non-patented replacement for Graphics Interchange Format (GIF).
In computer graphics, a Raster graphics or Bitmap image is a dot matrix data structure that represents a generally rectangular grid of pixels, viewable via a monitor, paper, or other display medium. Raster images are stored in image files with varying formats.
The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors, red, green, and blue.
Video Graphics Array (VGA) is a video display controller and accompanying de facto graphics standard, first introduced with the IBM PS/2 line of computers in 1987, which became ubiquitous in the PC industry within three years. The term can now refer either to the computer display standard, the 15-pin D-subminiature VGA connector, or the 640×480 resolution characteristic of the VGA hardware.
High color graphics is a method of storing image information in a computer's memory such that each pixel is represented by two bytes. Usually the color is represented by all 16 bits, but some devices also support 15-bit high color.
A framebuffer is a portion of random-access memory (RAM) containing a bitmap that drives a video display. It is a memory buffer containing data representing all the pixels in a complete video frame. Modern video cards contain framebuffer circuitry in their cores. This circuitry converts an in-memory bitmap into a video signal that can be displayed on a computer monitor.
The Enhanced Graphics Adapter (EGA) is an IBM PC graphics adapter and de facto computer display standard from 1984 that superseded the CGA standard introduced with the original IBM PC, and was itself superseded by the VGA standard in 1987. In addition to the original EGA card manufactured by IBM, many compatible third-party cards were manufactured, and EGA graphics modes continued to be supported by VGA and later standards.
The BMP file format, also known as bitmap image file, device independent bitmap (DIB) file format and bitmap, is a raster graphics image file format used to store bitmap digital images, independently of the display device, especially on Microsoft Windows and OS/2 operating systems.
The Color Graphics Adapter (CGA), originally also called the Color/Graphics Adapter or IBM Color/Graphics Monitor Adapter, introduced in 1981, was IBM's first color graphics card for the IBM PC and established a de facto computer display standard.
A thin-film-transistor liquid-crystal display is a variant of a liquid-crystal display (LCD) that uses thin-film-transistor (TFT) technology to improve image qualities such as addressability and contrast. A TFT LCD is an active matrix LCD, in contrast to passive matrix LCDs or simple, direct-driven LCDs with a few segments.
Hold-And-Modify, usually abbreviated as HAM, is a display mode of the Commodore Amiga computer. It uses a highly unusual technique to express the color of pixels, allowing many more colors to appear on screen than would otherwise be possible. HAM mode was commonly used to display digitized photographs or video frames, bitmap art and occasionally animation. At the time of the Amiga's launch in 1985, this near-photorealistic display was unprecedented for a home computer and it was widely used to demonstrate the Amiga's graphical capability. However, HAM has significant technical limitations which prevent it from being used as a general purpose display mode.
Color digital images are made of pixels, and pixels are made of combinations of primary colors represented by a series of code. A channel in this context is the grayscale image of the same size as a color image, made of just one of these primary colors. For instance, an image from a standard digital camera will have a red, green and blue channel. A grayscale image has just one channel.
In computer graphics, a palette, also called color lookup table (CLUT), is a correspondence table in which selected colors from a certain color space's color reproduction range are assigned an index, by which they can be referenced. By referencing the colors via an index, which takes less information than the one needed to describe the actual colors in said color space, this technique aims to reduce data usage, be it as processing payload, transfer bandwidth, RAM usage or persistent storage. Images in which colors are indicated by references to a CLUT are called indexed color images.
In computing, indexed color is a technique to manage digital images' colors in a limited fashion, in order to save computer memory and file storage, while speeding up display refresh and file transfers. It is a form of vector quantization compression.
ITU-R Recommendation BT.2020, more commonly known by the abbreviations Rec. 2020 or BT.2020, defines various aspects of ultra-high-definition television (UHDTV) with standard dynamic range (SDR) and wide color gamut (WCG), including picture resolutions, frame rates with progressive scan, bit depths, color primaries, RGB and luma-chroma color representations, chroma subsamplings, and an opto-electronic transfer function. The first version of Rec. 2020 was posted on the International Telecommunication Union (ITU) website on August 23, 2012, and two further editions have been published since then. It is expanded in several ways by Rec. 2100.