A compression artifact (or artefact) is a noticeable distortion of media (including images, audio, and video) caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth (known as the data rate or bit rate). If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user.
The most common digital compression artifacts are DCT blocks, caused by the discrete cosine transform (DCT) compression algorithm used in many digital media standards, such as the JPEG, MP3, and MPEG video file formats.[1][2][3] These artifacts appear when heavy compression is applied[1] and occur often in everyday digital media, such as DVDs, file formats such as JPEG, MP3 and MPEG, and some alternatives to the compact disc, such as Sony's MiniDisc format. Uncompressed media (such as LaserDiscs, audio CDs, and WAV files) and losslessly compressed media (such as FLAC or PNG) do not suffer from compression artifacts.
The minimization of perceivable artifacts is a key goal in implementing a lossy compression algorithm. However, artifacts are occasionally intentionally produced for artistic purposes, a style known as glitch art[4] or datamoshing.[5]
Technically speaking, a compression artifact is a particular class of data error that is usually the consequence of quantization in lossy data compression. Where transform coding is used, it typically assumes the form of one of the basis functions of the coder's transform space.
When performing block-based discrete cosine transform (DCT)[1] coding with quantization, as in JPEG-compressed images, several types of artifacts can appear.
Other lossy algorithms, which use pattern matching to deduplicate similar symbols, are prone to introducing hard-to-detect errors in printed text, in which, for example, the digits "6" and "8" may be substituted for one another. This has been observed with JBIG2 in certain photocopiers.[6][7]
At low bit rates, any lossy block-based coding scheme introduces visible artifacts in pixel blocks and at block boundaries. These boundaries can be transform block boundaries, prediction block boundaries, or both, and may coincide with macroblock boundaries. The term macroblocking is commonly used regardless of the artifact's cause. Other names include blocking,[8] tiling,[9] mosaicing, pixelating, quilting, and checkerboarding.
Block artifacts are a result of the very principle of block transform coding. The transform (for example the discrete cosine transform) is applied to a block of pixels, and to achieve lossy compression, the transform coefficients of each block are quantized. The lower the bit rate, the more coarsely the coefficients are represented and the more coefficients are quantized to zero. Statistically, images have more low-frequency than high-frequency content, so it is the low-frequency content that remains after quantization, which results in blurry, low-resolution blocks. In the most extreme case, only the DC coefficient, that is, the coefficient representing the average color of the block, is retained, and the transform block reconstructs to a single uniform color.
Because this quantization process is applied individually in each block, neighboring blocks quantize coefficients differently. This leads to discontinuities at the block boundaries. These are most visible in flat areas, where there is little detail to mask the effect.
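The effect can be illustrated with a short sketch (a minimal illustration using NumPy and SciPy; the 8×8 block size, the synthetic gradient image, and the single uniform quantization step are simplifying assumptions, whereas real codecs such as JPEG use frequency-dependent quantization tables):

```python
# Minimal sketch: independent per-block DCT quantization producing blocking.
# The 8x8 block size and the single uniform quantization step are illustrative;
# real codecs such as JPEG use frequency-dependent quantization tables.
import numpy as np
from scipy.fft import dctn, idctn

BLOCK = 8
Q = 120.0  # coarse quantization step

def compress_block(block, q=Q):
    """Forward 2-D DCT, scalar quantization, dequantization, inverse DCT."""
    coeffs = dctn(block, norm='ortho')
    quantized = np.round(coeffs / q)
    return idctn(quantized * q, norm='ortho')

# Synthetic image: a smooth diagonal gradient (flat regions show blocking most).
ramp = np.linspace(0.0, 255.0, 64)
image = (ramp[:, None] + ramp[None, :]) / 2.0

reconstructed = np.empty_like(image)
for r in range(0, image.shape[0], BLOCK):
    for c in range(0, image.shape[1], BLOCK):
        reconstructed[r:r + BLOCK, c:c + BLOCK] = \
            compress_block(image[r:r + BLOCK, c:c + BLOCK])

# Neighboring blocks are quantized independently, so the reconstruction jumps
# where two blocks meet; compare the step across a block edge with the original.
orig_step = np.abs(image[:, BLOCK - 1] - image[:, BLOCK]).mean()
recon_step = np.abs(reconstructed[:, BLOCK - 1] - reconstructed[:, BLOCK]).mean()
print(f"step across block edge: original {orig_step:.1f}, "
      f"reconstructed {recon_step:.1f}")
```

With such a coarse step, every AC coefficient rounds to zero, each block reconstructs to roughly its average value, and the printed step across the block boundary is several times larger than in the original smooth gradient.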
Various approaches have been proposed to reduce image compression effects, but to use standardized compression/decompression techniques and retain the benefits of compression (for instance, lower transmission and storage costs), many of these methods focus on "post-processing"—that is, processing images when received or viewed. No post-processing technique has been shown to improve image quality in all cases; consequently, none has garnered widespread acceptance, though some have been implemented and are in use in proprietary systems. Many photo editing programs, for instance, have proprietary JPEG artifact reduction algorithms built-in. Consumer equipment often calls this post-processing "MPEG Noise Reduction". [10]
Blocking artifacts in JPEG can be turned into more pleasing "grain" not unlike that of high-ISO photographic film. Instead of simply multiplying each quantized coefficient by the quantization step Q pertaining to its 2-D frequency, noise in the form of a random number in the interval [−Q/2, Q/2] can be added to the dequantized coefficient. This method can be built into JPEG decompressors as an integral step, working on the trillions of existing and future JPEG images; as such, it is not a "post-processing" technique.[11]
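A brief sketch of this idea (the function names, the sample coefficient block, and the single quantization step Q are illustrative placeholders; in JPEG the step varies per 2-D frequency according to the quantization table):

```python
# Sketch of dithered dequantization: rather than reconstructing each quantized
# DCT coefficient exactly as quantized_value * Q, add uniform noise within the
# quantization interval so that blocking is traded for film-like grain.
# The sample block and the single step Q are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

def dequantize_plain(quantized, Q):
    return quantized * Q

def dequantize_dithered(quantized, Q):
    noise = rng.uniform(-Q / 2.0, Q / 2.0, size=quantized.shape)
    return quantized * Q + noise

quantized_block = np.array([[4.0, -1.0, 0.0],
                            [1.0,  0.0, 0.0],
                            [0.0,  0.0, 0.0]])
Q = 16.0
print(dequantize_plain(quantized_block, Q))
print(dequantize_dithered(quantized_block, Q))
```

Because the added noise stays within the quantization interval, the dithered coefficients remain consistent with the data actually stored in the file.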
Ringing can be reduced at encoding time by deliberately overshooting the DCT values so that the rings are clamped away on reconstruction.[12]
Posterization generally happens only at low quality settings, when the DC coefficients are quantized too coarsely. Tuning the quantization table helps.[13]
When motion prediction is used, as in MPEG-1, MPEG-2 or MPEG-4, compression artifacts tend to persist over several successive decoded frames and move with the optical flow of the image, leading to a peculiar effect, part way between a painterly effect and "grime" that moves with objects in the scene.
Data errors in the compressed bit-stream, possibly due to transmission errors, can lead to errors similar to large quantization errors, or can disrupt the parsing of the data stream entirely for a short time, leading to "break-up" of the picture. Where gross errors have occurred in the bit-stream, decoders continue to apply updates to the damaged picture for a short interval, creating a "ghost image" effect, until receiving the next independently compressed frame. In MPEG picture coding, these are known as "I-frames", with the 'I' standing for "intra". Until the next I-frame arrives, the decoder can perform error concealment.
Block boundary discontinuities can occur at edges of motion compensation prediction blocks. In motion compensated video compression, the current picture is predicted by shifting blocks (macroblocks, partitions, or prediction units) of pixels from previously decoded frames. If two neighboring blocks use different motion vectors, there will be a discontinuity at the edge between the blocks.
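A minimal sketch (NumPy; the reference-frame contents, block size, and motion vectors are made up for illustration) of how two neighboring prediction blocks copied with different motion vectors leave a seam at their shared edge:

```python
# Sketch of block-based motion compensation: each block of the current frame
# is predicted by copying a block from the reference frame, displaced by the
# block's motion vector. Neighboring blocks with different vectors can leave a
# visible seam at their shared edge. Frame data and vectors are illustrative.
import numpy as np

BLOCK = 8
reference = np.tile(np.arange(32, dtype=float), (16, 1))  # horizontal ramp

def predict_block(ref, top, left, mv):
    """Copy a BLOCK x BLOCK region from the reference, displaced by mv=(dy, dx)."""
    dy, dx = mv
    return ref[top + dy: top + dy + BLOCK, left + dx: left + dx + BLOCK]

predicted = np.zeros((BLOCK, 2 * BLOCK))
predicted[:, :BLOCK] = predict_block(reference, 4, 8, (0, -3))   # left block
predicted[:, BLOCK:] = predict_block(reference, 4, 16, (0, +3))  # right block

# Different motion vectors shift the ramp by different amounts, so the value
# jumps where the two prediction blocks meet.
print(predicted[0, BLOCK - 2: BLOCK + 2])
```

The ramp, which increases by one per pixel in the reference frame, jumps by several values at the boundary between the two prediction blocks.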
Video compression artifacts also include cumulative effects of compressing the constituent still images; for instance, ringing or other edge busyness in successive still images appears in sequence as a shimmering blur of dots around edges, called mosquito noise because it resembles mosquitoes swarming around the object.[14][15] This "mosquito noise" is caused by the block-based discrete cosine transform (DCT) compression algorithm used in most video coding standards, such as the MPEG formats.[3]
The artifacts at block boundaries can be reduced by applying a deblocking filter. As in still image coding, it is possible to apply a deblocking filter to the decoder output as post-processing.
In motion-predicted video coding with a closed prediction loop, the encoder uses the decoder output as the prediction reference from which future frames are predicted. To that end, the encoder conceptually integrates a decoder. If this "decoder" performs deblocking, the deblocked picture is then used as the reference picture for motion compensation, which improves coding efficiency by preventing the propagation of block artifacts across frames. This is referred to as an in-loop deblocking filter. Standards which specify an in-loop deblocking filter include VC-1, H.263 Annex J, H.264/AVC, and H.265/HEVC.
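As a rough illustration (not the filter defined by any of these standards; the block size, threshold, and averaging weights are arbitrary choices), a deblocking filter can be sketched as smoothing the pixel pair straddling each vertical block boundary only when the step across the boundary is small enough to be an artifact rather than a real edge:

```python
# Simplified deblocking sketch: smooth the pixel pair straddling each vertical
# block boundary, but only where the step is small enough to be a compression
# artifact rather than a genuine edge. Block size, threshold, and weights are
# arbitrary; standards such as H.264/AVC define far more elaborate filters.
import numpy as np

def deblock_vertical(img, block=8, threshold=20.0):
    out = img.astype(float).copy()
    for edge in range(block, out.shape[1], block):
        p = out[:, edge - 1].copy()   # last column of the left block
        q = out[:, edge].copy()       # first column of the right block
        artifact = np.abs(p - q) < threshold   # leave strong (real) edges alone
        avg = (p + q) / 2.0
        out[:, edge - 1] = np.where(artifact, (p + avg) / 2.0, p)
        out[:, edge] = np.where(artifact, (q + avg) / 2.0, q)
    return out
```

In an in-loop configuration, the same filtering would be applied to each reconstructed frame before it is stored as a reference for motion compensation.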
Lossy audio compression typically works with a psychoacoustic model, a model of human hearing perception. Lossy audio formats typically involve the use of a time/frequency-domain transform, such as a modified discrete cosine transform (MDCT). With the psychoacoustic model, masking effects such as frequency masking and temporal masking are exploited, so that sounds that should be imperceptible are not recorded. For example, human beings are generally unable to perceive a quiet tone played simultaneously with a similar but louder tone. A lossy compression technique might identify this quiet tone and attempt to remove it. Quantization noise can also be "hidden" where it would be masked by more prominent sounds. At low compression ratios, a conservative psychoacoustic model is used with small block sizes.
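A toy illustration of simultaneous masking (using a plain FFT rather than the MDCT of real codecs; the tone frequencies, the 30 dB margin, and the spreading window are arbitrary choices, not an actual psychoacoustic model):

```python
# Toy illustration of frequency masking: spectral components that fall well
# below a stronger nearby component are treated as inaudible and discarded.
# Uses a plain FFT instead of an MDCT; the 30 dB margin and spreading window
# are arbitrary choices, not a real psychoacoustic model.
import numpy as np
from scipy.ndimage import maximum_filter1d

fs = 8000
t = np.arange(2048) / fs
# A loud 1000 Hz tone and a much quieter 1100 Hz tone played simultaneously.
signal = np.sin(2 * np.pi * 1000 * t) + 0.003 * np.sin(2 * np.pi * 1100 * t)

spectrum = np.fft.rfft(signal)
magnitude = np.abs(spectrum)

# Crude masking threshold: the strongest component within a window of
# neighboring bins, lowered by a fixed 30 dB margin.
local_peak = maximum_filter1d(magnitude, size=65)
threshold = local_peak * 10 ** (-30 / 20)

kept = np.where(magnitude >= threshold, spectrum, 0)

loud_bin = round(1000 * len(signal) / fs)
quiet_bin = round(1100 * len(signal) / fs)
print("loud tone kept: ", kept[loud_bin] != 0)   # True
print("quiet tone kept:", kept[quiet_bin] != 0)  # False: masked and discarded
```

In this sketch the quiet tone is discarded because it lies within the masking range of the louder tone, mirroring the example described above.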
If the psychoacoustic model is inaccurate, if the transform block size is constrained, or if aggressive compression is used, compression artifacts may result. Compression artifacts in compressed audio typically show up as ringing, pre-echo, "birdie artifacts", drop-outs, rattling, warbling, metallic ringing, an underwater feeling, hissing, or "graininess".
An example of compression artifacts in audio is applause in a relatively highly compressed audio file (e.g. a 96 kbit/s MP3). In general, musical tones have repeating waveforms and more predictable variations in volume, whereas applause is essentially random and therefore hard to compress. A highly compressed track of applause may exhibit "metallic ringing" and other compression artifacts.
Compression artifacts may intentionally be used as a visual style, sometimes known as glitch art. Rosa Menkman's glitch art makes use of compression artifacts,[16] particularly the discrete cosine transform blocks (DCT blocks) found in most digital media data compression formats such as JPEG digital images and MP3 digital audio.[2] In still images, an example is Jpegs by German photographer Thomas Ruff, which uses intentional JPEG artifacts as the basis of the picture's style.[17][18]
In video art, one technique used is datamoshing, where two videos are interleaved so that intermediate frames are interpolated from two separate sources. Another technique involves simply transcoding from one lossy video format to another, which exploits differences in how the two video codecs process motion and color information.[19] The technique was pioneered by artists such as Bertrand Planes (in collaboration with Christian Jacquemin, with DivXPrime in 2006),[20] Sven König, Takeshi Murata, Jacques Perconte, and Paul B. Davis in collaboration with Paperrad, and has more recently been used by David OReilly and in music videos for Chairlift and in Nabil Elderkin's "Welcome to Heartbreak" video for Kanye West.[21][22]
There is also a genre of internet memes in which often-nonsensical images are purposely compressed heavily, sometimes multiple times, for comedic effect. Images created this way are often referred to as "deep fried".[23]
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.
In information technology, lossy compression or irreversible compression is the class of data compression methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation create coarser images as more details are removed. This is opposed to lossless data compression which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than using lossless techniques.
MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) practical.
Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.
Transform coding is a type of data compression for "natural" data like audio signals or photographic images. The transformation is typically lossless on its own but is used to enable better quantization, which then results in a lower quality copy of the original input.
Motion compensation in computing is an algorithmic technique used to predict a frame in a video given the previous and/or future frames by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved.
A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder.
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT, first proposed by Nasir Ahmed in 1972, is a widely used transformation technique in signal processing and data compression. It is used in most digital media, including digital images, digital video, digital audio, digital television, digital radio, and speech coding. DCTs are also important to numerous other applications in science and engineering, such as digital signal processing, telecommunication devices, reducing network bandwidth usage, and spectral methods for the numerical solution of partial differential equations.
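For reference, the most common variant, the DCT-II used in JPEG and most media codecs, transforms a sequence of N samples x_0, ..., x_{N−1} into coefficients as follows (shown here in its unnormalized form; practical codecs add scaling factors):

```latex
X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right],
\qquad k = 0, 1, \ldots, N-1 .
```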
JPEG 2000 (JP2) is an image compression standard and coding system. It was developed from 1997 to 2000 by a Joint Photographic Experts Group committee chaired by Touradj Ebrahimi, with the intention of superseding their original JPEG standard, which is based on a discrete cosine transform (DCT), with a newly designed, wavelet-based method. The standardized filename extension is .jp2 for ISO/IEC 15444-1 conforming files and .jpx for the extended part-2 specifications, published as ISO/IEC 15444-2. The registered MIME types are defined in RFC 3745. For ISO/IEC 15444-1 it is image/jp2.
H.261 is an ITU-T video compression standard, first ratified in November 1988. It is the first member of the H.26x family of video coding standards in the domain of the ITU-T Study Group 16 Video Coding Experts Group. It was the first video coding standard that was useful in practical terms.
Cinepak is a lossy video codec developed by Peter Barrett at SuperMac Technologies, and released in 1991 with the Video Spigot, and then in 1992 as part of Apple Computer's QuickTime video suite. One of the first video compression tools to achieve full motion video on CD-ROM, it was designed to encode 320×240 resolution video at 1× CD-ROM transfer rates. The original name of this codec was Compact Video, which is why its FourCC identifier is CVID. The codec was ported to Microsoft Windows in 1993. It was also used on fourth- and fifth-generation game consoles, such as the Atari Jaguar CD, Sega CD, Sega Saturn, and 3DO. libavcodec includes a Cinepak decoder and an encoder, both licensed under the terms of the LGPL.
Quantization, involved in image processing, is a lossy compression technique achieved by compressing a range of values to a single quantum (discrete) value. When the number of discrete symbols in a given stream is reduced, the stream becomes more compressible. For example, reducing the number of colors required to represent a digital image makes it possible to reduce its file size. Specific applications include DCT data quantization in JPEG and DWT data quantization in JPEG 2000.
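A minimal sketch of scalar quantization (NumPy; the sample values and the step size of 32, giving eight levels for 8-bit data, are illustrative choices):

```python
# Minimal sketch of scalar quantization: mapping a range of sample values to a
# single discrete level. Reducing the number of levels makes the data more
# repetitive and therefore more compressible. The step size of 32 (8 levels
# for 8-bit data) is an illustrative choice.
import numpy as np

step = 32
pixels = np.array([3, 14, 15, 92, 65, 35, 89, 79, 251], dtype=np.uint8)

indices = pixels // step                      # quantize: level index per value
reconstructed = indices * step + step // 2    # dequantize to each level's center

print(indices)        # [0 0 0 2 2 1 2 2 7]  -> only a few distinct symbols
print(reconstructed)  # [ 16  16  16  80  80  48  80  80 240]
```

After quantization only a handful of distinct symbols remain, which a subsequent lossless entropy coder can represent with fewer bits.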
A digital artifact, in information science, is any undesired or unintended alteration in data introduced in a digital process by an involved technique and/or technology.
JPEG XR is an image compression standard for continuous tone photographic images, based on the HD Photo specifications that Microsoft originally developed and patented. It supports both lossy and lossless compression, and is the preferred image format for Ecma-388 Open XML Paper Specification documents.
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.
PGF is a wavelet-based bitmapped image format that employs lossless and lossy data compression. PGF was created to improve upon and replace the JPEG format. It was developed at the same time as JPEG 2000 but with a focus on speed over compression ratio.
A video coding format is a content representation format of digital video content, such as in a data file or bitstream. It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. A specific software, firmware, or hardware implementation capable of compression or decompression in a specific video coding format is called a video codec.
Nasir Ahmed is an Indian-American electrical engineer and computer scientist. He is Professor Emeritus of Electrical and Computer Engineering at University of New Mexico (UNM). He is best known for inventing the discrete cosine transform (DCT) in the early 1970s. The DCT is the most widely used data compression transformation, the basis for most digital media standards and commonly used in digital signal processing. He also described the discrete sine transform (DST), which is related to the DCT.
ZPEG is a motion video technology that applies a human visual acuity model to a decorrelated transform-domain space, thereby optimally reducing the redundancies in motion video by removing the subjectively imperceptible. This technology is applicable to a wide range of video processing problems such as video optimization, real-time motion video compression, subjective quality monitoring, and format conversion.