Video coding format

A video coding format [a] (or sometimes video compression format) is a content representation format of digital video content, such as in a data file or bitstream. It typically uses a standardized video compression algorithm, most commonly based on discrete cosine transform (DCT) coding and motion compensation. A software or hardware component that compresses or decompresses a specific video coding format is a video codec.

Some video coding formats are documented by a detailed technical specification document known as a video coding specification. Some such specifications are written and approved by standardization organizations as technical standards, and are thus known as video coding standards. There are de facto standards and formal standards.

Video content encoded using a particular video coding format is normally bundled with an audio stream (encoded using an audio coding format) inside a multimedia container format such as AVI, MP4, FLV, RealMedia, or Matroska. As such, the user normally does not have an H.264 file, but instead has a video file, which is an MP4 container of H.264-encoded video, normally alongside AAC-encoded audio. Multimedia container formats can contain any of several different video coding formats; for example, the MP4 container format can contain video coding formats such as MPEG-2 Part 2 or H.264. Another example is the initial specification for the file type WebM, which specifies the container format (Matroska), but also exactly which video (VP8) and audio (Vorbis) compression formats are inside the Matroska container, even though Matroska is capable of containing VP9 video, and Opus audio support was later added to the WebM specification.

Distinction between format and codec

A format is the layout plan for data produced or consumed by a codec.

Although video coding formats such as H.264 are sometimes referred to as codecs, there is a clear conceptual difference between a specification and its implementations. Video coding formats are described in specifications, and software, firmware, or hardware to encode/decode data in a given video coding format from/to uncompressed video are implementations of those specifications. As an analogy, the video coding format H.264 (specification) is to the codec OpenH264 (specific implementation) what the C Programming Language (specification) is to the compiler GCC (specific implementation). Note that for each specification (e.g., H.264), there can be many codecs implementing that specification (e.g., x264, OpenH264, and many other H.264/MPEG-4 AVC products and implementations).

This distinction is not consistently reflected terminologically in the literature. The H.264 specification calls H.261, H.262, H.263, and H.264 video coding standards and does not contain the word codec. [2] The Alliance for Open Media clearly distinguishes between the AV1 video coding format and the accompanying codec they are developing, but calls the video coding format itself a video codec specification. [3] The VP9 specification calls the video coding format VP9 itself a codec. [4]

As an example of this conflation, Chromium's [5] and Mozilla's [6] pages listing their supported video formats both refer to video coding formats such as H.264 as codecs. As another example, in Cisco's announcement of a free-as-in-beer video codec, the press release refers to the H.264 video coding format as a codec ("choice of a common video codec"), but calls Cisco's implementation of an H.264 encoder/decoder a codec shortly thereafter ("open-source our H.264 codec"). [7]

A video coding format does not dictate all algorithms used by a codec implementing the format. For example, a large part of how video compression typically works is by finding similarities between video frames (block-matching) and then achieving compression by copying previously-coded similar subimages (such as macroblocks) and adding small differences when necessary. Finding optimal combinations of such predictors and differences is an NP-hard problem, [8] meaning that it is practically impossible to find an optimal solution. Though the video coding format must support such compression across frames in the bitstream format, by not needlessly mandating specific algorithms for finding such block-matches and other encoding steps, the codecs implementing the video coding specification have some freedom to optimize and innovate in their choice of algorithms. For example, section 0.5 of the H.264 specification says that encoding algorithms are not part of the specification. [2] Free choice of algorithm also allows different space–time complexity trade-offs for the same video coding format, so a live feed can use a fast but space-inefficient algorithm, and a one-time DVD encoding for later mass production can trade long encoding-time for space-efficient encoding.
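
As a concrete illustration of the freedom left to encoders, the following is a minimal Python sketch of exhaustive block-matching; it is not taken from any specification, and the frame size, block size, and search range are arbitrary illustrative choices. A real encoder would use much faster search strategies, but the result it signals in the bitstream is the same kind of thing: a motion vector plus a residual.

```python
import numpy as np

def best_match(prev, cur, by, bx, block=16, search=8):
    """Exhaustive block-matching: find the offset (dy, dx) into the previous
    frame whose block best predicts the current block, measured by the sum of
    absolute differences (SAD). Real encoders use far smarter searches."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best = (0, 0, np.inf)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > prev.shape[0] or x + block > prev.shape[1]:
                continue
            cand = prev[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()
            if sad < best[2]:
                best = (dy, dx, sad)
    return best

# Toy frames: the second frame is the first shifted right by 3 pixels.
prev = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(prev, 3, axis=1)
dy, dx, sad = best_match(prev, cur, 16, 16)
# An encoder would transmit (dy, dx) and the (small) residual block
# instead of the raw pixels of the current block.
residual = (cur[16:32, 16:32].astype(np.int32)
            - prev[16 + dy:32 + dy, 16 + dx:32 + dx].astype(np.int32))
print(dy, dx, sad, np.abs(residual).sum())
```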

History

The concept of analog video compression dates back to 1929, when R.D. Kell in Britain proposed the concept of transmitting only the portions of the scene that changed from frame-to-frame. The concept of digital video compression dates back to 1952, when Bell Labs researchers B.M. Oliver and C.W. Harrison proposed the use of differential pulse-code modulation (DPCM) in video coding. In 1959, the concept of inter-frame motion compensation was proposed by NHK researchers Y. Taki, M. Hatori and S. Tanaka, who proposed predictive inter-frame video coding in the temporal dimension. [9] In 1967, University of London researchers A.H. Robinson and C. Cherry proposed run-length encoding (RLE), a lossless compression scheme, to reduce the transmission bandwidth of analog television signals. [10]

The earliest digital video coding algorithms were either for uncompressed video or used lossless compression, both of which were inefficient and impractical for digital video coding. [11] [12] Digital video was introduced in the 1970s, [11] initially using uncompressed pulse-code modulation (PCM), requiring high bitrates around 45–200 Mbit/s for standard-definition (SD) video, [11] [12] which was up to 2,000 times greater than the telecommunication bandwidth (up to 100 kbit/s) available until the 1990s. [12] Similarly, uncompressed high-definition (HD) 1080p video requires bitrates exceeding 1 Gbit/s, significantly greater than the bandwidth available in the 2000s. [13]
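
To make the orders of magnitude concrete, a back-of-the-envelope calculation along these lines is sketched below; the resolutions, frame rates, and bit depths are illustrative assumptions, not figures from the cited sources.

```python
# Rough uncompressed bitrates (illustrative parameters, not from any standard).
def raw_bitrate(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel  # bits per second

# SD video with 4:2:0 chroma subsampling -> about 12 bits per pixel on average.
sd = raw_bitrate(720, 576, 25, 12)
# 1080p HD with 4:2:2 sampling at 10 bits per sample -> about 20 bits per pixel.
hd = raw_bitrate(1920, 1080, 30, 20)
print(f"SD ≈ {sd / 1e6:.0f} Mbit/s, HD ≈ {hd / 1e9:.2f} Gbit/s")
# Prints roughly: SD ≈ 124 Mbit/s, HD ≈ 1.24 Gbit/s,
# in line with the ranges quoted above.
```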

Motion-compensated DCT

Practical video compression emerged with the development of motion-compensated DCT (MC DCT) coding, [12] [11] also called block motion compensation (BMC) [9] or DCT motion compensation. This is a hybrid coding algorithm, [9] which combines two key data compression techniques: discrete cosine transform (DCT) coding [12] [11] in the spatial dimension, and predictive motion compensation in the temporal dimension. [9]

DCT coding is a lossy block compression transform coding technique that was first proposed by Nasir Ahmed, who initially intended it for image compression, while he was working at Kansas State University in 1972. It was then developed into a practical image compression algorithm by Ahmed with T. Natarajan and K. R. Rao at the University of Texas in 1973, and was published in 1974. [14] [15] [16]
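
A minimal sketch of the 2-D DCT-II on an 8×8 block follows, written from the standard orthonormal definition rather than from any particular codec. It illustrates the energy-compaction property that makes the transform useful for compression, and shows that the transform itself is lossless; the loss comes later, when coefficients are quantized.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)
block = np.arange(64, dtype=float).reshape(8, 8)   # a smooth toy 8x8 block
coeffs = C @ block @ C.T                           # separable 2-D DCT
# Most of the energy lands in a few low-frequency coefficients (top-left),
# which is what makes coarse quantization of the rest cheap in quality terms.
print(np.round(coeffs, 1))
reconstructed = C.T @ coeffs @ C                   # inverse transform
print(np.allclose(reconstructed, block))           # True: the DCT itself is lossless
```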

The other key development was motion-compensated hybrid coding. [9] In 1974, Ali Habibi at the University of Southern California introduced hybrid coding, [17] [18] [19] which combines predictive coding with transform coding. [9] [20] He examined several transform coding techniques, including the DCT, Hadamard transform, Fourier transform, slant transform, and Karhunen–Loève transform. [17] However, his algorithm was initially limited to intra-frame coding in the spatial dimension. In 1975, John A. Roese and Guner S. Robinson extended Habibi's hybrid coding algorithm to the temporal dimension, using transform coding in the spatial dimension and predictive coding in the temporal dimension, developing inter-frame motion-compensated hybrid coding. [9] [21] For the spatial transform coding, they experimented with different transforms, including the DCT and the fast Fourier transform (FFT), developing inter-frame hybrid coders for them, and found the DCT to be the most efficient due to its reduced complexity, capable of compressing image data down to 0.25 bits per pixel for a videotelephone scene with image quality comparable to that of a typical intra-frame coder requiring 2 bits per pixel. [22] [21]

The DCT was applied to video encoding by Wen-Hsiung Chen, [23] who developed a fast DCT algorithm with C.H. Smith and S.C. Fralick in 1977, [24] [25] and founded Compression Labs to commercialize DCT technology. [23] In 1979, Anil K. Jain and Jaswant R. Jain further developed motion-compensated DCT video compression. [26] [9] This led to Chen developing a practical video compression algorithm, called motion-compensated DCT or adaptive scene coding, in 1981. [9] Motion-compensated DCT later became the standard coding technique for video compression from the late 1980s onwards. [11] [27]

Video coding standards

The first digital video coding standard was H.120, developed by the CCITT (now ITU-T) in 1984. [28] H.120 was not usable in practice, as its performance was too poor. [28] H.120 used motion-compensated DPCM coding, [9] a lossless compression algorithm that was inefficient for video coding. [11] During the late 1980s, a number of companies began experimenting with discrete cosine transform (DCT) coding, a much more efficient form of compression for video coding. The CCITT received 14 proposals for DCT-based video compression formats, in contrast to a single proposal based on vector quantization (VQ) compression. The H.261 standard was developed based on motion-compensated DCT compression. [11] [27] H.261 was the first practical video coding standard, [28] and uses patents licensed from a number of companies, including Hitachi, PictureTel, NTT, BT, and Toshiba, among others. [29] Since H.261, motion-compensated DCT compression has been adopted by all the major video coding standards (including the H.26x and MPEG formats) that followed. [11] [27]

MPEG-1, developed by the Moving Picture Experts Group (MPEG), followed in 1991, and it was designed to compress VHS-quality video. [28] It was succeeded in 1994 by MPEG-2/H.262, [28] which was developed with patents licensed from a number of companies, primarily Sony, Thomson and Mitsubishi Electric. [30] MPEG-2 became the standard video format for DVD and SD digital television. [28] Its motion-compensated DCT algorithm was able to achieve a compression ratio of up to 100:1, enabling the development of digital media technologies such as video on demand (VOD) [12] and high-definition television (HDTV). [31] In 1999, it was followed by MPEG-4/H.263, which was a major leap forward for video compression technology. [28] It uses patents licensed from a number of companies, primarily Mitsubishi, Hitachi and Panasonic. [32]

The most widely used video coding format as of 2019 is H.264/MPEG-4 AVC. [33] It was developed in 2003, and uses patents licensed from a number of organizations, primarily Panasonic, Godo Kaisha IP Bridge and LG Electronics. [34] In contrast to the standard DCT used by its predecessors, AVC uses the integer DCT. [23] [35] H.264 is one of the video encoding standards for Blu-ray Discs; all Blu-ray Disc players must be able to decode H.264. It is also widely used by streaming internet sources, such as videos from YouTube, Netflix, Vimeo, and the iTunes Store, web software such as the Adobe Flash Player and Microsoft Silverlight, and also various HDTV broadcasts over terrestrial (ATSC standards, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S2). [36]

A main problem for many video coding formats has been patents, which can make a format expensive to use or expose its users to the risk of a patent lawsuit due to submarine patents. The motivation behind many recently designed video coding formats such as Theora, VP8, and VP9 has been to create a (libre) video coding standard covered only by royalty-free patents. [37] Patent status has also been a major point of contention in the choice of which video formats the mainstream web browsers will support inside the HTML video tag.

The current-generation video coding format is HEVC (H.265), introduced in 2013. AVC uses the integer DCT with 4×4 and 8×8 block sizes, while HEVC uses integer DCT and DST transforms with varied block sizes from 4×4 up to 32×32. [38] HEVC is heavily patented, mostly by Samsung Electronics, GE, NTT, and JVCKenwood. [39] It is challenged by the AV1 format, which is intended to be royalty-free. As of 2019, AVC is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video developers, followed by HEVC, which is used by 43% of developers. [33]

List of video coding standards

Timeline of international video compression standards
Basic algorithm | Video coding standard | Year | Publishers | Committees | Licensors | Market presence (2019) [33] | Popular implementations
DPCM | H.120 | 1984 | CCITT | VCEG | | Unknown |
DCT | H.261 | 1988 | CCITT | VCEG | Hitachi, PictureTel, NTT, BT, Toshiba, etc. [29] | | Videoconferencing, videotelephony
DCT | Motion JPEG (MJPEG) | 1992 | JPEG | JPEG | | | QuickTime
DCT | MPEG-1 Part 2 | 1993 | ISO, IEC | MPEG | Fujitsu, IBM, Matsushita, etc. [41] | | Video CD, Internet video
DCT | H.262 / MPEG-2 Part 2 (MPEG-2 Video) | 1995 | ISO, IEC, ITU-T | MPEG, VCEG | Sony, Thomson, Mitsubishi, etc. [30] | 29% | DVD Video, Blu-ray, DVB, ATSC, SVCD, SDTV
DCT | DV | 1995 | IEC | IEC | Sony, Panasonic | Unknown | Camcorders, digital cassettes
DCT | H.263 | 1996 | ITU-T | VCEG | Mitsubishi, Hitachi, Panasonic, etc. [32] | Unknown | Videoconferencing, videotelephony, H.320, ISDN, [42] [43] mobile video (3GP), MPEG-4 Visual
DCT | MPEG-4 Part 2 (MPEG-4 Visual) | 1999 | ISO, IEC | MPEG | Mitsubishi, Hitachi, Panasonic, etc. [32] | Unknown | Internet video, DivX, Xvid
DWT | Motion JPEG 2000 (MJ2) | 2001 | JPEG [44] | JPEG [45] | | Unknown | Digital cinema [46]
DCT | Advanced Video Coding (H.264 / MPEG-4 AVC) | 2003 | ISO, IEC, ITU-T | MPEG, VCEG | Panasonic, Godo Kaisha IP Bridge, LG, etc. [34] | 91% | Blu-ray, HD DVD, HDTV (DVB, ATSC), video streaming (YouTube, Netflix, Vimeo), iTunes Store, iPod Video, Apple TV, videoconferencing, Flash Player, Silverlight, VOD
DCT | Theora | 2004 | Xiph | Xiph | | Unknown | Internet video, web browsers
DCT | VC-1 | 2006 | SMPTE | SMPTE | Microsoft, Panasonic, LG, Samsung, etc. [47] | Unknown | Blu-ray, Internet video
DCT | Apple ProRes | 2007 | Apple | Apple | Apple | Unknown | Video production, post-production
DCT | High Efficiency Video Coding (H.265 / MPEG-H HEVC) | 2013 | ISO, IEC, ITU-T | MPEG, VCEG | Samsung, GE, NTT, JVCKenwood, etc. [39] [48] | 43% | UHD Blu-ray, DVB, ATSC 3.0, UHD streaming, HEIF, macOS High Sierra, iOS 11
DCT | AV1 | 2018 | AOMedia | AOMedia | | 7% | HTML video
DCT | Versatile Video Coding (VVC / H.266) | 2020 | JVET | JVET | | Unknown |

Lossless, lossy, and uncompressed

Consumer video is generally compressed using lossy video codecs, since that results in significantly smaller files than lossless compression. Some video coding formats are designed explicitly for either lossy or lossless compression, and some video coding formats such as Dirac and H.264 support both. [49]

Uncompressed video formats, such as Clean HDMI, are a form of lossless video used in some circumstances, such as when sending video to a display over an HDMI connection. Some high-end cameras can also capture video directly in this format.

Intra-frame

Interframe compression complicates editing of an encoded video sequence. [50] One subclass of relatively simple video coding formats is the intra-frame video formats, such as DV, in which each frame of the video stream is compressed independently without referring to other frames in the stream, and no attempt is made to take advantage of correlations between successive pictures over time for better compression. One example is Motion JPEG, which is simply a sequence of individually JPEG-compressed images. This approach is quick and simple, at the expense of the encoded video being much larger than with a video coding format supporting inter-frame coding.
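
A minimal Python sketch of this intra-frame approach is shown below, using Pillow's JPEG encoder to compress each frame independently; the frame size, quality setting, and random test frames are purely illustrative.

```python
import io
import numpy as np
from PIL import Image

def encode_intra(frames, quality=85):
    """Motion-JPEG-style intra-frame coding: each frame is compressed
    independently as a JPEG, with no reference to any other frame."""
    encoded = []
    for frame in frames:
        buf = io.BytesIO()
        frame.save(buf, format="JPEG", quality=quality)
        encoded.append(buf.getvalue())
    return encoded

# Toy "video": three random 320x240 frames.
frames = [Image.fromarray(np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8))
          for _ in range(3)]
stream = encode_intra(frames)
# Any frame decodes on its own; dropping or reordering frames cannot
# corrupt the others.
middle = Image.open(io.BytesIO(stream[1]))
print([len(b) for b in stream], middle.size)
```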

Because interframe compression copies data from one frame to another, if the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly. Making cuts in intraframe-compressed video while video editing is almost as easy as editing uncompressed video: one finds the beginning and ending of each frame, and simply copies bit-for-bit each frame that one wants to keep, and discards the frames one does not want. Another difference between intraframe and interframe compression is that, with intraframe systems, each frame uses a similar amount of data. In most interframe systems, certain frames (such as I-frames in MPEG-2) are not allowed to copy data from other frames, so they require much more data than other frames nearby. [51]
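
The dependency problem can be illustrated with a toy delta-coding sketch (not modelled on any real codec): each "P-frame" is stored only as a difference from its predecessor, so removing a frame that later frames reference corrupts everything after the cut.

```python
import numpy as np

def encode_inter(frames):
    """Store the first frame whole (an 'I-frame') and every later frame
    only as the difference from its predecessor ('P-frames')."""
    stream = [("I", frames[0].astype(np.int16))]
    for prev, cur in zip(frames, frames[1:]):
        stream.append(("P", cur.astype(np.int16) - prev.astype(np.int16)))
    return stream

def decode_inter(stream):
    frames, last = [], None
    for kind, data in stream:
        last = data if kind == "I" else last + data
        frames.append(last.astype(np.uint8))
    return frames

frames = [np.random.randint(0, 256, (4, 4), dtype=np.uint8) for _ in range(4)]
stream = encode_inter(frames)
decoded = decode_inter(stream)
print(all(np.array_equal(a, b) for a, b in zip(frames, decoded)))   # True

# Cutting out frame 1 (a reference) leaves the remaining P-frames pointing
# at the wrong predecessor, so everything after the cut decodes incorrectly.
broken = decode_inter(stream[:1] + stream[2:])
print(np.array_equal(broken[-1], frames[-1]))                       # False
```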

It is possible to build a computer-based video editor that detects the problems caused when I-frames are edited out while other frames need them. This has allowed newer formats like HDV to be used for editing. However, this process demands much more computing power than editing intra-frame compressed video of the same picture quality. This kind of compression is also not very effective for audio formats. [52]

Profiles and levels

A video coding format can define optional restrictions to encoded video, called profiles and levels. It is possible to have a decoder which only supports decoding a subset of profiles and levels of a given video format, for example to make the decoder program/hardware smaller, simpler, or faster. [53]

A profile restricts which encoding techniques are allowed. For example, the H.264 format includes the profiles baseline, main and high (and others). While P-slices (which can be predicted based on preceding slices) are supported in all profiles, B-slices (which can be predicted based on both preceding and following slices) are supported in the main and high profiles but not in baseline. [54]

A level is a restriction on parameters such as maximum resolution and data rates. [54]
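
A sketch of how such constraints might be checked in code is given below. The level names and numeric limits are invented placeholders, not values from the H.264 specification (real levels are defined in terms of macroblock throughput, frame size, and bitrate), but the selection logic is the relevant idea: pick the lowest level whose limits the stream fits within, so the widest range of decoders can handle it.

```python
from dataclasses import dataclass

@dataclass
class Level:
    name: str
    max_width: int         # illustrative limits only; real levels are defined
    max_height: int        # via macroblocks/second, frame size, and bitrate
    max_fps: float
    max_bitrate_kbps: int

# Hypothetical level table (placeholder numbers, not the H.264 values).
LEVELS = [
    Level("L1", 640, 360, 30, 2_000),
    Level("L2", 1280, 720, 30, 8_000),
    Level("L3", 1920, 1080, 60, 20_000),
]

def lowest_level(width, height, fps, bitrate_kbps):
    """Pick the lowest level whose limits the stream fits within, so the
    simplest class of conforming decoders can be targeted."""
    for lv in LEVELS:
        if (width <= lv.max_width and height <= lv.max_height
                and fps <= lv.max_fps and bitrate_kbps <= lv.max_bitrate_kbps):
            return lv
    raise ValueError("stream exceeds every defined level")

print(lowest_level(1280, 720, 25, 4_000).name)   # "L2" with this toy table
```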

See also

Notes

Related Research Articles

In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.

Digital video – Digital electronic representation of moving visual images

Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises a series of digital images displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second. Digital video has many advantages such as easy copying, multicasting, sharing and storage.

H.263 is a video compression method originally designed as a low-bit-rate compressed format for videotelephony. It was standardized by the ITU-T Video Coding Experts Group (VCEG) in a project ending in 1995/1996. It is a member of the H.26x family of video coding standards in the domain of the ITU-T.

Lossy compression – Data compression approach that reduces data size while discarding or changing some of it

In information technology, lossy compression or irreversible compression is the class of data compression methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation create coarser images as more details are removed. This is opposed to lossless data compression which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than using lossless techniques.

Motion compensation – Video compression technique used to efficiently predict and generate video frames

Motion compensation in computing is an algorithmic technique used to predict a frame in a video given the previous and/or future frames by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved.

Video codec – Digital video coder/decoder

A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder.

Motion JPEG is a video compression format in which each video frame or interlaced field of a digital video sequence is compressed separately as a JPEG image.

Compression artifact – Distortion of media caused by lossy data compression

A compression artifact is a noticeable distortion of media caused by the application of lossy compression. Lossy data compression involves discarding some of the media's data so that it becomes small enough to be stored within the desired disk space or transmitted (streamed) within the available bandwidth. If the compressor cannot store enough data in the compressed version, the result is a loss of quality, or introduction of artifacts. The compression algorithm may not be intelligent enough to discriminate between distortions of little subjective importance and those objectionable to the user.

Advanced Video Coding – Most widely used standard for video compression

Advanced Video Coding (AVC), also referred to as H.264 or MPEG-4 Part 10, is a video compression standard based on block-oriented, motion-compensated coding. It is by far the most commonly used format for the recording, compression, and distribution of video content, used by 91% of video industry developers as of September 2019. It supports a maximum resolution of 8K UHD.

H.261 is an ITU-T video compression standard, first ratified in November 1988. It is the first member of the H.26x family of video coding standards in the domain of the ITU-T Study Group 16 Video Coding Experts Group. It was the first video coding standard that was useful in practical terms.

The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.

In video coding, a group of pictures, or GOP structure, specifies the order in which intra- and inter-frames are arranged. The GOP is a collection of successive pictures within a coded video stream. Each coded video stream consists of successive GOPs, from which the visible frames are generated. Encountering a new GOP in a compressed video stream means that the decoder doesn't need any previous frames in order to decode the next ones, and allows fast seeking through the video.

VP8 – Open and royalty-free video coding format released by Google in 2010

VP8 is an open and royalty-free video compression format released by On2 Technologies in 2008.

High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, is a video compression standard designed as part of the MPEG-H project as a successor to the widely used Advanced Video Coding. In comparison to AVC, HEVC offers from 25% to 50% better data compression at the same level of video quality, or substantially improved video quality at the same bit rate. It supports resolutions up to 8192×4320, including 8K UHD, and unlike the primarily 8-bit AVC, HEVC's higher fidelity Main 10 profile has been incorporated into nearly all supporting hardware.

VP9 – Open and royalty-free video coding format released by Google in 2013

VP9 is an open and royalty-free video coding format developed by Google.

Audio coding format – Digitally coded format for audio signals

An audio coding format is a content representation format for storage or transmission of digital audio. Examples of audio coding formats include MP3, AAC, Vorbis, FLAC, and Opus. A specific software or hardware implementation capable of audio compression and decompression to/from a specific audio coding format is called an audio codec; an example of an audio codec is LAME, which is one of several different codecs which implements encoding and decoding audio in the MP3 audio coding format in software.

ZPEG is a motion video technology that applies a human visual acuity model to a decorrelated transform-domain space, thereby optimally reducing the redundancies in motion video by removing the subjectively imperceptible. This technology is applicable to a wide range of video processing problems such as video optimization, real-time motion video compression, subjective quality monitoring, and format conversion.

Versatile Video Coding (VVC), also known as H.266, ISO/IEC 23090-3, and MPEG-I Part 3, is a video compression standard finalized on 6 July 2020, by the Joint Video Experts Team (JVET) of the VCEG working group of ITU-T Study Group 16 and the MPEG working group of ISO/IEC JTC 1/SC 29. It is the successor to High Efficiency Video Coding. It was developed with two primary goals – improved compression performance and support for a very broad range of applications.

References

  1. Thomas Wiegand; Gary J. Sullivan; Gisle Bjontegaard & Ajay Luthra (July 2003). "Overview of the H.264 / AVC Video Coding Standard" (PDF). IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY.
  2. 1 2 "SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS : Infrastructure of audiovisual services – Coding of moving video : Advanced video coding for generic audiovisual services". Itu.int. Retrieved January 6, 2015.
  3. "Front Page". Alliance for Open Media. Retrieved May 23, 2016.
  4. Adrian Grange; Peter de Rivaz & Jonathan Hunt. "VP9 Bitstream & Decoding Process Specification" (PDF).
  5. "Audio/Video". The Chromium Projects. Retrieved May 23, 2016.
  6. "Media formats supported by the HTML audio and video elements". Mozilla. Retrieved May 23, 2016.
  7. Rowan Trollope (October 30, 2013). "Open-Sourced H.264 Removes Barriers to WebRTC". Cisco. Archived from the original on May 14, 2019. Retrieved May 23, 2016.
  8. "Chapter 3 : Modified A* Prune Algorithm for finding K-MCSP in video compression" (PDF). Shodhganga.inflibnet.ac.in. Retrieved January 6, 2015.
  9. 1 2 3 4 5 6 7 8 9 10 "History of Video Compression". ITU-T . Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6). July 2002. pp. 11, 24–9, 33, 40–1, 53–6. Retrieved November 3, 2019.
  10. Robinson, A. H.; Cherry, C. (1967). "Results of a prototype television bandwidth compression scheme". Proceedings of the IEEE . 55 (3). IEEE: 356–364. doi:10.1109/PROC.1967.5493.
  11. 1 2 3 4 5 6 7 8 9 Ghanbari, Mohammed (2003). Standard Codecs: Image Compression to Advanced Video Coding. Institution of Engineering and Technology. pp. 1–2. ISBN   9780852967102.
  12. 1 2 3 4 5 6 Lea, William (1994). Video on demand: Research Paper 94/68. House of Commons Library . Retrieved September 20, 2019.
  13. Lee, Jack (2005). Scalable Continuous Media Streaming Systems: Architecture, Design, Analysis and Implementation. John Wiley & Sons. p. 25. ISBN   9780470857649.
  14. Ahmed, Nasir (January 1991). "How I Came Up With the Discrete Cosine Transform". Digital Signal Processing . 1 (1): 4–5. Bibcode:1991DSP.....1....4A. doi:10.1016/1051-2004(91)90086-Z.
  15. Ahmed, Nasir; Natarajan, T.; Rao, K. R. (January 1974), "Discrete Cosine Transform", IEEE Transactions on Computers, C-23 (1): 90–93, doi:10.1109/T-C.1974.223784, S2CID   149806273
  16. Rao, K. R.; Yip, P. (1990), Discrete Cosine Transform: Algorithms, Advantages, Applications, Boston: Academic Press, ISBN   978-0-12-580203-1
  17. 1 2 Habibi, Ali (1974). "Hybrid Coding of Pictorial Data". IEEE Transactions on Communications. 22 (5): 614–624. doi:10.1109/TCOM.1974.1092258.
  18. Chen, Z.; He, T.; Jin, X.; Wu, F. (2019). "Learning for Video Compression". IEEE Transactions on Circuits and Systems for Video Technology. 30 (2): 566–576. arXiv: 1804.09869 . doi:10.1109/TCSVT.2019.2892608. S2CID   13743007.
  19. Pratt, William K. (1984). Advances in Electronics and Electron Physics: Supplement. Academic Press. p. 158. ISBN   9780120145720. A significant advance in image coding methodology occurred with the introduction of the concept of hybrid transform/DPCM coding (Habibi, 1974).
  20. Ohm, Jens-Rainer (2015). Multimedia Signal Coding and Transmission. Springer. p. 364. ISBN   9783662466919.
  21. 1 2 Roese, John A.; Robinson, Guner S. (October 30, 1975). Tescher, Andrew G. (ed.). "Combined Spatial And Temporal Coding Of Digital Image Sequences". Efficient Transmission of Pictorial Information. 0066. International Society for Optics and Photonics: 172–181. Bibcode:1975SPIE...66..172R. doi:10.1117/12.965361. S2CID   62725808.
  22. Huang, T. S. (1981). Image Sequence Analysis. Springer Science & Business Media. p. 29. ISBN   9783642870378.
  23. 1 2 3 Stanković, Radomir S.; Astola, Jaakko T. (2012). "Reminiscences of the Early Work in DCT: Interview with K.R. Rao" (PDF). Reprints from the Early Days of Information Sciences. 60. Retrieved October 13, 2019.
  24. Chen, Wen-Hsiung; Smith, C. H.; Fralick, S. C. (September 1977). "A Fast Computational Algorithm for the Discrete Cosine Transform". IEEE Transactions on Communications . 25 (9): 1004–1009. doi:10.1109/TCOM.1977.1093941.
  25. "T.81 – Digital compression and coding of continuous-tone still images – Requirements and guidelines" (PDF). CCITT. September 1992. Retrieved July 12, 2019.
  26. Cianci, Philip J. (2014). High Definition Television: The Creation, Development and Implementation of HDTV Technology. McFarland. p. 63. ISBN   9780786487974.
  27. 1 2 3 Li, Jian Ping (2006). Proceedings of the International Computer Conference 2006 on Wavelet Active Media Technology and Information Processing: Chongqing, China, 29-31 August 2006. World Scientific. p. 847. ISBN   9789812709998.
  28. 1 2 3 4 5 6 7 "The History of Video File Formats Infographic". RealNetworks . April 22, 2012. Retrieved August 5, 2019.
  29. 1 2 "ITU-T Recommendation declared patent(s)". ITU. Retrieved July 12, 2019.
  30. 1 2 "MPEG-2 Patent List" (PDF). MPEG LA . Retrieved July 7, 2019.
  31. Shishikui, Yoshiaki; Nakanishi, Hiroshi; Imaizumi, Hiroyuki (October 26–28, 1993). "An HDTV Coding Scheme using Adaptive-Dimension DCT". Signal Processing of HDTV: Proceedings of the International Workshop on HDTV '93, Ottawa, Canada. Elsevier: 611–618. doi:10.1016/B978-0-444-81844-7.50072-3. ISBN   9781483298511.
  32. 1 2 3 "MPEG-4 Visual - Patent List" (PDF). MPEG LA . Retrieved July 6, 2019.
  33. 1 2 3 "Video Developer Report 2019" (PDF). Bitmovin . 2019. Retrieved November 5, 2019.
  34. 1 2 "AVC/H.264 – Patent List" (PDF). MPEG LA. Retrieved July 6, 2019.
  35. Wang, Hanli; Kwong, S.; Kok, C. (2006). "Efficient prediction algorithm of integer DCT coefficients for H.264/AVC optimization". IEEE Transactions on Circuits and Systems for Video Technology. 16 (4): 547–552. doi:10.1109/TCSVT.2006.871390. S2CID   2060937.
  36. "Digital Video Broadcasting (DVB); Specification for the use of video and audio coding in DVB services delivered directly over IP" (PDF).
  37. "World, Meet Thor – a Project to Hammer Out a Royalty Free Video Codec". August 11, 2015.
  38. Thomson, Gavin; Shah, Athar (2017). "Introducing HEIF and HEVC" (PDF). Apple Inc. Retrieved August 5, 2019.
  39. 1 2 "HEVC Patent List" (PDF). MPEG LA . Retrieved July 6, 2019.
  40. ISO. "Home". International Standards Organization. ISO. Retrieved August 3, 2022.
  41. "ISO Standards and Patents". ISO. Retrieved July 10, 2019.
  42. Davis, Andrew (June 13, 1997). "The H.320 Recommendation Overview". EE Times . Retrieved November 7, 2019.
  43. IEEE WESCANEX 97: communications, power, and computing : conference proceedings. University of Manitoba, Winnipeg, Manitoba, Canada: Institute of Electrical and Electronics Engineers. May 22–23, 1997. p. 30. ISBN   9780780341470. H.263 is similar to, but more complex than H.261. It is currently the most widely used international video compression standard for video telephony on ISDN (Integrated Services Digital Network) telephone lines.
  44. "Motion JPEG 2000 Part 3". Joint Photographic Experts Group, JPEG, and Joint Bi-level Image experts Group, JBIG. Archived from the original on October 5, 2012. Retrieved June 21, 2014.
  45. Taubman, David; Marcellin, Michael (2012). JPEG2000 Image Compression Fundamentals, Standards and Practice: Image Compression Fundamentals, Standards and Practice. Springer Science & Business Media. ISBN   9781461507994.
  46. Swartz, Charles S. (2005). Understanding Digital Cinema: A Professional Handbook. Taylor & Francis. p. 147. ISBN   9780240806174.
  47. "VC-1 Patent List" (PDF). MPEG LA . Retrieved July 11, 2019.
  48. "HEVC Advance Patent List". HEVC Advance . Archived from the original on August 24, 2020. Retrieved July 6, 2019.
  49. Filippov, Alexey; Norkin, Aney; Alvarez, José Roberto (April 2020). "RFC 8761 - Video Codec Requirements and Evaluation Methodology". datatracker.ietf.org. Retrieved February 10, 2022.
  50. Bhojani, D.R. "4.1 Video Compression" (PDF). Hypothesis. Retrieved March 6, 2013.
  51. Jaiswal, R.C. (2009). Audio-Video Engineering. Pune, Maharashtra: Nirali Prakashan. p. 3.55. ISBN   9788190639675.
  52. "WebCodecs". www.w3.org. Retrieved February 10, 2022.
  53. "Video Rendering - an overview | ScienceDirect Topics". www.sciencedirect.com. Retrieved February 10, 2022.
  54. 1 2 Jan Ozer. "Encoding options for H.264 video". Adobe.com. Retrieved January 6, 2015.