MPEG-2 Part 3

Part 3 of the MPEG-2 standard (formally ISO/IEC 13818-3, also known as MPEG-2 Audio or MPEG-2 BC) defines audio coding.

MPEG-2 Part 3 should not be confused with MPEG-2 Part 7 (Advanced Audio Coding, AAC), also known as MPEG-2 NBC (Non-Backward Compatible), which supports multichannel encoding with up to 48 channels. [1] [2]

Overview

Compared with MPEG-1 Part 3, MPEG-2 Part 3 introduced new audio encoding methods, [7] collectively known as MPEG-2 BC because they remain backward compatible with the MPEG-1 audio formats; the extensions add further bit rates, coding at lower sampling frequencies, and multichannel audio of up to 5.1 channels. [1] [2] [5]
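
As an illustration of how a decoder tells the backward-compatible MPEG-2 extensions apart from plain MPEG-1 audio, the sketch below reads the version and layer fields of an MPEG audio frame header. The field layout follows the commonly documented frame-header format (see reference [6]); the function name and example bytes are illustrative only, not taken from the standard text.

VERSIONS = {0b00: "MPEG 2.5 (unofficial)", 0b10: "MPEG-2 (ISO/IEC 13818-3)", 0b11: "MPEG-1 (ISO/IEC 11172-3)"}
LAYERS = {0b01: "Layer III", 0b10: "Layer II", 0b11: "Layer I"}

def describe_frame_header(header: bytes) -> str:
    # Interpret the first 4 bytes of an MPEG audio frame.
    word = int.from_bytes(header[:4], "big")
    if (word >> 21) & 0x7FF != 0x7FF:      # 11 frame-sync bits must all be 1
        raise ValueError("not an MPEG audio frame sync")
    version_id = (word >> 19) & 0b11       # 2 bits: MPEG audio version ID
    layer_id = (word >> 17) & 0b11         # 2 bits: layer description
    version = VERSIONS.get(version_id, "reserved")
    layer = LAYERS.get(layer_id, "reserved")
    return f"{version}, {layer}"

# 0xFF 0xFB ... is a typical MPEG-1 Layer III (MP3) frame start.
print(describe_frame_header(bytes([0xFF, 0xFB, 0x90, 0x00])))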

Related Research Articles

MP3 is a coding format for digital audio developed largely by the Fraunhofer Society in Germany, with support from other digital scientists in the United States and elsewhere. Originally defined as the third audio format of the MPEG-1 standard, it was retained and further extended — defining additional bit-rates and support for more audio channels — as the third audio format of the subsequent MPEG-2 standard. A third version, known as MPEG 2.5 — extended to better support lower bit rates — is commonly implemented, but is not a recognized standard.

MPEG-1 is a standard for lossy compression of video and audio. It is designed to compress VHS-quality raw digital video and CD audio down to about 1.5 Mbit/s without excessive quality loss, making video CDs, digital cable/satellite TV and digital audio broadcasting (DAB) practical.

MPEG-2 is a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.

MPEG-4 is a method of defining compression of audio and visual (AV) digital data. It was introduced in late 1998 and designated a standard for a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) under the formal standard ISO/IEC 14496 – Coding of audio-visual objects. Uses of MPEG-4 include compression of AV data for Internet video and CD distribution, voice and broadcast television applications. The MPEG-4 standard was developed by a group led by Touradj Ebrahimi and Fernando Pereira.

MPEG-1 Audio Layer II or MPEG-2 Audio Layer II is a lossy audio compression format defined by ISO/IEC 11172-3 alongside MPEG-1 Audio Layer I and MPEG-1 Audio Layer III (MP3). While MP3 is much more popular for PC and Internet applications, MP2 remains a dominant standard for audio broadcasting.

Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. Designed to be the successor of the MP3 format, AAC generally achieved higher sound quality than 20th century MP3 encoders at the same bit rate.

MPEG-4 Part 3 or MPEG-4 Audio is the third part of the ISO/IEC MPEG-4 international standard developed by Moving Picture Experts Group. It specifies audio coding methods. The first version of ISO/IEC 14496-3 was published in 1999.

H.262 or MPEG-2 Part 2 is a video coding format standardised and jointly maintained by ITU-T Study Group 16 Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG), and developed with the involvement of many companies. It is the second part of the ISO/IEC MPEG-2 standard. The ITU-T Recommendation H.262 and ISO/IEC 13818-2 documents are identical.

High-Efficiency Advanced Audio Coding (HE-AAC) is an audio coding format for lossy data compression of digital audio defined as an MPEG-4 Audio profile in ISO/IEC 14496-3. It is an extension of Low Complexity AAC (AAC-LC) optimized for low-bitrate applications such as streaming audio. The usage profile HE-AAC v1 uses spectral band replication (SBR) to enhance the modified discrete cosine transform (MDCT) compression efficiency in the frequency domain. The usage profile HE-AAC v2 couples SBR with Parametric Stereo (PS) to further enhance the compression efficiency of stereo signals.

TwinVQ is an audio compression technique developed by Nippon Telegraph and Telephone Corporation (NTT) Human Interface Laboratories in 1994. The compression technique has been used in both standardized and proprietary designs.

FAAC or Freeware Advanced Audio Coder is a software project which includes the AAC encoder FAAC and decoder FAAD2. It supports MPEG-2 AAC as well as MPEG-4 AAC, along with several MPEG-4 Audio object types, file formats, multichannel and gapless encoding/decoding, and MP4 metadata tags. The encoder and decoder are compatible with standard-compliant audio applications that use one or more of these object types and facilities. It also supports Digital Radio Mondiale.

MPEG-4 Audio Lossless Coding, also known as MPEG-4 ALS, is an extension to the MPEG-4 Part 3 audio standard to allow lossless audio compression. The extension was finalized in December 2005 and published as ISO/IEC 14496-3:2005/Amd 2:2006 in 2006. The latest description of MPEG-4 ALS was published as subpart 11 of the MPEG-4 Audio standard in December 2019.

MPEG Multichannel is an extension to the MPEG-1 Layer II audio compression specification, defined in the MPEG-2 Audio standard, which allows it to provide up to 5.1 channels of audio. To maintain backwards compatibility with the older 2-channel (stereo) audio specification, it uses a channel matrixing scheme in which the additional channels are mixed into the two backwards-compatible channels; additional information in the data stream allows a decoder to recover the extra channels from the matrix, as illustrated in the sketch below.
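
The sketch below illustrates the matrixing idea in simplified form: the centre and surround channels are folded into a backward-compatible stereo pair that any MPEG-1 Layer II decoder can play, while a matrix-aware decoder subtracts the separately transmitted extension channels to recover the original front pair. The coefficients and function names are placeholders chosen for readability, not the values or procedure defined in ISO/IEC 13818-3.

def downmix(L, R, C, Ls, Rs, g=0.7071):
    # Fold centre and surround channels into a compatible stereo pair (Lo/Ro)
    # that a plain MPEG-1 Layer II decoder can play directly.
    Lo = L + g * C + g * Ls
    Ro = R + g * C + g * Rs
    return Lo, Ro

def recover_front(Lo, Ro, C, Ls, Rs, g=0.7071):
    # A matrix-aware decoder subtracts the separately coded extension channels
    # (C, Ls, Rs) from the compatible pair to get the original L and R back.
    L = Lo - g * C - g * Ls
    R = Ro - g * C - g * Rs
    return L, R

Lo, Ro = downmix(L=1.0, R=0.5, C=0.2, Ls=0.1, Rs=0.0)
print(recover_front(Lo, Ro, C=0.2, Ls=0.1, Rs=0.0))   # approximately (1.0, 0.5)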

MPEG-4 SLS, or MPEG-4 Scalable to Lossless as per ISO/IEC 14496-3:2005/Amd 3:2006 (Scalable Lossless Coding), is an extension to the MPEG-4 Part 3 (MPEG-4 Audio) standard to allow lossless audio compression scalable to lossy MPEG-4 General Audio coding methods (e.g., variations of AAC). It was developed jointly by the Institute for Infocomm Research (I2R) and Fraunhofer, which commercializes its implementation of a limited subset of the standard under the name of HD-AAC. Standardization of the HD-AAC profile for MPEG-4 Audio is under development (as of September 2009).

Program stream is a container format for multiplexing digital audio, video and more. The PS format is specified in MPEG-1 Part 1 and MPEG-2 Part 1, Systems. The MPEG-2 Program Stream is analogous to the ISO/IEC 11172 (MPEG-1) Systems layer and is forward compatible with it.

MPEG Surround, also known as Spatial Audio Coding (SAC), is a lossy compression format for surround sound that provides a method for extending mono or stereo audio services to multi-channel audio in a backwards-compatible fashion. The total bit rate used for the core and the MPEG Surround data is typically only slightly higher than the bit rate used for coding the core alone. MPEG Surround adds a side-information stream to the core bit stream, containing spatial image data; legacy stereo playback systems ignore this side-information, while players supporting MPEG Surround decoding output the reconstructed multi-channel audio.

MPEG-1 Audio Layer I, commonly abbreviated to MP1, is one of three audio formats included in the MPEG-1 standard. It is a deliberately simplified version of MPEG-1 Audio Layer II, created for applications where lower compression efficiency could be tolerated in return for a less complex algorithm that could run on simpler hardware. While supported by most media players, the codec is considered largely obsolete, having been superseded by MP2 and MP3.

2D Plus Delta is a method of encoding 3D images listed as part of the MPEG-2 and MPEG-4 standards, specifically in the H.264 Multiview Video Coding extension. The technology originally started as a proprietary method for stereoscopic video coding and content deployment: the left or right channel is used as the 2D version, and the optimized difference or disparity (delta) between that view and the second-eye view is injected into the video stream as user_data, a secondary stream, an independent stream, an enhancement layer, or NAL units for deployment. The delta data can be a spatial stereo disparity, temporally predictive, bidirectional, or optimized motion compensation.
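
A toy sketch of the 2D-plus-delta idea follows: one eye's view is transmitted as ordinary 2D video, and a difference (delta) signal is carried alongside it, from which a 3D-aware player can rebuild the second view while a 2D player ignores it. The arrays and helper names are purely illustrative; real deployments carry the delta as user_data, a secondary stream, an enhancement layer, or NAL units, and typically apply motion/disparity compensation rather than a raw sample difference.

def encode_delta(base_view, second_view):
    # Delta = second-eye view minus the transmitted 2D base view.
    return [s - b for b, s in zip(base_view, second_view)]

def decode_second_view(base_view, delta):
    # A 3D-aware player adds the delta back; a 2D-only player never reads it.
    return [b + d for b, d in zip(base_view, delta)]

left = [10, 12, 14, 16]    # transmitted 2D (base) view, e.g. a row of luma samples
right = [11, 12, 13, 17]   # second-eye view
delta = encode_delta(left, right)
assert decode_second_view(left, delta) == right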

Unified Speech and Audio Coding (USAC) is an audio compression format and codec for both music and speech or any mix of speech and audio using very low bit rates between 12 and 64 kbit/s. It was developed by Moving Picture Experts Group (MPEG) and was published as an international standard ISO/IEC 23003-3 and also as an MPEG-4 Audio Object Type in ISO/IEC 14496-3:2009/Amd 3 in 2012.

References

  1. ISO (October 1998). "MPEG Audio FAQ Version 9 - MPEG-1 and MPEG-2 BC". ISO. Retrieved 2009-10-28.
  2. MPEG.ORG. "AAC". Archived from the original on 2007-08-31. Retrieved 2009-10-28.
  3. ISO (2006-01-15). ISO/IEC 13818-7, Fourth edition, Part 7 - Advanced Audio Coding (AAC) (PDF). Retrieved 2009-10-28.
  4. ISO (2004-10-15). ISO/IEC 13818-7, Third edition, Part 7 - Advanced Audio Coding (AAC) (PDF). Archived from the original (PDF) on 2011-07-13. Retrieved 2009-10-19.
  5. Werner Oomen; Leon van de Kerkhof. "MPEG-2 Audio Layer I/II". chiariglione.org. Retrieved 2009-12-29.
  6. Predrag Supurovic. "MPEG Audio Frame Header". Retrieved 2009-07-11.
  7. D. Thom, H. Purnhagen, and the MPEG Audio Subgroup (October 1998). "MPEG Audio FAQ Version 9 - MPEG Audio". Retrieved 2009-10-31.