Delivery Multimedia Integration Framework


DMIF, or Delivery Multimedia Integration Framework, is a uniform interface between the application and the transport that lets the MPEG-4 application developer ignore the details of the underlying transport. DMIF was defined in MPEG-4 Part 6 (ISO/IEC 14496-6) in 1999. [1] DMIF defines two interfaces: the DAI (DMIF-Application Interface) and the DNI (DMIF-Network Interface). [2] [3] A single application can run over different transport layers when supported by the right DMIF instantiation. MPEG-4 DMIF supports the following functionalities:

DMIF expands upon the MPEG-2 DSM-CC standard (ISO/IEC 13818-6:1998) to enable the convergence of interactive, broadcast and conversational multimedia in one specification applicable to set-top boxes, desktops and mobile stations. The DSM-CC work was extended as part of ISO/IEC 14496-6 into the DSM-CC Multimedia Integration Framework (DMIF). DSM-CC stands for Digital Storage Media - Command and Control. [4] [5] [6] DMIF was also the name of a working group within the Moving Picture Experts Group. In the framework's name, "DSM-CC" was replaced by "Delivery" (giving Delivery Multimedia Integration Framework) in 1997. [7]
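The practical effect of the DAI is that an application issues the same service and channel requests whether the content comes from broadcast, local storage or a remote interactive peer; only the DMIF instance underneath changes. The sketch below illustrates that idea in Python. The class and method names (DMIFInstance, service_attach, channel_add) are hypothetical illustrations of the concept, not the primitive names defined in ISO/IEC 14496-6.

```python
from abc import ABC, abstractmethod


class DMIFInstance(ABC):
    """Hypothetical DAI-style abstraction: the application codes against this
    interface and never touches the transport directly."""

    @abstractmethod
    def service_attach(self, url: str) -> int:
        """Open a service session for the given content URL; return a session id."""

    @abstractmethod
    def channel_add(self, session_id: int, qos: dict) -> int:
        """Request a delivery channel with the given QoS hints; return a channel id."""


class LocalFileDMIF(DMIFInstance):
    """One possible instantiation: content read from local storage."""

    def service_attach(self, url: str) -> int:
        print(f"opening local file for {url}")
        return 1

    def channel_add(self, session_id: int, qos: dict) -> int:
        print(f"mapping channel onto file reads, qos={qos}")
        return 10


class RtpDMIF(DMIFInstance):
    """Another instantiation: content streamed from a remote peer."""

    def service_attach(self, url: str) -> int:
        print(f"signalling remote peer for {url}")
        return 2

    def channel_add(self, session_id: int, qos: dict) -> int:
        print(f"negotiating a streaming session, qos={qos}")
        return 20


def play(dmif: DMIFInstance, url: str) -> None:
    # The application logic is identical for every delivery technology.
    session = dmif.service_attach(url)
    dmif.channel_add(session, {"max_delay_ms": 100})


play(LocalFileDMIF(), "file://movie.mp4")
play(RtpDMIF(), "rtsp://example.com/movie")
```

Swapping LocalFileDMIF for RtpDMIF changes nothing in the calling code, which is the portability property the DAI is meant to provide.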

Related Research Articles

Moving Picture Experts Group - alliance of working groups to set standards for multimedia coding

The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics and genomic data, and transmission and file formats for various applications. Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – Coding of audio, picture, multimedia and hypermedia information.

MPEG-2 - video encoding standard

MPEG-2 is a standard for "the generic coding of moving pictures and associated audio information". It describes a combination of lossy video compression and lossy audio data compression methods, which permit storage and transmission of movies using currently available storage media and transmission bandwidth. While MPEG-2 is not as efficient as newer standards such as H.264/AVC and H.265/HEVC, backwards compatibility with existing hardware and software means it is still widely used, for example in over-the-air digital television broadcasting and in the DVD-Video standard.

MPEG-4 is a method of defining compression of audio and visual (AV) digital data. It was introduced in late 1998 and designated a standard for a group of audio and video coding formats and related technology agreed upon by the ISO/IEC Moving Picture Experts Group (MPEG) under the formal standard ISO/IEC 14496 – Coding of audio-visual objects. Uses of MPEG-4 include compression of AV data for Internet video and CD distribution, voice and broadcast television applications. The MPEG-4 standard was developed by a group led by Touradj Ebrahimi and Fernando Pereira.

MPEG-1 Audio Layer II or MPEG-2 Audio Layer II is a lossy audio compression format defined by ISO/IEC 11172-3 alongside MPEG-1 Audio Layer I and MPEG-1 Audio Layer III (MP3). While MP3 is much more popular for PC and Internet applications, MP2 remains a dominant standard for audio broadcasting.

The MPEG-21 standard, from the Moving Picture Experts Group, aims to define an open framework for multimedia applications. MPEG-21 is ratified as the standard ISO/IEC 21000 - Multimedia framework (MPEG-21).

Advanced Audio Coding (AAC) is an audio coding standard for lossy digital audio compression. Designed to be the successor of the MP3 format, AAC generally achieves higher sound quality than MP3 encoders at the same bit rate.

MPEG-4 Part 3 or MPEG-4 Audio is the third part of the ISO/IEC MPEG-4 international standard developed by Moving Picture Experts Group. It specifies audio coding methods. The first version of ISO/IEC 14496-3 was published in 1999.

Digital storage media command and control (DSM-CC) is a toolkit for developing control channels associated with MPEG-1 and MPEG-2 streams. It is defined in part 6 of the MPEG-2 standard and uses a client/server model connected via an underlying network.

High-Efficiency Advanced Audio Coding - audio codec

High-Efficiency Advanced Audio Coding (HE-AAC) is an audio coding format for lossy data compression of digital audio defined as an MPEG-4 Audio profile in ISO/IEC 14496-3. It is an extension of Low Complexity AAC (AAC-LC) optimized for low-bitrate applications such as streaming audio. The usage profile HE-AAC v1 uses spectral band replication (SBR) to enhance the modified discrete cosine transform (MDCT) compression efficiency in the frequency domain. The usage profile HE-AAC v2 couples SBR with Parametric Stereo (PS) to further enhance the compression efficiency of stereo signals.

TwinVQ is an audio compression technique developed by Nippon Telegraph and Telephone Corporation (NTT) Human Interface Laboratories in 1994. The compression technique has been used in both standardized and proprietary designs.

MPEG-4 Part 2, MPEG-4 Visual, is a video compression format developed by the Moving Picture Experts Group (MPEG). It belongs to the MPEG-4 ISO/IEC standards. It uses block-wise motion compensation and a discrete cosine transform (DCT), similar to previous standards such as MPEG-1 Part 2 and H.262/MPEG-2 Part 2.
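As a generic illustration of the block transform such codecs rely on (not code from the standard itself), the following sketch applies an 8x8 two-dimensional DCT to a smooth image block and shows the energy collecting in the low-frequency coefficients, which is what makes the subsequent quantization and entropy coding effective.

```python
import numpy as np
from scipy.fft import dctn

# A smooth 8x8 luminance block (horizontal gradient), typical of the
# kind of data a block-based codec transforms.
block = np.tile(np.linspace(16, 100, 8), (8, 1))

# 2-D DCT-II of the level-shifted block; most of the energy ends up
# in the top-left (low-frequency) coefficients.
coeffs = dctn(block - 128, norm="ortho")
print(np.round(coeffs, 1))
```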

MPEG-4 Part 11, Scene description and application engine, was published as ISO/IEC 14496-11 in 2005. MPEG-4 Part 11 is also known as BIFS, XMT, MPEG-J. It defines:

MPEG-4 Audio Lossless Coding, also known as MPEG-4 ALS, is an extension to the MPEG-4 Part 3 audio standard to allow lossless audio compression. The extension was finalized in December 2005 and published as ISO/IEC 14496-3:2005/Amd 2:2006 in 2006. The latest description of MPEG-4 ALS was published as subpart 11 of the MPEG-4 Audio standard in December 2019.

MPEG-4 SLS - extension to the MPEG-4 Audio standard

MPEG-4 SLS, or MPEG-4 Scalable to Lossless as per ISO/IEC 14496-3:2005/Amd 3:2006 (Scalable Lossless Coding), is an extension to the MPEG-4 Part 3 (MPEG-4 Audio) standard to allow lossless audio compression scalable to lossy MPEG-4 General Audio coding methods (e.g., variations of AAC). It was developed jointly by the Institute for Infocomm Research (I2R) and Fraunhofer, which commercializes its implementation of a limited subset of the standard under the name of HD-AAC. Standardization of the HD-AAC profile for MPEG-4 Audio is under development (as of September 2009).

MPEG-4 Part 14 - MP4; digital format for storing video and audio

MPEG-4 Part 14 or MP4 is a digital multimedia container format most commonly used to store video and audio, but it can also be used to store other data such as subtitles and still images. Like most modern container formats, it allows streaming over the Internet. The only filename extension for MPEG-4 Part 14 files as defined by the specification is .mp4. MPEG-4 Part 14 is a standard specified as a part of MPEG-4.

MPEG-4 Part 20, or MPEG-4 Lightweight Application Scene Representation (LASeR) is a rich media standard dedicated to the mobile, embedded and consumer electronics industries specified by the MPEG standardization group. LASeR is based on SVG Tiny and adds methods for sending dynamic updates and a binary compression format.

ISO/IEC base media file format (ISOBMFF) defines a general structure for time-based multimedia files such as video and audio. It is standardized in ISO/IEC 14496-12 – MPEG-4 Part 12. The text was also published as ISO/IEC 15444-12.
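The general structure referred to above is a sequence of boxes (also called atoms), each starting with a 32-bit big-endian size and a four-character type. As a minimal sketch not tied to any particular library, the following code lists the top-level boxes of a local file; "example.mp4" is a placeholder path.

```python
import struct


def list_top_level_boxes(path: str) -> None:
    """Print the type and size of each top-level box in an ISOBMFF file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            name = box_type.decode("ascii", errors="replace")
            if size == 0:
                # A size of 0 means the box runs to the end of the file.
                print(name, "(runs to end of file)")
                break
            if size == 1:
                # A size of 1 means a 64-bit "largesize" follows the type field.
                size = struct.unpack(">Q", f.read(8))[0]
                payload = size - 16
            else:
                payload = size - 8
            print(name, size)
            f.seek(payload, 1)  # skip the box body


list_top_level_boxes("example.mp4")  # placeholder file name
```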

MPEG media transport (MMT), specified as ISO/IEC 23008-1, is a digital container standard developed by Moving Picture Experts Group (MPEG) that supports High Efficiency Video Coding (HEVC) video. MMT was designed to transfer data using the all-Internet Protocol (All-IP) network.

ISO/IEC JTC 1/SC 29, entitled Coding of audio, picture, multimedia and hypermedia information, is a standardization subcommittee of the Joint Technical Committee ISO/IEC JTC 1 of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). It develops and facilitates international standards, technical reports, and technical specifications within the field of audio, picture, multimedia, and hypermedia information coding. The standards developed by SC 29 have been recognized by seven Emmy Awards.

LCEVC - video coding standard

Low Complexity Enhancement Video Coding (LCEVC) is an ISO/IEC video coding standard developed by the Moving Picture Experts Group (MPEG) under the project name MPEG-5 Part 2 LCEVC.

References

  1. ISO. "ISO/IEC 14496-6:2000 - Information technology -- Coding of audio-visual objects -- Part 6: Delivery Multimedia Integration Framework (DMIF)". ISO. Retrieved 2010-08-01.
  2. MPEG (2001). "MPEG Systems (1-2-4-7) FAQ, Version 16.0 - Delivery Multimedia Integration Framework (DMIF)". MPEG. Retrieved 2010-08-01.
  3. MPEG (2001). "The Delivery Layer in MPEG-4 - G. Franceschini - CSELT Centro Studi e Laboratori Telecomunicazioni S.p.A". MPEG. Archived from the original on 2010-06-21. Retrieved 2010-08-01.
  4. MPEG (July 1997). "mpeg Press & Public Release - Stockholm". MPEG. Archived from the original on 2010-07-05. Retrieved 2010-08-01.
  5. MPEG (1997-02-21). "DSM-CC FAQ Version 1.0". MPEG. Retrieved 2010-08-01.
  6. IEEE (1996). "An Introduction to Digital Storage Media - Command and Control (DSM-CC)". MPEG. Retrieved 2010-08-01.
  7. Leonardo Chiariglione (2005-03-08). "Riding the Media Bits - MPEG's third steps". Archived from the original on 2011-01-22. Retrieved 2010-08-01.