Content format

Graphical representations of electrical data: analog audio content format (red), 4-bit digital pulse code modulated content format (blue).
Chinese calligraphy written in a language content format by the Song dynasty poet Mi Fu (A.D. 1051-1108).
A series of numbers encoded in a Universal Product Code digital numeric content format.

A content format is an encoded format for converting a specific type of data to displayable information. Content formats are used in recording and transmission to prepare data for observation or interpretation.[1][2] This includes both analog and digitized content. Content formats may be recorded and read by either natural or manufactured tools and mechanisms.

In addition to converting data to information, a content format may include the encryption and/or scrambling of that information.[3] Multiple content formats may be contained within a single section of a storage medium (e.g. track, disk sector, computer file, document, page, column) or transmitted via a single channel (e.g. wire, carrier wave) of a transmission medium. With multimedia, multiple tracks containing multiple content formats are presented simultaneously. Content formats may be recorded either through secondary signal processing methods such as a software container format (e.g. digital audio, digital video) or directly in a primary format (e.g. spectrogram, pictogram).

Observable data is often known as raw data, or raw content.[4] A primary raw content format may be directly observable (e.g. image, sound, motion, smell, sensation) or may consist of physical data that requires only hardware to display it, such as a phonograph needle and diaphragm or a projector lamp and magnifying glass.

Countless content formats have existed throughout history. The following are examples of some common content formats and content format categories, covering the sensory experience, model, and language used for encoding information:

See also

Related Research Articles

A codec is a device or computer program that encodes or decodes a data stream or signal. Codec is a portmanteau of coder/decoder.

In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder.
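As an illustration of how lossless compression removes statistical redundancy, the following sketch shows a simplified run-length encoder in Python; it is a toy example rather than any particular codec, and the helper names are invented for illustration. Repeated symbols are replaced by a symbol-and-count pair, and the original data can be restored exactly.

    def rle_encode(data: str) -> list:
        """Run-length encode a string: 'AAAABBB' -> [('A', 4), ('B', 3)]."""
        runs = []
        for ch in data:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)
            else:
                runs.append((ch, 1))
        return runs

    def rle_decode(runs: list) -> str:
        """Reverse the encoding; no information is lost."""
        return "".join(ch * count for ch, count in runs)

    original = "AAAAAABBBCCCCCCCCD"
    assert rle_decode(rle_encode(original)) == original   # lossless round trip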

<span class="mw-page-title-main">Digital television</span> Television transmission using digital encoding

Digital television (DTV) is the transmission of television signals using digital encoding, in contrast to the earlier analog television technology which used analog signals. At the time of its development it was considered an innovative advancement and represented the first significant evolution in television technology since color television in the 1950s. Modern digital television is transmitted in high-definition television (HDTV) with greater resolution than analog TV. It typically uses a widescreen aspect ratio in contrast to the narrower 4:3 format of analog TV. It makes more economical use of scarce radio spectrum space; it can transmit up to seven channels in the same bandwidth as a single analog channel, and provides many new features that analog television cannot. A transition from analog to digital broadcasting began around 2000, and different digital television broadcasting standards have been adopted in different parts of the world.

<span class="mw-page-title-main">Digital data</span> Discrete, discontinuous representation of information

Digital data, in information theory and information systems, is information represented as a string of discrete symbols, each of which can take on one of only a finite number of values from some alphabet, such as letters or digits. An example is a text document, which consists of a string of alphanumeric characters. The most common form of digital data in modern information systems is binary data, which is represented by a string of binary digits (bits) each of which can have one of two values, either 0 or 1.
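As a minimal Python sketch of this idea (the choice of text and of ASCII code points is purely illustrative), a short string is reduced first to numeric code points and then to strings of binary digits:

    text = "Cat"
    codes = [ord(ch) for ch in text]             # characters -> numeric code points
    bits = " ".join(f"{c:08b}" for c in codes)   # code points -> 8-bit binary strings
    print(codes)   # [67, 97, 116]
    print(bits)    # 01000011 01100001 01110100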

<span class="mw-page-title-main">Digital video</span> Digital electronic representation of moving visual images

Digital video is an electronic representation of moving visual images (video) in the form of encoded digital data. This is in contrast to analog video, which represents moving visual images in the form of analog signals. Digital video comprises a series of digital images displayed in rapid succession, usually at 24, 25, 30, or 60 frames per second. Digital video has many advantages such as easy copying, multicasting, sharing and storage.

<span class="mw-page-title-main">Lossy compression</span> Data compression approach that reduces data size while discarding or changing some of it

In information technology, lossy compression or irreversible compression is the class of data compression methods that uses inexact approximations and partial data discarding to represent the content. These techniques are used to reduce data size for storing, handling, and transmitting content. Higher degrees of approximation create coarser results as more details are removed. This is opposed to lossless data compression, which does not degrade the data. The amount of data reduction possible using lossy compression is much higher than using lossless techniques.
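A toy Python sketch of inexact approximation and partial data discarding (not any real image or audio codec; the sample values are invented): coarse quantization keeps the broad shape of a signal but throws fine detail away irreversibly.

    samples = [0.12, 0.57, 0.61, 0.93, 0.49, 0.02]

    def quantize(values, levels):
        """Round each value to the nearest of `levels` equally spaced steps in [0, 1]."""
        step = 1.0 / (levels - 1)
        return [round(v / step) * step for v in values]

    coarse = quantize(samples, levels=4)   # only 4 allowed values: 0, 1/3, 2/3, 1
    print(coarse)                          # small differences between samples are gone
    # The original samples cannot be recovered from `coarse`.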

<span class="mw-page-title-main">Video</span> Electronic moving image

Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video was first developed for mechanical television systems, which were quickly replaced by cathode-ray tube (CRT) systems, which, in turn, were replaced by flat-panel displays of several types.

<span class="mw-page-title-main">Digital audio</span> Technology that records, stores, and reproduces sound

Digital audio is a representation of sound recorded in, or converted into, digital form. In digital audio, the sound wave of the audio signal is typically encoded as numerical samples in a continuous sequence. For example, in CD audio, samples are taken 44,100 times per second, each with 16-bit sample depth. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form. Following significant advances in digital audio technology during the 1970s and 1980s, it gradually replaced analog audio technology in many areas of audio engineering, record production and telecommunications in the 1990s and 2000s.

<span class="mw-page-title-main">Video codec</span> Digital video processing

A video codec is software or hardware that compresses and decompresses digital video. In the context of video compression, codec is a portmanteau of encoder and decoder, while a device that only compresses is typically called an encoder, and one that only decompresses is a decoder.

<span class="mw-page-title-main">LaserDisc</span> Optical analog video disc format

The LaserDisc (LD) is a home video format and the first commercial optical disc storage medium, initially licensed, sold and marketed as MCA DiscoVision in the United States in 1978. Its diameter typically spans 30 cm (12 in). Unlike most optical-disc standards, LaserDisc is not fully digital, and instead requires the use of analog video signals.

In telecommunications and computing, bit rate is the number of bits that are conveyed or processed per unit of time.
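For example, the bit rate of uncompressed CD audio follows directly from the sampling parameters quoted above (two channels are assumed, as is standard for CD audio); the arithmetic, sketched here in Python, is only a worked illustration.

    sample_rate = 44_100   # samples per second, per channel
    bit_depth = 16         # bits per sample
    channels = 2           # stereo

    bit_rate = sample_rate * bit_depth * channels
    print(bit_rate)        # 1411200 bits per second, about 1.4 Mbit/s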

<span class="mw-page-title-main">Recording format</span>

A recording format is a format for encoding data for storage on a storage medium. The format can be container information such as sectors on a disk, or user/audience information (content) such as analog stereo audio. Multiple levels of encoding may be achieved in one format. For example, a text encoded page may contain HTML and XML encoding, combined in a plain text file format, using either EBCDIC or ASCII character encoding, on a UDF digitally formatted disk.
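A small Python sketch of multiple levels of encoding in one stored object (the file name and the choice of HTML and ASCII are illustrative assumptions): the audience content, the markup around it, the character encoding, and the file on the formatted disk are each a separate encoding layer.

    content = "Hello, world"                  # user/audience information (content)
    markup = "<p>" + content + "</p>"         # markup layer: HTML
    data = markup.encode("ascii")             # character encoding layer: ASCII bytes

    with open("page.html", "wb") as f:        # storage layer: a file on a formatted disk
        f.write(data)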

Transcoding is the direct digital-to-digital conversion of one encoding to another, such as for video data files, audio files, or character encoding. This is usually done in cases where a target device does not support the format or has limited storage capacity that mandates a reduced file size, or to convert incompatible or obsolete data to a better-supported or modern format.

Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause further reduction in quality on making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data through each generation.
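A toy Python sketch of generation loss (a contrived model, not any specific codec or medium): each "copy" introduces a small imperfection and is then re-encoded lossily, so error relative to the original tends to accumulate from generation to generation.

    import random

    def lossy_copy(samples, levels=32):
        """One generation: a small copying imperfection, then coarse re-quantization."""
        step = 1.0 / (levels - 1)
        noisy = [s + random.uniform(-0.02, 0.02) for s in samples]   # imperfect copy
        return [min(1.0, max(0.0, round(v / step) * step)) for v in noisy]

    original = [i / 99 for i in range(100)]   # a smooth ramp signal in [0, 1]
    copy = list(original)
    for generation in range(10):
        copy = lossy_copy(copy)

    worst = max(abs(a - b) for a, b in zip(original, copy))
    print(f"worst-case error after 10 generations: {worst:.3f}")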

<span class="mw-page-title-main">Electronic media</span> Media that require electronics or electromechanical means to be accessed by the audience

Electronic media are media that use electronics or electromechanical means for the audience to access the content. This is in contrast to static media (most often print), which today are usually created digitally but do not require electronics to be accessed by the end user. The primary electronic media sources familiar to the general public are video recordings, audio recordings, multimedia presentations, slide presentations, CD-ROM and online content. Most new media are in the form of digital media; however, electronic media may be in either analog or digital format.

MUSE, commercially known as Hi-Vision, was a Japanese analog high-definition television system, with design efforts going back to 1979.

Audio-to-video synchronization refers to the relative timing of audio (sound) and video (image) parts during creation, post-production (mixing), transmission, reception and play-back processing. AV synchronization can be an issue in television, videoconferencing, or film.

Uncompressed video is digital video that either has never been compressed or was generated by decompressing previously compressed digital video. It is commonly used by video cameras, video monitors, video recording devices, and in video processors that perform functions such as image resizing, image rotation, deinterlacing, and text and graphics overlay. It is conveyed over various types of baseband digital video interfaces, such as HDMI, DVI, DisplayPort and SDI. Standards also exist for the carriage of uncompressed video over computer networks.

Pulse-code modulation (PCM) is a method used to digitally represent analog signals. It is the standard form of digital audio in computers, compact discs, digital telephony and other digital audio applications. In a PCM stream, the amplitude of the analog signal is sampled at uniform intervals, and each sample is quantized to the nearest value within a range of digital steps.
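A minimal Python sketch of the two PCM steps described here (sampling at uniform intervals, then quantizing each sample to the nearest digital step), using 4-bit quantization as in the figure at the top of the article; the sine-wave input, sample rate, and sample count are illustrative assumptions.

    import math

    sample_rate = 8_000        # sampling instants per second (illustrative, not CD quality)
    bits = 4                   # 4-bit PCM -> 2**4 = 16 quantization levels
    levels = 2 ** bits
    frequency = 440.0          # a sine tone standing in for the analog signal

    pcm = []
    for n in range(16):        # a few samples taken at uniform intervals
        t = n / sample_rate                                  # time of the n-th sample
        analog = math.sin(2 * math.pi * frequency * t)       # amplitude in [-1.0, 1.0]
        code = round((analog + 1.0) / 2.0 * (levels - 1))    # nearest of the 16 steps
        pcm.append(code)

    print(pcm)                 # integer codes 0..15, one per sampling instant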

This glossary defines terms that are used in the document "Defining Video Quality Requirements: A Guide for Public Safety", developed by the Video Quality in Public Safety (VQIPS) Working Group. It contains terminology and explanations of concepts relevant to the video industry. The purpose of the glossary is to inform the reader of commonly used vocabulary terms in the video domain. This glossary was compiled from various industry sources.

References

  1. Bob Boiko, Content Management Bible, Nov 2004, pp. 79, 240, 830.
  2. Ann Rockley, Managing Enterprise Content: A Unified Content Strategy, Oct 2002, pp. 269, 320, 516.
  3. Jessica Keyes, Technology Trendlines, Jul 1995, p. 201.
  4. Oge Marques and Borko Furht, Content-Based Image and Video Retrieval, Apr 2002, p. 15.
  5. David Austerberry, The Technology of Video and Audio Streaming, Second Edition, Sep 2004, p. 328.
  6. M. Ghanbari, Standard Codecs: Image Compression to Advanced Video Coding, Jun 2003, p. 364.