Raptor code

In computer science, Raptor codes (rapid tornado; [1] see Tornado codes) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extended abstract. Raptor codes are a significant theoretical and practical improvement over LT codes, which were the first practical class of fountain codes.

Raptor codes, as with fountain codes in general, encode a given source block of data consisting of a number k of equal-size source symbols into a potentially limitless sequence of encoding symbols such that reception of any k or more encoding symbols allows the source block to be recovered with some non-zero probability. The probability that the source block can be recovered increases with the number of encoding symbols received beyond k, and becomes very close to 1 once the number of received encoding symbols is only slightly larger than k. For example, with the latest generation of Raptor codes, the RaptorQ codes, the chance of decoding failure when k encoding symbols have been received is less than 1%, and the chance of decoding failure when k+2 encoding symbols have been received is less than one in a million. (See the Recovery probability and overhead section below for more discussion of this.) A symbol can be any size, from a single byte to hundreds or thousands of bytes.

Raptor codes may be systematic or non-systematic. In the systematic case, the symbols of the original source block, i.e. the source symbols, are included within the set of encoding symbols. Examples of systematic Raptor code use include mobile cellular wireless broadcasting and multicasting standardized by the 3rd Generation Partnership Project, and the DVB-H standards for IP datacast to handheld devices. The Raptor code used in these standards is also defined in IETF RFC 5053.

Online codes are an example of a non-systematic fountain code.

RaptorQ code

The most advanced version of Raptor is the RaptorQ code defined in IETF RFC 6330. The RaptorQ code is a systematic code, can be implemented to achieve linear-time encoding and decoding performance, has near-optimal recovery properties (see the Recovery probability and overhead section below for more details), supports up to 56,403 source symbols per source block, and can support an essentially unlimited number of encoding symbols.

The RaptorQ code defined in IETF RFC 6330 is specified as part of the Next Gen TV (ATSC 3.0) standard to enable high-quality broadcast video streaming (robust mobile TV) and efficient, reliable broadcast file delivery (datacasting). In particular, the RaptorQ code is specified in A/331: Signaling, Delivery, Synchronization, and Error Protection within ATSC 3.0 (see List of ATSC standards for a list of the ATSC 3.0 standard parts). Next Gen TV (ATSC 3.0) goes well beyond traditional TV, providing a broadcast internet capability that enables general data delivery services.

Overview

Raptor codes are formed by the concatenation of two codes.

A fixed-rate erasure code, usually with a fairly high rate, is applied as a 'pre-code' or 'outer code'. This pre-code may itself be a concatenation of multiple codes; for example, in the code standardized by 3GPP, a high-density parity-check code derived from the binary Gray sequence is concatenated with a simple regular low-density parity-check code. Another possibility would be a concatenation of a Hamming code with a low-density parity-check code.

The inner code takes the result of the pre-coding operation and generates a sequence of encoding symbols. The inner code is a form of LT code. Each encoding symbol is the XOR of a pseudo-randomly chosen set of symbols from the pre-code output. The number of symbols XOR'ed together to form an output symbol is chosen pseudo-randomly for each output symbol according to a specific probability distribution.

This distribution, as well as the mechanism for generating pseudo-random numbers for sampling this distribution and for choosing the symbols to be XOR'ed, must be known to both sender and receiver. In one approach, each encoding symbol is accompanied by an identifier which can be used as a seed to a pseudo-random number generator to generate this information, with the same process being followed by both sender and receiver.
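
The sketch below illustrates the general shape of this process in Python. It is a toy illustration only: the degree distribution, the pseudo-random generator, the pre-code (a single parity symbol) and all function names are invented here for clarity, and are not the constructions fixed by RFC 5053 or RFC 6330.

```python
# Toy sketch of an LT-style inner encoder. Not the standardized construction:
# the degree distribution, PRNG, and pre-code below are placeholders.
import random
from functools import reduce


def xor_symbols(symbols):
    """XOR a non-empty iterable of equal-length byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), symbols)


def toy_precode(source_symbols):
    """Stand-in for the high-rate pre-code: append one overall parity symbol."""
    return source_symbols + [xor_symbols(source_symbols)]


def encoding_symbol(intermediate, esi, degree_dist=(1, 2, 2, 3, 3, 4)):
    """Generate one encoding symbol from its encoding symbol identifier (ESI).

    Sender and receiver run the same function: the ESI seeds a PRNG, the PRNG
    picks a degree from a (toy) distribution and then a pseudo-random set of
    pre-code output symbols, and the symbol value is their XOR.
    """
    rng = random.Random(esi)                  # ESI acts as the shared seed
    degree = rng.choice(degree_dist)          # toy degree distribution
    neighbours = rng.sample(range(len(intermediate)), degree)
    return esi, neighbours, xor_symbols(intermediate[i] for i in neighbours)


# Example: 4 source symbols of 4 bytes each; emit a short stream of symbols.
source = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40",
          b"\xaa\xbb\xcc\xdd", b"\x0f\x0e\x0d\x0c"]
intermediate = toy_precode(source)
for esi in range(6):
    ident, neighbours, value = encoding_symbol(intermediate, esi)
    print(ident, sorted(neighbours), value.hex())
```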

In the case of non-systematic Raptor codes, the source data to be encoded is used as the input to the pre-coding stage.

In the case of systematic Raptor codes, the input to the pre-coding stage is obtained by first applying the inverse of the encoding operation that generates the first k output symbols to the source data. Thus, applying the normal encoding operation to the resulting symbols causes the original source symbols to be regenerated as the first k output symbols of the code. It is necessary to ensure that the encoding operation which generates the first k output symbols, as determined by the pseudo-random processes, is invertible.
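
In symbols (using notation introduced here rather than taken from the source): if E_k denotes the GF(2)-linear map given by the pre-code followed by the LT operations that produce the first k output symbols, and s is the vector of source symbols, then the block fed to the pre-coder is

$$x = E_k^{-1}\, s, \qquad \text{so that} \qquad E_k\, x = s,$$

i.e. the first k encoding symbols reproduce the source symbols exactly. The invertibility requirement above is precisely the condition that E_k^{-1} exists.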

Decoding

Two approaches are possible for decoding Raptor codes. In a concatenated approach, the inner code is decoded first, using a belief propagation algorithm, as used for the LT codes. Decoding succeeds if this operation recovers a sufficient number of symbols, such that the outer code can recover the remaining symbols using the decoding algorithm appropriate for that code.

In a combined approach, the relationships between symbols defined by both the inner and outer codes are considered as a single combined set of simultaneous equations which can be solved by the usual means, typically by Gaussian elimination.
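
As a rough illustration of the combined approach, the Python sketch below (with invented helper names; it is not the RFC 6330 decoder, which also uses some GF(256) relationships and inactivation decoding to keep the work close to linear) treats every received encoding symbol and every pre-code constraint as one XOR equation over the unknown pre-code output symbols and solves the system by Gaussian elimination over GF(2).

```python
# Minimal sketch of "combined" decoding: each equation says that the XOR of a
# set of unknown pre-code output symbols equals a known byte string (either a
# received encoding symbol or the all-zero value of a pre-code constraint).

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def solve_gf2(equations, values, n):
    """Solve the XOR system over GF(2).

    equations: list of sets of column indices (which unknowns appear);
    values: list of equal-length byte strings (right-hand sides);
    n: number of unknown symbols.
    Returns the n recovered symbols, or None if the system is rank-deficient.
    """
    equations = [set(eq) for eq in equations]
    values = list(values)
    pivot_of_col = {}
    for col in range(n):
        # Pick an equation containing this column that is not already a pivot.
        pivot = next((i for i, eq in enumerate(equations)
                      if col in eq and i not in pivot_of_col.values()), None)
        if pivot is None:
            return None                  # not enough independent equations
        pivot_of_col[col] = pivot
        # Eliminate the column from every other equation (XOR of two rows is
        # the symmetric difference of their index sets).
        for i, eq in enumerate(equations):
            if i != pivot and col in eq:
                equations[i] = eq ^ equations[pivot]
                values[i] = xor_bytes(values[i], values[pivot])
    # Each pivot equation now involves exactly one unknown.
    return [values[pivot_of_col[col]] for col in range(n)]


# Example: recover 3 unknown symbols from 4 XOR equations.
eqs = [{0, 1}, {1, 2}, {0, 2}, {0, 1, 2}]
vals = [b"\x03", b"\x06", b"\x05", b"\x07"]
print(solve_gf2(eqs, vals, 3))   # [b'\x01', b'\x02', b'\x04']
```

Practical decoders avoid the cost of dense elimination by exploiting the sparsity of the LT equations, for example via belief propagation or inactivation decoding, which is what makes linear-time decoding achievable.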

Computational complexity

Raptor codes require O(symbol size) time to generate an encoding symbol from a source block, and require O(source block size) time to recover a source block from at least k encoding symbols.

Recovery probability and overhead

The overhead is the number of additional encoding symbols, beyond the number k of source symbols in the original source block, that need to be received to completely recover the source block. (Based on elementary information theory considerations, complete recovery of a source block with k source symbols is not possible if fewer than k encoding symbols are received.) The recovery probability is the probability that the source block is completely recovered upon receiving a given number of random encoding symbols generated from the source block.

The RaptorQ code specified in IETF RFC 6330 has the following trade-off between recovery probability and recovery overhead:

  - If k encoding symbols are received (zero overhead), the chance of decoding failure is less than 1 in 100.
  - If k+1 encoding symbols are received (one symbol of overhead), the chance of decoding failure is less than 1 in 10,000.
  - If k+2 encoding symbols are received (two symbols of overhead), the chance of decoding failure is less than 1 in 1,000,000.

These statements hold for the entire range of k supported in IETF RFC 6330, i.e., k = 1, ..., 56403. See IETF RFC 6330 for more details.

Qualcomm, Inc. has published an IPR statement for the Raptor code specified in IETF RFC 5053, and an IPR statement for the more advanced RaptorQ code specified in IETF RFC 6330. These statements mirror the licensing commitment Qualcomm, Inc. has made with respect to the MPEG DASH standard. The MPEG DASH standard has been deployed by a wide variety of companies, including DASH Industry Forum member companies.

Notes

  1. Amin Shokrollahi (31 January 2011). The Development of Raptor Codes (Speech). Invited talk at the Kungliga Tekniska högskolan. Retrieved 24 February 2012.

