A Viterbi decoder uses the Viterbi algorithm for decoding a bitstream that has been encoded using a convolutional code or trellis code.
There are other algorithms for decoding a convolutionally encoded stream (for example, the Fano algorithm). The Viterbi algorithm is the most resource-consuming of them, but it performs maximum-likelihood decoding. It is most often used for decoding convolutional codes with constraint lengths k≤3, but values up to k=15 are used in practice.
Viterbi decoding was developed by Andrew J. Viterbi and published in the paper: Viterbi, A. (April 1967). "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm". IEEE Transactions on Information Theory. 13 (2): 260–269. doi:10.1109/tit.1967.1054010.
There are both hardware (in modems) and software implementations of a Viterbi decoder.
Viterbi decoding is also used as a component of the iterative Viterbi decoding algorithm.
A hardware Viterbi decoder for basic (not punctured) code usually consists of the following major blocks: a branch metric unit (BMU), a path metric unit (PMU), and a traceback unit (TBU).
A branch metric unit's function is to calculate branch metrics, which are normed distances between every possible symbol in the code alphabet and the received symbol.
There are hard decision and soft decision Viterbi decoders. A hard decision Viterbi decoder receives a simple bitstream on its input, and a Hamming distance is used as a metric. A soft decision Viterbi decoder receives a bitstream containing information about the reliability of each received symbol. For instance, in a 3-bit encoding, this reliability information can be encoded as follows:
value | meaning
---|---
000 | strongest 0
001 | relatively strong 0
010 | relatively weak 0
011 | weakest 0
100 | weakest 1
101 | relatively weak 1
110 | relatively strong 1
111 | strongest 1
Of course, this is not the only way to encode reliability data.
The squared Euclidean distance is used as a metric for soft decision decoders.
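To make the two metric choices concrete, the following minimal Python sketch (not from the original article) computes a hard-decision Hamming branch metric and a soft-decision squared-Euclidean branch metric; the ±1 symbol mapping it assumes matches the vector-mapping table given further below.

```python
# Illustrative branch metrics for hard- and soft-decision decoding.
# Assumption: code symbols are mapped to ±1 values, e.g. bit 0 -> +1, bit 1 -> -1.

def hamming_branch_metric(received_bits, expected_bits):
    """Hard decision: count of bit positions that differ."""
    return sum(r != e for r, e in zip(received_bits, expected_bits))

def euclidean_branch_metric(received_soft, expected_symbol):
    """Soft decision: squared Euclidean distance to the expected ±1 symbol."""
    return sum((r - e) ** 2 for r, e in zip(received_soft, expected_symbol))

# Expected code symbol 01 -> (+1, -1); hard decision of the received pair is (0, 0).
print(hamming_branch_metric((0, 0), (0, 1)))           # 1
print(euclidean_branch_metric((0.9, -0.2), (+1, -1)))  # about 0.65
```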
A path metric unit summarizes branch metrics to get metrics for 2^(K−1) paths, where K is the constraint length of the code, one of which can eventually be chosen as optimal. Every clock cycle it makes 2^(K−1) decisions, discarding the paths that are known to be non-optimal. The results of these decisions are written to the memory of a traceback unit.
The core elements of a PMU are ACS (Add-Compare-Select) units. The way in which they are connected to each other is defined by a specific code's trellis diagram.
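A minimal Python sketch of a single ACS step (an illustration, not a description of any particular hardware design): the two merging paths' metrics are extended by their branch metrics, compared, and the survivor plus a one-bit decision are produced.

```python
def acs(path_metric_a, branch_metric_a, path_metric_b, branch_metric_b):
    """One Add-Compare-Select step for a trellis state where two paths merge.

    Returns the survivor's new path metric and a one-bit decision that is
    later stored in the traceback memory."""
    candidate_a = path_metric_a + branch_metric_a   # add
    candidate_b = path_metric_b + branch_metric_b
    if candidate_a <= candidate_b:                  # compare (smaller distance wins)
        return candidate_a, 0                       # select path A
    return candidate_b, 1                           # select path B
```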
Since branch metrics are always non-negative, path metrics grow without bound, and there must be an additional circuit (not shown in the image) preventing the metric counters from overflowing. An alternate method that eliminates the need to monitor the path metric growth is to allow the path metrics to "roll over"; to use this method it is necessary to make sure the path metric accumulators contain enough bits to prevent the "best" and "worst" values from coming within 2^(n−1) of each other. The compare circuit is essentially unchanged.
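A sketch of the roll-over idea in Python (illustrative; the accumulator width is an assumed parameter): if the metrics are kept modulo 2^n and the spread between best and worst never reaches 2^(n−1), an ordinary modular subtraction still orders them correctly.

```python
N_BITS = 16                 # assumed accumulator width
MOD = 1 << N_BITS

def metric_less_equal(a, b):
    """True if path metric a <= b, where both wrap modulo 2**N_BITS.

    Valid only while the unwrapped metrics stay within 2**(N_BITS - 1)
    of each other, as the text requires."""
    diff = (a - b) % MOD
    return diff == 0 or diff > (MOD >> 1)   # signed interpretation of the difference
```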
It is possible to monitor the noise level on the incoming bit stream by monitoring the rate of growth of the "best" path metric. A simpler way to do this is to monitor a single location or "state" and watch it pass "upward" through say four discrete levels within the range of the accumulator. As it passes upward through each of these thresholds, a counter is incremented that reflects the "noise" present on the incoming signal.
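A rough Python sketch of that noise monitor (threshold values and names are invented for illustration): it watches one accumulator and increments a counter each time the metric climbs past one of a few fixed levels.

```python
THRESHOLDS = (0x1000, 0x2000, 0x3000, 0x4000)   # illustrative levels within the accumulator range

class NoiseMonitor:
    def __init__(self):
        self.previous = 0
        self.crossings = 0    # grows faster when the incoming signal is noisier

    def update(self, metric):
        """Call once per decoding step with the watched state's path metric."""
        for level in THRESHOLDS:
            if self.previous < level <= metric:
                self.crossings += 1
        self.previous = metric
```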
The traceback unit restores an (almost) maximum-likelihood path from the decisions made by the PMU. Since it does this in the reverse direction, a Viterbi decoder comprises a FILO (first-in, last-out) buffer to reconstruct the correct order.
Note that the implementation shown in the image requires double the clock frequency. There are some tricks that eliminate this requirement.
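The following Python sketch shows the traceback idea for an assumed 4-state (K = 3) trellis in which the state register holds the two most recent input bits; decisions[t][s] is the survivor bit the PMU stored for state s at step t. The list used as a stack plays the role of the FILO buffer. Names and state conventions are illustrative only.

```python
K = 3
NUM_STATES = 1 << (K - 1)
STATE_MASK = NUM_STATES - 1

def predecessor(state, decision_bit):
    """Recover the previous state by shifting the stored decision bit back in."""
    return ((state << 1) | decision_bit) & STATE_MASK

def traceback(decisions, final_state=0):
    """Walk the stored decisions backwards from final_state.

    Decoded bits come out newest-first, so they are pushed onto a stack
    (the FILO buffer) and reversed at the end to restore the correct order."""
    state = final_state
    stack = []
    for step in reversed(range(len(decisions))):
        stack.append(state >> (K - 2))             # the newest input bit held in the state
        state = predecessor(state, decisions[step][state])
    stack.reverse()
    return stack
```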
In order to fully exploit the benefits of soft decision decoding, one needs to quantize the input signal properly. The optimal quantization zone width is defined by the following formula:

T = √(N0/2) / 2^(k−1)

where N0 is the noise power spectral density, and k is the number of bits for soft decision.
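As an illustration only (the function name and the ±1 signal normalization are assumptions, not from the article), a uniform k-bit quantizer with zone width T can be sketched as:

```python
import math

def quantize(sample, zone_width, k):
    """Map a received sample to one of 2**k soft-decision levels.

    With k = 3 this reproduces the 000..111 reliability scale shown earlier:
    samples just below 0 land in 011 ("weakest 0"), just above 0 in 100."""
    level = math.floor(sample / zone_width) + (1 << (k - 1))
    return max(0, min(level, (1 << k) - 1))   # clamp to the representable range

print(quantize(-0.05, 0.25, 3))   # 3 -> 011, weakest 0
print(quantize(+0.90, 0.25, 3))   # 7 -> 111, strongest 1
```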
The squared norm (Euclidean) distance between the received and the actual symbols in the code alphabet may be further simplified into a linear sum/difference form, which makes it less computationally intensive.
Consider a rate-1/2 convolutional code, which generates 2 bits (00, 01, 10, or 11) for every input bit (1 or 0). These Return-to-Zero signals are translated into the Non-Return-to-Zero form shown in the table below.
code alphabet | vector mapping |
---|---|
00 | +1, +1 |
01 | +1, −1 |
10 | −1, +1 |
11 | −1, −1 |
Each received symbol may be represented in vector form as vr = {r0, r1}, where r0 and r1 are soft decision values, whose magnitudes signify the joint reliability of the received vector, vr.
Every symbol in the code alphabet may, likewise, be represented by the vector vi = {±1, ±1}.
The actual computation of the Euclidean distance metric is:

||vr − vi||^2 = ||vr||^2 − 2(vr · vi) + ||vi||^2
Each square term is a normed distance, depicting the energy of the symbol. For example, the energy of the symbol vi = {±1, ±1} may be computed as

||vi||^2 = (±1)^2 + (±1)^2 = 2

Thus, the energy term of all symbols in the code alphabet is constant, at a (normalized) value of 2.
The Add-Compare-Select (ACS) operation compares the metric distance between the received symbol vr and the 2 symbols in the code alphabet whose paths merge at a node in the corresponding trellis, vi(0) and vi(1). This is equivalent to comparing

||vr||^2 − 2(vr · vi(0)) + ||vi(0)||^2

and

||vr||^2 − 2(vr · vi(1)) + ||vi(1)||^2.

But, from above we know that the energy of vi is constant (equal to a (normalized) value of 2), and the energy of vr is the same in both cases. This reduces the comparison to a minimum over the 2 (middle) dot product terms,

min(−2(vr · vi(0)), −2(vr · vi(1))), i.e. max(vr · vi(0), vr · vi(1)),
since a min operation on negative numbers may be interpreted as an equivalent max operation on positive quantities.
Each dot product term may be expanded as

vr · vi = r0(±1) + r1(±1) = ±r0 ± r1,

where the signs of each term depend on the symbols, vi(0) and vi(1), being compared. Thus, the squared Euclidean metric distance calculation to compute the branch metric may be performed with a simple add/subtract operation.
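A short Python sketch of this simplification (mapping and names as assumed above): each branch metric becomes ±r0 ± r1, and the best-matching symbol is the one with the largest value.

```python
# Code symbols mapped to ±1 vectors as in the table above.
SYMBOL_VECTORS = {
    (0, 0): (+1, +1),
    (0, 1): (+1, -1),
    (1, 0): (-1, +1),
    (1, 1): (-1, -1),
}

def correlation_branch_metric(r0, r1, code_bits):
    """Dot product v_r . v_i; multiplying by ±1 is just an add or a subtract."""
    s0, s1 = SYMBOL_VECTORS[code_bits]
    return s0 * r0 + s1 * r1

# Received soft values (0.8, -0.6): the best-matching symbol is 01 -> (+1, -1).
best = max(SYMBOL_VECTORS, key=lambda bits: correlation_branch_metric(0.8, -0.6, bits))
print(best)   # (0, 1)
```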
The general approach to traceback is to accumulate path metrics for up to five times the memory of the code (5(K − 1) trellis stages), find the node with the largest accumulated cost, and begin traceback from this node.
The commonly used rule of thumb of a truncation depth of five times the memory (K − 1) of a convolutional code is accurate only for rate-1/2 codes. For an arbitrary rate, an accurate rule of thumb is 2.5(K − 1)/(1 − r), where r is the code rate. [1]
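A one-line helper (illustrative) makes the rule of thumb concrete; for K = 7 it gives a depth of 30 at rate 1/2 and 60 at rate 3/4.

```python
import math

def truncation_depth(constraint_length, rate):
    """Rule-of-thumb traceback depth: 2.5 * (K - 1) / (1 - r) trellis steps."""
    return math.ceil(2.5 * (constraint_length - 1) / (1 - rate))

print(truncation_depth(7, 1/2))   # 30, the same as 5 * (K - 1)
print(truncation_depth(7, 3/4))   # 60, higher-rate codes need a deeper traceback
```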
However, computing the node which has accumulated the largest cost (either the largest or smallest integral path metric) involves finding the maximum or minimum of several (usually 2^(K−1)) numbers, which may be time consuming when implemented on embedded hardware systems.
Most communication systems employ Viterbi decoding on data packets of fixed size, with a fixed bit/byte pattern at the beginning and/or the end of the data packet. By using the known bit/byte pattern as a reference, the start node may be set to a fixed value, thereby obtaining a perfect maximum-likelihood path during traceback.
A physical implementation of a Viterbi decoder will not yield an exact maximum-likelihood stream due to quantization of the input signal, branch and path metrics, and finite traceback length. Practical implementations do approach within 1 dB of the ideal.
The output of a Viterbi decoder, when decoding a message damaged by an additive Gaussian channel, has errors grouped in error bursts. [2] [3] Single-error-correcting codes alone can't correct such bursts, so either the convolutional code and the Viterbi decoder must be designed powerful enough to drive down errors to an acceptable rate, or burst error-correcting codes must be used.
A hardware Viterbi decoder for punctured codes is commonly implemented by depuncturing the received stream, that is, inserting neutral "erasure" values at the punctured positions so that they do not contribute to the branch metrics, and feeding the result to an ordinary Viterbi decoder.
In software implementations, one of the most time-consuming operations is the ACS butterfly, which is usually implemented using assembly language and appropriate instruction-set extensions (such as SSE2) to speed up decoding.
The Viterbi decoding algorithm is widely used in the areas and related techniques summarized below.
In information theory and coding theory with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.
In telecommunication, a convolutional code is a type of error-correcting code that generates parity symbols via the sliding application of a boolean polynomial function to a data stream. The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'. The sliding nature of the convolutional codes facilitates trellis decoding using a time-invariant trellis. Time invariant trellis decoding allows convolutional codes to be maximum-likelihood soft-decision decoded with reasonable complexity.
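To make the "sliding application of a boolean polynomial" concrete, here is a minimal Python sketch of a rate-1/2, K = 3 convolutional encoder; the generator polynomials 7 and 5 (octal) are a common textbook choice used here purely as an example.

```python
K = 3
G0, G1 = 0b111, 0b101   # example generator polynomials (7, 5 in octal)

def encode(bits):
    """Emit two parity bits per input bit by sliding the generators over
    a window of the last K input bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)   # shift the new bit into the window
        out.append(bin(state & G0).count("1") & 1)    # parity under generator G0
        out.append(bin(state & G1).count("1") & 1)    # parity under generator G1
    return out

print(encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```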
In information theory and coding theory, Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, including consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, QR codes, Data Matrix, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.
A satellite modem or satmodem is a modem used to establish data transfers using a communications satellite as a relay. A satellite modem's main function is to transform an input bitstream to a radio signal and vice versa.
The Viterbi algorithm is a dynamic programming algorithm for obtaining the maximum a posteriori probability estimate of the most likely sequence of hidden states—called the Viterbi path—that results in a sequence of observed events. This is done especially in the context of Markov information sources and hidden Markov models (HMM).
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data.
In computer data storage, partial-response maximum-likelihood (PRML) is a method for recovering the digital data from the weak analog read-back signal picked up by the head of a magnetic disk drive or tape drive. PRML was introduced to recover data more reliably or at a greater areal-density than earlier simpler schemes such as peak-detection. These advances are important because most of the digital data in the world is stored using magnetic storage on hard disk or tape drives.
In information theory, turbo codes are a class of high-performance forward error correction (FEC) codes developed around 1990–91, but first published in 1993. They were the first practical codes to closely approach the maximum channel capacity or Shannon limit, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are used in 3G/4G mobile communications and in satellite communications as well as other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes compete with low-density parity-check (LDPC) codes, which provide similar performance. Until the patent for turbo codes expired, the patent-free status of LDPC codes was an important factor in LDPC's continued relevance.
In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph. LDPC codes are capacity-approaching codes, which means that practical constructions exist that allow the noise threshold to be set very close to the theoretical maximum for a symmetric memoryless channel. The noise threshold defines an upper bound for the channel noise, up to which the probability of lost information can be made as small as desired. Using iterative belief propagation techniques, LDPC codes can be decoded in time linear in their block length.
In coding theory, a linear code is an error-correcting code for which any linear combination of codewords is also a codeword. Linear codes are traditionally partitioned into block codes and convolutional codes, although turbo codes can be seen as a hybrid of these two types. Linear codes allow for more efficient encoding and decoding algorithms than other codes.
In coding theory, decoding is the process of translating received messages into codewords of a given code. There have been many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.
Space–time trellis codes (STTCs) are a type of space–time code used in multiple-antenna wireless communications. This scheme transmits multiple, redundant copies of a generalised TCM signal distributed over time and a number of antennas ('space'). These multiple, 'diverse' copies of the data are used by the receiver to attempt to reconstruct the actual transmitted data. For an STC to be used, there must necessarily be multiple transmit antennas, but only a single receive antenna is required; nevertheless, multiple receive antennas are often used, since the resulting spatial diversity improves the performance of the system.
In computing, telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
In coding theory, a systematic code is any error-correcting code in which the input data are embedded in the encoded output. Conversely, in a non-systematic code the output does not contain the input symbols.
The Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm is an algorithm for maximum a posteriori decoding of error correcting codes defined on trellises. The algorithm is named after its inventors: Bahl, Cocke, Jelinek and Raviv. This algorithm is critical to modern iteratively-decoded error-correcting codes, including turbo codes and low-density parity-check codes.
In coding theory, concatenated codes form a class of error-correcting codes that are derived by combining an inner code and an outer code. They were conceived in 1966 by Dave Forney as a solution to the problem of finding a code that has both exponentially decreasing error probability with increasing block length and polynomial-time decoding complexity. Concatenated codes became widely used in space communications in the 1970s.
Recognised by John Wozencraft, sequential decoding is a limited-memory technique for decoding tree codes. Sequential decoding is mainly used as an approximate decoding algorithm for long constraint-length convolutional codes. This approach may not be as accurate as the Viterbi algorithm but can save a substantial amount of computer memory. It was used to decode a convolutional code in the 1968 Pioneer 9 mission.
WSPR is an acronym for Weak Signal Propagation Reporter. It is a protocol, implemented in a computer program, used for weak-signal radio communication between amateur radio operators. The protocol was designed, and a program written initially, by Joe Taylor, K1JT. The software code is now open source and is developed by a small team. The program is designed for sending and receiving low-power transmissions to test propagation paths on the MF and HF bands.
Noise-Predictive Maximum-Likelihood (NPML) is a class of digital signal-processing methods suitable for magnetic data storage systems that operate at high linear recording densities. It is used for retrieval of data recorded on magnetic media.
In information theory, polar codes are linear block error-correcting codes. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to have either high reliability or low reliability, and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMC) with polynomial dependence on the gap to capacity. Polar codes were developed by Erdal Arikan, a professor of electrical engineering at Bilkent University.