Polar code (coding theory)

In information theory, a polar code is a linear block error-correcting code. The code construction is based on a multiple recursive concatenation of a short kernel code which transforms the physical channel into virtual outer channels. When the number of recursions becomes large, the virtual channels tend to either have high reliability or low reliability (in other words, they polarize), and the data bits are allocated to the most reliable channels. It is the first code with an explicit construction to provably achieve the channel capacity for symmetric binary-input, discrete, memoryless channels (B-DMCs) with polynomial dependence on the gap to capacity. [1] Polar codes were developed by Erdal Arıkan, a professor of electrical engineering at Bilkent University.
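The polarization phenomenon is easiest to see on the binary erasure channel (BEC), where the reliability recursion is exact. Below is a minimal illustrative sketch (not Arıkan's general construction); the erasure probability ε = 0.5, the recursion depth, and the thresholds are arbitrary choices for demonstration.

```python
# Channel polarization on the BEC: after each polarization step, a channel
# with erasure probability z splits into a worse channel (2z - z^2) and a
# better channel (z^2). Recursing drives almost every synthetic channel
# toward erasure probability 0 or 1.

def polarize_bec(eps: float, n_levels: int) -> list[float]:
    """Erasure probabilities of the 2**n_levels synthetic channels."""
    z = [eps]
    for _ in range(n_levels):
        z = [w for zi in z for w in (2 * zi - zi * zi, zi * zi)]
    return z

z = polarize_bec(eps=0.5, n_levels=10)          # N = 1024 synthetic channels
good = sum(zi < 1e-3 for zi in z)               # nearly noiseless
bad = sum(zi > 1 - 1e-3 for zi in z)            # nearly useless
print(f"{good} nearly noiseless, {bad} nearly useless, out of {len(z)}")
# Data bits go on the most reliable indices; the rest are frozen to 0.
info_set = sorted(range(len(z)), key=z.__getitem__)[: len(z) // 2]
```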

Notably, polar codes have modest encoding and decoding complexity O(n log n), which renders them attractive for many applications. Moreover, the encoding and decoding energy complexity of generalized polar codes can reach the fundamental lower bounds for energy consumption of two-dimensional circuitry to within an O(n^ε polylog n) factor for any ε > 0. [2]
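The O(n log n) complexity reflects the butterfly structure of the polar transform x = u·F^{⊗n}, with kernel F = [[1, 0], [1, 1]] over GF(2). Here is a minimal encoder sketch in natural bit order (the bit-reversal permutation used in some formulations is omitted); the example values are illustrative.

```python
def polar_transform(u: list[int]) -> list[int]:
    """Compute x = u * F^{(tensor) n} over GF(2), n = log2(len(u)),
    using (len(u)/2) * log2(len(u)) XORs in a butterfly pattern."""
    x = list(u)
    step = 1
    while step < len(x):
        for i in range(0, len(x), 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # kernel: (a, b) -> (a XOR b, b)
        step *= 2
    return x

# Encoding: put data on the reliable indices, zeros on the frozen indices.
u = [0] * 8
u[7] = 1                   # index 7 is the single most reliable index at N = 8
print(polar_transform(u))  # [1, 1, 1, 1, 1, 1, 1, 1]
```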

Industrial applications

Polar codes have some limitations when used in industrial applications. Primarily, the original design of polar codes achieves capacity only when block sizes are asymptotically large under a successive cancellation decoder. At the block sizes used in industry, however, the performance of successive cancellation decoding is poor compared to mature, well-implemented coding schemes such as low-density parity-check (LDPC) and turbo codes. Performance can be improved with successive cancellation list decoding, but its usability in real applications remains questionable due to the poor implementation efficiency of its serial, bit-by-bit decoding process. [3]
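For reference, successive cancellation decoding itself is a short recursive procedure. The sketch below uses the common min-sum approximation for the check-node (f) update and reuses polar_transform from the encoder sketch above; the frozen set in the demo is an arbitrary illustrative choice, not a standardized design.

```python
import math

def sc_decode(llr: list[float], frozen: set[int]) -> list[int]:
    """Successive cancellation decoding over LLRs (positive LLR favors 0).
    Returns the estimated message bits u (frozen positions decoded as 0)."""
    n = len(llr)
    if n == 1:
        return [0] if 0 in frozen else [0 if llr[0] >= 0 else 1]
    half = n // 2
    a, b = llr[:half], llr[half:]
    # f step (min-sum): LLRs of the less reliable "minus" channels
    f = [math.copysign(1.0, x) * math.copysign(1.0, y) * min(abs(x), abs(y))
         for x, y in zip(a, b)]
    u1 = sc_decode(f, {i for i in frozen if i < half})
    s = polar_transform(u1)            # re-encode the already-decided half
    # g step: LLRs of the "plus" channels, given the partial sums s
    g = [y + (1 - 2 * si) * x for x, y, si in zip(a, b, s)]
    u2 = sc_decode(g, {i - half for i in frozen if i >= half})
    return u1 + u2

# Demo: all-zero codeword sent, one weak observation; SC still recovers it.
frozen = {0, 1, 2, 4}                  # illustrative frozen set for N = 8
llrs = [1.0] * 8
llrs[3] = -0.1
print(sc_decode(llrs, frozen))         # [0, 0, 0, 0, 0, 0, 0, 0]
```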

In October 2016, Huawei announced that it had achieved 27 Gbit/s in 5G field trial tests using polar codes for channel coding. With these improvements, channel performance almost closed the gap to the Shannon limit, which sets the maximum rate for a given bandwidth and a given noise level. [4]

In November 2016, 3GPP agreed to adopt polar codes for the eMBB (Enhanced Mobile Broadband) control channels for the 5G NR (New Radio) interface. At the same meeting, 3GPP agreed to use LDPC for the corresponding data channel. [5]

PAC code

In 2020, Arıkan introduced a novel polar coding method dubbed polarization-adjusted convolutional (PAC) codes. At short blocklengths, such codes outperform both convolutional codes and CRC-aided list decoding of conventional polar codes. [6] [7]
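In outline, a PAC encoder chains three steps: a rate profile places the data bits on a chosen index set, a binary convolution spreads each bit over later positions, and the ordinary polar transform follows. The sketch below reuses polar_transform from above; the tap sequence and the choice of index set are illustrative assumptions rather than Arıkan's specific design.

```python
def pac_encode(data: list[int], info_set: list[int], n: int,
               taps: tuple[int, ...] = (1, 0, 1, 1, 0, 1, 1)) -> list[int]:
    """Sketch of PAC encoding: rate profiling, then convolutional
    precoding v[i] = XOR_j taps[j]*u[i-j], then the polar transform."""
    u = [0] * n                               # 1. rate profiling
    for idx, bit in zip(sorted(info_set), data):
        u[idx] = bit
    v = [0] * n                               # 2. convolutional precoding
    for i in range(n):
        for j, t in enumerate(taps):
            if t and i - j >= 0:
                v[i] ^= u[i - j]
    return polar_transform(v)                 # 3. polar transform
```

A sequential decoder (or a list decoder, as in [7]) then searches the code tree induced by the convolution instead of deciding each bit irrevocably as plain SC does.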

Neural Polar Decoders

Neural polar decoders (NPDs) [8] combine neural networks (NNs) with polar codes, providing unified decoding for channels with or without memory, without requiring an explicit channel model. They use four neural networks to approximate the functions of polar decoding: the embedding (E) NN, the check-node (F) NN, the bit-node (G) NN, and the embedding-to-LLR (H) NN. The weights of these NNs are determined by estimating the mutual information of the synthetic channels; once training ends, the weights are fixed and the NPD can be used for decoding.
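As a rough, hypothetical illustration of the dataflow only (the trained architectures are those specified in [8]; every dimension, weight, and activation below is a stand-in), the four networks replace the scalar f, g, and LLR computations of SC decoding with learned maps on d-dimensional embeddings:

```python
import numpy as np

d = 16                              # embedding dimension (assumption)
rng = np.random.default_rng(0)      # random stand-ins for trained weights
W_E = rng.normal(size=(1, d))
W_F = rng.normal(size=(2 * d, d))
W_G = rng.normal(size=(2 * d + 1, d))
W_H = rng.normal(size=(d, 1))

def E(y):          # embedding NN: channel outputs -> embeddings
    return np.tanh(np.atleast_2d(y).T @ W_E)

def F(e1, e2):     # check-node NN: plays the role of the classical f update
    return np.tanh(np.concatenate([e1, e2], axis=-1) @ W_F)

def G(e1, e2, u):  # bit-node NN: the g update, conditioned on a decided bit u
    z = np.concatenate([e1, e2, np.full((len(e1), 1), float(u))], axis=-1)
    return np.tanh(z @ W_G)

def H(e):          # embedding-to-LLR NN: soft decision from an embedding
    return (e @ W_H).ravel()
```

The SC recursion then proceeds as in the classical decoder, but passing embeddings through F and G instead of scalar LLRs through f and g, with H producing the final bit decisions.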

The computational complexity of NPDs is determined by the parameterization of the neural networks, unlike successive cancellation (SC) trellis decoders [9], whose complexity is determined by the channel model and which are typically used for finite-state channels (FSCs). The computational complexity of NPDs is O(c(h, d) · N log N), where h is the number of hidden units in the neural networks, d is the dimension of the embedding, N is the block length, and c(h, d) is the fixed cost of one neural-network evaluation. In contrast, the computational complexity of SC trellis decoders grows with the state space S of the channel model, on the order of O(|S|³ · N log N).

NPDs can be integrated into SC decoding [1] schemes such as SC list decoding and CRC-aided SC decoding [10]. They are also compatible with non-uniform input distributions by integration into the Honda–Yamamoto scheme [11]. This flexibility allows NPDs to be used in various decoding scenarios, improving error-correction performance while maintaining manageable computational complexity.


References

  1. Arikan, Erdal (2009). "Channel Polarization: A Method for Constructing Capacity-Achieving Codes for Symmetric Binary-Input Memoryless Channels". IEEE Transactions on Information Theory. 55 (7): 3051–3073. doi:10.1109/TIT.2009.2021379. ISSN 0018-9448.
  2. Blake, Christopher G. (2017). "Energy Consumption of Error Control Coding Circuits" (PDF). University of Toronto. Retrieved 2019-10-18.
  3. Arikan, Erdal; et al. (2015). "Challenges and Some New Directions in Channel Coding". arXiv:1504.03916.
  4. "Huawei achieves 27Gbps 5G speeds with Polar Code". Retrieved 2016-10-10.
  5. "3GPP RAN1 meeting #87 final report". 3GPP. Retrieved 31 August 2017. [dead link]
  6. Moradi, Mohsen; et al. (2020). "Performance and Complexity of Sequential Decoding of PAC Codes". arXiv:2012.04990.
  7. Yao, Hanwen; Fazeli, Arman; Vardy, Alexander (2021). "List Decoding of Arıkan's PAC Codes". Entropy. 23 (7): 841. arXiv:2005.13711. Bibcode:2021Entrp..23..841Y. doi:10.3390/e23070841. PMC 8303677. PMID 34209050.
  8. Aharoni, Ziv; Huleihel, Bashar; Pfister, Henry D.; Permuter, Haim H. (2023). "Data-Driven Neural Polar Codes for Unknown Channels With and Without Memory". arXiv:2309.03148. Retrieved 2024-07-15.
  9. Wang, Runxin; Honda, Junya; Yamamoto, Hirosuke; Liu, Rongke; Hou, Yi (2015). "Construction of Polar Codes for Channels with Memory". 2015 IEEE Information Theory Workshop – Fall (ITW). IEEE: 187–191. doi:10.1109/ITWF.2015.7360760. ISBN 978-1-4673-7852-9.
  10. Tal, Ido; Vardy, Alexander (2015). "List Decoding of Polar Codes". IEEE Transactions on Information Theory. 61 (5): 2213–2226. doi:10.1109/TIT.2015.2410251. ISSN 0018-9448.
  11. Honda, Junya; Yamamoto, Hirosuke (2013). "Polar Coding Without Alphabet Extension for Asymmetric Models". IEEE Transactions on Information Theory. 59 (12): 7829–7838. doi:10.1109/TIT.2013.2282305. ISSN 0018-9448.