Binary symmetric channel

A binary symmetric channel (or BSCp) is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit (a zero or a one), and the receiver will receive a bit. The bit will be "flipped" with a "crossover probability" of p, and otherwise is received correctly. This model can be applied to varied communication channels such as telephone lines or disk drive storage.

The noisy-channel coding theorem applies to BSC$_p$, saying that information can be transmitted at any rate up to the channel capacity with arbitrarily low error. The channel capacity is $1 - \operatorname{H}(p)$ bits, where $\operatorname{H}$ is the binary entropy function. Codes including Forney's code have been designed to transmit information efficiently across the channel.

Definition

Figure: The binary symmetric channel sees each bit of a message transmitted correctly with probability $1 - p$ and incorrectly with probability $p$, due to noise across the transmission medium.

A binary symmetric channel with crossover probability $p$, denoted by BSC$_p$, is a channel with binary input and binary output and probability of error $p$. That is, if $X$ is the transmitted random variable and $Y$ the received variable, then the channel is characterized by the conditional probabilities: [1]

$$\begin{aligned}
\Pr[Y = 0 \mid X = 0] &= 1 - p \\
\Pr[Y = 0 \mid X = 1] &= p \\
\Pr[Y = 1 \mid X = 0] &= p \\
\Pr[Y = 1 \mid X = 1] &= 1 - p
\end{aligned}$$

It is assumed that $0 \le p \le 1/2$. If $p > 1/2$, then the receiver can swap the output (interpret 1 when it sees 0, and vice versa) and obtain an equivalent channel with crossover probability $1 - p \le 1/2$.
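As a concrete illustration, the channel can be simulated by flipping each transmitted bit independently with probability $p$. The following minimal Python sketch (function names are illustrative, not from any particular library) does exactly that:

```python
import random

def bsc(bits, p, rng=random):
    """Pass a sequence of bits through a binary symmetric channel.

    Each bit is flipped independently with crossover probability p
    and left unchanged with probability 1 - p.
    """
    return [b ^ 1 if rng.random() < p else b for b in bits]

# Example: send 20 random bits across a BSC with p = 0.1
sent = [random.randint(0, 1) for _ in range(20)]
received = bsc(sent, 0.1)
errors = sum(s != r for s, r in zip(sent, received))
print("sent:    ", sent)
print("received:", received)
print(errors, "bit(s) flipped")
```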

Capacity

Figure: Graph showing the proportion of a channel's capacity (y-axis) that can be used for payload based on how noisy the channel is (probability of bit flips; x-axis).

The channel capacity of the binary symmetric channel, in bits, is: [2]

$$C_{\text{BSC}} = 1 - \operatorname{H}(p),$$

where $\operatorname{H}(p)$ is the binary entropy function, defined by: [2]

$$\operatorname{H}(p) = p \log_2 \frac{1}{p} + (1 - p) \log_2 \frac{1}{1 - p}$$
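As a quick check of these formulas, a short Python sketch (illustrative function names) computes the binary entropy and the resulting capacity for a few crossover probabilities:

```python
from math import log2

def binary_entropy(p):
    """H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return p * log2(1.0 / p) + (1.0 - p) * log2(1.0 / (1.0 - p))

def bsc_capacity(p):
    """Capacity of BSC_p in bits per channel use: 1 - H(p)."""
    return 1.0 - binary_entropy(p)

for p in (0.0, 0.01, 0.11, 0.5):
    print(f"p = {p:4.2f}   H(p) = {binary_entropy(p):.4f}   C = {bsc_capacity(p):.4f}")
```

As expected, the capacity is 1 bit per use for a noiseless channel ($p = 0$) and 0 for $p = 1/2$.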

Noisy-channel coding theorem

Shannon's noisy-channel coding theorem gives a result about the rate of information that can be transmitted through a communication channel with arbitrarily low error. We study the particular case of BSC$_p$.

The noise $e$ that characterizes BSC$_p$ is a random variable consisting of $n$ independent random bits ($n$ is defined below), where each random bit is a $1$ with probability $p$ and a $0$ with probability $1 - p$. We indicate this by writing "$e \in \text{BSC}_p$".

Theorem: For all $p < \tfrac{1}{2}$, all $\epsilon$ such that $0 < \epsilon < \tfrac{1}{2} - p$, all sufficiently large $n$ (depending on $p$ and $\epsilon$), and all $k \le \lfloor (1 - \operatorname{H}(p + \epsilon)) n \rfloor$, there exists a pair of encoding and decoding functions $E: \{0,1\}^k \to \{0,1\}^n$ and $D: \{0,1\}^n \to \{0,1\}^k$ respectively, such that every message $m \in \{0,1\}^k$ has the following property:

$$\Pr_{e \in \text{BSC}_p}\left[ D(E(m) + e) \ne m \right] \le 2^{-\delta n}$$

for some constant $\delta > 0$.

What this theorem actually implies is that, when a message is picked from $\{0,1\}^k$, encoded with a random encoding function $E$, and sent across a noisy BSC$_p$, there is a very high probability of recovering the original message by decoding, if $k \le \lfloor (1 - \operatorname{H}(p + \epsilon)) n \rfloor$, or in effect if the rate of the code is bounded by the quantity stated in the theorem. The decoding error probability is exponentially small.
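To make the bound on $k$ concrete, the following small Python sketch (with purely illustrative parameters) evaluates the largest message length the theorem permits for a given block length, crossover probability, and slack $\epsilon$, and compares the resulting rate with the channel capacity:

```python
from math import floor, log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def max_message_length(n, p, eps):
    """Largest k allowed by the theorem: k <= floor((1 - H(p + eps)) * n)."""
    return floor((1 - binary_entropy(p + eps)) * n)

n, p, eps = 1000, 0.11, 0.02
k = max_message_length(n, p, eps)
print(f"n = {n}, largest k = {k}, rate k/n = {k / n:.3f}")
print(f"channel capacity 1 - H(p) = {1 - binary_entropy(p):.3f}")
```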

Proof

The theorem can be proved directly with a probabilistic method. Consider an encoding function $E: \{0,1\}^k \to \{0,1\}^n$ that is selected at random. This means that for each message $m \in \{0,1\}^k$, the value $E(m) \in \{0,1\}^n$ is selected at random (with equal probabilities). For a given encoding function $E$, the decoding function $D: \{0,1\}^n \to \{0,1\}^k$ is specified as follows: given any received codeword $y \in \{0,1\}^n$, we find the message $m \in \{0,1\}^k$ such that the Hamming distance $\Delta(y, E(m))$ is as small as possible (with ties broken arbitrarily). ($D$ is called a maximum likelihood decoding function.)
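A minimal Python sketch of this random-coding construction and its minimum-distance decoder, with illustrative names and deliberately tiny parameters so the exhaustive search over messages stays cheap:

```python
import random

def random_code(k, n, rng=random):
    """Random encoding function E: each of the 2^k messages maps to a uniformly random n-bit codeword."""
    return {m: tuple(rng.randint(0, 1) for _ in range(n)) for m in range(2 ** k)}

def hamming(a, b):
    """Hamming distance between two equal-length bit tuples."""
    return sum(x != y for x, y in zip(a, b))

def ml_decode(code, y):
    """Return the message whose codeword is closest to y (ties broken arbitrarily)."""
    return min(code, key=lambda m: hamming(code[m], y))

def bsc_noise(word, p, rng=random):
    """Flip each bit of the codeword independently with probability p."""
    return tuple(b ^ 1 if rng.random() < p else b for b in word)

k, n, p = 4, 30, 0.05
code = random_code(k, n)
message = random.randrange(2 ** k)
received = bsc_noise(code[message], p)
decoded = ml_decode(code, received)
print(f"message {message}, decoded {decoded}, success: {decoded == message}")
```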

The proof continues by showing that at least one such choice $(E, D)$ satisfies the conclusion of the theorem, by integration over the probabilities. Suppose $p$ and $\epsilon$ are fixed. First we show that, for a fixed message $m \in \{0,1\}^k$ and $E$ chosen randomly, the probability of failure over the channel noise is exponentially small in $n$. At this point, the proof works for a fixed message $m$. Next we extend this result to work for all messages $m$. We achieve this by eliminating half of the codewords from the code, with the argument that the proof for the decoding error probability holds for at least half of the codewords. The latter method is called expurgation. This gives the total process the name random coding with expurgation.

Converse of Shannon's capacity theorem

The converse of the capacity theorem essentially states that $1 - \operatorname{H}(p)$ is the best rate one can achieve over a binary symmetric channel. Formally the theorem states:

Theorem: If $k \ge \lceil (1 - \operatorname{H}(p + \epsilon)) n \rceil$, then the following is true for every encoding and decoding function $E: \{0,1\}^k \to \{0,1\}^n$ and $D: \{0,1\}^n \to \{0,1\}^k$ respectively: $\Pr_{e \in \text{BSC}_p}[D(E(m) + e) \ne m] \ge \tfrac{1}{2}$ for some message $m \in \{0,1\}^k$. [3]

The intuition behind the proof is, however, that the number of errors grows rapidly as the rate grows beyond the channel capacity. The idea is that the sender generates messages of dimension $k$, while the channel BSC$_p$ introduces transmission errors. When the capacity of the channel is $1 - \operatorname{H}(p)$, the number of typical error patterns is about $2^{\operatorname{H}(p + \epsilon) n}$ for a code of block length $n$. The maximum number of messages is $2^k$. The output of the channel, on the other hand, has $2^n$ possible values. If there is any confusion between any two messages, it is likely that $2^k \, 2^{\operatorname{H}(p + \epsilon) n} \ge 2^n$. Hence we would have $k \ge \lceil (1 - \operatorname{H}(p + \epsilon)) n \rceil$, a case we would like to avoid to keep the decoding error probability exponentially small.

Codes

A great deal of work has been done, and continues to be done, to design explicit error-correcting codes that achieve the capacities of several standard communication channels. The motivation behind designing such codes is to relate the rate of the code to the fraction of errors which it can correct.

The approach behind the design of codes which meet the channel capacities of BSC$_p$ or the binary erasure channel has been to correct a lesser number of errors with a high probability, and to achieve the highest possible rate. Shannon's theorem gives us the best rate which could be achieved over a BSC$_p$, but it does not give us an idea of any explicit codes which achieve that rate. In fact, such codes are typically constructed to correct only a small fraction of errors with a high probability, while achieving a very good rate. The first such code was due to George D. Forney in 1966. The code is a concatenated code obtained by composing two different kinds of codes.

Forney's code

Forney constructed a concatenated code $C^* = C_\text{out} \circ C_\text{in}$ to achieve the capacity of the noisy-channel coding theorem for BSC$_p$. His code concatenates an outer code $C_\text{out}$, which can correct up to a small fraction $\gamma$ of worst-case errors with decoder $D_\text{out}$, with an inner code $C_\text{in}$, whose decoder $D_\text{in}$ has decoding error probability at most $\gamma/2$ over BSC$_p$.

For the outer code $C_\text{out}$, a Reed–Solomon code would have been the first code to come to mind. However, we would see that the construction of such a code cannot be done in polynomial time. This is why a binary linear code is used for $C_\text{out}$.

For the inner code $C_\text{in}$, we find a linear code by exhaustively searching over linear codes of block length $n$ and dimension $k$, whose rate meets the capacity of BSC$_p$, by the noisy-channel coding theorem.

The overall rate is $R(C^*) = R(C_\text{in}) \cdot R(C_\text{out})$, which almost meets the capacity $1 - \operatorname{H}(p)$. We further note that the encoding and decoding of $C^*$ can be done in polynomial time with respect to the outer block length $N$.
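As a rough numerical sketch (the particular component rates below are made up for illustration, in the spirit of the construction), the concatenated rate is simply the product of the component rates and can be placed just below the BSC capacity:

```python
from math import log2

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

p, eps = 0.11, 0.05
capacity = 1 - binary_entropy(p)

# Illustrative choice: outer rate close to 1, inner rate close to capacity.
rate_outer = 1 - eps / 2
rate_inner = capacity - eps / 2
rate_concatenated = rate_outer * rate_inner

print(f"capacity          = {capacity:.3f}")
print(f"concatenated rate = {rate_concatenated:.3f}")
print(f"gap to capacity   = {capacity - rate_concatenated:.3f}  (at most eps = {eps})")
```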

Decoding error probability

A natural decoding algorithm for $C^*$ is to:

  • Decode each received inner block with the inner decoder, setting $y_i' = D_\text{in}(y_i)$ for every block $i$
  • Execute the outer decoder $D_\text{out}$ on $y' = (y_1', \ldots, y_N')$

Note that each block of the inner code $C_\text{in}$ is considered a symbol for the outer code $C_\text{out}$. Now since the probability of a decoding error at any index $i$ for $D_\text{in}$ is at most $\tfrac{\gamma}{2}$ and the errors in BSC$_p$ are independent, the expected number of inner-block errors for $D_\text{in}$ is at most $\tfrac{\gamma N}{2}$ by linearity of expectation. Now applying a Chernoff bound, the probability that more than $\gamma N$ errors occur is exponentially small in $N$. Since the outer code $C_\text{out}$ can correct at most $\gamma N$ errors, this is the decoding error probability of $C^*$. Expressed in asymptotic terms, this gives an error probability of $2^{-\Omega(\gamma N)}$. Thus the achieved decoding error probability of $C^*$ is exponentially small, as guaranteed by the noisy-channel coding theorem.
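A small Monte-Carlo sketch of this argument (parameters are made up for illustration): each inner block fails independently with probability at most $\gamma/2$, and the empirical frequency of more than $\gamma N$ failures is compared with an exponentially small Chernoff-style bound.

```python
import random
from math import exp

def simulate_overload(N, gamma, trials=2000, rng=random):
    """Estimate how often more than gamma*N inner blocks fail,
    when each of the N blocks fails independently with probability gamma/2."""
    threshold = gamma * N
    bad = 0
    for _ in range(trials):
        failures = sum(rng.random() < gamma / 2 for _ in range(N))
        if failures > threshold:
            bad += 1
    return bad / trials

N, gamma = 400, 0.1
empirical = simulate_overload(N, gamma)
# Multiplicative Chernoff bound: Pr[X >= 2*mu] <= exp(-mu/3), with mu = gamma*N/2.
chernoff = exp(-(gamma * N / 2) / 3)
print(f"empirical probability that the outer decoder is overwhelmed: {empirical:.4f}")
print(f"Chernoff-style upper bound:                                   {chernoff:.2e}")
```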

We have given a general technique to construct $C^*$. For more detailed descriptions of the inner and outer codes, please read the references. Recently a few other codes have also been constructed for achieving the capacity; LDPC codes have been considered for this purpose for their faster decoding time. [4]

Applications

The binary symmetric channel can model a disk drive used for memory storage: the channel input represents a bit being written to the disk and the output corresponds to the bit later being read. Errors could arise from the magnetization flipping, background noise, or the writing head making an error. Other objects which the binary symmetric channel can model include a telephone or radio communication line, or cell division, in which daughter cells inherit DNA information from their parent cell. [5]

This channel is often used by theorists because it is one of the simplest noisy channels to analyze. Many problems in communication theory can be reduced to a BSC. Conversely, being able to transmit effectively over the BSC can give rise to solutions for more complicated channels.

Notes

  1. MacKay (2003), p. 4.
  2. MacKay (2003), p. 15.
  3. Cover & Thomas (1991), p. 187.
  4. Richardson and Urbanke.
  5. MacKay (2003), pp. 3–4.

