In information theory, a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes.
A discrete memoryless single-relay channel can be modelled as four finite sets, $\mathcal{X}_1$, $\mathcal{X}_2$, $\mathcal{Y}_1$, and $\mathcal{Y}$, and a conditional probability distribution $p(y, y_1 \mid x_1, x_2)$ on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by $p(x_1, x_2)$.
                 o------------------o
                 |  Relay Encoder   |
                 o------------------o
                    Λ           |
                    | y1     x2 |
                    |           V
o---------o  x1  o------------------o  y  o---------o
| Encoder |----->|  p(y,y1|x1,x2)   |---->| Decoder |
o---------o      o------------------o     o---------o
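As an illustration, the following Python sketch samples one use of a toy binary relay channel. The alphabets, the channel array chan, and the specific probabilities are hypothetical choices for illustration, not part of any standard model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy channel on binary alphabets: x1, x2, y, y1 all in {0, 1}.
# chan[x1, x2, y, y1] = p(y, y1 | x1, x2); entries sum to 1 over (y, y1).
chan = np.zeros((2, 2, 2, 2))
for x1 in range(2):
    for x2 in range(2):
        for y in range(2):
            for y1 in range(2):
                chan[x1, x2, y, y1] = ((0.9 if y == (x1 ^ x2) else 0.1)
                                       * (0.8 if y1 == x1 else 0.2))

def transmit(x1, x2):
    """Sample one channel use: draw (y, y1) from p(y, y1 | x1, x2)."""
    idx = rng.choice(4, p=chan[x1, x2].ravel())
    return idx // 2, idx % 2   # (y, y1)

print(transmit(1, 0))   # e.g. (1, 1): receiver sees y, relay sees y1
```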
There are three main relaying schemes: Decode-and-Forward, Compress-and-Forward, and Amplify-and-Forward. The first two schemes were proposed in the pioneering article by Cover and El Gamal.
The first upper bound on the capacity of the relay channel was derived in the pioneering article by Cover and El Gamal and is known as the cut-set upper bound. This bound states that

$$C \leq \max_{p(x_1, x_2)} \min\{\, I(X_1; Y, Y_1 \mid X_2),\; I(X_1, X_2; Y) \,\},$$

where $C$ is the capacity of the relay channel. The first term and second term in the minimization above are called the broadcast bound and the multi-access bound, respectively.
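Both terms can be evaluated numerically for a fixed discrete channel and input distribution. The sketch below does so for the hypothetical toy channel defined above; the helper name entropy is illustrative, not from any library.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a pmf given as an array of any shape."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Same hypothetical toy channel as in the sampling sketch above.
chan = np.zeros((2, 2, 2, 2))   # chan[x1, x2, y, y1] = p(y, y1 | x1, x2)
for x1 in range(2):
    for x2 in range(2):
        for y in range(2):
            for y1 in range(2):
                chan[x1, x2, y, y1] = ((0.9 if y == (x1 ^ x2) else 0.1)
                                       * (0.8 if y1 == x1 else 0.2))

p_x = np.full((2, 2), 0.25)                  # uniform independent inputs
joint = p_x[:, :, None, None] * chan         # p(x1, x2, y, y1)

# Multi-access bound: I(X1, X2; Y) = H(Y) + H(X1, X2) - H(X1, X2, Y)
mac = (entropy(joint.sum(axis=(0, 1, 3))) + entropy(p_x)
       - entropy(joint.sum(axis=3)))

# Broadcast bound:
# I(X1; Y, Y1 | X2) = H(X2, Y, Y1) - H(X2) - H(X1, X2, Y, Y1) + H(X1, X2)
bc = (entropy(joint.sum(axis=0)) - entropy(p_x.sum(axis=0))
      - entropy(joint) + entropy(p_x))

print("broadcast bound:", bc, "multi-access bound:", mac)
print("cut-set value at this p(x1, x2):", min(bc, mac))
```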
A relay channel is said to be degraded if $y$ depends on $x_1$ only through $y_1$ and $x_2$, i.e., $p(y \mid x_1, x_2, y_1) = p(y \mid x_2, y_1)$. In the article by Cover and El Gamal it is shown that the capacity of the degraded relay channel can be achieved using the Decode-and-Forward scheme. It turns out that the capacity in this case is equal to the cut-set upper bound.
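In the degraded case the broadcast term of the cut-set bound collapses to the source-to-relay link, and Decode-and-Forward achieves it (Theorem 1 of Cover and El Gamal):

$$C = \max_{p(x_1, x_2)} \min\{\, I(X_1; Y_1 \mid X_2),\; I(X_1, X_2; Y) \,\}.$$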
A relay channel is said to be reversely degraded if $p(y, y_1 \mid x_1, x_2) = p(y \mid x_1, x_2)\, p(y_1 \mid y, x_2)$. Cover and El Gamal proved that the direct-transmission lower bound (wherein the relay is not used) is tight when the relay channel is reversely degraded.
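In that case the capacity reduces to the direct-transmission rate, with the relay transmitting a fixed, optimally chosen symbol:

$$C = \max_{p(x_1),\, x_2} I(X_1; Y \mid X_2 = x_2).$$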
In a relay-without-delay channel (RWD), each transmitted relay symbol can depend on the relay's present received symbol as well as its past ones. The relay-without-delay channel was shown to achieve rates outside the cut-set upper bound of the classical relay channel. It was also shown more recently that instantaneous relays (a special case of relay-without-delay) can improve not only the capacity but also the degrees of freedom (DoF) of the two-user interference channel.
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. Since its introduction in 1992, JPEG has been the most widely used image compression standard in the world, and the most widely used digital image format, with several billion JPEG images produced every day as of 2015.
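As a minimal sketch of the size-quality tradeoff, the following Python snippet (using the Pillow library; the file names, test image, and quality values are arbitrary) saves the same image at two JPEG quality settings and compares file sizes.

```python
import os

import numpy as np
from PIL import Image

# Hypothetical test image: a smooth 256x256 RGB gradient.
g = np.linspace(0, 255, 256, dtype=np.uint8)
r, c = np.meshgrid(g, g)
img = Image.fromarray(np.dstack([r, c, np.full_like(r, 128)]))

img.save("high.jpg", quality=95)   # mild compression
img.save("low.jpg", quality=20)    # aggressive compression
print(os.path.getsize("high.jpg"), "vs", os.path.getsize("low.jpg"), "bytes")
```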
In algebra and algebraic geometry, the spectrum of a commutative ring R, denoted by $\operatorname{Spec} R$, is the set of all prime ideals of R. It is commonly augmented with the Zariski topology and with a structure sheaf, turning it into a locally ringed space. A locally ringed space of this form is called an affine scheme.
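For example, the spectrum of the ring of integers consists of the zero ideal together with the ideals generated by the prime numbers:

$$\operatorname{Spec} \mathbb{Z} = \{(0)\} \cup \{\, (p) : p \text{ prime} \,\}.$$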
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as CDs, DVDs, Blu-ray discs, and QR codes; data transmission technologies such as DSL and WiMAX; broadcast systems such as satellite communications, DVB, and ATSC; and storage systems such as RAID 6.
A binary symmetric channel is a common communications channel model used in coding theory and information theory. In this model, a transmitter wishes to send a bit, and the receiver receives a bit. It is assumed that the bit is usually transmitted correctly, but that it will be "flipped" with a small probability. This channel is used frequently in information theory because it is one of the simplest channels to analyze.
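A minimal simulation of the binary symmetric channel, with an arbitrary crossover probability of 0.05:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.05                                   # crossover probability (arbitrary)

bits = rng.integers(0, 2, size=1_000_000)  # transmitted bits
flips = rng.random(bits.shape) < p         # each bit flips independently
received = bits ^ flips.astype(bits.dtype) # BSC output

print("empirical flip rate:", (received != bits).mean())
```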
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source can be approximately reconstructed at the receiver without exceeding an expected distortion D.
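Two standard closed forms illustrate the tradeoff: a Bernoulli(1/2) source under Hamming distortion, and a Gaussian source of variance $\sigma^2$ under squared-error distortion, have

$$R(D) = 1 - H_b(D) \;\; (0 \le D \le \tfrac{1}{2}) \qquad \text{and} \qquad R(D) = \tfrac{1}{2} \log_2 \frac{\sigma^2}{D} \;\; (0 \le D \le \sigma^2),$$

where $H_b$ is the binary entropy function.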
Additive white Gaussian noise (AWGN) is a basic noise model used in information theory to mimic the effect of many random processes that occur in nature. The modifiers denote specific characteristics:
- Additive, because it is added to any noise that might be intrinsic to the information system;
- White, because it has uniform power spectral density across the frequency band of the information system;
- Gaussian, because its amplitude has a normal (Gaussian) distribution in the time domain.
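A minimal sketch of imposing AWGN on a signal at a chosen signal-to-noise ratio (the sinusoid and the 10 dB target are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t)        # example signal

snr_db = 10.0                             # desired SNR (arbitrary)
sig_power = np.mean(signal ** 2)
noise_power = sig_power / 10 ** (snr_db / 10)
noise = rng.normal(0.0, np.sqrt(noise_power), signal.shape)

received = signal + noise                 # AWGN channel output
```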
Channel capacity, in electrical engineering, computer science, and information theory, is the tight upper bound on the rate at which information can be reliably transmitted over a communication channel.
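For a memoryless channel, the capacity is the supremum of the mutual information between input and output over all input distributions; for the band-limited AWGN channel of bandwidth $B$ and signal-to-noise ratio $S/N$ it evaluates to the Shannon–Hartley formula:

$$C = \sup_{p(x)} I(X; Y), \qquad C = B \log_2\!\left(1 + \frac{S}{N}\right).$$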
Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms.
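A minimal sketch of a uniform (mid-tread) quantizer with an arbitrary step size:

```python
import numpy as np

def quantize(x, step):
    """Uniform mid-tread quantizer: map x to the nearest multiple of step."""
    return step * np.round(x / step)

x = np.array([0.04, 0.37, -1.22, 2.50])
print(quantize(x, 0.25))   # [ 0.    0.25 -1.25  2.5 ]
```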
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way. Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
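Such parameters are conventionally written $[n, k, d]$ for a linear code of length $n$, dimension $k$, and minimum distance $d$; the rate is $R = k/n$, and a typical bound relating them is the Singleton bound:

$$d \le n - k + 1.$$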
In coding theory, decoding is the process of translating received messages into codewords of a given code. There are many common methods of mapping messages to codewords. These are often used to recover messages sent over a noisy channel, such as a binary symmetric channel.
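One of the simplest such methods is minimum-distance decoding: map a received word to the nearest codeword in Hamming distance. A sketch using a hypothetical toy code (the (3, 1) repetition code):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical toy code: the (3, 1) repetition code.
codebook = ["000", "111"]

def decode(received):
    """Return the codeword closest to the received word in Hamming distance."""
    return min(codebook, key=lambda c: hamming(c, received))

print(decode("010"))   # "000" -- a single flip is corrected
```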
Network coding is a field of research founded in a series of papers from the late 1990s to the early 2000s. However, the concept of network coding, in particular linear network coding, appeared much earlier. In a 1978 paper, a scheme for improving the throughput of a two-way communication through a satellite was proposed. In this scheme, two users trying to communicate with each other transmit their data streams to a satellite, which combines the two streams by summing them modulo 2 and then broadcasts the combined stream. Each of the two users, upon receiving the broadcast stream, can decode the other stream by using the information of their own stream.
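The arithmetic of that satellite scheme is just a bitwise XOR; a sketch with arbitrary example streams:

```python
a = 0b10110  # user A's stream
b = 0b01101  # user B's stream

broadcast = a ^ b   # satellite sums the streams modulo 2 and broadcasts

# Each user XORs the broadcast with its own stream to recover the other's.
assert broadcast ^ a == b
assert broadcast ^ b == a
```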
In information theory, the noisy-channel coding theorem establishes that for any given degree of noise contamination of a communication channel, it is possible to communicate discrete data nearly error-free up to a computable maximum rate through the channel. This result was presented by Claude Shannon in 1948 and was based in part on earlier work and ideas of Harry Nyquist and Ralph Hartley.
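In symbols, for a discrete memoryless channel with capacity $C$, writing $P_e^{(n)}$ for the smallest error probability achievable by length-$n$ codes of rate $R$:

$$R < C \;\Longrightarrow\; P_e^{(n)} \to 0, \qquad R > C \;\Longrightarrow\; P_e^{(n)} \not\to 0.$$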
Lossless JPEG is a 1993 addition to the JPEG standard by the Joint Photographic Experts Group to enable lossless compression. However, the term may also be used to refer to all lossless compression schemes developed by the group, including JPEG 2000 and JPEG-LS.
In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error $P_{\text{error}}$ of a decoder drops as $e^{-n\alpha}$, where $n$ is the block length, the error exponent is $\alpha$. In this example, $-\tfrac{1}{n}\ln P_{\text{error}}$ approaches $\alpha$ for large $n$. Many of the information-theoretic theorems are of an asymptotic nature; for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, there are limitations on the delay of the communication, and the block length must be finite. Therefore, it is important to study how quickly the probability of error drops as the block length grows.
A Z-channel is a communications channel used in coding theory and information theory to model the behaviour of some data storage systems.
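In the usual convention, a transmitted 0 is always received correctly while a transmitted 1 flips to 0 with some probability; a minimal simulation (the value 0.1 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.1                                    # 1 -> 0 flip probability (arbitrary)

bits = rng.integers(0, 2, size=1_000_000)
flips = (bits == 1) & (rng.random(bits.shape) < p)   # only 1s are affected
received = np.where(flips, 0, bits)

print("fraction of transmitted 1s received as 0:",
      flips.sum() / (bits == 1).sum())
```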
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that, instead of outputting a single possible message, the decoding algorithm outputs a list of possibilities, one of which is correct. This allows for handling a greater number of errors than unique decoding allows.
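A brute-force sketch over a hypothetical toy code: return every codeword within a given Hamming radius of the received word.

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Hypothetical toy code.
codebook = ["00000", "01011", "10101", "11110"]

def list_decode(received, radius):
    """Return all codewords within the given Hamming radius."""
    return [c for c in codebook if hamming(c, received) <= radius]

print(list_decode("01001", 2))   # ['00000', '01011']
```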
Cooperative diversity is a cooperative multiple-antenna technique for improving or maximising total network channel capacity for any given set of bandwidths. It exploits user diversity by decoding the combination of the relayed signal and the direct signal in wireless multihop networks. A conventional single-hop system uses direct transmission, in which a receiver decodes the information based only on the direct signal and regards the relayed signal as interference; cooperative diversity instead treats the other signal as a contribution, decoding the information from the combination of the two signals. Cooperative diversity can thus be seen as a form of antenna diversity that uses distributed antennas belonging to the nodes of a wireless network. Note that user cooperation is another term for cooperative diversity; user cooperation emphasizes that each user relays the other user's signal, whereas cooperative diversity can also be achieved by multi-hop relay networking systems.
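A minimal sketch of the combining step, assuming the receiver sees two independent noisy copies of the same symbol (one direct, one relayed) and combines them with SNR-proportional weights (maximal-ratio combining); all signal and noise parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

symbols = rng.choice([-1.0, 1.0], size=100_000)        # BPSK symbols

direct = symbols + rng.normal(0, 1.0, symbols.shape)   # direct path, SNR = 1
relayed = symbols + rng.normal(0, 2.0, symbols.shape)  # relayed path, SNR = 0.25

# Maximal-ratio combining: weight each copy by its SNR (1 / noise variance).
combined = (1.0 * direct + 0.25 * relayed) / 1.25

for name, obs in [("direct only", direct), ("combined", combined)]:
    print(name, "bit error rate:", np.mean(np.sign(obs) != symbols))
```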
Distributed source coding (DSC) is an important problem in information theory and communication. DSC problems concern the compression of multiple correlated information sources that do not communicate with each other. By modeling the correlation between multiple sources at the decoder side together with channel codes, DSC can shift the computational complexity from the encoder side to the decoder side, thereby providing appropriate frameworks for applications with complexity-constrained senders, such as sensor networks and video/multimedia compression. One of the main properties of distributed source coding is thus that the computational burden of the encoders is shifted to the joint decoder.
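The fundamental limit for lossless DSC of two sources is the Slepian–Wolf region: correlated sources $X$ and $Y$ can be compressed separately and recovered by a joint decoder at any rates satisfying

$$R_X \ge H(X \mid Y), \qquad R_Y \ge H(Y \mid X), \qquad R_X + R_Y \ge H(X, Y),$$

the same total rate $H(X, Y)$ that joint encoding would require.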
A locally decodable code (LDC) is an error-correcting code that allows a single bit of the original message to be decoded with high probability by examining only a small number of bits of a possibly corrupted codeword. This property is useful, for example, when information is transmitted over a noisy channel and only a small subset of the data is needed at a given time, so there is no need to decode the entire message at once. Note that locally decodable codes are not a subset of locally testable codes, though there is some overlap between the two.
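The classic example is the Hadamard code, which encodes a k-bit message m as the list of all inner products $\langle m, x \rangle \bmod 2$; any single message bit $m_i$ can then be recovered from just two (possibly corrupted) codeword positions, since $c(x) \oplus c(x \oplus e_i) = m_i$. A sketch (the helper names are illustrative):

```python
import random

def encode(m):
    """Hadamard code: codeword position x holds <m, x> mod 2, for all x."""
    k = len(m)
    return [sum(mi & ((x >> i) & 1) for i, mi in enumerate(m)) % 2
            for x in range(2 ** k)]

def locally_decode(codeword, i, k):
    """Recover message bit i from just two positions of the codeword."""
    x = random.randrange(2 ** k)
    return codeword[x] ^ codeword[x ^ (1 << i)]   # <m, x> + <m, x + e_i> = m_i

m = [1, 0, 1, 1]
c = encode(m)
c[5] ^= 1                      # corrupt one codeword position
print([locally_decode(c, i, len(m)) for i in range(len(m))])  # usually == m
```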
In the theory of quantum communication, an amplitude damping channel is a quantum channel that models physical processes such as spontaneous emission. A natural setting in which this channel arises is a spin chain: a number of spins coupled by a time-independent Hamiltonian can be used to send a quantum state from one location to another, and the resulting quantum channel turns out to be identical to an amplitude damping channel, for which the quantum capacity, the classical capacity, and the entanglement-assisted classical capacity can be evaluated.
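Concretely, the single-qubit amplitude damping channel with damping parameter $\gamma$ acts as $\rho \mapsto K_0 \rho K_0^\dagger + K_1 \rho K_1^\dagger$ with Kraus operators

$$K_0 = \begin{pmatrix} 1 & 0 \\ 0 & \sqrt{1-\gamma} \end{pmatrix}, \qquad K_1 = \begin{pmatrix} 0 & \sqrt{\gamma} \\ 0 & 0 \end{pmatrix}.$$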