Zero-forcing (or null-steering) precoding is a method of spatial signal processing by which a multiple antenna transmitter can null the multiuser interference in a multi-user MIMO wireless communication system. [1] When the channel state information is perfectly known at the transmitter, the zero-forcing precoder is given by the pseudo-inverse of the channel matrix. Zero-forcing has been used in LTE mobile networks. [2]
In a multiple antenna downlink system which comprises an access point with $N_t$ transmit antennas and $K$ single-receive-antenna users, such that $K \leq N_t$, the received signal of user $k$ is described as
$$ y_k = \mathbf{h}_k^T \mathbf{x} + n_k, \qquad k = 1, 2, \ldots, K $$
where $\mathbf{x} = \sum_{i=1}^{K} \sqrt{P_i}\, s_i \mathbf{w}_i$ is the $N_t \times 1$ vector of transmitted symbols, $n_k$ is the noise signal, $\mathbf{h}_k$ is the $N_t \times 1$ channel vector and $\mathbf{w}_i$ is some $N_t \times 1$ linear precoding vector. Here $(\cdot)^T$ denotes the matrix transpose, $\sqrt{P_i}$ is the square root of the transmit power, and $s_i$ is the message signal with zero mean and variance $\mathbb{E}\left[|s_i|^2\right] = 1$.
The above signal model can be more compactly re-written as
$$ \mathbf{y} = \mathbf{H}^T \mathbf{W} \mathbf{D} \mathbf{s} + \mathbf{n} $$
where $\mathbf{y} = [y_1, \ldots, y_K]^T$ is the $K \times 1$ received signal vector, $\mathbf{H} = [\mathbf{h}_1, \ldots, \mathbf{h}_K]$ is the $N_t \times K$ channel matrix, $\mathbf{W} = [\mathbf{w}_1, \ldots, \mathbf{w}_K]$ is the $N_t \times K$ precoding matrix, $\mathbf{D} = \operatorname{diag}\!\left(\sqrt{P_1}, \ldots, \sqrt{P_K}\right)$ is a diagonal power matrix, and $\mathbf{s} = [s_1, \ldots, s_K]^T$ and $\mathbf{n} = [n_1, \ldots, n_K]^T$ are the transmit signal and noise vectors, respectively.
A zero-forcing precoder is defined as a precoder in which the beam $\mathbf{w}_i$ intended for user $i$ is orthogonal to every channel vector $\mathbf{h}_j$ associated with the other users $j \neq i$. That is,
$$ \mathbf{w}_i \perp \mathbf{h}_j \quad \text{whenever} \quad j \neq i. $$
Thus the interference caused by the signal meant for one user is effectively nullified for the rest of the users by the zero-forcing precoder.
From the fact that each beam generated by the zero-forcing precoder is orthogonal to all the other user channel vectors, one can rewrite the received signal as
$$ y_k = \mathbf{h}_k^T \sum_{i=1}^{K} \sqrt{P_i}\, s_i \mathbf{w}_i + n_k = \sqrt{P_k}\, \mathbf{h}_k^T \mathbf{w}_k\, s_k + n_k. $$
The orthogonality condition can be expressed in matrix form as
$$ \mathbf{H}^T \mathbf{W} = \mathbf{Q} $$
where $\mathbf{Q}$ is some $K \times K$ diagonal matrix. Typically, $\mathbf{Q}$ is selected to be the identity matrix. This makes $\mathbf{W}$ the right Moore–Penrose pseudo-inverse of $\mathbf{H}^T$, given by
$$ \mathbf{W} = \left(\mathbf{H}^T\right)^{+} = \mathbf{H}\left(\mathbf{H}^T \mathbf{H}\right)^{-1}. $$
Given this zero-forcing precoder design, the received signals are decoupled from each other as
$$ y_k = \sqrt{P_k}\, s_k + n_k, \qquad k = 1, 2, \ldots, K. $$
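The following NumPy sketch illustrates this construction numerically; the antenna count, user count, power levels, and real-valued channel are arbitrary example choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

Nt, K = 4, 3                      # transmit antennas, single-antenna users (example values)
H = rng.standard_normal((Nt, K))  # channel matrix, columns are user channel vectors h_k

# Zero-forcing precoder: right pseudo-inverse of H^T, so that H^T W = I
W = H @ np.linalg.inv(H.T @ H)

P = np.array([1.0, 2.0, 0.5])     # per-user transmit powers (example values)
s = rng.standard_normal(K)        # unit-variance message symbols
n = 0.01 * rng.standard_normal(K) # additive noise

# Received signals y_k = h_k^T sum_i sqrt(P_i) s_i w_i + n_k
y = H.T @ (W @ (np.sqrt(P) * s)) + n

print(np.allclose(H.T @ W, np.eye(K)))   # True: multi-user interference is nulled
print(y - (np.sqrt(P) * s + n))          # ~0: each user sees only its own symbol plus noise
```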
Consider the amount of feedback resource required to keep the throughput gap between zero-forcing with perfect feedback and zero-forcing with limited feedback below a given value, i.e.,
$$ \Delta R = R_{ZF} - R_{FB} \leq \log_2 g. $$
Jindal showed that, for a spatially uncorrelated channel, the number of feedback bits should be scaled with the SNR of the downlink channel as [3]
$$ B = (M-1)\log_2 \rho_{DL} - (M-1)\log_2 (g-1) $$
where $M$ is the number of transmit antennas and $\rho_{DL}$ is the SNR of the downlink channel.
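A small numerical sketch of this scaling law, using illustrative parameter values (4 antennas, 30 dB downlink SNR, gap $g = 2$), not figures from the text:

```python
import numpy as np

def zf_feedback_bits(M, snr_dl_db, g):
    """Feedback bits suggested by the scaling B = (M-1)(log2(rho_DL) - log2(g-1))."""
    rho_dl = 10 ** (snr_dl_db / 10)
    return (M - 1) * (np.log2(rho_dl) - np.log2(g - 1))

print(zf_feedback_bits(M=4, snr_dl_db=30.0, g=2.0))   # ~29.9 bits per user
# Each additional 3 dB of downlink SNR costs roughly M - 1 = 3 extra bits.
```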
To feed back $B$ bits through the uplink channel, the throughput of the uplink channel should be larger than or equal to $B$:
$$ B \leq \Omega_{FB} \log_2 (1 + \rho_{FB}) $$
where $\Omega_{FB}$ is the feedback resource, given by the product of the feedback frequency resource and the feedback time resource, and $\rho_{FB}$ is the SNR of the feedback (uplink) channel. The feedback resource required to satisfy $\Delta R \leq \log_2 g$ is then
$$ \Omega_{FB} \geq \frac{(M-1)\log_2 \rho_{DL} - (M-1)\log_2 (g-1)}{\log_2 (1 + \rho_{FB})}. $$
Note that, unlike the feedback-bits case, the required feedback resource is a function of both the downlink and uplink channel conditions. It is reasonable to include the uplink channel status in the calculation of the feedback resource, since the uplink channel status determines the capacity, i.e., bits per second per unit bandwidth (Hz), of the feedback link. Consider the case where the downlink and uplink SNRs are proportional, so that $\rho_{DL} / \rho_{FB} = C$ is constant, and both SNRs are sufficiently high. Then the feedback resource becomes proportional only to the number of transmit antennas:
$$ \lim_{\rho_{DL} \to \infty} \Omega_{FB} = M - 1. $$
It follows from the above equation that the feedback resource $\Omega_{FB}$ does not need to scale with the SNR of the downlink channel, in contrast to the feedback-bits case. Hence, analyzing the complete system can reverse conclusions drawn from each simplified analysis in isolation.
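A short numerical sketch of the two bounds above, with illustrative parameter values and the downlink and uplink SNRs kept equal while increasing:

```python
import numpy as np

def zf_feedback_resource(M, snr_dl_db, snr_ul_db, g):
    """Feedback resource bound Omega_FB >= B / log2(1 + rho_FB), as derived above."""
    rho_dl = 10 ** (snr_dl_db / 10)
    rho_fb = 10 ** (snr_ul_db / 10)
    bits = (M - 1) * (np.log2(rho_dl) - np.log2(g - 1))
    return bits / np.log2(1 + rho_fb)

# With downlink and uplink SNR proportional (here equal) and pushed high,
# the required resource approaches M - 1 = 3, independent of the SNR itself.
for snr_db in (20, 40, 60, 80):
    print(snr_db, zf_feedback_resource(M=4, snr_dl_db=snr_db, snr_ul_db=snr_db, g=2.0))
```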
If the transmitter knows the downlink channel state information (CSI) perfectly, ZF precoding can approach the system sum capacity when the number of users is large. With limited channel state information at the transmitter (CSIT), on the other hand, the performance of ZF precoding degrades as the CSIT accuracy decreases. ZF precoding requires significant feedback overhead, growing with the signal-to-noise ratio (SNR), in order to achieve the full multiplexing gain. [3] Inaccurate CSIT causes significant throughput loss because of residual multi-user interference: the interference remains because it cannot be nulled by beams generated from imperfect CSIT.
In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be k-variate normally distributed if every linear combination of its k components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables each of which clusters around a mean value.
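A minimal NumPy sketch of this definition, using arbitrary example parameters: a correlated bivariate Gaussian is sampled via a Cholesky factor, and an arbitrary linear combination of its components is again (empirically) univariate normal with the expected mean and variance.

```python
import numpy as np

rng = np.random.default_rng(1)

mu = np.array([1.0, -2.0])                  # mean vector (example values)
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])              # covariance matrix (example values)

L = np.linalg.cholesky(Sigma)
z = rng.standard_normal((100_000, 2))
x = mu + z @ L.T                            # samples from N(mu, Sigma)

a = np.array([0.3, -1.2])                   # arbitrary linear combination a^T x
y = x @ a
print(y.mean(), a @ mu)                     # sample mean vs a^T mu
print(y.var(), a @ Sigma @ a)               # sample variance vs a^T Sigma a
```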
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference.
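A brief sketch of the idea for i.i.d. Gaussian data (simulated here as an example): the likelihood is maximized by the sample mean and the (biased) sample standard deviation, and any other parameter guess has a larger negative log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=2.0, size=10_000)    # observed data (simulated)

def neg_log_likelihood(mu, sigma, x):
    """Negative log-likelihood of i.i.d. Gaussian observations."""
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + (x - mu) ** 2 / sigma**2)

# For the Gaussian model the maximizers have closed forms.
mu_hat, sigma_hat = data.mean(), data.std()
print(mu_hat, sigma_hat)
print(neg_log_likelihood(mu_hat, sigma_hat, data)
      <= neg_log_likelihood(2.5, 2.2, data))          # True: the MLE is at least as likely
```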
In electromagnetism, the Mie solution to Maxwell's equations describes the scattering of an electromagnetic plane wave by a homogeneous sphere. The solution takes the form of an infinite series of spherical multipole partial waves. It is named after German physicist Gustav Mie.
In fluid dynamics, the Euler equations are a set of quasilinear partial differential equations governing adiabatic and inviscid flow. They are named after Leonhard Euler. In particular, they correspond to the Navier–Stokes equations with zero viscosity and zero thermal conductivity.
In statistics, propagation of uncertainty is the effect of variables' uncertainties on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations which propagate due to the combination of variables in the function.
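A short sketch of the standard first-order (linearized) propagation formula, applied to an assumed example function f(x, y) = x·y with independent uncertainties; the measurement values are illustrative only.

```python
import numpy as np

# Example measurements with standard uncertainties (illustrative values)
x, sigma_x = 4.0, 0.1
y, sigma_y = 2.5, 0.2

# f(x, y) = x * y; first-order propagation for independent variables:
# sigma_f^2 = (df/dx)^2 * sigma_x^2 + (df/dy)^2 * sigma_y^2
f = x * y
sigma_f = np.sqrt((y * sigma_x) ** 2 + (x * sigma_y) ** 2)
print(f, sigma_f)          # 10.0 +/- ~0.84
```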
In the theory of stochastic processes, the Karhunen–Loève theorem, also known as the Kosambi–Karhunen–Loève theorem, states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and the eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields.
In commutative algebra and field theory, the Frobenius endomorphism is a special endomorphism of commutative rings with prime characteristic p, an important class that includes finite fields. The endomorphism maps every element to its p-th power. In certain contexts it is an automorphism, but this is not true in general.
In statistical mechanics, the radial distribution function, in a system of particles, describes how density varies as a function of distance from a reference particle.
Ewald summation, named after Paul Peter Ewald, is a method for computing long-range interactions in periodic systems. It was first developed as the method for calculating the electrostatic energies of ionic crystals, and is now commonly used for calculating long-range interactions in computational chemistry. Ewald summation is a special case of the Poisson summation formula, replacing the summation of interaction energies in real space with an equivalent summation in Fourier space. In this method, the long-range interaction is divided into two parts: a short-range contribution, and a long-range contribution which does not have a singularity. The short-range contribution is calculated in real space, whereas the long-range contribution is calculated using a Fourier transform. The advantage of this method is the rapid convergence of the energy compared with that of a direct summation. This means that the method has high accuracy and reasonable speed when computing long-range interactions, and it is thus the de facto standard method for calculating long-range interactions in periodic systems. The method requires charge neutrality of the molecular system to accurately calculate the total Coulombic interaction. A study of the truncation errors introduced in the energy and force calculations of disordered point-charge systems is provided by Kolafa and Perram.
In linear algebra, the Schmidt decomposition refers to a particular way of expressing a vector in the tensor product of two inner product spaces. It has numerous applications in quantum information theory, for example in entanglement characterization and in state purification, and plasticity.
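A small NumPy sketch of the idea under the usual identification of a bipartite vector with a coefficient matrix: the singular value decomposition of that matrix yields the Schmidt coefficients. The dimensions and the random state below are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(3)

dA, dB = 3, 4
psi = rng.standard_normal((dA, dB)) + 1j * rng.standard_normal((dA, dB))
psi /= np.linalg.norm(psi)                    # normalized bipartite state as a dA x dB matrix

# Schmidt decomposition via SVD: psi_{ij} = sum_k lambda_k u_{ik} conj(v_{jk})
U, lam, Vh = np.linalg.svd(psi)
print(lam)                                    # Schmidt coefficients (non-negative)
print(np.sum(lam**2))                         # ~1.0 for a normalized state

# Reconstruct the state from its Schmidt form
recon = sum(lam[k] * np.outer(U[:, k], Vh[k, :]) for k in range(len(lam)))
print(np.allclose(recon, psi))                # True
```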
In quantum information theory, quantum relative entropy is a measure of distinguishability between two quantum states. It is the quantum mechanical analog of relative entropy.
Limited-memory BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden–Fletcher–Goldfarb–Shanno algorithm (BFGS) using a limited amount of computer memory. It is a popular algorithm for parameter estimation in machine learning. The algorithm's target problem is to minimize $f(\mathbf{x})$ over unconstrained values of the real vector $\mathbf{x}$, where $f$ is a differentiable scalar function.
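A minimal usage sketch with SciPy's implementation (`scipy.optimize.minimize` with `method="L-BFGS-B"`), minimizing the Rosenbrock function as an example objective; the starting point is arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    """Classic non-convex test function; minimum value 0 at x = (1, 1, ..., 1)."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

x0 = np.zeros(5)                               # starting point
result = minimize(rosenbrock, x0, method="L-BFGS-B")
print(result.x)                                # close to all ones
print(result.fun)                              # close to 0
```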
In statistics, Bayesian multivariate linear regression is a Bayesian approach to multivariate linear regression, i.e. linear regression where the predicted outcome is a vector of correlated random variables rather than a single scalar random variable. A more general treatment of this approach can be found in the article MMSE estimator.
In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.
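A compact NumPy sketch of this structure, with simplifying assumptions: Gaussian basis functions with fixed, evenly spaced centers and a fixed width, so that only the output weights are fitted, here by linear least squares on a 1-D toy regression task.

```python
import numpy as np

rng = np.random.default_rng(4)

# Training data for a 1-D function approximation task (example target)
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)

# Gaussian radial basis functions with fixed centers and width
centers = np.linspace(-3, 3, 15)
width = 0.5
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# The network output is a linear combination of the basis functions,
# so the output weights can be fitted by linear least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print(np.max(np.abs(y_hat - y)))       # small residual on the training points
```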
Precoding is a generalization of beamforming to support multi-stream transmission in multi-antenna wireless communications. In conventional single-stream beamforming, the same signal is emitted from each of the transmit antennas with appropriate weighting such that the signal power is maximized at the receiver output. When the receiver has multiple antennas, single-stream beamforming cannot simultaneously maximize the signal level at all of the receive antennas. In order to maximize the throughput in multiple receive antenna systems, multi-stream transmission is generally required.
In computer science, locality-sensitive hashing (LSH) is a fuzzy hashing technique that hashes similar input items into the same "buckets" with high probability. Since similar items end up in the same buckets, this technique can be used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-dimensional versions while preserving relative distances between items.
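A sketch of one well-known LSH family, random-hyperplane hashing (SimHash) for cosine similarity: each bit of the signature is the sign of a random projection, so similar vectors share most bits. The dimensions and vectors below are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(5)

def lsh_signature(vectors, planes):
    """Sign of random projections: nearby vectors tend to share many sign bits."""
    return (vectors @ planes.T > 0).astype(int)

dim, n_bits = 50, 16
planes = rng.standard_normal((n_bits, dim))   # random hyperplanes define the hash

a = rng.standard_normal(dim)
b = a + 0.1 * rng.standard_normal(dim)        # a small perturbation of a (similar item)
c = rng.standard_normal(dim)                  # an unrelated item

sig = lsh_signature(np.vstack([a, b, c]), planes)
print(np.sum(sig[0] == sig[1]))               # most of the 16 bits match
print(np.sum(sig[0] == sig[2]))               # roughly half match for an unrelated vector
```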
In radio, multiple-input and multiple-output (MIMO) is a method for multiplying the capacity of a radio link using multiple transmission and receiving antennas to exploit multipath propagation. MIMO has become an essential element of wireless communication standards including IEEE 802.11n, IEEE 802.11ac, HSPA+ (3G), WiMAX, and Long Term Evolution (LTE). More recently, MIMO has been applied to power-line communication for three-wire installations as part of the ITU G.hn standard and of the HomePlug AV2 specification.
In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms. It is based on the idea of representing secret information as a set of equations with errors. In other words, LWE is a way to hide the value of a secret by introducing noise to it. In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve, and thus to be useful in cryptography.
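A toy sketch of how LWE samples are formed; the parameters below are purely illustrative and far too small to be secure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy LWE instance (illustrative parameters only)
n, m, q = 8, 16, 97                 # secret length, number of samples, modulus
s = rng.integers(0, q, size=n)      # secret vector
A = rng.integers(0, q, size=(m, n)) # public random matrix
e = rng.integers(-2, 3, size=m)     # small error terms

b = (A @ s + e) % q                 # noisy inner products: the published LWE samples

# Without the error e, s could be recovered from (A, b) by linear algebra;
# with the noise, recovering s is the (conjecturally hard) LWE problem.
print(A.shape, b.shape)
```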
In the theory of quantum communication, the quantum capacity is the highest rate at which quantum information can be communicated over many independent uses of a noisy quantum channel from a sender to a receiver. It is also equal to the highest rate at which entanglement can be generated over the channel, and forward classical communication cannot improve it. The quantum capacity theorem is important for the theory of quantum error correction, and more broadly for the theory of quantum computation. The theorem giving a lower bound on the quantum capacity of any channel is colloquially known as the LSD theorem, after the authors Lloyd, Shor, and Devetak who proved it with increasing standards of rigor.
Per-user unitary rate control (PU2RC) is a multi-user MIMO (multiple-input and multiple-output) scheme. PU2RC uses both transmission pre-coding and multi-user scheduling. By doing so, the network capacity is enhanced beyond that of a single-user MIMO scheme.