Complex normal distribution

Parameters: location $\mu \in \mathbb{C}^n$; covariance matrix $\Gamma \in \mathbb{C}^{n\times n}$ (positive semi-definite matrix); relation matrix $C \in \mathbb{C}^{n\times n}$ (complex symmetric matrix)
Support: $\mathbf{z} \in \mathbb{C}^n$
PDF: complicated, see text
Mean: $\mu$
Mode: $\mu$
Variance: $\Gamma$
CF: see text
In probability theory, the family of complex normal distributions, denoted $\mathcal{CN}$ or $\mathcal{N}_{\mathbb{C}}$, characterizes complex random variables whose real and imaginary parts are jointly normal. [1] The complex normal family has three parameters: the location parameter $\mu$, the covariance matrix $\Gamma$, and the relation matrix $C$. The standard complex normal is the univariate distribution with $\mu = 0$, $\Gamma = 1$, and $C = 0$.

An important subclass of the complex normal family is the circularly-symmetric (central) complex normal, which corresponds to the case of zero relation matrix and zero mean: $C = 0$ and $\mu = 0$. [2] This case is used extensively in signal processing, where it is sometimes referred to as just complex normal in the literature.

Definitions

Complex standard normal random variable

The standard complex normal random variable or standard complex Gaussian random variable is a complex random variable $Z$ whose real and imaginary parts are independent normally distributed random variables with mean zero and variance $1/2$. [3]: p. 494 [4]: pp. 501 Formally,

$$Z \sim \mathcal{CN}(0,1) \quad \iff \quad \Re(Z) \text{ and } \Im(Z) \text{ are independent and each} \sim \mathcal{N}(0,\tfrac{1}{2}), \tag{Eq.1}$$

where $Z \sim \mathcal{CN}(0,1)$ denotes that $Z$ is a standard complex normal random variable.
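The definition can be illustrated with a minimal NumPy sketch (variable names are illustrative, not from the source): draw $Z$ by combining two independent $\mathcal{N}(0, \tfrac12)$ reals and check that $\operatorname{E}[Z] \approx 0$ and $\operatorname{E}[|Z|^2] \approx 1$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000

# Real and imaginary parts are independent N(0, 1/2), i.e. standard deviation 1/sqrt(2).
x = rng.normal(0.0, np.sqrt(0.5), size=n_samples)
y = rng.normal(0.0, np.sqrt(0.5), size=n_samples)
z = x + 1j * y  # Z ~ CN(0, 1)

print(np.mean(z))               # approximately 0
print(np.mean(np.abs(z) ** 2))  # approximately 1 (total variance 1/2 + 1/2)
```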

Complex normal random variable

Suppose $X$ and $Y$ are real random variables such that $(X, Y)^{\mathrm T}$ is a 2-dimensional normal random vector. Then the complex random variable

$$Z = X + iY \tag{Eq.2}$$

is called a complex normal random variable or complex Gaussian random variable. [3]: p. 500

Complex standard normal random vector

An $n$-dimensional complex random vector $\mathbf{Z} = (Z_1, \ldots, Z_n)^{\mathrm T}$ is a complex standard normal random vector or complex standard Gaussian random vector if its components are independent and all of them are standard complex normal random variables as defined above. [3]: p. 502 [4]: pp. 501 That $\mathbf{Z}$ is a standard complex normal random vector is denoted

$$\mathbf{Z} \sim \mathcal{CN}(0, \boldsymbol{I}_n). \tag{Eq.3}$$

Complex normal random vector

If $\mathbf{X} = (X_1, \ldots, X_n)^{\mathrm T}$ and $\mathbf{Y} = (Y_1, \ldots, Y_n)^{\mathrm T}$ are random vectors in $\mathbb{R}^n$ such that $[\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T}]^{\mathrm T}$ is a normal random vector with $2n$ components, then the complex random vector

$$\mathbf{Z} = \mathbf{X} + i\mathbf{Y} \tag{Eq.4}$$

is a complex normal random vector or a complex Gaussian random vector.

Mean, covariance, and relation

The complex Gaussian distribution can be described with three parameters: [5]

$$\mu = \operatorname{E}[\mathbf{Z}], \qquad \Gamma = \operatorname{E}\!\left[(\mathbf{Z}-\mu)(\mathbf{Z}-\mu)^{\mathrm H}\right], \qquad C = \operatorname{E}\!\left[(\mathbf{Z}-\mu)(\mathbf{Z}-\mu)^{\mathrm T}\right],$$

where $(\cdot)^{\mathrm T}$ denotes the matrix transpose and $(\cdot)^{\mathrm H}$ denotes the conjugate transpose. [3]: p. 504 [4]: pp. 500

Here the location parameter $\mu$ is an $n$-dimensional complex vector; the covariance matrix $\Gamma$ is Hermitian and non-negative definite; and the relation matrix or pseudo-covariance matrix $C$ is symmetric. The complex normal random vector can now be denoted as

$$\mathbf{Z} \sim \mathcal{CN}(\mu, \Gamma, C).$$

Moreover, the matrices $\Gamma$ and $C$ are such that the matrix

$$P = \overline{\Gamma} - C^{\mathrm H}\Gamma^{-1}C$$

is also non-negative definite, where $\overline{\Gamma}$ denotes the complex conjugate of $\Gamma$. [5]
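As an illustration, the following NumPy sketch (all names hypothetical) builds a complex normal vector from a real $2n$-dimensional normal vector per Eq.4, estimates the three parameters by their defining moments, and confirms the structural claims above: the estimated $\Gamma$ is Hermitian non-negative definite and the estimated $C$ is symmetric.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 3, 100_000

# Draw a real 2n-dimensional normal [X; Y] with a random covariance and set Z = X + iY (Eq.4).
m = rng.standard_normal((2 * n, 2 * n))
xy = rng.multivariate_normal(np.zeros(2 * n), m @ m.T, size=k)
z = xy[:, :n] + 1j * xy[:, n:]

mu_hat = z.mean(axis=0)            # location:   mu = E[Z]
d = z - mu_hat
gamma_hat = d.T @ d.conj() / k     # covariance: Gamma = E[(Z - mu)(Z - mu)^H]
c_hat = d.T @ d / k                # relation:   C = E[(Z - mu)(Z - mu)^T]

print(np.allclose(gamma_hat, gamma_hat.conj().T))       # Gamma is Hermitian
print(np.allclose(c_hat, c_hat.T))                      # C is symmetric
print(np.all(np.linalg.eigvalsh(gamma_hat) >= -1e-9))   # Gamma is non-negative definite
```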

Relationships between covariance matrices

As for any complex random vector, the matrices $\Gamma$ and $C$ can be related to the covariance matrices of $\mathbf{X} = \Re(\mathbf{Z})$ and $\mathbf{Y} = \Im(\mathbf{Z})$ via the expressions

$$\Gamma = V_{XX} + V_{YY} + i(V_{YX} - V_{XY}), \qquad C = V_{XX} - V_{YY} + i(V_{YX} + V_{XY}),$$

where $V_{XY} \equiv \operatorname{Cov}[\mathbf{X}, \mathbf{Y}]$ and so on, and conversely

$$V_{XX} = \tfrac{1}{2}\operatorname{Re}[\Gamma + C], \qquad V_{XY} = \tfrac{1}{2}\operatorname{Im}[-\Gamma + C], \qquad V_{YX} = \tfrac{1}{2}\operatorname{Im}[\Gamma + C], \qquad V_{YY} = \tfrac{1}{2}\operatorname{Re}[\Gamma - C].$$
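A small NumPy sketch checking both directions of these conversion formulas on a random positive-definite example (block names mirror $V_{XX}$ etc.; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
m = rng.standard_normal((2 * n, 2 * n))
v = m @ m.T                      # joint real covariance of [X; Y], positive definite
v_xx, v_xy = v[:n, :n], v[:n, n:]
v_yx, v_yy = v[n:, :n], v[n:, n:]

gamma = v_xx + v_yy + 1j * (v_yx - v_xy)
c = v_xx - v_yy + 1j * (v_yx + v_xy)

# The converse direction recovers the real blocks exactly.
print(np.allclose(v_xx, 0.5 * np.real(gamma + c)))
print(np.allclose(v_xy, 0.5 * np.imag(-gamma + c)))
print(np.allclose(v_yx, 0.5 * np.imag(gamma + c)))
print(np.allclose(v_yy, 0.5 * np.real(gamma - c)))
```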

Density function

The probability density function for the complex normal distribution can be computed as

$$f(\mathbf{z}) = \frac{1}{\pi^n \sqrt{\det(\Gamma)\det(P)}}\, \exp\!\left\{-\frac{1}{2} \begin{pmatrix} (\overline{\mathbf{z}}-\overline{\mu})^{\mathrm T} & (\mathbf{z}-\mu)^{\mathrm T} \end{pmatrix} \begin{pmatrix} \Gamma & C \\ \overline{C} & \overline{\Gamma} \end{pmatrix}^{-1} \begin{pmatrix} \mathbf{z}-\mu \\ \overline{\mathbf{z}}-\overline{\mu} \end{pmatrix} \right\},$$

where $R = C^{\mathrm H}\Gamma^{-1}$ and $P = \overline{\Gamma} - RC$.
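Since this complex-form density equals, by construction, the joint density of the real vector $[\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T}]^{\mathrm T}$, one way to sanity-check an implementation is to compare the two numerically. A minimal NumPy/SciPy sketch (illustrative names; zero mean for brevity):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
n = 2

# Random joint real covariance for [X; Y], then Gamma and C via the relations above.
m = rng.standard_normal((2 * n, 2 * n))
v = m @ m.T
v_xx, v_xy = v[:n, :n], v[:n, n:]
v_yx, v_yy = v[n:, :n], v[n:, n:]
gamma = v_xx + v_yy + 1j * (v_yx - v_xy)
c = v_xx - v_yy + 1j * (v_yx + v_xy)

mu = np.zeros(n, dtype=complex)                              # zero mean for brevity
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)     # evaluation point

# Complex-form density.
big = np.block([[gamma, c], [c.conj(), gamma.conj()]])
p = gamma.conj() - c.conj().T @ np.linalg.solve(gamma, c)    # P = conj(Gamma) - C^H Gamma^{-1} C
d = np.concatenate([z - mu, (z - mu).conj()])
quad = (d.conj() @ np.linalg.solve(big, d)).real
f_complex = np.exp(-0.5 * quad) / (np.pi ** n * np.sqrt((np.linalg.det(gamma) * np.linalg.det(p)).real))

# The same density, written as the 2n-dimensional real normal density of [X; Y].
f_real = multivariate_normal(np.zeros(2 * n), v).pdf(np.concatenate([z.real, z.imag]))
print(np.isclose(f_complex, f_real))                         # expect True
```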

Characteristic function

The characteristic function of the complex normal distribution is given by [5]

$$\varphi(\mathbf{w}) = \operatorname{E}\!\left[e^{\,i\,\operatorname{Re}(\overline{\mathbf{w}}^{\mathrm T}\mathbf{Z})}\right] = \exp\!\left\{ i\,\operatorname{Re}\!\left(\overline{\mathbf{w}}^{\mathrm T}\mu\right) - \tfrac{1}{4}\!\left(\overline{\mathbf{w}}^{\mathrm T}\Gamma\mathbf{w} + \operatorname{Re}\!\left(\overline{\mathbf{w}}^{\mathrm T}C\overline{\mathbf{w}}\right)\right) \right\},$$

where the argument $\mathbf{w}$ is an $n$-dimensional complex vector.
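For the scalar standard case $Z \sim \mathcal{CN}(0,1)$ the formula reduces to $\varphi(w) = e^{-|w|^2/4}$, which is easy to verify by Monte Carlo (a sketch with an arbitrary test point $w$; values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
z = (rng.standard_normal(1_000_000) + 1j * rng.standard_normal(1_000_000)) / np.sqrt(2)
w = 0.7 - 1.3j

empirical = np.mean(np.exp(1j * (np.conj(w) * z).real))  # E[exp(i Re(conj(w) Z))]
closed_form = np.exp(-abs(w) ** 2 / 4)
print(empirical, closed_form)  # agree to roughly 2-3 decimal places
```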

Properties

If $\mathbf{Z} \sim \mathcal{CN}(\mu, \Gamma, C)$ is an $n$-dimensional complex normal vector, $A$ a deterministic $m \times n$ complex matrix, and $b$ a deterministic $m$-dimensional complex vector, then the affine transform $A\mathbf{Z} + b$ is also complex normal:

$$A\mathbf{Z} + b \sim \mathcal{CN}\!\left(A\mu + b,\ A\Gamma A^{\mathrm H},\ ACA^{\mathrm T}\right).$$

The modulus of a complex normal random variable follows the Hoyt distribution. [6]
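A quick empirical check of the affine-transform property, starting from $\mathcal{CN}(0, I)$ samples so that the target parameters $A A^{\mathrm H}$ and $ACA^{\mathrm T} = 0$ are known in closed form (names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n, m_dim, k = 3, 2, 200_000

# Rows are i.i.d. CN(0, I) samples, so mu = 0, Gamma = I, C = 0.
z = (rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))) / np.sqrt(2)
a = rng.standard_normal((m_dim, n)) + 1j * rng.standard_normal((m_dim, n))
b = np.array([1.0 + 1.0j, -2.0j])

w = z @ a.T + b                      # W = A Z + b, applied sample-wise
d = w - w.mean(axis=0)
gamma_w = d.T @ d.conj() / k         # should approximate A Gamma A^H = A A^H
c_w = d.T @ d / k                    # should approximate A C A^T = 0

print(np.allclose(w.mean(axis=0), b, atol=0.02))                 # A*0 + b = b
print(np.allclose(gamma_w, a @ a.conj().T, atol=0.05))
print(np.allclose(c_w, np.zeros((m_dim, m_dim)), atol=0.05))
```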

Circularly-symmetric central case

Definition

A complex random vector $\mathbf{Z}$ is called circularly symmetric if for every deterministic $\varphi \in [-\pi, \pi)$ the distribution of $e^{i\varphi}\mathbf{Z}$ equals the distribution of $\mathbf{Z}$. [4]: pp. 500–501

Central normal complex random vectors that are circularly symmetric are of particular interest because they are fully specified by the covariance matrix $\Gamma$.

The circularly-symmetric (central) complex normal distribution corresponds to the case of zero mean and zero relation matrix, i.e. $\mu = 0$ and $C = 0$. [3]: p. 507 [7] This is usually denoted

$$\mathbf{Z} \sim \mathcal{CN}(0, \Gamma).$$

Distribution of real and imaginary parts

If $\mathbf{Z} = \mathbf{X} + i\mathbf{Y}$ is circularly-symmetric (central) complex normal, then the vector $[\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T}]^{\mathrm T}$ is multivariate normal with covariance structure

$$\begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} \sim \mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0 \end{pmatrix},\ \frac{1}{2}\begin{pmatrix} \operatorname{Re}\Gamma & -\operatorname{Im}\Gamma \\ \operatorname{Im}\Gamma & \operatorname{Re}\Gamma \end{pmatrix}\right),$$

where $\Gamma = \operatorname{E}[\mathbf{Z}\mathbf{Z}^{\mathrm H}]$.
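This structure can be checked by sampling: generate $\mathbf{Z} = L\mathbf{W}$ with $\mathbf{W} \sim \mathcal{CN}(0, I)$ and $LL^{\mathrm H} = \Gamma$, then compare the empirical covariance of $[\mathbf{X}^{\mathrm T}, \mathbf{Y}^{\mathrm T}]^{\mathrm T}$ with the block formula (a NumPy sketch, illustrative names):

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 2, 500_000

m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gamma = m @ m.conj().T                   # a random Hermitian positive-definite covariance
l = np.linalg.cholesky(gamma)            # L L^H = Gamma

# Color i.i.d. CN(0, I) rows: each row z_s = L w_s, so Z ~ CN(0, Gamma) and C = 0.
w = (rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))) / np.sqrt(2)
z = w @ l.T

xy = np.concatenate([z.real, z.imag], axis=1)
cov_emp = xy.T @ xy / k
cov_theory = 0.5 * np.block([
    [gamma.real, -gamma.imag],
    [gamma.imag,  gamma.real],
])
print(np.allclose(cov_emp, cov_theory, atol=0.05))   # expect True
```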

Probability density function

For a nonsingular covariance matrix $\Gamma$, its distribution can also be simplified as [3]: p. 508

$$f(\mathbf{z}) = \frac{1}{\pi^n \det(\Gamma)}\, e^{-\mathbf{z}^{\mathrm H}\Gamma^{-1}\mathbf{z}}.$$

Therefore, if the non-zero mean $\mu$ and covariance matrix $\Gamma$ are unknown, a suitable log-likelihood function for a single observation vector $\mathbf{z}$ would be

$$\ln L(\mu, \Gamma) = -\ln(\det(\Gamma)) - (\mathbf{z}-\mu)^{\mathrm H}\Gamma^{-1}(\mathbf{z}-\mu) - n\ln(\pi).$$
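This log-likelihood is simply the logarithm of the density above with the mean shift included; a one-point numerical check (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2
m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gamma = m @ m.conj().T                    # a random Hermitian positive-definite covariance
mu = np.array([0.3 - 0.1j, 1.0j])
z = rng.standard_normal(n) + 1j * rng.standard_normal(n)

d = z - mu
quad = (d.conj() @ np.linalg.solve(gamma, d)).real
det_g = np.linalg.det(gamma).real

pdf = np.exp(-quad) / (np.pi ** n * det_g)
loglik = -np.log(det_g) - quad - n * np.log(np.pi)
print(np.isclose(np.log(pdf), loglik))    # True
```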

The standard complex normal (defined in Eq.1) corresponds to the distribution of a scalar random variable with $\mu = 0$, $C = 0$, and $\Gamma = 1$. Thus, the standard complex normal distribution has density

$$f(z) = \frac{1}{\pi}\, e^{-|z|^2}.$$

Properties

The above expression demonstrates why the case $\mu = 0$, $C = 0$ is called "circularly symmetric": the density function depends only on the magnitude of $z$, not on its argument. As such, the magnitude $|z|$ of a standard complex normal random variable has the Rayleigh distribution, the squared magnitude $|z|^2$ has the exponential distribution, and the argument is distributed uniformly on $[-\pi, \pi]$.
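These three marginal facts are easy to confirm with Kolmogorov–Smirnov tests against the corresponding SciPy distributions (a sketch; the parameter choices follow the variances stated above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
z = (rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)) / np.sqrt(2)

# |Z| ~ Rayleigh(scale = 1/sqrt(2)),  |Z|^2 ~ Exponential(mean 1),  arg(Z) ~ Uniform(-pi, pi].
print(stats.kstest(np.abs(z), stats.rayleigh(scale=np.sqrt(0.5)).cdf).pvalue)
print(stats.kstest(np.abs(z) ** 2, stats.expon().cdf).pvalue)
print(stats.kstest(np.angle(z), stats.uniform(loc=-np.pi, scale=2 * np.pi).cdf).pvalue)
# Large p-values indicate consistency with the named distributions.
```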

If $\mathbf{Z}_1, \ldots, \mathbf{Z}_k$ are independent and identically distributed $n$-dimensional circular complex normal random vectors with $\mu = 0$, then the random squared norm

$$Q = \sum_{j=1}^{k} \mathbf{Z}_j^{\mathrm H}\mathbf{Z}_j = \sum_{j=1}^{k} \|\mathbf{Z}_j\|^2$$

has the generalized chi-squared distribution and the random matrix

$$W = \sum_{j=1}^{k} \mathbf{Z}_j\mathbf{Z}_j^{\mathrm H}$$

has the complex Wishart distribution with $k$ degrees of freedom. This distribution can be described by the density function

$$f(w) = \frac{\det(\Gamma^{-1})^{k}\,\det(w)^{k-n}}{\pi^{n(n-1)/2}\,\prod_{j=1}^{n}(k-j)!}\; e^{-\operatorname{tr}(\Gamma^{-1}w)},$$

where $k \geq n$ and $w$ is an $n \times n$ nonnegative-definite matrix.
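A basic sanity check on the construction of $W$: since $\operatorname{E}[\mathbf{Z}_j\mathbf{Z}_j^{\mathrm H}] = \Gamma$, the mean of $W$ is $k\Gamma$. A NumPy sketch (illustrative names; this checks only the first moment, not the full Wishart density):

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, trials = 2, 8, 20_000

m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
gamma = m @ m.conj().T
l = np.linalg.cholesky(gamma)             # L L^H = Gamma

w_mean = np.zeros((n, n), dtype=complex)
for _ in range(trials):
    # k i.i.d. rows of CN(0, Gamma), colored from CN(0, I) by z_j = L w_j.
    z = ((rng.standard_normal((k, n)) + 1j * rng.standard_normal((k, n))) / np.sqrt(2)) @ l.T
    w_mean += z.T @ z.conj()              # W = sum_j z_j z_j^H
w_mean /= trials

print(np.allclose(w_mean, k * gamma, atol=0.5))   # E[W] = k * Gamma
```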


References

  1. Goodman, N. R. (1963). "Statistical analysis based on a certain multivariate complex Gaussian distribution (an introduction)". The Annals of Mathematical Statistics. 34 (1): 152–177. doi:10.1214/aoms/1177704250. JSTOR 2991290.
  2. Gallager, R. Book chapter, p. 9.
  3. Lapidoth, A. (2009). A Foundation in Digital Communication. Cambridge University Press. ISBN 9780521193955.
  4. Tse, David (2005). Fundamentals of Wireless Communication. Cambridge University Press. ISBN 9781139444668.
  5. Picinbono, Bernard (1996). "Second-order complex random vectors and normal distributions". IEEE Transactions on Signal Processing. 44 (10): 2637–2640. Bibcode:1996ITSP...44.2637P. doi:10.1109/78.539051.
  6. Daniel Wollschlaeger. "The Hoyt Distribution (Documentation for R package 'shotGroups' version 0.6.2)".
  7. Gallager, R. Book chapter.