Wishart distribution

Notation: X ~ W_p(V, n)
Parameters: n > p − 1 degrees of freedom (real); V > 0 scale matrix (p × p, positive definite)
Support: X is a p × p positive-definite random matrix
PDF: f_X(X) = |X|^{(n−p−1)/2} e^{−tr(V^{−1}X)/2} / (2^{np/2} |V|^{n/2} Γ_p(n/2))
Mean: E[X] = nV
Mode: (n − p − 1)V for n ≥ p + 1
Variance: Var(X_{ij}) = n(v_{ij}² + v_{ii}v_{jj})
Entropy: see below
CF: Θ ↦ |I_p − 2iΘV|^{−n/2}

In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928. [1] Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles"), Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), and LOE, LUE, LSE (in analogy with GOE, GUE, GSE). [2]

It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse of the covariance matrix (the precision matrix) of a multivariate normal random vector. [3]

Definition

Suppose G is a p × n matrix, each column of which is independently drawn from a p-variate normal distribution with zero mean:

G = (g_1, \dots, g_n), \qquad g_i \sim N_p(0, V) \text{ independently.}

Then the Wishart distribution is the probability distribution of the p × p random matrix [4]

S = G G^T = \sum_{i=1}^n g_i g_i^T,

known as the scatter matrix. One indicates that S has that probability distribution by writing

S \sim W_p(V, n).

The positive integer n is the number of degrees of freedom. Sometimes this is written W(V, p, n). For n ≥ p the matrix S is invertible with probability 1 if V is invertible.

If p = V = 1 then this distribution is a chi-squared distribution with n degrees of freedom.
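
To make the special case concrete, the following minimal Python sketch (assuming NumPy and SciPy are available) draws from W_1(1, n) and compares it with χ²_n:

```python
# Numerical check that W_1(1, n) coincides with the chi-squared
# distribution with n degrees of freedom.
import numpy as np
from scipy import stats

n = 7
rng = np.random.default_rng(0)
w = stats.wishart(df=n, scale=1.0).rvs(size=100_000, random_state=rng)

# chi^2_n has mean n and variance 2n.
print(w.mean(), w.var())                              # approx. 7 and 14
print(stats.kstest(w, stats.chi2(df=n).cdf).pvalue)   # large p-value expected
```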

Occurrence

The Wishart distribution arises as the distribution of the sample covariance matrix for a sample from a multivariate normal distribution. It occurs frequently in likelihood-ratio tests in multivariate statistical analysis. It also arises in the spectral theory of random matrices [citation needed] and in multidimensional Bayesian analysis. [5] It is also encountered in wireless communications, in the analysis of the performance of Rayleigh fading MIMO wireless channels. [6]

Probability density function

[Figure: Spectral density of the Wishart–Laguerre ensemble with dimensions (8, 15); a reconstruction of Figure 1 of [7].]

The Wishart distribution can be characterized by its probability density function as follows:

Let X be a p × p symmetric matrix of random variables that is positive semi-definite. Let V be a (fixed) symmetric positive definite matrix of size p × p.

Then, if n ≥ p, X has a Wishart distribution with n degrees of freedom if it has the probability density function

f_X(X) = \frac{1}{2^{np/2} |V|^{n/2} \Gamma_p\!\left(\frac{n}{2}\right)} |X|^{(n-p-1)/2} e^{-\frac{1}{2}\operatorname{tr}(V^{-1} X)},

where |X| is the determinant of X and Γ_p is the multivariate gamma function, defined as

\Gamma_p\!\left(\frac{n}{2}\right) = \pi^{p(p-1)/4} \prod_{j=1}^p \Gamma\!\left(\frac{n}{2} - \frac{j-1}{2}\right).

The density above is not the joint density of all p² elements of the random matrix X (such a p²-dimensional density does not exist because of the symmetry constraints X_{ij} = X_{ji}); it is rather the joint density of the p(p + 1)/2 elements X_{ij} for i ≤ j ([1], page 38). Also, the density formula above applies only to positive-definite matrices; for other matrices the density is equal to zero.
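
As an illustration of the density formula, here is a small Python sketch (SciPy assumed; `wishart_logpdf` is a helper written for this example) that evaluates the log-density directly via the multivariate gamma function and checks it against scipy.stats.wishart:

```python
import numpy as np
from scipy import stats
from scipy.special import multigammaln

def wishart_logpdf(X, V, n):
    """Log-density of W_p(V, n) at a positive-definite matrix X."""
    p = X.shape[0]
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_V = np.linalg.slogdet(V)
    return (0.5 * (n - p - 1) * logdet_X
            - 0.5 * np.trace(np.linalg.solve(V, X))   # tr(V^{-1} X)
            - 0.5 * n * p * np.log(2)
            - 0.5 * n * logdet_V
            - multigammaln(0.5 * n, p))               # log Gamma_p(n/2)

V = np.array([[2.0, 0.3], [0.3, 1.0]])
n, X = 5, np.array([[8.0, 1.0], [1.0, 4.0]])
print(wishart_logpdf(X, V, n))                  # manual formula
print(stats.wishart(df=n, scale=V).logpdf(X))   # same value from SciPy
```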

Spectral density

The joint-eigenvalue density for the eigenvalues λ_1, …, λ_p ≥ 0 of a random matrix X ~ W_p(I, n) is [8] [9]

c_{n,p}\, e^{-\frac{1}{2} \sum_i \lambda_i} \prod_i \lambda_i^{(n-p-1)/2} \prod_{i<j} |\lambda_i - \lambda_j|,

where c_{n,p} is a normalizing constant.

In fact, the above definition can be extended to any real n > p − 1. If n ≤ p − 1, then the Wishart distribution no longer has a density; instead, it represents a singular distribution that takes values in a lower-dimensional subspace of the space of p × p matrices. [10]

Use in Bayesian statistics

In Bayesian statistics, in the context of the multivariate normal distribution, the Wishart distribution is the conjugate prior to the precision matrix Ω = Σ−1, where Σ is the covariance matrix. [11] :135 [12]

Choice of parameters

The least informative, proper Wishart prior is obtained by setting n = p. [citation needed]

A common choice for V leverages the fact that the mean of X ~ W_p(V, n) is nV. Then V is chosen so that nV equals an initial guess for X. For instance, when estimating a precision matrix Σ^{−1} ~ W_p(V, n), a reasonable choice for V would be n^{−1}Σ_0^{−1}, where Σ_0 is some prior estimate for the covariance matrix Σ.
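
A small Python sketch of this parameter choice (NumPy/SciPy assumed; Sigma0 is a hypothetical prior guess introduced for the example):

```python
# With V = Sigma0^{-1} / n, the prior mean of the precision matrix
# Omega ~ W_p(V, n) is n * V = Sigma0^{-1}.
import numpy as np
from scipy import stats

p = 3
n = p                              # least informative proper choice, see above
Sigma0 = np.diag([1.0, 2.0, 0.5])  # prior guess for the covariance matrix
V = np.linalg.inv(Sigma0) / n      # so that n * V = Sigma0^{-1}

prior = stats.wishart(df=n, scale=V)
draws = prior.rvs(size=50_000, random_state=np.random.default_rng(1))
print(draws.mean(axis=0))          # approx. Sigma0^{-1}
```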

Properties

Log-expectation

The following formula plays a role in variational Bayes derivations for Bayes networks involving the Wishart distribution. From equation (2.63) of [13],

\operatorname{E}[\ln |X|] = \psi_p\!\left(\frac{n}{2}\right) + p \ln 2 + \ln |V|,

where ψ_p is the multivariate digamma function (the derivative of the log of the multivariate gamma function).

Log-variance

The following variance computation can be of help in Bayesian statistics:

\operatorname{Var}[\ln |X|] = \sum_{i=1}^p \psi_1\!\left(\frac{n + 1 - i}{2}\right),

where ψ_1 is the trigamma function. This comes up when computing the Fisher information of the Wishart random variable.
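
Both identities can be checked by Monte Carlo; the following Python sketch (SciPy assumed) evaluates the right-hand sides with scipy.special.digamma and polygamma:

```python
import numpy as np
from scipy import stats
from scipy.special import digamma, polygamma

p, n = 3, 8.5
V = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.2], [0.0, 0.2, 1.5]])

X = stats.wishart(df=n, scale=V).rvs(size=200_000,
                                     random_state=np.random.default_rng(2))
logdets = np.linalg.slogdet(X)[1]

# Multivariate digamma: psi_p(a) = sum_i digamma(a + (1 - i)/2), i = 1..p.
i = np.arange(1, p + 1)
mean_theory = (digamma(n / 2 + (1 - i) / 2).sum() + p * np.log(2)
               + np.linalg.slogdet(V)[1])
var_theory = polygamma(1, (n + 1 - i) / 2).sum()   # trigamma sum

print(logdets.mean(), mean_theory)   # should agree closely
print(logdets.var(), var_theory)
```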

Entropy

The information entropy of the distribution has the following formula: [11] :693

\operatorname{H}[X] = -\ln B(V, n) - \frac{n - p - 1}{2} \operatorname{E}[\ln |X|] + \frac{np}{2},

where B(V, n) is the normalizing constant of the distribution:

B(V, n) = \frac{1}{|V|^{n/2} \, 2^{np/2} \, \Gamma_p\!\left(\frac{n}{2}\right)}.

This can be expanded as follows:

\operatorname{H}[X] = \frac{p + 1}{2} \ln |V| + \frac{p(p + 1)}{2} \ln 2 + \ln \Gamma_p\!\left(\frac{n}{2}\right) - \frac{n - p - 1}{2} \psi_p\!\left(\frac{n}{2}\right) + \frac{np}{2}.
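
As a sanity check, the expanded formula can be compared against the Monte Carlo estimate −E[ln f(X)]; a Python sketch (SciPy assumed):

```python
import numpy as np
from scipy import stats
from scipy.special import digamma, multigammaln

p, n = 2, 6.0
V = np.array([[1.0, 0.4], [0.4, 2.0]])
dist = stats.wishart(df=n, scale=V)

# Closed-form entropy from the expansion above.
i = np.arange(1, p + 1)
psi_p = digamma(n / 2 + (1 - i) / 2).sum()       # multivariate digamma
H = (0.5 * (p + 1) * np.linalg.slogdet(V)[1]
     + 0.5 * p * (p + 1) * np.log(2)
     + multigammaln(n / 2, p)
     - 0.5 * (n - p - 1) * psi_p
     + 0.5 * n * p)

# Monte Carlo estimate: H = -E[ln f(X)].
X = dist.rvs(size=20_000, random_state=np.random.default_rng(3))
mc = -np.mean([dist.logpdf(x) for x in X])
print(H, mc)                                     # approximately equal
```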

Cross-entropy

The cross-entropy of two Wishart distributions p_0 with parameters n_0, V_0 and p_1 with parameters n_1, V_1 is

\operatorname{H}(p_0, p_1) = \frac{n_1}{2} \ln |V_1| + \frac{n_1 p}{2} \ln 2 + \ln \Gamma_p\!\left(\frac{n_1}{2}\right) - \frac{n_1 - p - 1}{2} \left( \psi_p\!\left(\frac{n_0}{2}\right) + p \ln 2 + \ln |V_0| \right) + \frac{n_0}{2} \operatorname{tr}\!\left(V_1^{-1} V_0\right).

Note that when n_0 = n_1 and V_0 = V_1, we recover the entropy.

KL-divergence

The Kullback–Leibler divergence of p_1 from p_0 is

D_{KL}(p_0 \| p_1) = \operatorname{H}(p_0, p_1) - \operatorname{H}(p_0) = -\frac{n_1}{2} \ln \left|V_1^{-1} V_0\right| + \frac{n_0}{2} \left( \operatorname{tr}\!\left(V_1^{-1} V_0\right) - p \right) + \ln \frac{\Gamma_p\!\left(\frac{n_1}{2}\right)}{\Gamma_p\!\left(\frac{n_0}{2}\right)} + \frac{n_0 - n_1}{2} \psi_p\!\left(\frac{n_0}{2}\right).
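
The following Python sketch (SciPy assumed; `wishart_kl` is a helper written for this example) implements the formula; it returns zero when the two distributions coincide and a positive value otherwise:

```python
import numpy as np
from scipy.special import digamma, multigammaln

def wishart_kl(V0, n0, V1, n1):
    """KL divergence D(W_p(V0, n0) || W_p(V1, n1))."""
    p = V0.shape[0]
    M = np.linalg.solve(V1, V0)                    # V1^{-1} V0
    i = np.arange(1, p + 1)
    psi_p = digamma(n0 / 2 + (1 - i) / 2).sum()    # multivariate digamma
    return (-0.5 * n1 * np.linalg.slogdet(M)[1]
            + 0.5 * n0 * (np.trace(M) - p)
            + multigammaln(n1 / 2, p) - multigammaln(n0 / 2, p)
            + 0.5 * (n0 - n1) * psi_p)

V = np.array([[1.0, 0.2], [0.2, 1.0]])
print(wishart_kl(V, 5, V, 5))               # 0.0
print(wishart_kl(V, 5, np.eye(2), 7))       # > 0
```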

Characteristic function

The characteristic function of the Wishart distribution is

\Theta \mapsto \operatorname{E}\!\left[ e^{i \operatorname{tr}(X \Theta)} \right] = \left| I_p - 2i\,\Theta V \right|^{-n/2},

where E[⋅] denotes expectation. (Here Θ is any matrix with the same dimensions as V, I_p is the identity matrix, and i is a square root of −1.) [9] Properly interpreting this formula requires a little care, because noninteger complex powers are multivalued; when n is noninteger, the correct branch must be determined via analytic continuation. [14]
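
For integer n the formula can be verified directly by Monte Carlo; a Python sketch (SciPy assumed), using a small symmetric Θ:

```python
# The sample average of exp(i tr(X Theta)) should approach
# |I - 2i Theta V|^{-n/2}.
import numpy as np
from scipy import stats

p, n = 2, 6   # integer n, so the complex power is single-valued here
V = np.array([[1.0, 0.3], [0.3, 0.5]])
Theta = np.array([[0.10, 0.02], [0.02, 0.05]])   # small symmetric Theta

X = stats.wishart(df=n, scale=V).rvs(size=400_000,
                                     random_state=np.random.default_rng(4))
empirical = np.exp(1j * np.trace(X @ Theta, axis1=1, axis2=2)).mean()
theory = np.linalg.det(np.eye(p) - 2j * Theta @ V) ** (-n / 2)
print(empirical, theory)                          # close agreement
```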

Theorem

If a p × p random matrix X has a Wishart distribution with m degrees of freedom and variance matrix V — write X ~ W_p(V, m) — and C is a q × p matrix of rank q, then [15]

C X C^T \sim W_q\!\left( C V C^T, m \right).

Corollary 1

If z is a nonzero p × 1 constant vector, then [15]

z^T X z \sim \sigma_z^2 \chi_m^2.

In this case, χ_m² is the chi-squared distribution and σ_z² = z^T V z (note that σ_z² is a constant; it is positive because V is positive definite).

Corollary 2

Consider the case where z^T = (0, ..., 0, 1, 0, ..., 0) (that is, the j-th element is one and all others zero). Then corollary 1 above shows that

X_{jj} \sim \sigma_{jj} \chi_m^2

gives the marginal distribution of each of the elements on the matrix's diagonal.
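
A Monte Carlo check of corollary 1 (SciPy assumed): the quadratic form z^T X z, rescaled by z^T V z, should pass a goodness-of-fit test against χ²_m:

```python
import numpy as np
from scipy import stats

p, m = 3, 9
V = np.array([[2.0, 0.5, 0.1], [0.5, 1.0, 0.0], [0.1, 0.0, 1.5]])
z = np.array([1.0, -2.0, 0.5])

X = stats.wishart(df=m, scale=V).rvs(size=100_000,
                                     random_state=np.random.default_rng(5))
# q_i = z^T X_i z / (z^T V z) should be chi-squared with m dof.
q = np.einsum('i,nij,j->n', z, X, z) / (z @ V @ z)
print(stats.kstest(q, stats.chi2(df=m).cdf).pvalue)   # large p-value expected
```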

George Seber points out that the Wishart distribution is not called the “multivariate chi-squared distribution” because the marginal distribution of the off-diagonal elements is not chi-squared. Seber prefers to reserve the term multivariate for the case when all univariate marginals belong to the same family. [16]

Estimator of the multivariate normal distribution

The Wishart distribution is the sampling distribution of the maximum-likelihood estimator (MLE) of the covariance matrix of a multivariate normal distribution. [17] A derivation of the MLE uses the spectral theorem.
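
The statement can be illustrated numerically; in the sketch below (NumPy/SciPy assumed), the scatter matrix of n zero-mean multivariate normal draws has mean nV, so the MLE S/n is centered on V:

```python
# For x_1..x_n iid N_p(0, V), the scatter matrix S = sum_i x_i x_i^T is
# W_p(V, n), and S/n is the MLE of the covariance (known zero mean).
import numpy as np

p, n, reps = 2, 10, 50_000
V = np.array([[1.0, 0.6], [0.6, 2.0]])
rng = np.random.default_rng(6)

G = rng.multivariate_normal(np.zeros(p), V, size=(reps, n))   # (reps, n, p)
S = np.einsum('rni,rnj->rij', G, G)                           # scatter matrices

# First-moment check: E[S] = n V.
print(S.mean(axis=0) / n)   # approx. V
print(V)
```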

Bartlett decomposition

The Bartlett decomposition of a matrix X from a p-variate Wishart distribution with scale matrix V and n degrees of freedom is the factorization

X = L A A^T L^T,

where L is the Cholesky factor of V, and

A = \begin{pmatrix} c_1 & 0 & \cdots & 0 \\ n_{21} & c_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ n_{p1} & n_{p2} & \cdots & c_p \end{pmatrix},

where c_i^2 \sim \chi_{n-i+1}^2 and n_{ij} \sim N(0, 1) independently. [18] This provides a useful method for obtaining random samples from a Wishart distribution. [19]
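
A minimal implementation of Bartlett's sampler (NumPy assumed; `wishart_bartlett` is a helper name chosen for this example), checked against the mean nV:

```python
import numpy as np

def wishart_bartlett(V, n, rng):
    """Draw one sample from W_p(V, n) via the Bartlett decomposition."""
    p = V.shape[0]
    L = np.linalg.cholesky(V)
    A = np.zeros((p, p))
    # Diagonal: c_i = sqrt of a chi-squared draw with n - i + 1 dof.
    A[np.diag_indices(p)] = np.sqrt(rng.chisquare(n - np.arange(p)))
    # Strict lower triangle: independent standard normals.
    A[np.tril_indices(p, k=-1)] = rng.standard_normal(p * (p - 1) // 2)
    LA = L @ A
    return LA @ LA.T

rng = np.random.default_rng(7)
V, n = np.array([[1.0, 0.4], [0.4, 1.5]]), 6
samples = np.array([wishart_bartlett(V, n, rng) for _ in range(50_000)])
print(samples.mean(axis=0))   # approx. n * V
```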

Marginal distribution of matrix elements

Let V be a 2 × 2 variance matrix characterized by correlation coefficient −1 < ρ < 1 and L its lower Cholesky factor:

V = \begin{pmatrix} \sigma_1^2 & \rho \sigma_1 \sigma_2 \\ \rho \sigma_1 \sigma_2 & \sigma_2^2 \end{pmatrix}, \qquad L = \begin{pmatrix} \sigma_1 & 0 \\ \rho \sigma_2 & \sqrt{1 - \rho^2}\, \sigma_2 \end{pmatrix}.

Multiplying through the Bartlett decomposition above, we find that a random sample from the 2 × 2 Wishart distribution is

X = \begin{pmatrix} \sigma_1^2 c_1^2 & \sigma_1 \sigma_2 \left( \rho c_1^2 + \sqrt{1 - \rho^2}\, c_1 n_{21} \right) \\ \sigma_1 \sigma_2 \left( \rho c_1^2 + \sqrt{1 - \rho^2}\, c_1 n_{21} \right) & \sigma_2^2 \left( \left( \rho c_1 + \sqrt{1 - \rho^2}\, n_{21} \right)^2 + \left( 1 - \rho^2 \right) c_2^2 \right) \end{pmatrix}.

The diagonal elements, most evidently in the first element, follow the χ² distribution with n degrees of freedom (scaled by σ²) as expected. The off-diagonal element is less familiar but can be identified as a normal variance-mean mixture where the mixing density is a χ² distribution. The corresponding marginal probability density for the off-diagonal element is therefore the variance-gamma distribution

f(x_{12}) = \frac{\left| x_{12} \right|^{\frac{n-1}{2}}}{\Gamma\!\left(\frac{n}{2}\right) \sqrt{2^{n-1} \pi \left(1 - \rho^2\right) \left(\sigma_1 \sigma_2\right)^{n+1}}}\, K_{\frac{n-1}{2}}\!\left( \frac{\left| x_{12} \right|}{\sigma_1 \sigma_2 \left(1 - \rho^2\right)} \right) \exp\!\left( \frac{\rho x_{12}}{\sigma_1 \sigma_2 \left(1 - \rho^2\right)} \right),

where K_ν(z) is the modified Bessel function of the second kind. [20] Similar results may be found for higher dimensions. In general, if X follows a Wishart distribution W_p(V, n), then for i ≠ j the off-diagonal elements X_{ij} follow a variance-gamma distribution whose parameters are determined by n and the entries of V. [21]
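
In the simplest case ρ = 0, σ_1 = σ_2 = 1, the density above reduces to f(x) = |x|^{(n−1)/2} K_{(n−1)/2}(|x|) / (Γ(n/2) √(2^{n−1}π)); the following Python sketch (SciPy assumed) compares this against a histogram of sampled off-diagonal elements:

```python
import numpy as np
from scipy import stats
from scipy.special import kv, gammaln

n, p = 6, 2
X = stats.wishart(df=n, scale=np.eye(p)).rvs(
    size=300_000, random_state=np.random.default_rng(8))
x12 = X[:, 0, 1]                                 # off-diagonal element

def offdiag_pdf(x):
    """Marginal density of X_12 for V = I, via the Bessel function K."""
    ax = np.abs(x)
    return (ax ** ((n - 1) / 2) * kv((n - 1) / 2, ax)
            / np.exp(gammaln(n / 2)) / np.sqrt(2 ** (n - 1) * np.pi))

hist, edges = np.histogram(x12, bins=60, range=(-8, 8), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - offdiag_pdf(mid))))   # small discrepancy expected
```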

It is also possible to write down the moment-generating function, even in the noncentral case (it is essentially the nth power of equation 10 of Craig (1936) [22]), although the probability density then becomes an infinite sum of Bessel functions.

The range of the shape parameter

It can be shown [23] that the Wishart distribution can be defined if and only if the shape parameter n belongs to the set

\Lambda_p := \{0, 1, \dots, p - 1\} \cup (p - 1, \infty).

This set is named after Gindikin, who introduced it [24] in the 1970s in the context of gamma distributions on homogeneous cones. However, for the new parameters in the discrete spectrum of the Gindikin ensemble, namely

\Lambda_p^* := \{0, 1, \dots, p - 1\},

the corresponding Wishart distribution has no Lebesgue density.

Relationships to other distributions

If X ~ W_p(V, n), then X^{−1} follows the inverse Wishart distribution W_p^{−1}(V^{−1}, n). In the scalar case p = 1, the Wishart distribution W_1(σ², n) is the gamma distribution Γ(n/2, 2σ²); in particular, W_1(1, n) is the chi-squared distribution with n degrees of freedom.


References

  1. Wishart, J. (1928). "The generalised product moment distribution in samples from a normal multivariate population". Biometrika. 20A (1–2): 32–52. doi:10.1093/biomet/20A.1-2.32. JFM 54.0565.02. JSTOR 2331939.
  2. Livan, Giacomo; Novaes, Marcel; Vivo, Pierpaolo (2018). "Classical Ensembles: Wishart-Laguerre". Introduction to Random Matrices: Theory and Practice. SpringerBriefs in Mathematical Physics. Cham: Springer International Publishing. pp. 89–95. doi:10.1007/978-3-319-70885-0_13. ISBN 978-3-319-70885-0.
  3. Koop, Gary; Korobilis, Dimitris (2010). "Bayesian Multivariate Time Series Methods for Empirical Macroeconomics". Foundations and Trends in Econometrics. 3 (4): 267–358. doi:10.1561/0800000013.
  4. Gupta, A. K.; Nagar, D. K. (2000). Matrix Variate Distributions. Chapman & Hall/CRC. ISBN 1584880465.
  5. Gelman, Andrew (2003). Bayesian Data Analysis (2nd ed.). Boca Raton, Fla.: Chapman & Hall. p. 582. ISBN 158488388X.
  6. Zanella, A.; Chiani, M.; Win, M. Z. (April 2009). "On the marginal distribution of the eigenvalues of Wishart matrices" (PDF). IEEE Transactions on Communications. 57 (4): 1050–1060. doi:10.1109/TCOMM.2009.04.070143. hdl:1721.1/66900. S2CID 12437386.
  7. Livan, Giacomo; Vivo, Pierpaolo (2011). "Moments of Wishart-Laguerre and Jacobi ensembles of random matrices: application to the quantum transport problem in chaotic cavities". Acta Physica Polonica B. 42 (5): 1081. arXiv:1103.2638. doi:10.5506/APhysPolB.42.1081. ISSN 0587-4254. S2CID 119599157.
  8. Muirhead, Robb J. (2005). Aspects of Multivariate Statistical Theory (2nd ed.). Wiley Interscience. ISBN 0471769851.
  9. Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N.J.: Wiley Interscience. p. 259. ISBN 0-471-36091-0.
  10. Uhlig, H. (1994). "On Singular Wishart and Singular Multivariate Beta Distributions". The Annals of Statistics. 22: 395–405. doi:10.1214/aos/1176325375.
  11. Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer.
  12. Hoff, Peter D. (2009). A First Course in Bayesian Statistical Methods. New York: Springer. pp. 109–111. ISBN 978-0-387-92299-7.
  13. Nguyen, Duy. "An In Depth Introduction to Variational Bayes Note". SSRN 4541076.
  14. Mayerhofer, Eberhard (2019). "Reforming the Wishart characteristic function". arXiv:1901.09347 [math.PR].
  15. Rao, C. R. (1965). Linear Statistical Inference and its Applications. Wiley. p. 535.
  16. Seber, George A. F. (2004). Multivariate Observations. Wiley. ISBN 978-0471691211.
  17. Chatfield, C.; Collins, A. J. (1980). Introduction to Multivariate Analysis. London: Chapman and Hall. pp. 103–108. ISBN 0-412-16030-7.
  18. Anderson, T. W. (2003). An Introduction to Multivariate Statistical Analysis (3rd ed.). Hoboken, N.J.: Wiley Interscience. p. 257. ISBN 0-471-36091-0.
  19. Smith, W. B.; Hocking, R. R. (1972). "Algorithm AS 53: Wishart Variate Generator". Journal of the Royal Statistical Society, Series C. 21 (3): 341–345. JSTOR 2346290.
  20. Pearson, Karl; Jeffery, G. B.; Elderton, Ethel M. (December 1929). "On the Distribution of the First Product Moment-Coefficient, in Samples Drawn from an Indefinitely Large Normal Population". Biometrika. 21 (1/4): 164–201. doi:10.2307/2332556. JSTOR 2332556.
  21. Fischer, Adrian; Gaunt, Robert E.; Sarantsev, Andrey (2023). "The Variance-Gamma Distribution: A Review". arXiv:2303.05615 [math.ST].
  22. Craig, Cecil C. (1936). "On the Frequency Function of xy". Ann. Math. Statist. 7: 1–15. doi:10.1214/aoms/1177732541.
  23. Peddada, Shyamal Das; Richards, Donald St. P. (1991). "Proof of a Conjecture of M. L. Eaton on the Characteristic Function of the Wishart Distribution". Annals of Probability. 19 (2): 868–874. doi:10.1214/aop/1176990455.
  24. Gindikin, S. G. (1975). "Invariant generalized functions in homogeneous domains". Funct. Anal. Appl. 9 (1): 50–52. doi:10.1007/BF01078179. S2CID 123288172.
  25. Dwyer, Paul S. (1967). "Some Applications of Matrix Derivatives in Multivariate Analysis". J. Amer. Statist. Assoc. 62 (318): 607–625. doi:10.1080/01621459.1967.10482934. JSTOR 2283988.