Inverse-Wishart distribution

Inverse-Wishart

Notation: $\mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$
Parameters: $\nu > p - 1$ degrees of freedom (real); $\mathbf{\Psi} > 0$ scale matrix ($p \times p$, positive definite)
Support: $\mathbf{X}$ is $p \times p$ positive definite
PDF: $\frac{|\mathbf{\Psi}|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p(\nu/2)}\, |\mathbf{X}|^{-(\nu+p+1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{X}^{-1})}$
Mean: $\frac{\mathbf{\Psi}}{\nu - p - 1}$, for $\nu > p + 1$
Mode: $\frac{\mathbf{\Psi}}{\nu + p + 1}$ [1]:406
Variance: see below

In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution.

We say $\mathbf{X}$ follows an inverse Wishart distribution, denoted as $\mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$, if its inverse $\mathbf{X}^{-1}$ has a Wishart distribution $\mathcal{W}(\mathbf{\Psi}^{-1}, \nu)$. Important identities have been derived for the inverse-Wishart distribution. [2]

Density

The probability density function of the inverse Wishart is: [3]

$$f_{\mathbf{X}}(\mathbf{X}; \mathbf{\Psi}, \nu) = \frac{|\mathbf{\Psi}|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p\!\left(\frac{\nu}{2}\right)}\, |\mathbf{X}|^{-(\nu+p+1)/2}\, e^{-\frac{1}{2}\operatorname{tr}(\mathbf{\Psi}\mathbf{X}^{-1})},$$

where $\mathbf{X}$ and $\mathbf{\Psi}$ are $p \times p$ positive definite matrices, $|\cdot|$ is the determinant, and $\Gamma_p(\cdot)$ is the multivariate gamma function.
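As a numerical cross-check (an illustrative sketch, not part of the original article), the log-density above can be evaluated directly and compared against SciPy's scipy.stats.invwishart, assuming its (df, scale) arguments correspond to $(\nu, \mathbf{\Psi})$ as defined here:

```python
# Sketch: evaluate the inverse-Wishart log-density from the formula above
# and cross-check against SciPy (assumed parameterization: df = nu, scale = Psi).
import numpy as np
from scipy.stats import invwishart
from scipy.special import multigammaln  # log of the multivariate gamma function

def inv_wishart_logpdf(X, Psi, nu):
    """Log-density of W^{-1}(Psi, nu) evaluated at a positive definite matrix X."""
    p = X.shape[0]
    _, logdet_X = np.linalg.slogdet(X)
    _, logdet_Psi = np.linalg.slogdet(Psi)
    trace_term = np.trace(Psi @ np.linalg.inv(X))
    return (0.5 * nu * logdet_Psi
            - 0.5 * nu * p * np.log(2.0)
            - multigammaln(0.5 * nu, p)
            - 0.5 * (nu + p + 1) * logdet_X
            - 0.5 * trace_term)

p, nu = 3, 7.0
Psi = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 1.5]])
X = np.diag([0.5, 0.8, 1.2])
print(inv_wishart_logpdf(X, Psi, nu))          # manual formula
print(invwishart(df=nu, scale=Psi).logpdf(X))  # SciPy, should agree
```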

Theorems

Distribution of the inverse of a Wishart-distributed matrix

If $\mathbf{A} \sim \mathcal{W}(\mathbf{\Sigma}, \nu)$ and $\mathbf{\Sigma}$ is of size $p \times p$, then $\mathbf{X} = \mathbf{A}^{-1}$ has an inverse Wishart distribution $\mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Sigma}^{-1}, \nu)$. [4]
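This relationship can be illustrated by Monte Carlo (a sketch, not from the article): inverting draws from a Wishart distribution should reproduce draws from the corresponding inverse Wishart.

```python
# Sketch: inverting W(Sigma, nu) draws should match W^{-1}(Sigma^{-1}, nu) draws.
import numpy as np
from scipy.stats import wishart, invwishart

rng = np.random.default_rng(0)
p, nu = 2, 10
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])
n_draws = 20000

W_samples = wishart(df=nu, scale=Sigma).rvs(size=n_draws, random_state=rng)
inv_of_W = np.linalg.inv(W_samples)   # invert each Wishart draw
direct = invwishart(df=nu, scale=np.linalg.inv(Sigma)).rvs(size=n_draws, random_state=rng)

# Both sample means should approximate inv(Sigma) / (nu - p - 1).
print(inv_of_W.mean(axis=0))
print(direct.mean(axis=0))
print(np.linalg.inv(Sigma) / (nu - p - 1))
```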

Marginal and conditional distributions from an inverse Wishart-distributed matrix

Suppose $\mathbf{A} \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$ has an inverse Wishart distribution. Partition the matrices $\mathbf{A}$ and $\mathbf{\Psi}$ conformably with each other,

$$\mathbf{A} = \begin{bmatrix} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \mathbf{A}_{21} & \mathbf{A}_{22} \end{bmatrix}, \qquad \mathbf{\Psi} = \begin{bmatrix} \mathbf{\Psi}_{11} & \mathbf{\Psi}_{12} \\ \mathbf{\Psi}_{21} & \mathbf{\Psi}_{22} \end{bmatrix},$$

where $\mathbf{A}_{ij}$ and $\mathbf{\Psi}_{ij}$ are $p_i \times p_j$ matrices; then we have

  1. $\mathbf{A}_{11}$ is independent of $\mathbf{A}_{11}^{-1}\mathbf{A}_{12}$ and $\mathbf{A}_{22\cdot 1}$, where $\mathbf{A}_{22\cdot 1} = \mathbf{A}_{22} - \mathbf{A}_{21}\mathbf{A}_{11}^{-1}\mathbf{A}_{12}$ is the Schur complement of $\mathbf{A}_{11}$ in $\mathbf{A}$;
  2. $\mathbf{A}_{11} \sim \mathcal{W}^{-1}(\mathbf{\Psi}_{11}, \nu - p_2)$ (illustrated numerically after this list);
  3. $\mathbf{A}_{11}^{-1}\mathbf{A}_{12} \mid \mathbf{A}_{22\cdot 1} \sim MN_{p_1 \times p_2}(\mathbf{\Psi}_{11}^{-1}\mathbf{\Psi}_{12},\; \mathbf{A}_{22\cdot 1} \otimes \mathbf{\Psi}_{11}^{-1})$, where $MN_{p \times q}(\cdot, \cdot)$ is a matrix normal distribution;
  4. $\mathbf{A}_{22\cdot 1} \sim \mathcal{W}^{-1}(\mathbf{\Psi}_{22\cdot 1}, \nu)$, where $\mathbf{\Psi}_{22\cdot 1} = \mathbf{\Psi}_{22} - \mathbf{\Psi}_{21}\mathbf{\Psi}_{11}^{-1}\mathbf{\Psi}_{12}$ is the Schur complement of $\mathbf{\Psi}_{11}$ in $\mathbf{\Psi}$.
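The marginal property in item 2 can be checked numerically, for example for a $1 \times 1$ leading block (an illustrative sketch assuming SciPy's invwishart; not part of the original article):

```python
# Sketch: the leading 1x1 block of A ~ W^{-1}(Psi, nu) should follow
# W^{-1}(Psi_11, nu - p2), i.e. a scalar inverse Wishart with fewer dof.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
p, p1, nu = 3, 1, 12
p2 = p - p1
Psi = np.array([[1.5, 0.3, 0.1], [0.3, 2.0, 0.4], [0.1, 0.4, 1.0]])

A = invwishart(df=nu, scale=Psi).rvs(size=50000, random_state=rng)
a11 = A[:, 0, 0]                     # leading 1x1 block of each draw

m = nu - p2                          # marginal degrees of freedom
psi11 = Psi[0, 0]
print(a11.mean(), psi11 / (m - 2))                              # theoretical mean
print(a11.var(), 2 * psi11**2 / ((m - 2)**2 * (m - 4)))         # theoretical variance
```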

Conjugate distribution

Suppose we wish to make inference about a covariance matrix $\mathbf{\Sigma}$ whose prior $p(\mathbf{\Sigma})$ has a $\mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$ distribution. If the observations $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]$ are independent p-variate Gaussian variables drawn from a $N(\mathbf{0}, \mathbf{\Sigma})$ distribution, then the conditional distribution $p(\mathbf{\Sigma} \mid \mathbf{X})$ has a $\mathcal{W}^{-1}(\mathbf{A} + \mathbf{\Psi}, n + \nu)$ distribution, where $\mathbf{A} = \mathbf{X}\mathbf{X}^T = \sum_{i=1}^n \mathbf{x}_i\mathbf{x}_i^T$.
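A minimal sketch of this conjugate update (assuming zero-mean observations, as above; the variable names are illustrative and not from the article):

```python
# Sketch of the conjugate update: prior Sigma ~ W^{-1}(Psi, nu),
# likelihood x_i ~ N(0, Sigma) i.i.d., posterior ~ W^{-1}(Psi + A, nu + n).
import numpy as np
from scipy.stats import invwishart, multivariate_normal

rng = np.random.default_rng(2)
p, nu = 2, 6.0
Psi = np.eye(p)

# Simulate data from a "true" covariance to update against.
Sigma_true = np.array([[1.0, 0.7], [0.7, 2.0]])
X = multivariate_normal(mean=np.zeros(p), cov=Sigma_true).rvs(size=200, random_state=rng)

A = X.T @ X                                         # scatter matrix, sum of x_i x_i^T
n = X.shape[0]
posterior = invwishart(df=nu + n, scale=Psi + A)    # posterior distribution object

# Posterior mean (Psi + A) / (nu + n - p - 1) approaches Sigma_true as n grows.
print((Psi + A) / (nu + n - p - 1))
```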

Because the prior and posterior distributions belong to the same family, we say the inverse Wishart distribution is conjugate to the multivariate Gaussian.

Due to its conjugacy to the multivariate Gaussian, it is possible to marginalize out (integrate out) the Gaussian's parameter $\mathbf{\Sigma}$, using the formula $p(\mathbf{X}) = \int p(\mathbf{X} \mid \mathbf{\Sigma})\, p(\mathbf{\Sigma})\, d\mathbf{\Sigma}$ and the linear algebra identity $\mathbf{v}^T\mathbf{\Omega}\mathbf{v} = \operatorname{tr}(\mathbf{\Omega}\mathbf{v}\mathbf{v}^T)$:

$$f_{\mathbf{X} \mid \mathbf{\Psi}, \nu}(\mathbf{X}) = \int f_{\mathbf{X} \mid \mathbf{\Sigma}}(\mathbf{X} \mid \mathbf{\Sigma})\, f_{\mathbf{\Sigma} \mid \mathbf{\Psi}, \nu}(\mathbf{\Sigma})\, d\mathbf{\Sigma} = \frac{|\mathbf{\Psi}|^{\nu/2}\, \Gamma_p\!\left(\frac{\nu + n}{2}\right)}{\pi^{np/2}\, |\mathbf{\Psi} + \mathbf{A}|^{(\nu+n)/2}\, \Gamma_p\!\left(\frac{\nu}{2}\right)}$$

(this is useful because the variance matrix $\mathbf{\Sigma}$ is not known in practice, but because $\mathbf{\Psi}$ and $\nu$ are known a priori, and $\mathbf{A}$ can be obtained from the data, the right-hand side can be evaluated directly). The inverse-Wishart distribution as a prior can be constructed via existing transferred prior knowledge. [5]
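A small sketch (under the same zero-mean assumptions; the function name is illustrative) that evaluates the log of this marginal likelihood directly:

```python
# Sketch: log p(X | Psi, nu) for rows of X i.i.d. N(0, Sigma), Sigma ~ W^{-1}(Psi, nu),
# following the marginal likelihood formula above.
import numpy as np
from scipy.special import multigammaln

def log_marginal_likelihood(X, Psi, nu):
    n, p = X.shape
    A = X.T @ X
    _, logdet_Psi = np.linalg.slogdet(Psi)
    _, logdet_post = np.linalg.slogdet(Psi + A)
    return (multigammaln(0.5 * (nu + n), p) - multigammaln(0.5 * nu, p)
            - 0.5 * n * p * np.log(np.pi)
            + 0.5 * nu * logdet_Psi
            - 0.5 * (nu + n) * logdet_post)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2))
print(log_marginal_likelihood(X, np.eye(2), nu=5.0))
```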

Moments

The following is based on Press, S. J. (1982), "Applied Multivariate Analysis", 2nd ed. (Dover Publications, New York), after reparameterizing the degrees of freedom to be consistent with the p.d.f. definition above.

Let $W \sim \mathcal{W}(\mathbf{\Sigma}, \nu)$ with $\nu \ge p$ and $\mathbf{X} \doteq W^{-1}$, so that $\mathbf{X} \sim \mathcal{W}^{-1}(\mathbf{\Psi}, \nu)$ with $\mathbf{\Psi} = \mathbf{\Sigma}^{-1}$.

The mean, for $\nu > p + 1$: [4]:85

$$E(\mathbf{X}) = \frac{\mathbf{\Psi}}{\nu - p - 1}.$$

The variance of each element of $\mathbf{X}$:

$$\operatorname{Var}(x_{ij}) = \frac{(\nu - p + 1)\,\psi_{ij}^2 + (\nu - p - 1)\,\psi_{ii}\psi_{jj}}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)},$$

where $\psi_{ij}$ denotes the $(i,j)$ element of $\mathbf{\Psi} = \mathbf{\Sigma}^{-1}$.

The variance of the diagonal uses the same formula as above with $i = j$, which simplifies to:

$$\operatorname{Var}(x_{ii}) = \frac{2\,\psi_{ii}^2}{(\nu - p - 1)^2(\nu - p - 3)}.$$

The covariance of elements of $\mathbf{X}$ is given by:

$$\operatorname{Cov}(x_{ij}, x_{kl}) = \frac{2\,\psi_{ij}\psi_{kl} + (\nu - p - 1)(\psi_{ik}\psi_{jl} + \psi_{il}\psi_{kj})}{(\nu - p)(\nu - p - 1)^2(\nu - p - 3)}.$$
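These mean and variance formulas can be verified against simulation (an illustrative sketch, not from the article):

```python
# Monte Carlo sanity check of the mean and element-wise variance formulas above.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(3)
p, nu = 3, 10
Psi = np.array([[2.0, 0.5, 0.2], [0.5, 1.5, 0.3], [0.2, 0.3, 1.0]])

X = invwishart(df=nu, scale=Psi).rvs(size=100000, random_state=rng)

mean_theory = Psi / (nu - p - 1)
i, j = 0, 1
var_theory = ((nu - p + 1) * Psi[i, j]**2 + (nu - p - 1) * Psi[i, i] * Psi[j, j]) \
             / ((nu - p) * (nu - p - 1)**2 * (nu - p - 3))

print(np.allclose(X.mean(axis=0), mean_theory, rtol=0.05))  # empirical vs. theoretical mean
print(X[:, i, j].var(), var_theory)                         # empirical vs. theoretical variance
```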

The same results are expressed in Kronecker product form by von Rosen, [6] who gives closed-form expressions for $E\left(W^{-1} \otimes W^{-1}\right)$ and the corresponding covariance in terms of $\mathbf{\Psi} \otimes \mathbf{\Psi}$, $\operatorname{vec}(\mathbf{\Psi})\operatorname{vec}(\mathbf{\Psi})^T$ and the commutation matrix $K_{pp}$, with scalar coefficients depending only on $\nu$ and $p$.

There appears to be a typo in the paper whereby the coefficient of one of these terms is misstated, and the expression for the mean square inverse Wishart in Corollary 3.1 should be corrected accordingly.

To show how the interacting terms become sparse when the covariance is diagonal, let $\mathbf{\Psi} = \mathbf{I}_{3 \times 3}$ and introduce arbitrary parameters $u, v, w$ in place of the exact coefficients:

$$E\left(W^{-1} \otimes W^{-1}\right) = u\,(\mathbf{\Psi} \otimes \mathbf{\Psi}) + v\,\operatorname{vec}(\mathbf{\Psi})\operatorname{vec}(\mathbf{\Psi})^T + w\,K_{pp}(\mathbf{\Psi} \otimes \mathbf{\Psi}),$$

where $\operatorname{vec}$ denotes the matrix vectorization operator. The resulting second moment matrix is then non-zero only for entries involving the correlations of diagonal elements of $W^{-1}$; all other elements are mutually uncorrelated, though not necessarily statistically independent. The variances of the Wishart product are also obtained by Cook et al. [7] in the singular case and, by extension, to the full rank case.
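A quick simulation (an illustrative sketch, not from the article) makes the sparsity pattern of the second moment matrix visible for $\mathbf{\Psi} = \mathbf{I}$:

```python
# Sketch: with Psi = I, the estimated second moment E[vec(X) vec(X)^T] is non-zero
# only on its diagonal, for pairs of diagonal elements of X, and for transpose pairs
# (which coincide by symmetry of X); all other entries vanish.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(5)
p, nu = 3, 10
X = invwishart(df=nu, scale=np.eye(p)).rvs(size=100000, random_state=rng)

V = X.reshape(-1, p * p)              # vec of each draw (X is symmetric, so ordering is immaterial)
second_moment = V.T @ V / V.shape[0]  # Monte Carlo estimate of E[vec(X) vec(X)^T]
print(np.round(second_moment, 3))     # observe the sparse block pattern
```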

Muirhead [8] shows in Theorem 3.2.8 that if $\mathbf{A}$ ($p \times p$) is distributed as $\mathcal{W}_p(\nu, \mathbf{\Sigma})$ and $\mathbf{V}$ is an arbitrary nonzero $p \times 1$ vector, independent of $\mathbf{A}$, then $\frac{\mathbf{V}^T\mathbf{A}\mathbf{V}}{\mathbf{V}^T\mathbf{\Sigma}\mathbf{V}} \sim \chi^2_{\nu}$ and $\frac{\mathbf{V}^T\mathbf{\Sigma}^{-1}\mathbf{V}}{\mathbf{V}^T\mathbf{A}^{-1}\mathbf{V}} \sim \chi^2_{\nu - p + 1}$, one degree of freedom being relinquished by estimation of the sample mean in the latter. Similarly, Bodnar et al. further find that $\frac{\mathbf{V}^T\mathbf{A}^{-1}\mathbf{V}}{\mathbf{V}^T\mathbf{\Sigma}^{-1}\mathbf{V}} \sim \text{Inv-}\chi^2_{\nu - p + 1}$, and setting $\mathbf{V} = (1, 0, \ldots, 0)^T$ the marginal distribution of the leading diagonal element is thus

$$\frac{[\mathbf{A}^{-1}]_{1,1}}{[\mathbf{\Sigma}^{-1}]_{1,1}} \sim \text{Inv-}\chi^2_{\nu - p + 1},$$

and, by rotating $\mathbf{V}$ end-around, a similar result applies to all diagonal elements $[\mathbf{A}^{-1}]_{i,i}$.

A corresponding result in the complex Wishart case was shown by Brennan and Reed, [9] and the uncorrelated inverse complex Wishart was shown by Shaman [10] to have a diagonal statistical structure in which the leading diagonal elements are correlated, while all other elements are uncorrelated.

Specifically, the marginal density of the scaled leading diagonal element $[\mathbf{A}^{-1}]_{1,1}/[\mathbf{\Sigma}^{-1}]_{1,1}$ given above is

$$f_{x_{11}}(x) = \frac{2^{-(\nu - p + 1)/2}}{\Gamma\!\left(\frac{\nu - p + 1}{2}\right)}\, x^{-(\nu - p + 1)/2 - 1}\, e^{-1/(2x)}, \qquad x > 0,$$

i.e., the inverse-gamma distribution $\text{Inv-Gamma}\!\left(\tfrac{\nu - p + 1}{2}, \tfrac{1}{2}\right)$, where $\Gamma(\cdot)$ is the ordinary Gamma function.
Thus, an arbitrary p-vector $\mathbf{V}$ with unit length $\mathbf{V}^T\mathbf{V} = 1$ can be rotated into the vector $\mathbf{V} = (1, 0, \ldots, 0)^T$ without changing the pdf of $\mathbf{V}^T\mathbf{A}^{-1}\mathbf{V}$; moreover, the rotation can be a permutation matrix which exchanges diagonal elements. It follows that the diagonal elements of $\mathbf{A}^{-1}$ are identically inverse chi squared distributed, with pdf $f_{x_{11}}$ given above, though they are not mutually independent. The result is known in optimal portfolio statistics, as in Theorem 2 Corollary 1 of Bodnar et al., [12] where it is expressed in the reciprocal form $\frac{\mathbf{V}^T\mathbf{\Sigma}^{-1}\mathbf{V}}{\mathbf{V}^T\mathbf{A}^{-1}\mathbf{V}} \sim \chi^2_{\nu - p + 1}$.
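The inverse chi-squared (equivalently inverse-gamma) marginal of a scaled diagonal element can be checked with a Kolmogorov–Smirnov test (an illustrative sketch, not from the article):

```python
# Sketch: for A ~ W_p(nu, Sigma), the scaled leading diagonal element of A^{-1}
# should be inverse-chi-squared with nu - p + 1 dof, i.e. Inv-Gamma(k/2, scale=1/2).
import numpy as np
from scipy.stats import wishart, invgamma, kstest

rng = np.random.default_rng(4)
p, nu = 4, 12
Sigma = np.eye(p) + 0.3 * np.ones((p, p))

A = wishart(df=nu, scale=Sigma).rvs(size=20000, random_state=rng)
A_inv = np.linalg.inv(A)
stat = A_inv[:, 0, 0] / np.linalg.inv(Sigma)[0, 0]      # scaled leading diagonal element

k = nu - p + 1
print(kstest(stat, invgamma(a=k / 2, scale=0.5).cdf))   # large p-value supports the claim
```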

See also

  Wishart distribution
  Complex Wishart distribution
  Complex inverse Wishart distribution
  Matrix normal distribution
  Matrix t-distribution
  Multivariate t-distribution
  Multivariate normal distribution
  Normal-Wishart distribution
  Normal-inverse-Wishart distribution
  Scaled inverse chi-squared distribution
  Matrix F distribution
  Bayesian multivariate linear regression

References

  1. A. O'Hagan, and J. J. Forster (2004). Kendall's Advanced Theory of Statistics: Bayesian Inference. Vol. 2B (2 ed.). Arnold. ISBN   978-0-340-80752-1.
  2. Haff, LR (1979). "An identity for the Wishart distribution with applications". Journal of Multivariate Analysis. 9 (4): 531–544. doi:10.1016/0047-259x(79)90056-3.
  3. Gelman, Andrew; Carlin, John B.; Stern, Hal S.; Dunson, David B.; Vehtari, Aki; Rubin, Donald B. (2013-11-01). Bayesian Data Analysis, Third Edition (3rd ed.). Boca Raton: Chapman and Hall/CRC. ISBN   9781439840955.
  4. Kanti V. Mardia, J. T. Kent and J. M. Bibby (1979). Multivariate Analysis. Academic Press. ISBN 978-0-12-471250-8.
  5. Shahrokh Esfahani, Mohammad; Dougherty, Edward (2014). "Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification". IEEE Transactions on Bioinformatics and Computational Biology. 11 (1): 202–218. doi:10.1109/tcbb.2013.143. PMID   26355519. S2CID   10096507.
  6. Rosen, Dietrich von (1988). "Moments for the Inverted Wishart Distribution". Scand. J. Stat. 15: 97–109 via JSTOR.
  7. Cook, R D; Forzani, Liliana (August 2019). Cook, Brian (ed.). "On the mean and variance of the generalized inverse of a singular Wishart matrix". Electronic Journal of Statistics. 5. doi:10.4324/9780429344633. ISBN   9780429344633. S2CID   146200569.
  8. Muirhead, Robb (1982). Aspects of Multivariate Statistical Theory. USA: Wiley. p. 93. ISBN   0-471-76985-1.
  9. Brennan, L E; Reed, I S (January 1982). "An Adaptive Array Signal Processing Algorithm for Communications". IEEE Transactions on Aerospace and Electronic Systems. 18 (1): 120–130. Bibcode:1982ITAES..18..124B. doi:10.1109/TAES.1982.309212. S2CID   45721922.
  10. Shaman, Paul (1980). "The Inverted Complex Wishart Distribution and Its Application to Spectral Estimation" (PDF). Journal of Multivariate Analysis. 10: 51–59. doi:10.1016/0047-259X(80)90081-0.
  11. Triantafyllopoulos, K. (2011). "Real-time covariance estimation for the local level model". Journal of Time Series Analysis. 32 (2): 93–107. arXiv: 1311.0634 . doi:10.1111/j.1467-9892.2010.00686.x. S2CID   88512953.
  12. Bodnar, T.; Mazur, S.; Podgórski, K. (January 2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Department of Statistics, Lund University. (Working Papers in Statistics, Nr. 2): 1–17.
  13. Bodnar, T.; Mazur, S.; Podgorski, K. (2015). "Singular Inverse Wishart Distribution with Application to Portfolio Theory". Journal of Multivariate Analysis. 143: 314–326. doi:10.1016/j.jmva.2015.09.021.