In the mathematical theory of probability, multivariate Laplace distributions are extensions of the Laplace distribution and the asymmetric Laplace distribution to multiple variables. The marginal distributions of symmetric multivariate Laplace distribution variables are Laplace distributions. The marginal distributions of asymmetric multivariate Laplace distribution variables are asymmetric Laplace distributions. [1]
Parameters | μ ∈ Rk — location; Σ ∈ Rk×k — covariance (positive-definite matrix)
---|---
Support | x ∈ μ + span(Σ) ⊆ Rk
Mean | μ
Mode | μ
Variance | Σ
Skewness | 0
CF | exp(iμ′t) / (1 + (1/2) t′Σt)
A typical characterization of the symmetric multivariate Laplace distribution has the characteristic function:

φ(t; μ, Σ) = exp(iμ′t) / (1 + (1/2) t′Σt),

where μ is the vector of means for each variable and Σ is the covariance matrix. [2]
Unlike the multivariate normal distribution, the components of a multivariate Laplace random vector are not independent even when their covariance (and hence correlation) is zero. [1] The symmetric multivariate Laplace distribution is elliptical. [1]
If μ = 0, the probability density function (pdf) for a k-dimensional multivariate Laplace distribution becomes:

f(x1, …, xk) = [2 / ((2π)^(k/2) |Σ|^(1/2))] (x′Σ⁻¹x / 2)^(v/2) K_v(√(2 x′Σ⁻¹x)),

where:

v = (2 − k)/2

and K_v is the modified Bessel function of the second kind. [1]
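As a numerical sanity check of this formula (a sketch assuming NumPy and SciPy, whose `scipy.special.kv` implements K_v; the function names are illustrative), for k = 1 the density should reduce to a univariate Laplace density with variance σ², i.e. scale b = σ/√2:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind K_v

def sym_mlaplace_pdf(x, Sigma):
    """k-dimensional symmetric multivariate Laplace pdf with mu = 0."""
    k = x.size
    v = (2 - k) / 2
    q = float(x @ np.linalg.solve(Sigma, x))  # x' Sigma^{-1} x
    norm = 2 / ((2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(Sigma)))
    return norm * (q / 2) ** (v / 2) * kv(v, np.sqrt(2 * q))

# k = 1 case versus the closed-form univariate Laplace density
sigma, x = 1.3, 0.8
b = sigma / np.sqrt(2)                        # Laplace scale with variance sigma^2
univariate = np.exp(-abs(x) / b) / (2 * b)
general = sym_mlaplace_pdf(np.array([x]), np.array([[sigma**2]]))
# general and univariate agree
```

The agreement follows because K_{1/2}(z) = √(π/(2z)) e^{−z}, which collapses the general formula to the familiar double-exponential density.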
In the correlated bivariate case, i.e., k = 2, with μ1 = μ2 = 0, the pdf reduces to:

f(x1, x2) = [1 / (π σ1 σ2 √(1 − ρ²))] K_0( √( (2/(1 − ρ²)) (x1²/σ1² − 2ρ x1 x2/(σ1 σ2) + x2²/σ2²) ) ),

where:

σ1 and σ2 are the standard deviations of x1 and x2, respectively, and ρ is the correlation coefficient of x1 and x2. [1]
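The bivariate closed form can be checked against the general k-dimensional formula, since k = 2 gives v = 0 and the (x′Σ⁻¹x / 2)^(v/2) factor drops out. A sketch assuming NumPy and SciPy (`scipy.special.kv` is K_v; function names are illustrative):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind K_v

def general_pdf(x, Sigma):
    """k-dimensional symmetric multivariate Laplace pdf with mu = 0."""
    k = x.size
    v = (2 - k) / 2
    q = float(x @ np.linalg.solve(Sigma, x))  # x' Sigma^{-1} x
    norm = 2 / ((2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(Sigma)))
    return norm * (q / 2) ** (v / 2) * kv(v, np.sqrt(2 * q))

def bivariate_pdf(x1, x2, s1, s2, rho):
    """Closed-form bivariate case (k = 2, v = 0)."""
    q = (x1**2 / s1**2 - 2 * rho * x1 * x2 / (s1 * s2) + x2**2 / s2**2) / (1 - rho**2)
    return kv(0, np.sqrt(2 * q)) / (np.pi * s1 * s2 * np.sqrt(1 - rho**2))

s1, s2, rho = 1.5, 0.8, 0.3
Sigma = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
a = general_pdf(np.array([0.7, -0.4]), Sigma)
b = bivariate_pdf(0.7, -0.4, s1, s2, rho)
# a and b agree
```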
For the uncorrelated bivariate Laplace case, that is k = 2, μ1 = μ2 = ρ = 0 and σ1 = σ2 = 1, the pdf becomes:

f(x1, x2) = (1/π) K_0(√(2(x1² + x2²))).
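This uncorrelated case also makes the earlier non-independence claim concrete: the joint density (1/π) K_0(√(2(x1² + x2²))) is not the product of its two Laplace marginals (each with variance 1, hence scale 1/√2). A quick numeric check, assuming NumPy and SciPy (`scipy.special.kv` is K_v):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind K_v

def joint_pdf(x1, x2):
    """Uncorrelated symmetric bivariate Laplace pdf (Sigma = I)."""
    return kv(0, np.sqrt(2 * (x1**2 + x2**2))) / np.pi

def laplace_pdf(x, b=1 / np.sqrt(2)):
    """Univariate Laplace marginal: mean 0, variance 2*b**2 = 1."""
    return np.exp(-abs(x) / b) / (2 * b)

joint = joint_pdf(1.0, 1.0)                    # ~0.0363
product = laplace_pdf(1.0) * laplace_pdf(1.0)  # ~0.0296
# joint != product: the components are uncorrelated but not independent
```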
Parameters | μ ∈ Rk — location; Σ ∈ Rk×k — covariance (positive-definite matrix)
---|---
Support | x ∈ μ + span(Σ) ⊆ Rk
PDF | [2 exp(x′Σ⁻¹μ) / ((2π)^(k/2) |Σ|^(1/2))] (x′Σ⁻¹x / (2 + μ′Σ⁻¹μ))^(v/2) K_v(√((2 + μ′Σ⁻¹μ)(x′Σ⁻¹x))), where v = (2 − k)/2 and K_v is the modified Bessel function of the second kind
Mean | μ
Variance | Σ + μμ′
Skewness | non-zero unless μ = 0
CF | 1 / (1 + (1/2) t′Σt − iμ′t)
A typical characterization of the asymmetric multivariate Laplace distribution has the characteristic function:

φ(t; μ, Σ) = 1 / (1 + (1/2) t′Σt − iμ′t).
As with the symmetric multivariate Laplace distribution, the asymmetric multivariate Laplace distribution has mean μ, but the covariance becomes Σ + μμ′. [3] The asymmetric multivariate Laplace distribution is not elliptical unless μ = 0, in which case the distribution reduces to the symmetric multivariate Laplace distribution with μ = 0. [1]
The probability density function (pdf) for a k-dimensional asymmetric multivariate Laplace distribution is:

f(x) = [2 exp(x′Σ⁻¹μ) / ((2π)^(k/2) |Σ|^(1/2))] ( x′Σ⁻¹x / (2 + μ′Σ⁻¹μ) )^(v/2) K_v( √( (2 + μ′Σ⁻¹μ)(x′Σ⁻¹x) ) ),

where:

v = (2 − k)/2

and K_v is the modified Bessel function of the second kind. [1]
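Setting μ = 0 in this pdf recovers the symmetric density, since exp(x′Σ⁻¹μ) = 1 and 2 + μ′Σ⁻¹μ = 2. A sketch of that reduction, assuming NumPy and SciPy (`scipy.special.kv` is K_v; function names are illustrative):

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind K_v

def asym_pdf(x, mu, Sigma):
    """k-dimensional asymmetric multivariate Laplace pdf."""
    k = x.size
    v = (2 - k) / 2
    Sinv = np.linalg.inv(Sigma)
    q = float(x @ Sinv @ x)    # x' Sigma^{-1} x
    m = float(mu @ Sinv @ mu)  # mu' Sigma^{-1} mu
    norm = (2 * np.exp(x @ Sinv @ mu)
            / ((2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(Sigma))))
    return norm * (q / (2 + m)) ** (v / 2) * kv(v, np.sqrt((2 + m) * q))

def sym_pdf(x, Sigma):
    """Symmetric pdf (mu = 0) from the earlier section."""
    k = x.size
    v = (2 - k) / 2
    q = float(x @ np.linalg.solve(Sigma, x))
    norm = 2 / ((2 * np.pi) ** (k / 2) * np.sqrt(np.linalg.det(Sigma)))
    return norm * (q / 2) ** (v / 2) * kv(v, np.sqrt(2 * q))

Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])
x = np.array([0.7, -0.4])
a = asym_pdf(x, np.zeros(2), Sigma)
b = sym_pdf(x, Sigma)
# a == b when mu = 0
```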
The asymmetric Laplace distribution, including the special case of μ = 0, is an example of a geometric stable distribution. [3] It represents the limiting distribution for a sum of independent, identically distributed random variables with finite variance and covariance where the number of elements to be summed is itself an independent random variable distributed according to a geometric distribution. [1] Such geometric sums can arise in practical applications within biology, economics and insurance. [1] The distribution may also be applicable in broader situations to model multivariate data with heavier tails than a normal distribution but finite moments. [1]
The relationship between the exponential distribution and the Laplace distribution allows for a simple method for simulating bivariate asymmetric Laplace variables (including for the case of μ = 0). Simulate a bivariate normal random vector Y from a distribution with mean vector 0 and covariance matrix Σ. Independently simulate an exponential random variable W from an Exp(1) distribution. Then X = μW + √W Y will be distributed as an (asymmetric) bivariate Laplace variable with parameters μ and Σ. [1]
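The recipe above can be sketched in Python (assuming NumPy): draw W and Y independently, form X = μW + √W Y, and check that the empirical mean approaches μ while the empirical covariance approaches Σ + μμ′, as stated earlier for the asymmetric case.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.4], [0.4, 2.0]])

n = 200_000
W = rng.exponential(1.0, size=n)                          # W ~ Exp(1)
Y = rng.multivariate_normal(np.zeros(2), Sigma, size=n)   # Y ~ N(0, Sigma)
X = mu * W[:, None] + np.sqrt(W)[:, None] * Y             # X ~ AL(mu, Sigma)

emp_mean = X.mean(axis=0)                 # should approach mu
emp_cov = np.cov(X, rowvar=False)         # should approach Sigma + mu mu'
target_cov = Sigma + np.outer(mu, mu)
```

Note that the mixing variable W simultaneously drives the skewness (through μW) and the heavy tails (through √W scaling of the Gaussian component).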