Plots of the probability density function and cumulative distribution function (no closed forms exist; they are computed numerically, see below).

| Notation | $\tilde{\chi}(\boldsymbol{w}, \boldsymbol{k}, \boldsymbol{\lambda}, s, m)$ |
|---|---|
| Parameters | $\boldsymbol{w}$, vector of weights of noncentral chi-square components; $\boldsymbol{k}$, vector of degrees of freedom of noncentral chi-square components; $\boldsymbol{\lambda}$, vector of non-centrality parameters of chi-square components; $s$, scale of normal term; $m$, offset |
| Support | $x \in \mathbb{R}$ |
| PDF | no closed-form expression |
| CDF | no closed-form expression |
| Mean | $m + \sum_j w_j (k_j + \lambda_j)$ |
| Variance | $2 \sum_j w_j^2 (k_j + 2\lambda_j) + s^2$ |
| MGF | $\dfrac{\exp\left(t m + \dfrac{s^2 t^2}{2} + \sum_j \dfrac{w_j \lambda_j t}{1 - 2 w_j t}\right)}{\prod_j \left(1 - 2 w_j t\right)^{k_j/2}}$ |
| CF | $\dfrac{\exp\left(i t m - \dfrac{s^2 t^2}{2} + \sum_j \dfrac{i w_j \lambda_j t}{1 - 2 i w_j t}\right)}{\prod_j \left(1 - 2 i w_j t\right)^{k_j/2}}$ |
In probability theory and statistics, the generalized chi-squared distribution (or generalized chi-square distribution) is the distribution of a quadratic form of a multinormal variable (normal vector), or equivalently, of a linear combination of different normal variables and squares of normal variables. It is also the distribution of a linear combination of independent noncentral chi-square variables and a normal variable. There are several other such generalizations for which the same term is sometimes used; some of them are special cases of the family discussed here, for example the gamma distribution.
The generalized chi-squared variable may be described in multiple ways. One is to write it as a weighted sum of independent noncentral chi-square variables $\chi'^2_{k_i}(\lambda_i)$ and a standard normal variable $z$: [1] [2]

$$\tilde{\chi}(\boldsymbol{w}, \boldsymbol{k}, \boldsymbol{\lambda}, s, m) = \sum_i w_i \, \chi'^2_{k_i}(\lambda_i) + s z + m.$$
Here the parameters are the weights $w_i$, the degrees of freedom $k_i$ and non-centralities $\lambda_i$ of the constituent non-central chi-squares, and the coefficients $s$ and $m$ of the normal term. Some important special cases have all weights $w_i$ of the same sign, have only central ($\lambda_i = 0$) chi-squared components, or omit the normal term ($s = 0$).
Since a non-central chi-squared variable is a sum of squares of normal variables with different means, the generalized chi-square variable may also be defined as a weighted sum of squares of independent normal variables, plus an independent normal variable and a constant: that is, a quadratic in normal variables. A sampling sketch based on this definition follows.
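As an illustration, the following sketch (with a hypothetical helper name `gx2_sample`, not taken from the cited code; parameter names follow the text above) draws samples directly from the weighted-sum definition and checks the sample mean against the formula $m + \sum_j w_j (k_j + \lambda_j)$:

```python
import numpy as np
from scipy import stats

def gx2_sample(w, k, lam, s, m, size, rng=None):
    """Sample a generalized chi-squared variable from its definition:
    a weighted sum of independent noncentral chi-squares, plus a
    scaled standard normal and an offset. (Illustrative sketch.)"""
    rng = np.random.default_rng(rng)
    x = np.full(size, m, dtype=float)
    for wi, ki, li in zip(w, k, lam):
        x += wi * stats.ncx2.rvs(df=ki, nc=li, size=size, random_state=rng)
    x += s * rng.standard_normal(size)
    return x

# Hypothetical example parameters: two noncentral chi-square components.
w, k, lam, s, m = [1.0, -2.0], [2, 3], [1.0, 0.5], 1.5, 0.5
x = gx2_sample(w, k, lam, s, m, size=200_000, rng=0)
mean_theory = m + sum(wi * (ki + li) for wi, ki, li in zip(w, k, lam))
print(x.mean(), mean_theory)  # the two values should agree closely
```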
Another equivalent way is to formulate it as a quadratic form of a normal vector $\boldsymbol{x}$: [3] [4]

$$\tilde{\chi} = q(\boldsymbol{x}) = \boldsymbol{x}' \mathbf{Q}_2 \boldsymbol{x} + \boldsymbol{q}_1' \boldsymbol{x} + q_0.$$

Here $\mathbf{Q}_2$ is a matrix, $\boldsymbol{q}_1$ is a vector, and $q_0$ is a scalar. These, together with the mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$ of the normal vector $\boldsymbol{x}$, parameterize the distribution.
For the most general case, a reduction towards a common standard form can be made by using a representation of the following form: [5]

$$X = (\boldsymbol{x} + \boldsymbol{a})^\mathsf{T} D (\boldsymbol{x} + \boldsymbol{a}) + \boldsymbol{c}^\mathsf{T} \boldsymbol{x},$$

where $D$ is a diagonal matrix and $\boldsymbol{x}$ represents a vector of uncorrelated standard normal random variables.
A generalized chi-square variable or distribution can be parameterized in two ways. The first is in terms of the weights $w_i$, the degrees of freedom $k_i$ and non-centralities $\lambda_i$ of the constituent non-central chi-squares, and the coefficients $s$ and $m$ of the added normal term. The second parameterization uses the quadratic form of a normal vector, where the parameters are the matrix $\mathbf{Q}_2$, the vector $\boldsymbol{q}_1$, and the scalar $q_0$, together with the mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$ of the normal vector.
The parameters of the first expression (in terms of non-central chi-squares, a normal and a constant) can be calculated in terms of the parameters of the second expression (quadratic form of a normal vector). [4]
The parameters of the second expression (quadratic form of a normal vector) can also be calculated in terms of the parameters of the first expression (in terms of non-central chi-squares, a normal and a constant). [6]
Matlab code exists to convert from one set of parameters to the other; a sketch of how such a conversion works is given below.
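The following sketch (an assumed helper name, not the published Matlab code) illustrates one way to carry out the second-to-first conversion: whiten the normal vector, diagonalize the quadratic part, complete the square along each eigen-direction, and collect the leftover linear part into the normal term.

```python
import numpy as np

def gx2_params_from_quad(Q2, q1, q0, mu, Sigma, tol=1e-12):
    """Convert quadratic-form parameters of q(x) = x' Q2 x + q1' x + q0,
    with x ~ N(mu, Sigma) (Sigma positive definite), into weights w,
    degrees of freedom k, non-centralities lam, normal scale s, and
    offset m. (Illustrative sketch.)"""
    # Whiten: x = mu + S z with S S' = Sigma and z standard normal.
    S = np.linalg.cholesky(Sigma)
    Q2s = S.T @ Q2 @ S                      # quadratic part in z
    b = S.T @ (2 * Q2 @ mu + q1)            # linear part in z
    c0 = mu @ Q2 @ mu + q1 @ mu + q0        # constant part
    # Diagonalize the symmetrized quadratic part: Q2s = R diag(d) R'.
    d, R = np.linalg.eigh((Q2s + Q2s.T) / 2)
    b = R.T @ b                             # linear part along eigen-axes
    nz = np.abs(d) > tol                    # directions with curvature
    # Complete the square: d_i (y_i + b_i/(2 d_i))^2 - b_i^2/(4 d_i).
    lam_each = (b[nz] / (2 * d[nz])) ** 2   # per-direction non-centrality
    m = c0 - np.sum(b[nz] ** 2 / (4 * d[nz]))
    s = np.sqrt(np.sum(b[~nz] ** 2))        # leftover linear part -> normal
    # Merge directions with equal weights into single chi-square terms.
    w, k, lam = [], [], []
    for wi in np.unique(np.round(d[nz], 10)):
        grp = np.isclose(d[nz], wi)
        w.append(wi); k.append(int(grp.sum())); lam.append(lam_each[grp].sum())
    return np.array(w), np.array(k), np.array(lam), s, m
```

Directions with nonzero eigenvalue contribute weighted noncentral chi-square terms (equal eigenvalues merge into one term with summed degrees of freedom and non-centrality); the linear part along zero-eigenvalue directions becomes the normal term, and the remaining constants become the offset. Comparing a histogram of simulated $q(\boldsymbol{x})$ values against samples drawn with the converted parameters is a quick way to validate such a conversion.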
The probability density, cumulative distribution, and inverse cumulative distribution functions of a generalized chi-squared variable do not have simple closed-form expressions, but several methods exist to compute them numerically: Ruben's method, [7] Imhof's method, [8] the IFFT method, [6] the ray method, [6] and the ellipse approximation. [6]
Numerical algorithms [5] [2] [8] [4] and computer code (Fortran and C, Matlab, R, Python, Julia) have been published that implement some of these methods to compute the PDF, CDF, and inverse CDF, and to generate random numbers.
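As a concrete sketch of one inversion-based approach (a minimal Gil-Pelaez-type numerical integration of the characteristic function given above, with hypothetical helper names; it is not one of the published implementations, which handle the oscillatory integrand far more carefully):

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def gx2_cf(t, w, k, lam, s, m):
    """Characteristic function of the generalized chi-squared distribution."""
    t = np.asarray(t, dtype=complex)
    phi = np.exp(1j * t * m - s**2 * t**2 / 2)
    for wi, ki, li in zip(w, k, lam):
        phi *= np.exp(1j * li * wi * t / (1 - 2j * wi * t))
        phi /= (1 - 2j * wi * t) ** (ki / 2)
    return phi

def gx2_cdf(x, w, k, lam, s, m):
    """CDF via Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * integral of
    Im[exp(-itx) phi(t)] / t over t in (0, inf). (Illustrative sketch.)"""
    integrand = lambda t: np.imag(np.exp(-1j * t * x)
                                  * gx2_cf(t, w, k, lam, s, m)) / t
    integral, _ = quad(integrand, 0, np.inf, limit=500)
    return 0.5 - integral / np.pi

# Sanity check against a plain chi-squared: w=[1], k=[3], lam=[0], s=0, m=0.
print(gx2_cdf(2.0, [1.0], [3], [0.0], 0.0, 0.0), stats.chi2.cdf(2.0, 3))
```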
The following table shows the best methods to use to compute the CDF and PDF for the different parts of the generalized chi-square distribution in different cases: [6]
| type | part | best cdf/pdf method(s) |
|---|---|---|
| ellipse: $w_i$ all the same sign, $s = 0$ | body | Ruben, Imhof, IFFT, ray |
| | finite tail | Ruben, ray (if $\boldsymbol{\lambda} = \boldsymbol{0}$), ellipse |
| | infinite tail | Ruben, ray |
| not ellipse: $w_i$ of mixed signs, and/or $s \neq 0$ | body | Imhof, IFFT, ray |
| | infinite tails | ray |
The generalized chi-squared is the distribution of statistical estimates in cases where the usual statistical theory does not hold, as in the examples below.
If a predictive model is fitted by least squares, but the residuals have either autocorrelation or heteroscedasticity, then alternative models can be compared (in model selection) by relating changes in the sum of squares to an asymptotically valid generalized chi-squared distribution. [3]
If $\boldsymbol{x}$ is a normal vector, its log likelihood is a quadratic form of $\boldsymbol{x}$, and is hence distributed as a generalized chi-squared. The log likelihood ratio that arises from comparing one normal distribution against another is also a quadratic form, and so is likewise distributed as a generalized chi-squared. [4]
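To make the first claim concrete: if $\boldsymbol{x} \sim N(\boldsymbol{\mu}, \boldsymbol{\Sigma})$, expanding the log density shows that it is exactly a quadratic form in $\boldsymbol{x}$,

$$\ln f(\boldsymbol{x}) = -\tfrac{1}{2}\,\boldsymbol{x}'\boldsymbol{\Sigma}^{-1}\boldsymbol{x} + \boldsymbol{\mu}'\boldsymbol{\Sigma}^{-1}\boldsymbol{x} - \tfrac{1}{2}\left(\boldsymbol{\mu}'\boldsymbol{\Sigma}^{-1}\boldsymbol{\mu} + \ln\det(2\pi\boldsymbol{\Sigma})\right),$$

that is, a quadratic form with $\mathbf{Q}_2 = -\tfrac{1}{2}\boldsymbol{\Sigma}^{-1}$, $\boldsymbol{q}_1 = \boldsymbol{\Sigma}^{-1}\boldsymbol{\mu}$, and $q_0$ equal to the constant term.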
In Gaussian discriminant analysis, samples from multinormal distributions are optimally separated by using a quadratic classifier, a boundary that is a quadratic function (e.g. the curve defined by setting the likelihood ratio between two Gaussians to 1). The classification error rates of different types (false positives and false negatives) are integrals of the normal distributions within the quadratic regions defined by this classifier. Since this is mathematically equivalent to integrating a quadratic form of a normal vector, the result is an integral of a generalized-chi-squared variable. [4]
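A minimal Monte Carlo sketch of this setup (with hypothetical example parameters; the cited work instead evaluates such error rates exactly via the generalized chi-squared CDF):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical Gaussian classes in 2D.
mu_a, Sigma_a = np.array([0.0, 0.0]), np.array([[1.0, 0.3], [0.3, 1.0]])
mu_b, Sigma_b = np.array([1.5, 1.0]), np.array([[2.0, 0.0], [0.0, 0.5]])

def log_lik(x, mu, Sigma):
    """Multivariate normal log density (a quadratic form of each row of x)."""
    d = x - mu
    Si = np.linalg.inv(Sigma)
    return (-0.5 * np.einsum('ij,jk,ik->i', d, Si, d)
            - 0.5 * np.log(np.linalg.det(2 * np.pi * Sigma)))

# For samples from class a, the log likelihood ratio q(x) is a quadratic
# form of a normal vector, so the error rate P(q(x) < 0) is a tail
# probability of a generalized chi-squared variable.
x = rng.multivariate_normal(mu_a, Sigma_a, size=200_000)
q = log_lik(x, mu_a, Sigma_a) - log_lik(x, mu_b, Sigma_b)
print("false-negative rate ~", np.mean(q < 0))
```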
The following application arises in the context of Fourier analysis in signal processing, renewal theory in probability theory, and multi-antenna systems in wireless communication. The common feature of these areas is the importance of sums of exponentially distributed variables (or equivalently, sums of squared magnitudes of circularly-symmetric centered complex Gaussian variables).
If $z_1, \ldots, z_k$ are $k$ independent, circularly-symmetric centered complex Gaussian random variables with mean 0 and variances $\sigma_i^2$, then the random variable

$$\tilde{\chi} = \sum_{i=1}^k |z_i|^2$$

has a generalized chi-squared distribution of a particular form. The difference from the standard chi-squared distribution is that the $z_i$ are complex and can have different variances, and the difference from the more general generalized chi-squared distribution is that the relevant scaling matrix $A$ is diagonal. If $\sigma_i^2 = \sigma^2$ for all $i$, then $\tilde{\chi}$, scaled down by $\sigma^2/2$ (i.e. multiplied by $2/\sigma^2$), has a chi-squared distribution $\chi^2(2k)$, also known as an Erlang distribution. If the $\sigma_i^2$ have distinct values for all $i$, then $\tilde{\chi}$ has the pdf [9]

$$f(x) = \sum_{i=1}^k \frac{e^{-\frac{x}{\sigma_i^2}}}{\sigma_i^2 \prod_{\substack{j=1 \\ j \neq i}}^k \left(1 - \frac{\sigma_j^2}{\sigma_i^2}\right)}, \qquad x \geq 0.$$
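A quick numerical check of this density (an illustrative sketch, using the fact that each $|z_i|^2$ is exponentially distributed with mean $\sigma_i^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
sig2 = np.array([0.5, 1.0, 2.0])   # distinct variances sigma_i^2

# |z_i|^2 for a circularly-symmetric complex Gaussian with variance
# sigma_i^2 is exponential with mean sigma_i^2; sum to sample chi-tilde.
n = 500_000
samples = sum(rng.exponential(s, size=n) for s in sig2)

def pdf(x, sig2):
    """Closed-form pdf for distinct variances (formula above)."""
    total = np.zeros_like(x)
    for i, si in enumerate(sig2):
        denom = si * np.prod([1 - sj / si for j, sj in enumerate(sig2) if j != i])
        total += np.exp(-x / si) / denom
    return total

# Compare the formula against a histogram of the samples.
hist, edges = np.histogram(samples, bins=50, range=(0, 15), density=True)
mid = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - pdf(mid, sig2))))  # should be close to zero
```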
Now suppose there are repeated values among the variances $\sigma_1^2, \ldots, \sigma_k^2$: assume they are divided into $M$ sets, each corresponding to a distinct variance value. Denote $\boldsymbol{r} = (r_1, r_2, \ldots, r_M)$ to be the number of repetitions in each group, so that the $m$th set contains $r_m$ variables with variance $\sigma_m^2$. Then $\tilde{\chi}$ is a linear combination of independent $\chi^2$-distributed random variables with different degrees of freedom:

$$\tilde{\chi} = \sum_{m=1}^M \frac{\sigma_m^2}{2}\, Q_m, \qquad Q_m \sim \chi^2(2 r_m) \text{ independently}.$$
The pdf of $\tilde{\chi}$ is [10]

$$f(x; \boldsymbol{r}, \boldsymbol{\sigma}) = \prod_{m=1}^M \frac{1}{\sigma_m^{2 r_m}} \sum_{k=1}^M \sum_{l=1}^{r_k} \frac{\Psi_{k,l,\boldsymbol{r}}}{(r_k - l)!} (-x)^{r_k - l} e^{-\frac{x}{\sigma_k^2}}, \qquad x \geq 0,$$

where

$$\Psi_{k,l,\boldsymbol{r}} = (-1)^{r_k - 1} \sum_{\boldsymbol{i} \in \Omega_{k,l}} \prod_{j \neq k} \binom{i_j + r_j - 1}{i_j} \left(\frac{1}{\sigma_j^2} - \frac{1}{\sigma_k^2}\right)^{-(r_j + i_j)},$$

with $\boldsymbol{i} = [i_1, \ldots, i_M]^\mathsf{T}$ from the set $\Omega_{k,l}$ of all partitions of $l - 1$ (with $i_k = 0$) defined as

$$\Omega_{k,l} = \left\{ [i_1, \ldots, i_M] \in \mathbb{Z}^M : i_k = 0,\ i_j \geq 0 \ \forall j,\ \sum_{j=1}^M i_j = l - 1 \right\}.$$